Article

The Optimal Stopping Problem under a Random Horizon

1 Mathematical and Statistical Sciences Department, University of Alberta, Edmonton, AB T6G 2R3, Canada
2 Department of Mathematics and Statistics, Jordan University of Science and Technology, Irbid 22110, Jordan
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(9), 1273; https://doi.org/10.3390/math12091273
Submission received: 4 March 2024 / Revised: 12 April 2024 / Accepted: 19 April 2024 / Published: 23 April 2024
(This article belongs to the Section Financial Mathematics)

Abstract:
This paper considers a pair ( F , τ ) , where F is a filtration representing the “public” flow of information that is available to all agents over time, and τ is a random time that might not be an F -stopping time. This setting covers the case of a credit risk framework, where τ models the default time of a firm or client, and the setting of life insurance, where τ is the death time of an agent. It is clear that random times cannot be observed before their occurrence. Thus, the larger filtration, G , which incorporates F and makes τ observable, results from the progressive enlargement of F with τ . For this informational setting, governed by G , we analyze the optimal stopping problem in three main directions. The first direction consists of characterizing the existence of the solution to this problem in terms of F -observable processes. The second direction lies in deriving the mathematical structures of the value process of this control problem, while the third direction singles out the associated optimal stopping problem under F . These three aspects allow us to deeply quantify how τ impacts the optimal stopping problem and are also vital for studying reflected backward stochastic differential equations that arise naturally from pricing and hedging of vulnerable claims.
MSC:
60G44; 60G48; 60G40; 91G40; 91B05; 91B70

1. Introduction

In this paper, we consider a complete probability space (Ω, F, P), on which we consider a complete and right-continuous filtration F := (F_t)_{t≥0}. Besides this initial system (Ω, F, F, P), we consider an arbitrary random time τ (i.e., an F-measurable random variable with values in [0, +∞]) that might not be an F-stopping time. As in most applications, such as life insurance and credit risk, where τ models the death time and the default time, respectively, τ is observable only when it occurs and cannot be seen beforehand. Thus, the flow of information that incorporates both τ and F, which will be denoted throughout this paper by G = (G_t)_{t≥0}, makes τ a stopping time and is known in the literature as the progressive enlargement of F with τ. For this new system (Ω, F, G, P), our objective consists of analyzing the following problem:
$$S^G_\sigma := \operatorname*{ess\,sup}_{\theta \in J_\sigma} E\left[ X^G_\theta \,\middle|\, \mathcal{G}_\sigma \right], \tag{1}$$
where σ is a G -stopping time and J σ is the set of G -stopping times that are finite and greater than or equal to σ . X G is a G -optional process representing the reward satisfying “some integrability condition” and is stopped at τ (i.e., it does not vary after τ ). The essential supremum of an arbitrary family of random variables is the smallest random variable that is an upper bound for each element of this family almost surely (see [1] for more details and related properties).
This problem is known as the optimal stopping problem, and it is an example of a stochastic control problem. For more details about its origin, its applications, and its evolution, we refer the reader to [2,3,4,5,6], and the references therein, to cite a few. Herein, we address this optimal stopping problem, and we aim to measure the impact of τ on this problem in many aspects. In particular, for this problem, we answer the following questions:
  • Can we associate with (1) an optimal stopping problem under F with reward X ˜ F and value process S F ?
  • How are the two pairs, ( X G , S G ) and ( X ˜ F , S F ) , connected to each other?
  • What are the structures in S G induced by τ ?
  • How are the maximal (minimal) optimal times of (1) and their F -optimal stopping problem counterparts related to each other?
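To make the reduction behind these questions concrete, here is a toy discrete-time sketch (ours, not the paper's): the reward is a payoff on a binomial tree, τ is assumed independent of the tree and geometric with one-step survival probability `q`, and a constant `recovery` stands in for the payment collected if τ arrives before stopping. The function name and all parameters are illustrative. Backward induction then computes the value of stopping the τ-killed reward, mirroring how the G-problem collapses to an F-problem with a modified reward.

```python
def snell_random_horizon(payoff, p_up, u, d, q, recovery, T=5, s0=100.0):
    """Backward induction for sup over stopping times of the expected reward
    stopped at an independent geometric random horizon tau.

    payoff   : payoff function of the underlying price
    p_up     : up-move probability on the binomial tree
    u, d     : up/down multipliers
    q        : one-step survival probability P(tau > t+1 | tau > t)
    recovery : constant payment received if tau arrives before stopping
    """
    # terminal layer: immediate payoff at time T (node j has j up-moves)
    values = [payoff(s0 * u**j * d**(T - j)) for j in range(T + 1)]
    for t in range(T - 1, -1, -1):
        new_values = []
        for j in range(t + 1):
            s = s0 * u**j * d**(t - j)
            # continuation: survive one step (prob. q) and keep going, or the
            # horizon arrives first (prob. 1 - q) and pays `recovery`
            cont = q * (p_up * values[j + 1] + (1.0 - p_up) * values[j]) \
                 + (1.0 - q) * recovery
            new_values.append(max(payoff(s), cont))  # stop vs. continue
        values = new_values
    return values[0]
```

With `recovery = 0`, shrinking `q` only lowers the value, since the horizon kills a nonnegative reward earlier; this is the discrete shadow of how τ dampens the Snell envelope.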
One of the direct applications of our optimal stopping problem under a random horizon, τ, lies in studying linear RBSDEs of the form
$$dY_t = -f(t)\,d(t\wedge\tau) - d(K_t + M_t) + Z_t\,dW_{t\wedge\tau},\quad Y_\tau = \xi,\quad Y \geq S \ \text{on}\ [\![0,\tau[\![,\quad \text{and}\quad E\left[\int_0^\tau \left(Y_{t-} - S_{t-}\right) dK_t\right] = 0, \tag{2}$$
where F is assumed to be generated by a Brownian motion W, f is an F-progressively measurable process (the driver rate), ξ is a random variable, and S is a right-continuous-with-left-limits (RCLL for short hereafter) F-adapted process with values in [−∞, +∞). The two processes Y_− and S_− are the left limits of Y and S, respectively, which are defined in Section 2.1 for the sake of a smooth presentation. For this direct application, we refer the reader to our earlier version of the complete work, which can be found in [7]. The relationship between the optimal stopping problem and RBSDEs is well understood nowadays, and we refer the reader to [8,9,10,11,12,13,14,15] and the references therein, to cite a few.
This paper has three sections, including the current one. The second section defines the general notations, the mathematical setting of the random horizon τ , and its preliminaries. The third section addresses the optimal stopping problem under stopping with τ in various aspects. This paper also has an appendix, where we prove our technical lemmas.

2. Notations and the Random Horizon Setting

This section has two subsections. The first subsection defines the general notations used throughout this paper, while the second subsection presents the progressive enlargement setting associated with τ and its preliminaries.

2.1. General Notations

By H, we denote an arbitrary filtration that satisfies the usual conditions of completeness and right continuity. For any process, X, the H-optional projection and the H-predictable projection, when they exist, are denoted by X^{o,H} and X^{p,H}, respectively. The set M(H, Q) (respectively, M^p(H, Q) for p ∈ (1, +∞)) denotes the set of all H-martingales (respectively, p-integrable H-martingales) under Q, while A(H, Q) denotes the set of all H-optional processes that are RCLL with integrable variation under Q. When Q = P, we simply omit the probability for the sake of simple notation. For a d-dimensional H-semimartingale X, by L(X, H), we denote the set of H-predictable processes (either d-dimensional or one-dimensional) that are X-integrable in the semimartingale sense. For φ ∈ L(X, H), the resulting integral of φ with respect to X is denoted by φ·X. If φ is d-dimensional (respectively, one-dimensional), then φ·X is a one-dimensional process (respectively, a d-dimensional process, such as I_{]]0,σ]]}·X = X^σ − X_0 for any stopping time σ). For more details about the stochastic integral and its intrinsic calculus and notation, we refer the reader to [16,17,18]. For an H-local martingale, M, we denote by L^1_loc(M, H) the set of H-predictable processes, φ, that are M-integrable and for which the resulting integral φ·M is an H-local martingale. If C(H) is a set of processes adapted to H, then C_loc(H) is the set of processes, X, for which there exists a sequence of H-stopping times, (T_n)_{n≥1}, increasing to infinity, such that X^{T_n} belongs to C(H) for each n ≥ 1. The H-dual optional projection and the H-dual predictable projection of a process V with finite variation, when they exist, are denoted by V^{o,H} and V^{p,H}, respectively. For any real-valued H-semimartingale L, we denote by E(L) the Doléans–Dade (stochastic) exponential. It is the unique solution to dX = X_− dL, X_0 = 1, given by
$$\mathcal{E}_t(L) = \exp\left(L_t - L_0 - \frac{1}{2}\langle L^c\rangle_t\right) \prod_{0<s\leq t} \left(1 + \Delta L_s\right) e^{-\Delta L_s}. \tag{3}$$
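For instance (a standard special case, not specific to this paper), if L = σW for a Brownian motion W and a constant σ, then L is continuous, ΔL ≡ 0, and ⟨L^c⟩_t = σ²t, so the formula reduces to

```latex
\mathcal{E}_t(\sigma W) = \exp\left(\sigma W_t - \tfrac{1}{2}\sigma^2 t\right),
```

which is the geometric Brownian motion solving dX = σX dW, X_0 = 1.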
Throughout this paper, we consider the following notations. For any random time, σ, and any process, X, we denote by X^σ the stopped process given by X^σ_t := X_{σ∧t}, t ≥ 0. For any RCLL process, X, we denote by X_− the left-limits process of X, which is defined as X_− = (X_{t−})_{t≥0}, where X_{0−} = X_0 and X_{t−} = lim_{s↑t} X_s for t > 0. A process, X, is called a BMO F-martingale if X ∈ M(F) (hence, X_t = E[X_∞ | F_t]), and there exists a constant C > 0 such that E[|X_∞ − X_σ| | F_σ] ≤ C for any F-stopping time, σ. For more details about BMO martingales and their properties, we refer the reader to [16] or its English version [19].

2.2. The Random Horizon and the Progressive Enlargement of F

In addition to the initial model Ω , F , F , P , we consider an arbitrary random time, τ , that might not be an F -stopping time. This random time is parametrized through F by the pair ( G , G ˜ ) , called the survival probabilities or Azéma supermartingales, which are given by
$$G_t := \left(I_{[\![0,\tau[\![}\right)^{o,F}_t = P(\tau > t \,|\, \mathcal{F}_t) \quad \text{and} \quad \tilde G_t := \left(I_{[\![0,\tau]\!]}\right)^{o,F}_t = P(\tau \geq t \,|\, \mathcal{F}_t). \tag{4}$$
Furthermore, the following process, m, is given by
$$m := G + D^{o,F}, \quad \text{where} \quad D := I_{[\![\tau,+\infty[\![}, \tag{5}$$
is a BMO F -martingale that plays an important role in our analysis. The flow of information, G , which incorporates both F and τ , is defined as follows:
$$\mathbb{G} := (\mathcal{G}_t)_{t\geq 0}, \quad \mathcal{G}_t := \mathcal{G}^0_{t+} := \bigcap_{s>t} \mathcal{G}^0_s, \quad \text{where} \quad \mathcal{G}^0_t := \mathcal{F}_t \vee \sigma\left(D_s,\ s \leq t\right). \tag{6}$$
Throughout this paper, on Ω × [0, +∞), we consider the F-optional σ-field, denoted by O(F), and the F-progressive σ-field, denoted by Prog(F). Thanks to [20] (Theorem 2.3) and [21] (Theorem 2.3 and Theorem 2.11), we recall the following theorem.
Theorem 1. 
The following assertions hold.
(a)
For any M M l o c ( F ) , the process
$$T(M) := M^\tau - \tilde G^{-1} I_{]\!]0,\tau]\!]} \cdot [M, m] + I_{]\!]0,\tau]\!]} \cdot \left(\Delta M\, I_{\{\tilde G = 0 < G_-\}}\right)^{p,F} \tag{7}$$
is a G-local martingale (recall that G_{t−} coincides with the F-predictable projection of G̃ at t, i.e., with E[G̃_t | F_{t−}] for t > 0).
(b)
The process
$$N^G := D - \tilde G^{-1} I_{]\!]0,\tau]\!]} \cdot D^{o,F} \tag{8}$$
is a G -martingale with integrable variation. Moreover, H · N G is a G -local martingale with locally integrable variation for any H belonging to I l o c o ( N G , G ) , given by
$$\mathcal{I}^o_{loc}(N^G, \mathbb{G}) := \left\{ K\ \mathcal{O}(F)\text{-measurable} \,:\, |K|\, G\, \tilde G^{-1} I_{\{\tilde G > 0\}} \cdot D \in \mathcal{A}_{loc}(\mathbb{G}) \right\}. \tag{9}$$
For any q [ 1 , + ) and a σ -algebra H on Ω × [ 0 , + ) , we define
$$L^q\left(\mathcal{H}, P \otimes dD\right) := \left\{ X\ \mathcal{H}\text{-measurable} \,:\, E\left[|X_\tau|^q\, I_{\{\tau<+\infty\}}\right] < +\infty \right\}. \tag{10}$$
Throughout this paper, we make the following assumption:
$$G > 0 \ \ (\text{i.e., } G \text{ is a positive process}) \quad \text{and} \quad \tau < +\infty \quad P\text{-a.s.} \tag{11}$$
Under the positivity of G, this process can be decomposed multiplicatively into two processes, which play central roles in this paper, as outlined below.
Lemma 1. 
If G > 0, then G̃ > 0, G_− > 0, and G = G_0 E(G_−^{-1}·m) Ẽ, where
$$\tilde{\mathcal{E}} := \mathcal{E}\left(-\tilde G^{-1} \cdot D^{o,F}\right). \tag{12}$$
For more details about this lemma and the related results, we refer the reader to [20] (Lemma 2.4). Below, we recall [20] (Proposition 4.3), which plays an important role throughout this paper.
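As an elementary sanity check (our example, not taken from [20]), suppose τ is independent of F with P(τ > t) = e^{−λt}; then, with the conventions m := G + D^{o,F} and Ẽ := E(−G̃^{-1}·D^{o,F}) recalled above,

```latex
G_t = \tilde G_t = e^{-\lambda t},\qquad D^{o,F}_t = 1 - e^{-\lambda t},\qquad
m_t = G_t + D^{o,F}_t \equiv 1,\\[4pt]
\tilde{\mathcal{E}}_t = \exp\!\left(-\int_0^t \frac{\lambda e^{-\lambda s}}{e^{-\lambda s}}\,ds\right) = e^{-\lambda t},
\qquad
G_0\,\mathcal{E}\!\left(G_-^{-1}\cdot m\right)_t \tilde{\mathcal{E}}_t = 1\cdot 1\cdot e^{-\lambda t} = G_t.
```

In this degenerate case m is constant, so E(G_−^{-1}·m) ≡ 1 and the multiplicative decomposition of Lemma 1 is carried entirely by Ẽ; likewise Z̃ ≡ 1 in Proposition 1 below, so the measure Q̃_T coincides with P.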
Proposition 1. 
Assume that G > 0 , and consider the process
$$\tilde Z := 1\big/\mathcal{E}\left(G_-^{-1} \cdot m\right). \tag{13}$$
Then, the following assertions hold:
(a)
Z ˜ τ is a G -martingale, and for any T ( 0 , + ) , Q ˜ T is given by
$$\frac{d\tilde Q_T}{dP} := \tilde Z_{T\wedge\tau}. \tag{14}$$
It is a well-defined probability measure on G τ T .
(b)
For any M ∈ M_loc(F), we have M^{T∧τ} ∈ M_loc(G, Q̃_T). In particular, when F is generated by a Brownian motion W, the stopped process W^{T∧τ} is a Brownian motion for (Q̃_T, G) for any T ∈ (0, +∞).
Remark 1.  (1) Under the condition G > 0, we obtain T(M) := M^τ − G̃^{-1} I_{]]0,τ]]}·[M, m] for any M ∈ M_loc(F). This is due to the fact that when G > 0, we obtain G̃ > 0 thanks to Lemma 1. Thus, the process I_{{G̃ = 0 < G_−}} is null, and as a consequence, I_{]]0,τ]]}·(ΔM I_{{G̃ = 0 < G_−}})^{p,F} = 0.
(2)
In general, the G-martingale Z̃^τ might not be uniformly integrable, and hence, in general, Q̃_T might not be well defined for T = +∞. For these facts, we refer the reader to [20] (Proposition 4.3), where the conditions for Z̃^τ being uniformly integrable are fully singled out when G > 0.

3. The Optimal Stopping Problem under a Random Horizon

Throughout the rest of this paper, J_{σ_1}^{σ_2}(H) denotes the set of all H-stopping times with values in [[σ_1, σ_2]] for any two H-stopping times σ_1 and σ_2 such that σ_1 ≤ σ_2 P-a.s. This section has three subsections. The first subsection connects the G-reward process to an F-reward process in a unique manner and investigates how their integrability is transmitted back and forth. The second subsection elaborates on the results of the mathematical structures induced by τ. The third subsection connects the minimal and maximal G-optimal stopping times to the corresponding F-optimal stopping times.

3.1. Parametrization of G -Reward Using F -Processes

This subsection establishes an exact relationship between the G -reward and F -reward processes and shows how some features travel back and forth between them. To this end, we recall the notion of class-D processes.
Definition 1. 
Let (X, H) be a pair of a process, X, and a filtration, H. Then, X is said to be of class-(H, D) if {X_σ : σ is a finite H-stopping time} is a uniformly integrable family of random variables.
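A classical illustration of the gap this definition captures (a textbook example, not taken from this paper): for a three-dimensional Brownian motion B and a starting point x ≠ 0, the inverse Bessel process

```latex
M_t := \frac{1}{\lVert x + B_t \rVert}, \qquad x \in \mathbb{R}^3 \setminus \{0\},
```

is a nonnegative local martingale with E[M_t] → 0 as t → ∞; it is therefore a strict local martingale and fails to be of class-(F^B, D), since a local martingale of class D is a uniformly integrable martingale. By contrast, any uniformly bounded process is trivially of class-(H, D).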
Below, we state the main results of this subsection. To this end, throughout the rest of this paper, we consider the following notation.
$$\sigma_1 \wedge \sigma_2 := \min(\sigma_1, \sigma_2), \quad \text{for any two real-valued random variables } \sigma_1 \text{ and } \sigma_2. \tag{15}$$
Theorem 2. 
Assume that (11) holds, and let X G be a G -optional process such that ( X G ) τ = X G . Then, there exists a pair ( X F , k ( p r ) ) of processes such that X F is F -optional, k ( p r ) is F -progressive,
$$X^G = k^{(pr)}_0 I_{\{\tau=0\}} + X^F I_{[\![0,\tau[\![} + k^{(pr)} \cdot D \quad \text{and} \quad X^F = \left(X^G I_{[\![0,\tau[\![}\right)^{o,F} \big/ G. \tag{16}$$
The pair (X^F, k^{(pr)}) is unique in the sense that if there exists another pair (X̄^F, k̄^{(pr)}) satisfying (16), then X^F and X̄^F are modifications of each other, and k̄^{(pr)} = k^{(pr)} P⊗dD-a.e.
Furthermore, the following assertions hold:
(a)
X G is locally bounded if and only if X F and k ( p r ) are locally bounded.
(b)
X G is RCLL if and only if X F is RCLL.
(c)
X G is an RCLL G -semimartingale if and only if X F is an RCLL F -semimartingale. Furthermore,
$$X^G = (X^F)^\tau + \left(k^{(pr)} - X^F\right) \cdot D + \left(k^{(pr)}_0 - X^F_0\right) I_{\{\tau=0\}}. \tag{17}$$
(d)
For any q ∈ [1, ∞), E[sup_{t≥0} |X^G_t|^q] < +∞ if and only if
$$k^{(pr)} \in L^q\left(\tilde\Omega, \mathrm{Prog}(F), P \otimes dD\right) \quad \text{and} \quad \left(\sup_{0\leq s<\cdot} |X^F_s|^q\right) \cdot D^{o,F} \in \mathcal{A}^+(F). \tag{18}$$
(e)
If X^G is of class-(G, D), then k^{(pr)} ∈ L^1(Ω̃, Prog(F), P⊗dD), and X^F G is of class-(F, D).
The local boundedness of the process k^{(pr)}, which is defined up to a P⊗dD-evanescent set, is understood in the sense that there exists a sequence of F-stopping times (T_n)_{n≥1} that increases to infinity almost surely, and |k^{(pr)}| I_{[[0,T_n]]} ≤ C_n, P⊗dD-a.e. for all n ≥ 1 for some C_n ∈ (0, ∞), or equivalently, |k^{(pr)}_τ| I_{{τ≤T_n}} ≤ C_n, P-a.s.
Proof of Theorem 2. 
Consider a G-optional process, X^G. Then, thanks to [22] (Lemma B.1) (see also [23] (Lemma 4.4)), there exists a pair (X^F, k^{(pr)}) such that X^F is F-optional, k^{(pr)} is Prog(F)-measurable,
$$X^G I_{[\![0,\tau[\![} = X^F I_{[\![0,\tau[\![}, \quad \text{and} \quad X^G_\tau = k^{(pr)}_\tau, \quad P\text{-a.s.} \tag{19}$$
Furthermore, on the one hand, the uniqueness of this pair follows from the assumption that G > 0 and the second equality in (19). On the other hand, X G = ( X G ) τ yields
$$X^G = X^G I_{[\![0,\tau[\![} + X^G_\tau I_{[\![\tau,+\infty[\![} = X^F I_{[\![0,\tau[\![} + k^{(pr)} \cdot D + k^{(pr)}_0 I_{\{\tau=0\}},$$
and the equality (16) is proved.
(a)
By virtue of (19), it is clear that the local boundedness of the pair (X^F, k^{(pr)}) implies the local boundedness of X^G. To prove the reverse, we assume that X^G is locally bounded and that (T^G_n)_n is the localizing sequence of stopping times. Hence, there exists a sequence of positive constants (C_n)_n such that
$$\operatorname*{ess\,sup}_{t\geq 0}\, \big|X^G_{t\wedge T^G_n}\big| \leq C_n, \quad P\text{-a.s.} \tag{20}$$
Then, there exists a sequence of F-stopping times (T_n)_n that increases to infinity almost surely such that min(τ, T_n) = min(T^G_n, τ) P-a.s. By virtue of the assumption (X^G)^τ = X^G and (19), the inequality (20) is equivalent to
$$\max\left(\operatorname*{ess\,sup}_{0\leq t<\tau} \big|X^F_{t\wedge T_n}\big|,\ \big|X^G_{\tau\wedge T_n}\big|\right) = \max\left(\operatorname*{ess\,sup}_{0\leq t<\tau} \big|X^G_{t\wedge T^G_n}\big|,\ \big|X^G_{\tau\wedge T^G_n}\big|\right) \leq C_n.$$
Hence, this inequality implies |k^{(pr)}_τ| I_{{τ≤T_n}} ≤ |X^G_{τ∧T_n}| ≤ C_n and ess sup_{0≤t<τ} |X^F_{t∧T_n}| ≤ C_n P-a.s., or equivalently, k^{(pr)} I_{[[0,T_n]]} and (X^F)^{T_n} I_{[[0,τ[[} are bounded by C_n. Thanks to G > 0 and the fact that T_n is an F-stopping time that increases to infinity, these latter conditions are equivalent to saying that both k^{(pr)} and X^F are locally bounded. This proves assertion (a).
(b)
Thanks to (16) and the fact that k^{(pr)}·D is an RCLL process, we deduce that X^G is RCLL if and only if X^F I_{[[0,τ[[} is RCLL. Thus, we assume that X^G is an RCLL process, and we consider the sequence of G-stopping times (T^G_n), given by
$$T^G_n := \inf\left\{t \geq 0 \,:\, |X^G_t| > n\right\}, \quad \text{which satisfies} \quad |X^{G,n}| \leq n, \quad X^{G,n} := X^G I_{[\![0,T^G_n[\![}.$$
It is clear that T^G_n increases to infinity, and by virtue of [22] (Proposition B.2-(b)) and G > 0, there exists a sequence of F-stopping times (T_n)_n that increases to infinity and satisfies T^G_n ∧ τ = T_n ∧ τ. Furthermore, by applying (16) to each X^{G,n}, on the one hand, we obtain
$$X^{G,n} I_{[\![0,\tau[\![} = X^F I_{[\![0,\tau[\![}\, I_{[\![0,T_n[\![}.$$
On the other hand, as T_n increases to infinity, it is clear that X^F is RCLL if and only if X^F I_{[[0,T_n[[} = (X^{G,n} I_{[[0,τ[[})^{o,F} G^{-1} is RCLL. The latter follows directly from combining the boundedness of X^{G,n}, [16] (Théorème 47, pp. 119), and the right continuity of G. This completes the proof of assertion (b).
(c)
It is clear that k^{(pr)}·D is an RCLL G-semimartingale, and hence X^G is an RCLL G-semimartingale if and only if X^F I_{[[0,τ[[} is an RCLL G-semimartingale. Thus, if X^F is an RCLL F-semimartingale, then X^G is an RCLL G-semimartingale. To prove the converse, we note that by stopping with T^G_n defined above and by using [16] (Théorème 26, Chapter VII, pp. 235), there is no loss of generality in assuming X^G is bounded, which leads to the boundedness of X^F (see [22] (Lemma B.1) or [23] (Lemma 4.4 (b), pp. 63)). Thus, thanks to [16] (Théorème 47, pp. 119, and Théorème 59, pp. 268), which implies that the optional projection of a bounded RCLL G-semimartingale is an RCLL F-semimartingale, we deduce that X^F G = (X^G I_{[[0,τ[[})^{o,F} is an RCLL F-semimartingale. A combination of this with the condition G > 0 and the fact that G is an RCLL F-semimartingale implies that X^F is an RCLL F-semimartingale. Furthermore, direct calculation yields
$$X^F I_{[\![0,\tau[\![} = (X^F)^\tau - X^F \cdot D - X^F_0 I_{\{\tau=0\}}, \quad \text{which is a } G\text{-semimartingale},$$
and (17) follows from this equality and (16).
(d)
Here, we prove assertion (d). To this end, we use (16) and derive
$$\frac{I}{2} \leq \sup_{t\geq 0} |X^G_t|^q = \max\left(\sup_{0\leq t<\tau} |X^F_t|^q,\ \big|k^{(pr)}_\tau\big|^q\right) \leq I,$$
where I := (sup_{0≤u<·} |X^F_u|^q + |k^{(pr)}|^q)·D_∞. Hence, E[sup_{t≥0} |X^G_t|^q] < +∞ if and only if both E[∫_0^∞ |k^{(pr)}_t|^q dD_t] and E[∫_0^∞ sup_{0≤u<t} |X^F_u|^q dD^{o,F}_t] are finite. This proves assertion (d).
(e)
Assume that X^G is of class-(G, D). On the one hand, we have E[|k^{(pr)}_τ|] = E[|X^G_τ|] < ∞, or equivalently, k^{(pr)} ∈ L^1(Ω̃, Prog(F), P⊗dD). On the other hand, due to G_σ ≤ 1, for any c > 0, we have
$$E\left[|X^F_\sigma|\, G_\sigma I_{\{|X^F_\sigma| G_\sigma > c\}}\right] \leq E\left[|X^F_\sigma|\, G_\sigma I_{\{|X^F_\sigma| > c\}}\right] = E\left[|X^F_\sigma|\, I_{\{\sigma<\tau\}} I_{\{|X^F_\sigma| > c\}}\right] = E\left[|X^G_\sigma|\, I_{\{\sigma<\tau\}} I_{\{|X^G_\sigma| > c\}}\right].$$
This proves that X F G is of class- ( F , D ) , and the proposition is proved. □

3.2. The Mathematical Structures of the Value Process (Snell Envelope)

The main result of this subsection characterizes, in different ways, the Snell envelope of a process under G in terms of the F -Snell envelope and some G -local martingales. To this end, we start with the following two lemmas.
Lemma 2. 
For any nonnegative or integrable process, X, we always have
$$E[X_t \,|\, \mathcal{G}_t]\, I_{\{t<\tau\}} = E[X_t I_{\{t<\tau\}} \,|\, \mathcal{F}_t]\, G_t^{-1} I_{\{t<\tau\}}.$$
This follows from [24] (XX.75-(c)/(d)), while the next lemma appears to be new.
Lemma 3. 
Let σ 1 and σ 2 be two F -stopping times such that σ 1 σ 2 P-a.s. Then, for any G - stopping time, σ G , satisfying
$$\sigma_1 \wedge \tau \leq \sigma^G \leq \sigma_2 \wedge \tau \quad P\text{-a.s.},$$
there exists an F -stopping time σ F such that
$$\sigma_1 \leq \sigma^F \leq \sigma_2 \quad \text{and} \quad \sigma^F \wedge \tau = \sigma^G \quad P\text{-a.s.}$$
The proof of this lemma is relegated to Appendix A. Throughout this paper, for any F⊗B(R_+)-measurable process, X, which is nonnegative or μ := P⊗dD-integrable, its F-optional projection with respect to μ, denoted by M^P_μ(X | O(F)), is the unique F-optional process, Y, such that for any bounded and F-optional H,
$$M^P_\mu[X H] := E\left[\int_0^\infty X_s H_s\, dD_s\right] = E\left[\int_0^\infty Y_s H_s\, dD_s\right]. \tag{23}$$
Theorem 3. 
Assume G > 0, and let X^G be an RCLL and G-adapted process such that (X^G)^τ = X^G and sup_{0≤s≤·} |X^G_s| ∈ A^+_loc(G). Then, consider the unique pair (X^F, k^{(pr)}) associated with X^G via Theorem 2. Denote
$$k^{(op)} := M^P_\mu\left(k^{(pr)} \,\middle|\, \mathcal{O}(F)\right) \quad \text{and} \quad k^{(F)} := k^{(pr)} - k^{(op)}, \tag{24}$$
where μ := P⊗dD (see (23)). Then, the following assertions hold:
(a)
If either X^G is nonnegative or E[sup_{t≥0} (X^G_t)^+] < +∞, then the (G, P)-Snell envelope of X^G, denoted by S^G, is given by
$$S^G = \frac{S^F}{G}\, I_{[\![0,\tau[\![} + \frac{\left(k^{(op)}\cdot D^{o,F}\right)_-}{G_-^2} \cdot T(m) + k^{(F)} \cdot D + \left(k^{(op)} + \frac{\left(k^{(op)}\cdot D^{o,F}\right)_-}{G_-}\right) \cdot N^G, \tag{25}$$
where S F is the ( F , P ) -Snell envelope of the reward X ˜ F : = X F G + k ( o p ) · D o , F .
(b)
Let T ∈ (0, +∞), and let Q̃ be given by (14). If either E^{Q̃}[sup_{0≤t≤T} (X^G_t)^+] < +∞ or X^G ≥ 0, then the (G, Q̃)-Snell envelope of (X^G)^T, denoted by S^{G,Q̃}, satisfies
$$S^{G,\tilde Q} = \frac{\tilde S^F}{\tilde{\mathcal{E}}^T} \left(I_{[\![0,\tau[\![}\right)^T + k^{(F)} \cdot D^T + \left(k^{(op)} - \frac{\left(k^{(op)}\cdot \tilde{\mathcal{E}}\right)_-}{\tilde{\mathcal{E}}_-}\right) \cdot (N^G)^T. \tag{26}$$
Here, S̃^F is the (F, P)-Snell envelope of the F-reward (X^F Ẽ − k^{(op)}·Ẽ)^T.
Remark 2.  (a) The condition sup_{0≤s≤·} |X^G_s| ∈ A^+_loc(G) implies that the last two terms on the right-hand side of (25) and (26) are well-defined G-local martingales. In fact, by virtue of Theorem 2(c), sup_{0≤s≤·} |X^G_s| ∈ A^+_loc(G) yields k^{(pr)} ∈ L^1_loc(Ω̃, Prog(F), P⊗dD), or equivalently, the pair (k^{(F)}, k^{(op)}) belongs to L^1_loc(Ω̃, Prog(F), P⊗dD) × L^1_loc(Ω̃, O(F), P⊗dD). On the one hand, this condition obviously implies that k^{(F)}·D ∈ M_loc(G) and k^{(op)} belongs to I^o_loc(N^G, G). On the other hand, by considering
$$\sigma_n := \inf\left\{t \geq 0 \,:\, \tilde{\mathcal{E}}_t < n^{-1} \ \text{or} \ \big|(k^{(op)}\cdot D^{o,F})_t\big| > n\right\} \quad \text{and} \quad T_n := n \wedge \sigma_n,$$
which both increase to infinity, and using
$$G_-^{-1} = G_0^{-1}\, \mathcal{E}_-\!\left(G_-^{-1}\cdot m\right)^{-1} \tilde{\mathcal{E}}_-^{-1} = G_0^{-1}\, \mathcal{E}\!\left(G_-^{-1}\cdot m\right)^{-1} \tilde{\mathcal{E}}_-^{-1}\, \tilde G\big/G_-,$$
we obtain
$$E\left[\int_0^{T_n} \frac{G_{t-}}{\tilde G_t}\, \frac{\big|(k^{(op)}\cdot D^{o,F})_{t-}\big|}{G_{t-}}\, dD_t\right] \leq G_0^{-1} n^2 + E\left[\int_0^{T_n} \frac{G_{t-}}{\tilde G_t}\, \frac{|k^{(op)}_t|\,\Delta D^{o,F}_t}{G_{t-}}\, dD_t\right] \leq G_0^{-1} n^2 + E\left[\int_0^{T_n} |k^{(op)}_t|\, dD_t\right] < +\infty.$$
This proves that G_−^{-1}(k^{(op)}·D^{o,F})_− ∈ I^o_loc(N^G, G), and similar reasoning proves that the process Ẽ_−^{-1}(k^{(op)}·Ẽ)_− belongs to I^o_loc(N^G, G) as well. Hence, the claim is proved.
(b) 
For θ ∈ J_{t∧τ}^{τ}(G) and σ ∈ J_t^{∞}(F) with θ = σ ∧ τ (see Lemma 3), we have
$$X^G_\theta = X^G_{\sigma\wedge\tau}\, I_{\{\sigma<\tau\}} + k^{(pr)}_\tau I_{\{\sigma\geq\tau\}} = X^F_\sigma I_{\{\sigma<\tau\}} + \int_0^\sigma k^{(pr)}_s\, dD_s + k^{(pr)}_0 I_{\{\tau=0\}} = X^F_\sigma I_{\{\sigma<\tau\}} + \left(\frac{k^{(op)}}{\tilde G}\cdot D^{o,F}\right)_{\sigma\wedge\tau} + \left(k^{(op)}\cdot N^G\right)_\theta + \left(k^{(F)}\cdot D\right)_\theta + k^{(pr)}_0 I_{\{\tau=0\}}. \tag{27}$$
The latter remark plays an important role in proving Theorem 3. The rest of this subsection is devoted to this proof. Hence, we start with the next lemma, which is useful here and in the rest of the paper as well.
Lemma 4. 
Assume G > 0 , and let E ˜ be defined in (12). Then, the following assertions hold:
(a)
For any RCLL F -semimartingale L, it holds that
$$L\, \tilde{\mathcal{E}}^{-1} I_{[\![0,\tau[\![} + \left(L_-\, \tilde{\mathcal{E}}^{-1}\right) \cdot N^G = L_0 I_{\{\tau>0\}} + \tilde{\mathcal{E}}_-^{-1} \cdot L^\tau, \tag{28}$$
and
$$\frac{L}{G}\, I_{[\![0,\tau[\![} = \frac{L_0}{G_0}\, I_{\{\tau>0\}} - \frac{L_-}{G_-^2}\, I_{]\!]0,\tau]\!]} \cdot T(m) + \frac{1}{G_-}\, I_{]\!]0,\tau]\!]} \cdot T(L) - \frac{L_-}{G_-}\, I_{]\!]0,\tau]\!]} \cdot N^G. \tag{29}$$
(b) For any F-optional process k such that V := k·D^{o,F} ∈ A_loc(F), we have
$$\frac{V^\tau}{G^\tau} = -\frac{V_-}{G_-^2} \cdot \underbrace{\left(m^\tau - \tilde G^{-1}\cdot [m,m]^\tau\right)}_{=\,T(m)} + \frac{k + V_-\, G_-^{-1}}{\tilde G}\, I_{]\!]0,\tau]\!]} \cdot D^{o,F}. \tag{30}$$
The proof is given in Appendix A, while below, we prove Theorem 3.
Proof of Theorem 3. 
This proof is divided into three parts. The first and second parts prove assertions (a) and (b), respectively, when X G is bounded, while the third part relaxes this condition and proves the theorem.
  • Part 1. In this part, we assume that X G is bounded and prove assertion (a). Hence, under this assumption, the associated processes X F , k ( p r ) , and k ( o p ) are also bounded. As a result, both k ( o p ) · N G and k ( F ) · D are uniformly integrable G -martingales. Thus, by defining
$$L^G := k^{(op)} \cdot N^G + k^{(F)} \cdot D, \tag{31}$$
combining the remarks above with Lemma 2, and taking conditional expectations with respect to G t on both sides of (27), we derive
$$\begin{aligned}
Y_t(\theta) &:= E\left[X^G_\theta \,\middle|\, \mathcal{G}_t\right] = E\left[X^F_\sigma I_{\{\sigma<\tau\}} + \int_0^{\sigma\wedge\tau} \frac{k^{(op)}_s}{\tilde G_s}\, dD^{o,F}_s \,\middle|\, \mathcal{G}_t\right] + L^G_t \\
&= E\left[X^F_\sigma I_{\{\sigma<\tau\}} + \int_{t\wedge\tau}^{\sigma\wedge\tau} \frac{k^{(op)}_s}{\tilde G_s}\, dD^{o,F}_s \,\middle|\, \mathcal{G}_t\right] + \left(\frac{k^{(op)}}{\tilde G}\cdot D^{o,F}\right)_{t\wedge\tau} + L^G_t \\
&= E\left[X^F_\sigma I_{\{\sigma<\tau\}} + \int_{t\wedge\tau}^{\sigma\wedge\tau} \frac{k^{(op)}_s}{\tilde G_s}\, dD^{o,F}_s \,\middle|\, \mathcal{F}_t\right] \frac{I_{\{\tau>t\}}}{G_t} + \left(\frac{k^{(op)}}{\tilde G}\cdot D^{o,F}\right)_{t\wedge\tau} + L^G_t \\
&= E\left[G_\sigma X^F_\sigma + \int_t^\sigma k^{(op)}_s\, dD^{o,F}_s \,\middle|\, \mathcal{F}_t\right] \frac{I_{\{\tau>t\}}}{G_t} + \left(\frac{k^{(op)}}{\tilde G}\cdot D^{o,F}\right)_{t\wedge\tau} + L^G_t \\
&=: \frac{X^F_t(\sigma) - (k^{(op)}\cdot D^{o,F})_t}{G_t}\, I_{\{t<\tau\}} + \left(\frac{k^{(op)}}{\tilde G}\cdot D^{o,F}\right)_{t\wedge\tau} + L^G_t.
\end{aligned}$$
Thus, by taking the essential supremum and using (31), we obtain
$$S^G = \frac{S^F - k^{(op)}\cdot D^{o,F}}{G}\, I_{[\![0,\tau[\![} + \frac{k^{(op)}}{\tilde G} \cdot \left(D^{o,F}\right)^\tau + k^{(op)} \cdot N^G + k^{(F)} \cdot D. \tag{32}$$
Therefore, by combining this with (30) (see Lemma 4(b)), and
$$X I_{[\![0,\tau[\![} = X^\tau - X \cdot D - X_0 I_{\{\tau=0\}}, \quad \text{for any } F\text{-semimartingale } X, \tag{33}$$
we immediately obtain (25), and part 1 is completed.
  • Part 2. Here, we assume that X^G is bounded, and we fix T ∈ (0, +∞) and prove assertion (b). Let θ ∈ J_{t∧τ}^{T∧τ}(G) and σ ∈ J_t^T(F) such that θ = σ ∧ τ. Then, similarly to Part 1, by taking Q̃-conditional expectations on both sides of (27) and using (31) and the fact that the two processes k^{(op)}·N^G and k^{(F)}·D remain uniformly integrable G-martingales under Q̃ (due to the boundedness of k^{(pr)} and k^{(F)}), we write
$$\begin{aligned}
\tilde Y_t(\theta) &:= E^{\tilde Q}\left[X^G_\theta \,\middle|\, \mathcal{G}_t\right] = E^{\tilde Q}\left[X^F_\sigma I_{\{\sigma<\tau\}} + \int_0^{\sigma\wedge\tau} \frac{k^{(op)}_s}{\tilde G_s}\, dD^{o,F}_s \,\middle|\, \mathcal{G}_t\right] + L^G_t \\
&= E\left[\frac{\tilde Z_\sigma}{\tilde Z_t}\, X^F_\sigma I_{\{\sigma<\tau\}} + \int_{t\wedge\tau}^{\sigma\wedge\tau} \frac{k^{(op)}_s \tilde Z_s}{\tilde G_s \tilde Z_t}\, dD^{o,F}_s \,\middle|\, \mathcal{G}_t\right] + \left(\frac{k^{(op)}}{\tilde G}\cdot D^{o,F}\right)_{t\wedge\tau} + L^G_t \\
&= E\left[\tilde Z_\sigma X^F_\sigma I_{\{\sigma<\tau\}} + \int_{t\wedge\tau}^{\sigma\wedge\tau} \frac{G_0\, k^{(op)}_s}{\tilde G_s}\, dV^F_s \,\middle|\, \mathcal{F}_t\right] \frac{I_{\{\tau>t\}}}{\tilde Z_t G_t} + \left(\frac{k^{(op)}}{\tilde G}\cdot D^{o,F}\right)_{t\wedge\tau} + L^G_t,
\end{aligned} \tag{34}$$
where V F is an RCLL and nondecreasing process given by
$$V^F := 1 - \tilde{\mathcal{E}}, \quad \text{or equivalently,} \quad dV^F = \frac{\tilde{\mathcal{E}}_-}{\tilde G}\, dD^{o,F} = \frac{G_-\, \tilde Z_-}{G_0\, \tilde G}\, dD^{o,F}, \quad \text{and} \quad V^F_0 = 0.$$
Thus, by further simplifying the right-hand side of (34), we obtain
$$\tilde Y_t(\theta) = E\left[\tilde{\mathcal{E}}_\sigma X^F_\sigma + \int_t^\sigma k^{(op)}_s\, dV^F_s \,\middle|\, \mathcal{F}_t\right] \frac{I_{\{\tau>t\}}}{\tilde{\mathcal{E}}_t} + \left(\frac{k^{(op)}}{\tilde G}\cdot D^{o,F}\right)_{t\wedge\tau} + L^G_t =: \frac{X^F_t(\sigma) - (k^{(op)}\cdot V^F)_t}{\tilde{\mathcal{E}}_t}\, I_{\{\tau>t\}} + \left(\frac{k^{(op)}}{\tilde G}\cdot D^{o,F}\right)_{t\wedge\tau} + L^G_t.$$
By taking the essential supremum over all θ ∈ J_{t∧τ}^{T∧τ}(G), we obtain
$$S^{G,\tilde Q} = \frac{\tilde S^F - k^{(op)}\cdot (V^F)^T}{\tilde{\mathcal{E}}^T} \left(I_{[\![0,\tau[\![}\right)^T + \left(\frac{k^{(op)}}{\tilde G}\cdot D^{o,F}\right)^{T\wedge\tau} + (L^G)^T, \tag{36}$$
where S̃^F is the Snell envelope under (F, P) for the reward (X^F Ẽ + k^{(op)}·V^F)^T. Thus, by combining the above equality with (28) (see Lemma 4(a)) and (31), we obtain (26). This ends the second part.
  • Part 3. Here, we prove the theorem without the boundedness assumption on X^G. To this end, by virtue of Parts 1 and 2, we note that the theorem follows immediately as soon as we prove that the equalities (32) and (36) hold. Thus, for n ≥ 0, we consider
$$X^{G,n} := X^G I_{\{|X^G|\leq n\}},$$
and its associated triplet ( X F , n , k ( p r , n ) , k ( o p , n ) ) is given by
$$X^{F,n} := X^F I_{\{|X^F|\leq n\}}, \quad k^{(pr,n)} := k^{(pr)} I_{\{|k^{(pr)}|\leq n\}}, \quad k^{(op,n)} := M^P_\mu\left(k^{(pr,n)} \,\middle|\, \mathcal{O}(F)\right).$$
Therefore, it is clear that X^{G,n} and its associated triplet (X^{F,n}, k^{(pr,n)}, k^{(op,n)}) are bounded, and thanks to Parts 1 and 2, we conclude that they fulfill (32) and (36). If X^G ≥ 0, then X^{G,n} is nonnegative and increases to X^G, and all components of (X^{F,n}, k^{(pr,n)}, k^{(op,n)}) are nonnegative and increase to the corresponding components of (X^F, k^{(pr)}, k^{(op)}), respectively. Thus, thanks to the monotone convergence theorem, it is clear that in this case, E[X^{G,n}_θ | G_t] (respectively, E^{Q̃}[X^{G,n}_θ | G_t]) increases to E[X^G_θ | G_t] (respectively, E^{Q̃}[X^G_θ | G_t]), and E[G_σ X^{F,n}_σ + ∫_t^σ k^{(op,n)}_s dD^{o,F}_s | F_t] (respectively, E[Ẽ_σ X^{F,n}_σ + ∫_t^σ k^{(op,n)}_s dV^F_s | F_t]) increases to E[G_σ X^F_σ + ∫_t^σ k^{(op)}_s dD^{o,F}_s | F_t] (respectively, E[Ẽ_σ X^F_σ + ∫_t^σ k^{(op)}_s dV^F_s | F_t]). This proves that (32) and (36) hold for the case when X^G ≥ 0, and the theorem is proved in this case. Now, assume that E[sup_{t≥0} (X^G_t)^+] < +∞. Then, Fatou's lemma yields
$$E\left[X^G_\theta \,\middle|\, \mathcal{G}_t\right] \geq \limsup_{n\to+\infty} E\left[X^{G,n}_\theta \,\middle|\, \mathcal{G}_t\right], \tag{37}$$
while a combination of the dominated convergence theorem with the inequality
$$E\left[X^{G,n}_\theta \,\middle|\, \mathcal{G}_t\right] \geq E\left[X^G_\theta \,\middle|\, \mathcal{G}_t\right] - E\left[\sup_{t\geq 0}(X^G_t)^+\, I_{\{\theta \geq T^G_n\}} \,\middle|\, \mathcal{G}_t\right]$$
implies that
$$\liminf_{n\to+\infty} E\left[X^{G,n}_\theta \,\middle|\, \mathcal{G}_t\right] \geq E\left[X^G_\theta \,\middle|\, \mathcal{G}_t\right].$$
Hence, a combination of the latter inequality with (37) proves that E[X^{G,n}_θ | G_t] converges to E[X^G_θ | G_t] almost surely. Similar arguments allow us to prove the almost sure convergence of E[G_σ X^{F,n}_σ + ∫_t^σ k^{(op,n)}_s dD^{o,F}_s | F_t] to E[G_σ X^F_σ + ∫_t^σ k^{(op)}_s dD^{o,F}_s | F_t]. This proves assertion (a). The proof of assertion (b), under the assumption E^{Q̃}[sup_{0≤t≤T} (X^G_t)^+] < +∞, exactly mimics the proof of assertion (a) and is omitted. This ends the proof of the theorem. □

3.3. G -Optimal Stopping Times Versus F -Optimal Stopping Times

In this subsection, we investigate how the solutions to the optimal stopping problems under G and F are related to each other in many aspects.
Theorem 4. 
Assume that G > 0, and let X^G be an RCLL G-optional process of class-(G, D) such that (X^G)^τ = X^G. Consider the unique pair (X^F, k^{(pr)}) associated with X^G via Theorem 2, and define X̃^F := X^F G + k^{(op)}·D^{o,F}, where k^{(op)} is given in (24). Then, the following assertions hold:
(a)
The optimal stopping problem for (X^G, G) has a solution if and only if the optimal stopping problem for (X̃^F, F) has a solution. Furthermore, if one of these solutions exists, then the minimal optimal stopping times, θ^G_* and θ^F_*, for (X^G, G) and (X̃^F, F), respectively, exist, and θ^G_* = min(θ^F_*, τ).
(b)
The maximal optimal stopping time, θ ˜ G , for ( X G , G ) exists if and only if the maximal optimal stopping time, θ ˜ F , for ( X ˜ F , F ) also exists, and they satisfy θ ˜ G = min ( θ ˜ F , τ ) .
This theorem is established under the strong integrability of X G . Thus, in our random horizon setting, the general framework of [4] remains open. The proof of Theorem 4 essentially relies on Lemma 4 and the following lemma, which is interesting in itself.
Lemma 5. 
Let H be a filtration, X be an RCLL and H -adapted process of class- ( H , D ) , S H be its Snell envelope, and consider
$$\mathcal{T}(X, \mathbb{H}) := \left\{\theta \in J_0(\mathbb{H}) \,:\, [\![\theta]\!] \subset \{X = S^H\},\ (S^H)^\theta \in \mathcal{M}(\mathbb{H})\right\}.$$
Then, ess inf T ( X , H ) belongs to T ( X , H ) as soon as this set is not empty.
Proof. 
Assume that $\mathcal{T}(X,\mathbb{H})\neq\emptyset$, and note that $\theta=\min(\theta_{1},\theta_{2})\in\mathcal{T}(X,\mathbb{H})$ for any $\theta_{i}\in\mathcal{T}(X,\mathbb{H})$, $i=1,2$, due to $[\![\theta]\!]\subset[\![\theta_{1}]\!]\cup[\![\theta_{2}]\!]$. This implies that this set is downward directed, and hence there exists a non-increasing sequence $\theta_{n}\in\mathcal{T}(X,\mathbb{H})$ such that $\widetilde{\theta}:=\operatorname{ess\,inf}\,\mathcal{T}(X,\mathbb{H})=\inf_{n}\theta_{n}$. The inclusion $[\![\widetilde{\theta}]\!]\subset\{X=S^{H}\}$ follows from the right continuity of both $X$ and $S^{H}$, while the martingale property of $(S^{H})^{\widetilde{\theta}}$ is obtained by passing to the limit in $(S^{H})^{\theta_{n}}$, using right continuity together with the uniform integrability granted by the class-$(\mathbb{H},D)$ property. This proves the lemma. □
The remaining part of this subsection proves Theorem 4.
Proof of Theorem 4. 
The proof of this theorem is given in three parts.
  • Part 1. This part proves the following fact:
$$\theta^{G}\in\mathcal{T}(X^{G},\mathbb{G})\quad\text{iff there exists}\ \theta^{F}\in\mathcal{T}(\widetilde{X}^{F},\mathbb{F})\ \text{with}\ \theta^{G}=\min(\theta^{F},\tau),\ P\text{-a.s.}\tag{39}$$
Let $\theta^{G}\in\mathcal{T}(X^{G},\mathbb{G})$. Thus, there exists an $\mathbb{F}$-stopping time $\theta^{F}$ such that
$$[\![\theta^{F}\wedge\tau]\!]\subset\{X^{G}=S^{G}\},\quad\text{and}\quad (S^{G})^{\theta^{F}}\in\mathcal{M}(\mathbb{G}).\tag{40}$$
On the one hand, by virtue of Lemma 4(b) and (25), we note that $X^{G}_{\tau}=k^{(pr)}_{\tau}=S^{G}_{\tau}$ $P$-a.s., and hence
$$\{X^{G}=S^{G}\}=\big(\{X^{G}=S^{G}\}\cap[\![0,\tau[\![\big)\cup[\![\tau]\!].\tag{41}$$
On the other hand, by combining Lemma 4(b) (precisely, equality (30)) and (25) again, we obtain
$$\{X^{G}=S^{G}\}\cap[\![0,\tau[\![\ =\ \Big\{X^{F}=\frac{S^{F}-k^{(op)}\cdot D^{o,F}}{G}\Big\}\cap[\![0,\tau[\![\ =\ \{S^{F}=G X^{F}+k^{(op)}\cdot D^{o,F}\}\cap[\![0,\tau[\![\ =\ \{S^{F}=\widetilde{X}^{F}\}\cap[\![0,\tau[\![.$$
Therefore, by combining this equality with (41) and
$$[\![\min(\theta^{F},\tau)]\!]\cap[\![0,\tau[\![\ =\ [\![\theta^{F}]\!]\cap[\![0,\tau[\![,$$
we deduce that the first condition in (40) is equivalent to
$$[\![\theta^{F}]\!]\cap[\![0,\tau[\![\ \subset\ \{S^{F}=\widetilde{X}^{F}\}\cap[\![0,\tau[\![,\quad\text{or equivalently,}\quad I_{[\![\theta^{F}]\!]}\,I_{[\![0,\tau[\![}\ \le\ I_{\{S^{F}=\widetilde{X}^{F}\}}\,I_{[\![0,\tau[\![}.$$
Thus, by taking the $\mathbb{F}$-optional projection and using $G=\big(I_{[\![0,\tau[\![}\big)^{o,F}>0$, the latter condition is equivalent to
$$[\![\theta^{F}]\!]\subset\{S^{F}=\widetilde{X}^{F}\}.\tag{42}$$
Thanks again to Theorem 3(a) and Lemma 4(a), we conclude that
$$(S^{G})^{\theta}\in\mathcal{M}_{loc}(\mathbb{G})\quad\text{iff}\quad\big(S^{F}G^{-1}I_{[\![0,\tau[\![}\big)^{\theta}\in\mathcal{M}_{loc}(\mathbb{G})\quad\text{iff}\quad T\big((S^{F})^{\theta}\big)\in\mathcal{M}_{loc}(\mathbb{G}),$$
for any $\mathbb{F}$-stopping time $\theta$. As $(S^{F})^{\theta}$ is an $\mathbb{F}$-supermartingale, there exist $M\in\mathcal{M}_{loc}(\mathbb{F})$ and a nondecreasing $\mathbb{F}$-predictable process $A$ such that $(S^{F})^{\theta}=S^{F}_{0}+M-A$ with $M_{0}=A_{0}=0$. Thus, $T\big((S^{F})^{\theta}\big)\in\mathcal{M}_{loc}(\mathbb{G})$ if and only if
$$\frac{G}{\widetilde{G}}\cdot A^{\tau}-\big(I_{\{\widetilde{G}=1\}}\big)^{p,F}\cdot A^{\tau}=T(A)\in\mathcal{M}_{loc}(\mathbb{G}),$$
or equivalently, its $\mathbb{G}$-compensator, which coincides with $A^{\tau}$, is a null process. This implies that $G_{-}\cdot A\equiv 0$, and hence $A\equiv 0$ due to $G>0$. Therefore, $(S^{F})^{\theta}$ is an $\mathbb{F}$-local martingale. This proves the claim that
$$(S^{G})^{\theta}\in\mathcal{M}_{loc}(\mathbb{G})\quad\text{iff}\quad(S^{F})^{\theta}\in\mathcal{M}_{loc}(\mathbb{F}),\quad\text{for any }\mathbb{F}\text{-stopping time }\theta.\tag{43}$$
Hence, the claim in (39) follows immediately from combining (40), (42), and (43). This ends part 1.
  • Part 2. Here, we prove assertion (a). Thanks to part 1, it is clear that $\mathcal{T}(X^{G},\mathbb{G})\neq\emptyset$ if and only if $\mathcal{T}(\widetilde{X}^{F},\mathbb{F})\neq\emptyset$. On the one hand, this proves the first statement in assertion (a). On the other hand, combining this statement with Lemma 5 and part 1 yields the second statement of assertion (a) immediately.
  • Part 3. If the maximal optimal stopping time $\widetilde{\theta}^{F}$ for $(\widetilde{X}^{F},\mathbb{F})$ exists, then $\widetilde{\theta}^{F}\in\mathcal{T}(\widetilde{X}^{F},\mathbb{F})$, and for any $\theta\in\mathcal{T}(\widetilde{X}^{F},\mathbb{F})$, we have $\theta\le\widetilde{\theta}^{F}=\operatorname{ess\,sup}\,\mathcal{T}(\widetilde{X}^{F},\mathbb{F})$ $P$-a.s. Hence, for any $\theta^{G}\in\mathcal{T}(X^{G},\mathbb{G})$, there exists $\theta\in\mathcal{T}(\widetilde{X}^{F},\mathbb{F})$ satisfying
$$\theta^{G}=\min(\theta,\tau)\ \le\ \min(\widetilde{\theta}^{F},\tau)\in\mathcal{T}(X^{G},\mathbb{G}).$$
This proves that $\min(\widetilde{\theta}^{F},\tau)=\operatorname{ess\,sup}\,\mathcal{T}(X^{G},\mathbb{G})$, and hence the maximal optimal stopping time for $(X^{G},\mathbb{G})$ exists and coincides with $\widetilde{\theta}^{F}\wedge\tau$. To prove the converse, we assume that the maximal optimal stopping time $\widetilde{\theta}^{G}$ for $(X^{G},\mathbb{G})$ exists. Then, by virtue of part 1, there exists $\theta^{F}\in\mathcal{T}(\widetilde{X}^{F},\mathbb{F})$ such that $\widetilde{\theta}^{G}=\min(\theta^{F},\tau)$, $P$-a.s., and for any $\theta\in\mathcal{T}(\widetilde{X}^{F},\mathbb{F})$, we have $\min(\theta,\tau)\in\mathcal{T}(X^{G},\mathbb{G})$ and
$$\min(\theta,\tau)\ \le\ \widetilde{\theta}^{G}=\min(\theta^{F},\tau),\quad P\text{-a.s.}$$
This yields $\theta\le\theta^{F}$ $P$-a.s. on $\{\theta<\tau\}$, or equivalently, $I_{\{\theta<\tau\}}\le I_{\{\theta\le\theta^{F}\}}$ $P$-a.s. By taking conditional expectations with respect to $\mathcal{F}_{\theta}$ on both sides of this inequality, we obtain $G_{\theta}\le I_{\{\theta\le\theta^{F}\}}$ $P$-a.s., and hence $\theta\le\theta^{F}$ $P$-a.s., as $G>0$. Therefore, $\operatorname{ess\,sup}\,\mathcal{T}(\widetilde{X}^{F},\mathbb{F})=\theta^{F}\in\mathcal{T}(\widetilde{X}^{F},\mathbb{F})$. Hence, the maximal optimal stopping time $\widetilde{\theta}^{F}$ for $(\widetilde{X}^{F},\mathbb{F})$ exists and satisfies $\min(\tau,\widetilde{\theta}^{F})=\widetilde{\theta}^{G}$. This proves assertion (b) and completes the proof of the theorem. □
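The relation $\theta^{G}_{*}=\min(\theta^{F}_{*},\tau)$ of Theorem 4 can be sanity-checked numerically in the simplest conceivable specialization: deterministic "public" information and a random time $\tau$ independent of it, so that the reduced $\mathbb{F}$-reward collapses to $\widetilde{X}_{t}=x_{t}G_{t}+\sum_{s\le t}x_{s}P(\tau=s)=E[x_{t\wedge\tau}]$. This toy reduction, and every number below, is our own illustrative assumption, not the paper's general construction of $\widetilde{X}^{F}$.

```python
# Toy check: deterministic reward x_t, independent death time τ.
# F-problem: maximize the deterministic reduced reward X~_t = E[x_{t ∧ τ}].
# G-problem: dynamic programming over "alive" states; after death the
# reward is frozen, so the minimal G-optimal time is min(theta_F, τ).

x = [1.0, 3.0, 2.0, 5.0, 0.0]      # deterministic reward x_t, t = 0,...,4
pmf = [0.0, 0.2, 0.3, 0.1, 0.4]    # P(τ = t); τ takes values in {1,...,4}
N = len(x) - 1

# Survival G_t = P(τ > t) and reduced reward X~_t = E[x_{t ∧ τ}]
G = [sum(pmf[s] for s in range(t + 1, N + 1)) for t in range(N + 1)]
Xt = [x[t] * G[t] + sum(x[s] * pmf[s] for s in range(t + 1))
      for t in range(N + 1)]

# Deterministic F-problem: minimal optimal time = earliest argmax of X~
theta_F = max(range(N + 1), key=lambda t: (Xt[t], -t))

# G-problem by backward induction: while alive at t, stop (collect x_t) or
# continue (die at t+1 with conditional probability pmf[t+1]/G_t, in which
# case the reward is frozen at x_{t+1}).
V = [0.0] * (N + 1)
V[N] = x[N]
stop = [False] * (N + 1)
for t in range(N - 1, -1, -1):
    p_die = pmf[t + 1] / G[t] if G[t] > 0 else 1.0
    cont = p_die * x[t + 1] + (1.0 - p_die) * V[t + 1]
    V[t] = max(x[t], cont)
    stop[t] = x[t] >= cont
first_stop_alive = next(t for t in range(N + 1) if t == N or stop[t])

# The two values agree, and while alive, stopping first becomes optimal
# exactly at theta_F, i.e., the minimal G-optimal time is min(theta_F, τ).
print(Xt[theta_F], V[0], theta_F, first_stop_alive)
```

With these numbers both problems have value 3.7, the first argmax of $\widetilde{X}$ is $t=3$, and the alive-state dynamic program first prescribes stopping at $t=3$ as well, in line with the theorem.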

4. Conclusions

In this paper, we addressed various aspects of the optimal stopping problem in a setting with two flows of information: a "public" flow, $\mathbb{F}$, received by everyone in the system over time, and a larger flow, $\mathbb{G}$, containing additional information about the occurrence of a random time, $\tau$. In this framework, our study starts by parametrizing, in a one-to-one manner, any $\mathbb{G}$-reward by processes and rewards that are $\mathbb{F}$-observable. Afterward, we use this parametrization to single out the deep mathematical structures of the value process of the optimal stopping problem under $\mathbb{G}$, while highlighting the various terms induced by the randomness in $\tau$. The resulting decomposition is highly motivated by applications in the risk management of the informational risks intrinsic to $\tau$. Furthermore, we establish the one-to-one connection between the $\mathbb{G}$-optimal stopping problem and its associated $\mathbb{F}$-optimal stopping problem, and we describe the exact relationship between their maximal (respectively, minimal) optimal stopping times. To the best of our knowledge, the obtained results are the first of their kind.
Beyond this, our setting is the most general considered in the literature for the pair $(\mathbb{F},\tau)$. In fact, herein we only assume that the survival probability process $G$ is positive (i.e., $G>0$), while most, if not all, of the literature makes additional assumptions, such as the initial system (represented by $\mathbb{F}$) being Markovian and $\tau$ satisfying either the immersion assumption, the density assumption, or independence between $\tau$ and $\mathbb{F}$ (see [25] and the references therein, to cite a few). Another direction in the literature addresses the optimal stopping problem with restricted information instead; for this framework, we refer the reader to [26,27] and the references therein, to cite a few.
Even though our setting is very general, it can be extended in two directions. The first extension consists of relaxing the assumption $G>0$, even though this assumption is standard in the literature and, in contrast to the other assumptions, quite acceptable in practice. The second extension lies in allowing the reward process to have irregularities in its paths, as in [4], and then exploring how these irregularities interact with the randomness in $\tau$.

Author Contributions

Conceptualization, T.C.; methodology, T.C.; formal analysis, S.A. and T.C.; investigation, S.A. and T.C.; writing–original draft preparation, T.C.; supervision, T.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was conducted at the University of Alberta and was fully supported by the Natural Sciences and Engineering Research Council of Canada, Grant NSERC RGPIN-2019-04779.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Proofs of Lemmas 3 and 4

Proof of Lemma 3. 
Thanks to [24] (XX.75 b) (see also [22] (Proposition B.2-(b))), for a $\mathbb{G}$-stopping time $\sigma^{G}$, there exists an $\mathbb{F}$-stopping time $\sigma$ such that
$$\sigma^{G}=\sigma^{G}\wedge\tau=\sigma\wedge\tau.$$
Define $\sigma^{F}:=\min\big(\max(\sigma,\sigma_{1}),\,\sigma_{2}\big)$. On the one hand, we note that $\sigma^{F}$ is an $\mathbb{F}$-stopping time satisfying the first condition in (22). On the other hand, it is clear that
$$\min\big(\tau,\max(\sigma,\sigma_{1})\big)=(\tau\wedge\sigma_{1})\,I_{\{\sigma_{1}>\sigma\}}+(\tau\wedge\sigma)\,I_{\{\sigma_{1}\le\sigma\}}=\max(\tau\wedge\sigma,\ \sigma_{1}\wedge\tau).$$
Thus, by using this equality, we derive
$$\sigma^{F}\wedge\tau=\tau\wedge\sigma_{2}\wedge\max(\sigma,\sigma_{1})=(\tau\wedge\sigma_{2})\wedge\big(\tau\wedge\max(\sigma,\sigma_{1})\big)=(\tau\wedge\sigma_{2})\wedge\max(\tau\wedge\sigma,\ \sigma_{1}\wedge\tau)=\sigma\wedge\tau=\sigma^{G}.$$
This ends the proof of the lemma. □
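Pathwise, the two displayed identities in the proof of Lemma 3 are instances of the distributivity of "min" over "max" in the lattice of real numbers, $\min(t,\max(a,b))=\max(\min(t,a),\min(t,b))$. A quick randomized sanity check of this deterministic fact (an illustration only, all numbers assumed):

```python
import random

# Randomized check of the lattice identity used pathwise in Lemma 3:
#   min(tau, max(sigma, sigma1)) == max(min(tau, sigma), min(sigma1, tau))
def identity_holds(tau, sigma, sigma1):
    lhs = min(tau, max(sigma, sigma1))
    rhs = max(min(tau, sigma), min(sigma1, tau))
    return lhs == rhs

random.seed(0)
samples = [[random.uniform(0.0, 10.0) for _ in range(3)] for _ in range(10000)]
print(all(identity_holds(*s) for s in samples))   # prints True
```

The identity holds exactly (not merely up to rounding), since both sides select one of the same three numbers.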
Proof of Lemma 4. 
(1) Here, we prove assertion (a). Let $L$ be an $\mathbb{F}$-semimartingale. Then, throughout this proof, we define $X:=L\,\widetilde{\mathcal{E}}^{-1}I_{[\![0,\tau[\![}$, and we derive
$$X=\frac{L^{\tau}}{\widetilde{\mathcal{E}}^{\,\tau}}-\frac{L}{\widetilde{\mathcal{E}}}\cdot D-L_{0}I_{\{\tau=0\}}=X_{0}+L_{-}\cdot\frac{1}{\widetilde{\mathcal{E}}^{\,\tau}}+\frac{1}{\widetilde{\mathcal{E}}}\cdot L^{\tau}-\frac{L}{\widetilde{\mathcal{E}}}\cdot D=X_{0}+\frac{L_{-}}{G_{-}\widetilde{\mathcal{E}}_{-}}I_{]\!]0,\tau]\!]}\cdot D^{o,F}+\frac{1}{\widetilde{\mathcal{E}}}\cdot L^{\tau}-\frac{L}{\widetilde{\mathcal{E}}}\cdot D=X_{0}+\frac{L_{-}}{\widetilde{G}\,\widetilde{\mathcal{E}}}I_{]\!]0,\tau]\!]}\cdot D^{o,F}+\frac{1}{\widetilde{\mathcal{E}}}\cdot L^{\tau}-\frac{L}{\widetilde{\mathcal{E}}}\cdot D=X_{0}-\frac{L_{-}}{\widetilde{\mathcal{E}}}\cdot N^{G}+\frac{1}{\widetilde{\mathcal{E}}}\cdot L^{\tau}.$$
The third equality follows from $d\,\widetilde{\mathcal{E}}^{-1}=\widetilde{\mathcal{E}}_{-}^{-1}G_{-}^{-1}\,dD^{o,F}$, while the fourth equality is due to $\widetilde{\mathcal{E}}=\widetilde{\mathcal{E}}_{-}\,G/\widetilde{G}$. A combination of the latter equality with $X_{0}=L_{0}I_{\{\tau>0\}}$ proves (28).
(2) This part proves assertion (b). To this end, by virtue of Lemma 1, we note that $L G^{-1}I_{[\![0,\tau[\![}=G_{0}^{-1}\widetilde{Z}X$, where $\widetilde{Z}$ is defined in (13) and satisfies
$$\widetilde{Z}^{\,\tau}=1/\mathcal{E}\big(G_{-}^{-1}\cdot m\big)^{\tau}=\mathcal{E}\big(-G_{-}^{-1}\cdot T(m)\big).$$
Here, thanks to $G>0$ and Lemma 1, we use the fact that $T(M)=M^{\tau}-\widetilde{G}^{-1}\cdot[M,m]^{\tau}$ for any $\mathbb{F}$-local martingale $M$. Thus, by applying Itô's formula to $\widetilde{Z}X$, we obtain
$$\widetilde{Z}X=\widetilde{Z}^{\,\tau}X=X_{0}+\widetilde{Z}_{-}\cdot X+X_{-}\cdot\widetilde{Z}+[X,\widetilde{Z}]=X_{0}+\widetilde{Z}_{-}\cdot X-X_{-}\widetilde{Z}_{-}G_{-}^{-1}\cdot T(m)-\widetilde{Z}_{-}G_{-}^{-1}\cdot[X,T(m)]=X_{0}+\widetilde{Z}_{-}\cdot X-X_{-}\widetilde{Z}_{-}G_{-}^{-1}\cdot T(m)-\widetilde{Z}_{-}\widetilde{G}^{-1}\cdot[X,m]^{\tau}.$$
Thus, by inserting (28) into the latter equality and using $\widetilde{Z}/\widetilde{\mathcal{E}}=G_{0}/G$, $X_{-}=L_{-}\widetilde{\mathcal{E}}_{-}^{-1}I_{]\!]0,\tau]\!]}$, $\widetilde{Z}=\widetilde{Z}_{-}G/\widetilde{G}$, and $\widetilde{\mathcal{E}}=\widetilde{\mathcal{E}}_{-}G/\widetilde{G}$, we obtain
$$\widetilde{Z}X=X_{0}-\frac{G_{0}\,\widetilde{G}\,L_{-}}{G\,G_{-}}\cdot N^{G}+\frac{G_{0}}{G_{-}}\cdot L^{\tau}-\frac{G_{0}L_{-}}{G_{-}^{2}}\cdot T(m)-\frac{G_{0}}{G_{-}\widetilde{G}}\cdot[L,m]^{\tau}+\frac{G_{0}L_{-}\Delta m}{G_{-}\widetilde{G}}\cdot N^{G}=X_{0}-\frac{G_{0}L_{-}}{G_{-}}\cdot N^{G}+\frac{G_{0}}{G_{-}}\cdot T(L)-\frac{G_{0}L_{-}}{G_{-}^{2}}\cdot T(m).$$
Therefore, by combining the latter equality with $L G^{-1}I_{[\![0,\tau[\![}=G_{0}^{-1}\widetilde{Z}X$, equality (29) follows immediately, and the proof of the lemma is complete. □

References

  1. Neveu, J. Discrete Parameter Martingales; Elsevier: Amsterdam, The Netherlands, 1975. [Google Scholar]
  2. Arai, T.; Takenaka, M. Constrained optimal stopping under regime switching model. arXiv 2022, arXiv:2204.07914v1. [Google Scholar] [CrossRef]
  3. El Karoui, N. Les aspects probabilistes du contrôle stochastique. In Ninth Saint Flour Probability Summer School—1979 (Saint Flour, 1979); Lecture Notes in Mathematics 876; Springer: Berlin/Heidelberg, Germany, 1981; pp. 73–238. [Google Scholar]
  4. Kobylanski, M.; Quenez, M.C. Optimal stopping time problem in a general framework. Electron. J. Probab. 2012, 17, 1–28. [Google Scholar] [CrossRef]
  5. Peskir, G.; Shiryaev, A. Optimal Stopping and Free-Boundary Problems. In Lectures in Mathematics ETH Zürich; Birkhäuser: Basel, Switzerland, 2006. [Google Scholar]
  6. Shiryaev, A. Optimal Stopping Rules; Aries, A.B., Translator; Springer: New York, NY, USA, 1978. [Google Scholar]
  7. Alsheyab, S.; Choulli, T. Reflected backward stochastic differential equations under stopping with an arbitrary random time. arXiv 2021, arXiv:2107.11896. [Google Scholar]
  8. Di Nunno, G.; Khedher, A.; Vanmaele, M. Robustness of quadratic hedging strategies in finance via backward stochastic differential equations with jumps. Appl. Math. Optim. 2015, 72, 353–389. [Google Scholar] [CrossRef]
  9. El Karoui, N.; Peng, S.G.; Quenez, M.C. Backward stochastic differential equations in finance. Math. Financ. 1997, 7, 1–71. [Google Scholar] [CrossRef]
  10. Hamadène, S.; Lepeltier, J.P.; Zhen, W. Infinite Horizon Reflected BSDE and its applications in Mixed Control and Game Problems. Probab. Math. Stat. 1999, 19, 211–234. [Google Scholar]
  11. Hamadène, S.; Lepeltier, J.P. Backward equations, stochastic control and zero-sum stochastic differential games. Stoch. Stoch. Rep. 1995, 54, 221–231. [Google Scholar] [CrossRef]
  12. Khedher, A.; Vanmaele, M. Discretisation of FBSDEs driven by càdlàg martingales. J. Math. Anal. Appl. 2016, 435, 508–531. [Google Scholar] [CrossRef]
  13. Quenez, M.-C.; Sulem, A. Reflected BSDEs and robust optimal stopping for dynamic risk measures with jumps. Stoch. Processes Their Appl. 2014, 124, 3031–3054. [Google Scholar] [CrossRef]
  14. El Karoui, N.; Kapoudjian, C.; Pardoux, E.; Peng, S.; Quenez, M.C. Reflected Solutions of Backward SDE and Related Obstacle Problems for PDEs. Ann. Probab. 1997, 25, 702–737. [Google Scholar] [CrossRef]
  15. Øksendal, B.; Zhang, T. Backward stochastic differential equations with respect to general filtrations and applications to insider finance. Commun. Stoch. Anal. 2012, 6, 703–722. [Google Scholar] [CrossRef]
  16. Dellacherie, C.; Meyer, P.-A. Probabilités et Potentiel, Théorie des martingales. Chapter V to VIII; Hermann: Paris, France, 1980. [Google Scholar]
  17. He, S.W.; Wang, J.G.; Yan, J.A. Semimartingale Theory and Stochastic Calculus; Science Press: Beijing, China; CRC Press: New York, NY, USA, 1992. [Google Scholar]
  18. Jacod, J.; Shiryaev, A. Limit Theorems for Stochastic Processes, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2003. [Google Scholar]
  19. Dellacherie, C.; Meyer, P.-A. Probabilities and Potential B: Theory of Martingales; North-Holland: New York, NY, USA, 1982. [Google Scholar]
  20. Choulli, T.; Yansori, S. Explicit description of all deflators for markets under random horizon with application to NFLVR. Stoch. Processes Their Appl. 2022, 151, 230–264. [Google Scholar] [CrossRef]
  21. Choulli, T.; Daveloose, C.; Vanmaele, M. A martingale representation theorem and valuation of defaultable securities. Math. Financ. 2020, 30, 1–38. [Google Scholar] [CrossRef]
  22. Aksamit, A.; Choulli, T.; Deng, J.; Jeanblanc, M. No-arbitrage up to random horizon for quasi-left-continuous models. Finance Stoch. 2017, 21, 1103–1139. [Google Scholar] [CrossRef]
  23. Jeulin, T. Semi-Martingales et Grossissement d’Une Filtration; Springer: Berlin/Heidelberg, Germany, 1980. [Google Scholar]
  24. Dellacherie, C.; Maisonneuve, B.; Meyer, P.-A. Probabilités et Potentiel: Tome V, Processus de Markov (fin), Compléments de Calcul Stochastique, Chapitres XVII-XXIV; Hermann: Paris, France, 1992. [Google Scholar]
  25. Chakrabarty, A.; Guo, X. Optimal stopping times with different information levels with time uncertainty. Stoch. Anal. Appl. Financ. 2012, 13, 19–38. [Google Scholar]
  26. Agram, N.; Haadem, S.; Øksendal, B.; Proske, F. Optimal stopping, randomized stopping and singular control with general information flow. Theory Probab. Appl. 2022, 66, 601–612. [Google Scholar] [CrossRef]
  27. Øksendal, B. Optimal stopping with delayed information. Stoch. Dyn. 2005, 5, 271–280. [Google Scholar] [CrossRef]