The Optimal Stopping Problem under a Random Horizon

This paper considers a pair (F, τ), where F is a filtration representing the "public" flow of information that is available to all agents over time, and τ is a random time that might not be an F-stopping time. This setting covers the case of a credit risk framework, where τ models the default time of a firm or client, and the setting of life insurance, where τ is the death time of an agent. It is clear that random times cannot be observed before their occurrence. Thus, the larger filtration, G, which incorporates F and makes τ observable, results from the progressive enlargement of F with τ. For this informational setting, governed by G, we analyze the optimal stopping problem in three main directions. The first direction consists of characterizing the existence of the solution to this problem in terms of F-observable processes. The second direction lies in deriving the mathematical structures of the value process of this control problem, while the third direction singles out the associated optimal stopping problem under F. These three aspects allow us to quantify precisely how τ impacts the optimal stopping problem and are also vital for studying reflected backward stochastic differential equations that arise naturally from the pricing and hedging of vulnerable claims.


Introduction
In this paper, we consider a complete probability space (Ω, F, P), on which we consider a complete and right-continuous filtration F := (F_t)_{t≥0}. Besides this initial system (Ω, F, F, P), we consider an arbitrary random time τ (i.e., an F-measurable random variable with values in [0, +∞]) that might not be an F-stopping time. As in most applications, such as life insurance and credit risk, where τ is the death time and the default time, respectively, τ is observable only when it occurs and cannot be seen before. Thus, the flow of information that incorporates both τ and F, which will be denoted throughout this paper by G = (G_t)_{t≥0}, makes τ a stopping time and is known in the literature as the progressive enlargement of F with τ. For this new system (Ω, F, G, P), our objective consists of analyzing the following problem:

S^G_σ := ess sup_{θ ∈ J_σ} E[X^G_θ | G_σ],    (1)

where σ is a G-stopping time and J_σ is the set of G-stopping times that are finite and greater than or equal to σ. Here, X^G is a G-optional process representing the reward, satisfying "some integrability condition", and is stopped at τ (i.e., it does not vary after τ). The essential supremum of an arbitrary family of random variables is the smallest random variable that is an upper bound for each element of this family almost surely (see [1] for more details and related properties). This problem is known as the optimal stopping problem, and it is an example of a stochastic control problem. For more details about its origin, its applications, and its evolution, we refer the reader to [2-6] and the references therein, to cite a few. Herein, we address this optimal stopping problem, and we aim to measure the impact of τ on this problem in many aspects. In particular, for this problem, we answer the following questions:
1. Can we associate with (1) an optimal stopping problem under F with reward X^F and value process S^F?
2. How are the two pairs, (X^G, S^G) and (X^F, S^F), connected to each other?
3. What are the structures in S^G induced by τ?
4. How are the maximal (minimal) optimal times of (1) and their F-optimal stopping problem counterparts related to each other?
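For orientation, the value process of problems such as (1) is the Snell envelope of the reward, which in discrete time is computed by backward induction. The following display is a standard discrete-time sketch, not the paper's continuous-time construction:

```latex
% Discrete-time Snell envelope (standard sketch; notation simplified)
S_N = X_N, \qquad
S_n = \max\bigl(X_n,\; \mathbb{E}[S_{n+1} \mid \mathcal{G}_n]\bigr),
\quad n = N-1, \dots, 0,
\qquad
\theta_{*} := \inf\{\, n \ge 0 : S_n = X_n \,\}.
```

Here θ_* is the minimal optimal stopping time; these are the discrete-time analogues of the objects S^G and θ^G_* studied below.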
One of the direct applications of our optimal stopping problem under a random horizon, τ, lies in studying linear RBSDEs, where F is assumed to be generated by a Brownian motion W, f is an F-progressively measurable process (the driver rate), ξ is a random variable, and S is a right-continuous with left limits (RCLL for short hereafter) F-adapted process with values in [−∞, +∞). The two processes Y_− and S_−, which are the left limits of the solution Y and of S, respectively, are defined in Section 2.1 for the sake of a smooth presentation. For this direct application, we refer the reader to the earlier version of our complete work, which can be found in [7]. The relationship between the optimal stopping problem and RBSDEs is well understood nowadays, and we refer the reader to [8-15] and the references therein, to cite a few. This paper has three sections, including the current one. The second section defines the general notations, the mathematical setting of the random horizon τ, and its preliminaries. The third section addresses the optimal stopping problem under stopping with τ in various aspects. This paper also has an appendix, where we prove our technical lemmas.
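For the reader's convenience, we recall the classical fixed-horizon form of a linear RBSDE with lower obstacle S; the random-horizon version studied in [7] differs in its horizon and filtration, so the following is only an orientation sketch:

```latex
\begin{cases}
Y_t = \xi + \displaystyle\int_t^T f_s\, ds + K_T - K_t - \int_t^T Z_s\, dW_s,
& t \in [0,T],\\[4pt]
Y_t \ge S_t \quad \text{for all } t \in [0,T],\\[4pt]
\displaystyle\int_0^T \bigl(Y_{s-} - S_{s-}\bigr)\, dK_s = 0
\quad \text{(Skorokhod condition)},
\end{cases}
```

where K is a nondecreasing RCLL process pushing Y upward just enough to stay above the obstacle S; the Skorokhod condition forces K to act only when Y_− touches S_−.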

Notations and the Random Horizon Setting
This section has two subsections. The first subsection defines the general notations used throughout this paper, while the second subsection presents the progressive enlargement setting associated with τ and its preliminaries.

General Notations
By H, we denote an arbitrary filtration that satisfies the usual conditions of completeness and right continuity. For any process, X, the H-optional projection and the H-predictable projection, when they exist, are denoted by ^{o,H}X and ^{p,H}X, respectively. The set M(H, Q) (respectively, M_p(H, Q) for p ∈ (1, +∞)) denotes the set of all H-martingales (respectively, p-integrable martingales) under Q, while A(H, Q) denotes the set of all H-optional processes that are RCLL with integrable variation under Q. When Q = P, we simply omit the probability for the sake of simple notation. For a d-dimensional H-semimartingale, X, by L(X, H) we denote the set of H-predictable processes (either d-dimensional or one-dimensional) that are X-integrable in the semimartingale sense. For φ ∈ L(X, H), the resulting integral of φ with respect to X is denoted by φ • X. For more details about the stochastic integral and its intrinsic calculus and notation, we refer the reader to [16-18]. For an H-local martingale, M, we denote by L¹_loc(M, H) the set of H-predictable processes, φ, that are M-integrable and for which the resulting integral φ • M is an H-local martingale. If C(H) is a set of processes adapted to H, then C_loc(H) is the set of processes, X, for which there exists a sequence of H-stopping times, (T_n)_{n≥1}, increasing to infinity, such that X^{T_n} belongs to C(H) for each n ≥ 1. The H-dual optional projection and the H-dual predictable projection of a process V with finite variation, when they exist, are denoted by V^{o,H} and V^{p,H}, respectively. For any real-valued H-semimartingale, L, we denote by E(L) the Doléans-Dade (stochastic) exponential; it is the unique solution to dX = X_− dL, X_0 = 1. Throughout this paper, we consider the following notations. For any random time, σ, and any process, X, we denote by X^σ the stopped process given by X^σ_t := X_{σ∧t}, t ≥ 0.
For any RCLL process, X, we denote by X_− the left-limits process of X, which is defined by X_{−,t} := lim_{s↑t} X_s for t > 0 and X_{−,0} := X_0. For more details about BMO martingales and their properties, we refer the reader to [16] or its English version [19].
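The Doléans-Dade exponential E(L) introduced above admits the following standard closed form, recalled here for convenience:

```latex
\mathcal{E}(L)_t
= \exp\Bigl(L_t - L_0 - \tfrac{1}{2}\langle L^{c}\rangle_t\Bigr)
  \prod_{0 < s \le t} \bigl(1 + \Delta L_s\bigr)\, e^{-\Delta L_s},
```

where L^c denotes the continuous local martingale part of L; this is the unique solution to dX = X_− dL with X_0 = 1.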

The Random Horizon and the Progressive Enlargement of F
In addition to the initial model (Ω, F, F, P), we consider an arbitrary random time, τ, that might not be an F-stopping time. This random time is parametrized through F by the pair (G, G̃), called the survival probabilities or Azéma supermartingales. Furthermore, the F-martingale m associated with this pair is a BMO F-martingale that plays an important role in our analysis. The flow of information, G, which incorporates both F and τ, is the progressive enlargement of F with τ. Throughout this paper, on Ω × [0, +∞), we consider the F-optional σ-field, denoted by O(F), and the F-progressive σ-field, denoted by Prog(F). Thanks to [20] (Theorem 2.3) and [21] (Theorems 2.3 and 2.11), we recall the following theorem.
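For the reader's convenience, we recall the definitions that are standard in this literature for the objects just introduced; the paper's exact conventions may differ slightly in notation:

```latex
G_t := P(\tau > t \mid \mathcal{F}_t), \qquad
\widetilde{G}_t := P(\tau \ge t \mid \mathcal{F}_t), \qquad
m := G + D^{o,\mathbb{F}}, \quad D := I_{[\![\tau,+\infty[\![},
```

```latex
\mathcal{G}_t := \bigcap_{s > t}
\bigl(\mathcal{F}_s \vee \sigma(\tau \wedge s)\bigr), \qquad t \ge 0.
```

Here D^{o,F} is the F-dual optional projection of the default indicator D, so that m is the martingale part of the Azéma supermartingale G, and G is the smallest right-continuous filtration containing F that makes τ a stopping time.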
Theorem 1. The following assertions hold:
(a) For any M ∈ M_loc(F), the process T(M) is a G-local martingale (recall that G_{t−} coincides with P(τ ≥ t | F_{t−})).
(b) The process N^G is a G-martingale with integrable variation. Moreover, H • N^G is a G-local martingale with locally integrable variation for any H belonging to I^o_loc(N^G, G).
For any q ∈ [1, +∞) and a σ-algebra H on Ω × [0, +∞), we define L^q(Ω, H, P ⊗ dD) as the space of H-measurable processes that are q-integrable with respect to P ⊗ dD. Throughout this paper, we make the following assumption:

G > 0 (i.e., G is a positive process) and τ < +∞ P-a.s.    (11)
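The G-martingale N^G of assertion (b) is, in the usual convention of this literature (a sketch; see [20,21] for the precise statement), the compensated default process:

```latex
N^{\mathbb{G}} := D - \widetilde{G}^{-1} I_{]\!]0,\tau]\!]}
\raisebox{0.1ex}{\scriptsize$\bullet$}\, D^{o,\mathbb{F}},
\qquad D := I_{[\![\tau,+\infty[\![}.
```

It is the building block for the pure-default part of G-martingales, and the integrands H ∈ I^o_loc(N^G, G) are exactly those for which H • N^G retains the local martingale property.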
Under the positivity of G, this process can be decomposed multiplicatively into two processes, which play central roles in this paper, as outlined in Lemma 1 below. For more details about this lemma and the related results, we refer the reader to [20] (Lemma 2.4). Below, we also recall [20] (Proposition 4.3), which plays an important role throughout this paper.
Proposition 1. Assume that G > 0, and consider the process Z^τ. Then, the following assertions hold:
(a) Z^τ is a G-martingale, and for any T ∈ (0, +∞), the measure Q_T given by dQ_T := Z^τ_T dP is a well-defined probability measure on G_{τ∧T}.
(b) For any M ∈ M_loc(F), we have M^{T∧τ} ∈ M_loc(G, Q_T). In particular, W^{T∧τ} is a Brownian motion for (Q_T, G) for any T ∈ (0, +∞).

Remark 1.
(1) Under the condition G > 0, the transformation T(M) simplifies for any M ∈ M_loc(F). This is due to the fact that, when G > 0, we obtain G̃ > 0 thanks to Lemma 1. Thus, the process I_{{G̃=0<G_−}} is null, and as a consequence, the corresponding term in T(M) vanishes.
(2) In general, the G-martingale Z^τ might not be uniformly integrable, and hence, in general, Q_T might not be well defined for T = ∞. For these facts, we refer the reader to [20] (Proposition 4.3), where the conditions for Z^τ to be uniformly integrable are fully singled out when G > 0.

The Optimal Stopping Problem under a Random Horizon
Throughout the rest of this paper, J^{σ₂}_{σ₁}(H) denotes the set of all H-stopping times with values in [[σ₁, σ₂]], for any two H-stopping times σ₁ and σ₂ such that σ₁ ≤ σ₂ P-a.s. This section has three subsections. The first subsection connects the G-reward process to an F-reward process in a unique manner and investigates how their integrability is transmitted back and forth. The second subsection elaborates on the mathematical structures induced by τ. The third subsection connects the minimal and maximal G-optimal stopping times to the corresponding F-optimal stopping times.

Parametrization of G-Reward Using F-Processes
This subsection establishes an exact relationship between the G-reward and F-reward processes and shows how some features travel back and forth between them. To this end, we recall the notion of class-D processes.
Definition 1. Let (X, H) be a pair of a process, X, and a filtration, H. Then, X is said to be of class-(H, D) if {X_σ : σ is a finite H-stopping time} is a uniformly integrable family of random variables.
Below, we state the main results of this subsection. To this end, throughout the rest of this paper, we consider the following notation:

σ₁ ∧ σ₂ := min(σ₁, σ₂), for any two real-valued random variables σ₁ and σ₂.    (15)

Theorem 2. Assume that (11) holds, and let X^G be a G-optional process such that (X^G)^τ = X^G. Then, there exists a pair (X^F, k^{(pr)}) of processes such that X^F is F-optional, k^{(pr)} is F-progressive, and (16) holds. The pair (X^F, k^{(pr)}) is unique in the sense that if there exists another pair (X̄^F, k̄^{(pr)}) satisfying (16), then X^F and X̄^F are modifications of each other, and k^{(pr)} = k̄^{(pr)} P ⊗ dD-a.e.
Furthermore, the following assertions hold:
(a) X^G is locally bounded if and only if X^F and k^{(pr)} are locally bounded.
The local boundedness of the process k^{(pr)}, which is defined up to a P ⊗ dD-evanescent set, is understood in the sense that there exists a sequence of F-stopping times (T_n)_{n≥1} that increases to infinity almost surely and satisfies |k^{(pr)}| I_{[[0,T_n]]} ≤ C_n, P ⊗ dD-a.e., for all n ≥ 1, for some positive constants (C_n)_{n≥1}.
Proof of Theorem 2. Consider a G-optional process, X^G. Then, thanks to [22] (Lemma B.1) (see also [23] (Lemma 4.4)), there exists a pair (X^F, k^{(pr)}) such that X^F is F-optional, k^{(pr)} is Prog(F)-measurable, and (19) holds. Furthermore, on the one hand, the uniqueness of this pair follows from the assumption that G > 0 and the second equality in (19). On the other hand, a direct computation yields the equality (16).
(a) By virtue of (19), it is clear that the local boundedness of the pair (X^F, k^{(pr)}) implies the local boundedness of X^G. To prove the reverse, we assume that X^G is locally bounded with localizing sequence of stopping times (T^G_n)_n. Hence, there exists a sequence of positive constants (C_n)_n such that |(X^G)^{T^G_n}| ≤ C_n. Then, there exists a sequence of F-stopping times (T_n)_n that increases to infinity almost surely and satisfies min(τ, T_n) = min(T^G_n, τ) P-a.s. By virtue of the assumption (X^G)^τ = X^G and (19), the inequality (20) is equivalent to the corresponding inequality for the pair (X^F, k^{(pr)}). Hence, this inequality implies that |k^{(pr)}| I_{[[0,T_n]]} and |X^F| I_{[[0,T_n]]} are bounded. Thanks to G > 0 and the fact that (T_n)_n is a sequence of F-stopping times increasing to infinity, these latter conditions are equivalent to saying that both k^{(pr)} and X^F are locally bounded. This proves assertion (a).
(b) Thanks to (16) and the fact that k^{(pr)} • D is an RCLL process, we deduce that X^G is RCLL if and only if X^F I_{[[0,τ[[} is RCLL. Thus, we assume that X^G is an RCLL process, and we consider a sequence of G-stopping times (T^G_n)_n increasing to infinity. By virtue of [22] (Proposition B.2-(b)) and G > 0, there exists a sequence of F-stopping times (T_n)_n that increases to infinity and satisfies min(τ, T_n) = min(T^G_n, τ) P-a.s. Furthermore, by applying (16) to each X^{G,n}, on the one hand, we obtain the corresponding decomposition. On the other hand, as T_n increases to infinity, it is clear that X^F is RCLL if and only if each X^{F,n} is RCLL. The latter follows directly from combining the boundedness of X^{G,n}, [16] (Théorème 47, p. 119), and the right continuity of G. This completes the proof of assertion (b).
(c) It is clear that k^{(pr)} • D is an RCLL G-semimartingale, and hence X^G is an RCLL G-semimartingale if and only if X^F I_{[[0,τ[[} is an RCLL G-semimartingale. To prove the converse, we note that by stopping with the T^G_n defined above and by using [16] (Théorème 26, Chapter VII, p. 235), there is no loss of generality in assuming that X^G is bounded, which leads to the boundedness of X^F (see [22] (Lemma B.1) or [23] (Lemma 4.4-(b), p. 63)). Thus, thanks to [16] (Théorème 47, p. 119, and Théorème 59, p. 268), which imply that the optional projection of a bounded RCLL G-semimartingale is an RCLL F-semimartingale, we deduce that ^{o,F}(X^G) is an RCLL F-semimartingale. A combination of this with the condition G > 0 and the fact that G is an RCLL F-semimartingale implies that X^F is an RCLL F-semimartingale. Furthermore, a direct calculation yields an identity from which, combined with (16), the equality (17) follows.
(d) Here, we prove assertion (d). To this end, we use (16) and derive the required estimates, whose right-hand-side terms are finite. This proves assertion (d).
(e) Assume that X^G is of class-(G, D). On the one hand, we have E[|k^{(pr)}_τ|] < +∞, or equivalently, k^{(pr)} ∈ L¹(Ω, Prog(F), P ⊗ dD). On the other hand, due to G_σ ≤ 1, for any c > 0, we derive the required uniform-integrability estimate for X^F G. This proves that X^F G is of class-(F, D), and the theorem is proved.
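For orientation, decompositions such as (16) typically take the following form in the progressive-enlargement literature (a sketch; the paper's exact statement may differ):

```latex
X^{\mathbb{G}}
= X^{\mathbb{F}}\, I_{[\![0,\tau[\![}
+ k^{(\mathrm{pr})}_{\tau}\, I_{[\![\tau,+\infty[\![},
```

so that X^G coincides with the F-observable process X^F strictly before τ and is frozen at the value k^{(pr)}_τ from τ onward, in accordance with the condition (X^G)^τ = X^G.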

The Mathematical Structures of the Value Process (Snell Envelope)
The main result of this subsection characterizes, in different ways, the Snell envelope of a process under G in terms of the F-Snell envelope and some G-local martingales. To this end, we start with the following two lemmas.
Lemma 2. For any nonnegative or integrable process, X, we always have ^{o,G}(X) I_{[[0,τ[[} = G^{-1} ^{o,F}(X I_{[[0,τ[[}) I_{[[0,τ[[}.
This follows from [24] (XX.75-(c)/(d)), while the next lemma appears to be new.

Lemma 3. Let σ₁ and σ₂ be two F-stopping times such that σ₁ ≤ σ₂ P-a.s. Then, for any G-stopping time, σ^G, satisfying σ₁ ∧ τ ≤ σ^G ≤ σ₂ ∧ τ P-a.s., there exists an F-stopping time σ^F such that σ^G = σ^F ∧ τ P-a.s. and σ₁ ≤ σ^F ≤ σ₂.
The proof of this lemma is relegated to Appendix A. Throughout this paper, for any F ⊗ B(R_+)-measurable process, X, which is nonnegative or µ := P ⊗ dD-integrable, its F-optional projection with respect to µ, denoted by M^P_µ(X | O(F)), is the unique F-optional process, Y, such that for any bounded and F-optional process H,

E[∫₀^∞ H_t X_t dD_t] = E[∫₀^∞ H_t Y_t dD_t].

Theorem 3. Assume G > 0, and let X^G be an RCLL and G-adapted process such that (X^G)^τ = X^G and the associated integrability conditions with respect to µ := P ⊗ dD hold (see (23)). Then, the following assertions hold:
(a) If either X^G is nonnegative or E[sup_{t≥0}(X^G_t)^+] < ∞, then the (G, P)-Snell envelope of X^G, denoted by S^G, is given by (25), where S̃^F is the (F, P)-Snell envelope of the reward X̃^F.
(b) Let T ∈ (0, +∞), and let Q be the probability measure given in (14).
Here, S̃^F is the (F, P)-Snell envelope of the F-reward (X^F E + k^{(op)} • D^{o,F})^T. The integrability stated in Theorem 2(c) implies that the two last terms on the right-hand side of (25) and (26) are well-defined G-local martingales. In fact, by virtue of Theorem 2(c), k^{(pr)} ∈ L¹_loc(Ω, Prog(F), P ⊗ dD), or equivalently, the pair (k^{(F)}, k^{(op)}) belongs to L¹_loc(Ω, Prog(F), P ⊗ dD) × L¹_loc(Ω, O(F), P ⊗ dD). On the one hand, this condition obviously implies that k^{(F)} • D ∈ M_loc(G) and that k^{(op)} belongs to I^o_loc(N^G, G). On the other hand, by considering suitable localizing sequences, which both increase to infinity, we prove that G^{-1} • (k^{(op)} • D^{o,F}) ∈ I^o_loc(N^G, G), and similar reasoning applies to the remaining process. The latter remark plays an important role in proving Theorem 3. The rest of this subsection is devoted to this proof. Hence, we start with the next lemma, which is useful here and in the rest of the paper as well.
Lemma 4. Assume G > 0, and let E be defined in (12). Then, the following assertions hold:
(a) For any RCLL F-semimartingale, L, the identity (28) holds.
The proof is given in Appendix A, while below, we prove Theorem 3.
Proof of Theorem 3. This proof is divided into three parts. The first and second parts prove assertions (a) and (b), respectively, when X^G is bounded, while the third part relaxes this condition and proves the theorem.
Part 1. In this part, we assume that X^G is bounded and prove assertion (a). Under this assumption, the associated processes X^F, k^{(pr)}, and k^{(op)} are also bounded. As a result, both k^{(op)} • N^G and k^{(F)} • D are uniformly integrable G-martingales. Thus, by combining these remarks with Lemma 2 and taking conditional expectations with respect to G_t on both sides of (27), we derive the first key equality. Then, by taking the essential supremum and using (31), we obtain (32). Therefore, by combining this with (30) (see Lemma 4(c)), we immediately obtain (25), and Part 1 is completed.
Part 2. Here, we assume that X^G is bounded, we fix T ∈ (0, +∞), and we prove assertion (b). Let θ ∈ T^{T∧τ}_{t∧τ}(G) and σ ∈ T^T_t(F) be such that θ = σ ∧ τ. Then, similarly to Part 1, by taking Q-conditional expectations on both sides of (27) and using (31) and the fact that the two processes k^{(op)} • N^G and k^{(F)} • D remain uniformly integrable G-martingales under Q (due to the boundedness of k^{(pr)} and k^{(F)}), we write (34), where V^F is an RCLL and nondecreasing process. Thus, by further simplifying the right-hand side of (34), we obtain (35). By taking the essential supremum over all θ ∈ T^{T∧τ}_{t∧τ}(G), we obtain (36), where S̃^F is the Snell envelope for the reward (X^F E + k^{(op)} • D^{o,F})^T under (F, P). Thus, by combining the above equality with (28) (see Lemma 4(a)) and (31), we obtain (26). This ends the second part.

Part 3. Here, we prove the theorem without the boundedness assumption on X^G. To this end, by virtue of Parts 1 and 2, we note that the theorem follows immediately as soon as we prove that the equalities (32) and (36) hold. Thus, for n ≥ 0, we consider the truncated process X^{G,n} := max(min(X^G, n), −n), whose associated triplet (X^{F,n}, k^{(pr,n)}, k^{(op,n)}) is obtained by the same truncation. Therefore, it is clear that X^{G,n} and its associated triplet (X^{F,n}, k^{(pr,n)}, k^{(op,n)}) are bounded, and thanks to Parts 1 and 2, we conclude that they fulfill (32) and (36). If X^G ≥ 0, then X^{G,n} is nonnegative and increases to X^G, and all components of (X^{F,n}, k^{(pr,n)}, k^{(op,n)}) are nonnegative and increase to the corresponding components of (X^F, k^{(pr)}, k^{(op)}), respectively. Thus, thanks to the monotone convergence theorem, it is clear that in this case the equalities (32) and (36) pass to the limit. This proves that (32) and (36) hold for the case when X^G ≥ 0, and the theorem is proved in this case. Now, assume that E[sup_{t≥0}(X^G_t)^+] < ∞. Then, Fatou's lemma yields one inequality, while a combination of the dominated convergence theorem with the reverse inequality yields the other. Hence, a combination of the latter inequality with (37) proves assertion (a). The proof of assertion (b), under the assumption E_Q[sup_{0≤t≤T}(X^G_t)^+] < ∞, exactly mimics the proof of assertion (a) and is omitted. This ends the proof of the theorem.

G-Optimal Stopping Times Versus F-Optimal Stopping Times
In this subsection, we investigate how the solutions to the optimal stopping problems under G and F are related to each other in many aspects.
Theorem 4. Assume that G > 0, and let X^G be an RCLL G-optional process of class-(G, D) such that (X^G)^τ = X^G. Consider the unique pair (X^F, k^{(pr)}) associated with X^G via Theorem 2, and define X̃^F := X^F G + k^{(op)} • D^{o,F}, where k^{(op)} is given in (24). Then, the following assertions hold:
(a) The optimal stopping problem for (X^G, G) has a solution if and only if the optimal stopping problem for (X̃^F, F) has a solution. Furthermore, if one of these solutions exists, then the minimal optimal stopping times, θ^G_* and θ^F_*, for (X^G, G) and (X̃^F, F), respectively, exist, and θ^G_* = min(θ^F_*, τ).
(b) The maximal optimal stopping time, θ̄^G, for (X^G, G) exists if and only if the maximal optimal stopping time, θ̄^F, for (X̃^F, F) also exists, and they satisfy θ̄^G = min(θ̄^F, τ).
This theorem is established under the strong integrability of X^G. Thus, in our random horizon setting, the general framework of [4] remains open. The proof of Theorem 4 essentially relies on Lemma 4 and the following lemma, which is interesting in itself.
Lemma 5. Let H be a filtration, X be an RCLL and H-adapted process of class-(H, D), S^H be its Snell envelope, and consider the set of optimal stopping times

T(X, H) := {θ : θ is a finite H-stopping time, S^H_θ = X_θ P-a.s., and (S^H)^θ is an H-martingale}.

Then, ess inf T(X, H) belongs to T(X, H) as soon as this set is not empty.
Proof. Assume that T(X, H) ≠ ∅, and note that θ = min(θ₁, θ₂) ∈ T(X, H) for any θ₁, θ₂ ∈ T(X, H). This implies that this set is downward directed, and hence there exists a nonincreasing sequence, θ_n ∈ T(X, H), such that θ̂ := ess inf T(X, H) = inf_n θ_n. It is obvious that [[θ̂]] ⊂ {X = S^H} due to the right continuity of both X and S^H. This proves the lemma.
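The objects in Lemma 5 are easy to visualize in discrete time: the Snell envelope is computed by backward induction, and the minimal optimal stopping time is the first instant the reward touches its envelope. The following sketch is a hypothetical binomial-tree example (not the paper's continuous-time setting); all numerical parameters below are illustrative assumptions:

```python
import numpy as np

def snell_envelope(reward, p):
    """Snell envelope of a reward process on a recombining binomial tree.

    reward[n] is an array of length n+1 with the payoff at the nodes of
    step n (index j = number of up-moves); p is the up-probability.
    Returns arrays of the same shape with S_n = max(X_n, E[S_{n+1} | node]).
    """
    N = len(reward) - 1
    S = [None] * (N + 1)
    S[N] = reward[N].copy()
    for n in range(N - 1, -1, -1):
        # At node j, an up-move leads to node j+1 and a down-move to node j.
        cont = p * S[n + 1][1:] + (1 - p) * S[n + 1][:-1]
        S[n] = np.maximum(reward[n], cont)
    return S

# Hypothetical example: American put payoff on a 3-step binomial tree.
N, u, d, s0, strike, p = 3, 1.2, 0.8, 100.0, 100.0, 0.5
prices = [s0 * u ** np.arange(n + 1) * d ** (n - np.arange(n + 1))
          for n in range(N + 1)]
reward = [np.maximum(strike - s, 0.0) for s in prices]
S = snell_envelope(reward, p)

# Minimal optimal stopping time along the all-down path (node j = 0):
# the first n with S_n = X_n, as in Lemma 5.
path_stop = next(n for n in range(N + 1) if S[n][0] <= reward[n][0] + 1e-9)

print(S[0][0])      # value of the stopping problem at time 0
print(path_stop)    # first time the reward touches its Snell envelope
```

The envelope dominates the reward everywhere by construction, and stopping is optimal exactly on the contact set {S = X}, mirroring the set T(X, H) of the lemma.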
The remaining part of this subsection proves Theorem 4.
Proof of Theorem 4. The proof of this theorem is given in three parts.
Part 1. Let θ^G ∈ T(X^G, G). Thus, thanks to Lemma 3, there exists an F-stopping time θ^F such that θ^G = min(θ^F, τ) P-a.s. On the one hand, by virtue of Lemma 4(b) and (25), we note that X^G_τ = k^{(pr)}_τ = S^G_τ P-a.s., and hence (41) holds. On the other hand, by combining Lemma 4(b) (precisely, equality (30)) and (25) again, we obtain (42). Therefore, by combining this equality with (41), we deduce that the first condition in (40) is equivalent to its F-counterpart. Thus, by taking the F-optional projection and using Theorem 3(a) and Lemma 4(a), we conclude that the corresponding martingale condition holds for any F-stopping time θ. As (S̃^F)^θ is an F-supermartingale, there exist M ∈ M_loc(F) and a nondecreasing F-predictable process A such that (S̃^F)^θ = S̃^F_0 + M − A and M_0 = A_0 = 0. Thus, T((S̃^F)^θ) ∈ M_loc(G) if and only if A^τ ∈ M_loc(G), or equivalently, its G-compensator, which coincides with A^τ, is a null process. This implies that G_− • A ≡ 0, and hence A ≡ 0 due to G > 0. Therefore, (S̃^F)^θ is an F-local martingale. This proves the claim (43). Hence, the claim in (39) follows immediately from combining (40), (42), and (43). This ends Part 1.
Part 2. Here, we prove assertion (a). Thanks to Part 1, it is clear that T(X^G, G) ≠ ∅ if and only if T(X̃^F, F) ≠ ∅. On the one hand, this proves the first statement in assertion (a). On the other hand, by combining this statement with Lemma 5 and Part 1, the second statement of assertion (a) follows immediately.
Part 3. If the maximal optimal stopping time, θ̄^F, for (X̃^F, F) exists, then θ̄^F ∈ T(X̃^F, F), and for any θ ∈ T(X̃^F, F), we have θ ≤ θ̄^F = ess sup T(X̃^F, F) P-a.s. Hence, for any θ^G ∈ T(X^G, G), there exists θ ∈ T(X̃^F, F) satisfying θ^G = min(θ, τ) ≤ min(θ̄^F, τ) P-a.s. This proves that min(θ̄^F, τ) = ess sup T(X^G, G), and hence the maximal optimal stopping time for (X^G, G) exists and coincides with θ̄^F ∧ τ. To prove the converse, we assume that the maximal optimal stopping time θ̄^G for (X^G, G) exists. Then, by virtue of Part 1, there exists θ^F ∈ T(X̃^F, F) such that θ̄^G = min(θ^F, τ) P-a.s., and for any θ ∈ T(X̃^F, F), we have min(θ, τ) ∈ T(X^G, G) and min(θ, τ) ≤ θ̄^G = min(θ^F, τ) P-a.s. This yields the maximality of θ^F and completes the proof.

Conclusions
In this paper, we addressed various aspects of the optimal stopping problem in the setting where there are two flows of information: one "public" flow, F, which is received by everyone in the system over time, and a larger flow, G, containing additional information about the occurrence of a random time, τ. In this framework, our study starts by parametrizing in a unique manner (i.e., via a one-to-one parametrization) any G-reward by processes and rewards that are F-observable. Afterward, we use this parametrization to single out the deep mathematical structures of the value process of the optimal stopping problem under G while highlighting the various terms induced by the randomness in τ. The resulting decomposition is highly motivated by applications in the risk management of the informational risks intrinsic to τ. Furthermore, we establish the one-to-one connection between the G-optimal stopping problem and its associated F-optimal stopping problem, and we describe the exact relationship between their maximal (respectively, minimal) optimal times. To the best of our knowledge, the obtained results are the first of their kind.
Besides this, our setting is the most general considered in the literature for the pair (F, τ). In fact, herein, we only assume that the survival probability process, G, is positive (i.e., G > 0), while most (if not all) of the literature makes other assumptions, such as the initial system (represented by F) being Markovian and τ satisfying either the immersion assumption, the density assumption, or the independence assumption between τ and F (see [25] and the references therein, to cite a few). Another direction in the literature consists of addressing the optimal stopping problem with restricted information instead, and for this framework, we refer the reader to [26,27] and the references therein, to cite a few.
Even though our setting is very general, it can be extended in two directions. The first extension consists of relaxing the assumption G > 0, even though this assumption is standard in the literature and is very acceptable in practice, in contrast to the other assumptions. The second extension lies in allowing the reward process to have irregularities in its paths, as in [4], and then exploring how these irregularities interplay with the randomness in τ.
Informed Consent Statement: Not applicable.
Data Availability Statement: Data are contained within the article.

Conflicts of Interest:
The authors declare no conflicts of interest.
Proof of Lemma 4. (1) Here, we prove assertion (a). Let L be an F-semimartingale. Then, throughout this proof, we define X := L E^{-1} I_{[[0,τ[[}, and we derive the claimed chain of equalities. The third equality follows from dE^{-1} = E^{-1}_− G^{-1} dD^{o,F}, while the fourth equality is due to E = E_− G/G̃. A combination of the latter equality with X_0 = L_0 I_{{τ>0}} proves (28).