Article

A Bank Salvage Model by Impulse Stochastic Controls

by Francesco Giuseppe Cordoni 1, Luca Di Persio 1 and Yilun Jiang 2,*
1 Department of Computer Science, University of Verona, Strada le Grazie, 15, 37134 Verona, Italy
2 Department of Mathematics, Penn State University, University Park, PA 16802, USA
* Author to whom correspondence should be addressed.
Risks 2020, 8(2), 60; https://doi.org/10.3390/risks8020060
Submission received: 23 April 2020 / Revised: 28 May 2020 / Accepted: 2 June 2020 / Published: 4 June 2020

Abstract: The present paper is devoted to the study of a bank salvage model with a finite time horizon that is subject to stochastic impulse controls. In our model, the bank's default time is a completely inaccessible random quantity generating its own filtration, thus reflecting the unpredictability of the event itself. In this framework the main goal is to minimize the total cost of the central controller, which can inject capital to save the bank from default. We address the latter task, showing that the corresponding quasi-variational inequality (QVI) admits a unique viscosity solution, Lipschitz continuous in space and Hölder continuous in time. Furthermore, under mild assumptions on the dynamics the smooth-fit $W^{(1,2),p}_{loc}$ property is achieved for any $1 < p < +\infty$.

1. Introduction

Mainly motivated by the financial credit crises of the last decades, starting from the 2008–2009 credit crunch, the financial and mathematical communities have started investigating and generalizing existing models. In fact, these events showed that the models used prior to the crisis were inadequate to describe and capture the main features of financial markets. As a consequence, much effort has been devoted to developing general and robust models that are able to properly describe financial markets and their main peculiarities.
From a purely mathematical perspective, the above-mentioned attention led, among many other research topics, to the study of general stochastic optimal control problems in which, instead of classical controls, more realistic controls are considered. Impulse-type controls are among the most studied. This type of control has regained attention in the last decades also due to its many applications in finance and economics. In this setting, the controller intervenes on the system at some random times with discrete actions, the control being represented by the couple $u = (\tau_n, K_n)_n$, where $\tau_n$ is the time at which the controller intervenes and $K_n$ denotes the action taken. This type of control implies that at the intervention time $\tau_n$ the system jumps from the state $X(\tau_n^-)$ to the new state $X(\tau_n) = \Gamma(X(\tau_n^-), K_n)$, for a suitable function $\Gamma$. Therefore, as is standard in optimal control theory, using the dynamic programming principle it can be shown that stochastic impulse control problems can be associated with a quasi-variational Hamilton–Jacobi–Bellman (HJB) equation of the form
$$\min\left\{ -\partial_t V - \mathcal{L} V - f,\; V - \mathcal{H}V \right\} = 0,$$
where f is the running cost, $\mathcal{L}$ is the infinitesimal generator of the process X and V is the value function solving the above HJB equation. Further, $\mathcal{H}$ is the nonlocal impulse operator that characterizes the HJB equation for impulse controls. The particular form of the HJB equation identifies two regions: the continuation region, where $V > \mathcal{H}V$ and no impulse control is used, and the impulse region, where on the contrary $V = \mathcal{H}V$ and the controller intervenes. The value function is, in fact, a viscosity solution to Equation (1), in a sense to be properly defined later on. Following the above characterization of the domains for the HJB equation, particular attention must be given to the intervention boundary. In fact, much of the literature is devoted to proving that this boundary is regular enough; this regularity is referred to as the smooth-fit principle. Several results on the smooth-fit principle exist when the control problem has an infinite horizon, whereas the finite horizon problem, and in particular its terminal condition, makes the derivation of the smooth-fit principle less straightforward.
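Although the precise operator used in the present paper is introduced in Section 4, a generic form consistent with the above description is the following sketch, where the intervention cost $c(K)$ is only a placeholder:
$$\mathcal{H}V(t,x) = \sup_{K}\left\{ V\big(t, \Gamma(x,K)\big) - c(K) \right\}.$$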
It is worth stressing that impulse-type stochastic control is strictly connected to optimal stopping and optimal switching problems. The literature on the topic is wide, and several issues have been addressed. For instance, in Belak et al. (2017); Cosso (2013); Egami (2008); Guo and Wu (2009); Guo and Chen (2013); Pham (2007); Tang and Yong (1993) general results related to the existence as well as the regularity of an optimal control have been proven under different regularity assumptions; in Øksendal and Sulem (2008) an impulse-type optimal control was considered subject to a delayed reaction. A stochastic impulse control problem for a system perturbed by general jump noise was considered in Bayraktar et al. (2013). Finally, impulse optimal controls have been used in concrete financial applications; see Aïd et al. (2019); Chevalier et al. (2013, 2016); Federico et al. (2019); Vath et al. (2007).
A second crucial aspect that has emerged as fundamental in a general financial formulation is that any financial entity may fail. In fact, one of the major shortcomings of classical financial models is that no risk of failure is considered in the general setting. Recent financial events have shown that no financial operator can be considered immune from bankruptcy. Therefore an extensive body of literature has emerged that focuses on credit risk modeling, assessing this risk as the main object that financial entities have to face when borrowing or lending money to other players that might fail; see, e.g., Brigo et al. (2013); Crépey (2015a, 2015b).
Along the aforementioned lines, two main approaches have been developed in the literature: the structural approach and the intensity-based approach; see, e.g., Bielecki et al. (2009). Mathematically speaking, the first scenario considers a default event that can be triggered by the underlying process; a typical example is a default triggered by a stopping time defined as a hitting time. Such an approach has been considered, for instance, in Cordoni and Di Persio (2020); Cordoni et al. (2019); Lipton (2016). The latter approach instead considers a default event that is completely inaccessible with respect to the reference filtration, so that the typical way to solve the problem is to rely on filtration enlargement techniques; see, e.g., Bielecki and Rutkowski (2013); Pham (2009).
It is worth stressing further that the obtained results, which are related to a stochastic optimal control problem when impulses have to be taken into account, play a relevant role within current financial practice. In particular, the work can be concretely used to derive effective strategies by financial supervisors aiming at controlling financial networks characterized by cash flows under constraints imposed by over-national organizations. The latter is, e.g., the case of the rules of (supervisor) intervention in the Basel regulatory framework (now at its third generation) on bank capital requirements, stress tests, market liquidity risk and related bounds. Analogously, in the European insurance market, the Solvency set of rules (now at its second generation) aims at organizing insurance and re-insurance institutions so as to obtain a financially solid ecosystem, particularly reducing insolvency risk so as to maximize customers' safety.
Along the aforementioned lines, our techniques allow the controller to prevent failures of the whole system acting with capital injections so as to save the supervised financial entities from possible failure.
It is worth mentioning that the economic literature as well as financial practice have not expressed a uniquely determined statement concerning intervention criteria with respect to the idea of intervention itself either. Without entering into details, we refer the interested reader to Hetzel (2009), where the authors make a comparison concerning possible reforms about existing bank regulations and related systems of surveillance; to Calderon and Schaeck (2016) where the authors analyze supervisors’ interventions, such as, e.g., recapitalizations and liquidity injections, with respect to the effect they have caused to the overall (system of) banks’ level of competitiveness; and to Aikins (2009), which critically studies the free-market efficiency and possible supervisor intervention within a broad theoretical perspective, also with specific links highlighting the stability-oriented relevance of governance rules acting on a rather heterogeneous type of financial institutions.
The present paper is devoted to the study of a stochastic optimal control problem of the impulse type, in which a financial supervisor controls a system, such as a financial operator or a bank. The final goal of the controller is to prevent failures, injecting capital into the system according to a given criterion to be maximized. The controller has no perfect information regarding the failure of the bank, so that, mathematically speaking, the failure cannot be foreseen by the controller. The supervisor, which can be thought of, for instance, as a central bank, can intervene with impulse-type controls over a finite horizon, so that the optimal solution is represented by both the intervention times and the quantities injected into the system.
Our approach will be an intensity-based one, so that we will assume the default event to be totally inaccessible from the reference filtration, assuming only a typical density assumption. This assumption will allow us to rewrite the system as a finite horizon impulse problem, using the density distribution of the default event, via enlargement of filtration techniques. We stress that, due to the terminal condition to be imposed, a finite horizon stochastic impulse control problem is typically more difficult to solve than an infinite horizon one. In fact, an exhaustive literature on stochastic impulse control on an infinite time horizon exists, see, e.g., Bayraktar et al. (2013); Belak et al. (2017); Chevalier et al. (2013); Egami (2008); Guo and Wu (2009); Øksendal and Sulem (2008); Pham (2007); Vath et al. (2007), whereas very few results exist for the finite horizon case; see, e.g., Chevalier et al. (2016); Guo and Chen (2013); Tang and Yong (1993).
A more financially oriented motivation of the control problem considered in the present work has often arisen in the last decade, mostly as a consequence of the 2007–2008 credit crunch. This has been, for instance, the case of the Lehman Brothers failure, which showed the cascade effect triggered by the default of a sufficiently large and interconnected financial institution; see, e.g., Ivashina and Scharfstein (2010); Kahle and Stulz (2013) and references therein. We stress that particular attention has to be given not only to the magnitude of the stressed bank's financial assets, but also to its degree of interconnection. Indeed, while the exposure to a few financial institutions, provided its magnitude is reasonable, can be managed by ad hoc policies established on a one-to-one basis, the situation can become simply ungovernable in the case of a large number of connections, hidden links and over-structured contracts.
Since the above-mentioned financial crisis, it has become typical within financially oriented stochastic optimal control theory to model a given problem up to a random terminal time instead of considering a fixed, or even infinite, horizon. Analogously, data analysts as well as mathematicians have started to consider problems of bank bailouts, where a bank's default and the consequent contagion spreading inside the network may induce serious consequences for decades; see, e.g., Eichengreen et al. (2012).
From a government perspective, the prospect of such large financial fallout has pushed several central banks to establish specific economic actions to help those parts of the banking sector of (at least) national interest that are under concrete failure risk. An example is given by the bail-in procedures followed in agreement with the Directive 2014/59/UE (approved on 1 January 2016 by the European Union Parliament), and then applied, e.g., in Italy and Ukraine; see, e.g., Kuznetsova et al. (2017); Sakuramoto and Urbani (2018). It is relevant to underline that such actions also rely on the following degrees of freedom: the possibility, as an alternative to internal rescue, to relocate goods as well as legal links to a third party, often called a bridge bank, or to a bad bank that will collect only a part of the assets while aiming at maximizing its long-term value; the hierarchical order of those who are called to bear the bail-in, which means that the government can decide to put small creditors on the safe side; and the principle that no shareholder, or creditor, has to bear greater losses than would be expected under an administrative liquidation, namely the no-creditor-worse-off principle.
Similar situations have recently been taken into consideration by a series of European Central Bank procedures, with particular reference to the well-known quantitative easing, as well as in connection with the creation of injected currency; see, e.g., Altavilla et al. (2015); Andrade et al. (2016); Blattner and Joyce (2016); De Santis (2020). We would like to underline that quantitative easing procedures have also been experienced outside the European Union, as in the case of the actions undertaken by the Japanese Central Bank, whose intervention has lasted years, see, e.g., Bowman et al. (2011); Miyao (2000); Voutsinas and Werner (2011), or by the US Federal Reserve, not only starting from 2008, but also during the Great Depression of the 1930s; see, e.g., Blinder (2010); Bordo and Sinha (2016); Edward (2016); Fawley and Neely (2013); Hoover Institution (2014).
The main contribution of the present paper is to develop a concrete financial setting that models the evolution of a financial entity, controlled by an external supervisor who is willing to lend money in order to maximize a given utility function; see also Capponi and Chen (2015); Cordoni et al. (2019); Eisenberg and Noe (2001); Lipton (2016); Rogers and Veraart (2013) for settings in which a financial supervisor aims at controlling a system of banks or general financial entities. In complete generality, we assume that the financial entity may fail at some random time that is inaccessible to the reference filtration, which represents the controller's knowledge. Additionally, we consider a controller that can act on the system with an impulse-type control, so that the optimal solution consists of both the random times at which money is injected into the system and the precise amounts of money to inject. We characterize the value function of the above problem, showing that it must solve, in a suitable viscosity sense, a certain quasi-variational inequality (QVI). Finally we prove that the above QVI admits a unique solution in the viscosity sense, and we also provide regularity results for the intervention boundary, known in the literature as the smooth-fit principle.
The paper is organized as follows: Section 2 introduces the general financial and mathematical setting; Section 3 proves some regularity results for the value function; Section 4 addresses the problem of existence and uniqueness of a solution; finally, Section 5 is devoted to the smooth-fit principle.

2. The General Setting

We will, in what follows, consider a complete filtered probability space $\left(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\in[0,T]}, \mathbb{P}\right)$, $\{\mathcal{F}_t\}_{t\in[0,T]}$ being a filtration satisfying the usual assumptions, namely right-continuity and saturation by $\mathbb{P}$-null sets. Let $T < \infty$ be a fixed terminal time, and let x, resp. y, denote the total value of the investments of a given bank, resp. the total amount of deposits of the same bank. We assume that x and y evolve according to the following system of SDEs:
$$\begin{cases} dx(t) = c_1\,\dot{y}(t)\,dt + \tilde{\mu}(t)\,x(t)\,dt + \sigma(t, x(t))\,dW(t), \\ \dot{y}(t) = \lambda\!\left(\tfrac{x(t)}{y(t)}\right) y(t), \end{cases}$$
where $W(t)$ is assumed to be a d-dimensional Brownian motion adapted to the aforementioned filtration. Specifications regarding the driving coefficients will be made later in the paper. In particular, the first term in Equation (2) accounts for the increase in x due to the fact that new deposits are made, where $c_1 \in [0,1]$ denotes the fraction of deposits that are actually invested in more or less risky financial operations. We stress that, by a rescaling argument, we can assume $c_1 = 1$ without loss of generality. Moreover, we define the value over liability ratio $X(t) := \frac{x(t)}{y(t)}$. Then, according to Equation (2) and exploiting the Itô–Döblin formula, we have:
$$\begin{cases} dX(t) = \left[(c_1 - X(t))\,\lambda(X(t)) + \tilde{\mu}(t)\,X(t)\right] dt + \sigma(t, X(t))\,dW(t), \\ X(0) = x_0. \end{cases}$$
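For the reader's convenience, the following is a sketch of the computation behind Equation (3); it assumes, as in Equation (2), that y has finite variation and uses the slight abuse of notation $\sigma(t,X) = \sigma(t,x)/y$:
$$dX(t) = \frac{dx(t)}{y(t)} - \frac{x(t)}{y(t)^2}\,dy(t) = \left(c_1\,\frac{\dot y(t)}{y(t)} + \tilde\mu(t)\,X(t)\right)dt + \frac{\sigma(t,x(t))}{y(t)}\,dW(t) - X(t)\,\lambda(X(t))\,dt,$$
and since $\dot y(t)/y(t) = \lambda(X(t))$, the drift collapses to $(c_1 - X(t))\,\lambda(X(t)) + \tilde\mu(t)\,X(t)$.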
We assume the process X to be stopped at a completely inaccessible random time $\tau$, which is not adapted to the reference filtration $\{\mathcal{F}_t\}_{t\in[0,T]}$. From a financial point of view, assuming that X represents the financial value of an agent, the above assumption reflects the fact that a bank's failure cannot be predicted. In particular, let us introduce the filtration $\{\mathcal{H}_t\}_{t\in[0,T]}$ generated by the random time $\tau$, namely $\mathcal{H}_t := \sigma\left(\mathbb{1}_{\{\tau \le s\}} : s \le t\right)$. Then we define the enlarged filtration $\{\mathcal{G}_t\}_{t\in[0,T]}$, where $\mathcal{G}_t := \mathcal{F}_t \vee \mathcal{H}_t$.
Within this setting it is interesting to consider an external controller, e.g., a central bank, or an equivalent financial agent acting as a governance institution with suitable surveillance rights. Such a controller can inject capital into the bank at random times $\tau_n$. Then, at time $\tau_n$, the state process $X(t)$ jumps, and in particular we have:
$$X(\tau_n^-) \;\longmapsto\; X(\tau_n) = X(\tau_n^-) + K_n;$$
therefore $X(t)$ evolves according to
$$\begin{cases} dX(t) = \left[(c_1 - X(t))\,\lambda(X(t)) + \tilde{\mu}(t)\,X(t)\right] dt + \sigma(t, X(t))\,dW(t) + \displaystyle\sum_{n:\,\tau_n \le t} K_n, \\ X(0) = x_0. \end{cases}$$
The control for the aforementioned system is represented by a couple $u = (\tau_n, K_n)_{n \ge 1}$, where $(\tau_n)_{n\ge 1}$ is a non-decreasing sequence of stopping times representing the intervention times, while $(K_n)_{n\ge 1}$ is a sequence of $\mathcal{G}_t$-adapted random variables taking values in $A \subseteq [0,\infty)$. In particular, the sequence $(K_n)_{n\ge 1}$ indicates the financial actions taken at the times $\tau_n$. The following is the definition of an admissible impulse strategy u.
Definition 1
(Admissible impulse strategy). The admissible control set $\mathcal{U}$ consists of all the impulse controls $u = (\tau_n, K_n)_{n \ge 1}$ such that
$$\{\tau_i\}_{i\ge 1} \text{ are increasing } \mathcal{G}_t\text{-stopping times, i.e., } \tau_1 < \tau_2 < \cdots < \tau_i < \cdots, \qquad K_i \in A \text{ and } K_i \in \mathcal{G}_{\tau_i},\ \forall i \ge 1.$$
Remark 1.
Equivalently, we will use a different notation, $\xi_t(\cdot)$, to express the same object, i.e., for all $0 \le t \le s \le T$,
$$\xi_t(s) = \sum_{t \le \tau_i < s} K_i,$$
where $(\tau_i, K_i)$ satisfy (8) and the corresponding admissible control set $\mathcal{U}[t,T]$ consists of all such $\xi_t(\cdot)$.
In what follows we will denote for short
$$\mu(t, X(t)) := (c_1 - X(t))\,\lambda(X(t)) + \tilde{\mu}(t)\,X(t).$$
We will assume the following hypotheses to hold.
Hypothesis 1.
The functions $\mu : [0,T]\times\mathbb{R} \to \mathbb{R}$ and $\sigma: [0,T]\times\mathbb{R} \to \mathbb{R}^d$ are bounded on $[0,T]\times\mathbb{R}$ and Lipschitz continuous w.r.t. the second variable, uniformly w.r.t. the first variable.
In the following, for any admissible control $u \in \mathcal{U}[t,T]$, define $X^u_{t,x}(s)$ as:
$$X^u_{t,x}(s) = x + \int_t^s \mu(r, X^u_{t,x}(r))\,dr + \int_t^s \sigma(r, X^u_{t,x}(r))\,dW(r) + \xi_t(s), \qquad s \ge t.$$
Under Hypothesis 1 there exists a unique strong solution $X^u_{t,x}(s)$ of the dynamics (4), for any $(t,x) \in [0,T]\times\mathbb{R}$, with initial condition $X^u_{t,x}(t) = x$.
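To fix ideas, the following minimal Euler-Maruyama sketch simulates the impulse-controlled ratio process above; the coefficients lam, mu_tilde, sigma and the threshold injection rule are illustrative assumptions, not specifications taken from the model.

import numpy as np

rng = np.random.default_rng(0)

def lam(X):                                  # hypothetical deposit growth rate lambda(X)
    return 0.5 / (1.0 + max(X, 0.0))

def drift(t, X, c1=1.0, mu_tilde=0.03):      # (c1 - X) lambda(X) + mu_tilde X
    return (c1 - X) * lam(X) + mu_tilde * X

def sigma(t, X):                             # hypothetical volatility coefficient
    return 0.2

def simulate(x0=1.2, T=1.0, n_steps=250, barrier=0.8, target=1.1):
    """Simulate X on [0, T]; inject K so that X jumps back to `target`
    whenever it falls below `barrier` (an illustrative admissible strategy)."""
    dt = T / n_steps
    X = x0
    path, injections = [X], []
    for k in range(n_steps):
        t = k * dt
        dW = rng.normal(scale=np.sqrt(dt))
        X = X + drift(t, X) * dt + sigma(t, X) * dW
        if X < barrier:                      # impulse time tau_n
            K = target - X                   # impulse size K_n
            injections.append((t, K))
            X = X + K                        # post-jump state X(tau_n) = X(tau_n^-) + K_n
        path.append(X)
    return np.array(path), injections

path, injections = simulate()
print(f"terminal ratio: {path[-1]:.3f}, number of injections: {len(injections)}")

The simple rule used here, injecting capital so as to bring the ratio back to a target level whenever it falls below a barrier, is only meant to illustrate the structure of an admissible impulse strategy $(\tau_n, K_n)_{n\ge 1}$.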
We aim at solving the following stochastic control problem, whose value function is defined as:
$$V(t,x) := \sup_{u \in \mathcal{U}[t,T]} J^u(t,x),$$
where J u ( t , x ) is the expected cost of the form
$$J^u(t,x) := \mathbb{E}\left[\int_t^{\tau \wedge T} f(X^u_{t,x}(s))\,ds + g_1(X^u_{t,x}(T))\,\mathbb{1}_{\{\tau \ge T\}} - g_2(X^u_{t,x}(\tau))\,\mathbb{1}_{\{\tau < T\}} - \sum_{t \le \tau_n \le \tau \wedge T} \left(K_n + \kappa\right)\right],$$
where f, resp. $g_1$ and $g_2$, represents the running cost, resp. the terminal costs, while $K + \kappa$, with $\kappa > 0$ a suitable constant, defines the cost of a capital injection of size K. Above, we have denoted by $\tau$ the bank default time associated with the process $X(t)$. We assume, as specified above, that $\tau$ is a completely inaccessible random time, and it is not adapted to the reference filtration $\{\mathcal{F}_t\}_{t\in[0,T]}$. Additionally, recall that $\{\mathcal{H}_t\}_{t\in[0,T]}$ is the filtration generated by $\tau$, namely $\mathcal{H}_t := \sigma\left(\mathbb{1}_{\{\tau \le s\}} : s \le t\right)$, whilst $\{\mathcal{G}_t\}_{t\in[0,T]}$ is the enlarged filtration, namely $\mathcal{G}_t := \mathcal{F}_t \vee \mathcal{H}_t$.
Following the standard literature (see, e.g., Karatzas (1989)), both the running and terminal costs are usually given in terms of suitable utility functions representing the utility gained from the bank's value. A typical example is $f(x) = \frac{x^p}{p}$, $p \in (0,1)$. As regards the cost $K + \kappa$, it reflects the fact that injecting an amount K of capital to increase the bank's liquidity level implies a non-negligible cost; otherwise such financial help would always be profitable.
Throughout the work we will make the following assumptions:
Hypothesis 2.
(i) 
the functions $f, g_1, g_2 : \mathbb{R} \to \mathbb{R}$ are Lipschitz continuous and bounded;
(ii) 
the following holds:
$$g_1(x) \ge \sup_{K > 0}\left\{ g_1(x + K) - K - \kappa \right\}.$$
The boundedness properties of the running and terminal costs can be interpreted in the following sense: since we are seeking the optimal capital injection strategy for the government over a finite time horizon, we may think that there is a healthy level $U_\infty > 0$ such that, as the bank's capital grows to infinity, the utility flattens out, so that the government has no interest in injecting more capital. As an example, we can take
$$U(x) = U_\infty - e^{-x}.$$
Definition 2
(Admissible impulse control). An admissible impulse control is an impulse control on $[0,T]$ with a finite average number of impulses, that is,
$$\mathbb{E}\left[\xi_t(s)\right] < \infty.$$
It is worth stressing that under Hypotheses 1 and 2 the optimization functional J u ( t , x ) is well-defined for every ( t , x ) [ 0 , T ] × R and every u U .
Remark 2.
A further generalization of the above optimal control problem consists in considering a controller having two different ways to influence the evolution of the state process x, namely:
(1) 
An impulse type control ( τ n , K n ) n , hence as in Equation (4) by injecting capital at random times τ n ;
(2) 
A continuous type control α ( t ) , by choosing at any time t the rate at which x is growing.
In particular, an action of type 2 implies that Equation (4) can be reformulated as follows:
$$\begin{cases} dX(t) = \left[(c_1 - X(t))\,\lambda(X(t)) + (\mu(t) - \alpha(t))\,X(t)\right] dt + \sigma(t)\,X(t)\,dW(t) + \displaystyle\sum_{n:\,\tau_n \le t} K_n, \\ X(0) = x_0, \end{cases}$$
where $\alpha$ represents the continuous control variable, $\alpha(t) \in [0, \bar{r}]$ for a suitable constant $\bar{r}$, with $\alpha = 0$ standing for higher returns and $\alpha = \bar{r}$ for lower returns. This reflects the financial assumption that the controller, e.g., a central bank, can change the interest rate according to macroeconomic variables, such as the country's inflation level, the forecast of supranational interest rates, the markets' belief about the health of the financial sector under the central bank's control, etc. In fact, choosing $\alpha = 0$, the bank value grows at the rate $\mu(t)$, which is strictly greater than $\mu(t) - \alpha(t)$ for a given control $0 < \alpha(t) \le \bar{r}$. We refer to the above discussion, see also, e.g., Altavilla et al. (2015); Andrade et al. (2016); Blattner and Joyce (2016); Bowman et al. (2011); De Santis (2020); Miyao (2000); Voutsinas and Werner (2011), for more financially oriented ideas supporting the latter setting. Accordingly, we can assume that the controller aims at maximizing a functional of the following type:
$$J^{u,\alpha}(t,x) = \mathbb{E}_t\left[\int_t^{\tau\wedge T} f(X^{u,\alpha}_{t,x}(s), \alpha(s))\,ds + g_1(X^{u,\alpha}_{t,x}(T))\,\mathbb{1}_{\{\tau \ge T\}} - g_2(X^{u,\alpha}_{t,x}(\tau))\,\mathbb{1}_{\{\tau < T\}} - \sum_{t \le \tau_n \le \tau\wedge T}\left(K_n + \kappa\right)\right].$$
In what follows we assume the following density hypothesis on the random time to hold, hence requiring that the distribution of τ is absolutely continuous with respect to the Lebesgue measure:
Hypothesis 3.
For any $t \in [0,T]$, there exists a process $\{\rho_t(s)\}_{s\in[0,T]}$ such that
$$\mathbb{P}\left(\tau \le s \mid \mathcal{F}_t\right) = 1 - \rho_t(s).$$
The main idea of the following procedure is to switch from the enlarged filtration $\mathcal{G}_t$ to the default-free reference filtration $\mathcal{F}_t$ by means of the following lemma (Bielecki et al. 2009, Lemma 4.1.1).
Lemma 1.
For any $\mathcal{F}_T$-measurable random variable X, it holds that
$$\mathbb{E}\left[X\,\mathbb{1}_{\{T < \tau\}} \mid \mathcal{G}_t\right] = \mathbb{1}_{\{\tau > t\}}\,\frac{\mathbb{E}\left[X\,\mathbb{1}_{\{\tau > T\}} \mid \mathcal{F}_t\right]}{\mathbb{E}\left[\mathbb{1}_{\{\tau > t\}} \mid \mathcal{F}_t\right]} = \mathbb{1}_{\{\tau > t\}}\,e^{\Gamma_t}\,\mathbb{E}\left[X\,e^{-\Gamma_T} \mid \mathcal{F}_t\right],$$
with
$$\Gamma_t := -\ln\left(1 - \mathbb{P}\left(\tau \le t \mid \mathcal{F}_t\right)\right).$$
A typical example, which will be used in what follows, consists in considering a Cox process, hence taking ρ to be an exponential function of the form
$$\rho_t(s) := e^{-\int_t^s \beta(r)\,dr}$$
for a suitable function $\beta$. In this particular case we have that
$$\Gamma_s := -\ln\left(e^{-\int_t^s \beta(r)\,dr}\right) = \int_t^s \beta(r)\,dr,$$
so that Equation (11) reads:
$$\mathbb{E}\left[X\,\mathbb{1}_{\{T < \tau\}} \mid \mathcal{G}_s\right] = \mathbb{1}_{\{\tau > s\}}\,e^{\int_t^s \beta(r)\,dr}\,\mathbb{E}\left[X\,e^{-\int_t^T \beta(r)\,dr} \mid \mathcal{F}_s\right].$$
We can thus prove the following result.
Hypothesis 4.
Let us assume that $\tau$ is a Cox time, namely that $\rho$ is of the form
$$\rho_t(s) := e^{-\int_t^s \beta(r)\,dr},$$
with intensity given by $\beta$.
Remark 3.
Notice that we could have made a more general assumption, often referred to in the literature as the density hypothesis, requiring that there exists a process $\beta$ such that
$$\mathbb{P}\left(\tau \in ds \mid \mathcal{F}_t\right) = \beta(s)\,ds;$$
see, e.g., Bielecki et al. (2009).
Theorem 1.
Let F be a $\mathcal{G}$-adapted process and assume $\tau$ to be a Cox time defined as in Equation (12); then it holds that
$$\mathbb{E}\left[\int_t^{\tau \wedge T} F_r\,dr \;\Big|\; \mathcal{G}_t\right] = \mathbb{1}_{\{\tau > t\}} \int_t^T \mathbb{E}\left[e^{-\int_t^r \beta(s)\,ds}\,F_r \;\Big|\; \mathcal{F}_t\right] dr.$$
Proof. 
Exploiting Lemma 1 together with (12), we have that:
$$\begin{aligned}
\mathbb{E}\left[\int_t^{\tau\wedge T} F_r\,dr \,\Big|\, \mathcal{G}_t\right]
&= \int_t^T \mathbb{E}\left[\mathbb{1}_{\{\tau > r\}}\mathbb{1}_{\{\tau > t\}}\,F_r \,\big|\, \mathcal{G}_t\right] dr
= \mathbb{1}_{\{\tau > t\}} \int_t^T \mathbb{E}\left[\mathbb{1}_{\{\tau > r\}}\,F_r\,e^{\int_0^t \beta(s)\,ds} \,\big|\, \mathcal{F}_t\right] dr \\
&= \mathbb{1}_{\{\tau > t\}} \int_t^T e^{\int_0^t \beta(s)\,ds}\,\mathbb{E}\left[\mathbb{E}\left[\mathbb{1}_{\{\tau > r\}}\,F_r \,\big|\, \mathcal{F}_r\right] \,\big|\, \mathcal{F}_t\right] dr
= \mathbb{1}_{\{\tau > t\}} \int_t^T e^{\int_0^t \beta(s)\,ds}\,\mathbb{E}\left[e^{-\int_0^r \beta(s)\,ds}\,F_r \,\big|\, \mathcal{F}_t\right] dr \\
&= \mathbb{1}_{\{\tau > t\}} \int_t^T \mathbb{E}\left[e^{-\int_t^r \beta(s)\,ds}\,F_r \,\big|\, \mathcal{F}_t\right] dr,
\end{aligned}$$
and this completes the proof. □
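A quick numerical illustration of Theorem 1, in the simplest setting of a deterministic intensity beta and a deterministic integrand F (both illustrative choices, not taken from the paper), can be obtained by Monte Carlo: sampling tau through its cumulative hazard, the left-hand side of the identity at t = 0 should match the deterministic integral on the right-hand side.

import numpy as np

rng = np.random.default_rng(3)
beta = lambda r: 0.4 + 0.1 * r               # hypothetical intensity
F    = lambda r: np.cos(r) + 1.5             # hypothetical deterministic integrand
T, n_paths = 2.0, 400_000

grid = np.linspace(0.0, T, 2001)
dr = grid[1] - grid[0]
# cumulative hazard int_0^r beta(s) ds by the trapezoid rule
cum_hazard = np.concatenate(([0.0], np.cumsum(0.5 * (beta(grid[1:]) + beta(grid[:-1])) * dr)))

# Cox time: tau = inf{ r : cumulative hazard >= E }, with E ~ Exp(1)
E = rng.exponential(size=n_paths)
idx = np.searchsorted(cum_hazard, E)
tau = np.where(idx >= len(grid), np.inf, grid[np.minimum(idx, len(grid) - 1)])

# Left-hand side: E[ int_0^{tau ^ T} F(r) dr ] by Monte Carlo
F_cum = np.concatenate(([0.0], np.cumsum(0.5 * (F(grid[1:]) + F(grid[:-1])) * dr)))
lhs = np.mean(np.interp(np.minimum(tau, T), grid, F_cum))

# Right-hand side: int_0^T exp(-int_0^r beta) F(r) dr by quadrature
vals = np.exp(-cum_hazard) * F(grid)
rhs = np.sum(0.5 * (vals[1:] + vals[:-1]) * dr)

print(f"Monte Carlo LHS: {lhs:.4f}   quadrature RHS: {rhs:.4f}")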
Let us then denote the impulse control for this system by
$$u = \left(\tau_1, \tau_2, \ldots, \tau_j, \ldots;\ K_1, K_2, \ldots, K_j, \ldots\right) \in \mathcal{U},$$
where $0 \le \tau_1 \le \tau_2 \le \cdots$ are $\mathcal{G}_t$-stopping times and $K_j \in A$ is $\mathcal{G}_{\tau_j}$-measurable for all j. Then, for any $u \in \mathcal{U}$, using Hypothesis 4 together with Theorem 1, the corresponding functional in Equation (7) can be rewritten as
$$J^u(t,x) = \mathbb{E}_t\left[\int_t^T \rho_t(s)\left(f(X(s)) - \beta(s)\,g_2(X(s))\right)ds + \rho_t(T)\,g_1(X(T)) - \sum_{t \le \tau_n \le T} \rho_t(\tau_n)\left(K_n + \kappa\right)\right],$$
so that the original stochastic control problem, with random terminal time, turns out to be a stochastic control problem with deterministic terminal time.
Remark 4.
A different approach would be to consider τ to be F t -adapted, for instance of the form
$$\tau = \inf\{t : x(t) \le 0\},$$
which implies that the hypothesis (10) is no longer satisfied and, consequently, the above-mentioned techniques cannot be exploited any longer. Nevertheless, under this setting it is possible to recover an HJB equation endowed with suitable boundary conditions. We refer to Fleming and Soner (2006); Øksendal and Sulem (2005) for a mathematical treatment of this type of stochastic control problems, while in Lipton (2016); Merton (1974) one can find applications to the mathematical finance scenario.
For simplicity, we define the following functions:
$$c(t,s,x) = \rho_t(s)\left(f(x) - \beta(s)\,g_2(x)\right) \ \ \text{with } s \ge t, \qquad g(t,x) = \rho_t(T)\,g_1(x),$$
which will be used throughout the paper.
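Once a control is fixed, the reduced functional above can be estimated by plain Monte Carlo; the sketch below uses the shorthands c and g just introduced, with all concrete ingredients (f, g_1, g_2, beta, the dynamics, the threshold injection rule and kappa) being illustrative assumptions rather than specifications from the paper.

import numpy as np

rng = np.random.default_rng(2)

beta0, kappa = 0.3, 0.05                     # constant hazard rate and fixed injection cost
f   = lambda x: min(x, 2.0)                  # bounded running utility
g1  = lambda x: np.tanh(x)                   # terminal utility if no default occurred
g2  = lambda x: 1.0                          # penalty weight associated with default
rho = lambda t, s: np.exp(-beta0 * (s - t))  # rho_t(s) for a constant intensity
c   = lambda t, s, x: rho(t, s) * (f(x) - beta0 * g2(x))
g   = lambda t, x, T: rho(t, T) * g1(x)

def estimate_J(t=0.0, x0=1.2, T=1.0, n_steps=200, n_paths=5_000,
               barrier=0.8, target=1.1):
    """Monte Carlo estimate of the reduced functional under a threshold rule."""
    dt = (T - t) / n_steps
    total = 0.0
    for _ in range(n_paths):
        X, J = x0, 0.0
        for k in range(n_steps):
            s = t + k * dt
            J += c(t, s, X) * dt                              # running term rho_t(s)(f - beta g2)
            drift = (1.0 - X) * 0.5 / (1.0 + max(X, 0.0)) + 0.03 * X
            X += drift * dt + 0.2 * np.sqrt(dt) * rng.normal()
            if X < barrier:                                    # impulse: pay rho_t(tau_n)(K + kappa)
                K = target - X
                J -= rho(t, s) * (K + kappa)
                X += K
        total += J + g(t, X, T)                                # terminal term rho_t(T) g1(X(T))
    return total / n_paths

print(f"estimated value of the reduced functional at (0, 1.2): {estimate_J():.4f}")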

3. On the Regularity of the Value Function

The present section is devoted to proving regularity properties of the value function. In particular, the next two Lemmas prove, respectively, boundedness, Lipschitz continuity in space, and $\frac{1}{2}$-Hölder continuity in time of the value function V.
Lemma 2.
Assume that Hypotheses 1, 2 and 4 hold; then there exist constants $C_0, C_1$ such that
$$-C_0\,(1+|x|) \le V(t,x) \le C_1.$$
Proof. 
For simplicity, in what follows, for any fixed $(t,x) \in [0,T]\times\mathbb{R}$ and $u \in \mathcal{U}[t,T]$, we will denote for short $X^u_{t,x}(s)$, resp. $\xi_t(s)$, by $X(s)$, resp. $\xi(s)$. Then, by Gronwall's inequality, we have:
$$\begin{aligned}
1 + |X(s)| &\le 1 + |x| + |\xi(s)| + \left|\int_t^s \sigma(r, X(r))\,dW_r\right| + C\int_t^s (1 + |X(r)|)\,dr \\
&\le 1 + |x| + |\xi(s)| + \left|\int_t^s \sigma(r, X(r))\,dW_r\right| + C\int_t^s e^{C(s-r)}\left(1 + |x| + |\xi(r)| + \left|\int_t^r \sigma(v, X(v))\,dW_v\right|\right) dr \\
&\le C\left[1 + |x| + |\xi(s)| + \int_t^s |\xi(r)|\,dr + \left|\int_t^s \sigma(r, X(r))\,dW_r\right| + \int_t^s \left|\int_t^r \sigma(v, X(v))\,dW_v\right| dr\right],
\end{aligned}$$
thus
$$\mathbb{E}|X(s)| \le C\left\{1 + |x| + \mathbb{E}|\xi(s)| + \mathbb{E}\int_t^s |\xi(r)|\,dr + \mathbb{E}\left|\int_t^s \sigma(r, X(r))\,dW_r\right| + \mathbb{E}\int_t^s\left|\int_t^r \sigma(v, X(v))\,dW_v\right| dr\right\}.$$
On the other hand, under Hypothesis 1, we have:
$$\begin{aligned}
&\mathbb{E}\left|\int_t^s \sigma(r, X(r))\,dW_r\right| + \mathbb{E}\int_t^s\left|\int_t^r \sigma(v, X(v))\,dW_v\right| dr \\
&\quad\le \left(\mathbb{E}\left|\int_t^s \sigma(r, X(r))\,dW_r\right|^2\right)^{1/2} + (s-t)^{1/2}\left(\int_t^s \mathbb{E}\left|\int_t^r \sigma(v, X(v))\,dW_v\right|^2 dr\right)^{1/2} \\
&\quad= \left(\int_t^s \mathbb{E}|\sigma(r, X(r))|^2\,dr\right)^{1/2} + (s-t)^{1/2}\left(\int_t^s \int_t^r \mathbb{E}|\sigma(v, X(v))|^2\,dv\,dr\right)^{1/2} \\
&\quad\le \left(1 + (s-t)\right)\left(\int_t^s \mathbb{E}|\sigma(r, X(r))|^2\,dr\right)^{1/2} \le C\left((s-t)^{1/2} + \left(\int_t^s \mathbb{E}|X(r)|^2\,dr\right)^{1/2}\right) \le C\left(1 + \int_t^s \mathbb{E}|X(r)|\,dr\right),
\end{aligned}$$
where we have exploited both Jensen's and Hölder's inequalities several times. Hence it follows that
$$\mathbb{E}|X(s)| \le C\left(1 + |x| + \mathbb{E}|\xi(s)| + \mathbb{E}\int_t^s |\xi(r)|\,dr + \int_t^s \mathbb{E}|X(r)|\,dr\right) \le C\left(1 + |x| + \mathbb{E}|\xi(s)| + \mathbb{E}\int_t^s |\xi(r)|\,dr\right).$$
Again, under Hypotheses 1 and 2, we obtain
$$|J^u(t,x)| \le \int_t^T C\left(1 + \mathbb{E}|X(s)|\right)ds + C\left(1 + \mathbb{E}|X(T)|\right) + \mathbb{E}\sum_{t \le \tau_n \le T}\rho_t(\tau_n)\left(K_n + \kappa\right) \le C\left(1 + |x| + \mathbb{E}|\xi(T)| + \mathbb{E}\int_t^T |\xi(r)|\,dr\right).$$
For the trivial control $u_0 \equiv \xi_t(\cdot) \equiv 0$, one has that
$$V(t,x) \ge J^{u_0}(t,x) \ge -C_0\,(1+|x|) \qquad \text{for all } (t,x) \in [0,T]\times\mathbb{R},$$
which proves the lower bound of the value function.
The boundedness of $c(t,s,x)$ and $g(t,x)$ immediately gives us that the value function is bounded from above, i.e., there exists $C_1 > 0$ such that
$$V(t,x) \le C_1.$$
 □
Lemma 3.
Assume Hypotheses 1, 2 and 4 hold; then the value function $V(t,x)$ is Lipschitz continuous in x and $\frac{1}{2}$-Hölder continuous in t, namely there exists a constant $C > 0$ such that, for all $t_1, t_2 \in [0,T)$ and $x_1, x_2 \in \mathbb{R}$,
$$|V(t_1,x_1) - V(t_2,x_2)| \le C\left(|x_1 - x_2| + (1+|x_1|+|x_2|)\,|t_1-t_2|^{\frac{1}{2}}\right).$$
Proof. 
Again, for simplicity, for any admissible control $u \in \mathcal{U}[t,T]$ we denote for short $X^u_{t,x_1}$, resp. $X^u_{t,x_2}$, by $X_{t,x_1}$, resp. $X_{t,x_2}$, dropping the explicit dependence on the control u. Notice that, applying the Itô–Döblin formula to $|X_{t,x_1}(s) - X_{t,x_2}(s)|^2$ and using Gronwall's lemma, we can infer that
$$\mathbb{E}\left|X_{t,x_1}(s) - X_{t,x_2}(s)\right| \le C\,|x_1 - x_2|, \qquad \forall s \in [t,T],\ x_1, x_2 \in \mathbb{R}.$$
Therefore, by Hypotheses 1 and 2, for any fixed $t \in [0,T)$, all $x_1, x_2 \in \mathbb{R}$ and $u \in \mathcal{U}[t,T]$,
$$\begin{aligned}
|J^u(t,x_1) - J^u(t,x_2)| &\le \mathbb{E}\int_t^T \left|c(t,s,X_{t,x_1}(s)) - c(t,s,X_{t,x_2}(s))\right| ds + \mathbb{E}\left|g(t,X_{t,x_1}(T)) - g(t,X_{t,x_2}(T))\right| \\
&\le L\,\mathbb{E}\int_t^T \left|X_{t,x_1}(s) - X_{t,x_2}(s)\right| ds + C\,\mathbb{E}\left|X_{t,x_1}(T) - X_{t,x_2}(T)\right| \le C\,|x_1 - x_2|,
\end{aligned}$$
which implies that, choosing u to be an $\varepsilon$-optimal control for $(t,x_1)$,
$$V(t,x_1) \le J^u(t,x_1) + \varepsilon \le J^u(t,x_2) + C\,|x_1 - x_2| + \varepsilon,$$
and thus, $\varepsilon$ being arbitrary,
$$V(t,x_1) \le V(t,x_2) + C\,|x_1 - x_2|.$$
By interchanging $x_1$ and $x_2$, we get
$$V(t,x_1) \ge V(t,x_2) - C\,|x_1 - x_2|.$$
For the time regularity, first we show that
$$\mathbb{E}\left|X_{t,x}(s) - x - \xi_t(s)\right| \le C\left((1+|x|)\,(s-t)^{\frac{1}{2}} + \mathbb{E}\int_t^s |\xi_t(r)|\,dr\right).$$
For notational simplicity, we suppress the subscripts t,x of $X_{t,x}$, $\xi_t$ and define
$$z(s) = X(s) - x - \xi(s).$$
Then, by Hypotheses 1 and 2, we have
$$|z(s)| \le C\int_t^s \left(1+|X(r)|\right)dr + \left|\int_t^s \sigma(r,X(r))\,dW_r\right| \le C\int_t^s \left(1+|x|+|z(r)|+|\xi(r)|\right)dr + \left|\int_t^s \sigma(r,X(r))\,dW_r\right|.$$
By Gronwall’s inequality, we achieve
$$|z(s)| \le C\left[(1+|x|)(s-t) + \int_t^s |\xi(r)|\,dr + \left|\int_t^s \sigma(r,X(r))\,dW_r\right| + \int_t^s \left|\int_t^r \sigma(v,X(v))\,dW_v\right| dr\right].$$
Using (16) and again Gronwall’s inequality, we further get
$$\begin{aligned}
\mathbb{E}|z(s)| &\le C\left[(1+|x|)(s-t) + \int_t^s \mathbb{E}|\xi(r)|\,dr + (s-t)^{\frac{1}{2}} + \int_t^s \mathbb{E}|X(r)|\,dr\right] \\
&\le C\left[(1+|x|)(s-t) + \int_t^s \mathbb{E}|\xi(r)|\,dr + (s-t)^{\frac{1}{2}} + \int_t^s \mathbb{E}\left(1+|x|+|\xi(r)|+|z(r)|\right)dr\right] \\
&\le C\left[(1+|x|)(s-t)^{\frac{1}{2}} + \int_t^s \mathbb{E}|\xi(r)|\,dr\right],
\end{aligned}$$
which proves (22).
For all $p \in [0,\infty)$, define the control space
$$\mathcal{U}_p[t,T] = \left\{ u \in \mathcal{U}[t,T] \;\Big|\; \mathbb{E}\sum_{t \le r_i < T}\left(\rho_t(r_i)\,K_i + \kappa\right) \le 2\,C_0(1+p) + C_1 \right\},$$
where $C_0$ and $C_1$ are the constants in (18) and (17). Notice that another important corollary of (22) is that, for all $u \in \mathcal{U}_{|x|}[t,T]$,
$$\mathbb{E}\left|X_{t,x}(s) - x - \xi_t(s)\right| \le C\,(1+|x|)\,(s-t)^{\frac{1}{2}}, \qquad \forall\, t \le s \le T.$$
We claim that for all $|x| \le p$ the value function $V(t,x)$ satisfies
$$V(t,x) = \sup_{u \in \mathcal{U}_p[t,T]} J^u(t,x).$$
This is due to the fact that, for any $u \in \mathcal{U}[t,T]\setminus\mathcal{U}_p[t,T]$,
$$J^u(t,x) \le C_1 - \mathbb{E}\sum_{t \le r_i < T}\left(\rho_t(r_i)\,K_i + \kappa\right) \le C_1 - 2\,C_0(1+p) - C_1 < -C_0(1+p) \le V(t,x).$$
Fix $x \in \mathbb{R}$ and $0 \le t_1 < t_2 < T$. For any $u_2 \in \mathcal{U}_{|x|}[t_2,T)$, extend the control to $[t_1,T)$ by setting
$$\tilde{\xi}_{t_1}(s) = 0 \quad \forall s \in [t_1,t_2), \qquad \tilde{\xi}_{t_1}(s) = \xi_{t_2}(s) \quad \forall s \in [t_2,T),$$
and call $\tilde{u}_1 \equiv \tilde{\xi}_{t_1}(\cdot) \in \mathcal{U}[t_1,T]$. Then we have:
$$\begin{aligned}
V(t_1,x) \ge J^{\tilde{u}_1}(t_1,x) &= J^{u_2}(t_2,x) + \mathbb{E}\int_{t_1}^{t_2} c(t_1,s,X_{t_1,x}(s))\,ds + \mathbb{E}\int_{t_2}^{T}\left[c(t_1,s,X_{t_1,x}(s)) - c(t_2,s,X_{t_2,x}(s))\right]ds \\
&\quad + \mathbb{E}\left[g(t_1,X_{t_1,x}(T)) - g(t_2,X_{t_2,x}(T))\right] \\
&\ge J^{u_2}(t_2,x) - C(1+|x|)\,|t_1-t_2| - C(1+|x|)\,|t_1-t_2|^{\frac{1}{2}} \ge J^{u_2}(t_2,x) - C(1+|x|)\,|t_1-t_2|^{\frac{1}{2}},
\end{aligned}$$
where X t 1 , x ( s ) , resp. X t 2 , x ( s ) represents X t 1 , x u ˜ 1 , resp X t 2 , x u 2 and the second-to-last row in (24) is achieved by exploiting (22). Thus we obtain that
$$V(t_1,x) \ge V(t_2,x) - C(1+|x|)\,|t_1-t_2|^{\frac{1}{2}}.$$
On the other hand, for any $\varepsilon > 0$, there exists $u_1 \in \mathcal{U}_{|x|}[t_1,T)$ such that
$$-\varepsilon + V(t_1,x) \le J^{u_1}(t_1,x).$$
Then we define the impulse controls u ^ 2 , u ¯ 2 U [ t 2 , T ) by
$$\hat{\xi}_{t_2}(s) = \xi_{t_1}(s), \quad s \ge t_2, \qquad\qquad \bar{\xi}_{t_2}(s) = \xi_{t_1}(s) - \xi_{t_1}(t_2), \quad s \ge t_2.$$
Notice that $\hat{u}_2$ is the impulse control such that at the initial time $t_2$ there is an impulse of size $\xi_{t_1}(t_2)$, while $\bar{u}_2$ is the impulse control mimicking all the impulses of $\xi_{t_1}(\cdot)$ on $[t_2,T)$. Denoting $\bar{x} = x + \xi_{t_1}(t_2)$, which is $\mathcal{F}_{t_2}$-measurable, we have that
$$J^{\hat{u}_2}(t_2,x) = J^{\bar{u}_2}(t_2,\bar{x}) - \left(\xi_{t_1}(t_2) + \kappa\right),$$
and thus
$$\begin{aligned}
-\varepsilon + V(t_1,x) &\le J^{\hat{u}_2}(t_2,x) + \mathbb{E}\int_{t_2}^{T}\left(c(t_1,s,X_{t_1,x}(s)) - c(t_2,s,X_{t_2,\bar{x}}(s))\right)ds + \mathbb{E}\left[g(t_1,X_{t_1,x}(T)) - g(t_2,X_{t_2,\bar{x}}(T))\right] \\
&\quad + \sum_{t_1 \le r_i < t_2}\left(1 - \rho_{t_1}(r_i)\right)(K_i + \kappa) + \sum_{r_i \ge t_2}\left(\rho_{t_2}(r_i) - \rho_{t_1}(r_i)\right)(K_i + \kappa) \\
&\le V(t_2,x) + C(1+|x|)(t_2-t_1) + C\,\mathbb{E}\left|X_{t_1,x}(T) - X_{t_2,\bar{x}}(T)\right| + C\int_{t_2}^{T} \mathbb{E}\left|X_{t_1,x}(s) - X_{t_2,\bar{x}}(s)\right| ds \\
&\le V(t_2,x) + C(1+|x|)(t_2-t_1)^{\frac{1}{2}},
\end{aligned}$$
where $X_{t_1,x}$, resp. $X_{t_2,\bar{x}}$, represents $X^{u_1}_{t_1,x}$, resp. $X^{\bar{u}_2}_{t_2,\bar{x}}$. Notice that in (25) we extensively use the following inequality:
$$\mathbb{E}\left|X_{t_1,x}(s) - X_{t_2,\bar{x}}(s)\right| \le C\,\mathbb{E}\left|X_{t_1,x}(t_2) - X_{t_2,\bar{x}}(t_2)\right| = C\,\mathbb{E}\left|X_{t_1,x}(t_2) - x - \xi_{t_1}(t_2)\right| \le C\,(1+|x|)\,(t_2-t_1)^{\frac{1}{2}}$$
for all $s \ge t_2$ and $u_1 \in \mathcal{U}_{|x|}[t_1,T)$, where the last inequality is achieved by (23).
Since (25) holds for all ε > 0 , we obtain
$$V(t_1,x) \le V(t_2,x) + C(1+|x|)(t_2-t_1)^{\frac{1}{2}}.$$
Combining this with (24), we finally get the $\frac{1}{2}$-Hölder continuity in time, i.e.,
$$|V(t_1,x) - V(t_2,x)| \le C\,(1+|x|)\,|t_1-t_2|^{\frac{1}{2}}.$$
 □
In order to make the treatment as self-contained as possible, we end the current section proving the following dynamic programming principle (DPP).
Theorem 2
(Dynamic programming principle). Under Hypotheses 1, 2 and 4, for any ( t , x ) [ 0 , T ] × R , the following holds:
$$V(t,x) = \sup_{u \in \mathcal{U}[t,T]} \mathbb{E}\left[\int_t^{\theta} \rho_t(s)\left(f(X(s)) - \beta(s)\,g_2(X(s))\right)ds - \sum_{t \le \tau_n \le \theta} \rho_t(\tau_n)\left(K_n + \kappa\right) + \rho_t(\theta)\,V\left(\theta, X^u_{t,x}(\theta)\right)\right],$$
for any stopping time θ valued in [ t , T ] .
Proof. 
For any $\epsilon > 0$, there exists $\bar{u} \in \mathcal{U}[t,T]$ such that
$$\sup_{u \in \mathcal{U}[t,T]} \mathbb{E}\left[\int_t^{\theta} c(t,s,X^{u}_{t,x}(s))\,ds - \sum_{t \le \tau_n \le \theta}\rho_t(\tau_n)(K_n+\kappa) + \rho_t(\theta)\,V(\theta, X^{u}_{t,x}(\theta))\right] \le \mathbb{E}\left[\int_t^{\theta} c(t,s,X^{\bar{u}}_{t,x}(s))\,ds - \sum_{t \le \bar{\tau}_n \le \theta}\rho_t(\bar{\tau}_n)(\bar{K}_n+\kappa) + \rho_t(\theta)\,V(\theta, X^{\bar{u}}_{t,x}(\theta))\right] + \epsilon.$$
Using the regularity of the value function proven above, and in particular Lemma 3, it follows that another strategy $\tilde{u} \in \mathcal{U}[t,T]$ exists such that
$$\mathbb{E}\left[V(\theta, X^{\bar{u}}_{t,x}(\theta))\right] \le \mathbb{E}\left[J^{\tilde{u}}(\theta, X^{\bar{u}}_{t,x}(\theta))\right] + \epsilon.$$
Therefore, denoting by $u_\epsilon$ the control obtained by concatenating $\bar{u}$ on $[t,\theta)$ with $\tilde{u}$ on $[\theta,T]$, from Equation (26) we obtain
$$\mathbb{E}\left[\int_t^{\theta} c(t,s,X^{\bar{u}}_{t,x}(s))\,ds - \sum_{t \le \tau_n \le \theta}\rho_t(\tau_n)(K_n+\kappa) + \rho_t(\theta)\,V(\theta, X^{\bar{u}}_{t,x}(\theta))\right] \le \mathbb{E}\left[\int_t^{\theta} c(t,s,X^{\bar{u}}_{t,x}(s))\,ds - \sum_{t \le \tau_n \le \theta}\rho_t(\tau_n)(K_n+\kappa) + \rho_t(\theta)\,J^{\tilde{u}}(\theta, X^{\bar{u}}_{t,x}(\theta))\right] + \epsilon = J^{u_\epsilon}(t,x) + \epsilon \le V(t,x) + \epsilon.$$
Since $\epsilon$ is arbitrary, we thus have the following inequality:
$$V(t,x) \ge \sup_{u \in \mathcal{U}[t,T]} \mathbb{E}\left[\int_t^{\theta} \rho_t(s)\left(f(X(s)) - \beta(s)\,g_2(X(s))\right)ds - \sum_{t \le \tau_n \le \theta}\rho_t(\tau_n)(K_n+\kappa) + \rho_t(\theta)\,V(\theta, X^{u}_{t,x}(\theta))\right].$$
A completely analogous argument shows the reverse inequality, hence proving the claim. □
Remark 5.
It must be stressed how Hypothesis 4 on τ being a Cox process is essential, since without it the problem might be time-inconsistent.

4. Viscosity Solution to the Hamilton–Jacobi–Bellman Equation

An application of the dynamic programming principle (Theorem 2) (see, e.g., Øksendal and Sulem (2005); Pham (2009)) leads to the following quasi-variational inequality (QVI):
$$\begin{cases} \min\left\{ -\partial_t V(t,x) - \mathcal{L}V(t,x) - f(x) + \beta(t)\left(V(t,x) + g_2(x)\right),\; V(t,x) - \mathcal{I}V(t,x) \right\} = 0, & \text{on } [0,T)\times\mathbb{R}, \\ V(T,x) = g_1(x), & \text{on } \{T\}\times\mathbb{R}, \end{cases}$$
with I being the non-local impulse operator defined as
$$\mathcal{I}V(t,x) := \sup_{K \in A(t,x)}\left\{ V(t, x+K) - (K + \kappa) \right\}.$$
We underline that the problem (27) identifies two distinct regions: the continuation region
$$\mathcal{C} = \left\{ (t,x) \in [0,T)\times\mathbb{R} : V(t,x) > \mathcal{I}V(t,x) \right\},$$
and the impulse region or action region
$$\mathcal{A} = \left\{ (t,x) \in [0,T)\times\mathbb{R} : V(t,x) = \mathcal{I}V(t,x) \right\}.$$
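Although the paper is purely analytical, the QVI (27) lends itself to a simple numerical treatment that also illustrates the two regions above: one alternates an explicit finite-difference step of the linear parabolic part with a projection onto the impulse obstacle $\mathcal{I}V$. The sketch below does exactly this on a truncated grid; every coefficient, the grid truncation and the restriction of K to grid values are illustrative assumptions, not prescriptions from the paper.

import numpy as np

beta  = lambda t: 0.3
f     = lambda x: np.minimum(x, 2.0)
g1    = lambda x: np.tanh(x)
g2    = lambda x: np.ones_like(x)
mu    = lambda t, x: (1.0 - x) * 0.5 / (1.0 + np.maximum(x, 0.0)) + 0.03 * x
sigma = lambda t, x: 0.2 * np.ones_like(x)
kappa = 0.05

T, x_min, x_max, nx, nt = 1.0, -2.0, 4.0, 301, 800
x  = np.linspace(x_min, x_max, nx)
dx = x[1] - x[0]
dt = T / nt

def impulse_obstacle(V):
    """I V(x_i) = max over grid shifts j > i of V(x_j) - (x_j - x_i) - kappa."""
    best = np.full_like(V, -np.inf)
    for shift in range(1, nx):
        cand = V[shift:] - (x[shift:] - x[:-shift]) - kappa
        best[:-shift] = np.maximum(best[:-shift], cand)
    return best

V = g1(x)                                        # terminal condition V(T, x) = g1(x)
for n in range(nt):
    t = T - (n + 1) * dt
    Vx  = np.gradient(V, dx)
    Vxx = np.zeros_like(V)
    Vxx[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
    LV = mu(t, x) * Vx + 0.5 * sigma(t, x) ** 2 * Vxx
    # explicit backward step of  dV/dt + L V + f - beta (V + g2) = 0
    V_cont = V + dt * (LV + f(x) - beta(t) * (V + g2(x)))
    V = np.maximum(V_cont, impulse_obstacle(V_cont))   # enforce V >= I V
print("V(0, x) on a few grid points:", np.round(V[::75], 3))

Grid points where the maximum is attained by the obstacle approximate the action region, while the remaining points approximate the continuation region.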
The following is the definition of the viscosity solution to the QVI (see Equation (27)) within the general setting (possibly not continuous).
Definition 3.
A function $V : [0,T]\times\mathbb{R} \to \mathbb{R}$ is said to be a viscosity solution to the QVI (27) if the following properties hold:
(i)
(viscosity supersolution) a function $V : [0,T]\times\mathbb{R} \to \mathbb{R}$ is said to be a viscosity supersolution to the QVI (27) if, for all $(\hat{t},\hat{x}) \in [0,T]\times\mathbb{R}$ and $\phi \in C^{1,2}([0,T]\times\mathbb{R})$ with
$$0 = (V - \phi)(\hat{t},\hat{x}) = \min_{(t,x)\in[0,T)\times\mathbb{R}} (V - \phi),$$
it holds that
$$\begin{cases} \min\left\{ -\partial_t \phi(t,x) - \mathcal{L}\phi(t,x) - f(x) + \beta(t)\left(\phi(t,x) + g_2(x)\right),\; V(t,x) - \mathcal{I}V(t,x) \right\} \ge 0, & \text{on } [0,T)\times\mathbb{R}, \\ \min\left\{ V(T,x) - g_1(x),\; V(T,x) - \mathcal{I}V(T,x) \right\} \ge 0, & \text{on } \{T\}\times\mathbb{R}; \end{cases}$$
(ii)
(viscosity subsolution) a function $V : [0,T]\times\mathbb{R} \to \mathbb{R}$ is said to be a viscosity subsolution to the QVI (27) if, for all $(\hat{t},\hat{x}) \in [0,T]\times\mathbb{R}$ and $\phi \in C^{1,2}([0,T]\times\mathbb{R})$ with
$$0 = (V - \phi)(\hat{t},\hat{x}) = \max_{(t,x)\in[0,T)\times\mathbb{R}} (V - \phi),$$
it holds that
$$\begin{cases} \min\left\{ -\partial_t \phi(t,x) - \mathcal{L}\phi(t,x) - f(x) + \beta(t)\left(\phi(t,x) + g_2(x)\right),\; V(t,x) - \mathcal{I}V(t,x) \right\} \le 0, & \text{on } [0,T)\times\mathbb{R}, \\ \min\left\{ V(T,x) - g_1(x),\; V(T,x) - \mathcal{I}V(T,x) \right\} \le 0, & \text{on } \{T\}\times\mathbb{R}; \end{cases}$$
(iii)
(viscosity solution) a function $V : [0,T]\times\mathbb{R} \to \mathbb{R}$ is said to be a viscosity solution to the QVI (27) if it is both a viscosity supersolution and a viscosity subsolution.
In order to prove that the value function V is the viscosity solution to Equation (27), we first need the following.
Lemma 4.
Assume that Hypotheses 1–4 hold; then we have
$$V(t,x) \ge \mathcal{I}V(t,x)$$
for all $t \in [0,T)$, $x \in \mathbb{R}$.
Proof. 
Reasoning by contradiction, we first suppose that there exists $(t,x) \in S := [0,T)\times[0,+\infty)$ such that
$$V(t,x) < \mathcal{I}V(t,x),$$
i.e.,
$$V(t,x) < \sup_{K \in A}\left\{ V(t,x+K) - (K+\kappa) \right\};$$
then there exist also $\epsilon > 0$ and $\hat{K} \in A$ such that
$$V(t,x) < V(t,x+\hat{K}) - (\hat{K}+\kappa) - 2\epsilon.$$
On the other hand, according to Equation (6), there exists $u \in \mathcal{U}[t,T]$ such that
$$J^u(t,x+\hat{K}) > V(t,x+\hat{K}) - \epsilon.$$
Defining now $\hat{u} = \hat{\xi}_t(\cdot) \equiv \hat{K} + \xi_t(\cdot)$, we have
$$V(t,x) \ge J^{\hat{u}}(t,x) = J^u(t,x+\hat{K}) - (\hat{K}+\kappa).$$
Combining all of the estimates above, we have
$$V(t,x+\hat{K}) - (\hat{K}+\kappa) - 2\epsilon > V(t,x) > V(t,x+\hat{K}) - (\hat{K}+\kappa) - \epsilon,$$
from which we have the desired contradiction. □
Remark 6.
Lemma 4 implies that we are considering $\mathcal{I}V(t,x)$ as a lower obstacle, which is given in implicit form, since it depends on the value function V itself.
Theorem 3.
The value function $V(t,x)$ is a viscosity solution to the QVI (27) on $[0,T]\times\mathbb{R}$, in the sense of Definition 3.
Proof. 
Lemma 3 implies that the value function is continuous. Therefore the lower-semicontinuous, resp. upper-semicontinuous, envelope of V in Definition 3 does in fact coincide with V.
Let us first prove that $V(t,x)$ is a viscosity sub-solution of (27). By Lemma 4, we know that $V(t,x) \ge \mathcal{I}V(t,x)$, so that in what follows we only need to show that, given $(t_0,x_0) \in [0,T)\times\mathbb{R}$ such that
$$V(t_0,x_0) > \mathcal{I}V(t_0,x_0),$$
for every $\phi(t,x) \in C^{1,2}([0,T]\times[0,+\infty))$ such that $\phi \ge V$ for all $(t,x) \in S \cap B_r((t_0,x_0))$ and $V(t_0,x_0) = \phi(t_0,x_0)$, it holds that
$$-\partial_t \phi(t_0,x_0) - \mathcal{L}\phi(t_0,x_0) - f(x_0) + \beta(t_0)\left(\phi(t_0,x_0) + g_2(x_0)\right) \le 0.$$
In fact, if $V(t_0,x_0) = \mathcal{I}V(t_0,x_0)$, then the subsolution inequality immediately follows.
Choose $\epsilon > 0$ and let $u = (\tau_1, \tau_2, \ldots; K_1, K_2, \ldots) \in \mathcal{U}[t_0,T]$ be an $\epsilon$-optimal control, i.e.,
$$V(t_0,x_0) < J^u(t_0,x_0) + \epsilon.$$
Since $\tau_1$ is a stopping time, $\{\omega : \tau_1(\omega) = t_0\}$ is $\mathcal{F}_{t_0}$-measurable, and thus
$$\text{either } \tau_1(\omega) = t_0 \ \text{a.s.} \quad \text{or} \quad \tau_1(\omega) > t_0 \ \text{a.s.}$$
If $\tau_1 = t_0$ a.s., $X^u_{t_0,x_0}$ takes an immediate jump from $x_0$ to the point $x_0 + K_1$, and we have $J^u(t_0,x_0) = J^{u'}(t_0,x_0+K_1) - (K_1+\kappa)$, where $u' = (\tau_2, \tau_3, \ldots; K_2, K_3, \ldots) \in \mathcal{U}[t_0,T]$. This implies that
$$V(t_0,x_0) \le J^{u'}(t_0,x_0+K_1) - (K_1+\kappa) + \epsilon \le V(t_0,x_0+K_1) - (K_1+\kappa) + \epsilon \le \mathcal{I}V(t_0,x_0) + \epsilon,$$
which is a contradiction for $\epsilon < V(t_0,x_0) - \mathcal{I}V(t_0,x_0)$. Thus (28) implies that $\tau_1 > t_0$ a.s. for all $\epsilon$-optimal controls such that $\epsilon < V(t_0,x_0) - \mathcal{I}V(t_0,x_0)$. For any impulse control $u = (\tau_1, \tau_2, \ldots; K_1, K_2, \ldots) \in \mathcal{U}[t_0,T]$, define
$$\hat{\tau} \equiv \tau_1 \wedge (t_0 + r) \wedge \inf\left\{ t \mid t > t_0,\ |X(t) - x_0| \ge r \right\}.$$
By the dynamic programming principle, for any $\epsilon > 0$ there exists a control u such that
$$V(t_0,x_0) \le \mathbb{E}_{t_0,x_0}\left[\int_{t_0}^{\hat{\tau}} \rho_{t_0}(s)\left(f(X(s)) - \beta(s)\,g_2(X(s))\right)ds + e^{-\int_{t_0}^{\hat{\tau}}\beta(s)\,ds}\,V(\hat{\tau}, X(\hat{\tau}))\right] + \epsilon.$$
By (30) and the Dynkin formula we have that
$$\begin{aligned}
V(t_0,x_0) &\le \mathbb{E}_{t_0,x_0}\left[\int_{t_0}^{\hat{\tau}} \rho_{t_0}(s)\left(f(X(s)) - \beta(s)\,g_2(X(s))\right)ds + e^{-\int_{t_0}^{\hat{\tau}}\beta(s)\,ds}\,\phi(\hat{\tau}, X(\hat{\tau}))\right] + \epsilon \\
&= \mathbb{E}_{t_0,x_0}\left[\int_{t_0}^{\hat{\tau}} \rho_{t_0}(s)\left(f(X(s)) - \beta(s)\,g_2(X(s)) - \beta(s)\,\phi(s,X(s))\right)ds\right] + \mathbb{E}_{t_0,x_0}\left[\int_{t_0}^{\hat{\tau}} \rho_{t_0}(s)\left(\partial_t\phi(s,X(s)) + \mathcal{L}\phi(s,X(s))\right)ds\right] + \phi(t_0,x_0) + \epsilon.
\end{aligned}$$
Using $V(t_0,x_0) = \phi(t_0,x_0)$, we further obtain
$$\mathbb{E}_{t_0,x_0}\left[\int_{t_0}^{\hat{\tau}} \rho_{t_0}(s)\left(f(X(s)) - \beta(s)\,g_2(X(s)) - \beta(s)\,\phi(s,X(s)) + \partial_t\phi(s,X(s)) + \mathcal{L}\phi(s,X(s))\right)ds\right] \ge -\epsilon.$$
Dividing both sides of (32) by $\mathbb{E}(\hat{\tau} - t_0)$ and letting $r \to 0$, we further get that
$$f(x_0) - \beta(t_0)\left(g_2(x_0) + \phi(t_0,x_0)\right) + \partial_t\phi(t_0,x_0) + \mathcal{L}\phi(t_0,x_0) \ge -\epsilon.$$
Since $\epsilon > 0$ is arbitrary, we finally get the desired inequality
$$-f(x_0) + \beta(t_0)\left(\phi(t_0,x_0) + g_2(x_0)\right) - \partial_t\phi(t_0,x_0) - \mathcal{L}\phi(t_0,x_0) \le 0,$$
hence $V(t,x)$ is a viscosity sub-solution.
To prove that $V(t,x)$ is also a viscosity super-solution of (27), let us consider $\phi \in C^{1,2}(S)$ and any $(t_0,x_0) \in S$ such that $\phi \le V$ on $B_r(t_0,x_0)$ and $\phi(t_0,x_0) = V(t_0,x_0)$. Taking the trivial control $u_0 \equiv 0$ (no interventions), calling the corresponding trajectory $X(t) = X^{u_0}(t)$ with $X(t_0) = x_0$, and defining $\hat{\tau} = (t_0+r) \wedge \inf\{t \mid t > t_0,\ |X(t) - x_0| > r\}$, then, by the dynamic programming principle and the Dynkin formula, we have:
$$\begin{aligned}
V(t_0,x_0) &\ge \mathbb{E}_{t_0,x_0}\left[\int_{t_0}^{\hat{\tau}} \rho_{t_0}(s)\left(f(X(s)) - \beta(s)\,g_2(X(s))\right)ds + e^{-\int_{t_0}^{\hat{\tau}}\beta(s)\,ds}\,\phi(\hat{\tau}, X(\hat{\tau}))\right] \\
&= \mathbb{E}_{t_0,x_0}\left[\int_{t_0}^{\hat{\tau}} \rho_{t_0}(s)\left(f(X(s)) - \beta(s)\,g_2(X(s)) - \beta(s)\,\phi(s,X(s))\right)ds\right] + \mathbb{E}_{t_0,x_0}\left[\int_{t_0}^{\hat{\tau}} \rho_{t_0}(s)\left(\partial_t\phi(s,X(s)) + \mathcal{L}\phi(s,X(s))\right)ds\right] + \phi(t_0,x_0).
\end{aligned}$$
Using $V(t_0,x_0) = \phi(t_0,x_0)$, we obtain that
$$\mathbb{E}_{t_0,x_0}\left[\int_{t_0}^{\hat{\tau}} \rho_{t_0}(s)\left(f(X(s)) - \beta(s)\,g_2(X(s)) - \beta(s)\,\phi(s,X(s)) + \partial_t\phi(s,X(s)) + \mathcal{L}\phi(s,X(s))\right)ds\right] \le 0.$$
Dividing both sides of (34) by $\mathbb{E}[\hat{\tau} - t_0]$ and letting $r \to 0$, we obtain
$$-f(x_0) + \beta(t_0)\left(\phi(t_0,x_0) + g_2(x_0)\right) - \partial_t\phi(t_0,x_0) - \mathcal{L}\phi(t_0,x_0) \ge 0.$$
Since we have already proven that $V(t,x) \ge \mathcal{I}V(t,x)$, we finally conclude that
$$\min\left\{ -\partial_t\phi(t_0,x_0) - \mathcal{L}\phi(t_0,x_0) - f(x_0) + \beta(t_0)\left(\phi(t_0,x_0) + g_2(x_0)\right),\; V(t_0,x_0) - \mathcal{I}V(t_0,x_0) \right\} \ge 0.$$
Combining (29) and (36), we have that $V(t,x)$ is a viscosity solution of (27). It is worth mentioning that the terminal condition is nontrivial. In fact, it has to take into account that right before the horizon time T the controller might act with an impulse control. To this end we have to specify that the terminal condition in Equation (27) is to be intended as
$$V(T,x) := \lim_{(s,y)\to(T,x)} V(s,y).$$
Since Lemma 4 implies that $V(t,x) \ge \mathcal{I}V(t,x)$ for all $(t,x) \in [0,T)\times\mathbb{R}$, in the limit one has $V(T,x) \ge \mathcal{I}V(T,x)$ for all $x \in \mathbb{R}$. To show the boundary condition
$$\min\left\{ V(T,x) - g_1(x),\; V(T,x) - \mathcal{I}V(T,x) \right\} = 0,$$
one first considers all the $x \in \mathbb{R}$ such that $V(T,x) > \mathcal{I}V(T,x)$. For any sequence $(t_n,x_n) \to (T,x)$ with $(t_n,x_n) \in [0,T)\times\mathbb{R}$, by continuity one has $V(t_n,x_n) > \mathcal{I}V(t_n,x_n)$ for all n large enough. Then, for each sufficiently small $\varepsilon > 0$, consider controls $u_n \in \mathcal{U}[t_n,T]$ such that
$$V(t_n,x_n) \le J^{u_n}(t_n,x_n) + \varepsilon.$$
It then suffices to show that
$$\mathbb{E}\left[\int_{t_n}^{T} \rho_{t_n}(s)\left(f(X^{u_n}_{t_n,x_n}(s)) - \beta(s)\,g_2(X^{u_n}_{t_n,x_n}(s))\right)ds\right] + \mathbb{E}\left[\rho_{t_n}(T)\,g_1(X^{u_n}_{t_n,x_n}(T)) - \sum_{t_n \le \tau_j \le T}\rho_{t_n}(\tau_j)\left(K_j+\kappa\right)\right] \to g_1(x)$$
as $n \to +\infty$. Notice that, since $V(t,x) > \mathcal{I}V(t,x)$ for all $(t,x)$ in a neighborhood of $(T,x)$, for all $\delta > 0$ small enough one has
$$\mathbb{P}\left(\sup_{s\in[t_n,T]} \left|X^{u_n}_{t_n,x_n}(s) - x_n\right| < \delta\right) \to 1 \quad \text{as } n \to \infty.$$
Suppose that there exists $F \in L^1(\mathbb{P};\mathbb{R})$ such that
$$\int_{t_n}^{T} \rho_{t_n}(s)\left(|f(X^{u_n}_{t_n,x_n}(s))| + \beta(s)\,|g_2(X^{u_n}_{t_n,x_n}(s))|\right)ds + \rho_{t_n}(T)\,|g_1(X^{u_n}_{t_n,x_n}(T))| + \left|\sum_{t_n \le \tau_j \le T}\rho_{t_n}(\tau_j)\left(K_j+\kappa\right)\right| \le F$$
for all n large enough; then an application of the dominated convergence theorem proves (39). Thus we conclude that, for any $(T,x)$ such that $V(T,x) > \mathcal{I}V(T,x)$, one has
$$V(T,x) \le g_1(x) + \varepsilon \quad \forall \varepsilon > 0 \;\Longrightarrow\; V(T,x) \le g_1(x).$$
By a similar approach, one can show that $V(T,x) \ge g_1(x)$ for all $x \in \mathbb{R}$. This completes the proof of (38). □

On the Uniqueness of the Viscosity Solution

We will now show that the value function is the unique viscosity solution to Equation (27), based on a comparison principle. In order to do that, let us introduce a different definition of viscosity solution (see, e.g., Ishii (1990)) based on the notion of jets.
Definition 4.
Let $V : [0,T]\times\mathbb{R} \to \mathbb{R}$ be an upper-semicontinuous function; then we define
$$\begin{aligned}
\mathcal{P}^{2,+}V(s,x) &= \Big\{ (p,q,M) \in \mathbb{R}\times\mathbb{R}\times\mathbb{R} : V(t,y) \le V(s,x) + p\,(t-s) + q\,(y-x) + \tfrac{1}{2}M\,(y-x)^2 + o\big(|t-s| + |y-x|^2\big) \Big\}, \\
\bar{\mathcal{P}}^{2,+}V(s,x) &= \Big\{ (p,q,M) \in \mathbb{R}\times\mathbb{R}\times\mathbb{R} : \exists\,(t_n,x_n) \in [0,T]\times\mathbb{R},\ (p_n,q_n,M_n) \in \mathcal{P}^{2,+}V(t_n,x_n), \\
&\qquad\qquad \text{such that } \big(t_n,x_n,V(t_n,x_n),p_n,q_n,M_n\big) \to \big(s,x,V(s,x),p,q,M\big) \Big\}.
\end{aligned}$$
For a lower-semicontinuous function V, we define
$$\mathcal{P}^{2,-}V(s,x) := -\mathcal{P}^{2,+}(-V)(s,x), \qquad \bar{\mathcal{P}}^{2,-}V(s,x) := -\bar{\mathcal{P}}^{2,+}(-V)(s,x).$$
We can therefore state the equivalence between the two notions of viscosity solution stated before.
Proposition 1.
A function $V : [0,T]\times\mathbb{R} \to \mathbb{R}$ is a viscosity sub-, resp. super-, solution to Equation (27) if and only if, for all $(p,q,M) \in \bar{\mathcal{P}}^{2,+}V(t,x)$, resp. $\bar{\mathcal{P}}^{2,-}V(t,x)$,
$$\min\left\{ -p - \mu(t,x)\,q - \tfrac{1}{2}\sigma^2(t,x)\,M - f(x) + \beta(t)\left(V(t,x) + g_2(x)\right),\; V(t,x) - \mathcal{I}V(t,x) \right\} \le 0 \quad (\text{resp.} \ge 0).$$
Before proving the main result of the present section, i.e., the comparison principle, we need two technical lemmas.
Lemma 5.
Assume that Hypotheses 1–4 are satisfied and that U and V are, respectively, a viscosity super-solution and a viscosity sub-solution to Equation (27). Assume moreover that for $(\hat{t},x_0) \in [0,T]\times\mathbb{R}$ it holds that
$$V(\hat{t},x_0) \le \mathcal{I}V(\hat{t},x_0), \qquad U(\hat{t},x_0) \ge \mathcal{I}U(\hat{t},x_0).$$
Then, for every $\epsilon > 0$, there exists $\hat{x}$ such that
$$V(\hat{t},x_0) - U(\hat{t},x_0) \le V(\hat{t},\hat{x}) - U(\hat{t},\hat{x}) + \epsilon, \qquad V(\hat{t},\hat{x}) > \mathcal{I}V(\hat{t},\hat{x}), \qquad U(\hat{t},\hat{x}) < \mathcal{I}U(\hat{t},\hat{x}).$$
Proof. 
Assume that Equation (42) holds; then, for $\lambda \in (0,1)$, there exists $y_0$ such that
$$\mathcal{I}V(\hat{t},x_0) \le V(\hat{t},x_0+y_0) - (y_0+\kappa) + \lambda\epsilon.$$
It therefore holds that
$$V(\hat{t},x_0) - U(\hat{t},x_0) \le V(\hat{t},x_0+y_0) - U(\hat{t},x_0+y_0) + \lambda\epsilon.$$
Choosing a sufficiently small $\lambda$, it follows that
$$V(\hat{t},x_0+y_0) > \mathcal{I}V(\hat{t},x_0+y_0).$$
An analogous argument shows that the converse inequality holds true for U and the claim follows. □
Lemma 6.
Assume that Hypotheses 1–4 are satisfied and that U and V are uniformly continuous on $[0,T)\times\mathbb{R}$. Then, if $(\hat{t},\hat{x}) \in [0,T)\times\mathbb{R}$ is such that
$$V(\hat{t},\hat{x}) < \mathcal{I}V(\hat{t},\hat{x}), \qquad U(\hat{t},\hat{x}) > \mathcal{I}U(\hat{t},\hat{x}),$$
then there exists $\delta > 0$ such that, for $(t,x) \in [\hat{t}-\delta,\hat{t}+\delta]\times[\hat{x}-\delta,\hat{x}+\delta]$, with $\hat{t}+\delta < T$ and $\hat{t}-\delta > 0$, it holds that
$$V(t,x) < \mathcal{I}V(t,x), \qquad U(t,x) > \mathcal{I}U(t,x).$$
Proof. 
Since $V(\hat{t},\hat{x}) < \mathcal{I}V(\hat{t},\hat{x})$, there exist $\lambda \in (0,1)$ and $\hat{y}$ such that
$$V(\hat{t},\hat{x}) < V(\hat{t},\hat{x}+\hat{y}) - (\hat{y}+\kappa) - \lambda.$$
From the assumption of uniform continuity of V, for every $\epsilon > 0$ there exists $\delta \in (0,\epsilon)$ such that, for $|t_1-t_2| \le \delta$ and $|x_1-x_2| \le \delta$, it holds that
$$|V(t_1,x_1) - V(t_2,x_2)| \le \epsilon.$$
Therefore, for $(t,x) \in [\hat{t}-\delta,\hat{t}+\delta]\times[\hat{x}-\delta,\hat{x}+\delta]$, it holds that
$$V(t,x) < V(t,x+\hat{y}) - (\hat{y}+\kappa) + 2\epsilon - \lambda,$$
so that for sufficiently small $\epsilon$ it holds that
$$V(t,x) < \mathcal{I}V(t,x), \qquad \forall (t,x) \in [\hat{t}-\delta,\hat{t}+\delta]\times[\hat{x}-\delta,\hat{x}+\delta].$$
An analogous argument proves the converse results for U and the claim follows. □
Next is the main Theorem of the current section.
Theorem 4
(Comparison principle). Assume that Hypotheses 1–4 are satisfied and that U and V are, respectively, a viscosity super-solution and a viscosity sub-solution to Equation (27). Assume also that U and V are uniformly continuous; then $V \le U$ on $[0,T]\times\mathbb{R}$.
Proof. 
Let us prove the result by contradiction, assuming that
$$\sup_{[0,T]\times\mathbb{R}} (V - U) = \eta > 0.$$
For r > 0 let us define
$$\tilde{V}(t,x) := e^{rt}\,V(t,x), \qquad \tilde{U}(t,x) := e^{rt}\,U(t,x).$$
Using the theorem hypotheses, namely that V and U are viscosity sub- and super-solutions to Equation (27), we immediately have that $\tilde{V}$ and $\tilde{U}$ are, respectively, a viscosity sub- and super-solution to
$$\begin{cases} \min\left\{ r\,u(t,x) - \partial_t u(t,x) - \mathcal{L}u(t,x) - e^{rt} f(x) + \beta(t)\left(u(t,x) + e^{rt} g_2(x)\right),\; u(t,x) - \tilde{\mathcal{I}}u(t,x) \right\} = 0, & \text{on } [0,T)\times\mathbb{R}, \\ u(T,x) = e^{rT} g_1(x), & \text{on } \{T\}\times\mathbb{R}, \end{cases}$$
I ˜ being the non-local impulse operator defined as
$$\tilde{\mathcal{I}}u(t,x) := \sup_{K \in A(t,x)}\left\{ u(t,x+K) - e^{rt}(K+\kappa) \right\}.$$
Assume that x 0 R exists so that we have
$$\tilde{V}(T,x_0) - \tilde{U}(T,x_0) > 0.$$
Then, from the fact that U ˜ is a viscosity super-solution, resp. V ˜ is a viscosity sub-solution, using Lemma 5 we have that x ¯ R exists such that
$$\tilde{V}(T,\bar{x}) - \tilde{U}(T,\bar{x}) > 0,$$
and
$$\tilde{U}(T,\bar{x}) < \tilde{\mathcal{I}}\tilde{U}(T,\bar{x}), \quad \text{resp.} \quad \tilde{V}(T,\bar{x}) > \tilde{\mathcal{I}}\tilde{V}(T,\bar{x}).$$
Since we also have that $\tilde{V}(T,\bar{x}) \le e^{rT} g_1(\bar{x})$ and $\tilde{U}(T,\bar{x}) \ge e^{rT} g_1(\bar{x})$, we conclude that
$$\tilde{V}(T,\bar{x}) - \tilde{U}(T,\bar{x}) \le 0,$$
which contradicts Equation (44).
Then, suppose there exists $(\bar{t},\bar{x}) \in [0,T)\times\mathbb{R}$ such that
$$\tilde{V}(\bar{t},\bar{x}) - \tilde{U}(\bar{t},\bar{x}) > 0;$$
then, again using Lemmas 5 and 6, for all $t \in I_\delta := [\bar{t}-\delta,\bar{t}+\delta]$ and $x \in B_\delta := [\bar{x}-\delta,\bar{x}+\delta]$ it holds that
$$\tilde{U}(t,x) < \tilde{\mathcal{I}}\tilde{U}(t,x), \quad \text{resp.} \quad \tilde{V}(t,x) > \tilde{\mathcal{I}}\tilde{V}(t,x).$$
It further holds that ( t ˜ , x ˜ ) [ 0 , T ) × R exists, such that
$$\sup_{I_\delta\times B_\delta}\left(\tilde{V} - \tilde{U}\right) \ge \left(\tilde{V} - \tilde{U}\right)(\tilde{t},\tilde{x}) > 0.$$
Consider thus ( t 0 , x 0 ) [ 0 , T ) × R , so that
$$\sup_{I_\delta\times B_\delta}\left(\tilde{V} - \tilde{U}\right) = \left(\tilde{V} - \tilde{U}\right)(t_0,x_0) > 0,$$
and, for any n N , define
$$\varphi_n(t,x,y) := \tilde{V}(t,x) - \tilde{U}(t,y) - \varrho_n(t,x,y),$$
with
$$\varrho_n(t,x,y) = n\,|x-y|^2 + |x-x_0|^4 + |t-t_0|^2.$$
For any $n \in \mathbb{N}$ there exists a point $(t_n,x_n,y_n) \in I_\delta\times B_\delta\times B_\delta$ attaining the maximum of $\varphi_n$ in $I_\delta\times B_\delta\times B_\delta$; therefore, up to a subsequence that, for the sake of brevity, we will still denote $(t_n,x_n,y_n)$, we have that
$$(t_n,x_n,y_n) \to (t_0,x_0,x_0), \qquad n\,|x_n-y_n|^2 \to 0,$$
as $n \to \infty$. It also holds that
$$\tilde{V}(t_n,x_n) - \tilde{U}(t_n,y_n) \to \tilde{V}(t_0,x_0) - \tilde{U}(t_0,x_0), \quad \text{as } n \to \infty.$$
In fact, since
$$\tilde{V}(t_0,x_0) - \tilde{U}(t_0,x_0) = \varphi_n(t_0,x_0,x_0) \le \varphi_n(t_n,x_n,y_n),$$
then
$$\tilde{V}(t_0,x_0) - \tilde{U}(t_0,x_0) \le \liminf_n \varphi_n(t_n,x_n,y_n) \le \limsup_n \varphi_n(t_n,x_n,y_n) \le \tilde{V}(t_0,x_0) - \tilde{U}(t_0,x_0) - \liminf_n\left(n\,|x_n-y_n|^2 + |x_n-x_0|^4 + |t_n-t_0|^2\right).$$
Therefore, using the optimality of $(t_0,x_0)$, we obtain, up to a subsequence, that the claims in Equations (45) and (46) hold true.
Finally, from the fact that $x_0 \in B_\delta$, it follows that $(t_n,x_n,y_n) \in I_\delta \times B_\delta \times B_\delta$.
Applying the Ishii lemma, we have that there exist $(p^n_V, q^n_V, M^n_V) \in \bar{\mathcal{P}}^{2,+}\tilde{V}(t_n,x_n)$ and $(p^n_U, q^n_U, M^n_U) \in \bar{\mathcal{P}}^{2,-}\tilde{U}(t_n,y_n)$ such that
$$p^n_V - p^n_U = 2(t_n - t_0), \qquad q^n_V = \partial_x \varrho_n(t_n,x_n,y_n), \qquad q^n_U = -\partial_y \varrho_n(t_n,x_n,y_n),$$
and
$$\begin{pmatrix} M^n_V & 0 \\ 0 & -M^n_U \end{pmatrix} \le A_n + \frac{1}{2n}A_n^2,$$
with $A_n = D^2_{(x,y)}\varrho_n(t_n,x_n,y_n)$. Therefore, from the viscosity sub-solution property of $\tilde{V}$, resp. the viscosity super-solution property of $\tilde{U}$, by the Lipschitz continuity of $\mu$ and $\sigma$ in x and (4), we have that
$$r\left(\tilde{V}(t_0,x_0) - \tilde{U}(t_0,x_0)\right) \le 0,$$
which gives the desired contradiction. □
We are now able to state the uniqueness result for the viscosity solution.
Corollary 1.
Assume that Hypotheses 1–4 hold true; then there exists a unique viscosity solution to Equation (27).
Proof. 
Let $V_1$ and $V_2$ be two viscosity solutions to Equation (27); then, since $V_1$ is a subsolution and $V_2$ is a supersolution, by the comparison principle (Theorem 4) we obtain that $V_2 \ge V_1$. Since the opposite must also hold, we obtain the claim. □

5. Smooth Fit Principle on the Value Function

Under further regularity assumptions on the coefficients, to be specified below, one can prove regularity properties of the value function, with particular reference to the smooth-fit property across the switching boundaries between the action and continuation regions. These results, known as the smooth-fit principle (see, e.g., Egami and Yamazaki (2014); Guo and Wu (2009); Hernández-Hernández and Yamazaki (2015)), have already been proven to hold in the infinite horizon case. Additionally, we will prove $W^{(1,2),p}_{loc}$ regularity for the value function $V(t,x)$ on any fixed parabolic domain $Q_T \equiv (\delta, T]\times B_R(0)$, for any constants $0 < \delta < T$ and $R > 0$.
In what follows we introduce the definition of the function spaces we are going to use throughout the section, Ω being a bounded open set:
$$\begin{aligned}
W^{(0,1),p}(\Omega) &= \left\{ u \in L^p(\Omega) : u_{x_i} \in L^p(\Omega) \right\}, \\
W^{(1,2),p}(\Omega) &= \left\{ u \in W^{(0,1),p}(\Omega) : u_{x_i x_j} \in L^p(\Omega) \right\}, \\
C^{0+\frac{\alpha}{2},\,0+\alpha}(\bar\Omega) &= \left\{ u \in C(\bar\Omega) : \sup_{(t,x),(s,y)\in\Omega,\ (t,x)\neq(s,y)} \frac{|u(t,x) - u(s,y)|}{\left(|t-s| + |x-y|^2\right)^{\alpha/2}} < +\infty \right\}, \\
C^{1+\frac{\alpha}{2},\,2+\alpha}(\bar\Omega) &= \left\{ u \in C(\bar\Omega) : u_t,\ u_{x_i x_j} \in C^{0+\frac{\alpha}{2},\,0+\alpha} \right\}, \\
W^{(1,2),p}_{loc}(\Omega) &= \left\{ u \in L^p_{loc}(\Omega) : u \in W^{(1,2),p}(U)\ \ \forall\ \text{open } U \text{ with } \bar{U} \subset \bar\Omega \setminus \partial_P\Omega \right\}.
\end{aligned}$$
The above notations are similar to the notations used in Guo and Chen (2013).
Recall that β ( t ) is the hazard rate function defined in (12), and we make the following assumption:
Hypothesis 5.
Let $\alpha \in (0,1]$; we assume that the intensity function $\beta(t) \in C^{\alpha/2}([0,T])$ and that $\sigma(s,x) \in C^{0+\alpha/2,\,0+\alpha}(\bar Q_T)$ satisfies the uniform ellipticity condition, i.e.,
$$\sigma(s,x) \ge \delta_1$$
for some constant $\delta_1 > 0$ depending on the domain $Q_T$.
Before proceeding to the smooth fit principle, recall that we divide $[0,T] \times \mathbb{R}$ into the continuation region $C$ and the action region $A$, defined respectively as
$$C := \big\{ (t,x) : V(t,x) > \mathcal{I}V(t,x) \big\}, \qquad A := \big\{ (t,x) : V(t,x) = \mathcal{I}V(t,x) \big\},$$
and for any open set $\Omega \subset \mathbb{R}^2$, the parabolic boundary $\partial_P \Omega$ is defined as
$$\partial_P \Omega := \big\{ (t,x) \in \bar\Omega \;\big|\; \forall \varepsilon > 0,\ Q\big((t,x), \varepsilon\big) \text{ contains points not in } \Omega \big\},$$
where $Q\big((t_0, x_0), r\big) := \{ (t,x) : |x - x_0| < r,\ t < t_0 \}$ for all $(t_0, x_0, r) \in \mathbb{R}^2 \times \mathbb{R}^+$. For any $(t,x) \in A$, define the set
$$\Theta(t,x) = \big\{ \xi_0 \;\big|\; \mathcal{I}V(t,x) = V(t, x + \xi_0) - \xi_0 - \kappa \big\}.$$
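To make the sets $C$, $A$ and $\Theta$ concrete, the following minimal Python sketch (not part of the paper) evaluates a discretized version of the intervention operator $\mathcal{I}V(t,x) = \sup_{\xi \in \mathbb{R}^+} \big( V(t, x+\xi) - \xi - \kappa \big)$ on a spatial grid at a fixed time and classifies the grid points accordingly; the names x_grid, V_grid, kappa, tol and the toy value-function profile are purely illustrative.

```python
import numpy as np

# Minimal illustrative sketch (assumed grid discretization, not from the paper):
# given V_grid[i] ~ V(t, x_grid[i]) at a fixed time t, approximate the intervention
# operator IV(t, x) = sup_{xi > 0} [ V(t, x + xi) - xi - kappa ] and classify each
# grid point as belonging to the action region A = {V = IV} or to the continuation
# region C = {V > IV}. The supremum is truncated at the right end of the grid.

def impulse_operator(x_grid, V_grid, kappa):
    IV = np.full_like(V_grid, -np.inf)
    for i, x in enumerate(x_grid):
        # candidate values V(t, x + xi) - xi - kappa for xi = x_grid[j] - x, j > i
        candidates = V_grid[i + 1:] - (x_grid[i + 1:] - x) - kappa
        if candidates.size:
            IV[i] = candidates.max()
    return IV

def classify_regions(x_grid, V_grid, kappa, tol=1e-8):
    IV = impulse_operator(x_grid, V_grid, kappa)
    action = np.abs(V_grid - IV) <= tol      # A: V = IV (up to the tolerance tol)
    return action, ~action                   # C: V > IV

# Toy usage with a made-up, increasing and bounded profile for V(t, .):
x_grid = np.linspace(-5.0, 5.0, 501)
V_grid = np.minimum(x_grid, 1.0)
action, continuation = classify_regions(x_grid, V_grid, kappa=0.3)
```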
Notice that in the regularity analysis in Section 2 we have already shown that $V(T-t, x) \in C^{0+1/2,\,0+1}(\Omega)$, so we immediately have the following lemma.
Lemma 7
(Theorems 4.9, 5.9, 5.10, and 6.33 in Lieberman (1996)). Under Hypotheses 2 and 5, for any open set $\Omega \subset C$, the linear parabolic PDE
$$\begin{cases} \partial_t u - L u(t,x) + \tilde\beta(t)\, u(t,x) = \tilde f(t,x) & \text{in } \Omega, \\ u(t,x) = V(T-t, x) & \text{on } \partial_P \Omega, \end{cases}$$
admits a unique solution $u \in C^{0+\alpha/2,\,0+\alpha}(\bar\Omega) \cap C^{1+\alpha/2,\,2+\alpha}_{loc}(\Omega)$, where
$$\tilde\beta(t) = \beta(T-t), \qquad \tilde f(t,x) = f(x) + \beta(t)\, g_2(x).$$
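As a purely illustrative numerical aside (a sketch under stated assumptions, not the paper's method), the Dirichlet problem of Lemma 7 on a space-time rectangle contained in the continuation region can be approximated by an explicit finite-difference scheme. Here we assume the generator has the diffusion form $L u = \mu(t,x)\, u_x + \tfrac12 \sigma(t,x)^2\, u_{xx}$, and all coefficients, boundary data and function names below (solve_dirichlet, mu, sigma, beta_tilde, f_tilde, boundary) are placeholders rather than quantities taken from the paper.

```python
import numpy as np

# Illustrative explicit finite-difference scheme for the linear PDE of Lemma 7,
#   u_t - L u + beta_tilde(t) u = f_tilde(t, x)  on (0, T] x (a, b),
# assuming L u = mu(t, x) u_x + 0.5 sigma(t, x)^2 u_xx. Boundary data are imposed
# on the parabolic boundary: the initial time slice and the two lateral sides.

def solve_dirichlet(a, b, T, nx, nt, mu, sigma, beta_tilde, f_tilde, boundary):
    x = np.linspace(a, b, nx)
    dx = x[1] - x[0]
    dt = T / nt
    # Initial condition (bottom of the parabolic boundary).
    u = np.array([boundary(0.0, xi) for xi in x])
    for m in range(nt):
        t = m * dt
        u_x = (u[2:] - u[:-2]) / (2 * dx)               # centered first derivative
        u_xx = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2   # centered second derivative
        Lu = mu(t, x[1:-1]) * u_x + 0.5 * sigma(t, x[1:-1]) ** 2 * u_xx
        rhs = Lu - beta_tilde(t) * u[1:-1] + f_tilde(t, x[1:-1])
        u_new = u.copy()
        u_new[1:-1] = u[1:-1] + dt * rhs                # explicit Euler step
        # Lateral Dirichlet data (sides of the parabolic boundary).
        u_new[0] = boundary(t + dt, x[0])
        u_new[-1] = boundary(t + dt, x[-1])
        u = u_new
    return x, u

# Toy usage with made-up data; dt must satisfy the usual CFL-type restriction.
x, u = solve_dirichlet(
    a=-1.0, b=1.0, T=0.5, nx=101, nt=20000,
    mu=lambda t, x: 0.1 * np.ones_like(x),
    sigma=lambda t, x: 0.3 * np.ones_like(x),
    beta_tilde=lambda t: 0.5,
    f_tilde=lambda t, x: np.maximum(x, 0.0),
    boundary=lambda t, x: 0.0 * np.asarray(x),
)
```

The explicit step requires the usual CFL-type restriction $\mathrm{d}t \lesssim \mathrm{d}x^2 / \sigma^2$; an implicit or Crank–Nicolson discretization would remove this restriction at the cost of solving a linear system per step.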
Theorem 5
(Smooth fit principle). Under Hypotheses 2 and 5, the value function $V(t,x)$ is the unique viscosity solution to the QVI (27) and belongs to $W^{(1,2),p}_{loc}(\mathbb{R} \times (0,T))$ for any $1 < p < +\infty$. Furthermore, for any $t \in [0,T)$, $V(t, \cdot) \in C^{1,\gamma}_{loc}(\mathbb{R})$ for any $0 < \gamma < 1$.
Proof. 
The cost function
$$B(K) := K + \kappa, \qquad K > 0,$$
is independent of time and satisfies the subadditivity property, i.e.,
$$B(K_1 + K_2) + \kappa = B(K_1) + B(K_2), \qquad \forall K_1, K_2 > 0,$$
so that the claim follows from Guo and Chen (2013) together with Lemma 7. □
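For completeness, note that the subadditivity identity above can be verified directly from the definition of $B$:
$$B(K_1 + K_2) + \kappa = (K_1 + K_2 + \kappa) + \kappa = (K_1 + \kappa) + (K_2 + \kappa) = B(K_1) + B(K_2).$$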

Structure of the Value Function

In this subsection, we study general properties of the value function $V(t,x)$ under further assumptions on $\sigma(t,x)$, $\beta(t)$, $\mu(t,x)$, $\tilde f(t,x)$ and $g_1(x)$.
Hypothesis 6.
The functions $\tilde f(t,x)$ and $g_1(x)$ are monotonically increasing in $x$, with
$$\lim_{x \to -\infty} \tilde f(t,x) = \lim_{x \to -\infty} g_1(x) = -\infty, \qquad \lim_{x \to +\infty} \tilde f(t,x) = U(t) > 0, \qquad \lim_{x \to +\infty} g_1(x) = U_g < +\infty.$$
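For concreteness, a simple (purely illustrative) pair of functions satisfying Hypothesis 6, for a given continuous $U(t) > 0$ and a constant $U_g$, is
$$\tilde f(t,x) = U(t)\big( 1 - e^{-x} \big), \qquad g_1(x) = U_g - e^{-x},$$
both of which are increasing in $x$, tend to $-\infty$ as $x \to -\infty$, and tend to $U(t)$ and $U_g$ respectively as $x \to +\infty$.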
Lemma 8.
Under Hypothesis 6, for any $t > 0$ the value function $V(t,x)$ satisfies
$$V(t, x_1) \le V(t, x_2), \qquad \forall\, x_1 \le x_2.$$
Furthermore, there exists $L \in [-\infty, +\infty)$ such that
$$[0,T] \times (L, +\infty) \subset C.$$
Proof. 
First, we show the monotonicity of $V(t,x)$ with respect to $x$. By applying the same adapted control $u \in \mathcal{U}[t,T]$ with two different initial values $x_1 \le x_2$, the corresponding solutions satisfy $X^u_{t,x_1} \le X^u_{t,x_2}$ a.s. Since $\tilde f(t,x)$ is increasing with respect to $x$, one has $J^u(t, x_1) \le J^u(t, x_2)$ for all $u \in \mathcal{U}[t,T]$, and thus $V(t, x_1) \le V(t, x_2)$ for any $x_1 \le x_2$.
It remains to be shown that there exists $L \in [-\infty, +\infty)$ such that, for any fixed $t > 0$ and any $x_0 > L$, $(t, x_0) \in C$. Fix any $t \in (0,T)$ and suppose, by contradiction, that there exists a sequence $x_1 < x_2 < \dots < x_k < \dots$ such that
$$\lim_{k \to +\infty} x_k = +\infty \quad \text{and} \quad (t, x_k) \in A, \quad \forall k > 0,$$
and for any $k > 0$ there exists $\xi_k \in \Theta(t, x_k)$ such that
$$V(t, x_k) = V(t, x_k + \xi_k) - \xi_k - \kappa.$$
However, since $V(t,x)$ is monotone, uniformly Lipschitz continuous in $x$ and bounded from above by $C_1$ according to (2) and (3), for any $\varepsilon > 0$ (in particular for $\varepsilon < \kappa$) one can choose $L$ large enough such that
$$V(t, x + \xi) - V(t, x) \le \varepsilon, \qquad \forall x > L,\ \xi > 0,$$
in contradiction to (50). Notice that, since such a choice of $L$ is independent of $t$, we conclude that there exists $L \in [-\infty, +\infty)$ such that $[0,T] \times (L, +\infty) \subset C$. □
Lemma 9.
For any $(t_0, x_0) \in A$, the set $\Theta(t_0, x_0)$ is nonempty and $(t_0, x_0 + \xi_0) \in C$ for any $\xi_0 \in \Theta(t_0, x_0)$.
Proof. 
Since V is uniformly bounded, one has
$$\lim_{\xi \to +\infty} \big( V(t_0, x_0 + \xi) - B(\xi) \big) = -\infty, \qquad \lim_{\xi \to 0^+} \big( V(t_0, x_0 + \xi) - B(\xi) \big) = V(t_0, x_0) - \kappa.$$
Then the condition $V(t_0, x_0) = \mathcal{I}V(t_0, x_0)$ implies that the supremum defining $\mathcal{I}V(t_0, x_0)$ is attained in the interior of $\mathbb{R}^+$, and thus $\Theta(t_0, x_0)$ is nonempty.
By property (49), for any $\xi_0 \in \Theta(t_0, x_0)$ one has
$$\mathcal{I}V(t_0, x_0) = \sup_{\xi \in \mathbb{R}^+} \big( V(t_0, x_0 + \xi) - B(\xi) \big) \ge \sup_{\xi \in \mathbb{R}^+} \big( V(t_0, x_0 + \xi_0 + \xi) - B(\xi_0 + \xi) \big) = \sup_{\xi \in \mathbb{R}^+} \big( V(t_0, x_0 + \xi_0 + \xi) - B(\xi) \big) - B(\xi_0) + \kappa = \mathcal{I}V(t_0, x_0 + \xi_0) - B(\xi_0) + \kappa.$$
On the other hand, since $\mathcal{I}V(t_0, x_0) + B(\xi_0) = V(t_0, x_0 + \xi_0)$, we have
$$V(t_0, x_0 + \xi_0) \ge \mathcal{I}V(t_0, x_0 + \xi_0) + \kappa,$$
which implies that $(t_0, x_0 + \xi_0) \in C$. □
Lemma 10.
Fix any $(t_0, x_0) \in A$. Then, for any $\xi_0 \in \Theta(t_0, x_0)$, one has
$$V_x(t_0, x_0) = V_x(t_0, x_0 + \xi_0) = 1.$$
Proof. 
By the definition of $\Theta(t,x)$, $\xi_0$ is a global maximum point of the function $\xi \mapsto V(t_0, x_0 + \xi) - B(\xi)$. Thus the first-order condition yields
$$V_x(t_0, x_0 + \xi_0) = B'(\xi_0) = 1.$$
On the other hand, for any $\delta \neq 0$, we have
$$V(t_0, x_0 + \delta) \ge \mathcal{I}V(t_0, x_0 + \delta) \ge V(t_0, x_0 + \delta + \xi_0) - B(\xi_0).$$
So one has
$$\frac{V(t_0, x_0 + \delta) - V(t_0, x_0)}{\delta} \ge \frac{V(t_0, x_0 + \delta + \xi_0) - V(t_0, x_0 + \xi_0)}{\delta}, \qquad \delta > 0,$$
$$\frac{V(t_0, x_0 + \delta) - V(t_0, x_0)}{\delta} \le \frac{V(t_0, x_0 + \delta + \xi_0) - V(t_0, x_0 + \xi_0)}{\delta}, \qquad \delta < 0.$$
By Theorem 5, $V_x(t,x)$ is well defined for all $(t,x) \in (0,T) \times \mathbb{R}$. Taking $\delta \to 0^+$ and $\delta \to 0^-$, one obtains
$$V_x(t_0, x_0) = V_x(t_0, x_0 + \xi_0) = 1.$$
 □

6. Conclusions

In the present work we have introduced a concrete financial setting in which a regulator can lend money to a player, typically represented by a bank, so as to avoid its default. The financial regulator is assumed to intervene in the system with an impulse-type control, representing the aforementioned money injection.
We have shown that the value function related to the associated stochastic optimal control problem is the unique viscosity solution of a quasi-variational inequality (QVI). Moreover, we have derived suitable regularity conditions, both in time and in space, for the solution to the QVI. We underline that, provided suitable regularity requirements are satisfied, our method allows us to prove the smooth-fit property, which turns out to be fundamental when aiming at finding an explicit optimal solution.
Future research will focus on extensions of both the setting and the results introduced in the present paper.
In particular, we aim at including the treatment of a hybrid type of control, namely combining the impulsive part with a classical continuous one. Additionally, an investigation regarding less stringent assumptions on the regularity of the coefficients will be carried out.
Finally, it is worth mentioning that machine and deep learning techniques have proven to provide effective algorithms for efficiently solving several stochastic optimal control problems; see, e.g., Bachouch et al. (2018); Deschatre and Mikael (2020); Huré et al. (2018). Therefore, our future investigations will also address the impulse-type optimal control problems proposed here by exploiting such neural-network-based solution methods.

Author Contributions

All the authors have equally contributed to the present article. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Aïd, René, Matteo Basei, Giorgia Callegaro, Luciano Campi, and Tiziano Vargiolu. 2020. Nonzero-Sum Stochastic Differential Games with Impulse Controls: A Verification Theorem with Applications. Mathematics of Operations Research 45: 205–32.
2. Aikins, Stephen. 2009. Global Financial Crisis and Government Intervention: A Case for Effective Regulatory Governance. International Public Management Review 10: 23–43.
3. Altavilla, Carlo, Giacomo Carboni, and Roberto Motto. 2015. Asset Purchase Programmes and Financial Markets: Lessons from the Euro Area. Frankfurt, Germany.
4. Andrade, Philippe, Johannes Breckenfelder, Fiorella De Fiore, Peter Karadi, and Oreste Tristani. 2016. The ECB's Asset Purchase Programme: An Early Assessment. Frankfurt, Germany.
5. Bachouch, Achref, Côme Huré, Nicolas Langrené, and Huyen Pham. 2018. Deep neural networks algorithms for stochastic control problems on finite horizon, part 2: Numerical applications. arXiv preprint arXiv:1812.05916.
6. Bayraktar, Erhan, Thomas Emmerling, and José-Luis Menaldi. 2013. On the impulse control of jump diffusions. SIAM Journal on Control and Optimization 51: 2612–37.
7. Belak, Christoph, Sören Christensen, and Frank Thomas Seifried. 2017. A general verification result for stochastic impulse control problems. SIAM Journal on Control and Optimization 55: 627–49.
8. Bielecki, Tomasz R., and Marek Rutkowski. 2013. Credit Risk: Modeling, Valuation and Hedging. Berlin/Heidelberg: Springer Science & Business Media.
9. Bielecki, Tomasz R., Monique Jeanblanc-Picqué, and Marek Rutkowski. 2009. Credit Risk Modeling. Osaka: Osaka University Press, vol. 5.
10. Blattner, Tobias Sebastian, and Michael A. S. Joyce. 2016. Net Debt Supply Shocks in the Euro Area and the Implications for QE. Frankfurt, Germany.
11. Blinder, Alan S. 2010. Quantitative Easing: Entrance and Exit Strategies (Digest Summary). Federal Reserve Bank of St. Louis Review 92: 465–79.
12. Bordo, Michael, and Arunima Sinha. 2016. A Lesson from the Great Depression that the Fed Might Have Learned: A Comparison of the 1932 Open Market Purchases with Quantitative Easing. Economics Working Paper 16113. Cambridge: National Bureau of Economic Research.
13. Bowman, David, Fang Cai, Sally Davies, and Steven Kamin. 2011. Quantitative Easing and Bank Lending: Evidence from Japan. Washington: Board of Governors of the Federal Reserve System.
14. Brigo, Damiano, Massimo Morini, and Andrea Pallavicini. 2013. Counterparty Credit Risk, Collateral and Funding: With Pricing Cases for All Asset Classes. Hoboken: John Wiley & Sons.
15. Calderon, Cesar A., and Klaus Schaeck. 2016. The Effects of Government Interventions in the Financial Sector on Banking Competition and the Evolution of Zombie Banks. Journal of Financial and Quantitative Analysis 51: 1391–1436.
16. Capponi, Agostino, and Peng-Chu Chen. 2015. Systemic risk mitigation in financial networks. Journal of Economic Dynamics and Control 58: 152–66.
17. Chevalier, Etienne, Vathana Ly Vath, and Simone Scotti. 2013. An optimal dividend and investment control problem under debt constraints. SIAM Journal on Financial Mathematics 4: 297–326.
18. Chevalier, Etienne, Vathana Ly Vath, Simone Scotti, and Alexandre Roch. 2016. Optimal execution cost for liquidation through a limit order market. International Journal of Theoretical and Applied Finance 19: 1650004.
19. Cordoni, Francesco, and Luca Di Persio. 2020. A maximum principle for a stochastic control problem with multiple random terminal times. Mathematics in Engineering, 557–83.
20. Cordoni, Francesco, Luca Di Persio, and Luca Prezioso. 2019. A lending scheme for a system of interconnected banks with probabilistic constraints of failure. arXiv preprint arXiv:1903.06042 (accepted for publication in Automatica).
21. Cosso, Andrea. 2013. Stochastic differential games involving impulse controls and double-obstacle quasi-variational inequalities. SIAM Journal on Control and Optimization 51: 2102–31.
22. Crandall, Michael G., and Hitoshi Ishii. 1990. The maximum principle for semicontinuous functions. Differential and Integral Equations 3: 1001–14.
23. Crépey, Stéphane. 2015a. Bilateral counterparty risk under funding constraints—Part I: Pricing. Mathematical Finance 25: 1–22.
24. Crépey, Stéphane. 2015b. Bilateral counterparty risk under funding constraints—Part II: CVA. Mathematical Finance 25: 23–50.
25. De Santis, Roberto A. 2020. Impact of the asset purchase programme on euro area government bond yields using market news. Economic Modelling 86: 192–209.
26. Deschatre, Thomas, and Joseph Mikael. 2020. Deep combinatorial optimisation for optimal stopping time problems and stochastic impulse control: Application to swing options pricing and fixed transaction costs options hedging. arXiv preprint arXiv:2001.11247.
27. Pinto, Edward J. 2016. The 30-Year Fixed Mortgage Should Disappear. Washington: American Enterprise Institute.
28. Eisenberg, Larry, and Thomas H. Noe. 2001. Systemic risk in financial systems. Management Science 47: 236–49.
29. Egami, Masahiko. 2008. A direct solution method for stochastic impulse control problems of one-dimensional diffusions. SIAM Journal on Control and Optimization 47: 1191–1218.
30. Egami, Masahiko, and Kazutoshi Yamazaki. 2014. On the continuous and smooth fit principle for optimal stopping problems in spectrally negative Lévy models. Advances in Applied Probability 46: 139–67.
31. Eichengreen, Barry, Ashoka Mody, Milan Nedeljkovic, and Lucio Sarno. 2012. How the Subprime Crisis went global: Evidence from bank credit default swap spreads. Journal of International Money and Finance 31: 1299–1318.
32. Fawley, Brett W., and Christopher J. Neely. 2013. Four stories of quantitative easing. Federal Reserve Bank of St. Louis Review 95: 51–88.
33. Federico, Salvatore, Mauro Rosestolato, and Elisa Tacconi. 2019. Irreversible investment with fixed adjustment costs: A stochastic impulse control approach. Mathematics and Financial Economics 13: 579–616.
34. Fleming, Wendell H., and Halil Mete Soner. 2006. Controlled Markov Processes and Viscosity Solutions. Berlin/Heidelberg: Springer Science & Business Media, vol. 25.
35. Guo, Xin, and Guoliang Wu. 2009. Smooth Fit Principle for Impulse Control of Multidimensional Diffusion Processes. SIAM Journal on Control and Optimization 48: 594–617.
36. Chen, Yann-Shin Aaron, and Xin Guo. 2013. Impulse Control of Multidimensional Jump Diffusions in Finite Time Horizon. SIAM Journal on Control and Optimization 51: 2638–63.
37. Hernández-Hernández, Daniel, and Kazutoshi Yamazaki. 2015. Games of singular control and stopping driven by spectrally one-sided Lévy processes. Stochastic Processes and Their Applications 125: 1–38.
38. Hetzel, Robert L. 2009. Government Intervention in Financial Markets: Stabilizing or Destabilizing? In Financial Market Regulation in the Wake of Financial Crises: The Historical Experience Conference, p. 207. 8 July 2012. Available online: https://ssrn.com/abstract=2101729 (accessed on 2 June 2020).
39. Hoover Institution. 2014. Exiting from Low Interest Rates to Normality: An Historical Perspective. Economics Working Paper 14110. Stanford: Stanford University.
40. Huré, Côme, Huyên Pham, Achref Bachouch, and Nicolas Langrené. 2018. Deep neural networks algorithms for stochastic control problems on finite horizon, part I: Convergence analysis. arXiv preprint arXiv:1812.04300.
41. Ivashina, Victoria, and David Scharfstein. 2010. Bank lending during the financial crisis of 2008. Journal of Financial Economics 97: 319–38.
42. Kahle, Kathleen M., and René M. Stulz. 2013. Access to capital, investment, and the financial crisis. Journal of Financial Economics 110: 280–99.
43. Karatzas, Ioannis. 1989. Optimization Problems in the Theory of Continuous Trading. SIAM Journal on Control and Optimization 27: 1221–59.
44. Kuznetsova, Anzhela, Galyna Azarenkova, and Ievgeniia Olefir. 2017. Implementation of the "bail-in" mechanism in the banking system of Ukraine. Banks and Bank Systems 12: 269–82.
45. Lieberman, Gary M. 1996. Second Order Parabolic Differential Equations. River Edge: World Scientific.
46. Lipton, Alexander. 2016. Modern Monetary Circuit Theory, Stability of Interconnected Banking Network, and Balance Sheet Optimization for Individual Banks. International Journal of Theoretical and Applied Finance 19: 1650034.
47. Merton, Robert C. 1974. On the pricing of corporate debt: The risk structure of interest rates. The Journal of Finance 29: 449–70.
48. Miyao, Ryuzo. 2000. The Role of Monetary Policy in Japan: A Break in the 1990s? Journal of the Japanese and International Economies 14: 366–84.
49. Øksendal, Bernt, and Agnès Sulem. 2008. Optimal stochastic impulse control with delayed reaction. Applied Mathematics and Optimization 58: 243–55.
50. Øksendal, Bernt, and Agnès Sulem. 2005. Applied Stochastic Control of Jump Diffusions. Berlin: Springer, vol. 498.
51. Pham, Huyên. 2007. On the smooth-fit property for one-dimensional optimal switching problem. In Séminaire de Probabilités XL. Berlin/Heidelberg: Springer, pp. 187–99.
52. Pham, Huyên. 2009. Continuous-Time Stochastic Control and Optimization with Financial Applications. Berlin/Heidelberg: Springer Science & Business Media, vol. 61.
53. Rogers, Leonard C. G., and Luitgard A. M. Veraart. 2013. Failure and rescue in an interbank network. Management Science 59: 882–98.
54. Sakuramoto, Masaki, and Alberto Urbani. 2018. Current basic lines of the discipline of bank crises and unresolved problems: An initial comparison between the solutions accepted in the European Union (with special emphasis on Italy) and in Japan, between the role of banking authorities and the powers of judicial authority. Law and Economics Yearly Review 7: 4–28.
55. Tang, Shanjian, and Jiongmin Yong. 1993. Finite horizon stochastic optimal switching and impulse controls with a viscosity solution approach. Stochastics: An International Journal of Probability and Stochastic Processes 45: 145–76.
56. Vath, Vathana Ly, Mohamed Mnif, and Huyên Pham. 2007. A model of optimal portfolio selection under liquidity risk and price impact. Finance and Stochastics 11: 51–90.
57. Voutsinas, Konstantinos, and Richard A. Werner. 2011. New Evidence on the Effectiveness of 'Quantitative Easing' in Japan. Southampton: Center for Banking, Finance and Sustainable Development, School of Management, University of Southampton.
