Article

First-Passage Times and Optimal Control of Integrated Jump-Diffusion Processes

Department of Mathematics and Industrial Engineering, Polytechnique Montréal, Montréal, QC H3T 1J4, Canada
Fractal Fract. 2023, 7(2), 152; https://doi.org/10.3390/fractalfract7020152
Submission received: 14 December 2022 / Revised: 21 January 2023 / Accepted: 30 January 2023 / Published: 3 February 2023
(This article belongs to the Special Issue Stochastic Modeling in Biological System)

Abstract

Let $Y(t)$ be a one-dimensional jump-diffusion process and $X(t)$ be defined by $dX(t) = \rho[X(t), Y(t)]\,dt$, where $\rho(\cdot,\cdot)$ is either a strictly positive or a strictly negative function. First-passage-time problems for the degenerate two-dimensional process $(X(t), Y(t))$ are considered in the case when the process leaves the continuation region at the latest at the moment of the first jump, and the problem of optimally controlling the process is treated as well. A particular problem, in which $\rho[X(t), Y(t)] = Y(t) - X(t)$ and $Y(t)$ is a standard Brownian motion with jumps, is solved explicitly.

1. Introduction

Diffusion processes are used as models in various applications, in particular in neuroscience to emulate the dynamics of the membrane potential of a neuron [1]. Moreover, to take into account the spikes of the neuron, jump-diffusion processes have been proposed by Jahn et al. [2] and Melanson and Longtin [3], among others.
Now, diffusion and jump-diffusion processes both increase and decrease in any time interval, however small. In some applications, however, it is not realistic to assume that the process can move in both directions. For example, if $X(t)$ represents the wear of a machine at time $t$, the stochastic process $\{X(t), t \ge 0\}$ should increase with time.
One way to obtain a strictly increasing or decreasing process is to consider degenerate two-dimensional diffusion processes ( X ( t ) , Y ( t ) ) defined by
$$dX(t) = \rho[X(t), Y(t)]\,dt,$$
$$dY(t) = f[Y(t)]\,dt + v[Y(t)]^{1/2}\,dB(t),$$
where $\rho(\cdot,\cdot)$ is either a strictly positive or negative function and $\{B(t), t \ge 0\}$ is a standard Brownian motion. The functions $f$ and $v$ are such that $\{Y(t), t \ge 0\}$ is a diffusion process. When $\rho[X(t), Y(t)] = Y(t)$, the process $\{X(t), t \ge 0\}$ is called an integrated diffusion process. We can of course generalize the definition to the case when $\{Y(t), t \ge 0\}$ is a jump-diffusion process.
The author has published a number of papers on integrated diffusion processes; see, for instance, Lefebvre [4] for a recent one. Other papers on this topic include those by Lachal [5], Makasu [6], Metzler [7], Caravelli et al. [8] and Levy [9].
Next, we define the first-passage time
$$T(x,y) = \inf\{t > 0 : (X(t), Y(t)) \notin C \mid (X(0), Y(0)) = (x, y) \in C\},$$
where $C$ is a subset of $\mathbb{R}^2$.
Let ϕ ( ξ , η ; x , y ) be the joint probability density function (pdf) of the random vector ( X ( t ) , Y ( t ) ) , with ( X ( 0 ) , Y ( 0 ) ) = ( x , y ) . As is well known (see, for example, Cox and Miller [10], p. 247), the function ϕ satisfies the Kolmogorov backward equation
$$\tfrac{1}{2}\, v(y)\, \phi_{yy} + f(y)\, \phi_y + \rho(x,y)\, \phi_x = \phi_t \quad \text{for } (x,y) \in C.$$
Moreover, the pdf g ( t ; x , y ) of the random variable T ( x , y ) satisfies the same partial differential equation (PDE):
$$\tfrac{1}{2}\, v(y)\, g_{yy} + f(y)\, g_y + \rho(x,y)\, g_x = g_t$$
(subject to different boundary conditions). It follows that the moment-generating function of the random variable T ( x , y ) , namely
$$M(x, y; \alpha) := E\!\left[ e^{-\alpha T(x,y)} \right],$$
where α > 0 , is a solution of the following PDE:
$$\tfrac{1}{2}\, v(y)\, M_{yy} + f(y)\, M_y + \rho(x,y)\, M_x = \alpha M \quad \text{for } (x,y) \in C,$$
where $M_{yy} := \partial^2 M(x,y;\alpha)/\partial y^2$, etc. Furthermore, this equation is subject to the boundary condition
$$M(x, y; \alpha) = 1 \quad \text{for } (x,y) \in \partial C.$$
We now replace the diffusion process { Y ( t ) , t 0 } by the jump-diffusion process defined by
$$Y(t) = Y(0) + \int_0^t f[Y(s)]\,ds + \int_0^t v[Y(s)]^{1/2}\,dB(s) + \sum_{n=1}^{N(t)} Z_n,$$
where $\{N(t), t \ge 0\}$ is a Poisson process with rate $\lambda$. The random variables $Z_1, Z_2, \ldots$ are assumed to be independent and identically distributed (i.i.d.), and also independent of the Poisson process. We can state the following proposition.
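To make the construction above concrete, here is a minimal Euler-scheme sketch of one path of $(X(t), Y(t))$. The specific choices $f(y) = -y$, $v(y) \equiv 1$, $\rho(x,y) = 1 + y^2$ (strictly positive, so that $X$ increases), $\lambda = 2$ and standard normal jump sizes $Z_n$ are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_pair(x0, y0, horizon=1.0, dt=1e-3, lam=2.0, seed=0):
    """Euler scheme for dX = rho(X, Y) dt and a jump-diffusion Y.

    Illustrative (hypothetical) choices: f(y) = -y, v(y) = 1,
    rho(x, y) = 1 + y**2, and standard normal jump sizes Z_n.
    """
    rng = np.random.default_rng(seed)
    n = int(horizon / dt)
    x = np.empty(n + 1)
    y = np.empty(n + 1)
    x[0], y[0] = x0, y0
    for k in range(n):
        rho = 1.0 + y[k] ** 2                 # strictly positive drift for X
        n_jumps = rng.poisson(lam * dt)       # Poisson events in (t, t + dt]
        jump_part = rng.standard_normal(n_jumps).sum() if n_jumps else 0.0
        x[k + 1] = x[k] + rho * dt
        y[k + 1] = (y[k] - y[k] * dt          # drift f(y) = -y
                    + np.sqrt(dt) * rng.standard_normal()  # v(y) = 1
                    + jump_part)              # sum of the Z_n in this step
    return x, y

x, y = simulate_pair(0.0, 0.0)
# rho > 0 everywhere, so the first component must be strictly increasing
assert np.all(np.diff(x) > 0)
```

Since $\rho$ is bounded away from zero here, every simulated $X$ path is strictly increasing, which is exactly the qualitative feature the degenerate two-dimensional construction is designed to produce.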
Proposition 1.
The function M ( x , y ; α ) satisfies the integro-differential equation (dropping the dependence on α from the notation)
$$\alpha M(x,y) = \tfrac{1}{2}\, v(y)\, M_{yy}(x,y) + f(y)\, M_y(x,y) + \rho(x,y)\, M_x(x,y) + \lambda \left[ \int_{-\infty}^{\infty} M(x, y+z)\, f_Z(z)\,dz - M(x,y) \right]$$
for $(x,y) \in C$, where $f_Z(z)$ is the common density function of the $Z_n$'s. As above, the equation is subject to the boundary condition (8).
Proof. 
This result is obtained by generalizing the infinitesimal generator of the jump-diffusion process { Y ( t ) , t 0 } in Kou and Wang [11] to the case when f ( y ) and v ( y ) are not necessarily constant functions. □
Remark 1.
See also the remark after the proof of Proposition 2.
There are still few explicit solutions to first-passage problems for jump-diffusion processes. Kou and Wang [11] obtained explicit formulae for the Laplace transform of the pdf of the first-passage time τ to a constant boundary for a Wiener process with jumps having a double exponential distribution. This model was generalized or modified in various papers. In Chen et al. [12], τ was the first-exit time from a finite interval, while in Yin et al. [13] the jumps were mixed-exponential random variables. Karnaukh [14] considered the case when the parameters of the Wiener process depend on a finite Markov chain. In Lefebvre [15], the jump sizes were assumed to be uniformly distributed, while in Abundo [16] the jumps (positive and/or negative) were of a constant size and the boundaries were time-dependent. Because obtaining exact analytical solutions to such problems is difficult, some authors presented numerical techniques to obtain the quantities of interest; see, for example, Belkaid and Utzet [17].
Di Crescenzo et al. [18] computed bounds for first-crossing-time probabilities of a Wiener process with random jumps driven by a counting process. Fernández et al. [19] proposed algorithms to compute double-barrier first-passage-time probabilities of a jump-diffusion process with an arbitrary jump size distribution.
D’Onofrio and Lanteri [20] obtained numerical approximations for the density functions of first-passage times in the case of diffusion processes with state-dependent jumps. Finally, in Lefebvre [21], the author was able to obtain exact solutions to optimal control problems for Wiener processes with exponential jumps.
In the current paper, explicit results will be presented for the first-passage time of a two-dimensional jump-diffusion process.
In the next section, we consider the special case when the two-dimensional process $(X(t), Y(t))$ is killed at the latest at the moment of the first jump. We are also interested in the mean value of $T(x,y)$, as well as in the probability that $(X(t), Y(t))$ will leave the continuation region through a given part of its boundary $\partial C$. In Section 3, the problem of maximizing or minimizing the time the controlled version of the process $(X(t), Y(t))$ spends in the continuation region $C$ will be treated. A particular problem will be solved explicitly in Section 4. Finally, we will conclude with a few remarks in Section 5.

2. Killed Processes

Assume that the random variables $Z_1, Z_2, \ldots$ are such that no overshoot is possible. That is, the degenerate two-dimensional jump-diffusion process $(X(t), Y(t))$ cannot jump over the boundary of the continuation region $C$. Let $m(x,y) := E[T(x,y)]$ and
$$p(x,y) := P[(X(T), Y(T)) \in \partial C_0 \mid X(0) = x, Y(0) = y],$$
where $\partial C_0 \subset \partial C$. We have the following corollaries.
Corollary 1.
The function m ( x , y ) satisfies the integro-differential equation
$$-1 = \tfrac{1}{2}\, v(y)\, m_{yy}(x,y) + f(y)\, m_y(x,y) + \rho(x,y)\, m_x(x,y) + \lambda \left[ \int_{-\infty}^{\infty} m(x, y+z)\, f_Z(z)\,dz - m(x,y) \right]$$
for $(x,y) \in C$, subject to the boundary condition
$$m(x,y) = 0 \quad \text{for } (x,y) \in \partial C.$$
Proof. 
It follows from the expansion of $M(x,y;\alpha)$ into an infinite series (see Cox and Miller [10]):
$$M(x,y;\alpha) := E\!\left[ e^{-\alpha T(x,y)} \right] = E\!\left[ 1 - \alpha\, T(x,y) + \tfrac{1}{2}\,\alpha^2\, T^2(x,y) - \cdots \right]$$
$$= 1 - \alpha\, m(x,y) + \tfrac{1}{2}\,\alpha^2\, E[T^2(x,y)] - \cdots$$
Notice that in our case, $E[T^n(x,y)]$ will exist for any $n \in \{1, 2, \ldots\}$ because of Equation (18) below. □
Corollary 2.
The probability p ( x , y ) is a solution of the integro-differential equation
$$0 = \tfrac{1}{2}\, v(y)\, p_{yy}(x,y) + f(y)\, p_y(x,y) + \rho(x,y)\, p_x(x,y) + \lambda \left[ \int_{-\infty}^{\infty} p(x, y+z)\, f_Z(z)\,dz - p(x,y) \right]$$
for $(x,y) \in C$. Moreover, the boundary condition is
$$p(x,y) = \begin{cases} 1 & \text{if } (x,y) \in \partial C_0, \\ 0 & \text{if } (x,y) \in D, \end{cases}$$
where $D := \partial C \setminus \partial C_0$.
In this paper, we consider the special case when the random variable $Z_1$ is such that the process $(X(t), Y(t))$ will leave the continuation region $C$ at the latest at the moment $\tau_1$ of the first event of the Poisson process. Let $T_0(x,y)$ be the random variable that corresponds to $T(x,y)$ when $\lambda = 0$. We can write that
$$T(x,y) = \min\{T_0, \tau_1\},$$
where $\tau_1$ has an exponential distribution with parameter $\lambda$. Furthermore, the sum in Equation (9) can be replaced by $Z_1 \mathbf{1}_{\{N(t) > 0\}}$, where $\mathbf{1}_{\{N(t) > 0\}}$ is the indicator variable of the event $\{N(t) > 0\}$, and the equation is valid for $t \in [0, T(x,y)]$. We can say that the process $(X(t), Y(t))$ is killed at time $T(x,y)$.
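The representation $T(x,y) = \min\{T_0, \tau_1\}$ lends itself to a simple Monte Carlo sketch: sample $T_0$ from any jump-free first-passage problem and $\tau_1$ from an exponential distribution with parameter $\lambda$. In the sketch below, $T_0$ is (as an arbitrary illustration, not the paper's example) the exit time of a standard Brownian motion from $(-1, 1)$, simulated by an Euler scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def exit_times_bm(n_paths, dt=1e-3, a=1.0, max_steps=100_000):
    """Euler-simulated first times standard Brownian motions leave (-a, a).

    A stand-in for T_0(x, y), the first-passage time of the jump-free
    process; any simulated exit-time distribution would do here.
    """
    pos = np.zeros(n_paths)
    times = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    for k in range(1, max_steps + 1):
        pos[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
        exited = alive & (np.abs(pos) >= a)
        times[exited] = k * dt
        alive &= ~exited
        if not alive.any():
            break
    return times

lam = 1.0
t0 = exit_times_bm(500)
tau1 = rng.exponential(1.0 / lam, size=t0.size)  # first jump epoch, Exp(lambda)
t_kill = np.minimum(t0, tau1)                    # T(x, y) = min{T_0, tau_1}

assert np.all(t_kill <= tau1)     # the killed process never outlives tau_1
print(t_kill.mean())              # Monte Carlo estimate of E[T(x, y)]
```

As expected, the estimated mean is smaller than both $E[T_0]$ and $E[\tau_1] = 1/\lambda$, since the killed process stops at whichever of the two times comes first.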
An application of the above problem is the following: as mentioned in Section 1, a more realistic model for the wear of a machine is the degenerate two-dimensional process $(X(t), Y(t))$ defined in Equations (1) and (2), when $\rho(\cdot,\cdot)$ is a strictly positive function. Rishel [22] proposed this model (in $n$ dimensions) in the context of an optimal control problem. If $X(t)$ denotes the remaining amount of deterioration that a device can undergo before it must be repaired or replaced, then $\rho(\cdot,\cdot)$ should be strictly negative instead. Moreover, the remaining lifetime of the device is the first-passage time to zero or to a level at which it is considered worn out.
Now, many electronic devices, especially mobile phones, are often replaced as soon as they break down, rather than being repaired. A mobile phone failure can be seen as a jump from the current value of X ( t ) to zero, so that the device is killed at the time of the jump. It is also possible that the device will be replaced before a failure occurs, due to normal wear and tear or because it has become obsolete. Thus, deterioration could also include the age of the device.
Similarly, in the case of humans, the downward jump to zero could occur during a massive heart attack or stroke.
Because we assume that $(x, y+z) \notin C$ for any possible value $z$ of the random variable $Z$, the integro-differential Equations (10) and (12) become, respectively, the partial differential equations
$$\alpha M(x,y) = \tfrac{1}{2}\, v(y)\, M_{yy}(x,y) + f(y)\, M_y(x,y) + \rho(x,y)\, M_x(x,y) + \lambda\, [1 - M(x,y)]$$
and
$$-1 = \tfrac{1}{2}\, v(y)\, m_{yy}(x,y) + f(y)\, m_y(x,y) + \rho(x,y)\, m_x(x,y) - \lambda\, m(x,y).$$
In the case of Equation (16), if $(x, y+z) \in \partial C_0$ for all $z$, then
$$0 = \tfrac{1}{2}\, v(y)\, p_{yy}(x,y) + f(y)\, p_y(x,y) + \rho(x,y)\, p_x(x,y) + \lambda\, [1 - p(x,y)],$$
whereas we have
$$0 = \tfrac{1}{2}\, v(y)\, p_{yy}(x,y) + f(y)\, p_y(x,y) + \rho(x,y)\, p_x(x,y) - \lambda\, p(x,y)$$
if $(x, y+z) \in D$ for all $z$. If $(x, y+z)$ belongs to $\partial C_0$ for some values of $z$, and to $D$ for other values of $z$, then the integral in Equation (16) is replaced by
$$\int_{-\infty}^{\infty} \mathbf{1}_{\{(x, y+z) \in \partial C_0\}}\, f_Z(z)\,dz = P[(x, y+Z) \in \partial C_0].$$
Solving integro-differential equations explicitly and exactly is not an easy task. In Section 4, an example of a problem that we can indeed solve analytically will be presented. As above, the integro-differential equations will be reduced to PDEs, and the method of similarity solutions will be used to transform these PDEs into ordinary differential equations.

3. Optimal Control

In this section, we consider a controlled version of the two-dimensional process ( X ( t ) , Y ( t ) ) :
$$X_u(t) = X_u(0) + \int_0^t \rho[X_u(s), Y_u(s)]\,ds,$$
$$Y_u(t) = Y_u(0) + \int_0^t b\, u[X_u(s), Y_u(s)]\,ds + \int_0^t f[Y_u(s)]\,ds + \int_0^t v[Y_u(s)]^{1/2}\,dB(s) + \sum_{n=1}^{N(t)} Z_n,$$
where u ( · , · ) is the control variable, which is assumed to be a continuous function, and b is a non-zero constant. The aim is to find the value of the control that minimizes the expected value of the cost function
$$J(x,y) := \int_0^{T(x,y)} \left( \tfrac{1}{2}\, q\, u^2[X_u(t), Y_u(t)] + \theta \right) dt,$$
where $q > 0$ and $\theta$ are constants. If the parameter $\theta$ is positive (respectively negative), the optimizer must try to minimize (respectively maximize) the time spent by the controlled process in the continuation region $C$, taking the quadratic control costs into account. This type of problem is known as a homing problem; see Whittle [23,24].
To solve the above problem, we can make use of dynamic programming. We define the value function
$$F(x,y) = \inf_{\substack{u[X_u(t), Y_u(t)] \\ t \in [0,\, T(x,y))}} E[J(x,y)].$$
That is, F ( x , y ) is the expected cost (or reward, if it is negative) obtained by choosing the optimal value of the control variable in the interval [ 0 , T ( x , y ) ) .
Proposition 2.
The value function F ( x , y ) satisfies the second-order, non-linear partial integro-differential equation
$$0 = \theta - \frac{b^2}{2q}\, F_y^2(x,y) + \rho(x,y)\, F_x(x,y) + f(y)\, F_y(x,y) + \tfrac{1}{2}\, v(y)\, F_{yy}(x,y) + \lambda \left[ \int_{-\infty}^{\infty} F(x, y+z)\, f_Z(z)\,dz - F(x,y) \right].$$
Moreover, we have the boundary condition
$$F(x,y) = 0 \quad \text{for } (x,y) \in \partial C.$$
Proof. 
First, thanks to Bellman’s principle of optimality, we can write that
$$F(x,y) = \inf_{\substack{u[X_u(t), Y_u(t)] \\ 0 \le t \le \Delta t}} E\left[ \int_0^{\Delta t} \left( \tfrac{1}{2}\, q\, u^2[X_u(t), Y_u(t)] + \theta \right) dt + F\!\left( x + \rho(x,y)\,\Delta t,\; y + [f(y) + b\, u(x,y)]\,\Delta t + v^{1/2}(y)\, B(\Delta t) + \sum_{n=1}^{N(\Delta t)} Z_n \right) + o(\Delta t) \right]$$
$$= \inf_{\substack{u[X_u(t), Y_u(t)] \\ 0 \le t \le \Delta t}} E\left[ \left( \tfrac{1}{2}\, q\, u^2(x,y) + \theta \right) \Delta t + F\!\left( x + \rho(x,y)\,\Delta t,\; y + [f(y) + b\, u(x,y)]\,\Delta t + v^{1/2}(y)\, B(\Delta t) + \sum_{n=1}^{N(\Delta t)} Z_n \right) + o(\Delta t) \right].$$
Next, let
$$\xi := x + \rho(x,y)\,\Delta t$$
and
$$\eta := y + [f(y) + b\, u(x,y)]\,\Delta t + v^{1/2}(y)\, B(\Delta t).$$
We have
$$E\left[ F\!\left( \xi,\; \eta + \sum_{n=1}^{N(\Delta t)} Z_n \right) \right] = E\left[ E\left[ F\!\left( \xi,\; \eta + \sum_{n=1}^{N(\Delta t)} Z_n \right) \,\middle|\, N(\Delta t) \right] \right].$$
Since N ( Δ t ) has a Poisson distribution with parameter λ Δ t , we can write that
$$P[N(\Delta t) = 1] = \lambda\, \Delta t\, e^{-\lambda \Delta t} = \lambda\, \Delta t + o(\Delta t)$$
and
$$P[N(\Delta t) \ge 2] = o(\Delta t).$$
Hence,
$$E\left[ F\!\left( \xi,\; \eta + \sum_{n=1}^{N(\Delta t)} Z_n \right) \right] = E[F(\xi, \eta)]\,(1 - \lambda\, \Delta t) + E[F(\xi, \eta + Z_1)]\, \lambda\, \Delta t + o(\Delta t).$$
Now, assuming that F ( x , y ) is twice differentiable with respect to x and to y, making use of Taylor’s formula for functions of two variables, we obtain that
$$F\!\left( x + \rho(x,y)\,\Delta t,\; y + [f(y) + b\, u(x,y)]\,\Delta t + v^{1/2}(y)\, B(\Delta t) \right) = F(x,y) + \rho(x,y)\,\Delta t\, F_x(x,y) + \left\{ [f(y) + b\, u(x,y)]\,\Delta t + v^{1/2}(y)\, B(\Delta t) \right\} F_y(x,y) + \tfrac{1}{2}\, [\rho(x,y)\,\Delta t]^2\, F_{xx}(x,y) + \tfrac{1}{2} \left\{ [f(y) + b\, u(x,y)]\,\Delta t + v^{1/2}(y)\, B(\Delta t) \right\}^2 F_{yy}(x,y) + [\rho(x,y)\,\Delta t] \left\{ [f(y) + b\, u(x,y)]\,\Delta t + v^{1/2}(y)\, B(\Delta t) \right\} F_{xy}(x,y) + o(\Delta t).$$
Furthermore, we have E [ B ( Δ t ) ] = 0 and E [ B 2 ( Δ t ) ] = V [ B ( Δ t ) ] = Δ t , which implies that
$$E[F(\xi, \eta)]\,(1 - \lambda\, \Delta t) = F(x,y) + \rho(x,y)\,\Delta t\, F_x(x,y) + [f(y) + b\, u(x,y)]\,\Delta t\, F_y(x,y) + \tfrac{1}{2}\, v(y)\,\Delta t\, F_{yy}(x,y) - F(x,y)\, \lambda\, \Delta t + o(\Delta t).$$
Similarly, we find that
$$E[F(\xi, \eta + Z_1)]\, \lambda\, \Delta t = E[F(x, y + Z_1)]\, \lambda\, \Delta t + o(\Delta t) = \lambda\, \Delta t \int_{-\infty}^{\infty} F(x, y+z)\, f_Z(z)\,dz + o(\Delta t).$$
Indeed, by independence, we have
$$E[F(\xi, \eta + Z_1)] = \int_{-\infty}^{\infty} E[F(\xi, \eta + z)]\, f_Z(z)\,dz.$$
Let $w := y + z$. We compute
$$E\left[ F\!\left( x + \rho(x,y)\,\Delta t,\; y + [f(y) + b\, u(x,y)]\,\Delta t + v^{1/2}(y)\, B(\Delta t) + z \right) \right] = F(x,w) + \rho(x,y)\,\Delta t\, F_x(x,w) + [f(y) + b\, u(x,y)]\,\Delta t\, F_w(x,w) + \tfrac{1}{2}\, v(y)\,\Delta t\, F_{ww}(x,w) + o(\Delta t),$$
so that
$$E\left[ F\!\left( x + \rho(x,y)\,\Delta t,\; y + [f(y) + b\, u(x,y)]\,\Delta t + v^{1/2}(y)\, B(\Delta t) + z \right) \right] \lambda\, \Delta t = F(x,w)\, \lambda\, \Delta t + o(\Delta t) = F(x, y+z)\, \lambda\, \Delta t + o(\Delta t).$$
Thus,
$$E[F(\xi, \eta + Z_1)]\, \lambda\, \Delta t = \int_{-\infty}^{\infty} \left[ F(x, y+z)\, \lambda\, \Delta t + o(\Delta t) \right] f_Z(z)\,dz = E[F(x, y + Z_1)]\, \lambda\, \Delta t + o(\Delta t).$$
From Equation (31) and the above results, we deduce that
$$0 = \inf_{\substack{u[X_u(t), Y_u(t)] \\ 0 \le t \le \Delta t}} \left\{ \left( \tfrac{1}{2}\, q\, u^2(x,y) + \theta \right) \Delta t + \rho(x,y)\,\Delta t\, F_x(x,y) + [f(y) + b\, u(x,y)]\,\Delta t\, F_y(x,y) + \tfrac{1}{2}\, v(y)\,\Delta t\, F_{yy}(x,y) - F(x,y)\, \lambda\, \Delta t + \lambda\, \Delta t \int_{-\infty}^{\infty} F(x, y+z)\, f_Z(z)\,dz + o(\Delta t) \right\}.$$
Dividing both sides of the above equation by Δ t and letting Δ t decrease to zero, we obtain the dynamic programming equation
$$0 = \inf_{u(x,y)} \left\{ \tfrac{1}{2}\, q\, u^2(x,y) + \theta + \rho(x,y)\, F_x(x,y) + [f(y) + b\, u(x,y)]\, F_y(x,y) + \tfrac{1}{2}\, v(y)\, F_{yy}(x,y) - \lambda\, F(x,y) + \lambda \int_{-\infty}^{\infty} F(x, y+z)\, f_Z(z)\,dz \right\}.$$
From the preceding equation, we find that the optimal control u * ( x , y ) can be expressed as follows:
$$u^*(x,y) = -\frac{b}{q}\, F_y(x,y).$$
Substituting the optimal control into Equation (46), we obtain Equation (28).
Finally, the boundary condition (29) follows at once from the definition of F ( x , y ) , since T ( x , y ) = 0 if ( x , y ) C . □
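The pointwise minimization that produces $u^*$ can be checked symbolically: only the terms $\tfrac{1}{2} q u^2 + b\, u\, F_y$ of the dynamic programming equation involve the control $u$, so the minimizer follows from elementary calculus. A small sympy verification (an illustrative check, not part of the paper):

```python
import sympy as sp

# Symbols: u is the control, F_y stands for the partial derivative F_y(x, y)
u, b, Fy = sp.symbols('u b F_y', real=True)
q = sp.Symbol('q', positive=True)

# Only these terms of the dynamic programming equation depend on u;
# theta, rho*F_x, f*F_y, (1/2)v*F_yy and the jump integral are constants in u.
H = sp.Rational(1, 2) * q * u**2 + b * u * Fy

crit = sp.solve(sp.diff(H, u), u)[0]          # stationary point in u
assert sp.simplify(crit + b * Fy / q) == 0    # equals -(b/q) F_y
assert sp.diff(H, u, 2) == q                  # q > 0, so it is a minimum
```

Because the second derivative is $q > 0$, the stationary point is indeed the infimum, confirming the expression for $u^*(x,y)$.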
Remark 2.
Suppose that we set u [ X u ( s ) , Y u ( s ) ] equal to zero in Equation (25) and that we replace the cost function J ( x , y ) defined in Equation (26) by
$$J_0(x, y, t_0) := \int_{t_0}^{\infty} e^{-\alpha t}\, g(t; t_0, x, y)\,dt,$$
where g ( t ; t 0 , x , y ) is the pdf of T ( x , y ) when the starting time is equal to t 0 . Then, since J 0 ( x , y , t 0 ) is actually a deterministic function, we may write that
$$\Phi(x, y, t_0) := E[J_0(x, y, t_0)] = J_0(x, y, t_0) = M(x, y, t_0; \alpha).$$
Proceeding as in the above proof, we find that
$$0 = e^{-\alpha t_0}\, g(t_0; t_0, x, y) + \Phi_{t_0}(x, y, t_0) + \rho(x,y)\, \Phi_x(x, y, t_0) + f(y)\, \Phi_y(x, y, t_0) + \tfrac{1}{2}\, v(y)\, \Phi_{yy}(x, y, t_0) - \lambda\, \Phi(x, y, t_0) + \lambda \int_{-\infty}^{\infty} \Phi(x, y+z, t_0)\, f_Z(z)\,dz.$$
We have $g(t_0; t_0, x, y) = 0$ for $(x,y) \in C$. Moreover, using the Leibniz integral rule,
$$\Phi_{t_0}(x, y, t_0) = -e^{-\alpha t_0}\, g(t_0; t_0, x, y) + \int_{t_0}^{\infty} e^{-\alpha t}\, g_{t_0}(t; t_0, x, y)\,dt = 0 - \int_{t_0}^{\infty} e^{-\alpha t}\, g_t(t; t_0, x, y)\,dt$$
$$= -\left. e^{-\alpha t}\, g(t; t_0, x, y) \right|_{t_0}^{\infty} - \alpha \int_{t_0}^{\infty} e^{-\alpha t}\, g(t; t_0, x, y)\,dt = -\alpha\, M(x, y, t_0; \alpha),$$
where we used the fact that $g_{t_0}(t; t_0, x, y) = -g_t(t; t_0, x, y)$ because the two-dimensional process $(X_u(t), Y_u(t))$ is time-invariant. Hence, setting $t_0$ equal to zero, we retrieve Equation (10).
In the case of the killed processes considered in Section 2, the integro-differential Equation (28) reduces to the non-linear PDE
$$0 = \theta - \frac{b^2}{2q}\, F_y^2(x,y) + \rho(x,y)\, F_x(x,y) + f(y)\, F_y(x,y) + \tfrac{1}{2}\, v(y)\, F_{yy}(x,y) - \lambda\, F(x,y).$$
The boundary condition remains the one in Equation (29).
In the next section, a particular problem will be treated. We will find the exact optimal control when the parameter λ is equal to zero, so that there are no jumps, and a numerical solution will be computed in the case when λ > 0 .

4. A Particular Problem

We consider the process ( X ( t ) , Y ( t ) ) , defined by
$$X(t) = X(0) + \int_0^t [Y(s) - X(s)]\,ds,$$
$$Y(t) = Y(0) + B(t) + \sum_{n=1}^{N(t)} Z_n.$$
That is, $\{Y(t), t \ge 0\}$ is a standard Brownian motion with jumps. Moreover, we can write that
$$X'(t) = Y(t) - X(t),$$
which implies that
$$X(t) = e^{-t} \left( X(0) + \int_0^t Y(s)\, e^{s}\,ds \right).$$
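This variation-of-constants representation admits a simple deterministic sanity check: replace the random path $Y(s)$ by a fixed function (here $Y(s) = s$, an arbitrary illustrative choice) and compare a numerical integration of $X'(t) = Y(t) - X(t)$ with the closed form.

```python
import numpy as np
from scipy.integrate import solve_ivp

x0, t_end = 2.0, 1.5
Y = lambda s: s                      # deterministic stand-in for the path Y(s)

# Integrate X'(t) = Y(t) - X(t) numerically ...
sol = solve_ivp(lambda t, x: Y(t) - x, (0.0, t_end), [x0],
                rtol=1e-10, atol=1e-12)

# ... and compare with X(t) = e^{-t} (X(0) + int_0^t Y(s) e^s ds).
# For Y(s) = s the integral is e^t (t - 1) + 1, hence
# X(t) = t - 1 + e^{-t} (X(0) + 1).
closed = t_end - 1.0 + np.exp(-t_end) * (x0 + 1.0)
assert abs(sol.y[0, -1] - closed) < 1e-8
```

The same identity holds path by path for the stochastic $Y$, since the $x$-equation is an ordinary linear ODE driven by whatever trajectory $Y(\cdot)$ happens to take.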
Let
$$T(x,y) = \inf\{t > 0 : Y(t) - X(t) = k_1 \text{ or } k_2 \mid (X(0), Y(0)) = (x, y)\},$$
where $0 \le k_1 < y - x < k_2$. Notice that $\rho[X(t), Y(t)] = Y(t) - X(t) > 0$ in the continuation region, so that $X(t)$ is strictly increasing with time.
Next, we define
$$Z_1 = \begin{cases} x - y + k_1 & \text{with probability } p_0 \in (0, 1), \\ x - y + k_2 & \text{with probability } 1 - p_0. \end{cases}$$
Thus, $Z_1$ is a discrete random variable such that at time $\tau_1$ the process will leave the continuation region, if it has not already done so. We can write that
$$f_Z(z) = p_0\, \delta(z - x + y - k_1) + (1 - p_0)\, \delta(z - x + y - k_2),$$
where δ ( · ) is the Dirac delta function.
We deduce from Equation (19) that the moment-generating function of T ( x , y ) satisfies the PDE
$$\alpha M(x,y) = \tfrac{1}{2}\, M_{yy}(x,y) + (y - x)\, M_x(x,y) + \lambda\, [1 - M(x,y)].$$
Based on this equation and the boundary conditions $M(x,y) = 1$ if $y - x = k_1$ or $k_2$, we look for a solution of the form
$$M(x,y) = N(r),$$
where $r := y - x$. This is an application of the method of similarity solutions, and $r$ is called the similarity variable. For the method to work, both the equation and the boundary conditions must be expressed in terms of $r$ (after simplification). Here, we find that Equation (60) reduces to the ordinary differential equation (ODE)
$$\alpha N(r) = \tfrac{1}{2}\, N''(r) - r\, N'(r) + \lambda\, [1 - N(r)],$$
subject to the boundary conditions $N(k_i) = 1$, for $i = 1, 2$. With the help of the mathematical software program Maple, we find that the general solution of the above equation can be written as
$$N(r) = c_1\, r\, M\!\left( \frac{1 + \alpha + \lambda}{2}, \frac{3}{2}, r^2 \right) + c_2\, r\, U\!\left( \frac{1 + \alpha + \lambda}{2}, \frac{3}{2}, r^2 \right) + \frac{\lambda}{\alpha + \lambda},$$
where M ( · , · , · ) and U ( · , · , · ) are Kummer functions. The constants c 1 and c 2 are uniquely determined from the boundary conditions N ( k 1 ) = N ( k 2 ) = 1 .
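The Kummer-function form can be checked numerically against the ODE. The sketch below evaluates one branch of the general solution ($c_2 = 0$ and an arbitrary $c_1$, so the boundary conditions are not fitted) with scipy's `hyp1f1` and verifies by finite differences that the residual of $\alpha N = \tfrac{1}{2} N'' - r N' + \lambda(1 - N)$ vanishes; matching $N(k_1) = N(k_2) = 1$ would in addition require the $U$ branch.

```python
import numpy as np
from scipy.special import hyp1f1   # Kummer's confluent function M(a, b, x)

alpha, lam = 1.0, 1.0              # illustrative parameter values
a = (1.0 + alpha + lam) / 2.0      # Kummer parameter (1 + alpha + lambda)/2

def N(r, c1=0.7):
    # One homogeneous branch (c2 = 0 for simplicity) plus the particular
    # solution lambda / (alpha + lambda); c1 is an arbitrary constant.
    return c1 * r * hyp1f1(a, 1.5, r**2) + lam / (alpha + lam)

# Residual of alpha*N = (1/2) N'' - r N' + lambda (1 - N), by central differences
h = 1e-4
for r in np.linspace(0.2, 0.8, 7):
    d1 = (N(r + h) - N(r - h)) / (2 * h)
    d2 = (N(r + h) - 2 * N(r) + N(r - h)) / h**2
    resid = 0.5 * d2 - r * d1 + lam * (1.0 - N(r)) - alpha * N(r)
    assert abs(resid) < 1e-5
```

Any linear combination of the $M$ and $U$ branches plus the constant $\lambda/(\alpha+\lambda)$ passes the same residual test; the boundary conditions only serve to pin down $c_1$ and $c_2$.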
Since, as noted in Section 2 (see Equation (18)), $T(x,y) = \min\{T_0, \tau_1\}$, when $\lambda$ is large, the function $M(x,y;\alpha)$ should be close to the moment-generating function of an exponential random variable with parameter $\lambda$, namely
$$M_0(\alpha) := \int_0^{\infty} e^{-\alpha t}\, \lambda\, e^{-\lambda t}\,dt = \frac{\lambda}{\alpha + \lambda}.$$
In Figure 1, we present the functions $M_0(\alpha)$ and $M(x,y;\alpha)$ for $\alpha \in (0, 10)$, when $\lambda = 1$, $k_1 = 0$, $k_2 = 1$ and $y - x = 0.5$. We observe that the two functions differ significantly. However, the two functions are very similar when $\lambda = 20$, as can be observed in Figure 2. When $\lambda = 100$, $M_0(\alpha)$ and $M(x,y;\alpha)$ practically coincide for $\alpha \in (0, 10)$.
Next, the function m ( x , y ) = E [ T ( x , y ) ] satisfies the PDE (see Equation (20))
$$-1 = \tfrac{1}{2}\, m_{yy}(x,y) + (y - x)\, m_x(x,y) - \lambda\, m(x,y),$$
subject to $m(x,y) = 0$ if $y - x = k_1$ or $k_2$. Setting $m(x,y) = n(r)$, we obtain the ODE
$$-1 = \tfrac{1}{2}\, n''(r) - r\, n'(r) - \lambda\, n(r),$$
with $n(k_i) = 0$, for $i = 1, 2$. We find that
$$n(r) = c_1\, r\, M\!\left( \frac{1 + \lambda}{2}, \frac{3}{2}, r^2 \right) + c_2\, r\, U\!\left( \frac{1 + \lambda}{2}, \frac{3}{2}, r^2 \right) + \frac{1}{\lambda}.$$
The particular solution that satisfies the boundary conditions n ( 0 ) = n ( 1 ) = 0 is presented in Figure 3.
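Instead of fitting $c_1$ and $c_2$ in the Kummer form, the two-point boundary-value problem for $n(r)$ can also be solved directly, for instance with scipy's `solve_bvp`; this is a numerical alternative, not the paper's method, shown here for the Figure 3 parameters $\lambda = 1$, $k_1 = 0$, $k_2 = 1$.

```python
import numpy as np
from scipy.integrate import solve_bvp

lam = 1.0

# -1 = (1/2) n'' - r n' - lam n   =>   n'' = 2 (r n' + lam n - 1)
def ode(r, y):
    n, dn = y
    return np.vstack([dn, 2.0 * (r * dn + lam * n - 1.0)])

def bc(ya, yb):
    return np.array([ya[0], yb[0]])    # n(0) = n(1) = 0

r = np.linspace(0.0, 1.0, 41)
sol = solve_bvp(ode, bc, r, np.zeros((2, r.size)))
assert sol.status == 0

n = sol.sol(r)[0]
# E[T] is positive inside (k1, k2) and cannot exceed E[tau_1] = 1/lam
assert 0.0 < n.max() < 1.0 / lam
```

The resulting profile is positive in the interior and bounded by $1/\lambda$, consistent with $T(x,y) = \min\{T_0, \tau_1\}$ and with the shape shown in Figure 3.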
Finally, let
$$p(x,y) = P[Y(T) - X(T) = k_1 \mid X(0) = x, Y(0) = y].$$
This function is a solution of the PDE
$$0 = \tfrac{1}{2}\, p_{yy}(x,y) + (y - x)\, p_x(x,y) + \lambda\, [p_0 - p(x,y)].$$
Assuming that p ( x , y ) = q ( r ) , we obtain the ODE
$$0 = \tfrac{1}{2}\, q''(r) - r\, q'(r) + \lambda\, [p_0 - q(r)],$$
whose general solution is
$$q(r) = c_1\, r\, M\!\left( \frac{1 + \lambda}{2}, \frac{3}{2}, r^2 \right) + c_2\, r\, U\!\left( \frac{1 + \lambda}{2}, \frac{3}{2}, r^2 \right) + p_0.$$
The solution that satisfies the boundary conditions q ( 0 ) = 1 and q ( 1 ) = 0 is shown in Figure 4, when λ = 1 and p 0 = 1 / 2 .
To conclude this section, we will try to find the optimal control of the two-dimensional process ( X u ( t ) , Y u ( t ) ) defined by
$$X_u(t) = X_u(0) + \int_0^t [Y_u(s) - X_u(s)]\,ds,$$
$$Y_u(t) = Y_u(0) + \int_0^t b\, u[X_u(s), Y_u(s)]\,ds + B(t) + \sum_{n=1}^{N(t)} Z_n.$$
To do so, we must solve the non-linear second-order PDE
$$0 = \theta - \frac{b^2}{2q}\, F_y^2(x,y) + (y - x)\, F_x(x,y) + \tfrac{1}{2}\, F_{yy}(x,y) - \lambda\, F(x,y),$$
subject to $F(x,y) = 0$ if $y - x = k_1$ or $k_2$.
As above, we make use of the method of similarity solutions. We look for a solution of the form $F(x,y) = G(r)$, with $r := y - x$. Equation (74) becomes
$$0 = \theta - \frac{b^2}{2q}\, [G'(r)]^2 - r\, G'(r) + \tfrac{1}{2}\, G''(r) - \lambda\, G(r).$$
If λ = 0 , Maple is able to obtain the general solution of the preceding equation:
$$G(r) = \frac{q}{b^2} \left[ r^2 + \ln(\Delta_1 / \Delta_2) \right],$$
where
$$\Delta_1 := b^2\, c_1\, M\!\left( \frac{b^2 \theta + q}{2q}, \frac{3}{2}, r^2 \right) + c_2\, U\!\left( \frac{b^2 \theta + q}{2q}, \frac{3}{2}, r^2 \right)$$
and
$$\Delta_2 := (b^2 \theta - 2q)\, U\!\left( \frac{b^2 \theta + q}{2q}, \frac{3}{2}, r^2 \right) M\!\left( \frac{b^2 \theta - q}{2q}, \frac{3}{2}, r^2 \right) - 2q\, M\!\left( \frac{b^2 \theta + q}{2q}, \frac{3}{2}, r^2 \right) U\!\left( \frac{b^2 \theta - q}{2q}, \frac{3}{2}, r^2 \right).$$
The constants c 1 and c 2 are determined by making use of the boundary conditions G ( k 1 ) = G ( k 2 ) = 0 .
When λ > 0 , Maple and Mathematica are unable to provide an analytical expression for the solution of Equation (75). It is, however, not difficult to obtain a numerical solution for any choice of the parameters. For instance, if we choose b = q = θ = λ = 1 , k 1 = 1 and k 2 = 2 , we obtain the function G ( r ) , as shown in Figure 5, together with the function obtained when λ = 0 . Finally, in Figure 6, we present the corresponding optimal controls.
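One possible way to obtain such a numerical solution (a sketch with scipy's `solve_bvp`, not necessarily the computation behind Figures 5 and 6) is to rewrite the second-order equation as a first-order system and impose $G(k_1) = G(k_2) = 0$, here for the stated values $b = q = \theta = \lambda = 1$, $k_1 = 1$, $k_2 = 2$:

```python
import numpy as np
from scipy.integrate import solve_bvp

b, q, theta, lam = 1.0, 1.0, 1.0, 1.0
k1, k2 = 1.0, 2.0

# 0 = theta - b^2/(2q) G'^2 - r G' + (1/2) G'' - lam G
# =>  G'' = 2 [lam G + b^2/(2q) G'^2 + r G' - theta]
def ode(r, y):
    G, dG = y
    return np.vstack([dG,
                      2.0 * (lam * G + (b**2 / (2.0 * q)) * dG**2
                             + r * dG - theta)])

def bc(ya, yb):
    return np.array([ya[0], yb[0]])     # G(k1) = G(k2) = 0

r = np.linspace(k1, k2, 81)
sol = solve_bvp(ode, bc, r, np.zeros((2, r.size)))
assert sol.status == 0

G, dG = sol.sol(r)
u_star = -(b / q) * dG                  # optimal control u*(x, y) = -(b/q) F_y
assert G.max() > 0.0                    # expected cost is positive for theta > 0
```

Setting `lam = 0.0` in the same script recovers the jump-free case, which can then be compared with the closed-form Kummer expression obtained with Maple.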

5. Conclusions

In this paper, we have considered degenerate two-dimensional jump-diffusion processes, defined in such a way that the first component of the vector ( X ( t ) , Y ( t ) ) is a strictly increasing or decreasing function with respect to time. This kind of process is more realistic than a one-dimensional diffusion or jump-diffusion process in many applications, especially when X ( t ) represents the age or wear of a certain device. We could generalize the model by incorporating more than one diffusion process Y ( t ) . The diffusion processes could model the various variables that influence X ( t ) . For example, in the case of wear, important factors to consider are temperature, speed of use, etc.
In Section 2, we obtained equations for functions defined in terms of a first-passage time for the processes ( X ( t ) , Y ( t ) ) . Moreover, we treated an optimal control problem for these processes in Section 3. Finally, a particular problem was solved explicitly in Section 4.
As mentioned in Section 1, there are few first-passage problems for one-dimensional jump-diffusion processes that have been solved exactly and explicitly so far. Here, we were able to find exact analytical expressions for quantities defined in terms of a first-passage time for a (degenerate) two-dimensional jump-diffusion process. Furthermore, in Section 2, we saw that the processes considered in this paper could serve as models in real-life applications, such as the remaining amount of deterioration that a given device can undergo before it needs to be repaired or replaced.
In general, to solve this type of problem, it is necessary to find the solution of an integro-differential equation with partial derivatives. We considered the case when the process leaves the continuation region at the latest when the first event of the Poisson process occurs. In this case, the equation to be solved becomes a partial differential equation. Using the method of similarity solutions, it is sometimes possible to reduce this PDE to an ODE. It should also be possible to find a numerical solution to any particular problem.
As a follow-up to this work, we would like to find exact analytical solutions to problems where the equations to be solved are integro-differential equations; for example, by trying to transform the integro-differential equations into differential equations.

Funding

This research was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). The author also wishes to thank the anonymous reviewers of this paper for their constructive comments.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Höpfner, R. On a set of data for the membrane potential in a neuron. Math. Biosci. 2007, 207, 275–301. [Google Scholar] [CrossRef] [PubMed]
  2. Jahn, P.; Berg, R.W.; Hounsgaard, J.; Ditlevsen, S. Motoneuron membrane potentials follow a time inhomogeneous jump diffusion process. J. Comput. Neurosci. 2011, 31, 563–579. [Google Scholar] [CrossRef] [PubMed]
  3. Melanson, A.; Longtin, A. Data-driven inference for stationary jump-diffusion processes with application to membrane voltage fluctuations in pyramidal neurons. J. Math. Neurosci. 2019, 9, 30. [Google Scholar] [CrossRef] [PubMed]
  4. Lefebvre, M. A first-passage problem for exponential integrated diffusion processes. J. Stoch. Anal. 2022, 3, 2. [Google Scholar] [CrossRef]
  5. Lachal, A. L’intégrale du mouvement brownien. J. Appl. Probab. 1993, 30, 17–27. [Google Scholar] [CrossRef]
  6. Makasu, C. Exit probability for an integrated geometric Brownian motion. Stat. Probab. Lett. 2009, 79, 1363–1365. [Google Scholar] [CrossRef]
  7. Metzler, A. The Laplace transform of hitting times of integrated geometric Brownian motion. J. Appl. Probab. 2013, 50, 295–299. [Google Scholar] [CrossRef]
  8. Caravelli, F.; Mansour, T.; Sindoni, L.; Severini, S. On moments of the integrated exponential Brownian motion. Eur. Phys. J. Plus 2016, 131, 245. [Google Scholar] [CrossRef]
  9. Levy, E. On the moments of the integrated geometric Brownian motion. J. Comput. Appl. Math. 2018, 342, 263–273. [Google Scholar] [CrossRef]
  10. Cox, D.R.; Miller, H.D. The Theory of Stochastic Processes; Methuen: London, UK, 1965. [Google Scholar]
  11. Kou, S.G.; Wang, H. First passage times of a jump diffusion process. Adv. Appl. Probab. 2003, 35, 504–531. [Google Scholar] [CrossRef]
  12. Chen, Y.-T.; Sheu, Y.-C.; Chang, M.-C. A note on first passage functionals for hyper-exponential jump-diffusion processes. Electron. Commun. Probab. 2013, 18, 8. [Google Scholar] [CrossRef]
  13. Yin, C.; Wen, Y.; Zong, Z.; Shen, Y. The first passage time problem for mixed-exponential jump processes with applications in insurance and finance. Abstr. Appl. Anal. 2014, 2014, 571724. [Google Scholar] [CrossRef]
  14. Karnaukh, I. Exit problems for Kou’s process in a Markovian environment. Theory Stoch. Process. 2020, 25, 37–60. [Google Scholar] [CrossRef]
  15. Lefebvre, M. Exit problems for jump-diffusion processes with uniform jumps. J. Stoch. Anal. 2020, 1, 5. [Google Scholar] [CrossRef]
  16. Abundo, M. On first-passage times for one-dimensional jump-diffusion processes. Probab. Math. Stat. 2000, 20, 399–423. [Google Scholar]
  17. Belkaid, A.; Utzet, F. Efficient computation of first passage times in Kou’s jump-diffusion model. Methodol. Comput. Appl. Probab. 2017, 19, 957–971. [Google Scholar] [CrossRef]
  18. Di Crescenzo, A.; Di Nardo, E.; Ricciardi, L.M. On certain bounds for first-crossing-time probabilities of a jump-diffusion process. Sci. Math. Jpn. 2006, 64, 449–460. [Google Scholar]
  19. Fernández, L.; Hieber, P.; Scherer, M. Double-barrier first-passage times of jump-diffusion processes. Monte Carlo Methods Appl. 2013, 19, 107–141. [Google Scholar] [CrossRef]
  20. D’Onofrio, G.; Lanteri, A. Approximating the first passage time density of diffusion processes with state-dependent jumps. Fractal Fract. 2023, 7, 30. [Google Scholar] [CrossRef]
  21. Lefebvre, M. Exact solutions to optimal control problems for Wiener processes with exponential jumps. J. Stoch. Anal. 2021, 2, 1. [Google Scholar] [CrossRef]
  22. Rishel, R. Controlled wear process: Modeling optimal control. IEEE Trans. Autom. Control 1991, 36, 1100–1102. [Google Scholar] [CrossRef]
  23. Whittle, P. Optimization over Time; Wiley: Chichester, UK, 1982; Volume 1. [Google Scholar]
  24. Whittle, P. Risk-Sensitive Optimal Control; Wiley: Chichester, UK, 1990. [Google Scholar]
Figure 1. Functions $M_0(\alpha)$ (below) and $M(x,y;\alpha)$ for $\alpha$ in the interval $(0, 10)$, when $\lambda = 1$, $k_1 = 0$, $k_2 = 1$ and $y - x = 0.5$.
Figure 2. Functions $M_0(\alpha)$ (below) and $M(x,y;\alpha)$ for $\alpha$ in the interval $(0, 10)$, when $\lambda = 20$, $k_1 = 0$, $k_2 = 1$ and $y - x = 0.5$.
Figure 3. Function $n(r)$ for $0 \le r \le 1$, when $\lambda = 1$, $k_1 = 0$ and $k_2 = 1$.
Figure 4. Function $q(r)$ for $0 \le r \le 1$, when $\lambda = 1$, $k_1 = 0$, $k_2 = 1$ and $p_0 = 1/2$.
Figure 5. Function $G(r)$ for $1 \le r \le 2$, when $b = q = \theta = 1$, $k_1 = 1$, $k_2 = 2$, and $\lambda = 1$ (squares) and $\lambda = 0$ (circles).
Figure 6. Optimal control in the interval $[1, 2]$ when $b = q = \theta = 1$, $k_1 = 1$, $k_2 = 2$, and $\lambda = 1$ (squares) and $\lambda = 0$ (circles).