Article

Some Exact Results on Lindley Process with Laplace Jumps

Department of Mathematics “G.Peano”, University of Torino, Via Carlo Alberto 10, 10123 Torino, Italy
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(7), 1166; https://doi.org/10.3390/math13071166
Submission received: 24 February 2025 / Revised: 26 March 2025 / Accepted: 31 March 2025 / Published: 1 April 2025

Abstract: We consider a Lindley process with Laplace-distributed space increments. We obtain closed-form recursive expressions for the density function of the position of the process and for its first exit time distribution from the domain $[0, h]$. We illustrate the results in terms of the parameters of the process. An example of the application of the analytical results is discussed in the framework of the CUSUM method.
MSC:
60G50; 60K25; 60G40

1. Introduction

The random walk has attracted mathematicians' attention both for its analytical properties and for its role in modeling, ever since it was first mentioned by Pearson [1]. Presently, random walks are universally recognized as the simplest example of a stochastic process and are used as simplified versions of more complex models. For modeling purposes, it is often necessary to introduce suitable boundary conditions, constraining the evolution of the process to specific regions. Typical conditions are absorbing or reflecting boundaries, and the involved random walk may be in one or more dimensions. The recurrence of the states of the free or bounded process in one or more dimensions has been the subject of past and recent studies, and results are available for the free random walk [2] or in the presence of specific boundaries [3].
The distribution of a simple random walk at time n, $n = 1, 2, \ldots$, has a simple expression while, in the case of the distribution of a random walk constrained by two absorbing boundaries, it requires considerable calculation effort [4].
Classical methods for the study of random walks rely on their Markov property. This happens, for example, when the focus is on recurrence properties in one or more dimensions. Alternatively, some results can be obtained by means of diffusion limits, which give good approximations under specific hypotheses [5,6,7,8] or allow the determination of asymptotic results for problems such as the maximum of a random walk or the first exit time across general boundaries [9]. However, the presence of boundaries often gives rise to combinatorial problems and discourages the search for closed-form solutions. Furthermore, the switch from unitary jumps to continuous jumps introduces important computational difficulties. There are thousands of papers about random walks, and it is impossible to refer to all of them; we limit ourselves to citing the recent excellent review by Dshalalow and White [10], which lists many of the most important available results.
In this paper, we focus on a particular constrained random walk: the Lindley process [11]. In particular, we prove a set of analytical results about it. This process is a discrete-time random walk characterized by continuous jumps with a specified distribution.
Without loss of generality, we define it as
(1) $W_n = \max(0,\ W_{n-1} + Z_{n-1}), \quad n \ge 1; \qquad W_0 = x \ge 0,$
where $\{Z_n,\ n \ge 0\}$ are i.i.d. random variables.
Modeling interest in the Lindley process arises in several fields. Historically, it was introduced in [11] to describe the waiting times experienced by customers in a queue over time, and it has been extensively studied in recent decades [12,13,14].
This process also arises in a reliability context and in a sequential test framework, through the study of CUSUM tests [15]. Moreover, the same process appears in problems related to resource management [16] and in highlighting atypical regions in biological sequences, transferring biological sequence analysis tools to break-point detection for on-line monitoring [17].
Many contributions concerning properties of the Lindley process are motivated by its important role in applications [2,11,16,18,19]. A large part of the papers about the Lindley process concerns its asymptotic behavior. Using the strong law of large numbers, in 1952, Lindley [11], in the framework of queuing theory, showed that the process admits a limit distribution as n diverges if and only if $E[Z] < 0$, i.e., if the customers' arrival rate is slower than the service one. Furthermore, when $E[Z] = 0$, the ratio $W_n/\sqrt{n}$ converges to the modulus of a Gaussian random variable. Lindley also showed that the limit distribution is the solution of an integral equation; Kendall solved it in the case of exponential arrivals [14], while Erlang determined its expression when both arrival and service times are exponential [20]. Simulations were used in [21] to determine the invariant distribution when the $Z_i$, $i = 1, 2, \ldots$, are Laplace or Gaussian distributed. The analytical expression of the limit distribution was determined by Stadje [22] for the case of integer-valued increments. Recent contributions consider recurrence properties in higher dimensions [3,23,24] or the study of the rate of convergence of expected values of functions of the process.
Other contributions make use of the notion of stochastic ordering [25] or of continuous-time versions of the Lindley process [12]. Recently, the use of machine learning techniques has been proposed to learn the Lindley recursion (1) directly from the waiting time data of a G/G/1 queue [26]. Furthermore, Lakatos and Zbaganu [25], and Raducan, Lakatos and Zbaganu [27] introduced a class of computable Lindley processes for which the distribution of the space increments is a mixture of two distributions. To the best of our knowledge, no other analytical expressions are available for a Lindley process.
Concerning the first exit time problem for the Lindley process, it has been considered mainly in the framework of the CUSUM methodology, to detect the time at which a change in the parameters of the process happens [15,28,29,30]. In this context, the focus is on the Average Run Length, which corresponds to the exit time of the process across a boundary; in [31,32], this distribution and its expected value are determined when the $Z_i$, $i = 1, 2, \ldots$, are exponentially distributed with a shift. Markovich and Razumchick [33] investigated a problem related to first exit times, i.e., the appearance of clusters of extremes, defined as subsequent exceedances of high thresholds in a Lindley process.
Here, we consider a Lindley process characterized by Laplace-distributed space increments $Z \sim \mathrm{Laplace}(\mu, \sigma)$, i.e., Z distributed as a Laplace random variable with location parameter $\mu$ and scale parameter $\sigma$,
(2) $f_Z(z) = \dfrac{e^{-|z-\mu|/\sigma}}{2\sigma},$
whose mean and variance are $E(Z) = \mu$ and $V(Z) = 2\sigma^2$.
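For reproducibility, we note that the process (1) with increments (2) is immediate to simulate. The following minimal sketch (ours, not part of the companion software [34]; function name and parameter values are illustrative) uses NumPy, whose Laplace sampler shares the parametrization of (2):

```python
import numpy as np

def simulate_lindley(x0, mu, sigma, n_steps, rng=None):
    """Simulate one trajectory of the Lindley process (1)
    with Laplace(mu, sigma) increments (2)."""
    rng = rng or np.random.default_rng()
    # NumPy's laplace(loc, scale) has density exp(-|z - loc|/scale)/(2*scale),
    # which matches (2) with loc = mu and scale = sigma.
    z = rng.laplace(loc=mu, scale=sigma, size=n_steps)
    w = np.empty(n_steps + 1)
    w[0] = x0
    for n in range(1, n_steps + 1):
        w[n] = max(0.0, w[n - 1] + z[n - 1])
    return w

# Example: a trajectory with sigma = 1 and x = 1, as in Figure 1
path = simulate_lindley(x0=1.0, mu=-0.3, sigma=1.0, n_steps=100)
```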
In Figure 1, three trajectories of the Lindley process are plotted with $\sigma = 1$, $x = 1$, and different values of $\mu$. The Laplace distribution is often used to model the distribution of financial assets and market fluctuations. It is also used in signal processing and communication theory to model noise and other random processes. Additionally, the Laplace distribution has been used in image processing and speech recognition.
As in the case of the simple random walk, the behavior of the process changes dramatically according to the sign of $\mu$. Indeed, it is well known [9] that, when $\mu > 0$, all the states are transient, and the process drifts toward infinity. In the case $\mu = 0$, the process is null recurrent, and when $\mu < 0$, the process is positive recurrent and admits a limit distribution. Such a distribution is the solution of the Lindley integral equation [11]. Unfortunately, its analytical solution is unknown, and numerical methods become necessary.
The recurrence properties imply that the first exit time of the process W from $[0, h]$ is almost surely finite for each $h > x$, so the study of the first exit time distribution can be performed.
Here, our aim is to derive expressions for the distribution of the process as time evolves, together with its first exit time (FET) from $[0, h]$. Taking advantage of the presence of exponentials in the Laplace distribution, we prove recursive closed-form formulae for such distributions, and we show that different formulae hold on different parameter domains. We underline that the special form of the Laplace distribution allows the use of recursion in various steps of our proofs; this makes the extension to other types of distributions hard. In Section 2 and Section 3, we study the distribution of the process and its first exit times from $[0, h]$, respectively. These sections do not present proofs, which are postponed to Section 4 and Section 5. Due to the complexity of the derived exact formulae for the studied distributions, we complete our work by implementing the software necessary for fast computation of the formulae of interest. This software is open source and can be found in the GitHub repository [34]. In Section 6, we illustrate the role of the parameters of the Laplace distribution on the position and FET distributions of the Lindley process. Lastly, in Section 7, we present an application of our theoretical results in the CUSUM framework. In particular, we discuss a method to detect the change point when the data move from a symmetric Laplace distribution to an asymmetric one.

2. Distribution of $W_n$

Let us consider the Lindley process (1) with Laplace-distributed jumps (2). In the following, we will use the distribution of the process at time $n \ge 0$, which we denote by
$F_n(u|x) := F_{W_n}(u|x) = P(W_n \le u \mid W_0 = x),$
and, with an abuse of notation, we denote by
$f_n(u|x) := f_{W_n}(u|x) = \frac{\partial}{\partial u} P(W_n \le u \mid W_0 = x)$
the corresponding probability density function, where the derivative is understood in the distributional sense, since the $W_n$ are mixed random variables.
In the following, we make use of the Dirac delta function $\delta(u)$ with the convention that, for each $x > 0$,
$\int_0^x f(u)\,\delta(u)\,du = f(0).$
Since $W_n \ge 0$, the density $f_n(u|x) = 0$ for $u < 0$. Moreover, if $a > b$, the sum $\sum_{i=a}^{b} f_i = 0$.
Lemma 1.
The probability distribution function of $W_1$ for $\mu > -x$ is
(3) $F_1(u|x) = F_{W_1}(u|x) = \begin{cases} 0 & u < 0 \\ \frac{1}{2}\,e^{(u - x - \mu)/\sigma} & 0 \le u \le x + \mu \\ 1 - \frac{1}{2}\,e^{-(u - x - \mu)/\sigma} & u > x + \mu \end{cases}$
while, for $\mu \le -x$, it is
(4) $F_1(u|x) = F_{W_1}(u|x) = \begin{cases} 0 & u < 0 \\ 1 - \frac{1}{2}\,e^{-(u - x - \mu)/\sigma} & u \ge 0 \end{cases}$
The corresponding probability density function is
(5) $f_1(u|x) = f_{W_1}(u|x) = \begin{cases} \dfrac{e^{-|u - x - \mu|/\sigma}}{2\sigma}\,\mathbf{1}_{(0,+\infty)}(u) + \dfrac{e^{-(x+\mu)/\sigma}}{2}\,\delta(u) & x > -\mu \\[6pt] \dfrac{e^{-(u - x - \mu)/\sigma}}{2\sigma}\,\mathbf{1}_{(0,+\infty)}(u) + \left(1 - \dfrac{e^{(x+\mu)/\sigma}}{2}\right)\delta(u) & x \le -\mu \end{cases}$
where $\delta(u)$ is the Dirac delta function.
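A quick Monte Carlo sanity check of the atom at zero in the first case of the lemma (our sketch; the parameter values and sample size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x, mu, sigma = 1.0, 0.3, 1.0            # here x > -mu: first case of Lemma 1
w1 = np.maximum(0.0, x + rng.laplace(mu, sigma, size=10**6))
print(np.mean(w1 == 0.0))               # empirical mass of W_1 at zero
print(0.5 * np.exp(-(x + mu) / sigma))  # predicted atom e^{-(x+mu)/sigma}/2
```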
Remark 1.
Using the Markov property of the process W, the one-step transition probability density function, for $n \ge 0$, is
(6) $f(u, n+1 \mid y, n) := \frac{\partial}{\partial u} P(W_{n+1} \le u \mid W_n = y) = f_1(u|y) = \begin{cases} \dfrac{e^{-|u - y - \mu|/\sigma}}{2\sigma}\,\mathbf{1}_{(0,+\infty)}(u) + \dfrac{e^{-(y+\mu)/\sigma}}{2}\,\delta(u) & y > -\mu \\[6pt] \dfrac{e^{-(u - y - \mu)/\sigma}}{2\sigma}\,\mathbf{1}_{(0,+\infty)}(u) + \left(1 - \dfrac{e^{(y+\mu)/\sigma}}{2}\right)\delta(u) & y \le -\mu \end{cases}$
Notation 1.
In what follows, when not necessary, we will omit the dependence on the initial position of the process: $F_n(u|x) = F_n(u)$, $f_n(u|x) = f_n(u)$.
To determine the distribution of $W_n$, the computations change according to the sign of $\mu$. Theorem 1 gives the distribution for $\mu \ge 0$. When $\mu$ is negative, two different cases arise: $-x < \mu < 0$ (Theorem 2) and $\mu \le -x$ (Theorem 3).
Theorem 1.
For a Lindley process $\{W_n,\ n \ge 0\}$ characterized by Laplace increments with location parameter $\mu \ge 0$, the probability density function of the position is given by
(7a) $f_n(u) = \sum_{i=0}^{n+1} f_n^i(u), \qquad u \ge 0,\ n \ge 1$
(7b) $f_0(u) = \delta(u - x)$
where
(8a) $f_n^i(u) = \sum_{j=0}^{m_n^i-1}\frac{1}{(2\sigma)^n}\left[a_n(i,j)\,(u-n\mu)^j e^{u/\sigma} + b_n(i,j)\,(u-n\mu)^j e^{-u/\sigma}\right]\mathbf{1}_{I_n^i}(u), \qquad 1 \le i \le n+1$
(8b) $f_n^0(u) = c_n\,\delta(u)$
with $m_n^i = \min(n, i)$, and the intervals $I_n^i$, $i = 1, \ldots, n+1$, with $\bigcup_{i=1}^{n+1} I_n^i = (0, +\infty)$, are
(9a) $I_n^i = ((i-1)\mu,\ i\mu], \qquad i = 1, \ldots, n-1$
(9b) $I_n^n = ((n-1)\mu,\ n\mu + x]$
(9c) $I_n^{n+1} = (n\mu + x,\ +\infty).$
The coefficients $a_{n+1}(i,j)$, $b_{n+1}(i,j)$ and $c_{n+1}$ verify the following recursive relations for $1 \le i \le n+1$, $1 \le j \le m_n^i - 1$:
(10a) a n + 1 ( i , j ) = e μ σ k = j 1 m n i 1 1 ( 1 ) k + j a n ( ( i 1 ) , k ) k ! j ! 2 σ k j + 1 (10b) b n + 1 ( i , j ) = e μ σ k = j 1 m n i 1 1 b n ( ( i 1 ) , k ) k ! j ! 2 σ k j + 1 a n + 1 ( i , 0 ) = e μ σ j = i n + 1 A j ( 2 ) + k = 0 m n i 1 1 a n ( ( i 1 ) , k ) ( ( i 1 n ) μ ) k + 1 k + 1 + ( 1 ) k k ! σ 2 k + 1 (10c) b n ( ( i 1 ) , k ) e 2 n μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 ( ( i 1 n ) μ ) σ + c n δ i , 1 b n + 1 ( i , 0 ) = e μ σ j = 1 i 2 A j ( 1 ) k = 0 m n i 1 1 b n ( ( i 1 ) , k ) ( ( i 2 n ) μ ) k + 1 k + 1 k ! σ 2 k + 1 (10d) a n ( ( i 1 ) , k ) e 2 n μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 ( ( i 1 n ) μ ) σ + c n ( 1 δ i , 1 ) c n + 1 = e μ σ 2 ( 2 σ ) n j = 1 n + 1 k = 0 m n i 1 a n ( j , k ) ( y n μ ) k + 1 k + 1 I n j b n ( j , k ) e 2 n μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 ( y n μ ) σ I n j (10e) + ( 2 σ ) n c n
where  δ i , j  is the Kronecker delta and
(11a) A j ( 1 ) : = A j ( 1 ) ( y ) I n j = k = 0 m n j 1 a n ( j , k ) e 2 n μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 ( y n μ ) σ I n j + b n ( j , k ) ( y n μ ) k + 1 k + 1 I n j (11b) A j ( 2 ) : = A j ( 2 ) ( y ) I n j = k = 0 m n j 1 a n ( j , k ) ( y n μ ) k + 1 k + 1 I n j b n ( j , k ) e 2 n μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 ( y n μ ) σ I n j .
The initial values for the recursion of (10) are
(12a) $c_1 = \dfrac{e^{-(\mu+x)/\sigma}}{2}$
(12b) $a_1(1,0) = e^{-(\mu+x)/\sigma}$
(12c) $b_1(1,0) = a_1(2,0) = 0$
(12d) $b_1(2,0) = e^{(\mu+x)/\sigma}$
Remark 2.
Observe that the coefficients $a_n(n+1, k) \equiv 0$ for each admissible k. This prevents the terms in (10e) and (11b) from exploding.
The following corollary may be useful for computational purposes.
Corollary 1.
The constant coefficients $c_n$, $n = 1, 2, \ldots$, can also be obtained as
(13) $c_{n+1} = \sigma\, f_{n+1}^1(0^+).$
Remark 3.
Observe that the density (7) refers to a mixed random variable.
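Given the intricacy of the above recursions, an independent numerical cross-check is useful: the one-step kernel (6) can be iterated on a grid, tracking the atom at zero separately from the continuous part. The following sketch is ours and is not part of the companion software [34]; the grid size m and the truncation point u_max are arbitrary illustrative choices:

```python
import numpy as np

def laplace_cdf(z, mu, sigma):
    """Distribution function of the Laplace(mu, sigma) law (2)."""
    return np.where(z < mu, 0.5 * np.exp((z - mu) / sigma),
                    1.0 - 0.5 * np.exp(-(z - mu) / sigma))

def density_by_iteration(x, mu, sigma, n_steps, u_max=25.0, m=2000):
    """Iterate f_{n+1}(u) = int_0^inf f(u,n+1|y,n) f_n(y) dy on a grid,
    starting from f_0 = delta(u - x); returns grid, f_n, and the atom c_n."""
    u = np.linspace(0.0, u_max, m)
    du = u[1] - u[0]
    # continuous part of the one-step kernel, kern[i, j] = f_1(u_i | y_j)
    kern = np.exp(-np.abs(u[:, None] - u[None, :] - mu) / sigma) / (2 * sigma)
    f = np.exp(-np.abs(u - x - mu) / sigma) / (2 * sigma)  # continuous part of f_1
    atom = laplace_cdf(-x, mu, sigma)                      # c_1 = P(x + Z <= 0)
    for _ in range(n_steps - 1):
        f_new = kern @ f * du + atom * kern[:, 0]          # from density and from atom
        atom = np.sum(laplace_cdf(-u, mu, sigma) * f) * du + atom * laplace_cdf(0.0, mu, sigma)
        f = f_new
    return u, f, atom
```

The relation $c_{n+1} = \sigma f_{n+1}^1(0^+)$ of Corollary 1 can then be inspected directly on the grid output.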
Theorem 2.
For a Lindley process $\{W_n,\ n \ge 0\}$ characterized by Laplace increments with location parameter $-x < \mu < 0$, the probability density function of the position is given by
(14a) $f_n(u) = \sum_{i=0}^{2} f_n^i(u), \qquad u \ge 0,\ n \ge 1$
(14b) $f_0(u) = \delta(u - x)$
where
(15a) $f_n^i(u) = \sum_{j=0}^{n-1}\frac{1}{(2\sigma)^n}\left[a_n(i,j)\,(u-n\mu)^j e^{u/\sigma} + b_n(i,j)\,(u-n\mu)^j e^{-u/\sigma}\right]\mathbf{1}_{I_n^i}(u), \qquad i = 1, 2$
(15b) $f_n^0(u) = c_n\,\delta(u)$
and $I_n^i$, $i = 1, 2$, are
(16a) $I_n^1 = (0,\ \max(0, n\mu + x))$
(16b) $I_n^2 = (\max(0, n\mu + x),\ +\infty).$
  • If $x + n\mu > 0$, the coefficients $a_{n+1}(i,j)$, $b_{n+1}(i,j)$ verify the following recursive relations for $i = 1, 2$ and $1 \le j \le n$:
    (17a) a n + 1 ( i , j ) = e μ σ k = j 1 n 1 ( 1 ) k + j a n ( i , k ) k ! j ! σ 2 k j + 1 (17b) b n + 1 ( i , j ) = e μ σ k = j 1 n 1 b n ( i , k ) k ! j ! σ 2 k j + 1 a n + 1 ( 1 , 0 ) = e μ σ B 2 + k = 0 n 1 a n ( 1 , k ) ( x ) k + 1 k + 1 + ( 1 ) k k ! σ 2 k + 1 (17c) b n ( 1 , k ) e 2 n μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 x σ b n + 1 ( 1 , 0 ) = e μ σ k = 0 n 1 b n ( 1 , k ) ( n μ ) k + 1 k + 1 + k ! σ 2 k + 1 (17d) a n ( 1 , k ) e 2 n μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 n μ σ + ( 2 σ ) n c n (17e) a n + 1 ( 2 , 0 ) = 0 b n + 1 ( 2 , 0 ) = e μ σ B 1 + k = 0 n 1 b n ( 2 , k ) x k + 1 k + 1 + k ! σ 2 k + 1 (17f) a n ( 2 , k ) e 2 n μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 x σ + ( 2 σ ) n c n
    where
    (18a) B 1 = k = 0 n 1 a n ( 1 , k ) e 2 n μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 ( y n μ ) σ 0 x + n μ + b n ( 1 , k ) ( y n μ ) k + 1 k + 1 0 x + n μ (18b) B 2 = k = 0 n 1 b n ( 2 , k ) e 2 n μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 x σ
The values of the coefficients $c_{n+1}$ for $n \ge 1$ are
$c_{n+1} = \tilde{c}_{n+1} + \tilde{\tilde{c}}_{n+1}$
    where
    c ˜ n + 1 = 1 ( 2 σ ) n k = 0 n 1 a n ( 1 , k ) e 2 n μ σ σ k + 1 ( 1 ) k Γ 1 + k , ( y n μ ) σ 0 x + n μ + b n ( 1 , k ) e 2 n μ σ σ k + 1 Γ 1 + k , ( y n μ ) σ 0 x + n μ e μ σ 2 n + 1 σ n k = 0 n 1 a n ( 1 , k ) e 2 n μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 ( y n μ ) σ 0 x + n μ + b n ( 1 , k ) ( y n μ ) k + 1 k + 1 0 x + n μ + 1 ( 2 σ ) n k = 0 n 1 a n ( 2 , k ) e 2 n μ σ σ k + 1 ( 1 ) k Γ 1 + k , ( y n μ ) σ x + n μ μ + b n ( 2 , k ) e 2 n μ σ σ k + 1 Γ 1 + k , ( y n μ ) σ x + n μ μ e μ σ 2 n + 1 σ n k = 0 n 1 a n ( 2 , k ) e 2 n μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 ( y n μ ) σ x + n μ μ + b n ( 2 , k ) ( y n μ ) k + 1 k + 1 x + n μ μ + e μ σ 2 n + 1 σ n k = 0 n 1 b n ( 2 , k ) e 2 n μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 ( n + 1 ) μ σ + 1 e μ σ 2 c n 1 [ 0 , ( n + 1 ) μ ) ( x )
    and
    c ˜ ˜ n + 1 = 1 ( 2 σ ) n k = 0 n 1 a n ( 1 , k ) e 2 n μ σ σ k + 1 ( 1 ) k Γ 1 + k , ( y n μ ) σ 0 μ + b n ( 1 , k ) e 2 n μ σ σ k + 1 Γ 1 + k , ( y n μ ) σ 0 μ e μ σ 2 n + 1 σ n k = 0 n 1 a n ( 1 , k ) e 2 n μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 ( y n μ ) σ 0 μ + b n ( 1 , k ) ( y n μ ) k + 1 k + 1 0 μ + e μ σ 2 n + 1 σ n k = 0 n 1 a n ( 1 , k ) ( y n μ ) k + 1 k + 1 μ x + n μ + b n ( 1 , k ) e 2 n μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 ( y n μ ) σ μ x + n μ + e μ σ 2 n + 1 σ n k = 0 n 1 b n ( 2 , k ) e 2 n μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 x σ + 1 e μ σ 2 c n 1 [ ( n + 1 ) μ ) , ) ( x )
    The initial conditions for the recursion of the coefficients are
(21a) $a_1(1,0) = e^{-(\mu+x)/\sigma}$
(21b) $b_1(1,0) = 0$
(21c) $a_1(2,0) = 0$
(21d) $b_1(2,0) = e^{(\mu+x)/\sigma}$
(21e) $c_1 = \dfrac{e^{-(\mu+x)/\sigma}}{2}$
  • If $x + n\mu < 0$, we have $f_n^1(u) \equiv 0$ for each n, and the coefficients $a_{n+1}(2,j)$, $b_{n+1}(2,j)$ verify the following recursive relations for $1 \le j \le n$:
    (22a) a n + 1 ( 2 , j ) = 0 (22b) b n + 1 ( 2 , j ) = e μ σ k = j 1 n 1 b n ( 2 , k ) k ! j ! σ 2 k j + 1 (22c) a n + 1 ( 2 , 0 ) = 0 (22d) b n + 1 ( 2 , 0 ) = e μ σ k = 0 n 1 b n ( 2 , k ) ( n μ ) k + 1 k + 1 + k ! σ 2 k + 1 a n ( 2 , k ) e 2 n μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 n μ σ + ( 2 σ ) n c n c n + 1 = 1 ( 2 σ ) n k = 0 n 1 b n ( 2 , k ) e 2 n μ σ σ k + 1 Γ 1 + k , ( y n μ ) σ 0 μ e μ σ 2 n + 1 σ n k = 0 n 1 b n ( 2 , k ) ( y n μ ) k + 1 k + 1 0 μ (22e) + e μ σ 2 n + 1 σ n k = 0 n 1 b n ( 2 , k ) e 2 n μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 ( n + 1 ) μ σ + 1 e μ σ 2 c n
    with initial conditions given by (21c), (21d) and (21e).
Theorem 3.
For a Lindley process $\{W_n,\ n \ge 0\}$ characterized by Laplace increments with location parameter $\mu \le -x$, the probability density function of the position is given by
(23a) $f_n(u) = \sum_{i=0}^{1} f_n^i(u), \qquad u \ge 0,\ n \ge 1$
(23b) $f_0(u) = \delta(u - x)$
where
(24a) $f_n^1(u) = \sum_{j=0}^{n-1}\frac{1}{(2\sigma)^n}\, b_n(1,j)\,(u-(n-1)\mu)^j e^{-u/\sigma}\,\mathbf{1}_{I_n^1}(u)$
(24b) $f_n^0(u) = c_n\,\delta(u)$
and $I_n^1 = (0, +\infty)$.
The coefficients $c_{n+1}$ and $b_{n+1}(1,j)$ verify the following recursive relations for $j = 1, \ldots, n$:
(25a) $b_{n+1}(1,j) = e^{\mu/\sigma}\sum_{k=j-1}^{n-1} b_n(1,k)\,\dfrac{k!}{j!}\left(\dfrac{\sigma}{2}\right)^{k-j+1}$
(25b) $b_{n+1}(1,0) = e^{\mu/\sigma}\left\{\sum_{k=0}^{n-1} b_n(1,k)\left[-\dfrac{(-(n-1)\mu)^{k+1}}{k+1} + k!\left(\dfrac{\sigma}{2}\right)^{k+1}\right] + (2\sigma)^n c_n\right\}$
(25c) $c_{n+1} = -\dfrac{e^{-\mu/\sigma}}{2^{n+1}\sigma^n}\sum_{k=0}^{n-1} b_n(1,k)\, e^{-2(n-1)\mu/\sigma}\,\dfrac{\sigma^{k+1}}{2^{k+1}}\,\Gamma\!\left(1+k,\ \dfrac{2(y-(n-1)\mu)}{\sigma}\right)\Big|_{[-\mu,\,+\infty)} - \dfrac{1}{(2\sigma)^n}\sum_{k=0}^{n-1} b_n(1,k)\, e^{-(n-1)\mu/\sigma}\,\sigma^{k+1}\,\Gamma\!\left(1+k,\ \dfrac{y-(n-1)\mu}{\sigma}\right)\Big|_{[0,\,-\mu]} - \dfrac{e^{\mu/\sigma}}{2^{n+1}\sigma^n}\sum_{k=0}^{n-1} b_n(1,k)\,\dfrac{(y-(n-1)\mu)^{k+1}}{k+1}\Big|_{0}^{-\mu} + \left(1 - \dfrac{e^{\mu/\sigma}}{2}\right)c_n.$
The initial values for the recursion are
$c_1 = 1 - \dfrac{1}{2}\,e^{(\mu+x)/\sigma}, \qquad b_1(1,0) = e^{(\mu+x)/\sigma}.$

3. First Exit Time of $W_n$

Let $N_x := \min\{n > 0 : W_n \ge h \mid W_0 = x\}$ be the first exit time (FET) of the Lindley process (1) from the domain $[0, h]$ for fixed $h > 0$, and let $P(n|x) := P[N_x = n]$ denote the probability that the FET equals n, $n > 0$, given that the process starts at $x \in [0, h)$.
In order to determine the distribution of N, the computations change according to the sign of $\mu$ and its size relative to h. Theorem 4 gives the distribution for $0 < \mu < h$, while Corollary 2 considers the case $0 < h \le \mu$. For $\mu < 0$, Theorem 5 gives the distribution for $-\mu < h$, while Corollary 3 considers the case $0 < h \le -\mu$; Theorem 6 refers to $\mu = 0$.
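All of the FET distributions below can be checked against a crude Monte Carlo estimate. The following sketch (ours; the parameter values are illustrative, and the estimator is slow but unambiguous) serves as such a baseline:

```python
import numpy as np

def fet_distribution_mc(x, mu, sigma, h, n_max, n_rep=10**5, seed=0):
    """Monte Carlo estimate of P(n|x) = P[N_x = n], n = 1, ..., n_max,
    the FET law of the Lindley process from [0, h]."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(n_max + 1)
    for _ in range(n_rep):
        w = x
        for n in range(1, n_max + 1):
            w = max(0.0, w + rng.laplace(mu, sigma))
            if w >= h:       # first crossing of the boundary h
                counts[n] += 1
                break
    return counts[1:] / n_rep

# Example usage
p_hat = fet_distribution_mc(x=1.0, mu=0.3, sigma=1.0, h=3.0, n_max=50)
```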
Theorem 4.
For a Lindley process $\{W_n,\ n \ge 0\}$ characterized by Laplace increments with location parameter $0 < \mu < h$, the probability distribution of the FET through h is given by
(26) $P(n|x) = \sum_{i=1}^{\ell_n} P^i(n|x), \qquad n > 0,\ 0 \le x < h,$
where $\ell_n := \min\{n+1,\ \lceil h/\mu\rceil\}$ and
(27) $P^i(n|x) = \left\{\sum_{j=0}^{m_n^i-1}\frac{1}{2^n\sigma^{n-1}}\left[\alpha_n(i,j)\,(x+n\mu)^j e^{x/\sigma} + \beta_n(i,j)\,(x+n\mu)^j e^{-x/\sigma}\right] + \eta_n(i,0)\right\}\mathbf{1}_{I_n^i}(x), \qquad 1 \le i \le \ell_n.$
Here $m_n^i := \min(n,\ \ell_n - i + 1)$ and we partition the interval $[0, h)$ into
(28) $I_n^i = [\,h-(\ell_n-i+1)\mu,\ h-(\ell_n-i)\mu\,),\ i = 2, \ldots, \ell_n; \qquad I_n^1 = [\,0,\ h-(\ell_n-1)\mu\,).$
The coefficients $\alpha_{n+1}(i,j)$, $\beta_{n+1}(i,j)$ and $\eta_{n+1}(i,0)$ are defined by the following recursive relations for $1 \le i \le \ell_{n+1}$ and $1 \le j \le m_{n+1}^i - 1$:
(29a) α n + 1 ( i , 0 ) = e μ σ j = i * + 1 n K n j + k = 0 m n i * 1 α n ( i * , k ) ( h ( n i * ) μ + n μ ) k + 1 k + 1 + k ! σ k + 1 ( 1 ) k 2 k + 1 β n ( i * , k ) e 2 n μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 ( h ( n i * ) μ + n μ ) σ η n ( i * , 0 ) ( 2 σ ) n e h ( n i * ) μ σ (29b) α n + 1 ( i , j ) = e μ σ k = j 1 m n i * 1 α n ( i * , k ) σ 2 k j + 1 k ! j ! (29c) β n + 1 ( i , 0 ) = e μ σ j = 0 i * 1 K n j + k = 0 m n i * 1 α n ( i * , k ) e 2 n μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 ( h ( n i * + 1 ) μ + n μ ) σ + β n ( i * , k ) ( h ( n i * + 1 ) μ + n μ ) k + 1 k + 1 + k ! σ k + 1 2 k + 1 η n ( i * , 0 ) ( 2 σ ) n e h ( n i * + 1 ) μ σ (29d) β n + 1 ( i , j ) = e μ σ k = j 1 m n i * 1 β n ( i * , k ) σ 2 k j + 1 k ! j ! (29e) η n + 1 ( i , 0 ) = η n ( i * , 0 ) ,
Here, $i^*$ is the index of the interval in (28) that contains $x + \mu$, and
K n j = k = 0 m n 1 1 σ α n ( 1 , k ) ( n μ ) k + β n ( 1 , k ) ( n μ ) k + ( 2 σ ) n η n ( 1 , 0 ) j = 0 k = 0 m n j 1 α n ( j , k ) e 2 n μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 ( y + n μ ) σ I n j + β n ( j , k ) ( y + n μ ) k + 1 k + 1 I n j + η n ( j , 0 ) ( 2 σ ) n e y σ I n j j = 1 , , i * 1 k = 0 m n j 1 α n ( j , k ) ( y + n μ ) k + 1 k + 1 I n j β n ( j , k ) e 2 n μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 ( y + n μ ) σ I n j η n ( j , 0 ) ( 2 σ ) n e y σ I n j j = i * + 1 , , n
The initial values are
(31a) $\eta_1(1,0) = \beta_1(1,0) = \alpha_1(2,0) = 0$
(31b) $\alpha_1(1,0) = e^{(\mu - h)/\sigma}$
(31c) $\eta_1(2,0) = 1$
(31d) $\beta_1(2,0) = -e^{(h - \mu)/\sigma}.$
When $\mu \ge h$, the partition reduces to a single interval $[0, h]$, and we obtain a compact closed-form solution.
Corollary 2.
For a Lindley process $\{W_n,\ n \ge 0\}$ characterized by Laplace increments with location parameter $\mu > 0$ and $0 < h \le \mu$, the probability distribution of the FET through h is given by
(32) $P(n|x) = \eta_n + \beta_n\, e^{-x/\sigma}, \qquad n > 0,\ 0 \le x < h,$
where
(33a) $\eta_n = \delta_{1,n}$
(33b) $\beta_1 = -\dfrac{1}{2}\,e^{(h-\mu)/\sigma}$
(33c) $\beta_n = \dfrac{e^{(h-(n-1)\mu)/\sigma}}{2}\left(\dfrac{1}{2}+\dfrac{h}{2\sigma}\right)^{n-2} - \dfrac{e^{(h-n\mu)/\sigma}}{2}\left(\dfrac{1}{2}+\dfrac{h}{2\sigma}\right)^{n-1}, \qquad n \ge 2.$
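Corollary 2 can be implemented in a few lines. The following sketch follows our reading of the signs in (32)-(33) and can be validated against the Monte Carlo baseline above:

```python
import numpy as np

def fet_corollary2(x, mu, sigma, h, n_max):
    """P(n|x) = eta_n + beta_n * exp(-x/sigma) for mu > 0, 0 < h <= mu."""
    assert mu > 0 and 0 < h <= mu and 0 <= x < h
    c = 0.5 + h / (2 * sigma)
    p = np.empty(n_max)
    for n in range(1, n_max + 1):
        if n == 1:
            eta, beta = 1.0, -0.5 * np.exp((h - mu) / sigma)
        else:
            eta = 0.0
            beta = (0.5 * np.exp((h - (n - 1) * mu) / sigma) * c ** (n - 2)
                    - 0.5 * np.exp((h - n * mu) / sigma) * c ** (n - 1))
        p[n - 1] = eta + beta * np.exp(-x / sigma)
    return p
```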
Theorem 5.
For a Lindley process $\{W_n,\ n \ge 0\}$ characterized by Laplace increments with location parameter $\mu < 0$, the probability distribution of the FET through h is given by
(34) $P(n|x) = \sum_{i=1}^{h_n} P^i(n|x), \qquad n > 0,\ 0 \le x < h,$
where $h_n := \min\{n,\ \lceil h/|\mu|\rceil\}$ and, for $1 \le i \le h_n$,
(35) $P^i(n|x) = \left\{\sum_{j=0}^{n-1}\frac{1}{2^n\sigma^{n-1}}\left[\alpha_n(i,j)\,(x+(n-1)\mu)^j e^{x/\sigma} + \beta_n(i,j)\,(x+(n-1)\mu)^j e^{-x/\sigma}\right] + \eta_n^i\right\}\mathbf{1}_{I_n^i}(x).$
Here we partition the interval $[0, h)$ as
(36) $I_n^i = [\,-(i-1)\mu,\ -i\mu\,),\ i = 1, \ldots, h_n - 1; \qquad I_n^{h_n} = [\,-(h_n-1)\mu,\ h\,).$
The coefficients $\alpha_{n+1}(i,j)$, $\beta_{n+1}(i,j)$ and $\eta_{n+1}^i$ are defined by the recursive relations below, for $1 < i \le h_{n+1}$ and $0 < j \le n$:
(37a) α n + 1 ( i , 0 ) = e μ σ j = i * + 1 h n K n j + k = 0 n 1 α n ( i * , k ) ( ( i * + n 1 ) μ ) k + 1 k + 1 + k ! σ k + 1 ( 1 ) k 2 k + 1 (37b) β n ( i * , k ) e 2 ( n 1 ) μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 ( i * + n 1 ) μ ) σ η i * ( 2 σ ) n e i * μ σ (37c) α n + 1 ( i , j ) = e μ σ k = j 1 n 1 α n ( i * , k ) σ 2 k j + 1 k ! j ! (37d) β n + 1 ( i , 0 ) = e μ σ j = 0 i * 1 K n j + k = 0 n 1 α n ( i * , k ) e 2 ( n 1 ) μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 ( i * + 1 + n 1 ) μ ) σ + β n ( i * , k ) ( ( i * + 1 + n 1 ) μ ) k + 1 k + 1 + k ! σ k + 1 2 k + 1 η i * ( 2 σ ) n e ( i * 1 ) μ σ (37e) β n + 1 ( i , j ) = e μ σ k = j 1 n 1 β n ( i * , k ) σ 2 k j + 1 k ! j ! (37f) η n + 1 i = η n i *
where $I_n^{i^*}$ is the interval that contains $\min(x + \mu,\ h)$.
For $i = 1$, the coefficients are defined by the recursive relations
(38a) $\alpha_{n+1}(1,0) = e^{\mu/\sigma}\left[\sum_{j=1}^{h_n} K_n^j - K_n^0\right]$
(38b) $\alpha_{n+1}(1,j) = 0, \qquad j = 1, \ldots, n$
(38c) $\beta_{n+1}(1,j) = 0, \qquad j = 0, \ldots, n$
(38d) $\eta_{n+1}^1 = \dfrac{K_n^0}{2^n\sigma^n}$
where
K n j = k = 0 n 1 σ α n ( 1 , k ) ( ( n 1 ) μ ) k + β n ( 1 , k ) ( ( n 1 ) μ ) k + ( 2 σ ) n η n 1 j = 0 k = 0 n 1 α n ( j , k ) e 2 n μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 ( y + ( n 1 ) μ ) σ I n j + β n ( j , k ) ( y + ( n 1 ) μ ) k + 1 k + 1 I n j + η n j ( 2 σ ) n e y σ I n j j = 1 , , i * 1 k = 0 n 1 α n ( j , k ) ( y + ( n 1 ) μ ) k + 1 k + 1 I n j β n ( j , k ) e 2 n μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 ( y + ( n 1 ) μ ) σ I n j η n j ( 2 σ ) n e y σ I n j j = i * + 1 , , h n
The coefficients for the base case of the recursion are given by
(40a) $\alpha_1(1,0) = e^{(\mu - h)/\sigma}$
(40b) $\beta_1(1,0) = \eta_1^1 = 0$
When $\mu \le -h$ in the previous theorem, we observe that the partition is formed by a single interval $[0, h]$. This considerably simplifies the computations, and we obtain the following:
Corollary 3.
For a Lindley process $\{W_n,\ n \ge 0\}$ characterized by Laplace increments with location parameter $\mu < 0$ and $0 < h \le -\mu$, the probability distribution of the FET through h is given by
(41) $P(n|x) = \eta_n + \alpha_n\, e^{x/\sigma}, \qquad n > 0,\ 0 \le x < h,$
where, for $n \ge 1$,
(42) $\alpha_{n+1} = \dfrac{e^{\mu/\sigma}}{2}\left(\dfrac{h}{\sigma} - 1\right)\alpha_n - \dfrac{e^{(\mu-h)/\sigma}}{2}\,\eta_n$
(43) $\eta_{n+1} = \alpha_n + \eta_n$
and the initial values are
$\alpha_1 = \dfrac{1}{2}\,e^{(\mu - h)/\sigma}, \qquad \eta_1 = 0.$
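Similarly, a minimal sketch of the recursion (42)-(43) of Corollary 3 (again under our reading of the signs) is given below; it also makes the exponential tail decay noted in Remark 5 below visible numerically:

```python
import numpy as np

def fet_corollary3(x, mu, sigma, h, n_max):
    """P(n|x) = eta_n + alpha_n * exp(x/sigma) for mu < 0, 0 < h <= -mu."""
    assert mu < 0 and 0 < h <= -mu and 0 <= x < h
    alpha, eta = 0.5 * np.exp((mu - h) / sigma), 0.0   # initial values (n = 1)
    p = []
    for n in range(n_max):
        p.append(eta + alpha * np.exp(x / sigma))
        # relations (42)-(43); the tuple is evaluated with the old values
        alpha, eta = ((np.exp(mu / sigma) / 2) * (h / sigma - 1) * alpha
                      - (np.exp((mu - h) / sigma) / 2) * eta,
                      alpha + eta)
    return np.array(p)
```

For large n, the ratio of successive values of this sequence stabilizes, illustrating the geometric decay of the FET tail.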
Theorem 6.
For a Lindley process $\{W_n,\ n \ge 0\}$ characterized by Laplace increments with location parameter $\mu = 0$, the probability distribution of the FET through h is given by
$P(n|x) = \dfrac{1}{2^n\sigma^{n-1}}\left[\sum_{j=0}^{n-1}\alpha_n(1,j)\,x^j e^{x/\sigma} + \sum_{j=0}^{n-2}\beta_n(1,j)\,x^j e^{-x/\sigma}\right], \qquad n > 0,\ 0 \le x < h.$
The coefficients satisfy the following recursive relations for $n > 1$:
(44a) α n + 1 ( 1 , j ) = k = j 1 n 1 α n ( 1 , k ) σ 2 k j + 1 k ! j ! j = 1 , , n (44b) β n + 1 ( 1 , j ) = k = j 1 n 2 β n ( 1 , k ) σ 2 k j + 1 k ! j ! j = 1 , , n 1 (44c) α n + 1 ( 1 , 0 ) = k = 0 n 1 α n ( 1 , k ) h k + 1 k + 1 + σ k + 1 2 k + 1 ( 1 ) k k ! + k = 0 n 2 β n ( 1 , k ) σ k + 1 2 k + 1 Γ 1 + k , 2 h σ (44d) β n + 1 ( 1 , 0 ) = k = 0 n 1 α n ( 1 , k ) σ k + 1 2 k + 1 ( 1 ) k k ! + k = 0 n 2 β n ( 1 , k ) σ k + 1 2 k + 1 k ! + σ ( α n ( 1 , 0 ) + β n ( 1 , 0 ) )
with initial conditions
$\alpha_1(1,0) = e^{-h/\sigma}, \qquad \beta_1(1,0) = 0.$
Remark 4.
In Theorems 4 and 5, when $\mu$ is not small, the value of i such that $i|\mu| > h$ is small. Hence, the sums include very few terms, since $\ell_n$ and $h_n$ soon coincide with this value of i.
Remark 5.
Please note that from Corollary 3, we easily observe the exponential decay of the tails of the FET probability function. Since larger values of μ facilitate the crossing of the boundary, this tail result holds for any choice of μ.

4. Proofs of Theorems on the Position

4.1. Proof of Lemma 1

Proof. 
The thesis follows immediately by determining the distribution of $x + Z_1$ and applying the definition (1). □
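For completeness, we sketch the computation (our rendering of the omitted step): for $u \ge 0$,
$$F_1(u \mid x) = P\big(\max(0,\, x + Z_1) \le u\big) = P(Z_1 \le u - x) = F_Z(u - x),$$
and evaluating the Laplace distribution function $F_Z(z) = \frac{1}{2}e^{(z-\mu)/\sigma}$ for $z \le \mu$ and $F_Z(z) = 1 - \frac{1}{2}e^{-(z-\mu)/\sigma}$ for $z > \mu$ at $z = u - x$ yields (3) and (4); the atom of $W_1$ at zero has mass $F_1(0|x) = F_Z(-x)$, which gives the $\delta(u)$ coefficients in (5).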

4.2. Proof of Theorem 1

Proof. 
We proceed by induction.
Case n = 1. Since $\mu \ge 0$, we have $\mu > -x$. Hence, this case follows from Lemma 1, introducing the partition of $(0, +\infty)$ as $I_1^1 \cup I_1^2 = (0, \mu + x] \cup (\mu + x, +\infty)$ and rewriting the corresponding density function (5) as
$f_1(u) = f_{W_1}(u) = \begin{cases} \dfrac{e^{-(\mu+x)/\sigma}}{2} & u = 0 \\ \dfrac{1}{2\sigma}\,e^{(u - x - \mu)/\sigma} & 0 < u \le x + \mu \\ \dfrac{1}{2\sigma}\,e^{-(u - x - \mu)/\sigma} & u > x + \mu \end{cases} = \dfrac{e^{-(\mu+x)/\sigma}}{2}\,\delta(u) + \dfrac{1}{2\sigma}\,e^{(u - x - \mu)/\sigma}\,\mathbf{1}_{I_1^1}(u) + \dfrac{1}{2\sigma}\,e^{-(u - x - \mu)/\sigma}\,\mathbf{1}_{I_1^2}(u) = \sum_{i=0}^{2} f_1^i(u), \qquad u \ge 0.$
The proof is completed by recognizing the initial conditions (12).
Case n. We assume that (7) holds for n, and we show that it holds for  n + 1 , for  n 1 .
Conditioning on the position reached at time n, we can write the probability distribution function of the position at time $n+1$ as
$F_{n+1}(u) = P(W_{n+1} \le u) = \int_0^{+\infty} P(W_{n+1} \le u \mid W_n = y)\, P(W_n \in dy), \qquad n = 1, 2, \ldots$
Differentiating with respect to u, we obtain the equivalent relation for the probability density function of the position at time $n+1$:
(46) $f_{n+1}(u) = \frac{\partial}{\partial u} P(W_{n+1} \le u) = \int_0^{+\infty} f(u, n+1 \mid y, n)\, f_n(y)\, dy, \qquad n = 1, 2, \ldots$
Using (6) and (7), Equation (46) becomes
(47) $f_{n+1}(u) = \sum_{i=0}^{n+1}\int_0^{+\infty} \frac{e^{-|u-y-\mu|/\sigma}}{2\sigma}\, f_n^i(y)\, dy\; \mathbf{1}_{(0,+\infty)}(u) + \sum_{i=0}^{n+1}\int_0^{+\infty} \frac{e^{-(y+\mu)/\sigma}}{2}\, f_n^i(y)\, dy\; \delta(u)$
Please note that $f_{n+1}(u)$ decomposes into a continuous part and a discrete one.
Let us observe that, when we open the modulus in (47), we add a further interval to the partition $\{I_n^j,\ j = 0, \ldots, n+1\}$. Indeed, since $(i-2)\mu < u - \mu \le (i-1)\mu$ implies $(i-1)\mu < u \le i\mu$, the modulus induces a shift in the partition intervals (9), and we obtain $\{I_{n+1}^j,\ j = 0, \ldots, n+2\}$. Hence $f_{n+1}(u)$ can be expressed in terms of $f_{n+1}^i(u)$, according to the new partition:
$f_{n+1}(u) = \sum_{i=0}^{n+2} f_{n+1}^i(u), \qquad u \ge 0,$
where
(48) $f_{n+1}^i(u) = \sum_{j=0}^{n+1}\int_0^{+\infty} \frac{e^{-|u-y-\mu|/\sigma}}{2\sigma}\, f_n^j(y)\, dy\; \mathbf{1}_{I_{n+1}^i}(u), \qquad i = 1, \ldots, n+2$
(49) $f_{n+1}^0(u) = \sum_{j=0}^{n+1}\int_0^{+\infty} \frac{e^{-(y+\mu)/\sigma}}{2}\, f_n^j(y)\, dy\; \delta(u) = c_{n+1}\,\delta(u)$
Let us consider (48). It gives
(50) $f_{n+1}^1(u) = \left[\frac{e^{(u-\mu)/\sigma}}{2\sigma}\, c_n + \sum_{j=1}^{n+1}\int_{I_n^j} \frac{e^{(u-y-\mu)/\sigma}}{2\sigma}\, f_n^j(y)\, dy\right]\mathbf{1}_{I_{n+1}^1}(u)$
(51) $f_{n+1}^i(u) = \left[\frac{e^{-(u-\mu)/\sigma}}{2\sigma}\, c_n + \sum_{j=1}^{i-2}\int_{I_n^j} \frac{e^{-(u-y-\mu)/\sigma}}{2\sigma}\, f_n^j(y)\, dy + \int_{(i-2)\mu}^{u-\mu} \frac{e^{-(u-y-\mu)/\sigma}}{2\sigma}\, f_n^{i-1}(y)\, dy + \int_{u-\mu}^{(i-1)\mu} \frac{e^{(u-y-\mu)/\sigma}}{2\sigma}\, f_n^{i-1}(y)\, dy + \sum_{j=i}^{n+1}\int_{I_n^j} \frac{e^{(u-y-\mu)/\sigma}}{2\sigma}\, f_n^j(y)\, dy\right]\mathbf{1}_{I_{n+1}^i}(u), \qquad i = 2, \ldots, n$
(52) $f_{n+1}^{n+1}(u) = \left[\frac{e^{-(u-\mu)/\sigma}}{2\sigma}\, c_n + \sum_{j=1}^{n-1}\int_{I_n^j} \frac{e^{-(u-y-\mu)/\sigma}}{2\sigma}\, f_n^j(y)\, dy + \int_{(n-1)\mu}^{u-\mu} \frac{e^{-(u-y-\mu)/\sigma}}{2\sigma}\, f_n^n(y)\, dy + \int_{u-\mu}^{x+n\mu} \frac{e^{(u-y-\mu)/\sigma}}{2\sigma}\, f_n^n(y)\, dy + \int_{I_n^{n+1}} \frac{e^{(u-y-\mu)/\sigma}}{2\sigma}\, f_n^{n+1}(y)\, dy\right]\mathbf{1}_{I_{n+1}^{n+1}}(u)$
(53) $f_{n+1}^{n+2}(u) = \left[\frac{e^{-(u-\mu)/\sigma}}{2\sigma}\, c_n + \sum_{j=1}^{n}\int_{I_n^j} \frac{e^{-(u-y-\mu)/\sigma}}{2\sigma}\, f_n^j(y)\, dy + \int_{x+n\mu}^{u-\mu} \frac{e^{-(u-y-\mu)/\sigma}}{2\sigma}\, f_n^{n+1}(y)\, dy + \int_{u-\mu}^{+\infty} \frac{e^{(u-y-\mu)/\sigma}}{2\sigma}\, f_n^{n+1}(y)\, dy\right]\mathbf{1}_{I_{n+1}^{n+2}}(u)$
Let us now focus on (51), which holds for $2 \le i \le n$. Using the inductive hypothesis on $f_n^j(y)$, i.e., substituting (8a) in (51), we obtain
l f n + 1 i ( u ) = e ( u μ ) σ 2 σ c n + j = 1 i 2 1 ( 2 σ ) n k = 0 m n j 1 I n j e ( u y μ ) σ 2 σ a n ( j , k ) ( y n μ ) k e y σ + b n ( j , k ) ( y n μ ) k e y σ d y + 1 ( 2 σ ) n k = 0 m n i 1 1 ( i 2 ) μ u μ e ( u y μ ) σ 2 σ a n ( ( i 1 ) , k ) ( y n μ ) k e y σ + b n ( ( i 1 ) , k ) ( y n μ ) k e y σ d y + 1 ( 2 σ ) n k = 0 m n i 1 1 u μ ( i 1 ) μ e ( u y μ ) σ 2 σ a n ( ( i 1 ) , k ) ( y n μ ) k e y σ + b n ( ( i 1 ) , k ) ( y n μ ) k e y σ d y + j = i n + 1 1 ( 2 σ ) n k = 0 m n j 1 I n j e ( u y μ ) σ 2 σ a n ( j , k ) ( y n μ ) k e y σ + b n ( j , k ) ( y n μ ) k e y σ d y 1 I n + 1 i ( u )
which can be written as
l f n + 1 i ( u ) = e ( u μ ) σ 2 σ c n + j = 1 i 2 e ( u μ ) σ ( 2 σ ) n + 1 k = 0 m n j 1 I n j a n ( j , k ) ( y n μ ) k e 2 y σ + b n ( j , k ) ( y n μ ) k d y + e ( u μ ) σ ( 2 σ ) n + 1 k = 0 m n i 1 1 ( i 2 ) μ u μ a n ( ( i 1 ) , k ) ( y n μ ) k e 2 y σ + b n ( ( i 1 ) , k ) ( y n μ ) k d y + e ( u μ ) σ ( 2 σ ) n + 1 k = 0 m n i 1 1 u μ ( i 1 ) μ a n ( ( i 1 ) , k ) ( y n μ ) k + b n ( ( i 1 ) , k ) ( y n μ ) k e 2 y σ d y + j = i n + 1 e ( u μ ) σ ( 2 σ ) n + 1 k = 0 m n j 1 I n j a n ( j , k ) ( y n μ ) k + b n ( j , k ) ( y n μ ) k e 2 y σ d y 1 I n + 1 i ( u )
Computing the integrals we obtain
(54a) f n + 1 i ( u ) = e ( u μ ) σ 2 σ c n (54b) + j = 1 i 2 e ( u μ ) σ ( 2 σ ) n + 1 k = 0 m n j 1 a n ( j , k ) e 2 n μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 ( y n μ ) σ I n j + b n ( j , k ) ( y n μ ) k + 1 k + 1 I n j + e ( u μ ) σ ( 2 σ ) n + 1 k = 0 m n i 1 1 a n ( ( i 1 ) , k ) e 2 n μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 ( y n μ ) σ ( i 2 ) μ u μ (54c) + b n ( ( i 1 ) , k ) ( y n μ ) k + 1 k + 1 ( i 2 ) μ u μ (54d) + e ( u μ ) σ ( 2 σ ) n + 1 k = 0 m n i 1 1 a n ( ( i 1 ) , k ) ( y n μ ) k + 1 k + 1 u μ ( i 1 ) μ b n ( ( i 1 ) , k ) e 2 n μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 ( y n μ ) σ u μ ( i 1 ) μ (54e) + j = i n + 1 e ( u μ ) σ ( 2 σ ) n + 1 k = 0 m n j 1 a n ( j , k ) ( y n μ ) k + 1 k + 1 I n j b n ( j , k ) e 2 n μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 ( y n μ ) σ I n j 1 I n + 1 i ( u )
where $\Gamma(n, y)$ denotes the incomplete Gamma function
(55) $\Gamma(n, y) = \int_y^{\infty} t^{n-1} e^{-t}\, dt,$
and $\Gamma(n, \phi(y))\big|_{[a,b]} = \Gamma(n, \phi(b)) - \Gamma(n, \phi(a))$.
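For numerical work, $\Gamma(n, y)$ with integer n can be evaluated through the regularized upper incomplete Gamma function; the following quick check (ours, using SciPy) illustrates the classical finite-sum expansion used repeatedly in the proofs below:

```python
import math
from scipy.special import gammaincc

# Gamma(a, y) = Gamma(a) * Q(a, y), with Q the regularized upper function;
# for integer order this matches the finite-sum expansion:
# Gamma(n + 1, y) = n! * exp(-y) * sum_{r=0}^{n} y**r / r!
n, y = 4, 1.7
lhs = math.factorial(n) * gammaincc(n + 1, y)
rhs = math.factorial(n) * math.exp(-y) * sum(y**r / math.factorial(r) for r in range(n + 1))
print(lhs, rhs)   # the two values agree
```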
The divergence of $(y - n\mu)^{k+1}$ on the last interval $I_n^{n+1} = (n\mu + x, +\infty)$ in (54e) will not be a problem because, from the recursive expression of the coefficients, we will obtain $a_n(n+1, k) \equiv 0$ for each admissible k.
Using (11) and the expansion [35]
$\Gamma(n+1, x) = n!\, e^{-x}\sum_{r=0}^{n} \frac{x^r}{r!},$
we obtain
l f n + 1 i ( u ) = e ( u μ ) σ 2 σ c n + e ( u μ ) σ ( 2 σ ) n + 1 j = 1 i 2 A j ( 1 ) + e ( u μ ) σ ( 2 σ ) n + 1 k = 0 m n i 1 1 a n ( ( i 1 ) , k ) e 2 y σ k ! r = 0 k ( 1 ) k + r ( 2 ) r k 1 r ! σ r k 1 ( y n μ ) r ( i 2 ) μ u μ + b n ( ( i 1 ) , k ) ( y n μ ) k + 1 k + 1 ( i 2 ) μ u μ + e ( u μ ) σ ( 2 σ ) n + 1 k = 0 m n i 1 1 a n ( ( i 1 ) , k ) ( y n μ ) k + 1 k + 1 u μ ( i 1 ) μ + b n ( ( i 1 ) , k ) e 2 y σ k ! r = 0 k ( 2 ) r k 1 r ! σ r k 1 ( y n μ ) r u μ ( i 1 ) μ + e ( u μ ) σ ( 2 σ ) n + 1 j = i n + 1 A j ( 2 ) 1 I n + 1 i ( u )
Collecting powers $(u - (n+1)\mu)^j e^{u/\sigma}$ and $(u - (n+1)\mu)^j e^{-u/\sigma}$, we obtain, for $i > 1$ and $j = 1, \ldots, m_{n+1}^i$,
l a n + 1 ( i , j ) = e μ σ a n ( ( i 1 ) , ( j 1 ) ) j + k = j m n i 1 1 ( 1 ) k + j a n ( ( i 1 ) , k ) k ! j ! 2 σ j k 1 = e μ σ k = j 1 m n i 1 1 ( 1 ) k + j a n ( ( i 1 ) , k ) k ! j ! 2 σ j k 1 b n + 1 ( i , j ) = e μ σ b n ( ( i 1 ) , ( j 1 ) ) j + k = j m n i 1 1 b n ( ( i 1 ) , k ) k ! j ! 2 σ j k 1 = e μ σ k = j 1 m n i 1 1 b n ( ( i 1 ) , k ) k ! j ! 2 σ j k 1
while the coefficients of $e^{u/\sigma}$ and $e^{-u/\sigma}$ are, respectively,
l a n + 1 ( i , 0 ) = e μ σ j = i n + 1 A j ( 2 ) + k = 0 m n i 1 1 a n ( ( i 1 ) , k ) ( ( i 1 n ) μ ) k + 1 k + 1 + ( 1 ) k k ! σ 2 k + 1 b n ( ( i 1 ) , k ) e 2 n μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 ( ( i 1 n ) μ ) σ b n + 1 ( i , 0 ) = e μ σ j = 1 i 2 A j ( 1 ) k = 0 m n i 1 1 b n ( ( i 1 ) , k ) ( ( i 2 n ) μ ) k + 1 k + 1 k ! σ 2 k + 1 a n ( ( i 1 ) , k ) e 2 n μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 ( ( i 1 n ) μ ) σ + c n
Expanding (50), (52) and (53) in a similar way, we obtain the thesis.
In order to find the recursive expression for $c_n$, let us consider (49):
(56a) c n + 1 = 0 + e y + μ σ 2 f n ( y ) d y (56b) = j = 0 n + 1 0 + e y + μ σ 2 f n j ( y ) d y (56c) = e μ σ 2 j = 1 n + 1 k = 0 m n i 1 I n j e y σ ( 2 σ ) n a n ( j , k ) ( y n μ ) k e y σ + b n ( j , k ) ( y n μ ) k e y σ d y + c n (56d) = e μ σ 2 j = 1 n + 1 k = 0 m n i 1 I n j 1 ( 2 σ ) n a n ( j , k ) ( y n μ ) k + b n ( j , k ) ( y n μ ) k e 2 y σ d y + c n (56e) = e μ σ 2 ( 2 σ ) n j = 1 n + 1 k = 0 m n i 1 a n ( j , k ) ( y n μ ) k + 1 k + 1 I n j b n ( j , k ) e 2 n μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 ( y n μ ) σ I n j + ( 2 σ ) n c n

4.3. Proof of Corollary 1

Proof. 
From (8a) and (50) we obtain
f n + 1 1 ( u ) = e ( u μ ) σ 2 σ c n + j = 1 n + 1 e ( u μ ) σ ( 2 σ ) n + 1 k = 0 m n j 1 I n j a n ( j , k ) ( y n μ ) k + b n ( j , k ) ( y n μ ) k e 2 y σ d y 1 I n + 1 1 ( u )
Computing the integrals we obtain
f n + 1 1 ( u ) = e ( u μ ) σ 2 σ c n + j = 1 n + 1 e ( u μ ) σ ( 2 σ ) n + 1 k = 0 m n j 1 a n ( j , k ) ( y n μ ) k + 1 k + 1 I n j b n ( j , k ) e 2 n μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 ( y n μ ) σ I n j 1 I n + 1 1 ( u )
Computing the limit as $u \to 0^+$ in (57) and comparing it with (56e), we obtain the thesis. □

4.4. Proof of Theorem 2

Proof. 
In analogy with the case of positive $\mu$, we proceed by induction.
Case n = 1. This part of the proof coincides with the analogous part of the proof of Theorem 1. Using (5), according to whether $x + \mu$ is positive or negative, we obtain (14).
Case n. Let us assume that (14) holds for n; we show that it holds for $n+1$, for $n \ge 1$.
In analogy with the proof of Theorem 1, using the transition density function (6) in (46), we obtain
$f_{n+1}(u) = \mathbf{1}_{(0,+\infty)}(u)\int_{-\mu}^{+\infty}\frac{e^{-|u-y-\mu|/\sigma}}{2\sigma}\, f_n(y)\, dy + \delta(u)\int_{-\mu}^{+\infty}\frac{e^{-(y+\mu)/\sigma}}{2}\, f_n(y)\, dy + \mathbf{1}_{(0,+\infty)}(u)\int_0^{-\mu}\frac{e^{-(u-y-\mu)/\sigma}}{2\sigma}\, f_n(y)\, dy + \delta(u)\int_0^{-\mu}\left(1 - \frac{e^{(y+\mu)/\sigma}}{2}\right) f_n(y)\, dy$
Since, by induction, $f_n(u)$ is given by (14), for $u > 0$ we obtain
(58) $f_{n+1}(u) = \left\{\sum_{i=1}^{2}\left[\int_0^{-\mu} \frac{e^{-(u-y-\mu)/\sigma}}{2\sigma}\, f_n^i(y)\, dy + \int_{-\mu}^{+\infty} \frac{e^{-|u-y-\mu|/\sigma}}{2\sigma}\, f_n^i(y)\, dy\right] + \frac{e^{-(u-\mu)/\sigma}}{2\sigma}\, c_n\right\}\mathbf{1}_{(0,+\infty)}(u)$
while for $u = 0$ we have the mass at zero
(59) $c_{n+1} = \sum_{i=1}^{2}\left[\int_0^{-\mu} \left(1 - \frac{e^{(y+\mu)/\sigma}}{2}\right) f_n^i(y)\, dy + \int_{-\mu}^{+\infty} \frac{e^{-(y+\mu)/\sigma}}{2}\, f_n^i(y)\, dy\right] + \left(1 - \frac{e^{\mu/\sigma}}{2}\right) c_n$
Let us focus on the continuous part (58). Expanding the modulus we obtain
(60) $f_{n+1}(u) = \left\{\sum_{i=1}^{2}\left[\int_0^{u-\mu} \frac{e^{-(u-y-\mu)/\sigma}}{2\sigma}\, f_n^i(y)\, dy + \int_{u-\mu}^{+\infty} \frac{e^{(u-y-\mu)/\sigma}}{2\sigma}\, f_n^i(y)\, dy\right] + \frac{e^{-(u-\mu)/\sigma}}{2\sigma}\, c_n\right\}\mathbf{1}_{(0,+\infty)}(u)$
Now, in order to complete the proof, we have to distinguish two cases according to the sign of  x + n μ .
Let us consider $x + n\mu > 0$. Observe that the partition of step n (16) is updated in step $n+1$ after checking whether $u - \mu < x + n\mu$. The corresponding functions $f_{n+1}^i$, $i = 1, 2$, are
(61) $f_{n+1}^1(u) = \left[\int_0^{u-\mu} \frac{e^{-(u-y-\mu)/\sigma}}{2\sigma}\, f_n^1(y)\, dy + \int_{u-\mu}^{x+n\mu} \frac{e^{(u-y-\mu)/\sigma}}{2\sigma}\, f_n^1(y)\, dy + \int_{x+n\mu}^{+\infty} \frac{e^{(u-y-\mu)/\sigma}}{2\sigma}\, f_n^2(y)\, dy + \frac{e^{-(u-\mu)/\sigma}}{2\sigma}\, c_n\right]\mathbf{1}_{I_{n+1}^1}(u)$
(62) $f_{n+1}^2(u) = \left[\int_0^{x+n\mu} \frac{e^{-(u-y-\mu)/\sigma}}{2\sigma}\, f_n^1(y)\, dy + \int_{x+n\mu}^{u-\mu} \frac{e^{-(u-y-\mu)/\sigma}}{2\sigma}\, f_n^2(y)\, dy + \int_{u-\mu}^{+\infty} \frac{e^{(u-y-\mu)/\sigma}}{2\sigma}\, f_n^2(y)\, dy + \frac{e^{-(u-\mu)/\sigma}}{2\sigma}\, c_n\right]\mathbf{1}_{I_{n+1}^2}(u)$
Substituting in (61) the expressions given in (15)
f n + 1 1 ( u ) = e ( u μ ) σ ( 2 σ ) n + 1 k = 0 n 1 0 u μ a n ( 1 , k ) ( y n μ ) k e 2 y σ + b n ( 1 , k ) ( y n μ ) k d y + e ( u μ ) σ ( 2 σ ) n + 1 k = 0 n 1 u μ x + n μ a n ( 1 , k ) ( y n μ ) k + b n ( 1 , k ) ( y n μ ) k e 2 y σ d y + e ( u μ ) σ ( 2 σ ) n + 1 k = 0 n 1 x + n μ a n ( 2 , k ) ( y n μ ) k + b n ( 2 , k ) ( y n μ ) k e 2 y σ d y + e u + μ σ 2 σ c n 1 I n + 1 1 ( u )
Computing the integrals we obtain
f n + 1 1 ( u ) = e ( u μ ) σ ( 2 σ ) n + 1 k = 0 n 1 a n ( 1 , k ) e 2 n μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 ( y n μ ) σ 0 u μ + b n ( 1 , k ) ( y n μ ) k + 1 k + 1 0 u μ + e ( u μ ) σ ( 2 σ ) n + 1 k = 0 n 1 a n ( 1 , k ) ( y n μ ) k + 1 k + 1 u μ x + n μ b n ( 1 , k ) e 2 n μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 ( y n μ ) σ u μ x + n μ + e ( u μ ) σ ( 2 σ ) n + 1 k = 0 n 1 a n ( 2 , k ) ( y n μ ) k + 1 k + 1 x + n μ + b n ( 2 , k ) e 2 n μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 x σ + e u + μ σ 2 σ c n 1 I n + 1 1 ( u )
Proceeding in an analogous way for (62) we compute
f n + 1 2 ( u ) = e ( u μ ) σ ( 2 σ ) n + 1 k = 0 n 1 0 x + n μ a n ( 1 , k ) ( y n μ ) k e 2 y σ + b n ( 1 , k ) ( y n μ ) k d y + e ( u μ ) σ ( 2 σ ) n + 1 k = 0 n 1 x + n μ u μ a n ( 2 , k ) ( y n μ ) k e 2 y σ + b n ( 2 , k ) ( y n μ ) k d y + e ( u μ ) σ ( 2 σ ) n + 1 k = 0 n 1 u μ a n ( 2 , k ) ( y n μ ) k + b n ( 2 , k ) ( y n μ ) k e 2 y σ d y + e u + μ σ 2 σ c n 1 I n + 1 2 ( u )
Computing the integrals we obtain
f n + 1 2 ( u ) = e ( u μ ) σ ( 2 σ ) n + 1 k = 0 n 1 a n ( 1 , k ) e 2 n μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 ( y n μ ) σ 0 x + n μ + b n ( 1 , k ) ( y n μ ) k + 1 k + 1 0 x + n μ + e ( u μ ) σ ( 2 σ ) n + 1 k = 0 n 1 a n ( 2 , k ) e 2 n μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 ( y n μ ) σ x + n μ u μ + b n ( 2 , k ) ( y n μ ) k + 1 k + 1 x + n μ u μ + e ( u μ ) σ ( 2 σ ) n + 1 k = 0 n 1 a n ( 2 , k ) ( y n μ ) k + 1 k + 1 u μ + b n ( 2 , k ) e 2 n μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 ( u ( n + 1 ) μ ) σ + e u + μ σ 2 σ c n 1 I n + 1 2 ( u )
Expanding the incomplete Gamma function (55) in (63) and (64) and collecting powers $(u-(n+1)\mu)^j e^{u/\sigma}$ and $(u-(n+1)\mu)^j e^{-u/\sigma}$, $j = 0, \ldots, n-1$, we recognize the coefficients (17).
Observe that the divergence of $(y-n\mu)^{k+1}$ at $y = +\infty$ in (63) and (64) is not a problem since, from the recursive expression (17a) of the coefficients and (21c), $a_{n+1}(2,k) \equiv 0$ for each admissible k.
Let us now consider $c_{n+1} = f_{n+1}(0)$, given by (59). This value changes according to the position of x with respect to the intervals $[0, -(n+1)\mu)$ and $[-(n+1)\mu, +\infty)$.
Making use of the indicators of such intervals, we obtain
c n + 1 = 0 x + n μ 1 e y + μ σ 2 f n 1 ( y ) d y + x + n μ μ 1 e y + μ σ 2 f n 2 ( y ) d y + μ + e y + μ σ 2 f n 2 ( y ) d y (65a) + 1 e μ σ 2 c n 1 [ 0 , ( n + 1 ) μ ) ( x ) + 0 μ 1 e y + μ σ 2 f n 1 ( y ) d y + μ x + n μ e y + μ σ 2 f n 1 ( y ) d y + x + n μ + e y + μ σ 2 f n 2 ( y ) d y (65b) + 1 e μ σ 2 c n 1 [ ( n + 1 ) μ ) , ) ( x ) = c ˜ n + 1 + c ˜ ˜ n + 1
where  c ˜ n + 1  corresponds to (65a) and  c ˜ ˜ n + 1  corresponds to (65b).
Substituting (15) in (65) we obtain
c ˜ n + 1 = 1 ( 2 σ ) n k = 0 n 1 0 x + n μ a n ( 1 , k ) ( y n μ ) k e y σ + b n ( 1 , k ) ( y n μ ) k e y σ d y e μ σ 2 n + 1 σ n k = 0 n 1 0 x + n μ a n ( 1 , k ) ( y n μ ) k e 2 y σ + b n ( 1 , k ) ( y n μ ) k d y + 1 ( 2 σ ) n k = 0 n 1 x + n μ μ a n ( 2 , k ) ( y n μ ) k e y σ + b n ( 2 , k ) ( y n μ ) k e y σ d y e μ σ 2 n + 1 σ n k = 0 n 1 x + n μ μ a n ( 2 , k ) ( y n μ ) k e 2 y σ + b n ( 2 , k ) ( y n μ ) k d y + e μ σ 2 n + 1 σ n k = 0 n 1 μ + a n ( 2 , k ) ( y n μ ) k + b n ( 2 , k ) ( y n μ ) k e 2 y σ d y + 1 e μ σ 2 c n 1 [ 0 , ( n + 1 ) μ ) ( x )
c ˜ ˜ n + 1 = 1 ( 2 σ ) n k = 0 n 1 0 μ a n ( 1 , k ) ( y n μ ) k e y σ + b n ( 1 , k ) ( y n μ ) k e y σ d y e μ σ 2 n + 1 σ n k = 0 n 1 0 μ a n ( 1 , k ) ( y n μ ) k e 2 y σ + b n ( 1 , k ) ( y n μ ) k d y + e μ σ 2 n + 1 σ n k = 0 n 1 μ x + n μ a n ( 1 , k ) ( y n μ ) k + b n ( 1 , k ) ( y n μ ) k e 2 y σ d y + e μ σ 2 n + 1 σ n k = 0 n 1 x + n μ + a n ( 2 , k ) ( y n μ ) k + b n ( 2 , k ) ( y n μ ) k e 2 y σ d y + 1 e μ σ 2 c n 1 [ ( n + 1 ) μ ) , ) ( x )
Computing the integrals, we obtain the thesis (19) and (20).
Let us now consider $x + n\mu \le 0$.
Please note that in this case $I_n^1 = \emptyset$ and $f_n^1$ is identically 0, while $f_{n+1}^2(u)$ becomes
(66) $f_{n+1}^2(u) = \left[\int_0^{u-\mu} \frac{e^{-(u-y-\mu)/\sigma}}{2\sigma}\, f_n^2(y)\, dy + \int_{u-\mu}^{+\infty} \frac{e^{(u-y-\mu)/\sigma}}{2\sigma}\, f_n^2(y)\, dy + \frac{e^{-(u-\mu)/\sigma}}{2\sigma}\, c_n\right]\mathbf{1}_{I_{n+1}^2}(u)$
Substituting in (66) the expressions given in (15a)
f n + 1 2 ( u ) = 1 ( 2 σ ) n k = 0 n 1 0 u μ e ( u y μ ) σ 2 σ a n ( 2 , k ) ( y n μ ) k e y σ + b n ( 2 , k ) ( y n μ ) k e y σ d y + 1 ( 2 σ ) n k = 0 n 1 u μ e ( u y μ ) σ 2 σ a n ( 2 , k ) ( y n μ ) k e y σ + b n ( 2 , k ) ( y n μ ) k e y σ d y + e u + μ σ 2 σ c n 1 I n + 1 2 ( u )
Computing the integrals we obtain
f n + 1 2 ( u ) = e ( u μ ) σ ( 2 σ ) n + 1 k = 0 n 1 a n ( 2 , k ) e 2 n μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 ( y n μ ) σ 0 u μ + b n ( 2 , k ) ( y n μ ) k + 1 k + 1 0 u μ + e ( u μ ) σ ( 2 σ ) n + 1 k = 0 n 1 a n ( 2 , k ) ( y n μ ) k + 1 k + 1 u μ + b n ( 2 , k ) e 2 n μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 ( u ( n + 1 ) μ ) σ + e u + μ σ 2 σ c n 1 I n + 1 2 ( u )
The coefficients of $e^{u/\sigma}$ and $e^{-u/\sigma}$ give (22).
It remains to compute the mass at 0 when $x + n\mu \le 0$. Note that in this setting, we only have to consider the case $x + n\mu \le \mu$, since $x + n\mu > \mu$ is impossible.
  • For $x < -(n+1)\mu$, (59) becomes
$c_{n+1} = \int_0^{-\mu} \left(1 - \frac{e^{(y+\mu)/\sigma}}{2}\right) f_n^2(y)\, dy + \int_{-\mu}^{+\infty} \frac{e^{-(y+\mu)/\sigma}}{2}\, f_n^2(y)\, dy + \left(1 - \frac{e^{\mu/\sigma}}{2}\right) c_n$
Substituting (15a)
c n + 1 = 1 ( 2 σ ) n k = 0 n 1 0 μ a n ( 2 , k ) ( y n μ ) k e y σ + b n ( 2 , k ) ( y n μ ) k e y σ d y e μ σ 2 n + 1 σ n k = 0 n 1 0 μ a n ( 2 , k ) ( y n μ ) k e 2 y σ + b n ( 2 , k ) ( y n μ ) k d y + e μ σ 2 n + 1 σ n k = 0 n 1 μ + a n ( 2 , k ) ( y n μ ) k + b n ( 2 , k ) ( y n μ ) k e 2 y σ d y + 1 e μ σ 2 c n
that gives the result (22e). □

4.5. Proof of Theorem 3

Proof. 
We proceed by induction.
Case n = 1. This part of the proof coincides with the analogous part of the proof of Theorem 1.
Case n. Let us assume that (24a) holds for n; we show that it holds for $n+1$, for $n \ge 1$.
In analogy with the proof of Theorem 1, using the transition density function (6) in (46), we obtain
$f_{n+1}(u) = \mathbf{1}_{(0,+\infty)}(u)\int_{-\mu}^{+\infty}\frac{e^{-|u-y-\mu|/\sigma}}{2\sigma}\, f_n(y)\, dy + \delta(u)\int_{-\mu}^{+\infty}\frac{e^{-(y+\mu)/\sigma}}{2}\, f_n(y)\, dy + \mathbf{1}_{(0,+\infty)}(u)\int_0^{-\mu}\frac{e^{-(u-y-\mu)/\sigma}}{2\sigma}\, f_n(y)\, dy + \delta(u)\int_0^{-\mu}\left(1 - \frac{e^{(y+\mu)/\sigma}}{2}\right) f_n(y)\, dy$
Using (23), we obtain
$f_{n+1}(u) = \sum_{i=0}^{1}\int_{-\mu}^{+\infty}\frac{e^{-|u-y-\mu|/\sigma}}{2\sigma}\, f_n^i(y)\, dy\;\mathbf{1}_{(0,+\infty)}(u) + \sum_{i=0}^{1}\int_{-\mu}^{+\infty}\frac{e^{-(y+\mu)/\sigma}}{2}\, f_n^i(y)\, dy\;\delta(u) + \sum_{i=0}^{1}\int_0^{-\mu}\frac{e^{-(u-y-\mu)/\sigma}}{2\sigma}\, f_n^i(y)\, dy\;\mathbf{1}_{(0,+\infty)}(u) + \sum_{i=0}^{1}\int_0^{-\mu}\left(1 - \frac{e^{(y+\mu)/\sigma}}{2}\right) f_n^i(y)\, dy\;\delta(u)$
When  u > 0  we obtain
$f_{n+1}(u) = \left[\int_{-\mu}^{+\infty}\frac{e^{-|u-y-\mu|/\sigma}}{2\sigma}\, f_n^1(y)\, dy + \int_0^{-\mu}\frac{e^{-(u-y-\mu)/\sigma}}{2\sigma}\, f_n^1(y)\, dy + \frac{e^{-(u-\mu)/\sigma}}{2\sigma}\, c_n\right]\mathbf{1}_{(0,+\infty)}(u) = \left[\int_0^{u-\mu}\frac{e^{-(u-y-\mu)/\sigma}}{2\sigma}\, f_n^1(y)\, dy + \int_{u-\mu}^{+\infty}\frac{e^{(u-y-\mu)/\sigma}}{2\sigma}\, f_n^1(y)\, dy + \frac{e^{-(u-\mu)/\sigma}}{2\sigma}\, c_n\right]\mathbf{1}_{(0,+\infty)}(u)$
Applying the inductive hypothesis (24)
$f_{n+1}(u) = \left[\frac{1}{(2\sigma)^n}\frac{e^{-(u-\mu)/\sigma}}{2\sigma}\sum_{j=0}^{n-1} b_n(1,j)\int_0^{u-\mu}(y-(n-1)\mu)^j\, dy + \frac{1}{(2\sigma)^n}\frac{e^{(u-\mu)/\sigma}}{2\sigma}\sum_{j=0}^{n-1} b_n(1,j)\int_{u-\mu}^{+\infty}(y-(n-1)\mu)^j e^{-2y/\sigma}\, dy + \frac{e^{-(u-\mu)/\sigma}}{2\sigma}\, c_n\right]\mathbf{1}_{(0,+\infty)}(u)$
Computing the integrals and expanding the incomplete Gamma functions (55), we obtain
$f_{n+1}(u) = \left\{\frac{1}{(2\sigma)^n}\frac{e^{-(u-\mu)/\sigma}}{2\sigma}\sum_{k=0}^{n-1} b_n(1,k)\left[\frac{(u-n\mu)^{k+1}}{k+1} - \frac{(-(n-1)\mu)^{k+1}}{k+1}\right] + \frac{1}{(2\sigma)^n}\frac{e^{(u-\mu)/\sigma}}{2\sigma}\sum_{k=0}^{n-1} b_n(1,k)\, e^{-2(n-1)\mu/\sigma}\,\frac{\sigma^{k+1}}{2^{k+1}}\, k!\, e^{-2(u-n\mu)/\sigma}\sum_{r=0}^{k}\frac{\left(2(u-n\mu)/\sigma\right)^r}{r!} + \frac{e^{-(u-\mu)/\sigma}}{2\sigma}\, c_n\right\}\mathbf{1}_{(0,+\infty)}(u)$
where we can recognize the recurrence relations (25).
Let us now work out the calculation of the probability of being at 0 at step $n+1$:
(67) $c_{n+1} = \int_{-\mu}^{+\infty}\frac{e^{-(y+\mu)/\sigma}}{2}\, f_n^1(y)\, dy + \int_0^{-\mu}\left(1 - \frac{e^{(y+\mu)/\sigma}}{2}\right) f_n^1(y)\, dy + \left(1 - \frac{e^{\mu/\sigma}}{2}\right) c_n$
Substituting in (67) the expression of the inductive hypothesis (24a) we obtain
$c_{n+1} = \frac{e^{-\mu/\sigma}}{2}\int_{-\mu}^{+\infty}\sum_{k=0}^{n-1}\frac{1}{(2\sigma)^n}\, b_n(1,k)\,(y-(n-1)\mu)^k e^{-2y/\sigma}\, dy + \int_0^{-\mu}\sum_{k=0}^{n-1}\frac{1}{(2\sigma)^n}\, b_n(1,k)\,(y-(n-1)\mu)^k e^{-y/\sigma}\, dy - \frac{e^{\mu/\sigma}}{2}\int_0^{-\mu}\sum_{k=0}^{n-1}\frac{1}{(2\sigma)^n}\, b_n(1,k)\,(y-(n-1)\mu)^k\, dy + \left(1 - \frac{e^{\mu/\sigma}}{2}\right) c_n$
$= -\frac{e^{-\mu/\sigma}}{2^{n+1}\sigma^n}\sum_{k=0}^{n-1} b_n(1,k)\, e^{-2(n-1)\mu/\sigma}\,\frac{\sigma^{k+1}}{2^{k+1}}\,\Gamma\!\left(1+k,\ \frac{2(y-(n-1)\mu)}{\sigma}\right)\Big|_{[-\mu,\,+\infty)} - \frac{1}{(2\sigma)^n}\sum_{k=0}^{n-1} b_n(1,k)\, e^{-(n-1)\mu/\sigma}\,\sigma^{k+1}\,\Gamma\!\left(1+k,\ \frac{y-(n-1)\mu}{\sigma}\right)\Big|_{[0,\,-\mu]} - \frac{e^{\mu/\sigma}}{2^{n+1}\sigma^n}\sum_{k=0}^{n-1} b_n(1,k)\,\frac{(y-(n-1)\mu)^{k+1}}{k+1}\Big|_{0}^{-\mu} + \left(1 - \frac{e^{\mu/\sigma}}{2}\right) c_n$
that gives the coefficient (25c). □

5. Proofs of Theorems on the FET

5.1. Proof of Theorem 4

Proof. 
We proceed by induction.
Case n = 1. Since $P(1|x) = 1 - F_1(h|x)$ and $0 < \mu < h$ by hypothesis, using (3) in Lemma 1, for $x \in [0, h)$, we have
$P(1|x) = \frac{1}{2}\, e^{(\mu - h + x)/\sigma}\,\mathbf{1}_{I_1^1}(x) + \left(1 - \frac{1}{2}\, e^{(h - \mu - x)/\sigma}\right)\mathbf{1}_{I_1^2}(x) = \sum_{i=1}^{2} P^i(1|x)$
where the partition (28) is
$I_1^1 = [0,\ h-\mu), \qquad I_1^2 = [h-\mu,\ h)$
and we easily recognize the coefficients (31).
Case n.
We assume that (26) holds for n, and we show that it holds for  n + 1 , for  n 1 .
Conditioning on the position reached at time 1, we obtain
(68) $P(n+1|x) = \int_0^h P(n|y)\, P(W_1 \in dy), \qquad n = 1, 2, \ldots$
Substituting (5) and using the inductive hypothesis (27), we obtain:
P ( n + 1 | x ) = P 1 ( n | 0 ) e μ + x σ 2 + j = 1 i * 1 I n j P j ( n | y ) e y μ x σ 2 σ d y + h ( n i * + 1 ) μ min ( h , μ + x ) P i * ( n | y ) e y μ x σ 2 σ d y + min ( h , μ + x ) min ( h , h ( n i * ) μ ) P i * ( n | y ) e y + μ + x σ 2 σ d y + j = i * + 1 n I n j P j ( n | y ) e y + μ + x σ 2 σ d y
where $I_n^{i^*}$ is the interval containing $x + \mu$, for $n > 1$. Please note that the value of $i^*$ depends on x, $\mu$ and n, and $1 \le i^* \le n$.
Now we rewrite $P(n+1|x)$ as a sum of $n+1$ terms $P^i(n+1|x)$, $i = 1, \ldots, n+1$, where
P i ( n + 1 | x ) = P 1 ( n | 0 ) e μ + x σ 2 + j = 1 i * 1 I n j P j ( n | y ) e y μ x σ 2 σ d y + h ( n i * + 1 ) μ min ( h , μ + x ) P i * ( n | y ) e y μ x σ 2 σ d y + min ( h , μ + x ) min ( h , h ( n i * ) μ ) P i * ( n | y ) e y + μ + x σ 2 σ d y + j = i * + 1 n I n j P j ( n | y ) e y + μ + x σ 2 σ d y 1 I n + 1 i ( x ) .
  • Observe that, when $i = n+1$, we have $h - \mu < x < h$; hence, $i^* = n+1$ and (70) reduces to the sum of the first two terms.
  • Now, using the inductive hypothesis (27), we obtain
    P i ( n + 1 | x ) = e μ + x σ 2 k = 0 m n 1 1 1 2 n σ n 1 α n ( 1 , k ) ( n μ ) k + β n ( 1 , k ) ( n μ ) k + η n ( 1 , 0 ) + j = 1 i * 1 1 2 n σ n 1 I n j e ( y μ x ) σ 2 σ k = 0 m n j 1 α n ( j , k ) ( y + n μ ) k e y σ + β n ( j , k ) ( y + n μ ) k e y σ + 2 n σ n 1 η n ( j , 0 ) d y + 1 2 n σ n 1 h ( n i * + 1 ) μ min ( h , x + μ ) e ( y μ x ) σ 2 σ k = 0 m n i * 1 α n ( i * , k ) ( y + n μ ) k e y σ + β n ( i * , k ) ( y + n μ ) k e y σ + η n ( i * , 0 ) 2 n σ n 1 d y + 1 2 n σ n 1 min ( h , x + μ ) min ( h , h ( n i * ) μ ) e ( y μ x ) σ 2 σ k = 0 m n i * 1 α n ( i * , k ) ( y + n μ ) k e y σ + β n ( i * , k ) ( y + n μ ) k e y σ + η n ( i * , 0 ) 2 n σ n 1 d y + j = i * + 1 n 1 2 n σ n 1 I j e ( y μ x ) σ 2 σ k = 0 m n j 1 α n ( j , k ) ( y + n μ ) k e y σ + β n ( j , k ) ( y + n μ ) k e y σ + η n ( j , 0 ) 2 n σ n 1 d y 1 I n + 1 i ( x )
Recognizing Gamma functions, we obtain
P i ( n + 1 | x ) = e ( x + μ ) σ 2 n + 1 σ n j = 0 i * 1 K n j + e ( x + μ ) σ 2 n + 1 σ n k = 0 m n i * 1 α n ( i * , k ) e 2 n μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 ( y + n μ ) σ h ( n i * + 1 ) μ min ( h , x + μ ) + β n ( i * , k ) ( y + n μ ) k + 1 k + 1 h ( n i * + 1 ) μ min ( h , x + μ ) + η n ( i * , 0 ) ( 2 σ ) n e y σ h ( n i * + 1 ) μ min ( h , x + μ ) + e ( x + μ ) σ 2 n + 1 σ n k = 0 m n i * 1 α n ( i * , k ) ( y + n μ ) k + 1 k + 1 min ( h , μ + x ) min ( h , h ( n i * ) μ ) β n ( i * , k ) e 2 n μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 ( y + n μ ) σ min ( h , x + μ ) min ( h , h ( n i * ) μ ) η n ( i * , 0 ) ( 2 σ ) n e y σ min ( h , x + μ ) min ( h , h ( n i * ) μ ) + e ( x + μ ) σ 2 n + 1 σ n j = i * + 1 n K n j 1 I n + 1 i ( x )
where  K n j  are given by (30). Now we can use the expansion (55) to obtain
P i ( n + 1 | x ) = e ( x + μ ) σ 2 n + 1 σ n j = 0 i * 1 K n j + e ( x + μ ) σ 2 n + 1 σ n k = 0 m n i * 1 α n ( i * , k ) σ k + 1 ( 1 ) k 2 k + 1 k ! e 2 y σ r = 0 k ( 2 ) r r ! σ r ( y + n μ ) r h ( n i * + 1 ) μ min ( h , x + μ ) + β n ( i * , k ) ( y + n μ ) k + 1 k + 1 h ( n i * + 1 ) μ min ( h , x + μ ) + η n ( i * , 0 ) ( 2 σ ) n e y σ h ( n i * + 1 ) μ min ( h , x + μ ) + e ( x + μ ) σ 2 n + 1 σ n k = 0 m n i * 1 α n ( i * , k ) ( y + n μ ) k + 1 k + 1 β n ( i * , k ) σ k + 1 2 k + 1 k ! e 2 y σ r = 0 k ( 2 ) r r ! σ r ( y + n μ ) r min ( h , x + μ ) min ( h , h ( n i * ) μ ) η n ( i * , 0 ) ( 2 σ ) n e y σ min ( h , x + μ ) min ( h , h ( n i * ) μ ) + e ( x + μ ) σ 2 n + 1 σ n j = i * + 1 n K n j 1 I n + 1 i ( x )
Substituting the integration extremes, for  i < n + 1  we obtain
P i ( n + 1 | x ) = e x σ 2 n + 1 σ n e μ σ j = 0 i * 1 K n j + e x σ 2 n + 1 σ n e μ σ k = 0 m n i * 1 α n ( i * , k ) σ k + 1 ( 1 ) k 2 k + 1 k ! e 2 ( x + μ ) σ r = 0 k ( 2 ) r r ! σ r ( ( x + μ ) + n μ ) r e 2 ( h ( n i * + 1 ) μ ) σ r = 0 k ( 2 ) r r ! σ r ( ( h ( n i * + 1 ) μ ) + n μ ) r + β n ( i * , k ) ( ( x + μ ) + n μ ) k + 1 k + 1 ( ( h ( n i * + 1 ) μ ) + n μ ) k + 1 k + 1 + η n ( i * , 0 ) ( 2 σ ) n e x + μ σ e h ( n i * + 1 ) μ σ + e x σ 2 n + 1 σ n e μ σ k = 0 m n i * 1 α n ( i * , k ) ( ( h ( n i * ) μ ) + n μ ) k + 1 k + 1 ( μ + x + n μ ) k + 1 k + 1 β n ( i * , k ) σ k + 1 2 k + 1 k ! e 2 ( h ( n i * ) μ ) σ r = 0 k ( 2 ) r r ! σ r ( h ( n i * ) μ + n μ ) r e 2 ( x + μ ) σ r = 0 k ( 2 ) r r ! σ r ( x + μ + n μ ) r η n ( i * , 0 ) ( 2 σ ) n e h ( n i * ) μ σ e x + μ σ + e x σ 2 n + 1 σ n e μ σ j = i * + 1 n K n j 1 I n + 1 i ( x )
and if  i = n + 1 P i ( n + 1 | x )  reduces to
$P^{n+1}(n+1|x) = \frac{e^{-x/\sigma}}{2^{n+1}\sigma^n}\, e^{-\mu/\sigma}\sum_{j=0}^{n} K_n^j.$
Grouping the terms $e^{x/\sigma}(x+(n+1)\mu)^j$ and $e^{-x/\sigma}(x+(n+1)\mu)^j$, we obtain the coefficients $\alpha_{n+1}(i,j)$ and $\beta_{n+1}(i,j)$, respectively. Furthermore, (29e) is obtained by grouping the constant terms. □

5.2. Proof of Corollary 2

Proof. 
We proceed by induction. However, in this case we need two steps before starting the induction.
Case n = 1. Using (3), we obtain (32) since, for $x \in [0, h)$, we have
$P(1|x) = 1 - \frac{1}{2}\, e^{(h - \mu - x)/\sigma}$
from which we recognize  η 1  and  β 1 .
Case n = 2. Conditioning on the position reached at time 1 and observing that $h \le \mu + x$, we obtain
$P(2|x) = \int_0^h \left[\frac{e^{(y - \mu - x)/\sigma}}{2\sigma} + \frac{e^{-(\mu + x)/\sigma}}{2}\,\delta(y)\right]\left(1 - \frac{1}{2}\, e^{(h - \mu - y)/\sigma}\right) dy = \left[\frac{e^{(h-\mu)/\sigma}}{2} - \left(\frac{h}{4\sigma} + \frac{1}{4}\right) e^{(h-2\mu)/\sigma}\right] e^{-x/\sigma}$
where we recognize the coefficient  β 2  in (33c).
Case $n > 2$. Conditioning on the position reached at time 1, we obtain
$P(n+1|x) = \int_0^h \left[\frac{e^{(y - \mu - x)/\sigma}}{2\sigma} + \frac{e^{-(\mu + x)/\sigma}}{2}\,\delta(y)\right] \beta_n\, e^{-y/\sigma}\, dy = \beta_n \left(\frac{1}{2} + \frac{h}{2\sigma}\right) e^{-\mu/\sigma}\, e^{-x/\sigma}$
The coefficients become
$\beta_{n+1} = \beta_n\left(\frac{1}{2}+\frac{h}{2\sigma}\right) e^{-\mu/\sigma} = \beta_2\left(\frac{1}{2}+\frac{h}{2\sigma}\right)^{n-1} e^{-(n-1)\mu/\sigma} = \frac{e^{(h-n\mu)/\sigma}}{2}\left(\frac{1}{2}+\frac{h}{2\sigma}\right)^{n-1} - \frac{e^{(h-(n+1)\mu)/\sigma}}{2}\left(\frac{1}{2}+\frac{h}{2\sigma}\right)^{n}$

5.3. Proof of Theorem 5

Proof. 
We proceed by induction.
Case n = 1. Mimicking the proof for $n = 1$ in Theorem 4, with $0 \le x < h < h - \mu$, we have
$P(1|x) = \frac{1}{2}\, e^{(\mu - h + x)/\sigma}$
where we recognize  α 1 ( 1 , 0 )  as (40a).
Case n. We assume that (34) holds for n, and we show that it holds for $n+1$, for $n \ge 1$.
Conditioning on the position reached at time 1 and using (68), we have
$P(n+1|x) = \sum_{i=1}^{h_n} \int_0^h P^i(n|y)\, f(y, 1 \mid x, 0)\, dy$
The structure of $P^i(n+1|x)$ changes according to whether $i = 1$ or $1 < i \le h_{n+1}$.
If $i = 1$, we have $x < -\mu$ and
P 1 ( n + 1 | x ) = j = 1 h n I n j e y x μ σ 2 σ P j ( n | y ) d y + 1 e μ + x σ 2 P 1 ( n | 0 ) 1 I n + 1 1 ( x ) = e x + μ σ 2 n + 1 σ n j = 1 h n k = 0 n 1 I n j α n ( j , k ) ( y + ( n 1 ) μ ) k + I n j β n ( j , k ) ( y + ( n 1 ) μ ) k e 2 y σ + 2 n σ n 1 η n j e y σ d y + 1 e μ + x σ 2 P 1 ( n | 0 ) 1 I n + 1 1 ( x ) = e x + μ σ 2 n + 1 σ n j = 1 h n K n j K n 0 + K n 0 2 n σ n 1 I n + 1 1 ( x )
and we recognize the parameters (38).
If $i > 1$, we have $x \ge -\mu$ and
P i ( n + 1 | x ) = P 1 ( n | 0 ) e μ + x σ 2 + j = 1 i * 1 I n j P j ( n | y ) e y μ x σ 2 σ d y + ( i * 1 ) μ μ + x P i * ( n | y ) e y μ x σ 2 σ d y + μ + x i * μ P i * ( n | y ) e y + μ + x σ 2 σ d y + j = i * + 1 h n I n j P j ( n | y ) e y + μ + x σ 2 σ d y 1 I n + 1 i ( x ) .
Now, using the inductive hypothesis (35), we have
P i ( n + 1 | x ) = e μ + x σ 2 k = 0 n 1 1 2 n σ n 1 α n ( 1 , k ) ( ( n 1 ) μ ) k + β n ( 1 , k ) ( ( n 1 ) μ ) k + η n 1 + j = 1 i * 1 1 2 n σ n 1 I n j e ( y μ x ) σ 2 σ k = 0 n 1 α n ( j , k ) ( y + ( n 1 ) μ ) k e y σ + β n ( j , k ) ( y + ( n 1 ) μ ) k e y σ + η n j 2 n σ n 1 d y + 1 2 n σ n 1 ( i * 1 ) μ x + μ e ( y μ x ) σ 2 σ k = 0 n 1 α n ( i * , k ) ( y + ( n 1 ) μ ) k e y σ + β n ( i * , k ) ( y + ( n 1 ) μ ) k e y σ + η n i * 2 n σ n 1 d y + 1 2 n σ n 1 x + μ i * μ e ( y μ x ) σ 2 σ k = 0 n 1 α n ( i * , k ) ( y + ( n 1 ) μ ) k e y σ + β n ( i * , k ) ( y + ( n 1 ) μ ) k e y σ + η n i * 2 n σ n 1 d y + j = i * + 1 h n 1 2 n σ n 1 I n j e ( y μ x ) σ 2 σ k = 0 n 1 α n ( j , k ) ( y + ( n 1 ) μ ) k e y σ + β n ( j , k ) ( y + ( n 1 ) μ ) k e y σ + η n j 2 n σ n 1 d y 1 I n + 1 i ( x )
Computing the integrals, we obtain
P i ( n + 1 | x ) = e μ + x σ 2 K n 0 ( 2 σ ) n + e ( x + μ ) σ 2 n + 1 σ n j = 1 i * 1 K n j + e ( x + μ ) σ 2 n + 1 σ n k = 0 n 1 α n ( i * , k ) e 2 ( n 1 ) μ σ σ k + 1 ( 1 ) k 2 k + 1 Γ 1 + k , 2 ( y + ( n 1 ) μ ) σ ( i * 1 ) μ x + μ + β n ( i * , k ) ( y + ( n 1 ) μ ) k + 1 k + 1 ( i * 1 ) μ x + μ + η n i * ( 2 σ ) n e y σ ( i * 1 ) μ x + μ + e ( x + μ ) σ 2 n + 1 σ n k = 0 n 1 α n ( i * , k ) ( y + ( n 1 ) μ ) k + 1 k + 1 μ + x i * μ β n ( i * , k ) e 2 ( n 1 ) μ σ σ k + 1 2 k + 1 Γ 1 + k , 2 ( y + ( n 1 ) μ ) σ x + μ i * μ η n i * ( 2 σ ) n e y σ x + μ i * μ + e ( x + μ ) σ 2 n + 1 σ n j = i * + 1 h n K n j 1 I n + 1 i ( x )
Expanding the incomplete Gamma function (55) we obtain
P i ( n + 1 , x ) = e μ + x σ 2 K n 0 ( 2 σ ) n + e ( x + μ ) σ 2 n + 1 σ n j = 1 i * 1 K n j + e ( x + μ ) σ 2 n + 1 σ n k = 0 n 1 α n ( i * , k ) σ k + 1 ( 1 ) k 2 k + 1 k ! e 2 y σ r = 0 k ( 2 ) r r ! σ r ( y + ( n 1 ) μ ) r ( i * 1 ) μ x + μ + β n ( i * , k ) ( y + ( n 1 ) μ ) k + 1 k + 1 ( i * 1 ) μ x + μ + η n i * ( 2 σ ) n e y σ ( i * 1 ) μ x + μ + e ( x + μ ) σ 2 n + 1 σ n k = 0 n 1 α n ( i * , k ) ( y + ( n 1 ) μ ) k + 1 k + 1 μ + x i * μ β n ( i * , k ) σ k + 1 2 k + 1 k ! e 2 y σ r = 0 k ( 2 ) r r ! σ r ( y + ( n 1 ) μ ) r x + μ i * μ η n i * ( 2 σ ) n e y σ x + μ i * μ + e ( x + μ ) σ 2 n + 1 σ n j = i * + 1 n K n j 1 I n + 1 i ( x )
Using (74) and (75) we recognize the coefficients (37). □

5.4. Proof of Corollary 3

Proof. 
We could show that (34) in Theorem 5 simplifies to (41). However, it is easier to proceed by induction.
Case n = 1. Since  0 < x h μ  implies  x + μ 0 , using (4), we have
P ( 1 | x ) = 1 2 e μ h + x σ
Case n. Using the inductive hypothesis as in the previous theorems, conditioning on the position reached at Time 1, we obtain
P ( n + 1 | x ) = 0 h e y + μ + x σ 2 σ + 1 e μ + x σ 2 δ ( y ) α n e y σ + η n d y = h e μ + x σ 2 σ α n + α n e μ + x σ 2 α n + e μ + x σ 2 1 e h σ η n + 1 e μ + x σ 2 η n
Collecting the terms in $e^{\frac{x}{\sigma}}$, we find the recursive relations (42) and (43). □
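To make the recursion concrete, here is a minimal numerical sketch (ours, in Python, not the authors' code [34]). Collecting the terms above gives $\alpha_{n+1}=e^{\frac{\mu}{\sigma}}\big[(\frac{h}{2\sigma}-\frac{1}{2})\alpha_n-\frac{1}{2}e^{-\frac{h}{\sigma}}\eta_n\big]$ and $\eta_{n+1}=\alpha_n+\eta_n$, with $\alpha_1=\frac{1}{2}e^{\frac{\mu-h}{\sigma}}$, $\eta_1=0$; we take this to be the content of (42) and (43). The Monte Carlo cross-check is only illustrative.

```python
import numpy as np

def fet_pmf(x, h, mu, sigma, n_max):
    """P(N = n | W_0 = x), n = 1..n_max, in the regime mu < 0, 0 < x <= h <= -mu,
    reading (41)-(43) as P(n|x) = alpha_n * exp(x/sigma) + eta_n."""
    alpha = 0.5 * np.exp((mu - h) / sigma)   # alpha_1, from P(1|x)
    eta = 0.0                                # eta_1
    pmf = np.empty(n_max)
    for n in range(n_max):
        pmf[n] = alpha * np.exp(x / sigma) + eta
        alpha, eta = (np.exp(mu / sigma)
                      * ((h / (2.0 * sigma) - 0.5) * alpha
                         - 0.5 * np.exp(-h / sigma) * eta),
                      alpha + eta)
    return pmf

def fet_mc(x, h, mu, sigma, n_max, reps=100_000, seed=1):
    """Monte Carlo estimate of the same probabilities from the Lindley recursion."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(n_max)
    for _ in range(reps):
        w = x
        for n in range(n_max):
            w = max(0.0, w + rng.laplace(mu, sigma))
            if w > h:                        # first exit from [0, h]
                counts[n] += 1
                break
    return counts / reps

x, h, mu, sigma = 1.0, 2.0, -3.0, 1.0        # note h <= -mu, as the corollary requires
print(np.round(fet_pmf(x, h, mu, sigma, 5), 5))
print(np.round(fet_mc(x, h, mu, sigma, 5), 5))
```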

5.5. Proof of Theorem 6

Proof. 
We proceed by induction.
Case n = 1. Since $0 < x \leq h$, using (3) we have
$$P(1|x)=\frac{1}{2}\,e^{\frac{x-h}{\sigma}}=\frac{1}{2}\,\alpha_1(1,0)\,e^{\frac{x}{\sigma}}$$
Case n. Using the inductive hypothesis as in the previous theorems, conditioning on the position reached at Time 1, we obtain
$$
P(n+1|x)=\int_0^h\frac{1}{2^n\sigma^{n-1}}\left[\sum_{j=0}^{n-1}\alpha_n(1,j)\,y^j e^{\frac{y}{\sigma}}+\sum_{j=0}^{n-2}\beta_n(1,j)\,y^j e^{-\frac{y}{\sigma}}\right]\frac{e^{-\frac{|y-x|}{\sigma}}}{2\sigma}\,dy+P(n|0)\,\frac{e^{-\frac{x}{\sigma}}}{2}
$$
$$
=\frac{e^{-\frac{x}{\sigma}}}{2^{n+1}\sigma^n}\left[\sum_{j=0}^{n-1}\alpha_n(1,j)\,\frac{\sigma^{j+1}(-1)^j}{2^{j+1}}\,\Gamma\Big(1+j,-\tfrac{2y}{\sigma}\Big)\bigg|_0^x+\sum_{j=0}^{n-2}\beta_n(1,j)\,\frac{y^{j+1}}{j+1}\bigg|_0^x\right]
+\frac{e^{\frac{x}{\sigma}}}{2^{n+1}\sigma^n}\left[\sum_{j=0}^{n-1}\alpha_n(1,j)\,\frac{y^{j+1}}{j+1}\bigg|_x^h-\sum_{j=0}^{n-2}\beta_n(1,j)\,\frac{\sigma^{j+1}}{2^{j+1}}\,\Gamma\Big(1+j,\tfrac{2y}{\sigma}\Big)\bigg|_x^h\right]
+P(n|0)\,\frac{e^{-\frac{x}{\sigma}}}{2}
$$
Expanding the incomplete Gamma function (55), we recognize the coefficients (44). □
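As a concrete illustration of the first induction step, iterating once from $P(1|y)=\frac{1}{2}e^{\frac{y-h}{\sigma}}$ (our own sanity check, not part of the original proof) and splitting the integral at $y=x$ gives

$$
P(2|x)=\int_0^h\frac{1}{2}e^{\frac{y-h}{\sigma}}\,\frac{e^{-\frac{|y-x|}{\sigma}}}{2\sigma}\,dy+P(1|0)\,\frac{e^{-\frac{x}{\sigma}}}{2}
=\frac{e^{-\frac{h}{\sigma}}}{4\sigma}\left[\Big(\frac{\sigma}{2}+h-x\Big)e^{\frac{x}{\sigma}}+\frac{\sigma}{2}\,e^{-\frac{x}{\sigma}}\right],
$$

which is indeed of the polynomial-times-exponential form prescribed by the theorem, with a degree-one polynomial multiplying $e^{\frac{x}{\sigma}}$.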

6. Sensitivity Analysis

Here, we apply the theorems presented in Section 2 to investigate the shapes of the distribution of  W n  for finite times, emphasizing the variety of behaviors as the parameters change.
In Figure 2, the density functions of the Lindley process with  μ = 0.3  and starting position  x = 1  are shown for different values of n and  σ . We can see that, as n increases, the density flattens out, the variance increases, and the maximum of the density moves toward higher values. As  σ  increases, the density flattens out but the position of the maximum is preserved. The discrete part of the distribution, represented by a colored dot on the y-axis, decreases as n increases. Note also that only when  σ = 1  do we observe continuity between the continuous and discrete parts of the density; this is due to Corollary 1.
A different behavior appears when the shift term  μ  is negative. In Figure 3, the density functions of the Lindley process with the same parameters as in Figure 2 but with  μ = −0.3  are shown. We can see that, as n increases, the density converges to a stationary distribution, as expected from the theory. Interestingly, this convergence already appears for reasonably small values of n. Please note that the convergence to the steady-state distribution is faster for larger values of  σ . Furthermore, as  σ  increases, the density flattens out and the variance of  W n  increases, as expected, since we increased the variability of the process.
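A quick simulation sketch (ours; parameter values taken from the figure) makes this convergence tangible by comparing the empirical distribution of  W n  at two different times:

```python
import numpy as np

# Empirical illustration: for mu < 0 the law of W_n stabilizes already at
# moderate n; compare the empirical cdfs of W_9 and W_30.
rng = np.random.default_rng(2)
mu, sigma, x, reps = -0.3, 1.0, 1.0, 100_000
w = np.full(reps, x, dtype=float)
snap = {}
for n in range(1, 31):
    w = np.maximum(0.0, w + rng.laplace(mu, sigma, reps))  # Lindley recursion
    if n in (9, 30):
        snap[n] = w.copy()
grid = np.linspace(0.0, 8.0, 200)
ecdf = {n: (snap[n][:, None] <= grid).mean(axis=0) for n in snap}
print(f"max |F_9 - F_30| on the grid: {np.abs(ecdf[9] - ecdf[30]).max():.3f}")
```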
Figure 4 shows the density function of the Lindley process with  σ = 1  and starting position  x = 1  for different values of n and  μ . We notice that, if  μ  is positive, increasing  μ  shifts the density while leaving its shape almost unchanged, whereas, if  μ  is negative, the density concentrates more and more at zero as  μ  decreases. Another interesting feature concerns the mass at  u = 0 . For positive  μ , it decreases as  μ  increases; indeed, the trajectories move quickly away from zero. For negative  μ , we observe the opposite behavior: the trajectories are pushed towards zero, and it becomes increasingly difficult to leave zero. Recall that  μ < 0  implies the existence of the steady-state distribution. We underline that this distribution is attained faster as  μ  decreases.
As far as the FET  N x  is concerned, in Figure 5 we illustrate the behavior of the probability distribution function  P ( n | x )  and of its cumulative with starting position  x = 1 , boundary  h = 3 ,  μ = 0.3 , and different values of  σ . We see a different behavior for small and large values of the dispersion parameter  σ . Indeed, when  σ ≤ 0.5 ,  P ( n | x = 1 )  has a maximum whose ordinate decreases as  σ  increases, while for  σ > 0.5  the ordinate of the maximum increases with  σ . This behavior has an immediate interpretation: almost deterministic crossings determine a high peak when  σ  is small; as  σ  increases, the variability of the increments facilitates the crossing, which also happens at smaller times, and the probability mass starts to accumulate sooner.
In Figure 6, we investigate how this behavior evolves when  μ  decreases. In particular, we compare the shapes for  μ = [ 0.3 , 0 , −0.3 ] . Observe that the abscissa of the maximum of the distribution decreases as  σ  increases. Concerning the corresponding ordinate, we observe different behaviors depending on the sign of the parameter  μ . Indeed, for positive  μ , we recover the features already noted in Figure 5. These features are no longer observed when  μ ≤ 0  because the deterministic crossing is no longer possible, and crossings are determined only by the noise.

7. An Application: CUSUM with Laplace-Distributed Scores

A classical problem in reliability theory concerns detecting changes in the performance of a machine. A widely used technique for this aim is known as CUSUM [15,36,37]. In this context, given the observation of a sequence of independent random variables  X n , with a probability density function that changes at an unknown Time m:
$$X_n \sim \begin{cases} f(x), & \text{if } 1 \leq n < m \\ g(x), & \text{if } n \geq m, \end{cases}$$
the method aims to recognize the unknown Time m at which the sequence changes its distribution.
In his pioneering paper [15], Page proposes the CUSUM procedure as a series of Sequential Probability Ratio Tests (SPRTs) between two simple hypotheses, with thresholds of 0 and h. The detection of a change is achieved through repeated application of the likelihood ratio test. Page shows that the likelihood ratio test can be written in an iterative form as (1), where  Z n  corresponds to the instantaneous loglikelihood ratio at Time n
$$Z_n = \log\frac{g(X_n)}{f(X_n)}$$
and the stopping time of the test is
$$N_h = \inf\{n \geq 1 \,|\, W_n \notin (0,h)\}.$$
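In code, the detection loop is little more than the Lindley recursion (1): exits at 0 merely restart the statistic through the max, while reaching the upper boundary h triggers a detection. A minimal Python sketch (ours; the function name is hypothetical):

```python
def cusum_detect(scores, h):
    """CUSUM/Lindley recursion W_n = max(0, W_{n-1} + Z_n), W_0 = 0;
    returns the first n with W_n >= h, or None if no detection occurs."""
    w = 0.0
    for n, z in enumerate(scores, start=1):
        w = max(0.0, w + z)   # reflection at 0 restarts the SPRT
        if w >= h:
            return n          # change detected at time n
    return None
```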
Generally, hypothesis tests involve comparing two alternative values of one distribution parameter. Unfortunately, in most cases, closed-form expressions for the distribution of  Z n  are not available. In particular, when the parameter change turns a symmetric distribution into an asymmetric one, the distribution of  Z n  cannot be computed in closed form. However, the results from the previous sections allow us to study the case where  f ( x )  is the Laplace density function (2) and  g ( x ) = f ( x ) e θ x − b ( θ ) , where
$$b(\theta) = \log\big(\mathbb{E}[e^{\theta X}]\big) = \mu\theta - \log\big(1-\sigma^2\theta^2\big)$$
with  X ∼ f ( x )  and  0 < θ < 1 / σ . In other words, we suppose that up to Time m the random variable  X n  follows a Laplace distribution,  X n ∼ L a p l a c e ( μ , σ ) , with mean  μ , while after Time m it switches to a skewed Laplace distribution with mean  μ + ( 2 σ 2 θ ) / ( 1 − σ 2 θ 2 ) > μ  (cf. Figure 7). Please note that this special case, where  f ( x )  is the Laplace density function (2), is relevant in applications [38]. In this instance, the instantaneous loglikelihood ratio of the n-th observation is
$$Z_n = \theta X_n - b(\theta),$$
i.e., it is a linear function of  X n  with slope  θ  and specific intercept (78) for each slope. The distribution of the loglikelihood ratio is then a rescaled and shifted Laplace random variable,  Z n ∼ L a p l a c e ( θ μ − b ( θ ) , σ θ ) .
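Indeed, writing $X_n = \mu + \sigma L$ with $L$ a standard Laplace(0, 1) variable,

$$Z_n = \theta X_n - b(\theta) = \big(\theta\mu - b(\theta)\big) + \theta\sigma L \;\sim\; Laplace\big(\theta\mu - b(\theta),\, \sigma\theta\big),$$

since an affine map with positive slope sends a Laplace variable to another Laplace variable with rescaled parameter.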
Typically, the value of the boundary h is determined using the average time interval between anomalies, as estimated by the experimenter. However, this procedure depends heavily on that subjective estimate and does not allow the Type I error rate  α  to be fixed.
Here, we propose an alternative algorithm, based on the previous section's results, that allows the construction of a test with a prescribed Type I error rate  α . We fix  α , and we consider a sequence of SPRTs: at each step  k ≥ 1 , we determine the boundary value  h k *  such that
$$P\big(N_{h_k^*} \leq k\big) = \alpha.$$
In this way, for the k-th SPRT, we determine a constant boundary  h k *  that holds up to Time k.
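As an illustration of this calibration step (ours, not the authors' implementation [34]): since  P ( N h ≤ k )  is decreasing in h, each  h k *  can be found by bisection. Here we estimate the exit probability by simulation under the null, whereas in practice the exact expressions of the previous sections would be used.

```python
import numpy as np

def b_theta(mu, sigma, theta):
    """Log-moment generating function of a Laplace(mu, sigma) variable, cf. (78)."""
    return mu * theta - np.log(1.0 - (sigma * theta) ** 2)

def prob_exit_by_k(h, k, mu, sigma, theta, reps=50_000, seed=0):
    """Monte Carlo estimate of P(N_h <= k) under the null hypothesis,
    where Z_n ~ Laplace(theta*mu - b(theta), sigma*theta)."""
    rng = np.random.default_rng(seed)
    z = rng.laplace(theta * mu - b_theta(mu, sigma, theta),
                    sigma * theta, size=(reps, k))
    w = np.zeros(reps)
    hit = np.zeros(reps, dtype=bool)
    for n in range(k):
        w = np.maximum(0.0, w + z[:, n])   # Lindley recursion
        hit |= (w >= h)
    return hit.mean()

def calibrate_h(k, alpha, mu, sigma, theta, lo=1e-6, hi=50.0, iters=40):
    """Bisection for the boundary h_k* solving P(N_{h_k*} <= k) = alpha."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if prob_exit_by_k(mid, k, mu, sigma, theta) > alpha:
            lo = mid    # boundary too low: false alarms too frequent
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(calibrate_h(k=50, alpha=0.05, mu=0.0, sigma=1.0, theta=0.6))
```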
In Figure 8, we simulate data from a Laplace distribution with parameters  μ = 0 ,  σ = 1 , starting position  x = 0 , and change time  m = 50 , i.e., for  n < 50  the parameter  θ  is null and for  n ≥ 50  we select  θ = 0.6 . Applying the CUSUM algorithm with the boundary evaluated through (80), we obtain a detection time of 64 (red star in the figure). Looking at the trajectory  X n , it is very hard to detect the change time by eye, but the test works properly, though with a slight delay in detection.
In Figure 9, we perform the same experiment but with  θ = 0.9 . Applying the CUSUM algorithm with the corresponding boundary, we obtain a detection time of 52 (red star in the figure). As expected, as  θ  increases, the detection becomes easier and more precise. This is confirmed in Figure 10, where histograms of the first exit time for  θ = 0.6  (left) and  θ = 0.9  (right) are shown for a sample of  10 4  trajectories.

8. Conclusions

We derived recursive formulae for the distributions of both the position and the first passage time through a constant boundary of a Lindley process with Laplace-distributed increments. The existence of these formulae enabled us to develop the necessary computational software, which is available on GitHub [34]. Our approach relies heavily on the ability to obtain recursive formulae for the involved integrals, which is made possible mainly by the presence of exponentials in the increment distribution. Unfortunately, this reliance limits the immediate generalization of our method to different distributions or boundaries, while the case of a multidimensional Lindley process could be tractable but would be highly demanding in terms of the complexity of the involved formulae. Nonetheless, the Laplace distribution plays a significant role in the CUSUM framework and in queuing theory. Future research could further explore these applications, potentially addressing related estimation problems in reliability. Additionally, it would be interesting to compare the performance of our approach with that of other change-point detection methods and to investigate the applicability of the presented results within the context of queuing models.

Author Contributions

Conceptualization, E.L., L.S. and C.Z.; Methodology, E.L., L.S. and C.Z.; Software, E.L., L.S. and C.Z.; Validation, E.L., L.S. and C.Z.; Formal analysis, E.L., L.S. and C.Z.; Investigation, E.L., L.S. and C.Z.; Resources, E.L., L.S. and C.Z.; Writing—original draft, E.L., L.S. and C.Z.; Writing—review & editing, E.L., L.S. and C.Z.; Funding acquisition, L.S. and C.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the MIUR-PRIN 2022 project “Non-Markovian dynamics and non-local equations”, no. 202277N5H9 and by the Spoke 1 “FutureHPC & BigData” of ICSC-Centro Nazionale di Ricerca in High-Performance-Computing, Big Data and Quantum Computing, funded by European Union-NextGenerationEU.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

We are grateful to I. Meilijson for his useful suggestions and comments. L.S. and C.Z. are also grateful to INdAM-GNAMPA.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Pearson, K. The Problem of the Random Walk. Nature 1905, 72, 294.
2. Feller, W. An Introduction to Probability Theory and Its Applications II, 2nd ed.; Wiley: New York, NY, USA, 1971.
3. Cygan, W.; Kloas, J. On recurrence of the multidimensional Lindley process. Electron. Commun. Probab. 2018, 23, 1–14.
4. Cox, D.R.; Miller, H.D. The Theory of Stochastic Processes; Wiley: Hoboken, NJ, USA, 1965.
5. Ethier, S.N.; Kurtz, T.G. The infinitely-many-neutral-alleles diffusion model. Adv. Appl. Probab. 1981, 13, 429–452.
6. Ethier, S.N.; Kurtz, T.G. Markov Processes: Characterization and Convergence; Wiley: New York, NY, USA, 1986.
7. Karlin, S.; Taylor, H.M. A Second Course in Stochastic Processes; Academic Press: New York, NY, USA, 1981; Volume 2.
8. Lawler, G.F.; Limic, V. Random Walk: A Modern Introduction; Cambridge University Press: Cambridge, UK, 2010.
9. Gut, A. Stopped Random Walks; Springer: New York, NY, USA, 1988.
10. Dshalalow, J.H.; White, R.T. Current trends in random walks on random lattices. Mathematics 2021, 9, 1148.
11. Lindley, D.V. The theory of queues with a single server. In Mathematical Proceedings of the Cambridge Philosophical Society; Cambridge University Press: Cambridge, UK, 1952; pp. 277–289.
12. Asmussen, S. Applied Probability and Queues; Springer: New York, NY, USA, 2003.
13. Borovkov, A.A. Stochastic Processes in Queueing Theory; Springer: New York, NY, USA; Berlin/Heidelberg, Germany, 1976.
14. Kendall, D.G. Some problems in the theory of queues. J. R. Stat. Soc. B Met. 1951, 13, 151–173.
15. Page, E.S. Continuous inspection schemes. Biometrika 1954, 41, 100–115.
16. Bhattacharya, R.; Majumdar, M.; Lizhen, L. Problems of ruin and survival in economics: Applications of limit theorems in probability. Sankhya Ser. B 2013, 75, 145–180.
17. Mercier, S. Transferring biological sequence analysis tools to break-point detection for on-line monitoring: A control chart based on the local score. Qual. Reliab. Eng. Int. 2020, 36, 2379–2397.
18. Bhattacharya, R.; Majumdar, M.; Hashimzade, N. Limit theorems for monotone Markov processes. Sankhya Ser. A 2010, 72, 170–190.
19. Dshalalow, J.H. An anthology of classical queueing methods. In Advances in Queueing Theory, Methods, and Open Problems; CRC Press: Boca Raton, FL, USA, 1995; pp. 1–42.
20. Brockmeyer, R.; Halstrom, H.L.; Jensen, A. The Life and Works of A.K. Erlang; Copenhagen Telephone Company: Copenhagen, Denmark, 1948.
21. Iams, S.; Majumdar, M. Stochastic equilibrium: Concepts and computations for Lindley processes. Int. J. Econ. Theory 2010, 6, 47–56.
22. Stadje, W. A new approach to the Lindley recursion. Stat. Probabil. Lett. 1997, 31, 169–175.
23. Diaconis, P.; Freedman, D. Iterated random functions. SIAM Rev. 1999, 41, 45–76.
24. Peigné, M.; Woess, M.W. Recurrence of two-dimensional queueing processes, and random walk exit times from the quadrant. Ann. Appl. Probab. 2021, 31, 2519–2537.
25. Lakatos, L.; Zbaganu, G. Comparisons of G/G/1 queues. Proc. Rom. Acad. 2007, 8, 85–94.
26. Palomo, S.; Pender, J. Learning Lindley's Recursion. In Proceedings of the 2020 Winter Simulation Conference (WSC), Orlando, FL, USA, 14–18 December 2020; pp. 644–655.
27. Raducan, A.; Lakatos, L.; Zbaganu, G. Computable Lindley processes in queueing and risk theories. Rev. Roum. Math. Pure Appl. 2008, 53, 239–266.
28. Stadje, W. First-Passage Times for Some Lindley Processes in Continuous Time. Seq. Anal. 2002, 21, 87–97.
29. Van Dobben de Bruyn, C.S. Cumulative Sum Tests: Theory and Practice; Griffin: London, UK, 1968.
30. Zacks, S. Exact determination of the run length distribution of a one-sided CUSUM procedure applied on an ordinary Poisson process. Seq. Anal. 2004, 23, 159–178.
31. Vardeman, S.; Ray, D. Average run lengths for CUSUM schemes when observations are exponentially distributed. Technometrics 1985, 27, 145–150.
32. Gan, F.F. Exact run length distributions for one-sided exponential CUSUM schemes. Stat. Sin. 1992, 2, 297–312.
33. Markovich, N.; Razumchik, R. Cluster modeling of Lindley process with application to queuing. In International Conference on Distributed Computer and Communication Networks; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 330–341.
34. Lucrezia, E.; Sacerdote, L.; Zucca, C. GitHub 2023. Available online: https://github.com/emanuelelucreziaA/Some-exactresults–on-Lindley-process-with-Laplace-jumps/tree/main (accessed on 25 March 2025).
35. Olver, F.W.J.; Olde Daalhuis, A.B.; Lozier, D.W.; Schneider, B.I.; Boisvert, R.F.; Clark, C.W.; Miller, B.R.; Saunders, B.V.; Cohl, H.S.; McClain, M.A. (Eds.) NIST Digital Library of Mathematical Functions; Springer: New York, NY, USA, 2023.
36. Basseville, M.; Nikiforov, I. Detection of Abrupt Changes: Theory and Application; Prentice Hall Information and System Sciences Series; Prentice Hall: Hoboken, NJ, USA, 1993.
37. Kay, S. Fundamentals of Statistical Signal Processing, Volume 2: Detection Theory; Prentice Hall PTR: Hoboken, NJ, USA, 1998.
38. Khan, R.A. Distributional properties of CUSUM stopping times. Seq. Anal. 2008, 27, 420–434.
Figure 1. Trajectories of the Lindley process with  σ = 1 ,  x = 1  and  μ = 0.3  (red),  μ = 0  (blue) and  μ = −0.3  (green).
Figure 2. Density functions of the Lindley process with  μ = 0.3  and starting position  x = 1  for different values of  n = [ 2 , 3 , 5 , 9 ]  and  σ  ( σ = 0.5  blue,  σ = 1  green and  σ = 2  red). All these graphs have been obtained using Theorem 1.
Figure 3. Density functions of the Lindley process with  μ = −0.3  and starting position  x = 1  for different values of  n = [ 2 , 3 , 5 , 9 ]  and  σ  ( σ = 0.5  blue,  σ = 1  green and  σ = 2  red). All these graphs have been obtained using Theorem 2.
Figure 4. Density functions of the Lindley process with  σ = 1  and starting position  x = 1  for different values of  n = [ 2 , 9 ]  and  μ  (first line:  μ = 1  blue,  μ = 2  green and  μ = 3  red; second line:  μ = −0.3  blue,  μ = −0.7  green and  μ = −1.2  red). The graphs of the first line have been obtained using Theorem 1. In the second line, for the graphs corresponding to  μ = −1.2  we used Theorem 3, while for the other cases we used Theorem 2.
Figure 5. Distribution and cumulative distribution function of the FET of the Lindley process originated in  x = 1  with  μ = 0.3  and for different values of  σ  ( σ = 0.1  blue,  σ = 0.2  red,  σ = 0.3  green,  σ = 0.5  magenta,  σ = 1  black and  σ = 2  orange). All these graphs have been obtained using Theorem 4. In the plots, we connected the probabilities to facilitate reading.
Figure 6. Distribution and cumulative distribution function of the FET of the Lindley process originated in  x = 1  with  μ = 0.3  (first row),  μ = 0  (second row), and  μ = −0.3  (third row). For each choice of  μ , a comparison between different values of  σ  is performed ( σ = 0.2  red,  σ = 0.5  blue,  σ = 1  green,  σ = 2  black). All these graphs have been obtained using Theorem 4 (first row), Theorem 6 (second row), and Theorem 5 (third row). The probabilities in the plots are connected to facilitate reading.
Figure 7.  f ( x ) : Laplace with  μ = 0 ,  σ = 1 ;  g ( x ) : skewed Laplace with  μ = 0 ,  σ = 1  and  θ = 0.6  (red) and  θ = 0.9  (blue).
Figure 8. (a): sample generated with  μ = 0 ,  σ = 1 ,  x = 0  for  n < 50  and  θ = 0.6  for  n ≥ 50 . Change time  m = 50 . (b): detection time = 64 (red star). Please note that only the final point of each boundary is plotted.
Figure 9. (a): sample generated with  μ = 0 ,  σ = 1 ,  x = 0  for  n < 50  and  θ = 0.9  for  n ≥ 50 . Change time  m = 50 . (b): detection time = 52 (red star). Observe that only the final point of each boundary is plotted.
Figure 10. Histogram of the detection times when  θ = 0.6  (left) and  θ = 0.9  (right).