Abstract
We consider a Lindley process with Laplace-distributed space increments. We obtain closed-form recursive expressions for the density function of the position of the process and for its first exit time distribution from the domain [0, h). We illustrate the results in terms of the parameters of the process. An example of the application of the analytical results is discussed in the framework of the CUSUM method.
MSC:
60G50; 60K25; 60G40
1. Introduction
The random walk has attracted mathematicians' attention, both for its analytical properties and for its role in modeling, ever since it was mentioned by Pearson [1]. Presently, random walks are universally recognized as the simplest example of a stochastic process and are used as simplified versions of more complex models. For modeling purposes, it is often necessary to introduce suitable boundary conditions, constraining the evolution of the process to specific regions. Typical conditions are absorbing or reflecting boundaries, and the random walk involved may be in one or more dimensions. The recurrence of the states of the free or bounded process in one or more dimensions has been the subject of past and recent studies, and results are available for the free random walk [2] or in the presence of specific boundaries [3].
The distribution of a simple random walk at Time n has a simple expression while, in the case of a random walk constrained by two absorbing boundaries, its computation requires considerable effort [4].
Classical methods for the study of random walks rely on their Markov property. This happens, for example, when the focus is on recurrence properties in one or more dimensions. Alternatively, some results can be obtained by means of diffusion limits, which give good approximations under specific hypotheses [5,6,7,8] or allow the determination of asymptotic results for problems such as the maximum of a random walk or the first exit time across general boundaries [9]. However, the presence of boundaries often gives rise to combinatorial problems and discourages the search for closed-form solutions. Furthermore, the switch from unitary jumps to continuous jumps introduces important computational difficulties. There are thousands of papers about random walks, and it is impossible to refer to all of them. We limit ourselves to citing the recent excellent review by Dshalalow and White [10], which lists many of the most important available results.
In this paper, we focus on a particular constrained random walk: the Lindley process [11]. In particular, we prove a set of analytical results about it. This process is a discrete-time random walk characterized by continuous jumps with a specified distribution.
Without loss of generality, we define it as
W_{n+1} = max(0, W_n + Z_{n+1}), n = 0, 1, 2, …,   (1)
where the Z_n are i.i.d. random variables.
Modeling interest for the Lindley process arises in several fields. Historically, it was introduced in [11] to describe waiting times experienced by customers in a queue over time and it has been extensively studied in recent decades [12,13,14].
This process also arises in a reliability context, in a sequential test framework, through the study of CUSUM tests [15]. Moreover, the same process appears in problems related to resources management [16] and in highlighting atypical regions in biological sequences, transferring biological sequence analysis tools to break-point detection for on-line monitoring [17].
Many contributions concerning the properties of the Lindley process are motivated by its important role in applications [2,11,16,18,19]. A large part of the papers about the Lindley process concerns its asymptotic behavior. Using the strong law of large numbers, in 1952, Lindley [11], in the framework of queuing theory, showed that the process admits a limit distribution as n diverges if and only if the mean of the increments is negative, i.e., if the customers' arrival rate is slower than the service rate. Furthermore, in the zero-mean case, the ratio between the process and the square root of time converges to the modulus of a Gaussian random variable. Lindley also showed that the limit distribution is the solution of an integral equation; Kendall solved it in the case of exponential arrivals [14], while Erlang determined its expression when both arrival and service times are exponential [20]. Simulations were used in [21] to determine the invariant distribution when the increments are Laplace or Gaussian distributed. The analytical expression of the limit distribution was determined by Stadje [22] for the case of integer-valued increments. Recent contributions consider recurrence properties in higher dimensions [3,23,24] or the rate of convergence of expected values of functions of the process.
Other contributions make use of the notion of stochastic ordering [25] or of continuous-time versions of the Lindley process [12]. Recently, the use of machine learning techniques has been proposed to learn the Lindley recursion (1) directly from the waiting time data of a G/G/1 queue [26]. Furthermore, Lakatos and Zbaganu [25], and Raducan, Lakatos and Zbaganu [27] introduced a class of computable Lindley processes for which the distribution of the space increments is a mixture of two distributions. To the best of our knowledge, no other analytical expressions are available for the Lindley process.
The first exit time problem for the Lindley process has been considered mainly in the framework of the CUSUM methodology, to detect the time at which a change in the parameters of the process happens [15,28,29,30]. In this context, the focus was on the Average Run Length, which corresponds to the exit time of the process through a boundary; in [31,32], this distribution and its expected value are determined when the increments are exponentially distributed with a shift. Markovich and Razumchick [33] investigated a problem related to first exit times, i.e., the appearance of clusters of extremes, defined as subsequent exceedances of high thresholds in a Lindley process.
Here, we consider a Lindley process characterized by Laplace-distributed space increments, i.e., Z distributed as a Laplace random variable with location parameter μ and scale parameter b,
f_Z(z) = (1 / (2b)) exp(−|z − μ| / b), z ∈ ℝ,   (2)
where the mean and the variance are E[Z] = μ and Var(Z) = 2b².
In Figure 1, three trajectories of the Lindley process are plotted for different parameter values. The Laplace distribution is often used to model the distribution of financial assets and market fluctuations. It is also used in signal processing and communication theory to model noise and other random processes. Additionally, the Laplace distribution has been used in image processing and speech recognition.
Figure 1.
Trajectories of the Lindley process with , and (red), (blue) and (green), for different values of .
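A few lines of code suffice to generate trajectories of the kind shown in Figure 1. The sketch below is illustrative only: it assumes the standard Lindley recursion W_{k+1} = max(0, W_k + Z_{k+1}) and NumPy's parameterization of the Laplace distribution (location `mu`, scale `b`, variance 2b²); the parameter values are arbitrary, not those of the figure.

```python
import numpy as np

def lindley_path(n, mu, b, w0=0.0, seed=0):
    """Simulate W_0, ..., W_n of the Lindley recursion
    W_{k+1} = max(0, W_k + Z_{k+1}) with i.i.d. Laplace(mu, b) increments."""
    rng = np.random.default_rng(seed)
    w = np.empty(n + 1)
    w[0] = w0
    for k in range(n):
        w[k + 1] = max(0.0, w[k] + rng.laplace(loc=mu, scale=b))
    return w

path = lindley_path(200, mu=-0.5, b=1.0)  # negative drift: frequent visits to 0
```

Plotting several such paths for different values of the scale parameter reproduces the qualitative behavior of Figure 1.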
As in the case of the simple random walk, the behavior of the process changes dramatically according to the sign of μ. Indeed, it is well known [9] that, when μ > 0, all the states are transient and the process drifts toward infinity. In the case μ = 0, the process is null recurrent, while, when μ < 0, the process is positive recurrent and admits a limit distribution. Such a distribution is the solution of the Lindley integral equation [11]. Unfortunately, its analytical solution is unknown and numerical methods become necessary.
The recurrence properties imply that the first exit time of the process W from [0, h) is finite with probability one for each h > 0, so the study of the first exit time distribution can be performed.
Here, our aim is to derive expressions for the distribution of the process as time evolves, together with that of its first exit time (FET) from [0, h). Taking advantage of the presence of exponentials in the Laplace distribution, we prove recursive closed-form formulae for these distributions, and we show that different formulae hold on different parameter domains. We underline that the special form of the Laplace distribution allows the use of recursion at various steps of our proofs; this makes the extension to other types of distributions hard. In Section 2 and Section 3, we study the distribution of the process and of its first exit times from [0, h), respectively. In these sections, we do not present any proofs, which are postponed to Section 4 and Section 5. Due to the complexity of the derived exact formulae for the studied distributions, we complete our work by implementing the software necessary for their fast computation. This software is open source and can be found in the GitHub repository [34]. In Section 6, we illustrate the role of the parameters of the Laplace distribution on the position and FET distributions of the Lindley process. Lastly, in Section 7, we present an application of our theoretical results in the CUSUM framework. In particular, we discuss a method to detect the change point when the data move from a symmetric Laplace distribution to an asymmetric one.
2. Distribution of the Position
Let us consider the Lindley process (1) with Laplace-distributed jumps (2). In the following, we will use the distribution of the process at Time n, which we denote as
and, with an abuse of notation, we indicate with
the corresponding probability density function, where the derivative is understood in the distributional sense, since we deal with mixed random variables.
In the following, we make use of the Dirac delta function, with the convention that, for each
Since , the density for . Moreover, if the sum .
Lemma 1.
The probability distribution function of for is
while, for is
The corresponding probability density function is
where is the Dirac delta function.
Remark 1.
Using the Markov property of the process W, the one-step transition probability density function, for , is
Notation 1.
In what follows, when no confusion can arise, we will omit the dependence on the initial position of the process: , .
To determine the distribution of , the computations change according to the sign of μ. Theorem 1 gives the distribution for . When μ is negative, two different cases arise, covered by Theorem 2 and Theorem 3, respectively.
Theorem 1.
For a Lindley process characterized by Laplace increments with location parameter , the probability density function of the position is given by
where
and , , with , are
The coefficients , and verify the following recursive relations for ,
where is the Kronecker delta and
The initial values for the recursion of (10) are
Remark 2.
Observe that the coefficients , for each admissible k. This prevents the terms in (10e) and (11b) from exploding.
The following corollary may be useful for computational purposes.
Corollary 1.
The constant coefficients , can also be obtained as
Remark 3.
Observe that the density (7) refers to a mixed random variable.
Theorem 2.
For a Lindley process characterized by Laplace increments with location parameter , the probability density function of the position is given by
where
and , are
- If , the coefficients , verify the recursive relations for and
where
The values of the coefficients for are
where
and
The initial conditions for the recursion of the coefficients are
- If , we have for each n, and the coefficients , verify the following recursive relations for , with initial conditions given by (21c), (21d) and (21e).
Theorem 3.
For a Lindley process characterized by Laplace increments with location parameter , the probability density function of the position is given by
where
and .
The coefficients and verify the following recursive relations for
The initial values for the recursion are
3. First Exit Time of the Process
Let N be the first exit time (FET) of the Lindley process (1) from the domain [0, h), for fixed h > 0, and let indicate the probability that the FET is equal to n, given that the process starts in .
In order to determine the distribution of N, the computations change according to the sign of μ and to its order with respect to h. Theorem 4 gives the distribution for , while Corollary 2 considers the case . For , Theorem 5 gives the distribution, Corollary 3 considers the case , and Theorem 6 refers to .
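Before stating the exact results, note that the FET distribution can be cross-checked by plain Monte Carlo simulation. The sketch below is a stand-in for the recursive formulae of Theorems 4–6 (which require the coefficients defined there); it assumes the recursion (1), NumPy's Laplace parameterization (location `mu`, scale `b`), and arbitrary illustrative parameter values.

```python
import numpy as np

def first_exit_time(h, mu, b, w0=0.0, rng=None, max_steps=100_000):
    """First n with W_n >= h, i.e., the exit time of the Lindley process from [0, h)."""
    rng = np.random.default_rng(0) if rng is None else rng
    w = w0
    for n in range(1, max_steps + 1):
        w = max(0.0, w + rng.laplace(loc=mu, scale=b))
        if w >= h:
            return n
    raise RuntimeError("no exit within max_steps")

rng = np.random.default_rng(0)
samples = np.array([first_exit_time(h=2.0, mu=0.5, b=1.0, rng=rng)
                    for _ in range(10_000)])
pmf = np.bincount(samples) / len(samples)  # pmf[n] estimates P(N = n)
```

Comparing such an empirical pmf with the exact recursive expressions below (or with their implementation in [34]) gives a quick sanity check of both.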
Theorem 4.
For a Lindley process characterized by Laplace increments with location parameter , the probability distribution of the FET through h is given by
where and
Here and we partition the interval in
The coefficients , and are defined by the following recursive relations for and
Here, is the index of the interval (28) that contains , and
The initial values are
When , the partition reduces to a single interval , and we obtain a compact closed-form solution.
Corollary 2.
For a Lindley process characterized by Laplace increments with location parameter and , the probability distribution of the FET through h is given by
where
Theorem 5.
For a Lindley process characterized by Laplace increments with location parameter , the probability distribution of the FET through h is given by
where and, for ,
Here we partition the interval as
The coefficients , and are defined by the recursive relations, for and
where is the interval that contains .
For the coefficients are defined by the recursive relations
where
The coefficients for the base case of the recursion are given by
When in the previous theorem, we observe that the partition is formed by a single interval . This considerably simplifies the computations, and we obtain the following:
Corollary 3.
For a Lindley process characterized by Laplace increments with location parameter , and , the probability distribution of the FET through h is given by
where for
and the initial values are
Theorem 6.
For a Lindley process characterized by Laplace increments with location parameter , the probability distribution of the FET through h is given by
The coefficients satisfy the following recursive relations for
with initial conditions
Remark 4.
In Theorems 4 and 5, when μ is not small, the value of i such that is small. Hence, the sums include very few terms, since and soon coincide with i.
Remark 5.
Please note that from Corollary 3, we easily observe the exponential decay of the tails of the FET probability function. Since larger values of μ facilitate the crossing of the boundary, this tail result holds for any choice of μ.
4. Proofs of Theorems on the Position
4.1. Proof of Lemma 1
Proof.
The thesis follows immediately by determining the distribution of and applying the definition (1). □
4.2. Proof of Theorem 1
Proof.
We proceed by induction.
Case n = 1. Since , we have . Hence, this case follows from Lemma 1 introducing the partition of as and rewriting the corresponding density function (5) as
The proof is completed by recognizing the initial conditions (12).
Case n. We assume that (7) holds for n, and we show that it holds for n + 1.
Conditioning on the position reached at Time n, we can write the probability distribution function of the position at Time n + 1 as
Differentiating with respect to u, we obtain the equivalent relation for the probability density function of the position at Time n + 1
Please note that is decomposed into a continuous part and a discrete one.
Let us observe that, when we open the modulus in (47), we add a further interval to the partition , . Indeed, since implies , a shift is induced in the partition intervals (9) and we obtain , . Hence, can be expressed in terms of , according to the new partition.
where
Let us now focus on (51) that holds for . Using the inductive property on , i.e., substituting (8a) in (51), we obtain
which can be written as
Computing the integrals we obtain
where denotes the incomplete Gamma function
and .
The divergence of on the last interval in (54e) will not be a problem because from the recurrent expression of the coefficients we will obtain for each admissible k.
Using (11) and the expansion [35]
we obtain
Collecting powers and , we obtain for and
while the coefficients of and are, respectively,
In a similar way, expanding (50), (52) and (53), we obtain the thesis.
In order to find the recursive expression for , let us consider (49)
□
4.3. Proof of Corollary 1
4.4. Proof of Theorem 2
Proof.
In analogy with the case , we proceed by induction.
Case n = 1. This part of the proof coincides with the analogous part of the proof of Theorem 1. Using (5), according to the sign of the argument, we obtain (14).
Case n. Let us assume that (14) holds for n, and we show that it holds for n + 1.
In analogy with the proof of Theorem 1, using the transition density function (6) in (46), we obtain
Since, by induction, is given by (14), for we obtain
while for we have the mass in zero
Let us focus on the continuous part (58). Expanding the modulus we obtain
Now, in order to complete the proof, we have to distinguish two cases according to the sign of .
Let us consider . Observe that the partition of step n (16) is updated at step n + 1 after checking whether . The corresponding functions , are
Substituting in (61) the expressions given in (15)
Computing the integrals we obtain
Proceeding in an analogous way for (62) we compute
Computing the integrals we obtain
Expanding the incomplete Gamma function (55) in (63) and (64) and collecting powers and , we recognize the coefficients (17).
Observe that the divergence of on in (63) and (64) is not a problem since, from the recurrent expression (17a) of the coefficients and (21c), for each admissible k.
Let us now consider , given by (59). This value changes according to the value of x with respect to the following intervals and .
Making use of the indicators of such intervals, we obtain
where corresponds to (65a) and corresponds to (65b).
Substituting (15) in (65) we obtain
Let us consider now .
Please note that in this case and is identically 0 while becomes
Substituting in (66) the expressions given in (15a)
Computing the integrals we obtain
The coefficients of and give (22).
It remains to compute the mass in 0 when . Note that in this setting, we have only to consider the case since is impossible.
- For (59) becomes
Substituting (15a)
that gives the result (22e). □
4.5. Proof of Theorem 3
Proof.
We proceed by induction.
Case n = 1. This part of the proof coincides with the analogous part of the proof of Theorem 1.
Case n. Let us assume that (24a) holds for n, and we show that it holds for n + 1.
In analogy with the proof of Theorem 1, using the transition density function (6) in (46), we obtain
Using (23), we obtain
When we obtain
Applying the inductive hypothesis (24)
Computing the integrals and expanding the incomplete Gamma functions (55), we obtain
where we can recognize the recurrence relations (25).
Let us now compute the probability of being at 0 at step n + 1
Substituting in (67) the expression of the inductive hypothesis (24a) we obtain
that gives the coefficient (25c). □
5. Proofs of Theorems on the FET
5.1. Proof of Theorem 4
Proof.
We proceed by induction.
Case n = 1. Since and by hypothesis, using (3) in Lemma 1, for , we have
where the partition (28) is
and we easily recognize the coefficients (31).
Case n.
We assume that (26) holds for n, and we show that it holds for n + 1.
Conditioning on the position reached at Time 1, we obtain
Substituting (5) and using the inductive hypothesis (27), we obtain:
where is the interval containing , for . Please note that the value of depends on x, and n and .
Now we rewrite as a sum of terms , where
Recognizing Gamma functions, we obtain
where are given by (30). Now we can use the expansion (55) to obtain
Substituting the integration extremes, for we obtain
and if , reduces to
Grouping out the terms and , we obtain the coefficients and , respectively. Furthermore, (29e) is obtained by grouping constant terms. □
5.2. Proof of Corollary 2
Proof.
We proceed by induction. However, in this case we need two steps before starting the induction.
Case n = 2. Conditioning on the position reached at Time 1 and observing that we obtain
where we recognize the coefficient in (33c).
Case . Conditioning on the position reached at Time 1, we obtain
The coefficients become
□
5.3. Proof of Theorem 5
Proof.
We proceed by induction.
Case n = 1. Mimicking the proof for in Theorem 4, with , we have
where we recognize as (40a).
Case n. We assume that (34) holds for n, and we show that it holds for n + 1.
Conditioning on the position reached at Time 1, using (68) we have
The structure of changes according to or .
If we have and
and we recognize the parameters (38).
If we have and
Now, using the inductive hypothesis (35), we have
Computing the integrals, we obtain
Expanding the incomplete Gamma function (55) we obtain
5.4. Proof of Corollary 3
5.5. Proof of Theorem 6
6. Sensitivity Analysis
Here, we apply the theorems presented in Section 2 to investigate the shapes of the distribution of for finite times, emphasizing the variety of behaviors as the parameters change.
In Figure 2, the density functions of the Lindley process with and starting position are shown for different values of n and . We can see that, as n increases, the density flattens out, the variance increases, and the maximum of the density moves toward higher values. As increases, the density flattens out but the position of the maximum does not change. The discrete part of the distribution, represented by a colored dot on the y-axis, decreases as n increases. Note also that only when do we observe continuity between the continuous and the discrete part of the density; this is due to Corollary 1.
Figure 2.
Density functions of the Lindley process with and starting position for different values of and ( blue, green and red). All these graphs have been obtained using Theorem 1.
A different behavior appears when the shift term is negative. In Figure 3, the density functions of the Lindley process with the same parameters as in Figure 2 and are shown. We can see that, as n increases, the density converges to a stationary distribution, as expected from the theory. Interestingly, such convergence already appears for reasonably small values of n. Please note that the convergence to the steady-state distribution is faster for larger values of . Furthermore, as increases, the density flattens out and the variance of increases, as expected, since we increased the variability of the process.
Figure 3.
Density functions of the Lindley process with and starting position for different values of and ( blue, green and red). All these graphs have been obtained using Theorem 2.
Figure 4 shows the density function of the Lindley process with , starting position , for different values of n and . We notice that, if is positive, an increase of produces a very similar but shifted density shape while, if is negative, as decreases, the density concentrates more and more at zero. Another interesting feature concerns the mass at . For positive , it decreases as increases; indeed, the trajectories move quickly away from zero. However, for negative , we observe the opposite behavior, due to the fact that the trajectories are pushed towards zero, and it becomes increasingly difficult to leave zero. Recall that implies the existence of the steady-state distribution. We underline that such a distribution is attained faster as decreases.
Figure 4.
Density functions of the Lindley process with and starting position for different values of and (first line: blue, green and red; second line: blue, green and red). The graphs of the first line have been obtained using Theorem 1. In the second line for the graphs corresponding to we used Theorem 3 while for the other cases we used Theorem 2.
As far as the FETs are concerned, in Figure 5 we illustrate the behavior of the probability distribution function of the FET and of its cumulative, with starting position , boundary , and for different values of . We see a different behavior for small and large values of the dispersion parameter . Indeed, when , has a maximum whose ordinate decreases as increases, while it increases when . This behavior has an immediate interpretation: almost deterministic crossings determine a high peak when is small; as increases, the variability of the increments facilitates the crossing, which then happens also at smaller times, and the probability mass starts to increase sooner.
Figure 5.
Distribution and cumulative distribution function of the FET of the Lindley process originated in with and for different values of ( blue, red, green, magenta, black and orange). All these graphs have been obtained using Theorem 4. In the plots, we connected the probabilities to facilitate reading.
In Figure 6, we investigate how this behavior evolves when decreases. In particular, we compare the shapes for different values of . Observe that the abscissa of the maximum of the distribution decreases as increases. Concerning the corresponding ordinate, we observe different behaviors depending on the sign of the parameter . Indeed, for positive , we have the features already noted in Figure 5. These features are no longer observed when because here, the deterministic crossing is no longer possible, and crossings are determined only by the noise.
Figure 6.
Distribution and cumulative distribution function of the FET of the Lindley process originated in with (first row), (second row), and (third row). For each choice of a comparison between different values of is performed ( red, blue, green, black). All these graphs have been obtained using Theorem 4 (first row), Theorem 6 (second row), and Theorem 5 (third row). The probabilities in the plots are connected to facilitate reading.
7. An Application: CUSUM with Laplace-Distributed Scores
A problem in reliability theory concerns the detection of a change in a machine's performance. A widely used technique for this aim is known as CUSUM [15,36,37]. In this context, given the observation of a sequence of independent random variables , with a probability density function that changes at an unknown Time m:
the method aims to recognize the unknown Time m at which such a sequence changes its distribution.
In his pioneering paper [15], Page proposes the CUSUM procedure as a series of Sequential Probability Ratio Tests (SPRTs) between two simple hypotheses, with thresholds of 0 and h. The detection of a change is achieved through repeated application of the likelihood ratio test. Page shows that the likelihood ratio test can be written in an iterative form as (1), where corresponds to the instantaneous loglikelihood ratio at Time n
and the stopping time of the test is
Generally, hypothesis tests involve comparing two alternative values of one distribution parameter. Unfortunately, in most cases, closed-form expressions for the distribution of are not available. In particular, when parameter changes result in a shift from a symmetric to an asymmetric distribution, it cannot be computed in closed form. However, the results from previous sections allow us to study the case where is the Laplace density function (2) and , where
with and . In other words, we suppose that up to Time m, the random variable follows a Laplace distribution with mean while, after Time m, it switches to a skewed Laplace distribution with mean (cf. Figure 7). Please note that this special case where is the Laplace density function (2) is relevant in applications [38]. In this instance, the instantaneous loglikelihood ratio of the n-th observation is
i.e., it is a linear function of with slope and specific intercept (78) for each slope. The distribution of the loglikelihood ratio is then a rescaled and shifted Laplace random variable .
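Page's procedure described above can be sketched in a few lines. Since the skewed-Laplace parameterization of the alternative is not reproduced here, the sketch uses a location shift between two symmetric Laplace densities as a stand-in score; this is an assumption of the example, not the setting of the paper, and the parameter values are hypothetical.

```python
import numpy as np

def laplace_logpdf(x, mu, b):
    """Log-density of the Laplace distribution with location mu and scale b."""
    return -np.log(2.0 * b) - abs(x - mu) / b

def cusum_detect(xs, h, score):
    """Page's CUSUM: W_n = max(0, W_{n-1} + score(x_n)); alarm when W_n >= h."""
    w = 0.0
    for n, x in enumerate(xs, start=1):
        w = max(0.0, w + score(x))
        if w >= h:
            return n
    return None  # no alarm raised

# Stand-in alternative: location shift mu0 -> mu1 (hypothetical values)
mu0, mu1, b = 0.0, 1.0, 1.0
score = lambda x: laplace_logpdf(x, mu1, b) - laplace_logpdf(x, mu0, b)

rng = np.random.default_rng(0)
xs = np.concatenate([rng.laplace(mu0, b, 50), rng.laplace(mu1, b, 50)])
alarm = cusum_detect(xs, h=5.0, score=score)  # detection time, or None
```

For this location-shift stand-in the score is piecewise linear in the observation; in the symmetric-to-skewed setting considered above it is exactly linear, as noted in the text.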
Figure 7.
: Laplace with , ; Skewed Laplace with , and (red) and (blue).
Typically, the value of the boundary h is determined using the average time interval between anomalies, as estimated by the experimenter. However, this procedure strongly depends on such a subjective estimate and does not allow fixing the Type I error rate .
Here, we propose an alternative algorithm, based on the previous sections' results, that allows the construction of a test with a prescribed Type I error rate . We fix , and we consider a sequence of SPRTs: at each step , we determine the boundary value such that
In this way, for the k-th SPRT test, we determine a constant boundary that holds up to Time k.
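The calibration (80) relies on the exact FET distribution of Section 3, as implemented in [34]. As those coefficients are not reproduced here, the sketch below shows the same idea with a Monte Carlo stand-in: the boundary is chosen so that the empirical probability of a null-hypothesis alarm by Time k is approximately α. The Laplace null scores and all parameter values are illustrative assumptions; common random numbers make the empirical exceedance probability monotone in the boundary, so bisection applies.

```python
import numpy as np

def calibrate_h(k, alpha, mu, b, n_paths=50_000, hi=50.0, iters=60, seed=0):
    """Find h with P(N <= k | H0) ~ alpha for the CUSUM W_n = max(0, W_{n-1} + Z_n),
    where Z_n are the null scores (here Laplace(mu, b), an assumption).
    Uses: N <= k  iff  max_{n <= k} W_n >= h, so only the running maximum matters."""
    rng = np.random.default_rng(seed)
    z = rng.laplace(mu, b, size=(n_paths, k))
    w = np.zeros(n_paths)
    running_max = np.zeros(n_paths)
    for j in range(k):
        w = np.maximum(0.0, w + z[:, j])
        running_max = np.maximum(running_max, w)
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (running_max >= mid).mean() > alpha:
            lo = mid  # too many false alarms: raise the boundary
        else:
            hi = mid
    return hi

h20 = calibrate_h(k=20, alpha=0.05, mu=-0.5, b=1.0, n_paths=5_000)
```

With the exact formulae of Section 3, the Monte Carlo estimate of the exceedance probability would simply be replaced by the closed-form value.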
In Figure 8, we simulate data from a Laplace distribution with parameters , , and change time , i.e., for the parameter is null and for we select . Applying the CUSUM algorithm with the boundary evaluated through (80), we obtain a detection time of 64 (red star in the figure). Looking at the trajectory , it is very hard to detect the change time by eye, but the test works properly, though with a slight delay in detection.
Figure 8.
(a): sample generated with , , for and for . Time change . (b): Detection time = 64 (red star). Please note that only the final point of each boundary is plotted.
In Figure 9, we perform the same experiment but with . Applying the CUSUM algorithm with the corresponding boundary, we obtain a detection time of 52 (red star in the figure). As expected, as increases, the detection becomes easier and more precise. This is confirmed in Figure 10, where histograms of the detection times for (left) and (right) are shown for a sample of trajectories.
Figure 9.
(a): sample generated with , , for and for . Time change . (b): Detection time = 52 (red star). Observe that only the final point of each boundary is plotted.
Figure 10.
Histogram of the detection times when (left) and (right).
8. Conclusions
We derived recursive formulae for the distributions of both the position and the first passage time through a constant boundary of a Lindley process with Laplace-distributed increments. The existence of these formulae enabled us to develop the necessary computational software, which is available on GitHub. Our approach heavily relies on the ability to obtain recursive formulae for the involved integrals, which is made possible mainly by the presence of exponentials in the increment distribution. Unfortunately, this advantage limits the immediate generalization of our method to different distributions or boundaries, while the case of a multidimensional Lindley process could be tractable but would be highly demanding in terms of the complexity of the involved formulae. Nonetheless, the Laplace distribution plays a significant role in the CUSUM framework and in queuing theory. Future research could further explore these applications, potentially addressing related estimation problems in reliability. Additionally, it would be interesting to compare the presented approach with other change point detection methods and to investigate the applicability of the results within the context of queuing models.
Author Contributions
Conceptualization, E.L., L.S. and C.Z.; Methodology, E.L., L.S. and C.Z.; Software, E.L., L.S. and C.Z.; Validation, E.L., L.S. and C.Z.; Formal analysis, E.L., L.S. and C.Z.; Investigation, E.L., L.S. and C.Z.; Resources, E.L., L.S. and C.Z.; Writing—original draft, E.L., L.S. and C.Z.; Writing—review & editing, E.L., L.S. and C.Z.; Funding acquisition, L.S. and C.Z. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the MIUR-PRIN 2022 project “Non-Markovian dynamics and non-local equations”, no. 202277N5H9 and by the Spoke 1 “FutureHPC & BigData” of ICSC-Centro Nazionale di Ricerca in High-Performance-Computing, Big Data and Quantum Computing, funded by European Union-NextGenerationEU.
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Acknowledgments
We are grateful to I. Meilijson for his useful suggestions and comments. L.S. and C.Z. are also grateful to INdAM-GNAMPA.
Conflicts of Interest
The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
References
- Pearson, K. The Problem of the Random Walk. Nature 1905, 72, 294.
- Feller, W. An Introduction to Probability Theory and Its Applications II, 2nd ed.; Wiley: New York, NY, USA, 1971.
- Cygan, W.; Kloas, J. On recurrence of the multidimensional Lindley process. Electron. Commun. Probab. 2018, 23, 1–14.
- Cox, D.R.; Miller, H.D. The Theory of Stochastic Processes; Wiley: Hoboken, NJ, USA, 1965.
- Ethier, S.N.; Kurtz, T.G. The infinitely-many-neutral-alleles diffusion model. Adv. Appl. Probab. 1981, 13, 429–452.
- Ethier, S.N.; Kurtz, T.G. Markov Processes: Characterization and Convergence; Wiley: New York, NY, USA, 1986.
- Karlin, S.; Taylor, H.M. A Second Course in Stochastic Processes; Academic Press: New York, NY, USA, 1981; Volume 2.
- Lawler, G.F.; Limic, V. Random Walk: A Modern Introduction; Cambridge University Press: Cambridge, UK, 2010.
- Gut, A. Stopped Random Walks; Springer: New York, NY, USA, 1988.
- Dshalalow, J.H.; White, R.T. Current trends in random walks on random lattices. Mathematics 2021, 9, 1148.
- Lindley, D.V. The theory of queues with a single server. In Mathematical Proceedings of the Cambridge Philosophical Society; Cambridge University Press: Cambridge, UK, 1952; pp. 277–289.
- Asmussen, S. Applied Probability and Queues; Springer: New York, NY, USA, 2003.
- Borovkov, A.A. Stochastic Processes in Queueing Theory; Springer: New York, NY, USA; Berlin/Heidelberg, Germany, 1976.
- Kendall, D.G. Some problems in the theory of queues. J. R. Stat. Soc. B Met. 1951, 13, 151–173.
- Page, E.S. Continuous inspection schemes. Biometrika 1954, 41, 100–115.
- Bhattacharya, R.; Majumdar, M.; Lizhen, L. Problems of ruin and survival in economics: Applications of limit theorems in probability. Sankhya Ser. B 2013, 75, 145–180.
- Mercier, S. Transferring biological sequence analysis tools to break-point detection for on-line monitoring: A control chart based on the local score. Qual. Reliab. Eng. Int. 2020, 36, 2379–2397.
- Bhattacharya, R.; Majumdar, M.; Hashimzade, N. Limit theorems for monotone Markov processes. Sankhya Ser. A 2010, 72, 170–190.
- Dshalalow, J.H. An anthology of classical queueing methods. In Advances in Queueing Theory, Methods, and Open Problems; CRC Press: Boca Raton, FL, USA, 1995; pp. 1–42.
- Brockmeyer, E.; Halstrøm, H.L.; Jensen, A. The Life and Works of A.K. Erlang; Copenhagen Telephone Company: Copenhagen, Denmark, 1948. [Google Scholar]
- Iams, S.; Majumdar, M. Stochastic equilibrium: Concepts and computations for Lindley processes. Int. J. Econ. Theory 2010, 6, 47–56. [Google Scholar] [CrossRef]
- Stadje, W. A new approach to the Lindley recursion. Stat. Probabil. Lett. 1997, 31, 169–175. [Google Scholar] [CrossRef]
- Diaconis, P.; Freedman, D. Iterated random functions. SIAM Rev. 1999, 41, 45–76. [Google Scholar]
- Peigné, M.; Woess, W. Recurrence of two-dimensional queueing processes, and random walk exit times from the quadrant. Ann. Appl. Probab. 2021, 31, 2519–2537. [Google Scholar]
- Lakatos, L.; Zbaganu, G. Comparisons of G/G/1 queues. Proc. Rom. Acad. 2007, 8, 85–94. [Google Scholar]
- Palomo, S.; Pender, J. Learning Lindley’s Recursion. In Proceedings of the 2020 Winter Simulation Conference (WSC), Orlando, FL, USA, 14–18 December 2020; pp. 644–655. [Google Scholar]
- Raducan, A.; Lakatos, L.; Zbaganu, G. Computable Lindley processes in queueing and risk theories. Rev. Roum. Math. Pure Appl. 2008, 53, 239–266. [Google Scholar]
- Stadje, W. First-Passage Times for Some Lindley Processes in Continuous Time. Seq. Anal. 2002, 21, 87–97. [Google Scholar]
- Van Dobben de Bruyn, C.S. Cumulative Sum Tests: Theory and Practice; Griffin: London, UK, 1968. [Google Scholar]
- Zacks, S. Exact determination of the run length distribution of a one-sided CUSUM procedure applied on an ordinary Poisson process. Seq. Anal. 2004, 23, 159–178. [Google Scholar] [CrossRef]
- Vardeman, S.; Ray, D. Average run lengths for CUSUM schemes when observations are exponentially distributed. Technometrics 1985, 27, 145–150. [Google Scholar]
- Gan, F.F. Exact run length distributions for one-sided exponential CUSUM schemes. Stat. Sin. 1992, 2, 297–312. [Google Scholar]
- Markovich, N.; Razumchik, R. Cluster modeling of Lindley process with application to queuing. In International Conference on Distributed Computer and Communication Networks; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 330–341. [Google Scholar]
- Lucrezia, E.; Sacerdote, L.; Zucca, C. GitHub 2023. Available online: https://github.com/emanuelelucreziaA/Some-exactresults–on-Lindley-process-with-Laplace-jumps/tree/main (accessed on 25 March 2025).
- Olver, F.W.J.; Olde Daalhuis, A.B.; Lozier, D.W.; Schneider, B.I.; Boisvert, R.F.; Clark, C.W.; Miller, B.R.; Saunders, B.V.; Cohl, H.S.; McClain, M.A. (Eds.) NIST Digital Library of Mathematical Functions; Springer: New York, NY, USA, 2023. [Google Scholar]
- Basseville, M.; Nikiforov, I. Detection of Abrupt Changes: Theory and Application; Prentice Hall Information And System Sciences Series; Prentice Hall: Hoboken, NJ, USA, 1993. [Google Scholar]
- Kay, S. Fundamentals of Statistical Signal Processing, Volume 2: Detection Theory; Prentice Hall PTR: Hoboken, NJ, USA, 1998. [Google Scholar]
- Khan, R.A. Distributional properties of CUSUM stopping times. Seq. Anal. 2008, 27, 420–434. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).