Abstract
Positive recurrence for a single-server queueing system is established under generalized service intensity conditions, without assuming the existence of a density for the service time distribution, but with a certain integral-type lower bound as a sufficient condition. Positive recurrence implies the existence of the invariant distribution and a guaranteed, though possibly slow, convergence to it in the total variation metric.
MSC:
60K25; 90B22
1. Introduction
The goal of this paper is to establish the positive recurrence of the model under certain assumptions. The intensity of service is assumed only partially, at zero (as a lower left derivative value at zero of the “integrated intensity”); in addition, an integral-type condition on the “integrated intensity” over intervals of a certain length is assumed.
By positive recurrence we mean the property of finite expectation of the time to hit the regeneration state starting from any initial state, with an estimate of this expected value (see (2) in what follows for the definition of ), and not just a finite expectation of a regeneration period. For this aim, a new approach is suggested. For other, more standard ways to establish positive recurrence for regenerative models, see [1,2] and the references therein. While the new approach may require conditions that look too restrictive, the hidden main goal is to develop this method. It seems likely that it may help establish better rates of convergence toward stationarity in the model under investigation. We also hope that it may help to make some progress in more involved queueing models such as Erlang–Sevastyanov systems with an infinite number of servers. To the best of our knowledge, the moment conditions on the service time under which positive recurrence is established in ([1], Chapter 2) and ([2], Chapter 5) are not applicable to infinite-server Erlang–Sevastyanov-like systems; at least, the author is not aware of any progress in this direction so far. There is some well-known advice from Leonhard Euler: if you see several possible paths toward the goal, then, in mathematics, you have to try all of them, not just one; some of them, or even all of them, may turn out to be useful in some other adjacent areas or problems. Even though our conditions for positive recurrence are stronger than necessary for applications of other approaches, they may be more useful in some other particular situations. This was the main motivation for this work.
For the recent history of the topic see [3,4,5,6,7,8,9,10,11]. One of the reasons—although not the only one—why various versions of this system are so popular is because of their intrinsic links to important topics of mathematical insurance theory, see [5].
In this paper, we return to the less involved single-server system, , where the intensity of arrivals may only depend on the number of customers in the system, with the goal of reviewing conditions for its positive recurrence. The importance of this property may be highlighted, for example, by the publications [3,12], where the investigation of the model assumes that it is in the “steady state”, which is a synonym for stationarity. As is well known, positive recurrence along with some mild mixing or coupling properties guarantees the existence of a stationary regime of the system. One particular aspect of this issue is how to achieve bounds without assuming the existence of an intensity of service in the model and, more generally, in Erlang–Sevastyanov-type systems. Certain results in this direction were recently established in [13] for a slightly different model. Still, in [13] it is essential that the absolutely continuous part of the distribution function F (in our notation) is non-degenerate; in the present paper, this is not required and the approach is different.
Please note that in such a model certain closely related results may be obtained by the methods of regenerative processes if it is assumed that the same distribution function F (see below) has enough moments. However, our conditions and methods are different. The main (moderate) hope of the author is that this approach may also be useful in studying ergodic properties of Erlang–Sevastyanov-type models, as happened with the earlier results and approaches based on the intensity of service in [11], successfully applied in [14]. The present paper is an initial attempt in the program of developing tools that could help approach the problem outlined recently in [15].
The paper consists of the introduction in Section 1, the setting and the main result in Section 2, two simple auxiliary lemmata in Section 3, the proof of the main result in Section 4, and two simple examples in Section 5 for the comparison of sufficient conditions of Theorem 1 with conditions in terms of the intensity of service in the case when the latter does exist.
2. The Setting and Main Results
2.1. Definition of the Process
The model is as follows. There is one server with an incoming flow of customers or jobs; this flow is Poissonian with intensity , where n is the number of customers in the system. If the server is idle, it immediately starts serving the arriving customer, unless the queue of waiting customers is non-empty, in which case it starts serving one of them. If the server is busy, then the newly arrived customer joins the queue, where it waits until the server completes the earlier job(s). The buffer for the queue is unlimited (denumerable). The discipline by which the server chooses the next customer from the queue for service is FIFO (“first in–first out”). All services are independent with the same distribution function F, and they are independent of the arrivals. A “serve” is used here as a synonym for a completed job. It is assumed that the mean value is finite. It is assumed that
Also, it is assumed that
This will not be used except in one not-so-important remark. Therefore, we do not number this equation.
The following state space is convenient for the description of the stochastic process which describes the model. It will be convenient to identify the zero state with a zero couple . Then the state space of the process is a union,
and the process itself is described at all times by a two-dimensional vector , with , where stands for the number of customers in the system, including both the one in service and those in the queue; after the identification of with mentioned above, in the case of by definition; the second component stands for the elapsed time of the current service. It is assumed that the initial value is any pair of non-negative values, , and the process evolves in time according to the description provided. By construction, it is a Markov process on the state space .
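To make the dynamics concrete, the following is a minimal event-driven simulation sketch of the process described above. The arrival intensity function lambda_n, the service sampler, and the particular numerical choices at the end are purely illustrative placeholders and are not fixed by the paper.

```python
import random

def simulate(lambda_n, sample_service, horizon=10_000.0, seed=1):
    """Event-driven sketch of the state-dependent single-server FIFO queue.

    lambda_n(n)        -- hypothetical arrival intensity with n customers present
    sample_service(rng)-- draws one service time from the general distribution F
    Returns the time-average number of customers over [0, horizon].
    """
    rng = random.Random(seed)
    t, n = 0.0, 0                       # current time, number in system
    next_dep = float("inf")             # completion time of the job in service
    area = 0.0                          # integral of n over time
    while t < horizon:
        rate = lambda_n(n)
        # memorylessness of the Poisson flow lets us redraw the residual
        # time to the next arrival at every event
        next_arr = t + rng.expovariate(rate) if rate > 0 else float("inf")
        t_next = min(next_arr, next_dep, horizon)
        area += n * (t_next - t)
        t = t_next
        if t >= horizon:
            break
        if t == next_dep:               # a service is completed
            n -= 1
            next_dep = t + sample_service(rng) if n > 0 else float("inf")
        else:                           # a new customer arrives
            n += 1
            if n == 1:                  # server was idle: service starts at once
                next_dep = t + sample_service(rng)
    return area / horizon

# purely illustrative choices: lambda_n = 1/(1+n) and Exp(1) service times
print(simulate(lambda n: 1.0 / (1 + n), lambda rng: rng.expovariate(1.0)))
```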
We are interested in estimating the expectation of the stopping time
where the process starts from any initial state .
On some occasions, it will be convenient to write for the first component of and for the second one. For any where , the “integrated intensity” of service
is defined by a Stieltjes integral
The integral is assumed to be finite for any ; for the additional small part of the assumption, see the next subsection. The intuitive meaning of the differential is the infinitesimal conditional probability of a job completion on the interval under the condition that this job was not completed by time t. If , then is called the intensity of service at ; however, we do not assume that has to be absolutely continuous with respect to the Lebesgue measure. For simplicity of the setting and proofs, in order to avoid possible singularities, it is assumed that
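As a simple illustration of this definition (assuming the standard reading of the integrated intensity as the cumulative hazard, i.e., H(t) is the Stieltjes integral of dF(s)/(1 − F(s−)) over (0, t], which is how the differential above is usually understood):

```latex
% Illustration, assuming the standard reading H(t) = \int_{(0,t]} \frac{dF(s)}{1-F(s-)}.
% (i) Exponential service times:
F(t) = 1 - e^{-\mu t}
  \;\Longrightarrow\;
  dH(t) = \frac{\mu e^{-\mu t}\,dt}{e^{-\mu t}} = \mu\,dt ,
  \qquad H(t) = \mu t .
% (ii) An atom of F of mass p at a point a:
\Delta H(a) = \frac{F(a) - F(a-)}{1 - F(a-)} = \frac{p}{1 - F(a-)} ,
% so jumps of F translate into jumps of H, and no density is required.
```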
2.2. Some Notation
- 1.
- The notation for the probability and for the expectation will be used. Both correspond to the initial value of the Markov process under consideration. We highlight that this is a standard notation from the theory of homogeneous Markov processes.
- 2.
- For a possibly discontinuous distribution function F, or for its integrated intensity H, integrals written like will be understood as integrals , and likewise with .
- 3.
- The following convention will be used, if .
2.3. Main Result for
Recall that it is assumed that
Let us also assume that there exists a constant r such that
(so that ) where was defined in (1), and such that
and, moreover,
Let us highlight that the latter inequality is not supposed to hold for small values of approaching zero, but only for . The increase of the integral for a fixed x as may be achieved due to a positive intensity if this derivative exists, due to jumps of H, and also due to the increase of this function on sets of Lebesgue measure zero, as for the Cantor function.
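For instance, here is a hypothetical example in which the required growth of H comes from jumps alone: a purely discrete service distribution with atoms at the points kh, k = 1, 2, …, of masses p(1 − p)^{k−1}. These choices are illustrative and are not taken from the paper.

```latex
% Hypothetical example: growth of H produced by jumps only.
% Let F be purely discrete with atoms at kh, k = 1, 2, \dots, of masses p(1-p)^{k-1}.
F(kh) - F(kh-) = p(1-p)^{k-1}, \qquad 1 - F(kh-) = (1-p)^{k-1}
  \;\Longrightarrow\; \Delta H(kh) = p .
% Hence, for every x \ge 0 and every r \ge h,
H(x+r) - H(x) \;\ge\; p \left( \left\lfloor \tfrac{x+r}{h} \right\rfloor
  - \left\lfloor \tfrac{x}{h} \right\rfloor \right) \;\ge\; p \;>\; 0 ,
% although F has no density, so no intensity of service exists.
```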
Note that the process has no explosion on any bounded interval of time with probability one, since the arrival intensities are bounded. In what follows, is the distance in total variation between two probability measures; here the supremum is taken over all Borel measurable sets .
Theorem 1.
Remark 1.
We highlight that the initial value of the second component here is arbitrary. Also, please note that this theorem is not a pure existence-type result because the constant C may be evaluated, as can be seen from the proof.
3. Lemmata
Lemma 1.
Under assumption (3), for any and for any ,
Proof.
The probability of at least m completed jobs over time T for is given by the repeated integral
Hence, we obtain both inequalities in (10), as required. Please note that assumption (3) implies that for any , otherwise the distribution would be concentrated on a finite interval, which would contradict the assumption. □
Recall the notation and assumption (1).
Lemma 2.
Under assumption (3), for any n
Proof.
First, for any , and any x,
Second, with probability no less than there is at least one completed job on . The value p is positive according to assumption (3). After each jump down on (which occurs at a stopping time), this argument may be repeated by induction n times. Note that since is the initial value of the process, the server cannot become idle until n jobs are completed. So, this gives the lower bound
Multiplying the two values and , which is justified by the independence of the services and arrivals, completes the proof. □
4. Proof of Theorem 1
- 0.
- The proof will be split into several steps. We shall consider the embedded Markov chain, namely, the process at the times , and it will be shown that this process hits some suitable compact set around the “zero state” within a time which admits a finite expectation. From this property, the main result will follow. The reader is warned that after this first hit the definition of the embedded Markov chain will change, as further times may become random and possibly non-integer; see step 4 of this proof.
- 1.
- Let us choose so that (see condition (5)). NB: We highlight that this value will be fixed for what follows and will not tend to zero.
Since as , this is possible for any . Let us introduce an auxiliary stopping time
Denote
Function L will serve as a Lyapunov function outside the compact set .
First of all, we are going to estimate the first moment of , namely, to prove that there exists such that
Recall that
and highlight that the definition of is quite different from that of .
Let . Applying the rules of stochastic calculus, we have for ,
where is a martingale (see, e.g., [16]). Indeed, let us comment on (17). Denote
and let . At any non-random time , on the infinitesimal interval there might occur the following events with corresponding probabilities, assuming that :
Notice that we write because when we integrate over a finite interval of time, the integral is finite anyway, while the multiplier is, in any case, . For a similar reason, we claim , since the assumption means that, in any case, .
So, by the full expectation formula we have for ,
Integrating from to t with , we obtain,
Denote
and
Notice that the assumption (1) straightforwardly implies that for any . Hence, we have for any ,
So, as promised, is a martingale. This justifies (17), as required.
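For the reader's convenience, the generic pattern being verified in this step appears to be of Dynkin-formula type; in schematic notation (with a generic extended generator, not the specific expression computed above), it reads as follows.

```latex
% Schematic reminder (generic notation, not the paper's specific formula):
L(Z_t) \;=\; L(Z_0) \;+\; \int_0^t \mathcal{G}L(Z_s)\,ds \;+\; M_t ,
  \qquad \mathbb{E}\left[ M_t \mid \mathcal{F}_s \right] = M_s \quad (s \le t),
% where \mathcal{G} denotes the extended generator of the Markov process Z
% and (\mathcal{F}_t) is its natural filtration.
```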
The following bound will be established:
with some , if .
First, we have,
To evaluate let us introduce a sequence of stopping times. Let
To evaluate , using the identity which holds provided that , let us estimate,
Let us estimate the term . We have (recall that by definition),
Here the complementary part of the integral was just dropped.
Further, it will be shown that (see (21))
For this aim, let us introduce by induction the sequence of stopping times,
Please note that the component may only have a finite number of jumps down on any finite interval. The times of the jumps on the interval are exactly the times , and the last jump down on this interval may or may not occur at . In any case, clearly,
So, we have,
where for each outcome this series is almost surely a finite sum. On each interval we may write down
This is by virtue of assumption (6) and because at each stopping time which is less than 1 we have . If , then both sides in the latter inequality equal zero, so that the inequality still holds true. Therefore, taking the sum over n, we obtain (23), only without the multiplier on the left-hand side. Since this multiplier is present on the right-hand side, its insertion on the left-hand side of (23) still leads to a valid inequality, which means that the bound (23) is justified.
Due to Lemma 1, if then (see (13))
(NB: In fact, at least n jobs should be completed, the first one on ; however, we prefer to have a bound independent of x. In any case, this does not change the scheme of the proof.) Likewise, if , then
due to the choice of , see (14).
Recall that was chosen so that , see (12). Hence,
Denote
Thus, for any we obtain,
The bound (19) follows with a constant C which may be evaluated via .
- 2.
The event may be equivalently expressed as . Hence, the latter inequality may also be rewritten in the form suitable for the induction:
Similarly, may be equivalently expressed as . Therefore, we obtain
Hence, by taking the expectations we obtain
Similarly, by induction (in what follows the notation is used),
Due to the elementary bound , this implies,
Summing up and dropping the negative term on the right-hand side, we obtain
for any . So, by the monotone convergence theorem,
By virtue of the well-known relation
for the expectation of the bound (25) implies the following inequality with ,
In particular, this bound signifies that
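For completeness, the “well-known relation” invoked just above is presumably the standard tail-sum formula for the expectation of a non-negative integer-valued random variable:

```latex
% Presumably the relation meant above: for a random variable \nu with values in \{0,1,2,\dots\},
\mathbb{E}\,\nu \;=\; \sum_{k=1}^{\infty} \mathbb{P}(\nu \ge k) .
```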
- 3.
- Now, once the bound for the expected value of is established, we are ready to explain the details of how to obtain a bound for . The rest of the proof is devoted to this implication, with the last sentences related to the corollary about the invariant measure and convergence to it.
At , the process attains the set , while ; hence, both
By definition, the random variable is the first integer k where simultaneously
Therefore, at we have either , or , or both. If there are no completed jobs on , then may only increase, or at least stay the same, on this interval, while certainly increases. Therefore, may only be achieved by at least one completed job; this would mean a jump down by one of the n-component and simultaneously a jump down to zero of the x-component. Then at k we obtain , which certainly makes it less than , regardless of whether or not there were other arrivals or completed jobs on (recall that in addition to inequality (14) we assumed that ).
Now, given and , by virtue of Lemma 2, for any we have, with any ,
where
Here T is any positive integer and recall that
Note that, of course, for non-integer values of there is a similar bound, but it looks a bit more involved, and integer values of T suffice for the proof. Recall that it was assumed in (3) that , and it follows from the first line of (3) that . Inequality (27) implies that
NB: Here the standard notations for homogeneous Markov processes are used, which means that after the stopping time τ is, actually, the value .
Please note that for any the event implies that
Hence, we conclude,
and, therefore, for any x,
- 4.
- Consider now the process X started at time from state with and .
Let and let us stop the process either at , or at
whichever happens earlier. In other words, consider the stopping time
The event implies that the process does not exceed the level on the interval . On the other hand, according to the arguments of step 3 – namely, due to (29) and (30) – we have,
Let
and further, let us define two sequences of stopping times by induction,
Both sequences of stopping times are monotonically increasing. Note that all integers like and here in the expressions stand for upper indices, not for powers. Let us highlight that the stopping time equals plus some integer, but may or may not be an integer itself. All these stopping times are finite with probability one, and, moreover, due to (31) and because of the strong Markov property we have, almost surely,
Also, almost surely
on the event where . Indeed, suppose the opposite, that is,
Then on a finite interval of time the process either crosses the interval and back an infinite number of times, or the process crosses the interval and back an infinite number of times; in either case, this would mean an infinite number of arrivals to . Since , the first option is clearly not possible. If the second option occurred, it would mean that there were an infinite number of completed jobs on ; this is not possible for various reasons, for example, because it would also require an infinite number of arrivals on the same interval, a possibility we have already excluded. Thus, the supposition (33) may only occur with probability zero, and so (32) with holds true with probability one.
- 5.
- Denote
Also by induction, it follows from (26) and from the elementary bound , which holds by definition,
that there exists a constant C such that
Now we are going to estimate this sum by some geometric-type series in combination with the bounds (34) and (35). A small issue is that we are not able to use Hölder’s or the Cauchy–Buniakowskii–Schwarz inequality here, because we only possess a first moment bound for , while higher moments are not available. This minor obstacle is resolved in the next step of the proof by arguments using conditional expectations with respect to suitable sigma-algebras.
- 6.
- We have,
Let us investigate the last term. Since each random variable is -measurable for any , and because, due to the strong Markov property and the bound (34),
(It was used that implies for all as well.) Moreover, by virtue of the inequality (16) and since by definition all , we have for ,
with some finite constant C.
Let us inspect the previous term. Using that and all random variables are -measurable for any , we obtain by induction,
Also by induction, we find that a similar upper bound with the multiplier holds for each term in the sum in the right-hand side of (36), for , and . Indeed, using the identity we have for ,
Here we used that the random variable is -measurable.
Further, by virtue of the bound (34)
We used the bound (37) with replaced by its upper bound T (as in the calculation leading to (37)) and with k replaced by i.
The first term is estimated similarly with the only change that instead of the constant C we obtain a multiplier which is a function of the initial data and which makes the resulting bound non-uniform with respect to the initial data:
5. Two Examples
Let us provide two examples for comparison with “local” conditions in terms of the intensity of service when the latter exists.
Example 1.
Assume that there exists the derivative function and that the hazard function is no less than a constant,
The upper bound for is the same as in the proof of the theorem:
To estimate , in the case when either the initial value n, or the initial second component x, or both are large enough, we have by Lemma 1, similarly to (21) and (22) with μ in place of r,
Therefore, we obtain
where the latter inequality is due to Lemma 1 and to the choice of the values of M and , see (13) and (14).
Therefore, the condition suffices for the claims of the theorem (assuming all its other conditions are met). This should be compared with the assumption (5). The multiplier 2 in (5) may be regarded as a price for non-local, integral-type conditions, see (6) and (7). Please note, however, that assumption (40) looks clearly stronger than necessary for the bound obtained.
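For the record, the elementary computation behind this comparison is as follows (under the stated assumption that the hazard function is bounded below by the constant μ):

```latex
% Under the stated assumption F'(t)/(1 - F(t)) \ge \mu for (almost) all t \ge 0:
H(x+t) - H(x) \;=\; \int_x^{x+t} \frac{F'(s)}{1 - F(s)}\,ds \;\ge\; \mu\, t
  \qquad \text{for all } x, t \ge 0 ,
% so the increments of H over intervals of any fixed length r are bounded
% below by \mu r, uniformly in x.
```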
Example 2.
Assume that there exists the derivative function and that the hazard function satisfies the condition (compare to [14])
with a constant μ.
Here the upper bound for is the same as in the proof of the theorem and as in the previous example:
To estimate , in the case when either the initial value n, or the initial second component x (or both) is large enough, we have by Lemma 1, similarly to (21) and to the previous example,
again due to the definition of M and , see (13) and (14). Hence, here the same condition as in the previous example suffices for the claims of the theorem (assuming all other conditions of the theorem are met). Condition (41) is clearly more relaxed than (40), but both assume the existence of the intensity of service, which is not required in Theorem 1.
6. Discussion
The result may serve as a sufficient condition for the “steady-state” property of the model used as a background in [3,12]. It is plausible that the method used in this paper admits extensions to more general models. As was said in the introduction, there is some moderate hope that it may also be applied to Erlang–Sevastyanov-type systems, which could potentially allow finding sufficient conditions for convergence rates in such systems without assuming the existence of an intensity of service, thus generalizing the results of [14].
Funding
This research was funded by the Foundation for the Advancement of Theoretical Physics and Mathematics “BASIS”.
Data Availability Statement
Not applicable.
Acknowledgments
The author is grateful to two Referees for very useful remarks.
Conflicts of Interest
The author declares no conflict of interest. The funders had no role in the design of the study, in the writing of the manuscript, and in the decision to publish the results.
References
- Borovkov, A.A. Stochastic Processes in Queueing Theory; Springer: New York, NY, USA, 1976.
- Morozov, E.; Steyaert, B. Stability Analysis of Regenerative Queueing Models; Springer: Cham, Switzerland, 2021.
- Abouee-Mehrizi, H.; Baron, O. State-dependent M/G/1 queueing systems. Queueing Syst. 2016, 82, 121–148.
- Asmussen, S. Applied Probability and Queues, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2003.
- Asmussen, S.; Teugels, J.L. Convergence rates for M/G/1 queues and ruin problems with heavy tails. J. Appl. Probab. 1996, 33, 1181–1190.
- Bambos, N.; Walrand, J. On stability of state-dependent queues and acyclic queueing networks. Adv. Appl. Probab. 1989, 21, 681–701.
- Borovkov, A.A.; Boxma, O.J.; Palmowski, Z. On the Integral of the Workload Process of the Single Server Queue. J. Appl. Probab. 2003, 40, 200–225.
- Bramson, M. Stability of Queueing Networks. In École d’Été de Probabilités de Saint-Flour XXXVI-2006; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2008; Volume 1950.
- Fakinos, D. The Single-Server Queue with Service Depending on Queue Size and with the Preemptive-Resume Last-Come-First-Served Queue Discipline. J. Appl. Probab. 1987, 24, 758–767.
- Thorisson, H. The queue GI/G/1: Finite moments of the cycle variables and uniform rates of convergence. Stoch. Proc. Appl. 1985, 19, 85–99.
- Veretennikov, A.Yu. On the rate of convergence to the stationary distribution in the single-server queuing system. Autom. Remote Control 2013, 74, 1620–1629.
- Kerner, Y. The conditional distribution of the residual service time in the Mn/G/1 queue. Stoch. Model. 2008, 24, 364–375.
- Zverkina, G.A. On some extended Erlang–Sevastyanov queueing system and its convergence rate. J. Math. Sci. 2021, 254, 485–503.
- Veretennikov, A.Yu. On the rate of convergence for infinite server Erlang–Sevastyanov’s problem. Queueing Syst. 2014, 76, 181–203.
- Veretennikov, A.Yu. An open problem about the rate of convergence in Erlang–Sevastyanov’s model. Queueing Syst. 2022, 100, 357–359.
- Liptser, R.Sh.; Shiryaev, A.N. Stochastic calculus on filtered probability spaces. In Probability Theory III, Stochastic Calculus; Prokhorov, Yu.V., Shiryaev, A.N., Eds.; Springer: Berlin/Heidelberg, Germany, 1998; pp. 111–157.
- Davis, M.H.A. Piecewise-Deterministic Markov Processes: A General Class of Non-Diffusion Stochastic Models. J. R. Stat. Soc. Ser. B Methodol. 1984, 46, 353–388.