On Positive Recurrence of the M_n/GI/1/∞ Model

Abstract: Positive recurrence for a single-server queueing system is established under generalized service intensity conditions, without the assumption that the service time distribution has a density, but with a certain lower bound of integral type as a sufficient condition. Positive recurrence implies the existence of the invariant distribution and a guaranteed, albeit slow, convergence to it in the total variation metric.


Introduction
The goal of this paper is to establish the positive recurrence of the model M_n/GI/1/∞ under certain assumptions. The intensity of service is assumed only partially, at zero (as a lower left derivative value at zero of the "integrated intensity"); in addition, a condition of integral type on the "integrated intensity" over intervals of a certain length is assumed.
By positive recurrence we mean the property of finite expectation of the time τ_0 to hit the regeneration state starting from any initial state, with an estimate of this expected value (see (2) in what follows for the definition of τ_0), and not just a finite expectation of a regeneration period. For this aim, a new approach is suggested. For other, more standard ways to establish positive recurrence for regenerative models, see [1,2] and the references therein. While this approach may require conditions that look too restrictive, the hidden main goal is to develop this new method. It looks likely that it may help establish better rates of convergence toward stationarity in the model under investigation. We also hope that it may help to make progress in more involved queueing models such as Erlang–Sevastyanov systems with an infinite number of servers. To the best of our knowledge, the moment conditions on the service time under which positive recurrence is established in ([1], Chapter 2) and ([2], Chapter 5) are not applicable to infinite-server Erlang–Sevastyanov-like systems; at least, the author is not aware of any progress in this direction so far. There is some well-known advice from Leonhard Euler: if you see several possible paths toward the goal, then, in mathematics, you have to try all of them, not just one; some of them, or even all of them, may turn out to be useful in other adjacent areas or problems. Even though our conditions for positive recurrence are stronger than necessary for applications of other approaches, they may be more useful in some other particular situations. This was the main motivation for this work.
For the recent history of the topic, see [3–11]. One of the reasons, although not the only one, why various versions of this system are so popular is their intrinsic links to important topics of mathematical insurance theory; see [5].
In this paper, we return to the less involved single-server system M_n/GI/1/∞, where the intensity of arrivals may only depend on the number of customers in the system, with the goal of reviewing conditions for its positive recurrence. The importance of this property may be highlighted, for example, by the publications [3,12], where the investigation of the model assumes that it is in the "steady state", which is a synonym for stationarity. As is well known, positive recurrence along with some mild mixing or coupling properties guarantees the existence of a stationary regime of the system. One particular aspect of this issue is how to achieve bounds without assuming the existence of an intensity of service in the M/GI/1 model and, more generally, in Erlang–Sevastyanov-type systems. Certain results in this direction were recently established in [13] for a slightly different model. Still, in [13] it is essential that the absolutely continuous part of the distribution function F (in our notation) is non-degenerate; in the present paper, this is not required and the approach is different.
Please note that in such a model certain close results may be obtained by the methods of regenerative processes if it is assumed that the same distribution function F (see below) has enough moments. However, our conditions and methods are different. The main (moderate) hope of the author is that this approach may also be useful in studying ergodic properties of Erlang–Sevastyanov-type models, as happened with the earlier results and approaches based on the intensity of service as in [11], successfully applied in [14]. The present paper is an initial attempt in the program of developing tools that could help approach the problem outlined recently in [15].
The paper consists of the introduction in Section 1, the setting and the main result in Section 2, two simple auxiliary lemmata in Section 3, the proof of the main result in Section 4, and two simple examples in Section 5 for the comparison of sufficient conditions of Theorem 1 with conditions in terms of the intensity of service in the case when the latter does exist.

Definition of the Process
The model is as follows. There is one server with an incoming flow of customers, or jobs; this flow is Poissonian with intensity λ_n, where n is the number of customers in the system. If the server is idle, it immediately starts the service of the customer who arrives or, if the queue of waiting customers is not empty, of one of them. If the server is busy, then the newly arrived customer joins the queue, where it waits until the server completes the earlier job(s). The buffer for the queue is unlimited (denumerable). The discipline by which the server chooses the next customer from the queue for serving is FIFO ("first in, first out"). All services are independent, with the same distribution function F, and they are independent of the arrivals. A service is a synonym for a completed job. It is assumed that the mean value of the service time is finite. Also, a further assumption is made which will not be used except in one not-so-important remark; therefore, we do not number this equation.
The following state space is convenient for the description of the stochastic process which describes the model. It will be convenient to identify the zero state {0} with the zero couple {0, 0}. Then the state space of the process is a union, and the process itself is described at all times by a two-dimensional vector X_t = (n_t, x_t), with t ≥ 0, where n_t = 0, 1, . . . stands for the number of customers in the system, including both the one at the server and those in the queue; after the identification of {0} with {0, 0} mentioned above, x_t = 0 in the case n_t = 0 by definition; the second component x_t stands for the elapsed time of the current service. It is assumed that the initial value X_0 is any pair of non-negative values, X_0 = (n, x), and the process evolves in time according to the provided description. By construction, it is a Markov process in the state space X.
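To make the dynamics concrete, the evolution of X_t = (n_t, x_t) can be sketched as a small discrete-event simulation. Everything concrete below is an illustrative assumption rather than part of the model in the paper: the arrival intensities λ_n and the service distribution F are placeholder choices (constant λ_n and uniform service times).

```python
import random

def simulate(lam, sample_service, horizon, seed=0):
    """Simulate X_t = (n_t, x_t): n_t is the number of customers in the
    system, x_t the elapsed time of the current service (0 when idle)."""
    rng = random.Random(seed)
    t, n, x = 0.0, 0, 0.0
    remaining = None                 # residual service time of the job in progress
    path = [(t, n, x)]
    while True:
        rate = lam(n)                # state-dependent Poisson arrival intensity
        t_arr = rng.expovariate(rate) if rate > 0 else float("inf")
        t_dep = remaining if remaining is not None else float("inf")
        dt = min(t_arr, t_dep)
        if t + dt > horizon:
            break
        t += dt
        if n > 0:                    # the job in progress ages by dt
            x += dt
            remaining -= dt
        if t_dep <= t_arr:           # service completion: n jumps down, x resets
            n -= 1
            x = 0.0
            remaining = sample_service(rng) if n > 0 else None
        else:                        # arrival: n jumps up
            n += 1
            if n == 1:               # an idle server starts the new job at once
                x, remaining = 0.0, sample_service(rng)
        path.append((t, n, x))
    return path
```

Here λ_n ≡ 1 and uniform(0.5, 1.5) services would be arbitrary test choices; any bounded intensities λ_n and any distribution function F fit the described setting.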
We are interested in estimating the expectation of the stopping time τ_0 := inf(t ≥ 0 : X_t = (0, 0)), where the process X_t starts from any initial state X_0 = (n_0, x_0).
On some occasions, it will be convenient to write n_t = n(X_t) for the first component of X_t and x_t = x(X_t) for the second one. For any X = (n, x) with F(x) < 1, the "integrated intensity" of service is defined by the Stieltjes integral H(x) := ∫_0^x (1 − F(s))^{-1} dF(s). This integral is assumed to be finite for any x ≥ 0; for the additional small part of the assumption, see the next subsection. The intuitive meaning of the differential dH(x_t) is the infinitesimal conditional probability of a job completion on the interval (t, t + dt] under the condition that this job was not completed by time t. If dH(x_t) = µ(x_t) dt, then µ(x_t) is called the intensity of service at x_t; however, we do not assume that dH(·) has to be absolutely continuous with respect to the Lebesgue measure. For simplicity of setting and proofs, in order to avoid possible singularities, it is assumed that F(0+) = 0.
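For an absolutely continuous F the function H is the usual cumulative hazard, H(x) = −log(1 − F(x)). The numerical sketch below only illustrates the definition; the exponential choice of F (with an assumed rate µ = 2) is not from the paper, and the left-point Stieltjes sum is valid for a continuous F only (atoms of F would need separate handling).

```python
import math

def integrated_intensity(F, x, steps=200_000):
    """Left-point Riemann–Stieltjes approximation of the integrated
    intensity H(x) = ∫_0^x dF(s) / (1 - F(s)),
    assuming F is continuous and F(x) < 1."""
    H, s_prev = 0.0, 0.0
    for k in range(1, steps + 1):
        s = x * k / steps
        H += (F(s) - F(s_prev)) / (1.0 - F(s_prev))
        s_prev = s
    return H

mu = 2.0
F = lambda s: 1.0 - math.exp(-mu * s)   # exponential service law (assumed)
# For this F the intensity of service exists and equals mu, so H(x) = mu * x.
print(integrated_intensity(F, 1.5))     # close to mu * 1.5 = 3.0
```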

1. The notation P_X = P_{n,x} for the probability and E_X = E_{n,x} for the expectation will be used. Both correspond to the initial value X = (n, x) of the Markov process under consideration. We highlight that this is standard notation from the theory of homogeneous Markov processes.

2. For a possibly discontinuous distribution function F, or for its integrated intensity H, integrals written like ∫_{t_1}^{t_2} . . . dF(s) will be understood as integrals over the half-open interval, ∫_{(t_1, t_2]} . . . dF(s), and likewise with dH(s).

Main Result for M_n/GI/1/∞
Recall that it is assumed that condition (1) holds. Let us also assume that there exists a constant r such that (so that 1/2 > (1 + Λ)/r), where Λ was defined in (1), and such that, moreover, the corresponding integral inequality holds. Let us highlight that the latter inequality is not supposed to hold for small values of ∆ approaching zero, but only for ∆ ∈ [1/2, 1]. The increase of the integral for a fixed x as ∆ ↑ 1 may be achieved due to a positive intensity H′ > 0 if this derivative exists, due to jumps of H, and also due to the increase of this function on sets of Lebesgue measure zero, as for the Cantor function.
Note that the process X_t cannot explode on any bounded interval of time, with probability one, since the arrival intensities are bounded.
In what follows, the distance in total variation between two probability measures is used; here the supremum is taken over all Borel measurable sets A ⊂ X.

Theorem 1. Let assumptions (1)–(7) be satisfied. Then there exists C > 0 such that (8) holds. Also, there exists a unique stationary measure µ, and, moreover, there exists C > 0 such that (9) holds for any t ≥ 0, where µ_t^{n,x} is the marginal distribution of the process (X_t, t ≥ 0) with the initial data X = (n, x), and the constant C is the same as in (8).
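For two distributions on a countable state space, the supremum over Borel sets in the total variation distance is attained on the set where one measure dominates the other, which yields the familiar half-L1 formula. A minimal sketch; the discrete setting and the normalization without an extra factor of 2 are choices made here for illustration, not taken from the paper:

```python
def tv_distance(mu, nu):
    """sup_A |mu(A) - nu(A)| for discrete probability vectors given as dicts;
    the sup is attained at A = {k : mu(k) > nu(k)}, hence the 1/2 * L1 form."""
    keys = set(mu) | set(nu)
    return 0.5 * sum(abs(mu.get(k, 0.0) - nu.get(k, 0.0)) for k in keys)

mu = {0: 0.5, 1: 0.3, 2: 0.2}
nu = {0: 0.4, 1: 0.4, 2: 0.2}
print(tv_distance(mu, nu))   # 0.1, attained e.g. on the set A = {0}
```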
Remark 1. We highlight that the initial value of the second component here is arbitrary. Also, please note that this theorem is not a pure existence-type result, because the constant C may be evaluated, as can be seen from the proof.

Lemmata
Lemma 1. Under assumption (3), for any T > 0 and for any m ≤ n, the bound (10) holds.

Proof. The probability of no less than m completed jobs over time T, for m ≤ n, is given by a repeated integral. Hence, we obtain both inequalities in (10), as required. Please note that assumption (3) implies that sup_{x≥0} (F(x + T) − F(x)) < 1 for any T > 0; otherwise, the distribution dF would be concentrated on a finite interval, which would contradict the assumption.
Second, with probability no less than p = inf_{x≤1} (F(x + 1) − F(x)), there is at least one completed job on [0, 1]. The value p is positive according to assumption (3). After each jump down on [0, 1] (which is a stopping time), this argument may be repeated by induction n times. Note that since (n, x) is the initial value of the process, the server may not become idle until n jobs are completed. So, this gives the lower bound. By the multiplication of the two values exp(−nΛ) and p^n, due to the independence of the services and arrivals, the proof is completed.
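The key point of the argument above is that a job with elapsed time x ≤ 1 completes within one further time unit with conditional probability (F(x + 1) − F(x))/(1 − F(x)) ≥ F(x + 1) − F(x) ≥ p. A Monte Carlo sanity check of this domination, under the assumption (made here purely for concreteness) of exponential service times with rate µ = 1:

```python
import math, random

def completion_within_unit(mu, x, trials=200_000, seed=0):
    """Monte Carlo estimate of P(job completes within one more time unit |
    it has already been served for time x), for Exp(mu) service times."""
    rng = random.Random(seed)
    done = 0
    for _ in range(trials):
        # Memorylessness: the residual service time given survival past x
        # is again Exp(mu), so the elapsed time x drops out of the answer.
        if rng.expovariate(mu) <= 1.0:
            done += 1
    return done / trials

mu = 1.0
F = lambda s: 1.0 - math.exp(-mu * s)
p = F(2.0) - F(1.0)      # inf over x <= 1 of F(x+1) - F(x), attained at x = 1
est = completion_within_unit(mu, x=0.7)
assert est >= p          # the conditional completion probability dominates p
```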

Proof of Theorem 1
0. The proof will be split into several steps. We shall consider the embedded Markov chain, namely, the process X_t at times t = 0, 1, . . ., and it will be shown that this process hits some suitable compact set around the "zero state" (0, 0) in a time which admits a finite expectation. From this property, the main result will follow. The reader is warned that after this first hit the definition of the embedded Markov chain will change, as further times may become random and possibly non-integer; see step 4 of this proof. The main goal is to establish the bound (8), from which the estimate (9) follows as a corollary.

Let us choose ε (see condition (5)). NB: We highlight that this value ε will be fixed for what follows and will not tend to zero. Once ε is chosen, let M be chosen (see (3) for the definition of F(1)), and let us choose x(ε). Since F(x) ↑ 1 as x ↑ ∞, this is possible for any ε > 0. Let us introduce an auxiliary stopping time τ = τ_ε. Denote by K_ε the corresponding compact set and by L the function which will serve as a Lyapunov function outside K_ε.
First of all, we are going to estimate the first moment of τ, namely, to prove that there exists C > 0 such that (16) holds. Recall that τ_0 := inf(t ≥ 0 : X_t = (0, 0)), and highlight that the definition of τ = τ_ε is quite different from that of τ_0. Let X_0 = X. Applying the rules of stochastic calculus, we have (17) for X_t = (n_t, x_t) ≠ (0, 0), where M_t is a martingale (see, e.g., [16]). Indeed, let us comment on (17). Denote the processes J_i, and let J_i(0) = 0, 1 ≤ i ≤ 3. At any non-random time t ≥ 0, on the infinitesimal interval (t, t + dt] the following events may occur with the corresponding probabilities, assuming that n_t > 0. Notice that we write dH(x_t) λ_{n_t} dt = 0 because when we integrate over a finite interval of time, the integral ∫ . . . dH(x_t) is finite anyway, while the multiplier dt is, in any case, o(1). For a similar reason we claim (dH(x_t))(dH(0)) = 0, since the assumption F(0+) = 0 means that, in any case, dH(0) = o(1).
So, by the full expectation formula we have, for n_t > 0, the displayed identity. Integrating from t_1 to t with t > t_1, we obtain the integrated form. Denote the process M_t accordingly. Notice that assumption (1) straightforwardly implies that E M_t < ∞ for any t ≥ 0. Hence, we have, for any t_1 < t, the martingale identity. So, as promised, M_t is a martingale. This justifies (17), as required.
The following bound will be established: with some C > 0, provided (n, x) ∉ K_ε. First, we bound J_1 + J_2. To evaluate J_3, let us introduce a sequence of stopping times, starting with γ. Using the identity 1(t < γ) 1(n_t > 0) = 1(t < γ), which holds provided that n > 0, let us estimate J_3. Let us estimate the term F_1. We have (recall that γ ≤ 1 by definition) the following: here the complementary part of the integral E_{n,x} 1(γ < 1/2) ∫_0^γ (1 + x + t) dH(x + t) ≥ 0 was just dropped.
Further, it will be shown that (23) holds (see (21)). For this aim, let us introduce by induction the sequence of stopping times γ_n. Please note that the component x_t may only have a finite number of jumps down on any finite interval. The times of jumps on the interval [0, 1] are exactly the times γ_n < 1, and possibly the last jump down on this interval may or may not occur at t = 1. In any case, clearly, lim_{n→∞} γ_n ≥ 1. So, we have the corresponding representation, where for each outcome this series is almost surely a finite sum. On each interval [γ_n, γ_{n+1}] we may write down the corresponding inequality. This is by virtue of assumption (6) and because at each stopping time γ_n which is less than 1 we have x_{γ_n} = 0. If γ_n ≥ 1, then both sides of the latter inequality equal zero, so the inequality still holds true. Therefore, taking the sum over n, we obtain (23), just without the multiplier 1(inf_{0≤t≤1} n_t > 0) on the left-hand side. The presence of this multiplier on the right-hand side guarantees that adding it on the left-hand side still leads to a valid inequality, which means that the bound (23) is justified.
It follows from (22) and (23) that the desired bound holds. Due to Lemma 1, if n > M, then the corresponding estimate holds (see (13)). (NB: In fact, at least n jobs should be completed, the first one on [x, x + 1]; however, we prefer to have a bound independent of x. In any case, this does not change the scheme of the proof.) Likewise, if x > x(ε), then the analogous estimate holds due to the choice of x(ε); see (14).

Similarly, by induction (in what follows, the notation introduced above is used). Due to the elementary bound, summing up and dropping the negative term on the right-hand side, we obtain the estimate for any N > 0. So, by the monotone convergence theorem, the bound extends to the limit. By virtue of the well-known relation for the expectation of τ, the bound (25) implies the following inequality with X_0 = (n, x). In particular, this bound signifies (16).

3. Now, once the bound for the expected value of τ is established, we are ready to explain the details of how to obtain a bound for E_{n,x} τ_0. The rest of the proof is devoted to this implication, with the last sentences related to the corollary about the invariant measure and convergence to it. At τ, the process X_k attains the set (X : X ∈ K_ε), while X_0 ∉ K_ε; hence, both inequalities hold. By definition, the random variable τ is the first integer k where simultaneously n_k ≤ M and x_k ≤ x(ε). Therefore, at k − 1 we have either n_{k−1} > M, or x_{k−1} > x(ε), or both. If there are no completed jobs on (k − 1, k], then n_t may only increase, or at least stay constant, on this interval, while x_t certainly increases. Therefore, τ = k may only be achieved by at least one completed job; this means a jump down by one of the n-component and, simultaneously, a jump down to zero of the x-component. Then at k we obtain x_k ≤ 1, which certainly makes it less than x(ε), regardless of whether or not there were other arrivals or completed jobs on (k − 1, k] (recall that in addition to inequality (14) we assumed that x(ε) ≥ 2). Now, given n ≤ M and x ≤ x(ε), by virtue of Lemma 2, for any T > 0 we have, with any x ≤ 1, the bound with p(T) := p^T exp(−TΛ) = F(1)^T exp(−TΛ).
Here T is any positive integer (recall the notation introduced above). Note that, of course, for non-integer values of T > 0 there is an analogous bound, but it looks a bit more involved, and using integer values of T suffices for the proof. Recall that it was assumed in (3) that F(1) > 0, and it follows from the first line of (3) that F(1) < 1.
NB: Here the standard notation for homogeneous Markov processes is used, which means that X_T after the stopping time τ is, actually, the value X_{τ+T}. Please note that for any T > 0 the event (n(X_{τ+T}) = 0) implies the corresponding inclusion. Hence, we conclude the bound, and, therefore, it holds for any x.

4. Consider now the process X started at time τ from the state (n_τ, x_τ) with x_τ ≤ 1 and n_τ ≤ M. Let T := x(ε), and let us stop the process either at τ + T, or at the moment of hitting the state (0, 0), whichever happens earlier. In other words, consider the stopping time χ^1. The event (χ^1 = τ + T) implies that the process L(X_t) does not exceed the level M + 2 + T on the interval [τ, τ + T]. On the other hand, according to the arguments of step 3, namely, due to (29) and (30), we have the corresponding bound. Let χ^0 := 0, τ^1 := τ, and further, let us define two sequences of stopping times by induction. Both sequences of stopping times are monotonically increasing. Note that all integers like χ^k and τ^k in these expressions stand for upper indices, not for powers. Let us highlight that the stopping time τ^{k+1} equals χ^k plus some integer, but χ^k may or may not be an integer itself. All these stopping times are finite with probability one, and, moreover, due to (31) and because of the strong Markov property, we have (32) almost surely on the event where τ_0 < ∞. Indeed, suppose the opposite, that is, (33). Then on the finite interval of time [0, τ_0] the process n_t either crosses the interval [M, M + 1] and back an infinite number of times, or the process x_t crosses the interval [x(ε), x(ε) + 1] and back an infinite number of times; in either case, it would mean an infinite number of arrivals on [0, τ_0]. Since Λ < ∞, the first option is clearly not possible. If the second option occurred, it would mean that there were an infinite number of completed jobs on [0, τ_0]; this is not possible for various reasons, for example, because this would also require an infinite number of arrivals on the same interval, and this possibility we have already excluded. Thus, the assumption
(33) may only happen with probability zero, and so (32) with τ_0 < ∞ holds true with probability one.
Using the strong Markov property at time χ^i (see [17], Section 4), we obtain by induction,

5. Denote the required quantities as follows. Also by induction, it follows from (26) and from the elementary bound by definition that there exists a constant C such that (35) holds. Using the representation and due to (32), we estimate the sum. Now we are going to estimate this sum by some series of geometric type, in combination with the bounds (34) and (35). A small issue is that we are not able to use Hölder's or the Cauchy–Bunyakovsky–Schwarz inequality here, because we only possess a first moment bound for τ^k, while higher moments are not available. This minor obstacle is resolved in the next step of the proof by the following arguments using conditional expectations with respect to suitable sigma-algebras.

6. We have the following. Let us investigate the last term. Since each random variable 1(χ^j ≤ τ_0) is F_{χ^k}-measurable for any j ≤ k, and because of the strong Markov property and the bound (34), the corresponding estimate holds. (It was used that χ^k < τ_0 implies χ^j < τ_0 for all j < k as well.) Moreover, by virtue of the inequality (16), and since by definition all d_k ≤ T, we have for k ≥ 1 the bound with some finite constant C.
Let us inspect the previous term. Using that δ^{k−1} and all random variables 1(χ^j < τ_0) are F_{τ^k}-measurable for any j ≤ k − 1, we obtain the bound by induction. Also by induction, we find that a similar upper bound, with the multiplier (1 − p(T))^k, holds for each term of the sum in the right-hand side of (36), for 2 ≤ i < k and k ≥ 3.

Indeed, using the identity for the indicators 1(χ^j < τ_0), we argue as follows. Here it was used that the corresponding random variable is measurable with respect to the appropriate sigma-algebra. We used the bound (37) with d_k replaced by its upper bound T (as in the calculus leading to (37)) and with k replaced by i.
The first term is estimated similarly, with the only change that instead of the constant C we obtain a multiplier C(x + n + 1), which is a function of the initial data X_0 = (n, x) and which makes the resulting bound non-uniform with respect to the initial data. Overall, collecting the bounds (37)–(39), we obtain the final estimate. Therefore, it follows that (8) holds with some new constant C, as required. Existence of the invariant measure for the model and the inequality (9) follow straightforwardly from the established positive recurrence (8), in the usual way, by the coupling technique or by means of other well-known tools. Theorem 1 is proved.

Two Examples
Let us provide two examples for a comparison with "local" conditions in terms of the intensity of service, when the latter exists.
Example 1. Assume that there exists the derivative F′(s) and that the hazard function h = H′ is no less than a constant µ. The upper bound for J^1_{n,x} + J^2_{n,x} is the same as in the proof of the theorem. To estimate J^3_{n,x}, in the case where either the initial value n, or the initial second component x, or both are large enough, by Lemma 1 we have, similarly to (21) and (22), the bound with µ in place of r. Therefore, we obtain the estimate, where the latter inequality is due to Lemma 1 and to the choice of the values of M and x(ε); see (13) and (14).
Therefore, the condition µ > 1 + Λ suffices for the claims of the theorem (assuming all its other conditions are met). This should be compared with assumption (5). The multiplier 2 in (5) may be regarded as a price for the non-local, integral-type conditions; see (6) and (7). Please note, however, that assumption (40) looks clearly stronger than necessary for the bound obtained.
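As a numerical illustration of Example 1: for exponential service times the hazard function is constant, h ≡ µ, so assumption (40) holds with equality, and for λ_n ≡ λ the sufficient condition reads µ > 1 + Λ with Λ = λ. The Monte Carlo sketch below (all parameter values are arbitrary choices, not taken from the paper) estimates E τ_0 in this M/M/1 special case, where the exact mean time to empty the system from n_0 customers is n_0/(µ − λ):

```python
import random

def mean_time_to_empty(lam, mu, n0, trials=20_000, seed=0):
    """Monte Carlo estimate of E tau_0 for the M/M/1 special case:
    constant arrival rate lam, Exp(mu) services, n0 initial customers."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        t, n = 0.0, n0
        while n > 0:
            t += rng.expovariate(lam + mu)   # time to the next event
            # with prob lam/(lam+mu) it is an arrival, otherwise a departure
            n += 1 if rng.random() < lam / (lam + mu) else -1
        total += t
    return total / trials

# lam = 0.5, mu = 2.0: the condition mu > 1 + Lambda = 1.5 of Example 1 holds.
print(mean_time_to_empty(0.5, 2.0, n0=3))   # close to 3 / (2.0 - 0.5) = 2.0
```

Of course, for M/M/1 positive recurrence holds already when µ > λ; the simulation only illustrates that the stronger sufficient condition of Example 1 is comfortably satisfied here.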
Example 2. Assume that there exists the derivative F′(s) and that the hazard function h = H′ satisfies condition (41) (compare to [14]) with a constant µ.
Here the upper bound for J^1_{n,x} + J^2_{n,x} is the same as in the proof of the theorem and as in the previous example: J^1_{n,x} + J^2_{n,x} ≤ 1 + Λ. To estimate J^3_{n,x}, in the case where either the initial value n, or the initial second component x, or both are large enough, we have by Lemma 1, similarly to (21) and to the previous example, the bound on J^3_{n,x}(1) := E_{n,x} . . ., again due to the definition of M and x(ε); see (13) and (14). Hence, here the same condition µ > 1 + Λ as in the previous example suffices for the claims of the theorem (assuming all other conditions of the theorem are met). Condition (41) is clearly more relaxed than (40), but both assume the existence of the intensity of service, which is not required in Theorem 1.

Discussion
The result may serve as a sufficient condition for the "steady-state" property of the model M_n/GI/1 used as a background in [3,12]. It is plausible that the method used in this paper admits extensions to more general models. As was said in the introduction, there is some moderate hope that it may also be applied to systems of Erlang–Sevastyanov type, which could potentially allow finding sufficient conditions for convergence rates in such systems without assuming the existence of an intensity of service, thus generalizing the results from [14].