First Passage Analysis in a Queue with State Dependent Vacations

This paper deals with a single-server queue where the server goes on maintenance when the queue is exhausted. Initially, the maintenance time is fixed at a deterministic or random value T. However, during the server's absence, customers are screened by a dispatcher who estimates their service times based on their needs. According to these estimates, the dispatcher shortens the server's maintenance time, and as a result the server returns earlier than planned. Upon the server's return, if there are not enough customers waiting (under the N-Policy), the server rests and then resumes his service. At first, the input and service are general. We then prove a necessary and sufficient condition for a simple linear dependence between the server's absence time (including his rest) and the number of waiting customers: it turns out that the input must be (marked) Poisson. We use fluctuation and semi-regenerative analyses (previously established and embellished in our past work) to obtain explicit formulas for the server's return time and the queue length, both with discrete and continuous time parameter. We then dedicate an entire section to related control problems, including the determination of the optimal T-value. We also support our tractable formulas with many numerical examples and validate our results by simulation.


Description of the Model
This paper deals with a single-server exhaustive queueing system under N-Policy and a single vacation. When the system is emptied, the server leaves for routine maintenance during a period of time initially set equal to T (> 0). During this time, the server is unavailable for service of regular customers. However, when a batch of customers arrives at the system, the dispatcher tentatively estimates their total service time according to their demands, say X, and commands the server to shorten his vacation time by exactly X units of time. So, if at time t_1 a random batch of Y_1 customers comes in with estimated service time X_1 = x_1 + ... + x_{Y_1}, the server's remaining absence time at time t_1 becomes (T − t_1 − X_1)^+. If it is zero, the server immediately returns to the system and resumes his service. Otherwise, he stays out of the system until the next arrival, at time t_2, of a second group of Y_2 customers with estimated service time X_2 = x_{Y_1+1} + ... + x_{Y_1+Y_2}, making the server's remaining absence time (T − X_1 − t_2 − X_2)^+, and he returns to the system immediately if the latter quantity is zero. If there is no arrival during the server's entire absence, the server returns to the system and rests. In fact, he rests whenever there are fewer than N customers in the system.
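The update rule for the server's remaining absence time can be sketched in a few lines of Python. This is a minimal illustration with hypothetical toy data, not the paper's model code:

```python
def remaining_absence(T, arrivals):
    """Remaining maintenance time after each batch arrival.

    `arrivals` is a list of (t_i, X_i) pairs: the arrival epoch of the
    i-th batch and its total estimated service time.  Returns the value
    (T - t_i - X_1 - ... - X_i)^+ after each arrival; the server returns
    as soon as this hits zero.
    """
    total_X = 0.0
    remaining = []
    for t_i, X_i in arrivals:
        total_X += X_i
        remaining.append(max(T - t_i - total_X, 0.0))
        if remaining[-1] == 0.0:
            break  # server returns to the system immediately
    return remaining

# T = 10; batches arrive at t = 2 (X = 3) and t = 4 (X = 6)
print(remaining_absence(10.0, [(2.0, 3.0), (4.0, 6.0)]))  # -> [5.0, 0.0]
```

The second batch's estimate (6 units) exhausts the remaining time 10 − 4 − 3 = 3, so the server is called back at that arrival.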
Note that the estimated service time has no impact once the server returns to the system, as it may differ from the real service time. In practice, it is impossible to determine how much real service time customers will end up requiring, but based on customers' alleged needs it makes sense to estimate their service times in order to better organize the server's absence and return.
As mentioned, if on his return there are fewer than N (N ≥ 1) customers waiting, the server rests until the queue replenishes to N or more customers. The latter is the more likely scenario, because the input is bulk. Only then is the service activated. We will return to further specifications.
Note that this is not a usual N-Policy/vacation system in which the server goes on a single or multiple vacations. In this model, he is absent during a state dependent, periodically updated, random period of time. Namely, the vacation time is gradually shortened by new arrivals through their estimated service times.

More Formal Description
Now we return to a more formal description of the system. Define an auxiliary continuous time parameter process S(t) = t + Σ_{i: t_i ≤ t} X_i, where t_0 = X_0 = 0. This process determines the server's absence time at any time t ≥ 0, accelerated by the increments X_i at the epochs t_i, i = 1, 2, .... To simplify the formalism, we first assume that N = 1. When the server is done with his maintenance, he returns to the system, possibly with no customers waiting (this happens when t_1 > T). In this case, he waits until the next batch of customers arrives. The time of return and the number of customers present in the system resemble our approach in [1][2][3][4], now with one more "passive" component Y, because on the server's return we need to know how many customers have so far accumulated in the waiting room. In addition, the choice of the queue in the system is subject to two alternatives to be discussed in a moment. Now, with arrivals of customers at times t_1, t_2, ... and their respective batch sizes Y_1, Y_2, ..., define B_n = Y_1 + ... + Y_n and A_n = A_{n−1} + δ_n + X_n = t_n + X_1 + ... + X_n, where δ_n = t_n − t_{n−1}.
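Under these definitions, S(t) and the sequences A_n, B_n admit direct computation. A small sketch, assuming toy arrival data of our own choosing:

```python
def S(t, arrivals):
    """S(t) = t + sum of X_i over arrival epochs t_i <= t (t_0 = X_0 = 0)."""
    return t + sum(X_i for t_i, X_i in arrivals if t_i <= t)

def A_and_B(arrivals, batch_sizes):
    """A_n = A_{n-1} + delta_n + X_n = t_n + X_1 + ... + X_n,
    B_n = Y_1 + ... + Y_n."""
    A, B = [], []
    sum_X, sum_Y = 0.0, 0
    for (t_n, X_n), Y_n in zip(arrivals, batch_sizes):
        sum_X += X_n
        sum_Y += Y_n
        A.append(t_n + sum_X)
        B.append(sum_Y)
    return A, B

arrivals = [(1.0, 2.0), (3.0, 1.0)]   # (t_i, X_i) pairs
A, B = A_and_B(arrivals, [2, 3])
print(A, B, S(2.0, arrivals))  # -> [3.0, 6.0] [2, 5] 4.0
```

Note that S evaluated at an arrival epoch t_n coincides with A_n, consistent with the unit drift plus jumps X_i.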
From Figure 1, where S(t) defined in (1) is in green, we see an excerpt of the process between the two arrivals at t_{j−1} and t_j. Here B_{j−1} is the total number of customers that arrived by time t_{j−1}, whereas E_{j−1} = X_1 + ... + X_{j−1} is the total estimated service time of these customers. S(t) is the cumulative time credited toward maintenance by time t ∈ [t_{j−1}, t_j), appreciated by E_{j−1}. Because of the latter, the actual maintenance time is shortened compared to the initially planned T.
From Figure 2 below (where S(t) is in green), upon the server's return there will be B_{ν−1} customers waiting, because the crossing of T by S(t) occurs at some instant τ_ν in the interval (t_{ν−1}, t_ν). Note that the next arrival at t_ν will take place after the server's earlier return at τ_ν. Figure 3 below depicts the situation when the first crossing of T occurs at the arrival of the νth batch of customers Y_ν and not earlier. In this case, the estimated service time X_ν of that batch, along with δ_ν, is the increment added to A_{ν−1}, which makes S(τ_ν) = S(t_ν) cross T for the first time. As we see in Figures 2 and 3, the server's return to the system takes place on the first crossing of T by S(t), in one of two ways. If ν = inf{n : A_n ≥ T}, then t_ν is the nearest arrival epoch at which A_ν ≥ T. However, the crossing can take place earlier, in the interval (t_{ν−1}, t_ν), because S(t) also appreciates with unit velocity. If the latter happens, the number of customers in the system will be regarded as B_{ν−1}. If T is crossed only at t_ν, then the total number of customers on the server's return is B_ν. Figure 4 is yet another realization of the two variants of crossing, depicting the events in a vicinity of the two crossing options dependent on the position of the threshold T within the interval (A_{ν−1}, A_ν], that is, whether T ∈ (A_{ν−1}, A_{ν−1} + δ_ν] or T ∈ (A_{ν−1} + δ_ν, A_ν].

Our Results
Here we analyze a single-server queue with unlimited buffer and a single, state dependent vacation. Initially, we make no assumptions on the nature of the input and service, that is, the system is of GI/G-NSV/1 type, and we obtain the return time and the number of customers Q(τ_ν) (i.e., B_{ν−1} or B_ν) on the server's return at τ_ν. Dependent on the situation, the total number of customers gathered at τ_ν can be less than N, and for that matter, it can even be zero (if t_1 > T). In the latter case, the server rests until the queue length reaches or exceeds N, and only then does he resume his service. He processes customers singly under the FIFO discipline, and he goes on a single vacation when the queue is exhausted, in accordance with the above specified state dependent vacation policy.
We use and embellish fluctuation theory to arrive at closed forms for the joint and marginal distributions of τ_ν and Q(τ_ν). Further on, we obtain the distribution of the two quantities t_µ and W_µ = Q(t_µ), where t_µ is the beginning of the new busy period, when W_µ ≥ N. The server's entire absence consists of his maintenance time and a possible rest. Very interesting special relationships between τ_ν and Q(τ_ν), and between t_µ and W_µ, turn out to be linear with the same multiplier if and only if the input stream is Poisson. In general, we assume that the initially set maintenance time T (which is then randomly reduced) is a constant. However, in a variety of useful special cases, there is no analytical complexity in upgrading T to a random variable, which we discuss.
We then continue with the rest of our queueing analysis under the assumption that the input is marked Poisson, herewith working with an M^X/G-NSV/1/∞ type queue under our special vacation policy. Here we obtain a closed form for the invariant probability measure of the embedded process observed at departures. To make our study complete and attain an optimal value of the (initially set) T, we employ semi-regenerative analysis to obtain the steady state probability distribution of the continuous time parameter queueing process Q(t). This is followed by a discussion of various performance measures and control problems related to the determination of the optimal initial value of T, aimed at reducing the frequently unwanted numbers of activations and deactivations (switchovers) of busy and non-busy periods, the number of customers waiting during the maintenance time, and the overall number of waiting customers. We also provide numerous special cases, numerical examples, and simulated cases validating our results.

Topically Related Literature
There is very extensive research on queues with vacations and various associated policies. Monograph [5] by Tian and Zhang of 2006 is an excellent source of a multitude of papers on vacations, well categorized and analyzed. A very useful recent paper of 2021 [6] by Panta et al. surveys over 150 articles. It seems the pioneering work of the 70s has never faded; in all truth, it has become even more attractive to many, because this work is very practical and universally applicable. Perhaps most numerous are queues with multiple vacations, but they are not related to our work. In these settings, the server goes on multiple repeated vacation segments until a desired number of customers (such as N) accumulate. The server does not break any particular vacation segment in the middle, but he will not start a new one if the queue has attained a desired minimal length. Many papers on T-Policy deal with multiple vacations, except that every vacation segment is assumed constant, equal to some real number T, and thus they are special cases of multiple vacations. (Cf. Alfa [7].)
The work on single vacations is closest to ours. A standard setting, such as in [8] by Liu et al., Gupur [9], Choudhury [10], Y. Tang and X. Tang [11], S. Jin and W. Yue [12], Ghosh et al. [13], and Lee et al. [14], treats an M/G/1-type exhaustive queue. The server goes on a single vacation trip when the system becomes empty. He resumes his service if there is at least one customer waiting, or else rests until one arrives. A slightly embellished variant of the M/G/1 type (cf. Gupta and Sikdar [15]) is when a single server processes customers in batches sized between a and b. If, upon a service completion, the number of available units drops below a, the server goes on a single vacation, and on his return, if there are still fewer than a units, he rests until the queue replenishes to at least a. A more advanced setting (such as in [16] by Lee et al.) takes on an M^X/G/1-type queue under N-Policy. That is, the server leaves the empty system on a single vacation, and on his return, when he finds fewer than N customers waiting, he rests and then resumes his service once the queue crosses N. Kempa [17] targets the virtual waiting time process in a single-vacation queue with a finite waiting room. A very interesting recent article [18] by Kazem and Al-Obaidi deals with a single vacation queue and N-Policy using fluctuation analysis.
Less standard settings pertain to GI/M/1 (cf. [19] by Chaea et al. and [20] by Gabryel et al.) and multiserver queues. The former undergo a different analysis compared to their M/G/1 counterparts. Multiserver queues assume a synchronous leave of a group of servers. See [21] by Zhang and Tian (M/M/c), Wu and Ke [22] (M/M/c), and Xu and Zhang [23] (M/M/c). Another paper [24] by Madan et al. deals with an M/M_1,M_2/2 queue with heterogeneous service and with synchronous and asynchronous single vacations.
Other systems with single vacations are with additional features, such as retrials (Gao and Zhang [25] and Malik and Upadhyaya [26]), unreliable servers, or both (cf. Gao et al. [27], Ke and Lin [28]), and by Yang et al. [29] with retrials, server breakdowns and repair.
Another class of modeling close to ours involves state dependent vacations. In essence, all systems with N-Policy and multiple vacations have elements of queue length dependent vacations. Al-Matar and Dshalalow [30] modeled a system under N-Policy and multiple vacations. A vacation series terminates when the total vacation time exceeds some random time T. Then the server returns to the system, and if there are fewer than N customers in the queue, the server returns to the maintenance facility for a second phase and continues his maintenance until the queue crosses N.
However, certain papers introduce special forms of state dependent vacations. Chao and Rahman [31,32] studied two systems with multiple server vacations, GI/M(n)/1/K and M(n)/G/1/K. In the former, the vacations are distributed exponentially with state dependent rate v_n, where n is the number of customers in the system. The vacation rate changes with every new arrival, and the whole vacation lasts until there are N customers waiting. In the second model, it is the arrival rate that alters. The vacation times are independent and identically distributed (i.i.d.) and in accordance with the multiple vacations and N-Policy rule. The arrival rates are λ_n and v_n, dependent on whether the server is in his primary mode or on vacation, respectively, and either rate depends on the number of customers in the system. A recent paper [33] of 2021, by Tamrakar, G.K. and Banerjee, deals with an interesting M/G^Y/1-type queue with single and multiple vacations in which the service and vacation times depend on the batch size and the queue length (upon the vacation start), respectively. This system was inspired by a group testing method during a pandemic such as COVID-19. The batch size taken for testing lies between the two thresholds, a and b (a ≤ b). A new service consists of testing a pool of r samples, provided a ≤ r ≤ b. If the whole sample is tested negative, then the server is done with that group and takes a new one. If the pool of samples is tested positive, that group will be further tested, apparently by breaking it into smaller groups. If r < a, then the server (a test employee) goes on vacation. During the vacation, the server performs maintenance, such as restocking the health care inventory, increasing people's awareness, and attending the quarantine room. The vacation time is picked from a set of a different distributions (µ_0, ..., µ_{a−1}) that are general in nature, and µ_r is chosen if the server leaves behind r units in the system.
Another recent paper [34] by Xie et al. studies an M/G^Y/1/K queue with queue-length dependent multiple vacations. The most interesting aspect of this work is an application to computer operating systems.
The most basic view of fluctuations is that of random variables measured by the variance. A more contemporary view of fluctuations pertains to the analytical behavior of stochastic processes about critical thresholds whose crossing at a certain epoch of time (called the first passage time) is of interest. Fluctuations also intersect with random walks. More specifically, a classical random walk of a particle or walker takes place on a multi-dimensional grid or lattice. The walker attempts to escape some bounded set A. In this case, not only the first passage time but also the walker's location upon escape are common targets of investigation, because not only does the walker cross the boundary, but it may land at some remote location away from the set A. This happens when the walker jumps from one node of the lattice to another rather than drifting one step at a time.
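As an illustrative toy (our own, not from the paper), the escape of a simple nearest-neighbor walk from a box on Z^2 can be simulated as follows; a walk with jumps could instead overshoot the boundary and land far outside A:

```python
import random

def escape(radius, rng):
    """First passage time and exit site of a simple symmetric random walk
    on Z^2 escaping the box A = {(x, y): |x| <= radius, |y| <= radius}."""
    x = y = 0
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    n = 0
    while abs(x) <= radius and abs(y) <= radius:
        dx, dy = rng.choice(steps)
        x, y = x + dx, y + dy
        n += 1
    return n, (x, y)

rng = random.Random(7)
n, (x, y) = escape(3, rng)
print(n, (x, y))
```

Because the steps are nearest-neighbor, the exit site always sits exactly one lattice unit beyond the boundary; with jump increments, the exit position becomes a nontrivial quantity of its own.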
The most typical lattice on which the walk takes place is Z^d. However, more recent work pertains to walks on graphs of arbitrary configurations, which in turn apply to walks on networks. Another modification of a random walk is when the grid is not even deterministic, but is generated randomly each time the walker lands at some node. Thus, the locations of respective nodes are in R^d and not equidistant. Under these more general conditions, the walker intends to escape a deterministic subset A of R^d, and the underlying studies target the escape time from A as well as the location of the walker outside A. See, for instance, a recent survey [4] by Dshalalow and White. Fluctuations of random walks are subject to widespread studies in stochastic analysis; cf. Barral and Loiseau [46] and an excellent survey by Bingham [47], then random walks on graphs by Confortia and Leonard [48], Telcs [49], and Volchenkov [50], random walks on trees by Andreolettia and Dielb [51] and Takács [52], and random walks on random lattices by White and Dshalalow [3].
Random walks find applications in various areas of science and technology, such as physics, in particular nuclear physics, surveyed in [53] by Durkee in 2021, and astrophysics [54] by Uchaikin and Gusarov; epidemiology in Pu et al. [55]; genetics by Murase et al. [56]; finance in Jarner and Kronborg [57] and Scalas [58]; medicine (Alzheimer's) in Rahimiasi et al. [59] and (cancer research) in Seah et al. [60]; strategic networks in Dshalalow and White [41]; electrical networks in [49] by Telcs; and social networks by Li et al. [61] and Sarkar and Moore [62]. Additionally, a variant of random walks known as quantum random walks, or just quantum walks, appears in quantum mechanics and quantum computing; see Kiumi et al. [63] and Sajida et al. [64]. Note that quantum walks are quantum analogs of classical random walks, which are dynamical tools used to control the motion of a quantum particle in space and time.

Layout of the Paper
After an informal description in Section 1, we rigorously formalize our GI^X/G-NSV/1 model and obtain joint and marginal distributions for the server's return time τ_ν and the number of waiting customers Q(τ_ν) in the form of transforms. We use operational calculus to establish analytically tractable formulas, all in Section 2. In Section 3, we deal with special cases. First, we prove that the functionals of τ_ν and Q(τ_ν) are linearly related if and only if the input process is Poisson; it is allowed to be marked, though. From then on, at least some of our further efforts pertain to the system GI^X/G-NSV/1. We also support our claims in Section 3 (and in some other sections) that the results are tractable. Section 4 is fully dedicated to numerical results that amplify our claims that the results are tame. We also provide simulated cases to validate the results. Sections 5-8 deal with queueing processes, which we analyze by combining fluctuation and semi-regenerative analyses, again arriving at formulas for the steady state distribution in the form of analytically tractable and computationally tame functionals. Section 8 discusses various control problems, in particular the determination of the optimal value of T that the system needs to set initially. Overall, we offer a stand-alone analysis different from the literature known to us on queueing, in particular queueing under N-Policy and with single vacations or maintenance.

Fluctuation Analysis of the System
From Section 1, we can conveniently model the server's absence and the system's status by the following scheme. In the absence of any arrivals to the system, the server will spend exactly T units of time outside the system and then return. If there is at least one arrival in the interval [0, T], of a random batch Y_1 of customers with total estimated service time X_1 = x_1 + ... + x_{Y_1}, then the server shortens his vacation by X_1 units of time. It means that the server, who has vacationed δ_1 = t_1 units of time by the first arrival, will be credited A_1 = δ_1 + X_1 units of vacation time instead of t_1, which further shortens his vacation from T down to (T − δ_1 − X_1)^+. If X_1 + δ_1 has not crossed T, the server continues his maintenance until t_2, the arrival time of the second batch Y_2 of customers with total estimated service time X_2. Thus, by t_2 the server is credited a total of X_1 + δ_1 + X_2 + δ_2 = A_2 time outside the system (not δ_1 + δ_2 = t_2). The server continues this pattern until some A_n or A_n + δ_{n+1} reaches or crosses T, at which point he returns to the system.
Consider the random measure A ⊗ T ⊗ B that describes the transition of the system during the server's maintenance, with no regard to his return to the system. For example, A ⊗ T ⊗ B[0, t] gives the position of the server with respect to his time spent in maintenance at t_m = max{t_k ≤ t : k = 1, 2, ...}, accelerated through the respective increments X_1, ..., X_m, the epoch t_m itself, and the total number of customers accumulated by then. Accordingly, the kth batch of entering units has a marginal transform of its size; if A ⊗ T ⊗ B is with position independent marking, these transforms coincide. Further, define ν = inf{n : A_n ≥ T} to be the minimum number of arrivals at which A_ν crosses T, while A_{ν−1} did not. Obviously, t_ν is the first passage time when A_ν crosses T. Here we see that of the three components in the process A ⊗ T ⊗ B, only A (namely A_k) is active, while the rest are passive. The reason we call the components this way is that A fluctuates about the threshold T, while the other two are not related to any threshold. Thus, the notion of active or passive applies to the relation of a component toward a threshold if one exists. In this case, A_k has a threshold that it crosses at some point of time, while the other two components have no thresholds of their own and assume their values appreciated at the first passage time t_ν. A ⊗ T ⊗ B gives a comprehensive descriptive measure of the status of the system during the server's maintenance, but it overlooks a fine crossing by S(t), and thus the server's return between two arrivals. This is because t_ν is not necessarily the return time of the server, since the threshold T is not only crossed by the component A, but also by S(t) from within the interval (t_{ν−1}, t_ν]. If S(t) crosses T earlier, the server will be immediately called off. Consequently, there may be an earlier return of the server at some point of time within (t_{ν−1}, t_ν). The return time is a r.v. denoted by τ_ν, and it can assume one of two values.
It either equals t_ν, if there is no crossing of T by S(t) in the interval (t_{ν−1}, t_ν) (but exactly at t_ν), or it equals t_{ν−1} + T − A_{ν−1}, and then the return takes place in (t_{ν−1}, t_ν). As per Figure 4 (shown in Section 1), it is easy to conclude that the return time of the server depends on the position of T relative to the two subintervals within (A_{ν−1}, A_ν], namely on whether T ∈ (A_{ν−1}, A_{ν−1} + δ_ν] or T ∈ (A_{ν−1} + δ_ν, A_ν]. In Figure 4, we have two variants of crossing, where τ_ν takes one of the two values relative to the interval (t_{ν−1}, t_ν]. As we see, the first passage time τ_ν of S(t) can come earlier than the first passage time t_ν of A. The number of customers waiting in the system upon the server's return is determined by the following pick.
Here, the first case (Q(τ_ν) = B_ν) indicates that there is no crossing of T in (t_{ν−1}, t_ν) by S(t). The second case (Q(τ_ν) = B_{ν−1}) indicates that there is an actual crossing of T by S(t) at τ_ν ∈ (t_{ν−1}, t_ν).
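This two-case pick translates directly into code. A sketch computing (τ_ν, Q(τ_ν)) from hypothetical arrival data, following the drift/jump dichotomy described above:

```python
def return_time_and_queue(T, arrivals, batch_sizes):
    """Return (tau_nu, Q(tau_nu)) for a given vacation.

    arrivals: list of (t_i, X_i); batch_sizes: list of Y_i.
    If S(t) crosses T by drift in (t_{nu-1}, t_nu] (before or exactly
    when the next batch arrives), the server returns at
    t_{nu-1} + T - A_{nu-1} with B_{nu-1} customers waiting; if the
    crossing is caused by the jump X_nu at t_nu, he returns at t_nu
    with B_nu customers.
    """
    A_prev, B_prev, t_prev = 0.0, 0, 0.0
    for (t_n, X_n), Y_n in zip(arrivals, batch_sizes):
        delta = t_n - t_prev
        if A_prev + delta >= T:            # drift crossing before/at t_n
            return t_prev + (T - A_prev), B_prev
        A_n = A_prev + delta + X_n         # = t_n + X_1 + ... + X_n
        if A_n >= T:                       # jump crossing at t_n
            return t_n, B_prev + Y_n
        A_prev, B_prev, t_prev = A_n, B_prev + Y_n, t_n
    return t_prev + (T - A_prev), B_prev   # no further arrivals: drift to T

print(return_time_and_queue(10.0, [(2.0, 3.0), (4.0, 6.0)], [2, 3]))  # -> (4.0, 5)
print(return_time_and_queue(3.0, [(5.0, 1.0)], [2]))                   # -> (3.0, 0)
```

In the first call the batch at t = 4 causes a jump crossing, so all 5 customers are counted; in the second, the drift reaches T = 3 before the first arrival, so the server returns to an empty system.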
We target the LST Φ_ν of the joint distribution of the server's return time τ_ν, the crossing level S_ν of S(t) at τ_ν (T or A_ν), the pre-return time t_{ν−1} (the arrival time of the (ν − 1)st batch), the total estimated time A_{ν−1} of the ν − 1 arrived batches of customers, and the number of customers Q(τ_ν) waiting upon the server's return. Here, L^{-1}_x denotes the inverse Laplace transform with respect to the variable x.
Proof. First, we introduce the set of exit indices {ν(p) = inf{n : A_n > p} : p > 0}, replacing the single T, so that ν = ν(T−), with the associated set of functionals {Φ_{ν(p)} : p > 0}. We will derive an expression for Φ_{ν(p)} and then use operational calculus to find a formula for Φ_ν. The functional Φ_{ν(p)} can be computed as a sum of functionals over a convenient partition of the sample space Ω. The partition includes two subsets: given {ν(p) = j}, we have p ∈ (A_{j−1}, A_j], and we break the latter interval into the two subintervals I^d_j = (A_{j−1}, A_{j−1} + δ_j] and I^s_j = (A_{j−1} + δ_j, A_j]. So, if p ∈ I^d_j, the crossing of p by S(t) occurs at time t = τ_j = t_{j−1} + p − A_{j−1}, and in fact, S(τ_j) = p sharp. However, if p ∈ I^s_j, then the crossing of p by S(t) occurs exactly at t_j, but in this case S(t_j) almost surely (a.s.) exceeds p, and S(t_j) = A_j = A_{j−1} + δ_j + X_j. Thus, the crossing of p by S(t) occurs at the time τ_j specified accordingly. See Figure 5, with two alternative locations of the threshold p within the interval (A_{j−1}, A_j]. Therefore, we have the decomposition of Φ_{ν(p)} into two summands. Next, we apply the operator L^d_p to Φ^d_{ν(p)} defined accordingly. Now we turn to the second summand.
There is only one term, 1_{p ∈ I^s_j}, dependent on p. Thus, the pertinent operator is again a Laplace-type operator in p. Note that there are some similarities between the construction of the server's maintenance time process S(t) and a reliability aging process S(t) studied in our recent papers [42,68]. However, in those works, the aging process, accelerated by random shocks, was the main process, while in this paper it plays an auxiliary role. Furthermore, in the present construction, we include the maintenance time dependent queue located in the system, along with the choice of the queue lengths upon the variants of level crossings by S(t). Here, only the queue length Q(τ_ν), the exit time (first passage time) τ_ν, and their relationship are of interest. Yet even the associated results on Q(τ_ν) and τ_ν are integrated into a comprehensive analysis of our system throughout the article.
As already mentioned, the random quantities A_{ν−1}, S(τ_ν), and t_{ν−1} are auxiliary. We need τ_ν (the exit from vacation) and Q(τ_ν), the number of customers waiting in the system on the server's return.
Corollary 1. The joint LST φ(θ, z) of the system's status upon the server's return, giving the number of customers waiting and the total vacation time, is as stated below. Proof. The two terms simplify; plugging these into the Laplace inverse with a minor simplification yields the statement of this corollary.

Corollary 2.
The marginal distributions of the number of customers, Ez^{Q(τ_ν)}, and the total vacation time, Ee^{−θτ_ν}, satisfy the following formulas.
confirming the first claim of the corollary. With θ = 0, the second claim follows. Corollary 3. The system's status upon the server's return, under the special assumption that the input is with position independent marking, is characterized through the following functional. Proof. By double expectation, where ξ(α) = Ee^{−αx_1} is the LST of the estimated service time x_1 of one customer, and a(z) = Ez^Y is the PGF of the size of each batch of arriving customers. Assuming further that ϕ(θ) = Ee^{−θδ_1} is the LST of the interarrival times, under this condition Corollary 1 simplifies as stated. This is logical: on the event that no customer arrives during the entire vacation, the vacation time equals the constant T. From this, the marginal transforms of τ_ν and Q_ν follow trivially.
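The three transforms in Corollary 3 are an ordinary LST and PGF. A quick Monte Carlo sanity check under exponential estimated service times and geometric batch sizes (illustrative parameters of our choosing, not the paper's):

```python
import math
import random

rng = random.Random(1)
n = 200_000

# LST of an exponential(xi) estimated service time: xi / (xi + alpha)
xi, alpha = 2.0, 1.5
emp_lst = sum(math.exp(-alpha * rng.expovariate(xi)) for _ in range(n)) / n
print(emp_lst, xi / (xi + alpha))          # the two nearly agree

# PGF of a geometric(p) batch size on {1, 2, ...}: p z / (1 - (1 - p) z)
p, z = 0.4, 0.7
def geometric(p, rng):
    """Sample a geometric r.v. on {1, 2, ...} by inversion."""
    return max(1, math.ceil(math.log(1 - rng.random()) / math.log(1 - p)))

emp_pgf = sum(z ** geometric(p, rng) for _ in range(n)) / n
print(emp_pgf, p * z / (1 - (1 - p) * z))  # the two nearly agree
```

Both empirical averages match their closed forms to within Monte Carlo error, which is the kind of agreement the simulation section below exploits on a larger scale.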

Corollary 4.
Under the conditions of Corollary 3, namely position independent marking, the following formulas for the marginal distributions hold. Proof. From Corollary 3 and a property of LSTs, the formulas follow. Further, from Corollary 4, we find formulas for the means of τ_ν and Q_ν.

Corollary 6.
Under the conditions of Corollary 3, the means of τ ν and Q ν satisfy the following formulas.
where a^{(1)} = a = EY is the mean batch size.

Special Cases and Examples
Example 1. If the interarrival times δ of batches of customers have an Erlang distribution with parameters λ and r, then their LST is ϕ(x) = (λ/(λ + x))^r, and the marginal means (with a = a^{(1)}) are very different. However, when the interarrival times δ are exponential, i.e., r = 1, some interesting structure emerges. As we see, the mean queue length upon the return time is simply the mean number of arriving customers per unit time multiplied by the mean return time, a very intuitive result to be expected with marked renewal processes. However, when one observes the process backwards, beginning from the first passage time τ_ν, the process is no longer renewal, nor even Markov. There are discussions in our past work [1,69] of settings somewhat similar to our process, but not as complex. This is why this result is surprising. Now we prove an even stronger result.
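The verbal claim above, that the mean queue length on return equals the customer arrival rate (λ times the mean batch size EY) multiplied by the mean return time, can be checked with a Monte Carlo sketch under Poisson batch arrivals, geometric batches, and exponential estimates. Parameters here are illustrative, not taken from the paper:

```python
import math
import random

def one_vacation(T, lam, p, xi, rng):
    """Simulate one vacation; return (tau_nu, Q(tau_nu))."""
    t = A = 0.0
    Q = 0
    while True:
        delta = rng.expovariate(lam)
        if A + delta >= T:                 # drift crossing before next arrival
            return t + (T - A), Q
        t += delta
        A += delta
        Y = max(1, math.ceil(math.log(1 - rng.random()) / math.log(1 - p)))
        A += sum(rng.expovariate(xi) for _ in range(Y))
        Q += Y
        if A >= T:                         # jump crossing at this arrival
            return t, Q

rng = random.Random(42)
T, lam, p, xi, n = 5.0, 1.0, 0.5, 2.0, 100_000
samples = [one_vacation(T, lam, p, xi, rng) for _ in range(n)]
mean_tau = sum(s[0] for s in samples) / n
mean_Q = sum(s[1] for s in samples) / n
print(mean_Q, (lam / p) * mean_tau)       # EY = 1/p, so both nearly agree
```

The near-equality of the two printed values is consistent with the Poisson-only linear relation the next corollary establishes.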

Corollary 7.
If and only if ϕ(x) = λ/(λ + x) (the input is marked Poisson), the marginal means are related as follows.
Proof. From Corollary 6, by comparison of ∆_0 and α_0, we easily conclude that α_0 = c∆_0 holds if and only if the displayed condition does. The latter reduces to the stated equation, where c = 1/λ. The rest is a straightforward exercise.

Example 2.
From Corollary 4 (position independent marking), now under ϕ(x) = λ/(λ + x), with a marked Poisson input in mind, we obtain the transform, where L(z) denotes the pertinent inverse Laplace transform. Then its derivative follows, with the second derivative obtained similarly. This example demonstrates that the results above are analytically tractable in a special case. All results herein were found via the theorems above and symbolic computation in Wolfram Mathematica. With the results of Example 2, we make the following two further assumptions.

1.
The batch sizes Y are geometric with parameter p and PGF a(z) = pz/(1 − qz), where q = 1 − p.
2.
The estimated service times x are exponential with parameter ξ.
Then, Theorem 1, together with Corollary 3 and the assumptions above, yields explicit expressions. First, we find the probability that the server initiates his return due to a smooth level crossing of T by S(t). This simply requires an inverse Laplace transform of a rational expression. Consequently, the probability that the dispatcher initiates the server's return due to S(t) crossing T by a jump, i.e., an arriving batch whose estimated service time exceeds the remaining vacation time of the server, is the complement. Next, let us pursue the mean and variance of the return time τ_ν. The second moment was computed with software explicitly, but in the interest of space, we present only the formula for the variance, derived as Eτ_ν² − (Eτ_ν)².
By Example 1, under the Poisson input we have the corresponding means. Plotting α_0 = α_0(T) with λ = 1 and ξ = p = 0.1, we see in Figure 6 the mean queue length grown during the server's maintenance. As expected, α_0 is monotone increasing in T. In Figure 7, we plot the mean maintenance time as a function of T under the same assumptions as those for α_0. As we see, the mean maintenance time is largely shortened (at T = 30, down to only 1.3) by incoming customers, due to their estimated service times.
Next, we are interested in the dependence of ∆_0 on ξ (the reciprocal of the mean estimated service time) rather than on T. For this reason, we rewrite Formula (17) as a function of c = 1/ξ, the mean estimated service time of one customer. Figure 8 below depicts ∆_0(c) under λ = 1 (unit mean interarrival time), T = 30 (fixed), and p = 0.1 (mean arriving batch size of 10). As we said, the mean maintenance time ∆_0 in Figure 8 is now a function of c = 1/ξ. We see that ∆_0 equals 30 when c = 0. Then ∆_0(c) falls sharply as c increases from 0, slows down near c = 2 (∆_0(2) ≈ 2.33), equals 1.278 at c = 10, and then asymptotically approaches 1.
In Figure 9 below, we took λ = 0.1 (mean interarrival time of 10), p = 0.1 (mean batch size of 10), and T = 30. This time, the behavior of ∆_0 is smoother compared to Figure 8, and at c = 30 (the same value as the originally planned maintenance time), ∆_0(30) ≈ 9.9, close to the mean interarrival time of customers. Here we see that ∆_0(c) → (1/λ)(1 − e^{−λT}) asymptotically in c, and it is smaller than 1/λ. However, as other "experiments" show, ∆_0(c) may be larger than 1/λ even for a fairly large c. This is because ∆_0 can be impacted by the first summand containing T, which offsets the factor 1 − e^{−λT}.
For example, in Figure 10, we see a very sharp decline of ∆_0 just around 0, with T = 30 and λ = 10. Now at c = 100 (large), ∆_0(100) ≈ 0.103, still above 1/λ = 0.1. It takes a very large c for ∆_0(c) to come down to 1/λ = 0.1; it does not drop below 0.1 here because T is relatively large and its impact is still felt for very large c's. It takes T = 0.5 and c = 1000 to see ∆_0(c) drop below 0.1, e.g., ∆_0(1000) = 0.09932925, or even further with T = 0.1, giving ∆_0(1000) = 0.06321216. For the variance, we exploit the property of the PGF α_0(z) of Q(τ_ν) where Conversely, Computing the second derivative as z approaches 1 from inside the unit disk, and combining with the α_0 terms, we find where In Figure 11, we plot Σ_0(T) of Formula (21) with λ = 1 and ξ = p = 0.1. Figure 13 is another variant of α_0 with λ = 2, ξ = 3, and p = 0.1.

Numerical Examples and Simulation
Next, Formulas (16)–(20) for probabilities, means, and variances, derived for the case of Poisson arrivals, geometric batch sizes, and exponential estimated service times, will be shown to agree with Monte Carlo simulations of the process under various numerical assumptions on the parameters: the arrival rate λ, the batch-size parameter p, the parameter ξ of the estimated service time of each customer, and the vacation time T.
In each case, all 5 formulas are validated for a variety of parameter values, each based on 100,000 simulations of vacations of the server.
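The vacation dynamics lend themselves to a direct Monte Carlo sketch. The following is a minimal illustration, not the repository code: it assumes Poisson(λ) batch arrivals, geometric(p) batch sizes, and Exp(ξ) estimated service times per customer, with the server returning either when the vacation clock runs out between arrivals (a smooth crossing) or when an arriving batch's estimate exceeds the remaining time (a jump crossing). The function name and the demo parameter values are ours.

```python
import random

def simulate_vacation(lam, p, xi, T, rng):
    """One state-dependent vacation.  Returns (tau, Q, smooth):
    tau    -- the server's actual return time,
    Q      -- number of customers accumulated by time tau,
    smooth -- True if the vacation expired between arrivals
              (the server initiates his own return, event I_d)."""
    t, S, Q = 0.0, 0.0, 0                   # time, subtracted service, queue
    while True:
        t_next = t + rng.expovariate(lam)   # next batch arrival
        if t_next + S >= T:                 # the clock runs out first:
            return T - S, Q, True           # smooth crossing at time T - S
        batch = 1                           # geometric(p) size on {1, 2, ...}
        while rng.random() > p:
            batch += 1
        # Exp(xi) estimated service time for each customer in the batch
        X = sum(rng.expovariate(xi) for _ in range(batch))
        t, S, Q = t_next, S + X, Q + batch
        if t + S >= T:                      # the jump crossed the level:
            return t, Q, False              # dispatcher recalls the server

# crude Monte Carlo estimates (lam = 1, xi = p = 0.1, T = 30)
rng = random.Random(1)
runs = [simulate_vacation(1.0, 0.1, 0.1, 30.0, rng) for _ in range(20000)]
p_smooth = sum(s for _, _, s in runs) / len(runs)
mean_tau = sum(t for t, _, _ in runs) / len(runs)
mean_Q = sum(q for _, q, _ in runs) / len(runs)
```

Under these parameters the dispatcher almost always recalls the server early, in line with the sharply shortened mean maintenance time reported above.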
Code for simulating the vacations of the present queueing system, implementations of Formulas (16)–(20), and numerous experiments generating all diagrams in this section are available publicly at the GitHub repository linked at the end of the article. Figures 14-16 show the probability of the server initiating his own return, i.e., event I_d in which a smooth level crossing occurs, computed from (16). Figure 14 shows that, as λ grows, i.e., arrivals occur more frequently, the probability that the server returns on his own shrinks because surges are more frequent. Figure 15 gives predicted and empirical probabilities for λ = 1, ξ = 5, p ∈ {0.05, 0.1, . . . , 1}, and T ∈ {0.1, 0.5, 1, 2}. It shows that, as p grows, i.e., the batch sizes tend to be smaller, the probabilities that the server returns on his own grow. Figure 16 gives predicted and empirical probabilities for λ = 1, ξ ∈ {0.5, 1, . . . , 10}, p = 0.5, and T ∈ {0.1, 0.5, 1, 2}. It shows that, as ξ grows, i.e., the estimated service times tend to be smaller, the probabilities that the server returns on his own grow.
In all cases, a larger T implies higher probabilities since, with all else equal, the threshold is further away and, hence, the surges are less likely to cause the dispatcher to call the server back. Figures 17-19 show the empirical means and variances of the return time compared with graphs of Formulas (18) and (19) under various sets of parameters, each for several T values. The mean return time of course shrinks as λ grows and interarrival times become shorter. Variances seem to peak and then decrease as λ grows. Figure 18 includes values for λ = 1, ξ = 5, p ∈ {0.05, 0.1, . . . , 1}, and T ∈ {0.1, 0.5, 1, 2}. The mean return time of course grows as p grows and batch sizes become smaller. Variance shrinks as p grows because the variance in the batch sizes shrinks. Figure 19 includes values for λ = 1, p = 0.5, ξ ∈ {0.5, 1, . . . , 10}, and T ∈ {0.1, 0.5, 1, 2}. The mean return time of course grows as ξ grows and service times shrink, since the dispatcher then subtracts less from the server's vacation. Shorter service times also make the return time more predictable, with less variance. Figures 20-22 show the empirical means and variances of the return queue length compared with graphs of Formulas (20) and (21) under various sets of parameters, each for several T values. The property EQ(τ_ν) = (λ/p) Eτ_ν is clearly seen as the plots show the same trend as the means in Figures 17-19, just with different scaling. Figure 20 includes values for p = 0.5, ξ = 5, λ ∈ {0.5, 1, . . . , 10}, and T ∈ {0.1, 0.25, 0.5, 1}. The variances grow as λ grows since shorter interarrival times cause higher variance in the arrivals per unit time, and hence in the return queue length. Figure 21 includes values for λ = 1, ξ = 5, p ∈ {0.05, 0.1, . . . , 1}, and T ∈ {0.1, 0.5, 1, 2}. Larger p reduces the variance precipitously, as smaller batches make for lower variance in the queue length upon the server's return. Figure 22 includes values for λ = 1, p = 0.5, ξ ∈ {0.5, 1, . . . , 10}, and T ∈ {0.1, 0.5, 1, 2}. For small T, ξ has little effect on the variance of the return queue length, presumably because the server will quickly return from his vacation regardless of customer activity. However, larger ξ and the resulting smaller service times increase variance for larger T because, with dispatcher-initiated returns more likely, these cases will typically involve unpredictable large batches and frequent arrivals that overcome the short service times.

System Policy on Server Return
When the server returns to the system, he need not find customers waiting, not even a single one. For example, if t_1 > T, this is exactly the case. We go one step further. Suppose the server does not resume his service unless there are at least N customers waiting. Otherwise, the server waits for the buffer to be replenished accordingly. This corresponds to the N-Policy queue or, rather, the N-Policy combined with a single vacation (NSV-Policy for short).

Notation 1.
We use the original formula due to Dshalalow [2]. See also Dshalalow and White [4,41]. Under the following notation: (a) φ(θ, z) = Φ_ν(0, 0, 0, θ, z) is the distribution of the waiting room load on the server's return along with the total vacation time; (b) W_n = Q(τ_ν) + Y_1 + . . . + Y_n (we can regard Q(τ_ν) as Y_0); W_n is the active component. (c) Then, if the input is marked Poisson with position independent marking, meaning the joint transform of the interarrival time and the arriving batch size is E[e^{−θδ} z^Y] = a(z)·λ/(λ + θ), then from Dshalalow [2], With the input a general renewal process, still with position independent marking, γ(θ, z) = E[e^{−θδ} z^Y] = a(z)ϕ(θ), where ϕ(θ) = E[e^{−θδ}], we have from Dshalalow [2],

Theorem 2. In the queueing system with NSV-Policy and a marked renewal process with position independent marking as input, with the LST ϕ(θ) of the interarrival time and its mean ϕ, the mean length of the service cycle ∆ = E∆_µ that starts with zero customers, followed by a state dependent single vacation time and the wait specified by the N-Policy, and the mean number of customers α = EW_µ accumulated in the system upon service resumption are related as if and only if the input to the system is marked Poisson.

Service Cycle
Recall that when the server returns to the system and finds the waiting room (buffer) with fewer than N customers waiting, he rests until there is a total of at least N customers available and then immediately resumes his service; he also does so if, on his return, the buffer holds at least N waiting customers. Thus, a new service begins and so does a new busy period. It continues with another service of a customer and so on. When the queue becomes exhausted, the underlying busy period ends. Furthermore, this prompts the server to leave the system and go on a single, state dependent, vacation trip, and then to return with a possible wait and rest. This ends one of his service cycles.
A service cycle is thus a random period of time that consists of a single service time, from its beginning to its end, unless the queue becomes empty. In the latter case, the server leaves on vacation, then returns to the system, and possibly rests for a while. When at least one customer becomes available, the server ends the cycle and resumes his service. Thus, a service cycle includes either a mere service time, or a service time together with a vacation and a wait. If the system is under the N-Policy, then service is resumed only after at least N customers have accumulated.
Speaking of service cycles, it makes sense to distinguish a regular one, which includes only the service of one customer, from a long one, which also contains a vacation and a possible rest. Now, if the system is observed beginning with T_0 = 0, the time T_1 denotes the end of the first service cycle, whether regular or long. Then, {T_n : n = 0, 1, . . .} is the sequence of the ends of successive service cycles {(T_n, T_{n+1}]}, also known as departure epochs of serviced customers. The end of a long service cycle is followed by the beginning of a new busy period that starts with a regular service cycle. A new busy period begins with a number of customers specified by F_µ(0, z) = E[z^{W_µ}], in one of the forms discussed in Section 5, in notation α(z).

M/G/1-Type Queue
If the queue in mind is of M/G/1 type, then the queueing process Q(t) has right-continuous paths and is defined on a filtered space (Ω, F, (F_t), P). It is semi-regenerative with respect to the sequence {T_n} of the ends of service cycles (stopping times relative to (F_t)), so that Q_n = Q(T_n) forms a homogeneous Markov chain with the TPM (transition probability matrix) P = (p_ij : i, j ∈ ℕ_0). The TPM P is a ∆_2-matrix, defined in Abolnikov-Dukhovny [35], and it reads Abolnikov-Dukhovny also provided a necessary and sufficient ergodicity condition for {Q_n} that we mention in a moment. Given the ergodicity condition is met, the invariant probability measure p = (p_0, p_1, . . .) of {Q_n} exists and can be uniquely found from the equation p = pP, ⟨p, 1⟩ = 1, or from the equation where is the PGF of the ith row of P and P(z) is the PGF formed of the vector p.
Row zero in P consists of one-step transition probabilities representing all transitions during the long service cycle, whereas the remaining rows represent transitions on regular cycles. The variant of Kendall's formula reads where β(θ) = E[e^{−θσ}] and σ is a pure service time of one customer. ρ < 1 is the necessary and sufficient condition for p to exist (established in [35]) and for Formula (35) for P(z) to hold. The formula for P(z) is similar to that for the M^X/G/1/∞ system when, in (36), a(z) is replaced with α(z) and a is replaced with α = EW_µ as per (35).
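Once P is truncated to a finite block, the invariant probability measure can be computed numerically. The sketch below solves p = pP, ⟨p, 1⟩ = 1 for a generic finite stochastic matrix; the 3×3 matrix is a toy placeholder of our own, not the paper's ∆_2-matrix.

```python
import numpy as np

def stationary(P):
    """Invariant probability vector p solving p = pP, <p, 1> = 1,
    for a finite, irreducible stochastic matrix P."""
    n = P.shape[0]
    # stack the balance equations (P^T - I) p = 0 with the
    # normalization sum(p) = 1, then solve in the least-squares sense
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# toy 3-state placeholder TPM (NOT the paper's Delta_2 matrix)
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
p = stationary(P)
```

For the actual chain, the truncation level must be chosen large enough that the mass lost in the tail of the ∆_2-matrix is negligible.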

Remark 3. From Theorem 2,
implying that and where, under the assumption that a(y) = py/(1 − qy) and ξ(x) = ξ/(ξ + x) (see Formulas (10) and (11) under no assumptions on a(y) and ξ(x)). Then, From (44), and from (46), For N = 2, In terms of our previous abbreviation b = λ + pξ, M_2(x, 1) turns to where, assuming b² − 4λpξ > 0, demonstrating that the formula can be reduced to simple algebraic expressions under the N-Policy. Note that we can proceed in the same manner when the roots are complex numbers. We avoid cases with multiple roots, though, considering them probabilistically impossible and thus impractical in a real-world situation. (41), under the assumptions in Example 3, that is, a(z) = pz/(1 − qz) and a′(1) = 2q/p²

Mean Stationary Service Cycle
Recall that all underlying processes are considered on a filtered probability space (Ω, F, (F_t : t ≥ 0), P, (P_x, x = 0, 1, . . .)). Because the service cycles represent important time epochs of decision making over which the queueing process conditionally regenerates, we would like to discuss the computational aspect of the ends of service cycles {T_n} in some form and lay the foundation for the forthcoming analysis of the continuous time parameter process Q(t) rendered in Section 7. Let C_n = |(T_n, T_{n+1}]| (the length of the nth cycle). Then, with We call EC the mean value of the stationary service cycle. We will show that EC exists and equals 1/(λa) under the condition that ρ < 1. Now, where implying that where ∆ = E∆_µ. The limit on the right of (65) exists if and only if ρ = λab_1 < 1, and so does the limit on the left, which by the Lebesgue dominated convergence theorem (applied to lim_{n→∞} EC_n) gives EC introduced in (62). Hence, and by Formula (27) of Theorem 2 we have from the last equation that Thus, we proved that the limit below exists and equals This is an interesting result showing that, in spite of all the complex settings of the first service cycle that includes a state dependent vacation time and a random wait for N customers, the mean service cycle is identical to that of the regular M^X/G/1/∞ queue with no vacation and no N-Policy. We need to be reminded that, in the context of a G^X/G-NSV/1 system, (66) holds under two necessary and sufficient conditions.
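A quick heuristic (ours, not the paper's proof, which runs through (63)-(65)) for why the value 1/(λa) should be expected under marked Poisson input:

```latex
% In equilibrium ($\rho < 1$), every customer that enters is eventually
% served, so the long-run departure rate equals the arrival rate of
% individual customers, $\lambda a$ (batches of mean size $a$ arriving
% at rate $\lambda$). Each service cycle ends with exactly one departure;
% hence, by elementary renewal reasoning, the mean stationary cycle is
EC \;=\; \lim_{n\to\infty} EC_n \;=\; \frac{1}{\lambda a}.
```

This is only an intuition; Proposition 1 below shows the identity is in fact equivalent to the input being marked Poisson.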

1. The arrival process is marked Poisson.
2. The offered load ρ < 1.
In conclusion, we derive the following proposition.

Proposition 1.
For a G^X/G-NSV/1 type queue (with state dependent vacation and N-Policy), the mean stationary service cycle EC defined in (62) equals (λa)^{−1} if and only if the arrival process is marked Poisson and the offered load ρ < 1.
Combining Proposition 1 and Theorem 2, we have the following corollary.
Corollary 9. In the queueing system with NSV-Policy and a marked renewal process with position independent marking as input, with the LST ϕ(θ) of the interarrival time and its mean ϕ, the mean length of the service cycle ∆ = E∆_µ that starts with zero customers, followed by a state dependent single vacation time and the wait specified by the N-Policy, and the mean number of customers α = EW_µ accumulated in the system upon service resumption are related as if and only if the input to the system is marked Poisson with rate λ of its supporting counting measure and ρ < 1.

Remark 5.
Because, by Proposition 1, EC = lim_{n→∞} EC_n exists (and equals 1/(λa)) under the necessary and sufficient condition that ρ < 1, we conclude from (63) that where

Semi-Regenerative Analysis
Let Z = (Z(t)) be an F_t-adapted process on a filtered space (Ω, F(Ω), (F_t), (P_x)_{x=0,1,...}) and let τ be a stopping time relative to (F_t). Z is said to have a locally strong Markov property at τ if, for each t ≥ 0, holds true for any integrable Borel measurable function g.
An integer-valued F_t-adapted process Z ≥ 0 is called semi-regenerative if:
(i) there is a point process {T_n; n = 0, 1, . . .} such that for each n, T_n is a stopping time relative to (F_t);
(ii) Z has a locally strong Markov property at T_n, n = 0, 1, . . .;
(iii) Z is a.s. right-continuous;
(iv) (Z(T_n), T_n) := (X_n, T_n) is a time-homogeneous Markov renewal process embedded in Z over (T_n). Let Then, the matrix K(t) = {K_ij(t)} is called the semi-regenerative kernel. In the context of our queueing system M^X/G^T/1, the queueing process {Q(t)} is semi-regenerative relative to the sequence {T_n} of departures. So we will use the notation Q = {Q(t)} in place of Z.
Then, from the key renewal theorem and (68) and (69) of Remark 5, and we call it the integrated semi-regenerative kernel. We assume that the semi-regenerative kernel K is integrable over ℝ_+. If h_j(z) is the PGF of the jth row of H, also expressed as then Equation (70) can be rewritten as With this notation and Equation (68), (73) reads as with no restrictions on a(z) and ξ(x).

Theorem 3.
In a single-server queue with a marked Poisson input stream and general service, suppose the first service cycle [T_0, T_1] consists of the first service period preceded by the server's vacation time that lasts from T_0 = 0 to τ_ν. Upon the server's return at time τ_ν, he may or may not wait for at least N customers until time ∆_µ. The queue length Q(∆_µ) becomes W_µ and the first service begins, while new customers enter the system. The Laplace functional of the continuous time parameter queueing process Q(t) observed over the first service cycle, jointly with ∆_µ, satisfies the formula where β(θ) = E[e^{−θσ_1}].
which completes the proof of the theorem, because (78) is obvious.

Theorem 4.
In a single-server queue with a marked Poisson input stream and general service, suppose the first service cycle [T_0, T_1] consists of the first service period preceded by the server's vacation time that lasts from T_0 = 0 to τ_ν. Upon the server's return at time τ_ν, he may or may not wait for at least N customers until time ∆_µ. Then the PGF of the stationary distribution of the continuous time parameter queueing process Q(t) satisfies the formula where Proof. From (77), because the system is exhaustive, For Q_0 = j > 0, the server does not leave the system, and to utilize (70), we can set From (76), implying that and Combining (35) with (74), (75), and (86), we get where and With EC = 1/(λa), we finally have that

Remark 6.
In particular, for a(z) = z, π(z) = P(z), which is the same result as for a regular M/G/1 queue, but now with a complex state dependent NSV system.

Example 6. From Theorem 4 and Example 5,
Therefore, where EQ_∞ was treated in Section 6. See Formula (41) and the follow-up discussion. Under the assumptions of Example 5, with a(z) = pz/(1 − qz), we have a′(1) = 2q/p² and, from (89) and (55), The version of (90) under N = 1, according to (60), reads In Figure 25, below, we see that EQ(∞) behaves analogously to EQ_∞ (in Figure 24) because it differs only in the additive term q/p². However, with p small, such as p = 0.1, EQ(∞) moves up by as much as 90 compared to EQ_∞.

Switchovers
A switchover is manifested by the server's exiting a busy period, after the queue becomes exhausted, and starting his work at the maintenance facility. When the server resumes his service of customers, the previous busy cycle ends and a new one begins. A busy cycle consists of a busy period and a maintenance period with a possible wait. Each busy cycle thus includes exactly one switchover or, equivalently, one attendance of maintenance.
While maintenance is mandatory work, in some applications a large number of back and forth movements in and out of the system (switchovers) is undesirable, and it may indicate that the input traffic is sparse. If so, the server may be better off staying longer at the maintenance facility (that is, increasing T) to ensure that enough customers accumulate at the main facility. This can also be achieved by increasing the threshold N, thereby letting the server wait longer for the input to deliver N or more units before a new cycle begins. It seems plausible, though, that working at the maintenance facility is economically more efficient than staying idle and waiting for units to replenish the buffer to N or more.
As most performance measures require, the number of switchovers should be counted within a fixed time interval, say [0, t], thus making a time sensitive analysis necessary.
Suppose (T^j) := {T^j_k : k = 0, 1, . . .} are the successive moments among the T_n's at which Q_n enters state {j}, where T_n is the nth departure epoch. [(T^j) is a delayed renewal process embedded in (T).] For any fixed t ≥ 0, the random variable gives the total number of visits to state j by the Markov renewal process (Q_n, T_n) in the time interval [0, t]. In particular, for j = 0, V_0(t) is the total number of visits to state 0 by Q_n over [0, t] or, equivalently, the total number of the server's idle periods (maintenance and wait) in the interval [0, t].
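In code, V_j(t) is just a count over the embedded trajectory. The sketch below assumes a hypothetical list of (T_n, Q_n) pairs standing in for the departure epochs and queue lengths; the function name and the sample data are ours.

```python
def visits(path, j, t):
    """V_j(t): number of visits to state j by the embedded Markov
    renewal process (Q_n, T_n) during [0, t].
    `path` is a chronological list of (T_n, Q_n) pairs."""
    return sum(1 for T_n, q_n in path if T_n <= t and q_n == j)

# hypothetical embedded trajectory (departure epochs, queue lengths)
path = [(0.0, 0), (1.2, 3), (2.5, 1), (4.0, 0), (5.1, 2), (7.3, 0)]
idle_periods = visits(path, 0, 6.0)   # server's idle periods in [0, 6]
```

Averaging such counts over many simulated trajectories estimates the Markov renewal function R(i, j, t) defined next.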
Next, we define The functional matrix {R(i, j, t) : i, j = 0, 1, . . . ; t ≥ 0} is the Markov renewal function (MRF). The MRF gives the expected number of entrances of the queue (upon departures) into every state j during the interval [0, t], given that the queue started from a state i. In Figure 27, below, we depict the curve of S, which is also monotone decreasing in T, but unlike the previous plot, S(T) is concave down. The concavity then changes for values of T beyond T = 100. Here s = 1, λ = 0.01, p = 0.1, b_1 = Eσ = 0.1, and ξ = 0.4.

Buffer Load
Let q(k) be the expense function due to the presence of k customers in the system per unit time. Then, gives the mean expense due to all customers present in the system on the interval [0, t], as illustrated in Figure 28 below. We can represent Q[0, t] as follows by using Fubini's Theorem, Now, from Çinlar [70] or Dshalalow [36] (p. 98), Theorem A.1, Applying (101) to (100) and using the monotone convergence theorem, Q is the penalty rate for all customers that occupy the system per unit time, as observed over a long period of time.
Assuming the expense function q(k) to be linear, q(k) = qk, we have Another variant of the expense function is q(k) = q^k, making which, from (80) and (81), reads In either case, we look for

Example 8. Combining the penalties for too many switchovers and the queue cost per unit time gives a simple objective function (if the cost is linear) as a linear combination of the switchover rate λa(1 − ρ)/α and the queue length EQ(∞) with respective penalties of s = 1000 and q = 2 (per switchover and per customer per unit time); this produces Figure 29 as a function of T, for 0 ≤ T ≤ 50. Here we use λ = 0.01, ξ = 0.4, p = 0.1, b_1 = 0.1, and b_2 = 4 on the left, and the same parameters except for b_1 = 2.5 on the right.
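Finding the optimal T, as in Example 8, amounts to minimizing a unimodal one-variable cost O(T). Below is a sketch using golden-section search; the cost components are hypothetical stand-ins of our own (not the paper's Formulas (97) and (102)), chosen only to reproduce the trade-off between a switchover cost that falls in T and a holding cost that grows in T.

```python
import math

def argmin_unimodal(O, lo, hi, tol=1e-6):
    """Golden-section search for the minimizer of a unimodal O on [lo, hi]."""
    g = (math.sqrt(5) - 1) / 2              # golden ratio conjugate
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if O(c) < O(d):                     # minimizer lies in [a, d]
            b, d = d, c
            c = b - g * (b - a)
        else:                               # minimizer lies in [c, b]
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2

# hypothetical stand-ins for the two cost components of O(T);
# placeholders only, NOT the paper's formulas
s, q = 1000.0, 2.0
O = lambda T: s / (1.0 + T) + q * T
T_star = argmin_unimodal(O, 0.0, 50.0)      # analytically, sqrt(s/q) - 1
```

With the paper's actual O(T), the same one-dimensional search applies as long as O remains unimodal on the interval of interest; otherwise a grid search over T is a safe fallback.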

Number of Customers during Server's Idle Period
Recall that the mean number of customers gathered in the system during the pure maintenance period is EQ(τ_ν) = α_0. Again, the longer the maintenance period lasts, the more customers arrive during maintenance and end up waiting. In some applications, those customers will be most impatient waiting in the system, especially with no server present. This situation may become undesirable, and it can mean a stiff extra penalty per waiting customer that the system will incur.
With a linear cost for k customers, we have U(k) = uk and thus, is the cost for all customers in the system waiting during a maintenance period, which is a multiple of the expected queue length upon the server's return. If the total number of waiting customers gathered during the server's inactivity, whether or not during maintenance, needs to be taken into account, then α_0 needs to be replaced with α, yielding

Remark 7. From Section 8.1, λap_0 is the mean number of maintenances per unit time. We assume that can be regarded as the mean number of customers waiting during the maintenance periods per unit time. Note that the latter is speculative, without any rigorous justification. In fact, the law of double expectation cannot be used in this case, because the number of visits to state 0 and the length of one maintenance period (and thus the number of customers during the maintenance period) are not independent. This, however, does not mean that the above assertion is wrong either, because a similarly paradoxical property shown in Corollary 7 holds true.
Example 9. From Example 3, Here u · α_0 is the penalty for the mean number of customers EQ(τ_ν) present in the system during a maintenance period. Then, from (109) and (110), is the mean number of customers during all maintenance instances per unit time. Under the assumption that N = 1, using Formula (58) gives

Example 10. Let U(k) = u^k t (u < 1) be the penalty for k customers waiting by the end of the time interval [0, t] (if a maintenance period lasts t), regardless of when they came in. Then, is the mean penalty for Q(τ_ν) customers during one maintenance period. From Example 2, we have

Example 11. If the penalty for k customers during a maintenance period is U(k) = uk², then EU(Q(τ_ν)) = u · EQ²(τ_ν) is the expected cost for all customers waiting during one maintenance period. From Example 3, We can regard this as the expected cost for all customers waiting during a unit time of a maintenance period. By Corollary 7, Note that EQ²(τ_ν)/Eτ_ν ≠ E[Q²(τ_ν)/τ_ν] (the latter would be the right choice for C_m), but the former seems more plausible compared to all other choices of penalties for waiting customers during maintenance.
To draw Figure 30 below, we set the cost per switchover s = 1200, the cost per customer in the system q = 1.5, and the cost per customer during a maintenance period c = 8. We set λ = 0.01, ξ = 4, p = 0.99, b_1 = 10, and b_2 = 4. We considered O(T) for 0 ≤ T ≤ 1000 and found min O(T) = O(363.4775) (see the snapshot on the right in a close vicinity of T = 363.4775). The objective function is made of the switchovers' cost S(T) = s · λa(1 − ρ)/α of (97), EQ(∞) of (86), (102), and (103), and α = EW_µ of (31), which represents the total number of customers accumulated during an entire maintenance period and wait. We note that this objective function is inconsistent, because the first two summands represent costs of switchovers and buffer load per unit time, while the third term does not.

Example 14.
Another variant of the objective function is obtained by replacing c · α(T) in O(T) = s · λa(1 − ρ)/α + q · EQ(∞) + c · α(T) of (118) with c · M_c, where M_c = α_0 · λa(1 − ρ)/α of (112), bringing all terms to per-unit-time quantities, although this choice of the objective function has its shortcoming, as pointed out in Remark 7.
Thus, the modified version of (118) reads where we set the same parameters as in Figure 30, namely, the cost per switchover s = 1200, the cost per customer in the system q = 1.5, and the cost per customer during the mean maintenance period and wait combined c = 8. We also set λ = 0.01, ξ = 4, p = 0.99, b_1 = 10, and b_2 = 4 in Figure 31.

Conclusions
In this paper, we studied a single server queueing system of GI^X/G/1 type with N-Policy and a single vacation, in notation GI^X/G-NSV/1. Under our assumptions, the vacation length is state dependent. When the queue is exhausted, the server leaves the system for a routine maintenance initially constrained by a deterministic or random number T, which is the time the server is absent from the system. If, during the server's maintenance, new customers enter the system, for each one of them the dispatcher estimates their service times (in line with their needs) and shortens the server's vacation time T accordingly. (Note that the real service times of customers may turn out to be different from those estimated by the dispatcher.) If there are fewer than N waiting customers upon the server's return to the system, he rests until the queue length reaches or exceeds N. The latter case is more likely, because the input is bulk.
There are two key factors in how the system is run. The times of customers' departures determine when the server leaves, namely when the queue is exhausted. An embellished variant is to order the server to leave when the queue drops below some specified threshold, which is especially interesting if the service is in batches. We do not study the system under this policy (which, along with the N-Policy, is known as hysteretic control), but it is worth including in our future work. Thus, the departure times define the beginning of the maintenance period, the service cycles, and the busy periods. A service cycle is the period of time between two successive departure epochs that includes a pure service time, or a service time together with a maintenance period followed by a possible wait of the server. The other key factor is the customers' arrival times (customers come in random batches), which, as mentioned, impact the server's maintenance time.
We began our analysis of the system with the maintenance period, whose policy is unusual and required special treatment using first passage time principles of random walks on a non-rectangular stochastic grid. We managed to obtain explicit formulas for the joint and marginal distributions of the times of the server's return and of the beginning of a new busy cycle, along with the associated contents of the system's buffer at these times. We justified our claims through many explicit examples and special cases. In one of the main assertions, we proved that the times of return and of the beginning of a busy period depend linearly upon the contents of the buffer at these times, respectively, if and only if the input to the system is marked Poisson of rate λ for the underlying support counting process and mean size a of the arriving batches of customers. The proportionality factor is λa, which is also the reciprocal of the mean stationary service cycle. This would be a noteworthy result even for queues of lesser complexity than ours.
To tame our next study, we continued our analysis under the assumption that the input is marked Poisson, first considering the embedded queueing process on departures and obtaining the invariant probability measure and the mean stationary service cycle EC, which turned out to equal (λa)^{−1}. Furthermore, we went through special cases pertaining to the mean queue length, the N-Policy, and the use of the discrete operational calculus that we developed and embellished in our earlier papers. We provided a discussion of numerical examples and graphics.
We then turned to a semi-regenerative analysis of the system to obtain explicit results for the continuous time parameter queue. We found an explicit relation between the PGFs (probability generating functions) of the embedded and continuous time parameter queueing processes in equilibrium. In the final section, we discussed various performance measures of the system (such as the number of switchovers between busy and maintenance periods, the queue length, and the number of impatiently waiting customers during the server's absence), all pertaining to control of the system through the initially set time T. Because the imposed control was time sensitive, the prior semi-regenerative analysis of the system was imperative. We closed the last section with a number of numerical and graphical examples accompanied by detailed discussions.
Overall, we validated our main results through stochastic simulation, dedicating an entire section (Section 4) to this. We believe that our system can be further embellished and still remain analytically tractable. It is novel, and it can be applied to various servicing systems. In particular, it can be used in computer operating systems that can be pre-programmed according to our results to optimize the performance of an underlying system through the choice of the maintenance time T and of the threshold N (especially if a large number of switchovers is undesirable).