We will now prove two theorems on the burst ratio, in systems with and without the dropping function, respectively.
Proof of Theorem 1. Let $t_1, t_2, \ldots$ denote the consecutive arrival times and let $X_k$ denote the queue length just before time $t_k$. The sequence $(X_k)$ constitutes a discrete-time Markov chain. This follows from the memorylessness property of the exponential distribution; namely, no matter how much time the current service has already taken, the remaining service time is exponentially distributed with parameter $\mu$, and the time to the next packet arrival is exponentially distributed with parameter $\lambda$.
Take an arbitrary $k$ and assume that the queue length at time $t_k$ is $n$. Let $G_n$ denote the average number of consecutive losses counted from time $t_k$, under the condition that the packet arriving at time $t_k$ is lost. It must hold that $G_n \ge 1$, because the packet lost at time $t_k$ is included in this count. Moreover, due to the fact that the sequence of queue lengths at arrival times is a Markov chain, the evolution of the system from time $t_k$ depends only on the queue length at time $t_k$ and does not depend on the previous queue lengths. Therefore, this average depends only on $n$ and does not depend on $k$.
In the first part of the proof, we will prove Formulas (9)–(11) for $G_n$. If $n \ge 1$, then we have the following.
Equation (14) can be explained as follows. The number 1 stands for the packet loss at time $t_k$. Let $m$ be the queue length upon the next arrival time, $t_{k+1}$. If $m \ge 1$, then the probability of having the queue length $m$ at time $t_{k+1}$ is given by the first integral in (14). This integral is obtained by conditioning on the duration of the interarrival time and using the Poisson formula for the probability of $n-m$ completed services by time $t_{k+1}$. Then, the packet arriving at time $t_{k+1}$ can be lost with probability $d(m)$, and the average number of consecutive losses from time $t_{k+1}$ is then $G_m$. This explains the first row of (14). The second row corresponds to the case $m = 0$. The probability of having queue length 0 at time $t_{k+1}$ is now expressed by the sum of integrals. Then, the packet arriving at time $t_{k+1}$ can be lost with probability $d(0)$, and the average number of consecutive losses from time $t_{k+1}$ is $G_0$.
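The verbal description above can be summarized in a sketch of the recursion behind (14), written here in assumed notation: $\lambda$ and $\mu$ for the arrival and service rates, $d(\cdot)$ for the dropping function, and $G_m$ for the conditional average run lengths. The first sum corresponds to the first row of (14) and the last term to its second row:

```latex
G_n \;=\; 1
  \;+\; \sum_{m=1}^{n} \left( \int_0^{\infty} \lambda e^{-\lambda t}\, e^{-\mu t}\,
        \frac{(\mu t)^{\,n-m}}{(n-m)!}\, dt \right) d(m)\, G_m
  \;+\; \left( \sum_{i=n}^{\infty} \int_0^{\infty} \lambda e^{-\lambda t}\, e^{-\mu t}\,
        \frac{(\mu t)^{\,i}}{i!}\, dt \right) d(0)\, G_0 .
```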
If $n = 0$, then we obtain the following:
Again, 1 stands for the packet loss at time $t_k$. The queue length just before the next arrival time must be 0; thus, the packet arriving at time $t_{k+1}$ is lost with probability $d(0)$, and the average number of consecutive losses from time $t_{k+1}$ is again $G_0$.
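In the same assumed notation (with $G_0$ for the conditional average run length when the queue is empty and $d(0)$ for the dropping probability at queue length 0), the $n = 0$ case closes on itself as a one-line fixed-point equation, which makes explicit why (9) follows immediately from (15):

```latex
G_0 \;=\; 1 + d(0)\, G_0
\qquad\Longrightarrow\qquad
G_0 \;=\; \frac{1}{1 - d(0)} .
```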
From (15), we immediately obtain (9). Exploiting (9) and notation (8), Equation (14) can be rewritten as follows:
which for $n = 1$ gives the following:
while for $n \ge 2$, the following is obtained.
Finally, from (17) and (18), we obtain (10) and (11), respectively.
In the second part of the proof, we will derive the overall loss probability, L.
We start with computing the transition matrix of the Markov chain of queue lengths at arrival times. Firstly, the probability of the transition from a non-zero state $i$ to a non-zero state $j$, $1 \le j \le i$, is equal to the following.
Indeed, such a transition can happen in two ways: either the first packet is accepted and $i+1-j$ services are completed by the next arrival time, or the first packet is dropped and $i-j$ services are completed by the next arrival time. Secondly, the probability of the transition from state $i$ to state $i+1$ equals the following.
Indeed, the first packet must be accepted and no service can be completed by the next arrival time in this case. Finally, the probability of the transition from any state to state 0 is as follows.
Other transitions are not possible. Summarizing these considerations, we obtain the transition matrix $Q$ in the following form:
The stationary vector $\pi$ for this chain can be obtained in the standard way, by solving the system of equations:
The system of equations in (20) is linearly dependent, as the rows of $Q$ sum to one; thus, the equation in (20) corresponding to the first column of $Q$ can be removed. Then, rearranging the remaining equations and grouping the unknowns on the left side, we obtain an explicit solution of (20) in (12) and (13).
Now, since $\pi_n$ is the stationary probability that the queue length upon a packet arrival is $n$, we can compute the loss probability as follows.
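The construction so far is easy to check numerically. The sketch below is illustrative only and uses assumed names (`lam` and `mu` for the arrival and service rates, `d` for the dropping function); the chain is made finite by taking a dropping function with $d(n) = 1$ from some level $K$ on. It builds the transition matrix row by row from the verbal description above, using the closed form $\int_0^\infty \lambda e^{-\lambda t} e^{-\mu t}(\mu t)^s/s!\, dt = \lambda\mu^s/(\lambda+\mu)^{s+1}$ for the probability of $s$ service completions during one interarrival time, and obtains the stationary vector by a simple fixed-point iteration of $\pi = \pi Q$ rather than by the closed-form solution (12) and (13).

```python
# Numerical sketch of the embedded Markov chain of queue lengths at arrivals.
# Assumed model: Poisson arrivals (rate lam), exponential service (rate mu),
# dropping function d(n); here d(n) = 1 for n >= K, making the chain finite.

def completions_prob(s, lam, mu):
    # P(s services completed during one interarrival time, queue never empty):
    # integral of lam*e^{-lam t} * e^{-mu t}(mu t)^s / s! dt
    return lam * mu**s / (lam + mu)**(s + 1)

def transition_matrix(K, lam, mu, d):
    Q = [[0.0] * (K + 1) for _ in range(K + 1)]
    for i in range(K + 1):
        for j in range(1, K + 1):
            p = 0.0
            if i + 1 - j >= 0:   # packet accepted, i+1-j services completed
                p += (1 - d(i)) * completions_prob(i + 1 - j, lam, mu)
            if i - j >= 0:       # packet dropped, i-j services completed
                p += d(i) * completions_prob(i - j, lam, mu)
            Q[i][j] = p
        Q[i][0] = 1.0 - sum(Q[i][1:])   # remaining mass: queue emptied
    return Q

def stationary(Q, iters=5000):
    n = len(Q)
    pi = [1.0 / n] * n
    for _ in range(iters):       # fixed-point iteration of pi = pi Q
        pi = [sum(pi[i] * Q[i][j] for i in range(n)) for j in range(n)]
    return pi

K, lam, mu = 3, 1.0, 1.0
d = lambda n: 1.0 if n >= K else 0.0   # trivial dropping function (M/M/1/K)
pi = stationary(transition_matrix(K, lam, mu, d))
L = sum(pi[n] * d(n) for n in range(K + 1))   # loss probability, cf. (21)
```

For $\lambda = \mu$, the queue length distribution seen by arrivals in the M/M/1/K queue is uniform on $\{0, \ldots, K\}$, so the computed loss probability should equal $1/(K+1) = 0.25$, which gives a quick sanity check of the matrix construction.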
In the third part of the proof, we will derive the average length of a sequence of consecutive losses, $G$, using the conditional averages $G_n$. Note first that it is not quite trivial to obtain $G$ from $G_n$, because in the definition of $G_n$ it is not assumed that the sequence of losses begins at time $t_k$; it may begin before $t_k$.
To overcome this, consider a sequence of losses that begins at arrival time $t_k$, when the system is in the stationary regime and the queue length at $t_k$ is $j$. Such a sequence can be initiated only if the previous packet was accepted to the buffer. In particular, the previous packet must have arrived at the buffer at time $t_{k-1}$, when the queue length was some $i$; it must have been accepted, which happens with probability $1 - d(i)$; and the queue length must have changed from $i$ to $j$ during the interarrival time. It is easy to see that for $j \ge 1$, such a series of events happens with the following probability:
while for $j = 0$, such a series of events happens with the following probability.
Now, define $R_j$ as the probability that, at an arbitrary arrival time, the queue length is $j$ and a sequence of losses begins at that time. From the considerations of the previous paragraph and (22) and (23), we obtain Formulas (7) and (6) for $j \ge 1$ and $j = 0$, respectively. The probability that a sequence of losses begins at an arbitrary arrival time is then $\sum_j R_j$. Therefore, we can conclude that the average length of a sequence of losses that begins at an arbitrary arrival time is as follows.
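In the notation assumed here ($R_j$ for the probability that an arrival sees queue length $j$ and starts a loss run, $G_j$ for the conditional average run length), the averaging step described above amounts to:

```latex
G \;=\; \frac{\sum_{j} R_j\, G_j}{\sum_{j} R_j} ,
```

that is, a weighted average of the conditional run lengths, with weights proportional to how often a run starts at each queue length; cf. (24).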
Finally, combining (4) with (21) and (24), we arrive at (5), which completes the proof. □
Now we can prove an analog of Theorem 1 but for the system without the dropping function.
Proof of Theorem 2. This theorem can be proven in at least two different ways.
In the first proof, we can use Theorem 1 and apply a trivial dropping function, i.e., $d(n) = 0$ for $n < K$ and $d(n) = 1$ otherwise. Obviously, such a dropping function renders the system equivalent to the system without the dropping function, but with a finite buffer. Using this trivial dropping function, we obtain the following from (6) and (7):
while from (9)–(11), the following is obtained:
and from (21), we have the following.
Formulas (28)–(31) combined with (5) lead to (25), where matrix $A$ in (27) is just a simplified form of matrix (13).
Alternatively, the proof can be conducted without references to Theorem 1, using probabilistic arguments. We have to notice two facts. Firstly, a sequence of losses in the system can occur only during a buffer overflow period. Secondly, the duration of a buffer overflow period is exponentially distributed with parameter $\mu$. This is a consequence of the memorylessness property of the exponential distribution, i.e., no matter when the buffer becomes overflowed, the remaining service time is exponentially distributed. On the other hand, the buffer overflow period is equal to the remaining service time upon the buffer's overflow. Now, consider the beginning of the overflow period, i.e., the arrival time at which the queue length jumps from $K-1$ to $K$. With probability $\lambda/(\lambda+\mu)$, the next arrival will happen before the end of the overflow period. In this case, the new packet is lost and the sequence of losses is extended by 1. What is more, the probability of having yet another arrival before the end of the overflow period is again $\lambda/(\lambda+\mu)$, due to the memorylessness property of the exponential distribution. Therefore, we have in fact a series of Bernoulli experiments, in which the probability of a failure (loss) in a single experiment is $\lambda/(\lambda+\mu)$. Consequently, the average length of a sequence of failures (losses) is equal to $(\lambda+\mu)/\mu$. This, combined with the obvious relation for the loss probability and with (4), gives (25), while matrix $A$ can be easily obtained by using the transition probabilities of the Markov chain of queue lengths at arrival times. □
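The Bernoulli argument above can also be checked by a short simulation. The sketch below is illustrative and uses assumed parameter names (`lam`, `mu`, `K`): it simulates an M/M/1/K queue at arrival epochs, relying on the memorylessness of the service time to resample the remaining service at each arrival, records the lengths of runs of consecutive losses, and compares their average with $(\lambda+\mu)/\mu$.

```python
import random

# Simulation sketch (hypothetical parameters): M/M/1/K queue observed at
# arrival epochs; every arrival that sees a full buffer (length K) is lost.
random.seed(1)
lam, mu, K = 1.0, 1.0, 3
NARR = 200_000

q = 0                 # queue length just before the current arrival
runs, cur = [], 0     # finished loss-run lengths; current run length
for _ in range(NARR):
    if q == K:        # buffer full: the arriving packet is lost
        cur += 1
    else:             # the arriving packet is accepted, ending any loss run
        if cur > 0:
            runs.append(cur)
            cur = 0
        q += 1
    # services completed until the next arrival; by memorylessness the
    # remaining service time can be resampled as a fresh Exp(mu) variable
    t = random.expovariate(lam)        # interarrival time
    while q > 0:
        s = random.expovariate(mu)     # (remaining) service time
        if s > t:
            break
        t -= s
        q -= 1

mean_run = sum(runs) / len(runs)
# Theory: runs are geometric with failure probability lam/(lam+mu), so the
# mean run length is (lam+mu)/mu, equal to 2.0 for lam = mu.
```

The agreement of `mean_run` with $(\lambda+\mu)/\mu$ illustrates that the loss runs are indeed confined to single overflow periods and governed by the Bernoulli mechanism described in the proof.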