Article

Facilitating Numerical Solutions of Inhomogeneous Continuous Time Markov Chains Using Ergodicity Bounds Obtained with Logarithmic Norm Method

1 Institute of Informatics Problems, Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, Vavilova 44-2, 119333 Moscow, Russia
2 Department of Applied Mathematics, Vologda State University, Lenina 15, 160000 Vologda, Russia
3 Moscow Center for Fundamental and Applied Mathematics, Moscow State University, Leninskie Gory 1, 119991 Moscow, Russia; [email protected]
4 Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, 119991 Moscow, Russia
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(1), 42; https://doi.org/10.3390/math9010042
Submission received: 28 November 2020 / Revised: 18 December 2020 / Accepted: 19 December 2020 / Published: 27 December 2020
(This article belongs to the Special Issue Control, Optimization, and Mathematical Modeling of Complex Systems)

Abstract: The problem considered is the computation of the (limiting) time-dependent performance characteristics of one-dimensional continuous-time Markov chains with discrete state space and time-varying intensities. Numerical solution techniques can benefit from methods providing ergodicity bounds, because the latter indicate how to choose the position and the length of the "distant time interval" (in the periodic case) on which the solution has to be computed. They are also helpful whenever state space truncation is required. In this paper one such analytic method, the logarithmic norm method, is reviewed. Its applicability is shown within the queueing theory context with three examples: the classical time-varying $M/M/2$ queue; the time-varying single-server Markovian system with bulk arrivals, queue skipping policy and catastrophes; and the time-varying Markovian bulk-arrival and bulk-service system with state-dependent control. In each case it is shown whether and how the bounds on the rate of convergence can be obtained. Numerical examples are provided.

1. Introduction

The topic of this paper is the analysis of (one-dimensional) inhomogeneous continuous-time Markov chains (CTMC) with discrete state space. The inhomogeneity property means that some or all transition intensities are non-random functions of time and may or may not depend on the state of the chain. Many operations research applications of such mathematical models are known (see, for example, [1,2,3,4] and [Section 5] in [5]), but the motivation of this paper is queueing; thus all the examples considered here are devoted to time-varying queues. A substantial literature on the problem exists, in which various aspects (such as existence of the processes, numerical algorithms, asymptotics and approximations) are analyzed. An attempt to give a systematic classification of the available approaches (based on the papers published up to 2016) is made in [5]; an up-to-date point of view is given in [Sections 1 and 1.2] of [4] (see also [6]).
The specific question addressed in this paper is the computation of the long-run (see, for example, the Introduction of [7]), i.e., (limiting) time-dependent, performance characteristics of a CTMC with time-varying intensities. This question can be considered from different points of view: computation time, accuracy, complexity, storage use, etc. As a result, various solution techniques have been developed, but none of them is a universal tool. One way to improve the efficiency of a solution technique is to supply it with a method for limiting-regime detection (or, in other words, a method providing ergodicity bounds): once the limiting regime is reached, there is no need to continue the computation indefinitely. The main contribution of this paper is the review of one such method (see Section 2) and the presentation of its applicability in two new use-cases, not considered before in the literature (see Section 4 and Section 5). It is worth noting that methods which provide ergodicity bounds can also be helpful whenever a truncation of the countable state space of the chain is required. The method presented in Section 2, whenever applicable, is helpful in this respect as well (see also [8,9]).
The remainder of this section is devoted to a review (by no means exhaustive) of the popular solution techniques for the analysis of Markov chains in time-varying queueing models. Attention is drawn to the ability of a technique to yield limiting time-dependent performance characteristics of a Markov chain with time-varying intensities. For each technique mentioned (computer simulation methods and numerical transform inversion algorithms are not discussed here), we highlight whether any benefit can be gained when the technique is used along with a method providing ergodicity bounds.
In many applied settings the performance analysis is based on the procedure known as the point-wise stationary approximation [10] and its ramifications. According to it, the time-dependent probability vector $x(t)$ at time $t$ is approximated by the steady-state probability vector $y(t)$ obtained by solving $y(t)H(t)=0$ and $y(t)\mathbf{1}=1$, where $H(t)$ is the time-dependent intensity matrix (throughout the paper the vectors denoted by bold letters are regarded as column vectors, $\mathbf{e}_k$ denotes the $k$th unit basis vector, and $\mathbf{1}^T$ is the row vector of 1's, with $T$ denoting the matrix transpose). In its initial version, the approximation breaks down if the instantaneous system load is allowed to exceed 1. In general its quality depends on the values of the transition rates, and for some models (like time-dependent birth-and-death processes) the approach is proved to be asymptotically correct in the limit as the transition intensities increase. Another fruitful set of techniques, which helps one understand the performance of complex queueing systems, is the (conventional and many-server) heavy-traffic approximations (another approximation technique worth mentioning here, especially because of its applicability to non-Markov time-varying queues, is robust optimization; see [4], Section 2). Since scaling is important in heavy-traffic limits, the technique is usually better justified whenever the state space of a chain is, in some intuitive sense, close to continuous (see e.g., [11,12] and no doubt others), and less (or even not at all) justified if the state space is essentially discrete (for example, when formed by the number of customers in the $M_t/M_t/1/N$ system (for fixed $N$) at time $t$). Due to the nature of both classes of techniques mentioned above, they do not benefit from methods providing ergodicity bounds.
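As a side illustration (not part of the original paper), the following Python sketch computes a point-wise stationary approximation for a two-server birth-death chain at a frozen time instant; the truncation level N and the least-squares solve are ad hoc illustrative choices, and the transposed-generator (column-vector) convention used later in the paper is adopted instead of the row-vector form written above.

import numpy as np

def bd_generator_transposed(lam, mu, N):
    # Transposed generator A = Q^T of an M/M/2-type birth-death chain,
    # truncated to states 0..N (illustrative truncation).
    A = np.zeros((N + 1, N + 1))
    for k in range(N):
        A[k + 1, k] += lam              # arrival k -> k+1
        A[k, k] -= lam
    for k in range(1, N + 1):
        rate = mu * min(k, 2)           # two servers
        A[k - 1, k] += rate             # departure k -> k-1
        A[k, k] -= rate
    return A

def pointwise_stationary(lam_t, mu_t, N=50):
    # Solve A(t) y = 0 together with the normalization sum(y) = 1.
    A = bd_generator_transposed(lam_t, mu_t, N)
    M = np.vstack([A, np.ones(N + 1)])
    rhs = np.zeros(N + 2); rhs[-1] = 1.0
    y, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return y

# Intensities of the Section 3 example frozen at t = 0.
print(pointwise_stationary(lam_t=9.0, mu_t=16.0)[:5])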
A very popular set of techniques for calculating performance measures, which stands apart from the two mentioned above, comprises numerical methods for systems of ordinary differential equations (ODEs), namely the Kolmogorov forward equations (for an illustration the reader can refer to, for example, [13]). Due to increasing computer power such methods keep gaining popularity. By introducing approximations these methods can be made more efficient. For example, when only moments of the Markov chain are of interest, one can use closure approximations (since the moment dynamics are, when available, close to the true dynamics of the original process, the benefit of using methods providing ergodicity bounds alongside is clear); see e.g., [14,15,16]. Another method for the computation of transient distributions of Markov chains is uniformization (see [17]). It is numerically stable and, as reported, usually outperforms known differential equation solvers (see [Section 6] in [18]).
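A minimal sketch of the uniformization idea for a time-homogeneous generator (the inhomogeneous and adaptive variants discussed in [17,18,19,20,21] are considerably more involved); the stopping tolerance and the toy generator below are illustrative choices.

import numpy as np

def uniformization(Q, p0, t, tol=1e-10):
    # Transient distribution p(t) = p0 exp(Qt) (row-vector convention,
    # rows of Q sum to zero), computed as
    # p(t) = sum_n e^{-qt} (qt)^n / n!  *  p0 P^n,   with P = I + Q/q.
    q = max(-np.diag(Q)) * 1.01
    P = np.eye(Q.shape[0]) + Q / q
    term = np.array(p0, dtype=float)     # p0 P^n, starting at n = 0
    weight = np.exp(-q * t)              # Poisson(qt) mass at n = 0
    total, acc, n = weight * term, weight, 0
    while 1.0 - acc > tol and n < 100000:
        n += 1
        term = term @ P
        weight *= q * t / n
        acc += weight
        total = total + weight * term
    return total

Q = np.array([[-2.0, 1.5, 0.5],
              [1.0, -1.0, 0.0],
              [0.0, 3.0, -3.0]])
print(uniformization(Q, p0=[1.0, 0.0, 0.0], t=0.7))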
The methods based on uniformization suffer when convergence of the Markov chain is slow: in that case computations involve a large number of matrix-vector products. An ODE technique yields the numerical values of performance measures, but it is complicated by a number of issues, among which we highlight only those related to the topic of this paper. Firstly, there can be infinitely many ODEs in the system of equations. Traditionally this is circumvented by truncating the system, i.e., making the number of equations finite, but there is no general rule of thumb for choosing the truncation threshold. Secondly, the (time-dependent) limiting characteristics of a CTMC are usually considered to be identical to the solution of the system on some distant time interval (see, for example, [17,18,19,20,21,22,23]). This procedure yields the limiting characteristics with any desired accuracy whenever the CTMC is ergodic; yet, in general, it is not suitable for Markov chains with countable (or finite but large) state space. Moreover, it is not clear how to choose the position and the length of the "distant time interval" on which the solution of the system must be found (convergence tests are usually required, which result in additional computations). Thus in practice, without an a priori understanding of when the limiting regime is reached, significant computational effort is required to make sure that the obtained solution is indeed the one required and that, for example, the steady state is not detected prematurely (see [24]). The authors in [20] propose a solution technique equipped with steady-state detection. As is shown, it allows significant computational savings and simultaneously ensures strict error bounding; yet the technique is applicable only when the stationary solution of the Markov chain can be efficiently calculated in advance.
The approaches mentioned in the previous paragraph benefit straightforwardly from methods providing an a priori determination of the point of convergence. Although in general this task is not feasible, certain techniques exist which provide ergodicity bounds for some classes of Markov chains. In the next section we review one such technique, developed by the authors, which is based on the logarithmic norm of linear operators and special transformations of the intensity matrix governing the behaviour of a CTMC. In Section 3, Section 4 and Section 5 it is applied to three use-cases. Section 6 concludes the paper.
In what follows, by $\|\cdot\|$ we denote the $l_1$-norm, i.e., if $\mathbf{x}$ is an $(l+1)$-dimensional column vector then $\|\mathbf{x}\| = \sum_{k=0}^{l}|x_k|$. If $\mathbf{x}$ is a probability vector, then $\|\mathbf{x}\| = 1$. The operator norm is the one induced by the $l_1$-norm on column vectors, i.e., $\|A\| = \sup_{0 \le j \le l} \sum_{i=0}^{l} |a_{ij}|$ for a linear operator (matrix) $A$.

2. Logarithmic Norm Method

Ergodic properties of Markov chains have been the subject of many research papers (see e.g., [25,26]). Yet obtaining practically useful general ergodicity bounds is difficult and remains, to a large extent, an open problem. Below we describe one method, called the "logarithmic norm" method, which is applicable in situations where the discrete state space of the Markov chain cannot be replaced by a continuous one and the transition intensities are such that the chain is either null or weakly ergodic. The method is based on the notion of the logarithmic norm (see e.g., [27,28]) and utilizes the properties of linear systems of differential equations.
Consider an ODE system
$$\frac{d}{dt}\,\mathbf{y}(t) = H(t)\,\mathbf{y}(t), \quad t \ge 0,$$
where the entries of the matrix $H(t) = \left(h_{ij}(t)\right)_{i,j=0}^{\infty}$ are locally integrable on $[0,\infty)$ and $H(t)$ is bounded in the sense that $\|H(t)\|$ is finite for any fixed $t$. Then
$$\frac{d}{dt}\,\|\mathbf{y}(t)\| \le \beta(t)\,\|\mathbf{y}(t)\|,$$
where $\beta(t)$ is the logarithmic norm of $H(t)$, i.e.,
$$\beta(t) = \sup_{i}\Bigl(h_{ii}(t) + \sum_{j \ne i} |h_{ji}(t)|\Bigr).$$
Thus the following upper bound holds:
$$\|\mathbf{y}(t)\| \le e^{\int_0^t \beta(u)\,du}\,\|\mathbf{y}(0)\|.$$
If $H(t)$ has non-negative off-diagonal elements (and arbitrary elements on the diagonal; such a matrix is sometimes called essentially nonnegative in the literature) and all of its column sums are identical, then there exists $\mathbf{y}(0)$ such that equality holds in (4).
The logarithmic norm method is applied in four consecutive steps. Firstly, one has to determine whether the given Markov chain (henceforth always denoted by $X(t)$) is null ergodic or weakly ergodic (a Markov chain is called null ergodic if all its state probabilities $p_i(t) \to 0$ as $t \to \infty$ for any initial condition; a Markov chain is called weakly ergodic if $\|\mathbf{p}^*(t) - \mathbf{p}^{**}(t)\| \to 0$ as $t \to \infty$ for any initial conditions $\mathbf{p}^*(0)$, $\mathbf{p}^{**}(0)$, where the vector $\mathbf{p}(t)$ contains the state probabilities). Secondly, one excludes one "border state" from the Kolmogorov forward equations and thus obtains a new system whose matrix, in general, may have negative off-diagonal terms. The third step is to perform (if possible) a similarity transformation (see (11) and (24)), i.e., to transform the new matrix in such a way that its off-diagonal terms are nonnegative and the column sums differ as little as possible. At the final, fourth step one uses the logarithmic norm to estimate the convergence rate. The key step is the third one. The transformation is made using a sequence of positive numbers (see the sequences $\{\delta_n,\, n \ge 0\}$ below), which usually has to be guessed, does not have any probabilistic meaning and can be considered as an analogue of a Lyapunov function.
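A minimal numerical sketch of steps three and four for a finite matrix (a rough companion to the formulas of this section, not the authors' implementation): the logarithmic norm induced by the $l_1$-norm is the largest of the quantities $h_{jj} + \sum_{i \ne j}|h_{ij}|$ over the columns $j$, and the factor in (4) is obtained by integrating it; the guessed weight sequence $\{\delta_n\}$ is passed in by the caller, and the toy matrix below is invented for illustration.

import numpy as np

def log_norm_l1(H):
    # Logarithmic norm induced by the l1-norm:
    # max over columns j of ( h_jj + sum_{i != j} |h_ij| ).
    M = np.abs(H).copy()
    np.fill_diagonal(M, np.diag(H))
    return M.sum(axis=0).max()

def decay_factor(H_of_t, delta, t, n_grid=2000):
    # exp( int_0^t beta(u) du ) for the transformed matrix D H(u) D^{-1},
    # D = diag(delta); trapezoidal rule, illustrative accuracy only.
    D = np.diag(delta)
    Dinv = np.diag(1.0 / np.asarray(delta, dtype=float))
    u = np.linspace(0.0, t, n_grid)
    beta = np.array([log_norm_l1(D @ H_of_t(ui) @ Dinv) for ui in u])
    h = u[1] - u[0]
    return np.exp(((beta[:-1] + beta[1:]) / 2).sum() * h)

# Toy 3x3 essentially nonnegative matrix and a guessed weight sequence.
H = lambda u: np.array([[-3.0 - np.sin(u), 1.0, 0.0],
                        [1.0, -4.0, 2.0],
                        [0.5, 1.0, -5.0]])
print(decay_factor(H, delta=[1.0, 1.3, 1.7], t=10.0))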

3. Time-Varying M/M/2 System

We start with the well-known time-varying $M/M/2/\infty$ system with two servers and an infinite-capacity queue, in which customers arrive one by one with intensity $\lambda(t)$. The service intensity of each server does not depend on the total number of customers in the queue and is equal to $\mu(t)$. The functions $\lambda(t)$ and $\mu(t)$ are assumed to be non-random, nonnegative and locally integrable on $[0,\infty)$. Let the integer-valued time-dependent random variable $X(t)$ denote the total number of customers in the system at time $t \ge 0$. Then $X(t)$ is a CTMC with state space $\{0, 1, 2, \dots\}$. Its transposed time-dependent intensity matrix (generator) $A(t) = \left(a_{ij}(t)\right)_{i,j=0}^{\infty}$ has the form
$$A(t) = \begin{pmatrix}
-\lambda(t) & \mu(t) & 0 & 0 & \cdots\\
\lambda(t) & -(\lambda(t)+\mu(t)) & 2\mu(t) & 0 & \cdots\\
0 & \lambda(t) & -(\lambda(t)+2\mu(t)) & 2\mu(t) & \cdots\\
0 & 0 & \lambda(t) & -(\lambda(t)+2\mu(t)) & \cdots\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}.$$
For all $t \ge 0$ we represent the distribution of $X(t)$ as a probability vector $\mathbf{p}(t) = \sum_{k=0}^{\infty} P(X(t)=k)\,\mathbf{e}_k$ (as above, $\mathbf{e}_k$ denotes the $k$th unit basis vector). Given any proper initial condition $\mathbf{p}(0)$, the Kolmogorov forward equations for the distribution of $X(t)$ can be written as
$$\frac{d}{dt}\,\mathbf{p}(t) = A(t)\,\mathbf{p}(t).$$
Assume that $X(t)$ is null ergodic. The condition on the intensities $\lambda(t)$ and $\mu(t)$ which guarantees null ergodicity will be derived shortly below (clearly, if the intensities are constants, i.e., $\lambda(t)=\lambda$ and $\mu(t)=\mu$, then the condition is simply $\lambda > 2\mu$; if both are periodic and the smallest common multiple of the periods is $T$, then the condition is $\int_0^T \lambda(u)\,du > 2\int_0^T \mu(u)\,du$). Fix a number $d > 1$ and define the sequence $\{\delta_n,\, n \ge 0\}$ by $\delta_n = d^{-n}$; it is a decreasing sequence of positive numbers. Multiplying (5) on the left by $\Lambda = \mathrm{diag}\left(\delta_0, \delta_1, \dots\right)$, we get
$$\frac{d}{dt}\,\tilde{\mathbf{p}}(t) = \tilde{A}(t)\,\tilde{\mathbf{p}}(t),$$
where $\tilde{\mathbf{p}}(t) = \Lambda \mathbf{p}(t)$ and $\tilde{A}(t) = \Lambda A(t) \Lambda^{-1}$. Denote by $\tilde{\alpha}_k(t)$ the sum of all elements in the $k$th column of $\tilde{A}(t)$. By direct inspection it can be checked that
$$\tilde{\alpha}_0(t) = -\left(1-d^{-1}\right)\lambda(t), \qquad
\tilde{\alpha}_1(t) = -\left(1-d^{-1}\right)\lambda(t) - (1-d)\mu(t), \qquad
\tilde{\alpha}_k(t) = -\left(1-d^{-1}\right)\lambda(t) - 2(1-d)\mu(t) = \beta(t), \quad k \ge 2.$$
Since $\tilde{\alpha}_0(t) \le \beta(t)$ and $\tilde{\alpha}_1(t) \le \beta(t)$, the upper bound follows from (4) applied to (6):
$$\sum_{k=0}^{\infty} d^{-k} p_k(t) \le e^{\int_0^t \beta(u)\,du} \sum_{k=0}^{\infty} d^{-k} p_k(0).$$
If $d$ is chosen such that $d > 1$ and $\int_0^{\infty}\left(\lambda(t) - 2d\mu(t)\right)dt = +\infty$, then from (7) it follows that $p_k(t) \to 0$ as $t \to \infty$ for each $k \ge 0$, and thus $X(t)$ is null ergodic. In such a case it is possible to extract more information from (7). Note that for any fixed $n \ge 0$ it holds that
$$d^{-n} \sum_{i=0}^{n} p_i(t) \le \sum_{k=0}^{n} d^{-k} p_k(t) \le \sum_{k=0}^{\infty} d^{-k} p_k(t).$$
Thus, if $X(0) = N$, i.e., $p_N(0) = 1$, then for any $n \ge 0$ the following upper bound for the conditional probability $P\left(X(t) \le n \mid X(0) = N\right)$, $N \ge 0$, holds:
$$P\left(X(t) \le n \mid X(0) = N\right) \le d^{\,n-N} e^{\int_0^t \beta(u)\,du}.$$
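For readability, here is the intermediate step (added here, using the reconstruction of the two previous displays): since $p_N(0)=1$ implies $\sum_k d^{-k}p_k(0) = d^{-N}$, one has
$$d^{-n}\, P\left(X(t) \le n \mid X(0)=N\right) \le \sum_{k=0}^{\infty} d^{-k} p_k(t) \le e^{\int_0^t \beta(u)\,du}\, d^{-N},$$
which gives the stated bound after multiplication by $d^{\,n}$.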
Now assume that $X(t)$ is weakly ergodic (the corresponding condition on the intensities $\lambda(t)$ and $\mu(t)$ will be derived shortly below). Using the normalization condition $p_0(t) = 1 - \sum_{i \ge 1} p_i(t)$, it can be checked that the system (5) can be rewritten as follows:
$$\frac{d}{dt}\,\mathbf{z}(t) = B(t)\,\mathbf{z}(t) + \mathbf{f}(t),$$
where the matrix $B(t)$ with elements $b_{ij}(t) = a_{ij}(t) - a_{i0}(t)$ has no probabilistic meaning and the vectors $\mathbf{f}(t)$ and $\mathbf{z}(t)$ are
$$\mathbf{f}(t) = \left(\lambda(t), 0, 0, \dots\right)^T, \qquad \mathbf{z}(t) = \left(p_1(t), p_2(t), \dots\right)^T.$$
Let $\mathbf{z}^*(t)$ and $\mathbf{z}^{**}(t)$ be two solutions of (9) corresponding to two different initial conditions $\mathbf{z}^*(0)$ and $\mathbf{z}^{**}(0)$. Then for the vector $\mathbf{y}(t) = \mathbf{z}^*(t) - \mathbf{z}^{**}(t) = \left(y_1(t), y_2(t), \dots\right)^T$, whose elements may be of arbitrary sign, we have the system
$$\frac{d}{dt}\,\mathbf{y}(t) = B(t)\,\mathbf{y}(t).$$
The matrix $B(t)$ in (10) may have negative off-diagonal elements. But it is straightforward to see that the similarity transformation $T B(t) T^{-1} = B^*(t)$, where $T$ is the upper triangular matrix of the form
$$T = \begin{pmatrix}
1 & 1 & 1 & \cdots\\
0 & 1 & 1 & \cdots\\
0 & 0 & 1 & \cdots\\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix},$$
gives the matrix $B^*(t)$:
$$B^*(t) = \begin{pmatrix}
-(\lambda(t)+\mu(t)) & \mu(t) & 0 & 0 & \cdots\\
\lambda(t) & -(\lambda(t)+2\mu(t)) & 2\mu(t) & 0 & \cdots\\
0 & \lambda(t) & -(\lambda(t)+2\mu(t)) & 2\mu(t) & \cdots\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix},$$
whose off-diagonal elements are always nonnegative. Let $\mathbf{u}(t) = T\mathbf{y}(t) = \left(u_1(t), u_2(t), \dots\right)^T$. Then, multiplying both parts of (10) from the left by $T$, we get
$$\frac{d}{dt}\,\mathbf{u}(t) = B^*(t)\,\mathbf{u}(t).$$
Fix a number $d > 1$ and define the increasing sequence of positive numbers $\{\delta_n\}$ by $\delta_n = d^{\,n-1}$. Let $D = \mathrm{diag}\left(\delta_1, \delta_2, \dots\right)$. Putting $\mathbf{w}(t) = D\mathbf{u}(t)$ in (12), we obtain the system of equations
$$\frac{d}{dt}\,\mathbf{w}(t) = B^{**}(t)\,\mathbf{w}(t),$$
where the matrix $B^{**}(t) = D B^*(t) D^{-1}$ has nonnegative off-diagonal elements. Denote by $\alpha_k(t)$ the sum of all elements in the $k$th column of $B^{**}(t)$, i.e.,
$$\alpha_1(t) = -\bigl(\mu(t) - (d-1)\lambda(t)\bigr), \qquad
\alpha_2(t) = -\left(1-d^{-1}\right)\mu(t) - \bigl(\mu(t) - (d-1)\lambda(t)\bigr), \qquad
\alpha_k(t) = -\left(1-d^{-1}\right)\bigl(2\mu(t) - d\lambda(t)\bigr) = \beta(t), \quad k \ge 3.$$
Note that if $1 < d \le 2$ then $\alpha_1(t) \le \beta(t)$ and $\alpha_2(t) \le \beta(t)$. Now, recalling that $\mathbf{w}(t) = D\mathbf{u}(t) = DT\mathbf{y}(t)$, the upper bound for $\mathbf{y}(t) = \mathbf{z}^*(t) - \mathbf{z}^{**}(t)$ in the weighted norm, due to (4), is (from (14) the purpose of the similarity transformation $D B^*(t) D^{-1}$ can be recognized: it is to make the decay rate in the exponent as large as possible)
$$\|DT\mathbf{y}(t)\| \le e^{\int_0^t \beta(u)\,du}\,\|DT\mathbf{y}(0)\|.$$
The upper bound for $\|\mathbf{p}^*(t) - \mathbf{p}^{**}(t)\|$ is obtained from (14). Firstly, notice that $\|\mathbf{p}^*(t) - \mathbf{p}^{**}(t)\| \le 2\|\mathbf{y}(t)\|$, since $\mathbf{y}(t)$ is the solution of (10), the system with the excluded state $0$. Secondly, it can be proved (this is shown, for example, in [Equation (18)] of [29]) that $\|\mathbf{x}\| \le 2\|DT\mathbf{x}\|$ for any vector $\mathbf{x}$. Hence
$$\|\mathbf{p}^*(t) - \mathbf{p}^{**}(t)\| \le 4\, e^{\int_0^t \beta(u)\,du}\,\|DT\mathbf{y}(0)\|.$$
If $d$ is chosen such that $d > 1$ and $\int_0^{\infty}\left(2\mu(t) - d\lambda(t)\right)dt = +\infty$, then from (15) it follows that $\|\mathbf{p}^*(t) - \mathbf{p}^{**}(t)\| \to 0$ as $t \to \infty$ for any initial conditions $\mathbf{p}^*(0)$ and $\mathbf{p}^{**}(0)$, i.e., $X(t)$ is weakly ergodic. Note that it is sufficient to choose $d \in (1,2]$: if the integral diverges for some $d > 2$, it also diverges for $d = 2$, and this is sufficient for (14) to hold.
Sometimes it is also possible to obtain bounds similar to (15) for other characteristics of $X(t)$. For example, denote by $E(t,k)$ the conditional mean number of customers in the system at time $t$, given that initially there were $k$ customers in the system, i.e., $E(t,k) = \sum_{n \ge 1} n P\left(X(t) = n \mid X(0) = k\right)$. Then, using [Equation (22)] of [29], it can be shown that
$$|E(t,k) - E(t,0)| \le \frac{4\left(1-d^{\,k}\right)}{W(1-d)}\, e^{\int_0^t \beta(u)\,du}, \quad k \ge 1, \qquad W = \inf_{n \ge 1} \frac{d^{\,n}}{n+1}.$$
The results obtained above for both the null ergodic and the weakly ergodic cases can be put together in a single theorem.
Theorem 1.
Let there exist a positive $d \ne 1$ such that $\int_0^{\infty}\left((1-d)\lambda(t) + 2\left(1-d^{-1}\right)\mu(t)\right)dt = +\infty$. Then $X(t)$ is null (weakly) ergodic if $d < 1$ ($d > 1$) and the ergodicity bounds (7) and (15) hold.
Whenever the intensities $\lambda(t)$ and $\mu(t)$ are constants or periodic functions, stronger results can be obtained.
Corollary 1.
If in Theorem 1 the intensities $\lambda(t)$ and $\mu(t)$ are constants or 1-periodic (i.e., $\lambda(t)$ and $\mu(t)$ are periodic functions and the length of their periods is equal to one), then $X(t)$ is exponentially null (weakly) ergodic if $d < 1$ ($d > 1$) and there exist $R > 0$ and $a > 0$ such that $e^{\int_s^t \beta(u)\,du} \le R e^{-a(t-s)}$ for $0 \le s \le t$.
We now consider a numerical example. Let $\lambda(t) = 9(1+\sin 2\pi t)$ and $\mu(t) = 8(1+\cos 2\pi t)$. It is straightforward to check from Theorem 1 that if $d = \tfrac43$ then $X(t)$ is weakly ergodic (indeed, with $d=\tfrac43$ the integrand in Theorem 1 equals $-\tfrac13\lambda(t)+\tfrac12\mu(t)$, whose time average is $-3+4=1>0$). Then the ergodicity bounds follow from (15) and (16):
$$\|\mathbf{p}^*(t) - \mathbf{p}^{**}(t)\| \le 72\, e^{-t}\, \left\|DT\left(\mathbf{p}^*(0) - \mathbf{p}^{**}(0)\right)\right\|,$$
$$|E(t,k) - E(t,0)| \le 162 \left(\tfrac43\right)^{k} e^{-t}, \quad k > 0.$$
Figure 1 shows the graph of the probability $p_0(t)$ as $t$ increases. It can be seen that for any initial condition $\mathbf{p}(0)$ there exists one periodic function of $t$, say $\pi_0(t)$ (i.e., $\pi_0(t) = \pi_0(t+T)$, where $T = 1$ is the smallest common multiple of the periods of $\lambda(t)$ and $\mu(t)$), such that $\lim_{t\to\infty}\left(p_0(t) - \pi_0(t)\right) = 0$. Figure 2 shows the detailed behaviour of $\pi_0(t)$. Now consider (17). If $t \ge 37$ then the right-hand side of (17) does not exceed $10^{-3}$, i.e., starting from the instant $t = 37 = t^*$ the system "forgets" its initial state and the distribution of $X(t)$ for $t > t^*$ can be regarded as limiting. The error (in $l_1$-norm) which is thus made is not greater than $10^{-3}$. Moreover, since the limiting distribution of $X(t)$ is periodic, it is sufficient to solve the system of ODEs numerically only on the interval $[0, t^*+T]$; the distribution of $X(t)$ on the interval $[t^*, t^*+T]$ is the limiting probability distribution of $X(t)$ (with error not greater than $10^{-3}$ in $l_1$-norm). Note that the system of ODEs contains an infinite number of equations; in order to solve it numerically one has to truncate it, and this truncation was performed according to the method in [30]. The upper bound on the rate of convergence of the conditional mean $E(t,k)$ is given in (18). If $t \ge t^*$ then the right-hand side does not exceed $10^{-2}$, i.e., starting from $t = t^*$ the system "forgets" its initial state and the value of $E(t,k)$ can be regarded as the limiting value of the conditional mean number of customers, with error not greater than $10^{-2}$. The rate of convergence of $E(t,k)$ and the behaviour of its limiting value are shown in Figure 3 and Figure 4. Note that the obtained upper bounds are not tight: the system enters the periodic limiting regime before the instant $t = t^*$.
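A sketch of how such curves can be reproduced (this is not the authors' code): the forward equations are truncated at an illustrative level N and integrated with a standard stiff solver; the paper itself selects the truncation via the method of [30], so N, the solver and the tolerances below are ad hoc choices.

import numpy as np
from scipy.integrate import solve_ivp

N = 200                                           # illustrative truncation level
lam = lambda t: 9.0 * (1.0 + np.sin(2 * np.pi * t))
mu  = lambda t: 8.0 * (1.0 + np.cos(2 * np.pi * t))
k_idx = np.arange(N + 1)

def rhs(t, p):
    # Truncated forward equations dp/dt = A(t) p for the M_t/M_t/2 example.
    serv = mu(t) * np.minimum(k_idx, 2)           # service rate in state k
    arr = lam(t) * (k_idx < N)                    # arrivals blocked at the boundary
    dp = -(arr + serv) * p
    dp[1:] += arr[:-1] * p[:-1]                   # inflow from k-1 (arrival)
    dp[:-1] += serv[1:] * p[1:]                   # inflow from k+1 (departure)
    return dp

p0 = np.zeros(N + 1); p0[0] = 1.0                 # start from the empty system
sol = solve_ivp(rhs, (0.0, 38.0), p0, method="BDF", rtol=1e-8, atol=1e-10,
                t_eval=np.linspace(37.0, 38.0, 101))
print(sol.y[0])                                   # p_0(t) over one period past t* = 37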

4. Time-Varying Single-Server Markovian System with Bulk Arrivals, Queue Skipping Policy and Catastrophes

Consider the time-varying $M/M/1$-type system with intensities that are periodic functions of time and the queue skipping policy of [31] (see also [32]). Customers arrive at the system in batches according to an inhomogeneous Poisson process with intensity $\lambda(t)$. The size of an arriving batch becomes known upon its arrival and is a random variable with a given probability distribution $\{b_n,\, n \ge 1\}$ having finite mean $\bar b = \sum_{k=1}^{\infty} B_k$, where $B_k = \sum_{n=k}^{\infty} b_n$. The queue skipping policy implies that whenever a batch arrives its size, say $\hat B$, is compared with the total number of customers remaining in the system, say $\tilde B$. If $\hat B > \tilde B$, then all customers currently in the system are instantly removed from it, the whole batch $\hat B$ is placed in the queue and one customer from it enters the server. If $\hat B \le \tilde B$, the new batch leaves the system without having any effect on it. Whenever the server becomes free, the first customer from the queue (if there is any) enters the server and is served according to an exponential distribution with intensity $\mu(t)$. Finally, an additional inhomogeneous Poisson flow of negative customers with intensity $\gamma(t)$ arrives at the system. Each negative arrival results in the removal of all customers present in the system at the time of arrival; the negative customer itself then leaves the system. Since $\gamma(t)$ depends on $t$, it can happen that the effect of negative arrivals fades away too fast as $t \to \infty$ (for example, if $\gamma(t) = (1+t)^{-n}$, $n > 1$); such cases are excluded from consideration.
Let $X(t)$ be the total number of customers in the system at time $t$. From the system description it follows that $X(t)$ is a CTMC with state space $\{0, 1, 2, \dots, b^*\}$, where $b^*$ is the maximum possible batch size, i.e., $b^* = \sup\{n \ge 1:\ b_n > 0\}$. Thus if the batch size distribution has infinite support then the state space is countable, otherwise it is finite.
It is straightforward to see that the transposed time-dependent generator $A(t) = \left(a_{ij}(t)\right)_{i,j=0}^{\infty}$ of $X(t)$ has the form
$$A(t) = \begin{pmatrix}
-\lambda(t) & \mu(t)+\gamma(t) & \gamma(t) & \gamma(t) & \cdots\\
\lambda(t) b_1 & -\left(\lambda(t)B_2 + \mu(t) + \gamma(t)\right) & \mu(t) & 0 & \cdots\\
\lambda(t) b_2 & \lambda(t) b_2 & -\left(\lambda(t)B_3 + \mu(t) + \gamma(t)\right) & \mu(t) & \cdots\\
\lambda(t) b_3 & \lambda(t) b_3 & \lambda(t) b_3 & -\left(\lambda(t)B_4 + \mu(t) + \gamma(t)\right) & \cdots\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}.$$
We represent the distribution of $X(t)$ as a probability vector $\mathbf{p}(t) = \sum_{k=0}^{b^*} P(X(t)=k)\,\mathbf{e}_k$ for all $t \ge 0$. Given a proper $\mathbf{p}(0)$, the probabilistic dynamics of $X(t)$ is described by the Kolmogorov forward equations $\frac{d}{dt}\mathbf{p}(t) = A(t)\mathbf{p}(t)$, which can be rewritten in the form
$$\frac{d}{dt}\,\mathbf{p}(t) = A^*(t)\,\mathbf{p}(t) + \mathbf{g}(t), \quad t \ge 0,$$
where $\mathbf{g}(t) = \left(\gamma(t), 0, 0, \dots\right)^T$ and $A^*(t)$ is the matrix with entries $a^*_{ij}(t)$ equal to
$$a^*_{ij}(t) = \begin{cases}
a_{0j}(t) - \gamma(t), & \text{if } i = 0,\\
a_{ij}(t), & \text{otherwise}.
\end{cases}$$
Due to the restrictions imposed on $\gamma(t)$, we have $\int_0^{\infty}\gamma(t)\,dt = \infty$. Thus $X(t)$ cannot be null ergodic, irrespective of the values of $\lambda(t)$ and $\mu(t)$.
Theorem 2.
Assume that the catastrophe intensity $\gamma(t)$ is such that $\int_0^{\infty}\gamma(t)\,dt = \infty$. Then the Markov chain $X(t)$ is weakly ergodic and for any two initial conditions $\mathbf{p}^*(0)$ and $\mathbf{p}^{**}(0)$ it holds that
$$\|\mathbf{p}^*(t) - \mathbf{p}^{**}(t)\| \le e^{-\int_0^t \gamma(u)\,du}\,\|\mathbf{p}^*(0) - \mathbf{p}^{**}(0)\| \le 2\, e^{-\int_0^t \gamma(u)\,du}, \quad t \ge 0.$$
Proof. 
It is straightforward to check that the logarithmic norm (see (3)) of the operator $A^*(t)$ is equal to $-\gamma(t)$. Denote now by $U^*(t,s)$ the Cauchy operator of Equation (19). Then the statement of the theorem follows from the inequalities $\|U^*(t,s)\| \le e^{-\int_s^t \gamma(u)\,du}$ and
$$\|\mathbf{p}^*(t) - \mathbf{p}^{**}(t)\| \le \|U^*(t,0)\|\,\|\mathbf{p}^*(0) - \mathbf{p}^{**}(0)\|.$$
 □
Even though (21) is a valid ergodicity bound for $X(t)$, it is of little help whenever the state space of $X(t)$ is countable and one needs to perform the numerical solution of (5). This is due to the fact that the bound (21) is in the uniform operator topology, which does not allow one to use the analytic frameworks (for example, [29]) for finding proper truncations of an infinite ODE system. For the latter task, ergodicity bounds for $X(t)$ in stronger (than $l_1$), weighted norms are required. It can be said that with such bounds a weight is assigned to each initial state, and thus a truncation procedure becomes sensitive to the number of states. Below (in Theorem 3) we obtain such a bound under the additional assumption that the batch size distribution $\{b_n,\, n \ge 1\}$ is harmonic new better than used in expectation, i.e., $\sum_{j=k}^{\infty} B_{j+1} \le \bar b\left(1 - \bar b^{-1}\right)^{k}$ for all $k \ge 0$ (for the definition used see [33]; an appropriate test for monotone functions can be found in [Proposition 1] of [34]. Although Theorem 2 holds for any distribution $\{b_n,\, n \ge 1\}$, this assumption is essential for Theorem 3. For distributions with tails heavier than the geometric distribution we were unable to find conditions which guarantee the existence of the limiting regime of the queue-size process, even for periodic intensities).
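A quick numerical check of this condition, in the form reconstructed here, for the geometric batch-size distribution used in the example at the end of this section ($b_k = \frac13(\frac23)^{k-1}$, so $\bar b = 3$); the truncation of the infinite sums is an illustrative choice, and for this distribution the inequality in fact holds with (near) equality.

import numpy as np

M = 400
bs = np.array([(1/3) * (2/3)**k for k in range(M)])   # bs[i] = b_{i+1}
Bs = bs[::-1].cumsum()[::-1]                          # Bs[i] = B_{i+1} = sum_{n >= i+1} b_n
b_bar = Bs.sum()                                      # approx. 3
for k in range(6):
    lhs = Bs[k:].sum()                                # sum_{j >= k} B_{j+1}
    rhs = b_bar * (1 - 1 / b_bar)**k
    print(k, round(lhs, 6), round(rhs, 6), lhs <= rhs + 1e-9)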
Using the normalization condition $p_0(t) = 1 - \sum_{i \ge 1} p_i(t)$, the forward Kolmogorov system $\frac{d}{dt}\mathbf{p}(t) = A(t)\mathbf{p}(t)$ can be rewritten as
$$\frac{d}{dt}\,\mathbf{z}(t) = A^{**}(t)\,\mathbf{z}(t) + \mathbf{f}(t), \quad t \ge 0,$$
where
$$\mathbf{f}(t) = \left(\lambda(t) b_1, \lambda(t) b_2, \lambda(t) b_3, \lambda(t) b_4, \dots\right)^T$$
and
$$A^{**}(t) = \begin{pmatrix}
-\left(\lambda(t)+\mu(t)+\gamma(t)\right) & \mu(t)-\lambda(t) b_1 & -\lambda(t) b_1 & -\lambda(t) b_1 & \cdots\\
0 & -\left(\lambda(t)B_2+\mu(t)+\gamma(t)\right) & \mu(t)-\lambda(t) b_2 & -\lambda(t) b_2 & \cdots\\
0 & 0 & -\left(\lambda(t)B_3+\mu(t)+\gamma(t)\right) & \mu(t)-\lambda(t) b_3 & \cdots\\
0 & 0 & 0 & -\left(\lambda(t)B_4+\mu(t)+\gamma(t)\right) & \cdots\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}.$$
Fix $d \in \left(1,\, 1+(\bar b - 1)^{-1}\right]$ and define the increasing sequence of positive numbers $\{\delta_n\}$ by $\delta_n = d^{\,n-1}$. Then, instead of the matrix $B^{**}(t)$ in (13), we have the matrix $\tilde A(t) = \left(\tilde a_{ij}(t)\right)$ with the following structure:
$$\tilde A(t) = \begin{pmatrix}
-\left(\lambda(t)+\mu(t)+\gamma(t)\right) & \frac{1}{d}\,\mu(t) & 0 & 0 & \cdots\\
0 & -\left(\lambda(t)B_2+\mu(t)+\gamma(t)\right) & \frac{1}{d}\,\mu(t) & 0 & \cdots\\
0 & 0 & -\left(\lambda(t)B_3+\mu(t)+\gamma(t)\right) & \frac{1}{d}\,\mu(t) & \cdots\\
0 & 0 & 0 & -\left(\lambda(t)B_4+\mu(t)+\gamma(t)\right) & \cdots\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}.$$
Since the logarithmic norm (see (3)) of $\tilde A(t)$ is equal to
$$\beta^*(t) = \sup_i \Bigl(\tilde a_{ii}(t) + \sum_{j \ne i} \tilde a_{ji}(t)\Bigr)
= -\inf_i \Bigl(\gamma(t) + \bigl(1 - \tfrac{1}{d}\bigr)\mu(t) + \lambda(t) B_i\Bigr)
= -\gamma(t) - \bigl(1 - \tfrac{1}{d}\bigr)\mu(t),$$
from (4) we get:
$$\|\mathbf{z}^*(t) - \mathbf{z}^{**}(t)\|_{1D} \le e^{-\int_0^t \left(\gamma(u) + \left(1-d^{-1}\right)\mu(u)\right) du}\,\|\mathbf{z}^*(0) - \mathbf{z}^{**}(0)\|_{1D}.$$
Arguments similar to those used to establish Theorem 1 lead to the following ergodicity bounds for $\|\mathbf{p}^*(t) - \mathbf{p}^{**}(t)\|$ and the conditional mean $E(t,k)$:
$$\|\mathbf{p}^*(t) - \mathbf{p}^{**}(t)\| \le 4\, e^{-\int_0^t \left(\gamma(u) + \left(1-d^{-1}\right)\mu(u)\right) du}\,\|\mathbf{z}^*(0) - \mathbf{z}^{**}(0)\|_{1D},$$
$$|E(t,k) - E(t,0)| \le \frac{1 + d^{\,k-1}}{W}\, e^{-\int_0^t \left(\gamma(u) + \left(1-d^{-1}\right)\mu(u)\right) du}, \quad k \ge 1, \qquad W = \inf_{n \ge 1} \frac{d^{\,n}}{n+1}.$$
These results can be put together in a single theorem.
Theorem 3.
Assume that the distribution $\{b_n,\, n \ge 1\}$ with finite mean $\bar b$ is harmonic new better than used in expectation. If $\int_0^{\infty}\left(\gamma(t) + \left(1-d^{-1}\right)\mu(t)\right)dt = +\infty$ for some $d \in \left(1,\, 1+(\bar b - 1)^{-1}\right]$, then the Markov chain $X(t)$ is weakly ergodic and the ergodicity bound (26) holds.
We close this section with an example showing the dependence on $t$ of the same two quantities, $p_0(t)$ and $E(t,k)$, considered in Section 3. Assume here that $b_k = \frac13\left(\frac23\right)^{k-1}$, $\lambda(t) = 9(1+\sin 2\pi t)$, $\mu(t) = 8(1+\cos 2\pi t)$ and $\gamma(t) = 1$, i.e., the catastrophe intensity is constant and the mean size $\bar b$ of an arriving batch is equal to $3$. It can be checked that $d = \tfrac32$ satisfies the conditions of Theorem 3. Then from (26) and (27) we get the upper bounds
$$\|\mathbf{p}^*(t) - \mathbf{p}^{**}(t)\| \le 4\, e^{-\frac53 t}\,\|\mathbf{z}^*(0) - \mathbf{z}^{**}(0)\|_{1D},$$
$$|E(t,k) - E(t,0)| \le \left(1 + \left(\tfrac32\right)^{k-1}\right)\tfrac98\, e^{-\frac53 t}, \quad k \ge 0.$$
Figure 5 depicts how $p_0(t)$ behaves as $t$ increases, and Figure 6 shows its limiting value. If $t \ge 60$ then the right-hand side of (28) does not exceed $3\cdot 10^{-2}$, i.e., starting from the instant $t = 60 = t^*$ the system "forgets" its initial state and the distribution of $X(t)$ for $t > t^*$ can be regarded as limiting. Moreover, since the limiting distribution of $X(t)$ is periodic, it is sufficient to solve the system of ODEs numerically only on the interval $[0, t^*+T]$, where $T$ is the smallest common multiple of the periods of $\lambda(t)$ and $\mu(t)$, i.e., $T = 1$ (it must be noticed that, since $b_k > 0$ for all $k$, the system of ODEs contains an infinite number of equations; in order to solve it numerically one has to truncate it, and we perform this truncation according to the method in [30]). The probability distribution of $X(t)$ on the interval $[t^*, t^*+T]$ is the estimate (with error not greater than $3\cdot 10^{-2}$ in $l_1$-norm) of the limiting probability distribution of $X(t)$. The upper bound on the rate of convergence of the conditional mean number of customers in the system $E(t,k)$ is given in (29). If $t \ge t^*$ then the right-hand side does not exceed $0.3$, i.e., starting from the instant $t = t^*$ the system "forgets" its initial state and the value of $E(t,k)$ can be regarded as the limiting value of the mean number of customers, with error not greater than $0.3$. The rate of convergence of $E(t,k)$ and the behaviour of its limiting value can be seen in Figure 7 and Figure 8. As in the previous numerical example, the obtained upper bounds are not tight: the system enters the periodic limiting regime before the instant $t = t^*$.
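An illustrative construction of a truncated transposed generator A(t) for this model, following the displayed matrix and using the example's data (the truncation level is an ad hoc choice); since the column sums of a transposed generator must vanish, printing their maximal absolute value gives a quick consistency check.

import numpy as np

def A_queue_skipping(t, N=60):
    lam = 9.0 * (1.0 + np.sin(2 * np.pi * t))
    mu  = 8.0 * (1.0 + np.cos(2 * np.pi * t))
    gam = 1.0
    b = np.array([(1/3) * (2/3)**k for k in range(N + 1)])   # b_1, ..., b_{N+1}
    B = b[::-1].cumsum()[::-1]                               # B_1, ..., B_{N+1}
    A = np.zeros((N + 1, N + 1))
    for j in range(N + 1):
        for i in range(j + 1, N + 1):
            A[i, j] += lam * b[i - 1]        # a batch of size i (> j) replaces the content
        if j > 0:
            A[j - 1, j] += mu                # service completion
            A[0, j] += gam                   # catastrophe
        A[j, j] -= lam * B[j] + (mu + gam if j > 0 else 0.0)
    return A

A = A_queue_skipping(0.3)
print(np.abs(A.sum(axis=0)).max())           # ~0 up to the tail cut off by truncation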

5. Time-Varying Markovian Bulk-Arrival and Bulk-Service System with State-Dependent Control

In the recent paper [35] the authors considered the Markovian bulk-arrival and bulk-service system with general state-dependent control (see also [36,37,38,39]). The total number $X(t)$ of customers in that system at time $t$ constitutes a CTMC with state space $\{0, 1, 2, \dots\}$. Its generator $Q(t) = \left(q_{ij}(t)\right)_{i,j=0}^{\infty}$ has quite a specific structure:
$$q_{ij} = \begin{cases}
h_{ij}, & \text{if } 0 \le i \le k-1,\ j \ge 0,\\
b_{j-i+k}, & \text{if } i \ge k,\ j \ge i-k,\\
0, & \text{otherwise},
\end{cases}$$
where $k \ge 1$ is a fixed integer. For further explanations and the motivation behind such a structure of $Q(t)$ we refer the reader to [Section 1] in [35]. The purpose of this section is to show that for at least one particular case of this system, even when the intensities are time-dependent, one can obtain upper bounds for the rate of convergence using the method based on the logarithmic norm. Specifically, we take the example from Section 7 of [35] (there the entries of the intensity matrix $Q(t)$ are $h_{i,i-1} = \mu$, $h_{i,i+1} = \lambda$, $h_{i,i} = -(\lambda+\mu)$, $b_0 = a$, $b_{k+1} = b$, $b_k = -(a+b)$ and $k = 3$), with the exception that the transition intensities are time-dependent, i.e., $b = \lambda(t)$ and $a = \mu(t)$, both nonnegative and locally integrable on $[0,\infty)$. Then the transposed generator $A(t) = \left(a_{ij}(t)\right)_{i,j=0}^{\infty} = Q^T(t)$ of $X(t)$ has the form
$$A(t) = \begin{pmatrix}
-\lambda(t) & \mu(t) & 0 & \mu(t) & 0 & 0 & \cdots\\
\lambda(t) & -(\lambda(t)+\mu(t)) & \mu(t) & 0 & \mu(t) & 0 & \cdots\\
0 & \lambda(t) & -(\lambda(t)+\mu(t)) & 0 & 0 & \mu(t) & \cdots\\
0 & 0 & \lambda(t) & -(\lambda(t)+\mu(t)) & 0 & 0 & \cdots\\
0 & 0 & 0 & \lambda(t) & -(\lambda(t)+\mu(t)) & 0 & \cdots\\
0 & 0 & 0 & 0 & \lambda(t) & -(\lambda(t)+\mu(t)) & \cdots\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}.$$
Denote the distribution of $X(t)$ by $\mathbf{p}(t)$, i.e., $\mathbf{p}(t) = \left(p_0(t), p_1(t), \dots\right)^T = \sum_{k=0}^{\infty} P(X(t)=k)\,\mathbf{e}_k$ (as above, $\mathbf{e}_k$ denotes the $k$th unit basis vector). The ergodicity bound for $X(t)$ in the null ergodic case is given below in Theorem 4.
Theorem 4.
If $\int_0^{\infty}\left(\lambda(t)(1-\sigma) + \mu(t)\left(1-\sigma^{-3}\right)\right)dt = +\infty$ for some $\sigma \in (0,1)$, then the Markov chain $X(t)$ is null ergodic,
$$\sum_{i=0}^{\infty} \sigma^i p_i(t) \le e^{-\int_0^t \left(\lambda(u)+\mu(u)-\sigma\lambda(u)-\sigma^{-3}\mu(u)\right)du}\,\sum_{i=0}^{\infty} \sigma^i p_i(0), \quad t \ge 0,$$
and for any $n \ge 0$ and $N \ge 0$ the following inequality holds:
$$P\left(X(t) > n \mid X(0) = N\right) \ge 1 - \sigma^{\,N-n}\, e^{-\int_0^t \left(\lambda(u)+\mu(u)-\sigma\lambda(u)-\sigma^{-3}\mu(u)\right)du}.$$
Proof. 
Fix $\sigma > 0$ and define the decreasing sequence of positive numbers $\{\delta_n,\, n \ge 0\}$ by $\delta_n = \sigma^n$. Put $\tilde{\mathbf{p}}(t) = \Lambda\mathbf{p}(t)$, where $\Lambda = \mathrm{diag}\left(\delta_0, \delta_1, \dots\right)$. Then we have (6). Denote by $\tilde\alpha_k(t)$ the sum of all elements in the $k$th column of $\tilde A(t)$, i.e.,
$$\tilde\alpha_0(t) = -(1-\sigma)\lambda(t), \qquad
\tilde\alpha_k(t) = -(1-\sigma)\lambda(t) + \left(\sigma^{-1}-1\right)\mu(t), \quad k = 1, 2, \qquad
\tilde\alpha_k(t) = -\left(\lambda(t)+\mu(t)\right) + \sigma\lambda(t) + \sigma^{-3}\mu(t) = \beta(t), \quad k \ge 3.$$
If $0 < \sigma < 1$, then $\tilde\alpha_0(t) \le \beta(t)$, $\tilde\alpha_1(t) \le \beta(t)$ and $\tilde\alpha_2(t) \le \beta(t)$, and thus (32) and (33) follow from (4) and (8), respectively. □
The ergodicity bound in the weakly ergodic case, stated below in Theorem 5, is obtained by analogy with Theorem 1. Define an increasing sequence of positive numbers $\{\delta_n,\, n \ge 1\}$. Then the matrix $B^{**}(t)$, built from the matrix $A(t)$ in the same way as in Section 3, has the form
$$B^{**}(t) = \begin{pmatrix}
-(\lambda(t)+\mu(t)) & \mu(t)\frac{\delta_1}{\delta_2} & \mu(t)\frac{\delta_1}{\delta_3} & \mu(t)\frac{\delta_1}{\delta_4} & 0 & 0 & \cdots\\
\lambda(t)\frac{\delta_2}{\delta_1} & -(\lambda(t)+\mu(t)) & 0 & 0 & \mu(t)\frac{\delta_2}{\delta_5} & 0 & \cdots\\
0 & \lambda(t)\frac{\delta_3}{\delta_2} & -(\lambda(t)+\mu(t)) & 0 & 0 & \mu(t)\frac{\delta_3}{\delta_6} & \cdots\\
0 & 0 & \lambda(t)\frac{\delta_4}{\delta_3} & -(\lambda(t)+\mu(t)) & 0 & 0 & \cdots\\
0 & 0 & 0 & \lambda(t)\frac{\delta_5}{\delta_4} & -(\lambda(t)+\mu(t)) & 0 & \cdots\\
0 & 0 & 0 & 0 & \lambda(t)\frac{\delta_6}{\delta_5} & -(\lambda(t)+\mu(t)) & \cdots\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}.$$
Denote by $\alpha_k(t)$ the negative of the sum of all elements in the $k$th column of $B^{**}(t)$, i.e.,
$$\alpha_1(t) = \lambda(t)+\mu(t) - \lambda(t)\frac{\delta_2}{\delta_1}, \qquad
\alpha_2(t) = \lambda(t)+\mu(t) - \lambda(t)\frac{\delta_3}{\delta_2} - \mu(t)\frac{\delta_1}{\delta_2}, \qquad
\alpha_3(t) = \lambda(t)+\mu(t) - \lambda(t)\frac{\delta_4}{\delta_3} - \mu(t)\frac{\delta_1}{\delta_3},$$
$$\alpha_k(t) = \lambda(t)+\mu(t) - \lambda(t)\frac{\delta_{k+1}}{\delta_k} - \mu(t)\frac{\delta_{k-3}}{\delta_k}, \quad k \ge 4.$$
Since the logarithmic norm of $B^{**}(t)$ is equal to $\beta(t) = -\min\left(\min_{1 \le k \le 3}\alpha_k(t),\ \inf_{k \ge 4}\alpha_k(t)\right)$, we can apply (4) to (13) and obtain the bound (15) with $\delta_{k+1} = \sigma\delta_k$, $k \ge 5$.
Theorem 5.
If $\int_0^{\infty}\left(\lambda(t)(1-\sigma) + \mu(t)\left(1-\sigma^{-3}\right)\right)dt = +\infty$ for some $\sigma > 1$, then the Markov chain $X(t)$ is weakly ergodic and the ergodicity bound (15) holds.
As a numerical example we again consider the periodic case: $\lambda(t) = 3(1+\sin \pi t)$ and $\mu(t) = 4(1+\cos 2\pi t)$. By direct inspection it can be checked that the sequence $\{\delta_n,\, n \ge 1\}$ defined by $\delta_n = \left(\tfrac{10}{9}\right)^{n-1}$ leads to $\beta(t)$ being determined by $\alpha_2(t)$. Thus the conditions of Theorem 5 are fulfilled with $\sigma = \tfrac{10}{9}$. The pre-limiting and the limiting values of the same quantities as in the two previous examples, $p_0(t)$ and $E(t,k)$, are shown in Figure 9, Figure 10, Figure 11 and Figure 12.
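A sketch that assembles a truncation of the transposed generator displayed above (the k = 3 case) and evaluates the column sums of ΛA(t)Λ^{-1}, i.e., the quantities appearing in the proof of Theorem 4, for time-frozen intensities and a trial σ; the values of λ, μ, σ and the truncation level are illustrative only (null ergodicity requires λ to dominate, roughly λ > 3μ).

import numpy as np

def A_bulk_service(lam, mu, N=40):
    # Transposed generator of the Section 5 example (k = 3): every state moves
    # up by 1 at rate lam; states 1 and 2 move down by 1, states >= 3 move
    # down by 3, in both cases at rate mu.
    A = np.zeros((N + 1, N + 1))
    for j in range(N + 1):
        out = 0.0
        if j < N:
            A[j + 1, j] += lam
            out += lam
        if 1 <= j <= 2:
            A[j - 1, j] += mu
            out += mu
        elif j >= 3:
            A[j - 3, j] += mu
            out += mu
        A[j, j] -= out
    return A

lam, mu, sigma = 10.0, 1.0, 0.8                     # frozen, illustrative values
A = A_bulk_service(lam, mu)
w = sigma ** np.arange(A.shape[0])
cols = (A * w[:, None] / w[None, :]).sum(axis=0)    # column sums of Lambda A Lambda^{-1}
print(cols[:6])                                     # first few column sums
print(-(lam + mu) + sigma * lam + sigma**(-3) * mu) # the common value for columns k >= 3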

6. Conclusions

As can be seen from the last three sections, in order to obtain the ergodicity bounds the values of $\lambda(t)$ and $\mu(t)$ for each $t$ may not be needed. Instead it may be sufficient to know only the time-averaged intensities $\bar\lambda = \lim_{t\to\infty}\frac1t\int_0^t \lambda(u)\,du$ and $\bar\mu = \lim_{t\to\infty}\frac1t\int_0^t \mu(u)\,du$. For periodic intensities with the smallest common multiple of the periods equal to $T$, the values $\bar\lambda$ and $\bar\mu$ are exactly the average arrival and service intensities over one period (for example, for $\lambda(t) = 9(1+\sin 2\pi t)$ one has $\bar\lambda = 9$).
The classes of CTMC to which the logarithmic norm method is applicable and gives meaningful results are not limited to those considered in this paper (necessary and sufficient conditions for a CTMC "to fit" the logarithmic norm method are not known). For example, the same reasoning which led to Theorem 1 can be used to obtain upper bounds for the rate of convergence of the $M_t/M_t/S$ system with any (finite) number of servers. Moreover, whenever $X(t)$ is weakly ergodic, the analysis can be carried further beyond what is stated in Theorem 1. For example, one can obtain perturbation bounds (see e.g., [40]) and study different state space truncation options: one-sided or two-sided (see e.g., [29,41,42]).

Author Contributions

Investigation, A.Z., R.R., Y.S., I.K. and V.K. All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

The research was supported by the Ministry of Science and Higher Education of the Russian Federation, project No. 075-15-2020-799.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Liu, Y.; Whitt, W. Stabilizing performance in a service system with time-varying arrivals and customer feedback. Eur. J. Oper. Res. 2017, 256, 473–486.
2. Liu, Y. Staffing to stabilize the tail probability of delay in service systems with time-varying demand. Oper. Res. 2018, 66, 514–534.
3. Kwon, S.; Gautam, N. Guaranteeing performance based on time-stability for energy-efficient data centers. IIE Trans. 2016, 48, 812–825.
4. Whitt, W.; You, W. Time-Varying Robust Queueing; Columbia University: New York, NY, USA, 2019.
5. Schwarz, J.A.; Selinka, G.; Stolletz, R. Performance analysis of time-dependent queueing systems: Survey and classification. Omega 2016, 63, 170–189.
6. Whitt, W. Time-Varying Queues. Available online: http://www.columbia.edu/~ww2040/TVQ_082617.pdf (accessed on 28 October 2020).
7. Falin, G.I. Periodic queues in heavy traffic. Adv. Appl. Probab. 1989, 21, 485–487.
8. Masuyama, H. Error bounds for augmented truncations of discrete-time block-monotone Markov chains under geometric drift conditions. Adv. Appl. Probab. 2015, 47, 83–105.
9. Tweedie, R.L. Truncation approximations of invariant measures for Markov chains. J. Appl. Probab. 1998, 35, 517–536.
10. Green, L.; Kolesar, P. The pointwise stationary approximation for queues with nonstationary arrivals. Manag. Sci. 1991, 37, 84–97.
11. Di Crescenzo, A.; Nobile, A.G. Diffusion approximation to a queueing system with time-dependent arrival and service rates. Queueing Syst. 1995, 19, 41–62.
12. Di Crescenzo, A.; Giorno, V.; Nobile, A.G.; Ricciardi, L.M. On the M/M/1 queue with catastrophes and its continuous approximation. Queueing Syst. 2003, 43, 329–347.
13. Kolesar, P.J.; Rider, P.J.; Craybill, T.B.; Walker, W.E. A queueing linear-programming approach to scheduling police patrol cars. Oper. Res. 1975, 23, 1045–1062.
14. Taaffe, M.R.; Ong, K.L. Approximating Nonstationary Ph(t)/M(t)/s/c queueing systems. Ann. Oper. Res. 1987, 8, 103–116.
15. Clark, G.M. Use of Polya distributions in approximate solutions to nonstationary M/M/s queues. Commun. ACM 1981, 24, 206–217.
16. Massey, W.; Pender, J. Gaussian skewness approximation for dynamic rate multiserver queues with abandonment. Queueing Syst. 2013, 75, 243–327.
17. Van Dijk, N.M.; van Brummelen, S.P.J.; Boucherie, R.J. Uniformization: Basics, extensions and applications. Perform. Eval. 2018, 118, 8–32.
18. Arns, M.; Buchholz, P.; Panchenko, A. On the numerical analysis of inhomogeneous continuous-time Markov chains. Informs J. Comput. 2010, 22, 416–432.
19. Andreychenko, A.; Sandmann, W.; Wolf, V. Approximate adaptive uniformization of continuous-time Markov chains. Appl. Math. Model. 2018, 61, 561–576.
20. Burak, M.R.; Korytkowski, P. Inhomogeneous CTMC Birth-and-Death Models Solved by Uniformization with Steady-State Detection. ACM Trans. Model. Comput. Simul. 2020, 30, 1–18.
21. Burak, M.R. Efficiency Improvements to Uniformization for Markovian Birth-and-Death Models. In Proceedings of the 2018 23rd International Conference on Methods & Models in Automation & Robotics (MMAR), Miedzyzdroje, Poland, 27–30 August 2018; pp. 741–746.
22. Ingolfsson, A.; Akhmetshina, E.; Budge, S.; Li, Y.; Wu, X. A survey and experimental comparison of service-level-approximation methods for nonstationary M(t)/M/s(t) queueing systems with exhaustive discipline. Informs J. Comput. 2007, 19, 201–214.
23. Li, Y.F.; Zio, E.; Lin, Y.H. Methods of solutions of inhomogeneous continuous time Markov chains for degradation process modeling. Appl. Reliab. Eng. Risk Anal. Probab. Models Stat. Inference 2013, 3–16.
24. Katoen, J.-P.; Zapreev, I.S. Safe on-the-fly steady-state detection for time-bounded reachability. In Proceedings of the 3rd International Conference on the Quantitative Evaluation of Systems, California, CA, USA, 11–14 September 2016.
25. Down, D.; Meyn, S.P.; Tweedie, R.L. Exponential and uniform ergodicity of Markov processes. Ann. Probab. 1995, 23, 1671–1691.
26. Meyn, S.P.; Tweedie, R.L. Markov Chains and Stochastic Stability; Springer Science and Business Media: New York, NY, USA, 2012.
27. Zeifman, A.; Satin, Y.; Kiseleva, K.; Korolev, V. On the Rate of Convergence for a Characteristic of Multidimensional Birth-Death Process. Mathematics 2019, 7, 477.
28. Zeifman, A.; Satin, Y.; Kryukova, A.; Razumchik, R.; Kiseleva, K.; Shilova, G. On Three Methods for Bounding the Rate of Convergence for Some Continuous-Time Markov Chains. Int. J. Appl. Math. Comput. Sci. 2020, 30, 251–266.
29. Zeifman, A.; Satin, Y.; Korolev, V.; Shorgin, S. On truncations for weakly ergodic inhomogeneous birth and death processes. Int. J. Appl. Math. Comput. Sci. 2014, 24, 503–518.
30. Zeifman, A.I.; Korotysheva, A.; Satin, Y.; Kiseleva, K.; Korolev, V.; Shorgin, S. Bounds for Markovian Queues With Possible Catastrophes. In Proceedings of the 31st Conference on Modelling and Simulation, Budapest, Hungary, 23–26 May 2017; pp. 628–634.
31. Marin, A.; Rossi, S. A Queueing Model that Works Only on the Biggest Jobs. Lect. Notes Comput. Sci. 2020, 12039, 118–132.
32. Zeifman, A.; Razumchik, R.; Satin, Y.; Kovalev, I. Ergodicity Bounds for the Markovian Queue With Time-Varying Transition Intensities, Batch Arrivals and One Queue Skipping Policy. arXiv 2020, arXiv:2007.15833.
33. Klefsjö, B. The HNBUE and HNWUE classes of life distributions. Nav. Res. Logist. 1982, 29, 331–344.
34. Conti, P.L. An asymptotic test for a geometric process against a lattice distribution with monotone hazard. J. Ital. Statist. Soc. 1997, 6, 213–231.
35. Chen, A.; Wu, X.; Zhang, J. Markovian bulk-arrival and bulk-service queues with general state-dependent control. Queueing Syst. 2020, 1–48.
36. Chen, A.; Renshaw, E. Markovian bulk-arriving queues with state-dependent control at idle time. Adv. Appl. Probab. 2004, 36, 499–524.
37. Chen, A.; Pollett, P.; Li, J.; Zhang, H. Markovian bulk-arrival and bulk-service queues with state-dependent control. Queueing Syst. 2010, 64, 267–304.
38. Chen, A.; Li, J.; Hou, Z.; Ng, K.W. Decay properties and quasi-stationary distributions for stopped Markovian bulk-arrival and bulk-service queues. Queueing Syst. 2010, 66, 275–311.
39. Li, J.; Chen, A. Decay property of stopped Markovian bulk-arriving queues. Adv. Appl. Probab. 2008, 40, 95–121.
40. Zeifman, A.; Korolev, V.; Satin, Y. Two approaches to the construction of perturbation bounds for continuous-time Markov chains. Mathematics 2020, 8, 253.
41. Satin, Y.; Kiseleva, K.; Shorgin, S.; Korolev, V.; Zeifman, A. Two-Sided Truncations for The Mt/Mt/S Queueing Model. In Proceedings of the 31st European Conference on Modelling and Simulation, Budapest, Hungary, 23–26 May 2017; pp. 635–641.
42. Zeifman, A.; Leorato, S.; Orsingher, E.; Satin, Y.; Shilova, G. Some universal limits for nonhomogeneous birth and death processes. Queueing Syst. 2006, 52, 139–151.
Figure 1. Rate of convergence of the empty system probability $p_0(t)$ on the interval $[0,37]$ given two different initial conditions: $p_0(0)=1$ (red line), $p_{189}(0)=1$ (blue line).
Figure 2. Limiting probability $p_0(t)$ of the empty queue given two different initial conditions: $p_0(0)=1$ (red line), $p_{189}(0)=1$ (blue line).
Figure 3. Rate of convergence of the conditional mean number of customers in the system $E(t,k)$ on the interval $[0,37]$: $E(t,0)$ (red line), $E(t,189)$ (blue line).
Figure 4. Limiting conditional mean number of customers in the system $E(t,k)$: $E(t,0)$ (red line), $E(t,189)$ (blue line).
Figure 5. Rate of convergence of the empty system probability $p_0(t)$ on the interval $[0,60]$ given two different initial conditions: $p_0(0)=1$ (red line), $p_{250}(0)=1$ (blue line).
Figure 6. Limiting probability $p_0(t)$ of the empty queue given two different initial conditions: $p_0(0)=1$ (red line), $p_{250}(0)=1$ (blue line).
Figure 7. Rate of convergence of the conditional mean number of customers in the system $E(t,k)$ on the interval $[0,60]$: $E(t,0)$ (red line), $E(t,250)$ (blue line).
Figure 8. Limiting conditional mean number of customers in the system $E(t,k)$: $E(t,0)$ (red line), $E(t,250)$ (blue line).
Figure 9. Rate of convergence of the empty system probability $p_0(t)$ on the interval $[0,45]$ given two different initial conditions: $p_0(0)=1$ (red line), $p_{250}(0)=1$ (blue line).
Figure 10. Limiting probability $p_0(t)$ of the empty queue given two different initial conditions: $p_0(0)=1$ (red line), $p_{300}(0)=1$ (blue line).
Figure 11. Rate of convergence of the conditional mean number of customers in the system $E(t,k)$ on the interval $[0,45]$: $E(t,0)$ (red line), $E(t,300)$ (blue line).
Figure 12. Limiting conditional mean number of customers in the system $E(t,k)$: $E(t,0)$ (red line), $E(t,300)$ (blue line).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
