Review

Two Approaches to the Construction of Perturbation Bounds for Continuous-Time Markov Chains

1
Department of Applied Mathematics, Vologda State University, 160000 Vologda, Russia
2
Institute of Informatics Problems of the Federal Research Center “Computer Science and Control”, Russian Academy of Sciences, 119333 Moscow, Russia
3
Vologda Research Center of the Russian Academy of Sciences, 160014 Vologda, Russia
4
Faculty of Computational Mathematics and Cybernetics, Moscow State University, 119991 Moscow, Russia
5
Department of Mathematics, School of Science, Hangzhou Dianzi University, Hangzhou 310018, China
*
Author to whom correspondence should be addressed.
Mathematics 2020, 8(2), 253; https://doi.org/10.3390/math8020253
Submission received: 20 January 2020 / Revised: 10 February 2020 / Accepted: 11 February 2020 / Published: 14 February 2020
(This article belongs to the Special Issue Stability Problems for Stochastic Models: Theory and Applications)

Abstract

This paper is largely a review. It considers the two main methods used to study stability and to obtain quantitative perturbation bounds for (inhomogeneous) Markov chains with continuous time and a finite or countable state space. An approach to the construction of perturbation bounds is described for the five main classes of such chains associated with queueing models. Several specific models are considered for which the limiting characteristics and the perturbation bounds for admissible "perturbed" processes are calculated.

1. Introduction

In this paper, we consider some topics related to the stability of both homogeneous and inhomogeneous continuous-time Markov chains with respect to perturbations of their intensities (infinitesimal characteristics). It is assumed that the evolution of the system under consideration is described by a Markov chain with a known state space, and that it is the infinitesimal matrix that is given inexactly. Different classes of admissible perturbations can be considered: the "perturbed" infinitesimal matrix can be arbitrary, with only a small deviation of its norm from that of the original matrix assumed, or it can be assumed that the structure of the infinitesimal matrix is known and only its elements are "perturbed" within the same structure. A detailed description of these cases is given below. In some papers it is assumed that the perturbations have a special form, for example, that they are expanded in a power series in a small parameter; this assumption seems too restrictive and unrealistic.
The study of the stability of characteristics of stochastic models has been actively developing since the 1970s [1,2,3]. At that time, Zolotarev proposed to treat limit theorems of probability theory as special stability theorems. Zolotarev created the theoretical foundation of the key method used within this approach, namely, the theory of probability metrics [4]. This approach assumes that statements establishing convergence must be accompanied by statements establishing the convergence rate. Zolotarev called conditions of convergence that simultaneously serve as convergence rate estimates "natural." This approach was developed in the works of Zolotarev, Kalashnikov, Kruglov, Senatov, Korolev, Khokhlov, and their colleagues in the framework of the international seminar on stability problems for stochastic models. This seminar was founded by Zolotarev in the early 1970s and still holds its regular (as a rule, annual) international sessions (see the series of proceedings of the seminar published as Springer Lecture Notes starting from [5] or as issues of the Journal of Mathematical Sciences). In particular, this approach proved to be very productive for the study of random sums in queueing theory, renewal theory, and the theory of branching processes [6].
Since the 1980s, the problems related to the estimation of stability of Markov chains with respect to perturbations of their characteristics have been thoroughly studied by Kartashov for homogeneous discrete-time chains with general state space and, in parallel, by Zeifman for inhomogeneous continuous-time chains within the seminar mentioned above (see [7,8,9]). In particular, a general approach for inhomogeneous continuous-time chains was developed in [9]. That paper was published in the proceedings of the seminar “Stability Problems for Stochastic Models” and dealt with both uniform and strong cases.
Later, birth-death processes were considered in [10], and general properties and estimates for inhomogeneous finite chains were considered in [11]. The paper [12] was specially devoted to estimates for general birth-death processes, with the queueing system $M_t/M_t/N$ considered as an example. It should be mentioned that these papers were not noticed by Western authors. For example, in [13], it was stated that there were no papers on the stability of the (simplest, stationary!) system $M/M/1$. We first used the term "perturbation bounds" instead of "stability" in the paper [14], at the referee's suggestion. The same situation takes place with Kartashov's papers cited above. The methods proposed in those papers seem to be used by most authors of subsequent studies on perturbation estimates for discrete-time chains. Possibly, poor acquaintance with the early papers of Kartashov and Zeifman can be explained by the difference in terminology mentioned above: the original (and foundational) papers used the term "stability" (and appeared in the proceedings of the similarly titled seminar "Stability Problems for Stochastic Models").
The present paper deals only with continuous-time chains, so the subsequent remarks mainly regard such a case.
Note that, to obtain explicit and exact estimates of the perturbation bounds of a chain, it is required to have estimates of the rate of convergence of the chain to its limit characteristics in the form of explicit inequalities. Moreover, the sharper the convergence rate estimates are, the more accurate the perturbation bounds are. These bounds can be more easily obtained for finite homogeneous Markov chains. Therefore, most publications concern this situation only (see, e.g., [15,16,17,18,19,20]). Thus, two main approaches can be highlighted.
The first of them can be used for the case of weak ergodicity of a chain in the uniform operator topology. The first bounds in this direction were obtained in [9]. The principal progress related to the replacement of the constant S with log S in the bound was implemented in [17] and continued in Mitrophanov’s papers [18,19,20] for the case of homogeneous chains and then in [14,21] and in the subsequent papers of these authors for the inhomogeneous chains. The contemporary state of affairs in this field and new applied problems related to the link between convergence rate and perturbation bounds in the “uniform” case were described in [22]. In some recent papers, uniform perturbation bounds of homogeneous Markov chains were studied by the techniques of stochastic differential equations (see, for instance, [23] and the references therein).
The second approach is used in the case where the uniform ergodicity is not assured, which is typical for the processes most interesting from a practical viewpoint. For example, birth-death processes used for modeling queueing systems, and real processes in biology, chemistry, and physics, as a rule, are not uniformly ergodic.
Following the ideas of Kartashov (see the detailed description in [24]), most authors use probabilistic methods to study ergodicity and perturbation bounds of stationary chains (with a finite, countable, or general state space) in various norms [13,25,26]. For a wide class of (mainly) stationary discrete-time chains, a close approach was considered in [27] and in the more recent papers [28,29,30,31,32,33,34,35,36,37,38].
In the works of the authors of the present paper, perturbation bounds for non-stationary finite or infinite continuous-time chains were studied by other methods.
The first papers dealing with non-stationary queueing models appeared in the 1970s (see [39,40] and the more recent paper [41]). Moreover, as far back as [42], it was noted that the logarithmic matrix norm can in principle be used for studying the convergence rate of continuous-time Markov chains. The corresponding general approach, employing the theory of differential equations in Banach spaces, was developed in a series of papers by the authors of the present paper (see the detailed description in [43,44]). In [9] (see also [10,11]), a method was proposed for studying perturbation bounds for the vector of state probabilities of a continuous-time Markov chain with respect to perturbations of the infinitesimal characteristics of the chain in the total variation ($l_1$) norm. The paper [12] contained a detailed study of the stability of essentially non-stationary birth-death processes with respect to conditionally small perturbations. Convergence rate estimates in weighted norms, and hence the corresponding bounds for new classes of Markov chains, were considered in [45,46,47,48].
In the present paper, both approaches are considered along with the classes of inhomogeneous Markov chains, for which at least one of these approaches yields reasonable perturbation bounds for basic probability characteristics.
The paper is organized as follows. In Section 2, basic notions and preliminary results are introduced. In Section 3, general theorems on perturbation bounds are considered. Section 4 contains convergence rate estimates and perturbation bounds for basic classes of the chains under consideration. Finally, in Section 5, some special queueing models are studied.

2. Basic Notions and Preliminaries

Let $X = X(t)$, $t \ge 0$, be, in general, an inhomogeneous continuous-time Markov chain with a finite or countable state space $E_S = \{0, 1, \dots, S\}$, $S \le \infty$. The transition probabilities of $X = X(t)$ will be denoted by $p_{ij}(s,t) = \Pr\{X(t) = j \mid X(s) = i\}$, $i, j \ge 0$, $0 \le s \le t$. Let $p_i(t) = \Pr\{X(t) = i\}$ be the state probabilities of the chain and $p(t) = \bigl(p_0(t), p_1(t), \dots\bigr)^T$ be the corresponding vector of state probabilities. In what follows, it is assumed that
$$\Pr\{X(t+h) = j \mid X(t) = i\} = \begin{cases} q_{ij}(t)h + \alpha_{ij}(t,h), & \text{if } j \ne i,\\[2pt] 1 - \sum_{k \ne i} q_{ik}(t)h + \alpha_i(t,h), & \text{if } j = i, \end{cases}$$
where all $\alpha_i(t,h)$ are $o(h)$ uniformly in $i$, that is, $\sup_i |\alpha_i(t,h)| = o(h)$.
As usual, we assume that, if the chain is inhomogeneous, then all the infinitesimal characteristics (intensity functions) $q_{ij}(t)$ are integrable in $t$ on any interval $[a,b]$, $0 \le a \le b$.
Let $a_{ij}(t) = q_{ji}(t)$ for $j \ne i$ and
$$a_{ii}(t) = -\sum_{j \ne i} a_{ji}(t) = -\sum_{j \ne i} q_{ij}(t).$$
Further, to make it possible to obtain more explicit estimates, we will assume that
$$|a_{ii}(t)| \le L < \infty$$
for almost all $t \ge 0$.
The state probabilities then satisfy the forward Kolmogorov system
$$\frac{dp}{dt} = A(t)\,p(t), \tag{3}$$
where $A(t) = Q^{T}(t)$ and $Q(t)$ is the infinitesimal matrix of the process.
Let $\|\cdot\|$ be the usual $l_1$-norm, i.e., $\|x\| = \sum_i |x_i|$ and $\|B\| = \sup_j \sum_i |b_{ij}|$ for $B = (b_{ij})_{i,j=0}^{\infty}$. Denote $\Omega = \{x \in l_1 : x \ge 0,\ \|x\| = 1\}$. Then
$$\|A(t)\| = 2 \sup_k |a_{kk}(t)| \le 2L$$
for almost all $t \ge 0$, and we can apply the results of [49] to Equation (3) in the space $l_1$. Namely, it was shown in [49] that the Cauchy problem for Equation (3) has a unique solution for an arbitrary initial condition and that, if $p(s) \in \Omega$, then $p(t) \in \Omega$ for any $0 \le s \le t$.
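As a minimal numerical illustration (my own sketch, not from the paper; it assumes NumPy and SciPy, and the intensities and time horizon are arbitrary placeholders), the system (3) can be integrated directly for a small finite birth-death chain, and the solution indeed stays in $\Omega$:

import numpy as np
from scipy.integrate import solve_ivp

S = 5                                           # states 0..S (finite, for illustration)
lam = lambda t: 1.0 + np.sin(2.0 * np.pi * t)   # 1-periodic birth intensity
mu = lambda t: 4.0                              # death intensity per busy server

def A(t):
    """Transposed intensity matrix A(t) = Q^T(t) of the birth-death chain."""
    M = np.zeros((S + 1, S + 1))
    for i in range(S):
        M[i + 1, i] = lam(t)                    # birth i -> i+1
        M[i, i + 1] = (i + 1) * mu(t)           # death i+1 -> i
    np.fill_diagonal(M, -M.sum(axis=0))         # make every column sum to zero
    return M

p0 = np.zeros(S + 1); p0[0] = 1.0               # start in state 0
sol = solve_ivp(lambda t, p: A(t) @ p, (0.0, 10.0), p0, rtol=1e-8, atol=1e-10)
pT = sol.y[:, -1]
print(pT.sum(), pT.min())                       # ~1 and >= 0, as guaranteed by [49]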
Let $p_0(t) = 1 - \sum_{i \ge 1} p_i(t)$ and put $z(t) = \bigl(p_1(t), p_2(t), \dots\bigr)^T$.
Then, from Equation (3), we obtain the equation
$$\frac{dz}{dt} = B(t)\,z(t) + f(t), \tag{4}$$
where $f(t) = (a_{10}, a_{20}, \dots)^T$ and
$$B = \begin{pmatrix}
a_{11}-a_{10} & a_{12}-a_{10} & \cdots & a_{1r}-a_{10} & \cdots\\
a_{21}-a_{20} & a_{22}-a_{20} & \cdots & a_{2r}-a_{20} & \cdots\\
a_{31}-a_{30} & a_{32}-a_{30} & \cdots & a_{3r}-a_{30} & \cdots\\
\vdots & \vdots & & \vdots & \\
a_{r1}-a_{r0} & a_{r2}-a_{r0} & \cdots & a_{rr}-a_{r0} & \cdots\\
\vdots & \vdots & & \vdots & \ddots
\end{pmatrix},$$
where all entries depend on $t$.
By $\bar X = \bar X(t)$ we will denote the "perturbed" Markov chain with the same state space, state probabilities $\bar p_i(t)$, transposed infinitesimal matrix $\bar A(t) = \bigl(\bar a_{ij}(t)\bigr)_{i,j=0}^{\infty}$, and so on; the "perturbations" themselves, that is, the differences between the corresponding "perturbed" and original characteristics, will be denoted by $\hat a_{ij}(t)$, $\hat A(t) = \bar A(t) - A(t)$, and so on.
Let $E(t,k) = \mathsf{E}\{X(t) \mid X(0) = k\}$. Recall that a Markov chain $X(t)$ is weakly ergodic if $\|p^*(t) - p^{**}(t)\| \to 0$ as $t \to \infty$ for any initial conditions, and that it has the limiting mean $\phi(t)$ if $|E(t,k) - \phi(t)| \to 0$ as $t \to \infty$ for any $k$.
Now we briefly describe the main classes of the chains under consideration. The details concerning the first four classes can be found in [47,50].
Case 1.
Let $a_{ij}(t) = 0$ for all $t \ge 0$ if $|i-j| > 1$, and let $a_{i,i+1}(t) = \mu_{i+1}(t)$, $a_{i+1,i}(t) = \lambda_i(t)$. This is an inhomogeneous birth-death process (BDP) with birth intensities $\lambda_i(t)$ and death intensities $\mu_{i+1}(t)$, respectively.
Case 2.
Now let $a_{ij}(t) = 0$ for $i < j - 1$, $a_{i+k,i}(t) = a_k(t)$ for $k \ge 1$, and $a_{i,i+1}(t) = \mu_{i+1}(t)$. This chain describes, for instance, the number of customers in a queueing system in which customers arrive in groups but are served one by one (here $a_k(t)$ is the arrival intensity of a group of $k$ customers, and $\mu_i(t)$ is the service intensity when $i$ customers are present). The simplest models of this type were considered in [51] (see also [47,50]).
Case 3.
Let $a_{ij}(t) = 0$ for $i > j + 1$, $a_{i,i+k}(t) = b_k(t)$ for $k \ge 1$, and $a_{i+1,i}(t) = \lambda_i(t)$. This situation occurs when modeling queueing systems with single arrivals and group services.
Case 4.
Let $a_{i+k,i}(t) = a_k(t)$ and $a_{i,i+k}(t) = b_k(t)$ for $k \ge 1$. Such a process appears in the description of systems with group arrivals and group services; for earlier studies, see [46,52,53].
Case 5.
Consider a Markov chain with "catastrophes," used for modeling some queueing systems (see, e.g., [14,54,55,56,57,58]). Here the intensities have a general form, and the single (although substantial) restriction is that the zero state is attainable in one step from any other state; the corresponding intensities $q_{k,0}(t) = a_{0,k}(t)$, $k \ge 1$, are called the catastrophe intensities.
Now consider the following example illustrating some specific features of the problem under consideration.
Example 1
([14]). Consider a homogeneous BDP (Case 1) with the intensities $\lambda_k(t) = 1$, $\mu_k(t) = 4$ for all $t$ and $k$, and denote by $A$ the corresponding transposed intensity matrix. Then, as is known (see, e.g., [59]), the BDP is strongly ergodic and stable in the corresponding norm. On the other hand, take a perturbed process with the transposed infinitesimal matrix $\bar A = A + \hat A$, where $\hat a_{00} = -\varepsilon$, $\hat a_{k0} = \frac{\varepsilon}{k(k+1)}$ for $k \ge 1$, and $\hat a_{ij} = 0$ for all other $i, j$. The perturbed Markov chain $\bar X(t)$ (describing the "M/M/c queue with mass arrivals when empty"; see [54,58,60]) is then not ergodic, since the condition $\bar A \bar p = 0$ implies that the coordinates of the stationary distribution (if it existed) would have to satisfy $4\bar p_{k+1} = \bar p_k + \frac{\bar p_0 \varepsilon}{k+1} \ge \frac{\bar p_0 \varepsilon}{k+1}$, which is impossible, since the series $\sum_k \bar p_k$ would then diverge.
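A small numerical check of this argument (my own illustration; the values of $\varepsilon$ and of $\bar p_0$ are hypothetical) is obtained by iterating the recursion: the partial sums grow logarithmically and therefore cannot form a probability distribution.

eps, pbar0 = 0.1, 1.0                 # hypothetical perturbation level and value of pbar_0
p, total = pbar0, pbar0
for k in range(1, 10**6 + 1):
    p = (p + pbar0 * eps / k) / 4.0   # the recursion 4*pbar_k = pbar_{k-1} + pbar_0*eps/k
    total += p
    if k in (10**2, 10**4, 10**6):
        print(k, total)               # partial sums grow roughly like (eps/3)*log k,
                                      # so they cannot stay bounded by 1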
As has already been noted, the (upper) perturbation bounds are closely connected with the (corresponding upper) estimates of the convergence rate (see also the next two sections). On the other hand, it is also possible to construct important lower estimates of the convergence rate, provided that the influence of the initial conditions cannot fade too rapidly (see [61]). It turns out that it is in principle impossible to construct lower bounds for perturbations. Indeed, if we consider the same BDP and, as a perturbed BDP, choose a BDP with the intensities $\bar\lambda_k(t) = 1 + \varepsilon$, $\bar\mu_k(t) = 4(1+\varepsilon)$, then the stationary distribution of the perturbed process coincides with that of the original BDP for any positive $\varepsilon$.

3. General Theorems Concerning Perturbation Bounds

First, consider uniform bounds, which provide the first approach to perturbation estimation. This approach applies to uniformly ergodic Markov chains and to the study of the stability of the state probability vector. The most important class of such processes consists of Markov chains with a finite state space, both homogeneous and inhomogeneous.
Theorem 1.
Let the Markov chain $X(t)$ be exponentially weakly ergodic; that is, for any initial conditions $p^*(s) \in \Omega$, $p^{**}(s) \in \Omega$ and any $s \ge 0$, $t \ge s$, the inequality
$$\|p^*(t) - p^{**}(t)\| \le 2c\, e^{-b(t-s)}$$
holds. Then, for perturbations small enough ($\|\hat A(t)\| \le \varepsilon$ for almost all $t \ge 0$), the perturbed chain $\bar X(t)$ is also exponentially weakly ergodic, and the following perturbation bound holds:
$$\limsup_{t\to\infty} \|p(t) - \bar p(t)\| \le \frac{\bigl(1 + \log(c/2)\bigr)\,\varepsilon}{b}. \tag{7}$$
For the proof, we use the approach proposed in [17] and modified in [21] for the inhomogeneous case; see also [14]. Let $U(t,s)$ be the Cauchy operator of Equation (3) and put
$$\beta(t,s) = \sup_{\|v\| = 1,\ \sum_i v_i = 0} \|U(t,s)\,v\| = \frac{1}{2}\sup_{i,j}\sum_k \bigl|p_{ik}(t,s) - p_{jk}(t,s)\bigr|.$$
Then
$$\|p(t) - \bar p(t)\| \le \beta(t,s)\,\|p(s) - \bar p(s)\| + \int_s^t \|\hat A(u)\|\,\beta(t,u)\,du.$$
Moreover,
$$\beta(t,s) \le 1, \qquad \beta(t,s) \le \frac{c}{2}\, e^{-b(t-s)}, \qquad 0 \le s \le t.$$
Hence,
$$\|p(t) - \bar p(t)\| \le \begin{cases}
\|p(s) - \bar p(s)\| + (t-s)\,\varepsilon, & \text{if } 0 < t-s < \frac{1}{b}\ln\frac{c}{2},\\[6pt]
\dfrac{c}{2}\, e^{-b(t-s)}\,\|p(s) - \bar p(s)\| + \dfrac{1}{b}\Bigl(\ln\dfrac{c}{2} + 1 - \dfrac{c}{2}\, e^{-b(t-s)}\Bigr)\varepsilon, & \text{if } t-s \ge \frac{1}{b}\ln\frac{c}{2},
\end{cases}$$
whence, letting $t \to \infty$, we obtain Equation (7).
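For illustration, the following helper (my own sketch with arbitrary hypothetical values of $c$, $b$, and $\varepsilon$, not taken from the paper) evaluates the two-regime bound above and its limit (7):

import numpy as np

def uniform_bound(c, b, eps, delta0, ts):
    """Right-hand side of the two-regime inequality; delta0 = ||p(s) - pbar(s)||, ts = t - s."""
    t0 = np.log(c / 2.0) / b                          # switching point (1/b) ln(c/2)
    if ts < t0:
        return delta0 + ts * eps
    decay = (c / 2.0) * np.exp(-b * ts)
    return decay * delta0 + (np.log(c / 2.0) + 1.0 - decay) * eps / b

c, b, eps = 6.0, 0.5, 1e-2
print(uniform_bound(c, b, eps, delta0=2.0, ts=50.0))
print((1.0 + np.log(c / 2.0)) * eps / b)              # the limiting bound (7)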
Corollary 1.
If, under the conditions of Theorem 1, the Markov chain $X(t)$ has the finite state space $E_S = \{0,1,\dots,S\}$, then both Markov chains $X(t)$ and $\bar X(t)$ have limiting means and
$$|\phi(t) - \bar\phi(t)| \le \frac{S\bigl(1 + \log(c/2)\bigr)\,\varepsilon}{b}.$$
Now consider the second approach, namely, weighted bounds. Such estimates can be applied to a wide class of Markov chains that are exponentially ergodic in suitable weighted norms. Moreover, as a rule, these estimates also make it possible to study the stability of the mathematical expectation for countable Markov chains, both homogeneous and inhomogeneous. Here we use the approach proposed in [9] (see also the detailed description in [43,44]).
Let $1 \le d_1 \le d_2 \le \cdots$, and let
$$D = \begin{pmatrix}
d_1 & d_1 & d_1 & \cdots\\
0 & d_2 & d_2 & \cdots\\
0 & 0 & d_3 & \cdots\\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}.$$
Let $l_{1D} = \bigl\{z = (p_1, p_2, \dots)^T : \|z\|_{1D} \equiv \|Dz\| < \infty\bigr\}$ and $\|B\|_{1D} = \|DBD^{-1}\|$. In addition, put $\|p\|_{1D} = \|z\|_{1D}$.
Below we will assume that the following conditions hold:
$$\|B(t)\|_{1D} \le B < \infty, \qquad \|f(t)\|_{1D} \le F < \infty$$
for almost all $t \ge 0$.
Recall that $X(t)$ is a $1D$-exponentially weakly ergodic Markov chain if
$$\|p^*(t) - p^{**}(t)\|_{1D} \le M e^{-a(t-s)}\,\|p^*(s) - p^{**}(s)\|_{1D} \tag{15}$$
for some $M > 0$, $a > 0$, all $0 \le s \le t$, and any initial conditions $p^*(s) \in l_{1D}$, $p^{**}(s) \in l_{1D}$.
If one can choose $p^{**}(t) = \pi$, the stationary probability vector, then the chain is $1D$-exponentially strongly ergodic.
Let
$$\|B(t) - \bar B(t)\|_{1D} \le \widehat B, \qquad \|f(t) - \bar f(t)\|_{1D} \le \widehat f \tag{16}$$
for almost all $t \ge 0$.
Theorem 2.
If a Markov chain $X(t)$ is $1D$-exponentially weakly ergodic, then $\bar X(t)$ is also $1D$-exponentially weakly ergodic and the following perturbation estimate in the $1D$-norm holds:
$$\limsup_{t\to\infty}\|p(t) - \bar p(t)\|_{1D} \le \frac{M\bigl(M\widehat B\,F + a\,\widehat f\bigr)}{a\bigl(a - M\widehat B\bigr)}. \tag{17}$$
If $W = \inf_{i \ge 1} \frac{d_i}{i} > 0$, then both chains $X(t)$ and $\bar X(t)$ have limiting means and
$$\limsup_{t\to\infty}|\phi(t) - \bar\phi(t)| \le \frac{M\bigl(M\widehat B\,F + a\,\widehat f\bigr)}{W a\bigl(a - M\widehat B\bigr)}. \tag{18}$$
Proof. 
The detailed proof can be found in [44]; here we only outline the scheme of reasoning. Let $V(t,s)$ and $\bar V(t,s)$ be the Cauchy operators of Equation (4) and of the corresponding "perturbed" equation, respectively. Then
$$\|V(t,s)\|_{1D} \le M e^{-a(t-s)}, \qquad \|\bar V(t,s)\|_{1D} \le M e^{-(a - M\widehat B)(t-s)}$$
for all $t \ge s \ge 0$. Rewriting Equation (4) as
$$\frac{dz}{dt} = \bar B(t)\,z(t) + f(t) + \bigl(B(t) - \bar B(t)\bigr) z(t),$$
after some algebra we obtain the following inequality in the $1D$-norm:
$$\|z(t) - \bar z(t)\| \le \int_0^t \|\bar V(t,\tau)\|\Bigl(\|B(\tau) - \bar B(\tau)\|\,\|z(\tau)\| + \|f(\tau) - \bar f(\tau)\|\Bigr)\,d\tau \le \int_0^t M e^{-(a - M\widehat B)(t-\tau)}\bigl(\widehat B\,\|z(\tau)\| + \widehat f\bigr)\,d\tau.$$
On the other hand, $\|z(t)\|_{1D} \le M e^{-at}\|z(0)\|_{1D} + \frac{M}{a}F$ for any $t \ge 0$. Hence, for any initial condition $z(0) \in l_{1D}$, we obtain the following inequalities in the $1D$-norm:
$$\|z(t) - \bar z(t)\| \le M\Bigl(\widehat B\,\frac{M}{a}F + \widehat f\Bigr)\int_0^t e^{-(a - M\widehat B)(t-\tau)}\,d\tau + M\int_0^t e^{-(a - M\widehat B)(t-\tau)}\,\widehat B\, M e^{-a\tau}\|z(0)\|\,d\tau \le \frac{M\bigl(M\widehat B\,F + a\,\widehat f\bigr)}{a\bigl(a - M\widehat B\bigr)} + o(1). \tag{22}$$
Thus, the first assertion of the theorem is proved.
The second assertion follows from the inequality $\|z\|_{1E} \le W^{-1}\|z\|_{1D}$ (see, e.g., [62]) and the estimate expressed by Equation (22), where $l_{1E} = \bigl\{z = (p_1, p_2, \dots)^T : \|z\|_{1E} \equiv \sum_n n|p_n| < \infty\bigr\}$. □
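Once the constants are known, the resulting bounds are easy to evaluate; the following helper (my own illustration, with arbitrary hypothetical constants) computes the right-hand sides of (17) and (18):

def weighted_bounds(M, a, B_hat, f_hat, F, W=None):
    """Bound (17) on limsup ||p - pbar||_{1D} and bound (18) on the limiting means.
    M, a: constants from (15); B_hat, f_hat: constants from (16); F: bound on ||f(t)||_{1D};
    W = inf d_i / i (only needed for the mean bound)."""
    if a <= M * B_hat:
        raise ValueError("perturbation too large: the bound requires a > M * B_hat")
    prob = M * (M * B_hat * F + a * f_hat) / (a * (a - M * B_hat))   # (17)
    mean = None if W is None else prob / W                           # (18)
    return prob, mean

print(weighted_bounds(M=2.0, a=1.0, B_hat=0.05, f_hat=0.01, F=3.0, W=0.5))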
Remark 1.
A number of consequences of this statement can be formulated; for example,
$$\limsup_{t\to\infty}\|p(t) - \bar p(t)\| \le \frac{4M\bigl(M\widehat B\,F + a\,\widehat f\bigr)}{a d\bigl(a - M\widehat B\bigr)},$$
which follows from
$$\|p^* - p^{**}\| \le 2\|z^* - z^{**}\| \le \frac{4}{d}\,\|z^* - z^{**}\|_{1D}, \tag{24}$$
where $d = \inf_{i \ge 1} d_i$.
The respective perturbation bounds can be formulated for strongly ergodic (for instance, homogeneous) Markov chains (see [44]).
Remark 2.
As shown in [44], the bounds presented in Theorem 2 and its corollaries are sufficiently sharp. Namely, in [44], we considered the queue-length process for the simplest ordinary M / M / 1 queue and proved that the bounds established in Theorem 2 have the proper order.

4. Convergence Rate Estimates and Perturbation Bounds for Main Classes

For Markov chains of Classes 1–4, an important role is played by the matrix $B^{**}(t) = D B(t) D^{-1}$. To begin with, we write out this matrix for each of these classes.
For Class 1, this matrix has the form
$$B^{**}(t) = \begin{pmatrix}
-(\lambda_0+\mu_1) & \frac{d_1}{d_2}\mu_1 & 0 & \cdots\\[4pt]
\frac{d_2}{d_1}\lambda_1 & -(\lambda_1+\mu_2) & \frac{d_2}{d_3}\mu_2 & \cdots\\[4pt]
0 & \ddots & \ddots & \ddots\\[4pt]
\cdots & \frac{d_r}{d_{r-1}}\lambda_{r-1} & -(\lambda_{r-1}+\mu_r) & \frac{d_r}{d_{r+1}}\mu_r\\[4pt]
 & & \ddots & \ddots
\end{pmatrix}$$
in the case of a countable state space ($S = \infty$), and
$$B^{**}(t) = \begin{pmatrix}
-(\lambda_0+\mu_1) & \frac{d_1}{d_2}\mu_1 & 0 & \cdots & 0\\[4pt]
\frac{d_2}{d_1}\lambda_1 & -(\lambda_1+\mu_2) & \frac{d_2}{d_3}\mu_2 & \cdots & 0\\[4pt]
\vdots & \ddots & \ddots & \ddots & \vdots\\[4pt]
0 & \cdots & 0 & \frac{d_S}{d_{S-1}}\lambda_{S-1} & -(\lambda_{S-1}+\mu_S)
\end{pmatrix}$$
in the case of a finite state space ($S < \infty$).
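For a finite chain of Class 1, this matrix is straightforward to assemble numerically; the following sketch (my own, assuming NumPy; the rates and the sequence $d_k$ are illustrative placeholders) builds it at a fixed $t$:

import numpy as np

def bdp_B_star_star(lam, mu, d):
    """Finite Class 1 matrix B**(t) at a fixed t.
    lam[k] = lambda_k for k = 0..S-1, mu[k] = mu_{k+1} for k = 0..S-1, d[k] = d_{k+1}."""
    S = len(d)
    B = np.zeros((S, S))
    for r in range(S):                                   # row r corresponds to state r+1
        B[r, r] = -(lam[r] + mu[r])                      # -(lambda_{k-1} + mu_k)
        if r + 1 < S:
            B[r, r + 1] = d[r] / d[r + 1] * mu[r]        # (d_k / d_{k+1}) mu_k
            B[r + 1, r] = d[r + 1] / d[r] * lam[r + 1]   # (d_{k+1} / d_k) lambda_k
    return B

# M_t/M_t/N/N-type rates at a fixed t with d_k = 1 (cf. Section 5)
N, lam_t, mu_t = 4, 2.0, 1.0
print(bdp_B_star_star([lam_t] * N, [(k + 1) * mu_t for k in range(N)], [1.0] * N))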
For Class 2, this matrix has the form
$$B^{**}(t) = \begin{pmatrix}
a_{11} & \frac{d_1}{d_2}\mu_1 & 0 & 0 & \cdots\\[4pt]
\frac{d_2}{d_1}a_1 & a_{22} & \frac{d_2}{d_3}\mu_2 & 0 & \cdots\\[4pt]
\frac{d_3}{d_1}a_2 & \frac{d_3}{d_2}a_1 & a_{33} & \frac{d_3}{d_4}\mu_3 & \cdots\\[4pt]
\vdots & \vdots & \ddots & \ddots & \ddots
\end{pmatrix}$$
in the case of a countable state space ($S = \infty$), and
$$B^{**}(t) = \begin{pmatrix}
a_{11}-a_S & \frac{d_1}{d_2}\mu_1 & 0 & \cdots & 0\\[4pt]
\frac{d_2}{d_1}(a_1-a_S) & a_{22}-a_{S-1} & \frac{d_2}{d_3}\mu_2 & \cdots & 0\\[4pt]
\vdots & \vdots & \ddots & \ddots & \vdots\\[4pt]
\frac{d_S}{d_1}(a_{S-1}-a_S) & \cdots & \cdots & \frac{d_S}{d_{S-1}}(a_1-a_2) & a_{SS}-a_1
\end{pmatrix}$$
in the case of a finite state space ($S < \infty$).
For Class 3, this matrix has the form
$$B^{**}(t) = \begin{pmatrix}
-(\lambda_0 + b_1) & \frac{d_1}{d_2}(b_1-b_2) & \frac{d_1}{d_3}(b_2-b_3) & \cdots\\[4pt]
\frac{d_2}{d_1}\lambda_1 & -\bigl(\lambda_1 + \sum_{i \le 2} b_i\bigr) & \frac{d_2}{d_3}(b_1-b_3) & \cdots\\[4pt]
0 & \ddots & \ddots & \ddots\\[4pt]
\cdots & \frac{d_r}{d_{r-1}}\lambda_{r-1} & -\bigl(\lambda_{r-1} + \sum_{i \le r} b_i\bigr) & \ddots
\end{pmatrix}$$
in the case of a countable state space ($S = \infty$), and
$$B^{**}(t) = \begin{pmatrix}
-(\lambda_0 + b_1) & \frac{d_1}{d_2}(b_1-b_2) & \frac{d_1}{d_3}(b_2-b_3) & \cdots & \frac{d_1}{d_S}(b_{S-1}-b_S)\\[4pt]
\frac{d_2}{d_1}\lambda_1 & -\bigl(\lambda_1 + \sum_{i \le 2} b_i\bigr) & \frac{d_2}{d_3}(b_1-b_3) & \cdots & \frac{d_2}{d_S}(b_{S-2}-b_S)\\[4pt]
\vdots & \ddots & \ddots & \ddots & \vdots\\[4pt]
0 & \cdots & 0 & \frac{d_S}{d_{S-1}}\lambda_{S-1} & -\bigl(\lambda_{S-1} + \sum_{i \le S} b_i\bigr)
\end{pmatrix}$$
in the case of a finite state space ($S < \infty$).
Finally, for Class 4, this matrix has the form
$$B^{**}(t) = \begin{pmatrix}
a_{11} & \frac{d_1}{d_2}(b_1-b_2) & \frac{d_1}{d_3}(b_2-b_3) & \cdots\\[4pt]
\frac{d_2}{d_1}a_1 & a_{22} & \frac{d_2}{d_3}(b_1-b_3) & \cdots\\[4pt]
\vdots & \ddots & \ddots & \ddots\\[4pt]
\frac{d_r}{d_1}a_{r-1} & \cdots & \frac{d_r}{d_{r-1}}a_1 & a_{rr}
\end{pmatrix}$$
in the case of a countable state space ($S = \infty$), and
$$B^{**}(t) = \begin{pmatrix}
a_{11}-a_S & \frac{d_1}{d_2}(b_1-b_2) & \frac{d_1}{d_3}(b_2-b_3) & \cdots & \frac{d_1}{d_S}(b_{S-1}-b_S)\\[4pt]
\frac{d_2}{d_1}(a_1-a_S) & a_{22}-a_{S-1} & \frac{d_2}{d_3}(b_1-b_3) & \cdots & \frac{d_2}{d_S}(b_{S-2}-b_S)\\[4pt]
\vdots & \vdots & \ddots & \ddots & \vdots\\[4pt]
\frac{d_S}{d_1}(a_{S-1}-a_S) & \cdots & \cdots & \frac{d_S}{d_{S-1}}(a_1-a_2) & a_{SS}-a_1
\end{pmatrix}$$
in the case of a finite state space ($S < \infty$).
In the proofs of the following theorems, we use the notion of the logarithmic norm of a linear operator function and the related estimates of the norm of the Cauchy operator of a linear differential equation. The corresponding results are described in detail in our preceding works (see [47,62,63]). Here we restrict ourselves only to the necessary minimum.
Recall that the logarithmic norm of an operator function $B^{**}(t)$ is defined as the number
$$\gamma\bigl(B^{**}(t)\bigr) = \lim_{h \to +0} \frac{\|I + h B^{**}(t)\| - 1}{h}.$$
Let $V(t,s) = V(t)V^{-1}(s)$ be the Cauchy operator of the differential equation
$$\frac{dw}{dt} = B^{**}(t)\,w.$$
Then the estimate
$$\|V(t,s)\| \le e^{\int_s^t \gamma(B^{**}(u))\,du}$$
holds. Moreover, if for each $t \ge 0$ the operator $B^{**}(t)$ maps $l_1$ into itself, then the logarithmic norm can be calculated by the formula
$$\gamma\bigl(B^{**}(t)\bigr) = \sup_{1 \le j \le S}\Bigl(b^{**}_{jj}(t) + \sum_{i \ne j}\bigl|b^{**}_{ij}(t)\bigr|\Bigr). \tag{33}$$
Now let
$$\alpha_j(t) = -\Bigl(b^{**}_{jj}(t) + \sum_{i \ne j}\bigl|b^{**}_{ij}(t)\bigr|\Bigr), \qquad \alpha(t) = \inf_{j \ge 1}\alpha_j(t). \tag{34}$$
Note also that if, in Classes 2–4, the intensities $a_k(t)$ and $b_k(t)$ are non-increasing in $k$ for each $t$, then in all cases the matrix $B^{**}(t)$ is essentially nonnegative (that is, its off-diagonal elements are nonnegative); therefore, in Equations (33) and (34), the absolute value signs can be omitted.
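The column-wise quantities in Equations (33) and (34) are easy to compute numerically; the following sketch (my own helper, assuming NumPy; the matrix is an arbitrary illustrative example) does this for a fixed $t$:

import numpy as np

def log_norm_l1(Bss):
    """Logarithmic norm in l_1: gamma(B**) = sup_j ( b_jj + sum_{i != j} |b_ij| )."""
    col = np.abs(Bss).sum(axis=0) - np.abs(np.diag(Bss)) + np.diag(Bss)
    return col.max()

def alpha_columns(Bss):
    """alpha_j = -(b_jj + sum_{i != j} |b_ij|) for each column j, and alpha = min_j alpha_j."""
    col = np.abs(Bss).sum(axis=0) - np.abs(np.diag(Bss)) + np.diag(Bss)
    return -col, (-col).min()

Bss = np.array([[-3.0, 1.0, 0.0],
                [2.0, -4.0, 2.0],
                [0.0, 1.0, -5.0]])
print(log_norm_l1(Bss), alpha_columns(Bss)[1])   # gamma(B**) equals -alpha here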
The following statement ([47], Theorem 1) is given here for convenience.
Theorem 3.
Let, for some sequence $\{d_i,\ i \ge 1\}$ of positive numbers, the conditions $d_1 = 1$, $d = \inf_{i \ge 1} d_i > 0$, and
$$\int_0^\infty \alpha(t)\,dt = +\infty \tag{35}$$
hold. Then the Markov chain $X(t)$ is weakly ergodic and, for any $s \ge 0$, any initial condition $w(s)$, and all $t \ge s$, the following estimate holds:
$$\|w(t)\| \le e^{-\int_s^t \alpha(u)\,du}\,\|w(s)\|.$$
Now let, instead of Equation (35), the stronger condition
$$e^{-\int_s^t \alpha(\tau)\,d\tau} \le M^* e^{-a^*(t-s)} \tag{37}$$
hold for all $0 \le s \le t$.
Theorem 4.
Let, under the conditions of Theorem 3, Inequality (37) hold. Then the Markov chain $X(t)$ is $1D$-exponentially weakly ergodic and, for all $t \ge s \ge 0$ and $p^*(s) \in l_{1D}$, $p^{**}(s) \in l_{1D}$, Inequality (15) holds with $M = M^*$ and $a = a^*$.
Remark 3.
In the case of a homogeneous Markov chain, or if all intensities are periodic with one and the same period, conditions expressed by Equations (35) and (37) are equivalent.
Theorem 5.
Let the conditions of Theorem 4 hold; then the Markov chain $X(t)$ is $1D$-exponentially weakly ergodic. Under perturbations small enough (see Equation (16)), the perturbed chain $\bar X(t)$ is also $1D$-exponentially weakly ergodic, and the perturbation bound (17) in the $1D$-norm holds. If, moreover, $W = \inf_{i \ge 1} \frac{d_i}{i} > 0$, then both chains $X(t)$ and $\bar X(t)$ have limiting means, and the estimate (18) holds for the perturbation of the mathematical expectation.
To obtain perturbation bounds in the natural norm, it suffices to use Inequality (24) mentioned above.
Corollary 2.
Under the conditions of Theorem 5, the following perturbation bound in the natural $l_1$- (total variation) norm holds:
$$\limsup_{t\to\infty}\|p(t) - \bar p(t)\| \le \frac{4M\bigl(M\widehat B\,F + a\,\widehat f\bigr)}{a d\bigl(a - M\widehat B\bigr)}.$$
Note that it is convenient to use the results formulated above for the construction of perturbation bounds for Markov chains of the first four classes (see, e.g., [12,44,46,47]).
For chains of the fifth class, as a rule, it is convenient to use the approach based on uniform bounds as shown below. These models were considered, e.g., in [14,64,65].
Let
$$\beta^*(t) = \inf_{k \ge 1} a_{0k}(t).$$
Theorem 6.
Let the catastrophe intensities be essential, that is,
$$\int_0^\infty \beta^*(t)\,dt = +\infty. \tag{40}$$
Then the chain $X(t)$ is weakly ergodic in the uniform operator topology and, for any initial conditions $p^*(0)$, $p^{**}(0)$ and any $0 \le s \le t$, the following convergence rate estimate holds:
$$\|p^*(t) - p^{**}(t)\| \le 2\,e^{-\int_s^t \beta^*(\tau)\,d\tau}.$$
To prove this theorem, we use the same technique as in [14]. Rewrite the forward Kolmogorov system (3) in the form
$$\frac{dp}{dt} = A^*(t)\,p + g(t), \qquad t \ge 0.$$
Here $g(t) = \bigl(\beta^*(t), 0, 0, \dots\bigr)^T$, $A^*(t) = \bigl(a^*_{ij}(t)\bigr)_{i,j=0}^{\infty}$, and
$$a^*_{ij}(t) = \begin{cases} a_{0j}(t) - \beta^*(t), & \text{if } i = 0,\\ a_{ij}(t), & \text{otherwise.} \end{cases}$$
The solution of this equation can be written as
$$p(t) = U^*(t,0)\,p(0) + \int_0^t U^*(t,\tau)\,g(\tau)\,d\tau,$$
where $U^*(t,s)$ is the Cauchy operator of the differential equation
$$\frac{dz}{dt} = A^*(t)\,z.$$
Note that the matrix $A^*(t)$ is essentially nonnegative for all $t \ge 0$, and its logarithmic norm equals
$$\gamma\bigl(A^*(t)\bigr) = \sup_i\Bigl(a^*_{ii}(t) + \sum_{j \ne i} a^*_{ji}(t)\Bigr) = -\beta^*(t).$$
Hence,
$$\|p^*(t) - p^{**}(t)\| \le e^{-\int_s^t \beta^*(\tau)\,d\tau}\,\|p^*(s) - p^{**}(s)\| \le 2\,e^{-\int_s^t \beta^*(\tau)\,d\tau}.$$
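As a quick numerical sanity check (my own illustration, assuming NumPy; the intensity matrix is random and hypothetical), one can verify that the matrix $A^*(t)$ constructed above is essentially nonnegative and has logarithmic norm exactly $-\beta^*(t)$:

import numpy as np

rng = np.random.default_rng(0)
n = 6
Q = rng.uniform(0.1, 1.0, size=(n, n))       # random positive transition intensities
np.fill_diagonal(Q, 0.0)
A = Q.T.copy()                               # transposed intensity matrix
np.fill_diagonal(A, -A.sum(axis=0))          # columns sum to zero

beta_star = A[0, 1:].min()                   # beta* = inf_{k >= 1} a_{0k}
A_star = A.copy()
A_star[0, :] -= beta_star                    # a*_{0j} = a_{0j} - beta*
g = np.zeros(n); g[0] = beta_star            # g = (beta*, 0, ..., 0)^T

# A* is essentially nonnegative, so its logarithmic norm is the largest column sum:
print(A_star.sum(axis=0).max(), -beta_star)  # both equal -beta*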
Theorem 7.
Let, instead of Equation (40), the stronger condition
$$e^{-\int_s^t \beta^*(\tau)\,d\tau} \le c^* e^{-b^*(t-s)}$$
hold. Then the chain $X(t)$ is exponentially weakly ergodic in the uniform operator topology, and if the perturbations are small enough, that is, $\|\hat A(t)\| \le \varepsilon$ for almost all $t \ge 0$, then the perturbed chain $\bar X(t)$ is also exponentially weakly ergodic, and the perturbation bound (7) holds with $c = c^*$ and $b = b^*$.

5. Examples

First note that many examples of perturbation bounds for queueing systems have been considered in [11,12,14,44,46,66,67].
Here, to compare both approaches, we will mostly deal with the queueing system $M_t/M_t/N/N$ with losses and 1-periodic intensities. The preceding papers on this model considered other problems. For example, in [68], the asymptotics of the rate of convergence to the stationary regime as $N \to \infty$ was studied, whereas the paper [69] dealt with the asymptotics of the convergence parameter under various limit relations between the intensities and the dimensionality of the model. In [66,67], perturbation bounds were considered under additional assumptions.
Let $N \ge 1$ be the number of servers in the system. Assume that the customer arrival intensity $\lambda(t)$ and the service intensity of a single server $\mu(t)$ are 1-periodic nonnegative functions integrable on the interval $[0,1]$. Then the number of customers in the system (the queue length) $X(t)$ is a finite Markov chain of Class 1, that is, a BDP with the intensities $\lambda_{k-1}(t) = \lambda(t)$ and $\mu_k(t) = k\mu(t)$ for $k = 1, \dots, N$.
It should be especially noted that the process $X(t)$ is weakly ergodic (and, obviously, exponentially and uniformly ergodic, since the intensities are periodic and the state space is finite) if and only if
$$\int_0^1 \bigl(\lambda(t) + \mu(t)\bigr)\,dt > 0$$
(see, e.g., [70]).
For definiteness, assume that $\int_0^1 \mu(t)\,dt > 0$.
Apply the approach described in Theorems 3 and 4.
Let all $d_k = 1$. Then
$$B^{**}(t) = \begin{pmatrix}
-(\lambda+\mu) & \mu & 0 & \cdots & 0\\[4pt]
\lambda & -(\lambda+2\mu) & 2\mu & \cdots & 0\\[4pt]
\vdots & \ddots & \ddots & \ddots & \vdots\\[4pt]
0 & \cdots & 0 & \lambda & -(\lambda+N\mu)
\end{pmatrix},$$
and in Equation (34) we have $\alpha_j(t) = \mu(t)$ for all $j$; hence, $\alpha(t) = \mu(t)$.
Therefore, Theorem 3 yields the estimate
$$\|p^*(t) - p^{**}(t)\|_{1D} \le e^{-\int_s^t \mu(\tau)\,d\tau}\,\|p^*(s) - p^{**}(s)\|_{1D}.$$
To find the constants in these estimates, let $\mu^* = \int_0^1 \mu(\tau)\,d\tau$ and consider
$$\int_0^t \mu(\tau)\,d\tau = \mu^* t + \int_0^{\{t\}} \bigl(\mu(\tau) - \mu^*\bigr)\,d\tau. \tag{52}$$
To bound the second summand in Equation (52), set $u = \{t\}$ (the fractional part of $t$); then
$$\Bigl|\int_0^{u} \bigl(\mu(\tau) - \mu^*\bigr)\,d\tau\Bigr| \le K^* = \sup_{u \in [0,1]}\Bigl|\int_0^{u} \bigl(\mu(\tau) - \mu^*\bigr)\,d\tau\Bigr|.$$
Therefore,
$$e^{-\int_s^t \mu(\tau)\,d\tau} \le e^{K^*} e^{-\mu^*(t-s)}. \tag{54}$$
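The constants $\mu^*$ and $K^*$ are easy to compute numerically for a given 1-periodic intensity; the following sketch (my own helper, assuming SciPy; the intensity is an illustrative placeholder) does this for $\mu(t) = 1 + \sin 2\pi t$:

import numpy as np
from scipy.integrate import quad

def periodic_constants(intensity, n_grid=200):
    """Return (mu_star, K_star) for a 1-periodic, integrable intensity function."""
    mu_star = quad(intensity, 0.0, 1.0)[0]
    grid = np.linspace(0.0, 1.0, n_grid + 1)
    primitive = [quad(lambda s: intensity(s) - mu_star, 0.0, u)[0] for u in grid]
    return mu_star, float(np.max(np.abs(primitive)))

print(periodic_constants(lambda t: 1.0 + np.sin(2.0 * np.pi * t)))   # ~ (1.0, 1/pi)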
Thus, for the queueing system $M_t/M_t/N/N$, the conditions of Theorem 5 and Corollary 2 hold with
$$d = 1, \qquad M = M^* = e^{K^*}, \qquad a = a^* = \mu^*, \qquad W = \frac{1}{N}.$$
These statements imply the following perturbation bounds:
$$\limsup_{t\to\infty}\|p(t) - \bar p(t)\| \le \frac{4 e^{K^*}\bigl(e^{K^*}\widehat B\,F + \mu^*\widehat f\bigr)}{\mu^*\bigl(\mu^* - e^{K^*}\widehat B\bigr)} \tag{56}$$
for the vector of state probabilities, and
$$\limsup_{t\to\infty}|\phi(t) - \bar\phi(t)| \le \frac{N e^{K^*}\bigl(e^{K^*}\widehat B\,F + \mu^*\widehat f\bigr)}{\mu^*\bigl(\mu^* - e^{K^*}\widehat B\bigr)} \tag{57}$$
for the limiting means.
Moreover, for these bounds to be meaningful, additional information is required concerning the form of the perturbed intensity matrix. The simplest bounds can be obtained if it is assumed that the perturbed Markov chain is also a BDP with the same state space and with birth and death intensities $\bar\lambda_{k-1}(t)$ and $\bar\mu_k(t)$, respectively. Then, if the perturbations of the birth and death intensities themselves do not exceed $\varepsilon$ for almost all $t \ge 0$, we have $\widehat f \le \varepsilon$, $\widehat B \le 5\varepsilon$, and $F \le L$, so that the bounds (56) and (57) take the form
$$\limsup_{t\to\infty}\|p(t) - \bar p(t)\| \le \frac{4 e^{K^*}\bigl(5L e^{K^*} + \mu^*\bigr)\varepsilon}{\mu^*\bigl(\mu^* - 5\varepsilon e^{K^*}\bigr)}$$
for the vectors of state probabilities, and
$$\limsup_{t\to\infty}|\phi(t) - \bar\phi(t)| \le \frac{4N e^{K^*}\bigl(5L e^{K^*} + \mu^*\bigr)\varepsilon}{\mu^*\bigl(\mu^* - 5\varepsilon e^{K^*}\bigr)}$$
for the limiting means.
On the other hand, Theorem 7 can be applied as well. To obtain the corresponding parameters, Inequality (24) and the fact that $\|D\| = N$ are used. Thus, Theorem 7 holds for the queueing system $M_t/M_t/N/N$ with the parameter values
$$c = c^* = 4N e^{K^*}, \qquad b = b^* = \mu^*.$$
According to this theorem, we obtain the estimate
$$\limsup_{t\to\infty}\|p(t) - \bar p(t)\| \le \frac{\bigl(1 + K^* + \log(2N)\bigr)\varepsilon}{\mu^*}. \tag{61}$$
Moreover, the Markov chains $X(t)$ and $\bar X(t)$ have limiting means and
$$|\phi(t) - \bar\phi(t)| \le \frac{N\bigl(1 + K^* + \log(2N)\bigr)\varepsilon}{\mu^*}. \tag{62}$$
It is worth noting that, for the estimates (61) and (62) to hold, only the smallness of the perturbations is required; no additional information concerning the structure of the intensity matrix is needed.
Thus, in this example with a finite state space, the uniform bounds turn out to be more accurate.
Now consider a more specific example: let $N = 299$, $\lambda(t) = 200\,(1 + \sin 2\pi\omega t)$, and $\mu(t) = 1$.
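Since here $\mu(t) \equiv 1$, we have $\mu^* = 1$ and $K^* = 0$, and the uniform bounds (61) and (62) can be evaluated directly; the following sketch (my own illustration, with a hypothetical perturbation level $\varepsilon = 10^{-3}$) shows the resulting numerical values:

import numpy as np

N, mu_star, K_star, eps = 299, 1.0, 0.0, 1e-3
prob_bound = (1.0 + K_star + np.log(2 * N)) * eps / mu_star   # bound (61)
mean_bound = N * prob_bound                                   # bound (62)
print(prob_bound, mean_bound)                                 # ~ 7.4e-3 and ~ 2.2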
Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5 show the expected number of customers in the system and the perturbation bounds for some of the most probable states for $\omega = 1$; Figure 6 and Figure 7 show the corresponding plots for $\omega = 0.5$.
On the other hand, as has already been noted, for Markov chains of Classes 1–4 with a countable state space no uniform bounds can be constructed.
We now consider the construction of bounds using the example of a rather simple model which, however, does not belong to the most well-studied Class 1 (that is, which is not a BDP).
Consider a queueing system in which customers arrive singly or in pairs with the respective intensities $a_1(t) = \lambda(t)$ and $a_2(t) = 0.5\lambda(t)$, but are served one by one by one of two servers with the constant intensities $\mu_k(t) = \min(k,2)\,\mu$, where $\lambda(t)$ is a 1-periodic function integrable on the interval $[0,1]$. Then the number of customers in this system belongs to Class 2, and the corresponding matrix $B^{**}(t)$ has the form
$$B^{**}(t) = \begin{pmatrix}
a_{11} & \frac{d_1}{d_2}\mu & 0 & 0 & \cdots\\[4pt]
\frac{d_2}{d_1}\lambda & a_{22} & \frac{d_2}{d_3}\,2\mu & 0 & \cdots\\[4pt]
\frac{d_3}{d_1}\,0.5\lambda & \frac{d_3}{d_2}\lambda & a_{33} & \frac{d_3}{d_4}\,2\mu & \cdots\\[4pt]
0 & \ddots & \ddots & \ddots & \ddots
\end{pmatrix},$$
where $a_{11}(t) = -(1.5\lambda(t) + \mu)$ and $a_{kk}(t) = -(1.5\lambda(t) + 2\mu)$ for $k \ge 2$. This matrix is essentially nonnegative, so that, in the expression for the logarithmic norm, the absolute value signs can be omitted. Let $d_1 = 1$ and $d_{k+1} = \delta d_k$ with $\delta > 1$, and consider the expressions in Equation (34). We have
$$\alpha_1(t) = \mu - \lambda(t)\bigl(0.5\delta^2 + \delta - 1.5\bigr),$$
$$\alpha_2(t) = \mu\bigl(2 - \delta^{-1}\bigr) - \lambda(t)\bigl(0.5\delta^2 + \delta - 1.5\bigr),$$
$$\alpha_k(t) = 2\mu\bigl(1 - \delta^{-1}\bigr) - \lambda(t)\bigl(0.5\delta^2 + \delta - 1.5\bigr), \qquad k \ge 3.$$
Then, for $\delta \le 2$, we obtain
$$\alpha(t) = \inf_{j \ge 1}\alpha_j(t) = 2\mu\bigl(1 - \delta^{-1}\bigr) - \lambda(t)\bigl(0.5\delta^2 + \delta - 1.5\bigr) = (\delta - 1)\Bigl(\frac{2\mu}{\delta} - 0.5\,\lambda(t)\,(\delta + 3)\Bigr),$$
and the condition
$$\alpha^* = \int_0^1 (\delta - 1)\Bigl(\frac{2\mu}{\delta} - 0.5\,\lambda(t)\,(\delta + 3)\Bigr)dt = \frac{\delta - 1}{2}\Bigl(\frac{4\mu}{\delta} - \lambda^*(\delta + 3)\Bigr) > 0,$$
where $\lambda^* = \int_0^1 \lambda(t)\,dt$, will a fortiori hold if $\mu > \lambda^*$, with a corresponding choice of $\delta \in (1, 2]$.
The further reasoning is almost the same as in the preceding example: instead of Equation (54), we obtain
$$e^{-\int_s^t \alpha(\tau)\,d\tau} \le e^{K^*} e^{-\alpha^*(t-s)},$$
where now
$$K^* = \sup_{u \in [0,1]}\Bigl|\int_0^u \bigl(\alpha(\tau) - \alpha^*\bigr)\,d\tau\Bigr|.$$
Hence, the conditions of Theorem 5 and Corollary 2 hold for the number of customers in the system under consideration with
$$d = 1, \qquad M = M^* = e^{K^*}, \qquad a = a^* = \alpha^*, \qquad W = \inf_{k \ge 1}\frac{\delta^{k-1}}{k}.$$
To construct meaningful perturbation bounds, additional information concerning the form of the perturbed intensity matrix is necessarily required. Indeed, Example 1 in Section 2 shows that, if the possibility of the arrival of an arbitrary number of customers ("mass arrivals" in the terminology of [58]) to an empty queue is admitted, then an arbitrarily small (in the uniform norm) perturbation of the intensity matrix can "spoil" all the characteristics of the process. Satisfactory bounds can be constructed, for example, if we know that the intensity matrix of the perturbed system has the same form, that is, customers arrive either singly or in pairs and are served one by one. Then, if the perturbations of the intensities themselves do not exceed $\varepsilon$ for almost all $t \ge 0$, we have $\widehat f \le 5\varepsilon$, $\widehat B \le 5\varepsilon$, and $F \le L$, so that, instead of Equations (56) and (57), we obtain
$$\limsup_{t\to\infty}\|p(t) - \bar p(t)\| \le \frac{20\, e^{K^*}\varepsilon\bigl(L e^{K^*} + \alpha^*\bigr)}{\alpha^*\bigl(\alpha^* - 20\,\varepsilon e^{K^*}\bigr)}$$
for the vectors of state probabilities, and
$$\limsup_{t\to\infty}|\phi(t) - \bar\phi(t)| \le \frac{20\, e^{K^*}\varepsilon\bigl(L e^{K^*} + \alpha^*\bigr)}{W\alpha^*\bigl(\alpha^* - 20\,\varepsilon e^{K^*}\bigr)}$$
for the limiting means.
For example, let $\lambda(t) = 1 + \sin 2\pi t$, $\mu(t) = 3$, and $\delta = 2$. Then we have
$$\alpha(t) = \mu - 2.5\,\lambda(t), \qquad \alpha^* = 0.5, \qquad W = 1.$$
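For these values, the constants $\alpha^*$, $K^*$, and $W$, as well as the two bounds above, can be computed numerically; the following sketch (my own illustration, assuming SciPy, with a hypothetical perturbation level $\varepsilon = 10^{-4}$) does this:

import numpy as np
from scipy.integrate import quad

lam = lambda t: 1.0 + np.sin(2.0 * np.pi * t)
mu, delta, eps = 3.0, 2.0, 1e-4

alpha = lambda t: (delta - 1.0) * (2.0 * mu / delta - 0.5 * lam(t) * (delta + 3.0))
alpha_star = quad(alpha, 0.0, 1.0)[0]                                     # = 0.5
grid = np.linspace(0.0, 1.0, 201)
K_star = max(abs(quad(lambda s: alpha(s) - alpha_star, 0.0, u)[0]) for u in grid)
W = min(delta ** (k - 1) / k for k in range(1, 200))                      # = 1 for delta = 2
L = 1.5 * 2.0 + 2.0 * mu                                                  # sup_t |a_kk(t)| = 9

M = np.exp(K_star)
numerator = 20.0 * M * eps * (L * M + alpha_star)
prob_bound = numerator / (alpha_star * (alpha_star - 20.0 * eps * M))     # state probabilities
mean_bound = prob_bound / W                                               # limiting means (W = 1)
print(alpha_star, K_star, W, prob_bound, mean_bound)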
Further, we follow the method described in detail in [71,72]: namely, we choose the dimension of the truncated process (300 in our case), the interval on which the desired accuracy is to be achieved ($[0,100]$ in the example under consideration), and the "limit" interval itself (here it is $[100,101]$).
Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13 show the plots of the expected number of customers in the system and of some of the most probable states.

Author Contributions

Conceptualization, A.Z. and V.K.; Software, Y.S.; Investigation, A.Z. (Section 1, Section 2, Section 3, Section 4 and Section 5), V.K. (Section 1, Section 2 and Section 3) and Y.S. (Section 4 and Section 5); Writing—original draft preparation, A.Z., V.K. and Y.S.; Writing—review and editing, A.Z., V.K. and Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

Section 1, Section 2 and Section 3 were written by Korolev and Zeifman under the support of the Russian Science Foundation, project 18-11-00155. Section 4 and Section 5 were written by Satin and Zeifman under the support of the Russian Science Foundation, project 19-11-00020.

Conflicts of Interest

The authors declare that there is no conflict of interest.

References

  1. Kalashnikov, V.V. Qualitative analysis of the behavior of complex systems by the method of test functions. M. Nauka 1978, 247. (In Russian) [Google Scholar]
  2. Stoyan, D. Qualitative Eigenschaften Und Abschätzungen Stochastischer Modelle; Akademie-Verlag: Berlin, Germany, 1977. (In German) [Google Scholar]
  3. Zolotarev, V.M. Quantitative estimates for the continuity property of queueing systems of type G/G/. Theory Prob. Its Appl. 1977, 22, 679–691. [Google Scholar] [CrossRef]
  4. Zolotarev, V.M. Probability metrics. Theory Prob. Its Appl. 1983, 28, 264–287. [Google Scholar] [CrossRef]
  5. Kalashnikov, V.V.; Zolotarev, V.M. (Eds.) Stability Problems for Stochastic Models; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1983; Volume 982. [Google Scholar]
  6. Gnedenko, B.V.; Korolev, V.Y. Random Summation: Limit Theorems and Applications; CRC Press: Boca Raton, FL, USA, 1996. [Google Scholar]
  7. Kartashov, N.V. Strongly stable Markov chains. J. Soviet Math. 1986, 34, 1493–1498. [Google Scholar] [CrossRef]
  8. Kartashov, N.V. Criteria for uniform ergodicity and strong stability of Markov chains with a common phase space. Theory Prob. Its Appl. 1985, 30, 71–89. [Google Scholar]
  9. Zeifman, A.I. Stability for continuous-time nonhomogeneous Markov chains. Lect. Notes Math. 1985, 1155, 401–414. [Google Scholar]
  10. Zeifman, A. Qualitative properties of inhomogeneous birth and death processes. J. Sov. Math. 1991, 57, 3217–3224. [Google Scholar] [CrossRef]
  11. Zeifman, A.I.; Isaacson, D. On strong ergodicity for nonhomogeneous continuous-time Markov chains. Stochast. Process. Their Appl. 1994, 50, 263–273. [Google Scholar] [CrossRef] [Green Version]
  12. Zeifman, A.I. Stability of birth and death processes. J. Math. Sci. 1998, 91, 3023–3031. [Google Scholar] [CrossRef]
  13. Altman, E.; Avrachenkov, K.; Núnez-Queija, R. Perturbation analysis for denumerable Markov chains with application to queueing models. Adv. Appl. Prob. 2004, 36, 839–853. [Google Scholar] [CrossRef] [Green Version]
  14. Zeifman, A.I.; Korotysheva, A. Perturbation bounds for Mt/Mt/N queue with catastrophes. Stochast. Models 2012, 28, 49–62. [Google Scholar] [CrossRef]
  15. Aldous, D.J.; Fill, J. Reversible Markov Chains and Random Walks on Graphs. Chapter 8. Available online: http://www.stat.berkeley.edu/users/aldous/RWG/book.html (accessed on 13 February 2020).
  16. Diaconis, P.; Stroock, D. Geometric Bounds for Eigenvalues of Markov Chains. Ann. Appl. Prob. 1991, 1, 36–61. [Google Scholar] [CrossRef]
  17. Mitrophanov, A.Y. Stability and exponential convergence of continuous-time Markov chains. J. Appl. Prob. 2003, 40, 970–979. [Google Scholar] [CrossRef]
  18. Mitrophanov, A.Y. The spectral gap and perturbation bounds for reversible continuous-time Markov chains. J. Appl. Prob. 2004, 41, 1219–1222. [Google Scholar] [CrossRef] [Green Version]
  19. Mitrophanov, A.Y. Ergodicity coefficient and perturbation bounds for continuous-time Markov chains. Math. Inequal. Appl. 2005, 8, 159–168. [Google Scholar] [CrossRef]
  20. Mitrophanov, A.Y. Estimates of sensitivity to perturbations for finite homogeneous continuous-time Markov chains. Theory Probab. Its Appl. 2006, 50, 319–326. [Google Scholar] [CrossRef] [Green Version]
  21. Zeifman, A.I.; Korotysheva, A.V.; Panfilova, T.Y.L.; Shorgin, S.Y. Stability bounds for some queueing systems with catastrophes. Inf. Primeneniya 2011, 5, 27–33. [Google Scholar]
  22. Mitrophanov, A.Y. Connection between the Rate of Convergence to Stationarity and Stability to Perturbations for Stochastic and Deterministic Systems. In Proceedings of the 38th International Conference Dynamics Days Europe (DDE 2018), Loughborough, UK, 3–7 September 2018. [Google Scholar]
  23. Shao, J.; Yuan, C. Stability of regime-switching processes under perturbation of transition rate matrices. Nonlinear Anal. Hybrid Syst. 2019, 33, 211–226. [Google Scholar] [CrossRef] [Green Version]
  24. Kartashov, N.V. Strong Stable Markov Chains; VSP: Utrecht, The Netherlands, 1996. [Google Scholar]
  25. Ferre, D.; Herve, L.; Ledoux, J. Regular perturbation of V-geometrically ergodic Markov chains. J. Appl. Prob. 2013, 50, 84–194. [Google Scholar] [CrossRef] [Green Version]
  26. Mouhoubi, Z.; Aïssani, D. New perturbation bounds for denumerable Markov chains. Linear Algebra Its Appl. 2010, 432, 1627–1649. [Google Scholar] [CrossRef] [Green Version]
  27. Meyn, S.P.; Tweedie, R.L. Computable bounds for geometric convergence rates of Markov chains. Ann. Appl. Prob. 1994, 4, 981–1012. [Google Scholar] [CrossRef]
  28. Abbas, K.; Berkhout, J.; Heidergott, B. A critical account of perturbation analysis of Markov chains. arXiv 2016, arXiv:1609.04138. [Google Scholar]
  29. Jiang, S.; Liu, Y.; Tang, Y. A unified perturbation analysis framework for countable Markov chains. Linear Algebra Its Appl. 2017, 529, 413–440. [Google Scholar] [CrossRef]
  30. Liu, Y.; Li, W. Error bounds for augmented truncation approximations of Markov chains via the perturbation method. Adv. Appl. Prob. 2018, 50, 645–669. [Google Scholar] [CrossRef]
  31. Medina-Aguayo, F.; Rudolf, D.; Schweizer, N. Perturbation bounds for Monte Carlo within Metropolis via restricted approximations. Stochast. Process. Appl. 2019. [Google Scholar] [CrossRef]
  32. Negrea, J.; Rosenthal, J.S. Error bounds for approximations of geometrically ergodic Markov chains. arXiv 2017, arXiv:1702.07441. [Google Scholar]
  33. Rudolf, D.; Schweizer, N. Perturbation theory for Markov chains via Wasserstein distance. Bernoulli 2018, 24, 2610–2639. [Google Scholar] [CrossRef] [Green Version]
  34. Liu, Y. Perturbation bounds for the stationary distributions of Markov chains. SIAM J. Matrix Anal. Appl. 2012, 33, 1057–1074. [Google Scholar] [CrossRef]
  35. Thiede, E.; Van Koten, B.; Weare, J. Sharp entrywise perturbation bounds for Markov chains. SIAM J. Matrix Anal. Appl. 2015, 36, 917–941. [Google Scholar] [CrossRef] [Green Version]
  36. Truquet, L. A perturbation analysis of some Markov chains models with time-varying parameters. arXiv 2017, arXiv:1706.03214. [Google Scholar]
  37. Vial, D.; Subramanian, V. Restart perturbations for lazy, reversible Markov chains: Trichotomy and pre-cutoff equivalence. arXiv 2019, arXiv:1907.02926. [Google Scholar]
  38. Zheng, Z.; Honnappa, H.; Glynn, P.W. Approximating Performance Measures for Slowly Changing Non-stationary Markov Chains. arXiv 2018, arXiv:1805.01662. [Google Scholar]
  39. Gnedenko, B.; Soloviev, A. On the conditions of the existence of final probabilities for a Markov process. Math. Operationsforsch. Statist. 1973, 4, 379–390. [Google Scholar]
  40. Gnedenko, D.B. On a generalization of Erlang formulae. Zastosow. Mater. 1972, 12, 239–242. [Google Scholar]
  41. Massey, W.A.; Whitt, W. Uniform acceleration expansions for Markov chains with time-varying rates. Ann. Appl. Prob. 1998, 8, 1130–1155. [Google Scholar] [CrossRef]
  42. Gnedenko, B.V.; Makarov, I.P. Properties of a problem with losses in the case of periodic intensities. Differ. Eq. 1971, 7, 1696–1698. (In Russian) [Google Scholar]
  43. Zeifman, A.I.; Korolev, V.Y.; Korotysheva, A.V.; Shorgin, S.Y. General bounds for nonstationary continuous-time Markov chains. Inf. Primeneniya 2014, 8, 106–117. [Google Scholar]
  44. Zeifman, A.I.; Korolev, V.Y. On perturbation bounds for continuous-time Markov chains. Stat. Probab. Lett. 2014, 88, 66–72. [Google Scholar] [CrossRef]
  45. Satin, Y.; Zeifman, A.; Kryukova, A. On the Rate of Convergence and Limiting Characteristics for a Nonstationary Queueing Model. Mathematics 2019, 7, 678. [Google Scholar] [CrossRef] [Green Version]
  46. Zeifman, A.; Korotysheva, A.; Korolev, V.; Satin, Y.; Bening, V. Perturbation bounds and truncations for a class of Markovian queues. Queueing Syst. 2014, 76, 205–221. [Google Scholar] [CrossRef]
  47. Zeifman, A.; Razumchik, R.; Satin, Y.; Kiseleva, K.; Korotysheva, A.; Korolev, V. Bounds on the Rate of Convergence for One Class of Inhomogeneous Markovian Queueing Models with Possible Batch Arrivals and Services. Int. J. Appl. Math. Comput. Sci. 2018, 28, 141–154. [Google Scholar] [CrossRef] [Green Version]
  48. Zeifman, A.; Satin, Y.; Kiseleva, K.; Korolev, V.; Panfilova, T. On limiting characteristics for a non-stationary two-processor heterogeneous system. Appl. Math. Comput. 2019, 351, 48–65. [Google Scholar] [CrossRef] [Green Version]
  49. Daleckii, J.L.; Krein, M.G. Stability of Solutions of Differential Equations in Banach Space; Translations of Mathematical Monographs; American Mathematical Society: Providence, RI, USA, 1974; Volume 43. [Google Scholar]
  50. Zeifman, A.; Sipin, A.; Korolev, V.; Shilova, G.; Kiseleva, K.; Korotysheva, A.; Satin, Y. On Sharp Bounds on the Rate of Convergence for Finite Continuous-Time Markovian Queueing Models. In Computer Aided Systems Theory EUROCAST 2017; Lecture Notes in Computer, Science; Moreno-Diaz, R., Pichler, F., Quesada-Arencibia, A., Eds.; Springer: Cham, Switzerland, 2018; Volume 10672, pp. 20–28. [Google Scholar]
  51. Nelson, R.; Towsley, D.; Tantawi, A.N. Performance analysis of parallel processing systems. IEEE Trans. Softw. Eng. 1988, 14, 532–540. [Google Scholar] [CrossRef]
  52. Satin, Y.A.; Zeifman, A.I.; Korotysheva, A.V.; Shorgin, S.Y. On a class of Markovian queues. Inf. Its Appl. 2011, 5, 6–12. (In Russian) [Google Scholar]
  53. Satin, Y.A.; Zeifman, A.I.; Korotysheva, A.V. On the rate of convergence and truncations for a class of Markovian queueing systems. Theory Prob. Its Appl. 2013, 57, 529–539. [Google Scholar] [CrossRef]
  54. Li, J.; Zhang, L. MX/M/c Queue with catastrophes and state-dependent control at idle time. Front. Math. China 2017, 12, 1427–1439. [Google Scholar] [CrossRef]
  55. Di Crescenzo, A.; Giorno, V.; Nobile, A.G.; Ricciardi, L.M. A note on birth-death processes with catastrophes. Stat. Prob. Lett. 2008, 78, 2248–2257. [Google Scholar] [CrossRef] [Green Version]
  56. Dudin, A.; Nishimura, S. A BMAP/SM/1 queueing system with Markovian arrival input of disasters. J. Appl. Prob. 1999, 36, 868–881. [Google Scholar] [CrossRef]
  57. Dudin, A.; Karolik, A. BMAP/SM/1 queue with Markovian input of disasters and non-instantaneous recovery. Perform. Eval. 2001, 45, 19–32. [Google Scholar] [CrossRef]
  58. Zhang, L.; Li, J. The M/M/c queue with mass exodus and mass arrivals when empty. J. Appl. Prob. 2015, 52, 990–1002. [Google Scholar] [CrossRef]
  59. Zeifman, A.I. Upper and lower bounds on the rate of convergence for nonhomogeneous birth and death processes. Stochast. Process. Their Appl. 1995, 59, 157–173. [Google Scholar] [CrossRef]
  60. Chen, A.; Renshaw, E. The M/M/1 queue with mass exodus and mass arrivals when empty. J. Appl. Prob. 1997, 34, 192–207. [Google Scholar] [CrossRef]
  61. Zeifman, A.I.; Korolev, V.Y.; Satin, Y.A.; Kiseleva, K.M. Lower bounds for the rate of convergence for continuous-time inhomogeneous Markov chains with a finite state space. Statist. Prob. Lett. 2018, 137, 84–90. [Google Scholar] [CrossRef]
  62. Zeifman, A.I.; Leorato, S.; Orsingher, E.; Satin, Y.; Shilova, G. Some universal limits for nonhomogeneous birth and death processes. Queueing Syst. 2006, 52, 139–151. [Google Scholar] [CrossRef] [Green Version]
  63. Granovsky, B.L.; Zeifman, A.I. Nonstationary Queues: Estimation of the Rate of Convergence. Queueing Syst. 2004, 46, 363–388. [Google Scholar] [CrossRef]
  64. Zeifman, A.; Korotysheva, A.; Satin, Y.; Razumchik, R.; Korolev, V.; Shorgin, S. Ergodicity and truncation bounds for inhomogeneous birth and death processes with additional transitions from and to origin. Stochast. Models 2017, 33, 598–616. [Google Scholar] [CrossRef]
  65. Zeifman, A.I.; Korotysheva, A.; Satin, Y.; Kiseleva, K.; Korolev, V.; Shorgin, S. Bounds For Markovian Queues With Possible Catastrophes. In ECMS. May 2017; pp. 628–634. Available online: http://www.scs-europe.net/dlib/2017/2017-0628.htm (accessed on 13 February 2020).
  66. Zeifman, A.; Korotysheva, A.; Satin, Y. On stability for Mt/Mt/N/N queue. Int. Congr. Ultra Modern Telecommun. Control Syst. Moscow 2010, 1102–1105. [Google Scholar] [CrossRef]
  67. Zeifman, A.; Shorgin, S.; Korotysheva, A.; Bening, V. Stability bounds for Mt/Mt/N/N + R queue. In Proceedings of the 5th International ICST Conference on Performance Evaluation Methodologies and Tools (VALUETOOLS ’11), ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering), ICST, Brussels, Belgium, 16–20 May 2011; pp. 434–438. [Google Scholar]
  68. Van Doorn, E.A.; Zeifman, A.I.; Panfilova, T.L. Bounds and asymptotics for the rate of convergence of birth-death processes. Theory Prob. Its Appl. 2010, 54, 97–113. [Google Scholar] [CrossRef] [Green Version]
  69. Van Doorn, E.A. Rate of convergence to stationarity of the system M/M/N/N+R. Top 2011, 19, 336–350. [Google Scholar] [CrossRef]
  70. Zeifman, A.I. On the nonstationary Erlang loss model. Autom. Remote Control 2009, 70, 2003–2012. [Google Scholar] [CrossRef]
  71. Zeifman, A.; Satin, Y.; Korolev, V.; Shorgin, S. On truncations for weakly ergodic inhomogeneous birth and death processes. Int. J. Appl. Math. Comput. Sci. 2014, 24, 503–518. [Google Scholar] [CrossRef] [Green Version]
  72. Zeifman, A.I.; Korotysheva, A.V.; Korolev, V.Y.; Satin, Y.A. Truncation bounds for approximations of inhomogeneous continuous-time Markov chains. Theory Prob. Appl. 2017, 61, 513–520. [Google Scholar] [CrossRef]
Figure 1. Example 1. The means E(t,0) and E(t,N) for the original process, t ∈ [0,19], ω = 1.
Figure 2. Example 1. The perturbation bounds for the limit expectation E(t,0), t ∈ [19,20], ω = 1.
Figure 3. Example 1. The perturbation bounds for the "limit" probability Pr(X(t) = 190), t ∈ [19,20], ω = 1.
Figure 4. Example 1. The perturbation bounds for the "limit" probability Pr(X(t) = 200), t ∈ [19,20], ω = 1.
Figure 5. Example 1. The perturbation bounds for the "limit" probability Pr(X(t) = 210), t ∈ [19,20], ω = 1.
Figure 6. Example 1. The expectations E(t,0) and E(t,N) for the original process, t ∈ [0,18], ω = 0.5.
Figure 7. Example 1. The perturbation bounds for the limit expectation E(t,0), t ∈ [18,20], ω = 0.5.
Figure 8. Example 2. The expectations E(t,0) and E(t,299) for the original process, t ∈ [0,100].
Figure 9. Example 2. The perturbation bounds for the limit expectation E(t,0), t ∈ [100,101].
Figure 10. Example 2. The probabilities of the empty queue for X(0) = 0 and X(0) = 299 for the original process, t ∈ [0,100].
Figure 11. Example 2. The perturbation bounds for the "limit" probability Pr(X(t) = 0), t ∈ [100,101].
Figure 12. Example 2. The perturbation bounds for the "limit" probability Pr(X(t) = 1), t ∈ [100,101].
Figure 13. Example 2. The perturbation bounds for the "limit" probability Pr(X(t) = 2), t ∈ [100,101].
