Article

Computing the Matrix G of Multi-Dimensional Markov Chains of M/G/1 Type

by Valeriy Naumov 1,* and Konstantin Samouylov 2
1 Service Innovation Research Institute, Annankatu 8 A, 00120 Helsinki, Finland
2 Institute of Computer Science and Telecommunications, RUDN University, 6 Miklukho-Maklaya St., Moscow 117198, Russia
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(8), 1223; https://doi.org/10.3390/math13081223
Submission received: 23 February 2025 / Revised: 4 April 2025 / Accepted: 6 April 2025 / Published: 8 April 2025
(This article belongs to the Special Issue Queue and Stochastic Models for Operations Research, 3rd Edition)

Abstract: We consider Md-M/G/1 processes, which are irreducible discrete-time Markov chains consisting of two components. The first component is a nonnegative integer vector, while the second component indicates the state (or phase) of the external environment. The level of a state is defined by the minimum value in its first component. The matrix G of the process represents the conditional probabilities that, starting from a given state of a certain level, the Markov chain will first reach a lower level in a specific state. This study aims to develop an effective algorithm for computing matrices G for Md-M/G/1 processes.

1. Introduction

Multi-dimensional Markov chains of M/G/1 type (Md-M/G/1) are natural extensions of the classical Markov chains of M/G/1 type [1,2]. They are discrete-time Markov processes with state space $X = \mathbb{Z}_+^M \times J$, where $\mathbb{Z}_+$ is the set of nonnegative integers and $J = \{1, 2, \ldots, n\}$ [3,4,5,6]. The number $n$ of elements of the set $J$ can be finite or infinite. The probability of a transition from a state $(\mathbf{k}, i)$ with $\mathbf{k} \geq \mathbf{1}$ to a state $(\mathbf{n}, j)$ may depend on $i$, $j$, and $\mathbf{n} - \mathbf{k}$, but not on the specific values of $\mathbf{k}$ and $\mathbf{n}$. The one-step transitions of Md-M/G/1 processes from a state $(\mathbf{k}, i)$ are limited to states $(\mathbf{n}, j)$ such that $\mathbf{k} - \mathbf{n} \leq \mathbf{1}$, where the vector $\mathbf{1}$ consists of all 1s. Md-QBD processes are a specific type of multi-dimensional Markov chain of M/G/1 type, in which one-step transitions from $(\mathbf{k}, i)$ to states $(\mathbf{n}, j)$ are allowed only if $-\mathbf{1} \leq \mathbf{k} - \mathbf{n} \leq \mathbf{1}$ [3,5,6,7]. Md-M/G/1 processes are Markov chains of M/G/1 type whose level $l$ consists of the states $(k_1, k_2, \ldots, k_M, j)$ for which the condition $\min_m k_m = l$ holds.
The matrix G is a key characteristic of Markov chains of M/G/1 type. Each element of this matrix represents the conditional probability that, starting from a specific state at a given level, the process will first appear at a lower level in a particular state. It has been shown in [8] that the matrix G can be expressed in terms of matrices of order $n$, called the matrices of the sector exit probabilities. A system of equations was derived for these matrices, and an algorithm for finding its minimal nonnegative solution was proposed.
For the Md-M/G/1 processes, the concept of the state sectors has been introduced in [8]. It has been shown that the matrix G can be expressed in terms of matrices of order n representing the sector exit probabilities. A system of equations was developed for these matrices, and an algorithm for solving it was proposed. However, it remains unclear whether the set of matrices of sector exit probabilities constitutes the minimal nonnegative solution to this system.
This study builds upon the work presented in [8]. We demonstrate that the family of matrices representing the sector exit probabilities is the minimal nonnegative solution to the system established in [8]. Additionally, we introduce a new iterative algorithm for computing blocks of order n of the matrix G . Section 2 reviews the relevant results obtained in [8]. Section 3 focuses on the joint distribution of the sector exit times and the number of sectors crossed. Section 4 establishes the minimality property of the matrices of the sector exit probabilities. In Section 5, we introduce our new iterative algorithm for computing the matrix G . Finally, Section 6 presents our concluding remarks.
We use bold capital letters to denote matrices and bold lowercase letters to denote vectors. Unless otherwise stated, all vectors in this paper have integer components and length $M$. For any vector $\mathbf{x}$, we write $x_i$ for the $i$th component of $\mathbf{x}$. For vectors $\mathbf{x} = (x_1, x_2, \ldots, x_M)$ and $\mathbf{y} = (y_1, y_2, \ldots, y_M)$, $\mathbf{x} \leq \mathbf{y}$ means that $x_j \leq y_j$ for all $j$, and $\mathbf{x} \geq \mathbf{y}$ means that $x_j \geq y_j$ for all $j$. The functions $l(\mathbf{x})$ and $\Delta(\mathbf{x})$ are defined, respectively, as $l(\mathbf{x}) = \min_m x_m$ and $\Delta(\mathbf{x}) = \mathbf{x} - l(\mathbf{x})\mathbf{1}$. Given a vector $\mathbf{w} \geq \mathbf{1}$, we define the sets $Z(\mathbf{w})$, $X(\mathbf{w})$, and $X^c(\mathbf{w})$ as $Z(\mathbf{w}) = \{\mathbf{n} \in \mathbb{Z}_+^M \mid \mathbf{n} \geq \mathbf{w}\}$, $X(\mathbf{w}) = Z(\mathbf{w}) \times J$, and $X^c(\mathbf{w}) = X \setminus X(\mathbf{w})$. We refer to sets of the form $X(\mathbf{w})$ as sectors.
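To keep the notation concrete, the short Python sketch below implements the functions $l(\mathbf{x})$ and $\Delta(\mathbf{x})$ and a membership test for a sector $X(\mathbf{w})$. It is only an illustration of the definitions above; the function names are ours.

```python
import numpy as np

def level(x):
    """l(x): the minimum component of the integer vector x."""
    return int(np.min(np.asarray(x)))

def delta(x):
    """Delta(x) = x - l(x) * 1: the offset of x within its level."""
    x = np.asarray(x, dtype=int)
    return x - level(x)

def in_sector(state, w):
    """True if the state (n, j) belongs to the sector X(w) = Z(w) x J,
    i.e., if n >= w componentwise."""
    n, _phase = state
    return bool(np.all(np.asarray(n) >= np.asarray(w)))

# Example: for x = (3, 1, 2) we get l(x) = 1 and Delta(x) = (2, 0, 1).
print(level((3, 1, 2)), delta((3, 1, 2)).tolist())
print(in_sector(((3, 1), 5), (3, 1)), in_sector(((2, 4), 5), (3, 1)))
```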

2. Multi-Dimensional Process of M/G/1 Type

Let $\xi(t) = (\alpha(t), \beta(t))$ be an irreducible multi-dimensional Markov chain of M/G/1 type on the state space $X = \mathbb{Z}_+^M \times J$, and let $p_{\mathbf{k},\mathbf{n}}(i,j)$ denote the probability of a one-step transition from $(\mathbf{k}, i)$ to $(\mathbf{n}, j)$. We assume that the transition probability matrix $P$, partitioned into blocks $P_{\mathbf{k},\mathbf{n}} = [p_{\mathbf{k},\mathbf{n}}(i,j)]$ $(i,j \in J)$, satisfies, for all $\mathbf{k} \geq \mathbf{1}$, the following property:
$$P_{\mathbf{k},\mathbf{n}} = \begin{cases} Q_{\mathbf{n}-\mathbf{k}}, & \mathbf{k}-\mathbf{n} \leq \mathbf{1}, \\ O, & \text{otherwise}, \end{cases} \qquad (1)$$
where $Q_{\mathbf{r}} = [q_{\mathbf{r}}(i,j)]$ $(i,j \in J)$, $\mathbf{r} \geq -\mathbf{1}$, are nonnegative square matrices such that $Q = \sum_{\mathbf{r} \geq -\mathbf{1}} Q_{\mathbf{r}}$ is a stochastic matrix. The process $\xi(t)$ is a Markov chain of M/G/1 type whose level $l$ consists of the states $(\mathbf{n}, j)$ such that $\min_i n_i = l$. We refer to this process as an $M$-dimensional Markov chain of M/G/1 type (Md-M/G/1).
The level $l$ of a multi-dimensional process of M/G/1 type thus consists of the states $(\mathbf{n}, j)$ such that $\min_i n_i = l$. For instance, consider the state space $X$ of a 2d-M/G/1 process, which is divided into subsets $\{\mathbf{k}\} \times J$, $\mathbf{k} \in \mathbb{Z}_+^2$, as illustrated in Figure 1. Solid lines represent the boundaries of the process levels. The states of the sector $X(3,1)$ belong to the gray-colored subsets.
The transition matrix of a multi-dimensional Markov chain of M/G/1 type has the block Hessenberg form
$$P = \begin{pmatrix} B_0 & B_1 & B_2 & B_3 & \cdots \\ A_{-1} & A_0 & A_1 & A_2 & \cdots \\ O & A_{-1} & A_0 & A_1 & \cdots \\ O & O & A_{-1} & A_0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix},$$
where the blocks $B_k$ and $A_k$ are nonnegative square matrices such that $\sum_{k=0}^{\infty} B_k$ and $\sum_{k=-1}^{\infty} A_k$ are stochastic matrices. Each state $(\mathbf{n}, j)$ of a level $l$ can be characterized by the triple $(l, \alpha, j)$, where $\alpha = \mathbf{n} - l\mathbf{1}$ is an element of the set $N$ defined as
$$N = \{(\alpha_1, \ldots, \alpha_M) \in \mathbb{Z}_+^M \mid \min_m \alpha_m = 0\}.$$
Hence, the entries of the matrices $B_k$ and $A_k$ can be indexed by the elements of the set $N \times J$. As follows from (1), the matrices $A_k$, partitioned into blocks $A_{k,\alpha,\beta} = [a_k((\alpha,i),(\beta,j))]$ $(i,j \in J)$ of order $n$, can be represented as
$$A_{k,\alpha,\beta} = Q_{\beta - \alpha + k\mathbf{1}}, \quad \alpha, \beta \in N, \quad k \geq -1. \qquad (2)$$
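As an illustration of Equations (1) and (2), the following Python sketch looks up transition blocks from a dictionary of matrices $Q_{\mathbf{r}}$ keyed by the integer vector $\mathbf{r}$. The data layout and function names are our own assumptions, chosen only to show how the block indexing works; blocks $Q_{\mathbf{r}}$ that are not stored are treated as zero.

```python
import numpy as np

def P_block(Q, k, n):
    """Block P_{k,n} of the transition matrix, Equation (1): equals Q_{n-k}
    when k - n <= 1 componentwise, and the zero matrix otherwise."""
    k, n = np.asarray(k), np.asarray(n)
    size = next(iter(Q.values())).shape[0]
    if np.all(k - n <= 1):
        return Q.get(tuple((n - k).tolist()), np.zeros((size, size)))
    return np.zeros((size, size))

def A_block(Q, k, alpha, beta):
    """Sub-block A_{k,alpha,beta} = Q_{beta - alpha + k*1}, Equation (2), k >= -1."""
    alpha, beta = np.asarray(alpha), np.asarray(beta)
    size = next(iter(Q.values())).shape[0]
    return Q.get(tuple((beta - alpha + k).tolist()), np.zeros((size, size)))

# Toy data for M = 2 with a single phase: Q_r stored for a few shifts r >= -1.
Q = {(-1, -1): np.array([[0.2]]), (0, 0): np.array([[0.5]]), (1, 0): np.array([[0.3]])}
print(P_block(Q, (2, 2), (1, 1)))      # Q_{(-1,-1)}
print(A_block(Q, 0, (0, 1), (0, 1)))   # Q_{(0,0)}
```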
For any level $l \geq 1$, the entry $g((\alpha,i),(\beta,j))$ of the matrix $G$ represents the probability that, starting from the state $(\alpha + l\mathbf{1}, i)$ of level $l$, the chain will first appear at level $l-1$ in the state $(\beta + (l-1)\mathbf{1}, j)$. The matrix $G$ is the minimal nonnegative solution of the equation ([1])
$$G = \sum_{v=0}^{\infty} A_{v-1} G^v. \qquad (3)$$
Equation (3) can be transformed into the following system for the blocks $G_{\alpha,\beta} = [g((\alpha,i),(\beta,j))]$ $(i,j \in J)$ of the matrix $G$:
$$G_{\alpha,\beta} = Q_{\beta-\alpha-\mathbf{1}} + \sum_{\delta \in N} Q_{\delta-\alpha} G_{\delta,\beta} + \sum_{l=1}^{\infty} \sum_{\delta \in N} Q_{\delta+l\mathbf{1}-\alpha} \sum_{\gamma_1,\ldots,\gamma_l \in N} G_{\delta,\gamma_1} G_{\gamma_1,\gamma_2} \cdots G_{\gamma_{l-1},\gamma_l} G_{\gamma_l,\beta}, \quad \alpha, \beta \in N. \qquad (4)$$
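For orientation, the following Python sketch applies the classical successive substitution scheme $G \leftarrow \sum_{v\geq 0} A_{v-1} G^v$ to Equation (3) in the one-dimensional case with finitely many nonzero blocks $A_{-1}, A_0, \ldots, A_K$. It is shown only for comparison with the multi-dimensional algorithm developed below; the function name and toy data are ours.

```python
import numpy as np

def compute_G_1d(A, tol=1e-12, max_iter=10_000):
    """Successive substitution for the one-dimensional Equation (3):
    G_{m+1} = sum_{v>=0} A[v] @ G_m^v, where A[v] stands for A_{v-1}.
    Starting from G_0 = O, the iterates increase monotonically to the
    minimal nonnegative solution G."""
    n = A[0].shape[0]
    G = np.zeros((n, n))
    for _ in range(max_iter):
        G_new = np.zeros((n, n))
        power = np.eye(n)              # G^0
        for Av in A:                   # A_{-1}, A_0, A_1, ...
            G_new += Av @ power
            power = power @ G
        if np.max(np.abs(G_new - G)) < tol:
            return G_new
        G = G_new
    return G

# Toy example with a single phase (n = 1): A_{-1} = 0.6, A_0 = 0.3, A_1 = 0.1.
A = [np.array([[0.6]]), np.array([[0.3]]), np.array([[0.1]])]
print(compute_G_1d(A))   # for this positive recurrent example G is (approximately) [[1.0]]
```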
Let the set $\mathcal{E}$ be defined as
$$\mathcal{E} = \{\beta \in \mathbb{Z}^M \mid \beta \geq -\mathbf{1}, \ \min_m \beta_m = -1\}. \qquad (5)$$
For any vectors $\mathbf{w} \geq \mathbf{1}$ and $\beta \in \mathcal{E}$, the entry $f_\beta(i,j)$ of the matrix of the sector exit probabilities $F_\beta$ represents the probability that, starting in the state $(\mathbf{w}, i)$, the process $\xi(t)$ reaches the set $X^c(\mathbf{w})$ by hitting the state $(\mathbf{w}+\beta, j)$ ([8]). It follows that the matrix $F = \sum_{\beta \in \mathcal{E}} F_\beta$ is substochastic.
The family of matrices $F_\beta$, $\beta \in \mathcal{E}$, and the matrix $G$ uniquely determine each other, since we have the following equalities:
$$F_\beta = G_{\mathbf{0},\beta+\mathbf{1}}, \quad \beta \in \mathcal{E}, \qquad (6)$$
$$G_{\alpha,\beta} = \sum_{v=1}^{\infty} \sum_{(\varepsilon_1,\ldots,\varepsilon_v) \in E_{\alpha,\beta-\mathbf{1}}(v)} F_{\varepsilon_1} F_{\varepsilon_2} \cdots F_{\varepsilon_v}, \quad \alpha, \beta \in N, \qquad (7)$$
where $E_{\alpha,\gamma}(v)$ is the set of all $v$-tuples $(\varepsilon_1, \ldots, \varepsilon_v) \in \mathcal{E}^v$ satisfying $\alpha + \varepsilon_1 \geq \mathbf{0}$, $\alpha + \varepsilon_1 + \varepsilon_2 \geq \mathbf{0}$, ..., $\alpha + \varepsilon_1 + \cdots + \varepsilon_{v-1} \geq \mathbf{0}$, and $\alpha + \varepsilon_1 + \cdots + \varepsilon_v = \gamma$.
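Because the sets $E_{\alpha,\gamma}(v)$ appear in most of the formulas below, the following Python sketch enumerates them directly from the definition above. Since $\mathcal{E}$ is infinite, the components of the candidate vectors are capped at a bound; this brute-force enumeration, the cap, and the function name are our own illustration.

```python
from itertools import product

def E_set(alpha, gamma, v, bound=3):
    """All v-tuples (eps_1, ..., eps_v) of vectors in the set E (componentwise >= -1,
    minimum component equal to -1, components capped at `bound` for the enumeration)
    whose partial sums added to alpha stay componentwise nonnegative and whose total
    sum added to alpha equals gamma."""
    M = len(alpha)
    eps_candidates = [e for e in product(range(-1, bound + 1), repeat=M) if min(e) == -1]
    tuples = []

    def extend(prefix, current):
        if len(prefix) == v:
            if current == tuple(gamma):
                tuples.append(tuple(prefix))
            return
        for e in eps_candidates:
            nxt = tuple(c + d for c, d in zip(current, e))
            # partial sums alpha + eps_1 + ... + eps_j must stay >= 0 for j < v
            if len(prefix) + 1 < v and min(nxt) < 0:
                continue
            extend(prefix + [e], nxt)

    extend([], tuple(alpha))
    return tuples

# Example with M = 2: two-step paths from alpha = (1, 0) to gamma = (0, -1).
print(E_set((1, 0), (0, -1), 2))
```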
As shown in [8], the matrices $F_\beta$, $\beta \in \mathcal{E}$, satisfy the following system:
$$F_\beta = Q_\beta + \sum_{\alpha \geq \mathbf{0}} Q_\alpha \sum_{v=1}^{\infty} \sum_{(\varepsilon_1,\ldots,\varepsilon_v) \in E_{\alpha,\beta}(v)} F_{\varepsilon_1} F_{\varepsilon_2} \cdots F_{\varepsilon_v}, \quad \beta \in \mathcal{E}. \qquad (8)$$
We will demonstrate that the family $F_\beta$, $\beta \in \mathcal{E}$, is the minimal nonnegative solution of the system (8) in the set of families $Y_\beta$, $\beta \in \mathcal{E}$, of nonnegative matrices.

3. The Joint Distribution of the Sector Exit Times and the Number of Sectors Crossed

Let us define the sequence of passage times as follows:
$$\theta_0 = 0, \quad \theta_{k+1} = \min\{t > \theta_k \mid \xi(t) \in X^c(\alpha(\theta_k))\}, \quad k \geq 0.$$
We say that at time $t$ the process $\xi(t) = (\alpha(t), \beta(t))$ is in the sector $X(\mathbf{r})$ if the conditions $\theta_k \leq t < \theta_{k+1}$ and $\alpha(\theta_k) = \mathbf{r}$ are met. The difference $\sigma_k = \theta_{k+1} - \theta_k$ represents the time the process spends in the sector $X(\alpha(\theta_k))$. We define the sector $X(\mathbf{w})$ exit time as the moment $\tau_{\mathbf{w}}$ when the process leaves the sector $X(\mathbf{w})$,
$$\tau_{\mathbf{w}} = \min\{t \geq 0 \mid \xi(t) \in X^c(\mathbf{w})\}.$$
Additionally, we define the number of sectors visited along a path to $X^c(\mathbf{w})$ as
$$\kappa_{\mathbf{w}} = \max\{k \geq 0 \mid \theta_k \leq \tau_{\mathbf{w}}\}.$$
If the initial state $\xi(0)$ of the process belongs to $X^c(\mathbf{w})$, then $\tau_{\mathbf{w}} = 0$ and $\kappa_{\mathbf{w}} = 0$. If $\xi(0) = (\mathbf{k}, i)$ with $\mathbf{k} \geq \mathbf{w}$, then at the first hitting time of $X^c(\mathbf{w})$ the process exits the sector $X(\alpha(\theta_{\kappa_{\mathbf{w}}-1}))$ and enters the sector $X(\alpha(\theta_{\kappa_{\mathbf{w}}}))$, which implies the equality $\tau_{\mathbf{w}} = \theta_{\kappa_{\mathbf{w}}}$. The set $X^c(\mathbf{w})$ is reached at the moment of transition from the set $X_0(\alpha(\theta_{\kappa_{\mathbf{w}}-1})) \subset X_0(\mathbf{w})$ to the state $\xi(\theta_{\kappa_{\mathbf{w}}}) \in X_{-1}(\alpha(\theta_{\kappa_{\mathbf{w}}-1})) \subset X_{-1}(\mathbf{w})$.
For vectors $\mathbf{w} \geq \mathbf{1}$, $\mathbf{k} \in Z(\mathbf{w})$, and $\mathbf{n} \notin Z(\mathbf{w})$, we define the matrices $\Phi^{(v)}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = [\varphi^{(v)}_{(\mathbf{k},i),(\mathbf{n},j)}(\mathbf{w})]$ $(i,j \in J)$, $\Phi_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = [\varphi_{(\mathbf{k},i),(\mathbf{n},j)}(\mathbf{w})]$ $(i,j \in J)$, $\bar{\Phi}^{(s,v)}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = [\bar{\varphi}^{(s,v)}_{(\mathbf{k},i),(\mathbf{n},j)}(\mathbf{w})]$ $(i,j \in J)$, and $\bar{\Phi}^{(s)}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = [\bar{\varphi}^{(s)}_{(\mathbf{k},i),(\mathbf{n},j)}(\mathbf{w})]$ $(i,j \in J)$ as follows.
The element $\varphi^{(v)}_{(\mathbf{k},i),(\mathbf{n},j)}(\mathbf{w})$ of the matrix $\Phi^{(v)}_{\mathbf{k},\mathbf{n}}(\mathbf{w})$ is the conditional probability that the process $\xi(t)$, starting in the state $(\mathbf{k},i) \in X(\mathbf{w})$, reaches the set $X^c(\mathbf{w})$ by hitting the state $(\mathbf{n},j)$ after passing through exactly $v$ sectors,
$$\varphi^{(v)}_{(\mathbf{k},i),(\mathbf{n},j)}(\mathbf{w}) = \Pr\{\kappa_{\mathbf{w}} = v, \ \xi(\theta_{\kappa_{\mathbf{w}}}) = (\mathbf{n},j) \mid \xi(0) = (\mathbf{k},i)\} = \Pr\{\kappa_{\mathbf{w}} = v, \ \xi(\tau_{\mathbf{w}}) = (\mathbf{n},j) \mid \xi(0) = (\mathbf{k},i)\}.$$
The element $\varphi_{(\mathbf{k},i),(\mathbf{n},j)}(\mathbf{w})$ of the matrix $\Phi_{\mathbf{k},\mathbf{n}}(\mathbf{w})$ is the conditional probability that the process $\xi(t)$ will eventually hit the set $X^c(\mathbf{w})$ in the state $(\mathbf{n},j)$, given that it starts in the state $(\mathbf{k},i)$,
$$\varphi_{(\mathbf{k},i),(\mathbf{n},j)}(\mathbf{w}) = \Pr\{\kappa_{\mathbf{w}} < \infty, \ \xi(\theta_{\kappa_{\mathbf{w}}}) = (\mathbf{n},j) \mid \xi(0) = (\mathbf{k},i)\} = \Pr\{\tau_{\mathbf{w}} < \infty, \ \xi(\tau_{\mathbf{w}}) = (\mathbf{n},j) \mid \xi(0) = (\mathbf{k},i)\}.$$
The element $\bar{\varphi}^{(s,v)}_{(\mathbf{k},i),(\mathbf{n},j)}(\mathbf{w})$ of the matrix $\bar{\Phi}^{(s,v)}_{\mathbf{k},\mathbf{n}}(\mathbf{w})$ is the conditional probability that the process $\xi(t)$, starting in the state $(\mathbf{k},i) \in X(\mathbf{w})$, reaches the set $X^c(\mathbf{w})$ by hitting the state $(\mathbf{n},j)$ after no more than $s$ transition steps and passing through exactly $v$ sectors,
$$\bar{\varphi}^{(s,v)}_{(\mathbf{k},i),(\mathbf{n},j)}(\mathbf{w}) = \Pr\{\kappa_{\mathbf{w}} = v, \ \theta_{\kappa_{\mathbf{w}}} \leq s, \ \xi(\theta_{\kappa_{\mathbf{w}}}) = (\mathbf{n},j) \mid \xi(0) = (\mathbf{k},i)\} = \Pr\{\kappa_{\mathbf{w}} = v, \ \tau_{\mathbf{w}} \leq s, \ \xi(\tau_{\mathbf{w}}) = (\mathbf{n},j) \mid \xi(0) = (\mathbf{k},i)\}.$$
The element $\bar{\varphi}^{(s)}_{(\mathbf{k},i),(\mathbf{n},j)}(\mathbf{w})$ of the matrix $\bar{\Phi}^{(s)}_{\mathbf{k},\mathbf{n}}(\mathbf{w})$ is the conditional probability that the process $\xi(t)$ will hit the set $X^c(\mathbf{w})$ in the state $(\mathbf{n},j)$ after no more than $s$ transition steps, given that it starts in the state $(\mathbf{k},i)$,
$$\bar{\varphi}^{(s)}_{(\mathbf{k},i),(\mathbf{n},j)}(\mathbf{w}) = \Pr\{\theta_{\kappa_{\mathbf{w}}} \leq s, \ \xi(\theta_{\kappa_{\mathbf{w}}}) = (\mathbf{n},j) \mid \xi(0) = (\mathbf{k},i)\} = \Pr\{\tau_{\mathbf{w}} \leq s, \ \xi(\tau_{\mathbf{w}}) = (\mathbf{n},j) \mid \xi(0) = (\mathbf{k},i)\}.$$
It is easy to see that the matrices $\Phi_{\mathbf{k},\mathbf{n}}(\mathbf{w})$, $\Phi^{(v)}_{\mathbf{k},\mathbf{n}}(\mathbf{w})$, $\bar{\Phi}^{(s)}_{\mathbf{k},\mathbf{n}}(\mathbf{w})$, and $\bar{\Phi}^{(s,v)}_{\mathbf{k},\mathbf{n}}(\mathbf{w})$ satisfy the following relations:
$$\Phi_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = \sum_{v=1}^{\infty} \Phi^{(v)}_{\mathbf{k},\mathbf{n}}(\mathbf{w}), \quad \bar{\Phi}^{(s)}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = \sum_{v=1}^{\infty} \bar{\Phi}^{(s,v)}_{\mathbf{k},\mathbf{n}}(\mathbf{w}),$$
$$\lim_{s \to \infty} \bar{\Phi}^{(s,v)}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = \Phi^{(v)}_{\mathbf{k},\mathbf{n}}(\mathbf{w}), \quad \lim_{s \to \infty} \bar{\Phi}^{(s)}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = \Phi_{\mathbf{k},\mathbf{n}}(\mathbf{w}).$$
For each $l \geq 0$, any path of $\xi(t)$ leading from a state $(\mathbf{k},i) \in X_l(\mathbf{w})$ to a state $(\mathbf{n},j) \in X^c(\mathbf{w})$ must successively visit the sets $X_{l-1}(\mathbf{w}), X_{l-2}(\mathbf{w}), \ldots, X_0(\mathbf{w}), X_{-1}(\mathbf{w})$, which requires visiting at least $l+1$ sectors. Therefore, we have
$$\Phi^{(v)}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = O$$
for all $\mathbf{k} \in Z_l(\mathbf{w})$, $\mathbf{n} \in Z_{-1}(\mathbf{w})$, and $v \leq l$. Additionally, it is impossible to visit $v$ sectors without taking at least $v$ transition steps, which implies that
$$\bar{\Phi}^{(s,v)}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = O$$
for all $\mathbf{k} \in Z_l(\mathbf{w})$, $\mathbf{n} \in Z_{-1}(\mathbf{w})$, and $s < v \leq l$.
The transition probabilities away from the boundary are spatially homogeneous. This means that, for any vector $\mathbf{k} \geq \mathbf{w} \geq \mathbf{1}$ and any vector $\mathbf{n} \in Z_{-1}(\mathbf{w})$, the probabilities $\varphi_{(\mathbf{k},i),(\mathbf{n},j)}(\mathbf{w})$ may depend on $i$, $j$, $\mathbf{k}-\mathbf{w}$, and $\mathbf{n}-\mathbf{w}$, but not on the specific values of $\mathbf{k}$, $\mathbf{n}$, and $\mathbf{w}$, i.e.,
$$\Phi_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = \Phi_{\mathbf{k}-\mathbf{w}+\mathbf{1},\mathbf{n}-\mathbf{w}+\mathbf{1}}(\mathbf{1}), \quad \bar{\Phi}^{(s)}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = \bar{\Phi}^{(s)}_{\mathbf{k}-\mathbf{w}+\mathbf{1},\mathbf{n}-\mathbf{w}+\mathbf{1}}(\mathbf{1}).$$
This means that the matrices $\Phi_{\mathbf{k},\mathbf{n}}(\mathbf{w})$ and $\bar{\Phi}^{(s)}_{\mathbf{k},\mathbf{n}}(\mathbf{w})$ may be expressed as
$$\Phi_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = \Psi_{\mathbf{k}-\mathbf{w},\mathbf{n}-\mathbf{w}}, \quad \bar{\Phi}^{(s)}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = \bar{\Psi}^{(s)}_{\mathbf{k}-\mathbf{w},\mathbf{n}-\mathbf{w}}.$$
Here, for vectors $\alpha \geq \mathbf{0}$ and $\beta \in \mathcal{E}$, the matrices $\Psi_{\alpha,\beta} = [\psi_{(\alpha,i),(\beta,j)}]$ $(i,j \in J)$ and $\bar{\Psi}^{(s)}_{\alpha,\beta} = [\bar{\psi}^{(s)}_{(\alpha,i),(\beta,j)}]$ $(i,j \in J)$ are defined as
$$\Psi_{\alpha,\beta} = \Phi_{\mathbf{w}+\alpha,\mathbf{w}+\beta}(\mathbf{w}), \quad \bar{\Psi}^{(s)}_{\alpha,\beta} = \bar{\Phi}^{(s)}_{\mathbf{w}+\alpha,\mathbf{w}+\beta}(\mathbf{w}), \qquad (17)$$
independently of the vector $\mathbf{w} \geq \mathbf{1}$. The vectors $\alpha$ and $\beta$ in (17) satisfy the conditions $\mathbf{w}+\alpha \geq \mathbf{w}$ and $\mathbf{w}+\beta \in Z_{-1}(\mathbf{w})$. Therefore, the first index $\alpha$ of the matrices $\Psi_{\alpha,\beta}$ and $\bar{\Psi}^{(s)}_{\alpha,\beta}$ is a nonnegative vector, and the second index $\beta$ belongs to the set $\mathcal{E}$. We refer to the matrices $\Psi_{\alpha,\beta}$ as the matrices of the first passage probabilities.
It was demonstrated in [8] that, for all $\alpha \geq \mathbf{0}$ and $\beta \in \mathcal{E}$, the matrices $\Psi_{\alpha,\beta}$ satisfy the system
$$\Psi_{\alpha,\beta} = Q_{\beta-\alpha} + \sum_{\gamma \geq \mathbf{0}} Q_{\gamma-\alpha} \Psi_{\gamma,\beta}.$$
In the next theorem, we obtain a similar property for the matrices $\bar{\Psi}^{(s)}_{\alpha,\beta}$.
Theorem 1.
The matrices $\bar{\Psi}^{(s)}_{\alpha,\beta}$ satisfy the system
$$\bar{\Psi}^{(1)}_{\alpha,\beta} = Q_{\beta-\alpha}, \quad \bar{\Psi}^{(s)}_{\alpha,\beta} = Q_{\beta-\alpha} + \sum_{\gamma \geq \mathbf{0}} Q_{\gamma-\alpha} \bar{\Psi}^{(s-1)}_{\gamma,\beta}, \quad s \geq 2, \quad \alpha \geq \mathbf{0}, \ \beta \in \mathcal{E}. \qquad (18)$$
Proof of Theorem 1.
We first demonstrate the validity of the following formulas for all $\mathbf{k} \in Z(\mathbf{w})$ and $\mathbf{n} \in Z_{-1}(\mathbf{w})$:
$$\bar{\Phi}^{(1)}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = P_{\mathbf{k},\mathbf{n}}, \quad \bar{\Phi}^{(s)}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = P_{\mathbf{k},\mathbf{n}} + \sum_{\mathbf{r} \geq \mathbf{w}} P_{\mathbf{k},\mathbf{r}} \bar{\Phi}^{(s-1)}_{\mathbf{r},\mathbf{n}}(\mathbf{w}), \quad s \geq 2. \qquad (19)$$
The first formula in (19) is straightforward. The second formula follows from the law of total probability, accounting for all possible process states after the first transition. Consider two states, $(\mathbf{k},i) \in X(\mathbf{w})$ and $(\mathbf{n},j) \in X_{-1}(\mathbf{w})$. The state $(\mathbf{n},j)$ can be reached from the state $(\mathbf{k},i)$ in a single transition, which contributes the term $P_{\mathbf{k},\mathbf{n}}$ to the second formula in (19). Alternatively, the first transition can lead the process to some state $(\mathbf{r},m)$ with $\mathbf{r} \geq \mathbf{w}$; to reach the set $X_{-1}(\mathbf{w})$ from the state $(\mathbf{r},m)$ by hitting the state $(\mathbf{n},j)$, the process $\xi(t)$ must then do so within no more than $s-1$ transition steps. This gives the second term on the right-hand side of (19). Equation (18) for the matrices $\bar{\Psi}^{(s)}_{\alpha,\beta}$ is derived from (19) using Formulas (1) and (17). □
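To illustrate how the recursion (18) can be evaluated numerically, the following Python sketch computes truncated approximations of the matrices $\bar{\Psi}^{(s)}_{\alpha,\beta}$ for a fixed $\beta \in \mathcal{E}$, with the index $\alpha$ restricted to components at most $C$ and unstored blocks $Q_{\mathbf{r}}$ treated as zero. The function name, data layout, and truncation parameter are our own assumptions.

```python
import numpy as np
from itertools import product

def psi_bar(Q, s, beta, M, n_ph, C=3):
    """Truncated sketch of the recursion (18) for fixed beta in E:
    returns a dict alpha -> Psi_bar^{(s)}_{alpha,beta} over alpha >= 0 with
    components at most C. Blocks Q_r not stored in Q count as zero."""
    alphas = list(product(range(0, C + 1), repeat=M))
    zero = np.zeros((n_ph, n_ph))
    def q(r): return Q.get(tuple(r), zero)
    # s = 1: Psi_bar^{(1)}_{alpha,beta} = Q_{beta-alpha}
    psi = {a: q(tuple(b - x for x, b in zip(a, beta))) for a in alphas}
    for _ in range(s - 1):
        # Psi_bar^{(s)}_{alpha,beta} = Q_{beta-alpha} + sum_{gamma>=0} Q_{gamma-alpha} Psi_bar^{(s-1)}_{gamma,beta}
        psi = {a: q(tuple(b - x for x, b in zip(a, beta)))
                  + sum((q(tuple(g - x for x, g in zip(a, gamma))) @ psi[gamma]
                         for gamma in alphas), start=zero)
               for a in alphas}
    return psi

# Example: M = 2, single phase, beta = (0, -1).
Q = {(-1, -1): np.array([[0.4]]), (0, 0): np.array([[0.3]]), (1, 1): np.array([[0.3]])}
print(psi_bar(Q, s=3, beta=(0, -1), M=2, n_ph=1)[(0, 0)])
```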

4. Minimality Property of the Sector Exit Probabilities

The matrices of the sector exit probabilities were defined in [8] as $F_\beta = \Psi_{\mathbf{0},\beta}$, $\beta \in \mathcal{E}$. The entries of the matrix $F_\beta = [f_\beta(i,j)]$ $(i,j \in J)$ determine the transition probabilities of the embedded Markov chain $\xi(\theta_k)$, since for $\mathbf{k} \geq \mathbf{1}$ we have
$$\Pr\{\theta_k < \infty, \ \xi(\theta_k) = (\mathbf{n},j) \mid \xi(\theta_{k-1}) = (\mathbf{k},i)\} = \Pr\{\tau_{\mathbf{k}} < \infty, \ \xi(\tau_{\mathbf{k}}) = (\mathbf{n},j) \mid \xi(0) = (\mathbf{k},i)\} = f_{\mathbf{n}-\mathbf{k}}(i,j). \qquad (20)$$
We define the matrices $F^{(s)}_\beta = [f^{(s)}_\beta(i,j)]$ $(i,j \in J)$, $\beta \in \mathcal{E}$, as $F^{(s)}_\beta = \bar{\Psi}^{(s)}_{\mathbf{0},\beta}$. These matrices determine the transition probabilities of the Markov process $(\xi(\theta_k), \sigma_k)$ as
$$\Pr\{\xi(\theta_k) = (\mathbf{n},j), \ \sigma_k \leq s \mid \xi(\theta_{k-1}) = (\mathbf{k},i)\} = \Pr\{\tau_{\mathbf{k}} \leq s, \ \xi(\tau_{\mathbf{k}}) = (\mathbf{n},j) \mid \xi(0) = (\mathbf{k},i)\} = f^{(s)}_{\mathbf{n}-\mathbf{k}}(i,j) \qquad (21)$$
for all $\mathbf{k} \geq \mathbf{1}$ and $s = 1, 2, \ldots$. From (20) and (21), it follows that the matrices $F_\beta$ and $F^{(s)}_\beta$ are related as
$$\lim_{s \to \infty} F^{(s)}_\beta = F_\beta, \quad \beta \in \mathcal{E}. \qquad (22)$$
As a direct consequence of Theorem 1, we can derive the following property of the matrices $F^{(s)}_\beta = \bar{\Psi}^{(s)}_{\mathbf{0},\beta}$:
$$F^{(1)}_\beta = Q_\beta, \quad F^{(s)}_\beta = Q_\beta + \sum_{\gamma \geq \mathbf{0}} Q_\gamma \bar{\Psi}^{(s-1)}_{\gamma,\beta}, \quad s \geq 2, \ \beta \in \mathcal{E}. \qquad (23)$$
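As a small worked example of (23), substituting the first relation of (18) into the case $s = 2$ gives an explicit expression for the two-step matrices (our own intermediate step, written out for clarity):
$$F^{(2)}_\beta = Q_\beta + \sum_{\gamma \geq \mathbf{0}} Q_\gamma \bar{\Psi}^{(1)}_{\gamma,\beta} = Q_\beta + \sum_{\gamma \geq \mathbf{0}} Q_\gamma Q_{\beta-\gamma}, \quad \beta \in \mathcal{E}.$$
Here the terms with $\beta - \gamma \not\geq -\mathbf{1}$ vanish, so only finitely many $\gamma$ contribute to the sum.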
Neuts demonstrated in [1] that in the one-dimensional case, when the equality $G = F_{-1}$ holds, the matrix $G$ is the minimal nonnegative solution of (3). We will show that a similar result also holds in the multi-dimensional case. The proof is based on inequalities that we derive in Lemma 1.
Lemma 1.
The matrices $F^{(s)}_\beta$ satisfy the following inequalities:
$$F^{(s)}_\beta \leq Q_\beta + \sum_{\gamma \geq \mathbf{0}} Q_\gamma \sum_{v=1}^{\infty} \sum_{(\varepsilon_1,\ldots,\varepsilon_v) \in E_{\gamma,\beta}(v)} F^{(s-1)}_{\varepsilon_1} F^{(s-1)}_{\varepsilon_2} \cdots F^{(s-1)}_{\varepsilon_v}, \quad s \geq 2, \ \beta \in \mathcal{E}. \qquad (24)$$
Proof of Lemma 1.
For any vectors $\mathbf{w} \geq \mathbf{1}$, $(\mathbf{k},i) \in X(\mathbf{w})$, and $(\mathbf{n},j) \in X^c(\mathbf{w})$, the element $\bar{\varphi}^{(s,v)}_{(\mathbf{k},i),(\mathbf{n},j)}(\mathbf{w})$ of the matrix $\bar{\Phi}^{(s,v)}_{\mathbf{k},\mathbf{n}}(\mathbf{w})$ is the conditional probability that the process $\xi(t)$, starting in the state $(\mathbf{k},i)$, reaches the set $X^c(\mathbf{w})$ by hitting the state $(\mathbf{n},j)$ after no more than $s$ transition steps and passing through exactly $v$ sectors. To hit the set $X^c(\mathbf{w})$, starting in a state $(\mathbf{k},i) \in X(\mathbf{w})$ and passing through exactly one sector, is only possible if that single crossed sector is the set $X(\mathbf{k})$. Therefore, we have the equalities
$$\bar{\Phi}^{(s,1)}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = \bar{\Phi}^{(s,1)}_{\mathbf{k},\mathbf{n}}(\mathbf{k}) = F^{(s)}_{\mathbf{n}-\mathbf{k}}, \quad s \geq 1. \qquad (25)$$
Let the set $T^{(u)}_{\mathbf{k},\mathbf{n}}(\mathbf{w})$ be defined as the set of all $u$-tuples $(\mathbf{r}_1, \ldots, \mathbf{r}_u) \in Z(\mathbf{w})^u$ satisfying $\mathbf{r}_1 \in Z_{-1}(\mathbf{k})$, $\mathbf{r}_2 \in Z_{-1}(\mathbf{r}_1)$, ..., $\mathbf{r}_u \in Z_{-1}(\mathbf{r}_{u-1})$, $\mathbf{n} \in Z_{-1}(\mathbf{r}_u)$. Hitting the set $X^c(\mathbf{w})$ after no more than $s$ transition steps is only possible if the total number of steps taken in the crossed sectors does not exceed $s$. This implies the following inequality:
$$\bar{\Phi}^{(s,v)}_{\mathbf{k},\mathbf{n}}(\mathbf{w}) = \sum_{(\mathbf{r}_1,\ldots,\mathbf{r}_{v-1}) \in T^{(v-1)}_{\mathbf{k},\mathbf{n}}(\mathbf{w})} \ \sum_{\substack{s_1,\ldots,s_v \geq 1 \\ s_1+\cdots+s_v \leq s}} \bar{\Phi}^{(s_1,1)}_{\mathbf{k},\mathbf{r}_1}(\mathbf{k}) \, \bar{\Phi}^{(s_2,1)}_{\mathbf{r}_1,\mathbf{r}_2}(\mathbf{r}_1) \cdots \bar{\Phi}^{(s_{v-1},1)}_{\mathbf{r}_{v-2},\mathbf{r}_{v-1}}(\mathbf{r}_{v-2}) \, \bar{\Phi}^{(s_v,1)}_{\mathbf{r}_{v-1},\mathbf{n}}(\mathbf{r}_{v-1})$$
$$\leq \sum_{(\mathbf{r}_1,\ldots,\mathbf{r}_{v-1}) \in T^{(v-1)}_{\mathbf{k},\mathbf{n}}(\mathbf{w})} \ \sum_{1 \leq s_1 \leq s, \ \ldots, \ 1 \leq s_v \leq s} \bar{\Phi}^{(s_1,1)}_{\mathbf{k},\mathbf{r}_1}(\mathbf{k}) \, \bar{\Phi}^{(s_2,1)}_{\mathbf{r}_1,\mathbf{r}_2}(\mathbf{r}_1) \cdots \bar{\Phi}^{(s_{v-1},1)}_{\mathbf{r}_{v-2},\mathbf{r}_{v-1}}(\mathbf{r}_{v-2}) \, \bar{\Phi}^{(s_v,1)}_{\mathbf{r}_{v-1},\mathbf{n}}(\mathbf{r}_{v-1})$$
$$= \sum_{(\mathbf{r}_1,\ldots,\mathbf{r}_{v-1}) \in T^{(v-1)}_{\mathbf{k},\mathbf{n}}(\mathbf{w})} \bar{\Phi}^{(s,1)}_{\mathbf{k},\mathbf{r}_1}(\mathbf{k}) \, \bar{\Phi}^{(s,1)}_{\mathbf{r}_1,\mathbf{r}_2}(\mathbf{r}_1) \cdots \bar{\Phi}^{(s,1)}_{\mathbf{r}_{v-2},\mathbf{r}_{v-1}}(\mathbf{r}_{v-2}) \, \bar{\Phi}^{(s,1)}_{\mathbf{r}_{v-1},\mathbf{n}}(\mathbf{r}_{v-1})$$
$$= \sum_{(\mathbf{r}_1,\ldots,\mathbf{r}_{v-1}) \in T^{(v-1)}_{\mathbf{k},\mathbf{n}}(\mathbf{w})} F^{(s)}_{\mathbf{r}_1-\mathbf{k}} F^{(s)}_{\mathbf{r}_2-\mathbf{r}_1} \cdots F^{(s)}_{\mathbf{r}_{v-1}-\mathbf{r}_{v-2}} F^{(s)}_{\mathbf{n}-\mathbf{r}_{v-1}}. \qquad (26)$$
Let us introduce in (26) the new variables $\gamma = \mathbf{k}-\mathbf{w} \geq \mathbf{0}$, $\beta = \mathbf{n}-\mathbf{w} \in \mathcal{E}$, $\varepsilon_1 = \mathbf{r}_1-\mathbf{k}$, $\varepsilon_2 = \mathbf{r}_2-\mathbf{r}_1$, ..., $\varepsilon_{v-1} = \mathbf{r}_{v-1}-\mathbf{r}_{v-2}$, $\varepsilon_v = \mathbf{n}-\mathbf{r}_{v-1}$. Since $(\mathbf{r}_1,\ldots,\mathbf{r}_{v-1}) \in T^{(v-1)}_{\mathbf{k},\mathbf{n}}(\mathbf{w})$, it is clear that the $v$-tuple $(\varepsilon_1,\ldots,\varepsilon_v)$ belongs to the set $E_{\gamma,\beta}(v)$. Using these variables and Formula (17), we obtain from (26) the inequality
$$\bar{\Psi}^{(s)}_{\gamma,\beta} \leq \sum_{v=1}^{\infty} \sum_{(\varepsilon_1,\ldots,\varepsilon_v) \in E_{\gamma,\beta}(v)} F^{(s)}_{\varepsilon_1} F^{(s)}_{\varepsilon_2} \cdots F^{(s)}_{\varepsilon_v}. \qquad (27)$$
The statement of Lemma 1 follows from (23) and (27). □
Let the matrices $X^{(k)}_\beta$, $\beta \in \mathcal{E}$, $k \geq 1$, be defined as
$$X^{(1)}_\beta = Q_\beta, \qquad (28)$$
$$X^{(k+1)}_\beta = Q_\beta + \sum_{\alpha \geq \mathbf{0}} Q_\alpha \sum_{v=1}^{\infty} \sum_{(\varepsilon_1,\ldots,\varepsilon_v) \in E_{\alpha,\beta}(v)} X^{(k)}_{\varepsilon_1} \cdots X^{(k)}_{\varepsilon_v}, \quad k \geq 1. \qquad (29)$$
It has been shown in [8] that, for each $\beta \in \mathcal{E}$, the sequence $X^{(k)}_\beta$, $k \geq 1$, entry-wise monotonically converges to a matrix $X_\beta$. The family of matrices $X_\beta$, $\beta \in \mathcal{E}$, is the minimal solution of the system (8) in the set of families $Y_\beta$, $\beta \in \mathcal{E}$, of nonnegative matrices. In the following theorem, we demonstrate that the equality $X_\beta = F_\beta$ holds for all $\beta \in \mathcal{E}$.
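The following Python sketch evaluates a truncated version of the iteration (28)-(29), reusing the brute-force enumerator of the sets $E_{\alpha,\beta}(v)$ sketched in Section 2. It is only an illustration under truncation assumptions (a finite subset of $\mathcal{E}$, finitely many stored blocks $Q_\alpha$, and a cap $v_{\max}$ on the series over $v$), and it also makes visible the computational burden of enumerating $E_{\alpha,\beta}(v)$, which is precisely what the algorithm of Section 5 avoids.

```python
import numpy as np

def iterate_X(Q, E_list, E_set, n_ph, iterations=10, v_max=3):
    """Truncated sketch of the iteration (28)-(29): X_beta^{(1)} = Q_beta and
    X_beta^{(k+1)} = Q_beta + sum_{alpha >= 0} Q_alpha sum_v sum_{E_{alpha,beta}(v)}
    X_{eps_1}^{(k)} ... X_{eps_v}^{(k)}.
    Q maps integer vectors (tuples) to n_ph x n_ph blocks; E_list is a truncated
    version of the set E; E_set(alpha, beta, v) enumerates E_{alpha,beta}(v)."""
    zero = np.zeros((n_ph, n_ph))
    X = {b: Q.get(b, zero).copy() for b in E_list}          # Equation (28)
    for _ in range(iterations):
        X_new = {}
        for b in E_list:
            acc = Q.get(b, zero).copy()
            for a, Qa in Q.items():
                if min(a) < 0:
                    continue                                  # only alpha >= 0 contribute
                for v in range(1, v_max + 1):
                    for tup in E_set(a, b, v):
                        prod = np.eye(n_ph)
                        for e in tup:
                            prod = prod @ X.get(e, zero)
                        acc += Qa @ prod
            X_new[b] = acc
        X = X_new
    return X
```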
Theorem 2.
The family of the matrices of the sector exit probabilities $F_\beta$, $\beta \in \mathcal{E}$, is the minimal solution of the system (8) in the set of families of nonnegative matrices $Y_\beta$, $\beta \in \mathcal{E}$. For each $\beta \in \mathcal{E}$, the sequence $X^{(k)}_\beta$, $k \geq 1$, entry-wise monotonically converges to the matrix $F_\beta$.
Proof of Theorem 2.
First, we prove by induction that the matrices $X^{(k)}_\beta$ defined by (28)-(29) satisfy $X^{(k)}_\beta \geq F^{(k)}_\beta$ for all $\beta \in \mathcal{E}$ and all $k \geq 1$. Since $X^{(1)}_\beta = Q_\beta$ and $F^{(1)}_\beta = Q_\beta$, we have $X^{(1)}_\beta \geq F^{(1)}_\beta$. Let us assume that $X^{(k)}_\beta \geq F^{(k)}_\beta$ for some $k$ and for all $\beta \in \mathcal{E}$. Then, using (29) and Lemma 1, we obtain
$$X^{(k+1)}_\beta = Q_\beta + \sum_{\alpha \geq \mathbf{0}} Q_\alpha \sum_{v=1}^{\infty} \sum_{(\varepsilon_1,\ldots,\varepsilon_v) \in E_{\alpha,\beta}(v)} X^{(k)}_{\varepsilon_1} \cdots X^{(k)}_{\varepsilon_v} \geq Q_\beta + \sum_{\alpha \geq \mathbf{0}} Q_\alpha \sum_{v=1}^{\infty} \sum_{(\varepsilon_1,\ldots,\varepsilon_v) \in E_{\alpha,\beta}(v)} F^{(k)}_{\varepsilon_1} F^{(k)}_{\varepsilon_2} \cdots F^{(k)}_{\varepsilon_v} \geq F^{(k+1)}_\beta,$$
which proves the induction step. The sequence $X^{(k)}_\beta$, $k \geq 1$, entry-wise monotonically converges to the matrix $X_\beta$, and the sequence $F^{(k)}_\beta$, $k \geq 1$, converges to the matrix $F_\beta$ [8]. This implies the following inequalities for the limiting matrices $X_\beta$ and $F_\beta$:
$$X_\beta = \lim_{k \to \infty} X^{(k)}_\beta \geq \lim_{k \to \infty} F^{(k)}_\beta = F_\beta, \quad \beta \in \mathcal{E}.$$
Since both families $X_\beta$, $\beta \in \mathcal{E}$, and $F_\beta$, $\beta \in \mathcal{E}$, are solutions of the system (8), the family $X_\beta$, $\beta \in \mathcal{E}$, is the minimal nonnegative solution of (8), and the inequalities $X_\beta \geq F_\beta$ hold for all $\beta \in \mathcal{E}$, we necessarily have $X_\beta = F_\beta$ for all $\beta \in \mathcal{E}$. □

5. Computing the Matrix G

Any vector $\alpha \in \mathbb{Z}_+^M$ can be represented as $\alpha = \delta + k\mathbf{1}$, where $\delta = \Delta(\alpha)$ and $k = l(\alpha)$. It was shown in [8] that the matrices of the first passage probabilities possess the following properties:
$$\Psi_{\alpha,\varepsilon} = \sum_{v=1}^{\infty} \sum_{(\varepsilon_1,\ldots,\varepsilon_v) \in E_{\alpha,\varepsilon}(v)} F_{\varepsilon_1} F_{\varepsilon_2} \cdots F_{\varepsilon_v}, \quad \alpha \in \mathbb{Z}_+^M, \ \varepsilon \in \mathcal{E}, \qquad (31)$$
$$\Psi_{\delta+k\mathbf{1},\varepsilon} = \sum_{\gamma_1,\ldots,\gamma_k \in N} \Psi_{\delta,\gamma_1-\mathbf{1}} \Psi_{\gamma_1,\gamma_2-\mathbf{1}} \cdots \Psi_{\gamma_{k-1},\gamma_k-\mathbf{1}} \Psi_{\gamma_k,\varepsilon}, \quad \delta \in N, \ \varepsilon \in \mathcal{E}, \ k \geq 1. \qquad (32)$$
In Theorem 3, we show that decomposition (32) is a special case of a more general result for nonnegative matrices of the form (31).
Theorem 3.
Let $Y_\varepsilon$, $\varepsilon \geq -\mathbf{1}$, be a family of nonnegative matrices such that $Y_\varepsilon = O$ for all $\varepsilon \notin \mathcal{E}$, and let each entry of the matrix series
$$Z_{\alpha,\beta} = \sum_{v=1}^{\infty} \sum_{(\varepsilon_1,\ldots,\varepsilon_v) \in E_{\alpha,\beta-\mathbf{1}}(v)} Y_{\varepsilon_1} Y_{\varepsilon_2} \cdots Y_{\varepsilon_v}, \quad \alpha \in \mathbb{Z}_+^M, \ \beta \in N, \qquad (33)$$
be convergent. Then, the matrices $Z_{\alpha,\beta}$ satisfy the following system:
$$Z_{\alpha,\beta} = Y_{\beta-\alpha-\mathbf{1}} + \sum_{\substack{\varepsilon \in \mathcal{E} \\ l(\alpha+\varepsilon)=0}} Y_\varepsilon Z_{\alpha+\varepsilon,\beta} + \sum_{i=1}^{\infty} \sum_{\substack{\varepsilon \in \mathcal{E} \\ l(\alpha+\varepsilon)=i}} Y_\varepsilon \sum_{\gamma_1,\ldots,\gamma_i \in N} Z_{\Delta(\alpha+\varepsilon),\gamma_1} Z_{\gamma_1,\gamma_2} \cdots Z_{\gamma_{i-1},\gamma_i} Z_{\gamma_i,\beta}, \quad \alpha \in \mathbb{Z}_+^M, \ \beta \in N. \qquad (34)$$
For each vector $\alpha \in \mathbb{Z}_+^M$ such that $l(\alpha) \geq 1$, the matrices $Z_{\alpha,\beta}$, $\beta \in N$, can be decomposed as
$$Z_{\alpha,\beta} = \sum_{\gamma_1,\ldots,\gamma_{l(\alpha)} \in N} Z_{\Delta(\alpha),\gamma_1} Z_{\gamma_1,\gamma_2} \cdots Z_{\gamma_{l(\alpha)-1},\gamma_{l(\alpha)}} Z_{\gamma_{l(\alpha)},\beta}. \qquad (35)$$
Proof of Theorem 3.
It was shown in [7] (Lemma 1) that for $\delta, \beta \in N$ and $k \geq 1$, the sets $E_{\delta+k\mathbf{1},\beta-\mathbf{1}}(v)$, $v \geq 1$, can be decomposed in terms of Cartesian products of the sets $E_{\gamma,\varepsilon}(u)$, $\gamma \in N$, $\varepsilon \in \mathcal{E}$, as
$$E_{\delta+k\mathbf{1},\beta-\mathbf{1}}(v) = \bigcup_{\substack{u_1,\ldots,u_{k+1} \geq 1 \\ u_1+\cdots+u_{k+1}=v}} \ \bigcup_{\gamma_1,\ldots,\gamma_k \in N} E_{\delta,\gamma_1-\mathbf{1}}(u_1) \times E_{\gamma_1,\gamma_2-\mathbf{1}}(u_2) \times \cdots \times E_{\gamma_{k-1},\gamma_k-\mathbf{1}}(u_k) \times E_{\gamma_k,\beta-\mathbf{1}}(u_{k+1}). \qquad (36)$$
It follows from (33) and (36) that the matrix $Z_{\delta+k\mathbf{1},\beta}$ can be represented as follows:
$$Z_{\delta+k\mathbf{1},\beta} = \sum_{v=1}^{\infty} \sum_{\substack{u_1,\ldots,u_{k+1} \geq 1 \\ u_1+\cdots+u_{k+1}=v}} \sum_{\gamma_1,\ldots,\gamma_k \in N} \Bigg( \sum_{(\varepsilon_{1,1},\ldots,\varepsilon_{1,u_1}) \in E_{\delta,\gamma_1-\mathbf{1}}(u_1)} Y_{\varepsilon_{1,1}} \cdots Y_{\varepsilon_{1,u_1}} \Bigg) \Bigg( \sum_{(\varepsilon_{2,1},\ldots,\varepsilon_{2,u_2}) \in E_{\gamma_1,\gamma_2-\mathbf{1}}(u_2)} Y_{\varepsilon_{2,1}} \cdots Y_{\varepsilon_{2,u_2}} \Bigg) \cdots \Bigg( \sum_{(\varepsilon_{k,1},\ldots,\varepsilon_{k,u_k}) \in E_{\gamma_{k-1},\gamma_k-\mathbf{1}}(u_k)} Y_{\varepsilon_{k,1}} \cdots Y_{\varepsilon_{k,u_k}} \Bigg) \Bigg( \sum_{(\varepsilon_{k+1,1},\ldots,\varepsilon_{k+1,u_{k+1}}) \in E_{\gamma_k,\beta-\mathbf{1}}(u_{k+1})} Y_{\varepsilon_{k+1,1}} \cdots Y_{\varepsilon_{k+1,u_{k+1}}} \Bigg)$$
$$= \sum_{\gamma_1,\ldots,\gamma_k \in N} \Bigg[ \sum_{u_1=1}^{\infty} \sum_{(\varepsilon_{1,1},\ldots,\varepsilon_{1,u_1}) \in E_{\delta,\gamma_1-\mathbf{1}}(u_1)} Y_{\varepsilon_{1,1}} \cdots Y_{\varepsilon_{1,u_1}} \Bigg] \Bigg[ \sum_{u_2=1}^{\infty} \sum_{(\varepsilon_{2,1},\ldots,\varepsilon_{2,u_2}) \in E_{\gamma_1,\gamma_2-\mathbf{1}}(u_2)} Y_{\varepsilon_{2,1}} \cdots Y_{\varepsilon_{2,u_2}} \Bigg] \cdots \Bigg[ \sum_{u_k=1}^{\infty} \sum_{(\varepsilon_{k,1},\ldots,\varepsilon_{k,u_k}) \in E_{\gamma_{k-1},\gamma_k-\mathbf{1}}(u_k)} Y_{\varepsilon_{k,1}} \cdots Y_{\varepsilon_{k,u_k}} \Bigg] \Bigg[ \sum_{u_{k+1}=1}^{\infty} \sum_{(\varepsilon_{k+1,1},\ldots,\varepsilon_{k+1,u_{k+1}}) \in E_{\gamma_k,\beta-\mathbf{1}}(u_{k+1})} Y_{\varepsilon_{k+1,1}} \cdots Y_{\varepsilon_{k+1,u_{k+1}}} \Bigg]. \qquad (37)$$
After applying (33) to each sum inside the square brackets in (37), we obtain
$$Z_{\delta+k\mathbf{1},\beta} = \sum_{\gamma_1,\ldots,\gamma_k \in N} Z_{\delta,\gamma_1} Z_{\gamma_1,\gamma_2} \cdots Z_{\gamma_{k-1},\gamma_k} Z_{\gamma_k,\beta},$$
which can be rewritten as (35).
It follows from the definition of the set $E_{\alpha,\beta}(v)$ that, by isolating the first component of the $v$-tuple $(\varepsilon_1,\ldots,\varepsilon_v)$ in (33), the matrix $Z_{\alpha,\beta}$ can be transformed as
$$Z_{\alpha,\beta} = \sum_{i=1}^{\infty} \sum_{(\varepsilon_1,\ldots,\varepsilon_i) \in E_{\alpha,\beta-\mathbf{1}}(i)} Y_{\varepsilon_1} \cdots Y_{\varepsilon_i} = Y_{\beta-\alpha-\mathbf{1}} + \sum_{\substack{\varepsilon \in \mathcal{E} \\ \alpha+\varepsilon \geq \mathbf{0}}} Y_\varepsilon \sum_{w=1}^{\infty} \sum_{(\varepsilon_1,\ldots,\varepsilon_w) \in E_{\alpha+\varepsilon,\beta-\mathbf{1}}(w)} Y_{\varepsilon_1} \cdots Y_{\varepsilon_w} = Y_{\beta-\alpha-\mathbf{1}} + \sum_{\substack{\varepsilon \in \mathcal{E} \\ \alpha+\varepsilon \geq \mathbf{0}}} Y_\varepsilon Z_{\alpha+\varepsilon,\beta}.$$
From here, using Formula (35), we obtain
$$Z_{\alpha,\beta} = Y_{\beta-\alpha-\mathbf{1}} + \sum_{\substack{\varepsilon \in \mathcal{E} \\ l(\alpha+\varepsilon)=0}} Y_\varepsilon Z_{\alpha+\varepsilon,\beta} + \sum_{i=1}^{\infty} \sum_{\substack{\varepsilon \in \mathcal{E} \\ l(\alpha+\varepsilon)=i}} Y_\varepsilon \sum_{\gamma_1,\ldots,\gamma_i \in N} Z_{\Delta(\alpha+\varepsilon),\gamma_1} Z_{\gamma_1,\gamma_2} \cdots Z_{\gamma_{i-1},\gamma_i} Z_{\gamma_i,\beta},$$
which proves Formula (34). □
Let us define the matrices $G^{(k)}_{\alpha,\beta}$ as
$$G^{(k)}_{\alpha,\beta} = \sum_{v=1}^{\infty} \sum_{(\varepsilon_1,\ldots,\varepsilon_v) \in E_{\alpha,\beta-\mathbf{1}}(v)} X^{(k)}_{\varepsilon_1} \cdots X^{(k)}_{\varepsilon_v}, \quad \alpha, \beta \in N, \ k \geq 1.$$
For each $\varepsilon \in \mathcal{E}$, the sequence $X^{(k)}_\varepsilon$, $k \geq 1$, is entry-wise monotonically increasing and converges to the matrix $F_\varepsilon$ ([8]). This implies that, for all $\alpha, \beta \in N$, the sequence $G^{(k)}_{\alpha,\beta}$, $k \geq 1$, is also entry-wise monotonically increasing and converges to the matrix $G_{\alpha,\beta}$ given by (7).
It follows from Theorem 3 that the matrices $Z_{\alpha,\beta} = G^{(k)}_{\alpha,\beta}$ satisfy the system
$$G^{(k)}_{\alpha,\beta} = X^{(k)}_{\beta-\alpha-\mathbf{1}} + \sum_{\substack{\varepsilon \in \mathcal{E} \\ l(\alpha+\varepsilon)=0}} X^{(k)}_\varepsilon G^{(k)}_{\alpha+\varepsilon,\beta} + \sum_{i=1}^{\infty} \sum_{\substack{\varepsilon \in \mathcal{E} \\ l(\alpha+\varepsilon)=i}} X^{(k)}_\varepsilon \sum_{\gamma_1,\ldots,\gamma_i \in N} G^{(k)}_{\Delta(\alpha+\varepsilon),\gamma_1} G^{(k)}_{\gamma_1,\gamma_2} \cdots G^{(k)}_{\gamma_{i-1},\gamma_i} G^{(k)}_{\gamma_i,\beta}. \qquad (39)$$
Using decomposition (35), we can rewrite Equation (29) as
$$X^{(k+1)}_\varepsilon = Q_\varepsilon + \sum_{\alpha \geq \mathbf{0}} Q_\alpha G^{(k)}_{\alpha,\varepsilon+\mathbf{1}} = Q_\varepsilon + \sum_{\delta \in N} \sum_{l=0}^{\infty} Q_{\delta+l\mathbf{1}} G^{(k)}_{\delta+l\mathbf{1},\varepsilon+\mathbf{1}} = Q_\varepsilon + \sum_{\delta \in N} Q_\delta G^{(k)}_{\delta,\varepsilon+\mathbf{1}} + \sum_{\delta \in N} \sum_{l=1}^{\infty} Q_{\delta+l\mathbf{1}} \sum_{\gamma_1,\ldots,\gamma_l \in N} G^{(k)}_{\delta,\gamma_1} G^{(k)}_{\gamma_1,\gamma_2} \cdots G^{(k)}_{\gamma_{l-1},\gamma_l} G^{(k)}_{\gamma_l,\varepsilon+\mathbf{1}}. \qquad (40)$$
When using the iterative algorithm (28)-(29) to solve the system (8), a key challenge is the enumeration of the elements of the sets $E_{\alpha,\beta}(v)$. In the following theorem, we show how to avoid these computations.
Theorem 4.
Let the matrices $V^{(k)}_\varepsilon$, $\varepsilon \in \mathcal{E}$, and $W^{(k)}_{\alpha,\beta}$, $\alpha, \beta \in N$, $k \geq 0$, be defined as
$$W^{(0)}_{\alpha,\beta} = O, \quad \alpha, \beta \in N; \qquad (41)$$
$$V^{(k+1)}_\varepsilon = Q_\varepsilon + \sum_{\delta \in N} Q_\delta W^{(k)}_{\delta,\varepsilon+\mathbf{1}} + \sum_{\delta \in N} \sum_{l=1}^{\infty} Q_{\delta+l\mathbf{1}} \sum_{\gamma_1,\ldots,\gamma_l \in N} W^{(k)}_{\delta,\gamma_1} W^{(k)}_{\gamma_1,\gamma_2} \cdots W^{(k)}_{\gamma_{l-1},\gamma_l} W^{(k)}_{\gamma_l,\varepsilon+\mathbf{1}}, \quad \varepsilon \in \mathcal{E}; \qquad (42)$$
$$W^{(k+1)}_{\alpha,\beta} = V^{(k+1)}_{\beta-\alpha-\mathbf{1}} + \sum_{\substack{\varepsilon \in \mathcal{E} \\ l(\alpha+\varepsilon)=0}} V^{(k+1)}_\varepsilon W^{(k)}_{\alpha+\varepsilon,\beta} + \sum_{i=1}^{\infty} \sum_{\substack{\varepsilon \in \mathcal{E} \\ l(\alpha+\varepsilon)=i}} V^{(k+1)}_\varepsilon \sum_{\gamma_1,\ldots,\gamma_i \in N} W^{(k)}_{\Delta(\alpha+\varepsilon),\gamma_1} W^{(k)}_{\gamma_1,\gamma_2} \cdots W^{(k)}_{\gamma_{i-1},\gamma_i} W^{(k)}_{\gamma_i,\beta}, \quad \alpha, \beta \in N, \ k \geq 0. \qquad (43)$$
Then, for each $k \geq 1$, the matrices $V^{(k)}_\varepsilon$ and $W^{(k)}_{\alpha,\beta}$ satisfy the following inequalities:
$$X^{(k-1)}_\varepsilon \leq V^{(k)}_\varepsilon \leq X^{(k)}_\varepsilon, \quad \varepsilon \in \mathcal{E}; \qquad G^{(k-1)}_{\alpha,\beta} \leq W^{(k)}_{\alpha,\beta} \leq G^{(k)}_{\alpha,\beta}, \quad \alpha, \beta \in N, \qquad (44)$$
where $X^{(0)}_\varepsilon = O$ and $G^{(0)}_{\alpha,\beta} = O$.
Proof of Theorem 4.
The proof is based on the fact that the sequences $X^{(k)}_\varepsilon$ and $G^{(k)}_{\alpha,\beta}$, $k \geq 1$, are entry-wise monotonically increasing.
First, we demonstrate that for all $\varepsilon \in \mathcal{E}$ and $\alpha, \beta \in N$, the sequences $V^{(k)}_\varepsilon$ and $W^{(k)}_{\alpha,\beta}$, $k \geq 1$, satisfy $V^{(k)}_\varepsilon \leq X^{(k)}_\varepsilon$ and $W^{(k)}_{\alpha,\beta} \leq G^{(k)}_{\alpha,\beta}$. Since $V^{(1)}_\varepsilon = Q_\varepsilon = X^{(1)}_\varepsilon$ and
$$W^{(1)}_{\alpha,\beta} = X^{(1)}_{\beta-\alpha-\mathbf{1}} \leq \sum_{v=1}^{\infty} \sum_{(\varepsilon_1,\ldots,\varepsilon_v) \in E_{\alpha,\beta-\mathbf{1}}(v)} X^{(1)}_{\varepsilon_1} \cdots X^{(1)}_{\varepsilon_v} = G^{(1)}_{\alpha,\beta},$$
we know that $V^{(1)}_\varepsilon \leq X^{(1)}_\varepsilon$ and $W^{(1)}_{\alpha,\beta} \leq G^{(1)}_{\alpha,\beta}$. Let us assume that $V^{(k)}_\varepsilon \leq X^{(k)}_\varepsilon$ and $W^{(k)}_{\alpha,\beta} \leq G^{(k)}_{\alpha,\beta}$ for some $k$ and for all $\varepsilon \in \mathcal{E}$, $\alpha, \beta \in N$. Then, it follows from (42) and (40) that the following inequality holds:
$$V^{(k+1)}_\varepsilon = Q_\varepsilon + \sum_{\delta \in N} Q_\delta W^{(k)}_{\delta,\varepsilon+\mathbf{1}} + \sum_{\delta \in N} \sum_{l=1}^{\infty} Q_{\delta+l\mathbf{1}} \sum_{\gamma_1,\ldots,\gamma_l \in N} W^{(k)}_{\delta,\gamma_1} W^{(k)}_{\gamma_1,\gamma_2} \cdots W^{(k)}_{\gamma_{l-1},\gamma_l} W^{(k)}_{\gamma_l,\varepsilon+\mathbf{1}} \leq Q_\varepsilon + \sum_{\delta \in N} Q_\delta G^{(k)}_{\delta,\varepsilon+\mathbf{1}} + \sum_{\delta \in N} \sum_{l=1}^{\infty} Q_{\delta+l\mathbf{1}} \sum_{\gamma_1,\ldots,\gamma_l \in N} G^{(k)}_{\delta,\gamma_1} G^{(k)}_{\gamma_1,\gamma_2} \cdots G^{(k)}_{\gamma_{l-1},\gamma_l} G^{(k)}_{\gamma_l,\varepsilon+\mathbf{1}} = X^{(k+1)}_\varepsilon. \qquad (45)$$
Using (43), inequality (45), and (39), we obtain
$$W^{(k+1)}_{\alpha,\beta} = V^{(k+1)}_{\beta-\alpha-\mathbf{1}} + \sum_{\substack{\varepsilon \in \mathcal{E} \\ l(\alpha+\varepsilon)=0}} V^{(k+1)}_\varepsilon W^{(k)}_{\alpha+\varepsilon,\beta} + \sum_{i=1}^{\infty} \sum_{\substack{\varepsilon \in \mathcal{E} \\ l(\alpha+\varepsilon)=i}} V^{(k+1)}_\varepsilon \sum_{\gamma_1,\ldots,\gamma_i \in N} W^{(k)}_{\Delta(\alpha+\varepsilon),\gamma_1} W^{(k)}_{\gamma_1,\gamma_2} \cdots W^{(k)}_{\gamma_{i-1},\gamma_i} W^{(k)}_{\gamma_i,\beta}$$
$$\leq X^{(k+1)}_{\beta-\alpha-\mathbf{1}} + \sum_{\substack{\varepsilon \in \mathcal{E} \\ l(\alpha+\varepsilon)=0}} X^{(k+1)}_\varepsilon G^{(k+1)}_{\alpha+\varepsilon,\beta} + \sum_{i=1}^{\infty} \sum_{\substack{\varepsilon \in \mathcal{E} \\ l(\alpha+\varepsilon)=i}} X^{(k+1)}_\varepsilon \sum_{\gamma_1,\ldots,\gamma_i \in N} G^{(k+1)}_{\Delta(\alpha+\varepsilon),\gamma_1} G^{(k+1)}_{\gamma_1,\gamma_2} \cdots G^{(k+1)}_{\gamma_{i-1},\gamma_i} G^{(k+1)}_{\gamma_i,\beta} = G^{(k+1)}_{\alpha,\beta},$$
which proves the induction step. Therefore, $V^{(k)}_\varepsilon \leq X^{(k)}_\varepsilon$ and $W^{(k)}_{\alpha,\beta} \leq G^{(k)}_{\alpha,\beta}$ for all $k \geq 1$ and for all $\varepsilon \in \mathcal{E}$, $\alpha, \beta \in N$.
Let us now demonstrate that for all $\varepsilon \in \mathcal{E}$ and $\alpha, \beta \in N$, the sequences $V^{(k)}_\varepsilon$ and $W^{(k)}_{\alpha,\beta}$, $k \geq 1$, satisfy $X^{(k-1)}_\varepsilon \leq V^{(k)}_\varepsilon$ and $G^{(k-1)}_{\alpha,\beta} \leq W^{(k)}_{\alpha,\beta}$. Since $V^{(1)}_\varepsilon = Q_\varepsilon \geq X^{(0)}_\varepsilon = O$ and
$$W^{(1)}_{\alpha,\beta} = X^{(1)}_{\beta-\alpha-\mathbf{1}} \geq G^{(0)}_{\alpha,\beta} = O,$$
we know that $V^{(1)}_\varepsilon \geq X^{(0)}_\varepsilon$ and $W^{(1)}_{\alpha,\beta} \geq G^{(0)}_{\alpha,\beta}$. Let us assume that $V^{(k)}_\varepsilon \geq X^{(k-1)}_\varepsilon$ and $W^{(k)}_{\alpha,\beta} \geq G^{(k-1)}_{\alpha,\beta}$ for some $k$ and for all $\varepsilon \in \mathcal{E}$, $\alpha, \beta \in N$. The following inequality follows from (42) and (40):
$$V^{(k+1)}_\varepsilon = Q_\varepsilon + \sum_{\delta \in N} Q_\delta W^{(k)}_{\delta,\varepsilon+\mathbf{1}} + \sum_{\delta \in N} \sum_{l=1}^{\infty} Q_{\delta+l\mathbf{1}} \sum_{\gamma_1,\ldots,\gamma_l \in N} W^{(k)}_{\delta,\gamma_1} W^{(k)}_{\gamma_1,\gamma_2} \cdots W^{(k)}_{\gamma_{l-1},\gamma_l} W^{(k)}_{\gamma_l,\varepsilon+\mathbf{1}} \geq Q_\varepsilon + \sum_{\delta \in N} Q_\delta G^{(k-1)}_{\delta,\varepsilon+\mathbf{1}} + \sum_{\delta \in N} \sum_{l=1}^{\infty} Q_{\delta+l\mathbf{1}} \sum_{\gamma_1,\ldots,\gamma_l \in N} G^{(k-1)}_{\delta,\gamma_1} G^{(k-1)}_{\gamma_1,\gamma_2} \cdots G^{(k-1)}_{\gamma_{l-1},\gamma_l} G^{(k-1)}_{\gamma_l,\varepsilon+\mathbf{1}} = X^{(k)}_\varepsilon.$$
By applying this inequality along with the equalities (43) and (39), we derive the following:
$$W^{(k+1)}_{\alpha,\beta} = V^{(k+1)}_{\beta-\alpha-\mathbf{1}} + \sum_{\substack{\varepsilon \in \mathcal{E} \\ l(\alpha+\varepsilon)=0}} V^{(k+1)}_\varepsilon W^{(k)}_{\alpha+\varepsilon,\beta} + \sum_{i=1}^{\infty} \sum_{\substack{\varepsilon \in \mathcal{E} \\ l(\alpha+\varepsilon)=i}} V^{(k+1)}_\varepsilon \sum_{\gamma_1,\ldots,\gamma_i \in N} W^{(k)}_{\Delta(\alpha+\varepsilon),\gamma_1} W^{(k)}_{\gamma_1,\gamma_2} \cdots W^{(k)}_{\gamma_{i-1},\gamma_i} W^{(k)}_{\gamma_i,\beta}$$
$$\geq X^{(k)}_{\beta-\alpha-\mathbf{1}} + \sum_{\substack{\varepsilon \in \mathcal{E} \\ l(\alpha+\varepsilon)=0}} X^{(k)}_\varepsilon G^{(k)}_{\alpha+\varepsilon,\beta} + \sum_{i=1}^{\infty} \sum_{\substack{\varepsilon \in \mathcal{E} \\ l(\alpha+\varepsilon)=i}} X^{(k)}_\varepsilon \sum_{\gamma_1,\ldots,\gamma_i \in N} G^{(k)}_{\Delta(\alpha+\varepsilon),\gamma_1} G^{(k)}_{\gamma_1,\gamma_2} \cdots G^{(k)}_{\gamma_{i-1},\gamma_i} G^{(k)}_{\gamma_i,\beta} = G^{(k)}_{\alpha,\beta},$$
which proves the induction step. Thus, $V^{(k)}_\varepsilon \geq X^{(k-1)}_\varepsilon$ and $W^{(k)}_{\alpha,\beta} \geq G^{(k-1)}_{\alpha,\beta}$ for all $k \geq 1$ and for all $\varepsilon \in \mathcal{E}$, $\alpha, \beta \in N$. □
Given that the sequences $X^{(k)}_\varepsilon$, $k \geq 1$, and $G^{(k)}_{\alpha,\beta}$, $k \geq 1$, are entry-wise monotonically increasing, we can derive the following inequalities from Theorem 4:
$$O = X^{(0)}_\varepsilon \leq V^{(1)}_\varepsilon \leq X^{(1)}_\varepsilon \leq V^{(2)}_\varepsilon \leq X^{(2)}_\varepsilon \leq V^{(3)}_\varepsilon \leq X^{(3)}_\varepsilon \leq \cdots \qquad (46)$$
for all $\varepsilon \in \mathcal{E}$, and
$$O = G^{(0)}_{\alpha,\beta} \leq W^{(1)}_{\alpha,\beta} \leq G^{(1)}_{\alpha,\beta} \leq W^{(2)}_{\alpha,\beta} \leq G^{(2)}_{\alpha,\beta} \leq W^{(3)}_{\alpha,\beta} \leq G^{(3)}_{\alpha,\beta} \leq \cdots \qquad (47)$$
for all $\alpha, \beta \in N$. Since the sequence $X^{(k)}_\varepsilon$, $k \geq 1$, converges to the matrix $F_\varepsilon$ and the sequence $G^{(k)}_{\alpha,\beta}$, $k \geq 1$, converges to the matrix $G_{\alpha,\beta}$, the inequalities (46) and (47) lead to the conclusion presented in Corollary 1.
Corollary 1.
For each $\varepsilon \in \mathcal{E}$, the sequence $V^{(k)}_\varepsilon$, $k \geq 1$, is entry-wise monotonically increasing and converges to the matrix $F_\varepsilon$. For each $\alpha, \beta \in N$, the sequence $W^{(k)}_{\alpha,\beta}$, $k \geq 1$, is entry-wise monotonically increasing and converges to the matrix $G_{\alpha,\beta}$.
Consequently, Theorem 4 provides a new algorithm for computing the matrices of the sector exit probabilities and the matrix $G$. Passing to the limit as $k$ tends to infinity in the equalities (42) and (43), and using Corollary 1, we obtain a system of equations for the matrices $F_\varepsilon$ and $G_{\alpha,\beta}$.
Corollary 2.
The matrices $F_\varepsilon$ and $G_{\alpha,\beta}$ satisfy the following system:
$$F_\varepsilon = Q_\varepsilon + \sum_{\delta \in N} Q_\delta G_{\delta,\varepsilon+\mathbf{1}} + \sum_{\delta \in N} \sum_{l=1}^{\infty} Q_{\delta+l\mathbf{1}} \sum_{\gamma_1,\ldots,\gamma_l \in N} G_{\delta,\gamma_1} G_{\gamma_1,\gamma_2} \cdots G_{\gamma_{l-1},\gamma_l} G_{\gamma_l,\varepsilon+\mathbf{1}}, \quad \varepsilon \in \mathcal{E}; \qquad (48)$$
$$G_{\alpha,\beta} = F_{\beta-\alpha-\mathbf{1}} + \sum_{\substack{\varepsilon \in \mathcal{E} \\ l(\alpha+\varepsilon)=0}} F_\varepsilon G_{\alpha+\varepsilon,\beta} + \sum_{i=1}^{\infty} \sum_{\substack{\varepsilon \in \mathcal{E} \\ l(\alpha+\varepsilon)=i}} F_\varepsilon \sum_{\gamma_1,\ldots,\gamma_i \in N} G_{\Delta(\alpha+\varepsilon),\gamma_1} G_{\gamma_1,\gamma_2} \cdots G_{\gamma_{i-1},\gamma_i} G_{\gamma_i,\beta}, \quad \alpha, \beta \in N. \qquad (49)$$
Note that if $\alpha = \mathbf{0}$, all sums in Equation (49) equal zero, since $l(\varepsilon) = -1$ for all $\varepsilon \in \mathcal{E}$. Therefore, in this case Equation (49) takes the form $G_{\mathbf{0},\beta} = F_{\beta-\mathbf{1}}$, $\beta \in N$. Consequently, Equations (48) and (49) describe the relationships between the blocks $F_\varepsilon = G_{\mathbf{0},\varepsilon+\mathbf{1}}$, $\varepsilon \in \mathcal{E}$, of the matrix $G$ and all its other blocks.
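The following Python sketch implements a truncated version of the iteration (41)-(43) for a small example. It is only an illustration under truncation assumptions (the sets $N$ and $\mathcal{E}$ are restricted to vectors with components at most $C$, and only finitely many blocks $Q_{\mathbf{r}}$ are stored), not an exact implementation of the infinite system; the data layout, function names, and the truncation parameter $C$ are ours. The key simplification of Theorem 4 carries over directly: the sums over $\gamma_1, \ldots, \gamma_l$ are blocks of powers of the block matrix $W^{(k)} = [W^{(k)}_{\alpha,\beta}]$, so no enumeration of the sets $E_{\alpha,\beta}(v)$ is needed.

```python
import numpy as np
from itertools import product

def truncated_VW_iteration(Q, M, n_ph, C=2, iterations=20):
    """Truncated sketch of the iteration (41)-(43) of Theorem 4.
    Q maps integer vectors r >= -1 (tuples) to nonnegative n_ph x n_ph blocks Q_r;
    unstored blocks count as zero. The sets N and E are truncated to vectors with
    components at most C. Returns approximations of F_eps (dict V) and of the
    blocks G_{alpha,beta} (dict W)."""
    E = [e for e in product(range(-1, C + 1), repeat=M) if min(e) == -1]
    N = [a for a in product(range(0, C + 1), repeat=M) if min(a) == 0]
    idx = {a: i for i, a in enumerate(N)}

    def level(x):  return min(x)
    def delta(x):  return tuple(c - level(x) for c in x)
    def add(x, y): return tuple(c + d for c, d in zip(x, y))

    def block(big, a, b):
        i, j = idx[a], idx[b]
        return big[i * n_ph:(i + 1) * n_ph, j * n_ph:(j + 1) * n_ph]

    zero = np.zeros((n_ph, n_ph))
    Wbig = np.zeros((len(N) * n_ph, len(N) * n_ph))   # W^{(0)} = O, Equation (41)
    V = {}
    l_max = max([level(r) for r in Q if min(r) >= 0] + [C])
    for _ in range(iterations):
        powers = [np.linalg.matrix_power(Wbig, p) for p in range(l_max + 2)]
        # Equation (42): every r >= 0 is r = delta(r) + level(r)*1, and the inner sums
        # over gamma_1, ..., gamma_l are the (delta(r), eps+1) block of W^(level(r)+1).
        V = {}
        for e in E:
            acc = Q.get(e, zero).copy()
            target = add(e, (1,) * M)                  # eps + 1 lies in N
            if target in idx:
                for r, Qr in Q.items():
                    if min(r) >= 0 and delta(r) in idx:
                        acc += Qr @ block(powers[level(r) + 1], delta(r), target)
            V[e] = acc
        # Equation (43): the term for eps is present only if alpha + eps >= 0, and then
        # it equals V_eps times the (Delta(alpha+eps), beta) block of W^(l(alpha+eps)+1).
        Wnew = np.zeros_like(Wbig)
        for a in N:
            for b in N:
                acc = V.get(tuple(bb - aa - 1 for aa, bb in zip(a, b)), zero).copy()
                for e in E:
                    s = add(a, e)
                    if min(s) >= 0 and delta(s) in idx:
                        acc += V[e] @ block(powers[level(s) + 1], delta(s), b)
                i, j = idx[a], idx[b]
                Wnew[i * n_ph:(i + 1) * n_ph, j * n_ph:(j + 1) * n_ph] = acc
        Wbig = Wnew
    return V, {(a, b): block(Wbig, a, b).copy() for a in N for b in N}

# Toy 2-dimensional example with a single phase (n = 1).
Q = {(-1, -1): np.array([[0.4]]), (0, 0): np.array([[0.3]]),
     (1, 1): np.array([[0.2]]), (1, 0): np.array([[0.1]])}
V, W = truncated_VW_iteration(Q, M=2, n_ph=1, C=2, iterations=30)
print(V[(-1, -1)])            # approximation of F_{(-1,-1)}
print(W[((0, 0), (0, 0))])    # approximation of the block G_{0,0}
```

By Corollary 1, the iterates $V^{(k)}_\varepsilon$ and $W^{(k)}_{\alpha,\beta}$ increase monotonically to $F_\varepsilon$ and $G_{\alpha,\beta}$; under the truncation used here, the returned values approximate these limits from below.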

6. Conclusions

Matrices of the sector exit probabilities $F_\varepsilon$ were introduced in [8] as a means of calculating the matrix $G$ of multi-dimensional processes of M/G/1 type using matrices of order $n$. A system of equations for the matrices $F_\varepsilon$ was obtained there, and the iterative algorithm (28)-(29) for calculating its minimal nonnegative solution was proposed. However, the question remained whether the family of matrices $F_\varepsilon$, $\varepsilon \in \mathcal{E}$, is itself the minimal nonnegative solution of this system. In Theorem 2, we gave a positive answer to this question. The algorithm proposed in [8] was difficult to apply due to the need to enumerate the elements of the sets $E_{\alpha,\beta}(v)$. In Section 5, we demonstrated that the matrices $F_\varepsilon$ and the blocks $G_{\alpha,\beta}$ of the matrix $G$ satisfy the system (48) and (49) and provided the algorithm outlined in Equations (41)-(43) for solving this system. This algorithm avoids the challenges associated with the enumeration of the elements of the sets $E_{\alpha,\beta}(v)$ in the algorithm introduced in [8].
In multi-dimensional cases, both families of the matrices F ε and G α , β are infinite, leading to a system with infinitely many equations. Managing systems with infinitely many equations and unknown infinite matrices is not feasible. Therefore, future research should concentrate on developing a method for selecting an appropriate truncation approximation.

Author Contributions

Conceptualization, V.N. and K.S.; formal analysis, V.N.; investigation, V.N. and K.S.; methodology, V.N. and K.S.; funding acquisition, K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This publication has been supported by the RUDN University Scientific Projects Grant System, project No. 021937-2-000.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Neuts, M.F. Structured Stochastic Matrices of M/G/1 Type and Their Applications; Marcel Dekker: New York, NY, USA, 1989.
2. Bini, D.A.; Latouche, G.; Meini, B. Numerical Methods for Structured Markov Chains; Oxford University Press: Oxford, UK, 2005.
3. Miyazawa, M. Tail decay rates in double QBD processes and related reflected random walks. Math. Oper. Res. 2009, 34, 547–575.
4. Kobayashi, M.; Miyazawa, M. Tail asymptotics of the stationary distribution of a two-dimensional reflecting random walk with unbounded upward jumps. Adv. Appl. Prob. 2014, 46, 365–399.
5. Ozawa, T.; Kobayashi, M. Exact asymptotic formulae of the stationary distribution of a discrete-time two-dimensional QBD process. Queueing Syst. 2018, 90, 351–403.
6. Ozawa, T. Stability condition of a two-dimensional QBD process and its application to estimation of efficiency for two-queue models. Perf. Eval. 2019, 130, 101–118.
7. Ozawa, T. Tail asymptotics in any direction of the stationary distribution in a two-dimensional discrete-time QBD process. Queueing Syst. 2022, 102, 227–267.
8. Naumov, V.; Samouylov, K. Multi-Dimensional Markov Chains of M/G/1 Type. Mathematics 2025, 13, 209.
Figure 1. The levels and sector $X(3,1)$ of a 2d-M/G/1 process.