Abstract
We consider an irreducible discrete-time Markov process with states represented as (k, i) where k is an M-dimensional vector with non-negative integer entries, and i indicates the state (phase) of the external environment. The number n of phases may be either finite or infinite. One-step transitions of the process from a state (k, i) are limited to states (n, j) such that n ≥ k−1, where 1 represents the vector of all 1s. We assume that for a vector k ≥ 1, the one-step transition probability from a state (k, i) to a state (n, j) may depend on i, j, and n − k, but not on the specific values of k and n. This process can be classified as a Markov chain of M/G/1 type, where the minimum entry of the vector n defines the level of a state (n, j). It is shown that the first passage distribution matrix of such a process, also known as the matrix G, can be expressed through a family of nonnegative square matrices of order n, which is a solution to a system of nonlinear matrix equations.
Keywords:
discrete-time Markov chain; Markov chain of M/G/1 type; matrix G; system of nonlinear matrix equations
MSC:
60J10; 60J35; 60K37; 94-08
1. Introduction
Markov chains of M/G/1 type are a basic model for describing the temporal evolution of a population size. Such a chain is a two-component process: the first component is an integer-valued process describing the dynamics of the population size, and the second represents the state (phase) of the external environment. The state space of a Markov chain of M/G/1 type can be partitioned into non-empty disjoint subsets known as levels, such that one-step transitions are limited to states at the same level, the adjacent lower level, or any higher level. The transition matrix of a Markov chain of M/G/1 type has a block upper Hessenberg form, and all blocks along the main diagonal, except for the topmost ones, are identical.
The matrix geometric method proposed by M.F. Neuts in [1] and his study of Markov chains of M/G/1 type in [2] led to the rapid development of matrix-analytic methods in operations research. These methods provide a powerful framework for the unified analysis of large classes of Markov processes and, more importantly, for their numerical solution. Matrix-analytic methods and their applications as they stand today are outlined in [3,4,5,6,7,8,9,10,11].
In this paper, we study discrete-time Markov chains on the state space ℤ₊^M × N, where ℤ₊ is the set of nonnegative integers and N is the set of phases of the external environment. The number of elements in N may be either finite or infinite. One-step transitions of the process from a state (k, i) are limited to states (n, j) such that n ≥ k − 1, where 1 represents the vector of all 1s. We assume that the process is spatially homogeneous, meaning that for k ≥ 1 the one-step transition probability from a state (k, i) to a state (n, j) may depend on i, j, and n − k, but not on the specific values of k and n. We will refer to these processes as M-dimensional Markov chains of M/G/1 type (Md-M/G/1 processes). These processes are Markov chains of M/G/1 type, with the level l consisting of the states (n, j) for which the minimum entry of n equals l.
Multidimensional quasi-birth-and-death processes (Md-QBD processes) are a specific type of multidimensional Markov chain of M/G/1 type. They are characterized by having one-step transitions from a state (k, i) restricted to states (n, j) such that k − 1 ≤ n ≤ k + 1. No explicit analytical representation for the stationary distribution of Md-QBD processes is known, and most work has been devoted to deriving asymptotic formulas for the stationary distribution (see [12,13,14]). The conditions ensuring a positive recurrent or transient 2d-QBD process were analyzed in [15]. Specific cases of Md-QBD processes, where only one component of the vector k may change at a time, were studied in [16].
In the theory of Markov chains of M/G/1 type, a crucial role is played by the so-called matrix G [2]. Once it is known, many relevant quantities may be computed efficiently. Different algorithms for the numerical computation of the matrix G have been proposed [2,17,18,19,20,21]. However, the structure of the levels of Md-M/G/1 processes can make it challenging to apply these methods effectively.
This study aims to describe the matrix G of Md-M/G/1 processes in terms of a family of matrices of order n. Section 2 reviews classical Markov chains of M/G/1 type. Section 3 defines the multidimensional Markov chains of M/G/1 type and introduces the concepts of state sectors and sector exit probabilities. We analyze the first passage probabilities and demonstrate that the family of matrices representing the sector exit probabilities satisfies a system of nonlinear matrix equations. Section 4 shows how a solution to this system can be obtained through successive substitution. An example of an Md-QBD process is discussed in Section 5. Finally, we offer some concluding remarks in Section 6.
We use bold capital letters to denote matrices and bold lowercase letters to denote vectors. Unless otherwise stated, all vectors in this paper have integer components and length M. For any vector x, we write x_m for its mth component. For vectors x and y, x ≥ y means that x_m ≥ y_m for all m, and x ≱ y means that x_m < y_m for at least one value of m. The notations x ≤ y and x ≰ y are defined similarly. The vector 1 represents the vector of all 1s, and the vector e_m indicates the vector with zero entries except the mth entry, which equals one. Given a vector and an integer , we define two sets and . The sets , , and are defined as , , and . We refer to the set of states as the sector.
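The componentwise relations above are used throughout the paper. The following small Python helpers are purely illustrative and not part of the original text; the function names are ours.

```python
import numpy as np

# Illustrative helpers for the componentwise vector notation (names are ours).
def ge(x, y):
    """x >= y componentwise: every entry of x is at least the matching entry of y."""
    return bool(np.all(np.asarray(x) >= np.asarray(y)))

def unit_vector(m, M):
    """The vector e_m of length M: all zeros except a 1 in position m (1-based)."""
    v = np.zeros(M, dtype=int)
    v[m - 1] = 1
    return v

ones = np.ones(3, dtype=int)                 # the vector 1 of all 1s
print(ge((2, 1, 3), ones))                   # True: (2, 1, 3) >= (1, 1, 1)
print(ge((2, 0, 3), ones))                   # False: the second entry drops below 1
print(unit_vector(2, 3))                     # [0 1 0]
```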
2. Markov Chains of M/G/1 Type
A Markov chain of M/G/1 type with a state space ℤ₊ × N is characterized by a block upper Hessenberg transition matrix of the form

P = ⎡ B₀ B₁ B₂ B₃ ⋯ ⎤
    ⎢ A₀ A₁ A₂ A₃ ⋯ ⎥
    ⎢ O  A₀ A₁ A₂ ⋯ ⎥
    ⎢ O  O  A₀ A₁ ⋯ ⎥
    ⎣ ⋮  ⋮  ⋮  ⋮  ⋱ ⎦

where the blocks Bₙ and Aₙ, n ≥ 0, are nonnegative square matrices such that Σ_{n≥0} Bₙ and Σ_{n≥0} Aₙ are stochastic matrices. Entries of the matrix blocks are indexed by the elements of the phase space N. The subset of all states (k, i) with a fixed value of k is called the level k.
A fundamental role in the theory of Markov chains of M/G/1 type is played by the matrix G, described by Neuts in [2]. The element g_{ij} of this matrix represents the probability that, starting from a state (k + 1, i), the chain will first appear at the level k in the state (k, j). Neuts demonstrated that G is the minimal nonnegative solution to the equation

G = Σ_{n≥0} Aₙ Gⁿ.    (1)
There are several algorithms for calculating the invariant measures of the transition matrix using the matrix G [22,23,24].
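For the classical case, the minimal nonnegative solution of Equation (1) can be approximated by the natural successive-substitution iteration G ← Σₙ Aₙ Gⁿ starting from the zero matrix. The sketch below is a minimal illustration under the assumption that the family {Aₙ} has been truncated to finitely many nonzero blocks; the function and variable names are ours, and the more efficient algorithms surveyed in [17,18,19,20,21] are preferable in practice.

```python
import numpy as np

def compute_G(A, tol=1e-12, max_iter=10_000):
    """Successive substitution G_{l+1} = sum_n A_n G_l^n for an M/G/1-type chain.

    A : list of nonnegative (nph x nph) arrays [A_0, A_1, ..., A_K] (truncated
        family) whose sum is assumed stochastic.
    Returns an approximation of the minimal nonnegative solution G of (1).
    """
    nph = A[0].shape[0]
    G = np.zeros((nph, nph))
    for _ in range(max_iter):
        Gn = np.eye(nph)              # current power G^n, starting with G^0 = I
        G_new = np.zeros((nph, nph))
        for An in A:
            G_new += An @ Gn          # accumulate A_n G^n
            Gn = Gn @ G
        if np.max(np.abs(G_new - G)) < tol:
            return G_new
        G = G_new
    return G

# Example: a small, arbitrarily chosen instance with two phases and K = 2.
A0 = np.array([[0.3, 0.1], [0.2, 0.2]])
A1 = np.array([[0.2, 0.1], [0.1, 0.3]])
A2 = np.array([[0.2, 0.1], [0.1, 0.1]])
G = compute_G([A0, A1, A2])
print(np.round(G, 6), G.sum(axis=1))  # rows sum to 1 when the chain is recurrent
```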
3. Multidimensional Processes of M/G/1 Type
Consider an irreducible discrete-time Markov chain on the state space ℤ₊^M × N. Let us denote the probability of a one-step transition from a state (k, i) to a state (n, j) by p((k, i), (n, j)). We assume that the transition probability matrix, partitioned into blocks P(k, n) = (p((k, i), (n, j))) of order n, k, n ∈ ℤ₊^M, has the following properties for all k ≥ 1:

P(k, n) = O whenever n ≱ k − 1, and P(k, k + d) = A(d) for every d ≥ −1,
where A(d), d ≥ −1, are nonnegative square matrices such that Σ_{d ≥ −1} A(d) is a stochastic matrix. We refer to this process as an M-dimensional Markov chain of M/G/1 type (Md-M/G/1 process).
3.1. Sectors of the Process States
Let us define the sequence of passage times as follows:
We say that at time , the process is in the sector if and if . For a vector , we define the sector exit time as the moment when the process first enters the set ,
Additionally, we define the number of sectors visited along a path to as
If an initial state of the process belongs to , then we have and . If with , then at the first hitting time of , the process exits the sector and enters the sector , which implies the equality . The set is reached at the moment of transition from the set to the state .
For vectors , , and we define matrices and as follows.
The element of the matrix is the conditional probability that the process , starting in the state , reaches the set by hitting the state after passing through exactly sectors,
The element of the matrix is the conditional probability that the process will eventually hit the set in the state , given that it starts in the state ,
It is clear that matrices and , are related to each other by the equality
For , any path of leading from a state to a state must successively visit sets , which will require visiting at least sectors. Therefore, we have
Since the process is spatially homogeneous, for any vector and any vector , the probabilities may depend on , , and , but not on the specific values of , , and , i.e., . This means that the matrix may be expressed as
where the matrix is defined as
independently of the vector .
The element of the matrix is the conditional probability that the process will hit the state on the first visit to the set , given that it starts in the state . Here, the vectors and satisfy the conditions and . Therefore, the index of a matrix is a nonnegative vector that belongs to the set defined as
We refer to the matrices as the matrices of the first passage probabilities.
The matrices determine the transition probabilities of the embedded Markov chain , since for we have
We define matrices as
and refer to these matrices as the matrices of the sector exit probabilities.
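Since the displayed definitions of the sectors and of the exit times are not reproduced above, the following Monte Carlo sketch should be read as an assumption-laden illustration only: it takes the sector of a vector k to be the set of states (n, i) with n ≥ k, treats "exit" as the first visit to a state with n ≱ k, and samples one-step transitions from the homogeneous kernel {A(d)}. All names are ours.

```python
import numpy as np

def estimate_exit_probs(A, k0, i0, n_samples=100_000, max_steps=10_000, rng=None):
    """Monte Carlo estimate of sector-exit probabilities (illustrative only).

    A        : dict mapping offset tuples d (each entry >= -1) to nonnegative
               (nph x nph) arrays A[d]; the sum of all A[d] is assumed stochastic.
    (k0, i0) : starting state, with k0 >= 1 componentwise.
    Returns a dict {(d_exit, j): estimate}, where d_exit = exit vector - k0
    has at least one entry equal to -1.
    """
    rng = np.random.default_rng(rng)
    offsets = list(A.keys())
    nph = next(iter(A.values())).shape[0]
    # flatten the kernel into one categorical distribution per current phase
    probs = {i: np.array([A[d][i, j] for d in offsets for j in range(nph)])
             for i in range(nph)}
    counts, k0 = {}, np.asarray(k0)
    for _ in range(n_samples):
        k, i = k0.copy(), i0
        for _ in range(max_steps):                  # guard against non-returning paths
            idx = rng.choice(len(offsets) * nph, p=probs[i])
            k = k + np.array(offsets[idx // nph])
            i = idx % nph
            if not np.all(k >= k0):                 # left the sector of k0
                key = (tuple(k - k0), i)
                counts[key] = counts.get(key, 0) + 1
                break
    return {key: c / n_samples for key, c in counts.items()}
```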
3.2. Matrices of the First Passage Probabilities
Theorem 1.
The matrices of the first passage probabilities satisfy the system
Proof of Theorem 1.
We will initially demonstrate the validity of the following equality for all vectors and :
This formula follows from the law of total probability, taking into account all possible states of the process after the first transition. Consider two states: and . The state can be reached from the state after a single transition. This contributes to (12) the term . The first transition may also take the process to some state with . To reach the set from state , the process must necessarily cross one or more sectors and then hit the state . This yields the second term on the right-hand side of (12). Equation (10) for the matrices is derived from (12) using Formulas (2) and (7). □
For vectors and , let the set be defined as the set of all tuples , satisfying …, , and , and let the set be defined as .
Any nonnegative vector can be represented as , with , and the set being defined as . Since the inequality holds for all vectors , only finitely many elements of the set can satisfy the condition . Therefore, for any , and , the set is empty. If this set is not empty, then the following decomposition is valid in terms of the Cartesian products of sets of the form , where , , and ,
Lemma 1.
For any vectors , and an integer the sets , , and the set can be represented as follows:
Proof of Lemma 1.
Since and , for any tuple of length , there necessarily exist numbers in the interval such that . Let us denote by the minimum of such numbers. The number of elements satisfying the condition must be at least . Therefore, the set can be represented as follows:
Each element of the set can be written as where . Therefore, Equation (15) implies the following equalities:
Thus, for any integers , the following formula is valid
Applying this formula for , we can obtain the decomposition (13) for the set .
Introducing a new variable in (17) and changing the order of summation, we obtain the following result:
Applying this formula for , we obtain the decomposition (14) for the set , which proves Lemma 1. □
Theorem 2.
For and , the matrices are given by
Proof of Theorem 2.
Entries of matrices are the one-step transition probabilities (9) of the embedded Markov chain , while entries of the matrices are the probabilities of reaching the set after steps of this Markov chain. Therefore, for all and , the matrices are completely determined by the matrices , , as follows:
Here, the summation extends over the set of all tuples , satisfying , , ,…, , .
From (5) and (19), it follows that for all and , we have
Using Formulas (7) and (20), we can obtain the matrix as
Let us introduce new variables: , , , …, and . From the definition of the set , it follows that vectors belong to the set and satisfy the following conditions: , …, and . Taking into account that , we finally obtain (18). □
It follows from Lemma 1 and Theorem 2 that matrices with arbitrary nonnegative index can be expressed in terms of the matrices , with .
Corollary 1.
For any vectors , and integers , the matrix can be represented as
Proof of Corollary 1.
It follows from (13) and (18) that the matrix can be represented as follows:
After applying Formula (18) to each sum inside the square brackets in (23), we obtain Formula (22), which proves Corollary 1. □
3.3. Md-M/G/1 Processes as One-Dimensional Markov Chains of M/G/1 Type
Let be an Md-M/G/1 process, and let the processes and be defined, respectively, as and . Consider a discrete-time process on the state space , where the set is defined as . The process is a Markov chain of M/G/1 type with the phase space . The mapping , with , is a bijection from onto . When the process is in a state , the process is in the state . Therefore, the probability of a one-step transition of from to is given by . The element of the matrix of the process represents the probability that, starting from the state , the process will first appear at the level in state . It may be expressed through the elements of the matrix of the process as
Let us partition the matrix into blocks of order , . As a result of the equality , Equation (24) can be expressed in matrix form as
This implies that the matrices are expressed through the matrices as
From (26) and Theorem 2, it follows that
Hence, the matrices uniquely define the matrix and vice versa.
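The exact form of the bijection is given by the omitted displayed formulas. The sketch below merely illustrates one natural choice, consistent with the convention that the level of a state (n, j) is the minimum entry of n: the phase records the offset n − (min n)·1 together with j. The function names are ours.

```python
import numpy as np

def to_level_phase(n, j):
    """Map a state (n, j) of the Md process to (level, phase) of the 1-d chain.

    Assumed convention: the level is the minimum entry of n, and the phase is
    the pair (n - level*1, j), where n - level*1 has at least one zero entry.
    """
    n = np.asarray(n)
    level = int(n.min())
    offset = tuple(int(x) for x in n - level)   # nonnegative, minimum entry = 0
    return level, (offset, j)

def from_level_phase(level, phase):
    """Inverse mapping: reconstruct the original state (n, j)."""
    offset, j = phase
    return tuple(o + level for o in offset), j

# round-trip check
state = ((3, 1, 4), 2)
assert from_level_phase(*to_level_phase(*state)) == state
```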
4. Matrices of the Sector Exit Probabilities
As a direct consequence of Theorem 1, we can derive the following representation for the matrices of the sector exit probabilities :
By combining Formulas (11) and (12), we obtain the system of nonlinear equations for the matrices of the sector exit probabilities:
This system may be solved by successive substitutions, starting with zero matrices.
Theorem 3.
Let , , be families of matrices of order , recursively defined by
Then, each sequence is element-wise monotonically increasing and converges to a nonnegative matrix . The family of matrices , , is the minimal solution of the system in the set of families , , of nonnegative matrices of order .
Proof of Theorem 3.
We first demonstrate that the sequences , , are monotonically increasing and satisfy for all and for all . We proceed using induction.
Since and , we know that . Let us assume that for some and for all . Then, it follows from (30) that
and
which proves the induction step. Thus, each entry in the sequence , is bounded and monotonically increasing for every . This implies the existence of the limits , , satisfying the system (31).
Assume that , , is another nonnegative solution of (31). We show via induction that for all and all . Since , we know that for all . Now, let us assume that for some and all . Then, we obtain inequalities
Therefore, for all and all , which proves the induction step and the minimality property of the family , . □
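Because the right-hand side of the recursion (30) is given by the displayed formulas, the following sketch keeps it abstract: it implements the generic successive-substitution scheme of Theorem 3 for a family of matrices indexed by a finite set of offsets, starting from zero matrices and stopping when the element-wise increments fall below a tolerance. The update map must be supplied by the user, and all names are ours.

```python
import numpy as np

def successive_substitution(update, keys, nphases, tol=1e-12, max_iter=10_000):
    """Generic successive substitution for a family of matrices {G[d] : d in keys}.

    update : function mapping the current family (dict of (nphases x nphases)
             arrays) to the next family, i.e. the right-hand side of the system.
    Starts from zero matrices; under the monotonicity of Theorem 3 the iterates
    increase element-wise and converge to the minimal nonnegative solution.
    """
    G = {d: np.zeros((nphases, nphases)) for d in keys}
    for _ in range(max_iter):
        G_new = update(G)
        diff = max(np.max(np.abs(G_new[d] - G[d])) for d in keys)
        G = G_new
        if diff < tol:
            break
    return G
```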
Note that in the one-dimensional case, when M = 1, the sets and are singletons, and we have the equality . In this case, the system of matrix Equations (29) consists of the single Equation (1).
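As a consistency check of this one-dimensional reduction (reusing compute_G and successive_substitution from the earlier sketches, together with the matrices A0, A1, A2), one may verify that with M = 1 and the update map given by the right-hand side of Equation (1) the generic iteration reproduces the classical matrix G.

```python
# Illustrative check: with M = 1 the update map is G -> sum_n A_n G^n (Equation (1)),
# so the generic scheme of Theorem 3 reproduces compute_G from Section 2.
A = [A0, A1, A2]
update = lambda G: {0: sum(An @ np.linalg.matrix_power(G[0], n) for n, An in enumerate(A))}
G_family = successive_substitution(update, keys=[0], nphases=2)
assert np.allclose(G_family[0], compute_G(A), atol=1e-8)
```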
5. Example of the Md-QBD Process
Consider an Md-QBD process, characterized by one-step transitions from a state (k, i) being restricted to states (n, j) such that k − 1 ≤ n ≤ k + 1. The transition probability matrices of such a process have the following form:
where A(d), −1 ≤ d ≤ 1, are nonnegative square matrices such that Σ_{−1 ≤ d ≤ 1} A(d) is a stochastic matrix. In this case, the representation (28) for the matrices of the sector exit probabilities takes the following form:
We show that the matrices , , in (35) are nonzero only if the vector has a single negative component. For any vector , the element of the matrix is the conditional probability that the process will visit the state on the first visit to the set , given that it starts in the state . This probability is nonzero if and only if there is some state from which the transition to the state has positive probability. For this, it is necessary that the matrix be nonzero. Since and , the vector has negative components. Since , we have either and for some , or and for some and . In both cases, the vector has a single negative component .
Let us denote by the set of vectors with a single negative component and by the set of vectors with more than one negative component. Given the above, the matrices are zero for all and . In addition, for , the matrices are also zero. Therefore, as follows from equality (35), the matrices , , are also zero. After removing the zero matrices, system (29) takes the following form:
where the sets are defined as
Specific cases of Md-QBD processes, where only one component of the vector can change at a time, were studied in [16]. In these cases, the matrices are zero for all indices . Equation (18) in [16] applies only when M = 1, as there is an error in the proof of Theorem 3 of that paper. As a result, the findings presented in Section 3 of the mentioned work that rely on this theorem are valid only in the one-dimensional case. For multidimensional processes, Formula (27) of the present article provides the correct matrix-multiplicative representation of the matrix through matrices of order , utilizing the solution of the system (36).
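To make the structural observation of this section concrete, the snippet below enumerates the candidate offsets of an Md-QBD process under the assumption that they range over {−1, 0, 1}^M, and separates the offsets with exactly one negative component (the only ones whose first-passage blocks can be nonzero, by the argument above) from those with more; the helper name is ours.

```python
from itertools import product

def qbd_offsets(M):
    """Split the offsets d in {-1, 0, 1}^M by their number of negative components."""
    all_d = list(product((-1, 0, 1), repeat=M))
    one_negative  = [d for d in all_d if sum(x < 0 for x in d) == 1]
    more_negative = [d for d in all_d if sum(x < 0 for x in d) > 1]
    return one_negative, more_negative

one_neg, more_neg = qbd_offsets(2)
print(one_neg)            # [(-1, 0), (-1, 1), (0, -1), (1, -1)]
print(len(more_neg))      # 1 -- only (-1, -1) has more than one negative component
```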
6. Conclusions and Future Work
This study presents several theoretical results that aim to simplify the analysis of multidimensional Markov chains of M/G/1 type. The main novelty is the introduction of the new concepts of state sectors and sector exit probabilities. We have demonstrated that Equation (1), which involves a matrix , can be replaced by the system of Equations (29), which utilizes a family of matrices of the sector exit probabilities. Entries of the matrix are indexed by elements of the set , where and , while the family of matrices of order is indexed by the set . In the one-dimensional case, the results of the article reduce to the existing results obtained by Neuts in [2].
However, several challenges must be overcome to implement these results in practice. In multidimensional cases, the family of matrices is infinite. Since it is not feasible to compute infinite families of matrices, future research should focus on developing a method for selecting an appropriate truncation approximation for the proposed algorithm and on conducting complexity and error analyses. It also remains unclear whether in multidimensional cases the family of matrices is the minimal nonnegative solution to the system (29). Future research should address this question as well.
Author Contributions
Conceptualization, V.N. and K.S.; formal analysis, V.N.; investigation, V.N. and K.S.; methodology, V.N. and K.S.; funding acquisition, K.S. All authors have read and agreed to the published version of the manuscript.
Funding
This publication has been supported by the RUDN University Scientific Projects Grant System, project No. 021937-2-000.
Data Availability Statement
Data are contained within the article.
Acknowledgments
The authors wish to thank the anonymous reviewers for their valuable and helpful comments, which have significantly improved the study.
Conflicts of Interest
The authors declare no conflicts of interest.
References
1. Neuts, M.F. Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach; The Johns Hopkins University Press: Baltimore, MD, USA, 1981.
2. Neuts, M.F. Structured Stochastic Matrices of M/G/1 Type and Their Applications; Marcel Dekker: New York, NY, USA, 1989.
3. Latouche, G.; Ramaswami, V. Introduction to Matrix Analytic Methods in Stochastic Modeling; SIAM: Philadelphia, PA, USA, 1999.
4. Lipsky, L. Queueing Theory: A Linear Algebraic Approach; Springer: New York, NY, USA, 2009.
5. Li, Q.L. Constructive Computation in Stochastic Models with Applications: The RG-Factorizations; Springer: Berlin/Heidelberg, Germany, 2010.
6. He, Q.M. Fundamentals of Matrix-Analytic Methods; Springer: New York, NY, USA, 2014.
7. Phung-Duc, T. Retrial Queueing Models: A Survey on Theory and Applications. In Stochastic Operations Research in Business and Industry; Dohi, T., Ano, K., Kasahara, S., Eds.; World Scientific Publisher: Singapore, 2017; pp. 1–26. Available online: https://arxiv.org/abs/1906.09560 (accessed on 24 January 2024).
8. Dudin, A.N.; Klimenok, V.I.; Vishnevsky, V.M. The Theory of Queueing Systems with Correlated Flows; Springer: Basel, Switzerland, 2020.
9. Naumov, V.; Gaidamaka, Y.; Yarkina, N.; Samouylov, K. Matrix and Analytical Methods for Performance Analysis of Telecommunication Systems; Springer: Cham, Switzerland, 2021.
10. Chakravarthy, S.R. Introduction to Matrix-Analytic Methods in Queues 1: Analytical and Simulation Approach—Basics; Wiley: London, UK, 2022.
11. Chakravarthy, S.R. Introduction to Matrix-Analytic Methods in Queues 2: Analytical and Simulation Approach—Queues and Simulation; Wiley: London, UK, 2022.
12. Kobayashi, M.; Miyazawa, M. Tail asymptotics of the stationary distribution of a two-dimensional reflecting random walk with unbounded upward jumps. Adv. Appl. Probab. 2014, 46, 365–399.
13. Ozawa, T.; Kobayashi, M. Exact asymptotic formulae of the stationary distribution of a discrete-time two-dimensional QBD process. Queueing Syst. 2018, 90, 351–403.
14. Ozawa, T. Tail asymptotics in any direction of the stationary distribution in a two-dimensional discrete-time QBD process. Queueing Syst. 2022, 102, 227–267.
15. Ozawa, T. Stability condition of a two-dimensional QBD process and its application to estimation of efficiency for two-queue models. Perform. Eval. 2019, 130, 101–118.
16. Naumov, V.A. Matrix-Multiplicative Solution for Multi-Dimensional QBD Processes. Mathematics 2024, 12, 444.
17. Ramaswami, V. Nonlinear matrix equations in applied probability—Solution techniques and open problems. SIAM Rev. 1988, 30, 256–263.
18. Bini, D.A.; Latouche, G.; Meini, B. Numerical Methods for Structured Markov Chains; Oxford University Press: Oxford, UK, 2005.
19. Chiang, C.-Y.; Chu, E.K.-W.; Guo, C.-H.; Huang, T.-M.; Lin, W.-W.; Xu, S.-F. Convergence analysis of the doubling algorithm for several nonlinear matrix equations in the critical case. SIAM J. Matrix Anal. Appl. 2009, 31, 227–247.
20. Bini, D.A.; Meini, B. The cyclic reduction algorithm: From Poisson equation to stochastic processes and beyond. Numer. Algorithms 2009, 51, 23–60.
21. Bini, D.A.; Latouche, G.; Meini, B. A family of fast fixed point iterations for M/G/1-type Markov chains. IMA J. Numer. Anal. 2022, 42, 1454–1477.
22. Ramaswami, V. Stable recursion for the steady state vector for Markov chains of M/G/1 type. Stoch. Models 1988, 4, 183–188.
23. Meini, B. An improved FFT-based version of Ramaswami’s formula. Stoch. Models 1997, 13, 223–238.
24. Li, Q.; Zhao, Y.Q. A constructive method for finding β-invariant measures for transition matrices of M/G/1-type. In Matrix-Analytic Methods: Theory and Applications; Latouche, G., Taylor, P.G., Eds.; World Scientific: Hackensack, NJ, USA, 2002; pp. 237–263.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).