Limiting Distributions of a Non-Homogeneous Markov System in a Stochastic Environment in Continuous Time

Abstract: The stochastic process non-homogeneous Markov system in a stochastic environment in continuous time (S-NHMSC) is introduced in the present paper. The ordinary non-homogeneous Markov process is a very special case of an S-NHMSC. I study the expected population structure of the S-NHMSC, the first central classical problem of finding the conditions under which the asymptotic behavior of the expected population structure exists, and the second central problem of finding which expected relative population structures are possible limiting ones, provided that the limiting vector of input probabilities into the population is controlled. Finally, the rate of convergence is studied.


Introductory Notes
The stochastic process of a non-homogeneous Markov system in a stochastic environment (S-NHMS) in discrete time was introduced in [1]. The main goal was to satisfy the need for a more realistic stochastic model for populations with various kinds of entities, which can be classified into a finite number of exhaustive and mutually exclusive states. The expected population structure is studied, that is, the distribution of the expected number of memberships in each state. Note that in the population, apart from the transitions of memberships among the states, there are transitions to the external environment, often called wastage from the population, and flows of new memberships into the various states of the population (system), often called recruitment.
The S-NHMS in discrete time is a generalization of the stochastic concept of an NHMS in discrete time; it incorporates the idea of having a pool of transition probability matrices to choose from, the roots of which lie in [2,3], for the special case where the transition matrices are Leslie matrices.
The stochastic process of an NHMS was first introduced in [4]. This new concept provided a more general framework for a number of Markov chain models in manpower systems, which was actually the initial motive. For examples, see [5–10].
There is also a large number and a great diversity of applied probability models that could be accommodated in this general framework. A simple fact that shows the dynamism of the concept of an NHMS is, as we will show later, that the well-known simple Markov chain is a very special case of an NHMS.
In the present paper, we study the development of a continuous time version of the S-NHMS. The choice in practice between a stochastic process in discrete and in continuous time is partly a matter of realism and partly one of convenience. With regard to realism, for example, one would usually want to deal with the transitions of the members of the population between the states in continuous time. However, in practice, the computational advantages of discrete time, as well as the mental process of the researcher, all too often lead to the choice of a discrete time process. On the other hand, continuous time models are often more amenable to mathematical analysis, and this often counts in their favor. Having developed both versions of the theory of the S-NHMS, more choices are at our disposal and, hence, the theory as a whole is more complete.
A first concise and complete presentation of the theory of non-homogeneous Markov processes exists in [11], Section 8.9. There, apart from a rigorous foundation of the subject, in the respective references one could also find the initial founders of the subject. Reference [12] started a period of intense study of non-homogeneous Markov processes. Strong ergodicity for continuous time non-homogeneous Markov processes, using mean visit times, was studied in [13]. Important results on strong ergodicity for continuous time non-homogeneous Markov processes, using criteria on the functions of the intensity transition matrices, were provided by [14–16]. I will make extensive use of these results in the present paper.
Estimates of the transition intensities of an NHMS in continuous time were provided by [26] for various cases of missing data. In [27], transition intensities were studied for homogeneous Markov systems (HMS) in continuous time, as well as the relation between the volume of the attainable expected population structures at time t and the trace and rank of the intensity matrix.
In [28], the authors studied, for closed HMSs in continuous time, the stability of the size order of the elements of the expected population structure as t → ∞. The state sizes of the elements of the expected population structures and their distributions for an HMS in continuous time were studied in [29] with the use of factorial moments. In [30], the author discussed the case of a closed HMS with finite capacities of the states. In [31], the close relation between M/M/k/T/T queues and closed HMSs in continuous time is presented. More recent results on NHMSs in continuous time could be found in [32], while a more recent review of the subject was given by [33].
The paper is organized as follows: In Section 2, I define in detail for the first time the stochastic process S-NHMSC. I also show that the ordinary non-homogeneous Markov process is a special case of an S-NHMSC. Furthermore, I clarify that the open homogeneous Markov models and the ordinary NHMS in continuous time are special cases of the S-NHMSC.
In Section 3, I evaluate the expected population structure of the S-NHMSC at any time t as a function of the basic parameters of the population, by establishing the appropriate differential and integral equations that it satisfies.
In Section 4, I study the first central classical problem, that of finding the conditions under which the asymptotic behavior of the expected population structure E[N(t)] as t → ∞ exists, and of finding its limit in closed analytic form as a function of the limits of the basic parameters of the system. The second central problem is finding which expected relative population structures are possible limiting ones, provided that we control the limiting vector of input probabilities into the population. We prove that the set A_∞ of asymptotically attainable expected relative population structures, under asymptotic input control of the S-NHMSC, is the convex hull of points which are functions of the left eigenvector of a certain limiting transition probability matrix and of the limiting transition intensity matrices of the inherent non-homogeneous Markov process. I conclude this section by studying an important question which logically arises, that is, the rate of convergence to asymptotically attainable structures in an S-NHMSC. In fact, I am interested in finding conditions under which the rate is exponential, because then the practical value of the asymptotic result is greater.
Finally, in Section 5, I present an illustrative example from manpower planning.

The S-NHMS in Continuous Time
I will start by presenting the concept of a non-homogeneous Markov system in a stochastic environment in continuous time (S-NHMSC). Let {T(t), t ≥ 0} be a known continuous function of time, or a realization of a known stochastic process, denoting the total number of members in the system. Let S = {1, 2, ..., k} be the set of states, which are assumed to be exclusive and exhaustive. The state of the system at any time t is represented by the expected population structure:

E[N(t)] = [E[N_1(t)], E[N_2(t)], ..., E[N_k(t)]],

where N_i(t) is the number of members of the population in state i at time t. Another representation of the state of the system is provided by the expected relative population structure:

E[q(t)] = E[N(t)]/T(t).

Furthermore, among the states of the system, as in the case of a non-homogeneous Markov process ([11]), in the infinitesimal time interval [t, t + δt) the probabilities of members of the system moving from state i to state j are generated by the transition intensities r_{ij}(t):

p_{ij}(t, t + δt) = r_{ij}(t)δt + o(δt), for i ≠ j.    (1)

It is important to note at this point that (1) is valid as long as the transition intensities r_{ij}(t) operate during the interval [t, t + δt). When taking a step up the ladder towards reality, I will assume a stochastic mechanism for selecting the values of r_{ij}(t), and the equation will be altered accordingly.
Furthermore, let state k + 1 represent members leaving the population, and assume that r_{i,k+1}(t) is the transition intensity for a member of the population in state i to leave in the time interval [t, t + δt):

p_{i,k+1}(t, t + δt) = r_{i,k+1}(t)δt + o(δt).

The transition intensities r_{jj}(t) are defined by:

r_{jj}(t) = −∑_{i≠j} r_{ji}(t) − r_{j,k+1}(t).

Let R(t) = {r_{ij}(t)}_{i,j∈S} be the matrix of transition intensities at time t and r_{k+1}(t) = [r_{1,k+1}(t), r_{2,k+1}(t), ..., r_{k,k+1}(t)] be the vector of leaving intensities at time t. Now, let p_{0i}(t, t + δt) be the probability of a new member entering the population in state i, given that it enters the population in the time interval [t, t + δt), and let p_0(t, t + δt) = [p_{01}(t, t + δt), p_{02}(t, t + δt), ..., p_{0k}(t, t + δt)]. Define the following probabilities of movement for the memberships:

q_{ij}(t, t + δt) = p_{ij}(t, t + δt) + p_{i,k+1}(t, t + δt)p_{0j}(t, t + δt).

Now, let:

q_{ij}(t) = lim_{δt→0} q_{ij}(t, t + δt)/δt = r_{ij}(t) + r_{i,k+1}(t)p_{0j}(t)

be the transition intensity of a membership to move to state j in the time interval [t, t + δt), given that it was in state i at time t. To visualize this more deeply, let there be T(t) memberships at the beginning of the interval [t, t + δt), each member of the population holding one. During the interval [t, t + δt), members leave the population, and at the exit they give their memberships to their replacements, who are distributed among the states with probabilities p_0(t, t + δt) at the end of the interval. Furthermore, let Q(t) = {q_{ij}(t)}_{i,j∈S} be the matrix of transition intensities of the memberships. Assume that Q(t) is measurable and that sup_{i,j∈S} q_{ij}(t) is integrable on every finite interval of t. We call the Markov process defined by the matrix of intensities {Q(t), t ≥ 0} the embedded or inherent Markov process of the S-NHMSC. Assume now that in the infinitesimal time interval [t, t + δt), the system has the choice of selecting a transition intensity matrix from the pool:

R_I(t) = {R_1(t), R_2(t), ..., R_ν(t)},

such that R_i(t)1′ + r′_{k+1}(t) = 0 for i = 1, 2, ..., ν and for every t.
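To make the membership mechanism concrete, the relation q_{ij}(t) = r_{ij}(t) + r_{i,k+1}(t)p_{0j}(t), standard in the NHMS literature, can be sketched numerically. The 3-state intensities below are illustrative assumptions, not values from the paper; the sketch only checks that the resulting Q(t) has zero row sums, i.e., that it is a proper intensity matrix.

```python
def membership_intensities(R, r_out, p0):
    """q_ij = r_ij + r_{i,k+1} * p_0j: a leaver's membership is handed to a
    replacement, who enters state j with probability p_0j."""
    k = len(R)
    return [[R[i][j] + r_out[i] * p0[j] for j in range(k)] for i in range(k)]

# hypothetical 3-state intensities; diagonals chosen so that R 1' + r_out' = 0
r_out = [0.2, 0.1, 0.3]                  # leaving (wastage) intensities
R = [[-0.7, 0.3, 0.2],
     [0.4, -0.8, 0.3],
     [0.1, 0.2, -0.6]]
p0 = [0.2, 0.3, 0.5]                     # input probabilities of new members

Q = membership_intensities(R, r_out, p0)
# every row of Q sums to zero, so Q is a proper intensity matrix
for row in Q:
    assert abs(sum(row)) < 1e-9
```

The zero row sums reflect the fact that memberships never leave the system: wastage in state i is exactly compensated by the replacement flow r_{i,k+1}(t)p_0(t).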
Furthermore, assume that it makes its choice in a stochastic way; more specifically, in the infinitesimal time interval [t, t + δt), the probability of switching from the intensity matrix R_i(t) to the intensity matrix R_j(t) of the pool R_I(t) is generated by the intensities z_{ij}(t):

P[R_j selected at t + δt | R_i selected at t] = z_{ij}(t)δt + o(δt), for i ≠ j,

and z_{ii}(t) is defined to be:

z_{ii}(t) = −∑_{j≠i} z_{ij}(t),

and let c_i(0) for i = 1, 2, ..., ν be the probabilities of the initial states. Let Z(t) = {z_{ij}(t)}_{i,j∈I}, with I = {1, 2, ..., ν}, be the above intensity matrix, and assume that Z(t) is measurable for every t ≥ 0 and that sup_{i∈I}{|z_{ii}(t)|} is integrable on every finite interval of time. Then, the intensity matrices {Z(t)}_{t≥0} define a non-homogeneous Markov process, which we call the compromise non-homogeneous Markov process of the S-NHMSC. The word 'compromise' is selected in the sense that the choice is the outcome of a strategy under the various pressures in the environment. We call a process like the one described above a non-homogeneous Markov system in a stochastic environment in continuous time (S-NHMSC).
We defined the S-NHMSC in the most general way, in order to provide an inclusive framework that could accommodate a large variety of applied probability models. Furthermore, in the following, some basic questions will be answered within this general framework. However, in order to increase our intuition about the potential power of applicability of the present theory, and in order to place it at the right position in the pyramid of progress towards reality, it is of great importance to make the following comments. Firstly, when ν = 1, r_{k+1}(t) = 0, and T(t) = T(0) for every t ≥ 0, the S-NHMSC is the ordinary non-homogeneous Markov process, which has found applications in almost all areas.
Secondly, when ν = 1 and R(t) = R, r_{k+1}(t) = r_{k+1}, and p_0(t) = p_0 for every t ≥ 0, the S-NHMSC is the open homogeneous Markov model applied extensively in manpower systems (see [5,34]). Thirdly, when ν = 1, that is, when the pool R_I(t) contains a single intensity matrix for every t, the S-NHMSC is the ordinary NHMS in continuous time, which is a general framework for many applied probability models (see [35,36]).

The Expected Population Structure of the S-NHMSC
We will now study the problem of finding the expected population structure E[N(t)] in terms of the basic functions of the parameters of the system. We call basic functions of the parameters the least number of parameters that uniquely determine an S-NHMSC. These are the functions {R_I(t)}_{t≥0}, {Z(t)}_{t≥0}, {p_0(t)}_{t≥0}, and {T(t)}_{t≥0}, the initial population structure N(0), and the initial probabilities c_i(0).

Let N_0(t, t + δt) be the random variable which represents the number of new members entering the population in the infinitesimal time interval [t, t + δt). The number of losses from the population in that interval is a random variable; for each state i ∈ S, the losses from state i follow a binomial distribution B(N_i(t), r_{i,k+1}(t)δt + o(δt)). Furthermore, let N_{ij}(t, t + δt) be the random variable representing the number of members of the system moving from state i to state j in the time interval [t, t + δt). These flows from i to j ∈ S are multinomial random variables, with parameters N_i(t) and the membership transition probabilities q_{ij}(t)δt + o(δt), j ≠ i. Consequently, taking expectations over the transitions of the memberships and over the stochastic selection from the pool R_I(t), we have, for all j ∈ S:

E[N_j(t + δt)] = ∑_{i∈S} E[N_i(t)][δ_{ij} + E[q_{ij}(t)]δt] + D(t)δt p_{0j}(t) + o(δt),    (12)

where D(t)δt = T(t + δt) − T(t) + o(δt) is the expected number of new memberships created in the interval [t, t + δt). Equation (12), for all j ∈ S, could be written in matrix notation as:

dE[N(t)]/dt = E[N(t)]E[Q(t)] + D(t)p_0(t),    (13)

where:

E[Q(t)] = ∑_{j=1}^ν c_j(t)R_j(t) + r′_{k+1}(t)p_0(t),

and c_j(t) is the probability that the compromise non-homogeneous Markov process selects the intensity matrix R_j(t) at time t. We will now prove that the sums of the rows of the matrix E[Q(t)] are equal to zero. We have:

E[Q(t)]1′ = ∑_{j=1}^ν c_j(t)R_j(t)1′ + r′_{k+1}(t)p_0(t)1′ = −∑_{j=1}^ν c_j(t)r′_{k+1}(t) + r′_{k+1}(t) = 0,

since R_j(t)1′ = −r′_{k+1}(t) for j = 1, 2, ..., ν, p_0(t)1′ = 1, and ∑_{j=1}^ν c_j(t) = 1. Hence, the matrix E[Q(t)] is an intensity matrix and defines a non-homogeneous Markov process which, by analogy with the ordinary NHMS in discrete time [5,35], we call the expected embedded or inherent non-homogeneous Markov process of the S-NHMSC. Assume that ∫_0^t ‖E[Q(u)]‖du < ∞ for all t ≥ 0; then there exists a unique transition function E[P_q(·, ·)] (see [36], paragraph 8.9) such that the set of points t at which E[P_q(s, t)] is not differentiable is a set of Lebesgue measure zero. Moreover, E[P_q(·, ·)] satisfies the integral matrix equations:

E[P_q(s, t)] = I + ∫_s^t E[P_q(s, u)]E[Q(u)]du,    (17)

and:

E[P_q(s, t)] = I + ∫_s^t E[Q(u)]E[P_q(u, t)]du.    (18)

A detailed solution of (17) and (18) could be found in [36], paragraph 8.9, where apparently E[Q(t)] is a function of {Z(t)}_{t≥0} and {R_I(t)}_{t≥0}, due to the selection of R(t) by the compromise non-homogeneous Markov process. However, we are not interested in a closed analytic formula for E[P_q(s, t)]; it is sufficient to know that it exists and that it is unique.
In what follows, I will use a probabilistic argument in order to find E[N(t)], which will also be the solution of the differential Equation (13). The initial number of memberships T(0) = N(0)1′ will be distributed at time t among the various states with probabilities E[P_q(0, t)], which are the transition probabilities of the expected embedded non-homogeneous Markov process generated by the intensity matrix E[Q(t)]. Thus, the expected distribution across the states of the initial memberships will be:

N(0)E[P_q(0, t)].    (19)

Now, consider the time interval [x, x + δx); the new memberships entering in that time interval are D(x)δx, and their expected numbers in the various states at the end of the interval are given by p_0(x, x + δx)D(x)δx. After time t − x, these new memberships will be distributed among the various states of the population, with expected values p_0(x, x + δx)D(x)δxE[P_q(x, t)]; therefore, integrating x from 0 to t, we get:

E[N(t)] = N(0)E[P_q(0, t)] + ∫_0^t D(x)p_0(x)E[P_q(x, t)]dx.    (20)
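The representation (20), or equivalently the differential Equation (13), can be checked numerically with a simple Euler scheme. The constant intensity matrix, the entry rate D(t) = g, and the initial structure below are illustrative assumptions, not data from the paper; since the intensity matrix has zero row sums, the scheme conserves total memberships, so E[N(t)]1′ tracks T(t) = T(0) + gt.

```python
def expected_structure(N0, Q, p0, growth, t_end, dt=1e-3):
    """Euler scheme for dE[N(t)]/dt = E[N(t)] E[Q(t)] + D(t) p_0(t),
    here with a constant intensity matrix and constant entry rate D(t) = growth."""
    k = len(N0)
    N = list(N0)
    for _ in range(int(t_end / dt)):
        dN = [sum(N[i] * Q[i][j] for i in range(k)) + growth * p0[j]
              for j in range(k)]
        N = [N[j] + dt * dN[j] for j in range(k)]
    return N

# hypothetical membership intensity matrix (every row sums to zero)
Q = [[-0.66, 0.36, 0.30],
     [0.42, -0.77, 0.35],
     [0.16, 0.29, -0.45]]
p0 = [0.2, 0.3, 0.5]

N = expected_structure([50.0, 30.0, 20.0], Q, p0, growth=2.0, t_end=5.0)
# total memberships track T(t) = 100 + 2t, so sum(N) is approximately 110 at t = 5
```

Because each row of Q sums to zero, internal flows redistribute memberships without creating or destroying them; only the term D(t)p_0(t) changes the total, exactly as in the argument leading to (20).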

The Asymptotic Behavior of the S-NHMSC
It is evident from previous studies, for example [1,4,35,37–40], that the central problems in the theory of NHMSs and S-NHMSs in discrete time, which will be studied in the present paper for the S-NHMSC, are basically of two natures. The first classical problem is that of finding the conditions under which the asymptotic behavior of the expected population structure E[N(t)] as t → ∞ exists, and of finding its limit in closed analytic form as a function of the limits of the basic parameters of the system. The second classical problem is finding which expected relative population structures are possible limiting ones, provided that we control the limiting vector of input probabilities into the population.
In what follows, I will use as a norm of a matrix A ∈ M_{k×k}(R) the following:

‖A‖ = max_{1≤i≤k} ∑_{j=1}^k |a_{ij}|.

I will start by refreshing concepts and borrowing some important results from the theory of non-homogeneous Markov processes, starting with the following definitions for non-homogeneous Markov processes with countable state spaces and transition function P(s, t) = {p_{ij}(s, t)}_{i,j∈S}.

Definition 1. A non-homogeneous Markov process is called weakly ergodic if, for every s ≥ 0,

lim_{t→∞} sup_{i,l∈S} ∑_{j∈S} |p_{ij}(s, t) − p_{lj}(s, t)| = 0.

Definition 2. A non-homogeneous Markov process is called ergodic if, for every s ≥ 0 and every j ∈ S, lim_{t→∞} p_{ij}(s, t) exists and is independent of the initial state i.

Definition 3. A non-homogeneous Markov process is called strongly ergodic if there exists a stable stochastic matrix Π, that is, a stochastic matrix with identical rows, such that, for every s ≥ 0,

lim_{t→∞} ‖P(s, t) − Π‖ = 0.
In the case of weak ergodicity, the probability of the occurrence of any of the states at time t tends to become independent of the initial probability distribution, but in general remains dependent on t.
Remark 1. When the state space S is finite, then the concepts of ergodic and strongly ergodic coincide.
As the reader may by now have recognized, the generator of a non-homogeneous Markov process is the family of intensity matrices {Q(t)}_{t≥0}. This is so in the same sense that the transition probability matrix P could be seen as the generator of a homogeneous Markov chain, and the sequence of transition probability matrices {P(t)}_{t=1}^∞ as the generator of a non-homogeneous Markov chain. Hence, our goal will now be to find conditions for strong ergodicity of a non-homogeneous Markov process based on the convergence of its family of intensity matrices {Q(t)}_{t≥0}. I will now borrow a basic theorem concerning strong ergodicity for a non-homogeneous Markov process based on its family of intensity matrices.
Theorem 1 ([14,15]). Let (Ω, F, P) be a complete probability space and {X_t}_{t≥0} a non-homogeneous Markov process with family of intensity matrices {Q(t)}_{t≥0}, such that sup_{t≥0} ‖Q(t)‖ ≤ c. Let also {X̄_t}_{t≥0} be a homogeneous Markov process with intensity matrix Q, such that ‖Q‖ ≤ c, which is strongly ergodic with limiting stable stochastic matrix Π. If lim_{t→∞} ‖Q(t) − Q‖ = 0, then {X_t}_{t≥0} is also strongly ergodic with limit Π.
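Theorem 1 can be illustrated with a minimal two-state sketch: a family Q(t) converging exponentially to a strongly ergodic Q forces the rows of P(0, t) to a common limiting row. The matrices below are hypothetical, and P(0, t) is approximated by the Euler product of the forward equation.

```python
import math

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transition_matrix(Q_of_t, t_end, dt=0.01):
    """P(0, t_end) ~= prod_i (I + Q(t_i) dt): Euler product approximation
    of the forward equation dP(0, t)/dt = P(0, t) Q(t)."""
    P = [[1.0, 0.0], [0.0, 1.0]]
    for s in range(int(t_end / dt)):
        Qt = Q_of_t(s * dt)
        step = [[(1.0 if i == j else 0.0) + dt * Qt[i][j] for j in range(2)]
                for i in range(2)]
        P = matmul(P, step)
    return P

# Q(t) -> Q with ||Q(t) - Q|| = 2 e^{-t}; the limit Q = [[-1, 1], [2, -2]]
# is strongly ergodic with common limiting row (2/3, 1/3)
def Q_of_t(t):
    return [[-(1.0 + math.exp(-t)), 1.0 + math.exp(-t)], [2.0, -2.0]]

P = transition_matrix(Q_of_t, 20.0)
# both rows of P(0, t) approach the common row (2/3, 1/3): strong ergodicity
```

The early non-homogeneous behavior (the e^{-t} perturbation) affects only the transient; the tail of the product, governed by the limit Q, washes the initial state out, which is exactly the content of the theorem.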

Remark 2.
At this point, let us recall the fact that for finite homogeneous Markov chains, in discrete or continuous time, the concepts of ergodicity, strong ergodicity, and weak ergodicity coincide. For an infinite chain, the notions of ergodicity and strong ergodicity are distinct.
I will now present an important result from [16]. Let Q be the intensity matrix of a homogeneous Markov process {X_t}_{t≥0} with sup_{i∈S}{|q_{ii}|} < c < ∞, and for b > c define:

P̂ = I + (1/b)Q.

Then P̂ is a stochastic matrix and generates a discrete-time Markov chain {X̂_n}_{n=0}^∞, which is strongly ergodic if and only if {X_t}_{t≥0} is strongly ergodic, and the two have the same limiting stable stochastic matrix.

I will now prove the following basic theorem:

Theorem 3. Let (Ω, F, P) be a complete probability space and a finite S-NHMSC, as defined in Section 2. Assume that the following conditions hold:

(1) lim_{t→∞} ‖R_j(t) − R_j‖ = 0 for j = 1, 2, ..., ν;
(2) lim_{t→∞} ‖r_{k+1}(t) − r_{k+1}‖ = 0;
(3) lim_{t→∞} ‖p_0(t) − p_0‖ = 0;
(4) sup_{t≥0} ‖Z(t)‖ ≤ z < ∞ and lim_{t→∞} ‖Z(t) − Z‖ = 0, with P_Z = I + (1/c_1)Z, c_1 > z, P_Z an irreducible, aperiodic stochastic matrix;
(5) sup_{i∈S} |r_{j,ii}(t)| < a < ∞ for j = 1, 2, ..., ν and every t ≥ 0.

Then, as t → ∞, E[Q(t)] converges in norm to the intensity matrix:

E[Q] = ∑_{j=1}^ν π_{z_j} R_j + r′_{k+1} p_0,

where Π_z = (π_{z_1}, π_{z_2}, ..., π_{z_ν}) is the left eigenvector of the eigenvalue 1 of the matrix P_Z.
Proof. From condition (4), since P_Z is an irreducible, aperiodic stochastic matrix, there exists a stable stochastic matrix with common row Π_z = (π_{z_1}, π_{z_2}, ..., π_{z_ν}), which is the left eigenvector of the eigenvalue 1 of the matrix P_Z, that is:

Π_z P_Z = Π_z, Π_z 1′ = 1.

Furthermore, from condition (4), we have that the intensity matrices {Z(t)}_{t≥0} converge to the intensity matrix Z, which generates an ergodic Markov process. Therefore, {Z(t)}_{t≥0}, due to Theorem 2, generates an ergodic non-homogeneous Markov process, and we have that:

lim_{t→∞} c_j(t) = π_{z_j} for j = 1, 2, ..., ν,

where c_j(t) is the probability that the compromise process selects the matrix R_j(t) at time t. We have that:

E[Q(t)] − E[Q] = ∑_{j=1}^ν [c_j(t)R_j(t) − π_{z_j}R_j] + [r′_{k+1}(t)p_0(t) − r′_{k+1}p_0].    (25)

Now, consider the first summand of (25). Since r_{j,ii}(t) = −∑_{l≠i} r_{j,il}(t) − r_{j,i,k+1}(t), and by condition (5) we have sup_{i∈S} |r_{j,ii}(t)| < a < ∞, we could easily prove that the norms ‖R_j(t)‖, j = 1, 2, ..., ν, are uniformly bounded in t. By condition (1), one can choose a t*_0 such that, for t > t*_0, ‖R_j(t) − R_j‖ < ε for j = 1, 2, ..., ν. From the above, (25), and the conditions of the Theorem, we get that:

lim_{t→∞} ‖∑_{j=1}^ν [c_j(t)R_j(t) − π_{z_j}R_j]‖ = 0.

Furthermore, it is not difficult, using the conditions of the Theorem, to see that:

lim_{t→∞} ‖r′_{k+1}(t)p_0(t) − r′_{k+1}p_0‖ = 0,

which completes the proof.

In analogy with the discrete case for an S-NHMS, we provide the following definition:

Definition 4. We say that an S-NHMSC has an asymptotically attainable expected relative population structure E[q(∞)] under asymptotic input control if there exists a p_0 = lim_{t→∞} p_0(t) such that lim_{t→∞} E[q(t)] = E[q(∞)]. We denote by A_∞ the set of asymptotically attainable expected relative population structures under asymptotic input control of the S-NHMSC.
We now provide the following basic theorem concerning the asymptotic behavior of the S-NHMSC.

Theorem 4. Let (Ω, F, P) be a complete probability space and a finite S-NHMSC as defined in Section 2. Assume that:

(1)–(5) the conditions of Theorem 3 hold;
(6) lim_{t→∞} T(t) = T < ∞, where T(t) is a non-decreasing continuous function;
(7) the matrix:

P_q = I + (1/c_2)E[Q],

with c_2 > sup_{i∈S}{|E[q_{ii}]|}, is an irreducible, aperiodic stochastic matrix.

Then, (i) as t → ∞, E[q(t)] converges to π_q = (π_{q_1}, π_{q_2}, ..., π_{q_k}), which is the left eigenvector of the eigenvalue 1 of the matrix P_q. (ii) The set A_∞ is the convex hull of the points:

v_i = e_i B^{−1}/(e_i B^{−1} 1′), for i = 1, 2, ..., k, with B = −∑_{j=1}^ν π_{z_j} R_j,

where e_i is the i-th unit row vector.

Proof. Since Π_z is the left eigenvector of the eigenvalue 1 of the irreducible, aperiodic matrix P_Z, we have that 0 ≤ π_{z_j} ≤ 1 for j = 1, 2, ..., ν. Furthermore, condition (5) of Theorem 3 is also valid in the present setting; hence, sup_{i∈S} |r_{j,ii}| < a < ∞ and sup_{i∈S} r_{i,k+1} < b < ∞.
Consequently, from the expression of E[Q] in Theorem 3, we get that:

lim_{t→∞} ‖E[Q(t)] − E[Q]‖ = 0.    (28)

Now, since P_q is an irreducible, aperiodic stochastic matrix, we have that:

lim_{n→∞} P_q^n = Π_q,

where Π_q is a stable stochastic matrix with common row π_q = (π_{q_1}, π_{q_2}, ..., π_{q_k}), which is the left eigenvector of the eigenvalue 1 of the matrix P_q. From (28) and Theorems 1 and 2, we have that, if we denote by E[P_q(s, t)] the probability transition matrix of the non-homogeneous Markov process defined by the intensities {E[Q(t)]}_{t≥0}, then:

lim_{t→∞} ‖E[P_q(s, t)] − Π_q‖ = 0 for every s ≥ 0.    (29)

Therefore, for the first part of the right-hand side of Equation (20), as t → ∞:

lim_{t→∞} N(0)E[P_q(0, t)] = N(0)Π_q = T(0)π_q.    (30)

Now, consider the second part of the right-hand side of Equation (20):

U(t) = ∫_0^t D(x)p_0(x)E[P_q(x, t)]dx.    (31)

From (29), for every ε > 0 there exists a t_0 > 0 such that, for t − x > t_0, ‖E[P_q(x, t)] − Π_q‖ < ε. Moreover, from condition (3), there exists a t_1 > 0 such that, for t > t_1, ‖p_0(t) − p_0‖ < ε. Splitting the integral in (31) at x = t − t_0 and using the fact that, by condition (6), ∫_0^∞ D(x)dx = T − T(0) < ∞, we get that:

lim_{t→∞} U(t) = [T − T(0)]p_0 Π_q = [T − T(0)]π_q,    (32)

since every row of Π_q equals π_q and p_0 1′ = 1. Hence, from (20), (30) and (32), we get that:

lim_{t→∞} E[N(t)] = T(0)π_q + [T − T(0)]π_q = Tπ_q.    (33)

Since E[Q] is finitely bounded and defines an ergodic Markov process, it is known that:

π_q E[Q] = 0, π_q 1′ = 1.    (34)

From Theorem 3 and Equation (34), we get that:

π_q (∑_{j=1}^ν π_{z_j} R_j) + (π_q r′_{k+1}) p_0 = 0.    (35)

The matrix ∑_{j=1}^ν π_{z_j} R_j, due to condition (7), is irreducible and is part of the intensity matrix E[Q]. Hence ([41]), with B = −∑_{j=1}^ν π_{z_j} R_j, the inverse B^{−1} exists and is nonnegative. Therefore:

π_q = (π_q r′_{k+1}) p_0 B^{−1}.    (37)

Multiplying both sides of (37) by 1′, we obtain:

π_q r′_{k+1} = 1/(p_0 B^{−1} 1′).

Let:

v_i = e_i B^{−1}/(e_i B^{−1} 1′) and w_i = p_{0i}(e_i B^{−1} 1′)/(p_0 B^{−1} 1′), for i = 1, 2, ..., k.

Then w_i ≥ 0, ∑_{i=1}^k w_i = 1, and:

π_q = p_0 B^{−1}/(p_0 B^{−1} 1′) = ∑_{i=1}^k w_i v_i.

Therefore, from (33) and the above, we get that E[q(∞)] = π_q = ∑_{i=1}^k w_i v_i. Hence, E[q(∞)] is a convex combination of the vertices v_i, i = 1, 2, ..., k, and, conversely, every such convex combination is asymptotically attainable by an appropriate choice of p_0.

It is well known that for a homogeneous Markov process with intensity matrix Q and transition matrix P(t), which is strongly ergodic, the rate at which P(t) converges to a stable stochastic matrix is exponential. Logically, this fact creates the intuition that, possibly, for a non-homogeneous Markov process with a family of intensity matrices {Q(t)}_{t≥0}, the rate at which the transition probability matrices converge to a stable stochastic matrix is also exponential.
The answer to this is negative, since we need one more condition for this to be true, and that is lim t→∞ Q(t) − Q = 0 with an exponential rate of convergence. This result is stated formally in the following theorem, the proof of which could be found in [14].
Theorem 5. Let (Ω, F, P) be a complete probability space and {X_t}_{t≥0} a non-homogeneous Markov process with family of intensity matrices {Q(t)}_{t≥0}, which is strongly ergodic with limit Π. Let also {X̄_t}_{t≥0} be a homogeneous Markov process with intensity matrix Q, which is strongly ergodic. Let g : R_+ → R_+ be a monotonically increasing function. If lim_{t→∞} g(2t)‖Q(t) − Q‖ = 0, then:

lim_{t→∞} min{exp(λt), g(t)}‖P(s, t) − Π‖ = 0,

where 0 < λ < β/2 and β > 0 is the constant parameter of the exponential rate of convergence at which {X̄_t}_{t≥0} converges.
An important question which logically arises is: what is the rate of convergence to asymptotically attainable structures in an S-NHMSC? In fact, I am interested in finding conditions under which the rate is exponential, because then the practical value of the asymptotic result is greater (see [42,43]). Furthermore, as in [20], the problem of constructing sharp bounds for the rate of convergence of the characteristics of Markov chains to their limiting vectors is very important. That is, all too often it is easier to calculate the limit characteristics of a process than to find the exact distribution of the state probabilities. Therefore, it is very important to be able to use the limit characteristics as asymptotic approximations for the exact distribution. The following Theorem answers the question of the rate of convergence of the expected structure of an S-NHMSC.

Theorem 6. Let (Ω, F, P) be a complete probability space and a finite S-NHMSC as defined in Section 2. Furthermore, let conditions (1)–(7) of Theorem 4 hold, and in addition assume that the convergences in conditions (1)–(4) and (6) are exponentially fast. Then, the convergence of E[N(t)] as t → ∞ is exponentially fast.
Proof. Since lim_{t→∞} ‖Z(t) − Z‖ = 0 exponentially fast and, in addition, Z is strongly ergodic, by Theorem 5 there are constants c_3 > 0 and λ_1 > 0 such that:

‖c(t) − Π_z‖ ≤ c_3 e^{−λ_1 t},    (42)

where c(t) = (c_1(t), c_2(t), ..., c_ν(t)) is the distribution of the compromise process at time t. Since the convergences in conditions (1)–(3) are exponentially fast, we have that:

‖R_j(t) − R_j‖ ≤ c_4 e^{−λ_2 t}, ‖r_{k+1}(t) − r_{k+1}‖ ≤ c_5 e^{−λ_3 t}, ‖p_0(t) − p_0‖ ≤ c_6 e^{−λ_4 t}.    (43)–(45)

From (25) and (42)–(45), we arrive at:

‖E[Q(t)] − E[Q]‖ ≤ c_7 e^{−λ_5 t}.    (46)

Now, from (46), condition (7), and Theorems 4 and 5, we get that:

‖E[P_q(s, t)] − Π_q‖ ≤ c_8 e^{−λ_6 t}.    (47)

We now bound ‖E[N(t)] − Tπ_q‖ using Equation (20) together with (42)–(47). Since, by assumption, the convergence in condition (6) of T(t) as t → ∞ is also exponentially fast, we arrive at the relation:

‖E[N(t)] − Tπ_q‖ ≤ ce^{−λt}, with c, λ > 0, for every t > 0,

which proves the Theorem.
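Before turning to the example, the convex hull characterization of A_∞ in Theorem 4 can be sketched numerically. Assuming, as in the standard NHMS derivation, that π_q ∝ p_0 B^{−1} with B = −∑_j π_{z_j}R_j, the vertices of A_∞ are the normalized rows of B^{−1}; all matrices and vectors below are hypothetical two-state stand-ins, not values from the paper.

```python
def inv2(B):
    """Inverse of a 2x2 matrix given as a list of rows."""
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return [[B[1][1] / det, -B[0][1] / det],
            [-B[1][0] / det, B[0][0] / det]]

# hypothetical limiting pool (nu = 2) with common wastage vector r_out,
# and a hypothetical limiting selection vector pi_z of the compromise process
pi_z = [0.75, 0.25]
R1 = [[-0.5, 0.3], [0.2, -0.6]]
R2 = [[-0.3, 0.1], [0.5, -0.9]]
r_out = [0.2, 0.4]        # R1 1' + r_out' = 0 and R2 1' + r_out' = 0

B = [[-(pi_z[0] * R1[i][j] + pi_z[1] * R2[i][j]) for j in range(2)]
     for i in range(2)]
Binv = inv2(B)            # entries are positive: B is a nonsingular M-matrix

# vertices of A_inf: v_i = e_i B^{-1} / (e_i B^{-1} 1'), the normalized rows
vertices = [[x / sum(row) for x in row] for row in Binv]

# a limiting structure pi_q = p0 B^{-1} / (p0 B^{-1} 1') lies in their convex hull
p0 = [0.8, 0.2]
num = [p0[0] * Binv[0][j] + p0[1] * Binv[1][j] for j in range(2)]
pi_q = [x / sum(num) for x in num]
```

As a consistency check, this π_q annihilates the limiting intensity matrix E[Q] = ∑_j π_{z_j}R_j + r′_out p_0, i.e., π_q E[Q] = 0, so it really is the limiting expected relative structure for this choice of p_0.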

An Illustrative Example from Manpower Planning
In the present section, the previous results are illustrated through an example from manpower planning. Interesting examples of such systems can be found in [44]. Suppose that the intensities were estimated from the historical records of a firm with three grades, and it was found that three intensity matrices were repeatedly exercised; thus, the pool R_I(t) has the elements R_1(t), R_2(t), and R_3(t). Let also:

r_{k+1}(t) = (1 + e^{−3t}, 2 + e^{−7t}, 7 + e^{−5t}), p_0(t) = (0.2, 0.3, 0.5).
In addition, utilizing the well-known maximum likelihood estimates for transition intensities ([44]), the matrix of transition intensities of the compromise non-homogeneous Markov process {Z(t)}_{t≥0}, under the assumption that they are time independent, was found to be a constant matrix Ẑ. Obviously, sup_{t≥0} ‖Z(t)‖ < ∞, and with c_1 = 10, we get the matrix P_Z = I + (1/10)Ẑ, which, apparently, is an irreducible, aperiodic stochastic matrix. Theorems 4 and 5 are straightforwardly applicable with the above data. The present example could be used as a guide for applying the theoretical results in many areas of potential application, such as, for example, those in [44–48].
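Since the numerical matrices of this example do not survive in the present version, the computational pipeline they feed can still be sketched end to end with stand-in values: a hypothetical Ẑ uniformized with c_1 = 10 as in the text, the limiting vector Π_z obtained by power iteration, then a limiting intensity matrix of the assumed form E[Q] = ∑_j π_{z_j}R_j + r′_out p_0, and finally π_q via the uniformized matrix P_q. All numerical values are illustrative assumptions, not the firm's estimates.

```python
def stationary(P, iters=4000):
    """Power iteration for the left eigenvector of eigenvalue 1 of an
    irreducible, aperiodic stochastic matrix P."""
    k = len(P)
    pi = [1.0 / k] * k
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(k)) for j in range(k)]
    return pi

# hypothetical compromise-process limit Z (nu = 2 pool matrices), c1 = 10
Z = [[-1.0, 1.0], [3.0, -3.0]]
PZ = [[(1.0 if i == j else 0.0) + Z[i][j] / 10 for j in range(2)]
      for i in range(2)]
pi_z = stationary(PZ)            # limiting selection probabilities (0.75, 0.25)

# hypothetical two-grade pool with common wastage vector and limiting p0
R1 = [[-0.5, 0.3], [0.2, -0.6]]
R2 = [[-0.3, 0.1], [0.5, -0.9]]
r_out = [0.2, 0.4]
p0 = [0.8, 0.2]

# limiting inherent intensity matrix: E[Q] = sum_j pi_z_j R_j + r_out' p0
EQ = [[pi_z[0] * R1[i][j] + pi_z[1] * R2[i][j] + r_out[i] * p0[j]
       for j in range(2)] for i in range(2)]

# uniformize E[Q] with c2 = 1 > sup |E[q_ii]| and extract the limiting
# expected relative structure pi_q = E[q(inf)]
Pq = [[(1.0 if i == j else 0.0) + EQ[i][j] / 1.0 for j in range(2)]
      for i in range(2)]
pi_q = stationary(Pq)
```

The same uniformization trick is used twice: once for the compromise process (P_Z) and once for the expected inherent process (P_q), each time with a constant exceeding the largest diagonal intensity in absolute value, so that the resulting matrix is stochastic.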

Conclusions
The concept of a non-homogeneous Markov system in a stochastic environment in continuous time was introduced. Conditions on the basic parameters of the system under which the limiting expected population structure and the corresponding expected relative population structure exist were established, and these limits were evaluated in elegant closed analytic form. The set of all possible limiting relative population structures was characterized under all possible limiting input probability vectors. Finally, an illustrative example from manpower planning was presented, which could be used as a guide for applications in other areas.

Conflicts of Interest: The author declares no conflict of interest.