Abstract
In this article we study the asymptotic behaviour of the expected population structure of a Markov system that lives in a general state space (MSGS) and its rate of convergence. We continue with the study of the asymptotic periodicity of the expected population structure. We conclude with the study of total variability from the invariant measure in the periodic case for the expected population structure of an MSGS.
1. Introductory Notes
Ref [1] introduced the stochastic process non-homogeneous Markov system (NHMS) with countable state space. Ref [2] introduced the stochastic process non-homogeneous semi-Markov system (NHSMS) in countable spaces. The theory and applications of both processes have grown considerably and have found many applications in fields of great diversity. The latest reviews of the theory are given in [3,4]. The motives for this theory go back to the homogeneous Markov chain models for manpower systems in [5,6], whose evolution, in a variety of populations, is well described in [7]. The stochastic process of an NHMS provides a general framework in which a large variety of applied probability models can be accommodated. However, the real motives were the works on non-homogeneous Markov models for manpower systems such as [8,9,10]. Owing to their large number, we can only supply a selection of applications of the theory. We start with the evolution of the HIV virus within the T-cells of the human body in [11,12,13]; asthma was studied in [14]; reliability applications exist in [15], for example; examples of biomedical studies exist in [16,17]; applications in gene sequences ([18]); in DNA and web navigation ([19]); in manpower systems in [20,21,22,23,24,25]; in Physical Chemistry ([26]); and examples from ecological modelling ([27]). Finally, there is the work of the research school of Prof. McClean on health systems (see, for example, [28,29,30,31,32]). Note that health systems are large manpower systems, and each member has many parameters that are used to categorize him/her into the different groups. This characteristic of health systems is what makes them an area of potential application of the present results (see [33]).
The rigorous foundation of Markov systems in general state spaces (MSGS) was introduced in [33], where the problem of the asymptotic behaviour, or ergodicity, of Markov systems was also studied. Important theorems were proved on the ergodicity of the distribution of the expected population structure of a Markov system that lives in a general space. In addition, the total variability from the invariant measure was studied given that the Markov system is ergodic, and it was shown that the total variation is finite.
Markov chains in general spaces have very important applications in the areas of Markov models in time series, models in control and systems theory, Markov models with regeneration times, Moran dams, Markov chain Monte Carlo simulation algorithms, etc. For more details, see [34], Chapter 2. We believe that the introduction of a population modulated as a Markov chain with a general state space will considerably increase the potential for new applications, as described above in the countable case. Hence, although the results in the present paper are purely theoretical, the prospects for interesting applications are present and promising.
In Section 2 and Section 2.1, we start with the formal definition of a Markov system in a general state space, as it was introduced in [33]. In Section 2.2, we provide an abstract image of an MSGS in an attempt to help the reader understand the present paper. In Section 2.3, we introduce some known concepts and results useful in founding the novel results that follow from that point to the end. In Section 3, we study the rate of convergence of the expected population structure in an MSGS. Two theorems are provided: in the first, we provide conditions under which the MSGS is uniformly ergodic, and in the second, conditions under which the MSGS is V-uniformly ergodic. In Section 4, we proceed to study the asymptotic periodicity of the expected population structure for a population that lives in a general state space. A basic theorem is proved, where it is shown that if d is the period of the inherent Markov chain, then the sequence of expected population structures splits into d converging subsequences. The asymptotic periodicity of non-homogeneous Markov systems in countable spaces was studied in [35,36]. In the present study, the basic tools and methodology are different since, among other reasons, the equations are of a completely different nature; that is, we move from difference equations to integral equations for the expected population structure. In Section 5, we study the total variability from the invariant measure in the periodic case for the expected population structure of an MSGS. In the form of two theorems, we provide conditions under which the total variability in the periodic case is finite.
2. The Markov System in a General State Space
We will start with a formal definition of a Markov system in a general state space ([33]) and then provide an explanatory example. The reader may choose the order in which to read the two subsections.
2.1. The Foundation of an MSGS
Let $(X, \mathcal{F}(X))$ be the state space, where X is a general set and $\mathcal{F}(X)$ denotes a countably generated σ-field on X. Assume that small letters are elements of X and capital letters elements of $\mathcal{F}(X)$. Let us denote by T(t) the population of the system at time t, which lives in $(X, \mathcal{F}(X))$. We assume that {T(t)} is a known stochastic process in discrete time.
Let there be a positive σ-finite measure on $\mathcal{F}(X)$, which we assume represents the initial distribution of a member of the population in the space $(X, \mathcal{F}(X))$. We also assume that there is a distinguished set $W \in \mathcal{F}(X)$, which we will refer to as the “gate” of the space $(X, \mathcal{F}(X))$.
In the space where the population lives, it is assumed that each member holds a membership and that leavers leave their memberships at the “gate”. Therefore, at any time t, the number of memberships at the “gate” W is the number left by leavers, plus the memberships necessary to complete the desired population. We will only work with the cases for which this completion is possible. New members of the population take their memberships at the gate W, from which they then make a transition to any $A \in \mathcal{F}(X)$. Now let
be the transition probability kernel from the point x to the set A in one time step for a member of the population. Denote the Markov chain with state space $(X, \mathcal{F}(X))$ which is defined uniquely by this probability kernel. We now make the following:
Assumption 1.
For the Markov chain the set W is an atom.
That is, there exists a measure on $\mathcal{F}(X)$ such that
with .
Now, define by
The transition probability kernel in $(X, \mathcal{F}(X))$ is the probability that a membership makes a transition from the point x into the set A in one time step, either by a direct transition of the member holding the membership, or by the member leaving the system through the gate W and the entrance of a new member, who takes over that membership, into the set A. In addition, the new memberships entering the space at each time interval from the gate of the atom W are distributed according to the σ-finite probability measure.
We will denote the Markov chain with state space $(X, \mathcal{F}(X))$ defined uniquely by the above kernels. We also use the notation for the probability that a membership at x will move in n time steps to the set A.
Now, define:
the number of memberships of the population that are in the set A at time t, given the initial population structure.
Assume, as is usual in many applications, a natural partition of the space, that is,
Of interest is the expected population structure, defined as:
where the integrand is a non-negative measurable function on X. We are also interested in the relative expected population structure, defined by
It can easily be seen that this is a positive σ-finite probability measure, since
The evolution of the expected population structure, or of the relative expected population structure, is of central importance in the study of Markov systems in a general state space. From Vassiliou [33], we get that
Note that
is the expected number of memberships from the initial population that survived up to time t and are in set A. The part
is the expected number of memberships that entered the system in the interval, survived up to time t, and are in the set A.
We call the random process described above, with state space $(X, \mathcal{F}(X))$, a Markov system in a general state space (MSGS). In addition, we will call the Markov chain defined above the inherent Markov chain of the Markov system.
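The evolution just described can be mimicked by a small finite-state analogue: memberships either move directly or are recycled through the gate, and new memberships enter at the gate. All numbers below (kernel entries, recruitment law r, growth totals) are illustrative assumptions, not values from the paper; this is a minimal sketch, not the paper's construction.

```python
# Finite-state sketch of the membership evolution of Section 2.1.
# States 0..k-1 play the role of the sets of the partition; W is the gate.

def membership_kernel(P, leave, r):
    """Q(x,y) = P(x,y) + leave(x) * r(y): a membership moves either by a
    direct move of its holder, or by the holder exiting through the gate W
    and a new member taking over the membership, entering according to r."""
    k = len(P)
    return [[P[x][y] + leave[x] * r[y] for y in range(k)] for x in range(k)]

def evolve(N, Q, new_members, r, steps):
    """E[N_{t+1}](y) = sum_x E[N_t](x) Q(x,y) + DeltaT(t+1) r(y):
    surviving memberships plus entrants, as in the decomposition above."""
    k = len(Q)
    for t in range(steps):
        N = [sum(N[x] * Q[x][y] for x in range(k)) + new_members[t] * r[y]
             for y in range(k)]
    return N

# Member kernel among states {0,1}; leave(x) = P(x, W) is the exit probability.
P = [[0.6, 0.3], [0.2, 0.5]]
leave = [0.1, 0.3]
r = [0.7, 0.3]              # recruitment distribution from the gate
Q = membership_kernel(P, leave, r)

N = [50.0, 50.0]            # initial expected structure, T(0) = 100
growth = [10, 10, 10]       # DeltaT(t): 10 new memberships per step
N3 = evolve(N, Q, growth, r, 3)
print(N3, sum(N3))          # total is T(0) + 30 = 130
```

Note that each row of the membership kernel Q sums to one, so the stochastic part conserves the expected total and only the entrants change it.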
2.2. An Abstract Image for MSGS
In the present subsection, in order to aid understanding of the various concepts and results, we will use the analogy of a frog that jumps among the lilies in a pond. This abstract image was first used in [33], and the reader is referred to that study for a more extensive description and an explanation of the various concepts. In summary, imagine a lily pond covered by the leaves of lilies. A population of frogs lives in the pond, a number of them sitting on each lily at each point in time, and hence we have an expected population of frogs on the lilies. There are also leavers and newcomers in the pond at every point in time, due to the antagonistic mating situation that exists between the males.
2.3. Introducing Some Important Concepts and Known Results
Throughout the paper, we will assume that the Markov chain is ψ-irreducible. That is, there is a σ-finite measure ψ on $\mathcal{F}(X)$ such that, for any set A with ψ(A) > 0 and any x,
Assume that ψ is maximal with respect to this property. We will also assume that the Markov chain is aperiodic, apart from the later sections, where we study periodicity. The concept of irreducibility for Markov chains on general state spaces was introduced by [37,38] and followed by many authors [39,40,41,42,43,44].
We will adopt the term ergodicity for Markov chains for which the limit exists and is equal to the invariant measure, or stationary distribution, of the chain. As a mode of convergence in the space, the total variation norm will be used. For a signed measure on $\mathcal{F}(X)$, the total variation norm is given by
The key limit of interest in the present paper will be of the form
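In the finite case, the total variation norm just introduced reduces to a sum of absolute values via the Hahn decomposition. The following toy sketch, with illustrative probability vectors that are our own assumptions, computes it for the difference of two distributions, which is exactly the kind of signed measure appearing in the limits above.

```python
# Total variation norm of a signed measure on a finite sigma-field:
# ||nu|| = (positive part) - (negative part) = sum_x |nu({x})|.

def total_variation(nu):
    """Hahn decomposition on points: positive mass minus negative mass."""
    pos = sum(v for v in nu if v > 0)
    neg = sum(v for v in nu if v < 0)
    return pos - neg          # equals the sum of |nu({x})|

# The difference of two probability vectors is a signed measure, ||.|| <= 2.
lam = [0.5, 0.3, 0.2]
pi  = [0.2, 0.2, 0.6]
nu  = [a - b for a, b in zip(lam, pi)]
print(total_variation(nu))    # 0.3 + 0.1 + 0.4 = 0.8
```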
For more on the spectral theory and rates of convergence for Markov chains in general spaces, see [34]. Research here is motivated by elegant mathematics as well as by a range of applications. Note that, for countable non-homogeneous Markov systems, it was proved that the space of all possible expected population structures is a Hilbert space ([45]). Now, for a ψ-irreducible Markov chain that lives in $(X, \mathcal{F}(X))$, denote by
Let two transition kernels for Markov chains be given; then, for a positive function $V \geq 1$, define the V-norm distance between them as
We define the outer product of the function 1 and the invariant measure π to be the kernel
In many applications, we consider the distance for large n. We provide the following definition from [34]:
Definition 1.
(V-uniform ergodicity). An ergodic chain Φ with transition kernel Q is called V-uniformly ergodic if
Of great interest is the special case where V = 1. In this case, we provide the following definition:
Definition 2.
(Uniform ergodicity). A chain Φ with transition kernel Q is called uniformly ergodic if it is V-uniformly ergodic with V = 1, that is, if
The concept of a small set is very useful in order to obtain a finite breakup into cyclic sets for ψ-irreducible chains.
Definition 3.
We say that a set $C \in \mathcal{F}(X)$ is a small set if there exists an $m > 0$ and a non-trivial measure $\nu_m$ on $\mathcal{F}(X)$ such that, for all $x \in C$,
In such a case, we say that C is $\nu_m$-small.
From [34] p. 105 we get the following Theorem:
Theorem 1.
If Φ is ψ-irreducible, then for every set A with ψ(A) > 0, there exist an $m \geq 1$ and a $\nu_m$-small set $C \subseteq A$ such that ψ(C) > 0 and $\nu_m(C) > 0$.
Given the existence of just one small set from the previous theorem, the proposition in [34], p. 106, states that it is further possible for the set X to be covered by small sets in the ψ-irreducible case.
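In the finite case, the minorization condition of Definition 3 can be checked directly: the componentwise minimum of the rows of the m-step kernel over C is a candidate $\nu_m$, and C is small at lag m exactly when that minimum is non-trivial. The kernel and the set C below are illustrative assumptions, sketched only to make the definition concrete.

```python
# Checking nu_m-smallness of a set C for a finite toy kernel Q.

def mat_mul(A, B):
    k = len(A)
    return [[sum(A[i][l] * B[l][j] for l in range(k)) for j in range(k)]
            for i in range(k)]

def minorizing_measure(Q, C, m):
    """nu_m(y) = min_{x in C} Q^m(x, y); C is nu_m-small iff nu_m is
    a non-trivial (not identically zero) measure."""
    Qm = Q
    for _ in range(m - 1):
        Qm = mat_mul(Qm, Q)
    return [min(Qm[x][y] for x in C) for y in range(len(Q))]

Q = [[0.5, 0.5, 0.0],
     [0.0, 0.0, 1.0],
     [1.0, 0.0, 0.0]]

nu1 = minorizing_measure(Q, [0, 1], 1)   # rows 0,1 have disjoint support
nu2 = minorizing_measure(Q, [0, 1], 2)   # at m = 2 the rows overlap
print(sum(nu1), sum(nu2))                # 0 at m=1; non-trivial at m=2
```

Here C = {0, 1} fails to be small at m = 1 but is $\nu_2$-small, showing why the lag m matters in the definition.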
Assume that C is any $\nu_M$-small set and, without loss of generality, that $\nu_M(C) > 0$. With the use of the set C, we define a cycle for a general irreducible Markov chain. We will suppress the subscript M for simplicity.
We have that
hence, when the chain starts in C, there is a positive probability that the chain will return to C at time M. Let
Then the “period” of the set C is given by the greatest common divisor of this set of time points. The following theorem ([34], p. 113) will be useful in what follows:
Theorem 2.
Assume that Φ is a ψ-irreducible Markov chain on $(X, \mathcal{F}(X))$. If C is a $\nu_M$-small set with ψ(C) > 0 and d is the greatest common divisor of the set of time points above, then there exist disjoint sets $D_0, D_1, \dots, D_{d-1}$ (a “d-cycle”) such that
(i) for $x \in D_i$, the chain moves in one step to $D_{i+1 \,(\mathrm{mod}\, d)}$ with probability 1;
(ii) the set $N = \left(\bigcup_{i=0}^{d-1} D_i\right)^{c}$ is ψ-null.
The d-cycle is maximal in the sense that, for any other collection $\{d', D'_k, k = 0, \dots, d'-1\}$ satisfying (i)–(ii), we have d' dividing d; whilst if d' = d, then, by reordering the indices if necessary, $D'_i = D_i$ a.e. ψ.
It is obvious from the above that any small set must be essentially contained inside one specific cyclic class of the d-cycle. From [34], p. 115, we need the following proposition:
Proposition 1.
Suppose Φ is a ψ-irreducible chain with period d and d-cycle $\{D_i\}_{i=0}^{d-1}$. Then each of the sets $D_i$ is an absorbing ψ-irreducible set for the chain corresponding to the transition kernel $Q^d$, and the chain restricted to each $D_i$ is aperiodic.
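On a finite toy chain, the period of the previous discussion can be computed directly as the greatest common divisor of the possible return times to a state, in the spirit of the set of time points used in Theorem 2. The chain below (a deterministic 3-cycle) is an illustrative assumption.

```python
# Period of a state c: gcd of { n >= 1 : Q^n(c, c) > 0 }, up to a horizon.
from math import gcd
from functools import reduce

def period(P, c, horizon=50):
    k = len(P)
    returns = []
    dist = P[c][:]                     # row c of P^1
    for n in range(1, horizon + 1):
        if dist[c] > 0:
            returns.append(n)          # return to c possible at time n
        dist = [sum(dist[x] * P[x][y] for x in range(k)) for y in range(k)]
    return reduce(gcd, returns)

# A 3-state chain that cycles 0 -> 1 -> 2 -> 0: period 3, with the
# three singletons {0}, {1}, {2} forming the d-cycle.
P = [[0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0],
     [1.0, 0.0, 0.0]]
print(period(P, 0))   # 3
```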
For any set $A \in \mathcal{F}(X)$, we denote by $\eta_A$ the number of visits by the chain to A after time zero, which is given by
We define the kernel
The chain is called recurrent if it is ψ-irreducible and the expected number of visits to A is infinite for every starting point and every set A with ψ(A) > 0. If the chain is ψ-irreducible and admits a recurrent atom, then every set A with ψ(A) > 0 is recurrent.
Definition 4.
A set $A \in \mathcal{F}(X)$ is called Harris recurrent if
A chain Φ is called Harris recurrent if it is ψ-irreducible and every set in is Harris recurrent.
Definition 5.
A σ-finite measure π on $\mathcal{F}(X)$ with the property
will be called invariant.
Suppose that Φ is ψ-irreducible and admits an invariant probability measure π. Then Φ is called a positive chain. If Φ is Harris recurrent and positive, then Φ is called a positive Harris chain. From [34], p. 328 and p. 204, we get the following two results:
Theorem 3.
If Φ is positive Harris and aperiodic, then for any initial distribution λ
For any Harris recurrent set H, we write the probability of ever entering H from y, so that H is absorbing. We will call H a maximal absorbing set if H coincides with the set of points from which H is entered with probability one.
Theorem 4.
If Φ is recurrent, then we can write
where H is a non-empty maximal Harris set and N is transient.
3. Rate of Convergence of MSGS
In the present section, we will study the rate of convergence of Markov systems in general spaces. We will provide conditions under which an ergodic Markov system is uniformly ergodic, and conditions under which it is V-uniformly ergodic. From [34], p. 393, we get the following two theorems:
Theorem 5.
Suppose that Φ is ψ-irreducible and aperiodic with transition kernel Q. Then the following are equivalent for a function $V \geq 1$:
Φ is V-uniformly ergodic.
There exist $r > 1$ and $R < \infty$ such that, for all $n$,
There exists some such that for and
Theorem 6.
For any Markov chain Φ with transition kernel Q, the following are equivalent:
Φ is uniformly ergodic.
There exist $r > 1$ and $R < \infty$ such that, for all x,
that is, the convergence takes place at a uniform geometric rate.
For some
The chain is aperiodic and Doeblin’s condition holds: that is, there is a probability measure ϕ on $\mathcal{F}(X)$ and constants $\epsilon < 1$, $\delta > 0$, and $m \geq 1$, such that, whenever $\phi(A) > \epsilon$,
The set X is $\nu_m$-small for some m.
For every set and an aperiodic Markov chain, there is a petite set C with
and in this case
For every set and an aperiodic Markov chain there is a petite set C and with
and we have for some
For an aperiodic Markov chain there is a bounded solution to
for some constants and some petite set
Under we get that for any
where
Now, from [46], we will borrow the following theorem adapted for an MSGS:
Theorem 7.
Let there be a Markov system in a general state space $(X, \mathcal{F}(X))$ which is expanding. The following two statements are equivalent:
The sequence converges geometrically, that is,
The non-negative sequence tends to zero geometrically.
We will start by studying uniform ergodicity for an MSGS, for reasons of simplicity, and then proceed to study V-uniform ergodicity.
Definition 6.
(Uniform ergodicity for an MSGS).
Let there be a Markov system in a general state space $(X, \mathcal{F}(X))$. Also let T(t) be the total population of memberships, with transition kernel Q for the inherent Markov chain of memberships, and let the expected population structure be as defined above. We say that the MSGS is uniformly ergodic if and only if there exist an $r > 1$ and an $R < \infty$ such that
We now provide the following theorem concerning uniform ergodicity for an MSGS.
Theorem 8.
Let there be a Markov system that lives in $(X, \mathcal{F}(X))$. Assume, in addition, that
and that the inherent Markov chain with transition kernel is uniformly ergodic. Then the MSGS is uniformly ergodic.
Proof.
We have that
From and since is geometrically fast and for every from Theorem 6 we get that there exist a and a such that
Also from , since and since the inherent Markov chain with kernel is uniformly ergodic, from Theorem 6 (ii) there exists and such that
with
Now, again since is geometrically fast and for every from Theorem 6 there exist a and a with ; in addition since the inherent Markov chain with kernel is uniformly ergodic, from Theorem 6 (ii) there exists and , that is, such that
Now, from (27)–(30) we have that
Let then and
which concludes the proof that MSGS is uniformly ergodic. □
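The geometric rate established in Theorem 8 can be illustrated numerically in the finite, aperiodic case: the worst-case total variation distance to the invariant measure shrinks by at least a constant factor at every step. The kernel below is an illustrative assumption, and the invariant measure is approximated by a high power of the kernel; this is a sketch of the phenomenon, not of the proof.

```python
# Geometric decay of sup_x || Q^n(x, .) - pi || for a finite toy kernel.

def mat_mul(A, B):
    k = len(A)
    return [[sum(A[i][m] * B[m][j] for m in range(k)) for j in range(k)]
            for i in range(k)]

def tv(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

Q = [[0.5, 0.4, 0.1],
     [0.3, 0.4, 0.3],
     [0.2, 0.3, 0.5]]

# Approximate the invariant measure pi by a high power of Q.
Qn = Q
for _ in range(200):
    Qn = mat_mul(Qn, Q)
pi = Qn[0]

# Worst-case total variation distance after n steps, n = 1..10.
dists = []
P = Q
for n in range(10):
    dists.append(max(tv(row, pi) for row in P))
    P = mat_mul(P, Q)

ratios = [dists[i + 1] / dists[i] for i in range(len(dists) - 1)]
print(dists[0], dists[-1])   # the distance shrinks with n
print(max(ratios) < 1.0)     # step-to-step contraction: a geometric rate
```

For this kernel, every column has strictly positive minimum, so the one-step contraction coefficient is below one, which forces the ratios to stay below one.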
Generalizing the above results to V-uniform ergodicity is now straightforward. We start with the definition:
Definition 7.
(V-Uniform ergodicity for an MSGS). Let there be a Markov system in a general state space $(X, \mathcal{F}(X))$. Also let T(t) be the total population of memberships, with transition kernel Q for the inherent Markov chain of memberships, and let the expected population structure be as defined above. We say that the MSGS is V-uniformly ergodic if and only if there exist an $r > 1$ and an $R < \infty$ such that
Following exactly the proof of Theorem 8, we arrive at the following result:
Theorem 9.
Let there be a Markov system that lives in $(X, \mathcal{F}(X))$. Assume, in addition, that
and that the inherent Markov chain with transition kernel Q is V-uniformly ergodic. Then the MSGS is V-uniformly ergodic.
4. Asymptotic Periodicity of an MSGS
The study of the asymptotic behaviour of Markov chains has been very important for finite or countable spaces from the start of their history. The general state space results are due to [40,41,47]. For non-homogeneous Markov chains with a countable state space, the initial important results on asymptotic behaviour were achieved by [48,49,50,51,52]. For the semi-Markov process analogue, a wealth of results exists in the books by [53,54]. For NHMS and NHSMS on countable spaces, the asymptotic behaviour has also been a problem of central importance; see, for example, [1,36,55,56,57]. In the present section, we will assume that the Markov chain is periodic and study the asymptotic behaviour of the expected population structure, which is evidently an important problem for a population that lives in a general state space. Let the inherent Markov chain of the MSGS with kernel Q be positive Harris with period d, and let the d-cycle described in Theorem 2 be the sets $D_0, D_1, \dots, D_{d-1}$. We have assumed a given initial distribution on X. Then the distribution on the set $D_i$ is
From Theorem 2, we have that if then and consequently . However, for we have hence Similarly, since the period is d we have
Now, in general for and for we have
It is apparent from that
If and for we have
From the fact that the inherent Markov chain with kernel Q is assumed to be positive Harris with period d, we know that the chain restricted to each cyclic set is positive Harris and aperiodic on the d-skeleton, and by Theorem 3 it is straightforward to check the following theorem:
Theorem 10.
If the inherent Markov chain of the MSGS is positive Harris and periodic with period d, then for the sets of the d-cycle we have that
for any set and
for any set
Now, from the above Theorem and the decomposition Theorem 4, we immediately get the following:
Theorem 11.
If the inherent Markov chain of the MSGS is positive recurrent and periodic with period d, then, for the sets of the d-cycle, for each i there exists a null set such that, for every initial distribution with
for any set and
for any set
We will now start building the proof of the periodicity theorem for an MSGS; we will state the theorem at the end. The important question is the periodicity of the expected population structure
when the inherent Markov chain with kernel Q is positive Harris and periodic with period d, or positive recurrent and periodic with period d. With no loss of generality and for simplicity, in what follows we will use the set A instead of any of the cyclic sets. From the equation above, we have that
We will work with each part of the right-hand side of this equation separately. We have that
We could write the set A in general as follows
Let without loss of generality that and consider in what follows that for all , . Then we have that
From we have that
Now, from and we get that
Hence, we proved that when and then the limit of as is
Similarly, following the same steps we could prove that for and then
From and it is not difficult to check that
Assume now that and for every t. Let again that then
Now, since we have
Hence, from and we get that
It is apparent that since
and thus the series
is bounded by , its elements are non-negative and thus it converges; so let
Now, let us define by
Now from we get that
Therefore becomes
We now provide the following useful Lemma:
Lemma 1.
If as and non-negative, and then
From Lemma 1, and the result in where we may replace by it is not difficult to check that the following holds
Define by
Then similarly we could prove that
Therefore from and for we get that
for . Hence, we have proved the following Theorem:
Theorem 12.
Let there be an MSGS, and let the inherent Markov chain of the MSGS be positive Harris and periodic with period d; or let the inherent Markov chain of the MSGS be positive recurrent and periodic with period d, with the cyclic sets as above. Then the expected population structure splits into d converging subsequences, that is,
for
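The splitting into d converging subsequences asserted in Theorem 12 can be observed numerically on a finite toy chain with period d = 2: the full sequence of distributions keeps oscillating between the two cyclic classes, while the even-indexed and odd-indexed subsequences each settle to their own limit. All numbers below are illustrative assumptions.

```python
# A 4-state chain with cyclic classes {0,1} and {2,3}: period d = 2.

def step(mu, Q):
    k = len(Q)
    return [sum(mu[x] * Q[x][y] for x in range(k)) for y in range(k)]

Q = [[0.0, 0.0, 0.5, 0.5],
     [0.0, 0.0, 0.2, 0.8],
     [0.6, 0.4, 0.0, 0.0],
     [0.3, 0.7, 0.0, 0.0]]

mu = [1.0, 0.0, 0.0, 0.0]      # start in the class {0, 1}
traj = []
for n in range(60):
    traj.append(mu)
    mu = step(mu, Q)

even_tail, odd_tail = traj[-2], traj[-1]
gap_even = max(abs(a - b) for a, b in zip(traj[-4], even_tail))
gap_odd  = max(abs(a - b) for a, b in zip(traj[-3], odd_tail))
print(gap_even < 1e-8 and gap_odd < 1e-8)   # each subsequence settles
# ...but consecutive terms stay far apart: the mass alternates classes.
print(max(abs(a - b) for a, b in zip(even_tail, odd_tail)) > 0.5)
```

The two subsequence limits play the role of the d limiting structures of the theorem; the full sequence has no limit.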
5. Total Variability from the Invariant Measures in the Periodic Case. Coupling Theorems
In this section, we study the total variability from the invariant measures in the periodic case for an MSGS. We show that the total variation in the periodic case is finite. This is also known as the coupling problem (see [58]). From [34] (p. 332) we get the following:
Theorem 13.
Suppose that Φ is a positive Harris and aperiodic chain, and assume that the chain has an atom. Then, for any regular initial distributions,
and in the case that Φ is regular, then for any
Theorem 14.
Suppose Φ is a positive Harris and aperiodic chain. For any regular initial distributions
Theorem 15.
Suppose that Φ is a positive Harris and aperiodic chain, and let α be an accessible atom. If , then for any regular initial distribution
Now, if we assume that the inherent Markov chain with kernel Q of the MSGS is positive Harris with period d, then by Proposition 1 the chain restricted to each cyclic set is aperiodic and positive Harris on the d-skeleton, and by Theorems 13–15 it is straightforward to check the following theorem:
Theorem 16.
If the inherent Markov chain with kernel Q of the MSGS is positive Harris with period d, then for the sets of the d-cycle we have that
(a) For any regular initial distributions and for
Also, for
for
(b) If for the accessible atom W we have that then for any regular initial distribution λ we have for and
and for and
We will now prove the following coupling theorem for an MSGS whose inherent Markov chain is assumed to be positive Harris and periodic.
Theorem 17.
Assume that the inherent Markov chain, with kernel Q, of a Markov system that lives in a general state space is positive Harris with period d. Also, let it be that
Then for any regular distributions we have
Proof.
We have that
With no loss of generality, assume that . Denote by
From and we have that
We have that
From it is apparent that with no loss of generality we may assume that
Also, without loss of generality again we may assume that , then from and
Now, it is not difficult to check that
and
Therefore from (63)–(65) we get that
From Theorem 16 and we get that
In a similar way we get that and and consequently from we get that
The case for is proved in an analogous way.
We have that
We start with and we have that
Denote by
Then with no loss of generality we may assume that there is an r such that
From (70)–(72) and using Theorem 16 we get that
We now move to and we have that
We start first with and we get that
Now, using Theorem 16, and from the above, we have that
Similarly, we get also that
From , , , and , we conclude the proof of the Theorem. □
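The finiteness proved in the coupling theorems can be mimicked numerically in the finite, aperiodic case: the total variation distance between two initial laws pushed forward by the kernel decays geometrically, so the partial sums of those distances stabilise. The kernel and the two initial laws below are illustrative assumptions, sketching the phenomenon rather than the proof.

```python
# Finite analogue of the total-variability sums of Section 5.

def step(mu, Q):
    k = len(Q)
    return [sum(mu[x] * Q[x][y] for x in range(k)) for y in range(k)]

def tv(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

Q = [[0.70, 0.20, 0.10],
     [0.30, 0.50, 0.20],
     [0.25, 0.25, 0.50]]

lam = [1.0, 0.0, 0.0]      # two different regular initial laws
mu  = [0.0, 0.0, 1.0]

partial_sums, total = [], 0.0
for n in range(200):
    total += tv(lam, mu)   # n-th summand: || lam Q^n - mu Q^n ||
    partial_sums.append(total)
    lam, mu = step(lam, Q), step(mu, Q)

# The partial sums stabilise: the series of distances converges,
# i.e., the total variability is finite.
print(partial_sums[-1] - partial_sums[-50] < 1e-8)
```

Here every column of Q has a strictly positive minimum, which gives a one-step contraction of the distance and hence a geometrically dominated, convergent series.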
Similarly, and with the use of Theorem 16(b), we could prove the following theorem:
Theorem 18.
Assume that the inherent Markov chain, with kernel Q, of a Markov system that lives in a general state space is positive Harris with period d. Assume, in addition, that the condition of Theorem 16(b) holds for the accessible atom W, and that
Then for any regular distribution λ we have
6. Conclusions and Further Research
We studied a population that lives in a general state space, which at every instant has leavers and new entrants and which evolves under the Markov property. We proved that, under certain conditions, the expected population structure converges at a geometric rate to an invariant population structure, with loss of memory of the initial structure. We also provided the analogous result for the V-uniform ergodicity of an MSGS. Moreover, under the assumption that the inherent Markov chain is periodic with period d, we proved that the sequence of expected population structures splits into d convergent subsequences and provided their limits in closed form. Finally, we proved that the total variability from the invariant measure in the periodic case is finite. We concluded the novel results by proving two coupling theorems under the assumption that the inherent Markov chain is positive Harris with period d, using different additional conditions in each case. The above theoretical results have important applications both in the classical areas where Markov chains in general spaces have already been applied and especially in the important area of health systems, which nowadays are in crisis all over the world.
The results in the present paper, together with the ones introduced in [33], provide the nucleus of a new path for extensive research in the area. Immediate further research of interest is to establish laws of large numbers for MSGS as a possible extension of the laws of large numbers for populations that live in countable spaces [59].
Funding
This research received no external funding.
Conflicts of Interest
The author declares no conflict of interest.
References
- Vassiliou, P.-C.G. Asymptotic behaviour of Markov systems. J. Appl. Probab. 1982, 19, 851–857. [Google Scholar] [CrossRef]
- Vassiliou, P.-C.G.; Papadopoulou, A.A. Nonhomogeneous semi-Markov systems and maintainability of the state sizes. J. Appl. Probab. 1992, 29, 519–534. [Google Scholar] [CrossRef]
- Vassiliou, P.-C.G. The evolution of the theory of non-homogeneous Markov systems. Appl. Stoch. Models Data Anal. 1997, 13, 159–176. [Google Scholar] [CrossRef]
- Ugwuogo, F.I.; McClean, S.I. Modelling heterogeneity in manpower systems: A review. Appl. Stoch. Models Bus. Ind. 2000, 2, 99–110. [Google Scholar] [CrossRef]
- Young, A.; Almond, G. Predicting distributions of staff. Comput. J. 1961, 3, 246–250. [Google Scholar] [CrossRef][Green Version]
- Bartholomew, D.J. A multistage renewal process. J. R. Stat. Soc. B 1963, 25, 150–168. [Google Scholar]
- Bartholomew, D.J. Stochastic Models for Social Processes; Wiley: New York, NY, USA, 1982. [Google Scholar]
- Young, A.; Vassiliou, P.-C.G. A non-linear model on the promotion of staff. J. R. Stat. Soc. A 1974, 138, 584–595. [Google Scholar] [CrossRef]
- Vassiliou, P.-C.G. A markov model for wastage in manpower systems. Oper. Res. Quart. 1976, 27, 57–70. [Google Scholar] [CrossRef]
- Vassiliou, P.-C.G. A high order non-linear Markovian model for promotion in manpower systems. J. R. Stat. Soc. A 1978, 141, 86–94. [Google Scholar] [CrossRef]
- Mathew, E.; Foucher, Y.; Dellamonica, P.; Daures, J.-P. Parametric and Non-Homogeneous Semi-Markov Process for HIV Control; Working Paper; Archer Hospital: Nice, France, 2006. [Google Scholar]
- Foucher, Y.; Mathew, E.; Saint Pierre, P.; Durant, J.-F.; Daures, J.P. A semi-Markov model based on generalized Weibull distribution with an illustration for HIV disease. Biom. J. 2005, 47, 825–833. [Google Scholar] [CrossRef]
- Dessie, Z.G. Modeling of HIV/AIDS dynamic evolution using non-homogeneous semi-Markov process. Springerplus 2014, 3, 537. [Google Scholar] [CrossRef] [PubMed]
- Saint Pierre, P. Modèles multi-états de type markovien et application à l’asthme. Ph.D. Thesis, University of Montpellier I, Montpellier, France, 2005. [Google Scholar]
- Barbu, V.; Boussement, M.; Limnios, N. Discrete-time semi Markov model for reliability and survival analysis. Commun. Stat. Theory Methods 2004, 33, 2833–2886. [Google Scholar] [CrossRef]
- Perez-Ocon, R.; Castro, J.E.R. A semi-Markov model in biomedical studies. Commun. Stat. Theory Methods 2004, 33, 437–455. [Google Scholar]
- Ocana-Riola, R. Non-homogeneous Markov process for biomedical data analysis. Biom. J. 2005, 47, 369–376. [Google Scholar] [CrossRef] [PubMed]
- McClean, S.I.; Scotney, B.; Robinson, S. Conceptual clustering of heterogeneous gene expression sequences. Artif. Intell. Rev. 2003, 20, 53–73. [Google Scholar] [CrossRef]
- Papadopoulou, A.A. Some results on modeling biological sequences and web navigation with a semi-Markov chain. Commun. Stat.Theory Methods 2013, 41, 2853–2871. [Google Scholar] [CrossRef]
- De Feyter, T. Modelling heterogeneity in manpower planning: Dividing the personnel system into more homogeneous subgroups. Appl. Stoch. Models Bus. Ind. 2006, 22, 321–334. [Google Scholar] [CrossRef]
- Nilakantan, K.; Raghavendra, B.G. Control aspects in proportionality Markov manpower systems. Appl. Stoch. Models Data Anal. 2005, 7, 27–341. [Google Scholar] [CrossRef]
- Yadavalli, V.S.S.; Natarajan, R.; Udayabhaskaram, S. Optimal training policy for promotion-stochastic models of manpower systems. Electron. Publ. 2002, 13, 13–23. [Google Scholar] [CrossRef]
- Sen Gupta, A.; Ugwuogo, F.L. Modelling multi-stage processes through multivariate distributions. J. Appl. Stat. 2006, 33, 175–187. [Google Scholar] [CrossRef]
- De Feyter, T.; Guerry, M. Markov manpower models: A review. In Handbook of Optimization Theory: Decision Analysis and Applications; Varela, J., Acuidja, S., Eds.; Nova Science Publishers: Hauppauge, NY, USA, 2011; pp. 67–88. [Google Scholar]
- Guerry, M.A. Some results on the Embeddable problem for discrete time Markov models. Commun. Stat. Theory Methods 2014, 43, 1575–1584. [Google Scholar] [CrossRef]
- Crooks, G.E. Path-ensemble averages in systems driven far from equilibrium. Phys. Rev. E 2000, 61, 2361–2366. [Google Scholar] [CrossRef]
- Patoucheas, P.D.; Stamou, G. Non-homogeneous Markovian models in ecological modelling: A study of the zoobenthos dynamics in Thermaikos Gulf, Greece. Ecol. Model. 1993, 66, 197–215. [Google Scholar] [CrossRef]
- Faddy, M.J.; McClean, S.I. Markov chain for geriatric patient care. Methods Inf. Med. 2005, 44, 369–373. [Google Scholar] [CrossRef] [PubMed]
- Gorunescu, F.; McClean, S.I.; Millard, P.H. A queueing model for bed-occupancy management and planning of hospitals. J. Oper. Res. Soc. 2002, 53, 19–24. [Google Scholar] [CrossRef]
- Gorunescu, F.; McClean, S.I.; Millard, P.H. Using a queueing model to help plan bed allocation in a department of geriatric medicine. Health Care Manag. Sci. 2002, 5, 307–312. [Google Scholar] [CrossRef]
- Marshall, A.H.; McClean, S.I. Using Coxian phase-type distributions to identify patient characteristics for duration of stay in hospital. Health Care Manag. Sci. 2004, 7, 285–289. [Google Scholar] [CrossRef]
- McClean, S.I.; Millard, P. Where to treat the older patient? Can Markov models help us better understand the relationship between hospital and community care. J. Oper. Res. Soc. 2007, 58, 255–261. [Google Scholar] [CrossRef]
- Vassiliou, P.-C.G. Markov systems in a General State Space. Commun. Stat.-Theory Methods 2014, 43, 1322–1339. [Google Scholar] [CrossRef]
- Meyn, S.; Tweedie, R.L. Markov Chains and Stochastic Stability; Cambridge University Press: Cambridge, UK, 2009. [Google Scholar]
- Tsaklidis, G.M.; Vassiliou, P.-C.G. Asymptotic periodicity of the variances and covariances of the state sizes in non-homogeneous Markov systems. J. Appl. Probab. 1988, 25, 21–33. [Google Scholar] [CrossRef]
- Georgiou, A.C.; Vassiliou, P.-C.G. Periodicity of asymptotically attainable structures in non-homogeneous Markov systems. Linear Algebra Its Appl. 1992, 176, 137–174. [Google Scholar] [CrossRef]
- Doeblin, W. Sur les proprietes asymptotiques de mouvement regis par cerain types de chaines simples. Bull. Math. Soc. Roum. Sci. 1937, 39, 57–115. [Google Scholar]
- Doeblin, W. Elements d’une theorie general des chaines simple constantes de Markov. Ann. Sci. l’Ecole. Norm. Super. 1940, 57, 61–111. [Google Scholar] [CrossRef]
- Doob, J.L. Stochastic Processes; John Wiley: New York, NY, USA, 1953. [Google Scholar]
- Harris, T.E. The existence of stationary measures for certain Markov processes. In Proceedings of the 3rd Berkeley Symposium on Mathematical Statistics and Probability; University of California Press: Berkeley, CA, USA, 1956; Volume 2, pp. 113–124. [Google Scholar]
- Orey, S. Recurrent Markov chains. Pacific J. Math. 1959, 9, 805–827. [Google Scholar] [CrossRef]
- Tweedie, R.L. R-theory for Markov chains on a general state space I: Solidarity properties and R-recurrent chains. Adv. Appl. Probab. 1974, 2, 840–864. [Google Scholar] [CrossRef]
- Tweedie, R.L. R-theory for Markov chains on a general state space II: R-subinvariant measures for R-transient chains. Adv. Appl. Probab. 1974, 2, 865–878. [Google Scholar] [CrossRef]
- Tweedie, R.L. Sufficient conditions for ergodicity and recurrence of Markov chains on a general state space. Stoch. Proc. Appl. 1975, 3, 385–403. [Google Scholar] [CrossRef]
- Vassiliou, P.-C.G. Exotic properties of non-homogeneous Markov and semi-Markov systems. Commun. Stat.-Theory Methods 2013, 42, 2971–2990. [Google Scholar] [CrossRef]
- Vassiliou, P.-C.G.; Tsaklidis, G.M. On the rate of convergence of the vector of variances and covariances in non-homogeneous Markov systems. J. Appl. Probab. 1989, 26, 776–783. [Google Scholar] [CrossRef]
- Orey, S. Limit Theorems for Markov Chain Transition Probabilities; Van Nostrand Reinhold: London, UK, 1971. [Google Scholar]
- Hajnal, J. The ergodic properties of non-homogeneous finite Markov chains. Proc. Camb. Philos. Soci. 1956, 52, 67–77. [Google Scholar] [CrossRef]
- Hajnal, J. Weak ergodicity in non-homogeneous Markov chains. Proc. Camb. Philos. Soc. 1958, 54, 233–246. [Google Scholar] [CrossRef]
- Hajnal, J. Shuffling with two matrices. Contemp. Math. 1993, 149, 271–287. [Google Scholar]
- Dobrushin, R.L. Central limit theorem for non-stationary Markov chains, I, II. Theory Probab. Its Appl. 1956, 1, 65–80. [Google Scholar] [CrossRef]
- Nummelin, E. General Irreducible Markov chains and Nonnegative Operators; Cambridge University Press: Cambridge, UK, 1984. [Google Scholar]
- Howard, R.A. Dynamic Probabilistic Systems; Wiley: Chichester, UK, 1971; Volumes I and II. [Google Scholar]
- Hunter, J.J. Mathematical Techniques of Applied Probability; Academic Press: Cambridge, MA, USA, 1983. [Google Scholar]
- Vassiliou, P.-C.G. On the limiting behavior of a non-homogeneous Markov chain model in manpower systems. Biometrika 1981, 68, 557–561. [Google Scholar]
- Papadopoulou, A.A.; Vassiliou, P.-C.G. Asymptotic behaviour of non-homogeneous Markov systems. Linear Algebra Its Appl. 1994, 210, 153–198. [Google Scholar] [CrossRef]
- Vassiliou, P.-C.G. Onn the periodicity of non-homogeneous Markov chains and systems. Linear Algebra Its Appl. 2015, 471, 654–684. [Google Scholar] [CrossRef]
- Lindvall, T. Lectures on the Coupling Method; John Wiley: Hoboken, NJ, USA, 1992. [Google Scholar]
- Vassiliou, P.-C.G. Laws of Large numbers for non-homogeneous Markov systems. Methodol. Comput. Appl. Probab. 2018, 1–28. [Google Scholar] [CrossRef]
© 2020 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).