Discrete-Time Semi-Markov Random Evolutions in Asymptotic Reduced Random Media with Applications

This paper deals with discrete-time semi-Markov random evolutions (DTSMRE) in reduced random media. The reduction can be done for ergodic and non-ergodic media. Asymptotic approximations of random evolutions living in reducible random media (random environments) are obtained. Namely, averaging, diffusion approximation, and normal deviation (diffusion approximation with equilibrium) results are obtained by the martingale weak convergence method. Applications of these results to additive functionals and dynamical systems in discrete time produce the above three types of asymptotic results.


Introduction
In order to simplify the analysis of complex systems, we consider stochastic approximation methods in which we simplify not only the system but also the random media. These methods exploit the fact that some subsets of the state space are weakly connected with the other subsets, that is, the transition probabilities between subsets are very small compared to the transition probabilities inside each subset. This fact allows one to carry out an asymptotic reduction of the state space of the system and also of the random medium.
In fact, the random medium, which perturbs the considered system, or equivalently the random evolution, explores some subsets of its state space on a fast time scale, while other subsets are explored on a slow time scale. On the one hand, on the fast time scale, transitions to the slowly explored subsets appear as rare events. On the other hand, on the slow time scale, each fast subset can be considered as a single merged state, since the states within a fast subset are indistinguishable from the point of view of the slow time scale.
Of course, the different kinds of stochastic approximations of the random evolutions give us different kinds of results. Namely, the average approximation leaves the structure of the system unchanged, but with a simpler state space and structure of the random medium. In the diffusion approximation, the structure of the system is simplified to a switched diffusion process, and the switching random medium has a simpler state space and structure. In the normal deviation, or equivalently in the merging with equilibrium, the considered process is the normalized difference between the initial process and the mean process obtained in the averaging scheme, and the limit is a switched diffusion process.
Concerning the state space of the random medium, we may consider a finite or even uncountable factor space of the state space, on which we consider a supporting Markov chain; the original process is then considered as a perturbation of this supporting Markov chain by a signed transition kernel.
Results of this kind in continuous time have been presented in several works, including those by the authors of the present paper. Results of this kind in the semi-Markov setting were first presented by V.S. Koroliuk and his collaborators [1][2][3][4][5]. Asymptotic merging, called consolidation, is also studied by Anisimov [6]. See also results by Yin and Zhang [7]. Some recent papers are also dedicated to the above problems: for Markov switching models see, for example, References [8,9], and for non-equilibrium Markov processes see, for example, References [10,11].
Discrete-time semi-Markov random evolutions have already been studied as the embedded Markov process of semi-Markov processes, where in fact they turn out to be Markov chain random evolutions; see, for example, References [1,2]. Discrete calendar-time Markov evolutions were first introduced by Keepler in Reference [12]. In the semi-Markov setting they were introduced in Reference [13], and studied in depth in Reference [14]. This paper presents new results as a continuation of those in Reference [14]; they differ in that the random media there were on a fixed state space and not reducible, as in the present case. Nevertheless, for the first part, the merging of the semi-Markov chain, we use here a different technique to obtain the merging of the state space and the asymptotic results for stochastic systems, which is based on the compensating operator of the semi-Markov chain; see, for example, References [1,15].
The paper is organized as follows. Section 2 includes the semi-Markov chain setting needed in the sequel. Section 3 includes the merging state space definition and results of asymptotic merging in the ergodic and non-ergodic cases. Section 4 includes the discrete-time semi-Markov random evolution (DTSMRE) definition and preliminary results. Section 5 presents the main results of this paper, that is, the averaging, diffusion, and diffusion-with-equilibrium (normal deviation) approximation results for DTSMRE with merging. Section 6 presents averaging, diffusion, and diffusion-with-equilibrium approximation results for particular systems: integral functionals and dynamical systems. Section 7 presents the proofs of the theorems. Finally, Section 8 contains concluding remarks.

Semi-Markov Chains with Merging
Let (E, E ) be a measurable space with countably generated σ-algebra and (Ω, F , (F_n)_{n∈IN}, P) be a stochastic basis on which we consider a Markov renewal process (MRP), (x_n, τ_n, n ∈ IN), in discrete time k ∈ IN, with state space (E, E ). It is worth noticing that k is the calendar time, while n is the number of jumps; both are IN-valued. Notice that IN is the set of non-negative integers. The semi-Markov kernel q is defined by (see, e.g., Reference [33]) q(x, B, k) := P(x_{n+1} ∈ B, τ_{n+1} − τ_n = k | x_n = x), for x ∈ E, B ∈ E , k ∈ IN. We will also denote q(x, B, Γ) = ∑_{k∈Γ} q(x, B, k), where Γ ⊂ IN. The process (x_n) is the embedded Markov chain (EMC) of the MRP (x_n, τ_n), with transition kernel P(x, dy) on the state space (E, E ). The semi-Markov kernel q is written as q(x, dy, k) = P(x, dy) f_{xy}(k), where f_{xy}(k) := P(τ_{n+1} − τ_n = k | x_n = x, x_{n+1} = y) is the conditional distribution of the sojourn time in state x given that the next visited state is y. We set q(·, ·, 0) ≡ 0.
Here, for simplicity, we do not consider dependence of the function f_{xy} on the second state y; that is, the sojourn time distribution in state x is independent of the arrival state y, and we will denote it by f_x. In fact, any semi-Markov process with dependence on both x and y can be transformed into one with dependence only on x, see, for example, Reference [29]. So there is no loss of generality.
Let ν_k = max{n : τ_n ≤ k} be the process which counts the jumps of the EMC (x_n) in the time interval [0, k] ⊂ IN, and define the discrete-time semi-Markov chain (z_k) by z_k = x_{ν_k}, for k ∈ IN. Define now the backward recurrence time process γ_k := k − τ_{ν_k}, k ≥ 0, and the filtration F_k := σ(z_l, γ_l ; l ≤ k), k ≥ 0.
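To make the objects ν_k, z_k and γ_k concrete, here is a minimal simulation sketch. The two-state kernel below is a hypothetical toy example, not taken from the paper: P is the EMC transition matrix and f_x the sojourn-time distributions, so that the semi-Markov kernel is q(x, y, k) = P(x, y) f_x(k).

```python
import random

# Hypothetical two-state toy example (not from the paper): EMC transition
# matrix P and sojourn-time distributions f_x supported on {1, 2, 3}.
P = {0: {0: 0.2, 1: 0.8}, 1: {0: 0.6, 1: 0.4}}
f = {0: {1: 0.5, 2: 0.5}, 1: {1: 0.3, 2: 0.3, 3: 0.4}}

def draw(dist, rng):
    """Sample a key of `dist` according to its probability values."""
    u, acc = rng.random(), 0.0
    for k, p in dist.items():
        acc += p
        if u < acc:
            return k
    return k                      # guard against floating-point rounding

def simulate(horizon, rng):
    """Simulate the MRP (x_n, tau_n) and derive nu_k, z_k, gamma_k."""
    x, tau = [0], [0]             # x_0 = 0, tau_0 = 0
    while tau[-1] <= horizon:     # generate jumps past the horizon
        x.append(draw(P[x[-1]], rng))              # next EMC state
        tau.append(tau[-1] + draw(f[x[-2]], rng))  # sojourn in x_n
    z, gamma, nu = [], [], []
    for k in range(horizon + 1):
        n = max(i for i in range(len(tau)) if tau[i] <= k)  # nu_k
        nu.append(n)
        z.append(x[n])            # z_k = x_{nu_k}
        gamma.append(k - tau[n])  # backward recurrence time
    return z, gamma, nu

z, gamma, nu = simulate(50, random.Random(1))
```

By construction τ_{ν_k} ≤ k < τ_{ν_k + 1}, so γ_k ≥ 0 and ν_k is non-decreasing, while z_k is constant between jump times.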
The Markov chain (z_k, γ_k), k ≥ 0, has the following transition probability operator on the real bounded measurable functions defined on E × IN, which can be written in the following interesting form: Pϕ(x, k) = [1 − λ_x(k + 1)] ϕ(x, k + 1) + λ_x(k + 1) ∫_E P(x, dy) ϕ(y, 0),  (2) where λ_x(k + 1) is the exit rate of the SMC from the state x ∈ E at time k + 1, given by λ_x(k + 1) := P_x(τ_1 = k + 1 | τ_1 > k). Of course, as usual, the transition rate in discrete time is a probability and not a positive real-valued function, as is the case in continuous time. The above relation is similar to the generator of the process (z_t, γ_t) in continuous time; see, for example, References [1][2][3].
The stationary distribution of the process (z_k, γ_k), if it exists, is given by π(dx × {k}) = ρ(dx) P_x(τ_1 > k)/m, where ρ is the stationary distribution of the EMC and m := ∫_E ρ(dx) m(x). The probability measure π defined by π(B) = π(B × IN) is the stationary probability of the SMC (z_k). From the above equality we get the following useful equality, π(dx) = ρ(dx) m(x)/m,  (3) which connects the stationary distribution of the semi-Markov chain with the stationary distribution of the embedded Markov chain, when they exist. Define also the r-th moment of the holding time in state x ∈ E, m_r(x) := E_x[τ_1^r] = ∑_{k≥1} k^r f_x(k), r ≥ 1, with m(x) := m_1(x). Define now the uniform integrability of the r-th moments of the sojourn times in states by sup_{x∈E} E_x[τ_1^r 1_{τ_1 > n}] → 0, as n → ∞.  (4)
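Relation (3), connecting the stationary law π of the SMC with the stationary law ρ of the EMC through the mean sojourn times m(x), can be checked numerically. The chain below is a hypothetical two-state example: the long-run fraction of calendar time spent in state x should approach π(x) = ρ(x)m(x)/∑_y ρ(y)m(y).

```python
import random

# Hypothetical two-state semi-Markov chain: check numerically that the
# long-run fraction of calendar time spent in state x approaches
#     pi(x) = rho(x) m(x) / sum_y rho(y) m(y)      (relation (3)),
# where rho is the stationary law of the EMC and m(x) the mean sojourn.
P = [[0.3, 0.7], [0.5, 0.5]]
f = [{1: 0.5, 3: 0.5}, {1: 1.0}]               # sojourn laws, m = (2, 1)
m = [sum(k * p for k, p in fx.items()) for fx in f]

rho = [0.5, 0.5]                               # stationary law of the EMC
for _ in range(200):                           # power iteration
    rho = [sum(rho[y] * P[y][x] for y in range(2)) for x in range(2)]
w = [rho[x] * m[x] for x in range(2)]
pi = [wx / sum(w) for wx in w]                 # here pi = (10/17, 7/17)

rng = random.Random(0)
def draw(dist):
    r, acc = rng.random(), 0.0
    for k, p in dist.items():
        acc += p
        if r < acc:
            return k
    return k

occ, x, total = [0, 0], 0, 0                   # occupation times per state
while total < 200_000:
    nxt = 0 if rng.random() < P[x][0] else 1   # next EMC state
    stay = draw(f[x])                          # sojourn time in x
    occ[x] += stay
    total += stay
    x = nxt
freq = [o / total for o in occ]                # empirical time fractions
```

With a long horizon the empirical fractions agree with π to a couple of decimal places, which is the content of relation (3) in this toy case.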

Merging of Semi-Markov Chains
We present here the two cases, ergodic and non-ergodic, of the semi-Markov chain in the merging scheme.

The Ergodic Case
Let us consider a family of ergodic semi-Markov chains z ε k , k ≥ 0, ε > 0, with semi-Markov kernel q ε and a fixed state space (E, E ), a measurable space.
Let us consider the following partition (split) of the state space: E = ∪_{j=1}^{d} E_j, with E_i ∩ E_j = ∅ for i ≠ j.
Let us also consider the trace σ-algebra of E on E_j, denoted by E_j, for j = 1, ..., d.
The semi-Markov kernels have the following representation: q^ε(x, dy, k) = P^ε(x, dy) f_x(k), where the transition kernel of the EMC x^ε_n, n ≥ 0, has the representation P^ε(x, dy) = P(x, dy) + εP_1(x, dy).  (7) The transition kernel P determines a supporting Markov chain, say x^0_n, n ≥ 0, and satisfies the relations P(x, E_j) = 1, for x ∈ E_j, j = 1, ..., d. Of course, the signed perturbing kernel P_1 satisfies the relation P_1(x, E) = 0, so that P^ε(x, E) = P(x, E) = 1. The perturbing signed transition kernel P_1 provides the transition probabilities between the merged states.
Let v : E → Ê := {1, ..., d} be the merging (onto) function defined by v(x) = j for x ∈ E_j, and define the merged processes x̂^ε_t := v(x^ε_{[t/ε]}), where [a] is the integer part of the positive real number a. Define also the projector operator Π onto the null space, N(Q), of the operator Q := P − I, by Πϕ(x) := ∫_{E_j} ρ_j(dy) ϕ(y), for x ∈ E_j, j ∈ Ê.  (10) This operator satisfies the equations ΠQ = QΠ = 0. The potential operator of Q, denoted by R_0, is defined by R_0 := ∑_{n≥0} (P^n − Π). Let us now consider the following assumptions needed in the sequel.
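For a finite ergodic class the projector Π and the potential R_0 can be computed explicitly. The 3-state transition matrix below is hypothetical, and the identities checked, Π = 1ρ, ΠQ = QΠ = 0 and QR_0 = Π − I with R_0 = ∑_{n≥0}(P^n − Π), follow one standard sign convention for the potential operator (an assumption on our part, since the paper's own display was lost in extraction).

```python
# Hypothetical 3-state ergodic class: compute the projector Pi = 1*rho
# onto N(Q), with Q = P - I, and the potential R0 = sum_{n>=0} (P^n - Pi).
# Convention checked here (an assumption): Pi Q = 0 and Q R0 = Pi - I.
P = [[0.1, 0.6, 0.3],
     [0.4, 0.2, 0.4],
     [0.3, 0.5, 0.2]]
n = 3

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matsub(A, B):
    return [[A[i][j] - B[i][j] for j in range(n)] for i in range(n)]

rho = [1.0 / n] * n                      # stationary row vector of P
for _ in range(500):                     # power iteration
    rho = [sum(rho[k] * P[k][j] for k in range(n)) for j in range(n)]

Pi = [list(rho) for _ in range(n)]       # Pi = 1 * rho (rank-one projector)
I = [[float(i == j) for j in range(n)] for i in range(n)]
Q = matsub(P, I)

R0 = [[0.0] * n for _ in range(n)]       # truncated series for R0
Pk = I
for _ in range(500):
    T = matsub(Pk, Pi)
    R0 = [[R0[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    Pk = matmul(Pk, P)

QR0, PiQ = matmul(Q, R0), matmul(Pi, Q)
err_pot = max(abs(QR0[i][j] - (Pi[i][j] - I[i][j]))
              for i in range(n) for j in range(n))
err_proj = max(abs(PiQ[i][j]) for i in range(n) for j in range(n))
```

The series for R_0 converges geometrically at the rate of the second eigenvalue of P, so a few hundred terms already give the identities to machine precision.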
C1: The transition kernel P ε (x, B) of the embedded Markov chain x ε n has the representation (7).
C2: The supporting Markov chain (x^0_n) with transition kernel P is uniformly ergodic in each class E_j, with stationary distribution ρ_j(dx), j ∈ Ê.
C3: The average exit probabilities of the initial embedded Markov chain (x^ε_n) are positive, that is, p̂_j := ∫_{E_j} ρ_j(dx) P_1(x, E \ E_j) > 0, j ∈ Ê.
C4: The mean merged values are positive and bounded, that is, m_j := ∫_{E_j} ρ_j(dx) m(x) ∈ (0, ∞), j ∈ Ê.
From relation (3), we get directly π_j(dx) = ρ_j(dx) m(x)/m_j, where q(x) := 1/m(x) and q_j := 1/m_j.
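Under C1-C4 the merged generator Q̂ on Ê is built from the block-stationary laws ρ_j, the averaged exit probabilities of the perturbing kernel P_1, and the merged mean sojourn times m_j. The sketch below uses a hypothetical 4-state example with two classes; the formula Q̂(j, l) = p̂_{jl}/m_j for l ≠ j is an assumption consistent with the quantities defined in C3-C4, since the exact display of Theorem 1 is not shown in the text above.

```python
# Hypothetical 4-state merging example: E1 = {0, 1}, E2 = {2, 3}.  P is
# block diagonal (supporting chain, no inter-class transitions) and P1 is
# a signed kernel with zero row sums carrying the rare transitions.
# Assumption: merged generator Qhat(j, l) = phat_jl / m_j for l != j.
P = [[0.3, 0.7, 0.0, 0.0],
     [0.6, 0.4, 0.0, 0.0],
     [0.0, 0.0, 0.5, 0.5],
     [0.0, 0.0, 0.2, 0.8]]
P1 = [[-0.10, 0.00,  0.05, 0.05],
      [-0.10, 0.00,  0.10, 0.00],
      [ 0.10, 0.00, -0.10, 0.00],
      [ 0.05, 0.05, -0.10, 0.00]]
classes = [[0, 1], [2, 3]]
m = [2.0, 1.0, 1.5, 1.0]                 # mean sojourn times m(x)

def block_stationary(block):
    """Stationary law of P restricted to one ergodic class."""
    rho = {x: 1.0 / len(block) for x in block}
    for _ in range(300):
        rho = {x: sum(rho[y] * P[y][x] for y in block) for x in block}
    return rho

qhat = [[0.0, 0.0], [0.0, 0.0]]          # merged generator on Ehat
for j, Ej in enumerate(classes):
    rho = block_stationary(Ej)
    mj = sum(rho[x] * m[x] for x in Ej)  # C4: 0 < m_j < infinity
    for l, El in enumerate(classes):
        if l != j:
            p_jl = sum(rho[x] * sum(P1[x][y] for y in El) for x in Ej)
            qhat[j][l] = p_jl / mj       # q_j = 1 / m_j
    qhat[j][j] = -sum(qhat[j][l] for l in range(2) if l != j)
```

Note that each row of P1 sums to zero, as required by P_1(x, E) = 0, and the resulting qhat has zero row sums and positive off-diagonal entries, as a generator on the finite set Ê must.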

The Non-Ergodic Case
Let us consider a family of semi-Markov chains z^ε_k, k ≥ 0, ε > 0, with semi-Markov kernels q^ε and a fixed state space (E, E ), a measurable space, which includes an absorbing state, say 0. Of course, here state 0 can represent a final class, say E_0, and the analysis presented here remains the same.
Let us consider the following partition of the state space: E = {0} ∪ E_1 ∪ ... ∪ E_d, with E_i ∩ E_j = ∅ for i ≠ j.  (13)
We now need the following condition.
C5: The average transition probabilities of the initial embedded Markov chain (x^ε_n) to state 0 are positive, that is, p̂_{j0} := ∫_{E_j} ρ_j(dx) P_1(x, {0}) > 0, j ∈ Ê, with the partition of the state space as defined by (13).

Semi-Markov Random Evolution
Let us consider a separable Banach space B of real-valued measurable functions defined on E, endowed with the sup norm ||·||, and denote by B its Borel σ-algebra. Let there be given a family of bounded contraction operators D(x), x ∈ E, defined on B, such that the maps x ↦ D(x)ϕ, from E to B, are E -measurable for every ϕ ∈ B. Denote by I the identity operator on B. For a discrete generator Q on B, let ΠB = N(Q) be the null space, and (I − Π)B = R(Q) the range space, of the operator Q. We will suppose here that the Markov chain (x_n, n ∈ IN), with discrete generator Q = P − I, is uniformly ergodic, that is, ||(P^n − Π)ϕ|| → 0, as n → ∞, for any ϕ ∈ B. In that case, the transition operator is reducible-invertible on B. Thus, we have B = N(Q) ⊕ R(Q), the direct sum of the two subspaces. The domain of an operator A on B is D(A) := {ϕ ∈ B : Aϕ ∈ B}.
The discrete-time semi-Markov random evolution is defined by Φ_k := D(x_{ν_k}) D(x_{ν_k − 1}) ··· D(x_1), for k ≥ 1, with Φ_0 = I; that is, an operator is applied at each jump of the EMC. The process M_k := Φ_k − I − ∑_{l=0}^{k−1} E[Φ_{l+1} − Φ_l | F_l],  (17) on B, is an F_k-martingale. Let us now define the average random evolution u_k(x), x ∈ E, k ∈ IN, by u_k(x) := E_x[Φ_k ϕ(z_k)], for a fixed ϕ ∈ B. Theorem 3. The random evolution u_k(x) satisfies the following Markov renewal equation: u_k(x) = F̄_x(k)ϕ(x) + ∑_{l=1}^{k} ∫_E q(x, dy, l) D(y) u_{k−l}(y), where F̄_x(k) := P_x(τ_1 > k).
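The Markov renewal equation of Theorem 3 can be checked numerically in the scalar case B = IR, where each D(x) is multiplication by a number d(x). The renewal-equation form used below, u_k(x) = F̄_x(k)ϕ(x) + ∑_{l=1}^{k} ∑_y P(x, y) f_x(l) d(y) u_{k−l}(y), is our reconstruction of the display above (the original was lost in extraction), and the two-state example is entirely hypothetical; we compare the recursion with a direct Monte Carlo estimate of E_x[Φ_k ϕ(z_k)].

```python
import random

# Scalar-case check of the Markov renewal equation (reconstruction):
# D(x) = multiplication by d(x), one factor applied at each jump.
P = [[0.3, 0.7], [0.5, 0.5]]
f = [{1: 0.5, 3: 0.5}, {1: 0.6, 2: 0.4}]
d = [0.9, 1.1]
phi = [1.0, 1.0]
K = 6

def Fbar(x, k):
    """Survival function P_x(tau_1 > k)."""
    return sum(p for l, p in f[x].items() if l > k)

u = [[0.0, 0.0] for _ in range(K + 1)]   # u[k][x]
u[0] = phi[:]
for k in range(1, K + 1):
    for x in range(2):
        s = Fbar(x, k) * phi[x]          # no jump up to time k
        for l, pl in f[x].items():
            if l <= k:                   # first jump at time l to state y
                s += pl * sum(P[x][y] * d[y] * u[k - l][y]
                              for y in range(2))
        u[k][x] = s

# Monte Carlo estimate of E_0[Phi_K phi(z_K)].
rng = random.Random(3)
def draw(dist):
    r, acc = rng.random(), 0.0
    for key, p in dist.items():
        acc += p
        if r < acc:
            return key
    return key

N, tot = 100_000, 0.0
for _ in range(N):
    x, tau, prod = 0, 0, 1.0
    while True:
        tau += draw(f[x])                # sojourn in the current state
        if tau > K:
            break
        x = 0 if rng.random() < P[x][0] else 1
        prod *= d[x]                     # factor D(x_n) at each jump
    tot += prod * phi[x]                 # Phi_K phi(z_K) on this path
mc = tot / N
```

The recursion and the simulation agree to Monte Carlo accuracy, which supports the first-jump decomposition behind the reconstructed equation.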

Average and Diffusion Approximation with Merging
In this section we present average and diffusion approximation results for the discrete-time semi-Markov random evolution, as well as diffusion approximation with equilibrium or normal deviation.

Let us consider the continuous-time process Φ^ε(t) := Φ^ε_{[t/ε]}, t ≥ 0.
We will prove here asymptotic results for this process as ε → 0. The following assumptions are needed for averaging.
A1: The MC (z_k, γ_k, k ∈ IN) is uniformly ergodic in each class E_j, with ergodic distribution π_j(B × {k}), B ∈ E ∩ E_j, k ∈ IN, and the projector operator Π is defined by relation (10).
A2: The moments m_2(x), x ∈ E, are uniformly integrable, that is, relation (4) holds for r = 2.
A3: The perturbed operators D^ε(x) have the following representation in B: D^ε(x) = I + εD_1(x) + εD^ε_0(x), where the operators D_1(x) on B are closed and B_0 := ∩_{x∈E} D(D_1(x)) is dense in B, B̄_0 = B. The operators D^ε_0(x) are negligible, that is, lim_{ε→0} ||D^ε_0(x)ϕ|| = 0 for ϕ ∈ B_0.
A4: We have ∫_E π(dx) ||D_1(x)ϕ||² < ∞.
A5: There exist Hilbert spaces H and H* that are compactly embedded in the Banach spaces B and B*, respectively, where B* is the dual space of B.
A6: The operators D^ε(z) and (D^ε)*(z) are contractive on the Hilbert spaces H and H*, respectively.
We note that if B = C_0(IR), the space of continuous functions on IR vanishing at infinity, then H = W^{l,2}(IR) is a Sobolev space, W^{l,2}(IR) ⊂ C_0(IR), and this embedding is compact (see References [34,43]). For the spaces B = L_2(IR) and H = W^{l,2}(IR) the situation is the same.

Theorem 4.
Under assumptions A1-A6 and C1-C4 the following weak convergence takes place: Φ^ε(t) ⇒ Φ̂(t), as ε → 0, where the limit random evolution Φ̂(t) is determined by the generator ÎL, acting on test functions ϕ(x, v(x)). The operator Q_1 is defined by relation (23). Let us consider the average random evolution defined as Λ_x(t) := E_x[Φ̂(t)ϕ(u)], x ∈ E. Set D̂_1Π = ΠD_1Π and Q̂Π = ΠQ_1Π. For a detailed description of the operator Q̂ see Theorem 1. Then we have the following straightforward result. Corollary 1. The average random evolution Λ_x(t) satisfies the following Cauchy problem:

Diffusion Approximation
For the diffusion approximation we will consider a different time scaling and some additional assumptions. In this case, we replace relation (7) by an analogous representation adapted to the diffusion time scale.
D1: The perturbed operators D^ε(x) have the following second-order representation in B, where the operators D_2(x) on B are closed and B_0 := ∩_{x∈E} D(D_2(x)) is dense in B, B̄_0 = B; the operators D^ε_0(x) are negligible, that is, lim_{ε↓0} ||D^ε_0(x)ϕ|| = 0.
D2: The following balance condition holds.
D3: The moments m_3(x), x ∈ E, are uniformly integrable, that is, relation (4) holds for r = 3.
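The displays lost in this subsection can be sketched in the standard diffusion-approximation form; the ε² scalings and the shape of the balance condition below are plausible reconstructions (assumptions on our part), not the paper's exact statements:

```latex
% Time scaling of the perturbed kernel, replacing relation (7):
P^{\varepsilon}(x, dy) = P(x, dy) + \varepsilon^{2} P_{1}(x, dy).
% D1: second-order expansion of the perturbed operators on B:
D^{\varepsilon}(x) = I + \varepsilon D_{1}(x) + \varepsilon^{2} D_{2}(x)
                   + \varepsilon^{2} D^{\varepsilon}_{0}(x).
% D2: balance condition:
\Pi\, D_{1}(x)\, \Pi = 0, \qquad x \in E.
```

Under this balance condition the first-order drift vanishes after averaging, which is what makes the longer time scale, and hence a diffusion limit, appear.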

Theorem 5.
Under Assumptions A1, A5-A6 and D1-D3, the following weak convergence takes place: Φ^ε(t) ⇒ Φ^0(t), as ε → 0, where the limit random evolution Φ^0(t) is a diffusion random evolution determined by the generator ÎL, and where the operator Q̂ is defined in Theorem 1.

Normal Deviation with Merging
We note that the averaged semi-Markov random evolution can be considered as the first approximation to the initial evolution. The diffusion approximation of the semi-Markov random evolution determines the second approximation to the initial evolution, since the first approximation under the balance condition turns out to be trivial.
Here we consider the algorithms for the construction of the first and the second approximations in the case when the balance condition of the diffusion approximation scheme is not fulfilled. We introduce the deviated semi-Markov random evolution as the normalized difference between the initial and the averaged evolutions. In the limit, we obtain the diffusion approximation with equilibrium of the initial evolution from the averaged one.
Let us consider the discrete-time semi-Markov random evolution Φ^ε_{[t/ε]}, the averaged evolution Φ̂(t) (see Section 5.1), and the deviated evolution Ψ^ε_t := ε^{−1/2}[Φ^ε_{[t/ε]} − Φ̂(t)]. Theorem 6. Under Assumptions A1, A5-A6 and D3, with the operators D^ε(x) as in A3 instead of D1, the deviated semi-Markov random evolution Ψ^ε_t converges weakly, as ε → 0, to the diffusion random evolution Ψ^0_t defined by the following generator, where the operator Q̂ is defined in Theorem 1.

Application to Particular Systems
In this section we will apply the above Theorems 4-6 to obtain limit results for particular stochastic systems, namely, additive functionals of semi-Markov chains and discrete-time dynamical systems perturbed by semi-Markov chains.

Integral Functionals
The integral functional of a semi-Markov chain considered here is defined by y_k := u + ∑_{l=1}^{k} a(z_l), k ∈ IN, where a is a real-valued measurable function defined on the state space E.

Average Approximation
In the averaging scheme the additive functional has the representation y^ε_t := u + ε ∑_{k=1}^{[t/ε]} a(z^ε_k).  (28) Theorem 7. Under conditions C1-C4 and A1-A2 the following weak convergence holds: y^ε_t ⇒ ŷ_t, as ε → 0, where the limit process is an integral functional, defined by ŷ_t := u + ∫_0^t â(x̂_s)ds, with â(j) = ∫_{E_j} π_j(dx) a(x). The Markov process x̂_t is defined on the state space Ê, as in the previous section, by the generator Q̂ defined in Theorem 1. It is worth noticing here that the initial processes (28) are switched by an SMC, while the limit process is switched by a continuous-time Markov process on a finite state space, which is much simpler.
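The averaging statement can be illustrated by simulation in the simplest case of a single ergodic class (no merging), where the limit is deterministic: y^ε_t → u + t·â with â = ∫ π(dx) a(x). The two-state chain below is a hypothetical example; for it π = (10/17, 7/17), so â = 3/17.

```python
import random

# Averaging check in the single-class (no merging) case, with a
# hypothetical two-state chain: starting from u = 0,
#   y^eps_t = eps * sum_{k <= [t/eps]} a(z^eps_k)  ->  t * a_hat,
# where a_hat = sum_x pi(x) a(x).
P = [[0.3, 0.7], [0.5, 0.5]]
f = [{1: 0.5, 3: 0.5}, {1: 1.0}]       # mean sojourns m = (2, 1)
a = [1.0, -1.0]
m = [2.0, 1.0]

rho = [0.5, 0.5]                       # stationary law of the EMC
for _ in range(200):
    rho = [sum(rho[y] * P[y][x] for y in range(2)) for x in range(2)]
w = [rho[x] * m[x] for x in range(2)]
pi = [wx / sum(w) for wx in w]         # pi = (10/17, 7/17)
a_hat = sum(pi[x] * a[x] for x in range(2))

rng = random.Random(7)
def draw(dist):
    r, acc = rng.random(), 0.0
    for k, p in dist.items():
        acc += p
        if r < acc:
            return k
    return k

t, steps = 1.0, 50_000
eps = t / steps
y, x = 0.0, 0
gamma, stay = 0, draw(f[0])
for _ in range(steps):
    y += eps * a[x]                    # eps * a(z_k)
    gamma += 1
    if gamma == stay:                  # jump of the semi-Markov chain
        x = 0 if rng.random() < P[x][0] else 1
        gamma, stay = 0, draw(f[x])
```

With the merging of Section 3 the only change is that â is switched by the merged Markov process x̂_t instead of being a single constant.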

Diffusion Approximation
In the diffusion approximation the additive functional is time-rescaled to the scale [t/ε²]. Then we have the following result.

Theorem 8.
Under conditions C1-C4, A1-A6 and D1-D2 the following weak convergence holds: y^ε_t ⇒ ξ^0_t, as ε → 0, where the limit process ξ^0_t is a switched diffusion process. The process W_t, t ≥ 0, is a standard Brownian motion, and b_2(j) := â_0(j) − (1/2)â_2(j), where the coefficients â_0(j) and â_2(j) are the corresponding averaged characteristics with respect to π_j. It is worth noticing that the generator of the diffusion ξ^0(t) can be written explicitly; in fact, the limit process is switched by the merged Markov process x̂_t, defined in Theorem 1.

Normal Deviation
The diffusion approximation with equilibrium will be realized without the balance condition D2. Let us consider the stochastic processes ζ^ε_t := ε^{−1/2}(y^ε_t − ŷ_t), t ≥ 0, ε > 0, where ŷ_t is the limit process in the averaging scheme. Then we have the following weak convergence result.
Theorem 9. The process W_t, t ≥ 0, is a standard Brownian motion. The limit process here is also switched by the merged Markov process x̂_t.

Discrete Dynamical Systems
Let us consider the family of difference equations u^ε_{k+1} = u^ε_k + εC(u^ε_k, z_k), k ≥ 0, switched by the SMC (z_k). The perturbed operators D^ε(x), x ∈ E, are defined now by D^ε(x)ϕ(u) := ϕ(u + εC(u, x)).

Average Approximation
The time-scaled system considered here is u^ε_t := u^ε_{[t/ε]}. Theorem 10. Under conditions C1-C4 and A1-A2 the following weak convergence holds: u^ε_t ⇒ û_t, as ε → 0, where the limit process is a continuous-time dynamical system, defined by dû_t/dt = Ĉ(û_t, x̂_t), with Ĉ(u, j) := ∫_{E_j} π_j(dx) C(u, x).
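Theorem 10 can be illustrated in the simplest single-class setting with the hypothetical choice C(u, x) = a(x) − u: the averaged ODE is du/dt = â − u, with explicit solution û_t = â + (u_0 − â)e^{−t}, against which a path of the difference equation can be compared.

```python
import math
import random

# Single-class check of the averaged dynamical system with the
# hypothetical choice C(u, x) = a(x) - u.  The Euler-type system
# u_{k+1} = u_k + eps * C(u_k, z_k), observed at [t/eps], should
# approach u_hat(t) = a_hat + (u0 - a_hat) * exp(-t).
P = [[0.3, 0.7], [0.5, 0.5]]
f = [{1: 0.5, 3: 0.5}, {1: 1.0}]       # pi = (10/17, 7/17) for this chain
a = [1.0, -1.0]
a_hat = 3.0 / 17.0                     # sum_x pi(x) a(x)

rng = random.Random(11)
def draw(dist):
    r, acc = rng.random(), 0.0
    for k, p in dist.items():
        acc += p
        if r < acc:
            return k
    return k

t, steps, u0 = 1.0, 50_000, 2.0
eps = t / steps
u, x = u0, 0
gamma, stay = 0, draw(f[0])
for _ in range(steps):
    u += eps * (a[x] - u)              # u_{k+1} = u_k + eps*C(u_k, z_k)
    gamma += 1
    if gamma == stay:                  # jump of the semi-Markov chain
        x = 0 if rng.random() < P[x][0] else 1
        gamma, stay = 0, draw(f[x])

u_limit = a_hat + (u0 - a_hat) * math.exp(-t)   # averaged ODE solution
```

The fluctuation of the path around the averaged trajectory is of order √ε, which is exactly what the normal-deviation result of the next subsection quantifies.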

Diffusion Approximation
The time-scaled dynamical system considered here runs on the diffusion time scale. Theorem 11. Under conditions C1-C4, A1-A2 and D2 the following weak convergence holds, where the limit process is a switched diffusion process. The process W_t, t ≥ 0, is a standard Brownian motion, and c_2(j) := (1/2)Ĉ(u, j) + Ĉ_0(u, j) − Ĉ_2(u, j), where the coefficients Ĉ_0 and Ĉ_2 are the corresponding averaged characteristics with respect to π_j.

Normal Deviation
The time-scaled system considered here is the normalized deviation from the averaged system. Theorem 12. Under conditions C1-C4 and A1-A2 the following weak convergence holds, where the limit process is a switched diffusion process. The process W_t, t ≥ 0, is a standard Brownian motion.

Proofs
As the state space Ê of the switching process is a finite set, we need not treat the new component v(z_k) separately, and the tightness proofs of Reference [14] remain valid here. So, we will only prove here the convergence of the finite-dimensional distributions, which concerns the transition kernels.

Proof of Theorem 1
Let us consider the extended Markov renewal process (x^ε_n, v(x^ε_n), τ^ε_n), n ≥ 0. The compensating operator of this process is defined by the following relation (see Reference [1]). The compensating operator IL^ε, acting on test functions ϕ(x, v(x)), x ∈ E, can be written in an explicit form, and then, from (7), the operator IL^ε can be split into singular and regular parts. Now, by the following singular perturbation problem on test functions, and from Proposition 5.1 in Reference [1], we get the limit operator IL, whose contracted form ÎL, defined by the relation ΠQ_1Π = ÎLΠ, provides us directly with the generator Q̂ ≡ ÎL of the limit process x̂(t), and the proof is complete.
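The singular perturbation step used above follows the standard scheme of Reference [1]; a plausible reconstruction of the missing displays (the expansion and solvability condition below are assumptions, consistent with the relation ΠQ_1Π = ÎLΠ stated in the proof) is:

```latex
% Test functions are perturbed to first order:
\varphi^{\varepsilon} = \varphi + \varepsilon\,\varphi_{1},
\qquad \varphi \in \mathcal{N}(Q),
% and the compensating operator is required to satisfy
\mathrm{I\!L}^{\varepsilon}\varphi^{\varepsilon}
  = \widehat{\mathrm{I\!L}}\,\varphi
    + \varepsilon\,\theta^{\varepsilon}\varphi,
\qquad \theta^{\varepsilon}\ \text{negligible}.
% Equating powers of eps yields the solvability condition and corrector:
\Pi Q_{1} \Pi\,\varphi = \widehat{\mathrm{I\!L}}\,\Pi\varphi,
\qquad
\varphi_{1} = R_{0}\bigl(\widehat{\mathrm{I\!L}} - Q_{1}\bigr)\varphi ,
% where R_0 is the potential operator of Q.
```

The solvability condition is exactly the relation ΠQ_1Π = ÎLΠ used in the text, and the corrector ϕ_1 is well defined because ÎLϕ − Q_1ϕ lies in the range R(Q).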
Proof of Theorem 2 is the same as the previous one.

Proof of Theorem 3
We condition on the first jump time τ_1 of the semi-Markov chain: on {τ_1 > k} the evolution is the identity, while on {τ_1 = l ≤ k, x_1 = y} it restarts from y with the additional factor D(y); this gives the renewal equation, and the result follows.
The discrete generators of the four-component extended family of processes are treated in the same way. The asymptotic representation of the corresponding operator, acting on test functions ϕ(u, x, v(x), k), is given in terms of Q^ε := P^ε − I. Now, from (2), the transition operator P^ε can be written as P^ε = P + εQ_1, where the operator Q_1 is defined by relation (23).
The proof of Theorem 6 is similar to the previous ones. Finally, the proofs of Theorems 7-12 are obtained directly as corollaries of Theorems 4-6.

Concluding Remarks
In this paper, we presented semi-Markov random evolutions in reduced random media; the main results were given in Sections 4 and 5, and their applications to integral functionals of semi-Markov chains and to dynamical systems perturbed by semi-Markov chains were given in Section 6. These kinds of results should be extended to many other stochastic systems, such as hidden semi-Markov chains, controlled systems, epidemiological systems, and so forth.
For simplicity, we considered fixed initial conditions for the processes, that is, independent of ε. This is not a loss of generality, since we can handle ε-dependent initial conditions without any problem; see, for example, References [1,2].
It is worth noticing that the theory of discrete-time semi-Markov chains has to be developed further, in parallel with that of continuous-time semi-Markov processes, as is the case with discrete-time Markov chains and continuous-time Markov processes.
Author Contributions: Both authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.