1. Introduction
Random evolutions started to be studied in the 1970s because of their potential applications to biology, the movement of particles, signal processing, quantum physics, finance and insurance, among others (see [
1,
2,
3]). They allow modeling a situation in which the dynamics of a system are governed by various regimes, and the system switches from one regime to another at random times. Let us be given a state process
(with state space
) and a sequence of increasing random times
. In short, during each random time interval
, the evolution of the system will be governed by the state
. At the random time
, the system will switch to state
, and so on. Think for example of regime-switching models in finance, where the state process is linked with, e.g., different liquidity regimes, as in the recent article [
4]. In their work, the sequence
represents the random times at which the market switches from one liquidity regime to another.
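To fix ideas, here is a minimal simulation sketch of such a regime-switching mechanism in Python/NumPy. The two-state embedded chain, the exponential sojourn times (the Markov special case) and all numerical values are illustrative assumptions, not part of the models discussed in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state regime process: embedded Markov chain + random sojourn times.
P = np.array([[0.2, 0.8],                 # transition matrix of the embedded chain
              [0.6, 0.4]])
mean_sojourn = np.array([1.0, 0.5])       # state-dependent mean sojourn times (assumed)

def simulate_regimes(horizon, x0=0):
    """Return the jump times tau_k and the visited states x_k up to the horizon."""
    taus, states = [0.0], [x0]
    while taus[-1] < horizon:
        x = states[-1]
        taus.append(taus[-1] + rng.exponential(mean_sojourn[x]))  # next jump time
        states.append(rng.choice(2, p=P[x]))                      # next regime
    return np.array(taus), np.array(states)

taus, states = simulate_regimes(horizon=10.0)
t = 4.2
k = np.searchsorted(taus, t, side="right") - 1   # N(t): number of regime changes up to t
print(f"number of regime changes up to t={t}: {k}, current regime: {states[k]}")
```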
When random evolutions began to be studied [
1,
2,
3], the times
were assumed to be exponentially distributed and
was assumed to be a Markov chain, so that the process
is then a continuous-time Markov chain. Here, as in [
5], we let:
the number of regime changes up to time
t. In these first papers, limit theorems for the expectation of the random evolution were studied. Then, in 1984–1985, J. Watkins [
6,
7,
8] established weak law of large numbers and diffusion limit results for a specific class of random evolutions, where the times
are deterministic and the state process
is strictly stationary. In [
9,
10], A. Swishchuk and V. Korolyuk extended the work of J. Watkins to a setting where
is assumed to be a Markov renewal process, and obtained weak law of large numbers, central limit theorem and diffusion limit results. It can also be noted that they allowed the system to be “impacted” at each regime transition, which they modeled with the use of operators that we will denote
D in the following.
As we show in applications (
Section 6), these operators
D can be used for example to model an impact on the stock price at each regime transition due to illiquidity as in [
4], or to model situations where the stock price only evolves in a discrete way (i.e., only by successive “impacts”), such as in the recent articles [
11,
12] on high-frequency price dynamics and the modeling of price impact from distressed selling (see
Section 6.1 and
Section 6.3). For example, the price dynamics of the article [
11] can be seen as a specific case of random evolution for which the operators
D are given by:
where
f,
, and
h are some functions and
ϵ and
m some constants. In
Section 6.3, we generalize their diffusion limit result by using our tools and results on random evolutions (see [
9,
13,
14] for more details).
In the aforementioned literature on (limit theorems for) random evolutions [
6,
7,
8,
9,
10], the evolution of the system on each time interval
is assumed to be driven by a (time-homogeneous) semigroup
depending on the current state of the system
. A semigroup
U is a family of time-dependent operators satisfying $U(t)U(s) = U(t+s)$. For example, if Z is a time-homogeneous Markov process, then we can define the semigroup $U(t)f(z) := \mathbb{E}\left[f(Z_t) \mid Z_0 = z\right]$, where f is a bounded measurable function. The idea of a random evolution is therefore the following: during each time interval
, the system (e.g., price process) evolves according to the semigroup
(associated to the Markov process
in the case of a price process). At the transition
, the system will be impacted by an operator
modeling for example a drop in the price at what [
4] call a
liquidity breakdown. Then, on the interval
, the price will be driven by the semigroup
, and so on: it can be summarized as a regime-switching model with impact at each transition. We can write the previously described random evolution
in the following compact form (where the product applies on the right):
This time-homogeneous setting (embedded in the time-homogeneous semigroups
U) does not include, e.g., time-inhomogeneous Lévy processes (see
Section 6.1), or processes
Z solutions of stochastic differential equations (SDEs) of the form:
where
L is a vector-valued Lévy process and
a driving matrix-valued function (see
Section 6.2). The latter class of SDEs includes many popular models in finance, e.g., time-dependent Heston models. Indeed, these examples can be associated with time-inhomogeneous generalizations of semigroups, also called propagators. A (backward) propagator Γ (see
Section 2) is simply a family of operators satisfying $\Gamma(s,r)\Gamma(r,t) = \Gamma(s,t)$ for $s \leq r \leq t$. We then define, for a time-inhomogeneous Markov process Z, the propagator $\Gamma(s,t)f(z) := \mathbb{E}\left[f(Z_t) \mid Z_s = z\right]$.
The celebrated Chapman–Kolmogorov equation guarantees that this family indeed satisfies the propagator relation.
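A finite-state, discrete-time analogue makes the propagator composition rule concrete: for a time-inhomogeneous Markov chain, the two-parameter transition matrices compose exactly as a backward propagator, and the Chapman–Kolmogorov relation is just matrix multiplication. The sketch below uses randomly generated one-step kernels purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_stochastic_matrix(n):
    """A random row-stochastic matrix, standing in for a one-step transition kernel."""
    M = rng.random((n, n))
    return M / M.sum(axis=1, keepdims=True)

n, T = 3, 10
kernels = [random_stochastic_matrix(n) for _ in range(T)]   # P_0, ..., P_{T-1}

def propagator(s, t):
    """Gamma(s, t): product of the one-step kernels from time s to t (Gamma(t, t) = I)."""
    G = np.eye(n)
    for u in range(s, t):
        G = G @ kernels[u]
    return G

s, r, t = 2, 5, 9
# Backward propagator property / Chapman-Kolmogorov: Gamma(s, r) Gamma(r, t) = Gamma(s, t).
print(np.allclose(propagator(s, r) @ propagator(r, t), propagator(s, t)))
```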
Let us explain here the main ideas behind the main results of the paper, Theorems 8 (LLN) and 11 (FCLT), using one of the applications, namely, a regime-switching inhomogeneous Lévy-based stock price model, very similar in spirit to the recent article [
4]. In short, an inhomogeneous Lévy process differs from a classical Lévy process in the sense that it has time-dependent (and absolutely continuous) characteristics. We let
be a collection of such
-valued inhomogeneous Lévy processes with characteristics
, and we define:
for some bounded function α. We give in
Section 6 a financial interpretation of this function α, as well as the reasons why we consider a regime-switching model. In this setting,
f represents a contingent claim on a (
d-dimensional) risky asset
S having regime-switching inhomogeneous Lévy dynamics driven by the processes
: on each random time interval
, the risky asset is driven by the process
. Indeed, we have the following representation, for
(to make clear that the expectation below is taken with respect to
ω embedded in the process
L and not
):
where we have denoted for clarity:
The random evolution
represents in this case the present value of the contingent claim
f of maturity
t on the risky asset
S, conditionally on the regime switching process
: indeed, remember that
is random, and that its randomness (only) comes from the Markov renewal process. Our main results, Theorems 8 and 11, allow us to approximate the impact of the regime switching on the present value
of the contingent claim. Indeed, we get the following normal approximation, for small
ϵ:
The above approximation allows one to quantify the risk inherent in regime switches occurring at a high frequency governed by ϵ. The parameter ϵ reflects the frequency of the regime switches and can therefore be calibrated to market data by the risk manager. For market practitioners, because of the computational cost, it is often convenient to have asymptotic formulas that allow them to approximate the present value of a given derivative, and by extension the value of their whole portfolio. In addition, the asymptotic normal form of the regime-switching cost allows the risk manager to derive approximate confidence intervals for their portfolio, as well as other quantities of interest such as reserve policies linked to a given model.
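As a schematic illustration of that last point, the sketch below converts an assumed limiting present value and an assumed limiting standard deviation of the CLT correction into an approximate 95% confidence interval; the √ϵ scaling and every numerical value here are placeholders, not outputs of Theorems 8 and 11.

```python
import numpy as np

# Hypothetical inputs: in practice these would be computed from the limiting
# process of Theorem 8 (LLN) and the limiting variance of Theorem 11 (FCLT).
pv_limit = 10.0      # limiting present value of the claim (placeholder)
sigma_limit = 1.5    # limiting std. dev. of the regime-switching correction (placeholder)
eps = 0.01           # frequency parameter of the regime switches

# Normal approximation assumed here: PV^eps ~ pv_limit + sqrt(eps) * N(0, sigma_limit^2).
z_975 = 1.959964     # 97.5% quantile of the standard normal distribution
half_width = z_975 * np.sqrt(eps) * sigma_limit
print(f"approximate 95% CI: [{pv_limit - half_width:.4f}, {pv_limit + half_width:.4f}]")
```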
Therefore, the goal of this paper is to focus on limit theorems for so-called time-inhomogeneous random evolutions, i.e., random evolutions that we construct with propagators, and not with (time-homogeneous) semigroups, in order to widen the range of possible applications, as mentioned above. We call these random evolutions inhomogeneous random evolutions (IHREs).
The notion of “time-inhomogeneity” in fact appears twice in our framework: in addition to being constructed with propagators, random evolutions are driven by time-inhomogeneous semi-Markov processes (using results from [
5]). This general case has not been treated before. Even if we do not have the same goal in mind, our methodology is similar—in spirit—to the recent article [
15] on the comparison of time-inhomogeneous Markov processes, or to the article [
16] on time-inhomogeneous affine processes. We say "similar" because we are all in a position where results for the time-homogeneous case exist, and our goal is to develop a rigorous treatment of the time-inhomogeneous case. It is interesting to see that, even if the topics we deal with are different, the "philosophy" and "intuition" behind them are similar in many ways.
The paper is organized as follows:
Section 2 is devoted to propagators. We introduce the concept of regular propagators, which we characterize as unique solutions to well-posed Cauchy problems: this is of crucial importance for both our main weak law of large numbers (WLLN) and central limit theorem (CLT) results, in order to get the uniqueness of the limiting process. In
Section 3, we introduce inhomogeneous random evolutions and present some of their properties. In
Section 4 and
Section 5, we prove, respectively, a WLLN and a CLT, which are the main results of the paper (Theorems 8 and 11). In particular, for the CLT, we obtain a precise (and new) characterization of the limiting process using weak Banach-valued stochastic integrals and so-called orthogonal martingale measures: this result—to the best of our knowledge—has not even been obtained in the time-homogeneous case. In
Section 6, we present financial applications to illiquidity modeling using regime-switching time-inhomogeneous Lévy price dynamics and regime-switching Lévy driven diffusion based price dynamics. We also present a generalized version of the multi-asset model of price impact from distressed selling introduced in the recent article [
11], for which we retrieve (and generalize) their diffusion limit result for the price process. We would like to mention that copula-based multivariate semi-Markov models with applications in high-frequency finance are considered in [
17], which is related to a generalization of semi-Markov processes and their applications.
2. Propagators
This section aims at presenting a few results on propagators, which are used in the sequel. Most of them (as well as the corresponding proofs) are similar to what can be found in [
18] Chapter 5, [
19] Chapter 2 or [
15], but, to the best of our knowledge, they do not appear in the literature in the form presented below. In particular, the main result of this section is Theorem 4, which characterizes so-called regular propagators as unique solutions to well-posed Cauchy problems.
Let
be a real separable Banach space. Let
be the dual space of
Y.
is assumed to be a real separable Banach space which is continuously embedded in
Y (this idea was used in [
18], Chapter 5), i.e.,
and
:
. Unless mentioned otherwise, limits are taken in the
Y-norm, normed vector spaces are equipped with the norm topology and subspaces of normed vector spaces are equipped with the subspace topology. Limits in the
norm are denoted
, for example. In the following,
J refers to either
or
for some
and
. In addition, let, for
,
and
. We start with a few introductory definitions:
Definition 1. A function Γ, defined for pairs $(s,t)$ with $s \leq t$ in J and taking values in the bounded linear operators on Y, is called a Y-(backward) propagator if:
- (i)
$\Gamma(t,t) = I$ for every $t \in J$; and
- (ii)
$\Gamma(s,r)\Gamma(r,t) = \Gamma(s,t)$ whenever $s \leq r \leq t$.
If, in addition, $\Gamma(s,t) = \Gamma(0,t-s)$ for all $s \leq t$, Γ is called a Y-semigroup.
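A scalar illustration (assumed here, not taken from the paper): the two-parameter solution operator of a linear ODE with a time-dependent coefficient is a propagator, and it fails to be a semigroup unless the coefficient is constant. In the scalar case the backward and forward composition rules coincide, since the factors commute.

```python
import numpy as np
from scipy.integrate import quad

a = lambda u: np.sin(u)        # time-dependent coefficient (illustrative choice)

def gamma(s, t):
    """Solution operator Gamma(s, t) = exp(integral of a over [s, t]) on Y = R."""
    integral, _ = quad(a, s, t)
    return np.exp(integral)

s, r, t = 0.3, 1.0, 2.0
print(np.isclose(gamma(s, r) * gamma(r, t), gamma(s, t)))   # (ii): composition property
print(np.isclose(gamma(t, t), 1.0))                         # (i): Gamma(t, t) = identity
print(np.isclose(gamma(s, t), gamma(0.0, t - s)))           # False: not a semigroup here
```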
Note that we focus our attention on backward propagators, as many applications only fit the backward case, as shown below. Forward propagators differ from backward propagators in that the composition rule of (ii) holds with the order of composition reversed, i.e., $\Gamma(r,t)\Gamma(s,r) = \Gamma(s,t)$. We now introduce the generator of the propagator:
Definition 2. For , define: Define similarly for :and define similarly to . Let . Then, is called the infinitesimal generator of the Y-propagator Γ.
In the following definitions, which deal with continuity and boundedness of propagators, and represent Banach spaces such that (possibly ).
Definition 3. A -propagator Γ is -bounded if . It is a -contraction if . It is -locally bounded if for every compact .
Definition 4. Let . A -propagator Γ
is -strongly continuous if , :When , we simply write that it is F-strongly continuous. We use the terminologies
t-continuity and
s-continuity for the continuity of the partial applications. According to [
19], strong joint continuity is equivalent to strong separate continuity together with local boundedness of the propagator.
Definition 5. Let . The generator AΓ or the -propagator Γ
is -strongly continuous if , : When , we simply write that it is F-strongly continuous.
The following results give conditions under which the propagator is differentiable in s and t.
Theorem 1. Let Γ
be a Y-propagator. Assume that , . Then: If, in addition, Γ
is -strongly s-continuous, -strongly t-continuous, then: Proof of Theorem 1. Let
,
.
since
. We have for
:
Let
:
the last inequality holding because
:
. We apply the uniform boundedness principle to show that
.
is Banach. We have to show that
:
. Let
. We have
since
.
. Then, by
-strong
t-continuity of Γ,
. Let
. Then, we get
.
Further, by -strong s-continuity of Γ, . Finally, since , .
Therefore, we get for , which shows that for . □
Theorem 2. Let Γ
be a Y-propagator. Assume that . Then, we have: If, in addition, Γ
is Y-strongly t-continuous, then we have: Proof of Theorem 2. Let
,
. We have:
For
:
:
since
. Therefore,
. Now, if
:
Since , . By Y-strong t-continuity of Γ: . By the principle of uniform boundedness together with the Y-strong t-continuity of Γ, we have . Therefore, we get for , which shows for . □
In general, we want to use the evolution equation: ; therefore, we need that is in . The following result gives sufficient conditions under which this is the case.
Theorem 3. Assume that Theorem 2 holds true, i.e., , and , . Then, , : Proof of Theorem 3. Let
,
. First,
as the derivative of
. By the principle of uniform boundedness together with the
Y-strong
t-continuity of Γ, we have
. We then observe that for
:
□
The following definition introduces the concept of a regular propagator, which in short means that the propagator is differentiable with respect to both variables and that its derivatives are integrable.
Definition 6. A Y-propagator Γ is said to be regular if it satisfies Theorems 1 and 2 and , , and are in .
Now, we are ready to characterize a regular propagator as the unique solution of a well-posed Cauchy problem, which is needed in the sequel. Note that the proof of the theorem below requires that Γ satisfies both Theorems 1 and 2 (hence, our above definition of regular propagators). The next result is the initial value problem statement for operator
Theorem 4. Let AΓ be the generator of a regular Y-propagator Γ
and , . A solution operator to the Cauchy problem:is said to be regular if it is Y-strongly continuous. If G is such a regular solution, then we have , , . Proof of Theorem 4. Let
,
. Consider the function
. We show that
and therefore that
. We have for
:
Let
. We have:
We have:
(1) → 0 as G satisfies the initial value problem and .
(2) → 0 as .
(3) → 0 by Y-strong continuity of G.
Further, by the principle of uniform boundedness together with the
Y-strong continuity of
G, we have
. We therefore get
. Now, for
:
By the principle of uniform boundedness together with the Y-strong t-continuity of G, we have . Furthermore:
(4) → 0 as G satisfies the initial value problem and .
(5) → 0 as .
(6) → 0 by Y-strong continuity of G.
We therefore get . □
The following corollary expresses the fact that equality of generators implies equality of propagators.
Corollary 1. Assume that and are regular Y-propagators and that , , . Then, , : . In particular, if is dense in Y, then .
We conclude this section with a second-order Taylor formula for propagators. Let:
Theorem 5. Let Γ
be a regular Y-propagator, . Assume that , and . Then, we have for : Proof of Theorem 5. Since Γ is regular and
f,
and
is integrable on
we have by 3:
□
3. Inhomogeneous Random Evolutions: Definitions and Properties
Let
be a complete probability space,
a finite set and
an inhomogeneous Markov renewal process on it, with associated inhomogeneous semi-Markov process
(as in [
5]). In this section, we use the same notations for the various kernels and cumulative distribution functions (
,
, etc.) as in [
5] on inhomogeneous Markov renewal processes. Throughout the section, we assume that the inhomogeneous Markov renewal process
is regular (cf. definition in [
5]), and that
for all
. Further assumptions on it are made below. We define the following random variables on
, for
:
the number of jumps on : ;
the jump times on : for , and ; and
the states visited by the process on : , for .
Consider a family of
Y-propagators
, with respective generators
, satisfying:
as well as a family
of
-contractions, satisfying:
We define the inhomogeneous random evolution in the following way:
Definition 7. The function defined pathwise by:is called a -inhomogeneous Y-random evolution, or simply an inhomogeneous Y-random evolution. V is said to be continuous (respectively, purely discontinuous) if (respectively, ), . V is said to be regular (respectively, -contraction) if are regular (respectively, -contraction).
Remark 1. In the latter definition, we use as conventions that , that is, the product operator applies the product on the right. Further, if , then . If , then and . Therefore, in all cases, . By Proposition 2 below, if , we see that V is continuous, and if (and therefore ), we see that V has no continuous part, hence Definition 7.
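To illustrate Definition 7 in the simplest possible finite-dimensional setting, the following sketch composes a random evolution pathwise on Y = R²: the propagators are matrix exponentials of regime-dependent generators and the jump operators D are regime-dependent matrices applied at each transition (product on the right). Every ingredient (generators, jump matrices, trajectory) is an illustrative assumption.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative regime-dependent generators and jump ("impact") operators on Y = R^2.
A = [np.array([[-1.0, 1.0], [0.5, -0.5]]),
     np.array([[-0.2, 0.2], [2.0, -2.0]])]
D = [0.98 * np.eye(2), 1.01 * np.eye(2)]

def gamma(x, s, t):
    """Propagator of regime x on [s, t]; taken time-homogeneous here for simplicity."""
    return expm((t - s) * A[x])

def random_evolution(f, t, taus, states):
    """V(t) f along one trajectory of jump times/regimes, multiplying on the right."""
    V = np.eye(2)
    k = np.searchsorted(taus, t, side="right") - 1
    for i in range(k):
        V = V @ gamma(states[i], taus[i], taus[i + 1]) @ D[states[i + 1]]
    return V @ gamma(states[k], taus[k], t) @ f    # last, incomplete sojourn interval

taus = np.array([0.0, 0.7, 1.9, 3.2, 5.0])          # one simulated trajectory (assumed)
states = np.array([0, 1, 0, 1, 0])
print(random_evolution(np.array([1.0, 0.0]), t=2.5, taus=taus, states=states))
```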
Since our main object of study is a stochastic process, we have to establish the measurability of this process. Thus, we have the following measurability result:
Proposition 1. For , , the stochastic process is adapted to the (augmented) filtration: Proof of Proposition 1. Let
,
,
. We have:
Denote the
measurable (by construction) function
. Since
, remark that
and is therefore
measurable. Therefore,
. Let:
since
, and for
, let the sigma-algebra
(
is a sigma-algebra on
since
). Now, consider the map
:
Therefore, it remains to show that
, since
. First, let
. Notice that
, where:
The previous mapping holding since
,
.
is measurable iff each one of the coordinate mappings are. The canonical projections are trivially measurable. Let
,
. We have:
Now, by measurability assumption, we have for
:
Therefore,
is
measurable. Define for
:
Again, the canonical projections are trivially measurable. We have for
:
Now, by measurability assumption,
,
:
which proves the measurability of
. Then, we define for
:
By measurability assumption,
,
:
which proves the measurability of
. Finally, define the canonical projection:
which proves the measurability of
. For
, we have
and the proof is similar. □
The following result characterizes an inhomogeneous random evolution as a propagator, shows that it is right-continuous and that it satisfies an integral representation (which is used extensively below). It also clarifies why we use the terminology “continuous inhomogeneous Y-random evolution” when .
Proposition 2. Let V be an inhomogeneous Y-random evolution and , . Then, is a Y-propagator. If we assume that V is regular, then we have on Ω the following integral representation: Further, is Y-strongly RCLL on , i.e., , . More precisely, we have for : where we denote .
Proof of Proposition 2. The fact that
is straightforward from the definition of
V and using the fact that
are
-contractions. We can also obtain easily that
V is a propagator by straightforward computations. We now show that
is
Y-strongly continuous on each
,
and
Y-strongly RCLL at each
,
. Let
such that
.
, we have:
Therefore, by
Y-strong
t-continuity of Γ, we get that
is
Y-strongly continuous on
. If
, the fact that
has a left limit at
also comes from the
Y-strong
t-continuity of Γ:
Therefore, we get the relationship:
To prove the integral representation, let
,
,
. We proceed by induction and show that
, we have
:
For
, we have
:
, and therefore
by regularity of Γ. Now, assume that the property is true for
, namely:
. We have:
Therefore, by continuity of the Bochner integral, it follows that:
Now,
we have that:
and therefore
, by Theorem 2 and regularity of Γ:
Further, we already proved that
. Therefore, combining these results we have:
□
4. Weak Law of Large Numbers
In this section, we introduce a rescaled random evolution
Vϵ, in which time is rescaled by a small parameter
ϵ. The main result of this section is Theorem 8 in
Section 4.5. To prove the weak convergence of
Vϵ to some regular propagator
, we prove in
Section 4.3 that
Vϵ is relatively compact, which informally means that, for any sequence
, there exists a subsequence
along which
converges weakly. To show the convergence of
Vϵ to
, we need to show that all limit points of the latter
are equal to
. To prove relative compactness, we need among other things that
Vϵ satisfies the so-called compact containment criterion (CCC), which in short requires that for every
,
remains in a compact set of
Y with an arbitrarily high probability as
. This compact containment criterion is the topic of
Section 4.2.
Section 4.1 introduces the rescaled random evolution
Vϵ as well as some regularity assumptions (condensed in Assumptions 1,
Section 4.1, for spaces and operators under consideration, and in Assumptions 2,
Section 4.1, for the semi-Markov processes) which will be assumed to hold throughout the rest of the paper. It also reminds the reader of some definitions and results on relative compactness in the Skorohod space, which are mostly taken from the well-known book [
20]. Finally, the main WLLN result, Theorem 8, is proved using a martingale method similar in spirit to what is done in [
10] (Chapter 4, Section 4.2.1) for time-homogeneous random evolutions. This method is here adapted rigorously to the time-inhomogeneous setting: this is the topic of
Section 4.4. The martingale representation presented in Lemma 5 of
Section 4.4 will be used in the
Section 5 to prove a CLT for time-inhomogeneous random evolutions.
4.1. Preliminary Definitions and Assumptions
In this section, we prove a weak law of large numbers for inhomogeneous random evolutions. We rescale both time and the jump operators
D in a suitable way by a small parameter
ϵ and study the limiting behavior of the rescaled random evolution. To this end, in the same way as we introduced inhomogeneous
Y-random evolutions, we consider a family
of
-contractions, satisfying
:
and let
. We define:
and
:
The latter operators correspond, in short, to the (first-order) derivatives of the operators
Dϵ with respect to
ϵ. We need them in the following to be able to use the expansion
, which proves useful when proving limit theorems for random evolutions. In the same way, we introduce
, corresponding to the second derivative. We also let:
For
, remembering the definition of
in (
38), we let:
and for
,
:
Here,
simply indicates that the limit is taken in the
norm. We also introduce the space
on which we are mostly working:
Throughout this section, we make the following set of regularity assumptions, which we first state and then comment on immediately afterwards. We recall that the various notions of continuity and regularity are defined in
Section 2.
Assumptions 1. Assumptions on the structure of spaces:
- 1.
The subset contains a countable family which is dense in both and Y.
- 2.
.
Assumptions on the regularity of operators:
- 1.
are regular Y-propagators.
- 2.
is -strongly continuous, .
Assumptions on the boundedness of operators:
- 1.
are -exponentially bounded, i.e., such that , for all , .
- 2.
and , .
- 3.
, , , for all .
- 4.
.
- 5.
, .
- 6.
, .
Assumptions 2. Assumptions on the semi-Markov process:
- 1.
(ergodicity) Assumptions from [5] hold true for the function , so that:
- 2.
(uniform boundedness of sojourn increments) such that:
- 3.
(regularity of the inhomogeneous Markov renewal process) The conditions for are satisfied (see [5]), namely: there exists and such that:
Let us make a few comments on the previous assumptions. The assumptions regarding the regularity of operators mainly ensure that we can use the results obtained on propagators in
Section 2, for example Theorem 5. The (strong) continuity of
also proves to be useful when working with convergence in the Skorokhod space. The assumptions on the boundedness of operators are used to show that various quantities converge well. Finally, regarding the assumptions on the semi-Markov process, the almost sure convergence of
as
is used very often. It is one of the fundamental requirements for the work below. The uniform boundedness of the sojourn increments is a mild assumption in practice. It might be possible to weaken it, but the proofs would become heavier, for example because the jumps of the martingales introduced below would no longer be uniformly bounded.
Notation: In the following, we let for
,
and
(their existence is guaranteed by Assumption 1):
We also let if and if . In the latter case, it is assumed that . Similarly, we let for : .
We now introduce the rescaled random evolution, with the notation :
Definition 8. Let V be an inhomogeneous Y-random evolution. We define (pathwise on Ω
) the rescaled inhomogeneous Y-random evolution Vϵ for , by:
Remark 2. We notice that Vϵ is well-defined since on Ω
: and that it coincides with V for , i.e., .
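The effect of the time rescaling can be previewed in a scalar toy example: rescaling time by ϵ and using jump impacts of the form 1 + ϵ d(x) (an illustrative choice mirroring the expansion of Dϵ around the identity), the rescaled product loses its randomness as ϵ → 0 and concentrates around the regime-averaged exponential, which is exactly the averaging phenomenon captured by the WLLN below. All ingredients are assumptions made for the sake of the illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy scalar rescaled evolution: regimes alternate 0, 1, 0, 1, ... with unit-mean
# exponential sojourns, and each transition multiplies the value by 1 + eps * d(x).
d = np.array([0.3, -0.1])

def rescaled_evolution(t, eps):
    """Product of the impacts collected over [0, t/eps] (time rescaled by eps)."""
    value, clock, x = 1.0, 0.0, 0
    while clock < t / eps:
        clock += rng.exponential(1.0)    # sojourn time in the current regime
        x = 1 - x                        # deterministic alternation of regimes (toy choice)
        value *= 1.0 + eps * d[x]
    return value

for eps in (0.1, 0.01, 0.001):
    sample = [rescaled_evolution(t=1.0, eps=eps) for _ in range(200)]
    print(eps, np.mean(sample), np.std(sample))
# As eps -> 0, the spread shrinks and the sample mean approaches exp(t * d.mean()),
# the deterministic regime-averaged limit predicted by the WLLN.
```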
Our goal is to prove, as in [
10], that, for each
f in some suitable subset of
Y,
—seen as a family of elements of
—converges weakly to some continuous limiting process
to be determined. To this end, we first prove that
is relatively compact with almost surely continuous weak limit points. This is equivalent to the notion of
C-tightness in [
21] (VI.3) because
topologized with the Prohorov metric is a separable and complete metric space (
Y being a separable Banach space), which implies that relative compactness and tightness are equivalent in
(by Prohorov’s theorem). Then, we identify the limiting operator-valued process
, using the results of [
14]. We first need some elements that can be found in [
10] (Section 1.4) and [
20] (Sections 3.8–3.11):
Definition 9. Let be a sequence of probability measures on a metric space . We say that converges weakly to ν, and write iff :
Definition 10. Let be a family of probability measures on a metric space . is said to be relatively compact iff for any sequence , there exists a weakly converging subsequence.
Definition 11. Let , be a family of stochastic processes with sample paths in . We say that is relatively compact iff is (in the metric space endowed with the Prohorov metric). We write that iff . We say that is C-relatively compact iff it is relatively compact and if ever , then X has a.e. continuous sample paths. If , we say that is -relatively compact (respectively, -C-relatively compact) iff is , .
Definition 12. Let , be a family of stochastic processes with sample paths in . We say that satisfies the compact containment criterion (CCC) if , , compact set such that: We say that satisfies the compact containment criterion in (-CCC), if , , CCC.
Theorem 6. Let , be a family of stochastic processes with sample paths in . is C-relatively compact iff it is relatively compact and , where:
Theorem 7. Let , be a family of stochastic processes with sample paths in . is relatively compact in iff:
- 1.
CCC
- 2.
, and a family of nonnegative random variables such that , , :where .
If is relatively compact, then the stronger compact containment criterion holds: , , compact set such that:
4.2. The Compact Containment Criterion
In order to prove relative compactness, we need to show that the compact containment criterion is satisfied. We give below some sufficient conditions under which this is the case, in particular for the space , which is used in many applications.
, which is used in many applications. In [
6], it is mentioned that there exists a compact embedding of a Hilbert space into
. Unfortunately, this is not true (to the best of our knowledge), and we show below in Proposition 4 how to overcome this problem. This latter proposition is applied in
Section 6 to the time-inhomogeneous Lévy case, and the corresponding proof can easily be recycled for many other examples.
Proposition 3. Assume that there exists a Banach space compactly embedded in Y, that , are -exponentially bounded (uniformly in ), and that are -contractions. Then, -CCC.
Proof. Let , , and assume for some . Let and , the Y-closure of the Z-closed ball of radius c. K is compact because of the compact embedding of Z into Y. Let . We have : . Therefore, and so . □
For example, we can consider the Rellich–Kondrachov compactness theorem: if is an open, bounded Lipschitz domain, then the Sobolev space is compactly embedded in , where and .
For the space
, there is no well-known such compact embedding, therefore we have to proceed differently. The result below is applied for the time-inhomogeneous Lévy case (see
Section 6), and the corresponding proof can easily be recycled for other examples.
Proposition 4. Let , . Assume that , , , , and that the family converges uniformly to 0 at infinity, is equicontinuous and is uniformly bounded. Then, -CCC.
Proof. Let
,
K the
Y-closure of the set:
is a family of elements of
Y that are equicontinuous, uniformly bounded and that converge uniformly to 0 at infinity by assumption. Therefore, it is well-known, using the Arzela–Ascoli theorem on the one-point compactification of
, that
is relatively compact in
Y and therefore that
K is compact in
Y. We have
:
□
4.3. Relative Compactness of
This section is devoted to proving that
is relatively compact. In the following we assume that
satisfies the compact containment criterion (see [
14]):
We first state an integral representation of , whose proof is the same as the proof of Proposition 2.
Lemma 1. Let Assumption 1 hold true. Let , . Then, Vϵ satisfies on Ω
: We now prove that is relatively compact.
Lemma 2. Let Assumptions 1 and 2 hold true. Then, is -relatively compact.
Proof. We use Theorem 7 to show this result. Using Lemma 1, we have for
:
where
by Assumption 1. Now, for
:
Note that the supremums in the previous expression are a.e. finite as they are a.e. bounded by
. Now, let:
We have to show that
. We have:
Let
be any sequence converging to 0, and denote
We first want to show that
is uniformly integrable. According to [
20], it is sufficient to show that
. We have that
. By Assumption 1 (more precisely, the regularity of the inhomogeneous Markov renewal process), we get:
and therefore
is uniformly integrable. Then, we show that
. Let:
Let
and
. There exists some constant
such that for
:
and if
:
Let
(recall
) and
. Then,
, and therefore:
Therefore, for
and
:
We have proved that
. By uniform integrability of
, we get that
and therefore since the sequence
is arbitrary:
□
We now prove that the limit points of are continuous.
Lemma 3. Let Assumptions 1 and 2 hold true. Then, is -C-relatively compact.
Proof. The proof is presented for the case
. The proof for the case
is exactly the same. By Lemma 2, it is relatively compact. By Theorem 6, it is sufficient to show that
. Let
and fix
. For
, we have:
Since:
we get
(choose
T big enough, then small enough). □
4.4. Martingale Characterization of the Random Evolution
To prove the weak law of large numbers for the random evolution, we use a martingale method similar to what is done in [
10] (Section 4.2.1), but adapted rigorously to the inhomogeneous setting. We first introduce the quantity
, solution to a suitable “Poisson equation”:
Definition 13. Let Assumption 1 hold true. For , , , let , where is the unique solution of the equation: namely , where is the fundamental matrix associated to P.
Remark 3. The existence of is guaranteed because by construction (see [22], Proposition 4). In fact, in [22], the operators Π
and P are defined on but the results hold true if we work on , where E is any Banach space such that (e.g., if , if ). To see that, first observe that P and Π
can be defined the same way on as they were on . Then, take such that and such that . We therefore have that: , and since we have the uniform ergodicity on , we have that: By linearity of we get that . However, because and this supremum is attained (see, e.g., [23], Section III.6), then: and thus we also have , i.e., the uniform ergodicity in . Now, according to the proofs of Theorems 3.4 and 3.5, Chapter VI of [24], is the only thing we need to prove that is invertible on: the space E plays no role. Further, by the bounded inverse theorem. We now introduce the martingale which plays a central role in the following.
Lemma 4. Let Assumption 1 hold true. Define recursively for , : i.e., ; and for (we recall that ): so that is a -martingale by construction. Let for : where and is defined in the usual way (provided we have shown that is a -stopping time ). Then, , , , , is a real-valued square-integrable martingale.
Proof. By construction
is a martingale. Let
.
,
is a
-stopping time, because:
Let
. We have that
is a martingale. Assuming we have shown that it is uniformly integrable, we can apply the optional sampling theorem for UI martingales to the stopping times
a.e. and get:
which shows that
is a martingale. Now, to show the uniform integrability, according to [
20], it is sufficient to show that
. However:
where
(
by Assumption 1). The fact that
(Assumptions 1) concludes the proof. □
Remark 4. In the following, we use the fact that, as found in [25] (Theorem 3.1), for sequences , of random variables with values in a separable metric space with metric d, if and , then . In our case, , take values in and, to show that , we use the fact that: and therefore that it is sufficient to have: to obtain (respectively, in probability).
Lemma 5. Let Assumption 1 hold true. For and , has the asymptotic representation: where is an element of the space and is defined by the following property: , , as (so that Remark 4 is satisfied).
Proof. For the sake of clarity, let:
Let
. First, we have that:
because
. Again and as in Lemma 4, we denote
, and
by Assumption 1. Now, we have:
and:
as
is
measurable. Now, we know that every discrete-time Markov process has the strong Markov property, so the Markov process
has it. For
, the times
are
-stopping times. Therefore, for
:
Consider the following derivatives with respect to
t:
which exist because
and
. Using the Fundamental theorem of Calculus for the Bochner integral (
by Assumption 1) we get:
because, by Assumption 1:
We note that the contribution of the terms of order
inside the sum will make the sum of order
, since for some constant
:
because
. Altogether, we have for
, using the definition of
:
The term involving
above will vanish as
by Assumption 1, as
. We also have for the first term (
):
Now, we have to compute the terms corresponding to
. We show that the term corresponding to
is
and that for
:
For the term
, we have, using Assumption 1, the definition of
and Theorem 3:
By Assumption 1, we have
for
. Therefore, we get using Theorem 3, for
:
Because
, we get that
,
. Since
, we get that
and therefore:
Therefore, taking the conditional expectation, we get:
and so:
Now, because
and by Assumption 1 (which ensures that the integral below exists):
Thus, using boundedness of
(again Assumption 1):
The first term above has the representation (by Theorem 5):
Taking the conditional expectation and using the fact that
, we can show as we did before that:
The second term has the following representation, because
(which ensures that
) and using Theorem 3: