Article

Inhomogeneous Random Evolutions: Limit Theorems and Financial Applications

Department of Mathematics and Statistics, University of Calgary, 2500 University Drive NW, Calgary, AB T2N 1N4, Canada
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(5), 447; https://doi.org/10.3390/math7050447
Submission received: 20 February 2019 / Revised: 19 April 2019 / Accepted: 5 May 2019 / Published: 19 May 2019
(This article belongs to the Special Issue New Trends in Random Evolutions and Their Applications)

Abstract:
The paper is devoted to inhomogeneous random evolutions (IHREs) and their applications in finance. We introduce IHREs and present some of their properties. Then, we prove a weak law of large numbers and a central limit theorem for IHREs. Financial applications are given to illiquidity modeling using regime-switching time-inhomogeneous Lévy price dynamics, to regime-switching Lévy-driven diffusion-based price dynamics, and to a generalized version of the multi-asset model of price impact from distressed selling, for which we retrieve and generalize their diffusion limit result for the price process.

1. Introduction

Random evolutions started to be studied in the 1970s because of their potential applications to biology, movement of particles, signal processing, quantum physics, finance and insurance, among others (see [1,2,3]). They allow modeling a situation in which the dynamics of a system are governed by various regimes, and the system switches from one regime to another at random times. Let us be given a state process $\{x_n\}_{n \in \mathbb{N}}$ (with state space $\mathbb{J}$) and a sequence of increasing random times $\{T_n\}_{n \in \mathbb{N}}$. In short, during each random time interval $[T_n, T_{n+1})$, the evolution of the system is governed by the state $x_n$. At the random time $T_{n+1}$, the system switches to state $x_{n+1}$, and so on. Think, for example, of regime-switching models in finance, where the state process is linked with, e.g., different liquidity regimes, as in the recent article [4]. In their work, the sequence $\{T_n\}$ represents the random times at which the market switches from one liquidity regime to another.
When random evolutions began to be studied [1,2,3], the times $\{T_n\}$ were assumed to be exponentially distributed and $\{x_n\}_{n \in \mathbb{N}}$ was assumed to be a Markov chain, so that the process $x(t) := x_{N(t)}$ was a continuous-time Markov chain. Here, as in [5], we let
$$N(t) := \sup\{n \in \mathbb{N} : T_n \le t\}$$
denote the number of regime changes up to time $t$. In these first papers, limit theorems for the expectation of the random evolution were studied. Then, in 1984–1985, J. Watkins [6,7,8] established weak law of large numbers and diffusion limit results for a specific class of random evolutions, in which the times $\{T_n\}$ are deterministic and the state process $\{x_n\}_{n \in \mathbb{N}}$ is strictly stationary. In [9,10], A. Swishchuk and V. Korolyuk extended the work of J. Watkins to a setting where $(x_n, T_n)_{n \in \mathbb{N}}$ is assumed to be a Markov renewal process, and obtained weak law of large numbers, central limit theorem and diffusion limit results. It can also be noted that they allowed the system to be "impacted" at each regime transition, which they modeled with the use of operators that we denote by $D$ in the following.
As we show in the applications (Section 6), these operators $D$ can be used, for example, to model an impact on the stock price at each regime transition due to illiquidity as in [4], or to model situations where the stock price only evolves in a discrete way (i.e., only by successive "impacts"), such as in the recent articles [11,12] on, respectively, high-frequency price dynamics and the modeling of price impact from distressed selling (see Section 6.1 and Section 6.3). For example, the price dynamics of [11] can be seen as a specific case of random evolution for which the operators $D$ are given by:
$$D(x,y)f(z) := f\Big(\exp\Big(z + \epsilon^2 m + \epsilon y + \phi\big(h(z + \epsilon^2 m + \epsilon y) - h(z)\big)\Big)\Big),$$
where $x, y \in \mathbb{J}$, $z \in \mathbb{R}^d$, $f$, $\phi$, and $h$ are some functions, and $\epsilon$ and $m$ are some constants. In Section 6.3, we generalize their diffusion limit result by using our tools and results on random evolutions (see [9,13,14] for more details).
In the aforementioned literature on (limit theorems for) random evolutions [6,7,8,9,10], the evolution of the system on each time interval $[T_n, T_{n+1})$ is assumed to be driven by a (time-homogeneous) semigroup $U_{x_n}$ depending on the current state of the system $x_n$. A semigroup $U$ is a family of time-indexed operators satisfying $U(t+s) = U(t)U(s)$. For example, if $Z$ is a time-homogeneous Markov process, then we can define the semigroup
$$U(t)f(z) := \mathbb{E}[f(Z_t) \mid Z_0 = z],$$
where $f$ is a bounded measurable function and $z \in \mathbb{R}^d$. The idea of a random evolution is therefore the following: during each time interval $[T_n, T_{n+1})$, the system (e.g., price process) evolves according to the semigroup $U_{x_n}$ (associated with the Markov process $Z^{x_n}$ in the case of a price process). At the transition time $T_{n+1}$, the system is impacted by an operator $D(x_n, x_{n+1})$, modeling for example a drop in the price at what [4] call a liquidity breakdown. Then, on the interval $[T_{n+1}, T_{n+2})$, the price is driven by the semigroup $U_{x_{n+1}}$, and so on: it can be summarized as a regime-switching model with an impact at each transition. We can write the previously described random evolution $V(t)$ in the following compact form (where the product applies on the right):
$$V(t) = \prod_{k=1}^{N(t)} U_{x_{k-1}}\big(T_k - T_{k-1}\big)\, D(x_{k-1}, x_k) \cdot U_{x(t)}\big(t - T_{N(t)}\big).$$
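As a purely illustrative sketch (not taken from the paper), the following code composes the product formula above in a two-state, finite-dimensional toy setting where the semigroups are matrix exponentials $U_x(t) = e^{tB_x}$, the impact operators $D(x,y)$ are constant matrices, and the sojourn times are exponential; all numerical choices are hypothetical and only show how the factors are multiplied on the right.

```python
# Toy illustration of V(t) = prod_k U_{x_{k-1}}(T_k - T_{k-1}) D(x_{k-1}, x_k) * U_{x(t)}(t - T_{N(t)}).
# All ingredients (generators B_x, jump matrices D, exponential sojourns) are hypothetical.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
B = {0: np.array([[0.0, 1.0], [-1.0, 0.0]]),    # U_0(t) = expm(t * B_0)
     1: np.array([[-0.5, 0.0], [0.0, -0.2]])}   # U_1(t) = expm(t * B_1)
D = {(0, 1): 0.95 * np.eye(2),                  # "impact" applied at a 0 -> 1 transition
     (1, 0): 1.05 * np.eye(2)}                  # "impact" applied at a 1 -> 0 transition

def sample_V(t_end):
    V, t, x = np.eye(2), 0.0, 0
    while True:
        sojourn = rng.exponential(1.0)          # Markov case: exponential sojourn times
        if t + sojourn >= t_end:
            return V @ expm((t_end - t) * B[x]) # last, incomplete sojourn up to t_end
        V = V @ expm(sojourn * B[x])            # evolve with the current regime's semigroup
        y = 1 - x                               # two states: always switch to the other one
        V = V @ D[(x, y)]                       # apply the regime-transition impact
        t, x = t + sojourn, y

print(sample_V(3.0))                            # one random sample of the operator V(3)
```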
This time-homogeneous setting (embedded in the time-homogeneous semigroups $U$) does not include, e.g., time-inhomogeneous Lévy processes (see Section 6.1), or processes $Z$ that are solutions of stochastic differential equations (SDEs) of the form:
$$dZ_t = \Phi(Z_t, t)\, dL_t,$$
where $L$ is a vector-valued Lévy process and $\Phi$ a driving matrix-valued function (see Section 6.2). The latter class of SDEs includes many popular models in finance, e.g., time-dependent Heston models. Indeed, these examples can be associated with time-inhomogeneous generalizations of semigroups, also called propagators. A (backward) propagator $\Gamma$ (see Section 2) is simply a family of operators satisfying $\Gamma(s,t) = \Gamma(s,u)\Gamma(u,t)$ for $s \le u \le t$. We then define, for a time-inhomogeneous Markov process $Z$,
$$\Gamma(s,t)f(z) := \mathbb{E}[f(Z_t) \mid Z_s = z].$$
The celebrated Chapman–Kolmogorov equation guarantees the relation $\Gamma(s,t) = \Gamma(s,u)\Gamma(u,t)$ for $s \le u \le t$.
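For intuition, this identity can be checked numerically in the simplest setting: a finite-state Markov chain with a time-dependent rate matrix $Q(u)$, for which $\Gamma(s,t)$ is the transition matrix solving $\partial_t \Gamma(s,t) = \Gamma(s,t)Q(t)$, $\Gamma(s,s) = I$. The sketch below (with a made-up $Q$, chosen only so that the characteristics depend on time) verifies $\Gamma(s,t) = \Gamma(s,u)\Gamma(u,t)$ up to discretization error.

```python
# Numerical check of the propagator identity Γ(s,t) = Γ(s,u)Γ(u,t) for a
# time-inhomogeneous two-state Markov chain; the rate matrix Q(u) is hypothetical.
import numpy as np

def Q(u):
    lam, mu = 1.0 + 0.5 * np.sin(u), 0.8 + 0.3 * np.cos(u)
    return np.array([[-lam, lam], [mu, -mu]])

def propagator(s, t, n_steps=20000):
    """Euler scheme for d/dt Γ(s,t) = Γ(s,t) Q(t), Γ(s,s) = I."""
    G, h = np.eye(2), (t - s) / n_steps
    for k in range(n_steps):
        G = G + h * (G @ Q(s + k * h))
    return G

s, u, t = 0.0, 0.7, 1.5
lhs = propagator(s, t)
rhs = propagator(s, u) @ propagator(u, t)
print(np.max(np.abs(lhs - rhs)))   # small (of the order of the discretization error)
```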
Let us explain here the main ideas of the main results of the paper, Theorems 8 (LLN) and 11 (FCLT), using one of the applications, namely a regime-switching inhomogeneous Lévy-based stock price model. We consider a regime-switching inhomogeneous Lévy-based stock price model, very similar in spirit to the recent article [4]. In short, an inhomogeneous Lévy process differs from a classical Lévy process in the sense that it has time-dependent (and absolutely continuous) characteristics. We let $\{L^x\}_{x \in \mathbb{J}}$ be a collection of such $\mathbb{R}^d$-valued inhomogeneous Lévy processes with characteristics $(b_t^x, c_t^x, \nu_t^x)_{x \in \mathbb{J}}$, and we define:
$$\Gamma_x(s,t)f(z) := \mathbb{E}\big[f(L_t^x - L_s^x + z)\big], \quad z \in \mathbb{R}^d,\ x \in \mathbb{J},$$
$$D^\epsilon(x,y)f(z) := f\big(z + \epsilon\,\alpha(x,y)\big), \quad z \in \mathbb{R}^d,\ x, y \in \mathbb{J},$$
for some bounded function $\alpha$. We give in Section 6 a financial interpretation of this function $\alpha$, as well as reasons why we consider a regime-switching model. In this setting, $f$ represents a contingent claim on a ($d$-dimensional) risky asset $S$ having regime-switching inhomogeneous Lévy dynamics driven by the processes $\{L^x\}_{x \in \mathbb{J}}$: on each random time interval $[T_k^{\epsilon,s}(s), T_{k+1}^{\epsilon,s}(s))$, the risky asset is driven by the process $L^{x_k(s)}$. Indeed, we have the following representation, for $\omega \in \Omega$ (written to make clear that the expectation below is taken with respect to the randomness embedded in the processes $L$ and not with respect to $\omega$):
$$V^\epsilon(s,t)(\omega)f(z) = \mathbb{E}\left[ f\left( z + \sum_{k=1}^{N_s(t^{1/\epsilon,s})(\omega)+1} \Delta L_k + \sum_{k=1}^{N_s(t^{1/\epsilon,s})(\omega)} \epsilon\,\alpha\big(x_{k-1}(s)(\omega), x_k(s)(\omega)\big) \right)\right],$$
where we have denoted, for clarity:
$$\Delta L_k = \Delta L_k(\epsilon, \omega) := L^{x_{k-1}(s)(\omega)}_{T_k^{\epsilon,s}(s)(\omega)\,\wedge\, t} - L^{x_{k-1}(s)(\omega)}_{T_{k-1}^{\epsilon,s}(s)(\omega)}.$$
The random evolution $V^\epsilon(s,t)f$ represents in this case the present value of the contingent claim $f$ of maturity $t$ on the risky asset $S$, conditionally on the regime-switching process $(x_n, T_n)_{n \ge 0}$: indeed, remember that $V^\epsilon(s,t)f$ is random, and that its randomness (only) comes from the Markov renewal process. Our main results, Theorems 8 and 11, allow approximating the impact of the regime switching on the present value $V^\epsilon(s,t)f$ of the contingent claim. Indeed, we get the following normal approximation, for small $\epsilon$:
$$V^\epsilon(s,t)f \approx \underbrace{\widehat{\Gamma}(s,t)f}_{\text{1st-order regime-switching approximation}} + \underbrace{\sqrt{\epsilon}\; I_\sigma(s,t)f}_{\text{noise due to regime switching}}.$$
The above approximation allows one to quantify the risk inherent in regime switchings occurring at a high frequency governed by $\epsilon$. The parameter $\epsilon$ reflects the frequency of the regime switchings and can therefore be calibrated to market data by the risk manager. For market practitioners, because of the computational cost, it is often convenient to have asymptotic formulas that allow them to approximate the present value of a given derivative, and by extension the value of their whole portfolio. In addition, the asymptotic normal form of the regime-switching cost allows the risk manager to derive approximate confidence intervals for the portfolio, as well as other quantities of interest such as reserve policies linked to a given model.
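As a purely illustrative piece of arithmetic (all numbers below are hypothetical), the normal approximation above turns into an approximate confidence interval as follows: with a first-order value $\widehat{\Gamma}(s,t)f$, a standard deviation for the Gaussian noise term $I_\sigma(s,t)f$, and a calibrated $\epsilon$, the half-width of a 95% interval is $1.96\sqrt{\epsilon}$ times that standard deviation.

```python
# Hypothetical numbers only: a first-order claim value, a noise standard deviation
# for I_sigma(s,t)f, and a switching-frequency parameter eps calibrated to market data.
from math import sqrt

first_order_value = 10.0      # plays the role of Γ̂(s,t)f
noise_std = 2.0               # standard deviation of the Gaussian term I_sigma(s,t)f
eps = 0.01                    # regime-switching frequency parameter

half_width = 1.96 * sqrt(eps) * noise_std
print(f"approximate 95% CI: [{first_order_value - half_width:.3f}, "
      f"{first_order_value + half_width:.3f}]")
```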
Therefore, the goal of this paper is to focus on limit theorems for so-called time-inhomogeneous random evolutions, i.e., random evolutions that we construct with propagators rather than with (time-homogeneous) semigroups, in order to widen the range of possible applications, as mentioned above. We call these random evolutions inhomogeneous random evolutions (IHREs).
The notion of "time-inhomogeneity" in fact appears twice in our framework: in addition to being constructed with propagators, our random evolutions are driven by time-inhomogeneous semi-Markov processes (using results from [5]). This general case has not been treated before. Even if we do not have the same goal in mind, our methodology is similar in spirit to the recent article [15] on the comparison of time-inhomogeneous Markov processes, or to the article [16] on time-inhomogeneous affine processes. We say "similar" because we are all in a position where results for the time-homogeneous case exist, and our goal is to develop a rigorous treatment of the time-inhomogeneous case. It is interesting to see that, even if the topics we deal with are different, the "philosophy" and "intuition" behind them are similar in many ways.
The paper is organized as follows: Section 2 is devoted to propagators. We introduce the concept of regular propagators, which we characterize as unique solutions to well-posed Cauchy problems: this is of crucial importance for both our main weak law of large numbers (WLLN) and central limit theorem (CLT) results, in order to obtain the uniqueness of the limiting process. In Section 3, we introduce inhomogeneous random evolutions and present some of their properties. In Section 4 and Section 5, we prove, respectively, a WLLN and a CLT, which are the main results of the paper (Theorems 8 and 11). In particular, for the CLT, we obtain a precise (and new) characterization of the limiting process using weak Banach-valued stochastic integrals and so-called orthogonal martingale measures: this result, to the best of our knowledge, has not even been obtained in the time-homogeneous case. In Section 6, we present financial applications to illiquidity modeling using regime-switching time-inhomogeneous Lévy price dynamics and regime-switching Lévy-driven diffusion-based price dynamics. We also present a generalized version of the multi-asset model of price impact from distressed selling introduced in the recent article [11], for which we retrieve (and generalize) their diffusion limit result for the price process. We would also like to mention that copula-based multivariate semi-Markov models with applications in high-frequency finance are considered in [17], which relates to a generalization of semi-Markov processes and their applications.

2. Propagators

This section aims at presenting a few results on propagators, which are used in the sequel. Most of them (as well as the corresponding proofs) are similar to what can be found in [18] Chapter 5, [19] Chapter 2 or [15], but, to the best of our knowledge, they do not appear in the literature in the form presented below. In particular, the main result of this section is Theorem 4, which characterizes so-called regular propagators as unique solutions to well-posed Cauchy problems.
Let $(Y, \|\cdot\|)$ be a real separable Banach space and let $Y^*$ be its dual space. $(Y_1, \|\cdot\|_{Y_1})$ is assumed to be a real separable Banach space which is continuously embedded in $Y$ (this idea was used in [18], Chapter 5), i.e., $Y_1 \subseteq Y$ and $\exists c_1 \in \mathbb{R}_+: \|f\| \le c_1 \|f\|_{Y_1}$ $\forall f \in Y_1$. Unless mentioned otherwise, limits are taken in the $Y$-norm, normed vector spaces are equipped with the norm topology, and subspaces of normed vector spaces are equipped with the subspace topology. Limits in the $Y_1$ norm are denoted $Y_1\text{-}\lim$, for example. In the following, $J$ refers to either $\mathbb{R}_+$ or $[0,T]$ for some $T > 0$, and $\Delta_J := \{(s,t) \in J^2 : s \le t\}$. In addition, let, for $s \in J$, $J(s) := \{t \in J : s \le t\}$ and $\Delta_J(s) := \{(r,t) \in J^2 : s \le r \le t\}$. We start with a few introductory definitions:
Definition 1.
A function $\Gamma: \Delta_J \to B(Y)$ is called a $Y$-(backward) propagator if:
(i) 
$\forall t \in J$: $\Gamma(t,t) = I$; and
(ii) 
$\forall (s,r), (r,t) \in \Delta_J$: $\Gamma(s,r)\Gamma(r,t) = \Gamma(s,t)$.
If, in addition, $\forall (s,t) \in \Delta_J$: $\Gamma(s,t) = \Gamma(0, t-s)$, then $\Gamma$ is called a $Y$-semigroup.
Note that we focus our attention on backward propagators, as many applications only fit the backward case, as shown below. Forward propagators differ from backward propagators in that they satisfy $\Gamma(t,r)\Gamma(r,s) = \Gamma(t,s)$ ($s \le r \le t$). We now introduce the generator of the propagator:
Definition 2.
For $t \in \mathrm{int}(J)$, define:
$$D(A_\Gamma(t)) := \left\{ f \in Y : \lim_{\substack{h \downarrow 0 \\ t+h \in J}} \frac{(\Gamma(t,t+h) - I)f}{h} = \lim_{\substack{h \downarrow 0 \\ t-h \in J}} \frac{(\Gamma(t-h,t) - I)f}{h} \in Y \right\}$$
and, for $f \in D(A_\Gamma(t))$:
$$A_\Gamma(t)f := \lim_{\substack{h \downarrow 0 \\ t+h \in J}} \frac{(\Gamma(t,t+h) - I)f}{h} = \lim_{\substack{h \downarrow 0 \\ t-h \in J}} \frac{(\Gamma(t-h,t) - I)f}{h}.$$
Define similarly for $t = 0$:
$$D(A_\Gamma(0)) := \left\{ f \in Y : \lim_{\substack{h \downarrow 0 \\ h \in J}} \frac{(\Gamma(0,h) - I)f}{h} \in Y \right\}$$
and, for $f \in D(A_\Gamma(0))$:
$$A_\Gamma(0)f := \lim_{\substack{h \downarrow 0 \\ h \in J}} \frac{(\Gamma(0,h) - I)f}{h},$$
and define $A_\Gamma(T)$ similarly to $A_\Gamma(0)$. Let $D(A_\Gamma) := \bigcap_{t \in J} D(A_\Gamma(t))$. Then, $A_\Gamma: J \to L(D(A_\Gamma), Y)$ is called the infinitesimal generator of the $Y$-propagator $\Gamma$.
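As a simple worked illustration of Definitions 1 and 2 (a textbook-style example, not taken from the paper), consider on $Y = C_0(\mathbb{R})$ the deterministic shift family associated with a continuous function $b$:

```latex
% A worked example (assumptions: b continuous, f continuously differentiable with f' in C_0(R));
% it is not part of the paper's development.
\Gamma(s,t)f(z) := f\Big(z + \int_s^t b(u)\,du\Big), \qquad (s,t) \in \Delta_J.
% Backward propagator property (Definition 1):
\Gamma(s,r)\Gamma(r,t)f(z)
  = \big(\Gamma(r,t)f\big)\Big(z + \int_s^r b(u)\,du\Big)
  = f\Big(z + \int_s^t b(u)\,du\Big) = \Gamma(s,t)f(z).
% Generator (Definition 2):
A_\Gamma(t)f(z)
  = \lim_{h \to 0}\frac{f\big(z + \int_t^{t+h} b(u)\,du\big) - f(z)}{h}
  = b(t)\, f'(z),
% so the generator is genuinely time-dependent whenever b is non-constant,
% which is exactly what a (time-homogeneous) semigroup cannot capture.
```

Taking $L^x$ deterministic with drift $b$ in the Lévy-based example of the Introduction recovers exactly this propagator.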
In the following definitions, which deal with continuity and boundedness of propagators, $(E_1, \|\cdot\|_{E_1})$ and $(E_2, \|\cdot\|_{E_2})$ represent Banach spaces such that $E_2 \subseteq E_1$ (possibly $E_1 = E_2$).
Definition 3.
An $E_1$-propagator $\Gamma$ is $B(E_2, E_1)$-bounded if $\sup_{(s,t) \in \Delta_J} \|\Gamma(s,t)\|_{B(E_2,E_1)} < \infty$. It is a $B(E_2, E_1)$-contraction if $\sup_{(s,t) \in \Delta_J} \|\Gamma(s,t)\|_{B(E_2,E_1)} \le 1$. It is $B(E_2, E_1)$-locally bounded if $\sup_{(s,t) \in K} \|\Gamma(s,t)\|_{B(E_2,E_1)} < \infty$ for every compact $K \subseteq \Delta_J$.
Definition 4.
Let $F \subseteq E_2$. An $E_1$-propagator $\Gamma$ is $(F, \|\cdot\|_{E_2})$-strongly continuous if $\forall (s,t) \in \Delta_J$, $f \in F$:
$$\Gamma(s,t)F \subseteq E_2 \quad \text{and} \quad \lim_{\substack{(h_1,h_2) \to (0,0) \\ (s+h_1,\, t+h_2) \in \Delta_J}} \|\Gamma(s+h_1, t+h_2)f - \Gamma(s,t)f\|_{E_2} = 0.$$
When $E_1 = E_2 = Y$, we simply write that it is $F$-strongly continuous.
We use the terminologies $t$-continuity and $s$-continuity for the continuity of the partial maps $t \mapsto \Gamma(s,t)$ and $s \mapsto \Gamma(s,t)$. According to [19], strong joint continuity is equivalent to strong separate continuity together with local boundedness of the propagator.
Definition 5.
Let $F \subseteq E_2$. The generator $A_\Gamma$ of the $E_1$-propagator $\Gamma$ is $(F, \|\cdot\|_{E_2})$-strongly continuous if $\forall t \in J$, $f \in F$:
$$A_\Gamma(t)F \subseteq E_2 \quad \text{and} \quad \lim_{\substack{h \to 0 \\ t+h \in J}} \|A_\Gamma(t+h)f - A_\Gamma(t)f\|_{E_2} = 0.$$
When $E_1 = E_2 = Y$, we simply write that it is $F$-strongly continuous.
The following results give conditions under which the propagator is differentiable in s and t.
Theorem 1.
Let $\Gamma$ be a $Y$-propagator. Assume that $\forall (s,t) \in \Delta_J$, $\Gamma(s,t)Y_1 \subseteq D(A_\Gamma)$. Then:
$$\frac{\partial^-}{\partial s}\Gamma(s,t)f = -A_\Gamma(s)\Gamma(s,t)f, \quad \forall (s,t) \in \Delta_J,\ f \in Y_1.$$
If, in addition, $\Gamma$ is $(Y_1, \|\cdot\|_{Y_1})$-strongly $s$-continuous and $Y_1$-strongly $t$-continuous, then:
$$\frac{\partial}{\partial s}\Gamma(s,t)f = -A_\Gamma(s)\Gamma(s,t)f \quad \forall (s,t) \in \Delta_J,\ f \in Y_1.$$
Proof of Theorem 1.
Let ( s , t ) Δ J , f Y 1 .
s Γ ( s , t ) f = lim h 0 ( s h , t ) Δ J Γ ( s , t ) f Γ ( s h , t ) f h
= lim h 0 ( s h , t ) Δ J Γ ( s h , s ) I h Γ ( s , t ) f = A Γ ( s ) Γ ( s , t ) f
since Γ ( s , t ) f D ( A Γ ) . We have for s < t :
+ s Γ ( s , t ) f = lim h 0 ( s + h , t ) Δ J Γ ( s + h , t ) f Γ ( s , t ) f h
= lim h 0 ( s + h , t ) Δ J Γ ( s , s + h ) I h Γ ( s + h , t ) f .
Let h ( 0 , t s ] :
( Γ ( s , s + h ) I ) h Γ ( s + h , t ) f A Γ ( s ) Γ ( s , t ) f ( Γ ( s , s + h ) I ) h Γ ( s , t ) f A Γ ( s ) Γ ( s , t ) f + ( Γ ( s , s + h ) I ) h B ( Y 1 , Y ) | | Γ ( s + h , t ) f Γ ( s , t ) f | | Y 1 ,
the last inequality holding because ( s , t ) Δ J : Γ ( s , t ) Y 1 Y 1 . We apply the uniform boundedness principle to show that sup h ( 0 , t s ] ( Γ ( s , s + h ) I ) h B ( Y 1 , Y ) < . Y 1 is Banach. We have to show that g Y 1 : sup h ( 0 , t s ] ( Γ ( s , s + h ) I ) h g < . Let g Y 1 . We have ( Γ ( s , s + h ) I ) h g h 0 | | A Γ ( s ) g | | since Y 1 D ( A Γ ) . δ ( g ) ( 0 , t s ) : h ( 0 , δ ) ( Γ ( s , s + h ) I ) h g < 1 + | | A Γ ( s ) g | | . Then, by Y 1 -strong t-continuity of Γ, h ( Γ ( s , s + h ) I ) h g C ( [ δ , t s ] , R ) . Let M : = max h [ δ , t s ] ( Γ ( s , s + h ) I ) h g . Then, we get ( Γ ( s , s + h ) I ) h g max ( M , 1 + | | A Γ ( s ) g | | ) h ( 0 , t s ] sup h ( 0 , t s ] ( Γ ( s , s + h ) I ) h g < .
Further, by ( Y 1 , | | · | | Y 1 ) -strong s-continuity of Γ, | | Γ ( s + h , t ) f Γ ( s , t ) f | | Y 1 h 0 0 . Finally, since Γ ( s , t ) f D ( A Γ ) , ( Γ ( s , s + h ) I ) h Γ ( s , t ) f A Γ ( s ) Γ ( s , t ) f h 0 0 .
Therefore, we get + s Γ ( s , t ) f = A Γ ( s ) Γ ( s , t ) f for s < t , which shows that s Γ ( s , t ) f = A Γ ( s ) Γ ( s , t ) f for ( s , t ) Δ J .  □
Theorem 2.
Let $\Gamma$ be a $Y$-propagator. Assume that $Y_1 \subseteq D(A_\Gamma)$. Then, we have:
$$\frac{\partial^+}{\partial t}\Gamma(s,t)f = \Gamma(s,t)A_\Gamma(t)f \quad \forall (s,t) \in \Delta_J,\ f \in Y_1.$$
If, in addition, $\Gamma$ is $Y$-strongly $t$-continuous, then we have:
$$\frac{\partial}{\partial t}\Gamma(s,t)f = \Gamma(s,t)A_\Gamma(t)f \quad \forall (s,t) \in \Delta_J,\ f \in Y_1.$$
Proof of Theorem 2.
Let ( s , t ) Δ J , f Y 1 . We have:
+ t Γ ( s , t ) f = lim h 0 ( s , t + h ) Δ J Γ ( s , t + h ) f Γ ( s , t ) f h
= lim h 0 ( s , t + h ) Δ J Γ ( s , t ) ( Γ ( t , t + h ) I ) f h .
For h J : t + h J :
Γ ( s , t ) ( Γ ( t , t + h ) I ) f h Γ ( s , t ) A Γ ( t ) f | | Γ ( s , t ) | | ( Y ) ( Γ ( t , t + h ) I ) f h A Γ ( t ) f h 0 0 .
since f D ( A Γ ) . Therefore, + t Γ ( s , t ) f = Γ ( s , t ) A Γ ( t ) f . Now, if s < t :
t Γ ( s , t ) f = lim h 0 ( s , t h ) Δ J Γ ( s , t ) f Γ ( s , t h ) f h
= lim h 0 ( s , t h ) Δ J Γ ( s , t h ) ( Γ ( t h , t ) I ) f h .
For h ( 0 , t s ] :
Γ ( s , t h ) ( Γ ( t h , t ) I ) f h Γ ( s , t ) A Γ ( t ) f | | Γ ( s , t h ) | | B ( Y ) ( Γ ( t h , t ) I ) f h A Γ ( t ) f + | | Γ ( ( s , t h ) Γ ( s , t ) ) A Γ ( t ) f | | .
Since f D ( A Γ ) , ( Γ ( t h , t ) I ) f h A Γ ( t ) f h 0 0 . By Y-strong t-continuity of Γ: | | ( Γ ( s , t h ) Γ ( s , t ) ) A Γ ( t ) f | | h 0 0 . By the principle of uniform boundedness together with the Y-strong t-continuity of Γ, we have sup h ( 0 , t s ] | | Γ ( s , t h ) | | B ( Y ) sup h [ 0 , t s ] | | Γ ( s , t h ) | | B ( Y ) < . Therefore, we get t Γ ( s , t ) f = Γ ( s , t ) A Γ ( t ) f for s < t , which shows t Γ ( s , t ) f = Γ ( s , t ) A Γ ( t ) f for ( s , t ) Δ J .  □
In general, we want to use the evolution equation $\Gamma(s,t)f = f + \int_s^t \Gamma(s,u)A_\Gamma(u)f\,du$; therefore, we need $u \mapsto \Gamma(s,u)A_\Gamma(u)f$ to be in $L^1_Y([s,t])$. The following result gives sufficient conditions under which this is the case.
Theorem 3.
Assume that Theorem 2 holds true, that $\forall t \in J$, $A_\Gamma(t) \in B(Y_1, Y)$, and that $\forall (s,t) \in \Delta_J$, $u \mapsto \|A_\Gamma(u)\|_{B(Y_1,Y)} \in L^1_{\mathbb{R}}([s,t])$. Then, $\forall f \in Y_1$, $(s,t) \in \Delta_J$:
$$\Gamma(s,t)f = f + \int_s^t \Gamma(s,u)A_\Gamma(u)f\,du.$$
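For the shift propagator used as an illustration after Definition 2 (again a worked example under the same assumptions, not part of the paper's argument), the evolution equation can be verified by hand:

```latex
% With Γ(s,t)f(z) = f(z + \int_s^t b(u)du) and A_Γ(u)f = b(u) f', we get
\Gamma(s,u)A_\Gamma(u)f(z) = b(u)\, f'\Big(z + \int_s^u b(r)\,dr\Big),
% and the fundamental theorem of calculus applied to u -> f(z + \int_s^u b) gives
\int_s^t \Gamma(s,u)A_\Gamma(u)f(z)\,du
  = f\Big(z + \int_s^t b(u)\,du\Big) - f(z)
  = \Gamma(s,t)f(z) - f(z),
% which is exactly the integral (evolution) equation of Theorem 3.
```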
Proof of Theorem 3.
Let $f \in Y_1$, $(s,t) \in \Delta_J$. First, $u \mapsto \Gamma(s,u)A_\Gamma(u)f \in B_Y([s,t])$ as the derivative of $u \mapsto \Gamma(s,u)f$. By the principle of uniform boundedness together with the $Y$-strong $t$-continuity of $\Gamma$, we have $M := \sup_{u \in [s,t]} \|\Gamma(s,u)\|_{B(Y)} < \infty$. We then observe that, for $u \in [s,t]$:
$$\|\Gamma(s,u)A_\Gamma(u)f\| \le M \|A_\Gamma(u)f\| \le M \|A_\Gamma(u)\|_{B(Y_1,Y)} \|f\|_{Y_1}. \qquad \square$$
The following definition introduces the concept of a regular propagator, which in short means that the propagator is differentiable with respect to both variables, and that its derivatives are integrable.
Definition 6.
A $Y$-propagator $\Gamma$ is said to be regular if it satisfies Theorems 1 and 2 and if, $\forall (s,t) \in \Delta_J$, $f \in Y_1$, the maps $u \mapsto \|A_\Gamma(u)\Gamma(u,t)f\|$ and $u \mapsto \|\Gamma(s,u)A_\Gamma(u)f\|$ are in $L^1_{\mathbb{R}}([s,t])$.
Now, we are ready to characterize a regular propagator as the unique solution of a well-posed Cauchy problem, which is needed in the sequel. Note that the proof of the theorem below requires that Γ satisfies both Theorems 1 and 2 (hence, our above definition of regular propagators). The next result is the initial value problem statement for operator G .
Theorem 4.
Let $A_\Gamma$ be the generator of a regular $Y$-propagator $\Gamma$, and let $s \in J$, $G_s \in B(Y)$. A solution operator $G: J(s) \to B(Y)$ to the Cauchy problem:
$$\frac{d}{dt}G(t)f = G(t)A_\Gamma(t)f \quad \forall t \in J(s),\ f \in Y_1, \qquad G(s) = G_s,$$
is said to be regular if it is $Y$-strongly continuous. If $G$ is such a regular solution, then we have $G(t)f = G_s\,\Gamma(s,t)f$, $\forall t \in J(s)$, $f \in Y_1$.
Proof of Theorem 4.
Let ( s , u ) , ( u , t ) Δ J , f Y 1 . Consider the function ϕ : u G ( u ) Γ ( u , t ) f . We show that ϕ ( u ) = 0 u [ s , t ] and therefore that ϕ ( s ) = ϕ ( t ) . We have for u < t :
d + ϕ d u ( u ) = lim h 0 h ( 0 , t u ] 1 h [ G ( u + h ) Γ ( u + h , t ) f G ( u ) Γ ( u , t ) f ] .
Let h ( 0 , t u ] . We have:
1 h [ G ( u + h ) Γ ( u + h , t ) f G ( u ) Γ ( u , t ) f ] 1 h G ( u + h ) Γ ( u , t ) f 1 h G ( u ) Γ ( u , t ) f G ( u ) A Γ ( u ) Γ ( u , t ) f ( 1 ) + | | G ( u + h ) | | B ( Y ) 1 h Γ ( u + h , t ) f 1 h Γ ( u , t ) f + A Γ ( u ) Γ ( u , t ) f ( 2 ) + | | G ( u + h ) A Γ ( u ) Γ ( u , t ) f G ( u ) A Γ ( u ) Γ ( u , t ) f | | ( 3 )
We have:
  • (1) → 0 as G satisfies the initial value problem and Γ ( u , t ) Y 1 Y 1 .
  • (2) → 0 as u Γ ( u , t ) f = A Γ ( u ) ( u , t ) f .
  • (3) → 0 by Y-strong continuity of G.
Further, by the principle of uniform boundedness together with the Y-strong continuity of G, we have sup h ( 0 , t u ] | | G ( u + h ) | | B ( Y ) sup h [ 0 , t u ] | | G ( u + h ) | | B ( Y ) < . We therefore get d + ϕ d u ( u ) = 0 . Now, for u > s :
d ϕ d u ( u ) = lim h 0 h ( 0 , u s ] 1 h [ G ( u ) Γ ( u , t ) f G ( u h ) Γ ( u h , t ) f ]
Let h ( 0 , u s ] :
1 h [ G ( u ) Γ ( u , t ) f G ( u h ) Γ ( u h , t ) f ] 1 h G ( u ) Γ ( u , t ) f 1 h G ( u h ) Γ ( u , t ) f G ( u ) A Γ ( u ) Γ ( u , t ) f ( 4 ) + | | G ( u h ) | | B ( Y ) 1 h Γ ( u h , u ) Γ ( u , t ) f + 1 h Γ ( u , t ) f + A Γ ( u ) Γ ( u , t ) f ( 5 ) + | | G ( u ) A Γ ( u ) Γ ( u , t ) f G ( u h ) A Γ ( u ) Γ ( u , t ) f | | ( 6 )
By the principle of uniform boundedness together with the Y-strong t-continuity of G, we have sup h ( 0 , u s ] | | G ( u h ) | | B ( Y ) sup h [ 0 , u s ] | | G ( u h ) | | B ( Y ) < . Furthermore:
  • (4) → 0 as G satisfies the initial value problem and Γ ( u , t ) Y 1 Y 1 .
  • (5) → 0 as Γ ( u , t ) Y 1 Y 1 .
  • (6) → 0 by Y-strong continuity of G.
We therefore get d ϕ d u ( u ) = 0 .  □
The following corollary expresses the fact that equality of generators implies equality of propagators.
Corollary 1.
Assume that $\Gamma_1$ and $\Gamma_2$ are regular $Y$-propagators and that $\forall f \in Y_1$, $t \in J$, $A_{\Gamma_1}(t)f = A_{\Gamma_2}(t)f$. Then, $\forall f \in Y_1$, $(s,t) \in \Delta_J$: $\Gamma_1(s,t)f = \Gamma_2(s,t)f$. In particular, if $Y_1$ is dense in $Y$, then $\Gamma_1 = \Gamma_2$.
We conclude this section with a second-order Taylor formula for propagators. Let:
$$D(A_\Gamma Y_1) := \big\{ f \in D(A_\Gamma) \cap Y_1 : A_\Gamma(t)f \in Y_1\ \forall t \in J \big\}.$$
Theorem 5.
Let $\Gamma$ be a regular $Y$-propagator and $(s,t) \in \Delta_J$. Assume that $\forall u \in J$, $A_\Gamma(u) \in B(Y_1, Y)$ and $u \mapsto \|A_\Gamma(u)\|_{B(Y_1,Y)} \in L^1_{\mathbb{R}}([s,t])$. Then, we have, for $f \in D(A_\Gamma Y_1)$:
$$\Gamma(s,t)f = f + \int_s^t A_\Gamma(u)f\,du + \int_s^t \int_s^u \Gamma(s,r)A_\Gamma(r)A_\Gamma(u)f\,dr\,du.$$
Proof of Theorem 5.
Since $\Gamma$ is regular, $f, A_\Gamma(u)f \in Y_1$, and $u \mapsto \|A_\Gamma(u)\|_{B(Y_1,Y)}$ is integrable on $[s,t]$, we have, by Theorem 3:
$$\Gamma(s,t)f = f + \int_s^t \Gamma(s,u)A_\Gamma(u)f\,du = f + \int_s^t \left( A_\Gamma(u)f + \int_s^u \Gamma(s,r)A_\Gamma(r)A_\Gamma(u)f\,dr \right) du$$
$$= f + \int_s^t A_\Gamma(u)f\,du + \int_s^t \int_s^u \Gamma(s,r)A_\Gamma(r)A_\Gamma(u)f\,dr\,du.$$
 □

3. Inhomogeneous Random Evolutions: Definitions and Properties

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a complete probability space, $\mathbb{J}$ a finite set and $(x_n, T_n)_{n \in \mathbb{N}}$ an inhomogeneous Markov renewal process on it, with associated inhomogeneous semi-Markov process $(x(t))_{t \in \mathbb{R}_+} := (x_{N(t)})_{t \in \mathbb{R}_+}$ (as in [5]). In this section, we use the same notation for the various kernels and cumulative distribution functions ($Q_s(i,j,t)$, $F_s(i,t)$, etc.) as in [5] on inhomogeneous Markov renewal processes. Throughout the section, we assume that the inhomogeneous Markov renewal process $(x_n, T_n)_{n \in \mathbb{N}}$ is regular (cf. the definition in [5]), and that $Q_s(\cdot,\cdot,0) = 0$ for all $s \in \mathbb{R}_+$. Further assumptions on it are made below. We define the following random variables on $\Omega$, for $s \le t \in \mathbb{R}_+$:
  • the number of jumps on $(s,t]$: $N_s(t) := N(t) - N(s)$;
  • the jump times on $(s,\infty)$: $T_n(s) := T_{N(s)+n}$ for $n \ge 1$, and $T_0(s) := s$; and
  • the states visited by the process on $[s,\infty)$: $x_n(s) := x(T_n(s))$, for $n \in \mathbb{N}$.
Consider a family of $Y$-propagators $(\Gamma_x)_{x \in \mathbb{J}}$, with respective generators $(A_x)_{x \in \mathbb{J}}$, satisfying, for every $s \in J$:
$$(r, t, x, f) \mapsto \Gamma_x(r \wedge t,\, r \vee t)f \quad \text{is } Bor(J(s)) \otimes Bor(J(s)) \otimes Bor(\mathbb{J}) \otimes Bor(Y)\text{-}Bor(Y)\text{ measurable},$$
as well as a family $(D(x,y))_{(x,y) \in \mathbb{J}^2} \subseteq B(Y)$ of $B(Y)$-contractions, satisfying:
$$(x, y, f) \mapsto D(x,y)f \quad \text{is } Bor(\mathbb{J}) \otimes Bor(\mathbb{J}) \otimes Bor(Y)\text{-}Bor(Y)\text{ measurable}.$$
We define the inhomogeneous random evolution in the following way:
Definition 7.
The function $V: \Delta_J \times \Omega \to B(Y)$ defined pathwise by:
$$V(s,t)(\omega) = \prod_{k=1}^{N_s(t)} \Gamma_{x_{k-1}(s)}\big(T_{k-1}(s), T_k(s)\big)\, D\big(x_{k-1}(s), x_k(s)\big) \cdot \Gamma_{x(t)}\big(T_{N_s(t)}(s), t\big)$$
is called a $(\Gamma, D, x)$-inhomogeneous $Y$-random evolution, or simply an inhomogeneous $Y$-random evolution. $V$ is said to be continuous (respectively, purely discontinuous) if $D(x,y) = I$ (respectively, $\Gamma_x = I$), $\forall (x,y) \in \mathbb{J}^2$. $V$ is said to be regular (respectively, a $B(Y)$-contraction) if the $(\Gamma_x)_{x \in \mathbb{J}}$ are regular (respectively, $B(Y)$-contractions).
Remark 1.
In the latter definition, we use the convention that $\prod_{k=1}^n A_k := A_1 \cdots A_{n-1} A_n$, that is, the product operator applies the product on the right. Further, if $N_s(t) > 0$, then $x_{N_s(t)}(s) = x(T_{N_s(t)}(s)) = x(T_{N(t)}) = x(t)$. If $N_s(t) = 0$, then $x(s) = x(t)$ and $x_{N_s(t)}(s) = x_0(s) = x(T_0(s)) = x(s) = x(t)$. Therefore, in all cases, $x_{N_s(t)}(s) = x(t)$. By Proposition 2 below, if $D = I$, we see that $V$ is continuous, and if $\Gamma_x = I$ (and therefore $A_x = 0$), we see that $V$ has no continuous part; hence Definition 7.
Since our main object of study is the stochastic process $(V(s,t)(\omega)f)$, we have to establish its measurability. Thus, we have the following measurability result:
Proposition 1.
For $s \in J$, $f \in Y$, the stochastic process $(V(s,t)(\omega)f)_{(\omega,t) \in \Omega \times J(s)}$ is adapted to the (augmented) filtration:
$$\mathcal{F}_t(s) := \sigma\big( x_{n \wedge N_s(t)}(s),\, T_{n \wedge N_s(t)}(s) : n \in \mathbb{N} \big) \vee \sigma(\mathbb{P}\text{-null sets}).$$
Proof of Proposition 1.
Let E B o r ( Y ) , ( s , t ) Δ J , f Y . We have:
V ( s , t ) f 1 ( E ) = n N { V ( s , t ) f E } { N s ( t ) = n } .
Denote the F t ( s ) B o r ( R + ) measurable (by construction) function h k : = T ( k + 1 ) N s ( t ) ( s ) T k N s ( t ) ( s ) . Since Q ( · , · , 0 ) = 0 , remark that N s ( t ) ( ω ) = sup m N k = 0 m 1 h k 1 ( R + ) ( ω ) and is therefore F t ( s ) B o r ( R + ) measurable. Therefore, { N s ( t ) = n } F t ( s ) . Let:
Ω n : = { N s ( t ) = n } M : = n N : Ω n .
M since Ω = n N Ω n , and for n M , let the sigma-algebra F n : = F t ( s ) | Ω n : = { A F t ( s ) : A Ω n } ( F n is a sigma-algebra on Ω n since Ω n F t ( s ) ). Now, consider the map V n ( s , t ) f : ( Ω n , F n ) ( Y , B o r ( Y ) ) :
V n ( s , t ) f : = k = 1 n Γ x k 1 ( s ) T k 1 ( s ) , T k ( s ) D ( x k 1 ( s ) , x k ( s ) ) Γ x n ( s ) T n ( s ) , t f .
We have:
V ( s , t ) f 1 ( E ) = n N { V n ( s , t ) f E } Ω n = n N { ω Ω n : V n ( s , t ) f E }
= n N V n ( s , t ) f 1 ( E ) .
Therefore, it remains to show that V n ( s , t ) f 1 ( E ) F n , since F n F t ( s ) . First, let n > 0 . Notice that V n ( s , t ) f = ψ β n α n β 1 α 1 ϕ , where:
ϕ : Ω n J ( s ) × J × Ω n Y × Ω n
ω ( T n ( s ) ( ω ) , x n ( s ) ( ω ) , ω ) ( Γ x n ( s ) ( ω ) ( T n ( s ) ( ω ) , t ) f , ω ) .
The previous mapping holding since T k ( s ) ( ω ) [ s , t ] ω Ω n , k [ | 1 , n | ] . ϕ is measurable iff each one of the coordinate mappings are. The canonical projections are trivially measurable. Let A B o r ( J ( s ) ) , B B o r ( J ) . We have:
{ ω Ω n : T n ( s ) A } = Ω n T n N s ( t ) ( s ) 1 ( A ) F n
{ ω Ω n : x n ( s ) B } = Ω n x n N s ( t ) ( s ) 1 ( B ) F n .
Now, by measurability assumption, we have for B B o r ( Y ) :
{ ( t n , y n ) J ( s ) × J : Γ y n ( t t n , t t n ) f B } = C B o r ( J ( s ) ) B o r ( J )
Therefore , { ( t n , y n , ω ) J ( s ) × J × Ω n : Γ y n ( t t n , t t n ) f B }
= C × Ω n B o r ( J ( s ) ) B o r ( J ) F n .
Therefore, ϕ is F n B o r ( Y ) F n measurable. Define for i [ 1 , n ] :
α i : Y × Ω n J × J × Y × Ω n Y × Ω n
( g , ω ) ( x n i ( s ) ( ω ) , x n i + 1 ( s ) ( ω ) , g , ω ) ( D ( x n i ( s ) ( ω ) , x n i + 1 ( s ) ( ω ) ) g , ω ) .
Again, the canonical projections are trivially measurable. We have for p [ | 0 , n | ] :
{ ω Ω n : x p ( s ) B } = Ω n x p N s ( t ) ( s ) 1 ( B ) : = C F n
Therefore , { ( g , ω ) Y × Ω n : x p ( s ) B } = Y × C B o r ( Y ) F n .
Now, by measurability assumption, B B o r ( Y ) , C B o r ( J ) B o r ( J ) B o r ( Y ) :
{ ( y n i , y n i + 1 , g , ω ) J × J × Y × Ω n : D ( y n i , y n i + 1 ) g B }
= C × Ω n B o r ( J ) B o r ( J ) B o r ( Y ) F n ,
which proves the measurability of α i . Then, we define for i [ 1 , n ] :
β i : Y × Ω n J ( s ) × J ( s ) × J × Y × Ω n Y × Ω n
( g , ω ) ( T n i ( s ) ( ω ) , T n i + 1 ( s ) ( ω ) , x n i ( s ) ( ω ) , g , ω )
( Γ x n i ( s ) ( ω ) ( T n i ( s ) ( ω ) , T n i + 1 ( s ) ( ω ) ) g , ω ) .
By measurability assumption, B B o r ( Y ) , C B o r ( J ( s ) ) B o r ( J ( s ) ) B o r ( J ) B o r ( Y ) :
{ ( t n i , t n i + 1 , y n i , g , ω ) J ( s ) × J ( s ) × J × Y × Ω n : Γ y n i ( t n i t n i + 1 , t n i t n i + 1 ) g B }
= C × Ω n B o r ( J ( s ) ) B o r ( J ( s ) ) B o r ( J ) B o r ( Y ) F n ,
which proves the measurability of β i . Finally, define the canonical projection:
ψ : Y × Ω n Y
( g , ω ) g
which proves the measurability of V n ( s , t ) f . For n = 0 , we have V n ( s , t ) f = Γ x ( s ) s , t f and the proof is similar.  □
The following result characterizes an inhomogeneous random evolution as a propagator, shows that it is right-continuous and that it satisfies an integral representation (which is used extensively below). It also clarifies why we use the terminology “continuous inhomogeneous Y-random evolution” when D = I .
Proposition 2.
Let $V$ be an inhomogeneous $Y$-random evolution and $(s,t) \in \Delta_J$, $\omega \in \Omega$. Then, $V(\cdot,\cdot)(\omega)$ is a $Y$-propagator. If we assume that $V$ is regular, then we have on $\Omega$ the following integral representation:
$$V(s,t)f = f + \int_s^t V(s,u)A_{x(u)}(u)f\,du + \sum_{k=1}^{N_s(t)} V\big(s, T_k(s)^-\big)\big[D\big(x_{k-1}(s), x_k(s)\big) - I\big]f.$$
Further, $u \mapsto V(s,u)(\omega)$ is $Y$-strongly RCLL on $J(s)$, i.e., $\forall f \in Y$, $u \mapsto V(s,u)(\omega)f \in D(J(s), (Y, \|\cdot\|))$. More precisely, we have, for $f \in Y$:
$$V(s,u^-)f = V(s,u)f \quad \text{if } u \notin \{T_n(s) : n \in \mathbb{N}\},$$
$$V\big(s, T_{n+1}(s)\big)f = V\big(s, T_{n+1}(s)^-\big)\, D\big(x_n(s), x_{n+1}(s)\big)f \quad \forall n \in \mathbb{N},$$
where we denote $V(s,t^-)f := \lim_{u \uparrow t} V(s,u)f$.
Proof of Proposition 2.
The fact that V ( s , t ) B ( Y ) is straightforward from the definition of V and using the fact that ( D ( x , y ) ) ( x , y ) J 2 are B ( Y ) -contractions. We can also obtain easily that V is a propagator by straightforward computations. We now show that u V ( s , u ) ( ω ) is Y-strongly continuous on each [ T n ( s ) , T n + 1 ( s ) ) J ( s ) , n N and Y-strongly RCLL at each T n + 1 ( s ) J ( s ) , n N . Let n N such that T n ( s ) J ( s ) . t [ T n ( s ) , T n + 1 ( s ) ) J ( s ) , we have:
V ( s , t ) = k = 1 n Γ x k 1 ( s ) T k 1 ( s ) , T k ( s ) D ( x k 1 ( s ) , x k ( s ) ) Γ x n ( s ) T n ( s ) , t .
Therefore, by Y-strong t-continuity of Γ, we get that u V ( s , u ) ( ω ) is Y-strongly continuous on [ T n ( s ) , T n + 1 ( s ) ) J ( s ) . If T n + 1 ( s ) J ( s ) , the fact that V ( s , ) has a left limit at T n + 1 ( s ) also comes from the Y-strong t-continuity of Γ:
V ( s , T n + 1 ( s ) ) f = lim h 0 G n s Γ x n ( s ) ( T n ( s ) , T n + 1 ( s ) h ) f = G n s Γ x n ( s ) ( T n ( s ) , T n + 1 ( s ) ) f
G n s = k = 1 n Γ x k 1 ( s ) T k 1 ( s ) , T k ( s ) D ( x k 1 ( s ) , x k ( s ) ) .
Therefore, we get the relationship:
V ( s , T n + 1 ( s ) ) f = V ( s , T n + 1 ( s ) ) D ( x n ( s ) , x n + 1 ( s ) ) f .
To prove the integral representation, let s J , ω Ω , f Y 1 . We proceed by induction and show that n N , we have t [ T n ( s ) , T n + 1 ( s ) ) J ( s ) :
V ( s , t ) f = f + s t V ( s , u ) A x ( u ) ( u ) f d u + k = 1 n V ( s , T k ( s ) ) [ D ( x k 1 ( s ) , x k ( s ) ) I ] f .
For n = 0 , we have t [ s , T 1 ( s ) ) J ( s ) : V ( s , t ) f = Γ x ( s ) ( s , t ) f , and therefore V ( s , t ) f = f + s t V ( s , u ) A x ( u ) ( u ) f d u by regularity of Γ. Now, assume that the property is true for n 1 , namely: t [ T n 1 ( s ) , T n ( s ) ) J ( s ) . We have:
V ( s , t ) f = f + s t V ( s , u ) A x ( u ) ( u ) f d u + k = 1 n 1 V ( s , T k ( s ) ) [ D ( x k 1 ( s ) , x k ( s ) ) I ] f .
Therefore, it implies that (by continuity of the Bochner integral):
V ( s , T n ( s ) ) f = f + s T n ( s ) V ( s , u ) A x ( u ) ( u ) f d u + k = 1 n 1 V ( s , T k ( s ) ) [ D ( x k 1 ( s ) , x k ( s ) ) I ] f .
Now, t [ T n ( s ) , T n + 1 ( s ) ) J ( s ) we have that:
V ( s , t ) = G n s Γ x n ( s ) ( T n ( s ) , t )
G n s : = k = 1 n Γ x k 1 ( s ) T k 1 ( s ) , T k ( s ) D ( x k 1 ( s ) , x k ( s ) ) ,
and therefore t [ T n ( s ) , T n + 1 ( s ) ) J ( s ) , by Theorem 2 and regularity of Γ:
t V ( s , t ) f = V ( s , t ) A x ( t ) ( t ) f V ( s , t ) f = V ( s , T n ( s ) ) f + T n ( s ) t V ( s , u ) A x ( u ) ( u ) f d u .
Further, we already proved that V ( s , T n ( s ) ) f = V ( s , T n ( s ) ) D ( x n 1 ( s ) , x n ( s ) ) f . Therefore, combining these results we have:
V ( s , t ) f = V ( s , T n ( s ) ) D ( x n 1 ( s ) , x n ( s ) ) f + T n ( s ) t V ( s , u ) A x ( u ) ( u ) f d u
= V ( s , T n ( s ) ) f + T n ( s ) t V ( s , u ) A x ( u ) ( u ) f d u + V ( s , T n ( s ) ) D ( x n 1 ( s ) , x n ( s ) ) f V ( s , T n ( s ) ) f
= f + s T n ( s ) V ( s , u ) A x ( u ) ( u ) f d u + k = 1 n 1 V ( s , T k ( s ) ) [ D ( x k 1 ( s ) , x k ( s ) ) I ] f
+ T n ( s ) t V ( s , u ) A x ( u ) ( u ) f d u + V ( s , T n ( s ) ) D ( x n 1 ( s ) , x n ( s ) ) f V ( s , T n ( s ) ) f
= f + s t V ( s , u ) A x ( u ) ( u ) f d u + k = 1 n V ( s , T k ( s ) ) [ D ( x k 1 ( s ) , x k ( s ) ) I ] f .
 □

4. Weak Law of Large Numbers

In this section, we introduce a rescaled random evolution $V^\epsilon$, in which time is rescaled by a small parameter $\epsilon$. The main result of this section is Theorem 8 in Section 4.5. To prove the weak convergence of $V^\epsilon$ to some regular propagator $\widehat{\Gamma}$, we prove in Section 4.3 that $V^\epsilon$ is relatively compact, which informally means that, for any sequence $\epsilon_n \to 0$, there exists a subsequence $\{n_k\}$ along which $V^{\epsilon_{n_k}}$ converges weakly. To show the convergence of $V^\epsilon$ to $\widehat{\Gamma}$, we need to show that all limit points of the latter $V^{\epsilon_{n_k}}$ are equal to $\widehat{\Gamma}$. To prove relative compactness, we need, among other things, that $V^\epsilon$ satisfies the so-called compact containment criterion (CCC), which in short requires that, for every $f \in Y$, $V^\epsilon(s,t)f$ remain in a compact set of $Y$ with an arbitrarily high probability as $\epsilon \to 0$. This compact containment criterion is the topic of Section 4.2. Section 4.1 introduces the rescaled random evolution $V^\epsilon$ as well as some regularity assumptions (condensed in Assumptions 1, Section 4.1, for the spaces and operators under consideration, and in Assumptions 2, Section 4.1, for the semi-Markov processes) which are assumed to hold throughout the rest of the paper. It also reminds the reader of some definitions and results on relative compactness in the Skorokhod space, which are mostly taken from the well-known book [20]. Finally, the main WLLN result, Theorem 8, is proved using a martingale method similar in spirit to what is done in [10] (Chapter 4, Section 4.2.1) for time-homogeneous random evolutions. This method is here adapted rigorously to the time-inhomogeneous setting: this is the topic of Section 4.4. The martingale representation presented in Lemma 5 of Section 4.4 is also used in Section 5 to prove a CLT for time-inhomogeneous random evolutions.

4.1. Preliminary Definitions and Assumptions

In this section, we prove a weak law of large numbers for inhomogeneous random evolutions. We rescale both time and the jump operators $D$ in a suitable way by a small parameter $\epsilon$ and study the limiting behavior of the rescaled random evolution. To this end, in the same way as we introduced inhomogeneous $Y$-random evolutions, we consider a family $(D^\epsilon(x,y))_{(x,y) \in \mathbb{J}^2,\, \epsilon \in (0,1]}$ of $B(Y)$-contractions, satisfying, $\forall \epsilon \in (0,1]$:
$$(x, y, f) \mapsto D^\epsilon(x,y)f \quad \text{is } Bor(\mathbb{J}) \otimes Bor(\mathbb{J}) \otimes Bor(Y)\text{-}Bor(Y)\text{ measurable},$$
and we let $D^0(x,y) := I$. We define:
$$D(D_1) := \bigcap_{\epsilon \in [0,1]} \bigcap_{(x,y) \in \mathbb{J}^2} \Big\{ f \in Y : \lim_{\substack{h \to 0 \\ \epsilon+h \in [0,1]}} \frac{D^{\epsilon+h}(x,y)f - D^\epsilon(x,y)f}{h} \in Y \Big\}$$
and, for $f \in D(D_1)$:
$$D_1^\epsilon(x,y)f := \lim_{\substack{h \to 0 \\ \epsilon+h \in [0,1]}} \frac{D^{\epsilon+h}(x,y)f - D^\epsilon(x,y)f}{h}.$$
The latter operators correspond, in short, to the (first-order) derivatives of the operators $D^\epsilon$ with respect to $\epsilon$. We need them in the following to be able to use the expansion $D^\epsilon \approx I + \epsilon D_1^\epsilon + \dots$, which proves useful when proving limit theorems for random evolutions. In the same way, we introduce $D_2^\epsilon$, corresponding to the second derivative. We also let:
$$D(D_1^0 Y_1) := \big\{ f \in D(D_1) \cap Y_1 : D_1^0(x,y)f \in Y_1\ \forall (x,y) \in \mathbb{J}^2 \big\}.$$
For $x \in \mathbb{J}$, remembering the definition of $D(A_x Y_1)$ in (38), we let:
$$D(A_x') := \Big\{ f \in D(A_x Y_1) : Y_1\text{-}\lim_{\substack{h \to 0 \\ t+h \in J}} \frac{A_x(t+h)f - A_x(t)f}{h} \in Y_1 \quad \forall t \in J \Big\}$$
and, for $t \in J$, $f \in D(A_x')$:
$$A_x'(t)f := Y_1\text{-}\lim_{\substack{h \to 0 \\ t+h \in J}} \frac{A_x(t+h)f - A_x(t)f}{h}.$$
Here, $Y_1\text{-}\lim$ simply indicates that the limit is taken in the $Y_1$ norm. We also introduce the space $\widehat{D}$ on which we mostly work:
$$\widehat{D} := \bigcap_{x \in \mathbb{J}} D(A_x') \cap D(D_2) \cap D(D_1^0 Y_1).$$
Throughout this section, we make the following set of regularity assumptions, which we first state and then comment on immediately afterwards. We recall that the various notions of continuity and regularity are defined in Section 2.
Assumptions 1.
Assumptions on the structure of spaces:
1. 
The subset D ^ contains a countable family which is dense in both Y 1 and Y.
2. 
Y 1 D ( D 1 ) .
Assumptions on the regularity of operators:
1. 
( Γ x ) x J are regular Y-propagators.
2. 
A x is Y 1 -strongly continuous, x J .
Assumptions on the boundedness of operators:
1. 
( Γ x ) x J are B ( Y ) -exponentially bounded, i.e., γ 0 such that | | Γ x ( s , t ) | | B ( Y ) e γ ( t s ) , for all x J , ( s , t ) Δ J .
2. 
A x ( t ) B ( Y 1 , Y ) and sup u [ 0 , t ] | | A x ( u ) | | B ( Y 1 , Y ) < t J , x J .
3. 
sup t [ 0 , T ] x J | | A x ( t ) f | | < , sup t [ 0 , T ] x J | | A x ( t ) f | | Y 1 < , f x J D ( A x ) , for all T J .
4. 
D 1 0 ( x , y ) B ( Y 1 , Y ) x , y J .
5. 
sup ϵ [ 0 , 1 ] ( x , y ) J 2 | | D 1 ϵ ( x , y ) f | | < , f D ( D 1 ) .
6. 
sup ϵ [ 0 , 1 ] ( x , y ) J 2 | | D 2 ϵ ( x , y ) f | | < , f D ( D 2 ) .
Assumptions 2.
Assumptions on the semi-Markov process:
1. 
(ergodicity) The assumptions from [5] hold true for the function $t \mapsto t$, so that:
$$\lim_{t \to \infty} \frac{N(t)}{t} = \frac{1}{\Pi m} \quad \text{a.e.}$$
2. 
(uniform boundedness of sojourn increments) $\exists \bar{\tau} > 0$ such that:
$$\sup_{t \in \mathbb{R}_+,\, i \in \mathbb{J}} F_t(i, \bar{\tau}) = 1.$$
3. 
(regularity of the inhomogeneous Markov renewal process) The regularity conditions on $F_t(i, \cdot)$ are satisfied (see [5]), namely: there exist $\tau > 0$ and $\beta > 0$ such that:
$$\sup_{t \in \mathbb{R}_+,\, i \in \mathbb{J}} F_t(i, \tau) < 1 - \beta.$$
Let us make a few comments on the previous assumptions. The assumptions regarding the regularity of operators mainly ensure that we can use the results obtained on propagators in Section 2, for example Theorem 5. The (strong) continuity of $A_x$ also proves to be useful when working with convergence in the Skorokhod space. The assumptions on the boundedness of operators are used to show that various quantities converge well. Finally, regarding the assumptions on the semi-Markov process, the almost sure convergence of $t^{-1}N(t)$ as $t \to \infty$ is used very often: it is one of the fundamental requirements for the work below. The uniform boundedness of the sojourn increments is a mild assumption in practice. There might be a possibility to weaken it, but the proofs would become heavier, for example because the jumps of the martingales introduced below would no longer be uniformly bounded.
Notation: In the following, we let, for $n \in \mathbb{N}$, $i \in \mathbb{J}$ and $t \in \mathbb{R}_+$ (their existence being guaranteed by Assumption 1):
$$m_n(i,t) := \int_0^\infty s^n F_t(i, ds), \qquad m_n(i) := \int_0^\infty s^n F(i, ds).$$
We also let $J' := J$ if $J = \mathbb{R}_+$ and $J' := [0, T - \bar{\tau})$ if $J = [0,T]$; in the latter case, it is assumed that $T > \bar{\tau}$. Similarly, we let, for $s \in J'$: $J'(s) := \{t \in J' : s \le t\}$.
We now introduce the rescaled random evolution, using the notation $t^{\epsilon,s} := s + \epsilon(t - s)$:
Definition 8.
Let $V$ be an inhomogeneous $Y$-random evolution. We define (pathwise on $\Omega$) the rescaled inhomogeneous $Y$-random evolution $V^\epsilon$, for $\epsilon \in (0,1]$ and $(s,t) \in \Delta_J$, by:
$$V^\epsilon(s,t) := \prod_{k=1}^{N_s(t^{1/\epsilon,s})} \Gamma_{x_{k-1}(s)}\big(T_{k-1}^{\epsilon,s}(s), T_k^{\epsilon,s}(s)\big)\, D^\epsilon\big(x_{k-1}(s), x_k(s)\big) \cdot \Gamma_{x(t^{1/\epsilon,s})}\big(T^{\epsilon,s}_{N_s(t^{1/\epsilon,s})}(s),\, t\big).$$
Remark 2.
We notice that $V^\epsilon$ is well-defined since, on $\Omega$:
$$T^{\epsilon,s}_{N_s(t^{1/\epsilon,s})}(s) = s + \epsilon\Big(T_{N_s(t^{1/\epsilon,s})}(s) - s\Big) \le s + \epsilon\big(t^{1/\epsilon,s} - s\big) = t,$$
and that it coincides with $V$ for $\epsilon = 1$, i.e., $V^1(s,t) = V(s,t)$.
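To build intuition for Definition 8 and for the law of large numbers below, the following toy sketch (finite-dimensional, with hypothetical matrix generators $A_x$, perturbed jump operators $D^\epsilon(x,y) = I + \epsilon C(x,y)$ and exponential sojourns) samples $V^\epsilon(0,t)f$ repeatedly: the dispersion across samples shrinks as $\epsilon$ decreases, reflecting the convergence of $V^\epsilon$ to a deterministic limit.

```python
# Toy illustration of the rescaled evolution V^eps(0, t): jumps are taken up to time
# t/eps, sojourns are compressed by eps, and D^eps(x,y) = I + eps*C(x,y).
# All matrices and distributions are hypothetical.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = {0: np.array([[0.0, 0.3], [-0.3, 0.0]]),
     1: np.array([[-0.2, 0.0], [0.0, -0.1]])}
C = {(0, 1): np.array([[0.0, 0.1], [0.0, 0.0]]),
     (1, 0): np.array([[0.0, 0.0], [0.1, 0.0]])}
f, t_end = np.array([1.0, 0.0]), 1.0

def sample_Veps_f(eps):
    V, t, x = np.eye(2), 0.0, 0
    while True:
        sojourn = rng.exponential(1.0)
        if t + sojourn >= t_end / eps:                    # last (incomplete) rescaled sojourn
            return V @ expm((t_end - eps * t) * A[x]) @ f
        V = V @ expm(eps * sojourn * A[x])                # propagator over the rescaled sojourn
        y = 1 - x
        V = V @ (np.eye(2) + eps * C[(x, y)])             # D^eps(x,y) = I + eps*C(x,y)
        t, x = t + sojourn, y

for eps in (0.2, 0.05, 0.01):
    samples = np.array([sample_Veps_f(eps) for _ in range(200)])
    print(eps, samples.std(axis=0))                       # fluctuations shrink as eps -> 0
```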
Our goal is to prove, as in [10], that, for each $f$ in some suitable subset of $Y$, $\{V^\epsilon(s,\cdot)f\}$, seen as a family of elements of $D(J(s), Y)$, converges weakly to some continuous limiting process $V_0(s,\cdot)f$ to be determined. To this end, we first prove that $\{V^\epsilon(s,\cdot)f\}$ is relatively compact with almost surely continuous weak limit points. This is equivalent to the notion of C-tightness in [21] (VI.3), because $\mathcal{P}(D(J(s), Y))$ topologized with the Prohorov metric is a separable and complete metric space ($Y$ being a separable Banach space), which implies that relative compactness and tightness are equivalent in $\mathcal{P}(D(J(s), Y))$ (by Prohorov's theorem). Then, we identify the limiting operator-valued process $V_0$, using the results of [14]. We first need some elements that can be found in [10] (Section 1.4) and [20] (Sections 3.8–3.11):
Definition 9.
Let ( ν n ) n N be a sequence of probability measures on a metric space ( S , d ) . We say that ν n converges weakly to ν, and write ν n ν iff f C b ( S , R ) :
lim n S f d ν n = S f d ν
Definition 10.
Let { ν ϵ } be a family of probability measures on a metric space ( S , d ) . { ν ϵ } is said to be relatively compact iff for any sequence ( ν n ) n N { ν ϵ } , there exists a weakly converging subsequence.
Definition 11.
Let s J , { X ϵ } a family of stochastic processes with sample paths in D ( J ( s ) , Y ) . We say that { X ϵ } is relatively compact iff { L ( X ϵ ) } is (in the metric space P ( D ( J ( s ) , Y ) ) endowed with the Prohorov metric). We write that X ϵ X iff L ( X ϵ ) L ( X ) . We say that { X ϵ } is C-relatively compact iff it is relatively compact and if ever X ϵ X , then X has a.e. continuous sample paths. If E Y Y , we say that { V ϵ } is E Y -relatively compact (respectively, E Y -C-relatively compact) iff { V ϵ ( s , ) f } is f E Y , s J .
Definition 12.
Let s J , { X ϵ } be a family of stochastic processes with sample paths in D ( J ( s ) , Y ) . We say that { X ϵ } satisfies the compact containment criterion ( { X ϵ } CCC) if Δ ( 0 , 1 ] , t J ( s ) Q , K Y compact set such that:
lim inf ϵ 0 P [ X ϵ ( t ) K ] 1 Δ .
We say that { V ϵ } satisfies the compact containment criterion in E Y Y ( { V ϵ } E Y -CCC), if f E Y , s J , { V ϵ ( s , ) f } CCC.
Theorem 6.
Let s J , { X ϵ } a family of stochastic processes with sample paths in D ( J ( s ) , Y ) . { X ϵ } is C-relatively compact iff it is relatively compact and j s ( X ϵ ) 0 , where:
j s ( X ) : = J ( s ) e u ( j s ( X , u ) 1 ) d u
j s ( X , u ) : = sup t [ s , u ] | | X ( t ) X ( t ) | | .
Theorem 7.
Let s J , { X ϵ } be a family of stochastic processes with sample paths in D ( J ( s ) , Y ) . { X ϵ } is relatively compact in D ( J ( s ) , Y ) iff:
1. 
{ X ϵ } CCC
2. 
T J ( s ) , r > 0 and a family { C s ( ϵ , η ) : ( ϵ , η ) ( 0 , 1 ] × ( 0 , 1 ) } of nonnegative random variables such that ( ϵ , η ) ( 0 , 1 ] × ( 0 , τ ¯ 1 ) , h [ 0 , η ] , t [ s , T ] :
E | | X ϵ ( t + h ) X ϵ ( t ) | | r G t ϵ , s E [ C s ( ϵ , η ) | G t ϵ , s ]
lim η 0 lim sup ϵ 0 E [ C s ( ϵ , η ) ] = 0 ,
where G t ϵ , s : = σ X ϵ ( u ) : u [ s , t ] .
If { X ϵ } is relatively compact, then the stronger compact containment criterion holds: Δ ( 0 , 1 ] , T J ( s ) , K Y compact set such that:
lim inf ϵ 0 P [ X ϵ ( t ) K t [ s , T ] ] 1 Δ .

4.2. The Compact Containment Criterion

We show that, to prove relative compactness, we need to prove that the compact containment criterion is satisfied. We give below some sufficient conditions for which it is the case, in particular for the space C 0 ( R d ) , which is used in many applications. In [6], it is mentioned that there exists a compact embedding of a Hilbert space into C 0 ( R d ) . Unfortunately, this is not true (to the best of our knowledge), and we show below in Proposition 4 how to overcome this problem. This latter proposition is applied in Section 6 to the time-inhomogeneous Lévy case, and the corresponding proof can easily be recycled for many other examples.
Proposition 3.
Assume that there exists a Banach space ( Z , | | · | | Z ) compactly embedded in Y, that ( Γ x ) x J , are B ( Z ) -exponentially bounded (uniformly in J ), and that ( D ϵ ( x , y ) ) ϵ ( 0 , 1 ] ( x , y ) J 2 are B ( Z ) -contractions. Then, { V ϵ } Z -CCC.
Proof. 
Let f Z , ( s , t ) Δ J , and assume | | Γ x ( s , t ) f | | Z e r ( t s ) | | f | | Z for some r 0 . Let c : = e r ( t s ) | | f | | Z and K : = c l ( Y ) S c ( Z ) , the Y-closure of the Z-closed ball of radius c. K is compact because of the compact embedding of Z into Y. Let ϵ ( 0 , 1 ] . We have ω Ω : | | V ϵ ( s , t ) ( ω ) f | | Z c . Therefore, V ϵ ( s , t ) ( ω ) f S c ( Z ) K and so P [ V ϵ ( s , t ) f K ] = P ( Ω ) = 1 1 Δ .  □
For example, we can consider the Rellich–Kondrachov compactness theorem: if $U \subseteq \mathbb{R}^d$ is an open, bounded Lipschitz domain, then the Sobolev space $W^{1,p}(U)$ is compactly embedded in $L^q(U)$, where $p \in [1, d)$ and $q \in [1, \frac{dp}{d-p})$.
For the space C 0 ( R d ) , there is no well-known such compact embedding, therefore we have to proceed differently. The result below is applied for the time-inhomogeneous Lévy case (see Section 6), and the corresponding proof can easily be recycled for other examples.
Proposition 4.
Let Y : = C 0 ( R d ) , E Y Y . Assume that Δ ( 0 , 1 ] , ( s , t ) Δ J , ϵ ( 0 , 1 ] , f E Y , A ϵ Ω : P ( A ϵ ) 1 Δ and the family { V ϵ ( s , t ) ( ω ) f : ϵ ( 0 , 1 ] , ω A ϵ } converge uniformly to 0 at infinity, is equicontinuous and uniformly bounded. Then, { V ϵ } E Y -CCC.
Proof. 
Let f E Y , K the Y-closure of the set:
K 1 : = { V ϵ ( s , t ) ( ω ) f : ϵ ( 0 , 1 ] , ω A ϵ } .
K 1 is a family of elements of Y that are equicontinuous, uniformly bounded and that converge uniformly to 0 at infinity by assumption. Therefore, it is well-known, using the Arzela–Ascoli theorem on the one-point compactification of R d , that K 1 is relatively compact in Y and therefore that K is compact in Y. We have ϵ ( 0 , 1 ] :
P [ V ϵ ( s , t ) f K ] P [ ω A ϵ : V ϵ ( s , t ) f K ] = P ( A ϵ ) 1 Δ .
 □

4.3. Relative Compactness of { V ϵ }

This section is devoted to proving that $\{V^\epsilon\}$ is relatively compact. In the following, we assume that $\{V^\epsilon\}$ satisfies the compact containment criterion (see [14]):
$$\{V^\epsilon\} \text{ is } Y_1\text{-CCC}.$$
We first state an integral representation of $\{V^\epsilon\}$, whose proof is the same as the proof of Proposition 2.
Lemma 1.
Let Assumption 1 hold true. Let $(s,t) \in \Delta_J$, $f \in Y_1$. Then, $V^\epsilon$ satisfies, on $\Omega$:
$$V^\epsilon(s,t)f = f + \int_s^t V^\epsilon(s,u)A_{x(u^{1/\epsilon,s})}(u)f\,du + \sum_{k=1}^{N_s(t^{1/\epsilon,s})} V^\epsilon\big(s, T_k^{\epsilon,s}(s)^-\big)\big[D^\epsilon\big(x_{k-1}(s), x_k(s)\big) - I\big]f.$$
We now prove that { V ϵ } is relatively compact.
Lemma 2.
Let Assumptions 1 and 2 hold true. Then, $\{V^\epsilon\}$ is $Y_1$-relatively compact.
Proof. 
We use Theorem 7 to show this result. Using Lemma 1, we have for h [ 0 , η ] :
| | V ϵ ( s , t + h ) f V ϵ ( s , t ) f | |
t t + h V ϵ ( s , u ) A x u 1 ϵ , s ( u ) f d u + k = N s t 1 ϵ , s + 1 N s ( t + h ) 1 ϵ , s V ϵ ( s , T k ϵ , s ( s ) ) [ D ϵ ( x k 1 ( s ) , x k ( s ) ) I ] f
η M 1 + ϵ e γ ( T + 1 s ) k = N s t 1 ϵ , s + 1 N s ( t + η ) 1 ϵ , s 1 ϵ | | D ϵ ( x k 1 ( s ) , x k ( s ) ) f f | |
η M 1 + ϵ M 2 N ( t + η ) 1 ϵ , s N t 1 ϵ , s .
where
M 1 : = e γ ( T + 1 s ) sup x J , u [ s , T + 1 τ ¯ ] | | A x ( u ) | | B ( Y 1 , Y ) | | f | | Y 1 ,
M 2 : = e γ ( T + 1 s ) sup ϵ , x , y | | D 1 ϵ ( x , y ) f | | ,
by Assumption 1. Now, for ϵ ( 0 , 1 ] :
ϵ N ( t + η ) 1 ϵ , s N t 1 ϵ , s
ϵ sup t [ s , s + η ] N ( t + η ) 1 ϵ , s N t 1 ϵ , s + ϵ sup t [ s + η , T ] N ( t + η ) 1 ϵ , s N t 1 ϵ , s
ϵ N ( s + 2 η ) 1 ϵ , s + ϵ sup t [ s + η , T ] N ( t + η ) 1 ϵ , s N t 1 ϵ , s .
Note that the supremums in the previous expression are a.e. finite as they are a.e. bounded by N ( T + 1 ) 1 ϵ , s . Now, let:
C s ( ϵ , η ) : = η M 1 + M 2 ϵ N ( s + 2 η ) 1 ϵ , s + M 2 ϵ sup t [ s + η , T ] N ( t + η ) 1 ϵ , s N t 1 ϵ , s .
We have to show that lim η 0 lim ϵ 0 E [ C s ( ϵ , η ) ] = 0 . We have:
lim η 0 lim ϵ 0 η M 1 + M 2 ϵ E N ( s + 2 η ) 1 ϵ , s = lim η 0 η M 1 + M 2 2 η Π m = 0 .
Let { ϵ n } any sequence converging to 0, and denote
Z n : = ϵ n sup t [ s + η , T ] N ( t + η ) 1 ϵ n , s N t 1 ϵ n , s .
We first want to show that { Z n } is uniformly integrable. According to [20], it is sufficient to show that sup n E ( Z n 2 ) < . We have that E ( Z n 2 ) ϵ n 2 E N 2 ( T + 1 ) 1 ϵ n , s . By Assumption 1 (more precisely, the regularity of the inhomogeneous Markov renewal process), we get:
lim t E ( N 2 ( t ) ) t 2 < ,
and therefore { Z n } is uniformly integrable. Then, we show that Z n a . e . Z : = η Π m . Let:
Ω : = lim ϵ 0 ϵ N ( s + 1 ) 1 ϵ , s = 1 Π m .
Let Ω and δ > 0 . There exists some constant r 2 ( , δ ) > 0 such that for ϵ < r 2 :
ϵ N ( s + 1 ) 1 ϵ , s 1 Π m < δ T + η ,
and if t [ s + η , T + η ] :
( t s ) ϵ N ( s + 1 ) 1 ϵ , s t s Π m < δ ( t s ) T + η δ .
Let ϵ < η r 2 (recall η > 0 ) and ϵ 2 : = ϵ t s . Then, ϵ 2 < η r 2 η = r 2 , and therefore:
( t s ) ϵ 2 N ( s + 1 ) 1 ϵ 2 , s t s Π m < δ
ϵ N t 1 ϵ , s t s Π m < δ .
Therefore, for ϵ < η r 2 and t [ s + η , T ] :
ϵ N ( t + η ) 1 ϵ , s ϵ N t 1 ϵ , s η Π m < 2 δ
sup t [ s + η , T ] ϵ N ( t + η ) 1 ϵ , s ϵ N t 1 ϵ , s η Π m 2 δ < 3 δ .
We have proved that Z n a . e . Z . By uniform integrability of { Z n } , we get that lim n E ( Z n ) = E ( Z ) and therefore since the sequence { ϵ n } is arbitrary:
lim ϵ 0 ϵ E sup t [ s + η , T ] N ( t + η ) 1 ϵ , s N t 1 ϵ , s = η Π m .
 □
We now prove that the limit points of { V ϵ } are continuous.
Lemma 3.
Let Assumptions 1 and 2 hold true. Then, { V ϵ } is Y 1 -C-relatively compact.
Proof. 
The proof is presented for the case J = R + . The proof for the case J = [ 0 , T ] is exactly the same. By Lemma 2, it is relatively compact. By Theorem 6, it is sufficient to show that j s ( V ϵ ( s , ) f ) P 0 . Let δ > 0 and fix T > 0 . For u [ s , T ] , we have:
j s ( V ϵ ( s , ) f , u ) sup t [ s , T ] | | V ϵ ( s , t ) f V ϵ ( s , t ) f | |
= max k 1 , N s T 1 ϵ , s V ϵ s , T k ϵ , s ( s ) f V ϵ s , T k ϵ , s ( s ) f
( using Lemma 1 ) = max k 1 , N s T 1 ϵ , s | | V ϵ s , T k ϵ , s ( s ) ( D ϵ ( x k 1 ( s ) , x k ( s ) ) f f ) | |
e γ ( T s ) max ( x , y ) J 2 | | D ϵ ( x , y ) f f | | C T ϵ ,
with C T : = e γ ( T s ) sup ϵ , x , y | | D 1 ϵ ( x , y ) f | | ( by Assumption 1 ) .
Since:
j s ( V ϵ ( s , ) f ) = s T e u ( j s ( V ϵ ( s , ) f , u ) 1 ) d u + T e u ( j s ( V ϵ ( s , ) f , u ) 1 ) d u
C T ϵ + e T ,
we get j s ( V ϵ ( s , ) f ) a . e . 0 (choose T big enough, then small enough).  □

4.4. Martingale Characterization of the Random Evolution

To prove the weak law of large numbers for the random evolution, we use a martingale method similar to what is done in [10] (Section 4.2.1), but adapted rigorously to the inhomogeneous setting. We first introduce the quantity f 1 , solution to a suitable “Poisson equation”:
Definition 13.
Let Assumption 1 hold true. For $f \in Y_1$, $x \in \mathbb{J}$, $t \in J$, let $f^\epsilon(x,t) := f + \epsilon f_1(x,t)$, where $f_1$ is the unique solution of the equation:
$$(P - I)\, f_1(\cdot, t)(x) = \Pi m\, \big[\widehat{A}(t) - a(x,t)\big] f,$$
$$a(x,t) := \frac{1}{\Pi m}\Big( m_1(x)\, A_x(t) + P D_1^0(x, \cdot)(x) \Big),$$
$$\widehat{A}(t) := \Pi\, a(\cdot, t),$$
namely $f_1(x,t) = \Pi m\, R_0\big[\widehat{A}(t)f - a(\cdot,t)f\big](x)$, where $R_0 := (P - I + \Pi)^{-1}$ is the fundamental matrix associated with $P$.
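To see what this means in practice, here is a minimal numerical sketch (with a made-up three-state transition matrix $P$; it is not the paper's model) that builds the stationary projector $\Pi$, the fundamental matrix $R_0 = (P - I + \Pi)^{-1}$, and checks that $f_1 = R_0 g$ solves the Poisson equation $(P - I)f_1 = g$ for a centered right-hand side $g$ (i.e., $\Pi g = 0$), exactly as in the definition above.

```python
# Poisson equation behind Definition 13 on a hypothetical three-state space.
import numpy as np

P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])                 # made-up ergodic transition matrix

eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()                              # stationary distribution: pi P = pi
Pi = np.tile(pi, (3, 1))                        # stationary projector: every row equals pi

R0 = np.linalg.inv(P - np.eye(3) + Pi)          # fundamental matrix R0 = (P - I + Pi)^{-1}

g = np.array([1.0, -0.5, 2.0])
g = g - pi @ g                                  # center g so that Pi g = 0
f1 = R0 @ g                                     # candidate solution of (P - I) f1 = g
print(np.allclose((P - np.eye(3)) @ f1, g))     # True: the Poisson equation is solved
```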
Remark 3.
The existence of f 1 is guaranteed because Π [ A ^ ( t ) a ( , t ) ] f = 0 by construction (see [22], Proposition 4). In fact, in [22], the operators Π and P are defined on B R b ( J ) but the results hold true if we work on B E b ( J ) , where E is any Banach space such that [ A ^ ( t ) a ( x , t ) ] f E (e.g., E = Y 1 if f D ^ , E = Y if f Y 1 ). To see that, first observe that P and Π can be defined the same way on B E b ( J ) as they were on B R b ( J ) . Then, take E such that | | | | = 1 and g B E b ( J ) such that | | g | | B E b ( J ) = max x | | g ( x ) | | E = 1 . We therefore have that: | | g | | B R b ( J ) 1 , and since we have the uniform ergodicity on B R b ( J ) , we have that:
sup | | | | = 1 | | g | | B E b ( J ) = 1 x J | P n ( g ) ( x ) Π ( g ) ( x ) | | | P n Π | | B ( B R b ( J ) ) 0 .
By linearity of , P , Π we get that | P n ( g ) ( x ) Π ( g ) ( x ) | = | ( P n g ( x ) Π g ( x ) ) | . However, because | | P n g ( x ) Π g ( x ) | | E = sup | | | | = 1 | ( P n g ( x ) Π g ( x ) ) | and that this supremum is attained (see, e.g., [23], Section III.6), then:
sup | | | | = 1 | | g | | B E b ( J ) = 1 x J | ( P n g ( x ) Π g ( x ) ) | = sup | | g | | B E b ( J ) = 1 x J | | P n g ( x ) Π g ( x ) | | E
= sup | | g | | B E b ( J ) = 1 | | P n g Π g | | B E b ( J ) = | | P n Π | | B ( B E b ( J ) ) ,
and thus we also have | | P n Π | | B ( B E b ( J ) ) 0 , i.e., the uniform ergodicity in B E b ( J ) . Now, according to the proofs of Theorems 3.4 and 3.5, Chapter VI of [24], | | P n Π | | B ( B E b ( J ) ) 0 is the only thing we need to prove that P + Π I is invertible on:
B E Π ( J ) : = { f B E b ( J ) : Π f = 0 } ,
the space E plays no role. Further, ( P + Π I ) 1 B ( B E Π ( J ) ) by the bounded inverse theorem.
We now introduce the martingale ( M ˜ t ϵ ( s ) ) t 0 which plays a central role in the following.
Lemma 4.
Let Assumption 1 hold true. Define recursively, for $\epsilon \in (0,1]$, $s \in J$:
$$V_0^\epsilon(s) := I$$
$$V_{n+1}^\epsilon(s) := V_n^\epsilon(s)\,\Gamma_{x_n(s)}\big(T_n^{\epsilon,s}(s), T_{n+1}^{\epsilon,s}(s)\big)\,D_\epsilon\big(x_n(s), x_{n+1}(s)\big),$$
i.e., $V_n^\epsilon(s) = V_\epsilon\big(s, T_n^{\epsilon,s}(s)\big)$; and for $f \in Y^1$ (we recall that $f^\epsilon(x,t) := f + \epsilon f_1(x,t)$):
$$M_n^\epsilon(s)f := V_n^\epsilon(s)\, f^\epsilon\big(x_n(s), T_n^{\epsilon,s}(s)\big) - f^\epsilon(x(s), s) - \sum_{k=0}^{n-1} \mathbb E\Big[ V_{k+1}^\epsilon(s)\, f^\epsilon\big(x_{k+1}(s), T_{k+1}^{\epsilon,s}(s)\big) - V_k^\epsilon(s)\, f^\epsilon\big(x_k(s), T_k^{\epsilon,s}(s)\big) \,\Big|\, \mathcal F_k(s) \Big],$$
so that $(M_n^\epsilon(s)f)_{n \in \mathbb N}$ is a $\mathcal F_n(s)$-martingale by construction. Let, for $t \in J(s)$:
$$\widetilde M_t^\epsilon(s)f := M^\epsilon_{N_s(t^{\frac{1}{\epsilon},s})+1}(s)f$$
$$\widetilde{\mathcal F}_t^\epsilon(s) := \mathcal F_{N_s(t^{\frac{1}{\epsilon},s})+1}(s),$$
where $\mathcal F_n(s) := \sigma\big(x_k(s), T_k(s) : k \le n\big) \vee \sigma(\mathbb P\text{-null sets})$ and $\mathcal F_{N_s(t^{1/\epsilon,s})+1}(s)$ is defined the usual way (provided we have shown that $N_s(t^{\frac{1}{\epsilon},s})+1$ is a $\mathcal F_n(s)$-stopping time $\forall t \in J(s)$). Then, $\forall \ell \in Y^*$, $s \in J$, $\epsilon \in (0,1]$, $f \in Y^1$, $\big(\ell(\widetilde M_t^\epsilon(s)f), \widetilde{\mathcal F}_t^\epsilon(s)\big)_{t \in J(s)}$ is a real-valued square-integrable martingale.
Proof. 
By construction, $\big(\ell(M_n^\epsilon(s)f), \mathcal F_n(s)\big)$ is a martingale. Let $\theta_s^\epsilon(t) := N_s(t^{\frac{1}{\epsilon},s})+1$. For every $t \in J(s)$, $\theta_s^\epsilon(t)$ is a $\mathcal F_n(s)$-stopping time, because:
$$\{\theta_s^\epsilon(t) = n\} = \{N_s(t^{\frac{1}{\epsilon},s}) = n-1\} = \{T_{n-1}(s) \le t^{\frac{1}{\epsilon},s}\} \cap \{T_n(s) > t^{\frac{1}{\epsilon},s}\} \in \mathcal F_n(s).$$
Let $t_1 \le t_2 \in J(s)$. We have that $\big(\ell(M_{\theta_s^\epsilon(t_2) \wedge n}^\epsilon(s)f), \mathcal F_n(s)\big)$ is a martingale. Assume we have shown that it is uniformly integrable; then we can apply the optional sampling theorem for UI martingales to the stopping times $\theta_s^\epsilon(t_1) \le \theta_s^\epsilon(t_2)$ a.e. and get:
$$\mathbb E\big[\ell\big(M^\epsilon_{\theta_s^\epsilon(t_2) \wedge \theta_s^\epsilon(t_2)}(s)f\big) \,\big|\, \mathcal F_{\theta_s^\epsilon(t_1)}(s)\big] = \ell\big(M^\epsilon_{\theta_s^\epsilon(t_1) \wedge \theta_s^\epsilon(t_2)}(s)f\big) \quad a.e.$$
$$\Rightarrow\quad \mathbb E\big[\ell\big(M^\epsilon_{\theta_s^\epsilon(t_2)}(s)f\big) \,\big|\, \mathcal F_{\theta_s^\epsilon(t_1)}(s)\big] = \ell\big(M^\epsilon_{\theta_s^\epsilon(t_1)}(s)f\big) \quad a.e.$$
$$\Rightarrow\quad \mathbb E\big[\ell\big(\widetilde M_{t_2}^\epsilon(s)f\big) \,\big|\, \widetilde{\mathcal F}_{t_1}^\epsilon(s)\big] = \ell\big(\widetilde M_{t_1}^\epsilon(s)f\big) \quad a.e.,$$
which shows that $\big(\ell(\widetilde M_t^\epsilon(s)f), \widetilde{\mathcal F}_t^\epsilon(s)\big)_{t \in J(s)}$ is a martingale. Now, to show the uniform integrability, according to [20], it is sufficient to show that $\sup_n \mathbb E\big(\|M^\epsilon_{\theta_s^\epsilon(t_2) \wedge n}(s)f\|^2\big) < \infty$. However:
$$\|M^\epsilon_{\theta_s^\epsilon(t_2) \wedge n}(s)f\| \le 2 e^{\gamma(t_2 + \bar\tau - s)}\big(\|f\| + \|f_1\|\big) + 2 e^{\gamma(t_2 + \bar\tau - s)}\big(\|f\| + \|f_1\|\big)\big(\theta_s^\epsilon(t_2) \wedge n\big)$$
$$\le 2 e^{\gamma(t_2 + \bar\tau - s)}\big(\|f\| + \|f_1\|\big)\big(1 + \theta_s^\epsilon(t_2)\big),$$
where $\|f_1\| := \sup_{x \in J,\ u \in [0, t_2 + \bar\tau]} \|f_1(x,u)\|$ ($\|f_1\| < \infty$ by Assumption 1). The fact that $\mathbb E\big(\theta_s^\epsilon(t_2)^2\big) < \infty$ (Assumption 1) concludes the proof.  □
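The construction of $M_n^\epsilon(s)$ above is a Doob-type compensated sum. The following toy sketch illustrates the same mechanism for a scalar function of a finite-state Markov chain (replacing $V_n^\epsilon(s)f^\epsilon(\cdot,\cdot)$ by $f(x_n)$, with a made-up transition matrix), and checks the martingale property through a Monte Carlo estimate of the mean; it is only an analogy, not the operator-valued object of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar analogue of the Doob-type construction used for M_n^eps(s):
# for a finite-state Markov chain x_n with known transition matrix P and a
# scalar f, the compensated sum
#   M_n = f(x_n) - f(x_0) - sum_{k<n} ( (P f)(x_k) - f(x_k) )
# is an F_n-martingale, so E[M_n] = 0. All numbers are hypothetical.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.3, 0.5, 0.2]])
f = np.array([1.0, -2.0, 0.5])
Pf = P @ f                      # one-step conditional expectation E[f(x_{k+1}) | x_k]

def sample_M(n_steps, x0=0):
    x, M = x0, 0.0
    for _ in range(n_steps):
        x_next = rng.choice(3, p=P[x])
        M += f[x_next] - Pf[x]  # increment minus its conditional expectation
        x = x_next
    return M

samples = [sample_M(25) for _ in range(20000)]
print(np.mean(samples), np.std(samples) / np.sqrt(len(samples)))  # mean ~ 0
```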
Remark 4.
In the following, we use the fact that, as found in [25] (Theorem 3.1), for sequences $(X_n)$, $(Y_n)$ of random variables with value in a separable metric space with metric $d$, if $X_n \Rightarrow X$ and $d(X_n, Y_n) \to 0$ in probability, then $Y_n \Rightarrow X$. In our case, $(X_n)$, $(Y_n)$ take value in $D(J(s), Y)$ and, to show that $d(X_n, Y_n) \to 0$, we use the fact that:
$$d(X_n, Y_n) \le \sup_{t \in [s,T]} \|X_n(t) - Y_n(t)\| + e^{-T} \quad \forall T \in J(s), \quad \text{if } J = \mathbb R_+$$
$$d(X_n, Y_n) \le \sup_{t \in [s, T + \bar\tau]} \|X_n(t) - Y_n(t)\|, \quad \text{if } J = [0,T],$$
and therefore that it is sufficient to have:
$$\sup_{t \in [s,T]} \|X_n(t) - Y_n(t)\| \xrightarrow{a.e.} 0 \quad (\text{respectively, in probability}), \quad \forall T \in J(s)$$
to obtain $d(X_n, Y_n) \xrightarrow{a.e.} 0$ (respectively, in probability).
Lemma 5.
Let Assumption 1 hold true. For $f \in \widehat D$ and $s \in J$, $\widetilde M^\epsilon(s)f$ has the asymptotic representation:
$$\widetilde M^\epsilon_\cdot(s)f = V_\epsilon(s,\cdot)f - f - \epsilon\,\Pi m \sum_{k=1}^{N_s(\cdot^{\frac{1}{\epsilon},s})} V_\epsilon\big(s, T_k^{\epsilon,s}(s)\big)\,\widehat A\big(T_k^{\epsilon,s}(s)\big)f + O(\epsilon) \quad a.e.,$$
where $O(\epsilon^p)$ is an element of the space $D(J(s), Y)$ and is defined by the following property: $\forall r > 0$, $\forall T \in \mathbb Q_+ \cap J(s)$, $\epsilon^{-p+r} \sup_{t \in [s,T]} \|O(\epsilon^p)(t)\| \to 0$ as $\epsilon \to 0$ (so that Remark 4 is satisfied).
Proof. 
For the sake of clarity, let:
$$f_k^\epsilon := f^\epsilon\big(x_k(s), T_k^{\epsilon,s}(s)\big), \qquad f_{1,k} := f_1\big(x_k(s), T_k^{\epsilon,s}(s)\big).$$
Let $T \in J(s)$. First, we have that:
$$V^\epsilon_{N_s(t^{\frac{1}{\epsilon},s})+1}(s)\, f^\epsilon_{N_s(t^{\frac{1}{\epsilon},s})+1} = V^\epsilon_{N_s(t^{\frac{1}{\epsilon},s})+1}(s)\,f + O(\epsilon)$$
because $\sup_{t \in [s,T]} \big\| V^\epsilon_{N_s(t^{\frac{1}{\epsilon},s})+1}(s)\, f_{1, N_s(t^{\frac{1}{\epsilon},s})+1} \big\| \le e^{\gamma(T + \bar\tau - s)}\, \|f_1\|$. Again, as in Lemma 4, we denote $\|f_1\| := \sup_{x \in J,\ u \in [0, T + \bar\tau]} \|f_1(x,u)\|$, and $\|f_1\| < \infty$ by Assumption 1. Now, we have:
$$V_{k+1}^\epsilon(s)\, f_{k+1}^\epsilon - V_k^\epsilon(s)\, f_k^\epsilon = V_k^\epsilon(s)\big(f_{k+1}^\epsilon - f_k^\epsilon\big) + V_{k+1}^\epsilon(s)\, f_{k+1}^\epsilon - V_k^\epsilon(s)\, f_{k+1}^\epsilon,$$
and:
$$\mathbb E\big[V_k^\epsilon(s)\big(f_{k+1}^\epsilon - f_k^\epsilon\big) \,\big|\, \mathcal F_k(s)\big] = \epsilon\, V_k^\epsilon(s)\, \mathbb E\big[f_{1,k+1} - f_{1,k} \,\big|\, \mathcal F_k(s)\big],$$
as $V_k^\epsilon(s)$ is $\mathcal F_k(s)$-$Bor(B(Y))$ measurable. Now, we know that every discrete-time Markov process has the strong Markov property, so the Markov process $(x_n, T_n)$ has it. For $k \ge 1$, the times $N(s)+k$ are $\mathcal F_n(0)$-stopping times. Therefore, for $k \ge 1$:
$$\mathbb E\big[f_{1,k+1} - f_{1,k} \,\big|\, \mathcal F_k(s)\big] = \mathbb E\big[f_{1,k+1} - f_{1,k} \,\big|\, T_k(s), x_k(s)\big]$$
$$= \sum_{y \in J} \int_0^\infty f_1\big(y, T_k^{\epsilon,s}(s) + \epsilon u\big)\, Q_{T_k(s)}\big(x_k(s), y, du\big) - f_{1,k}.$$
Consider the following derivatives with respect to t:
$$a'(x,t) := \frac{1}{\Pi m}\, m_1(x)\, A_x'(t),$$
$$\widehat A\,'(t) := \Pi\, a'(\cdot,t)$$
$$f_1'(x,t) := \Pi m\, R_0\big[\widehat A\,'(t)f - a'(\cdot,t)f\big](x),$$
which exist because $R_0 \in B(B^\Pi_{Y^1}(J))$ and $f \in \bigcap_{x \in J} D(A_x)$. Using the Fundamental Theorem of Calculus for the Bochner integral ($v \mapsto f_1'(y,v) \in L^1_Y([a,b])$ $\forall [a,b]$ by Assumption 1), we get:
$$\mathbb E\big[f_{1,k+1} - f_{1,k} \,\big|\, T_k(s), x_k(s)\big]$$
$$= \sum_{y \in J} \int_0^\infty \Big[ f_1\big(y, T_k^{\epsilon,s}(s)\big) + \int_{T_k^{\epsilon,s}(s)}^{T_k^{\epsilon,s}(s) + \epsilon u} f_1'(y,v)\,dv \Big]\, Q_{T_k(s)}\big(x_k(s), y, du\big) - f_1\big(x_k(s), T_k^{\epsilon,s}(s)\big)$$
$$= \big(P_{T_k(s)} - I\big)\, f_1\big(\cdot, T_k^{\epsilon,s}(s)\big)(x_k(s)) + \sum_{y \in J} \int_0^\infty \int_0^{\epsilon u} f_1'\big(y, T_k^{\epsilon,s}(s) + v\big)\,dv\; Q_{T_k(s)}\big(x_k(s), y, du\big)$$
$$= \big(P_{T_k(s)} - I\big)\, f_1\big(\cdot, T_k^{\epsilon,s}(s)\big)(x_k(s)) + O(\epsilon),$$
because, by Assumption 1:
$$\Big\| \sum_{y \in J} \int_0^\infty \int_0^{\epsilon u} f_1'\big(y, T_k^{\epsilon,s}(s) + v\big)\,dv\; Q_{T_k(s)}\big(x_k(s), y, du\big) \Big\| \le \epsilon\, \|f_1'\|\, \bar\tau,$$
where $\|f_1'\|$ denotes the sup-norm of $f_1'$, defined as for $f_1$ above.
We note that the contribution of the terms of order $O(\epsilon^2)$ inside the sum will make the sum of order $O(\epsilon)$, since for some constant $C_T$:
$$\sup_{t \in [s,T]} \sum_{k=1}^{N_s(t^{\frac{1}{\epsilon},s})} \|O(\epsilon^2)\| \le N_s\big(T^{\frac{1}{\epsilon},s}\big)\, C_T\, \epsilon^2 = O(\epsilon)$$
because $\epsilon\, N_s\big(T^{\frac{1}{\epsilon},s}\big) \xrightarrow{a.e.} \frac{T-s}{\Pi m}$. Altogether, we have for $k \ge 1$, using the definition of $f_1$:
$$\mathbb E\big[V_k^\epsilon(s)\big(f_{k+1}^\epsilon - f_k^\epsilon\big) \,\big|\, \mathcal F_k(s)\big] = \epsilon\, V_k^\epsilon(s)\big(P_{T_k(s)} - I\big)\, f_1\big(\cdot, T_k^{\epsilon,s}(s)\big)(x_k(s)) + O(\epsilon^2)$$
$$= \epsilon\, \Pi m\, V_k^\epsilon(s)\big[\widehat A\big(T_k^{\epsilon,s}(s)\big) - a\big(x_k(s), T_k^{\epsilon,s}(s)\big)\big]f$$
$$\quad + \epsilon\, V_k^\epsilon(s)\big(P_{T_k(s)} - P\big)\, f_1\big(\cdot, T_k^{\epsilon,s}(s)\big)(x_k(s)) + O(\epsilon^2).$$
The term involving $P_{T_k(s)} - P$ above will vanish as $k \to \infty$ by Assumption 1, as $\|P_t - P\| \to 0$. We also have for the first term ($k = 0$):
$$\mathbb E\big[V_0^\epsilon(s)\big(f_1^\epsilon - f_0^\epsilon\big) \,\big|\, \mathcal F_0(s)\big] = O(\epsilon) \qquad \big(\le 2\epsilon \|f_1\|\big).$$
Now, we have to compute the terms corresponding to $V_{k+1}^\epsilon(s)\, f_{k+1}^\epsilon - V_k^\epsilon(s)\, f_{k+1}^\epsilon$. We show that the term corresponding to $k = 0$ is $O(\epsilon)$ and that, for $k \ge 1$:
$$\mathbb E\big[V_{k+1}^\epsilon(s)\, f_{k+1}^\epsilon - V_k^\epsilon(s)\, f_{k+1}^\epsilon \,\big|\, \mathcal F_k(s)\big] = \epsilon\, \Pi m\, V_k^\epsilon(s)\, a\big(x_k(s), T_k^{\epsilon,s}(s)\big)f + \text{negligible terms}.$$
For the term $k = 0$, we have, using Assumption 1, the definition of $V_k^\epsilon$ and Theorem 3:
$$V_1^\epsilon(s)\, f_1^\epsilon - V_0^\epsilon(s)\, f_1^\epsilon = V_1^\epsilon(s) f - f + O(\epsilon)$$
$$= \Gamma_{x(s)}\big(s, T_1^{\epsilon,s}(s)\big)\, D_\epsilon\big(x(s), x_1(s)\big)f - f + O(\epsilon)$$
$$= \Gamma_{x(s)}\big(s, T_1^{\epsilon,s}(s)\big)\big(D_\epsilon(x(s), x_1(s))f - f\big) + \Gamma_{x(s)}\big(s, T_1^{\epsilon,s}(s)\big)f - f + O(\epsilon)$$
$$\Rightarrow\quad \big\| \mathbb E\big[V_1^\epsilon(s)\, f_1^\epsilon - V_0^\epsilon(s)\, f_1^\epsilon \,\big|\, \mathcal F_0(s)\big] \big\| \le e^{\gamma \bar\tau} \max_{x,y} \|D_\epsilon(x,y)f - f\| + \int_0^\infty \epsilon u\, \sup_{x,\ t \in [s, s+\bar\tau]} \|A_x(t)\|_{B(Y^1,Y)}\, \|f\|_{Y^1}\, F_{T_0(s)}(x, du)$$
$$\le \epsilon\Big( e^{\gamma \bar\tau} \sup_{\epsilon,x,y} \|D_1^\epsilon(x,y)f\| + \|f\|_{Y^1}\, \bar\tau\, \sup_{x,\ t \in [s, s+\bar\tau]} \|A_x(t)\|_{B(Y^1,Y)} \Big) = O(\epsilon).$$
Now, we have for $k \ge 1$:
$$V_{k+1}^\epsilon(s) - V_k^\epsilon(s) = V_k^\epsilon(s)\Big[\Gamma_{x_k(s)}\big(T_k^{\epsilon,s}(s), T_{k+1}^{\epsilon,s}(s)\big)\, D_\epsilon\big(x_k(s), x_{k+1}(s)\big) - I\Big].$$
By Assumption 1, we have $\sup_{\epsilon,x,y} \|D_1^\epsilon(x,y)g\| < \infty$ for $g \in Y^1$. Therefore, we get, using Theorem 3, for $g \in Y^1$:
$$D_\epsilon\big(x_k(s), x_{k+1}(s)\big)g = g + \int_0^\epsilon D_1^u\big(x_k(s), x_{k+1}(s)\big)g\,du$$
$$\text{and} \quad \Gamma_{x_k(s)}\big(T_k^{\epsilon,s}(s), T_{k+1}^{\epsilon,s}(s)\big)g = g + \int_{T_k^{\epsilon,s}(s)}^{T_{k+1}^{\epsilon,s}(s)} \Gamma_{x_k(s)}\big(T_k^{\epsilon,s}(s), u\big)\, A_{x_k(s)}(u)\,g\,du.$$
Because $f \in \widehat D$, we get that $\widehat A(t)f \in Y^1$, $a(x,t)f \in Y^1$ $\forall t, x$. Since $R_0 \in B(B^\Pi_{Y^1}(J))$, we get that $f_{1,k+1} \in Y^1$ and therefore:
$$\Gamma_{x_k(s)}\big(T_k^{\epsilon,s}(s), T_{k+1}^{\epsilon,s}(s)\big)\, D_\epsilon\big(x_k(s), x_{k+1}(s)\big)\, f_{1,k+1} = \Gamma_{x_k(s)}\big(T_k^{\epsilon,s}(s), T_{k+1}^{\epsilon,s}(s)\big)\, f_{1,k+1} + O(\epsilon)$$
$$= f_{1,k+1} + \int_{T_k^{\epsilon,s}(s)}^{T_{k+1}^{\epsilon,s}(s)} \Gamma_{x_k(s)}\big(T_k^{\epsilon,s}(s), u\big)\, A_{x_k(s)}(u)\, f_{1,k+1}\,du + O(\epsilon).$$
Therefore, taking the conditional expectation, we get:
$$\mathbb E\big[V_{k+1}^\epsilon(s)\, f_{1,k+1} - V_k^\epsilon(s)\, f_{1,k+1} \,\big|\, \mathcal F_k(s)\big]$$
$$= V_k^\epsilon(s) \sum_{y \in J} \int_0^\infty \int_0^{\epsilon u} \Gamma_{x_k(s)}\big(T_k^{\epsilon,s}(s), T_k^{\epsilon,s}(s) + v\big)\, A_{x_k(s)}\big(T_k^{\epsilon,s}(s) + v\big)\, f_{1,k+1}\,dv \; Q_{T_k(s)}\big(x_k(s), y, du\big) + O(\epsilon)$$
$$= O(\epsilon) \qquad \big(\le \epsilon\, C\, \bar\tau \ \text{ for some constant } C \text{ by Assumption 1}\big),$$
and so:
$$\mathbb E\big[V_{k+1}^\epsilon(s)\, f_{k+1}^\epsilon - V_k^\epsilon(s)\, f_{k+1}^\epsilon \,\big|\, \mathcal F_k(s)\big] = \mathbb E\big[V_{k+1}^\epsilon(s)\, f - V_k^\epsilon(s)\, f \,\big|\, \mathcal F_k(s)\big] + O(\epsilon^2).$$
Now, because $f \in \widehat D$ and by Assumption 1 (which ensures that the integral below exists):
$$D_\epsilon\big(x_k(s), x_{k+1}(s)\big)f = f + \epsilon\, D_1^0\big(x_k(s), x_{k+1}(s)\big)f + \int_0^\epsilon (\epsilon - u)\, D_2^u\big(x_k(s), x_{k+1}(s)\big)f\,du.$$
Thus, using the boundedness of $D_2^\epsilon$ (again Assumption 1):
$$\Gamma_{x_k(s)}\big(T_k^{\epsilon,s}(s), T_{k+1}^{\epsilon,s}(s)\big)\, D_\epsilon\big(x_k(s), x_{k+1}(s)\big)f = \Gamma_{x_k(s)}\big(T_k^{\epsilon,s}(s), T_{k+1}^{\epsilon,s}(s)\big)f + \epsilon\, \Gamma_{x_k(s)}\big(T_k^{\epsilon,s}(s), T_{k+1}^{\epsilon,s}(s)\big)\, D_1^0\big(x_k(s), x_{k+1}(s)\big)f + O(\epsilon^2).$$
The first term above has the representation (by Theorem 5):
$$\Gamma_{x_k(s)}\big(T_k^{\epsilon,s}(s), T_{k+1}^{\epsilon,s}(s)\big)f = f + \int_{T_k^{\epsilon,s}(s)}^{T_{k+1}^{\epsilon,s}(s)} A_{x_k(s)}(u)f\,du + \int_{T_k^{\epsilon,s}(s)}^{T_{k+1}^{\epsilon,s}(s)} \int_{T_k^{\epsilon,s}(s)}^{u} \Gamma_{x_k(s)}\big(T_k^{\epsilon,s}(s), r\big)\, A_{x_k(s)}(r)\, A_{x_k(s)}(u)f\,dr\,du.$$
Taking the conditional expectation and using the fact that $\sup_{u \in [0, T + \bar\tau],\ x \in J} \|A_x(u)f\|_{Y^1} < \infty$, we can show, as we did before, that:
$$\mathbb E\left[ \int_{T_k^{\epsilon,s}(s)}^{T_{k+1}^{\epsilon,s}(s)} \int_{T_k^{\epsilon,s}(s)}^{u} \Gamma_{x_k(s)}\big(T_k^{\epsilon,s}(s), r\big)\, A_{x_k(s)}(r)\, A_{x_k(s)}(u)f\,dr\,du \;\middle|\; \mathcal F_k(s) \right] = O(\epsilon^2).$$
The second term has the following representation, because $f \in \widehat D$ (which ensures that $D_1^0(x,y)f \in Y^1$) and using Theorem 3:
ϵ