Abstract
The paper is devoted to inhomogeneous random evolutions (IHREs) and their applications in finance. We introduce IHREs and present some of their properties. We then prove a weak law of large numbers and a central limit theorem for IHREs. Financial applications are given to illiquidity modeling using regime-switching time-inhomogeneous Lévy price dynamics, to regime-switching Lévy-driven diffusion-based price dynamics, and to a generalized version of the multi-asset model of price impact from distressed selling, for which we retrieve and generalize the diffusion limit result for the price process.
Keywords:
propagators; inhomogeneous random evolutions; inhomogeneous semi-Markov process; weak law of large numbers; central limit theorem; orthogonal martingale measure; regime-switching time-inhomogeneous Lévy price dynamics; multi-asset model of price impact from distressed selling
MSC:
60F05; 47H40; 47D06; 91B28; 91B70
1. Introduction
Random evolutions started to be studied in the 1970s because of their potential applications to biology, movement of particles, signal processing, quantum physics, finance and insurance, among others (see [1,2,3]). They allow modeling a situation in which the dynamics of a system are governed by various regimes, and the system switches from one regime to another at random times. Let us be given a state process (x_n) (with a given state space) and a sequence of increasing random times (T_n). In short, during each random time interval [T_n, T_{n+1}), the evolution of the system is governed by the state x_n. At the random time T_{n+1}, the system switches to state x_{n+1}, and so on. Think, for example, of regime-switching models in finance, where the state process is linked with, e.g., different liquidity regimes, as in the recent article [4]. In their work, the sequence (T_n) represents the random times at which the market switches from one liquidity regime to another.
When random evolutions began to be studied [1,2,3], the sojourn times T_{n+1} − T_n were assumed to be exponentially distributed and (x_n) was assumed to be a Markov chain, so that the resulting process was a continuous-time Markov chain. Here, as in [5], we let:
N_t := sup{n ≥ 0 : T_n ≤ t} be the number of regime changes up to time t. In these first papers, limit theorems for the expectation of the random evolution were studied. Then, in 1984–1985, J. Watkins [6,7,8] established weak law of large numbers and diffusion limit results for a specific class of random evolutions, where the times T_n are deterministic and the state process is strictly stationary. In [9,10], A. Swishchuk and V. Korolyuk extended the work of J. Watkins to a setting where (x_n, T_n) is assumed to be a Markov renewal process, and obtained weak law of large numbers, central limit theorem and diffusion limit results. It can also be noted that they allowed the system to be "impacted" at each regime transition, which they modeled with operators that we denote by D in the following.
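The regime-switching mechanism just described is easy to simulate. The sketch below is a hypothetical two-regime example (the transition matrix P, the exponential sojourn distributions and the horizon are illustrative assumptions, not taken from the paper); it generates the jump times T_n, the visited states x_n and the counting process N_t:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-regime example: states 0 and 1, Markov transition matrix P,
# and state-dependent mean sojourn times.  Exponential sojourns, as here, give
# the continuous-time Markov chain case described above.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
mean_sojourn = np.array([1.0, 2.0])

def simulate_markov_renewal(horizon, x0=0):
    """Return jump times (T_n) and visited states (x_n) up to `horizon`."""
    times, states = [0.0], [x0]
    t, x = 0.0, x0
    while t < horizon:
        t += rng.exponential(mean_sojourn[x])   # sojourn in the current regime
        x = rng.choice(2, p=P[x])               # next regime
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

T, X = simulate_markov_renewal(100.0)
# number of regime changes up to time t = 50
N_t = np.searchsorted(T, 50.0, side="right") - 1
```

Replacing the exponential draws by general state-dependent sojourn distributions yields the (semi-Markov) setting used in the rest of the paper.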
As we show in the applications (Section 6), these operators D can be used, for example, to model an impact on the stock price at each regime transition due to illiquidity as in [4], or to model situations where the stock price evolves only in a discrete way (i.e., only by successive “impacts”), such as in the recent articles [11,12] on, respectively, modeling of price impact from distressed selling and high-frequency price dynamics (see Sections 6.1 and 6.3). For example, the price dynamics of [11] can be seen as a specific case of a random evolution for which the operators D are given by:
where f and h are given functions and ϵ and m are given constants. In Section 6.3, we generalize their diffusion limit result by using our tools and results on random evolutions (see [9,13,14] for more details).
In the aforementioned literature on (limit theorems for) random evolutions [6,7,8,9,10], the evolution of the system on each time interval [T_n, T_{n+1}) is assumed to be driven by a (time-homogeneous) semigroup depending on the current state x_n of the system. A semigroup U is a family of time-dependent operators satisfying U(t + s) = U(t)U(s) and U(0) = I. For example, if Z is a time-homogeneous Markov process, then we can define the semigroup:
where f is a bounded measurable function. The idea of a random evolution is therefore the following: during each time interval [T_n, T_{n+1}), the system (e.g., the price process) evolves according to the semigroup associated with the current state (itself associated with a Markov process in the case of a price process). At the transition time T_{n+1}, the system is impacted by an operator D, modeling for example a drop in the price at what [4] calls a liquidity breakdown. Then, on the interval [T_{n+1}, T_{n+2}), the price is driven by the semigroup associated with the new state, and so on: this can be summarized as a regime-switching model with an impact at each transition. We can write the previously described random evolution in the following compact form (where the product applies on the right):
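As a toy illustration of this product form, consider the scalar case Y = R, where each semigroup is deterministic exponential growth at a regime-dependent rate and each impact operator D is multiplication by a constant. All choices below (the rates a, the impact factors d, the jump times, and the convention of indexing the impact factor by the incoming regime) are illustrative assumptions, not the paper's specification:

```python
import numpy as np

# Hypothetical scalar sketch: in regime x the system grows at rate a[x]
# (semigroup U_x(t) = exp(t * a[x])), and each regime switch multiplies the
# state by an impact factor d[x] (the operator D).
a = {0: 0.05, 1: -0.02}
d = {0: 0.99, 1: 0.98}

def random_evolution(T, X, t, f0=1.0):
    """Apply V(t) to an initial value f0, given jump times T and states X."""
    v = f0
    k = 0
    while k + 1 < len(T) and T[k + 1] <= t:
        # evolve in regime X[k] over [T[k], T[k+1]), then apply the impact
        v = d[X[k + 1]] * np.exp(a[X[k]] * (T[k + 1] - T[k])) * v
        k += 1
    v = np.exp(a[X[k]] * (t - T[k])) * v  # evolve on the last, incomplete interval
    return v

T = [0.0, 1.0, 2.5]   # jump times T_0, T_1, T_2
X = [0, 1, 0]         # regimes x_0, x_1, x_2
v = random_evolution(T, X, 3.0)
```

Note that the product applies on the right: the earliest interval contributes the innermost factor.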
This time-homogeneous setting (embedded in the time-homogeneous semigroups U) does not include, e.g., time-inhomogeneous Lévy processes (see Section 6.1), or processes Z that are solutions of stochastic differential equations (SDEs) of the form:
where L is a vector-valued Lévy process and the driving coefficient is a matrix-valued function (see Section 6.2). The latter class of SDEs includes many popular models in finance, e.g., time-dependent Heston models. Indeed, these examples can be associated with time-inhomogeneous generalizations of semigroups, also called propagators. A (backward) propagator Γ (see Section 2) is simply a family of operators satisfying Γ(s, t)Γ(t, r) = Γ(s, r) for s ≤ t ≤ r. We then define, for a time-inhomogeneous Markov process Z:
The celebrated Chapman–Kolmogorov equation guarantees the relation Γ(s, t)Γ(t, r) = Γ(s, r) for s ≤ t ≤ r.
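In a finite-state, discrete-time setting, this propagator is just a product of one-step transition matrices, and the Chapman–Kolmogorov relation becomes an identity between matrix products. A minimal numerical check (the three-state chain and the random one-step kernels are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_stochastic(n):
    """A random n x n row-stochastic matrix (hypothetical one-step kernel)."""
    M = rng.random((n, n))
    return M / M.sum(axis=1, keepdims=True)

# Time-inhomogeneous chain on discrete times 0..5: one kernel per step.
kernels = [rand_stochastic(3) for _ in range(5)]

def gamma(s, t):
    """Backward propagator Γ(s, t)f = E[f(Z_t) | Z_s = .] as a matrix product."""
    G = np.eye(3)
    for k in range(s, t):
        G = G @ kernels[k]
    return G

# Chapman–Kolmogorov: Γ(s, t) Γ(t, r) = Γ(s, r) for s <= t <= r.
assert np.allclose(gamma(0, 2) @ gamma(2, 5), gamma(0, 5))
```

In the time-homogeneous case all kernels coincide and Γ(s, t) depends only on t − s, recovering the semigroup property.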
Let us explain here the main ideas of the main results of the paper, Theorems 8 (LLN) and 11 (FCLT), using one of the applications, namely a regime-switching inhomogeneous Lévy-based stock price model, very similar in spirit to the recent article [4]. In short, an inhomogeneous Lévy process differs from a classical Lévy process in the sense that it has time-dependent (and absolutely continuous) characteristics. We let {L_x} be a collection of such vector-valued inhomogeneous Lévy processes, indexed by the regimes x and with respective time-dependent characteristics, and we define:
for some bounded function α. We give in Section 6 a financial interpretation of this function α, as well as reasons for considering a regime-switching model. In this setting, f represents a contingent claim on a (d-dimensional) risky asset S having regime-switching inhomogeneous Lévy dynamics driven by the processes {L_x}: on each random time interval [T_n, T_{n+1}), the risky asset is driven by the process L_{x_n}. Indeed, we have the following representation (to make clear that the expectation below is taken with respect to the randomness embedded in the process L and not in the regime-switching process):
where we have denoted for clarity:
The random evolution represents in this case the present value of the contingent claim f of maturity t on the risky asset S, conditionally on the regime-switching process: indeed, remember that the random evolution is random, and that its randomness comes (only) from the Markov renewal process. Our main results, Theorems 8 and 11, allow us to approximate the impact of the regime switching on the present value of the contingent claim. Indeed, we get the following normal approximation, for small ϵ:
The above approximation allows one to quantify the risk inherent to regime switches occurring at a high frequency governed by ϵ. The parameter ϵ reflects the frequency of the regime switches and can therefore be calibrated to market data by the risk manager. For market practitioners, because of the computational cost, it is often convenient to have asymptotic formulas that allow them to approximate the present value of a given derivative, and by extension the value of their whole portfolio. In addition, the asymptotic normal form of the regime-switching cost allows the risk manager to derive approximate confidence intervals for the portfolio, as well as other quantities of interest such as reserve policies linked to a given model.
Therefore, the goal of this paper is to focus on limit theorems for so-called time-inhomogeneous random evolutions, i.e., random evolutions that we construct with propagators, and not with (time-homogeneous) semigroups, in order to widen the range of possible applications, as mentioned above. We call these random evolutions inhomogeneous random evolutions (IHREs).
The notion of “time-inhomogeneity” in fact appears twice in our framework: in addition to being constructed with propagators, random evolutions are driven by time-inhomogeneous semi-Markov processes (using results from [5]). This general case has not been treated before. Even if we do not have the same goal in mind, our methodology is similar—in spirit—to the recent article [15] on the comparison of time-inhomogeneous Markov processes, or to the article [16] on time-inhomogeneous affine processes. We say “similar” because we are all in a position where results for the time-homogeneous case exist, and our goal is to develop a rigorous treatment of the time-inhomogeneous case. It is interesting to see that, even if the topics we deal with are different, the “philosophy” and “intuition” behind them are similar in many ways.
The paper is organized as follows: Section 2 is devoted to propagators. We introduce the concept of regular propagators, which we characterize as unique solutions to well-posed Cauchy problems: this is of crucial importance for both our main weak law of large numbers (WLLN) and central limit theorem (CLT) results, in order to get the uniqueness of the limiting process. In Section 3, we introduce inhomogeneous random evolutions and present some of their properties. In Sections 4 and 5, we prove, respectively, a WLLN and a CLT, which are the main results of the paper (Theorems 8 and 11). In particular, for the CLT, we obtain a precise (and new) characterization of the limiting process using weak Banach-valued stochastic integrals and so-called orthogonal martingale measures: this result—to the best of our knowledge—has not even been obtained in the time-homogeneous case. In Section 6, we present financial applications to illiquidity modeling using regime-switching time-inhomogeneous Lévy price dynamics and regime-switching Lévy-driven diffusion-based price dynamics. We also present a generalized version of the multi-asset model of price impact from distressed selling introduced in the recent article [11], for which we retrieve (and generalize) the diffusion limit result for the price process. We would also like to mention that copula-based multivariate semi-Markov models with applications in high-frequency finance are considered in [17], which is related to a generalization of semi-Markov processes and their applications.
2. Propagators
This section aims at presenting a few results on propagators, which are used in the sequel. Most of them (as well as the corresponding proofs) are similar to what can be found in [18] Chapter 5, [19] Chapter 2 or [15], but, to the best of our knowledge, they do not appear in the literature in the form presented below. In particular, the main result of this section is Theorem 4, which characterizes so-called regular propagators as unique solutions to well-posed Cauchy problems.
Let Y be a real separable Banach space, with dual space Y*. We are also given a real separable Banach space which is continuously embedded in Y (this idea was used in [18], Chapter 5). Unless mentioned otherwise, limits are taken in the Y-norm, normed vector spaces are equipped with the norm topology, and subspaces of normed vector spaces are equipped with the subspace topology. In the following, J refers to either a compact time interval or the positive half-line. We start with a few introductory definitions:
Definition 1.
A two-parameter family Γ = (Γ(s, t)) of bounded linear operators on Y is called a Y-(backward) propagator if:
- (i)
- Γ(s, s) = I for every s ∈ J; and
- (ii)
- Γ(s, t)Γ(t, r) = Γ(s, r) for every s ≤ t ≤ r in J.
If, in addition, Γ(s, t) depends on (s, t) only through the difference t − s, Γ is called a Y-semigroup.
Note that we focus our attention on backward propagators, as many applications only fit the backward case, as shown below. Forward propagators differ from backward propagators in that they satisfy Γ(t, s)Γ(s, r) = Γ(t, r) for r ≤ s ≤ t. We now introduce the generator of the propagator:
Definition 2.
For , define:
Define similarly for :
and define similarly to . Let . Then, AΓ is called the infinitesimal generator of the Y-propagator Γ.
In the following definitions, which deal with continuity and boundedness of propagators, and represent Banach spaces such that (possibly ).
Definition 3.
A -propagator Γ is -bounded if . It is a -contraction if . It is -locally bounded if for every compact .
Definition 4.
Let . A -propagator Γ is -strongly continuous if , :
When , we simply write that it is F-strongly continuous.
We use the terminologies t-continuity and s-continuity for the continuity of the partial maps. According to [19], strong joint continuity is equivalent to strong separate continuity together with local boundedness of the propagator.
Definition 5.
Let . The generator AΓ of the -propagator Γ is -strongly continuous if , :
When , we simply write that it is F-strongly continuous.
The following results give conditions under which the propagator is differentiable in s and t.
Theorem 1.
Let Γ be a Y-propagator. Assume that , . Then:
If, in addition, Γ is -strongly s-continuous, -strongly t-continuous, then:
Proof of Theorem 1.
Let , .
since . We have for :
Let :
the last inequality holding because : . We apply the uniform boundedness principle to show that . is Banach. We have to show that : . Let . We have since . . Then, by -strong t-continuity of Γ, . Let . Then, we get .
Further, by -strong s-continuity of Γ, . Finally, since , .
Therefore, we get for , which shows that for . □
Theorem 2.
Let Γ be a Y-propagator. Assume that . Then, we have:
If, in addition, Γ is Y-strongly t-continuous, then we have:
Proof of Theorem 2.
Let , . We have:
For : :
since . Therefore, . Now, if :
For :
Since , . By Y-strong t-continuity of Γ: . By the principle of uniform boundedness together with the Y-strong t-continuity of Γ, we have . Therefore, we get for , which shows for . □
In general, we want to use the evolution equation, and therefore we need the relevant derivative to lie in the appropriate space. The following result gives sufficient conditions under which this is the case.
Theorem 3.
Assume that Theorem 2 holds true, i.e., , and , . Then, , :
Proof of Theorem 3.
Let , . First, as the derivative of . By the principle of uniform boundedness together with the Y-strong t-continuity of Γ, we have . We then observe that for :
□
The following definition introduces the concept of a regular propagator, which in short means that the propagator is differentiable with respect to both variables and that its derivatives are integrable.
Definition 6.
A Y-propagator Γ is said to be regular if it satisfies Theorems 1 and 2 and , , and are in .
Now, we are ready to characterize a regular propagator as the unique solution of a well-posed Cauchy problem, which is needed in the sequel. Note that the proof of the theorem below requires that Γ satisfies both Theorems 1 and 2 (hence our definition of regular propagators above). The next result is the initial value problem statement for the generator AΓ.
Theorem 4.
Let AΓ be the generator of a regular Y-propagator Γ and , . A solution operator to the Cauchy problem:
is said to be regular if it is Y-strongly continuous. If G is such a regular solution, then we have , , .
Proof of Theorem 4.
Let , . Consider the function . We show that and therefore that . We have for :
Let . We have:
We have:
- (1) → 0 as G satisfies the initial value problem and .
- (2) → 0 as .
- (3) → 0 by Y-strong continuity of G.
Further, by the principle of uniform boundedness together with the Y-strong continuity of G, we have . We therefore get . Now, for :
Let :
By the principle of uniform boundedness together with the Y-strong t-continuity of G, we have . Furthermore:
- (4) → 0 as G satisfies the initial value problem and .
- (5) → 0 as .
- (6) → 0 by Y-strong continuity of G.
We therefore get . □
The following corollary expresses the fact that equality of generators implies equality of propagators.
Corollary 1.
Assume that and are regular Y-propagators and that , , . Then, , : . In particular, if is dense in Y, then .
We conclude this section with a second-order Taylor formula for propagators. Let:
Theorem 5.
Let Γ be a regular Y-propagator, . Assume that , and . Then, we have for :
Proof of Theorem 5.
Since Γ is regular and the integrand is integrable on the interval under consideration, we have by Theorem 3:
□
3. Inhomogeneous Random Evolutions: Definitions and Properties
Let be a complete probability space, a finite set and an inhomogeneous Markov renewal process on it, with associated inhomogeneous semi-Markov process (as in [5]). In this section, we use the same notations for the various kernels and cumulative distribution functions (, , etc.) as in [5] on inhomogeneous Markov renewal processes. Throughout the section, we assume that the inhomogeneous Markov renewal process is regular (cf. definition in [5]), and that for all . Further assumptions on it are made below. We define the following random variables on , for :
- the number of jumps on : ;
- the jump times on : for , and ; and
- the states visited by the process on : , for .
Consider a family of Y-propagators , with respective generators , satisfying:
as well as a family of -contractions, satisfying:
We define the inhomogeneous random evolution in the following way:
Definition 7.
The function defined pathwise by:
is called a -inhomogeneous Y-random evolution, or simply an inhomogeneous Y-random evolution. V is said to be continuous (respectively, purely discontinuous) if (respectively, ), . V is said to be regular (respectively, -contraction) if are regular (respectively, -contraction).
Remark 1.
In the latter definition, we use the convention that , that is, the product operator applies the product on the right. Further, if , then . If , then and . Therefore, in all cases, . By Proposition 2 below, if , we see that V is continuous, and if (and therefore ), we see that V has no continuous part, hence Definition 7.
Since our main object is a stochastic process, we have to establish its measurability. Thus, we have the following measurability result:
Proposition 1.
For , , the stochastic process is adapted to the (augmented) filtration:
Proof of Proposition 1.
Let , , . We have:
Denote the measurable (by construction) function . Since , remark that and is therefore measurable. Therefore, . Let:
since , and for , let the sigma-algebra ( is a sigma-algebra on since ). Now, consider the map :
We have:
Therefore, it remains to show that , since . First, let . Notice that , where:
The previous mapping holds since , . The map is measurable iff each one of the coordinate mappings is. The canonical projections are trivially measurable. Let , . We have:
Now, by measurability assumption, we have for :
Therefore, is measurable. Define for :
Again, the canonical projections are trivially measurable. We have for :
Now, by measurability assumption, , :
which proves the measurability of . Then, we define for :
By measurability assumption, , :
which proves the measurability of . Finally, define the canonical projection:
which proves the measurability of . For , we have and the proof is similar. □
The following result characterizes an inhomogeneous random evolution as a propagator, shows that it is right-continuous and that it satisfies an integral representation (which is used extensively below). It also clarifies why we use the terminology “continuous inhomogeneous Y-random evolution” when .
Proposition 2.
Let V an inhomogeneous Y-random evolution and , . Then, is a Y-propagator. If we assume that V is regular, then we have on Ω the following integral representation:
Further is Y-strongly RCLL on , i.e., , . More precisely, we have for :
where we denote .
Proof of Proposition 2.
The fact that is straightforward from the definition of V and using the fact that are -contractions. We can also obtain easily that V is a propagator by straightforward computations. We now show that is Y-strongly continuous on each , and Y-strongly RCLL at each , . Let such that . , we have:
Therefore, by Y-strong t-continuity of Γ, we get that is Y-strongly continuous on . If , the fact that has a left limit at also comes from the Y-strong t-continuity of Γ:
Therefore, we get the relationship:
To prove the integral representation, let , , . We proceed by induction and show that , we have :
For , we have : , and therefore by regularity of Γ. Now, assume that the property is true for , namely: . We have:
Therefore, it implies that (by continuity of the Bochner integral):
Now, we have that:
and therefore , by Theorem 2 and regularity of Γ:
Further, we already proved that . Therefore, combining these results we have:
□
4. Weak Law of Large Numbers
In this section, we introduce a rescaled random evolution Vϵ, in which time is rescaled by a small parameter ϵ. The main result of this section is Theorem 8 in Section 4.5. To prove the weak convergence of Vϵ to some regular propagator , we prove in Section 4.3 that Vϵ is relatively compact, which informally means that, for any sequence , there exists a subsequence along which converges weakly. To show the convergence of Vϵ to , we need to show that all limit points of the latter are equal to . To prove relative compactness, we need among other things that Vϵ satisfies the so-called compact containment criterion (CCC), which in short requires that, for every , remains in a compact set of Y with an arbitrarily high probability as . This compact containment criterion is the topic of Section 4.2. Section 4.1 introduces the rescaled random evolution Vϵ as well as some regularity assumptions (condensed in Assumptions 1, Section 4.1, for the spaces and operators under consideration, and in Assumptions 2, Section 4.1, for the semi-Markov processes) which will be assumed to hold throughout the rest of the paper. It also reminds the reader of some definitions and results on relative compactness in the Skorokhod space, which are mostly taken from the well-known book [20]. Finally, the main WLLN result (Theorem 8) is proved using a martingale method similar in spirit to what is done in [10] (Chapter 4, Section 4.2.1) for time-homogeneous random evolutions. This method is here adapted rigorously to the time-inhomogeneous setting: this is the topic of Section 4.4. The martingale representation presented in Lemma 5 of Section 4.4 will be used in Section 5 to prove a CLT for time-inhomogeneous random evolutions.
4.1. Preliminary Definitions and Assumptions
In this section, we prove a weak law of large numbers for inhomogeneous random evolutions. We rescale both time and the jump operators D in a suitable way by a small parameter ϵ and study the limiting behavior of the rescaled random evolution. To this end, in the same way as we introduced inhomogeneous Y-random evolutions, we consider a family of -contractions satisfying :
and let . We define:
and :
The latter operators correspond, in short, to the (first-order) derivatives of the operators Dϵ with respect to ϵ. We need them in the following to be able to use the expansion Dϵf = f + ϵD^(1)f + o(ϵ), which proves useful when proving limit theorems for random evolutions. In the same way, we introduce the operators corresponding to the second derivative. We also let:
Here, simply indicates that the limit is taken in the norm. We also introduce the space on which we are mostly working on:
Throughout this section, we make the following set of regularity assumptions, which we first state and then comment on immediately afterwards. We recall that the various notions of continuity and regularity are defined in Section 2.
Assumptions 1.
Assumptions on the structure of spaces:
- 1.
- The subset contains a countable family which is dense in both and Y.
- 2.
- .
Assumptions on the regularity of operators:
- 1.
- are regular Y-propagators.
- 2.
- is -strongly continuous, .
Assumptions on the boundedness of operators:
- 1.
- are -exponentially bounded, i.e., such that , for all , .
- 2.
- and , .
- 3.
- , , , for all .
- 4.
- .
- 5.
- , .
- 6.
- , .
Assumptions 2.
Assumptions on the semi-Markov process:
- 1.
- (ergodicity) The assumptions from [5] hold true for the function , so that:
- 2.
- (uniform boundedness of sojourn increments) such that:
- 3.
- (regularity of the inhomogeneous Markov renewal process) The conditions for are satisfied (see [5]), namely: there exists and such that:
Let us make a few comments on the previous assumptions. The assumptions regarding the regularity of operators mainly ensure that we can use the results obtained on propagators in Section 2, for example Theorem 5. The (strong) continuity of also proves to be useful when working with convergence in the Skorokhod space. The assumptions on the boundedness of operators are used to show that various quantities converge well. Finally, regarding the assumptions on the semi-Markov process, the almost sure convergence of as is used very often. It is one of the fundamental requirements for the work below. The uniform boundedness of the sojourn increments is a mild assumption in practice. There might be a possibility to weaken it, but the proofs would become heavier, for example because the jumps of the martingales introduced below would no longer be uniformly bounded.
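The ergodicity assumption can be illustrated in the simplest, time-homogeneous special case, where it reduces to the elementary renewal theorem: if the sojourn times are i.i.d. with mean m, then ϵN(t/ϵ) → t/m as ϵ → 0. The sketch below checks this numerically (the gamma sojourn distribution, its parameters and the seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Elementary renewal theorem heuristic: eps * N(t / eps) -> t / m as eps -> 0,
# where m is the mean sojourn time.  Gamma(3, 0.5) sojourns have mean 1.5.
m = 1.5
t, eps = 2.0, 1e-4

sojourns = rng.gamma(shape=3.0, scale=0.5, size=200_000)
jump_times = np.cumsum(sojourns)
N = int(np.searchsorted(jump_times, t / eps, side="right"))  # N(t / eps)

rescaled = eps * N   # should be close to t / m = 4/3
```

In the inhomogeneous semi-Markov setting of the paper, the corresponding statement is the almost sure convergence assumed above, with the limit involving the stationary characteristics of the Markov renewal process rather than a single mean m.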
Notation: In the following, we let for , and (their existence is guaranteed by Assumption 1):
We also let if and if . In the latter case, it is assumed that . Similarly, we let for : .
We now introduce the rescaled random evolution, with the notation :
Definition 8.
Let V be an inhomogeneous Y-random evolution. We define (pathwise on Ω) the rescaled inhomogeneous Y-random evolution Vϵ for , by:
Remark 2.
We notice that Vϵ is well-defined since on Ω:
and that it coincides with V for , i.e., .
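To illustrate the role of the rescaled jump operators Dϵ and of their first-order derivative in ϵ, here is a numerical sketch for the hypothetical choice Dϵf(x) = f(x + ϵa), a shift of size ϵa (the function f = sin, the constant a and the grid are illustrative assumptions, not the paper's operators). The candidate first-order operator is then af′, and the error in the expansion Dϵf = f + ϵ·af′ + o(ϵ) shrinks linearly in ϵ:

```python
import numpy as np

# Hypothetical jump operator: D_eps f(x) = f(x + eps * a).  Then
# D_eps f = f + eps * D1 f + o(eps), with candidate D1 f = a * f'.
a = 0.7
f, fprime = np.sin, np.cos
x = np.linspace(0.0, 2.0, 201)

def expansion_error(eps):
    """Sup-norm distance between (D_eps f - f) / eps and the candidate D1 f."""
    return np.max(np.abs((f(x + eps * a) - f(x)) / eps - a * fprime(x)))

errors = [expansion_error(e) for e in (1e-1, 1e-2, 1e-3)]
# The errors decrease linearly in eps, consistent with the o(eps) remainder.
```

This is exactly the kind of expansion used below: the impacts are of size O(ϵ) per transition, while the number of transitions up to a fixed time grows like 1/ϵ, so the two effects balance in the limit.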
Our goal is to prove, as in [10], that, for each f in some suitable subset of Y, —seen as a family of elements of —converges weakly to some continuous limiting process to be determined. To this end, we first prove that is relatively compact with almost surely continuous weak limit points. This is equivalent to the notion of C-tightness in [21] (VI.3) because topologized with the Prohorov metric is a separable and complete metric space (Y being a separable Banach space), which implies that relative compactness and tightness are equivalent in (by Prohorov’s theorem). Then, we identify the limiting operator-valued process , using the results of [14]. We first need some elements that can be found in [10] (Section 1.4) and [20] (Sections 3.8–3.11):
Definition 9.
Let be a sequence of probability measures on a metric space . We say that converges weakly to ν, and write iff :
Definition 10.
Let be a family of probability measures on a metric space . is said to be relatively compact iff for any sequence , there exists a weakly converging subsequence.
Definition 11.
Let , a family of stochastic processes with sample paths in . We say that is relatively compact iff is (in the metric space endowed with the Prohorov metric). We write that iff . We say that is C-relatively compact iff it is relatively compact and if ever , then X has a.e. continuous sample paths. If , we say that is -relatively compact (respectively, -C-relatively compact) iff is , .
Definition 12.
Let , be a family of stochastic processes with sample paths in . We say that satisfies the compact containment criterion (CCC) if , , compact set such that:
We say that satisfies the compact containment criterion in (-CCC), if , , CCC.
Theorem 6.
Let , a family of stochastic processes with sample paths in . is C-relatively compact iff it is relatively compact and , where:
Theorem 7.
Let , be a family of stochastic processes with sample paths in . is relatively compact in iff:
- 1.
- CCC
- 2.
- , and a family of nonnegative random variables such that , , : where .
If is relatively compact, then the stronger compact containment criterion holds: , , compact set such that:
4.2. The Compact Containment Criterion
To prove relative compactness, we need to prove that the compact containment criterion is satisfied. We give below some sufficient conditions under which this is the case, in particular for the space , which is used in many applications. In [6], it is mentioned that there exists a compact embedding of a Hilbert space into . Unfortunately, this is not true (to the best of our knowledge), and we show below in Proposition 4 how to overcome this problem. The latter proposition is applied in Section 6 to the time-inhomogeneous Lévy case, and the corresponding proof can easily be recycled for many other examples.
Proposition 3.
Assume that there exists a Banach space compactly embedded in Y, that , are -exponentially bounded (uniformly in ), and that are -contractions. Then, -CCC.
Proof.
Let , , and assume for some . Let and , the Y-closure of the Z-closed ball of radius c. K is compact because of the compact embedding of Z into Y. Let . We have : . Therefore, and so . □
For example, we can consider the Rellich–Kondrachov compactness theorem: if is an open, bounded Lipschitz domain, then the Sobolev space is compactly embedded in , where and .
For the space , there is no well-known compact embedding of this kind; therefore, we have to proceed differently. The result below is applied to the time-inhomogeneous Lévy case (see Section 6), and the corresponding proof can easily be recycled for other examples.
Proposition 4.
Let , . Assume that , , , , and that the family converges uniformly to 0 at infinity, is equicontinuous and uniformly bounded. Then, -CCC.
Proof.
Let , K the Y-closure of the set:
is a family of elements of Y that are equicontinuous, uniformly bounded and that converge uniformly to 0 at infinity by assumption. Therefore, it is well-known, using the Arzelà–Ascoli theorem on the one-point compactification of , that is relatively compact in Y and therefore that K is compact in Y. We have :
□
4.3. Relative Compactness of
This section is devoted to proving that is relatively compact. In the following, we assume that satisfies the compact containment criterion (see [14]):
We first state an integral representation of , whose proof is the same as that of Proposition 2.
Lemma 1.
Let Assumption 1 hold true. Let , . Then, Vϵ satisfies on Ω:
We now prove that is relatively compact.
Lemma 2.
Let Assumptions 1 and 2 hold true. Then, is -relatively compact.
Proof.
We use Theorem 7 to show this result. Using Lemma 1, we have for :
where
by Assumption 1. Now, for :
Note that the suprema in the previous expression are a.e. finite, as they are a.e. bounded by . Now, let:
We have to show that . We have:
Let be any sequence converging to 0, and denote
We first want to show that is uniformly integrable. According to [20], it is sufficient to show that . We have that . By Assumption 2 (more precisely, the regularity of the inhomogeneous Markov renewal process), we get:
and therefore is uniformly integrable. Then, we show that . Let:
Let and . There exists some constant such that for :
and if :
Let (recall ) and . Then, , and therefore:
Therefore, for and :
We have proved that . By uniform integrability of , we get that and therefore, since the sequence is arbitrary:
□
We now prove that the limit points of are continuous.
Lemma 3.
Let Assumptions 1 and 2 hold true. Then, is -C-relatively compact.
Proof.
The proof is presented for the case . The proof for the case is exactly the same. By Lemma 2, it is relatively compact. By Theorem 6, it is sufficient to show that . Let and fix . For , we have:
Since:
we get (choose T big enough, then small enough). □
4.4. Martingale Characterization of the Random Evolution
To prove the weak law of large numbers for the random evolution, we use a martingale method similar to what is done in [10] (Section 4.2.1), but adapted rigorously to the inhomogeneous setting. We first introduce the quantity , solution to a suitable “Poisson equation”:
Definition 13.
Let Assumption 1 hold true. For , , , let , where is the unique solution of the equation:
namely , where is the fundamental matrix associated to P.
Remark 3.
The existence of is guaranteed because by construction (see [22], Proposition 4). In fact, in [22], the operators Π and P are defined on but the results hold true if we work on , where E is any Banach space such that (e.g., if , if ). To see that, first observe that P and Π can be defined the same way on as they were on . Then, take such that and such that . We therefore have that: , and since we have the uniform ergodicity on , we have that:
By linearity of we get that . However, because and this supremum is attained (see, e.g., [23], Section III.6), then:
and thus we also have , i.e., the uniform ergodicity in . Now, according to the proofs of Theorems 3.4 and 3.5, Chapter VI of [24], is the only thing we need to prove that is invertible on:
the space E plays no role. Further, by the bounded inverse theorem.
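For a finite-state ergodic chain, the inversion argument of Remark 3 can be sketched numerically via the fundamental matrix: the Poisson equation (P − I)g = (Π − I)f is solved by g = (I − P + Π)⁻¹(I − Π)f, with Πg = 0. The transition matrix P and test function f below are hypothetical.

```python
import numpy as np

# Hypothetical 3-state ergodic transition matrix P (illustration only).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
n = P.shape[0]

# Stationary distribution pi: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()
Pi = np.tile(pi, (n, 1))   # ergodic projector: (Pi f)(i) = sum_j pi_j f(j)

# Fundamental matrix Z = (I - P + Pi)^{-1}; the Poisson equation
# (P - I) g = (Pi - I) f is then solved by g = Z (I - Pi) f, with Pi g = 0.
Z = np.linalg.inv(np.eye(n) - P + Pi)
f = np.array([1.0, -2.0, 0.5])
g = Z @ (np.eye(n) - Pi) @ f
```

The identity (I − P + Π)(I − Π) = I − P (using PΠ = ΠP = Π and Π² = Π) is what makes this inversion work.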
We now introduce the martingale which plays a central role in the following.
Lemma 4.
Let Assumption 1 hold true. Define recursively for , :
i.e., ; and for (we recall that ):
so that is a -martingale by construction. Let for :
where and is defined the usual way (provided we have shown that is a -stopping time ). Then, , , , , is a real-valued square-integrable martingale.
Proof.
By construction is a martingale. Let . , is a -stopping time, because:
Let . We have that is a martingale. Assume we have shown that it is uniformly integrable; then we can apply the optional sampling theorem for UI martingales to the stopping times a.e. and get:
which shows that is a martingale. Now, to show the uniform integrability, according to [20], it is sufficient to show that . However:
where ( by Assumption 1). The fact that (Assumption 1) concludes the proof. □
Remark 4.
In the following, we use the fact that, as found in [25] (Theorem 3.1), for sequences , of random variables with values in a separable metric space with metric d, if and , then . In our case, , take values in and to show that , we use the fact that:
and therefore that it is sufficient to have:
to obtain (respectively, in probability).
Lemma 5.
Let Assumption 1 hold true. For and , has the asymptotic representation:
where is an element of the space and is defined by the following property: , , as (so that Remark 4 is satisfied).
Proof.
For the sake of clarity, let:
Let . First, we have that:
because . Again, as in Lemma 4, we denote , and by Assumption 1. Now, we have:
and:
as is measurable. Now, we know that every discrete-time Markov process has the strong Markov property, so the Markov process has it. For , the times are -stopping times. Therefore, for :
Consider the following derivatives with respect to t:
which exist because and . Using the Fundamental theorem of Calculus for the Bochner integral ( by Assumption 1) we get:
because, by Assumption 1:
We note that the contribution of the terms of order inside the sum will make the sum of order , since for some constant :
because . Altogether, we have for , using the definition of :
The term involving above will vanish as by Assumption 1, as . We also have for the first term ():
Now, we have to compute the terms corresponding to . We show that the term corresponding to is and that for :
For the term , we have, using Assumption 1, the definition of and Theorem 3:
Now, we have for :
By Assumption 1, we have for . Therefore, we get using Theorem 3, for :
Because , we get that , . Since , we get that and therefore:
Therefore, taking the conditional expectation, we get:
and so:
Now, because and by Assumption 1 (which ensures that the integral below exists):
Thus, using boundedness of (again Assumption 1):
The first term above has the representation (by Theorem 5):
Taking the conditional expectation and using the fact that , we can show as we did before that:
The second term has the following representation, because (which ensures that ) and using Theorem 3:
Thus, we have overall:
We have by the strong Markov property:
and:
as . Thus, finally, we get:
In the expression above, Assumption 1 ensures that the terms containing and will vanish as , and therefore we get overall:
Now, let . Using Assumption 1 (in particular uniform boundedness of sojourn times):
Therefore:
□
We now show that the martingale converges to 0 as .
Lemma 6.
Let Assumption 1 hold true. Let , . For every , . Further, if is relatively compact (in ), then .
Proof.
Take any sequence . Weak convergence in is equivalent to weak convergence in for every . Thus, let us fix . Let . Then, is a real-valued martingale by Lemma 4. To show that , we are going to apply Theorem 3.11 of [21], chapter VIII. First, is square-integrable by Lemma 4. From the proof of Lemma 5, we have that:
and that:
for some uniform constant dependent on T, so that:
The latter shows that the jumps are uniformly bounded by and that
To show that , it remains to show that the following predictable quadratic variation goes to 0 in probability:
Using the definition of in Lemma 4, we get that:
Therefore:
because . Now, to show the second part of the lemma, we observe that if for some converging sequence , we get that by Problem 13, Section 3.11 of [20] together with the continuous mapping theorem. Therefore, is equal to the zero process, up to indistinguishability. In particular, it yields that , : a.e. Now, according to [26], we know that the dual space of every separable Banach space has a countable total subset, more precisely there exists a countable subset such that :
Since , we get a.e., i.e., is a modification of the zero process. Since both processes have a.e. right-continuous paths, they are in fact indistinguishable (see, e.g., [27]). Thus, a.e. □
4.5. Weak Law of Large Numbers
To prove our next WLLN theorem—the main result of this section—we need the following:
Assumptions 3.
In the applications we consider in Section 6, the above assumption is very easily checked, as we can exhibit almost immediately after looking at . This assumption allows us to apply Theorem 4, which gives uniqueness of solutions to the Cauchy problem:
For the cases where cannot be exhibited, we refer to [18], Chapter 5, Theorems 3.1 and 4.3 for requirements on to obtain the well-posedness of the Cauchy problem.
Theorem 8.
Under Assumptions 1 and 2, and Assumption 3 above, we get that is -exponentially bounded (with constant γ). Further, for every , we have the weak convergence in the Skorohod topology :
where is defined similarly to in [14], with replaced by .
Proof.
By Lemma 3, we know that is -C-relatively compact. Take a countable family of that is dense in both and Y. Marginal relative compactness implies countable relative compactness and therefore, is C-relatively compact in , and actually in since the limit points are continuous. Take a weakly converging sequence . By the proof of Theorem 4.1 (see [14]), we can find a probability space and -valued random variables , on it, such that:
We can extend the probability space so that it supports a Markov renewal process such that:
Now, let us take some and show that on we have the following convergence in the Skorohod topology:
We have on :
Let . By continuity of , we get that . Since the latter limit is continuous, the convergence in the Skorohod topology is equivalent to convergence in the local uniform topology, that is:
Therefore, we get on :
since . By Remark 4 we get . For (ii), we observe that we can refine the partition as much as we wish, uniformly in k, since almost surely. Therefore, we get the convergence of the Riemann sum to the Riemann integral uniformly on , i.e.,
Finally, we get on , by continuity of the limit points:
Since we have:
by Lemmas 5 and 6, we get that:
for all p on some set of probability one . Let and . Since is dense in , there exists a sequence . We have on :
using Assumption 1. Therefore, on , we have :
By continuity of on (Assumption 1), and exponential boundedness + continuity of : on and therefore we have on :
By Assumption 3, and Theorem 4, , , , . By density of in Y, the previous equality is true in Y. □
5. Central Limit Theorem
In the previous section, we obtain in Theorem 8 a WLLN for the random evolution. The goal of this section is to obtain a CLT. The main result is Theorem 11, which states that—in a suitable operator Skorohod topology—we have the weak convergence:
where is a “Gaussian operator” defined in Theorem 11. In the WLLN result in Theorem 8, it is relatively obvious that the convergence would hold on the space . Here, because of the rescaling by , it is not a priori obvious. In fact, the above weak convergence proves to hold in the space , which is defined very similarly to :
where is a Banach space continuously embedded in . We need the Banach space because the limiting Gaussian operator takes value in , but not in , as shown in Lemma 7. It is proved that the volatility vector defined in Proposition 5 (which governs the convergence of Vϵ to , as shown in Theorem 11) comes from three independent sources, which is what we expected intuitively:
- comes from the variance of the sojourn times .
- comes from the jump operators D.
- comes from the “state process” .
For example, if there is only one state in , then and only comes from the randomness of the times . If, on the other hand, the times are assumed to be deterministic, then . If the random evolution is continuous (), then . This decomposition of the volatility into three independent sources is obtained using the concept of so-called orthogonal martingale measures (see, e.g., [28], Definition I-1), which is the heart of this section.
To get the main result of Theorem 11, we need a series of preliminary results: Theorem 9 establishes the weak convergence of the martingale as for , Proposition 5 and Theorem 10 allow expressing the limit of the previous martingale in a neat way using so-called orthogonal martingale measures and weak Banach-valued stochastic integrals introduced in Definition 14 (which are an extension of the weak Banach-valued stochastic integrals of [29]). Lemmas 7 and 8 give well-posedness of a Cauchy problem, which is used in the proof of Theorem 11, and show that the correct space to establish the convergence of is indeed .
Theorem 9.
Let , and . Under Assumptions 1–3, we have the following weak convergence in the space :
where is a continuous Gaussian martingale with characteristics given by:
where and is the following non-linear variance operator:
Proof.
Because the proof of this theorem is long, we outline its main parts before beginning. The main idea is to prove that the quadratic variation of converges to in probability. We express the martingale-difference as the sum of two terms in Equations (265) and (266). Then, we calculate the expected values of the square of the functional of each term (see Equation (267)), as done in Equations (282), (287) and (288). Thus, our martingale-difference can be expressed in the form of Equations (298) and (299), or Equations (300) and (301). Therefore, the quadratic variation is expressed as shown in Equations (303)–(305). To show the last convergence in Equation (308), we express the difference between the left-hand side and the right-hand side in Equation (309) as the sum of two terms, Terms (i) and (ii), as in Equations (310) and (312), respectively. Finally, we show that Terms (i) and (ii) both converge to 0 in probability.
Now, let us go into details of the proof. Weak convergence in is equivalent to weak convergence in for every . Thus, let us fix . To apply Theorem 3.11 of [21], chapter VIII, we first need to show that the jumps of the martingale are uniformly bounded. By the proof of Lemma 6, we know that these jumps are uniformly bounded by , where is some constant dependent on T. The theorem we want to apply also requires the two following conditions, for all t in some dense subset of :
- condition “”, which is equivalent by Equation (3.5) of [21], chapter VIII to:
Condition is true since, as mentioned above, the jumps of the martingale are uniformly bounded by . Now, we need to show . The same way as in the proof of Lemma 6, we use the definition of in Lemma 4 to get:
We have by definition, using the same notations as in the proof of Lemma 6:
It follows that, denoting the conditional expectation with respect to :
To make the following lines of computation more readable, we let:
Using the proof of Lemma 5, we get:
On the other hand, we have:
The fact that the last term is equal to is what we have shown in the proof of Lemma 5. Now:
The fact that the last term is equal to comes from the fact that, in the proof of Lemma 5, we show that . In the same way, we can prove that:
Thus, overall:
It remains to compute the term . We can show the same way as in the proof of Lemma 5 that:
We get, using what we have computed in the proof of Lemma 5, that:
Again, using the same method as in the proof of Lemma 5, we have:
Furthermore:
Thus:
Now, we gather all the terms (so that (A), (B), (C), and (D) cancel each other):
where . We can also write the last result in terms of the nonlinear variance operator associated to the kernel :
As and by assumption, will tend to P, to and therefore we get:
To obtain the convergence of the latter predictable quadratic variation process, we first “replace”—in the previous Riemann sum—the random times and counting process by their deterministic equivalents and defined by:
It is possible to do so using Assumption 1, up to some technicalities. We do so in order to be able to use an SLLN for weakly correlated random variables [30], as specified below. We get:
The second step is the following: denote for sake of clarity the inside of the Riemann sum by :
We want to show that:
To do so, we write the decomposition:
First, let us show that the term converges in probability to 0. We have:
In the proof of Theorem 8, we obtain that is C-relatively compact. Similarly, we also have that , , and are C-relatively compact, and therefore they converge jointly, weakly in to:
Using the continuous mapping theorem and the fact that the limit points belong to (and so that the limit of the sum is equal to the sum of the limits), we get that:
Since is continuous, convergence in the Skorohod topology is equivalent to convergence in the local uniform topology and therefore we get that . The previous weak convergence to 0 also holds in probability and that completes the proof that converges in probability to 0, since is bounded uniformly in ϵ. To show that the term converges a.e. to 0, we write:
We have to prove that:
To do so, we use Corollary 11 of [30]. It is a SLLN for weakly correlated random variables . It tells us that if and if with , then we have:
In fact, it turns out that we need a generalized version of this corollary, which can be obtained exactly the same way as in the original proof. We need:
for deterministic, nonnegative and uniformly bounded coefficients . Note that we can drop the nonnegativity assumption on the s by requiring that , and not only . Here, we let so that and , which are indeed deterministic, nonnegative and uniformly bounded. Before showing for some suitable , let us show that and, thus, by convergence of the Riemann sum:
As mentioned in the proof of Lemma 5, the process has the strong Markov property and therefore, since and that for , is a stopping time of the augmented filtration , where:
then we have for , noting :
which, by Assumption 1, converges as to . Now, it remains to study . We have for , again using the strong Markov property of the process :
The proof of Lemma 6 then shows that , which completes the proof that Term converges a.e. to 0. □
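The weighted SLLN for weakly correlated random variables used at the end of the proof can be illustrated numerically; the AR(1) noise and the weight sequence below are hypothetical stand-ins for the centered summands and the deterministic, nonnegative, uniformly bounded coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 100_000, 0.5

# Centered, weakly correlated noise: AR(1), so Cov(X_i, X_j) decays
# geometrically in |i - j| (a hypothetical stand-in for the summands).
z = rng.normal(size=n)
x = np.empty(n)
x[0] = z[0]
c = np.sqrt(1.0 - rho**2)
for k in range(1, n):
    x[k] = rho * x[k - 1] + c * z[k]

# Deterministic, nonnegative, uniformly bounded weights a_k.
a = 0.5 * (1.0 + np.cos(np.arange(n) / 100.0))

# The weighted average (1/n) sum_k a_k X_k tends to 0 almost surely.
avg = float(np.mean(a * x))
```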
The following result allows us to represent the limiting process obtained above in a very neat fashion, using orthogonal martingale measures (see [28], Definition I-1).
Proposition 5.
Let , and . Under Assumptions 1 and 3, we have the following equality in distribution in (using the notations of Theorem 9):
where is a vector of orthogonal martingale measures on with intensities , is the vector with components () given by:
Proof.
Take any probability space supporting three independent real-valued Brownian motions . Define for :
We therefore have:
According to [28] (Theorem III-10), there exists an extension of the original space supporting orthogonal martingale measures on with intensity such that:
is a continuous Gaussian martingale with characteristics (using the notations of Theorem 9). Therefore, in , which completes the proof. □
The above characterization of is especially relevant because it allows us to express the limiting process of in the form , allowing us to express the limiting process of in terms of the weak random integral defined in [29] (Definition 3.1). In fact, we need a slight generalization of the latter integral to vectors, which is given in the following Definition 14 (the case in the following definition coincides precisely with [29], Definition 3.1). We also note that the above representation of the Gaussian martingale allows us to see that the volatility vector (which governs the convergence of Vϵ to , as shown in the main result in Theorem 11) comes from three independent sources, which is what we expected intuitively:
- comes from the variance of the sojourn times (component ).
- comes from the jump operators D.
- comes from the “state process” .
Definition 14.
Let be a vector of martingale measures on , where E is a Lusin space and a ring of subsets of E (see [28], Definition I-1). Let be the vector (), and the square-mean measures (, ). We say that σ is stochastically integrable with respect to M on (, ) if :
and there exists a Y-valued random variable α such that :
In this case, we write:
The previous definition now allows us to express the limiting process of in for all .
Theorem 10.
Let and . Under Assumptions 1–3, we have the following weak convergence in for each countable family :
where is the vector with components given in Proposition 5, is a vector of orthogonal martingale measures on with intensities . Further, if in addition is -relatively compact, we have the following weak convergence in for each countable family :
where each integral on the right-hand side of the above is a weak random integral in the sense of Definition 14.
Proof.
We split the proof into two steps. For Step 1, we fix and prove the weak convergence in :
In Step 2, we prove the joint convergence in .
Step 1. Let a sequence such that for some - valued random variable . By the Skorohod representation theorem, there exists a probability space and random variables with the same distribution as the original ones, denoted by the subscript ’, such that the previous convergence holds a.e. By Theorem 9, for all , is a continuous Gaussian martingale with characteristics . By the proof of Proposition 5, we can consider on an extension of , orthogonal martingale measures on with intensity such that :
By definition of the weak random integrals in the sense of Definition 14, we get:
Actually, there is a subtlety that we have deliberately skipped: a priori, it is not obvious that the orthogonal martingale measures do not depend on ℓ, i.e., that we can represent all Gaussian martingales according to the same vector . To see that this is the case, observe that the joint distribution of the Gaussian martingales is completely determined by their pairwise predictable quadratic co-variations. Take . Then, and , therefore we can compute by Theorem 9 the quantity according to:
The latter gives us that we can indeed represent all Gaussian martingales according to the same vector .
Step 2. The limit points are continuous, thus weak convergence in is equivalent to weak convergence in . Similar to what we do for , the joint distribution of the Gaussian martingales is completely determined by their pairwise predictable quadratic co-variations. We can compute by Theorem 9 the quantity according to:
The latter gives us that we can indeed represent all Gaussian martingales according to the same vector . □
Let be a separable Banach space continuously embedded in such that . We now show our main result, which is the weak convergence of the process:
in a suitable topology, which is the space defined similarly to , the following way:
We will need the following regularity:
Assumptions 4.
For , there exist separable Banach spaces continuously embedded in such that and such that the following hold true for :
- contains a countable family which is dense in .
- is a regular -propagator with respect to the Banach spaces and .
- and , .
- and , .
- .
Remark 5.
We previously introduced the notion of “regular” Y-propagator. This notion implicitly involves the Banach spaces Y and . Therefore, by “ is a regular -propagator with respect to the Banach spaces and ” we mean the same thing as our previously introduced notion of regularity, but with the roles of Y and being, respectively, replaced by and .
A few comments: we need the Banach space because the limiting Gaussian operator of our main result, Theorem 11, takes values in , but not in , as shown in Lemma 7. We need the Banach space for the same reason we need in the WLLN result in Theorem 8: to get well-posedness of a Cauchy problem and conclude uniqueness of the limit. This technical point is established in Lemma 8.
Lemma 7.
Let and be a probability space which supports , a vector of orthogonal martingale measures on with intensities . With the definition of σ in Proposition 5, define:
Under Assumptions 1, 2 and 4 is a -valued random variable. Further, for all , there exists a constant such that for a.e. :
Proof.
First, we recall that if , then by Itô's lemma, for every and we have almost surely:
In our case, we have for :
For the derivative of , denote:
so that:
where we recall that:
With these notations, we have:
By Equation (351) and Assumption 4, for every , there exists a constant such that for every , every we have for a.e. :
According to [26], the unit ball of the dual of every separable Banach space Y contains a norming subset, namely a countable subset such that for every , . In our case and using Definition 14, it means that we get for every , every and a.e. :
□
Lemma 8.
Let and fix a countable dense family of , as well as a probability space supporting the orthogonal martingale measures W of Theorem 10. Consider the following Cauchy problem on , for a -valued random variable , where σ is defined in Proposition 5:
If we assume that the previous problem has at least one solution , and that Assumptions 1, 3 and 4 hold true, then we have .
Proof.
First, let us prove uniqueness. Let be two solutions and let . Then, we have:
Consider the following Cauchy problem, very similar to Theorem 4, where , G is an operator that is -strongly continuous and represents the derivative with respect to the Y-norm (unless specified otherwise, all norms, limits, and derivatives are taken with respect to Y, which we prefer to specify explicitly here as we are dealing with several Banach spaces):
Because by assumption is a regular -propagator with respect to the Banach spaces and , we can prove—similar to the proof of Theorem 4—that the previous Cauchy problem has a unique solution . In our context, let be the subset of probability one on which we have:
By density of in and because takes values in and for all t, we can extend the previous equality to all on . Observing that Z takes values in , we can use the uniqueness of the solution to the Cauchy problem discussed above, and we get on . This proves uniqueness.
It is proved below that (using the notation of Lemma 7) is the unique solution to our Cauchy problem. Indeed, Lemma 7 gives us that takes value in . It remains to show that it satisfies Equation (363). Let . Since and since is a regular -propagator by assumption, we get for all :
We get for all p, using the stochastic Fubini theorem:
According to [26], the unit ball of the dual of every separable Banach space Y contains a norming subset, namely a countable subset such that for every , . We therefore obtain:
which completes the proof. □
We now state the main result of this section:
Theorem 11.
Let . Assume that Assumptions 1–4 hold true (Assumptions 1 and 2, together with Assumptions 3 and 4 above), that is -relatively compact and that, for all , the sequence of real-valued random variables is tight, where:
Then, we have the following weak convergence in :
where is the -valued random variable defined by:
where is a vector of orthogonal martingale measures on with intensity and σ is defined in Proposition 5.
Proof.
Denote . Take a countable family of that is dense in both and . Marginal relative compactness implies countable relative compactness and therefore, is relatively compact in . Actually, is C-relatively compact in , and therefore in . This is because the jumps of are the jumps of which are uniformly bounded by on each interval for some constant which depends only on T (see the proof of Lemma 3). Take a weakly converging sequence . By the proof of Theorem 4.1 (see [14]), we can find a probability space and -valued random variables , on it, such that on we have the sure convergence:
We can extend the probability space so that it supports a Markov renewal process such that:
By continuity of , we have the following representation of the Riemann sum, for all (actually, for all ):
Therefore, by Lemma 5 we have the representation, for all (actually, for all ):
As in the proof of Theorem 8, using continuity of , we have the following convergence in the Skorohod topology on , for all :
Similar to what is done in the proof of Theorem 10, we can extend the probability space to a probability space , which supports the orthogonal martingale measures of Theorem 10. Since we have:
using Equation (381) and Theorem 10, we get for all p, e.g., on some set of probability one , that:
The result of Lemma 8 concludes the proof. □
6. Financial Applications
Financial applications are given to illiquidity modeling using regime-switching time-inhomogeneous Lévy price dynamics, to regime-switching Lévy-driven diffusion-based price dynamics, and to a generalized version of the multi-asset model of price impact from distress selling, for which we retrieve and generalize the diffusion limit result for the price process.
6.1. Regime-Switching Inhomogeneous Lévy Based Stock Price Dynamics and Application to Illiquidity Modeling
For specific examples in this section, we state all the proofs related to the regularity of the propagator, the compact containment criterion, etc. It is worth noting that we focus below on the Banach space , but that, in the article [15], other -type Banach spaces denoted by are considered (in addition to ), for which a suitable weighted sup-norm is introduced (see their Lemma 2.6).
We first give a brief overview of the proposed model and the intuition behind it, and then we go into more formal details, presenting precisely all the various variables of interest, checking assumptions we have made up to this point, etc. We consider a regime-switching inhomogeneous Lévy-based stock price model, very similar in spirit to the recent article [4]. In short, an inhomogeneous Lévy process differs from a classical Lévy process in the sense that it has time-dependent (and absolutely continuous) characteristics. We let be a collection of such -valued inhomogeneous Lévy processes with characteristics , and we define:
for some bounded function α. We give below a financial interpretation of this function α, as well as reasons we consider a regime-switching model. In this setting, f represents a contingent claim on a (d-dimensional) risky asset S having regime-switching inhomogeneous Lévy dynamics driven by the processes : on each random time interval , the risky asset is driven by the process . Indeed, we have the following representation, for (to make clear that the expectation below is taken with respect to ω embedded in the process L and not ):
where we have denoted for clarity:
The random evolution represents in this case the present value of the contingent claim f of maturity t on the risky asset S, conditionally on the regime switching process : indeed, remember that is random, and that its randomness (only) comes from the Markov renewal process. Our main results in Theorems 8 and 11 allow approximating the impact of the regime-switching on the present value of the contingent claim. Indeed, we get the following normal approximation, for small ϵ:
The above approximation allows quantifying the risk inherent to regime-switchings occurring at a high frequency governed by ϵ. The parameter ϵ reflects the frequency of the regime-switchings and can therefore be calibrated to market data by the risk manager. For market practitioners, because of the computational cost, it is often convenient to have asymptotic formulas that allow them to approximate the present value of a given derivative, and, by extension, the value of their whole portfolio. In addition, the asymptotic normal form of the regime-switching cost allows the risk manager to derive approximate confidence intervals for their portfolio, as well as other quantities of interest such as reserve policies linked to a given model.
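As a sketch of the approximate confidence intervals the normal form permits, assuming a present value m, an asymptotic volatility s and a switching-frequency parameter eps (all numerical values hypothetical):

```python
import math

def regime_switching_ci(m, s, eps, z=1.959963984540054):
    """95% confidence interval for the present value under the normal
    approximation  V_eps approx m + sqrt(eps) * s * Z,  Z ~ N(0, 1).
    z is the 0.975 quantile of the standard normal distribution."""
    half_width = z * s * math.sqrt(eps)
    return m - half_width, m + half_width

# Hypothetical inputs: present value 100, volatility 2, eps = 0.01.
lo, hi = regime_switching_ci(m=100.0, s=2.0, eps=1e-2)
```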
In the recent article [4], an application to illiquidity is presented: the authors interpreted the state process as a “proxy for market liquidity with states (or regimes) representing the level of liquidity activity”. In their context, however, is taken to be a continuous-time Markov chain, which is a special case of a semi-Markov process with exponentially distributed sojourn times . With creative choices of the Markov renewal kernel, one could choose alternative distributions for the “waiting times” between regimes, thus generalizing the setting of [4]. In [4], the equivalent of our quantities are the constants , which represent an impact on the stock price at each regime-switching time. The authors stated: “Typically, there is a drop of the stock price after a liquidity breakdown”. This justifies the use of the operators D.
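A minimal sketch of such a semi-Markov liquidity process with non-exponential waiting times follows; the two-state embedded chain and the Weibull sojourn law are hypothetical choices of Markov renewal kernel.

```python
import numpy as np

rng = np.random.default_rng(123)

# Embedded chain: alternate between liquid (0) and illiquid (1) regimes.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def simulate_regimes(T):
    """Return the jump times T_k and visited states x_k of a semi-Markov
    process on [0, T] with Weibull(1.5) sojourn times (hypothetical law)."""
    t, x = 0.0, 0
    times, states = [0.0], [0]
    while t < T:
        tau = rng.weibull(1.5)          # non-exponential waiting time
        t += tau
        x = int(rng.choice(2, p=P[x]))  # next regime from the embedded chain
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, states = simulate_regimes(50.0)
```

Replacing the Weibull law by an exponential one recovers the continuous-time Markov chain setting of [4].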
In addition to the liquidity modeling of [4], another application of our framework could be to view the different states as a reflection of uncertainty on market data induced by, for example, a significant bid-ask spread on the observed market data. The different characteristics would, in this context, reflect this uncertainty induced by the observables. The random times would then correspond to times at which the observed market data (e.g., call option prices, or interest rates) switch from one value to another. As these switching times are typically very small (∼ms), an asymptotic expansion as would make perfect sense in order to get an approximation of the stock price at a scale ≫, e.g., ∼min.
Now, let us formalize the model a bit more precisely. We let , for some . Consider a probability space and let us begin with a -valued (time-inhomogeneous) Markov process on it with transition function (, ) and starting point (in the sense of [31], II.10.2). Formally, , , p satisfies:
- , is a probability measure.
- is measurable.
- .
- (Chapman–Kolmogorov).
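In discrete time, the Chapman–Kolmogorov property above is precisely the two-parameter propagator identity p(s, t) = p(s, u) p(u, t); a minimal numerical check, with hypothetical time-dependent one-step stochastic matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_step(k):
    # Hypothetical time-dependent one-step stochastic matrix at time k.
    A = rng.random((3, 3)) + (k % 3)
    return A / A.sum(axis=1, keepdims=True)

steps = [one_step(k) for k in range(10)]

def p(s, t):
    """Two-parameter transition matrix p(s, t) = steps[s] @ ... @ steps[t-1]."""
    out = np.eye(3)
    for k in range(s, t):
        out = out @ steps[k]
    return out
```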
One example of such time-inhomogeneous Markov processes is the class of additive processes (in the sense of [31], I.1.6). This class of processes differs from Lévy processes only in that increments are not assumed to be stationary. Additive processes are translation invariant (or spatially homogeneous) and we have (by [31], II.10.4) . We focus on additive processes in the following, as they are rich enough to cover many financial applications. We define for and :
It can be proved that Γ is a regular -contraction propagator and we have and . The Lévy–Khintchine representation of such a process (see [32], 14.1) ensures that there exist a unique , a family of symmetric nonnegative-definite matrices and a family of measures on such that:
are called the spot characteristics of L. They satisfy the following regularity conditions:
- , and
- and : is symmetric nonnegative-definite and .
- and as , , and such that for some .
If , , we can replace by in the Lévy–Khintchine representation of L, where . We denote by this other version of the spot characteristics of L.
Let . We can consider a specific case of additive processes, called processes with independent increments and absolutely continuous characteristics (PIIAC), or inhomogeneous Lévy processes in [33], for which there exists , a family of symmetric nonnegative-definite matrices and a family of measures on satisfying:
where denotes any norm on the space of matrices. are called the local characteristics of L. By [33], PIIAC are semi-martingales, which is not the case for all additive processes. According to [33], we have the following representation for L:
where for : and N the Poisson measure of L ( is then called the compensated Poisson measure of L). is a d-dimensional Brownian motion on , independent from the jump part. here stands for the unique symmetric nonnegative-definite square root of . Sometimes it is convenient to write a Cholesky decomposition and replace by in the previous representation (in this case, the Brownian motion W would have to be replaced by another Brownian motion , the point being that both and are processes with independent Gaussian increments with mean 0 and variance ). It can be shown (see [15]) that the infinitesimal generator of the propagator Γ is given by:
and that . If is well-defined:
As a first example, we can consider the inhomogeneous Poisson process, for which , local characteristics , where the intensity function s.t. . We have, letting :
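The inhomogeneous Poisson example above can be simulated by Lewis–Shedler thinning; the sketch below uses a hypothetical intensity λ(t) = 1 + sin²(t) with upper bound 2 (these choices are illustrative, not from the paper):

```python
import numpy as np

def simulate_inhomogeneous_poisson(intensity, t_max, lambda_bar, rng=None):
    """Simulate jump times of an inhomogeneous Poisson process on [0, t_max]
    by thinning; `intensity` must be bounded above by `lambda_bar`."""
    rng = np.random.default_rng(rng)
    times, t = [], 0.0
    while True:
        # Propose the next jump of a homogeneous Poisson process of rate lambda_bar.
        t += rng.exponential(1.0 / lambda_bar)
        if t > t_max:
            break
        # Accept the proposal with probability intensity(t) / lambda_bar.
        if rng.uniform() < intensity(t) / lambda_bar:
            times.append(t)
    return np.array(times)

# Hypothetical intensity bounded by 2 on [0, 10].
jumps = simulate_inhomogeneous_poisson(lambda t: 1.0 + np.sin(t) ** 2, 10.0, 2.0, rng=0)
```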
As a second example, we can consider a risk process L (used in insurance, for example):
where is an inhomogeneous Poisson process, is the premium intensity function and the ’s are iid random variables, independent from N with common law that represent the random payments that an insurance company has to pay. In this case, the local characteristics . We get:
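The risk process example lends itself to a similar sketch; the premium rate, the claim-size law, and the claim arrival times below are hypothetical placeholders:

```python
import numpy as np

def risk_process_path(u, premium, claim_times, claim_sizes, t_grid):
    """Path of a risk process: initial capital u, plus accumulated premiums
    int_0^t premium(s) ds (midpoint rule), minus claims settled by time t."""
    path = []
    for t in t_grid:
        n = 200
        s = (np.arange(n) + 0.5) * t / n          # midpoints of a partition of [0, t]
        accumulated = premium(s).sum() * (t / n)  # midpoint-rule integral of the premium
        paid = claim_sizes[claim_times <= t].sum()
        path.append(u + accumulated - paid)
    return np.array(path)

# Hypothetical data: premium rate 2 + cos(t), three exponential claims of mean 1.
rng = np.random.default_rng(1)
claim_times = np.array([0.7, 2.1, 3.9])
claim_sizes = rng.exponential(1.0, claim_times.size)
path = risk_process_path(10.0, lambda s: 2.0 + np.cos(s),
                         claim_times, claim_sizes, np.linspace(0.0, 5.0, 11))
```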
We can also consider a Brownian motion with time-dependent variance-covariance matrix and drift (local characteristics ):
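The third example, a Brownian motion with time-dependent drift and variance-covariance matrix, can be sampled with a per-step Cholesky factorization, as in the following sketch (the drift and covariance functions are illustrative assumptions):

```python
import numpy as np

def gaussian_additive_path(drift, cov, t_grid, rng=None):
    """Euler sampling of a d-dimensional process with independent Gaussian
    increments: X_{t+dt} - X_t ~ N(drift(t) dt, cov(t) dt)."""
    rng = np.random.default_rng(rng)
    d = drift(0.0).shape[0]
    x = np.zeros((t_grid.size, d))
    for k in range(t_grid.size - 1):
        dt = t_grid[k + 1] - t_grid[k]
        chol = np.linalg.cholesky(cov(t_grid[k]))   # Cholesky factor of Sigma(t)
        x[k + 1] = x[k] + drift(t_grid[k]) * dt + chol @ rng.standard_normal(d) * np.sqrt(dt)
    return x

# Hypothetical 2-d example: decaying drift and time-varying correlation.
drift = lambda t: np.array([0.1 * np.exp(-t), 0.0])
cov = lambda t: np.array([[1.0, 0.5 * np.sin(t)], [0.5 * np.sin(t), 1.0]])
path = gaussian_additive_path(drift, cov, np.linspace(0.0, 1.0, 101), rng=2)
```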
To define the corresponding random evolution, we let L be an inhomogeneous Lévy process taking values in , independent of the semi-Markov process . In this case, the ℝ-valued Lévy processes are not necessarily independent, but they have independent increments over non-overlapping time intervals. Denote by the corresponding -valued inhomogeneous Lévy processes, with local characteristics . Define for , , :
We define the jump operators, for by:
where , so that and . Assume that the ℝ-valued inhomogeneous Lévy processes admit a second moment, namely they satisfy:
then it can be proved (see below) that the compact containment criterion is satisfied: -CCC (Assumption 2). Assumption 1 will be satisfied provided the characteristics are smooth enough. Regarding Assumption 3, it is clear that is the generator of the (inhomogeneous) Lévy propagator with “weighted” local characteristics given by:
In particular, we notice that is indeed a Lévy measure on .
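The regime-switching mechanism just described (a semi-Markov state process selecting which local characteristics are in force) can be illustrated by the following simplified sketch, in which each regime carries only a drift and a volatility; all parameter values are hypothetical:

```python
import numpy as np

def regime_switching_log_price(mu, sigma, P, sojourn, t_max, dt=0.01, rng=None):
    """Sketch: log-price with regime-dependent drift/volatility, the regime
    being a semi-Markov chain (embedded transition matrix P, random sojourns)."""
    rng = np.random.default_rng(rng)
    k = 0                                    # current regime
    t_switch = sojourn(rng, k)               # time of the next regime switch
    t_grid = np.arange(0.0, t_max, dt)
    logS = np.zeros(t_grid.size)
    regimes = np.zeros(t_grid.size, dtype=int)
    for i in range(1, t_grid.size):
        while t_grid[i] >= t_switch:         # switch regime, draw a new sojourn
            k = rng.choice(len(mu), p=P[k])
            t_switch += sojourn(rng, k)
        regimes[i] = k
        logS[i] = logS[i - 1] + mu[k] * dt + sigma[k] * np.sqrt(dt) * rng.standard_normal()
    return t_grid, logS, regimes

# Hypothetical two-regime example: "liquid" (low vol) vs. "illiquid" (high vol).
t, logS, reg = regime_switching_log_price(
    mu=[0.05, -0.02], sigma=[0.1, 0.4], P=np.array([[0.0, 1.0], [1.0, 0.0]]),
    sojourn=lambda rng, k: rng.gamma(2.0, 0.5), t_max=5.0, rng=3)
```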
Proof that Γ is a propagator and that we have and . Γ satisfies the propagator equation because of the Chapman–Kolmogorov equation. Now, let us show that we have and . Let . Let us start with . Let . By [31], we get the representation , where is the distribution of , i.e., for . Let and take any sequence : and denote and . By continuity of f, pointwise. Further, . Therefore, by Lebesgue dominated convergence theorem, we get . Therefore, . By the same argument, but now taking any sequence : , we get and therefore . Further, we get:
and therefore . Let , (). Take any sequence : , and . Then:
Let and . We have pointwise since . By the Mean Value theorem, , and therefore . Therefore, by Lebesgue dominated convergence theorem, we get:
Using the same argument as for , we get that since . Repeating this argument by computing successively every partial derivative up to order q by the relationship , we get . Further, the same way we got for , we get for : . Therefore, .
Proof that Γ is -strongly s-continuous, Y-strongly t-continuous. Let , and :
Let and any sequence such that . Let and . We have by stochastic continuity of additive processes. By Skorokhod's representation theorem, there exists a probability space and random variables , on it such that , and . Letting , we therefore get:
Further, since , is uniformly continuous on and , : . Because , for a.e. , . Therefore, we have that . Further . By Lebesgue dominated convergence theorem, we get:
We can notice that the proof strongly relies on the uniform continuity of f, and therefore on the topological properties of the space (which does not have). We prove that Γ is Y-strongly t-continuous in exactly the same way, but now considering and any sequence such that , where if and if .
Proof that Γ is regular. By Taylor’s theorem, we get , , :
observing that by assumption. Therefore, by integrability assumption on the local characteristics and Theorem 3, we get the propagator evolution equations and therefore the regularity of Γ.
Proof that -CCC. We are going to apply Proposition 4. We showed Vϵ is a -contraction, so it remains to show the uniform convergence to 0 at infinity. We have the following representation, for (to make clear that the expectation is with respect to ω and not ):
where we have denoted for clarity . In the following we drop the index . Denote the d components of the inhomogeneous Lévy process by (). The multivariate version of Chebyshev’s inequality yields for any d-vector of integrable random variables :
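A standard form of the multivariate Chebyshev inequality, which suffices for the estimates below, reads:

```latex
\mathbb{P}\Big( \bigcup_{i=1}^{d} \big\{ |X_i - \mathbb{E}[X_i]| \geq \varepsilon \big\} \Big)
\;\leq\; \mathbb{P}\big( \|X - \mathbb{E}[X]\| \geq \varepsilon \big)
\;\leq\; \frac{1}{\varepsilon^2} \sum_{i=1}^{d} \operatorname{Var}(X_i),
```

so that a second-moment bound on each component controls the joint tail.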
We apply this inequality to the d-vector having components , where is the component of . If we denote by the local characteristics of , we have ([32], Proposition 3.13):
and so:
where and . Similarly, we have:
where . Now, since L is a valued inhomogeneous Lévy process, the ℝ valued inhomogeneous Lévy processes are not necessarily independent but they have independent increments over non-overlapping time intervals. This yields:
Overall, we get that for every , there exists such that:
Define the set:
We have:
On the other hand, there exists : . Define, as in the traffic case, so that . We have for and :
so that overall, for and (uniform in , ϵ) we have:
which completes the proof.
6.2. Regime-Switching Lévy Driven Diffusion Based Price Dynamics
We consider the same financial setting as in Section 6.1, except that the dynamics of the stock price will be driven by a Lévy driven diffusion process, and not a time-inhomogeneous Lévy process. As in [15] (Section 4.1), consider for the function and a collection of -valued classical Lévy processes with local characteristics . Let a solution of the SDE (in fact, for any initial condition , the SDE admits a unique strong solution; cf. [15], Lemma 4.6):
and define as in Section 6.1:
This general setting includes many popular models in finance, for example the Heston model. Time-inhomogeneity of also enables one to consider the time-dependent Heston model, which is characterized by time-dependent parameters. The Delayed Heston model considered in [34] can be seen as a specific example of the class of time-dependent Heston models, and therefore fits into this framework. Indeed, let and be the price and variance process, respectively. The time-dependent Heston dynamics read:
where , , , , , and are deterministic processes and , independent Brownian motions. Then, letting the -valued Lévy process and , we have , where for :
Then, we can consider regime-switching extensions of the latter time-dependent Heston dynamics by allowing the various coefficients , , , , , and (and therefore the matrix ) to depend on the regime . Note also that the previously defined is not in , which is required in [15] to prove the strong continuity of Γ, as mentioned below. Nevertheless, in practice, it is always possible to consider bounded approximations of the latter .
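As an illustration of the time-dependent Heston dynamics discussed above, the following full-truncation Euler sketch makes every parameter a function of time; a regime-switching extension would simply swap these parameter functions at the switching times. All parameter values below are hypothetical:

```python
import numpy as np

def heston_td_path(s0, v0, kappa, theta, sigma_v, rho, r, t_grid, rng=None):
    """Full-truncation Euler sketch of a time-dependent Heston model; every
    parameter (kappa, theta, sigma_v, rho, r) is a function of time."""
    rng = np.random.default_rng(rng)
    s, v = np.empty(t_grid.size), np.empty(t_grid.size)
    s[0], v[0] = s0, v0
    for k in range(t_grid.size - 1):
        t, dt = t_grid[k], t_grid[k + 1] - t_grid[k]
        z1, z2 = rng.standard_normal(2)
        # Correlated Brownian increments with time-dependent correlation rho(t).
        dw_s = np.sqrt(dt) * z1
        dw_v = np.sqrt(dt) * (rho(t) * z1 + np.sqrt(1.0 - rho(t) ** 2) * z2)
        v_pos = max(v[k], 0.0)               # full truncation keeps the scheme usable
        v[k + 1] = v[k] + kappa(t) * (theta(t) - v_pos) * dt + sigma_v(t) * np.sqrt(v_pos) * dw_v
        s[k + 1] = s[k] * np.exp((r(t) - 0.5 * v_pos) * dt + np.sqrt(v_pos) * dw_s)
    return s, v

# Hypothetical time-dependent parameters.
t_grid = np.linspace(0.0, 1.0, 253)
s, v = heston_td_path(100.0, 0.04,
                      kappa=lambda t: 2.0, theta=lambda t: 0.04 + 0.01 * t,
                      sigma_v=lambda t: 0.3, rho=lambda t: -0.7,
                      r=lambda t: 0.02, t_grid=t_grid, rng=4)
```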
Let and for some as in Section 6.1. Lemma 4.6, the corresponding proof and Remark 4.1 of [15] give us strong continuity of and for :
In particular, taking we retrieve the so-called traffic random evolution of [9,27]. If for all x, is the generator of the Lévy driven diffusion propagator with “weighted” local characteristics and driver implicitly defined by the following equations:
If does not depend on x, then is the generator of the Lévy driven diffusion propagator with driver and “weighted” local characteristics defined implicitly by the following equations:
6.3. Multi-Asset Model of Price Impact from Distressed Selling: Diffusion Limit
Take, again, . The setting of the recent article [11] fits exactly into our framework. In this article, the authors consider a discrete-time stock price model for d stocks . It is assumed that a large fund V holds units of each asset i. Denoting the ith stock price and fund value at time by, respectively, and , we have , and the following dynamics are assumed for the stock prices:
where is the constant time-step, are i.i.d. -valued centered random variables with covariance matrix (i.e., =0, ); is increasing, concave, equal to 0 on and , where ; represents the depth of the market in asset i, and are constants. It is also assumed that:
so that the stock prices remain positive. The idea is that when the fund value V reaches a certain “low” level , market participants will start selling the stocks involved in that fund, inducing a correlation between the stocks, which “adds” to their fundamental correlations: this is captured by the function g (which, if equal to 0, gives us the “standard” Black and Scholes setting).
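The discrete-time dynamics described above can be illustrated by a stylized simulation. The update rule below is a simplified stand-in for the exact dynamics of [11] (only the qualitative feedback through g is kept: multiplicative fundamental shocks, plus an impact term proportional to the change in g of the fund value, scaled by the market depth), and all parameter values are hypothetical:

```python
import numpy as np

def distressed_selling_paths(s0, alpha, depth, cov, g, n_steps, dt, rng=None):
    """Stylized sketch of the multi-asset distressed-selling model: at each step
    the stocks receive correlated centered Gaussian shocks, and the change in
    g(V) of the fund value V feeds back into each price scaled by depth D_i."""
    rng = np.random.default_rng(rng)
    chol = np.linalg.cholesky(cov)
    s = np.array(s0, dtype=float)
    paths = [s.copy()]
    v_prev = alpha @ s                               # fund value V_k
    for _ in range(n_steps):
        eps = chol @ rng.standard_normal(len(s))     # fundamental shocks
        s = s * (1.0 + np.sqrt(dt) * eps)
        v = alpha @ s
        # Price impact of distressed selling: proportional to the drop in g(V).
        s = s * (1.0 + (alpha / depth) * (g(v) - g(v_prev)))
        v_prev = alpha @ s
        paths.append(s.copy())
    return np.array(paths)

# Hypothetical parameters: 2 stocks; g increasing, concave, flat above V = 90,
# so that selling only kicks in once the fund value drops below that level.
g = lambda v: np.minimum(v, 90.0)
paths = distressed_selling_paths([50.0, 50.0], alpha=np.array([1.0, 1.0]),
                                 depth=np.array([500.0, 500.0]),
                                 cov=np.array([[0.04, 0.0], [0.0, 0.04]]),
                                 g=g, n_steps=250, dt=1.0 / 250.0, rng=5)
```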
The above setting is a particular case of our framework, where for all , and the operators D have the following form:
where ∘ denotes composition of functions, , m is the vector with coordinates , has coordinate functions defined by:
has coordinate functions defined by:
and finally where we have extended the definition of by the convention that is the vector with coordinates . We notice that the operator D defined above does not depend on x, and so we let . By the i.i.d. assumption on the random variables , the process is chosen such that it is a Markov chain with all rows of the transition matrix P equal to each other, meaning that is independent of : in this context, the ergodicity of the Markov chain is immediate, and we have . It remains to choose the finite state space : we assume that the random variables can only take finitely many values, say M, and we denote these possible values. In [11], the authors allow these random variables to have arbitrary distributions: here, we approximate these distributions by finite-state distributions. Note that in practice (e.g., on a computer), all distributions are in fact approximated by finite-state distributions, since the finite floating-point precision of the computer only allows finitely many values for a random variable. The parameter M in fact plays no role in the analysis below. We let . We have, for and :
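The finite-state approximation of an arbitrary shock distribution mentioned above can be sketched by quantile binning (the target distribution below is an illustrative choice):

```python
import numpy as np

def finite_state_approximation(sampler, m, n_samples=100_000, rng=None):
    """Approximate an arbitrary distribution by an M-state one: draw samples,
    split them into M quantile cells of equal mass, and place each cell's
    probability mass on the cell mean."""
    rng = np.random.default_rng(rng)
    x = np.sort(sampler(rng, n_samples))
    cells = np.array_split(x, m)                 # M quantile cells of equal mass
    atoms = np.array([c.mean() for c in cells])  # one representative value per cell
    probs = np.array([c.size for c in cells]) / n_samples
    return atoms, probs

# Example: M = 5 states approximating a centered Gaussian shock.
atoms, probs = finite_state_approximation(lambda rng, n: rng.standard_normal(n), 5, rng=6)
```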
In [11], the times are deterministic (so that ), but in our context we allow them to be possibly random, which can be seen for example as an “illiquidity extension” of the model in [11], similar to what we have done in Section 6.1. In addition, our very general framework allows one to be creative with the choices of the operators D and Γ, leading to possibly interesting extensions of the model [11] presented above. For example, one could simply “enlarge” the Markov chain and consider the same model but with regime-switching parameters and , and one could carry on all the analysis below at almost no extra technical and computational cost. Since , we let , and we get:
where , and:
Since and , it results that:
From the last expression, and remembering that each component of is centered, we get:
Let us denote the vector of initial log-spot prices, whose elements are (), as well as the d-dimensional price process with elements:
In this context, the spot price changes at each jump time of the Markov renewal process . As mentioned above, in [11], the times are deterministic, so that . In our context, the random evolution and its rescaled version are simply equal to a functional of the spot price:
Because , Theorem 8 tells us that the limit of is trivial: . In addition, we get:
so that by Theorem 11, we have the convergence in the Skorokhod topology:
where are orthogonal martingale measures on with intensity . This leads to the approximation:
Because the “first-order” limit is trivial, it is natural to study the “second-order” limit, or diffusion limit, i.e., the convergence of . This is indeed what is done in [11]. A complete and rigorous treatment of the diffusion limit for time-inhomogeneous random evolutions is beyond the scope of this paper. This is done in the homogeneous case in [10] (Section 4.2.2), and in a simplified context in [6,8]. Nevertheless, in the model we are presently focusing on, we can characterize the diffusion limit. The martingale representation of Lemma 5 is trivial as . We define another rescaled random evolution , which is defined exactly as , but with replaced by in the product. In our context:
Using the same techniques as in Lemma 5 (but going one more order in the “Taylor” expansion and writing for suitable function ), we can get a martingale representation of the form:
for a suitable operator . In fact, after a few lines of computation, we get:
In our present model, is independent of time and the above reduces to:
Denote:
With this notation, we have:
and therefore:
We have:
Denoting for clarity, we get:
At this point, it can be noted that the above result should coincide with the quantity of [11] (Proposition 5.1), and it does! Note, however, that, because the times are random in our context, the above quantity is multiplied by in , i.e., the inverse of a weighted average of the mean values of the inter-arrival times . If as in [11], then and so : our framework is therefore a generalization of the one in [11]. One interesting consequence of the above result is that in the case of zero “fundamental correlations” for , we get for , denoting :
This is the point of the model: distressed selling (modeled by the function g) induces a correlation between assets even if their fundamental correlations are equal to zero. It now remains to compute the quantities . After some tedious computations, we get, denoting again (and similarly for ):
Denote now:
The generator is therefore given by:
We get in consequence, still denoting for clarity:
and:
which is exactly equal to the result of [11], Theorem 4.2, with the small exception that in their drift result for , the above denoted by is replaced by a , where we recall that . We checked our computations and did not find a mistake; therefore, to the best of our knowledge, the coefficient in should indeed be and not . We leave this small issue for further analysis. By Equation (471), any limiting operator of satisfies:
Because of the above specific form of , it can be proved as in [11] (assuming ), using the results of [20], that the martingale problem related to is well-posed, and therefore, using Equation (492), that converges weakly in the Skorokhod topology as to the d-dimensional process solution of:
where W is a standard d-dimensional Brownian motion and . This result is a generalization of the one in [11], as in our context, the times at which the price jumps are random. The consequence of this randomness is that the driving coefficients b and c of the limiting diffusion process are multiplied by, respectively, and . If as in [11], then and we retrieve exactly their result. As mentioned above, our very general framework allows one to be creative with the choices of the operators D and Γ, leading to possibly interesting extensions of the considered model. For example, one could simply “enlarge” the Markov chain and consider the same model but with regime-switching parameters and , leading to diffusion limit results at almost no extra technical and computational cost.
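The limiting diffusion can be simulated directly. The sketch below assumes, for illustration only, that both the drift b and the covariance c are rescaled by the same factor 1/m_bar, where m_bar denotes the weighted average mean sojourn time (so that m_bar equal to the deterministic time step recovers the setting of [11]); the limiting coefficients b and c are hypothetical:

```python
import numpy as np

def diffusion_limit_path(b, c, m_bar, t_grid, x0, rng=None):
    """Euler sketch of the limiting diffusion: constant drift b / m_bar and
    covariance c / m_bar, with W a standard d-dimensional Brownian motion."""
    rng = np.random.default_rng(rng)
    chol = np.linalg.cholesky(c / m_bar)       # square root of the rescaled covariance
    x = np.empty((t_grid.size, len(x0)))
    x[0] = x0
    for k in range(t_grid.size - 1):
        dt = t_grid[k + 1] - t_grid[k]
        x[k + 1] = x[k] + (b / m_bar) * dt + chol @ rng.standard_normal(len(x0)) * np.sqrt(dt)
    return x

# Hypothetical limiting coefficients for d = 2 assets.
b = np.array([0.01, -0.005])
c = np.array([[0.04, 0.01], [0.01, 0.09]])
x = diffusion_limit_path(b, c, m_bar=1.0, t_grid=np.linspace(0.0, 1.0, 101),
                         x0=np.array([0.0, 0.0]), rng=7)
```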
Author Contributions
Conceptualization, A.S.; methodology, A.S.; software, N.V.; validation, A.S. and N.V.; formal analysis, N.V. and A.S.; investigation, N.V. and A.S.; resources, N.V. and A.S.; writing—original draft preparation, A.S.; writing—review and editing, A.S.; supervision, A.S.; project administration, A.S.; funding acquisition, A.S.
Funding
This research was funded by NSERC grant number RT732266.
Acknowledgments
The second author wishes to thank NSERC for continuing support. We also thank two anonymous referees very much for many valuable remarks and suggestions that definitely improved the final version of the paper.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Griego, R.; Hersh, R. Random evolutions, Markov chains, and systems of partial differential equations. Proc. Natl. Acad. Sci. USA 1969, 62, 305–308.
- Hersh, R. Random evolutions: A survey of results and problems. Rocky Mt. J. Math. 1972, 4, 443–475.
- Hersh, R.; Pinsky, M. Random evolutions are asymptotically Gaussian. Commun. Pure Appl. Math. 1972, XXV, 33–44.
- Gassiat, P.; Gozzi, F.; Pham, H. Investment/consumption problem in illiquid markets with regime-switching. SIAM J. Control Optim. 2014, 52, 1761–1786.
- Vadori, N.; Swishchuk, A. Strong law of large numbers and central limit theorems for functionals of inhomogeneous semi-Markov processes. Stoch. Anal. Appl. 2015, 33, 213–243.
- Watkins, J. A central limit theorem in random evolution. Ann. Probab. 1984, 12, 480–513.
- Watkins, J. Limit theorems for stationary random evolutions. Stoch. Proc. Appl. 1985, 19, 189–224.
- Watkins, J. A stochastic integral representation for random evolution. Ann. Probab. 1985, 13, 531–557.
- Swishchuk, A. Random Evolutions and Their Applications; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1997.
- Swishchuk, A.; Korolyuk, V. Evolutions of Systems in Random Media; CRC Press: Boca Raton, FL, USA, 1995.
- Cont, R.; Wagalath, L. Fire sale forensics: Measuring endogenous risk. Math. Financ. 2016, 26, 835–866.
- Fodra, P.; Pham, H. Semi-Markov model for market microstructure. arXiv 2013, arXiv:1305.0105.
- Vadori, N. Semi-Markov Driven Models: Limit Theorems and Financial Applications. Ph.D. Thesis, University of Calgary, Calgary, AB, Canada, 2015.
- Vadori, N.; Swishchuk, A. Convergence of random bounded operators in the Skorokhod space. Random Oper. Stoch. Equ. 2019, 27, 1–13.
- Ruschendorf, L.; Schnurr, A.; Wolf, V. Comparison of time-inhomogeneous Markov processes. Adv. Appl. Probab. 2016, 48, 1015–1044.
- Filipovic, D. Time-inhomogeneous affine processes. Stoch. Process. Their Appl. 2005, 115, 639–659.
- D’Amico, G.; Petroni, F. Copula based multivariate semi-Markov models with applications in high-frequency finance. Eur. J. Oper. Res. 2018, 267, 765–777.
- Pazy, A. Semigroups of Linear Operators and Applications to Partial Differential Equations; Springer-Verlag: New York, NY, USA, 1983.
- Gulisashvili, A.; van Casteren, J. Non-Autonomous Kato Classes and Feynman-Kac Propagators; World Scientific Publishing Co. Pte. Ltd.: Singapore, 2006.
- Ethier, S.; Kurtz, T. Markov Processes: Characterization and Convergence; John Wiley: Hoboken, NJ, USA, 1986.
- Jacod, J.; Shiryaev, A. Limit Theorems for Stochastic Processes; Springer: New York, NY, USA, 2003.
- Mathé, P. Numerical integration using V-uniformly ergodic Markov chains. J. Appl. Probab. 2004, 41, 1104–1112.
- Conway, J. A Course in Functional Analysis; Springer: New York, NY, USA, 2007.
- Revuz, D. Markov Chains; Elsevier Science Publishers B.V.: London, UK, 1984.
- Billingsley, P. Convergence of Probability Measures; Wiley & Sons: Hoboken, NJ, USA, 1999.
- Ledoux, M.; Talagrand, M. Probability in Banach Spaces: Isoperimetry and Processes; Springer-Verlag: New York, NY, USA, 1991.
- Karatzas, I.; Shreve, S. Brownian Motion and Stochastic Calculus; Springer: New York, NY, USA, 1998.
- El Karoui, N.; Méléard, S. Martingale measures and stochastic calculus. Probab. Theory Relat. Fields 1990, 84, 83–101.
- Riedle, M.; van Gaans, O. Stochastic integration for Lévy processes with values in Banach spaces. Stoch. Process. Their Appl. 2009, 119, 1952–1974.
- Lyons, R. Strong laws of large numbers for weakly correlated random variables. Mich. Math. J. 1988, 35, 353–359.
- Sato, K. Lévy Processes and Infinitely Divisible Distributions; Cambridge University Press: Cambridge, UK, 1999.
- Cont, R.; Tankov, P. Financial Modelling with Jump Processes; CRC Press LLC: Boca Raton, FL, USA, 2004.
- Kluge, W. Time-Inhomogeneous Lévy Processes in Interest Rate and Credit Risk Models. Ph.D. Thesis, Albert-Ludwigs-Universität Freiburg, Freiburg im Breisgau, Germany, 2005.
- Swishchuk, A.; Vadori, N. Smiling for the Delayed Volatility Swaps. Wilmott Magazine, 19 December 2014; 62–72.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).