Article

Random Times for Markov Processes with Killing

by Yuri G. Kondratiev 1,2,† and José Luís da Silva 3,*,†
1 Department of Mathematics, University of Bielefeld, D-33615 Bielefeld, Germany
2 Institute of Mathematics of the National Academy of Sciences of Ukraine, National Pedagogical Dragomanov University, 33615 Kiev, Ukraine
3 CIMA, University of Madeira, Campus da Penteada, 9020-105 Funchal, Portugal
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Fractal Fract. 2021, 5(4), 254; https://doi.org/10.3390/fractalfract5040254
Submission received: 8 November 2021 / Revised: 1 December 2021 / Accepted: 3 December 2021 / Published: 4 December 2021
(This article belongs to the Special Issue Probabilistic Method in Fractional Calculus)

Abstract: We consider random time changes in Markov processes with killing potentials. We study how such random time changes may be introduced in Markov processes with killing and how they may influence the time behavior of these processes. As applications, we study the parabolic Anderson problem, non-local Schrödinger operators and the generalized Anderson problem.

1. Introduction

The idea of considering stochastic processes with general random times is known foremost from the classical book by Gikhman and Skorokhod [1]. In the case of Markov processes, random time changes by subordinators were already considered by Bochner [2], who showed that the resulting process is again a Markov process, the so-called Bochner subordinated Markov process. A more interesting scenario is realized when the random times are given by the inverses of subordinators. Indeed, after such a random time change, the resulting process fails to be a Markov process.
The technically easiest case is the inverse of an α-stable subordinator, $\alpha\in(0,1)$. On the one hand, additional assumptions on the subordinator considerably reduce the set of time change processes we can count on, resulting in restrictive assumptions for possible applications. On the other hand, general inverse subordinators are technically difficult to handle. Such limitations can be overcome for certain sub-classes of inverse subordinators; see, for example, Kochubei et al. [3,4]. Let us underline that the random time change approach turns out to be a very effective tool in modeling several physical systems, ranging from ecological to biological ones (see, for example, Magdziarz and Schilling [5] and the references therein), and it opens the way to further applications. All these considerations concern the case of conservative Markov processes. For Markov processes with a killing potential, the situation is less studied. The aim of this paper is to show how random time changes may be introduced in these Markov processes with killing potential and how these changes may influence their time behavior.
The paper is organized as follows. In Section 2, we briefly describe the class of Markov processes with killing potential we are interested in, their transition probabilities and the Markov semigroup. In Section 3, we introduce the class of random time processes and the corresponding time evolution solutions. Section 4 is devoted to explicit calculations of the time asymptotic behavior of two families of functions integrated against the density of the random time; these calculations are needed in the following sections.
In Section 5 and Section 6, we consider important examples of Markov processes with killing potential. Namely, we study random time changes in the parabolic Anderson problem (see Theorem 1) and in the generalized Anderson problem (see Theorem 4). We also study random time changes for dynamics corresponding to non-local Schrödinger operators (see Theorem 3).

2. Markov Processes with Killing Potentials

Let $\{X(t),\ t\ge 0,\ P_x,\ x\in\mathbb{R}^d\}$ be a right continuous Feller strictly Markov process in $\mathbb{R}^d$ starting at $x\in\mathbb{R}^d$ under $P_x$. Denote by $L$ the generator of this process and by $(T_t)_{t\ge 0}$ the corresponding Markov semigroup. Following Gikhman and Skorokhod [1], we show how to construct the transition probabilities of a process with killing potential from the transition probabilities of the process $(X(t))_{t\ge 0}$.
Let $V:\mathbb{R}^d\to\mathbb{R}_+$ be a given continuous function, called the killing potential. For any $t\ge 0$ and $x\in\mathbb{R}^d$, we define the function
$$P^V(t,x,B):=\mathbb{E}_x\Big[\mathbb{1}_B(X(t))\exp\Big(-\int_0^t V(X(s))\,\mathrm{d}s\Big)\Big],$$
where $B\in\mathcal{B}(\mathbb{R}^d)$ and $\mathbb{E}_x[\cdot]$ denotes the expectation with respect to the probability $P_x$. Then $P^V(t,x,B)$ is a transition probability, and we denote the corresponding Markov process by $(X^V(t))_{t\ge 0}$. The generator $L_V$ of the Markov process $(X^V(t))_{t\ge 0}$ is the potential perturbation of $L$, that is,
$$L_V=L-V.$$
The Markov semigroup $(T_t^V)_{t\ge 0}$ associated to the Markov process $(X^V(t))_{t\ge 0}$ has the explicit form (for every $x\in\mathbb{R}^d$ and $t\ge 0$)
$$(T_t^V f)(x)=\big(e^{t(L-V)}f\big)(x)=\mathbb{E}_x\Big[f(X(t))\exp\Big(-\int_0^t V(X(s))\,\mathrm{d}s\Big)\Big].$$
In the language of mathematical physics, this representation is known as the Feynman–Kac formula and was originally discovered for the case $X(t)=B(t)$, $t\ge 0$, the Brownian motion on $\mathbb{R}^d$.
For $V,f\in C_b(\mathbb{R}^d)$ (the set of all bounded continuous functions), the function
$$u(t,x)=\mathbb{E}_x\Big[f(X(t))\exp\Big(-\int_0^t V(X(s))\,\mathrm{d}s\Big)\Big]$$
satisfies the following integral equation:
$$u(t,x)=(T_t f)(x)-\int_0^t\int_{\mathbb{R}^d}u(t-s,y)\,V(y)\,P_x(s,\mathrm{d}y)\,\mathrm{d}s.$$
The latter is the integral form of the Kolmogorov equation for $X^V(t)$, that is,
$$\partial_t u(t,x)=Lu(t,x)-V(x)u(t,x).$$
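For illustration, the Feynman–Kac representation above can be checked numerically. The following minimal Python sketch (assuming NumPy; the constant potential $V\equiv c$, the test function $f(x)=x^2$, the dimension $d=1$ and all parameter values are illustrative choices, not taken from the text) compares a Monte Carlo estimate of $\mathbb{E}_x[f(B(t))\exp(-\int_0^t V(B(s))\,\mathrm{d}s)]$ for standard Brownian motion with the exact value $e^{-ct}(x^2+t)$ available in this special case.

```python
import numpy as np

rng = np.random.default_rng(0)

def feynman_kac_mc(x0, t, V, f, n_paths=20000, n_steps=400):
    """Monte Carlo estimate of u(t, x0) = E_x0[ f(B_t) exp(-int_0^t V(B_s) ds) ]
    for one-dimensional standard Brownian motion, with a left-point rule in time."""
    dt = t / n_steps
    x = np.full(n_paths, float(x0))
    integral = np.zeros(n_paths)
    for _ in range(n_steps):
        integral += V(x) * dt                      # accumulate int_0^t V(B_s) ds
        x += np.sqrt(dt) * rng.standard_normal(n_paths)
    return np.mean(f(x) * np.exp(-integral))

# Constant killing potential V = c: then u(t, x) = e^{-c t} E_x[f(B_t)], explicit for f(x) = x^2.
c, t, x0 = 0.7, 1.0, 0.5
mc = feynman_kac_mc(x0, t, V=lambda x: c * np.ones_like(x), f=lambda x: x ** 2)
exact = np.exp(-c * t) * (x0 ** 2 + t)             # E_x[B_t^2] = x^2 + t for standard BM
print(mc, exact)                                   # should agree up to Monte Carlo error
```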

3. Random Time Processes

In this section, we introduce the class of subordinators which serve as random time processes. More precisely, the random times correspond to the inverses of α-stable subordinators. This random time process is well studied in the literature (see Bingham [6] and Feller [7]), and in Section 4 we apply it to derive explicit calculations for the subordination of two families of exponential functions.
From now on, we fix a probability space $(\Omega,\mathcal{F},P)$ and start with the basic definition.
Definition 1.
A random time process is a map $E:[0,+\infty)\times\Omega\to\mathbb{R}_+$ satisfying the following conditions:
  • For a.a. $\omega\in\Omega$, $E(t,\omega)\ge 0$ for all $t\in[0,+\infty)$;
  • For a.a. $\omega\in\Omega$, $E(0,\omega)=0$;
  • The function $E(\cdot,\omega)$ is increasing and satisfies
    $$\lim_{t\to\infty}E(t,\omega)=\infty.$$
Definition 2.
A process $(S(t))_{t\ge 0}$ is a subordinator if the following conditions are satisfied:
  1. $S(0)=0$, $P$-a.s.;
  2. $S(t+s)-S(t)$ has the same law as $S(s)$ for all $t,s>0$;
  3. if $(\mathcal{F}_t)_{t\ge 0}$ denotes the filtration generated by $(S(t))_{t\ge 0}$, i.e., $\mathcal{F}_t:=\sigma(\{S(r),\ r\le t\})$, then $S(t+s)-S(t)$ is independent of $\mathcal{F}_t$ for all $t,s>0$;
  4. $t\mapsto S(t)$ is $P$-a.s. right continuous with left limits;
  5. $t\mapsto S(t)$ is $P$-a.s. an increasing function.
A process which satisfies conditions 1–4 of Definition 2 is called a Lévy process; see, for example, Bertoin [8] and Applebaum [9] for more details, examples and properties. It follows from the Lévy–Khintchine formula that the Laplace transform of $S(t)$ may be written in terms of a Bernstein function (also known as the Laplace exponent) $\Phi:[0,\infty)\to[0,\infty)$ as
$$\mathbb{E}\big[e^{-\lambda S(t)}\big]=e^{-t\Phi(\lambda)},\quad\lambda\ge 0.$$
Moreover, assuming no drift and no killing rate, the function $\Phi$ admits the representation
$$\Phi(\lambda)=\int_{(0,\infty)}\big(1-e^{-\lambda\tau}\big)\,\mathrm{d}\sigma(\tau),$$
where the non-negative measure $\sigma$, also called the Lévy measure, has support in $[0,\infty)$ and fulfills the integrability condition
$$\int_{(0,\infty)}(1\wedge\tau)\,\mathrm{d}\sigma(\tau)<\infty.$$
In what follows, we assume that the Lévy measure $\sigma$ satisfies
$$\sigma\big((0,\infty)\big)=\infty.$$
Using the Lévy measure $\sigma$, we define the kernel $k$ as follows:
$$k:(0,\infty)\to(0,\infty),\quad t\mapsto k(t):=\sigma\big((t,\infty)\big).$$
We denote its Laplace transform by $\mathcal{K}$, that is, for any $\lambda\ge 0$,
$$\mathcal{K}(\lambda):=\int_0^\infty e^{-\lambda t}k(t)\,\mathrm{d}t.$$
The relation between the function $\mathcal{K}$ and the Laplace exponent $\Phi$ is given by
$$\Phi(\lambda)=\lambda\,\mathcal{K}(\lambda),\quad\lambda\ge 0.$$
Example 1.
(α-stable subordinator). A classical example of a subordinator $S$ is the so-called α-stable process with index $\alpha\in(0,1)$. Specifically, a subordinator is α-stable if its Laplace exponent is
$$\Phi_\alpha(\lambda)=\lambda^\alpha=\frac{\alpha}{\Gamma(1-\alpha)}\int_0^\infty\big(1-e^{-\lambda\tau}\big)\tau^{-(1+\alpha)}\,\mathrm{d}\tau.$$
In this case, it follows that the Lévy measure is $\mathrm{d}\sigma_\alpha(\tau)=\frac{\alpha}{\Gamma(1-\alpha)}\tau^{-(1+\alpha)}\,\mathrm{d}\tau$. In addition, we have $\mathcal{K}_\alpha(\lambda)=\lambda^{\alpha-1}$ and $k_\alpha(t)=\frac{t^{-\alpha}}{\Gamma(1-\alpha)}$.
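As a quick numerical sanity check of Example 1 (a sketch only, assuming NumPy/SciPy; the value $\alpha=0.6$ is an arbitrary choice), one can verify the representation of $\Phi_\alpha$ and the tail formula for $k_\alpha$ by quadrature.

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

alpha = 0.6

def levy_density(tau):
    # Lévy density of the alpha-stable subordinator: alpha/Gamma(1-alpha) * tau^{-(1+alpha)}
    return alpha / gamma(1 - alpha) * tau ** (-(1 + alpha))

# Phi_alpha(lam) = int_(0,inf) (1 - e^{-lam tau}) dsigma_alpha(tau) should equal lam**alpha
for lam in (0.5, 1.0, 3.0):
    f = lambda tau: (1 - np.exp(-lam * tau)) * levy_density(tau)
    val = quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]   # split at 1 because of the endpoint singularity
    print(lam, val, lam ** alpha)

# k_alpha(t) = sigma_alpha((t, inf)) should equal t^{-alpha} / Gamma(1 - alpha)
t = 2.0
print(quad(levy_density, t, np.inf)[0], t ** (-alpha) / gamma(1 - alpha))
```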
The inverse of a subordinator $(S(t))_{t\ge 0}$ is the stochastic process $(E(t))_{t\ge 0}$ defined by
$$E(t):=\inf\{s>0\mid S(s)\ge t\}=\sup\{s\ge 0\mid S(s)\le t\}.$$
For each $t\ge 0$, $E(t)$ represents the first time at which $S$ passes the level $t$. The stochastic process $E$ is positive and non-decreasing and has all the properties required of a random time. However, $E$ is, in general, no longer a Lévy process because, by construction, $E$ exhibits flat time periods determined by the jumps of $S$.
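A path of $S$ and of its inverse $E$ can be simulated directly. The sketch below (assuming NumPy; grid sizes and $\alpha=0.7$ are illustrative choices) draws i.i.d. positive $\alpha$-stable increments via the Kanter/Chambers–Mallows–Stuck representation and computes $E$ as the first-passage functional of $S$ on the grid; the flat periods of $E$ correspond to the jumps of $S$.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.7

def stable_increments(n, dt, alpha):
    """Positive alpha-stable increments with E[exp(-lam X)] = exp(-dt * lam**alpha),
    via the Kanter / Chambers-Mallows-Stuck representation (0 < alpha < 1)."""
    u = rng.uniform(0.0, np.pi, size=n)
    w = rng.exponential(1.0, size=n)
    x = (np.sin(alpha * u) / np.sin(u) ** (1 / alpha)
         * (np.sin((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))
    return dt ** (1 / alpha) * x          # self-similarity: S(dt) has the law of dt^{1/alpha} S(1)

n, dt = 5000, 0.01
s_grid = np.arange(1, n + 1) * dt
S = np.cumsum(stable_increments(n, dt, alpha))        # one path of the subordinator

t_grid = np.linspace(0.0, 0.9 * S[-1], 400)
E = s_grid[np.searchsorted(S, t_grid, side='right')]  # E(t): first grid time with S > t

# E is non-decreasing, and a sizable fraction of its increments are (numerically) zero:
print(np.all(np.diff(E) >= 0), np.mean(np.isclose(np.diff(E), 0.0)))
```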
For any $t\ge 0$, we denote by $G_t(\tau)$, $\tau\ge 0$, the marginal density of $E(t)$ (when it exists) or, equivalently,
$$G_t(\tau)\,\mathrm{d}\tau=\partial_\tau P(E(t)\le\tau)\,\mathrm{d}\tau=\partial_\tau P(S(\tau)\ge t)\,\mathrm{d}\tau=-\partial_\tau P(S(\tau)<t)\,\mathrm{d}\tau.$$
Remark 1.
If $(S(t))_{t\ge 0}$ is an α-stable subordinator, $\alpha\in(0,1)$, then the inverse process $E(t)$ has Laplace transform (cf. Proposition 1(a) in Bingham [6], or Feller [7], Piryatinska et al. [10]) given by
$$\mathbb{E}\big(e^{-\lambda E(t)}\big)=\int_0^\infty e^{-\lambda\tau}G_t(\tau)\,\mathrm{d}\tau=\sum_{n=0}^\infty\frac{(-\lambda t^\alpha)^n}{\Gamma(n\alpha+1)}=E_\alpha(-\lambda t^\alpha),$$
where $E_\alpha$ is the Mittag–Leffler function. The asymptotic behavior of $E_\alpha$ ensures that $\mathbb{E}\big(e^{-\lambda E(t)}\big)\sim C\,t^{-\alpha}$ as $t\to\infty$. It is possible to find the density $G_t(\tau)$ explicitly using the complete monotonicity of the Mittag–Leffler function $E_\alpha$. The result may be expressed in terms of the Wright function $W_{\mu,\nu}$ as a series,
$$G_t(\tau)=t^{-\alpha}W_{-\alpha,1-\alpha}\big(-\tau t^{-\alpha}\big)=t^{-\alpha}\sum_{n=0}^\infty\frac{(-t^{-\alpha}\tau)^n}{n!\,\Gamma\big(1-\alpha(n+1)\big)}.\qquad(2)$$
See Gorenflo et al. [11] for more details. We use the short notation $G_t(\tau)=:t^{-\alpha}M_\alpha(t^{-\alpha}\tau)$, and later we need the moments of the function $M_\alpha$. They are given by the formula (see, for example, Mainardi et al. [12])
$$\int_0^\infty\tau^n M_\alpha(\tau)\,\mathrm{d}\tau=\frac{\Gamma(n+1)}{\Gamma(n\alpha+1)},\qquad n=0,1,2,\dots.\qquad(3)$$
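The series (2) and the moments (3) are easy to check numerically. A minimal sketch (assuming NumPy/SciPy and using the special value $\alpha=1/2$, for which the closed form $M_{1/2}(z)=e^{-z^2/4}/\sqrt{\pi}$ is available):

```python
import numpy as np
from math import gamma, factorial, sqrt, pi
from scipy.integrate import quad
from scipy.special import rgamma

def M_series(z, alpha, n_terms=60):
    """Wright M-function via its series sum_n (-z)^n / (n! Gamma(1 - alpha*(n+1))).
    rgamma = 1/Gamma vanishes at the poles of Gamma, which takes care of the degenerate terms.
    (Adequate for small and moderate z; the alternating series cancels badly for large z.)"""
    return sum((-z) ** n * rgamma(1 - alpha * (n + 1)) / factorial(n) for n in range(n_terms))

alpha = 0.5
M_exact = lambda z: np.exp(-z ** 2 / 4) / sqrt(pi)    # closed form, valid only for alpha = 1/2

# the truncated series reproduces the closed form
for z in (0.3, 1.0, 2.5):
    print(z, M_series(z, alpha), M_exact(z))

# moment formula (3): int_0^inf tau^n M_alpha(tau) dtau = Gamma(n+1) / Gamma(alpha*n + 1)
for n in range(4):
    mom, _ = quad(lambda tau: tau ** n * M_exact(tau), 0, np.inf)
    print(n, mom, gamma(n + 1) / gamma(alpha * n + 1))
```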
Introduce the random time change process
$$Y(t):=X(E(t)),\qquad t\ge 0,$$
that is, the subordination of $(X(t))_{t\ge 0}$ by the random time $(E(t))_{t\ge 0}$. The process $(Y(t))_{t\ge 0}$ is non-Markovian, and when $(E(t))_{t\ge 0}$ is the inverse of an α-stable subordinator, the process $Y$ has been used in models in biology and other fields; see Baeumer and Meerschaert [13], Meerschaert and Scheffler [14] and, for special examples of the initial process $(X(t))_{t\ge 0}$, Magdziarz and Schilling [5], Li et al. [15], Mimica [16].
As a basic characteristic of the new random time change process $(Y(t))_{t\ge 0}$, we may study the time evolution
$$u(t,x)=\mathbb{E}_x\big[f(Y(t))\big]$$
for given initial data $f$.
As pointed out in several works (see, for example, Toaldo [17], Chen [18] and references therein), if $X$ has generator $L$, then $u(t,x)=\mathbb{E}_x[f(Y(t))]$ is the unique strong solution (in an appropriate sense), denoted below by $u_\alpha(t,x)$, of the following Cauchy problem:
$$D_t^\alpha u_\alpha(t,x)=Lu_\alpha(t,x),\qquad u_\alpha(0,x)=f(x).\qquad(4)$$
Here, $D_t^\alpha$ denotes the Caputo–Dzhrbashyan fractional derivative defined by
$$D_t^\alpha\phi(t)=D_t^{(k_\alpha)}\phi(t)=\frac{\mathrm{d}}{\mathrm{d}t}\int_0^t k_\alpha(t-s)\big(\phi(s)-\phi(0)\big)\,\mathrm{d}s\qquad(5)$$
with the kernel $k_\alpha(t)=\frac{t^{-\alpha}}{\Gamma(1-\alpha)}$; see Example 1.
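For concreteness, the derivative (5) can be evaluated numerically: for smooth $\phi$ it coincides with $\frac{1}{\Gamma(1-\alpha)}\int_0^t(t-s)^{-\alpha}\phi'(s)\,\mathrm{d}s$, and the substitution $v=(t-s)^{1-\alpha}$ removes the endpoint singularity. The sketch below (assuming SciPy; the test functions and the values $\alpha=1/2$, $t=1.3$ are illustrative) checks it against $D_t^\alpha t=\frac{t^{1-\alpha}}{\Gamma(2-\alpha)}$ and $D_t^\alpha t^2=\frac{2t^{2-\alpha}}{\Gamma(3-\alpha)}$.

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

def caputo(phi_prime, t, alpha):
    """Caputo-Dzhrbashyan derivative of a smooth function phi at time t:
    (D^alpha phi)(t) = 1/Gamma(1-alpha) * int_0^t (t-s)^{-alpha} phi'(s) ds,
    computed after the substitution v = (t-s)^{1-alpha}, which removes the singularity."""
    integrand = lambda v: phi_prime(t - v ** (1.0 / (1.0 - alpha)))
    val, _ = quad(integrand, 0.0, t ** (1.0 - alpha))
    return val / gamma(2.0 - alpha)

alpha, t = 0.5, 1.3
print(caputo(lambda s: 1.0, t, alpha), t ** (1 - alpha) / gamma(2 - alpha))           # D^alpha t
print(caputo(lambda s: 2.0 * s, t, alpha), 2 * t ** (2 - alpha) / gamma(3 - alpha))   # D^alpha t^2
```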
Let $u_1(t,x)$ be the solution of the Cauchy problem analogous to (4) but with the ordinary time derivative,
$$\partial_t u_1(t,x)=Lu_1(t,x),\qquad u_1(0,x)=f(x).$$
In stochastic terminology, it is the solution of the forward Kolmogorov equation corresponding to the process $(X(t))_{t\ge 0}$. Under quite general assumptions, there is a convenient and essentially obvious relation between these evolutions, known as the subordination principle:
$$u_\alpha(t,x)=\int_0^\infty u_1(\tau,x)\,G_t(\tau)\,\mathrm{d}\tau,$$
where $G_t$ is the density of $E(t)$, $t\ge 0$.
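A small numerical check of the subordination principle (a sketch assuming SciPy; the "generator" is simply multiplication by $-\lambda$, and $\alpha=1/2$ is used because then both $G_t$ and the fractional solution are explicit): $u_1(t)=e^{-\lambda t}$ solves $\partial_t u_1=-\lambda u_1$, and its subordination should reproduce $E_{1/2}(-\lambda t^{1/2})=e^{\lambda^2 t}\operatorname{erfc}(\lambda\sqrt t)$ from Remark 1.

```python
import numpy as np
from math import sqrt, pi
from scipy.integrate import quad
from scipy.special import erfc

alpha, lam = 0.5, 0.8

def G(t, tau):
    # inverse-subordinator density for alpha = 1/2: G_t(tau) = t^{-1/2} M_{1/2}(tau t^{-1/2}),
    # with M_{1/2}(z) = exp(-z^2/4)/sqrt(pi)
    z = tau * t ** (-alpha)
    return t ** (-alpha) * np.exp(-z ** 2 / 4) / sqrt(pi)

for t in (0.5, 2.0, 10.0):
    u_alpha, _ = quad(lambda tau: np.exp(-lam * tau) * G(t, tau), 0, np.inf)
    ml = np.exp(lam ** 2 * t) * erfc(lam * sqrt(t))   # E_{1/2}(-lam * t^{1/2})
    print(t, u_alpha, ml)                              # the two columns coincide
```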
We say that the functions $f$ and $g$ are asymptotically equivalent at infinity, and write $f(x)\sim g(x)$ as $x\to\infty$, if
$$\lim_{x\to\infty}\frac{f(x)}{g(x)}=1.$$
Below, $C$ denotes a constant whose value is unimportant and may change from line to line.

4. Asymptotic Behavior of Particular Families

In this section, we want to study the time asymptotic behavior of the subordination of the family of functions
$$f_\kappa(t):=e^{-t^\kappa}\quad\text{and}\quad g_{\tilde\kappa}(t):=e^{\tilde\kappa t},\qquad t\ge 0,\ \kappa,\tilde\kappa>0.$$
By the subordination of a function we mean its integration against the one-dimensional distribution of an inverse subordinator, in accordance with the subordination principle. More precisely, if $G_t(\tau)$ denotes the density of the inverse α-stable subordinator $E(t)$, then the objects to look at are
$$F_\kappa(t):=\int_0^\infty f_\kappa(\tau)\,G_t(\tau)\,\mathrm{d}\tau=\int_0^\infty e^{-\tau^\kappa}G_t(\tau)\,\mathrm{d}\tau$$
and
$$H_{\tilde\kappa}(t):=\int_0^\infty g_{\tilde\kappa}(\tau)\,G_t(\tau)\,\mathrm{d}\tau=\int_0^\infty e^{\tilde\kappa\tau}G_t(\tau)\,\mathrm{d}\tau.$$

4.1. The Case $f_\kappa(t)=e^{-t^\kappa}$, $\kappa>0$

In this case, we may use the series (2) to compute the integral, that is,
$$F_\kappa(t)=\int_0^\infty e^{-\tau^\kappa}G_t(\tau)\,\mathrm{d}\tau=\int_0^\infty e^{-\tau^\kappa}t^{-\alpha}M_\alpha(\tau t^{-\alpha})\,\mathrm{d}\tau=t^{-\alpha}\sum_{n=0}^\infty\frac{(-t^{-\alpha})^n}{n!\,\Gamma\big(1-\alpha(n+1)\big)}\int_0^\infty e^{-\tau^\kappa}\tau^n\,\mathrm{d}\tau$$
$$=\frac{t^{-\alpha}}{\kappa}\sum_{n=0}^\infty\frac{(-t^{-\alpha})^n\,\Gamma\big(\frac{n+1}{\kappa}\big)}{n!\,\Gamma\big(1-\alpha(n+1)\big)}=\frac{\Gamma\big(\frac{1}{\kappa}\big)}{\kappa\,\Gamma(1-\alpha)}\,t^{-\alpha}-\frac{\Gamma\big(\frac{2}{\kappa}\big)}{\kappa\,\Gamma(1-2\alpha)}\,t^{-2\alpha}+O(t^{-3\alpha}),$$
where we used $\int_0^\infty e^{-\tau^\kappa}\tau^n\,\mathrm{d}\tau=\frac{1}{\kappa}\Gamma\big(\frac{n+1}{\kappa}\big)$. It follows that the time asymptotics of $F_\kappa(t)$ is
$$F_\kappa(t)\sim\frac{\Gamma\big(\frac{1}{\kappa}\big)}{\kappa\,\Gamma(1-\alpha)}\,t^{-\alpha}\quad\text{as }t\to\infty.$$
In particular, for $\kappa=1$ the function $f_1(t)=e^{-t}$ decays exponentially, while its subordination $F_1(t)$ decays only polynomially, like $t^{-\alpha}$.
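The asymptotics just obtained can be observed numerically. A sketch (assuming SciPy, with $\alpha=1/2$ so that $G_t$ is explicit; the values of $\kappa$ and $t$ are arbitrary) compares the quadrature value of $F_\kappa(t)$ with the leading term $\frac{\Gamma(1/\kappa)}{\kappa\,\Gamma(1-\alpha)}t^{-\alpha}$; the ratio approaches $1$ as $t$ grows.

```python
import numpy as np
from math import gamma, sqrt, pi
from scipy.integrate import quad

alpha = 0.5
G = lambda t, tau: t ** (-alpha) * np.exp(-(tau * t ** (-alpha)) ** 2 / 4) / sqrt(pi)

for kappa in (1.0, 2.0):
    lead = gamma(1 / kappa) / (kappa * gamma(1 - alpha))     # leading constant of F_kappa
    for t in (10.0, 100.0, 1000.0):
        F, _ = quad(lambda tau: np.exp(-tau ** kappa) * G(t, tau), 0, np.inf)
        print(kappa, t, F, lead * t ** (-alpha))
```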

4.2. The Case $g_{\tilde\kappa}(t)=e^{\tilde\kappa t}$, $\tilde\kappa>0$

We have to compute the integral
$$H_{\tilde\kappa}(t)=\int_0^\infty e^{\tilde\kappa\tau}\,t^{-\alpha}M_\alpha(\tau t^{-\alpha})\,\mathrm{d}\tau=\int_0^\infty e^{\tilde\kappa t^\alpha\tau}M_\alpha(\tau)\,\mathrm{d}\tau.$$
To this end, we expand the exponential and use the explicit moments (3) of the function $M_\alpha$, that is,
$$H_{\tilde\kappa}(t)=\sum_{n=0}^\infty\frac{(\tilde\kappa t^\alpha)^n}{n!}\int_0^\infty\tau^n M_\alpha(\tau)\,\mathrm{d}\tau=\sum_{n=0}^\infty\frac{(\tilde\kappa t^\alpha)^n}{\Gamma(1+\alpha n)}=E_\alpha(\tilde\kappa t^\alpha).$$
It follows from the asymptotics of the Mittag–Leffler function that
$$H_{\tilde\kappa}(t)\sim C\,e^{\tilde\kappa^{1/\alpha}t}\quad\text{as }t\to\infty.$$
Remark 2.
Note that both $g_{\tilde\kappa}$ and its subordination $H_{\tilde\kappa}$ by $G_t(\tau)$ exhibit exponential growth, although with different rates: $\tilde\kappa$ for $g_{\tilde\kappa}$ and $\tilde\kappa^{1/\alpha}$ for $H_{\tilde\kappa}$.
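Remark 2 can likewise be illustrated numerically (a sketch assuming SciPy, again in the explicit case $\alpha=1/2$, where $E_{1/2}(z)=e^{z^2}\operatorname{erfc}(-z)$ and the asymptotic constant is $C=1/\alpha=2$); the quadrature value of $H_{\tilde\kappa}(t)$, the Mittag–Leffler value and the asymptote $\frac1\alpha e^{\tilde\kappa^{1/\alpha}t}$ are compared.

```python
import numpy as np
from math import sqrt, pi
from scipy.integrate import quad
from scipy.special import erfc

alpha, kappa = 0.5, 0.6
G = lambda t, tau: t ** (-alpha) * np.exp(-(tau * t ** (-alpha)) ** 2 / 4) / sqrt(pi)

for t in (2.0, 5.0, 10.0):
    H, _ = quad(lambda tau: np.exp(kappa * tau) * G(t, tau), 0, np.inf)
    ml = np.exp((kappa * t ** alpha) ** 2) * erfc(-kappa * t ** alpha)   # E_{1/2}(kappa t^{1/2})
    asym = (1 / alpha) * np.exp(kappa ** (1 / alpha) * t)
    print(t, H, ml, asym)        # H and ml agree; the ratio to asym tends to 1 as t grows
```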

5. Applications to Parabolic Anderson Models

5.1. Anderson Problem

We briefly describe the Anderson model in the framework of Markov processes with killing potential. The Anderson Hamiltonian is the random operator
$$L^\gamma=\Delta-V(\cdot,\gamma),$$
where $\Delta$ is the Laplace operator in $\mathbb{R}^d$ and $V(x,\gamma)\ge 0$ is a random potential, $x\in\mathbb{R}^d$, $\gamma\in\Gamma(\mathbb{R}^d)$ (the configuration space over $\mathbb{R}^d$). The study of the Anderson model is a large area of modern mathematical physics and stochastics. In the stationary situation, the main focus is on the spectral analysis of $L^\gamma$; see, for example, Pastur and Figotin [19]. Denote (as above) the Markov process with this generator by $X^V$. The related parabolic problem concerns the solution of the following Cauchy problem for the evolution equation, $x\in\mathbb{R}^d$, $\gamma\in\Gamma(\mathbb{R}^d)$:
$$\partial_t u(t,x,\gamma)=\Delta u(t,x,\gamma)-V(x,\gamma)u(t,x,\gamma),\qquad u(0,x,\gamma)=1.$$
In ecological terms, this solution of the Cauchy problem describes the evolution of the population density starting from a uniformly distributed population. In the terminology of Section 3, we are dealing with the Brownian motion $X(t)=B(t)$, $t\ge 0$, in $\mathbb{R}^d$ with the killing potential $V(x,\gamma)$. As in the general case, the solution $u$ has the representation
$$u(t,x,\gamma)=\mathbb{E}_x\Big[\exp\Big(-\int_0^t V(B(s),\gamma)\,\mathrm{d}s\Big)\Big].$$
The random potential $V$ is specified as follows. Let $\phi:\mathbb{R}^d\to\mathbb{R}_+$ be a given interaction potential, which is an even function. Define
$$V(x,\gamma):=\sum_{y\in\gamma}\phi(x-y),\qquad(6)$$
where $\gamma$ is a Poisson random point field on $\mathbb{R}^d$ with intensity parameter $\lambda>0$. The corresponding Poisson measure on the configuration space is denoted by $\pi_\lambda$; see, for example, Albeverio et al. [20] for more details. Of course, the decay properties of $\phi$ are essential for the analysis; see Lemma 1 below.
Our aim is to study the properties of the time-changed process $Y(t)=X^V(E(t))$. More precisely, we want to investigate the asymptotics of the solution $v(t,x,\gamma)$ to the related fractional evolution (Kolmogorov) equation, i.e.,
$$D_t^{(k)}v(t,x,\gamma)=\Delta v(t,x,\gamma)-V(x,\gamma)v(t,x,\gamma),\qquad v(0,x,\gamma)=1,\qquad(7)$$
where the operator $D_t^{(k)}$ is defined in (5).
Under quite general assumptions, the relation between the two evolutions $u$ and $v$ is given by the following subordination formula:
$$v(t,x,\gamma)=\int_0^\infty u(\tau,x,\gamma)\,G_t(\tau)\,\mathrm{d}\tau.\qquad(8)$$
Below, we investigate the annealed case, that is, we are interested in the average
$$I(t)=\langle v(t,0,\cdot)\rangle=\int_0^\infty G_t(\tau)\,J(\tau)\,\mathrm{d}\tau,$$
where $\langle\cdot\rangle=\mathbb{E}_{\pi_\lambda}(\cdot)$ denotes the expectation with respect to the Poisson measure $\pi_\lambda$ on the configuration space $\Gamma(\mathbb{R}^d)$, and $J(t)=\langle u(t,0,\cdot)\rangle$ denotes the annealed density $u$ at time $t$; see (9) below.

5.2. Annealed Asymptotic

First, we study the behavior of the annealed density $u(t,x,\cdot)$ using its semigroup representation. Explicitly, the density $u(t,x,\gamma)$ has the representation
$$u(t,x,\gamma)=\big(\exp[tL^\gamma]\,1\big)(x)=\big(\exp[t(\Delta-V(\cdot,\gamma))]\,1\big)(x),$$
which, by the Feynman–Kac formula (given above), yields
$$u(t,x,\gamma)=\mathbb{E}_x\Big[\exp\Big(-\int_0^t V(B(s),\gamma)\,\mathrm{d}s\Big)\Big].$$
Because the asymptotic behavior of this density is independent of the starting point, we set $x=0$ and denote
$$J(t):=\langle u(t,0,\cdot)\rangle:=\mathbb{E}_{\pi_\lambda}\big[u(t,0,\cdot)\big].\qquad(9)$$
Using the lemma from Pastur [21] (that is, a direct application of Jensen's inequality), we obtain the following bound:
$$J(t)\le\big\langle e^{-tV(0,\gamma)}\big\rangle.$$
Taking the expectation with respect to $\pi_\lambda$, with $V(0,\gamma)=\sum_{y\in\gamma}\phi(y)$, leads to
$$J(t)\le\exp\Big(-\lambda\int_{\mathbb{R}^d}\big(1-e^{-t\phi(y)}\big)\,\mathrm{d}y\Big).$$
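The step from the first to the second bound is the Laplace functional of the Poisson point process, $\langle e^{-tV(0,\cdot)}\rangle=\exp\big(-\lambda\int_{\mathbb{R}^d}(1-e^{-t\phi(y)})\,\mathrm{d}y\big)$, which can be checked by Monte Carlo. A sketch in $d=1$ (assuming NumPy/SciPy; the potential $\phi(y)=(1+y^2)^{-2}$, the window $[-L,L]$ and all parameter values are hypothetical choices for illustration only):

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(2)

lam, t, L = 1.5, 1.0, 50.0
phi = lambda y: 1.0 / (1.0 + y ** 2) ** 2            # illustrative integrable, even potential

def annealed_mc(n_samples=40000):
    vals = np.empty(n_samples)
    for i in range(n_samples):
        n_pts = rng.poisson(lam * 2 * L)             # Poisson point process on [-L, L], intensity lam
        pts = rng.uniform(-L, L, size=n_pts)
        vals[i] = np.exp(-t * phi(pts).sum())        # e^{-t V(0, gamma)}, V(0, gamma) = sum phi(y)
    return vals.mean()

integral, _ = quad(lambda y: 1.0 - np.exp(-t * phi(y)), -np.inf, np.inf)
print(annealed_mc(), np.exp(-lam * integral))        # the two numbers agree up to Monte Carlo error
```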
Lemma 1.
Assume that there is $C_1>0$ such that
$$\phi(x)=C_1|x|^{-\theta}\big(1+o(1)\big),\quad|x|\to\infty,\qquad(10)$$
with $\theta>d$. Then there exists $C>0$ such that the following bound holds:
$$J(t)\le\exp\big[-\lambda C\,t^{\frac{d}{\theta}}\big].$$
Proof. 
First note that assumption (10) ensures the existence of the integral $\int_{\mathbb{R}^d}(1-e^{-t\phi(y)})\,\mathrm{d}y$. For each $t>0$, define the subset $D_t$ of $\mathbb{R}^d$ by
$$D_t:=\big\{x\in\mathbb{R}^d\ \big|\ |x|\le C_1^{1/\theta}t^{1/\theta}\big\}\subset\mathbb{R}^d.$$
Then, for $x\in D_t$ it holds that $t\phi(x)\ge 1$, and therefore
$$\int_{\mathbb{R}^d}\big(1-e^{-t\phi(x)}\big)\,\mathrm{d}x\ge\int_{D_t}\big(1-e^{-t\phi(x)}\big)\,\mathrm{d}x\ge(1-e^{-1})\int_{D_t}\mathrm{d}x=C\,t^{d/\theta}.$$
The claim of the lemma follows from this lower estimate. □
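For the pure power potential $\phi(y)=C_1|y|^{-\theta}$ the integral in the proof can even be evaluated exactly in $d=1$: $\int_{\mathbb{R}}(1-e^{-t\phi(y)})\,\mathrm{d}y=2\Gamma(1-\tfrac1\theta)(C_1 t)^{1/\theta}$, exhibiting precisely the $t^{d/\theta}$ growth that drives the bound on $J(t)$. A small numerical check (assuming SciPy; $C_1=1$, $\theta=3$ chosen only for illustration):

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

C1, theta = 1.0, 3.0

def integral(t):
    # int_R (1 - exp(-t * C1 * |y|^{-theta})) dy, computed as twice the integral over (0, inf)
    val, _ = quad(lambda y: 1.0 - np.exp(-t * C1 * y ** (-theta)), 0, np.inf)
    return 2.0 * val

for t in (1.0, 10.0, 100.0):
    print(t, integral(t), 2 * gamma(1 - 1 / theta) * (C1 * t) ** (1 / theta))   # exact t^{1/theta} law
```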
Remark 3.
Note that in Donsker and Varadhan [22] it was shown that, for $\theta>d+2$, the following bound holds:
$$J(t)\le\exp\big[-C\,t^{\frac{d}{d+2}}\big],$$
which is better for these values of θ than our bound. Actually, in Donsker and Varadhan [22], the authors obtained even the asymptotics of $J(t)$. However, for $d<\theta<d+2$, our bound is stronger.
We are now ready to state the main result of this section on the average $I(t)=\langle v(t,0,\cdot)\rangle$ of the subordination of the density $u(t,x,\gamma)$ by the distribution of the inverse α-stable subordinator.
Theorem 1.
Assume that the conditions of Lemma 1 hold and let $v(t,x,\gamma)$ be the solution of the Cauchy problem (7) given by (8). Then the average $I(t)=\langle v(t,0,\cdot)\rangle$ satisfies the following bound:
$$I(t)\le C\,t^{-\alpha}+o(t^{-2\alpha}),\qquad C>0.$$
Proof. 
The statement of the theorem follows from the result of Section 4.1 and Lemma 1 above. □

6. Applications to Non-Local Schrödinger Operators

6.1. Ground State Problem

The contact model in the continuum was introduced in Kondratiev and Skorokhod [23] as a Markov process on the configuration space $\Gamma(\mathbb{R}^d)$ over $\mathbb{R}^d$. This process describes the branching of particles with a dispersion kernel $a(x-y)$ and independent deaths with a mortality intensity $m(x)$. We assume that $a$ is a probability density on $\mathbb{R}^d$ and $m\ge 0$. The evolution of states of this process has a very simple recurrent description in terms of the evolution of correlation functions; see Kondratiev et al. [24,25].
In particular, the evolution equation for the density u ( t , x ) of the system is given by
$$\partial_t u_a(t,x)=\int_{\mathbb{R}^d}a(x-y)\,u_a(t,y)\,\mathrm{d}y-m(x)\,u_a(t,x).$$
It is useful to rewrite this equation in terms of the jump generator:
$$\partial_t u_a(t,x)=\int_{\mathbb{R}^d}a(x-y)\big[u_a(t,y)-u_a(t,x)\big]\,\mathrm{d}y+\big(1-m(x)\big)u_a(t,x)=:\big(L_a u_a\big)(t,x)+\big(1-m(x)\big)u_a(t,x).$$
It is clear that for $m(x)\equiv 1$ we have a stationary solution: for the initial density $u_a(0,x)=1$, in this case $u_a(t,x)=1$ for all $t\ge 0$ and $x\in\mathbb{R}^d$. Introducing the potential $V(x)=1-m(x)$, we obtain the Markov generator for the process with killing as $L_a^V:=L_a+V$. We denote the corresponding Markov process by $X_a^V(t)$, $t\ge 0$. When $V=0$, the process $X_a^0(t)$, $t\ge 0$, has generator $L_a$, which is the generator of a compound Poisson process, i.e., a random walk in $\mathbb{R}^d$ with jump kernel $a(x-y)$.
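Since $L_a$ generates a compound Poisson process (jumps arrive at rate $\int_{\mathbb{R}^d}a=1$ and are distributed according to $a$), its sample paths are easy to simulate. A minimal sketch in $d=1$ (assuming NumPy; the choice of $a$ as the standard normal density and all parameters are illustrative), checking the second moment $\mathbb{E}_x[X_a^0(t)^2]=x^2+t$ valid for this particular $a$:

```python
import numpy as np

rng = np.random.default_rng(3)

def compound_poisson(x0, t, n_paths=50000):
    """Paths at time t of the process with generator (L_a f)(x) = int a(x-y)[f(y)-f(x)] dy,
    with a = standard normal density: a rate-1 Poisson number of i.i.d. N(0,1) jumps."""
    n_jumps = rng.poisson(t, size=n_paths)
    return np.array([x0 + rng.standard_normal(k).sum() for k in n_jumps])

x0, t = 0.3, 2.0
X = compound_poisson(x0, t)
print(np.mean(X ** 2), x0 ** 2 + t)    # Monte Carlo vs. exact second moment
```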
In the study of the behavior of the density $u_a(t,x)$, we have two obvious situations.
  • If the mortality satisfies $m(x)\ge m_0>1$, then $u_a(t,x)\to 0$ as $t\to\infty$ exponentially fast and uniformly in $x$.
  • On the other hand, for $m(x)\le m_0<1$, this density grows exponentially fast to infinity.
The non-trivial case is when the mortality has some fluctuations below the critical value $m=1$. In terms of the potential $V$, we are interested in fluctuations of $V$ above the level $V=0$. Hence, a key question is formulated as follows: does a small positive fluctuation of $V$ create global growth of the population density $u_a$?
The answer to the above question is closely related to the existence and uniqueness of the ground state of the operator $L_a^V$ in $L^2(\mathbb{R}^d)$, that is, of a function $\psi_\lambda$ such that $L_a^V\psi_\lambda=\lambda\psi_\lambda$, where $\lambda>0$ is the maximal eigenvalue of $L_a^V$.
An example and a related analysis were considered in Kondratiev et al. [26].
Theorem 2.
Assume there exists $\delta>0$ such that $V(x)=1$ for every $x\in B_\delta$ (a ball of radius δ). Then the following holds:
  • The ground state $\psi_\lambda$ of $L_a^V$ exists.
  • $\psi_\lambda\in C_b(\mathbb{R}^d)$ and $\psi_\lambda(x)>0$ for all $x\in\mathbb{R}^d$.
  • For the density $u_a(t,x)$, the following asymptotic formula holds:
    $$u_a(t,x)=e^{\lambda t}C\,\psi_\lambda(x)\big(1+o(1)\big)\quad\text{as }t\to\infty.\qquad(11)$$
Remark 4.
We would like to stress that even a local fluctuation of $V$ on a very small region creates growth of the population density everywhere in space.
Consider now the random time $E(t)$, $t\ge 0$, corresponding to the α-stable subordinator and the time-changed Markov process $Y_a^V(t)=X_a^V(E(t))$, where $X_a^V$ is the Markov process with generator $L_a^V$. That is, we have the evolution equation (Kolmogorov equation) for the density $u_a$,
$$\partial_t u_a(t,x)=(L_a u_a)(t,x)+V(x)\,u_a(t,x),\qquad u_a(0,x)=1,$$
and the evolution equation (fractional Kolmogorov equation) for the density $v_a$,
$$\big(D_t^{(k)}v_a\big)(t,x)=(L_a v_a)(t,x)+V(x)\,v_a(t,x),\qquad v_a(0,x)=1,\qquad(12)$$
where the fractional operator $D_t^{(k)}$ is defined in (5). The relation between the two densities $u_a$ and $v_a$ is given by the subordination formula, that is,
$$v_a(t,x)=\int_0^\infty u_a(\tau,x)\,G_t(\tau)\,\mathrm{d}\tau.\qquad(13)$$
Hence, combining the asymptotics (11) for $u_a$, the results from Section 4.2 and the subordination formula (13), we obtain the following asymptotic behavior of the density $v_a$ of the fractional Kolmogorov equation.
Theorem 3.
Assume that the conditions of Theorem 2 hold and let $v_a$ be the solution of (12) given by (13). Then the following asymptotic behavior of $v_a$ holds:
$$v_a(t,x)\sim C\,e^{\lambda^{1/\alpha}t}\quad\text{as }t\to\infty.$$
Proof. 
The claim follows using $u_a$ given in (11) and the result from Section 4.2. □

6.2. Generalized Anderson Problem

A generalized Anderson Hamiltonian is the random operator
$$\tilde L_{a,\gamma}^V=L_a-V(\cdot,\gamma),$$
where the random potential $V$ is defined in (6) above. Note that the result of Lemma 1 is valid for the compound Poisson process as well as for Brownian motion.
Let $\tilde X_a^V(t)$, $t\ge 0$, be the Markov process with generator $\tilde L_{a,\gamma}^V$ and let $\tilde Y_a^V(t):=\tilde X_a^V(E(t))$, $t\ge 0$, be the corresponding time-changed process, where $E$ is the inverse of an α-stable subordinator. The corresponding evolutions $\tilde u_a$ and $\tilde v_a$ of these processes are related via the subordination formula, that is,
$$\tilde v_a(t,x,\gamma)=\int_0^\infty\tilde u_a(\tau,x,\gamma)\,G_t(\tau)\,\mathrm{d}\tau,\qquad(14)$$
where $G_t(\cdot)$ is the density of $E(t)$, $t\ge 0$. We proceed as in Section 5.2 and consider the average
$$\tilde I_a(t):=\langle\tilde v_a(t,0,\cdot)\rangle=\int_0^\infty\tilde J_a(\tau)\,G_t(\tau)\,\mathrm{d}\tau,$$
where
$$\tilde J_a(t):=\langle\tilde u_a(t,0,\cdot)\rangle=\mathbb{E}_{\pi_\lambda}\big[\tilde u_a(t,0,\cdot)\big].$$
This leads us to the following theorem on an asymptotic upper bound for $\tilde I_a$.
Theorem 4.
Assume that the conditions of Lemma 1 hold and let $\tilde u_a$ and $\tilde v_a$ be the evolutions related by (14). Then the average $\tilde I_a(t)=\langle\tilde v_a(t,0,\cdot)\rangle$ satisfies the following bound:
$$\tilde I_a(t)\le C\,t^{-\alpha}+o(t^{-2\alpha}),\qquad C>0.$$
Proof. 
As the result of Lemma 1 is valid for the compound Poisson process, the statement of the theorem follows using the result of Section 4.1. □

7. Conclusions

We studied random time changes in Markov processes with killing potentials. More precisely, we investigated how random time changes may be introduced in these Markov processes and how they influence their time behavior. Applications to the parabolic Anderson problem (see Theorem 1), to non-local Schrödinger operators (see Theorem 3) and to the generalized Anderson problem (see Theorem 4) were worked out.

Author Contributions

Writing—original draft preparation, Y.G.K. and J.L.d.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Fundação para a Ciência e a Tecnologia, Portugal grant number UIDB/MAT/04674/2020 and Ministry of Education and Science of Ukraine grant number 0119U002583.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gikhman, I.I.; Skorokhod, A.V. The Theory of Stochastic Processes II; Springer: Berlin/Heidelberg, Germany, 1974.
  2. Bochner, S. Subordination of non-Gaussian stochastic processes. Proc. Natl. Acad. Sci. USA 1962, 4, 19–22.
  3. Kochubei, A.; Kondratiev, Y.G.; da Silva, J.L. From random times to fractional kinetics. Interdiscip. Stud. Complex Syst. 2020, 16, 5–32.
  4. Kochubei, A.; Kondratiev, Y.G.; da Silva, J.L. Random Time Change and Related Evolution Equations. Time Asymptotic Behavior. Stoch. Dyn. 2020, 4, 20500344.
  5. Magdziarz, M.; Schilling, R.L. Asymptotic properties of Brownian motion delayed by inverse subordinators. Proc. Am. Math. Soc. 2015, 143, 4485–4501.
  6. Bingham, N.H. Limit theorems for occupation times of Markov processes. Z. Wahrsch. Verw. Gebiete 1971, 17, 1–22.
  7. Feller, W. An Introduction to Probability Theory and Its Applications, 2nd ed.; John Wiley & Sons Inc.: New York, NY, USA, 1971; Volume II.
  8. Bertoin, J. Lévy Processes; Cambridge Tracts in Mathematics, Volume 121; Cambridge University Press: Cambridge, UK, 1996.
  9. Applebaum, D. Lévy Processes and Stochastic Calculus; Cambridge University Press: Cambridge, UK, 2004.
  10. Piryatinska, A.; Saichev, A.; Woyczynski, W. Models of anomalous diffusion: The subdiffusive case. Phys. A Stat. Mech. Appl. 2005, 349, 375–420.
  11. Gorenflo, R.; Luchko, Y.; Mainardi, F. Analytical properties and applications of the Wright function. Fract. Calc. Appl. Anal. 1999, 2, 383–414.
  12. Mainardi, F.; Mura, A.; Pagnini, G. The M-Wright function in time-fractional diffusion processes: A tutorial survey. Int. J. Differential Equ. 2010, 2010, 104505.
  13. Baeumer, B.; Meerschaert, M.M. Stochastic solutions for fractional Cauchy problems. Fract. Calc. Appl. Anal. 2001, 4, 481–500.
  14. Meerschaert, M.M.; Scheffler, H.P. Limit theorems for continuous-time random walks with infinite mean waiting times. J. Appl. Probab. 2004, 41, 623–638.
  15. Li, Y.C.; Sato, R.; Shaw, S.Y. Ratio Tauberian theorems for positive functions and sequences in Banach lattices. Positivity 2007, 11, 433–447.
  16. Mimica, A. Heat kernel estimates for subordinate Brownian motions. Proc. Lond. Math. Soc. 2016, 113, 627–648.
  17. Toaldo, B. Convolution-type derivatives, hitting-times of subordinators and time-changed C0-semigroups. Potential Anal. 2015, 42, 115–140.
  18. Chen, Z.Q. Time fractional equations and probabilistic representation. Chaos Solitons Fractals 2017, 102, 168–174.
  19. Pastur, L.; Figotin, A. Spectra of Random and Almost-Periodic Operators; Number 297 in Grundlehren der Mathematischen Wissenschaften; Springer: Berlin/Heidelberg, Germany, 1992.
  20. Albeverio, S.; Kondratiev, Y.G.; Röckner, M. Analysis and Geometry on Configuration Spaces. J. Funct. Anal. 1998, 154, 444–500.
  21. Pastur, L.A. Behavior of certain Wiener integrals as $t\to\infty$ and the density of states of Schrödinger equations with random potential. Teoret. Mat. Fiz. 1977, 32, 88–95.
  22. Donsker, M.D.; Varadhan, S.R.S. Asymptotics for the Wiener sausage. Comm. Pure Appl. Math. 1975, 28, 525–565.
  23. Kondratiev, Y.G.; Skorokhod, A.V. On Contact Processes in Continuum. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 2006, 9, 187–191.
  24. Kondratiev, Y.G.; Kutoviy, O.; Pirogov, S. Correlation functions and invariant measures in continuous contact model. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 2008, 11, 231–258.
  25. Kondratiev, Y.G.; Piatnitski, A.; Zhizhina, E. Asymptotics of fundamental solutions for time fractional equations with convolution kernels. Fract. Calc. Appl. Anal. 2020, 23, 1161–1187. Available online: https://arxiv.org/abs/1907.08677v1 (accessed on 1 December 2021).
  26. Kondratiev, Y.; Molchanov, S.; Pirogov, S.; Zhizhina, E. On ground state of non local Schrödinger operators. Appl. Anal. 2016, 96, 1390–1400. Available online: https://arxiv.org/abs/1601.01136v2 (accessed on 1 December 2021).