Article

Functional Limit Theorem for the Sums of PSI-Processes with Random Intensities

1 Mathematics and Mechanics Faculty, St. Petersburg State University, 199034 St. Petersburg, Russia
2 Steklov Mathematical Institute of Russian Academy of Sciences, 119991 Moscow, Russia
3 Faculty of Economic Sciences, National Research University Higher School of Economics, 109028 Moscow, Russia
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(21), 3955; https://doi.org/10.3390/math10213955
Submission received: 22 September 2022 / Revised: 20 October 2022 / Accepted: 21 October 2022 / Published: 25 October 2022
(This article belongs to the Special Issue Limit Theorems of Probability Theory)

Abstract: We consider a sequence of i.i.d. random variables $(\xi) = (\xi_i)_{i=0,1,2,\dots}$, $\mathbb{E}\xi_0 = 0$, $\mathbb{E}\xi_0^2 = 1$, and subordinate it by a doubly stochastic Poisson process $\Pi(\lambda t)$, where $\lambda \ge 0$ is a random variable and $\Pi$ is a standard Poisson process. The subordinated continuous-time process $\psi(t) = \xi_{\Pi(\lambda t)}$ is known as the PSI-process. The elements of the triplet $(\Pi, \lambda, (\xi))$ are supposed to be independent. For sums of $n$ independent copies of such processes, normalized by $\sqrt{n}$, we establish a functional limit theorem in the Skorokhod space $D[0,T]$, for any $T > 0$, under the assumption $\mathbb{E}|\xi_0|^{2h} < \infty$ for some $h > 1/\gamma^2$. Here, $\gamma \in (0,1]$ reflects the tail behavior of the distribution of $\lambda$; in particular, $\gamma = 1$ when $\mathbb{E}\lambda < \infty$. The limit process is a stationary Gaussian process with the covariance function $\mathbb{E} e^{-\lambda u}$, $u \ge 0$. As a sample application, we construct a martingale from the PSI-process and establish the convergence of normalized cumulative sums of such i.i.d. martingales.

1. Introduction

The Poisson Stochastic Index process (PSI-process) is a special kind of random process in which the discrete time of a random sequence is replaced by the continuous time of a "counting" process of Poisson type.
Throughout this paper, we consider a triplet $\{\Pi, \lambda, (\xi)\}$ of jointly independent components defined on a probability space $\{\Omega, \mathcal{F}, \mathbb{P}\}$. Here, $\Pi$ is a standard Poisson process on $\mathbb{R}_+ := \{t \in \mathbb{R} : t \ge 0\}$, $\lambda$ is an almost surely (a.s.) non-negative random variable, which plays the role of a random intensity, and $(\xi)$ denotes a random sequence $\xi_0, \xi_1, \dots$ of independent and identically distributed (i.i.d.) random variables. Let us define a PSI-process in the following way:
$$\psi(t; \lambda) \equiv \psi(t) := \xi_{\Pi(\lambda t)}, \quad t \in \mathbb{R}_+.\tag{1}$$
The mechanism of the PSI-process reduces to sequential replacements of the terms of the "driven" sequence $(\xi)$ at the arrival times of the "driving" doubly stochastic Poisson process $\Pi(\lambda t)$.
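This replacement mechanism is easy to simulate. The following sketch (Python with NumPy; a standard normal driven sequence and a fixed intensity are illustrative choices, not assumptions of the paper) samples a PSI-process path by drawing the jump times of $\Pi(\lambda t)$ and attaching a fresh term of $(\xi)$ to each inter-jump interval:

```python
import numpy as np

rng = np.random.default_rng(0)

def psi_path(T, lam, n_xi=10_000):
    """Sample a PSI-process path on [0, T] for a fixed intensity lam.

    Returns the jump times of Pi(lam*t) within [0, T] and the xi-values
    that psi takes between consecutive jumps (starting from xi_0).
    """
    # Jump times of a rate-lam Poisson process: cumulative sums of
    # i.i.d. Exp(lam) inter-arrival times, truncated to [0, T].
    gaps = rng.exponential(1.0 / lam, size=n_xi)
    jumps = np.cumsum(gaps)
    jumps = jumps[jumps <= T]
    # Driven sequence: i.i.d. standard normal (illustrative choice).
    xi = rng.standard_normal(len(jumps) + 1)
    return jumps, xi          # psi(t) = xi[k] for t in [jumps[k-1], jumps[k])

def psi_at(t, jumps, xi):
    """Evaluate psi(t) = xi_{Pi(lam*t)}: count the jumps up to time t."""
    return xi[np.searchsorted(jumps, t, side="right")]

jumps, xi = psi_path(T=10.0, lam=2.0)
print(psi_at(0.0, jumps, xi) == xi[0])   # prints True: before the first jump, psi = xi_0
```

Between jumps the path is constant, so the value at any $t$ is recovered by counting the arrivals up to $t$.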
Let us introduce the "natural" filtration $\mathbb{F} \equiv (\mathcal{F}_t)_{t \in \mathbb{R}_+}$ generated by the PSI-process:
$$\mathcal{F}_t := \sigma\big\{\Pi(\lambda s),\ s \le t;\ \xi_0, \dots, \xi_k,\ k \le \Pi(\lambda t)\big\} \subset \mathcal{F}.\tag{2}$$
Note that if the distribution of $\xi_0$ has no atoms, then the natural filtration $\mathbb{F}$ coincides with the filtration generated by a compound Poisson type process with the random intensity $\lambda$, $Y(t) := \sum_{k=0}^{\Pi(\lambda t)} \xi_k$, starting at the random point $\xi_0$. (In the case when $\xi_0$ has an atom at 0, some jumps of $\Pi(\lambda t)$ may be "missed" in $Y$; such a process $Y$ is known as a stuttering compound Poisson process. A similar phenomenon happens with a PSI-process when $\xi_0$ has any atom, not necessarily at 0. For details, we refer to [1].)
PSI-processes admit many interpretations. For instance, in insurance models and their applications: while a compound Poisson process $Y(t)$ monitors the cumulative value of claims up to the current time $t$, the corresponding PSI-process $\psi(t)$ monitors the last claim.
Another interpretation arises in models of information channels. Here, $(\xi)$ plays the role of random loads on an information channel. The driving doubly stochastic Poisson process $\Pi(\lambda t)$ affects $(\xi)$ in the following manner: at each arrival point of $\Pi(\lambda t)$, the current term of $(\xi)$ is replaced with the next term.
In view of these interpretations, as well as from the point of view of classical probability theory, it makes sense to consider sums of independent PSI-processes. In this paper, we confine ourselves to the case when all terms in these sums are identically distributed PSI-processes and the terms of the driven sequences have a finite second moment. Without loss of generality, we assume that $\mathbb{E}\xi_0 = 0$ and $\mathbb{E}\xi_0^2 = 1$. Let $\psi^{(k)}$, $k = 1, 2, \dots$, denote independent copies of $\psi$. Note that the Poisson processes in the definition (1) are also independent in different copies, as are the time change factors $\lambda_k \stackrel{d}{=} \lambda$, $k \in \mathbb{N}$. Introduce the normalized cumulative sum
$$\zeta_n(t) := \frac{1}{\sqrt{n}} \sum_{k=1}^{n} \psi^{(k)}(t; \lambda_k), \quad n \in \mathbb{N},\ t \ge 0.\tag{3}$$
Note that $\zeta_n$ is a stationary process for any $n$.
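For numerical experiments, $\zeta_n$ can be evaluated on a time grid by building each copy $\psi^{(k)}$ from the independent increments of its Poisson process. In the sketch below, $\lambda \sim \mathrm{Exp}(1)$ and standard normal $\xi$ are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def zeta_n(n, t_grid):
    """Normalized cumulative sum zeta_n evaluated on a time grid.

    Each summand is an independent PSI-process with xi i.i.d. N(0,1)
    and lambda ~ Exp(1) (illustrative choices of driven sequence and
    random intensity); the grid should start at 0.
    """
    out = np.zeros_like(t_grid)
    dt = np.diff(t_grid, prepend=0.0)
    for _ in range(n):
        lam = rng.exponential(1.0)
        # Poisson counts Pi(lam * t_j), built from independent increments.
        counts = np.cumsum(rng.poisson(lam * dt))
        xi = rng.standard_normal(counts[-1] + 1)
        out += xi[counts]                 # psi^{(k)}(t_j) = xi_{Pi(lam * t_j)}
    return out / np.sqrt(n)

t = np.linspace(0.0, 5.0, 501)
z = zeta_n(200, t)
print(z.shape)   # one sample path of zeta_200 on the grid
```

By Lemma 1 below, as $n$ grows such paths approximate a stationary Gaussian process whose covariance is the Laplace transform of $\lambda$.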
When one of the processes $\psi^{(1)}, \dots, \psi^{(n)}$ changes its value, the values of all other processes remain the same a.s. Hence, the change mechanism behind sums of type (3) can be described as a projection of some information from the past to the future, together with a replacement of the remaining information by new independent values. This can be contrasted with autoregression schemes, which are based on contractions of information. The projection mechanism survives the passage to the limit as $n \to \infty$. Hence, if the limit exists in some sense, it has to be described by the so-called "trawl" or "upstairs represented" processes introduced by O. E. Barndorff-Nielsen [2,3] and R. Wolpert, M. Taqqu [4], respectively. The relationship of PSI-processes to trawl processes is discussed briefly in [5].
Our main result is a functional limit theorem for the normalized cumulative sums (3) (Theorem 1): the random processes $\zeta_n$ converge weakly, as $n \to \infty$, in the Skorokhod space of càdlàg functions defined on a compact interval $[0, T]$, $T > 0$. The limit process $\zeta$ is Gaussian, centered, and stationary, and its covariance function is $\mathcal{L}_\lambda(|t - s|)$, $s, t \in \mathbb{R}_+$, where $\mathcal{L}_\lambda$ denotes the Laplace transform of the random intensity $\lambda$. In the simpler case of a non-random intensity $\lambda$, the analogous functional limit theorem was established by the second author in [6]. In that case, the limit is necessarily an Ornstein–Uhlenbeck process. Introducing a random intensity significantly widens the class of possible limiting processes but makes the proof of the corresponding functional limit theorem more involved. Our method of proof is essentially based on a detailed analysis of a modulus of continuity for the PSI-process.
In our research, we came upon the following interesting phenomenon, which occurs if $\mathbb{E}\lambda = +\infty$: the fatter the tail of $\lambda$, the more moments of $\xi_0$ are needed for the relative compactness of the family $(\zeta_n)_{n \in \mathbb{N}}$. When $\mathbb{E}\lambda < \infty$, our method of proof requires just the condition $\mathbb{E}|\xi_0|^{2+\varepsilon} < \infty$ for some $\varepsilon > 0$.
As an example of a functional of the PSI-process, we construct a martingale adapted to the natural filtration $(\mathcal{F}_t)$ generated by the PSI-process defined in (2). Consider the pathwise integrated PSI-process
$$\Psi(t) := \int_0^t \psi(s)\, ds\tag{4}$$
and define a so-called M-process associated with the PSI-process as
$$M(t; \lambda) \equiv M(t) := \lambda \Psi(t) + \psi(t) - \xi_0, \quad t \ge 0.\tag{5}$$
Suppose that $\lambda$ is a positive constant and $\mathbb{E}\xi_0 = 0$. Then, $M(t)$ is an $(\mathcal{F}_t)$-martingale starting at the origin. The proof presented in Section 3 reduces to a direct calculation and exploits the fact that the pair $(\Psi, \psi)$ is an $\mathbb{R}^2$-valued Markov process (moreover, a strong Markov process with respect to $(\mathcal{F}_t)$).
This example shows that the PSI-process $\psi(t)$ is the stationary solution of the Langevin equation driven by the martingale $M(t)$:
$$d\psi(t) = -\lambda \psi(t)\, dt + dM(t).\tag{6}$$
As one of the consequences of our main result, we obtain as a limit the classical martingale $\sqrt{2\lambda}\, W(t)$, $t \ge 0$, which replaces $M(t)$ in (6). Here and below, $W(t)$ is a standard Brownian motion.
Remark that if $\lambda$ is a non-degenerate random variable, then $M(t; \lambda)$ is not measurable with respect to $\mathcal{F}_t$, and hence, it is not an $(\mathcal{F}_t)$-martingale. However, if we supplement $\mathcal{F}_0$ with $\sigma(\lambda)$ to generate an initially enlarged filtration $(\mathcal{F}_t^\lambda)$, then the M-process becomes a local martingale with respect to the new adjusted filtration. If $\mathbb{E}\lambda < \infty$, then it is a martingale (see Proposition 2).
Suppose now, as usual, that $\mathbb{E}\xi_0^2 = 1$. Direct application of Theorem VIII.3.46 [7] (p. 481) allows us to obtain a functional limit theorem for the martingale $M(t)$, i.e., for
$$\overline{M}_n(t) := \frac{1}{\sqrt{n}} \sum_{i=1}^{n} M^{(i)}(t),$$
where $M^{(i)}(t)$, $i = 1, 2, \dots$, are independent copies of $M(t)$. Here, the convergence takes place in the Skorokhod space, and the limit process is $\sqrt{2\mathbb{E}\lambda}\, W(t)$, $t \ge 0$.
The rest of the paper is organized as follows. In Section 2, we introduce some notation and formulate our main result, Theorem 1. In Section 3, the M-process described above is studied in some detail, as an example of an application of Theorem 1. In Section 4, an example of a PSI-process whose normalized cumulative sums do not converge in the Skorokhod space is constructed, in order to show that some conditions are indeed necessary in a functional limit theorem. Section 5 collects some auxiliary facts about PSI-processes and their modulus of continuity. In Section 6, we study sums of PSI-processes and prove our main result. We finish the article with some conclusions in Section 7.

2. Main Results

Let $(\xi) = (\xi_0, \xi_1, \dots)$ be a sequence of random variables. Consider a standard Poisson process $\Pi(t)$, $t \ge 0$, independent of $(\xi)$. Then, one can subordinate the sequence by the Poisson process to obtain a continuous-time process
$$\psi(t) = \xi_{\Pi(t)}, \quad t \ge 0.$$
Consider also a non-negative random variable $\lambda$, which is independent of $(\xi)$ and $\Pi$. The time-changed Poisson process $\Pi(\lambda t)$ is a Poisson process with random intensity, also known as (a specific case of) a Cox process or a doubly stochastic Poisson process. We consider the PSI-process with the random time change
$$\psi(t; \lambda) = \xi_{\Pi(\lambda t)}, \quad t \ge 0.$$
We call $\psi(t; \lambda)$ the Poisson Stochastic Index process, or PSI-process for short.
It turns out that if the random variables $\xi_i$, $i = 0, 1, \dots$, are uncorrelated and have zero expectations and unit variances, then the covariance function of $\psi(t; \lambda)$ is equal to the Laplace transform of $\lambda$,
$$\mathcal{L}_\lambda(u) = \mathbb{E} e^{-\lambda u}, \quad u \ge 0.\tag{9}$$
Lemma 1. 
Let $(\xi) = (\xi_0, \xi_1, \dots)$ be a sequence of uncorrelated random variables with $\mathbb{E}\xi_i \equiv 0$ and $\mathbb{E}\xi_i^2 \equiv 1$. Let $\lambda$ be a non-negative random variable and $\Pi(t)$ be a standard Poisson process. Suppose that $(\xi)$, $\lambda$, and $\Pi$ are mutually independent. Then, for any $s, t \ge 0$,
$$\mathrm{Cov}\big(\psi(s; \lambda), \psi(t; \lambda)\big) = \mathcal{L}_\lambda(|t - s|).$$
In particular, $\psi$ is a wide-sense stationary process.
Proof. 
First, note that $\mathbb{E}\psi(s; \lambda) = 0$ since every $\mathbb{E}\xi_i = 0$. Hence, $\mathrm{Cov}(\psi(s;\lambda), \psi(t;\lambda)) = \mathbb{E}\psi(s;\lambda)\psi(t;\lambda)$. Suppose without loss of generality that $0 \le s \le t$. Given $\lambda$, one has
$$\mathbb{E}\big[\psi(s;\lambda)\psi(t;\lambda) \,\big|\, \lambda\big] = \mathbb{E}\big[\xi_{\Pi(\lambda s)}\, \xi_{\Pi(\lambda t)} \,\big|\, \lambda\big] = \mathbb{E}\big[\mathbf{1}\{\Pi(\lambda s) = \Pi(\lambda t)\} \,\big|\, \lambda\big] = \mathbb{E}\big[\mathbf{1}\{\Pi(\lambda(t-s)) = 0\} \,\big|\, \lambda\big] = e^{-\lambda(t-s)}.$$
Here and below, $\mathbf{1}\{A\}$ denotes the indicator of an event $A$. We used the assumption that $\mathbb{E}\xi_i \xi_j = \delta_{ij}$, the Kronecker delta, and also the stationarity of the increments of the Poisson process. Taking the expectation with respect to $\lambda$ yields the result. □
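Lemma 1 is easy to check numerically. The sketch below is an illustration with $\lambda \sim \mathrm{Exp}(1)$, whose Laplace transform is $1/(1+u)$; as in the proof, conditionally on the jump counts, $\mathbb{E}\psi(s)\psi(t)$ reduces to the probability that $\Pi(\lambda s) = \Pi(\lambda t)$:

```python
import numpy as np

rng = np.random.default_rng(2)

def mc_cov(s, t, n=200_000):
    """Monte Carlo estimate of Cov(psi(s;lambda), psi(t;lambda)), lambda ~ Exp(1).

    With E xi_i xi_j = delta_ij, the conditional mean of psi(s)psi(t) is
    the indicator {Pi(lambda s) = Pi(lambda t)}, so we average that
    indicator directly (a Rao-Blackwellized estimate).
    """
    lam = rng.exponential(1.0, size=n)
    n_s = rng.poisson(lam * s)                # Pi(lambda * s)
    n_t = n_s + rng.poisson(lam * (t - s))    # Pi(lambda * t), t >= s
    return np.mean(n_s == n_t)

s, t = 0.3, 1.0
est = mc_cov(s, t)
exact = 1.0 / (1.0 + (t - s))   # Laplace transform of Exp(1) at u = |t - s|
print(abs(est - exact) < 0.01)  # prints True
```

The estimate matches $\mathcal{L}_\lambda(|t-s|)$ up to Monte Carlo error, as Lemma 1 predicts.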
Remark 1. 
Unlike [8], we allow $\lambda$ to have an atom at 0, in which case $\lim_{u \to \infty} \mathcal{L}_\lambda(u) = \mathbb{P}(\lambda = 0) > 0$.
Corollary 1. 
Let the triplet $(\Pi, \lambda, (\xi))$ satisfy the assumptions of Lemma 1. Then, the processes $(\zeta_n)$ defined in (3) as normalized cumulative sums of independent copies of $\psi(t; \lambda)$ converge in the sense of finite-dimensional distributions (f.d.d.), as $n \to \infty$, to a stationary centered Gaussian process $\zeta(t)$ with the covariance function $\mathrm{Cov}(\zeta(s), \zeta(t)) = \mathcal{L}_\lambda(|t - s|)$, $s, t \in \mathbb{R}_+$.
Proof. 
This is an immediate consequence of the central limit theorem (CLT) for random vectors. Indeed, for any fixed time moments $0 \le t_1 < \dots < t_d$, the random vectors $\big(\psi^{(k)}(t_1; \lambda_k), \dots, \psi^{(k)}(t_d; \lambda_k)\big)$ are i.i.d. for different $k$ and have zero mean and the covariance matrix
$$B = \big( \mathcal{L}_\lambda(|t_i - t_j|) \big)_{i,j=1}^{d}. \;\square$$
Lemma 1 emphasizes the special role played by the Laplace transform $\mathcal{L}_\lambda$ in the study of PSI-processes with random intensities. We will need the asymptotics of the Laplace transform $\mathcal{L}_\lambda$ in the right neighborhood of 0.
Assumption 1. 
For some $\gamma \in (0, 1]$ and any $\varepsilon > 0$, the Laplace transform (9) of $\lambda$ satisfies
$$1 - \mathcal{L}_\lambda(s) = o(s^{\gamma - \varepsilon}), \quad s \downarrow 0.\tag{10}$$
It is well known that (10) holds with $\gamma = 1$ if $\mathbb{E}\lambda < \infty$, or with $\gamma \in (0, 1]$ if the tail $\mathbb{P}(\lambda > x)$ is regularly varying of index $-\gamma$ as $x \to \infty$; see, e.g., [9] (Theorem 8.1.6).
Below, we shall always suppose that the terms of the sequence $(\xi)$ are i.i.d., hence uncorrelated, and satisfy the assumptions of Lemma 1. By Corollary 1, the random processes $(\zeta_n)$ have a limit $\zeta$ as $n \to \infty$, but only in the rather weak f.d.d. sense. The aim of this paper is to establish a stronger result, a functional limit theorem for $(\zeta_n)$ in an appropriate functional space. If Assumption 1 holds, then the covariance function of the limiting process $\zeta(t)$ behaves in a controllable way at 0, and $\zeta(t)$ has a version with almost surely continuous paths because $\gamma > 0$ in (10); see, e.g., [10] (§9.2). Our main result is that, under the additional moment assumption $\mathbb{E}|\xi_0|^{2h} < \infty$ for some $h > 1/\gamma^2$ (where $\gamma$ is the exponent in (10)), the convergence indeed takes place in the Skorokhod space $D[0, T]$, for any $T > 0$.
Theorem 1. 
Consider a triplet $(\Pi, \lambda, (\xi))$ that consists of a standard Poisson process $\Pi$, a non-negative random variable $\lambda$ satisfying Assumption 1, and a sequence $(\xi) = (\xi_0, \xi_1, \dots)$ of i.i.d. random variables such that $\mathbb{E}\xi_0 = 0$ and $\mathbb{E}\xi_0^2 = 1$. The elements of the triplet are supposed to be independent and to satisfy the condition
$$\mathbb{E}|\xi_0|^{2h} < \infty \quad \text{for some } h > \frac{1}{\gamma^2}.$$
Let $(\Pi_k, \lambda_k, (\xi^{(k)}))$, $k = 1, 2, \dots$, be a sequence of independent copies of the triplet $(\Pi, \lambda, (\xi))$, let $\psi^{(k)} \equiv \psi^{(k)}(t; \lambda_k)$ be the PSI-process (1) constructed from the $k$-th triplet, and let $\zeta_n$ be defined by (3). Then, for any $T > 0$, the sequence of stochastic processes $(\zeta_n(t))$ converges in the Skorokhod space $D[0, T]$, as $n \to \infty$, to a zero-mean stationary Gaussian process $\zeta(t)$ with the covariance function $\mathbb{E}\zeta(s)\zeta(t) = \mathcal{L}_\lambda(|s - t|)$, $s, t \in [0, T]$.
Remark 2. 
Nowadays, it is common to consider weak convergence in the space $D[0, \infty)$. Due to specific features of our model (stationarity of $\zeta_n$ for every $n$, continuity of $\zeta$), this implies weak convergence in $D[0, T]$ for all $T > 0$. Since we essentially use the results from Billingsley's book [11], which deals with $D[0, T]$, we prefer to formulate our results in $D[0, T]$, $T > 0$, as in Theorem 1.
We prove Theorem 1 in Section 6 and now proceed with studying some of its consequences.

3. Example: A PSI-Martingale

Recall the definition (2) of the natural filtration $\mathbb{F}$ given in the Introduction. Note that since PSI-processes (with non-random $\lambda$) belong to the so-called class of "pseudo-Poisson processes" [12] (Ch. X), they have the Markov property with the following transition probabilities: for $x \in \mathbb{R}$; $t, u \in \mathbb{R}_+$,
$$\mathbb{P}\big(\psi(t+u) \le x \,\big|\, \psi(t) = x_0\big) = \mathbb{P}\big(\Pi(\lambda u) > 0\big)\, \mathbb{P}(\xi_0 \le x) + \mathbb{P}\big(\Pi(\lambda u) = 0\big)\, \mathbf{1}\{x_0 \le x\} = \big(1 - e^{-\lambda u}\big)\, \mathbb{P}(\xi_0 \le x) + e^{-\lambda u}\, \mathbf{1}\{x_0 \le x\}.$$
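This two-part transition kernel (either the old value survives the interval, or a fresh draw from the distribution of $\xi_0$ replaces it) can be checked by direct simulation. In the sketch below, a standard normal distribution for $\xi_0$ and the parameter values are illustrative choices:

```python
import numpy as np
from math import erf, exp, sqrt

rng = np.random.default_rng(3)

def transition_mc(x0, u, lam, x, n=200_000):
    """MC estimate of P(psi(t+u) <= x | psi(t) = x0) for a fixed lam.

    If Pi(lambda .) has no jump on (t, t+u], the value x0 survives;
    otherwise psi(t+u) equals the freshest xi, an independent draw
    from F (standard normal here, an illustrative choice).
    """
    jumps = rng.poisson(lam * u, size=n)
    fresh = rng.standard_normal(n)
    value = np.where(jumps == 0, x0, fresh)
    return np.mean(value <= x)

Phi = lambda y: 0.5 * (1.0 + erf(y / sqrt(2.0)))   # standard normal CDF

lam, u, x0, x = 1.5, 0.4, 2.0, 0.5
est = transition_mc(x0, u, lam, x)
exact = (1 - exp(-lam * u)) * Phi(x) + exp(-lam * u) * float(x0 <= x)
print(abs(est - exact) < 0.01)   # prints True
```

The simulated kernel agrees with the displayed mixture of $\mathbb{P}(\xi_0 \le x)$ and the indicator term.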
Denote the pathwise integrated PSI-process by $\Psi(t) = \int_0^t \psi(s)\, ds$. Note that the pair $(\Psi, \psi)$ is an $\mathbb{R}^2$-valued Markov process, although $\Psi$ itself is not Markovian.
Proposition 1 
(The PSI-martingale). Assume that $\xi_0, \xi_1, \dots$ are i.i.d. and $\mathbb{E}\xi_0 = 0$. Then, for a non-random $\lambda > 0$, the stochastic process $M(t)$ defined in (5) is an $\mathbb{F}$-martingale starting at the origin, for $t \in \mathbb{R}_+$.
Proof. 
Let us introduce a slightly modified M-process
$$\widetilde{M}(t) := \lambda \Psi(t) + \psi(t) = M(t) + \xi_0.$$
First, we show that it is an $\mathbb{F}$-martingale starting at the random point $\xi_0$. Since the pair $(\Psi(t), \psi(t))$ is a Markov process adapted to the filtration $(\mathcal{F}_t)$, and $\widetilde{M}(t)$ is determined by $(\Psi(t), \psi(t))$, we have
$$\mathbb{E}\big(\widetilde{M}(t+u) \,\big|\, \mathcal{F}_t\big) = \mathbb{E}\big(\widetilde{M}(t+u) \,\big|\, \Psi(t), \psi(t)\big), \quad u, t \ge 0.\tag{12}$$
Let $0 < T_1 < T_2 < \dots$ be the jump times of the driving Poisson process $\Pi(\lambda t)$. Denote by $\theta(t) = \min\{T_k : T_k > t\} - t$ the random period for which the Poisson process $\Pi(\lambda s)$ does not change after time $t$. For each fixed $t$, the period $\theta(t)$ has the exponential distribution with intensity $\lambda$. Using this notation, we can calculate
$$\mathbb{E}\big(\psi(t+u) \,\big|\, \Psi(t), \psi(t)\big) = \psi(t)\, \mathbb{E}\mathbf{1}\{\theta(t) > u\} = \psi(t)\, e^{-\lambda u},\tag{13}$$
$$\mathbb{E}\big(\Psi(t+u) \,\big|\, \Psi(t), \psi(t)\big) = \Psi(t) + \psi(t)\, \mathbb{E}\min\{\theta(t), u\} = \Psi(t) + \psi(t)\, \frac{1 - e^{-\lambda u}}{\lambda}.\tag{14}$$
Multiplying (14) by $\lambda$ and adding (13), we obtain $\mathbb{E}(\widetilde{M}(t+u) \mid \Psi(t), \psi(t)) = \widetilde{M}(t)$, which proves the assertion about $\widetilde{M}(t)$ due to (12).
Now, the claim of Proposition 1 easily follows from $\sigma(\Psi(t), \psi(t)) \subset \mathcal{F}_t$ and $\mathbb{E}(\xi_0 \mid \mathcal{F}_t) = \xi_0$. □
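The martingale property forces $\mathbb{E}M(t) = 0$, and, for $\mathbb{E}\xi_0^2 = 1$, the quadratic-variation computation at the end of this section gives $\mathrm{Var}\, M(t) = 2\lambda t$ for constant $\lambda$. Both facts can serve as a numerical sanity check; standard normal $\xi$ is an illustrative choice here:

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_M(t, lam):
    """One sample of M(t) = lam * Psi(t) + psi(t) - xi_0 for constant lam."""
    T = np.cumsum(rng.exponential(1.0 / lam, size=1000))   # jump times T_1, T_2, ...
    k = int(np.searchsorted(T, t, side="right"))            # k = Pi(lam * t)
    xi = rng.standard_normal(k + 1)                         # xi_0, ..., xi_k
    # Psi(t): integrate the piecewise-constant path; the value xi_j holds
    # on [T_j, T_{j+1}) (with T_0 = 0), truncated at t.
    ends = np.append(T[:k], t)
    starts = np.append(0.0, T[:k])
    Psi = float(np.sum(xi * (ends - starts)))
    return lam * Psi + xi[k] - xi[0]

lam, t = 2.0, 1.0
samples = np.array([sample_M(t, lam) for _ in range(20_000)])
print(abs(samples.mean()) < 0.1)                 # E M(t) = 0
print(abs(samples.var() - 2 * lam * t) < 0.4)    # Var M(t) = 2 * lam * t
```

Both checks hold up to Monte Carlo error, consistent with Proposition 1 and the variance formula below.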
As mentioned in the Introduction, for a random non-degenerate $\lambda$, the process $M(t)$ is not $\mathcal{F}_t$-measurable, and the filtration $\mathbb{F}$ should be augmented by $\sigma(\lambda)$:
$$\mathcal{F}_t^\lambda := \sigma\big\{\Pi(\lambda s),\ s \le t;\ \xi_0, \dots, \xi_k,\ k \le \Pi(\lambda t);\ \lambda\big\}; \qquad \mathbb{F}^\lambda := (\mathcal{F}_t^\lambda)_{t \in \mathbb{R}_+}.\tag{15}$$
The following analog of Proposition 1 holds, but its proof is more delicate.
Proposition 2 
(The PSI-martingale with random intensity). Assume that $(\xi) = (\xi_0, \xi_1, \dots)$ is a sequence of i.i.d. random variables with $\mathbb{E}\xi_0 = 0$, $\Pi = \Pi(t)$ is a standard Poisson process, and the random variable $\lambda$ is positive a.s.; $\lambda$, $(\xi)$, and $\Pi$ are independent. Then, the stochastic process $M(t; \lambda)$, $t \ge 0$, defined in (5) is a local martingale with respect to $\mathbb{F}^\lambda$. If $\mathbb{E}\lambda < \infty$, then $M(t)$ is a martingale.
Proof. 
Let $0 < \tau_1 < \tau_2 < \dots$ be the jump times of the Poisson process $\Pi(t)$ and $T_k := \tau_k / \lambda$ the corresponding jump times of the process $\Pi(\lambda t)$. Recall that the filtrations $\mathbb{F} = (\mathcal{F}_t)_{t \ge 0}$ and $\mathbb{F}^\lambda = (\mathcal{F}_t^\lambda)_{t \ge 0}$ are defined in (2) and (15), respectively. It is easy to check that a set $A \in \mathcal{F}$ belongs to $\mathcal{F}_t$ (resp. to $\mathcal{F}_t^\lambda$), $t \ge 0$, if and only if $A \cap \{T_k \le t < T_{k+1}\} = A \cap \{\Pi(\lambda t) = k\} \in \mathcal{G}_k$ (resp. $A \cap \{T_k \le t < T_{k+1}\} \in \mathcal{G}_k^\lambda$) for every $k = 0, 1, \dots$. Here,
$$\mathcal{G}_k := \sigma\big(T_1, \dots, T_k;\ \xi_0, \dots, \xi_k\big) = \sigma\big(\tau_1, \dots, \tau_k;\ \xi_0, \dots, \xi_k\big),$$
the latter equality holding if $\lambda = \mathrm{const}$, and
$$\mathcal{G}_k^\lambda := \sigma\big(T_1, \dots, T_k;\ \xi_0, \dots, \xi_k;\ \lambda\big) = \sigma\big(\tau_1, \dots, \tau_k;\ \xi_0, \dots, \xi_k;\ \lambda\big).$$
In particular, the filtrations $(\mathcal{F}_t)_{t \ge 0}$ and $(\mathcal{F}_t^\lambda)_{t \ge 0}$ are right-continuous.
First, we calculate the $\mathbb{F}^\lambda$-compensator of the locally integrable process
$$\Pi(\lambda t) = \sum_{n=1}^{\infty} \mathbf{1}\{t \ge T_n\}.$$
Since, for $\lambda = \mathrm{const}$, $\Pi(\lambda t)$ is a Poisson process with intensity $\lambda$, its $\mathbb{F}$-compensator is $\lambda t$. This means that $\Pi(\lambda t) - \lambda t$ is an $\mathbb{F}$-martingale. Denoting $N(t) := \Pi(t) - t$, this can be written as
$$\mathbb{E}\Big[ \big(N(\lambda t) - N(\lambda s)\big)\, \mathbf{1}\{\Pi(\lambda s) = k\}\, f(\tau_1, \dots, \tau_k;\ \xi_0, \dots, \xi_k) \Big] = 0$$
for every $0 \le s < t$, $k = 0, 1, \dots$, and any bounded Borel function $f$ from $\mathbb{R}^{2k+1}$ to $\mathbb{R}$. Consider now the case of random $\lambda$. Note that $\mathbb{E}\big[\Pi(\lambda t)\, \mathbf{1}\{\lambda \le k\}\big] \le k t < \infty$ for any $t$ and $k \ge 1$. This allows us to take the conditional expectation given $\lambda$ in the expression below, where $f$ is as above and $g$ is a bounded measurable function from $\mathbb{R}$ to $\mathbb{R}$:
$$\mathbb{E}\Big[ \big(N(\lambda t) - N(\lambda s)\big)\, \mathbf{1}\{\Pi(\lambda s) = k\}\, f(\tau_1, \dots, \tau_k;\ \xi_0, \dots, \xi_k)\, g(\lambda)\, \mathbf{1}\{\lambda \le k\} \Big] = \mathbb{E}\Big[ \mathbb{E}\big[ \big(N(\lambda t) - N(\lambda s)\big)\, \mathbf{1}\{\Pi(\lambda s) = k\}\, f(\tau_1, \dots, \tau_k;\ \xi_0, \dots, \xi_k)\, g(\lambda)\, \mathbf{1}\{\lambda \le k\} \,\big|\, \lambda \big] \Big] = 0.$$
This means that
$$0 = \mathbb{E}\Big( \big(N(\lambda t) - N(\lambda s)\big)\, \mathbf{1}\{\lambda \le k\} \,\Big|\, \mathcal{F}_s^\lambda \Big) = \mathbb{E}\Big( N\big(\lambda (t \wedge \sigma_k)\big) - N\big(\lambda (s \wedge \sigma_k)\big) \,\Big|\, \mathcal{F}_s^\lambda \Big),$$
where $\sigma_k = 0$ if $\lambda > k$ and $\sigma_k = +\infty$ otherwise. We conclude that $N(\lambda t)$ is an $\mathbb{F}^\lambda$-local martingale, and $\lambda t$ is the $\mathbb{F}^\lambda$-compensator of $\Pi(\lambda t)$.
The same proof shows that
$$K(\lambda t) := \sum_{n=1}^{\infty} \xi_n\, \mathbf{1}\{t \ge T_n\}$$
is an $\mathbb{F}^\lambda$-local martingale. Indeed, it is a compound Poisson process with zero mean; hence, for a deterministic $\lambda$, it is itself an $\mathbb{F}$-martingale. To ensure that the corresponding expectation is finite, we note that $\mathbb{E}\big[|K(\lambda t)|\, \mathbf{1}\{\lambda \le k\}\big] \le \sum_{n=1}^{\infty} \mathbb{E}|\xi_n|\, \mathbb{P}(t \ge T_n,\ \lambda \le k) \le \mathbb{E}|\xi_0|\; \mathbb{E}\big[\Pi(\lambda t)\, \mathbf{1}\{\lambda \le k\}\big] < \infty$.
The final step of the proof is to determine the $\mathbb{F}^\lambda$-compensator of the process
$$J(\lambda t) := \sum_{n=1}^{\infty} \xi_{n-1}\, \mathbf{1}\{t \ge T_n\}.$$
We can represent $J(\lambda t)$ as the pathwise Lebesgue–Stieltjes integral of the predictable process
$$H(\lambda t) := \sum_{n=1}^{\infty} \xi_{n-1}\, \mathbf{1}\{T_{n-1} < t \le T_n\}$$
with respect to $\Pi(\lambda t)$. Note that the integral process
$$\int_{(0,t]} H(\lambda s)\, d\Pi(\lambda s)$$
is a process with $\mathbb{F}^\lambda$-locally integrable variation because its variation up to $\sigma_k$ is estimated from above similarly to $K(\lambda t)$. This allows us to conclude that the $\mathbb{F}^\lambda$-compensator of $J(\lambda t)$ is the Lebesgue–Stieltjes integral of $H(\lambda t)$ with respect to the $\mathbb{F}^\lambda$-compensator of $\Pi(\lambda t)$ (see, e.g., Theorem 2.21 (2) in [13]); i.e., the $\mathbb{F}^\lambda$-compensator of $J(\lambda t)$ equals
$$\int_{(0,t]} H(\lambda s)\, \lambda\, ds = \lambda \Psi(t).$$
Summarizing, we obtain that the $\mathbb{F}^\lambda$-compensator of
$$\psi(t) - \xi_0 = \sum_{n=1}^{\infty} (\xi_n - \xi_{n-1})\, \mathbf{1}\{t \ge T_n\} = K(\lambda t) - J(\lambda t)$$
is $-\lambda \Psi(t)$, so that $M(t) = \psi(t) - \xi_0 + \lambda \Psi(t)$ is an $\mathbb{F}^\lambda$-local martingale.
Finally, the quadratic variation of $M$ is
$$[M, M]_t = \sum_{k=1}^{\infty} (\xi_k - \xi_{k-1})^2\, \mathbf{1}\{T_k \le t\}.\tag{16}$$
Hence, if $\mathbb{E}\lambda < \infty$,
$$\mathbb{E}\big[M, M\big]_t^{1/2} \le \mathbb{E} \sum_{k=1}^{\infty} |\xi_k - \xi_{k-1}|\, \mathbf{1}\{T_k \le t\} \le 2\, \mathbb{E}|\xi_0|\; \mathbb{E} \sum_{k=1}^{\infty} \mathbf{1}\{T_k \le t\} = 2\, \mathbb{E}|\xi_0|\; \mathbb{E}\Pi(\lambda t) = 2 t\, \mathbb{E}|\xi_0|\; \mathbb{E}\lambda < \infty.$$
Therefore, $M(t)$ is a martingale by Davis' inequality (see [14] (Ch. 9)). □
If we assume also that $\mathbb{E}\xi_0^2 = 1$, then the $\mathbb{F}^\lambda$-martingale $M(t)$ satisfies $\mathbb{E} M(t)^2 < \infty$ for all $t \in \mathbb{R}_+$. Its quadratic variation is calculated in (16). The variance of $M(t)$ can then be computed as follows:
$$\mathrm{Var}\, M(t) = \mathbb{E}[M, M]_t = \mathbb{E} \sum_{k=1}^{\infty} (\xi_k - \xi_{k-1})^2\, \mathbf{1}\{T_k \le t\} = \mathbb{E}(\xi_1 - \xi_0)^2\; \mathbb{E}\Pi(\lambda t) = 2 t\, \mathbb{E}\lambda.$$
If $\mathbb{E}\lambda < \infty$ (in particular, if $\lambda$ is not random), then the variance of $M(t)$ is finite for any $t \in \mathbb{R}_+$. Hence, direct application of Theorem VIII.3.46 [7] (p. 481) yields a functional limit theorem for properly normalized sums of independent copies $M^{(i)}(t)$, $i = 1, 2, \dots$, of the martingale $M(t)$, i.e., for the processes
$$\overline{M}_n(t) := \frac{1}{\sqrt{n}} \sum_{i=1}^{n} M^{(i)}(t), \quad n = 1, 2, \dots,\ t \ge 0.$$
Here, the convergence takes place in the Skorokhod space, and the limit process is $\sqrt{2\mathbb{E}\lambda}\, W(t)$, where $W(t)$, $t \ge 0$, is a standard Brownian motion.
Assume now that $\lambda > 0$ is non-random. It is easy to see that the mapping $(\psi(t))_{t \in [0,T]} \mapsto (M(t))_{t \in [0,T]}$ is continuous in the Skorokhod space $D[0, T]$, for any $T > 0$. Hence, as a corollary of Theorem 1, we recover the above result that the convergence $\overline{M}_n \Rightarrow \sqrt{2\lambda}\, W$ takes place in the Skorokhod space, under the condition that $\mathbb{E}|\xi_0|^{2+\varepsilon} < \infty$ for some $\varepsilon > 0$.

4. Counterexample: Diverging Sums

For $\beta > 1$, denote $\mu_\beta = \frac{\beta}{\beta - 1}$ and consider the function
$$f_\beta(x) = \begin{cases} \beta\, (x + \mu_\beta)^{-\beta - 1}, & x \ge -1/(\beta - 1), \\ 0, & x < -1/(\beta - 1), \end{cases}$$
of $x \in \mathbb{R}$. This is a probability density. Let $\xi$ be a random variable with this density; then, by the choice of $\mu_\beta$, the mean $\mathbb{E}\xi = 0$ for any $\beta > 1$, and $\mathrm{Var}\,\xi = \frac{\beta}{(\beta - 2)(\beta - 1)^2} < \infty$ for any $\beta > 2$. Moreover, all absolute moments of non-negative order less than $\beta$ exist, while $\mathbb{E}|\xi|^\beta = \infty$. The tail distribution function is $\mathbb{P}(\xi > x) = (x + \mu_\beta)^{-\beta}$ for $x \ge -1/(\beta - 1)$. Let $(\xi) = (\xi_0, \xi_1, \dots)$ be a sequence of i.i.d. random variables distributed as $\xi$.
For $\alpha > 0$, let $\lambda$ be independent of $(\xi)$ and have the tail distribution function $\mathbb{P}(\lambda > x) = (x + 1)^{-\alpha}$ for $x \ge 0$. The Laplace transform of $\lambda$ can be expressed in terms of the (upper) incomplete Gamma function
$$\Gamma(\alpha, x) = \int_x^{\infty} e^{-y} y^{\alpha - 1}\, dy.$$
By a simple change of variables, we obtain
$$\mathcal{L}_\lambda(s) = \mathbb{E} e^{-s\lambda} = \alpha\, e^s s^\alpha\, \Gamma(-\alpha, s), \quad s > 0.\tag{17}$$
The asymptotics of $\mathcal{L}_\lambda(s)$ as $s \downarrow 0$ can be read, say, from Theorem 8.1.6 of [9] (p. 333): as $s \downarrow 0$,
$$1 - \mathcal{L}_\lambda(s) \sim \begin{cases} \Gamma(1 - \alpha)\, s^\alpha, & \alpha \in (0, 1), \\ s \log\frac{1}{s}, & \alpha = 1, \\ \frac{s}{\alpha - 1}, & \alpha > 1. \end{cases}$$
Hence, $\lambda$ satisfies Assumption 1 with $\gamma = \min\{\alpha, 1\}$.
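These asymptotics are easy to probe numerically: $\lambda$ with $\mathbb{P}(\lambda > x) = (x+1)^{-\alpha}$ is sampled by inverting the tail, $\lambda = U^{-1/\alpha} - 1$ for uniform $U$, and for $\alpha \in (0, 1)$ the ratio $(1 - \mathcal{L}_\lambda(s)) / (\Gamma(1-\alpha)\, s^\alpha)$ should approach 1 as $s \downarrow 0$. A sketch with the illustrative value $\alpha = 1/2$:

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(5)

alpha = 0.5
# Inverse-tail sampling: P(lambda > x) = (x + 1)^(-alpha)  =>  lambda = U^(-1/alpha) - 1
lam = rng.uniform(size=2_000_000) ** (-1.0 / alpha) - 1.0

for s in (1e-3, 1e-4):
    one_minus_L = 1.0 - np.mean(np.exp(-s * lam))     # Monte Carlo 1 - L_lambda(s)
    predicted = gamma(1.0 - alpha) * s ** alpha       # Gamma(1 - alpha) * s^alpha
    ratio = one_minus_L / predicted
    print(round(ratio, 3))                            # approaches 1 as s -> 0
```

The discrepancy at moderate $s$ is the expected higher-order correction; Monte Carlo noise is small because $e^{-s\lambda}$ is bounded.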
Let Π ( t ) be a standard Poisson process, independent of both ( ξ ) and λ . Define a PSI-process ψ ( t ; λ ) with the random intensity λ as in (1).
Consider independent copies $\psi^{(k)}(t; \lambda_k)$, $k = 1, 2, \dots$, where $\lambda_k$ are independent copies of $\lambda$, and let $(\zeta_n(t))$ be their normalized cumulative sums, as in (3). The CLT for random vectors implies that, for $\beta > 2$ and $\alpha > 0$, in terms of finite-dimensional distributions, the processes $(\zeta_n)$ converge, as $n \to \infty$, to a stationary centered Gaussian process with the covariance function $\frac{\beta}{(\beta - 2)(\beta - 1)^2}\, \mathcal{L}_\lambda(u)$, $u \ge 0$. We claim that, nevertheless, for certain parameters $\alpha > 0$ and $\beta > 2$, the functional limit theorem cannot hold for these $(\zeta_n)$. The proof is based on the following technical result.
Proposition 3. 
One can find $n_0$ such that for any $n \ge n_0$, with probability not less than $1/16$, one of the PSI-processes $\psi^{(1)}(t; \lambda_1), \dots, \psi^{(n)}(t; \lambda_n)$ has a jump of size at least $n^{1/(\alpha\beta)}$ for $t \in [0, 1]$.
Proof. 
Define, for $n = 1, 2, \dots$,
$$\mu_n := \max\{\lambda_1, \dots, \lambda_n\}.$$
The cumulative distribution function of $\mu_n$ is
$$F_n(x) := \mathbb{P}(\mu_n \le x) = \big(1 - (x + 1)^{-\alpha}\big)^n, \quad x \ge 0.$$
Notice that $\lim_{n \to \infty} F_n(n^{1/\alpha}) = e^{-1}$. Hence, for large enough $n$, there exists $\varkappa \in \{1, \dots, n\}$ such that $\lambda_\varkappa \ge n^{1/\alpha}$ with probability not less than $1/2$. Since $\Pi_\varkappa$ is independent of $\lambda_\varkappa$ and the Poisson distribution is asymptotically symmetric around its mean as the parameter becomes large, we may claim that $\mathbb{P}\big(\Pi_\varkappa(\lambda_\varkappa) > n^{1/\alpha} \mid \lambda_\varkappa \ge n^{1/\alpha}\big) > 1/3$. Hence, with probability not less than $1/6$, among the PSI-processes $\psi^{(1)}, \dots, \psi^{(n)}$, at least one process $\psi^{(\varkappa)}$ engages more than $n^{1/\alpha}$ random variables $(\xi_i^{(\varkappa)})$ on the time interval $[0, 1]$; that is, $\Pi_\varkappa(\lambda_\varkappa) \ge m := \lfloor n^{1/\alpha} \rfloor + 1$. Here and below, for $x \in \mathbb{R}$, we denote by $\lfloor x \rfloor = \max\{n \in \mathbb{Z} : n \le x\}$ the floor function.
Consider now $\eta_{\varkappa, m} := \max\{\xi_1^{(\varkappa)}, \dots, \xi_m^{(\varkappa)}\}$. For any fixed $n$, the $\xi_i^{(\varkappa)}$ are i.i.d., and $\eta_{\varkappa, m}$ has the cumulative distribution function
$$G_m(x) := \mathbb{P}(\eta_{\varkappa, m} \le x) = \big(1 - (x + \mu_\beta)^{-\beta}\big)^m, \quad x \ge -1/(\beta - 1),$$
and $\eta_{\varkappa, m} > m^{1/\beta}$ with probability not less than $1/2$ for all $m$ large enough, because $G_m(m^{1/\beta}) = \big(1 - (m^{1/\beta} + \mu_\beta)^{-\beta}\big)^m \to e^{-1}$ as $m \to \infty$. This maximum is attained at some $\xi_j^{(\varkappa)}$, and with probability $3/4$, at least one of $\xi_{j-1}^{(\varkappa)}$ and $\xi_{j+1}^{(\varkappa)}$ is less than $2^{1/\beta} - \mu_\beta < 0$. (We neglect the situation when the maximum is attained at $j = 1$ or $j = m$, which happens with probability $2/m$; see, e.g., [15].) It means that, for large $m$, $\psi^{(\varkappa)}(t; \lambda_\varkappa)$ has at least one jump greater than $m^{1/\beta}$, with probability at least $3/8$.
Combining the above estimates and using the independence between $\Pi(\lambda_\varkappa t)$ and the corresponding driven sequence $(\xi^{(\varkappa)})$, we see that, with probability not less than $1/16$, the process $\psi^{(\varkappa)}(t; \lambda_\varkappa)$, $t \in [0, 1]$, has a jump of size at least $m^{1/\beta} \ge n^{1/(\alpha\beta)}$, for all $n \ge n_0 = n_0(\alpha, \beta)$. □
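The extreme-value step of the proof, $F_n(n^{1/\alpha}) \to e^{-1}$, can be verified directly, since $F_n$ is available in closed form (the value $\alpha = 0.7$ below is an arbitrary illustration):

```python
from math import exp

alpha = 0.7

def F_n(x, n):
    """CDF of mu_n = max(lambda_1, ..., lambda_n) when P(lambda > x) = (x+1)^(-alpha)."""
    return (1.0 - (x + 1.0) ** (-alpha)) ** n

for n in (10**3, 10**5, 10**7):
    print(round(F_n(n ** (1.0 / alpha), n), 4))   # tends to e^{-1} ≈ 0.3679
```

So the maximal intensity among $n$ copies exceeds $n^{1/\alpha}$ with probability close to $1 - e^{-1} > 1/2$ for large $n$, as used above.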
Since all these PSI-processes jump at different moments of time a.s., a jump of any one process is not compensated by the other PSI-processes and makes a contribution to $\zeta_n$. If $\alpha\beta \le 2$, then, after the scaling by $\sqrt{n}$ in (3), the size of the jump that exists according to Proposition 3 exceeds $n^{1/(\alpha\beta) - 1/2}$, which does not tend to 0 as $n \to \infty$. Hence, the limit in the Skorokhod space $D[0, 1]$, if it exists, should have jumps with positive probability. However, it is well known that a stationary Gaussian process with the covariance function $\mathrm{const} \cdot \mathcal{L}_\lambda(u)$, $u \ge 0$, where $\mathcal{L}_\lambda(u)$ is given by (17), has a continuous modification a.s. This contradiction shows that the convergence $\zeta_n \Rightarrow \zeta$ cannot take place in $D[0, 1]$ as $n \to \infty$.
Remark 3. 
The considered counterexample suggests that the correct condition for the functional limit theorem could be $\mathbb{E}|\xi_0|^{2h} < \infty$ for some $h > 1/\gamma$. Theorem 1 is proved under the more restrictive condition $h > 1/\gamma^2$. In the case $\mathbb{E}\lambda < \infty$, Assumption 1 holds with $\gamma = 1$, so both inequalities become $h > 1$. In the more interesting case $\mathbb{E}\lambda = \infty$, we conjecture that the less restrictive inequality $h > 1/\gamma$ should be enough. The only place in our proof where we need $h > 1/\gamma^2$ is Lemma 4, which is proved with a straightforward and rather rough approach. A more sophisticated technique is needed to show that the same or a similar result holds if $h > 1/\gamma$.

5. Modulus of Continuity for PSI-Processes with Random Intensity

We need to bound the probability of large changes of the PSI-process with random intensity. The following result provides the basis for such bounds.
Proposition 4. 
Consider a PSI-process $\psi$ defined by (1), and let $F$ denote the distribution function of $\xi_0$. Then, for any fixed $\delta > 0$,
$$\mathbb{P}\Big( \sup_{0 \le t \le \delta} |\psi(t; \lambda) - \psi(0; \lambda)| \ge r \Big) = 1 - \int_{-\infty}^{\infty} \mathcal{L}_\lambda\Big( \delta\, \big(1 - F(x + r) + F(x - r)\big) \Big)\, dF(x),\tag{18}$$
at least for all $r > 0$ such that $F(x)$ and $F(x + r)$ have no common discontinuity points.
Proof. 
Suppose first that $\lambda$ is fixed. If there are no jumps of $\Pi(\lambda t)$ on $[0, \delta]$, then $\psi(t; \lambda) = \psi(0; \lambda) = \xi_0$ for all $t \in [0, \delta]$. If $\Pi(\lambda t)$ has $k > 0$ jumps on $[0, \delta]$, then
$$\sup_{0 \le t \le \delta} |\psi(t; \lambda) - \psi(0; \lambda)| = \max\{|\xi_1 - \xi_0|, \dots, |\xi_k - \xi_0|\}.$$
Since the $(\xi_i)$ are i.i.d., conditioning on the value $\xi_0 = x$, we obtain
$$\mathbb{P}\big( \max\{|\xi_1 - \xi_0|, \dots, |\xi_k - \xi_0|\} < r \big) = \int_{-\infty}^{\infty} \mathbb{P}\big(|\xi_1 - x| < r\big)^k\, dF(x),$$
and if $F(x)$ and $F(x + r)$ have no common discontinuities as functions of $x$, this implies
$$\mathbb{P}\big( \max\{|\xi_1 - \xi_0|, \dots, |\xi_k - \xi_0|\} \ge r \big) = 1 - \int_{-\infty}^{\infty} \big( F(x + r) - F(x - r) \big)^k\, dF(x).$$
For a fixed $\lambda$, the process $\Pi(\lambda t)$ has $k$ jumps on $[0, \delta]$ with probability $\frac{(\lambda\delta)^k}{k!} e^{-\lambda\delta}$, so by the law of total probability,
$$\mathbb{P}\Big( \sup_{0 \le s \le \delta} |\psi(s; \lambda) - \psi(0; \lambda)| \ge r \,\Big|\, \lambda \Big) = \sum_{k=1}^{\infty} \Big[ 1 - \int_{-\infty}^{\infty} \big( F(x + r) - F(x - r) \big)^k\, dF(x) \Big] \frac{(\lambda\delta)^k}{k!}\, e^{-\lambda\delta} = 1 - e^{-\lambda\delta} - e^{-\lambda\delta} \int_{-\infty}^{\infty} \Big[ \exp\Big( \lambda\delta \big( F(x + r) - F(x - r) \big) \Big) - 1 \Big] dF(x) = 1 - \int_{-\infty}^{\infty} \exp\Big( -\lambda\delta \big( 1 - F(x + r) + F(x - r) \big) \Big)\, dF(x),$$
where changing the order of summation and integration is justified by Fubini's theorem, and the last equality follows by simple manipulations using $\int dF(x) = 1$. The claim (18) follows by taking the expectation with respect to $\lambda$; again, the order of integration can be changed by Fubini's theorem. □
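For a constant intensity and a continuous $F$, the identity (18) can be tested by brute force: simulate the supremum on the left and integrate the right-hand side numerically (standard normal $F$ and the parameter values below are illustrative):

```python
import numpy as np
from math import erf, sqrt, pi

rng = np.random.default_rng(6)

Phi = lambda y: 0.5 * (1.0 + erf(y / sqrt(2.0)))   # standard normal CDF

lam, delta, r = 3.0, 0.5, 1.0

# Right-hand side of (18) for constant lambda: L_lambda(v) = exp(-lam * v),
# integrated against dF(x) on a fine grid (Riemann sum).
x = np.linspace(-8.0, 8.0, 4001)
q = np.array([1.0 - Phi(v + r) + Phi(v - r) for v in x])   # P(|xi_1 - x| >= r)
pdf = np.exp(-x**2 / 2.0) / sqrt(2.0 * pi)
rhs = 1.0 - float(np.sum(np.exp(-lam * delta * q) * pdf) * (x[1] - x[0]))

# Left-hand side by direct simulation of the PSI-process on [0, delta].
n = 100_000
k = rng.poisson(lam * delta, size=n)          # number of jumps on [0, delta]
xi0 = rng.standard_normal(n)
lhs = np.mean([ki > 0 and np.max(np.abs(rng.standard_normal(ki) - x0)) >= r
               for ki, x0 in zip(k, xi0)])
print(abs(lhs - rhs) < 0.01)   # prints True
```

The two sides agree up to Monte Carlo and discretization error, as the proposition asserts.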
The equality (18) easily implies a bound for the probability on its left-hand side in terms of the so-called concentration function of a random variable $\xi$, defined as
$$Q_\xi(r) = \sup_{x \in \mathbb{R}} \mathbb{P}(x \le \xi \le x + r).$$
A straightforward calculation shows that (18) implies
$$\mathbb{P}\Big( \sup_{0 \le t \le \delta} |\psi(t; \lambda) - \psi(0; \lambda)| \ge r \Big) \le 1 - \mathcal{L}_\lambda\Big( \delta\, \big(1 - Q_{\xi_0}(2r)\big) \Big).$$
However, we need a more explicit bound. To obtain such a bound, we analyze the behavior of the Laplace transform $\mathcal{L}_\lambda(s)$ for small $s$. It is postulated in Assumption 1, but for applications, it is convenient to have an explicit inequality. This can always be achieved by slightly reducing the power of $s$.
Lemma 2. 
If $\lambda$ satisfies Assumption 1, then for any $\varepsilon \in (0, \gamma)$, there exists a constant $C > 0$ such that
$$0 \le 1 - \mathcal{L}_\lambda(s) \le C s^{\gamma - \varepsilon}, \quad s \ge 0.\tag{20}$$
Proof. 
Since $(1 - \mathcal{L}_\lambda(s))\, s^{-\gamma + \varepsilon} \to 0$ as $s \downarrow 0$ according to (10), the inequality (20) holds with $C = 1$ for $s \in [0, s_0)$ with some sufficiently small $s_0 = s_0(\gamma, \varepsilon, \mathcal{L}_\lambda)$. The inequality for $s \ge s_0$ can be fulfilled by increasing $C$ if necessary. □
A combination of the above statements gives an estimate of the probability of big changes of the PSI-process with random intensity on a small interval, provided that we can bound the tail probability of an individual random variable $\xi_0$, say under some moment assumptions.
Proposition 5. 
Suppose that the PSI-process $\psi(t; \lambda)$ with the random intensity $\lambda$ defined by (1) satisfies the assumptions of Proposition 4, that $\lambda$ satisfies Assumption 1, and that $\mathbb{E}|\xi_0|^{2h} < \infty$ for some $h > 0$. Then, for any $\varepsilon \in (0, \gamma)$, there exists a constant $C > 0$ such that for all $r > 0$ and $\delta \in [0, 1]$,
$$\mathbb{P}\Big( \sup_{0 \le t \le \delta} |\psi(t; \lambda) - \psi(0; \lambda)| \ge r \Big) \le C\, \delta^{\gamma - \varepsilon}\, r^{-2h(\gamma - \varepsilon)}.$$
Proof. 
Denote for short m 2 h : = E | ξ 0 | 2 h < by assumption. Take r > 0 , then for any | x | < r / 2
1 F ( x + r ) + F ( x r ) = P ξ 0 x r or ξ 0 > x + r P | ξ 0 | r / 2 2 2 h m 2 h r 2 h
by Markov’s inequality. Thus, since L λ is non-increasing,
r / 2 r / 2 1 L λ δ ( 1 F ( x + r ) + F ( x r ) ) d F ( x ) 1 L λ 4 h m 2 h δ r 2 h .
On the other hand, 1 L λ δ ( 1 F ( x + r ) + F ( x r ) ) 1 L λ ( δ ) for any x R and r 0 . Hence, for any ε > 0 , again by the Markov inequality applied to | ξ 0 | 2 h ( γ ε ) , one has
r / 2 + r / 2 1 L λ δ ( 1 F ( x + r ) + F ( x r ) ) d F ( x ) ( 1 L λ ( δ ) ) P | ξ 0 | r / 2 ( 1 L λ ( δ ) ) 2 2 h ( γ ε ) m 2 h ( γ ε ) r 2 h ( γ ε ) .
Combining (22) and (23) and using Lemma 2, we obtain the result. □

6. Sums of PSI-Processes

Since the limit of the normalized cumulative sums ( ζ n ) is an a.s. continuous stochastic process, we can use Theorem 15.5 from Billingsley’s book [11] (p. 127), which gives the conditions for convergence of processes from the Skorokhod space D [ 0 , 1 ] to a process with realizations lying in C [ 0 , 1 ] a.s., in terms of the modulus of continuity
ω ζ ( δ ) = sup s , t [ 0 , 1 ] | s t | δ { | ζ ( s ) ζ ( t ) | } .
It claims that if
(i)
for any ε > 0 , there exists t such that P | ζ n ( 0 ) | > t ≤ ε for all n ≥ 1 ;
(ii)
for any positive ε and w there exist δ ( 0 , 1 ) and n 0 such that
P ω ζ n ( δ ) w ε , n n 0 ;
(iii)
( ζ n ) converges weakly in terms of finite-dimensional distributions to some random function ζ as n → ∞ ; then ( ζ n ) converges to ζ as n → ∞ in D [ 0 , 1 ] , and ζ is continuous a.s.
In order to bound ω ζ n in probability, Billingsley suggests using a corollary to Theorem 8.3 in the same book, which can be formulated as follows. Suppose that ζ is a random element of D [ 0 , 1 ] ; then, for any δ > 0 and w > 0
P ω ζ ( δ ) 3 w i = 0 1 / δ 1 P sup t [ i δ , ( i + 1 ) δ ] ζ ( t ) ζ ( i δ ) w .
The sum in (26) can be estimated efficiently in our setting because ζ n is stationary by construction for every n. Hence, all the probabilities in the sum (26) coincide and
P ω ζ n ( δ ) 3 w 1 δ P sup t [ 0 , δ ] ζ n ( t ) ζ n ( 0 ) w .
Remark 4. 
Actually, the events whose probabilities are added on the right-hand side of (26) are dependent: for a large n and a small δ, the appearance of a big jump of ζ n on [ 0 , δ ] suggests that some ψ ( i ) ( t ; λ i ) has many jumps, and hence, the corresponding λ i is large; so it is likely that there are many jumps on the other intervals as well, and the probability of a big jump there is not too small. Perhaps this observation could be used to obtain a better bound than the union bound (27), but we have not pursued it.
In order to check assumption (ii) of Billingsley’s theorem, we apply the following two-stage procedure. We use (27) to bound the “global” probability of jumps greater than w on some interval of length δ . We aim to show that for any w > 0 and ε > 0 , one can find positive C, τ , and δ such that
P sup t [ 0 , δ ] ζ n ( t ) ζ n ( 0 ) w C δ 1 + τ and C δ τ < ε
for all n greater than some n 0 . To this end, we first show that one can find positive C, τ , δ , and n 0 such that (28) holds for n = n 0 , and then analyze the local structure of ζ n to show that (28) actually holds for all n ≥ n 0 .
Our analysis of sup t [ 0 , δ ] ζ n ( t ) ζ n ( 0 ) is based on the results of Section 5. Consider the Poisson processes with random intensity Π i ( λ i t ) , i = 1 , , n , used in the construction of ψ ( 1 ) , , ψ ( n ) , and denote by κ n ( δ ) the (random) number of these processes that have at least one jump on [ 0 , δ ] :
κ n ( δ ) : = i = 1 n 1 { Π i ( λ i δ ) > 0 } .
This is a binomial random variable with n trials and the success probability
p 1 p 1 ( δ ) : = 1 L λ ( δ ) .
Lemma 2 provides an upper bound for p 1 ( δ ) . We are interested only in the case when p 1 ( δ ) is small compared to 1 / n , that is, when E κ n ( δ ) is small. Then, the probability that κ n ( δ ) ≥ b decays fast enough even for an appropriately chosen fixed b.
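The binomial structure of κ n ( δ ) is easy to confirm by simulation. A minimal sketch under the illustrative assumption λ ∼ Exp ( 1 ) , for which p 1 ( δ ) = 1 − L λ ( δ ) = δ / ( 1 + δ ) (the function names are ours):

```python
import random

random.seed(7)

def has_jump(delta):
    """Indicator of the event Pi(lambda * delta) > 0 for a fresh
    lambda ~ Exp(1): the first arrival of the unit-rate Poisson
    process Pi falls before time lambda * delta."""
    lam = random.expovariate(1.0)
    return random.expovariate(1.0) < lam * delta

n, delta, trials = 50, 0.2, 4000
p1 = delta / (1.0 + delta)           # 1 - L_lambda(delta) for Exp(1)
mean_kappa = sum(sum(has_jump(delta) for _ in range(n))
                 for _ in range(trials)) / trials
# mean_kappa is close to the binomial mean n * p1
```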
Lemma 3. 
Let λ satisfy Assumption 1. Then, for any a > 1 / γ , b > a / ( a γ − 1 ) , and c > 0 , one can find positive τ and δ 0 such that for all n satisfying n δ 1 / a ≤ c , it holds that
P κ n ( δ ) b δ 1 + τ , δ ( 0 , δ 0 ) .
Proof. 
The well-known Chernoff bound [16] (Theorem 2.1) ensures that for any t 0 ,
P κ n ( δ ) n p 1 ( δ ) + t exp f t / ( n p 1 ( δ ) ) n p 1 ( δ ) ,
where f ( x ) = ( 1 + x ) log ( 1 + x ) x . For a > 1 / γ , Lemma 2, along with the assumption n δ 1 / a ≤ c , guarantees that n p 1 ( δ ) ≤ C δ γ − 1 / a − ε for any ε ( 0 , γ ) and some C (which may depend on ε ). Taking ε < γ − 1 / a yields n p 1 ( δ ) → 0 as δ → 0 . Plugging t = b n p 1 ( δ ) , which is positive for small δ , into (31) gives
log P κ n ( δ ) b f ( b / ( n p 1 ( δ ) ) 1 n p 1 ( δ ) = b ( log b 1 ) + b log ( n p 1 ( δ ) ) n p 1 ( δ ) b ( log b 1 log c ) + b ( γ 1 / a ε ) log δ .
Restricting ε further to be less than γ − 1 / a − 1 / b , which is positive by the assumptions, implies that the coefficient of log δ , that is, b ( γ − 1 / a − ε ) , is greater than 1, and Lemma 3 is proved. □
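The Chernoff bound (31) can be sanity-checked against an empirical binomial tail; the parameters below are our illustrative choices, not those of the proof:

```python
import math
import random

def chernoff_upper(n, p, t):
    """Chernoff upper bound for P(Bin(n, p) >= n*p + t):
    exp(-f(t / (n*p)) * n * p) with f(x) = (1+x) log(1+x) - x."""
    x = t / (n * p)
    f = (1.0 + x) * math.log(1.0 + x) - x
    return math.exp(-f * n * p)

random.seed(0)
n, p, t, trials = 100, 0.05, 10, 20000   # binomial mean 5, threshold 15
emp = sum(
    sum(random.random() < p for _ in range(n)) >= n * p + t
    for _ in range(trials)
) / trials
bound = chernoff_upper(n, p, t)
# the empirical tail frequency emp stays below the bound
```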
Lemma 4. 
Suppose that the random λ satisfies Assumption 1 and that E | ξ 0 | 2 h < ∞ for some h > 1 / γ 2 . Let 0 < c 1 < c 2 < ∞ . Then, for any a ∈ ( 1 / γ , ( h γ − 1 ) / ( 1 − γ ) ) (with the right bound understood as ∞ if γ = 1 ) and for any fixed w > 0 , there exist positive δ 0 and τ such that for all n ∈ [ c 1 δ − 1 / a , c 2 δ − 1 / a ]
P sup t [ 0 , δ ] | ζ n ( t ) ζ n ( 0 ) | w δ 1 + τ , δ ( 0 , δ 0 ] .
Proof. 
Let a > 1 / γ and w > 0 be fixed. Denote for short δ = n a . By the law of total probability,
P ( sup t [ 0 , δ ] | ζ n ( t ) ζ n ( 0 ) | w ) = k = 0 n P sup t [ 0 , δ ] | ζ n ( t ) ζ n ( 0 ) | w | κ n ( δ ) = k P κ n ( δ ) = k k = 1 b 1 P sup t [ 0 , δ ] | ζ n ( t ) ζ n ( 0 ) | w | κ n ( δ ) = k P κ n ( δ ) = k + P κ n ( δ ) b
for any integer b ≥ 2 . Consider the event κ n ( δ ) = k , k ≥ 1 , which means that exactly k of the n processes ψ ( 1 ) , , ψ ( n ) jump on [ 0 , δ ] , while the other n − k processes are constant there. Then, sup t [ 0 , δ ] | ζ n ( t ) ζ n ( 0 ) | w implies that at least one of the k PSI-processes that jump on [ 0 , δ ] changes by more than w n / k . So, for k ≥ 1 ,
P ( sup t [ 0 , δ ] | ζ n ( t ) ζ n ( 0 ) | w | κ n ( δ ) = k ) k P sup t [ 0 , δ ] | ψ ( t ; λ ) ψ ( 0 ; λ ) | w n / k | Π ( λ · ) jumps on [ 0 , δ ] = k p 1 P sup t [ 0 , δ ] | ψ ( t ; λ ) ψ ( 0 ; λ ) | w n / k .
Proposition 5 provides a bound for the probability on the right-hand side of (34), and since κ n ( δ ) has the binomial distribution with the parameters n and p 1 , using the total probability formula, we continue (33) as
P ( sup t [ 0 , δ ] | ζ n ( t ) ζ n ( 0 ) | w ) k = 1 b 1 k P sup t [ 0 , δ ] | ψ ( t ; λ ) ψ ( 0 ; λ ) | w n / k n k p 1 k 1 ( 1 p 1 ) n k + P κ n ( δ ) b C k = 1 b 1 k n k k 2 h δ k w 2 h n h γ ε + P κ n ( δ ) b
for any ε ( 0 , γ ) , h > 0 such that E | ξ 0 | 2 h < , and some C depending on the choice of ε , where the last inequality follows from Proposition 5.
Suppose now that h > 1 / γ 2 . Then 1 / γ < ( h γ − 1 ) / ( 1 − γ ) , where the right-hand side is understood as ∞ if γ = 1 . Choose a ∈ ( 1 / γ , ( h γ − 1 ) / ( 1 − γ ) ) and an integer b > a / ( a γ − 1 ) . Then, by Lemma 3, there exists a positive τ such that P κ n ( δ ) ≥ b ≤ δ 1 + τ for small enough δ . The bounds c 1 δ − 1 / a ≤ n ≤ c 2 δ − 1 / a give
k n k k 2 h δ k w 2 h n h γ ε k 2 h ( γ ε ) ( k 1 ) ! w 2 h ( γ ε ) n k δ k ( γ ε ) n h ( γ ε ) c 2 k k 2 h ( γ ε ) c 1 h ( γ ε ) w 2 h ( γ ε ) δ k ( γ ε 1 / a ) + h ( γ ε ) / a .
Choosing ε < γ 1 / a ensures that the power of δ is minimal for k = 1 , and the inequality a < ( h γ 1 ) / ( 1 γ ) guarantees that for k = 1 this power γ + ( h γ 1 ) / a ( 1 + h / a ) ε > 1 for small enough ε ; thus, (32) follows from (35). □
The estimates used in the proof of Lemma 4 essentially rely on the relation between δ and n. Therefore, this argument cannot provide the bound (28) uniformly for all n ≥ n 0 . In order to obtain such a bound, we apply a technique close to the one used in Billingsley’s book [11] (Ch. 12). If we impose a moment condition on ξ 0 , then the following bound holds:
Lemma 5. 
Suppose that E ξ 0 = 0 , E ξ 0 2 = 1 and E | ξ 0 | 2 h < ∞ for some h > 1 . Then, for some constant C > 0 and for all n = 1 , 2 , … and 0 ≤ s < t ≤ 1 ,
E ζ n ( t ) ζ n ( s ) 2 h C max { p 1 ( t s ) h , p 1 ( t s ) n 1 h } ,
where p 1 ( · ) is defined by (30).
Proof. 
Due to stationarity of ζ n for each n, it is enough to consider the case s = 0 . For any t 0 , we can represent the increment ζ n ( t ) ζ n ( 0 ) as a sum of i.i.d. random variables
ζ n ( t ) ζ n ( 0 ) = d 1 n i = 1 n η i ,
η i = d ξ 1 ξ 0 1 { Π ( λ t ) > 0 } .
Each summand η i has a symmetric distribution, and the two factors on the right-hand side of (38) are independent. By Rosenthal’s inequality (see, e.g., [17] (Th. 2.9)), we obtain
E ζ n ( t ) ζ n ( 0 ) 2 h C n h max Var i = 1 n η i h , n E | η 1 | 2 h
for some constant C > 0 . Both moments can be easily evaluated. Since the summands are i.i.d.,
Var i = 1 n η i = n Var η 1 = n p 1 ( t ) Var ( ξ 1 ξ 0 ) = 2 n p 1 ( t ) ,
because E 1 { Π ( λ t ) > 0 } = p 1 ( t ) . Similarly,
E | η 1 | 2 h = p 1 ( t ) E | ξ 1 ξ 0 | 2 h .
Plugging these two values into (39), we readily obtain (36), possibly with a different constant C than in (39). □
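The variance computation in the proof above (though not the full Rosenthal bound) is easy to confirm by simulation. A sketch of the summand (38) under the illustrative choices λ ∼ Exp ( 1 ) and standard normal ξ :

```python
import random

random.seed(3)

def eta(t):
    """One summand of (38): (xi_1 - xi_0) * 1{Pi(lambda * t) > 0},
    with lambda ~ Exp(1) and standard normal xi (illustrative choices)."""
    lam = random.expovariate(1.0)
    if random.expovariate(1.0) >= lam * t:   # no jump on [0, t]
        return 0.0
    return random.gauss(0.0, 1.0) - random.gauss(0.0, 1.0)

t, trials = 0.5, 40000
p1 = t / (1.0 + t)                           # 1 - L_lambda(t) for Exp(1)
var = sum(eta(t) ** 2 for _ in range(trials)) / trials  # E eta = 0 by symmetry
# var is close to Var(eta_1) = Var(xi_1 - xi_0) * p1 = 2 * p1
```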
Corollary 2. 
Suppose that Assumption 1 holds and h > 1 / γ in the settings of Lemma 5. Then, for any fixed w > 0 , one can find positive δ 1 and τ such that for all n ≥ ( t − s ) − ( γ + 1 ) / ( h + 1 ) , it holds that
P ζ n ( t ) ζ n ( s ) w ( t s ) 1 + τ , t s ( 0 , δ 1 ] .
Proof. 
By the Markov inequality, we have
P ζ n ( t ) ζ n ( s ) w E ζ n ( t ) ζ n ( s ) 2 h w 2 h , 0 s < t 1 .
Lemma 5 gives a bound for the right-hand side in terms of p 1 ( t s ) and n. Lemma 2 provides an upper bound for p 1 ( t s ) , and the condition on n imposed in the claim implies n − 1 ≤ ( t − s ) ( γ + 1 ) / ( h + 1 ) . Hence, for any ε > 0 , there exists a constant C > 0 such that for all 0 ≤ s < t ≤ 1
P ζ n ( t ) ζ n ( s ) w C max ( t s ) h ( γ ε ) , ( t s ) γ ε + ( h 1 ) ( γ + 1 ) / ( h + 1 ) .
Taking ε = ( h γ − 1 ) / ( h + 1 ) , which is positive by the assumptions, makes both exponents above equal: h ( γ − ε ) = γ − ε + ( h − 1 ) ( γ + 1 ) / ( h + 1 ) = 1 + ε . Hence, this choice of ε yields (40) with τ = ε for all 0 ≤ s < t ≤ 1 , but with an extra constant on the right-hand side of the inequality. Restricting t − s to a suitable interval ( 0 , δ 1 ] allows us to get rid of this constant. □
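The balancing choice of ε in the proof above can be verified in exact arithmetic. The sketch below checks, for the sample values h = 3 and γ = 1 / 2 (which satisfy h > 1 / γ ), that ε = ( h γ − 1 ) / ( h + 1 ) makes both exponents equal to 1 + ε :

```python
from fractions import Fraction as F

def balanced_exponents(h, gamma):
    """For eps = (h*gamma - 1)/(h + 1), the two exponents
    h*(gamma - eps)  and  (gamma - eps) + (h - 1)*(gamma + 1)/(h + 1)
    coincide and both equal 1 + eps."""
    eps = (h * gamma - 1) / (h + 1)
    e1 = h * (gamma - eps)
    e2 = (gamma - eps) + (h - 1) * (gamma + 1) / (h + 1)
    return eps, e1, e2

eps, e1, e2 = balanced_exponents(F(3), F(1, 2))   # h = 3, gamma = 1/2
# exact rational arithmetic: eps = 1/8 and e1 = e2 = 9/8 = 1 + eps
```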
Proof of Theorem 1.
Without loss of generality, we may assume T = 1 (otherwise, perform the non-random time change t t / T ). We need to show that the conditions of Theorem 15.5 of [11] (recalled at the beginning of Section 6) hold. Condition (iii) has already been verified (see Corollary 1), and it implies condition (i). So it remains to check condition (ii), which follows from (28).
Suppose that we are given positive ε and w and want to find δ and n 0 such that (25) holds. Lemma 4 applied with c 1 = 1 / 2 , c 2 = 2 implies that for some positive δ 0 , τ and any a ( 1 / γ , ( h γ 1 ) / ( 1 γ ) ) inequality (32) holds for δ ( 0 , δ 0 ] . Corollary 2 guarantees that for some positive δ 1 , inequality (40) holds for n sufficiently large and δ ( 0 , δ 1 ] , and in our application below, the lower bound on n will be fulfilled if a < ( h + 1 ) / ( γ + 1 ) . Choose some a ( 1 / γ , min { ( h γ 1 ) / ( 1 γ ) , ( h + 1 ) / ( γ + 1 ) } ) (this interval is not empty if h > 1 / γ 2 ), fix a positive δ min { δ 0 , δ 1 } and let n 0 = δ 1 / a .
For this choice of parameters, Lemma 4 (again with c 1 = 1 / 2 , c 2 = 2 ) ensures that (28) holds for all n [ n 0 , 2 n 0 ] . Suppose now that n > 2 n 0 and let m = n a δ . (Note that a > 1 / γ ≥ 1 , so m ≥ 2 if n > 2 n 0 .) Then, for c 1 = 1 / 2 , c 2 = 2 , we have n ∈ [ c 1 ( δ / m ) − 1 / a , c 2 ( δ / m ) − 1 / a ] , so (32) holds with δ / m instead of δ , implying that for any i = 1 , , m
P sup t [ δ ( i 1 ) / m , δ i / m ] ζ n ( t ) ζ n ( δ ( i 1 ) / m ) w ( δ / m ) 1 + τ ,
due to the stationarity of ζ n . Let
Z m ( δ ) : = max i = 1 , , m ζ n ( δ i / m ) ζ n ( 0 ) .
Take s = i δ / m and t = j δ / m for some 0 i < j m . Now, we aim to apply Corollary 2 for these s and t. Note that t s ( 0 , δ 1 ) by the choice of δ , so it remains to check that the assumption n ( t s ) ( γ + 1 ) / ( h + 1 ) holds. Indeed, t s δ / m and m / δ n a ; thus, ( t s ) ( γ + 1 ) / ( h + 1 ) n a ( γ + 1 ) / ( h + 1 ) < n by the choice of a. Hence, Corollary 2 implies
P ζ n ( j δ / m ) ζ n ( i δ / m ) w ( j i ) δ / m 1 + τ
for some τ > 0 . Hence, Theorem 12.2 from Billingsley’s book [11] implies that
P Z m ( δ ) w K δ 1 + τ
for some K > 0 , which depends on τ but not on δ .
Suppose now that Z m ( δ ) < w and sup t [ δ ( i 1 ) / m , δ i / m ] ζ n ( t ) ζ n ( δ ( i 1 ) / m ) < w for all i = 1 , , m . Then, sup t [ 0 , δ ] | ζ n ( t ) ζ n ( 0 ) | < 2 w by the triangle inequality. Hence,
P sup t [ 0 , δ ] | ζ n ( t ) ζ n ( 0 ) | 2 w P Z m ( δ ) w + m P sup t [ 0 , δ / m ] ζ n ( t ) ζ n ( 0 ) w ( K + 1 ) δ 1 + τ 1
with τ 1 equal to the smaller of the two exponents in (41) and (42), by these two inequalities. This argument works for any δ ≤ min { δ 0 , δ 1 } , with δ 0 and δ 1 given by Lemma 4 and Corollary 2, and by choosing δ > 0 small enough, one can guarantee that ( K + 1 ) δ τ 1 ≤ ε . This proves (28) (with 2 w instead of w, but w > 0 is arbitrary) for all n ≥ n 0 , and the claim follows by an application of Theorem 15.5 from Billingsley’s book [11]. □

7. Conclusions

The functional limit theorem for normalized cumulative sums of PSI-processes (Theorem 1) can be used in both directions. The PSI-processes are very simple, so some results can be obtained directly for their sums and then imply the corresponding facts for the limiting stationary Gaussian process ζ . On the other hand, the theory of stationary Gaussian processes has been deeply developed over the last few decades, and some results of this theory have consequences for the pre-limiting processes ( ζ n ) , which model a number of real-life phenomena.
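As an elementary illustration of the first direction, the covariance E e − λ u of a single PSI-process (which is also the covariance of the limit process ζ ) can be checked by direct simulation. A sketch under the illustrative choices λ ∼ Exp ( 1 ) , so that E e − λ u = 1 / ( 1 + u ) , and standard normal ξ :

```python
import random

random.seed(11)

def psi_pair(u):
    """One sample of (psi(0), psi(u)) for a PSI-process with random
    intensity lambda ~ Exp(1) and standard normal xi (illustrative)."""
    lam = random.expovariate(1.0)
    xi0 = random.gauss(0.0, 1.0)
    if random.expovariate(1.0) < lam * u:    # at least one jump on [0, u]
        return xi0, random.gauss(0.0, 1.0)   # fresh xi after the last jump
    return xi0, xi0                          # no jump: psi(u) = psi(0)

u, trials = 1.0, 50000
cov = sum(a * b for a, b in (psi_pair(u) for _ in range(trials))) / trials
target = 1.0 / (1.0 + u)                     # E exp(-lambda * u) for Exp(1)
# cov is close to target: only the no-jump event contributes to E psi(0) psi(u)
```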
When γ < 1 in Assumption 1, there is a gap between the condition suggested by the counterexample of Section 4, that is, E | ξ 0 | 2 / γ + ε < ∞ for some ε > 0 , and the actual condition E | ξ 0 | 2 / γ 2 + ε < ∞ (see (11)) under which Theorem 1 is proven. Also, if E λ < ∞ , it is still unclear whether the finiteness of the variance E ξ 0 2 < ∞ alone would be sufficient for the convergence in the Skorokhod space.

Author Contributions

Writing – original draft, Y.Y., O.R. and A.G.; Writing – review & editing, Y.Y., O.R. and A.G. All authors have read and agreed to the published version of the manuscript.

Funding

The reported study was funded by RFBR, project number 20-01-00646 A.

Data Availability Statement

Not applicable.

Acknowledgments

The authors express their gratitude to A.V. Liulintsev (the last year student at the Math. and Mech. Dept. of St. Petersburg State University, a participant of the project 20-01-00646 A) for active discussion of M-processes studied in Section 3.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Rusakov, O.; Yakubovich, Y. Poisson processes directed by subordinators, stuttering Poisson and pseudo-Poisson processes, with applications to actuarial mathematics. J. Phys. Conf. Ser. 2021, 2131, 022107.
2. Barndorff-Nielsen, O.E. Stationary infinitely divisible processes. Braz. J. Probab. Stat. 2011, 25, 294–322.
3. Barndorff-Nielsen, O.E.; Benth, F.E.; Veraart, A.E.D. Ambit Stochastics; Springer Nature: Cham, Switzerland, 2018.
4. Wolpert, R.L.; Taqqu, M.S. Fractional Ornstein–Uhlenbeck Lévy processes and the Telecom process: Upstairs and downstairs. Signal Process. 2005, 85, 1523–1545.
5. Rusakov, O.; Yakubovich, Y. On PSI, trawl, and ambit stochastics. In Proceedings of the 7th International Conference on Stochastic Methods (ICSM-7), Gelendzhik, Russia, 2–9 June 2022. To appear in Theory Probab. Appl.
6. Rusakov, O.V. Tightness of sums of independent identically distributed pseudo-Poisson processes in the Skorokhod space. Zap. Nauchn. Sem. POMI 2015, 442, 122–132. (In Russian); English transl. in J. Math. Sci. 2017, 225, 805–811.
7. Jacod, J.; Shiryaev, A.N. Limit Theorems for Stochastic Processes, 2nd ed.; Springer: Berlin, Germany, 2002.
8. Rusakov, O.V. Pseudo-Poissonian processes with stochastic intensity and a class of processes generalizing the Ornstein–Uhlenbeck process. Vestn. St. Petersb. Univ. Math. Mech. Astron. 2017, 4, 247–257. (In Russian); English transl. in Vestn. St. Petersb. Univ. Math. 2017, 50, 153–160.
9. Bingham, N.H.; Goldie, C.M.; Teugels, J.L. Regular Variation; Encyclopedia of Mathematics and Its Applications, 27; Cambridge University Press: Cambridge, UK, 1987.
10. Cramér, H.; Leadbetter, M.R. Stationary and Related Stochastic Processes; Reprint of the 1967 original; Dover Publications: Mineola, NY, USA, 2004.
11. Billingsley, P. Convergence of Probability Measures; John Wiley & Sons: Hoboken, NJ, USA, 1968.
12. Feller, W. An Introduction to Probability Theory and Its Applications, 2nd ed.; John Wiley & Sons: New York, NY, USA, 1971; Volume II.
13. Gushchin, A.A. Stochastic Calculus for Quantitative Finance; ISTE Press: London, UK; Elsevier: Oxford, UK, 2015.
14. Karandikar, R.L.; Rao, B.V. Introduction to Stochastic Calculus; Indian Statistical Institute Series; Springer: Singapore, 2018.
15. Nevzorov, V.B. Records: Mathematical Theory; Translations of Mathematical Monographs; AMS: Providence, RI, USA, 2001; Volume 194; translated from the Russian manuscript by D. M. Chibisov.
16. Janson, S.; Łuczak, T.; Rucinski, A. Random Graphs; Wiley-Interscience Series in Discrete Mathematics and Optimization; Wiley-Interscience: New York, NY, USA, 2000.
17. Petrov, V.V. Limit Theorems of Probability Theory; Oxford Studies in Probability; Clarendon Press: Oxford, UK, 1995.