Mathematics 2018, 6(6), 99; doi:10.3390/math6060099

Article
Convergence in Total Variation to a Mixture of Gaussian Laws
1 Accademia Navale, Viale Italia 72, 57100 Livorno, Italy
2 Dipartimento di Matematica “F. Casorati”, Università di Pavia, via Ferrata 1, 27100 Pavia, Italy
* Author to whom correspondence should be addressed.
Received: 29 April 2018 / Accepted: 5 June 2018 / Published: 11 June 2018

Abstract
It is not unusual that $X_n \xrightarrow{\text{dist}} VZ$, where $X_n$, $V$, $Z$ are real random variables, $V$ is independent of $Z$, and $Z \sim \mathcal{N}(0,1)$. An intriguing feature is that $P(VZ \in A) = E\bigl[\mathcal{N}(0,V^2)(A)\bigr]$ for each Borel set $A \subset \mathbb{R}$; namely, the probability distribution of the limit $VZ$ is a mixture of centered Gaussian laws with (random) variance $V^2$. In this paper, conditions for $d_{TV}(X_n, VZ) \to 0$ are given, where $d_{TV}(X_n, VZ)$ is the total variation distance between the probability distributions of $X_n$ and $VZ$. To estimate the rate of convergence, a few upper bounds for $d_{TV}(X_n, VZ)$ are given as well. Special attention is paid to the following two cases: (i) $X_n$ is a linear combination of the squares of Gaussian random variables; and (ii) $X_n$ is related to the weighted quadratic variations of two independent Brownian motions.
Keywords:
mixture of Gaussian laws; rate of convergence; total variation distance; Wasserstein distance; weighted quadratic variation
MSC:
60B10; 60F05

1. Introduction

All random elements involved in the sequel are defined on a common probability space $(\Omega, \mathcal{F}, P)$. We let $\mathcal{B}$ denote the Borel $\sigma$-field on $\mathbb{R}$ and $\mathcal{N}(a,b)$ the Gaussian law on $\mathcal{B}$ with mean $a$ and variance $b$, where $a \in \mathbb{R}$, $b \ge 0$, and $\mathcal{N}(a,0) = \delta_a$. Moreover, $Z$ always denotes a real random variable such that
$$Z \sim \mathcal{N}(0,1).$$
In plenty of frameworks, it happens that
$$X_n \xrightarrow{\text{dist}} VZ, \tag{1}$$
where $X_n$ and $V$ are real random variables and $V$ is independent of $Z$. Condition (1) actually occurs in the CLT, both in its classical form (with $V = 1$) and in its exchangeable and martingale versions (Examples 3 and 4). In addition, condition (1) arises in several recent papers with various distributions for $V$. See, e.g., [1,2,3,4,5,6,7,8].
An intriguing feature of condition (1) is that the probability distribution of the limit,
$$P(VZ \in A) = \int \mathcal{N}(0, V^2)(A)\,dP, \qquad A \in \mathcal{B},$$
is a mixture of centered Gaussian laws with (random) variance $V^2$. Moreover, condition (1) can often be strengthened into
$$d_W(X_n, VZ) \to 0, \tag{2}$$
where $d_W(X_n, VZ)$ is the Wasserstein distance between the probability distributions of $X_n$ and $VZ$. In fact, condition (2) amounts to (1) provided the sequence $(X_n)$ is uniformly integrable; see Section 2.1.
A few (engaging) problems are suggested by conditions (1) and (2). One is:
(*) Give conditions for $d_{TV}(X_n, VZ) \to 0$, where
$$d_{TV}(X_n, VZ) = \sup_{A \in \mathcal{B}} \bigl| P(X_n \in A) - P(VZ \in A) \bigr|.$$
Under such (or stronger) conditions, estimate the rate of convergence, i.e., find quantitative bounds for $d_{TV}(X_n, VZ)$.
Problem (*) is addressed in this paper. Before turning to results, however, we mention an example.
Example 1.
Let $B$ be a fractional Brownian motion with Hurst parameter $H$ and
$$X_n = \frac{n^{1+H}}{2} \int_0^1 t^{n-1}\,\bigl(B_1^2 - B_t^2\bigr)\,dt.$$
The asymptotics of $X_n$ and other analogous functionals of the $B$-paths (such as weighted power variations) are investigated in various papers. See, e.g., [5,7,8,9,10] and references therein. We note also that
$$\int_0^1 t^n B_t\,dB_t = X_n\,n^{-H} - \frac{H}{2H + n} \qquad \text{for each } H \ge 1/4,$$
where the stochastic integral is meant in Skorohod’s sense (it reduces to an Itô integral if $H = 1/2$).
Let $a(H) = 1/2 - |1/2 - H|$ and $V = \sqrt{H\,\Gamma(2H)}\,B_1 \sim \mathcal{N}\bigl(0, H\,\Gamma(2H)\bigr)$. In [8], it is shown that, for every $\beta \in (0,1)$, there is a constant $k$ (depending on $H$ and $\beta$ only) such that
$$d_{TV}(X_n, VZ) \le k\,n^{-\beta\,a(H)} \qquad \text{for all } n \ge 1,$$
where $Z$ is a standard normal random variable independent of $V$. Furthermore, the rate $n^{-\beta\,a(H)}$ is quite close to optimal; see condition (2) of [8].
In Example 1, problem (*) admits a reasonable solution. In fact, in a sense, Example 1 is our motivating example.
This paper includes two main results.
The first (Theorem 1) is of a general type. Suppose $l_n := \int |t\,\phi_n(t)|\,dt < \infty$, where $\phi_n$ is the characteristic function of $X_n$. (In particular, $X_n$ has an absolutely continuous distribution.) Then, an upper bound for $d_{TV}(X_n, VZ)$ is provided in terms of $l_n$ and $d_W(X_n, VZ)$. In some cases, this bound allows one to prove $d_{TV}(X_n, VZ) \to 0$ and to estimate the convergence rate. In Example 5, for instance, such a bound improves on the existing ones; see Theorem 3.1 of [6] and Remark 3.5 of [7]. However, for the upper bound to work, one needs information on $l_n$ and $d_W(X_n, VZ)$, which is not always available. Thus, it is convenient to have some further tools.
In the second result (Theorem 2), the ideas underlying Example 1 are adapted to weighted quadratic variations; see [5,8,9]. Let $B$ and $B'$ be independent standard Brownian motions and
$$X_n = n^{1/2} \sum_{k=0}^{n-1} f\bigl(B_{k/n} - B'_{k/n}\bigr)\,\bigl\{(\Delta B_{k/n})^2 - (\Delta B'_{k/n})^2\bigr\},$$
where $f: \mathbb{R} \to \mathbb{R}$ is a suitable function, $\Delta B_{k/n} = B_{(k+1)/n} - B_{k/n}$ and $\Delta B'_{k/n} = B'_{(k+1)/n} - B'_{k/n}$. Under some assumptions on $f$ (weaker than those usually required in similar problems), it is shown that $d_{TV}(X_n, VZ) = O(n^{-1/4})$, where $V = 2\bigl(\int_0^1 f(\sqrt{2}\,B_t)^2\,dt\bigr)^{1/2}$. Furthermore, $d_{TV}(X_n, VZ) = O(n^{-1/2})$ if one also assumes $\inf |f| > 0$. (We recall that, if $a_n$ and $b_n$ are non-negative numbers, the notation $a_n = O(b_n)$ means that there is a constant $c$ such that $a_n \le c\,b_n$ for all $n$.)

2. Preliminaries

2.1. Distances between Probability Measures

In this subsection, we recall a few known facts on distances between probability measures. We denote by $(S, \mathcal{E})$ a measurable space and by $\mu$ and $\nu$ two probability measures on $\mathcal{E}$.
The total variation distance between $\mu$ and $\nu$ is
$$\|\mu - \nu\| = \sup_{A \in \mathcal{E}} |\mu(A) - \nu(A)|.$$
If $X$ and $Y$ are $(S, \mathcal{E})$-valued random variables, we also write
$$d_{TV}(X, Y) = \|P(X \in \cdot) - P(Y \in \cdot)\| = \sup_{A \in \mathcal{E}} |P(X \in A) - P(Y \in A)|$$
to denote the total variation distance between the probability distributions of $X$ and $Y$.
Next, suppose $S$ is a separable metric space, $\mathcal{E}$ the Borel $\sigma$-field, and
$$\int d(x, x_0)\,\mu(dx) + \int d(x, x_0)\,\nu(dx) < \infty \qquad \text{for some } x_0 \in S,$$
where $d$ is the distance on $S$. The Wasserstein distance between $\mu$ and $\nu$ is
$$W(\mu, \nu) = \inf_{X \sim \mu,\, Y \sim \nu} E\bigl[d(X, Y)\bigr],$$
where the infimum is over the pairs $(X, Y)$ of $(S, \mathcal{E})$-valued random variables such that $X \sim \mu$ and $Y \sim \nu$. By a duality theorem, $W(\mu, \nu)$ admits the representation
$$W(\mu, \nu) = \sup_f \Bigl| \int f\,d\mu - \int f\,d\nu \Bigr|,$$
where the supremum is over those functions $f: S \to \mathbb{R}$ such that $|f(x) - f(y)| \le d(x, y)$ for all $x, y \in S$; see, e.g., Section 11.8 of [11]. Again, if $X$ and $Y$ are $(S, \mathcal{E})$-valued random variables, we write
$$d_W(X, Y) = W\bigl(P(X \in \cdot),\, P(Y \in \cdot)\bigr)$$
to mean the Wasserstein distance between the probability distributions of $X$ and $Y$.
Finally, we make precise the connections between convergence in distribution and convergence in Wasserstein distance in the case $S = \mathbb{R}$. Let $X_n$ and $X$ be real random variables such that $E|X_n| + E|X| < \infty$ for each $n$. Then, the following statements are equivalent:
- $\lim_n d_W(X_n, X) = 0$;
- $X_n \xrightarrow{\text{dist}} X$ and $E|X_n| \to E|X|$;
- $X_n \xrightarrow{\text{dist}} X$ and the sequence $(X_n)$ is uniformly integrable.

2.2. Two Technical Lemmas

The following simple lemma is fundamental for our purposes.
Lemma 1.
If $a_1, a_2 \in \mathbb{R}$, $0 \le b_1 \le b_2$ and $b_2 > 0$, then
$$\bigl\| \mathcal{N}(a_1, b_1) - \mathcal{N}(a_2, b_2) \bigr\| \le 1 - \sqrt{b_1/b_2} + \frac{|a_1 - a_2|}{\sqrt{2\pi b_2}}.$$
Lemma 1 is well known; see, e.g., Proposition 3.6.1 of [12] and Lemma 3 of [8].
Note also that, if $a_1 = a_2 = a$, Lemma 1 yields
$$\bigl\| \mathcal{N}(a, b_1) - \mathcal{N}(a, b_2) \bigr\| \le \frac{|b_1 - b_2|}{b_i} \qquad \text{for each } i \text{ such that } b_i > 0.$$
The next result, needed in Section 4, is just a consequence of Lemma 1. In this result, $\mathcal{X}$ and $\mathcal{Y}$ are separable metric spaces, $g_n: \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ and $g: \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ are Borel functions, and $X$ and $Y$ are random variables with values in $\mathcal{X}$ and $\mathcal{Y}$, respectively.
Lemma 2.
Let $\nu$ be the probability distribution of $Y$. If $X$ is independent of $Y$ and
$$g_n(X, y) \sim \mathcal{N}\bigl(0, \sigma_n^2(y)\bigr), \qquad g(X, y) \sim \mathcal{N}\bigl(0, \sigma^2(y)\bigr), \qquad \sigma^2(y) > 0$$
for $\nu$-almost all $y \in \mathcal{Y}$, then
$$d_{TV}\bigl(g_n(X, Y),\, g(X, Y)\bigr) \le \min\left\{ E\left[\frac{|\sigma_n(Y) - \sigma(Y)|}{\sigma(Y)}\right],\; E\left[\frac{|\sigma_n^2(Y) - \sigma^2(Y)|}{\sigma^2(Y)}\right] \right\}.$$
Proof. 
Since $X$ is independent of $Y$,
$$d_{TV}\bigl(g_n(X, Y), g(X, Y)\bigr) = \sup_{A \in \mathcal{B}} \Bigl| \int \bigl[ P\bigl(g_n(X, y) \in A\bigr) - P\bigl(g(X, y) \in A\bigr) \bigr]\,\nu(dy) \Bigr| \le \int \bigl\| P\bigl(g_n(X, y) \in \cdot\bigr) - P\bigl(g(X, y) \in \cdot\bigr) \bigr\|\,\nu(dy).$$
Thus, since $g_n(X, y)$ and $g(X, y)$ have centered Gaussian laws and $g(X, y)$ has strictly positive variance, for $\nu$-almost all $y \in \mathcal{Y}$, Lemma 1 yields
$$d_{TV}\bigl(g_n(X, Y), g(X, Y)\bigr) \le \int \frac{|\sigma_n(y) - \sigma(y)|}{\sigma(y)}\,\nu(dy) = E\left[\frac{|\sigma_n(Y) - \sigma(Y)|}{\sigma(Y)}\right]$$
and
$$d_{TV}\bigl(g_n(X, Y), g(X, Y)\bigr) \le \int \frac{|\sigma_n^2(y) - \sigma^2(y)|}{\sigma^2(y)}\,\nu(dy) = E\left[\frac{|\sigma_n^2(Y) - \sigma^2(Y)|}{\sigma^2(Y)}\right]. \qquad \text{☐}$$

3. A General Result

As in Section 1, let $X_n$, $V$ and $Z$ be real random variables, with $Z \sim \mathcal{N}(0,1)$ and $V$ independent of $Z$. Since $|V|\,Z \sim VZ$, it can be assumed that $V \ge 0$. We also assume $E|X_n| + E|VZ| < \infty$, so that we can define
$$d_n = d_W(X_n, VZ).$$
In addition, we let
$$X_n' = X_n + d_n^{1/2}\,U,$$
where $U$ is a standard normal random variable independent of $(X_n, V, Z : n \ge 1)$.
We aim to estimate $d_{TV}(X_n, VZ)$. Under some conditions, however, the latter quantity can be replaced by $d_{TV}(X_n, X_n')$.
Lemma 3.
For each $\alpha < 1/2$,
$$\bigl| d_{TV}(X_n, VZ) - d_{TV}(X_n, X_n') \bigr| \le d_n^{1/2} + d_n^{1/2-\alpha} + P\bigl(V < d_n^{\alpha}\bigr).$$
In addition, if $E(1/V) < \infty$, then
$$\bigl| d_{TV}(X_n, VZ) - d_{TV}(X_n, X_n') \bigr| \le d_n^{1/2}\,\bigl(1 + E(1/V)\bigr).$$
Proof. 
The Lemma is trivially true if $d_n = 0$. Hence, it can be assumed that $d_n > 0$. Define $X_n'' = VZ + d_n^{1/2}\,U$ and note that
$$\bigl| d_{TV}(X_n, VZ) - d_{TV}(X_n, X_n') \bigr| \le d_{TV}(X_n', X_n'') + d_{TV}(X_n'', VZ).$$
For each $A \in \mathcal{B}$,
$$P(X_n' \in A) = \int \mathcal{N}(X_n, d_n)(A)\,dP \qquad \text{and} \qquad P(X_n'' \in A) = \int \mathcal{N}(VZ, d_n)(A)\,dP.$$
Hence, Lemma 1 yields
$$d_{TV}(X_n', X_n'') = \sup_{A \in \mathcal{B}} \Bigl| \int \bigl[ \mathcal{N}(X_n, d_n)(A) - \mathcal{N}(VZ, d_n)(A) \bigr]\,dP \Bigr| \le \int \bigl\| \mathcal{N}(X_n, d_n) - \mathcal{N}(VZ, d_n) \bigr\|\,dP \le E|X_n - VZ|\,d_n^{-1/2}.$$
On the other hand, the probability distribution of $X_n''$ can also be written as
$$P(X_n'' \in A) = \int \mathcal{N}(0, V^2 + d_n)(A)\,dP.$$
Arguing as above, Lemma 1 implies again
$$d_{TV}(X_n'', VZ) \le \int \bigl\| \mathcal{N}(0, V^2 + d_n) - \mathcal{N}(0, V^2) \bigr\|\,dP \le E\left[1 - \frac{V}{\sqrt{V^2 + d_n}}\right] \le E\left[\frac{d_n^{1/2}}{\sqrt{V^2 + d_n}}\right] \le \frac{d_n^{1/2}}{\epsilon} + P(V < \epsilon)$$
for each $\epsilon > 0$. Letting $\epsilon = d_n^{\alpha}$ with $\alpha < 1/2$, it follows that
$$\bigl| d_{TV}(X_n, VZ) - d_{TV}(X_n, X_n') \bigr| \le E|X_n - VZ|\,d_n^{-1/2} + d_n^{1/2-\alpha} + P\bigl(V < d_n^{\alpha}\bigr). \tag{3}$$
Inequality (3) holds true for every joint distribution of the pair $(X_n, VZ)$. In particular, inequality (3) holds if such a joint distribution is taken to be one that realizes the Wasserstein distance, namely, one such that $E|X_n - VZ| = d_n$. In this case, one obtains
$$\bigl| d_{TV}(X_n, VZ) - d_{TV}(X_n, X_n') \bigr| \le d_n^{1/2} + d_n^{1/2-\alpha} + P\bigl(V < d_n^{\alpha}\bigr).$$
Finally, if $E(1/V) < \infty$, it suffices to note that
$$d_{TV}(X_n'', VZ) \le E\left[\frac{d_n^{1/2}}{\sqrt{V^2 + d_n}}\right] \le d_n^{1/2}\,E(1/V). \qquad \text{☐}$$
For Lemma 3 to be useful, $d_{TV}(X_n, X_n')$ should be kept under control. This can be achieved under various assumptions. One is to require that $X_n$ admit a Lipschitz density with respect to Lebesgue measure.
Theorem 1.
Let $\phi_n$ be the characteristic function of $X_n$ and
$$l_n = \int |t\,\phi_n(t)|\,dt = 2 \int_0^{\infty} t\,|\phi_n(t)|\,dt.$$
Given $\beta \ge 1$, suppose $\sup_n E|X_n|^{\beta} < \infty$ and $d_n \to 0$. Then, there is a constant $k$, independent of $n$, such that
$$d_{TV}(X_n, X_n') \le k\,\bigl(l_n\,d_n^{1/2}\bigr)^{\beta/(\beta+1)}.$$
In particular,
$$d_{TV}(X_n, VZ) \le d_n^{1/2} + d_n^{1/2-\alpha} + P\bigl(V < d_n^{\alpha}\bigr) + k\,\bigl(l_n\,d_n^{1/2}\bigr)^{\beta/(\beta+1)}$$
for each $\alpha < 1/2$, and
$$d_{TV}(X_n, VZ) \le d_n^{1/2}\,\bigl(1 + E(1/V)\bigr) + k\,\bigl(l_n\,d_n^{1/2}\bigr)^{\beta/(\beta+1)} \qquad \text{if } E(1/V) < \infty.$$
It is worth noting that, if $\beta = 1$, the condition $\sup_n E|X_n| < \infty$ follows from $d_n \to 0$. On the other hand, $d_n \to 0$ can be weakened into $X_n \xrightarrow{\text{dist}} VZ$ whenever $\sup_n E|X_n|^{\beta} < \infty$ for some $\beta > 1$; see Section 2.1.
Proof of Theorem 1.
If $l_n = \infty$, the Theorem is trivially true. Thus, it can be assumed that $l_n < \infty$.
Since $\phi_n$ is integrable, the probability distribution of $X_n$ admits a density $f_n$ with respect to Lebesgue measure. In addition,
$$|f_n(x) - f_n(y)| = \frac{1}{2\pi} \Bigl| \int \bigl(e^{-itx} - e^{-ity}\bigr)\,\phi_n(t)\,dt \Bigr| \le \frac{|x - y|}{2\pi} \int |t\,\phi_n(t)|\,dt = \frac{l_n\,|x - y|}{2\pi}.$$
Given $t > 0$, it follows that
$$2\,d_{TV}(X_n, X_n') \le 2 \int \bigl\| P(X_n \in \cdot) - P\bigl(X_n + d_n^{1/2}u \in \cdot\bigr) \bigr\|\,\mathcal{N}(0,1)(du) = \int\!\!\int \bigl| f_n(x) - f_n\bigl(x - d_n^{1/2}u\bigr) \bigr|\,dx\,\mathcal{N}(0,1)(du)$$
$$\le P(|X_n| > t) + P(|X_n'| > t) + \int_{-t}^{t} \int \bigl| f_n(x) - f_n\bigl(x - d_n^{1/2}u\bigr) \bigr|\,\mathcal{N}(0,1)(du)\,dx.$$
Since $\sup_n E|X_n|^{\beta} < \infty$ and $d_n \to 0$, one obtains
$$P(|X_n| > t) + P(|X_n'| > t) \le P(|X_n| > t) + P(|X_n| > t/2) + P\bigl(d_n^{1/2}|U| > t/2\bigr) \le 2\,P(|X_n| > t/2) + \frac{d_n^{\beta/2}\,E|U|^{\beta}}{(t/2)^{\beta}} \le \frac{2\,E|X_n|^{\beta} + d_n^{\beta/2}\,E|U|^{\beta}}{(t/2)^{\beta}} \le k^*\,t^{-\beta}$$
for some constant $k^*$.
for some constant k * . Hence,
2 d T V ( X n , X n ) k * t β + l n d n 1 / 2 2 π t t | u | N ( 0 , 1 ) ( d u ) d x k * t β + l n d n 1 / 2 π t for each t > 0 .
Minimizing over $t$, one finally obtains
$$2\,d_{TV}(X_n, X_n') \le c(\beta)\,(k^*)^{1/(\beta+1)} \left( \frac{l_n\,d_n^{1/2}}{\pi} \right)^{\beta/(\beta+1)},$$
where $c(\beta)$ is a constant that depends on $\beta$ only. This concludes the proof. ☐
Theorem 1 provides upper bounds for $d_{TV}(X_n, VZ)$ in terms of $l_n$ and $d_n$. It is connected to Proposition 4.1 of [4], where $d_{TV}$ is replaced by the Kolmogorov distance.
In particular, Theorem 1 implies that $d_{TV}(X_n, VZ) \to 0$ provided $V > 0$ a.s. and
$$\lim_n d_n^{1/2}\,l_n = \lim_n d_W(X_n, VZ)^{1/2} \int |t\,\phi_n(t)|\,dt = 0.$$
In addition, Theorem 1 allows one to estimate the convergence rate. As an extreme example, if $d_n \to 0$, $E(1/V) < \infty$ and $\sup_n \bigl(l_n + E|X_n|^{\beta}\bigr) < \infty$ for all $\beta \ge 1$, then
$$d_{TV}(X_n, VZ) = O(d_n^{\alpha}) \qquad \text{for every } \alpha < 1/2.$$
We next turn to examples. In each such example, $Z$ is a standard normal random variable independent of all other random elements.
Example 2. (Classical CLT).
Let $V = 1$ and $X_n = (1/\sqrt{n}) \sum_{i=1}^{n} \xi_i$, where $\xi_1, \xi_2, \ldots$ is an i.i.d. sequence of real random variables such that $E(\xi_1) = 0$ and $E(\xi_1^2) = 1$. In this case, $d_n = O(n^{-1/2})$; see Theorem 2.1 of [13]. Suppose now that $E|\xi_1|^{\beta} < \infty$ for all $\beta \ge 1$ and $\xi_1$ has a density $f$ (with respect to Lebesgue measure) such that $\int |f'(x)|\,dx < \infty$. Then, $\sup_n \bigl(l_n + E|X_n|^{\beta}\bigr) < \infty$ for all $\beta \ge 1$, and Theorem 1 yields
$$d_{TV}(X_n, Z) = O(n^{-\alpha}) \qquad \text{for each } \alpha < 1/4.$$
This rate, however, is quite far from optimal. Under the present assumptions on $\xi_1$, in fact, $d_{TV}(X_n, Z) = O(n^{-1/2})$; see Theorem 1 of [14].
We finally prove $\sup_n \bigl(l_n + E|X_n|^{\beta}\bigr) < \infty$. It is well known that $E|\xi_1|^{\beta} < \infty$ for all $\beta$ implies $\sup_n E|X_n|^{\beta} < \infty$ for all $\beta$. Hence, it suffices to prove $\sup_n l_n < \infty$. Let $\phi$ be the characteristic function of $\xi_1$ and $q = \int |f'(x)|\,dx$. An integration by parts yields $|\phi(t)| \le q/|t|$ for each $t \ne 0$. By Lemma 1.4 of [15], one also obtains $|\phi(t)| \le 1 - (1/43)(t/q)^2$ for $|t| < 2q$ (just let $b = 2q$ and $c = 1/2$ in Lemma 1.4 of [15]). Since $\phi_n(t) = \phi(t/\sqrt{n})^n$ for each $t \in \mathbb{R}$,
$$|\phi_n(t)| \le \left( \frac{q\sqrt{n}}{|t|} \right)^n \;\; \text{for } |t| \ge q\sqrt{n} \qquad \text{and} \qquad |\phi_n(t)| \le \left( 1 - \frac{t^2}{43\,q^2\,n} \right)^n \;\; \text{for } |t| < q\sqrt{n}.$$
Using these inequalities, $\sup_n l_n < \infty$ follows from a direct calculation.
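The two characteristic-function inequalities used above are easy to sanity-check in one concrete instance. For the standard normal density, $q = \int |f'| = 2 f(0) = \sqrt{2/\pi}$ and $\phi(t) = e^{-t^2/2}$; a small Python check of this instance only (the grids are ours):

```python
import math

# Standard normal density: q = integral of |f'(x)| dx = 2 f(0) = sqrt(2/pi),
# and the characteristic function is phi(t) = exp(-t^2 / 2).
q = math.sqrt(2.0 / math.pi)

# |phi(t)| <= q/|t| for t != 0 (integration by parts)
ok = all(math.exp(-t * t / 2) <= q / t for t in (0.1 * k for k in range(1, 200)))

# |phi(t)| <= 1 - (1/43) (t/q)^2 for |t| < 2q
ok2 = all(math.exp(-t * t / 2) <= 1 - t * t / (43 * q * q)
          for t in (0.01 * k for k in range(1, 159)))
print(ok, ok2)
```

Both flags come out true; of course, this checks only a single density and is no substitute for the general argument.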
As noted above, the rate provided by Theorem 1 in the classical CLT is not optimal. While not exciting, this fact could be expected. Indeed, Theorem 1 is a general result, applying to arbitrary X n , and should not be requested to give optimal bounds in a very special case (such as the classical CLT).
Example 3. (Exchangeable CLT).
Suppose now that $(\xi_n)$ is an exchangeable sequence of real random variables with $E(\xi_1^2) < \infty$. Define
$$V = \sqrt{E(\xi_1^2 \mid \mathcal{T}) - E(\xi_1 \mid \mathcal{T})^2} \qquad \text{and} \qquad X_n = \frac{\sum_{i=1}^{n} \bigl\{ \xi_i - E(\xi_1 \mid \mathcal{T}) \bigr\}}{\sqrt{n}},$$
where $\mathcal{T}$ is the tail $\sigma$-field of $(\xi_n)$. By de Finetti’s theorem,
$$d_{TV}(X_n, VZ) \le E\bigl\| P(X_n \in \cdot \mid \mathcal{T}) - \mathcal{N}(0, V^2) \bigr\|.$$
Hence, $d_{TV}(X_n, VZ) \to 0$ provided $\bigl\| P(X_n \in \cdot \mid \mathcal{T}) - \mathcal{N}(0, V^2) \bigr\| \xrightarrow{P} 0$. As to Theorem 1, note that $X_n \xrightarrow{\text{dist}} VZ$ (see, e.g., Theorem 3.1 of [16]) and
$$E(X_n^2) = E\bigl[E(X_n^2 \mid \mathcal{T})\bigr] = n\,E\left[ E\left( \Bigl( \frac{\sum_{i=1}^{n} \bigl(\xi_i - E(\xi_1 \mid \mathcal{T})\bigr)}{n} \Bigr)^{\!2} \,\Big|\, \mathcal{T} \right) \right] = n\,E\bigl(V^2/n\bigr) = E(V^2) < \infty.$$
Furthermore, $l_n \le E \int |t|\,\bigl| E\bigl(e^{itX_n} \mid \mathcal{T}\bigr) \bigr|\,dt$. Thus, by Theorem 1, $d_{TV}(X_n, VZ) \to 0$ whenever
$$E(\xi_1^2 \mid \mathcal{T}) > E(\xi_1 \mid \mathcal{T})^2 \;\; \text{a.s.} \qquad \text{and} \qquad \lim_n d_n^{1/2}\,E \int |t|\,\bigl| E\bigl(e^{itX_n} \mid \mathcal{T}\bigr) \bigr|\,dt = 0.$$
Example 4. (Martingale CLT).
Let
$$X_n = \sum_{j=1}^{k_n} \xi_{n,j},$$
where $(\xi_{n,j} : n \ge 1,\, j = 1, \ldots, k_n)$ is an array of real square integrable random variables and $k_n \to \infty$. For each $n \ge 1$, let
$$\mathcal{F}_{n,0} \subset \mathcal{F}_{n,1} \subset \cdots \subset \mathcal{F}_{n,k_n}$$
be sub-$\sigma$-fields of $\mathcal{F}$ with $\mathcal{F}_{n,0} = \{\emptyset, \Omega\}$. A well known version of the CLT (see, e.g., Theorem 3.2 of [17]) states that $X_n \xrightarrow{\text{dist}} VZ$ provided:
(i)
$\xi_{n,j}$ is $\mathcal{F}_{n,j}$-measurable and $E\bigl(\xi_{n,j} \mid \mathcal{F}_{n,j-1}\bigr) = 0$ a.s.;
(ii)
$\sum_j \xi_{n,j}^2 \xrightarrow{P} V^2$, $\max_j |\xi_{n,j}| \xrightarrow{P} 0$, $\sup_n E\bigl(\max_j \xi_{n,j}^2\bigr) < \infty$;
(iii)
$\mathcal{F}_{n,j} \subset \mathcal{F}_{n+1,j}$.
Condition (iii) can be replaced by:
(iv)
$V$ is measurable with respect to the $\sigma$-field generated by $\mathcal{N} \cup \bigcup_{n,j} \mathcal{F}_{n,j}$, where $\mathcal{N} = \{A \in \mathcal{F} : P(A) = 0\}$.
Note also that, under (i), one obtains $E(X_n^2) = \sum_{j=1}^{k_n} E(\xi_{n,j}^2)$.
Now, in addition to (i)–(ii)–(iii) or (i)–(ii)–(iv), suppose $\sup_n \sum_{j=1}^{k_n} E(\xi_{n,j}^2) < \infty$. Then, Theorem 1 (applied with $\beta = 2$) implies $d_{TV}(X_n, VZ) \to 0$ whenever $V > 0$ a.s. and $\lim_n d_n^{1/2}\,l_n = 0$. Moreover,
$$d_{TV}(X_n, VZ) = O\Bigl( \bigl(l_n\,d_n^{1/2}\bigr)^{2/3} \Bigr) \qquad \text{if } E(1/V) + l_n < \infty \text{ for each } n.$$
Our last example is connected to the second order Wiener chaos. We first note a simple fact as a lemma.
Lemma 4.
Let $\xi = (\xi_1, \ldots, \xi_k)$ be a centered Gaussian random vector. Define
$$Y = \sum_{j=1}^{k} a_j \bigl( \xi_j^2 - \gamma_j^2 \bigr),$$
where $a_j \in \mathbb{R}$ and $\gamma = (\gamma_1, \ldots, \gamma_k)$ is an independent copy of $\xi$. Then, the characteristic function $\psi$ of $Y$ can be written as
$$\psi(t) = E\bigl[e^{-t^2 S}\bigr], \quad t \in \mathbb{R}, \qquad \text{where} \quad S = \sum_{i,j} a_i\,a_j\,E[\xi_i\,\xi_j]\,(\xi_i + \gamma_i)(\xi_j + \gamma_j).$$
Proof. 
Let $\sigma_{i,j} = E[\xi_i\,\xi_j]$, $\xi^* = (\xi + \gamma)/\sqrt{2}$ and $\gamma^* = (\xi - \gamma)/\sqrt{2}$. Then,
$$(\xi^*, \gamma^*) \sim (\xi, \gamma), \qquad Y = 2 \sum_j a_j\,\xi_j^*\,\gamma_j^*, \qquad S = 2 \sum_{i,j} a_i\,a_j\,\sigma_{i,j}\,\xi_i^*\,\xi_j^*.$$
Therefore,
$$\psi(t) = E\bigl[ E\bigl( e^{itY} \mid \xi^* \bigr) \bigr] = E\Bigl[ e^{-2t^2 \sum_{i,j} a_i a_j \sigma_{i,j} \xi_i^* \xi_j^*} \Bigr] = E\bigl[e^{-t^2 S}\bigr]. \qquad \text{☐}$$
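Lemma 4 can be checked by simulation in the simplest case $k = 1$, $a_1 = 1$, $\xi \sim \mathcal{N}(0,1)$, where $Y = \xi^2 - \gamma^2$, $S = (\xi + \gamma)^2$, and both sides of the identity have the closed form $(1 + 4t^2)^{-1/2}$. A Monte Carlo sketch (the sample size, seed, and evaluation point are ours):

```python
import math
import random

random.seed(1)
n = 200_000
t = 0.7

# k = 1, a_1 = 1, xi ~ N(0,1), gamma an independent copy:
# Y = xi^2 - gamma^2 and S = (xi + gamma)^2.
lhs = rhs = 0.0
for _ in range(n):
    xi, gamma = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    lhs += math.cos(t * (xi * xi - gamma * gamma))  # Re E[e^{itY}]; psi is real here
    rhs += math.exp(-t * t * (xi + gamma) ** 2)     # E[e^{-t^2 S}]
lhs /= n
rhs /= n
exact = 1.0 / math.sqrt(1.0 + 4.0 * t * t)  # closed form of both sides
print(lhs, rhs, exact)
```

Both Monte Carlo averages agree with the closed form up to sampling error, as the lemma predicts.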
Example 5. (Squares of Gaussian random variables).
For each $n \ge 1$, let $(\xi_{n,1}, \ldots, \xi_{n,k_n})$ be a centered Gaussian random vector and
$$X_n = \sum_{j=1}^{k_n} a_{n,j} \bigl( \xi_{n,j}^2 - E(\xi_{n,j}^2) \bigr), \qquad \text{where } a_{n,j} \in \mathbb{R}.$$
Take an independent copy $(\gamma_{n,1}, \ldots, \gamma_{n,k_n})$ of $(\xi_{n,1}, \ldots, \xi_{n,k_n})$ and define
$$Y_n = \sum_{j=1}^{k_n} a_{n,j} \bigl( \xi_{n,j}^2 - \gamma_{n,j}^2 \bigr), \qquad S_n = \sum_{i=1}^{k_n} \sum_{j=1}^{k_n} a_{n,i}\,a_{n,j}\,E[\xi_{n,i}\,\xi_{n,j}]\,(\xi_{n,i} + \gamma_{n,i})(\xi_{n,j} + \gamma_{n,j}).$$
Note that $S_n$ is a (random) quadratic form associated with the covariance matrix $\bigl( E[\xi_{n,i}\,\xi_{n,j}] : 1 \le i, j \le k_n \bigr)$. Therefore, $S_n \ge 0$.
Since $|\phi_n|^2$ agrees with the characteristic function of $Y_n$, Lemma 4 yields
$$|\phi_n(t)|^2 = E\bigl[e^{-t^2 S_n}\bigr].$$
Being $S_n \ge 0$, it follows that
$$|\phi_n(t)|^2 = E\Bigl[ e^{-t^2 S_n}\,1_{\{S_n \ge t^{(\epsilon-4)/2}\}} \Bigr] + E\Bigl[ e^{-t^2 S_n}\,1_{\{S_n < t^{(\epsilon-4)/2}\}} \Bigr] \le e^{-t^{\epsilon/2}} + P\bigl( S_n < t^{(\epsilon-4)/2} \bigr)$$
$$= e^{-t^{\epsilon/2}} + P\bigl( S_n^{-(2+\epsilon)} > t^{(4-\epsilon)(2+\epsilon)/2} \bigr) \le e^{-t^{\epsilon/2}} + E\bigl[ S_n^{-(2+\epsilon)} \bigr]\,t^{(\epsilon-4)(2+\epsilon)/2} \qquad \text{for all } \epsilon > 0 \text{ and } t > 0.$$
Hence,
$$\frac{l_n}{2} = \int_0^{\infty} t\,|\phi_n(t)|\,dt \le 1 + \int_1^{\infty} t\,|\phi_n(t)|\,dt \le 1 + \int_1^{\infty} t\,e^{-t^{\epsilon/2}/2}\,dt + E\bigl[ S_n^{-(2+\epsilon)} \bigr]^{1/2} \int_1^{\infty} t^{1 + (\epsilon-4)(2+\epsilon)/4}\,dt,$$
so that $\sup_n l_n < \infty$ whenever $\sup_n E\bigl[ S_n^{-(2+\epsilon)} \bigr] < \infty$ for some $\epsilon \in (0, 2)$.
To summarize, applying Theorem 1 with $\beta = 2$, one obtains
$$d_{TV}(X_n, VZ) = O\bigl(d_n^{1/3}\bigr) \tag{4}$$
provided $X_n \xrightarrow{\text{dist}} VZ$, for some $V$ independent of $Z$, and
$$E(1/V) + \sup_n \Bigl( E\bigl[ S_n^{-(2+\epsilon)} \bigr] + E(X_n^2) \Bigr) < \infty \qquad \text{for some } \epsilon \in (0, 2).$$
The bound (4) requires strong conditions, which may not be easily verifiable in real problems. However, the above result is sometimes helpful, possibly in connection with the martingale CLT of Example 4. For instance, the conditions for (4) are not hard to check when $\xi_{n,1}, \ldots, \xi_{n,k_n}$ are independent for fixed $n$. We also note that, to our knowledge, the bound (4) improves on the existing ones. In fact, letting $p = 2$ in Theorem 3.1 of [6] (see also Remark 3.5 of [7]), one only obtains $d_{TV}(X_n, VZ) = O\bigl(d_n^{1/5}\bigr)$.

4. Weighted Quadratic Variations

Theorem 1 works nicely if one is able to estimate $d_n$ and $l_n$, which is usually quite hard. Thus, it is convenient to have some further tools. In this section, $d_{TV}(X_n, VZ)$ is upper bounded via Lemma 2. We focus on a special case, but the underlying ideas are easily adapted to more general situations. The results in [8], for instance, arise from a version of such ideas.
For any function $x: [0,1] \to \mathbb{R}$, denote
$$\Delta x(k/n) = x\bigl((k+1)/n\bigr) - x(k/n), \qquad \text{where } n \ge 1 \text{ and } k = 0, 1, \ldots, n-1.$$
Let $q \ge 2$ be an integer, $f: \mathbb{R} \to \mathbb{R}$ a Borel function, and $J = \{J_t : 0 \le t \le 1\}$ a real process. The weighted $q$-variation of $J$ on $\{0, 1/n, 2/n, \ldots, 1\}$ is
$$J_n^* = \sum_{k=0}^{n-1} f(J_{k/n})\,\bigl( \Delta J_{k/n} \bigr)^q.$$
As noted in [5], determining the asymptotic behavior of $J_n^*$ is useful for establishing the rate of convergence of some approximation schemes of stochastic differential equations driven by $J$. Moreover, the study of $J_n^*$ is also motivated by parameter estimation and by the analysis of the single-path behaviour of $J$. See [5,9,18,19,20,21] and references therein.
More generally, given an $\mathbb{R}^2$-valued process
$$(I, J) = \{(I_t, J_t) : 0 \le t \le 1\},$$
one could define
$$(I, J)_n^* = \sum_{k=0}^{n-1} f(I_{k/n})\,\bigl( \Delta J_{k/n} \bigr)^q.$$
The weight $f(I_{k/n})$ of $(\Delta J_{k/n})^q$ now depends on $I$. Thus, in a sense, $(I, J)_n^*$ can be regarded as the weighted $q$-variation of $J$ relative to $I$.
Here, we focus on
$$X_n = n^{1/2} \sum_{k=0}^{n-1} f\bigl( B_{k/n} - B'_{k/n} \bigr)\,\bigl\{ (\Delta B_{k/n})^2 - (\Delta B'_{k/n})^2 \bigr\}, \tag{5}$$
where $B$ and $B'$ are independent standard Brownian motions. Note that, letting $q = 2$ and $I = B - B'$, one obtains
$$X_n = n^{1/2}\,\bigl\{ (I, B)_n^* - (I, B')_n^* \bigr\}.$$
Thus, $n^{-1/2}\,X_n$ can be seen as the difference between the quadratic variations of $B$ and $B'$ relative to $I = B - B'$.
We aim to show that, under mild assumptions on $f$, the probability distributions of $X_n$ converge in total variation to a certain mixture of Gaussian laws. We also estimate the rate of convergence. The smoothness assumptions on $f$ are weaker than those usually required in similar problems; see, e.g., [5].
Theorem 2.
Let $B$ and $B'$ be independent standard Brownian motions and $Z$ a standard normal random variable independent of $(B, B')$. Define $X_n$ by Equation (5) and
$$V = 2 \left( \int_0^1 f\bigl( \sqrt{2}\,B_t \bigr)^2\,dt \right)^{1/2}.$$
Suppose $E(1/V^2) < \infty$ and
$$|f(x) - f(y)| \le c\,|x - y|\,e^{|x| + |y|}$$
for some constant $c$ and all $x, y \in \mathbb{R}$. Then, there is a constant $k$ independent of $n$ satisfying
$$d_{TV}(X_n, VZ) \le k\,n^{-1/4}.$$
Moreover, if $\inf |f| > 0$, one also obtains $d_{TV}(X_n, VZ) \le k\,n^{-1/2}$.
To better understand the spirit of Theorem 2, think of the trivial case $f = 1$. Then, the asymptotic behavior of $X_n = n^{1/2} \sum_{k=0}^{n-1} \bigl\{ (\Delta B_{k/n})^2 - (\Delta B'_{k/n})^2 \bigr\}$ can be deduced from classical results. In fact, $d_{TV}(X_n, 2Z) = O(n^{-1/2})$ and this rate is optimal; see Theorem 1 of [14]. On the other hand, since $V = 2$, the same conclusion can be drawn from Theorem 2.
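For $f = 1$, Equation (5) is straightforward to simulate, and the sample variance of $X_n$ should approach $V^2 = 4$. A Monte Carlo sketch (the grid size, number of replications, and seed are ours):

```python
import random

random.seed(2)
n, m = 64, 10_000  # time grid size and Monte Carlo replications
samples = []
for _ in range(m):
    s = 0.0
    for _ in range(n):
        db = random.gauss(0.0, (1.0 / n) ** 0.5)   # increment of B over [k/n, (k+1)/n]
        dbp = random.gauss(0.0, (1.0 / n) ** 0.5)  # increment of B'
        s += db * db - dbp * dbp                   # weight f = 1
    samples.append(n ** 0.5 * s)
mean = sum(samples) / m
var = sum((x - mean) ** 2 for x in samples) / m
print(mean, var)  # mean near 0, variance near V^2 = 4
```

In fact, for $f = 1$ the variance of $X_n$ equals $4$ exactly for every $n$, so the simulation only exhibits sampling noise around the limit value.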
We finally prove Theorem 2.
Proof of Theorem 2.
First note that $T = (B + B')/\sqrt{2}$ and $Y = (B - B')/\sqrt{2}$ are independent standard Brownian motions and
$$X_n = 2\,n^{1/2} \sum_{k=0}^{n-1} f\bigl( \sqrt{2}\,Y_{k/n} \bigr)\,\Delta T_{k/n}\,\Delta Y_{k/n}.$$
Note also that
$$VZ \sim 2\,T_1 \left( \int_0^1 f\bigl( \sqrt{2}\,Y_t \bigr)^2\,dt \right)^{1/2}.$$
Thus, in order to apply Lemma 2, it suffices to let $\mathcal{X} = \mathcal{Y} = C[0,1]$, $X = T$, and
$$g_n(x, y) = 2\,n^{1/2} \sum_{k=0}^{n-1} f\bigl( \sqrt{2}\,y(k/n) \bigr)\,\Delta x(k/n)\,\Delta y(k/n), \qquad g(x, y) = 2\,x(1) \left( \int_0^1 f\bigl( \sqrt{2}\,y(t) \bigr)^2\,dt \right)^{1/2}.$$
For fixed $y \in \mathcal{Y}$, $g_n(T, y)$ and $g(T, y)$ are centered Gaussian random variables. Since $E\bigl[ (\Delta T_{k/n})^2 \bigr] = 1/n$,
$$\sigma_n^2(y) = E\bigl[ g_n(T, y)^2 \bigr] = 4 \sum_{k=0}^{n-1} f\bigl( \sqrt{2}\,y(k/n) \bigr)^2 \bigl( \Delta y(k/n) \bigr)^2 \qquad \text{and} \qquad \sigma^2(y) = E\bigl[ g(T, y)^2 \bigr] = 4 \int_0^1 f\bigl( \sqrt{2}\,y(t) \bigr)^2\,dt.$$
On noting that $\sigma^2(Y) \sim V^2$, one also obtains
$$\sigma^2(Y) > 0 \;\; \text{a.s.} \qquad \text{and} \qquad E\bigl[ 1/\sigma^2(Y) \bigr] = E(1/V^2) < \infty.$$
Next, define
$$a_n = \tfrac{1}{4}\,E\bigl| \sigma_n^2(Y) - \sigma^2(Y) \bigr| = E\left| \sum_{k=0}^{n-1} f\bigl( \sqrt{2}\,Y_{k/n} \bigr)^2 \bigl( \Delta Y_{k/n} \bigr)^2 - \int_0^1 f\bigl( \sqrt{2}\,Y_t \bigr)^2\,dt \right|.$$
By Lemma 2 and the Cauchy–Schwarz inequality,
$$d_{TV}(X_n, VZ)^2 = d_{TV}\bigl( g_n(T, Y), g(T, Y) \bigr)^2 \le E\left[ \frac{|\sigma_n(Y) - \sigma(Y)|}{\sigma(Y)} \right]^2 \le E\bigl[ 1/\sigma^2(Y) \bigr]\,E\bigl[ \bigl( \sigma_n(Y) - \sigma(Y) \bigr)^2 \bigr] \le E(1/V^2)\,E\bigl| \sigma_n^2(Y) - \sigma^2(Y) \bigr| = 4\,E(1/V^2)\,a_n.$$
If $\inf |f| > 0$, since $\sigma^2(Y) \ge 4 \inf f^2$, Lemma 2 implies again
$$d_{TV}(X_n, VZ) \le E\left[ \frac{|\sigma_n^2(Y) - \sigma^2(Y)|}{\sigma^2(Y)} \right] \le \frac{E\bigl| \sigma_n^2(Y) - \sigma^2(Y) \bigr|}{4 \inf f^2} = \frac{a_n}{\inf f^2}.$$
Thus, to conclude the proof, it suffices to show that $a_n = O(n^{-1/2})$.
Define $c^* = \max\bigl( c, |f(0)| \bigr)$ and note that
$$|f(s)| \le c^*\,e^{2|s|} \qquad \text{and} \qquad |f(s)^2 - f(t)^2| \le 2\,c\,c^*\,|s - t|\,e^{3(|s| + |t|)} \qquad \text{for all } s, t \in \mathbb{R}.$$
Define also
$$a_n^{(1)} = E\left| \sum_{k=0}^{n-1} f\bigl( \sqrt{2}\,Y_{k/n} \bigr)^2 \bigl\{ (\Delta Y_{k/n})^2 - 1/n \bigr\} \right| \qquad \text{and} \qquad a_n^{(2)} = E\left| \frac{1}{n} \sum_{k=0}^{n-1} f\bigl( \sqrt{2}\,Y_{k/n} \bigr)^2 - \int_0^1 f\bigl( \sqrt{2}\,Y_t \bigr)^2\,dt \right|.$$
Since $a_n \le a_n^{(1)} + a_n^{(2)}$, it suffices to see that $a_n^{(i)} = O(n^{-1/2})$ for each $i$. Since $Y$ has independent increments and $E\bigl[ \bigl( (\Delta Y_{k/n})^2 - 1/n \bigr)^2 \bigr] = 2/n^2$,
$$\bigl( a_n^{(1)} \bigr)^2 \le E\left[ \Bigl( \sum_{k=0}^{n-1} f\bigl( \sqrt{2}\,Y_{k/n} \bigr)^2 \bigl\{ (\Delta Y_{k/n})^2 - 1/n \bigr\} \Bigr)^{\!2} \right] = \sum_{k=0}^{n-1} E\Bigl[ f\bigl( \sqrt{2}\,Y_{k/n} \bigr)^4 \bigl( (\Delta Y_{k/n})^2 - 1/n \bigr)^2 \Bigr]$$
$$= \sum_{k=0}^{n-1} E\Bigl[ f\bigl( \sqrt{2}\,Y_{k/n} \bigr)^4 \Bigr]\,E\Bigl[ \bigl( (\Delta Y_{k/n})^2 - 1/n \bigr)^2 \Bigr] = \frac{2}{n^2} \sum_{k=0}^{n-1} E\Bigl[ f\bigl( \sqrt{2}\,Y_{k/n} \bigr)^4 \Bigr] \le \frac{2\,(c^*)^4\,E\bigl[ e^{8\sqrt{2}\,M} \bigr]}{n}, \qquad \text{where } M = \sup_{0 \le t \le 1} |Y_t|.$$
Similarly,
$$a_n^{(2)} = E\left| \frac{1}{n} \sum_{k=0}^{n-1} f\bigl( \sqrt{2}\,Y_{k/n} \bigr)^2 - \int_0^1 f\bigl( \sqrt{2}\,Y_t \bigr)^2\,dt \right| \le \sum_{k=0}^{n-1} \int_{k/n}^{(k+1)/n} E\Bigl| f\bigl( \sqrt{2}\,Y_{k/n} \bigr)^2 - f\bigl( \sqrt{2}\,Y_t \bigr)^2 \Bigr|\,dt$$
$$\le 2\sqrt{2}\,c\,c^* \sum_{k=0}^{n-1} \int_{k/n}^{(k+1)/n} E\Bigl[ |Y_{k/n} - Y_t|\,e^{6\sqrt{2}\,M} \Bigr]\,dt \le 2\sqrt{2}\,c\,c^*\,\sqrt{E\bigl[ e^{12\sqrt{2}\,M} \bigr]} \sum_{k=0}^{n-1} \int_{k/n}^{(k+1)/n} \sqrt{E\bigl[ (Y_{k/n} - Y_t)^2 \bigr]}\,dt \le 2\sqrt{2}\,c\,c^*\,\sqrt{E\bigl[ e^{12\sqrt{2}\,M} \bigr]}\,\frac{1}{\sqrt{n}}.$$
Therefore, $a_n^{(i)} = O(n^{-1/2})$ for each $i$, and this concludes the proof. ☐

Author Contributions

Each author contributed in exactly the same way to each part of this paper.

Funding

This research was supported by the Italian Ministry of Education, University and Research (MIUR): Dipartimenti di Eccellenza Program (2018-2022) - Dept. of Mathematics “F. Casorati”, University of Pavia.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Azmoodeh, E.; Gasbarra, D. New moments criteria for convergence towards normal product/tetilla laws. arXiv 2017.
2. Eichelsbacher, P.; Thäle, C. Malliavin-Stein method for Variance-Gamma approximation on Wiener space. Electron. J. Probab. 2015, 20, 1–28.
3. Gaunt, R.E. On Stein’s method for products of normal random variables and zero bias couplings. Bernoulli 2017, 23, 3311–3345.
4. Gaunt, R.E. Wasserstein and Kolmogorov error bounds for variance-gamma approximation via Stein’s method I. arXiv 2017.
5. Nourdin, I.; Nualart, D.; Tudor, C.A. Central and non-central limit theorems for weighted power variations of fractional Brownian motion. Ann. I.H.P. 2010, 46, 1055–1079.
6. Nourdin, I.; Poly, G. Convergence in total variation on Wiener chaos. Stoch. Proc. Appl. 2013, 123, 651–674.
7. Nourdin, I.; Nualart, D.; Peccati, G. Quantitative stable limit theorems on the Wiener space. Ann. Probab. 2016, 44, 1–41.
8. Pratelli, L.; Rigo, P. Total Variation Bounds for Gaussian Functionals. 2018. Submitted. Available online: http://www-dimat.unipv.it/rigo/frac.pdf (accessed on 10 April 2018).
9. Nourdin, I.; Peccati, G. Weighted power variations of iterated Brownian motion. Electron. J. Probab. 2008, 13, 1229–1256.
10. Peccati, G.; Yor, M. Four limit theorems for quadratic functionals of Brownian motion and Brownian bridge. In Asymptotic Methods in Stochastics, AMS, Fields Institute Communication Series; Amer. Math. Soc.: Providence, RI, USA, 2004; pp. 75–87.
11. Dudley, R.M. Real Analysis and Probability; Cambridge University Press: Cambridge, UK, 2004.
12. Nourdin, I.; Peccati, G. Normal Approximations with Malliavin Calculus: From Stein’s Method to Universality; Cambridge University Press: Cambridge, UK, 2012.
13. Goldstein, L. L1 bounds in normal approximation. Ann. Probab. 2007, 35, 1888–1930.
14. Sirazhdinov, S.Kh.; Mamatov, M. On convergence in the mean for densities. Theory Probab. Appl. 1962, 7, 424–428.
15. Petrov, V.V. Limit Theorems of Probability Theory: Sequences of Independent Random Variables; Clarendon Press: Oxford, UK, 1995.
16. Berti, P.; Pratelli, L.; Rigo, P. Limit theorems for a class of identically distributed random variables. Ann. Probab. 2004, 32, 2029–2052.
17. Hall, P.; Heyde, C.C. Martingale Limit Theory and Its Applications; Academic Press: New York, NY, USA, 1980.
18. Barndorff-Nielsen, O.E.; Graversen, S.E.; Shephard, N. Power variation and stochastic volatility: A review and some new results. J. Appl. Probab. 2004, 44, 133–143.
19. Gradinaru, M.; Nourdin, I. Milstein’s type schemes for fractional SDEs. Ann. I.H.P. 2009, 45, 1058–1098.
20. Neuenkirch, A.; Nourdin, I. Exact rate of convergence of some approximation schemes associated to SDEs driven by a fractional Brownian motion. J. Theor. Probab. 2007, 20, 871–899.
21. Nourdin, I. A simple theory for the study of SDEs driven by a fractional Brownian motion in dimension one. In Séminaire de Probabilités XLI; Springer: Berlin, Germany, 2008; pp. 181–197.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).