
Convergence in Total Variation of Random Sums

Luca Pratelli 1 and Pietro Rigo 2,*
1 Accademia Navale, viale Italia 72, 57100 Livorno, Italy
2 Dipartimento di Scienze Statistiche “P. Fortunati”, Università di Bologna, via delle Belle Arti 41, 40126 Bologna, Italy
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(2), 194; https://doi.org/10.3390/math9020194
Submission received: 21 December 2020 / Revised: 12 January 2021 / Accepted: 15 January 2021 / Published: 19 January 2021
(This article belongs to the Special Issue Stochastic Processes in Neuronal Modeling)

Abstract

Let $(X_n)$ be a sequence of real random variables, $(T_n)$ a sequence of random indices, and $(\tau_n)$ a sequence of constants such that $\tau_n \to \infty$. The asymptotic behavior of $L_n = (1/\tau_n)\sum_{i=1}^{T_n} X_i$, as $n \to \infty$, is investigated when $(X_n)$ is exchangeable and independent of $(T_n)$. We give conditions for $M_n = \sqrt{\tau_n}\,(L_n - L) \to M$ in distribution, where $L$ and $M$ are suitable random variables. Moreover, when $(X_n)$ is i.i.d., we find constants $a_n$ and $b_n$ such that $\sup_{A \in \mathcal{B}(\mathbb{R})} |P(L_n \in A) - P(L \in A)| \le a_n$ and $\sup_{A \in \mathcal{B}(\mathbb{R})} |P(M_n \in A) - P(M \in A)| \le b_n$ for every $n$. In particular, $L_n \to L$ or $M_n \to M$ in total variation distance provided $a_n \to 0$ or $b_n \to 0$, as it happens in some situations.

1. Introduction

All random elements appearing in this paper are defined on the same probability space, say $(\Omega, \mathcal{A}, P)$.
A random sum is a quantity such as $\sum_{i=1}^{T_n} X_i$, where $(X_n : n \ge 1)$ is a sequence of real random variables and $(T_n : n \ge 1)$ a sequence of $\mathbb{N}$-valued random indices. In the sequel, in addition to $(X_n)$ and $(T_n)$, we fix a sequence $(\tau_n : n \ge 1)$ of positive constants such that $\tau_n \to \infty$ and we let
$$L_n = \frac{\sum_{i=1}^{T_n} X_i}{\tau_n}.$$
Random sums find applications in a number of frameworks, including statistical inference, risk theory and insurance, reliability theory, economics, finance, and the forecasting of market changes. Accordingly, the asymptotic behavior of $L_n$, as $n \to \infty$, is a classical topic in probability theory. The related literature is huge and we do not try to summarize it here. We just mention a general textbook [1] and some useful recent references [2,3,4,5,6,7,8,9,10].
In this paper, the asymptotic behavior of $L_n$ is investigated in the (important) special case where $(X_n)$ is exchangeable and independent of $(T_n)$. More precisely, we assume that:
(i)
$(X_n)$ is exchangeable;
(ii)
$(X_n)$ is independent of $(T_n)$;
(iii)
$T_n/\tau_n \overset{P}{\to} V$ for some random variable $V > 0$.
Under such conditions, we prove a weak law of large numbers (WLLN), a central limit theorem (CLT), and we investigate the rate of convergence with respect to the total variation distance.
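Before stating the results, a concrete picture may help. The following minimal simulation sketch (ours, not part of the paper; every distributional choice is an illustrative assumption) realizes conditions (i)–(iii): $(X_n)$ is made exchangeable by mixing i.i.d. Gaussians over a shared random mean $\theta$, and $T_n$ is built from an independent $V$ so that $T_n/\tau_n \to V$.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_Ln(tau_n, n_sims=10_000):
    """Draw n_sims copies of L_n = (1/tau_n) * sum_{i<=T_n} X_i under (i)-(iii).

    Illustrative assumptions (not from the paper):
    - exchangeable (X_n): X_i = theta + eps_i with a shared theta ~ N(0,1),
      so E(X_1 | tail sigma-field) = theta;
    - V ~ Uniform(1, 2), independent of (X_n);
    - T_n = floor(tau_n * V) + 1, hence T_n / tau_n -> V.
    """
    theta = rng.normal(0.0, 1.0, size=n_sims)      # de Finetti mixing variable
    V = rng.uniform(1.0, 2.0, size=n_sims)
    T = np.floor(tau_n * V).astype(int) + 1
    # Given (theta, T), the partial sum is exactly N(T*theta, T):
    S = T * theta + rng.normal(0.0, 1.0, size=n_sims) * np.sqrt(T)
    return S / tau_n, V * theta                    # (L_n, L) with L = V*E(X_1|T)

Ln, L = sample_Ln(tau_n=1000)
print(np.mean(np.abs(Ln - L)))                     # small: L_n concentrates near L
```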
Suppose in fact $E|X_1| < \infty$ and conditions (i)–(iii) hold. Define
$$L = V\,E(X_1 \mid \mathcal{T}) \quad\text{and}\quad M_n = \sqrt{\tau_n}\,(L_n - L),$$
where $V$ is the random variable involved in condition (iii) and $\mathcal{T}$ is the tail $\sigma$-field of $(X_n)$. Then, it is not hard to show that $L_n \overset{P}{\to} L$. To obtain a CLT, instead, is not straightforward. In Section 3, we prove that $M_n \to M$ in distribution, where $M$ is a suitable random variable, provided $E(X_1^2) < \infty$ and $\sqrt{\tau_n}\,\bigl(\frac{T_n}{\tau_n} - V\bigr)$ converges stably. Finally, in Section 4, assuming $(X_n)$ i.i.d. and some additional conditions, we find constants $a_n$ and $b_n$ such that
$$\sup_{A \in \mathcal{B}(\mathbb{R})} |P(L_n \in A) - P(L \in A)| \le a_n \quad\text{and}\quad \sup_{A \in \mathcal{B}(\mathbb{R})} |P(M_n \in A) - P(M \in A)| \le b_n \quad\text{for every } n \ge 1.$$
In particular, $L_n \to L$ or $M_n \to M$ in total variation distance provided $a_n \to 0$ or $b_n \to 0$, as it happens in some situations.
A last note is that, to our knowledge, random sums have rarely been investigated when $(X_n)$ is exchangeable. Similarly, convergence of $L_n$ or $M_n$ in total variation distance is usually not taken into account. This paper contributes to filling this gap.

2. Preliminaries

In the sequel, the probability distribution of any random element $U$ is denoted by $\mathcal{L}(U)$. If $S$ is a topological space, $\mathcal{B}(S)$ is the Borel $\sigma$-field on $S$ and $C_b(S)$ the space of real bounded continuous functions on $S$. The total variation distance between two probability measures on $\mathcal{B}(S)$, say $\mu$ and $\nu$, is
$$d_{TV}(\mu, \nu) = \sup_{A \in \mathcal{B}(S)} |\mu(A) - \nu(A)|.$$
With a slight abuse of notation, if $X$ and $Y$ are $S$-valued random variables, we write $d_{TV}(X, Y)$ instead of $d_{TV}\bigl(\mathcal{L}(X), \mathcal{L}(Y)\bigr)$, namely
$$d_{TV}(X, Y) = \sup_{A \in \mathcal{B}(S)} |P(X \in A) - P(Y \in A)|.$$
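For the numerical illustrations scattered below (ours, not the authors'), it is handy to approximate $d_{TV}$. When both laws have densities, $d_{TV} = \frac{1}{2}\int |f - g|$; from samples, a histogram version gives a rough lower bound. A minimal sketch:

```python
import numpy as np

def tv_from_densities(f_vals, g_vals, dx):
    """d_TV = (1/2) * integral |f - g|, approximated on a uniform grid."""
    return 0.5 * np.sum(np.abs(f_vals - g_vals)) * dx

def tv_from_samples(x, y, bins=200):
    """Histogram estimate of d_TV between two samples.

    This is the TV distance of the binned laws, hence a lower bound of the
    true d_TV, and it is noisy; use it only as a rough diagnostic.
    """
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    p, edges = np.histogram(x, bins=bins, range=(lo, hi))
    q, _ = np.histogram(y, bins=edges)
    return 0.5 * np.sum(np.abs(p / len(x) - q / len(y)))
```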
If $X$ is a real random variable, we say that $\mathcal{L}(X)$ is absolutely continuous to mean that $\mathcal{L}(X)$ is absolutely continuous with respect to Lebesgue measure. The following technical fact is useful in Section 4.
Lemma 1.
Let $X$ be a strictly positive random variable. Then,
$$\lim_n d_{TV}\bigl(X + q_n\sqrt{X},\, X\bigr) = 0$$
provided the $q_n$ are constants such that $q_n \to 0$ and $\mathcal{L}(X)$ is absolutely continuous.
Proof. 
Let $f$ be a density of $X$. Since $\lim_n \int |f_n(x) - f(x)|\,dx = 0$ for some sequence $(f_n)$ of continuous densities, it can be assumed that $f$ is continuous. Furthermore, since $X > 0$, for each $\epsilon > 0$ there is $b > 0$ such that $P(X < b) < \epsilon$. For such a $b$, one obtains
$$d_{TV}\bigl(X + q_n\sqrt{X},\, X\bigr) \le \epsilon + \sup_{A \in \mathcal{B}(\mathbb{R})} \bigl|P\bigl(X + q_n\sqrt{X} \in A \mid X \ge b\bigr) - P\bigl(X \in A \mid X \ge b\bigr)\bigr|.$$
Hence, it can also be assumed that $X \ge b$ a.s. for some $b > 0$.
Let $g_n$ be a density of $X + q_n\sqrt{X}$. Since
$$d_{TV}\bigl(X + q_n\sqrt{X},\, X\bigr) = \int \bigl(f(x) - g_n(x)\bigr)^+\,dx = \int_b^\infty \bigl(f(x) - g_n(x)\bigr)^+\,dx,$$
it suffices to show that $f(x) = \lim_n g_n(x)$ for each $x > b$. To prove the latter fact, define $\phi_n(x) = x + q_n\sqrt{x}$. For large $n$, one obtains $4 q_n^2 < b$. In this case, $\phi_n' > 0$ on $(b, \infty)$ and $g_n$ can be written as
$$g_n(x) = f\bigl(\phi_n^{-1}(x)\bigr)\,\frac{2\sqrt{\phi_n^{-1}(x)}}{q_n + 2\sqrt{\phi_n^{-1}(x)}}.$$
Therefore, $f(x) = \lim_n g_n(x)$ follows from the continuity of $f$ and
$$\phi_n^{-1}(x) = x + \frac{q_n^2}{2} - \frac{q_n}{2}\sqrt{q_n^2 + 4x} \longrightarrow x.$$
 ☐
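As a numerical sanity check of Lemma 1 (our illustration; the choice $X \sim \mathrm{Gamma}(3,1)$ is an arbitrary absolutely continuous positive law), the explicit density $g_n$ from the proof can be evaluated on a grid and compared with $f$:

```python
import numpy as np
from scipy.stats import gamma

def g_q(x, q, f):
    """Density of X + q*sqrt(X), via the proof's formula:
    g(x) = f(y) * 2*sqrt(y) / (q + 2*sqrt(y)),  y = phi^{-1}(x)."""
    y = x + q**2 / 2 - (q / 2) * np.sqrt(q**2 + 4 * x)
    y = np.maximum(y, 0.0)                       # guard against tiny negatives
    return f(y) * 2 * np.sqrt(y) / (q + 2 * np.sqrt(y))

f = gamma(3).pdf
xs = np.linspace(1e-6, 30, 200_001)
dx = xs[1] - xs[0]
for q in [0.5, 0.1, 0.02]:
    tv = 0.5 * np.sum(np.abs(f(xs) - g_q(xs, q, f))) * dx
    print(q, round(tv, 4))                       # decreases toward 0 as q -> 0
```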

Stable Convergence

Stable convergence, introduced by Rényi in [11], is a strong form of convergence in distribution. It actually occurs in a number of frameworks, including the classical CLT, and thus it quickly became popular; see, e.g., [12] and references therein. Here, we just recall the basic definition.
Let $S$ be a metric space, $(Y_n)$ a sequence of $S$-valued random variables, and $K$ a kernel (or a random probability measure) on $S$. The latter is a map $K$ on $\Omega$ such that $K(\omega)$ is a probability measure on $\mathcal{B}(S)$, for each $\omega \in \Omega$, and $\omega \mapsto K(\omega)(B)$ is $\mathcal{A}$-measurable for each $B \in \mathcal{B}(S)$. Say that $Y_n$ converges stably to $K$ if
$$\lim_n E\bigl[f(Y_n) \mid H\bigr] = E\bigl[K(\cdot)(f) \mid H\bigr] \tag{1}$$
for all $f \in C_b(S)$ and $H \in \mathcal{A}$ with $P(H) > 0$, where $K(\cdot)(f) = \int f(x)\,K(\cdot)(dx)$.
More generally, take a sub-$\sigma$-field $\mathcal{G} \subset \mathcal{A}$ and suppose $K$ is $\mathcal{G}$-measurable (i.e., $\omega \mapsto K(\omega)(B)$ is $\mathcal{G}$-measurable for fixed $B \in \mathcal{B}(S)$). Then, $Y_n$ converges $\mathcal{G}$-stably to $K$ if condition (1) holds whenever $H \in \mathcal{G}$ and $P(H) > 0$.
An important special case is when $K$ is a trivial kernel, in the sense that
$$K(\omega) = \nu \quad\text{for all } \omega \in \Omega,$$
where $\nu$ is a fixed probability measure on $\mathcal{B}(S)$. In this case, $Y_n$ converges $\mathcal{G}$-stably to $\nu$ if and only if
$$\lim_n E\bigl[G\,f(Y_n)\bigr] = E(G)\int f\,d\nu$$
whenever $f \in C_b(S)$ and $G : \Omega \to \mathbb{R}$ is bounded and $\mathcal{G}$-measurable.
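For intuition (our illustration, not part of the paper), the classical CLT provides stable convergence with the trivial kernel $\nu = \mathcal{N}(0,1)$: the factorization $E[G\,f(Y_n)] \to E(G)\int f\,d\nu$ can be checked empirically even for $G$ depending on the summands, e.g. $G = 1_{\{\xi_1 > 0\}}$. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def check_stable(n, n_sims=500_000):
    """Y_n = n^{-1/2} * sum(xi_i), xi_i i.i.d. Rademacher, converges stably
    to nu = N(0,1): E[G f(Y_n)] -> E(G) * integral f dnu, even when G
    depends on the summands."""
    xi1 = rng.choice([-1.0, 1.0], size=n_sims)             # first summand
    rest = 2.0 * rng.binomial(n - 1, 0.5, size=n_sims) - (n - 1)
    Yn = (xi1 + rest) / np.sqrt(n)
    G = (xi1 > 0).astype(float)        # bounded, sigma(xi_1)-measurable
    lhs = np.mean(G * np.sin(Yn))      # f = sin, an element of C_b(R)
    rhs = np.mean(G) * 0.0             # integral sin dnu = E sin(Z) = 0
    return lhs, rhs

print(check_stable(5))      # lhs visibly positive: Y_5 still "remembers" xi_1
print(check_stable(2000))   # lhs near 0 = rhs: the limit forgets xi_1
```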

3. WLLN and CLT for Random Sums

In this section, we still let
$$L_n = \frac{\sum_{i=1}^{T_n} X_i}{\tau_n}, \qquad L = V\,E(X_1 \mid \mathcal{T}) \quad\text{and}\quad M_n = \sqrt{\tau_n}\,(L_n - L),$$
where $V$ is the random variable involved in condition (iii) and
$$\mathcal{T} = \bigcap_n \sigma(X_n, X_{n+1}, \ldots)$$
is the tail $\sigma$-field of $(X_n)$. Recall that $V > 0$. Recall also that, by de Finetti's theorem, $(X_n)$ is exchangeable if and only if it is i.i.d. conditionally on $\mathcal{T}$, namely
$$P\bigl(X_1 \in A_1, \ldots, X_n \in A_n \mid \mathcal{T}\bigr) = \prod_{i=1}^n P\bigl(X_1 \in A_i \mid \mathcal{T}\bigr) \quad\text{a.s.}$$
for all $n \ge 1$ and all $A_1, \ldots, A_n \in \mathcal{B}(\mathbb{R})$.
The following WLLN is straightforward.
Theorem 1.
If $E|X_1| < \infty$ and conditions (i) and (iii) hold, then $L_n \overset{P}{\to} L$.
Proof. 
Recall that, if $Y_n$ and $Y$ are any real random variables, $Y_n \overset{P}{\to} Y$ if and only if, for each subsequence $(n')$, there is a sub-subsequence $(n'') \subset (n')$ such that $Y_{n''} \overset{a.s.}{\to} Y$. Fix a subsequence $(n')$. Then, by (iii),
$$\frac{T_{n''}}{\tau_{n''}} \overset{a.s.}{\longrightarrow} V$$
along a suitable sub-subsequence $(n'') \subset (n')$. Since $V > 0$, then $T_{n''} \overset{a.s.}{\to} \infty$. As a result of the SLLN for exchangeable sequences, $(1/n)\sum_{i=1}^n X_i \overset{a.s.}{\to} E(X_1 \mid \mathcal{T})$. Therefore,
$$L_{n''} = \frac{T_{n''}}{\tau_{n''}}\,\frac{\sum_{i=1}^{T_{n''}} X_i}{T_{n''}} \overset{a.s.}{\longrightarrow} V\,E(X_1 \mid \mathcal{T}) = L.$$
 ☐
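A quick empirical check of Theorem 1 (our sketch; the Pólya-style model below is an illustrative assumption, with $E(X_1 \mid \mathcal{T}) = \theta$) estimates $P(|L_n - L| > \epsilon)$ for growing $\tau_n$:

```python
import numpy as np

rng = np.random.default_rng(2)

def prob_deviation(tau_n, eps=0.05, n_sims=20_000):
    """Estimate P(|L_n - L| > eps) in an illustrative exchangeable model:
    X_i ~ Bernoulli(theta) given a shared theta ~ Uniform(0,1);
    V ~ 1 + Exponential(1) independent of (X_n); T_n = floor(tau_n * V) + 1."""
    theta = rng.uniform(0.0, 1.0, size=n_sims)
    V = 1.0 + rng.exponential(1.0, size=n_sims)
    T = np.floor(tau_n * V).astype(int) + 1
    S = rng.binomial(T, theta)          # sum of T conditionally i.i.d. Bernoullis
    return np.mean(np.abs(S / tau_n - V * theta) > eps)

for tau in [100, 1000, 10_000]:
    print(tau, prob_deviation(tau))     # decreases toward 0, as Theorem 1 predicts
```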
For definiteness, Theorem 1 has been stated in terms of convergence in probability, but other analogous results are available. As an example, suppose that $E|X_1| < \infty$ and conditions (i)–(ii) are satisfied. Then, $L_n \to L$ in distribution provided $T_n/\tau_n \to V$ in distribution. This follows from the Skorohod representation theorem and the current version of Theorem 1. Similarly, $L_n \overset{a.s.}{\to} L$ or $L_n \overset{L_1}{\to} L$ whenever $T_n/\tau_n \overset{a.s.}{\to} V$ or $T_n/\tau_n \overset{L_1}{\to} V$.
We also note that, as implicit in the proof of Theorem 1, condition (iii) implies $T_n \overset{P}{\to} \infty$, or equivalently
$$\lim_n P(T_n \le c) = 0 \quad\text{for every fixed } c > 0.$$
We next turn to the CLT. It is convenient to begin with the i.i.d. case. From now on, $U$ and $Z$ are two real random variables such that
$$Z \sim \mathcal{N}(0,1), \quad U \text{ is independent of } Z, \quad\text{and}\quad (U, Z) \text{ is independent of } (X_n, T_n : n \ge 1). \tag{2}$$
We also let
$$a = E(X_1) \quad\text{and}\quad \sigma^2 = \mathrm{var}(X_1).$$
Theorem 2.
Suppose $(X_n)$ is i.i.d., $E(X_1^2) < \infty$, condition (ii) holds, and
$$\sqrt{\tau_n}\,\Bigl(\frac{T_n}{\tau_n} - V\Bigr) \text{ converges stably to } \mathcal{L}(U). \tag{3}$$
Then,
$$M_n \to \sigma\sqrt{V}\,Z + aU \quad\text{in distribution}.$$
Proof. 
Let
$$W_n = a\sqrt{\tau_n}\,\Bigl(\frac{T_n}{\tau_n} - V\Bigr) + \sqrt{V}\,\frac{\sum_{i=1}^{T_n}(X_i - a)}{\sqrt{T_n}}.$$
Since $(X_n)$ is i.i.d., $E(X_1 \mid \mathcal{T}) = E(X_1) = a$ a.s. Since $E\bigl[\bigl(\frac{\sum_{i=1}^{T_n}(X_i - a)}{\sqrt{T_n}}\bigr)^2\bigr] = \sigma^2$ for every $n$, the sequence $\frac{\sum_{i=1}^{T_n}(X_i - a)}{\sqrt{T_n}}$ is $L_2$-bounded, and this implies
$$W_n - M_n = W_n - \sqrt{\tau_n}\,(L_n - aV) = \frac{\sum_{i=1}^{T_n}(X_i - a)}{\sqrt{T_n}}\,\Bigl(\sqrt{V} - \sqrt{\tfrac{T_n}{\tau_n}}\Bigr) \overset{P}{\longrightarrow} 0.$$
Therefore, it suffices to prove $W_n \to \sigma\sqrt{V}\,Z + aU$ in distribution. We prove the latter fact by means of characteristic functions.
Fix $t \in \mathbb{R}$. Let $\mu_{n,j}(\cdot) = P(V \in \cdot \mid T_n = j)$ be the probability distribution of $V$ under $P(\cdot \mid T_n = j)$ and
$$\phi_j(s) = E\Bigl[\exp\Bigl(i s\,\frac{\sum_{i=1}^j (X_i - a)}{\sqrt{j}}\Bigr)\Bigr] \quad\text{for all } s \in \mathbb{R}.$$
Then,
$$E\bigl[\exp(i t W_n)\bigr] = \sum_{j=1}^\infty P(T_n = j) \int \exp\Bigl(i t a \sqrt{\tau_n}\,\Bigl(\frac{j}{\tau_n} - v\Bigr)\Bigr)\,\phi_j(\sqrt{v}\,t)\,\mu_{n,j}(dv).$$
In addition, for each $c > 0$, the classical CLT yields
$$\lim_j \sup_{0 < v \le c} \Bigl|\phi_j(\sqrt{v}\,t) - \exp\Bigl(-\frac{t^2\sigma^2 v}{2}\Bigr)\Bigr| = 0. \tag{4}$$
Since condition (3) implies condition (iii), $\lim_n P(T_n \le b) = 0$ for all $b > 0$. Given $\epsilon > 0$, take $c > 0$ such that $P(V > c) < \epsilon$. As a result of (4), one can find an integer $m$ such that
$$\Bigl|E\bigl[\exp(i t W_n)\bigr] - E\Bigl[\exp\Bigl(i t a \sqrt{\tau_n}\,\Bigl(\frac{T_n}{\tau_n} - V\Bigr)\Bigr)\exp\Bigl(-\frac{t^2\sigma^2 V}{2}\Bigr)\Bigr]\Bigr| \le \epsilon + 2\,P(T_n \le m) + 2\,P(V > c) < 3\epsilon + 2\,P(T_n \le m).$$
Since $\epsilon$ is arbitrary and $\lim_n P(T_n \le m) = 0$, it follows that
$$\limsup_n \Bigl|E\bigl[\exp(i t W_n)\bigr] - E\Bigl[\exp\Bigl(i t a \sqrt{\tau_n}\,\Bigl(\frac{T_n}{\tau_n} - V\Bigr)\Bigr)\exp\Bigl(-\frac{t^2\sigma^2 V}{2}\Bigr)\Bigr]\Bigr| = 0.$$
Finally, since $Z \sim \mathcal{N}(0,1)$ and $Z$ is independent of $V$,
$$E\bigl[\exp(i t \sigma\sqrt{V}\,Z)\bigr] = E\Bigl[\exp\Bigl(-\frac{t^2\sigma^2 V}{2}\Bigr)\Bigr].$$
Therefore,
$$E\bigl[\exp(i t \sigma\sqrt{V}\,Z + i t a U)\bigr] = E\Bigl[\exp\Bigl(-\frac{t^2\sigma^2 V}{2}\Bigr)\Bigr]\,E\bigl[\exp(i t a U)\bigr] = \lim_n E\Bigl[\exp\Bigl(-\frac{t^2\sigma^2 V}{2}\Bigr)\exp\Bigl(i t a \sqrt{\tau_n}\,\Bigl(\frac{T_n}{\tau_n} - V\Bigr)\Bigr)\Bigr] = \lim_n E\bigl[\exp(i t W_n)\bigr],$$
where the second equality is due to condition (3). Hence, $W_n \to \sigma\sqrt{V}\,Z + aU$ in distribution, and this concludes the proof. ☐
The argument used in the proof of Theorem 2 yields a little bit more. Let $\nu = \mathcal{L}\bigl(\sigma\sqrt{V}\,Z + aU\bigr)$ and $\mathcal{G} = \sigma(V, X_1, X_2, \ldots)$. Then, $M_n$ converges $\mathcal{G}$-stably (and not only in distribution) to $\nu$. Among other things, since $L_n \overset{P}{\to} L$, this implies that $(L_n, M_n) \to (L, R)$ in distribution, where $R$ denotes a random variable independent of $L$ such that $R \sim \nu$. Moreover, condition (3) can be weakened into: $\sqrt{\tau_n}\,\bigl(\frac{T_n}{\tau_n} - V\bigr)$ converges $\sigma(V)$-stably to $\mathcal{L}(U)$.
We also note that, under some extra assumptions, Theorem 2 could be given a simpler proof based on some version of Anscombe’s theorem; see, e.g., [13] and references therein.
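To see Theorem 2 at work numerically (our sketch; all distributional choices are illustrative assumptions), take $X_i$ i.i.d. $\mathrm{Exp}(1)$ (so $a = \sigma = 1$) and $T_n = \lfloor \tau_n V \rfloor + 1$, for which $\sqrt{\tau_n}\,(T_n/\tau_n - V) \to 0$, i.e. $U = 0$ and the limit is the Gaussian mixture $\sigma\sqrt{V}\,Z$:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)

def clt_check(tau_n, n_sims=20_000):
    """Compare M_n = sqrt(tau_n) * (L_n - a*V) with M = sigma*sqrt(V)*Z
    via the two-sample Kolmogorov-Smirnov statistic."""
    V = rng.uniform(1.0, 2.0, size=n_sims)
    T = np.floor(tau_n * V).astype(int) + 1
    S = rng.gamma(shape=T, scale=1.0)        # sum of T i.i.d. Exp(1) ~ Gamma(T,1)
    Mn = np.sqrt(tau_n) * (S / tau_n - V)    # a = 1, L = a*V
    M = np.sqrt(V) * rng.normal(size=n_sims) # sigma = 1, U = 0
    return ks_2samp(Mn, M).statistic

for tau in [50, 500, 5000]:
    print(tau, round(clt_check(tau), 4))     # KS statistic shrinks with tau_n
```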
Finally, we adapt Theorem 2 to the exchangeable case. Let
$$W = E(X_1^2 \mid \mathcal{T}) - E(X_1 \mid \mathcal{T})^2 \quad\text{and}\quad M = \sqrt{W}\,\sqrt{V}\,Z + U\,E(X_1 \mid \mathcal{T}).$$
To introduce the next result, it may be useful to recall that
$$\sqrt{n}\,\Bigl(\frac{\sum_{i=1}^n X_i}{n} - E(X_1 \mid \mathcal{T})\Bigr) \longrightarrow \mathcal{N}(0, W) \quad\text{stably}$$
provided $(X_n)$ is exchangeable and $E(X_1^2) < \infty$, where $\mathcal{N}(0, W)$ is the Gaussian kernel with mean $0$ and random variance $W$ (with $\mathcal{N}(0, 0) = \delta_0$); see, e.g., ([14] Th. 3.1).
Theorem 3.
If $E(X_1^2) < \infty$ and conditions (i)–(ii) and (3) hold, then $M_n \to M$ in distribution.
Proof. 
Just note that $(X_n)$ is i.i.d. conditionally on $\mathcal{T}$, with mean $E(X_1 \mid \mathcal{T})$ and variance $W$. Hence, for each $f \in C_b(\mathbb{R})$, Theorem 2 yields
$$E\bigl[f(M_n) \mid \mathcal{T}\bigr] \overset{a.s.}{\longrightarrow} E\bigl[f(M) \mid \mathcal{T}\bigr],$$
which in turn implies
$$E\bigl[f(M)\bigr] = E\Bigl[\lim_n E\bigl[f(M_n) \mid \mathcal{T}\bigr]\Bigr] = \lim_n E\Bigl[E\bigl[f(M_n) \mid \mathcal{T}\bigr]\Bigr] = \lim_n E\bigl[f(M_n)\bigr].$$
 ☐
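The same experiment extends to the exchangeable case of Theorem 3 (our sketch, under illustrative assumptions): with $X_i = \theta + \epsilon_i$ for a shared $\theta \sim \mathcal{N}(0,1)$, one has $E(X_1 \mid \mathcal{T}) = \theta$ and $W = 1$, and, with $T_n = \lfloor \tau_n V \rfloor + 1$ (so $U = 0$), the limit is $M = \sqrt{WV}\,Z$:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)

def exchangeable_clt(tau_n, n_sims=20_000):
    """Compare M_n with M = sqrt(W*V)*Z in an exchangeable Gaussian model."""
    theta = rng.normal(size=n_sims)                 # shared random mean
    V = rng.uniform(1.0, 2.0, size=n_sims)
    T = np.floor(tau_n * V).astype(int) + 1
    S = T * theta + rng.normal(size=n_sims) * np.sqrt(T)  # sum | (theta,T) ~ N(T*theta, T)
    Mn = np.sqrt(tau_n) * (S / tau_n - V * theta)   # L = V * E(X_1|T) = V*theta
    M = np.sqrt(V) * rng.normal(size=n_sims)        # W = 1, U = 0
    return ks_2samp(Mn, M).statistic

for tau in [50, 500, 5000]:
    print(tau, round(exchangeable_clt(tau), 4))     # shrinks, as in Theorem 3
```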

4. Rate of Convergence with Respect to Total Variation Distance

To obtain upper bounds for $d_{TV}(L_n, L)$ and $d_{TV}(M_n, M)$, some additional assumptions are needed. In particular, in this section, $(X_n)$ is i.i.d. (with the exception of Remark 1). Hence, $L$ and $M$ reduce to $L = aV$ and $M = \sigma\sqrt{V}\,Z + aU$, where $a = E(X_1)$, $\sigma^2 = \mathrm{var}(X_1)$ and $(U, Z)$ satisfies condition (2).
We begin with a rough estimate for $d_{TV}(L_n, L)$.
Theorem 4.
Suppose that conditions (ii)–(iii) hold, $(X_n)$ is i.i.d., $E(|X_1|^3) < \infty$ and $\mathcal{L}(X_1)$ has an absolutely continuous part. Then,
$$d_{TV}(L_n, L) \le P(T_n \le m) + \frac{c}{\sqrt{m+1}} + d_{TV}\Bigl(L + \sigma\frac{\sqrt{V}}{\sqrt{\tau_n}}\,Z,\; L\Bigr) + E\biggl[\frac{|V - T_n/\tau_n|}{\max(V,\, T_n/\tau_n)}\biggr] + \frac{|a|\sqrt{\tau_n}}{\sigma}\,E\biggl[\frac{|V - T_n/\tau_n|}{\sqrt{\max(V,\, T_n/\tau_n)}}\biggr]$$
for all $m, n \ge 1$, where $c > 0$ is a constant independent of $m$ and $n$.
In order to prove Theorem 4, we recall that
$$d_{TV}\bigl(\mathcal{N}(a_1, b_1),\, \mathcal{N}(a_2, b_2)\bigr) \le \frac{|b_1 - b_2|}{\max(b_1, b_2)} + \frac{|a_1 - a_2|}{\sqrt{\max(b_1, b_2)}} \tag{5}$$
for all $a_1, a_2 \in \mathbb{R}$ and $b_1, b_2 > 0$; see, e.g., ([15] Lem. 3).
Proof of Theorem 4.
Fix $m, n \ge 1$. By ([16] Lem. 2.1), up to enlarging the underlying probability space $(\Omega, \mathcal{A}, P)$, there is a sequence $\bigl((S_j, Z_j) : j \ge 1\bigr)$ of random variables, independent of $(T_n, V)$, such that
$$S_j \sim \sum_{i=1}^j X_i, \quad Z_j \sim \mathcal{N}(0,1), \quad P\bigl(S_j \ne a j + \sigma\sqrt{j}\,Z_j\bigr) = d_{TV}\bigl(S_j,\; a j + \sigma\sqrt{j}\,Z_j\bigr).$$
In addition, by ([17] Th. 2.6), there is a constant $c > 0$ depending only on $E(|X_1|^3)$ such that
$$d_{TV}\bigl(S_j,\; a j + \sigma\sqrt{j}\,Z_j\bigr) = d_{TV}\Bigl(\frac{S_j - a j}{\sigma\sqrt{j}},\; Z_j\Bigr) \le \frac{c}{\sqrt{m+1}} \quad\text{for all } j > m.$$
Having noted these facts, define
$$L_n^* = \frac{a T_n + \sigma\sqrt{T_n}\,Z_{T_n}}{\tau_n}.$$
Then,
$$d_{TV}(L_n, L_n^*) \le P(T_n \le m) + \sum_{j > m} P(T_n = j)\, d_{TV}\bigl(P(L_n \in \cdot \mid T_n = j),\; P(L_n^* \in \cdot \mid T_n = j)\bigr)$$
$$\le P(T_n \le m) + \sup_{j > m}\, d_{TV}\bigl(P(L_n \in \cdot \mid T_n = j),\; P(L_n^* \in \cdot \mid T_n = j)\bigr)$$
$$= P(T_n \le m) + \sup_{j > m}\, d_{TV}\Bigl(\frac{\sum_{i=1}^j X_i}{\tau_n},\; \frac{a j + \sigma\sqrt{j}\,Z_j}{\tau_n}\Bigr) = P(T_n \le m) + \sup_{j > m}\, d_{TV}\bigl(S_j,\; a j + \sigma\sqrt{j}\,Z_j\bigr) \le P(T_n \le m) + \frac{c}{\sqrt{m+1}}.$$
Next, since $Z_{T_n} \sim \mathcal{N}(0,1)$, by conditioning on $(T_n, V)$ and applying inequality (5), one obtains
$$d_{TV}\Bigl(L_n^*,\; a V + \sigma\frac{\sqrt{V}}{\sqrt{\tau_n}}\,Z_{T_n}\Bigr) \le E\biggl[\frac{|V - T_n/\tau_n|}{\max(V,\, T_n/\tau_n)}\biggr] + \frac{|a|\sqrt{\tau_n}}{\sigma}\,E\biggl[\frac{|V - T_n/\tau_n|}{\sqrt{\max(V,\, T_n/\tau_n)}}\biggr].$$
Moreover, since $Z_{T_n} \sim Z$ and both $Z_{T_n}$ and $Z$ are independent of $V$,
$$d_{TV}\Bigl(a V + \sigma\frac{\sqrt{V}}{\sqrt{\tau_n}}\,Z_{T_n},\; L\Bigr) = d_{TV}\Bigl(L + \sigma\frac{\sqrt{V}}{\sqrt{\tau_n}}\,Z,\; L\Bigr).$$
Collecting all these facts together, one finally obtains
$$d_{TV}(L_n, L) \le d_{TV}(L_n, L_n^*) + d_{TV}(L_n^*, L) \le P(T_n \le m) + \frac{c}{\sqrt{m+1}} + d_{TV}\Bigl(L + \sigma\frac{\sqrt{V}}{\sqrt{\tau_n}}\,Z,\; L\Bigr) + E\biggl[\frac{|V - T_n/\tau_n|}{\max(V,\, T_n/\tau_n)}\biggr] + \frac{|a|\sqrt{\tau_n}}{\sigma}\,E\biggl[\frac{|V - T_n/\tau_n|}{\sqrt{\max(V,\, T_n/\tau_n)}}\biggr].$$
 ☐
The upper bound provided by Theorem 4 is generally large, but it becomes manageable under some further assumptions. For instance, if $V \ge b$ a.s. for some constant $b > 0$, it reduces to
$$d_{TV}(L_n, L) \le P(T_n \le m) + \frac{c}{\sqrt{m+1}} + d_{TV}\Bigl(L + \sigma\frac{\sqrt{V}}{\sqrt{\tau_n}}\,Z,\; L\Bigr) + \Bigl(\frac{1}{b} + \frac{|a|\sqrt{\tau_n}}{\sigma\sqrt{b}}\Bigr)\, E\Bigl|V - \frac{T_n}{\tau_n}\Bigr|. \tag{6}$$
As an example, we discuss a simple but instructive case.
Example 1.
For each $x \in \mathbb{R}$, denote by $J(x)$ the integer part of $x$. Suppose $V \ge b$ a.s. for some constant $b > 0$ and define
$$T_n = J(\tau_n V + 1).$$
Suppose also that $(X_n)$ is independent of $V$ and satisfies the other conditions of Theorem 4. Then,
$$T_n > \tau_n b \quad\text{and}\quad \Bigl|V - \frac{T_n}{\tau_n}\Bigr| = \frac{T_n}{\tau_n} - V \le \frac{1}{\tau_n} \quad\text{a.s.}$$
Hence, letting $m = J(\tau_n b)$, inequality (6) reduces to
$$d_{TV}(L_n, L) \le \frac{c^*}{\sqrt{\tau_n}} + d_{TV}\Bigl(L + \sigma\frac{\sqrt{V}}{\sqrt{\tau_n}}\,Z,\; L\Bigr)$$
for some constant $c^*$. Finally, $d_{TV}\bigl(L + \sigma\frac{\sqrt{V}}{\sqrt{\tau_n}}\,Z,\, L\bigr) = O(1/\sqrt{\tau_n})$ if $V$ is bounded above and $\mathcal{L}(V)$ is absolutely continuous with a Lipschitz density. Hence, under the latter condition on $V$, one obtains
$$d_{TV}(L_n, L) = O(1/\sqrt{\tau_n}).$$
Incidentally, this bound is essentially of the same order as the bound obtained in [6] when T n has a mixed Poisson distribution and the total variation distance is replaced by the Wasserstein distance.
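A crude numerical companion to Example 1 (ours; the histogram estimator only lower-bounds $d_{TV}$ and carries its own sampling noise, so this is a qualitative check at best):

```python
import numpy as np

rng = np.random.default_rng(5)

def tv_lower_bound(x, y, bins=100):
    """Histogram estimate of d_TV between two samples (a noisy lower bound)."""
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    p, edges = np.histogram(x, bins=bins, range=(lo, hi))
    q, _ = np.histogram(y, bins=edges)
    return 0.5 * np.sum(np.abs(p / len(x) - q / len(y)))

def example1_tv(tau_n, n_sims=200_000):
    """Example 1 with X_i i.i.d. Exp(1) (a = 1), V ~ Uniform(1,2) (bounded,
    Lipschitz density), T_n = floor(tau_n*V + 1); compare L_n with L = a*V."""
    V = rng.uniform(1.0, 2.0, size=n_sims)
    T = np.floor(tau_n * V + 1).astype(int)
    S = rng.gamma(shape=T, scale=1.0)            # sum of T i.i.d. Exp(1)
    return tv_lower_bound(S / tau_n, rng.uniform(1.0, 2.0, size=n_sims))

for tau in [100, 400, 1600]:
    print(tau, round(example1_tv(tau), 3))       # decreases with tau; the
                                                 # estimator's noise floor
                                                 # eventually dominates
```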
One more consequence of Theorem 4 is the following.
Corollary 1.
$L_n \to L$ in total variation distance provided the conditions of Theorem 4 hold, $a \ne 0$, $\mathcal{L}(V)$ is absolutely continuous, and
$$\lim_n \sqrt{\tau_n}\, E\Bigl|V - \frac{T_n}{\tau_n}\Bigr| = 0.$$
Proof. 
First, assume $V \ge b$ a.s. for some constant $b > 0$. For each $z \in \mathbb{R}$, letting $q_n = \frac{\sigma z}{a\sqrt{\tau_n}}$, Lemma 1 implies
$$\limsup_n d_{TV}\Bigl(L + \sigma\frac{\sqrt{V}}{\sqrt{\tau_n}}\,z,\; L\Bigr) = \limsup_n d_{TV}\bigl(V + q_n\sqrt{V},\; V\bigr) = 0.$$
Conditioning on $Z$ and taking inequality (6) into account, it follows that
$$\limsup_n d_{TV}(L_n, L) \le \frac{c}{\sqrt{m+1}} + \limsup_n d_{TV}\Bigl(L + \sigma\frac{\sqrt{V}}{\sqrt{\tau_n}}\,Z,\; L\Bigr) \le \frac{c}{\sqrt{m+1}} + \limsup_n \int d_{TV}\Bigl(L + \sigma\frac{\sqrt{V}}{\sqrt{\tau_n}}\,z,\; L\Bigr)\,\mathcal{N}(0,1)(dz) = \frac{c}{\sqrt{m+1}} \quad\text{for each } m \ge 1.$$
This concludes the proof if $V \ge b$ a.s. In general, for each $b > 0$, define
$$V_b = 1_{\{V > b\}}\,V + 1_{\{V \le b\}}\,(V + b) \quad\text{and}\quad T_{n,b} = J\bigl(1_{\{V > b\}}\,T_n + 1_{\{V \le b\}}\,(1 + \tau_n (V + b))\bigr),$$
where $J(x)$ denotes the integer part of $x$. Since $\frac{T_{n,b}}{\tau_n} \overset{P}{\to} V_b > b$, the first part of the proof implies
$$\frac{\sum_{i=1}^{T_{n,b}} X_i}{\tau_n} \longrightarrow a V_b \quad\text{in total variation distance}.$$
Finally, since $V > 0$ and
$$d_{TV}(L_n, L) \le 2\,P(V \le b) + d_{TV}\Bigl(\frac{\sum_{i=1}^{T_{n,b}} X_i}{\tau_n},\; a V_b\Bigr) \quad\text{for all } b > 0,$$
one obtains $\lim_n d_{TV}(L_n, L) = 0$. ☐
We next turn to $d_{TV}(M_n, M)$. Following [18], our strategy is to estimate $d_{TV}(M_n, M)$ through the Wasserstein distance between $\mathcal{L}(M_n)$ and $\mathcal{L}(M)$.
Recall that, if $X$ and $Y$ are real integrable random variables, the Wasserstein distance between $\mathcal{L}(X)$ and $\mathcal{L}(Y)$ is
$$d_W(X, Y) = \inf_{(H,K)} E|H - K| = \sup_f \bigl|E f(X) - E f(Y)\bigr|,$$
where the infimum is over the pairs of real random variables $(H, K)$ such that $H \sim X$ and $K \sim Y$, while the supremum is over the 1-Lipschitz functions $f : \mathbb{R} \to \mathbb{R}$. Define also
$$l_n = \int |t\,\phi_n(t)|\,dt = 2\int_0^\infty t\,|\phi_n(t)|\,dt,$$
where $\phi_n$ is the characteristic function of $M_n$.
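In one dimension, $d_W$ is easy to estimate from equal-size samples, since the optimal coupling pairs order statistics; a minimal sketch (ours):

```python
import numpy as np

def wasserstein_1d(x, y):
    """Empirical 1-Wasserstein distance between two equal-size samples:
    in dimension one the infimum coupling matches sorted values."""
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

rng = np.random.default_rng(6)
# Sanity check: d_W(N(0,1), N(0.1,1)) = 0.1
print(wasserstein_1d(rng.normal(0.0, 1.0, 100_000),
                     rng.normal(0.1, 1.0, 100_000)))
```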
Theorem 5.
Assume the conditions of Theorem 2 and:
(iv)
$U = \sqrt{V_0}\,Z_0$, where $Z_0 \sim \mathcal{N}(0,1)$, $V_0 \ge 0$ is independent of $Z_0$, and $(V_0, Z_0)$ is independent of $(V, Z)$;
(v)
$E\bigl(T_{n_0}^2\bigr) < \infty$ for some $n_0$ and
$$\sup_n \tau_n\, E\Bigl[\Bigl(\frac{T_n}{\tau_n} - V\Bigr)^2\Bigr] < \infty.$$
Then, $d_W(M_n, M) \to 0$. Moreover, letting $d_n = d_W(M_n, M)$, one obtains
$$d_{TV}(M_n, M) \le d_n^{1/2} + d_n^{1/2 - \alpha} + P\bigl(\sigma^2 V + a^2 V_0 < d_n^{\alpha}\bigr) + k\,\bigl(l_n\, d_n^{1/2}\bigr)^{2/3}$$
and
$$d_{TV}(M_n, M) \le d_n^{1/2}\Bigl(1 + \frac{1}{\sigma}\, E\bigl(V^{-1/2}\bigr)\Bigr) + k\,\bigl(l_n\, d_n^{1/2}\bigr)^{2/3}$$
for each $n \ge 1$ and $\alpha < 1/2$, where $k$ is a constant independent of $n$.
Proof. 
By Theorem 2, $M_n \to M$ in distribution. By condition (iv),
$$M = \sigma\sqrt{V}\,Z + a\sqrt{V_0}\,Z_0 \sim \sqrt{\sigma^2 V + a^2 V_0}\;Z,$$
so that $\mathcal{L}(M)$ is a mixture of centered Gaussian laws. On noting that
$$E\Bigl[\Bigl(\sum_{i=1}^{T_n} (X_i - a)\Bigr)^2\Bigr] = \sigma^2\, E(T_n),$$
one obtains
$$E(M_n^2) = \tau_n\, E\Bigl[\Bigl(\frac{\sum_{i=1}^{T_n}(X_i - a)}{\tau_n} + a\,\Bigl(\frac{T_n}{\tau_n} - V\Bigr)\Bigr)^2\Bigr] \le \frac{2}{\tau_n}\, E\Bigl[\Bigl(\sum_{i=1}^{T_n}(X_i - a)\Bigr)^2\Bigr] + 2 a^2 \tau_n\, E\Bigl[\Bigl(\frac{T_n}{\tau_n} - V\Bigr)^2\Bigr] = 2\sigma^2\, E\Bigl(\frac{T_n}{\tau_n}\Bigr) + 2 a^2 \tau_n\, E\Bigl[\Bigl(\frac{T_n}{\tau_n} - V\Bigr)^2\Bigr].$$
Finally, by condition (v), $\lim_n E\bigl(\frac{T_n}{\tau_n}\bigr) = E(V) < \infty$ and $\sup_n E(M_n^2) < \infty$. To conclude the proof, it suffices to apply Theorem 1 of [18] (see also the subsequent remark) with $\beta = 2$. ☐
Theorem 5 gives two upper bounds for $d_{TV}(M_n, M)$ in terms of $d_n = d_W(M_n, M)$ and $l_n$. To avoid trivialities, suppose $\sigma > 0$. Obviously, the second bound makes sense only if $E\bigl(V^{-1/2}\bigr) < \infty$. However, since $V > 0$ and $d_n \to 0$, the first bound implies $d_{TV}(M_n, M) \to 0$ if $\lim_n l_n\, d_n^{1/2} = 0$. In particular, $d_{TV}(M_n, M) \to 0$ if $\sup_n l_n < \infty$.
Example 2.
Under the conditions of Theorem 5, suppose also that $\mathcal{L}(X_1)$ is absolutely continuous with a density $f$ satisfying $\int |f'(x)|\,dx < \infty$. Then, conditioning on $T_n$ and $V$ and arguing as in ([18] Ex. 2), it can be shown that $\sup_n l_n < \infty$. Hence, $M_n \to M$ in total variation distance. Furthermore, if $E\bigl(V^{-1/2}\bigr) < \infty$, the second bound of Theorem 5 yields
$$d_{TV}(M_n, M) \le k^*\, d_n^{1/3}$$
for all $n \ge 1$ and a suitable constant $k^*$ (independent of $n$).
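Continuing the numerical thread (our sketch, same illustrative model as before: $X_i$ i.i.d. $\mathrm{Exp}(1)$, $V \sim \mathrm{Uniform}(1,2)$, $T_n = \lfloor \tau_n V\rfloor + 1$, so $V_0 = 0$ and $M = \sigma\sqrt{V}\,Z$), one can watch the empirical $d_W$ and a rough $d_{TV}$ lower bound shrink together:

```python
import numpy as np

rng = np.random.default_rng(7)

def distances(tau_n, n_sims=200_000, bins=100):
    """Empirical d_W (sorted-sample coupling) and a histogram lower bound
    of d_TV between M_n and M = sigma*sqrt(V)*Z."""
    V = rng.uniform(1.0, 2.0, size=n_sims)
    T = np.floor(tau_n * V).astype(int) + 1
    S = rng.gamma(shape=T, scale=1.0)             # sum of T i.i.d. Exp(1)
    Mn = np.sqrt(tau_n) * (S / tau_n - V)         # a = sigma = 1
    M = np.sqrt(rng.uniform(1.0, 2.0, size=n_sims)) * rng.normal(size=n_sims)
    dW = np.mean(np.abs(np.sort(Mn) - np.sort(M)))
    lo, hi = min(Mn.min(), M.min()), max(Mn.max(), M.max())
    p, edges = np.histogram(Mn, bins=bins, range=(lo, hi))
    q, _ = np.histogram(M, bins=edges)
    dTV = 0.5 * np.sum(np.abs(p - q)) / n_sims
    return dW, dTV

for tau in [50, 500, 5000]:
    print(tau, distances(tau))                    # both distances decrease
```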
We close the paper by briefly discussing the exchangeable case.
Remark 1.
Usually, the upper bounds for the total variation distance are preserved under mixtures. Hence, by conditioning on $\mathcal{T}$ and making some further assumptions, the results obtained in this section can be extended to the case where $(X_n)$ is exchangeable. As an example, define $L$ and $M$ as in Section 3 and suppose
$$\bigl|E\bigl(\exp(i t X_1) \mid \mathcal{T}\bigr)\bigr| \le \frac{Q}{|t|} \quad\text{a.s.}$$
for each $t \in \mathbb{R} \setminus \{0\}$ and for some integrable random variable $Q$. Then, Corollary 1 and Theorem 5 are still valid even if $(X_n)$ is exchangeable (and not necessarily i.i.d.), up to replacing $a \ne 0$ with $E(X_1 \mid \mathcal{T}) \ne 0$ a.s. in Corollary 1.

Author Contributions

Methodology, L.P. and P.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Gnedenko, B.V.; Korolev, V. Random Summation: Limit Theorems and Applications; CRC Press: Boca Raton, FL, USA, 1996.
2. Kiche, J.; Ngesa, O.; Orwa, G. On generalized gamma distribution and its application to survival data. Int. J. Stat. Probab. 2019, 8, 85–102.
3. Korolev, V.; Chertok, A.; Korchagin, A.; Zeifman, A. Modeling high-frequency order flow imbalance by functional limit theorems for two-sided risk processes. Appl. Math. Comput. 2015, 253, 224–241.
4. Korolev, V.; Dorofeeva, A. Bounds of the accuracy of the normal approximation to the distributions of random sums under relaxed moment conditions. Lith. Math. J. 2017, 57, 38–58.
5. Korolev, V.; Zeifman, A. Generalized negative binomial distributions as mixed geometric laws and related limit theorems. Lith. Math. J. 2019, 59, 366–388.
6. Korolev, V.; Zeifman, A. Bounds for convergence rate in laws of large numbers for mixed Poisson random sums. Stat. Prob. Lett. 2021, 168, 1–8.
7. Mattner, L.; Shevtsova, I. An optimal Berry–Esseen type theorem for integrals of smooth functions. ALEA Lat. Am. J. Probab. Math. Stat. 2019, 16, 487–530.
8. Schluter, C.; Trede, M. Weak convergence to the Student and Laplace distributions. J. Appl. Probab. 2016, 53, 121–129.
9. Shevtsova, I.; Tselishchev, M. A generalized equilibrium transform with application to error bounds in the Rényi theorem with no support constraints. Mathematics 2020, 8, 577.
10. Sheeja, S.; Kumar, S. Negative binomial sum of random variables and modeling financial data. Int. J. Stat. Appl. Math. 2017, 2, 44–51.
11. Rényi, A. On stable sequences of events. Sankhya A 1963, 25, 293–302.
12. Nourdin, I.; Nualart, D.; Peccati, G. Quantitative stable limit theorems on the Wiener space. Ann. Probab. 2016, 44, 1–41.
13. Berti, P.; Crimaldi, I.; Pratelli, L.; Rigo, P. An Anscombe-type theorem. J. Math. Sci. 2014, 196, 15–22.
14. Berti, P.; Pratelli, L.; Rigo, P. Limit theorems for a class of identically distributed random variables. Ann. Probab. 2004, 32, 2029–2052.
15. Pratelli, L.; Rigo, P. Total variation bounds for Gaussian functionals. Stoch. Proc. Appl. 2019, 129, 2231–2248.
16. Sethuraman, J. Some extensions of the Skorohod representation theorem. Sankhya 2002, 64, 884–893.
17. Bally, V.; Caramellino, L. Asymptotic development for the CLT in total variation distance. Bernoulli 2016, 22, 2442–2485.
18. Pratelli, L.; Rigo, P. Convergence in total variation to a mixture of Gaussian laws. Mathematics 2018, 6, 99.