Article

Second Order Expansions for High-Dimension Low-Sample-Size Data Statistics in Random Setting

by Gerd Christoph 1,2,† and Vladimir V. Ulyanov 2,3,*,†
1 Department of Mathematics, Otto-von-Guericke University Magdeburg, 39016 Magdeburg, Germany
2 Moscow Center for Fundamental and Applied Mathematics, Lomonosov Moscow State University, 119991 Moscow, Russia
3 Faculty of Computer Science, National Research University Higher School of Economics, 167005 Moscow, Russia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2020, 8(7), 1151; https://doi.org/10.3390/math8071151
Submission received: 12 May 2020 / Revised: 6 July 2020 / Accepted: 9 July 2020 / Published: 14 July 2020
(This article belongs to the Special Issue Stability Problems for Stochastic Models: Theory and Applications)

Abstract: We consider high-dimension low-sample-size data taken from the standard multivariate normal distribution under the assumption that the dimension is a random variable. Second-order Chebyshev–Edgeworth expansions for the distributions of an angle between two sample observations and of the corresponding sample correlation coefficient are constructed with error bounds. Depending on the type of normalization, we obtain three different limit distributions: the normal, Student's t-, or Laplace distribution. The paper continues the authors' studies on the approximation of statistics for samples of random size.

1. Introduction

Let $X_1 = (X_{11}, \ldots, X_{1m})^T, \ldots, X_k = (X_{k1}, \ldots, X_{km})^T$ be a random sample from an m-dimensional population. The data set can be regarded as k vectors or points in m-dimensional space. Recently, there has been significant interest in high-dimensional data sets whose dimension is large. In a high-dimensional setting, it is assumed that either (i) m tends to infinity and k is fixed, or (ii) both m and k tend to infinity. Case (i) corresponds to high-dimension low-sample-size (HDLSS) data. One of the first results for HDLSS data appeared in Hall et al. [1]. It became the basis of research in mathematical statistics on the analysis of high-dimensional data, see, e.g., Fujikoshi et al. [2]; such methods are an important part of the currently fashionable area of data analysis called Big Data. Scientific areas where these settings have proven very useful include genetics and various types of cancer research, neuroscience, and image and shape analysis. See the recent survey on HDLSS asymptotics and its applications in Aoshima et al. [3].
For examining the features of the data set, it is necessary to study the asymptotic behavior of three functions: the length $\|X_i\|$ of an m-dimensional observation vector, the distance $\|X_i - X_j\|$ between any two independent observation vectors, and the angle $\mathrm{ang}(X_i, X_j)$ between these vectors at the population mean. Assuming that the $X_i$'s are a sample from $N(0, I_m)$, it was shown in Hall et al. [1] that for HDLSS data the three geometric statistics satisfy the following relations:

$$\|X_i\| = \sqrt{m} + O_p(1), \quad i = 1, \ldots, k, \qquad (1)$$

$$\|X_i - X_j\| = \sqrt{2m} + O_p(1), \quad i, j = 1, \ldots, k, \ i \neq j, \qquad (2)$$

$$\mathrm{ang}(X_i, X_j) = \frac{\pi}{2} + O_p\big(m^{-1/2}\big), \quad i, j = 1, \ldots, k, \ i \neq j, \qquad (3)$$

where $\|\cdot\|$ is the Euclidean norm and $O_p$ denotes the stochastic order. These interesting results imply that the data converge to the vertices of a deterministic regular simplex. These properties were extended to non-normal samples under some assumptions (see Hall et al. [1] and Aoshima et al. [3]). In Kawaguchi et al. [4], the relations (1)–(3) were refined by constructing second-order asymptotic expansions for the distributions of all three basic statistics. The refinements of (1) and (2) were achieved by using the idea of Ulyanov et al. [5], who obtained computable error bounds of order $O(m^{-1})$ for the chi-squared approximation of transformed chi-squared random variables with m degrees of freedom.
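The geometric representation (1)–(3) is easy to check numerically. The following sketch (an illustrative check we add here, not part of the original derivation; the dimension and seed are arbitrary) draws two independent $N(0, I_m)$ vectors for large m and compares the three statistics with their deterministic leading terms:

```python
import math
import random

def hdlss_geometry(m, seed=0):
    """Draw two independent N(0, I_m) vectors and return the three
    statistics of relations (1)-(3): one length, the pairwise distance,
    and the angle at the origin."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(m)]
    y = [rng.gauss(0.0, 1.0) for _ in range(m)]
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    dot = sum(a * b for a, b in zip(x, y))
    angle = math.acos(dot / (norm_x * norm_y))
    return norm_x, dist, angle

m = 100_000
norm_x, dist, angle = hdlss_geometry(m)
print(norm_x - math.sqrt(m))        # (1): deviation is O_p(1)
print(dist - math.sqrt(2 * m))      # (2): deviation is O_p(1)
print(angle - math.pi / 2)          # (3): deviation is O_p(m^(-1/2))
```

The printed deviations stay of order one (respectively $m^{-1/2}$) even though the statistics themselves are of order $\sqrt{m}$.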
The aim of the present paper is to study the approximation of the third statistic, $\mathrm{ang}(X_1, X_2)$, under the generalized assumption that m is a realization of a random variable, say $N_n$, which represents the sample dimension and is independent of $X_1$ and $X_2$. This problem is closely related to approximations of statistics constructed from samples of random size, in particular, to the analogous problem for the sample correlation coefficient $R_m$.
The use of samples with random sample sizes has been steadily growing over the years. For an overview of statistical inference with a random number of observations and some applications, see Esquível et al. [6] and the references cited therein. Gnedenko [7] considered the asymptotic properties of the distributions of sample quantiles for samples of random size. In Nunes et al. [8] and Nunes et al. [9], unknown sample sizes are assumed in medical research for the analysis of one-way and higher-way fixed-effects ANOVA models, in order to avoid the false rejections obtained when using the classical fixed-size F-tests. Esquível et al. [6] considered inference for the mean with known and unknown variance and inference for the variance in the normal model. Prediction intervals for future observations for generalized order statistics and confidence intervals for quantiles based on samples of random size are studied in Barakat et al. [10] and Al-Mutairi and Raqab [11], respectively. They illustrated their results with a real biometric data set: the durations of remission of leukemia patients treated with a single drug. The present paper continues the authors' studies on the non-asymptotic analysis of approximations for statistics based on random-size samples. In Christoph et al. [12], second-order expansions for the normalized random sample sizes are proved, see Propositions 1 and 2 below. These results allow for proving second-order asymptotic expansions for the random sample mean in Christoph et al. [12] and for the random sample median in Christoph et al. [13]. See also Chapters 1 and 9 in Fujikoshi and Ulyanov [14].
The structure of the paper is the following. In Section 2, we describe the relation between $\mathrm{ang}(X_1, X_2)$ and $R_m$. We also recall previous approximation results proved for the distributions of $\mathrm{ang}(X_1, X_2)$ and $R_m$. Section 3 presents general transfer theorems, which allow us to construct asymptotic expansions for the distributions of randomly normalized statistics on the basis of approximation results for non-randomly normalized statistics and for the random size of the underlying sample, see Theorems 1 and 2. Section 4 contains the auxiliary lemmas. Some of them are of independent interest; for example, Lemma 3 gives upper bounds for the negative-order moments of a random variable having a negative binomial distribution. We formulate and discuss the main results in Section 5 and Section 6. In Theorems 3–8, we construct the second-order Chebyshev–Edgeworth expansions for the distributions of $\mathrm{ang}(X_1, X_2)$ and $R_m$ in the random setting. Depending on the type of normalization, we get three different limit distributions: normal, Laplace, or Student's t-distributions. All proofs are given in Appendix A.

2. Sample Correlation Coefficient, Angle between Vectors and Their Normal Approximations

We slightly simplify the notation. Let $\mathbf{X}_m = (X_1, \ldots, X_m)^T$ and $\mathbf{Y}_m = (Y_1, \ldots, Y_m)^T$ be two vectors from an m-dimensional normal distribution $N(0, I_m)$ with zero mean and identity covariance matrix $I_m$, and consider the sample correlation coefficient

$$R_m = R_m(\mathbf{X}_m, \mathbf{Y}_m) = \frac{\sum_{k=1}^m X_k Y_k}{\sqrt{\sum_{k=1}^m X_k^2}\,\sqrt{\sum_{k=1}^m Y_k^2}}. \qquad (4)$$

Under the null hypothesis $H_0$: {$\mathbf{X}_m$ and $\mathbf{Y}_m$ are uncorrelated}, the so-called null density $p_{R_m}(y; m)$ of $R_m$ is given in Johnson, Kotz and Balakrishnan [15], Chapter 32, Formula (32.7):

$$p_{R_m}(y; m) = \frac{\Gamma\big((m-1)/2\big)}{\sqrt{\pi}\;\Gamma\big((m-2)/2\big)}\,\big(1 - y^2\big)^{(m-4)/2}\, I_{(-1,1)}(y)$$

for $m \geq 3$, where $I_A(\cdot)$ denotes the indicator function of a set A.
  • Note that $\mu = E\,R_m = 0$ and $\sigma^2 = \mathrm{Var}(R_m) = 1/(m-1)$ for $m \geq 2$,
  • $R_2$ is two-point distributed with $P(R_2 = 1) = P(R_2 = -1) = 1/2$,
  • $R_3$ is U-shaped with $p_{R_3}(y; 3) = (1/\pi)\,(1 - y^2)^{-1/2}\, I_{(-1,1)}(y)$ and
  • $R_4$ is uniform with density $p_{R_4}(y; 4) = (1/2)\, I_{(-1,1)}(y)$.
  • Moreover, for $m \geq 5$, the density function $p_{R_m}(y; m)$ is unimodal.
Consider now the standardized correlation coefficient

$$\bar{R}_m = \sqrt{m - c}\; R_m \qquad (5)$$

with some correcting real constant $c < m$, having density

$$p_{\bar R_m}(y; m, c) = \frac{\Gamma\big((m-1)/2\big)}{\sqrt{(m-c)\,\pi}\;\Gamma\big((m-2)/2\big)} \left(1 - \frac{y^2}{m-c}\right)^{(m-4)/2} I_{\{|y| < \sqrt{m-c}\}}(y), \qquad (6)$$

which converges with $c = O(1)$ as $m \to \infty$ to the standard normal density

$$\varphi(y) = \frac{1}{\sqrt{2\pi}}\, e^{-y^2/2}, \quad y \in (-\infty, \infty),$$

and, by Konishi [16], Section 4, Formula (4.1), as $m \to \infty$:

$$F_m^*(x, c) := P\big(\sqrt{m-c}\; R_m \leq x\big) = \Phi(x) + \frac{x^3 + \big(2(c-1) - 3\big)\,x}{4\,(m-c)}\;\varphi(x) + O\big(m^{-3/2}\big), \qquad (7)$$
where $\Phi(x) = \int_{-\infty}^x \varphi(y)\, dy$ is the standard normal distribution function. Note that in Konishi [16] the sample size (in our case, the dimension of the vectors) is $m + 1$ and $c = 1 + 2\Delta$ with Konishi's correcting constant $\Delta$. Moreover, (7) follows from the more general Theorem 2.2 of the mentioned paper for independent components in the pairs $(X_k, Y_k)$, $k = 1, \ldots, m$.
In Christoph et al. [17], computable error bounds for the approximations in (7) with $c = 2$ and $c = 2.5$, of order $O(m^{-2})$ for all $m \geq 7$, are proved:

$$\sup_x \left| P\big(\sqrt{m - 2.5}\; R_m \leq x\big) - \Phi(x) - \frac{x^3\, \varphi(x)}{4\,(m-2.5)} \right| \leq \frac{B_m}{(m-2.5)^2} \qquad (8)$$

and

$$\sup_x \left| P\big(\sqrt{m - 2}\; R_m \leq x\big) - \Phi(x) - \frac{(x^3 - x)\, \varphi(x)}{4\,(m-2)} \right| \leq \frac{B_m^*}{(m-2)^2}, \qquad (9)$$

where for several $m \geq 7$ the constants $B_m$ and $B_m^*$ are calculated and presented in Table 1 in Christoph et al. [17]; e.g., $B_7 = 1.875$, $B_7^* = 2.083$ and $B_{50} = 0.720$, $B_{50}^* = 0.982$.
Usually, the asymptotic for $\bar R_m$ is stated as (9) with $c = 2$, since it is related to the t-distributed statistic $\sqrt{m-2}\; R_m / \sqrt{1 - R_m^2}$. With the correcting constant $c = 2.5$, one term of the asymptotic in (8) vanishes.
In order to use a transfer theorem from non-random to random dimension of the vectors, we prefer (7) with $c = 0$. In a similar manner as in the proofs of (8) and (9) in Christoph et al. [17], one can verify the following inequality for $m \geq 3$:

$$\sup_x \left| P\big(\sqrt{m}\; R_m \leq x\big) - \Phi(x) - \frac{(x^3 - 5x)}{4m}\,\varphi(x) \right| \leq C_1\, m^{-2}. \qquad (10)$$
Let us consider now the connection between the correlation coefficient $R_m$ and the angle $\theta_m$ between the involved vectors $\mathbf{X}_m$, $\mathbf{Y}_m$:

$$\theta_m = \mathrm{ang}(\mathbf{X}_m, \mathbf{Y}_m). \qquad (11)$$

Hall et al. [1] showed that, under the given conditions,

$$\theta_m = \frac{\pi}{2} + O_p\big(m^{-1/2}\big) \quad \text{as } m \to \infty,$$

where $O_p$ denotes the stochastic order. Since

$$\cos\theta_m = \frac{\|\mathbf{X}_m\|^2 + \|\mathbf{Y}_m\|^2 - \|\mathbf{X}_m - \mathbf{Y}_m\|^2}{2\,\|\mathbf{X}_m\|\,\|\mathbf{Y}_m\|} = R_m(\mathbf{X}_m, \mathbf{Y}_m) = R_m,$$

computable error bounds for $\theta_m$ follow from computable error bounds for $R_m$.
For any fixed constant $c < m$ and arbitrary x with $|x| < \sqrt{m-c}\;\pi/2$, we obtain for the angle $\theta_m$, $0 < \theta_m < \pi$:

$$P\big(\sqrt{m-c}\,(\theta_m - \pi/2) \leq x\big) = P\big(\theta_m \leq \pi/2 + x/\sqrt{m-c}\big) = P\big(\cos\theta_m \geq \cos(\pi/2 + x/\sqrt{m-c})\big) \\ = P\big(R_m \geq -\sin(x/\sqrt{m-c})\big) = P\big(\sqrt{m-c}\; R_m \leq \sqrt{m-c}\,\sin(x/\sqrt{m-c})\big), \qquad (12)$$

because $R_m$ is symmetric and $P(R_m \geq -x) = P(R_m \leq x)$.
Equation (12) shows the connection between the correlation coefficient $R_m$ and the angle $\theta_m$ between the vectors involved. In Christoph et al. [17], the computable error bound of the approximation in (8) is used to obtain a similar bound for the approximation of the angle between two vectors, defined in (11). Here, the approximations (10) and (12) with $c = 0$ lead, for any $m \geq 3$ and for $|x| \leq \pi\sqrt{m}/2$, to

$$\sup_x \left| P\big(\sqrt{m}\,(\theta_m - \tfrac{\pi}{2}) \leq x\big) - \Phi(x) - \frac{(1/3)\,x^3 - 5x}{4m}\,\varphi(x) \right| \leq C_1\, m^{-2}. \qquad (13)$$
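The law-of-cosines identity $\cos\theta_m = R_m$ underlying (12) can be verified directly for a single pair of vectors (a sanity check we add; dimension and seed are arbitrary):

```python
import math
import random

rng = random.Random(2)
m = 50
x = [rng.gauss(0.0, 1.0) for _ in range(m)]
y = [rng.gauss(0.0, 1.0) for _ in range(m)]

nx = math.sqrt(sum(a * a for a in x))
ny = math.sqrt(sum(b * b for b in y))
d2 = sum((a - b) ** 2 for a, b in zip(x, y))

# law-of-cosines form of cos(theta_m)
cos_theta = (nx * nx + ny * ny - d2) / (2.0 * nx * ny)
# correlation form R_m from (4)
r_m = sum(a * b for a, b in zip(x, y)) / (nx * ny)
print(abs(cos_theta - r_m))  # zero up to rounding error
```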
Many authors investigated limit theorems for the sums of random vectors when their dimension tends to infinity, see, e.g., Prokhorov [18]. In (6) and (7), the dimension m of the vectors X m and Y m tends to infinity.
Now, we consider the correlation coefficient of vectors $\mathbf{X}_m$ and $\mathbf{Y}_m$, where the non-random dimension m is replaced by a random dimension $N_n \in \mathbb{N}_+ = \{1, 2, \ldots\}$ depending on some natural parameter $n \in \mathbb{N}_+$, with $N_n$ independent of $\mathbf{X}_m$ and $\mathbf{Y}_m$ for any $m, n \in \mathbb{N}_+$. Define

$$R_{N_n} = \frac{\sum_{k=1}^{N_n} X_k Y_k}{\sqrt{\sum_{k=1}^{N_n} X_k^2}\,\sqrt{\sum_{k=1}^{N_n} Y_k^2}}.$$

3. Statistical Models with a Random Number of Observations

Let $X_1, X_2, \ldots \in \mathbb{R} = (-\infty, \infty)$ and $N_1, N_2, \ldots \in \mathbb{N}_+ = \{1, 2, \ldots\}$ be random variables on the same probability space $(\Omega, \mathcal{A}, P)$. Let $N_n$ be the random size of the underlying sample, i.e., the random number of observations, which depends on the parameter $n \in \mathbb{N}_+$. We suppose for each $n \in \mathbb{N}_+$ that $N_n \in \mathbb{N}_+$ is independent of the random variables $X_1, X_2, \ldots$ and that $N_n \to \infty$ in probability as $n \to \infty$. Let $T_m := T_m(X_1, \ldots, X_m)$ be some statistic of a sample with non-random sample size $m \in \mathbb{N}_+$. Define the random variable $T_{N_n}$ for every $n \in \mathbb{N}_+$:

$$T_{N_n}(\omega) := T_{N_n(\omega)}\big(X_1(\omega), \ldots, X_{N_n(\omega)}(\omega)\big), \quad \omega \in \Omega,$$

i.e., $T_{N_n}$ is the statistic obtained from a random sample $X_1, X_2, \ldots, X_{N_n}$.
The randomness of the sample size may crucially change asymptotic properties of T N n , see, e.g., Gnedenko [7] or Gnedenko and Korolev [19].

3.1. Random Sums

Many models lead to random sums and random means

$$S_{N_n} = \sum_{k=1}^{N_n} X_k \quad \text{and} \quad M_{N_n} = \frac{1}{N_n} \sum_{k=1}^{N_n} X_k. \qquad (14)$$
A fundamental introduction to asymptotic distributions of random sums is given in Döbler [20].
It is worth mentioning that the choice of scaling factor for $S_{N_n}$ affects the type of the limit distribution. In fact, consider the random sum $S_{N_n}$ given in (14). For the sake of convenience, let $X_1, X_2, \ldots$ be independent standard normal random variables and let $N_n \in \mathbb{N}_+$ be geometrically distributed with $E(N_n) = n$ and independent of $X_1, X_2, \ldots$. Then, one has

$$P\left(\frac{1}{\sqrt{N_n}}\, S_{N_n} \leq x\right) = \int_{-\infty}^x \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}\, du \quad \text{for all } n \in \mathbb{N}, \qquad (15)$$

$$P\left(\frac{1}{\sqrt{E(N_n)}}\, S_{N_n} \leq x\right) \to \int_{-\infty}^x \frac{1}{\sqrt{2}}\, e^{-\sqrt{2}\,|u|}\, du \quad \text{as } n \to \infty, \qquad (16)$$

$$P\left(\frac{\sqrt{E(N_n)}}{N_n}\, S_{N_n} \leq x\right) \to \int_{-\infty}^x \big(2 + u^2\big)^{-3/2}\, du \quad \text{as } n \to \infty. \qquad (17)$$

We have three different limit distributions. The suitably scaled geometric sum $S_{N_n}$ is standard normally distributed or tends to the Laplace distribution with variance 1, depending on whether we take the random scaling factor $1/\sqrt{N_n}$ or the non-random scaling factor $1/\sqrt{E(N_n)}$, respectively. Moreover, we get the Student's t-distribution with two degrees of freedom as the limit distribution if we use the mixed scaling factor $\sqrt{E(N_n)}/N_n$. Similar results also hold for the normalized random mean $M_{N_n} = \frac{1}{N_n} S_{N_n}$.
Assertion (15) is obtained by conditioning and the stability of the normal law. Moreover, using Stein’s method, quantitative Berry–Esseen bounds in (15) and (16) for arbitrary centered random variables X 1 with E ( | X 1 | 3 ) < were proved in (Chen et al. [21], Theorem 10.6), (Döbler [20] Theorems 2.5 and 2.7) and (Pike and Ren [22] Theorem 3), respectively. Statement (17) follows from (Bening and Korolev [23] Theorem 2.1).
First-order asymptotic expansions were obtained for the distribution functions of the random sample mean and the random sample median constructed from samples with two different random sizes in Bening et al. [24] and in the conference paper Bening et al. [25]. The authors make use of the rate of convergence of $P(N_n/g_n \leq x)$ to the limit distribution $H(x)$ with some normalizing sequence $g_n$. In Christoph et al. [12], second-order expansions for the normalized random sample sizes are proved, see Propositions 1 and 2 below. These results allow for proving second-order asymptotic expansions for the random sample mean in Christoph et al. [12] and for the random sample median in Christoph et al. [13].
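The three limit laws (15)–(17) can be illustrated by simulating geometric random sums. By the stability of the normal law, given $N_n$ the sum $S_{N_n}$ has the same distribution as $\sqrt{N_n}\, Z$ for a standard normal Z, so one normal draw per trial suffices. The sketch below (ours; the parameters and evaluation point are arbitrary) compares the three empirical probabilities with $\Phi$, the Laplace CDF, and the Student $t_2$ CDF:

```python
import math
import random

def geometric(n, rng):
    # success probability 1/n on {1, 2, ...}, so E N_n = n
    u = 1.0 - rng.random()
    return 1 + int(math.log(u) / math.log(1.0 - 1.0 / n))

rng = random.Random(3)
n, trials, x = 200, 100_000, 0.8
hits = [0, 0, 0]
for _ in range(trials):
    N = geometric(n, rng)
    s = math.sqrt(N) * rng.gauss(0.0, 1.0)   # S_N given N ~ sqrt(N) * N(0,1)
    hits[0] += s / math.sqrt(N) <= x          # random scaling, see (15)
    hits[1] += s / math.sqrt(n) <= x          # non-random scaling, see (16)
    hits[2] += math.sqrt(n) * s / N <= x      # mixed scaling, see (17)

normal = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
laplace = 1.0 - 0.5 * math.exp(-math.sqrt(2.0) * x)
student2 = 0.5 + x / (2.0 * math.sqrt(2.0 + x * x))
print(hits[0] / trials, normal)     # identical law for every n
print(hits[1] / trials, laplace)    # Laplace limit
print(hits[2] / trials, student2)   # Student t_2 limit
```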

3.2. Transfer Proposition from Non-Random to Random Sample Sizes

Consider now the statistic $T_{N_n} = T_{N_n}(\mathbf{X}_{N_n}, \mathbf{Y}_{N_n})$, where the dimension of the vectors $\mathbf{X}_{N_n}, \mathbf{Y}_{N_n}$ is a random number $N_n \in \mathbb{N}_+$.
In order to avoid overly long expressions while preserving the necessary accuracy, we limit ourselves to obtaining limit distributions and terms of order $m^{-1}$ in the following non-asymptotic approximations, with bounds of order $m^{-a}$ for some $a > 1$.
We suppose that the following condition on the statistic $T_m = T_m(\mathbf{X}_m, \mathbf{Y}_m)$ with $E\,T_m = 0$ is met for a non-random sample size $m \in \mathbb{N}_+$:
Condition 1.
There exist a differentiable bounded function $f_2(x)$ with $\sup_x |x\, f_2'(x)| \leq c_0$ and real numbers $a > 1$, $C_1 > 0$ such that for all integer $m \geq 1$

$$\sup_x \left| P\big(m^{\gamma}\, T_m \leq x\big) - \Phi(x) - m^{-1} f_2(x) \right| \leq C_1\, m^{-a}, \qquad (18)$$

where $\gamma \in \{-1/2,\, 0,\, 1/2\}$.
Remark 1.
Relations (10) and (13) give the examples of statistics such that Condition 1 is met. For other examples of multivariate statistics of this kind, see Chapters 14–16 in Fujikoshi et al. [2].
Suppose that the limiting behavior of the distribution functions of the normalized random size $N_n \in \mathbb{N}_+$ is described by the following condition.
Condition 2.
There exist a distribution function $H(y)$ with $H(0+) = 0$, a function of bounded variation $h_2(y)$, a sequence $g_n$ with $0 < g_n \uparrow \infty$, and real numbers $b > 0$ and $C_2 > 0$ such that for all integer $n \geq 1$

$$\sup_{y \geq 0} \left| P\big(g_n^{-1} N_n \leq y\big) - H(y) \right| \leq C_2\, n^{-b}, \quad 0 < b \leq 1, \\
\sup_{y \geq 0} \left| P\big(g_n^{-1} N_n \leq y\big) - H(y) - n^{-1} h_2(y) \right| \leq C_2\, n^{-b}, \quad b > 1. \qquad (19)$$
Remark 2.
In Propositions 1 and 2 below, we get the examples of discrete random variables N n such that Condition 2 is met.
Conditions 1 and 2 allow us to construct asymptotic expansions for the distributions of randomly normalized statistics on the basis of approximation results for non-randomly normalized statistics (see relation (18)) and for the random size of the underlying sample (see relation (19)). As a result, we obtain the following transfer theorem.
Theorem 1.
Let $|\gamma| \leq K < \infty$ and let both Conditions 1 and 2 be satisfied. Then, the following inequality holds for all $n \in \mathbb{N}_+$:

$$\sup_{x \in \mathbb{R}} \left| P\big(g_n^{\gamma}\, T_{N_n} \leq x\big) - G_n(x, 1/g_n) \right| \leq C_1\, E\big(N_n^{-a}\big) + (C_3 D_n + C_4)\, n^{-b}, \qquad (20)$$

$$G_n(x, 1/g_n) = \int_{1/g_n}^{\infty} \left( \Phi(x y^{\gamma}) + \frac{f_2(x y^{\gamma})}{g_n\, y} \right) d\left( H(y) + \frac{h_2(y)}{n} \right), \qquad (21)$$

$$D_n = \sup_x \int_{1/g_n}^{\infty} \left| \frac{\partial}{\partial y} \left( \Phi(x y^{\gamma}) + \frac{f_2(x y^{\gamma})}{y\, g_n} \right) \right| dy, \qquad (22)$$

where $a > 1$, $b > 0$, $f_2(z)$, $h_2(y)$ are given in (18) and (19). The constants $C_1, C_3, C_4$ do not depend on n.
Remark 3.
Later, we use only the cases γ { 1 / 2 , 0 , 1 / 2 } .
Remark 4.
The domain $[1/g_n, \infty)$ of integration in (21) depends on $g_n$. Thus, it is not clear how $G_n(x, 1/g_n)$ can be represented as a polynomial in $g_n^{-1}$ and $n^{-1}$. To overcome this problem (see (26)), we prove the following theorem.
Theorem 2.
Under the conditions of Theorem 1 and the following additional conditions on the functions $H(\cdot)$ and $h_2(\cdot)$, depending on the convergence rate b in (19):

$$H(1/g_n) \leq c_1\, g_n^{-b}, \quad b > 0, \qquad (23)$$

$$\mathrm{(i)}: \int_0^{1/g_n} y^{-1}\, dH(y) \leq c_2\, g_n^{-b+1}, \quad \mathrm{(ii)}: h_2(0) = 0 \ \text{and} \ |h_2(1/g_n)| \leq c_3\, n\, g_n^{-b}, \\ \mathrm{(iii)}: \int_0^{1/g_n} y^{-1}\, |h_2(y)|\, dy \leq c_4\, n\, g_n^{-b}, \quad \text{for } b > 1, \qquad (24)$$

we obtain for the function $G_n(x, 1/g_n)$ defined in (21):

$$\sup_x \left| G_n(x, 1/g_n) - G_{n,2}(x) \right| \leq C\, g_n^{-b} + \sup_x |I_1(x, n)|\, I_{\{b \leq 1\}}(b) + \sup_x |I_2(x, n)| \qquad (25)$$

with

$$G_{n,2}(x) = \begin{cases} \int_0^{\infty} \Phi(x y^{\gamma})\, dH(y), & 0 < b < 1, \\[4pt] \int_0^{\infty} \Phi(x y^{\gamma})\, dH(y) + \dfrac{1}{g_n} \int_0^{\infty} \dfrac{f_2(x y^{\gamma})}{y}\, dH(y), & b = 1, \\[4pt] \int_0^{\infty} \Phi(x y^{\gamma})\, dH(y) + \dfrac{1}{g_n} \int_0^{\infty} \dfrac{f_2(x y^{\gamma})}{y}\, dH(y) + \dfrac{1}{n} \int_0^{\infty} \Phi(x y^{\gamma})\, d h_2(y), & b > 1, \end{cases} \qquad (26)$$

$$I_1(x, n) = \int_{1/g_n}^{\infty} \frac{f_2(x y^{\gamma})}{g_n\, y}\, dH(y) \ \text{ for } b \leq 1 \quad \text{and} \quad I_2(x, n) = \int_{1/g_n}^{\infty} \frac{f_2(x y^{\gamma})}{g_n\, n\, y}\, d h_2(y) \ \text{ for } b > 1.$$
Remark 5.
The additional conditions (23) and (24) guarantee that the integration range of the integrals in (26) can be extended from $[1/g_n, \infty)$ to $(0, \infty)$.
Theorems 1 and 2 are proved in Appendix A.1.

4. Auxiliary Propositions and Lemmas

Consider the standardized correlation coefficient (5), having density (6) with correcting real constant $c = 0$, and the standardized angle $\sqrt{m}\,(\theta_m - \pi/2)$, see (12). By (10) and (13), for $m \geq 3$ we have

$$\sup_x \left| P\big(\sqrt{m}\; R_m \leq x\big) - \Phi(x) - \frac{(x^3 - 5x)}{4m}\,\varphi(x) \right| \leq C_1\, m^{-2}, \quad m \in \mathbb{N}_+, \qquad (27)$$

and, for the angle $\theta_m$ between the vectors, for $|x| \leq \pi\sqrt{m}/2$,

$$\sup_x \left| P\big(\sqrt{m}\,(\theta_m - \tfrac{\pi}{2}) \leq x\big) - \Phi(x) - \frac{(1/3)\,x^3 - 5x}{4m}\,\varphi(x) \right| \leq C_1\, m^{-2}, \quad m \in \mathbb{N}_+, \qquad (28)$$

where (27) and (28) are trivial for $m = 1$ and $m = 2$, and $C_1$ does not depend on m.
Suppose $f_2(x; a) = (a x^3 - 5x)\,\varphi(x)/4$ with $a = 1$ or $a = 1/3$ when (27) or (28) is considered, respectively. Since a product of a polynomial in x with $\varphi(x)$ is always bounded, numerical calculus leads to

$$\sup_x |x\, f_2'(x; a)| = \sup_x \left| x\, \big(a x^4 - (3a + 5)\, x^2 + 5\big) \right| \varphi(x)/4 \leq 0.4.$$

Condition 1 of the transfer Theorem 1 is satisfied for the statistics $R_m$ and $\theta_m$ with $c_0 = 0.4$ and $a = 2$ (here a is the rate exponent in (18), not the coefficient a in $f_2(x; a)$).
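The numerical bound $\sup_x |x f_2'(x; a)| \leq 0.4$ can be reproduced on a fine grid (our check of the stated constant $c_0 = 0.4$; the grid width is arbitrary but covers the region where the function is non-negligible):

```python
import math

def x_f2_prime(x, a):
    # x * d/dx f_2(x; a), where f_2(z; a) = (a z^3 - 5 z) phi(z) / 4,
    # so that f_2'(x) = -(a x^4 - (3a + 5) x^2 + 5) phi(x) / 4
    phi = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    return -x * (a * x ** 4 - (3.0 * a + 5.0) * x * x + 5.0) * phi / 4.0

grid = [i / 1000.0 for i in range(-8000, 8001)]
sup_a1 = max(abs(x_f2_prime(x, 1.0)) for x in grid)        # case (27), a = 1
sup_a13 = max(abs(x_f2_prime(x, 1.0 / 3.0)) for x in grid)  # case (28), a = 1/3
print(sup_a1, sup_a13)  # both suprema stay below c_0 = 0.4
```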
Next, we estimate D n ( x ) defined in (22).
Lemma 1.
Let $g_n$ be a sequence with $0 < g_n \uparrow \infty$ as $n \to \infty$. Then, with some $0 < c(\gamma, a) < \infty$, we obtain for $a = 1$ and $a = 1/3$:

$$D_n = \sup_x \int_{1/g_n}^{\infty} \left| \frac{\partial}{\partial y} \left( \Phi(x y^{\gamma}) + \frac{f_2(x y^{\gamma}; a)}{y\, g_n} \right) \right| dy \leq \frac{1}{2} + \frac{c(\gamma, a)}{4}.$$
In the next subsection, we consider the cases when the random dimension N n is negative binomial distributed with success probability 1 / n .

4.1. Negative Binomial Distribution as Random Dimension of the Normal Vectors

Let the random dimension $N_n(r)$ of the underlying normal vectors be negative binomially distributed (shifted by 1) with parameters $1/n$ and $r > 0$, having probability mass function

$$P\big(N_n(r) = j\big) = \frac{\Gamma(j + r - 1)}{\Gamma(j)\,\Gamma(r)} \left(\frac{1}{n}\right)^{r} \left(1 - \frac{1}{n}\right)^{j-1}, \quad j = 1, 2, \ldots, \qquad (29)$$

with $E\big(N_n(r)\big) = r(n - 1) + 1$. Then, $P\big(N_n(r)/g_n \leq x\big)$ tends to the Gamma distribution function $G_{r,r}(x)$ with identical shape and rate parameters $r > 0$, having density

$$g_{r,r}(x) = \frac{r^r}{\Gamma(r)}\, x^{r-1}\, e^{-r x}\, I_{(0, \infty)}(x), \quad x \in \mathbb{R}. \qquad (30)$$

If the statistic $T_m$ is asymptotically normal, the limit distribution of the standardized statistic $T_{N_n(r)}$ with random size $N_n(r)$ is Student's t-distribution $S_{2r}(x)$, having density

$$s_{\nu}(x) = \frac{\Gamma\big((\nu + 1)/2\big)}{\sqrt{\nu \pi}\;\Gamma(\nu/2)} \left(1 + \frac{x^2}{\nu}\right)^{-(\nu + 1)/2}, \quad \nu > 0, \ x \in \mathbb{R}, \qquad (31)$$

with $\nu = 2r$, see Bening and Korolev [23] or Schluter and Trede [26].
Proposition 1.
Let $r > 0$, let the discrete random variable $N_n(r)$ have probability mass function (29), and let $g_n := E\, N_n(r) = r(n - 1) + 1$. For $x > 0$ and all $n \in \mathbb{N}$, there exists a real number $C_2(r) > 0$ such that

$$\sup_{x \geq 0} \left| P\left(\frac{N_n(r)}{g_n} \leq x\right) - G_{r,r}(x) - \frac{h_{2;r}(x)}{n} \right| \leq C_2(r)\, n^{-\min\{r, 2\}}, \qquad (32)$$

where

$$h_{2;r}(x) = \begin{cases} 0, & \text{for } r \leq 1, \\[4pt] g_{r,r}(x)\, \dfrac{(x - 1)(2 - r) + 2\, Q_1(g_n x)}{2r}, & \text{for } r > 1, \end{cases} \qquad (33)$$

$$Q_1(y) = 1/2 - (y - [y]) \quad \text{and} \quad [\,\cdot\,] \text{ denotes the integer part of a number.}$$

Figure 1 shows the approximation of $P\big(N_n(r) \leq (r(n-1) + 1)\, x\big)$ by $G_{2,2}(x)$ and by $G_{2,2}(x) + h_2(x)/n$.
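Proposition 1 can be illustrated for $r = 2$, where $N_n(2)$ is the number of trials up to the second success of a Bernoulli scheme with success probability $1/n$, minus 1, i.e., a sum of two geometric variables minus 1. The sketch below (ours, with arbitrary parameters) compares the empirical CDF of $N_n(2)/g_n$ with the Gamma limit $G_{2,2}(x) = 1 - e^{-2x}(1 + 2x)$:

```python
import math
import random

def geometric(n, rng):
    # success probability 1/n on {1, 2, ...}
    u = 1.0 - rng.random()
    return 1 + int(math.log(u) / math.log(1.0 - 1.0 / n))

def nb_dimension(n, rng):
    # N_n(2): pmf (29) with r = 2 is the law of the number of trials up to
    # the 2nd success, shifted down by 1: a sum of two geometrics minus 1
    return geometric(n, rng) + geometric(n, rng) - 1

rng = random.Random(4)
n, trials, x = 100, 100_000, 1.0
g_n = 2 * (n - 1) + 1                       # g_n = E N_n(2) = r(n-1)+1
hits = sum(nb_dimension(n, rng) <= g_n * x for _ in range(trials))
gamma_22 = 1.0 - math.exp(-2.0 * x) * (1.0 + 2.0 * x)  # G_{2,2}(x) at x = 1
print(hits / trials, gamma_22)  # empirical CDF vs. Gamma limit
```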
Remark 6.
The convergence rate for $r \leq 1$ is given in Bening et al. [24] or Gavrilenko et al. [27]. The Edgeworth expansion for $r > 1$ is proved in Christoph et al. [12], Theorem 1. The jumps of the sample size $N_n(r)$ affect only the function $Q_1(\cdot)$ in the term $h_{2;r}(\cdot)$.
The negative binomial random variable $N_n(r)$ satisfies Condition 2 of the transfer Theorem 1 with $H(x) = G_{r,r}(x)$, $h_2(x) = h_{2;r}(x)$, $g_n = E\, N_n(r) = r(n-1) + 1$ and $b = \min\{r, 2\}$.
Lemma 2.
In Theorem 2, the additional conditions (23) and (24) are satisfied with $H(x) = G_{r,r}(x)$, $h_2(x) = h_{2;r}(x)$, $g_n = E\, N_n(r) = r(n-1) + 1$ and $b = \min\{r, 2\}$. Moreover, one has, for $\gamma \in \{-1/2, 0, 1/2\}$ and $f_2(z; a) = (a z^3 - 5z)\,\varphi(z)/4$ with $a = 1$ or $a = 1/3$:

$$|I_1(x, n)| = \left| \int_{1/g_n}^{\infty} \frac{f_2(x y^{\gamma}; a)}{g_n\, y}\, d G_{r,r}(y) \right| \leq c_5\, g_n^{-r}, \quad r < 1, \\
\left| \int_{1/n}^{\infty} \frac{f_2(x y^{\gamma}; a)}{n\, y}\, d G_{1,1}(y) - n^{-1} f_2(x; a)\, \ln n \; I_{\{\gamma = 0\}}(\gamma) \right| \leq c_6\, n^{-1}, \quad r = 1, \qquad (34)$$

$$|I_2(x, n)| = \left| \int_{1/g_n}^{\infty} \frac{f_2(x y^{\gamma}; a)}{g_n\, n\, y}\, d h_{2;r}(y) \right| \leq \begin{cases} c_7\, g_n^{-r}, & r > 1, \ r \neq 2, \\[2pt] \big(c_7 + c_8\, \ln n \; I_{\{\gamma = 0\}}(\gamma)\big)\, g_n^{-2}, & r = 2. \end{cases} \qquad (35)$$

Furthermore, we have

$$0 \leq g_n^{-1} - (r n)^{-1} \leq (r - 1)\,(r n)^{-2}\, e^{1/2} \quad \text{for } r > 1, \ n \geq 2. \qquad (36)$$
In addition to the expansion for $N_n(r)$, a bound on $E\big((N_n(r))^{-a}\big)$ is required, where $m^{-a}$ is the rate of convergence of the Edgeworth expansion for $T_m$, see (18).
Lemma 3.
Let $r > 0$, $\alpha > 0$ and let the random variable $N_n(r)$ be defined by (29). Then,

$$E\big((N_n(r))^{-\alpha}\big) \leq C(r) \begin{cases} n^{-\min\{r, \alpha\}}, & r \neq \alpha, \\ \ln(n)\, n^{-\alpha}, & r = \alpha, \end{cases} \qquad (37)$$

and the convergence rate in the case $r = \alpha$ cannot be improved.

4.2. Maximum of n Independent Discrete Pareto Random Variables Is the Dimension of the Normal Vectors

Let $Y(s) \in \mathbb{N}$ be discrete Pareto II distributed with parameter $s > 0$, having probability mass and distribution functions

$$P\big(Y(s) = k\big) = \frac{s}{s + k - 1} - \frac{s}{s + k} \quad \text{and} \quad P\big(Y(s) \leq k\big) = \frac{k}{s + k}, \quad k \in \mathbb{N}, \qquad (38)$$

which is a particular class of the general model of discrete Pareto distributions, obtained by discretization of continuous Pareto II (Lomax) distributions on the integers, see Buddana and Kozubowski [28].
Now, let $Y_1(s), Y_2(s), \ldots$ be independent random variables with the common distribution (38). Define for $n \in \mathbb{N}$ and $s > 0$ the random variable

$$N_n(s) = \max_{1 \leq j \leq n} Y_j(s) \quad \text{with} \quad P\big(N_n(s) \leq k\big) = \left(\frac{k}{s + k}\right)^{n}, \quad n \in \mathbb{N}. \qquad (39)$$

It should be noted that the distribution of $N_n(s)$ is extremely spread out over the positive integers.
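Because $P(N_n(s) \leq k) = (k/(s+k))^n$, the maximum $N_n(s)$ can be sampled with a single uniform draw by inverting this CDF. The sketch below (ours; parameters are arbitrary) checks the heavy-tailed limit $H_s(y) = e^{-s/y}$ of $N_n(s)/n$ stated in Proposition 2 below:

```python
import math
import random

def discrete_pareto_max(n, s, rng):
    # N_n(s) has CDF (k/(s+k))^n by (39); invert it with one uniform draw
    v = rng.random() ** (1.0 / n)           # per-draw CDF value k/(s+k)
    return max(1, math.ceil(s * v / (1.0 - v)))  # smallest k with k/(s+k) >= v

rng = random.Random(5)
n, s, trials, y = 200, 1.0, 100_000, 1.5
hits = sum(discrete_pareto_max(n, s, rng) <= n * y for _ in range(trials))
h_s = math.exp(-s / y)                      # limit H_s(y) = exp(-s/y)
print(hits / trials, h_s)  # empirical CDF of N_n(s)/n vs. inverse exponential
```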
In Christoph et al. [12], the following Edgeworth expansion was proved:
Proposition 2.
Let the discrete random variable $N_n(s)$ have distribution function (39). For $x > 0$ and fixed $s > 0$, there exists for all $n \in \mathbb{N}$ a real number $C_3(s) > 0$ such that

$$\sup_{y > 0} \left| P\left(\frac{N_n(s)}{n} \leq y\right) - H_s(y) - \frac{h_{2;s}(y)}{n} \right| \leq C_3(s)\, n^{-2}, \qquad (40)$$

$$H_s(y) = e^{-s/y} \quad \text{and} \quad h_{2;s}(y) = \frac{s\, e^{-s/y}\, \big(s - 1 + 2\, Q_1(n y)\big)}{2\, y^2}, \quad y > 0, \qquad (41)$$

where $Q_1(y)$ is defined in (33).
Remark 7.
The continuous function $H_s(y) = e^{-s/y}\, I_{(0,\infty)}(y)$ with parameter $s > 0$ is the distribution function of the inverse exponential random variable $W(s) = 1/V(s)$, where $V(s)$ is exponentially distributed with rate parameter $s > 0$. Both $H_s(y)$ and $P(N_n(s) \leq y)$ are heavy-tailed with shape parameter 1.
Remark 8.
Therefore, $E\, N_n(s) = \infty$ for all $n \in \mathbb{N}$ and $E\, W(s) = \infty$. Moreover:
  • the first absolute pseudo moment $\nu_1 = \int_0^{\infty} x \left| d\big(P(N_n(s)/n \leq x) - e^{-s/x}\big) \right| = \infty$,
  • the absolute difference moment $\chi_u = \int_0^{\infty} x^{u-1} \left| P(N_n(s)/n \leq x) - e^{-s/x} \right| dx < \infty$
    for $1 \leq u < 2$, see Lemma 2 in Christoph et al. [12].
On pseudo moments and some of their generalizations, see Chapter 2 in Christoph and Wolf [29].
Lemma 4.
In the transfer Theorem 2, the additional conditions (23) and (24) are satisfied with $H(y) = H_s(y) = e^{-s/y}$, $h_2(y) = h_{2;s}(y) = s\, e^{-s/y}\big(s - 1 + 2 Q_1(n y)\big)/(2 y^2)$ for $y > 0$, $g_n = n$ and $b = 2$. Moreover, one has for $|\gamma| \leq K < \infty$ and $f_2(z; a) = (a z^3 - 5z)\,\varphi(z)/4$ with $a = 1$ or $a = 1/3$:

$$|I_2(x, n)| = \left| \int_{1/n}^{\infty} \frac{f_2(x y^{\gamma}; a)}{n^2\, y}\, d h_{2;s}(y) \right| \leq C(s)\, n^{-2}.$$
Lemma 5.
For the random size $N_n(s)$ with distribution (39), with reals $s \geq s_0 > 0$ for arbitrarily small $s_0 > 0$, and $n \geq 1$, we have

$$E\big((N_n(s))^{-\alpha}\big) \leq C(s)\, n^{-\alpha}.$$
The Lemmas are proved in Appendix A.2.

5. Main Results

Consider the sample correlation coefficient $R_m = R_m(\mathbf{X}_m, \mathbf{Y}_m)$ given in (4) and the two statistics $R_m^* = \sqrt{m}\, R_m$ and $R_m^{**} = m\, R_m$, which differ from $R_m$ by scaling factors. Hence, by (10),

$$P\big(\sqrt{m}\, R_m \leq x\big) = P\big(R_m^* \leq x\big) = P\left(\frac{1}{\sqrt{m}}\, R_m^{**} \leq x\right) = \Phi(x) + \frac{(x^3 - 5x)}{4m}\,\varphi(x) + r(m) \qquad (42)$$

with $|r(m)| \leq C\, m^{-2}$.
Let $\theta_m$ be the angle between the vectors $\mathbf{X}_m$ and $\mathbf{Y}_m$. Consider the statistics $\Theta_m = \theta_m - \pi/2$, $\Theta_m^* = \sqrt{m}\,(\theta_m - \pi/2)$ and $\Theta_m^{**} = m\,(\theta_m - \pi/2)$, which differ only in scaling. Then, by (13),

$$P\big(\sqrt{m}\, \Theta_m \leq x\big) = P\big(\Theta_m^* \leq x\big) = P\left(\frac{1}{\sqrt{m}}\, \Theta_m^{**} \leq x\right) = \Phi(x) + \frac{(1/3)\, x^3 - 5x}{4m}\,\varphi(x) + r^*(m) \qquad (43)$$

with $|r^*(m)| \leq C\, m^{-2}$.
Consider now the statistics R N n , R N n * and R N n * * as well as Θ N n , Θ N n * and Θ N n * * when the vectors have random dimension N n . The normalized statistics have different limit distributions as n .

5.1. The Random Dimension N n = N n ( r ) Is Negative Binomial Distributed

Let the random dimension $N_n(r)$ be negative binomially distributed with probability mass function (29) and $g_n = E\, N_n(r) = r(n-1) + 1$. "The negative binomial distribution is one of the two leading cases for count models; it accommodates the overdispersion typically observed in count data (which the Poisson model cannot)", see Schluter and Trede [26].
It follows from Theorems 1 and 2 and Proposition 1 that if limit distributions of $P\big(g_n^{\gamma}\, (N_n(r))^{1/2 - \gamma}\, R_{N_n(r)} \leq x\big)$ for $\gamma \in \{1/2,\, 0,\, -1/2\}$ exist, they are $\int_0^{\infty} \Phi(x y^{\gamma})\, d G_{r,r}(y)$, with densities given below in the proofs of the corresponding theorems:

$$\frac{r^r}{\sqrt{2\pi}\,\Gamma(r)} \int_0^{\infty} y^{r - 1/2}\, e^{-(x^2/2 + r)\, y}\, dy = s_{2r}(x) = \frac{\Gamma(r + 1/2)}{\sqrt{2 r \pi}\,\Gamma(r)} \left(1 + \frac{x^2}{2r}\right)^{-(r + 1/2)}, \quad \gamma = 1/2, \\
\varphi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}, \quad \gamma = 0, \\
l_1(x) = \frac{1}{\sqrt{2}}\, e^{-\sqrt{2}\,|x|} \ \text{ for } r = 1, \quad \gamma = -1/2, \qquad (44)$$

where in the case $\gamma = -1/2$, for $r \neq 1$, generalized Laplace distributions occur.

5.1.1. Student’s t-Distribution

We start with the case $\gamma = 1/2$ in Theorems 1 and 2. Consider the statistic $\bar{R}_{N_n(r)} = \sqrt{g_n}\, R_{N_n(r)}$. The limit distribution is Student's t-distribution $S_{2r}(x)$ with 2r degrees of freedom, with density (31).
Theorem 3.
Let $r > 0$ and let (29) be the probability mass function of the random dimension $N_n = N_n(r)$ of the vectors under consideration. If the representation (42) for the statistic $R_m$ and the inequality (32) with $g_n = E\, N_n(r) = r(n-1) + 1$ hold, then there exists a constant $C_r$ such that for all $n \in \mathbb{N}_+$

$$\sup_x \left| P\big(\sqrt{g_n}\, R_{N_n(r)} \leq x\big) - S_{2r;n}(x; 1) \right| \leq C_r \begin{cases} n^{-\min\{r, 2\}}, & r \neq 2, \\ \ln(n)\, n^{-2}, & r = 2, \end{cases}$$

where

$$S_{2r;n}(x; a) = S_{2r}(x) + \frac{s_{2r}(x)}{r\, n} \left( \frac{a x^3}{4} - \frac{10 r x + 5 x^3}{4\,(2r - 1)} + \frac{(2 - r)(x^3 + x)}{4\,(2r - 1)} \right). \qquad (45)$$

Moreover, the scaled angle $\theta_{N_n(r)}$ between the vectors $\mathbf{X}_{N_n(r)}$ and $\mathbf{Y}_{N_n(r)}$ admits the estimate

$$\sup_x \left| P\big(\sqrt{g_n}\,(\theta_{N_n(r)} - \pi/2) \leq x\big) - S_{2r;n}(x; 1/3) \right| \leq C_r \begin{cases} n^{-\min\{r, 2\}}, & r \neq 2, \\ \ln(n)\, n^{-2}, & r = 2, \end{cases}$$

where $S_{2r;n}(x; 1/3)$ is given in (45) with $a = 1/3$.
Figure 2 shows the advantage of the Chebyshev–Edgeworth expansion versus the limit law in approximating the empirical distribution function.
Remark 9.
The limit Student's t-distribution $S_{2r}(x)$ is symmetric and is a generalized hyperbolic distribution; it can be written in terms of the regularized incomplete beta function $I_z(a, b)$. For $x > 0$:

$$S_{2r}(x) = \int_{-\infty}^x s_{2r}(u)\, du = \frac{1}{2}\left(1 + I_{x^2/(x^2 + 2r)}\big(1/2,\, r\big)\right) \quad \text{and} \quad I_z(a, b) = \frac{\Gamma(a + b)}{\Gamma(a)\,\Gamma(b)} \int_0^z t^{a-1} (1 - t)^{b-1}\, dt.$$

Remark 10.
For integer values $\nu = 2r \in \{1, 2, \ldots\}$, the Student's t-distribution $S_{2r}(x)$ is computable in closed form:

$$\text{the Cauchy law } S_1(x) = \frac{1}{2} + \frac{1}{\pi}\arctan(x), \qquad S_2(x) = \frac{1}{2} + \frac{x}{2\sqrt{2 + x^2}},$$

$$S_3(x) = \frac{1}{2} + \frac{1}{\pi}\left(\frac{x}{\sqrt{3}\,(1 + x^2/3)} + \arctan\big(x/\sqrt{3}\big)\right) \quad \text{and} \quad S_4(x) = \frac{1}{2} + \frac{x\,(x^2 + 6)}{2\,(x^2 + 4)^{3/2}}.$$
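The closed forms in Remark 10 can be cross-checked by numerically integrating the density (31); since the density is symmetric, $S_\nu(x) = 1/2 + \int_0^x s_\nu(u)\,du$. The midpoint-rule sketch below (our verification; the evaluation point is arbitrary) confirms all four formulas:

```python
import math

def s_density(nu, x):
    # Student's t density (31)
    c = math.gamma((nu + 1.0) / 2.0) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2.0))
    return c * (1.0 + x * x / nu) ** (-(nu + 1.0) / 2.0)

def s_cdf_numeric(nu, x, steps=20_000):
    # F(x) = 1/2 + midpoint-rule integral of the symmetric density over [0, x]
    h = x / steps
    return 0.5 + h * sum(s_density(nu, (i + 0.5) * h) for i in range(steps))

closed_forms = {
    1: lambda x: 0.5 + math.atan(x) / math.pi,                    # Cauchy law
    2: lambda x: 0.5 + x / (2.0 * math.sqrt(2.0 + x * x)),
    3: lambda x: 0.5 + (x / (math.sqrt(3.0) * (1.0 + x * x / 3.0))
                        + math.atan(x / math.sqrt(3.0))) / math.pi,
    4: lambda x: 0.5 + x * (x * x + 6.0) / (2.0 * (x * x + 4.0) ** 1.5),
}
x0 = 1.3
for nu, cdf in sorted(closed_forms.items()):
    print(nu, cdf(x0), s_cdf_numeric(nu, x0))  # the two columns agree
```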
Remark 11.
If the dimension of the vectors has the geometric distribution $N_n(1)$, then the asymptotic distribution of the sample correlation coefficient is the Student law $S_2(x)$ with two degrees of freedom.
Remark 12.
The Cauchy limit distribution occurs when the dimension of the vectors has the distribution $N_n(1/2)$.
Remark 13.
The Student's t-distributions $S_{2r}(x)$ are heavy-tailed, and their moments of order $\alpha \geq 2r$ do not exist.

5.1.2. Standard Normal Distribution

Let $\gamma = 0$ in Theorems 1 and 2, and examine the statistics $R_{N_n(r)}^* = \sqrt{N_n(r)}\, R_{N_n(r)}$ and $\Theta_{N_n(r)}^* = \sqrt{N_n(r)}\,\big(\theta_{N_n(r)} - \pi/2\big)$.
Theorem 4.
Let $r > 0$ and let $N_n = N_n(r)$ be the random vector dimension having probability mass function (29). If the representation (42) for the statistic $R_m$ and the inequality (32) with $g_n = E\, N_n(r) = r(n-1) + 1$ hold, then there exists a constant $C_r$ such that for all $n \in \mathbb{N}_+$

$$\sup_x \left| P\big(\sqrt{N_n(r)}\, R_{N_n(r)} \leq x\big) - \Phi_{n;2}(x; 1) \right| \leq C_r \begin{cases} n^{-\min\{r, 2\}}, & r \neq 2, \\ \ln(n)\, n^{-2}, & r = 2, \end{cases} \qquad (46)$$

where

$$\Phi_{n;2}(x; a) = \Phi(x) + \frac{\varphi(x)}{n} \left( \frac{(a x^3 - 5x)\, \ln n}{4}\; I_{\{r = 1\}}(r) + \frac{\Gamma(r - 1)\,(a x^3 - 5x)}{4\,\Gamma(r)}\; I_{\{r > 1\}}(r) \right). \qquad (47)$$

Moreover, the scaled angle statistic $\Theta_{N_n(r)}^*$ of the vectors $\mathbf{X}_{N_n(r)}$ and $\mathbf{Y}_{N_n(r)}$ admits the estimate

$$\sup_x \left| P\big(\sqrt{N_n(r)}\,(\theta_{N_n(r)} - \pi/2) \leq x\big) - \Phi_{n;2}(x; 1/3) \right| \leq C_r \begin{cases} n^{-\min\{r, 2\}}, & r \neq 2, \\ \ln(n)\, n^{-2}, & r = 2, \end{cases}$$

where $\Phi_{n;2}(x; 1/3)$ is given in (47) with $a = 1/3$.
Figure 3 shows that the second order Chebyshev–Edgeworth expansion approximates the empirical distribution function better than the limit normal distribution.
Remark 14.
When the distribution function of a statistic $T_m$ without standardization tends to the standard normal distribution $\Phi(x)$, i.e., $P(T_m \leq x) \to \Phi(x)$, then the limit law for $P(T_{N_n} \leq x)$ remains the standard normal distribution $\Phi(x)$.

5.1.3. Generalized Laplace Distribution

Finally, we use γ = −1/2 in Theorems 1 and 2, examining the statistic g_n^{−1/2} R**_{N_n(r)}. Theorems 1 and 2 state that if a limit distribution of P( g_n^{−1/2} R**_{N_n} ≤ x ) as n → ∞ exists, then it has to be a scale mixture of normal distributions with zero mean and gamma mixing distribution:
L_r(x) = ∫₀^∞ Φ( x y^{−1/2} ) dG_{r,r}(y)
having density, see formula (A9) in the proof of Theorem 5:
l_r(x) = ( r^r/Γ(r) ) ∫₀^∞ φ( x y^{−1/2} ) y^{r−3/2} e^{−ry} dy = ( 2 r^r/(Γ(r) √(2π)) ) ( |x|/√(2r) )^{r−1/2} K_{r−1/2}( √(2r) |x| ),
where K_α(u) is the α-order Macdonald function, i.e., the α-order modified Bessel function of the third kind. See, e.g., Oldham et al. [30], Chapter 51, or Kotz et al. [31], Appendix, for properties of these functions.
For integer r = 1, 2, 3, these densities l_r(x), the so-called Sargan densities, and their distribution functions are computable in closed form:
l₁(x) = (1/√2) e^{−√2 |x|} and L₁(x) = 1 − (1/2) e^{−√2 x}, x ≥ 0,
l₂(x) = (1/2 + |x|) e^{−2|x|} and L₂(x) = 1 − (1/2)(1 + x) e^{−2x}, x ≥ 0,
l₃(x) = (3√6/16) (1 + √6 |x| + 2x²) e^{−√6 |x|} and L₃(x) = 1 − (1/2 + 5√6 x/16 + 3x²/8) e^{−√6 x}, x ≥ 0,
where L_r(x) = 1 − L_r(−x) for x ≤ 0.
The standard Laplace distribution with variance 1 is L₁(x), with density l₁(x) given in (49). Therefore, the Sargan distributions are a kind of generalization of the standard Laplace distribution.
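A numerical sanity check of these closed forms (our own sketch; function names are hypothetical): each Sargan density l_r should coincide with the normal–gamma mixture integral in (48), here evaluated by Simpson quadrature after the smoothing substitution y = u²:

```python
import math

def phi(z):
    # standard normal density
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def sargan_mixture(x, r, b=8.0, steps=40000):
    # l_r(x) = r^r/Gamma(r) * integral of phi(x/sqrt(y)) y^{r-3/2} e^{-ry} dy,
    # rewritten with y = u^2 so the integrand is smooth at the origin
    h = b / steps
    def f(u):
        if u == 0.0:
            return 0.0
        return 2 * phi(x / u) * u ** (2 * r - 2) * math.exp(-r * u * u)
    total = f(0.0) + f(b)
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * f(k * h)
    return (r ** r / math.gamma(r)) * total * h / 3

def l1(x): return math.exp(-math.sqrt(2) * abs(x)) / math.sqrt(2)
def l2(x): return (0.5 + abs(x)) * math.exp(-2 * abs(x))
def l3(x):
    return (3 * math.sqrt(6) / 16) * (1 + math.sqrt(6) * abs(x) + 2 * x * x) \
           * math.exp(-math.sqrt(6) * abs(x))

for r, l in ((1, l1), (2, l2), (3, l3)):
    for x in (0.3, 1.0, 2.0):
        assert abs(sargan_mixture(x, r) - l(x)) < 1e-7
```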
Theorem 5.
Let r = 1, 2, 3 and let (29) be the probability mass function of the random dimension N_n = N_n(r) of the vectors under consideration. If the representation (42) for the statistic R_m and the inequality (32) for N_n(r) with g_n = E N_n(r) = r(n−1)+1 hold, then there exists a constant C_r such that for all n ∈ ℕ₊
sup_x | P( g_n^{−1/2} N_n(r) R_{N_n(r)} ≤ x ) − L_{n;2}(x; 1) | ≤ C_r · { n^{−min{r,2}}, r ≠ 2; ln(n) n^{−2}, r = 2 },
where
L_{n;2}(x; a) =
 L₁(x), r = 1,
 L₂(x) + ( (a x|x| − 5x/2)/(2(n−1)+1) ) e^{−2|x|}, r = 2,
 L₃(x) + ( 27/(24(n−1)+8) ) ( a x³/√6 − 5x|x|/6 − 5x/(6√6) ) e^{−√6|x|} + (9x²/n) ( 1/(12√6) + |x|/12 − x²/(6√6) ) e^{−√6|x|}, r = 3.
For arbitrary r > 0, the first-order approximation rate is given by:
sup_x | P( g_n^{−1/2} N_n(r) R_{N_n(r)} ≤ x ) − L_r(x) | ≤ C_r n^{−min{r,1}}.
Moreover, the scaled angle g_n^{−1/2} N_n(r) (θ_{N_n(r)} − π/2) between the vectors X_{N_n(r)} and Y_{N_n(r)} allows the estimate
sup_x | P( g_n^{−1/2} N_n(r) (θ_{N_n(r)} − π/2) ≤ x ) − L_{n;2}(x; 1/3) | ≤ C_r · { n^{−min{r,2}}, r ≠ 2; ln(n) n^{−2}, r = 2 },
where L_{n;2}(x; 1/3) is given in (51) with a = 1/3.
Figure 4 shows that the Chebyshev–Edgeworth expansion approximates the empirical distribution function better than the limit Laplace law.
Remark 15.
One can find the distribution functions L_r(x) for arbitrary r > 0 with formula 1.12.1.3 in Prudnikov et al. [32]:
L_r(x) = 1/2 + ( 2 r^r/(√(2π) Γ(r)) ) ∫₀^x ( |u|/√(2r) )^{r−1/2} K_{r−1/2}( √(2r) |u| ) du, x ≥ 0,
which that formula expresses through the combination K_{r−1/2}(√(2r) x) L_{r−3/2}(√(2r) x) + K_{r−3/2}(√(2r) x) L_{r−1/2}(√(2r) x), where L_α(x) are the modified Struve functions of order α; for properties of the modified Struve functions see, e.g., Oldham et al. [30], Section 57:13.
Remark 16.
The function (48) is the density of a mixture of normal distributions with zero mean and random variance W_r having the gamma distribution P(W_r ≤ y) = G_{r,r}(y); it is also given in Kotz et al. [31], Formula (4.1.32) with τ = r and σ = 1/√r, using Formula (A.0.4) with λ = r + 3/2 and the order-reflection formula K_{−α}(x) = K_α(x). Such a variance gamma model is studied in Madan and Seneta [33] for share market returns.
Remark 17.
A systematic exposition of the Laplace distribution, its numerous generalizations, and its diverse applications can be found in the useful and interesting monograph by Kotz et al. [31]. Here, these generalized Laplace distributions L₁(x), L₂(x) and L₃(x) are the leading terms in the approximations of the sample correlation coefficient R**_{N_n(r)} of two Gaussian vectors with negative binomially distributed random dimension N_n(r) and of the angle θ**_{N_n(r)} between these vectors.
Remark 18.
In Goldfeld and Quandt [34] and Missiakoulis [35], the Sargan densities l_r(x) and distribution functions L_r(x) for integer r = 1, 2, 3, … have been studied as an alternative to the normal law in econometric models because they are computable in closed form; see also Kotz et al. [31], Section 4.4.3 and the references therein.

5.2. The Random Dimension N n = N n ( s ) Is the Maximum of n Independent Discrete Pareto Random Variables

The random dimension N_n(s) has probability mass function (39). Since E N_n(s) = ∞, we choose g_n = n and consider again the cases γ = 1/2, γ = 0 and γ = −1/2.
It follows from Theorems 1 and 2 and Proposition 2 that if limit distributions of P( g_n^γ R_{N_n(s)} ≤ x ) for γ ∈ {1/2, 0, −1/2} exist, they are ∫₀^∞ Φ(x y^γ) dH_s(y), with densities given below in the proofs of the corresponding theorems:
(s/√(2π)) ∫₀^∞ y^{γ−2} e^{−( x² y^{2γ}/2 + s/y )} dy =
 l_{1/s}(x) = (√(2s)/2) e^{−√(2s)|x|}, γ = 1/2,
 φ(x) = (1/√(2π)) e^{−x²/2}, γ = 0,
 s₂*(x; s) = ( 1/(2√(2s)) ) (1 + x²/(2s))^{−3/2}, γ = −1/2,
where s₂*(x; s) is the density of the scaled Student's t-distribution S₂*(x; s) with 2 degrees of freedom, see Definition B37 in Jackman [36], p. 507. If Z has density s₂*(x; s), then Z/√s has the classical Student's t-distribution with two degrees of freedom.
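The scaling statement can be verified in one line (our own illustration, using the equivalent closed form S₂*(x; s) = 1/2 + x/(2√(2s + x²))):

```python
import math

def s2_cdf(x):
    # classical Student t distribution function with 2 degrees of freedom
    return 0.5 + x / (2 * math.sqrt(2 + x * x))

def S2_star(x, s):
    # scaled version: 1/2 + x / (2*sqrt(2s + x^2))
    return 0.5 + x / (2 * math.sqrt(2 * s + x * x))

# if Z has distribution function S2*(.; s), then Z/sqrt(s) is classical t(2)
for s in (0.5, 2.0, 7.5):
    for x in (-2.0, 0.3, 1.7):
        assert abs(S2_star(x, s) - s2_cdf(x / math.sqrt(s))) < 1e-12
```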

5.2.1. Laplace Distribution

We start with the case γ = 1/2 in Theorems 1 and 2. Consider the statistics √n R_{N_n(s)} and √n (θ_{N_n(s)} − π/2). The limit distribution is now the Laplace distribution
L_{1/s}(x) = 1/2 + (1/2) sign(x) ( 1 − e^{−√(2s)|x|} ) with density l_{1/s}(x) = (√(2s)/2) e^{−√(2s)|x|}.
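As an independent check of this limit law (an illustration with our own function names), the mixture ∫₀^∞ Φ(x√y) dH_s(y), rewritten with t = s/y so that H_s becomes a standard exponential law, can be integrated numerically and compared with L_{1/s}:

```python
import math

def Phi(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def laplace_cdf(x, s):
    # L_{1/s}(x) = 1/2 + (1/2) sign(x) (1 - exp(-sqrt(2s)|x|))
    sgn = (x > 0) - (x < 0)
    return 0.5 + 0.5 * sgn * (1 - math.exp(-math.sqrt(2 * s) * abs(x)))

def mixture_cdf(x, s, b=60.0, steps=60000):
    # integral of Phi(x*sqrt(s/t)) e^{-t} dt over (0, inf) by Simpson's rule;
    # the integrand tends to 1 (x>0), 0 (x<0) or 1/2 (x=0) as t -> 0+
    h = b / steps
    def f(t):
        return Phi(x * math.sqrt(s / t)) * math.exp(-t)
    f0 = 1.0 if x > 0 else (0.0 if x < 0 else 0.5)
    total = f0 + f(b)
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * f(k * h)
    return total * h / 3

for s in (0.5, 1.0, 4.0):
    for x in (-1.5, 0.7, 2.0):
        assert abs(mixture_cdf(x, s) - laplace_cdf(x, s)) < 1e-7
```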
Theorem 6.
Let s > 0 and let (39) be the probability mass function of the random dimension N_n = N_n(s) of the vectors under consideration. If the representation (42) for the statistic R_m and the inequality (32) with g_n = n hold, then there exists a constant C_s such that for all n ∈ ℕ₊
sup_x | P( √n R_{N_n(s)} ≤ x ) − L_{1/s;n}(x; 1) | ≤ C_s n^{−2},
where
L_{1/s;n}(x; a) = L_{1/s}(x) + ( l_{1/s}(x)/(8sn) ) ( 2s a x³ − (4 + s) x (1 + √(2s)|x|) ).
Moreover, the scaled angle √n (θ_{N_n(s)} − π/2) between the vectors X_{N_n(s)} and Y_{N_n(s)} allows the estimate
sup_x | P( √n (θ_{N_n(s)} − π/2) ≤ x ) − L_{1/s;n}(x; 1/3) | ≤ C_s n^{−2},
where L_{1/s;n}(x; 1/3) is given in (53) with a = 1/3.

5.2.2. Standard Normal Distribution

Let γ = 0 in Theorems 1 and 2 and examine the statistics R*_{N_n(s)} and Θ*_{N_n(s)} = √(N_n(s)) (θ_{N_n(s)} − π/2).
Theorem 7.
Let s > 0 and N_n = N_n(s) be the random vector dimension having probability mass function (39). If the representation (42) for the statistic R_m and the inequality (32) with g_n = n hold, then there exists a constant C_s such that, for all n ∈ ℕ₊,
sup_x | P( √(N_n(s)) R_{N_n(s)} ≤ x ) − Φ(x) − ( 1/(4sn) ) φ(x) (x³ − 5x) | ≤ C_s n^{−2}.
Moreover, the scaled angle θ N n ( s ) * between the vectors X N n ( s ) and Y N n ( s ) allows the estimate
sup_x | P( √(N_n(s)) (θ_{N_n(s)} − π/2) ≤ x ) − Φ(x) − ( 1/(4sn) ) φ(x) ( (1/3) x³ − 5x ) | ≤ C_s n^{−2}.

5.2.3. Scaled Student’s t-Distribution

Finally, we use γ = −1/2 in Theorems 1 and 2, examining the statistics n^{−1/2} N_n(s) R_{N_n(s)} and n^{−1/2} N_n(s) (θ_{N_n(s)} − π/2). The limit scaled Student's t-distribution S₂*(x; s) with two degrees of freedom is a scale mixture of the normal distribution with zero mean and mixing exponential distribution 1 − e^{−sy}, y ≥ 0, and it is representable in closed form, see (A15) below in the proof of Theorem 8:
∫₀^∞ Φ(x/√y) d e^{−s/y} = ∫₀^∞ Φ(x/√y) (s/y²) e^{−s/y} dy = ∫₀^∞ Φ(x√z) s e^{−sz} dz = ∫₀^∞ Φ(x√z) d(1 − e^{−sz}) = 1/2 + x/( 2√(2s) √(1 + x²/(2s)) ) = S₂*(x; s).
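This closed-form identity can be confirmed numerically (our own sketch; the substitution u = √z makes the integrand smooth for Simpson's rule):

```python
import math

def Phi(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def S2_star(x, s):
    # closed form 1/2 + x/(2*sqrt(2s)*sqrt(1+x^2/(2s))) = 1/2 + x/(2*sqrt(2s+x^2))
    return 0.5 + x / (2 * math.sqrt(2 * s + x * x))

def exp_mixture(x, s, b=15.0, steps=30000):
    # integral of Phi(x*sqrt(z)) s e^{-sz} dz, rewritten with u = sqrt(z)
    h = b / steps
    def f(u):
        return Phi(x * u) * 2 * s * u * math.exp(-s * u * u)
    total = f(0.0) + f(b)
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * f(k * h)
    return total * h / 3

for s in (0.5, 1.0, 3.0):
    for x in (-1.0, 0.4, 2.0):
        assert abs(exp_mixture(x, s) - S2_star(x, s)) < 1e-8
```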
Theorem 8.
Let s > 0 and N_n = N_n(s) be the random vector dimension having probability mass function (39). If the representation (42) for the statistic R_m and the inequality (32) with g_n = n hold, then there exists a constant C_s such that for all n ∈ ℕ₊
sup_x | P( n^{−1/2} N_n(s) R_{N_n(s)} ≤ x ) − S*_{n;2}(x; s; 1) | ≤ C_s n^{−2},
where
S*_{n;2}(x; s; a) = S₂*(x; s) + ( (15a + 3s − 18) x³ − 6s(6 − s) x )/( 4n (x² + 2s)² ) · s₂*(x; s).
Moreover, the scaled angle n^{−1/2} N_n(s) (θ_{N_n(s)} − π/2) between the vectors X_{N_n(s)} and Y_{N_n(s)} allows the estimate
sup_x | P( n^{−1/2} N_n(s) (θ_{N_n(s)} − π/2) ≤ x ) − S*_{n;2}(x; s; 1/3) | ≤ C_s n^{−2},
where S*_{n;2}(x; s; 1/3) is given in (55) with a = 1/3.
Theorems 3 to 8 are proved in Appendix A.3.

6. Conclusions

The asymptotic distributions of the sample correlation coefficient of vectors with random dimension are normal scale mixtures. From (43) and (52), one can conclude that the random dimension and the corresponding scaling have a significant influence on the limit distributions. Scale mixing of a normal distribution changes the tail behavior of the distribution. Student's t-distributions have polynomial tails; as one class of heavy-tailed distributions, they can be used to model heavy-tailed return data in finance. The Laplace distributions have heavier tails than normal distributions.

Author Contributions

Conceptualization, G.C. and V.V.U.; methodology, V.V.U. and G.C.; writing–original draft, G.C. and V.V.U.; writing–review and editing, V.V.U. and G.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. It was done within the framework of the Moscow Center for Fundamental and Applied Mathematics, Lomonosov Moscow State University, and HSE University Basic Research Programs.

Acknowledgments

The authors would like to thank the Managing Editor and the Reviewers for the careful reading of the manuscript and pertinent comments. Their constructive feedback helped to improve the quality of this work and shape its final form.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proofs of the Theorems and Lemmas

Appendix A.1. Proofs of Theorems 1 and 2

Proof of Theorem 1.
The proof follows arguments similar to those of the more general transfer theorem, Theorem 3.1 in Bening et al. [24]. Since the constant γ in Theorem 3.1 of Bening et al. [24] has to be non-negative, whereas in our Theorem 1 we also need γ = −1/2, we repeat the proof. Conditioning on N_n, we have
P( g_n^γ T_{N_n} ≤ x ) = P( N_n^γ T_{N_n} ≤ x (N_n/g_n)^γ ) = Σ_{m=1}^∞ P( m^γ T_m ≤ x (m/g_n)^γ ) P(N_n = m).
Using now (18) with Φ_m(z) := Φ(z) + m^{−1} f₂(z), we find
Σ_{m=1}^∞ | P( m^γ T_m ≤ x (m/g_n)^γ ) − Φ_m( x (m/g_n)^γ ) | P(N_n = m) ≤(18) C₁ Σ_{m=1}^∞ m^{−a} P(N_n = m) = C₁ E(N_n^{−a}).
Taking into account P( N_n/g_n < 1/g_n ) = P( N_n < 1 ) = 0, we obtain
Σ_{m=1}^∞ Φ_m( x (m/g_n)^γ ) P(N_n = m) = E Φ_{N_n}( x (N_n/g_n)^γ ) = ∫_{1/g_n}^∞ Δ_n(x, y) dP( N_n/g_n ≤ y ) = G_n(x, 1/g_n) + I₁,
where Δ_n(x, y) := Φ(x y^γ) + f₂(x y^γ)/(g_n y), G_n(x, 1/g_n) is defined in (21) and
I₁ = ∫_{1/g_n}^∞ Δ_n(x, y) d( P( N_n/g_n ≤ y ) − H(y) − h₂(y) I_{{b>1}}(b)/n ).
Estimating the integral I₁, we use integration by parts for Lebesgue–Stieltjes integrals, the boundedness of f₂(z), say sup_z |f₂(z)| ≤ c₁*, and the estimates (19):
|I₁| ≤ sup_x lim_{L→∞} |Δ_n(x, y)| | P( N_n/g_n ≤ y ) − H(y) − n^{−1} h₂(y) I_{{b>1}}(b) | |_{y=1/g_n}^{y=L}     + sup_x ∫_{1/g_n}^∞ | ∂_y Δ_n(x, y) | | P( N_n/g_n ≤ y ) − H(y) − n^{−1} h₂(y) I_{{b>1}}(b) | dy ≤ (1 + c₁*) C₂ n^{−b} + C₂ D_n n^{−b},
where D n is defined in (22). Together with (A1), we obtain (20) and Theorem 1 is proved. □
Proof of Theorem 2.
Using (23), we find for b > 0
| ∫₀^{1/g_n} Φ(x y^γ) dH(y) | ≤ ∫₀^{1/g_n} dH(y) = H(1/g_n) − H(0) ≤(23) c₁ g_n^{−b}.
Let now b > 1. Since f₂(z) is supposed to be bounded, it follows from |f₂(z)| ≤ c₁* < ∞ and (24i) that
∫₀^{1/g_n} |f₂(x y^γ)| y^{−1} dH(y) ≤ c₁* ∫₀^{1/g_n} y^{−1} dH(y) ≤(24i) c₁* c₂ g_n^{−b+1}.
Integration by parts, |z| φ(z) ≤ c* = (2πe)^{−1/2}, (24ii) and (24iii) lead to
| ∫₀^{1/g_n} Φ(x y^γ) dh₂(y) | ≤ |h₂(1/g_n)| + γ c* ∫₀^{1/g_n} y^{−1} |h₂(y)| dy ≤ (c₃ + γ c* c₄) n g_n^{−b}.
Theorem 2 is proved. □

Appendix A.2. Proofs of Lemmas 1 to 5

Proof of Lemma 1.
To estimate D n , we consider three cases:
D n = sup x | D n ( x ) | = max { sup x > 0 | D n ( x ) | , sup x < 0 | D n ( x ) | , | D n ( 0 ) | } .
Let x > 0. Since ∂_y Φ(x y^γ) = γ x y^{γ−1} φ(x y^γ) ≥ 0, we find
∫_{1/g_n}^∞ ∂_y Φ(x y^γ) dy = ∫_{1/g_n}^∞ γ x y^{γ−1} φ(x y^γ) dy = ∫_{x g_n^{−γ}}^∞ φ(u) du = Φ(∞) − Φ( x g_n^{−γ} ) ≤ 1/2.
Consider now f₂(x y^γ; a) = ( a (x y^γ)³ − 5 x y^γ ) φ(x y^γ)/4 with a = 1 or a = 1/3. Then,
∂_y ( f₂(x y^γ; a)/y ) = −Q₅(x y^γ; a)/(4y²), Q₅(z; a) = ( γ a z⁵ − ((3a + 5)γ − a) z³ + 5(γ − 1) z ) φ(z).
Since sup_z |Q₅(z; a)| ≤ c(γ; a) < ∞ and g_n^{−1} ∫_{1/g_n}^∞ y^{−2} dy = 1, inequality (29) holds for x > 0. Taking into account |D_n(−x)| = |D_n(x)| and D_n(0) = 0, Lemma 1 is proved. □
Proof of Lemma 2.
Using (30), we find G_{r,r}(1/g_n) ≤ c₁ g_n^{−r} with c₁ = r^{r−1}/Γ(r). For r > 1, ∫₀^{1/g_n} y^{−1} dG_{r,r}(y) ≤ c₂ g_n^{−r+1} with c₂ = r^r/((r−1) Γ(r)). Since g_{r,r}(0) = 0, h₂;r(0) = 0 and g_n ≥ n for r ≥ 1, (24ii) and (24iii) hold with c₃ = c_r* and c₄ = c_r*/(r−1), where c_r* = ( r^r/(2r Γ(r)) ) sup_y { e^{−ry} ( |y−1| |2−r| + 1 ) } < ∞.
It remains to prove the bounds in (34) and (35). Let first r < 1. With c₁* = sup_z |f₂(z; a)|, we find
|I₁(x, n)| ≤ ( c₁* r^r/(g_n Γ(r)) ) ∫_{1/g_n}^∞ y^{r−2} dy ≤ ( c₁* r^r/((1−r) Γ(r)) ) g_n^{−r} = c₅ g_n^{−r} with c₅ = c₁* r^r/((1−r) Γ(r)).
If r = 1, then with c₁** = sup_z { |a z² − 5| φ(z/√2) } we find |f₂(z; a)| ≤ c₁** |z| φ(z/√2) and
|I₁(x, n)| ≤ ( c₁** |x|/(√(2π) n) ) ∫_{1/n}^∞ y^{γ−1} e^{−( y + x² y^{2γ}/4 )} dy with γ ∈ {−1/2, 0, 1/2}.
For γ = 1/2, using |x| (1 + x²/4)^{−1/2} ≤ 2, we obtain
|I₁(x, n)| ≤ ( c₁** |x|/(√(2π) n) ) ∫_{1/n}^∞ y^{1/2−1} e^{−(1 + x²/4) y} dy ≤ c₁** |x| Γ(1/2) (√(2π))^{−1} (1 + x²/4)^{−1/2} n^{−1} ≤ c₆ n^{−1}, c₆ = 2 c₁**.
If γ = −1/2, then Prudnikov et al. [37], formula 2.3.16.3, for x ≠ 0 leads to
|I₁(x, n)| ≤ ( c₁** |x|/(√(2π) n) ) ∫_{1/n}^∞ y^{−1−1/2} e^{−( 2y + x²/(4y) )} dy ≤ ( c₁** |x|/(√(2π) n) ) (2√π/|x|) e^{−√2 |x|} ≤ 2 c₁** n^{−1}, c₆ = 2 c₁**.
Finally, if γ = 0, then f₂(x y^γ; a) = f₂(x; a) does not depend on y. Using now
0 ≤ ln n − ∫_{1/n}^1 y^{−1} dG_{1,1}(y) = ∫_{1/n}^1 ( (1 − e^{−y})/y ) dy ≤ 1 and ∫_1^∞ y^{−1} dG_{1,1}(y) ≤ e^{−1},
we see that (34) for r = 1 holds with c₆ = c₁* (1 + e^{−1}).
Let r > 1. Integration by parts for Lebesgue–Stieltjes integrals in I₂(x, n) in (35) and (A2) lead to
|I₂(x, n)| ≤ ( 1/(n g_n) ) ( c₁* g_n |h₂;r(1/g_n)| + ∫_{1/g_n}^∞ ( |Q₅(x y^γ; a)|/(4y²) ) |h₂;r(y)| dy ).
Since c(γ; a) = sup_z |Q₅(z; a)| < ∞ and with the above-defined c_r*, we find
∫_{1/g_n}^∞ |h₂;r(y)| y^{−2} dy ≤ c_r* ∫_{1/g_n}^∞ y^{r−3} dy = ( c_r*/(2 − r) ) g_n^{−r+2} for 1 < r < 2
and, with c_r** = ( r^{r−1}/(2 Γ(r)) ) sup_y { e^{−ry/2} ( |y−1| |2−r| + 1 ) } < ∞, we obtain
∫_{1/g_n}^∞ |h₂;r(y)| y^{−2} dy ≤ c_r** ∫_{1/g_n}^∞ y^{r−3} e^{−ry/2} dy ≤ c_r** Γ(r−2) (r/2)^{−(r−2)} for r > 2.
Hence, we obtain (35) for r > 1 , r 2 with some constant 0 < c 7 < .
For r = 2, the second integral in the line above is an exponential integral. Therefore, we estimate the integral I₂(x, n) in (35) more precisely, as in the estimation of I₁(x, n) above, taking into account the given function f₂(z; a).
Using |h₂;₂(y)| ≤ 4y e^{−2y} and considering (A2), define P₄(z; a) by Q₅(z; a) = z P₄(z; a) φ(z/√2) with c₂* = sup_z |P₄(z; a)| < ∞; we obtain in (A3)
∫_{1/g_n}^∞ ( |Q₅(x y^γ; a)|/(4y²) ) |h₂;₂(y)| dy ≤ ( c₂* |x|/√(2π) ) ∫_{1/g_n}^∞ y^{γ−1} e^{−( 2y + x² y^{2γ}/4 )} dy.
We estimate the latter integral in the same way as I₁(x, n) for the two cases γ = −1/2 and γ = 1/2 and find (35) for r = 2 with some constant 0 < c₇ < ∞.
In order to prove (35) for r = 2 and γ = 0, we consider for α > 0 the following inequalities:
∫_{1/g_n}^∞ y^{−1} e^{−αy} dy ≤ ∫_{1/g_n}^1 y^{−1} dy + ∫_1^∞ e^{−αy} dy ≤ ln g_n + α^{−1} e^{−α}, ∫_{1/g_n}^1 y^{−1} e^{−αy} dy ≥ e^{−α} ∫_{1/g_n}^1 y^{−1} dy ≥ e^{−α} ln g_n.
The upper bound in (A4) leads to (35) for r = 2, γ = 0, too. The lower bound in (A4) shows that the ln n term cannot be improved.
Bound (36) for n ≥ 2, r > 1 results from 0 ≤ g_n^{−1} − (rn)^{−1} = (r−1)/( r² n² (1 − (r−1)/(rn)) ) ≤ 2(r−1)/(r² n²). □
Proof of Lemma 3.
Let r > 0. If n = 1, then P(N₁(r) = 1) = 1 and (37) holds with C(r) = 1. Let n ≥ 2 and α > 0:
E (N_n(r))^{−α} = n^{−r} ( 1 + Σ_{k=2}^∞ ( Γ(k+r−1)/(k^α Γ(r) Γ(k)) ) (1 − 1/n)^{k−1} ).
It follows from the relations (49) and (50), with their corresponding bounds, in the proof of Theorem 1 in Christoph et al. [12] that
Γ(k+r−1)/(Γ(r) Γ(k)) = 1/( (k+r−1) B(r, k) ) = ( k^{r−1}/Γ(r) ) (1 + R₁(k)), |R₁(k)| ≤ c₁(r)/k.
For x ≥ k ≥ 2, using (1 − 1/n)^x ≤ e^{−x/n}, we find
k^{r−1} (1 − 1/n)^{k−1} k^{−α} ≤ ∫_k^{k+1} x^r (1 − 1/n)^{x−2} (x−1)^{−1−α} dx ≤ 2^{3+α} ∫_k^{k+1} x^{r−α−1} e^{−x/n} dx.
Then, with c₂(r) = 2^{3+α} (1 + c₁(r))/Γ(r), we obtain
E (N_n(r))^{−α} ≤ c₂(r) n^{−r} J_r(n), where J_r(n) = ∫_1^∞ x^{r−α−1} e^{−x/n} dx = n^{r−α} ∫_{1/n}^∞ y^{r−α−1} e^{−y} dy.
Since J_r(n) ≤ (α − r)^{−1} for 0 < r < α, J_r(n) ≤ n^{r−α} Γ(r−α) for r > α and, using (A4), J_r(n) ≤ ln n + e^{−1} for r = α, the upper bound (37) is proved.
Let r = α > 0. Considering the formula (A5), 0 ≤ Σ_{k=2}^∞ k^{−1} |R₁(k)| ≤ c₁(r) π²/(6 Γ(r)) < ∞, Σ_{k=2}^{n−1} k^{−1} ≥ ln n − ln 2 and Σ_{k=2}^{n−1} k^{−1} ( 1 − (1 − 1/n)^{k−1} ) ≤ Σ_{k=2}^{n−1} k^{−1} (k/n) ≤ 1, we find:
E (N_n(r))^{−r} ≥ ( 1/(n^r Γ(r)) ) ( Σ_{k=2}^{n−1} k^{−1} (1 − 1/n)^{k−1} − c₃ ) ≥ ( 1/(n^r Γ(r)) ) ( Σ_{k=2}^{n−1} k^{−1} − c₄ ) ≥ ( 1/(n^r Γ(r)) ) ( ln n − c₅ ),
where c₃ = c₁(r) π²/6, c₄ = 1 + c₃ and c₅ = c₄ + ln 2. Hence, the ln n term cannot be dropped. □
Proof of Lemma 4.
The upper bounds in the estimates (23) and (24) with H_s(y), h₂;s(y) and I₂(x, n) given in (40) are of order C(s) e^{−sn/2}. For example, for (24ii):
∫₀^{1/n} y^{−1} |h₂;s(y)| dy ≤ ( s(s+1)/2 ) ∫₀^{1/n} y^{−3} e^{−s/y} dy ≤ ( (s+1)/(2s) ) ∫_{sn}^∞ z e^{−z} dz ≤ C(s) e^{−sn/2}. □
Proof of Lemma 5.
Proceeding as in Bening et al. [24], using
P(N_n(s) = k) = ( k/(s+k) )^n − ( (k−1)/(s+k−1) )^n = s n ∫_{k−1}^k x^{n−1} (s+x)^{−n−1} dx
and formula 2.2.4.24 in Prudnikov et al. [37], p. 298, we find
E (N_n^{−α}) = s n Σ_{k=1}^∞ k^{−α} ∫_{k−1}^k x^{n−1} (s+x)^{−n−1} dx ≤ s n ∫₀^∞ x^{n−α−1} (s+x)^{−n−1} dx = s^{−α} n B(n−α, 1+α).
Using B(n−α, 1+α) = Γ(1+α) (n+1)^{−(1+α)} (1 + R₁/n) with |R₁| ≤ c < ∞, we obtain (41). □
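The explicit distribution function appearing here allows a quick numerical check (our own illustration): the probabilities P(N_n(s) = k) sum to one over k, and the distribution function of N_n(s)/n approaches the inverse exponential law H_s(y) = e^{−s/y}:

```python
import math

def cdf_N(k, n, s):
    # P(N_n(s) <= k) = (k/(s+k))^n, the maximum of n discrete Pareto variables
    return (k / (s + k)) ** n if k >= 1 else 0.0

def pmf_N(k, n, s):
    return cdf_N(k, n, s) - cdf_N(k - 1, n, s)

s, n = 2.0, 1000

# the pmf sums to 1 (heavy tail: the mass beyond K is roughly n*s/K)
total = sum(pmf_N(k, n, s) for k in range(1, 2_000_000))
assert total > 0.998

# convergence of P(N_n(s)/n <= y) to H_s(y) = exp(-s/y)
for y in (0.5, 1.0, 2.0):
    assert abs(cdf_N(int(n * y), n, s) - math.exp(-s / y)) < 5e-3
```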

Appendix A.3. Proofs of Theorems 3 to 8

Proof of Theorem 3.
Since the additional assumptions (23) and (24) in the transfer Theorem 2 for the limit gamma distribution H(x) = G_{r,r}(x) of the normalized sample size N_n(r) are satisfied by Lemma 2 with b = r > 0 and by Lemma 3 for α = 2, it remains to calculate the integrals in (26). Define
J₁*(x) = ∫₀^∞ Φ(x√y) dG_{r,r}(y), J₂*(x) = ∫₀^∞ ( a (x√y)³ − 5 x√y ) ( φ(x√y)/(4y) ) dG_{r,r}(y), and J₃*(x) = ∫₀^∞ Φ(x√y) dh₂;r(y) with h₂;r(y) = ( (y−1)(2−r) + 2 Q₁( (r(n−1)+1) y ) ) g_{r,r}(y)/(2r),
and Q₁(y) = 1/2 − (y − [y]). Then,
G_{2;n}(x; 0) = J₁*(x) + J₂*(x)/g_n + J₃*(x)/n with g_n = E N_n(r) = r(n−1)+1.
Using formula 2.3.3.1 in Prudnikov et al. [37], p. 322, with α = r ± 1/2, p = 1 + x²/(2r) and q = 1:
M_α(x) = ( r^r/(Γ(r) √(2π)) ) ∫₀^∞ y^{α−1} e^{−(r + x²/2) y} dy = ( Γ(α) r^{r−α}/(Γ(r) √(2π)) ) (1 + x²/(2r))^{−α},
we calculate the integrals occurring in (A6). Consider
∂/∂x J₁*(x) = ∫₀^∞ y^{1/2} φ(x√y) g_{r,r}(y) dy = ( r^r/(Γ(r) √(2π)) ) ∫₀^∞ y^{r−1/2} e^{−(r + x²/2) y} dy = M_{r+1/2}(x) = s_{2r}(x) and J₁*(x) = S_{2r}(x).
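The fact that this gamma mixture produces exactly the Student's t density with 2r degrees of freedom can be double-checked numerically (our own sketch; the substitution y = u² keeps the integrand smooth):

```python
import math

def t_pdf(x, v):
    # Student's t density with v degrees of freedom
    c = math.gamma((v + 1) / 2) / (math.sqrt(v * math.pi) * math.gamma(v / 2))
    return c * (1 + x * x / v) ** (-(v + 1) / 2)

def gamma_mixture_density(x, r, b=9.0, steps=30000):
    # (r^r/Gamma(r)) * integral of sqrt(y) phi(x sqrt(y)) y^{r-1} e^{-ry} dy,
    # rewritten with y = u^2
    h = b / steps
    def f(u):
        return 2 * u ** (2 * r) * math.exp(-(x * x / 2 + r) * u * u) \
               / math.sqrt(2 * math.pi)
    total = f(0.0) + f(b)
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * f(k * h)
    return (r ** r / math.gamma(r)) * total * h / 3

for r in (1.0, 2.0, 3.5):
    for x in (0.0, 0.8, 2.1):
        assert abs(gamma_mixture_density(x, r) - t_pdf(x, 2 * r)) < 1e-8
```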
The integral J₂*(x) in (A6) we calculate again with (A7), using M_{r−1/2}(x) = s_{2r}(x) (2r + x²)/(2r − 1) and M_{r+1/2}(x) = s_{2r}(x):
J₂*(x) := ( r^r/(4 √(2π) Γ(r)) ) ∫₀^∞ y^{−1} ( a x³ y^{3/2} − 5 x y^{1/2} ) y^{r−1} e^{−(r + x²/2) y} dy = (1/4) ( a x³ M_{r+1/2}(x) − 5x M_{r−1/2}(x) ) = (1/4) ( a x³ − (10 r x + 5 x³)/(2r − 1) ) s_{2r}(x).
The integral J₃*(x) in (A6) is the same as the integral J₄(x) in the proof of Theorem 2 in Christoph et al. [12], with the estimate
sup_x | J₃*(x) − ( (2−r) x (x² + 1)/(4r(2r−1)) ) s_{2r}(x) | ≤ c(r) n^{−r+1}.
Together with (36), this proves (44). □
Proof of Theorem 4.
By Lemma 2, the additional assumptions (23) and (24) in the transfer Theorem 2 are satisfied with the limit Gamma distribution H ( x ) = G r , r ( x ) of the normalized sample size N n ( r ) with b = r > 0 . In Transfer Theorem 1 for T N n = N n R N n , the right-hand side of (20) is estimated by Lemma 1 and Lemma 3 for α = 2 for the case γ = 0 . Then, we have by (21) with (35)
G_n(x, 1/g_n) = Φ(x) ( 1 − G_{r,r}(1/g_n) − n^{−1} h₂;r(1/g_n) I_{{r>1}}(r) ) + ( f₂(x; a)/g_n ) ∫_{1/g_n}^∞ y^{−1} dG_{r,r}(y) · I_{{r>1}}(r).
The estimates (23), (24i), (24ii), (34) and ∫₀^∞ y^{−1} dG_{r,r}(y) = r Γ(r−1)/Γ(r) for r > 1 lead to (46) with Φ_{n;2}(x; 1) defined in (47). Thus, Theorem 4 is proved. □
Proof of Theorem 5.
By Lemma 2, the additional assumptions (23) and (24) in the transfer Theorem 2 are satisfied with the limit gamma distribution H(x) = G_{r,r}(x) of the normalized sample size N_n(r) with b = r > 0. In Transfer Theorem 1 for T_{N_n} = g_n^{−1/2} N_n R_{N_n}, the right-hand side of (20) is estimated by Lemma 1 and Lemma 3 for α = 2 in the case γ = −1/2. Then, we have in (25)
G_{2;n}(x; 0) = J₁;r*(x) + J₂;r*(x)/g_n + ( J₃;r*(x)/n ) I_{{r>1}}(r) with g_n = E N_n(r) = r(n−1)+1,
J₁;r*(x) = ∫₀^∞ Φ(x/√y) dG_{r,r}(y), J₂;r*(x) = ∫₀^∞ ( a x³ y^{−3/2} − 5 x y^{−1/2} ) ( φ(x/√y)/(4y) ) dG_{r,r}(y), and J₃;r*(x) = ∫₀^∞ Φ(x/√y) dh₂;r(y) with h₂;r(y) = ( (y−1)(2−r) + 2 Q₁( (r(n−1)+1) y ) ) g_{r,r}(y)/(2r).
Consider formula 2.3.16.1 in Prudnikov et al. [37], p. 444:
I_α := ∫₀^∞ y^{α−1} e^{−p y − q/y} dy = 2 (q/p)^{α/2} K_α( 2√(pq) ), p > 0, q > 0,
where K_α(u) is the α-order Macdonald function (or α-order modified Bessel function of the second kind); see, e.g., Oldham et al. [30], Chapter 51, for properties of these functions.
Let us calculate the integral J₁;r*(x) occurring in (A8). Consider
d/dx J₁;r*(x) = ( r^r/(√(2π) Γ(r)) ) ∫₀^∞ y^{r−3/2} e^{−ry − x²/(2y)} dy = ( 2 r^r/(√(2π) Γ(r)) ) ( |x|/√(2r) )^{r−1/2} K_{r−1/2}( √(2r) |x| ) =: l_r(x).
If α = ±1/2, ±3/2, ±5/2, …, the integral I_α and consequently K_α(x) are computable in closed form, with formula 2.3.16.2 in Prudnikov et al. [37], p. 444:
I_m* = ∫₀^∞ y^{m−1/2} e^{−p y − q/y} dy = (−1)^m √π (∂^m/∂p^m) ( p^{−1/2} e^{−2√(pq)} ), p > 0, q > 0, m = 0, 1, 2, …,
and with formula 2.3.16.3 in Prudnikov et al. [37], p. 444:
I_{−m}* = ∫₀^∞ y^{−m−1/2} e^{−p y − q/y} dy = (−1)^m √π (∂^m/∂q^m) ( q^{−1/2} e^{−2√(pq)} ), p > 0, q > 0, m = 0, 1, 2, …
For r = 1, 2, 3, using (A10) with m = r − 1, we find
l_r(x) = d/dx J₁;r*(x) = ( r^r/(Γ(r) √(2π)) ) ∫₀^∞ y^{r−3/2} e^{−ry − x²/(2y)} dy = ( r^r/(Γ(r) √(2π)) ) I_{r−1}*
and we obtain the densities l_r(x) in (49) with p = r, q = x²/2 and
I_m*(x) =
 √(2π) |x|^{−1} e^{−√(2r)|x|}, x ≠ 0, m = −1,
 √(π/r) e^{−√(2r)|x|}, m = 0,
 √π ( 1/(2 r^{3/2}) + √2 |x|/(2r) ) e^{−√(2r)|x|}, m = 1,
 √π ( 3/(4 r^{5/2}) + 3√2 |x|/(4 r²) + x²/(2 r^{3/2}) ) e^{−√(2r)|x|}, m = 2.
Consider now J₂;r*(x) for r = 2 and r = 3:
J₂;r*(x) = ∫₀^∞ ( a x³ y^{−3/2} − 5 x y^{−1/2} ) ( r^r y^{r−1} e^{−ry − x²/(2y)}/(4y √(2π) Γ(r)) ) dy = ( r^r/(4 √(2π) Γ(r)) ) ( a x³ I_{r−3}*(x) − 5x I_{r−2}*(x) ).
Hence,
J₂;₂*(x) = ( a x|x| − 5x/2 ) e^{−2|x|} and J₂;₃*(x) = (27/8) ( a x³/√6 − 5x ( 1/(6√6) + |x|/6 ) ) e^{−√6|x|}.
Integration by parts in the integral J₃;r* in (A8) leads to
J₃;r*(x) := ∫₀^∞ Φ(x y^{−1/2}) d( h₂;r(y) ) = (x/2) ∫₀^∞ y^{−3/2} φ(x y^{−1/2}) h₂;r(y) dy = ( r^{r−1} x/(2 Γ(r) √(2π)) ) ∫₀^∞ y^{r−5/2} e^{−ry − x²/(2y)} ( (y−1)(2−r) + 2 Q₁(g_n y) ) dy = ( r^{r−1} x/(2 Γ(r) √(2π)) ) ∫₀^∞ y^{r−5/2} (y−1)(2−r) e^{−ry − x²/(2y)} dy + ( r^{r−1} x/(Γ(r) √(2π)) ) ∫₀^∞ y^{r−5/2} Q₁(g_n y) e^{−ry − x²/(2y)} dy =: J₃;r,1(x) + J₃;r,2(x).
Since J₃;₂,₁(x) vanishes, we calculate J₃;₃,₁(x):
J₃;₃,₁(x) = ( 9x²/√(2π) ) ( I₁*(x) − I₂*(x) ) = 9x² ( 1/(12√6) + |x|/12 − x²/(6√6) ) e^{−√6|x|}.
It remains to estimate J₃;₂,₂(x) and J₃;₃,₂(x). The function Q₁(y) is periodic with period 1:
Q₁(y) = Q₁(y + 1) for all y ∈ ℝ and Q₁(y) := 1/2 − y for 0 ≤ y < 1.
It is right-continuous and has a jump of size 1 at every integer point y. The Fourier series expansion of Q₁(y) at all non-integer points y is
Q₁(y) = 1/2 − (y − [y]) = Σ_{k=1}^∞ sin(2πky)/(kπ), y ≠ [y],
see formula 5.4.2.9 in Prudnikov et al. [37], p. 726, with a = 0.
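The sawtooth expansion can be verified directly (our own snippet); at non-integer points the partial sums converge to Q₁(y), with an error of order 1/(N sin(πy)) after N terms:

```python
import math

def Q1(y):
    # Q1(y) = 1/2 - (y - [y]) at non-integer y
    return 0.5 - (y - math.floor(y))

def Q1_fourier(y, terms=200_000):
    # partial sum of the Fourier series sum_k sin(2*pi*k*y)/(k*pi)
    return sum(math.sin(2 * math.pi * k * y) / (k * math.pi)
               for k in range(1, terms + 1))

for y in (0.1, 0.37, 2.73):
    assert abs(Q1(y) - Q1_fourier(y)) < 1e-3
```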
Using the Fourier series expansion (A11) of the periodic function Q₁(y) and interchanging sum and integral, we find
J₃;r,2(x) = ( r^{r−1} x/(π Γ(r) √(2π)) ) Σ_{k=1}^∞ (1/k) ∫₀^∞ y^{r−5/2} e^{−ry − x²/(2y)} sin(2πk g_n y) dy.
First, we consider r = 2 . Let p > 0 , q > 0 and b > 0 be some real constants. Formula 2.5.37.4 in Prudnikov et al. [37], p. 453 reads
0 y 1 / 2 e p y q / y sin ( b y ) d y = π p 2 + b 2 e 2 q z + z + sin ( 2 q z ) + z cos ( 2 q z ) .
with 2 z ± 2 = p 2 + b 2 ± p . Consider z ± with p = r , q = x 2 / 2 , b = 2 π k g n , k 1 and n 1 : Then,
π p 2 + b 2 = π r 2 + ( 2 π k g n ) 2 π 2 π k g n , 2 | x | z + e 2 | x | z + e 1 and 0 < z z +
leads to
| J 3 ; 2 , 2 * ( x ) | 2 | x | 2 π k = 1 1 k π p 2 + b 2 e 2 q z + z + sin ( 2 q z ) + z cos ( 2 q z ) 2 2 π k = 1 1 k π 2 e 1 2 π k g n = 1 2 π e g n k = 1 1 k 2 = π 12 e g n .
Together with g n n , we find n 1 | J 3 ; 2 , 2 * ( x ) | C n 2 .
Consider finally J 3 ; 3 , 2 * given in (A12). In order to estimate J 3 ; 3 , 2 * ( x ) , we apply Leibniz’s rule for differentiation under the integral sign with respect to p in (A13) and obtain
0 y 1 / 2 e p y q / y sin ( b y ) d y = p π p 2 + b 2 e 2 q z + z + sin ( 2 q z ) + z cos ( 2 q z ) .
Simple calculation considering q = | x | / 2 and | x | m e 2 | x | z + m 2 m / 2 z + m 2 m / 2 b m for m = 1 , 2 , leads to
| x | p π p 2 + b 2 e 2 q z + z + sin ( 2 q z ) + z cos ( 2 q z ) C b 2 = C ( 2 π k g n ) 2
and we find equation (A12) with r = 3 that n 1 | J 3 ; 3 , 2 * | C n 3 and (50) is proved. The approximation (52) holds since Lemmas 1, 2, and 3 are valid for arbitrary r > 0 . Theorem 5 is proved. □
Proof of Theorem 6.
By Lemma 4, the additional assumptions (23) and (24) in the transfer Theorem 2 are satisfied with the limit inverse exponential distribution H s ( y ) and h 2 ; s ( y ) given in (40), g n = n and b = 2 . In Transfer Theorem 1, the right-hand side of (20) is estimated by Lemma 1 and Lemma 5 for α = 2 for the case γ = 1 / 2 . Then, we have in (25) with (35)
G 2 ; n ( x ; 0 ) = J 1 ; s * ( x ) + n 1 J 2 ; s * ( x ) + n 1 J 3 ; s * ( x ) ,
where J 1 ; s * ( x ) = 0 Φ ( x y ) d e s / y , J 2 ; s * ( x ) = 0 ( a x 3 y 3 / 2 5 x y ) φ ( x y ) 4 y d e s / y , and J 3 ; s * ( x ) = 0 Φ ( x y ) d h 2 ; s ( y ) with h 2 ; s ( y ) = s e s / y s 1 + 2 Q 1 ( n y ) / 2 y 2 .
To obtain (53), we calculate the above integrals as in the proof of Theorem 5 in Christoph et al. [12]. Here, we use Formula 2.3.16.3 in Prudnikov et al. [37], p. 344 with p = x 2 / 2 > 0 , s > 0 , m = 1 , 2 :
0 e x 2 y / 2 2 π y m 3 / 2 d H s ( y ) = 0 s e x 2 y / 2 s / y 2 π y m + 1 / 2 d y = 1 m s | x | m s m e 2 s | x | .
In the mentioned proof we obtained with (A14) for m = 1
0 Φ ( x y ) d H s ( y ) = L 1 / s ( x )
and with (A14) for m = 2
n 1 sup x J 3 ; s * ( x ) ( 1 s ) x ( 1 + 2 s | x | ) 8 s l 1 / s ( x ) n 1 c ( s ) e π s n / 2 C ( s ) n 2 .
Again using (A14) with p = x 2 / 2 > 0 , s > 0 , m = 1 , 2 we find
J 2 ; s ( x ) = s 2 π 0 ( a x 3 y 1 1 / 2 5 x y 2 1 / 2 ) e ( x 2 y / 2 + s / y ) d y = 2 s a x 3 5 x ( 2 s | x | + 1 ) 8 s l 1 / s ( x ) .
Proof of Theorem 7.
By Lemma 4, the additional assumptions (23) and (24) in Transfer Theorem 2 are satisfied with the limit inverse exponential distribution H s ( y ) and h 2 ; s ( y ) given in (40), g n = n and b = 2 . In Transfer Theorem 1, the right-hand side of (20) is estimated by Lemma 1 and Lemma 5 for α = 2 in the case γ = 0 . Then, we have in (21) with (35)
G n ( x , 1 / n ) = Φ ( x ) 1 e s n n 1 h 2 ; s ( 1 / n ) + f 2 ( x ; a ) n 1 / n 1 y d e s / y .
The estimates (24i), (24ii) for b = 2 and 0 y 1 d e s / y = s 0 y 3 e s / y d y = s 2 0 z e z d z = s 2 lead to
G n ( x , 1 / g n ) Φ ( x ) n 1 s 2 f 2 ( x ; a ) C s n 2
and Theorem 7 is proved. □
Proof of Theorem 8.
By Lemma 4, the additional assumptions (23) and (24) in Transfer Theorem 2 are satisfied with the limit inverse exponential distribution H s ( y ) and h 2 ; s ( y ) given in (40), g n = n and b = 2 . In Transfer Theorem 1, the right-hand side of (20) is estimated by Lemma 1 and Lemma 5 for α = 2 in the case γ = 1 / 2 . Then, we have in (21) with (35)
G 2 ; n ( x ; 0 ) = J 1 ; s * ( x ) + n 1 J 2 ; s * ( x ) + n 1 J 3 ; s * ( x ) ,
where J 1 ; s * ( x ) = 0 Φ ( x y 1 / 2 ) d e s / y , J 2 ; s * ( x ) = 0 ( a x 3 y 3 / 2 5 x y 1 / 2 ) φ ( x y 1 / 2 ) 4 y d e s / y , and J 3 ; s * ( x ) = 0 Φ ( x y 1 / 2 ) d h 2 ; s ( y ) with h 2 ; s ( y ) = s e s / y s 1 + 2 Q 1 ( n y ) / 2 y 2 .
To obtain (54), we calculate the above integrals:
x 0 Φ ( x y ) d e s / y = s 2 π 0 y 3 / 2 e ( x 2 / 2 + s ) / y ) d y = s 2 π 0 z 1 / 2 1 e ( x 2 / 2 + s ) z ) d z = 1 2 2 s 1 + x 2 2 s 3 / 2 = s 2 * ( x ; s ) , and 0 Φ ( x y ) d e s / y = S 2 * ( x ; s ) .
Define K = ( s + x 2 / 2 ) . With z = K / y and Γ ( α ) = 0 z α 1 e z d z , α > 0 , we obtain
J 2 ; s * ( x ) = s 4 2 π 0 ( a x 3 y 9 / 2 5 x y 7 / 2 ) e K / y d y = s K 7 / 2 4 2 π 0 ( a x 3 z 5 / 2 5 x z 3 / 2 K ) e z d z = s K 7 / 2 4 2 π a x 3 Γ ( 7 / 2 ) 5 x K Γ ( 5 / 2 ) = 1 4 ( x 2 + 2 s ) 2 15 ( a 1 ) x 3 30 x s s 2 * ( x ; s ) .
Integration by parts in J 3 ; s * ( x ) leads to
J 3 ; s * ( x ) = x 2 2 π 0 y 3 / 2 e x 2 / ( 2 y ) s y 2 e s / y ( s 1 ) / 2 + Q 1 ( n y ) d y = J 4 ; s * ( x ) + J 5 ; s * ( x ) ,
where
J 4 ; s * ( x ) = s ( s 1 ) x 4 2 π 0 y 7 / 2 e K / y d y = s ( s 1 ) x Γ ( 5 / 2 ) 4 2 π K 5 / 2 = 3 ( s 1 ) x 4 ( x 2 + 2 s ) s 2 * ( x ; s )
and using the Fourier series expansion (A11) of the periodic function Q 1 ( y ) and interchange sum and integral, we find
J 5 ; s * ( x ) = s x 2 2 π 0 y 7 / 2 e K / y Q 1 ( n y ) d y = s x 2 2 π k = 1 1 k 0 y 7 / 2 e K / y sin ( 2 π k n y ) d y = s x 2 2 π k = 1 1 k 0 y 7 / 2 e K / y sin ( 2 π k n y ) d y .
Integration by parts in the latter integral and | x | / K 2 leads now to
sup x | J 5 ; s * ( x ) | sup x s | x | ( 2 π ) 3 / 2 n k = 1 1 k 2 0 7 2 y 9 / 2 + K y 11 / 2 e K / y d y c s n 1
with c s = s 2 ( 2 π ) 3 / 2 n 7 Γ ( 11 / 2 ) 2 s 3 + Γ ( 13 / 2 ) s 4 π 2 6 and Theorem 8 is proved. □

References

  1. Hall, P.; Marron, J.S.; Neeman, A. Geometric representation of high dimension, low sample size data. J. R. Stat. Soc. Ser. B 2005, 67, 427–444.
  2. Fujikoshi, Y.; Ulyanov, V.V.; Shimizu, R. Multivariate Statistics: High-Dimensional and Large-Sample Approximations; Wiley Series in Probability and Statistics; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2010.
  3. Aoshima, M.; Shen, D.; Shen, H.; Yata, K.; Zhou, Y.-H.; Marron, J.S. A survey of high dimension low sample size asymptotics. Aust. N. Z. J. Stat. 2018, 60, 4–19.
  4. Kawaguchi, Y.; Ulyanov, V.V.; Fujikoshi, Y. Asymptotic distributions of basic statistics in geometric representation for high-dimensional data and their error bounds (Russian). Inform. Appl. 2010, 4, 12–17.
  5. Ulyanov, V.V.; Christoph, G.; Fujikoshi, Y. On approximations of transformed chi-squared distributions in statistical applications. Sib. Math. J. 2006, 47, 1154–1166.
  6. Esquível, M.L.; Mota, P.P.; Mexia, J.T. On some statistical models with a random number of observations. J. Stat. Theory Pract. 2016, 10, 805–823.
  7. Gnedenko, B.V. Estimating the unknown parameters of a distribution with a random number of independent observations (Russian). Tr. Tbil. Mat. Instituta 1989, 92, 146–150.
  8. Nunes, C.; Capistrano, G.; Ferreira, D.; Ferreira, S.S.; Mexia, J.T. Fixed effects ANOVA: An extension to samples with random size. J. Stat. Comput. Simul. 2014, 84, 2316–2328.
  9. Nunes, C.; Capistrano, G.; Ferreira, D.; Ferreira, S.S.; Mexia, J.T. Exact critical values for one-way fixed effects models with random sample sizes. J. Comput. Appl. Math. 2019, 354, 112–122.
  10. Barakat, H.M.; Nigm, E.M.; El-Adll, M.E.; Yusuf, M. Prediction of future generalized order statistics based on exponential distribution with random sample size. Stat. Pap. 2018, 59, 605–631.
  11. Al-Mutairi, J.S.; Raqab, M.Z. Confidence intervals for quantiles based on samples of random sizes. Stat. Pap. 2020, 61, 261–277.
  12. Christoph, G.; Monakhov, M.M.; Ulyanov, V.V. Second-order Chebyshev–Edgeworth and Cornish–Fisher expansions for distributions of statistics constructed with respect to samples of random size. J. Math. Sci. 2020, 244, 811–839.
  13. Christoph, G.; Ulyanov, V.V.; Bening, V.E. Second Order Expansions for Sample Median with Random Sample Size. arXiv 2019, arXiv:1905.07765v2.
  14. Fujikoshi, Y.; Ulyanov, V.V. Non-Asymptotic Analysis of Approximations for Multivariate Statistics; Springer: Singapore, 2020.
  15. Johnson, N.L.; Kotz, S.; Balakrishnan, N. Continuous Univariate Distributions, 2nd ed.; Wiley: New York, NY, USA, 1995; Volume 2.
  16. Konishi, S. Asymptotic expansions for the distributions of functions of a correlation matrix. J. Multivar. Anal. 1979, 9, 259–266.
  17. Christoph, G.; Ulyanov, V.V.; Fujikoshi, Y. Accurate approximation of correlation coefficients by short Edgeworth–Chebyshev expansion and its statistical applications. In Prokhorov and Contemporary Probability Theory; Proceedings in Mathematics & Statistics 33; Shiryaev, A.N., Varadhan, S.R.S., Presman, E.L., Eds.; Springer: Heidelberg, Germany, 2013; pp. 239–260.
  18. Prokhorov, Y.V. Limit theorems for the sums of random vectors whose dimension tends to infinity. Teor. Veroyatnostei Primen. 1990, 35, 751–753; English translation: Theory Probab. Appl. 1991, 35, 755–757.
  19. Gnedenko, B.V.; Korolev, V.Y. Random Summation: Limit Theorems and Applications; CRC Press: Boca Raton, FL, USA, 1996.
  20. Döbler, C. New Berry–Esseen and Wasserstein bounds in the CLT for non-randomly centered random sums by probabilistic methods. ALEA Lat. Am. J. Probab. Math. Stat. 2015, 12, 863–902.
  21. Chen, L.H.Y.; Goldstein, L.; Shao, Q.-M. Normal Approximation by Stein's Method; Probability and its Applications; Springer: Heidelberg, Germany, 2011.
  22. Pike, J.; Ren, H. Stein's method and the Laplace distribution. ALEA Lat. Am. J. Probab. Math. Stat. 2014, 11, 571–587.
  23. Bening, V.E.; Korolev, V.Y. On the use of Student's distribution in problems of probability theory and mathematical statistics. Theory Probab. Appl. 2005, 49, 377–391.
  24. Bening, V.E.; Galieva, N.K.; Korolev, V.Y. Asymptotic expansions for the distribution functions of statistics constructed from samples with random sizes (Russian). Inform. Appl. 2013, 7, 75–83.
  25. Bening, V.E.; Korolev, V.Y.; Zeifman, A.I. Asymptotic expansions for the distribution function of the sample median constructed from a sample with random size. In Proceedings of the 30th ECMS 2016, Regensburg; Claus, T., Herrmann, F., Manitz, M., Rose, O., Eds.; European Council for Modeling and Simulation: Regensburg, Germany, 2016; pp. 669–675.
  26. Schluter, C.; Trede, M. Weak convergence to the Student and Laplace distributions. J. Appl. Probab. 2016, 53, 121–129.
  27. Gavrilenko, S.V.; Zubov, V.N.; Korolev, V.Y. The rate of convergence of the distributions of regular statistics constructed from samples with negatively binomially distributed random sizes to the Student distribution. J. Math. Sci. 2017, 220, 701–713.
  28. Buddana, A.; Kozubowski, T.J. Discrete Pareto distributions. Econ. Qual. Control 2014, 29, 143–156.
  29. Christoph, G.; Wolf, W. Convergence Theorems with a Stable Limit Law; Mathematical Research 70; Akademie-Verlag: Berlin, Germany, 1992.
  30. Oldham, K.B.; Myland, J.C.; Spanier, J. An Atlas of Functions, 2nd ed.; Springer Science+Business Media: New York, NY, USA, 2009.
  31. Kotz, S.; Kozubowski, T.J.; Podgórski, K. The Laplace Distribution and Generalizations: A Revisit with Applications to Communications, Economics, Engineering, and Finance; Birkhäuser: Boston, MA, USA, 2001.
  32. Prudnikov, A.P.; Brychkov, Y.A.; Marichev, O.I. Integrals and Series, Volume 2: Special Functions, 3rd ed.; Gordon & Breach Science Publishers: New York, NY, USA, 1992.
  33. Madan, D.B.; Seneta, E. The variance gamma (V.G.) model for share market returns. J. Bus. 1990, 63, 511–524.
  34. Goldfeld, S.M.; Quandt, R.E. Econometric modelling with non-normal disturbances. J. Econom. 1981, 17, 141–155.
  35. Missiakoulis, S. Sargan densities: Which one? J. Econom. 1983, 23, 223–233.
  36. Jackman, S. Bayesian Analysis for the Social Sciences; Wiley Series in Probability and Statistics; John Wiley & Sons, Ltd.: Chichester, UK, 2009.
  37. Prudnikov, A.P.; Brychkov, Y.A.; Marichev, O.I. Integrals and Series, Volume 1: Elementary Functions, 3rd ed.; Gordon & Breach Science Publishers: New York, NY, USA, 1992.
Figure 1. Distribution function $P\big(N_n(r)/(r(n-1)+1) \le x\big)$ (black line, almost covered by the red line), the limit law $G_{2,2}(x)$ (blue line), and the second approximation $G_{2,2}(x) + h_2(x)/n$ (red line), with $n = 25$ and $r = 2$.