Second Order Expansions for High-Dimension Low-Sample-Size Data Statistics in Random Setting

Abstract: We consider high-dimension low-sample-size data taken from the standard multivariate normal distribution under the assumption that the dimension is a random variable. Second order Chebyshev–Edgeworth expansions, with error bounds, are constructed for the distributions of the angle between two sample observations and of the corresponding sample correlation coefficient. Depending on the type of normalization, we obtain three different limit distributions: Normal, Student's t-, or Laplace distributions. The paper continues the authors' studies on the approximation of statistics for random-size samples.


Introduction
Let X_1 = (X_11, ..., X_1m)^T, . . . , X_k = (X_k1, ..., X_km)^T be a random sample from an m-dimensional population. The data set can be regarded as k vectors or points in m-dimensional space. Recently, there has been significant interest in high-dimensional data sets where the dimension is large. In a high-dimensional setting, it is assumed that either (i) m tends to infinity and k is fixed, or (ii) both m and k tend to infinity. Case (i) is related to high-dimensional low sample size (HDLSS) data. One of the first results for HDLSS data appeared in Hall et al. [1]. It became the basis of research in mathematical statistics for the analysis of high-dimensional data, see, e.g., Fujikoshi et al. [2], which forms an important part of the currently fashionable area of data analysis called Big Data. Scientific areas where these settings have proven to be very useful include genetics, cancer research, neuroscience, and also image and shape analysis. See a recent survey on HDLSS asymptotics and its applications in Aoshima et al. [3].
For examining the features of the data set, it is necessary to study the asymptotic behavior of three functions: the length ‖X_i‖ of an m-dimensional observation vector, the distance ‖X_i − X_j‖ between any two independent observation vectors, and the angle ang(X_i, X_j) between these vectors at the population mean. Assuming that the X_i's are a sample from N(0, I_m), it was shown in Hall et al. [1] that for HDLSS data the three geometric statistics satisfy the following relations:

‖X_i‖ = √m + O_p(1),   (1)

‖X_i − X_j‖ = √(2m) + O_p(1),  i ≠ j,   (2)

ang(X_i, X_j) = π/2 + O_p(m^(−1/2)),  i ≠ j,   (3)

where ‖·‖ is the Euclidean norm and O_p denotes the stochastic order. These interesting results imply that the data converge to the vertices of a deterministic regular simplex. These properties were extended to non-normal samples under some assumptions (see Hall et al. [1] and Aoshima et al. [3]). In Kawaguchi et al. [4], the relations (1)–(3) were refined by constructing second order asymptotic expansions for the distributions of all three basic statistics. The refinements of (1) and (2) were achieved by using the idea of Ulyanov et al. [5], who obtained computable error bounds of order O(m^(−1)) for the chi-squared approximation of transformed chi-squared random variables with m degrees of freedom. The aim of the present paper is to study approximation for the third statistic ang(X_1, X_2) under the generalized assumption that m is a realization of a random variable, say N_n, which represents the sample dimension and is independent of X_1 and X_2. This problem is closely related to approximations of statistics constructed from random size samples, in particular, to this kind of problem for the sample correlation coefficient R_m.
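As a quick numerical illustration of the concentration relations (1)–(3) (a minimal sketch; the values m = 20000 and k = 4 below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
m, k = 20_000, 4                         # high dimension, tiny sample (HDLSS)
X = rng.standard_normal((k, m))          # k observations from N(0, I_m)

lengths = np.linalg.norm(X, axis=1)
ratio_len = lengths / np.sqrt(m)         # (1): lengths concentrate near sqrt(m)

dist = np.linalg.norm(X[0] - X[1])
ratio_dist = dist / np.sqrt(2 * m)       # (2): distances concentrate near sqrt(2m)

cos01 = X[0] @ X[1] / (lengths[0] * lengths[1])
angle = np.arccos(cos01)                 # (3): angles concentrate near pi/2

print(ratio_len, ratio_dist, angle)
```

All printed quantities land close to 1, 1, and π/2, respectively, visualizing the deterministic simplex geometry.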
The use of samples with random sample sizes has been steadily growing over the years. For an overview of statistical inferences with a random number of observations and some applications, see Esquível et al. [6] and the references cited therein. Gnedenko [7] considered the asymptotic properties of the distributions of sample quantiles for samples of random size. In Nunes et al. [8] and Nunes et al. [9], unknown sample sizes are assumed in medical research for the analysis of one-way and multi-way fixed effects ANOVA models, to avoid the false rejections obtained when using the classical fixed-size F-tests. Esquível et al. [6] considered inference for the mean with known and unknown variance and inference for the variance in the normal model. Prediction intervals for future observations for generalized order statistics and confidence intervals for quantiles based on samples of random sizes are studied in Barakat et al. [10] and Al-Mutairi and Raqab [11], respectively. They illustrated their results with a real biometric data set, the duration of remission of leukemia patients treated by one drug. The present paper continues the studies of the authors on non-asymptotic analysis of approximations for statistics based on random size samples. In Christoph et al. [12], second order expansions for the normalized random sample sizes are proved, see Propositions 1 and 2 below. These results allow for proving second order asymptotic expansions of the random sample mean in Christoph et al. [12] and the random sample median in Christoph et al. [13]. See also Chapters 1 and 9 in Fujikoshi and Ulyanov [14].
The structure of the paper is the following. In Section 2, we describe the relation between ang(X_1, X_2) and R_m. We also recall previous approximation results proved for the distributions of ang(X_1, X_2) and R_m. Section 3 is on general transfer theorems, which allow us to construct asymptotic expansions for distributions of randomly normalized statistics on the basis of approximation results for non-randomly normalized statistics and for the random size of the underlying sample, see Theorems 1 and 2. Section 4 contains the auxiliary lemmas. Some of them are of independent interest; for example, Lemma 3 gives upper bounds for the negative order moments of a random variable having a negative binomial distribution. We formulate and discuss the main results in Sections 5 and 6. In Theorems 3–8, we construct second order Chebyshev–Edgeworth expansions for the distributions of ang(X_1, X_2) and R_m in the random setting. Depending on the type of normalization, we get three different limit distributions: Normal, Laplace, or Student's t-distributions. All proofs are given in Appendix A.

Sample Correlation Coefficient, Angle between Vectors and Their Normal Approximations
We slightly simplify notation. Let X_m = (X_1, ..., X_m)^T and Y_m = (Y_1, ..., Y_m)^T be two vectors from an m-dimensional normal distribution N(0, I_m) with zero mean and identity covariance matrix I_m, and consider the sample correlation coefficient

R_m = X_m^T Y_m / (‖X_m‖ ‖Y_m‖).   (4)

Under the null hypothesis H_0: X_m and Y_m are uncorrelated, the so-called null density p_{R_m}(y; m) of R_m is given in Johnson, Kotz and Balakrishnan [15], Chapter 32, Formula (32.7):

p_{R_m}(y; m) = Γ(m/2) / (√π Γ((m−1)/2)) (1 − y²)^((m−3)/2),  |y| < 1.

Moreover, for m ≥ 5, the density function p_{R_m}(y; m) is unimodal.
Consider now the standardized correlation coefficient

√(m − c) R_m,   (5)

with some correcting real constant c < m, whose density (6) converges, with c = O(1) as m → ∞, to the standard normal density, and, by Konishi [16], Section 4, Formula (4.1), as m → ∞:

P(√(m − c) R_m ≤ x) → Φ(x),   (7)

where Φ(x) = ∫_{−∞}^{x} ϕ(y) dy is the standard normal distribution function. Note that in Konishi [16] the sample size (in our case, the dimension of the vectors) is m + 1 and c = 1 + 2∆ with Konishi's correcting constant ∆. Moreover, (7) follows from the more general Theorem 2.2 in the mentioned paper for independent components in the pairs (X_k, Y_k), k = 1, ..., m.
In Christoph et al. [17], computable error bounds of order O(m^(−2)) for the approximations in (7) with c = 2 and c = 2.5 are proved for all m ≥ 7; these are the expansions (8) and (9), where for some m ≥ 7 the constants B_m and B*_m are calculated and presented in Table 1 in Christoph et al. [17]: e.g., B_7 = 1.875, B*_7 = 2.083 and B_50 = 0.720, B*_50 = 0.982. Usually, the asymptotics for R_m are stated with c = 2 as in (8); with the correcting constant c = 2.5, one term in the expansion (8) vanishes, yielding (9). In order to use a transfer theorem from non-random to random dimension of the vectors, we prefer (7) with c = 0. In a similar manner as in the proofs of (8) and (9) in Christoph et al. [17], one can verify the following inequality for m ≥ 3:

sup_x |P(√m R_m ≤ x) − Φ(x) − (x³ − 5x) ϕ(x)/(4m)| ≤ C m^(−2).   (10)

Let us consider now the connection between the correlation coefficient R_m and the angle θ_m between the involved vectors X_m, Y_m:

R_m = cos θ_m,  θ_m = ang(X_m, Y_m) ∈ (0, π).   (11)

Hall et al. [1] showed that under the given conditions θ_m = π/2 + O_p(m^(−1/2)), where O_p denotes the stochastic order. Since R_m = cos θ_m, computable error bounds for θ_m follow from computable error bounds for R_m. For any fixed constant c < m and arbitrary x with |x| < √(m − c) π/2, we obtain for the angle θ_m, 0 < θ_m < π:

P(√(m − c)(θ_m − π/2) ≤ x) = P(R_m ≤ sin(x/√(m − c))),   (12)

because R_m is symmetric and P(R_m ≤ x) = P(−R_m ≤ x). Equation (12) shows the connection between the correlation coefficient R_m and the angle θ_m between the vectors involved. In Christoph et al. [17], the computable error bound of the approximation (8) is used to obtain a similar bound for the approximation of the angle between two vectors, defined in (11). Here, the approximation (10) and the relation (12) with c = 0 lead, for any m ≥ 3 and for |x| ≤ π√m/2, to

sup_x |P(√m (θ_m − π/2) ≤ x) − Φ(x) − (x³/3 − 5x) ϕ(x)/(4m)| ≤ C m^(−2).   (13)

Many authors have investigated limit theorems for sums of random vectors when their dimension tends to infinity, see, e.g., Prokhorov [18]. In (6) and (7), the dimension m of the vectors X_m and Y_m tends to infinity.
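The standardization with c = 0 and the tight link between the correlation coefficient and the angle can be checked by simulation (a minimal sketch; the dimension m = 200 and the replication count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
m, reps = 200, 20_000
X = rng.standard_normal((reps, m))
Y = rng.standard_normal((reps, m))

# R_m = cos(theta_m): uncentered sample correlation of two N(0, I_m) vectors
R = np.einsum('ij,ij->i', X, Y) / (np.linalg.norm(X, axis=1) * np.linalg.norm(Y, axis=1))
theta = np.arccos(R)

U = np.sqrt(m) * R                    # standardized correlation (c = 0)
V = np.sqrt(m) * (theta - np.pi / 2)  # standardized angle

# both statistics are approximately standard normal and almost perfectly coupled
print(U.mean(), U.std(), V.mean(), V.std(), np.corrcoef(U, -V)[0, 1])
```

The printed means are near 0, the standard deviations near 1, and the correlation between U and −V is essentially 1, reflecting θ_m − π/2 ≈ −R_m for large m.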
Now, we consider the correlation coefficient of the vectors X_m and Y_m, where the non-random dimension m is replaced by a random dimension N_n ∈ N_+ = {1, 2, ...} depending on some natural parameter n ∈ N_+, with N_n independent of X_m and Y_m for all m, n ∈ N_+. Define R_{N_n} := R_{N_n}(X_{N_n}, Y_{N_n}).

Statistical Models with a Random Number of Observations
Let X_1, X_2, . . . ∈ R = (−∞, ∞) and N_1, N_2, . . . ∈ N_+ = {1, 2, ...} be random variables on the same probability space (Ω, A, P). Let N_n be the random size of the underlying sample, i.e., the random number of observations, which depends on a parameter n ∈ N_+. We suppose for each n ∈ N_+ that N_n ∈ N_+ is independent of the random variables X_1, X_2, . . . and that N_n → ∞ in probability as n → ∞. Let T_m := T_m(X_1, . . . , X_m) be some statistic of a sample with non-random sample size m ∈ N_+. Define the random variable T_{N_n} for every n ∈ N_+, i.e., T_{N_n} is the statistic obtained from a random sample X_1, X_2, . . . , X_{N_n}.
The randomness of the sample size may crucially change asymptotic properties of T N n , see, e.g., Gnedenko [7] or Gnedenko and Korolev [19].

Random Sums
Many models lead to random sums and random means

S_{N_n} = Σ_{j=1}^{N_n} X_j  and  M_{N_n} = S_{N_n}/N_n.   (14)

A fundamental introduction to asymptotic distributions of random sums is given in Döbler [20]. It is worth mentioning that the scaling factor applied to S_{N_n} affects the type of the limit distribution. In fact, consider the random sum S_{N_n} given in (14). For the sake of convenience, let X_1, X_2, ... be independent standard normal random variables and let N_n ∈ N_+ be geometrically distributed with E(N_n) = n and independent of X_1, X_2, .... Then, one has

P(N_n^(−1/2) S_{N_n} ≤ x) = Φ(x) for all n,   (15)

P((E N_n)^(−1/2) S_{N_n} ≤ x) → L_1(x), the Laplace distribution with variance 1,   (16)

P(√(E N_n) S_{N_n}/N_n ≤ x) → S_2(x), the Student's t-distribution with 2 degrees of freedom.   (17)

We have three different limit distributions. The suitably scaled geometric sum S_{N_n} is standard normally distributed or tends to the Laplace distribution with variance 1, depending on whether we take the random scaling factor 1/√N_n or the non-random scaling factor 1/√(E N_n), respectively. Moreover, we get the Student's t-distribution with two degrees of freedom as the limit distribution if we use the mixed scaling factor √(E N_n)/N_n. Similar results also hold for the normalized random mean M_{N_n} = S_{N_n}/N_n. Assertion (15) is obtained by conditioning and the stability of the normal law. Moreover, using Stein's method, quantitative Berry–Esseen bounds in (15) and (16) for arbitrary centered random variables X_1 with E(|X_1|³) < ∞ were proved in (Chen et al. [21], Theorem 10.6), (Döbler [20], Theorems 2.5 and 2.7) and (Pike and Ren [22], Theorem 3), respectively. Statement (17) follows from (Bening and Korolev [23], Theorem 2.1).
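The three normalizations of the geometric random sum can be compared in a small simulation; conditionally on N_n, the sum of N_n i.i.d. N(0,1) terms equals √N_n · Z in distribution, which the sketch below exploits (sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 100, 40_000
N = rng.geometric(1.0 / n, size=reps)   # geometric sample size, E N = n
Z = rng.standard_normal(reps)
S = np.sqrt(N) * Z                      # S_N | N ~ N(0, N) for i.i.d. N(0,1) summands

T_rand = S / np.sqrt(N)                 # random scaling: exactly standard normal
T_det = S / np.sqrt(n)                  # deterministic scaling: -> Laplace, variance 1
T_mix = np.sqrt(n) * S / N              # mixed scaling: -> Student t, 2 df (heavy tails)

# kurtosis separates the normal limit (3) from the Laplace limit (6);
# the t_2 limit of T_mix has no finite 4th moment at all
kurt = lambda x: np.mean(x ** 4) / np.mean(x ** 2) ** 2
print(kurt(T_rand), kurt(T_det), np.var(T_det))
```

The empirical kurtosis of T_rand is near 3 (normal) while that of T_det is near 6 (Laplace), with Var(T_det) ≈ 1, matching the three limit laws above.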
First order asymptotic expansions are obtained for the distribution function of random sample mean and random sample median constructed from a sample with two different random sizes in Bening et al. [24] and in the conference paper Bening et al. [25]. The authors make use of the rate of convergence of P(N n ≤ g n x) to the limit distribution H(x) with some g n ↑ ∞. In Christoph et al. [12], second order expansions for the normalized random sample sizes are proved, see below Propositions 1 and 2. These results allow for proving second order asymptotic expansions of random sample mean in Christoph et al. [12] and random sample median in Christoph et al. [13].

Transfer Proposition from Non-Random to Random Sample Sizes
Consider now the statistic T_{N_n} = T_{N_n}(X_{N_n}, Y_{N_n}), where the dimension of the vectors X_{N_n}, Y_{N_n} is a random number N_n ∈ N_+.
In order to avoid overly long expressions while preserving the necessary accuracy, we limit ourselves to obtaining limit distributions and terms of order m^(−1) in the following non-asymptotic approximations, with bounds of order m^(−a) for some a > 1.
We suppose that the following condition on the statistic T_m = T_m(X_m, Y_m) with E T_m = 0 is met for a non-random sample size m ∈ N_+.

Condition 1. There exist a bounded function f_2(z) and real numbers a > 1 and C_1 > 0 such that for all m ∈ N_+

sup_x |P(T_m ≤ x) − Φ(x) − f_2(x)/m| ≤ C_1 m^(−a),   (18)

where γ ∈ {−1/2, 0, 1/2} denotes the exponent of the normalizing factor used for the random-size statistics below. Suppose that the limiting behavior of the distribution functions of the normalized random size N_n ∈ N_+ is described by the following condition.

Condition 2.
There exist a distribution function H(y) with H(0+) = 0, a function of bounded variation h_2(y), a sequence 0 < g_n ↑ ∞ and real numbers b > 0 and C_2 > 0 such that for all integer n ≥ 1

sup_{y ≥ 0} |P(N_n ≤ g_n y) − H(y) − h_2(y)/n| ≤ C_2 n^(−b).   (19)

Remark 2. In Propositions 1 and 2 below, we give examples of discrete random variables N_n such that Condition 2 is met.
Conditions 1 and 2 allow us to construct asymptotic expansions for distributions of randomly normalized statistics on the basis of approximation results for normalized fixed-size statistics (see relation (18)) and for the random size of the underlying sample (see relation (19)). As a result, we obtain the following transfer theorem.

Theorem 1. Let |γ| ≤ K < ∞ and let both Conditions 1 and 2 be satisfied. Then, inequality (20) holds for all n ∈ N_+, where a > 1, b > 0, and f_2(z), h_2(y) are given in (18) and (19). The constants C_1, C_3, C_4 do not depend on n.

Remark 5. The additional conditions (23) and (24) on H(y) and h_2(y) are needed only for Theorem 2.
Theorems 1 and 2 are proved in Appendix A.1.

Auxiliary Propositions and Lemmas
Consider the standardized correlation coefficient (5) having density (6) with correcting real constant c = 0, and the standardized angle √m(θ_m − π/2), see (12). By (10) and (13), for m ≥ 3 we have

sup_x |P(√m R_m ≤ x) − Φ(x) − (x³ − 5x) ϕ(x)/(4m)| ≤ C_1 m^(−2)   (27)

and, for the angle θ_m between the vectors, for |x| ≤ π√m/2,

sup_x |P(√m (θ_m − π/2) ≤ x) − Φ(x) − (x³/3 − 5x) ϕ(x)/(4m)| ≤ C_1 m^(−2),   (28)

where (27) and (28) for m = 1 and m = 2 are trivial and C_1 does not depend on m. Since the product of a polynomial in x with ϕ(x) is always bounded, numerical calculation shows that Condition 1 of the transfer Theorem 1 is satisfied for the statistics R_m and θ_m with c_0 = 0.4 and a = 2.
Next, we estimate D n (x) defined in (22).

Lemma 1.
Let g_n be a sequence with 0 < g_n ↑ ∞ as n → ∞. Then, with some 0 < c(γ, a) < ∞, the quantity D_n(x) admits the stated bound for a = 1 and a = 1/3. In the next subsection, we consider the case when the random dimension N_n is negative binomially distributed with success probability 1/n.

Negative Binomial Distribution as Random Dimension of the Normal Vectors
Let the random dimension N_n(r) of the underlying normal vectors be negative binomially distributed (shifted by 1) with parameters 1/n and r > 0, having probability mass function

P(N_n(r) = j) = Γ(j + r − 1)/(Γ(r)(j − 1)!) · (1/n)^r (1 − 1/n)^(j−1),  j = 1, 2, ...,   (29)

with E(N_n(r)) = r(n − 1) + 1. Then, P(N_n(r)/g_n ≤ x) tends to the Gamma distribution function G_{r,r}(x) with identical shape and rate parameters r > 0, having density

g_{r,r}(x) = r^r x^(r−1) e^(−rx)/Γ(r),  x > 0.   (30)

If the statistic T_m is asymptotically normal, the limit distribution of the standardized statistic with random size N_n(r) is Student's t-distribution with ν = 2r degrees of freedom, see Bening and Korolev [23] or Schluter and Trede [26], having density

s_{2r}(x) = Γ(r + 1/2)/(√(2rπ) Γ(r)) (1 + x²/(2r))^(−(r+1/2)).   (31)
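A short check that N_n(r)/g_n approaches the Gamma law G_{r,r} with mean 1 and variance 1/r; the shift by 1 matches (29) (a sketch; r, n and the replication count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
r, n, reps = 2, 200, 40_000
# shifted negative binomial: 1 + number of failures before r successes, success prob 1/n
N = 1 + rng.negative_binomial(r, 1.0 / n, size=reps)
g_n = r * (n - 1) + 1                   # = E N_n(r)

W = N / g_n                             # approaches Gamma with shape r and rate r
print(W.mean(), W.var())                # mean ~ 1, variance ~ 1/r
```

Note that NumPy's `negative_binomial` counts failures, so adding 1 reproduces the support {1, 2, ...} of (29).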

Proposition 1.
Let r > 0, let the discrete random variable N_n(r) have probability mass function (29), and set g_n := E N_n(r) = r(n − 1) + 1. Then, for x > 0 and all n ∈ N, there exists a real number C_2(r) > 0 such that the corresponding second order expansion for P(N_n(r) ≤ g_n x) holds, where [·] denotes the integer part of a number.
The negative binomial random variable N_n therefore satisfies Condition 2 of the transfer Theorem 1, as well as the additional conditions (23) and (24) of Theorem 2.

Lemma 2. The additional conditions (23) and (24) in Theorem 2 are satisfied for N_n(r) with H(y) = G_{r,r}(y) and g_n = E N_n(r). Moreover, one has for γ ∈ {−1/2, 0, 1/2} and f_2(z; a) = (a z³ − 5 z) ϕ(z)/4, with a = 1 or a = 1/3,

∫_0^{1/g_n} g_n y dG_{r,r}(y) ≤ c_5 g_n^(−r),  r < 1,

and analogous bounds hold for the other required integrals. In addition to the expansion of N_n(r), a bound on E(N_n(r))^(−a) is required, where m^(−a) is the rate of convergence of the Edgeworth expansion for T_m, see (18).

Lemma 3. Let r > 0, α > 0, and let the random variable N_n(r) be defined by (29). Then,

E(N_n(r))^(−α) ≤ C(r, α) · n^(−α) for α < r,  C(r, α) · n^(−r) ln n for α = r,  C(r, α) · n^(−r) for α > r,

and the convergence rate in the case r = α cannot be improved.
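The rate n^(−min{α, r}) in Lemma 3 can be probed by direct summation of the probabilities (29); the recurrence below is the standard one for negative binomial probabilities (a sketch, with arbitrary truncation point):

```python
def neg_moment(r, n, alpha, kmax=400_000):
    """E (N_n(r))^{-alpha} for the shifted negative binomial dimension, by direct summation."""
    p = 1.0 / n
    q = 1.0 - p
    total, pmf = 0.0, p ** r             # pmf at j = 1, i.e., zero failures
    for j in range(1, kmax):
        total += pmf / j ** alpha
        pmf *= q * (j + r - 1) / j       # P(N = j + 1) from P(N = j)
    return total

r = 2.0
# alpha = 1 < r: n * E N^{-1} stabilises, confirming the n^{-alpha} rate
scaled = [n * neg_moment(r, n, alpha=1.0) for n in (10, 100, 1000)]
print(scaled)
```

The scaled values settle near a constant (close to 1 here, since E(1/Y) = r/(r − 1) = 2 for Y ~ G_{2,2} and g_n ≈ 2n), consistent with the α < r case of Lemma 3.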

Maximum of n Independent Discrete Pareto Random Variables Is the Dimension of the Normal Vectors
Let Y(s) ∈ N be discrete Pareto II distributed with parameter s > 0, having probability mass and distribution functions

P(Y(s) = k) = s/((s + k − 1)(s + k)),  P(Y(s) ≤ k) = k/(s + k),  k ∈ N,   (38)

which is a particular class of the general model of discrete Pareto distributions obtained by discretizing continuous Pareto II (Lomax) distributions on the integers, see Buddana and Kozubowski [28]. Now, let Y_1(s), Y_2(s), ..., be independent random variables with the common distribution (38). Define for n ∈ N and s > 0 the random variable

N_n(s) := max{Y_1(s), ..., Y_n(s)}.   (39)

It should be noted that the distribution of N_n(s) is extremely spread out on the positive integers. In Christoph et al. [12], the following Edgeworth expansion was proved.

Proposition 2. Let the discrete random variable N_n(s) have distribution function (39). Then, for x > 0, fixed s > 0 and all n ∈ N, there exists a real number C_3(s) > 0 such that the second order expansion of P(N_n(s) ≤ n x) around the limit law H_s(x) = e^(−s/x), with correction term h_{2;s}(x)/n given in (40), holds.
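Reading the discrete Pareto cdf as P(Y(s) ≤ k) = k/(s + k) — our reading of (38), stated here as an assumption — the maximum satisfies P(N_n(s) ≤ k) = (k/(s + k))^n, and its convergence to the inverse exponential law H_s(y) = e^(−s/y) is easy to verify numerically:

```python
import math

s, n = 1.0, 50
# assumed discrete Pareto cdf: P(Y(s) <= k) = k / (s + k), hence for the maximum
# of n i.i.d. copies: P(N_n(s) <= k) = (k / (s + k))^n
for y in (0.5, 1.0, 2.0):
    k = int(n * y)
    exact = (k / (k + s)) ** n
    limit = math.exp(-s / y)             # inverse exponential H_s(y)
    print(y, exact, limit)
```

Already for n = 50 the exact and limiting values agree to two or three decimal places, illustrating why N_n(s)/n is well described by H_s.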

Remark 7.
The continuous function H s (y) = e −s/y I (0 ∞) (y) with parameter s > 0 is the distribution function of the inverse exponential random variable W(s) = 1/V(s), where V(s) is exponentially distributed with rate parameter s > 0. Both H s (y) and P(N n (s) ≤ y) are heavy tailed with shape parameter 1.

Lemma 4. The additional conditions (23) and (24) in Transfer Theorem 2 are satisfied for N_n(s) with the limit inverse exponential distribution H_s(y), the function h_{2;s}(y) given in (40), g_n = n and b = 2.
Lemma 5. For the random size N_n(s) with distribution function (39), with reals s ≥ s_0 > 0 for arbitrarily small s_0 > 0 and n ≥ 1, the required bound on the negative order moments of N_n(s) holds (it is used with α = 2 in the proof of Theorem 8). The Lemmas are proved in Appendix A.2.

Main Results
Consider the sample correlation coefficient R_m = R_m(X_m, Y_m) given in (4) and the two statistics R*_m = √m R_m and R**_m = m R_m, which differ from R_m by scaling factors. Hence, by (10), the representation (42) of Condition 1 type holds. Let θ_m be the angle between the vectors X_m and Y_m, and consider the statistics Θ_m = θ_m − π/2, Θ*_m = √m(θ_m − π/2) and Θ**_m = m(θ_m − π/2), which differ only in scaling; then (13) applies analogously. Consider now the statistics R_{N_n}, R*_{N_n} and R**_{N_n}, as well as Θ_{N_n}, Θ*_{N_n} and Θ**_{N_n}, when the vectors have random dimension N_n. The normalized statistics have different limit distributions as n → ∞.

The Random Dimension N n = N n (r) Is Negative Binomial Distributed
Let the random dimension N n (r) be negative binomial distributed with probability mass function (29) and g n = EN n (r) = r(n − 1) + 1. "The negative binomial distribution is one of the two leading cases for count models, it accommodates the overdispersion typically observed in count data (which the Poisson model cannot)", see Schluter and Trede [26].
It follows from Theorems 1 and 2 and Proposition 1 that, if limit distributions of the three normalized statistics exist for γ ∈ {1/2, 0, −1/2}, they are the scale mixtures

∫_0^∞ Φ(x y^γ) dG_{r,r}(y),

with densities given below in the proofs of the corresponding theorems; in the case γ = −1/2, generalized Laplace distributions occur, reducing for r = 1 to the standard Laplace law.

Student's t-Distribution
We start with the case γ = 1/2 in Theorems 1 and 2 and consider the statistic √g_n R_{N_n(r)}. The limit distribution is Student's t-distribution S_{2r}(x) with 2r degrees of freedom, with density (31).

Theorem 3. Let r > 0 and let (29) be the probability mass function of the random dimension N_n = N_n(r) of the vectors under consideration. If the representation (42) for the statistic R_m and the inequality (32) with g_n = E N_n(r) = r(n − 1) + 1 hold, then there exists a constant C_r such that for all n ∈ N_+

sup_x |P(√g_n R_{N_n(r)} ≤ x) − S_{2r;n}(x; 1)| ≤ C_r n^(−min{r,2}) for r ≠ 2, and ≤ C_r ln(n) n^(−2) for r = 2,

where the second order approximation S_{2r;n}(x; a), given in (45), has the form S_{2r}(x) plus a correction of order n^(−1) proportional to the density s_{2r}(x). Moreover, the scaled angle statistic for θ_{N_n(r)} between the vectors X_{N_n(r)} and Y_{N_n(r)} allows the analogous estimate with S_{2r;n}(x; 1/3), given in (45) with a = 1/3. Figure 2 shows the advantage of the Chebyshev–Edgeworth expansion S_{2r;n}(x; 1) over the limit law in approximating the empirical distribution function of the correlation coefficient for pairs of normal vectors with random dimension N_25(2), i.e., n = 25 and r = 2.
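For r = 1 (geometric dimension, cf. Remark 11) the limit law has the closed form S_2(x) = ½(1 + x/√(x² + 2)), which makes the convergence in Theorem 3 easy to probe by simulation (a sketch; n and the replication count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 100, 20_000
N = rng.geometric(1.0 / n, size=reps)    # r = 1: geometric dimension, g_n = E N = n

T = np.empty(reps)
for i in range(reps):
    x = rng.standard_normal(N[i])
    y = rng.standard_normal(N[i])
    R = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
    T[i] = np.sqrt(n) * R                # sqrt(g_n) * R_{N_n(1)}

S2 = lambda t: 0.5 * (1 + t / np.sqrt(t * t + 2.0))  # Student t cdf, 2 df
for t in (-1.0, 0.0, 1.0):
    print(t, np.mean(T <= t), S2(t))     # empirical cdf vs limit law
```

The empirical distribution function of √n R_{N_n(1)} tracks the t_2 cdf closely even at moderate n, consistent with the O(n^(−1)) rate for r = 1.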

Remark 9.
The limit Student's t-distribution S_{2r}(x) is symmetric and is a generalized hyperbolic distribution; it can be written via the regularized incomplete beta function I_z(a, b). For x > 0:

S_{2r}(x) = 1 − (1/2) I_{2r/(2r+x²)}(r, 1/2).

Remark 11.
If the dimension of the vectors has the geometric distribution N_n(1), then the asymptotic distribution of the sample correlation coefficient is the Student law S_2(x) with two degrees of freedom.
Remark 12. The Cauchy limit distribution occurs when the dimension of the vectors has distribution N n (1/2).

Remark 13.
The Student's t-distributions S 2r (x) are heavy tailed and their moments of orders α ≥ 2 r do not exist.

Standard Normal Distribution
Let γ = 0 in Theorems 1 and 2, examining the statistics R*_{N_n(r)} = √(N_n(r)) R_{N_n(r)} and Θ*_{N_n(r)} = √(N_n(r)) (θ_{N_n(r)} − π/2).

Theorem 4. Let r > 0 and let N_n = N_n(r) be the random vector dimension having probability mass function (29). If the representation (42) for the statistic R_m and the inequality (32) with g_n = E N_n(r) = r(n − 1) + 1 hold, then there exists a constant C_r such that for all n ∈ N_+

sup_x |P(√(N_n(r)) R_{N_n(r)} ≤ x) − Φ_{n;2}(x; 1)| ≤ C_r n^(−min{r,2}) (with an additional ln n factor for r = 2),

where Φ_{n;2}(x; a) is the second order approximation to Φ(x) given in (47). Moreover, the scaled angle statistic Θ*_{N_n(r)} between the vectors X_{N_n(r)} and Y_{N_n(r)} allows the analogous estimate with Φ_{n;2}(x; 1/3), given in (47) with a = 1/3. Figure 3 shows that the second order Chebyshev–Edgeworth expansion Φ_{n;2}(x; 1) approximates the empirical distribution function better than the limit normal law Φ(x) for the correlation coefficient of pairs of normal vectors with random dimension N_25(2), i.e., n = 25 and r = 2.

Remark 14.
When the distribution function of a statistic T m without standardization tends to the standard normal distribution Φ(x), i.e., P(T m ≤ x) → Φ(x), then the limit law for P(T N n ≤ x) remains the standard normal distribution Φ(x).

Generalized Laplace Distribution
Finally, we use γ = −1/2 in Theorems 1 and 2, examining the statistic g_n^(−1/2) R**_{N_n(r)} = g_n^(−1/2) N_n(r) R_{N_n(r)}. Theorems 1 and 2 state that if a limit distribution of P(g_n^(−1/2) R**_{N_n} ≤ x) as n → ∞ exists, then it has to be a scale mixture of normal distributions with zero mean and gamma mixing distribution:

L_r(x) = ∫_0^∞ Φ(x y^(−1/2)) dG_{r,r}(y),   (48)

having a density l_r(x) expressed through the Macdonald function K_α(u), the α-order modified Bessel function of the third kind, see Formula (A9) in the proof of Theorem 5. See, e.g., Oldham et al. [30], Chapter 51, or Kotz et al. [31], Appendix, for properties of these functions. For integer r = 1, 2, 3, ..., these densities l_r(x), the so-called Sargan densities, and their distribution functions are computable in closed form. The standard Laplace distribution is L_1(x), with variance 1 and density l_1(x) given in (49). Therefore, the Sargan distributions are a kind of generalization of the standard Laplace distribution.

Theorem 5. Let r = 1, 2, 3 and let (29) be the probability mass function of the random dimension N_n = N_n(r) of the vectors under consideration. If the representation (42) for the statistic R_m and the inequality (32) for N_n(r) with g_n = E N_n(r) = r(n − 1) + 1 hold, then there exists a constant C_r such that for all n ∈ N_+

sup_x |P(g_n^(−1/2) N_n(r) R_{N_n(r)} ≤ x) − L_{n;2}(x; 1)| ≤ C_r n^(−min{r,2}) for r ≠ 2, and ≤ C_r ln(n) n^(−2) for r = 2,

where L_{n;2}(x; a) is given in (51); for arbitrary r > 0, an analogous approximation rate holds. Moreover, the scaled angle statistic g_n^(−1/2) N_n(r) (θ_{N_n(r)} − π/2) between the vectors X_{N_n(r)} and Y_{N_n(r)} allows the same estimate with L_{n;2}(x; 1/3), given in (51) with a = 1/3. Figure 4 shows that the Chebyshev–Edgeworth expansion approaches the empirical distribution function better than the limit Laplace law.
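For r = 1 the mixture in (48) reduces to exponential mixing, and the limit must be the standard Laplace law with variance 1 (density e^(−√2|x|)/√2). This identity is easy to check by numerical integration (midpoint rule; the step sizes are arbitrary choices):

```python
import math

def phi_cdf(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def laplace_cdf(x):                      # standard Laplace, variance 1
    if x >= 0:
        return 1.0 - 0.5 * math.exp(-math.sqrt(2.0) * x)
    return 0.5 * math.exp(math.sqrt(2.0) * x)

def mixture_cdf(x, steps=20_000, ymax=40.0):
    # integral of Phi(x / sqrt(y)) against Exp(1) mixing: the r = 1 case of the mixture
    h = ymax / steps
    return sum(phi_cdf(x / math.sqrt((i + 0.5) * h)) * math.exp(-(i + 0.5) * h) * h
               for i in range(steps))

for x in (0.5, 1.0, 2.0):
    print(x, mixture_cdf(x), laplace_cdf(x))
```

The two columns agree to several decimal places, confirming that exponential variance mixing of centered normals yields the Laplace law.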

Remark 16.
The function (48), as a mixture of normal distributions with zero mean and random variance W_r having gamma distribution P(W_r ≤ y) = G_{r,r}(y), is also given in Kotz et al. [31], Formula (4.1.32) with τ = r, σ = 1/√r, using Formula (A.0.4) with λ = −r + 3/2 and the order-reflection formula K_{−α}(x) = K_α(x). Such a variance gamma model is studied in Madan and Seneta [33] for share market returns.

Remark 17.
A systematic exposition of the Laplace distribution, its numerous generalizations and diverse applications can be found in the useful and interesting monograph by Kotz et al. [31]. Here, the generalized Laplace distributions L_1(x), L_2(x) and L_3(x) are the leading terms in the approximations of the sample correlation coefficient R**_{N_n(r)} of two Gaussian vectors with negative binomially distributed random dimension N_n(r), and of the angle θ**_{N_n(r)} between these vectors. In [34] and Missiakoulis [35], the Sargan densities l_r(x) and distribution functions L_r(x) for arbitrary integer r = 1, 2, 3, ... have been studied as an alternative to the normal law in econometric models because they are computable in closed form; see also Kotz et al. [31], Section 4.4.3, and the references therein. Figure 4 compares the empirical distribution function of g_n^(−1/2) N_n(r) R_{N_n(r)}, the limit Laplace law L_r(x), and the second approximation L_{n;2}(x; 1) for random dimension N_25(2), i.e., n = 25 and r = 2.

The Random Dimension N n = N n (s) Is the Maximum of n Independent Discrete Pareto Random Variables
The random dimension N_n(s) has the distribution function (39). Since E N_n(s) = ∞, we choose g_n = n and consider again the cases γ = 1/2, γ = 0 and γ = −1/2.
It follows from Theorems 1 and 2 and Proposition 2 that, if limit distributions of the normalized statistics exist for γ ∈ {1/2, 0, −1/2}, they are

∫_0^∞ Φ(x y^γ) dH_s(y),

with densities given below in the proofs of the corresponding theorems, where s*_2(x; √s) is the density of the scaled Student's t-distribution S*_2(x; √s) with 2 degrees of freedom, see Definition B.37 in Jackman [36], p. 507. If Z has density s*_2(x; √s), then Z/√s has a classical Student's t-distribution with two degrees of freedom.

Laplace Distribution
We start with the case γ = 1/2 in Theorems 1 and 2 and consider the statistics √n R_{N_n(s)} and √n(θ_{N_n(s)} − π/2). The limit distribution is now the Laplace distribution L_{1/√s}(x) with variance 1/s. The corresponding bound (Theorem 6) has the same form as in Theorem 3, with the second order approximation L_{1/√s;n}(x; 1) in place of S_{2r;n}(x; 1). Moreover, the scaled angle statistic √n(θ_{N_n(s)} − π/2) between the vectors X_{N_n(s)} and Y_{N_n(s)} allows the same estimate with L_{1/√s;n}(x; 1/3), given in (53) with a = 1/3.
Theorem 7. Let s > 0 and let N_n = N_n(s) be the random vector dimension having distribution function (39). If the representation (42) for the statistic R_m and the inequality (32) with g_n = n hold, then there exists a constant C_s such that, for all n ∈ N_+, the corresponding second order bound holds for √(N_n(s)) R_{N_n(s)}. Moreover, the scaled angle statistic Θ*_{N_n(s)} between the vectors X_{N_n(s)} and Y_{N_n(s)} allows the analogous estimate.

Scaled Student's t-Distribution
Finally, we use γ = −1/2 in Theorems 1 and 2, examining the statistics n^(−1/2) N_n(s) R_{N_n(s)} and n^(−1/2) N_n(s) (θ_{N_n(s)} − π/2). The limit scaled Student's t-distribution S*_2(x; √s) with two degrees of freedom is a scale mixture of normal distributions with zero mean and mixing exponential distribution 1 − e^(−sy), y ≥ 0, and it is representable in closed form, see (A15) below in the proof of Theorem 8:

S*_2(x; √s) = (1/2)(1 + x/√(x² + 2s)).

Theorem 8. Let s > 0 and let N_n = N_n(s) be the random vector dimension having distribution function (39). If the representation (42) for the statistic R_m and the inequality (32) with g_n = n hold, then there exists a constant C_s such that for all n ∈ N_+ the bound

sup_x |P(n^(−1/2) N_n(s) R_{N_n(s)} ≤ x) − S*_{n;2}(x; √s; 1)|

holds with the rate established in the proof, where S*_{n;2}(x; √s; a) is given in (55). Moreover, the scaled angle statistic n^(−1/2) N_n(s) (θ_{N_n(s)} − π/2) between the vectors X_{N_n(s)} and Y_{N_n(s)} allows the analogous estimate with S_{n;2}(x; √s; 1/3), given in (55) with a = 1/3. Theorems 3 to 8 are proved in Appendix A.3.
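Writing S*_2(x; √s) as the normal scale mixture with exponential mixing of rate s gives the closed form ½(1 + x/√(x² + 2s)), equivalently S_2(x/√s) for the classical t_2 cdf; a quick numerical cross-check (midpoint rule; s = 2 and the step sizes are arbitrary choices):

```python
import math

def phi_cdf(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def scaled_t2_cdf(x, s):
    # closed form of the scale mixture: equals S_2(x / sqrt(s)) for the classical t_2 cdf
    return 0.5 * (1.0 + x / math.sqrt(x * x + 2.0 * s))

def mixture_cdf(x, s, steps=20_000, vmax=20.0):
    # integral of Phi(x * sqrt(v)) against the Exp(s) mixing distribution (midpoint rule)
    h = vmax / steps
    return sum(phi_cdf(x * math.sqrt((i + 0.5) * h)) * s * math.exp(-s * (i + 0.5) * h) * h
               for i in range(steps))

s = 2.0
for x in (-1.0, 0.5, 1.5):
    print(x, mixture_cdf(x, s), scaled_t2_cdf(x, s))
```

The numerical mixture and the closed form agree to high accuracy, consistent with the representation via (A15).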

Conclusions
The asymptotic distributions of the sample correlation coefficient of vectors with random dimension are normal scale mixtures. From (43) and (52), one can conclude that the random dimension and the corresponding scaling have a significant influence on the limit distributions. A scale mixture of a normal distribution changes the tail behavior of the distribution. Student's t-distributions have polynomial tails; as one class of heavy-tailed distributions, they can be used to model heavy-tailed return data in finance. The Laplace distributions have heavier tails than normal distributions.

Acknowledgments: The authors would like to thank the Managing Editor and the Reviewers for the careful reading of the manuscript and pertinent comments. Their constructive feedback helped to improve the quality of this work and shape its final form.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A. Proofs of the Theorems and Lemmas
Appendix A.1. Proofs of Theorems 1 and 2. Proof of Theorem 1. The proof follows along similar arguments as the more general transfer Theorem 3.1 in Bening et al. [24]. Since in Theorem 3.1 of Bening et al. [24] the constant γ has to be non-negative, while in our Theorem 1 we also need γ = −1/2, we repeat the proof. Conditioning on N_n and using (18), and taking into account that P(N_n/g_n < 1/g_n) = P(N_n < 1) = 0, we obtain the decomposition into integrals I_1 and I_2, where ∆_n(x, y) := Φ(x y^γ) + f_2(x y^γ)/(g_n y) and G_n(x, 1/g_n) is defined in (21). Estimating the integral I_1, we use integration by parts for Lebesgue–Stieltjes integrals, the boundedness of f_2(z), say sup_z |f_2(z)| ≤ c*_1, and the estimate (19), where D_n is defined in (22). Together with (A1), we obtain (20), and Theorem 1 is proved.
Theorem 2 is proved.
For r = 2, the second integral in the line above is an exponential integral. Therefore, we estimate the integral I_2(x, n) in (35) more precisely, as in the estimation of I_1(x, n) above, taking into account the given function f_2(z; a).
In order to prove (35) for r = 2 and γ = 0, we consider for α > 0 the inequalities (A4). The upper bound in (A4) leads to (35) for r = 2, γ = 0, too. The lower bound in (A4) shows that the ln n term cannot be improved.
Proof of Theorem 8. By Lemma 4, the additional assumptions (23) and (24) in Transfer Theorem 2 are satisfied with the limit inverse exponential distribution H_s(y) and h_{2;s}(y) given in (40), g_n = n and b = 2. In Transfer Theorem 1, the right-hand side of (20) is estimated by Lemma 1 and by Lemma 5 with α = 2 in the case γ = −1/2. Then, we have (21) with (35). To obtain (54), we calculate the above integrals explicitly.