Some Properties of Univariate and Multivariate Exponential Power Distributions and Related Topics

Abstract: In the paper, a survey of the main results concerning univariate and multivariate exponential power (EP) distributions is given, with main attention paid to mixture representations of these laws. The properties of mixing distributions are considered and some asymptotic results based on mixture representations for EP and related distributions are proved. Unlike the conventional analytical approach, here the presentation follows the lines of a kind of arithmetical approach in the space of random variables or vectors. Here the operation of scale mixing in the space of distributions is replaced with the operation of multiplication in the space of random vectors/variables under the assumption that the multipliers are independent. By doing so, the reasoning becomes much simpler, the proofs become shorter and some general features of the distributions under consideration become more vivid. The first part of the paper concerns the univariate case. Some known results are discussed and simple alternative proofs for some of them are presented as well as several new results concerning both EP distributions and some related topics including an extension of Gleser's theorem on representability of the gamma distribution as a mixture of exponential laws and limit theorems on convergence of the distributions of maximum and minimum random sums to one-sided EP distributions and convergence of the distributions of extreme order statistics in samples with random sizes to the one-sided EP and gamma distributions. The results obtained here open the way to deal with natural multivariate analogs of EP distributions. In the second part of the paper, we discuss the conventionally defined multivariate EP distributions and introduce the notion of projective EP (PEP) distributions. 
The properties of multivariate EP and PEP distributions are considered as well as limit theorems establishing the conditions for the convergence of multivariate statistics constructed from samples with random sizes (including random sums of random vectors) to multivariate elliptically contoured EP and projective EP laws. The results obtained here give additional theoretical grounds for the applicability of EP and PEP distributions as asymptotic approximations for the statistical regularities observed in data in many fields.


Introduction
Let α > 0. The symmetric exponential power (EP) distribution is an absolutely continuous distribution defined by its Lebesgue probability density

p_α(x) = α / (2Γ(1/α)) · exp{−|x|^α}, x ∈ R. (1)

To make notation and calculation simpler, hereinafter we will use a single parameter α in representation (1), because this parameter determines the shape of distribution (1). If α = 1, then relation (1) defines the classical Laplace distribution with zero expectation and variance 2. If α = 2, then relation (1) defines the normal (Gaussian) distribution with zero expectation and variance 1/2. Any random variable (r.v.) with probability density p_α(x) will be denoted Q_α.
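The special cases just mentioned can be checked numerically. The sketch below assumes density (1) in the single-parameter form p_α(x) = α·exp{−|x|^α}/(2Γ(1/α)) and verifies that it reduces to the Laplace and N(0, 1/2) densities:

```python
import math

# Hedged sketch: density (1) is assumed to be
#   p_alpha(x) = alpha / (2 * Gamma(1/alpha)) * exp(-|x|^alpha),
# which should reduce to the Laplace density for alpha = 1 and to the
# N(0, 1/2) density for alpha = 2, as stated in the text.
def ep_pdf(x: float, alpha: float) -> float:
    return alpha / (2.0 * math.gamma(1.0 / alpha)) * math.exp(-abs(x) ** alpha)

# alpha = 1: classical Laplace density (zero mean, variance 2): 0.5 * exp(-|x|)
# alpha = 2: normal density with variance 1/2: exp(-x^2) / sqrt(pi)
for x in (-1.5, 0.0, 0.7):
    assert math.isclose(ep_pdf(x, 1.0), 0.5 * math.exp(-abs(x)))
    assert math.isclose(ep_pdf(x, 2.0), math.exp(-x * x) / math.sqrt(math.pi))
```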
The distribution (1) was introduced and studied by M. T. Subbotin in 1923 [1]. In that paper, distribution (1) was called the generalized Laplace distribution. Several other terms are used for distribution (1): it is called the exponential power distribution [2,3], power exponential distribution [4-6], generalized error distribution [7-9], generalized exponential distribution [10], generalized normal distribution [11] and generalized Gaussian distribution [12-14]. Distribution (1) is widely used in Bayesian analysis and in various applied problems from signal and image processing to astronomy and insurance as a more general alternative to the normal law. The paper [14] contains a survey of applications of univariate EP distributions. Particular fields of application of multivariate EP models are listed in [15] and [13]. Concerning the methods of statistical estimation of the parameters of these distributions, see [13] and the references therein.
In the present paper we focus on mixture representations for EP and related distributions. In [16] it was proved that for 0 < α ≤ 2, distributions of type (1) can be represented as scale mixtures of normal laws. Ten years later this result was re-proved in [17] with no reference to [16]. In the present paper, this result is generalized. We also consider and discuss some alternative uniform mixture representations for univariate and multivariate EP distributions and obtain some unexpected representations for the exponential and normal laws. Mixture representations of EP and related distributions are of great interest due to the following reasons.
In the book [18], a principle was implicitly formulated that a formal model of a probability distribution can be treated as reliable or trustworthy in applied probability only if it is an asymptotic approximation or a limit distribution in a more or less simple limit setting. This principle can be interrelated with the universal principle stating that in closed systems the uncertainty does not decrease, as it was done in the book [19]. It is conventional to measure the uncertainty of a probability distribution by entropy. It has already been mentioned that with 0 < α ≤ 2 the EP distribution can be represented as a scale mixture of normal laws. As is known, the normal distribution has the maximum (differential) entropy among all distributions with finite second moment and supported by the whole real axis. In the book [19], it was emphasized that in probability theory the principle of the non-decrease of entropy manifests itself in the form of limit theorems for sums of independent r.v.s. Therefore, if the system under consideration was information-isolated from the environment, then the observed statistical distributions of its characteristics could be regarded as very close to the normal law which possesses maximum possible entropy. However, by definition, a mathematical model cannot take into account all factors which influence the evolution of the system under consideration. Therefore, the parameters of this normal law vary depending on the evolution of the "environmental" factors; in other words, these parameters should be treated as random, depending on the information interchange between the system and the environment. Hence, mixtures of normal laws are reasonable mathematical models of statistical regularities of the behavior of the observed characteristics of systems in many situations, and the EP distribution (1), being a normal mixture, is of serious analytical and practical interest.
Probably, the simplicity of representation (1) has been the main (or, at least, a substantial) reason for using the EP distributions in many applied problems. The first attempt to provide "asymptotic" grounds for the possible adequacy of this model was made in [20]. In this paper we prove more general results than those presented in [20] and demonstrate that the (multivariate) EP distribution can be asymptotic in simple limit theorems for statistics constructed from samples with random sizes that are asymptotically normal when the sample size is non-random, in particular, in the scheme of random summation.
The EP distributions (at least with 0 < α ≤ 2) turn out to be closely related to stable distributions. The book [21] by V. M. Zolotarev became a milestone in the development of the theory of stable distributions. The representation of an EP distribution with 0 < α ≤ 2 as a normal scale mixture can be easily proved using the famous 'multiplication' Theorem 3.3.1 in [21]. Moreover, in these representations the mixing distributions are defined via stable densities. In the present paper, we show that these mixing laws, which play an auxiliary role in the theory of EP distributions, can play quite a separate role, being limit laws for the random sample size providing that the extreme order statistics follow the gamma distribution.
This paper can be regarded as a complement to the recent publication [14]. We give a survey of the main results concerning univariate and multivariate EP distributions, consider the properties of mixing distributions appearing in the generalizations mentioned above and prove some asymptotic results based on mixture representations for EP and related distributions. Unlike the conventional analytical approach used in [14], here the presentation follows the lines of a kind of arithmetical approach in the space of random variables or vectors. Here the operation of scale mixing in the space of distributions is replaced with the operation of multiplication in the space of random vectors/variables under the assumption that the multipliers are independent. By doing so, the reasoning becomes much simpler, the proofs become shorter and some general features of the distributions under consideration become more vivid. Section 2 contains some preliminaries. Section 3 concerns the univariate case. We discuss some known results mentioned in [14] and present simple alternative proofs of some of them as well as several new results concerning both EP distributions and some related topics including an extension of Gleser's theorem on representability of the gamma distribution as a mixture of exponential laws and limit theorems on convergence of the distributions of maximum and minimum random sums to one-sided EP distributions and convergence of the distributions of extreme order statistics in samples with random sizes to the one-sided EP and gamma distributions. The results obtained here open the way to deal with natural multivariate analogs of EP distributions. In Section 4, we discuss the conventionally defined multivariate EP distributions and introduce the notion of projective EP (PEP) distributions. 
The properties of multivariate EP and PEP distributions are considered as well as limit theorems establishing the conditions for the convergence of multivariate statistics constructed from samples with random sizes (including random sums of random vectors) to multivariate elliptically contoured EP and projective EP laws. The results obtained here give additional theoretical grounds for the applicability of EP and PEP distributions as asymptotic approximations for the statistical regularities observed in data in many fields.

Mathematical Preliminaries
The symbol d= will stand for the coincidence of distributions. The symbol □ will mark the end of a proof. The indicator function of a set A will be denoted I_A(z): if z ∈ A, then I_A(z) = 1; otherwise, I_A(z) = 0. The symbol • denotes the product of independent random elements.
All the r.v.s and random vectors will be assumed to be defined on one and the same probability space (Ω, A, P). The symbols L(Y) and L(Y) will denote the distribution of an r.v. Y and an r-variate random vector Y with respect to the measure P, respectively.
An r.v. with the standard exponential distribution will be denoted W_1: P(W_1 < x) = [1 − e^{−x}] I_{[0,∞)}(x). A gamma-distributed r.v. with shape parameter r > 0 and scale parameter λ > 0 will be denoted G_{r,λ}; its density is g(x; r, λ) = λ^r x^{r−1} e^{−λx} I_{[0,∞)}(x) / Γ(r), where Γ(r) is Euler's gamma-function. In this notation, obviously, G_{1,1} is an r.v. with the standard exponential distribution: G_{1,1} d= W_1. The distribution of the r.v. W_γ with P(W_γ < x) = [1 − e^{−x^γ}] I_{[0,∞)}(x), γ > 0, is called the Weibull distribution with shape parameter γ. It is easy to see that W_1^{1/γ} d= W_γ. The standard normal distribution function (d.f.) and its density will be denoted Φ(x) and ϕ(x), respectively. An r.v. with the standard normal distribution will be denoted X. By g_{α,θ}(x) and G_{α,θ}(x) we will respectively denote the probability density and the d.f. of the strictly stable law with characteristic exponent α and symmetry parameter θ corresponding to the characteristic function

f_{α,θ}(t) = exp{−|t|^α · exp{−(iπθα/2) sign t}}, t ∈ R, (2)

with 0 < α ≤ 2, |θ| ≤ θ_α = min{1, 2/α − 1} (see, e.g., [21]). An r.v. with characteristic function (2) will be denoted S_{α,θ}. To symmetric strictly stable distributions there corresponds the value θ = 0 and the ch.f. f_{α,0}(t) = exp{−|t|^α}, t ∈ R.
According to the «multiplication theorem» (see, e.g., Theorem 3.3.1 in [21]), for any admissible pair of parameters (α, θ) and any α' ∈ (0, 1], the product representation

S_{αα',θ} d= S_{α,θ} • S_{α',1}^{1/α} (3)

holds, in which the factors on the right-hand side are independent. From (3), it follows that for any α ∈ (0, 2]

S_{α,0} d= X • (2S_{α/2,1})^{1/2}, (4)

that is, any symmetric strictly stable distribution can be represented as a normal scale mixture.
Rewrite (3) with θ = 0 in terms of characteristic functions. Then, changing the notation t → x, by formal transformations of Equality (8), we obtain (9). It can be easily verified that u_{α,α'}(z) is the probability density of a nonnegative r.v. Indeed, since p_α(z) is a probability density, the function u_{α,α'}(z) is nonnegative for any z > 0, and it follows from (9) that u_{α,α'}(z) integrates to one. The proposition is thus proved. Let 0 < β ≤ α ≤ 2. Then the assertion of Proposition 1 can be rewritten as (10). It is easily seen that Q_2 d= X. Setting α = 2, from (10) we obtain the following statement.

Corollary 1 ([16]). Any symmetric EP distribution with α ∈ (0, 2] is a scale mixture of normal laws.

Now let α = 1. The r.v. having the Laplace distribution with variance 2 will be denoted Λ. As it has already been noted, Q_1 d= Λ, so that representation (11) holds. On the other hand, from Corollary 1 we obtain representation (12). Therefore, by virtue of identifiability of scale mixtures of normal laws (see [25] and details below), having compared (11) and (12), we obtain that the r.v. U^{−1}_{1,1/2} has the exponential distribution with parameter 1/4, whereas the r.v. U_{1,1/2} has the inverse exponential (Fréchet) distribution: P(U_{1,1/2} < x) = exp{−1/(4x)}, x ≥ 0.
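The α = 1 case of Corollary 1 can be illustrated by simulation via the classical normal scale-mixture identity Λ d= √(2W_1) • X (a well-known fact consistent with the representations above; the concrete simulation parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400_000

# alpha = 1 case: Laplace (variance 2) as a normal scale mixture,
# Lambda =d sqrt(2 * W1) * X with W1 ~ Exp(1) independent of X ~ N(0,1).
w = rng.exponential(1.0, n)
x = rng.standard_normal(n)
sample = np.sqrt(2.0 * w) * x

# Moments of the Laplace law with density 0.5 * exp(-|x|): mean 0, variance 2.
assert abs(sample.mean()) < 0.02
assert abs(sample.var() - 2.0) < 0.05
# Empirical CDF at 1 vs Laplace CDF 1 - 0.5 * exp(-1)
assert abs((sample < 1.0).mean() - (1 - 0.5 * np.exp(-1.0))) < 0.01
```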

Corollary 2.
Any symmetric EP distribution with α ∈ (0, 1] is a scale mixture of Laplace laws. For α ∈ (0, 1], from Corollary 2 we obtain one more representation of the EP distribution as a scale mixture of normal laws. Proof. By virtue of identifiability of scale mixtures of normal laws, from (13) and Corollary 1 we obtain that if α ∈ (0, 1], then the distribution of the r.v. U^{−1}_{2,α/2} is mixed exponential (see (14)). Hence, in accordance with the result of [26], which states that the product of two independent non-negative r.v.s is infinitely divisible provided one of the two is exponentially distributed, from (14) it follows that, for α ∈ (0, 1], the distribution of U^{−1}_{2,α/2} is infinitely divisible. It remains to use Corollary 1 and the well-known result that a normal scale mixture is infinitely divisible if the mixing distribution is infinitely divisible (see, e.g., [27], Chapter XVII, Section 3).
The interval (0, 1] does not cover all values of α providing the infinite divisibility of L(Q_α). Another obvious value of α for which L(Q_α) is infinitely divisible is α = 2: the distribution of Q_2 is normal and, hence, infinitely divisible as well. Moreover, as is shown in [14], for values of α ∉ (0, 1] ∪ {2}, the EP distributions are not infinitely divisible. From Proposition 1, as a by-product, we can obtain simple expressions for the moments of negative orders of one-sided strictly stable distributions. Proof. As was established in the proof of Proposition 1, the function u_{1/δ,α}(x) is a probability density, that is, it integrates to one. Therefore, the desired expressions follow. Now consider some properties of the mixing r.v. U_{α,α'} in (7). First, we present some inequalities for the tail of the distribution of U_{α,α'}.
Proof. (i) With the account of the well-known relation (e.g., see [18], Chapter 7, Section 36) we conclude that for any x > 0, there exists a c ∈ (0, ∞) such that (ii) With the account of (6), we have Proof. (i) To prove (15), notice that, by the definition of u α,α (x), and use (6).
(ii) To prove (16), note that for arbitrary β ∈ (0, 2] and γ ≥ β − 1 relation (17) holds. Letting δ = γ/α, from (17) we obtain (16). The proposition is proved. Now consider the property of identifiability of scale mixtures of EP distributions. Recall the definition of identifiability of scale mixtures. Let Q be an r.v. with the d.f. F_Q(x), and let V_1 and V_2 be two nonnegative r.v.s. The family of scale mixtures of F_Q is said to be identifiable if the equality EF_Q(x/V_1) = EF_Q(x/V_2), x ∈ R, implies that L(V_1) = L(V_2). Therefore, by the chain differentiation rule we obtain the corresponding relation for the densities. Hence, passing to the Fourier transform, multiplying the integrand in the last integral by 1 = e^{−αx} e^{αx} and changing the variables e^{αx} → y, so that dy = αe^{αx} dx and e^x = y^{1/α}, we obtain the required representation. The reference to Lemma 1 completes the argument, and the desired result follows from what has just been proved.
Proof. From Proposition 1 (see (10)), we obtain a relation that connects the distributions of the r.v.s U_{α,α'} with different values of α but with the same values of α'. As regards the relation between the distributions of the r.v.s U_{α,α'} with different values of α' but with the same values of α, it can be easily seen that the corresponding identity holds for any α' ∈ (0, 1] and α, β ∈ (0, 2]; in other words, it holds for any x > 0. Consider some properties of the one-sided EP distribution of the r.v. |Q_α|. Obviously, the density p⁺_α(x) of |Q_α| is given by (18), so that the moments of order δ > −1 can be computed directly.

Lemma 2.
A d.f. F(x) such that F(0) = 0 corresponds to a mixed exponential distribution if and only if the function 1 − F(x) is completely monotone. Proof. This statement immediately follows from the Bernstein theorem [28].
Proposition 6. The distribution of the r.v. |Q α | can be represented as mixed exponential if and only if α ∈ (0, 1]. In that case the mixing density is u 1,α (x).
Proof. Let α ∈ (0, 1]. As is known, the Laplace-Stieltjes transform ψ_α(s) of the nonnegative strictly stable r.v. S_{α,1} is ψ_α(s) = Ee^{−sS_{α,1}} = e^{−s^α}, s ≥ 0. Hence, by formal transformation, we obtain (20), where the function u_{1,α}(x) was introduced in Proposition 1 and proved to be a probability density function. Relation (20) means that if α ∈ (0, 1], then the distribution of |Q_α| is mixed exponential. Now, let α > 1. It can be easily seen that in this case the survival function of |Q_α| is not completely monotone. Hence, by Lemma 2 the distribution of |Q_α| is not mixed exponential. The proposition is proved.
In terms of r.v.s, the statement of Proposition 6 can be formulated as relation (21), provided α ∈ (0, 1] (also see Corollary 1).
Proof. This statement immediately follows from (21) and the result of [26] which states that the product of two independent non-negative r.v.s is infinitely divisible, provided one of the two is exponentially distributed.
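As a quick numerical illustration of Proposition 6 in its simplest case: for α = 1 the one-sided EP law |Q_1| is itself standard exponential, the degenerate instance of the mixed exponential representation. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400_000

# For alpha = 1 the one-sided EP law is standard exponential:
# |Q_1| = |Lambda| ~ Exp(1), the simplest (degenerate) instance of the
# mixed exponential representation in Proposition 6.
q1 = rng.laplace(loc=0.0, scale=1.0, size=n)   # density 0.5 * exp(-|x|)
half = np.abs(q1)

# Exp(1) has mean 1 and variance 1.
assert abs(half.mean() - 1.0) < 0.01
assert abs(half.var() - 1.0) < 0.02
assert abs((half < 1.0).mean() - (1 - np.exp(-1.0))) < 0.01
```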

Convergence of the Distributions of Maximum and Minimum Random Sums to One-Sided EP Laws
From Corollary 1 and (13), we obtain the corresponding representations for the one-sided laws. In this section, we demonstrate that the one-sided EP distribution can be limiting for maximum sums of a random number of independent r.v.s (maximum random sums), for minimum random sums and for absolute values of random sums. Convergence in distribution will be denoted by the symbol =⇒.
Consider independent, not necessarily identically distributed, r.v.s X_1, X_2, . . . with EX_i = 0 and 0 < σ_i² = DX_i < ∞. Let S_n = X_1 + . . . + X_n, B_n² = σ_1² + . . . + σ_n², S̄_n = max_{1≤k≤n} S_k, S̲_n = min_{1≤k≤n} S_k. Assume that the r.v.s X_1, X_2, . . . satisfy the Lindeberg condition: for any τ > 0,

(1/B_n²) Σ_{i=1}^n E[X_i² I(|X_i| ≥ τB_n)] → 0. (22)

It is well known that under these assumptions P(S_n < B_n x) =⇒ Φ(x) (this is the classical Lindeberg central limit theorem) and P(S̄_n < B_n x) =⇒ 2Φ(x) − 1, x ≥ 0, and P(S̲_n < B_n x) =⇒ 2Φ(x), x ≤ 0 (this is one of the manifestations of the invariance principle). Let N_1, N_2, . . . be natural-valued r.v.s such that for every n ≥ 1 the r.v. N_n is independent of the sequence X_1, X_2, . . . . Let {d_n}_{n≥1} be an infinitely increasing sequence of positive numbers. Here and in what follows convergence is meant as n → ∞.
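The invariance-principle limit for maximum partial sums quoted above can be checked by a small Monte Carlo experiment (Gaussian summands, so that B_n = √n; the step count and tolerance are illustrative choices, and the discrete-time maximum is slightly biased relative to the Brownian limit):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(5)

# Monte Carlo illustration of the invariance-principle claim:
# P(max_{k<=n} S_k < B_n x) -> 2*Phi(x) - 1 for x >= 0.
# i.i.d. standard normal summands are used, so B_n = sqrt(n).
n, paths = 500, 20_000
steps = rng.standard_normal((paths, n))
running_max = steps.cumsum(axis=1).max(axis=1)

x = 1.0
phi = 0.5 * (1.0 + erf(x / sqrt(2.0)))            # Phi(1)
empirical = (running_max < x * sqrt(n)).mean()
assert abs(empirical - (2.0 * phi - 1.0)) < 0.05  # limit is about 0.683
```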

Lemma 3.
Assume that the r.v.s X 1 , X 2 , . . . and N 1 , N 2 , . . . satisfy the conditions specified above. In particular, let Lindeberg condition (22) hold. Moreover, let N n → ∞ in probability. Then the distributions of normalized random sums weakly converge to some distribution; that is, there exists an r.v. Y such that d −1 n S N n =⇒ Y, if and only if any of the following conditions holds: The proof of Lemma 3 was given in [29].
Lemma 3 and Corollary 6 imply the following statement.

Extensions of Gleser's Theorem for Gamma Distributions
In [30], it was shown that a gamma distribution can be represented as mixed exponential if and only if its shape parameter is no greater than one. Namely, the density g(x; r, µ) of a gamma distribution with 0 < r < 1 admits representation (23) as a mixed exponential density. In [31], it was proved that if r ∈ (0, 1), µ > 0 and G_{r,1} and G_{1−r,1} are independent gamma-distributed r.v.s, then the density p(z; r, µ) defined by (23) corresponds to the r.v. given in (24),
where R_{1−r,r} is the r.v. with the Snedecor-Fisher distribution corresponding to the probability density (25). In other words, if r ∈ (0, 1), then representation (26) holds. A natural question arises: is there a product representation of G_{r,µ} in terms of exponential r.v.s for r > 1 similar to (26)? The results of the preceding section can give an answer to this question.
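The fact that a gamma law with shape r < 1 is mixed exponential can be illustrated numerically. For simplicity, this sketch uses the classical beta-gamma identity G_{r,1} d= B_{r,1−r} • W_1 (an independently known fact, not the paper's Snedecor-Fisher construction):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400_000
r = 0.5   # shape parameter in (0, 1)

# Classical beta-gamma identity (a relative of Gleser's representation,
# not the Snedecor-Fisher form used in the paper):
#   G_{r,1} =d B_{r,1-r} * W_1,
# so conditionally on the beta factor the law is exponential, i.e. a
# gamma law with shape r < 1 is mixed exponential.
b = rng.beta(r, 1.0 - r, n)
w = rng.exponential(1.0, n)
mixed = b * w

direct = rng.gamma(r, 1.0, n)
# Gamma(r, 1) has mean r and variance r; compare the two samples.
assert abs(mixed.mean() - direct.mean()) < 0.01
assert abs(mixed.var() - direct.var()) < 0.02
```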
For simplicity, without loss of generality let µ = 1.

Proposition 8.
Let r ≥ 1. Then representation (27) holds. Proof. As it has been already mentioned, the corresponding relation holds; therefore, with the account of (21), we obtain the desired result. Gamma distributions, as well as one-sided EP distributions, are particular representatives of the class of generalized gamma distributions (GG distributions), first described (under another name) in [32,33] in relation to some hydrological problems. The term "generalized gamma distribution" was proposed in [34] by E. W. Stacy, who considered a special family of lifetime distributions containing both gamma and Weibull distributions. However, these distributions are particular cases of a more general family introduced by L. Amoroso [35]. A generalized gamma distribution is the absolutely continuous distribution defined by the density

g(x; r, α, µ) = |α| µ^r x^{αr−1} e^{−µx^α} / Γ(r), x ≥ 0,

with α ∈ R, µ > 0, r > 0. An r.v. with the density g(x; r, α, µ) will be denoted G_{r,α,µ}. It is easy to see that G_{r,α,µ} d= G_{r,1,µ}^{1/α}. The following statement can be regarded as a generalization of (27).

Alternative Mixture Representations
we will denote an r.v. with the uniform distribution on the segment [a, b].

Lemma 4. For any
For the proof see [36].
Note that Lemma 4 with α = 2 yields an 'unexpected' uniform mixture representation for the normal distribution. The following statement extends and generalizes this result of [36] (see Lemma 4).
As by-products of Proposition 10 and Corollary 7, consider some mixture representations for the exponential and normal distributions. Using Corollary 1, we obtain for 0 < α ≤ 2 that relation (36) holds. Here we use the notation χ²_m for the r.v. having the chi-squared distribution with m degrees of freedom. Setting α = 1 in (36), we obtain representation (37) for the exponentially distributed r.v. Now on the left-hand side of (37) use the easily verified relation W_1 d= √(2W_1) • |X| and on the right-hand side of (37) use the relation G_{1/2,1} d= W_1 • Z^{−1}_{1/2,1} (see (26)). Then (37) will be transformed into (38), and since the family of mixed exponential distributions is identifiable, this yields a mixture representation for the folded normal distribution. Along with (32), from (38) we obtain one more product representation (39) for the normal r.v., this time in terms of the 'scaling' r.v.s in (26) and Corollary 1. Since the r.v. Y_{±1} has the discrete uniform distribution on the set {−1, +1}, relation (39) can be regarded as one more uniform mixture representation for the normal distribution.

Some Limit Theorems for Extreme Order Statistics in Samples with Random Sizes
Proposition 11 states that the one-sided EP distribution with α ≥ 1 is a scale mixture of the Weibull distribution with shape parameter α. In other words, relation (35) can be expressed in the following form: for any x ≥ 0, relation (40) holds. At the same time, Proposition 8 means that any gamma distribution with shape parameter r ≥ 1 can also be represented as a scale mixture of the Weibull distribution with the same shape parameter. In other words, relation (27) can be expressed in the following form: for any x ≥ 0, relation (41) holds. From (40) and (41), it follows that the one-sided EP distribution with α ≥ 1 and the gamma distribution with r ≥ 1 can appear as limit distributions in limit theorems for extreme order statistics constructed from samples with random sizes. To illustrate this, we will consider the limit setting dealing with min-compound doubly stochastic Poisson processes.
A doubly stochastic Poisson process (also called Cox process) is defined in the following way. A stochastic point process is called a doubly stochastic Poisson process if it has the form N_1(L(t)), where N_1(t), t ≥ 0, is a time-homogeneous Poisson process with intensity equal to one and the stochastic process L(t), t ≥ 0, is independent of N_1(t) and has the following properties: L(0) = 0, P(L(t) < ∞) = 1 for any t > 0, and the trajectories of L(t) are right-continuous and do not decrease. In this context, the Cox process N(t) is said to be led or controlled by the process L(t).
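The definition above can be sketched in code, taking L(t) = Ut with a random intensity U as the controlling process (U ~ Exp(1) is an illustrative choice; given U, the count N(t) is Poisson with mean Ut):

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of a doubly stochastic Poisson (Cox) process N(t) = N1(L(t)):
# the controlling process is L(t) = U * t with random intensity
# U ~ Exp(1) (an illustrative choice), so that given U the count N(t)
# is Poisson with mean U * t.
def cox_counts(t: float, n_paths: int) -> np.ndarray:
    u = rng.exponential(1.0, n_paths)          # random intensity per path
    return rng.poisson(u * t)                  # N1 evaluated at L(t) = U*t

t = 10.0
counts = cox_counts(t, 200_000)
# E N(t) = t * EU = 10;  Var N(t) = t*EU + t^2 * DU = 10 + 100 = 110.
assert abs(counts.mean() - 10.0) < 0.3
assert abs(counts.var() - 110.0) < 5.0
```

Mixing the Poisson law over an exponentially distributed intensity in this way produces a geometric-type count distribution, which is why the variance (110) far exceeds the mean (10).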
Now let N(t), t ≥ 0, be a doubly stochastic Poisson process (Cox process) controlled by the process L(t). Let T_1, T_2, . . . be the points of jumps of the process N(t). Consider a marked Cox point process {(T_i, X_i)}_{i≥1}, where X_1, X_2, . . . are independent identically distributed (i.i.d.) r.v.s assumed to be independent of the process N(t). Most studies related to the point process {(T_i, X_i)}_{i≥1} deal with the compound Cox process S(t), which is a function of the marked Cox point process {(T_i, X_i)}_{i≥1} defined as the sum of all marks of the points of the marked Cox point process which do not exceed the time t, t ≥ 0. In S(t), the operation of summation is used for compounding. Another function of the marked Cox point process {(T_i, X_i)}_{i≥1} that is of no less importance is the so-called max-compound Cox process, which differs from S(t) in that the compounding operation is taking the maximum of the marking r.v.s. The analytic and asymptotic properties of max-compound Cox processes were considered in [37,38]. Here we will consider the min-compound Cox process.
Let N(t) be a Cox process. The process M(t) = min{X_1, . . . , X_{N(t)}}, t ≥ 0, is called a min-compound Cox process.

Lemma 5.
Assume that there exist a positive infinitely increasing function d(t) and a positive r.v. L such that L(t)/d(t) =⇒ L as t → ∞. Additionally, assume that lext(F) > −∞ and the d.f. P_F(x) ≡ F(lext(F) + x^{−1}) satisfies the following condition: there exists a number γ > 0 such that for any x > 0 relation (42) holds. Then there exist functions a(t) and b(t) such that the normalized min-compound process converges in distribution to a d.f. H(x) for x ≥ 0, with H(x) = 0 for x < 0. Moreover, the functions a(t) and b(t) can be defined as in (43). Proof. This lemma can be proved in the same way as Theorem 2 in [37] dealing with max-compound Cox processes, using the fact that min{X_1, . . . , X_{N(t)}} = −max{−X_1, . . . , −X_{N(t)}}.

Proposition 12.
Let α ≥ 1. Assume that there exists a positive infinitely increasing function d(t) such that Then there exist functions a(t) and b(t) such that as t → ∞. Moreover, the functions a(t) and b(t) can be defined by (43).
Proof. This statement directly follows from Lemma 5 with the account of (40).
as t → ∞. Moreover, the functions a(t) and b(t) can be defined by (43).
Proof. This statement directly follows from Lemma 5 with the account of (41). Propositions 11 and 12 describe the conditions for the convergence of the distributions of extreme order statistics to one-sided EP distributions with α ≥ 1 and to gamma distributions with r ≥ 1, respectively. Using (21) and (26) instead of (40) and (41), correspondingly, we can also cover the cases α ∈ (0, 1] and r ∈ (0, 1).
Proof. This statement directly follows from Lemma 5 with the account of (21).

Proposition 15.
Let r ∈ (0, 1]. Assume that there exists a positive infinitely increasing function d(t) such that as t → ∞. In addition, assume that lext(F) > −∞ and the d.f. P F (x) ≡ F lext(F) − x −1 satisfies condition (42) with γ = 1. Then there exist functions a(t) and b(t) such that (45) holds as t → ∞. Moreover, the functions a(t) and b(t) can be defined by (43).
Proof. This statement directly follows from Lemma 5 with the account of (26). It is very simple to give examples of processes satisfying the conditions described in Propositions 11-14. Let L(t) ≡ Ut and d(t) ≡ t, t ≥ 0, where U is a positive r.v. Then, choosing an appropriately distributed U, we can provide the validity of the corresponding condition for the convergence of L(t)/d(t). Moreover, the parameter t may not have the meaning of physical time. For example, it may be some location parameter of L(t), so that the statements of this section concern the case of large mean intensity of the Cox process.

Conventional Approach Higher-Order EP Scale Mixture Representation
Let r ∈ N. In this section, we will consider random elements taking values in the r-dimensional Euclidean space R r . The notation x will mean the vector-column x = (x 1 , . . . , x r ) . The vector with all zero coordinates will be denoted 0.
Let Σ be a symmetric positive definite (r × r)-matrix. The normal distribution in R^r with zero vector of expectations and covariance matrix Σ will be denoted N_{r,Σ}. This distribution is defined by its density

φ(x) = (2π)^{−r/2} |Σ|^{−1/2} exp{−(1/2) x'Σ^{−1}x}, x ∈ R^r.

The characteristic function f_{X_{r,Σ}}(t) of a random vector X_{r,Σ} such that L(X_{r,Σ}) = N_{r,Σ} has the form

f_{X_{r,Σ}}(t) = E exp{it'X_{r,Σ}} = exp{−(1/2) t'Σt}, t ∈ R^r. (46)

Let α > 0 and let Σ be a symmetric positive definite (r × r)-matrix. Following the conventional approach (see, e.g., [4]), we define the r-variate elliptically contoured EP distribution with parameters α and Σ as the absolutely continuous probability distribution corresponding to the probability density

p_{r,α,Σ}(x) = [αΓ(r/2) / (2^{1+r/α} π^{r/2} Γ(r/α))] · |Σ|^{−1/2} exp{−(1/2)(x'Σ^{−1}x)^{α/2}}, x ∈ R^r. (47)

The random vector whose density is given by (47) will be denoted Q_{r,α,Σ}. It is easy to see that Q_{r,2,Σ} d= X_{r,Σ}. Having obtained the formula for the density of Q_{r,α,Σ}, we are in a position to prove the multivariate generalization of Proposition 1 for α ∈ (0, 2].
Proof. From the definition of the multivariate elliptically contoured EP distribution, by virtue of the well-known property of linear transformations of random vectors with the multivariate normal distribution (see, e.g., [39], Theorem 2.4.4), we obtain the required representation. In [4], it was shown that if A is a (p × r)-matrix with p < r, then the distribution of AQ_{r,α,Σ} is elliptically contoured but, in general, not EP. The idea of the proof of Proposition 17 can clarify why this is so. Indeed, if p < r and 0 < α < 2, then, by virtue of Corollary 9 and the identifiability of scale mixtures of multivariate normal distributions, the product on the right-hand side is not an EP-distributed p-variate random vector. This fact illustrates the result of Y. Kano [42]: to ensure that marginal distributions of a multivariate elliptically contoured distribution belong to the same type, the mixing distribution in the stochastic representation similar to (51) must not depend on the dimensionality, whereas in (51) this condition (called 'consistency' in [42]) is violated.
Corollary 11. Let r 2. Assume that a random vector Z has an r-variate elliptically contoured EP distribution with parameters α and Σ. If α ∈ (0, 2), then the distribution of each linear combination of its coordinates is a normal scale mixture, but not EP.
By this property multivariate EP distributions differ from multivariate stable (in particular, normal) laws, for which each projection of a random vector with a stable law also follows a stable distribution with the same characteristic exponent (see, e.g., [43]).

Alternative Multivariate Uniform Mixture Representation for the EP Distribution
The multivariate EP distributions are a special class of elliptically contoured distributions (see, e.g., [44-47]). Therefore, it is possible to use the properties of elliptically contoured laws to obtain the following multivariate uniform mixture representation for the EP distributions, similar to Lemma 4.

Proposition 18.
Let α > 0, Σ be a symmetric positive definite (r × r)-matrix, A be an (r × r)-matrix such that A'A = Σ, and Y_r be a random vector with the uniform distribution on the unit sphere in R^r. Then representation (52) holds. Proof. See Proposition 4.1 in [4].
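Proposition 18 suggests a simple sampler for the elliptically contoured EP law. The radial law used below, R d= (2G_{r/α,1})^{1/α}, is inferred from a density of the form (47) and should be treated as an assumption of this sketch; the α = 2 check against the normal law is consistent with Q_{r,2,Σ} d= X_{r,Σ}:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hedged sketch of the stochastic representation behind Proposition 18:
# Q_{r,alpha,Sigma} =d R * A' Y_r, with Y_r uniform on the unit sphere
# and radial factor R =d (2 * G_{r/alpha,1})^(1/alpha).  The radial law
# is inferred from a density of the form (47) and is an assumption of
# this sketch, not a statement of the paper.
def sample_ep(n: int, r: int, alpha: float, a_matrix: np.ndarray) -> np.ndarray:
    y = rng.standard_normal((n, r))
    y /= np.linalg.norm(y, axis=1, keepdims=True)      # uniform on the sphere
    radius = (2.0 * rng.gamma(r / alpha, 1.0, n)) ** (1.0 / alpha)
    return radius[:, None] * (y @ a_matrix)

# Consistency check: for alpha = 2 and Sigma = I the law must be the
# standard normal N_2(0, I), since Q_{r,2,Sigma} =d X_{r,Sigma}.
s = sample_ep(300_000, 2, 2.0, np.eye(2))
cov = np.cov(s.T)
assert abs(cov[0, 0] - 1.0) < 0.02
assert abs(cov[1, 1] - 1.0) < 0.02
assert abs(cov[0, 1]) < 0.02
```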
Since Q_{r,2,Σ} d= X_{r,Σ}, from Proposition 18 with α = 2 we obtain the following representation of the r-variate normal distribution as a scale mixture of the uniform distribution on the unit sphere in R^r transformed into the dispersion ellipsoid corresponding to the covariance matrix.

Corollary 12.
Let Σ be a symmetric positive definite (r × r)-matrix, A be an (r × r)-matrix such that A'A = Σ, and Y_r be a random vector with the uniform distribution on the unit sphere in R^r. Then representation (53) holds.

Multivariate Projective Exponential Power Distributions
Let r ∈ N. In order to obtain a multivariate analog of a univariate EP distribution that meets Kano's consistency condition, that is, for which each projection has a univariate EP distribution, we will formally transfer the property of a univariate EP distribution of being a normal scale mixture to the multivariate case and call the distribution of the corresponding r-variate random vector the multivariate projective exponential power (PEP) distribution, where α ∈ (0, 2] and Σ is a positive definite (r × r)-matrix. Since scale mixtures of the multivariate normal distribution are elliptically contoured (see [44,46,47]), the PEP distributions so defined are elliptically contoured.
Consider an analog of Proposition 16 for multivariate PEP distributions.
Corollary 13. Let $0 < \alpha \le 1$ and $\Sigma$ be a symmetric positive definite $(r \times r)$-matrix. Then

The following statements present the features of projective EP distributions that distinguish them from 'conventional' EP distributions considered in the preceding section.

Proposition 20. Let $p \in \mathbb{N}$, $p \le r$, $\Sigma$ be a symmetric positive definite $(r \times r)$-matrix, $A$ be a $(p \times r)$-matrix of rank $p$, and $Q^*_{r,\alpha,\Sigma}$ be an $r$-variate random vector with the PEP distribution with parameters $\alpha$ and $\Sigma$. Then the random vector $AQ^*_{r,\alpha,\Sigma}$ has the $p$-variate PEP distribution with parameters $\alpha$ and $A\Sigma A^{\top}$: $$AQ^*_{r,\alpha,\Sigma} \overset{d}{=} Q^*_{p,\alpha,A\Sigma A^{\top}}.$$

Proof. From the definition of the multivariate PEP distribution, by virtue of the well-known property of linear transformations of random vectors with a multivariate normal distribution (see, e.g., [39], Theorem 2.4.4), we have

Proposition 21.
A random vector has an r-variate PEP distribution if and only if each linear combination of its coordinates has a univariate symmetric EP distribution.
Proof. The 'only if' part. Let $u \in \mathbb{R}^r$ be an arbitrary nonzero vector. Assume that a random vector $Z$ has an $r$-variate PEP distribution with some $\alpha \in (0, 2]$ and positive definite matrix $\Sigma$. Then, up to the scale factor $\sqrt{u^{\top}\Sigma u}$, the distribution of the linear combination $u^{\top}Z$ of the coordinates of $Z$ (the projection of $Z$ onto the direction $u$) is univariate EP with parameter $\alpha$.
The 'if' part. Let $Z$ be an $r$-variate random vector with $\mathsf{E}Z = 0$ and covariance matrix $C$, and let $u$ be an arbitrary vector from $\mathbb{R}^r$. Consider the linear combination $u^{\top}Z$ of the coordinates of $Z$. Obviously, $\mathsf{E}\,u^{\top}Z = 0$ and $\mathsf{D}(u^{\top}Z) = u^{\top}Cu$. According to the assumption, this combination, up to a scale parameter $\sigma > 0$, has a univariate EP distribution with some parameter $\alpha$: $u^{\top}Z \overset{d}{=} \sigma Q_{\alpha}$. Taking into account Corollary 1 and Proposition 3 (see (16)), we obtain the value of $\sigma$ given by (55) with $\gamma(\alpha)$ defined in (56). Now consider the characteristic function $h(t)$ of the r.v. $u^{\top}Z$. By virtue of the assumption, taking Corollary 1 and (55) into account, we arrive at relation (57) with $\gamma(\alpha)$ given by (56). Relation (57) holds for any $u \in \mathbb{R}^r$. Letting $t = 1$ in (57) and taking (46) into account, we notice that (57) turns into the characteristic function $h_Z(u)$ of the random vector $Z$. That is, the random vector $Z$ has the $r$-variate PEP distribution with parameters $\alpha$ and $2\gamma(\alpha)C$.

Proposition 21 explains the term projective EP distribution.
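The projection property can be observed numerically in the simplest case $\alpha = 1$, $\Sigma = I_2$ (an illustrative sketch under the Laplace/exponential mixture representation used above): every unit-norm projection of $Z = \sqrt{2W}\,X$ with $W \sim \mathrm{Exp}(1)$, $X \sim N(0, I_2)$ is standard Laplace, whatever the direction $u$.

```python
import math
import random

random.seed(99)

# Sketch for alpha = 1, Sigma = I_2 (illustrative parameters): the PEP-type
# vector Z = sqrt(2 W) * X with W ~ Exp(1), X ~ N(0, I_2). Conditionally on W,
# u^T Z ~ N(0, 2 W) for any unit vector u, so the projection is standard
# Laplace (the univariate EP law with alpha = 1) regardless of the direction.
u = (3.0 / 5.0, 4.0 / 5.0)  # arbitrary unit-norm direction (illustrative)

n = 100_000
proj = []
for _ in range(n):
    w = random.expovariate(1.0)
    scale = math.sqrt(2.0 * w)
    z1 = scale * random.gauss(0.0, 1.0)
    z2 = scale * random.gauss(0.0, 1.0)
    proj.append(u[0] * z1 + u[1] * z2)

mean_abs = sum(abs(p) for p in proj) / n   # standard Laplace: E|X| = 1
var = sum(p * p for p in proj) / n         # standard Laplace: Var X = 2
print(mean_abs, var)
```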

Proposition 22.
If $\alpha \in (0, 1] \cup \{2\}$, then PEP distributions are infinitely divisible. If $1 < \alpha < 2$, then PEP distributions are not infinitely divisible.

Proof. First, consider the case $\alpha \in (0, 1] \cup \{2\}$. In the proof of Corollary 3 we established that $\mathcal{L}(U^{-1}_{2,\alpha/2})$ is infinitely divisible for $\alpha \in (0, 1]$. Hence, for these values of $\alpha$ the distribution of $Q^*_{r,\alpha,\Sigma}$ is also infinitely divisible, being a scale mixture of the $r$-variate normal distribution in which the mixing distribution is infinitely divisible (this fact can be proved in the same way as in the univariate case [27]). The case $\alpha = 2$ corresponds to the multivariate normal distribution, which is infinitely divisible.

Now consider an $r$-variate random vector $Q^*_{r,\alpha,\Sigma}$ with $1 < \alpha < 2$ and some positive definite $(r \times r)$-matrix $\Sigma$. Assume that $\mathcal{L}(Q^*_{r,\alpha,\Sigma})$ is infinitely divisible. Then, in accordance with Theorem 3.2 of [48], for any $u \in \mathbb{R}^r$ the r.v. $u^{\top}Q^*_{r,\alpha,\Sigma}$ is infinitely divisible as well. In the proof of the 'only if' part of Proposition 21, we found out that $u^{\top}Q^*_{r,\alpha,\Sigma}$ is, up to a scale factor, distributed as $Q_{\alpha}$; that is, the univariate distribution $\mathcal{L}(Q_{\alpha})$ must also be infinitely divisible. However, it was shown in [14] that for $\alpha \in (1, 2)$ the univariate EP distributions are not infinitely divisible. This contradiction completes the proof.

Now consider the representation of $r$-variate PEP distributions as scale mixtures of the uniform distribution on the unit sphere in $\mathbb{R}^r$ transformed in accordance with the corresponding matrix parameter $\Sigma$.

Proposition 23.
Let $\alpha \in (0, 2]$, $\Sigma$ be a symmetric positive definite $(r \times r)$-matrix, $A$ be an $(r \times r)$-matrix such that $A^{\top}A = \Sigma$, and $Y_r$ be a random vector with the uniform distribution on the unit sphere in $\mathbb{R}^r$. Then

Proof. This statement follows from the definition of an $r$-variate PEP distribution and Corollary 12.
In practice, depending on the particular problem, a researcher should choose what is more beneficial: either to deal with a statistical model based on the convenient multivariate density of a conventional EP distribution, at the expense of losing the EP property for marginals and projections, or to deal with a model having convenient EP marginal and projective densities, at the expense of losing the conventional multivariate EP form of the PEP distribution.

A Criterion of Convergence of the Distributions of Random Sums to Multivariate EP and PEP Distributions
Recall that the symbol $\Longrightarrow$ denotes convergence in distribution. The Borel $\sigma$-algebra of subsets of $\mathbb{R}^r$ will be denoted by $\mathcal{B}_r$.
Consider a sequence of independent identically distributed random vectors $X_1, X_2, \ldots$ taking values in $\mathbb{R}^r$. For a natural $n \ge 1$, let $S_n = X_1 + \ldots + X_n$. Let $N_1, N_2, \ldots$ be a sequence of nonnegative integer-valued r.v.s defined on the same probability space so that for each $n \ge 1$ the r.v. $N_n$ is independent of the sequence $X_1, X_2, \ldots$ For definiteness, hereinafter we assume that $\sum_{j=1}^{0} = 0$.
Lemma 6. Assume that the random vectors $S_1, S_2, \ldots$ satisfy the condition $$b_n^{-1} S_n \Longrightarrow X_{r,\Sigma} \quad (n \to \infty), \qquad (58)$$ where $\{b_n\}_{n \ge 1}$ is an infinitely increasing sequence of positive numbers and $\Sigma$ is some positive definite matrix. Let $\{d_n\}_{n \ge 1}$ be an infinitely increasing sequence of positive numbers. Then a distribution $F$ on $\mathcal{B}_r$ such that $$\mathcal{L}\big(d_n^{-1} S_{N_n}\big) \Longrightarrow F \quad (n \to \infty)$$ exists if and only if there exists a d.f. $V(x)$ satisfying the conditions $V(x) = 0$ for $x < 0$, $$F(A) = \int_0^{\infty} \mathsf{N}_{r,u\Sigma}(A)\, dV(u), \quad A \in \mathcal{B}_r,$$ together with the corresponding convergence condition on the d.f.s of the normalized random indices $N_n$.

Proof. This statement is a particular case of a more general theorem proved in [49].

Proposition 24. Assume that the random vectors $X_1, X_2, \ldots$ satisfy condition (58) with some infinitely increasing sequence $\{b_n\}_{n \ge 1}$ of positive numbers and some positive definite matrix $\Sigma$. Let $\{d_n\}_{n \ge 1}$ be an infinitely increasing sequence of positive numbers. Then $$d_n^{-1} S_{N_n} \Longrightarrow Q_{r,\alpha,\Sigma} \quad (n \to \infty)$$ if and only if $d_n^{-1} b_{N_n} \Longrightarrow U_{2/r,\alpha/2}$ $(n \to \infty)$.
Proof. First of all, note that in the case under consideration, as $F(A)$ in Lemma 6 we can take $F(A) = \mathsf{P}(Q_{r,\alpha,\Sigma} \in A)$, $A \in \mathcal{B}_r$. Furthermore, by virtue of Corollary 9, we have $$F(A) = \mathsf{P}(Q_{r,\alpha,\Sigma} \in A) = \mathsf{P}\big(U^{-1/2}_{2/r,\alpha/2}\, X_{r,\Sigma} \in A\big) = \int_0^{\infty} \mathsf{N}_{r,u\Sigma}(A)\, d\mathsf{P}\big(U^{-1}_{2/r,\alpha/2} < u\big), \quad A \in \mathcal{B}_r.$$ Therefore, Proposition 24 is a direct consequence of Lemma 6 with $V(x) = \mathsf{P}\big(U^{-1}_{2/r,\alpha/2} < x\big)$.

In the same way as Proposition 24 was proved, by the corresponding replacement of $U_{2/r,\alpha/2}$ by $\frac12 U_{2,\alpha/2}$, with the reference to Corollary 9 replaced by that to the definition of a multivariate PEP distribution, we can obtain conditions for the convergence of the distributions of multivariate random sums to PEP distributions.

Proposition 25.
Assume that the random vectors $X_1, X_2, \ldots$ satisfy condition (58) with some infinitely increasing sequence $\{b_n\}_{n \ge 1}$ of positive numbers and some positive definite matrix $\Sigma$. Let $\{d_n\}_{n \ge 1}$ be an infinitely increasing sequence of positive numbers. Then $$d_n^{-1} S_{N_n} \Longrightarrow Q^*_{r,\alpha,\Sigma} \quad (n \to \infty)$$ if and only if $d_n^{-1} b_{N_n} \Longrightarrow \tfrac12 U_{2,\alpha/2}$ $(n \to \infty)$.
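In the simplest univariate special case, the random-sum convergence described above can be observed numerically: for a geometric random index with success probability $p \to 0$, the normalized index converges to an $\mathrm{Exp}(1)$ variable, which is the mixing law of the Laplace distribution (the EP law with $\alpha = 1$), so the normalized random sum $\sqrt{p}\,S_{N_p}$ of centered unit-variance summands is close to the Laplace law with variance 1. A minimal Python sketch (illustration only; Rademacher summands and $p = 0.01$ are illustrative choices):

```python
import math
import random

random.seed(2024)

# Geometric random sums of centered unit-variance summands, normalized by
# sqrt(p): as p -> 0 the law of sqrt(p) * S_{N_p} approaches the Laplace
# distribution with variance 1 (so E|X| = 1/sqrt(2)), since p * N_p
# converges to the Exp(1) mixing variable of the Laplace normal mixture.
p = 0.01
m = 20_000
vals = []
for _ in range(m):
    # Geometric(p) on {1, 2, ...} by inversion of the c.d.f.
    n_p = int(math.log(1.0 - random.random()) / math.log(1.0 - p)) + 1
    s = sum(random.choice((-1.0, 1.0)) for _ in range(n_p))  # Rademacher summands
    vals.append(math.sqrt(p) * s)

var = sum(v * v for v in vals) / m         # limit Laplace law: variance 1
mean_abs = sum(abs(v) for v in vals) / m   # limit Laplace law: E|X| = 1/sqrt(2)
print(var, mean_abs)
```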

A Criterion of Convergence of the Distributions of Regular Statistics Constructed from Samples with Random Sizes to Multivariate EP Distributions
Let $\{X_n\}_{n \ge 1}$ be independent, not necessarily identically distributed, random vectors with values in $\mathbb{R}^r$, $r \in \mathbb{N}$. For $n \in \mathbb{N}$, let $T_n = T_n(X_1, \ldots, X_n)$ be a statistic, i.e., a measurable function of $X_1, \ldots, X_n$ with values in $\mathbb{R}^m$, $m \in \mathbb{N}$. For each $n \ge 1$, we define a random vector $T_{N_n}$ by setting $$T_{N_n}(\omega) \equiv T_{N_n(\omega)}\big(X_1(\omega), \ldots, X_{N_n(\omega)}(\omega)\big), \quad \omega \in \Omega.$$
Let $\theta \in \mathbb{R}^m$. Assume that the statistics $T_n$ are asymptotically normal in the sense that $$\sqrt{n}(T_n - \theta) \Longrightarrow X_{m,\Sigma} \quad (n \to \infty), \qquad (60)$$ where $X_{m,\Sigma}$ is a random vector with the $m$-variate normal distribution with an $(m \times m)$ covariance matrix $\Sigma$. Recall that we use the special notation $\mathsf{N}_{m,\Sigma}$ for $\mathcal{L}(X_{m,\Sigma})$. Examples of statistics satisfying (60) are well known: sample quantiles, maximum likelihood estimators of a multivariate parameter, etc.

Let $N_1, N_2, \ldots$ be a sequence of nonnegative integer-valued r.v.s defined on the same probability space so that for each $n \ge 1$ the r.v. $N_n$ is independent of the sequence $X_1, X_2, \ldots$ In this section, we are interested in conditions providing the convergence of the distributions of the $m$-variate random vectors $Z = \sqrt{n}(T_{N_n} - \theta)$ to $m$-variate elliptically contoured EP distributions $\mathcal{L}(Q_{m,\alpha,\Sigma})$.

In limit theorems of probability theory and mathematical statistics, it is conventional to use centering and normalization of r.v.s and vectors in order to obtain non-trivial asymptotic distributions. Moreover, to obtain a reasonable approximation to the distribution of the basic statistic (in our case, $T_{N_n}$), the normalizing values should be non-random. Otherwise, the approximating distribution becomes a random process itself, and, say, the problem of evaluation of its quantiles or critical values of statistical tests becomes senseless. Therefore, in the definition of $Z$ we consider the non-randomly normalized statistic constructed from a sample with random size.

Lemma 7. Assume that $N_n \to \infty$ in probability and the statistic $T_n$ is asymptotically normal so that condition (60) holds. Then a random vector $Z$ such that $\sqrt{n}(T_{N_n} - \theta) \Longrightarrow Z$ $(n \to \infty)$ exists if and only if there exists a d.f. $V(x)$ such that

(iii) $\mathsf{P}(nN_n^{-1} < x) \Longrightarrow V(x)$, $n \to \infty$.
Proof. This lemma is a particular case of a more general statement proved in [50] and strengthened in [41] (see Theorem 8 there).
Proposition 26. Assume that $N_n \to \infty$ in probability and the statistic $T_n$ is asymptotically normal so that condition (60) holds. Then $Z = \sqrt{n}(T_{N_n} - \theta) \Longrightarrow Q_{m,\alpha,\Sigma}$ as $n \to \infty$ with some $\alpha \in (0, 2]$ and the same $(m \times m)$ matrix $\Sigma$ as in (60), if and only if $n^{-1}N_n \Longrightarrow U^{-1}_{2/m,\alpha/2}$ $(n \to \infty)$.

Proof. This statement is a direct consequence of Lemma 7 with the account of Corollary 9.
In the same way that Proposition 26 was proved, by the corresponding replacement of $U_{2/r,\alpha/2}$ by $\frac12 U_{2,\alpha/2}$, with the reference to Corollary 9 replaced by that to the definition of a multivariate PEP distribution, we can obtain conditions for the convergence of the distributions of regular (in the sense of (60)) multivariate statistics constructed from samples with random sizes to PEP distributions.

Proposition 27. Assume that $N_n \to \infty$ in probability and the statistic $T_n$ is asymptotically normal so that condition (60) holds. Then $Z = \sqrt{n}(T_{N_n} - \theta) \Longrightarrow Q^*_{m,\alpha,\Sigma}$ as $n \to \infty$ with some $\alpha \in (0, 2]$ and the same $(m \times m)$ matrix $\Sigma$ as in (60), if and only if $n^{-1}N_n \Longrightarrow \tfrac12 U^{-1}_{2,\alpha/2}$ $(n \to \infty)$.