Article

Cumulants of Multivariate Symmetric and Skew Symmetric Distributions

by Sreenivasa Rao Jammalamadaka ¹, Emanuele Taufer ²·* and Gyorgy H. Terdik ³

¹ Department of Statistics and Applied Probability, University of California, Santa Barbara, CA 93106-3110, USA
² Department of Economics and Management, University of Trento, 38122 Trento, Italy
³ Faculty of Informatics, University of Debrecen, 4032 Debrecen, Hungary
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(8), 1383; https://doi.org/10.3390/sym13081383
Submission received: 23 June 2021 / Revised: 18 July 2021 / Accepted: 20 July 2021 / Published: 29 July 2021
(This article belongs to the Special Issue Symmetry and Asymmetry in Multivariate Statistics and Data Science)

Abstract: This paper provides a systematic and comprehensive treatment for obtaining general expressions of any order, for the moments and cumulants of spherically and elliptically symmetric multivariate distributions; results for the case of the multivariate t-distribution and the related skew-t distribution are discussed in some detail.

1. Introduction

This paper provides a systematic treatment and derivation of moments and cumulants of any order for spherically as well as elliptically symmetric multivariate distributions. Expressions for the multivariate t-distribution and the related skew-t distribution are considered in detail. Our approach exploits the stochastic representation of such random variables in terms of the so-called generating variate and its uniform base on the unit sphere of the appropriate dimension.
It is well known that the problem of representing the structure of higher-order cumulants of multivariate distributions is rather messy. In this paper, we present an approach based on a vectorization of cumulants which leads to a natural and intuitive way to obtain multivariate moments and cumulants of any order; we make the point that this provides the simplest way to deal with this issue.
We note that [1] provides formulae for cumulants in terms of matrices; however, retaining a matrix structure for all higher-order cumulants leads to high-dimensional matrices with special symmetric structures which are quite hard to follow notationally and computationally. Ref. [2] provides quite an elegant approach using tensor methods; however, tensor methods are not widely known and are computationally not so simple.
The method discussed here is based on relatively simple calculus. Although the tensor product of Euclidean vectors is not commutative, it has the advantage of permutation equivalence and allows one to obtain general results for cumulants and moments of any order, as will be demonstrated in this paper, where general formulae suitable for algorithmic implementation in computer software are provided. Methods based on a matrix approach do not provide this type of result; see also [3], which goes as far as the sixth-order moment matrices, whereas there is no such limitation in our derivations and results. For further discussion, see also [4,5].
The primary contributions of this paper may be summarized as: (a) providing formulae, valid for any order, for vectorized cumulants and moments of multivariate spherical and elliptically symmetric distributions; matrix structures can be readily obtained from these expressions, and some examples are provided in Section 4; (b) introducing the so-called marginal moment and cumulant parameters for multivariate spherical and elliptically symmetric distributions, extending the results discussed in [6]; (c) providing formulae for all-order moments and cumulants for multivariate t and skew-symmetric t-distributions; results of this type, as far as we know, are not available in the literature.
As is well known, cumulants play a key role in several areas of multivariate statistics: from the earliest applications in estimation and testing to fields such as signal detection, clustering, invariant coordinate selection, projection pursuit, time series modelling, pricing and portfolio analysis. See, for example, [7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29]. Higher-order cumulants going beyond the third and fourth order are typically needed to obtain the asymptotic properties of various statistics discussed in the aforementioned papers; for instance, ref. [30] provides general expressions for covariances of cumulants of the third and fourth order, requiring several higher-order cumulants. We believe that the expressions for higher-order cumulants for the models presented here open the door for such explorations in these areas.
In this paper, we will also go beyond such spherically and elliptically symmetric models, and obtain general expressions for the cumulants of skew-symmetric distributions such as the skew-t (see [31]). In particular, ref. [32] provides some specific analytical properties of skew-symmetric distributions. Further discussion on these distributions and their applications can be found in [33,34,35].
To formally introduce the problem, consider a random vector X in d dimensions, with mean vector μ and covariance matrix Σ, and let λ be a d-vector of real constants; let ϕ_X(λ) and ψ_X(λ) = log ϕ_X(λ) denote, respectively, the characteristic function and the cumulant-generating function of X.
With the symbol ⊗ denoting the Kronecker product, consider the operator $D_\lambda$, which we refer to as the T-derivative; see [36] for details. For any function ϕ(λ), the T-derivative is defined as

$$D_\lambda\, \phi(\lambda) = \operatorname{vec} \frac{\partial \phi(\lambda)}{\partial \lambda^\top} = \frac{\partial}{\partial \lambda} \otimes \phi(\lambda).$$
If ϕ is k-times differentiable, with its k-th T-derivative $D_\lambda^k\, \phi(\lambda) = D_\lambda\!\left(D_\lambda^{k-1}\, \phi(\lambda)\right)$, then the k-th order cumulant of the vector X is obtained as

$$\kappa_{X,k} = \underline{\operatorname{Cum}}_k(X) = (-i)^k \left. D_\lambda^k\, \psi_X(\lambda)\right|_{\lambda=0}.$$
Note that Cum ̲ k ( X ) is a vector of dimension d k that contains all possible cumulants of order k formed by X 1 , , X d . For example, in Equation (1), one has κ X , 2 = vec Σ .
Throughout this paper, N denotes the set of natural numbers, n!! the double (semi-)factorial (see [37], Common Notations and Definitions), and $(d)_m = \Gamma(d+m)/\Gamma(d)$ the Pochhammer symbol (see [37], 5.2.5), also known as the rising or ascending factorial. We will also denote

$$G_m(p) = \frac{\Gamma\left(\frac{p+m}{2}\right)}{\Gamma\left(\frac{p}{2}\right)},$$

so that $G_{2m}(p) = (p/2)_m$. Finally, for any matrix A, recall that $\operatorname{vec}^{\otimes m} A = (\operatorname{vec} A)^{\otimes m}$.
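For readers who wish to compute with these quantities, the notation above translates directly into code; a minimal Python sketch using only the standard library (the function names are ours):

```python
from math import gamma, prod

def double_factorial(n: int) -> int:
    # n!! = n (n-2) (n-4) ...; by convention 0!! = (-1)!! = 1
    return prod(range(n, 0, -2)) if n > 0 else 1

def pochhammer(d: float, m: int) -> float:
    # (d)_m = Gamma(d + m) / Gamma(d), the rising (ascending) factorial
    return gamma(d + m) / gamma(d)

def G(m: float, p: float) -> float:
    # G_m(p) = Gamma((p + m)/2) / Gamma(p/2)
    return gamma((p + m) / 2) / gamma(p / 2)
```

In particular, `G(2 * m, p)` reproduces the identity $G_{2m}(p) = (p/2)_m$ numerically.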
The paper is organized as follows. Section 2 discusses moments and cumulants of spherical and elliptically symmetric multivariate distributions; details on marginal moments and moment- and cumulant-parameters are provided. Section 3 considers the special case of multivariate t-distribution and discusses its extension to the skew-t distribution. Section 4 presents some applications and examples which are aimed mainly at providing evidence of the correctness of the formulae provided rather than providing simulations on estimation or testing. The proofs are collected in a separate section at the end, viz. Section 5.

2. Spherical and Elliptically Symmetric Distributions

For a comprehensive discussion of spherically and elliptically symmetric multivariate distributions, one may refer to the book by [38]. Other book-level discussions can be found, for instance, in [1,39,40]. General formulae for moments and kurtosis parameters are discussed in [6,41], among others.
A random variate W is spherically distributed if its distribution is invariant under rotations of $R^d$, which is equivalent to the stochastic representation

$$W = R\, U,$$

where R is a non-negative random variable and U is a uniform random d-vector on the sphere $S^{d-1}$, independent of R (see Theorem 2.5, [38]). The random variable R is called the generating variate, with generating distribution F, and the random vector U is the uniform base of the spherical distribution. The characteristic function of W can then be written in the form $\phi_W(\lambda) = g(\lambda^\top\lambda)$ in terms of a function g, called the characteristic generator. An elliptically symmetric random variable $X \sim E_d(\mu, \Sigma, g)$ is defined by a location-scale extension of W, so that it can be represented in the form
$$X = \mu + \Sigma^{1/2}\, W,$$

where $\mu \in R^d$, Σ is a variance-covariance matrix and W is spherically distributed. The cumulants of X are just the cumulants of W multiplied by a constant, except for the mean, which is given by $E\,X = \underline{\operatorname{Cum}}_1(X) = \mu$. For $m \geq 1$,

$$\underline{\operatorname{Cum}}_{2m}(X) = \left(\Sigma^{1/2}\right)^{\otimes 2m} \underline{\operatorname{Cum}}_{2m}(W), \quad \text{and} \quad \underline{\operatorname{Cum}}_{2m+1}(X) = 0.$$
Let $\psi_W(\lambda) = \log \phi_W(\lambda) = \log g(\lambda^\top\lambda)$ be the cumulant-generating function of W. Consider the series expansions

$$\phi_W(\lambda) = g(\lambda^\top\lambda) = \sum_{j=1}^{\infty} \frac{i^j}{j!}\, \mu_j^\top\, \lambda^{\otimes j} \quad\text{and}\quad \psi_W(\lambda) = \sum_{j=1}^{\infty} \frac{i^j}{j!}\, \kappa_{W,j}^\top\, \lambda^{\otimes j},$$

which lead to expressions for the moments and cumulants of W through the T-derivatives of the characteristic and log-characteristic generator functions:

$$\mu_j = (-i)^j \left. D_\lambda^j\, \phi_W(\lambda)\right|_{\lambda=0} \quad\text{and}\quad \kappa_{W,j} = (-i)^j \left. D_\lambda^j \log g(\lambda^\top\lambda)\right|_{\lambda=0}.$$
The relationship between the distribution F of R and g is given through the characteristic function of the uniform distribution on the sphere (see [38], Theorem 2.2, p. 29).
Using the stochastic representation (3) of W, cumulants can be calculated either via the generator function g or via the distribution of the generating variate R. We first start with the generator function g for deriving the cumulants of W. Since the characteristic generator g is a function of one variable and represents the characteristic function at $\lambda^\top\lambda$, the series expansions of g and $f = \log g$ include $(-1)^j$ instead of $i^j$, and we have

$$g(u) = \sum_{j=1}^{\infty} \frac{(-1)^j}{j!}\, g_j\, u^j \quad\text{and}\quad f(u) = \sum_{j=1}^{\infty} \frac{(-1)^j}{j!}\, f_j\, u^j,$$

with coefficients $g_j = (-1)^j g^{(j)}(0)$ and $f_j = (-1)^j f^{(j)}(0)$. To introduce a specific notation, let us denote the generator moment by $\nu_k = (-1)^k g^{(k)}(0)$ and the corresponding generator cumulant by $\zeta_k = (-1)^k f^{(k)}(0)$.
Although neither the generator moments nor the generator cumulants correspond to the moments and cumulants of a random variate, the generator cumulants can be expressed in terms of the generator moments (and vice versa) via Faà di Bruno's formula, which says:

$$\zeta_k = \sum_{r=1}^{k} (-1)^{r-1} (r-1)! \sum_{\substack{\sum_j \ell_j = r \\ \sum_j j\,\ell_j = k}} \frac{k!}{\prod_{j=1}^{k} \ell_j!} \prod_{j=1}^{k} \left(\frac{\nu_j}{j!}\right)^{\ell_j},$$

where the second sum is taken over all sequences (types) $\ell = (\ell_1, \dots, \ell_k)$, $\ell_j \geq 0$, such that $\sum_j \ell_j = r$ and $\sum_j j\,\ell_j = k$.
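The type enumeration in Faà di Bruno's formula translates directly into code by iterating over the integer partitions of k; a small Python sketch (assuming the generator moments are supplied as a dict, which is our convention):

```python
from math import factorial, prod

def partitions(n, max_part=None):
    # yield the integer partitions of n as non-increasing tuples
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def generator_cumulant(k, nu):
    # zeta_k from generator moments nu = {j: nu_j}; each partition of k
    # of type ell = (ell_1, ..., ell_k) contributes one term of the sum
    total = 0.0
    for part in partitions(k):
        r = len(part)                                   # number of blocks
        ell = [part.count(j) for j in range(1, k + 1)]  # type of the partition
        coef = (-1) ** (r - 1) * factorial(r - 1) * factorial(k)
        coef /= prod(factorial(l) for l in ell)
        total += coef * prod((nu[j] / factorial(j)) ** ell[j - 1]
                             for j in range(1, k + 1))
    return total
```

For k = 2 and k = 3 this reproduces the familiar relations $\zeta_2 = \nu_2 - \nu_1^2$ and $\zeta_3 = \nu_3 - 3\nu_2\nu_1 + 2\nu_1^3$.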

2.1. Marginal Moments and Cumulants

If one takes the derivatives of $\phi_{W_j}(\lambda_j) = g(\lambda_j^2)$, i.e., of $\phi_W$ evaluated at $\lambda = (0, \dots, \lambda_j, \dots, 0)$, it is readily seen that these derivatives do not depend on j; also, the odd moments can be seen to be zero (see, e.g., [6]).
The following theorem (which is proven in Section 5) establishes the connection between (i) the moments of W j and the generator moments, as well as (ii) cumulants of W j and the generator cumulants.
Theorem 1.
Let $m \in N$ and, in (3), assume $E(R^m) < \infty$. Define $\mu_{W,m}$ and $\kappa_{W,m}$ as the m-th order moment and cumulant of $W_j$, while $\nu_m$ and $\zeta_m$ denote the generator moment and generator cumulant of the m-th order. Then, the odd moments $\mu_{W,2m+1} = E\,W_j^{2m+1}$ of $W_j$ are all zero and the even ones are

$$\mu_{W,2m} = 2^m (2m-1)!!\; \nu_m = \frac{(2m)!}{m!}\, \nu_m.$$

Again, the odd cumulants $\kappa_{W,2m+1}$ of $W_j$ are zero as well, and the even ones are given by

$$\kappa_{W,2m} = 2^m (2m-1)!!\; \zeta_m = \frac{(2m)!}{m!}\, \zeta_m.$$

Further, the even cumulants can be expressed in terms of the generator moments as

$$\kappa_{W,2m} = 2^m (2m-1)!! \sum_{r=1}^{m} (-1)^{r-1} (r-1)! \sum_{\substack{\sum_j \ell_j = r \\ \sum_j j\,\ell_j = m}} \frac{m!}{\prod_j \ell_j!} \prod_j \left(\frac{\nu_j}{j!}\right)^{\ell_j},$$

where the second sum is taken over all sequences $\ell = (\ell_1, \dots, \ell_m)$, $\ell_j \geq 0$, satisfying $\sum_j \ell_j = r$ and $\sum_j j\,\ell_j = m$.
Example 1.
Applying Theorem 1 in particular for 2m = 4, 6, 8 yields

$$\begin{aligned}
\kappa_{W,4} &= \mu_{W,4} - 3\mu_{W,2}^2 = 12\nu_2 - 12\nu_1^2 = 12\left(\nu_2 - \nu_1^2\right),\\
\kappa_{W,6} &= 2^3\, 5!! \left(\nu_3 - 3\nu_2\nu_1 + 2\nu_1^3\right),\\
\kappa_{W,8} &= 2^4\, 7!! \left(\nu_4 - 4\nu_3\nu_1 - 3\nu_2^2 + 12\nu_2\nu_1^2 - 6\nu_1^4\right).
\end{aligned}$$
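A quick sanity check of Theorem 1 can be made with the standard Gaussian as the spherical law: for $W \sim N(0, I_d)$ the characteristic generator is $g(u) = e^{-u/2}$, so $\nu_m = 2^{-m}$, and Equation (10) must return the classical normal moments $(2m-1)!!$, while $\zeta_m = 0$ for $m \geq 2$ kills all higher cumulants. A minimal sketch (the Gaussian test case is our choice, not part of the theorem):

```python
from math import prod

def double_factorial(n):
    return prod(range(n, 0, -2)) if n > 0 else 1

# For W ~ N(0, I_d): g(u) = exp(-u/2), hence nu_m = (-1)^m g^(m)(0) = 2^(-m)
def nu(m):
    return 0.5 ** m

def marginal_moment(m):
    # Theorem 1: mu_{W,2m} = 2^m (2m-1)!! nu_m
    return 2 ** m * double_factorial(2 * m - 1) * nu(m)
```

`marginal_moment(m)` then equals $(2m-1)!!$: 1, 3, 15, 105, …, exactly the moments of a standard normal component.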

2.1.1. Moment and Cumulant Parameters

The fourth-order cumulant of the standardized entry $W_j$, usually called the kurtosis, has the form

$$\frac{\kappa_{W,4}}{\kappa_{W,2}^2} = 3\,\frac{\nu_2 - \nu_1^2}{\nu_1^2} = 3\,\frac{\zeta_2}{\nu_1^2} = \operatorname{Cum}_4\!\left(\frac{W_j}{\sqrt{2\nu_1}}\right).$$

In the above formula, we observe two quantities: one is the kurtosis (standardized generator cumulant, since $\nu_1 = \zeta_1$), i.e.,

$$\tilde\kappa_2 = \frac{\zeta_2}{\nu_1^2},$$

and the other one is $\tilde\mu_2 = (\nu_2 - \nu_1^2)/\nu_1^2 = \nu_2/\nu_1^2 - 1$, which contains the standardized generator moment $\nu_2/\nu_1^2$. Both quantities $\tilde\mu_2$ and $\tilde\kappa_2$ have the same value (see Equation (13)). The kurtosis $\tilde\kappa_2$ is sometimes called the kurtosis parameter (see, for instance, [39]). Observe that the parameter $\tilde\mu_2$ depends on the generator moment of the standardized variate only, so it can be called the moment parameter as in [6].
More generally, for $m \geq 1$, define the moment parameters and the cumulant parameters, respectively, as

$$\tilde\mu_m = \frac{\nu_m}{\nu_1^m} - 1 \quad\text{and}\quad \tilde\kappa_m = \frac{\zeta_m}{\nu_1^m}.$$

The following normalized cumulants of $W_j$, where cumulant parameters $\tilde\kappa_m$ are expressed in terms of moment parameters $\tilde\mu_m$, connect our different notations:

$$\begin{aligned}
\operatorname{Cum}_2\!\left(\frac{W_j}{\sqrt{2\nu_1}}\right) &= 1, &
\operatorname{Cum}_4\!\left(\frac{W_j}{\sqrt{2\nu_1}}\right) &= 3!!\,\tilde\kappa_2 = 3!!\,\tilde\mu_2,\\
\operatorname{Cum}_6\!\left(\frac{W_j}{\sqrt{2\nu_1}}\right) &= 5!!\,\tilde\kappa_3 = 5!!\left(\tilde\mu_3 - 3\tilde\mu_2\right), &
\operatorname{Cum}_8\!\left(\frac{W_j}{\sqrt{2\nu_1}}\right) &= 7!!\,\tilde\kappa_4 = 7!!\left(\tilde\mu_4 - 4\tilde\mu_3 - 3\tilde\mu_2^2 + 6\tilde\mu_2\right).
\end{aligned}$$
As will be seen in Section 3, using moment and cumulant parameters reduces the number of parameters for a spherically distributed random variate W significantly, while the number of characteristics is halved for an elliptically symmetric distribution. The cumulant parameters κ ˜ m can be expressed in terms of moment parameters μ ˜ m in higher orders as well.
Corollary 1.
Under the assumptions of Theorem 1, the moments of the standardized $W_j$ are zero for odd orders and

$$E\left(\frac{W_j}{\sqrt{2\nu_1}}\right)^{2m} = \frac{\mu_{W,2m}}{(2\nu_1)^m} = (2m-1)!!\left(\tilde\mu_m + 1\right),$$

for even orders, where $\tilde\mu_m$ is the moment parameter defined in (15). The cumulants of the standardized $W_j$ are zero for odd orders and

$$\operatorname{Cum}_{2m}\!\left(\frac{W_j}{\sqrt{2\nu_1}}\right) = (2m-1)!!\;\tilde\kappa_m,$$

for even orders, where $\tilde\kappa_m$ is the cumulant parameter (15), such that

$$\tilde\kappa_m = \sum_{r=1}^{m} (-1)^{r-1} (r-1)! \sum_{\substack{\sum_j \ell_j = r \\ \sum_j j\,\ell_j = m}} \frac{m!}{\prod_j \ell_j!} \prod_j \left(\frac{\tilde\mu_j + 1}{j!}\right)^{\ell_j}.$$
Formula (17) is valid for all m 1 , since, as seen earlier, μ ˜ 1 = 0 and κ ˜ 1 = 1 .

2.1.2. Using the Representation RU

We now turn to the stochastic representation (3) and consider the scalar case $W_j = R\,U_j$. We first explore the even-order moments $\mu_{W,2m} = \mu_{R,2m}\,\mu_{U_j,2m}$. Using the result on the even-order moments $E\,U_j^{2m}$ given in Lemma 1 of [29], we get

$$\mu_{W,2m} = \mu_{R,2m}\, \frac{(2m-1)!!}{2^m\,(d/2)_m}.$$

This result, in combination with Equation (10), lets us express the generator moment $\nu_m$ in terms of the moments of the generating variate R as

$$\nu_m = \mu_{R,2m}\, \frac{m!}{(2m)!}\cdot\frac{(2m)!}{2^{2m}\, m!\, (d/2)_m} = \frac{\mu_{R,2m}}{2^{2m}\,(d/2)_m}.$$

Alternatively, in terms of the moments of W, this becomes

$$\nu_m = \frac{\mu_{W,2m}}{2^m\,(2m-1)!!}.$$

The dependence of the moment parameter $\tilde\mu_m$ on the moments of the generating variate R follows directly from the definition of $\tilde\mu_m$ and the above expression:

$$\tilde\mu_m = \frac{\nu_m}{\nu_1^m} - 1 = \frac{(d/2)^m\,\mu_{R,2m}}{(d/2)_m\,\mu_{R,2}^m} - 1.$$
If $U_j$ is one component of the vector U, then the stochastic representation (3) of W implies $W_j = R\,U_j$. Therefore, the cumulants of $W_j$ can be expressed either through the moments or through the cumulants of R. Since $\operatorname{Cum}_n(W_j)$ is an n-th order cumulant of the product $R\,U_1$ of two independent variates, it can be calculated directly using conditional cumulants. Using this idea provides the following.
Example 2.
Consider, e.g., the fourth-order cumulant $\kappa_{W,4} = \kappa_{RU_1,4}$ of, say, $W_1$ (since all $W_j$ have identical distributions). Applying the formula for cumulants via moments, and using the independence of R and $U_j$, one obtains

$$\kappa_{W,4} = \mu_{R,4}\,\mu_{U_1,4} - 3\left(\mu_{R,2}\,\mu_{U_1,2}\right)^2.$$

Now, using the particular values of the moments of $U_j$ gives

$$\kappa_{W,4} = \frac{3}{d(d+2)}\left(\kappa_{R^2,2} + \kappa_{R^2,1}^2\right) - \frac{3}{d^2}\,\kappa_{R^2,1}^2 = -\frac{6}{d^2(d+2)}\,\kappa_{R^2,1}^2 + \frac{3}{d(d+2)}\,\kappa_{R^2,2}.$$
Lemma 1.
Let $n \in N$ and, in (3), assume $E(R^n) < \infty$. The even-order cumulants of a component $W_j$ of a spherically distributed random variate W are given in terms of the generating variate R as follows:

$$\kappa_{W,2n} = (2n)! \sum_{r=1}^{2n} \sum_{\substack{\sum_j \ell_j = r,\ \sum_j j\,\ell_j = 2n \\ \ell_j = 0 \text{ for odd } j}} \operatorname{Cum}_r\!\left(\underbrace{R}_{\ell_1}, \underbrace{R^2, \dots}_{\ell_2}, \dots, \underbrace{\dots, R^n}_{\ell_n}\right) \prod_{j=1}^{2n-r+1} \frac{1}{\ell_j!}\left(\frac{\kappa_{U_1,j}}{j!}\right)^{\ell_j},$$

where the summation involves only even-order cumulants of $U_1$, since the odd ones are zero, and where $R^j$ corresponds to the block with cardinality $\ell_j$, which includes the power $R^j$ only (i.e., $R^j$ is listed $\ell_j$ consecutive times).
Here, the cumulants κ U 1 , j of U 1 are involved, which can be evaluated explicitly in particular cases to obtain the formula for κ W 1 , 2 n .
Example 3.
Consider the fourth-order cumulant $\kappa_{W,4} = \kappa_{RU_1,4}$ of $W_j$, say. Applying formula (23) gives

$$\kappa_{RU_1,4} = \kappa_{R^4,1}\,\kappa_{U_1,4} + 3\,\kappa_{R^2,2}\,\kappa_{U_1,2}^2.$$

Next, obtain the cumulants of $U_1$ from its moments:

$$\mu_{U_1,4} = \frac{3}{d(d+2)}, \qquad \kappa_{U_1,2} = \mu_{U_1,2} = \frac{1}{d}, \qquad \kappa_{U_1,4} = \mu_{U_1,4} - 3\mu_{U_1,2}^2 = \frac{3}{d(d+2)} - \frac{3}{d^2} = -\frac{6}{d^2(d+2)}.$$

Finally, plugging these into (24) and using (22), one obtains

$$\kappa_{RU_1,4} = -\frac{6}{d^2(d+2)}\,\kappa_{R^4,1} + \frac{3}{d^2}\,\kappa_{R^2,2}.$$
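The moment-based expression of Example 2 and the cumulant-based expression of Example 3 must agree; a short numeric check (the values of $E R^2$ and $E R^4$ are arbitrary test inputs of ours):

```python
d = 3
ER2, ER4 = 1.7, 5.2          # arbitrary admissible moments of R
k1 = ER2                     # kappa_{R^2,1} = E R^2
k2 = ER4 - ER2 ** 2          # kappa_{R^2,2} = Var(R^2)

# Example 2, first form: moments of R and U_1
route_moments = ER4 * 3 / (d * (d + 2)) - 3 * (ER2 / d) ** 2
# Example 2, second form: cumulants of R^2
route_cum_R2 = -6 / (d ** 2 * (d + 2)) * k1 ** 2 + 3 / (d * (d + 2)) * k2
# Example 3: in terms of kappa_{R^4,1} = E R^4 and kappa_{R^2,2}
route_cum_R4 = -6 / (d ** 2 * (d + 2)) * ER4 + 3 / d ** 2 * k2
```

All three routes evaluate to the same $\kappa_{W,4}$.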

2.2. Multivariate Moments and Cumulants

Rewriting the characteristic function using the series expansion of g(u) given in (8), one obtains

$$\phi_W(\lambda) = g(\lambda^\top\lambda) = \sum_{j=1}^{\infty} \frac{(-1)^j g_j}{j!} \left(\lambda^\top\lambda\right)^j.$$

The moments can be calculated using Equation (7) as follows:

$$(-i)^k \left. D_\lambda^k\, \phi_W(\lambda)\right|_{\lambda=0} = (-i)^k \sum_{j=1}^{\infty} \frac{(-1)^j g_j}{j!} \left. D_\lambda^k \left(\lambda^\top\lambda\right)^j \right|_{\lambda=0}.$$

Observe that

$$\mu_k = (-i)^k \left. D_\lambda^k\, \phi_W(\lambda)\right|_{\lambda=0} = \begin{cases} 0 & \text{if } 2j \neq k,\\[2pt] \dfrac{1}{j!}\, g_j\, c_j & \text{if } 2j = k, \end{cases}$$

and the vector $c_j$ does not depend on g. Using $g_j = (-1)^j g^{(j)}(0) = \nu_j$ and $c_j = D_\lambda^{2j}\left(\lambda^\top\lambda\right)^j$, we have $\mu_{W,2m} = \frac{\nu_m}{m!}\, D_\lambda^{2m}(\lambda^\top\lambda)^m$. This derivation provides the connection between the generator moments and the marginal moments in (10), by using which one obtains

$$\mu_{W,2m} = \frac{\mu_{W_1,2m}}{2^m\,(2m-1)!!\, m!}\, D_\lambda^{2m}\left(\lambda^\top\lambda\right)^m = \frac{\mu_{W_1,2m}}{(2m)!}\, D_\lambda^{2m}\left(\lambda^\top\lambda\right)^m.$$

The same argument applies to the cumulant-generating function $\psi_W(\lambda) = \log\phi_W(\lambda)$ with series expansion (8). Now we obtain $\kappa_{W,2m} = \frac{\zeta_m}{m!}\, D_\lambda^{2m}(\lambda^\top\lambda)^m$, and since $\zeta_m$ is connected to the m-th cumulant of a component of W by (11), we get

$$\kappa_{W,2m} = \frac{\kappa_{W_1,2m}}{2^m\, m!\,(2m-1)!!}\, D_\lambda^{2m}\left(\lambda^\top\lambda\right)^m = \frac{\kappa_{W_1,2m}}{(2m)!}\, D_\lambda^{2m}\left(\lambda^\top\lambda\right)^m.$$
Recall that $K_{r|\ell}$ denotes particular partitions with size r and type ℓ. The above calculations are summarized in the following theorem, which was stated without proof and used in [30]. Commutator matrices, which change the order of the tensor products according to a given permutation, prove very useful here. (For a brief description of the commutator matrices and the symmetrizer, the reader is referred to Section 5.1.)
Theorem 2.
Let $m \in N$ and, in (3), assume $E(R^m) < \infty$. The moments and cumulants of odd orders of the spherically distributed W are zero. The moments of even orders are given by

$$\mu_{W,2m} = \frac{\mu_{W_1,2m}}{(2m-1)!!}\, L_{m\otimes 2}^{-1}\, \operatorname{vec}^{\otimes m} I_d = \mu_{W_1,2m}\, S_{d^{\otimes 2m}}\, \operatorname{vec}^{\otimes m} I_d,$$

while the cumulants of even orders are

$$\kappa_{W,2m} = \frac{\kappa_{W_1,2m}}{(2m-1)!!}\, L_{m\otimes 2}^{-1}\, \operatorname{vec}^{\otimes m} I_d = \kappa_{W_1,2m}\, S_{d^{\otimes 2m}}\, \operatorname{vec}^{\otimes m} I_d.$$

In terms of moment parameters, the standardized cumulants are

$$\underline{\operatorname{Cum}}_{2m}\!\left(\Sigma^{-1/2}\, W\right) = \tilde\kappa_m\, L_{m\otimes 2}^{-1}\, \operatorname{vec}^{\otimes m} I_d,$$

where $\Sigma^{1/2} = \sqrt{2\nu_1}\, I_d$. Formula (17) shows that $\tilde\kappa_m$ is a polynomial in $\tilde\mu_k$, k = 2:m. We denote this polynomial by $a_m(\tilde\mu_2, \dots, \tilde\mu_m) = \tilde\kappa_m$; hence,

$$\underline{\operatorname{Cum}}_{2m}\!\left(\Sigma^{-1/2}\, W\right) = a_m\!\left(\tilde\mu_2, \dots, \tilde\mu_m\right) L_{m\otimes 2}^{-1}\, \operatorname{vec}^{\otimes m} I_d.$$

In particular, with $a_1 = 1$,

$$a_m\!\left(\tilde\mu_2, \dots, \tilde\mu_m\right) = \begin{cases} \tilde\mu_2, & m = 2,\\ \tilde\mu_3 - 3\tilde\mu_2, & m = 3,\\ \tilde\mu_4 - 4\tilde\mu_3 - 3\tilde\mu_2^2 + 6\tilde\mu_2, & m = 4. \end{cases}$$
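The polynomials $a_m$ can be generated mechanically from Formula (17) by the same partition enumeration used in Faà di Bruno's formula; a Python sketch (the numeric values of $\tilde\mu_j$ are arbitrary test inputs of ours) reproducing the three closed forms above:

```python
from math import factorial, prod

def partitions(n, max_part=None):
    # yield the integer partitions of n as non-increasing tuples
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def kappa_tilde(m, mu_t):
    # Formula (17): cumulant parameter from moment parameters mu_t = {j: mu~_j}
    total = 0.0
    for part in partitions(m):
        r = len(part)
        ell = [part.count(j) for j in range(1, m + 1)]
        coef = (-1) ** (r - 1) * factorial(r - 1) * factorial(m)
        coef /= prod(factorial(l) for l in ell)
        total += coef * prod(((mu_t[j] + 1) / factorial(j)) ** ell[j - 1]
                             for j in range(1, m + 1))
    return total

mu_t = {1: 0.0, 2: 0.4, 3: 1.1, 4: 3.0}   # mu~_1 = 0 always
a2 = mu_t[2]
a3 = mu_t[3] - 3 * mu_t[2]
a4 = mu_t[4] - 4 * mu_t[3] - 3 * mu_t[2] ** 2 + 6 * mu_t[2]
```

The values returned by `kappa_tilde(m, mu_t)` coincide with `a2`, `a3`, `a4`.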
Remark 1.
Both higher-order moments $\mu_{W,2m}$ and cumulants $\kappa_{W,2m}$ use a commutator matrix $L_{m\otimes 2}^{-1}$ or a symmetrizer $S_{d^{\otimes 2m}}$; details are given in Section 5.1. Determining $L_{m\otimes 2}^{-1}$ may be cumbersome, although it is much lighter, from a computational point of view, than the actual calculation of the symmetrizer, which, for large dimensions d and order 2m, is very time- and memory-consuming. Alternatively, one can evaluate $D_\lambda^{2m}(\lambda^\top\lambda)^m$ efficiently by applying the T-derivative step by step. For example,

$$D_\lambda^4\left(\lambda^\top\lambda\right)^2 = 2^3\left(I_{d^4} + K_{3214}^{-1} + K_{1324}^{-1}\right)\operatorname{vec}^{\otimes 2} I_d = 2^3\, L_{2\otimes 2}^{-1}\, \operatorname{vec}^{\otimes 2} I_d,$$

where 3 commutator matrices are involved instead of the 24 necessary for obtaining $S_{d^{\otimes 4}}$. Further examples can be found in [30].
Remark 2.
Deriving $D_\lambda^6(\lambda^\top\lambda)^3$ means finding the 15 partitions $K_{3|\ell}$ with size 3 and type ℓ = (0, 3, 0, 0, 0, 0), i.e., splitting up the set 1:6 into three blocks of two elements each. The partitions are in canonical form, and the corresponding commutator $L_{3\otimes 2}^{-1}$ is listed in Section 4.1. The result is

$$D_\lambda^6\left(\lambda^\top\lambda\right)^3 = 48\, L_{3\otimes 2}^{-1}\, \operatorname{vec}^{\otimes 3} I_d.$$

Then, $E\,W^{\otimes 2m}$ can be calculated directly from the expected values of the entries. The nonzero entries of $E\,W^{\otimes 2m}$ are those where all terms in the product of entries of W have even degrees (cf. (18)).
Remark 3.
We compare the expected values of the entries of $W^{\otimes 2m}$ to the expected values of those in $Z^{\otimes 2m}$, where Z is a standard normally distributed variate. The nonzero entries of the expected value $E\,Z^{\otimes 2m}$ are products of even powers such that

$$E\prod_{i=1}^{d} Z_i^{2k_i} = \prod_{i=1}^{d} (2k_i - 1)!!,$$

where $\sum k_{1:d} = m$. At the same time, we have $E\,Z^{\otimes 2m} = (2m-1)!!\; S_{d^{\otimes 2m}}\, \operatorname{vec}^{\otimes m} I_d$. Comparing this to $E\,W^{\otimes 2m} = E\,W_1^{2m}\; S_{d^{\otimes 2m}}\, \operatorname{vec}^{\otimes m} I_d$, we conclude that the higher-order moments of a spherical random vector W differ from the moments of a standard normal Z only through the constant $E\,W_1^{2m}$. The major difference turns up in comparing the cumulants, since cumulants of order higher than 2 for Z are zero, while the cumulants of W are

$$\underline{\operatorname{Cum}}_{2m}(W) = \operatorname{Cum}_{2m}(W_1)\; S_{d^{\otimes 2m}}\, \operatorname{vec}^{\otimes m} I_d.$$
Example 4.
If the generating variate R is Gamma distributed with parameters ϑ > 0, α > 0, then

$$E\,R^r = \vartheta^r\, \frac{\Gamma(\alpha+r)}{\Gamma(\alpha)},$$

and the kurtosis parameter $\tilde\kappa_2$ becomes

$$\tilde\kappa_2 = \frac{d}{d+2}\,\frac{E\,R^4}{\left(E\,R^2\right)^2} - 1 = \frac{d}{d+2}\,\frac{\Gamma(\alpha+4)\,\Gamma(\alpha)}{\Gamma(\alpha+2)^2} - 1 = \frac{d}{d+2}\,\frac{(\alpha+3)(\alpha+2)}{\alpha(\alpha+1)} - 1.$$
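This kurtosis parameter is easy to check numerically; a small sketch comparing the Gamma-moment ratio with the closed form (the scale ϑ cancels in the ratio, so it does not appear):

```python
from math import gamma

def kappa_tilde_2_gamma(d, alpha):
    # d/(d+2) * E R^4 / (E R^2)^2 - 1 for R ~ Gamma(alpha, theta)
    ratio = gamma(alpha + 4) * gamma(alpha) / gamma(alpha + 2) ** 2
    return d / (d + 2) * ratio - 1

def kappa_tilde_2_closed(d, alpha):
    # last expression in the display above
    return d / (d + 2) * (alpha + 3) * (alpha + 2) / (alpha * (alpha + 1)) - 1
```

For d = 3 and α = 1, both give $\tilde\kappa_2 = 2.6$, consistent with the kurtosis 7.8 reported in the numerical example of Section 4.3.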

3. Multivariate t and Skew-t Distributions

There have been several attempts to introduce asymmetry into multivariate distributions by “skewing” a given spherically or elliptically symmetric distribution such as the normal or a t-distribution. The multivariate skew-normal distribution was introduced by [42]; see also [43]. Ref. [44] derived the first four moments and discussed quadratic forms based on these; for further properties, see [45].
For the case of the multivariate skew-symmetric-t, we use the definition given in [31], which is different from that provided by [46]. Its moments were derived in [47]. For further discussion on these distributions and their applications, see [32,33,34,35].

3.1. Multivariate t-Distribution

The d-variate vector W is t-distributed, written $W \sim Mt_d(p, 0, I_d)$, if $W = \sqrt{p}\, Z / S$, where $Z \sim N_d(0, I_d)$ is standard normal and $S^2$ is $\chi^2$ distributed with p degrees of freedom, independent of Z (see [38], Example 2.5, Section 3.3.6, p. 85, and p. 32). Such a random variable $W \sim Mt_d(p, 0, I_d)$ is spherically distributed, since it has the representation

$$W = \frac{\sqrt{p}\,\|Z\|}{S}\cdot\frac{Z}{\|Z\|} = R\,U,$$

where $R = \sqrt{p}\,\|Z\|/S$ is the generating variate. We note that $R^2/d \sim F_{d,p}$ has an F-distribution with d and p degrees of freedom (cf. also [48]). Let $\mu \in R^d$ and let A be a d × d matrix; then, the linear transform $X = \mu + A\,W$ will be denoted $X \sim Mt_d(p, \mu, \Omega)$, where $\Omega = A A^\top$; hence, X is an elliptically symmetric random variable. The characteristic function of X is quite involved, and therefore we utilize the stochastic representation (27) of W for deriving higher-order cumulants, including skewness and kurtosis, for X. For the even-order moments of the generating variate R, i.e., the even-order moments of the F-distribution with d and p degrees of freedom, with p > 2m, we have
$$\frac{E\,R^{2m}}{d^m} = \left(\frac{p}{d}\right)^m \frac{\Gamma(d/2+m)\,\Gamma(p/2-m)}{\Gamma(d/2)\,\Gamma(p/2)},$$

so that

$$\mu_{R,2m} = p^m\,(d/2)_m\,\frac{\Gamma(p/2-m)}{\Gamma(p/2)} = \frac{p^m\,(d/2)_m}{(p/2-m)_m}.$$
We also have the even-order moments of the components of the uniform distribution on the sphere $S^{d-1}$:

$$\mu_{U_1,2m} = \frac{(2m-1)!!}{2^m\,(d/2)_m}.$$

These two quantities provide the moments of the components of W as

$$\mu_{W,2m} = \mu_{R,2m}\,\mu_{U_1,2m} = \frac{p^m\,(d/2)_m}{(p/2-m)_m}\cdot\frac{(2m-1)!!}{2^m\,(d/2)_m} = \frac{p^m\,(2m-1)!!}{2^m\,(p/2-m)_m}.$$

Recall that all entries of W have the same distribution, so that one can use the notation W for the generic entry. The cumulants of even order of W can be calculated with the help of the cumulant parameters $\tilde\kappa_m$. Consider first the moment parameters $\tilde\mu_m$, which are

$$\tilde\mu_m = \frac{(d/2)^m\,\mu_{R,2m}}{(d/2)_m\,\mu_{R,2}^m} - 1 = \frac{(p/2-1)^m}{(p/2-m)_m} - 1$$
(see Equation (21) for the moments of R). Then, the cumulant parameter κ ˜ m is calculated using the general expression (17). Theorem 2 then leads us to the following.
Lemma 2.
Let p > 2m and let W be multivariate t, $W \sim Mt_d(p, 0, I_d)$, with dimension d and degrees of freedom p. Then $E\,W = 0$, and both the moments and the cumulants of odd order are zero. The moments of even order are given by

$$\mu_{W,2m} = \frac{p^m}{2^m\,(p/2-m)_m}\, L_{m\otimes 2}^{-1}\, \operatorname{vec}^{\otimes m} I_d.$$

The covariance matrix of W has the diagonal form $\operatorname{Var}(W) = \Sigma = \frac{p}{p-2}\, I_d$, and the even-order standardized cumulants are

$$\underline{\operatorname{Cum}}_{2m}\!\left(\Sigma^{-1/2}\, W\right) = \tilde\kappa_m\, L_{m\otimes 2}^{-1}\, \operatorname{vec}^{\otimes m} I_d,$$

where the cumulant parameters $\tilde\kappa_m$ are given by expression (17) and the moment parameters by (29). In particular, this gives

$$\operatorname{Cum}_2\!\left(\frac{W_j}{\sqrt{2\nu_1}}\right) = 1, \qquad \operatorname{Cum}_4\!\left(\frac{W_j}{\sqrt{2\nu_1}}\right) = 3!!\,\frac{2}{p-4}, \qquad \operatorname{Cum}_6\!\left(\frac{W_j}{\sqrt{2\nu_1}}\right) = 5!!\,\frac{16}{(p-4)(p-6)}.$$
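The moment parameters of the t-distribution, and hence its standardized cumulants, can be checked directly; a sketch built on the expression (29) for $\tilde\mu_m$:

```python
from math import gamma

def pochhammer(x, m):
    return gamma(x + m) / gamma(x)

def mu_tilde_t(m, p):
    # moment parameter of Mt_d(p, 0, I_d): (p/2 - 1)^m / (p/2 - m)_m - 1, p > 2m
    return (p / 2 - 1) ** m / pochhammer(p / 2 - m, m) - 1

def kappa_tilde_2_t(p):
    return mu_tilde_t(2, p)                          # a_2 = mu~_2

def kappa_tilde_3_t(p):
    return mu_tilde_t(3, p) - 3 * mu_tilde_t(2, p)   # a_3 = mu~_3 - 3 mu~_2
```

For any admissible p these reduce to $2/(p-4)$ and $16/((p-4)(p-6))$, matching the standardized fourth and sixth cumulants in the lemma.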

3.2. Multivariate Skew-t Distribution

Let V be a d-dimensional multivariate skew-normal random vector, denoted $V \sim SN_d(0, \Omega, \alpha)$, and let $S^2$ be an independently distributed $\chi^2$ random variable with p degrees of freedom. We define a skew-t distributed random vector X by $X = \mu + \sqrt{p}\, V / S$ and denote it by $St_d(\mu, \Omega, \alpha, p)$. We use the notation $R_p = \sqrt{p}/S$, so that

$$X = \mu + R_p\, V.$$

The skew-normal distribution is characterized by the skewness vector δ, which is given as

$$\delta = \frac{\Omega\,\alpha}{\sqrt{1 + \alpha^\top \Omega\, \alpha}}.$$
Consider first the mean $E\,X = \mu + E R_p\, E V$, where $E\,V = \kappa_{V,1}$, and recall that the cumulants of a skew-normal are given in [29] as

$$\kappa_{V,1} = \sqrt{\frac{2}{\pi}}\,\delta, \qquad \kappa_{V,2} = \operatorname{vec}\Omega - \frac{2}{\pi}\,\delta^{\otimes 2},$$

and, in general, for j > 2 we have $\kappa_{V,j} = \kappa_{Z,j}\,\delta^{\otimes j}$. The moments of $R_p$ (using the moments of a $\chi^2$ distribution) are given as follows. Let p > k and $\mu_{R_p,k} = E\left(S^2/p\right)^{-k/2}$; then

$$\mu_{R_p,k} = \left(\frac{p}{2}\right)^{k/2} \frac{\Gamma\left(\frac{p-k}{2}\right)}{\Gamma\left(\frac{p}{2}\right)} = \left(\frac{p}{2}\right)^{k/2} G_{-k}(p).$$
From the above results, the first two cumulant vectors of X are given by

$$\mu_X = \mu + G_{-1}(p)\,\sqrt{\frac{p}{\pi}}\,\delta,$$

$$\kappa_{X,2} = \kappa_{R_p V,2} = E R_p^2\, E V^{\otimes 2} - \left(E R_p\right)^2 \left(E V\right)^{\otimes 2} = \mu_{R_p,2}\,\mu_{V,2} - \mu_{R_p,1}^2\,\mu_{V,1}^{\otimes 2} = \frac{p}{2}\,G_{-2}(p)\operatorname{vec}\Omega - \frac{p}{\pi}\,G_{-1}^2(p)\,\delta^{\otimes 2} = \frac{p}{p-2}\operatorname{vec}\Omega - \frac{p}{\pi}\,G_{-1}^2(p)\,\delta^{\otimes 2}.$$

The variance matrix of X is then $\frac{p}{p-2}\,\Omega - \frac{p}{\pi}\,G_{-1}^2(p)\,\delta\delta^\top$.

Cumulants of the Skew-t Distribution

The third- and fourth-order cumulants are given in the next two lemmas.
Lemma 3.
Let p > 2; then

$$\kappa_{X,3} = c_1\,\delta^{\otimes 3} + c_2\, L_{1\otimes 2,\,1\otimes 1}^{-1}\left(\operatorname{vec}\Omega\otimes\delta\right),$$

where

$$c_1(p) = p\,\sqrt{\frac{p}{\pi}}\; G_{-1}(p)\left(\frac{2}{\pi}\,G_{-1}^2(p) - \frac{1}{p-3}\right), \qquad c_2(p) = \sqrt{\frac{p}{\pi}}\;\frac{p\,G_{-1}(p)}{(p-2)(p-3)}.$$
Lemma 4.
Let p > 3; then the fourth-order cumulant of $X \sim St_d(\mu, \Omega, \alpha, p)$ is given by

$$\kappa_{X,4} = c_1\,\delta^{\otimes 4} + c_2\, L_{2\otimes 2}^{-1}\, \operatorname{vec}^{\otimes 2}\Omega - c_3\, L_{1\otimes 2,\,2\otimes 1}^{-1}\left(\operatorname{vec}\Omega\otimes\delta^{\otimes 2}\right),$$

where

$$c_1 = \frac{2p^2}{\pi}\,G_{-1}^2(p)\left(\frac{2}{p-3} - \frac{3}{\pi}\,G_{-1}^2(p)\right), \qquad c_2 = \frac{2p^2}{(p-4)(p-2)^2}, \qquad c_3 = \frac{2}{\pi}\,\frac{p^2}{(p-3)(p-2)}\,G_{-1}^2(p).$$
We conclude this section by providing a formula for the cumulant $\kappa_{X,n}$ of general order n. In the next theorem (and in later sections), when a symmetrizer $S_{d^{\otimes n}}$ is applied, a symmetrical equivalence, denoted by $\overset{s}{=}$, is used.
Theorem 3.
$$\kappa_{X,1} = \mu_X = \mu + G_{-1}(p)\,\sqrt{\frac{p}{\pi}}\,\delta, \qquad \kappa_{X,2} = \frac{p}{p-2}\operatorname{vec}\Omega - \frac{p}{\pi}\,G_{-1}^2(p)\,\delta^{\otimes 2};$$

if n > 2 and p > n − 1, then the n-symmetrized version (by the symmetrizer $S_{d^{\otimes n}}$) of $\kappa_{X,n}$ is the following:

$$\kappa_{X,n} \overset{s}{=}\, n! \sum_{r=1}^{n}\ \sum_{\sum_j \ell_j = r,\ \sum_j j\,\ell_j = n} C_r\!\left(R_p, \ell_{1:n}\right) \prod_{\substack{j=1\\ j\neq 2}}^{n} \frac{1}{\ell_j!}\left(\frac{\kappa_{Z,j}}{j!}\right)^{\ell_j} \frac{1}{\ell_2!}\left(\frac{1}{2!}\right)^{\ell_2} \delta^{\otimes(n-2\ell_2)}\otimes\kappa_{V,2}^{\otimes \ell_2},$$

where

$$C_r\!\left(R_p, \ell_{1:n}\right) = \operatorname{Cum}\!\left(\underbrace{R_p}_{\ell_1}, \underbrace{R_p^2}_{\ell_2}, \dots, \underbrace{R_p^n}_{\ell_n}\right),$$

and $R_p^j$ corresponds to the block with cardinality $\ell_j$, which includes the power $R_p^j$ only (i.e., $R_p^j$ is listed $\ell_j$ consecutive times). One can also express $\kappa_{X,n}$ ignoring symmetrization:

$$\kappa_{X,n} = \sum_{r=1}^{n} \sum_{\sum_j \ell_j = r,\ \sum_j j\,\ell_j = n} C_r\!\left(R_p, \ell_{1:n}\right) \left(n - 2\ell_2\right)! \prod_{\substack{j=1:n-r+1\\ j\neq 2}} \frac{1}{\ell_j!}\left(\frac{\kappa_{Z,j}}{j!}\right)^{\ell_j} L_{(n-2\ell_2)\otimes 1,\,\ell_2\otimes 2}^{-1}\left(\delta^{\otimes(n-2\ell_2)}\otimes\kappa_{V,2}^{\otimes \ell_2}\right),$$

which might be useful from a computational point of view (see Section 5.1 for $L_{(n-2\ell_2)\otimes 1,\,\ell_2\otimes 2}^{-1}$).

4. Applications and Examples

The vector cumulant formulae provided in the previous sections find immediate application in results discussed in the literature. For the subsequent discussion, let $Y = \Sigma^{-1/2}(X - \mu)$.

4.1. Cumulant-Based Measures of Skewness and Kurtosis

Ref. [29] has shown that knowledge of the third and fourth cumulant vectors allows one to retrieve all cumulant-based measures of skewness and kurtosis discussed in the literature. For example, for the index of skewness $\beta_{1,d}$ of [7], one has $\beta_{1,d}(Y) = \|\kappa_{Y,3}\|^2$; if one considers the skewness vector s(Y) of [11], it holds that

$$s(Y) = \left(\operatorname{vec}^\top I_d \otimes I_d\right) \kappa_{Y,3}.$$

As far as kurtosis indexes are concerned, for the index of [7], one has

$$\beta_{2,d}(Y) = \operatorname{vec}^\top I_{d^2}\; \kappa_{Y,4} + d(d+2),$$

whereas, for the kurtosis matrix of [11], it holds that

$$\operatorname{vec} B(Y) = \left(I_{d^2} \otimes \operatorname{vec}^\top I_d\right) \kappa_{Y,4}.$$
For further examples concerning the indexes discussed in [8,9,14,15] and relations among these, see [29].

4.2. Covariance Matrices

Using cumulant vectors up to the eighth order, one can retrieve covariance matrices and asymptotic results for statistics based on third and fourth cumulants. For example, in the case of elliptically symmetric distributions, one can show that
$$\underline{\operatorname{Cum}}_2\!\left(H_3(Y)\right) = \kappa_{Y,6} + K_{H_4,2}^{-1}\left(\kappa_{Y,4}\otimes\operatorname{vec} I_d\right) + K_{3!}^{-1}\,\operatorname{vec}^{\otimes 3} I_d,$$

where $H_3(Y)$ is the third d-variate Hermite polynomial [49] and $K_{H_4,2}^{-1}$, $K_{3!}^{-1}$ are commutator matrices; see Theorem 1 in [30] for details and further results on general symmetric and asymmetric distributions.
These results can be exploited to obtain new weighted measures of skewness and kurtosis and, in conjunction with Theorem 2 in [30], to retrieve asymptotic distributions of several statistics based on the third and fourth cumulant vectors. Note that, in contrast to the results typically available in the literature, which are given in terms of expectations of functions of sample statistics, the explicit form of the model covariance matrices based on the cumulant vectors allows straightforward computation of the asymptotic parameters.

4.3. Illustrative Numerical Examples

Example 5
(Uniform-Gamma). Consider again Example 4, where R is a gamma random variable with ϑ = 0.3 and α = 1. In order to generate a random d-vector uniformly distributed on the unit sphere, we first generate a d-variate standard normal random vector Z and define U = Z/||Z||, and then use (3) to generate W.
Figure 1, for d = 3 , reports 100 random values, respectively, for U (transparent red) and W (solid blue); note that the values of W are much more concentrated towards the center, with approximately 10% of the points going out of the unit sphere.
Numerical true and estimated values of the moments μ W , 2 m and ν m are computed by using Formulae (18) and (20). The tables below report population values and sample estimates for different sample sizes. As one can see, these values are in good agreement for all sample sizes but for very high-order moments, which require very large sample sizes to obtain reasonable approximations.
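The simulation itself requires nothing beyond the stochastic representation (3); a self-contained sketch of the estimation of the second marginal moment (the seed, sample size and tolerance are our choices), whose population value here is $\mu_{R,2}\,\mu_{U_1,2} = \vartheta^2\alpha(\alpha+1)/d = 0.06$:

```python
import random
from math import gamma

random.seed(42)
d, theta, alpha = 3, 0.3, 1.0
n = 200_000

# population value: mu_{W,2} = mu_{R,2} mu_{U1,2} = theta^2 alpha (alpha+1) / d
mu_W2_true = theta ** 2 * gamma(alpha + 2) / gamma(alpha) / d

acc = 0.0
for _ in range(n):
    z = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm_z = sum(v * v for v in z) ** 0.5
    r = random.gammavariate(alpha, theta)    # generating variate R
    w1 = r * z[0] / norm_z                   # first coordinate of W = R U
    acc += w1 * w1
mu_W2_hat = acc / n
```

The sample estimate agrees with the population value 0.06 to within Monte Carlo error, in line with the tables discussed above.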
Consider now the case of the multivariate moments and cumulants of W, whose formulae are provided in Theorem 2. Set d = 3. Since the univariate moments and cumulants are already available, the application of the theorem reduces to computing the term $L_{m\otimes 2}^{-1}\operatorname{vec}^{\otimes m} I_d$ (see Section 5.1 for $L_{m\otimes 2}^{-1}$).
The second univariate cumulant of W , viz. the variance, is given in Table 1 and vec I 3 provides the correct form for κ W , 2 .
Computations show that κ W , 4 = 0.02808 and the kurtosis κ W , 4 / κ W , 2 2 = 7.8 , which actually corresponds to 3 κ 0 given in Table 2. Note that for d = 3 , the fourth cumulant vector is of dimension d 4 = 81 ; in general, its elements are not all distinct, and the distinct elements can be recovered by linear transformations (see [30] for further discussion). The cumulant vector of the distinct elements, using the formula for L 2 2 1 given in Remark 1, reads
distinct L 2 2 1 vec 2 I 3 = 1 , 0 , 0 , 1 3 , 0 , 1 3 , 0 , 0 , 0 , 0 , 1 , 0 , 1 3 , 0 , 1 .
Direct estimation of the fourth cumulant vector from the simulated data confirms the theoretical results. The knowledge of κ W , 4 and Formulae (38) and (39) obtains β 2 , d = 54 (note also that β 2 , d = d ( d + 2 ) ( κ 0 + 1 ) and Vec B Y = ( 13 , 0 , 0 , 0 , 13 , 0 , 0 , 0 , 13 ) ).
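The extraction of distinct elements from a symmetric fourth-order cumulant tensor can be illustrated directly. A minimal sketch (the ordering by non-decreasing multi-indices is illustrative; the paper's L-matrices fix their own ordering): applied to the symmetrized $\operatorname{vec} I_3 \otimes \operatorname{vec} I_3$, it reproduces the 15-element vector displayed in the example.

```python
import numpy as np
from itertools import combinations_with_replacement

def distinct_elements(T):
    """One representative entry per non-decreasing multi-index of a
    fully symmetric q-way tensor of dimension d (illustrative ordering)."""
    q, d = T.ndim, T.shape[0]
    return np.array([T[idx] for idx in combinations_with_replacement(range(d), q)])

# Symmetrized vec I_3 ⊗ vec I_3 as a 4-way tensor:
# entries (d_ij d_kl + d_ik d_jl + d_il d_jk) / 3
I = np.eye(3)
T = (np.einsum('ij,kl->ijkl', I, I) + np.einsum('ik,jl->ijkl', I, I)
     + np.einsum('il,jk->ijkl', I, I)) / 3
v = distinct_elements(T)
print(v)   # (1, 0, 0, 1/3, 0, 1/3, 0, 0, 0, 0, 1, 0, 1/3, 0, 1)
```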
Example 6.
Set d = 3 and let X be a trivariate $St_3(0, \Omega, \alpha, p)$ random vector, with p = 15, α = (10, 5, 0) and Ω the identity matrix. Random numbers generated from the distribution of X and, for comparison, from a trivariate U are shown in Figure 2. From (32), we obtain δ ≈ (0.89, 0.45, 0).
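A sampler for this experiment can be sketched via a classical stochastic representation of the skew-t. This is an assumption, not the paper's definition: we take the Azzalini-type construction $X = R_p V$, with V skew-normal and $R_p^2 = p/\chi_p^2$, since Equations (30)–(32) are not reproduced here.

```python
import numpy as np

def sample_skew_t(n, delta, Omega, p, rng=None):
    """Sketch: X = R_p * V, V = delta*|Z0| + chol(Omega - delta delta') Z1,
    R_p = sqrt(p / chi2_p). Requires Omega - delta delta' positive definite."""
    rng = np.random.default_rng() if rng is None else rng
    delta = np.asarray(delta, dtype=float)
    Psi = Omega - np.outer(delta, delta)          # residual covariance
    L = np.linalg.cholesky(Psi)
    Z0 = np.abs(rng.standard_normal(n))           # half-normal component
    Z1 = rng.standard_normal((n, len(delta))) @ L.T
    V = Z0[:, None] * delta + Z1                  # skew-normal vector
    Rp = np.sqrt(p / rng.chisquare(p, size=n))    # generating variate
    return Rp[:, None] * V

rng = np.random.default_rng(2)
X = sample_skew_t(10**4, [0.89, 0.45, 0.0], np.eye(3), 15, rng=rng)
print(X.mean(axis=0))   # roughly (0.75, 0.37, 0), as reported in the example
```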
We run a Monte Carlo simulation in which 1000 samples of size n = 1000 are generated from the $St_3$ distribution defined above. For each of the 1000 samples, measures of location, variance, skewness and kurtosis are computed by applying Formulae (34) and (35) and the results of Lemmas 3 and 4. Empirical averages of these statistics are then calculated; below, $\widehat{E}$ denotes such an empirical expected value.
As for the mean and variance of X, we compare true and empirical expected values (values are rounded to the second decimal point):
$$E\,X = (0.75, 0.37, 0), \qquad \widehat{E}\,X = (0.75, 0.37, 0),$$
$$\operatorname{Var} X = \begin{pmatrix} 0.59 & 0.28 & 0 \\ 0.28 & 1.01 & 0 \\ 0 & 0 & 1.15 \end{pmatrix}, \qquad \widehat{E}\,\operatorname{Var} X = \begin{pmatrix} 0.59 & 0.28 & 0 \\ 0.28 & 1.02 & 0 \\ 0 & 0 & 1.15 \end{pmatrix}.$$
As far as the skewness vector is concerned, let $\kappa_3 = \kappa_{Y,3}$. Note that $\kappa_3$ has dimension $d^3$ and $d(d+1)(d+2)/6$ distinct values; for d = 3, these are 10. True and empirical expected values of the distinct elements are (the subscript D denotes distinct values)
$$\kappa_{3,D} = (0.94, 0.38, 0, 0.26, 0, 0.09, 0.22, 0, 0.05, 0), \qquad \widehat{E}\,\kappa_{3,D} = (0.92, 0.38, 0, 0.26, 0, 0.09, 0.22, 0, 0.05, 0).$$
Using $\kappa_3$ and the results in Section 4.1, one gets $\beta_{1,d} = 1.6$ and $s(Y) = (1.29, 0.64, 0)$. Considering now the kurtosis, define $\kappa_4 = \kappa_{Y,4}$, which has dimension $d^4$ and $d(d+1)(d+2)(d+3)/24$ distinct values; for d = 3, the distinct values are 15. True and empirical expected values of the distinct elements are
$$\kappa_{4,D} = (1.61, 0.50, 0, 0.44, 0, 0.20, 0.15, 0, 0.01, 0, 0.63, 0, 0.19, 0, 0.55), \qquad \widehat{E}\,\kappa_{4,D} = (1.49, 0.48, 0, 0.42, 0, 0.18, 0.13, 0, 0, 0, 0.60, 0, 0.18, 0, 0.52).$$
Again, using $\kappa_4$ and the results in Section 4.1, one gets $\beta_{2,3} = 19.45$ and $\operatorname{vec} B(Y) = (2.25, 0.66, 0, 0.66, 1.26, 0, 0, 0, 0.93)$.

5. Proofs

Proof of Theorem 1.
Differentiate $g(\lambda_j^2)$ to obtain the moments of $W_j$ in terms of generator moments,
$$\frac{\partial}{\partial \lambda_j} g(\lambda_j^2) = g^{(1)}(\lambda_j^2)\, 2\lambda_j, \qquad \frac{\partial^2}{\partial \lambda_j^2} g(\lambda_j^2) = g^{(2)}(\lambda_j^2)\, (2\lambda_j)^2 + 2\, g^{(1)}(\lambda_j^2).$$
Proceed by noting that, when taking the derivatives of the compound function $g(\lambda_j^2)$, the coefficients multiplying $g^{(n-k)}(\lambda_j^2)$, where $g^{(n)}(\lambda_j^2) = \partial^n g(\lambda_j^2)/\partial \lambda_j^n$, are scalar valued, say $b_n^{(n-k)}$. Therefore, we have
$$\frac{\partial^n}{\partial \lambda_j^n} g(\lambda_j^2) = b_n^{(n)} g^{(n)}(\lambda_j^2)(2\lambda_j)^n + b_n^{(n-1)} g^{(n-1)}(\lambda_j^2)(2\lambda_j)^{n-2} + \cdots + b_n^{(n-m)} g^{(n-m)}(\lambda_j^2)(2\lambda_j)^{n-2m}, \tag{42}$$
where $m = \lfloor n/2 \rfloor$, and the coefficients $b_n^{(k)}$ fulfill the recursion
$$b_n^{(n)} = 1, \qquad b_n^{(n-k)} = 2(n - 2k + 1)\, b_{n-1}^{(n-k)} + b_{n-1}^{(n-k-1)}, \quad k = 1, \dots, m = \lfloor n/2 \rfloor.$$
If n is odd, then the power of the last term in (42) is $n - 2\lfloor n/2 \rfloor = 1$ and the n-th derivative at zero vanishes; otherwise, it equals $b_n^{(n/2)} g^{(m)}(0)$. Hence,
$$E\, W_j^n = \begin{cases} 0 & \text{if } n \text{ is odd}, \\ (-1)^m\, b_{2m}^{(m)}\, g^{(m)}(0) & \text{if } n = 2m \text{ is even}. \end{cases}$$
As has been noticed by [6], $b_n^{(k)}$ does not depend on g. Hence, we may choose, say, $g(t) = \exp(-t/2)$ (a valid characteristic-function generator) and derive the coefficients $b_{2m}^{(m)}$, resulting in $b_{2m}^{(m)} = 2^m (2m-1)!!$. Hence, for $n = 2m$, $E\, W_j^{2m} = (-1)^m 2^m (2m-1)!!\, g^{(m)}(0)$. Now, (10) follows by changing $(-1)^m g^{(m)}(0)$ to the generator moment $\nu_m$:
$$E\, W_j^{2m} = \frac{(2m)!}{m!}\, \nu_m.$$
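The recursion for the coefficients $b_n^{(n-k)}$ and the closed form $b_{2m}^{(m)} = 2^m (2m-1)!! = (2m)!/m!$ can be checked numerically; a small sketch (function names are ours):

```python
from math import factorial

def b_coeffs(n):
    """Coefficients b_n^(n-k), k = 0..floor(n/2), via the recursion
    b_n^(n) = 1,  b_n^(n-k) = 2(n-2k+1) b_{n-1}^(n-k) + b_{n-1}^(n-k-1)."""
    b = {(1, 1): 1}
    for nn in range(2, n + 1):
        b[(nn, nn)] = 1
        for k in range(1, nn // 2 + 1):
            b[(nn, nn - k)] = (2 * (nn - 2 * k + 1) * b.get((nn - 1, nn - k), 0)
                               + b.get((nn - 1, nn - k - 1), 0))
    return b

def double_factorial(n):
    return 1 if n <= 0 else n * double_factorial(n - 2)

b = b_coeffs(10)
# b_{2m}^{(m)} = 2^m (2m-1)!! = (2m)!/m!  for m = 1..5
for m in range(1, 6):
    assert b[(2 * m, m)] == 2**m * double_factorial(2 * m - 1) == factorial(2 * m) // factorial(m)
```

For instance, $b_2^{(1)} = 2$, $b_4^{(2)} = 12$ and $b_6^{(3)} = 120$, matching the double-factorial formula.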
Observe that the right-hand side does not depend on the index j, so that all marginals are equally distributed. Plugging the cumulant generator function f into (42), it is readily seen that the odd generator cumulants $\zeta_m$ are also zero. The even-order cumulant of each $W_j$ is
$$\operatorname{Cum}_{2m}(W_j) = 2^m (2m-1)!!\, \zeta_m = \frac{(2m)!}{m!}\, \zeta_m.$$
Formula (17) utilizes Faà di Bruno's Formula (9), connecting the generator cumulants $\zeta_m$ to the generator moments $\nu_m$. □
Proof of Corollary 1.
The general Formula (17) is based on the formula for the “cumulants” $\zeta_m$ in terms of the “moments” $\nu_j$. Thus
$$\tilde{\kappa}_m = \frac{\zeta_m}{\nu_1^m} = \sum_{r=1}^{m} (-1)^{r-1} (r-1)! \sum_{\sum \ell_j = r,\, \sum j \ell_j = m} \frac{m!}{\prod_{j=1}^{m} (j!)^{\ell_j}\, \ell_j!} \prod_{j=1}^{m} \left( \frac{\nu_j}{\nu_1^j} \right)^{\ell_j},$$
and changing the ratio $\nu_j / \nu_1^j$ to $\tilde{\mu}_{2j} + 1$ yields the assertion (17). □
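The moment–cumulant conversion underlying the corollary can be sanity-checked with the equivalent recursive form of the partition sum, $\kappa_m = \nu_m - \sum_{j=1}^{m-1} \binom{m-1}{j-1} \kappa_j \nu_{m-j}$ (a standard identity, stated here in our own notation, not the paper's):

```python
from math import comb

def moments_to_cumulants(mu):
    """Convert raw moments mu[1..M] (mu[0] unused) to cumulants via
    kappa_m = mu_m - sum_{j<m} C(m-1, j-1) kappa_j mu_{m-j},
    an equivalent recursive form of the Faa di Bruno partition sum."""
    M = len(mu) - 1
    kappa = [0.0] * (M + 1)
    for m in range(1, M + 1):
        kappa[m] = mu[m] - sum(comb(m - 1, j - 1) * kappa[j] * mu[m - j]
                               for j in range(1, m))
    return kappa

# Standard normal moments (1, 0, 1, 0, 3, 0, 15): all cumulants
# beyond the second must vanish.
kappa = moments_to_cumulants([1, 0, 1, 0, 3, 0, 15])
print(kappa[1:])   # [0, 1, 0, 0, 0, 0]
```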
Proof of Theorem 2.
The key point in the proof is understanding the form of the derivative $D_\lambda^{\otimes 2m} (\lambda \otimes \lambda)^{\otimes m}$. First, we notice that
$$D_\lambda^{\otimes 2m} (\lambda \otimes \lambda)^{\otimes m} = D_\lambda^{\otimes 2m} \bigotimes_{j=1}^{m} f_j(\lambda),$$
where $f_j(\lambda) = \lambda \otimes \lambda$; that is, we can use the general Leibniz rule and symmetrize by $S_{d 1^{\otimes 2m}}$,
$$D_\lambda^{\otimes 2m} \bigotimes_{j} f_j(\lambda) = \sum_{\sum k_{1:m} = 2m} \binom{2m}{k_{1:m}} \bigotimes_{j} D_\lambda^{\otimes k_j} f_j(\lambda) = (2m)!\, \operatorname{vec}^{\otimes m} I_d, \tag{44}$$
since the only nonzero terms are those with $k_j = 2$, $j = 1{:}m$, and then $D_\lambda^{\otimes 2} f_j(\lambda) = 2 \operatorname{vec} I_d$. Therefore, we have
$$\binom{2m}{2 \cdot 1_m}\, 2^m = \frac{(2m)!}{(2!)^m}\, 2^m = (2m)!$$
terms in the sum (44), each equal to $\operatorname{vec}^{\otimes m} I_d$; hence the assertion follows. □
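The m = 2 case of this identity can be verified numerically: the fourth derivative tensor of $(\lambda'\lambda)^2$ is $8(\delta_{ij}\delta_{kl} + \delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk})$, which equals $4!$ times the symmetrization of $\operatorname{vec} I_d \otimes \operatorname{vec} I_d$. A sketch:

```python
import numpy as np
from itertools import permutations

d = 3
I = np.eye(d)

# Fourth-derivative tensor of f(l) = (l'l)^2, in closed form:
# 8*(d_ij d_kl + d_ik d_jl + d_il d_jk)
D4 = 8 * (np.einsum('ij,kl->ijkl', I, I)
          + np.einsum('ik,jl->ijkl', I, I)
          + np.einsum('il,jk->ijkl', I, I))

# (2m)! times the symmetrized vec I ⊗ vec I, m = 2
T = np.einsum('ij,kl->ijkl', I, I)                     # vec I ⊗ vec I as a 4-way tensor
S = sum(T.transpose(p) for p in permutations(range(4))) / 24
rhs = 24 * S

assert np.allclose(D4, rhs)
```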
Proof of Lemma 3.
Direct calculation shows
$$\begin{aligned}
\kappa_{X,3} &= \underline{\operatorname{Cum}}_1\!\left(\kappa_{X,3|R_p}\right) + 3\, \underline{\operatorname{Cum}}_2\!\left(\kappa_{X,1|R_p}, \kappa_{X,2|R_p}\right) + \underline{\operatorname{Cum}}_3\!\left(\kappa_{X,1|R_p}\right) \\
&= \underline{\operatorname{Cum}}_1\!\left(R_p^3 \kappa_{V,3}\right) + 3\, \underline{\operatorname{Cum}}_2\!\left(R_p \kappa_{V,1}, R_p^2 \kappa_{V,2}\right) + \underline{\operatorname{Cum}}_3\!\left(R_p \kappa_{V,1}\right) \\
&= \kappa_{R_p^3,1}\, \kappa_{V,3} + 3\, \kappa_{R_p,R_p^2}\, \kappa_{V,1} \otimes \kappa_{V,2} + \kappa_{R_p,3}\, \kappa_{V,1}^{\otimes 3} \\
&= \kappa_{R_p^3,1}\, \kappa_{Z,3}\, \delta^{\otimes 3} + 3\, \kappa_{R_p,R_p^2}\, \kappa_{Z,1}\, \delta \otimes \left( \operatorname{vec}\Omega - \kappa_{Z,2}\, \delta^{\otimes 2} \right) + \kappa_{R_p,3}\, \kappa_{Z,1}^3\, \delta^{\otimes 3} \\
&= \left( \kappa_{R_p^3,1}\, \kappa_{Z,3} - 3\, \kappa_{R_p,R_p^2}\, \kappa_{Z,1} \kappa_{Z,2} + \kappa_{R_p,3}\, \kappa_{Z,1}^3 \right) \delta^{\otimes 3} + 3\, \kappa_{R_p,R_p^2}\, \kappa_{Z,1}\, \delta \otimes \operatorname{vec}\Omega,
\end{aligned}$$
where $\kappa_{Z,2} = 2/\pi$; the coefficients of $\kappa_{X,3}$ in the expression are given in (36). Since $\kappa_{R_p^3,1} = \mu_{R_p,3}$, we obtain
$$\kappa_{X,3} = \left( \mu_{R_p,3}\, \kappa_{Z,3} + \kappa_{R_p,3}\, \kappa_{Z,1}^3 \right) \delta^{\otimes 3} + 3\, \kappa_{R_p,R_p^2}\, \kappa_{Z,1}\, \delta \otimes \left( \operatorname{vec}\Omega - \tfrac{2}{\pi}\, \delta^{\otimes 2} \right).$$
The quantities in the coefficient of $\delta^{\otimes 3}$ are given in (33), leading to the result of the lemma. □
Proof of Lemma 4.
Writing $G_1 = G_1(p)$, one can derive the formula
$$\kappa_{X,4} = \kappa_{R_p^4,1}\, \kappa_{V,4} + 4\, \kappa_{R_p,R_p^3}\, \kappa_{V,1} \otimes \kappa_{V,3} + 3\, \kappa_{R_p^2,R_p^2}\, \kappa_{V,2}^{\otimes 2} + 6\, \kappa_{R_p,R_p,R_p^2}\, \kappa_{V,1}^{\otimes 2} \otimes \kappa_{V,2} + \kappa_{R_p,4}\, \kappa_{V,1}^{\otimes 4}$$
from (36) for $\kappa_{X,4}$ directly. Paying particular attention to the value of $\kappa_{V,2}$, we get
$$\kappa_{X,4} = \left( \kappa_{R_p^4,1}\, \kappa_{Z,4} + 4\, \kappa_{R_p,R_p^3}\, \kappa_{Z,1} \kappa_{Z,3} + \kappa_{R_p,4}\, \kappa_{Z,1}^4 \right) \delta^{\otimes 4} + 3\, \kappa_{R_p^2,2} \left( \operatorname{vec}\Omega - \tfrac{2}{\pi}\, \delta^{\otimes 2} \right)^{\otimes 2} + 6\, \kappa_{R_p,R_p,R_p^2}\, \kappa_{Z,1}^2 \left( \operatorname{vec}\Omega - \tfrac{2}{\pi}\, \delta^{\otimes 2} \right) \otimes \delta^{\otimes 2};$$
since $\kappa_{R_p^4,1} = \mu_{R_p,4}$, moreover,
$$\kappa_{X,4} = \left( \mu_{R_p,4}\, \kappa_{Z,4} + 4\, \kappa_{R_p,R_p^3}\, \kappa_{Z,1} \kappa_{Z,3} + \kappa_{R_p,4}\, \kappa_{Z,1}^4 + 3 \left(\tfrac{2}{\pi}\right)^2 \kappa_{R_p^2,2} - 6 \left(\tfrac{2}{\pi}\right)^2 \kappa_{R_p,R_p,R_p^2} \right) \delta^{\otimes 4} + 3\, \kappa_{R_p^2,2}\, (\operatorname{vec}\Omega)^{\otimes 2} + 6\, \tfrac{2}{\pi} \left( \kappa_{R_p,R_p,R_p^2} - \kappa_{R_p^2,2} \right) \operatorname{vec}\Omega \otimes \delta^{\otimes 2},$$
and
$$\kappa_{R_p^2,2} = \frac{2 p^2}{(p-4)(p-2)^2}, \qquad \kappa_{R_p,R_p,R_p^2} - \kappa_{R_p^2,2} = -\frac{p^2 G_1^2}{(p-2)(p-3)}.$$
The coefficient of $\delta^{\otimes 4}$ follows:
$$\begin{aligned}
&\mu_{R_p,4}\, \kappa_{Z,4} + 4\, \kappa_{R_p,R_p^3}\, \kappa_{Z,1} \kappa_{Z,3} + 3 \left(\tfrac{2}{\pi}\right)^2 \kappa_{R_p^2,2} - 6 \left(\tfrac{2}{\pi}\right)^2 \kappa_{R_p,R_p,R_p^2} + \kappa_{R_p,4}\, \kappa_{Z,1}^4 \\
&\quad = \left( -6 \left(\tfrac{2}{\pi}\right)^2 + 4\, \tfrac{2}{\pi} \right) \mu_{R_p,4} + \left( 8 \left(\tfrac{2}{\pi}\right)^2 - 4\, \tfrac{2}{\pi} \right) \kappa_{R_p,R_p^3} + 3 \left(\tfrac{2}{\pi}\right)^2 \kappa_{R_p^2,2} - 6 \left(\tfrac{2}{\pi}\right)^2 \kappa_{R_p,R_p,R_p^2} + \left(\tfrac{2}{\pi}\right)^2 \kappa_{R_p,4} \\
&\quad = \left(\tfrac{2}{\pi}\right)^2 \left( -6\, \mu_{R_p,4} + 8\, \kappa_{R_p,R_p^3} + 3\, \kappa_{R_p^2,2} - 6\, \kappa_{R_p,R_p,R_p^2} + \kappa_{R_p,4} \right) + 4\, \tfrac{2}{\pi} \left( \mu_{R_p,4} - \kappa_{R_p,R_p^3} \right),
\end{aligned}$$
where $-6\, \mu_{R_p,4} + 8\, \kappa_{R_p,R_p^3} + 3\, \kappa_{R_p^2,2} - 6\, \kappa_{R_p,R_p,R_p^2} + \kappa_{R_p,4} = -3 p^2 G_1^4 / 2$ and $\mu_{R_p,4} - \kappa_{R_p,R_p^3} = p^2 G_1^2 / (2(p-3))$. □
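The variance of the generating variate used above can be cross-checked from exact inverse chi-square moments, assuming the usual t-representation $R_p^2 = p/\chi_p^2$ (an assumption of this sketch; the paper defines $R_p$ earlier in the text):

```python
from math import isclose

def kappa_Rp2_2(p):
    """Var(R_p^2) with R_p^2 = p / chi2_p, from the exact moments
    E[1/chi2_p] = 1/(p-2) and E[1/chi2_p^2] = 1/((p-2)(p-4))."""
    m1 = p / (p - 2)
    m2 = p * p / ((p - 2) * (p - 4))
    return m2 - m1 * m1

p = 15
# Agrees with the closed form 2 p^2 / ((p-4)(p-2)^2) in the proof
print(kappa_Rp2_2(p), 2 * p**2 / ((p - 4) * (p - 2)**2))
```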
Proof of Theorem 3.
We use (23) and obtain
$$\kappa_{X,n} = S_{d 1^{\otimes n}} \sum_{r=1}^{n} \sum_{\sum \ell_j = r,\, \sum j \ell_j = n} \frac{n!}{\prod_{j=1}^{n} (j!)^{\ell_j}\, \ell_j!}\ \underline{\operatorname{Cum}}_r\!\left( \kappa_{X,1|R_p}^{1:\ell_1}, \dots, \kappa_{X,n|R_p}^{1:\ell_n} \right),$$
where $\kappa_{X,j|R_p}^{1:\ell_j}$ denotes $\ell_j$ copies of $\kappa_{X,j|R_p} = \underline{\operatorname{Cum}}_j(X|R_p)$, as usual, including the case $\ell_j = 0$, when $\underline{\operatorname{Cum}}_j(X|R_p)$ is missing from $\underline{\operatorname{Cum}}_r$; therefore, $\underline{\operatorname{Cum}}_r$ contains exactly r variables. The conditional cumulant is
$$\kappa_{X,j|R_p} = \kappa_{R_p V, j | R_p} = R_p^{j}\, \kappa_{V,j},$$
since $R_p$ and V are independent. We apply Lemma 4 of [29], obtaining the $\kappa_{V,j}$, and get
$$\begin{aligned}
\kappa_{X,n} &= n! \sum_{r=1}^{n} \sum_{\sum \ell_j = r,\, \sum j \ell_j = n} C_r\!\left(R_p, \ell_{1:n}\right) \prod_{j=1}^{n} \frac{1}{(j!)^{\ell_j}\, \ell_j!} \bigotimes_{j=1:n} \kappa_{V,j}^{\otimes \ell_j} \\
&= n!\, S_{d 1^{\otimes n}} \sum_{r=1}^{n} \sum_{\sum \ell_j = r,\, \sum j \ell_j = n} C_r\!\left(R_p, \ell_{1:n}\right) \prod_{j=1,\, j \neq 2}^{n} \frac{1}{\ell_j!} \left( \frac{\kappa_{Z,j}}{j!} \right)^{\ell_j} \frac{1}{\ell_2!} \left( \frac{1}{2!} \right)^{\ell_2} \kappa_{V,2}^{\otimes \ell_2} \otimes \delta^{\otimes (n - 2\ell_2)},
\end{aligned}$$
where we have separated the case j = 2, since the second-order cumulant $\kappa_{V,2}$ is different in the product. □

5.1. Commutator and Symmetrizer Matrices

Commutator matrices $K_p$ change the order of the tensor products of vectors according to a permutation p; see [29] for more details. The commutator matrix $L_\ell$ corresponds to a type $\ell = \ell_{1:n}$, such that the nonzero $\ell_j$ of $\ell$ define a sum of commutator matrices as follows:
$$L_{\ell_{r:1}}^{-1} = \sum_{\mathcal{K}_{r|\ell} \in \mathcal{P}_n} K_{p(\mathcal{K}_{r|\ell})}^{-1},$$
where the index $\ell_{r:1}$ is defined in the following way: if $\ell_j \neq 0$, then $\ell_j$ is represented by $j^{\ell_j}$, where j denotes the actual value of the index (for instance, the type with $\ell_2 = m$ gives the index $2^m$ used above). The summation is taken over all partitions $\mathcal{K}_{r|\ell}$ of the set 1:n having type $\ell$ and size r.
Ref. [49] uses the symmetrizer matrix $S_{d 1^{\otimes q}}$ for the symmetrization of a tensor product of q vectors of the same dimension d; that is, $S_{d 1^{\otimes 4}} \left( \underline{a}_1 \otimes \underline{a}_2 \otimes \underline{a}_3 \otimes \underline{a}_4 \right)$ is a vector of dimension $d^4$ that is symmetric in the $\underline{a}_j$. It can be computed as
$$S_{d 1^{\otimes q}} = \frac{1}{q!} \sum_{p \in \mathcal{P}_q} K_p,$$
where $\mathcal{P}_q$ denotes the set of all permutations of the numbers 1:q; the sum includes q! terms. The symmetrizer $S_{d 1^{\otimes q}}$ provides an orthogonal projection onto the subspace of $\mathbb{R}^{d^q}$ that is invariant under the transformation $S_{d 1^{\otimes q}}$. A vector is called symmetric if it belongs to that subspace.
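The symmetrizer and commutator matrices can be realized directly as permutation matrices; a minimal sketch (index conventions are ours; for the symmetrizer the convention is immaterial, since the sum runs over all permutations):

```python
import numpy as np
from itertools import permutations

def commutator_matrix(p, d):
    """K_p: permutation matrix on R^{d^q} reordering a tensor product
    a_1 ⊗ ... ⊗ a_q of d-vectors according to the permutation p."""
    q = len(p)
    K = np.zeros((d**q, d**q))
    for idx in np.ndindex(*(d,) * q):
        row = int(np.ravel_multi_index(tuple(idx[p[j]] for j in range(q)), (d,) * q))
        col = int(np.ravel_multi_index(idx, (d,) * q))
        K[row, col] = 1.0
    return K

def symmetrizer(q, d):
    """S_{d 1^{⊗q}} = (1/q!) * sum of K_p over all permutations p of 1:q."""
    Ks = [commutator_matrix(p, d) for p in permutations(range(q))]
    return sum(Ks) / len(Ks)

S = symmetrizer(3, 2)
# Orthogonal projection: idempotent and symmetric
assert np.allclose(S @ S, S) and np.allclose(S, S.T)
```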

Author Contributions

Conceptualization, S.R.J., E.T. and G.H.T.; Data curation, E.T.; Formal analysis, S.R.J., E.T. and G.H.T.; Methodology, G.H.T.; Software, E.T.; Supervision, S.R.J.; Visualization, E.T.; Writing—original draft, G.H.T.; Writing—review and editing, S.R.J. and E.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kollo, T.; von Rosen, D. Advanced Multivariate Statistics with Matrices; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006; Volume 579.
  2. McCullagh, P. Tensor Methods in Statistics; Courier Dover Publications: New York, NY, USA, 2018.
  3. Ould-Baba, H.; Robin, V.; Antoni, J. Concise formulae for the cumulant matrices of a random vector. Linear Algebra Appl. 2015, 485, 392–416.
  4. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500.
  5. Qi, L. Rank and eigenvalues of a supersymmetric tensor, the multivariate homogeneous polynomial and the algebraic hypersurface it defines. J. Symb. Comput. 2006, 41, 1309–1327.
  6. Berkane, M.; Bentler, P.M. Moments of elliptically distributed random variates. Stat. Probab. Lett. 1986, 4, 333–335.
  7. Mardia, K.V. Measures of Multivariate Skewness and Kurtosis with Applications. Biometrika 1970, 57, 519–530.
  8. Malkovich, J.F.; Afifi, A.A. On tests for multivariate normality. J. Am. Stat. Assoc. 1973, 68, 176–179.
  9. Srivastava, M.S. A measure of skewness and kurtosis and a graphical method for assessing multivariate normality. Stat. Probab. Lett. 1984, 2, 263–267.
  10. Koziol, J.A. A Note on Measures of Multivariate Kurtosis. Biom. J. 1989, 31, 619–624.
  11. Móri, T.F.; Rohatgi, V.K.; Székely, G.J. On multivariate skewness and kurtosis. Theory Probab. Appl. 1994, 38, 547–551.
  12. Shoukat Choudhury, M.; Shah, S.; Thornhill, N. Diagnosis of poor control-loop performance using higher-order statistics. Automatica 2004, 40, 1719–1728.
  13. Oja, H.; Sirkiä, S.; Eriksson, J. Scatter matrices and independent component analysis. Austrian J. Stat. 2006, 35, 175–189.
  14. Balakrishnan, N.; Brito, M.R.; Quiroz, A.J. A vectorial notion of skewness and its use in testing for multivariate symmetry. Commun. Stat. Theory Methods 2007, 36, 1757–1767.
  15. Kollo, T. Multivariate skewness and kurtosis measures with an application in ICA. J. Multivar. Anal. 2008, 99, 2328–2338.
  16. Tyler, D.E.; Critchley, F.; Dümbgen, L.; Oja, H. Invariant co-ordinate selection. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2009, 71, 549–592.
  17. Ilmonen, P.; Nevalainen, J.; Oja, H. Characteristics of multivariate distributions and the invariant coordinate system. Stat. Probab. Lett. 2010, 80, 1844–1853.
  18. Peña, D.; Prieto, F.J.; Viladomat, J. Eigenvectors of a kurtosis matrix as interesting directions to reveal cluster structure. J. Multivar. Anal. 2010, 101, 1995–2007.
  19. Tanaka, K.; Yamada, T.; Watanabe, T. Applications of Gram–Charlier expansion and bond moments for pricing of interest rates and credit risk. Quant. Financ. 2010, 10, 645–662.
  20. Huang, H.H.; Lin, S.H.; Wang, C.P.; Chiu, C.Y. Adjusting MV-efficient portfolio frontier bias for skewed and non-mesokurtic returns. N. Am. J. Econ. Financ. 2014, 29, 59–83.
  21. Wang, Y.; Fan, J.; Yao, Y. Online monitoring of multivariate processes using higher-order cumulants analysis. Ind. Eng. Chem. Res. 2014, 53, 4328–4338.
  22. Lin, S.H.; Huang, H.H.; Li, S.H. Option pricing under truncated Gram–Charlier expansion. N. Am. J. Econ. Financ. 2015, 32, 77–97.
  23. Loperfido, N. Singular value decomposition of the third multivariate moment. Linear Algebra Appl. 2015, 473, 202–216.
  24. De Luca, G.; Loperfido, N. Modelling multivariate skewness in financial returns: A SGARCH approach. Eur. J. Financ. 2015, 21, 1113–1131.
  25. Fiorentini, G.; Planas, C.; Rossi, A. Skewness and kurtosis of multivariate Markov-switching processes. Comput. Stat. Data Anal. 2016, 100, 153–159.
  26. León, A.; Moreno, M. One-sided performance measures under Gram–Charlier distributions. J. Bank. Financ. 2017, 74, 38–50.
  27. Nordhausen, K.; Oja, H.; Tyler, D.E.; Virta, J. Asymptotic and bootstrap tests for the dimension of the non-Gaussian subspace. IEEE Signal Process. Lett. 2017, 24, 887–891.
  28. Loperfido, N. Skewness-based projection pursuit: A computational approach. Comput. Stat. Data Anal. 2018, 120, 42–57.
  29. Jammalamadaka, S.R.; Taufer, E.; Terdik, G.H. On Multivariate Skewness and Kurtosis. Sankhya A 2021, 83-A, 607–644.
  30. Jammalamadaka, S.R.; Taufer, E.; Terdik, G.H. Asymptotic theory for statistics based on cumulant vectors with applications. Scand. J. Stat. 2021, 48, 708–728.
  31. Azzalini, A.; Capitanio, A. Distributions generated by perturbation of symmetry with emphasis on a multivariate skew t-distribution. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2003, 65, 367–389.
  32. Azzalini, A.; Regoli, G. Some properties of skew-symmetric distributions. Ann. Inst. Stat. Math. 2011, 64, 857–879.
  33. Sahu, S.K.; Dey, D.K.; Branco, M.D. A new class of multivariate skew distributions with applications to Bayesian regression models. Can. J. Stat. 2003, 31, 129–150.
  34. Jones, M.C.; Faddy, M. A skew extension of the t-distribution, with applications. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2003, 65, 159–174.
  35. Adcock, C.; Azzalini, A. A Selective Overview of Skew-Elliptical and Related Distributions and of Their Applications. Symmetry 2020, 12, 118.
  36. Rao Jammalamadaka, S.; Subba Rao, T.; Terdik, G. Higher Order Cumulants of Random Vectors and Applications to Statistical Inference and Time Series. Sankhya Ser. A 2006, 68, 326–356.
  37. NIST Digital Library of Mathematical Functions. Release 1.0.17 of 2017-12-22. Available online: http://dlmf.nist.gov/ (accessed on 23 June 2021).
  38. Fang, K.W.; Kotz, S.; Ng, K.W. Symmetric Multivariate and Related Distributions; Chapman and Hall/CRC: Boca Raton, FL, USA, 2017.
  39. Muirhead, R.J. Aspects of Multivariate Statistical Theory; John Wiley & Sons: Hoboken, NJ, USA, 2009; Volume 197.
  40. Anderson, T.W. An Introduction to Multivariate Statistical Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2003.
  41. Steyn, H. On the problem of more than one kurtosis parameter in multivariate analysis. J. Multivar. Anal. 1993, 44, 1–22.
  42. Azzalini, A.; Dalla Valle, A. The Multivariate Skew-Normal Distribution. Biometrika 1996, 83, 715–726.
  43. Azzalini, A.; Capitanio, A. Statistical applications of the multivariate skew normal distribution. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 1999, 61, 579–602.
  44. Genton, M.G.; He, L.; Liu, X. Moments of skew-normal random vectors and their quadratic forms. Stat. Probab. Lett. 2001, 51, 319–325.
  45. Azzalini, A. The Skew-normal Distribution and Related Multivariate Families. Scand. J. Stat. 2005, 32, 159–188.
  46. Branco, M.D.; Dey, D.K. A General Class of Multivariate Skew-Elliptical Distributions. J. Multivar. Anal. 2001, 79, 99–113.
  47. Kim, H.J.; Mallick, B.K. Moments of random vectors with skew t distribution and their quadratic forms. Stat. Probab. Lett. 2003, 63, 417–423.
  48. Sutradhar, B.C. On the characteristic function of multivariate Student t-distribution. Can. J. Stat. 1986, 14, 329–337.
  49. Holmquist, B. The d-Variate Vector Hermite Polynomial of Order k. Linear Algebra Appl. 1996, 237/238, 155–190.
Figure 1. One hundred random values, respectively, for U (transparent red) and W (solid blue).
Figure 2. Five hundred random values, respectively, for U (transparent red) and X (blue).
Table 1. Moments μ_{W,2m} (see (18)). True values and sample estimates.

2m               2        4        6        8        10
True             0.0600   0.0388   0.0750   0.2939   1.9480
Sample n = 10^3  0.0597   0.0331   0.0412   0.0718   0.1438
Sample n = 10^4  0.0597   0.0371   0.0606   0.1612   0.5439
Sample n = 10^5  0.0602   0.0394   0.0734   0.2401   1.0682
Sample n = 10^6  0.0601   0.0391   0.0740   0.2614   1.3682
Table 2. Moments ν_m (see (20)) and kurtosis parameter κ_0. True values and sample estimates.

m                1        2         3         4         κ_0
True             0.0300   0.00324   0.00062   0.00017   2.60
Sample n = 10^3  0.0334   0.00477   0.00132   0.00040   3.28
Sample n = 10^4  0.0313   0.00331   0.00052   0.00009   2.38
Sample n = 10^5  0.0303   0.00333   0.00067   0.00019   2.62
