Cumulants of Multivariate Symmetric and Skew Symmetric Distributions

This paper provides a systematic and comprehensive treatment for obtaining general expressions, of any order, for the moments and cumulants of spherically and elliptically symmetric multivariate distributions; results for the multivariate t-distribution and the related skew-t distribution are discussed in some detail.


Introduction
This paper provides a systematic treatment and derivation of moments and cumulants of any order for spherically as well as elliptically symmetric multivariate distributions. Expressions for the multivariate t-distribution and the related skew-t distribution are considered in detail. Our approach exploits the stochastic representation of such random variables in terms of the so-called generating variate and its uniform base on the sphere of the appropriate dimension.
It is well known that the problem of representing the structure of higher-order cumulants of multivariate distributions is rather messy. In this paper, we present an approach based on a vectorization of cumulants which leads to a natural and intuitive way to obtain multivariate moments and cumulants of any order; we make the point that this provides the simplest way to deal with this issue.
We note that [1] provides formulae for cumulants in terms of matrices; however, retaining a matrix structure for all higher-order cumulants leads to high-dimensional matrices with special symmetric structures which are quite hard to follow notationally and computationally. Ref. [2] provides quite an elegant approach using tensor methods; however, tensor methods are not very well known and are computationally not so simple.
The method discussed here is based on relatively simple calculus. Although the tensor product of Euclidean vectors is not commutative, it has the advantage of permutation equivalence and allows one to obtain general results for cumulants and moments of any order, as will be demonstrated in this paper, where general formulae suitable for algorithmic implementation in computer software will be provided. Methods based on a matrix approach do not provide this type of result; see also [3], which goes as far as the sixth-order moment matrices, whereas there is no such limitation in our derivations and results. For further discussion, see also [4,5].
The primary contributions of this paper may be summarized as: (a) providing formulae, valid for any order, for vectorized cumulants and moments of multivariate spherical and elliptically symmetric distributions; matrix structures can be readily obtained from these expressions, and some examples are provided in Section 4; (b) introducing the so-called marginal moment and cumulant parameters for multivariate spherical and elliptically symmetric distributions, thereby extending the results discussed in [6]; (c) providing formulae for all-order moments and cumulants of the multivariate t and skew-symmetric t-distributions; results of this type, as far as we know, are not available in the literature.
As is well known, cumulants play a key role in several areas of multivariate statistics: from the earliest applications in estimation and testing to fields such as signal detection, clustering, invariant coordinate selection, projection pursuit, time series modelling, pricing and portfolio analysis. See, for example, . Higher-order cumulants going beyond the third and fourth order are typically needed to obtain the asymptotic properties of various statistics discussed in the aforementioned papers; for instance, ref. [30] provides general expressions for covariances of cumulants of the third and fourth order, requiring several higher-order cumulants. We believe that the expressions for higher-order cumulants for the models presented here open the door for such explorations in these areas.
In this paper, we will also go beyond such spherically and elliptically symmetric models, and obtain general expressions for the cumulants of skew-symmetric distributions such as the skew-t (see [31]). In particular, ref. [32] provides some specific analytical properties of skew-symmetric distributions. Further discussion on these distributions and their applications can be found in [33][34][35].
To formally introduce the problem, consider a random vector X in d dimensions, with mean vector µ and covariance matrix Σ, and let λ be a d-vector of real constants; let φ_X(λ) and ψ_X(λ) = log φ_X(λ) denote, respectively, the characteristic function and the cumulant-generating function of X.
With the symbol ⊗ denoting the Kronecker product, consider the operator D_λ^⊗, which we refer to as the T-derivative; see [36] for details. For any function φ(λ), the T-derivative is defined as D_λ^⊗ φ(λ) = ∂φ(λ)/∂λ, and its k-fold application yields the d^k-vector D_λ^⊗k φ(λ); the k-th order cumulant of the vector X is then obtained as

Cum_k(X) = κ_{X,k}^⊗ = i^{−k} D_λ^⊗k ψ_X(λ) |_{λ=0}.   (1)

Note that Cum_k(X) is a vector of dimension d^k that contains all possible cumulants of order k formed by X_1, . . . , X_d. For example, in Equation (1), one has κ_{X,2} = vec Σ. Throughout this paper, N is used for the set of natural numbers, n!! for the double (semi-)factorial (see [37], Common Notations and Definitions) and (d)_m = Γ(d + m)/Γ(d) for the Pochhammer symbol (see [37], 5.2.5), known also as the rising or ascending factorial. We will also denote G_{2m}(p) = Γ(p/2 + m)/Γ(p/2), so that G_{2m}(p) = (p/2)_m. Finally, for any matrix A, recall vec^⊗m A = (vec A)^⊗m.

The paper is organized as follows. Section 2 discusses moments and cumulants of spherical and elliptically symmetric multivariate distributions; details on marginal moments and moment- and cumulant-parameters are provided. Section 3 considers the special case of the multivariate t-distribution and discusses its extension to the skew-t distribution. Section 4 presents some applications and examples, which are aimed mainly at providing evidence of the correctness of the formulae provided, rather than simulations on estimation or testing. The proofs are collected in a separate section at the end, viz. Section 5.
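As a small numerical illustration of these vectorized cumulants (our own sketch, not code from the paper): the k-th cumulant vector Cum_k(X) lives in R^(d^k); for a centred sample, the second cumulant vector estimates vec Σ, and for a symmetric law the third cumulant vector is close to zero.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 100_000
X = rng.standard_normal((n, d))           # i.i.d. standard normal rows
Xc = X - X.mean(axis=0)                   # centre the sample

# second cumulant vector (d^2 entries): vec of the sample covariance
cum2 = (np.einsum('ni,nj->ij', Xc, Xc) / n).ravel()

# third cumulant vector (d^3 entries): sample mean of X (x) X (x) X
cum3 = (np.einsum('ni,nj,nk->ijk', Xc, Xc, Xc) / n).ravel()

print(cum2.reshape(d, d).round(2))        # close to the identity, i.e. vec(Sigma)
print(float(np.abs(cum3).max()))          # close to 0 for a symmetric law
```

For the standard normal sample above, cum2 reshaped to d × d is close to I_d, and all d³ = 27 entries of cum3 are near zero, in line with Equation (1) for k = 2, 3.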

Spherical and Elliptically Symmetric Distributions
For a comprehensive discussion of spherically and elliptically symmetric multivariate distributions, one may refer to the book by [38]. Other book-level discussions can be found, for instance, in [1,39,40]. General formulae for moments and kurtosis parameters are discussed in [6,41], among others.
A random variate W is spherically distributed if its distribution is invariant under rotations of R^d, which is equivalent to having the stochastic representation

W = R U,   (3)

where R is a non-negative random variable and U is a uniform random d-vector on the sphere S^{d−1}, independent of R (see Theorem 2.5 of [38]). The random variable R is called the generating variate, with generating distribution F, and the random vector U is the uniform base of the spherical distribution. The characteristic function of W can then be written in the form φ_W(λ) = g(λ′λ) in terms of a given function g, called the characteristic generator. An elliptically symmetric random variable X ∈ E_d(µ, Σ, g) is defined by a location-scale extension of W, so that it can be represented in the form X = µ + Σ^{1/2} W, where µ ∈ R^d and Σ is a variance-covariance matrix, and W is spherically distributed. The cumulants of X are just the cumulants of W multiplied by a constant, except for the mean, which is given by E X = µ. Let ψ_W(λ) = log φ_W(λ) = log g(λ′λ) be the cumulant-generating function of W. Series expansions of g and log g about the origin lead to expressions for the moments and cumulants of W through the T-derivative of the characteristic and log-characteristic generator functions. The relationship between the distribution F of R and g is given through the characteristic function of the uniform distribution on the sphere (see [38], Theorem 2.2, p. 29).
Using the stochastic representation (3) of W, cumulants can be calculated either via the generator function g or via the distribution of the generating variate R. We first use the generator function g for deriving the cumulants of W. Since the characteristic generator g is a function of one variable and represents the characteristic function at λ′λ, the series expansions of g and f = log g involve (−1)^j instead of i^j, with coefficients g_j = (−1)^j g^{(j)}(0) and f_j = (−1)^j f^{(j)}(0). To introduce a specific notation, let us denote the generator moment by ν_k = (−1)^k g^{(k)}(0) and the corresponding generator cumulant by ζ_k = (−1)^k f^{(k)}(0).
Although neither the generator moments nor the generator cumulants correspond to the moments and cumulants of a random variate, the generator cumulants can be expressed in terms of the generator moments (and vice versa) via Faà di Bruno's formula, which says

ζ_k = ∑_{r=1}^{k} (−1)^{r−1} (r − 1)! ∑_ℓ k! / (∏_{j=1}^{k} ℓ_j! (j!)^{ℓ_j}) ∏_{j=1}^{k} ν_j^{ℓ_j},   (9)

where the second sum is taken over all sequences (types) ℓ = (ℓ_1, . . . , ℓ_k), ℓ_j ≥ 0, such that ∑_j ℓ_j = r and ∑_j j ℓ_j = k.
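The generator cumulants relate to the generator moments exactly as ordinary cumulants relate to raw moments, so the conversion can be carried out with the standard moment-to-cumulant recursion, which is equivalent to Faà di Bruno's formula (9). A minimal sketch (our own illustration):

```python
from math import comb

def cumulants_from_moments(m):
    """Convert raw moments m[1..n] (m[0] unused) to cumulants, using the
    standard recursion kappa_j = m_j - sum_{k=1}^{j-1} C(j-1,k-1) kappa_k m_{j-k},
    equivalent to Faa di Bruno's formula (9).  Applied to the generator
    moments nu_j, it returns the generator cumulants zeta_j."""
    n = len(m) - 1
    kappa = [0.0] * (n + 1)
    for j in range(1, n + 1):
        kappa[j] = m[j] - sum(comb(j - 1, k - 1) * kappa[k] * m[j - k]
                              for k in range(1, j))
    return kappa

# check: raw moments of N(0,1) are 0, 1, 0, 3, 0, 15; all cumulants beyond
# the second must vanish
mom = [None, 0, 1, 0, 3, 0, 15]
print(cumulants_from_moments(mom)[1:])   # [0, 1, 0, 0, 0, 0]
```

For k = 2 the recursion reproduces ζ_2 = ν_2 − ν_1², and for k = 3 it gives ζ_3 = ν_3 − 3ν_1ν_2 + 2ν_1³, matching the partition sums in (9).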

Marginal Moments and Cumulants
Taking derivatives of the marginal characteristic function φ_{W_j}(λ_j) = g(λ_j²), i.e., of φ_W evaluated at λ = (0, . . . , λ_j, . . . , 0), one readily sees that these derivatives do not depend on j; also, the odd moments can be seen to be zero (see, e.g., [6]).
The following theorem (which is proven in Section 5) establishes the connection between (i) the moments of W j and the generator moments, as well as (ii) cumulants of W j and the generator cumulants.
Theorem 1. The odd moments of W_j are zero, and the even ones are given in terms of the generator moments by

µ_{W_j,2m} = E W_j^{2m} = (2m − 1)!! 2^m ν_m.   (10)

Again, the odd cumulants κ_{W,2m+1} of W_j are zero as well, and the even ones are given by

κ_{W_j,2m} = (2m − 1)!! 2^m ζ_m.   (11)

Further, the even cumulants can be expressed in terms of the generator moments as

κ_{W_j,2m} = (2m − 1)!! 2^m ∑_{r=1}^{m} (−1)^{r−1} (r − 1)! ∑_ℓ m! / (∏_{j=1}^{m} ℓ_j! (j!)^{ℓ_j}) ∏_{j=1}^{m} ν_j^{ℓ_j},

where the second sum is taken over all sequences ℓ = (ℓ_1, . . . , ℓ_m), ℓ_j ≥ 0, satisfying ∑_j ℓ_j = r and ∑_j j ℓ_j = m.

Moment and Cumulant Parameters
The fourth-order cumulant of the standardized W_j for each entry W_j, usually called kurtosis, has the form

κ_{W_j,4} / κ_{W_j,2}² = 3 (ν_2 − ν_1²) / ν_1².

In the above formula, we observe two quantities: one is the kurtosis (standardized generator cumulant, since ν_1 = ζ_1), i.e., κ_2 = ζ_2 / ζ_1², and the other is µ_2 = (ν_2 − ν_1²)/ν_1² = ν_2/ν_1² − 1, which contains the standardized generator moment ν_2/ν_1². Both quantities µ_2 and κ_2 have the same value (see Equation (13)). The kurtosis κ_2 is sometimes called the kurtosis parameter (see, for instance, [39]). Observe that the parameter µ_2 depends on the generator moment of the standardized variate only, so it can be called the moment parameter, as in [6].
More generally, for m ≥ 1, define the moment parameters and the cumulant parameters, respectively, as

µ_m = ν_m / ν_1^m − 1,   κ_m = ζ_m / ζ_1^m.   (15)

The normalized cumulants of W_j, in which the cumulant parameters κ_m are expressed in terms of the moment parameters µ_m (see Formula (17) below), connect our different notations. As will be seen in Section 3, using moment and cumulant parameters reduces the number of parameters for a spherically distributed random variate W significantly, while the number of characteristics is halved for an elliptically symmetric distribution. The cumulant parameters κ_m can be expressed in terms of the moment parameters µ_m in higher orders as well.

Corollary 1.
Under the assumptions of Theorem 1, the moments of the standardized W_j are zero for odd orders and, for even orders 2m, equal (2m − 1)!! (µ_m + 1), where µ_m is the moment parameter defined in (15). The cumulants of the standardized W_j are zero for odd orders and, for even orders 2m, equal (2m − 1)!! κ_m, where κ_m is the cumulant parameter in (15), such that

κ_m = ∑_{r=1}^{m} (−1)^{r−1} (r − 1)! ∑_ℓ m! / (∏_{j=1}^{m} ℓ_j! (j!)^{ℓ_j}) ∏_{j=1}^{m} (µ_j + 1)^{ℓ_j}.   (17)

Formula (17) is valid for all m ≥ 1, since, as seen earlier, µ_1 = 0 and κ_1 = 1.

Using the Representation RU
We now turn to the stochastic representation (3) and consider the scalar variable case W_j = R U_j. We first explore the even-order moments µ_{W,2m} = µ_{R,2m} · µ_{U_j,2m}. Using the result on the even-order moments E U_j^{2m} given in Lemma 1 of [29],

E U_j^{2m} = (2m − 1)!! / (d(d + 2) · · · (d + 2m − 2)) = (2m − 1)!! / (2^m G_{2m}(d)),

we get

µ_{W,2m} = E R^{2m} (2m − 1)!! / (2^m G_{2m}(d)).   (18)

This result, in combination with Equation (10), lets us express the generator moment ν_m in terms of the moments of the generating variate R as

ν_m = E R^{2m} / (2^{2m} G_{2m}(d)).   (20)

Alternatively, in terms of moments of W, this becomes ν_m = µ_{W,2m} / (2^m (2m − 1)!!). The dependence of the moment parameter µ_m on the moments of the generating variate R follows directly from the definition of µ_m and the above expression:

µ_m = d^m E R^{2m} / (2^m G_{2m}(d) (E R²)^m) − 1.   (21)

If U_j is one component of the vector U, then the stochastic representation (3) of W implies W_j = R U_j. Therefore, the cumulants of W_j can be expressed either by the moments or by the cumulants of R. Thus, the cumulants of W_j are connected to the cumulants of R in general. Since the cumulant Cum_n(W) is an n-th order cumulant of the product R U_1 of two independent variates, a direct method for its calculation can be based on conditional cumulants. Using this idea provides the following.
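The moment formula for the uniform base can be coded in a few lines; the sketch below (our own illustration) evaluates E U_1^{2m} = (2m − 1)!!/(d(d + 2)⋯(d + 2m − 2)) and checks it against the elementary circle case d = 2, where U_1 = cos θ.

```python
from math import prod

def moment_U(d, m):
    """E[U_1^{2m}] for U uniform on the sphere S^{d-1}: the denominator
    d(d+2)...(d+2m-2) equals 2^m (d/2)_m = 2^m G_{2m}(d)."""
    dfact = prod(range(2 * m - 1, 0, -2))          # (2m-1)!!
    return dfact / prod(d + 2 * i for i in range(m))

# checks on the circle (d = 2, U_1 = cos(theta)): E cos^2 = 1/2, E cos^4 = 3/8
print(moment_U(2, 1), moment_U(2, 2))
print(moment_U(3, 1))   # E U_1^2 = 1/d = 1/3 for any d
```

Multiplying `moment_U(d, m)` by a chosen E R^{2m} then gives the marginal moment µ_{W,2m} of (18).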

Example 2.
Consider, e.g., the fourth-order cumulant κ_{W,4} = κ_{RU_1,4} of, say, W_1 (since all W_j have identical distributions). Applying the formula for cumulants via moments, and using the independence of R and U_1, one obtains κ_{W,4} = E R⁴ E U_1⁴ − 3 (E R² E U_1²)². Now, using the particular values E U_1² = 1/d and E U_1⁴ = 3/(d(d + 2)) for the moments of U_1 gives the fourth cumulant explicitly in terms of the moments of R.

Lemma 1. Let n ∈ N and, in (3), assume E(R^n) < ∞. The even-order cumulants of a component W_j of a spherically distributed random variate W are given, in terms of the generating variate R, by a sum over partitions into even-order blocks (Formula (23)), where the summation is taken over all even-order cumulants of U_1, since the odd orders are zero, and where R^j{ℓ_j} corresponds to the block with cardinality ℓ_j, which includes the power R^j only (i.e., R^j is listed consecutively ℓ_j times).
Here, the cumulants κ U 1 ,j of U 1 are involved, which can be evaluated explicitly in particular cases to obtain the formula for κ W 1 ,2n .

Example 3.
Consider the fourth-order cumulant κ_{W,4} = κ_{RU_1,4} of W_j, say. Apply the cumulant formula (23); the cumulants of U_1 needed there are obtained from its moments: κ_{U_1,2} = 1/d and κ_{U_1,4} = 3/(d(d + 2)) − 3/d² = −6/(d²(d + 2)).
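The computation in Examples 2 and 3 is easy to verify numerically. A sketch (our own illustration): since all odd moments of a component of W = RU vanish, κ_{W,4} = m_4 − 3m_2² with m_k = E R^k E U_1^k.

```python
def kappa_W4(ER2, ER4, d):
    """Fourth cumulant of one component of W = R*U, from the moments of R,
    using E U_1^2 = 1/d and E U_1^4 = 3/(d(d+2))."""
    m2 = ER2 / d
    m4 = ER4 * 3 / (d * (d + 2))
    return m4 - 3 * m2 ** 2

# Gaussian check: if R^2 ~ chi^2_d then W is standard normal, so kappa_4 = 0;
# here E R^2 = d and E R^4 = d(d+2)
d = 5
print(kappa_W4(d, d * (d + 2), d))   # ~ 0 (up to floating-point rounding)
```

As a further consistency check, for the multivariate t with d = 3 and p = 10 (using E R² = dp/(p − 2) and E R⁴ = p²d(d + 2)/((p − 2)(p − 4)) from Section 3), the standardized fourth cumulant equals 3κ_2 = 3 · 2/(p − 4) = 1.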

Multivariate Moments and Cumulants
Rewriting the characteristic function using the series expansion of g(u) given in (8), the moments can be calculated using Equation (7) as

µ⊗_{W,2m} = (−1)^m D_λ^⊗2m φ_W(λ)|_{λ=0} = (ν_m / m!) D_λ^⊗2m (λ′λ)^m = c_{2m} ν_m,

say. Observe that the vector c_{2m} = D_λ^⊗2m (λ′λ)^m / m! does not depend on g. Using g_j = (−1)^j g^{(j)}(0) = ν_j, this derivation provides the connection between the generator moments and the marginal moments in (10), by using which one obtains

µ⊗_{W,2m} = (µ_{W_1,2m} / (2^m (2m − 1)!!)) D_λ^⊗2m (λ′λ)^m.

The same argument applies to the cumulant-generating function ψ_W(λ) = log φ_W(λ) with series expansion (8). Now, we obtain κ⊗_{W,2m} = (ζ_m / m!) D_λ^⊗2m (λ′λ)^m, and since ζ_m is connected to the m-th cumulant of a component of W by (11), we get

κ⊗_{W,2m} = (κ_{W_1,2m} / (2^m (2m − 1)!!)) D_λ^⊗2m (λ′λ)^m.

Recall that K_{r|ℓ} denotes particular partitions with size r and type ℓ. The above calculations are summarized in the following theorem, which has been stated without proof and used in [30]. One finds "commutator matrices", which change the order of the tensor products according to a given permutation, very useful. (For a brief description of the commutator matrices and the symmetrizer, the reader is referred to Section 5.1.)

Theorem 2.
Let m ∈ N and, in (3), assume E(R^{2m}) < ∞. The moments and cumulants of odd order of a spherically distributed W are zero. The moments of even order are given by

E W^⊗2m = (2m − 1)!! 2^m ν_m S_{d1,2m} vec^⊗m I_d = E W_1^{2m} S_{d1,2m} vec^⊗m I_d,

while the cumulants of even order are

κ⊗_{W,2m} = (2m − 1)!! 2^m ζ_m S_{d1,2m} vec^⊗m I_d = κ_{W_1,2m} S_{d1,2m} vec^⊗m I_d.

In terms of moment parameters, the standardized cumulants are

Cum_{2m}(Σ^{−1/2} W) = (2m − 1)!! κ_m S_{d1,2m} vec^⊗m I_d,

where Σ^{−1/2} = (1/√(2ν_1)) I_d. Formula (17) shows that κ_m is a polynomial in µ_k, k = 2:m. We denote this polynomial by a_m(µ_2, . . . , µ_m) = κ_m; hence

Cum_{2m}(Σ^{−1/2} W) = (2m − 1)!! a_m(µ_2, . . . , µ_m) S_{d1,2m} vec^⊗m I_d.

In particular, a_1 = 1.

Remark 1.
Both the higher-order moments µ⊗_{W,2m} and the cumulants κ⊗_{W,2m} involve either a commutator matrix L^{−1}_{m2} or a symmetrizer S_{d1,2m}; details are given in Section 5.1. Determining L^{−1}_{m2} may be cumbersome, although it is much lighter, from a computational point of view, than the actual calculation of the symmetrizer, which, for large dimension d and order 2m, is very time-, memory- and space-consuming. Alternatively, one can obtain an efficient calculation by evaluating D_λ^⊗2m (λ′λ)^m step by step with the T-derivative. For example, D_λ^⊗4 (λ′λ)² can be evaluated with 3 commutator matrices instead of the 24 that are necessary for obtaining S_{d1,4}. Further examples can be found in [30].

Remark 2.
Deriving D_λ^⊗6 (λ′λ)³ means finding 15 partitions K_{3|ℓ}, with size 3 and type ℓ = (0, 3, 0, 0, 0, 0), i.e., splitting the set 1:6 into three blocks of two elements each. The partitions are in canonical form and determine the corresponding commutator L^{−1}_{32}. Then, E W^⊗2m can be calculated directly from the expected values of the entries. The nonzero entries of E W^⊗2m are those where all terms in the product of entries of W have even degrees (cf. (18)).

Remark 3.
We compare the expected values of the entries of W^⊗2m to the expected values of those in Z^⊗2m, where Z is a standard normally distributed variate. The nonzero entries of the expected value E Z^⊗2m are products with even powers, E ∏_j Z_j^{2k_j} = ∏_j (2k_j − 1)!!, where ∑ k_{1:d} = m. At the same time, we have E Z^⊗2m = (2m − 1)!! S_{d1,2m} vec^⊗m I_d. Comparing this to E W^⊗2m = E W_1^{2m} S_{d1,2m} vec^⊗m I_d, we conclude that the higher-order moments of a spherical random vector W differ from the moments of a normal one Z only in the constant E W_1^{2m}. The major difference turns up in comparing the cumulants, since cumulants of order higher than 2 for Z are zero, while the cumulants of W are not.

Example 4. If the generating variate R is Gamma distributed with scale ϑ > 0 and shape α > 0, then E R^{2m} = ϑ^{2m} Γ(α + 2m)/Γ(α), and by (21) the kurtosis parameter becomes κ_2 = d(α + 2)(α + 3)/((d + 2)α(α + 1)) − 1, which does not depend on ϑ.
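The normal benchmark E Z^⊗2m = (2m − 1)!! S_{d1,2m} vec^⊗m I_d can be checked directly for m = 2 against Isserlis' theorem, E[Z_i Z_j Z_k Z_l] = δ_ij δ_kl + δ_ik δ_jl + δ_il δ_jk. A sketch (our own illustration; d = 2 is chosen only to keep the tensors small):

```python
import numpy as np
from itertools import permutations
from math import factorial

d, q = 2, 4
I = np.eye(d)

def symmetrize(v):
    """Apply the symmetrizer S_{d,q} to a d^q vector: average the
    corresponding tensor over all q! permutations of its axes."""
    T = v.reshape((d,) * q)
    return sum(T.transpose(p) for p in permutations(range(q))).ravel() / factorial(q)

# Isserlis: E[Z_i Z_j Z_k Z_l] as a d^4 vector
lhs = (np.einsum('ij,kl->ijkl', I, I)
       + np.einsum('ik,jl->ijkl', I, I)
       + np.einsum('il,jk->ijkl', I, I)).ravel()

# (2m-1)!! S_{d,4} vec I (x) vec I with m = 2
rhs = 3 * symmetrize(np.kron(I.ravel(), I.ravel()))
print(bool(np.allclose(lhs, rhs)))   # True
```

The agreement illustrates why, for spherical W, only the scalar factor E W_1^{2m} distinguishes E W^⊗2m from the Gaussian moment vector.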

Multivariate t and Skew-t Distributions
There have been several attempts to introduce asymmetry into multivariate distributions by "skewing" a given spherically or elliptically symmetric distribution such as the normal or a t-distribution. The multivariate skew-normal distribution was introduced by [42]; see also [43]. Ref. [44] derived the first four moments and discussed quadratic forms based on these; for further properties, see [45].
For the case of the multivariate skew-symmetric-t, we use the definition given in [31], which is different from that provided by [46]. Its moments were derived in [47]. For further discussion on these distributions and their applications, see [32][33][34][35].

Multivariate t-Distribution
The d-variate vector W is t-distributed, written as W ∈ Mt_d(p, 0, I_d), if it has the representation W = √p Z/S, where Z ∈ N_d(0, I_d) is standard normal and S² is χ² distributed with p degrees of freedom, independent of Z (see Example 2.5, Section 3.3.6, p. 85, of [38], and p. 32). Such a random variable W ∈ Mt_d(p, 0, I_d) is spherically distributed, since it has the representation (3), W = R U, where R = √p ‖Z‖/S is the generating variate. We note that R²/d ∼ F(d, p) has an F-distribution with d and p degrees of freedom (cf. also [48]). Let µ ∈ R^d and A a d × d matrix; then the linear transform X = µ + A W will be considered as X ∈ Mt_d(p, µ, Ω), where Ω = AA′; hence, X is an elliptically symmetric random variable. The characteristic function of X is quite involved, and we therefore utilize the stochastic representation (27) of W for deriving higher-order cumulants, including skewness and kurtosis, for X. For the even-order moments of the generating variate R, i.e., the even-order moments of the F-distribution with d and p degrees of freedom, with p > 2m, we have

E R^{2m} = p^m d(d + 2) · · · (d + 2m − 2) / ((p − 2)(p − 4) · · · (p − 2m)).

We also have the even-order moments of the components of the uniform distribution on the sphere S^{d−1},

E U_j^{2m} = (2m − 1)!! / (d(d + 2) · · · (d + 2m − 2)).

These two quantities provide the moments of the components of W as

E W_j^{2m} = (2m − 1)!! p^m / ((p − 2)(p − 4) · · · (p − 2m)).

Recall that all entries of W have the same distribution, so one can use the notation W_1 for a generic entry. The cumulants of even order of W_1 can be calculated with the help of the cumulant parameters κ_m. Consider the moment parameters µ_m first: from (21) and the moments of R above,

µ_m = (p − 2)^m / ((p − 2)(p − 4) · · · (p − 2m)) − 1.

Then, the cumulant parameter κ_m is calculated using the general expression (17). Theorem 2 then leads us to the following.

Lemma 2.
Let p > 2m and let W be multivariate t, W ∈ Mt_d(p, 0, I_d), with dimension d and degrees of freedom p. Then E W = 0, and both the moments and the cumulants of odd order are zero. The moments of even order are given by

E W^⊗2m = (2m − 1)!! p^m / ((p − 2)(p − 4) · · · (p − 2m)) S_{d1,2m} vec^⊗m I_d.

The covariance matrix of W has the diagonal form Var W = Σ = p/(p − 2) I_d, and the even-order standardized cumulants are

Cum_{2m}(Σ^{−1/2} W) = (2m − 1)!! κ_m S_{d1,2m} vec^⊗m I_d,

where the cumulant parameters κ_m are given by the expression (17) and the moment parameters by (29). In particular, this gives the kurtosis parameter κ_2 = 2/(p − 4).

Multivariate Skew-t Distribution
Let V be a d-dimensional multivariate skew-normal random vector, denoted by V ∈ SN_d(0, Ω, α), and let S² be an independently distributed χ² random variable with p degrees of freedom. We define a skew-t distributed random vector X by X = µ + √p V/S and denote it by St_d(µ, Ω, α, p). We use the notation R_p = √p/S, where S² is χ² distributed with p degrees of freedom, so that X = µ + R_p V. The skew-normal distribution is characterized by the skewness vector δ, which is given as δ = Ωα/(1 + α′Ωα)^{1/2}. Consider first the mean of V, E V = √(2/π) δ, and recall that the higher-order cumulants of a skew-normal are as given in [29]: κ⊗_{V,2} = vec(Ω − (2/π) δδ′) and, in general, for j > 2, κ⊗_{V,j} = κ_{|Z|,j} δ^⊗j, where κ_{|Z|,j} denotes the j-th cumulant of |Z| for Z standard normal. The moments of R_p (using the moments of a χ² distribution) are given by

E R_p^k = p^{k/2} 2^{−k/2} Γ((p − k)/2) / Γ(p/2),   p > k.

From the above results, the first two cumulant vectors of X are given by κ⊗_{X,1} = µ + √(2/π) E(R_p) δ, and the variance matrix of X is then

Var X = (p/(p − 2)) Ω − (2/π) (E R_p)² δδ′.

Cumulants of the Skew-t Distribution
The third-and fourth-order cumulants are given in the next two lemmas.

Lemma 4.
Let p > 4; then, the fourth-order cumulant of X ∈ St_d(µ, Ω, α, p) is given by the expression stated in the lemma. We conclude this section by providing a formula for the cumulant κ⊗_{X,n} of general order n. In the next theorem (and in later sections), when a symmetrizer S_{d1,n} is applied, a symmetrical equivalence form, denoted by =_s, is used.
If n > 2 and p > n, then the n-symmetrized version (by the symmetrizer S_{d1,n}) of κ⊗_{X,n} is a sum over partitions in which R_p^j{ℓ_j} corresponds to the block with cardinality ℓ_j, which includes the power R_p^j only (i.e., R_p^j is listed consecutively ℓ_j times).
One can also express κ⊗_{X,n} without the symmetrization.

Applications and Examples
The vector cumulant formulae provided in the previous section find immediate application in results discussed in the literature. For the subsequent discussion, let Y = Σ −1/2 (X − µ).

Cumulant-Based Measures of Skewness and Kurtosis
Ref. [29] has shown that knowledge of the third and fourth cumulant vectors allows one to retrieve all cumulant-based measures of skewness and kurtosis discussed in the literature. For example, for the index of skewness β_{1,d} of [7], one has β_{1,d}(Y) = ||κ⊗_{Y,3}||²; an analogous expression holds for the skewness vector s(Y) of [11]. As far as kurtosis indexes are concerned, corresponding expressions hold for the index of [7] and for the kurtosis matrix of [11]. For further examples concerning the indexes discussed in [8,9,14,15], and relations among these, see [29].
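As an illustration of recovering a classical index from a cumulant vector (our own sketch): for an elliptical law, the standardized fourth cumulant vector is κ⊗_{Y,4} = 3κ_2 S_{d1,4} vec I ⊗ vec I, and Mardia's kurtosis index satisfies β_{2,d} = d(d + 2) + ⟨vec I ⊗ vec I, κ⊗_{Y,4}⟩ = d(d + 2)(1 + κ_2). For the multivariate t, κ_2 = 2/(p − 4).

```python
import numpy as np
from itertools import permutations
from math import factorial

d, p = 3, 10
kappa2 = 2 / (p - 4)                     # kurtosis parameter of Mt_d(p, 0, I)

vecI = np.eye(d).ravel()
T = np.kron(vecI, vecI).reshape((d,) * 4)
# apply the symmetrizer S_{d,4} by averaging over all 4! axis permutations
Sv = sum(T.transpose(q) for q in permutations(range(4))).ravel() / factorial(4)
kappa4 = 3 * kappa2 * Sv                 # fourth standardized cumulant vector

# Mardia's index recovered from the d^4-dimensional cumulant vector
beta2 = d * (d + 2) + np.kron(vecI, vecI) @ kappa4
print(round(float(beta2), 6))            # d(d+2)(1 + kappa2) = 15*(4/3) = 20
```

The contraction with vec I ⊗ vec I picks out exactly the cumulants κ_{iijj}, which is why the index only feels the total excess kurtosis.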

Covariance Matrices
Using cumulant vectors up to the eighth order, one can retrieve covariance matrices and asymptotic results for statistics based on third and fourth cumulants. For example, in the case of elliptically symmetric distributions, one can obtain the covariance structure of H_3(Y), the third d-variate Hermite polynomial [49], in terms of the commutator matrices K_{H4,2} and K_{3!}; see Theorem 1 in [30] for details and further results on general symmetric and asymmetric distributions. These results can be exploited to obtain new weighted measures of skewness and kurtosis and, in conjunction with Theorem 2 in [30], to retrieve asymptotic distributions of several statistics based on the third and fourth cumulant vectors. Note that, in contrast to the results typically available in the literature, which are given in terms of expectations of functions of sample statistics, the explicit form of the model covariance matrices based on cumulant vectors allows straightforward computation of the asymptotic parameters.

Illustrative Numerical Examples
Example 5 (Uniform-Gamma). Consider again Example 4, where R is a gamma random variable with ϑ = 0.3 and α = 1. To generate a random d-vector from the uniform distribution on the unit sphere, we first generate a d-variate standard normal random vector Z and define U = Z/||Z||; we then use (3) to generate W. Figure 1, for d = 3, reports 100 random values each for U (transparent red) and W (solid blue); note that the values of W are much more concentrated towards the centre, with approximately 10% of the points falling outside the unit sphere. True and estimated values of the moments µ_{W,2m} and ν_m are computed using Formulae (18) and (20). The tables below report population values and sample estimates for different sample sizes. As one can see, these values are in good agreement for all sample sizes, except for very high-order moments, which require very large sample sizes to obtain reasonable approximations.
Consider now the case of the multivariate moments and cumulants of W, whose formulae are provided in Theorem 2. Set d = 3. Since the result rests on univariate moments and cumulants, the application of the theorem reduces to computing the term L^{−1}_{m2} vec^⊗m I_d (see Section 5.1 for L^{−1}_{m2}). The second univariate cumulant of W, viz. the variance, is given in Table 1, and vec I_3 provides the correct form for κ⊗_{W,2}.

Table 1. Moments µ_{W,2m} (see (18)). True and sample estimates.

Computations show that κ_{W,4} = 0.02808 and the kurtosis κ_{W,4}/κ²_{W,2} = 7.8, which corresponds to 3κ_2 as given in Table 2. Note that, for d = 3, the fourth cumulant vector has dimension d⁴ = 81; in general, its elements are not all distinct, and the distinct elements can be recovered by linear transformations (see [30] for further discussion). The cumulant vector of the distinct elements can be obtained using the formula for L^{−1}_{22} given in Remark 1. Direct estimation of the fourth cumulant vector from the simulated data confirms the theoretical results. The knowledge of κ⊗_{W,4} and Formulae (38) and (39) yields β_{2,d} = 54 (note also that β_{2,d} = d(d + 2)(κ_2 + 1) and Vec B(Y) = (13, 0, 0, 0, 13, 0, 0, 0, 13)).

Table 2. Generator moments ν_m (see (20)) and kurtosis parameter κ_2. True and sample estimates.

Proofs
Proof of Theorem 1. Differentiate g(λ_j²) to obtain the moments of W_j in terms of the generator moments. The derivatives of the compound function g(λ_j²) take the form

∂^n/∂λ_j^n g(λ_j²) = ∑_{k=0}^{[n/2]} b_n^k λ_j^{n−2k} g^{(n−k)}(λ_j²),   (42)

with scalar-valued coefficients b_n^k satisfying a simple recursion starting from b_n^0 = 1. At λ_j = 0, only the term with n − 2k = 0 survives. If n is odd, then the power of the last term in (42) is n − 2[n/2] = 1, so the derivative vanishes; otherwise, for n = 2m, the derivative equals b_{2m}^m g^{(m)}(0). Hence all odd moments are zero. As has been noticed by [6], b_n^k does not depend on g. Hence, we may choose, say, g(t) = exp(−t/2) (the characteristic generator of the standard normal, and thus a valid choice) and derive the coefficient b_{2m}^m, resulting in b_{2m}^m = 2^m (2m − 1)!!. Hence, for n = 2m, E W_j^{2m} = (−1)^m 2^m (2m − 1)!! g^{(m)}(0), and (10) follows by changing (−1)^m g^{(m)}(0) to the generator moment ν_m. Observe that the right-hand side does not depend on the index j, so that all marginals are distributed equally. Plugging the cumulant generator function f into (42), it is readily seen that the odd generator cumulants are also zero, and the even-order cumulant for each W_j is κ_{W_j,2m} = 2^m (2m − 1)!! ζ_m, which is (11). Formula (17) utilizes Faà di Bruno's Formula (9), connecting the generator cumulants ζ_m with the generator moments ν_m.
Proof of Corollary 1. The general Formula (17) is based on the formula (9) for the "cumulants" ζ_m in terms of the "moments" ν_j. Dividing by ν_1^m and using ∑_j j ℓ_j = m, each product ∏_j ν_j^{ℓ_j}/ν_1^m becomes ∏_j (ν_j/ν_1^j)^{ℓ_j}; changing the ratio ν_j/ν_1^j to µ_j + 1, the assertion (17) follows.

Proof of Theorem 2. The key point in the proof is understanding the form of the derivative D_λ^⊗2m (λ′λ)^m. First, we notice that (λ′λ)^m = ∏_{j=1}^m f_j(λ) with f_j(λ) = λ′λ; that is, we can use the general Leibniz rule and symmetrize by S_{d1,2m}. The only nonzero terms are those in which each factor f_j is differentiated exactly twice, with D_λ^⊗2 f_j(λ) = 2 vec I_d. There are (2m)!/2^m such terms in the sum (44), each contributing (after symmetrization) 2^m S_{d1,2m} vec^⊗m I_d; hence D_λ^⊗2m (λ′λ)^m = (2m)! S_{d1,2m} vec^⊗m I_d, and the assertion follows.

Proof of Lemma 3. Direct calculation, using κ_{|Z|,2} = 1 − 2/π and the coefficients of κ⊗_{X,3} given in (36), yields the expression for the third-order cumulant; the quantities in the coefficient of δ^⊗3 are given in (33), leading to the result of the lemma.

Commutator and Symmetrizer Matrices
Commutator matrices K_p change the order of the tensor products of vectors according to a permutation p; see [29] for more details. The commutator matrix L_ℓ corresponds to a type ℓ = (ℓ_1, . . . , ℓ_n), and the nonzero ℓ_j of ℓ define a sum of commutator matrices, where the summation is taken over all partitions K_{r|ℓ} of the set 1:n having type ℓ and size r.
Ref. [49] uses the symmetrizer matrix S_{d1,q} for the symmetrization of a T-product of q vectors with the same dimension d; that is, the result of S_{d1,4}(a_1 ⊗ a_2 ⊗ a_3 ⊗ a_4) is a vector of dimension d⁴ that is symmetric in the a_j. It can be computed as

S_{d1,q} = (1/q!) ∑_{p ∈ P_q} K_p,

where P_q denotes the set of all permutations of the numbers 1:q; the sum includes q! terms. The symmetrizer S_{d1,q} provides an orthogonal projection onto the subspace of R^{d^q} that is invariant under the transformation S_{d1,q}. A vector will be called symmetrical if it belongs to that subspace.
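For small d and q, the symmetrizer can be materialized explicitly as the average of the q! commutator (permutation) matrices. A sketch (our own illustration; the function name is ours):

```python
import numpy as np
from itertools import permutations
from math import factorial

def symmetrizer(d, q):
    """Explicit S_{d,q} as a d^q x d^q matrix: the average of the q!
    commutator matrices K_p.  Feasible only for small d and q."""
    n = d ** q
    S = np.zeros((n, n))
    for perm in permutations(range(q)):
        K = np.zeros((n, n))
        for idx in np.ndindex(*(d,) * q):
            row = np.ravel_multi_index(tuple(idx[j] for j in perm), (d,) * q)
            K[row, np.ravel_multi_index(idx, (d,) * q)] = 1.0
        S += K
    return S / factorial(q)

S = symmetrizer(2, 3)
# orthogonal projection: idempotent and symmetric; its rank equals the
# dimension of the symmetric subspace, C(d+q-1, q) = C(4, 3) = 4 here
print(bool(np.allclose(S @ S, S)), bool(np.allclose(S, S.T)))
```

Applying S to a_1 ⊗ a_2 ⊗ a_3 gives the same vector for any ordering of the a_j, which is the defining property used throughout Section 2.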