MaxSkew and MultiSkew: Two R Packages for Detecting, Measuring and Removing Multivariate Skewness

Skewness plays a relevant role in several multivariate statistical techniques. Sometimes it is used to recover data features, as in cluster analysis. In other circumstances, skewness impairs the performance of statistical methods, as in the Hotelling's one-sample test. In both cases, there is the need to check the symmetry of the underlying distribution, either by visual inspection or by formal testing. The R packages MaxSkew and MultiSkew address these issues by measuring, testing and removing skewness from multivariate data. Skewness is assessed by the third multivariate cumulant and its functions. The hypothesis of symmetry is tested either nonparametrically, with the bootstrap, or parametrically, under the normality assumption. Skewness is removed, or at least alleviated, by projecting the data onto appropriate linear subspaces. The usage of MaxSkew and MultiSkew is illustrated with the Iris dataset.


Introduction
Skewness of a random variable X satisfying E(|X|^3) < +∞ is often measured by its third standardized cumulant

γ1(X) = E[(X − µ)^3] / σ^3,

where µ and σ are the mean and the standard deviation of X. The squared third standardized cumulant β1(X) = γ1^2(X), known as Pearson's skewness, is also used. The numerator of γ1(X), that is E[(X − µ)^3], is the third cumulant (i.e. the third central moment) of X. Similarly, the third moment (cumulant) of a random vector is a matrix containing all moments (cumulants) of order three which can be obtained from the random vector itself. Statistical applications of the third moment include, but are not limited to: factor analysis (Bonhomme and Robin, 2009; Mooijaart, 1985), density approximation (Christiansen and Loperfido, 2014; Loperfido, 2019; Van Hulle, 2005), independent component analysis (Paajarvi and Leblanc, 2004), financial econometrics (De Luca and Loperfido, 2015; Elyasiani and Mansur, 2017), cluster analysis (Kaban and Girolami, 2000; Loperfido, 2013; Loperfido, 2015a; Loperfido, 2019; Tarpey and Loperfido, 2015), Edgeworth expansions (Kollo and von Rosen, 2005, page 189), portfolio theory (Jondeau and Rockinger, 2006), linear models (Mardia, 1971; Yin and Cook, 2003), likelihood inference (McCullagh and Cox, 1986), projection pursuit (Loperfido, 2018), time series (De Luca and Loperfido, 2015; Fiorentini et al., 2016) and spatial statistics (Genton et al., 2001; Kim and Mallick, 2003; Lark, 2015). The third cumulant of a d-dimensional random vector is a d^2 × d matrix with at most d(d + 1)(d + 2)/6 distinct elements. Since their number grows very quickly with the vector's dimension, it is convenient to summarize the skewness of the random vector itself with a scalar function of the third standardized cumulant, as for example Mardia's skewness (Mardia, 1970), partial skewness (Davis, 1980; Isogai, 1983; Mòri et al., 1993) or directional skewness (Malkovich and Afifi, 1973).
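As a quick numerical companion to the definition above, the third standardized cumulant of a sample can be computed in a few lines of base R. The helper name gamma1 is our own illustration and is not part of the packages:

```r
# Fisher's skewness gamma1 = E[(X - mu)^3] / sigma^3, estimated by
# replacing expectations with sample averages.
gamma1 <- function(x) {
  mu    <- mean(x)
  sigma <- sqrt(mean((x - mu)^2))  # population-style standard deviation
  mean((x - mu)^3) / sigma^3
}

gamma1(c(-2, -1, 0, 1, 2))  # a symmetric sample: skewness is 0
```

A right-skewed sample, such as c(0, 0, 0, 10), yields a positive value, while reflecting it around zero flips the sign.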
These measures have been mainly used for testing multivariate normality; they are invariant with respect to one-to-one affine transformations and reduce to Pearson's skewness in the univariate case. Loperfido [2015b] reviews their main properties and investigates their mutual connections. Skewness might hamper the performance of several multivariate statistical methods, as for example the Hotelling's one-sample test (Mardia, 1970; Everitt, 1979; Davis, 1982). Symmetry is usually pursued by means of power transformations, which unfortunately suffer from some serious drawbacks: the transformed variables are neither affine invariant nor robust to outliers (Hubert and Van der Veeken, 2008; Lin and Lin, 2010). Moreover, they might be neither easily interpretable nor jointly normal. Loperfido [2014, 2019] addressed these problems with symmetrizing linear transformations. The R packages MaxSkew and MultiSkew provide a unified treatment of multivariate skewness by detecting, measuring and alleviating skewness in multivariate data. Symmetry is assessed by either visual inspection or formal testing. Skewness is measured by the third multivariate cumulant and its scalar functions. Skewness is removed, or at least alleviated, by projecting the data onto appropriate linear subspaces. To the best of our knowledge, no other statistical packages compute bootstrap estimates of these skewness measures, the third multivariate cumulant, or linear projections alleviating skewness. The remainder of the paper is organized as follows. Section 2 reviews the basic concepts of multivariate skewness within the frameworks of third moments and projection pursuit. It also describes some skewness-related features of the Iris dataset. Section 3 illustrates the package MaxSkew. Section 4 describes the functions of MultiSkew related to symmetrization, third moments and skewness measures. Section 5 contains some concluding remarks and hints for improving the packages.

Third moment
The third multivariate moment, that is the third moment of a random vector, naturally generalizes to the multivariate case the third moment E(X^3) of a random variable X whose third absolute moment is finite. It is defined as

M3,x = E(x ⊗ x^⊤ ⊗ x),

where "⊗" denotes the Kronecker product (see, for example, Loperfido, 2015b). In the following, when referring to the third moment of a random vector, we shall implicitly assume that all appropriate moments exist. The third moment M3,x of x = (X1, ..., Xd)^⊤ contains d^3 elements of the form µijh = E(Xi Xj Xh), where i, j, h = 1, ..., d. Many elements are equal to each other, due to the identities

µijh = µihj = µjih = µjhi = µhij = µhji.

First, there are at most d distinct elements µijh where the three indices are equal to each other: µiii = E(Xi^3), for i = 1, ..., d. Second, there are at most d(d − 1) distinct elements µijh where only two indices are equal to each other: µiij = E(Xi^2 Xj), for i, j = 1, ..., d and i ≠ j. Third, there are at most d(d − 1)(d − 2)/6 distinct elements µijh where the three indices differ from each other: µijh = E(Xi Xj Xh), for i, j, h = 1, ..., d and i ≠ j ≠ h. Hence M3,x contains at most d(d + 1)(d + 2)/6 distinct elements.
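The element structure just described is easy to verify numerically. The following base-R sketch (our own helper third_moment, not a package function) estimates M3,x for a data matrix by averaging Kronecker products over its rows, and checks the permutation invariance of the µijh:

```r
# Sample third raw moment M3 = E(x x' (x) x), a d^2 x d matrix, estimated
# by averaging (x_r x_r') (x) x_r over the rows of a data matrix X.
third_moment <- function(X) {
  d  <- ncol(X)
  M3 <- matrix(0, d^2, d)
  for (r in seq_len(nrow(X))) {
    xr <- matrix(X[r, ], ncol = 1)
    M3 <- M3 + (xr %*% t(xr)) %x% xr
  }
  M3 / nrow(X)
}

# For d = 2 there are d^3 = 8 elements, of which at most
# d(d+1)(d+2)/6 = 4 are distinct.
X  <- cbind(c(1, 2, 3, 4), c(0, 1, 1, 2))
M3 <- third_moment(X)
# With this Kronecker layout, mu_{ijh} sits in row (i - 1) * d + h, column j:
mu <- function(i, j, h, d = 2) M3[(i - 1) * d + h, j]
mu(1, 2, 1) == mu(2, 1, 1)  # permutation invariance: both equal E(X1^2 X2)
```

The same indexing also recovers the moments directly: mu(1, 2, 1) equals mean(X[,1]^2 * X[,2]).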
Invariance of µijh = E(Xi Xj Xh) with respect to index permutations implies several symmetries in the structure of M3,x. First, M3,x might be regarded as d matrices Bi = E(Xi xx^⊤), for i = 1, ..., d, stacked on top of each other. Hence µijh is the element in the j-th row and in the h-th column of the i-th block Bi of M3,x. Similarly, M3,x might be regarded as d vectorized, symmetric matrices lined side by side: M3,x = (vec(B1), ..., vec(Bd)). Also, left singular vectors corresponding to positive singular values of the third multivariate moment are vectorized, symmetric matrices (Loperfido, 2015b). Finally, M3,x might be decomposed into the sum of at most d Kronecker products of symmetric matrices and vectors (Loperfido, 2015b). Many useful properties of multivariate moments are related to the linear transformation y = Ax, where A is a k × d real matrix. The first moment (that is the mean) of y is evaluated via matrix multiplication only: E(y) = Aµ. The second moment of y is evaluated using both matrix multiplication and transposition: M2,y = A M2,x A^⊤, where M2,x = E(xx^⊤) denotes the second moment of x. The third moment of y is evaluated using matrix multiplication, transposition and the Kronecker product: M3,y = (A ⊗ A) M3,x A^⊤ (Christiansen and Loperfido, 2014). In particular, the third moment of the linear combination c^⊤x, where c is a d-dimensional real vector, is the scalar (c ⊗ c)^⊤ M3,x c. The third central moment of x, also known as its third cumulant, is the third moment of x − µ, where µ is the mean of x:

K3,x = E[(x − µ) ⊗ (x − µ)^⊤ ⊗ (x − µ)].

It is related to the third moment via the identity

K3,x = M3,x − M2,x ⊗ µ − µ ⊗ M2,x − vec(M2,x) µ^⊤ + 2 (µµ^⊤) ⊗ µ.

The third cumulant allows for a better assessment of skewness by removing the effect of location on third-order moments. It becomes a null matrix under central symmetry, that is when x − µ and µ − x are identically distributed.
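The identity linking the third cumulant to the raw moments holds for any distribution, including the empirical one, so it can be checked numerically with sample moments. The snippet below is our own sketch (the helper name m3 is illustrative), comparing the third moment of the centered data with the right-hand side of the identity:

```r
# Check numerically that the third moment of x - mu equals
# M3 - M2 (x) mu - mu (x) M2 - vec(M2) mu' + 2 (mu mu') (x) mu,
# where all moments are sample moments of a skewed data matrix.
m3 <- function(X) {
  Reduce(`+`, lapply(seq_len(nrow(X)), function(r) {
    xr <- matrix(X[r, ], ncol = 1)
    (xr %*% t(xr)) %x% xr
  })) / nrow(X)
}

set.seed(42)
X  <- matrix(rexp(60), ncol = 3)             # 20 skewed observations, d = 3
mu <- matrix(colMeans(X), ncol = 1)
M2 <- crossprod(X) / nrow(X)                 # sample E(x x')

K3.direct   <- m3(sweep(X, 2, colMeans(X)))  # third moment of centered data
K3.identity <- m3(X) - M2 %x% mu - mu %x% M2 -
  matrix(as.vector(M2), ncol = 1) %*% t(mu) + 2 * (mu %*% t(mu)) %x% mu

max(abs(K3.direct - K3.identity))            # numerically zero
```

The discrepancy is of the order of machine precision, since the identity is algebraic rather than asymptotic.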
The third standardized moment (or cumulant) of the random vector x is the third moment of z = (Z1, ..., Zd)^⊤ = Σ^{−1/2}(x − µ), where Σ^{−1/2} is the inverse of the positive definite square root Σ^{1/2} of Σ = cov(x), which is assumed to be positive definite. It is often denoted by K3,z and is related to K3,x via the identity

K3,z = (Σ^{−1/2} ⊗ Σ^{−1/2}) K3,x Σ^{−1/2}.

The third standardized cumulant is particularly useful for removing the effects of location, scale and correlations on third-order moments. The mean and the variance of z are invariant with respect to orthogonal transformations, but the same does not hold for third moments: M3,z and M3,w will in general differ if w = Uz and U is a d × d orthogonal matrix. Projection pursuit is a multivariate statistical technique aimed at finding interesting low-dimensional data projections. It looks for the data projections which maximize the projection pursuit index, that is a measure of interestingness. Loperfido [2018] reviews the merits of skewness (i.e. the third standardized cumulant) as a projection pursuit index. Skewness-based projection pursuit is based on the multivariate skewness measure in Malkovich and Afifi [1973]. They defined the directional skewness of a random vector x as the maximum value attainable by β1(c^⊤x), where c is a nonnull, d-dimensional real vector and β1(Y) is the Pearson's skewness of the random variable Y:

β^D_{1,d}(x) = max_{c ∈ S^{d−1}} β1(c^⊤x),

with S^{d−1} being the set of d-dimensional real vectors of unit length. The name directional skewness reminds us that β^D_{1,d}(x) is the maximum skewness attainable by a projection of the random vector x onto a direction.
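In practice, z is computed from a data matrix with the symmetric inverse square root of the sample covariance. A minimal base-R sketch (our own standardize helper, not a package function):

```r
# z = S^{-1/2} (x - xbar), with S^{-1/2} obtained from the spectral
# decomposition of the (maximum likelihood) sample covariance S.
standardize <- function(X) {
  Xc <- sweep(X, 2, colMeans(X))
  S  <- crossprod(Xc) / nrow(X)
  e  <- eigen(S, symmetric = TRUE)
  S.inv.half <- e$vectors %*% diag(1 / sqrt(e$values), ncol(X)) %*% t(e$vectors)
  Xc %*% S.inv.half
}

set.seed(1)
Z <- standardize(matrix(rnorm(300), ncol = 3))
round(crossprod(Z) / nrow(Z), 10)  # identity matrix: zero mean, unit covariance
```

By construction the standardized data have exactly zero sample mean and identity sample covariance, up to rounding error.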
It admits a simple representation in terms of the third standardized cumulant:

β^D_{1,d}(x) = max_{c ∈ S^{d−1}} [(c ⊗ c)^⊤ K3,z c]^2.

Statistical applications of directional skewness include normality testing (Malkovich and Afifi, 1973), point estimation (Loperfido, 2010), independent component analysis (Loperfido, 2015b; Paajarvi and Leblanc, 2004) and cluster analysis (Kaban and Girolami, 2000; Loperfido, 2013; Loperfido, 2015a; Loperfido, 2018; Loperfido, 2019; Tarpey and Loperfido, 2015). There is a general consensus that an interesting feature, once found, should be removed (Huber, 1985; Sun, 2006). In skewness-based projection pursuit, this means removing skewness from the data using appropriate linear transformations. A random vector whose third cumulant is a null matrix is said to be weakly symmetric. Weak symmetry might be achieved by linear transformations when the third cumulant of x is not of full rank and the rows of the transformation matrix belong to the linear space generated by the right singular vectors associated with its null singular values. More formally, let x be a d-dimensional random vector whose third cumulant K3,x has rank d − k, with 0 < k < d. Also, let A be a k × d matrix whose rows span the null space of K3,x^⊤ K3,x. Then the third cumulant of Ax is a null matrix (Loperfido, 2014). Weak symmetry might be achieved even when this assumption is not satisfied: any random vector with finite third-order moments and at least two components admits a projection which is weakly symmetric (Loperfido, 2014).
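What directional skewness measures can be illustrated with a crude search over candidate directions. MaxSkew uses a far more efficient algorithm; the helpers below (beta1, directional_skewness) are purely illustrative:

```r
# Squared skewness of a univariate sample (Pearson's beta1).
beta1 <- function(y) {
  mu <- mean(y); s <- sqrt(mean((y - mu)^2))
  (mean((y - mu)^3) / s^3)^2
}

# Crude approximation of directional skewness: the largest beta1 over the
# coordinate axes plus many random unit-length directions.
directional_skewness <- function(X, tries = 2000) {
  d <- ncol(X)
  C <- rbind(diag(d), matrix(rnorm(tries * d), ncol = d))
  C <- C / sqrt(rowSums(C^2))   # normalize each candidate direction
  max(apply(C, 1, function(c0) beta1(X %*% c0)))
}

set.seed(7)
X <- cbind(rexp(200), rnorm(200))   # one skewed and one symmetric component
directional_skewness(X) >= beta1(X[, 1])   # TRUE: the maximum dominates margins
```

Since the coordinate axes are always among the candidates, the search value is never below the largest marginal squared skewness.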
The appropriateness of a linear transformation purported to remove or alleviate skewness might be assessed with measures of multivariate skewness, which should be significantly smaller in the transformed data than in the original ones. Mardia [1970] summarized the multivariate skewness of the random vector x with the scalar measure

β^M_{1,d}(x) = E{[(x − µ)^⊤ Σ^{−1} (y − µ)]^3},

where x and y are two d-dimensional, independent and identically distributed random vectors with mean µ and covariance Σ. It might be represented as the squared norm of the third standardized cumulant:

β^M_{1,d}(x) = ‖K3,z‖^2.

It is invariant with respect to one-to-one affine transformations: β^M_{1,d}(Ax + b) = β^M_{1,d}(x) for any nonsingular d × d real matrix A and any d-dimensional real vector b. Mardia's skewness is by far the most popular measure of multivariate skewness. Its statistical applications include multivariate normality testing (Mardia, 1970) and assessment of robustness of MANOVA statistics (Davis, 1980). Another scalar measure of multivariate skewness is

β^P_{1,d}(x) = E{[(x − µ)^⊤ Σ^{−1} (y − µ)] [(x − µ)^⊤ Σ^{−1} (x − µ)] [(y − µ)^⊤ Σ^{−1} (y − µ)]},

where x and y are the same as above. It has been independently proposed by several authors (Davis, 1980; Isogai, 1983; Mòri et al., 1993). Loperfido [2015b] named it partial skewness to remind us that β^P_{1,d}(x) does not depend on moments of the form E(Zi Zj Zh) when i, j, h differ from each other, as becomes apparent when representing it as a function of the third standardized cumulant:

β^P_{1,d}(x) = ‖K3,z^⊤ vec(Id)‖^2.

Partial skewness is by far less popular than Mardia's skewness. Like the latter measure, however, it has been applied to multivariate normality testing (Henze, 1997a; Henze, 1997b; Henze et al., 2003) and to the assessment of the robustness of MANOVA statistics (Davis, 1980). Mòri et al. [1993] proposed to measure the skewness of the d-dimensional random vector x with the vector γ1,d(x) = E[(z^⊤z) z], where z is the standardized version of x. This vector-valued measure of skewness might be regarded as a weighted average of the standardized vector z, with more weight placed on the outcomes furthest away from the mean.
It is location invariant, admits the representation γ1,d(x) = K3,z^⊤ vec(Id), and its squared norm is the partial skewness of x. It coincides with the third standardized cumulant of a random variable in the univariate case, and with the null d-dimensional vector when the underlying distribution is centrally symmetric. Loperfido [2015a] applied it to model-based clustering.
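For completeness, Mardia's sample skewness and its large-sample test can be sketched in base R via the well-known double-sum formula. This mardia_skewness helper is our own illustration, not the package implementation (SkewMardia, described below, should be preferred in practice):

```r
# Mardia's sample skewness b_{1,d} = (1/n^2) * sum_{i,j} g_ij^3, where
# g_ij = (x_i - xbar)' S^{-1} (x_j - xbar). Under normality, n * b / 6 is
# asymptotically chi-square with d(d+1)(d+2)/6 degrees of freedom.
mardia_skewness <- function(X) {
  n  <- nrow(X); d <- ncol(X)
  Xc <- sweep(X, 2, colMeans(X))
  S  <- crossprod(Xc) / n           # maximum likelihood covariance estimate
  G  <- Xc %*% solve(S, t(Xc))      # n x n matrix of the g_ij
  b1 <- sum(G^3) / n^2
  df <- d * (d + 1) * (d + 2) / 6
  list(skewness = b1,
       pvalue   = pchisq(n * b1 / 6, df, lower.tail = FALSE))
}
```

In the univariate case b_{1,1} reduces to the squared sample skewness, which provides a quick sanity check on the implementation.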
We shall illustrate the role of skewness with the Iris dataset contained in the R package datasets: iris{datasets}. We first load the data with R> data(iris). The help command R> help(iris) and the structure command R> str(iris) provide the following information about this dataset. It contains the measurements in centimeters of four variables on 150 iris flowers: sepal length, sepal width, petal length and petal width. There is also a factor variable (Species) with three levels: setosa, versicolor and virginica. There are 50 rows for each species, so the dataset is a data frame object containing 150 units and 5 variables. The groups "versicolor" and "virginica" are quite close to each other, and well separated from "setosa". The data are markedly skewed, but skewness reduces within each group, as exemplified by the histograms of petal length in all flowers (Figure 2(a)) and in those belonging to the "setosa" group (Figure 2(b)). Both facts motivated researchers to model the dataset with mixtures of three normal components (see, for example, Fruhwirth-Schnatter, 2006, Subsection 6.4.3). However, Korkmaz et al. [2014] showed that the normality hypothesis should be rejected at the 0.05 level in the "setosa" group, while nonnormality is undetected by Mardia's skewness. We shall use MaxSkew and MultiSkew to answer the following questions about the Iris dataset.
1. Is skewness really inept at detecting nonnormality of the variables recorded in the "setosa" group?
2. Does skewness help in recovering the cluster structure, when information about the group memberships is removed?
3. Can skewness be removed via linear projections, and are the projected data meaningful?
The above questions will be addressed within the frameworks of projection pursuit, normality testing, cluster analysis and data exploration.

MaxSkew
The package MaxSkew [Franceschini and Loperfido, 2017a] finds orthogonal data projections with maximal skewness. The first data projection in the output is the most skewed among all data projections. The second data projection in the output is the most skewed among all data projections orthogonal to the first one, and so on. Loperfido [2019] motivates this method within the framework of model-based clustering. The package implements the algorithm described in Loperfido [2018] and may be downloaded with the command R> install.packages("MaxSkew"). The package is attached with the command R> library(MaxSkew). The packages MaxSkew and MultiSkew require the dataset to be a data matrix object, so we transform the iris data frame accordingly: R> iris.m <- data.matrix(iris). The class of the result can then be checked, for example with R> class(iris.m). The package MaxSkew has three functions, two of which are internal. The usage of the main function is R> MaxSkew(data, iterations, components, plot), where data is a data matrix object, iterations (the number of required iterations) is a positive integer, components (the number of orthogonal projections maximizing skewness) is a positive integer smaller than the number of variables, and plot is a dichotomous variable: TRUE/FALSE. If plot is set equal to TRUE (FALSE), the scatterplot of the projections maximizing skewness appears (does not appear) in the output. The output includes a matrix of projected data, whose i-th row represents the i-th unit, while the j-th column represents the j-th projection. The output also includes the multiple scatterplot of the projections maximizing skewness. As an example, we call the function R> MaxSkew(iris.m[,1:4], 50, 2, TRUE).
We have used only the first four columns of the iris.m data matrix object, because the last column is a label. As a result, we obtain a matrix with 150 rows and 2 columns containing the projected data, together with a multiple scatterplot. Figure 3(a) shows the scatterplot of the projected data. The connections between skewness-based projection pursuit and cluster analysis, as implemented in MaxSkew, have been investigated by several authors (Kaban and Girolami, 2000; Loperfido, 2013; Loperfido, 2015a; Loperfido, 2018; Loperfido, 2019; Tarpey and Loperfido, 2015). For the Iris dataset, this connection is well illustrated by the scatterplot of the two most skewed, mutually orthogonal projections, with different colors denoting the group memberships (Figure 3(b)). The plot is obtained with the following commands:

R> attach(iris)
R> iris.projections <- MaxSkew(iris.m[,1:4], 50, 2, TRUE)
R> iris.projections.species <- cbind(iris.projections, iris$Species)
R> pairs(iris.projections.species[,1:2], col = c("red","green","blue")[as.numeric(Species)])
R> detach(iris)

The scatterplot clearly shows the separation of "setosa" from "virginica" and "versicolor", whose overlapping is much less marked than in the original variables. The scatterplot is very similar, up to rotation and scaling, to those obtained from the same data by Friedman and Tukey [1974] and Hui and Lindsay [2010]. Mardia's skewness is unable to detect nonnormality in the "setosa" group, thus raising the question of whether any skewness measure is able to detect such nonnormality (Korkmaz et al., 2014). We shall address this question using skewness-based projection pursuit as a visualization tool. Figure 4 contains the scatterplot of the two most skewed, mutually orthogonal projections obtained from the four variables recorded from setosa flowers only.
It clearly shows the presence of a dense, elongated cluster, which is inconsistent with the normality assumption. Formal testing for excess skewness in the "setosa" group will be discussed in the following sections.

MultiSkew

The package MultiSkew may be downloaded with the command R> install.packages("MultiSkew") and then attached with the command R> library(MultiSkew).
Since the package MultiSkew depends on the package MaxSkew, the latter is loaded together with the former.

MinSkew
The function R> MinSkew(data, dimension) alleviates sample skewness by projecting the data onto appropriate linear subspaces and implements the method in Loperfido [2014]. It requires two input arguments: data (a data matrix object) and dimension (the number of required projections), which must be an integer between 2 and the number of variables in the data matrix. The output has two values: Linear (the linear function of the variables) and Projections (the projected data). Linear is a matrix whose numbers of rows and columns equal the number of variables and the number of projections, respectively. Projections is a matrix whose numbers of rows and columns equal the number of observations and the number of projections. We call the function using our data matrix object, R> MinSkew(iris.m[,1:4], 2), and obtain the matrix Linear as a first output. With the commands

R> attach(iris)
R> projections.species <- cbind(Projections, iris$Species)
R> pairs(projections.species[,1:2], col = c("red","green","blue")[as.numeric(Species)])
R> detach(iris)

we obtain the multiple scatterplot of the two projections (Figure 5). The points are clearly reminiscent of bivariate normality, and the groups markedly overlap with each other. Hence the projection removes the group structure as well as skewness. This result might be useful for a researcher interested in the features which are common to the three species. The histograms of the MinSkew projections are reminiscent of univariate normality, as can be seen from Figure 6(a) and Figure 6(b).

Third
This section describes the function which computes the third multivariate moment of a data matrix. Some general information about the third multivariate moment of both theoretical and empirical distributions is reviewed in Loperfido [2015b]. The name of the function is Third, and its usage is R> Third(data, type). As before, data is a data matrix object, while type may be "raw" (the third raw moment), "central" (the third central moment) or "standardized" (the third standardized moment). The output of the function, called ThirdMoment, is a matrix containing all moments of order three which can be obtained from the variables in data. We compute the third raw moments of the iris variables with the command R> Third(iris.m[,1:4], "raw"). Similarly, we compute the third central moments by using "central" instead of "raw": R> Third(iris.m[,1:4], "central").
The output which appears in the console is labelled [1] "Third Moment". The largest entries in the matrix are the first element of the first row and the last element in the last row. This pattern is typical of outcomes from random vectors with skewed, independent components (see, for example, Loperfido, 2015b). Hence the two most skewed projections may well be mutually independent. All elements in the matrix are very close to zero, as might be better appreciated by comparing them with those in the third standardized cumulant of the original variables. This pattern is typical of outcomes from weakly symmetric random vectors (Loperfido, 2014).

Skewness measures
The package MultiSkew has four other functions, all of which compute skewness measures. The first one is R> FisherSkew(data); it computes Fisher's measure of skewness, that is the third standardized moment of each variable in the dataset. The usage of the function shows that there is only one input argument: data (a data matrix object). The output of the function is a dataframe, whose name is tab, containing Fisher's skewness of each variable. The function R> PartialSkew(data) computes the vector-valued skewness measure introduced in Mòri et al. [1993]. The input is still a data matrix, while the values in output are three objects: Vector, Scalar and pvalue. The first is the skewness measure, with as many elements as there are variables in the dataset used as input. The second is the squared norm of Vector. The last is the probability of observing a value of Scalar greater than the observed one, when the data are normally distributed and the sample size is large enough. We apply this function to our dataset: R> PartialSkew(iris.m[,1:4]). The function R> SkewMardia(data) computes the multivariate skewness introduced in Mardia [1970], that is the sum of squared elements in the third standardized cumulant of the data matrix. The output of the function is the squared norm of the third cumulant of the standardized data (MardiaSkewness) and the probability of observing a value of MardiaSkewness greater than the observed one, when the data are normally distributed and the sample size is large enough (pvalue). The function SkewBoot performs bootstrap inference for multivariate skewness measures. It computes the bootstrap distribution, its histogram and the corresponding p-value of the chosen measure of multivariate skewness, using a given number of bootstrap replicates. The function calls the function MaxSkew contained in the MaxSkew package; the number of iterations required by the function MaxSkew is set equal to 5. The function's usage is R> SkewBoot(data, replicates, units, type).
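To fix ideas about bootstrap p-values for skewness statistics, here is a self-contained parametric-bootstrap sketch under normality. Both the resampling scheme and all names (boot_pvalue, marg_skew) are our own illustrative assumptions, not SkewBoot's actual algorithm, which is documented with the package:

```r
# Hypothetical parametric bootstrap under normality: fit N(mu, S) to the data,
# simulate B datasets of the same size, recompute the statistic on each, and
# report the fraction of simulated statistics at least as large as the observed.
boot_pvalue <- function(X, stat, B = 200) {
  n <- nrow(X); d <- ncol(X)
  mu <- colMeans(X)
  e  <- eigen(cov(X), symmetric = TRUE)
  S.half <- e$vectors %*% diag(sqrt(e$values), d) %*% t(e$vectors)
  obs  <- stat(X)
  reps <- replicate(B, {
    Z <- matrix(rnorm(n * d), n, d) %*% S.half   # N(0, S) sample
    stat(sweep(Z, 2, mu, `+`))                   # shift to mean mu
  })
  mean(reps >= obs)
}

# Example statistic: largest absolute marginal skewness.
marg_skew <- function(X) max(abs(apply(X, 2, function(x) {
  mu <- mean(x); mean((x - mu)^3) / sqrt(mean((x - mu)^2))^3
})))

set.seed(101)
boot_pvalue(cbind(rexp(100), rnorm(100)), marg_skew, B = 99)  # p-value in [0, 1]
```

For clearly skewed data such as the exponential component above, the observed statistic typically exceeds nearly all replicates simulated under normality, so the p-value is small.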
It requires four inputs: data (the usual data matrix object), replicates (the number of bootstrap replicates), units (the number of rows in the data matrices sampled from the original data matrix object) and type. The latter may be "Directional", "Partial" or "Mardia" (three different measures of multivariate skewness). If type is set equal to "Directional" or "Mardia", units is an integer greater than the number of variables. If type is set equal to "Partial", units is an integer greater than the number of variables augmented by one. The values in output are three: histogram (a plot of the above-mentioned bootstrap distribution), Pvalue (the p-value of the chosen skewness measure) and Vector (the vector containing the bootstrap replicates of the chosen skewness measure). For the reproducibility of the results, before calling the function SkewBoot we type R> set.seed(101), and then R> SkewBoot(iris.m[,1:4], 10, 11, "Directional"). We obtain the vector of bootstrap replicates and the p-value in output, together with the histogram of bootstrapped directional skewness (Figure 7). We then call the function SkewBoot again, first setting type equal to "Mardia" and then equal to "Partial":

R> set.seed(101)
R> SkewBoot(iris.m[,1:4], 10, 11, "Mardia")

We again obtain a vector of bootstrap replicates, a p-value and a histogram in output. We shall now compute the Fisher's skewnesses of the four variables in the "setosa" group with R> FisherSkew(iris.m[1:50,1:4]). The p-value is highly significant, clearly suggesting the presence of skewness and hence of nonnormality. We conjecture that the normality test based on Mardia's skewness is less powerful when skewness is present only in a few variables, either original or projected, while the remaining variables might be regarded as background noise. We hope to either prove or disprove this conjecture by both theoretical results and numerical experiments in future works.

Conclusions
MaxSkew and MultiSkew are two R packages aimed at detecting, measuring and removing multivariate skewness. They also compute the three main skewness measures. The function SkewBoot computes the bootstrap p-value corresponding to the chosen skewness measure. Skewness removal might be achieved with the function MinSkew. The function Third, which computes the third moment, plays a role whenever the researcher compares the third sample moment with the expected third moment under a given model, in order to get a better insight into the model's fit. The major drawback of both MaxSkew and MultiSkew is that they address skewness by means of third-order moments only. In the first place, third-order moments may not exist even if the distribution is skewed, as happens for the skew-Cauchy distribution. In the second place, the third moment of a random vector may be a null matrix even when the random vector itself is asymmetric. In the third place, third-order moments are not robust to outliers. We are currently investigating these problems.