Article

Mahalanobis Distances on Factor Model Based Estimation

Department of Economics and Statistics, Linnaeus University, 351 95 Växjö, Sweden
Econometrics 2020, 8(1), 10; https://doi.org/10.3390/econometrics8010010
Submission received: 20 August 2019 / Revised: 28 February 2020 / Accepted: 2 March 2020 / Published: 5 March 2020

Abstract

A factor model based covariance matrix is used to build a new form of Mahalanobis distance. The distribution and related properties of the new Mahalanobis distances are derived. A new type of Mahalanobis distance based on the separated parts of the factor model is defined. The contamination effects of outliers detected by the newly defined Mahalanobis distances are also investigated. An empirical example indicates that the newly proposed separated Mahalanobis distances outperform the original sample Mahalanobis distance.

1. Introduction

The measurement of the distance between two individual observations is a fundamental topic in several research areas. The most commonly used statistics for such measurements are the Euclidean distance and the Mahalanobis distance (MD). By taking the covariance matrix into account, the MD is better suited to the analysis of correlated data. Thus, the MD has become one of the most commonly implemented tools for outlier detection since it was introduced by Mahalanobis (1936). Through decades of development, its use has been extended to broader applications, such as assessing multivariate normality (Holgersson and Shukur 2001; Mardia 1974; Mardia et al. 1980), classification (Fisher 1940; Pavlenko 2003) and outlier detection (Mardia et al. 1980; Wilks 1963).
However, as the number of variables increases, the estimation of an unconstrained covariance matrix becomes inefficient and computationally expensive, and so does the estimated sample MD. Thus, a more efficient and convenient method of estimating the covariance matrix is needed. This can be achieved for some structured data. For instance, in factor-structured data, the correlated observations contain common information that can be represented by fewer variables using dimension reduction tools such as the factor model. These new variables are linear combinations of the original variables in the data set. The factor model has the advantage of summarising the internal information of the data and expressing it with far fewer latent factors ($k \ll p$) in a linear model with residuals. The idea of applying a factor model to structured data is widely used in different research areas. For example, in arbitrage pricing theory (Ross 1976), the asset return is assumed to follow a factor structure. In consumer theory, factor analysis shows that utility maximisation can be described by at most three factors (Gorman et al. 1981; Lewbel 1991). In disaggregate business cycle analysis, the factor model is used to identify the common and specific shocks that drive a country's economy (Bai 2003; Gregory and Head 1999).
The robustness of the factor model is also of substantial concern. Pison et al. (2003) proposed a robust version of the factor model that is highly resistant to outliers. Yuan and Zhong (2008) defined different types of outliers as good or bad leverage points; their research set a criterion for distinguishing types of outliers based on their effects on the MLE and likelihood ratio statistics. Mavridis and Moustaki (2008) implemented a forward search algorithm as a robust estimation method for factor model based outlier detection. Pek and MacCallum (2011) presented several measures of case influence applicable in SEM and illustrated their implementation, presentation, and interpretation. Flora et al. (2012) provided a comprehensive review of the data screening and assumption testing issues relevant to exploratory and confirmatory factor analysis, together with practical advice for conducting analyses that are sensitive to these concerns. Chalmers and Flora (2015) implemented these utilities in a statistical software package. However, the MD is typically used in its classical form, or other measures are implemented instead. Very little research has been concerned with a modified MD that can improve robustness in measuring extreme values.
We propose a new type of factor model based MD. The first two moments and the corresponding distributions of the new MD are derived. Furthermore, a new way of detecting outliers based on the factor model MD is investigated. An empirical example shows promising results for the new type of MD when using the classic MD as a benchmark.
The structure of this paper is organised as follows: Section 2 defines the classic and the newly built MDs with their basic properties. Section 3 derives the distributional properties of the new MDs. We investigate a new method for detecting the source of outliers in Section 4. An empirical example based on the new MDs is presented in Section 5.

2. MD for Factor–Structured Variables

We introduce several types of MDs built on factor models. These factor models have either known or unknown mean and covariance matrix; therefore, different estimation approaches are used accordingly.

2.1. MDs with Known Mean and Variance

Definition 1.
Let $\mathbf{x}_{p\times 1} \sim N(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ be a random vector with known mean $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$. The factor model of $\mathbf{x}_{p\times 1}$ is
$$\mathbf{x}_{p\times 1} - \boldsymbol{\mu}_{p\times 1} = \mathbf{L}_{p\times m}\mathbf{F}_{m\times 1} + \boldsymbol{\varepsilon}_{p\times 1}, \tag{1}$$
where $m$ is the number of factors in the model, $\mathbf{x}$ a $p$-dimensional observation vector ($p > m$), $\mathbf{L}$ the factor loading matrix, $\mathbf{F}$ an $m\times 1$ vector of common factors and $\boldsymbol{\varepsilon}$ an error term.
By Definition 1, we can express the covariance matrix of $\mathbf{x}_{p\times 1}$ in factor model form as follows:
Proposition 1.
Let $\boldsymbol{\varepsilon} \sim N(\mathbf{0}, \boldsymbol{\Psi})$, where $\boldsymbol{\Psi}$ is a diagonal matrix, and let $\mathbf{F} \sim N(\mathbf{0}, \mathbf{I})$ be distributed independently of $\boldsymbol{\varepsilon}$, so that $\mathrm{Cov}(\boldsymbol{\varepsilon}, \mathbf{F}) = \mathbf{0}$. The covariance structure of $\mathbf{x}$ is then
$$\mathrm{Cov}(\mathbf{x}) = \boldsymbol{\Sigma}_f = E\left[(\mathbf{L}\mathbf{F} + \boldsymbol{\varepsilon})(\mathbf{L}\mathbf{F} + \boldsymbol{\varepsilon})'\right] = \mathbf{L}\mathbf{L}' + \boldsymbol{\Psi},$$
where $\boldsymbol{\Sigma}_f$ is the covariance matrix of $\mathbf{x}$ under the factor model assumption. The joint distribution of the components of the factor model is
$$\begin{pmatrix} \mathbf{L}\mathbf{F} \\ \boldsymbol{\varepsilon} \end{pmatrix} \sim N\left( \begin{pmatrix} \mathbf{0} \\ \mathbf{0} \end{pmatrix}, \begin{pmatrix} \mathbf{L}\mathbf{L}' & \mathbf{0} \\ \mathbf{0} & \boldsymbol{\Psi} \end{pmatrix} \right).$$
Definition 1 implies that the rank of $\mathbf{L}\mathbf{L}'$ satisfies $r(\mathbf{L}\mathbf{L}') = m \leq p$. Thus, the inverse of the singular matrix $\mathbf{L}\mathbf{L}'$ is not unique; this will be discussed in Section 3. With Definition 1, we define the MD on a factor model as follows:
Definition 2.
Under the assumptions in Definition 1, the MD for the factor model with known mean $\boldsymbol{\mu}$ is
$$D_{ii}(\mathbf{x}_i, \boldsymbol{\mu}, \boldsymbol{\Sigma}_f) = (\mathbf{x}_i - \boldsymbol{\mu})'\,\boldsymbol{\Sigma}_f^{-1}\,(\mathbf{x}_i - \boldsymbol{\mu}),$$
where $\boldsymbol{\Sigma}_f$ is defined as in Proposition 1.
The way of estimating the covariance matrix from a factor model differs from the classic approach, i.e., the sample covariance matrix $\mathbf{S} = (n-1)^{-1}\mathbf{X}'\mathbf{X}$ of the demeaned sample $\mathbf{X}$. This alternative offers several improvements over the classic method. One is that the dimension is much smaller with the factor model while most of the information is retained. Thus, computations based on the estimated covariance matrix become more efficient, for example, the calculation of an MD based on the covariance matrix from a factor model.
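As a concrete illustration, the following sketch builds the factor model covariance and the MD of Definition 2 on simulated data; the dimensions, loadings and error variances are hypothetical choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
p, m, n = 10, 3, 200

# Hypothetical factor structure: loadings L (p x m) and diagonal Psi.
L = rng.normal(size=(p, m))
Psi = np.diag(rng.uniform(0.5, 1.5, size=p))
Sigma_f = L @ L.T + Psi                        # Sigma_f = LL' + Psi (Proposition 1)

# Simulate x_i = L F_i + eps_i with known mean mu = 0.
F = rng.normal(size=(n, m))
eps = rng.multivariate_normal(np.zeros(p), Psi, size=n)
X = F @ L.T + eps

# MD of Definition 2 with known mean and factor-model covariance.
Sf_inv = np.linalg.inv(Sigma_f)
D = np.einsum("ij,jk,ik->i", X, Sf_inv, X)     # D_ii = x_i' Sigma_f^{-1} x_i
print(D.mean())                                # close to E(D) = p = 10
```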
From Definition 1, the factor model consists of two parts, the systematic part $\mathbf{L}\mathbf{F}_i$ and the residual part $\boldsymbol{\varepsilon}_i$. These two parts make it possible to build covariance matrices for the two independent parts separately. The separately estimated covariance matrices provide deeper insight into a data set, since we can build an outlier detection tool for each part. The systematic part usually contains information necessary for the currently fitted model; therefore, outliers from this part should be considered carefully, to avoid discarding abnormal but meaningful information. On the other hand, outliers from the error part are less essential to the model and can be discarded.
We now define the MDs for the two separate parts of the factor model, i.e., for $\mathbf{L}\mathbf{F}_i$ and $\boldsymbol{\varepsilon}_i$ individually. Proposition 1 implies $E(\mathbf{L}\mathbf{F}) = \mathbf{L}\,E(\mathbf{F}) = \mathbf{0}$, which simplifies the MD on $\mathbf{L}\mathbf{F}$. We define the MDs of the separate terms as follows:
Definition 3.
Under the conditions in Definitions 1 and 2, the MD of the common factor part $\mathbf{F}$ is
$$D(\mathbf{F}_i, \mathbf{0}, \mathbf{I}) = \mathbf{F}_i'\,\mathrm{Cov}(\mathbf{F}_i)^{-1}\,\mathbf{F}_i = \mathbf{F}_i'\mathbf{F}_i.$$
For the $\boldsymbol{\varepsilon}$ part, the corresponding MD is
$$D(\boldsymbol{\varepsilon}_i, \mathbf{0}, \boldsymbol{\Psi}) = \boldsymbol{\varepsilon}_i'\,\boldsymbol{\Psi}^{-1}\,\boldsymbol{\varepsilon}_i.$$
As is well known, there are two kinds of factor scores: random and fixed. In this paper, we are concerned with random factor scores.
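Continuing the simulation sketch above, the separated MDs of Definition 3 follow directly from the (here known) factor scores and errors:

```python
# Separated MDs of Definition 3, using the known (simulated) F and eps.
D_F = np.einsum("ij,ij->i", F, F)                                # F_i' F_i
D_eps = np.einsum("ij,jk,ik->i", eps, np.linalg.inv(Psi), eps)   # eps_i' Psi^{-1} eps_i
print(D_F.mean(), D_eps.mean())                                  # near m = 3 and p = 10
```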

2.2. MDs with Unknown Mean and Unknown Covariance Matrix

In most practical situations, the mean and covariance matrix are unknown. Therefore, we utilise the MDs with unknown mean and covariance matrix. Under the conditions in Definitions 1–3, the MD with unknown mean and covariance can be expressed as follows:
$$D(\mathbf{x}_i, \bar{\mathbf{x}}, \hat{\boldsymbol{\Sigma}}_f) = (\mathbf{x}_i - \bar{\mathbf{x}})'\,\hat{\boldsymbol{\Sigma}}_f^{-1}\,(\mathbf{x}_i - \bar{\mathbf{x}}),$$
where $\bar{\mathbf{x}} = n^{-1}\mathbf{X}'\mathbf{1}$ is the sample mean and $\hat{\boldsymbol{\Sigma}}_f = \hat{\mathbf{L}}\hat{\mathbf{L}}' + \hat{\boldsymbol{\Psi}}$ is the estimated covariance matrix, with $\hat{\mathbf{L}}$ the maximum likelihood estimate (MLE) of the factor loadings and $\hat{\boldsymbol{\Psi}}$ the MLE of the error covariance.
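As a sketch of this estimation step, one possible implementation uses scikit-learn's FactorAnalysis, which fits the factor model by maximum likelihood; the simulated X from the earlier sketch stands in for real data:

```python
from sklearn.decomposition import FactorAnalysis

fa = FactorAnalysis(n_components=3).fit(X)   # ML estimation of the factor model
L_hat = fa.components_.T                     # p x m estimated loadings
Psi_hat = np.diag(fa.noise_variance_)        # estimated diagonal error covariance
Sigma_f_hat = L_hat @ L_hat.T + Psi_hat      # equals fa.get_covariance()

x_bar = X.mean(axis=0)
Xc = X - x_bar
Sf_hat_inv = np.linalg.inv(Sigma_f_hat)
D_hat = np.einsum("ij,jk,ik->i", Xc, Sf_hat_inv, Xc)
print(D_hat.mean())                          # close to p, as in Proposition 2
```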
Proposition 2.
Under the conditions of Equation (1), the expectation of the MD for factor structured variables with unknown mean is
$$E\left[D(\mathbf{x}_i, \bar{\mathbf{x}}, \hat{\boldsymbol{\Sigma}}_f)\right] = p.$$
The expectation above is equivalent to the expectation of the MD in Definition 2. Thus, the MD with estimated covariance matrix is unbiased with respect to the MD with known covariance matrix. Following the same idea as in Section 2.1, it is straightforward to investigate the separate parts of the MDs. We introduce the definitions of the separate parts as follows:
Definition 4.
Let $\hat{\mathbf{L}}$ be the estimated factor loadings and $\hat{\boldsymbol{\Psi}}$ the estimated error covariance. Since the means of $\mathbf{F}$ and $\boldsymbol{\varepsilon}$ are zero, we define the MDs for the estimated factor scores and errors as follows:
$$D(\hat{\mathbf{F}}_i, \mathbf{0}, \mathrm{Cov}(\mathbf{F}_i)) = \hat{\mathbf{F}}_i'\left(\hat{\mathbf{L}}'\hat{\boldsymbol{\Psi}}^{-1}\hat{\mathbf{L}}\right)^{-1}\hat{\mathbf{F}}_i$$
and
$$D(\hat{\boldsymbol{\varepsilon}}_i, \mathbf{0}, \hat{\boldsymbol{\Psi}}) = \hat{\boldsymbol{\varepsilon}}_i'\,\hat{\boldsymbol{\Psi}}^{-1}\,\hat{\boldsymbol{\varepsilon}}_i.$$
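A sketch of Definition 4, reusing the fitted quantities from the previous snippet; the regression-method scores are computed by hand rather than taken from a library routine:

```python
# Regression-method factor scores and residuals.
F_hat = Xc @ Sf_hat_inv @ L_hat              # row i holds F_hat_i'
eps_hat = Xc - F_hat @ L_hat.T               # estimated errors

# Separated MDs of Definition 4.
Delta_hat = L_hat.T @ np.linalg.inv(Psi_hat) @ L_hat
D_F_hat = np.einsum("ij,jk,ik->i", F_hat, np.linalg.inv(Delta_hat), F_hat)
D_eps_hat = np.einsum("ij,ij->i", eps_hat, eps_hat / fa.noise_variance_)
```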
In the following section, we will derive the distributional properties of the MDs.

3. Distributional Properties of Factor Model MDs

In this section, we present some distributional properties of the MDs for factor-structured variables with random factor scores.
Proposition 3.
Under the assumptions of Proposition 2, the joint distribution of $D(\mathbf{F}_i)$ and $D(\boldsymbol{\varepsilon}_i)$ is
$$\mathbf{Y} = \begin{pmatrix} D(\mathbf{F}_i, \mathbf{0}, \mathbf{I}) \\ D(\boldsymbol{\varepsilon}_i, \mathbf{0}, \boldsymbol{\Psi}) \end{pmatrix} \sim \begin{pmatrix} \chi^2_{(m)} \\ \chi^2_{(p)} \end{pmatrix}, \qquad \mathrm{Cov}(\mathbf{Y}) = \begin{pmatrix} 2m & 0 \\ 0 & 2p \end{pmatrix}.$$
Proof. 
This follows immediately, since the true parameter values are known in this case. □
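A quick simulation check of Proposition 3 under the synthetic setup above (with m = 3 and p = 10):

```python
from scipy import stats

# D_F and D_eps from the earlier sketch use the true parameters, so they
# should be consistent with chi2(m) and chi2(p) in a Kolmogorov-Smirnov test.
print(stats.kstest(D_F, "chi2", args=(3,)).pvalue)    # large p-value expected
print(stats.kstest(D_eps, "chi2", args=(10,)).pvalue)
```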
Next, we turn to the asymptotic distribution of MDs with estimated mean and covariance matrix.
Proposition 4.
Let $\hat{\mathbf{F}}_i = \hat{\mathbf{L}}'\hat{\boldsymbol{\Sigma}}^{-1}(\mathbf{x}_i - \bar{\mathbf{x}})$ be the estimated factor scores from the regression method (Johnson and Wichern 2002), $\hat{\boldsymbol{\Sigma}}$ the estimated covariance matrix of the demeaned sample $\mathbf{X}$ and $\hat{\mathbf{L}}$ the factor loading matrix under maximum likelihood estimation. If $\boldsymbol{\Psi}$ and $\mathbf{L}$ are identified by requiring $\mathbf{L}'\boldsymbol{\Psi}^{-1}\mathbf{L}$ to be diagonal with distinct, ordered elements, and the errors $\hat{\boldsymbol{\varepsilon}}$ are homogeneous, then, as $n \to \infty$, the first two moments of the MDs under a factor model structure converge in probability as follows:
$$E\left[D(\hat{\mathbf{F}}_i, \mathbf{0}, \mathrm{Cov}(\mathbf{F}_i))\right] \xrightarrow{p} n\,\mathrm{tr}(\boldsymbol{\gamma}\boldsymbol{\Sigma}); \qquad V\left[D(\hat{\mathbf{F}}_i, \mathbf{0}, \mathrm{Cov}(\mathbf{F}_i))\right] \xrightarrow{p} 2n\,\mathrm{tr}\left[(\boldsymbol{\gamma}\boldsymbol{\Sigma})^2\right];$$
$$\sigma^{-1} E\left[D(\hat{\boldsymbol{\varepsilon}}_i, \mathbf{0}, \hat{\boldsymbol{\Psi}})\right] = p; \qquad \sigma^{-2} V\left[D(\hat{\boldsymbol{\varepsilon}}_i, \mathbf{0}, \hat{\boldsymbol{\Psi}})\right] = 2p,$$
where $\boldsymbol{\gamma} = \boldsymbol{\Psi}^{-1}\mathbf{L}(\mathbf{I} + \boldsymbol{\Delta})^{-1}\boldsymbol{\Delta}^{-1}(\mathbf{I} + \boldsymbol{\Delta})^{-1}\mathbf{L}'\boldsymbol{\Psi}^{-1}$, $\boldsymbol{\Delta} = \mathbf{L}'\boldsymbol{\Psi}^{-1}\mathbf{L}$ and $\sigma$ is the constant from the homogeneous errors.
Proof. 
The proof is given in Appendix A. □
Note that these two MDs are not linearly additive, i.e., $D(\mathbf{L}\mathbf{F}_i) + D(\boldsymbol{\varepsilon}_i) \neq D(\mathbf{X}_i)$. Next, we continue to derive the distributional properties of the MDs in the factor model. We split the MD into two parts as in Section 2 and first derive the properties of $\mathbf{L}\mathbf{L}'$. Due to the singularity of $\mathbf{L}\mathbf{L}'$, some additional restrictions are necessary. Styan (1970) showed that, for a singular normally distributed variable $\mathbf{x}$ and any symmetric matrix $\mathbf{A}$, sufficient and necessary conditions for its quadratic form $\mathbf{x}'\mathbf{A}\mathbf{x}$ to follow the chi-square distribution $\chi^2_{(r)}$ with $r$ degrees of freedom are
$$\mathbf{CACAC} = \mathbf{CAC} \quad \text{and} \quad r(\mathbf{CAC}) = \mathrm{tr}(\mathbf{AC}) = r,$$
where $\mathbf{C}$ is the covariance matrix of $\mathbf{x}$ and $r(\mathbf{A})$ stands for the rank of the matrix $\mathbf{A}$. For our model in Definition 3, $\mathbf{A} = (\mathbf{L}\mathbf{L}')^{-}$ and $\mathbf{C} = \mathbf{L}\mathbf{L}'$. According to the properties of the generalised inverse matrix, we obtain
$$\mathbf{CACAC} = \mathbf{L}\mathbf{L}'(\mathbf{L}\mathbf{L}')^{-}\mathbf{L}\mathbf{L}'(\mathbf{L}\mathbf{L}')^{-}\mathbf{L}\mathbf{L}' = \mathbf{L}\mathbf{L}'(\mathbf{L}\mathbf{L}')^{-}\mathbf{L}\mathbf{L}' = \mathbf{CAC},$$
and from Rao (2009) we get
$$r(\mathbf{CAC}) = r\left(\mathbf{L}\mathbf{L}'(\mathbf{L}\mathbf{L}')^{-}\mathbf{L}\mathbf{L}'\right) = r(\mathbf{L}\mathbf{L}') = \mathrm{tr}\left((\mathbf{L}\mathbf{L}')^{-}\mathbf{L}\mathbf{L}'\right) = \mathrm{tr}(\mathbf{AC}).$$
Harville (1997) showed that $\mathrm{tr}(\mathbf{AC}) = \mathrm{tr}(\mathbf{I}_m) = m = r(\mathbf{CAC})$, which establishes the rank result.
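These two conditions are easy to verify numerically; below is a small check with the Moore-Penrose inverse standing in for the generalised inverse, using the hypothetical L from the earlier simulation:

```python
# Styan's conditions for C = LL' and A = (LL')^-:
C = L @ L.T                    # p x p matrix of rank m
A = np.linalg.pinv(C)          # Moore-Penrose generalised inverse
print(np.allclose(C @ A @ C @ A @ C, C @ A @ C))   # True
print(np.trace(A @ C))                             # approximately m = 3
```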
The reason we choose the MLE for the factor loadings is that, based on Proposition 1, the MLE has some neat properties. Since we assume that the factor scores are random, the uniqueness condition can be achieved here. Compared with the principal component and centroid methods, another advantage of the MLE is that it is independent of the metric used: a change of scale of any variate $x_i$ merely induces proportional changes in its factor loadings $L_{ir}$. This may easily be verified by examining the estimation equations as follows:
Let the transformed data be $\mathbf{X} \mapsto \mathbf{D}\mathbf{X}$, where $\mathbf{D}$ is a diagonal matrix with positive elements, and let $\mathbf{L}'\boldsymbol{\Psi}^{-1}\mathbf{L}$ be diagonal with identified $\mathbf{L}$. Then the kernel of the log-likelihood function becomes
$$\log\left|\mathbf{D}\boldsymbol{\Psi}\mathbf{D} + \mathbf{D}\mathbf{L}\mathbf{L}'\mathbf{D}\right| + \mathrm{tr}\left[\mathbf{D}\mathbf{C}\mathbf{D}\left(\mathbf{D}\boldsymbol{\Psi}\mathbf{D} + \mathbf{D}\mathbf{L}\mathbf{L}'\mathbf{D}\right)^{-1}\right] = \log\left|\boldsymbol{\Psi} + \mathbf{L}\mathbf{L}'\right| + \mathrm{tr}\left[\mathbf{C}\left(\boldsymbol{\Psi} + \mathbf{L}\mathbf{L}'\right)^{-1}\right] + 2\log\left|\mathbf{D}\right|.$$
Thus, the MLEs satisfy $\hat{\mathbf{L}}^* = \mathbf{D}\hat{\mathbf{L}}$, $\hat{\boldsymbol{\Psi}}^* = \mathbf{D}\hat{\boldsymbol{\Psi}}\mathbf{D}$ and $\hat{\mathbf{L}}^{*\prime}\hat{\boldsymbol{\Psi}}^{*-1}\hat{\mathbf{L}}^* = \hat{\mathbf{L}}'\hat{\boldsymbol{\Psi}}^{-1}\hat{\mathbf{L}}.$
When it comes to the factor scores, we use the regression method. It is shown in Mardia (1977) that the MD with factor loadings estimated by the weighted least squares method takes a larger value than the one from the regression method on the same data set. However, both methods have pros and cons; they solve the same problem from different directions. Bartlett treats the factor scores as parameters, while Thomson (Lawley and Maxwell 1971) considers their estimation a prediction problem. Bartlett seeks a linear function of the factors that has minimum variance among unbiased estimators of one specific factor at a time, so it becomes a problem of finding an optimal model for a specific factor.
Thomson, on the other hand, emphasises all factors as one whole group. He also estimates the factor scores, but focuses on minimising the error of the factor model for any factor score within it. Thus, the ideas behind the two methods are quite different.
Furthermore, in more general cases, such as data sets with missing values, the weighted least squares method is more sensitive. With the uniqueness condition, the regression method leads to an estimate similar to that of the weighted least squares method for the factor scores, while in some extreme situations the regression method is more robust than the weighted least squares method.

4. Contaminated Data

Inferences based on residual-related models can be drastically influenced by one or a few outliers in the data set. Therefore, some preparatory work on data quality checking is necessary before the analysis. One approach is to use diagnostic methods, for example, outlier detection. Given a data set, outliers can be detected by distance measurement methods (Mardia et al. 1980). Several outlier detection tools have been proposed in the literature. However, there has been little discussion either of detecting the source of outliers or of combining the factor model with the MD (Mardia 1977). Apart from general outlier detection, the factor model suggests a possibility for further detection of the source of outliers. It should be pointed out that an outlier can occur for many reasons. It can be a simple mistake made when collecting the data; in that case, it should be removed once detected. On the other hand, some extreme values are not collection mistakes. This type of outlier contains valuable information that can benefit the investigation more than the information from non-outlying data. Thus, outlier detection is the first and a necessary step in handling extreme values in an analysis, and further investigation is needed before deciding to retain or exclude an outlier.
In this section, we investigate a solution to this situation. The separation of the MDs gives a clear picture of the source of an outlier. Outliers from the error term part may be considered for removal, since they possibly arise from erratic data collection. For outliers from the systematic part, one can pay closer attention and try to find the reason for the extremity; they can contain valuable information and should be retained in the data set for analysis. As stated before, we build a new type of MD based on the two separated parts of a factor model: the error term and the systematic term. They are defined as follows:
Definition 5.
Assume $E(\mathbf{F}_i) = \mathbf{0}$ and $E(\boldsymbol{\varepsilon}_i) = \mathbf{0}$. (i) Let $\mathbf{X}_i = \mathbf{L}\mathbf{F}_i + \boldsymbol{\varepsilon}_i + \boldsymbol{\delta}_i$, where $\boldsymbol{\delta}_i: p \times 1$. We refer to this case as an additive outlier in $\boldsymbol{\varepsilon}_i$, with $\boldsymbol{\delta}_k'\boldsymbol{\delta}_k = 1$ and $\boldsymbol{\delta}_i = \mathbf{0}$ for all $i \neq k$. (ii) Let $\mathbf{X}_i = \mathbf{L}(\mathbf{F}_i + \boldsymbol{\delta}_i) + \boldsymbol{\varepsilon}_i$, where $\boldsymbol{\delta}_k: m \times 1$. We refer to this case as an additive outlier in $\mathbf{F}_i$, where $\boldsymbol{\delta}_i = \mathbf{0}$ for all $i \neq k$.
With the additive type of outliers above, we turn to the expression of outlier detection as follows.
Proposition 5.
Let $d_{\delta_j}$, $j = 1, 2$, denote the MDs under the two kinds of outliers in Definition 5. To simplify notation, we here assume known $\boldsymbol{\mu}$. Let $\mathbf{S} = n^{-1}\sum_{i=1}^{n}\mathbf{x}_i\mathbf{x}_i'$ be the sample covariance matrix, and set $\mathbf{W} = n\mathbf{S}$ and $\mathbf{W}_k = \mathbf{W} - \mathbf{x}_k\mathbf{x}_k'$. The expressions based on the contaminated and uncontaminated parts of the MD, up to $o_p(n^{-1/2})$, are as follows:
case (i):
$$d_{\delta_1} = \begin{cases} d_{ii}, & i \neq k, \\[6pt] d_{kk} - n\,\dfrac{\left(n^{-1}\mathbf{x}_k'\mathbf{S}^{-1}\boldsymbol{\delta}_k - 1\right)^2}{1 + n^{-1}\boldsymbol{\delta}_k'\mathbf{S}^{-1}\boldsymbol{\delta}_k} + n, & i = k; \end{cases}$$
case (ii):
$$d_{\delta_2} = \begin{cases} d_{ii}, & i \neq k, \\[6pt] d_{kk} - n\,\dfrac{\left(n^{-1}\mathbf{x}_k'\mathbf{S}^{-1}\mathbf{L}\boldsymbol{\delta}_k - 1\right)^2}{1 + n^{-1}\boldsymbol{\delta}_k'\mathbf{L}'\mathbf{S}^{-1}\mathbf{L}\boldsymbol{\delta}_k} + n, & i = k, \end{cases}$$
where $d_{ii} = n\,\mathbf{x}_i'\mathbf{W}^{-1}\mathbf{x}_i = \mathbf{x}_i'\mathbf{S}^{-1}\mathbf{x}_i$.
Proof. 
The proof is given in Appendix A. □
The above results give us a way of separating two possible sources of an outlier.
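The key algebraic step behind Proposition 5 is the rank-one (Sherman-Morrison) update of $\mathbf{W}^{-1}$; the sketch below verifies it numerically with a hypothetical unit-norm contamination vector, reusing the simulated X (with known mean zero) from Section 2:

```python
# Sherman-Morrison step behind Proposition 5.
n = X.shape[0]
S = X.T @ X / n                        # sample covariance with known mean mu = 0
W = n * S
delta = np.zeros(p)
delta[0] = 1.0                         # delta'delta = 1, as in Definition 5 (i)

# (W + delta delta')^{-1} = W^{-1} - W^{-1} delta delta' W^{-1} / (1 + delta' W^{-1} delta)
W_inv = np.linalg.inv(W)
u = W_inv @ delta
W_tilde_inv = W_inv - np.outer(u, u) / (1.0 + delta @ u)
print(np.allclose(W_tilde_inv, np.linalg.inv(W + np.outer(delta, delta))))  # True
```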
As an alternative to the classic way of estimating the covariance matrix, the factor model has several advantages: it is more computationally efficient and theoretically easier to implement with far fewer variables. The simple structure of a factor model can also be extended further to MDs with many possible generalisations. The proposition in this section gives detailed expressions for the influence of different types of outliers on the MD and offers an explicit way of detecting them.

5. Empirical Example

In this section, we apply the factor model based MD to outlier detection and use the classic MD as a benchmark. The data set was collected from ORBIS, a database on private companies. We collected monthly stock closing prices of 10 American companies between 2005 and 2013. The companies are Exxon Mobil Corp., Chevron Corp., Apple Inc., General Electric Co., Wells Fargo Co., Procter & Gamble Co., Microsoft Corp., Johnson & Johnson, Google Inc. and Johnson Controls Inc. All the data were standardised before being exported from the database. First, we transform the closing prices into return rates ($r_t$) as follows:
$$r_t = \frac{y_t - y_{t-1}}{y_{t-1}},$$
where $y_t$ stands for the closing price in the $t$-th month and $y_{t-1}$ for that in the $(t-1)$-th month.
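In code, this transformation is a one-liner; the sketch below uses synthetic prices, since the ORBIS data are not reproduced here:

```python
import numpy as np
import pandas as pd

# Hypothetical monthly prices for 10 companies over 108 months (2005-2013).
rng = np.random.default_rng(1)
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0, 0.05, size=(108, 10)), axis=0)),
    columns=[f"company_{j}" for j in range(10)],
)
returns = prices.pct_change().dropna()   # r_t = (y_t - y_{t-1}) / y_{t-1}
```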
The box plot of the classic MD is given in Figure 1; here, the classic MD takes the form in Definition 2, with the sample covariance matrix $\mathbf{S}$ in place of $\boldsymbol{\Sigma}_f$.
The source of the outliers is vague when we treat the data as a whole. Figure 1 shows that the most extreme observations are the 58th, 70th and 46th, among others. As discussed above, information about the source of the outliers is unavailable, since the structure of the model is not separated.
Next, we turn to the separated MDs to get more information. We show the box–plots of the separate parts in the factor model in Figure 2 and Figure 3.
In our case, we choose 3 and 4 as the number of factors, based on economic theory and supporting results from scree plots. In the factor model, we use the maximum likelihood method to estimate the factor loadings. For the factor scores, we use the weighted least squares method, also known as Bartlett's method. Readers are referred to the literature for details of these methods (Anderson 2003; Johnson and Wichern 2002).
Compared with Figure 1, we get a different ordering of the outliers. For the factor score part, the most distinct outliers are the 58th, 70th and 46th. Figure 2 shows that, for the error part, the most distinct outliers are the 82nd, 58th and 34th, while the first three outliers by the classic MD are the 58th, 70th and 46th. By this method, we distinguish outliers arising from different parts of the factor model, which helps us to detect their source. In addition to the figures, we also fitted factor structures with different numbers of factors (5 and 6). The results showed a similar pattern: the most extreme stock return rates are the same as those found with 3 or 4 factors. The reliability of the factor analysis on this data set has also been investigated and confirmed.
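For completeness, the detection pipeline of this section can be sketched end to end on the synthetic returns above (with 3 factors; the flagged indices will of course differ from the ORBIS results reported in the text):

```python
from sklearn.decomposition import FactorAnalysis

Z = ((returns - returns.mean()) / returns.std()).values   # standardised returns
fa = FactorAnalysis(n_components=3).fit(Z)
L_hat = fa.components_.T
Psi_inv = np.diag(1.0 / fa.noise_variance_)

# Regression-method scores and residuals, then the separated MDs.
F_hat = Z @ np.linalg.inv(fa.get_covariance()) @ L_hat
eps_hat = Z - F_hat @ L_hat.T
D_F = np.einsum("ij,jk,ik->i", F_hat,
                np.linalg.inv(L_hat.T @ Psi_inv @ L_hat), F_hat)
D_eps = np.einsum("ij,jk,ik->i", eps_hat, Psi_inv, eps_hat)

print("factor-part outliers:", np.argsort(D_F)[::-1][:3])
print("error-part outliers:", np.argsort(D_eps)[::-1][:3])
```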

6. Conclusions and Discussion

We investigate a new approach to detecting outliers based on information from the internal structure of a data set. By using a factor model, we reduce the dimension of the data set and thereby improve the estimation efficiency of the covariance matrix. Further, we detect outliers based on a separated factor model consisting of the error part and the systematic part. The new way of detecting outliers is examined in an empirical application. The results show that different sources of outliers are detectable and can offer a different view of the data.
We provide a different perspective on how to screen a data set by investigating the source of its outliers. In a sense, the new method helps to improve the accuracy of inference, since we can filter the observations more carefully according to their different sources. The proposed method leads to different conclusions compared with the classic method.
We employed the sample covariance matrix as part of the estimated factor scores. The sample covariance matrix is not the optimal component, since it is sensitive to outliers; some further study can thus be conducted based on the results of this paper. In addition, further studies could address the case of non-normally distributed variables, especially when the sample size is relatively small, because violation of the multivariate normality assumption severely affects, for example, model fit statistics (i.e., the $\chi^2$ fit statistic and statistics based on $\chi^2$, such as the root mean square error of approximation). These fit statistics in turn affect the specification of the factor model by misleading the choice of the number of factors in the fitted model. Some assumptions, such as $\boldsymbol{\delta}_k'\boldsymbol{\delta}_k = 1$ above, were imposed to simplify the derivations. This is a strong restriction that may not always hold in practice, so further research could also be conducted in this direction.

Acknowledgments

Thanks to the referees and the Academic Editor for their constructive suggestions to improve the presentation of this paper.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. Proof of Propositions

Proof of Proposition 2.
The sample covariance matrix used for the estimated factor loadings $\hat{\mathbf{L}}$ and error covariance $\hat{\boldsymbol{\Psi}}$ is defined as $\mathbf{S} = n^{-1}\sum_{i=1}^{n}(\mathbf{x}_i - \bar{\mathbf{x}})(\mathbf{x}_i - \bar{\mathbf{x}})'$. Then
$$E(D_{ii,f}) = E\left[(\mathbf{x}_i - \bar{\mathbf{x}})'\hat{\boldsymbol{\Sigma}}_f^{-1}(\mathbf{x}_i - \bar{\mathbf{x}})\right] = E\left\{\mathrm{tr}\left[(\mathbf{x}_i - \bar{\mathbf{x}})(\mathbf{x}_i - \bar{\mathbf{x}})'\hat{\boldsymbol{\Sigma}}_f^{-1}\right]\right\} = E\left[\mathrm{tr}\left(\mathbf{S}\hat{\boldsymbol{\Sigma}}_f^{-1}\right)\right].$$
According to Lawley and Maxwell (1971), $E[\mathrm{tr}(\mathbf{S}\hat{\boldsymbol{\Sigma}}_f^{-1})] = \mathrm{tr}(\mathbf{I}_p) = p$; thus $E(D_{ii,f}) = p$. □
Proof of Proposition 4.
Without loss of generality, set $\boldsymbol{\mu} = \mathbf{0}$. The factor score based MD is
$$d_{\hat{f}_i} = \hat{\mathbf{F}}_i'\left(\hat{\mathbf{L}}'\hat{\boldsymbol{\Psi}}^{-1}\hat{\mathbf{L}}\right)^{-1}\hat{\mathbf{F}}_i = \left(\hat{\mathbf{L}}'\hat{\boldsymbol{\Sigma}}^{-1}\mathbf{x}_j\right)'\left(\hat{\mathbf{L}}'\hat{\boldsymbol{\Psi}}^{-1}\hat{\mathbf{L}}\right)^{-1}\left(\hat{\mathbf{L}}'\hat{\boldsymbol{\Sigma}}^{-1}\mathbf{x}_j\right) = \mathrm{tr}\left[\left(\hat{\mathbf{L}}\hat{\mathbf{L}}' + \hat{\boldsymbol{\Psi}}\right)^{-1}\hat{\mathbf{L}}\left(\hat{\mathbf{L}}'\hat{\boldsymbol{\Psi}}^{-1}\hat{\mathbf{L}}\right)^{-1}\hat{\mathbf{L}}'\left(\hat{\mathbf{L}}\hat{\mathbf{L}}' + \hat{\boldsymbol{\Psi}}\right)^{-1}\mathbf{x}_j\mathbf{x}_j'\right].$$
Since $\mathbf{L}'(\mathbf{L}\mathbf{L}' + \boldsymbol{\Psi})^{-1} = (\mathbf{I} + \mathbf{L}'\boldsymbol{\Psi}^{-1}\mathbf{L})^{-1}\mathbf{L}'\boldsymbol{\Psi}^{-1}$ (Johnson and Wichern 2002), and using $\hat{\mathbf{L}} \xrightarrow{p} \mathbf{L}$ and $\hat{\boldsymbol{\Psi}} \xrightarrow{p} \boldsymbol{\Psi}$, where $\xrightarrow{p}$ denotes convergence in probability (Anderson 2003), we get $d_{\hat{f}_i} \to \mathrm{tr}\left[\boldsymbol{\Psi}^{-1}\mathbf{L}(\mathbf{I} + \boldsymbol{\Delta})^{-1}\boldsymbol{\Delta}^{-1}(\mathbf{I} + \boldsymbol{\Delta})^{-1}\mathbf{L}'\boldsymbol{\Psi}^{-1}\mathbf{x}_j\mathbf{x}_j'\right]$, where $\boldsymbol{\Delta} = \mathbf{L}'\boldsymbol{\Psi}^{-1}\mathbf{L}$. Setting $\boldsymbol{\gamma} = \boldsymbol{\Psi}^{-1}\mathbf{L}(\mathbf{I} + \boldsymbol{\Delta})^{-1}\boldsymbol{\Delta}^{-1}(\mathbf{I} + \boldsymbol{\Delta})^{-1}\mathbf{L}'\boldsymbol{\Psi}^{-1}$, we get the first moment $E(d_{\hat{f}_i}) = E[\mathrm{tr}(\boldsymbol{\gamma}\mathbf{S})] = n\,\mathrm{tr}(\boldsymbol{\gamma}\boldsymbol{\Sigma})$. In the same way, the variance of the factor score based MD is $V(d_{\hat{f}_i}) = 2n\,\mathrm{tr}\left[(\boldsymbol{\gamma}\boldsymbol{\Sigma})^2\right]$.
Given that the error terms are homogeneous, $\hat{\boldsymbol{\Psi}} \xrightarrow{p} \boldsymbol{\Psi} = \sigma\mathbf{I}$, where $\sigma$ is a constant across the different factors. The proof for the error terms follows the same pattern as that for the factor score based MD and is therefore omitted. □
Proof of Proposition 5.
By Definition 5 (i): $\mathbf{X}_i = \mathbf{L}\mathbf{F}_i + \boldsymbol{\varepsilon}_i + \boldsymbol{\delta}_i$.
Set $\mathbf{S} = n^{-1}\sum_{i=1}^{n}(\mathbf{L}\mathbf{F}_i + \boldsymbol{\varepsilon}_i)(\mathbf{L}\mathbf{F}_i + \boldsymbol{\varepsilon}_i)'$; then
$$\hat{\boldsymbol{\Sigma}} = n^{-1}\sum_{i=1}^{n}\mathbf{X}_i\mathbf{X}_i' = n^{-1}\sum_{i \neq k}(\mathbf{L}\mathbf{F}_i + \boldsymbol{\varepsilon}_i)(\mathbf{L}\mathbf{F}_i + \boldsymbol{\varepsilon}_i)' + n^{-1}(\mathbf{L}\mathbf{F}_k + \boldsymbol{\varepsilon}_k + \boldsymbol{\delta}_k)(\mathbf{L}\mathbf{F}_k + \boldsymbol{\varepsilon}_k + \boldsymbol{\delta}_k)' = \mathbf{S} + n^{-1}\left[(\mathbf{L}\mathbf{F}_k + \boldsymbol{\varepsilon}_k)\boldsymbol{\delta}_k' + \boldsymbol{\delta}_k(\mathbf{L}\mathbf{F}_k + \boldsymbol{\varepsilon}_k)' + \boldsymbol{\delta}_k\boldsymbol{\delta}_k'\right].$$
Since the outlier $\boldsymbol{\delta}_k$ is deterministic, it is independent of the factor scores $\mathbf{F}_k$ and the error term $\boldsymbol{\varepsilon}_k$. Using $E(\mathbf{F}) = \mathbf{0}$ and $E(\boldsymbol{\varepsilon}) = \mathbf{0}$, the cross term $\boldsymbol{\delta}_k\mathbf{F}_k'$ converges to zero in probability at rate $o_p(n^{-1/2})$, and likewise for the error term. Note that replacing $\boldsymbol{\mu}$ by $\bar{\mathbf{x}}$ does not affect the asymptotic properties. Thus $\hat{\boldsymbol{\Sigma}} = \mathbf{S} + n^{-1}\boldsymbol{\delta}_k\boldsymbol{\delta}_k'$, so that $n\hat{\boldsymbol{\Sigma}} = n\mathbf{S} + \boldsymbol{\delta}_k\boldsymbol{\delta}_k' = \mathbf{W} + \boldsymbol{\delta}_k\boldsymbol{\delta}_k'$. Hence
$$\left(n\hat{\boldsymbol{\Sigma}}\right)^{-1} = \left(\mathbf{W} + \boldsymbol{\delta}_k\boldsymbol{\delta}_k'\right)^{-1} = \mathbf{W}^{-1} - \frac{\mathbf{W}^{-1}\boldsymbol{\delta}_k\boldsymbol{\delta}_k'\mathbf{W}^{-1}}{1 + \boldsymbol{\delta}_k'\mathbf{W}^{-1}\boldsymbol{\delta}_k}.$$
Thus, for $i \neq k$,
$$\mathbf{x}_i'\left(n\hat{\boldsymbol{\Sigma}}\right)^{-1}\mathbf{x}_i = \mathbf{x}_i'\mathbf{W}^{-1}\mathbf{x}_i - \frac{\left(\mathbf{x}_i'\mathbf{W}^{-1}\boldsymbol{\delta}_k\right)^2}{1 + \boldsymbol{\delta}_k'\mathbf{W}^{-1}\boldsymbol{\delta}_k} = n^{-1}D_{ii} - \frac{\left(\mathbf{x}_i'\mathbf{W}^{-1}\boldsymbol{\delta}_k\right)^2}{1 + \boldsymbol{\delta}_k'\mathbf{W}^{-1}\boldsymbol{\delta}_k},$$
where $n^{-1}D_{ii} = \mathbf{x}_i'\mathbf{W}^{-1}\mathbf{x}_i = n^{-1}\mathbf{x}_i'\mathbf{S}^{-1}\mathbf{x}_i$. For $i = k$, let $\tilde{\mathbf{x}}_k = \mathbf{x}_k + \boldsymbol{\delta}_k$; plugging $\tilde{\mathbf{x}}_k$ into the expression above gives
$$\tilde{\mathbf{x}}_k'\left(n\hat{\boldsymbol{\Sigma}}\right)^{-1}\tilde{\mathbf{x}}_k = \mathbf{x}_k'\left(n\hat{\boldsymbol{\Sigma}}\right)^{-1}\mathbf{x}_k + 2\boldsymbol{\delta}_k'\left(n\hat{\boldsymbol{\Sigma}}\right)^{-1}\mathbf{x}_k + \boldsymbol{\delta}_k'\left(n\hat{\boldsymbol{\Sigma}}\right)^{-1}\boldsymbol{\delta}_k,$$
where the parts equal
$$\mathbf{x}_k'\left(n\hat{\boldsymbol{\Sigma}}\right)^{-1}\mathbf{x}_k = \mathbf{x}_k'\mathbf{W}^{-1}\mathbf{x}_k - \frac{\left(\mathbf{x}_k'\mathbf{W}^{-1}\boldsymbol{\delta}_k\right)^2}{1 + \boldsymbol{\delta}_k'\mathbf{W}^{-1}\boldsymbol{\delta}_k}, \quad \boldsymbol{\delta}_k'\left(n\hat{\boldsymbol{\Sigma}}\right)^{-1}\mathbf{x}_k = \frac{\boldsymbol{\delta}_k'\mathbf{W}^{-1}\mathbf{x}_k}{1 + \boldsymbol{\delta}_k'\mathbf{W}^{-1}\boldsymbol{\delta}_k}, \quad \boldsymbol{\delta}_k'\left(n\hat{\boldsymbol{\Sigma}}\right)^{-1}\boldsymbol{\delta}_k = \frac{\boldsymbol{\delta}_k'\mathbf{W}^{-1}\boldsymbol{\delta}_k}{1 + \boldsymbol{\delta}_k'\mathbf{W}^{-1}\boldsymbol{\delta}_k}.$$
Combining these results,
$$\tilde{\mathbf{x}}_k'\left(n\hat{\boldsymbol{\Sigma}}\right)^{-1}\tilde{\mathbf{x}}_k = n^{-1}D_{kk} - \frac{\left(\boldsymbol{\delta}_k'\mathbf{W}^{-1}\mathbf{x}_k - 1\right)^2}{1 + \boldsymbol{\delta}_k'\mathbf{W}^{-1}\boldsymbol{\delta}_k} + 1.$$
For Definition 5 (ii), $\mathbf{X}_i = \mathbf{L}(\mathbf{F}_i + \boldsymbol{\delta}_i) + \boldsymbol{\varepsilon}_i$, the result follows directly from part (i). □
Linear invariance of the factor model MD (covariance matrix): let the linear transformation of the data be $\mathbf{y}_i = \mathbf{a} + \mathbf{B}\mathbf{x}_i$, where $\mathbf{B}$ is a symmetric invertible matrix. The new MD $D_{ii,f}^*$ under the linear transformation satisfies
$$E(D_{ii,f}^*) = E\left[(\mathbf{y}_i - \bar{\mathbf{y}})'\hat{\boldsymbol{\Sigma}}_f^{*-1}(\mathbf{y}_i - \bar{\mathbf{y}})\right] = E\left[(\mathbf{B}\mathbf{x}_i - \mathbf{B}\bar{\mathbf{x}})'\hat{\boldsymbol{\Sigma}}_f^{*-1}(\mathbf{B}\mathbf{x}_i - \mathbf{B}\bar{\mathbf{x}})\right] = E\left[(\mathbf{x}_i - \bar{\mathbf{x}})'\mathbf{B}'\hat{\boldsymbol{\Sigma}}_f^{*-1}\mathbf{B}(\mathbf{x}_i - \bar{\mathbf{x}})\right],$$
where $\hat{\boldsymbol{\Sigma}}_f^{*-1} = \left(\hat{\mathbf{L}}^*\hat{\mathbf{L}}^{*\prime} + \hat{\boldsymbol{\Psi}}^*\right)^{-1} = \mathbf{B}^{-1}\hat{\boldsymbol{\Sigma}}_f^{-1}\mathbf{B}^{-1}$ (Mardia 1977). Using this connection, we get $E(D_{ii,f}^*) = E\left[(\mathbf{x}_i - \bar{\mathbf{x}})'\hat{\boldsymbol{\Sigma}}_f^{-1}(\mathbf{x}_i - \bar{\mathbf{x}})\right] = E(D_{ii,f})$. Thus, the MD for factor models is invariant under linear transformations.
The MLE of the factor loadings: the maximum likelihood estimators of the mean vector $\boldsymbol{\mu}$, the factor loadings $\mathbf{L}$ and the specific variances $\boldsymbol{\Psi}$ are obtained by finding $\hat{\boldsymbol{\mu}}$, $\hat{\mathbf{L}}$ and $\hat{\boldsymbol{\Psi}}$ that maximise the log-likelihood
$$l(\boldsymbol{\mu}, \mathbf{L}, \boldsymbol{\Psi}) = -\frac{np}{2}\log(2\pi) - \frac{n}{2}\log\left|\mathbf{L}\mathbf{L}' + \boldsymbol{\Psi}\right| - \frac{1}{2}\sum_{i=1}^{n}(\mathbf{X}_i - \boldsymbol{\mu})'\left(\mathbf{L}\mathbf{L}' + \boldsymbol{\Psi}\right)^{-1}(\mathbf{X}_i - \boldsymbol{\mu}),$$
i.e., the log of the joint probability density of the data is maximised.

References

  1. Anderson, Theodore Wilbur. 2003. An Introduction to Multivariate Statistical Analysis, 3rd ed. Hoboken: Wiley.
  2. Bai, Jushan. 2003. Inferential theory for factor models of large dimensions. Econometrica 71: 135–71.
  3. Chalmers, R. Philip, and David B. Flora. 2015. faoutlier: An R package for detecting influential cases in exploratory and confirmatory factor analysis. Applied Psychological Measurement 39: 573.
  4. Fisher, Ronald A. 1940. The precision of discriminant functions. Annals of Human Genetics 10: 422–29.
  5. Flora, David B., Cathy LaBrish, and R. Philip Chalmers. 2012. Old and new ideas for data screening and assumption testing for exploratory and confirmatory factor analysis. Frontiers in Psychology 3: 55.
  6. Gorman, N. T., I. McConnell, and P. J. Lachmann. 1981. Characterisation of the third component of canine and feline complement. Veterinary Immunology and Immunopathology 2: 309–20.
  7. Gregory, Allan W., and Allen C. Head. 1999. Common and country-specific fluctuations in productivity, investment, and the current account. Journal of Monetary Economics 44: 423–51.
  8. Harville, David A. 1997. Matrix Algebra from a Statistician's Perspective. New York: Springer.
  9. Holgersson, H. E. T., and Ghazi Shukur. 2001. Some aspects of non-normality tests in systems of regression equations. Communications in Statistics-Simulation and Computation 30: 291–310.
  10. Johnson, Richard Arnold, and Dean W. Wichern. 2002. Applied Multivariate Statistical Analysis, 5th ed. Upper Saddle River: Prentice Hall.
  11. Lawley, Derrick Norman, and Albert Ernest Maxwell. 1971. Factor Analysis as a Statistical Method. London: Butterworths.
  12. Lewbel, Arthur. 1991. The rank of demand systems: Theory and nonparametric estimation. Econometrica 59: 711–30.
  13. Mahalanobis, Prasanta Chandra. 1936. On the generalized distance in statistics. Proceedings of the National Institute of Sciences of India 2: 49–55.
  14. Mardia, Kanti V. 1974. Applications of some measures of multivariate skewness and kurtosis in testing normality and robustness studies. Sankhyā: The Indian Journal of Statistics, Series B 36: 115–28.
  15. Mardia, Kanti V. 1977. Mahalanobis distances and angles. Multivariate Analysis IV 4: 495–511.
  16. Mardia, Kanti V., John T. Kent, and John M. Bibby. 1980. Multivariate Analysis. San Diego: Academic Press.
  17. Mavridis, Dimitris, and Irini Moustaki. 2008. Detecting outliers in factor analysis using the forward search algorithm. Multivariate Behavioral Research 43: 453–75.
  18. Pavlenko, Tatjana. 2003. On feature selection, curse-of-dimensionality and error probability in discriminant analysis. Journal of Statistical Planning and Inference 115: 565–84.
  19. Pek, Jolynn, and Robert C. MacCallum. 2011. Sensitivity analysis in structural equation models: Cases and their influence. Multivariate Behavioral Research 46: 202–28.
  20. Pison, Greet, Peter J. Rousseeuw, Peter Filzmoser, and Christophe Croux. 2003. Robust factor analysis. Journal of Multivariate Analysis 84: 145–72.
  21. Rao, C. Radhakrishna. 2009. Linear Statistical Inference and Its Applications. New York: John Wiley & Sons, vol. 22.
  22. Ross, Stephen A. 1976. The arbitrage theory of capital asset pricing. Journal of Economic Theory 13: 341–60.
  23. Styan, George P. H. 1970. Notes on the distribution of quadratic forms in singular normal variables. Biometrika 57: 567–72.
  24. Wilks, Samuel S. 1963. Multivariate statistical outliers. Sankhyā: The Indian Journal of Statistics, Series A 25: 407–26.
  25. Yuan, Ke-Hai, and Xiaoling Zhong. 2008. Outliers, leverage observations, and influential cases in factor analysis: Using robust procedures to minimize their effect. Sociological Methodology 38: 329–68.
Figure 1. The box plot of $D(\mathbf{x}_i, \bar{\mathbf{x}}, \mathbf{S})$ for the stock data.
Figure 2. The box plot of $D(\hat{\boldsymbol{\varepsilon}}_i, \mathbf{0}, \hat{\boldsymbol{\Psi}})$.
Figure 3. The box plot of $D(\hat{\mathbf{F}}_i, \bar{\mathbf{x}}, \hat{\boldsymbol{\Sigma}}_f)$.
