An Entropy-Based Approach for Measuring Factor Contributions in Factor Analysis Models

In factor analysis, the contributions of latent variables are conventionally assessed by the sums of the squared factor loadings related to the variables. First, the present paper considers issues with the conventional method. Second, an alternative entropy-based approach for measuring factor contributions is proposed. The method measures the contribution of the common factor vector to the manifest variable vector and decomposes it into the contributions of the individual factors. A numerical example is also provided to demonstrate the present approach.


Introduction
Factor analysis is a statistical method for extracting simple structures that explain the inter-relations between manifest and latent variables. Its origin dates back to the work of [1], and the single-factor model was later extended to the multiple-factor model [2]. These days, factor analysis is widely applied in the behavioral sciences [3]; hence, it is important to interpret the extracted factors, and it is critical to explain how such factors influence the manifest variables, that is, to measure factor contributions. Let X_i be manifest variables; ξ_j latent variables (common factors); ε_i unique factors related to X_i; and let λ_ij be factor loadings, that is, the weights of the factors ξ_j in explaining X_i. Then, the factor analysis model is given as follows:

X_i = Σ_{j=1}^m λ_ij ξ_j + ε_i, i = 1, 2, ..., p, (1)

where E(X_i) = E(ξ_j) = E(ε_i) = 0, var(ξ_j) = 1, cov(ξ_j, ε_i) = 0, cov(ε_i, ε_k) = 0 for i ≠ k, and var(ε_i) = σ_i^2 > 0. For simplicity of discussion, the common factors ξ_j are assumed to be mutually independent in this section; that is, we first consider an orthogonal factor analysis model. In the conventional approach, the contribution of factor ξ_j to all manifest variables X_i, denoted by C_j, is defined as follows:

C_j = Σ_{i=1}^p λ_ij^2. (2)

The above definition of factor contributions is based on the following decomposition of the total variance of the observed variables X_i [4] (p. 59):

Σ_{i=1}^p var(X_i) = Σ_{j=1}^m C_j + Σ_{i=1}^p σ_i^2.

What physical meaning does the above quantity have? Applied to the manifest variables as observed, however, such a decomposition leads to scale-variant results. For this reason, factor contribution is usually considered for the standardized versions of the manifest variables X_i. What does it mean to measure factor contributions by (2)? For standardized manifest variables X_i, we have

cor(X_i, ξ_j)^2 = λ_ij^2. (3)

Then, (2) is the sum of the coefficients of determination of all standardized manifest variables X_i with respect to the single latent variable ξ_j.
The squared correlation coefficients (3), that is, cor(X_i, ξ_j)^2, are the ratios of the explained variances of a manifest variable X_i, and in this sense they can be interpreted as the contributions (effects) of the factors ξ_j to the manifest variable X_i. However, what does the sum of these over all manifest variables X_i, that is, (2), mean? The conventional method may be intuitively reasonable for measuring factor contributions; however, we think it is sensible to propose a method that measures factor contributions as the effects of the factors on the manifest variable vector X = (X_1, X_2, ..., X_p)^T, in a way that is interpretable and has a theoretical basis. As far as we have searched, there is no previous research on this topic. The present paper provides an entropy-based solution to the problem. Entropy is a useful concept for measuring the uncertainty in systems of random variables and sample spaces [5], and it can be applied to measure multivariate dependences of random variables [6,7]. This paper proposes an entropy-based method for measuring the contributions of factors ξ_j to the manifest variable vector X = (X_1, X_2, ..., X_p)^T concerned, which can treat not only orthogonal factors but also oblique cases.

The present paper has five sections in addition to this one. In Section 2, the conventional method for measuring factor contributions is reviewed. Section 3 considers the factor analysis model in view of entropy and makes a preliminary discussion of the measurement of factor contributions. In Section 4, an entropy-based path analysis is applied as a tool to measure factor contributions: the contributions of factors ξ_j are defined by the total effects of the factors on the manifest variable vector, and the contributions are decomposed into those to manifest variables and to subsets of manifest variables. Section 5 illustrates the present method with a numerical example. Finally, in Section 6, some conclusions are provided.
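As a concrete illustration of the conventional quantities discussed in this paper, the following sketch computes the contributions C_j of (2) and the two conventional contribution ratios reviewed in the next section. The loading matrix is a hypothetical example chosen for illustration only, not the paper's data:

```python
import numpy as np

# Hypothetical 5 x 2 loading matrix for an orthogonal two-factor model
# (illustrative values only; not the paper's data).
Lam = np.array([[0.8, 0.1],
                [0.7, 0.2],
                [0.6, 0.3],
                [0.2, 0.8],
                [0.1, 0.9]])
p, m = Lam.shape

# Conventional factor contributions (2): sums of squared loadings per factor.
C = (Lam ** 2).sum(axis=0)

# Contribution ratio in the common factor space: C_j / sum_k C_k.
ratio_common = C / C.sum()

# Contribution ratio in the whole space of standardized variables: C_j / p.
ratio_whole = C / p

print(C)             # conventional contributions C_1, C_2
print(ratio_common)  # ratios summing to 1
print(ratio_whole)
```

With these hypothetical loadings, C_1 and C_2 come out nearly equal, mirroring the behavior of the conventional method reported for the numerical example later in the paper.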

Relative Factor Contributions in the Conventional Method
In the conventional approach, for the orthogonal factor model (1), the contribution ratio of ξ_j is defined by

C_j / Σ_{k=1}^m C_k. (4)

The above measure is referred to as the factor contribution ratio in the common factor space. Let R_i be the multiple correlation coefficient of latent variable vector ξ = (ξ_1, ξ_2, ..., ξ_m)^T and manifest variable X_i. Then, for standardized manifest variable X_i, we have

R_i^2 = Σ_{j=1}^m λ_ij^2. (5)

The above quantity can be interpreted as the effect (explanatory power) of latent variable vector ξ = (ξ_j) on manifest variable X_i; however, the denominator of (4) is the sum of those effects (5) over all manifest variables, and there is no theoretical basis for interpreting it. Another contribution ratio of ξ_j, referred to as that in the whole space of X = (X_i), is defined by

C_j / Σ_{i=1}^p var(X_i). (6)

If the manifest variables are standardized, we have

C_j / p.

Here, there is an issue similar to that of (4), because the denominator in (6) does not express the variation of the manifest variable vector X = (X_i). Indeed, it is the sum of the variances of the manifest variables and does not include the covariances between them. In the next section, the factor analysis model (1) is reconsidered in the framework of generalized linear models (GLMs), and the effects (contributions) of the latent variables ξ_j on the manifest variable vector X = (X_i), that is, factor contributions, are discussed through entropy [8].

Factor Analysis Model and Entropy
It is assumed that the factors ε_i and ξ_j are normally distributed, and the factor analysis model (1) is reconsidered in the GLM framework. Let Λ = (λ_ij) be the p × m factor loading matrix; let Φ be the m × m correlation matrix of common factor vector ξ = (ξ_1, ξ_2, ..., ξ_m)^T; and let Ω be the p × p variance-covariance matrix of unique factor vector ε = (ε_1, ε_2, ..., ε_p)^T. The conditional density function of X given ξ, f(x|ξ), is normal with mean Λξ and variance matrix Ω, and is given as follows:

f(x|ξ) = (2π)^{-p/2} |Ω|^{-1/2} exp( -(x - Λξ)^T Ω̃ (x - Λξ) / (2|Ω|) ),

where Ω̃ is the cofactor (adjugate) matrix of Ω, so that Ω^{-1} = Ω̃ / |Ω|. Let f(x) and g(ξ) be the marginal density functions of X and ξ, respectively. Then, a basic predictive power measure for GLMs [9] is based on the Kullback-Leibler information [6], and applying it to the above model, we have

tr(ΛΦΛ^T Ω̃) / |Ω| = tr(ΛΦΛ^T Ω^{-1}). (7)

The above measure was derived from a discussion of log odds ratios in GLMs [9], and is scale-invariant with respect to the manifest variables X_i. The numerator of (7) is the explained entropy of X by ξ, and the denominator is the dispersion of the unique factors in entropy, that is, the generalized variance of ε = (ε_1, ε_2, ..., ε_p)^T. Thus, (7) expresses the total effect (contribution) of factor vector ξ = (ξ_j) on manifest variable vector X = (X_i) in entropy, and is denoted by C(ξ → X) in the present paper. The entropy coefficient of determination (ECD) is calculated as follows [9]:

ECD = C(ξ → X) / (C(ξ → X) + p). (8)

The denominator of the above measure is interpreted as the variation of manifest variable vector X = (X_i) in entropy, and the numerator is the explained variation of random vector X in entropy. In this sense, ECD (8) is the factor contribution ratio of ξ = (ξ_j) for the whole entropy space of X = (X_i), and it expresses the standardized total effect of ξ = (ξ_1, ξ_2, ..., ξ_m)^T on the manifest variable vector X = (X_1, X_2, ..., X_p)^T, which is denoted by e_T(ξ → X) [8,10].
Analogously to (6), in the present paper, the ECD is denoted by RC(ξ → X), that is, the relative contribution of factor vector ξ for the whole space of manifest variable vector X in entropy.
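The entropy measures above can be evaluated numerically. The sketch below assumes the trace form C(ξ → X) = tr(ΛΦΛ^T Ω^{-1}) for (7) and the form C(ξ → X)/(C(ξ → X) + p) for the ECD, as described in the text; the loading matrix and factor correlation are hypothetical illustrative values:

```python
import numpy as np

# Hypothetical loadings and factor correlation matrix (oblique case).
Lam = np.array([[0.8, 0.1],
                [0.7, 0.2],
                [0.6, 0.3],
                [0.2, 0.8],
                [0.1, 0.9]])
Phi = np.array([[1.0, 0.3],
                [0.3, 1.0]])
p = Lam.shape[0]

# Unique variances for standardized manifest variables: diag(Sigma) = 1.
sigma2 = 1.0 - np.diag(Lam @ Phi @ Lam.T)
Omega = np.diag(sigma2)

# Total contribution of the factor vector in entropy (trace form of (7)).
C_total = np.trace(Lam @ Phi @ Lam.T @ np.linalg.inv(Omega))

# Entropy coefficient of determination (8): standardized total effect.
ECD = C_total / (C_total + p)

print(C_total, ECD)
```

Because each term λ_ij λ_ik / σ_i^2 is unchanged when X_i is rescaled, C_total is invariant under separate rescaling of the manifest variables, which is the scale-invariance property claimed for (7).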
Remark 1. Let Σ be the p × p variance-covariance matrix of manifest variable vector X = (X_1, X_2, ..., X_p)^T and let Φ be the m × m correlation matrix of ξ. Then, we have

Σ = ΛΦΛ^T + Ω. (9)

For assessing the goodness-of-fit of the models, the following overall coefficient of determination (OCD) has been suggested ([11], p. 60) on the basis of (9):

OCD = (|Σ| - |Ω|) / |Σ| = 1 - |Ω| / |Σ|.

Determinant |Ω| is the generalized variance of unique factor vector ε = (ε_1, ε_2, ..., ε_p)^T, and |Σ| is that of manifest variable vector X = (X_1, X_2, ..., X_p)^T. Then, OCD is interpreted as the ratio of the explained generalized variance of manifest variable vector X by common factor vector ξ = (ξ_1, ξ_2, ..., ξ_m)^T in p-dimensional Euclidean space. On the other hand, from (8), it follows that

e_T(ξ → X) = { (C(ξ → X) + p) - p } / (C(ξ → X) + p).

Hence, ECD is viewed as the ratio of the explained variation of the manifest variable vector in entropy.
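Remark 1 can be checked numerically. The sketch below, using hypothetical model matrices, forms Σ = ΛΦΛ^T + Ω as in (9) and evaluates the overall coefficient of determination in the closed form 1 − |Ω|/|Σ|, matching the generalized-variance interpretation given above:

```python
import numpy as np

# Hypothetical model matrices (same shapes as in the text: p = 5, m = 2).
Lam = np.array([[0.8, 0.1],
                [0.7, 0.2],
                [0.6, 0.3],
                [0.2, 0.8],
                [0.1, 0.9]])
Phi = np.array([[1.0, 0.3],
                [0.3, 1.0]])
Omega = np.diag(1.0 - np.diag(Lam @ Phi @ Lam.T))  # unique variances

# Decomposition (9) of the manifest-variable covariance matrix.
Sigma = Lam @ Phi @ Lam.T + Omega

# Overall coefficient of determination: explained generalized variance.
OCD = 1.0 - np.linalg.det(Omega) / np.linalg.det(Sigma)
print(OCD)
```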

Since Ω is diagonal, its cofactor matrix Ω̃ is also diagonal. If the common factors are statistically independent, that is, Φ is the identity matrix, it follows that

tr(ΛΛ^T Ω^{-1}) = Σ_{j=1}^m Σ_{i=1}^p λ_ij^2 / σ_i^2.

Thus, (7) is decomposed as

C(ξ → X) = Σ_{j=1}^m ( Σ_{i=1}^p λ_ij^2 / σ_i^2 ).

As detailed below, in the present paper, the contribution of factor ξ_j to X, C(ξ_j → X), is defined by

C(ξ_j → X) = Σ_{i=1}^p λ_ij^2 / σ_i^2. (10)

Remark 2. The above contribution is different from the conventional definition of factor contribution (2) unless σ_i^2 = 1, i = 1, 2, ..., p. In this sense, we may say that the standardization of manifest variables in entropy is obtained by setting all the unique factor variances to one.
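For the orthogonal case, the decomposition of (7) into per-factor contributions of the form Σ_i λ_ij^2 / σ_i^2 can be verified directly; the loadings below are hypothetical illustrative values and Φ is the identity:

```python
import numpy as np

# Hypothetical loadings for an orthogonal model (Phi = I).
Lam = np.array([[0.8, 0.1],
                [0.7, 0.2],
                [0.6, 0.3],
                [0.2, 0.8],
                [0.1, 0.9]])
sigma2 = 1.0 - (Lam ** 2).sum(axis=1)  # unique variances, standardized X

# Per-factor contributions: C(xi_j -> X) = sum_i lambda_ij^2 / sigma_i^2.
C_factor = ((Lam ** 2) / sigma2[:, None]).sum(axis=0)

# Total contribution with independent factors: tr(Lam Lam^T Omega^{-1}).
C_total = np.trace(Lam @ Lam.T @ np.diag(1.0 / sigma2))

# The factor-wise terms add up to the total contribution.
print(C_factor, C_total)
```

Note that C_factor differs from the conventional C_j = Σ_i λ_ij^2 unless every σ_i^2 = 1, which is exactly the point of Remark 2.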
In the next section, the contributions (effects) of factors ξ_j on manifest variable vector X are discussed in a general framework through an entropy-based path analysis [8].

Measurement of Factor Contribution Based on Entropy
A path diagram for the factor analysis model is given in Figure 1, in which the single-headed arrows indicate the directions of the effects of the factors and the double-headed curved arrows indicate the associations between the related variables. In this section, the common factors are assumed to be correlated, that is, we consider an oblique case, and an entropy-based path analysis [8] is applied to make a general discussion of the measurement of factor contributions.

Theorem 1.
In the factor analysis model (1),

C(ξ → X) = Σ_{i=1}^p Σ_{j=1}^m Σ_{k=1}^m λ_ij φ_jk λ_ik / σ_i^2.

Proof. Let f_i(x_i|ξ) be the conditional density functions of the manifest variables X_i given factor vector ξ; let f_i(x_i) be the marginal density functions of X_i; let f(x) be the marginal density function of X; and let g(ξ) be the marginal density function of common factor vector ξ. As the manifest variables are conditionally independent given factor vector ξ, the conditional density function of X is

f(x|ξ) = Π_{i=1}^p f_i(x_i|ξ).

From (7), since Ω is diagonal, we have

C(ξ → X) = tr(ΛΦΛ^T Ω^{-1}) = Σ_{i=1}^p (ΛΦΛ^T)_ii / σ_i^2 = Σ_{i=1}^p Σ_{j=1}^m Σ_{k=1}^m λ_ij φ_jk λ_ik / σ_i^2. □

In model (1) with correlation matrix Φ = (φ_jk), we have

C(ξ → X_i) = Σ_{j=1}^m Σ_{k=1}^m λ_ij φ_jk λ_ik / σ_i^2. (11)

The above quantity is referred to as the contribution of ξ to X_i, and is denoted as C(ξ → X_i). Let R_i be the multiple correlation coefficient of X_i and ξ = (ξ_j). Then, for standardized manifest variable X_i,

C(ξ → X_i) = R_i^2 / (1 - R_i^2). (12)

From Theorem 1, we then have

C(ξ → X) = Σ_{i=1}^p R_i^2 / (1 - R_i^2). (13)

Hence, Theorem 1 gives the following decomposition of the contribution of ξ on X into the contributions to the single manifest variables X_i (11):

C(ξ → X) = Σ_{i=1}^p C(ξ → X_i).

Remark 3. Notice that in the denominator of (4), the total contribution of all factors ξ_j is simply defined as the total sum of the individually assessed contributions, Σ_{k=1}^m C_k. On the other hand, in the present approach, the total effect (contribution) of factor vector ξ on manifest variable vector X is decomposed into the contributions to the manifest variables X_i, as in (12) and (13).
Let X_sub be any sub-vector of manifest variable vector X = (X_1, X_2, ..., X_p)^T. Then, the contribution of factor vector ξ to X_sub is defined by applying (7) to the sub-vector, that is,

C(ξ → X_sub) = tr(Λ_sub Φ Λ_sub^T Ω_sub^{-1}),

where Λ_sub and Ω_sub are the sub-matrices of Λ and Ω corresponding to the components of X_sub. From Theorem 1, we have the following corollary.
Corollary 1. Let X^(1) = (X_{i_1}, X_{i_2}, ..., X_{i_q})^T and X^(2) = (X_{j_1}, X_{j_2}, ..., X_{j_{p-q}})^T be a decomposition of manifest variable vector X = (X_1, X_2, ..., X_p)^T, where q < p. Then, for factor analysis model (1), it follows that

C(ξ → X) = C(ξ → X^(1)) + C(ξ → X^(2)).

Proof. From a discussion similar to the proof of Theorem 1, we have

C(ξ → X^(1)) = Σ_{k=1}^q C(ξ → X_{i_k}) and C(ξ → X^(2)) = Σ_{k=1}^{p-q} C(ξ → X_{j_k}).

Hence, the corollary follows. □
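Corollary 1 can be illustrated numerically: applying the trace form of (7) to the sub-vectors of a hypothetical five-variable model, the block contributions add up to C(ξ → X). All matrices below are illustrative assumptions:

```python
import numpy as np

# Hypothetical loadings, factor correlations, and unique variances.
Lam = np.array([[0.8, 0.1],
                [0.7, 0.2],
                [0.6, 0.3],
                [0.2, 0.8],
                [0.1, 0.9]])
Phi = np.array([[1.0, 0.3],
                [0.3, 1.0]])
sigma2 = 1.0 - np.diag(Lam @ Phi @ Lam.T)

def contribution(rows):
    """C(xi -> X_sub) for the sub-vector indexed by `rows` (trace form of (7))."""
    L = Lam[rows, :]
    return np.trace(L @ Phi @ L.T @ np.diag(1.0 / sigma2[rows]))

C_all = contribution([0, 1, 2, 3, 4])
C_1 = contribution([0, 1, 2])  # X(1): first block of manifest variables
C_2 = contribution([3, 4])     # X(2): second block

# The corollary: the contribution to X splits over the two blocks.
print(C_all, C_1 + C_2)
```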
Next, the standardized total effects of the single factors ξ_j on manifest variable vector X, that is, e_T(ξ_j → X), are calculated [8,10]. Let ξ_/j = (ξ_1, ξ_2, ..., ξ_{j-1}, ξ_{j+1}, ..., ξ_m)^T; let f(x, ξ_/j | ξ_j) be the conditional density function of X and ξ_/j given ξ_j; let f(x | ξ_j) be the conditional density function of X given ξ_j; let g(ξ_/j | ξ_j) be the conditional density function of ξ_/j given ξ_j; and let g_j(ξ_j) be the marginal density function of ξ_j. The total effect of ξ_j on X is then computed from these conditional distributions through the m × p covariance matrix cov(ξ, X | ξ_j) given ξ_j, whose (k, i) element is cov(ξ_k, X_i | ξ_j). The standardized total effect e_T(ξ_j → X) [8] is interpreted as the contribution ratio of factor ξ_j in the whole entropy space of X, and in the present paper it is denoted by RC(ξ_j → X). The contribution of factor ξ_j measured in entropy, C(ξ_j → X), is defined as the corresponding (unstandardized) total effect, and, analogously to (6), the relative contribution of factor ξ_j on X is given by the ratio of this contribution to the variation of X in entropy. Concerning the factor contributions of ξ_j to the single manifest variables X_i, that is, C(ξ_j → X_i), the following theorem can be stated.

Theorem 2.
In the factor analysis model (1),

C(ξ_j → X) = Σ_{i=1}^p C(ξ_j → X_i).

From the above theorem, we have the following corollary.

Corollary 2. Let X^(1) and X^(2) be a decomposition of manifest variable vector X as in Corollary 1. Then,

C(ξ_j → X) = C(ξ_j → X^(1)) + C(ξ_j → X^(2)).
Proof. From a discussion similar to that in the proof of Theorem 2, the corollary follows. □

Remark 4.
Let X_sub be any sub-vector of manifest variable vector X = (X_1, X_2, ..., X_p)^T.
By substituting X_sub for X in the above discussion, C(ξ → X_sub), C(ξ_j → X_sub), RC(ξ → X_sub), and RC(ξ_j → X_sub) can be defined.
For orthogonal factor analysis models, the following theorem holds true.
Theorem 3. In factor analysis model (1), if the common factors ξ_j are statistically independent, then

C(ξ_j → X) = Σ_{i=1}^p λ_ij^2 / σ_i^2, j = 1, 2, ..., m.

Proof. When the common factors are independent, Φ is the identity matrix, and the total effect of each ξ_j on X reduces to its direct effect; the statement then follows from (7). This completes the proof. □
From the above discussion, if the common factors ξ_j are statistically independent, (10) is derived. Moreover, we have

RC(ξ_j → X) = C(ξ_j → X) / (C(ξ → X) + p).

This measure is the relative contribution ratio of ξ_j for the variation of X in entropy. The relative contributions of ξ_j on X in entropy are thus calculated as follows:

RC(ξ_j → X) = ( Σ_{i=1}^p λ_ij^2 / σ_i^2 ) / ( Σ_{k=1}^m Σ_{i=1}^p λ_ik^2 / σ_i^2 + p ), j = 1, 2, ..., m.

It is difficult to use OCD for assessing factor contributions, because |Σ| cannot be decomposed as in the above discussion.
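The orthogonal relative contributions can be computed as sketched below; the form RC(ξ_j → X) = C(ξ_j → X)/(C(ξ → X) + p) is assumed here, under which the per-factor ratios add up to the ECD, consistent with the roughly 30% + 60% ≈ 90% pattern reported in the numerical example. The loadings are hypothetical:

```python
import numpy as np

# Hypothetical orthogonal loadings (illustrative values only).
Lam = np.array([[0.8, 0.1],
                [0.7, 0.2],
                [0.6, 0.3],
                [0.2, 0.8],
                [0.1, 0.9]])
sigma2 = 1.0 - (Lam ** 2).sum(axis=1)
p = Lam.shape[0]

# Per-factor and total contributions in entropy, as in (10).
C_factor = ((Lam ** 2) / sigma2[:, None]).sum(axis=0)
C_total = C_factor.sum()

# Relative contributions for the whole entropy space of X (assumed form).
RC_factor = C_factor / (C_total + p)
ECD = C_total / (C_total + p)
print(RC_factor, ECD)
```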

Numerical Example
In order to illustrate the present method, we use the data shown in Table 1 [12]. In this table, manifest variables X_1, X_2, and X_3 are subjects in the liberal arts, and variables X_4 and X_5 are subjects in the sciences. First, orthogonal factor analysis (varimax method, S-PLUS ver. 8.2) is applied to the data, and the results are shown in Table 2. From the estimated factor loadings, the first factor is interpreted as an ability relating to the liberal arts, and the second factor as one relating to the sciences. According to the factor contributions C(ξ_j → X) shown in Table 3, the contribution of factor ξ_2 is about twice that of factor ξ_1 from the viewpoint of entropy, and from the relative contributions RC(ξ_j → X), about 30% of the variation of manifest variable vector X in entropy is explained by factor ξ_1 and about 60% by factor ξ_2. The relative contribution RC(ξ → X) in Table 3 implies that about 90% of the entropy of manifest variable vector X is explained by the two factors. On the other hand, in the conventional method, the measured factor contributions of ξ_1 and ξ_2, that is, C_j, are almost equal (Table 4). As discussed in the present paper, the conventional method is intuitive and lacks a logical foundation for multidimensionally measuring the contributions of factors to manifest variable vectors. Table 5 decomposes the contribution of ξ to X into the components C(ξ_j → X_i). The contribution of ξ_2 to X_5 is prominent compared with the other contributions.

Table 1. Subject scores: Japanese (X_1), English (X_2), Social (X_3), Mathematics (X_4), Science (X_5).

Table 2. Factor loadings of orthogonal factor analysis (χ² = 0.55, df = 1, P = 0.45).

Table 5. Decomposition of factor contribution C(ξ → X) into C(ξ_j → X_i).

From the discussion in the previous section, the contributions of factors can be flexibly calculated. For example, it is reasonable to divide the manifest variable vector into X^(1) = (X_1, X_2, X_3)^T and X^(2) = (X_4, X_5)^T, because the first sub-vector is related to the liberal arts and the second to the sciences. First, the contributions of ξ_1 and ξ_2 to X^(1) are calculated according to the present method. From (14), 77% of the entropy of manifest variable sub-vector X^(1) is explained by the two factors, of which 67% is explained by factor ξ_1 (15) and 10% by factor ξ_2 (16). From the relative contributions (17) and (18), 87% of the total contribution of the two factors is made by factor ξ_1 and 13% by factor ξ_2.
Second, factor contributions in an oblique case are calculated. The estimated factor loadings and the correlation matrix of the factors based on the covarimin method are shown in Tables 6 and 7, respectively. Based on the factor loadings in Table 6, factor ξ_1 is interpreted as an ability for subjects in the liberal arts and factor ξ_2 as an ability for subjects in the sciences. The results are similar to those in the orthogonal case mentioned above, because the correlation between the factors is not strong. Table 8 shows the decomposition of C(ξ → X) based on Theorems 1 and 2. In this case, it is noted that C(ξ → X) ≠ C(ξ_1 → X) + C(ξ_2 → X); however, C(ξ → X) = Σ_{i=1}^5 C(ξ → X_i). The contributions of ξ_1 and ξ_2 to sub-vectors of manifest variable vector X can also be calculated as in the above orthogonal factor analysis. Table 9 illustrates the contributions of the factors on manifest variable vector X: factor ξ_1 explains 42% of the entropy of X and factor ξ_2 explains 71%.

Discussion
For orthogonal factor analysis models, the conventional method measures factor contributions (effects) by the sums of the squared factor loadings related to the factors (2); however, there is no logical foundation for interpreting these sums. It is more reasonable to measure factor contributions as the effects of the factors on the manifest variable vector concerned. The present paper has proposed a method of measuring factor contributions through entropy, that is, by applying an entropy-based path analysis approach. The method measures the contribution of factor vector ξ to manifest variable vector X and decomposes it into the contributions of factors ξ_j to the manifest variables X_i and/or to sub-vectors of X.
Comparing (2) and (10), when the unique factor variances are standardized as σ_i^2 = 1, the present method coincides with the conventional method. As discussed in this paper, the present method can also be employed in oblique factor analysis, and this has been illustrated in a numerical example. The present method has a theoretical basis for measuring factor contributions in a framework of entropy, and it is a novel approach to factor analysis. The present paper confines itself to the usual factor analysis model. More complicated models, such as mixtures of normal factor analysis models [13], are excluded, and a further study is needed to apply the entropy-based method to such models.

Author Contributions: N.E. conceived the study; N.E., M.T., and C.G.B. discussed the idea for measuring factor contributions; N.E. and M.T. proved the theorems in the paper; N.E. and C.G.B. computed the numerical example; N.E. wrote the paper, and the coauthors reviewed it.
Funding: Grant-in-aid for Scientific Research 18993038, Ministry of Education, Culture, Sports, Science, and Technology of Japan.