# An Entropy-Based Approach for Measuring Factor Contributions in Factor Analysis Models

1 Center for Educational Outreach and Admissions, Kyoto University, Kyoto 606-8501, Japan
2 Department of Mathematical Sciences, Osaka Prefecture University, Osaka 599-8532, Japan
3 Department of Statistics and Quantitative Methods, University of Milano Bicocca, 20126 Milano, Italy
\* Author to whom correspondence should be addressed.
Entropy 2018, 20(9), 634; https://doi.org/10.3390/e20090634
Received: 12 July 2018 / Revised: 20 August 2018 / Accepted: 20 August 2018 / Published: 24 August 2018

## Abstract

In factor analysis, the factor contributions of latent variables are conventionally assessed by the sums of the squared factor loadings related to the variables. First, the present paper examines issues with the conventional method. Second, an alternative entropy-based approach for measuring factor contributions is proposed. The method measures the contribution of the common factor vector to the manifest variable vector and decomposes it into the contributions of the individual factors. A numerical example is also provided to demonstrate the present approach.

## 1. Introduction

Factor analysis is a statistical method for extracting simple structures to explain the inter-relations between manifest and latent variables. The method originates in the work of [1], and the single-factor model was later extended to the multiple-factor model [2]. These days, factor analysis is widely applied in the behavioral sciences [3]; hence, it is important to interpret the extracted factors, and it is critical to explain how such factors influence the manifest variables, that is, to measure factor contributions. Let $X_i$ be manifest variables; $\xi_j$ latent variables (common factors); $\varepsilon_i$ unique factors related to $X_i$; and let $\lambda_{ij}$ be factor loadings, that is, the weights of factors $\xi_j$ in explaining $X_i$. Then, the factor analysis model is given as follows:

$$X_i = \sum_{j=1}^{m} \lambda_{ij} \xi_j + \varepsilon_i \quad (i = 1, 2, \dots, p), \qquad (1)$$

where the common factors $\xi_j$ and the unique factors $\varepsilon_i$ have zero means, and the unique factors have variances $\mathrm{var}(\varepsilon_i) = \sigma_i^2$ and are uncorrelated with each other and with the common factors.
For the simplicity of discussion, common factors are assumed to be mutually independent in this section, that is, we first consider an orthogonal factor analysis model. In the conventional approach, the contribution of factor $\xi_j$ to all manifest variables, $C_j$, is defined as follows:

$$C_j = \sum_{i=1}^{p} \lambda_{ij}^2. \qquad (2)$$
The above definition of factor contributions is based on the following decomposition of the total of the variances of the observed variables [4] (p. 59):

$$\sum_{i=1}^{p} \mathrm{var}(X_i) = \sum_{j=1}^{m} C_j + \sum_{i=1}^{p} \mathrm{var}(\varepsilon_i).$$

What physical meaning does the above quantity have? Applied to the manifest variables as observed, such a decomposition leads to scale-variant results. For this reason, factor contribution is usually considered on the standardized versions of the manifest variables. What does it mean to measure factor contributions by (2)? For standardized manifest variables, we have

$$\lambda_{ij}^2 = \mathrm{cor}(X_i, \xi_j)^2. \qquad (3)$$

Then, (2) is the sum of the coefficients of determination of all standardized manifest variables with respect to a single latent variable. The squared correlation coefficients (3), that is, $\mathrm{cor}(X_i, \xi_j)^2$, are the ratios of explained variances of a manifest variable, and in this sense, they can be interpreted as the contributions (effects) of the factors on the manifest variable $X_i$. However, what does the sum of these over all manifest variables $X_i$, that is, (2), mean? The conventional method may be intuitively reasonable for measuring factor contributions; however, we think it sensible to propose a method that measures factor contributions as the effects of factors on the manifest variable vector $X = (X_1, X_2, \dots, X_p)$, effects which are interpretable and have a theoretical basis. To the best of our knowledge, there is no previous research on this topic. The present paper provides an entropy-based solution to the problem. Entropy is a useful concept for measuring the uncertainty in systems of random variables and sample spaces [5], and it can be applied to measure multivariate dependences of random variables [6,7].
This paper proposes an entropy-based method for measuring the contributions of factors to the manifest variable vector concerned, which can treat not only orthogonal factors but also oblique cases. The present paper has five sections in addition to this one. In Section 2, the conventional method for measuring factor contributions is reviewed. Section 3 considers the factor analysis model in view of entropy and makes a preliminary discussion of the measurement of factor contributions. In Section 4, an entropy-based path analysis is applied as a tool to measure factor contributions. Contributions of factors are defined by the total effects of the factors on the manifest variable vector, and the contributions are decomposed into those to manifest variables and to subsets of manifest variables. Section 5 illustrates the present method with a numerical example. Finally, in Section 6, some conclusions are provided.

## 2. Relative Factor Contributions in the Conventional Method

In the conventional approach, for the orthogonal factor model (1), the contribution ratio of $\xi_j$ is defined by

$$\frac{C_j}{\sum_{k=1}^{m} C_k}. \qquad (4)$$

The above measure is referred to as the factor contribution ratio in the common factor space. Let $R_i$ be the multiple correlation coefficient of latent variable vector $\xi = (\xi_1, \xi_2, \dots, \xi_m)^T$ and manifest variable $X_i$. Then, for the standardized manifest variable $X_i$, we have

$$R_i^2 = \sum_{j=1}^{m} \lambda_{ij}^2. \qquad (5)$$

The above quantity can be interpreted as the effect (explanatory power) of the latent variable vector on manifest variable $X_i$; however, the denominator of (4) is the sum of those effects (5), and there is no theoretical basis for interpreting it. Another contribution ratio of $\xi_j$ is referred to as that in the whole space of $X = (X_i)$, and is defined by

$$\frac{C_j}{\sum_{i=1}^{p} \mathrm{var}(X_i)}. \qquad (6)$$

If the manifest variables are standardized, we have

$$\frac{C_j}{\sum_{i=1}^{p} \mathrm{var}(X_i)} = \frac{C_j}{p}.$$

Here, there is an issue similar to (4), because the denominator in (6) does not express the variation of the manifest variable vector $X = (X_i)$. Indeed, it is the sum of the variances of the manifest variables and does not include the covariances between them. In the next section, the factor analysis model (1) is reconsidered in the framework of generalized linear models (GLMs), and the effects (contributions) of latent variables $\xi_j$ on the manifest variable vector $X = (X_i)$, that is, factor contributions, are discussed through entropy [8].
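As a minimal numerical sketch of the quantities in this section, the conventional contributions and ratios can be computed directly from a loading matrix. The loadings below are the orthogonal estimates reported later in Table 2, and the ratio for standardized variables is taken as $C_j / p$ (an assumption that reproduces Table 4 up to rounding); variable names are ours.

```python
import numpy as np

# Orthogonal factor loadings from Table 2 (rows: X1..X5; columns: xi_1, xi_2)
Lambda = np.array([[0.60, 0.39],
                   [0.75, 0.24],
                   [0.65, 0.00],
                   [0.32, 0.59],
                   [0.00, 0.92]])

C = (Lambda ** 2).sum(axis=0)    # conventional contributions (2): C_j = sum_i lambda_ij^2
RC_tilde = C / Lambda.shape[0]   # ratio in the whole space for standardized variables: C_j / p
RC = C / C.sum()                 # ratio (4) in the common factor space: C_j / sum_k C_k

print(C, RC_tilde, RC)  # approx (1.45, 1.40), (0.29, 0.28), (0.51, 0.49), cf. Table 4
```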

## 3. Factor Analysis Model and Entropy

It is assumed that factors $\varepsilon_i$ and $\xi_j$ are normally distributed, and the factor analysis model (1) is reconsidered in the GLM framework. Let $\Lambda = (\lambda_{ij})$ be a $p \times m$ factor loading matrix; let $\Phi$ be the $m \times m$ correlation matrix of common factor vector $\xi = (\xi_1, \xi_2, \dots, \xi_m)^T$; and let $\Omega$ be the variance-covariance matrix of unique factor vector $\varepsilon = (\varepsilon_1, \varepsilon_2, \dots, \varepsilon_p)^T$. The conditional density function of $X$ given $\xi$, $f(x|\xi)$, is normal with mean $\Lambda\xi$ and variance-covariance matrix $\Omega$, and is given as follows:

$$f(x|\xi) = \frac{1}{(2\pi)^{p/2}|\Omega|^{1/2}} \exp\left\{ -\frac{1}{2|\Omega|}(x - \Lambda\xi)^T \mathrm{cof}(\Omega) (x - \Lambda\xi) \right\},$$

where $\mathrm{cof}(\Omega)$ is the cofactor matrix of $\Omega$. Let $f(x)$ and $g(\xi)$ be the marginal density functions of $X$ and $\xi$, respectively. Then, a basic predictive power measure for GLMs [9] is based on the Kullback–Leibler information [6], and applying it to the above model, we have

$$\frac{\mathrm{tr}\left\{ \mathrm{cof}(\Omega)\, \Lambda \Phi \Lambda^T \right\}}{|\Omega|} = \mathrm{tr}\left( \Omega^{-1} \Lambda \Phi \Lambda^T \right). \qquad (7)$$

The above measure was derived from a discussion on log odds ratios in GLMs [9], and is scale-invariant with respect to the manifest variables $X_i$. The numerator of (7) is the explained entropy of $X$ by $\xi$, and the denominator is the dispersion of the unique factors in entropy, that is, the generalized variance of $\varepsilon$. Thus, (7) expresses the total effect (contribution) of factor vector $\xi$ on manifest variable vector $X$ in entropy, and is denoted by $C(\xi \to X)$ in the present paper. The entropy coefficient of determination (ECD) is calculated as follows [9]:

$$\mathrm{ECD} = \frac{C(\xi \to X)}{C(\xi \to X) + 1}. \qquad (8)$$

The denominator of the above measure is interpreted as the variation of the manifest variable vector in entropy, and the numerator is the explained variation of random vector $X$ in entropy. In this sense, ECD (8) is the factor contribution ratio of $\xi = (\xi_j)$ for the whole entropy space of $X = (X_i)$, and it expresses the standardized total effect of $\xi = (\xi_1, \xi_2, \dots, \xi_m)^T$ on the manifest variable vector $X$, which is denoted by $e_T(\xi \to X)$ [8,10]. As for (6), in the present paper, the ECD is denoted by $\widetilde{RC}(\xi \to X)$, that is, the relative contribution of factor vector $\xi$ for the whole space of manifest variable vector $X$ in entropy.
Remark 1.
Let $\Sigma$ be the $p \times p$ variance-covariance matrix of manifest variable vector $X = (X_1, X_2, \dots, X_p)^T$ and let $\Phi$ be the $m \times m$ correlation matrix of $\xi$. Then, we have

$$\Sigma = \Lambda \Phi \Lambda^T + \Omega. \qquad (9)$$
For assessing the goodness-of-fit of the models, the following overall coefficient of determination (OCD) is suggested ([11], p. 60) on the basis of (9):

$$\mathrm{OCD} = 1 - \frac{|\Omega|}{|\Sigma|}.$$
Determinant $|\Omega|$ is the generalized variance of unique factor vector $\varepsilon = (\varepsilon_1, \varepsilon_2, \dots, \varepsilon_p)^T$, and $|\Sigma|$ is that of manifest variable vector $X = (X_1, X_2, \dots, X_p)^T$. Then, OCD is interpreted as the ratio of the explained generalized variance of the manifest variable vector by common factor vector $\xi$ in the $p$-dimensional Euclidean space. On the other hand, from (8), ECD is viewed as the ratio of the explained variation of the manifest variable vector in entropy.
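Decomposition (9) can be checked numerically. The sketch below assumes the standard form $\Sigma = \Lambda \Phi \Lambda^T + \Omega$ and uses the orthogonal estimates from Table 2; for standardized manifest variables, the diagonal of the implied $\Sigma$ should be close to one.

```python
import numpy as np

# Orthogonal estimates from Table 2
Lambda = np.array([[0.60, 0.39],
                   [0.75, 0.24],
                   [0.65, 0.00],
                   [0.32, 0.59],
                   [0.00, 0.92]])
Phi = np.eye(2)                                   # orthogonal case: factor correlation matrix = I
Omega = np.diag([0.50, 0.38, 0.58, 0.55, 0.16])   # unique variances (uniquenesses) from Table 2

Sigma = Lambda @ Phi @ Lambda.T + Omega           # implied covariance matrix, decomposition (9)
print(np.diag(Sigma))  # each entry close to 1, as expected for standardized manifest variables
```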
The cofactor matrix of $\Omega$, $\mathrm{cof}(\Omega)$, is diagonal, and its $(i,i)$ elements are $\prod_{k \neq i} \sigma_k^2$. If common factors are statistically independent ($\Phi = I$), it follows that

$$\mathrm{tr}\left\{ \mathrm{cof}(\Omega)\, \Lambda \Lambda^T \right\} = \sum_{j=1}^{m} \sum_{i=1}^{p} \lambda_{ij}^2 \prod_{k \neq i} \sigma_k^2.$$

Thus, (7) is decomposed as

$$C(\xi \to X) = \sum_{j=1}^{m} \sum_{i=1}^{p} \frac{\lambda_{ij}^2}{\sigma_i^2}.$$

As detailed below, in the present paper, the contribution of factor $\xi_j$ to $X$, $C(\xi_j \to X)$, is defined by

$$C(\xi_j \to X) = \sum_{i=1}^{p} \frac{\lambda_{ij}^2}{\sigma_i^2}. \qquad (10)$$
Remark 2.
The above contribution is different from the conventional definition of factor contribution (2), unless $\sigma_i^2 = 1$ for all $i$. In this sense, we may say that the standardization of manifest variables in entropy is obtained by setting all the unique factor variances to one.
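The orthogonal decomposition above can be tabulated directly. The sketch below assumes the per-variable contributions take the form $\lambda_{ij}^2 / \sigma_i^2$ and uses the Table 2 estimates; this reproduces Table 5 and the totals in Table 3 up to the rounding of the published loadings (hence the slight excess for $\xi_2$ against the reported 6.23).

```python
import numpy as np

Lambda = np.array([[0.60, 0.39],
                   [0.75, 0.24],
                   [0.65, 0.00],
                   [0.32, 0.59],
                   [0.00, 0.92]])
sigma2 = np.array([0.50, 0.38, 0.58, 0.55, 0.16])  # unique variances from Table 2

# Entropy-based contribution of factor j to variable i: lambda_ij^2 / sigma_i^2 (cf. Table 5)
contrib = Lambda ** 2 / sigma2[:, None]

totals = contrib.sum(axis=0)  # C(xi_j -> X), cf. the reported (3.11, 6.23) in Table 3
print(contrib)
print(totals)
```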
In the next section, the contributions (effects) of factors $\xi_j$ to the manifest variable vector are discussed in a general framework through an entropy-based path analysis [8].

## 4. Measurement of Factor Contribution Based on Entropy

A path diagram for the factor analysis model is given in Figure 1, in which the single-headed arrows indicate the directions of the effects of the factors, and the double-headed curved arrows indicate the associations between the related variables. In this section, common factors are assumed to be correlated, that is, we consider an oblique case, and an entropy-based path analysis [8] is applied to make a general discussion of the measurement of factor contributions.
Theorem 1.
In the factor analysis model (1),

$$C(\xi \to X) = \sum_{i=1}^{p} \frac{\left( \Lambda \Phi \Lambda^T \right)_{ii}}{\sigma_i^2},$$

where $(\Lambda \Phi \Lambda^T)_{ii}$ denotes the $i$-th diagonal element of $\Lambda \Phi \Lambda^T$.
Proof.
Let $f_i(x_i|\xi)$ be the conditional density functions of manifest variables $X_i$, given factor vector $\xi$; let $f_i(x_i)$ be the marginal density functions of $X_i$; let $f(x)$ be the marginal density function of $X$; and let $g(\xi)$ be the marginal density function of common factor vector $\xi$. As the manifest variables are conditionally independent, given factor vector $\xi$, the conditional density function of $X$ is

$$f(x|\xi) = \prod_{i=1}^{p} f_i(x_i|\xi).$$
From (7), we have
☐
In model (1) with correlation matrix $\Phi = (\varphi_{jk})$, we have

$$\frac{\left( \Lambda \Phi \Lambda^T \right)_{ii}}{\sigma_i^2} = \frac{\sum_{j=1}^{m} \sum_{k=1}^{m} \lambda_{ij} \varphi_{jk} \lambda_{ik}}{\sigma_i^2}. \qquad (11)$$

The above quantity is referred to as the contribution of $\xi$ to $X_i$, and is denoted as $C(\xi \to X_i)$. Let $R_i$ be the multiple correlation coefficient of $X_i$ and $\xi = (\xi_j)$. Then,

$$C(\xi \to X_i) = \frac{R_i^2}{1 - R_i^2}.$$

From Theorem 1, we then have

$$C(\xi \to X) = \sum_{i=1}^{p} \frac{R_i^2}{1 - R_i^2}.$$

Hence, Theorem 1 gives the following decomposition of the contribution of $\xi$ on $X$ into those on the single manifest variables $X_i$ (11):

$$C(\xi \to X) = \sum_{i=1}^{p} C(\xi \to X_i). \qquad (12)$$
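The per-variable decomposition of Theorem 1 can be checked numerically. The sketch below assumes $C(\xi \to X_i) = (\Lambda \Phi \Lambda^T)_{ii} / \sigma_i^2$ and uses the oblique estimates reported later in Table 6 and Table 7; the small discrepancies against Table 8 come from the rounding of the published loadings.

```python
import numpy as np

# Oblique (covarimin) loadings from Table 6 and factor correlation 0.315 from Table 7
Lambda = np.array([[0.59,  0.24],
                   [0.77,  0.00],
                   [0.68, -0.12],
                   [0.29,  0.52],
                   [0.00,  0.92]])
Phi = np.array([[1.000, 0.315],
                [0.315, 1.000]])
sigma2 = np.array([0.50, 0.41, 0.58, 0.55, 0.16])  # uniquenesses from Table 6

# Per-variable contributions C(xi -> X_i) = (Lambda Phi Lambda^T)_ii / sigma_i^2
C_i = np.diag(Lambda @ Phi @ Lambda.T) / sigma2
print(C_i, C_i.sum())
# approx (0.99, 1.45, 0.73, 0.82, 5.29), summing to about 9.3;
# Table 8 reports (1.01, 1.44, 0.73, 0.82, 5.43) and 9.43 from unrounded loadings
```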
Remark 3.
Notice that in the denominator of (4), the total contribution of all factors $\xi_j$ is simply defined as the total sum of the individually assessed contributions:

$$\sum_{j=1}^{m} C_j.$$

On the other hand, in the present approach, the total effect (contribution) of factor vector $\xi$ on manifest variable vector $X$ is decomposed into those on the single manifest variables $X_i$.
Let $X_{sub}$ be any sub-vector of manifest variable vector $X = (X_1, X_2, \dots, X_p)^T$. Then, the contribution of factor vector $\xi$ to $X_{sub}$, $C(\xi \to X_{sub})$, is defined in the same manner as (7), applied to the sub-vector.
From Theorem 1, we have the following corollary.
Corollary 1.
Let $X_{(1)} = (X_{i_1}, X_{i_2}, \dots, X_{i_q})^T$ and $X_{(2)} = (X_{j_1}, X_{j_2}, \dots, X_{j_{p-q}})^T$ be a decomposition of manifest variable vector $X = (X_1, X_2, \dots, X_p)^T$, where $q < p$. Then, for factor analysis model (1), it follows that

$$C(\xi \to X) = C(\xi \to X_{(1)}) + C(\xi \to X_{(2)}).$$
Proof:
From a similar discussion to the proof of Theorem 1, we have
Hence, the corollary follows.
Next, the standardized total effects of the single factors on the manifest variable vector, that is, $e_T(\xi_j \to X)$, are calculated [8,10]. Let $\xi_{/j} = (\xi_1, \xi_2, \dots, \xi_{j-1}, \xi_{j+1}, \dots, \xi_m)^T$; let $f(x, \xi_{/j} | \xi_j)$ be the conditional density function of $X$ and $\xi_{/j}$ given $\xi_j$; let $f(x | \xi_j)$ be the conditional density function of $X$ given $\xi_j$; let $g(\xi_{/j} | \xi_j)$ be the conditional density function of $\xi_{/j}$ given $\xi_j$; and let $g_j(\xi_j)$ be the marginal density function of $\xi_j$. Then, the total effect of $\xi_j$ on $X$ in entropy, $C(\xi_j \to X)$, is obtained, where the related $m \times p$ conditional covariance matrix given $\xi_j$ has $(k, i)$ elements $\mathrm{cov}(\xi_k, X_i | \xi_j)$. The standardized total effect is given by

$$e_T(\xi_j \to X) = \frac{C(\xi_j \to X)}{C(\xi \to X) + 1}. \qquad (13)$$

The standardized total effect [8] is interpreted as the contribution ratio of factor $\xi_j$ in the whole entropy space of $X$, and in the present paper, it is denoted by $\widetilde{RC}(\xi_j \to X)$. The contribution of factor $\xi_j$ measured in entropy is given by the total effect $C(\xi_j \to X)$ above. As for (6), the relative contribution of factor $\xi_j$ on $X$ is given by

$$RC(\xi_j \to X) = \frac{C(\xi_j \to X)}{C(\xi \to X)}.$$
Concerning factor contributions of $ξ j$ on the single manifest variables $X i$, that is, $C ( ξ j → X i )$, the following theorem can be stated.
Theorem 2.
In the factor analysis model (1),
Proof:
From Theorem 1, it follows that
Then, we have
and the theorem follows. ☐
From the above theorem, we have the following corollary.
Corollary 2.
Let $X_{(1)} = (X_{i_1}, X_{i_2}, \dots, X_{i_q})^T$ and $X_{(2)} = (X_{j_1}, X_{j_2}, \dots, X_{j_{p-q}})^T$ be a decomposition of manifest variable vector $X = (X_1, X_2, \dots, X_p)^T$, where $q < p$. Then, for factor analysis model (1), it follows that

$$C(\xi_j \to X) = C(\xi_j \to X_{(1)}) + C(\xi_j \to X_{(2)}).$$
Proof:
From a similar discussion in the proof of Theorem 2, the corollary follows. ☐
Remark 4.
Let $X s u b$ be any sub-vector of manifest variable vector $X = ( X 1 , X 2 , … , X p ) T$. By substituting $X$ for $X s u b$ in the above discussion, $C ( ξ → X s u b )$, $C ( ξ j → X s u b )$, $RC ˜ ( ξ j → X s u b )$, and $RC ( ξ j → X s u b )$ can be defined.
For orthogonal factor analysis models, the following theorem holds true.
Theorem 3.
In factor analysis model (1), if the common factors $\xi_j$ are statistically independent, then

$$C(\xi_j \to X_i) = \frac{\lambda_{ij}^2}{\sigma_i^2}.$$
Proof:
From model (1), we have
This completes the theorem. ☐
From the above discussion, if the common factors $\xi_j$ are statistically independent, (10) is derived. Moreover, we have

$$\widetilde{RC}(\xi_j \to X) = \frac{C(\xi_j \to X)}{C(\xi \to X) + 1}.$$

This measure is the relative contribution ratio of $\xi_j$ for the variation of $X$ in entropy. The relative contributions of $\xi_j$ on $X$ in entropy are calculated as follows:

$$RC(\xi_j \to X) = \frac{C(\xi_j \to X)}{C(\xi \to X)}.$$
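The two ratios can be reproduced from the reported contribution totals. In this sketch, the variation of $X$ in entropy is taken as $C(\xi \to X) + 1$, an assumption that matches the ratios reported in Table 3.

```python
# Contributions reported in Table 3 (orthogonal case)
C = {"xi1": 3.11, "xi2": 6.23}
C_total = sum(C.values())  # C(xi -> X) = 9.34

# Share of the entropy variation of X, and share of the total factor contribution
RC_tilde = {j: c / (C_total + 1.0) for j, c in C.items()}
RC = {j: c / C_total for j, c in C.items()}

print(RC_tilde, RC)  # approx (0.30, 0.60) and (0.33, 0.67), matching Table 3
```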
Remark 5.
It is difficult to use OCD for assessing factor contributions, because $| Σ |$ cannot be decomposed as in the above discussion.

## 5. Numerical Example

In order to illustrate the present method, we use the data shown in Table 1 [12]. In this table, manifest variables $X_1$, $X_2$, and $X_3$ are subjects in the liberal arts, and $X_4$ and $X_5$ are subjects in the sciences. First, orthogonal factor analysis (varimax method, S-PLUS ver. 8.2) is applied to the data, and the results are shown in Table 2. From the estimated factor loadings, the first factor is interpreted as an ability relating to the liberal arts, and the second factor as that for the sciences. According to the factor contributions $C(\xi_j \to X)$ shown in Table 3, the contribution of factor $\xi_2$ is about twice as large as that of factor $\xi_1$ from the viewpoint of entropy, and from the relative contributions $\widetilde{RC}(\xi_j \to X)$, about 30% of the variation of manifest variable vector $X$ in entropy is explained by factor $\xi_1$ and about 60% by factor $\xi_2$. The relative contribution $\widetilde{RC}(\xi \to X)$ in Table 3 implies that about 90% of the entropy of manifest variable vector $X$ is explained by the two factors. On the other hand, in the conventional method, the measured factor contributions of $\xi_1$ and $\xi_2$, that is, $C_j$, are almost equal (Table 4). As discussed in the present paper, the conventional method is intuitive and lacks a logical foundation for multidimensionally measuring the contributions of factors to manifest variable vectors. Table 5 decomposes the contribution of $\xi$ to $X$ into components $C(\xi_j \to X_i)$. The contribution of $\xi_2$ to $X_5$ is prominent compared with the other contributions.
From the discussion in the previous section, the contributions of factors are flexibly calculated. For example, it is reasonable to divide the manifest variable vector into $X ( 1 ) = ( X 1 , X 2 , X 3 )$ and $X ( 2 ) = ( X 4 , X 5 )$, because the first sub-vector is related to the liberal arts and the second one to the sciences. First, the contributions of $ξ 1$ and $ξ 2$ to $X ( 1 )$ are calculated according to the present method, and the details are given as follows:
From (14), 77% of the entropy of manifest variable sub-vector $X_{(1)}$ is explained by the two factors, of which 67% is explained by factor $\xi_1$ (15) and 10% by factor $\xi_2$ (16). From the relative contributions (17) and (18), 87% of the total contribution of the two factors is made by factor $\xi_1$ and 13% by factor $\xi_2$.
On the other hand, the contributions of $ξ 1$ and $ξ 2$ on $X ( 2 ) = ( X 4 , X 5 )$ are calculated as follows:
From (19), 86% of the entropy of manifest variable sub-vector $X_{(2)}$ is explained by the two factors, of which 3% is explained by factor $\xi_1$ (20) and 83% by factor $\xi_2$ (21). The contribution ratios of the factors to sub-vector $X_{(2)}$ are calculated in (22) and (23): 97% of the total contribution is made by factor $\xi_2$.
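The sub-vector calculations above can be reproduced under the same assumptions as before (orthogonal per-variable contributions $\lambda_{ij}^2 / \sigma_i^2$, entropy variation taken as $C + 1$), using the Table 2 estimates:

```python
import numpy as np

Lambda = np.array([[0.60, 0.39],
                   [0.75, 0.24],
                   [0.65, 0.00],
                   [0.32, 0.59],
                   [0.00, 0.92]])
sigma2 = np.array([0.50, 0.38, 0.58, 0.55, 0.16])
contrib = Lambda ** 2 / sigma2[:, None]       # C(xi_j -> X_i), orthogonal case (cf. Table 5)

for name, idx in [("X(1)", [0, 1, 2]), ("X(2)", [3, 4])]:
    C_j = contrib[idx].sum(axis=0)            # C(xi_j -> X_sub) for each factor
    C_sub = C_j.sum()                         # C(xi -> X_sub)
    print(name, C_j / (C_sub + 1.0), C_j / C_sub)
# X(1): entropy shares approx (0.67, 0.10); relative contributions approx (0.87, 0.13)
# X(2): entropy shares approx (0.03, 0.83); relative contributions approx (0.03, 0.97)
```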
Second, factor contributions in an oblique case are calculated. The estimated factor loadings and the correlation matrix of factors based on the covarimin method are shown in Table 6 and Table 7, respectively. Based on factor loadings in Table 6, factor $ξ 1$ is interpreted as an ability for subjects in the liberal arts and factor $ξ 2$ as an ability for subjects in sciences. The results are similar to those in the orthogonal case mentioned above, because the correlation between the factors is not strong. Table 8 shows the decomposition of $C ( ξ → X )$ based on Theorems 1 and 2. In this case, it is noted that $C ( ξ → X ) ≠ C ( ξ 1 → X ) + C ( ξ 2 → X )$; however, $C ( ξ → X ) = ∑ i = 1 5 C ( ξ → X i )$. According to the table, the contributions of $ξ 1$ and $ξ 2$ to sub-vectors of manifest variable vector $X$ can also be calculated as in the above orthogonal factor analysis. Table 9 illustrates the contributions of factors on manifest variable vector. Factor $ξ 1$ explains 42% of the entropy of $X$ and factor $ξ 2$ explains 71%.

## 6. Discussion

For orthogonal factor analysis models, the conventional method measures factor contributions (effects) by the sums (totals) of the squared factor loadings related to the factors (2); however, there is no logical foundation for how they can be interpreted. It is reasonable to measure factor contributions as the effects of factors on the manifest variable vector concerned. The present paper has proposed a method of measuring factor contributions through entropy, that is, by applying an entropy-based path analysis approach. The method measures the contribution of factor vector $\xi$ to manifest variable vector $X$ and decomposes it into those of factors $\xi_j$ to manifest variables $X_i$ and/or those to sub-vectors of $X$. Comparing (2) and (10), when the unique factor variances are standardized as $\sigma_i^2 = 1$, the present method coincides with the conventional one. As discussed in this paper, the present method can also be employed in oblique factor analysis, as illustrated in the numerical example. The present method has a theoretical basis for measuring factor contributions in a framework of entropy, and it is a novel approach for factor analysis. The present paper confines itself to the usual factor analysis model. A more complicated model with a mixture of normal factor analysis models [13] is excluded, and further study is needed to apply the entropy-based method to that model.

## Author Contributions

N.E. conceived the study; N.E., M.T., and C.G.B. discussed the idea for measuring factor contributions; N.E. and M.T. proved the theorems in the paper; N.E. and C.G.B. computed the numerical example; N.E. wrote the paper, and the coauthors reviewed it.

## Funding

Grant-in-aid for Scientific Research 18993038, Ministry of Education, Culture, Sports, Science, and Technology of Japan.

## Acknowledgments

The authors would like to thank the referees for their useful comments and suggestions to improve the first version of the present paper.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Spearman, C. “General intelligence,” objectively determined and measured. Am. J. Psychol. 1904, 15, 201–293. [Google Scholar] [CrossRef]
2. Thurstone, L.L. The Vectors of Mind: Multiple Factor Analysis for the Isolation of Primary Traits; The University of Chicago Press: Chicago, IL, USA, 1935. [Google Scholar]
3. Yong, A.G.; Pearce, S. A beginner’s guide to factor analysis: Focusing on exploratory factor analysis. Quant. Methods Psychol. 2013, 9, 79–94. [Google Scholar] [CrossRef]
4. Bartholomew, D.J. Latent Variable Models and Factor Analysis; Oxford University Press: New York, NY, USA, 1987. [Google Scholar]
5. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
6. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
7. Joe, H. Relative entropy measures of multivariate dependence. J. Am. Stat. Assoc. 1989, 84, 157–164. [Google Scholar]
8. Eshima, N.; Tabata, M.; Borroni, G.C.; Kano, Y. An entropy-based approach to path analysis of structural generalized linear models: A basic idea. Entropy 2015, 17, 5117–5132. [Google Scholar] [CrossRef]
9. Eshima, N.; Tabata, M. Entropy coefficient of determination for generalized linear models. Comput. Stat. Data Anal. 2010, 54, 1381–1389. [Google Scholar] [CrossRef]
10. Eshima, N.; Borroni, C.G.; Tabata, M. Relative importance assessment of explanatory variables in generalized linear models: An entropy-based approach. Stat. Appl. 2016, 16, 107–122. [Google Scholar]
11. Everitt, B.S. An Introduction to Latent Variable Models; Chapman and Hall: London, UK, 1984. [Google Scholar]
12. Adachi, K.; Trendafilov, N.T. Some mathematical properties of the matrix decomposition solution in factor analysis. Psychometrika 2018, 83, 407–424. [Google Scholar] [CrossRef] [PubMed]
13. Attias, H. Independent factor analysis. Neural Comput. 1999, 11, 803–851. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Path diagram for factor analysis model (1) $(m = 2)$.
Table 1. Data for illustrating factor analysis.

| Subject | $X_1$ | $X_2$ | $X_3$ | $X_4$ | $X_5$ |
|---|---|---|---|---|---|
| 1 | 64 | 65 | 83 | 69 | 70 |
| 2 | 54 | 56 | 53 | 40 | 32 |
| 3 | 80 | 68 | 75 | 74 | 84 |
| 4 | 71 | 65 | 40 | 41 | 68 |
| 5 | 63 | 61 | 60 | 56 | 80 |
| 6 | 47 | 62 | 33 | 57 | 87 |
| 7 | 42 | 53 | 50 | 38 | 23 |
| 8 | 54 | 17 | 46 | 58 | 58 |
| 9 | 57 | 48 | 59 | 26 | 17 |
| 10 | 54 | 72 | 58 | 55 | 30 |
| 11 | 67 | 82 | 52 | 50 | 44 |
| 12 | 71 | 82 | 54 | 67 | 28 |
| 13 | 53 | 67 | 74 | 75 | 53 |
| 14 | 90 | 96 | 63 | 87 | 100 |
| 15 | 71 | 69 | 74 | 76 | 42 |
| 16 | 61 | 100 | 92 | 53 | 58 |
| 17 | 61 | 69 | 48 | 63 | 71 |
| 18 | 87 | 84 | 64 | 65 | 53 |
| 19 | 77 | 75 | 78 | 37 | 44 |
| 20 | 57 | 27 | 41 | 54 | 30 |
Table 2. Estimated factor loadings (orthogonal case).

| | $X_1$ | $X_2$ | $X_3$ | $X_4$ | $X_5$ |
|---|---|---|---|---|---|
| $\xi_1$ | 0.60 | 0.75 | 0.65 | 0.32 | 0.00 |
| $\xi_2$ | 0.39 | 0.24 | 0.00 | 0.59 | 0.92 |
| uniqueness | 0.50 | 0.38 | 0.58 | 0.55 | 0.16 |
Table 3. Factor contributions based on entropy (orthogonal case).

| | $\xi_1$ | $\xi_2$ | Total |
|---|---|---|---|
| $C(\xi_j \to X)$ | 3.11 | 6.23 | $9.34 = C(\xi \to X)$ |
| $\widetilde{RC}(\xi_j \to X)$ | 0.30 | 0.60 | $0.90 = \widetilde{RC}(\xi \to X)$ |
| $RC(\xi_j \to X)$ | 0.33 | 0.67 | 1 |
Table 4. Factor contributions with the conventional method.

| | $\xi_1$ | $\xi_2$ | Total |
|---|---|---|---|
| $C_j$ | 1.44 | 1.39 | 2.83 |
| $\widetilde{RC}_j$ | 0.29 | 0.28 | 0.57 |
| $RC_j$ | 0.51 | 0.49 | 1 |
Table 5. Decomposition of factor contribution $C(\xi \to X)$ into $C(\xi_j \to X_i)$.

| | $X_1$ | $X_2$ | $X_3$ | $X_4$ | $X_5$ | Total |
|---|---|---|---|---|---|---|
| $\xi_1$ | 0.72 | 1.49 | 0.72 | 0.19 | 0.00 | 3.11 |
| $\xi_2$ | 0.30 | 0.15 | 0.00 | 0.63 | 5.14 | 6.23 |
| Total | 1.01 | 1.64 | 0.72 | 0.82 | 5.14 | 9.34 |
Table 6. Estimated factor loadings (oblique case).

| | $X_1$ | $X_2$ | $X_3$ | $X_4$ | $X_5$ |
|---|---|---|---|---|---|
| $\xi_1$ | 0.59 | 0.77 | 0.68 | 0.29 | 0.00 |
| $\xi_2$ | 0.24 | 0.00 | −0.12 | 0.52 | 0.92 |
| uniqueness | 0.50 | 0.41 | 0.58 | 0.55 | 0.16 |
Table 7. Correlation matrix of factors.

| | $\xi_1$ | $\xi_2$ |
|---|---|---|
| $\xi_1$ | 1 | 0.315 |
| $\xi_2$ | 0.315 | 1 |
Table 8. Decomposition of factor contribution $C(\xi \to X)$ into $C(\xi_j \to X_i)$ (oblique case).

| | $X_1$ | $X_2$ | $X_3$ | $X_4$ | $X_5$ | Total |
|---|---|---|---|---|---|---|
| $\xi_1$ | 0.90 | 1.44 | 0.70 | 0.37 | 0.54 | 3.95 |
| $\xi_2$ | 0.37 | 0.14 | 0.01 | 0.68 | 5.43 | 6.65 |
| $C(\xi \to X_i)$ | 1.01 | 1.44 | 0.73 | 0.82 | 5.43 | $C(\xi \to X) = 9.43$ |
Table 9. Factor contributions based on entropy (oblique case).

| | $\xi_1$ | $\xi_2$ | Effect of $\xi$ on $X$ |
|---|---|---|---|
| $C(\xi_j \to X)$ | 3.95 | 6.65 | $C(\xi \to X) = 9.43$ |
| $\widetilde{RC}(\xi_j \to X)$ | 0.38 | 0.64 | $\widetilde{RC}(\xi \to X) = 0.90$ |
| $RC(\xi_j \to X)$ | 0.42 | 0.71 | |

Eshima, N.; Tabata, M.; Giovanni Borroni, C. An Entropy-Based Approach for Measuring Factor Contributions in Factor Analysis Models. Entropy 2018, 20, 634. https://doi.org/10.3390/e20090634