
# An Entropy-Based Tool to Help the Interpretation of Common-Factor Spaces in Factor Analysis

1. Center for Educational Outreach and Admissions, Kyoto University, Yoshida-machi, Sakyoku, Kyoto 660-8501, Japan
2. Department of Statistics and Quantitative Methods, University of Milano Bicocca, 20126 Milano, Italy
3. Department of Mathematical Sciences, Osaka Prefecture University, Osaka 599-8532, Japan
4. Department of Applied Mathematics, Tokyo University of Science, Kagurazaka, Shinzyukuku, Tokyo 162-0825, Japan

\* Author to whom correspondence should be addressed.
Entropy 2021, 23(2), 140; https://doi.org/10.3390/e23020140
Received: 27 December 2020 / Revised: 18 January 2021 / Accepted: 19 January 2021 / Published: 24 January 2021

## Abstract

This paper proposes a method for deriving interpretable common factors based on canonical correlation analysis applied to the vectors of common factors and manifest variables in the factor analysis model. First, an entropy-based method for measuring factor contributions is reviewed. Second, the entropy-based contribution measure of the common-factor vector is decomposed into those of canonical common factors, and it is also shown that the importance order of factors is that of their canonical correlation coefficients. Third, the method is applied to derive interpretable common factors. Numerical examples are provided to demonstrate the usefulness of the present approach.

## 1. Introduction

In factor analysis, extracting interpretable factors is important for practical data analysis. To this end, methods for factor rotation have been studied, e.g., varimax [1] and orthomax [2] for orthogonal rotations, and oblimin [3] and orthoblique [4] for oblique rotations. The basic idea of factor rotation derives from Thurstone's criteria of simple structure for factor analysis models [5], and the rotation methods are constructed by maximizing variation functions of the squared factor loadings in order to obtain simple structures. Let $X_i$ be manifest variables, let $\xi_j$ be latent variables (common factors), let $\varepsilon_i$ be unique factors related to $X_i$, and finally, let $\lambda_{ij}$ be factor loadings, i.e., the weights of common factors $\xi_j$ in explaining $X_i$. Then, the factor analysis model is given as follows:
$X_i = \sum_{j=1}^{m} \lambda_{ij} \xi_j + \varepsilon_i, \quad i = 1, 2, \dots, p, \qquad (1)$
where the common factors $\xi_j$ and the unique factors $\varepsilon_i$ have zero means, the unique factors $\varepsilon_i$ are mutually uncorrelated with variances $\mathrm{Var}(\varepsilon_i) = \omega_i^2$, and the $\xi_j$ and $\varepsilon_i$ are uncorrelated.
To derive simple structures of factor analysis models, for example, in the varimax method, the following variation function of the squared factor loadings is maximized over orthogonal rotations:
$V = \sum_{i=1}^{p} \sum_{j=1}^{m} \left( \lambda_{ij}^2 - \overline{\lambda^2} \right)^2, \qquad (2)$
where $\overline{\lambda^2} = \frac{1}{pm} \sum_{i=1}^{p} \sum_{j=1}^{m} \lambda_{ij}^2$. In this sense, the basic factor rotation methods can be viewed as tools for exploratively analyzing multidimensional common-factor spaces. The interpretation of factors is made according to the manifest variables with large weights on the common factors. As far as we know, no novel methods for factor rotation have been investigated beyond rotation methods similar to the basic ones above. In real data analyses, manifest variables are usually classified in advance into groups of variables that may have common factors and concepts of their own. For example, suppose we have a test battery including the following five subjects: Japanese, English, Social Science, Mathematics, and Natural Science. It is then reasonable to classify the five subjects into two groups, {Japanese, English, Social Science} and {Mathematics, Natural Science}. In such cases, it is meaningful to determine common factors related to the two manifest variable groups. For this objective, it is useful to develop a novel method to derive the common factors based on a factor contribution measure. The conventional methods of factor rotation are not related to factor contribution; for example, as mentioned above, variation function (2) of the varimax method is not.
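As a concrete illustration, the varimax criterion (2) can be computed directly from a loading matrix. The following sketch is ours, not part of the original analysis; the loading values are those reported in Table 1 below:

```python
import numpy as np

def varimax_criterion(loadings):
    """Variation function V of the squared loadings, Equation (2)."""
    sq = loadings ** 2                       # squared factor loadings
    return float(np.sum((sq - sq.mean()) ** 2))

# Loadings of the 5 x 2 varimax solution reported in Table 1 below.
Lambda = np.array([[0.60, 0.39],
                   [0.75, 0.24],
                   [0.65, 0.00],
                   [0.32, 0.59],
                   [0.00, 0.92]])
print(varimax_criterion(Lambda))
```

A rotation that pushes squared loadings toward 0 or 1 increases $V$; a matrix with uniform squared loadings gives $V = 0$.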
An entropy-based method for measuring factor contribution was proposed in [6]; the method measures factor contributions to the manifest variable vector and decomposes them into those of manifest subvectors and of individual manifest variables. By using the method, we can derive important common factors related to the manifest subvectors and the manifest variables. The aim of the present paper is to propose a new method for deriving simple structures based on entropy, that is, for extracting common factors that are easy to interpret. In Section 2, the entropy-based method for measuring factor contribution [6] is reviewed in order to apply its properties to deriving simple structures in factor analysis models. Section 3 discusses canonical correlation analysis between common factors and manifest variables, and the contributions of common factors to the manifest variables are decomposed into components related to the extracted pairs of canonical variables; a numerical example demonstrates the approach. In Section 4, canonical correlation analysis is applied to obtain common factors that are easy to interpret, and the contributions of the extracted factors are measured; numerical examples illustrate the present approach. Finally, Section 5 provides a discussion and conclusions summarizing the present approach.

## 2. Entropy-Based Method for Measuring Factor Contributions

First, in order to derive factor contributions, factor analysis model (1) with normally distributed error terms $\varepsilon_i$ can be discussed in the framework of generalized linear models (GLMs) [7]. A general path diagram of the manifest variables and common factors in the factor analysis model is illustrated in Figure 1. The conditional density functions of manifest variables $X_i$, given the factors $\xi = (\xi_1, \xi_2, \dots, \xi_m)^T$, are expressed as follows:
$f_i(x_i \mid \xi) = \frac{1}{\sqrt{2\pi\omega_i^2}} \exp\left( -\frac{\left( x_i - \sum_{j=1}^{m} \lambda_{ij} \xi_j \right)^2}{2\omega_i^2} \right). \qquad (3)$
Let $\theta_i = \sum_{j=1}^{m} \lambda_{ij} \xi_j$ and $d(x_i, \omega_i^2) = -\frac{x_i^2}{2\omega_i^2} - \log\sqrt{2\pi\omega_i^2}$. Then, the above density function is described in a GLM framework as
$f_i(x_i \mid \xi) = \exp\left( \frac{x_i \theta_i - \theta_i^2/2}{\omega_i^2} + d(x_i, \omega_i^2) \right). \qquad (4)$
According to the local independence of the manifest variables in factor analysis model (1), the conditional density function of $X = (X_1, X_2, \dots, X_p)^T$ given $\xi$ is expressed as
$f(x \mid \xi) = \prod_{i=1}^{p} \exp\left( \frac{x_i \theta_i - \theta_i^2/2}{\omega_i^2} + d(x_i, \omega_i^2) \right) = \exp\left( \sum_{i=1}^{p} \frac{x_i \theta_i - \theta_i^2/2}{\omega_i^2} + \sum_{i=1}^{p} d(x_i, \omega_i^2) \right).$
Let $g(\xi)$ be the joint density function of common-factor vector $\xi$; let $f_i(x_i)$ be the marginal density functions of $X_i$; and let us set
$KL(X_i, \xi) = \iint \left\{ f_i(x_i \mid \xi) - f_i(x_i) \right\} \log \frac{f_i(x_i \mid \xi)}{f_i(x_i)}\, g(\xi)\, dx_i\, d\xi, \qquad (5)$
$KL(X, \xi) = \iint \left\{ f(x \mid \xi) - \prod_{i=1}^{p} f_i(x_i) \right\} \log \frac{f(x \mid \xi)}{\prod_{i=1}^{p} f_i(x_i)}\, g(\xi)\, dx\, d\xi, \qquad (6)$
where “KL” stands for “Kullback–Leibler information” [8]. From (3) and (4), we have
$KL(X_i, \xi) = \frac{\operatorname{Cov}(X_i, \theta_i)}{\omega_i^2} = \sum_{j=1}^{m} \frac{\lambda_{ij} \operatorname{Cov}(X_i, \xi_j)}{\omega_i^2}, \quad i = 1, 2, \dots, p; \qquad (7)$
$KL(X, \xi) = \sum_{i=1}^{p} \frac{\operatorname{Cov}(X_i, \theta_i)}{\omega_i^2}. \qquad (8)$
The above quantities (7) and (8) are interpreted as the signal-to-noise ratios for the dependent variables $X_i$ and predictors $\theta_i$, and as the signal-to-noise ratio for the dependent-variable vector $X$ and common-factor vector $\xi$, respectively.
From (7) and (8), the following theorem can be derived [6]:
Theorem 1.
In factor analysis model (1), let $X = (X_1, X_2, \dots, X_p)^T$ and $\xi = (\xi_1, \xi_2, \dots, \xi_m)^T$. Then,
$KL(X, \xi) = \sum_{i=1}^{p} KL(X_i, \xi).$
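Formula (7) and the additivity in Theorem 1 are easy to check numerically. The sketch below is our illustration, using the Table 1 loadings and uniquenesses as a hypothetical orthogonal model ($\Phi = I$), so that $\operatorname{Cov}(X_i, \theta_i)$ is the $i$-th diagonal entry of $\Lambda\Lambda^T$:

```python
import numpy as np

# Hypothetical orthogonal model (Phi = I): theta = Lambda @ xi.
Lambda = np.array([[0.60, 0.39],
                   [0.75, 0.24],
                   [0.65, 0.00],
                   [0.32, 0.59],
                   [0.00, 0.92]])
omega2 = np.array([0.50, 0.38, 0.58, 0.55, 0.16])   # uniquenesses omega_i^2

# Cov(X_i, theta_i) = (Lambda Phi Lambda^T)_{ii}; here Phi = I.
cov_x_theta = np.diag(Lambda @ Lambda.T)
kl_i = cov_x_theta / omega2            # KL(X_i, xi), Equation (7)
kl_total = kl_i.sum()                  # KL(X, xi), Theorem 1
print(kl_i, kl_total)
```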
Similarly, the following theorem, which is an extended version of Corollary 1 in [6], can also be obtained:
Theorem 2.
Let manifest variable subvectors $X^{(1)}, X^{(2)}, \dots, X^{(A)}$ be any decomposition of manifest variable vector $X$. Then,
$KL(X, \xi) = \sum_{a=1}^{A} KL(X^{(a)}, \xi). \qquad (9)$
Following Eshima et al. [6], the contribution of factor vector $\xi$ to manifest variable vector $X$ is thus defined as
$C(\xi \to X) = KL(X, \xi),$
so that, in Theorem 2, the contributions of factor vector $\xi$ to the manifest variable subvectors are defined by
$C(\xi \to X^{(a)}) = KL(X^{(a)}, \xi), \quad a = 1, 2, \dots, A.$
Let $\xi_{\setminus j}$ be the subvector of $\xi$ consisting of all common factors except $\xi_j$, i.e.,
$\xi_{\setminus j} = (\xi_1, \dots, \xi_{j-1}, \xi_{j+1}, \dots, \xi_m)^T,$
and let $KL(X, \xi_{\setminus j} \mid \xi_j)$ and $KL(X^{(a)}, \xi_{\setminus j} \mid \xi_j)$ be the conditional Kullback–Leibler information defined as in (5) and (6). The contributions of common factor $\xi_j$ are defined by
$C(\xi_j \to X) = KL(X, \xi) - KL(X, \xi_{\setminus j} \mid \xi_j), \quad C(\xi_j \to X^{(a)}) = KL(X^{(a)}, \xi) - KL(X^{(a)}, \xi_{\setminus j} \mid \xi_j).$
Remark 1.
Information $KL(X, \xi_{\setminus j} \mid \xi_j)$ and $KL(X^{(a)}, \xi_{\setminus j} \mid \xi_j)$ can be expressed by using the conditional covariances $\operatorname{Cov}(X_i, \theta_i \mid \xi_j)$. For example,
$KL(X, \xi_{\setminus j} \mid \xi_j) = \sum_{i=1}^{p} \frac{\operatorname{Cov}(X_i, \theta_i \mid \xi_j)}{\omega_i^2}.$
Finally, the following decomposition of $KL ( X , ξ )$ holds for orthogonal factors ([6], Theorem 3):
Theorem 3.
If the common factors are mutually independent, it follows that
$C(\xi \to X) = \sum_{j=1}^{m} \sum_{a=1}^{A} C(\xi_j \to X^{(a)}) = \sum_{j=1}^{m} \sum_{i=1}^{p} C(\xi_j \to X_i).$
The entropy coefficient of determination (ECD) [9] between $\xi$ and $X$ is defined by
$ECD(\xi, X) = \frac{KL(\xi, X)}{KL(\xi, X) + 1},$
so that the total relative contribution of factor vector $\xi$ to manifest variable vector $X$ in entropy can be defined as
$\widetilde{RC}(\xi \to X) = ECD(\xi, X) = \frac{C(\xi \to X)}{C(\xi \to X) + 1},$
while, for a single factor $\xi_j$, two relative contribution ratios can be defined:
$CR(\xi_j \to X) = \frac{C(\xi_j \to X)}{C(\xi \to X)}, \qquad \widetilde{CR}(\xi_j \to X) = \frac{C(\xi_j \to X)}{C(\xi \to X) + 1}$
(see [6] for details).
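The ECD and the relative contribution ratios are simple transforms of the contribution measures. The small helpers below are ours (hypothetical names, not from [6] or [9]) and make the relations explicit:

```python
def ecd(kl):
    """Entropy coefficient of determination: KL/(KL + 1), in [0, 1)."""
    return kl / (kl + 1.0)

def contribution_ratios(c_j, c_total):
    """CR and CR~ of a single factor with contribution c_j,
    given the total contribution c_total of the factor vector."""
    return c_j / c_total, c_j / (c_total + 1.0)
```

For instance, `ecd(3.0)` returns `0.75`: a signal-to-noise ratio of 3 means three quarters of the variation in entropy is explained.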
Second, factor analysis model (1) in a general case is discussed. Let $Σ$ be the variance–covariance matrix of manifest variable vector $X = ( X 1 , X 2 , … , X p ) T$; let $Ω$ be the $p × p$ variance–covariance matrix of unique factor vector $ε = ( ε 1 , ε 2 , … , ε p ) T$; let $Λ$ be the $p × m$ factor loading matrix of $λ i j$; and let $Φ$ be the correlation matrix of common-factor vector $ξ = ( ξ 1 , ξ 2 , … , ξ m ) T$. Then, model (1) can be expressed as
$X = Λ ξ + ε$
and we have
$\Sigma = \Lambda \Phi \Lambda^T + \Omega.$
Now, the above discussion is extended to the general factor analysis model (1) with the following variance–covariance matrix of $X$ and $\xi$:
$\begin{pmatrix} \Lambda \Phi \Lambda^T + \Omega & \Lambda \Phi \\ \Phi \Lambda^T & \Phi \end{pmatrix}. \qquad (10)$
Let $θ = Λ ξ$ be the predictor vector of manifest variable vector $X T = ( X 1 , X 2 , … , X p )$. Then, the contribution of common-factor vector $ξ$ to manifest variable vector $X$ is defined by the following generalized signal-to-noise ratio:
$E(X^T \Omega^{-1} \theta) = \frac{E(X^T \widetilde{\Omega} \Lambda \xi)}{|\Omega|} = \frac{\operatorname{tr} \widetilde{\Omega} \Lambda \Phi \Lambda^T}{|\Omega|}, \qquad (11)$
where $\widetilde{\Omega}$ is the cofactor matrix of $\Omega$. The signal is $\operatorname{tr} \widetilde{\Omega} \Lambda \Phi \Lambda^T$ and the noise is $|\Omega|$, and both are positive. Hence, the above quantity is defined as the entropy explained by the factor analysis model; the same notation as above is used because it coincides with the Kullback–Leibler information for the factor analysis model with normally distributed errors (4). Similarly, in the general model, as in (9), signal-to-noise ratio (11) is decomposed into
$\frac{\operatorname{tr} \widetilde{\Omega} \Lambda \Phi \Lambda^T}{|\Omega|} = \sum_{i=1}^{p} \frac{\operatorname{Cov}(X_i, \theta_i)}{\omega_i^2} = \sum_{i=1}^{p} \sum_{j=1}^{m} \frac{\lambda_{ij} \operatorname{Cov}(X_i, \xi_j)}{\omega_i^2},$
so the above theorems hold true as well. Thus, the results mentioned above are applicable to factor analysis models whose error terms have non-normal distributions.
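Since the cofactor matrix of a nonsingular $\Omega$ satisfies $\widetilde{\Omega} = |\Omega|\,\Omega^{-1}$, the generalized signal-to-noise ratio (11) reduces, for diagonal $\Omega$, to the per-variable sum above. A quick numerical check (our sketch, on a randomly generated hypothetical model):

```python
import numpy as np

rng = np.random.default_rng(0)
p, m = 5, 2
Lambda = rng.normal(scale=0.5, size=(p, m))      # hypothetical loadings
Phi = np.eye(m)                                  # orthogonal factors
omega2 = rng.uniform(0.2, 1.0, size=p)           # unique-factor variances
Omega = np.diag(omega2)

adj = np.linalg.det(Omega) * np.linalg.inv(Omega)   # cofactor (adjugate) matrix
snr_general = np.trace(adj @ Lambda @ Phi @ Lambda.T) / np.linalg.det(Omega)
snr_sum = np.sum(np.diag(Lambda @ Phi @ Lambda.T) / omega2)
print(snr_general, snr_sum)   # the two expressions agree
```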

## 3. Canonical Factor Analysis

In order to derive interpretable factors from the common-factor space, we propose taking advantage of the results of canonical correlation analysis applied to the manifest variables and the common factors. This approach can be referred to as “canonical factor analysis” [10]. In factor analysis model (1), the variance–covariance matrix of $X$ and $\xi$ is given by (10). Then, we have the following theorem:
Theorem 4.
For the canonical correlation coefficients $\rho_j$ between $X$ and $\xi$ in factor analysis model (1) with (10), it follows that
$KL(X, \xi) = \sum_{j=1}^{m} \frac{\rho_j^2}{1 - \rho_j^2}.$
Proof.
Let $B^{(1)}$, $B^{(2)}$, and $F$ be $m \times p$, $(p - m) \times p$, and $m \times m$ matrices, respectively; let $V^{(1)} = (V_1, V_2, \dots, V_m)^T = B^{(1)} X$, $V^{(2)} = B^{(2)} X$, and $\eta = (\eta_1, \eta_2, \dots, \eta_m)^T = F \xi$. It is assumed that $(V_j, \eta_j)$ are the pairs of canonical variables with canonical correlation coefficients $\rho_j$; that matrices $\begin{pmatrix} B^{(1)} \\ B^{(2)} \end{pmatrix}$ and $F$ are nonsingular; and that $V^{(1)}$ and $V^{(2)}$ are statistically independent. Since all pairs of canonical variables $(V_j, \eta_j)$ and $V^{(2)}$ are mutually independent, we have
$KL(V^{(2)}, \eta) = 0.$
From Theorem 2, it follows that
$KL(X, \xi) = KL(V, F\xi) = KL\left( \begin{pmatrix} V^{(1)} \\ V^{(2)} \end{pmatrix}, \eta \right) = KL(V^{(1)}, \eta) + KL(V^{(2)}, \eta) = KL(V^{(1)}, \eta) = \sum_{j=1}^{m} KL(V_j, \eta_j) = \sum_{j=1}^{m} \frac{\rho_j^2}{1 - \rho_j^2}.$
This completes the theorem. □
In the proof of the above theorem, we have
$KL(X, \eta_j) = KL(V_j, \eta_j) = \frac{\rho_j^2}{1 - \rho_j^2}, \quad j = 1, 2, \dots, m.$
This implies that
$C(\eta_j \to X) = C(\eta_j \to V_j) = \frac{\rho_j^2}{1 - \rho_j^2};$
$\widetilde{RC}(\eta_j \to X) = \frac{KL(X, \eta_j)}{KL(X, \xi) + 1} = \frac{\rho_j^2 / (1 - \rho_j^2)}{\sum_{a=1}^{m} \rho_a^2 / (1 - \rho_a^2) + 1};$
Theorem 4 shows that the contribution of common-factor vector $\xi$ to manifest variable vector $X$ is decomposed into those of the canonical common factors $\eta_j$, i.e.,
$C(\xi \to X) = \sum_{j=1}^{m} C(\eta_j \to X).$
Let us assume
$1 > \rho_1^2 \ge \rho_2^2 \ge \dots \ge \rho_m^2 \ge 0.$
According to the entropy-based criterion in Theorem 4, the order of importance of the canonical common factors is that of their canonical correlation coefficients. The interpretation of factors $\eta_j$ can be made with the corresponding manifest canonical variables $V_j$ and the factor loading matrix of the canonical common factors $\eta = F\xi$. For the canonical common factors, the factor loading matrix can be obtained as $\Lambda^* = \Lambda F^{-1}$. We refer to the canonical correlation analysis in Theorem 4 as canonical factor analysis [10].
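Theorem 4 can be verified numerically. With $\Phi = I$, the squared canonical correlations between $X$ and $\xi$ are the eigenvalues of $\Lambda^T \Sigma^{-1} \Lambda$, and the sum of $\rho_j^2/(1-\rho_j^2)$ must equal the sum of the per-variable contributions $KL(X_i, \xi)$. The following sketch (ours, on a random hypothetical orthogonal model) checks the identity:

```python
import numpy as np

rng = np.random.default_rng(1)
p, m = 6, 3
Lambda = rng.normal(scale=0.5, size=(p, m))      # hypothetical loadings
omega2 = rng.uniform(0.3, 1.0, size=p)           # unique-factor variances
Sigma = Lambda @ Lambda.T + np.diag(omega2)      # Sigma = Lambda Lambda^T + Omega

# Squared canonical correlations between X and xi (Phi = I).
rho2 = np.linalg.eigvalsh(Lambda.T @ np.linalg.inv(Sigma) @ Lambda)
lhs = np.sum(rho2 / (1.0 - rho2))                      # Theorem 4
rhs = np.sum(np.diag(Lambda @ Lambda.T) / omega2)      # sum of KL(X_i, xi)
print(lhs, rhs)   # the two sides agree
```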
Theorem 5.
In factor analysis model (1), for any $p \times p$ and $m \times m$ nonsingular matrices $P$ and $Q$, the canonical factor analysis between manifest variable vector $PX$ and common-factor vector $Q\xi$ is invariant.
Proof.
Since the variance–covariance matrix of $PX$ and $Q\xi$ is given by
$\begin{pmatrix} P & 0 \\ 0 & Q \end{pmatrix} \begin{pmatrix} \Sigma & \Lambda \\ \Lambda^T & I_m \end{pmatrix} \begin{pmatrix} P & 0 \\ 0 & Q \end{pmatrix}^T,$
the theorem follows. □
Notice that we also have
$KL ( P X , Q ξ ) = KL ( X , ξ ) .$
From the above theorem, the results of the canonical factor analysis do not depend on the initial common factors $ξ j$ in factor analysis model (1). For factor analysis model (1), it follows that
$KL(X, \xi) = \sum_{j=1}^{m} KL(V_j, \eta_j) = \sum_{i=1}^{p} KL(X_i, \xi),$
implying that
$\sum_{j=1}^{m} \frac{\rho_j^2}{1 - \rho_j^2} = \sum_{i=1}^{p} \frac{R_i^2}{1 - R_i^2},$
where $R_i$ are the multiple correlation coefficients between manifest variables $X_i$ and factor vector $\xi$.
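Under the orthogonal model with standardized variables, $R_i^2 = h_i^2/(h_i^2 + \omega_i^2)$, where $h_i^2$ is the communality of $X_i$, so $R_i^2/(1 - R_i^2) = h_i^2/\omega_i^2 = KL(X_i, \xi)$. A one-line check (our sketch, hypothetical model):

```python
import numpy as np

rng = np.random.default_rng(3)
Lambda = rng.normal(scale=0.5, size=(4, 2))   # hypothetical loadings
omega2 = rng.uniform(0.3, 1.0, size=4)        # unique-factor variances

h2 = np.diag(Lambda @ Lambda.T)        # communalities h_i^2 (Phi = I)
R2 = h2 / (h2 + omega2)                # squared multiple correlations R_i^2
print(R2 / (1.0 - R2) - h2 / omega2)   # elementwise zero, up to rounding
```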

#### Numerical Example 1

Table 1 shows the results of an orthogonal factor analysis (varimax method, S-PLUS ver. 8.2) as reported in [6]; the same example is used here to demonstrate the canonical factor analysis described above. In Table 1, manifest variables $X_1$, $X_2$, and $X_3$ are scores in subjects in the liberal arts, while $X_4$ and $X_5$ are scores in the sciences. We refer to the factors $\xi_1$ and $\xi_2$ as the initial common factors. In this example, from Table 1, the variance–covariance matrices in (10) are given as follows:
$\Sigma = \begin{pmatrix} 1 & 0.54 & 0.39 & 0.42 & 0.36 \\ 0.54 & 1 & 0.49 & 0.38 & 0.22 \\ 0.39 & 0.49 & 1 & 0.21 & 0 \\ 0.42 & 0.38 & 0.21 & 1 & 0.54 \\ 0.36 & 0.22 & 0 & 0.54 & 1 \end{pmatrix},$
$\Phi = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},$
and factor loading matrix $\Lambda$ is given in Table 1.
From the above matrices, to obtain the pairs of canonical variables, linear transformation matrices $B^{(1)}$ and $F$ in Theorem 4 are as follows:
$B^{(1)} = \begin{pmatrix} 0.19 & 0.20 & 0.06 & 0.20 & 0.94 \\ 0.32 & 0.58 & 0.37 & 0.07 & -0.65 \end{pmatrix},$
and
$F = \begin{pmatrix} 0.32 & 0.95 \\ 0.95 & -0.32 \end{pmatrix}.$
By the above matrices, we have the following pairs of canonical variables $( V i , η i )$ and their squared canonical correlation coefficients $ρ i 2$:
$V_1 = 0.19 X_1 + 0.20 X_2 + 0.06 X_3 + 0.20 X_4 + 0.94 X_5, \quad \eta_1 = 0.32 \xi_1 + 0.95 \xi_2, \quad \rho_1^2 = 0.88;$
$V_2 = 0.32 X_1 + 0.58 X_2 + 0.37 X_3 + 0.07 X_4 - 0.65 X_5, \quad \eta_2 = 0.95 \xi_1 - 0.32 \xi_2, \quad \rho_2^2 = 0.73.$
According to the above canonical variables, the factor loadings for the canonical factors are calculated from the initial loading matrix $\Lambda$ and the rotation matrix $F$, and we have
$\Lambda^{*T} = (\Lambda F^{-1})^T = \begin{pmatrix} 0.32 & 0.95 \\ 0.95 & -0.32 \end{pmatrix}^{-1} \begin{pmatrix} 0.60 & 0.75 & 0.65 & 0.32 & 0.00 \\ 0.39 & 0.24 & 0.00 & 0.59 & 0.92 \end{pmatrix} = \begin{pmatrix} 0.56 & 0.47 & 0.21 & 0.66 & 0.87 \\ 0.45 & 0.64 & 0.62 & 0.12 & -0.29 \end{pmatrix}.$
From the above results, the first canonical factor $\eta_1$ can be viewed as a general ability (factor) common to all five subjects. The second factor $\eta_2$ can be regarded as a factor related to the subjects in the liberal arts, and it is independent of the first canonical factor. In the canonical correlation analysis, the contributions of the canonical factors are calculated as follows. Since the squared canonical correlation between $\eta_1$ and $X$ is $\rho_1^2 = 0.88$ and that between $\eta_2$ and $X$ is $\rho_2^2 = 0.73$, we have
$C(\eta_1 \to X) = \frac{0.88}{1 - 0.88} = 7.33, \quad C(\eta_2 \to X) = \frac{0.73}{1 - 0.73} = 2.70.$
Let $\xi = (\xi_1, \xi_2)^T$. From the above results, we have
$C(\xi \to X) = 7.33 + 2.70 = 10.03, \quad \widetilde{RC}(\xi \to X) = \frac{10.03}{10.03 + 1} = 0.91.$
From this, $91\%$ of the variation of manifest random vector $X$ in entropy is explained by the common latent factors $\xi$. The contribution ratios of the canonical common factors are calculated as follows:
$CR(\eta_1 \to X) = \frac{7.33}{10.03} = 0.73, \quad CR(\eta_2 \to X) = \frac{2.70}{10.03} = 0.27.$
The contribution of the first canonical factor is about 2.6 times greater than that of the second one.
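The quantities of this example can be reproduced, at least approximately, from the matrices above; since the published entries of $\Sigma$ and $\Lambda$ are rounded to two decimals, the recomputed values match the reported ones only up to rounding. The following sketch is our reconstruction:

```python
import numpy as np

# Published matrices of Example 1; Phi = I, so Cov(X, xi) = Lambda.
Sigma = np.array([[1.00, 0.54, 0.39, 0.42, 0.36],
                  [0.54, 1.00, 0.49, 0.38, 0.22],
                  [0.39, 0.49, 1.00, 0.21, 0.00],
                  [0.42, 0.38, 0.21, 1.00, 0.54],
                  [0.36, 0.22, 0.00, 0.54, 1.00]])
Lambda = np.array([[0.60, 0.39],
                   [0.75, 0.24],
                   [0.65, 0.00],
                   [0.32, 0.59],
                   [0.00, 0.92]])

# Squared canonical correlations between X and xi, in decreasing order.
rho2 = np.sort(np.linalg.eigvalsh(Lambda.T @ np.linalg.inv(Sigma) @ Lambda))[::-1]
C = rho2 / (1.0 - rho2)       # contributions of the canonical factors
print(rho2)                    # roughly the reported 0.88 and 0.73
print(C / C.sum())             # contribution ratios CR
```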

## 4. Deriving Important Common Factors Based on Decomposition of Manifest Variables into Subsets

From (9) in Theorem 2, $KL(X, \xi)$ is decomposed into the information of the manifest variable subvectors $X^{(a)}$, $a = 1, 2, \dots, A$. Thus, we have the following theorem:
Theorem 6.
Let manifest variable vector $X$ be decomposed into subvectors $X^{(1)}, X^{(2)}, \dots, X^{(A)}$. Let $\rho_{(a)j}$, $j = 1, 2, \dots, m_{(a)}$, be the canonical correlation coefficients between manifest variable subvector $X^{(a)}$ and common-factor vector $\xi$, $a = 1, 2, \dots, A$, in factor analysis model (1), where $m_{(a)}$ is the number of pairs of canonical variables for $X^{(a)}$. Then, $KL(X, \xi)$ is decomposed into canonical components as follows:
$KL(X, \xi) = \sum_{a=1}^{A} \sum_{j=1}^{m_{(a)}} \frac{\rho_{(a)j}^2}{1 - \rho_{(a)j}^2}.$
Proof.
For manifest variable subvector $X^{(a)}$ and common-factor vector $\xi$, applying canonical correlation analysis, we obtain $m_{(a)}$ pairs of canonical variables $(V_j^{(a)}, \eta_j^{(a)})$ with squared canonical correlation coefficients $\rho_{(a)j}^2$. Then, applying Theorem 4 to $KL(X^{(a)}, \xi)$, it follows that
$KL(X^{(a)}, \xi) = \sum_{j=1}^{m_{(a)}} \frac{\rho_{(a)j}^2}{1 - \rho_{(a)j}^2}.$
From Theorem 2, the theorem follows. □
Remark 2.
As shown in the above theorem, the following relations hold:
In this sense,
To derive important common factors, the above theorem can be used. In many datasets used in factor analysis, the manifest variables can be classified into subsets that share common concepts (factors) to be measured. For example, in the data used for Table 1, it is meaningful to classify the five variables into two subsets $X^{(1)} = (X_1, X_2, X_3)$ and $X^{(2)} = (X_4, X_5)$, where the first subset is related to the liberal arts and the second to the sciences. From $(X^{(1)}, \xi)$ and $(X^{(2)}, \xi)$, it is possible to derive the latent ability for the liberal arts and that for the sciences, respectively.
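Theorem 6 can likewise be checked numerically: applying canonical correlation analysis subset by subset and summing $\rho^2/(1-\rho^2)$ recovers the total $KL(X, \xi)$. The sketch below is ours, on a random hypothetical orthogonal model split into two groups as in the test-battery example:

```python
import numpy as np

def subset_kl(Lambda_a, omega2_a):
    """KL(X^(a), xi) via the squared canonical correlations of subset a."""
    Sigma_a = Lambda_a @ Lambda_a.T + np.diag(omega2_a)
    rho2 = np.linalg.eigvalsh(Lambda_a.T @ np.linalg.inv(Sigma_a) @ Lambda_a)
    return float(np.sum(rho2 / (1.0 - rho2)))

rng = np.random.default_rng(2)
p, m = 5, 2
Lambda = rng.normal(scale=0.5, size=(p, m))      # hypothetical loadings
omega2 = rng.uniform(0.3, 1.0, size=p)           # unique-factor variances

groups = [[0, 1, 2], [3, 4]]           # e.g. liberal arts vs. sciences
parts = [subset_kl(Lambda[g], omega2[g]) for g in groups]
total = float(np.sum(np.diag(Lambda @ Lambda.T) / omega2))   # KL(X, xi)
print(parts, total)   # the parts sum to the total (Theorem 6)
```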

#### 4.1. Numerical Example 1 (Continued)

For $( X ( 1 ) , ξ )$ and $( X ( 2 ) , ξ )$, two sets of canonical variables are obtained, respectively, as follows:
According to the above canonical variables, we have the following factor contributions:
From the above results, canonical factors $η 1 ( 1 )$ and $η 1 ( 2 )$ can be interpreted as general common factors for the liberal arts and for the sciences, respectively. By using the factors, the factor loadings are given in Table 2. In this case, Table 2 is similar to Table 1; however, the factor analysis model is oblique and the correlation coefficient between $η 1 ( 1 )$ and $η 1 ( 2 )$ is 0.374. The contributions of the factors to manifest variable vector $X = ( X 1 , X 2 , X 3 , X 4 , X 5 ) = ( X ( 1 ) , X ( 2 ) )$ are calculated as follows:
In this case, factors $η 1 ( 1 )$ and $η 1 ( 2 )$ are correlated, so it follows that
$CR(\eta_1^{(1)} \to X) + CR(\eta_1^{(2)} \to X) = 1.129 > 1.$

#### 4.2. Numerical Example 2

Table 3 shows the results of a maximum likelihood factor analysis (orthogonal) of six scores ([11], pp. 61–65); these results are treated as the initial estimates in the present analysis. In this example, the variables are classified into the following three groups: variable $X_1$ is related to Spearman's $g$ factor; variables $X_2$, $X_3$, and $X_4$ account for problem-solving ability; and variables $X_5$ and $X_6$ are associated with verbal ability [11]. However, it is difficult to explain the three factors by using Table 3 alone. In this example, the present approach is employed to derive the three factors. From (10) and Table 3, the correlation matrix of the manifest variables is given as follows:
$\hat{\Sigma} = \begin{pmatrix} 1 & 0.417 & 0.576 & 0.312 & 0.576 & 0.514 \\ 0.417 & 1 & 0.567 & 0.306 & 0.265 & 0.263 \\ 0.576 & 0.567 & 1 & 0.427 & 0.355 & 0.354 \\ 0.312 & 0.306 & 0.427 & 1 & 0.193 & 0.193 \\ 0.576 & 0.265 & 0.355 & 0.193 & 1 & 0.799 \\ 0.514 & 0.263 & 0.354 & 0.193 & 0.799 & 1 \end{pmatrix}.$
Let $X ( 2 ) = ( X 2 , X 3 , X 4 )$, let $X ( 3 ) = ( X 5 , X 6 )$, and let $ξ = ( ξ 1 , ξ 2 )$. Canonical correlation analysis is carried out for $( X 1 , ξ )$, $( X ( 2 ) , ξ )$, and $( X ( 3 ) , ξ )$, and we have the following canonical variables, respectively:
The contributions of canonical factors are calculated as follows:
The common factor $η 1 ( 1 ) ( = g )$ can be interpreted as the Spearman’s $g$ factor (general intelligence) and canonical common factors $η 1 ( 2 )$ and $η 1 ( 3 )$ can be interpreted as problem-solving ability and verbal ability, respectively. The correlation coefficients between the three factors are given by
The contributions of the above three factors to manifest variable vector $X = ( X 1 , X 2 , X 3 , X 4 , X 5 , X 6 )$ are computed as follows:
$C(g \to X) = 19.93, \quad CR(g \to X) = 0.68, \quad \widetilde{CR}(g \to X) = 0.66;$
$C(\eta_1^{(2)} \to X) = 9.77, \quad CR(\eta_1^{(2)} \to X) = 0.33, \quad \widetilde{CR}(\eta_1^{(2)} \to X) = 0.32;$
$C(\eta_1^{(3)} \to X) = 25.02, \quad CR(\eta_1^{(3)} \to X) = 0.85, \quad \widetilde{CR}(\eta_1^{(3)} \to X) = 0.82.$
The common-factor space is two-dimensional, and the factor loadings on common factors $\eta_1^{(2)}$ and $\eta_1^{(3)}$ are calculated as in Table 4. The table admits a clear interpretation of the common factors. Thus, the present method is effective for deriving interpretable factors in situations such as that of this example. Expressions of the factor analysis model can also be given by factor vectors $(g, \eta_1^{(2)})$ and $(g, \eta_1^{(3)})$, respectively. The present method is applicable to any subsets of manifest variables.

## 5. Discussion

In order to find interpretable common factors in factor analysis models, methods of factor rotation are often used. These methods are based on maximizing variation functions of the squared factor loadings, with orthogonal or oblique factors. The factors derived by the conventional methods may be interpretable; however, it may be more useful to have a method for detecting interpretable common factors based on factor contribution measurement, i.e., on the importance of common factors. The entropy-based method for measuring factor contribution [6] can measure the contribution of the common-factor vector to the manifest variable vector, and this contribution can be decomposed into those of single manifest variables (Theorem 1) and of manifest variable subvectors as well (Theorem 2). A characterization in the case of orthogonal factors can also be given (Theorem 3). The paper shows that the most important common factor with respect to entropy can be identified by using canonical correlation analysis between the factor vector and the manifest variable vector (Theorem 4). Theorem 4 shows that the contribution of the common-factor vector to the manifest variable vector can be decomposed into those of the canonical factors, and that the order of the canonical correlation coefficients is that of the factor contributions. In most multivariate data, manifest variables can be naturally classified into subsets according to common concepts, as in Examples 1 and 2. By using Theorems 2 and 5, canonical correlation analysis can also be applied to derive canonical common factors from subsets of manifest variables and the initial common-factor vector (Theorem 6). According to the analysis, interpretable common factors can be obtained easily, as demonstrated in Examples 1 and 2. In Example 1, Table 1 and Table 2 have similar factor patterns; however, the derived factors in Table 1 are orthogonal and those in Table 2 are oblique.
In Example 2, it may be difficult to interpret the factors in Table 3 produced by the varimax method. On the other hand, Table 4, obtained by using the present method, can be interpreted clearly. Finally, according to Theorem 5, the present method produces results that are invariant with respect to linear transformations of common factors, so that the method is independent of the initial common factors. The present method is the first one to derive interpretable factors based on a factor contribution measure, and the interpretable factors can be obtained easily through canonical correlation analysis between manifest variable subvectors and the factor vectors.

## Author Contributions

Conceptualization, N.E.; methodology, N.E., C.G.B., M.T., T.K.; formal analysis, N.E.; writing—original draft preparation, N.E.; writing—review and editing, C.G.B.; funding acquisition, T.K. All authors have read and agreed to the published version of the manuscript.

## Funding

This research was supported by the Grant-in-aid for Scientific Research 18K11200, Ministry of Education, Culture, Sports, Science, and Technology of Japan.


## Acknowledgments

The authors would like to thank the three referees for their useful comments and suggestions for improving the first version of this paper.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Kaiser, H.F. The varimax criterion for analytic rotation in factor analysis. Psychometrika 1958, 23, 187–200. [Google Scholar] [CrossRef]
2. Ten Berge, J.M.F. A joint treatment of VARIMAX rotation and the problem of diagonalizing symmetric matrices simultaneously in the least-squares sense. Psychometrika 1984, 49, 347–358. [Google Scholar] [CrossRef]
3. Jennrich, R.I.; Sampson, P.F. Rotation for simple loadings. Psychometrika 1966, 31, 313–323. [Google Scholar] [CrossRef] [PubMed]
4. Harris, C.W.; Kaiser, H.F. Oblique factor analytic solutions by orthogonal transformation. Psychometrika 1964, 29, 347–362. [Google Scholar] [CrossRef]
5. Thurstone, L.L. The Vectors of Mind: Multiple-Factor Analysis for the Isolation of Primary Traits; University of Chicago Press: Chicago, IL, USA, 1935. [Google Scholar]
6. Eshima, N.; Tabata, M.; Borroni, C.G. An entropy-based approach for measuring factor contributions in factor analysis models. Entropy 2018, 20, 634. [Google Scholar] [CrossRef] [PubMed]
7. Nelder, J.A.; Wedderburn, R.W.M. Generalized linear models. J. R. Stat. Soc. A 1972, 135, 370–384. [Google Scholar] [CrossRef]
8. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86. [Google Scholar] [CrossRef]
9. Eshima, N.; Tabata, M. Entropy coefficient of determination for generalized linear models. Comput. Stat. Data Anal. 2010, 54, 1381–1389. [Google Scholar] [CrossRef]
10. Rao, C.R. Estimation and tests of significance in factor analysis. Psychometrika 1955, 20, 93–111. [Google Scholar] [CrossRef]
11. Bartholomew, D.J. Latent Variable Models and Factor Analysis; Oxford University Press: New York, NY, USA, 1987. [Google Scholar]
Figure 1. Path diagram of a general factor analysis model.
Table 1. Factor loadings of the orthogonal factor analysis (varimax method).

| | $X_1$ | $X_2$ | $X_3$ | $X_4$ | $X_5$ |
|---|---|---|---|---|---|
| $\xi_1$ | 0.60 | 0.75 | 0.65 | 0.32 | 0.00 |
| $\xi_2$ | 0.39 | 0.24 | 0.00 | 0.59 | 0.92 |
| uniqueness | 0.50 | 0.38 | 0.58 | 0.55 | 0.16 |

Uniqueness is the proportion of the unique factor $\varepsilon_i$ related to manifest variable $X_i$.
Table 2. Factor loadings by using canonical common factors $\eta_1^{(1)}$ and $\eta_1^{(2)}$.

| | $X_1$ | $X_2$ | $X_3$ | $X_4$ | $X_5$ |
|---|---|---|---|---|---|
| $\eta_1^{(1)}$ | 0.62 | 0.80 | 0.70 | 0.31 | $-0.06$ |
| $\eta_1^{(2)}$ | 0.19 | $-0.02$ | $-0.22$ | 0.49 | 0.94 |
| uniqueness | 0.50 | 0.38 | 0.58 | 0.55 | 0.16 |
Table 3. Factor loadings of the maximum likelihood factor analysis.

| | $X_1$ | $X_2$ | $X_3$ | $X_4$ | $X_5$ | $X_6$ |
|---|---|---|---|---|---|---|
| $\xi_1$ | 0.64 | 0.34 | 0.46 | 0.25 | 0.97 | 0.82 |
| $\xi_2$ | 0.37 | 0.54 | 0.76 | 0.41 | $-0.12$ | $-0.03$ |
Table 4. The factor loadings with common factors $\eta_1^{(2)}$ and $\eta_1^{(3)}$.

| | $X_1$ | $X_2$ | $X_3$ | $X_4$ | $X_5$ | $X_6$ |
|---|---|---|---|---|---|---|
| $\eta_1^{(2)}$ | 0.49 | 0.63 | 0.89 | 0.48 | $-0.01$ | 0.07 |
| $\eta_1^{(3)}$ | 0.39 | 0.01 | 0.00 | 0.00 | 0.98 | 0.79 |