Article

A New Algorithm for Computing Disjoint Orthogonal Components in the Three-Way Tucker Model

by Carlos Martin-Barreiro 1,2, John A. Ramirez-Figueroa 1,2, Ana B. Nieto-Librero 1,3, Víctor Leiva 4,*, Ana Martin-Casado 1 and M. Purificación Galindo-Villardón 1,3
1 Department of Statistics, Universidad de Salamanca, 37008 Salamanca, Spain
2 Faculty of Natural Sciences and Mathematics, Universidad Politécnica ESPOL, Guayaquil 090902, Ecuador
3 Institute of Biomedical Research of Salamanca, 37008 Salamanca, Spain
4 School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso 2362807, Chile
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(3), 203; https://doi.org/10.3390/math9030203
Submission received: 12 December 2020 / Revised: 16 January 2021 / Accepted: 18 January 2021 / Published: 20 January 2021
(This article belongs to the Special Issue Statistical Simulation and Computation)

Abstract

One of the main drawbacks of the traditional methods for computing components in the three-way Tucker model is the complex structure of the final loading matrices, which prevents an easy interpretation of the obtained results. In this paper, we propose a heuristic algorithm for computing disjoint orthogonal components that facilitates the analysis of three-way data and the interpretation of results. In the computational experiments carried out, we observe that our novel algorithm ameliorates this drawback, generating final loading matrices with a simple structure that are therefore easier to interpret. Illustrations with real data are provided to show potential applications of the algorithm.

1. Introduction

Multivariate methods offer a simplified and descriptive view of a set of multidimensional data in which individuals and variables are numerous. These data are often organized in a matrix or two-way table, for whose analysis there is a wide theoretical background. In addition, in this context, three-way (or three-mode) tables, that is, third-order tensors, arise when a new way, such as time or location, is introduced into the two-way table; see more details in [1].
Tensor decomposition emerged during the 20th century [2] and, as mentioned in [3], Tucker [4] introduced it into the multivariate context in the sixties, whereas, later on, in the seventies, Harshman [5] and Carroll and Chang [6] continued its use in multivariate methods.
The analysis of three-way tables attempts to identify patterns in the spaces of individuals, variables, and times (or situations in general), searching for robust and easy-to-interpret models in order to discover how individuals and variables are related to the entities of the third mode [7].
The three-way Tucker model (Tucker3, hereafter simply Tucker) is a tensor decomposition that generalizes principal component analysis (PCA) [8,9] to three-way tables. This multivariate method represents the original data in lower-dimensional spaces, enabling pattern recognition. Furthermore, it allows the interactions among the entities of the three modes to be quantified.
Similar to what occurs with a PCA, each mode of a three-way table can be represented in a space of lower dimension than the original one [10]. These spaces are characterized by the principal axes (components), which maximize the total variance and are linear combinations of the original entities [11]. Within the space of each mode, the interpretation of these axes is done according to the values of their components (loadings). This interpretation can sometimes be difficult, leading to an inaccurate characterization of the new axes. Thus, it is desirable that each principal component or axis has few entities contributing to its variability in a relevant manner.
There are several theoretical approaches that yield tensor decompositions whose components have some loadings equal to zero, which is useful for facilitating the analysis and interpretation of three-way tables. Examples are the sparse parallelizable tensor decomposition (ParCube) [12]; the sparse hierarchical Tucker model, which focuses on factoring high-order tensors [13]; and the tensor truncated power method, which searches for a sparse decomposition by choosing variables [14]. An algorithm for the sparse Tucker decomposition that imposes an orthogonality condition on the loading matrices and sparsity conditions on the core was proposed in [15].
Unlike sparse methods, disjoint methods for two-way tables search for a decomposition whose loading matrix has, for each row (original variable), a single column (latent variable) with a non-zero entry. Furthermore, for each column of the loading matrix, there is at least one row with a non-zero entry. Hence, it is possible to obtain loading matrices with a simple structure that facilitates the interpretation. A method that allows disjoint orthogonal components to be calculated in a two-way table was presented in [16]. Recently, an algorithm based on particle swarm optimization, which performs a constrained disjoint PCA for two-way tables, was proposed in [17]. To the best of our knowledge, there are no methods for computing disjoint orthogonal components in three-way tables.
The objective of this work is to propose a heuristic algorithm that extends the existing methods for two-way tables to three-way tables. This proposal computes disjoint orthogonal components in the loading matrices of a Tucker model. We also introduce a procedure that suggests routes in which the proposed algorithm can be used. We call this new algorithm DisjointTuckerALS, because it is based on the alternating least squares (ALS) method.
The remainder of the paper is organized as follows. Section 2 defines what a disjoint orthogonal matrix is and presents the mathematical optimization problem that must be solved in order to calculate disjoint orthogonal components in the Tucker model. In Section 3, we introduce the DisjointTuckerALS algorithm. Section 4 presents the numerical applications of this work: computational experiments to evaluate the performance of our algorithm, as well as illustrations with real data showing its potential applications. Finally, in Section 5, the conclusions of this study are provided, together with some final remarks, limitations, a wide spectrum of additional applications different from those presented in the illustrations with real data, and ideas for future research.

2. The Tucker Model and the Disjoint Approach

In this section, we present the structure of three-way tables and define the Tucker model, as well as a disjoint approach for this model.

2.1. Three-Way Tables

A three-way table represents a data set with three modes (for example, individuals, variables, and situations) and is a three-dimensional array or third-order tensor. Note that such tensors have three variation modes: the $A$-mode (with $I$ individuals), the $B$-mode (with $J$ variables), and the $C$-mode (with $K$ situations).
Let $\underline{X}$ be a three-way table of order $I \times J \times K$. The generic element $x_{ijk}$ stores the measurement of individual $i \in \{1,\ldots,I\}$ on variable $j \in \{1,\ldots,J\}$ in situation $k \in \{1,\ldots,K\}$. The tensor $\underline{X}$ can be converted into a two-way table by a process of matricization. In this work, we use three types of supermatrices: the $A$-mode yields a matrix $X_A$ of order $I \times JK$, the $B$-mode yields a matrix $X_B$ of order $J \times IK$, and the $C$-mode yields a matrix $X_C$ of order $K \times IJ$. These supermatrices are defined as in [3], where $X_A$, $X_B$, and $X_C$ are known as the frontal, horizontal, and vertical slices matrices, respectively.

2.2. The Tucker Model

Tucker is a multilinear model that approximates the three-way table $\underline{X}$ by means of a dimensional reduction of its three modes. The Tucker tensor decomposition of $\underline{X} = (x_{ijk})$ is given by
$$ x_{ijk} = \hat{x}_{ijk} + e_{ijk}, \quad i = 1,\ldots,I, \; j = 1,\ldots,J, \; k = 1,\ldots,K, \quad (1) $$
where $\hat{x}_{ijk} = \sum_{p=1}^{P}\sum_{q=1}^{Q}\sum_{r=1}^{R} a_{ip}\, b_{jq}\, c_{kr}\, g_{pqr}$, with $a_{ip}$, $b_{jq}$, and $c_{kr}$ being the corresponding elements of the matrices $A = (a_{ip})$ of order $I \times P$, $B = (b_{jq})$ of order $J \times Q$, and $C = (c_{kr})$ of order $K \times R$, which are called component or loading matrices. In addition, $g_{pqr}$ defined in (1) is the $pqr$-th element of the tensor $\underline{G} = (g_{pqr})$ of order $P \times Q \times R$, which is called the core and is considered a reduced version of the tensor $\underline{X}$. The integers $P < I$, $Q < J$, and $R < K$ are the numbers of components required on each mode, respectively. Thus, for instance, the matrix $A$ contains $P$ columns that represent the new referential system of the individuals. Note that $\underline{E} = (e_{ijk})$ of order $I \times J \times K$ is a tensor of model errors.
The Tucker model can be represented by matrix equations based on all the modes [3], which are stated as
$$ X_A = A G_A (C \otimes B)^{\top} + E_A, \quad (2) $$
$$ X_B = B G_B (C \otimes A)^{\top} + E_B, \quad (3) $$
$$ X_C = C G_C (B \otimes A)^{\top} + E_C, \quad (4) $$
where $\otimes$ is the Kronecker product. Furthermore, $G_A$ of order $P \times QR$, $G_B$ of order $Q \times PR$, and $G_C$ of order $R \times PQ$, defined in (2), (3), and (4), are the frontal, horizontal, and vertical slices matrices of the core $\underline{G}$, respectively. Observe that $E_A$, $E_B$, and $E_C$ are the corresponding error matrices.
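To make the matricization and the matrix form (2) concrete, the following minimal numpy sketch (ours, for illustration only; the authors' implementation is in C#.NET and R) builds a small random Tucker-structured tensor and verifies the identity $X_A = A G_A (C \otimes B)^{\top}$; all names and sizes are illustrative.

import numpy as np

I, J, K = 4, 3, 2                        # toy mode sizes
P, Q, R = 2, 2, 2                        # components per mode
rng = np.random.default_rng(0)

# Orthonormal loading matrices and a random core, just to check identity (2).
A = np.linalg.qr(rng.normal(size=(I, P)))[0]
B = np.linalg.qr(rng.normal(size=(J, Q)))[0]
C = np.linalg.qr(rng.normal(size=(K, R)))[0]
G = rng.normal(size=(P, Q, R))

def unfold(T, mode):
    """Mode-n matricization with the column ordering of Kolda and Bader [3]."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1, order='F')

# Build X elementwise from the Tucker model (1) with zero error term...
X = np.einsum('ip,jq,kr,pqr->ijk', A, B, C, G)

# ...and verify the frontal slices form (2).
XA = unfold(X, 0)                        # I x JK frontal slices matrix
GA = unfold(G, 0)                        # P x QR frontal slices of the core
assert np.allclose(XA, A @ GA @ np.kron(C, B).T)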
An algorithm based on the ALS method and the singular value decomposition (SVD), called the TuckerALS algorithm [10], is used to compute the orthogonal components in the Tucker model. Note that the ALS method partitions the computation of the three loading matrices by fixing two of them and identifying the third one. This is done iteratively until the matrices do not differ significantly between consecutive iterations (for example, $\|A - A_1\| < 10^{-5}$, $\|B - B_1\| < 10^{-5}$, and $\|C - C_1\| < 10^{-5}$, where $A_1$, $B_1$, and $C_1$ denote the matrices from the previous iteration), or until a maximum number of iterations (for example, 100) is attained; these are called the TuckerALS stopping criteria.
The goodness of fit tells us how good the approximation between the original tensor and the solution obtained by the algorithm is. This goodness of fit is computed by the expression defined as
$$ \mathrm{Fit} = \frac{\sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{K} \hat{x}_{ijk}^{2}}{\sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{K} x_{ijk}^{2}} \times 100\%. \quad (5) $$
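In code, the fit (5) is a one-line ratio of sums of squares; a minimal sketch (function name ours), assuming numpy arrays X and Xhat of equal shape, for instance from the snippet above:

def fit_percent(X, Xhat):
    """Goodness of fit (5): percentage of the total sum of squares recovered."""
    return 100.0 * (Xhat ** 2).sum() / (X ** 2).sum()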
Algorithm 1 summarizes the TuckerALS method. The loading matrices $A$, $B$, $C$, and the supermatrix $G_A$ are the outputs of the TuckerALS algorithm. The DisjointTuckerALS algorithm that we propose in this paper uses an adapted version of Algorithm 1. The notation $B \leftarrow \mathrm{svd}(X_B, Q)$ means that $B$ is a matrix whose columns are the first $Q$ left singular vectors of $X_B$.
Algorithm 1: TuckerALS
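Since Algorithm 1 is reproduced here only as a figure, the following hedged numpy reconstruction follows the textual description above (initialize the loading matrices by truncated SVDs of the unfoldings, then alternate updates until the stopping criteria are met); it is a sketch, not the authors' C#.NET implementation.

import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1, order='F')

def svd_left(M, r):
    """First r left singular vectors of M (the svd(., r) operation of the text)."""
    return np.linalg.svd(M, full_matrices=False)[0][:, :r]

def tucker_als(X, P, Q, R, max_iter=100, eps=1e-5):
    A = svd_left(unfold(X, 0), P)            # initialization by truncated SVDs
    B = svd_left(unfold(X, 1), Q)
    C = svd_left(unfold(X, 2), R)
    for _ in range(max_iter):
        A1, B1, C1 = A, B, C                 # previous iterates for the stopping criteria
        A = svd_left(unfold(X, 0) @ np.kron(C, B), P)
        B = svd_left(unfold(X, 1) @ np.kron(C, A), Q)
        C = svd_left(unfold(X, 2) @ np.kron(B, A), R)
        if max(np.linalg.norm(A - A1), np.linalg.norm(B - B1),
               np.linalg.norm(C - C1)) < eps:
            break
    GA = A.T @ unfold(X, 0) @ np.kron(C, B)  # frontal slices of the core
    return A, B, C, GA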

2.3. Disjoint Approach for the Tucker Model

Let $X = (x_{ij})$ be a matrix of order $I \times J$. Then, we say that $X$ is disjoint if and only if:
  • For each $i$, there exists a unique $j$ such that $x_{ij} \neq 0$.
  • For each $j$, there exists at least one $i$ such that $x_{ij} \neq 0$.
If $X$ also satisfies $X^{\top} X = I_J$, where $I_J$ is the identity matrix of order $J \times J$, we say that $X$ is a disjoint orthogonal matrix. The following is an example of a disjoint orthogonal matrix:
$$ X = \begin{pmatrix} \frac{1}{\sqrt{2}} & 0 & 0 \\ \frac{1}{\sqrt{2}} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \frac{1}{\sqrt{3}} \\ 0 & 0 & \frac{1}{\sqrt{3}} \\ 0 & 0 & \frac{1}{\sqrt{3}} \end{pmatrix}. $$
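A quick numpy check of this definition (function name ours), applied to the example above:

import numpy as np

def is_disjoint_orthogonal(X, tol=1e-12):
    nz = np.abs(X) > tol
    one_col_per_row = (nz.sum(axis=1) == 1).all()   # unique non-zero per row
    no_empty_column = (nz.sum(axis=0) >= 1).all()   # every column is used
    orthonormal = np.allclose(X.T @ X, np.eye(X.shape[1]))
    return one_col_per_row and no_empty_column and orthonormal

X = np.array([[1/np.sqrt(2), 0, 0],
              [1/np.sqrt(2), 0, 0],
              [0, 1, 0],
              [0, 0, 1/np.sqrt(3)],
              [0, 0, 1/np.sqrt(3)],
              [0, 0, 1/np.sqrt(3)]])
print(is_disjoint_orthogonal(X))   # True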
The mathematical optimization problem to be solved by the DisjointTuckerALS algorithm, when the three loading matrices $A$, $B$, and $C$ are required to be disjoint orthogonal, is stated as
$$ \min_{A, B, C, G_A} \; \big\| X_A - \underbrace{A G_A (C \otimes B)^{\top}}_{\hat{X}_A} \big\|^2 \quad (6) $$
subject to:
$$ A^{\top} A = I_P, \quad (7) $$
$$ B^{\top} B = I_Q, \quad (8) $$
$$ C^{\top} C = I_R, \quad (9) $$
where $\|\cdot\|$ is the Frobenius norm, and $I_P$, $I_Q$, $I_R$ are identity matrices of order $P \times P$, $Q \times Q$, and $R \times R$, respectively. Note that the number of decision variables of this model is $IP + JQ + KR + PQR$. The constraints given in (7)–(9) are needed so that the columns of each loading matrix form an orthonormal set. In the above problem, the objective function defined in (6) is minimized, but, in practice, the fit is calculated according to (5). In order to obtain a simple structure in a loading matrix for three-way tables, some known techniques are scaling, rotation, and sparsity. We propose a disjoint technique through the design and implementation of the DisjointTuckerALS algorithm, which performs a reduction of the three modes and can deliver up to three disjoint orthogonal loading matrices. Several methods for obtaining disjoint orthogonal components in two-way tables have been derived; see [16,18,19]. If $A$, $B$, and $C$ are disjoint matrices, the model defined in (6) can be solved with the TuckerALS method stated in Algorithm 1, and then the orthogonal components may be obtained for the Tucker model.

2.4. Illustrative Example

We show the benefit of using the DisjointTuckerALS algorithm through a computational experiment on a three-way table taken from [7] and adapted by [20]. This small data set is provided in Table 1, Table 2, Table 3 and Table 4, which show a three-way table with I = 6 individuals, J = 4 variables, and K = 4 situations, where behavioral levels are measured.
The four matrices of order 6 × 4 presented in Table 1, Table 2, Table 3 and Table 4 correspond to the different scenarios or situations in which the levels of behavior are evaluated. With P = Q = R = 2 components chosen, the TuckerALS algorithm yields orthogonal components with a model fit of 99.84%, as in [7]. When the DisjointTuckerALS algorithm is executed, the model fit is 98.10%.
The loading matrices $A$, $B$, and $C$ obtained with both the TuckerALS and DisjointTuckerALS algorithms are reported in Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10. Disjoint orthogonal components were calculated for the loading matrices $A$, $B$, $C$ and are reported in Table 6, Table 8, and Table 10, respectively. The first component of the loading matrix $A$ represents femininity and the second component masculinity; see Table 5 and Table 6. These tables allow us to affirm that the DisjointTuckerALS algorithm can identify the disjoint structure that lies in the individuals. For the loading matrix $B$, the first component represents the emotional state and the second component awareness; see Table 7 and Table 8. From these tables, note that the DisjointTuckerALS algorithm is able to identify the disjoint structure in the variables. For the loading matrix $C$, the algorithm groups the four situations into two clusters; see Table 9 and Table 10. From Table 10, note that the first component is related to social situations and the second component to performance situations.
Table 11 and Table 12 show the core $\underline{G}$ obtained with each of the algorithms. When observing both cores, it can be interpreted that women in social situations are mainly emotional and less aware. Conversely, men in the same situations are less emotional and more aware. In addition, in performance situations, women are mostly aware, while men are more aware than emotional. The TuckerALS and DisjointTuckerALS algorithms yield different cores $\underline{G}$, but they are interpreted in the same manner.
When a three-way table is analyzed using the Tucker model, it is important to point out that the loading matrices $A$, $B$, and $C$ are not always easily interpreted [7]. In many situations, these matrices need to be rotated (where any rotation is compensated in the core $\underline{G}$) in order to identify a simple structure that allows for interpretation. However, rotating the matrices does not guarantee that a simple structure is achieved. Therefore, the use of a sparse technique would be an alternative option. Nevertheless, it is worth mentioning that we have a loss of fit when using a sparse technique, which does not happen with rotations.
It is possible to rotate only the matrix $B$ or, alternatively, to rotate the matrices $B$ and $C$ simultaneously in order to obtain a simple structure, thus improving the interpretation. However, in some cases, the data analyst can opt to rotate the three loading matrices at the same time. Similarly, in the DisjointTuckerALS algorithm, disjoint orthogonal components can be requested for a single matrix, for example $B$; for two matrices, for example $B$ and $C$; or even for the three loading matrices.
The DisjointTuckerALS algorithm was executed on the same three-way table considering all possible combinations of disjoint orthogonal components in the loading matrices $A$, $B$, and $C$. Table 13 reports the results of comparing the different settings. From Table 13 and using the expression defined in (5), note that we lose fit when disjoint orthogonal components are required in the three loading matrices. It is important to consider that there is a loss of fit when using the disjoint technique, although interpretable loading matrices are achieved. Note also that there is a tradeoff between interpretation and speed, because the DisjointTuckerALS algorithm takes longer than the TuckerALS algorithm. For details regarding the computational time (runtime) of the algorithms presented in Table 13, see Section 4.1.

2.5. The DisjointPCA Algorithm

The mathematical optimization model that allows a disjoint orthogonal loading matrix $B$ to be obtained from a two-way table $X$ is stated as
$$ \min_{A, B} \; \big\| X - \underbrace{A B^{\top}}_{\hat{X}} \big\|^2 \quad (10) $$
subject to $B^{\top} B = I_Q$, with $B$ being a disjoint matrix, where $X$ is the data matrix of order $I \times J$, with $I$ individuals and $J$ variables, $A$ is the scoring matrix of order $I \times Q$, and $B$ is the loading matrix of order $J \times Q$. Note that $Q < J$ is the number of components required in the variable mode. Here, we use a greedy algorithm, known as the DisjointPCA algorithm and proposed in [16], to find a solution of the minimization problem defined in (10). The DisjointPCA algorithm plays a fundamental role in the operation of the DisjointTuckerALS algorithm.
The notation $B \leftarrow \mathrm{vs}(X, Q, \mathrm{Tol})$ means that the DisjointPCA algorithm with tolerance Tol is applied to the data matrix $X$, and the disjoint orthogonal loading matrix $B$, with $Q$ components, is obtained as a result. Recall that the DisjointPCA algorithm was proposed by Vichi and Saporta [16], which is why we use the acronym “vs” in the above notation. Note that Tol is a tolerance parameter that represents the maximum distance allowed in the model fit between two consecutive iterations of the DisjointPCA algorithm. The above notation is used to explain how the DisjointTuckerALS algorithm works; see [16,21] for more details on the DisjointPCA algorithm. A simplified sketch of this type of procedure is given below.
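The exact greedy procedure is specified in [16,21]. As a rough, simplified sketch of the kind of search involved (ours, not the Vichi–Saporta algorithm itself): assign each variable to one of Q groups, take the leading singular direction of each column block as that group's loading, and greedily move variables between groups while the explained sum of squares improves by more than the tolerance. By construction, the resulting loading matrix is disjoint orthogonal.

import numpy as np

def group_fit(X, groups, Q):
    """Explained sum of squares: leading squared singular value per column block."""
    return sum(np.linalg.svd(X[:, groups == q], compute_uv=False)[0] ** 2
               for q in range(Q))

def disjoint_pca(X, Q, tol=1e-5, seed=0):
    I, J = X.shape
    rng = np.random.default_rng(seed)
    groups = rng.integers(0, Q, size=J)
    groups[:Q] = np.arange(Q)                  # no group starts empty
    best, improved = group_fit(X, groups, Q), True
    while improved:                            # greedy single-variable moves
        improved = False
        for j in range(J):
            for q in range(Q):
                if q == groups[j] or (groups == groups[j]).sum() == 1:
                    continue                   # keep every group non-empty
                old = groups[j]
                groups[j] = q
                f = group_fit(X, groups, Q)
                if f > best + tol:
                    best, improved = f, True
                else:
                    groups[j] = old
    B = np.zeros((J, Q))                       # disjoint orthogonal by construction
    for q in range(Q):
        cols = np.where(groups == q)[0]
        B[cols, q] = np.linalg.svd(X[:, cols], full_matrices=False)[2][0]
    return B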

3. The DisjointTuckerALS Algorithm

In this section, we derive the DisjointTuckerALS algorithm, which computes from one to three disjoint orthogonal loading matrices for the Tucker model. Next, we explain how the DisjointTuckerALS algorithm works.

3.1. The Stages of the Algorithm

The DisjointTuckerALS algorithm has three stages, and its input parameters are:
  • $\underline{X}$: the three-way table of data;
  • $P$, $Q$, $R$: the numbers of components in the $A$-mode, $B$-mode, and $C$-mode, respectively;
  • ALSMaxIter: the maximum number of iterations of the ALS algorithm; and
  • Tol: the maximum distance allowed in the model fit between two consecutive iterations of the DisjointPCA algorithm.
Stage 1 
[Initial computation of loading matrices with an adapted TuckerALS algorithm]
In this first stage, an initial calculation of the loading matrices is made by executing an adapted TuckerALS algorithm, defined in Algorithm 2. The outputs of Algorithm 2 are the matrices $Y_A$, $Y_B$, and $Y_C$ of order $I \times QR$, $J \times PR$, and $K \times PQ$, respectively.
Algorithm 2: Adapted TuckerALS
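Algorithm 2 likewise appears here only as a figure; the following is a hedged guess at its content, based on the description above (it reuses numpy as np, unfold, and tucker_als from the sketch of Algorithm 1).

def adapted_tucker_als(X, P, Q, R, max_iter=100, eps=1e-5):
    """Run the alternating scheme, then return the projected matrices
    on which Stage 2 applies the DisjointPCA algorithm."""
    A, B, C, _ = tucker_als(X, P, Q, R, max_iter, eps)
    YA = unfold(X, 0) @ np.kron(C, B)   # I x QR
    YB = unfold(X, 1) @ np.kron(C, A)   # J x PR
    YC = unfold(X, 2) @ np.kron(B, A)   # K x PQ
    return YA, YB, YC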
Stage 2 
[Computation of disjoint orthogonal loading matrices with the DisjointPCA algorithm]
This second stage is where the disjoint orthogonal loading matrices are computed. In order to obtain $P$, $Q$, and $R$ disjoint orthogonal components in the loading matrices $A$, $B$, and $C$, the DisjointPCA algorithm is applied to the matrices $Y_A$, $Y_B$, and $Y_C$, respectively. If $A$ is required to be disjoint orthogonal, then $A \leftarrow \mathrm{vs}(Y_A, P, \mathrm{Tol})$. If $B$ is required to be disjoint orthogonal, then $B \leftarrow \mathrm{vs}(Y_B, Q, \mathrm{Tol})$. If $C$ is required to be disjoint orthogonal, then $C \leftarrow \mathrm{vs}(Y_C, R, \mathrm{Tol})$.
Stage 3 
[Computation of non-disjoint orthogonal loading matrices and of the core]
This final stage is where the non-disjoint orthogonal loading matrices are computed. For instance, if only the matrix $B$ is required to have disjoint orthogonal components (see Figure 1), then an ALS scheme must be applied in order to compute the loading matrices $A$ and $C$ (with the matrix $B$ fixed), as in the TuckerALS algorithm, initializing $A$ or $C$. In addition, if the matrices $B$ and $C$ are required to have disjoint orthogonal components (see Figure 2), then the loading matrix $A$ is calculated (with the matrices $B$ and $C$ fixed) using the following two steps, as in the TuckerALS algorithm: (1) $Y_A \leftarrow X_A (C \otimes B)$; and (2) $A \leftarrow \mathrm{svd}(Y_A, P)$. If $A$, $B$, and $C$ are all required to be disjoint orthogonal (see Figure 3), then no further calculation on the loading matrices is necessary. Finally, the DisjointTuckerALS algorithm must compute the core using two steps; using the frontal slices equation for $\underline{G}$, we have: (1) $Y_A \leftarrow X_A (C \otimes B)$; and (2) $G_A \leftarrow A^{\top} Y_A$. The DisjointTuckerALS algorithm finishes by providing the matrices $A$, $B$, and $C$, the core $\underline{G}$, and the fit of the model.
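When the three loading matrices are already fixed, the remaining core computation amounts to two lines; a minimal self-contained sketch (ours):

import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1, order='F')

def stage3_core(X, A, B, C):
    YA = unfold(X, 0) @ np.kron(C, B)   # (1) Y_A <- X_A (C ⊗ B)
    return A.T @ YA                     # (2) G_A <- A' Y_A, frontal slices of the core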

3.2. Using the DisjointTuckerALS Algorithm

We summarize our proposal in Algorithm 3 in order to explain the use of the DisjointTuckerALS algorithm when performing a component analysis for three-way tables with the Tucker model.
Computing the disjoint orthogonal components in Step 7 simultaneously in the three loading matrices is up to the analyst. However, it is not recommended, due to the significant loss of fit that we have observed in the various computational experiments carried out. The more disjoint orthogonal loading matrices are computed, the lower the fit of the model and the more processing time is required. More than one technique can be used by combining Step 5, Step 6, and Step 7. Subsequently, the results can be compared in Step 8; see Figure 4.
Algorithm 3: Procedure for using DisjointTuckerALS
Step 1.
Collect the data in a three-way table $\underline{X}$ of order $I \times J \times K$, where $I$ is the number of individuals, $J$ is the number of variables, and $K$ is the number of situations.
Step 2.
Preprocess $\underline{X}$ according to the analyst's criterion.
Step 3.
Determine the numbers of components $P$, $Q$, and $R$ for the $A$-mode, $B$-mode, and $C$-mode, respectively.
Step 4.
Perform a usual PCA with the tensor $\underline{X}$, that is, compute the loading matrices, for example, with the TuckerALS algorithm. If the loading matrices $A$, $B$, and $C$ have a simple structure (easy to interpret), go to Step 8.
Step 5.
Apply scaling or rotation techniques, which preserve the fit. If the loading matrices have a simple structure, go to Step 8.
Step 6.
Use a sparse technique. If the loading matrices are simple, go to Step 8.
Step 7.
Employ the disjoint technique by computing disjoint orthogonal components with the DisjointTuckerALS algorithm, and continue to Step 8.
Step 8.
Report the results and draw conclusions.

4. Numerical Results

In this section, we carry out computational experiments to evaluate the performance of our algorithm. The first experiment uses data simulated to generate a three-way table with a disjoint structure according to the Tucker model. The second is an experiment using real data corresponding to a three-way table taken from [22]. In this section, we also provide details on some computational aspects, such as the runtimes of the algorithm and the hardware and software used, among others.

4.1. Computational Aspects

We must mention that the DisjointTuckerALS algorithm requires more computational time (runtime) than the TuckerALS algorithm. This is because, as the number of loading matrices required to be disjoint increases, the time needed for their calculation also increases, consuming more memory and processor resources.
The computational experiments were carried out on a computer with the following hardware characteristics: (i) OS: Windows 10, 64 bits; (ii) RAM: 8 gigabytes; and (iii) processor: Intel Core i7-4510U, 2.00–2.60 GHz. Regarding the software, the following tools and programming languages were used: (i) development tool (IDE): Microsoft Visual Studio Express; (ii) programming language: C#.NET; and (iii) statistical software: R.
The DisjointPCA, TuckerALS, and DisjointTuckerALS algorithms presented in this paper, used to perform all of the numerical applications, were implemented in C#.NET, mainly for the graphical user interface (GUI) for data entry, control of the calculations, and delivery of the results. Data entry and presentation of results were carried out with Excel sheets, with communication between C#.NET and Excel established through the COM+ connector. Some parts of the code were implemented in the R programming language for random number generation and the SVD. Communication between C#.NET and R was established with the R.NET connector, which can be installed in Visual Studio with the NuGet package manager, whereas the SVD was performed with the R package irlba. This package quickly calculates a partial SVD, that is, an SVD restricted to the leading singular values, whose number must be specified in advance. Therefore, the irlba package neither uses nor computes the remaining singular values, accelerating the calculations for big matrices, such as the frontal, horizontal, and vertical slices matrices of a three-way table.
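For readers working in Python rather than R, an analogous partial SVD (illustrative only, not part of the authors' implementation) is available as scipy.sparse.linalg.svds, which likewise requires the number of leading singular values to be fixed in advance:

import numpy as np
from scipy.sparse.linalg import svds  # Python analogue of R's irlba

rng = np.random.default_rng(0)
M = rng.normal(size=(2000, 500))      # e.g., a large slices matrix of a three-way table
U, s, Vt = svds(M, k=5)               # only the 5 leading singular triplets are computed
order = np.argsort(s)[::-1]           # svds returns singular values in ascending order
U, s, Vt = U[:, order], s[order], Vt[order]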

4.2. Generator of Disjoint Structure Tensors

We design and implement a simulation algorithm that randomly builds a three-way table with a disjoint latent structure in its three modes. The DisjointTuckerALS algorithm should then be able to detect that structure, since it fits a Tucker model with disjoint orthogonal components.
Let $\underline{X}$ be a three-way table with $I$ individuals, $J$ variables, and $K$ times or locations. Assume that: (i) the first mode, related to the loading matrix $A$, has $P$ latent individuals ($P < I$); (ii) the second mode, related to the loading matrix $B$, has $Q$ latent variables ($Q < J$); and (iii) the third mode, related to the loading matrix $C$, has $R$ latent locations ($R < K$). Suppose that $sx_1, \ldots, sx_I$ are the $I$ original individuals and $sy_1, \ldots, sy_P$ are the $P$ latent individuals. We consider the linear combination given by
$$ sy_p = a_{1,p}\, sx_1 + \cdots + a_{I,p}\, sx_I, \quad p = 1, \ldots, P. \quad (11) $$
If it is required that $m$ consecutive original individuals, $sx_n, sx_{n+1}, \ldots, sx_{n+m-1}$, be represented by the latent individual $sy_p$, then the scalars $a_{n,p}, a_{n+1,p}, \ldots, a_{n+m-1,p}$ are defined as independent random variables with a discrete uniform distribution whose support is the set of integers from 70 to 100. The other scalars in the same linear combination are defined as independent random variables with a discrete uniform distribution whose support is the set of integers from 1 to 30. This procedure must be performed for each $p$ from 1 to $P$, since each original individual must have a strong presence in a unique latent individual. The Gram–Schmidt orthonormalization process is then applied to the matrix of order $I \times P$ that contains the scalars of all the linear combinations. Hence, a disjoint dimensional reduction for the loading matrix $A$ is achieved.
Similarly, for the loading matrix $B$, consider that $vx_1, \ldots, vx_J$ are the $J$ original variables and $vy_1, \ldots, vy_Q$ are the $Q$ latent variables. We consider the linear combination stated as
$$ vy_q = b_{1,q}\, vx_1 + \cdots + b_{J,q}\, vx_J, \quad q = 1, \ldots, Q. \quad (12) $$
If it is required that the $m$ consecutive original variables $vx_n, vx_{n+1}, \ldots, vx_{n+m-1}$ be represented by the latent variable $vy_q$, then the scalars $b_{n,q}, b_{n+1,q}, \ldots, b_{n+m-1,q}$ are defined as independent random variables with a discrete uniform distribution, as for the matrix $A$. In the same manner as before, this procedure must be performed for each $q$ from 1 to $Q$, and the Gram–Schmidt orthonormalization process is again applied, as in the case of $A$; then a disjoint dimensional reduction for $B$ is achieved.
Analogously for $C$, let $tx_1, \ldots, tx_K$ be the $K$ original times or locations and $ty_1, \ldots, ty_R$ be the $R$ latent times or locations. We consider the linear combination expressed by
$$ ty_r = c_{1,r}\, tx_1 + \cdots + c_{K,r}\, tx_K, \quad r = 1, \ldots, R. \quad (13) $$
If it is required that the $m$ consecutive original locations $tx_n, tx_{n+1}, \ldots, tx_{n+m-1}$ be represented by the latent location $ty_r$, then the scalars $c_{n,r}, c_{n+1,r}, \ldots, c_{n+m-1,r}$ are defined as for $A$ and $B$, and the procedure is applied for each $r$ from 1 to $R$. Once again, the Gram–Schmidt orthonormalization process is applied to the matrix of order $K \times R$ containing the scalars of all the linear combinations. Thus, a disjoint dimensional reduction for the loading matrix $C$ is achieved.
The core $\underline{G}$ must be of order $P \times Q \times R$. Without loss of generality, suppose that the entries of this three-way table are independent random variables with a continuous uniform distribution on the interval $[-50, 50]$. In order to complete the creation of $\underline{X}$, the matrix equation is defined as
$$ X_A = A G_A (C \otimes B)^{\top}. \quad (14) $$
The matrix $X_A$ of order $I \times JK$ stated in (14) contains the frontal slices of $\underline{X}$, whereas the matrix $G_A$ of order $P \times QR$ contains the frontal slices of $\underline{G}$. Equations (11)–(13) are used to build the random loading matrices. The algorithm that builds the random three-way table $\underline{X}$ of order $I \times J \times K$ must implement an application $\varphi$ expressed as
$$ \varphi : \mathbb{N}^6 \times \mathbb{N}^P \times \mathbb{N}^Q \times \mathbb{N}^R \to T_{I \times J \times K}, \qquad \big(I, J, K, P, Q, R, \{\alpha_p\}_{p=1}^{P}, \{\beta_q\}_{q=1}^{Q}, \{\gamma_r\}_{r=1}^{R}\big) \mapsto \varphi\big(I, J, K, P, Q, R, \{\alpha_p\}_{p=1}^{P}, \{\beta_q\}_{q=1}^{Q}, \{\gamma_r\}_{r=1}^{R}\big), \quad (15) $$
which is subject to $P < I$, $Q < J$, and $R < K$, where $T_{I \times J \times K}$ is the set of all three-way tables of order $I \times J \times K$ with real entries. Additionally, it must satisfy
$$ I = \sum_{p=1}^{P} \alpha_p, \quad J = \sum_{q=1}^{Q} \beta_q, \quad K = \sum_{r=1}^{R} \gamma_r, \quad (16) $$
where $\alpha_p$ is the number of original individuals in the $p$-th latent individual, $\beta_q$ is the number of original variables in the $q$-th latent variable, and $\gamma_r$ is the number of original locations in the $r$-th latent location. The application $\varphi$ defined in (15) must randomly provide a three-way table $\underline{X}$ of order $I \times J \times K$ with a simple structure, which is expected to be detected by the DisjointTuckerALS algorithm.
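A hedged numpy sketch of such a generator (ours; the QR factorization stands in for the Gram–Schmidt step described above, and the interval for the core entries is as stated in Section 4.2):

import numpy as np

def block_loadings(sizes, rng):
    """Orthonormal loadings with one strong block per column, cf. (11)-(13)."""
    n, m = sum(sizes), len(sizes)
    W = rng.integers(1, 31, size=(n, m)).astype(float)       # weak weights in [1, 30]
    row = 0
    for p, sz in enumerate(sizes):
        W[row:row + sz, p] = rng.integers(70, 101, size=sz)  # strong weights in [70, 100]
        row += sz
    return np.linalg.qr(W)[0]                                # orthonormalization

def phi(alphas, betas, gammas, seed=0):
    """Generator of a three-way table with a disjoint latent structure, cf. (15)."""
    rng = np.random.default_rng(seed)
    A = block_loadings(alphas, rng)                          # I x P
    B = block_loadings(betas, rng)                           # J x Q
    C = block_loadings(gammas, rng)                          # K x R
    G = rng.uniform(-50, 50, size=(len(alphas), len(betas), len(gammas)))
    return np.einsum('ip,jq,kr,pqr->ijk', A, B, C, G)        # assembles X as in (14)

X = phi([5, 7, 8], [3, 4, 5, 6], [2, 3, 3, 4, 5])            # the 20 x 18 x 17 table of Section 4.3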

4.3. Applying the DisjointTuckerALS Algorithm to Simulated Data

Next, we show how the DisjointTuckerALS algorithm works by using the application $\varphi$ to generate a three-way table $\underline{X}$ of order $20 \times 18 \times 17$. According to the definition of $\varphi$ given in (15), the values of $P$, $Q$, and $R$ are 3, 4, and 5, respectively. Furthermore, the constraints stated in (16) are satisfied. The other parameter settings are ALSMaxIter = 100 and Tol = 0.00001. The TuckerALS and DisjointTuckerALS algorithms were executed on the data $\varphi(20, 18, 17, 3, 4, 5, \{5, 7, 8\}, \{3, 4, 5, 6\}, \{2, 3, 3, 4, 5\})$, obtaining fits of 94.73% and 92.79%, respectively; see Table 14. As expected, we have a loss of fit of 1.94 percentage points. However, there is a gain in interpretation, because a simple structure is obtained in the three loading matrices. Table 15 reports the loading matrix $A$; note that the DisjointTuckerALS algorithm is able to identify the disjoint structure in the first mode. Observe that the first five original individuals are represented by the first disjoint orthogonal component, the next seven by the second, and the last eight by the third. Table 16 shows the loading matrix $B$; the DisjointTuckerALS algorithm is also able to identify the disjoint structure in the second mode, recognizing the way in which the latent variables group the original variables. Table 17 presents the loading matrix $C$ and, once again, the DisjointTuckerALS algorithm identifies the disjoint structure in the third mode.
The DisjointTuckerALS algorithm also recognizes the manner in which the latent locations group the original locations, and the three loading matrices are easily interpreted. In contrast, when running the TuckerALS algorithm, the three loading matrices do not allow for an easy interpretation. Note that the disjoint approach can be complemented with rotation and sparse techniques for a better analysis.

4.4. Applying the DisjointTuckerALS Algorithm to Real Data

Next, the DisjointTuckerALS algorithm is executed on a three-way table $\underline{X}$ of order $24 \times 20 \times 38$ with real data taken from [22], in which $K = 38$ Japanese university students evaluated $I = 24$ of Chopin's preludes using $J = 20$ bipolar scales. The preprocessing of the data and the number of components in each mode were chosen in the same manner as in [22]. Table 18 reports the model fit in four different scenarios with $P = 2$, $Q = 3$, and $R = 2$. The full data set can be downloaded from http://three-mode.leidenuniv.nl.
For a comparative analysis, the loading matrix related to Chopin's preludes is chosen. Table 19 reports the loading matrix $A$ obtained with the TuckerALS algorithm. In [22], this matrix is not interpreted directly; the authors proceed to apply rotations.
Table 20 provides the final loading matrix $A$ used for interpretation. The first component is named “fast+minor, slow+major” and the second component is named “fast+major, slow+minor”. Table 21 presents the loading matrix $A$ obtained with the DisjointTuckerALS algorithm. Note that, with the loading matrix $A$ of Table 21, the same conclusions are reached as with the loading matrix $A$ of Table 20.

5. Conclusions, Discussion, Limitations, and Future Research

The main techniques for dimensionality reduction, pattern extraction, and classification in data obtained through tensorial analysis have been based on the Tucker model. However, a major problem of the existing techniques is the interpretability of their results. In this work, we have proposed a heuristic algorithm for computing disjoint orthogonal components in a three-way table with the Tucker model, which facilitates this interpretability. The DisjointTuckerALS algorithm is based on a combination of the TuckerALS and DisjointPCA algorithms. The results obtained in the computational experiments have shown that the main benefit of the proposed algorithm is that the loading matrices can be interpreted directly, without resorting to rotational methods or sparse techniques. The computational experiments have also suggested that the algorithm can detect and capture disjoint structures in a three-way table according to the Tucker model. In summary, this paper reported the following findings:
(i)
A new algorithm for computing disjoint orthogonal components in a three-way table with the Tucker model was proposed.
(ii)
A measure of goodness of fit to evaluate the algorithms presented was proposed.
(iii)
An optimization mathematical model was used.
(iv)
A numerical evaluation of the proposed methodology was considered by means of Monte Carlo simulations.
(v)
By using a case study with real-world data, we have illustrated the new algorithm.
Numerical experiments with the proposed algorithm on simulated and real data sets allowed us to show its good performance and its potential applications. We obtained a new algorithm that can be a useful addition to the multivariate toolkit of diverse practitioners, applied statisticians, and data scientists.
Some limitations of our study, which could be addressed in future works, are the following:
(i)
There is no guarantee that the optimal solution is attained due to the heuristic nature of the DisjointTuckerALS algorithm.
(ii)
In the absence of constraints additional to those inherent to the original problem, the space of feasible solutions contains the global optimum. However, by incorporating the constraints of the DisjointTuckerALS algorithm, the space of feasible solutions is compressed, and the algorithm aims to find a solution as close as possible to the global optimum within this new feasible set. For this reason, the fit corresponding to the solution provided by the DisjointTuckerALS algorithm is less than the fit achieved by the TuckerALS algorithm. Nevertheless, the incorporated constraints allow us to place zeros in the positions of the variables with a low contribution to a component of the loading matrix, which permits us to interpret the components more clearly.
(iii)
The proposed algorithm takes longer than the TuckerALS algorithm, so a tradeoff between interpretation and speed exists.
In order to motivate readers and potential users, we list a wide spectrum of additional applications of the new algorithm to real three-way data in diverse areas:
(i)
Functional magnetic resonance imaging (fMRI) has been successfully used by neuroscientists for the diagnosis of neurological and neurodevelopmental disorders. fMRI data have been analyzed by means of tensorial methods using the Tucker model [23].
(ii)
Component analysis of three-way tables also has applications in the environmental sciences. For example, in [24], through a multivariate study of a sample of blue crabs, the hypothesis is tested that environmental stress weakens some organisms, whose normal immune response then cannot protect them from a bacterial infection. A Tucker model was used for this analysis.
(iii)
Grain price index data were analyzed in [25] using the Tucker decomposition in search of behavior patterns. The DisjointTuckerALS algorithm can also be used for detecting such patterns.
(iv)
An application in economics to the specialization indexes of the electronic industries of 23 European countries of the Organisation for Economic Co-operation and Development (OECD), based on three-way tables, is presented in [1]; see also http://three-mode.leidenuniv.nl. Applications to stock markets and breakpoint analysis for the COVID-19 pandemic can also be considered [26].
(v)
On the website “The Three-Mode Company” (see http://three-mode.leidenuniv.nl), data sets corresponding to three-way tables in areas including engineering, management, and medicine are related to: (a) aerosol particles in Austria; (b) diseased blue crabs in the US; (c) chromatography; (d) coping of Dutch primary school children; (e) Dutch hospitals as organizations; (f) girls' growth curves between five and 15 years old; (g) happiness, siblings, and schooling; (h) multiple personalities; (i) parental behavior in Japan; (j) peer play and a new sibling; (k) Dutch children in strange situations; and (l) university positions and academics.
Some open problems that arose from this study are the following:
(i)
We believe that the disjoint approach can be used together with existing techniques.
(ii)
A study that allows a disjoint structure to be obtained in the core of a Tucker model, facilitating its interpretation, is of interest.
(iii)
A bootstrap analysis for the loading matrices can be performed.
(iv)
Regression modeling, errors-in-variables modeling, functional data analysis, and partial least squares (PLS) regression based on the proposed methodology are also of interest [27,28,29,30].
(v)
Other applications of the developed algorithm in the context of multivariate methods are discriminant analysis, correspondence analysis, and cluster analysis, as well as the already mentioned functional data analysis and PLS.
(vi)
There is also a promising field of applications in the so-called statistical learning; for example, for image compression.
Therefore, the new methodology proposed in this study poses new challenges and open issues to be explored from theoretical and numerical perspectives. Future articles reporting research on these and other issues are in progress, and we hope to publish their findings.

Author Contributions

Data curation, C.M.-B. and J.A.R.-F.; formal analysis, C.M.-B., J.A.R.-F., A.B.N.-L., V.L., A.M.-C., M.P.G.-V.; investigation, C.M.-B., M.P.G.-V.; methodology, C.M.-B., J.A.R.-F., A.B.N.-L., V.L., A.M.-C., M.P.G.-V.; writing—original draft, C.M.-B., J.A.R.-F., A.B.N.-L., A.M.-C., M.P.G.-V.; writing—review and editing, V.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported partially by project grant “Fondecyt 1200525” (V. Leiva) from the National Agency for Research and Development (ANID) of the Chilean government.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are available in this paper, in the links provided therein, or from the corresponding author upon request.

Acknowledgments

The authors would also like to thank the Editor and Reviewers for their constructive comments, which led to improvements in the presentation of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kroonenberg, P.M. Applied Multiway Data Analysis; Wiley: New York, NY, USA, 2008.
  2. Hitchcock, F.L. The expression of a tensor or a polyadic as a sum of products. J. Math. Phys. 1927, 6, 164–189.
  3. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500.
  4. Tucker, L.R. Some mathematical notes on three-mode factor analysis. Psychometrika 1966, 31, 279–311.
  5. Harshman, R.A. Foundations of the PARAFAC procedure: Models and conditions for an explanatory multimodal factor analysis. UCLA Work. Pap. Phon. 1970, 16, 1–84.
  6. Carroll, J.D.; Chang, J.J. Analysis of individual differences in multidimensional scaling via an n-way generalization of “Eckart-Young” decomposition. Psychometrika 1970, 35, 283–319.
  7. Kiers, H.A.L.; Mechelen, I.V. Three-way component analysis: Principles and illustrative application. Psychol. Methods 2001, 6, 84–110.
  8. Kolda, T.G. Orthogonal tensor decompositions. SIAM J. Matrix Anal. Appl. 2001, 23, 243–255.
  9. Acal, C.; Aguilera, A.M.; Escabias, M. New modeling approaches based on varimax rotation of functional principal components. Mathematics 2020, 8, 2085.
  10. Kroonenberg, P.M.; de Leeuw, J. Principal component analysis of three-mode data by means of alternating least squares algorithms. Psychometrika 1980, 45, 69–97.
  11. Jolliffe, I.T. Principal Component Analysis; Springer: New York, NY, USA, 2002.
  12. Papalexakis, E.E.; Faloutsos, C.; Sidiropoulos, N.D. ParCube: Sparse parallelizable tensor decompositions. In Machine Learning and Knowledge Discovery in Databases; Springer: Berlin/Heidelberg, Germany, 2012; pp. 521–536.
  13. Perros, I.; Chen, R.; Vuduc, R.; Sun, J. Sparse hierarchical Tucker factorization and its application to healthcare. In Proceedings of the IEEE International Conference on Data Mining, Atlantic City, NJ, USA, 14–17 November 2015; pp. 943–948.
  14. Sun, W.W.; Junwei, L.; Han, L.; Guang, C. Provable sparse tensor decomposition. J. R. Stat. Soc. B 2017, 79, 899–916.
  15. Yokota, T.; Cichocki, A. Multilinear tensor rank estimation via sparse Tucker decomposition. In Proceedings of the 2014 Joint 7th International Conference on Soft Computing and Intelligent Systems (SCIS) and 15th International Symposium on Advanced Intelligent Systems (ISIS), Kitakyushu, Japan, 3–6 December 2014.
  16. Vichi, M.; Saporta, G. Clustering and disjoint principal component analysis. Comput. Stat. Data Anal. 2009, 53, 3194–3208.
  17. Ramirez-Figueroa, J.A.; Martin-Barreiro, C.; Nieto-Librero, A.B.; Leiva, V.; Galindo, M.P. A new principal component analysis by particle swarm optimization with an environmental application for data science. Stoch. Environ. Res. Risk Assess. 2021, in press.
  18. Ferrara, C.; Martella, F.; Vichi, M. Dimensions of well-being and their statistical measurements. In Topics in Theoretical and Applied Statistics; Alleva, G., Giommi, A., Eds.; Springer: Cham, Switzerland, 2016; pp. 85–99.
  19. Nieto-Librero, A.B. Inferential Version of Biplot Methods Based on Bootstrapping and Its Application to Three-Way Tables. Ph.D. Thesis, Universidad de Salamanca, Salamanca, Spain, 2015. (In Spanish)
  20. Amaya, J.; Pacheco, P. Dynamic factor analysis using the Tucker3 method. Rev. Colomb. Estad. 2002, 25, 43–57.
  21. Macedo, E.; Freitas, A. The alternating least-squares algorithm for CDPCA. In Optimization in the Natural Sciences; Plakhov, A., Tchemisova, T., Freitas, A., Eds.; Springer: Cham, Switzerland, 2015; pp. 173–191.
  22. Murakami, T.; Kroonenberg, P.M. Three-mode models and individual differences in semantic differential data. Multivar. Behav. Res. 2003, 38, 247–283.
  23. Hamdi, S.M.; Wu, Y.; Boubrahimi, S.F.; Angryk, R.; Krishnamurthy, L.C.; Morris, R. Tensor decomposition for neurodevelopmental disorder prediction. In Brain Informatics; Wang, S., Yamamoto, V., Su, J., Yang, Y., Jones, E., Iasemidis, L., Mitchell, T., Eds.; Springer: Cham, Switzerland, 2018; pp. 339–348.
  24. Gemperline, P.J.; Miller, K.H.; West, T.L.; Weinstein, J.E.; Hamilton, J.C.; Bray, J.T. Principal component analysis, trace elements, and blue crab shell disease. Anal. Chem. 1992, 64, 523–531.
  25. Correa, F.E.; Oliveira, M.D.; Gama, J.; Correa, P.L.P.; Rady, J. Analyzing the behavior dynamics of grain price indexes using Tucker tensor decomposition and spatio-temporal trajectories. Comput. Electron. Agric. 2016, 120, 72–78.
  26. Chahuan-Jimenez, K.; Rubilar, R.; de la Fuente-Mella, H.; Leiva, V. Breakpoint analysis for the COVID-19 pandemic and its effect on the stock markets. Entropy 2021, 23, 100.
  27. Huerta, M.; Leiva, V.; Liu, S.; Rodriguez, M.; Villegas, D. On a partial least squares regression model for asymmetric data with a chemical application in mining. Chemom. Intell. Lab. Syst. 2019, 190, 55–68.
  28. Carrasco, J.M.F.; Figueroa-Zuniga, J.I.; Leiva, V.; Riquelme, M.; Aykroyd, R.G. An errors-in-variables model based on the Birnbaum-Saunders distribution and its diagnostics with an application to earthquake data. Stoch. Environ. Res. Risk Assess. 2020, 34, 369–380.
  29. Giraldo, R.; Herrera, L.; Leiva, V. Cokriging prediction using as secondary variable a functional random field with application in environmental pollution. Mathematics 2020, 8, 1305.
  30. Melendez, R.; Giraldo, R.; Leiva, V. Sign, Wilcoxon and Mann-Whitney tests for functional data: An approach based on random projections. Mathematics 2021, 9, 44.
Figure 1. Flowchart of the DisjointTuckerALS algorithm that computes a single disjoint orthogonal matrix (in this case, the matrix B ).
Figure 2. Flowchart of the DisjointTuckerALS algorithm that computes two disjoint orthogonal matrices (in this case the matrices B and C ).
Figure 3. Flowchart of the DisjointTuckerALS algorithm that computes three disjoint orthogonal matrices: A , B , and C .
Figure 4. Flowchart for using the DisjointTuckerALS algorithm.
Table 1. Matrix of situation 1: “applying for an exam” for behavioral level data.

           Emotional   Sensitive   Caring   Thorough
Anne          0.0         0.0       4.0       4.0
Bert          0.0         0.0       2.0       2.0
Claus         0.0         0.0       2.0       2.0
Dolly         0.0         0.0       4.0       4.0
Edna          0.0         0.0       2.5       2.5
Frances       0.0         0.0       4.0       4.0
Table 2. Matrix of situation 2: “giving a speech” for behavioral level data.

           Emotional   Sensitive   Caring   Thorough
Anne          0.6         0.6       2.4       2.4
Bert          0.2         0.2       1.8       1.8
Claus         0.2         0.2       1.8       1.8
Dolly         0.6         0.6       2.4       2.4
Edna          0.4         0.4       2.1       2.1
Frances       0.6         0.6       2.4       2.4
Table 3. Matrix of situation 3: “family picnic” for behavioral level data.

           Emotional   Sensitive   Caring   Thorough
Anne          4.0         4.0       0.0       0.0
Bert          1.0         1.0       1.0       1.0
Claus         1.0         1.0       1.0       1.0
Dolly         4.0         4.0       0.0       0.0
Edna          2.0         2.0       0.5       0.5
Frances       4.0         4.0       0.0       0.0
Table 4. Matrix of situation 4: “meeting a new date” for behavioral level data.

           Emotional   Sensitive   Caring   Thorough
Anne          4.6         4.6       0.9       0.9
Bert          1.2         1.2       1.8       1.8
Claus         1.2         1.2       1.8       1.8
Dolly         4.6         4.6       0.9       0.9
Edna          2.4         2.4       1.4       1.4
Frances       4.6         4.6       0.9       0.9
Table 5. Loading matrix $A$ with the TuckerALS algorithm for behavioral level data.

           Femininity     Masculinity
Anne      −0.518560994    0.228268998
Bert      −0.216781213   −0.619921659
Claus     −0.216781213   −0.619921659
Dolly     −0.518560994    0.228268998
Edna      −0.315111562   −0.273996475
Frances   −0.518560994    0.228268998
Table 6. Loading matrix $A$ with the DisjointTuckerALS algorithm for behavioral level data.

           Femininity     Masculinity
Anne      −0.545737149    0
Bert       0             −0.707106781
Claus      0             −0.707106781
Dolly     −0.545737149    0
Edna      −0.326363131    0
Frances   −0.545737149    0
Table 7. Loading matrix $B$ with the TuckerALS algorithm for behavioral level data.

            Emotionality    Conscientiousness
Emotional   −0.588715219    −0.391681492
Sensitive   −0.588715219    −0.391681492
Caring      −0.391681492     0.588715219
Thorough    −0.391681492     0.588715219
Table 8. Loading matrix $B$ with the DisjointTuckerALS algorithm for behavioral level data.

            Emotionality    Conscientiousness
Emotional   −0.707106781     0
Sensitive   −0.707106781     0
Caring       0              −0.707106781
Thorough     0              −0.707106781
Table 9. Loading matrix $C$ with the TuckerALS algorithm for behavioral level data.

                        Social Situations   Performance Situations
Applying for an exam      −0.333198693         0.770591276
Giving a speech           −0.310425514         0.435143574
Family picnic             −0.538191124        −0.387207268
Meeting a new date        −0.709200215        −0.258669069
Table 10. Loading matrix $C$ with the DisjointTuckerALS algorithm for behavioral level data.

                        Social Situations   Performance Situations
Applying for an exam       0                   −0.827376571
Giving a speech            0                   −0.561647585
Family picnic             −0.633810357          0
Meeting a new date        −0.773488482          0
Table 11. Core $\underline{G}$ with the TuckerALS algorithm for behavioral level data.

              Social Situations                     Performance Situations
              Emotionality   Conscientiousness     Emotionality   Conscientiousness
Femininity    −17.09837769   −0.489491858           0.50606476    −12.25957342
Masculinity    −0.623702797   4.106153763           0.54320104      0.72835096
Table 12. Core $\underline{G}$ with the DisjointTuckerALS algorithm for behavioral level data.

              Social Situations                     Performance Situations
              Emotionality   Conscientiousness     Emotionality   Conscientiousness
Femininity    −15.55006689   −2.25788715           −0.883942787   −12.28278827
Masculinity    −3.12399307   −4.052179249          −0.224659034    −5.33143759
Table 13. Comparison of fit and runtime for behavioral level data.

Disjoint Orthogonal Components    Fit (in %)   Runtime (in min)
None (TuckerALS)                    99.84        0.0038
A (DisjointTuckerALS)               99.25        0.0831
B (DisjointTuckerALS)               99.84        0.0797
C (DisjointTuckerALS)               98.67        0.0672
A, B (DisjointTuckerALS)            99.25        0.0989
A, C (DisjointTuckerALS)            98.10        0.0866
B, C (DisjointTuckerALS)            98.67        0.0913
A, B, C (DisjointTuckerALS)         98.10        0.1012
Table 14. Comparison of fit and runtime for simulated data.

Algorithm            Fit (in %)   Runtime (in min)
TuckerALS              94.73        0.0717
DisjointTuckerALS      92.79        0.8735
Table 15. Loading matrix $A$ with the DisjointTuckerALS algorithm for simulated data.

          sy1           sy2           sy3
sx1    0.48932151    0             0
sx2    0.44138634    0             0
sx3    0.38687825    0             0
sx4    0.40775562    0             0
sx5    0.49980310    0             0
sx6    0             0.39376793    0
sx7    0             0.36369995    0
sx8    0             0.39165864    0
sx9    0             0.37564528    0
sx10   0             0.43001693    0
sx11   0             0.31146907    0
sx12   0             0.36910128    0
sx13   0             0            −0.32559414
sx14   0             0            −0.33366963
sx15   0             0            −0.31117803
sx16   0             0            −0.36099787
sx17   0             0            −0.34156470
sx18   0             0            −0.38518722
sx19   0             0            −0.37839893
sx20   0             0            −0.38377130
Table 16. Loading matrix $B$ with the DisjointTuckerALS algorithm for simulated data.

          vy1            vy2            vy3            vy4
vx1    −0.63185677    0              0              0
vx2    −0.53495062    0              0              0
vx3    −0.56087864    0              0              0
vx4     0            −0.54058450    0              0
vx5     0            −0.51277877    0              0
vx6     0            −0.54593785    0              0
vx7     0            −0.38311642    0              0
vx8     0             0            −0.43631239    0
vx9     0             0            −0.46432965    0
vx10    0             0            −0.45681443    0
vx11    0             0            −0.45592570    0
vx12    0             0            −0.42128590    0
vx13    0             0             0              0.48405851
vx14    0             0             0              0.47246029
vx15    0             0             0              0.36140660
vx16    0             0             0              0.40851941
vx17    0             0             0              0.35273551
vx18    0             0             0              0.34719369
Table 17. Loading matrix $C$ with the DisjointTuckerALS algorithm for simulated data.

          ty1           ty2           ty3           ty4           ty5
tx1    0.56826144    0             0             0             0
tx2    0.82284806    0             0             0             0
tx3    0             0.50066252    0             0             0
tx4    0             0.57846089    0             0             0
tx5    0             0.64398760    0             0             0
tx6    0             0             0.62935973    0             0
tx7    0             0             0.65899658    0             0
tx8    0             0             0.41186143    0             0
tx9    0             0             0             0.40455206    0
tx10   0             0             0             0.52191470    0
tx11   0             0             0             0.47595483    0
tx12   0             0             0             0.58086976    0
tx13   0             0             0             0             0.50933537
tx14   0             0             0             0             0.44032157
tx15   0             0             0             0             0.34018158
tx16   0             0             0             0             0.36337834
tx17   0             0             0             0             0.54674224
Table 18. Comparison of fit and runtime for Chopin's preludes data.

Disjoint Orthogonal Components    Fit (in %)   Runtime (in min)
None (TuckerALS)                    42.63        0.0711
A (DisjointTuckerALS)               38.92        0.4171
B (DisjointTuckerALS)               40.24        0.4808
A, B (DisjointTuckerALS)            36.79        0.7136
Table 19. Loading matrix $A$ obtained with the TuckerALS algorithm for Chopin's preludes data.

Chopin's Preludes                       Comp1           Comp2
(1) C major Agitato                  −0.033953528    0.184842162
(2) a minor Lento                    −0.137757855   −0.362945929
(3) G major Vivace                    0.183142589    0.335407766
(4) e minor Largo                    −0.046034600   −0.290603627
(5) D major Allegro                   0.204062480    0.200322397
(6) b minor Lento assai              −0.109263322   −0.337194080
(7) A major Andantino                 0.322239475   −0.165253184
(8) f# minor Molto agitato           −0.131497358    0.103731974
(9) E major Largo                    −0.191131130   −0.199220960
(10) c# minor Allegro molto           0.060345767    0.118857312
(11) B major Vivace                   0.270309648   −0.021135215
(12) g# minor Presto                 −0.174159755    0.154464730
(13) F# major Lento                   0.124820434   −0.196316896
(14) eb minor Allegro                −0.200476583    0.134265782
(15) Db major Sostenuto               0.317539137   −0.189612402
(16) bb minor Presto con fuoco       −0.13411733     0.374842648
(17) Ab major Allegretto              0.039035613   −0.099661458
(18) f minor Allegro molto           −0.297127956    0.124019724
(19) Eb major Vivace                  0.228844926    0.119167072
(20) c minor Largo                   −0.225976898   −0.225192986
(21) Bb major Cantabile               0.144306691   −0.085136044
(22) g minor Molto agitato           −0.269583704    0.074670854
(23) F major Moderato                 0.313700444    0.137709411
(24) d minor Allegro appassionato    −0.257267184    0.109970949
Table 20. Loading matrix $A$ obtained by rotations for Chopin's preludes data.

Chopin's Preludes                    Fast + Minor,   Fast + Major,
                                     Slow + Major    Slow + Minor
(1) C major Agitato                     0.153           0.109
(2) a minor Lento                      −0.155          −0.356
(3) G major Vivace                      0.103           0.368
(4) e minor Largo                      −0.170          −0.240
(5) D major Allegro                     0.006           0.286
(6) b minor Lento assai                −0.157          −0.318
(7) A major Andantino                  −0.346           0.107
(8) f# minor Molto agitato              0.167           0.018
(9) E major Largo                      −0.003          −0.276
(10) c# minor Allegro molto             0.040           0.127
(11) B major Vivace                    −0.208           0.174
(12) g# minor Presto                    0.232          −0.011
(13) F# major Lento                    −0.227          −0.053
(14) eb minor Allegro                   0.237          −0.044
(15) Db major Sostenuto                −0.360           0.086
(16) bb minor Presto con fuoco          0.358           0.174
(17) Ab major Allegretto               −0.098          −0.044
(18) f minor Allegro molto              0.299          −0.119
(19) Eb major Vivace                   −0.080           0.245
(20) c minor Largo                     −0.004          −0.319
(21) Bb major Cantabile                −0.163           0.040
(22) g minor Molto agitato              0.245          −0.135
(23) F major Moderato                  −0.128           0.318
(24) d minor Allegro appassionato       0.261          −0.101
Table 21. Loading matrix $A$ obtained with the DisjointTuckerALS algorithm for Chopin's data.

Chopin's Preludes                    Fast + Minor,   Fast + Major,
                                     Slow + Major    Slow + Minor
(1) C major Agitato                   0.091495243     0
(2) a minor Lento                     0              −0.349499568
(3) G major Vivace                    0               0.380083836
(4) e minor Largo                     0              −0.216381495
(5) D major Allegro                   0               0.322563112
(6) b minor Lento assai               0              −0.306829783
(7) A major Andantino                −0.391407119     0
(8) f# minor Molto agitato            0.170783607     0
(9) E major Largo                     0              −0.308357327
(10) c# minor Allegro molto           0               0.130529712
(11) B major Vivace                  −0.292993592     0
(12) g# minor Presto                  0.231153448     0
(13) F# major Lento                  −0.191308850     0
(14) eb minor Allegro                 0.252789215     0
(15) Db major Sostenuto              −0.393591840     0
(16) bb minor Presto con fuoco        0.254710019     0
(17) Ab major Allegretto             −0.071449806     0
(18) f minor Allegro molto            0.352314932     0
(19) Eb major Vivace                  0               0.300266062
(20) c minor Largo                    0              −0.359417233
(21) Bb major Cantabile              −0.178560495     0
(22) g minor Molto agitato            0.308258927     0
(23) F major Moderato                 0               0.396120182
(24) d minor Allegro appassionato     0.305864985     0
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
