A multidimensional principal component analysis via the c-product Golub-Kahan-SVD for classification and face recognition

Face recognition and identification are important applications in machine learning. Due to the increasing amount of available data, traditional approaches based on matricization and matrix PCA methods can be difficult to implement. Moreover, tensorial approaches are a natural choice, due to the very structure of the databases, for example in the case of color images. Nevertheless, even though various authors have proposed factorization strategies for tensors, the size of the considered tensors can pose serious issues. When only a few features are needed to construct the projection space, there is no need to compute an SVD of the whole data. Two versions of the tensor Golub-Kahan algorithm are considered in this manuscript as an alternative to the classical use of the tensor SVD with truncation strategies. In this paper, we consider the Tensor Tubal Golub-Kahan Principal Component Analysis method, whose purpose is to extract the main features of images using the tensor singular value decomposition (SVD) based on the tensor cosine product (c-product), which relies on the discrete cosine transform. This approach is applied to classification and face recognition, and numerical tests show its effectiveness.


1. Introduction. An important challenge of recent years has been the extraction of the main information from the large datasets, measurements and observations that appear in signal and hyperspectral image processing, data mining and machine learning. Due to the increasing volume of data required by these applications, approximate low-rank matrix and tensor factorizations play a fundamental role in extracting latent components. The idea is to replace the initial, possibly noisy and ill-conditioned, large-scale data by a lower dimensional approximate representation obtained via a matrix or multi-way array factorization or decomposition; see [1,2,4,13,14,15,16,17,18,20] for more details on recent work related to tensors and applications. In the present work, we consider third-order tensors, which can be defined as three-dimensional arrays of data. As our study is based on the cosine transform product, we limit this work to third-order tensors.
The number of indices of a tensor is called its order, and the indices are referred to as modes or ways. Scalars can be regarded as zero-mode tensors, first-mode tensors are vectors and matrices are second-mode tensors.
For a given 3-mode tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, the notation $x_{i_1, i_2, i_3}$ stands for the element $(i_1, i_2, i_3)$ of the tensor $\mathcal{X}$.
A fiber is defined by fixing all the indices except one. A matrix column is a mode-1 fiber and a matrix row is a mode-2 fiber. Third-order tensors have column, row and tube fibers. An element $\mathbf{c} \in \mathbb{R}^{1 \times 1 \times n}$ is called a tubal scalar, or simply a tube, of length $n$. More details can be found in [1,2].
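To fix ideas, the following short Python/NumPy snippet (an illustrative sketch of the indexing conventions, not tied to any tensor library) extracts the three kinds of fibers of a third-order array:

import numpy as np

X = np.arange(2 * 3 * 4).reshape(2, 3, 4)   # a third-order tensor of size 2 x 3 x 4
col_fiber = X[:, 1, 2]     # mode-1 fiber (column): fix i2 and i3
row_fiber = X[0, :, 2]     # mode-2 fiber (row): fix i1 and i3
tube = X[0, 1, :]          # mode-3 fiber: a tube of length n3 = 4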

2. Definitions and Notations.
2.1. Discrete Cosine Transformation. In this subsection, we recall some definitions and properties of the discrete cosine transformation and the c-product of tensors. In recent years, many advances were made in order to establish a rigorous framework enabling the treatment of problems for which the data is stored in three-way tensors without having to resort to matricization [4,1]. One of the most important features of such a framework is the definition of a tensor-tensor product such as the t-product, based on the Fast Fourier Transform (FFT). For applications such as image processing, the tensor-tensor product based on the Discrete Cosine Transformation (DCT) has shown to be an interesting alternative to the FFT. We now give some basic facts on the DCT and its associated tensor-tensor product. The DCT of a vector $v \in \mathbb{R}^n$ is defined by
$$\tilde{v} = C_n v \in \mathbb{R}^n,$$
where $C_n$ is the $n \times n$ discrete cosine transform matrix with entries
$$[C_n]_{ij} = \sqrt{\frac{2 - \delta_{i1}}{n}} \, \cos\left(\frac{(i-1)(2j-1)\pi}{2n}\right), \quad 1 \le i, j \le n,$$
where $\delta_{ij}$ is the Kronecker delta; see [5, p. 150] for more details. It is known that the matrix $C_n$ is orthogonal, i.e., $C_n^T C_n = C_n C_n^T = I_n$; see [6]. Furthermore, for any vector $v \in \mathbb{R}^n$, the matrix-vector multiplication $C_n v$ can be computed in $O(n \log(n))$ operations. Moreover, Ng et al. [6] have shown that a certain class of Toeplitz-plus-Hankel matrices can be diagonalized by $C_n$. More precisely, for such a Toeplitz-plus-Hankel matrix $T(v) + H(v)$ built from a vector $v$, we have
$$C_n \left( T(v) + H(v) \right) C_n^T = \mathrm{Diag}(\tilde{v}),$$
where $\mathrm{Diag}(\tilde{v})$ is the diagonal matrix whose $i$-th diagonal element is $(\tilde{v})_i$.
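As an illustration, the following Python sketch builds $C_n$ from the entrywise formula above and checks it against SciPy's fast transform and against the orthogonality relation (scipy.fft.dct with norm='ortho' implements the same normalization):

import numpy as np
from scipy.fft import dct

n = 8
# entry (i, j): sqrt((2 - delta_{i1}) / n) * cos((i - 1)(2j - 1) pi / (2n)), 1 <= i, j <= n
i, j = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing='ij')
C = np.sqrt((2.0 - (i == 1)) / n) * np.cos((i - 1) * (2 * j - 1) * np.pi / (2 * n))

C_fast = dct(np.eye(n), axis=0, norm='ortho')   # columns are C_n e_j, each in O(n log n)
print(np.allclose(C, C_fast))                   # True: both constructions agree
print(np.allclose(C @ C.T, np.eye(n)))          # True: C_n is orthogonal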

2.2. Definitions and properties of the cosine product. In this subsection, we briefly review some concepts and notations which play a central role in the elaboration of tensor global iterative methods based on the c-product; see [7,8] for more details on the c-product. Let $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ be a real-valued third-order tensor. The operation $\mathtt{mat}$ maps $\mathcal{A}$ to a block matrix $\mathtt{mat}(\mathcal{A}) \in \mathbb{R}^{n_1 n_3 \times n_2 n_3}$ of block Toeplitz-plus-Hankel structure built from the frontal slices of $\mathcal{A}$, and the inverse operation, denoted by $\mathtt{ten}$, is simply defined by $\mathtt{ten}(\mathtt{mat}(\mathcal{A})) = \mathcal{A}$.
Let us denote by $\widetilde{\mathcal{A}}$ the tensor obtained by applying the DCT to all the tubes of the tensor $\mathcal{A}$. This operation and its inverse are implemented in MATLAB by the commands dct and idct as
$$\widetilde{\mathcal{A}} = \mathtt{dct}(\mathcal{A}, [\,], 3), \qquad \mathcal{A} = \mathtt{idct}(\widetilde{\mathcal{A}}, [\,], 3),$$
where idct denotes the Inverse Discrete Cosine Transform. REMARK 1. Notice that the tensor $\widetilde{\mathcal{A}}$ can be computed by using the 3-mode product defined in [2] as follows:
$$\widetilde{\mathcal{A}} = \mathcal{A} \times_3 M,$$
where $M$ is the $n_3 \times n_3$ invertible matrix given by
$$M = W^{-1} C_{n_3} (I + Z),$$
where $C_{n_3}$ denotes the $n_3 \times n_3$ Discrete Cosine Transform (DCT) matrix, $W = \mathtt{diag}(C_{n_3}(:, 1))$ is the diagonal matrix made of the first column of the DCT matrix, $Z$ is the $n_3 \times n_3$ circulant upshift matrix, which can be computed in MATLAB as $Z = \mathtt{diag}(\mathtt{ones}(n_3 - 1, 1), 1)$, and $I$ is the $n_3 \times n_3$ identity matrix; see [7] for more details.
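The following sketch (in Python rather than MATLAB; the helper names are ours) applies the tube-wise transform and its inverse, and forms the matrix $M$ of Remark 1 exactly as described above:

import numpy as np
from scipy.fft import dct, idct

def build_M(n3):
    # M = W^{-1} C_{n3} (I + Z), as in Remark 1
    C = dct(np.eye(n3), axis=0, norm='ortho')     # the DCT matrix C_{n3}
    W = np.diag(C[:, 0])                          # W = diag(C_{n3}(:, 1))
    Z = np.diag(np.ones(n3 - 1), 1)               # upshift matrix diag(ones(n3-1, 1), 1)
    return np.linalg.solve(W, C @ (np.eye(n3) + Z))

A = np.random.rand(4, 3, 5)
A_dct = dct(A, axis=2, norm='ortho')              # tube-wise DCT, i.e. dct(A, [], 3) in MATLAB
A_rec = idct(A_dct, axis=2, norm='ortho')         # inverse transform, idct(A, [], 3)
M = build_M(5)
A_mode3 = np.einsum('ijk,lk->ijl', A, M)          # the 3-mode product A x_3 M of Remark 1
print(np.allclose(A, A_rec))                      # True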
Let $\widetilde{A}$ be the block diagonal matrix whose diagonal blocks are the frontal slices $\widetilde{A}^{(i)}$ of the tensor $\widetilde{\mathcal{A}}$, i.e.,
$$\widetilde{A} = \mathtt{Diag}\big(\widetilde{A}^{(1)}, \ldots, \widetilde{A}^{(n_3)}\big).$$
The block matrix $\mathtt{mat}(\mathcal{A})$ can also be block diagonalized by using the DCT matrix as follows:
$$\widetilde{A} = (C_{n_3} \otimes I_{n_1}) \, \mathtt{mat}(\mathcal{A}) \, (C_{n_3}^T \otimes I_{n_2}). \qquad (2.3)$$
The c-product of two tensors $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and $\mathcal{B} \in \mathbb{R}^{n_2 \times m \times n_3}$ is the $n_1 \times m \times n_3$ tensor $\mathcal{C} = \mathcal{A} \star_c \mathcal{B}$ defined so that each frontal slice of $\widetilde{\mathcal{C}}$ is the product of the corresponding frontal slices of $\widetilde{\mathcal{A}}$ and $\widetilde{\mathcal{B}}$:
$$\widetilde{C}^{(i)} = \widetilde{A}^{(i)} \, \widetilde{B}^{(i)}, \quad i = 1, \ldots, n_3.$$
Notice that from relation (2.3), we can show that the product $\mathcal{C} = \mathcal{A} \star_c \mathcal{B}$ is equivalent to $\widetilde{C} = \widetilde{A} \, \widetilde{B}$. The following algorithm allows us to compute the c-product of the tensors $\mathcal{A}$ and $\mathcal{B}$ in an efficient way; see [7].
Next, we give some definitions and remarks on the c-product and related topics. DEFINITION 2.2. The identity tensor $\mathcal{I}_{n_1 n_1 n_3}$ is the tensor such that each frontal slice of $\widetilde{\mathcal{I}}_{n_1 n_1 n_3}$ is the identity matrix $I_{n_1}$. An $n_1 \times n_1 \times n_3$ tensor $\mathcal{A}$ is said to be invertible if there exists a tensor $\mathcal{B}$ of size $n_1 \times n_1 \times n_3$ such that
$$\mathcal{A} \star_c \mathcal{B} = \mathcal{I}_{n_1 n_1 n_3} \quad \text{and} \quad \mathcal{B} \star_c \mathcal{A} = \mathcal{I}_{n_1 n_1 n_3}.$$
In that case, we denote $\mathcal{B} = \mathcal{A}^{-1}$. It is clear that $\mathcal{A}$ is invertible if and only if $\mathtt{mat}(\mathcal{A})$ is invertible.

Algorithm 2.1 Computing the c-product
1. Inputs: $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and $\mathcal{B} \in \mathbb{R}^{n_2 \times m \times n_3}$.
2. Output: $\mathcal{C} = \mathcal{A} \star_c \mathcal{B} \in \mathbb{R}^{n_1 \times m \times n_3}$.
3. Compute $\widetilde{\mathcal{A}} = \mathtt{dct}(\mathcal{A}, [\,], 3)$ and $\widetilde{\mathcal{B}} = \mathtt{dct}(\mathcal{B}, [\,], 3)$.
4. Compute each frontal slice of $\widetilde{\mathcal{C}}$ by $\widetilde{C}^{(i)} = \widetilde{A}^{(i)} \widetilde{B}^{(i)}$, $i = 1, \ldots, n_3$.
5. Compute $\mathcal{C} = \mathtt{idct}(\widetilde{\mathcal{C}}, [\,], 3)$.

The inner scalar product of two tensors of the same size is defined by
$$\langle \mathcal{A}, \mathcal{B} \rangle = \sum_{i_1=1}^{n_1} \sum_{i_2=1}^{n_2} \sum_{i_3=1}^{n_3} a_{i_1 i_2 i_3} \, b_{i_1 i_2 i_3},$$
and its corresponding norm is given by $\|\mathcal{A}\|_F = \sqrt{\langle \mathcal{A}, \mathcal{A} \rangle}$.
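A compact Python sketch of Algorithm 2.1 (with the tube-wise DCT written via scipy.fft; the function name cprod is ours) could read:

import numpy as np
from scipy.fft import dct, idct

def cprod(A, B):
    # Algorithm 2.1: transform the tubes, multiply the frontal slices, transform back
    Ah = dct(A, axis=2, norm='ortho')
    Bh = dct(B, axis=2, norm='ortho')
    Ch = np.einsum('imk,mjk->ijk', Ah, Bh)        # C^(i) = A^(i) B^(i) for each slice i
    return idct(Ch, axis=2, norm='ortho')

A, B = np.random.rand(4, 3, 5), np.random.rand(3, 2, 5)
C = cprod(A, B)                                   # C has size 4 x 2 x 5
inner = np.sum(A * A)                             # the scalar product <A, A> of the text
print(np.isclose(np.linalg.norm(A), np.sqrt(inner)))   # True: ||A||_F = sqrt(<A, A>)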

REMARK 2. Since the DCT matrix is orthogonal, another interesting way of computing the scalar product and the associated norm is to work in the transform domain:
$$\langle \mathcal{A}, \mathcal{B} \rangle = \langle \widetilde{\mathcal{A}}, \widetilde{\mathcal{B}} \rangle, \qquad \|\mathcal{A}\|_F = \|\widetilde{\mathcal{A}}\|_F.$$
Next, we recall the Tensor Singular Value Decomposition of a tensor; more details can be found in [3]. THEOREM 2.4. Let $\mathcal{A}$ be an $n_1 \times n_2 \times n_3$ real-valued tensor. Then $\mathcal{A}$ can be factored as
$$\mathcal{A} = \mathcal{U} \star_c \mathcal{S} \star_c \mathcal{V}^T,$$
where $\mathcal{U}$ and $\mathcal{V}$ are orthogonal tensors of size $(n_1, n_1, n_3)$ and $(n_2, n_2, n_3)$, respectively, and $\mathcal{S}$ is an f-diagonal tensor (each of its frontal slices is diagonal) of size $(n_1, n_2, n_3)$. This factorization is called the Tensor Singular Value Decomposition (c-SVD) of the tensor $\mathcal{A}$. As for the t-product [3], we can show that if $\mathcal{A} = \mathcal{U} \star_c \mathcal{S} \star_c \mathcal{V}^T$ is a c-SVD of the tensor $\mathcal{A}$, then the frontal slices satisfy
$$\widetilde{A}^{(k)} = \widetilde{U}^{(k)} \, \widetilde{S}^{(k)} \, \big(\widetilde{V}^{(k)}\big)^T, \quad k = 1, \ldots, n_3,$$
where $\widetilde{A}^{(k)}$, $\widetilde{U}^{(k)}$, $\widetilde{S}^{(k)}$ and $\widetilde{V}^{(k)}$ are the frontal slices of the tensors $\widetilde{\mathcal{A}}$, $\widetilde{\mathcal{U}}$, $\widetilde{\mathcal{S}}$ and $\widetilde{\mathcal{V}}$, respectively. For $k \le \min(n_1, n_2)$, define the tensor
$$\mathcal{A}_k = \sum_{i=1}^{k} \mathcal{U}(:, i, :) \star_c \mathcal{S}(i, i, :) \star_c \mathcal{V}(:, i, :)^T.$$
Then $\mathcal{A}_k$ is a best approximation of $\mathcal{A}$, in the Frobenius norm, among tensors of tubal rank at most $k$. Note that when $n_3 = 1$, this theorem reduces to the well-known Eckart-Young theorem for matrices [9].
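The c-SVD and its truncation can be sketched in Python by computing a matrix SVD of each frontal slice in the transform domain. This is a naive reference implementation (the names are ours), not the Golub-Kahan approach advocated later:

import numpy as np
from scipy.fft import dct, idct

def csvd_truncated(A, k):
    # rank-k truncation of the c-SVD, slice by slice in the DCT domain
    Ah = dct(A, axis=2, norm='ortho')
    n1, n2, n3 = A.shape
    Uh = np.zeros((n1, k, n3)); Sh = np.zeros((k, k, n3)); Vh = np.zeros((n2, k, n3))
    for i in range(n3):
        u, s, vt = np.linalg.svd(Ah[:, :, i], full_matrices=False)
        Uh[:, :, i] = u[:, :k]; Sh[:, :, i] = np.diag(s[:k]); Vh[:, :, i] = vt[:k, :].T
    back = lambda T: idct(T, axis=2, norm='ortho')
    return back(Uh), back(Sh), back(Vh)

A = np.random.rand(60, 40, 3)
U, S, V = csvd_truncated(A, 10)
# reconstruct A_10 = U *_c S *_c V^T by slice-wise products in the transform domain
f = lambda T: dct(T, axis=2, norm='ortho')
Ah10 = np.einsum('imk,mjk->ijk', np.einsum('imk,mjk->ijk', f(U), f(S)), f(V).transpose(1, 0, 2))
A10 = idct(Ah10, axis=2, norm='ortho')
print(np.linalg.norm(A - A10) / np.linalg.norm(A))   # relative truncation error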
DEFINITION 2.6 (tensor tubal rank). Let $\mathcal{A}$ be an $n_1 \times n_2 \times n_3$ tensor and consider its c-SVD $\mathcal{A} = \mathcal{U} \star_c \mathcal{S} \star_c \mathcal{V}^T$. The tensor tubal rank of $\mathcal{A}$, denoted by $\mathrm{rank}_t(\mathcal{A})$, is defined to be the number of non-zero tubes of the f-diagonal tensor $\mathcal{S}$, i.e., $\mathrm{rank}_t(\mathcal{A}) = \#\{i : \mathcal{S}(i, i, :) \neq 0\}$. DEFINITION 2.7. The multi-rank of the tensor $\mathcal{A}$ is the vector $p \in \mathbb{R}^{n_3}$ whose $i$-th element equals the rank of the $i$-th frontal slice of $\widetilde{\mathcal{A}}$. The well-known QR matrix decomposition can also be extended to the tensor case; see [3]. THEOREM 2.8. Let $\mathcal{A}$ be a real-valued tensor of size $n_1 \times n_2 \times n_3$. Then $\mathcal{A}$ can be factored as
$$\mathcal{A} = \mathcal{Q} \star_c \mathcal{R},$$
where $\mathcal{Q}$ is an $n_1 \times n_1 \times n_3$ orthogonal tensor and $\mathcal{R}$ is an $n_1 \times n_2 \times n_3$ f-upper triangular tensor (each of its frontal slices is upper triangular).
3. Tensor Principal Component Analysis for face recognition. Principal Component Analysis (PCA) is a widely used technique in image classification and face recognition. Many approaches involve a conversion of color images to grayscale in order to reduce the training cost. Nevertheless, for some applications, color is an important feature, and tensor-based approaches offer the possibility of taking it into account. Moreover, especially in the case of facial recognition, they allow the treatment of enriched databases including, for instance, additional biometric information. But one has to bear in mind that the computational cost is an important issue, as the volume of data can be very large. We first recall some background facts on the matrix-based approach.
3.1. The matrix case. One of the simplest and most effective PCA approaches used in face recognition systems is the so-called eigenface approach. This approach transforms faces into a small set of essential characteristics, the eigenfaces, which are the main components of the initial set of learning images (the training set). Recognition is done by projecting a test image onto the eigenface subspace, after which the person is classified by comparing its position in the eigenface space with the positions of known individuals. The advantage of this approach over other face recognition strategies resides in its simplicity, its speed and its insensitivity to small or gradual changes of the face.
The process is defined as follows. Consider a set of training faces $I_1, I_2, \ldots, I_p$, all of the same size $n \times m$. Each face $I_i$ is transformed into a vector $x_i$ using the operation vec: $x_i = \mathrm{vec}(I_i)$. Let $\mu = \frac{1}{p} \sum_{i=1}^{p} x_i$ be the mean face; the centered vectors are the columns of the $nm \times p$ matrix
$$\bar{X} = [x_1 - \mu, \ldots, x_p - \mu].$$
Notice that the $nm \times nm$ covariance matrix $C = \bar{X} \bar{X}^T$ can be very large. Therefore, the computation of the $nm$ eigenvalues and the corresponding eigenvectors (the eigenfaces) can be very difficult. To circumvent this issue, we instead consider the smaller $p \times p$ matrix $L = \bar{X}^T \bar{X}$.
Let $v_i$ be an eigenvector of $L$, i.e., $L v_i = \bar{X}^T \bar{X} v_i = \lambda_i v_i$. Multiplying both sides by $\bar{X}$ gives $\bar{X} \bar{X}^T (\bar{X} v_i) = \lambda_i (\bar{X} v_i)$, which shows that $\bar{X} v_i$ is an eigenvector of the covariance matrix $C = \bar{X} \bar{X}^T$. The $p$ eigenvectors of $L = \bar{X}^T \bar{X}$ are then used to find the $p$ eigenvectors $u_i = \bar{X} v_i$ of $C$ that form the eigenface space. We keep only the $k$ eigenvectors corresponding to the $k$ largest eigenvalues (eigenfaces corresponding to small eigenvalues can be omitted, as they explain only a small part of the characteristic features of the faces). The next step consists in projecting each image of the training sample onto the eigenface space spanned by the orthonormal vectors $u_1, \ldots, u_k$:
$$y_i = U_k^T (x_i - \mu), \qquad U_k = [u_1, \ldots, u_k].$$
The matrix $U_k U_k^T$ is an orthogonal projector onto the subspace spanned by $u_1, \ldots, u_k$. We now give the steps of an image classification process based on this approach. Let $x = \mathrm{vec}(I)$ be a test vector-image and project it onto the face space to get $y = U_k^T (x - \mu)$; notice that the reconstructed image is given by $x_r = U_k y + \mu$. Compute the Euclidean distances $\varepsilon_i = \|y - y_i\|_2$; a face is classified as belonging to the class $l$ that minimizes $\varepsilon_l$, when this minimum is below some chosen threshold $\theta$. Let $\varepsilon = \|x - x_r\|_2$ be the distance between the original test image $x$ and its reconstructed image $x_r$. Then:
• If $\varepsilon \ge \theta$, the input image is not even a face image and is not recognized.
• If $\varepsilon < \theta$ and $\varepsilon_i \ge \theta$ for all $i$, the input image is a face image, but an unknown one.
• If $\varepsilon < \theta$ and $\varepsilon_l < \theta$ for some $l$, the input image is recognized as the face associated with the class vector $x_l$.
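The whole matrix procedure fits in a few lines of Python; in this sketch, the arrays train and test are hypothetical placeholders for vectorized face images, and the threshold theta is application dependent:

import numpy as np

rng = np.random.default_rng(0)
nm, p, k = 1024, 40, 10
train, test = rng.random((nm, p)), rng.random(nm)      # placeholders for real face data

mu = train.mean(axis=1, keepdims=True)                 # mean face
Xbar = train - mu                                      # centered training matrix
lam, Vs = np.linalg.eigh(Xbar.T @ Xbar)                # small p x p problem L = Xbar^T Xbar
idx = np.argsort(lam)[::-1][:k]
U = Xbar @ Vs[:, idx]                                  # eigenfaces u_i = Xbar v_i
U /= np.linalg.norm(U, axis=0)                         # normalize the columns

Y = U.T @ Xbar                                         # projections of the training images
y = U.T @ (test - mu[:, 0])                            # projection of the test image
eps_i = np.linalg.norm(Y - y[:, None], axis=0)         # distances in the eigenface space
x_r = U @ y + mu[:, 0]                                 # reconstructed test image
eps = np.linalg.norm(test - x_r)                       # distance to the face space
theta = 1.0                                            # threshold (application dependent)
if eps >= theta:
    print("not a face image")
elif eps_i.min() >= theta:
    print("unknown face")
else:
    print("recognized as class", eps_i.argmin())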
We now give some basic facts on the relation between the singular value decomposition (SVD) and PCA in this context. Consider the singular value decomposition of the matrix $\bar{X}$,
$$\bar{X} = U \Sigma V^T,$$
where $U$ and $V$ are orthogonal matrices of order $nm$ and $p$, respectively. The singular values $\sigma_i$ are the square roots of the eigenvalues of the matrix $L = \bar{X}^T \bar{X}$, the $u_i$'s are the left singular vectors and the $v_i$'s are the right singular vectors. We have
$$L = \bar{X}^T \bar{X} = V \, \mathrm{diag}(\sigma_1^2, \ldots, \sigma_p^2) \, V^T,$$
which is the eigendecomposition of the matrix $L$, and $C = \bar{X} \bar{X}^T = U D U^T$ with $D = \mathrm{diag}(\sigma_1^2, \ldots, \sigma_p^2, 0, \ldots, 0)$.
In the PCA method, the projected eigenface space is then generated by the first columns $u_1, \ldots, u_k$ of the orthogonal matrix $U$ derived from the SVD of the matrix $\bar{X}$.
As only a small number $k$ of the largest singular values are needed in PCA, we can use the well-known Golub-Kahan algorithm to compute these singular values and the corresponding singular vectors defining the projected subspace. In the next section, we explain how the SVD-based PCA can be extended to tensors and propose an algorithm for facial recognition in this context.

4. The tensor Golub-Kahan method. As explained in the previous section, it is important to take into account the potentially large size of the data, especially for the training process. The Golub-Kahan bidiagonalization algorithm can be extended to the tensor context, in particular in its c-global and c-tubal forms.

4.1. The tensor c-global Golub-Kahan bidiagonalization algorithm. Let $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ be a tensor and let $s \ge 1$ be an integer. The process is defined as follows.
1. Choose a tensor $\mathcal{V}_1 \in \mathbb{R}^{n_2 \times s \times n_3}$ such that $\|\mathcal{V}_1\|_F = 1$, and set $\beta_0 = 0$ and $\mathcal{U}_0 = 0$.
2. For $j = 1, \ldots, k$, compute
$$\alpha_j = \|\mathcal{A} \star_c \mathcal{V}_j - \beta_{j-1} \mathcal{U}_{j-1}\|_F, \qquad \mathcal{U}_j = \big(\mathcal{A} \star_c \mathcal{V}_j - \beta_{j-1} \mathcal{U}_{j-1}\big)/\alpha_j,$$
$$\beta_j = \|\mathcal{A}^T \star_c \mathcal{U}_j - \alpha_j \mathcal{V}_j\|_F, \qquad \mathcal{V}_{j+1} = \big(\mathcal{A}^T \star_c \mathcal{U}_j - \alpha_j \mathcal{V}_j\big)/\beta_j.$$
Let $C_k$ be the $k \times k$ upper bidiagonal matrix
$$C_k = \begin{pmatrix} \alpha_1 & \beta_1 & & \\ & \alpha_2 & \ddots & \\ & & \ddots & \beta_{k-1} \\ & & & \alpha_k \end{pmatrix}.$$
Let $\mathbb{V}_k$ and $\mathcal{A} \star_c \mathbb{V}_k$ be the $(n_2 \times sk \times n_3)$ and $(n_1 \times sk \times n_3)$ tensors obtained by concatenating $\mathcal{V}_1, \ldots, \mathcal{V}_k$ and $\mathcal{A} \star_c \mathcal{V}_1, \ldots, \mathcal{A} \star_c \mathcal{V}_k$, respectively, and let $\mathbb{U}_k$ and $\mathcal{A}^T \star_c \mathbb{U}_k$ be the $(n_1 \times sk \times n_3)$ and $(n_2 \times sk \times n_3)$ tensors obtained by concatenating $\mathcal{U}_1, \ldots, \mathcal{U}_k$ and $\mathcal{A}^T \star_c \mathcal{U}_1, \ldots, \mathcal{A}^T \star_c \mathcal{U}_k$, respectively. Then, we have the following results [17]. PROPOSITION 4.1. The tensors produced by the tensor c-global Golub-Kahan algorithm satisfy the relations
$$\mathcal{A} \star_c \mathbb{V}_k = \mathbb{U}_k \circledast C_k, \qquad \mathcal{A}^T \star_c \mathbb{U}_k = \mathbb{V}_k \circledast C_k^T + \beta_k \, [0, \ldots, 0, \mathcal{V}_{k+1}],$$
where the product $\circledast$ is defined column-wise by
$$\mathbb{V}_k \circledast C_k^{(i)} = \sum_{j=1}^{k} c_{ji} \, \mathcal{V}_j,$$
where $C_k^{(i)}$ is the $i$-th column of the matrix $C_k$. We note that since the matrix $C_k$ is bidiagonal, $T_k = C_k^T C_k$ is symmetric and tridiagonal, and hence the algorithm computes the same information as the tensor global Lanczos algorithm applied to the symmetric tensor $\mathcal{A}^T \star_c \mathcal{A}$.
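A Python sketch of this recursion (built on the cprod helper of Section 2; the c-transpose is taken slice-wise in the transform domain, and all function names are ours) might look as follows:

import numpy as np
from scipy.fft import dct, idct

def cprod(A, B):
    Ah, Bh = dct(A, axis=2, norm='ortho'), dct(B, axis=2, norm='ortho')
    return idct(np.einsum('imk,mjk->ijk', Ah, Bh), axis=2, norm='ortho')

def ctrans(A):
    # c-transpose: transpose each frontal slice in the transform domain
    return idct(dct(A, axis=2, norm='ortho').transpose(1, 0, 2), axis=2, norm='ortho')

def cglobal_golub_kahan(A, s, k, rng=np.random.default_rng(0)):
    n1, n2, n3 = A.shape
    V = rng.random((n2, s, n3)); V /= np.linalg.norm(V)    # ||V_1||_F = 1
    U = np.zeros((n1, s, n3)); beta = 0.0
    alphas, betas = [], []
    for _ in range(k):
        W = cprod(A, V) - beta * U
        alpha = np.linalg.norm(W); U = W / alpha
        W = cprod(ctrans(A), U) - alpha * V
        beta = np.linalg.norm(W); V = W / beta
        alphas.append(alpha); betas.append(beta)
    return np.array(alphas), np.array(betas)

alphas, betas = cglobal_golub_kahan(np.random.rand(30, 20, 3), s=2, k=5)
C_k = np.diag(alphas) + np.diag(betas[:-1], 1)             # the k x k bidiagonal matrix C_k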

4.2. The tensor tubal Golub-Kahan bidiagonalization algorithm. First, we introduce some new products that will be useful in this section. Let $\mathcal{A} \in \mathbb{R}^{n_1 \times m_1 \times n_3}$, $\mathcal{B} \in \mathbb{R}^{n_1 \times m_2 \times n_3}$, $\mathcal{C} \in \mathbb{R}^{n_2 \times m_1 \times n_3}$ and $\mathcal{D} \in \mathbb{R}^{n_2 \times m_2 \times n_3}$ be tensors. The block tensor
$$\begin{bmatrix} \mathcal{A} & \mathcal{B} \\ \mathcal{C} & \mathcal{D} \end{bmatrix} \in \mathbb{R}^{(n_1 + n_2) \times (m_1 + m_2) \times n_3}$$
is defined by composing the frontal slices of the four tensors. For a tensor $\mathcal{A} = [\mathcal{A}_1, \ldots, \mathcal{A}_{n_2}] \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, where the $\mathcal{A}_i \in \mathbb{R}^{n_1 \times 1 \times n_3}$ are the lateral slices, we denote by $\mathrm{TVect}(\mathcal{A})$ the tensor vectorization operator $\mathbb{R}^{n_1 \times n_2 \times n_3} \to \mathbb{R}^{n_1 n_2 \times 1 \times n_3}$ obtained by superposing the lateral slices $\mathcal{A}_i$ of $\mathcal{A}$, for $i = 1, \ldots, n_2$:
$$\mathrm{TVect}(\mathcal{A}) = \begin{bmatrix} \mathcal{A}_1 \\ \vdots \\ \mathcal{A}_{n_2} \end{bmatrix}.$$
The TVect operator transforms a given tensor into a single lateral slice. It is easy to see that when $n_3 = 1$, the TVect operator coincides with the operation vec, which transforms a matrix into a vector. PROPOSITION 4.5. Let $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$; since TVect only rearranges the entries of $\mathcal{A}$, we have $\|\mathrm{TVect}(\mathcal{A})\|_F = \|\mathcal{A}\|_F$. We now introduce a normalization algorithm allowing us to decompose a non-zero tensor $\mathcal{C} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ as
$$\mathcal{C} = \mathcal{Q} \star_c \mathbf{a},$$
where $\mathbf{a} \in \mathbb{R}^{1 \times 1 \times n_3}$ is an invertible tube fiber and $\mathcal{Q} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ satisfies $\langle \mathcal{Q}, \mathcal{Q} \rangle = \mathbf{e}$, where $\mathbf{e} \in \mathbb{R}^{1 \times 1 \times n_3}$ is the tube fiber defined by $\mathrm{unfold}(\mathbf{e}) = (1, 0, \ldots, 0)^T$. This procedure is described in Algorithm 4.2.

Algorithm 4.2 Normalization algorithm (Normalize)
1. Input. $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and a tolerance $tol > 0$.
2. Output. The tensor $\mathcal{Q}$ and the tube fiber $\mathbf{a}$.
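A plausible Python sketch of Algorithm 4.2, working slice by slice in the transform domain, is given below; the zero-slice branch is our guard, and with this tube-wise orthogonal DCT the tube $\langle \mathcal{Q}, \mathcal{Q} \rangle$ is the identity tube of that transform:

import numpy as np
from scipy.fft import dct, idct

def normalize(X, tol=1e-12):
    # decompose X = Q *_c a with <Q, Q> = e, slice by slice in the transform domain
    Xh = dct(X, axis=2, norm='ortho')
    n3 = X.shape[2]
    ah = np.zeros(n3); Qh = np.zeros_like(Xh)
    for i in range(n3):
        nrm = np.linalg.norm(Xh[:, :, i])                  # norm of the i-th frontal slice
        if nrm > tol:
            ah[i] = nrm
            Qh[:, :, i] = Xh[:, :, i] / nrm
        else:                                              # guard for a (numerically) zero slice
            ah[i] = tol
            Qh[0, 0, i] = 1.0
    Q = idct(Qh, axis=2, norm='ortho')
    a = idct(ah.reshape(1, 1, n3), axis=2, norm='ortho')   # the tube fiber a
    return Q, a

Q, a = normalize(np.random.rand(6, 1, 4))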
Next, we give the Tensor Tube Global Golub-Kahan (TTGGKA) algorithm; see [17]. Let $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ be a tensor and let $s \ge 1$ be an integer. The Tensor Tube Global Golub-Kahan bidiagonalization process is defined as follows.
1. Choose a tensor $\mathcal{V}_1 \in \mathbb{R}^{n_2 \times s \times n_3}$ such that $\langle \mathcal{V}_1, \mathcal{V}_1 \rangle = \mathbf{e}$ and set $\mathbf{b}_0 = 0$.
2. For $j = 1, \ldots, k$, compute
$$[\mathcal{U}_j, \mathbf{a}_j] = \mathrm{Normalize}\big(\mathcal{A} \star_c \mathcal{V}_j - \mathcal{U}_{j-1} \star_c \mathbf{b}_{j-1}\big), \qquad [\mathcal{V}_{j+1}, \mathbf{b}_j] = \mathrm{Normalize}\big(\mathcal{A}^T \star_c \mathcal{U}_j - \mathcal{V}_j \star_c \mathbf{a}_j\big).$$
Let $\mathcal{C}_k$ be the $k \times k \times n_3$ upper bidiagonal tensor (each frontal slice of $\mathcal{C}_k$ is a bidiagonal matrix) whose diagonal and superdiagonal tubes are $\mathbf{a}_1, \ldots, \mathbf{a}_k$ and $\mathbf{b}_1, \ldots, \mathbf{b}_{k-1}$, respectively, and let $\bar{\mathcal{C}}_k$ be the $k \times (k+1) \times n_3$ tensor obtained by appending the tube $\mathbf{b}_k$ to $\mathcal{C}_k$ in the $(k, k+1, :)$ position. Let $\mathbb{V}_k$ and $\mathcal{A} \star_c \mathbb{V}_k$ be the $(n_2 \times sk \times n_3)$ and $(n_1 \times sk \times n_3)$ tensors obtained by concatenating $\mathcal{V}_1, \ldots, \mathcal{V}_k$ and $\mathcal{A} \star_c \mathcal{V}_1, \ldots, \mathcal{A} \star_c \mathcal{V}_k$, respectively, and let $\mathbb{U}_k$ and $\mathcal{A}^T \star_c \mathbb{U}_k$ be the $(n_1 \times sk \times n_3)$ and $(n_2 \times sk \times n_3)$ tensors obtained by concatenating $\mathcal{U}_1, \ldots, \mathcal{U}_k$ and $\mathcal{A}^T \star_c \mathcal{U}_1, \ldots, \mathcal{A}^T \star_c \mathcal{U}_k$, respectively. Then, we have the following results. PROPOSITION 4.8. The tensors produced by the TTGGKA algorithm satisfy the relations
$$\mathcal{A} \star_c \mathbb{V}_k = \mathbb{U}_k \circledast \mathcal{C}_k, \qquad \mathcal{A}^T \star_c \mathbb{U}_k = \mathbb{V}_k \circledast \mathcal{C}_k^T + \mathcal{V}_{k+1} \star_c \big(\mathbf{b}_k \star_c \mathbf{e}_{1,k,:}\big),$$
where $\mathbf{e}_{1,k,:} \in \mathbb{R}^{1 \times k \times n_3}$ has a 1 in the $(1, k, 1)$ position and zeros in all other positions, $\mathcal{I}_{ssn_3} \in \mathbb{R}^{s \times s \times n_3}$ is the identity tensor and $\mathbf{b}_k$ is the tube fiber in the $(k, k+1, :)$ position of the tensor $\bar{\mathcal{C}}_k$.
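Combining cprod, the c-transpose and the normalization above, one plausible reading of the TTGGKA recursion for $s = 1$ is sketched below (all names are ours; tube products are realized by scaling the transform-domain slices):

import numpy as np
from scipy.fft import dct, idct

tf  = lambda X: dct(X, axis=2, norm='ortho')
itf = lambda X: idct(X, axis=2, norm='ortho')

def cprod(A, B):
    return itf(np.einsum('imk,mjk->ijk', tf(A), tf(B)))

def tube_mult(X, b):
    # X *_c b for a 1 x 1 x n3 tube b: scale each transform-domain slice of X
    return itf(tf(X) * tf(b).ravel()[None, None, :])

def normalize(X, tol=1e-12):
    Xh = tf(X); n3 = X.shape[2]
    ah = np.maximum([np.linalg.norm(Xh[:, :, i]) for i in range(n3)], tol)
    Qh = Xh / np.asarray(ah)[None, None, :]
    return itf(Qh), itf(np.asarray(ah).reshape(1, 1, n3))

def ttggka(A, k, rng=np.random.default_rng(0)):
    n1, n2, n3 = A.shape
    V, _ = normalize(rng.random((n2, 1, n3)))              # <V_1, V_1> = e
    U = np.zeros((n1, 1, n3)); b = np.zeros((1, 1, n3))
    tubes_a, tubes_b = [], []
    At = itf(tf(A).transpose(1, 0, 2))                     # c-transpose of A
    for _ in range(k):
        U, a = normalize(cprod(A, V) - tube_mult(U, b))
        V, b = normalize(cprod(At, U) - tube_mult(V, a))
        tubes_a.append(a); tubes_b.append(b)
    return tubes_a, tubes_b                                # tubes of the bidiagonal tensor C_k

tubes_a, tubes_b = ttggka(np.random.rand(30, 20, 3), k=5)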
5. The tensor tubal PCA method. In this section, we describe a tensor-SVD based PCA method for third-order tensors, which naturally arise in problems involving images such as facial recognition. As in the matrix case, we consider a set of $N$ training images, each one encoded as an $n_1 \times n_2 \times n_3$ real tensor $\mathcal{I}_i$, $1 \le i \le N$. In the case of RGB images, each frontal slice contains the encoding of one color layer ($n_3 = 3$), but in order to store additional features, the case $n_3 > 3$ can also be contemplated.
Let us consider one training image $\mathcal{I}_{i_0}$. Each one of the $n_3$ frontal slices $I_{i_0}^{(k)}$ is vectorized, so that the image is represented by the lateral slice $\mathrm{TVect}(\mathcal{I}_{i_0}) \in \mathbb{R}^{L \times 1 \times n_3}$, with $L = n_1 n_2$; gathering these lateral slices for the whole training set yields the tensor $\mathcal{X} \in \mathbb{R}^{L \times N \times n_3}$. Let us now consider the c-SVD decomposition $\mathcal{X} = \mathcal{U} \star_c \mathcal{S} \star_c \mathcal{V}^T$ of $\mathcal{X}$, where $\mathcal{U}$ and $\mathcal{V}$ are orthogonal tensors of size $L \times L \times n_3$ and $N \times N \times n_3$, respectively, and $\mathcal{S}$ is an f-diagonal tensor of size $L \times N \times n_3$.
In the matrix context, it is known that just a few singular values suffice to capture the main features of an image; therefore, applying this idea to each one of the three color layers, an RGB image can be approximated by a low tubal rank tensor. Let us consider an image tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and its c-SVD decomposition $\mathcal{A} = \mathcal{U} \star_c \mathcal{S} \star_c \mathcal{V}^T$. Choosing an integer $r$ such that $r \le \min(n_1, n_2)$, we can approximate $\mathcal{A}$ by the tubal rank $r$ tensor
$$\mathcal{A}_r = \sum_{i=1}^{r} \mathcal{U}(:, i, :) \star_c \mathcal{S}(i, i, :) \star_c \mathcal{V}(:, i, :)^T.$$
In Figure 1, we represent a 512 × 512 RGB image and the images obtained for various truncation indices. On the left part, we plot the singular values of one color layer of the RGB tensor (the exact same behaviour is observed on the two other layers). The rapid decrease of the singular values explains the good quality of the compressed images even for small truncation indices. Applying this idea to our problem, we want to be able to obtain truncated tensor SVDs of the training tensor $\mathcal{X}$ without computing the whole c-SVD. After $k$ iterations of the TTGGKA algorithm (for the case $s = 1$), we obtain three tensors $\mathcal{U}_k \in \mathbb{R}^{n_1 \times k \times n_3}$, $\mathcal{V}_{k+1} \in \mathbb{R}^{n_2 \times (k+1) \times n_3}$ and $\bar{\mathcal{C}}_k \in \mathbb{R}^{k \times (k+1) \times n_3}$, as defined in (4.11), satisfying the relations of Proposition 4.8. Let $\bar{\mathcal{C}}_k = \Phi \star_c \Sigma \star_c \Psi^T$ be the c-SVD of $\bar{\mathcal{C}}_k$, noticing that $\bar{\mathcal{C}}_k \in \mathbb{R}^{k \times (k+1) \times n_3}$ is much smaller than $\mathcal{X}$. Then the first tubal singular values and the left tubal singular slices of $\mathcal{X}$ are approximated by $\Sigma(i, i, :)$ and $\mathcal{U}_k \star_c \Phi(:, i, :)$, respectively, for $i \le k$; see [1] for more details. In order to illustrate the ability of the TTGGKA algorithm to approximate the first singular elements of a tensor, we considered a 900 × 900 × 3 real tensor $\mathcal{A}$ whose frontal slices were matrices generated by a finite difference discretization of differential operators. In Figure 2, we display the error on the first diagonal coefficient of the first frontal slice, $\mathcal{S}(1, 1, 1)$, as a function of the number of iterations of the Tensor Tube Golub-Kahan algorithm. In Table 1, we report the errors, in the tensor Frobenius norm, on the singular tubes as a function of the number $k$ of iterations of the Tensor Tube Golub-Kahan algorithm.
The same behaviour was observed on all the other frontal slices. This example illustrates the ability of the TTGGKA algorithm to approximate the largest singular tubes. The projection space is generated by the lateral slices of the tensor $\mathcal{P} = \mathcal{U}_k \star_c \Phi(:, 1:k, :) \in \mathbb{R}^{n_1 \times k \times n_3}$ derived from the TTGGKA algorithm and the c-SVD decomposition of the bidiagonal tensor $\bar{\mathcal{C}}_k$.
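To summarize the TTPCA pipeline of this section, here is an end-to-end Python sketch on synthetic data; the truncated c-SVD below merely stands in for the TTGGKA step (which serves the same purpose at a lower cost), and all array names are illustrative:

import numpy as np
from scipy.fft import dct, idct

tf  = lambda X: dct(X, axis=2, norm='ortho')
itf = lambda X: idct(X, axis=2, norm='ortho')
def cprod(A, B):
    return itf(np.einsum('imk,mjk->ijk', tf(A), tf(B)))

def tvect(I):
    # TVect: stack the lateral slices of an n1 x n2 x n3 image into an n1*n2 x 1 x n3 slice
    n1, n2, n3 = I.shape
    return I.transpose(1, 0, 2).reshape(n1 * n2, 1, n3)

rng = np.random.default_rng(1)
n1, n2, n3, N, k = 20, 20, 3, 12, 5
images = rng.random((N, n1, n2, n3))                      # placeholder training images
X = np.concatenate([tvect(im) for im in images], axis=1)  # L x N x n3 training tensor
mean = X.mean(axis=1, keepdims=True)
Xc = X - mean

Xh = tf(Xc)                                               # truncated c-SVD, standing in for TTGGKA
Ph = np.zeros((n1 * n2, k, n3))
for i in range(n3):
    u, _, _ = np.linalg.svd(Xh[:, :, i], full_matrices=False)
    Ph[:, :, i] = u[:, :k]
P = itf(Ph)                                               # lateral slices span the projection space

Pt = itf(tf(P).transpose(1, 0, 2))                        # c-transpose of P
Ytrain = cprod(Pt, Xc)                                    # k x N x n3 projected features
test = rng.random((n1, n2, n3))                           # a test image
y = cprod(Pt, tvect(test) - mean)                         # its k x 1 x n3 feature slice
dists = np.linalg.norm(Ytrain - y, axis=(0, 2))           # Frobenius distance to each column
print("closest training image:", dists.argmin())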
6. Numerical experiments. We considered various truncation indices $r$, for which the recognition rates were computed. We also report the CPU time for the training process (i.e., the process of computing the eigenfaces).

6.1. Example 1. In this example, we considered the MNIST database of handwritten digits [10]. The database contains two subsets of 28 × 28 grayscale images (60000 training images and 10000 test images). Each image was vectorized as a vector of length 28 × 28 = 784 and, following the process described in Section 3.1, we formed the training and the test matrices of sizes 784 × 60000 and 784 × 10000, respectively.

6.2. Example 2. In this example, we used the Georgia Tech database GTDB crop [11], which contains 750 face images of 50 persons under different illumination conditions, facial expressions and face orientations. The RGB JPEG images were resized to 100 × 100 × 3 tensors. Each image file is coded as a 100 × 100 × 3 tensor and transformed into a 10000 × 1 × 3 tensor as explained in the previous section. We built the training and test tensors as follows: from the 15 pictures of each person in the database, 5 pictures were randomly chosen and stored in the test folder and the 10 remaining pictures were used for the training tensor. Hence, the database was partitioned into two subsets containing 250 and 500 items, respectively, at each iteration of the simulation.
We applied the TTGGKA-based Algorithm 5.1 for various truncation indices. In Figure 6, we represent a test image (top left position), the closest image in the database (top right), the mean image of the training database (bottom left) and the eigenface associated with the test image (bottom right). To compute the recognition rate, we ran 100 simulations, obtained the number of successes (i.e., a test is successful if the person is correctly identified) and reported the best identification rates as a function of the truncation index $r$ in Figure 7. The results match the performances observed in the literature [12] for this database and confirm that the use of a Golub-Kahan strategy is interesting, especially because, in terms of training, the Tube Tensor PCA algorithm required only 5 seconds instead of 25 seconds when using a c-SVD.
6.3. Example 3. In this third example, we used the larger AR face database (cropped version) [13], which contains 2600 bitmap pictures of the faces of 100 people (50 males and 50 females, 26 pictures per person) with different facial expressions, lighting conditions and face orientations. The bitmap pictures were resized to 100 × 100 JPEG images. The same protocol as for Example 2 was followed: we partitioned the set of images into two subsets. Out of the 26 pictures of each person, 6 pictures were randomly chosen as test images and the remaining 20 were put into the training folder. We applied our approach (TTPCA) to the 10000 × 2000 × 3 training tensor $\mathcal{X}$ and plotted the recognition rate as a function of the truncation index in Figure 9. The training process took 24 seconds, while it would have taken 81.5 seconds using a c-SVD. For all examples, it is worth noticing that, as expected in face identification problems, only a few of the largest singular elements suffice to capture the main features of an image. Therefore, Golub-Kahan based strategies such as the TTPCA method are an interesting choice.

7. Conclusion.
In this manuscript, we focused on two types of Golub-Kahan factorizations. We used recent advances in the field of tensor factorization and showed that this approach is efficient for image identification. The main feature of this approach resides in the ability of the global Golub-Kahan algorithms to approximate the dominant singular elements of a training matrix or tensor without computing a full SVD. This is particularly important, as the matrices and tensors involved in this type of application can be very large. Moreover, when color has to be taken into account, this approach does not involve a conversion to grayscale, which can be very important for some applications.