A Multidimensional Principal Component Analysis via the C-Product Golub–Kahan–SVD for Classification and Face Recognition
Abstract
1. Introduction
2. Definitions and Notations
2.1. Discrete Cosine Transformation
2.2. Definitions and Properties of the Cosine Product
Algorithm 1 Computing the c-product. 
Inputs: $\mathcal{A}\in {\mathbb{R}}^{{n}_{1}\times {n}_{2}\times {n}_{3}}$ and $\mathcal{B}\in {\mathbb{R}}^{{n}_{2}\times m\times {n}_{3}}$. 
Output: $\mathcal{C}=\mathcal{A}{\star}_{c}\mathcal{B}\in {\mathbb{R}}^{{n}_{1}\times m\times {n}_{3}}$. 
1. Compute $\tilde{\mathcal{A}}=\mathtt{dct}(\mathcal{A},[\,],3)$ and $\tilde{\mathcal{B}}=\mathtt{dct}(\mathcal{B},[\,],3)$. 
2. Compute each frontal slice of $\tilde{\mathcal{C}}$ by
$${\tilde{\mathcal{C}}}^{\left(i\right)}={\tilde{\mathcal{A}}}^{\left(i\right)}\,{\tilde{\mathcal{B}}}^{\left(i\right)},\qquad i=1,\dots,{n}_{3}.$$

3. Compute $\mathcal{C}=\mathtt{idct}(\tilde{\mathcal{C}},[\,],3)$. 
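Algorithm 1 can be sketched directly in Python, assuming MATLAB's $\mathtt{dct}(\cdot,[\,],3)$ corresponds to the orthonormal type-II DCT along the third mode (SciPy's `dct` with `norm='ortho'`); the function name `cproduct` and all variable names are ours, not the authors':

```python
import numpy as np
from scipy.fft import dct, idct

def cproduct(A, B):
    """Cosine product A *_c B of third-order tensors (Algorithm 1).

    A has shape (n1, n2, n3), B has shape (n2, m, n3); the result has
    shape (n1, m, n3).
    """
    # Step 1: orthonormal type-II DCT along the third mode (MATLAB's dct(A,[],3)).
    A_hat = dct(A, type=2, norm='ortho', axis=2)
    B_hat = dct(B, type=2, norm='ortho', axis=2)
    # Step 2: multiply matching frontal slices in the transform domain.
    C_hat = np.einsum('ijk,jlk->ilk', A_hat, B_hat)
    # Step 3: inverse DCT along the third mode.
    return idct(C_hat, type=2, norm='ortho', axis=2)
```

Since the transform is orthonormal and the slice products are ordinary matrix products, the c-product inherits associativity and distributivity from matrix multiplication.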
Algorithm 2 The Tensor SVD (cSVD). 
Input: $\mathcal{A}\in {\mathbb{R}}^{{n}_{1}\times {n}_{2}\times {n}_{3}}$. 
Output: $\mathcal{U}$, $\mathcal{V}$ and $\mathcal{S}$. 
1. Compute $\tilde{\mathcal{A}}=\mathtt{dct}(\mathcal{A},\left[\right],3)$. 
2. Compute each frontal slice of $\tilde{\mathcal{U}}$, $\tilde{\mathcal{V}}$ and $\tilde{\mathcal{S}}$ from $\tilde{\mathcal{A}}$ as follows: 
(a) for $i=1,\dots ,{n}_{3}$
$$[{\tilde{\mathcal{U}}}^{\left(i\right)},{\tilde{\mathcal{S}}}^{\left(i\right)},{\tilde{\mathcal{V}}}^{\left(i\right)}]=\mathrm{svd}({\tilde{\mathcal{A}}}^{\left(i\right)})$$

(b) End for 
3. Compute $\mathcal{U}=\mathtt{idct}(\tilde{\mathcal{U}},[\,],3)$, $\mathcal{S}=\mathtt{idct}(\tilde{\mathcal{S}},[\,],3)$ and $\mathcal{V}=\mathtt{idct}(\tilde{\mathcal{V}},[\,],3)$. 
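A minimal sketch of Algorithm 2 under the same DCT conventions (function and variable names are illustrative; a truncated variant would keep only the leading singular triplets of each slice):

```python
import numpy as np
from scipy.fft import dct, idct

def csvd(A):
    """Tensor SVD under the cosine product (Algorithm 2):
    one matrix SVD per frontal slice in the DCT domain."""
    n1, n2, n3 = A.shape
    A_hat = dct(A, type=2, norm='ortho', axis=2)
    U_hat = np.zeros((n1, n1, n3))
    S_hat = np.zeros((n1, n2, n3))
    V_hat = np.zeros((n2, n2, n3))
    for i in range(n3):
        u, s, vt = np.linalg.svd(A_hat[:, :, i])
        U_hat[:, :, i] = u
        np.fill_diagonal(S_hat[:, :, i], s)   # singular values on the slice diagonal
        V_hat[:, :, i] = vt.T
    U = idct(U_hat, type=2, norm='ortho', axis=2)
    S = idct(S_hat, type=2, norm='ortho', axis=2)
    V = idct(V_hat, type=2, norm='ortho', axis=2)
    return U, S, V
```

One recovers $\mathcal{A}=\mathcal{U}\star_c\mathcal{S}\star_c\mathcal{V}^T$, where the tensor transpose permutes the first two modes.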
3. Tensor Principal Component Analysis for Face Recognition
3.1. The Matrix Case
 If $\epsilon \ge \theta$, then the input image is not a face image and is not recognized.
 If $\epsilon < \theta$ and ${\epsilon}_{i}\ge \theta$ for all $i$, then the input image is a face image, but it does not belong to any individual in the database (an unknown face).
 If $\epsilon < \theta$ and ${\epsilon}_{i} < \theta$ for some $i$, then the input image is recognized as the individual face image associated with the class vector ${x}_{i}$ minimizing ${\epsilon}_{i}$.
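The three-way decision rule above can be written as a small helper, where `eps` plays the role of $\epsilon$ (distance of the input image to the face space), `class_dists[i]` the role of $\epsilon_i$ (distance to class $i$), and the return values are illustrative labels, not the paper's notation:

```python
def classify(eps, class_dists, theta):
    """Threshold decision rule of Section 3.1 (sketch)."""
    if eps >= theta:
        # far from the face space: not a face image at all
        return "not a face"
    # closest class in the face space
    best = min(range(len(class_dists)), key=lambda i: class_dists[i])
    if class_dists[best] >= theta:
        # a face image, but no class is close enough
        return "unknown face"
    # recognized as the individual of class `best`
    return best
```
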
4. The Tensor Golub–Kahan Method
4.1. The Tensor C-Global Golub–Kahan Algorithm
Algorithm 3 The Tensor Global Golub–Kahan algorithm (TGGKA). 
1. Choose a tensor ${\mathcal{V}}_{1}\in {\mathbb{R}}^{{n}_{2}\times s\times {n}_{3}}$ such that $\parallel {\mathcal{V}}_{1}{\parallel}_{F}=1$ and set ${\beta}_{0}=0$. 
2. For $i=1,2,\dots ,k$ 
(a) ${\mathcal{U}}_{i}=\mathcal{A}{\star}_{c}{\mathcal{V}}_{i}-{\beta}_{i-1}{\mathcal{U}}_{i-1}$, 
(b) ${\alpha}_{i}={\parallel {\mathcal{U}}_{i}\parallel}_{F}$, 
(c) ${\mathcal{U}}_{i}={\mathcal{U}}_{i}/{\alpha}_{i}$, 
(d) ${\mathcal{V}}_{i+1}={\mathcal{A}}^{T}{\star}_{c}{\mathcal{U}}_{i}-{\alpha}_{i}{\mathcal{V}}_{i}$, 
(e) ${\beta}_{i}={\parallel {\mathcal{V}}_{i+1}\parallel}_{F}$. 
(f) ${\mathcal{V}}_{i+1}={\mathcal{V}}_{i+1}/{\beta}_{i}$. 
End 
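A sketch of Algorithm 3, reusing the c-product of Algorithm 1. Breakdowns ($\alpha_i=0$ or $\beta_i=0$) are not handled, and all names are ours:

```python
import numpy as np
from scipy.fft import dct, idct

def cproduct(A, B):
    # c-product of Algorithm 1: slice-wise matrix products in the DCT domain
    A_hat = dct(A, type=2, norm='ortho', axis=2)
    B_hat = dct(B, type=2, norm='ortho', axis=2)
    return idct(np.einsum('ijk,jlk->ilk', A_hat, B_hat), type=2, norm='ortho', axis=2)

def tggka(A, V1, k):
    """Tensor Global Golub-Kahan bidiagonalisation (Algorithm 3).

    A: (n1, n2, n3) tensor; V1: (n2, s, n3) starting tensor; k: number of steps.
    Returns the F-orthonormal tensors U_i, V_i and the scalars alpha_i, beta_i.
    """
    AT = A.transpose(1, 0, 2)              # tensor transpose for the c-product
    Vs = [V1 / np.linalg.norm(V1)]         # so that ||V_1||_F = 1
    Us, alphas, betas = [], [], []
    beta_prev = 0.0
    U_prev = np.zeros((A.shape[0], V1.shape[1], A.shape[2]))
    for _ in range(k):
        Ui = cproduct(A, Vs[-1]) - beta_prev * U_prev   # step (a)
        ai = np.linalg.norm(Ui)                         # (b) alpha_i = ||U_i||_F
        Ui = Ui / ai                                    # (c)
        Vn = cproduct(AT, Ui) - ai * Vs[-1]             # (d)
        bi = np.linalg.norm(Vn)                         # (e) beta_i = ||V_{i+1}||_F
        Vs.append(Vn / bi)                              # (f)
        Us.append(Ui); alphas.append(ai); betas.append(bi)
        beta_prev, U_prev = bi, Ui
    return Us, Vs, alphas, betas
```

The map $\mathcal{X}\mapsto\mathcal{A}^T\star_c\mathcal{X}$ is the adjoint of $\mathcal{X}\mapsto\mathcal{A}\star_c\mathcal{X}$ for the Frobenius inner product, so the recurrence produces F-orthonormal sequences exactly as in the matrix global Golub–Kahan process.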
4.2. Tensor Tubal Golub–Kahan Bidiagonalisation Algorithm
Algorithm 4 Normalization algorithm (Normalize). 
1. Input. $\mathcal{A}\in {\mathbb{R}}^{{n}_{1}\times {n}_{2}\times {n}_{3}}$ and a tolerance $tol>0$. 
2. Output. The tensor $\mathcal{Q}$ and the tube fiber $\mathbf{a}$. 
3. Set $\tilde{\mathcal{Q}}=\mathrm{dct}(\mathcal{A},\left[\right],3)$ 
(a) For $j=1,\dots ,{n}_{3}$ 
i. ${a}_{j}={\|{\tilde{\mathcal{Q}}}^{\left(j\right)}\|}_{F}$ 
ii. If ${a}_{j}>tol$, ${\tilde{\mathcal{Q}}}^{\left(j\right)}={\tilde{\mathcal{Q}}}^{\left(j\right)}/{a}_{j}$ 
iii. Else ${\tilde{\mathcal{Q}}}^{\left(j\right)}=\mathrm{rand}({n}_{1},{n}_{2})$; ${a}_{j}={\|{\tilde{\mathcal{Q}}}^{\left(j\right)}\|}_{F}$; ${\tilde{\mathcal{Q}}}^{\left(j\right)}={\tilde{\mathcal{Q}}}^{\left(j\right)}/{a}_{j}$; ${a}_{j}=0$, 
(b) End 
4. $\mathcal{Q}=\mathrm{idct}(\tilde{\mathcal{Q}},\left[\right],3)$, $\mathbf{a}=\mathrm{idct}(\mathbf{a},[],3)$ 
5. End 
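Algorithm 4 in the same DCT conventions. The random replacement in the degenerate branch follows step iii, and the returned tube fiber $\mathbf{a}$ is stored as a length-$n_3$ vector for simplicity; names are ours:

```python
import numpy as np
from scipy.fft import dct, idct

def normalize(A, tol=1e-12):
    """Normalization of Algorithm 4: scale each frontal slice of dct(A,[],3)
    to unit Frobenius norm, collecting the norms in a tube fiber a, so that
    in the DCT domain each slice satisfies A_hat^(j) = a_j * Q_hat^(j)."""
    n1, n2, n3 = A.shape
    Q_hat = dct(A, type=2, norm='ortho', axis=2)
    a_hat = np.zeros(n3)
    for j in range(n3):
        aj = np.linalg.norm(Q_hat[:, :, j])
        if aj > tol:
            Q_hat[:, :, j] /= aj
        else:
            # degenerate slice: replace by a normalized random slice, record a zero weight
            Q_hat[:, :, j] = np.random.rand(n1, n2)
            Q_hat[:, :, j] /= np.linalg.norm(Q_hat[:, :, j])
            aj = 0.0
        a_hat[j] = aj
    Q = idct(Q_hat, type=2, norm='ortho', axis=2)
    a = idct(a_hat, type=2, norm='ortho')   # tube fiber of length n3
    return Q, a
```
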
Algorithm 5 The Tensor Tube Global Golub–Kahan algorithm (TTGGKA). 
1. Choose a tensor ${\mathcal{V}}_{1}\in {\mathbb{R}}^{{n}_{2}\times s\times {n}_{3}}$ such that $\langle {\mathcal{V}}_{1},{\mathcal{V}}_{1}\rangle =\mathbf{e}$ and set ${\mathbf{b}}_{0}=\mathbf{0}$. 
2. For $i=1,2,\dots ,k$ 
(a) ${\mathcal{U}}_{i}=\mathcal{A}{\star}_{c}{\mathcal{V}}_{i}-{\mathbf{b}}_{i-1}\divideontimes {\mathcal{U}}_{i-1}$, 
(b) $[{\mathcal{U}}_{i},{\mathbf{a}}_{i}]=Normalize\left({\mathcal{U}}_{i}\right)$. 
(c) ${\mathcal{V}}_{i+1}={\mathcal{A}}^{T}{\star}_{c}{\mathcal{U}}_{i}-{\mathbf{a}}_{i}\divideontimes {\mathcal{V}}_{i}$, 
(d) $[{\mathcal{V}}_{i+1},{\mathbf{b}}_{i}]=Normalize\left({\mathcal{V}}_{i+1}\right)$. 
End 
5. The Tensor Tubal PCA Method
Algorithm 6 The Tensor Tubal PCA algorithm (TTPCA). 
1. Inputs: Training image tensor $\mathcal{X}$ ($N$ images), mean image tensor $\overline{\mathcal{X}}$, test image ${\mathcal{I}}_{0}$, truncation index $r$, number of iterations $k$ of the TTGGKA algorithm ($k\ge r$). 
2. Output: Closest image in the training database. 
3. Run k iterations of the TTGGKA algorithm to obtain tensors ${\mathbb{U}}_{k}$ and $\widehat{{\mathcal{C}}_{k}}$ 
4. Compute $[\mathsf{\Phi},\,\mathsf{\Sigma},\,\mathsf{\Psi}]=\mathrm{cSVD}(\widehat{{\mathcal{C}}_{k}})$ 
5. Compute the projection tensor ${\mathcal{P}}_{r}=[{\mathcal{P}}_{r}(:,1,:),\,\cdots,\,{\mathcal{P}}_{r}(:,r,:)]$, where ${\mathcal{P}}_{r}(:,i,:)={\mathbb{U}}_{k}{\star}_{c}\mathsf{\Phi}(:,i,:)\in {\mathbb{R}}^{{n}_{1}\times 1\times {n}_{3}}$ 
6. Compute the projected training tensor ${\widehat{\mathcal{X}}}_{r}={\mathcal{P}}_{r}^{T}{\star}_{c}\mathcal{X}$ and the projected centred test image ${\widehat{\mathcal{I}}}_{r}={\mathcal{P}}_{r}^{T}{\star}_{c}({\mathcal{I}}_{0}-\overline{\mathcal{X}})$ 
7. Find $i=\arg{\min}_{i=1,\dots,N}{\|{\widehat{\mathcal{I}}}_{r}-{\widehat{\mathcal{X}}}_{r}(:,i,:)\|}_{F}$ 
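The recognition pipeline above can be sketched compactly. For brevity, this sketch replaces the TTGGKA + cSVD compression of steps 3–4 with a direct slice-wise SVD of the centered training tensor, which yields an analogous projection basis; function and variable names are ours:

```python
import numpy as np
from scipy.fft import dct, idct

def _c(X):  return dct(X, type=2, norm='ortho', axis=-1)

def ttpca_classify(X, I0, r):
    """TTPCA-style recognition (Algorithm 6), simplified sketch.

    X: training tensor (n1, N, n3), one image per lateral slice;
    I0: test image (n1, 1, n3); r: truncation index.
    Returns the index of the closest training image.
    """
    Xbar = X.mean(axis=1, keepdims=True)        # mean image tensor
    Xc_hat = _c(X - Xbar)                       # centered training set, DCT domain
    n1, N, n3 = X.shape
    # projection tensor P_r: first r left singular vectors of each DCT-domain slice
    P_hat = np.zeros((n1, r, n3))
    for j in range(n3):
        u, _, _ = np.linalg.svd(Xc_hat[:, :, j], full_matrices=False)
        P_hat[:, :, j] = u[:, :r]
    # projected training images and projected centred test image (P_r^T *_c ...)
    Xr_hat = np.einsum('irk,ijk->rjk', P_hat, Xc_hat)
    Ir_hat = np.einsum('irk,ijk->rjk', P_hat, _c(I0 - Xbar))
    # Frobenius distances are unchanged by the orthonormal DCT, so we can
    # compare directly in the transform domain
    dists = [np.linalg.norm(Xr_hat[:, i, :] - Ir_hat[:, 0, :]) for i in range(N)]
    return int(np.argmin(dists))
```
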
6. Numerical Tests
6.1. Example 1
6.2. Example 2
6.3. Example 3
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
|  | $k=10$ | $k=30$ | $k=50$ | $k=70$ |
|---|---|---|---|---|
| $\mathcal{S}(1,1,:)$ | $3.6\times {10}^{-4}$ | $1.3\times {10}^{-5}$ | $5.1\times {10}^{-11}$ | $4.8\times {10}^{-17}$ |
| $\mathcal{S}(2,2,:)$ | $2.0\times {10}^{-3}$ | $1.6\times {10}^{-6}$ | $5.2\times {10}^{-7}$ | $3.1\times {10}^{-8}$ |
| $\mathcal{S}(3,3,:)$ | $4.9\times {10}^{-3}$ | $5.9\times {10}^{-4}$ | $2.3\times {10}^{-4}$ | $5.6\times {10}^{-8}$ |
| $\mathcal{S}(4,4,:)$ | $8.4\times {10}^{-3}$ | $8.8\times {10}^{-4}$ | $1.5\times {10}^{-4}$ | $1.0\times {10}^{-8}$ |
| $\mathcal{S}(5,5,:)$ | $1.4\times {10}^{-2}$ | $1.3\times {10}^{-3}$ | $2.7\times {10}^{-4}$ | $1.1\times {10}^{-8}$ |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. 
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Hached, M.; Jbilou, K.; Koukouvinos, C.; Mitrouli, M. A Multidimensional Principal Component Analysis via the C-Product Golub–Kahan–SVD for Classification and Face Recognition. Mathematics 2021, 9, 1249. https://doi.org/10.3390/math9111249