Principal Component Analysis (PCA) is a widely used technique in image classification and face recognition. Many approaches convert color images to grayscale in order to reduce the training cost. Nevertheless, for some applications color is an important feature, and tensor-based approaches offer the possibility of taking it into account. Moreover, especially in the case of facial recognition, they allow the treatment of enriched databases including, for instance, additional biometric information. However, one has to bear in mind that the computational cost is an important issue, as the volume of data can be very large. We first recall some background facts on the matrix-based approach.
3.1. The Matrix Case
One of the simplest and most effective PCA approaches used in face recognition systems is the so-called eigenface approach. This approach transforms faces into a small set of essential characteristics, the eigenfaces, which are the main components of the initial set of learning images (training set). Recognition is done by projecting a test image into the eigenface subspace, after which the person is classified by comparing its position in eigenface space with the positions of known individuals. The advantage of this approach over other face recognition strategies resides in its simplicity, speed, and insensitivity to small or gradual changes of the face.
The process is defined as follows. Consider a set of training faces $X_1, X_2, \ldots, X_p$. All the face images have the same size $n_1 \times n_2$. Each face $X_i$ is transformed into a vector $x_i$ using the operation $\operatorname{vec}$: $x_i = \operatorname{vec}(X_i) \in \mathbb{R}^{n}$, where $n = n_1 n_2$. These vectors are the columns of the $n \times p$ matrix
$$X = [x_1, x_2, \ldots, x_p].$$
We compute the average image
$$\mu = \frac{1}{p} \sum_{i=1}^{p} x_i.$$
Set $\bar{x}_i = x_i - \mu$ and consider the new matrices
$$A = [\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_p] \quad \text{and} \quad C = A A^{T}.$$
Notice that the $n \times n$ covariance matrix $C$ can be very large. Therefore, the computation of its eigenvalues and the corresponding eigenvectors (eigenfaces) can be very difficult. To circumvent this issue, we instead consider the smaller $p \times p$ matrix $L = A^{T} A$.
Let $v_i$ be an eigenvector of $L$, i.e., $A^{T} A v_i = \lambda_i v_i$; then
$$A A^{T} (A v_i) = \lambda_i (A v_i),$$
which shows that $u_i = A v_i$ is an eigenvector of the covariance matrix $C = A A^{T}$.
The $p$ eigenvectors of $L$ are then used to find the $p$ eigenvectors $u_i = A v_i$ of $C$ that form the eigenface space. We keep only the $k$ eigenvectors corresponding to the $k$ largest eigenvalues (eigenfaces corresponding to small eigenvalues can be omitted, as they explain only a small part of the characteristic features of the faces).
The next step consists of projecting each image of the training sample onto the eigenface space spanned by the orthonormal vectors $u_1, \ldots, u_k$:
$$y_i = U_k^{T} \bar{x}_i, \quad i = 1, \ldots, p, \qquad \text{where } U_k = [u_1, \ldots, u_k].$$
The matrix $U_k U_k^{T}$ is an orthogonal projector onto the subspace $\operatorname{span}\{u_1, \ldots, u_k\}$. A face image $x$ can be projected onto this face space as $y = U_k^{T} (x - \mu)$.
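As a small illustration, the eigenface construction and projection steps above might be sketched as follows in NumPy. This is a sketch under stated assumptions: the array sizes and the random "images" are hypothetical stand-ins, not data from the text.

```python
import numpy as np

# Hypothetical data: p vectorized face images of size n = n1*n2, stacked
# as columns of X (random values stand in for real face images).
rng = np.random.default_rng(0)
n1, n2, p = 8, 8, 5
n = n1 * n2
X = rng.random((n, p))

mu = X.mean(axis=1, keepdims=True)    # average image
A = X - mu                            # centered columns

# Work with the small p x p matrix L = A^T A instead of the
# large n x n covariance matrix C = A A^T.
L = A.T @ A
eigvals, V = np.linalg.eigh(L)        # eigenpairs of L (ascending order)

k = 3
idx = np.argsort(eigvals)[::-1][:k]   # indices of the k largest eigenvalues
U_k = A @ V[:, idx]                   # u_i = A v_i are eigenvectors of C
U_k /= np.linalg.norm(U_k, axis=0)    # normalize the eigenfaces

Y = U_k.T @ A                         # projections of the training images
```

Because the $v_i$ are orthogonal eigenvectors of the symmetric matrix $L$, the columns $A v_i$ are automatically orthogonal, so normalizing them yields an orthonormal basis.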
We now give the steps of an image classification process based on this approach:
Let $x$ be a test vector-image and project it onto the face space to get $y = U_k^{T} (x - \mu)$. Notice that the reconstructed image is given by $\tilde{x} = U_k y + \mu$. Compute the Euclidean distances
$$\epsilon_l = \| y - y_l \|_2, \quad l = 1, \ldots, p.$$
A face is classified as belonging to the class $l$ when the minimum distance $\epsilon_l$ is below some chosen threshold $\theta$. Set
$$\epsilon = \| x - \tilde{x} \|_2,$$
the distance between the original test image $x$ and its reconstructed image $\tilde{x}$. Then:

If $\epsilon \ge \theta$, then the input image is not even a face image and is not recognized.
If $\epsilon < \theta$ and $\epsilon_i \ge \theta$ for all $i$, then the input image is a face image, but it is an unknown face.
If $\epsilon < \theta$ and $\epsilon_l = \min_i \epsilon_i < \theta$, then the input image is the face image associated with the class vector $y_l$.
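The three-way classification above can be sketched as follows; this is a minimal NumPy illustration in which the random data, the `classify` helper, and the threshold value are all hypothetical, not taken from the text.

```python
import numpy as np

# Hypothetical training setup: eigenface basis U_k (orthonormal columns),
# mean image mu, and projections Y of the training images.
rng = np.random.default_rng(1)
n, p, k = 64, 5, 3
X = rng.random((n, p))
mu = X.mean(axis=1, keepdims=True)
A = X - mu
U, _, _ = np.linalg.svd(A, full_matrices=False)
U_k = U[:, :k]
Y = U_k.T @ A

def classify(x, U_k, mu, Y, theta):
    """Apply the three-way test: non-face, unknown face, or known class."""
    y = U_k.T @ (x - mu)                   # projection onto the face space
    x_rec = U_k @ y + mu                   # reconstructed image
    eps = np.linalg.norm(x - x_rec)        # distance to the face space
    dists = np.linalg.norm(Y - y, axis=0)  # distances to the known classes
    if eps >= theta:
        return "not a face"
    l = int(np.argmin(dists))
    if dists[l] >= theta:
        return "unknown face"
    return f"class {l}"

# With a generous threshold, a training image is recognized as its own class.
print(classify(X[:, [2]], U_k, mu, Y, theta=100.0))  # → class 2
```

In practice the threshold $\theta$ would be tuned on held-out data rather than fixed by hand.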
We now give some basic facts on the relation between the singular value decomposition (SVD) and PCA in this context:
Consider the singular value decomposition of the matrix $A$,
$$A = U \Sigma V^{T},$$
where $U$ and $V$ are orthonormal matrices of sizes $n$ and $p$, respectively. The singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_p \ge 0$ are the square roots of the eigenvalues of the matrix $A^{T} A$, the $u_i$'s are the left singular vectors and the $v_i$'s are the right singular vectors. We have
$$A^{T} A = V \Sigma^{T} \Sigma V^{T},$$
which is the eigendecomposition of the matrix $L$, and
$$A A^{T} = U \Sigma \Sigma^{T} U^{T}.$$
In the PCA method, the projected eigenface space is then generated by the first $k$ columns of the unitary matrix $U$ derived from the SVD of the matrix $A$.
As only a small number $k$ of the largest singular values is needed in PCA, we can use the well-known Golub–Kahan algorithm to compute these singular values and the corresponding singular vectors that define the projected subspace.
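The relation between the singular values of $A$ and the eigenvalues of $L$ can be checked numerically, as in the following sketch with random stand-in data. The full SVD from NumPy is used here purely for illustration; for large images one would compute only the $k$ largest singular triplets with a Golub–Kahan/Lanczos-type routine.

```python
import numpy as np

# Random stand-in for the data matrix. For large problems, a
# Golub-Kahan / Lanczos bidiagonalization routine (e.g.
# scipy.sparse.linalg.svds) would compute only the k largest
# singular triplets instead of the full SVD used below.
rng = np.random.default_rng(2)
n, p, k = 100, 10, 4
X = rng.standard_normal((n, p))
mu = X.mean(axis=1, keepdims=True)
A = X - mu                                # centered data matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k = U[:, :k]                            # basis of the eigenface space

# The squared singular values are the eigenvalues of L = A^T A.
lam = np.linalg.eigvalsh(A.T @ A)[::-1]   # descending order
print(np.allclose(s**2, lam))             # → True
```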
In the next section, we explain how the SVD-based PCA can be extended to tensors and propose an algorithm for facial recognition in this context.