
Sensors 2012, 12(6), 7410-7422; doi:10.3390/s120607410

Article
A Kernel Gabor-Based Weighted Region Covariance Matrix for Face Recognition
Huafeng Qin 1,*, Lan Qin 1, Lian Xue 1 and Yantao Li 2
1
Key Laboratory of Optoelectronic Technology and Systems of Ministry of Education, College of Opto-Electronic Engineering, Chongqing University, Chongqing 400030, China; E-Mails: qinlan@cqu.edu.cn (L.Q.); xuelian@cqu.edu.cn (L.X.)
2
College of Computer Science, Chongqing University, Chongqing 400030, China; E-Mail: yantaoli@foxmail.com
*
Author to whom correspondence should be addressed; E-Mail: N1107450k@ntu.edu.sg; Tel.: +86-65-8455-5456.
Received: 6 May 2012; in revised form: 16 May 2012 / Accepted: 17 May 2012 /
Published: 31 May 2012

Abstract

This paper proposes a novel image region descriptor for face recognition, named kernel Gabor-based weighted region covariance matrix (KGWRCM). As different parts of a face differ in their effectiveness for characterizing and recognizing faces, we construct a weighting matrix by computing the similarity of each pixel within a face sample to emphasize features. We then incorporate the weighting matrices into a region covariance matrix, named the weighted region covariance matrix (WRCM), to obtain discriminative features of faces for recognition. Finally, to further preserve discriminative features in a higher-dimensional space, we develop the kernel Gabor-based weighted region covariance matrix (KGWRCM). Experimental results show that the KGWRCM outperforms other algorithms, including the kernel Gabor-based region covariance matrix (KGRCM).
Keywords:
face recognition; Gabor features; weighted region covariance matrix; kernelization

1. Introduction

Feature extraction from images or image regions is a key step in image recognition and video analysis. Recently, matrix-based feature representations [1–6] have been developed and employed for feature extraction. Tuzel et al. [3] introduced the region covariance matrix (RCM) as a new image region descriptor and applied it to object detection and texture classification. The RCM is the covariance matrix of basic features extracted from a region: its diagonal entries represent the variance of each feature, while its off-diagonal entries represent the pairwise correlations. Using the RCM as a region descriptor has several advantages. Firstly, it provides a natural fusion method, as it can fuse multiple basic features without any normalization or weighting operations. Secondly, it can be invariant to rotation. Thirdly, its computational cost does not depend on the size of the region. Owing to these advantages, the RCM has been employed to detect and track objects [3,5] and has achieved promising results. The RCMs in [3] and [5] were constructed from basic features including pixel locations, color values, and the norms of the first- and second-order derivatives. However, directly employing the RCM for human face recognition does not achieve high recognition rates. To improve face recognition rates, Pang et al. [4] proposed the Gabor-based RCM (GRCM), which uses pixel locations and Gabor features to construct the region covariance. As Gabor features carry more discriminating information, the GRCM displayed better performance. Subsequently, they also proposed a kernel Gabor-based RCM (KGRCM) [7] to capture higher-order statistics beyond the original low-dimensional space. Their experimental results demonstrated that the KGRCM can improve classification performance. Recently, the KGRCM has also been applied to object detection and tracking [6].
Such a nonlinear descriptor can capture nonlinear relationships within image regions because it employs a nonlinear region covariance matrix.

However, previous RCM-based methods consider each pixel of the training image to contribute equally when constructing the RCM, i.e., the contribution of each pixel is usually set to 1/N², where N is the number of pixels in a local region. This assumption of equal contribution does not hold in real-world applications, because pixels in different image parts may have different discriminative power. For example, pixels at important facial features such as the eyes, mouth, and nose should be emphasized, while others, such as those on the cheeks and forehead, should be deemphasized.

Motivated by the above reasons, we propose in this paper a weighted region covariance matrix (WRCM) to explicitly exploit the different importance of each pixel of a sample. The WRCM, however, can only extract linear face features, whereas exploiting nonlinear features can achieve higher performance in face recognition tasks [7–9]. To further preserve nonlinear features, we develop the kernel Gabor-based weighted region covariance matrix (KGWRCM). Experimental results on the ORL Face database [10], the Yale Face database [11] and the AR database [12] show that the KGWRCM algorithm outperforms the RCM, the WRCM, the Gabor-based RCM (GRCM) [4], the kernel Gabor-based RCM (KGRCM) [7], and the conventional KPCA [9], Gabor + PCA [13], and Gabor + LDA [13] algorithms in terms of recognition rate.

2. Region Covariance Matrix (RCM)

The RCM [3] is the matrix of covariances of features computed inside an image region. Let F be a two-dimensional image of size h × w, where h and w are the height and width of the image region, so the number of pixels in the region is N = h × w. Define a mapping ϕ that maps a pixel (k, l) of F onto the d-dimensional feature vector x_i:

$$\phi(F, k, l) = x_i \in \Phi^d \tag{1}$$

As a result, there are N d-dimensional feature vectors (x_i), i = 1,…,N. For an intensity image, the feature mapping ϕ is defined by the pixel location, the gray value, and the norms of the first- and second-order derivatives of the intensities with respect to k and l:

$$\phi(F, k, l) = \left[\; k \;\; l \;\; F(k,l) \;\; |F_k| \;\; |F_l| \;\; |F_{kk}| \;\; |F_{ll}| \;\right]^T \tag{2}$$

The image region can then be represented by a d × d covariance matrix of the basic feature vectors xi:

$$C = \frac{1}{N}\sum_{i=1}^{N}(x_i - u)(x_i - u)^T \tag{3}$$
where u is the mean of the feature vectors x_i:
$$u = \frac{1}{N}\sum_{i=1}^{N} x_i \tag{4}$$

Equation (3) can also be expressed as:

$$C = \frac{1}{2}\,\frac{1}{N^2}\sum_{j=1}^{N}\sum_{i=1}^{N}(x_i - x_j)(x_i - x_j)^T \tag{5}$$

The computation process is given in Appendix A.
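As a concrete illustration, the feature mapping of Equation (2) and the covariance of Equation (3) can be sketched in a few lines of NumPy. This is a hypothetical minimal implementation, not the authors' code; the derivative magnitudes are approximated with simple finite differences via `np.gradient`.

```python
import numpy as np

def rcm(image):
    """Region covariance matrix (RCM) of an intensity image region.

    Per-pixel basic features: location (k, l), intensity, and the
    magnitudes of the first- and second-order derivatives.
    """
    h, w = image.shape
    k, l = np.mgrid[0:h, 0:w]
    # Derivative magnitudes via finite differences (an approximation).
    Fk = np.abs(np.gradient(image, axis=0))
    Fl = np.abs(np.gradient(image, axis=1))
    Fkk = np.abs(np.gradient(np.gradient(image, axis=0), axis=0))
    Fll = np.abs(np.gradient(np.gradient(image, axis=1), axis=1))
    feats = np.stack([k, l, image, Fk, Fl, Fkk, Fll], axis=0)
    X = feats.reshape(7, -1)                    # d x N feature matrix
    mu = X.mean(axis=1, keepdims=True)
    return (X - mu) @ (X - mu).T / X.shape[1]   # d x d covariance
```

The result is a 7 × 7 symmetric positive semidefinite matrix whatever the region size, which is the property that makes the descriptor's cost independent of the region.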

3. Weighted Region Covariance Matrix (WRCM)

Based on the feature vectors xi, the d × d weighted region covariance matrix of the image region is defined as follows:

$$\begin{aligned} C_W &= \frac{1}{2}\sum_{j=1}^{N}\sum_{i=1}^{N}(x_i - x_j)(x_i - x_j)^T S_{ij} \\ &= \sum_{j=1}^{N}\sum_{i=1}^{N} x_i S_{ij} x_i^T - \sum_{j=1}^{N}\sum_{i=1}^{N} x_i S_{ij} x_j^T \\ &= \sum_{i=1}^{N} x_i D_{ii} x_i^T - X S X^T \\ &= X D X^T - X S X^T = X(D - S)X^T = X L X^T \end{aligned} \tag{6}$$
where the matrix S is a similarity matrix [14], chosen as
$$S_{ij} = \exp\!\left(-\|x_i - x_j\|^2 / \sigma\right) \tag{7}$$
with values ranging from 0 to 1, where σ is a suitable constant. D is a diagonal matrix whose entries are the column (or row) sums of S, $D_{ii} = \sum_{j=1}^{N} S_{ij}$, and L = D − S is an N × N matrix.

Comparing Equations (5) and (6), we can see that the WRCM reduces to the RCM when S_ij = 1/N², which implies that the RCM is a special case of the WRCM. However, because all the weights in the RCM are fixed at 1/N², the RCM cannot exploit the different importance of each pixel of a sample. The WRCM, in contrast, can assign a different weight to each pixel, so it preserves more discriminative information than the RCM.
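The derivation in Equation (6) translates directly into matrix operations. The sketch below assumes the heat-kernel similarity S_ij = exp(−‖x_i − x_j‖²/σ) with a hypothetical width `sigma`; it illustrates the definition and is not the reference implementation.

```python
import numpy as np

def wrcm(X, sigma=1.0):
    """Weighted region covariance matrix C_W = X L X^T.

    X is a d x N matrix whose columns are per-pixel feature vectors;
    sigma is the similarity width (a chosen constant).
    """
    # Pairwise squared distances between the columns of X.
    sq = np.sum(X**2, axis=0)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X.T @ X
    S = np.exp(-np.maximum(d2, 0.0) / sigma)   # similarity weights in (0, 1]
    D = np.diag(S.sum(axis=1))                 # row sums of S
    L = D - S                                  # graph-Laplacian-like matrix
    return X @ L @ X.T
```

Since L is a Laplacian of a nonnegative symmetric weight matrix, C_W is symmetric positive semidefinite, like the ordinary RCM.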

As C_W in Equation (6) is a matrix-form feature, commonly used vector distances cannot be applied directly. The generalized-eigenvalue-based distance proposed by Förstner [15] is hence used to measure the dissimilarity between two WRCMs $C_W^p$ and $C_W^g$:

$$D(C_W^p, C_W^g) = \sqrt{\sum_{i=1}^{c} \ln^2 \lambda_i\!\left(C_W^p, C_W^g\right)} \tag{8}$$
where λ_1,…,λ_c are the generalized eigenvalues of $C_W^p$ and $C_W^g$, computed from
$$\lambda_i\, C_W^p\, v_i = C_W^g\, v_i, \qquad i = 1, \ldots, c \tag{9}$$
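The distance of Equations (8) and (9) can be evaluated with a standard generalized symmetric eigensolver. The sketch below uses `scipy.linalg.eigh` and assumes both covariance matrices are symmetric positive definite; it is an illustration, not the authors' code.

```python
import numpy as np
from scipy.linalg import eigh

def forstner_distance(Cp, Cg):
    """Generalized-eigenvalue distance between two covariance matrices:
    D(Cp, Cg) = sqrt(sum_i ln^2 lambda_i), where the lambda_i solve
    lambda_i Cp v_i = Cg v_i.
    """
    lam = eigh(Cg, Cp, eigvals_only=True)   # generalized eigenvalues
    return np.sqrt(np.sum(np.log(lam) ** 2))
```

The distance is zero exactly when the two matrices coincide, since all generalized eigenvalues are then 1.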

To preserve the local and global patterns, similar to [3,4], we represent a face image with five WRCMs from five different regions (R1, R2, R3, R4, and R5) (Figure 1). The five WRCMs (CW1, CW2, CW3, CW4, and CW5) are constructed from five different regions. As CW1 is the weighted region covariance matrix of the entire image region R1, it is a global representation of the face. The CW2, CW3, CW4, and CW5 are extracted from four local image regions (R2, R3, R4, and R5), so they are part-based representations of the face.

After obtaining WRCMs of each region, it is necessary to measure the distance between the gallery and probe sets. Let C W p and C W g be WRCMs from the gallery and probe sets. The distance between a gallery WRCM and a probe one is computed as follows:

$$d(C_W^p, C_W^g) = \min_j \left[ \sum_{i=1}^{5} D(C_{Wi}^p, C_{Wi}^g) - D(C_{Wj}^p, C_{Wj}^g) \right] = \sum_{i=1}^{5} D(C_{Wi}^p, C_{Wi}^g) - \max_j D(C_{Wj}^p, C_{Wj}^g) \tag{10}$$
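The combined gallery-probe distance above sums the five per-region WRCM distances and discards the largest one, which makes the match robust to the single worst region (e.g., a locally occluded part of the face). A minimal sketch:

```python
def region_distance(dists):
    """Combine five per-region distances by dropping the largest one.

    `dists` holds the five D(C_Wi^p, C_Wi^g) values; leaving out the
    worst region equals min over j of (sum - dists[j]).
    """
    return sum(dists) - max(dists)
```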

4. Kernel Weighted Region Covariance Matrix (KWRCM)

To generalize the WRCM to the nonlinear case, we use a nonlinear kernel mapping x ∈ Φ^d → ϕ(x) ∈ Ω to map the feature data into a higher-dimensional space Ω, in which a linear WRCM is then computed to preserve the intrinsic geometric structure. Suppose that R1 and R2 are two rectangular regions in the gallery and probe set images, respectively, and let m and n be the numbers of pixels in R1 and R2. The higher-dimensional features extracted from R1 and R2 are collected as ϕ(X) = [ϕ(x_1), ϕ(x_2),…, ϕ(x_m)] and ϕ(Y) = [ϕ(y_1), ϕ(y_2),…, ϕ(y_n)]. Let $C_{\phi W}^p$ and $C_{\phi W}^g$ be the kernel weighted region covariance matrices of R1 and R2, respectively, computed as follows:

$$\begin{aligned} C_{\phi W}^p &= \sum_{j=1}^{m}\sum_{i=1}^{m} \phi(x_i)\, S_{ij}^{*}\, \phi^T(x_i) - \sum_{j=1}^{m}\sum_{i=1}^{m} \phi(x_i)\, S_{ij}^{*}\, \phi^T(x_j) \\ &= \sum_{i=1}^{m} \phi(x_i)\, D_{ii}^{*}\, \phi^T(x_i) - \phi(X)\, S^{*} \phi^T(X) \\ &= \phi(X) D^{*} \phi^T(X) - \phi(X) S^{*} \phi^T(X) = \phi(X)(D^{*} - S^{*})\phi^T(X) = \phi(X) L^{*} \phi^T(X) \end{aligned} \tag{11}$$
where $S_{ij}^{*} = \exp(-\|x_i - x_j\|^2 / t)$, $D_{ii}^{*} = \sum_{j=1}^{m} S_{ij}^{*}$, and $L^{*} = D^{*} - S^{*}$. Similarly,
$$\begin{aligned} C_{\phi W}^g &= \sum_{j=1}^{n}\sum_{i=1}^{n} \phi(y_i)\, S_{ij}^{\#}\, \phi^T(y_i) - \sum_{j=1}^{n}\sum_{i=1}^{n} \phi(y_i)\, S_{ij}^{\#}\, \phi^T(y_j) \\ &= \sum_{i=1}^{n} \phi(y_i)\, D_{ii}^{\#}\, \phi^T(y_i) - \phi(Y)\, S^{\#} \phi^T(Y) \\ &= \phi(Y) D^{\#} \phi^T(Y) - \phi(Y) S^{\#} \phi^T(Y) = \phi(Y)(D^{\#} - S^{\#})\phi^T(Y) = \phi(Y) L^{\#} \phi^T(Y) \end{aligned} \tag{12}$$
where $S_{ij}^{\#} = \exp(-\|y_i - y_j\|^2 / t)$, $D_{ii}^{\#} = \sum_{j=1}^{n} S_{ij}^{\#}$, and $L^{\#} = D^{\#} - S^{\#}$.

Hence Equation (9) can be written as follows:

$$\phi(X)\, L^{*}\, \phi^T(X)\, V = \lambda\, \phi(Y)\, L^{\#}\, \phi^T(Y)\, V \tag{13}$$

As any eigenvector can be expressed as a linear combination of the mapped samples, there exist coefficients α_i (i = 1,2,…,m) and β_j (j = 1,2,…,n) such that:

$$V = \sum_{i=1}^{m} \alpha_i\, \phi(x_i) + \sum_{j=1}^{n} \beta_j\, \phi(y_j) = \phi(X)\alpha + \phi(Y)\beta \tag{14}$$
where α = [α_1, α_2,…, α_m]^T and β = [β_1, β_2,…, β_n]^T.

Combining Equations (13) and (14), the generalized eigenvalue problem in Equation (13) can be expressed in the form of block matrices:

$$\begin{bmatrix} K(X,X) L^{*} K(X,X) & K(X,X) L^{*} K(X,Y) \\ K(Y,X) L^{*} K(X,X) & K(Y,X) L^{*} K(X,Y) \end{bmatrix}\begin{bmatrix}\alpha\\ \beta\end{bmatrix} = \lambda \begin{bmatrix} K(X,Y) L^{\#} K(Y,X) & K(X,Y) L^{\#} K(Y,Y) \\ K(Y,Y) L^{\#} K(Y,X) & K(Y,Y) L^{\#} K(Y,Y) \end{bmatrix}\begin{bmatrix}\alpha\\ \beta\end{bmatrix} \tag{15}$$

The detailed derivation of Equation (15) is given in Appendix B.

We define the matrices U, A, and B as:

$$U = \begin{bmatrix}\alpha\\ \beta\end{bmatrix} \tag{16}$$
$$A = \begin{bmatrix} K(X,X) L^{*} K(X,X) & K(X,X) L^{*} K(X,Y) \\ K(Y,X) L^{*} K(X,X) & K(Y,X) L^{*} K(X,Y) \end{bmatrix} \tag{17}$$
$$B = \begin{bmatrix} K(X,Y) L^{\#} K(Y,X) & K(X,Y) L^{\#} K(Y,Y) \\ K(Y,Y) L^{\#} K(Y,X) & K(Y,Y) L^{\#} K(Y,Y) \end{bmatrix} \tag{18}$$

Equation (15) can be rewritten as:

$$A U = \lambda B U \tag{19}$$

When A is positive definite, the generalized eigenvalues are obtained through solving the following eigenvalue problem:

$$\lambda\, B A^{-1} U = U \tag{20}$$

However, in many cases A is a singular matrix, so we incorporate a regularization parameter u > 0 on both sides:

$$(A + uI)\, U = \lambda\, (B + uI)\, U \tag{21}$$
where I is the identity matrix. When u is large enough, (B + uI) is positive definite, and Equation (21) becomes a standard eigenvalue problem:
$$(A + uI)(B + uI)^{-1}\, U = \lambda U \tag{22}$$

Based on the eigenvalues obtained from Equation (9) or Equation (22), we compute the distance between the two image regions R1 and R2 using Equation (8).
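The regularized problem of Equations (21) and (22) reduces to a standard eigenvalue problem. The sketch below solves the algebraically equivalent form (B + uI)⁻¹(A + uI)U = λU, which has the same eigenvalues as Equation (22); the default value of u is a hypothetical choice, not one specified by the paper.

```python
import numpy as np

def regularized_geneig(A, B, u=1e-3):
    """Eigenvalues of the regularized problem (A + uI) U = lambda (B + uI) U.

    Reduces to a standard problem by applying (B + uI)^{-1}; u > 0 is a
    regularizer that makes (B + uI) invertible when B is singular.
    """
    n = A.shape[0]
    I = np.eye(n)
    M = np.linalg.solve(B + u * I, A + u * I)   # (B + uI)^{-1} (A + uI)
    return np.linalg.eigvals(M)
```

For small u the eigenvalues approach those of the unregularized problem whenever B is well conditioned.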

5. Kernel Gabor-Based Weighted Region Covariance Matrix (KGWRCM)

The features in Equation (2), such as the pixel locations (k, l), intensity values, and the norms of the first- and second-order derivatives of the intensities with respect to k and l, are effective for tracking and detecting objects. However, their discriminating ability is not strong enough for face recognition [4]. To further improve performance, Gabor features are added to the feature space. A 2-D Gabor wavelet kernel is the product of an elliptical Gaussian envelope and a complex plane wave, defined as:

$$\psi_{u,v}(z) = \frac{\|k_{u,v}\|^2}{\sigma^2}\, e^{-\|k_{u,v}\|^2 \|z\|^2 / 2\sigma^2} \left( e^{i k_{u,v}\cdot z} - e^{-\sigma^2/2} \right) \tag{23}$$
where u and v define the orientation and scale of the Gabor kernels, z = (x, y), ‖·‖ denotes the norm operator, and the wave vector k_{u,v} is defined as
$$k_{u,v} = k_v\, e^{i \phi_u} \tag{24}$$
where k_v = k_max/f^v and ϕ_u = πu/8; k_max is the maximum frequency and f is the spacing factor between kernels in the frequency domain. We use Gabor functions with eight orientations u ∈ {0,…,7} and five scales v ∈ {0,…,4}, giving a total of 40 Gabor functions. The values of the other parameters follow the setting in [16]: σ = 2π, k_max = π/2, f = √2. The Gabor-based features are obtained by convolving the image F with the 40 Gabor wavelets:
$$g_{uv}(z) = \left| F(z) * \psi_{u,v}(z) \right| \tag{25}$$
where * denotes the convolution operator and |·| the magnitude operator.
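The Gabor filter bank defined above, with the stated parameters (σ = 2π, k_max = π/2, f = √2), can be sketched as follows. The 15 × 15 kernel support is a hypothetical choice not specified in the paper, and the convolution uses `scipy.signal.fftconvolve`.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(u, v, size=15, sigma=2 * np.pi,
                 kmax=np.pi / 2, f=np.sqrt(2)):
    """2-D Gabor wavelet: Gaussian envelope times a complex plane wave,
    with orientation u in 0..7 and scale v in 0..4."""
    kv = kmax / f**v
    phi = np.pi * u / 8.0
    kuv = kv * np.exp(1j * phi)                  # wave vector k_{u,v}
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    k2 = np.abs(kuv) ** 2
    envelope = (k2 / sigma**2) * np.exp(-k2 * (x**2 + y**2) / (2 * sigma**2))
    # Complex carrier minus the DC-compensation term exp(-sigma^2/2).
    wave = np.exp(1j * (kuv.real * x + kuv.imag * y)) - np.exp(-sigma**2 / 2)
    return envelope * wave

def gabor_magnitude(image, u, v):
    """Magnitude response g_uv = |F * psi_{u,v}| of one filter."""
    return np.abs(fftconvolve(image, gabor_kernel(u, v), mode='same'))
```

Iterating over all u and v yields the 40 magnitude maps g_00,…,g_74 used in the feature mapping below.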

Therefore, a feature mapping function based on Gabor features is obtained as:

$$\phi(F, k, l) = \left[\; k \;\; l \;\; F(k,l) \;\; g_{00}(k,l) \;\; g_{01}(k,l) \;\; \cdots \;\; g_{74}(k,l) \;\right]^T \tag{26}$$

As the Gabor wavelet representation can capture salient visual properties such as spatial localization, orientation selectivity, and spatial frequency characteristics, Gabor-based features carry more important information. The proposed KGWRCM method can be briefly summarized as follows:

  • partition a face image into five regions (R1, R2, R3, R4, and R5), and extract the basic features of each region using Equation (26).

  • compute the two weight matrices L* and L# using Equations (11) and (12), and obtain the four kernel matrices K(X,X), K(X,Y), K(Y,X), and K(Y,Y) using Equations (30)–(33). Based on these matrices, compute A and B using Equations (17) and (18), respectively.

  • with A and B, obtain the eigenvalues from Equation (20) or Equation (22) and substitute them into Equation (8) to calculate the distance.

  • based on the distance defined in Equation (10), the nearest-neighbor classifier is employed to perform classification.

6. Experimental Results

We tested the KGWRCM algorithm on the ORL Face database [10], the Yale Face database [11] and the AR Face database [12]. The ORL Face database comprises 400 images of 40 distinct subjects; each subject provides 10 images that include variations in pose and scale. To reduce the computational cost, each original image is resized to 56 × 46 by nearest-neighbor interpolation. A random subset of five images per individual is taken, with labels, as the training set, and the remaining images form the testing set. There are in total 252 different ways of selecting five images for training and five for testing; we select 20 random splits with five images for training and five for testing.

The Yale face database contains 165 grayscale images, 11 for each of 15 individuals, which are subject to expression and lighting variations. In this experiment, all face images of size 80 × 80 were resized to 40 × 40. Five images of each subject were randomly chosen for training and the remaining six were used for testing. There are hence 462 different possible splits; we select 20 random splits with five images for training and six for testing.

The AR database consists of over 4,000 images of 126 people (70 men and 56 women). These images include more facial variations, including illumination changes and facial occlusions (sunglasses and scarf). For each individual, 26 pictures were taken in two separate sessions, each session containing 13 images. In the experiment, we chose a subset of the data set consisting of 50 male and 50 female subjects, with seven images per subject. The size of the images is 165 × 120. We select two of the seven images for training and five for testing, giving 21 different possible splits. Figure 2 shows some examples of the first subject in each database.

In our experiments, all images are normalized to zero mean and unit variance. We compare the developed WRCM and KGWRCM with the RCM, the Gabor-based RCM (GRCM) [4], the KGRCM [7], and the conventional KPCA [9], Gabor + PCA [13], and Gabor + LDA [13] methods. For Gabor + PCA (GPCA) and the PCA stage of KPCA, we retain 99% of the variance to determine the number of principal components. For the PCA stage of Gabor + LDA (GLDA), we set the number of principal components to M − c, where M is the number of training samples and c is the number of classes (M = 200 and c = 40 for the ORL database, M = 55 and c = 11 for the Yale database, and M = 200 and c = 100 for the AR database). For KPCA, KGRCM, and the proposed KGWRCM, a Gaussian kernel function is used. The parameters of the following algorithms are selected by cross-validation: (1) the parameter σ of the WRCM and KGWRCM methods; (2) the kernel parameters of the KPCA, KGRCM, and KGWRCM methods. In all experiments, the nearest-neighbor classifier is employed. The performance of each method is evaluated by the mean and standard deviation (std) of the recognition accuracies over the 20 data sets for the ORL and Yale databases and the 21 data sets for the AR database.

The average recognition accuracies on the ORL, Yale and AR databases are shown in Tables 1, 2 and 3, respectively. For the ORL and Yale face databases, the proposed KGWRCM method achieves 99.21% and 79.20% mean recognition accuracy, respectively, higher than that of the other methods. For the AR database, the proposed KGWRCM method achieves 95.95% mean recognition accuracy, which is 4.15% higher than that of the KGRCM and much higher than that of the other methods.

These results clearly show that the proposed KGWRCM method captures more discriminative information than the other methods for face recognition. In particular, the KGWRCM and WRCM outperform the KGRCM and RCM, which implies that the weighted approaches better emphasize the more important parts of faces, deemphasize the less important parts, and preserve discriminative information for face recognition.

7. Conclusions

In this paper, an efficient image representation method for face recognition, called the KGWRCM, is proposed. Considering that some pixels in a face image are more effective than others in representing and recognizing faces, we construct the KGWRCM based on a weighted score for each pixel within a sample to duly emphasize facial features. As the weighting matrix carries more important information, the proposed method shows good performance. Experimental results confirm that the proposed KGWRCM method outperforms other approaches in terms of recognition accuracy. However, similar to the KGRCM, the computational cost of the KGWRCM is high due to the computation of the high-dimensional matrices. In future work, an effective KGWRCM method with low computational complexity will be developed for face recognition.

Acknowledgments

This work is supported by the Research Fund for the Doctoral Program of Higher Education of China (No. 20100191110011) and the Fundamental Research Funds for the Central Universities (No. CDJXS11122220, CDJXS11121145).

Appendix A

Equation (3) can be formulated as

$$\begin{aligned} C &= \frac{1}{N}\sum_{i=1}^{N}(x_i - u)(x_i - u)^T = \frac{1}{N}\sum_{i=1}^{N} x_i x_i^T - \frac{1}{N^2}(Nu)(Nu)^T \\ &= \frac{1}{N}\sum_{i=1}^{N} x_i x_i^T - \frac{1}{N^2}(Xe)(Xe)^T = \frac{1}{N}\sum_{i=1}^{N} x_i x_i^T - X\frac{ee^T}{N^2}X^T \\ &= X\frac{I}{N}X^T - X\frac{ee^T}{N^2}X^T = X\left(\frac{I}{N} - \frac{ee^T}{N^2}\right)X^T \end{aligned} \tag{27}$$
where e is a column vector with one at each entry and I is the identity matrix.

By some simple algebra, Equation (5) can be expressed as

$$\begin{aligned} C &= \frac{1}{2}\,\frac{1}{N^2}\sum_{j=1}^{N}\sum_{i=1}^{N}(x_i - x_j)(x_i - x_j)^T = \frac{1}{N}\sum_{i=1}^{N} x_i x_i^T - \frac{1}{N^2}\sum_{j=1}^{N}\sum_{i=1}^{N} x_i x_j^T \\ &= \frac{1}{N}\sum_{i=1}^{N} x_i x_i^T - X\frac{ee^T}{N^2}X^T = X\frac{I}{N}X^T - X\frac{ee^T}{N^2}X^T = X\left(\frac{I}{N} - \frac{ee^T}{N^2}\right)X^T \end{aligned} \tag{28}$$

Comparing Equations (27) and (28), we obtain

$$C = \frac{1}{2}\,\frac{1}{N^2}\sum_{j=1}^{N}\sum_{i=1}^{N}(x_i - x_j)(x_i - x_j)^T \tag{29}$$

Appendix B

Let k(x_i, x_j) = ϕ(x_i) · ϕ(x_j) be the kernel function. The following four kernel matrices K(X,X), K(X,Y), K(Y,X), and K(Y,Y), of sizes m × m, m × n, n × m, and n × n, respectively, are obtained as

$$K(X,X) = \phi(X)^T \phi(X) = \begin{bmatrix} k(x_1,x_1) & k(x_1,x_2) & \cdots & k(x_1,x_m) \\ k(x_2,x_1) & k(x_2,x_2) & \cdots & k(x_2,x_m) \\ \vdots & \vdots & \ddots & \vdots \\ k(x_m,x_1) & k(x_m,x_2) & \cdots & k(x_m,x_m) \end{bmatrix} \tag{30}$$
$$K(X,Y) = \phi(X)^T \phi(Y) = \begin{bmatrix} k(x_1,y_1) & k(x_1,y_2) & \cdots & k(x_1,y_n) \\ k(x_2,y_1) & k(x_2,y_2) & \cdots & k(x_2,y_n) \\ \vdots & \vdots & \ddots & \vdots \\ k(x_m,y_1) & k(x_m,y_2) & \cdots & k(x_m,y_n) \end{bmatrix} \tag{31}$$
$$K(Y,X) = \phi(Y)^T \phi(X) = \begin{bmatrix} k(y_1,x_1) & k(y_1,x_2) & \cdots & k(y_1,x_m) \\ k(y_2,x_1) & k(y_2,x_2) & \cdots & k(y_2,x_m) \\ \vdots & \vdots & \ddots & \vdots \\ k(y_n,x_1) & k(y_n,x_2) & \cdots & k(y_n,x_m) \end{bmatrix} \tag{32}$$
$$K(Y,Y) = \phi(Y)^T \phi(Y) = \begin{bmatrix} k(y_1,y_1) & k(y_1,y_2) & \cdots & k(y_1,y_n) \\ k(y_2,y_1) & k(y_2,y_2) & \cdots & k(y_2,y_n) \\ \vdots & \vdots & \ddots & \vdots \\ k(y_n,y_1) & k(y_n,y_2) & \cdots & k(y_n,y_n) \end{bmatrix} \tag{33}$$
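For a Gaussian kernel k(x, y) = exp(−‖x − y‖²/t), the Gram matrices of Equations (30)–(33) can be computed without ever forming the mapped features explicitly. A sketch, with a hypothetical kernel width t:

```python
import numpy as np

def kernel_matrix(X, Y, t=1.0):
    """Gaussian Gram matrix K(X, Y) with entries exp(-||x_i - y_j||^2 / t).

    X and Y hold feature vectors in their columns; the result has one
    row per column of X and one column per column of Y.
    """
    d2 = (np.sum(X**2, axis=0)[:, None]
          + np.sum(Y**2, axis=0)[None, :]
          - 2.0 * X.T @ Y)
    return np.exp(-np.maximum(d2, 0.0) / t)
```

K(X,X) and K(Y,Y) are symmetric with unit diagonal, while K(Y,X) is simply the transpose of K(X,Y).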

Substituting Equation (14) into Equation (13), we obtain

$$\phi(X)\, L^{*}\, \phi^T(X)\,\bigl(\phi(X)\alpha + \phi(Y)\beta\bigr) = \lambda\, \phi(Y)\, L^{\#}\, \phi^T(Y)\,\bigl(\phi(X)\alpha + \phi(Y)\beta\bigr) \tag{34}$$

Based on Equations (30)–(33), Equation (34) can be expressed as

$$\phi(X) L^{*} K(X,X)\,\alpha + \phi(X) L^{*} K(X,Y)\,\beta = \lambda\,\phi(Y) L^{\#} K(Y,X)\,\alpha + \lambda\,\phi(Y) L^{\#} K(Y,Y)\,\beta \tag{35}$$

To express Equation (35) in the form of kernel functions, we multiply both sides by ϕ^T(X), which gives

$$K(X,X) L^{*} K(X,X)\,\alpha + K(X,X) L^{*} K(X,Y)\,\beta = \lambda\,K(X,Y) L^{\#} K(Y,X)\,\alpha + \lambda\,K(X,Y) L^{\#} K(Y,Y)\,\beta \tag{36}$$

Similarly, multiplying both sides of Equation (35) by ϕ^T(Y), we have

$$K(Y,X) L^{*} K(X,X)\,\alpha + K(Y,X) L^{*} K(X,Y)\,\beta = \lambda\,K(Y,Y) L^{\#} K(Y,X)\,\alpha + \lambda\,K(Y,Y) L^{\#} K(Y,Y)\,\beta \tag{37}$$

Equations (36) and (37) can be expressed using matrices and vectors as

$$\begin{bmatrix} K(X,X) L^{*} K(X,X) & K(X,X) L^{*} K(X,Y) \end{bmatrix}\begin{bmatrix}\alpha\\ \beta\end{bmatrix} = \lambda \begin{bmatrix} K(X,Y) L^{\#} K(Y,X) & K(X,Y) L^{\#} K(Y,Y) \end{bmatrix}\begin{bmatrix}\alpha\\ \beta\end{bmatrix} \tag{38}$$
$$\begin{bmatrix} K(Y,X) L^{*} K(X,X) & K(Y,X) L^{*} K(X,Y) \end{bmatrix}\begin{bmatrix}\alpha\\ \beta\end{bmatrix} = \lambda \begin{bmatrix} K(Y,Y) L^{\#} K(Y,X) & K(Y,Y) L^{\#} K(Y,Y) \end{bmatrix}\begin{bmatrix}\alpha\\ \beta\end{bmatrix} \tag{39}$$

Stacking Equations (38) and (39) yields

$$\begin{bmatrix} K(X,X) L^{*} K(X,X) & K(X,X) L^{*} K(X,Y) \\ K(Y,X) L^{*} K(X,X) & K(Y,X) L^{*} K(X,Y) \end{bmatrix}\begin{bmatrix}\alpha\\ \beta\end{bmatrix} = \lambda \begin{bmatrix} K(X,Y) L^{\#} K(Y,X) & K(X,Y) L^{\#} K(Y,Y) \\ K(Y,Y) L^{\#} K(Y,X) & K(Y,Y) L^{\#} K(Y,Y) \end{bmatrix}\begin{bmatrix}\alpha\\ \beta\end{bmatrix} \tag{15}$$

References

  1. Jensen, A.C.; Loog, M.; Solberg, A.H.S. Using Multiscale Spectra in Regularizing Covariance Matrices for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2010, 48, 1851–1859. [Google Scholar]
  2. Porikli, F.; Kocak, T. Robust License Plate Detection Using Covariance Descriptor in a Neural Network Framework. Proceedings of the 2006 IEEE International Conference on Advanced Video and Signal Based Surveillance, Sydney, Australia, 22–24 November 2006; pp. 107–113.
  3. Tuzel, O.; Porikli, F.; Meer, P. Region Covariance: A Fast Descriptor for Detection and Classification. Proceedings of the 9th European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; pp. 589–600.
  4. Pang, Y.W.; Yuan, Y.; Li, X.D. Gabor-Based Region Covariance Matrices for Face Recognition. IEEE Trans. Circ. Syst. Video Technol. 2008, 18, 989–993. [Google Scholar]
  5. Tuzel, O.; Porikli, F.; Meer, P. Pedestrian Detection via Classification on Riemannian Manifolds. IEEE Trans. Patt. Anal. Mach. Intell. 2008, 30, 1713–1727. [Google Scholar]
  6. Arif, O.; Vela, P.A. Kernel Covariance Image Region Description for Object Tracking. Proceedings of 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 865–868.
  7. Pang, Y.W.; Yuan, Y.; Li, X.D. Effective Feature Extraction in High-Dimensional Space. IEEE Trans. Syst. Man Cyber. B Cybern. 2008, 38, 1652–1656. [Google Scholar]
  8. Zhao, X.; Zhang, S. Facial Expression Recognition Based on Local Binary Patterns and Kernel Discriminant Isomap. Sensors 2011, 11, 9573–9588. [Google Scholar]
  9. Scholkopf, B.; Smola, A.; Muller, K.R. Nonlinear Component Analysis as a Kernel Eigenvalue Problem. Neural Comput. 1998, 10, 1299–1319. [Google Scholar]
  10. The ORL Face Database. Available online: http://www.cam_orl.co.uk/ (accessed on 10 May 2012).
  11. Yale University. Face Database. Available online: http://cvc.yale.edu/projects/yalefaces/yalefaces.html (accessed on 10 May 2012).
  12. Martinez, A.M.; Kak, A.C. PCA versus LDA. IEEE Trans. Patt. Anal. Mach. Intell. 2001, 23, 228–233. [Google Scholar]
  13. Liu, C.J.; Wechsler, H. Gabor Feature Based Classification Using the Enhanced Fisher Linear Discriminant Model for Face Recognition. IEEE Trans. Image Process. 2002, 11, 467–476. [Google Scholar]
  14. He, X.F.; Yan, S.C.; Hu, Y.X.; Partha, N.Y.; Zhang, H.J. Face Recognition Using Laplacianfaces. IEEE Trans. Patt. Anal. Mach. Intell. 2005, 27, 328–340. [Google Scholar]
  15. Forstner, W.; Moonen, B. A Metric for Covariance Matrices; Technical Report. Stuttgart University: Stuttgart, Germany, 1999. [Google Scholar]
  16. Yang, X.; Zhou, Y.; Zhang, T.; Zheng, E.; Yang, J. Gabor Phase Based Gait Recognition. Electr. Lett. 2008, 44, 620–621. [Google Scholar]
Figure 1. Five regions of a face image. Five WRCMs are constructed from the corresponding regions.
Figure 2. Five examples of the first subject in (a) the ORL face database, (b) the Yale face database and (c) the AR face database.
Table 1. The performance of different approaches on the ORL face database.

Method    Mean recognition rate (%)    Standard deviation (%)
KGWRCM    99.21                        1.12
KGRCM     98.41                        1.24
GRCM      97.06                        1.28
WRCM      93.83                        2.11
RCM       91.88                        2.57
GPCA      89.78                        2.43
GLDA      97.50                        1.37
KPCA      94.43                        1.55
Table 2. The performance of different approaches on the Yale face database.

Method    Mean recognition rate (%)    Standard deviation (%)
KGWRCM    79.20                        8.72
KGRCM     76.23                        9.04
GRCM      72.00                        10.58
WRCM      61.67                        8.76
RCM       51.94                        7.22
GPCA      67.94                        9.36
GLDA      73.47                        7.06
KPCA      73.28                        8.11
Table 3. The performance of different approaches on the AR face database.

Method    Mean recognition rate (%)    Standard deviation (%)
KGWRCM    95.95                        1.46
KGRCM     91.80                        2.58
GRCM      81.46                        11.73
WRCM      48.56                        11.08
RCM       41.31                        12.54
GPCA      78.64                        5.35
GLDA      88.99                        4.18
KPCA      66.89                        7.68