Graph Regularized Within-Class Sparsity Preserving Projection for Face Recognition
Abstract: As a dominant approach to face recognition, subspace learning algorithms show desirable performance. Manifold learning can handle the nonlinearity hidden in the data and project high-dimensional data into a low-dimensional space while preserving the manifold structure. Sparse representation is robust to noise and is practical for face recognition. To extract facial features from face images effectively and robustly, this paper proposes a method called graph regularized within-class sparsity preserving analysis (GRWSPA), which preserves the within-class sparse reconstructive relationship and enhances the separability of different classes. Specifically, each sample is represented by the other samples of the same class, and the reconstructive weights are kept unchanged during projection. To preserve the manifold geometry of the original space, an adjacency graph characterizing interclass separability is constructed and incorporated into the criterion function as a constraint in a supervised manner. As a result, the extracted features are sparse, discriminative, and helpful for classification. Experiments are conducted on two public face databases, the ORL and YALE face databases, and the results show that the proposed method can effectively and correctly find the key facial features from face images and achieves a better recognition rate than existing methods.
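The abstract outlines two ingredients: within-class sparse reconstruction weights that are held fixed under the projection, and a supervised adjacency graph that pushes different classes apart. The sketch below is a simplified interpretation of that pipeline, not the authors' exact formulation: the sparse weights are computed with a Lasso solver, the interclass graph simply connects samples with different labels, and the projection is obtained from a generalized eigenproblem that trades reconstruction error against between-class spread. All function names, the regularization parameters, and the specific graph construction are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import Lasso

def grwspa_sketch(X, y, d=2, alpha=0.01, reg=1e-6):
    """Illustrative within-class sparsity preserving projection.

    X : (n, m) array of n samples with m features
    y : (n,) integer class labels
    d : target dimensionality
    Returns an (m, d) projection matrix.
    """
    n, m = X.shape

    # Step 1: within-class sparse reconstruction weights.
    # Each sample is coded over the other samples of its own class.
    S = np.zeros((n, n))
    for i in range(n):
        idx = np.where((y == y[i]) & (np.arange(n) != i))[0]
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        lasso.fit(X[idx].T, X[i])          # dictionary: same-class samples
        S[i, idx] = lasso.coef_

    # Step 2: supervised interclass adjacency graph (edge iff labels differ)
    # and its graph Laplacian, characterizing between-class separability.
    Wb = (y[:, None] != y[None, :]).astype(float)
    Lb = np.diag(Wb.sum(axis=1)) - Wb

    # Step 3: keep the sparse weights fixed under projection by minimizing
    # the reconstruction error, while the graph term spreads classes apart.
    M = (np.eye(n) - S).T @ (np.eye(n) - S)
    A = X.T @ M @ X + reg * np.eye(m)      # reconstruction scatter
    B = X.T @ Lb @ X + reg * np.eye(m)     # interclass graph scatter
    vals, vecs = eigh(A, B)                # generalized eigenproblem A v = lambda B v
    return vecs[:, :d]                     # directions with smallest ratio
```

Classification would then proceed by projecting gallery and probe faces with the returned matrix and applying a nearest-neighbor rule in the reduced space; the ridge term `reg` only keeps both scatter matrices well-conditioned.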
Share & Cite This Article
Lou, S.; Zhao, X.; Guo, W.; Chen, Y. Graph Regularized Within-Class Sparsity Preserving Projection for Face Recognition. Information 2015, 6, 152-161.