Symmetry (Open Access Article)

8 May 2018

Demographic-Assisted Age-Invariant Face Recognition and Retrieval

1 Department of Electrical Engineering, Mirpur University of Science and Technology, Mirpur 10250 (AJK), Pakistan
2 Faculty of Computing, Engineering and Science, Staffordshire University, Stoke-on-Trent ST4 2DE, UK
3 School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
* Author to whom correspondence should be addressed.

Abstract

Demographic estimation of human face images involves estimating the age group, gender, and race, and has many applications, such as access control, forensics, and surveillance. Demographic estimation can help in designing algorithms that lead to a better understanding of the facial aging process and of face recognition. Such a study has two parts: demographic estimation and subsequent face recognition and retrieval. In this paper, we first extract facial-asymmetry-based demographic informative features to estimate the age group, gender, and race of a given face image. The demographic features are then used to recognize and retrieve face images. A comparison of the demographic estimates from a state-of-the-art algorithm and the proposed approach is also presented. Experimental results on two longitudinal face datasets, MORPH II and FERET, show that the proposed approach can compete with existing methods in recognizing face images across aging variations.

1. Introduction

Recognition of face images is an important yet challenging problem. The challenge mainly stems from pose, illumination, expression, and aging variations. Recognizing face images across aging variations is called age-invariant face recognition. Despite the competitive performance of some existing methods, recognizing and retrieving face images across aging variations remains a challenging problem. Facial aging is a complex process owing to variations in the balance, proportions, and symmetry of the face with varying age, gender, and race [1,2]. More precisely, age, gender, and race are correlated in characterizing facial shapes [3]. In everyday life, we can estimate the age, gender, and race of our peers quite effectively and easily. A number of anthropometric studies such as [3] suggest significant facial morphological differences among race, gender, and age groups. The study suggests that the anthropometric measurements for female subjects are smaller than the corresponding measurements for male subjects. Another study [4] reported broader faces and noses for Asians compared to North American Whites. Similarly, some studies on facial aging such as [5] suggest that facial size changes with age during craniofacial growth, with adults developing a triangular facial shape along with wrinkles. Studies on sexual dimorphism reveal more prominent features for male faces compared to female faces [6]. Some studies suggest that different populations exhibit different degrees of bilateral facial asymmetry. A relationship between facial asymmetry and sexual dimorphism has been revealed in [7]. In [8], it was observed that facial masculinization covaries with bilateral asymmetry in male faces. Facial asymmetry increases with increasing age, as observed in [9,10].
In this paper, we present demographic-assisted recognition and retrieval of face images across aging variations. More precisely, the proposed approach involves: (i) facial-asymmetry-based demographic estimation and (ii) demographic-assisted face recognition and retrieval. To this end, we first estimated the age group, gender, and race of a query face image using facial-asymmetry-based, demographic-aware features learned by convolutional neural networks (CNNs). Face images were then recognized and retrieved based on a demographic-assisted re-ranking approach.
The motivation of this study is to answer the following questions.
(1) Does demographic-estimation-based re-ranking improve the face identification and retrieval performance across aging variations?
(2) What is the individual role of gender, race, and age features in improving face recognition and retrieval performance?
(3) Do deeply learned asymmetric features perform better or worse compared to existing handcrafted features?
We organize the rest of this paper as follows. Existing methods on demographic estimation and face recognition are presented in Section 2. The facial-asymmetry-based demographic estimation approach is illustrated in Section 3. Section 4 presents the demographic-assisted face recognition and retrieval approach with experiments and results. Discussion on the results is presented in Section 5, while the last section concludes this paper.

3. Demographic Estimation

The proposed approach for demographic estimation consists of two main components: (i) preprocessing and (ii) learning demographic informative features using CNNs as described in the following subsections.

3.1. Preprocessing

Face images normally contain various appearance variations, including scale, translation, rotation, illumination, and color cast. To compensate for such variations, we adopted the following preprocessing steps: (i) to correct unwanted color cast, we used the luminance model adopted by NTSC and JPEG [20]; (ii) to mitigate the effects of in-plane rotation and translation, face images were aligned based on fixed eye locations detected by the publicly available Face++ tool [49]; (iii) the aligned face images were cropped to a common size of 128 × 128 pixels; and (iv) to correct illumination variations due to shadows and underexposure, we used difference-of-Gaussian filtering [50]. Figure 2 shows face preprocessing results for two sample face images of a subject from the MORPH II dataset.
Figure 2. Face preprocessing: (a) input images; (b) gray-scale images; (c) images aligned based on fixed eye locations; (d) illumination correction.
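For concreteness, the following is a minimal sketch of these four preprocessing steps in Python with OpenCV. The eye coordinates are assumed to come from an external detector (Face++ in our pipeline), and the crop geometry, Gaussian scales, and output normalization are illustrative assumptions rather than the exact settings used in the paper.

```python
# Hypothetical preprocessing sketch; eye locations, crop geometry, and DoG
# sigmas are illustrative assumptions.
import cv2
import numpy as np

def preprocess_face(img_bgr, left_eye, right_eye, out_size=128):
    # (i) Color-cast removal: keep only the NTSC/JPEG luminance (Y) channel.
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)  # Y = 0.299R + 0.587G + 0.114B

    # (ii) Alignment: rotate about the eye midpoint so the eyes lie on a horizontal line.
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))
    center = ((left_eye[0] + right_eye[0]) / 2.0, (left_eye[1] + right_eye[1]) / 2.0)
    rot = cv2.getRotationMatrix2D(center, angle, 1.0)
    aligned = cv2.warpAffine(gray, rot, (gray.shape[1], gray.shape[0]))

    # (iii) Crop a square face region around the eye midpoint and resize to 128 x 128.
    half = out_size // 2
    x0, y0 = int(center[0]) - half, int(center[1]) - half // 2
    crop = aligned[max(y0, 0):y0 + out_size, max(x0, 0):x0 + out_size]
    crop = cv2.resize(crop, (out_size, out_size)).astype(np.float32)

    # (iv) Illumination correction with difference-of-Gaussian filtering.
    dog = cv2.GaussianBlur(crop, (0, 0), 1.0) - cv2.GaussianBlur(crop, (0, 0), 2.0)
    return cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```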

3.2. Demographic Features Extraction

Since we aimed to extract asymmetric facial features for demographic estimation (age group, gender, and race), it was necessary to have a face representation from which such features could be extracted. For this purpose, each preprocessed face image of size 128 × 128 pixels was divided into 128 × 64 pixel left and right half faces, such that the difference between the two halves was minimal. The flipped image of the left half face was subtracted from the right half face, resulting in a 128 × 64 pixel left–right difference half face. The difference half-face image contains the asymmetric facial variations and was used to extract the demographic informative features. Figure 3 illustrates the extraction of a difference half face from a given preprocessed face image.
Figure 3. Extraction of difference half-face image.
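As a concrete illustration, a minimal NumPy sketch of this half-face differencing is given below; it assumes a vertical midline split of the 128 × 128 preprocessed face, whereas the paper chooses the split so that the difference between the two halves is minimal.

```python
import numpy as np

def difference_half_face(face_128x128):
    """Return the 128 x 64 left-right difference half face of a preprocessed face."""
    h, w = face_128x128.shape
    left, right = face_128x128[:, :w // 2], face_128x128[:, w // 2:]
    # Mirror the left half so both halves share the same orientation, then subtract;
    # the residual retains only the asymmetric facial variations.
    return right.astype(np.float32) - np.fliplr(left).astype(np.float32)
```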
To compute demographic informative features from the difference half-face image, we trained three convolutional neural networks, A, G, and R, for the age group, gender, and race classification tasks, respectively. More precisely, networks A, G, and R are CNNs that take difference half-face images as input and produce class labels through their softmax layers. Network A was trained for the age classification task, so that input face images are classified into one of four age groups. Similarly, networks G and R were trained for the binary classification tasks of gender and race, respectively. Each network consists of two convolutional layers (conv), each followed by batch normalization and max pooling. Two fully connected layers are placed at the end of each network: the first consists of 1024 units and the second acts as the output layer with the softmax function. The last layer of network A consists of four units, one per age group; the last fully connected layers of networks G and R contain two units each, for the binary classification of gender and race of the input image. Figure 4 illustrates the CNNs used for demographic estimation, including age group, gender, and race, for an arbitrary input difference face image.
Figure 4. Illustration of networks A, G, and R for demographic classification tasks.
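A minimal Keras sketch of one such network is shown below; the filter counts, kernel sizes, and activations are assumptions, since the text specifies only the layer types, the 1024-unit fully connected layer, and the softmax output sizes.

```python
# Sketch of one of the networks A, G, or R: two conv blocks with batch
# normalization and max pooling, a 1024-unit fully connected layer, and a
# softmax output. Filter counts and kernel sizes are assumed values.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_demographic_net(n_classes):
    """n_classes = 4 for network A (age groups), 2 for G (gender) and R (race)."""
    return models.Sequential([
        layers.Input(shape=(128, 64, 1)),            # difference half-face image
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(1024, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

net_A, net_G, net_R = (build_demographic_net(k) for k in (4, 2, 2))
```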

3.3. Experimental Results of Demographic Estimation

We performed age group, gender, and race classification on difference half-face images using simple CNN models. To this end, each difference face image of size h × w was passed through the pretrained networks A, G, and R, each containing two convolutional layers. The choice of simple CNN models was motivated by the abstract nature of the asymmetric facial information present in the difference half-face images. The output $L^p$ of a layer $p$ is an $h_p \times w_p \times f_p$ feature map, where $f_p$ is the number of filters in that convolutional layer. A set $S = \{ s_1^p, s_2^p, \ldots, s_{h_p w_p}^p \}$ of feature vectors, one per location of the feature map, was thus obtained from each of the networks A, G, and R. More precisely, we obtained three sets of feature vectors, $S_A = \{ s_{A,1}^p, s_{A,2}^p, \ldots, s_{A,h_p w_p}^p \}$, $S_G = \{ s_{G,1}^p, s_{G,2}^p, \ldots, s_{G,h_p w_p}^p \}$, and $S_R = \{ s_{R,1}^p, s_{R,2}^p, \ldots, s_{R,h_p w_p}^p \}$, for the age, gender, and race features, respectively. To avoid overfitting, we used a cross-validation strategy to obtain error-aware outputs from each network. Due to the significantly imbalanced race distribution in the MORPH II and FERET datasets, we performed binary race estimation between White and the other races (Black, Hispanic, Asian, and African-American). For each classification task, the results are reported in the form of a confusion matrix, as illustrated in the following subsections.
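For clarity, the set S simply collects the $f_p$-dimensional activation vector at every spatial location of the layer-p feature map, as in the small NumPy helper below; how the feature map is exported from the network is left open here and is an implementation detail.

```python
import numpy as np

def feature_vector_set(fmap_p):
    """Given a layer-p feature map of shape (h_p, w_p, f_p), return the set
    S = {s_1, ..., s_(h_p * w_p)} as an (h_p * w_p, f_p) array of location-wise vectors."""
    h_p, w_p, f_p = fmap_p.shape
    return fmap_p.reshape(h_p * w_p, f_p)
```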

3.3.1. Results on MORPH II Dataset

We used a subset consisting of 20,000 face images from the MORPH II dataset: 10,000 face images for training and 10,000 for testing, with age groups of 16–20, 21–30, 31–45, and 46–60+. The performance of the proposed method and Face++ [27] for age group, gender, and race estimation is reported in Table 2a–c in the form of confusion matrices. The results were computed using 5-fold cross-validation. Our method achieved an overall age group estimation accuracy of 94.50% with a standard deviation of 1.1, compared to the state-of-the-art Face++, which achieved an overall accuracy of 89.15% with a standard deviation of 1.5. In the case of Face++, subjects (particularly in the 46–60+ age range) are more often confused with younger age groups, mainly because facial make-up gives the subjects a younger appearance. In contrast, the asymmetric features used in the proposed method are difficult to mask with facial make-up, and hence subjects are less often confused with the corresponding younger age groups. The comparative results of the gender estimation task for the proposed approach and Face++ are shown in Table 2b. Our method achieved an overall accuracy of 82.35% with a standard deviation of 0.6, compared to Face++, which achieved an overall accuracy of 77.80% with a standard deviation of 0.4. An interesting observation is that the proposed approach misclassifies females more often than males. One possible explanation for this higher misclassification is the frequent extrinsic variation of female faces, such as facial make-up and varying eyebrow styles.
Table 2. Confusion matrices showing classification accuracies of the proposed and Face++ methods for (a) age group, (b) gender, and (c) race estimation tasks on MORPH II dataset.
The race estimation experiments conducted on the MORPH II dataset classify subjects as White or other races. The race estimation results are shown in Table 2c, showing the superior performance of our method compared to Face++. The proposed method achieved an overall accuracy of 80.21% with a standard deviation of 1.18, compared to Face++, which achieved an overall accuracy of 76.50% with a standard deviation of 1.2. It can be seen that the proposed approach is better at identifying the White race than the other races.

3.3.2. Results on FERET Dataset

We used 1196 frontal face images from the fa set of the FERET dataset: 598 images for training and 598 for testing in the demographic estimation tasks. The demographic estimation performance of the proposed approach and Face++ is reported in Table 3a–c in the form of confusion matrices. The proposed method gave an overall age group estimation accuracy of 83.88% with a standard deviation of 1.00 across 5-fold cross-validation. In contrast, Face++ achieved an overall age group accuracy of 81.52% for the same experimental protocol.
Table 3. Confusion matrices showing the classification accuracies of the proposed and the Face++ methods for (a) age group, (b) gender, and (c) race estimation accuracies on FERET dataset.
For the gender estimation task, the proposed method achieved an overall estimation accuracy of 82.72% with a standard deviation of 0.9, compared to 76.28% achieved by Face++. The misclassification rate of female subjects is greater than that of male subjects on the FERET dataset, similar to the trend observed on the MORPH II dataset.
Our method achieved an overall race estimation accuracy of 79.80% with a standard deviation of 0.9 across the 5 folds, compared to the race estimation accuracy of 74.56% achieved by Face++ for the same experimental protocol. The proposed approach gave a higher misclassification rate for the White race compared to the other races (26.00% vs. 14.39%), which can be attributed to the large within-group variability of the other races.

4. Recognition and Retrieval of Face Images across Aging Variations

The second part of this study deals with the demographic-assisted face recognition and retrieval approach. Our proposed method for face recognition and retrieval includes the following steps.
(i) A query face image is first matched against a gallery using deep CNN (dCNN) features. To extract these features, we used VGGNet [51]; in particular, the VGG-16 variant, which contains 16 layers with learnable weights: 13 convolutional and 3 fully connected layers. VGGNet uses 3 × 3 filters; stacking two 3 × 3 filters yields a 5 × 5 receptive field, simulating a larger filter while retaining the benefits of smaller filters. We selected VGGNet because of its good performance on the desired task with a relatively simple architecture. The matching stage returns the top k matches from the gallery for the query face image (a minimal code sketch of this matching step is given after Figure 5).
(ii) Demographic features (age, gender, and race) are extracted using the three CNNs A, G, and R.
(iii) The top k matched face images are then re-ranked using the gender, race, and age features, as shown in Figure 5.
Figure 5. The proposed demographic-assisted re-ranking pipeline.
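The sketch below illustrates step (i): extracting dCNN features and ranking the gallery by cosine similarity. An ImageNet-pretrained VGG-16 from Keras is used as a stand-in for the VGG-Face model of [51], and the chosen feature layer ("fc2") and preprocessing are assumptions.

```python
# Hypothetical dCNN feature extraction and top-k gallery matching sketch.
import numpy as np
import tensorflow as tf

vgg = tf.keras.applications.VGG16(weights="imagenet", include_top=True)
feature_extractor = tf.keras.Model(vgg.input, vgg.get_layer("fc2").output)

def dcnn_features(batch_rgb_224):
    """batch_rgb_224: (N, 224, 224, 3) float array of RGB face crops."""
    x = tf.keras.applications.vgg16.preprocess_input(batch_rgb_224.copy())
    feats = feature_extractor.predict(x, verbose=0)
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)   # unit length

def top_k_matches(query_feat, gallery_feats, k=50):
    """Cosine-similarity ranking of the gallery against one query feature."""
    sims = gallery_feats @ query_feat        # unit vectors -> cosine similarity
    order = np.argsort(-sims)[:k]
    return order, sims[order]
```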
We explain demographic-assisted re-ranking in the following steps:
(i) Re-ranking by gender features: The top k matched face images are re-ranked based on the gender-specific features, $S_G = \{ s_{G,1}^p, s_{G,2}^p, \ldots, s_{G,h_p w_p}^p \}$, of the query face image. This step refines the initial top k matches based on the gender of the query face image; the resulting images are called the gender re-ranked k matches.
(ii) Re-ranking by race features: The gender re-ranked face images are re-ranked again using the race-specific features, $S_R = \{ s_{R,1}^p, s_{R,2}^p, \ldots, s_{R,h_p w_p}^p \}$. This step refines the gender re-ranked k matches based on the race of the query face image; the resulting images are called the gender–race re-ranked k matches.
(iii) Re-ranking by age features: Finally, the gender–race re-ranked face images are re-ranked using the age-specific features, $S_A = \{ s_{A,1}^p, s_{A,2}^p, \ldots, s_{A,h_p w_p}^p \}$. This step refines the gender–race re-ranked matches based on the aging features of the query face image; the resulting images are called the gender–race–age re-ranked top k matches and form the final ranked output, as shown in Figure 5. (A minimal sketch of the full cascade follows this list.)
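The sketch below illustrates the three-stage cascade. It assumes each candidate and the query carry pooled gender, race, and age feature vectors (e.g., averages over $S_G$, $S_R$, and $S_A$); how the demographic similarity is combined with the previous ranking is not specified above, so a simple additive scoring scheme is used purely for illustration.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def demographic_rerank(top_k, query, weights=(1.0, 1.0, 1.0)):
    """top_k: list of dicts holding the dCNN match 'score' and pooled 'gender',
    'race', and 'age' feature vectors; query: dict with the same demographic keys.
    Each stage adds a weighted demographic similarity and re-sorts the list,
    yielding the gender, gender-race, and gender-race-age re-ranked matches in turn."""
    ranked = [dict(c) for c in top_k]           # avoid mutating the caller's list
    for cue, w in zip(("gender", "race", "age"), weights):
        for cand in ranked:
            cand["score"] += w * cosine(cand[cue], query[cue])
        ranked.sort(key=lambda c: c["score"], reverse=True)
    return ranked
```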
The complete block diagram of the proposed demographic-assisted face recognition and retrieval approach is shown in Figure 6.
Figure 6. The block diagram of the proposed face recognition and retrieval approach.

4.1. Evaluation

The performance of our method is evaluated in terms of identification accuracy and mean average precision (mAP), as illustrated in the following subsections.

4.1.1. Face Recognition Experiments

In the first evaluation, we used rank-1 identification accuracy as an evaluation metric for face recognition performance on MORPH II and FERET datasets, as described in the following experiments.
Face Recognition Experiments on MORPH II Dataset: In the case of the MORPH II dataset, we used 20,000 face images of 20,000 subjects as the gallery, while 20,000 older face images were used as the probe set. To extract gender, race, and aging features, the CNN models were trained on the gallery face images. The rank-1 recognition accuracies for the face identification experiments are reported in Table 4. We achieved a rank-1 recognition accuracy of 80.50% for the probe set when dCNN features were used to match the face images. Re-ranking face images with gender features resulted in a rank-1 identification accuracy of 84.81%. Similarly, gender–race re-ranking yielded a rank-1 identification accuracy of 89.00%. Finally, the proposed approach with gender–race–age re-ranking gave a rank-1 identification accuracy of 95.10%. We also show the cumulative match characteristic (CMC) curves for this series of experiments in Figure 7a.
Table 4. Rank-1 recognition accuracies of the proposed and existing methods.
Figure 7. Cumulative match characteristic (CMC) curves showing the identification performance of the proposed methods on (a) MORPH II (b) FERET dataset.
Face Recognition Experiments on FERET Dataset: In the case of the FERET dataset, we used the standard gallery and probe sets to evaluate the performance of our method. We used 1196 face images from the fa set as the gallery, while 956 face images from the dup I and dup II sets were used as the probe set. Gender, race, and age features were extracted by training the CNN models on the gallery set. The rank-1 recognition accuracies are reported in Table 4. Matching the probe and gallery face images with dCNN features yielded a rank-1 recognition accuracy of 82.00%. Gender re-ranking and gender–race re-ranking yielded rank-1 identification accuracies of 85.91% and 90.00%, respectively. The proposed approach with gender–race–age re-ranking yielded the highest rank-1 identification accuracy of 96.21%. The CMC curves for this series of experiments are shown in Figure 7b.
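The rank-1 accuracies and CMC curves reported above can be computed with a small helper like the one below; the data layout (a ranked list of gallery identities per probe) is an assumption.

```python
import numpy as np

def cmc_curve(ranked_gallery_ids, true_ids, max_rank=20):
    """ranked_gallery_ids[i]: gallery identities ranked for probe i (best first);
    true_ids[i]: the ground-truth identity of probe i. Returns the CMC curve,
    whose first entry is the rank-1 identification accuracy."""
    hits = np.zeros(max_rank)
    for ranked, true_id in zip(ranked_gallery_ids, true_ids):
        pos = np.where(np.asarray(ranked[:max_rank]) == true_id)[0]
        if pos.size:                       # count a hit at its first rank and beyond
            hits[pos[0]:] += 1
    return hits / len(true_ids)
```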

4.1.2. Comparison of Face Recognition Results with State-Of-The-Art

We compared the results of the proposed face-recognition approach with existing methods, including the score-space-based fusion approach presented in [10], age-assisted face recognition [38], CARC [45], GSM1 [52], and GSM2 [52]. The score-space-based approach presented in [10] uses facial-asymmetry-based features to recognize face images across aging variations without demographic estimation. Age-assisted face recognition [38] leverages age group estimates to enhance recognition accuracy across aging variations. CARC [45] is a data-driven coding framework, called cross-age reference coding, proposed for retrieving and recognizing face images across aging variations. The generalized similarity models (GSM1 and GSM2) retrieve face images by applying a cross-domain visual matching strategy and incorporating a similarity measure matrix into a deep architecture.
For the same experimental setup, the score-space-based fusion [10] achieved recognition accuracies of 72.40% on the MORPH II and 66.66% on the FERET dataset. The age-assisted face-recognition approach [38] achieved 85.00% on MORPH II and 78.60% on FERET. CARC achieved rank-1 identification accuracies of 84.11% and 85.98% on the MORPH II and FERET datasets, respectively. GSM1 yielded rank-1 identification accuracies of 83.33% and 85.00% on MORPH II and FERET, respectively. Similarly, GSM2 achieved rank-1 identification accuracies of 93.73% and 94.23% on MORPH II and FERET, respectively. We also analyze the error introduced by estimating the age group, gender, and race of probe images, compared to using the actual age group, gender, and race, when recognizing face images on both the MORPH II and FERET datasets (see row viii, Table 4).

4.1.3. Face Retrieval Experiments

In the second evaluation, mAP was used as the evaluation metric for the face retrieval performance of the proposed method. Following existing face-retrieval methods [33,53,54], for a given set of $p$ query face images, $Q = \{ y_{q_1}, y_{q_2}, \ldots, y_{q_p} \}$, and a gallery set with $N$ face images, the average precision for $y_{q_i}$ is defined as:

$$\mathrm{avgP}(y_{q_i}) = \sum_{j=1}^{N} P(y_{q_i}, j)\left[ C(y_{q_i}, j) - C(y_{q_i}, j-1) \right]$$

where $P(y_{q_i}, j)$ is the precision at the $j$-th position for $y_{q_i}$ and $C(y_{q_i}, j)$ is the recall at the same position, with $C(y_{q_i}, 0) = 0$. Finally, the mAP for the entire query set $Q$ is defined as:

$$\mathrm{mAP}(Q) = \frac{1}{|Q|} \sum_{i=1}^{p} \mathrm{avgP}(y_{q_i})$$
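These two formulas translate directly into code; the sketch below assumes each query's retrieval result is represented as a binary relevance list over the N ranked gallery images.

```python
import numpy as np

def average_precision(relevance):
    """relevance: 0/1 array over the N ranked gallery images for one query y_qi."""
    rel = np.asarray(relevance, dtype=float)
    ranks = np.arange(1, len(rel) + 1)
    precision = np.cumsum(rel) / ranks                    # P(y_qi, j)
    recall = np.cumsum(rel) / max(rel.sum(), 1.0)         # C(y_qi, j), with C(y_qi, 0) = 0
    return float(np.sum(precision * np.diff(np.concatenate(([0.0], recall)))))

def mean_average_precision(relevance_lists):
    """mAP over the query set Q."""
    return float(np.mean([average_precision(r) for r in relevance_lists]))
```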
Retrieval Experiments on MORPH II Dataset: We selected face images of 780 distinct subjects from the MORPH II dataset, with images acquired in the year 2007 as the query set, while images acquired in the years 2004, 2005, and 2006 formed three distinct subsets. The cosine similarity metric was used to calculate the matching scores between two given face images. The mAPs for face matching using linear scan, gender re-ranking, gender–race re-ranking, and the proposed approach are shown in Figure 8a for the three test sets containing face images acquired in the years 2004, 2005, and 2006. The results suggest that the proposed approach achieved the highest mAP on all three test sets compared to the gender re-ranking and gender–race re-ranking-based retrieval methods.
Figure 8. Face retrieval performance of the proposed and existing methods on (a) MORPH II (b) FERET dataset.
Retrieval Experiments on FERET Dataset: In the case of the FERET dataset, we used the fa set for training, while dup I and dup II were used as test sets. The mAPs for face matching using the linear scan, gender re-ranking, gender–race re-ranking, and the proposed approach are shown in Figure 8b for the dup I and dup II test sets. The results show that the proposed approach gave the highest mAP on both test sets compared to the gender re-ranking and gender–race re-ranking-based retrieval methods.

4.1.4. Comparison of Face Retrieval Results with State-Of-The-Art

The performance of our approach is compared with close competitors, including CARC [45], and generalized similarity models [52] (GSM1 and GSM2) in Figure 8a,b in terms of mAP for MORPH II and FERET test sets, respectively. It is evident that the proposed approach outperformed the existing methods. The superior performance can be attributed to the demographic-estimation-based re-ranking compared to the linear scan employed in CARC [45], GSM1 [52], and GSM2 [52].

6. Conclusions

The human face contains a number of age-specific features. Facial asymmetry is one such intrinsic facial feature and is a strong indicator of age group, gender, and race. In this work, we have proposed a demographic-assisted face recognition and retrieval approach. First, we estimated the facial-asymmetry-based age group, gender, and race of a query face image using CNNs. The demographic features were then used to re-rank face images. The experimental results suggest that, firstly, facial asymmetry is a strong indicator of age group, gender, and race. Secondly, the demographic features can be used to re-rank face images to achieve superior recognition accuracies; the proposed approach yields superior mAP values by matching a query face image against gallery face images of a specific gender, race, and age group. Thirdly, the deeply learned face features can be used to achieve superior face recognition and retrieval performance compared to handcrafted features. Finally, the study suggests that among the aging, gender, and race cues, the aging features play the most significant role in recognizing and retrieving age-separated face images, owing to their person-specific nature. The experimental results on two longitudinal datasets suggest that the proposed approach can compete with existing methods in recognizing and retrieving face images across aging variations. Future work may include exploring the role of other facial attributes in demographic estimation and face recognition.

Author Contributions

M.S. and T.S. conceived the idea and performed the analysis. M.S. and F.I. contributed to the write-up of the manuscript, performed the experiments, and prepared the relevant figures and tables. S.M. gave useful insights during the experimental analysis. H.T., U.S.Q., and I.R. provided their expertise in revising the manuscript. All authors were involved in preparing the final manuscript.

Acknowledgments

The authors are thankful to the sponsors of publicly available FERET and MORPH II datasets used in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mayes, E.; Murray, P.G.; Gunn, D.A.; Tomlin, C.C.; Catt, S.D.; Wen, Y.B.; Zhou, L.P.; Wang, H.Q.; Catt, M.; Granger, S.P. Environmental and lifestyle factors associated with perceived age in Chinese women. PLoS ONE 2010, 5, e15273. [Google Scholar] [CrossRef] [PubMed]
  2. Balle, D.S. Anatomy of Facial Aging. Available online: https://drballe.com/conditions-treatment/anatomy-of-facial-aging-2 (accessed on 28 January 2017).
  3. Zhuang, Z.; Landsittel, D.; Benson, S.; Roberge, R. Facial anthropometric differences among gender, ethnicity, and age groups. Ann. Occup. Hyg. 2010, 54, 391–402. [Google Scholar] [PubMed]
  4. Farkas, L.G.; Katic, M.J.; Forrest, C.R. International anthropometric study of facial morphology in various ethnic groups/races. J. Craniofac. Surg. 2005, 16, 615–646. [Google Scholar] [CrossRef] [PubMed]
  5. Ramanathan, N.; Chellappa, R.; Biswas, S. Computational methods for modeling facial aging: A survey. J. Vis. Lang. Comput. 2009, 20, 131–144. [Google Scholar] [CrossRef]
  6. Bruce, V.; Burton, A.M.; Hanna, E.; Healey, P.; Mason, O.; Coombs, A. Sex discrimination: How do we tell the difference between male and female faces? Perception 1993, 22, 131–152. [Google Scholar] [CrossRef] [PubMed]
  7. Little, A.C.; Jones, B.C.; Waitt, C.; Tiddeman, B.P.; Feinberg, D.R.; Perrett, D.I.; Apicella, C.L.; Marlowe, F.W. Symmetry is related to sexual dimorphism in faces: Data across culture and species. PLoS ONE 2008, 3. [Google Scholar] [CrossRef] [PubMed]
  8. Steven, W.; Randy, T. Facial masculinity and fluctuating asymmetry. Evol. Hum. Behav. 2003, 24, 231–241. [Google Scholar]
  9. Morrison, C.S.; Phillips, B.Z.; Chang, J.T.; Sullivan, S.R. The Relationship between Age and Facial Asymmetry. 2011. Available online: http://meeting.nesps.org/2011/80.cgi (accessed on 28 April 2017).
  10. Sajid, M.; Taj, I.A.; Bajwa, U.I.; Ratyal, N.I. The role of facial asymmetry in recognizing age-separated face images. Comput. Electr. Eng. 2016, 54, 255–270. [Google Scholar] [CrossRef]
  11. Fu, Y.; Huang, T.S. Human age estimation with regression on discriminative aging manifold. IEEE Trans. Multimed. 2008, 10, 578–584. [Google Scholar] [CrossRef]
  12. Lu, K.; Seshadri, K.; Savvides, M.; Bu, T.; Suen, C. Contourlet Appearance Model for Facial Age Estimation. 2011. Available online: https://pdfs.semanticscholar.org/bc82/a5bfc6e5e8fd77e77e0ffaadedb1c48d6ae4.pdf (accessed on 28 April 2017).
  13. Bekios-Calfa, J.; Buenaposada, J.M.; Baumela, L. Revisiting linear discriminant techniques in gender recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 858–864. [Google Scholar] [CrossRef] [PubMed]
  14. Wu, T.; Turaga, P.; Chellappa, R. Age estimation and face verification across aging using landmarks. IEEE Trans. Inf. Forensics Secur. 2012, 7, 1780–1788. [Google Scholar] [CrossRef]
  15. Hadid, A.; Pietikanen, M. Demographic classification from face videos using manifold learning. Neurocomputing 2013, 100, 197–205. [Google Scholar] [CrossRef]
  16. Guo, G.; Mu, G. Joint estimation of age, gender and ethnicity: CCA vs. PLS. In Proceedings of the 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Shanghai, China, 22–26 April 2013. [Google Scholar]
  17. Tapia, J.E.; Perez, C.A. Gender classification based on fusion of different spatial scale features selected by mutual information from histogram of LBP, intensity, and shape. IEEE Trans. Inf. Forensics Secur. 2013, 8, 488–499. [Google Scholar] [CrossRef]
  18. Choi, S.E.; Lee, Y.J.; Lee, S.J.; Park, K.R.; Kim, J.K. Age estimation using a hierarchical classifier based on global and local facial features. Pattern Recognit. 2011, 44, 1262–1281. [Google Scholar] [CrossRef]
  19. Geng, X.; Yin, C.; Zhou, Z.-H. Facial age estimation by learning from label distributions. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 35, 2401–2412. [Google Scholar] [CrossRef] [PubMed]
  20. Han, H.; Otto, C.; Liu, X.; Jain, A.K. Demographic estimation from face images: Human vs. machine performance. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 37, 1148–1161. [Google Scholar] [CrossRef] [PubMed]
  21. Hu, Z.; Wen, Y.; Wang, J.; Wang, M.; Hong, R.; Yan, S. Facial age estimation with age difference. IEEE Trans. Image Process. 2017, 26, 3087–3097. [Google Scholar] [CrossRef] [PubMed]
  22. Jadid, M.A.; Sheij, O.S. Facial age estimation under the terms of local latency using weighted local binary pattern and multi-layer perceptron. In Proceedings of the 4th International Conference on Control, Instrumentation, and Automation (ICCIA), Qazvin, Iran, 27–28 January 2016. [Google Scholar]
  23. Liu, K.-H.; Yan, S.; Kuo, C.-C.J. Age estimation via grouping and decision fusion. IEEE Trans. Inf. Forensics Secur. 2015, 10, 2408–2423. [Google Scholar] [CrossRef]
  24. Geng, X.; Zhou, Z.; Smith-Miles, K. Automatic age estimation based on facial aging patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 2234–2240. [Google Scholar] [CrossRef] [PubMed]
  25. Ling, H.; Soatto, S.; Ramanathan, N.; Jacobs, D. Face verification across age progression using discriminative methods. IEEE Trans. Inf. Forensics Secur. 2010, 5, 82–91. [Google Scholar] [CrossRef]
  26. Park, U.; Tong, Y.; Jain, A.K. Age-invariant face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 947–954. [Google Scholar] [CrossRef] [PubMed]
  27. Li, Z.; Park, U.; Jain, A.K. A discriminative model for age-invariant face recognition. IEEE Trans. Inf. Forensics Secur. 2011, 6, 1028–1037. [Google Scholar] [CrossRef]
  28. Yadav, D.; Vatsa, M.; Singh, R.; Tistarelli, M. Bacteria foraging fusion for face recognition across age progression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Portland, OR, USA, 23–28 June 2013. [Google Scholar]
  29. Sungatullina, D.; Lu, J.; Wang, G.; Moulin, P. Discriminative Learning for Age-Invariant Face Recognition. In Proceedings of the IEEE International Conference and Workshops on Face and Gesture Recognition, Shanghai, China, 22–26 April 2013. [Google Scholar]
  30. Ramanathan, N.; Chellappa, R. Modeling age progression in young faces. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 17–22 June 2006; pp. 387–394. [Google Scholar]
  31. Deb, D.; Best-Rowden, L.; Jain, A.K. Face recognition performance under aging. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  32. Machado, C.E.P.; Flores, M.R.P.; Lima, L.N.C.; Tinoco, R.L.R.; Franco, A.; Bezerra, A.C.B.; Evison, M.P.; Aure, M. A new approach for the analysis of facial growth and age estimation: Iris ratio. PLoS ONE 2017, 12. [Google Scholar] [CrossRef] [PubMed]
  33. Xu, C.; Liu, Q.; Ye, M. Age invariant face recognition and retrieval by coupled auto-encoder networks. Neurocomputing 2017, 222, 62–71. [Google Scholar] [CrossRef]
  34. Park, U.; Tong, Y.; Jain, A.K. Face recognition with temporal invariance: A 3D aging model. In Proceedings of the 8th IEEE International Conference on Automatic Face & Gesture Recognition, Amsterdam, The Netherlands, 17–19 September 2008. [Google Scholar]
  35. Yadav, D.; Singh, R.; Vatsa, M.; Noore, A. Recognizing age-separated face images: Humans and machines. PLoS ONE 2014, 9. [Google Scholar] [CrossRef] [PubMed]
  36. Best-Rowden, L.L.; Jain, A.K. Longitudinal study of automatic face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 148–162. [Google Scholar] [CrossRef] [PubMed]
  37. Cheong, Y.W.; Lo, L.J. Facial asymmetry: Etiology, evaluation and management. Chang Gung Med. J. 2011, 34, 341–351. [Google Scholar] [PubMed]
  38. Sajid, M.; Taj, I.A.; Bajwa, U.I.; Ratyal, N.I. Facial asymmetry-based age group estimation: Role in recognizing age-separated face images. J. Forensic Sci. 2018. [Google Scholar] [CrossRef] [PubMed]
  39. Lee, K.W.; Hong, H.G.; Park, K.R. Fuzzy system-based fear estimation based on symmetrical characteristics of face and facial feature points. Symmetry 2017, 9, 102. [Google Scholar] [CrossRef]
  40. Zhai, H.; Liu, C.; Dong, H.; Ji, Y.; Guo, Y.; Gong, S. Face verification across aging based on deep convolutional networks and local binary patterns. In International Conference on Intelligent Science and Big Data Engineering; Springer: Cham, Switzerland, 2015. [Google Scholar]
  41. Wen, Y.; Li, Z.; Qiao, Y. Age invariant deep face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  42. El Khiyari, H.; Wechsler, H. Face recognition across time lapse using convolutional neural networks. J. Inf. Secur. 2016, 7, 141–151. [Google Scholar] [CrossRef]
  43. Liu, L.; Xiong, C.; Zhang, H.; Niu, Z.; Wang, M.; Yan, S. Deep aging face verification with large gaps. IEEE Trans. Multimed. 2016, 18, 64–75. [Google Scholar] [CrossRef]
  44. Lu, J.; Liong, V.E.; Wang, G.; Moulin, P. Joint feature learning for face recognition. IEEE Trans. Inf. Forensics Secur. 2015, 10, 1371–1383. [Google Scholar] [CrossRef]
  45. Chen, B.-C.; Chen, C.-S.; Hsu, W.H. Face recognition and retrieval using cross-age reference coding with cross-age celebrity dataset. IEEE Trans. Multimed. 2015, 17, 804–815. [Google Scholar] [CrossRef]
  46. Chen, B.-C.; Chen, C.-S.; Hsu, W.H. Cross-Age Reference Coding for Age-Invariant Face Recognition and Retrieval. 2014. Available online: http://bcsiriuschen.github.io/CARC/ (accessed on 28 January 2017).
  47. Ricanek, K.; Tesafaye, T. MORPH: A longitudinal image database of normal adult age-progression. In Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR06), Southampton, UK, 10–12 April 2006. [Google Scholar]
  48. FERET Database. Available online: http://www.itl.nist.gov/iad/humanid/feret (accessed on 15 September 2014).
  49. Face++ API. Available online: http://www.faceplusplus.com (accessed on 30 January 2017).
  50. Han, H.; Shan, S.; Chen, X.; Gao, W. A comparative study on illumination preprocessing in face recognition. Pattern Recognit. 2013, 46, 1691–1699. [Google Scholar]
  51. Parkhi, O.M.; Vedaldi, A.; Zisserman, A. Deep face recognition. In Proceedings of the British Machine Vision Conference, Swansea, UK, 7–10 September 2015. [Google Scholar]
  52. Lin, L.; Wang, G.; Zuo, W.; Xiangchu, F.; Zhang, L. Cross-domain visual matching via generalized similarity measure and feature learning. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1089–1102. [Google Scholar] [CrossRef] [PubMed]
  53. Wu, Z.; Ke, Q.; Sun, J.; Shum, H.-Y. Scalable face image retrieval with identity-based quantization and multireference reranking. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 1991–2001. [Google Scholar] [PubMed]
  54. Jegou, H.; Douze, M.; Schmid, C. Product quantization for nearest neighbor search. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 117–128. [Google Scholar] [CrossRef] [PubMed]
