Article

Face Attribute Estimation Using Multi-Task Convolutional Neural Network

Graduate School of Information Sciences, Tohoku University, 6-6-05, Aramaki Aza Aoba, Sendai 9808579, Japan
* Authors to whom correspondence should be addressed.
J. Imaging 2022, 8(4), 105; https://doi.org/10.3390/jimaging8040105
Submission received: 8 February 2022 / Revised: 14 March 2022 / Accepted: 5 April 2022 / Published: 10 April 2022
(This article belongs to the Special Issue Intelligent Media Processing)

Abstract
Face attribute estimation can be used for improving the accuracy of face recognition, customer analysis in marketing, image retrieval, video surveillance, and criminal investigation. The major methods for face attribute estimation are based on Convolutional Neural Networks (CNNs) that treat face attribute estimation as a set of two-class classification problems. Although using one feature extractor per attribute would maximize estimation accuracy, in most cases a single feature extractor is shared across all face attributes for parameter efficiency. This paper proposes a face attribute estimation method using a Merged Multi-CNN (MM-CNN), which automatically optimizes the CNN structure for solving multiple binary classification problems so as to improve both parameter efficiency and accuracy in face attribute estimation. We also propose a parameter reduction method called Convolutionalization for Parameter Reduction (CPR), which removes all fully connected layers from MM-CNN. Through a set of experiments using the CelebA and LFW-a datasets, we demonstrate that MM-CNN with CPR estimates face attributes more efficiently than conventional methods in terms of both estimation accuracy and the number of weight parameters.

1. Introduction

Face recognition is one of the most attractive topics in biometrics and computer vision because of its convenience, hygiene, and low cost: face images can be acquired in a contactless manner without any special equipment [1]. These advantages have created great demand for face recognition as personal authentication for smartphones, security gates, payment services, communication robots, etc. Although the explosive development of Convolutional Neural Networks (CNNs) has dramatically improved the accuracy of face recognition, its accuracy is still significantly degraded by changes in pose, facial expression, motion, illumination, and resolution. To meet this demand, further improvements in performance have been investigated. There are two approaches to improving the performance of face recognition: a direct approach that improves the face recognition method itself and an indirect approach that improves performance by adding other factors to the face recognition method. In this paper, we focus on face attribute estimation, an indirect approach, in the sense that it can be used not only for improving the accuracy of face recognition but also for customer analysis in marketing, image retrieval, video surveillance, and criminal investigation [2,3].
A face has a wide variety of biological features, including age, gender, hair color, hairstyle, mouth size, nose height, etc. These facial features, called face attributes, cannot be used for personal identification on their own; however, they can be used together for rough personal identification. This use of biometric traits is known as soft biometrics, in contrast to hard biometrics, where a single biometric trait such as fingerprint, iris, or face can be used for personal identification. For example, the recognition accuracy of face recognition methods can be improved by combining general face features with face attributes [4,5]. The processing time of face recognition can be reduced by prescreening using face attributes.
Face attribute estimation can be regarded as a multiple binary classification problem, as shown in Figure 1; that is, it is the problem of estimating whether a face has or does not have each attribute. Some attributes, such as hair, take multiple names depending on color and shape, while others, such as age, are expressed numerically. To handle face attribute estimation as a binary classification problem, hair can, for example, be decomposed into several classes such as black hair, blond hair, brown hair, and gray hair, and age can be simplified to young. Face attribute estimation consists of three processes: face detection, feature extraction, and classification [3,6]. Among these, feature extraction is the most important, since it has the greatest impact on the estimation accuracy.
Traditional methods utilize hand-crafted features such as Local Binary Patterns (LBP) [7] for feature extraction. LBP-based methods can estimate attributes from a single face image, since they do not require any training process; however, their estimation accuracy is quite low, since LBP cannot handle a wide variety of face attributes. CNN-based approaches have recently become the most popular approach for face attribute estimation, since CNNs have made a significant impact on image recognition. Although one feature extractor should be used for each attribute to maximize the accuracy of attribute estimation, in most cases one feature extractor is shared to estimate all face attributes for parameter efficiency [2,8,9,10,11,12,13,14]. To achieve both high parameter efficiency and high estimation accuracy, it is necessary to design a CNN consisting of multiple layers, such as convolution and pooling layers, that extracts the optimal features for each attribute. Several methods have been proposed to improve the accuracy of face attribute estimation by appropriately sharing the layers of CNNs [2,13,14,15]. These methods share CNN layers based on manual grouping or clustering of face attributes. Manual grouping is not only time consuming but also arbitrary, and simple attribute clustering is not always effective for attribute estimation.
In this paper, we propose a method to automatically optimize CNN structures for solving multiple binary classification problems in order to improve the processing efficiency and accuracy of face attribute estimation. The basic structure of the CNN used in the proposed method, called Merged Multi-CNN (MM-CNN), consists of a large number of convolution blocks regularly arranged in the depth and width directions, which are connected to each other at each depth by merging layers. MM-CNN is automatically optimized for face attribute estimation by introducing trainable weight parameters into each merging layer between blocks. We also propose a parameter reduction method called Convolutionalization for Parameter Reduction (CPR), which removes all fully connected layers from MM-CNN. Through a set of experiments on two public datasets, the Large-scale CelebFaces Attributes dataset (CelebA) [9] and the Labeled Faces in the Wild-a dataset (LFW-a) [16], we demonstrate that MM-CNN can estimate face attributes with high accuracy using fewer weight parameters than conventional methods. This paper is a full version of our initial study [17] with a detailed description of the proposed method, a survey of recent works, and a performance comparison. The contributions of this paper can be summarized as follows:
  • We propose a novel CNN architecture, MM-CNN, specifically designed for multi-task processing; and
  • We also propose CPR, which significantly reduces the parameters of CNN by removing fully connected layers.

2. Related Work

The conventional methods for face attribute estimation are summarized in Table 1. These methods can be categorized into Support Vector Machine (SVM)-based, CNN-based, and other methods depending on the type of classifier. In the following, we give an overview of the conventional methods for each type of classifier.
The first type of method employs SVMs as classifiers; these are the earliest methods for face attribute estimation [6,8,9,10]. SVM is a machine learning method that determines decision boundaries for separating classes in feature space. Kumar et al. [6] proposed one of the best-known face attribute estimation methods using handcrafted local features. This method extracts pixel values from grayscale, RGB, and HSV color spaces, as well as edge magnitude and orientation, as features and classifies them into each face attribute using SVM. After this work, most methods have employed CNN-based feature extractors due to their excellent performance on image recognition. Zhang et al. [8] proposed Pose Aligned Networks for Deep Attribute modeling (PANDA), which consists of feature extraction by CNNs with poselet detection and attribute prediction by a linear SVM for each attribute. Liu et al. [9] proposed two CNN architectures: LNet for face localization and ANet for face attribute prediction with a linear SVM for each attribute. Zhong et al. [10] extracted features using FaceNet [18] or VGG-16 [19] and predicted attributes using a linear SVM.
The second type of method employs neural networks as classifiers [2,12,13,14,15,21,23,25], where most methods employ a single CNN that completes both feature extraction and classification as a multi-task CNN. Wang et al. [12] proposed a GoogLeNet-like network architecture consisting of three CNNs for face recognition, weather prediction, and location estimation. Face attributes are estimated from the concatenated features in the fully connected layers. Hand et al. [2] proposed a Multi-task deep Convolutional Neural Network (MCNN) with an AUXiliary network (MCNN-AUX). They separate the 40 face attributes into six or nine groups based on facial parts and extract features for each attribute group. An auxiliary network, which makes the final attribute estimation from the outputs of the multi-task CNN, is added. Cao et al. [15] proposed a Partially Shared Multi-task CNN (PS-MCNN). They separate the 40 face attributes into four groups (upper, middle, lower, and whole image) based on the position of each attribute in the face. PS-MCNN aggregates the features extracted by the network for each group and estimates the attributes using a classifier consisting of fully connected layers. Gao et al. [13] proposed three small multi-task CNNs: ATNet, ATNet_G, and ATNet_GT. Although these approaches are similar to MCNN, the CNNs are designed according to multiple clusters obtained by classifying face attributes using the k-means algorithm. Han et al. [14] proposed a multi-label classification method using original labels determined by their own rule in light of the correlation among face attributes. They separate the attributes into eight groups, one related to the whole face and seven related to individual facial parts, and design a special classifier architecture with one output for each group. Fukui et al. [21] proposed the Attention Branch Network (ABN), which is a general-purpose CNN with attention to features. ABN consists of two branches: an attention branch for generating a visualization map and a perception branch for classification. They demonstrated that the attention mechanism with a visualization map is effective for estimating face attributes. Bhattarai et al. [23] proposed a new loss function based on continuous labels generated by word2vec [24] from the textual labels of the 40 face attributes. Chen et al. [25] proposed a Hard Parameter Sharing-Channel Split network (HPS-CS) consisting of normal and group convolution layers.
The third type of method employs other classifiers [11,27,28]. Huang et al. proposed Large Margin Local Embedding (LMLE)-kNN [11] and Cluster-based LMLE (CLMLE) [27]. They focused on the class imbalance of face attribute labels and proposed a learning method that takes into account the distance between small clusters generated for each class. In LMLE-kNN and CLMLE, DeepID2 [26] and a ResNet-like CNN [29] are used for feature extraction, respectively. Ehrlich et al. [28] proposed Multi-Task Restricted Boltzmann Machines (MT-RBMs) with Principal Component Analysis (PCA).
Our approach is similar to MCNN [2], PS-MCNN [15], and ATNet [13]. Although the relationships among facial attributes are hierarchical and complex, these methods rely on manual or non-hierarchical clustering to group facial attributes in advance. In contrast, our approach automatically optimizes the network parameters by learning the relationships among face attributes during the training of the CNN.

3. Fundamentals of Face Attributes

In this section, we give fundamental observations about the face attributes that we focus on in this paper. We use the 40 face attributes defined in CelebA [9], as shown in Table 2. CelebA is a large-scale dataset of face attributes that has been used for the training and performance evaluation of major face attribute estimation methods. In this paper, for convenience, each attribute is assigned an index number from 1 to 40, as shown in Table 2. Most of the attributes in CelebA are defined based on biological characteristics, while some are defined by whether the person wears ornaments such as glasses and earrings. These face attributes can be classified into groups based on the following relations: (i) commonality of facial parts, (ii) co-occurrence, and (iii) color, shape, and texture. Figure 2 shows an example illustrating the relationships among face attributes based on relations (i)–(iii). In the following, we discuss the details of each relation.
(i) Commonality of facial parts—For face attribute labels, the most obvious relationship is based on the organs, that is, the facial parts included in the face. For example, Black Hair (9) and Wavy Hair (34) are attributes related to “hair,” Arched Eyebrows (2) and Narrow Eyes (24) are attributes related to “eyes,” and Big Nose (8) and Pointy Nose (28) are attributes related to “nose.” Note that the attribute labels such as Male (21), Attractive (3), and Young (40) are assigned to “face” in Figure 2, since they are based on the features of the entire face.
(ii) Co-occurrence—Some attributes co-occur, since they can appear simultaneously. Figure 3 shows a color map visualizing the co-occurrence probabilities of the 40 face attributes in CelebA; a minimal sketch for computing such a map is given below, after relation (iii). The co-occurrence probability of two face attributes is the ratio of face images assigned both attributes. The face attributes with the highest co-occurrence probability are related to gender. Male (21) has a high probability of co-occurring with attributes such as 5 O’Clock Shadow (1), Bald (5), and Goatee (17), while female faces have a high probability of attributes such as Arched Eyebrows (2) and Heavy Makeup (19), where female means a face image without the Male (21) assignment. Exceptions are the co-occurrence of Smiling (32) with High Cheekbones (20) and Rosy Cheeks (30) for facial expressions, and of Young (40) with Rosy Cheeks (30) for age. The co-occurrence of face attributes shows a positive correlation in most cases, but there are some cases with a negative correlation. For example, Gray Hair (18), symbolizing aging, shows a strong negative correlation with Young (40) and 5 O’Clock Shadow (1). No Beard (25) and Sideburns (31) also show a strong negative correlation; we surmise that Sideburns (31) is labeled as part of the beard in CelebA. However, note that such correlations between face attributes depend on the dataset. In Figure 3, Blond Hair (10) and No Beard (25) have a high co-occurrence probability, while Black Hair (9) and No Beard (25) have a low co-occurrence probability. This indicates that most of the females in CelebA have blond hair rather than black hair. CelebA consists mainly of Western celebrities and a very small number of Asian celebrities; thus, the correlation of facial attributes strongly depends on ethnicity and gender.
(iii) Color, shape, and texture—Most face attributes are related to color, shape, or texture, except for abstract attributes such as age and gender. Color-related attributes include Black Hair (9), Blond Hair (10), Brown Hair (12), Gray Hair (18), Bags Under Eyes (4), Pale Skin (27), and Rosy Cheeks (30); shape-related attributes include Straight Hair (33), Wavy Hair (34), Chubby (14), and Oval Face (26); and texture-related attributes include Blurry (11), Eyeglasses (16), and Heavy Makeup (19). The 5 O’Clock Shadow (1) and No Beard (25) attributes are related to both color and shape.
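The co-occurrence map in Figure 3 can be reproduced directly from the binary annotations. The following is a minimal sketch, assuming `labels` is an N × 40 NumPy array with entry 1 when an attribute is assigned to an image and 0 otherwise; the function name is ours.

```python
import numpy as np

def cooccurrence_map(labels: np.ndarray) -> np.ndarray:
    """Entry (i, j) is the fraction of images annotated with both
    attribute i and attribute j (the diagonal holds marginal frequencies)."""
    labels = labels.astype(np.float64)
    return (labels.T @ labels) / labels.shape[0]

# Example: co-occurrence of Male (attribute 21) and Goatee (attribute 17),
# using 0-based array indices.
# p = cooccurrence_map(labels)[20, 16]
```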
It is important to consider the above relationships among face attributes when estimating face attributes using a multi-task CNN. In a multi-task CNN, sharing feature extractors among strongly related face attributes can improve the estimation accuracy and reduce computational cost and memory consumption. However, the relationships among face attributes are complex, and it is difficult to manually design an optimal network architecture that takes them into account. To address this problem, in this paper, we propose a method to automatically optimize a multi-task CNN for face attribute estimation.

4. Merged Multi-Convolutional Neural Network for Face Attribute Estimation

In this section, we describe the details of the Merged Multi-Convolutional Neural Network (MM-CNN) for face attribute estimation proposed in this paper.

4.1. Network Architecture of MM-CNN

We describe the network architecture of MM-CNN. First, we consider Multi-CNN, which estimates attributes by inputting a face image into a small CNN for each attribute, as shown in Figure 4a. Each small CNN is designed based on AlexNet [20], which consists of five convolution blocks and one fully connected layer. Note the following differences from the original AlexNet. In Conv1, the kernel size of the convolution is changed from 11 × 11 to 7 × 7. In Conv2, the stride of the convolution is changed from 2 to 1. All the normalization layers are replaced by batch normalization layers [30]. The number of output channels in Conv5 is set to 1000, and the output of Conv5 is input to a Global Average Pooling (GAP) [31] layer. In the case of estimating 40 attributes, 40 single CNNs are set up in parallel, as shown in Figure 4a, with each CNN estimating one attribute. In this paper, the number of CNNs set in parallel is called “parallels”. We then design MM-CNN based on Multi-CNN, as shown in Figure 4b. In MM-CNN, a unique layer called the merging layer is inserted after every convolution block except Conv5. All the convolution blocks are connected to the merging layer at each stage, and their outputs are merged individually. The details of the merging layer are described below, and a sketch of one branch is shown after this paragraph.
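For concreteness, the following PyTorch sketch shows one branch of Multi-CNN as described above. The kernel sizes, strides, and channel widths follow Table 3 (Section 4.3) with the channel hyperparameter c introduced there; the ReLU activations, the padding of Conv2, and the function names are our assumptions, since they are not fully specified here.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, kernel, stride, padding, pool=False):
    # Conv + BatchNorm (replacing AlexNet's normalization) + assumed ReLU.
    layers = [nn.Conv2d(in_ch, out_ch, kernel, stride, padding),
              nn.BatchNorm2d(out_ch),
              nn.ReLU(inplace=True)]
    if pool:
        layers.append(nn.MaxPool2d(kernel_size=3, stride=2))
    return nn.Sequential(*layers)

def single_branch(c: int = 30) -> nn.Sequential:
    """One of the 40 parallel CNNs; c is the channel hyperparameter."""
    return nn.Sequential(
        conv_block(3, c, kernel=7, stride=4, padding=2, pool=True),         # Conv1: 7x7 instead of 11x11
        conv_block(c, 2 * c, kernel=5, stride=1, padding=2, pool=True),     # Conv2: stride 1; padding assumed
        conv_block(2 * c, 2 * c, kernel=3, stride=1, padding=1),            # Conv3
        conv_block(2 * c, 2 * c, kernel=3, stride=1, padding=1),            # Conv4
        conv_block(2 * c, 1000, kernel=3, stride=1, padding=1, pool=True),  # Conv5: 1000 channels
        nn.AdaptiveAvgPool2d(1),   # Global Average Pooling
        nn.Flatten(),
        nn.Linear(1000, 2),        # two-class output; removed when CPR is used
    )
```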

4.2. Merging Layer in MM-CNN

The role of the merging layer is to merge multiple inputs into one, and a trainable weight parameter for merging is assigned to each input. The initial values of all the weight parameters are set to 1.0 unless otherwise specified. In the merging layer, the inputs are weighted before merging, similarly to a fully connected layer. We consider three types of merging of the weighted inputs, which we refer to as merging functions: Concat, Add, and Mean. An overview of each merging function is shown in Figure 5. In Concat, the weighted inputs are concatenated in the channel direction. In Mean, the weighted inputs are averaged for each channel. In Add, the weighted inputs are added for each channel; since the values of the output feature map would become extremely large if the weighted inputs were simply added, a softmax function is applied to the weights before weighting. Which merging function to use must be decided before training MM-CNN.
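A minimal PyTorch sketch of the merging layer is given below; the class name and the list-based input layout are our assumptions. Each instance merges the P parallel feature maps into one output with one trainable weight per input, initialized to 1.0, and applies a softmax over the weights only for Add.

```python
import torch
import torch.nn as nn

class MergingLayer(nn.Module):
    """Merges P parallel feature maps with one trainable weight per input."""
    def __init__(self, parallels: int, mode: str = "mean"):
        super().__init__()
        assert mode in ("concat", "add", "mean")
        self.mode = mode
        self.weights = nn.Parameter(torch.ones(parallels))  # initialized to 1.0

    def forward(self, inputs):  # inputs: list of P tensors, each (N, C, H, W)
        if self.mode == "add":
            # Softmax normalization keeps the summed feature map from
            # growing extremely large (Section 4.2).
            w = torch.softmax(self.weights, dim=0)
        else:
            w = self.weights
        weighted = [wi * x for wi, x in zip(w, inputs)]
        if self.mode == "concat":
            return torch.cat(weighted, dim=1)            # concatenate along channels
        out = torch.stack(weighted, dim=0).sum(dim=0)    # channel-wise sum
        return out / len(inputs) if self.mode == "mean" else out
```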

4.3. Convolutionalization for Parameter Reduction (CPR)

MM-CNN consists of the same number of CNNs as attributes; thus, it has a huge number of weight parameters. The larger the CNN, the higher its performance may be, but also the higher its computational cost and memory consumption. It is not practical to use such large CNNs given the limited computational resources available on devices such as cell phones and PCs. Therefore, we introduce two approaches to reduce the number of weight parameters to be trained in MM-CNN.
The first approach is to control the number of output channels in the convolution blocks, which strongly affects the number of weight parameters of MM-CNN. We therefore introduce a hyperparameter c for the number of output channels in the convolution blocks. Note that the number of output channels of Conv5 is independent of c. The larger c is, the larger the number of weight parameters and, hence, the larger the scale of MM-CNN. Table 3 shows the configuration of one of the CNNs composing MM-CNN when c is introduced in the output channels of the convolution blocks.
The second approach is to reduce the number of weight parameters by eliminating the fully connected layers without sacrificing estimation accuracy. Early CNNs such as AlexNet [20] and VGG [19] used three fully connected layers in the classifier, as shown in Figure 6a, where the number of outputs is set to 2 for two-class classification based on whether an attribute is present or not. In general, the number of weight parameters of a CNN increases significantly as the number of fully connected layers increases. Recent CNNs such as ResNet [22] and MobileNet [32] reduce the number of weight parameters by using Global Average Pooling (GAP) and one fully connected layer in the classifier, as shown in Figure 6b. The same configuration is used in MM-CNN. However, this configuration was proposed for ImageNet [33] with 1000-class classification. The weight parameters in the classifier can be further reduced, since face attribute estimation is a two-class classification, which is a simpler task than 1000-class classification. We assume that feature extraction in the convolution blocks already classifies the face image into two classes and propose Convolutionalization for Parameter Reduction (CPR), which eliminates all the fully connected layers in the classifier. The configuration of the classifier using CPR is shown in Figure 6c. The number of output channels of Conv5 is set to 2, and the feature map output from Conv5 is aggregated by GAP to obtain two channels of output. The final output is the score obtained by applying the softmax function without passing through a fully connected layer. Some CNNs without fully connected layers have already been proposed, such as FCN [34], U-Net [35], MobileNetV2 [36], and EfficientNet [37]. FCN and U-Net are designed for image segmentation and consist of an encoder and a decoder. The encoder is the same as the feature extractor of general CNNs for image classification, and the fully connected layers are replaced by a decoder including transposed convolution layers to output 2D or 3D matrices. MobileNetV2 and EfficientNet are designed for image classification; all their fully connected layers are replaced by 1 × 1 convolution layers for fast and parallel processing on Graphics Processing Units (GPUs). Unlike the above methods, CPR eliminates the fully connected layers without replacing them with other layers, reducing the number of weight parameters in the network. To the best of our knowledge, CPR is the first method to eliminate all the fully connected layers with the aim of reducing the number of weight parameters. The effect of CPR on the number of weight parameters is summarized in Table 4. CPR reduces the number of weight parameters in MM-CNN by 82.4% for Mean with c = 30 and by 97.8% for Concat with c = 3. The effect of CPR on reducing the number of parameters is the same for Add and Mean. The effect of CPR for Concat is more significant than that for Add and Mean, since many weight parameters are required in Conv5. A minimal sketch of the CPR classifier head is shown below.
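The following PyTorch sketch illustrates the CPR classifier head of Figure 6c for a single attribute; the class name is ours, and one such head is attached per attribute.

```python
import torch
import torch.nn as nn

class CPRHead(nn.Module):
    """Two-class head with no fully connected layer (Figure 6c)."""
    def __init__(self, in_channels: int):
        super().__init__()
        # Conv5 with only 2 output channels, one per class.
        self.conv5 = nn.Conv2d(in_channels, 2, kernel_size=3, padding=1)
        self.gap = nn.AdaptiveAvgPool2d(1)  # Global Average Pooling

    def forward(self, x):
        x = self.gap(self.conv5(x)).flatten(1)  # (N, 2)
        return torch.softmax(x, dim=1)          # attribute absent/present scores
```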

5. Experiments and Discussion

In this section, we describe the performance evaluation of the proposed method and ten conventional methods on two public datasets: CelebA [9] and LFW-a [16].

5.1. Dataset

CelebA (http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html, accessed on 5 September 2019)—This dataset consists of 202,599 face images of 10,177 identities, 40 binary facial attributes, and 5 landmark coordinates. In this experiment, we use face images aligned based on the coordinates of five landmarks: the left eye, the right eye, the nose, the left edge of the mouth, and the right edge of the mouth.
LFW-a (https://talhassner.github.io/home/projects/lfwa/, accessed on 17 November 2019)—This dataset consists of 13,233 face images of 5749 identities and 73 binary facial attributes. In the experiment, we use only the 40 facial attributes common to CelebA. We also use face images aligned based on the coordinates of three landmarks: the right eye, the left eye, and the center of the mouth.

5.2. Experimental Condition

For CelebA, 182,637 images are used for training and the remaining 19,962 images for testing, following the experimental protocol recommended by CelebA. For LFW-a, 6263 images are used for training and the remaining 6880 images for testing. For both datasets, 10% of the training data is used as validation data to check for overfitting. The cross-entropy loss is used as the loss function in training, and Nesterov Accelerated Gradient (NAG) [38] is used as the optimizer. The initial learning rate is set to 0.025, the maximum number of epochs to 50, and the batch size to 64. If the validation loss does not improve for two consecutive epochs, the learning rate is halved; if it does not improve for five consecutive epochs, training is terminated. The pixel values of input images are normalized to zero mean and unit variance, randomly horizontally flipped, and resized to 227 × 227 pixels. The weight parameters of all convolution and fully connected layers are initialized using He initialization [39]. Python 3.8.8 (https://www.python.org, accessed on 1 February 2022), PyTorch 1.8.1 [40], CUDA 10.2 (https://developer.nvidia.com/cuda-toolkit, accessed on 1 February 2022), and cuDNN 7.6.5 (https://developer.nvidia.com/cudnn, accessed on 1 February 2022) are used in the implementation. All the CNN models are trained and evaluated on an NVIDIA GeForce GTX 1080 Ti (https://www.nvidia.com/en-us/geforce/10-series/, accessed on 1 February 2022).
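The training setup above translates into roughly the following PyTorch configuration. This is a sketch under stated assumptions: PyTorch exposes NAG through the nesterov flag of SGD, the momentum value is our assumption since it is not stated in the paper, and `model` stands for an MM-CNN instance.

```python
import torch

# `model` is assumed to be an MM-CNN instance (not defined here).
optimizer = torch.optim.SGD(model.parameters(), lr=0.025,
                            momentum=0.9, nesterov=True)  # momentum assumed
criterion = torch.nn.CrossEntropyLoss()

# Halve the learning rate when the validation loss has not improved
# for two consecutive epochs.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=2)

# Training runs for at most 50 epochs with batch size 64 and stops
# early if the validation loss does not improve for five epochs.
```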
We compare the performance of MM-CNN with ten conventional methods: LNets + ANet [9], FaceNet [10], MT-RBMs [28], MCNN-AUX [2], ATNet_GT [13], PS-MCNN-LC [15], AlexNet + CSFL [14], ABN [21], VGG16 + Auglabel [23], and DeepID2 + CLMLE [27]. We also evaluate the performance of MM-CNN with the three merging functions: Concat, Mean, and Add. The performance of MM-CNN is evaluated for Concat at c = {1, 2, 3, 4} and for Mean and Add at c = {5, 10, 20, 30, 60}. Each method is evaluated by the estimation accuracy of each face attribute or the average over all attributes. In face attribute estimation, each attribute is estimated as to whether the input face image exhibits it or not. The estimation accuracy of each attribute is calculated by estimating the attribute for all face images in the test dataset and comparing the results to the ground-truth labels. Note that the average estimation accuracy is the average of the per-attribute accuracies after rounding each to the third decimal place, as computed by the sketch below. In the experimental results, the average estimation accuracy is presented except where the accuracy is given per attribute index.
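As a minimal sketch of this evaluation metric, assuming `preds` and `truth` are N × 40 binary NumPy arrays of predicted and ground-truth attribute labels:

```python
import numpy as np

def evaluate(preds: np.ndarray, truth: np.ndarray):
    """Per-attribute accuracy over the test set, and their average
    after rounding each accuracy to the third decimal place."""
    per_attribute = (preds == truth).mean(axis=0)
    per_attribute = np.round(per_attribute, 3)
    return per_attribute, per_attribute.mean()
```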

5.3. Evaluation of Merging Functions and CPR in MM-CNN

We first evaluate the impact of the merging functions and CPR in MM-CNN for each hyperparameter c. Table 5 summarizes the accuracy of face attribute estimation and the number of weight parameters for each dataset when changing the merging function, c, and CPR. “N/A” means that attribute estimation could not be performed because the maximum memory size of the GPU was exceeded. Figure 7 shows the trade-off between estimation accuracy for CelebA and the number of parameters when varying the merging function, c, and CPR. The horizontal axis indicates the number of weight parameters, and the vertical axis indicates the estimation accuracy for CelebA averaged over the 40 attributes. In MM-CNN without CPR, Mean and Add exhibit higher parameter efficiency than Concat. In MM-CNN with CPR, the number of parameters is much smaller than without CPR. Surprisingly, CPR slightly improves the accuracy of face attribute estimation in MM-CNN. This result suggests that a classifier with many weight parameters, such as fully connected layers, is not effective for a simple binary classification task. CPR is extremely effective in improving the parameter efficiency of MM-CNN, and it also makes optimization easier by reducing the complexity of MM-CNN. In particular, CPR improves the parameter efficiency of MM-CNN using Concat, since most of its weight parameters are in the fully connected layers, as shown in Table 4. The balance between the number of weight parameters and the accuracy of MM-CNN can be adjusted by changing the combination of the merging function, c, and CPR. MM-CNN using {Mean, c = 20, CPR} and {Concat, c = 4, CPR} achieves high parameter efficiency for CelebA and LFW-a, respectively.

5.4. Evaluation of the Number of Parallels in MM-CNN

As mentioned in Section 4.1, MM-CNN consists of a combination of single-task CNNs. Although the number of single-task CNNs in MM-CNN is set to 40, the same as the number of face attributes, the number of parallel networks can be changed. Through this experiment, we identify the number of parallels with high parameter efficiency in MM-CNN. Note that, regardless of the number of parallels, the network architecture from Conv5 to FC in Figure 4b is unchanged and outputs 40 scores. The accuracy of face attribute estimation for CelebA and the number of parameters when changing the number of parallels and c are summarized in Table 6 and Figure 8, where we use Mean and CPR for all settings. “N/A” in Table 6 again indicates that attribute estimation could not be performed because the maximum memory size of the GPU was exceeded. The parameter efficiency of MM-CNN with 20 and 30 parallels is almost the same as that with 40 parallels. These results indicate that the performance of MM-CNN can be maximized with a simple criterion: set the number of parallels equal to the number of face attributes. On the other hand, the parameter efficiency becomes lower for MM-CNNs with 60 or more parallels. In MM-CNN with Mean and Add, the feature maps extracted from each convolution block are added channel-wise. As the number of parallels increases, information is compressed by the addition of feature maps, resulting in a decrease in estimation accuracy. The estimation accuracy of MM-CNN with Concat would also be reduced, since the convolution block after merging compresses the information in a similar way.

5.5. Comparison with Multi-CNN

We compare the accuracy of face attribute estimation using Multi-CNN and MM-CNN to verify the effectiveness of the merging layer. Multi-CNN uses independent CNNs to estimate each attribute, as shown in Figure 4a. Table 7 shows the estimation accuracy of Multi-CNN and MM-CNN for CelebA when changing c and with/without CPR, where we use Mean for MM-CNN. Note that the presence of the merging layers has little effect on the number of weight parameters, except for MM-CNN with Concat. The experimental results show that MM-CNN achieves higher estimation accuracy than Multi-CNN in all settings. The merging layers thus improve the multi-task performance of CNNs with little increase in the number of weight parameters.

5.6. Comparison with Conventional Methods

We compare the performance of MM-CNN with ten conventional methods: LNets + ANet [9], FaceNet [10], MT-RBMs [28], MCNN-AUX [2], ATNet_GT [13], PS-MCNN-LC [15], AlexNet + CSFL [14], ABN [21], VGG16 + Auglabel [23], and DeepID2 + CLMLE [27]. In this experiment, we use MM-CNN with Mean, focusing on the three settings exhibiting high parameter efficiency in Table 5. We also use MM-CNN with Concat and CPR, which exhibited the highest estimation accuracy for LFW-a in Table 5. Table 8 and Table 9 show the experimental results for CelebA and LFW-a, respectively. Figure 9 shows the parameter efficiency of each method in face attribute estimation. Note that some conventional methods are listed and plotted only on one side in Table 8 and Table 9 and in Figure 9. The accuracy of each conventional method is taken from the values reported in its original paper. Methods whose per-attribute accuracy or accuracy on one of the datasets was not reported, such as ABN [21] and DeepID2 + CLMLE [27], are omitted from the corresponding entries of Table 8 and Table 9. Since the number of weight parameters cannot be evaluated for SVM-based methods such as LNets + ANet [9] and MT-RBMs [28], only methods without SVM are plotted in Figure 9.
On CelebA, the accuracy of MCNN-AUX [2] and MM-CNN (Mean, c = 10, CPR) is comparable, while the number of parameters of MM-CNN is 1/70 that of MCNN-AUX. Comparing ATNet_GT [13] and MM-CNN (Mean, c = 10, CPR), MM-CNN has 1% higher accuracy with 1/3 of the parameters. The accuracy of PS-MCNN-LC [15] and AlexNet + CSFL [14] is higher than that of MM-CNN, but the number of parameters of MM-CNN with CPR is much smaller than theirs. Overall, MM-CNN exhibited the best parameter efficiency among the compared methods. In addition, since MM-CNN is a network architecture for multi-task processing, the MM-CNN architecture and CPR can be combined with multi-label methods such as concatenating multiple attribute labels.

6. Conclusions

In this paper, we proposed a face attribute estimation method using Merged Multi-CNN (MM-CNN), which consists of multiple CNNs in parallel connected by merging layers. We also proposed a parameter reduction method called Convolutionalization for Parameter Reduction (CPR), which removes all fully connected layers from MM-CNN. Through a set of experiments on CelebA [9] and LFW-a [16], we demonstrated that MM-CNN can estimate face attributes with high accuracy using fewer weight parameters than conventional methods. Although the MM-CNN discussed in this paper is based on simple networks, the approach can be applied to recent, more complex networks. Future work will include extending and improving the accuracy of MM-CNN, applying it to practical applications, and comparing its performance with general multi-task learning methods on tasks other than face attribute estimation.

Author Contributions

Funding acquisition, K.I. and T.A.; Investigation, H.K.; Methodology, H.K. and K.I.; Project administration, T.A.; Resources, T.A.; Software, H.K.; Supervision, K.I.; Writing—original draft, H.K.; Writing—review and editing, K.I. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported, in part, by JSPS KAKENHI Grant Numbers 19H04106, 21H03457, and 21J15252.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. The CelebA dataset can be found here: http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html (accessed on 5 September 2019). The LFW-a dataset can be found here: https://talhassner.github.io/home/projects/lfwa/ (accessed on 17 November 2019).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, S.; Jain, A. Handbook of Face Recognition; Springer: Cham, Switzerland, 2011. [Google Scholar]
  2. Hand, E.; Chellappa, R. Attributes for improved attributes: A multi-task network utilizing implicit and explicit relationships for facial attribute classification. AAAI Conf. Artif. Intell. 2017, 31, 4068–4074. [Google Scholar]
  3. Scheirer, W.; Kumar, N.; Ricanek, K.; Belhumeur, P.; Boult, T. Fusing with context: A Bayesian approach to combining descriptive attributes. In Proceedings of the 2011 International Joint Conference on Biometrics, Washington, DC, USA, 11–13 October 2011. [Google Scholar]
  4. Jain, A.; Nandakumar, K.; Lu, X.; Park, U. Integrating faces, fingerprints, and soft biometric traits for user recognition. In Proceedings of the ECCV 2004 International Workshop, BioAW 2004, Prague, Czech Republic, 15 May 2004; pp. 259–269. [Google Scholar]
  5. Park, U.; Jain, A. Face Matching and Retrieval Using Soft Biometrics. IEEE Trans. Inf. Forensics Secur. 2010, 5, 406–415. [Google Scholar] [CrossRef] [Green Version]
  6. Kumar, N.; Berg, A.; Belhumeur, P.; Nayar, S. Describable visual attributes for face verification and image search. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 1962–1977. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Pietikäinen, M.; Hadid, A.; Zhao, G.; Ahonen, T. Computer Vision Using Local Binary Patterns; Springer: Cham, Switzerland, 2011. [Google Scholar]
  8. Zhang, N.; Paluri, M.; Ranzato, M.; Darrell, T.; Bourdev, L. PANDA: Pose aligned networks for deep attribute modeling. arXiv 2014, arXiv:1311.5591. [Google Scholar]
  9. Liu, Z.; Luo, P.; Wang, X.; Tang, X. Deep learning face attributes in the Wild. arXiv 2015, arXiv:1411.7766. [Google Scholar]
  10. Zhong, Y.; Sullivan, J.; Li, H. Face attribute prediction using off-the-shelf CNN features. arXiv 2016, arXiv:1602.03935. [Google Scholar]
  11. Huang, C.; Li, Y.; Loy, C.; Tang, X. Learning deep representations for imbalanced classification. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 5375–5383. [Google Scholar]
  12. Wang, J.; Cheng, Y.; Feris, R. Walk and learn: Facial attribute representation learning from egocentric video and contextual data. arXiv 2016, arXiv:1604.06433. [Google Scholar]
  13. Gao, D.; Yuan, P.; Sun, N.; Wu, X.; Cai, Y. Face attribute prediction with convolutional neural networks. In Proceedings of the 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO), Macau, China, 5–8 December 2017; pp. 1294–1299. [Google Scholar]
  14. Han, H.; Jain, A.; Wang, F.; Shan, S.; Chen, X. Heterogeneous face attribute estimation: A deep multi-task learning approach. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 2597–2609. [Google Scholar] [CrossRef] [Green Version]
  15. Cao, J.; Li, Y.; Zhang, Z. Partially shared multi-task convolutional neural network with local constraint for face attribute learning. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4290–4299. [Google Scholar]
  16. Wolf, L.; Hassner, T.; Taigman, Y. Effective Face Recognition by Combining Multiple Descriptors and Learned Background Statistics. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 1978–1990. [Google Scholar] [CrossRef]
  17. Kawai, H.; Ito, K.; Aoki, T. Merged Multi-CNN with parameter reduction for face attribute estimation. In Proceedings of the 2019 International Conference on Biometrics (ICB), Crete, Greece, 4–7 June 2019. [Google Scholar]
  18. Schroff, F.; Kalenichenko, D.; Philbin, J. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 815–823. [Google Scholar]
  19. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  20. Krizhevsky, A.; Sutskever, I.; Hinton, G. ImageNet classification with deep convolutional neural networks. In Proceedings of the Twenty-Sixth Annual Conference on Neural Information Processing Systems (NIPS), Stateline, NV, USA, 3–8 December 2012; pp. 1–9. [Google Scholar]
  21. Fukui, H.; Hirakawa, T.; Yamashita, T.; Fujiyoshi, H. Attention Branch Network: Learning of Attention Mechanism for Visual Explanation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 10705–10714. [Google Scholar]
  22. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  23. Bhattarai, B.; Bodur, R.; Kim, T. Auglabel: Exploiting word representations to augment labels for face attribute classification. In Proceedings of the ICASSP 2020—2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 2308–2312. [Google Scholar]
  24. Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G.; Dean, J. Distributed Representations of Words and Phrases and their Compositionality. Proc. Annu. Conf. Neural Inf. Process. Syst. 2013, 2, 3111–3119. [Google Scholar]
  25. Chen, X.; Wang, W.; Zheng, S. Research on face attribute recognition based on multi-task CNN Network. In Proceedings of the 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chongqing, China, 12–14 June 2020; pp. 1221–1224. [Google Scholar]
  26. Sun, Y.; Chen, Y.; Wang, X.; Tang, X. Deep learning face representation by joint identification-verification. Proc. Int. Conf. Neural Inf. Process. Syst. 2014, 2, 1988–1996. [Google Scholar]
  27. Huang, C.; Li, Y.; Loy, C.; Tang, X. Deep Imbalanced Learning for Face Recognition and Attribute Prediction. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2781–2794. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Ehrlich, M.; Shields, T.; Almaev, T.; Amer, M. Facial attributes classification using multi-task representation learning. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 47–55. [Google Scholar]
  29. Liu, W.; Wen, Y.; Yu, Z.; Li, M.; Raj, B.; Song, L. SphereFace: Deep Hypersphere Embedding for Face Recognition. arXiv 2017, arXiv:1704.08063. [Google Scholar]
  30. Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. Proc. Int. Conf. Mach. Learn. 2015, 37, 448–456. [Google Scholar]
  31. Lin, M.; Chen, Q.; Yan, S. Network In Network. arXiv 2013, arXiv:1312.4400. [Google Scholar]
  32. Howard, A.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  33. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  34. Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651. [Google Scholar] [CrossRef]
  35. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference Medical Image Computing and Computer Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  36. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
  37. Tan, M.; Le, Q. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. Proc. Int. Conf. Mach. Learn. 2019, 97, 6105–6114. [Google Scholar]
  38. Nesterov, Y. A method of solving a convex programming problem with convergence rate O(1/k²). Sov. Math. Dokl. 1983, 27, 372–376. [Google Scholar]
  39. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1026–1034. [Google Scholar]
  40. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. Pytorch: An imperative style, high-performance deep learning library. Proc. Adv. Neural Inf. Process. Syst. 2019, 32, 8026–8037. [Google Scholar]
Figure 1. A typical processing flow of face attribute estimation. Face attribute estimation consists of multiple two-class classification problems. First, a face region is detected from a face image, and features are extracted. Then, the features are input to a discriminator for each attribute, and the presence or absence of the attribute is estimated.
Figure 2. Example of illustrating the relationship among face attributes based on (i) commonality of facial parts, (ii) co-occurrence, and (iii) color, shape, and texture.
Figure 3. Color map visualizing the co-occurrence probabilities of 40 face attributes in CelebA.
Figure 4. Overview of network architectures for (a) Multi-CNN and (b) MM-CNN.
Figure 5. Overview of 3 types of merging function used in MM-CNN. For simplification, both the number of parallels and output channels of convolution layers are set to 2 in this figure.
Figure 6. Configuration of CNN classifiers for two-class classification: (a) VGG-16 [19], (b) MM-CNN, and (c) MM-CNN with CPR.
Figure 7. Comparison of the parameter efficiency of MM-CNN with different merge functions and c. The numbers near each point in the graph indicate the hyperparameter c.
Figure 8. Comparison of the parameter efficiency of MM-CNN using Mean and CPR with the different number of parallels for CelebA. The numbers near each point in the graph indicate the hyperparameter c.
Figure 9. Comparison of the parameter efficiency of face attribute estimation methods. The numbers near each point in the graph indicate the hyperparameter c.
Table 1. A summary of face attribute estimation methods.
Method | Feature Extraction | Classifier
Kumar et al. [6] | Pixel value (gray, RGB, and HSV), edge magnitude and orientation | One SVM for each attribute
Zhang et al. [8] | Pose Aligned Networks (4 Conv and 1 FC) for Deep Attribute modeling (PANDA) | One linear SVM for each attribute
Liu et al. [9] | LNet (5 Conv) for face localization and ANet (4 Conv) for face attribute prediction | One linear SVM for each attribute
Zhong et al. [10] | FaceNet [18] or VGG-16 [19] | Softmax classifier or one linear SVM for each attribute
Wang et al. [12] | Siamese network (2 Conv, 7 Inception, and 1 FC) | Softmax
Hand et al. [2] | Multi-task deep Convolutional Neural Network (MCNN) (3 Conv and 2 FC) | Softmax classifier with an AUXiliary network (AUX)
Gao et al. [13] | ATNet, ATNet_G, ATNet_GT (4 Conv and 3 FC) | Softmax
Cao et al. [15] | Partially Shared MCNN (PS-MCNN) (5 Conv and 2 FC) | Multi-label classifier
Han et al. [14] | AlexNet-like CNN [20] (5 Conv and 4 FC) with facial landmark detector | Multi-label classifier
Fukui et al. [21] | Attention Branch Network (ABN) based on ResNet-101 [22] | Softmax
Bhattarai et al. [23] | VGG-16 [19] and word2vec [24] | Multi-label classifier
Chen et al. [25] | Hard Parameter Sharing-Channel Split network (HPS-CS) based on AlexNet (9 Conv and 1 FC) | Softmax
Huang et al. [11] | DeepID2 [26] | Large Margin Local Embedding (LMLE)-kNN
Huang et al. [27] | ResNet-like CNN (64 Conv and 1 FC) with facial landmark detector | Cluster-based Large Margin Local Embedding (CLMLE)
Ehrlich et al. [28] | Multi-Task Restricted Boltzmann Machines (MT-RBMs) with PCA and facial landmark detector | Softmax
Table 2. Face attribute labels defined in CelebA [9].
Idx. | Attribute | Idx. | Attribute
1 | 5 O’Clock Shadow | 21 | Male
2 | Arched Eyebrows | 22 | Mouth Slightly Open
3 | Attractive | 23 | Mustache
4 | Bags Under Eyes | 24 | Narrow Eyes
5 | Bald | 25 | No Beard
6 | Bangs | 26 | Oval Face
7 | Big Lips | 27 | Pale Skin
8 | Big Nose | 28 | Pointy Nose
9 | Black Hair | 29 | Receding Hairline
10 | Blond Hair | 30 | Rosy Cheeks
11 | Blurry | 31 | Sideburns
12 | Brown Hair | 32 | Smiling
13 | Bushy Eyebrows | 33 | Straight Hair
14 | Chubby | 34 | Wavy Hair
15 | Double Chin | 35 | Wearing Earrings
16 | Eyeglasses | 36 | Wearing Hat
17 | Goatee | 37 | Wearing Lipstick
18 | Gray Hair | 38 | Wearing Necklace
19 | Heavy Makeup | 39 | Wearing Necktie
20 | High Cheekbones | 40 | Young
Table 3. Configuration of one CNN consisting of MM-CNN when c is introduced in the output channel of the convolution blocks.
Layer | Kernel | Stride | Padding | Output Shape
Conv 1 | 7 × 7 | 4 | 2 | 56 × 56 × c
BatchNorm 1 | – | – | – | 56 × 56 × c
MaxPool 1 | 3 × 3 | 2 | 0 | 28 × 28 × c
Conv 2 | 5 × 5 | 1 | 1 | 28 × 28 × (2 × c)
BatchNorm 2 | – | – | – | 28 × 28 × (2 × c)
MaxPool 2 | 3 × 3 | 2 | 0 | 12 × 12 × (2 × c)
Conv 3 | 3 × 3 | 1 | 1 | 12 × 12 × (2 × c)
BatchNorm 3 | – | – | – | 12 × 12 × (2 × c)
Conv 4 | 3 × 3 | 1 | 1 | 12 × 12 × (2 × c)
BatchNorm 4 | – | – | – | 12 × 12 × (2 × c)
Conv 5 | 3 × 3 | 1 | 1 | 12 × 12 × 1000
BatchNorm 5 | – | – | – | 12 × 12 × 1000
MaxPool 3 | 3 × 3 | 2 | 0 | 5 × 5 × 1000
GAP | – | – | – | 1 × 1 × 1000
FC | – | – | – | 2
Table 4. Effect of reducing the number of weight parameters by CPR, where “Ratio” indicates the ratio of the number of weight parameters in each conv block to the total number of weight parameters in the MM-CNN.
MM-CNN (Mean, c = 30)
Type | # of Params (w/o CPR) | Ratio | # of Params (w/ CPR) | Ratio
Conv1 | 176,400 | 0.7% | 176,400 | 3.8%
Conv2 | 1,800,000 | 6.9% | 1,800,000 | 39.0%
Conv3 | 1,296,000 | 4.9% | 1,296,000 | 28.1%
Conv4 | 1,296,000 | 4.9% | 1,296,000 | 28.1%
Conv5 | 21,600,000 | 82.3% | 43,200 | 1.0%
FC | 80,080 | 0.3% | – | –
Total | 26,248,480 | 100% | 4,611,600 | 100%

MM-CNN (Concat, c = 3)
Type | # of Params (w/o CPR) | Ratio | # of Params (w/ CPR) | Ratio
Conv1 | 17,640 | 0.1% | 17,760 | 0.9%
Conv2 | 720,000 | 0.8% | 720,240 | 37.0%
Conv3 | 518,400 | 0.6% | 518,640 | 26.6%
Conv4 | 518,400 | 0.6% | 518,640 | 26.6%
Conv5 | 86,400,000 | 97.8% | 172,800 | 8.9%
FC | 80,080 | 0.1% | – | –
Total | 88,254,520 | 100% | 1,947,240 | 100%
Table 5. Accuracy of face attribute estimation and the number of parameters on both datasets when changing the merging functions, c, and CPR of MM-CNN, where “N/A” means that attribute estimation cannot be done due to exceeding the maximum memory size of GPU. Best accuracy is shown with underline.
Merging Function | c | CelebA (w/o CPR) | LFW-a (w/o CPR) | Params (w/o CPR) | CelebA (w/ CPR) | LFW-a (w/ CPR) | Params (w/ CPR)
Concat | 1 | 91.30% | 84.90% | 29.12 M | 91.25% | 85.85% | 0.26 M
Concat | 2 | 91.53% | 85.90% | 58.51 M | 91.48% | 86.10% | 0.91 M
Concat | 3 | 91.55% | 85.50% | 88.30 M | 91.53% | 86.15% | 1.95 M
Concat | 4 | N/A | N/A | 118.48 M | 91.50% | 86.33% | 3.27 M
Mean | 5 | 90.40% | 81.70% | 3.87 M | 90.53% | 78.80% | 0.16 M
Mean | 10 | 91.15% | 82.65% | 7.87 M | 91.30% | 84.48% | 0.56 M
Mean | 20 | 91.45% | 83.45% | 16.60 M | 91.58% | 85.28% | 2.10 M
Mean | 30 | 91.53% | 84.95% | 26.30 M | 91.60% | 85.10% | 4.62 M
Mean | 60 | N/A | N/A | 61.26 M | 91.68% | 85.54% | 18.02 M
Add | 5 | 90.53% | 78.55% | 3.87 M | 90.68% | 83.35% | 0.16 M
Add | 10 | 91.15% | 82.60% | 7.87 M | 91.30% | 83.45% | 0.56 M
Add | 20 | 91.45% | 82.98% | 16.60 M | 91.55% | 85.25% | 2.10 M
Add | 30 | 91.50% | 82.58% | 26.30 M | 91.60% | 85.05% | 4.62 M
Add | 60 | N/A | N/A | 61.26 M | 91.70% | 85.15% | 18.02 M
Table 6. Estimation accuracy of MM-CNN with Mean and CPR when varying the number of parallels for CelebA.
# of Parallels | c | Accuracy | Params
20 | 10 | 91.18% | 0.29 M
20 | 20 | 91.45% | 1.06 M
20 | 40 | 91.60% | 4.06 M
30 | 10 | 91.28% | 0.43 M
30 | 20 | 91.55% | 1.58 M
30 | 40 | 91.63% | 6.08 M
40 | 10 | 91.30% | 0.57 M
40 | 20 | 91.58% | 2.10 M
40 | 40 | 91.60% | 8.10 M
60 | 10 | 91.25% | 0.85 M
60 | 20 | 91.50% | 3.15 M
60 | 40 | 91.65% | 12.14 M
80 | 10 | 91.25% | 1.13 M
80 | 20 | 91.53% | 4.20 M
80 | 40 | N/A | 16.18 M
Table 7. Estimation accuracy of Multi-CNN and MM-CNN with Mean for CelebA. ✓ indicates that CPR is used. Best accuracy is shown with underline.
CPR | c | Multi-CNN | MM-CNN
– | 5 | 89.73% | 90.40%
– | 10 | 90.33% | 91.15%
– | 20 | 90.80% | 91.45%
– | 30 | 90.90% | 91.53%
✓ | 5 | 90.00% | 90.53%
✓ | 10 | 90.45% | 91.30%
✓ | 20 | 90.95% | 91.58%
✓ | 30 | 91.03% | 91.60%
✓ | 60 | 91.15% | 91.68%
Table 8. Estimation accuracy of face attribute estimation methods for CelebA. Best accuracy is shown with underline.
Method | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
LNet + ANet [9] | 91 | 79 | 81 | 79 | 98 | 95 | 68 | 78 | 88 | 95
FaceNet [10] | 89 | 83 | 82 | 79 | 96 | 94 | 70 | 79 | 87 | 93
MT-RBMs [28] | 90 | 77 | 76 | 81 | 98 | 88 | 69 | 81 | 76 | 91
MCNN-AUX [2] | 95 | 83 | 83 | 85 | 99 | 96 | 71 | 85 | 90 | 96
ATNet_GT [13] | 92 | 81 | 81 | 84 | 99 | 96 | 71 | 83 | 89 | 95
PS-MCNN-LC [15] | 97 | 86 | 84 | 87 | 99 | 98 | 73 | 86 | 92 | 98
AlexNet + CSFL [14] | 95 | 86 | 85 | 99 | 99 | 96 | 88 | 92 | 85 | 91
MM-CNN (Concat, c = 4, CPR) | 94 | 84 | 83 | 85 | 99 | 96 | 72 | 85 | 90 | 96
MM-CNN (Mean, c = 60, CPR) | 95 | 84 | 83 | 86 | 99 | 96 | 72 | 85 | 91 | 96
MM-CNN (Mean, c = 30, CPR) | 95 | 84 | 83 | 86 | 99 | 96 | 72 | 85 | 90 | 96
MM-CNN (Mean, c = 10, CPR) | 95 | 84 | 83 | 86 | 99 | 96 | 71 | 84 | 90 | 96

Method | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20
LNet + ANet [9] | 84 | 80 | 90 | 91 | 92 | 99 | 95 | 97 | 90 | 87
FaceNet [10] | 87 | 79 | 87 | 88 | 89 | 99 | 94 | 95 | 91 | 87
MT-RBMs [28] | 95 | 83 | 88 | 95 | 96 | 96 | 96 | 97 | 85 | 83
MCNN-AUX [2] | 96 | 89 | 93 | 96 | 96 | 100 | 97 | 98 | 92 | 88
ATNet_GT [13] | 96 | 87 | 92 | 94 | 96 | 99 | 97 | 98 | 90 | 86
PS-MCNN-LC [15] | 98 | 91 | 95 | 98 | 98 | 100 | 98 | 99 | 93 | 89
AlexNet + CSFL [14] | 96 | 96 | 85 | 97 | 99 | 99 | 98 | 96 | 92 | 88
MM-CNN (Concat, c = 4, CPR) | 96 | 89 | 93 | 96 | 97 | 100 | 97 | 98 | 92 | 88
MM-CNN (Mean, c = 60, CPR) | 96 | 90 | 93 | 96 | 97 | 100 | 98 | 98 | 92 | 88
MM-CNN (Mean, c = 30, CPR) | 96 | 90 | 93 | 96 | 97 | 100 | 97 | 98 | 92 | 88
MM-CNN (Mean, c = 10, CPR) | 96 | 89 | 93 | 96 | 97 | 100 | 97 | 98 | 92 | 88

Method | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30
LNet + ANet [9] | 98 | 92 | 95 | 81 | 95 | 66 | 91 | 72 | 89 | 90
FaceNet [10] | 99 | 92 | 93 | 78 | 94 | 67 | 85 | 73 | 87 | 88
MT-RBMs [28] | 90 | 82 | 97 | 86 | 90 | 73 | 96 | 73 | 92 | 94
MCNN-AUX [2] | 98 | 94 | 97 | 87 | 96 | 76 | 97 | 77 | 94 | 95
ATNet_GT [13] | 97 | 93 | 97 | 86 | 94 | 76 | 97 | 75 | 93 | 95
PS-MCNN-LC [15] | 99 | 96 | 99 | 89 | 98 | 77 | 99 | 79 | 96 | 97
AlexNet + CSFL [14] | 98 | 94 | 97 | 90 | 97 | 78 | 97 | 78 | 94 | 96
MM-CNN (Concat, c = 4, CPR) | 98 | 94 | 97 | 88 | 96 | 76 | 97 | 78 | 94 | 95
MM-CNN (Mean, c = 60, CPR) | 98 | 94 | 97 | 88 | 96 | 77 | 97 | 78 | 94 | 96
MM-CNN (Mean, c = 30, CPR) | 98 | 94 | 97 | 88 | 96 | 76 | 97 | 78 | 94 | 96
MM-CNN (Mean, c = 10, CPR) | 98 | 94 | 97 | 88 | 96 | 76 | 97 | 77 | 94 | 95

Method | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 | 39 | 40 | Ave.
LNet + ANet [9] | 96 | 92 | 73 | 80 | 82 | 99 | 93 | 71 | 93 | 87 | 87.3
FaceNet [10] | 95 | 92 | 73 | 79 | 82 | 96 | 93 | 73 | 91 | 86 | 86.6
MT-RBMs [28] | 96 | 88 | 80 | 72 | 81 | 97 | 89 | 87 | 94 | 81 | 87.0
MCNN-AUX [2] | 98 | 93 | 84 | 84 | 90 | 99 | 94 | 87 | 97 | 88 | 91.3
ATNet_GT [13] | 97 | 92 | 80 | 82 | 89 | 99 | 93 | 86 | 96 | 88 | 90.2
PS-MCNN-LC [15] | 98 | 95 | 86 | 86 | 93 | 99 | 96 | 89 | 99 | 91 | 93.0
AlexNet + CSFL [14] | 98 | 94 | 85 | 87 | 91 | 99 | 93 | 89 | 97 | 90 | 92.6
MM-CNN (Concat, c = 4, CPR) | 98 | 93 | 84 | 84 | 91 | 99 | 94 | 88 | 97 | 89 | 91.5
MM-CNN (Mean, c = 60, CPR) | 98 | 93 | 84 | 84 | 91 | 99 | 94 | 88 | 97 | 89 | 91.7
MM-CNN (Mean, c = 30, CPR) | 98 | 93 | 84 | 84 | 91 | 99 | 94 | 88 | 97 | 89 | 91.6
MM-CNN (Mean, c = 10, CPR) | 98 | 93 | 83 | 83 | 90 | 99 | 94 | 87 | 97 | 88 | 91.3
Table 9. Estimation accuracy of face attribute estimation methods for LFW-a. Best accuracy is shown with underline.
Method | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
LNet + ANet [9] | 84 | 82 | 83 | 83 | 88 | 88 | 75 | 81 | 90 | 97
FaceNet [10] | 77 | 83 | 79 | 83 | 91 | 91 | 78 | 83 | 90 | 97
MCNN-AUX [2] | 77 | 82 | 80 | 83 | 92 | 90 | 79 | 85 | 93 | 97
PS-MCNN-LC [15] | 88 | 84 | 82 | 87 | 93 | 91 | 83 | 86 | 93 | 98
AlexNet + CSFL [14] | 80 | 86 | 84 | 92 | 93 | 77 | 81 | 80 | 83 | 91
MM-CNN (Concat, c = 4, CPR) | 78 | 81 | 81 | 83 | 93 | 92 | 79 | 84 | 92 | 97
MM-CNN (Mean, c = 60, CPR) | 77 | 80 | 80 | 82 | 93 | 91 | 77 | 83 | 92 | 97
MM-CNN (Mean, c = 30, CPR) | 76 | 80 | 80 | 82 | 92 | 90 | 76 | 83 | 92 | 97
MM-CNN (Mean, c = 10, CPR) | 76 | 79 | 80 | 81 | 92 | 90 | 74 | 83 | 92 | 97

Method | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20
LNet + ANet [9] | 74 | 77 | 82 | 73 | 78 | 95 | 78 | 84 | 95 | 88
FaceNet [10] | 88 | 76 | 83 | 75 | 80 | 91 | 83 | 87 | 95 | 88
MCNN-AUX [2] | 85 | 81 | 85 | 77 | 82 | 91 | 83 | 89 | 96 | 88
PS-MCNN-LC [15] | 87 | 82 | 86 | 78 | 87 | 93 | 84 | 91 | 97 | 89
AlexNet + CSFL [14] | 75 | 97 | 82 | 78 | 92 | 86 | 88 | 89 | 95 | 89
MM-CNN (Concat, c = 4, CPR) | 85 | 82 | 85 | 76 | 82 | 92 | 84 | 89 | 95 | 87
MM-CNN (Mean, c = 60, CPR) | 85 | 82 | 83 | 76 | 80 | 91 | 83 | 89 | 95 | 87
MM-CNN (Mean, c = 30, CPR) | 85 | 82 | 83 | 75 | 80 | 90 | 83 | 89 | 95 | 86
MM-CNN (Mean, c = 10, CPR) | 84 | 81 | 82 | 74 | 79 | 89 | 82 | 88 | 94 | 86

Method | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30
LNet + ANet [9] | 94 | 82 | 92 | 81 | 79 | 74 | 84 | 80 | 85 | 78
FaceNet [10] | 94 | 81 | 94 | 81 | 80 | 75 | 73 | 83 | 86 | 82
MCNN-AUX [2] | 94 | 84 | 93 | 83 | 82 | 77 | 93 | 84 | 86 | 88
PS-MCNN-LC [15] | 95 | 85 | 94 | 84 | 82 | 78 | 95 | 88 | 87 | 89
AlexNet + CSFL [14] | 93 | 86 | 95 | 82 | 81 | 75 | 91 | 84 | 85 | 86
MM-CNN (Concat, c = 4, CPR) | 94 | 82 | 94 | 82 | 81 | 79 | 91 | 85 | 87 | 87
MM-CNN (Mean, c = 60, CPR) | 93 | 80 | 93 | 79 | 79 | 76 | 91 | 83 | 86 | 88
MM-CNN (Mean, c = 30, CPR) | 93 | 79 | 93 | 78 | 80 | 76 | 90 | 83 | 87 | 87
MM-CNN (Mean, c = 10, CPR) | 93 | 79 | 93 | 75 | 80 | 75 | 90 | 82 | 86 | 85

Method | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 | 39 | 40 | Ave.
LNet + ANet [9] | 77 | 91 | 76 | 76 | 94 | 88 | 95 | 88 | 79 | 86 | 83.9
FaceNet [10] | 82 | 90 | 77 | 77 | 94 | 90 | 95 | 90 | 81 | 86 | 84.7
MCNN-AUX [2] | 83 | 92 | 79 | 82 | 95 | 90 | 95 | 90 | 81 | 86 | 86.3
PS-MCNN-LC [15] | 84 | 93 | 80 | 83 | 96 | 91 | 96 | 91 | 82 | 87 | 87.4
AlexNet + CSFL [14] | 80 | 92 | 79 | 80 | 94 | 92 | 93 | 91 | 81 | 87 | 86.0
MM-CNN (Concat, c = 4, CPR) | 84 | 91 | 79 | 82 | 94 | 91 | 95 | 90 | 83 | 85 | 86.3
MM-CNN (Mean, c = 60, CPR) | 83 | 90 | 79 | 81 | 94 | 90 | 94 | 89 | 82 | 85 | 85.5
MM-CNN (Mean, c = 30, CPR) | 82 | 90 | 78 | 80 | 94 | 90 | 94 | 89 | 81 | 84 | 85.1
MM-CNN (Mean, c = 10, CPR) | 82 | 89 | 75 | 80 | 94 | 90 | 94 | 89 | 81 | 84 | 84.5
