Abstract
Face attribute estimation can be used for improving the accuracy of face recognition, customer analysis in marketing, image retrieval, video surveillance, and criminal investigation. The major methods for face attribute estimation are based on Convolutional Neural Networks (CNNs) that solve face attribute estimation as a set of two-class classification problems. Although, ideally, one feature extractor should be used for each attribute to maximize estimation accuracy, in most cases a single feature extractor is shared across all face attributes for parameter efficiency. This paper proposes a face attribute estimation method using a Merged Multi-CNN (MM-CNN), which automatically optimizes the CNN structure for solving multiple binary classification problems to improve both parameter efficiency and accuracy in face attribute estimation. We also propose a parameter reduction method called Convolutionalization for Parameter Reduction (CPR), which removes all fully connected layers from MM-CNN. Through a set of experiments using the CelebA and LFW-a datasets, we demonstrate that MM-CNN with CPR achieves higher parameter efficiency, in terms of estimation accuracy versus the number of weight parameters, than conventional methods.
1. Introduction
Face recognition is one of the most attractive topics in biometrics and computer vision because of its convenience, hygiene, and low cost: face images can be acquired in a contactless manner without any special equipment []. Face recognition is therefore in great demand for personal authentication on smartphones, at security gates, in payment services, in communication robots, etc. Although the explosive development of Convolutional Neural Networks (CNNs) has dramatically improved the accuracy of face recognition, it still suffers from the problem that its accuracy is significantly degraded by changes in pose, facial expression, motion, illumination, and resolution. To meet this demand, further improvements in performance have been investigated. There are two approaches to improving the performance of face recognition: a direct approach that improves the face recognition method itself and an indirect approach that improves performance by adding other factors to the face recognition method. In this paper, we focus on face attribute estimation, which is an indirect approach, in the sense that it can be used not only for improving the accuracy of face recognition but also for customer analysis in marketing, image retrieval, video surveillance, and criminal investigation [,].
A face has a wide variety of biological features, including age, gender, hair color, hairstyle, mouth size, nose height, etc. These facial features, called face attributes, cannot be used for personal identification on their own; however, they can be used together for rough personal identification. This use of biometric traits is known as soft biometrics, in contrast to hard biometrics, where a single biometric trait such as fingerprint, iris, or face can be used for personal identification. For example, the accuracy of face recognition can be improved by combining general face features with face attributes [,]. In addition, the processing time of face recognition can be reduced by prescreening with face attributes.
Face attribute estimation can be regarded as a multiple binary classification problem, as shown in Figure 1; that is, it is the problem of estimating whether a face has or does not have each attribute. Some attributes, such as hair, take multiple values depending on color or shape, while others, such as age, are numerical. To treat face attribute estimation as a set of binary classification problems, hair, for example, can be decomposed into several classes such as black hair, blond hair, brown hair, and gray hair, and age can be simplified to the binary attribute young. Face attribute estimation consists of three processes: face detection, feature extraction, and classification [,]. Among these, feature extraction is the most important, since it has the greatest impact on estimation accuracy.
Figure 1.
A typical processing flow of face attribute estimation. Face attribute estimation consists of multiple two-class classification problems. First, a face region is detected from a face image, and features are extracted. Then, the features are input to a discriminator for each attribute, and the presence or absence of the attribute is estimated.
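The following is a minimal sketch (our illustration only, not code from the paper) of how the binary decomposition described above can be represented: multi-valued properties such as hair color become several binary attributes, and age is simplified to the binary attribute Young. The attribute names follow CelebA; the age threshold is purely hypothetical, since CelebA labels are annotated rather than thresholded.

```python
# Hypothetical raw annotations for one face image.
hair_color = "blond"
age = 23

# Decomposition into binary attributes, each estimated by its own classifier (Figure 1).
labels = {
    "Black_Hair": hair_color == "black",
    "Blond_Hair": hair_color == "blond",
    "Brown_Hair": hair_color == "brown",
    "Gray_Hair":  hair_color == "gray",
    "Young":      age < 40,   # hypothetical threshold for illustration
}
```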
Traditional methods utilize hand-crafted features such as Local Binary Patterns (LBP) [] for feature extraction. LBP-based methods can estimate attributes from a single face image, since they do not require any training process; however, their estimation accuracy is quite low, since LBP cannot handle the wide variety of face attributes. CNN-based approaches have recently become the most popular for face attribute estimation, since CNNs have had a significant impact on image recognition. Although one feature extractor should ideally be used for each attribute to maximize the accuracy of attribute estimation, in most cases a single feature extractor is shared to estimate all face attributes for parameter efficiency [,,,,,,,]. To achieve both high parameter efficiency and high estimation accuracy, it is necessary to design a CNN consisting of multiple layers, such as convolution and pooling layers, that extracts the optimal features for each attribute. Several methods have been proposed to improve the accuracy of face attribute estimation by appropriately sharing layers of CNNs [,,,]. These methods rely on manual grouping or clustering of face attributes and share CNN layers based on the resulting groups. Manual grouping is not only time consuming but also arbitrary, and simple attribute clustering is not always effective for attribute estimation.
In this paper, we propose a method to automatically optimize CNN structures for solving multiple binary classification problems in order to improve the processing efficiency and accuracy of face attribute estimation. The basic structure of the CNN used in the proposed method, called Merged Multi-CNN (MM-CNN), consists of a large number of convolution blocks arranged regularly in the depth and width directions, which are connected to each other at each depth by merging layers. MM-CNN is automatically optimized for face attribute estimation by introducing trainable weight parameters in each merging layer between blocks. We also propose a parameter reduction method called Convolutionalization for Parameter Reduction (CPR), which removes all fully connected layers from MM-CNN. Through a set of experiments on two public datasets, the Large-scale CelebFaces Attributes dataset (CelebA) [] and the Labeled Faces in the Wild-a dataset (LFW-a) [], we demonstrate that MM-CNN can estimate face attributes with high accuracy using fewer weight parameters than conventional methods. This paper is a full version of our initial study [] with a detailed description of the proposed method, a survey of recent works, and a performance comparison. The contributions of this paper can be summarized as follows:
- We propose a novel CNN architecture, MM-CNN, specifically designed for multi-task processing; and
- We also propose CPR, which significantly reduces the parameters of CNN by removing fully connected layers.
2. Related Work
The conventional methods for face attribute estimation are summarized in Table 1. These methods can be categorized as Support Vector Machine (SVM), CNN, and others depending on the type of classifier. In the following, we give an overview of the conventional methods for each type of classifier.
Table 1.
A summary of face attribute estimation methods.
The first type of method employs SVMs as classifiers; these are the earliest methods for face attribute estimation [,,,]. SVM is a machine learning method that determines decision boundaries for separating classes in a feature space. Kumar et al. [] proposed one of the best-known face attribute estimation methods using handcrafted local features. This method extracts pixel values from grayscale, RGB, and HSV color spaces, as well as edge magnitude and orientation, as features and classifies them into each face attribute using SVMs. After this work, most methods have employed CNN-based feature extractors due to their excellent performance on image recognition. Zhang et al. [] proposed Pose Aligned Networks for Deep Attribute modeling (PANDA), which consists of feature extraction by CNNs with poselet detection and attribute prediction by a linear SVM for each attribute. Liu et al. [] proposed two CNN architectures: LNet for face localization and ANet for face attribute prediction with a linear SVM for each attribute. Zhong et al. [] extracted features using FaceNet [] or VGG-16 [] and predicted attributes using a linear SVM.
The second type of method employs neural networks as classifiers [,,,,,,,], where most methods employ a single CNN that performs both feature extraction and classification as a multi-task CNN. Wang et al. [] proposed a GoogLeNet-like network architecture consisting of three CNNs for face recognition, weather prediction, and location estimation. Face attributes are estimated from features concatenated in the fully connected layers. Hand et al. [] proposed a Multi-task deep Convolutional Neural Network (MCNN) with an AUXiliary network (MCNN-AUX). They separate the 40 face attributes into six or nine groups based on facial parts and extract features for each attribute group. An auxiliary network, which finally estimates face attributes based on the estimation results of the multi-task CNN, is added. Cao et al. [] proposed a Partially Shared Multi-task CNN (PS-MCNN). They separate the 40 face attributes into four groups—upper, middle, lower, and whole image—based on the position of each attribute in the face. PS-MCNN aggregates the features extracted by the network for each group and estimates the attributes using a classifier consisting of fully connected layers. Gao et al. [] proposed three small multi-task CNNs: ATNet, ATNet_G, and ATNet_GT. Although these approaches are similar to MCNN, the CNNs are designed according to multiple clusters obtained by grouping face attributes with the k-means algorithm. Han et al. [] proposed a multi-label classification method using original labels determined by their own rule in light of the correlation among face attributes. They separate the attributes into eight groups—one related to the whole face and seven related to individual facial parts—and design a classifier architecture with an output for each group. Fukui et al. [] proposed the Attention Branch Network (ABN), a general-purpose CNN with attention to features. ABN consists of two branches: an attention branch for generating a visualization map and a perception branch for classification. They demonstrated that the attention mechanism with a visualization map is effective for estimating face attributes. Bhattarai et al. [] proposed a new loss function based on continuous labels, which are generated by word2vec [] from the 40 face attribute labels written in text. Chen et al. [] proposed a Hard Parameter Sharing-Channel Split network (HPS-CS) consisting of normal and group convolution layers.
The third type of method employs other classifiers [,,]. Huang et al. proposed Large Margin Local Embedding (LMLE)-kNN [] and Cluster-based LMLE (CLMLE) []. They focused on the class imbalance of face attribute labels and proposed a learning method that takes into account the distance between small clusters generated for each class. In LMLE-kNN and CLMLE, DeepID2 [] and a ResNet-like CNN [] are used for feature extraction, respectively. Ehrlich et al. [] proposed Multi-Task Restricted Boltzmann Machines (MT-RBMs) with Principal Component Analysis (PCA).
Our approach is similar to MCNN [], PS-MCNN [], and ATNet []. Although the relationships among facial attributes are hierarchical and complex, these methods group facial attributes in advance using manual or non-hierarchical clustering. In contrast, our approach automatically optimizes the network parameters by learning the relationships among face attributes during the training of the CNN.
3. Fundamentals of Face Attributes
In this section, we give fundamental observations about the face attributes that we focus on in this paper. We use the 40 face attributes defined in CelebA [], as shown in Table 2. CelebA is a large-scale dataset of face attributes that has been used for the training and performance evaluation of major face attribute estimation methods. In this paper, for convenience, each attribute is assigned an index number from 1 to 40, as shown in Table 2. Most of the attributes in CelebA are defined based on biological characteristics, while some are defined by whether the person wears accessories such as glasses and earrings. These face attributes can be classified into groups based on the following relations: (i) commonality of facial parts, (ii) co-occurrence, and (iii) color, shape, and texture. Figure 2 shows an example illustrating the relationships among face attributes based on relations (i)–(iii). In the following, we discuss the details of each relation.
Table 2.
Face attribute labels defined in CelebA [].
Figure 2.
Example illustrating the relationships among face attributes based on (i) commonality of facial parts, (ii) co-occurrence, and (iii) color, shape, and texture.
(i) Commonality of facial parts—For face attribute labels, the most obvious relationship is based on the organs, that is, the facial parts included in the face. For example, Black Hair (9) and Wavy Hair (34) are attributes related to “hair,” Arched Eyebrows (2) and Narrow Eyes (24) are attributes related to “eyes,” and Big Nose (8) and Pointy Nose (28) are attributes related to “nose.” Note that the attribute labels such as Male (21), Attractive (3), and Young (40) are assigned to “face” in Figure 2, since they are based on the features of the entire face.
(ii) Co-occurrence—Some attributes co-occur, since they can appear simultaneously. Figure 3 shows a color map visualizing the co-occurrence probabilities of the 40 face attributes in CelebA. The co-occurrence probability of two face attributes is the ratio of face images assigned both attributes. The face attributes with the highest co-occurrence probabilities are related to gender. Male (21) has a high probability of co-occurring with attributes such as 5 O’Clock Shadow (1), Bald (5), and Goatee (17), while female faces have a high probability of attributes such as Arched Eyebrows (2) and Heavy Makeup (19), where female means a face image without the Male (21) label. Exceptions are the co-occurrence of Smiling (32) with High Cheekbones (20) and Rosy Cheeks (30) for facial expressions, and of Young (40) with Rosy Cheeks (30) for age. The co-occurrence of face attributes shows a positive correlation in most cases, while some cases show a negative correlation. For example, Gray Hair (18), which symbolizes aging, shows a strong negative correlation with Young (40) and 5 O’Clock Shadow (1). No Beard (25) and Sideburns (31) also show a strong negative correlation. We surmise that Sideburns (31) is labeled as part of the beard in CelebA. Note, however, that such correlations between face attributes depend on the dataset. In Figure 3, Blond Hair (10) and No Beard (25) have a high co-occurrence probability, while Black Hair (9) and No Beard (25) have a low co-occurrence probability. This indicates that most of the females in CelebA have blond hair rather than black hair. CelebA consists mainly of Western celebrities and a very small number of Asian celebrities. Thus, the correlation of facial attributes strongly depends on ethnicity and gender.
Figure 3.
Color map visualizing the co-occurrence probabilities of 40 face attributes in CelebA.
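As a minimal sketch of how the co-occurrence probabilities visualized in Figure 3 can be computed (our illustration, not the authors' code), the following assumes the CelebA annotations are available as an (N, 40) binary matrix, with one row per image and one column per attribute.

```python
import numpy as np

def cooccurrence_matrix(labels: np.ndarray) -> np.ndarray:
    """P[i, j] = fraction of images labeled with both attribute i and attribute j."""
    labels = labels.astype(np.float64)
    return (labels.T @ labels) / labels.shape[0]

# Toy example with 3 images and 4 attributes; a real run would use the
# full (202599, 40) CelebA label matrix.
toy = np.array([[1, 0, 1, 0],
                [1, 1, 0, 0],
                [0, 1, 0, 1]])
print(cooccurrence_matrix(toy))
```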
(iii) Color, shape, and texture—Most face attributes are related to color, shape, or texture, except for abstract attributes such as age and gender. Color-related attributes include Black Hair (9), Blond Hair (10), Brown Hair (12), Gray Hair (18), Bags Under Eyes (4), Pale Skin (27), and Rosy Cheeks (30); shape-related attributes include Straight Hair (33), Wavy Hair (34), Chubby (14), and Oval Face (26); and texture-related attributes include Blurry (11), Eyeglasses (16), and Heavy Makeup (19). The 5 O’Clock Shadow (1) and No Beard (25) attributes are related to both color and shape.
It is important to consider the above relationships among face attributes when estimating face attributes with a multi-task CNN. In a multi-task CNN, sharing feature extractors among strongly related face attributes can improve estimation accuracy and reduce computational cost and memory consumption. The relationships among face attributes are complex, and it is difficult to manually design an optimal network architecture that takes them into account. To address this problem, in this paper, we propose a method to automatically optimize a multi-task CNN for face attribute estimation.
4. Merged Multi-Convolutional Neural Network for Face Attribute Estimation
In this section, we describe the details of the Merged Multi-Convolutional Neural Network (MM-CNN) for face attribute estimation proposed in this paper.
4.1. Network Architecture of MM-CNN
We describe the network architecture of MM-CNN. First, we consider Multi-CNN, which estimates attributes by inputting a face image into a small CNN for each attribute, as shown in Figure 4a. Each small CNN is designed based on AlexNet [], which consists of five convolution blocks and one fully connected layer. Note that the following points differ from the original AlexNet. In Conv1, the kernel size of the convolution is changed from 11 × 11 to 7 × 7. In Conv2, the stride of the convolution is changed from 2 to 1. All the normalization layers are replaced by batch normalization layers []. The number of output channels in Conv5 is set to 1000, and the output of Conv5 is fed into a Global Average Pooling (GAP) [] layer. In the case of estimating 40 attributes, 40 single CNNs are set up in parallel, as shown in Figure 4a, with each CNN estimating one attribute. In this paper, the number of CNNs set in parallel is called “parallels”. We then design MM-CNN based on Multi-CNN, as shown in Figure 4b. In MM-CNN, a unique layer called the merging layer is inserted after every convolution block except Conv5. All the convolution blocks are connected to the merging layer at each stage, and their outputs are merged individually. The details of the merging layer are described below.
Figure 4.
Overview of network architectures for (a) Multi-CNN and (b) MM-CNN.
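The following is a minimal sketch of one AlexNet-based branch of Multi-CNN, reconstructed under stated assumptions rather than taken from the authors' implementation. The paper specifies a 7 × 7 kernel in Conv1, stride 1 in Conv2, batch normalization in place of the original normalization layers, 1000 output channels in Conv5 followed by GAP, and a single fully connected layer; the channel widths of Conv1–Conv4, the remaining kernel sizes, strides, and pooling positions below follow the original AlexNet and are assumptions for illustration. In MM-CNN, the Conv1–Conv4 widths are controlled by the hyperparameter c introduced in Section 4.3.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout, kernel, stride=1, padding=0, pool=False):
    layers = [nn.Conv2d(cin, cout, kernel, stride=stride, padding=padding),
              nn.BatchNorm2d(cout),          # batch normalization replaces LRN
              nn.ReLU(inplace=True)]
    if pool:
        layers.append(nn.MaxPool2d(3, stride=2))
    return nn.Sequential(*layers)

class SingleAttributeCNN(nn.Module):
    """One branch of Multi-CNN; 40 such branches run in parallel in Figure 4a."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv1 = conv_block(3,   96, kernel=7, stride=4, pool=True)   # 7x7 instead of 11x11
        self.conv2 = conv_block(96, 256, kernel=5, stride=1, padding=2, pool=True)  # stride 1
        self.conv3 = conv_block(256, 384, kernel=3, padding=1)
        self.conv4 = conv_block(384, 384, kernel=3, padding=1)
        self.conv5 = conv_block(384, 1000, kernel=3, padding=1, pool=True)  # 1000 output channels
        self.gap = nn.AdaptiveAvgPool2d(1)                                  # Global Average Pooling
        self.fc = nn.Linear(1000, num_classes)                              # removed when CPR is used

    def forward(self, x):
        for block in (self.conv1, self.conv2, self.conv3, self.conv4, self.conv5):
            x = block(x)
        x = self.gap(x).flatten(1)
        return self.fc(x)
```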
4.2. Merging Layer in MM-CNN
The role of the merging layer is to merge multiple inputs into one, and a trainable weight parameter for merging is assigned to each input. The initial values of all the weight parameters are set to 1.0 unless otherwise specified. In the merging layer, the inputs are weighted before being merged, similarly to a fully connected layer. We consider three ways of merging the weighted inputs in the merging layer: Concat, Add, and Mean. In the following, we refer to these as merging functions. An overview of each merging function is shown in Figure 5. In Concat, the weighted inputs are concatenated in the channel direction. In Mean, the weighted inputs are averaged for each channel. In Add, the weighted inputs are added for each channel; since the values of the output feature map would become extremely large if the weighted inputs were simply added, a softmax function is applied to the weights before weighting. The merging function to use must be decided before training MM-CNN.
Figure 5.
Overview of the three types of merging functions used in MM-CNN. For simplicity, both the number of parallels and the number of output channels of the convolution layers are set to 2 in this figure.
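The following is a minimal sketch of one reading of the merging layer (our interpretation of Section 4.2, not the released implementation): one trainable scalar weight per parallel input, initialized to 1.0, and the three merging functions Concat, Add, and Mean. For Add, a softmax is applied to the weights before weighting, as described above. Whether each downstream block receives its own independently weighted merge or a shared one is a wiring detail of Figure 4b that this sketch leaves out for simplicity.

```python
import torch
import torch.nn as nn

class MergingLayer(nn.Module):
    def __init__(self, num_parallels: int, mode: str = "mean"):
        super().__init__()
        assert mode in ("concat", "add", "mean")
        self.mode = mode
        # One trainable merging weight per parallel branch, initialized to 1.0.
        self.weights = nn.Parameter(torch.ones(num_parallels))

    def forward(self, inputs):
        # inputs: list of P feature maps, each of shape (B, C, H, W)
        x = torch.stack(inputs, dim=0)                  # (P, B, C, H, W)
        if self.mode == "add":
            w = torch.softmax(self.weights, dim=0)      # normalize weights for Add
        else:
            w = self.weights
        x = x * w.view(-1, 1, 1, 1, 1)                  # weight each branch
        if self.mode == "concat":
            return torch.cat(list(x), dim=1)            # (B, P*C, H, W)
        if self.mode == "add":
            return x.sum(dim=0)                         # (B, C, H, W)
        return x.mean(dim=0)                            # Mean: (B, C, H, W)
```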
4.3. Convolutionalization for Parameter Reduction (CPR)
MM-CNN consists of the same number of CNNs as attributes; thus, it has a huge number of weight parameters. The larger the CNN, the higher its performance may be, but also the higher its computational cost and memory consumption. It is not practical to use such large CNNs given the limited computational resources available on devices such as cell phones and PCs. Therefore, we introduce two approaches to reduce the number of weight parameters to be trained in MM-CNN.
The first approach is to control the number of output channels in the convolution blocks. The number of output channels of the convolution blocks strongly affects the number of weight parameters of MM-CNN. Therefore, we introduce a hyperparameter c for the number of output channels in the convolution blocks. Note that the number of output channels of Conv5 is independent of c. The larger c is, the larger the number of weight parameters and thus the larger the scale of MM-CNN. Table 3 shows the configuration of one of the CNNs constituting MM-CNN when c is introduced in the output channels of the convolution blocks.
Table 3.
Configuration of one of the CNNs constituting MM-CNN when c is introduced in the output channels of the convolution blocks.
The second approach is to reduce the number of weight parameters by eliminating the fully connected layers without sacrificing estimation accuracy. Early CNNs such as AlexNet [] and VGG [] used three fully connected layers in the classifier, as shown in Figure 6a, where the number of outputs is set to 2 for two-class classification based on whether an attribute is present or not. In general, the number of weight parameters of a CNN increases significantly as the number of fully connected layers increases. Recent CNNs such as ResNet [] and MobileNet [] reduce the number of weight parameters by using Global Average Pooling (GAP) and one fully connected layer in the classifier, as shown in Figure 6b. The same configuration is used in MM-CNN. However, this configuration was designed for ImageNet [] with 1000-class classification. The weight parameters in the classifier can be further reduced, since face attribute estimation is a two-class classification, which is a simpler task than 1000-class classification. We assume that feature extraction in the convolution blocks already classifies the face image into two classes and propose Convolutionalization for Parameter Reduction (CPR), which eliminates all the fully connected layers in the classifier. The configuration of the classifier using CPR is shown in Figure 6c. The number of output channels of Conv5 is set to 2, and the feature map output from Conv5 is aggregated by GAP to obtain two channels of output. The final output is the score obtained by applying the softmax function without passing through a fully connected layer. Some CNNs without fully connected layers have already been proposed, such as FCN [], U-Net [], MobileNetV2 [], and EfficientNet []. FCN and U-Net are designed for image segmentation and consist of an encoder and a decoder. The encoder is the same as the feature extractor of general CNNs for image classification, and the fully connected layers are replaced by a decoder including transposed convolution layers to output 2D or 3D matrices. MobileNetV2 and EfficientNet are designed for image classification; all the fully connected layers are replaced by 1 × 1 convolution layers for fast and parallel processing on Graphics Processing Units (GPUs). Unlike the above methods, CPR eliminates fully connected layers without replacing them with other layers in order to reduce the number of weight parameters in the network. To the best of our knowledge, CPR is the first method to eliminate all the fully connected layers with the aim of reducing the number of weight parameters. The effect of CPR on the number of weight parameters is summarized in Table 4. CPR reduces the number of weight parameters in MM-CNN by 82.4% for Mean and by 97.8% for Concat, respectively. The effect of CPR on reducing the number of parameters is the same for Add and Mean. The effect of CPR for Concat is more significant than that for Add and Mean, since Concat requires many weight parameters in Conv5.
Figure 6.
Configuration of CNN classifiers for two-class classification: (a) VGG-16 [], (b) MM-CNN, and (c) MM-CNN with CPR.
Table 4.
Effect of reducing the number of weight parameters by CPR, where “Ratio” indicates the ratio of the number of weight parameters in each conv block to the total number of weight parameters in the MM-CNN.
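As a concrete illustration of the CPR classifier described above, the following is a minimal sketch under stated assumptions (not the authors' released code): Conv5 outputs only two channels for an attribute, GAP aggregates each channel into a scalar, and a softmax over the two values gives the attribute score, with no fully connected layer. The kernel size of Conv5 and the use of batch normalization in the block are assumptions.

```python
import torch
import torch.nn as nn

class CPRHead(nn.Module):
    """Two-class classifier head without any fully connected layer (Figure 6c)."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.conv5 = nn.Sequential(
            nn.Conv2d(in_channels, 2, kernel_size=3, padding=1),  # only 2 output channels
            nn.BatchNorm2d(2),
            nn.ReLU(inplace=True),
        )
        self.gap = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        x = self.conv5(x)                  # (B, 2, H, W)
        x = self.gap(x).flatten(1)         # (B, 2), one value per class
        return torch.softmax(x, dim=1)     # two-class score, no FC layer
```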
5. Experiments and Discussion
In this section, we describe the performance evaluation of the proposed method and ten conventional methods on two public datasets: CelebA [] and LFW-a [].
5.1. Dataset
CelebA (http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html, accessed on 5 September 2019)—This dataset consists of 202,599 face images of 10,177 identities, 40 binary facial attributes, and 5 landmark coordinates. In this experiment, we use face images aligned based on the coordinates of five landmarks: the left eye, the right eye, the nose, the left edge of the mouth, and the right edge of the mouth.
LFW-a (https://talhassner.github.io/home/projects/lfwa/, accessed on 17 November 2019)—This dataset consists of 13,233 face images of 5749 identities and 73 binary facial attributes. In the experiment, we use only the 40 facial attributes common to CelebA. We also use face images aligned based on the coordinates of three landmarks: the right eye, the left eye, and the center of the mouth.
5.2. Experimental Condition
For CelebA, 182,637 images and the remaining 19,962 images are used for training and testing, respectively. The splitting of the dataset follows the experimental protocol recommended by CelebA. For LFW-a, 6263 images and the remaining 6880 images are used for training and testing, respectively. For both datasets, 10% of the training data is used as validation data to monitor overfitting. The cross-entropy loss is used as the loss function in training, and Nesterov Accelerated Gradient (NAG) [] is used as the optimizer. The initial learning rate is set to 0.025, the maximum number of epochs is set to 50, and the batch size is set to 64. If the validation loss does not improve for two consecutive epochs, the learning rate is halved; if it does not improve for five consecutive epochs, training is terminated. The pixel values of input images are normalized to zero mean and unit variance, randomly flipped horizontally, and resized to 227 × 227 pixels. The weight parameters of all convolution layers and fully connected layers are initialized with He initialization []. Python 3.8.8 (https://www.python.org, accessed on 1 February 2022), Pytorch 1.8.1 [], CUDA 10.2 (https://developer.nvidia.com/cuda-toolkit, accessed on 1 February 2022), and cuDNN 7.6.5 (https://developer.nvidia.com/cudnn, accessed on 1 February 2022) are used in the implementation. All the CNN models are trained and evaluated on an NVIDIA GeForce GTX 1080 Ti (https://www.nvidia.com/en-us/geforce/10-series/, accessed on 1 February 2022).
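The following is a minimal sketch approximating the training configuration described above (not the authors' training script): NAG is realized as SGD with Nesterov momentum, the learning rate is halved when the validation loss plateaus, and training stops early after a longer stall. The momentum value (0.9), the per-channel normalization statistics, the stand-in model, and the exact patience semantics of the scheduler are assumptions for illustration.

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import transforms

# Preprocessing: resize to 227 x 227, random horizontal flip, normalize.
train_transform = transforms.Compose([
    transforms.Resize((227, 227)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),  # placeholder statistics
])

model = nn.Linear(10, 2)                      # stand-in for MM-CNN
for m in model.modules():                     # He initialization of conv/FC weights
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.kaiming_normal_(m.weight)

criterion = nn.CrossEntropyLoss()             # cross-entropy loss per attribute
optimizer = optim.SGD(model.parameters(), lr=0.025, momentum=0.9, nesterov=True)  # NAG
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=2)

best_loss, stall = float("inf"), 0
for epoch in range(50):                       # maximum of 50 epochs
    # ... training loop over mini-batches of size 64 goes here ...
    val_loss = 0.0                            # placeholder for the validation loss
    scheduler.step(val_loss)                  # halve the learning rate on a plateau
    if val_loss < best_loss:
        best_loss, stall = val_loss, 0
    else:
        stall += 1
        if stall >= 5:                        # stop after 5 epochs without improvement
            break
```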
We compare the performance of MM-CNN with that of ten conventional methods: LNets + ANet [], FaceNet [], MT-RBMs [], MCNN-AUX [], ATNet_GT [], PS-MCNN-LC [], AlexNet + CSFL [], ABN [], VGG16 + Auglabel [], and DeepID2 + CLMLE []. We also evaluate the performance of MM-CNN with the three merging functions: Concat, Mean, and Add. The performance of MM-CNN is evaluated for Concat at and for Mean and Add at , respectively. Each method is evaluated by the estimation accuracy of each face attribute or the average over the attributes. In face attribute estimation, each attribute is estimated as present or absent in the input face image. The estimation accuracy of each attribute is calculated by estimating the attribute for all face images in the test dataset and comparing the estimation results to the ground-truth labels. Note that the average estimation accuracy is the average of the per-attribute estimation accuracies after rounding each to the third decimal place. In the experimental results, the average estimation accuracy is reported unless the estimated accuracy for an individual attribute index is presented.
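As a minimal sketch of the evaluation protocol just described (our illustration, not the authors' code), per-attribute accuracy is the fraction of test images whose estimated label matches the ground truth, and the reported average is the mean over the 40 attributes after rounding each per-attribute accuracy to the third decimal place.

```python
import numpy as np

def attribute_accuracies(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """pred, gt: (N, 40) binary arrays; returns 40 per-attribute accuracies."""
    return (pred == gt).mean(axis=0)

def mean_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    per_attr = np.round(attribute_accuracies(pred, gt), 3)  # round to the 3rd decimal place
    return float(per_attr.mean())
```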
5.3. Evaluation of Merging Functions and CPR in MM-CNN
We first evaluate the impact of the merging functions and CPR in MM-CNN for each hyperparameter c. Table 5 summarizes the accuracy of face attribute estimation and the number of weight parameters for each dataset when changing the merging function, c, and CPR. “N/A” means that attribute estimation could not be performed because the maximum GPU memory size was exceeded. Figure 7 shows the trade-off between the estimation accuracy for CelebA and the number of parameters when varying the merging function, c, and CPR used in MM-CNN. The horizontal axis indicates the number of weight parameters, and the vertical axis indicates the average estimation accuracy over the 40 attributes for CelebA. In MM-CNN without CPR, Mean and Add exhibit higher parameter efficiency than Concat. In MM-CNN with CPR, the number of parameters is much smaller than that without CPR. Surprisingly, CPR slightly improves the accuracy of face attribute estimation in MM-CNN. This result suggests that a classifier with many weight parameters, such as fully connected layers, is not effective for a simple binary classification task. CPR is extremely effective in improving the parameter efficiency of MM-CNN, and it also makes optimization easier by reducing the complexity of MM-CNN. In particular, CPR can improve the parameter efficiency of MM-CNN using Concat, since most of its weight parameters are in the fully connected layers, as shown in Table 4. The balance between the number of weight parameters and the accuracy of MM-CNN can be adjusted by changing the combination of merging function, c, and CPR. MM-CNN using {Mean, , CPR} and {Concat, , CPR} achieves high parameter efficiency for CelebA and LFW-a, respectively.
Table 5.
Accuracy of face attribute estimation and the number of parameters on both datasets when changing the merging functions, c, and CPR of MM-CNN, where “N/A” means that attribute estimation cannot be done due to exceeding the maximum memory size of GPU. Best accuracy is shown with underline.
Figure 7.
Comparison of the parameter efficiency of MM-CNN with different merging functions and c. The numbers near each point in the graph indicate the hyperparameter c.
5.4. Evaluation of the Number of Parallels in MM-CNN
As mentioned in Section 4.1, MM-CNN consists of a combination of single-task CNNs. Although the number of single-task CNNs in MM-CNN is set to 40, which is the same as the number of face attributes, the number of parallel networks can be changed. Through this experiment, we investigate which number of parallels gives high parameter efficiency in MM-CNN. Note that, regardless of the number of parallels, the network architecture from Conv5 to FC in Figure 4b is unchanged and outputs 40 scores. The accuracy of face attribute estimation for CelebA and the number of parameters when changing the number of parallels and c are summarized in Table 6 and Figure 8, where we use Mean and CPR for all settings. Note that “N/A” in Table 6 indicates that attribute estimation was not performed because the maximum GPU memory size was exceeded. The parameter efficiency of MM-CNN with 20 and 30 parallels is almost the same as that with 40 parallels. These results indicate that the performance of MM-CNN can be maximized with the simple criterion of setting the number of parallels equal to the number of face attributes. On the other hand, the parameter efficiency becomes lower for MM-CNNs with more than 60 parallels. In MM-CNN with Mean and Add, the feature maps extracted from each convolution block are added channel by channel. As the number of parallels increases, the information is compressed by the addition of feature maps, resulting in a decrease in estimation accuracy. The estimation accuracy of MM-CNN with Concat is also reduced, since the convolution block following the merging layer compresses the information in a similar way.
Table 6.
Estimation accuracy of MM-CNN with Mean and CPR when varying the number of parallels for CelebA.
Figure 8.
Comparison of the parameter efficiency of MM-CNN using Mean and CPR with different numbers of parallels for CelebA. The numbers near each point in the graph indicate the hyperparameter c.
5.5. Comparison with Multi-CNN
We compare the accuracy of face attribute estimation using Multi-CNN and MM-CNN to verify the effectiveness of the merging layer. Multi-CNN uses an independent CNN to estimate each attribute, as shown in Figure 4a. Table 7 shows the estimation accuracy of Multi-CNN and MM-CNN for CelebA when changing c and with/without CPR, where we use Mean for MM-CNN. Note that the presence of the merging layers has little effect on the number of weight parameters, except for MM-CNN with Concat. The experimental results show that MM-CNN has higher estimation accuracy than Multi-CNN in all settings. The merging layers can improve the multi-task performance of CNNs with little increase in the number of weight parameters.
Table 7.
Estimation accuracy of Multi-CNN and MM-CNN with Mean for CelebA. ✓ shows that CPR is used. Best accuracy is shown with underline.
5.6. Comparison with Conventional Methods
We compare the performance of MM-CNN with ten conventional methods: LNets + ANet [], FaceNet [], MT-RBMs [], MCNN-AUX [], ATNet_GT [], PS-MCNN-LC [], AlexNet + CSFL [], ABN [], VGG16 + Auglabel [], and DeepID2 + CLMLE []. In this experiment, we use MM-CNN with Mean and focus on the three settings exhibiting high parameter efficiency in Table 5. We also use MM-CNN with Concat and CPR, which exhibited the highest estimation accuracy for LFW-a in Table 5. Table 8 and Table 9 show the experimental results for CelebA and LFW-a, respectively. Figure 9 shows the parameter efficiency of each method in face attribute estimation. Note that some conventional methods are listed and plotted on only one side of Table 8 and Table 9 and Figure 9. The accuracy of each conventional method is taken from the values reported in its paper. Methods whose per-attribute accuracy or accuracy for one of the datasets was not reported, such as ABN [] and DeepID2 + CLMLE [], are omitted from the corresponding entries in Table 8 and Table 9. Since the number of weight parameters cannot be evaluated for SVM-based methods such as LNets + ANet [] and MT-RBMs [], only methods without SVMs are plotted in Figure 9.
Table 8.
Estimation accuracy of face attribute estimation methods for CelebA. Best accuracy is shown with underline.
Table 9.
Estimation accuracy of face attribute estimation methods for LFW-a. Best accuracy is shown with underline.
Figure 9.
Comparison of the parameter efficiency of face attribute estimation methods. The numbers near each point in the graph indicate the hyperparameter c.
For CelebA, the accuracy of MCNN-AUX [] and MM-CNN (Mean, , CPR) is comparable, while the number of parameters of MM-CNN is 1/70 that of MCNN-AUX. Comparing ATNet_GT [] and MM-CNN (Mean, , CPR), MM-CNN (Mean, , CPR) has 1% higher accuracy and 1/3 the number of parameters. The accuracy of PS-MCNN-LC [] and AlexNet + CSFL [] is higher than that of MM-CNN, while the number of parameters of MM-CNN with CPR is much smaller than theirs. As mentioned above, MM-CNN exhibits the best parameter efficiency among the compared methods. In addition, since MM-CNN is a network architecture for multi-task processing, the MM-CNN architecture and CPR can be combined with multi-label methods such as those that concatenate multiple attribute labels.
6. Conclusions
In this paper, we proposed a face attribute estimation method using Merged Multi-CNN (MM-CNN), which consists of multiple CNNs in parallel connected by merging layers. We also proposed a parameter reduction method called Convolutionalization for Parameter Reduction (CPR), which removes all fully connected layers from MM-CNN. Through a set of experiments on CelebA [] and LFW-a [], we demonstrated that MM-CNN can estimate face attributes with high accuracy using fewer weight parameters than conventional methods. Although the MM-CNN discussed in this paper is based on simple networks, the approach can be applied to recent, more complex networks. Future work includes extending and improving the accuracy of MM-CNN, applying it to practical applications, and comparing its performance with general multi-task learning methods on tasks other than face attribute estimation.
Author Contributions
Funding acquisition, K.I. and T.A.; Investigation, H.K.; Methodology, H.K. and K.I.; Project administration, T.A.; Resources, T.A.; Software, H.K.; Supervision, K.I.; Writing—original draft, H.K.; Writing—review and editing, K.I. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported, in part, by JSPS KAKENHI Grant Numbers 19H04106, 21H03457, and 21J15252.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Publicly available datasets were analyzed in this study. The CelebA dataset can be found here: http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html (accessed on 5 September 2019). The LFW-a dataset can be found here: https://talhassner.github.io/home/projects/lfwa/ (accessed on 17 November 2019).
Conflicts of Interest
The authors declare no conflict of interest.
References
- Li, S.; Jain, A. Handbook of Face Recognition; Springer: Cham, Switzerland, 2011.
- Hand, E.; Chellappa, R. Attributes for improved attributes: A multi-task network utilizing implicit and explicit relationships for facial attribute classification. AAAI Conf. Artif. Intell. 2017, 31, 4068–4074.
- Scheirer, W.; Kumar, N.; Ricanek, K.; Belhumeur, P.; Boult, T. Fusing with context: A Bayesian approach to combining descriptive attributes. In Proceedings of the 2011 International Joint Conference on Biometrics, Washington, DC, USA, 11–13 October 2011.
- Jain, A.; Nandakumar, K.; Lu, X.; Park, U. Integrating faces, fingerprints, and soft biometric traits for user recognition. In Proceedings of the ECCV 2004 International Workshop, BioAW 2004, Prague, Czech Republic, 15 May 2004; pp. 259–269.
- Park, U.; Jain, A. Face Matching and Retrieval Using Soft Biometrics. IEEE Trans. Inf. Forensics Secur. 2010, 5, 406–415.
- Kumar, N.; Berg, A.; Belhumeur, P.; Nayar, S. Describable visual attributes for face verification and image search. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 1962–1977.
- Pietikäinen, M.; Hadid, A.; Zhao, G.; Ahonen, T. Computer Vision Using Local Binary Patterns; Springer: Cham, Switzerland, 2011.
- Zhang, N.; Paluri, M.; Ranzato, M.; Darrell, T.; Bourdev, L. PANDA: Pose aligned networks for deep attribute modeling. arXiv 2014, arXiv:1311.5591.
- Liu, Z.; Luo, P.; Wang, X.; Tang, X. Deep learning face attributes in the Wild. arXiv 2015, arXiv:1411.7766.
- Zhong, Y.; Sullivan, J.; Li, H. Face attribute prediction using off-the-shelf CNN features. arXiv 2016, arXiv:1602.03935.
- Huang, C.; Li, Y.; Loy, C.; Tang, X. Learning deep representations for imbalanced classification. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 5375–5383.
- Wang, J.; Cheng, Y.; Feris, R. Walk and learn: Facial attribute representation learning from egocentric video and contextual data. arXiv 2016, arXiv:1604.06433.
- Gao, D.; Yuan, P.; Sun, N.; Wu, X.; Cai, Y. Face attribute prediction with convolutional neural networks. In Proceedings of the 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO), Macau, China, 5–8 December 2017; pp. 1294–1299.
- Han, H.; Jain, A.; Wang, F.; Shan, S.; Chen, X. Heterogeneous face attribute estimation: A deep multi-task learning approach. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 2597–2609.
- Cao, J.; Li, Y.; Zhang, Z. Partially shared multi-task convolutional neural network with local constraint for face attribute learning. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4290–4299.
- Wolf, L.; Hassner, T.; Taigman, Y. Effective Face Recognition by Combining Multiple Descriptors and Learned Background Statistics. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 1978–1990.
- Kawai, H.; Ito, K.; Aoki, T. Merged Multi-CNN with parameter reduction for face attribute estimation. In Proceedings of the 2019 International Conference on Biometrics (ICB), Crete, Greece, 4–7 June 2019.
- Schroff, F.; Kalenichenko, D.; Philbin, J. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 815–823.
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556.
- Krizhevsky, A.; Sutskever, I.; Hinton, G. ImageNet classification with deep convolutional neural networks. In Proceedings of the Twenty-Sixth Annual Conference on Neural Information Processing Systems (NIPS), Stateline, NV, USA, 3–8 December 2012; pp. 1–9.
- Fukui, H.; Hirakawa, T.; Yamashita, T.; Fujiyoshi, H. Attention Branch Network: Learning of Attention Mechanism for Visual Explanation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 10705–10714.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Bhattarai, B.; Bodur, R.; Kim, T. Auglabel: Exploiting word representations to augment labels for face attribute classification. In Proceedings of the ICASSP 2020—2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 2308–2312.
- Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G.; Dean, J. Distributed Representations of Words and Phrases and their Compositionality. Proc. Annu. Conf. Neural Inf. Process. Syst. 2013, 2, 3111–3119.
- Chen, X.; Wang, W.; Zheng, S. Research on face attribute recognition based on multi-task CNN Network. In Proceedings of the 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chongqing, China, 12–14 June 2020; pp. 1221–1224.
- Sun, Y.; Chen, Y.; Wang, X.; Tang, X. Deep learning face representation by joint identification-verification. Proc. Int. Conf. Neural Inf. Process. Syst. 2014, 2, 1988–1996.
- Huang, C.; Li, Y.; Loy, C.; Tang, X. Deep Imbalanced Learning for Face Recognition and Attribute Prediction. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2781–2794.
- Ehrlich, M.; Shields, T.; Almaev, T.; Amer, M. Facial attributes classification using multi-task representation learning. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 47–55.
- Liu, W.; Wen, Y.; Yu, Z.; Li, M.; Raj, B.; Song, L. SphereFace: Deep Hypersphere Embedding for Face Recognition. arXiv 2017, arXiv:1704.08063.
- Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. Proc. Int. Conf. Mach. Learn. 2015, 37, 448–456.
- Lin, M.; Chen, Q.; Yan, S. Network In Network. arXiv 2013, arXiv:1312.4400.
- Howard, A.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861.
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
- Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520.
- Tan, M.; Le, Q. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. Proc. Int. Conf. Mach. Learn. 2019, 97, 6105–6114.
- Nesterov, Y. A method of solving a convex programming problem with convergence rate O(1/k²). Sov. Math. Dokl. 1983, 27, 372–376.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1026–1034.
- Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. Pytorch: An imperative style, high-performance deep learning library. Proc. Adv. Neural Inf. Process. Syst. 2019, 32, 8026–8037.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).