Special Issue "Deep Learning for Facial Informatics"

A special issue of Symmetry (ISSN 2073-8994).

Deadline for manuscript submissions: closed (31 October 2018).

Special Issue Editors

Prof. Gee-Sern Jison Hsu
Guest Editor
Artificial Vision Lab., Dept. Mech. Eng., National Taiwan University of Science and Technology, Taipei 10607, Taiwan
Interests: Computer Vision; Pattern Recognition
Dr. Radu Timofte
Guest Editor
Computer Vision Laboratory, Sternwartstrasse 7, ETH Zentrum, CH-8092 Zurich, Switzerland
Interests: Multi-Class Multi-View Object Detection; Recognition; Segmentation; Tracking; Sparse and Collaborative Representations; Machine Learning; Artificial Intelligence

Special Issue Information

Dear Colleagues,

Deep learning has been revolutionizing many fields of computer vision, and facial informatics is among the most prominent. Novel approaches and performance breakthroughs are regularly reported on existing benchmarks. As performance on existing benchmarks approaches saturation, larger and more challenging databases are being built and adopted as new benchmarks, further pushing the advancement of these technologies. In face recognition, for example, DeepFace and DeepID report nearly perfect, better-than-human performance on the LFW (Labeled Faces in the Wild) benchmark. More challenging benchmarks, e.g., the IARPA Janus Benchmark A (IJB-A) and MegaFace, are now accepted as the standards for evaluating new approaches. A similar evolution can be seen in other branches of facial informatics.

This special issue aims to delineate the state-of-the-art technologies in deep learning for facial informatics from multiple perspectives, including methods, architectures, databases, protocols and applications. Researchers from both academia and industry are cordially invited to share their latest advancements on all aspects of facial informatics. Both image- and video-based works are welcome. Topics for this special issue include, but are not limited to, the following: face recognition, face alignment, face detection and tracking, facial attributes (e.g., age, expression, gender, ethnicity, micro-expression, …), face hallucination, facial trait analysis.

All received submissions will be subject to peer review by experts in the field and will be evaluated based on their relevance to this special issue, level of novelty, significance of contribution, and the overall quality.

Prof. Gee-Sern Jison Hsu
Dr. Radu Timofte
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Symmetry is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

This charge will be waived for the five top-ranked accepted submissions (i.e., no fee for the best papers).

For paper templates, please refer to https://www.mdpi.com/journal/symmetry/instructions.

DATES

Submission: October 31, 2018
First Review: November 30, 2018
First Revision Due: December 31, 2018
Final Review: January 10, 2019
Final Revision Due: January 20, 2019
Acceptance Notification: January 31, 2019
Online Publication: Right after acceptance

Keywords

  • Deep Learning
  • Computer Vision
  • Face Recognition

Published Papers (5 papers)


Research

Open Access Article
Face Liveness Detection Using Thermal Face-CNN with External Knowledge
Symmetry 2019, 11(3), 360; https://doi.org/10.3390/sym11030360 - 10 Mar 2019
Cited by 1
Abstract
Face liveness detection is important for ensuring security. However, when faces are presented in photographs or on a display, it is difficult to detect a real face using face-shape features alone. In this paper, we propose a thermal face convolutional neural network (Thermal Face-CNN) that incorporates the external knowledge that a real person's facial temperature averages 36–37 °C. First, we compared red, green, and blue (RGB) images with thermal images to identify the data best suited for face liveness detection, using a multi-layer perceptron (MLP), a convolutional neural network (CNN), and a C-support vector machine (C-SVM). Next, we compared the performance of these algorithms and the newly proposed Thermal Face-CNN on a thermal image dataset. The experimental results show that thermal images are more suitable than RGB images for face liveness detection. We also found that Thermal Face-CNN outperforms the CNN, MLP, and C-SVM when precision is weighted slightly more heavily than recall in the F-measure.
(This article belongs to the Special Issue Deep Learning for Facial Informatics)
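The paper's central idea, a learned liveness score combined with a prior on live skin temperature, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the fusion weight, falloff, and temperature bounds are assumptions for demonstration.

```python
def temperature_prior(mean_temp_c, lo=36.0, hi=37.0, falloff=2.0):
    """Soft score in [0, 1] that peaks when the measured mean facial
    temperature lies in the typical live-skin range (values assumed)."""
    if lo <= mean_temp_c <= hi:
        return 1.0
    dist = min(abs(mean_temp_c - lo), abs(mean_temp_c - hi))
    return max(0.0, 1.0 - dist / falloff)

def fuse_liveness(cnn_score, mean_temp_c, w_prior=0.3):
    """Blend a CNN's liveness probability with the temperature prior;
    the weighting is a hypothetical choice, not the paper's."""
    return (1 - w_prior) * cnn_score + w_prior * temperature_prior(mean_temp_c)
```

A spoofed face shown on a cool display would score low on the prior even if the CNN were fooled, which is the intuition behind fusing the two signals.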

Open Access Article
Emotion Classification Using a Tensorflow Generative Adversarial Network Implementation
Symmetry 2018, 10(9), 414; https://doi.org/10.3390/sym10090414 - 19 Sep 2018
Cited by 1
Abstract
The detection of human emotions has applicability in various domains such as assisted living, health monitoring, domestic appliance control, real-time crowd behavior tracking, and emotional security. This paper proposes a new system for emotion classification based on a generative adversarial network (GAN) classifier. Generative adversarial networks have been widely used for generating realistic images, but their classification capabilities have been only sparingly exploited. One of the main advantages is that, by using the generator, we can extend our testing dataset and add more variety to each of the seven emotion classes we try to identify. Thus, the novelty of our study consists in increasing the number of classes from N to 2N (in the learning phase) by considering real and fake emotions. Facial key points are obtained from real and generated facial images, and the vectors connecting them to the facial center of gravity are used by the discriminator to classify an image into one of the 14 classes of interest (real and fake for each of seven emotions). As another contribution, real images from different emotional classes are used in the generation process, unlike the classical GAN approach, which generates images from simple noise arrays. Using the proposed method, our system can classify emotions in facial images regardless of gender, race, ethnicity, age, and face rotation. An accuracy of 75.2% was obtained on 7000 real images (14,000 when the generated images are also considered) from multiple combined facial datasets.
(This article belongs to the Special Issue Deep Learning for Facial Informatics)
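The N-to-2N labeling scheme described in the abstract (real vs. generated versions of each emotion) can be sketched as a simple label mapping. The emotion list and its ordering are assumptions for illustration, not the paper's.

```python
EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "neutral", "sadness", "surprise"]  # seven classes assumed

def to_2n_label(emotion_idx, is_real, n=len(EMOTIONS)):
    """Map (emotion, real/fake) to one of 2N discriminator classes:
    0..N-1 for real emotions, N..2N-1 for generated ('fake') ones."""
    return emotion_idx if is_real else n + emotion_idx

def from_2n_label(label, n=len(EMOTIONS)):
    """Recover (emotion index, is_real) from a 2N class label."""
    return (label % n, label < n)
```

The discriminator then trains on 14 targets instead of 7, and collapsing a prediction back to an emotion only requires the modulo step.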

Open Access Article
Accurate Age Estimation Using Multi-Task Siamese Network-Based Deep Metric Learning for Frontal Face Images
Symmetry 2018, 10(9), 385; https://doi.org/10.3390/sym10090385 - 06 Sep 2018
Cited by 1
Abstract
Recently, there have been many studies on the automatic extraction of facial information using machine learning. Age estimation from frontal face images is becoming important, with various applications. Our proposed work is based on a binary classifier that only determines whether two input images belong to a similar class, and it trains a convolutional neural network (CNN) model using a deep metric learning method based on the Siamese network. To help the training of the Siamese network converge, two classes whose age difference is below a certain distance are considered the same class, which increases the ratio of positive images in the database. The deep metric learning method trains the CNN model to measure similarity based only on age data, but we found that the accumulated gender data can also be used to compare ages. Thus, we adopted a multi-task learning approach that considers gender data for more accurate age estimation. In the experiments, we evaluated our approach on the MORPH and MegaAge-Asian datasets and compared gender classification accuracy using only age data from the training images. In addition, using gender classification, our proposed architecture, which is trained with age data alone, performs age comparison using a self-generated gender feature. The accuracy enhancement from multi-task learning, i.e., simultaneously considering age and gender data, is discussed. Our approach achieves the best accuracy among methods based on deep metric learning on the MORPH dataset, and it outperforms the state of the art in age estimation on the MegaAge-Asian and MORPH datasets.
(This article belongs to the Special Issue Deep Learning for Facial Informatics)
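The binary same-age-class pairing rule and the metric-learning objective described above can be sketched as follows. This is an illustrative reconstruction; the age threshold and margin values are assumptions, not the paper's settings.

```python
def pair_label(age_a, age_b, threshold=5):
    """Binary label for Siamese training: pairs whose age difference is
    below the threshold count as positive, raising the ratio of positive
    pairs in the training set (threshold value assumed)."""
    return 1 if abs(age_a - age_b) < threshold else 0

def contrastive_loss(dist, label, margin=1.0):
    """Classic contrastive loss on the embedding distance: pull positive
    pairs together, push negative pairs beyond the margin."""
    if label == 1:
        return dist ** 2
    return max(0.0, margin - dist) ** 2
```

Relaxing "same class" to "age difference below a threshold" is what makes positive pairs frequent enough for the contrastive objective to converge.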

Open Access Article
A Coarse-to-Fine Approach for 3D Facial Landmarking by Using Deep Feature Fusion
Symmetry 2018, 10(8), 308; https://doi.org/10.3390/sym10080308 - 01 Aug 2018
Cited by 1
Abstract
Facial landmarking locates the key facial feature points on facial data, which provides not only information on semantic facial structures but also prior knowledge for other kinds of facial analysis. However, most existing works still focus on 2D facial images, which may suffer from variations in lighting conditions. To address this limitation, this paper presents a coarse-to-fine approach that accurately and automatically locates facial landmarks using deep feature fusion on 3D facial geometry data. Specifically, the 3D data is first converted to 2D attribute maps. A global estimation network is then trained to predict the facial landmarks roughly from fused CNN (Convolutional Neural Network) features extracted from the facial attribute maps. After that, local fused CNN features extracted from a patch around each previously estimated landmark are fed to separately trained local models that refine the landmark locations. Tested on the Bosphorus and BU-3DFE datasets, the experimental results demonstrate the effectiveness and accuracy of the proposed method for locating facial landmarks. Compared with existing methods, our results achieve state-of-the-art performance.
(This article belongs to the Special Issue Deep Learning for Facial Informatics)
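The coarse-to-fine pipeline described above can be sketched as a two-stage loop. Here `global_model` and `local_models` are hypothetical stand-ins for the trained networks, and the patch size is an assumption.

```python
def crop(img, x0, y0, size):
    """Crop a size x size patch from a 2D list-of-lists image, clamping
    the window to the image bounds."""
    h, w = len(img), len(img[0])
    x0 = max(0, min(x0, w - size))
    y0 = max(0, min(y0, h - size))
    return [row[x0:x0 + size] for row in img[y0:y0 + size]]

def coarse_to_fine(attribute_map, global_model, local_models, patch=32):
    """Stage 1: rough landmark estimates from the global network.
    Stage 2: each landmark's own local model refines the estimate from
    a patch cropped around it."""
    coarse = global_model(attribute_map)
    refined = []
    for i, (x, y) in enumerate(coarse):
        local_patch = crop(attribute_map, x - patch // 2, y - patch // 2, patch)
        dx, dy = local_models[i](local_patch)  # predicted offset
        refined.append((x + dx, y + dy))
    return refined
```

One local model per landmark lets each refiner specialize on the appearance around its own feature point, which is the design choice the abstract describes.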

Open Access Article
Towards Real-Time Facial Landmark Detection in Depth Data Using Auxiliary Information
Symmetry 2018, 10(6), 230; https://doi.org/10.3390/sym10060230 - 17 Jun 2018
Cited by 1
Abstract
Modern facial motion capture systems employ a two-pronged approach to capturing and rendering facial motion. Visual (2D) data is used for tracking facial features and predicting facial expressions, whereas depth (3D) data is used to build a series of expressions on 3D face models. An issue with current research approaches is the use of a single data stream that provides little indication of the 3D facial structure. We compare and analyse the performance of Convolutional Neural Networks (CNNs) using visual, depth, and merged data to identify facial features in real time using a depth sensor. First, we review facial landmarking algorithms and their datasets for depth data. We address the limitations of current datasets by introducing the Kinect One Expression Dataset (KOED). We then propose the use of CNNs on single and merged data streams for facial landmark detection. We contribute to existing work by performing a full evaluation of which streams are the most effective for facial landmarking. Furthermore, we improve upon existing work by extending the networks to predict 3D landmarks in real time, with additional observations on the impact of using 2D landmarks as auxiliary information. We evaluate performance using the Mean Squared Error (MSE) and Mean Absolute Error (MAE). We observe that a single data stream predicts accurate facial landmarks on depth data when auxiliary information is used to train the network. The code and dataset used in this paper will be made available.
(This article belongs to the Special Issue Deep Learning for Facial Informatics)
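The use of 2D landmarks as auxiliary information during training can be sketched as a weighted multi-term loss. The weighting and the flattened-coordinate representation are assumptions for illustration, not the paper's values.

```python
def mse(pred, target):
    """Mean squared error over flattened coordinate lists."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def auxiliary_landmark_loss(pred3d, gt3d, pred2d, gt2d, aux_weight=0.5):
    """Total training loss: the main 3D landmark MSE plus a weighted
    auxiliary 2D landmark term (weight value assumed)."""
    return mse(pred3d, gt3d) + aux_weight * mse(pred2d, gt2d)
```

The auxiliary 2D term only shapes the shared features during training; at inference the network outputs 3D landmarks alone.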
