Special Issue "Deep Learning-Based Biometric Technologies"

A special issue of Symmetry (ISSN 2073-8994).

Deadline for manuscript submissions: 31 August 2019.

Special Issue Editors

Guest Editor
Prof. Dr. Kang Ryoung Park

Division of Electronics and Electrical Engineering, Dongguk University, 30, Pildong-ro 1-gil, Jung-gu, Seoul 100-715, Republic of Korea
Interests: deep learning, biometrics, image processing
Guest Editor
Dr. Huibin Li

Room 302, Science Building, School of Mathematics and Statistics, Xi'an Jiaotong University, No. 28, Xianning West Road, Xi'an, Shaanxi, 710049, China
Phone: +86-29-82660949
Interests: discrete geometry analysis, 3D face recognition, 3D facial expression analysis, deep learning for 3D shapes

Special Issue Information

Dear Colleagues,

Recent developments have led to the widespread use of biometric technologies, such as face, fingerprint, vein, iris, palmprint, wrinkle, voice, and gait recognition, in a variety of applications including access control, financial transactions on mobile devices, and automatic teller machines (ATMs). While existing biometric technology has matured, its performance is still affected by various environmental conditions, and recent approaches have attempted to combine deep learning techniques with conventional biometrics to achieve higher performance. The objective of this Special Issue is to invite high-quality, state-of-the-art research papers that deal with challenging issues in deep learning-based biometric technologies. We solicit original papers reporting unpublished, completed research that is not currently under review by any other conference, magazine, or journal. Topics of interest include, but are not limited to:

  •  Region of interest (ROI) or feature point detection for biometrics based on deep learning
  •  Biometric feature extraction based on deep learning
  •  Biometric recognition based on deep learning
  •  Soft biometrics based on deep learning
  •  Multimodal biometrics based on deep learning
  •  Spoof detection based on deep learning

Prof. Kang Ryoung Park
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website, then proceeding to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Symmetry is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Region of interest (ROI) or feature point detection for biometrics based on deep learning
  • Biometric feature extraction based on deep learning
  • Biometric recognition based on deep learning
  • Soft biometrics based on deep learning
  • Multimodal biometrics based on deep learning
  • Spoof detection based on deep learning

Published Papers (6 papers)


Research

Open Access Article
An Adversarial and Densely Dilated Network for Connectomes Segmentation
Symmetry 2018, 10(10), 467; https://doi.org/10.3390/sym10100467
Received: 28 August 2018 / Revised: 5 October 2018 / Accepted: 8 October 2018 / Published: 9 October 2018
Cited by 1 | PDF Full-text (4375 KB) | HTML Full-text | XML Full-text
Abstract
Automatic reconstruction of neural circuits in the brain is one of the most crucial tasks in neuroscience. Connectome segmentation plays an important role in reconstruction from electron microscopy (EM) images; however, it is rather challenging due to highly anisotropic shapes with inferior image quality and varying thickness. In this paper, we propose a novel connectome segmentation framework called the adversarial and densely dilated network (ADDN) to address these issues. ADDN is based on the conditional generative adversarial network (cGAN) structure, a recent advance in machine learning with the power to generate images similar to the ground truth, especially when the training data are limited. Specifically, we design a densely dilated network (DDN) as the segmentor to allow a deeper architecture and larger receptive fields for more accurate segmentation. The discriminator is trained to distinguish generated segmentations from manual segmentations. During training, the adversarial loss is optimized together with the Dice loss. Extensive experimental results demonstrate that ADDN is effective for this connectome segmentation task, helping to retrieve more accurate segmentations and attenuate the blurry effects of the generated boundary map. Our method obtains state-of-the-art performance while requiring less computation on the ISBI 2012 EM dataset and the mouse piriform cortex dataset. Full article
(This article belongs to the Special Issue Deep Learning-Based Biometric Technologies)
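The training objective described in the abstract combines an adversarial term with the Dice loss. The sketch below illustrates that combination with NumPy; the weighting factor `lam` and the `-log` adversarial form are illustrative assumptions, not the paper's published hyperparameters:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss: 0 for perfect overlap, approaching 1 for none."""
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

def combined_loss(pred, target, d_score, lam=0.5):
    """Dice loss plus a GAN-style adversarial term.

    d_score is the discriminator's probability that `pred` is a manual
    (real) segmentation; the segmentor is penalized by -log(d_score),
    so it is rewarded when it fools the discriminator.
    """
    adv = -np.log(d_score + 1e-7)
    return dice_loss(pred, target) + lam * adv

# Toy example: a perfect prediction has zero Dice loss, so only the
# adversarial term remains.
mask = np.array([[0, 1], [1, 1]], dtype=float)
print(round(combined_loss(mask, mask, d_score=0.9), 4))  # 0.5 * -log(0.9)
```

In the full framework both losses backpropagate through the segmentor, while the discriminator is trained on the opposite objective.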

Open Access Article
Deep Learning-Based Multinational Banknote Fitness Classification with a Combination of Visible-Light Reflection and Infrared-Light Transmission Images
Symmetry 2018, 10(10), 431; https://doi.org/10.3390/sym10100431
Received: 31 August 2018 / Revised: 18 September 2018 / Accepted: 21 September 2018 / Published: 25 September 2018
Cited by 1 | PDF Full-text (8830 KB) | HTML Full-text | XML Full-text
Abstract
The fitness classification of a banknote is important, as it assesses the quality of banknotes in automated banknote-sorting facilities, such as counting machines or automated teller machines. Popular approaches are primarily based on image processing, with banknote images acquired by various sensors. However, most of these methods assume that the currency type, denomination, and exposed direction of the banknote are known. In other words, not only is a pre-classification of the type of input banknote required, but in some cases, the type of currency must be manually selected. To address this problem, we propose a multinational banknote fitness-classification method that simultaneously determines the fitness level of banknotes from multiple countries. This is achieved without pre-classification of the input direction and denomination of the banknote, using visible-light reflection and infrared-light transmission images of banknotes and a convolutional neural network. Experimental results on a combined banknote image database consisting of the Indian rupee and Korean won with three fitness levels, and the United States dollar with two fitness levels, show that the proposed method achieves better accuracy than other fitness-classification methods. Full article
(This article belongs to the Special Issue Deep Learning-Based Biometric Technologies)
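One plausible way to present both sensor modalities, a visible-light reflection image and an infrared-light transmission image, to a single CNN is channel stacking. The fusion actually used in the paper may differ; this is a minimal sketch under that assumption:

```python
import numpy as np

def fuse_channels(visible, infrared):
    """Stack a visible-light reflection image and an infrared-light
    transmission image into one multi-channel CNN input of shape (H, W, 2)."""
    if visible.shape != infrared.shape:
        raise ValueError("images must share the same spatial size")
    return np.stack([visible, infrared], axis=-1)

vis = np.random.rand(64, 128)  # grayscale visible-light reflection image
ir = np.random.rand(64, 128)   # infrared-light transmission image
x = fuse_channels(vis, ir)
print(x.shape)  # (64, 128, 2)
```

Feeding both channels at once lets the first convolutional layer learn cross-modal filters instead of processing each sensor's image separately.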

Open Access Article
Age Estimation Robust to Optical and Motion Blurring by Deep Residual CNN
Symmetry 2018, 10(4), 108; https://doi.org/10.3390/sym10040108
Received: 9 March 2018 / Revised: 9 April 2018 / Accepted: 10 April 2018 / Published: 13 April 2018
Cited by 3 | PDF Full-text (18829 KB) | HTML Full-text | XML Full-text
Abstract
Recently, real-time human age estimation based on facial images has been applied in various areas. Underneath this phenomenon lies an awareness that age estimation plays an important role in applying big data to target marketing for age groups, product demand surveys, consumer trend analysis, etc. However, in a real-world environment, various optical and motion blurring effects can occur. Such effects usually prevent facial features such as wrinkles, which are essential to age estimation, from being fully captured, thereby degrading accuracy. Most previous studies on age estimation were conducted on input images almost free from blurring effects. To overcome this limitation, we propose the use of a deep ResNet-152 convolutional neural network for age estimation that is robust to the various optical and motion blurring effects of visible-light camera sensors. We performed experiments with various optically and motion-blurred images created from the publicly available park aging mind laboratory (PAL) and craniofacial longitudinal morphological face database (MORPH) databases. According to the results, the proposed method exhibited better age estimation performance than previous methods. Full article
(This article belongs to the Special Issue Deep Learning-Based Biometric Technologies)
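Motion blur of the kind the experiments simulate can be approximated by convolving each image row with a 1-D box kernel. This is a generic augmentation sketch, not the paper's exact blurring procedure or kernel sizes:

```python
import numpy as np

def horizontal_motion_blur(image, length=5):
    """Simulate horizontal motion blur by averaging `length` neighboring
    pixels along each row (a 1-D box kernel, zero-padded at the edges)."""
    kernel = np.ones(length) / length
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, image)

# A constant image stays constant away from the zero-padded borders.
blurred = horizontal_motion_blur(np.ones((4, 10)), length=5)
print(blurred[0, 5])  # 1.0 in the interior
```

Training on such synthetically blurred copies of the PAL and MORPH images is one way to obtain the blur robustness the abstract describes.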

Open Access Article
A Novel Multimodal Biometrics Recognition Model Based on Stacked ELM and CCA Methods
Symmetry 2018, 10(4), 96; https://doi.org/10.3390/sym10040096
Received: 11 February 2018 / Revised: 26 March 2018 / Accepted: 28 March 2018 / Published: 4 April 2018
Cited by 5 | PDF Full-text (13907 KB) | HTML Full-text | XML Full-text
Abstract
Multimodal biometrics, a newly developed trend in biometric identification technology, combines a variety of biological features to significantly improve identification performance. This study proposes a novel multimodal biometrics recognition model based on stacked extreme learning machines (ELMs) and canonical correlation analysis (CCA). The model, which has a symmetric structure, is found to have high potential for multimodal biometrics. The model works as follows. First, it learns the hidden-layer representations of biological images using extreme learning machines, layer by layer. Second, canonical correlation analysis is applied to map the representations to a feature space, which is used to reconstruct the multimodal image feature representation. Third, the reconstructed features are used as the input of a classifier for supervised training and output. To verify the validity and efficiency of the method, we adopt it for new hybrid datasets obtained from typical face image datasets and finger-vein image datasets. Our experimental results demonstrate that our model performs better than traditional methods. Full article
(This article belongs to the Special Issue Deep Learning-Based Biometric Technologies)
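A single ELM layer of the kind stacked in this model trains in closed form: random input weights, a nonlinear hidden layer, and a least-squares solve for the output weights. The sketch below shows one such layer; the hidden size, sigmoid activation, and toy data are illustrative choices, and the CCA fusion stage is omitted:

```python
import numpy as np

def train_elm(X, T, hidden=50, rng=None):
    """One extreme learning machine (ELM) layer: random input weights W
    and biases b are fixed, and only the output weights beta are solved
    for, via the Moore-Penrose pseudoinverse of the hidden activations."""
    rng = np.random.default_rng(0) if rng is None else rng
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid hidden representation
    beta = np.linalg.pinv(H) @ T            # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy two-class problem (XOR, one-hot targets): with enough random hidden
# units, the closed-form solve fits the four training points exactly.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[1., 0.], [0., 1.], [0., 1.], [1., 0.]])
W, b, beta = train_elm(X, T, hidden=20)
print(np.argmax(elm_predict(X, W, b, beta), axis=1))  # [0 1 1 0]
```

Stacking repeats this step layer by layer, feeding each layer's hidden representation to the next before CCA maps the per-modality features into a common space.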

Open Access Feature Paper Article
Palmprint and Palmvein Recognition Based on DCNN and A New Large-Scale Contactless Palmvein Dataset
Symmetry 2018, 10(4), 78; https://doi.org/10.3390/sym10040078
Received: 7 March 2018 / Revised: 18 March 2018 / Accepted: 19 March 2018 / Published: 21 March 2018
Cited by 6 | PDF Full-text (4260 KB) | HTML Full-text | XML Full-text
Abstract
Among biometric identifiers, the palmprint and the palmvein have received significant attention due to their stability, uniqueness, and non-intrusiveness. In this paper, we investigate the problem of palmprint/palmvein recognition and propose a Deep Convolutional Neural Network (DCNN) based scheme, namely PalmRCNN (short for palmprint/palmvein recognition using CNNs). The effectiveness and efficiency of PalmRCNN have been verified through extensive experiments conducted on benchmark datasets. In addition, although substantial effort has been devoted to palmvein recognition, it is still quite difficult for researchers to know the potential discriminating capability of the contactless palmvein. One of the root reasons is that a large-scale, publicly available dataset comprising high-quality contactless palmvein images is still lacking. To this end, a user-friendly acquisition device for collecting high-quality contactless palmvein images is first designed and developed in this work. Then, a large-scale palmvein image dataset is established, comprising 12,000 images acquired from 600 different palms in two separate collection sessions. The collected dataset is now publicly available. Full article
(This article belongs to the Special Issue Deep Learning-Based Biometric Technologies)

Open Access Article
Deep Learning-Based Iris Segmentation for Iris Recognition in Visible Light Environment
Symmetry 2017, 9(11), 263; https://doi.org/10.3390/sym9110263
Received: 8 October 2017 / Revised: 27 October 2017 / Accepted: 1 November 2017 / Published: 4 November 2017
Cited by 20 | PDF Full-text (9085 KB) | HTML Full-text | XML Full-text
Abstract
Existing iris recognition systems are heavily dependent on specific conditions, such as the distance of image acquisition and the stop-and-stare environment, which require significant user cooperation. In environments where user cooperation is not guaranteed, prevailing segmentation schemes for the iris region are confronted with many problems, such as heavy occlusion by eyelashes, invalid off-axis rotations, motion blur, and non-regular reflections in the eye area. In addition, iris recognition in a visible-light environment has been investigated to avoid the use of an additional near-infrared (NIR) light camera and NIR illuminator, which increases the difficulty of segmenting the iris region accurately owing to the environmental noise of visible light. To address these issues, this study proposes a two-stage iris segmentation scheme based on a convolutional neural network (CNN), which is capable of accurate iris segmentation in the severely noisy environments of iris recognition by a visible-light camera sensor. In the experiments, the noisy iris challenge evaluation part-II (NICE-II) training database (selected from the UBIRIS.v2 database) and the mobile iris challenge evaluation (MICHE) dataset were used. Experimental results showed that our method outperformed existing segmentation methods. Full article
(This article belongs to the Special Issue Deep Learning-Based Biometric Technologies)
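The two-stage idea, a coarse localization stage followed by fine per-pixel segmentation restricted to the detected region, can be illustrated with a toy example. Both stages below are deliberate simplifications: the paper uses CNNs for each stage, whereas here stage 1 is a darkest-pixel search and stage 2 a threshold:

```python
import numpy as np

def coarse_roi(image, size=8):
    """Stage 1: locate a rough region of interest around the darkest
    pixel (a crude stand-in for CNN-based pupil/iris localization)."""
    r, c = np.unravel_index(np.argmin(image), image.shape)
    top = max(0, min(r - size // 2, image.shape[0] - size))
    left = max(0, min(c - size // 2, image.shape[1] - size))
    return top, left, size

def fine_segment(image, roi, thresh=0.5):
    """Stage 2: per-pixel decision inside the ROI only, so the costly
    fine model never runs on the rest of the image."""
    top, left, size = roi
    mask = np.zeros_like(image, dtype=bool)
    patch = image[top:top + size, left:left + size]
    mask[top:top + size, left:left + size] = patch < thresh
    return mask

# Toy eye image: bright background with a dark 4x4 "iris".
img = np.ones((16, 16))
img[6:10, 6:10] = 0.1
mask = fine_segment(img, coarse_roi(img))
print(mask.sum())  # 16 dark pixels found
```

Restricting stage 2 to the stage-1 region is what makes the two-stage scheme both faster and less sensitive to noise outside the eye area.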

Symmetry (EISSN 2073-8994) is published by MDPI AG, Basel, Switzerland.