Open Access Article

Handcrafted versus CNN Features for Ear Recognition

1 Institute for Neuro- and Bioinformatics, University of Lübeck, 23562 Lübeck, Germany
2 Mathematics Department, Faculty of Science, South Valley University, Qena 83523, Egypt
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(12), 1493; https://doi.org/10.3390/sym11121493
Received: 8 November 2019 / Revised: 26 November 2019 / Accepted: 5 December 2019 / Published: 8 December 2019
Ear recognition is an active research area in the biometrics community, with the ultimate goal of effectively recognizing individuals from ear images. Traditional ear recognition methods based on handcrafted features and conventional machine learning classifiers were the prominent techniques during the last two decades. Arguably, feature extraction is the crucial phase for the success of these methods, owing to the difficulty of designing robust features that cope with the variations in the given images. Currently, ear recognition research is shifting towards features extracted by Convolutional Neural Networks (CNNs), which can learn more specific features that are robust to wide image variations and achieve state-of-the-art recognition performance. This paper presents and compares ear recognition models built with handcrafted and CNN features. First, we experiment with seven top-performing handcrafted descriptors to extract discriminating ear image features and then train Support Vector Machines (SVMs) on the extracted features to learn a suitable model. Second, we introduce four CNN-based models using a variant of the AlexNet architecture. The experimental results on three ear datasets show the superior performance of the CNN-based models, by a margin of 22%. To further substantiate the comparison, we visualize the handcrafted and CNN features using the t-distributed Stochastic Neighbor Embedding (t-SNE) technique and discuss their characteristics. Moreover, we conduct experiments to investigate the symmetry of the left and right ears; the results on two datasets indicate a high degree of symmetry between the ears, while a fair degree of asymmetry also exists.
Keywords: ear recognition; handcrafted features; CNN features; convolutional neural networks; transfer learning; feature visualization
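To illustrate the handcrafted-features pipeline the abstract describes (extract a descriptor from each image, then train an SVM on the feature vectors), here is a minimal sketch using a simple 3×3 Local Binary Pattern histogram as the descriptor and a linear SVM. The LBP variant, the synthetic stand-in images, and the use of scikit-learn are assumptions for illustration; the paper's seven descriptors and its ear datasets are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

def lbp_histogram(img):
    # 3x3 Local Binary Patterns: compare the 8 neighbours of each interior
    # pixel against the centre and histogram the resulting 8-bit codes.
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()  # normalized 256-bin feature vector

rng = np.random.default_rng(0)

# Synthetic stand-in for two texture classes (smooth gradient vs. noise);
# real ear images would be used in practice.
images, labels = [], []
for i in range(20):
    if i % 2 == 0:
        img = np.tile(np.linspace(0, 1, 32), (32, 1)) + 0.05 * rng.random((32, 32))
    else:
        img = rng.random((32, 32))
    images.append(img)
    labels.append(i % 2)

# Descriptor extraction followed by SVM training, as in the paper's pipeline.
X = np.array([lbp_histogram(im) for im in images])
clf = SVC(kernel="linear").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

The same two-stage structure applies to any of the handcrafted descriptors: only `lbp_histogram` would be swapped out, while the SVM training step stays unchanged.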
MDPI and ACS Style

Alshazly, H.; Linse, C.; Barth, E.; Martinetz, T. Handcrafted versus CNN Features for Ear Recognition. Symmetry 2019, 11, 1493.

