Open Access | Feature Paper | Article

PointNet++ and Three Layers of Features Fusion for Occlusion Three-Dimensional Ear Recognition Based on One Sample per Person

School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(1), 78; https://doi.org/10.3390/sym12010078
Received: 8 December 2019 / Revised: 24 December 2019 / Accepted: 27 December 2019 / Published: 2 January 2020
The ear’s relatively stable structure makes it suitable for recognition. In common identification applications, only one sample per person (OSPP) is registered in a gallery; consequently, effectively training a deep-learning-based ear recognition approach is difficult. State-of-the-art (SOA) 3D ear recognition under OSPP also breaks down when large occluding objects lie close to the ear. Hence, we propose a system that combines PointNet++ with three layers of features to extract rich identification information from a 3D ear. Our goal is to correctly recognize a 3D ear affected by a large nearby occlusion using only one registered sample per person. The system comprises four primary components: (1) segmentation; (2) local and local joint structural (LJS) feature extraction; (3) holistic feature extraction; and (4) fusion. We use PointNet++ for ear segmentation. For local and LJS feature extraction, we propose an LJS feature descriptor, the pairwise surface patch cropped using a symmetrical hemisphere cut-structured histogram with an indexed shape (PSPHIS) descriptor. Furthermore, we propose a local and LJS matching engine based on the proposed LJS feature descriptor and the SOA surface patch histogram indexed shape (SPHIS) local feature descriptor. For holistic feature extraction, we use a voxelization method for global matching. For the fusion component, we use a weighted fusion method to recognize the 3D ear. The experimental results demonstrate that the proposed system outperforms the SOA normalization-free 3D ear recognition methods using OSPP when the ear surface is influenced by a large nearby occlusion.
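As a rough illustration of the fusion component only, the sketch below fuses the local, LJS, and holistic matching scores for each gallery subject with fixed weights and returns the best-scoring identity. The identify helper, the weights, and the score values are illustrative assumptions, not the paper's code or reported parameters; segmentation, descriptor extraction, and per-layer matching are assumed to have already produced the scores.

import numpy as np

def identify(probe_scores, weights=(0.4, 0.3, 0.3)):
    """Illustrative weighted-fusion step (not the authors' implementation).

    probe_scores: {subject_id: (s_local, s_ljs, s_holistic)}, one entry per
    registered subject (OSPP gallery); higher score means a better match.
    Returns the best-matching identity and all fused scores.
    """
    w = np.asarray(weights, dtype=float)
    fused = {sid: float(np.dot(w, np.asarray(s, dtype=float)))
             for sid, s in probe_scores.items()}
    best = max(fused, key=fused.get)
    return best, fused

# Made-up scores for a probe ear against three gallery subjects:
scores = {"A": (0.82, 0.61, 0.70), "B": (0.35, 0.40, 0.52), "C": (0.77, 0.66, 0.68)}
best, fused = identify(scores)
print(best, round(fused[best], 3))

In the full pipeline described above, the three inputs to this step would come from the SPHIS local matcher, the PSPHIS-based LJS matcher, and the voxelized holistic matcher, respectively.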
Keywords: occlusion 3D ear recognition; three layers of features; PointNet++; pairwise surface patch cropped using a hemisphere cut structure; local and local joint structural feature-matching engine

Zhu, Q.; Mu, Z. PointNet++ and Three Layers of Features Fusion for Occlusion Three-Dimensional Ear Recognition Based on One Sample per Person. Symmetry 2020, 12, 78.

