Open Access Article
Sensors 2019, 19(2), 291; https://doi.org/10.3390/s19020291

3D Affine: An Embedding of Local Image Features for Viewpoint Invariance Using RGB-D Sensor Data

1 Department of Precision Engineering, Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
2 Human-Artifactology Research Division, Research into Artifacts, Center for Engineering (RACE), The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa-shi, Chiba 277-8568, Japan
* Author to whom correspondence should be addressed.
Received: 28 October 2018 / Revised: 20 December 2018 / Accepted: 7 January 2019 / Published: 12 January 2019
(This article belongs to the Special Issue Depth Sensors and 3D Vision)

Abstract

Local image features are invariant to in-plane rotations and robust to minor viewpoint changes. However, current detectors and descriptors for local image features fail to accommodate out-of-plane rotations larger than 25°–30°. Invariance to such viewpoint changes is essential for numerous applications, including wide baseline matching, 6D pose estimation, and object reconstruction. In this study, we present a general embedding that wraps a detector/descriptor pair to increase viewpoint invariance by exploiting input depth maps. The proposed embedding locates smooth surfaces within the input RGB-D images and projects them into a viewpoint-invariant representation, enabling the detection and description of more viewpoint-invariant features. Our embedding can be utilized with different combinations of detector/descriptor pairs, according to the desired application. Using synthetic and real-world objects, we evaluated the viewpoint invariance of various detectors and descriptors, for both standalone and embedded approaches. While standalone local image features fail to accommodate average viewpoint changes beyond 33.3°, our proposed embedding boosted the viewpoint invariance to different levels, depending on the scene geometry. Objects with distinct surface discontinuities were on average invariant up to 52.8°, and the overall average for all evaluated datasets was 45.4°. Similarly, out of 140 combinations involving 20 local image features and various objects with distinct surface discontinuities, only a single standalone local image feature exceeded the goal of a 60° viewpoint difference, in just two combinations, whereas 19 different local image features succeeded in 73 combinations when wrapped in the proposed embedding. Furthermore, the proposed approach operates robustly in the presence of input depth noise, even at the noise levels of low-cost commodity depth sensors and beyond.
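The core idea described above — segment a smooth surface from the depth data, then rotate it to a fronto-parallel view before running any standard detector/descriptor — can be sketched as follows. This is a minimal NumPy illustration under simplifying assumptions (a single planar patch, least-squares normal estimation via SVD, a synthetic point cloud in place of a real depth map); the paper's actual pipeline, with its nonparametric spherical k-means clustering, denoising, and interpolation steps, is more involved:

```python
import numpy as np

def estimate_plane_normal(points):
    """Least-squares plane normal of an (N, 3) point cloud via SVD
    (a common choice; the paper's surface segmentation may differ)."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value
    # is orthogonal to the best-fit plane.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    return n if n[2] >= 0 else -n  # orient toward the camera (+z)

def fronto_parallel_rotation(normal):
    """Rotation aligning the surface normal with the optical axis, so the
    patch can be re-rendered as if viewed head-on (Rodrigues formula)."""
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(normal, z)
    c = float(np.dot(normal, z))
    if np.isclose(c, 1.0):
        return np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

# Synthetic planar patch tilted 30° out of plane, as if sampled from depth.
theta = np.deg2rad(30.0)
Rx = np.array([[1.0, 0.0, 0.0],
               [0.0, np.cos(theta), -np.sin(theta)],
               [0.0, np.sin(theta), np.cos(theta)]])
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 10),
                            np.linspace(-1, 1, 10)), -1).reshape(-1, 2)
pts = np.c_[grid, np.zeros(len(grid))] @ Rx.T + [0.0, 0.0, 2.0]

n = estimate_plane_normal(pts)
R = fronto_parallel_rotation(n)
rectified = (pts - pts.mean(axis=0)) @ R.T
print(np.allclose(rectified[:, 2], 0.0, atol=1e-8))  # → True
```

After this rectification, an off-the-shelf detector/descriptor (e.g., SIFT or ORB) would be run on the re-projected texture rather than the original oblique view, which is what restores matching under large out-of-plane rotations.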
Keywords: viewpoint invariance; local image feature embedding; wide baseline matching; out-of-plane rotations; 6D pose estimation; nonparametric spherical k-means; denoising and interpolation; 3D points projection
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

MDPI and ACS Style

Sahloul, H.; Shirafuji, S.; Ota, J. 3D Affine: An Embedding of Local Image Features for Viewpoint Invariance Using RGB-D Sensor Data. Sensors 2019, 19, 291.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Sensors EISSN 1424-8220, published by MDPI AG, Basel, Switzerland.