Open Access Article
Sensors 2018, 18(8), 2734; https://doi.org/10.3390/s18082734

An Occlusion-Aware Framework for Real-Time 3D Pose Tracking

M. Fu 1,2,†,*, Y. Leng 3,†, H. Luo 1 and W. Zhou 1

1 State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Department of Mechanical and Energy Engineering, Southern University of Science and Technology, Shenzhen 518055, China
† These authors contributed equally to this work.
* Author to whom correspondence should be addressed.
Received: 21 July 2018 / Revised: 10 August 2018 / Accepted: 17 August 2018 / Published: 20 August 2018
(This article belongs to the Collection Positioning and Navigation)

Abstract

Random forest-based methods for 3D temporal tracking over an image sequence have gained increasing prominence in recent years. They do not require the object’s texture and use only the raw depth images and the previous pose as input, which makes them especially suitable for textureless objects. These methods learn built-in occlusion handling from predetermined occlusion patterns, which cannot always model real occlusions. Moreover, the input to the random forest is mixed with more and more outliers as the occlusion deepens. In this paper, we propose an occlusion-aware framework capable of real-time and robust 3D pose tracking from RGB-D images. To this end, the proposed framework is anchored in the random forest-based learning strategy, referred to as RFtracker. We aim to enhance its performance in two respects: integrated local refinement of the random forest on the one hand, and online rendering-based occlusion handling on the other. To eliminate the inconsistency between learning and prediction in RFtracker, a local refinement step is embedded to guide the random forest towards the optimal regression. Furthermore, we present an online rendering-based occlusion handling scheme to improve robustness against dynamic occlusion. Meanwhile, a lightweight convolutional neural network-based motion compensation (CMC) module is designed to cope with fast motion and the inevitable physical delay caused by imaging frequency and data transmission. Finally, experiments show that the proposed framework copes better with heavily occluded scenes than RFtracker while preserving real-time performance.
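
The abstract outlines a per-frame loop: compensate for the object's motion, render the model at the predicted pose to flag occluded depth pixels, and regress a pose update from the remaining measurements. The short Python sketch below illustrates that loop under our own assumptions; the function names, the constant-velocity stand-in for the CNN-based motion compensation (CMC) module, and the placeholder regressor are illustrative and not taken from the paper.

import numpy as np

def occlusion_mask(rendered_depth, observed_depth, tol=0.01):
    """Flag pixels where the observed surface lies clearly in front of the
    rendered model, i.e. the object is likely occluded there
    (assumption: depths in metres, 0 = no measurement)."""
    valid = (rendered_depth > 0) & (observed_depth > 0)
    return valid & (observed_depth < rendered_depth - tol)

def constant_velocity_compensation(pose_prev, pose_prev2):
    """Stand-in for the paper's CNN-based motion compensation (CMC):
    extrapolate the translation with a constant-velocity model and keep
    the previous rotation. Poses are 4x4 homogeneous matrices."""
    pose_pred = pose_prev.copy()
    pose_pred[:3, 3] += pose_prev[:3, 3] - pose_prev2[:3, 3]
    return pose_pred

def track_frame(observed_depth, pose_prev, pose_prev2, render_fn, regress_fn):
    """One iteration of an occlusion-aware tracking loop (illustrative only).

    render_fn(pose)  -> synthetic depth image of the model at `pose`
    regress_fn(depth, mask, pose) -> refined 4x4 pose, e.g. a random-forest
                                     regressor followed by local refinement
    """
    # 1. Motion compensation: predict where the object moved since last frame.
    pose_pred = constant_velocity_compensation(pose_prev, pose_prev2)

    # 2. Online rendering: synthesise the depth the model would produce.
    rendered = render_fn(pose_pred)

    # 3. Occlusion handling: drop pixels covered by an occluder, so the
    #    regressor only sees measurements belonging to the tracked object.
    occluded = occlusion_mask(rendered, observed_depth)
    inliers = (rendered > 0) & ~occluded

    # 4. Pose regression and local refinement on the remaining pixels.
    return regress_fn(observed_depth, inliers, pose_pred)

Masking occluded pixels before regression is the step that keeps the outlier ratio at the forest's input low as the occlusion deepens, which is the failure mode the abstract attributes to the original RFtracker.
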
Keywords: pose tracking; occlusion handling; online rendering; motion compensation

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Fu, M.; Leng, Y.; Luo, H.; Zhou, W. An Occlusion-Aware Framework for Real-Time 3D Pose Tracking. Sensors 2018, 18, 2734.


