A Bayesian Framework for Human Body Pose Tracking from Depth Image Sequences
Abstract: This paper addresses the problem of accurate and robust tracking of 3D human body pose from depth image sequences. Recovering the many degrees of freedom of human body movement from a depth image sequence is challenging because of the depth ambiguity caused by self-occlusions and the difficulty of recovering from tracking failure. Human body pose can be estimated through model fitting using dense correspondences between depth data and an articulated human model (the local optimization method). Although this usually achieves high accuracy thanks to the dense correspondences, it may fail to recover from tracking failure. Alternatively, human pose may be reconstructed by detecting and tracking anatomical landmarks (key-points) of the human body based on low-level depth image analysis. While this key-point based method is robust and can recover from tracking failure, its pose estimation accuracy depends solely on the image-based localization accuracy of the key-points. To address these limitations, we present a flexible Bayesian framework that integrates pose estimation results obtained by the key-point based and local optimization methods. Experimental results and a performance comparison are presented to demonstrate the effectiveness of the proposed approach.
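As an illustrative sketch only (not the authors' formulation, which is given in the full paper), the core idea of fusing two independent pose estimates in a Bayesian way can be shown with a precision-weighted product of Gaussians. Here each estimator is assumed to report a per-joint 3D position with an isotropic scalar variance; the function names `fuse_gaussian` and `fuse_pose` are hypothetical:

```python
def fuse_gaussian(mu_a, var_a, mu_b, var_b):
    """Fuse two independent Gaussian estimates of the same quantity
    (product of Gaussians): precision-weighted mean, summed precisions."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    var = 1.0 / (w_a + w_b)
    mu = var * (w_a * mu_a + w_b * mu_b)
    return mu, var

def fuse_pose(keypoint_pose, keypoint_var, modelfit_pose, modelfit_var):
    """Fuse per-joint 3D positions from a key-point based estimator and a
    local-optimization (model-fitting) estimator.

    Poses are lists of (x, y, z) tuples; variances are per-joint scalars,
    an isotropic simplification assumed for this sketch.
    """
    fused = []
    for pa, va, pb, vb in zip(keypoint_pose, keypoint_var,
                              modelfit_pose, modelfit_var):
        fused.append(tuple(fuse_gaussian(ca, va, cb, vb)[0]
                           for ca, cb in zip(pa, pb)))
    return fused
```

With equal variances the fused joint position is simply the midpoint of the two estimates; as one estimator's variance grows (e.g., a key-point lost to self-occlusion), the fused result leans toward the other.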
Citation:
Zhu, Y.; Fujimura, K. A Bayesian Framework for Human Body Pose Tracking from Depth Image Sequences. Sensors 2010, 10, 5280-5293.