A Layered Approach for Robust Spatial Virtual Human Pose Reconstruction Using a Still Image
Abstract
Pedestrian detection and human pose estimation are instructive for reconstructing a three-dimensional scenario and for robot navigation, particularly when large amounts of vision data are captured using various data-recording techniques. Under an unrestricted capture scheme, which produces occlusions or blurring, the information describing each part of a human body and the relationships between parts, or even between different pedestrians, is often not fully present in a still image. Accordingly, a multi-layered, spatial, virtual human pose reconstruction framework is presented in this study to recover the deficient information in planar images. In this framework, a hierarchical parts-based deep model detects body parts from the restricted information available in a still image and is combined with spatial Markov random fields to re-estimate accurate joint positions in the deep network. The planar estimation results are then mapped onto a virtual three-dimensional space using multiple constraints to recover the missing spatial information. The proposed approach can be viewed as a general pre-processing method to guide the generation of continuous, three-dimensional motion data. The experimental results demonstrate the effectiveness and usability of the proposed approach.
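To make the layered flow described in the abstract concrete, the following is a minimal conceptual sketch, not the authors' implementation: it stands in a simple neighbour-based refinement for the spatial Markov random field stage and a bone-length prior for the "multiple constraints" used when lifting 2D joints into a virtual 3D space. The joint subset, bone lengths, detector output, and refinement/lifting rules are all illustrative assumptions.

```python
# Conceptual sketch of a layered 2D-to-3D pose pipeline (illustrative only).
# Assumptions: a part detector already provides rough 2D joint positions with
# confidence scores; joint names and bone lengths below are placeholders.

import numpy as np

JOINTS = ["head", "neck", "shoulder_r", "elbow_r", "wrist_r"]   # hypothetical subset
BONES = [(0, 1), (1, 2), (2, 3), (3, 4)]                         # parent-child pairs
BONE_LENGTH = {(0, 1): 0.25, (1, 2): 0.20, (2, 3): 0.30, (3, 4): 0.25}  # assumed units

def refine_joints(joints_2d, confidence, bones, iters=10, step=0.5):
    """Pairwise refinement in the spirit of a spatial MRF: the less confident
    end of each bone is pulled toward its connected neighbour."""
    pts = joints_2d.copy()
    for _ in range(iters):
        for i, j in bones:
            if confidence[i] < confidence[j]:
                pts[i] += step * (pts[j] - pts[i]) * (1.0 - confidence[i])
            else:
                pts[j] += step * (pts[i] - pts[j]) * (1.0 - confidence[j])
    return pts

def lift_to_3d(joints_2d, bones, bone_length):
    """Naive lifting: choose each child joint's depth so the 3D bone length
    matches an assumed skeleton prior (one possible spatial constraint)."""
    joints_3d = np.zeros((len(joints_2d), 3))
    joints_3d[:, :2] = joints_2d
    for parent, child in bones:
        planar = np.linalg.norm(joints_2d[child] - joints_2d[parent])
        target = bone_length[(parent, child)]
        dz = np.sqrt(max(target ** 2 - planar ** 2, 0.0))
        joints_3d[child, 2] = joints_3d[parent, 2] + dz
    return joints_3d

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    detected = rng.uniform(0.0, 0.3, size=(len(JOINTS), 2))  # stand-in detector output
    scores = rng.uniform(0.4, 1.0, size=len(JOINTS))
    refined = refine_joints(detected, scores, BONES)
    print(lift_to_3d(refined, BONES, BONE_LENGTH).round(3))
```

In the paper's pipeline the 2D detections come from a hierarchical parts-based deep model rather than the random stand-in used here, and the refinement and lifting stages are considerably richer; this sketch only illustrates the layering of detection, joint re-estimation, and constrained 3D mapping.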
Cite This Article
Guo, C.; Ruan, S.; Liang, X.; Zhao, Q. A Layered Approach for Robust Spatial Virtual Human Pose Reconstruction Using a Still Image. Sensors 2016, 16, 263.