Sensors 2019, 19(2), 282; https://doi.org/10.3390/s19020282
DeepMoCap: Deep Optical Motion Capture Using Multiple Depth Sensors and Retro-Reflectors
1 Centre for Research and Technology Hellas, Information Technologies Institute, 6th km Charilaou-Thermi, 57001 Thermi, Thessaloniki, Greece
2 National Technical University of Athens, School of Electrical and Computer Engineering, Zografou Campus, Iroon Polytechniou 9, 15780 Zografou, Athens, Greece
3 School of Computer Science, University of Lincoln, Brayford LN6 7TS, UK
* Author to whom correspondence should be addressed.
Received: 13 December 2018 / Revised: 5 January 2019 / Accepted: 7 January 2019 / Published: 11 January 2019
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
Abstract
In this paper, a marker-based, single-person optical motion capture method (DeepMoCap) is proposed using multiple spatio-temporally aligned infrared-depth sensors and retro-reflective straps and patches (reflectors). DeepMoCap explores motion capture by automatically localizing and labeling reflectors on depth images and, subsequently, in 3D space. Introducing a non-parametric representation to encode the temporal correlation among pairs of colorized depthmaps and 3D optical flow frames, a multi-stage Fully Convolutional Network (FCN) architecture is proposed to jointly learn reflector locations and their temporal dependency among sequential frames. The extracted 2D reflector locations are spatially mapped to 3D space, resulting in robust 3D optical data extraction. The subject’s motion is efficiently captured by applying a template-based fitting technique to the extracted optical data. Two datasets have been created and made publicly available for evaluation purposes: one comprising multi-view depth and 3D optical flow annotated images (DMC2.5D), and a second consisting of spatio-temporally aligned multi-view depth images along with skeleton, inertial and ground truth MoCap data (DMC3D). The FCN model outperforms its competitors on the DMC2.5D dataset using the 2D Percentage of Correct Keypoints (PCK) metric, while the motion capture outcome is evaluated against RGB-D and inertial data fusion approaches on DMC3D, outperforming the next best method.
Keywords:
motion capture; deep learning; retro-reflectors; retro-reflective markers; multiple depth sensors; low-cost; deep mocap; depth data; 3D data; 3D vision; optical mocap; marker-based mocap
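The 2D PCK metric mentioned above counts a predicted keypoint as correct when it lies within a distance threshold of its ground-truth location. A minimal sketch of the standard formulation (the function name and pixel threshold here are illustrative, not taken from the paper):

```python
import numpy as np

def pck(pred, gt, threshold):
    """Percentage of Correct Keypoints: the fraction of predicted
    keypoints within `threshold` (in pixels) of the ground truth.
    pred, gt: (N, 2) arrays of (x, y) keypoint coordinates."""
    dists = np.linalg.norm(pred - gt, axis=1)
    return float(np.mean(dists <= threshold))

# Toy example: 3 of 4 predicted keypoints fall within a 5-pixel threshold.
pred = np.array([[10.0, 10.0], [52.0, 48.0], [30.0, 30.0], [100.0, 90.0]])
gt   = np.array([[12.0, 11.0], [50.0, 50.0], [30.0, 30.0], [80.0, 90.0]])
print(pck(pred, gt, threshold=5.0))  # → 0.75
```

In practice the threshold is often normalized by a reference scale (e.g., a fraction of the torso or head size) so the score is comparable across subjects and image resolutions.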
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (CC BY 4.0).

Chatzitofis, A.; Zarpalas, D.; Kollias, S.; Daras, P. DeepMoCap: Deep Optical Motion Capture Using Multiple Depth Sensors and Retro-Reflectors. Sensors 2019, 19, 282.