Open Access Article
Sensors 2019, 19(2), 282; https://doi.org/10.3390/s19020282

DeepMoCap: Deep Optical Motion Capture Using Multiple Depth Sensors and Retro-Reflectors

1 Centre for Research and Technology Hellas, Information Technologies Institute, 6th km Charilaou-Thermi, 57001 Thermi, Thessaloniki, Greece
2 National Technical University of Athens, School of Electrical and Computer Engineering, Zografou Campus, Iroon Polytechniou 9, 15780 Zografou, Athens, Greece
3 School of Computer Science, University of Lincoln, Brayford Pool, Lincoln LN6 7TS, UK
* Author to whom correspondence should be addressed.
Received: 13 December 2018 / Revised: 5 January 2019 / Accepted: 7 January 2019 / Published: 11 January 2019
(This article belongs to the Special Issue Depth Sensors and 3D Vision)

Abstract

In this paper, a marker-based, single-person optical motion capture method (DeepMoCap) is proposed, using multiple spatio-temporally aligned infrared-depth sensors and retro-reflective straps and patches (reflectors). DeepMoCap explores motion capture by automatically localizing and labeling reflectors on depth images and, subsequently, in 3D space. Introducing a non-parametric representation to encode the temporal correlation among pairs of colorized depth maps and 3D optical flow frames, a multi-stage Fully Convolutional Network (FCN) architecture is proposed to jointly learn reflector locations and their temporal dependency across sequential frames. The extracted 2D reflector locations are spatially mapped to 3D space, resulting in robust 3D optical data extraction. The subject's motion is then efficiently captured by applying a template-based fitting technique to the extracted optical data. Two datasets have been created and made publicly available for evaluation purposes: one comprising multi-view depth and 3D optical flow annotated images (DMC2.5D), and a second consisting of spatio-temporally aligned multi-view depth images along with skeleton, inertial, and ground-truth MoCap data (DMC3D). The FCN model outperforms its competitors on the DMC2.5D dataset using the 2D Percentage of Correct Keypoints (PCK) metric, while the motion capture outcome is evaluated against RGB-D and inertial data fusion approaches on DMC3D, outperforming the next best method by 4.5% in total 3D PCK accuracy.
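The abstract references two concrete computational steps: mapping 2D reflector detections into 3D space, and scoring the result with a Percentage of Correct Keypoints (PCK) metric. The sketch below illustrates both under stated assumptions, a standard pinhole depth-camera model and an illustrative 10 cm correctness threshold; the intrinsics, threshold, and function names are ours for illustration, not the paper's actual parameters or implementation.

```python
# Minimal sketch (not the authors' code): back-project 2D reflector
# detections to 3D with a pinhole model, then score with a 3D PCK metric.
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Map a pixel (u, v) with depth z (metres) to a 3D camera-space point."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def pck_3d(predicted, ground_truth, threshold=0.10):
    """3D PCK: a keypoint counts as correct if its prediction lies within
    `threshold` metres of ground truth (10 cm here is an assumed value)."""
    predicted = np.asarray(predicted, dtype=float)      # shape (N, 3)
    ground_truth = np.asarray(ground_truth, dtype=float)  # shape (N, 3)
    distances = np.linalg.norm(predicted - ground_truth, axis=1)
    return float(np.mean(distances <= threshold))

if __name__ == "__main__":
    # Toy check with Kinect-like intrinsics (assumed values): one detection
    # 3 cm from ground truth, one 20 cm away -> PCK = 0.5.
    fx = fy = 525.0
    cx, cy = 319.5, 239.5
    pred = [backproject(320, 240, 2.00, fx, fy, cx, cy),
            backproject(100, 200, 1.50, fx, fy, cx, cy)]
    gt = [pred[0] + np.array([0.0, 0.03, 0.0]),
          pred[1] + np.array([0.20, 0.0, 0.0])]
    print(pck_3d(pred, gt))  # 0.5
```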
Keywords: motion capture; deep learning; retro-reflectors; retro-reflective markers; multiple depth sensors; low-cost; deep mocap; depth data; 3D data; 3D vision; optical mocap; marker-based mocap

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


MDPI and ACS Style

Chatzitofis, A.; Zarpalas, D.; Kollias, S.; Daras, P. DeepMoCap: Deep Optical Motion Capture Using Multiple Depth Sensors and Retro-Reflectors. Sensors 2019, 19, 282.
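For readers who cite with BibTeX, an entry can be assembled from the metadata on this page; the entry key and field layout below are our rendering, not MDPI's official export.

```bibtex
@article{chatzitofis2019deepmocap,
  author  = {Chatzitofis, A. and Zarpalas, D. and Kollias, S. and Daras, P.},
  title   = {DeepMoCap: Deep Optical Motion Capture Using Multiple Depth
             Sensors and Retro-Reflectors},
  journal = {Sensors},
  year    = {2019},
  volume  = {19},
  number  = {2},
  pages   = {282},
  doi     = {10.3390/s19020282}
}
```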


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
