An Effective and Efficient Approach for 3D Recovery of Human Motion Capture Data
Abstract
1. Introduction
2. Related Work
2.1. Approaches for Search and Retrieval
2.2. Approaches for 3D MoCap Recovery
2.2.1. Approaches for 3D Recovery of Missing Markers
2.2.2. Approaches for 3D Recovery of Missing Joints
3. Methodology
3.1. Problem Formulation
3.2. Normalization
3.2.1. Translational Normalization
3.2.2. Orientational Normalization
3.3. Construction of Parallel kd-Tree
- The input dataset is divided into multiple subsets, and each subset is assigned to a processor that handles its chunk of data independently.
- All threads in a block then perform the following tasks on their assigned data:
- Each thread computes the bucket index for every element assigned to it;
- Each thread counts the number of elements falling into its bucket;
- The resulting per-block histogram is saved in memory;
- Finally, the keys are shifted into their appropriate buckets within each processor.
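The per-block bucketing steps above can be illustrated with a small CPU-side sketch. This is not the authors' CUDA implementation; the function name `build_buckets` and all parameters are our own illustrative choices, with NumPy chunks standing in for GPU thread blocks:

```python
import numpy as np

def build_buckets(data, split_dim, n_blocks, n_buckets):
    """Sketch of the per-block bucketing step of the parallel kd-tree build:
    each 'block' receives a chunk of the data, computes a bucket index per
    element along the split dimension, stores a local histogram, and then
    shifts its keys into their buckets."""
    lo, hi = data[:, split_dim].min(), data[:, split_dim].max()
    edges = np.linspace(lo, hi, n_buckets + 1)       # bucket boundaries
    chunks = np.array_split(data, n_blocks)          # one chunk per block

    histograms = []   # per-block entry histogram, saved for later merging
    bucket_ids = []
    for chunk in chunks:  # independent work: would run concurrently on a GPU
        # bucket index for every element in this block's chunk
        idx = np.searchsorted(edges, chunk[:, split_dim], side='right') - 1
        idx = np.clip(idx, 0, n_buckets - 1)
        bucket_ids.append(idx)
        # count elements per bucket within this block
        histograms.append(np.bincount(idx, minlength=n_buckets))

    # shift keys into their buckets within each block (stable sort keeps order)
    reordered = [chunk[np.argsort(idx, kind='stable')]
                 for chunk, idx in zip(chunks, bucket_ids)]
    return np.vstack(reordered), np.vstack(histograms)
```

In an actual GPU build, the per-block histograms would be combined with a prefix sum to compute global bucket offsets before the scatter; the sketch stops at the per-block stage described in the steps above.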
3.4. Nearest Neighbors Search
3.5. 3D Recovery
3.5.1. Retrieval Error
3.5.2. Input Error
3.5.3. Limb Length Error
3.5.4. Smoothing Error
4. Results and Discussion
- In the first case, we deal with the recovery of missing joints [48]. The skeleton model in this protocol consists of 31 joints, and the details of the joints are described in Figure 3. The Acclaim Motion Capture (AMC) file and the Acclaim Skeleton File (ASF) are shown in Figure 2b,c, respectively, for the HDM05 MoCap dataset [9], and in Figure 2e,f for the CMU MoCap dataset [56];
- In the second case, following Kucherenko et al. [35], we deal with the recovery of missing markers rather than joints, using the CMU MoCap dataset [56]. We also assess the performance of our approach on the HDM05 MoCap dataset [9], in which the number of markers varies from 40 to 50. The markers (c3d file) for the HDM05 and CMU MoCap datasets are shown in Figure 2a,d, respectively.
4.1. Parameters
4.1.1. GPU Memory
4.1.2. CUBLAS API
4.1.3. Nearest Neighbors
4.1.4. Principal Components
4.1.5. Impact of Error Terms
4.2. Comparison with State-of-the-Art Methods
4.2.1. Missing Joints
4.2.2. Missing Markers
4.2.3. Parallel vs. Serial
4.3. Controlled Experiments
4.3.1. Impact of Missing Joints
4.3.2. Impact of Missing Body Parts
4.4. Qualitative Evaluation
5. Limitations
6. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Meyer, J.; Kuderer, M.; Muller, J.; Burgard, W. Online marker labeling for fully automatic skeleton tracking in optical motion capture. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 5652–5657.
- Schubert, T.; Eggensperger, K.; Gkogkidis, A.; Hutter, F.; Ball, T.; Burgard, W. Automatic bone parameter estimation for skeleton tracking in optical motion capture. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 5548–5554.
- Sedmidubsky, J.; Elias, P.; Zezula, P. Effective and efficient similarity searching in motion capture data. Multimed. Tools Appl. 2018, 77, 12073–12094.
- Yasin, H.; Hussain, M.; Weber, A. Keys for Action: An Efficient Keyframe-Based Approach for 3D Action Recognition Using a Deep Neural Network. Sensors 2020, 20, 2226.
- Perepichka, M.; Holden, D.; Mudur, S.P.; Popa, T. Robust Marker Trajectory Repair for MOCAP Using Kinematic Reference. In Proceedings of the Motion, Interaction and Games, Newcastle upon Tyne, UK, 28–30 October 2019; Association for Computing Machinery: New York, NY, USA, 2019.
- Tits, M.; Tilmanne, J.; Dutoit, T. Robust and automatic motion-capture data recovery using soft skeleton constraints and model averaging. PLoS ONE 2018, 13, e0199744.
- Xia, G.; Sun, H.; Chen, B.; Liu, Q.; Feng, L.; Zhang, G.; Hang, R. Nonlinear Low-Rank Matrix Completion for Human Motion Recovery. IEEE Trans. Image Process. 2018, 27, 3011–3024.
- Cui, Q.; Sun, H.; Li, Y.; Kong, Y. A Deep Bi-directional Attention Network for Human Motion Recovery. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, Macao, China, 10–16 August 2019; pp. 701–707.
- Müller, M.; Röder, T.; Clausen, M.; Eberhardt, B.; Krüger, B.; Weber, A. Documentation Mocap Database HDM05; Technical Report CG-2007-2; Universität Bonn: Bonn, Germany, 2007.
- Xiao, Q.; Li, J.; Xiao, Q. Human Motion Capture Data Retrieval Based on Quaternion and EMD. In Proceedings of the 2013 5th International Conference on Intelligent Human-Machine Systems and Cybernetics, Hangzhou, China, 26–27 August 2013; Volume 1, pp. 517–520.
- Bernard, J.; Wilhelm, N.; Krüger, B.; May, T.; Schreck, T.; Kohlhammer, J. Motionexplorer: Exploratory search in human motion capture data based on hierarchical aggregation. IEEE Trans. Vis. Comput. Graph. 2013, 19, 2257–2266.
- Vögele, A.; Krüger, B.; Klein, R. Efficient unsupervised temporal segmentation of human motion. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, Eurographics Association, Copenhagen, Denmark, 21–23 July 2014; pp. 167–176.
- Krüger, B.; Vögele, A.; Willig, T.; Yao, A.; Klein, R.; Weber, A. Efficient unsupervised temporal segmentation of motion data. IEEE Trans. Multimed. 2017, 19, 797–812.
- Li, M.; Leung, H.; Liu, Z.; Zhou, L. 3D human motion retrieval using graph kernels based on adaptive graph construction. Comput. Graph. 2016, 54, 104–112.
- Plantard, P.; Shum, H.P.; Multon, F. Filtered pose graph for efficient kinect pose reconstruction. Multimed. Tools Appl. 2017, 76, 4291–4312.
- Panagiotakis, C.; Papoutsakis, K.; Argyros, A. A graph-based approach for detecting common actions in motion capture data and videos. Pattern Recognit. 2018, 79, 1–11.
- Yasin, H. Towards Efficient 3D Pose Retrieval and Reconstruction from 2D Landmarks. In Proceedings of the 19th IEEE International Symposium on Multimedia, ISM 2017, Taichung, Taiwan, 11–13 December 2017; pp. 169–176.
- Krüger, B.; Tautges, J.; Weber, A.; Zinke, A. Fast Local and Global Similarity Searches in Large Motion Capture Databases. In Proceedings of the 2010 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, Eurographics Association, SCA ’10, Madrid, Spain, 2–4 July 2010; pp. 1–10.
- Choi, B.; Komuravelli, R.; Lu, V.; Sung, H.; Bocchino, R.L.; Adve, S.V.; Hart, J.C. Parallel SAH kD tree construction. In Proceedings of the Conference on High Performance Graphics, Eurographics Association, Saarbrücken, Germany, 25–27 June 2010; pp. 77–86.
- Danilewski, P.; Popov, S.; Slusallek, P. Binned SAH Kd-Tree Construction on a GPU; Saarland University: Saarbrücken, Germany, 2010; pp. 1–15.
- Wu, Z.; Zhao, F.; Liu, X. SAH KD-tree construction on GPU. In Proceedings of the ACM SIGGRAPH Symposium on High Performance Graphics, ACM, Vancouver, BC, Canada, 5–7 August 2011; pp. 71–78.
- Yasin, H.; Iqbal, U.; Krüger, B.; Weber, A.; Gall, J. A Dual-Source Approach for 3D Pose Estimation from a Single Image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2016 (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 4948–4956.
- Yasin, H.; Krüger, B. An Efficient 3D Human Pose Retrieval and Reconstruction from 2D Image-Based Landmarks. Sensors 2021, 21, 2415.
- Yasin, H.; Hayat, S. DeepSegment: Segmentation of motion capture data using deep convolutional neural network. Image Vis. Comput. 2021, 109, 104147.
- Hu, L.; Nooshabadi, S.; Ahmadi, M. Massively parallel KD-tree construction and nearest neighbor search algorithms. In Proceedings of the 2015 IEEE International Symposium on Circuits and Systems (ISCAS), Lisbon, Portugal, 24–27 May 2015; pp. 2752–2755.
- Wehr, D.; Radkowski, R. Parallel kd-Tree Construction on the GPU with an Adaptive Split and Sort Strategy. Int. J. Parallel Program. 2018, 46, 1139–1156.
- Sedmidubsky, J.; Elias, P.; Budikova, P.; Zezula, P. Content-Based Management of Human Motion Data: Survey and Challenges. IEEE Access 2021, 9, 64241–64255.
- Lv, N.; Wang, Y.; Feng, Z.; Peng, J. Deep Hashing for Motion Capture Data Retrieval. In Proceedings of the ICASSP 2021–2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 6–11 June 2021; pp. 2215–2219.
- Piazza, T.; Lundström, J.; Kunz, A.M.; Fjeld, M. Predicting Missing Markers in Real-Time Optical Motion Capture. In Proceedings of the Second 3D Physiological Human Workshop, 3DPH 2009, Zermatt, Switzerland, 29 November–2 December 2009.
- Baumann, J.; Krüger, B.; Zinke, A.; Weber, A. Data-Driven Completion of Motion Capture Data. In Proceedings of the Workshop on Virtual Reality Interaction and Physical Simulation (VRIPHYS), Eurographics Association, Lyon, France, 5–6 December 2011.
- Aristidou, A.; Lasenby, J. Real-time marker prediction and CoR estimation in optical motion capture. Vis. Comput. 2013, 29, 7–26.
- Peng, S.J.; He, G.F.; Liu, X.; Wang, H.Z. Hierarchical block-based incomplete human mocap data recovery using adaptive nonnegative matrix factorization. Comput. Graph. 2015, 49, 10–23.
- Wang, Z.; Feng, Y.; Liu, S.; Xiao, J.; Yang, X.; Zhang, J.J. A 3D human motion refinement method based on sparse motion bases selection. In Proceedings of the 29th International Conference on Computer Animation and Social Agents, ACM, Geneva, Switzerland, 23–25 May 2016; pp. 53–60.
- Hu, W.; Wang, Z.; Liu, S.; Yang, X.; Yu, G.; Zhang, J.J. Motion Capture Data Completion via Truncated Nuclear Norm Regularization. IEEE Signal Process. Lett. 2018, 25, 258–262.
- Kucherenko, T.; Beskow, J.; Kjellström, H. A Neural Network Approach to Missing Marker Reconstruction. arXiv 2018, arXiv:1803.02665.
- Wiley, D.J.; Hahn, J.K. Interpolation Synthesis for Articulated Figure Motion. In Proceedings of the IEEE 1997 Annual International Symposium on Virtual Reality, VRAIS ’97, Albuquerque, NM, USA, 1–5 March 1997; IEEE Computer Society: Washington, DC, USA, 1997; p. 156.
- Liu, G.; Zhang, J.; Wang, W.; McMillan, L. Human Motion Estimation from a Reduced Marker Set. In Proceedings of the ACM SIGGRAPH 2006 Sketches, Boston, MA, USA, 30 July–3 August 2006; Association for Computing Machinery: New York, NY, USA, 2006.
- Liu, G.; McMillan, L. Estimation of Missing Markers in Human Motion Capture. Vis. Comput. 2006, 22, 721–728.
- Burke, M.; Lasenby, J. Estimating missing marker positions using low dimensional Kalman smoothing. J. Biomech. 2016, 49, 1854–1858.
- Park, S.I.; Hodgins, J.K. Capturing and Animating Skin Deformation in Human Motion. ACM Trans. Graph. 2006, 25, 881–889.
- Gløersen, Ø.; Federolf, P. Predicting Missing Marker Trajectories in Human Motion Data Using Marker Intercorrelations. PLoS ONE 2016, 11, e0152616.
- Li, Z.; Yu, H.; Kieu, H.D.; Vuong, T.L.; Zhang, J.J. PCA-Based Robust Motion Data Recovery. IEEE Access 2020, 8, 76980–76990.
- Kieu, H.D.; Yu, H.; Li, Z.; Zhang, J.J. Locally weighted PCA regression to recover missing markers in human motion data. PLoS ONE 2022, 17, e0272407.
- Li, S.; Zhou, Y.; Zhu, H.; Xie, W.; Zhao, Y.; Liu, X. Bidirectional recurrent autoencoder for 3D skeleton motion data refinement. Comput. Graph. 2019, 81, 92–103.
- Li, S.J.; Zhu, H.S.; Zheng, L.P.; Li, L. A Perceptual-Based Noise-Agnostic 3D Skeleton Motion Data Refinement Network. IEEE Access 2020, 8, 52927–52940.
- Lai, R.Y.Q.; Yuen, P.C.; Lee, K.K.W. Motion Capture Data Completion and Denoising by Singular Value Thresholding. In Proceedings of the Eurographics 2011, Llandudno, UK, 11–15 April 2011.
- Li, L.; McCann, J.; Pollard, N.; Faloutsos, C. BoLeRO: A Principled Technique for Including Bone Length Constraints in Motion Capture Occlusion Filling. In Proceedings of the 2010 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA ’10, Madrid, Spain, 2–4 July 2010; pp. 179–188.
- Tan, C.H.; Hou, J.; Chau, L.P. Motion capture data recovery using skeleton constrained singular value thresholding. Vis. Comput. 2015, 31, 1521–1532.
- Cai, J.F.; Candès, E.J.; Shen, Z. A Singular Value Thresholding Algorithm for Matrix Completion. SIAM J. Optim. 2010, 20, 1956–1982.
- Aristidou, A.; Cohen-Or, D.; Hodgins, J.; Shamir, A. Self-similarity Analysis for Motion Capture Cleaning. Comput. Graph. Forum 2018, 37, 297–309.
- Fragkiadaki, K.; Levine, S.; Felsen, P.; Malik, J. Recurrent Network Models for Human Dynamics. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; IEEE Computer Society: Washington, DC, USA, 2015.
- Mall, U.; Lal, G.R.; Chaudhuri, S.; Chaudhuri, P. A Deep Recurrent Framework for Cleaning Motion Capture Data. arXiv 2017, arXiv:1712.03380.
- Cui, Q.; Sun, H.; Li, Y.; Kong, Y. Efficient human motion recovery using bidirectional attention network. Neural Comput. Appl. 2019, 32, 10127–10142.
- Cui, Q.; Sun, H.; Kong, Y.; Zhang, X.; Li, Y. Efficient human motion prediction using temporal convolutional generative adversarial network. Inf. Sci. 2021, 545, 427–447.
- Barrachina Mir, S.; Castillo, M.; Igual, F.; Mayo, R.; Quintana-Orti, E.S. Evaluation and tuning of the Level 3 CUBLAS for graphics processors. In Proceedings of the 2008 IEEE International Symposium on Parallel and Distributed Processing, Miami, FL, USA, 14–18 April 2008; pp. 1–8.
- CMU. CMU Motion Capture Database. 2003. Available online: http://mocap.cs.cmu.edu/ (accessed on 16 September 2021).
- Singh, D.P.; Joshi, I.; Choudhary, J. Survey of GPU based sorting algorithms. Int. J. Parallel Program. 2018, 46, 1017–1034.
| Sequence | Method |  |  |  |  |  | Mean |  |  |  |  |  | Mean |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 14_01 (Boxing) | BSVT [46] | 0.052 | 0.051 | 0.053 | 0.056 | 0.061 | 0.055 ± 0.004 | 0.098 | 0.118 | 0.138 | 0.128 | 0.140 | 0.124 ± 0.017 |
|  | SCSVT [48] | 0.046 | 0.043 | 0.045 | 0.045 | 0.048 | 0.045 ± 0.002 | 0.085 | 0.104 | 0.118 | 0.102 | 0.105 | 0.103 ± 0.012 |
|  | BoLeRO [47] | 0.068 | 0.066 | 0.068 | 0.063 | 0.072 | 0.067 ± 0.003 | 0.100 | 0.112 | 0.143 | 0.112 | 0.130 | 0.119 ± 0.017 |
|  | Our | 0.025 | 0.027 | 0.043 | 0.061 | 0.063 | 0.044 ± 0.018 | 0.054 | 0.054 | 0.065 | 0.076 | 0.080 | 0.066 ± 0.012 |
| 85_02 (Jumptwist) | BSVT [46] | 0.281 | 0.613 | 1.563 | 2.038 | 2.499 | 1.399 ± 0.937 | 0.716 | 1.119 | 2.204 | 3.060 | 3.745 | 2.169 ± 1.274 |
|  | SCSVT [48] | 0.120 | 0.191 | 0.230 | 0.204 | 0.249 | 0.199 ± 0.050 | 0.248 | 0.315 | 0.354 | 0.485 | 0.321 | 0.344 ± 0.087 |
|  | BoLeRO [47] | 0.145 | 0.154 | 0.214 | 0.266 | 0.326 | 0.221 ± 0.076 | 0.290 | 0.340 | 0.366 | 0.538 | 0.397 | 0.386 ± 0.094 |
|  | Our | 0.042 | 0.027 | 0.044 | 0.010 | 0.011 | 0.027 ± 0.016 | 0.121 | 0.091 | 0.031 | 0.026 | 0.030 | 0.060 ± 0.044 |
| 143_04 (Run fig 8) | BSVT [46] | 0.075 | 0.136 | 0.510 | 0.380 | 1.261 | 0.472 ± 0.475 | 0.156 | 0.340 | 1.148 | 1.634 | 3.280 | 1.312 ± 1.253 |
|  | SCSVT [48] | 0.059 | 0.088 | 0.124 | 0.128 | 0.155 | 0.111 ± 0.038 | 0.117 | 0.193 | 0.220 | 0.270 | 0.290 | 0.218 ± 0.068 |
|  | BoLeRO [47] | 0.057 | 0.062 | 0.155 | 0.135 | 0.141 | 0.110 ± 0.047 | 0.105 | 0.143 | 0.174 | 0.314 | 0.403 | 0.228 ± 0.126 |
|  | Our | 0.039 | 0.038 | 0.037 | 0.050 | 0.051 | 0.043 ± 0.007 | 0.050 | 0.049 | 0.058 | 0.073 | 0.069 | 0.060 ± 0.011 |
| 49_02 (Jumping) | BSVT [46] | 0.046 | 0.065 | 0.111 | 0.091 | 0.238 | 0.110 ± 0.076 | 0.128 | 0.182 | 0.239 | 0.255 | 0.423 | 0.245 ± 0.111 |
|  | SCSVT [48] | 0.039 | 0.053 | 0.086 | 0.068 | 0.093 | 0.068 ± 0.022 | 0.119 | 0.140 | 0.132 | 0.146 | 0.153 | 0.138 ± 0.013 |
|  | BoLeRO [47] | 0.057 | 0.058 | 0.074 | 0.064 | 0.079 | 0.066 ± 0.010 | 0.103 | 0.099 | 0.139 | 0.133 | 0.233 | 0.141 ± 0.054 |
|  | Our | 0.056 | 0.062 | 0.095 | 0.111 | 0.130 | 0.091 ± 0.032 | 0.130 | 0.139 | 0.131 | 0.131 | 0.151 | 0.136 ± 0.009 |
| 135_02 (Martial Arts) | BSVT [46] | 0.097 | 0.120 | 0.168 | 0.166 | 0.209 | 0.152 ± 0.044 | 0.201 | 0.246 | 0.404 | 0.428 | 0.453 | 0.346 ± 0.115 |
|  | SCSVT [48] | 0.083 | 0.096 | 0.139 | 0.131 | 0.151 | 0.120 ± 0.029 | 0.181 | 0.211 | 0.333 | 0.247 | 0.280 | 0.250 ± 0.059 |
|  | BoLeRO [47] | 0.138 | 0.144 | 0.144 | 0.161 | 0.195 | 0.156 ± 0.023 | 0.216 | 0.228 | 0.287 | 0.266 | 0.282 | 0.256 ± 0.032 |
|  | Our | 0.073 | 0.086 | 0.129 | 0.175 | 0.219 | 0.136 ± 0.061 | 0.112 | 0.133 | 0.167 | 0.204 | 0.283 | 0.180 ± 0.067 |
| 135_11 (Kicking) | BSVT [46] | 0.057 | 0.071 | 0.144 | 0.261 | 0.288 | 0.164 ± 0.106 | 0.132 | 0.180 | 0.326 | 0.500 | 0.608 | 0.349 ± 0.204 |
|  | SCSVT [48] | 0.052 | 0.065 | 0.095 | 0.095 | 0.104 | 0.082 ± 0.022 | 0.125 | 0.150 | 0.184 | 0.160 | 0.157 | 0.155 ± 0.021 |
|  | BoLeRO [47] | 0.054 | 0.058 | 0.081 | 0.099 | 0.074 | 0.073 ± 0.018 | 0.092 | 0.102 | 0.383 | 0.154 | 0.114 | 0.169 ± 0.122 |
|  | Our | 0.071 | 0.112 | 0.165 | 0.258 | 0.302 | 0.182 ± 0.097 | 0.083 | 0.124 | 0.240 | 0.265 | 0.341 | 0.210 ± 0.106 |
| 61_05 (Salsa) | BSVT [46] | 0.070 | 0.086 | 0.102 | 0.167 | 0.174 | 0.120 ± 0.048 | 0.159 | 0.206 | 0.233 | 0.390 | 0.344 | 0.266 ± 0.097 |
|  | SCSVT [48] | 0.060 | 0.063 | 0.080 | 0.100 | 0.102 | 0.081 ± 0.020 | 0.137 | 0.168 | 0.144 | 0.185 | 0.179 | 0.163 ± 0.021 |
|  | BoLeRO [47] | 0.073 | 0.085 | 0.086 | 0.100 | 0.112 | 0.091 ± 0.015 | 0.123 | 0.145 | 0.156 | 0.151 | 0.209 | 0.157 ± 0.032 |
|  | Our | 0.042 | 0.048 | 0.054 | 0.084 | 0.100 | 0.066 ± 0.025 | 0.068 | 0.067 | 0.086 | 0.107 | 0.121 | 0.090 ± 0.024 |
| 88_04 (Acrobatics) | BSVT [46] | 0.131 | 0.202 | 0.274 | 0.342 | 0.347 | 0.259 ± 0.093 | 0.363 | 0.617 | 0.576 | 1.036 | 0.949 | 0.708 ± 0.279 |
|  | SCSVT [48] | 0.101 | 0.123 | 0.146 | 0.147 | 0.141 | 0.132 ± 0.020 | 0.265 | 0.432 | 0.317 | 0.314 | 0.261 | 0.318 ± 0.070 |
|  | BoLeRO [47] | 0.155 | 0.143 | 0.201 | 0.178 | 0.177 | 0.171 ± 0.023 | 0.270 | 0.286 | 0.341 | 0.302 | 0.297 | 0.299 ± 0.026 |
|  | Our | 0.090 | 0.103 | 0.148 | 0.211 | 0.244 | 0.159 ± 0.067 | 0.152 | 0.170 | 0.205 | 0.232 | 0.333 | 0.218 ± 0.071 |
| Mean | BSVT [46] | 0.101 | 0.179 | 0.366 | 0.438 | 0.635 | 0.344 ± 0.212 | 0.244 | 0.376 | 0.659 | 0.929 | 1.243 | 0.690 ± 0.407 |
|  | SCSVT [48] | 0.070 | 0.090 | 0.118 | 0.115 | 0.130 | 0.105 ± 0.024 | 0.160 | 0.214 | 0.225 | 0.239 | 0.218 | 0.211 ± 0.032 |
|  | BoLeRO [47] | 0.093 | 0.096 | 0.128 | 0.133 | 0.147 | 0.119 ± 0.024 | 0.162 | 0.182 | 0.249 | 0.246 | 0.258 | 0.229 ± 0.044 |
|  | Our | 0.055 | 0.063 | 0.089 | 0.120 | 0.140 | 0.093 ± 0.036 | 0.096 | 0.103 | 0.123 | 0.139 | 0.176 | 0.128 ± 0.032 |
(a) 10% Missing Markers

| Method | Training Dataset | BasketBall | Boxing | Jump Turn | Mean |
|---|---|---|---|---|---|
| PCA (PC-18) | CMU |  |  |  |  |
| PCA (PC-18) | HDM05 |  |  |  |  |
| Interpolation * | CMU |  |  |  |  |
| Peng et al. [32] * | CMU | n.a. | n.a. | n.a. | n.a. |
| Burke and Lasenby [39] * | CMU |  |  |  |  |
| Kucherenko et al. [35] (Window) | CMU |  |  |  |  |
| Kucherenko et al. [35] (LSTM) | CMU |  |  |  |  |
| Ours-1 | CMU |  |  |  |  |
| Ours-2 | HDM05 |  |  |  |  |

(b) 20% Missing Markers

| Method | Training Dataset | BasketBall | Boxing | Jump Turn | Mean |
|---|---|---|---|---|---|
| PCA (PC-18) | CMU |  |  |  |  |
| PCA (PC-18) | HDM05 |  |  |  |  |
| Interpolation * | CMU |  |  |  |  |
| Peng et al. [32] * | CMU | n.a. |  |  |  |
| Burke and Lasenby [39] * | CMU |  |  |  |  |
| Kucherenko et al. [35] (Window) | CMU |  |  |  |  |
| Kucherenko et al. [35] (LSTM) | CMU |  |  |  |  |
| Ours-1 | CMU |  |  |  |  |
| Ours-2 | HDM05 |  |  |  |  |

(c) 30% Missing Markers

| Method | Training Dataset | BasketBall | Boxing | Jump Turn | Mean |
|---|---|---|---|---|---|
| PCA (PC-18) | CMU |  |  |  |  |
| PCA (PC-18) | HDM05 |  |  |  |  |
| Interpolation * | CMU |  |  |  |  |
| Peng et al. [32] * | CMU | n.a. | 4.9 | 4.63 |  |
| Burke and Lasenby [39] * | CMU |  |  |  |  |
| Kucherenko et al. [35] (Window) | CMU |  |  |  |  |
| Kucherenko et al. [35] (LSTM) | CMU |  |  |  |  |
| Ours-1 | CMU |  |  |  |  |
| Ours-2 | HDM05 |  |  |  |  |

(d) Mean

| Method | Training Dataset | BasketBall | Boxing | Jump Turn | Mean |
|---|---|---|---|---|---|
| PCA (PC-18) | CMU |  |  |  |  |
| PCA (PC-18) | HDM05 |  |  |  |  |
| Interpolation | CMU |  |  |  |  |
| Peng et al. [32] | CMU | n.a. | 4.65 | 5.01 | 4.83 |
| Burke and Lasenby [39] | CMU |  |  |  |  |
| Kucherenko et al. [35] (Window) | CMU |  |  |  |  |
| Kucherenko et al. [35] (LSTM) | CMU |  |  |  |  |
| Ours-1 | CMU |  |  |  |  |
| Ours-2 | HDM05 |  |  |  |  |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yasin, H.; Ghani, S.; Krüger, B. An Effective and Efficient Approach for 3D Recovery of Human Motion Capture Data. Sensors 2023, 23, 3664. https://doi.org/10.3390/s23073664