Smartphone-Based Markerless Motion Capture for Accessible Rehabilitation: A Computer Vision Study
Abstract
1. Introduction
2. State of the Art
2.1. Historical Overview
2.2. Accessible Motion Tracking Systems for Intelligent Rehabilitation
2.3. Data Acquisition
2.4. Feature Engineering
2.5. Comparison and Assessment
2.6. Commercial Systems
2.7. Available Datasets for Modeling Computer Vision Systems
2.8. Takeaways and Conclusions
3. Implementation
3.1. Architecture
3.1.1. Data Acquisition: Smartphone Camera
3.1.2. Feature Engineering: MediaPipe’s Pose Landmarker (BlazePose)
3.1.3. Comparison and Assessment: Dynamic Time Warping
3.2. Dataset
3.3. System Development
3.3.1. Comparing Two Videos
- Start by focusing on a single exercise and identify the relevant joints for that exercise.
- Process one input video:
  - (a) Read the video.
  - (b) Extract the values of the relevant joint positions for each frame.
  - (c) Create and save a sequence of these values.
- Process another input video in the same fashion.
- Calculate the DTW distance between the two sequences.
- Head: Landmarks 0 to 10.
- Trunk: Landmarks 11, 12, 23, 24.
- Shoulder: Landmarks 12, 14, 16.
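The steps above can be sketched in plain Python. The joint-group indices are the BlazePose landmark groups listed here; `dtw_distance` is a minimal dynamic-programming implementation of DTW over 1-D sequences (the study's actual pipeline uses MediaPipe and FastDTW-style tooling, so this is an illustrative sketch rather than the authors' code, and the variable names are ours):

```python
import math

# Joint groups from Section 3.3.1 (BlazePose landmark indices).
JOINT_GROUPS = {
    "head": list(range(0, 11)),   # landmarks 0-10
    "trunk": [11, 12, 23, 24],
    "shoulder": [12, 14, 16],
}

def dtw_distance(seq_a, seq_b):
    """Classic DTW (Sakoe & Chiba) between two 1-D sequences."""
    n, m = len(seq_a), len(seq_b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            # Extend the cheapest of the three admissible predecessors.
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Toy usage: two y-coordinate traces of the same landmark,
# recorded at slightly different speeds.
reference = [0.10, 0.20, 0.40, 0.20, 0.10]
patient = [0.10, 0.10, 0.20, 0.40, 0.20, 0.10]
score = dtw_distance(reference, patient)
```

A smaller DTW distance means the two trajectories are more similar after optimal temporal alignment, which is what makes it suitable for comparing executions of the same exercise at different speeds.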
3.3.2. Generalizing for the Whole Dataset
4. System Evaluation and Results
4.1. Data Normalization
4.2. System Evaluation
4.2.1. Why Range-of-Motion and Trajectory Similarity Works
4.2.2. Evaluation Metrics
4.2.3. Outliers
4.3. Results
4.3.1. Diagonal Exercise
- Trunk: The values obtained by the metrics for the Trunk joint group of the Diagonal exercise are shown in Table 6. In Figure 13, we can see the error distribution for both sets of predictions. With min-max scaling, the Trunk joint group obtained an MAE of 17.10, an RMSE of 19.12, and a CC of 0.47. The MAE of 17.10 reveals that, on average, the predicted scores for the Trunk joint group deviated by 17.10 points from the QTM scores. As with the Head group, while there are some larger errors, the majority are smaller, as indicated by the similar values of MAE and RMSE. The CC of 0.47 indicates a positive correlation, stronger than that observed for the Head joint group. The variation in our predictions aligns with the variation in the actual values, suggesting a reasonably strong directional relationship. In contrast, the Z-score results for the Trunk joint group show a higher MAE of 31.11 and a larger RMSE of 37.67 compared to min-max scaling. This notable increase in both the magnitude and the number of errors highlights the difference between min-max and Z-score normalization and can be better understood by looking at Figure 13. Again, the CC value is the same.
- Shoulder: The evaluation metrics for the Shoulder joint group during the Diagonal exercise are illustrated in Table 7, and the error distribution for both sets of predictions is represented in Figure 13. Under min-max scaling, the Shoulder joint group obtained an MAE of 14.44, an RMSE of 15.59, and a CC of −0.37. The MAE is similar to the one obtained by the Head and represents an average deviation of 14.44 points from the QTM scores. The values of MAE and RMSE suggest a reasonable level of accuracy, with few large errors, given how close the two values are. The CC of −0.37 indicates a negative correlation, which means that when the values in our predictions increase, the actual values decrease, which is not ideal. However, this correlation is weak, which means not much of the variation in the actual values can be explained by the variation in our predictions. Following the trend of the other two joint groups, the Z-score results for the Shoulder joint group show a substantially higher MAE of 57.59 and a larger RMSE of 62.97. In this case, the difference is even greater, meaning a considerable increase in errors, as can be seen in Figure 14.
4.3.2. Rotation Exercise
- Head: The values obtained by the metrics for the Head joint group of the Rotation exercise are shown in Table 8. Figure 15 shows the error distribution for the Head joint group during the Rotation exercise. The MAE of 20.41 implies that, on average, the predicted scores for the Head joint group deviated by 20.41 points from the QTM scores. The RMSE, measuring the square root of the average squared errors, is slightly larger, indicating the presence of some large errors. The CC of 0.22 indicates a positive but weak correlation. This suggests a directional relationship between our predictions and the actual values, but the correlation strength is low. Utilizing Z-score normalization, the Head joint group achieved an MAE of 18.86 and an RMSE of 27.11. While the MAE shows a slight improvement in prediction accuracy compared to min-max scaling, the RMSE reflects an increase in the magnitude of errors. In Figure 15, we can see that Z-score shows, in general, smaller errors than min-max, explaining the lower MAE, but, as we can see, it also shows one very large error, penalized by RMSE. As mentioned in the previous sections, the CC is the same for Z-score and min-max results.
- Trunk: Table 9 displays the evaluation metrics for the Trunk joint group during the Rotation exercise, and Figure 16 shows the corresponding error distribution. With min-max scaling, the Trunk joint group achieved an MAE of 32.30, an RMSE of 33.69, and a CC of 0.51. The MAE of 32.30 is the largest of any joint group under min-max scaling, indicating larger errors on average. An RMSE close to the MAE suggests that no single error is much larger than the rest; however, since the MAE itself is large, the errors are uniformly substantial. The CC of 0.51 reveals a moderately strong positive correlation, suggesting a reasonably strong directional relationship between our predictions and the actual values for the Trunk joint group. Switching to Z-score normalization, the Trunk joint group achieved an MAE of 27.30 and an RMSE of 32.25. Notably, Z-score normalization decreased the MAE, indicating an improvement in prediction accuracy, while an RMSE close to the min-max value implies a comparable distribution of errors. Figure 16 shows that Z-score produces many more errors close to 0 but still some very large ones, while min-max's errors are more uniformly large. Once more, the CC is the same.
- Shoulder: The evaluation metrics for the Shoulder joint group during the Rotation exercise are illustrated in Table 10. Figure 17 displays the error distribution for the Shoulder joint group during the Rotation exercise. When employing min-max scaling, the Shoulder joint group obtained an MAE of 21.45, an RMSE of 23.19, and a CC of −0.37. Once again, the value of the MAE indicates that, on average, the predicted scores for the Shoulder joint group deviated by 21.45 points from the QTM scores. The RMSE being larger than the MAE suggests the presence of reasonably large errors. As for the Shoulder group of the Diagonal exercise, the CC of −0.37 indicates a weak negative correlation, revealing an inverse relationship between our predictions and the actual values for the Shoulder joint group. Upon employing Z-score normalization, the Shoulder joint group achieved an MAE of 30.30 and an RMSE of 36.83. Z-score normalization increased both the MAE, indicating a decrease in prediction accuracy, and the RMSE, revealing the presence of larger errors. Figure 17 clarifies the difference between the MAE and RMSE values: min-max yields not only smaller errors on average but also maximum errors smaller than those obtained with Z-score. Once again, the value of the CC is not affected by normalization.
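The three metrics reported throughout this section (MAE, RMSE, and Pearson CC) can be reproduced with a few lines of Python. This is a generic sketch with our own function and variable names, not the authors' evaluation code:

```python
import math

def evaluation_metrics(predicted, actual):
    """Return (MAE, RMSE, Pearson CC) for paired score lists."""
    n = len(predicted)
    errors = [p - a for p, a in zip(predicted, actual)]
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    # Pearson correlation coefficient between predictions and actual scores.
    mean_p = sum(predicted) / n
    mean_a = sum(actual) / n
    cov = sum((p - mean_p) * (a - mean_a) for p, a in zip(predicted, actual))
    std_p = math.sqrt(sum((p - mean_p) ** 2 for p in predicted))
    std_a = math.sqrt(sum((a - mean_a) ** 2 for a in actual))
    cc = cov / (std_p * std_a)
    return mae, rmse, cc

# Toy usage: predictions that are exactly double the actual scores
# give perfect correlation (CC = 1) despite nonzero MAE and RMSE.
mae, rmse, cc = evaluation_metrics([2.0, 4.0, 6.0], [1.0, 2.0, 3.0])
```

Since RMSE squares each error before averaging, a single large outlier inflates RMSE far more than MAE, which is why the text reads a large RMSE-to-MAE gap as evidence of a few big errors.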
4.4. Results Discussion
4.4.1. Overall Performance Trends
4.4.2. Exercise-Specific Observations
- Diagonal Exercise: Considering the min-max results for the Diagonal exercise, the Head group presents modest errors and exhibits a positive correlation which, despite being weak, suggests a consistent directional relationship. The Trunk group's errors are slightly larger, with a positive correlation, indicating reasonable prediction accuracy. The Shoulder group's metrics fall between those of the Head and Trunk groups; however, its negative correlation suggests that the predictions may not be related to the actual values. Overall, the min-max scores for this exercise obtained reasonable results, with an average MAE of 14.94 and an average RMSE of 16.74. Although the Head had the smallest errors, the values are too close to state a clear difference between any of the joint groups. With the Z-score scores, the errors were larger. While the Head and Trunk groups obtained MAE and RMSE values close to 30, which would already be worse than the min-max scores, the Shoulder group obtained an MAE and an RMSE close to 60, revealing that the average prediction was off by almost 60 points.
- Rotation Exercise: Regarding the min-max results for the Rotation exercise, the Head and Shoulder groups obtained similar error metrics, while the Trunk group obtained a higher degree of error. In terms of correlation, the trend from the Diagonal exercise continues, with the Head obtaining a weak positive correlation, the Trunk a reasonably high positive correlation, and the Shoulder a weak negative correlation. Compared with the Diagonal exercise, the min-max scores for this exercise were worse, with an average MAE of 24.72 and an average RMSE of 27.16, practically 10 points higher than the respective metrics for the Diagonal exercise. Interestingly, the Z-score scores performed comparably to, if not better than, the min-max scores in both the Head and Trunk groups, with slightly lower MAEs and similar RMSEs.
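A likely explanation for the recurring observation that the CC is identical under both normalization schemes: Pearson correlation is invariant to positive linear transformations of the scores, and both min-max scaling and Z-score standardization are linear maps of each sequence. A minimal sketch of the two schemes (helper names are ours):

```python
import statistics

def min_max(seq):
    # Rescale to [0, 1]; a positive linear transformation of the sequence.
    lo, hi = min(seq), max(seq)
    return [(x - lo) / (hi - lo) for x in seq]

def z_score(seq):
    # Center at 0 with unit (population) standard deviation; also linear.
    mu = statistics.mean(seq)
    sigma = statistics.pstdev(seq)
    return [(x - mu) / sigma for x in seq]

# Toy usage on raw DTW distances.
raw = [220.63, 258.58, 257.50, 272.44, 212.43]
scaled = min_max(raw)
standardized = z_score(raw)
```

Because both maps have the form a*x + b with a > 0, they can change error magnitudes (and hence MAE and RMSE) substantially while leaving the Pearson CC untouched.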
4.4.3. Discussion Summary
4.5. Statistical Analysis
Wilcoxon Signed-Rank Test
- For each pair of observations, calculate the absolute difference between the predicted and the actual score.
- Rank these absolute differences from smallest to largest, ignoring the signs.
- Taking the signs back into account, sum the ranks of the positive differences (R+) and, separately, the ranks of the negative differences (R−).
- Calculate the smaller of the two sums: R = min(R+, R−).
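The four steps above can be written directly in Python. This sketch computes only the test statistic R, using average ranks for ties and dropping zero differences; obtaining the p-value additionally requires the null distribution or a normal approximation (as provided by, e.g., scipy.stats.wilcoxon). The function name is ours:

```python
def wilcoxon_statistic(predicted, actual):
    """Wilcoxon signed-rank statistic R = min(R+, R-)."""
    # Step 1: signed differences; zero differences are conventionally dropped.
    diffs = [p - a for p, a in zip(predicted, actual) if p != a]
    # Step 2: rank absolute differences, averaging ranks over ties.
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j < len(order) and abs(diffs[order[j]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + 1 + j) / 2.0  # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[order[k]] = avg_rank
        i = j
    # Step 3: sum the ranks of the positive and negative differences.
    r_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    r_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    # Step 4: the statistic is the smaller of the two sums.
    return min(r_plus, r_minus)

# Toy usage: three predicted scores versus three actual scores.
r = wilcoxon_statistic([3, 1, 4], [1, 2, 2])
```

A small R means one sign dominates the ranked differences, i.e., the predictions are systematically above or below the actual scores.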
4.6. Summary
5. Conclusions
5.1. Summary of Main Takeaways
5.2. Opportunities for Improvement and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Machlin, S.; Chevan, J.; Yu, W.; Zodet, M. Determinants of Utilization and Expenditures for Episodes of Ambulatory Physical Therapy Among Adults. Phys. Ther. 2011, 91, 1018–1029.
- Mehmood, F.; Mumtaz, N.; Mehmood, A. Next-Generation Tools for Patient Care and Rehabilitation: A Review of Modern Innovations. Actuators 2025, 14, 133.
- Lopes, M.; Melo, A.S.C.; Cunha, B.; Sousa, A.S.P. Smartphone-Based Video Analysis for Guiding Shoulder Therapeutic Exercises: Concurrent Validity for Movement Quality Control. Appl. Sci. 2023, 13, 12282.
- Colyer, S.L.; Evans, M.; Cosker, D.P.; Salo, A.I.T. A Review of the Evolution of Vision-Based Motion Analysis and the Integration of Advanced Computer Vision Methods Towards Developing a Markerless System. Sport. Med.-Open 2018, 4, 24.
- Hellsten, T.; Karlsson, J.; Shamsuzzaman, M.; Pulkkis, G. The Potential of Computer Vision-Based Marker-Less Human Motion Analysis for Rehabilitation. Rehabil. Process Outcome 2021, 10, 117957272110223.
- Lobo, P.; Morais, P.; Murray, P.; Vilaça, J.L. Trends and Innovations in Wearable Technology for Motor Rehabilitation, Prediction, and Monitoring: A Comprehensive Review. Sensors 2024, 24, 7973.
- Debnath, B.; O’Brien, M.; Yamaguchi, M.; Behera, A. A review of computer vision-based approaches for physical rehabilitation and assessment. Multimed. Syst. 2022, 28, 209–239.
- Chen, Y.L.; Liu, C.H.; Yu, C.W.; Lee, P.; Kuo, Y.W. An Upper Extremity Rehabilitation System Using Efficient Vision-Based Action Identification Techniques. Appl. Sci. 2018, 8, 1161.
- Su, C.J.; Chiang, C.Y.; Huang, J.Y. Kinect-enabled home-based rehabilitation system using Dynamic Time Warping and fuzzy logic. Appl. Soft Comput. 2014, 22, 652–666.
- Capecci, M.; Ceravolo, M.G.; Ferracuti, F.; Iarlori, S.; Kyrki, V.; Monteriù, A.; Romeo, L.; Verdini, F. A Hidden Semi-Markov Model based approach for rehabilitation exercise assessment. J. Biomed. Inform. 2018, 78, 1–11.
- Dorado, J.; del Toro Garcia, X.; Santofimia, M.; Parreño-Torres, A.; Cantarero, R.; Rubio Ruiz, A.; López, J.C. A computer-vision-based system for at-home rheumatoid arthritis rehabilitation. Int. J. Distrib. Sens. Netw. 2019, 15, 155014771987564.
- Zhi, Y.X.; Lukasik, M.; Li, M.H.; Dolatabadi, E.; Wang, R.H.; Taati, B. Automatic Detection of Compensation During Robotic Stroke Rehabilitation Therapy. IEEE J. Transl. Eng. Health Med. 2018, 6, 2100107.
- Ciabattoni, L.; Ferracuti, F.; Iarlori, S.; Longhi, S.; Romeo, L. A novel computer vision based e-rehabilitation system: From gaming to therapy support. In Proceedings of the 2016 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 7–11 January 2016; pp. 43–44.
- Francisco, J.A.; Rodrigues, P.S. Computer Vision Based on a Modular Neural Network for Automatic Assessment of Physical Therapy Rehabilitation Activities. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 31, 2174–2183.
- Leechaikul, N.; Charoenseang, S. Computer Vision Based Rehabilitation Assistant System; Springer International Publishing: Cham, Switzerland, 2021; pp. 408–414.
- Yang, H.; Wang, Y.; Shi, Y. Rehabilitation Training Evaluation and Correction System Based on BlazePose. In Proceedings of the 2022 IEEE 4th Eurasia Conference on IOT, Communication and Engineering (ECICE), Yunlin, Taiwan, 28–30 October 2022; pp. 27–30.
- Abbas, A.; Yadav, V.; Smith, E.; Ramjas, E.; Rutter, S.; Benavidez, C.; Koesmahargyo, V.; Zhang, L.; Guan, L.; Rosenfield, P.; et al. Computer Vision-Based Assessment of Motor Functioning in Schizophrenia: Use of Smartphones for Remote Measurement of Schizophrenia Symptomatology. Digit. Biomark. 2021, 5, 29–36.
- Ferrer-Mallol, E.; Matthews, C.; Stoodley, M.; Gaeta, A.; George, E.; Reuben, E.; Johnson, A.; Davies, E.H. Patient-led development of digital endpoints and the use of computer vision analysis in assessment of motor function in rare diseases. Front. Pharmacol. 2022, 13, 916714.
- Li, M.H.; Mestre, T.A.; Fox, S.H.; Taati, B. Vision-based assessment of parkinsonism and levodopa-induced dyskinesia with pose estimation. J. NeuroEng. Rehabil. 2018, 15, 97.
- Liao, Y.; Vakanski, A.; Xian, M. A Deep Learning Framework for Assessing Physical Rehabilitation Exercises. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 468–477.
- Mousavi Hondori, H.; Khademi, M. A Review on Technical and Clinical Impact of Microsoft Kinect on Physical Therapy and Rehabilitation. J. Med. Eng. 2014, 2014, 846514.
- Microsoft. Kinect. 2022. Available online: https://learn.microsoft.com/en-us/windows/apps/design/devices/kinect-for-windows (accessed on 30 May 2023).
- Mourcou, Q.; Fleury, A.; Diot, B.; Franco, C.; Vuillerme, N. Mobile Phone-Based Joint Angle Measurement for Functional Assessment and Rehabilitation of Proprioception. BioMed Res. Int. 2015, 2015, 328142.
- Lam, W.W.T.; Tang, Y.M.; Fong, K.N.K. A systematic review of the applications of markerless motion capture (MMC) technology for clinical measurement in rehabilitation. J. NeuroEng. Rehabil. 2023, 20, 57.
- Cao, Z.; Hidalgo, G.; Simon, T.; Wei, S.E.; Sheikh, Y. OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. arXiv 2019, arXiv:1812.08008.
- Bazarevsky, V.; Grishchenko, I.; Raveendran, K.; Zhu, T.; Zhang, F.; Grundmann, M. BlazePose: On-device Real-time Body Pose tracking. arXiv 2020, arXiv:2006.10204.
- Falahati, S. OpenNI Cookbook; Packt Publishing: Birmingham, UK, 2013.
- Kendall, A.; Grimes, M.; Cipolla, R. PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization. arXiv 2016, arXiv:1505.07427.
- Wei, S.E.; Ramakrishna, V.; Kanade, T.; Sheikh, Y. Convolutional Pose Machines. arXiv 2016, arXiv:1602.00134.
- Baltrusaitis, T.; Robinson, P.; Morency, L.P. OpenFace: An open source facial behavior analysis toolkit. In Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Placid, NY, USA, 7–10 March 2016.
- Exer. Exer Health. 2023. Available online: https://www.exer.ai/product/health (accessed on 30 May 2023).
- Pereira, B.; Cunha, B.; Viana, P.; Lopes, M.; Melo, A.S.C.; Sousa, A.S.P. A Machine Learning App for Monitoring Physical Therapy at Home. Sensors 2024, 24, 158.
- Miron, A.; Sadawi, N.; Ismail, W.; Hussain, H.; Grosan, C. IntelliRehabDS (IRDS)—A Dataset of Physical Rehabilitation Movements. Data 2021, 6, 46.
- Vakanski, A.; Jun, H.P.; Paul, D.; Baker, R. A Data Set of Human Body Movements for Physical Rehabilitation Exercises. Data 2018, 3, 2.
- Capecci, M.; Ceravolo, M.G.; Ferracuti, F.; Iarlori, S.; Monteriù, A.; Romeo, L.; Verdini, F. The KIMORE Dataset: KInematic Assessment of MOvement and Clinical Scores for Remote Monitoring of Physical REhabilitation. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 1436–1448.
- Google. MediaPipe. 2023. Available online: https://developers.google.com/mediapipe (accessed on 21 December 2023).
- Lugaresi, C.; Tang, J.; Nash, H.; McClanahan, C.; Uboweja, E.; Hays, M.; Zhang, F.; Chang, C.L.; Yong, M.; Lee, J.; et al. MediaPipe: A Framework for Perceiving and Processing Reality. In Proceedings of the Third Workshop on Computer Vision for AR/VR at IEEE Computer Vision and Pattern Recognition (CVPR) 2019, Long Beach, CA, USA, 17 June 2019.
- Google. Pose Landmark Detection Guide. 2023. Available online: https://developers.google.com/mediapipe/solutions/vision/pose_landmarker (accessed on 21 December 2023).
- Sakoe, H.; Chiba, S. Dynamic programming algorithm optimization for spoken word recognition. IEEE Trans. Acoust. Speech Signal Process. 1978, 26, 43–49.
- Kulkarni, N. Effect of Dynamic Time Warping using different Distance Measures on Time Series Classification. Int. J. Comput. Appl. 2017, 179, 34–39.
- Qualisys AB. Qualisys Track Manager User Manual; Qualisys AB: Gothenburg, Sweden, 2006.
- Bradski, G.R.; Kaehler, A. OpenCV. Dr. Dobb’s J. Softw. Tools 2000, 120, 122–125.
- Wu, R.; Keogh, E. FastDTW is Approximate and Generally Slower Than the Algorithm it Approximates. IEEE Trans. Knowl. Data Eng. 2020, 34, 3779–3785.
- Salvador, S.; Chan, P. FastDTW: Toward accurate dynamic time warping in linear time and space. In Proceedings of the KDD Workshop on Mining Temporal and Sequential Data, Seattle, WA, USA, 22–25 August 2004; Volume 6, pp. 70–80.
- Lima, F.T.; Souza, V.M.A. A Large Comparison of Normalization Methods on Time Series. Big Data Res. 2023, 34, 100407.
- Aguinis, H.; Gottfredson, R.K.; Joo, H. Best-Practice Recommendations for Defining, Identifying, and Handling Outliers. Organ. Res. Methods 2013, 16, 270–301.
- Chambers, R.L.; Ren, R. Outlier robust imputation of survey data. Proc. Am. Stat. Assoc. 2004, 3336–3344.
- Wilcoxon, F. Individual Comparisons by Ranking Methods. Biom. Bull. 1945, 1, 80–83.
- Whitley, E.; Ball, J. Statistics review 6: Nonparametric methods. Crit. Care 2002, 6, 509.
Article | Data Acquisition | Feature Engineering | Comparison and Assessment |
---|---|---|---|
[8] | Microsoft Kinect | OpenNI | DTW |
[9] | Microsoft Kinect | Kinect SDK | ANFIS for performance and speed components and Fuzzy Logic to combine those into a score |
[10] | Microsoft Kinect | Kinect SDK | HSMM |
[11] | Microsoft Kinect | Kinect SDK | N/A |
[12] | Microsoft Kinect | Kinect SDK | N/A |
[13] | RGB Camera | N/A | Custom scoring function and Fuzzy Logic to obtain an overall score |
[14] | RGB Camera | OpenPose | Modular NN |
[15] | RGB Camera | PoseNet | Not Specified |
[16] | RGB Camera | BlazePose | N/A |
[17] | RGB Camera | OpenFace | N/A |
[18] | RGB Camera | OpenPose | N/A |
[19] | RGB Camera | Convolutional Pose Machines | N/A |
[20] | N/A | N/A | Deep NN and GMM log-likelihood performance metric |
Dataset | Number of Individuals | Number of Exercises |
---|---|---|
IntelliRehabDS [33] | 29 | 9 |
UI-PRMD [34] | 10 | 10 |
KIMORE [35] | 78 | 5 |
ID | Head | Trunk | Shoulder |
---|---|---|---|
ID01_1 | 220.63 | 50.39 | 103.13 |
ID01_2 | 258.58 | 55.52 | 109.38 |
ID01_3 | 257.50 | 55.06 | 122.51 |
ID02_1 | 272.44 | 37.97 | 72.85 |
ID02_2 | 212.43 | 32.74 | 80.76 |
ID02_3 | 310.14 | 47.60 | 89.19 |
ID03_1 | 159.56 | 32.30 | 99.23 |
ID03_2 | 152.75 | 29.36 | 102.72 |
ID03_3 | 176.59 | 31.23 | 86.32 |
ID04_1 | 235.18 | 54.42 | 82.61 |
ID04_2 | 188.21 | 47.61 | 90.15 |
ID04_3 | 203.35 | 60.69 | 86.92 |
ID05_1 | 308.34 | 70.13 | 117.46 |
ID05_2 | 301.78 | 67.27 | 106.83 |
ID05_3 | 316.42 | 65.68 | 113.23 |
ID | Head | Trunk | Shoulder |
---|---|---|---|
ID01 | 60.08 | 56.45 | 77.31 |
ID02 | 55.95 | 61.03 | 76.73 |
ID03 | 80.08 | 72.93 | 76.14 |
ID04 | 65.33 | 58.81 | 76.47 |
ID05 | 63.32 | 61.25 | 80.64 |
ID06 | 72.10 | 73.42 | 79.74
ID07 | 70.18 | 63.21 | 67.68 |
ID08 | 73.92 | 69.19 | 65.31 |
ID09 | 69.99 | 66.98 | 80.01 |
ID10 | 57.75 | 59.85 | 78.93 |
ID11 | 70.94 | 74.62 | 78.15 |
ID12 | 29.10 | 29.92 | 54.14
ID13 | 65.54 | 58.50 | 70.37
ID14 | 78.75 | 72.59 | 82.31 |
ID15 | 62.81 | 48.89 | 59.19 |
Normalization | MAE | RMSE | CC |
---|---|---|---|
Min-max | 13.29 | 15.52 | 0.36 |
Z-score | 27.88 | 34.36 | 0.36
Normalization | MAE | RMSE | CC |
---|---|---|---|
Min-max | 17.10 | 19.12 | 0.47 |
Z-score | 31.11 | 37.67 | 0.47
Normalization | MAE | RMSE | CC |
---|---|---|---|
Min-max | 14.44 | 15.59 | −0.37 |
Z-score | 57.59 | 62.97 | −0.37
Normalization | MAE | RMSE | CC |
---|---|---|---|
Min-max | 20.41 | 24.60 | 0.22 |
Z-score | 18.86 | 27.11 | 0.22
Normalization | MAE | RMSE | CC |
---|---|---|---|
Min-max | 32.30 | 33.69 | 0.51
Z-score | 27.30 | 32.25 | 0.51
Normalization | MAE | RMSE | CC |
---|---|---|---|
Min-max | 21.45 | 23.19 | −0.37 |
Z-score | 30.30 | 36.83 | −0.37
Exercise | Joint Group | p-Value Min-Max | p-Value Z-Score |
---|---|---|---|
Diagonal | Head | 0.057 | 0.005
Diagonal | Trunk | 0.001 | 0.002
Diagonal | Shoulder | 0.000 | 0.000
Rotation | Head | 0.057 | 0.334
Rotation | Trunk | 0.000 | 0.000
Rotation | Shoulder | 0.035 | 0.107
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Cunha, B.; Maçães, J.; Amorim, I. Smartphone-Based Markerless Motion Capture for Accessible Rehabilitation: A Computer Vision Study. Sensors 2025, 25, 5428. https://doi.org/10.3390/s25175428