Search Results (115)

Search Parameters:
Keywords = MOCAP

25 pages, 315 KiB  
Review
Motion Capture Technologies for Athletic Performance Enhancement and Injury Risk Assessment: A Review for Multi-Sport Organizations
by Bahman Adlou, Christopher Wilburn and Wendi Weimar
Sensors 2025, 25(14), 4384; https://doi.org/10.3390/s25144384 - 13 Jul 2025
Abstract
Background: Motion capture (MoCap) technologies have transformed athlete monitoring, yet athletic departments face complex decisions when selecting systems for multiple sports. Methods: We conducted a narrative review of peer-reviewed studies (2015–2025) examining optical marker-based systems, inertial measurement unit (IMU) systems (including Global Navigation Satellite System (GNSS)-integrated systems), and markerless computer vision systems. Studies were evaluated for validated accuracy metrics across indoor court, aquatic, and outdoor field environments. Results: Optical systems maintain sub-millimeter accuracy in controlled environments but face field limitations. IMU systems demonstrate an angular accuracy of 2–8° depending on movement complexity. Markerless systems show variable accuracy (sagittal: 3–15°, transverse: 3–57°). Environmental factors substantially impact system performance, with aquatic settings introducing an additional orientation error of 2° versus terrestrial applications. Outdoor environments challenge GNSS-based tracking (±0.3–3 m positional accuracy). Critical gaps include limited gender-specific validation and insufficient long-term reliability data. Conclusions: This review proposes a tiered implementation framework combining foundation-level team monitoring with specialized assessment tools. This evidence-based approach guides the selection of technology aligned with organizational priorities, sport-specific requirements, and resource constraints. Full article
(This article belongs to the Special Issue Sensors Technology for Sports Biomechanics Applications)
9 pages, 989 KiB  
Proceeding Paper
Motion Capture System in Performance Assessment of Playing Piano: Establishing the Center for Music Performance Science and Musicians’ Medicine in China
by Qing Yang, Chieko Mibu and Yuchi Zhang
Eng. Proc. 2025, 98(1), 28; https://doi.org/10.3390/engproc2025098028 - 1 Jul 2025
Abstract
This article introduces China’s first Center for Music Performance Science and Musicians’ Medicine, where motion capture (MoCap) technology is used to study piano performance and musicians’ health. The center has developed a methodology for assessing piano performance: a high-precision MoCap system analyzes pianists’ movement efficiency, posture, joint angles, and coordination. By addressing these physical challenges, the center promotes healthier, more efficient practice habits, especially for adolescent piano learners. This pioneering research bridges the gap between music performance (art) and science, positioning China as a leader in music performance science and musicians’ health. Full article

15 pages, 4940 KiB  
Article
Consistency Is Key: A Secondary Analysis of Wearable Motion Sensor Accuracy Measuring Knee Angles Across Activities of Daily Living Before and After Knee Arthroplasty
by Robert C. Marchand, Kelly B. Taylor, Emily C. Kaczynski, Skye Richards, Jayson B. Hutchinson, Shayan Khodabakhsh and Ryan M. Chapman
Sensors 2025, 25(13), 3942; https://doi.org/10.3390/s25133942 - 25 Jun 2025
Abstract
Background: Monitoring knee range of motion (ROM) after total knee arthroplasty (TKA) via clinically deployed wearable motion sensors is increasingly common. Prior work from our own lab showed promising results in one wearable motion sensor system; however, we did not investigate errors across different activities. Accordingly, herein we conducted secondary analyses of error using wearable inertial measurement units (IMUs) quantifying sagittal knee angles across activities in TKA patients. Methods: After Institutional Review Board (IRB) approval, TKA patients were recruited for participation in two visits (n = 20 enrolled, n = 5 lost to follow-up). Following a sensor tutorial (MotionSense, Stryker, Mahwah, NJ, USA), sensors and motion capture (MOCAP) markers were applied for data capture before surgery. One surgeon then performed TKA. An identical data capture was then completed postoperatively. MOCAP and wearable motion sensor knee angles were computed during a series of activities and compared. Two-way ANOVA evaluated the impact of time (pre- vs. post-TKA) and activity on average error. Another two-way ANOVA was completed, assessing if error at local maxima was different than at local minima and if either was different across activities. Results: Pre-TKA/post-TKA errors were not different. No differences were noted across activities. On average, the errors were under clinically acceptable thresholds (i.e., 4.9 ± 2.6° vs. ≤5°). Conclusions: With average error ≤ 5°, these specific sensors accurately quantify knee angles before/after surgical intervention. Future investigations should explore leveraging this type of technology to evaluate preoperative function decline and postoperative function recovery. Full article
(This article belongs to the Special Issue State of the Art in Wearable Sensors for Health Monitoring)
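The study above compares wearable-sensor knee angles against MOCAP and checks the average error against a clinically acceptable threshold of 5°. A minimal sketch of that comparison, using made-up angle samples (the function name and data are illustrative, not the study's):

```python
# Illustrative wearable-vs-MOCAP sagittal knee-angle comparison against a
# 5-degree clinical threshold. All numbers below are hypothetical.
def mean_abs_error(wearable_deg, mocap_deg):
    """Average absolute difference between paired angle samples (degrees)."""
    return sum(abs(w - m) for w, m in zip(wearable_deg, mocap_deg)) / len(wearable_deg)

wearable = [12.0, 34.5, 58.2, 90.1, 61.0]   # sensor sagittal knee angles (deg)
mocap    = [10.8, 36.0, 55.9, 93.2, 63.5]   # reference MOCAP angles (deg)

err = mean_abs_error(wearable, mocap)
print(f"mean absolute error: {err:.2f} deg, clinically acceptable: {err <= 5.0}")
```

The same per-sample differences would feed the study's two-way ANOVA across activities and pre/post-TKA time points.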

20 pages, 2223 KiB  
Article
ChatGPT-Based Model for Controlling Active Assistive Devices Using Non-Invasive EEG Signals
by Tais da Silva Mota, Saket Sarkar, Rakshith Poojary and Redwan Alqasemi
Electronics 2025, 14(12), 2481; https://doi.org/10.3390/electronics14122481 - 18 Jun 2025
Abstract
With an anticipated 3.6 million Americans who will be living with limb loss by 2050, the demand for active assistive devices is rapidly increasing. This study investigates the feasibility of leveraging a ChatGPT-based (Version 4o) model to predict motion based on input electroencephalogram (EEG) signals, enabling the non-invasive control of active assistive devices. To achieve this goal, three objectives were set. First, the model’s capability to derive accurate mathematical relationships from numerical datasets was validated to establish a foundational level of computational accuracy. Next, synchronized arm motion videos and EEG signals were introduced, which allowed the model to filter, normalize, and classify EEG data in relation to distinct text-based arm motions. Finally, the integration of marker-based motion capture data provided motion information, which is essential for inverse kinematics applications in robotic control. The combined findings highlight the potential of ChatGPT-generated machine learning systems to effectively correlate multimodal data streams and serve as a robust foundation for the intuitive, non-invasive control of assistive technologies using EEG signals. Future work will focus on applying the model to real-time control applications while expanding the dataset’s diversity to enhance the accuracy and performance of the model, with the ultimate aim of improving the independence and quality of life of individuals who rely on active assistive devices. Full article
(This article belongs to the Special Issue Advances in Intelligent Control Systems)
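The abstract notes that marker-based motion capture supplies the motion information needed for inverse kinematics in robotic control. As a sketch of what that step involves, here is the standard closed-form inverse kinematics for a planar 2-link arm (link lengths and the target point are assumed values, not from the paper):

```python
import math

def two_link_ik(x, y, l1=0.3, l2=0.25):
    """Closed-form planar 2-link inverse kinematics: returns (shoulder,
    elbow) angles in radians that place the end effector at (x, y)."""
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)  # law of cosines
    if abs(cos_elbow) > 1:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)                          # elbow-down branch
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# forward-kinematics check: the angles should reproduce the target position
s, e = two_link_ik(0.4, 0.2)
fx = 0.3 * math.cos(s) + 0.25 * math.cos(s + e)
fy = 0.3 * math.sin(s) + 0.25 * math.sin(s + e)
print(round(fx, 3), round(fy, 3))
```

In a MoCap-driven pipeline, (x, y) would come from marker positions frame by frame, and the resulting joint angles would drive the assistive device.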

18 pages, 5409 KiB  
Article
Research on Motion Transfer Method from Human Arm to Bionic Robot Arm Based on PSO-RF Algorithm
by Yuanyuan Zheng, Hanqi Zhang, Gang Zheng, Yuanjian Hong, Zhonghua Wei and Peng Sun
Biomimetics 2025, 10(6), 392; https://doi.org/10.3390/biomimetics10060392 - 11 Jun 2025
Abstract
Although existing motion transfer methods for bionic robot arms are based on kinematic equivalence or simplified dynamic models, they frequently fail to tackle dynamic compliance and real-time adaptability in complex human-like motions. To address this shortcoming, this study presents a motion transfer method from the human arm to a bionic robot arm based on the hybrid PSO-RF (Particle Swarm Optimization-Random Forest) algorithm to improve joint space mapping accuracy and dynamic compliance. Initially, a high-precision optical motion capture (Mocap) system was utilized to record human arm trajectories, and Kalman filtering and a Rauch–Tung–Striebel (RTS) smoother were applied to reduce noise and phase lag. Subsequently, the joint angles of the human arm were computed through geometric vector analysis. Although geometric vector analysis offers an initial estimation of joint angles, its deterministic framework is subject to error accumulation caused by the occlusion of reflective markers and kinematic singularities. To surmount this limitation, this study designed five action sequences for the establishment of the training database for the PSO-RF model to predict joint angles when performing different actions. Ultimately, an experimental platform was built to validate the motion transfer method, and the experimental verification showed that the system attained high prediction accuracy (R2 = 0.932 for the elbow joint angle) and real-time performance with a latency of 0.1097 s. This paper promotes compliant human–robot interaction by dealing with joint-level dynamic transfer challenges, presenting a framework for applications in intelligent manufacturing and rehabilitation robotics. Full article
(This article belongs to the Section Biological Optimisation and Management)
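The hybrid PSO-RF method above uses particle swarm optimization to tune the Random Forest predictor. As a sketch of the PSO half only, here is the canonical velocity/position update minimizing a simple 2-D quadratic that stands in for the (expensive) RF validation error; all hyperparameters and the objective are assumptions for illustration:

```python
import random
random.seed(0)

# Minimal particle swarm optimization sketch (inertia w, cognitive c1,
# social c2). A quadratic surrogate replaces the RF validation error.
def pso(objective, dim=2, n_particles=12, iters=60,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # per-particle best
    pbest_val = [objective(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=pbest_val.__getitem__)][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest

best = pso(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2)
print(best)  # should approach the minimizer [1.0, -2.0]
```

In the paper's setting, each particle would encode RF hyperparameters and the objective would be cross-validated joint-angle prediction error.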

16 pages, 1978 KiB  
Article
Learning-Assisted Multi-IMU Proprioceptive State Estimation for Quadruped Robots
by Xuanning Liu, Yajie Bao, Peng Cheng, Dan Shen, Zhengyang Fan, Hao Xu and Genshe Chen
Information 2025, 16(6), 479; https://doi.org/10.3390/info16060479 - 9 Jun 2025
Abstract
This paper presents a learning-assisted approach for state estimation of quadruped robots using observations from proprioceptive sensors, including multiple inertial measurement units (IMUs). Specifically, one body IMU and four additional IMUs, one attached to each calf link of the robot, sense the dynamics of the body and legs, in addition to joint encoders. An extended Kalman filter (EKF) is employed to fuse the sensor data and estimate the robot’s states in the world frame, with the additional IMU measurements enhancing the filter’s convergence. To circumvent the requirement for measurements from a motion capture (mocap) system or other vision systems, the right-invariant EKF (RI-EKF) is extended to employ the foot IMU measurements for enhanced state estimation, and a learning-based approach is presented to estimate the vision-system measurements for the EKF: one-dimensional convolutional neural networks (CNNs) estimate the required measurements using only the available proprioception data. Experiments on real data from a quadruped robot demonstrate that proprioception can be sufficient for state estimation. The proposed learning-assisted approach, which does not rely on data from vision systems, achieves competitive accuracy compared to an EKF using mocap measurements and lower estimation errors than the RI-EKF using multi-IMU measurements. Full article
(This article belongs to the Special Issue Sensing and Wireless Communications)
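The paper's estimator is an extended (and right-invariant) Kalman filter over multi-IMU data; as a much-reduced sketch of the predict/update structure it builds on, here is a scalar linear Kalman filter smoothing a noisy measurement stream (the noise parameters and measurements are made-up values, not the robot's):

```python
# Scalar Kalman filter sketch: predict with process noise q, then correct
# toward measurement z with gain k. A 1-D linear stand-in for the EKF.
def kalman_1d(measurements, q=0.01, r=0.5, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: state modeled as constant
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with measurement z
        p = (1 - k) * p
        estimates.append(x)
    return estimates

est = kalman_1d([1.2, 0.9, 1.1, 1.0, 1.05])
print([round(v, 3) for v in est])
```

The real filter replaces the scalar state with the robot's pose/velocity on a Lie group and the measurement with leg odometry and the learned vision-system surrogate.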

17 pages, 2275 KiB  
Article
Comparative Assessment of an IMU-Based Wearable Device and a Marker-Based Optoelectronic System in Trunk Motion Analysis: A Cross-Sectional Investigation
by Fulvio Dal Farra, Serena Cerfoglio, Micaela Porta, Massimiliano Pau, Manuela Galli, Nicola Francesco Lopomo and Veronica Cimolin
Appl. Sci. 2025, 15(11), 5931; https://doi.org/10.3390/app15115931 - 24 May 2025
Abstract
Wearable inertial measurement units (IMUs) are increasingly used in human motion analysis due to their ability to measure movement in real-world environments. However, with rapid technological advancement and a wide variety of models available, it is essential to evaluate their performance and suitability for analyzing specific body regions. This study aimed to assess the accuracy and precision of an IMU-based sensor in measuring trunk range of motion (ROM). Twenty-seven healthy adults (11 males, 16 females; mean age: 31.1 ± 11.0 years) participated. Each performed trunk movements—flexion, extension, lateral bending, and rotation—while angular data were recorded simultaneously using a single IMU and a marker-based optoelectronic motion capture (MoCap) system. Analyses included accuracy indices, Root Mean Square Error (RMSE), Pearson’s correlation coefficient (r), concordance correlation coefficient (CCC), and Bland–Altman limits of agreement. The IMU showed high accuracy in rotation (92.4%), with strong correlation (r = 0.944, p < 0.001) and excellent agreement [CCC = 0.927; (0.977–0.957)]. Flexion (72.1%), extension (64.1%), and lateral bending (61.4%) showed moderate accuracy and correlations (r = 0.703, 0.564, and 0.430, p < 0.05). The RMSE ranged from 1.09° (rotation) to 3.01° (flexion). While the IMU consistently underestimated ROM, its accuracy in rotation highlights its potential as a cost-effective MoCap alternative, warranting further study for broader clinical use. Full article
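The agreement indices reported above (RMSE, Pearson's r, Lin's concordance correlation coefficient) can be computed directly from paired IMU and optoelectronic samples. A self-contained sketch with hypothetical trunk ROM values (not the study's data):

```python
import math

# Agreement metrics between two measurement systems on paired samples.
def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def pearson_r(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    sa = math.sqrt(sum((x - ma) ** 2 for x in a) / n)
    sb = math.sqrt(sum((y - mb) ** 2 for y in b) / n)
    return cov / (sa * sb)

def lin_ccc(a, b):
    """Lin's CCC penalizes both scatter and systematic offset."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return 2 * cov / (va + vb + (ma - mb) ** 2)

imu   = [40.1, 35.2, 28.9, 44.7, 31.0]   # hypothetical trunk ROM (deg)
mocap = [42.0, 36.5, 30.1, 46.0, 33.2]   # reference optoelectronic ROM (deg)
print(f"RMSE={rmse(imu, mocap):.2f}  r={pearson_r(imu, mocap):.3f}  "
      f"CCC={lin_ccc(imu, mocap):.3f}")
```

Note how the systematic ~1.5° offset in this toy data lowers CCC below r: that is exactly the underestimation pattern the study reports for the IMU.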

37 pages, 2036 KiB  
Article
GCN-Transformer: Graph Convolutional Network and Transformer for Multi-Person Pose Forecasting Using Sensor-Based Motion Data
by Romeo Šajina, Goran Oreški and Marina Ivašić-Kos
Sensors 2025, 25(10), 3136; https://doi.org/10.3390/s25103136 - 15 May 2025
Abstract
Multi-person pose forecasting involves predicting the future body poses of multiple individuals over time, capturing complex movement dynamics and interaction dependencies. Its relevance spans various fields, including computer vision, robotics, human–computer interaction, and surveillance. The task is particularly important in sensor-driven applications, where motion capture systems, including vision-based sensors and IMUs, provide crucial data for analyzing human movement. This paper introduces GCN-Transformer, a novel model for multi-person pose forecasting that integrates Graph Convolutional Network and Transformer architectures. Novel loss terms introduced during training enable the model to learn both interaction dependencies and the trajectories of multiple joints simultaneously. Additionally, we propose a novel pose forecasting evaluation metric, the Final Joint Position and Trajectory Error (FJPTE), which assesses both local movement dynamics and global movement errors by considering the final position and the trajectory leading up to it, providing a more comprehensive assessment of movement dynamics. The model uniquely combines scene-level graph-based encoding with personalized attention-based decoding, and is trained and evaluated on the CMU-Mocap, MuPoTS-3D, SoMoF Benchmark, and ExPI datasets, all collected using sensor-based motion capture systems, ensuring applicability in real-world scenarios. Comprehensive evaluations on these four datasets demonstrate that GCN-Transformer consistently outperforms existing state-of-the-art (SOTA) models according to the VIM and MPJPE metrics: based on MPJPE, it improves on the closest SOTA model by 4.7% on CMU-Mocap, 4.3% on MuPoTS-3D, 5% on the SoMoF Benchmark, and 2.6% on ExPI. Unlike models whose performance fluctuates across datasets, GCN-Transformer performs consistently, proving its robustness in multi-person pose forecasting and providing an excellent foundation for its application in different domains. Full article
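The MPJPE metric cited above is simply the mean Euclidean distance between predicted and ground-truth 3-D joint positions. A minimal sketch on a toy two-joint pose (the coordinates are invented):

```python
import math

# Mean Per-Joint Position Error: average Euclidean distance between
# predicted and ground-truth 3-D joint positions for one pose.
def mpjpe(pred, gt):
    per_joint = [math.dist(p, g) for p, g in zip(pred, gt)]
    return sum(per_joint) / len(per_joint)

pred = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]   # predicted joints (m)
gt   = [(0.0, 0.0, 0.1), (1.0, 1.0, 0.8)]   # ground-truth joints (m)
print(mpjpe(pred, gt))
```

In the multi-person forecasting setting this is further averaged over people and over future time steps.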

24 pages, 8541 KiB  
Article
Feature Fusion Graph Consecutive-Attention Network for Skeleton-Based Tennis Action Recognition
by Pawel Powroznik, Maria Skublewska-Paszkowska, Krzysztof Dziedzic and Marcin Barszcz
Appl. Sci. 2025, 15(10), 5320; https://doi.org/10.3390/app15105320 - 9 May 2025
Abstract
Human action recognition has become a key direction in computer vision. Deep learning models, particularly when combined with sensor data fusion, can significantly enhance various applications by learning complex patterns and relationships from diverse data streams. This study therefore proposes a new model, the Feature Fusion Graph Consecutive-Attention Network (FFGCAN), to enhance the classification of the main tennis strokes: forehand, backhand, volley forehand, and volley backhand. The proposed network incorporates seven basic blocks combined with two types of module, an Adaptive Consecutive Attention Module and a Graph Self-Attention Module, which are employed to extract joint information at different scales from the motion capture data. By focusing on relevant components, the model enriches the network’s comprehension of tennis motion data and allows for a richer representation. Moreover, FFGCAN fuses motion capture data to generate a channel-specific topology map for each output channel, reflecting how joints are connected while the tennis player is moving. The proposed solution was verified on three well-known motion capture datasets, THETIS, Tennis-Mocap, and 3DTennisDS, each containing tennis movements in various formats. A series of experiments were performed, with the data divided into training (70%), validation (15%), and testing (15%) subsets; testing used five trials. The FFGCAN model obtained very high accuracy, precision, recall, and F1-scores, outperforming networks commonly applied to action recognition, such as the Spatial-Temporal Graph Convolutional Network and its modifications, and demonstrated excellent tennis movement prediction ability. Full article
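The evaluation metrics named in the abstract (precision, recall, F1-score) come straight from per-class confusion counts. A sketch for one stroke class, with invented counts:

```python
# Precision/recall/F1 from one class's confusion counts:
# tp = correctly recognized strokes, fp = other strokes labeled as this
# class, fn = strokes of this class that were missed.
def prf1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# hypothetical "forehand" results: 42 correct, 3 false alarms, 5 misses
p, r, f = prf1(tp=42, fp=3, fn=5)
print(f"precision={p:.3f} recall={r:.3f} F1={f:.3f}")
```

Macro-averaging these per-class scores over the four stroke classes gives the aggregate figures such studies report.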

16 pages, 3689 KiB  
Article
Breath-by-Breath Measurement of Respiratory Frequency and Tidal Volume with a Multiple-Camera Motion Capture System During Cycling Incremental Exercise
by Carlo Massaroni, Andrea Nicolò, Ana Luiza de Castro Lopes, Chiara Romano, Mariangela Pinnelli, Karine Sarro, Emiliano Schena, Pietro Cerveri, Massimo Sacchetti, Sergio Silvestri and Amanda Piaia Silvatti
Sensors 2025, 25(8), 2578; https://doi.org/10.3390/s25082578 - 19 Apr 2025
Abstract
This study evaluates the performance of a 32-marker motion capture (MoCap) system in estimating respiratory frequency (fR) and tidal volume (VT) during cycling exercise. Fourteen well-trained cyclists performed an incremental step test on a cycle ergometer, while simultaneously recording a raw flow signal with a reference metabolic cart (COSMED) and respiratory-induced torso movements with twelve optoelectronic cameras registering the position of 32 markers affixed to the torso. fR and VT were calculated from both systems on a breath-by-breath basis. The MoCap system showed a strong correlation with the COSMED system when measuring fR and VT (r2 = 0.99, r2 = 0.87, respectively) during exercise. For fR, the mean absolute error (MAE) and mean absolute percentage error (MAPE) were 0.79 breaths/min and 2.1%, respectively. For VT, MoCap consistently underestimated values compared to COSMED, showing a bias (MOD ± LOA) of −0.11 ± 0.42 L and MAPE values of 8%. These findings highlight the system’s capabilities for real-time respiratory monitoring in athletic environments. Full article
(This article belongs to the Special Issue Sensor Technologies in Sports and Exercise)
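The MoCap-vs-COSMED comparison above reports MAPE plus a Bland–Altman bias with limits of agreement (MOD ± LOA). Those statistics are straightforward to compute from paired breath-by-breath values; the tidal volumes below are made up for illustration:

```python
import math

# MAPE and Bland-Altman bias with 95% limits of agreement
# (mean difference +/- 1.96 * SD of differences).
def mape(est, ref):
    return 100.0 * sum(abs(e - r) / r for e, r in zip(est, ref)) / len(ref)

def bland_altman(est, ref):
    diffs = [e - r for e, r in zip(est, ref)]
    bias = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (len(diffs) - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

mocap_vt  = [1.8, 2.1, 2.5, 2.9, 3.2]   # hypothetical MoCap tidal volume (L)
cosmed_vt = [1.9, 2.3, 2.6, 3.0, 3.4]   # reference metabolic cart (L)
bias, lo, hi = bland_altman(mocap_vt, cosmed_vt)
print(f"MAPE={mape(mocap_vt, cosmed_vt):.1f}%  bias={bias:.2f} L  "
      f"LOA=({lo:.2f}, {hi:.2f})")
```

A negative bias, as in this toy data, mirrors the study's finding that MoCap consistently underestimates VT relative to COSMED.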

16 pages, 3643 KiB  
Article
2D Pose Estimation vs. Inertial Measurement Unit-Based Motion Capture in Ergonomics: Assessing Postural Risk in Dental Assistants
by Steven Simon, Jonna Meining, Laura Laurendi, Thorsten Berkefeld, Jonas Dully, Carlo Dindorf and Michael Fröhlich
Bioengineering 2025, 12(4), 403; https://doi.org/10.3390/bioengineering12040403 - 10 Apr 2025
Abstract
The dental profession has a high prevalence of musculoskeletal disorders because daily working life is characterized by many monotonous and one-sided physical exertions. Inertial measurement unit (IMU)-based motion capture (MoCap) is increasingly utilized for assessing workplace postural risk, but practical alternatives are needed because it is time-consuming and relatively cost-intensive for ergonomists. This study compared two measurement technologies: IMU-based MoCap and a time-effective alternative, two-dimensional (2D) pose estimation. Forty-five dental assistant students (all female) were included (age: 19.56 ± 5.91 years; height: 165.00 ± 6.35 cm; weight: 63.41 ± 13.87 kg; BMI: 21.56 ± 4.63 kg/m2). A 30 s IMU-based MoCap recording and image-based pose estimation in the sagittal and frontal planes were performed during a representative experimental task. Data were analyzed using Cohen’s weighted kappa and Bland–Altman plots. There was significant moderate agreement between the Rapid Upper Limb Assessment (RULA) scores from IMU-based MoCap and pose estimation (κ = 0.461, pB = 0.006), but poor, non-significant agreement (p > 0.05) for the body regions of the upper arm, lower arm, wrist, neck, and trunk. These findings indicate that IMU-based MoCap and pose estimation align moderately on the overall RULA score but not on specific body parts. While pose estimation might be useful for quick general posture assessment, it may not be reliable for evaluating joint-level differences, especially in body areas such as the upper extremities. Future research should focus on refining video-based pose estimation for real-time postural risk assessment in the workplace. Full article
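The agreement statistic used above, Cohen's weighted kappa, compares two raters (here, two measurement systems) on ordinal categories while crediting near-misses. A sketch with linear weights on invented RULA-like risk bands:

```python
# Cohen's linearly weighted kappa for two scorers of ordinal categories
# 0..n_cat-1: 1 - (observed weighted disagreement / chance-expected one).
def weighted_kappa(r1, r2, n_cat):
    n = len(r1)
    w = lambda i, j: abs(i - j) / (n_cat - 1)   # linear disagreement weight
    obs = sum(w(a, b) for a, b in zip(r1, r2)) / n
    p1 = [r1.count(c) / n for c in range(n_cat)]
    p2 = [r2.count(c) / n for c in range(n_cat)]
    exp = sum(p1[i] * p2[j] * w(i, j)
              for i in range(n_cat) for j in range(n_cat))
    return 1 - obs / exp

imu_scores  = [0, 1, 2, 2, 3, 1, 0, 2]   # hypothetical risk bands, IMU MoCap
pose_scores = [0, 1, 1, 2, 3, 2, 0, 2]   # same tasks via pose estimation
print(round(weighted_kappa(imu_scores, pose_scores, 4), 3))
```

Values near 0.4–0.6, like the study's κ = 0.461, are conventionally read as moderate agreement.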

13 pages, 1495 KiB  
Article
Accuracy of Measurement Tools for Ocular-Origin Anomalous Head Posture and the Cervical Range of Motion Kinematics in Children with an Anomalous Head Position
by Serena Cerfoglio, Francesco Bonsignore, Giuseppina Bernardelli, Lucia Donno, Fabiana Aili, Manuela Galli, Francesca Nucci, Edoardo Villani, Paolo Nucci and Veronica Cimolin
Appl. Sci. 2025, 15(7), 3642; https://doi.org/10.3390/app15073642 - 26 Mar 2025
Abstract
The accurate assessment of anomalous head posture (AHP) is crucial for diagnosing, treating, and monitoring postural changes in individuals with ocular impairments. This study evaluated the accuracy of a digital goniometer and an iOS-based application by comparing their measurements to a gold-standard motion capture (MoCap) system. Additionally, it assessed cervical range of motion (ROM) limitations in children with AHP versus healthy controls. Fifteen pediatric patients with ocular-origin AHP and 20 age-matched controls participated. Head rotation and inclination were measured using a goniometer, the iOS app, and MoCap under static and dynamic conditions. Pearson’s correlation coefficient (PCC), root mean square error (RMSE), and Bland–Altman plots assessed inter-system agreement, while MoCap analyzed cervical ROM. The results showed strong agreement between the ophthalmological tools and MoCap for head rotation (PCC = 0.86, RMSE = 3.43°) and inclination (PCC = 0.82, RMSE = 5°), with no significant inter-system differences (p > 0.05). AHP patients exhibited reduced head flexion (p < 0.05), suggesting long-term postural adaptations. Digital goniometers and smartphone applications provide accurate, cost-effective AHP assessment alternatives, particularly in resource-limited settings. Future research should expand cohorts and integrate multidisciplinary approaches to refine assessment and treatment strategies. Full article
(This article belongs to the Special Issue Advances in Motion Monitoring System)

15 pages, 937 KiB  
Systematic Review
The Role of Motion Capture Analysis in Evaluating Postoperative Functional Outcomes in Adolescent Idiopathic Scoliosis: A Systematic Review
by Sergio De Salvatore, Paolo Brigato, Davide Palombi, Leonardo Oggiano, Sergio Sessa, Umile Giuseppe Longo and Pier Francesco Costici
Appl. Sci. 2025, 15(4), 1829; https://doi.org/10.3390/app15041829 - 11 Feb 2025
Abstract
Introduction: This systematic review evaluates the application of motion capture analysis (MCA) in assessing postoperative functional outcomes in adolescent idiopathic scoliosis (AIS) patients treated with spinal fusion. Material and Methods: A comprehensive search of PubMed, Scopus, Embase, and Cochrane Library was conducted for studies published between January 2013 and September 2024. Eligible studies included original research examining AIS patients post-spinal fusion, specifically assessing kinematic outcomes via MCA. Key outcomes included gait parameters, range of motion (ROM), and trunk–pelvic kinematics. Results: Nine studies comprising 216 participants (81.5% female), predominantly with Lenke 1 and 3 curve types, were included. MCA revealed significant improvements in gait symmetry, stride length, and trunk–pelvic kinematics within one year of surgery. Enhanced mediolateral stability and normalized transverse plane motion were commonly observed. However, persistent reductions in thoracic–pelvic ROM and flexibility highlight postoperative limitations. Redistribution of mechanical loads to adjacent unfused segments raises concerns about long-term compensatory mechanisms and the risk of adjacent segment degeneration. Conclusions: While spinal fusion effectively restores coronal and sagittal alignment and improves functional mobility, limitations in ROM and dynamic adaptability necessitate targeted rehabilitation. Future research should standardize MCA methodologies and explore motion-preserving surgical techniques to address residual functional deficits. Full article
(This article belongs to the Special Issue Orthopaedics and Joint Reconstruction: Latest Advances and Prospects)

15 pages, 1696 KiB  
Article
Advancing Field-Based Vertical Jump Analysis: Markerless Pose Estimation vs. Force Plates
by Jelena Aleksic, David Mesaroš, Dmitry Kanevsky, Olivera M. Knežević, Dimitrije Cabarkapa, Lucija Faj and Dragan M. Mirkov
Life 2024, 14(12), 1641; https://doi.org/10.3390/life14121641 - 11 Dec 2024
Abstract
The countermovement vertical jump (CMJ) is widely used in sports science and rehabilitation to assess lower body power. In controlled laboratory environments, a complex analysis of CMJ performance is usually carried out using motion capture or force plate systems, providing detailed insights into athlete’s movement mechanics. While these systems are highly accurate, they are often costly or limited to laboratory settings, making them impractical for widespread or field use. This study aimed to evaluate the accuracy of MMPose, a markerless 2D pose estimation framework, for CMJ analysis by comparing it with force plates. Twelve healthy participants performed five CMJs, with each jump trial simultaneously recorded using force plates and a smartphone camera. Vertical velocity profiles and key temporal variables, including jump phase durations, maximum jump height, vertical velocity, and take-off velocity, were analyzed and compared between the two systems. The statistical methods included a Bland–Altman analysis, correlation coefficients (r), and effect sizes, with consistency and systematic differences assessed using intraclass correlation coefficients (ICC) and paired samples t-tests. The results showed strong agreement (r = 0.992) between the markerless system and force plates, validating MMPose for CMJ analysis. The temporal variables also demonstrated high reliability (ICC > 0.9), with minimal systematic differences and negligible effect sizes for most variables. These findings suggest that the MMPose-based markerless system is a cost-effective and practical alternative for analyzing CMJ performance, particularly in field settings where force plates may be less accessible. This system holds potential for broader applications in sports performance and rehabilitation, enabling more scalable, data-driven movement assessments. Full article
(This article belongs to the Special Issue Advances and Applications of Sport Physiology)
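Among the CMJ variables compared above, jump height follows directly from take-off velocity via projectile kinematics, h = v² / (2g), which is how both force-plate and pose-estimation pipelines typically convert velocity to height. A sketch with a made-up take-off velocity:

```python
# Jump height from vertical take-off velocity: h = v^2 / (2 g).
G = 9.81  # gravitational acceleration, m/s^2

def jump_height(takeoff_velocity):
    """Rise of the center of mass after take-off, in meters."""
    return takeoff_velocity ** 2 / (2 * G)

v = 2.6  # m/s, hypothetical take-off velocity
print(f"estimated CMJ height: {jump_height(v):.3f} m")
```

The agreement between systems then reduces to how closely their velocity profiles match, which is what the reported r = 0.992 captures.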

15 pages, 6985 KiB  
Article
Assessing Postural Stability in Gastrointestinal Endoscopic Procedures with a Belt-like Endoscope Holder Using a MoCap Camera System
by Tadej Durič, Jan Hejda, Petr Volf, Marek Sokol, Patrik Kutílek and Jan Hajer
J. Pers. Med. 2024, 14(12), 1132; https://doi.org/10.3390/jpm14121132 - 30 Nov 2024
Cited by 1
Abstract
Background/Objectives: As musculoskeletal injuries in gastroenterologists related to the performance of endoscopic procedures are on the rise, solutions and new approaches are needed to prevent these undesired outcomes. In our study, we evaluated an approach to ergonomic challenges in the form of a belt-like endoscope holder designed to redistribute the weight of the endoscope across the whole body of the practitioner. The aim of the study was to determine how the use of this holder affected the body posture of practitioners during endoscopy. Methods: We designed a special endoscopic model that emulates basic endoscopic movement and maneuvers. With the use of the MoCap camera system, we recorded experienced endoscopists exercising a standardized set of tasks with and without the holder. Results: Following video and statistical analyses, the most significant differences were observed in the position of the left arm which pointed to a more relaxed arm position. Conclusions: The ergonomic benefits of the belt holder in this model merit testing in the clinical setting to evaluate its effectiveness and prevention of musculoskeletal injuries in GI endoscopy. Full article
(This article belongs to the Special Issue Clinical Updates on Personalized Upper Gastrointestinal Endoscopy)
