Search Results (51)

Search Parameters:
Keywords = motion capture (MOCAP) data

21 pages, 3498 KB  
Article
Effect of Walking Speed on the Reliability of a Smartphone-Based Markerless Gait Analysis System
by Edilson Fernando de Borba, Jorge L. Storniolo, Serena Cerfoglio, Paolo Capodaglio, Veronica Cimolin, Leonardo A. Peyré-Tartaruga, Marcus P. Tartaruga and Paolo Cavallari
Sensors 2025, 25(20), 6474; https://doi.org/10.3390/s25206474 - 20 Oct 2025
Viewed by 479
Abstract
Quantitative gait analysis is essential for understanding motor function and guiding clinical decisions. While marker-based motion capture (MoCap) systems are accurate, they are costly and require specialized facilities. OpenCap, a markerless alternative, offers a more accessible approach; however, its reliability across different walking speeds remains uncertain. This study assessed the agreement between OpenCap and MoCap in measuring spatiotemporal parameters, joint kinematics, and center of mass (CoM) displacement during level walking at three speeds: slow, self-selected, and fast. Fifteen healthy adults performed multiple trials that were recorded simultaneously by both systems. Agreement was analyzed using intraclass correlation coefficients (ICC), minimal detectable change (MDC), Bland–Altman analyses, root mean square error (RMSE), Statistical Parametric Mapping (SPM), and repeated-measures ANOVA. Results indicated excellent agreement for spatiotemporal variables (ICC ≥ 0.95) and high consistency for joint waveforms (RMSE < 2°) and CoM displacement (RMSE < 6 mm) across all speeds. However, joint range of motion (ROM) showed lower reliability, especially at the hip and ankle, at higher speeds. ANOVA revealed no significant System × Speed interactions for most variables, though a significant effect of speed was noted, with OpenCap underestimating walking speed more at fast speeds. Overall, OpenCap is a valuable tool for gait assessment, highly accurate for spatiotemporal data and CoM displacement; still, caution is warranted when interpreting joint kinematics and speed estimates across walking speeds. Full article
(This article belongs to the Special Issue Sensors and Data Analysis for Biomechanics and Physical Activity)
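
The agreement statistics this abstract leans on (Bland–Altman limits, RMSE) are easy to reproduce. Below is a minimal Python sketch of both computations for paired gait measurements; the step-length values are illustrative placeholders, not data from the study.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired measurements."""
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

def rmse(a, b):
    """Root mean square error between two measurement series."""
    return np.sqrt(np.mean((a - b) ** 2))

# Illustrative paired step lengths (m) from a marker-based and a markerless system
mocap   = np.array([0.62, 0.65, 0.71, 0.68, 0.74])
opencap = np.array([0.61, 0.66, 0.70, 0.66, 0.72])

bias, lo, hi = bland_altman(opencap, mocap)
print(f"bias={bias:.3f} m, LoA=({lo:.3f}, {hi:.3f}) m, RMSE={rmse(opencap, mocap):.3f} m")
```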

31 pages, 1116 KB  
Article
MoCap-Impute: A Comprehensive Benchmark and Comparative Analysis of Imputation Methods for IMU-Based Motion Capture Data
by Mahmoud Bekhit, Ahmad Salah, Ahmed Salim Alrawahi, Tarek Attia, Ahmed Ali, Esraa Eldesouky and Ahmed Fathalla
Information 2025, 16(10), 851; https://doi.org/10.3390/info16100851 - 1 Oct 2025
Viewed by 384
Abstract
Motion capture (MoCap) data derived from wearable Inertial Measurement Units is essential to applications in sports science and healthcare robotics. However, much of the potential of these data is limited by missing values caused by sensor limitations, network issues, and environmental interference. Such gaps can introduce bias, prevent the fusion of critical data streams, and ultimately compromise the integrity of human activity analysis. Despite the plethora of data imputation techniques available, there have been few systematic performance evaluations of these techniques explicitly for the time series data of IMU-derived MoCap. We address this by evaluating imputation performance across three distinct contexts: univariate time series, multivariate across players, and multivariate across kinematic angles. To this end, we propose a systematic comparative analysis of imputation techniques spanning statistical, machine learning, and deep learning methods. We also introduce the first publicly available MoCap dataset designed specifically for benchmarking missing value imputation, with three missingness mechanisms: missing completely at random, block missingness, and a value-dependent pattern simulated at signal transition points. Using data from 53 karate practitioners performing standardized movements, we artificially generated missing values to create controlled experimental conditions. Experiments across the 53 subjects and 39 kinematic variables showed that multivariate imputation frameworks surpass univariate approaches when working with more complex missingness mechanisms. Specifically, multivariate approaches achieved up to a 50% error reduction (with the MAE improving from 10.8 ± 6.9 to 5.8 ± 5.5) compared to univariate methods for transition point missingness. Specialized time series deep learning models (i.e., SAITS, BRITS, GRU-D) demonstrated superior performance, with MAE values consistently below 8.0 for univariate contexts and below 3.2 for multivariate contexts across all missing data percentages, significantly surpassing traditional machine learning and statistical methods. Other notable methods such as Generative Adversarial Imputation Networks and Iterative Imputers exhibited competitive performance but remained less stable than the specialized temporal models. This work offers an important baseline for future studies, along with recommendations for researchers looking to increase the accuracy, robustness, integrity, and trustworthiness of MoCap data analysis. Full article
(This article belongs to the Section Information Processes)
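
As a toy version of the benchmarking protocol described above (mask values under a chosen missingness mechanism, impute, score by MAE), the sketch below applies MCAR masking to a synthetic kinematic signal and compares two simple baselines; the signal, missingness rate, and imputers are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic kinematic angle signal (degrees) standing in for one IMU channel
t = np.linspace(0, 10, 500)
signal = 30 * np.sin(2 * np.pi * 0.5 * t) + rng.normal(0, 0.5, t.size)

# Missing completely at random (MCAR): mask 20% of samples
mask = rng.random(t.size) < 0.2
observed = pd.Series(np.where(mask, np.nan, signal))

# Two baseline imputers: mean fill vs. linear interpolation
candidates = {
    "mean": observed.fillna(observed.mean()),
    "linear_interp": observed.interpolate(limit_direction="both"),
}

# Score each imputer only on the artificially masked positions
for name, imputed in candidates.items():
    mae = np.abs(imputed.to_numpy()[mask] - signal[mask]).mean()
    print(f"{name}: MAE = {mae:.2f} deg")
```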

14 pages, 2389 KB  
Article
Development of Marker-Based Motion Capture Using RGB Cameras: A Neural Network Approach for Spherical Marker Detection
by Yuji Ohshima
Sensors 2025, 25(17), 5228; https://doi.org/10.3390/s25175228 - 22 Aug 2025
Viewed by 1022
Abstract
Marker-based motion capture systems using infrared cameras (IR MoCaps) are commonly employed in biomechanical research. However, their high costs pose challenges for many institutions seeking to implement such systems. This study aims to develop a neural network (NN) model to estimate the digitized coordinates of spherical markers and to establish a lower-cost marker-based motion capture system using RGB cameras. Thirteen participants were instructed to walk at self-selected speeds while their movements were recorded with eight RGB cameras. Each participant undertook trials with 24 mm spherical markers attached to 25 body landmarks (marker trials), as well as trials without markers (non-marker trials). To generate training data, virtual markers mimicking spherical markers were randomly inserted into images from the non-marker trials. These images were then used to fine-tune a pre-trained model, resulting in an NN model capable of detecting spherical markers. The digitized coordinates inferred by the NN model were employed to reconstruct the three-dimensional coordinates of the spherical markers, which were subsequently compared with the gold standard. The mean resultant error was determined to be 2.2 mm. These results suggest that the proposed method enables fully automatic marker reconstruction comparable to that of IR MoCap, highlighting its potential for application in motion analysis. Full article
(This article belongs to the Section Physical Sensors)
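
The training-data generation step described above (compositing virtual markers into marker-free frames) can be prototyped in a few lines. This sketch pastes a shaded disc at a random position and records its center as a detection label; the radius, shading, and image size are arbitrary choices for illustration, not the paper's parameters.

```python
import numpy as np
import cv2

rng = np.random.default_rng(42)

def add_virtual_marker(image, radius=12):
    """Composite a simple shaded disc into an image; return image and center label."""
    h, w = image.shape[:2]
    cx = int(rng.integers(radius, w - radius))
    cy = int(rng.integers(radius, h - radius))
    # Filled grey disc with a lighter highlight to roughly mimic a matte sphere
    cv2.circle(image, (cx, cy), radius, (200, 200, 200), thickness=-1)
    cv2.circle(image, (cx - radius // 3, cy - radius // 3), radius // 3,
               (240, 240, 240), thickness=-1)
    return image, (cx, cy)

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a non-marker frame
frame, label = add_virtual_marker(frame)
print("synthetic marker center:", label)  # (x, y) label for fine-tuning a detector
```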

18 pages, 4452 KB  
Article
Upper Limb Joint Angle Estimation Using a Reduced Number of IMU Sensors and Recurrent Neural Networks
by Kevin Niño-Tejada, Laura Saldaña-Aristizábal, Jhonathan L. Rivas-Caicedo and Juan F. Patarroyo-Montenegro
Electronics 2025, 14(15), 3039; https://doi.org/10.3390/electronics14153039 - 30 Jul 2025
Viewed by 1230
Abstract
Accurate estimation of upper-limb joint angles is essential in biomechanics, rehabilitation, and wearable robotics. While inertial measurement units (IMUs) offer portability and flexibility, systems requiring multiple inertial sensors can be intrusive and complex to deploy. In contrast, optical motion capture (MoCap) systems provide precise tracking but are constrained to controlled laboratory environments. This study presents a deep learning-based approach for estimating shoulder and elbow joint angles using only three IMU sensors positioned on the chest and both wrists, validated against reference angles obtained from a MoCap system. The input data includes Euler angles, accelerometer, and gyroscope data, synchronized and segmented into sliding windows. Two recurrent neural network architectures, Convolutional Neural Network with Long Short-Term Memory (CNN-LSTM) and Bidirectional LSTM (BLSTM), were trained and evaluated under identical conditions. The CNN component enabled the LSTM to extract spatial features that enhance sequential pattern learning, improving angle reconstruction. Both models achieved accurate estimation performance: CNN-LSTM yielded lower Mean Absolute Error (MAE) in smooth trajectories, while BLSTM provided smoother predictions but underestimated some peak movements, especially in the primary axes of rotation. These findings support the development of scalable, deep learning-based wearable systems and contribute to future applications in clinical assessment, sports performance analysis, and human motion research. Full article
(This article belongs to the Special Issue Wearable Sensors for Human Position, Attitude and Motion Tracking)
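
A minimal sketch of the CNN-LSTM idea described above, assuming windowed IMU input of shape (window, channels) regressed to joint angles; the layer sizes, window length, and channel count are assumptions for illustration, not the paper's architecture.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW, CHANNELS, ANGLES = 100, 27, 4  # e.g. 3 IMUs x 9 signals -> 4 joint angles

# Compact CNN-LSTM: Conv1D extracts local spatial features per window,
# the LSTM models the temporal sequence, a dense head regresses joint angles.
model = keras.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    layers.Conv1D(64, kernel_size=5, activation="relu", padding="same"),
    layers.MaxPooling1D(2),
    layers.LSTM(64),
    layers.Dense(ANGLES),  # shoulder/elbow angles at the window's last frame
])
model.compile(optimizer="adam", loss="mae")

# Random tensors standing in for windowed IMU data and MoCap reference angles
X = np.random.randn(256, WINDOW, CHANNELS).astype("float32")
y = np.random.randn(256, ANGLES).astype("float32")
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
```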

25 pages, 315 KB  
Review
Motion Capture Technologies for Athletic Performance Enhancement and Injury Risk Assessment: A Review for Multi-Sport Organizations
by Bahman Adlou, Christopher Wilburn and Wendi Weimar
Sensors 2025, 25(14), 4384; https://doi.org/10.3390/s25144384 - 13 Jul 2025
Cited by 1 | Viewed by 4078
Abstract
Background: Motion capture (MoCap) technologies have transformed athlete monitoring, yet athletic departments face complex decisions when selecting systems for multiple sports. Methods: We conducted a narrative review of peer-reviewed studies (2015–2025) examining optical marker-based systems, inertial measurement unit (IMU) systems (including Global Navigation Satellite System (GNSS)-integrated systems), and markerless computer vision systems. Studies were evaluated for validated accuracy metrics across indoor court, aquatic, and outdoor field environments. Results: Optical systems maintain sub-millimeter accuracy in controlled environments but face field limitations. IMU systems demonstrate an angular accuracy of 2–8° depending on movement complexity. Markerless systems show variable accuracy (sagittal: 3–15°, transverse: 3–57°). Environmental factors substantially impact system performance, with aquatic settings introducing an additional orientation error of 2° versus terrestrial applications. Outdoor environments challenge GNSS-based tracking (±0.3–3 m positional accuracy). Critical gaps include limited gender-specific validation and insufficient long-term reliability data. Conclusions: This review proposes a tiered implementation framework combining foundation-level team monitoring with specialized assessment tools. This evidence-based approach guides the selection of technology aligned with organizational priorities, sport-specific requirements, and resource constraints. Full article
(This article belongs to the Special Issue Sensors Technology for Sports Biomechanics Applications)
15 pages, 4940 KB  
Article
Consistency Is Key: A Secondary Analysis of Wearable Motion Sensor Accuracy Measuring Knee Angles Across Activities of Daily Living Before and After Knee Arthroplasty
by Robert C. Marchand, Kelly B. Taylor, Emily C. Kaczynski, Skye Richards, Jayson B. Hutchinson, Shayan Khodabakhsh and Ryan M. Chapman
Sensors 2025, 25(13), 3942; https://doi.org/10.3390/s25133942 - 25 Jun 2025
Viewed by 996
Abstract
Background: Monitoring knee range of motion (ROM) after total knee arthroplasty (TKA) via clinically deployed wearable motion sensors is increasingly common. Prior work from our own lab showed promising results for one wearable motion sensor system; however, we did not investigate errors across different activities. Accordingly, herein we conducted secondary analyses of error using wearable inertial measurement units (IMUs) quantifying sagittal knee angles across activities in TKA patients. Methods: After Institutional Review Board (IRB) approval, TKA patients were recruited for participation in two visits (n = 20 enrolled, n = 5 lost to follow-up). Following a sensor tutorial (MotionSense, Stryker, Mahwah, NJ, USA), sensors and motion capture (MOCAP) markers were applied for data capture before surgery. One surgeon then performed TKA. An identical data capture was completed postoperatively. MOCAP and wearable motion sensor knee angles were computed during a series of activities and compared. A two-way ANOVA evaluated the impact of time (pre- vs. post-TKA) and activity on average error. Another two-way ANOVA assessed whether error at local maxima differed from error at local minima and whether either differed across activities. Results: Pre-TKA and post-TKA errors did not differ, and no differences were noted across activities. On average, errors were below the clinically acceptable threshold (4.9 ± 2.6° vs. ≤5°). Conclusions: With average error ≤ 5°, these specific sensors accurately quantify knee angles before and after surgical intervention. Future investigations should explore leveraging this type of technology to evaluate preoperative function decline and postoperative function recovery. Full article
(This article belongs to the Special Issue State of the Art in Wearable Sensors for Health Monitoring)
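
The error analysis above hinges on a two-way ANOVA over time (pre/post) and activity. Below is a plain two-way ANOVA sketch using statsmodels on an illustrative long-format error table; note the study's repeated-measures design would additionally require subject terms, omitted here for brevity, and the activities listed are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)

# Illustrative long-format table: per-trial absolute knee-angle error (deg)
df = pd.DataFrame({
    "error": rng.normal(4.9, 2.6, 120).clip(min=0),
    "time": np.tile(["pre", "post"], 60),
    "activity": np.repeat(["walk", "sit_to_stand", "stairs"], 40),
})

# Two-way ANOVA: does error depend on time, activity, or their interaction?
model = ols("error ~ C(time) * C(activity)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```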

16 pages, 1978 KB  
Article
Learning-Assisted Multi-IMU Proprioceptive State Estimation for Quadruped Robots
by Xuanning Liu, Yajie Bao, Peng Cheng, Dan Shen, Zhengyang Fan, Hao Xu and Genshe Chen
Information 2025, 16(6), 479; https://doi.org/10.3390/info16060479 - 9 Jun 2025
Viewed by 3831
Abstract
This paper presents a learning-assisted approach for state estimation of quadruped robots using observations from proprioceptive sensors, including multiple inertial measurement units (IMUs). Specifically, one body IMU and four additional IMUs attached to each calf link of the robot are used, in addition to joint encoders, for sensing the dynamics of the body and legs. An extended Kalman filter (EKF) is employed to fuse the sensor data and estimate the robot's states in the world frame, with the additional measurements enhancing the filter's convergence. To circumvent the requirement for measurements from a motion capture (mocap) system or other vision systems, the right-invariant EKF (RI-EKF) is extended to employ the foot IMU measurements for enhanced state estimation, and a learning-based approach is presented to estimate the vision-system measurements for the EKF. One-dimensional convolutional neural networks (CNNs) are leveraged to estimate the required measurements using only the available proprioception data. Experiments on real data from a quadruped robot demonstrate that proprioception can be sufficient for state estimation. The proposed learning-assisted approach, which does not rely on data from vision systems, achieves competitive accuracy compared to an EKF using mocap measurements and lower estimation errors than the RI-EKF using multi-IMU measurements. Full article
(This article belongs to the Special Issue Sensing and Wireless Communications)
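
As a simplified stand-in for the filtering machinery above, the sketch below runs one predict/correct step of a linear 1D Kalman filter in which IMU acceleration drives the prediction and a position fix (mocap-like or learned) corrects it; the real system uses an extended, right-invariant formulation on the full robot state, and all noise values here are invented.

```python
import numpy as np

# Minimal 1D constant-velocity Kalman filter over state [position, velocity]
dt = 0.01
F = np.array([[1, dt], [0, 1]])       # state transition
B = np.array([[0.5 * dt**2], [dt]])   # how acceleration enters the state
H = np.array([[1.0, 0.0]])            # we observe position only
Q = np.diag([1e-5, 1e-3])             # process noise covariance
R = np.array([[1e-2]])                # measurement noise covariance

x = np.zeros((2, 1))                  # state estimate
P = np.eye(2)                         # state covariance

def step(x, P, accel, z):
    # Predict with the IMU acceleration
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    # Correct with the position measurement
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = step(x, P, accel=0.3, z=np.array([[0.001]]))
print("pos, vel estimate:", x.ravel())
```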

17 pages, 2275 KB  
Article
Comparative Assessment of an IMU-Based Wearable Device and a Marker-Based Optoelectronic System in Trunk Motion Analysis: A Cross-Sectional Investigation
by Fulvio Dal Farra, Serena Cerfoglio, Micaela Porta, Massimiliano Pau, Manuela Galli, Nicola Francesco Lopomo and Veronica Cimolin
Appl. Sci. 2025, 15(11), 5931; https://doi.org/10.3390/app15115931 - 24 May 2025
Cited by 1 | Viewed by 4945
Abstract
Wearable inertial measurement units (IMUs) are increasingly used in human motion analysis due to their ability to measure movement in real-world environments. However, with rapid technological advancement and a wide variety of models available, it is essential to evaluate their performance and suitability for analyzing specific body regions. This study aimed to assess the accuracy and precision of an IMU-based sensor in measuring trunk range of motion (ROM). Twenty-seven healthy adults (11 males, 16 females; mean age: 31.1 ± 11.0 years) participated. Each performed trunk movements—flexion, extension, lateral bending, and rotation—while angular data were recorded simultaneously using a single IMU and a marker-based optoelectronic motion capture (MoCap) system. Analyses included accuracy indices, Root Mean Square Error (RMSE), Pearson’s correlation coefficient (r), concordance correlation coefficient (CCC), and Bland–Altman limits of agreement. The IMU showed high accuracy in rotation (92.4%), with strong correlation (r = 0.944, p < 0.001) and excellent agreement [CCC = 0.927; (0.977–0.957)]. Flexion (72.1%), extension (64.1%), and lateral bending (61.4%) showed moderate accuracy and correlations (r = 0.703, 0.564, and 0.430, p < 0.05). The RMSE ranged from 1.09° (rotation) to 3.01° (flexion). While the IMU consistently underestimated ROM, its accuracy in rotation highlights its potential as a cost-effective MoCap alternative, warranting further study for broader clinical use. Full article
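
Among the agreement measures reported above, the concordance correlation coefficient (CCC) is the least commonly implemented; a minimal sketch of Lin's CCC follows, with illustrative ROM values rather than the study's data.

```python
import numpy as np

def lins_ccc(a, b):
    """Lin's concordance correlation coefficient between two measurement series."""
    ma, mb = a.mean(), b.mean()
    va, vb = a.var(), b.var()            # population variances, per Lin (1989)
    cov = ((a - ma) * (b - mb)).mean()
    return 2 * cov / (va + vb + (ma - mb) ** 2)

# Illustrative trunk-rotation ROM (deg) from an IMU and an optoelectronic system
imu   = np.array([41.2, 38.5, 45.1, 39.9, 43.0])
mocap = np.array([42.0, 39.7, 46.3, 41.1, 44.2])
print(f"CCC = {lins_ccc(imu, mocap):.3f}")
```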

37 pages, 2036 KB  
Article
GCN-Transformer: Graph Convolutional Network and Transformer for Multi-Person Pose Forecasting Using Sensor-Based Motion Data
by Romeo Šajina, Goran Oreški and Marina Ivašić-Kos
Sensors 2025, 25(10), 3136; https://doi.org/10.3390/s25103136 - 15 May 2025
Viewed by 2477
Abstract
Multi-person pose forecasting involves predicting the future body poses of multiple individuals over time, involving complex movement dynamics and interaction dependencies. Its relevance spans various fields, including computer vision, robotics, human–computer interaction, and surveillance. This task is particularly important in sensor-driven applications, where motion capture systems, including vision-based sensors and IMUs, provide crucial data for analyzing human movement. This paper introduces GCN-Transformer, a novel model for multi-person pose forecasting that leverages the integration of Graph Convolutional Network and Transformer architectures. We integrated novel loss terms during the training phase to enable the model to learn both interaction dependencies and the trajectories of multiple joints simultaneously. Additionally, we propose a novel pose forecasting evaluation metric called Final Joint Position and Trajectory Error (FJPTE), which assesses both local movement dynamics and global movement errors by considering the final position and the trajectory leading up to it, providing a more comprehensive assessment of movement dynamics. Our model uniquely integrates scene-level graph-based encoding and personalized attention-based decoding, introducing a novel architecture for multi-person pose forecasting that achieves state-of-the-art results across four datasets. The model is trained and evaluated on the CMU-Mocap, MuPoTS-3D, SoMoF Benchmark, and ExPI datasets, which were collected using sensor-based motion capture systems, ensuring its applicability in real-world scenarios. Comprehensive evaluations on these datasets demonstrate that the proposed GCN-Transformer consistently outperforms existing state-of-the-art (SOTA) models according to the VIM and MPJPE metrics. Specifically, based on the MPJPE metric, GCN-Transformer shows a 4.7% improvement over the closest SOTA model on CMU-Mocap, a 4.3% improvement on MuPoTS-3D, a 5% improvement on the SoMoF Benchmark, and a 2.6% improvement on the ExPI dataset. Unlike other models whose performance fluctuates across datasets, GCN-Transformer performs consistently, proving its robustness in multi-person pose forecasting and providing an excellent foundation for the application of GCN-Transformer in different domains. Full article
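
Of the two evaluation metrics named above, MPJPE is the standard one; a minimal sketch follows, assuming pose tensors of shape (frames, joints, 3) with random placeholder data.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error: average Euclidean distance over
    all frames and joints. pred/gt shape: (frames, joints, 3)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

# Illustrative tensors standing in for forecast and ground-truth poses
gt = np.random.randn(25, 17, 3)              # 25 future frames, 17 joints
pred = gt + np.random.normal(0, 0.05, gt.shape)
print(f"MPJPE = {mpjpe(pred, gt):.4f}")
```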

24 pages, 8541 KB  
Article
Feature Fusion Graph Consecutive-Attention Network for Skeleton-Based Tennis Action Recognition
by Pawel Powroznik, Maria Skublewska-Paszkowska, Krzysztof Dziedzic and Marcin Barszcz
Appl. Sci. 2025, 15(10), 5320; https://doi.org/10.3390/app15105320 - 9 May 2025
Viewed by 1157
Abstract
Human action recognition has become a key direction in computer vision. Deep learning models, particularly when combined with sensor data fusion, can significantly enhance various applications by learning complex patterns and relationships from diverse data streams. Thus, this study proposes a new model, the Feature Fusion Graph Consecutive-Attention Network (FFGCAN), to enhance performance in the classification of the main tennis strokes: forehand, backhand, volley forehand, and volley backhand. The proposed network incorporates seven basic blocks combined with two types of module, an Adaptive Consecutive Attention Module and a Graph Self-Attention module, which are employed to extract joint information at different scales from the motion capture data. By focusing on relevant components, the model enriches the network's comprehension of tennis motion data and yields a richer representation. Moreover, FFGCAN utilizes a fusion of motion capture data that generates a channel-specific topology map for each output channel, reflecting how joints are connected while the tennis player is moving. The proposed solution was verified on three well-known motion capture datasets, THETIS, Tennis-Mocap, and 3DTennisDS, each containing tennis movements in various formats. A series of experiments was performed, with data divided into training (70%), validation (15%), and testing (15%) subsets; testing used five trials. The FFGCAN model obtained very high accuracy, precision, recall, and F1-score, outperforming networks commonly applied to action recognition, such as the Spatial-Temporal Graph Convolutional Network and its modifications, and demonstrated excellent tennis movement prediction ability. Full article
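
A minimal sketch of the spatial graph-convolution building block underlying skeleton networks like the one above, using a toy five-joint chain; the adjacency, feature sizes, and normalization follow the common GCN recipe, not the FFGCAN blocks themselves.

```python
import numpy as np

def graph_conv(X, A, W):
    """One spatial graph-convolution step over a skeleton.
    X: (joints, features), A: (joints, joints) adjacency, W: (features, out)."""
    A_hat = A + np.eye(A.shape[0])                      # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0)  # ReLU

# Toy 5-joint kinematic chain (e.g. torso-shoulder-elbow-wrist-racket hand)
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1

X = np.random.randn(5, 3)          # per-joint 3D coordinates as input features
W = np.random.randn(3, 8)          # learnable projection to 8 channels
print(graph_conv(X, A, W).shape)   # -> (5, 8)
```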

16 pages, 3643 KB  
Article
2D Pose Estimation vs. Inertial Measurement Unit-Based Motion Capture in Ergonomics: Assessing Postural Risk in Dental Assistants
by Steven Simon, Jonna Meining, Laura Laurendi, Thorsten Berkefeld, Jonas Dully, Carlo Dindorf and Michael Fröhlich
Bioengineering 2025, 12(4), 403; https://doi.org/10.3390/bioengineering12040403 - 10 Apr 2025
Viewed by 966
Abstract
The dental profession has a high prevalence of musculoskeletal disorders because daily working life is characterized by many monotonous and one-sided physical exertions. Inertial measurement unit (IMU)-based motion capture (MoCap) is increasingly utilized for assessing workplace postural risk. However, practical alternatives are needed because IMU-based MoCap is time-consuming and relatively cost-intensive for ergonomists. This study compared two measurement technologies: IMU-based MoCap and a time-effective alternative, two-dimensional (2D) pose estimation. Forty-five dental assistant students (all female) were included (age: 19.56 ± 5.91 years; height: 165.00 ± 6.35 cm; weight: 63.41 ± 13.87 kg; BMI: 21.56 ± 4.63 kg/m2). A 30 s IMU-based MoCap recording and image-based pose estimation in the sagittal and frontal planes were performed during a representative experimental task. Data were analyzed using Cohen's weighted kappa and Bland–Altman plots. There was significant moderate agreement between the Rapid Upper Limb Assessment (RULA) scores from IMU-based MoCap and pose estimation (κ = 0.461, pB = 0.006), but only poor, non-significant agreement (p > 0.05) for the body regions of the upper arm, lower arm, wrist, neck, and trunk. These findings indicate that IMU-based MoCap and pose estimation align moderately on the overall RULA score but not for specific body parts. While pose estimation might be useful for quick general posture assessment, it may not be reliable for evaluating joint-level differences, especially in body areas such as the upper extremities. Future research should focus on refining video-based pose estimation for real-time postural risk assessment in the workplace. Full article
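
Cohen's weighted kappa, used above to compare RULA scores, is available in scikit-learn; a minimal sketch follows with illustrative scores, and the quadratic weighting is an assumption since the abstract does not state the weight scheme.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Illustrative RULA grand scores (1-7) for the same subjects rated by
# IMU-based MoCap and by 2D pose estimation
rula_imu  = np.array([3, 4, 4, 5, 3, 6, 4, 5, 3, 4])
rula_pose = np.array([3, 4, 5, 5, 4, 5, 4, 4, 3, 4])

# Weighted kappa penalizes larger ordinal disagreements more heavily
kappa = cohen_kappa_score(rula_imu, rula_pose, weights="quadratic")
print(f"weighted kappa = {kappa:.3f}")
```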

23 pages, 7915 KB  
Article
Deep-Learning-Based Recovery of Missing Optical Marker Trajectories in 3D Motion Capture Systems
by Oleksandr Yuhai, Ahnryul Choi, Yubin Cho, Hyunggun Kim and Joung Hwan Mun
Bioengineering 2024, 11(6), 560; https://doi.org/10.3390/bioengineering11060560 - 1 Jun 2024
Cited by 4 | Viewed by 2558
Abstract
Motion capture (MoCap) technology, essential for biomechanics and motion analysis, faces challenges from data loss due to occlusions and technical issues. Traditional recovery methods, based on inter-marker relationships or independent marker treatment, have limitations. This study introduces a novel U-net-inspired bi-directional long short-term memory (U-Bi-LSTM) autoencoder-based technique for recovering missing MoCap data across multi-camera setups. Leveraging multi-camera and triangulated 3D data, this method employs a sophisticated U-shaped deep learning structure with an adaptive Huber regression layer, enhancing outlier robustness and minimizing reconstruction errors, proving particularly beneficial for long-term data loss scenarios. Our approach surpasses traditional piecewise cubic spline and state-of-the-art sparse low-rank methods, demonstrating statistically significant improvements in reconstruction error across various gap lengths and numbers of gaps. This research not only advances the technical capabilities of MoCap systems but also enriches the analytical tools available for biomechanical research, offering new possibilities for enhancing athletic performance, optimizing rehabilitation protocols, and developing personalized treatment plans based on precise biomechanical data. Full article
(This article belongs to the Special Issue Biomechanics and Motion Analysis)
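
A minimal sketch in the spirit of the gap-filling approach above: a small bidirectional-LSTM autoencoder trained with a Huber loss to reconstruct marker trajectories from gappy input. The flat architecture, zero-fill convention, and sizes are simplifications for illustration, not the paper's U-shaped network.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

FRAMES, COORDS = 120, 9  # e.g. 3 markers x (x, y, z) per frame

# Bidirectional LSTM reads the whole sequence; a per-frame dense head
# reconstructs the full trajectories, including the dropped-out frames.
model = keras.Sequential([
    layers.Input(shape=(FRAMES, COORDS)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(COORDS)),
])
model.compile(optimizer="adam", loss=keras.losses.Huber(delta=1.0))

# Train on complete sequences with simulated gaps zeroed out in the input
clean = np.random.randn(64, FRAMES, COORDS).astype("float32")
gappy = clean.copy()
gappy[:, 40:60, :] = 0.0                     # simulate a 20-frame marker dropout
model.fit(gappy, clean, epochs=1, batch_size=16, verbose=0)
```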

21 pages, 26199 KB  
Article
Implementation of a Long Short-Term Memory Neural Network-Based Algorithm for Dynamic Obstacle Avoidance
by Esmeralda Mulás-Tejeda, Alfonso Gómez-Espinosa, Jesús Arturo Escobedo Cabello, Jose Antonio Cantoral-Ceballos and Alejandra Molina-Leal
Sensors 2024, 24(10), 3004; https://doi.org/10.3390/s24103004 - 9 May 2024
Cited by 10 | Viewed by 2537
Abstract
Autonomous mobile robots are essential to industry, and human–robot interactions are becoming more common nowadays. These interactions require that robots navigate scenarios with static and dynamic obstacles in a safe manner, avoiding collisions. This paper presents a physical implementation of a method for dynamic obstacle avoidance using a long short-term memory (LSTM) neural network that draws on the mobile robot's LiDAR so that the robot can navigate scenarios with static and dynamic obstacles while avoiding collisions and reaching its goal. The model is implemented on a TurtleBot3 mobile robot within an OptiTrack motion capture (MoCap) system used to obtain its position at any given time. The user operates the robot through these scenarios, recording its LiDAR readings, target point, position inside the MoCap system, and linear and angular velocities, all of which serve as the input for the LSTM network. The model is trained on data from multiple user-operated trajectories across five different scenarios, outputting the linear and angular velocities for the mobile robot. Physical experiments show that the model successfully allows the mobile robot to reach the target point in each scenario while avoiding the dynamic obstacle, with a validation accuracy of 98.02%. Full article
(This article belongs to the Section Navigation and Positioning)

20 pages, 4350 KB  
Article
Easy Rocap: A Low-Cost and Easy-to-Use Motion Capture System for Drones
by Haoyu Wang, Chi Chen, Yong He, Shangzhe Sun, Liuchun Li, Yuhang Xu and Bisheng Yang
Drones 2024, 8(4), 137; https://doi.org/10.3390/drones8040137 - 2 Apr 2024
Cited by 4 | Viewed by 4454
Abstract
Fast and accurate pose estimation is essential for the local motion control of robots such as drones. At present, camera-based motion capture (Mocap) systems are mostly used by robots. However, this kind of Mocap system is easily affected by light noise and camera occlusion, and the cost of common commercial Mocap systems is high. To address these challenges, we propose Easy Rocap, a low-cost, open-source robot motion capture system, which can quickly and robustly capture the accurate position and orientation of the robot. Firstly, based on training a real-time object detector, an object-filtering algorithm using class and confidence is designed to eliminate false detections. Secondly, multiple-object tracking (MOT) is applied to maintain the continuity of the trajectories, and the epipolar constraint is applied to multi-view correspondences. Finally, the calibrated multi-view cameras are used to calculate the 3D coordinates of the markers and effectively estimate the 3D pose of the target robot. Our system takes in real-time multi-camera data streams, making it easy to integrate into the robot system. In the simulation scenario experiment, the average position estimation error of the method is less than 0.008 m, and the average orientation error is less than 0.65 degrees. In the real scenario experiment, we compared the localization results of our method with the advanced LiDAR-Inertial Simultaneous Localization and Mapping (SLAM) algorithm. According to the experimental results, SLAM generates drifts during turns, while our method can overcome the drifts and accumulated errors of SLAM, making the trajectory more stable and accurate. In addition, the pose estimation speed of our system can reach 30 Hz. Full article
(This article belongs to the Special Issue Resilient UAV Autonomy and Remote Sensing)
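
The final reconstruction step above (triangulating marker positions from calibrated multi-view detections) can be sketched with OpenCV's triangulatePoints; the camera intrinsics and poses below are invented for illustration.

```python
import numpy as np
import cv2

# Two calibrated cameras: projection matrices P = K [R | t] (illustrative values)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=float)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
R2 = cv2.Rodrigues(np.array([0.0, 0.3, 0.0]))[0]   # second camera yawed 0.3 rad
P2 = K @ np.hstack([R2, np.array([[-0.5], [0.0], [0.0]])])

def project(P, X):
    """Project a 3D point through a 3x4 projection matrix to pixel coords."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Ground-truth marker, projected into each view to fake matched detections
X_true = np.array([0.2, -0.1, 3.0])
x1, x2 = project(P1, X_true), project(P2, X_true)

# Triangulate back to 3D from the two 2D correspondences
X_h = cv2.triangulatePoints(P1, P2, x1.reshape(2, 1), x2.reshape(2, 1))
X_est = (X_h[:3] / X_h[3]).ravel()
print("recovered 3D point:", np.round(X_est, 4))
```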

20 pages, 9609 KB  
Article
Development of Wearable Devices for Collecting Digital Rehabilitation/Fitness Data from Lower Limbs
by Yu-Jung Huang, Chao-Shu Chang, Yu-Chi Wu, Chin-Chuan Han, Yuan-Yang Cheng and Hsian-Min Chen
Sensors 2024, 24(6), 1935; https://doi.org/10.3390/s24061935 - 18 Mar 2024
Cited by 3 | Viewed by 3057
Abstract
Lower extremity exercises are considered a standard and necessary treatment for rehabilitation and a well-rounded fitness routine, which builds strength, flexibility, and balance. The efficacy of rehabilitation programs hinges on meticulous monitoring of both adherence to home exercise routines and the quality of performance. However, in a home environment, patients often tend to inaccurately report the number of exercises performed and overlook the correctness of their rehabilitation motions, lacking quantifiable and systematic standards, thus impeding the recovery process. To address these challenges, there is a crucial need for a lightweight, unbiased, cost-effective, and objective wearable motion capture (Mocap) system designed for monitoring and evaluating home-based rehabilitation/fitness programs. This paper focuses on the development of such a system to gather exercise data into usable metrics. Five radio frequency (RF) inertial measurement unit (IMU) devices (RF-IMUs) were developed and strategically placed on the calves, thighs, and abdomen. A two-layer long short-term memory (LSTM) model was used for fitness activity recognition (FAR) with an average accuracy of 97.4%. An intelligent smartphone algorithm was developed to track motion, recognize activity, and calculate key exercise variables in real time for squat, high-knees, and lunge exercises. Additionally, a 3D avatar in the smartphone app allows users to observe and track their progress in real time or by replaying their exercise motions. A dynamic time warping (DTW) algorithm was also integrated into the system for scoring the similarity between two motions. The system's adaptability shows promise for applications in medical rehabilitation and sports. Full article
(This article belongs to the Special Issue Multi-sensor for Human Activity Recognition: 2nd Edition)
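
The DTW scoring mentioned above is simple to sketch; below is a classic O(nm) dynamic-time-warping distance between two 1D motion curves, with synthetic knee-angle signals standing in for a reference exercise and a user attempt.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1D motion signals."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Illustrative knee-angle curves: a reference squat vs. a slower user attempt
t = np.linspace(0, 1, 100)
reference = 90 * np.sin(np.pi * t)
attempt = 85 * np.sin(np.pi * t**1.2)     # similar shape, warped in time
print(f"DTW distance = {dtw_distance(reference, attempt):.1f}")
```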
