Systematic Review

Measurement Error of Markerless Motion Capture Systems Applied to Tracking Movements in Human–Object Interaction Tasks: A Systematic Review with Best Evidence Synthesis

by Nicole Unsihuay 1,2,*, Rene F. Clavo 2 and Luiz H. Palucci Vieira 1,*
1 Laboratory of Biomedical Imaging and Signal Processing (Lab-SEIB), Research Group on Technology Applied to Health and Physical Performance—TeHealP@PUCP, Bioengineering Section, Department of Engineering, Faculty of Sciences and Engineering, Pontificia Universidad Católica del Perú, PUCP, Av. Universitaria 1801, San Miguel, Lima 15088, Peru
2 Undergraduate Program in Biomedical Engineering, Faculty of Sciences and Engineering (FCI), Bioengineering Section, Pontificia Universidad Católica del Perú, Av. Universitaria 1801, San Miguel, Lima 15088, Peru
* Authors to whom correspondence should be addressed.
Technologies 2026, 14(1), 28; https://doi.org/10.3390/technologies14010028
Submission received: 24 October 2025 / Revised: 5 December 2025 / Accepted: 24 December 2025 / Published: 1 January 2026
(This article belongs to the Special Issue Image Analysis and Processing)

Abstract

This systematic review focused on the validity of markerless motion capture (MMC) systems used for human movement assessment during tasks that involve physical interaction with objects. Five electronic databases were searched up to May 2025. Eligible studies (i) assessed the validity of an MMC system, (ii) required human participants to perform tasks involving physical interaction with objects (e.g., lifting, carrying, gait with loads), (iii) employed a marker-based reference system, and (iv) reported at least one kinematic metric. Risk of bias was assessed using the SURE checklist. A best-evidence synthesis was conducted to classify the level of evidence across included studies. Fifteen studies met eligibility (median = 21 participants per study). In general, MMC systems performed well in capturing movement waveforms (i.e., high associations with reference systems), but their precision (i.e., the magnitude of differences relative to the reference systems) still requires improvement for tasks involving human–object interaction. Most tasks analyzed were lifts, gait with load, squatting, reaching/manipulation, and technical gestures. There was strong evidence for the validity of MMC during lifting tasks. In summary, MMC systems exhibit promising evidence of validity for some human–object interaction tasks, especially lifting, for which strong evidence was observed across studies. In contrast, evidence for tasks including gait under load, squatting, reaching, or touchscreen interaction is limited, moderate, or conflicting. Notwithstanding these limitations, most studies presented moderate- to high-quality methodology. Additional research is required to optimize protocols for studying the measurement error of MMC during human–object interaction in real-world environments.

1. Introduction

Markerless motion capture (MMC) refers to the process of tracking human movement directly from video or depth images using computer vision and machine learning, without reflective markers attached to the body of the evaluated subjects [1,2]. Researchers and practitioners in movement science are increasingly interested in these systems due to their less invasive nature, improved portability, and often lower cost compared with marker-based systems [3]. Several validation studies, and even recent reviews and meta-analyses, report adequate or excellent accuracy and reliability of MMC systems for spatiotemporal movement measures (e.g., gait velocity and step length) [4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]. While such previous reviews provide a knowledge base, most were conducted with simple, formally controlled experiments in mind, and none has solely examined movements under conditions of human–object interaction (e.g., lifting, carrying, or manipulating tools), which may represent the physical actions required in real-world tasks [20]. Thus, collated evidence specifically about the performance of MMC during human–object interaction tasks is currently lacking.
Measuring human movement features in tasks requiring interactions with objects can increase the usefulness of MMC systems for a range of distinct fields (e.g., [21,22,23]). In ergonomics, previous work evaluated the utility of MMC for lifting and handling tasks, adding in-depth detail to distinguish occupational risks and support posture assessment [24,25,26]. Markerless systems have also been implemented for assessing functional tasks within clinical rehabilitation contexts, such as reaching and upper-limb manipulation [27]. Of note, MMC has now been adopted for monitoring patients outside laboratory environments [6,28]. In sports and performance, MMC is being used to conduct ecologically valid assessments of skill execution, ranging from gait [9] and skiing [10] to tactical movements (e.g., box jumps, single-leg landings, squats, or pivoting under heavy load) [11,13]. Collectively, these applications demonstrate the promise of MMC for moving beyond controlled laboratory settings towards real-world clinical relevance. However, measuring human movement kinematics with MMC while subjects interact with objects can still involve problems in identifying regions of interest comparable to those of traditional marker-based kinematic systems. These include, for example, occlusion effects arising from the object overlapping the body in the image sequences, as well as variations in environmental lighting, which can also affect MMC system functioning [20,29,30].
Despite increased validation of MMC systems, reviews to date have mainly summarized simple tasks, such as unladen gait or posture in laboratory environments [4,5,6]. There is limited research evaluating their use in tasks involving human–object interaction given the relevance of these situations in ergonomics, clinical, and sport-related settings, and this gap highlights the need for a systematic synthesis of the evidence on this topic. Therefore, the purpose of this systematic review was to examine the validity of markerless motion capture systems for tasks involving human–object interaction.

2. Materials and Methods

The protocol for this review was developed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA 2020) statement [31] and was registered in the Open Science Framework (OSF) under ID #meugz (https://osf.io/9zx6c, accessed on 15 September 2025; DOI: 10.17605/OSF.IO/MEUGZ).

2.1. Search Strategy

The electronic databases IEEE Xplore, Scopus, Web of Science, EBSCOhost, and PubMed were searched on 6 May 2025 (between 18:31 and 19:56, GMT−5) to identify evidence related to the validity of markerless motion capture systems in human–object interaction tasks published in scientific journal articles. The search strategy and keywords were defined according to previous systematic reviews evaluating the validity, reliability, and clinical applications of markerless motion capture systems [4,5,6]. The search strategy used in the databases (Table 1) combined keywords and Medical Subject Headings (MeSH) to make the search as exhaustive as possible. Keywords that did not yield any results were discarded after a pilot test. Searches were restricted to the title, abstract, and keyword fields in all the indicated databases.

2.2. Eligibility Criteria

This review only considered original studies that met the following conditions: (i) peer-reviewed scientific articles; (ii) full text available in English, with eligibility structured using a PICOS/PECOS framework [32]; (iii) human participants; (iv) reported at least one kinematic-derived outcome (e.g., gait speed, joint angles); (v) evaluated a markerless motion capture system that was directly compared with a reference system; and (vi) explicitly stated or allowed inference of ethics committee approval [33]. Furthermore, eligible studies had to include participants who passively or actively interacted with a physical object during data collection (e.g., lifting, carrying, manipulating, or moving under external load). No restrictions were imposed on study design or publication date (articles were searched from database inception to the end date of the search).

2.3. Study Selection Process

First, all duplicate records were removed using the “Duplicate Items” function of the free bibliographic software Zotero (v.7.0.15; Corporation for Digital Scholarship, Vienna, VA, USA). The lead author then independently screened the titles, abstracts, and keywords of all items retrieved through the database searches, applying the pre-specified eligibility criteria in a systematic manner. Exclusions were made when records (i) involved robotic systems; (ii) were conference abstracts, conference proceedings, book chapters, or other types of grey literature [16,34,35]; (iii) were identified as retracted publications; (iv) evaluated only the feasibility of a markerless system without validation against a reference method; or (v) described experiments for which the authors included neither evidence of ethical approval nor justification for its exemption in the article text [36,37].

2.4. Data Extraction and Evidence Synthesis

For each study, we systematically extracted data for the following items: (i) reference information (authors, year of publication, funding source); (ii) sample and setting (country/region, number of participants, gender, age categories (child ≤ 12 years, adolescent 13–17 years, adult 18–59 years, and older adult ≥ 60 years)); (iii) task protocol, task location, task instructions, object descriptions, and object interactions (e.g., lifting, carrying, reaching, transporting (i.e., locomotion with external load), sporting gestures); (iv) kinematic system and collection method (system/platform, number of cameras/sensors, type of cameras/sensors, and sampling frequency), reference systems used for validation, and any other instrumentation used in conjunction with the kinematic system (e.g., force plates, EMG, inertial measurement units (IMUs)); (v) kinematic outputs, other outcomes, and statistical comparisons (e.g., RMSE/RMSD, ICC, correlations, Bland–Altman, TOST, ANOVA) in relation to the main purpose of the present systematic review; and (vi) level/category of evidence assigned in the best-evidence synthesis (see below). When required information was incomplete or unclear, the corresponding author was contacted once.
The methodological quality of the included studies was assessed using the SURE critical appraisal checklist for cross-sectional studies (Specialist Unit for Review Evidence, 2018) [38,39,40]. The checklist consists of 12 items addressing aspects such as study design, the research question, setting and recruitment, participant sampling and characteristics, appropriateness of exposure and outcome measures, sample size rationale, statistical methods, eligibility reporting, description of results, declaration of conflicts of interest, and acknowledgment of limitations.
Each item was scored on a 3-point scale (2 = criterion met; 1 = unclear; 0 = not met), giving a total score out of 24, which was then converted to a percentage. The sum of item scores (Σ) was calculated for each article and transformed into a quality score percentage, QS = (Σ/24) × 100. According to the adopted threshold, studies with a score < 75% were classified as low quality, while those with a score ≥ 75% were considered high quality [41]. Two authors (NU and RC) independently completed the checklist for all included papers. When a consensus could not be reached, a third (senior) author (LV) was consulted until agreement was achieved. Best-evidence synthesis was conducted with regard to task type (e.g., lifting, loaded gait, reaching/squats, sport-specific gestures) [42], and the strength of evidence was reported based on predefined criteria: (i) the consistency of results across at least two independent studies examining a similar task (≥75% of studies reporting results in the same direction), and (ii) the methodological quality rating determined by the SURE checklist. Evidence was deemed strong when consistent findings came exclusively from high-quality studies, moderate when consistent findings included at least one high-quality study, conflicting when findings were inconsistent between studies, and limited when there was only one study. The threshold to consider included evidence as statistically significant for the current systematic review was p ≤ 0.05 unless otherwise noted.
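As an illustration, the scoring rule and the best-evidence classification described above can be sketched in a few lines of Python; the function names and the simplified decision rules are our own, not part of the SURE checklist or any published tool.

```python
# Hedged sketch of the quality scoring and evidence classification rules
# described in the text; names and structure are illustrative only.

def quality_score(item_scores):
    """Convert 12 SURE item scores (0 = not met, 1 = unclear, 2 = met)
    into a percentage: QS = (sum / 24) * 100."""
    assert len(item_scores) == 12 and all(s in (0, 1, 2) for s in item_scores)
    return sum(item_scores) / 24 * 100

def quality_label(qs):
    """Threshold adopted in this review: scores >= 75% are high quality."""
    return "high" if qs >= 75 else "low"

def evidence_level(study_qualities, consistent):
    """Simplified best-evidence rules: 'limited' with a single study,
    'conflicting' when findings disagree, 'strong' when consistent findings
    come exclusively from high-quality studies, and 'moderate' when at least
    one high-quality study contributed to consistent findings."""
    if len(study_qualities) < 2:
        return "limited"
    if not consistent:
        return "conflicting"
    if all(q == "high" for q in study_qualities):
        return "strong"
    if any(q == "high" for q in study_qualities):
        return "moderate"
    return "insufficient"  # consistent but low-quality only: not covered by the stated criteria
```

For example, an article meeting nine criteria fully and missing three scores 18/24 = 75% and is labeled high quality under the adopted threshold.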

3. Results

In total, 3713 studies were initially found in the systematic search across the five databases (Figure 1). After discarding duplicates, 1844 titles were left as unique records. At the initial screening, 339 articles were excluded for low relevance. The titles, abstracts, and keywords were screened for 1505 articles, of which 1443 were excluded for non-compliance with the inclusion criteria previously established in methods.
Then, 62 studies underwent full-text review for eligibility. Of these, 47 studies were excluded for the following reasons: robotic systems (n = 2), feasibility or protocol-only reports (n = 2), no human participants (n = 8), no object interaction (n = 8), no markerless system (n = 19), no reference system (n = 4), a language other than English (n = 1), not a peer-reviewed article (n = 1), or absence of reported kinematic measures (n = 2).
Finally, 15 studies [8,9,10,11,12,13,24,25,26,27,43,44,45,46,47] met all the required criteria and were included in the qualitative synthesis.

3.1. Characteristics of the Included Studies

The main methodological aspects of the included studies are summarized in Table 2. The total number of participants across all studies was 316 (median: 21 participants per study). Regarding sex, three studies included exclusively males (20%), while twelve studies (80%) included both men and women. As for age groups, most studies (12, equivalent to 80%) were conducted with adult samples. Only one study included children (≤12 years), one included adolescents, and one included older adults. In one study, information about the age group was not clearly reported.
The experimental protocols included (i) symmetrical or asymmetrical lifting of boxes (10 to 12.7 kg) (five studies; [12,24,25,26,46]); (ii) gait and locomotion tests under different conditions, including manipulation of external loads such as backpacks or military equipment (five studies; [8,9,11,43,45]); (iii) squat and knee flexion tests (two studies; [13,44]); (iv) reaching and manipulation tasks (two studies; [27,47]); and (v) technical gestures such as treadmill skating [10]. In most studies, the main task was clearly reported, although only 38% ([11,13,25,26,46]) specified the duration or exact number of repetitions of the tests. Figure 2 shows the distribution of included studies published over the years. The oldest study on the subject found in the current review was published in 2016 [45]. In 2016–2018 and 2021–2023, one study was published per year; increases were observed in 2019–2020 (two studies per year) and, more recently, in 2024 (three studies; [10,13,47]).
The metrics collected predominantly included three-dimensional joint angles of the upper and lower limbs (shoulder, elbow, wrist, hip, knee, ankle), present in at least nine studies [9,10,11,12,13,24,27,44,46]. Spatiotemporal gait parameters (speed, stride/step time and length) were also analyzed (three papers; [8,43,45]), as well as joint forces and moments (including L5/S1 forces) [11,12,46], muscle activation via EMG [11], and functional or ergonomic metrics such as REBA scores [12]. All of the included studies reported at least one kinematic metric, while 54% included additional kinetic or functional variables. Regarding markerless motion capture systems, the following solutions were used across the included studies: artificial intelligence and deep learning algorithms (PoseChecker, DeepLabCut, Theia3D, MediaPipe Hands); commercial systems such as the Microsoft Kinect V2 (used in at least four studies), Organic Motion, Wrnch AI, and the Ergo system; as well as custom platforms in some cases.
Deep learning-based platforms, such as the Ergo system [13], demonstrated excellent concurrent validity and high test–retest reliability in functional movements like the overhead squat, with RMSE values ranging from 2 to 6° and ICCs up to 1.0. The reference systems were predominantly optical marker-based (Vicon, Yarnton, UK; Qualisys, Gothenburg, Sweden; Motion Analysis Corp., Rohnert Park, CA, USA), employed in 100% of the studies for direct comparison. In more comprehensive protocols, these systems were often accompanied by IMUs, force plates, or EMG. The Kinect was the most frequently used markerless system, appearing in 3 of the 15 included studies [25,27,45]. These studies assessed gait, overhead lifting, and upper-limb reaching tasks. Although methodological differences exist, all three reported acceptable to high validity compared with reference systems, including RMSE values below 10° or 10 mm and ICCs above 0.85, supporting the Kinect’s suitability for basic clinical and ergonomic assessments.
The number and arrangement of sensors/cameras varied considerably: from simple setups with a single camera (MediaPipe Hands in [47]) to multi-camera systems of 7, 8, 10, 11, and up to 18 RGB or infrared cameras [10,11,13]. The anatomical placement of markers was described in detail in 92% of the studies, covering the pelvis, hip, thigh, leg, ankle, foot, trunk, shoulder, elbow, wrist, hand, and head, depending on the experimental objective.
As for acquisition frequencies, this information was explicitly reported in only some studies, with values ranging from 25 Hz [43], through 60, 100, and 120 Hz (Vicon in [11]), up to 200 Hz (Qualisys in [13]). In approximately 31% of the studies, the acquisition frequency was not specified for all systems used or was indicated only for the marker-based system.
The statistical strategies employed almost always included root mean square error (RMSE), Pearson or Spearman correlation coefficients, intraclass correlation coefficients (ICCs), Bland–Altman analysis, paired t-tests, and, in several cases, repeated-measures ANOVA and equivalence tests. All of the studies applied at least one quantitative comparison between systems, strengthening the validity of their results. Regarding the tracking method, three studies (20%; [25,26,47]) used 2D analysis, while twelve (80%; [8,9,10,11,12,13,24,27,43,44,45,46]) used 3D analysis.
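To make these agreement statistics concrete, a minimal pure-Python sketch of three of the most common ones (RMSE, Pearson’s r, and Bland–Altman bias with 95% limits of agreement) is given below; the function names and the example angle values are hypothetical, not drawn from any included study.

```python
import math

def rmse(markerless, reference):
    """Root mean square error between paired measurements (e.g., joint angles, degrees)."""
    return math.sqrt(sum((m - r) ** 2 for m, r in zip(markerless, reference)) / len(reference))

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def bland_altman(markerless, reference):
    """Bias (mean difference) and 95% limits of agreement (bias +/- 1.96 SD of differences)."""
    d = [m - r for m, r in zip(markerless, reference)]
    bias = sum(d) / len(d)
    sd = math.sqrt(sum((x - bias) ** 2 for x in d) / (len(d) - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical knee-flexion angles (degrees) from a markerless vs. a marker-based system
mmc = [10.5, 25.1, 40.2, 55.8, 70.3]
ref = [10.0, 24.0, 41.0, 55.0, 71.0]
error = rmse(mmc, ref)            # magnitude of disagreement
r = pearson_r(mmc, ref)           # waveform association
bias, lo, hi = bland_altman(mmc, ref)
```

Pearson’s r captures how well the two waveforms co-vary, while RMSE and the Bland–Altman limits quantify absolute disagreement; the two can diverge, which is exactly the pattern (high association, imperfect precision) reported by several included studies.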

3.2. Measurement Properties—Overview

The main results related to the measurement properties of markerless motion capture systems for human–object interaction tasks are summarized in Table 3. Overall, the authors claimed validity of markerless systems in 13 of the 15 included studies (87%). For example, high correlations (r = 0.81–0.95) and acceptable root mean square errors (RMSE = 5–23°) were observed for estimating joint angles and ergonomic loads when comparing PoseChecker with reference systems [12]. Similar results were reported with RGB camera-based models and machine learning [25,46], with RMSE between 5 and 12° and high correlations for lumbar spine loads. Other studies using commercial and artificial intelligence systems, such as KinaTrax, Theia3D, DeepLabCut, MediaPipe Hands, or the Ergo platform [13], also showed good agreement with traditional marker-based systems for gait analysis, reaching and tracing tasks, and ergonomic assessments [8,9,10,47]. In contrast, two studies did not support the validity of markerless systems; in these, gait tasks were studied either with limited validation [45] or with a focus on biometric classification rather than biomechanics [43].
In reference to the manipulated objects, most studies analyzed tasks involving only rigid tools/loads (11/15 studies, 73%; [8,9,10,12,13,24,25,26,27,46,47]), whereas only four studies (27%) included non-rigid or mixed rigid/non-rigid objects such as hurdles, backpacks, or tactical vests [11,43,44,45]. Among the studies involving only rigid tools/loads, validity was supported in 8/11 studies (73%) and not confirmed in 3/11 (27%). In contrast, for tasks involving non-rigid or mixed rigid/non-rigid objects (e.g., hurdles, backpacks, tactical vests), only 2/4 studies (50%) supported the validity of MMC, while the remaining 2/4 (50%) reported insufficient agreement.
Regarding reliability and accuracy, the results were also generally favorable, although fewer studies explicitly evaluated these properties. For example, ICCs > 0.85 were reported for dynamic tasks (box lifting, gait, squats), both in within-session protocols and under controlled conditions [10,25]. However, between-session reliability and long-term reproducibility have been poorly explored, and in some cases only within-session comparisons were documented.
In short, among the included studies using correlation analysis, six (86%) demonstrated high associations [10,11,12,13,25,45], while only one reported mixed results (i.e., weak-to-strong correlations; [44]). Reported ICCs were also generally high, ranging from 0.75 to 1 across all studies that calculated this metric [8,10,11,13,25,27,46]. On the other hand, RMSD/RMSE values varied notably across studies, from approximately 2° [13] to 13° [12] (see Table 3).
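The ICCs summarized here are most often the two-way, absolute-agreement, single-measures form, ICC(2,1); a self-contained sketch of that computation is shown below (our own implementation, assuming a complete subjects × systems matrix with no missing trials).

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    `scores` holds one row per subject, one column per measurement system."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)   # between-subjects MS
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)   # between-systems MS
    mse = sum((scores[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k)) / ((n - 1) * (k - 1))  # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because ICC(2,1) penalizes systematic offsets between systems, a markerless system with a constant angular bias relative to the reference will score lower here than under a consistency-only ICC, which is worth keeping in mind when comparing ICCs across the included studies.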

3.3. Methodological Quality and Risk of Bias in the Included Studies

The overall methodological quality outcomes for the included studies, as well as scores for each item of the 12-item scale, are summarized in Table 4. Sample size was the most frequently low-scoring aspect, with 100% of the studies scoring suboptimally on this criterion (Q3). Furthermore, other relevant methodological elements, such as the absence of intraclass reliability (ICC) analysis and the lack of correlation or area under the curve (AUC) analysis, were also identified as recurring missing items: 46% and 38% of the studies, respectively, lacked or were unclear in reporting these analyses. Overall, most studies achieved moderate to high methodological quality, with an average score of 77% (range: 50–96%). The highest-quality studies reached a 96% score [13,26], while the lowest score was 50% [45].

3.4. Evidence Synthesis

The best-evidence synthesis was adopted to depict the results according to the group of tasks analyzed in the included studies (see Table 5). After applying the criteria to classify the level of evidence, the following results were obtained: (a) lifting tasks had strong evidence for the use of markerless systems, as the results were consistent across five studies (four high quality and one low quality) [12,24,25,26,46]; these studies reported strong correlation coefficients and acceptable error margins (RMSE = 5–23°); (b) for gait and locomotor tasks, especially load manipulation tasks, we found conflicting evidence across five studies [8,9,11,43,45]; (c) squat and total knee flexion tests had moderate evidence, as the results were consistent in two studies of different quality [13,44]; and (d) reaching/manipulation tasks presented conflicting evidence across studies [10,27,47]. Overall, the most robust evidence concerned lifting tasks, with less consistent evidence for gait, reaching, and sport-specific complex movements.

4. Discussion

The main objective of the current review was to systematically evaluate the evidence on the measurement error of markerless motion capture systems, specifically for movement analysis involving human–object interaction. A total of 15 articles met all eligibility criteria and were therefore reviewed and their evidence collated. The key findings are as follows: (1) there is strong evidence supporting the validity of markerless systems during lifting tasks; (2) only moderate or conflicting evidence was found for gait, squat, reaching, and complex sports-related gestural movement tasks; (3) the most frequently used markerless system was the Kinect V2, used in four studies, all of which reported acceptable to high validity for basic movements despite methodological variability; and (4) overall, most studies achieved moderate to high methodological quality (average score 77%), though recurring weaknesses were identified, particularly in sample size justification, reporting of reliability analyses, and the lack of some statistical methods (e.g., between-session ICCs). The following paragraphs present the main findings of this review, contrast them with those of previous systematic and narrative reviews on markerless motion capture systems, and offer interpretative insights. Based on these comparisons, recommendations are proposed to guide future research and clinical application.
Our review found strong evidence supporting the validity of markerless motion capture systems for symmetrical/asymmetrical lifting tasks. In at least five studies, including four high-quality studies, markerless systems showed high correlation coefficients and acceptable errors in estimating joint angles and ergonomic loads during box lifting. This is consistent with the previous literature [48,49], which suggests that repetitive, controlled, and relatively simple tasks, such as lifting, are more suitable for markerless tracking, as they likely involve lower movement complexity and reduced occlusion. Most studies used AI-based systems or multi-camera configurations to improve measurement accuracy and robustness, for example, PoseChecker [12], Theia3D [9,11], DeepLabCut with 4 RGB cameras [10], and an 8-camera setup with triangulation [43]. These configurations may have helped improve joint reconstruction and measurement fidelity. However, because some types of multi-camera systems, even of the same model, may present phase differences between cameras [50], it cannot be ruled out that this affected the error levels reported for the video-based MMC tracking systems.
Overall, we did not observe that errors due to differences in joint center estimation or brief occlusion affected the construct validity results, except in two reports that presented methodological limitations [43,45]. Taken together, these findings suggest that markerless systems could be adopted as an alternative monitoring tool in ergonomic and occupational health assessments related to lifting objects, while encouraging future research into more complex or real-world lifting tasks. Regarding gait/locomotion tasks including the manipulation of external loads, the evidence supporting markerless tracking was more variable and, in some cases, even contradictory. Across all studies on gait and locomotion tasks, varying levels of agreement with marker-based systems were observed, especially with healthy adults performing in controlled or stable environments, such as treadmill or overground walking in laboratory settings. Some studies reported good agreement with marker-based systems [8,9,45], while others, particularly studies in children or during tasks designed to measure load adaptation, such as backpack or military load carriage, demonstrated limited or partial validity [11,43,45]. Furthermore, for squats, reaches, and treadmill skating, only a few studies provided moderate evidence for the use of markerless systems, some of which had limited methodological quality [10,44,47]. These trends support previous findings [30,48] suggesting that markerless systems still struggle to accurately quantify unrestricted, multi-joint, or rapid movements. These inconsistencies may arise not only from differences in execution or participant characteristics but also from intrinsic limitations in the way current markerless systems characterize complex movements. While previously published reviews (e.g., [4,30,48]) have shown strong validity, they focused on tasks associated with unconstrained, unloaded movement.
In contrast, we have focused on human–object interaction tasks and therefore consider behaviors such as load carriage or manipulation of external objects, which might introduce additional biomechanical complexity or occlusions that limit the algorithms’ ability to track joint trajectories. This may explain the lower or inconsistent validity outcomes we observed in more “real-world” tasks, for example, gait under load or military marching. Indeed, one can argue that various algorithms used for object recognition in markerless tracking systems were not originally designed for the specific purpose of analyzing human movement from a mechanical point of view at the segment level, which could contribute to the (still) lacking validity in some cases [1]. Therefore, it is recommended that markerless systems be used with caution for these tasks, and additional high-quality studies are needed to confirm their validity.
Furthermore, several studies examined specific or challenging conditions that may influence system performance. These included external loads such as school backpacks [45], military packs [11], lifting technique variations [25], and orthoses, deformities, or assistive devices in children [9]. Some studies additionally investigated intra- and inter-subject repeatability and gesture execution in varied contexts. While these contexts augment the ecological validity of markerless assessments, they also added variability. For example, the lower percentage of studies supporting the validity of MMC under non-rigid or mixed rigid/non-rigid object conditions suggests that MMC systems tend to perform more consistently in tasks with rigid objects, whereas non-rigid or deformable loads still pose important challenges for accurate kinematic estimation.
The Kinect V2 was the most frequently used commercial markerless system among the studies included in this review. However, its validity was not consistent across studies. Several studies assessing simple gait or upper-limb motion reported acceptable agreement with marker-based systems [45], whereas studies of more complicated motions, such as lifting, overhead contexts, or gait under load, found only partial or limited validity [25,43]. Trainers and companies using the Kinect V2 will be drawn to its lower cost, accessibility, and ease of set-up, but its validity does not hold when task demands are greater. While basic markerless systems have less value for biomechanical research, studies using enhanced markerless systems or multi-camera arrays, such as Theia3D, DeepLabCut, or the Ergo platform [13], report better validity in complex or dynamic tasks, highlighting that not only system selection but also experimental design affects validity.
For research and practice applications requiring high-quality measurement and/or complex movement, advanced markerless systems should be considered, and researchers and practitioners should be transparent about task constraints and system limitations. Consistent with the results described, most authors concluded that the markerless systems they evaluated were valid and accurate (see Table 3) and, in some cases, reliable for capturing kinematic and functional parameters. Authors particularly emphasized portability, reduced cost, and avoidance of physical markers as practical reasons for favoring markerless systems over marker-based alternatives [9,25,26,27,45]. Specifically, 87% (13 studies) of the studies we reviewed concluded that markerless motion capture systems were valid for at least one of the evaluated tasks. In contrast, 13% (2 studies) reported partial or insufficient validity, typically for complex, uncontrolled movements. These findings suggest an emerging consensus on the value of markerless systems, although results must still be interpreted in light of the task and methodology.
In general, the overall methodological quality of the included studies was moderate to high (mean quality score 77%, range: 50–96%), indicating progress towards rigorous research on the topic. However, many studies exhibited recurring limitations, particularly regarding sample size justification, reliability analysis (ICC, AUC), and standardization of measurement protocols. This is consistent with previous reviews describing significant and ongoing deficiencies in statistical and methodological power, mainly due to inadequate sample sizes and incomplete reporting of advanced statistical analyses [30,48,51]. Additionally, in approximately 31% of the studies, the acquisition frequency was not provided for one or more of the systems, or was provided for the marker-based system but not the markerless system. This is a limitation, as it prevents verification that comparisons were fair. Although the markerless system probably used the same acquisition frequency in most cases where the information was unclear or omitted, a complete description of that methodological feature is necessary. Future studies should document the acquisition frequencies of both systems and, ideally, match them to the same sampling frequency before data collection to allow direct comparisons of motion data. Developments addressing these issues are essential to strengthening comparability and confidence in future research.
The reviewed studies exhibit considerable methodological variance, although all validated markerless systems using functional, ergonomic, or sporting tasks. The existing markerless literature regarding human–object interactions has several significant limitations. The majority of the studies employed small or convenience samples, which restricts statistical power and the generalizability of findings. Substantial differences in protocols, tasks, and outcome measures make cross-study comparison difficult and preclude meta-analysis. Very few studies reported clinical populations or in situ contexts; most were confined to a laboratory, restricting the applicability of the results to real-world settings. Multiple studies also underreported reliability statistics, failed to assess between-session consistency, or omitted follow-up assessments, which limits how well performance can be evaluated. Overall, future studies need to enhance methodological transparency, apply adequate statistical methods, justify their sample sizes, and recruit more representative samples of participants. Future development regarding markerless tracking should consider providing user-friendly executable software applications for real-time data. Also, Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS) are techniques that may improve occlusion robustness and thus potentially help next-generation markerless systems [52]. Implementation of testing protocols in real-world settings is warranted as well.
Finally, an important finding of the present systematic review was that, across the included studies, correlations between MMC and reference systems were generally high, indicating good agreement regarding signal waveforms. However, joint-angle errors (RMSE) were more variable, often showing values that would not be considered negligible in clinical or ergonomic contexts, particularly near end-range of motion or when small angular changes affect risk scores. These findings suggest that current MMC systems can capture the overall waveform of joint kinematics, whereas their precision relative to gold-standard reference systems still requires improvement. It is important to note that some of these differences may be due to the generally distinct definitions of local frame orientations between MMC and marker-based systems [53].
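The distinction between waveform agreement and absolute precision can be made concrete with a toy example: a synthetic "markerless" signal that merely shifts the reference waveform by a constant 8° offset yields a perfect Pearson correlation yet a clearly non-negligible RMSE. The signals and the offset magnitude below are illustrative assumptions, not data from any included study.

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two 1D signals."""
    return float(np.corrcoef(a, b)[0, 1])

def rmse(a, b):
    """Root-mean-square error between two 1D signals."""
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

t = np.linspace(0.0, 1.0, 200)
reference = 30.0 * np.sin(2 * np.pi * t)  # synthetic reference joint angle (deg)
mmc = reference + 8.0                     # identical waveform, constant 8 deg offset

r = pearson_r(reference, mmc)   # waveform agreement is perfect
err = rmse(reference, mmc)      # yet the absolute error is 8 deg
```

This is why validation studies should report agreement statistics (RMSE, Bland–Altman limits) alongside correlations: a high correlation alone cannot rule out systematic offsets that matter for clinical or ergonomic thresholds.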
The current systematic review also has limitations, which we acknowledge. Firstly, although a comprehensive search strategy was applied, only 15 studies met our strict inclusion criteria, so the results may be limited to the specific studies summarized here. The relatively small number of included studies may reflect the fact that pose estimation has only been researched intensively since approximately the beginning of the 2020s (see, for example, [54]). Also, following some previous reports using PRISMA guidelines (e.g., [16,34,35]), grey literature, including studies published in conference proceedings, was not considered. On the other hand, significant MMC research and applications can be found in a range of conferences (e.g., [55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79]), which may have reduced the strength of the evidence synthesis described in the current review. Studies on robots were also not considered here (i.e., one of the inclusion criteria was that experiments include human participants). However, as there is increasing research involving robots attempting to simulate or recreate human movements (e.g., [80,81,82]), excluding such studies might entail a loss of potentially useful information.
In addition, the exclusion of studies not published in English, and of studies without direct comparisons to marker-based systems, may have resulted in relevant data being omitted from the overall synthesis. Moreover, this synthesis relied on the original reports being complete and valid, and any reporting deficiencies in primary studies may have affected the conclusions drawn here. Finally, our best-evidence synthesis per task category was constrained by the available literature; that is, the literature on some types of movements remains limited.

5. Conclusions

The main objective of this review was to evaluate the measurement error of markerless motion capture systems for movement analysis in tasks involving human–object interaction, using the best-evidence synthesis method to collate evidence. Strong evidence supported the validity of markerless tracking for lifting tasks, especially in controlled environments. There was moderate evidence for tasks such as squats and knee flexion, with some variation across systems and protocols. Conflicting or inconclusive evidence was apparent for reaching and manipulation tasks as well as more complex, sport-specific gestures, likely influenced by variation in protocol design, movement dynamics, and evaluation methods across studies. There was limited evidence for gait. Finally, it is important to note that MMC generally seems well suited to capturing the form of movement, but its precision still requires improvement for tasks involving human–object interactions.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/technologies14010028/s1, Table S1: PRISMA 2020 Checklist completed for the current systematic review [83]; Table S2: Summary of ethical approval and informed consent procedures reported in the included studies.

Author Contributions

N.U.: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Visualization, Writing—original draft, Writing—review and editing. R.F.C.: Formal analysis, Investigation, Writing—review and editing. L.H.P.V.: Conceptualization, Methodology, Project administration, Validation, Visualization, Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

Luiz H. Palucci Vieira: ongoing assignment received from PUCP—Tenure Track program.

Institutional Review Board Statement

All studies included in the present review reported ethical aspects in their full texts.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data that support the findings of this study are included within the article (and any Supplementary Files).

Acknowledgments

AI or AI-assisted tools were not used in drafting any aspect of the text of the current manuscript.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. In addition, the funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Wade, L.; Needham, L.; McGuigan, P.; Bilzon, J. Applications and limitations of current markerless motion capture methods for clinical gait biomechanics. PeerJ 2022, 10, e12995. [Google Scholar] [CrossRef]
  2. Jeyasingh-Jacob, J.; Crook-Rumsey, M.; Shah, H.; Joseph, T.; Abulikemu, S.; Daniels, S.; Sharp, D.J.; Haar, S. Markerless Motion Capture to Quantify Functional Performance in Neurodegeneration: Systematic Review. JMIR Aging 2024, 7, e52582. [Google Scholar] [CrossRef] [PubMed]
  3. Das, K.; de Paula Oliveira, T.; Newell, J. Comparison of markerless and marker-based motion capture systems using 95% functional limits of agreement in a linear mixed-effects modelling framework. Sci. Rep. 2023, 13, 22880. [Google Scholar] [CrossRef] [PubMed]
  4. Scataglini, S.; Abts, E.; Van Bocxlaer, C.; Van den Bussche, M.; Meletani, S.; Truijen, S. Accuracy, Validity, and Reliability of Markerless Camera-Based 3D Motion Capture Systems versus Marker-Based 3D Motion Capture Systems in Gait Analysis: A Systematic Review and Meta-Analysis. Sensors 2024, 24, 3686. [Google Scholar] [CrossRef] [PubMed]
  5. Needham, L.; Evans, M.; Cosker, D.P.; Wade, L.; McGuigan, P.M.; Bilzon, J.L.; Colyer, S.L. The accuracy of several pose estimation methods for 3D joint centre localisation. Sci. Rep. 2021, 11, 20673. [Google Scholar] [CrossRef]
  6. Lam, W.W.T.; Tang, Y.M.; Fong, K.N.K. A systematic review of the applications of markerless motion capture (MMC) technology for clinical measurement in rehabilitation. J. NeuroEng. Rehabil. 2023, 20, 57. [Google Scholar] [CrossRef]
  7. Franco, A.; Russo, M.; Amboni, M.; Ponsiglione, A.M.; Di Filippo, F.; Romano, M.; Amato, F.; Ricciardi, C. The Role of Deep Learning and Gait Analysis in Parkinson’s Disease: A Systematic Review. Sensors 2024, 24, 5957. [Google Scholar] [CrossRef]
  8. Ripic, Z.; Signorile, J.; Kuenze, C.; Eltoukhy, M. Concurrent validity of artificial intelligence-based markerless motion capture for over-ground gait analysis: A study of spatiotemporal parameters. J. Biomech. 2022, 143, 111278. [Google Scholar] [CrossRef]
  9. Wren, T.; Isakov, P.; Rethlefsen, S. Comparison of kinematics between Theia markerless and conventional marker-based gait analysis in clinical patients. Gait Posture 2023, 104, 9–14. [Google Scholar] [CrossRef]
  10. Torvinen, P.; Ruotsalainen, K.S.; Zhao, S.; Cronin, N.; Ohtonen, O.; Linnamo, V. Evaluation of 3D Markerless Motion Capture System Accuracy during Skate Skiing on a Treadmill. Bioengineering 2024, 11, 136. [Google Scholar] [CrossRef]
  11. Coll, I.; Mavor, M.P.; Karakolis, T.; Graham, R.B.; Clouthier, A.L. Validation of Markerless Motion Capture for Soldier Movement Patterns Assessment Under Varying Body-Borne Loads. Ann. Biomed. Eng. 2025, 53, 358–370. [Google Scholar] [CrossRef]
  12. Bonakdar, A.; Riahi, N.; Shakourisalim, M.; Miller, L.; Tavakoli, M.; Rouhani, H.; Golabchi, A. Validation of markerless vision-based motion capture for ergonomics risk assessment. Int. J. Ind. Ergon. 2025, 107, 103734. [Google Scholar] [CrossRef]
  13. Bae, K.; Lee, S.; Bak, S.Y.; Kim, H.S.; Ha, Y.; You, J.H. Concurrent validity and test reliability of the deep learning markerless motion capture system during the overhead squat. Sci. Rep. 2024, 14, 29462. [Google Scholar] [CrossRef] [PubMed]
  14. Lam, W.W.T.; Fong, K.N.K. The application of markerless motion capture (MMC) technology in rehabilitation programs: A systematic review and meta-analysis. Virtual Real. 2023, 27, 3363–3378. [Google Scholar] [CrossRef]
  15. Bhola, S.; Kim, H.B.; Kim, H.S.; Gu, B.; Yoo, J.I. Does advancement in marker-less pose-estimation mean more quality research? A systematic review. Front. Behav. Neurosci. 2025, 19, 1663089. [Google Scholar] [CrossRef]
  16. Scott, B.; Seyres, M.; Philp, F.; Chadwick, E.K.; Blana, D. Healthcare applications of single camera markerless motion capture: A scoping review. PeerJ 2022, 10, e13517. [Google Scholar] [CrossRef] [PubMed]
  17. Osness, E.; Isley, S.; Bertrand, J.; Dennett, L.; Bates, J.; Van Decker, N.; Stanhope, A.; Omkar, A.; Dolgoy, N.; Ezeugwu, V.E.; et al. Markerless Motion Capture Parameters Associated with Fall Risk or Frailty: A Scoping Review. Sensors 2025, 25, 5741. [Google Scholar] [CrossRef]
  18. Scataglini, S.; Fontinovo, E.; Khafaga, N.; Khan, M.U.; Faizan Khan, M.; Truijen, S. A Systematic Review of the Accuracy, Validity, and Reliability of Markerless Versus Marker Camera-Based 3D Motion Capture for Industrial Ergonomic Risk Analysis. Sensors 2025, 25, 5513. [Google Scholar] [CrossRef]
  19. Sethi, D.; Prakash, C.; Bharti, S. Latest Trends in Gait Analysis Using Deep Learning Techniques: A Systematic Review. In Proceedings of the International Conference on Artificial Intelligence and Speech Technology. AIST 2021; Communications in Computer and Information Science; Dev, A., Agrawal, S.S., Sharma, A., Eds.; Springer: Cham, Switzerland, 2022; Volume 1546, pp. 363–375. [Google Scholar] [CrossRef]
  20. Huang, Y.; Taheri, O.; Black, M.J.; Tzionas, D. InterCap: Joint Markerless 3D Tracking of Humans and Objects in Interaction from Multi-view RGB-D Images. Int. J. Comput. Vis. 2024, 132, 2551–2566. [Google Scholar] [CrossRef]
  21. García-Gil, G.; López-Armas, G.d.C.; Navarro, J.d.J., Jr. Human-Machine Interaction: A Vision-Based Approach for Controlling a Robotic Hand Through Human Hand Movements. Technologies 2025, 13, 169. [Google Scholar] [CrossRef]
  22. Kwak, W.; Yin, J.; Wang, S.; Chen, J. Advances in triboelectric nanogenerators for self-powered wearable respiratory monitoring. FlexMat 2024, 1, 5–22. [Google Scholar] [CrossRef]
  23. Lu, L.; Wu, J.; Zhang, Y.; Liu, C.; Hu, Y.; Chen, B.; Zhu, Y.; Mao, Y. Noncontact 3D gesture recognition enabled VR human–machine interface via electret-nanofiber-based triboelectric sensor. Nano Res. 2024, 18, 94907924. [Google Scholar] [CrossRef]
  24. Mehrizi, R.; Peng, X.; Xu, X.; Zhang, S.; Metaxas, D.; Li, K. A computer vision based method for 3D posture estimation of symmetrical lifting. J. Biomech. 2018, 69, 40–46. [Google Scholar] [CrossRef]
  25. Steinebach, T.; Grosse, E.; Glock, C.; Wakula, J.; Lunin, A. Accuracy evaluation of two markerless motion capture systems for measurement of upper extremities: Kinect V2 and Captiv. Hum. Factors Ergon. Manuf. Serv. Ind. 2020, 30, 291–302. [Google Scholar] [CrossRef]
  26. Remedios, S.; Fischer, S. Towards the Use of 2D Video-Based Markerless Motion Capture to Measure and Parameterize Movement During Functional Capacity Evaluation. J. Occup. Rehabil. 2021, 31, 754–767. [Google Scholar] [CrossRef]
  27. Scano, A.; Mira, R.; Cerveri, P.; Tosatti, L.; Sacco, M. Analysis of upper-limb and trunk kinematic variability: Accuracy and reliability of an RGB-D sensor. Multimodal Technol. Interact. 2020, 4, 14. [Google Scholar] [CrossRef]
  28. Unger, T.; Moslehian, A.S.; Peiffer, J.D.; Ullrich, J.; Gassert, R.; Lambercy, O.; Cotton, R.J.; Easthope, C.A. Differentiable Biomechanics for Markerless Motion Capture in Upper Limb Stroke Rehabilitation: A Comparison with Optical Motion Capture. arXiv 2024, arXiv:2411.14992. [Google Scholar] [CrossRef]
  29. Brunner, O.; Mertens, A.; Nitsch, V.; Brandl, C. Accuracy of a markerless motion capture system for postural ergonomic risk assessment in occupational practice. Int. J. Occup. Saf. Ergon. 2022, 28, 1865–1873. [Google Scholar] [CrossRef]
  30. Pfister, A.; West, A.; Bronner, S.; Noah, J. Comparative abilities of Microsoft Kinect and Vicon 3D motion capture for gait analysis. J. Med. Eng. Technol. 2014, 38, 274–280. [Google Scholar] [CrossRef]
  31. Page, M.; McKenzie, J.; Bossuyt, P.; Boutron, I.; Hoffmann, T.; Mulrow, C.; Shamseer, L.; Tetzlaff, J.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Syst. Rev. 2021, 10, 89. [Google Scholar] [CrossRef] [PubMed]
  32. Higgins, J.; Thomas, J.; Chandler, J.; Cumpston, M.; Li, T.; Page, M.J.; Welch, V.A. Cochrane Handbook for Systematic Reviews of Interventions, 2nd ed.; Wiley Cochrane, John Wiley & Sons, Incorporated: Hoboken, NJ, USA, 2019. [Google Scholar]
  33. ICMJE|Recommendations|Protection of Research Participants. Available online: https://www.icmje.org/recommendations/browse/roles-and-responsibilities/protection-of-research-participants.html (accessed on 11 July 2025).
  34. Palucci Vieira, L.H.; Clemente, F.M.; Silva, R.M.; Vargas-Villafuerte, K.R.; Carpes, F.P. Measurement Properties of Wearable Kinematic-Based Data Collection Systems to Evaluate Ball Kicking in Soccer: A Systematic Review with Evidence Gap Map. Sensors 2024, 24, 7912. [Google Scholar] [CrossRef] [PubMed]
  35. Camomilla, V.; Cereatti, A.; Cutti, A.G.; Fantozzi, S.; Stagni, R.; Vannozzi, G. Methodological factors affecting joint moments estimation in clinical gait analysis: A systematic review. Biomed. Eng. Online 2017, 16, 106. [Google Scholar] [CrossRef]
  36. Vergnes, J.N.; Marchal-Sixou, C.; Nabet, C.; Maret, D.; Hamel, O. Ethics in systematic reviews. J. Med. Ethics 2010, 36, 771–774. [Google Scholar] [CrossRef] [PubMed]
  37. Winter, E.; Maughan, R. Requirements for ethics approvals. J. Sports Sci. 2009, 27, 985. [Google Scholar] [CrossRef]
  38. Reponen, E.; Rundall, T.; Shortell, S.; Blodgett, J.; Juarez, A.; Jokela, R.; Mäkijärvi, M.; Torkki, P. Benchmarking outcomes on multiple contextual levels in lean healthcare: A systematic review, development of a conceptual framework, and a research agenda. BMC Health Serv. Res. 2021, 21, 161. [Google Scholar] [CrossRef]
  39. Jain, M.; Duvendack, M.; Shisler, S.; Parsekar, S.; Leon, M. Effective interventions for improving routine childhood immunisation in low and middle-income countries: A systematic review of systematic reviews. BMJ Open 2024, 14, e074370. [Google Scholar] [CrossRef]
  40. Specialist Unit for Review Evidence (SURE). Questions to Assist with the Critical Appraisal of Randomised Controlled Trials and Other Experimental Studies, 2018. Available online: https://www.cardiff.ac.uk/__data/assets/pdf_file/0005/1142969/SURE-CA-form-for-RCTs-and-other-experimental-studies_2018.pdf (accessed on 23 April 2025).
  41. Bishop, C.; Turner, A.; Read, P. Effects of inter-limb asymmetries on physical and sports performance: A systematic review. J. Sports Sci. 2018, 36, 1135–1144. [Google Scholar] [CrossRef]
  42. van Tulder, M.; Furlan, A.; Bombardier, C.; Bouter, L.; Editorial Board of the Cochrane Collaboration Back Review Group. Updated method guidelines for systematic reviews in the cochrane collaboration back review group. Spine 2003, 28, 1290–1299. [Google Scholar] [CrossRef]
  43. Kwolek, B.; Michalczuk, A.; Krzeszowski, T.; Switonski, A.; Josinski, H.; Wojciechowski, K. Calibrated and synchronized multi-view video and motion capture dataset for evaluation of gait recognition. Multimed. Tools Appl. 2019, 78, 32437–32465. [Google Scholar] [CrossRef]
  44. Perrott, M.; Pizzari, T.; Cook, J.; McClelland, J. Comparison of lower limb and trunk kinematics between markerless and marker-based motion capture systems. Gait Posture 2017, 52, 57–61. [Google Scholar] [CrossRef]
  45. Gupta, I.; Kalra, P.; Iqbal, R. Gait Parameters in School Going Children Using a Marker-Less Approach. Curr. Sci. 2016, 111, 1668. [Google Scholar] [CrossRef]
  46. Mehrizi, R.; Peng, X.; Metaxas, D.; Xu, X.; Zhang, S.; Li, K. Predicting 3-D lower back joint load in lifting: A deep pose estimation approach. IEEE Trans. Hum.-Mach. Syst. 2019, 49, 85–94. [Google Scholar] [CrossRef]
  47. Wagh, V.; Scott, M.; Kraeutner, S. Quantifying Similarities Between MediaPipe and a Known Standard to Address Issues in Tracking 2D Upper Limb Trajectories: Proof of Concept Study. JMIR Form. Res. 2024, 8, e56682. [Google Scholar] [CrossRef]
  48. Colyer, S.L.; Evans, M.; Cosker, D.P.; Salo, A.I.T. A Review of the Evolution of Vision-Based Motion Analysis and the Integration of Advanced Computer Vision Methods Towards Developing a Markerless System. Sports Med.-Open 2018, 4, 24. [Google Scholar] [CrossRef]
  49. Turner, J.; Chaaban, C.; Padua, D. Validation of OpenCap: A low-cost markerless motion capture system for lower-extremity kinematics during return-to-sport tasks. J. Biomech. 2024, 171, 112200. [Google Scholar] [CrossRef]
  50. Leite de Barros, R.M.; Guedes Russomanno, T.; Brenzikofer, R.; Jovino Figueroa, P. A method to synchronise video cameras using the audio band. J. Biomech. 2006, 39, 776–780. [Google Scholar] [CrossRef] [PubMed]
  51. Drazan, J.; Phillips, W.; Seethapathi, N.; Hullfish, T.; Baxter, J. Moving outside the lab: Markerless motion capture accurately quantifies sagittal plane kinematics during the vertical jump. J. Biomech. 2021, 125, 110547. [Google Scholar] [CrossRef]
  52. Tosi, F.; Zhang, Y.; Gong, Z.; Sandström, E.; Mattoccia, S.; Oswald, M.R.; Poggi, M. How NeRFs and 3D Gaussian Splatting are Reshaping SLAM: A Survey. arXiv 2025, arXiv:2402.13255. [Google Scholar] [CrossRef]
  53. Antognini, C.; Ortigas-Vásquez, A.; Knowlton, C.; Utz, M.; Sauer, A.; Wimmer, M.A. Comparison of markerless and marker-based motion analysis accounting for differences in local reference frame orientation. J. Biomech. 2025, 185, 112683. [Google Scholar] [CrossRef]
  54. Moura, F.A.; Caetano, F.G.; da Silva Torres, R. Tracking Methods in Sports: A Review of Advances, Quality, and Challenges in Performance Data. Int. J. Sports Med. 2025, 46, 621–660. [Google Scholar] [CrossRef]
  55. Wang, J.; Lyu, K.; Xue, J.; Gao, P.; Yan, Y. A Markerless Body Motion Capture System for Character Animation Based on Multi-view Cameras. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 8558–8562. [Google Scholar] [CrossRef]
  56. Pokhariya, C.; Shah, I.N.; Xing, A.; Li, Z.; Chen, K.; Sharma, A.; Sridhar, S. MANUS: Markerless Grasp Capture Using Articulated 3D Gaussians. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 2197–2208. [Google Scholar] [CrossRef]
  57. Rajakumar, V.; Rethinam, P.; Manoharan, S.; Kirupakaran, A.M.; Sadananda Hegde, R.; Srinivasan, B. 3D Markerless Velocity Based Weight Training System for Athletes: Detection, Estimation and Validation. In Proceedings of the 2024 IEEE International Workshop on Sport, Technology and Research (STAR), Lecco, Italy, 8–10 July 2024; pp. 228–233. [Google Scholar] [CrossRef]
  58. Fu, R.; Zhang, D.; Jiang, A.; Fu, W.; Funk, A.; Ritchie, D.; Sridhar, S. GigaHands: A Massive Annotated Dataset of Bimanual Hand Activities. In Proceedings of the 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 10–17 June 2025; pp. 17461–17474. [Google Scholar] [CrossRef]
  59. Taylor, C.; McNicholas, R.; Cosker, D. Towards An Egocentric Framework for Rigid and Articulated Object Tracking in Virtual Reality. In Proceedings of the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Atlanta, GA, USA, 22–26 March 2020; pp. 354–359. [Google Scholar] [CrossRef]
  60. Li, J.; Zhang, J.; Wang, Z.; Shen, S.; Wen, C.; Ma, Y.; Xu, L.; Yu, J.; Wang, C. LiDARCap: Long-range Markerless 3D Human Motion Capture with LiDAR Point Clouds. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 20470–20480. [Google Scholar] [CrossRef]
  61. Yunardi, R.T.; Sardjono, T.A.; Mardiyanto, R. Motion Capture System based on RGB Camera for Human Walking Recognition using Marker-based and Markerless for Kinematics of Gait. In Proceedings of the 2023 IEEE 13th Symposium on Computer Applications & Industrial Electronics (ISCAIE), Penang, Malaysia, 20–21 May 2023; pp. 262–267. [Google Scholar] [CrossRef]
  62. Duver, M.; Wiederhold, N.; Kyrarini, M.; Banerjee, S.; Banerjee, N.K. VR-Hand-in-Hand: Using Virtual Reality (VR) Hand Tracking For Hand-Object Data Annotation. In Proceedings of the 2024 IEEE International Conference on Artificial Intelligence and eXtended and Virtual Reality (AIxVR), Los Angeles, CA, USA, 17–19 January 2024; pp. 325–329. [Google Scholar] [CrossRef]
  63. Liang, Y.; Du, G.; Li, F.; Zhang, P. Markerless Human-Manipulator Interface with Vibration Feedback Using Multi-sensors. In Proceedings of the 2019 IEEE International Conference on Real-time Computing and Robotics (RCAR), Irkutsk, Russia, 4–9 August 2019; pp. 935–940. [Google Scholar] [CrossRef]
  64. Chen, Y.H.; Huang, J.H. Measurement and Tracking System for Movement of the Pilot’s Body During Flight Operations. In Proceedings of the 2022 IEEE International Conference on Consumer Electronics—Taiwan, Taipei, Taiwan, 6–8 July 2022; pp. 539–540. [Google Scholar] [CrossRef]
  65. Scargill, T.; Premsankar, G.; Chen, J.; Gorlatova, M. Here To Stay: A Quantitative Comparison of Virtual Object Stability in Markerless Mobile AR. In Proceedings of the 2022 2nd International Workshop on Cyber-Physical-Human System Design and Implementation (CPHS), Milan, Italy, 3–6 May 2022; pp. 24–29. [Google Scholar] [CrossRef]
  66. Wang, Y.; Wang, Y.; Guo, J. Rapid Analysis of Human Musculoskeletal Dynamics Combining Markerless Motion Capture, Extended Kalman Filtering and Semi-Recursive Method. In Proceedings of the 2025 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM 2025, Hangzhou, China, 14–18 July 2025. [Google Scholar] [CrossRef]
  67. Zhou, Z.; Shuai, Q.; Wang, Y.; Fang, Q.; Ji, X.; Li, F.; Bao, H.; Zhou, X. QuickPose: Real-time Multi-view Multi-person Pose Estimation in Crowded Scenes. In Proceedings of the ACM SIGGRAPH 2022 Conference Proceedings, SIGGRAPH’22, Vancouver, BC, Canada, 7–11 August 2022. [Google Scholar] [CrossRef]
  68. Winkler, A.; Won, J.; Ye, Y. QuestSim: Human Motion Tracking from Sparse Sensors with Simulated Avatars. In Proceedings of the SIGGRAPH Asia 2022 Conference Papers, SA’22, Vancouver, BC, Canada, 7–11 August 2022. [Google Scholar] [CrossRef]
  69. Han, S.; Wu, P.C.; Zhang, Y.; Liu, B.; Zhang, L.; Wang, Z.; Si, W.; Zhang, P.; Cai, Y.; Hodan, T.; et al. UmeTrack: Unified multi-view end-to-end hand tracking for VR. In Proceedings of the SIGGRAPH Asia 2022 Conference Papers, SA’22, Daegu, Republic of Korea, 6–9 December 2022. [Google Scholar] [CrossRef]
  70. Ranganathan, R.; Wang, R.; Gebara, R.; Biswas, S. Detecting Compensatory Trunk Movements in Stroke Survivors using a Wearable System. In Proceedings of the 2017 Workshop on Wearable Systems and Applications, WearSys’17, Niagara Falls, NY, USA, 19 June 2017; pp. 29–32. [Google Scholar] [CrossRef]
  71. Moro, M.; Casadio, M.; Mrotek, L.A.; Ranganathan, R.; Scheidt, R.; Odone, F. On the Precision of Markerless 3d Semantic Features: An Experimental Study on Violin Playing. In Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 19–22 September 2021; pp. 2733–2737. [Google Scholar] [CrossRef]
  72. Li, S.; Schieber, H.; Corell, N.; Egger, B.; Kreimeier, J.; Roth, D. GBOT: Graph-Based 3D Object Tracking for Augmented Reality-Assisted Assembly Guidance. In Proceedings of the 2024 IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2024, Orlando, FL, USA, 16–21 March 2024; pp. 513–523. [Google Scholar] [CrossRef]
  73. D’Antonio, E.; Taborri, J.; Palermo, E.; Rossi, S.; Patanè, F. A markerless system for gait analysis based on OpenPose library. In Proceedings of the 2020 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Dubrovnik, Croatia, 25–28 May 2020; pp. 1–6. [Google Scholar] [CrossRef]
  74. Li, S.; Li, B.; Zhang, S.; Fu, H.; Lo, W.L.; Yu, J.; Sit, C.H.P.; Li, R. A markerless visual-motor tracking system for behavior monitoring in DCD assessment. In Proceedings of the 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Kuala Lumpur, Malaysia, 12–15 December 2017; pp. 774–777. [Google Scholar] [CrossRef]
Figure 1. Flow chart of the literature search and study selection steps used to identify pertinent studies on the current subject.
Figure 2. Distribution of included studies according to the year of publication.
Table 1. PECO framework with extracted keywords related to markerless motion capture validation.
| Population | Exposure | Comparison | Outcome |
|---|---|---|---|
| human | markerless | gold standard | kinematics |
| able-bodied | marker-less | marker-based | valid * |
| patient | markerless motion capture | 3D marker-based motion analysis | concurrent validity |
| clinical | markerless motion capture technology | optoelectronic system | accuracy |
| load carriage | dynamic movement capture | Vicon | error |
| body-borne load | body scan | OptiTrack | correlation |
| backpack | pose estimation | Qualisys | concordance |
| walker | kinect | manual tracking | comparison |
| crutches | deeplabcut | | reference |
| cane | openpose | | |
| assisted-gait | alphapose | | |
| handling | Theia3D | | |
| | neural network | | |
Abbreviations: PECO = Population, Exposure, Comparison, and Outcome; * = wildcard term.
Table 2. Summary of included studies on markerless motion capture validity in human–object interaction tasks.
| Reference | Metrics Collected | Markerless System | Reference System | Experimental Protocol | Statistical Treatment |
|---|---|---|---|---|---|
| Bonakdar et al., 2025 [12] | Joint angles (back, knee, shoulder, elbow), JRF, REBA | PoseChecker (ML-OMC) | MB-OMC (Vicon), IMUs, force plates | Lift 12.7 kg box floor → pelvis; recordings with markers, IMUs, force plates, iPhone. | Correlations; RMSE, normalized RMSE; REBA vs. expert estimates. |
| Mehrizi et al., 2018 [24] | 3D joint angles (hip, knee, ankle, lumbar, shoulder, elbow), joint positions | Modified Twin Gaussian Process + 2 camcorders | Motion Analysis Corp. (marker-based, 45 markers) | Three symmetric lifts (floor–knuckle, knuckle–shoulder, floor–shoulder) with 10 kg box. | Euclidean distance; joint-angle diff (mean ± SD); paired t-tests. |
| Ripic et al., 2022 [8] | Spatiotemporal gait (speed, step/stride time, step length, etc.) | KinaTrax (AI-based) | SMART-DX (BTS) + force plates | 5 m walkway, self-selected speed, 3 trials; synchronous MB and ML; DLS emphasized. | ICC(2,k), Lin's CCC; Bland–Altman; paired tests; agreement levels (poor–excellent). |
| Perrott et al., 2017 [44] | 13 joint angles (trunk, pelvis, hip/knee/ankle flexion/rotation) | Organic Motion | Vicon (Helen Hayes model) | (1) Knee Flexion Test with stick; (2) Single-Leg Squat; non-simultaneous sessions. | Paired t/Wilcoxon; Pearson/Spearman; start → peak change and peak-angle comparisons. |
| Wren et al., 2023 [9] | Lower-limb kinematics; RMSD, RMSDoffset, mean diff over gait cycle | Theia3D | Vicon Nexus (Plug-in-Gait) | Pediatric gait; concurrent capture; subgroups: orthoses, deformities, devices. | RMSD, RMSDoffset, mean diff; subgroup analyses; visual waveforms. |
| Torvinen et al., 2024 [10] | Joint-center 3D distance, joint-vector angles (elbow, shoulder, hip, knee, ankle); RMSE, ICC, r | DeepLabCut (custom) | Vicon Nexus 2.8.1 (8 cams) | Experienced skiers; G1/G3 skating on treadmill with poles; ML model trained on separate cohort. | Bland–Altman; RMSE; Pearson r; ICC; mean ± SD; LoA; p < 0.05. |
| Wagh et al., 2024 [47] | 2D index-finger trajectories; RMSE vs. touchscreen; frame-rate effect | MediaPipe Hands | Touchscreen (Surface Pro 7) | Trace animated shapes on touchscreen; camera capture (GoPro); resampling, normalization, Procrustes. | RMSE; TOST for equivalence; t-tests across 30/60/120 FPS; α = 0.05. |
| Gupta et al., 2016 [45] | Stride length/width, height of earlobe; with/without backpack | Kinect V2 | Clinical manual method | 7 m walkway; pre/post ergonomic repacking (AOTA); with vs. without backpack. | Paired t-tests; Bland–Altman; Pearson r; R²; % error; linear/cubic regressions. |
| Steinebach et al., 2020 [25] | Shoulder (abduction, flexion) and elbow (flexion) angles; static and dynamic; with/without box | Kinect V2 × 2 + Captiv IMUs | Goniometer (static), angle scale (dynamic) | (1) Static postures; (2) movements without objects; (3) with box (occlusion). | MAE, RMSE; Pearson/Spearman; Wilcoxon; Bland–Altman; α = 0.05. |
| Scano et al., 2020 [27] | 3D joint angles (shoulder, elbow, wrist, trunk); 11 DoF; angular distance; test–retest | Kinect V2 | Vicon Vero (10 IR cams) | Seated right-arm pointing and workspace exploration across 3 sectors. | RMSE; angular distance; ICC (retest); two-way ANOVA (DoF × sector), Tukey HSD; Bland–Altman; α = 0.05. |
| Mehrizi et al., 2019 [46] | 3D body pose; L5/S1 force and moment (kinetics); peaks and time series | DNN + 2 RGB cams | Motion Analysis Corp. (marker-based) | Symmetric/asymmetric lifting of 10 kg crate across 9 conditions (3 heights × 3 end angles); synchronized capture. | RMSE; R coefficient; ICC (peaks); Bland–Altman; ICC > 0.99 for peak moments/forces. |
| Coll et al., 2025 [11] | Joint angles (hip, knee, ankle); joint reaction forces; EMG timing | Theia3D (7 RGB cams) | Vicon (11 IR cams, 120 Hz) + force plates + EMG | Walk/run under 4 military loads (5–41 kg) over 6 m; real rifle; OpenSim modeling. | RMSE; Pearson r; Bland–Altman LoA; RM-ANOVA (joint × load), Tukey; forces normalized to body weight (×BW). |
| Remedios & Fischer, 2021 [26] | Peak joint angles (knee, trunk, shoulder); shoulder abduction; FSL; COG-to-load; hip–knee MARP | Wrnch AI (2D, sagittal) | Vicon 3D (60 Hz) | Box lifts floor → waist under 3 loads; simultaneous 2D vs. 3D posture/balance/coordination. | Bland–Altman; LoA; Shapiro–Wilk; outlier checks; descriptives. |
| Kwolek et al., 2019 [43] | 3D joint trajectories; Euclidean joint-position error (ankle, knee, hip, etc.) | Not reported (8 RGB cams) | Vicon MX-T40 (10 cams) | 166 sequences; walking under 4 conditions (normal, clothes change, backpack, varied gait); RGB 25 Hz, Vicon 100 Hz. | Mean joint error (cm) per joint; no inferential stats (dataset benchmarking). |
| Bae et al., 2024 [13] | 3D joint angles (shoulder, hip, knee); ROM; time series and peaks; test–retest | Ergo system (18 RGB + DL) | Qualisys (8 cams); 36 markers (Helen Hayes) | 3 trials of overhead squat with 120 cm dowel; concurrent capture. | ICC(2,1), ICC(3,1); RMSE; Bland–Altman; CV; linear regression. |
Abbreviations: RMSE = root mean square error; RMSD = root mean square deviation; REBA = Rapid Entire Body Assessment; r = Pearson correlation coefficient; R2 = coefficient of determination; Bland–Altman = Bland–Altman plot/limits of agreement; TOST = two one-sided tests; ANOVA = analysis of variance; ICC = intraclass correlation coefficient.
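Several of the statistical treatments listed above reduce to the same core quantity: the root mean square error between a time-synchronized markerless and marker-based joint-angle series. As a minimal sketch of that computation (the sample values below are illustrative only, not data from any included study):

```python
import math

def rmse(markerless, reference):
    """Root mean square error between two equally sampled angle series (degrees)."""
    if len(markerless) != len(reference):
        raise ValueError("series must be time-synchronized and of equal length")
    return math.sqrt(
        sum((m - r) ** 2 for m, r in zip(markerless, reference)) / len(reference)
    )

# Hypothetical knee-flexion samples (deg): MMC system vs. marker-based reference
mmc = [10.2, 25.1, 48.9, 61.0, 52.3]
mb = [11.0, 24.0, 50.5, 59.8, 53.1]
error_deg = rmse(mmc, mb)  # overall deviation in degrees
```

Normalized RMSE variants (as in Bonakdar et al. [12]) typically divide this value by the range or mean of the reference series; the exact normalization differs between studies.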
Table 3. Summary of validity results for included markerless motion capture studies addressing human–object interaction tasks.
| Reference | Sample (N, Sex, Age) | Markerless System | Object | Validity Results | Summary |
|---|---|---|---|---|---|
| Bonakdar et al., 2025 [12] | N = 8; 4M/4F; 25 ± 3 y | PoseChecker (ML-OMC) | Box (11.3 kg); ergonomic load; repetitive lifts | RMSE = 6.5–12.9°; r = 0.81–0.95 | Authors report good validity; posture scoring aligns with experts. |
| Mehrizi et al., 2018 [24] | N = 12; all male; 47.5 ± 11.3 y | 2 RGB cams + ML model | Box (10 kg); lifts with postural constraints | RMSE = 5–12° | Valid for estimating spinal joint loads during lifting. |
| Ripic et al., 2022 [8] | N = 22; 10M/12F; 22.72 ± 3.49 y | KinaTrax (AI skeletal) | Force plate (5 m walkway); stepped during walking | ICC ≈ 0.87–0.93 | Valid for spatiotemporal gait analysis. |
| Perrott et al., 2017 [44] | N = 20; 10M/10F; median 28.1 (22–40) y | Organic Motion | PVC stick + squat box; used during squats | RMSD = 5–7°; very weak-to-strong correlations | Acceptable validity for squats and reach movements. |
| Wren et al., 2023 [9] | N = 36; 20M/16F; 2–25 y (mixed) | Theia3D | Pediatric orthoses, walkers; worn during gait | RMSD and offset ≤ 5° | Valid for pediatric gait with assistive devices. |
| Torvinen et al., 2024 [10] | N = 10; 5M/5F; F: 21.0 ± 3.1 y; M: 24.4 ± 10.7 y | DeepLabCut (4 RGB) | Roller skis + poles; treadmill skating | RMSE = 2.9–4.6°; ICC > 0.90; r = 0.82–0.97 | Provided valid kinematics in treadmill skiing. |
| Wagh et al., 2024 [47] | N = 10; 9F/1M; 19.5 ± 1.3 y | MediaPipe Hands | Touchscreen panel; fingertip tracing of shapes | RMSE = 4.1–6.2 mm | Valid for capturing 2D fingertip trajectories. |
| Gupta et al., 2016 [45] | N = 60; 60M; 11.77 ± 1.52 y | Kinect V2 | Backpack (∼10% BW); worn during 7 m walkway gait | Strong correlations (r₁ and r₂ > 0.90) | Limited but acceptable validity in children's gait. |
| Steinebach et al., 2020 [25] | N = 12; 7M/5F; 23.8 ± 2.6 y | Kinect V2 + Captiv IMUs | Box; overhead reach/lift task | RMSE = 6–8°; ICC > 0.85; r > 0.93 | Valid for upper-limb angles during lifting. |
| Scano et al., 2020 [27] | N = 15; 8F/7M; 24 ± 3 y | Kinect V2 | Tabletop targets; seated pointing | RMSE = 9.6°; ICC > 0.90 | Accurate and repeatable for seated pointing tasks. |
| Mehrizi et al., 2019 [46] | N = 12; 12M; 47.5 ± 11.3 y | 2 RGB cams + DNN | Box (10 kg); simulated lift postures | RMSE = 6.7–11.3°; ICC > 0.89; R > 0.80 | Valid for estimating lumbar loads during lifting. |
| Coll et al., 2025 [11] | N = 16; 8F/8M; 30 ± 12 y | Theia3D | Military gear (helmet, vest, rifle); worn during tasks | ICC > 0.85; r > 0.80; significant ANOVA effects | Robust validity in high-load military simulations. |
| Remedios & Fischer, 2021 [26] | N = 20; 11M/9F; M: 21.6 ± 3.1 y; F: 21 ± 1.2 y | Wrnch AI (2D) | Box; lifts to 3 levels for posture scoring | Mean differences of 1.9–22.1° between 2D MMC and 3D reference; Bland–Altman LoA ± 20° | Valid for ergonomic posture classification. |
| Kwolek et al., 2019 [43] | N = 32; 10F/22M; age NR | 8 RGB cams + ML triangulation | Backpack (7 trials); worn during walking | Rank-1 accuracy = 96% | High gait-recognition accuracy; not a biomechanical validity study. |
| Bae et al., 2024 [13] | N = 31; 22M/9F; 32.2 ± 5.9 y | Ergo system (DL, multicam) | Overhead squat; full-body kinematics | RMSE = 2.3–6.3°; ICC = 0.75–1.0; R² = 0.88–0.99 | Excellent concurrent validity and high test–retest reliability. |
Abbreviations: RMSE = root mean square error; RMSD = root mean square deviation; r = Pearson correlation coefficient; R2 = coefficient of determination; Bland–Altman = Bland–Altman plot/limits of agreement; ANOVA = analysis of variance; ICC = intraclass correlation coefficient. Object = Characteristic/Type of interaction.
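Many of the agreement statistics in Table 3 (for example, the ±20° limits reported by Remedios & Fischer [26]) follow the standard Bland–Altman construction: the bias is the mean of the paired differences, and the 95% limits of agreement are the bias ± 1.96 × SD of those differences. A minimal sketch with illustrative numbers (not data from any included study):

```python
import statistics

def bland_altman_limits(a, b):
    """Bias and 95% limits of agreement for paired measurements a and b."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the paired differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical peak trunk-flexion angles (deg): markerless vs. marker-based
mmc = [42.0, 55.5, 38.2, 61.0, 47.3]
ref = [40.5, 57.0, 37.0, 63.2, 46.0]
bias, lower, upper = bland_altman_limits(mmc, ref)
```

A system is then judged acceptable when the limits of agreement fall within a clinically or ergonomically tolerable error band for the task in question.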
Table 4. Quality assessment of included studies (Q1–Q12, sum score Σ , and quality score percentage QS).
| Reference | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Q9 | Q10 | Q11 | Q12 | Σ | QS (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Bae et al., 2024 [13] | 2 | 2 | 1 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 23 | 96 |
| Bonakdar et al., 2025 [12] | 2 | 2 | 1 | 1 | 2 | 2 | 0 | 2 | 2 | 2 | 2 | 2 | 20 | 83 |
| Coll et al., 2025 [11] | 2 | 2 | 1 | 1 | 2 | 2 | 0 | 2 | 2 | 2 | 2 | 2 | 20 | 83 |
| Gupta et al., 2016 [45] | 2 | 2 | 1 | 2 | 0 | 1 | 0 | 1 | 0 | 1 | 2 | 0 | 12 | 50 |
| Kwolek et al., 2019 [43] | 2 | 2 | 1 | 1 | 0 | 2 | 0 | 2 | 2 | 2 | 2 | 0 | 16 | 67 |
| Mehrizi et al., 2018 [24] | 2 | 0 | 1 | 1 | 2 | 2 | 0 | 2 | 0 | 2 | 2 | 2 | 16 | 67 |
| Mehrizi et al., 2019 [46] | 2 | 2 | 1 | 1 | 2 | 2 | 0 | 2 | 1 | 2 | 2 | 2 | 19 | 79 |
| Perrott et al., 2017 [44] | 2 | 2 | 1 | 1 | 0 | 1 | 0 | 2 | 0 | 2 | 2 | 2 | 15 | 63 |
| Remedios & Fischer, 2021 [26] | 2 | 2 | 1 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 23 | 96 |
| Ripic et al., 2022 [8] | 2 | 2 | 1 | 0 | 2 | 2 | 0 | 2 | 1 | 2 | 2 | 2 | 18 | 75 |
| Scano et al., 2020 [27] | 2 | 0 | 1 | 1 | 2 | 2 | 0 | 2 | 1 | 2 | 2 | 2 | 17 | 71 |
| Steinebach et al., 2020 [25] | 2 | 2 | 1 | 1 | 2 | 2 | 0 | 2 | 1 | 2 | 2 | 2 | 19 | 79 |
| Torvinen et al., 2024 [10] | 2 | 2 | 1 | 1 | 2 | 2 | 0 | 2 | 2 | 2 | 2 | 2 | 20 | 83 |
| Wagh et al., 2024 [47] | 2 | 2 | 1 | 1 | 0 | 2 | 0 | 2 | 2 | 2 | 2 | 2 | 18 | 75 |
| Wren et al., 2023 [9] | 2 | 2 | 1 | 2 | 2 | 2 | 0 | 2 | 2 | 2 | 2 | 2 | 21 | 88 |
| Mean | 2.00 | 1.73 | 1.00 | 1.20 | 1.47 | 1.87 | 0.27 | 1.93 | 1.33 | 1.93 | 2.00 | 1.73 | 18.47 | 76.94 |
| SD | 0.00 | 0.70 | 0.00 | 0.56 | 0.92 | 0.35 | 0.70 | 0.26 | 0.82 | 0.26 | 0.00 | 0.70 | 2.97 | 12.39 |
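The QS column in Table 4 is consistent with each SURE checklist item being scored 0–2, giving a 24-point maximum, so that QS = Σ/24 × 100 rounded to the nearest percent. This relation is inferred from the table itself (it reproduces every row, e.g., Bae et al.: 23/24 ≈ 96%), and a sketch of the computation is:

```python
def quality_score(item_scores):
    """Sum of 12 checklist items (each scored 0, 1, or 2) and the
    corresponding percentage of the 24-point maximum."""
    if len(item_scores) != 12 or not all(s in (0, 1, 2) for s in item_scores):
        raise ValueError("expected 12 items scored 0, 1, or 2")
    total = sum(item_scores)
    return total, round(100 * total / 24)

# Item scores for Bae et al., 2024 [13], taken from Table 4
bae = [2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2]
total, qs = quality_score(bae)
```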
Table 5. Evidence synthesis regarding the concurrent validity of markerless tracking to evaluate kinematics in human–object interaction tasks.
Table 5. Evidence synthesis regarding the concurrent validity of markerless tracking to evaluate kinematics in human–object interaction tasks.
Group of
Tasks
Methodological
Quality
Validity ConfirmedValidity not
Confirmed
Judgement
Evidence
LiftingHigh-qualityBonakdar et al., 2025 [12]; Mehrizi et al., 2019 [46]; Steinebach et al., 2020 [25]; Remedios &
Fischer, 2021 [26]
Strong evidence
Low-qualityMehrizi et al., 2018 [24]
LocomotionHigh-qualityColl et al., 2025 [11]Wren et al., 2023 [9]Conflicting evidence
Low-qualityGupta et al., 2016 [45]; Ripic et al., 2022 [8]; Kwolek et al., 2019 [43]
Squat/knee flexionHigh-qualityBae et al., 2024 [13]Moderate evidence
Low-qualityPerrott et al., 2017 [44]
Reaching/
manipulation
High-qualityTorvinen et al., 2024 [10]Conflicting evidence
Low-qualityWagh et al., 2024 [47]Scano et al., 2020 [27]

Share and Cite

MDPI and ACS Style

Unsihuay, N.; Clavo, R.F.; Palucci Vieira, L.H. Measurement Error of Markerless Motion Capture Systems Applied to Tracking Movements in Human–Object Interaction Tasks: A Systematic Review with Best Evidence Synthesis. Technologies 2026, 14, 28. https://doi.org/10.3390/technologies14010028
