Motion Capture Technology in Industrial Applications: A Systematic Review

Tyndall National Institute, University College Cork, T23 Cork, Ireland
Authors to whom correspondence should be addressed.
Sensors 2020, 20(19), 5687;
Received: 9 September 2020 / Revised: 24 September 2020 / Accepted: 2 October 2020 / Published: 5 October 2020
(This article belongs to the Special Issue Advanced Measurements for Industry 4.0)


The rapid technological advancements of Industry 4.0 have opened up new vectors for novel industrial processes that require advanced sensing solutions for their realization. Motion capture (MoCap) sensors, such as visual cameras and inertial measurement units (IMUs), are frequently adopted in industrial settings to support solutions in robotics, additive manufacturing, teleworking and human safety. This review synthesizes and evaluates studies investigating the use of MoCap technologies in industry-related research. A search was performed in Embase, Scopus, Web of Science and Google Scholar. Only studies in English, from 2015 onwards, on primary and secondary industrial applications were considered. The quality of the articles was appraised with the AXIS tool. Studies were categorized based on the type of sensors used, the beneficiary industry sector, and the type of application. Study characteristics, key methods and findings were also summarized. In total, 1682 records were identified, and 59 were included in this review. Twenty-one and 38 studies were assessed as being prone to medium and low risks of bias, respectively. Camera-based sensors and IMUs were used in 40% and 70% of the studies, respectively. Construction (30.5%), robotics (15.3%) and automotive (10.2%) were the most researched industry sectors, whilst health and safety (64.4%) and the improvement of industrial processes or products (17%) were the most targeted applications. Inertial sensors were the first choice for industrial MoCap applications. Camera-based MoCap systems performed better in robotic applications, but camera obstructions caused by workers and machinery were the most challenging issue. Advancements in machine learning algorithms have been shown to increase the capabilities of MoCap systems in applications such as activity and fatigue detection, as well as tool condition monitoring and object recognition.

1. Introduction

Motion capture (MoCap) is the process of digitally tracking and recording the movements of objects or living beings in space. Different technologies and techniques have been developed to capture motion. Camera-based systems with infrared (IR) cameras, for example, can be used to triangulate the location of retroreflective rigid bodies attached to the targeted subject. Depth-sensitive cameras, which project light towards an object, can estimate depth based on the time delay from light emission to backscattered light detection [1]. Systems based on inertial sensors [2], electromagnetic fields [3] and potentiometers that track the relative movements of articulated structures [4] also exist. Hybrid systems combine different MoCap technologies in order to improve precision and reduce camera occlusions [5]. Research has also focused on the handling and processing of high dimensional data sets with a wide range of analysis techniques, such as machine learning [6], Kalman filters [7], hierarchical clustering [8] and more.
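As an illustration, the time-of-flight principle behind such depth-sensitive cameras reduces to a one-line calculation: the estimated depth is half the round-trip distance travelled by the emitted light. A minimal sketch, using illustrative numbers only:

```python
# Time-of-flight depth estimation: a depth camera emits light and measures the
# round-trip delay of the backscattered signal; the depth is half the
# round-trip distance. Values below are purely illustrative.

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_depth(delay_s: float) -> float:
    """Estimated depth (m) from the emission-to-detection delay (s)."""
    return SPEED_OF_LIGHT * delay_s / 2.0

# A 10 ns round-trip delay corresponds to roughly 1.5 m of depth.
print(round(tof_depth(10e-9), 3))  # 1.499
```

This also makes clear why such cameras need sub-nanosecond timing resolution: one nanosecond of delay error corresponds to about 15 cm of depth error.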
Thanks to their versatility, MoCap technologies are employed in a wide range of applications. In healthcare and clinical settings, they aid in the diagnosis and treatment of physical ailments, for example, by reviewing the motor function of a patient or by comparing past recordings to see if a rehabilitation approach had the desired effect [9]. Sports applications also benefit from MoCap by breaking down the athletes’ motion to analyse the efficiency of the athletic posture and make performance-enhancing modifications [10]. In industrial settings, MoCap is predominantly used in the entertainment [11] and gaming industry [12], followed by relatively few industrial applications in the sectors of robotics [13], automotive [14] and construction [15]. However, the need for highly specialised equipment, regular calibration routines, limited capture volumes, inconvenient markers or specialised suits, as well as the significant installation and operation costs of MoCap systems, has greatly impeded the adoption of such technologies in other primary (i.e., extraction of raw materials and energy production) and secondary industrial applications (i.e., manufacturing and construction). Nevertheless, the fourth industrial revolution has brought new forms of industrial processes that require advanced and smart sensing solutions; as MoCap technology becomes more convenient and affordable [16], and applicable in challenging environments [17], its application becomes more attractive for a wider range of industrial scenarios.
Since industrial technologies are constantly changing and evolving in order to meet the demands of different sectors, it is important to track the technological progress and the new trends in hardware and software advancements. Previous reviews have focused on MoCap in robotics [13], clinical therapy and rehabilitation [18], computer animation [12], and sports [19]; however, the use of MoCap for industrial applications has not yet been recorded in a systematic way. The purpose of this work is to report on the development and application of different commercial and bespoke MoCap solutions in industrial settings, present the sectors that mainly benefit from them (e.g., robotics and construction), and identify the most targeted applications (e.g., infrastructure monitoring and workers’ health and safety). Along these lines, this review aims to provide insight on the capabilities (e.g., robust pose estimation) and limitations (e.g., noise and obstructions) of MoCap solutions in industry, along with the data analytics and machine learning solutions that are used in conjunction with MoCap technologies in order to improve the potency of the sensors, support the processing of large quantities of output data and aid in decision-making processes.

2. Materials and Methods

2.1. Search Strategy

This study was aligned with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [20]. A literature search was carried out on the Embase, Scopus and Web of Science databases from the 9th to the 16th of March 2020. Titles, abstracts and authors’ keywords were screened with a four-component search string using Boolean operators. The first three components of the string were linked with AND operators and were formed of keywords and their spelling variations that are associated with motion analysis (e.g., biomechanics, kinematics, position), the sensors used to capture motion (e.g., IMUs), and the industrial setting (e.g., industry, occupation, factory), respectively. A NOT operator preceded the fourth section of the string, which was a concatenation of terms detached from the aims of this review (e.g., surgery, therapy, sports, animals). Google Scholar was also employed to screen for keywords in abstracts that were published up to one year prior to the literature search. The full search strings used in Google Scholar and for each database search are included in Appendix A.
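The assembly of such a four-component string can be sketched as follows; the keyword lists below are abbreviated examples drawn from the text, not the full strings (those are given in Appendix A):

```python
# Sketch of the four-component Boolean search string: three OR-groups joined
# by AND (motion terms, sensor terms, industrial-setting terms), followed by
# a NOT-ed exclusion group. Keyword lists are abbreviated examples only.

motion_terms = ["biomechanics", "kinematics", "position"]
sensor_terms = ["IMU", "inertial measurement unit", "camera"]
setting_terms = ["industry", "occupation", "factory"]
excluded_terms = ["surgery", "therapy", "sports", "animals"]

def or_group(terms):
    """Render a list of keywords as a parenthesized OR-group."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = (
    " AND ".join(or_group(g) for g in (motion_terms, sensor_terms, setting_terms))
    + " NOT " + or_group(excluded_terms)
)
print(query)
```

Database-specific field tags (e.g., title/abstract restrictions) would wrap this core string differently for Embase, Scopus and Web of Science.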
Studies met the inclusion criteria if they were written in English and were published from January 2015 onwards. The search included both in press and issued articles that were published in scientific journals or conference proceedings alike. Review papers and conference abstracts were all excluded from this work since they do not typically report on all elements of the predefined tables that were used for data extraction. To ensure consistency of comparison, only studies that actively employed sensors designed to directly measure motion (i.e., the position, displacement, velocity or acceleration of an object) for either primary or secondary industrial applications were included; in this context, an industrial application was defined as any process related to the extraction of raw materials (e.g., metals or farming), or the manufacturing and assembly of goods (e.g., cars or buildings). Therefore, proof-of-concept papers that were not tested experimentally, simulations, and studies concerning white collar workers (e.g., office or other non-manual workers) were excluded; additionally, works employing sensors that can indirectly measure motion (e.g., electromyography (EMG) in conjunction with machine learning algorithms [21]) were also omitted. Articles were included only if the participants’ sample size (where applicable), and the type, number and placement of all used sensors were reported. Journal papers were prioritized in the event where their contents were also covered in earlier conference publications; in cases where this overlap was only partial, multiple publications were included.
All articles were imported into standard software for publishing and managing bibliographies, and duplicates were automatically removed. Two independent reviewers screened all titles and abstracts and labelled each article based on its conformity with the aims of the study. Articles that both reviewers deemed non-compliant with the predefined inclusion criteria were excluded from further review. The remaining articles were then fully screened, and each reviewer provided reasons for every exclusion. Conflicts between the reviewers were debated until both parties agreed on a conclusion. Finally, the reference lists of all approved articles were browsed to discover eligible articles that were not previously found; once more, both reviewers individually performed full-text screenings and evaluated all newly found publications.

2.2. Assessment of Risk of Bias

Two reviewers assessed the risk of bias and the quality of all considered studies using an adapted version of the AXIS appraisal tool for cross-sectional studies [22]. Questions 6, 7, 13, 14, 15 and 20 of the original AXIS appraisal tool were disregarded since they assess issues that are not often apparent in studies concerning industrial applications, such as taking samples from representative populations, non-responders, non-response bias and participants’ consent. The remaining elements of the AXIS list were adapted to form twelve questions that could be answered with a “yes” or a “no” and were used to appraise each study (Table 1) by summing all affirmative responses and providing a concluding score out of 12. Studies scoring below 6 were viewed as having a high risk of bias, while studies scoring over 7 and over 10 were considered to be at medium and low risk, respectively. The average study ratings of both reviewers were also computed to confirm inter-rater assessment consistency.
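The scoring procedure above can be sketched in a few lines; note that the mapping of boundary scores (e.g., a score of exactly 6) is an interpretation of the stated thresholds rather than a rule given in the text:

```python
# Sketch of the adapted 12-item AXIS appraisal: each study's score is the sum
# of its "yes" answers, then binned into a risk-of-bias level. The handling of
# boundary scores is an interpretation of the review's thresholds.

def axis_score(answers: list) -> int:
    """Sum affirmative responses across the twelve adapted checklist items."""
    assert len(answers) == 12, "the adapted checklist has twelve items"
    return sum(bool(a) for a in answers)

def risk_of_bias(score: int) -> str:
    if score < 6:
        return "high"
    if score >= 10:
        return "low"
    return "medium"

study = [True] * 10 + [False] * 2   # 10 affirmative answers out of 12
print(risk_of_bias(axis_score(study)))  # low
```

Averaging the two reviewers' scores per study, as done in the text, then simply extends this to a mean of two such sums.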

2.3. Data Extraction

Data from all considered articles were extracted by each reviewer independently using predefined tables. Cumulative logged data were systematically monitored by both parties to ensure coherent data collection. Authors’ names, year of publication, sample size, sensor placement (binned: machinery, upper, lower or full body), number and types of all used sensors (e.g., IMU or cameras), secondary validation systems (where applicable), and main findings were all recorded. In reference to their respective broad objectives, all considered articles were allocated into four groups based on whether they aimed to ensure workers’ health and safety (e.g., postural assessment, preventing musculoskeletal injuries, detecting trips and falls), to directly increase workers’ productivity (e.g., workers’ location and walk path analysis), to conduct machinery monitoring and quality control (e.g., cutting tool inspections), or to improve an industrial process (e.g., hybrid assembly systems) or the design of a product (e.g., car seats). If a work could fall into more than one category [23] (e.g., health and safety, and workers’ productivity), the paper was allocated to the most prominent category. Additionally, the direct beneficiary industry sector was recorded (e.g., construction, aerospace, automotive, or energy); in the instance of a widespread application, the corresponding article was labelled as “generic”. Studies that employed machine learning were additionally logged, along with the used algorithm, type of input data, training dataset, output and performance.

3. Results

3.1. Search Results

Database searching returned 1682 records (Figure 1). After removing duplicates (n = 185), the titles, keywords and abstracts of 1497 articles were screened and 1353 records were excluded as they did not meet the inclusion criteria. The remaining articles (n = 144) were assessed for eligibility, and 47 papers were retained in the final analysis. Twelve more records were added after screening the reference lists of the eligible papers, bringing the total number of included studies to 59. Four, 13 and 16 records were published in 2015, annually from 2016 to 2018, and in 2019, respectively, underlining the increasing interest of the research community in the topic.

3.2. Risk Assessment

Twenty-one and 38 studies were assessed as being prone to medium and low risks of bias, respectively (Table 2). None of the considered articles scored lower than six on the employed appraisal checklist. All reviewed articles presented reliable measurements (Q5) and conclusions that were justified by their results (Q10); yet, many authors inadequately reported or justified sample characteristics (Q3, 37%), study limitations (Q11, 53%) and funding or possible conflict sources (Q12, 51%). Statistics (Q6, 81%) and general methods (Q7, 88%) were typically described in depth. Generally, studies were favourably assessed against all the remaining items of the employed appraisal tool (Q1, 95%; Q2, 92%; Q4, 93%; Q8, 98%; Q9, 93%). The assessments of both reviewers were consistent and comparable, with average review scores of 9.9 ± 1.6 and 9.9 ± 0.9.

3.3. MoCap Technologies in Industry

In the reviewed studies, pose and position estimation was carried out with either inertial or camera-based sensors (i.e., RGB, infrared, depth or optical cameras), or with a combination of the two (Table 3). Inertial sensors have been widely employed across all industry sectors (49.2% of the reviewed works), whether the tracked object was an automated tool, the end effector of a robot [30,37,64], or the operator [27,36,39]. In 30.5% of the reviewed studies, camera-based off-the-shelf devices such as RGB, IR and depth cameras, mostly coming from the gaming industry (e.g., Microsoft Kinect and Xbox 360), were successfully employed for human activity tracking, and gesture or posture classification [25,77]. Inertial and camera-based sensors were used in synergy in 10.2% of the considered works, tracking the operator’s body during labour or the operator’s interaction with an automated system (e.g., a robotic arm). EMG, ultra-wide band (UWB) nets, resistive bending sensors or scanning sonars were used along with IMUs to improve pose and position estimation in five studies (8.5%). One study also coupled an IMU sensor with CCTV and radio measurements. Generally, IMU and camera-based sensors were used consistently in industry during the last five years (Figure 2).
Considering that the most frequently adopted sensors used in industry were IMUs (e.g., Xsens MVN) and marker-based or marker-less (e.g., Kinect) camera systems, their characteristics, advantages and disadvantages were also mapped (Table 4) in order to evaluate how appropriate each sensor type is for different applications. Naturally, the characteristics of each system vary greatly depending on the number, placement, settings and calibration requirements of the sensors; yet, general recommendations can be made for the adoption of a particular type of sensor for distinct tasks. Additionally, given the required level of accuracy, capture volume, budget and workplace limitations or other considerations, Table 4 shows the specifications and most favoured industrial applications for each type of sensor (e.g., activity recognition, or human–robot collaboration).

3.4. Types of Industry Sectors

Most frequently, MoCap technologies were adopted by the construction industry (Table 5, 30.5%), followed by applications for the improvement of industrial robots (22%), automotive and bicycle manufacturing (10.2%), and agriculture and timber (8.5%). On a few occasions, authors engaged in applications in the food (5.1%) and aerospace industries (3.4%), while the energy, petroleum and steel industries were each discussed in a single study (1.7%). All remaining applications were considered generic (22%), with typical examples being studies monitoring physical fatigue [48,71], posture [45] and neck-shoulder pain [74] in workers. Construction, generic and robotic applications were the only researched topics in 2015, while automotive, agriculture and food industrial applications were explored every year after 2016; MoCap technologies in the aerospace, energy, steel and petroleum industries were disseminated only recently (Figure 3, left).

3.5. MoCap Industrial Applications

MoCap techniques for industrial applications were primarily used for the assessment of health and safety risks in the working environment (Table 6, 64.4%), whilst fatigue and proper posture were the most targeted issues [48,49,72]. The research interest of the industry in health and safety MoCap applications increased steadily over the reviewed period (Figure 3, right). Productivity evaluation was the second most widespread application (20.3%), with studies typically aiming to identify inefficiencies or alternative approaches to improve industrial processes. Similarly, MoCap techniques were also employed to directly improve workers’ productivity (10.1%), whereas 8.5% of the studies focused on task monitoring [17] or on the quality control of industrial processes [30].

3.6. MoCap Data Processing

In the majority of the reviewed works, raw sensor recordings were subject to data synchronization, pre-processing, and classification. Data synchronization was occasionally reported as part of the pre-processing stage and included in the data fusion algorithm [24,34,36], but technical details were frequently omitted in the reviewed studies [27,28]; when the synchronization strategy was reported, a master control unit [36,50,51,54] or a common communication network [15,31,67] was used. Different sampling rates of data streams were addressed by linear interpolation and cross-correlation [73] techniques, or by introducing a known event that triggers all the sensors [29,47,49,55].
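The interpolation-based alignment mentioned above can be sketched by resampling a slower stream onto a faster stream's timebase; the sampling rates and test signal below are synthetic:

```python
# Sketch of aligning two sensor streams sampled at different rates: the slower
# stream is linearly interpolated onto the faster stream's timestamps. The
# rates (25 Hz and 100 Hz) and the linear test signal are synthetic examples.

def interpolate(times, values, query_t):
    """Linearly interpolate (times, values) at query_t; times must be ascending."""
    if query_t <= times[0]:
        return values[0]
    if query_t >= times[-1]:
        return values[-1]
    for i in range(1, len(times)):
        if query_t <= times[i]:
            t0, t1 = times[i - 1], times[i]
            v0, v1 = values[i - 1], values[i]
            return v0 + (v1 - v0) * (query_t - t0) / (t1 - t0)

# A 25 Hz stream resampled onto a 100 Hz timebase.
slow_t = [i / 25 for i in range(5)]            # 0.00, 0.04, 0.08, ...
slow_v = [2.0 * t for t in slow_t]             # linear signal, easy to verify
fast_t = [i / 100 for i in range(17)]
resampled = [interpolate(slow_t, slow_v, t) for t in fast_t]
print(round(resampled[3], 6))  # 0.06 (i.e., 2.0 * 0.03 for the linear signal)
```

Event-triggered synchronization, by contrast, would instead locate the known event in each stream and shift the timestamps so the events coincide before this resampling step.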
In the pre-processing stage, data were filtered to mitigate noise and address drift, outliers and missing points in data streams (to avoid de-synchronization, for instance), then were fused together and further processed to extract the numerical values of interest; in the studies considered by this review, this was mostly achieved via low-pass filters (e.g., nth order Butterworth, sliding window and median filters) [15,31,40,45,58,61,62,63,66,69,73,75,77], Kalman filters [17,28,29,40,41,51,54,57,60,71,73] and band-pass filters when EMG data were collected [29,49,50,51,54]. The drift of inertial data, a typical issue with inertial sensors, was sometimes addressed in the pre-processing stage by implementing filtering methods such as the zero-velocity update technique [44,59,60].
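As one concrete example of the outlier-suppression filters named above, a sliding-median filter can be written in a few lines; the window size and the signal are assumptions for illustration:

```python
# Minimal sliding-median filter of the kind cited above for suppressing
# spikes/outliers in a sensor stream. Window size (3) is an assumed example.

def median_filter(signal, window=3):
    """Replace each sample with the median of its local neighbourhood."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        neighbourhood = sorted(signal[lo:hi])
        out.append(neighbourhood[len(neighbourhood) // 2])
    return out

noisy = [1.0, 1.1, 9.0, 1.2, 1.0, 1.1]   # a single spurious spike at index 2
print(median_filter(noisy))              # the spike is suppressed
```

A Butterworth low-pass or a Kalman filter would instead smooth broadband noise and fuse model predictions with measurements, respectively; the median filter is the simplest of the family and is particularly effective against isolated spikes.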
Data classification was achieved by establishing thresholds or via machine learning classifiers. An example of a threshold was given in [39], where trunk flexion of over 90° was selected to identify a high ergonomic risk, and in [31], where the position of the operator’s centre of mass and increasing palm pressure identified a reach-and-pick task. Such thresholds were obtained based on observations or established models and standards (e.g., RULA: Rapid Upper Limb Assessment, and REBA: Rapid Entire Body Assessment scores). Machine learning techniques were employed in 18.6% of the reviewed works (Table 7), aiming to build an unsupervised or semi-supervised system able to improve its own robustness and accuracy while increasing the number of correctly predicted outcomes. The most used algorithms were Artificial Neural Networks (ANN), Support Vector Machines (SVM) and Random Forests (RF), with ANN and SVM mostly employed for binary or three-group classification, and RF for multiclass classification. The accuracy of the developed machine learning algorithms typically ranged from 93% to 99% (Table 7).
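The trunk-flexion threshold from [39] can be expressed as a minimal rule-based classifier; the label names and frame angles below are illustrative:

```python
# Sketch of threshold-based ergonomic classification following the
# trunk-flexion example from [39]: flexion beyond 90 degrees flags a high
# ergonomic risk. Label names and per-frame angles are illustrative.

TRUNK_FLEXION_LIMIT_DEG = 90.0

def ergonomic_risk(trunk_flexion_deg: float) -> str:
    """Classify one frame of trunk flexion against the 90-degree threshold."""
    if trunk_flexion_deg > TRUNK_FLEXION_LIMIT_DEG:
        return "high"
    return "acceptable"

frames = [12.0, 45.0, 95.0, 110.0, 60.0]       # flexion angle per frame (deg)
labels = [ergonomic_risk(a) for a in frames]
print(labels)  # two frames exceed the threshold
```

Machine learning classifiers replace this hand-picked cut-off with a decision boundary learned from labelled examples, which is what enables the multiclass activity and fatigue detection reported in Table 7.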

3.7. Study Designs and Accuracy Assessments

Overall, the reviewed studies dealt with small sample sizes of fewer than twenty participants, with the exception of Tao et al. [56], Muller et al. [38] and Hallman et al. [74], who recruited 29, 42 and 625 participants, respectively. Eighteen, 13 and 8 studies placed IMU sensors on the upper, full and lower body, respectively, while six authors attached IMUs to machinery (Table 8). Out of the 41 studies that employed inertial units (70% of all the works), the majority of the authors used fewer than three sensors (25 studies, Table 8), while seven groups used 17 sensors, as part of a pre-developed biomechanical model with systems such as the Xsens MVN, to capture full body movements. Sensor placement for all the studies that did not adopt pre-developed models is graphically depicted in Figure 4. Six studies accompanied motion tracking technologies with EMG sensors [29,49,50,51,54,57], two with force plates [73,75], two with pressure mats [61,62] and one with instrumented shoes [73]. Two works also used the Oculus Rift virtual reality headset to remotely assess industrial locations and control robotic elements [39,43]. The tracking accuracy of the developed systems was directly assessed against gold-standard MoCap systems (e.g., Vicon or Optotrak; Table 8, in bold) in six works [14,15,55,59,73,77], while the classification or identification accuracy of a process was frequently evaluated by visual inspection of video or phone camera recordings [15,29,36,44,60,63,69]. A thorough diagram showing the connections between the type of industry, application and MoCap system for each considered study is also presented in Figure 5.

4. Discussion

Industry 4.0 has introduced new processes that require advanced sensing solutions. The use of MoCap technologies in industry has been steadily increasing over the years, enabling the development of smart solutions that can provide advanced position estimation, aid in automated decision-making processes, improve infrastructure inspection, enable teleoperation, and increase the safety of human workers. The majority of the MoCap systems that were used in industry were IMU-based (in 70% of the studies, Table 3), whilst camera-based sensors were employed less frequently (40%), most likely due to their increased operational and processing cost, and other functional limitations, such as camera obstructions by workers and machinery, which were reported as the most challenging issues [25,45,55]. Findings suggest that the selection of the optimal MoCap system to adopt was primarily driven by the type of application (Figure 5); for instance, monitoring and quality control was mainly achieved via IMU sensors, while productivity improvement was achieved via camera-based (marker-less) systems. Type of industry was the second factor that had an impact on the choice of a MoCap system (Figure 5); for example, in a highly dynamic environment, such as construction sites, where the setup and configuration of the physical space change over time, wearable inertial sensors were the best option, since they could ensure a robust and continuous assessment of the operator’s health and safety. In industrial robot manufacturing instead, where the environmental constraints are known and constant, health and safety issues were primarily addressed by camera-based systems.
The increased use of IMUs promoted the development of advanced algorithmic methods (e.g., Kalman filters and machine learning, Table 7) for data processing and estimation of parameters that are not directly measured with inertial systems [17]. Optoelectronic technologies performed better and with higher tracking accuracy in human–robot collaboration tasks [33] and robot trajectory planning [32,46,52], due to the favourable conditions in such applications (e.g., the limited working volume and the known robot configurations), which allowed cameras to avoid obstructions. In general, hybrid systems that incorporate both vision and inertial sensors were found to have improved tracking performance in noisy and highly dynamic industrial environments, compensating for the drift issues of inertial sensors and the long-term occlusions which can affect camera-based systems [15,40]. For instance, in Papaioannou et al. [40], the trajectory tracking error caused by occlusion in a hybrid system was approximately half that of a camera-based tracking system.
Workers’ health and safety was found to be the most prolific research area. Even though wearable sensors are widely used in clinical settings for the remote monitoring of physiological parameters (e.g., heart rate, blood pressure, body temperature, VO2), only a single study [26] has employed multiple sensors for the measurement of such metrics in industrial scenarios. This can be attributed to the industries involved being interested in the prevention of work-related incidents that can lead to absence from work, rather than in the normative function of the workers’ body. As anticipated, health and safety research focused on the most common musculoskeletal conditions (e.g., back pain) and injuries (e.g., trips or injuries due to bad body posture), while the industries in which workers deal with heavy biomechanical loads or high risk of accidents (e.g., construction, Table 5) were the industries that drove the research. Fatigue and postural distress were also successfully detected by wearable inertial MoCap technologies [27,39,49,71,72]. When MoCap systems were combined with EMG sensors (Table 8), the musculoskeletal and postural evaluation of workers during generic physical activities (Table 5) was improved [29,48,49,50,51,54,57]. Inertial sensors also showed good results for the identification of hazardous events such as trips and falls in the construction industry [44,58,60,65,66,69,75], but the positions and numbers of the used IMUs were reported to impact intra-subject activity identification [26]. For example, fewer IMUs placed on specific anatomical sections (e.g., hip and neck) showed task classification performance similar to that of a greater number of IMUs distributed over the entire body [36]. In Kim et al. [36], a task classifier based on just two IMUs on the hip and head of the subject reached an accuracy of 0.7617, against 0.7983 for the classifier based on 17 IMUs placed on the entire body.
IMUs also performed well in activity recognition and, combined with activity duration measurements, made the evaluation of workers’ productivity on jobsites possible [24]. This topic was also the focus of interest for more than 10% of the studies in the past years (Table 6). However, when the assessment involved the identification or classification of tasks [26], secondary sensors (force cells, temperature sensors, etc.) were frequently needed in addition to the IMUs.
Advancements were also reported in the development of efficient data classification algorithms that require large data streams, such as machine learning-based classifiers (Table 7). The use of such algorithms was documented in 11 of the 59 works and was accompanied by very high levels of accuracy. The classification output of the reviewed algorithms differed greatly between works, covering applications from activity and fatigue detection to tool condition monitoring and object recognition (Table 7). However, the need for large training datasets, which usually require expert manual labelling, contradicted the very small sample sizes that were typically recruited (Table 8), potentially impeding the broader use of machine learning beyond proof-of-concept in applied industrial cases. The general lack of information regarding the real-time capability of the presented classification algorithms was also identified as a potential drawback for real-world applications, suggesting that more work is required to address this challenge. Nonetheless, the reviewed works generally outlined the capacity of MoCap sensors, in conjunction with machine learning, to provide solutions for activity recognition, tool inspection and ergonomic assessment in the workplace. These findings highlight how research activity on wearable systems for industrial applications is moving towards solutions that can be easily embedded in working clothes. Improving key factors such as wearability, washability, battery duration, data storage and edge computing will therefore be essential. Such improvements in hardware design will have a direct impact on the amount and quality of the collected data, which in turn will benefit software development, especially for machine learning applications, where huge quantities of data are required.
In this regard, attempts should be made to further develop and commercially distribute processing algorithms that would improve the ease of use of such systems and streamline their data processing.
Direct evaluation of the accuracy and tracking performance of a developed MoCap system [14,55] was generally achieved through comparisons with a high accuracy camera-based system. This is so far the most reliable process, as it guarantees an appropriate ground truth reference. However, the performance of algorithmic processes (e.g., evaluation of body postures or near-miss fall detections) was typically validated against visual observations of video recordings [69] or the ground truth provided by experts in the field [78], which potentially biases the accuracy of the respective method. As regards the use of commercially available MoCap solutions, a comparison was made of their limitations, advantages and applicability to industrial applications (Table 4), while the accuracy of off-the-shelf MoCap systems has also been extensively reviewed by van der Kruk and Reijne [82].
Even though all the reviewed works were individually assessed as being prone to medium and low risks of bias (Table 2), the main limitation at the study level was that more than half of the reviewed works (51%) did not properly report funding and conflict sources. This may be an indication of a critical source of bias, particularly in studies directly driven by the beneficiary industry, or in works that demonstrate MoCap systems that may be commercially available in the future. A limitation of this review stems from the potential publication bias and selective reporting across studies, which may affect the accumulation of thorough evidence in the field. Efforts from industry bodies to incorporate MoCap applications in their facilities that were either unsuccessful or were not disseminated in scientific journals were likely overlooked in this review. Finally, another limitation at the review level arises from the short review period that narrowed the reporting of findings to a period of five years; however, the selected review period returned an adequate number of records for the justification of conclusions and exposure of trends (e.g., Figure 3), while also facilitating the reporting of multiple aspects of the reviewed articles, such as the studies’ design and key findings (Table 8).

5. Conclusions

This systematic review has highlighted how the Industry 4.0 framework has led industrial environments to slowly incorporate MoCap solutions, mainly to improve workers’ health and safety, increase productivity and refine industrial processes. Research was predominantly driven by the construction, robot manufacturing and automotive sectors. IMUs are still seen as the first choice for such applications, as they are relatively simple to operate, cost-effective, and have minimal impact on the industrial workflow. Moreover, inertial sensors have acquired, over the years, the performance characteristics (e.g., low power consumption, modularity) and small form factor needed for body activity monitoring, mostly in the form of wearable off-the-shelf systems.
In the coming years, the sensors and systems used in advanced industrial applications will become smarter, with built-in functions and embedded algorithms such as machine learning models and Kalman filters incorporated in the processing of IMU data streams, increasing their functionality and offering a substitute for highly accurate (and expensive) camera-based MoCap systems. Systems are also expected to become smaller and more portable so as to interfere less with workers and the workplace, while real-time (bio)feedback should accompany health and safety applications to aid the adoption and acceptance of such technologies by industry workers. Marker-less MoCap systems, such as the Kinect, are low cost and offer adequate accuracy for certain classification and activity tracking tasks; however, further development and commercial distribution of processing algorithms are needed to improve their ease of use and their capability to carry out data processing tasks. Optoelectronic systems have been widely and consistently used in robotics over recent years, particularly in research on collaborative systems, and have been shown to increase the safety of human operators. In the future, falling prices of optoelectronic sensors and the release of more compact and easier-to-implement hybrid and data fusion solutions, as well as next-generation wearable lens-less cameras [83,84,85], will lead to fewer obstructions on jobsites and improve the practicality of camera-based approaches in other industry sectors.
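As an illustration of the kind of embedded processing envisaged here, fusing a gyroscope rate with a noisier accelerometer-derived tilt angle can be sketched as a one-dimensional Kalman filter. The function name, the noise variances `q` and `r`, and the data are hypothetical; this is a minimal sketch, not an implementation from any reviewed system:

```python
def kalman_tilt(gyro_rates, accel_angles, dt=0.01, q=0.01, r=0.1):
    """One-dimensional Kalman filter for a tilt angle: predict by
    integrating the gyroscope rate, then correct with the noisier
    accelerometer-derived angle. q and r are assumed process and
    measurement noise variances (hypothetical values)."""
    angle, p = accel_angles[0], 1.0   # initial state and variance
    estimates = []
    for rate, measured in zip(gyro_rates, accel_angles):
        # Predict step: integrate the angular rate, grow uncertainty.
        angle += rate * dt
        p += q
        # Update step: blend in the accelerometer measurement.
        k = p / (p + r)               # Kalman gain
        angle += k * (measured - angle)
        p *= 1.0 - k
        estimates.append(angle)
    return estimates
```

The gain `k` automatically balances the two sources: gyro integration alone drifts, the accelerometer alone is noisy, and the filtered estimate tracks the angle better than either.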

Author Contributions

Conceptualization, M.M., D.-S.K., S.T., B.O. and M.W.; methodology, M.M. and D.-S.K.; formal analysis, M.M. and D.-S.K.; investigation, M.M. and D.-S.K.; data curation, M.M. and D.-S.K.; writing—original draft preparation, M.M. and D.-S.K.; writing—review and editing, M.M., D.-S.K., S.T., B.O. and M.W.; visualization, M.M. and D.-S.K.; supervision, S.T., B.O. and M.W.; funding acquisition, S.T., B.O. and M.W. All authors have read and agreed to the published version of the manuscript.


Funding

This research was funded in part by Science Foundation Ireland under Grant number 16/RC/3918 (CONFIRM), which is co-funded under the European Regional Development Fund. Aspects of this publication have emanated from research conducted with the financial support of Science Foundation Ireland under Grant numbers 12/RC/2289-P2 (INSIGHT) and 13/RC/2077-CONNECT, which are co-funded under the European Regional Development Fund.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A

Table A1. Database search strings.
Database | Search String
Embase | (‘motion analys*’:ab,kw,ti OR ‘movement analys*’:ab,kw,ti OR ‘movement monitor*’:ab,kw,ti OR ‘biomech*’:ab,kw,ti OR ‘kinematic*’:ab,kw,ti OR ‘position*’:ab,kw,ti OR ‘location*’:ab,kw,ti) AND (‘motion capture’:ab,kw,ti OR ‘mocap’:ab,kw,ti OR ‘acceleromet*’:ab,kw,ti OR ‘motion tracking’:ab,kw,ti OR ‘wearable sensor*’:ab,kw,ti OR ‘inertial sensor*’:ab,kw,ti OR ‘inertial measur*’:ab,kw,ti OR ‘imu’:ab,kw,ti OR ‘magnetomet*’:ab,kw,ti OR ‘gyroscop*’:ab,kw,ti OR ‘mems’:ab,kw,ti) AND (‘industr*’:ab,kw,ti OR ‘manufactur*’:ab,kw,ti OR ‘occupation*’:ab,kw,ti OR ‘factory’:ab,kw,ti OR ‘assembly’:ab,kw,ti OR ‘safety’:ab,kw,ti) NOT (‘animal’:ti,kw OR ‘surg*’:ti,kw OR ‘rehabilitation’:ti,kw OR ‘disease*’:ti,kw OR ‘sport*’:kw,ti OR ‘therap*’:kw,ti OR ‘treatment’:kw,ti OR ‘patient’:kw,ti) AND ([article]/lim OR [article in press]/lim OR [conference paper]/lim OR [conference review]/lim OR [data papers]/lim OR [review]/lim) AND [english]/lim AND [abstracts]/lim AND ([embase]/lim OR [medline]/lim OR [pubmed-not-medline]/lim) AND [2015–2020]/py AND [medline]/lim
Scopus | TITLE-ABS-KEY (“motion analys*” OR “movement analys*” OR “movement monitor*” OR biomech* OR kinematic* OR position* OR location*) AND TITLE-ABS-KEY(“motion capture” OR mocap OR acceleromet* OR “motion tracking” OR wearable sensor* OR “inertial sensor*” OR “inertial measur*” OR imu OR magnetomet* OR gyroscop* OR mems) AND TITLE-ABS-KEY(industr* OR manufactur* OR occupation* OR factory OR assembly OR safety) AND PUBYEAR > 2014 AND (LIMIT-TO (PUBSTAGE, “final”)) AND (LIMIT-TO (DOCTYPE, “cp”) OR LIMIT-TO (DOCTYPE, “ar”) OR LIMIT-TO (DOCTYPE, “cr”) OR LIMIT-TO (DOCTYPE, “re”))
Web of Science | TS = (imu OR “wearable sensor*” OR wearable*) AND TS = (“motion analys*” OR “motion track*” OR “movement analys*” OR “motion analys*” OR biomech* OR kinematic*) AND TS = (industr* OR manufactur* OR occupation* OR factory OR assembly OR safety) NOT TS = (animal* OR patient*) NOT TS = (surg* OR rehabilitation OR disease* OR sport* OR therap* OR treatment* OR rehabilitation OR “energy harvest*”)
Google Scholar | (“motion|movement analysis|monitoring” OR biomechanics OR kinematic OR position OR Location) (“motion capture|tracking” OR mocap OR accelerometer OR “wearable|inertial sensor|measuring” OR mems) (industry OR manufacturing OR occupation OR factory OR safety)


  1. Zhang, Z. Microsoft Kinect Sensor and Its Effect. IEEE MultiMedia 2012, 19, 4–10. [Google Scholar] [CrossRef][Green Version]
  2. Roetenberg, D.; Luinge, H.; Slycke, P. Xsens MVN: Full 6DOF human motion tracking using miniature inertial sensors. Xsens Motion Technologies BV. Tech. Rep. 2009, 1. [Google Scholar]
  3. Bohannon, R.W.; Harrison, S.; Kinsella-Shaw, J. Reliability and validity of pendulum test measures of spasticity obtained with the Polhemus tracking system from patients with chronic stroke. J. Neuroeng. Rehabil. 2009, 6, 30. [Google Scholar] [CrossRef][Green Version]
  4. Park, Y.; Lee, J.; Bae, J. Development of a wearable sensing glove for measuring the motion of fingers using linear potentiometers and flexible wires. IEEE Trans. Ind. Inform. 2014, 11, 198–206. [Google Scholar] [CrossRef]
  5. Bentley, M. Wireless and Visual Hybrid Motion Capture System. U.S. Patent 9,320,957, 26 April 2016. [Google Scholar]
  6. Komaris, D.-S.; Perez-Valero, E.; Jordan, L.; Barton, J.; Hennessy, L.; O’Flynn, B.; Tedesco, S. Predicting three-dimensional ground reaction forces in running by using artificial neural networks and lower body kinematics. IEEE Access 2019, 7, 156779–156786. [Google Scholar] [CrossRef]
  7. Jin, M.; Zhao, J.; Jin, J.; Yu, G.; Li, W. The adaptive Kalman filter based on fuzzy logic for inertial motion capture system. Measurement 2014, 49, 196–204. [Google Scholar] [CrossRef]
  8. Komaris, D.-S.; Govind, C.; Clarke, J.; Ewen, A.; Jeldi, A.; Murphy, A.; Riches, P.L. Identifying car ingress movement strategies before and after total knee replacement. Int. Biomech. 2020, 7, 9–18. [Google Scholar] [CrossRef][Green Version]
  9. Aminian, K.; Najafi, B. Capturing human motion using body-fixed sensors: Outdoor measurement and clinical applications. Comput. Animat. Virtual Worlds 2004, 15, 79–94. [Google Scholar] [CrossRef]
  10. Tamir, M.; Oz, G. Real-Time Objects Tracking and Motion Capture in Sports Events. U.S. Patent Application No. 11/909,080, 14 August 2008. [Google Scholar]
  11. Bregler, C. Motion capture technology for entertainment [in the spotlight]. IEEE Signal. Process. Mag. 2007, 24. [Google Scholar] [CrossRef]
  12. Geng, W.; Yu, G. Reuse of motion capture data in animation: A Review. In Proceedings of the Lecture Notes in Computer Science; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2003; pp. 620–629. [Google Scholar]
  13. Field, M.; Stirling, D.; Naghdy, F.; Pan, Z. Motion capture in robotics review. In Proceedings of the 2009 IEEE International Conference on Control and Automation; Institute of Electrical and Electronics Engineers (IEEE), Christchurch, New Zealand, 9–11 December 2009; pp. 1697–1702. [Google Scholar]
  14. Plantard, P.; Shum, H.P.H.; Le Pierres, A.-S.; Multon, F. Validation of an ergonomic assessment method using Kinect data in real workplace conditions. Appl. Ergon. 2017, 65, 562–569. [Google Scholar] [CrossRef]
  15. Valero, E.; Sivanathan, A.; Bosché, F.; Abdel-Wahab, M. Analysis of construction trade worker body motions using a wearable and wireless motion sensor network. Autom. Constr. 2017, 83, 48–55. [Google Scholar] [CrossRef]
  16. Brigante, C.M.N.; Abbate, N.; Basile, A.; Faulisi, A.C.; Sessa, S. Towards miniaturization of a MEMS-based wearable motion capture system. IEEE Trans. Ind. Electron. 2011, 58, 3234–3241. [Google Scholar] [CrossRef]
  17. Dong, M.; Li, J.; Chou, W. A new positioning method for remotely operated vehicle of the nuclear power plant. Ind. Robot. Int. J. 2019, 47, 177–186. [Google Scholar] [CrossRef]
  18. Hondori, H.M.; Khademi, M. A review on technical and clinical impact of microsoft kinect on physical therapy and rehabilitation. J. Med. Eng. 2014, 2014, 1–16. [Google Scholar] [CrossRef] [PubMed][Green Version]
  19. Barris, S.; Button, C. A review of vision-based motion analysis in sport. Sports Med. 2008, 38, 1025–1043. [Google Scholar] [CrossRef] [PubMed]
  20. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Med. 2009, 6, e1000097. [Google Scholar] [CrossRef] [PubMed][Green Version]
  21. Loconsole, C.; Leonardis, D.; Barsotti, M.; Solazzi, M.; Frisoli, A.; Bergamasco, M.; Troncossi, M.; Foumashi, M.M.; Mazzotti, C.; Castelli, V.P. An emg-based robotic hand exoskeleton for bilateral training of grasp. In Proceedings of the 2013 World Haptics Conference (WHC); Institute of Electrical and Electronics Engineers (IEEE), Daejeon, Korea, 14–17 April 2013; pp. 537–542. [Google Scholar]
  22. Downes, M.J.; Brennan, M.L.; Williams, H.C.; Dean, R.S. Development of a critical appraisal tool to assess the quality of cross-sectional studies (AXIS). BMJ Open 2016, 6, e011458. [Google Scholar] [CrossRef][Green Version]
  23. Bortolini, M.; Faccio, M.; Gamberi, M.; Pilati, F. Motion Analysis System (MAS) for production and ergonomics assessment in the manufacturing processes. Comput. Ind. Eng. 2020, 139, 105485. [Google Scholar] [CrossRef]
  24. Akhavian, R.; Behzadan, A.H. Productivity analysis of construction worker activities using smartphone sensors. In Proceedings of the 16th International Conference on Computing in Civil and Building Engineering (ICCCBE2016), Osaka, Japan, 6–8 July 2016. [Google Scholar]
  25. Krüger, J.; Nguyen, T.D. Automated vision-based live ergonomics analysis in assembly operations. CIRP Ann. 2015, 64, 9–12. [Google Scholar] [CrossRef]
  26. Austad, H.; Wiggen, Ø.; Færevik, H.; Seeberg, T.M. Towards a wearable sensor system for continuous occupational cold stress assessment. Ind. Health 2018, 56, 228–240. [Google Scholar] [CrossRef][Green Version]
  27. Brents, C.; Hischke, M.; Reiser, R.; Rosecrance, J.C. Low Back Biomechanics of Keg Handling Using Inertial Measurement Units. In Software Engineering in Intelligent Systems; Springer Science and Business Media: Berlin/Heidelberg, Germany, 2018; Volume 825, pp. 71–81. [Google Scholar]
  28. Caputo, F.; Greco, A.; D’Amato, E.; Notaro, I.; Sardo, M.L.; Spada, S.; Ghibaudo, L. A human postures inertial tracking system for ergonomic assessments. In Proceedings of the 20th Congress of the International Ergonomics Association (IEA 2018), Florence, Italy, 26–30 August 2018; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2018; Volume 825, pp. 173–184. [Google Scholar]
  29. Greco, A.; Muoio, M.; Lamberti, M.; Gerbino, S.; Caputo, F.; Miraglia, N. Integrated wearable devices for evaluating the biomechanical overload in manufacturing. In Proceedings of the 2019 II Workshop on Metrology for Industry 4.0 and IoT (MetroInd4.0&IoT), Naples, Italy, 4–6 June 2019; pp. 93–97. [Google Scholar] [CrossRef]
  30. Lin, H.J.; Chang, H.S. In-process monitoring of micro series spot welding using dual accelerometer system. Weld. World 2019, 63, 1641–1654. [Google Scholar] [CrossRef]
  31. Malaisé, A.; Maurice, P.; Colas, F.; Charpillet, F.; Ivaldi, S. Activity Recognition with Multiple Wearable Sensors for Industrial Applications. In Proceedings of the ACHI 2018-Eleventh International Conference on Advances in Computer-Human Interactions, Rome, Italy, 25 March 2018. [Google Scholar]
  32. Zhang, Y.; Zhang, C.; Nestler, R.; Notni, G. Efficient 3D object tracking approach based on convolutional neural network and Monte Carlo algorithms used for a pick and place robot. Photonics Educ. Meas. Sci. 2019, 11144, 1114414. [Google Scholar] [CrossRef][Green Version]
  33. Tuli, T.B.; Manns, M. Real-time motion tracking for humans and robots in a collaborative assembly task. Proceedings 2020, 42, 48. [Google Scholar] [CrossRef][Green Version]
  34. Agethen, P.; Otto, M.; Mengel, S.; Rukzio, E. Using marker-less motion capture systems for walk path analysis in paced assembly flow lines. Procedia CIRP 2016, 54, 152–157. [Google Scholar] [CrossRef][Green Version]
  35. Fletcher, S.; Johnson, T.; Thrower, J. A study to trial the use of inertial non-optical motion capture for ergonomic analysis of manufacturing work. Proc. Inst. Mech. Eng. Part. B J. Eng. Manuf. 2016, 232, 90–98. [Google Scholar] [CrossRef][Green Version]
  36. Kim, K.; Chen, J.; Cho, Y.K. Evaluation of machine learning algorithms for worker’s motion recognition using motion sensors. Comput. Civ. Eng. 2019, 51–58. [Google Scholar]
  37. McGregor, A.; Dobie, G.; Pearson, N.; MacLeod, C.; Gachagan, A. Mobile robot positioning using accelerometers for pipe inspection. In Proceedings of the 14th International Conference on Concentrator Photovoltaic Systems, Puertollano, Spain, 16–18 April 2018; AIP Publishing: Melville, NY, USA, 2019; Volume 2102, p. 060004. [Google Scholar]
  38. Müller, B.C.; Nguyen, T.D.; Dang, Q.-V.; Duc, B.M.; Seliger, G.; Krüger, J.; Kohl, H. Motion tracking applied in assembly for worker training in different locations. Procedia CIRP 2016, 48, 460–465. [Google Scholar] [CrossRef][Green Version]
  39. Nath, N.D.; Akhavian, R.; Behzadan, A.H. Ergonomic analysis of construction worker’s body postures using wearable mobile sensors. Appl. Ergon. 2017, 62, 107–117. [Google Scholar] [CrossRef][Green Version]
  40. Papaioannou, S.; Markham, A.; Trigoni, N. Tracking people in highly dynamic industrial environments. IEEE Trans. Mob. Comput. 2016, 16, 2351–2365. [Google Scholar] [CrossRef]
  41. Ragaglia, M.; Zanchettin, A.M.; Rocco, P. Trajectory generation algorithm for safe human-robot collaboration based on multiple depth sensor measurements. Mechatronics 2018, 55, 267–281. [Google Scholar] [CrossRef]
  42. Scimmi, L.S.; Melchiorre, M.; Mauro, S.; Pastorelli, S.P. Implementing a Vision-Based Collision Avoidance Algorithm on a UR3 Robot. In Proceedings of the 2019 23rd International Conference on Mechatronics Technology (ICMT), Salerno, Italy, 23–26 October 2019; Institute of Electrical and Electronics Engineers (IEEE): New York City, NY, USA, 2019; pp. 1–6. [Google Scholar]
  43. Sestito, A.G.; Frasca, T.M.; O’Rourke, A.; Ma, L.; Dow, D.E. Control for camera of a telerobotic human computer interface. Educ. Glob. 2015, 5. [Google Scholar] [CrossRef]
  44. Yang, K.; Ahn, C.; Vuran, M.C.; Kim, H. Sensing Workers gait abnormality for safety hazard identification. In Proceedings of the 33rd International Symposium on Automation and Robotics in Construction (ISARC), Auburn, AL, USA, 18–21 July 2016; pp. 957–965. [Google Scholar]
  45. Tarabini, M.; Marinoni, M.; Mascetti, M.; Marzaroli, P.; Corti, F.; Giberti, H.; Villa, A.; Mascagni, P. Monitoring the human posture in industrial environment: A feasibility study. In Proceedings of the 2018 IEEE Sensors Applications Symposium (SAS), Seoul, Korea, 12–14 March 2018; pp. 1–6. [Google Scholar] [CrossRef]
  46. Jha, A.; Chiddarwar, S.S.; Bhute, R.Y.; Alakshendra, V.; Nikhade, G.; Khandekar, P.M. Imitation learning in industrial robots. In Proceedings of the Advances in Robotics on-AIR ’17, New Delhi, India, 28 June–2 July 2017; pp. 1–6. [Google Scholar]
  47. Lim, T.-K.; Park, S.-M.; Lee, H.-C.; Lee, D.-E. Artificial neural network-based slip-trip classifier using smart sensor for construction workplace. J. Constr. Eng. Manag. 2016, 142, 04015065. [Google Scholar] [CrossRef]
  48. Maman, Z.S.; Yazdi, M.A.A.; Cavuoto, L.A.; Megahed, F.M. A data-driven approach to modeling physical fatigue in the workplace using wearable sensors. Appl. Ergon. 2017, 65, 515–529. [Google Scholar] [CrossRef] [PubMed]
  49. Merino, G.S.A.D.; Da Silva, L.; Mattos, D.; Guimarães, B.; Merino, E.A.D. Ergonomic evaluation of the musculoskeletal risks in a banana harvesting activity through qualitative and quantitative measures, with emphasis on motion capture (Xsens) and EMG. Int. J. Ind. Ergon. 2019, 69, 80–89. [Google Scholar] [CrossRef]
  50. Monaco, M.G.L.; Fiori, L.; Marchesi, A.; Greco, A.; Ghibaudo, L.; Spada, S.; Caputo, F.; Miraglia, N.; Silvetti, A.; Draicchio, F. Biomechanical overload evaluation in manufacturing: A novel approach with sEMG and inertial motion capture integration. In Software Engineering in Intelligent Systems; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2018; Volume 818, pp. 719–726. [Google Scholar]
  51. Monaco, M.G.L.; Marchesi, A.; Greco, A.; Fiori, L.; Silvetti, A.; Caputo, F.; Miraglia, N.; Draicchio, F. Biomechanical load evaluation by means of wearable devices in industrial environments: An inertial motion capture system and sEMG based protocol. In Software Engineering in Intelligent Systems; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2018; Volume 795, pp. 233–242. [Google Scholar]
  52. Mueller, F.; Deuerlein, C.; Koch, M. Intuitive welding robot programming via motion capture and augmented reality. IFAC-PapersOnLine 2019, 52, 294–299. [Google Scholar] [CrossRef]
  53. Nahavandi, D.; Hossny, M. Skeleton-free RULA ergonomic assessment using Kinect sensors. Intell. Decis. Technol. 2017, 11, 275–284. [Google Scholar] [CrossRef]
  54. Peppoloni, L.; Filippeschi, A.; Ruffaldi, E.; Avizzano, C.A. A novel wearable system for the online assessment of risk for biomechanical load in repetitive efforts. Int. J. Ind. Ergon. 2016, 52, 1–11. [Google Scholar] [CrossRef]
  55. Seo, J.; Alwasel, A.A.; Lee, S.; Abdel-Rahman, E.M.; Haas, C. A comparative study of in-field motion capture approaches for body kinematics measurement in construction. Robotica 2017, 37, 928–946. [Google Scholar] [CrossRef]
  56. Tao, Q.; Kang, J.; Sun, W.; Li, Z.; Huo, X. Digital evaluation of sitting posture comfort in human-vehicle system under industry 4.0 framework. Chin. J. Mech. Eng. 2016, 29, 1096–1103. [Google Scholar] [CrossRef][Green Version]
  57. Wang, W.; Li, R.; Diekel, Z.M.; Chen, Y.; Zhang, Z.; Jia, Y. Controlling object hand-over in human-robot collaboration via natural wearable sensing. IEEE Trans. Hum. Mach. Syst. 2019, 49, 59–71. [Google Scholar] [CrossRef]
  58. Yang, K.; Jebelli, H.; Ahn, C.R.; Vuran, M.C. Threshold-Based Approach to Detect Near-Miss Falls of Iron Workers Using Inertial Measurement Units. Comput. Civ. Eng. 2015, 148–155. [Google Scholar] [CrossRef]
  59. Yang, K.; Ahn, C.; Kim, H. Validating ambulatory gait assessment technique for hazard sensing in construction environments. Autom. Constr. 2019, 98, 302–309. [Google Scholar] [CrossRef]
  60. Yang, K.; Ahn, C.; Vuran, M.C.; Kim, H. Collective sensing of workers’ gait patterns to identify fall hazards in construction. Autom. Constr. 2017, 82, 166–178. [Google Scholar] [CrossRef]
  61. Albert, D.L.; Beeman, S.M.; Kemper, A.R. Occupant kinematics of the Hybrid III, THOR-M, and postmortem human surrogates under various restraint conditions in full-scale frontal sled tests. Traffic Inj. Prev. 2018, 19, S50–S58. [Google Scholar] [CrossRef]
  62. Cardoso, M.; McKinnon, C.; Viggiani, D.; Johnson, M.J.; Callaghan, J.P.; Albert, W.J. Biomechanical investigation of prolonged driving in an ergonomically designed truck seat prototype. Ergonomics 2017, 61, 367–380. [Google Scholar] [CrossRef]
  63. Ham, Y.; Yoon, H. Motion and visual data-driven distant object localization for field reporting. J. Comput. Civ. Eng. 2018, 32, 04018020. [Google Scholar] [CrossRef]
  64. Herwan, J.; Kano, S.; Ryabov, O.; Sawada, H.; Kasashima, N.; Misaka, T. Retrofitting old CNC turning with an accelerometer at a remote location towards Industry 4.0. Manuf. Lett. 2019, 21, 56–59. [Google Scholar] [CrossRef]
  65. Jebelli, H.; Ahn, C.R.; Stentz, T.L. Comprehensive fall-risk assessment of construction workers using inertial measurement units: Validation of the gait-stability metric to assess the fall risk of iron workers. J. Comput. Civ. Eng. 2016, 30, 04015034. [Google Scholar] [CrossRef]
  66. Kim, H.; Ahn, C.; Yang, K. Identifying safety hazards using collective bodily responses of workers. J. Constr. Eng. Manag. 2017, 143, 04016090. [Google Scholar] [CrossRef]
  67. Oyekan, J.; Prabhu, V.; Tiwari, A.; Baskaran, V.; Burgess, M.; McNally, R. Remote real-time collaboration through synchronous exchange of digitised human–workpiece interactions. Futur. Gener. Comput. Syst. 2017, 67, 83–93. [Google Scholar] [CrossRef][Green Version]
  68. Prabhu, V.A.; Song, B.; Thrower, J.; Tiwari, A.; Webb, P. Digitisation of a moving assembly operation using multiple depth imaging sensors. Int. J. Adv. Manuf. Technol. 2015, 85, 163–184. [Google Scholar] [CrossRef][Green Version]
  69. Yang, K.; Ahn, C.; Vuran, M.C.; Aria, S.S. Semi-supervised near-miss fall detection for ironworkers with a wearable inertial measurement unit. Autom. Constr. 2016, 68, 194–202. [Google Scholar] [CrossRef][Green Version]
  70. Zhong, H.; Kanhere, S.S.; Chou, C.T. WashInDepth: Lightweight hand wash monitor using depth sensor. In Proceedings of the 13th Annual International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, Hiroshima, Japan, 28 November–1 December 2016; pp. 28–37. [Google Scholar]
  71. Baghdadi, A.; Megahed, F.M.; Esfahani, E.T.; Cavuoto, L.A. A machine learning approach to detect changes in gait parameters following a fatiguing occupational task. Ergonomics 2018, 61, 1116–1129. [Google Scholar] [CrossRef]
  72. Balaguier, R.; Madeleine, P.; Rose-Dulcina, K.; Vuillerme, N. Trunk kinematics and low back pain during pruning among vineyard workers-A field study at the Chateau Larose-Trintaudon. PLoS ONE 2017, 12, e0175126. [Google Scholar] [CrossRef] [PubMed][Green Version]
  73. Faber, G.S.; Koopman, A.S.; Kingma, I.; Chang, C.; Dennerlein, J.T.; Van Dieën, J.H. Continuous ambulatory hand force monitoring during manual materials handling using instrumented force shoes and an inertial motion capture suit. J. Biomech. 2018, 70, 235–241. [Google Scholar] [CrossRef][Green Version]
  74. Hallman, D.M.; Jørgensen, M.B.; Holtermann, A. Objectively measured physical activity and 12-month trajectories of neck–shoulder pain in workers: A prospective study in DPHACTO. Scand. J. Public Health 2017, 45, 288–298. [Google Scholar] [CrossRef]
  75. Jebelli, H.; Ahn, C.; Stentz, T.L. Fall risk analysis of construction workers using inertial measurement units: Validating the usefulness of the postural stability metrics in construction. Saf. Sci. 2016, 84, 161–170. [Google Scholar] [CrossRef]
  76. Kim, H.; Ahn, C.; Stentz, T.L.; Jebelli, H. Assessing the effects of slippery steel beam coatings to ironworkers’ gait stability. Appl. Ergon. 2018, 68, 72–79. [Google Scholar] [CrossRef]
  77. Mehrizi, R.; Peng, X.; Xu, X.; Zhang, S.; Metaxas, D.; Li, K. A computer vision based method for 3D posture estimation of symmetrical lifting. J. Biomech. 2018, 69, 40–46. [Google Scholar] [CrossRef]
  78. Chen, H.; Luo, X.; Zheng, Z.; Ke, J. A proactive workers’ safety risk evaluation framework based on position and posture data fusion. Autom. Constr. 2019, 98, 275–288. [Google Scholar] [CrossRef]
  79. Dutta, T. Evaluation of the Kinect™ sensor for 3-D kinematic measurement in the workplace. Appl. Ergon. 2012, 43, 645–649. [Google Scholar] [CrossRef] [PubMed]
  80. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  81. Ferrari, E.; Gamberi, M.; Pilati, F.; Regattieri, A. Motion Analysis System for the digitalization and assessment of manual manufacturing and assembly processes. IFAC-PapersOnLine 2018, 51, 411–416. [Google Scholar] [CrossRef]
  82. Van Der Kruk, E.; Reijne, M.M. Accuracy of human motion capture systems for sport applications; state-of-the-art review. Eur. J. Sport Sci. 2018, 18, 806–819. [Google Scholar] [CrossRef] [PubMed]
  83. Kim, G.; Menon, R. Computational imaging enables a “see-through” lens-less camera. Opt. Express 2018, 26, 22826–22836. [Google Scholar] [CrossRef] [PubMed]
  84. Abraham, L.; Urru, A.; Wilk, M.P.; Tedesco, S.; O’Flynn, B. 3D ranging and tracking using lensless smart sensors. In Proceedings of the 11th Smart Systems Integration, SSI 2017: International Conference and Exhibition on Integration Issues of Miniaturized Systems, Cork, Ireland, 8–9 March 2017; pp. 1–8. [Google Scholar]
  85. Normani, N.; Urru, A.; Abraham, A.; Walsh, M.; Tedesco, S.; Cenedese, A.; Susto, G.A.; O’Flynn, B. A machine learning approach for gesture recognition with a lensless smart sensor system. In Proceedings of the 2018 IEEE 15th International Conference on Wearable and Implantable Body Sensor Networks (BSN), Las Vegas, NV, USA, 4–7 March 2018; pp. 136–139. [Google Scholar] [CrossRef]
Figure 1. Search strategy Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow chart.
Figure 2. Number of publications per year divided by type of MoCap technology adopted.
Figure 3. Number of publications per year and type of industry sector (left) and application (right).
Figure 4. IMU placement in the reviewed studies. Pre-developed models are excluded.
Figure 5. The relations between type of industry, application and MoCap system for each considered study. Indexes at the top of each branch specify the number of studies associated with each block.
Table 1. Risk and quality assessment questions of the modified AXIS tool.
Question Number | AXIS Question Code | Question
Q1 | 1 | Were the aims/objectives of the study clear?
Q2 | 2 | Was the study design appropriate for the stated aim(s)?
Q3 | 3, 4 and 5 | Was the sample size justified, clearly defined, and taken from an appropriate population?
Q4 | 8 | Were the outcome variables measured appropriate to the aims of the study?
Q5 | 9 | Were the outcome variables measured correctly using instruments/measurements that had been trialled, piloted or published previously?
Q6 | 10 | Is it clear what was used to determine statistical significance and/or precision estimates? (e.g., p-values, confidence intervals)
Q7 | 11 | Were the methods sufficiently described to enable them to be repeated?
Q8 | 12 | Were the basic data adequately described?
Q9 | 16 | Were the results presented for all the analyses described in the methods?
Q10 | 17 | Were the authors’ discussions and conclusions justified by the results?
Q11 | 18 | Were the limitations of the study discussed?
Q12 | 19 | Were there any funding sources or conflicts of interest that may affect the authors’ interpretation of the results?
Table 2. Risk of bias assessment.
Risk of Bias | Score | Study | Number of Studies
Table 3. Sensors used in the reported studies.
Sensors | Study | Number of Studies | Percentage of Studies
Camera-based Sensors * | [14,23,25,32,33,34,38,41,42,43,46,52,53,55,67,68,70,77] | 18 | 30.5%
IMUs + Camera-based Sensors | [15,45,56,61,62,63] | 6 | 10.2%
IMUs + Other Technologies | [17,31,54,57,78] | 5 | 8.5%
IMUs + Camera-based + Other Technologies | [40] | 1 | 1.7%
* Camera-based sensors include RGB, infrared, depth or optical cameras.
Table 4. Characteristics of the most used MoCap systems.
Characteristic | IMU-based systems | Marker-based camera systems | Marker-less camera systems
Accuracy | High (0.75° to 1.5°) 3 | Very high (0.1 mm and 0.5°) 1; subject to number/location of cameras | Low (static, 0.0348 m [79]); subject to distance from camera
Set up | Straightforward; subject to number of IMUs | Requires time-consuming and frequent calibrations | Usually requires checkerboard calibrations
Capture volumes | Only subject to distance from station (if required) | Varies; up to 15 × 15 × 6 m 1 | Field of view: 70.6° × 60°; 8 m depth range 5
Cost of installation | From USD 50 per unit to over USD 12,000 for a full-body suit 4 | Varies; from USD 5000 2 to USD 150,000 1 | USD 200 5 per unit
Ease of use and data processing | Usually raw sensor data to ASCII files | Usually highly automated; outputs full 3D kinematics | Requires custom-made processing algorithms
Invasiveness (individual) | Minimal | High (markers’ attachment) | Minimal
Invasiveness (workplace) | Minimal | High (typically, 6 to 12 camera systems) | Medium (typically, 1 to 4 camera systems)
Line-of-sight necessity | No | Yes | Yes
Range | Usually up to 20 m from station 3 (if wireless) | Up to 30 m camera-to-marker 1 | Low: skeleton tracking range of 0.5 m to 4.5 m 5
Sampling rate | Usually from 60 to 120 Hz 3 (if wireless) | Usually up to 250 Hz 1 (subject to resolution) | Varies; 15–30 Hz 5 or higher for high-speed cameras
Software | Usually requires bespoke or off-the-shelf software | Requires off-the-shelf software | Requires bespoke software; off-the-shelf solutions not available
Noise sources and environmental interference | Ferromagnetic disturbances, temperature changes | Bright light and vibrations | IR interference with overlapping coverage, angle of observed surface
Other limitations | Drift, battery life, no direct position tracking | Camera obstructions | Camera obstructions, difficulties tracking bright or dark objects
Favoured applications | Activity recognition [31], identification of hazardous events/poses [44,58,60,65,66,69,75] | Human–robot collaboration [42], robot trajectory planning [52] | Activity tracking [34], gesture or pose classification [25,45,53]
1 Based on a sample layout with 24 OptiTrack PrimeX 41 cameras. 2 Based on a sample layout with 4 OptiTrack Flex 3 cameras. 3 Based on the specs of the Xsens MTw Awinda. 4 Based on the Xsens MVN. 5 Based on the Kinect V2.
Table 5. Types of industry sectors directly or indirectly suggested as potential recipients for the MoCap solutions developed in the reviewed works.
| Industry | Studies | Number of Studies | Percentage of Studies |
|---|---|---|---|
| Construction Industry | [15,24,36,39,40,44,47,55,58,59,60,63,65,66,69,75,76,78] | 18 | 30.5% |
| Industrial Robot Manufacturing | [30,32,33,41,42,43,46,52,57] | 9 | 15.3% |
| Automotive and Cycling Industry | [28,29,34,38,61,62] | 6 | 10.2% |
| Agriculture and Timber | [50,51,56,67,68] | 5 | 8.5% |
| Food Industry | [27,70,72] | 3 | 5.1% |
| Aerospace Manufacturing | [35,49] | 2 | 3.4% |
| Energy Industry | [17] | 1 | 1.7% |
| Petroleum Industry | [37] | 1 | 1.7% |
| Steel Industry | [64] | 1 | 1.7% |
Table 6. Generic MoCap applications in industry.
| Application | Studies | Number of Studies | Percentage of Studies |
|---|---|---|---|
| Workers' Health and Safety | [14,15,25,26,27,28,29,33,35,36,39,41,42,44,45,47,48,49,50,51,53,54,55,58,59,60,65,66,69,70,71,72,73,74,75,76,77,78] | 38 | 64.4% |
| Improvement of Industrial Process or Product | [32,43,46,52,56,57,61,62,67,68] | 10 | 17.0% |
| Workers' Productivity Improvement | [23,24,31,34,38,40] | 6 | 10.2% |
| Machinery Monitoring and Quality Control | [17,30,37,63,64] | 5 | 8.5% |
Table 7. Machine learning classification approaches.
| Study | Machine Learning Model | Input Data | Training Dataset | Classification Output | Accuracy |
|---|---|---|---|---|---|
| [24] | ANN, k-NN | Magnitude of linear and angular acceleration | Manually labelled video sequences | Activity recognition | 94.11% |
| [71] | SVM | Eight motion components (2D position trajectories; magnitude profiles of velocity, acceleration and jerk; angle and angular velocity) | 2000 manually labelled sample data points | 2 fatigue states (yes/no) | 90% |
| [63] | Bayes classifier | Acceleration | Labelled sensor data features | Type of landmark (lift, staircase, etc.) | 96.8% * |
| [64] | ANN | Cutting speed, feed rate, depth of cut, and the three peak spectrum amplitudes from vibration signals in 3 directions | Labelled cutting and vibration data from 9 experiments | Worn tool condition (yes/no) | 94.9% |
| [36] | k-NN, MLP, RF, SVM | Quaternions, three-dimensional acceleration, linear velocity, and angular velocity | Manually labelled video sequences | 14 activities (e.g., bending up and bending down) | Best: RF, 79.83% |
| [25] | RF | Joint angles evaluated from an artificial model built on a segmentation of depth images | Manually labelled video sequences | 5 different postures | 87.1% |
| [47] | ANN | Three-dimensional acceleration | Labelled dataset | Walk/slip/trip | 94% |
| [53] | RF | Depth Comparison Features (DCF) from depth images | Labelled dataset of 5000 images | 7 RULA score postures | 93% |
| [57] | DAG-SVM | Rotation angle and EMG signals | Dataset acquired with known object weights | Light objects, large objects, and heavy objects | 96.67% |
| [69] | One-class SVM | Acceleration | Dataset of normal-walk samples | 2 states (walk, near-miss fall) | 86.4% |
| [32] | CNN | Distance of nearest point, curvature and HSV colour | Pre-existing dataset of objects from YOLO [80] | Type of object | 97.3% * |

* Combined accuracy of the classification process and the success rate of the task. Abbreviations used: ANN = Artificial Neural Network; k-NN = k-Nearest Neighbour; SVM = Support Vector Machine; MLP = Multilayer Perceptron; RF = Random Forest; DAG = Directed Acyclic Graph; CNN = Convolutional Neural Network; HSV = Hue, Saturation, Value; RULA = Rapid Upper Limb Assessment; EMG = electromyography.
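Most of the classifiers in Table 7 share the same pipeline: segment the inertial stream into fixed windows, extract summary features per window, and train a supervised model. A minimal k-NN sketch of that pipeline in NumPy is given below; the two synthetic "activity" streams, the window length, and all distribution parameters are invented for illustration and do not reproduce any of the cited studies.

```python
import numpy as np

def window_features(acc, win=50):
    """Per-window mean, standard deviation and peak of the
    acceleration magnitude, computed over fixed-size windows."""
    mags = np.linalg.norm(acc, axis=1)
    n = len(mags) // win
    w = mags[: n * win].reshape(n, win)
    return np.column_stack([w.mean(axis=1), w.std(axis=1), w.max(axis=1)])

def knn_predict(train_X, train_y, X, k=3):
    """Majority vote among the k nearest training windows (Euclidean)."""
    d = np.linalg.norm(X[:, None, :] - train_X[None, :, :], axis=2)
    nearest = train_y[np.argsort(d, axis=1)[:, :k]]
    return (nearest.mean(axis=1) > 0.5).astype(int)

rng = np.random.default_rng(0)
lifting = rng.normal(1.0, 0.3, size=(5000, 3))   # synthetic high-motion stream
idle = rng.normal(0.33, 0.05, size=(5000, 3))    # synthetic low-motion stream
X = np.vstack([window_features(lifting), window_features(idle)])
y = np.r_[np.zeros(100, int), np.ones(100, int)]
# Hold out every other window for testing
acc_score = (knn_predict(X[::2], y[::2], X[1::2]) == y[1::2]).mean()
```

On well-separated synthetic classes like these the accuracy is near perfect; the 79–97% figures in Table 7 reflect the harder, overlapping feature distributions of real worker motion.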
Table 8. Summary of study designs and findings.
| Study | Sample Size | Sensor Placement | Number of IMUs | MoCap Systems and Complementary Sensors | Study Findings |
|---|---|---|---|---|---|
| [34] | 1 | - | - | 6 × Kinect, 16 × ARTtrack2 IRCs | Evaluation of workers' walk paths in assembly lines showed an average difference of 0.86 m between planned and recorded walk paths. |
| [24] | 1 | Upper body | 1 | - | Activity recognition of over 90% between different construction activities; very good accuracy in activity duration measurements. |
| [61] | 8 | - | 3 | 16 × VICON IRCs (50 × passive markers), 1 × pressure sensor, 2 × belt load cells | Both tested car crash test surrogates had comparable overall ISO scores; THOR-M scored better in acceleration and angular velocity data; Hybrid III had higher average ISO/TS 18571 scores in excursion data. |
| [26] | 11 | Upper body | 2 | Heart rate, skin temperature, air humidity, temperature and VO2 sensors | IMUs can discriminate rest from work but are less accurate in differentiating moderate from hard work. Activity is a reliable predictor of cold stress for workers in cold weather environments. |
| [71] | 20 | Lower body | 1 | - | Fatigue detection of workers with an accuracy of 80%. |
| [72] | 15 | Upper body | 1 | 1 × electronic algometer | Vineyard workers spent more than 50% of their time with the trunk flexed over 30°. No relationship between duration of forward bending or trunk rotation and pain intensity. |
| [23] | 1 | - | - | 4 × Kinect | Proof of concept of a MoCap system for the evaluation of human labour in assembly workstations. |
| [27] | 5 | Full body | 17 | - | Workers lifting kegs from different heights showed different torso flexion/extension angles. |
| [28] | 2 | Full body | 7 | - | Proof of concept of an IMU system for workers' postural analyses, with an exemplary application in the automotive industry. |
| [62] | 20 | Upper body | 3 | 4 × Optotrak IRCs (active markers), 2 × Xsensor pressure pads | No significant differences in body posture between the tested truck seats; peak and average seat pressure was higher with industry-standard seats; average trunk flexion was higher with industry-standard seats by 16% of the total RoM. |
| [78] | 3 | Upper body | 17 | 1 × UWB, 1 × Perception Neuron, 1 × phone camera | In construction tasks, the accuracy of the automatic safety risk evaluation was 83%, compared to a video evaluation by a safety expert. |
| [17] | - | Machinery | 1 | 1 × mechanical scanning sonar | The position tracking accuracy of remotely operated vehicles in a nuclear power plant was within centimetre level when compared to a visual positioning method. |
| [73] | 16 | Full body | 17 | 1 × Certus Optotrak, 6 × Kistler FPs, 2 × Xsens instrumented shoes | The root-mean-square differences between estimated and measured hand forces during manual materials handling tasks from IMUs and instrumented force shoes ranged between 17 and 21 N. |
| [81] | 1 | Full body | - | 4 × depth cameras | Proof of concept of a motion analysis system for the evaluation of human labour in assembly workstations. |
| [35] | 10 | Full body | 17 | - | Demonstration of an IMU MoCap system for the evaluation of workers' posture in the aerospace industry. |
| [29] | 1 | Upper body | 4 | 1 × wearable camera, 3 × video cameras, 6 × BTS EMG sensors | Proof of concept of an EMG and IMU system for the risk assessment of workers' biomechanical overload in assembly lines. |
| [74] | 625 | Full body | 4 | - | More time spent in leisure physical activities was associated with lower pain levels over a 12-month period. Depending on sex and working domain, high physical activity had a negative effect on the course of pain over 12 months. |
| [63] | - | Machinery | 1 | 1 × mobile phone camera | Three-dimensional localization of distant target objects in industry with an average position error of 3% in the location of the targeted objects. |
| [64] | - | Machinery | 1 | - | Tool wear detection in CNC machines using an accelerometer and an artificial neural network, with an accuracy of 88.1%. |
| [65] | 8 | Lower body | 1 | 1 × video camera | Distinguished low-fall-risk tasks (comfortable walking) from high-risk tasks (carrying a side load or high-speed walking) in construction workers walking on I-beams. |
| [75] | 10 | Upper body | 1 | 1 × force plate | Wearing a harness loaded with common ironworkers' tools could be considered a moderate fall-risk task, while holding a toolbox or squatting a high-risk task. |
| [46] | 1 | Upper body | - | 1 × Kinect | Teleoperation of a robot's end effector through imitation of the operator's arm motion, with a similarity of 96% between demonstrated and imitated trajectories. |
| [66] | 10 | Upper body | 1 | - | The Shapiro–Wilk statistic of the used acceleration metric can distinguish workers' movements in hazardous (slippery floor) from non-hazardous areas. |
| [76] | 16 | Lower body | 1 | - | Gait stability while walking on coated steel beam surfaces is greatly affected by the slipperiness of the surfaces (p = 0.024). |
| [36] | 1 | Full body | 17 | 1 × video camera | Two IMUs on the hip and either the neck or head showed motion recognition accuracy (above 0.75) similar to a full-body model of 17 IMUs (0.8) for motion classification. |
| [25] | 8 | Full body | - | 1 × Kinect | Posture classification in assembly operations from a stream of depth images with an accuracy of 87%; similar but systematically overestimated EAWS scores. |
| [47] | 3 | Lower body | 1 | - | Identification of slip and trip events in workers' walking using an ANN and a phone accelerometer, with detection accuracies of 88% for slipping and 94% for tripping. |
| [30] | - | Machinery | 2 | - | High-frequency vibrations can be detected by common accelerometers and used for micro-series spot welder monitoring. |
| [31] | 1 | Full body | 17 | E-glove from Emphasis Telematics | Measurements from the IMUs and force sensors were used for an operator activity recognition model for pick-and-place tasks (precision 96.12%). |
| [48] | 8 | Full body | 4 | 1 × ECG | IMUs were a better predictor of fatigue than ECG. Hip movements can predict the level of physical fatigue. |
| [37] | - | Machinery | 1 | - | The orientation of a robot (clock-face and orientation angles) for pipe inspection can be estimated via an inverse model using an on-board IMU. |
| [77] | 12 | Full body | - | Motion analysis system (45 × passive markers), 2 × camcorders | 3D pose reconstruction can be achieved by integrating morphological constraints and discriminative computer vision. Performance was activity-dependent and affected by self- and object occlusion. |
| [49] | 3 | Full body | 17 | 1 × manual grip dynamometer, 1 × EMG | Workers in banana harvesting adapt to the position of the bunches and the tools used, leading to musculoskeletal risk and fatigue. |
| [50] | 2 | Upper body | 8 | 6 × EMG | A case study on the usefulness of integrating kinematic and EMG technologies for assessing biomechanical overload in production lines. |
| [51] | 2 | Upper body | 8 | 6 × EMG | Demonstration of an integrated EMG-IMU protocol for posture evaluation during work activities, tested in an automotive environment. |
| [52] | - | Machinery | - | 4 × IRCs (5 × passive markers) | Welding robot path planning with an error in the trajectory of the end effector of less than 3 mm. |
| [38] | 42 | - | - | 1 × Kinect | The transfer of assembly knowledge between workers is faster with printed instructions than with the developed smart assembly workplace system (p = 7 × 10⁻⁹), as tested in the assembly of a bicycle e-hub. |
| [53] | - | - | - | 1 × Kinect | Real-time RULA for the ergonomic analysis of assembly operations in industrial environments, with an accuracy of 93%. |
| [39] | - | - | 3 | 1 × Kinect, 1 × Oculus Rift | Smartphone sensors to monitor workers' bodily postures, with errors in the measurements of trunk and shoulder flexion of up to 17°. |
| [67] | - | - | - | 1 × Kinect | Proof of concept of a real-time MoCap platform enabling workers to remotely work on a common engineering problem during a collaboration session, aiding collaborative design, inspection and verification tasks. |
| [40] | - | - | 1 | 1 × CCTV, radio transmitter | A positioning system for tracking people in construction sites with an accuracy of approximately 0.8 m in the trajectory of the target. |
| [54] | 10 | Upper body | 3 | EMG | A wireless wearable system for the assessment of work-related musculoskeletal disorder risks, with 95% and 45% calculation accuracy for the RULA and SI metrics, respectively. |
| [14] | 12 | - | - | 1 × Kinect, 15 × Vicon IRCs (47 × passive markers) | RULA ergonomic assessment in real work conditions using the Kinect, with computed scores similar to expert observations (p = 0.74). |
| [68] | - | - | - | 2 × Kinect, 1 × laser motion tracker | Digitising the wheel loading process in the automotive industry, tracking the moving wheel hub with an error less than the required assembly tolerance of 4 mm. |
| [41] | - | - | - | 1 × Kinect, 1 × Xtion | Demonstration of a real-time trajectory generation algorithm for human–robot collaboration that predicts the space a human worker can occupy within the robot's stopping time and modifies the trajectory to ensure the worker's safety. |
| [42] | - | Upper body | - | 1 × OptiTrack V120: Trio system | Demonstration of a collision avoidance algorithm for robotics aiming to avoid collisions with obstacles without abandoning the planned tasks. |
| [55] | - | Lower body | - | 1 × Kinect, 1 × Bumblebee XB3 camera, 1 × 3D camcorder, 2 × Optotrak IRCs, 1 × goniometer | Vision-based and angular-measurement-sensor-based approaches for measuring workers' motions. Vision-based approaches (Kinect) showed about 5–10 degrees of error in body angles, while the angular measurement sensor measured body angles with about 3 degrees of error during diverse tasks. |
| [43] | - | - | - | 1 × Oculus Rift, 2 × PlayStation Eye cameras | Using the Oculus Rift to control remote robots as a human–computer interface. The method outperforms the mouse in rise time, percent overshoot and settling time. |
| [56] | 29 | Machinery and upper body | 6 | 1 × IRC Eagle Digital | Proof-of-concept method for the evaluation of sitting posture comfort in a vehicle. |
| [45] | - | Upper body | 7 | 1 × Kinect | Demonstration of a human posture monitoring system aiming to estimate the range of motion of body angles in industrial environments. |
| [33] | - | Upper body | - | 1 × HTC Vive system | Proof of concept of a real-time motion tracking system for assembly processes, aiming to identify whether a worker's body parts enter the restricted working space of the robot. |
| [15] | 6 | Full body | 8 | 1 × video camera | Demonstration of a system for the identification of detrimental postures in construction jobsites. |
| [57] | 10 | Upper body | 1 | 8 × EMG | A wearable system for human–robot collaborative assembly tasks using hand-over intentions and gestures. Gestures and intentions of different individuals were recognised with success rates of 73.33% to 90%. |
| [58] | 2 | Upper body | 1 | - | Detection of near-miss falls of construction workers with 74.9% precision and 89.6% recall. |
| [69] | 5 | Upper body | 1 | - | Automatic detection and documentation of near-miss falls from kinematic data with 75.8% recall and 86.4% detection accuracy. |
| [44] | 4 | Lower body | 2 | Video cameras | Demonstration of a method for detecting jobsite safety hazards for ironworkers by analysing gait anomalies. |
| [60] | 9 | Lower body | 1 | Video cameras | Identification of physical fall hazards in construction; results showed a strong correlation between the location of hazards and the workers' responses (0.83). |
| [59] | 4 | Lower body | 2 | Osprey IRC system | Distinguished hazardous from normal conditions on construction jobsites, with mean absolute percentage errors of 1.2 to 6.5 in non-hazardous and 5.4 to 12.7 in hazardous environments. |
| [32] | - | - | - | 1 × video camera | Presentation of a robot vision system based on a CNN and a Monte Carlo algorithm with a success rate of 97.3% for the pick-and-place task. |
| [70] | 15 | - | - | 1 × depth camera | A system warning a person washing their hands if improper application of soap is detected, based on hand gestures, with 94% gesture detection accuracy. |

In bold: validation systems. IRC = infrared camera; THOR-M = test device for human occupant restraint; VO2 = oxygen consumption; FP = force plate; EAWS = European Assembly Worksheet; ECG = electrocardiogram; CNC = Computer Numerical Control; UWB = ultra-wideband; RoM = range of motion; SI = Strain Index.
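Several of the fall studies summarised above ([58,69]) frame near-miss detection as one-class novelty detection: the model is trained only on normal walking and flags anything that deviates from it. The sketch below substitutes a simple z-score gate for the one-class SVM those studies used, to make the idea concrete; the acceleration magnitudes, the spike, and the threshold are all illustrative.

```python
import numpy as np

def fit_normal_walk(mags):
    """Summarise normal-walk acceleration magnitudes as mean and std."""
    mags = np.asarray(mags, dtype=float)
    return mags.mean(), mags.std()

def near_miss_flags(mags, model, z_thresh=5.0):
    """Flag samples deviating strongly from the learned normal walk.
    A crude stand-in for a one-class SVM decision boundary."""
    mu, sigma = model
    return np.abs(np.asarray(mags, dtype=float) - mu) / sigma > z_thresh

rng = np.random.default_rng(1)
normal_walk = rng.normal(1.0, 0.1, 2000)   # training data: normal gait only
model = fit_normal_walk(normal_walk)
stream = rng.normal(1.0, 0.1, 1000)
stream[500] = 3.0                           # simulated stumble spike
flags = near_miss_flags(stream, model)
```

The appeal of the one-class formulation, as in the reviewed studies, is that no labelled near-miss events are needed for training, only the abundant normal-walking data.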

Menolotto, M.; Komaris, D.-S.; Tedesco, S.; O'Flynn, B.; Walsh, M. Motion Capture Technology in Industrial Applications: A Systematic Review. Sensors 2020, 20, 5687.