Article

Markerless 3D Skeleton Tracking Algorithm by Merging Multiple Inaccurate Skeleton Data from Multiple RGB-D Sensors

School of Integrated Technology, Gwangju Institute of Science and Technology, Gwangju 61005, Korea
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(9), 3155; https://doi.org/10.3390/s22093155
Submission received: 23 February 2022 / Revised: 6 April 2022 / Accepted: 18 April 2022 / Published: 20 April 2022
(This article belongs to the Section Sensors and Robotics)

Abstract

Skeleton data, which is often used in the HCI field, is a data structure that can efficiently express human poses and gestures because it consists of the 3D positions of joints. The advancement of RGB-D sensors, such as the Kinect, has made it easy to capture skeleton data from depth or RGB images. However, when tracking a target with a single sensor, occlusion randomly degrades the quality of invisible joints. As a result, multiple sensors should be used to reliably track a target in all directions over a wide range. In this paper, we propose a new method for combining multiple inaccurate skeleton data sets, obtained from multiple sensors that capture a target from different angles, into a single accurate skeleton. The proposed algorithm uses density-based spatial clustering of applications with noise (DBSCAN) to prevent noisy, inaccurate joint candidates from participating in the merging process. After merging the inlier candidates, we apply a Kalman filter to suppress the tremble error in each joint's movement. We evaluated the proposed algorithm's performance using the best view as the ground truth and analyzed the results for different sizes of the DBSCAN searching area. With the proposed algorithm, the joint position accuracy of the merged skeleton improved as the number of sensors increased, and the highest performance was achieved with a DBSCAN searching area of 10 cm.

1. Introduction

1.1. Research Background

Captured human pose or gesture data can provide a great deal of useful information for developing human–robot interaction (HRI) or human action recognition. Skeleton data, which consists of human joint positions, is one of the most commonly used representations in human-centered research. This is because the number of joints that comprise the skeleton does not vary with body shape or gender, maintaining a standardized structure that represents the human pose.
Due to these advantages, skeleton data is widely used in developing games, in industrial environments, and in research on pathological diagnosis through gesture recognition. M. Ma et al. [1] developed a multi-planar full-body rehabilitation game named Mystic Isle using the Microsoft Kinect V2. The user can interact with the virtual environment by using their body. The Kinect V2 sensor tracked the user's body, and the body tracking SDK provided the tracked skeleton data, which included 25 joints. A. Taha et al. [2] attempted to obtain descriptive labeling of complex human activities using Kinect V2 skeleton data. They proposed building a specific feature vector from the identified skeleton joint coordinates as input for a Hidden Markov model.
In [3], M. Varshney et al. proposed a rule-based classifier that performs view-invariant recognition of multiple human activities in real time. They also used a single Kinect V2 sensor and the body tracking SDK to track the human skeleton data, which was then used as input for the proposed classifier. E. Cippitelli et al. [4] attempted to recognize human activity using skeleton data obtained from RGB-D sensors. They also extracted a specific feature vector from skeleton pose data and used a support vector machine to classify human actions. In [5], Bari proposed a gait recognition model based on a deep learning architecture, trained on a public 3D skeleton gait database recorded with the Microsoft Kinect V2 sensor. Based on the skeleton data, two unique geometric features, the joint relative cosine dissimilarity and the joint relative triangle area, are constructed. Consequently, skeleton data is utilized as raw or input data in a wide range of research areas targeting human gestures or interaction. Skeleton-based approaches, on the other hand, do not always ensure good performance: if the skeleton data is incorrect or noisy, they perform poorly. In other words, the quality of the skeleton data may determine the performance of a human tracking system or gesture recognition model [6].
To capture human gestures accurately using skeleton data, the typical approach is a marker-based motion capture system such as VICON (Vicon Motion Systems Ltd., Oxford, UK) or OptiTrack (NaturalPoint Inc., Corvallis, OR, USA), because of their proven accuracy. These systems require wearing a suit with attached reflective markers or a process for attaching the markers to the human body. This procedure takes a long time, and the attached markers make the human's movement unnatural [7]. Therefore, motion capture systems are difficult to apply in a variety of environments for capturing human gestures.
In many recent studies, a human motion capturing system using an RGB-Depth (RGB-D) sensor such as Kinect (Microsoft Corp., Redmond, WA, USA), Xtion (ASUS, Taipei, Taiwan), Astra (Orbbec 3D Technology International, Inc., Troy, MI, USA), or RealSense (Intel Corp., Santa Clara, CA, USA) that does not require marker attachment is widely used as an alternative [8,9,10]. The MS Kinect sensor is the most widely used in motion capture research, and it includes not only the sensor SDK, which captures RGB and depth data, but also the body tracking SDK, which extracts 3D skeleton data from the depth image [11]. The Azure Kinect sensor is the most recent addition to the Kinect sensor series. The Azure Kinect body tracking SDK provides skeleton data consisting of 3D positional information for 32 joints, as shown in Figure 1. A random forest algorithm is adopted in the Kinect V2, whereas a deep learning-based algorithm is adopted in the Azure Kinect body tracker [12,13]. A GPU can be used to run the deep learning model that extracts the human skeleton data. Furthermore, because the sensor SDK has been upgraded to allow multiple sensors to be operated on a single PC, it can be effectively applied in a variety of research or industrial areas.
Although skeleton tracking for a whole body is possible using Azure Kinect, poor skeleton quality often occurs due to a problem called self-occlusion [14]. This problem is a limitation of the single sensor system and occurs when the target joint is obscured by other body parts. The issue causes the quality of skeleton data to degrade at random. One of the simplest ways to solve this problem is to use a motion capture system with multiple sensors that can cover the entire workspace area [15,16]. For example, if an occlusion problem causes an error in the skeleton data, the inaccurate information of the obscured joint can be compensated by using information from another sensor. In other words, a single sensor’s self-occlusion problem can be overcome by appropriately combining skeleton data from multiple sensors.
In this paper, we developed a new algorithm that merges multiple skeleton data sets obtained by multiple RGB-D sensors in real time. We used Microsoft Azure Kinect sensors because of their convenient expandability, meaning that multiple sensors can be utilized on a single PC. The skeleton data was obtained using the Azure Kinect body tracking SDK. Because the Azure Kinect sensor provides a function that synchronizes time steps between sensors by linking them together, no additional work for time synchronization is required when using multiple sensors. Specifically, we adopted density-based spatial clustering of applications with noise (DBSCAN), which is generally used for clustering, as the error filter in the skeleton merging process. After the merging process, we used a Kalman filter to minimize the tremble error in joint movements.
There are three main contributions. First, the proposed algorithm can merge the skeleton data accurately in real time; we demonstrated how increasing the total number of sensors (TNOS) improves the joint accuracy of the merged skeleton. Second, the error caused by self-occlusion is avoided during the merging of skeletons obtained by multiple Kinects using DBSCAN, a clustering algorithm. Third, we reduced joint position tremble error by applying a Kalman filter to the merged skeleton data. With these contributions, the proposed method can help improve the performance of various skeleton-based research and applications by providing accurate skeleton data in all directions.

1.2. Related Works

There have been many studies dealing with markerless skeleton tracking to overcome the limited usability of motion capture equipment. Among them, studies using RGB-D sensors are representative. The development of many kinds of RGB-D sensors provides human pose information in the form of skeleton data extracted from depth images. However, there is a problem known as self-occlusion, which is caused by the sensor's limited viewing area. The skeleton tracking algorithm provides the best accuracy when the subject is facing the sensor [17]; in other words, the subject can be tracked more reliably when facing the sensor in a pose with no invisible joints [14].
To address the occlusion issue, several studies adopted multiple RGB-D sensor systems to minimize invisible body parts. They attempted to obtain optimized skeleton data by combining multiple skeleton data sets obtained from sensors installed at various viewpoints. Several studies have merged multiple skeleton data sets using constraint rules determined by the structure of the human skeleton. These attempts assign different weights to inaccurate joint candidates in the merging process and may require an initial configuration step to determine structural components such as bone length.
Y. Kim et al. [18] proposed a motion capture system using multiple Kinect V2 sensors for capturing dynamic human gestures in a 3D environment. A posture reconstruction method was adopted for tracking human gestures consistently. They proposed tracking the torso joints and limb joints separately, based on the consistent bone lengths of the human body. For the torso, the mean value of the candidates within the bone length in the direction from the parent joint to the target joint was calculated. For the limbs, the joint candidate with the smallest sum of rotation direction and rotation angle relative to the previous joint coordinates was chosen from among the candidates within the bone length threshold. J. Colombel et al. [19] presented a fusion algorithm for tracking joint center positions using multiple skeleton data sets from multiple depth cameras to improve human motion analysis. The proposed system adopted an extended Kalman filter for fusing the joint candidates into the joint center position and applied anthropomorphic constraints of the human skeleton structure. As the measurement model of the extended Kalman filter, a specific forward kinematics model representing the human locomotor system with fixed bone lengths was used. The measurement fusion method was chosen among the Kalman filter-based fusion methods because the proposed algorithm should run in real time. N. Chen et al. [20] describe a method for combining two skeleton data sets from two Kinect V2 sensors. They proposed a data fusion strategy that weights the candidates of the target joint based on human physiological movement constraints related to both bone length and joint angle.
The other approach for developing a multiple skeleton fusion algorithm is to define a confidence value for each joint candidate in the merging process. The confidence value is usually determined according to the state in which the joint or skeleton data is detected in the sensor's view. In [21], the authors proposed a human pose estimation method that fuses multiple skeleton data sets and tracks the merged skeleton. They considered the confidence value at both the whole-skeleton and per-joint levels, and they filtered inaccurate skeleton or joint data using a confidence value threshold. Finally, the fused skeleton was tracked using a Kalman filter. Y. Wu et al. [22] created a real-time full-body tracking system with three Kinect V2 sensors. They used an adaptive weighting adjustment fusion method to build merged 3D skeleton data regardless of the subject's orientation. Each candidate of the target joint obtained from each sensor was weighted according to the angle between the sensor and the subject before participating in the merging process. K. Desai and S. Raghuraman [23] proposed a real-time skeletal pose tracking method that aims to obtain merged skeleton data from multiple inaccurate skeletons. They defined a new confidence value named the probability of an accurate joint (PAJ) for each target joint. Several factors were considered when determining the PAJ. First, the PAJ was calculated using the skeleton's orientation, which is the angle between the subject's facing direction and the sensors. The second factor is the joint state, which indicates whether the target joint is visible in the sensor's visible area. The third factor is the bone angle, calculated between the bone and the capture plane, as well as the fixed length of the human's bone. Taking all components into account, the merged skeleton's joint positions were determined using a distance-constrained consensus approach that maximizes the overall PAJ.
Some studies tried to design a new merging process for calculating the position of a joint. Moon and others [24] developed a human skeleton tracking system using Kalman filter framework with weighted measurement fusion method for merging five inaccurate skeleton data. The five Kinect V2 sensors were used to capture the subject, and the measurement noise of the Kalman filter was controlled based on the predicted state and joint motion continuity. In addition, H. Zhang et al. [25] proposed a method for combining multiple skeleton data sets. The proposed method’s first strategy was to filter outliers among target joint candidates using spatial region constraints and K-means clustering. The second step was to combine the inlier candidates into a single skeleton and apply the proposed adaptive weighted fusion rules. K. Ryselis et al. [26] presented a practical solution for performing multiple skeleton data fusion algorithms in vector space using algebraic operations. They aimed to track the human with non-standard poses such as squatting, sitting, and lying.
As in the studies described above, the algorithm developed in this paper does not estimate the skeleton from raw data such as RGB, depth, and point clouds, but creates optimized skeleton data by merging multiple inaccurate skeleton data sets measured from various angles. In particular, as in [25], we also propose a method for filtering inaccurate joint candidates by applying a clustering algorithm in the merging process.
The remainder of the paper is structured as follows: Section 2 describes the calibration method for all sensors' coordinate systems, the proposed merging algorithm that uses DBSCAN, and the settings of our experiment; Section 3 describes the results of the experiment; Section 4 discusses the experimental results and future work. Finally, we present our conclusions in Section 5.

2. Materials and Methods

2.1. Calibration for Coordinate Systems of Sensors

To use the 3D data acquired by multiple RGB-D sensors, a calibration process must be performed first. Moreover, since the calibration accuracy can have a great effect on the joint positions of the merged skeleton data, an accurate calibration process is essential. The calibration process refers to matching the coordinate systems of all sensors into one global coordinate system by calculating a rigid transformation $M = \{R, T\}$. Here, $R$ is a rotation matrix parameterized by the three rotations $\theta_x$, $\theta_y$, and $\theta_z$, and $T$ is a translation vector consisting of the three translation offsets along the x, y, and z axes, respectively. In this paper, two calibration steps were adopted, as described in Figure 2. The first is sensor-to-sensor calibration, which matches the coordinate system of each sensor to the coordinate system of the reference sensor set as the master sensor. The second is sensor-to-marker calibration, which resets the coordinate systems of all sensors calibrated with the reference sensor to a global origin customized by the user.

2.1.1. Sensor-to-Sensor Calibration

The sensor-to-sensor calibration process was conducted by constructing correspondence trajectories composed of the 3D centroid points of a sphere object. Figure 3 depicts the sphere object, which has a radius of 24 cm and a red tone color. To track the center coordinates of the spherical object, the RGB image was first converted into HSV color space, and the histogram backprojection algorithm [27] was used to extract the area of the specific hue value. The point cloud data (PCD) corresponding to the sphere object is then obtained by extracting the pixels in the depth image that correspond to the sphere pixels in the HSV image. Finally, the RANSAC [28] algorithm iteratively identified all 3D points satisfying the surface equation of a sphere with radius R, providing a robust 3D centroid of the sphere object. The 3D centroid points of the sphere object extracted over several frames from each sensor constitute a correspondence trajectory. While capturing the sphere, frames in which one or more sensors do not detect the object are filtered out when constructing the trajectories.
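For illustration, the sphere-centroid extraction from the sphere's point cloud can be sketched as a minimal RANSAC loop. This is not the paper's implementation (which starts from HSV backprojection in OpenCV); the function names, the tolerance value, and the meter units are assumptions, and the known radius corresponds to the 24 cm sphere described above.

import numpy as np

def fit_sphere(pts):
    # Least-squares sphere through four or more points: returns (center, radius).
    A = np.c_[2.0 * pts, np.ones(len(pts))]
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

def ransac_sphere_centroid(pcd, known_radius=0.24, n_iter=200, tol=0.01, seed=None):
    # RANSAC estimate of the centroid of a sphere of known radius from a noisy
    # point cloud of its surface. pcd: (N, 3) array, assumed to be in meters.
    rng = np.random.default_rng(seed)
    best_center, best_inliers = None, 0
    for _ in range(n_iter):
        sample = pcd[rng.choice(len(pcd), 4, replace=False)]
        center, radius = fit_sphere(sample)
        if abs(radius - known_radius) > tol:   # reject samples inconsistent with the known size
            continue
        d = np.abs(np.linalg.norm(pcd - center, axis=1) - known_radius)
        n_in = int((d < tol).sum())
        if n_in > best_inliers:
            best_center, best_inliers = center, n_in
    return best_center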
To obtain a rigid transformation matrix between the coordinate systems of the master and target sensors, singular value decomposition (SVD) was applied to a pair of trajectories captured by the corresponding sensors [29]. Let $C^M$ be the trajectory captured in the coordinate system of the master sensor and $C^p$ be the trajectory captured in the coordinate system of target sensor number $p$; the rigid transformation matrix $RT$ between $C^p$ and $C^M$ is:

$$RT^M_p = \begin{bmatrix} R^M_p & T^M_p \\ 0 & 1 \end{bmatrix}$$
To compute the unknown parameters of $R^M_p$ and $T^M_p$, a trajectory with three or more correspondences is required. In this study, we collected 1000 correspondence points to create a trajectory. The optimal transformation could then be computed by minimizing the error function shown below:

$$E(R^M_p, T^M_p) = \sum_{i=1}^{N} \left\| C^M_i - (R^M_p C^p_i + T^M_p) \right\|^2$$
where $N$ is the number of correspondences in the trajectory. To compute the rotation matrix, we used the least-squares-based method described in [30]. Thus, the optimal solution for $E(R^M_p, T^M_p)$ could be obtained by computing the covariance matrix as follows:

$$Cov = \sum_{i=1}^{N} (\hat{C}^p - C^p_i)(\hat{C}^M - C^M_i)^T$$

where $\hat{C}^p$ is the centroid of the trajectory captured in the coordinate system of target sensor number $p$, and $\hat{C}^M$ is the centroid of the trajectory captured in the coordinate system of the master sensor. Using SVD, the covariance matrix is decomposed as $Cov = USV^T$. Then, the rotation matrix is calculated as $R^M_p = UV^T$ and the translation as $T^M_p = \hat{C}^M - R^M_p \hat{C}^p$. This procedure is carried out between every target sensor's coordinate system and the master sensor; as a result, all target sensor coordinate systems are calibrated to the coordinate system of the master sensor.
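For reference, the following is a minimal sketch of the standard SVD-based (Kabsch) alignment of two corresponding trajectories, the same family of method used here. It is not the paper's code: the cross-covariance sign convention and the reflection guard are standard textbook choices that may differ slightly from the notation above.

import numpy as np

def rigid_transform(C_p, C_M):
    # Estimate R, T such that C_M is approximately R @ C_p.T + T, where C_p and C_M
    # are (N, 3) arrays of corresponding sphere centroids (target and master sensor).
    mu_p, mu_M = C_p.mean(axis=0), C_M.mean(axis=0)
    H = (C_p - mu_p).T @ (C_M - mu_M)                            # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    T = mu_M - R @ mu_p
    return R, T

In the two-step calibration of Figure 2, a point captured by target sensor p would then be mapped into the user-defined global frame by composing this sensor-to-sensor transform with the master sensor's sensor-to-marker transform described in the next subsection.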

2.1.2. Sensor-to-Marker Calibration

In this paper, we performed marker calibration to express the collected 3D skeleton data in a desired coordinate system. This is accomplished by placing a planar marker with a specific pattern in the capture area and recognizing it in the master Kinect's RGB view. As shown in Figure 3, we used a plate printed with four ARUCO markers. The method of [31], implemented in the OpenCV 3.4.0 library, was used for marker recognition. After detecting the markers in the master Kinect's RGB image, the PCD corresponding to each marker can be extracted from the depth image collected along with the RGB image. Then, the centroid of the PCD is calculated to obtain the three-dimensional center point of each marker.
To set all coordinate systems to a custom coordinate system defined by the user, we calculate a rigid transformation matrix by setting the desired coordinate axis and origin point using the center points. As shown in Figure 3, the vector between the centroid of marker number 2 and the centroid of marker number 1 serves as the x-axis in this study, while the vector between the centroid of marker number 0 and the centroid of marker number 1 serves as the y-axis. The z-axis was set in the direction from the floor to the ceiling by calculating the cross-product of the x and y axes, and the origin point was set as the centroid of marker number 1.
After sensor-to-marker calibration, the coordinate systems of all target sensors, already transformed to the master sensor, are transformed again into the custom coordinate system. Through this procedure, all data obtained from the multiple RGB-D sensors can be expressed in the coordinate system and origin set by the user.
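A minimal sketch of building the custom frame from the marker centroids is given below. It assumes the axis construction described above (x from marker 2 toward marker 1, y from marker 0 toward marker 1, origin at marker 1); the exact sign conventions and the re-orthogonalization step are illustrative assumptions, not the paper's implementation.

import numpy as np

def marker_frame(c0, c1, c2):
    # c0, c1, c2: 3D centroids of ARUCO markers 0, 1, and 2 in master-sensor coordinates.
    # Returns a 4x4 transform mapping master-sensor coordinates to the custom frame.
    x = (c1 - c2) / np.linalg.norm(c1 - c2)        # assumed x-axis direction
    y = (c1 - c0) / np.linalg.norm(c1 - c0)        # assumed y-axis direction
    z = np.cross(x, y); z /= np.linalg.norm(z)     # floor-to-ceiling axis
    y = np.cross(z, x)                             # re-orthogonalize y
    R = np.stack([x, y, z], axis=1)                # columns: frame axes in sensor coords
    M = np.eye(4)
    M[:3, :3] = R.T                                # sensor -> custom-frame rotation
    M[:3, 3] = -R.T @ c1                           # origin placed at marker 1
    return M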

2.2. Skeleton Merging Algorithm

As mentioned in Section 1, a self-occlusion problem that randomly degrades the quality of the skeleton data can occur when a single RGB-D sensor is used to track the target. To handle this issue, we adopted multiple RGB-D sensors. If a joint that is invisible to one sensor is visible to another, its position can be compensated by an appropriate merging algorithm. The main issue when tracking skeletons with multiple sensors is how to combine multiple inaccurate skeleton data sets. In this paper, we designed a skeleton merging algorithm whose accuracy increases with the TNOS, as shown in Figure 4. After capturing the skeleton data and applying calibration, the merging process is applied. The proposed algorithm's first component is the rearrangement of the joints' directions, and the second is DBSCAN, which is used as a noise filtering method in the merging process [32]. The third is a Kalman filter-based tracking method that tracks the target joint and smooths its movement by canceling the tremble error.

2.2.1. Arrangement of Skeleton to Correct the Misoriented Joints

Before applying the merging process, an arrangement procedure that corrects the direction of the joints is needed because of the misorientation problem mentioned in [21,23,33]. In more detail, the misorientation problem refers to a phenomenon in which the left–right direction of a joint in the measured skeleton data differs from the actual joint's left–right direction. For example, while one of the measured skeleton's joints is labeled as the left shoulder, in the actual skeleton it may be the right shoulder. In addition, the left and right directions of each joint may be recognized differently by each sensor. Figure 5 shows three examples of the misorientation problem. The yellow and green spheres indicate the positions of the elbow joint recognized by the four sensors as left and right, respectively. Since spheres of the same color are not adjacent to each other but are mixed, it is confirmed that the misorientation problem occurs in the skeleton data obtained by the SDK. As a result, if joints are merged without alignment, crushed skeleton data such as the red skeleton in Figure 5 is inevitable.
To address this problem, an arrangement process is applied to each pair of left and right joints. The body tracking SDK of Azure Kinect provides a confidence value for each tracked joint. There are three levels of confidence: medium, low, and none. If the tracker can track the joint with an average level of confidence, the joint is assigned a medium confidence level. The low level is assigned to a joint that is not tracked but is estimated by the tracker because it is occluded or invisible. The none level indicates that the target joint is not in the field of view. This confidence level was used as the reference for the arrangement.
First, for each of the left and right joints, the reference position for the arrangement is set to the average of the joint positions that have a medium confidence level (if there is no medium-level joint, low-level joints are used instead). A distance comparison is then conducted. Let $P^{Rr}_i = (x^{Rr}_i, y^{Rr}_i, z^{Rr}_i)$ and $P^{Rl}_i = (x^{Rl}_i, y^{Rl}_i, z^{Rl}_i)$, where $P^{Rr}_i$ and $P^{Rl}_i$ are the reference positions of the right and left joints, respectively, and $i$ is the index of the left–right joint pair to be verified. Then, the distances between the reference position and the joints in both directions are calculated and compared to correct the misorientation problem using the following functions:

$$D^{correct}_i = Dist(P^{Rr}_i, P^{Tr}_i)$$
$$D^{wrong}_i = Dist(P^{Rr}_i, P^{Tl}_i)$$

Here, $Dist(P_1, P_2)$ denotes the Euclidean distance (mm) between $P_1$ and $P_2$, and $P^{Tr}_i$ and $P^{Tl}_i$ are the positions of the target joint on the right and left sides, respectively. We then define the following checking rule to detect the misorientation problem:

$$D^{correct}_i > D^{wrong}_i \Rightarrow \text{problem}, \qquad D^{correct}_i < D^{wrong}_i \Rightarrow \text{no problem}$$
When a misoriented joint pair is detected, the left and right labels of the target joint pair are swapped. The Azure Kinect body tracking SDK was used in this study, and the tracker provides skeleton data with 32 joints. However, we only used 16 joints from the torso (pelvis, spine, chest, neck, head, and shoulders) and limbs (elbows, wrists, hips, knees, and ankles). This is because the remaining joints are less accurate and show large errors when the subject performs large movements. The proposed arrangement process is applied to all limb, hip, and shoulder joints in this study to correct misoriented joints.
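The distance-based check above reduces to a simple per-sensor swap rule for each joint pair, sketched below. The function name and argument layout are illustrative assumptions; only the comparison itself follows the rule defined in this subsection.

import numpy as np

def correct_left_right(ref_right, cand_right, cand_left):
    # ref_right: reference position of the right joint (e.g., the mean of
    # medium-confidence right-joint candidates); cand_right / cand_left:
    # one sensor's candidates labeled right and left.
    d_correct = np.linalg.norm(ref_right - cand_right)
    d_wrong = np.linalg.norm(ref_right - cand_left)
    if d_correct > d_wrong:             # labels are misoriented for this sensor
        return cand_left, cand_right    # swap the left/right labels
    return cand_right, cand_left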

2.2.2. Skeleton Merging and Noise Filtering Using DBSCAN

In the merging process, we merge only inlier candidates by filtering noise based on the candidates' positions. Noise candidates are defined as incorrectly recognized joints collected from sensors that had issues recognizing the target joint. Compared to an inlier candidate, a noise candidate is relatively far from the actual target joint position. In this study, we use DBSCAN to filter these noise candidates during the merging process, as shown in Figure 6.
DBSCAN is a clustering method based on the density of the target data distribution. It operates under the assumption that candidates belonging to the same cluster are distributed close to each other. Because it works by including adjacent data in the same cluster based on data density, DBSCAN can cluster data of unspecified shapes well. Furthermore, DBSCAN can classify noise data while clustering, which mitigates the degradation of clustering performance caused by outlier participation. When merging joint candidates that can be randomly distributed, noise can therefore be appropriately filtered by using DBSCAN. It operates with two hyperparameters: the neighboring data searching area ε and the minimum number of neighboring data N_c. The detailed operation of DBSCAN is described in Algorithms 1 and 2.
Algorithm 1: DBSCAN
Input: candidate set χ, searching area ε, minimum number of neighboring data N_c
Output: labels y
k ← 0  // number of clusters
foreach x ∈ χ do
    y_x ← UNASSIGNED
end
foreach x ∈ χ do
    if y_x = UNASSIGNED then
        χ_x ← SCAN(x, ε)  // search the neighbors of x
        if |χ_x| ≥ N_c then
            k ← k + 1
            y_x ← k
            foreach z ∈ χ_x do
                if y_z = UNASSIGNED then
                    y_z ← k
                    χ_z ← SCAN(z, ε)
                    if |χ_z| ≥ N_c then
                        χ_x ← χ_x ∪ χ_z
                    end
                end
            end
        else
            y_x ← NOISE
        end
    end
end
return y
Algorithm 2: SCAN
Input: data point x, candidate set χ, searching area ε
Output: neighbors χ_x
χ_x ← ∅
foreach z ∈ χ do
    if Euclidean_distance(x, z) ≤ ε then
        χ_x ← χ_x ∪ {z}
    end
end
return χ_x
Our proposed algorithm treats the number and density of candidates in a cluster as a proxy for the probability that the cluster corresponds to the actual location of the target joint. In other words, the more densely the candidate joint positions recognized from various angles are grouped, the higher the assumed probability that the data constituting the cluster matches the actual joint coordinates. We use two additional tricks. First, the reference position mentioned in Section 2.2.1 is included as a candidate, which gives more weight in the clustering process to candidates with a high tracking confidence value. Second, the previous position of the target joint is included as a clustering candidate. Even if an occlusion problem occurs during skeleton recognition, the resulting noise may be too large for DBSCAN to filter. Furthermore, there are cases where the movement of the recognized joint exceeds the actual joint movement distance or where no joint movement is recognized at all. To solve this problem, we added a smoothing effect to the joint movement by including the previous joint position in the clustering process.
After applying DBSCAN, we selected the cluster containing the most points as the candidate group for the target joint and used the centroid of the chosen cluster as the merged joint position. To preserve the range of the actual joint movement, the reference position and previous coordinates of the target joint, although included in the clustering, were not used in the centroid calculation. Among the hyperparameters of DBSCAN described above, N_c was fixed to 1. In addition, a suitable value for the neighboring data searching area cannot be determined in advance; therefore, we conducted experiments with searching areas of 5, 10, 15, and 20 cm in Section 3.
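The per-joint merging step can be sketched as follows. This is a simplified illustration of the strategy described above, using scikit-learn's DBSCAN in place of Algorithms 1 and 2; the function name, the min_samples mapping of N_c, the units, and the fallback to the previous position are assumptions made for the sketch.

import numpy as np
from sklearn.cluster import DBSCAN   # stand-in for Algorithms 1 and 2

def merge_joint(candidates, reference, previous, eps=0.10):
    # candidates: (K, 3) positions of one joint from the K sensors (after arrangement);
    # reference: high-confidence reference position; previous: merged position from the
    # previous frame (1-frame smoothing); eps: searching area, in the same units as the
    # joint positions (e.g., 0.10 m for the 10 cm setting).
    candidates = np.asarray(candidates, dtype=float)
    pts = np.vstack([candidates, reference, previous])
    labels = DBSCAN(eps=eps, min_samples=2).fit_predict(pts)   # -1 marks noise
    clusters = [c for c in set(labels) if c != -1]
    if not clusters:
        return np.asarray(previous, dtype=float)   # no dense cluster: keep the last position
    best = max(clusters, key=lambda c: int((labels == c).sum()))
    mask = labels[:len(candidates)] == best        # centroid over sensor candidates only
    if not mask.any():
        return np.asarray(previous, dtype=float)
    return candidates[mask].mean(axis=0)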

2.2.3. Joint Position Tracking Using Kalman Filter

Even after the merging process, tremble noise remains in the target joint. Therefore, we applied a Kalman filter-based tracking method to each joint to smooth the movement of the target joint [34]. We denote by $X^j_t = (X^j_{x,t}, X^j_{y,t}, X^j_{z,t})$ the state vector of joint number $j$ at time step $t$, and $Z^j_t = (x^j_{Z,t}, y^j_{Z,t}, z^j_{Z,t})$ is the measurement vector for this joint resulting from the previously described merging algorithm. We then designed a linear system with a state model including process noise and a measurement model including measurement noise as follows:
$$X^j_{t+1} = A X^j_t + w_p, \quad w_p \sim N(0, Q_t)$$
$$Z^j_t = H X^j_t + v_m, \quad v_m \sim N(0, R_t)$$
where $A$ is the state transition model, $H$ is the measurement matrix, and $w_p$ and $v_m$ are the process noise and measurement noise, respectively. Here, $w_p$ and $v_m$ are white noise following zero-mean Gaussian distributions with covariances $Q_t$ and $R_t$, respectively. In our system, $Q_t$ and $R_t$ are set to 0.01 and 1.0, respectively.
For the input argument $X^j_t$, the 3D coordinate data of the target joint is used, and $Z^j_t$ is the corrected position of $X^j_t$ from the measurement step. The designed state model estimates a predicted joint position, $\tilde{X}^j_t$, through the prediction and correction steps of the Kalman filter applied to $X^j_t$. In detail, the measurement step removes the noise in $X^j_t$, and then the prediction step estimates $\tilde{X}^j_t$. The predicted state vector $\tilde{X}^j_t$ and the predicted covariance matrix $\tilde{P}^j_t$ are estimated in the prediction step from time step $t-1$ as follows:
$$\tilde{X}^j_t = A \hat{X}^j_{t-1}$$
$$\tilde{P}^j_t = A P^j_{t-1} A^T + Q_t$$
where $\hat{X}^j_{t-1}$ and $P^j_{t-1}$ are the posteriori state estimate and the posteriori error covariance matrix at time step $t-1$, respectively. Here, $\tilde{X}^j_t$ is used as the tracked target joint position in this study. In the correction step, $\tilde{X}^j_t$, $\tilde{P}^j_t$, and the Kalman gain $K_t$ are used to calculate the posteriori state estimate $\hat{X}^j_t$, which removes the noise. Thus, $\hat{X}^j_t$ is calculated as the corrected position by using the measurement value $p^j_t$ and the posteriori covariance matrix $P^j_t$ as follows:
$$K_t = \tilde{P}^j_t H^T \left( H \tilde{P}^j_t H^T + R_t \right)^{-1}$$
$$\hat{X}^j_t = \tilde{X}^j_t + K_t \left( p^j_t - H \tilde{X}^j_t \right)$$
$$P^j_t = (I - K_t H)\tilde{P}^j_t,$$
which will be used in the prediction step at time step $t+1$. By applying a Kalman filter to each joint and performing this tracking method, it is possible to obtain smooth joint movements even when a joint's motion is trembling or not recognized.
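A minimal per-joint sketch of this filter is given below. It assumes a static position model ($A = H = I$), which the text does not state explicitly, and uses the scalar $Q_t = 0.01$ and $R_t = 1.0$ from above as isotropic covariances; following the text, the predicted position $\tilde{X}^j_t$ is returned as the tracked output.

import numpy as np

class JointKalmanTracker:
    # One tracker per joint; call step() once per frame with the merged position.
    def __init__(self, x0, q=0.01, r=1.0):
        self.x = np.asarray(x0, dtype=float)   # posteriori state estimate (3,)
        self.P = np.eye(3)                     # posteriori error covariance
        self.Q = q * np.eye(3)                 # process noise covariance
        self.R = r * np.eye(3)                 # measurement noise covariance

    def step(self, z):
        # Prediction with A = I (assumed static position model).
        x_pred = self.x
        P_pred = self.P + self.Q
        # Correction with H = I, using the merged joint position z as the measurement.
        K = P_pred @ np.linalg.inv(P_pred + self.R)
        self.x = x_pred + K @ (np.asarray(z, dtype=float) - x_pred)
        self.P = (np.eye(3) - K) @ P_pred
        return x_pred                          # predicted position used as the tracked joint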

2.3. Experiment Setting

In this section, the experiment settings and environment are described. All experiments were conducted on an Intel i9-11900F octa-core processor clocked at 2.50 GHz with 32 GB RAM. Additionally, we used two GeForce RTX 2060 Super GPUs for operating the Azure Kinect body tracking SDK. In the experiment, five Azure Kinect sensors were used to capture all of the subjects' gestures. The sensor SDK version was 1.4.1, and the body tracking SDK version was 1.1.0. Every procedure was developed in the C++ environment. The proposed algorithm's goal is to track skeleton data in real time. However, when the body tracking SDK for five sensors was operated on two GPUs, the capturing speed was less than 10 frames per second. Therefore, among the models of the body tracking SDK, we chose the lite model, which provides roughly a 2× speed increase at a 5% accuracy decrease (as described at https://docs.microsoft.com/en-us/azure/kinect-dk/, accessed on 17 April 2022). Additionally, the depth mode of the sensor was set to narrow field-of-view, which has the smallest depth image size. Then, with five sensors on a single PC, we could achieve a capturing rate of 30 frames per second (obtaining data from all sensors). Furthermore, the proposed skeleton merging algorithm generates results at a speed of 1–2 ms per frame on average. As a result, the final tracking ran at approximately 28–30 frames per second, allowing real-time tracking of skeleton data. In addition, by connecting the devices to each other, the time steps of all sensors were synchronized. Therefore, no additional work for time synchronization was required because the sensor SDK manages the trigger timing between linked sensors.
We conducted an experiment to evaluate the proposed skeleton tracking algorithm, capturing six gestures performed by six different people using four Azure Kinect sensors. The gestures were raising and lowering both hands, jumping, squatting, lunging, walking, and moving the body in a standing pose (random movement). Examples of the gestures are shown in Figure 7. All subjects repeatedly performed each gesture for 1000 frames, and all gestures started from a standing pose. In the case of the jump gesture, all subjects jumped naturally with their hands raised. In the case of random movement, all subjects moved their body in a standing pose, for example, waving their arms, leaning, or crossing their arms. Many studies that test the accuracy of skeleton data use motion capture equipment as the ground truth. However, there is an interference problem between the RGB-D sensor and the motion capture system [35,36], caused by interference at infrared (IR) wavelengths between the RGB-D sensor and the IR cameras of the motion capture system. This issue prevents the RGB-D sensor from measuring depth data and causes serious problems when the RGB-D sensor's body tracker estimates skeleton data. Furthermore, because the retroreflective markers used in the motion capture system reflect IR, the RGB-D sensor cannot extract depth information from the corresponding areas, affecting skeleton data recognition. Additionally, the position of each joint in the skeleton data provided by motion capture does not perfectly match the skeleton data of the RGB-D sensor body tracker. As a result, it is not appropriate to use the motion capture system as the ground truth when quantitatively evaluating the proposed algorithm.
Therefore, similar to the evaluation method of [21], we adopted the skeleton data from the best view sensor as the ground truth for the evaluation. The performance of Kinect body tracking is the best when entire body parts could be observed in the depth view of the sensor [14]. Additionally, when the sensor measures the subject from the front view of the subject, the largest number of body parts can be measured [17]. In other words, when the skeleton data are measured from the front view, the result of body tracking SDK could have the best accuracy. Therefore, the skeleton data measured from the front of the subject was adopted as the ground truth. By comparison, we evaluated how different the skeleton data merged by the proposed algorithm was from the ground truth.
For the experiment, we installed four sensors to capture the gestures of subjects and one more sensor was installed additionally for the ground truth. Figure 8 depicts the locations of all capturing sensors with gray color as well as the areas where the subjects performed the gestures. The positions of capturing sensors measuring the skeleton data of the subject were fixed. The subjects performed all gestures for each direction described as the blue arrow-cross in Figure 8. According to directions of the subject, the reference sensor was moved to obtain the reference data that measured the subject from the front (the candidate positions of reference sensor are described in Figure 8 with green color). The best view sensor was capable of capturing all gestures without occlusion.
When all capturing sensors measure the skeleton data of the subject, the self-occlusion problem arises regardless of the direction in which the subject performs a gesture. We defined the error for the evaluation as the distance, in millimeters, between the joint positions of the ground truth and the joint positions produced by the merging process. The standard deviation of the error values was also calculated. Through the designed experiment, the difference between the skeleton data merged by the proposed algorithm and the skeleton data measured from the front was compared. The analysis of the results is described in the following section, and the raw data (RMSE, STD) are provided in Appendix A and Appendix B.
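For reference, the error statistics can be computed as in the short sketch below; the array layout and function name are assumptions for illustration.

import numpy as np

def joint_error_stats(merged, ground_truth):
    # merged, ground_truth: (F, J, 3) joint positions in mm over F frames and J joints.
    err = np.linalg.norm(merged - ground_truth, axis=-1)   # (F, J) Euclidean distances
    rmse = np.sqrt((err ** 2).mean(axis=0))                # per-joint RMSE (mm)
    std = err.std(axis=0)                                  # per-joint STD of the error (mm)
    return rmse, std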

3. Experimental Results

3.1. Result of Performance Improvement in the Merging Algorithm

Figure 9 shows the average position error of several algorithm variants. The first analysis was conducted to demonstrate the performance improvement of the merging algorithm over all gestures performed by all subjects. The comparison group consisted of Just Average (A1), Orientation Resetting and DBSCAN (A3), Orientation Resetting and 1-frame Smoothing DBSCAN (A4), and Orientation Resetting and 1-frame Smoothing DBSCAN with Kalman Filter (A5). Each comparison variant represents one of the elements constituting the proposed merging algorithm. Orientation Resetting and Average (A2) was excluded from the comparison group because there was no significant difference in performance from A1 on our evaluation data; in other words, there was no misorientation case in the evaluation data. However, a misorientation situation was observed when testing the body tracking SDK with a sensor height of 180 cm. Furthermore, since many studies have reported misorientation errors in skeleton data measured by the Kinect body tracking SDK, the realignment process of the joints' orientation is considered essential [21,23,33]. For A5, the average of the results obtained with DBSCAN searching areas of 5, 10, 15, and 20 cm was used. The first experiment's evaluation is divided into joints corresponding to the torso, upper limb, and lower limb, and the details are as follows.
For A1, the torso joints had an average error (AE) of 41.1 mm, the upper limb joints an AE of 90.9 mm, and the lower limb joints an AE of 45.2 mm, with standard deviations (STD) of 11.02, 44.89, and 22.0, respectively. In the case of A3, the AE of the torso was 40.8 mm, the upper limb 66.0 mm, and the lower limb 38.9 mm, with STD values of 13.1, 40.1, and 17.8, respectively. There was an improvement in joint position accuracy for the joints of the upper and lower limbs, and the STD of the error was reduced for the lower limb, indicating a smoothing effect. For A4, the torso had an AE of 41.1 mm, the upper limb 60.2 mm, and the lower limb 38.9 mm, with STDs of 12.0, 33.8, and 17.3, respectively. A4 also showed improved performance in the joints of the upper and lower limbs, as well as a smoothing effect through the reduced STD of the error for the upper limb joints. Finally, in the case of A5, the torso had an AE of 39.9 mm, the upper limb 55.9 mm, and the lower limb 36.2 mm, with error STDs of 10.3, 29.3, and 14.2, respectively. With A5, the positioning accuracy of all joints improved, and the standard deviation of the error was also reduced. Consequently, A5, proposed in this paper, had the highest accuracy among all comparison groups. The raw data of the algorithm improvement experiment are provided in Table A1, Table A2, Table A3 and Table A4 in Appendix A.

3.2. Result of Different Searching Areas of DBSCAN

The second analysis is a comparison of results according to the searching area of DBSCAN used in A3, A4, and A5; the comparison group consisted of 5, 10, 15, and 20 cm. As in the above analysis, the results for all gestures are divided into torso, upper limb, and lower limb, as shown in Figure 10. With a searching area of 5 cm, the torso joints had an AE of 40.6 mm, the upper limb joints an AE of 60.2 mm, and the lower limb joints an AE of 36.0 mm, with STDs of 13.5, 36.2, and 14.4, respectively. This indicates that a 5 cm searching area is too small: in particular, for joints of the fast-moving limbs, a sufficient number of inlier candidates cannot participate in the merging process. Furthermore, the point used for 1-frame smoothing prevents fast-moving inlier candidates from taking part in the merging process. When the searching area was set to 10 cm, the torso had an AE of 39.1 mm, the upper limb 51.6 mm, and the lower limb 35.7 mm, with error STD values of 9.6, 26.7, and 13.8, respectively. With a searching area of 15 cm, the torso had an AE of 39.9 mm, the upper limb 53.5 mm, and the lower limb 36.2 mm, with STDs of 9.0, 26.6, and 14.0, respectively. With a searching area of 20 cm, the torso had an AE of 39.9 mm, the upper limb 58.1 mm, and the lower limb 36.8 mm, with STDs of 9.0, 27.8, and 14.4, respectively. Setting the searching area to 15 or 20 cm is too large for noise filtering; as a result, the error increased because points corresponding to noise candidates participated in the merging process without being filtered. Overall, the 10 cm searching area performed the best out of all comparison groups. Although the standard deviation of the error with searching areas of 15 and 20 cm is smaller than with 10 cm, the difference is not statistically significant.

3.3. Result According to TNOS

Finally, a third analysis was performed to evaluate how the proposed algorithm improves the accuracy of the skeleton data by merging the skeletons obtained from multiple sensors. This analysis also used the data from the described gestures performed by all subjects. The comparison was carried out by adjusting the TNOS used for merging, and the results are depicted in Figure 11. The TNOS values used in the evaluation were 1, 2, 3, and 4. In the case of a single sensor, the raw data was used instead of the merging algorithm. The comparison components (AE, STD) were calculated as the average over all sensor combinations of the same size as the TNOS used for merging. In addition, based on the above experimental results, the DBSCAN searching area was fixed to 10 cm, which had the best performance.
For a single sensor, the number of combinations was 4; the torso joints had an AE of 63.3 mm, the upper limb joints 125.6 mm, and the lower limb joints 60.2 mm, with error STD values of 15.8, 64.2, and 28.6, respectively. When two sensors were used for merging, the number of combinations was 6, and the torso had an AE of 50.9 mm, the upper limb 103.5 mm, and the lower limb 46.8 mm, with error standard deviations of 13.6, 57.5, and 20.3, respectively. For three sensors, the number of combinations was 4, and the AE of the torso joints was 43.4 mm, the upper limb 66.3 mm, and the lower limb 38.3 mm, with error standard deviations of 12.2, 37.1, and 14.5, respectively. Lastly, in the case of four sensors with a single combination, the AE of the torso joints was 39.6 mm, the upper limb 51.8 mm, and the lower limb 35.8 mm, with error STD values of 9.8, 26.0, and 13.2, respectively. Consequently, the accuracy of the merged skeleton data increased with the TNOS used for merging. The results of the TNOS experiment are provided in Table A5 and Table A6 in Appendix B.

4. Discussion

In this study, we proposed a markerless skeleton tracking algorithm to track skeleton data accurately. The main strategy of the algorithm is to filter the noisy candidate joints caused by the self-occlusion problem during the merging process. The proposed algorithm was evaluated against the ground truth obtained with the best view sensor, which measures the subject from the front. The detailed analysis of the experimental results is as follows, based on the results with the 10 cm searching area, which had the best performance. The joints corresponding to the torso (pelvis, spine naval, neck, left hip, right hip, left shoulder, right shoulder, and head) could be measured in all gestures because of the low installation height of the sensors. Furthermore, because the positions of the torso joints did not change much while the subjects performed the gestures, the proposed algorithm outperformed A1 by less than 7 mm.
In the case of the upper limb joints, the proposed algorithm showed at least 15 mm better results than A1 in all gestures, with the squat gesture improving the most. Furthermore, compared to the elbow joint, the performance of the wrist joint improved by an average of 10 mm or more. In comparison to A1, for the elbow joint over all gestures, A3 improved by 20.1 mm, A4 by 23.4 mm, and A5 by 27.0 mm. In the case of the squat gesture, A3 improved by 34.3 mm, A4 by 39.3 mm, and A5 by 42.7 mm for the elbow joint. Moreover, for the wrist joint, there were improvements of 29.8 mm for A3, 45.4 mm for A4, and 51.6 mm for A5 compared with A1 over all gestures. As with the elbow, the largest improvement was observed for the squat gesture, with improvements of 44.0, 80.0, and 83.9 mm for the respective algorithms. The subjects frequently raised their upper limbs toward the sensor during the squatting gesture, resulting in self-occlusion problems at the elbow and wrist joints.
For the joints of the lower limb, the difference in performance improvement between the knee and ankle joints is small, within 7 mm, except for the lunge gesture. This is because, due to the characteristics of the gestures, the effect of occlusion on the lower limb was not large. Thus, we describe the results for the lunge gesture below. For the knee joint, the coordinate accuracy improved by 25.5 mm in A3, 27.7 mm in A4, and 27.6 mm in A5 compared with A1. For the ankle joint, the improvements were 28.4 mm for A3, 32.2 mm for A4, and 34.5 mm for A5. Consequently, while there was a large error in the merged skeleton produced by A1, skeleton data composed of precise joint positions could be obtained using the algorithm proposed in this paper.
Our results are comparable to the performance of existing studies on tracking accurate skeleton data by merging multiple inaccurate skeleton data sets. Existing skeleton merging algorithms were mainly developed on the basis of the Kinect V2, and the superiority of each algorithm was evaluated against the performance of a single Kinect. In [22], the authors reported an error of 87 mm over all joints for the T-pose and walking-around gestures. For the experiment, eight Kinect V2 sensors were used, and a marker-based motion capture system served as the ground truth. They mentioned that the skeleton recognized by the motion capture device and the skeleton of the Kinect SDK did not match perfectly, which hindered an accurate comparison. To overcome this, the marker closest to each skeleton joint among the markers attached to the subject's body was adopted as the ground truth, and there was an average distance difference of 100 mm between the skeleton joints and the marker positions. In [23], a merged skeleton was obtained using seven Kinect V2 sensors, and the skeleton data measured from the best view was adopted as the ground truth, as in this study. They reported an average error of 80.3 mm in experiments on standing, rotating, walking, roaming, and free motion gestures performed by seven subjects. The best view sensor was automatically selected among the capturing sensors using the factors of the PAJ approach proposed in that paper. In [24], the authors merged skeletons obtained from five Kinect V2 sensors into one skeleton. They adopted a marker-based motion capture system as the ground truth and evaluated the proposed merging algorithm on walking, spinning, sitting, running, kicking, punching, and limb-crossing gestures. As a result, the average errors over all joints were 97.1, 91.2, and 69.5 mm for a single Kinect (center Kinect), simple averaging, and the proposed merging method, respectively. They also reported that the motion capture skeleton and the Kinect skeleton did not match perfectly, as in [22], with an average difference of 55 mm between the Kinect skeleton and the motion capture system. In [26], the authors obtained a merged skeleton using three Kinect V2 sensors. They evaluated the proposed merging algorithm using a specific training protocol and non-standing postures, including standing, bending, squatting, lying, and crossing arms or legs, and also adopted the skeleton obtained from a marker-based motion capture system as the ground truth. As a result, they reported that the accuracy of the merged skeleton improved by 15.7% compared to a single Kinect, with an average error of less than 55 mm over all joints.
As with the results in this study, most studies have reported that the performance improvement in the limb joints is more pronounced than in the torso joints. Ref. [26] also reported that the limb joints showed a higher performance improvement than the torso. In [25], the authors obtained merged skeleton data of 20 people walking on a treadmill using five Kinect V2 sensors. They evaluated the merging algorithm using the STD of the difference between the subjects' pre-measured bone lengths and those of the merged skeleton; the resulting STD was 5.9 for the torso, 12.0 for the upper limb, and 21.8 for the lower limb. In [19], the authors proposed a skeleton merging algorithm for both Kinect V2 and Azure Kinect skeletons. Three sensors of each type were used, and a marker-based motion capture system was adopted as the ground truth. They evaluated the proposed algorithm on running, kicking, punching, crossing arms, crossing legs, crossing arms and legs, bowing from the waist, sitting on a chair, spinning, and walking around. For the Kinect V2, they reported an error of 46.2 mm for the torso joints, 105 mm for the upper limb, and 135.5 mm for the lower limb. For the Azure Kinect, an error of 31 mm for the torso, 59.5 mm for the upper limb, and 121.5 mm for the lower limb was measured. As mentioned before, the interference problem and the IR-reflective marker issue were also reported. To handle this, controlling the trigger between the motion capture system and the Azure Kinect was necessary, and miniature markers 2.5 mm in diameter were used to avoid the problem of depth information not being measured around the markers. However, despite these attempts, the problem that occurs when the motion capture system and the Azure Kinect run at the same time cannot be completely eliminated. In [21], the authors proposed a skeleton merging algorithm using four Kinect V2 sensors. Similar to this study, the skeleton measured by a manually selected best view sensor was adopted as the ground truth. The authors collected validation data for walking, walking and spinning, walking while moving the arms, walking and bending down, spinning the arms, and jumping jack gestures performed by six subjects (three for training; three for testing). The performance of the proposed algorithm was evaluated by comparing it with the results of a single sensor. The error for the single sensor was 128.6 mm for the torso, 187.2 mm for the upper limb, and 157.3 mm for the lower limb; the error for the merged skeleton was 86.0 mm for the torso, 94.5 mm for the upper limb, and 97.5 mm for the lower limb.
Analyzing the performance improvements of the components constituting the algorithm, the merged skeleton produced by A5 (Orientation Resetting and 1-frame Smoothing DBSCAN with Kalman Filter) was the most accurate, and the highest accuracy was obtained with a DBSCAN searching area of 10 cm. Similar to the results of other studies, the proposed algorithm improved the accuracy of the merged skeleton compared to a single sensor, and the accuracy improvement for the limb joints was greater than for the torso joints. In addition, our results were comparable or superior to the performances reported in existing studies on skeleton merging algorithms. In particular, the error of the proposed algorithm was relatively low compared to [21,23], which used the same best-view evaluation method as this study. Moreover, the accuracy of the merged skeleton increased as the TNOS increased. However, the evaluation performed here only measures the difference between the merged skeleton and the skeleton captured from the best view. Therefore, to evaluate performance over a wider range, a comparison with an interference-free measuring device that can capture actual human motion, such as IMU-based motion capture, is necessary.
Consequently, the proposed system can track the skeleton as accurately as a skeleton measured from in front of the subject. The proposed system can be utilized in behavioral monitoring research targeting human tracking, and it can also be used for a variety of interactive content such as games or education. On the other hand, in this study we focused on applying the algorithm to a single person only; therefore, an extension of this study is needed to apply the proposed algorithm to tracking multiple people. In the next study, we plan to implement a game-like interaction with a large number of participants. A study on the standards of gestures (actual motion shape and speed) that can be applied using the skeleton data will be conducted to define the possible interactions, and the proper installation of sensors to track multiple people will also be discussed.

5. Conclusions

The goal of this paper was to develop a markerless skeleton tracking system using multiple RGB-D sensors. We proposed an algorithm that merges multiple skeleton data sets, whose joint positions contain errors due to self-occlusion, into a single accurate skeleton. The main issue with this approach was how to filter out the noisy candidates reported by the individual RGB-D sensors during the merging process. To address this issue, we used the DBSCAN clustering algorithm and proposed additional techniques to increase the weight of inlier candidates participating in the merging. To evaluate the proposed algorithm, we conducted an experiment capturing six gestures performed by six subjects using four RGB-D sensors for tracking and a single best-view sensor for the ground truth. In the analysis, the proposed algorithm showed the most accurate performance and achieved relatively lower errors than related studies, with the highest accuracy obtained for a 10 cm DBSCAN searching area. Consequently, using the algorithm proposed in this study, it was possible to acquire skeleton data as accurate as that measured from directly in front of the subject.
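The best-performing variant A5 combines the DBSCAN-based merging with Kalman filtering of the merged joint trajectories. The sketch below shows one common way such a smoothing step could be realized, a constant-velocity Kalman filter applied independently to each joint; the state model, frame rate, and noise parameters are illustrative assumptions and are not values taken from this paper.
```python
# Hedged sketch of post-merge smoothing: constant-velocity Kalman filter per joint.
import numpy as np

def kalman_smooth(positions, dt=1 / 30.0, q=1e-3, r=1e-2):
    """positions: (T, 3) merged joint positions; returns the filtered (T, 3) track."""
    positions = np.asarray(positions, dtype=float)
    F = np.block([[np.eye(3), dt * np.eye(3)],          # state transition [pos, vel]
                  [np.zeros((3, 3)), np.eye(3)]])
    H = np.hstack([np.eye(3), np.zeros((3, 3))])        # only position is observed
    Q = q * np.eye(6)                                    # process noise (assumed)
    R = r * np.eye(3)                                    # measurement noise (assumed)
    x = np.concatenate([positions[0], np.zeros(3)])      # initial state
    P = np.eye(6)
    out = np.empty_like(positions)
    for t, z in enumerate(positions):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the merged measurement z
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (z - H @ x)
        P = (np.eye(6) - K @ H) @ P
        out[t] = x[:3]
    return out
```
A larger process noise q lets the filter follow fast motion more closely at the cost of less smoothing of the joint tremble.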

Author Contributions

S.-h.L. designed the algorithm, performed the experimental work, and wrote the manuscript. D.-W.L. organized the experimental setup. K.J. performed the experiment and organized the manuscript contents. W.L. edited the manuscript and analyzed the experimental results. Corresponding author: M.S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Trade, Industry and Energy of Korea under grant 20003762 and by a GIST Research Project grant funded by GIST in 2022.

Institutional Review Board Statement

Ethical review and approval were waived for this study because the data used were collected in a laboratory setting and do not contain any information about the subjects' identities.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. RMSE result of the algorithm development experiment for gestures 1, 2, and 3.
Gesture 1, RMSE: Joint | A1 | A3_5 | A3_10 | A3_15 | A3_20 | A4_5 | A4_10 | A4_15 | A4_20 | A5_5 | A5_10 | A5_15 | A5_20
Pelvis30.4031.0931.0931.0931.0932.2730.4030.4030.4032.0530.2830.2830.28
Spine Naval45.3247.7247.7247.7247.7248.0845.3245.3245.3247.9145.1545.1645.16
Neck39.1335.3435.3435.3435.3432.6639.1639.1439.1430.8438.6938.6838.68
L Shoulder39.1133.5833.5833.5833.5831.2736.5539.1139.1130.1035.4838.4838.48
L Elbow53.9240.5840.5840.5840.5839.4639.1540.9443.7537.0937.8439.6642.59
L Wrist80.8665.0265.0265.0265.0265.1259.0657.9859.9656.8554.2555.9557.83
R Shoulder33.8236.6736.6736.6736.6738.4432.8833.9033.8637.3531.8033.1133.05
R Elbow57.4241.2241.2241.2241.2240.3534.6035.3839.3736.4933.2933.5337.51
R Wrist102.7672.6472.6472.6472.6474.8459.5758.6457.8368.6854.3356.3456.33
L Hip33.1736.3636.3636.3636.3638.8933.1733.1733.1738.4333.0033.0033.00
L Knee24.8024.1324.1324.1324.1324.1624.7724.7924.8023.7424.3424.3724.38
L Ankle31.5131.6731.6731.6731.6732.1831.0231.3031.5131.4930.3830.6130.84
R Hip29.3631.7931.7931.7931.7931.5729.3529.3629.3631.2729.1729.1729.17
R Knee22.8322.9122.9122.9122.9123.1622.8222.8222.8322.8522.5122.5122.51
R Ankle29.4428.2628.2628.2628.2628.5328.7728.9329.4827.8828.1828.1628.86
Head38.3234.4234.4234.4234.4232.1038.2838.3238.3230.4937.5337.6237.61
G2Pelvis35.3235.8735.8735.8735.8736.7435.2935.3235.3232.9332.7132.7532.75
Spine Naval48.8046.3146.3146.3146.3156.1248.5148.8048.8050.6146.6146.9246.92
Neck50.7549.6849.6849.6849.6851.8850.3350.7650.7647.6448.0648.5248.52
L Shoulder49.5845.9445.9445.9445.9447.8744.8949.4749.5742.7240.7846.2246.34
L Elbow73.6256.6856.6856.6856.6857.1053.4056.6564.3147.9945.5950.1159.17
L Wrist114.9991.8591.8591.8591.8591.5782.0980.1685.4983.1369.2969.0775.93
R Shoulder46.7449.7149.7149.7149.7154.0245.1046.7346.7448.2341.0743.3343.33
R Elbow72.4360.6360.6360.6360.6367.3659.0955.6960.3356.1748.7149.0754.37
R Wrist121.5995.2295.2295.2295.2298.5786.1381.2878.7395.0773.3070.1869.41
L Hip37.9337.8737.8737.8737.8739.9537.8137.9337.9334.9434.9235.0835.08
L Knee44.4344.8744.8744.8744.8745.9444.3144.3744.4240.2439.3239.3739.41
L Ankle43.3343.2943.2943.2943.2943.6742.9442.9943.1238.2137.8337.8937.99
R Hip35.1635.9135.9135.9135.9137.7735.0935.1735.1633.4632.0232.1832.17
R Knee40.9441.4341.4341.4341.4342.7440.8240.8240.9336.4535.4135.4335.49
R Ankle40.8440.1840.1840.1840.1840.4439.8539.9640.1034.6734.4734.5934.68
Head52.5052.7252.7252.7252.7255.2852.0952.4852.5049.9249.4349.9149.94
G3Pelvis39.0739.3839.3839.3839.3839.6239.0739.0739.0738.3937.8637.8637.86
Spine Naval49.0149.0649.0649.0649.0650.0149.0049.0149.0149.7648.2248.2348.23
Neck45.7947.4247.4247.4247.4248.6945.7345.7545.7945.1644.7144.7344.77
L Shoulder44.6843.0243.0243.0243.0242.6542.3744.6544.6641.4340.3343.0743.08
L Elbow84.4347.2647.2647.2647.2644.0343.4350.8258.8141.3440.9948.0856.36
L Wrist145.80104.66104.66104.66104.6690.8770.5475.0382.3292.9466.6272.4379.77
R Shoulder36.6342.4142.4142.4142.4145.1735.5336.5536.6242.9533.4234.7234.77
R Elbow78.1546.7246.7246.7246.7243.4940.4544.9351.0438.7336.1539.0545.78
R Wrist145.4698.5998.5998.5998.5997.6860.6662.4569.23104.8756.7558.7964.79
L Hip40.8241.2741.2741.2741.2742.9240.8240.8240.8241.7339.5239.5239.52
L Knee44.1439.6039.6039.6039.6039.5839.9140.7941.5238.0737.9638.7639.32
L Ankle45.0741.7841.7841.7841.7842.0741.2141.5542.2239.0538.2238.4138.78
R Hip38.6538.9238.9238.9238.9239.7038.6538.6538.6538.3337.2537.2537.25
R Knee31.6331.2731.2731.2731.2731.6931.5431.5531.5929.3529.1429.1529.18
R Ankle33.9933.0733.0733.0733.0733.2033.5733.7033.8029.8930.3630.4530.53
Head45.8949.0149.0149.0149.0150.9245.8245.8145.8847.3144.5944.5844.64
Table A2. RMSE result of the algorithm development experiment for gestures 4, 5, and 6.
Gesture 4, RMSE: Joint | A1 | A3_5 | A3_10 | A3_15 | A3_20 | A4_5 | A4_10 | A4_15 | A4_20 | A5_5 | A5_10 | A5_15 | A5_20
Pelvis33.9435.2735.2735.2735.2737.4934.0133.9433.9436.9333.4633.3733.37
Spine Naval46.0247.8147.8147.8147.8148.8245.6646.0246.0249.9545.0945.4745.47
Neck41.9938.1038.1038.1038.1039.0441.7541.9941.9939.7840.7240.9840.98
L Shoulder44.5039.8239.8239.8239.8242.2739.6943.8444.2941.5338.1242.3342.75
L Elbow74.1851.8551.8551.8551.8550.6445.9948.1854.0147.8843.9947.0353.00
L Wrist109.4481.0481.0481.0481.0475.8865.3463.6567.1669.5259.3561.8766.62
R Shoulder38.4539.9639.9639.9639.9643.6637.2938.0838.2742.6535.4836.3236.50
R Elbow64.2448.1748.1748.1748.1749.8243.1744.1347.3443.4740.9241.2644.08
R Wrist111.1578.4778.4778.4778.4777.6062.4460.0963.1176.9656.1256.9760.64
L Hip37.4838.7538.7538.7538.7541.6736.9237.4737.4840.5636.1836.6936.69
L Knee66.8142.9942.9942.9942.9943.3440.9142.2544.5442.6540.5441.5043.36
L Ankle76.4750.9450.9450.9450.9448.7847.7649.6051.2145.3345.3247.2648.59
R Hip33.0634.3734.3734.3734.3737.0933.4933.0533.0636.6232.8232.2832.28
R Knee67.2440.0240.0240.0240.0240.7337.7338.7540.9639.7238.2338.7140.64
R Ankle85.2153.9053.9053.9053.9051.4849.3850.6552.7246.9347.3549.1950.37
Head42.4038.2938.2938.2938.2939.3042.0242.3942.4039.2340.7141.0741.08
G5Pelvis32.7334.3434.3434.3434.3434.7332.7332.7332.7332.7031.8431.8431.84
Spine Naval45.1644.2844.2844.2844.2845.6044.9945.1645.1649.0544.6144.7844.78
Neck51.9649.3749.3749.3749.3749.5651.9651.9651.9649.2351.4451.3951.39
L Shoulder48.1444.4944.4944.4944.4943.5544.0048.1548.1444.1642.9646.6846.67
L Elbow80.2852.7552.7552.7552.7553.2754.5661.4470.2248.2454.0357.1066.28
L Wrist121.8888.1688.1688.1688.1685.4176.1276.5382.8480.9774.7976.5579.22
R Shoulder42.7249.0849.0849.0849.0851.7743.0042.7242.7251.6940.9741.1341.13
R Elbow65.9846.8846.8846.8846.8848.9844.2346.6652.3244.3241.3341.9647.58
R Wrist120.2888.2988.2988.2988.2982.3371.0072.4476.6277.4368.4870.1372.80
L Hip35.7037.4337.4337.4337.4339.3435.7135.7035.7037.9534.4734.4734.47
L Knee54.2146.7146.7146.7146.7147.3746.6047.4348.2542.6342.4943.2343.79
L Ankle57.2152.5152.5152.5152.5152.5251.5351.8753.0247.5546.9247.1548.12
R Hip33.1631.7331.7331.7331.7331.1632.9633.1633.1628.7931.6331.8631.86
R Knee54.2545.2245.2245.2245.2247.2544.9045.9447.4242.0741.6142.4443.65
R Ankle52.1046.1546.1546.1546.1546.1544.9545.9548.5041.0940.8241.6443.96
Head55.3453.6053.6053.6053.6054.8556.1955.3455.3456.6055.4954.5954.59
G6Pelvis34.9936.7736.7736.7736.7737.1734.9934.9934.9937.4834.4234.4234.42
Spine Naval46.5447.0747.0747.0747.0745.3046.4146.5446.5444.7746.0346.1846.18
Neck42.7136.5236.5236.5236.5233.1942.6742.7142.7134.1442.0542.1142.10
L Shoulder44.5039.4739.4739.4739.4738.8040.1944.3744.3938.3638.3242.9442.97
L Elbow61.3643.7043.7043.7043.7043.3442.2846.0551.6639.4938.5842.1548.13
L Wrist90.3367.8267.8267.8267.8265.0759.6261.4665.4857.2154.0855.8259.87
R Shoulder41.4842.5442.5442.5442.5444.2940.5041.4141.4042.7038.7039.8639.86
R Elbow59.4047.0747.0747.0747.0745.6243.3244.3948.4441.2439.5740.1544.02
R Wrist93.9869.0969.0969.0969.0966.9360.7257.6559.3360.7750.9152.0953.72
L Hip38.4539.6539.6539.6539.6541.1637.5538.4538.4541.1436.6537.6037.60
L Knee33.5132.0032.0032.0032.0032.1833.0233.1933.3930.6331.2631.4131.58
L Ankle38.3936.2736.2736.2736.2736.4236.8437.3637.5533.9034.5035.0135.21
R Hip35.5336.0536.0536.0536.0536.2335.4335.5535.5435.8034.5634.7134.69
R Knee30.3730.3530.3530.3530.3530.6729.9129.9730.0829.1328.4528.4928.57
R Ankle37.4335.0935.0935.0935.0934.9935.5035.8236.0632.5833.1333.4433.60
Head42.7536.6836.6836.6836.6835.2642.7342.7542.7435.1541.8841.9041.90
Table A3. STD result of the algorithm development experiment for gestures 1, 2, and 3.
Gesture 1, STD: Joint | A1 | A3_5 | A3_10 | A3_15 | A3_20 | A4_5 | A4_10 | A4_15 | A4_20 | A5_5 | A5_10 | A5_15 | A5_20
Pelvis3.454.454.454.454.454.433.453.453.454.083.173.173.18
Spine Naval6.1011.2411.2411.2411.2413.816.106.096.0912.525.755.745.74
Neck9.0612.7612.7612.7612.7613.849.076.099.0712.238.438.448.43
L Shoulder7.7810.7110.7110.7110.7110.7710.367.777.779.579.106.936.94
L Elbow23.1518.9218.9218.9218.9217.3814.3814.8715.7014.7212.9713.3914.20
L Wrist42.4937.6037.6037.6037.6036.6630.2428.1528.9430.6926.1826.4927.01
R Shoulder8.8912.8512.8512.8512.8513.658.938.948.9212.347.858.128.10
R Elbow28.6822.1922.1922.1922.1920.0514.5315.4518.1016.4412.9813.8816.60
R Wrist52.6645.3745.3745.3745.3748.0633.3032.0329.9544.7128.5928.8827.70
L Hip3.794.854.854.854.855.263.793.793.794.773.403.403.40
L Knee5.365.135.135.135.135.155.325.355.364.464.604.634.64
L Ankle7.036.856.856.856.856.926.766.937.035.915.735.845.93
R Hip3.985.695.695.695.696.823.993.983.986.123.593.573.57
R Knee4.854.914.914.914.914.964.824.834.854.374.244.244.24
R Ankle7.938.138.138.138.138.057.858.007.977.036.846.926.86
Head10.7013.1613.1613.1613.1613.6610.8410.7110.7012.3710.119.989.98
G2Pelvis12.7914.4114.4114.4114.4115.5412.7812.7912.7911.449.949.979.97
Spine Naval10.8615.2415.2415.2415.2426.7110.5810.8510.8622.018.038.308.31
Neck12.8016.4616.4616.4616.4623.7912.5912.7912.7919.399.8110.0610.06
L Shoulder14.7418.4418.4418.4418.4421.3518.7914.6814.7415.2214.1410.4510.55
L Elbow33.1633.6133.6133.6133.6133.9230.2130.3629.3925.8022.4923.5023.88
L Wrist51.1554.6854.6854.6854.6857.6548.9445.4646.7351.7237.4034.7437.18
R Shoulder16.4817.8917.8917.8917.8921.0617.6116.4816.4915.4213.3112.5012.51
R Elbow36.5538.3938.3938.3938.3945.2437.6333.7232.6835.3627.5627.5227.06
R Wrist55.1060.5160.5160.5160.5167.0156.8151.0746.6966.0245.8941.3637.84
L Hip13.0415.3715.3715.3715.3717.0813.0713.0413.0412.089.789.789.77
L Knee20.1121.2821.2821.2821.2822.4320.0220.0520.1017.0515.4315.4515.50
L Ankle20.1920.2420.2420.2420.2420.5219.8819.9319.9815.3614.9715.0515.03
R Hip14.2515.5715.5715.5715.5716.4014.3214.2614.2511.9010.9310.8910.88
R Knee20.8521.3921.3921.3921.3922.6220.7720.7620.8316.6115.6815.6915.74
R Ankle21.2820.8020.8020.8020.8020.8920.3120.3220.4315.4415.1715.18215.27
Head14.5717.6517.6517.6517.6523.1314.5614.5514.5718.8111.5711.6711.71
G3Pelvis11.4012.2312.2312.2312.2312.4811.4011.4011.4010.759.769.769.76
Spine Naval10.4613.4213.4213.4213.4217.9310.4610.4610.4618.119.229.239.23
Neck12.2013.9413.9413.9413.9416.9112.1212.1512.2013.8511.0211.0511.10
L Shoulder12.9114.5014.5014.5014.5015.0115.3012.8812.9012.6512.6210.4610.49
L Elbow41.1026.4726.4726.4726.4722.5121.6125.2128.3318.3417.5620.7223.43
L Wrist71.5565.3865.3865.3865.3857.8437.1737.7939.9058.3131.5033.6435.58
R Shoulder12.5915.1915.1915.1915.1915.7112.3512.5012.5913.8610.0010.3710.44
R Elbow41.4629.3329.3329.3329.3324.9421.6625.7029.4119.2616.1919.1423.20
R Wrist70.9366.7466.7466.7466.7472.5135.8335.1239.9478.4931.3730.6034.25
L Hip11.3612.2412.2412.2412.2412.4911.3511.3611.3610.809.459.469.46
L Knee19.7716.0616.0616.0616.0615.7415.3916.1816.9413.1312.5413.3013.87
L Ankle20.1517.0017.0017.0017.0017.2216.7016.9017.4813.4713.0013.0813.27
R Hip12.6313.8313.8313.8313.8314.0312.6312.6312.6312.0610.7310.7310.73
R Knee14.6414.6914.6914.6914.6915.1414.5914.5914.6112.4511.9211.9111.91
R Ankle16.1915.5015.5015.5015.5015.6815.8215.8815.9511.8812.1312.1412.19
Head13.5315.4315.4315.4315.4317.7313.4313.4413.5215.0312.2912.3012.38
Table A4. STD result of the algorithm development experiment for gestures 4, 5, and 6.
Table A4. STD result of the algorithm development experiment for gestures 4, 5, and 6.
Gesture 4, STD: Joint | A1 | A3_5 | A3_10 | A3_15 | A3_20 | A4_5 | A4_10 | A4_15 | A4_20 | A5_5 | A5_10 | A5_15 | A5_20
Pelvis10.4112.5512.5512.5512.5514.5510.5510.4110.4113.929.979.819.81
Spine Naval10.0416.5216.5216.5216.5222.9810.4310.0410.0423.549.859.439.43
Neck12.4515.1715.1715.1715.1718.2712.5512.4512.4518.0311.5211.3911.39
L Shoulder14.4517.3017.3017.3017.3020.2315.9713.8014.2019.3714.0712.0012.34
L Elbow38.0929.8029.8029.8029.8027.8922.4023.2326.3225.7218.7220.7123.97
L Wrist61.1653.2253.2253.2253.2250.0139.6536.5938.0545.1333.7834.2836.44
R Shoulder14.4416.3616.3616.3616.3618.0714.4614.1014.2817.1612.8212.3212.53
R Elbow34.6428.3328.3328.3328.3328.8723.1123.7625.2322.2419.4120.1121.41
R Wrist61.4452.0852.0852.0852.0854.2638.8735.2437.1255.4932.8131.1533.61
L Hip11.2113.8613.8613.8613.8615.9711.4211.2111.2115.6210.5210.2710.27
L Knee35.5119.9619.9619.9619.9620.0017.0717.7519.4719.2816.6016.8918.08
L Ankle40.7725.1925.1925.1925.1923.0921.2822.1222.8819.7819.0120.0520.42
R Hip11.5013.2813.2813.2813.2815.0111.9811.5011.5014.1311.3410.6710.67
R Knee38.0418.9418.9418.9418.9419.0616.2716.8718.5517.5616.0116.2517.72
R Ankle44.7928.2228.2228.2228.2226.1323.2023.6425.3321.6421.0922.0522.83
Head13.9815.6015.6015.6015.6017.8713.9813.9813.9817.4612.7812.7412.74
G5Pelvis8.029.849.849.849.8410.118.028.028.027.976.886.886.88
Spine Naval6.0010.4610.4610.4610.4616.436.156.006.0017.355.145.045.04
Neck8.9511.8311.8311.8311.8316.229.318.958.9516.618.017.797.79
L Shoulder13.0314.5914.5914.5914.5914.6918.7913.0313.0313.2515.5210.4610.46
L Elbow29.6726.7826.7826.7826.7826.1026.2529.8030.0020.8823.8323.3124.08
L Wrist47.3648.2848.2848.2848.2847.8939.9636.9639.4645.5136.0735.1134.08
R Shoulder12.1714.4614.4614.4614.4613.8815.3412.1912.1713.1012.339.519.50
R Elbow30.2525.1025.1025.1025.1025.7721.8423.1425.6221.7418.7418.5220.89
R Wrist50.6748.1448.1448.1448.1444.4435.4634.9036.8342.5932.8032.0432.69
L Hip10.2712.7012.7012.7012.7012.8610.2610.2710.2710.248.528.528.52
L Knee22.6220.7820.7820.7820.7821.2219.8719.8320.4316.8516.2816.1916.45
L Ankle23.2921.5021.5021.5021.5021.5220.7520.9121.4317.0916.6816.7617.10
R Hip10.1810.6010.6010.6010.6010.2910.1010.1810.188.258.228.308.30
R Knee26.1922.0522.0522.0522.0523.7821.2621.8522.8618.9418.1318.6719.61
R Ankle22.7320.4720.4720.4720.4720.5419.0619.5521.0215.9715.3715.6817.02
Head10.8813.1713.1713.1713.1716.5011.8710.8810.8817.7910.279.519.51
G6Pelvis9.449.709.709.709.709.767.737.737.737.826.606.606.60
Spine Naval7.5810.2110.2110.2110.2116.766.225.935.9316.055.124.954.95
Neck9.9411.6311.6311.6311.6315.418.538.468.4614.677.287.327.32
L Shoulder15.4215.7415.7415.7415.7415.7619.4314.0714.0713.7115.6510.8710.87
L Elbow32.0928.1928.1928.1928.1927.3628.0730.1730.0121.0323.6222.8322.76
L Wrist52.0247.3747.3747.3747.3747.4840.2538.6040.9443.7035.3234.6433.91
R Shoulder14.6715.1115.1115.1115.1114.7315.8113.1413.1413.8112.049.829.82
R Elbow35.0227.1127.1127.1127.1128.7824.6725.9227.4523.4720.2920.0821.65
R Wrist57.0649.8249.8249.8249.8246.9938.6937.4638.6041.8034.9633.8333.92
L Hip11.0911.8711.8711.8711.8712.399.449.449.449.597.517.517.51
L Knee23.7019.4319.4319.4319.4319.7618.6618.6818.9215.4014.7514.7614.92
L Ankle23.6220.4320.4320.4320.4320.6320.3420.4720.7016.0315.9216.0316.14
R Hip11.0910.7010.7010.7010.7010.539.399.379.378.267.377.337.33
R Knee26.3220.1720.1720.1720.1721.5619.2219.7920.7616.8315.6516.1516.96
R Ankle22.2819.2319.2319.2319.2319.2218.0018.5719.7715.0314.3714.6415.82
Head11.6412.7712.7712.7712.7715.8410.4610.2710.2716.328.958.918.91

Appendix B

Table A5. RMSE result of the TNOS experiment.
Joint | Gesture 1 (TNOS = 1, 2, 3, 4) | Gesture 2 (TNOS = 1, 2, 3, 4) | Gesture 3 (TNOS = 1, 2, 3, 4)
Pelvis43.8034.6831.5730.2952.0539.3635.0732.7153.1245.3742.5837.86
Spine Naval64.6751.1046.8145.1574.0258.4851.4146.6171.3358.8153.3548.22
Neck59.5345.9240.6838.6973.4358.8251.6148.0668.2254.3149.5544.71
L Shoulder64.2248.4238.5535.4978.3061.0946.2240.7872.9555.9044.7240.33
L Elbow72.4356.6440.5237.8498.8375.8551.7145.59112.6885.5856.3740.99
L Wrist106.3193.7763.8254.24147.66125.1483.7869.29188.05166.06113.3966.62
R Shoulder60.3243.4835.6531.8075.0555.9946.3841.0766.4948.1439.5733.42
R Elbow92.2673.0841.7433.28110.9487.9057.3848.71115.9582.4653.6336.15
R Wrist142.59125.1774.5054.34168.38143.9391.3173.30194.83168.88109.6356.75
L Hip51.2239.5135.0233.0056.2243.3037.8234.9256.6848.1544.5539.52
L Knee33.5227.6025.4224.3451.0443.4840.7339.3255.7744.9342.2137.96
L Ankle44.4435.7432.2130.3753.5441.9439.2537.8357.2443.3841.0438.22
R Hip47.0635.1231.1129.1854.9741.3035.5432.0254.9946.0042.4737.25
R Knee33.8326.7424.0622.5049.7540.3737.0935.4144.6536.3733.6929.14
R Ankle42.5334.9029.8228.1751.8839.2035.9834.4742.8835.1832.6730.36
Head60.0046.0239.8437.5374.9660.2853.1749.4369.0854.4749.7844.59
Joint | Gesture 4 (TNOS = 1, 2, 3, 4) | Gesture 5 (TNOS = 1, 2, 3, 4) | Gesture 6 (TNOS = 1, 2, 3, 4)
Pelvis53.7444.7438.6135.8950.1238.7634.6632.8851.0039.8836.2434.42
Spine Naval72.3360.7551.2548.7670.5359.3549.6044.8368.0755.9649.1346.03
Neck66.7757.9046.8343.3773.9565.2455.7251.7362.7950.6744.2642.05
L Shoulder72.2760.8046.9539.5276.9569.5951.9142.5170.4155.4843.2738.32
L Elbow94.8373.0749.0043.41109.30102.7766.2553.3584.4463.1242.8838.58
L Wrist139.50108.0971.7459.21165.96142.3387.5473.26121.9199.0362.0354.08
R Shoulder68.8355.4444.8838.3772.6957.3644.6640.5366.6652.1443.0938.70
R Elbow99.5776.5950.4342.35104.3199.6655.3143.1792.5969.2344.7739.57
R Wrist148.02118.8672.6156.74167.31146.1389.9671.84136.60102.1562.2250.91
L Hip59.6450.5541.9238.2354.1642.4737.3835.3058.0845.3339.1036.65
L Knee98.8768.8746.5343.1460.3046.4742.5040.9343.9035.2432.4531.26
L Ankle121.1275.0050.2747.0661.9149.2346.5645.4049.0539.0035.9434.50
R Hip56.3846.6739.5335.3253.0440.1034.5432.0254.4943.0137.4634.56
R Knee101.7990.0047.8040.7763.0353.7342.6139.4943.0932.9930.0428.45
R Ankle131.88100.4256.1848.8060.9946.4841.8540.1548.9537.9134.6133.13
Head67.0858.7947.1542.7776.8767.5658.8755.1863.2650.7944.3241.88
Table A6. STD result of the TNOS experiment.
Joint | Gesture 1 (TNOS = 1, 2, 3, 4) | Gesture 2 (TNOS = 1, 2, 3, 4) | Gesture 3 (TNOS = 1, 2, 3, 4)
Pelvis4.703.813.433.1715.2911.4210.819.9415.5013.4512.939.76
Spine Naval8.846.806.555.7414.0812.8212.618.0315.4113.7412.969.22
Neck13.179.709.088.4318.4614.5113.139.8120.0115.8414.7211.02
L Shoulder16.1110.7710.039.0920.7219.5317.7414.1421.4417.9715.5212.62
L Elbow31.4426.0114.8512.9643.0842.0427.5922.4953.6247.7328.4617.56
L Wrist55.2552.6634.2126.1870.8267.9248.9037.4097.5896.1266.6331.50
R Shoulder16.3211.469.637.8521.1318.3616.9513.3119.6816.1213.7310.00
R Elbow42.6538.4319.2712.9850.9951.7733.9027.5653.1546.1028.7716.19
R Wrist73.1071.7845.4328.5980.4477.8958.6045.8995.3996.9169.1131.37
L Hip6.154.343.933.4016.5112.5411.189.7816.2413.2612.609.45
L Knee6.555.344.774.6020.3117.1316.1515.4324.3915.9714.5712.54
L Ankle8.706.776.025.7322.9016.4215.5714.9724.6614.5413.6513.00
R Hip5.784.203.903.5916.5412.6811.9610.9317.2614.4313.7810.73
R Knee5.894.794.454.2420.7217.7216.3915.6817.4914.7313.9311.92
R Ankle9.887.857.066.8424.6517.2515.8115.1717.6313.7713.0612.13
Head15.7311.6810.8410.1021.0716.3114.9511.5721.9717.4316.1412.29
Joint | Gesture 4 (TNOS = 1, 2, 3, 4) | Gesture 5 (TNOS = 1, 2, 3, 4) | Gesture 6 (TNOS = 1, 2, 3, 4)
Pelvis16.4314.4812.0810.219.717.006.846.6012.178.578.327.59
Spine Naval17.8719.2616.6710.8210.747.998.705.1211.299.209.986.92
Neck18.8320.0915.8411.6912.8912.1311.027.2814.7912.6111.369.45
L Shoulder24.0224.5820.3515.0315.5218.3818.9715.6519.2119.3816.3013.70
L Elbow52.6941.6124.1719.7155.0852.9133.3323.6243.5136.5622.3719.31
L Wrist85.2165.0743.4333.2493.6271.6647.0635.3269.9260.1035.5528.42
R Shoulder22.1420.9817.5413.5216.6016.4313.9312.0420.1716.3015.3513.19
R Elbow52.0344.3626.5020.5950.3153.3829.8120.2946.9838.4822.6318.98
R Wrist83.3175.9645.9933.1486.5967.0448.6434.9674.6459.4936.4327.26
L Hip19.0517.8113.7011.3611.048.277.647.5113.7910.9110.389.22
L Knee64.2939.2820.6217.8827.8017.1415.4114.7515.7511.1210.309.69
L Ankle80.8340.3322.5320.0127.0217.5416.4515.9219.0814.4013.1512.73
R Hip18.5517.3914.5711.8311.608.577.947.3714.7111.6710.619.71
R Knee65.0960.9723.3317.5331.0925.0517.9115.6516.1611.4910.5610.13
R Ankle87.8464.8028.8523.0528.5618.0215.3614.3721.0715.3213.6713.38
Head20.6021.8516.8912.7014.9813.1612.248.9517.3614.5812.9510.88

References

  1. Ma, M.; Proffitt, R.; Skubic, M. Validation of a Kinect V2 based rehabilitation game. PLoS ONE 2018, 13, e0202338. [Google Scholar] [CrossRef]
  2. Taha, A.; Zayed, H.H.; Khalifa, M.; El-Horbaty, E.-S.M. Skeleton-based human activity recognition for video surveillance. Int. J. Sci. Eng. Res. 2015, 6, 993–1004. [Google Scholar] [CrossRef]
  3. Varshney, N.; Bakariya, B.; Kushwaha, A.K.S.; Khare, M. Rule-based multi-view human activity recognition system in real time using skeleton data from RGB-D sensor. Soft Comput. 2021, 241. [Google Scholar] [CrossRef]
  4. Cippitelli, E.; Gasparrini, S.; Gambi, E.; Spinsante, S. A human activity recognition system using skeleton data from RGBD sensors. Comput. Intell. Neurosci. 2016, 2016, 4351435. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Bari, A.H.; Gavrilova, M.L. Multi-layer perceptron architecture for kinect-based gait recognition. In Computer Graphics International Conference; Springer: Berlin/Heidelberg, Germany, 2019; pp. 356–363. [Google Scholar]
  6. Yao, A.; Gall, J.; Fanelli, G.; Van Gool, L. Does human action recognition benefit from pose estimation? In Proceedings of the 22nd British Machine Vision Conference (BMVC 2011), Dundee, Scotland, 29 August–2 September 2011; BMV Press: Columbus, OH, USA, 2011. [Google Scholar]
  7. Schlagenhauf, F.; Sreeram, S.; Singhose, W. Comparison of kinect and vicon motion capture of upper-body joint angle tracking. In Proceedings of the 2018 IEEE 14th International Conference on Control and Automation (ICCA), Anchorage, AK, USA, 12–15 June 2018; IEEE: New York, NY, USA, 2018; pp. 674–679. [Google Scholar]
  8. Shaikh, M.B.; Chai, D. RGB-D Data-based Action Recognition: A Review. Sensors 2021, 21, 4246. [Google Scholar] [CrossRef]
  9. Wang, P.; Li, W.; Ogunbona, P.; Wan, J.; Escalera, S. RGB-D-based human motion recognition with deep learning: A survey. Comput. Vis. Image Underst. 2018, 171, 118–139. [Google Scholar] [CrossRef] [Green Version]
  10. Liu, B.; Cai, H.; Ju, Z.; Liu, H. RGB-D sensing based human action and interaction analysis: A survey. Pattern Recognit. 2019, 94, 1–12. [Google Scholar] [CrossRef]
  11. Tölgyessy, M.; Dekan, M.; Chovanec, Ľ.; Hubinský, P. Evaluation of the azure Kinect and its comparison to Kinect V1 and Kinect V2. Sensors 2021, 21, 413. [Google Scholar] [CrossRef]
  12. Romeo, L.; Marani, R.; Malosio, M.; Perri, A.G.; D’Orazio, T. Performance analysis of body tracking with the microsoft azure Kinect. In Proceedings of the 2021 29th Mediterranean Conference on Control and Automation (MED), Puglia, Italy, 22–25 June 2021; IEEE: New York, NY, USA, 2021; pp. 572–577. [Google Scholar]
  13. Izadi, S.; Kim, D.; Hilliges, O.; Molyneaux, D.; Newcombe, R.; Kohli, P.; Shotton, J.; Hodges, S.; Freeman, D.; Davison, A. KinectFusion: Real-time 3D reconstruction and interaction using a moving depth camera. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, Santa Barbara, CA, USA, 16–19 October 2011; Association for Computing Machinery: New York, NY, USA, 2011; pp. 559–568. [Google Scholar]
  14. Tölgyessy, M.; Dekan, M.; Chovanec, Ľ. Skeleton Tracking Accuracy and Precision Evaluation of Kinect V1, Kinect V2, and the Azure Kinect. Appl. Sci. 2021, 11, 5756. [Google Scholar] [CrossRef]
  15. Aguileta, A.A.; Brena, R.F.; Mayora, O.; Molino-Minero-Re, E.; Trejo, L.A. Multi-sensor fusion for activity recognition—A survey. Sensors 2019, 19, 3808. [Google Scholar] [CrossRef] [Green Version]
  16. Gravina, R.; Alinia, P.; Ghasemzadeh, H.; Fortino, G. Multi-sensor fusion in body sensor networks: State-of-the-art and research challenges. Inf. Fusion 2017, 35, 68–80. [Google Scholar] [CrossRef]
  17. Yeung, L.-F.; Yang, Z.; Cheng, K.C.-C.; Du, D.; Tong, R.K.-Y. Effects of camera viewing angles on tracking kinematic gait patterns using Azure Kinect, Kinect v2 and Orbbec Astra Pro v2. Gait Posture 2021, 87, 19–26. [Google Scholar] [CrossRef]
  18. Kim, Y.; Baek, S.; Bae, B.C. Motion capture of the human body using multiple depth sensors. Etri J. 2017, 39, 181–190. [Google Scholar] [CrossRef]
  19. Colombel, J.; Daney, D.; Bonnet, V.; Charpillet, F. Markerless 3D Human Pose Tracking in the Wild with fusion of Multiple Depth Cameras: Comparative Experimental Study with Kinect 2 and 3. In Activity and Behavior Computing; Springer: Berlin/Heidelberg, Germany, 2021; pp. 119–134. [Google Scholar]
  20. Chen, N.; Chang, Y.; Liu, H.; Huang, L.; Zhang, H. Human pose recognition based on skeleton fusion from multiple kinects. In Proceedings of the 2018 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018; IEEE: New York, NY, USA, 2018; pp. 5228–5232. [Google Scholar]
  21. Núñez, J.C.; Cabido, R.; Montemayor, A.S.; Pantrigo, J.J. Real-time human body tracking based on data fusion from multiple RGB-D sensors. Multimed. Tools Appl. 2017, 76, 4249–4271. [Google Scholar] [CrossRef]
  22. Wu, Y.; Gao, L.; Hoermann, S.; Lindeman, R.W. Towards robust 3D skeleton tracking using data fusion from multiple depth sensors. In Proceedings of the 2018 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), Wurzburg, Germany, 5–7 September 2018; IEEE: New York, NY, USA, 2018; pp. 1–4. [Google Scholar]
  23. Desai, K.; Prabhakaran, B.; Raghuraman, S. Combining skeletal poses for 3D human model generation using multiple Kinects. In Proceedings of the 9th ACM Multimedia Systems Conference, Amsterdam, The Netherlands, 12–15 June 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 40–51. [Google Scholar]
  24. Moon, S.; Park, Y.; Ko, D.W.; Suh, I.H. Multiple kinect sensor fusion for human skeleton tracking using Kalman filtering. Int. J. Adv. Robot. Syst. 2016, 13, 65. [Google Scholar] [CrossRef] [Green Version]
  25. Zhang, H.; He, X.; Liu, Y. A Human Skeleton Data Optimization Algorithm for Multi-Kinect. In Proceedings of the 2020 Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC), Dalian, China, 14–16 April 2020; IEEE: New York, NY, USA, 2020; pp. 89–95. [Google Scholar]
  26. Ryselis, K.; Petkus, T.; Blažauskas, T.; Maskeliūnas, R.; Damaševičius, R. Multiple Kinect based system to monitor and analyze key performance indicators of physical training. Hum. Cent. Comput. Inf. Sci. 2020, 10, 51. [Google Scholar] [CrossRef]
  27. Swain, M.J.; Ballard, D.H. Indexing via color histograms. In Active Perception and Robot Vision; Springer: Berlin/Heidelberg, Germany, 1992; pp. 261–273. [Google Scholar]
  28. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  29. Gower, J.C.; Dijksterhuis, G.B. Procrustes Problems; Oxford University Press on Demand: Oxford, UK, 2004; Volume 30. [Google Scholar]
  30. Arun, K.S.; Huang, T.S.; Blostein, S.D. Least-squares fitting of two 3-D point sets. IEEE Trans. Pattern Anal. Mach. Intell. 1987, PAMI-9, 698–700. [Google Scholar] [CrossRef] [Green Version]
  31. Garrido-Jurado, S.; Munoz-Salinas, R.; Madrid-Cuevas, F.J.; Medina-Carnicer, R. Generation of fiducial marker dictionaries using mixed integer linear programming. Pattern Recognit. 2016, 51, 481–491. [Google Scholar] [CrossRef]
  32. Ester, M.; Kriegel, H.-P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In Kdd, 1996; AAAI: Palo Alto, CA, USA, 1996; pp. 226–231. [Google Scholar]
  33. Haller, E.; Scarlat, G.; Mocanu, I.; Trăscău, M. Human activity recognition based on multiple Kinects. In International Competition on Evaluating AAL Systems through Competitive Benchmarking; Springer: Berlin/Heidelberg, Germany, 2013; pp. 48–59. [Google Scholar]
  34. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef] [Green Version]
  35. Naeemabadi, M.; Dinesen, B.; Andersen, O.K.; Hansen, J. Influence of a marker-based motion capture system on the performance of Microsoft Kinect v2 skeleton algorithm. IEEE Sens. J. 2018, 19, 171–179. [Google Scholar] [CrossRef] [Green Version]
  36. Naeemabadi, M.; Dinesen, B.; Andersen, O.K.; Hansen, J. Investigating the impact of a motion capture system on Microsoft Kinect v2 recordings: A caution for using the technologies together. PLoS ONE 2018, 13, e0204052. [Google Scholar] [CrossRef] [PubMed]
Figure 1. (Left), Azure Kinect and (Right), body tracking SDK of Azure Kinect.
Figure 2. Proposed calibration procedure.
Figure 3. (Left), sphere object and (Right), plane marker.
Figure 4. Proposed skeleton merging algorithm.
Figure 5. Three different examples of a misorientation error situation (yellow sphere: the candidate positions of the right elbow; green: the candidate positions of the left elbow).
Figure 6. DBSCAN for joint merging.
Figure 7. Example of gestures. (a) Both hands up and down. (b) Jump. (c) Squat. (d) Lunge. (e) Walking. (f) Moving body in standing pose (random movement).
Figure 8. Multiple RGB-D sensors tracking system for evaluation.
Figure 9. Result of the algorithm experiment (RMSE, STD).
Figure 10. Result of the searching area for DBSCAN.
Figure 11. Result of number of sensors.
