Article

Joint Constraints Based Dynamic Calibration of IMU Position on Lower Limbs in IMU-MoCap

School of Information Engineering, East China Jiao Tong University, Shuanggang Dong Dajie 808, Nanchang 330013, China
* Author to whom correspondence should be addressed.
Sensors 2021, 21(21), 7161; https://doi.org/10.3390/s21217161
Submission received: 19 September 2021 / Revised: 21 October 2021 / Accepted: 25 October 2021 / Published: 28 October 2021
(This article belongs to the Special Issue Feedback-Based Balance, Gait Assistive and Rehabilitation Aids)

Abstract

The position calibration of inertial measurement units (IMUs) is an important part of human motion capture, especially in wearable systems. In realistic applications, a static calibration quickly becomes invalid during motion when the IMUs are loosely mounted on the body. In this paper, we propose a dynamic position calibration algorithm, based on joint constraints, for IMUs mounted on the waist, upper leg, lower leg, and foot. To solve the problem of IMU position displacement, we introduce the Gauss–Newton (GN) method based on the Jacobian matrix, dynamic weight particle swarm optimization (DWPSO), and the grey wolf optimizer (GWO) to realize the position calibration of the IMUs. Furthermore, we establish a coordinate system for the human lower limbs to estimate each joint angle and use a quaternion-based fusion algorithm to improve the attitude estimation of a single IMU. The performances of these three algorithms are analyzed and evaluated through gait tests on the human body and comparisons with a high-precision IMU-Mocap reference device. The results show that all three algorithms can effectively calibrate the IMU positions on the human lower limbs. Additionally, when the degree of freedom (DOF) of a certain dimension is limited, the DWPSO and GWO may perform better than the GN, whereas when the joint rotates sufficiently, the performances of the three are close. The results confirm that the dynamic calibration algorithm based on joint constraints can effectively reduce the position offset errors of IMUs on upper or lower limbs in practical applications.

1. Introduction

In recent years, inertial measurement units (IMUs) have attracted increasing interest in the field of human motion analysis. Wearable sensor motion capture systems are less costly, more flexible, and more portable than optical camera-based motion capture devices [1,2]. By mounting IMUs on each limb of the human body, real-time tracking and analysis of human posture can be realized. IMU-based motion capture and analysis has found substantial application in athletic training, e.g., golf, baseball, and dart-throwing training [3,4,5,6]. It also has promising application prospects in medical rehabilitation training. In [7], an IMU-Mocap system is applied to determine the level of autonomy of patients with Parkinsonian syndromes. To obtain body motion information, the mounting positions of the IMUs and the variation of the joint spatial positions should be accurately measured. The works [1,8] analyzed the influence of IMU position and orientation on the accuracy of motion evaluation, which in turn affects the estimated joint angles. However, in practical applications, different types of clothing materials and muscle stretching during exercise cause IMU position displacements. A direct consequence of IMU displacement is that the derived joint positions differ from the pre-calibrated ones. Therefore, an effective IMU position calibration method is necessary.
Regarding IMU position displacement, existing IMU position calibration methods fall into two categories: static calibration, e.g., quiet standing, and dynamic calibration, e.g., knee flexion. For static calibration, the work [9] realized an IMU-to-body calibration based on preset static postures. In [10,11], the orientation of the IMUs was estimated with a T-pose or N-pose so that each IMU frame aligns with the known direction of that posture. For dynamic calibration, the works [12,13] calibrated the IMUs bound to the upper leg and lower leg by using flexion/extension (FE) and abduction/adduction (AA) of the knee joint, so that the knee joint angle equals zero in the standing posture. The studies [14,15] proposed a simple calibration scheme that does not require specified limb motions and used human gait analysis to align the IMU directions to the body. However, the above methods do not estimate the positions of the sensors relative to the adjacent limbs, which is critical information for calculating the joint angle with IMUs, especially during fast joint rotations [16], and for establishing a kinematic chain model in high-speed motions [17]. The study [18] proposed a position estimation algorithm based on least-squares optimization to estimate the positions of IMUs relative to the limbs. This approach was further extended in [19,20] for gait analysis, where the knee and ankle joint angles were estimated; it was then applied to a rehabilitation system for human limbs.
The limitations of the above calibration methods can be summarized as follows:
(1) The wearing position of the sensor needs to be fixed or special tools are required [9,10,11].
(2) The limbs are required to perform specific motions, but it is difficult for volunteers with damaged joints to complete them. Even volunteers with normal physical activities need to be guided by professionals [12,13].
(3) The direction of the IMUs must be estimated in advance to complete the calibration, and mistakes are easy to make, especially when using a magnetometer, which is vulnerable to magnetic field interference [14,15,21].
(4) The joint cannot always be rotated fully in all specified directions, which degrades the calibration accuracy [18,19].

Problem Statement

Considering the limitations of previous calibration algorithms, and building on the joint constraints proposed in [18], in this work we study the influence of different algorithms on the dynamic calibration performance of IMU positions based on joint constraints. Furthermore, we establish a human lower limb coordinate system and calculate the joint angles to study the influence of IMU position accuracy on the spatio-temporal parameters of human gait. The study [18] did not examine how different joint motion types affect the results during actual motion. Because the hinge joint and the spherical joint have different ranges of variation, the degrees of freedom (DOF) of the joint will change. When the joint rotation is insufficient, the result of the Gauss–Newton (GN) algorithm based on the Jacobian matrix may be inaccurate. To solve the IMU position displacement and account for the influence of the change of joint DOF on the calibration algorithm, we introduce dynamic weight particle swarm optimization (DWPSO) [22] and the grey wolf optimizer (GWO) [23] to realize the position calibration of IMUs based on joint constraints, and the calibration results of the two algorithms are compared with the GN. The main contents of this work are as follows:
(1) Four IMUs are bound to the waist, upper leg, lower leg, and foot, respectively, for the gait experiment, and the accelerometer and gyroscope data of each IMU are collected. High-precision IMU-Mocap equipment is bound to the lower limbs for data acquisition synchronized with the IMUs and serves only as a reference. In addition, we place the IMUs in two different positions, and three subjects are tested in both positions, including one female of height 165 cm and two males of heights 175 cm and 180 cm, respectively.
(2) The collected data are substituted into the GN, DWPSO, and GWO for position estimation to obtain the position information of IMUs relative to the limbs.
(3) Using the calibrated IMU position information, we establish the human lower limb coordinate system to calculate the angles of the hip, knee, and ankle joint in each DOF. Additionally, to improve the performance of attitude calibration, the quaternion fusion algorithm is used to fuse the data of the accelerometer and gyroscope of single IMU.
(4) The performance of the three algorithms is evaluated by comparing them with the high-precision IMU-Mocap reference device.
The remainder of the paper is organized into five parts. Section 2 introduces the IMU position calibration models for the spherical joints and the hinge joint. Section 3 analyzes the performance of the GN and points out its limitations; under the same constraints as the GN, the DWPSO and GWO are used to calibrate the IMU positions. Section 4 establishes the coordinate system of the human lower limbs and combines the IMU position information with the attitude of each single IMU to calculate the lower limb joint angles during walking. Section 5 introduces the experimental test devices and test scheme and analyzes the variation of each joint angle to verify the performance of the three calibration algorithms. Finally, Section 6 summarizes the study.

2. IMU Position Calibration Principle

In this study, we focus on the calibration of IMU positions relative to the lower limb joints. According to the International Society of Biomechanics (ISB) standard [24] and the joint coordinate systems defined in human anatomy [25], the 3D rotations of the lower limb joints can be defined as: (1) hip: flexion/extension (H_FE), abduction/adduction (H_AA), internal/external rotation (H_IE); (2) knee: flexion/extension (K_FE), abduction/adduction (K_AA), internal/external rotation (K_IE); (3) ankle: flexion/extension (A_FE), abduction/adduction (A_AA), internal/external rotation (A_IE).
For the IMU position estimation, the human lower limbs can be simplified as rigid segments connected by joints. Figure 1 presents a model of the human left lower limb; as the human body is symmetrical, it also applies to the right lower limb. The four IMUs are denoted as S, S ∈ {A, B, C, D}, and are mounted on the waist, upper leg, lower leg, and foot. The hip, knee, and ankle joints are denoted as J_i, J_i ∈ {J_H, J_K, J_A}. r_{J_K} is the rotation axis of the knee joint. O_g is the global coordinate system, which represents the coordinate system of 3D space. O_S, S ∈ {A, B, C, D}, is the sensor coordinate system, which takes the IMU's center as the coordinate origin.
The human lower limb joints can be classified into spherical joints and a hinge joint: the hip and ankle are spherical joints and the knee is the hinge joint. At time step t (t = 1 … n), the accelerations measured by the accelerometers are denoted as a_S(t), and the angular velocities measured by the gyroscopes are denoted as w_S(t), S ∈ {A, B, C, D}.
(1) The spherical joint
The spherical joint is analyzed using the hip joint as an example, and the analysis also applies to the ankle joint. Assuming A and B are connected through the joint J_H, the positions of the two IMUs relative to the joint can be determined from the measurement sequences of A and B. Let V_{J_H,S} = [x_{J_H,S}, y_{J_H,S}, z_{J_H,S}]^T, S ∈ {A, B}, denote the vector pointing from the joint center to the origin of the corresponding IMU coordinate system in Figure 1. Following the IMU position estimation algorithm proposed in [18], the spherical joint model is defined by Equation (1).
\left\| a_A(t) - \Gamma_A(t) \right\| - \left\| a_B(t) - \Gamma_B(t) \right\| = 0,
\Gamma_S(t) = w_S(t) \times \left( w_S(t) \times V_{J_H,S} \right) + \alpha_S(t) \times V_{J_H,S}, \quad S \in \{A, B\},
where ‖·‖ is the Euclidean norm of a vector and × is the cross product. The angular acceleration α_S is computed from the angular velocity as defined in [19].
An IMU position displacement causes Equation (1) to no longer hold exactly, i.e., its left- and right-hand terms differ. The resulting error is defined by Equation (3).
e_{J_H}(t) = \left\| a_A(t) - \Gamma_A(t) \right\| - \left\| a_B(t) - \Gamma_B(t) \right\|
(2) The hinge joint
The knee joint is a hinge joint, and the model of the knee joint rotation axis estimation proposed in [18] is defined by Equation (4).
\left\| w_B(t) \times r_{J_K,B} \right\| - \left\| w_C(t) \times r_{J_K,C} \right\| = 0,
where r_{J_K,B} and r_{J_K,C} are the coordinates of the unit vector parallel to the knee joint axis expressed in O_B and O_C, respectively.
Figure 2 shows r_{J_K,B} and r_{J_K,C} in spherical coordinates, which are converted to rectangular coordinates by Equation (5):
r_{J_K,S} = \left[ \cos(\varphi_S)\cos(\theta_S), \; \cos(\varphi_S)\sin(\theta_S), \; \sin(\varphi_S) \right]^T, \quad S \in \{B, C\},
where φ ∈ [0, π] is the pitch angle and θ ∈ [0, 2π] is the yaw angle. An IMU position displacement causes Equation (4) to no longer hold exactly. The resulting error is defined by Equation (6).
e_{J_K}(t) = \left\| w_B(t) \times r_{J_K,B} \right\| - \left\| w_C(t) \times r_{J_K,C} \right\|
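For a computational view of the two constraints, the following is a minimal sketch (not the authors' code) of how the residuals of Equations (3) and (6) can be evaluated with NumPy; the function and variable names are our own assumptions, and the angular acceleration is assumed to be obtained by differentiating the gyroscope signal as in [19].

```python
import numpy as np

def gamma(w, alpha, v):
    # Γ_S(t) = w × (w × V) + α × V, the joint-centre acceleration term of Equation (2)
    return np.cross(w, np.cross(w, v)) + np.cross(alpha, v)

def spherical_residual(a_A, w_A, al_A, a_B, w_B, al_B, v_A, v_B):
    # e_JH(t) of Equation (3): mismatch of the joint-centre acceleration magnitudes
    return (np.linalg.norm(a_A - gamma(w_A, al_A, v_A))
            - np.linalg.norm(a_B - gamma(w_B, al_B, v_B)))

def hinge_residual(w_B, w_C, r_B, r_C):
    # e_JK(t) of Equation (6): mismatch of angular-rate components orthogonal to the knee axis
    return np.linalg.norm(np.cross(w_B, r_B)) - np.linalg.norm(np.cross(w_C, r_C))
```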

3. Calibration Algorithm Design

3.1. Gauss–Newton Method for IMUs Position Calibration

Based on the analysis of the joint constraints in Section 2, we use the Gauss–Newton (GN) algorithm based on the Jacobian matrix to minimize the errors in Equations (3) and (6).
For Equation (3), the optimization problem is expressed by Equation (7).
\min_{x_{J_H}} \sum_{t=1}^{n} e_{J_H}^2(t), \quad x_{J_H} = [V_{J_H,A}, V_{J_H,B}]^T, \quad e_{J_H}(t) = \left\| a_A(t) - \Gamma_A(t) \right\| - \left\| a_B(t) - \Gamma_B(t) \right\|,
where x_{J_H} is the vector containing the IMU position parameters, and x_{J_H,S}, y_{J_H,S}, z_{J_H,S}, S ∈ {A, B}, are in the range [−0.2, 0.2].
The iteration steps at time t are described as follows:
(1) Randomly generate initial values of x_{J_H}^γ, where γ is the iteration index.
(2) Calculate the deviation vector e_{J_H} using Equation (7).
(3) Calculate the Jacobian matrix J = d e_{J_H} / d x_{J_H} using Equation (8), and then compute its generalized (Moore–Penrose) inverse pinv(J).
J = \begin{bmatrix} \dfrac{\partial e_{J_H}(1)}{\partial V_{J_H,A}} & \dfrac{\partial e_{J_H}(1)}{\partial V_{J_H,B}} \\ \vdots & \vdots \\ \dfrac{\partial e_{J_H}(n)}{\partial V_{J_H,A}} & \dfrac{\partial e_{J_H}(n)}{\partial V_{J_H,B}} \end{bmatrix},
where
\frac{\partial e_{J_H}}{\partial V_{J_H,S}} = \mp \frac{(a_S - \Gamma_S)^T}{\left\| a_S - \Gamma_S \right\|} \left( [w_S]_\times [w_S]_\times + [\alpha_S]_\times \right), \quad S \in \{A, B\} \; \text{(upper sign for } S = A\text{)};
the following symbols are introduced by Equation (10)
[w_S]_\times = \begin{bmatrix} 0 & -w_z & w_y \\ w_z & 0 & -w_x \\ -w_y & w_x & 0 \end{bmatrix}, \quad [\alpha_S]_\times = \begin{bmatrix} 0 & -\alpha_z & \alpha_y \\ \alpha_z & 0 & -\alpha_x \\ -\alpha_y & \alpha_x & 0 \end{bmatrix},
where w S = [ w x , w y , w z ] T , α S = [ α x , α y , α z ] T .
(4) Update x J H γ by Equation (11) and return to (2).
x_{J_H}^{\gamma+1} = x_{J_H}^{\gamma} - \mathrm{pinv}(J)\, e_{J_H},
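As an illustration only (not the authors' implementation), the sketch below runs the above Gauss–Newton iteration for the spherical-joint problem of Equation (7); for brevity it uses a finite-difference Jacobian in place of the analytic form of Equations (8)–(10), and all names are assumptions.

```python
import numpy as np

def gamma(w, alpha, v):
    return np.cross(w, np.cross(w, v)) + np.cross(alpha, v)

def residuals(x, a_A, w_A, al_A, a_B, w_B, al_B):
    # Stacked e_JH(t) for all n samples; x = [V_JH_A, V_JH_B] (6 parameters)
    v_A, v_B = x[:3], x[3:]
    return (np.linalg.norm(a_A - gamma(w_A, al_A, v_A), axis=1)
            - np.linalg.norm(a_B - gamma(w_B, al_B, v_B), axis=1))

def gauss_newton(data, x0, iters=50, eps=1e-6):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        e = residuals(x, *data)
        J = np.empty((e.size, x.size))
        for j in range(x.size):                      # finite-difference Jacobian
            dx = np.zeros_like(x); dx[j] = eps
            J[:, j] = (residuals(x + dx, *data) - e) / eps
        x = x - np.linalg.pinv(J) @ e                # update of Equation (11)
    return x                                         # estimated [V_JH_A, V_JH_B]
```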
For Equation (6), the optimization iteration is expressed by Equation (12).
\min_{x_{J_K}} \sum_{t=1}^{n} e_{J_K}^2(t), \quad x_{J_K} = [\varphi_B, \theta_B, \varphi_C, \theta_C]^T, \quad e_{J_K}(t) = \left\| w_B(t) \times r_{J_K,B} \right\| - \left\| w_C(t) \times r_{J_K,C} \right\|,
where x J K is the vector containing knee joint axis position parameters. The iteration steps at time t are described as follows:
(1) Randomly generate initial values of x J K γ .
(2) Calculate r_{J_K,S} using Equation (5).
(3) Calculate the deviation vector e_{J_K} using Equation (12).
(4) Calculate the Jacobian matrix J = d e_{J_K} / d x_{J_K} using Equation (13), and compute its generalized inverse pinv(J).
J = \begin{bmatrix} \dfrac{\partial e_{J_K}(1)}{\partial r_{J_K,B}} & \dfrac{\partial e_{J_K}(1)}{\partial r_{J_K,C}} \\ \vdots & \vdots \\ \dfrac{\partial e_{J_K}(n)}{\partial r_{J_K,B}} & \dfrac{\partial e_{J_K}(n)}{\partial r_{J_K,C}} \end{bmatrix},
where
\frac{\partial \left\| w_S \times r_{J_K,S} \right\|}{\partial r_{J_K,S}} = \frac{\left( w_S \times r_{J_K,S} \right) \times w_S}{\left\| w_S \times r_{J_K,S} \right\|}, \quad S \in \{B, C\}
(5) Update x J K γ using Equation (15) and return to (2).
x_{J_K}^{\gamma+1}(t) = x_{J_K}^{\gamma}(t) - \mathrm{pinv}(J)\, e_{J_K}(t)
According to the definition of the DH coordinate system in [26], the 3-DOF hip and ankle joints can be decomposed into three hinge joints; conversely, the position of the IMUs relative to the hinge-type knee joint can be estimated with the spherical joint approach. The positions of B and C relative to the knee joint are then obtained by Equation (16), which removes the common component of the spherical estimates along the joint axis so that they become unique.
V_{J_K,B} = \tilde{V}_{J_K,B} - \tfrac{1}{2}\left( r_{J_K,B}^T \tilde{V}_{J_K,B} + r_{J_K,C}^T \tilde{V}_{J_K,C} \right) r_{J_K,B}, \quad V_{J_K,C} = \tilde{V}_{J_K,C} - \tfrac{1}{2}\left( r_{J_K,B}^T \tilde{V}_{J_K,B} + r_{J_K,C}^T \tilde{V}_{J_K,C} \right) r_{J_K,C},
where \tilde{V}_{J_K,B} and \tilde{V}_{J_K,C} are estimated by Equation (7).
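The following short sketch (our own, with assumed names) applies the shift of Equation (16) to a pair of spherical-joint estimates; it simply subtracts the mean projection of the estimates onto the knee axis.

```python
import numpy as np

def knee_positions(v_tilde_B, v_tilde_C, r_B, r_C):
    # Equation (16): remove the mean axial component so that the knee-joint
    # position estimates become unique along the hinge axis.
    s = 0.5 * (np.dot(r_B, v_tilde_B) + np.dot(r_C, v_tilde_C))
    return v_tilde_B - s * r_B, v_tilde_C - s * r_C
```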
By analyzing the algorithm, the limitations of the GN are as follows:
(1) When using the GN, pinv(J) is well conditioned only if the Jacobian has full column rank, which the measured data may not guarantee. When people walk, the motion of the knee joint is mainly flexion and extension, i.e., there is a significant change in only one DOF. For the other DOF, such as internal/external rotation of the knee joint, w_x = w_y = 0 causes α_x = α_y = 0. According to the analysis of Equations (8)–(10), this reduces the rank of the Jacobian matrix. For the hip or ankle joint, it is likewise not guaranteed that each motion produces rotation in all 3-DOF at the same time, which reduces the rank of J. The matrix J may then become singular, leading to non-convergence of the algorithm.
(2) In practical computation, the GN iterates over the parameters of only one or two sensors at a time because the Jacobian J is costly to evaluate; therefore, each joint can only be calculated separately. If the motion data of the whole lower limb were processed simultaneously, the complexity of the algorithm would increase, the iteration time would become longer, and the performance of the GN would be compromised.

3.2. Dynamic Weight Particle Swarm Optimization for IMUs Position Calibration

Both population algorithms and genetic algorithms mimic the adaptation of a population of individuals to natural conditions and use transformation rules to search the solution space for the optimal solution. However, individual mutation occurs during a genetic algorithm, which cannot fully handle the constraints in this optimization problem [27]. Therefore, we chose population optimization algorithms to realize the position calibration of the IMUs.
Under the same constraints as [18], we introduce dynamic weight particle swarm optimization (DWPSO) to calibrate the position of IMUs, where the dynamic weight is added to the traditional PSO. Unlike the GN, the DWPSO does not need to consider the complexity of the Jacobian matrix calculation.
At time t, let N be the number of particles in the population and let ε index the particles. The parameter vector containing the IMU positions is expressed by Equation (17), and the vector containing the knee joint axis parameters is expressed by Equation (18).
x J H , ε = [ V J H , A , ε , V J H , B , ε ] T ,
x J K , ε = [ φ B , ε , θ B , ε , φ C , ε , θ C , ε ] T ,
For Equations (3) and (6), the optimization is expressed by Equations (19)–(21).
η ε = argmin ( e J i ( x J i , ε γ ) ) ,
where η_ε is the individual best, i.e., the optimal value of x_{J_i} found by particle ε over the iterations, J_i ∈ {J_H, J_K}, and γ is the iteration index.
p = argmin ( e J i ( η ε ) ) ,
where p is the global best, i.e., the optimal value of x_{J_i} found by all particles over the iterations.
F_{J_H,\varepsilon} = \begin{bmatrix} F_{x_{J_H,A},\varepsilon} & F_{x_{J_H,B},\varepsilon} \\ F_{y_{J_H,A},\varepsilon} & F_{y_{J_H,B},\varepsilon} \\ F_{z_{J_H,A},\varepsilon} & F_{z_{J_H,B},\varepsilon} \end{bmatrix}, \quad F_{J_K,\varepsilon} = [F_{\varphi_B,\varepsilon}, F_{\theta_B,\varepsilon}, F_{\varphi_C,\varepsilon}, F_{\theta_C,\varepsilon}]^T,
where F_{J_H} and F_{J_K} are the particle update velocities. The algorithm steps at time t are as follows:
(1) Initialize x_{J_i,ε} and F_{J_i,ε}. x_{J_H,S}, y_{J_H,S}, z_{J_H,S}, S ∈ {A, B}, are drawn from a uniform distribution on [−0.2, 0.2]; φ_B, φ_C are drawn uniformly from [0, π]; θ_B, θ_C are drawn uniformly from [0, 2π].
(2) Substitute x_{J_H,ε} into Equation (3) to obtain e_{J_H} and x_{J_K,ε} into Equation (6) to obtain e_{J_K}.
(3) Calculate η_ε and p according to Equations (19) and (20).
(4) Update x_{J_i,ε} and F_{J_i,ε} as follows:
F_{J_i,\varepsilon}^{\gamma+1} = \mu F_{J_i,\varepsilon}^{\gamma} + \Omega_1 \Re_1 (\eta_\varepsilon - x_{J_i,\varepsilon}^{\gamma}) + \Omega_2 \Re_2 (p - x_{J_i,\varepsilon}^{\gamma}),
x_{J_i,\varepsilon}^{\gamma+1} = x_{J_i,\varepsilon}^{\gamma} + F_{J_i,\varepsilon}^{\gamma+1},
where F_{J_i,ε}^{γ+1} and x_{J_i,ε}^{γ+1} are the values after the γ-th update, \Re_1 and \Re_2 are random numbers drawn from a uniform distribution on [0, 1], μ is the inertia factor, and Ω_1 and Ω_2 are acceleration constants. This work takes Ω_1 = Ω_2 = 2.
(5) Repeat steps 2∼4 until e J i convergence.
In step 4 of the algorithm, μ affects the performance: a value of μ that is too large or too small impairs the convergence of the error. Therefore, we introduce a PSO based on a dynamic weight, in which μ decreases exponentially within the range [μ_min, μ_max] as γ increases. Additionally, to avoid falling into a local optimum as μ decreases, random jumps are introduced via Equation (24).
\tilde{\mu} = \mu_{\max} - \frac{\mu_{\max} - \mu_{\min}}{1 + \frac{1}{10^{\,\gamma/\gamma_{\max}}}},
where \tilde{\mu} is the dynamic weight and γ_max denotes the maximum number of iterations. Let μ_min = 0.2 and μ_max = 0.8. Figure 3 shows the convergence curves of e_{J_H}, defined by Equation (3), when using a fixed weight and the dynamic weight. When N is small, the dynamic weight performs better than the fixed weight, which suits scenarios requiring high timeliness; when N is large, the performance of the dynamic weight is similar to that of the fixed weight.
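A compact, hedged sketch of the DWPSO loop described above is given below (our own code, not the authors'); cost() stands for the summed squared residual of Equation (7) or (12), the bounds follow step (1), Ω_1 = Ω_2 = 2 as in the text, and the weight schedule is a simple exponential decay standing in for Equation (24).

```python
import numpy as np

def dwpso(cost, lo, hi, n_particles=150, iters=200, mu_min=0.2, mu_max=0.8):
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = lo.size
    x = np.random.uniform(lo, hi, (n_particles, dim))        # particle positions
    f = np.zeros_like(x)                                      # particle velocities F
    p_best = x.copy()
    p_cost = np.array([cost(xi) for xi in x])
    g_best = p_best[p_cost.argmin()].copy()
    for g in range(iters):
        # dynamic inertia weight: decays from mu_max towards mu_min (stand-in for Eq. (24))
        mu = mu_min + (mu_max - mu_min) * np.exp(-5.0 * g / iters)
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        f = mu * f + 2.0 * r1 * (p_best - x) + 2.0 * r2 * (g_best - x)   # Eq. (22)
        x = np.clip(x + f, lo, hi)                                       # Eq. (23)
        c = np.array([cost(xi) for xi in x])
        improved = c < p_cost
        p_best[improved], p_cost[improved] = x[improved], c[improved]
        g_best = p_best[p_cost.argmin()].copy()
    return g_best
```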

3.3. Grey Wolf Optimizer for IMUs Position Calibration

To verify the influence of different population algorithms on IMUs’ position calibration performance, we also introduce the grey wolf optimizer (GWO) [23] for position calibration. The GWO is a new optimization algorithm inspired by the hunting and social hierarchical behavior of gray wolves. It randomly generates a set of solutions to form an initial gray wolf group, and then iteratively selects the best three wolves in the population, similar to the optimal solution in the PSO.
The parameter vector containing the IMUs position is expressed by Equation (25). The vector containing knee joint axis position parameters is expressed by Equation (26).
x J H , p = [ V J H , A , V J H , B ] T ,
x J K , p = [ φ B , θ B , φ C , θ C ] T ,
where p is a different individual in the population and x J i , p , J i { J H , J K } is the corresponding position of the individual.
In the GWO iteration, the position of each gray wolf represents a feasible solution in the solution space. Let the wolf with the best position be p α ; the second-best is p β ; the third-best is p δ . The remaining wolves are p ε , i.e., the wolf pack individuals other than the best three wolves. In each iteration, the three best gray wolves in the current population are retained, and then the positions of other search agents are updated according to their position information. p α , p β , p δ and p ε are constantly updated and iterated until the optimal solution is found. The process of finding the optimal solution is the process of a gray wolf hunting prey.
(1) Encircling Prey
f_{p_\alpha} = \left| h_1 \cdot x_{J_i,p_\alpha}^{\gamma} - x_{J_i,p_\varepsilon}^{\gamma} \right|, \quad f_{p_\beta} = \left| h_2 \cdot x_{J_i,p_\beta}^{\gamma} - x_{J_i,p_\varepsilon}^{\gamma} \right|, \quad f_{p_\delta} = \left| h_3 \cdot x_{J_i,p_\delta}^{\gamma} - x_{J_i,p_\varepsilon}^{\gamma} \right|, \quad h = 2 \cdot r_1,
where γ is the iteration index. The positions of p_α, p_β, p_δ, and p_ε in the γ-th iteration are denoted by x_{J_i,p_α}^γ, x_{J_i,p_β}^γ, x_{J_i,p_δ}^γ, and x_{J_i,p_ε}^γ, respectively. The position vectors of p_ε relative to p_α, p_β, and p_δ are denoted as f_{p_α}, f_{p_β}, and f_{p_δ}. The random vector drawn from a uniform distribution on [0, 1] is denoted as r_1. h ∈ {h_1, h_2, h_3} is a random weight that decreases nonlinearly over the iterations. From the initial iteration to the final iteration, it provides a global search of the decision space. When the algorithm falls into a local optimum that is not easy to escape, the randomness of h plays an important role in avoiding local optima. Equation (27) shows that, after moving, p_ε moves around the target gray wolves p_α, p_β, p_δ, with an orientation determined by the size of each dimension and by h.
(2) Hunting
x_{J_i,p_1} = x_{J_i,p_\alpha}^{\gamma} - k_1 \cdot f_{p_\alpha}, \quad x_{J_i,p_2} = x_{J_i,p_\beta}^{\gamma} - k_2 \cdot f_{p_\beta}, \quad x_{J_i,p_3} = x_{J_i,p_\delta}^{\gamma} - k_3 \cdot f_{p_\delta}, \quad k = 2 \cdot \psi \cdot r_2 - \psi,
x_{J_i,p_\varepsilon}^{\gamma+1} = \frac{x_{J_i,p_1} + x_{J_i,p_2} + x_{J_i,p_3}}{3},
where k ∈ {k_1, k_2, k_3} is a random weight and r_2 is a random vector drawn from a uniform distribution on [0, 1]. ψ is the convergence factor, which decreases linearly from 2 to 0 with the number of iterations. Combining Equations (27) and (28) shows that p_ε moves by observing the positions of p_α, p_β, and p_δ, yielding x_{J_i,p_1}, x_{J_i,p_2}, and x_{J_i,p_3}, respectively. Equation (29) then determines the moving direction towards the prey and updates the position, i.e., x_{J_i,p_ε}^{γ+1} is the updated position of p_ε. Through a continuous iterative search, the optimal solution is found. Additionally, Equation (29) shows that the target position of p_ε is the centroid of the region enclosed by the three positions obtained by observing p_α, p_β, and p_δ.
(3) Attacking Prey
During the iterations, as ψ decreases linearly from 2 to 0, the corresponding k varies within the range [−ψ, ψ]. When k takes a value in this range, the next position of a gray wolf can be anywhere between its current position and the prey. When |k| < 1, the wolves attack the prey; when |k| > 1, a gray wolf separates from the prey and continues to look for more suitable prey.
The algorithm steps at time t are as follows:
(1) Initialize h, k, and x_{J_i,p}. x_{J_H,S}, y_{J_H,S}, z_{J_H,S}, S ∈ {A, B}, are drawn from a uniform distribution on [−0.2, 0.2]; φ_B, φ_C are drawn uniformly from [0, π]; θ_B, θ_C are drawn uniformly from [0, 2π].
(2) Calculate the individual fitness of the population by substituting x J H , p into Equation (3) and x J K , p into Equation (6). Select the three individuals with the smallest error as p α , p β , p δ .
(3) Calculate the position vectors of p_ε relative to p_α, p_β, and p_δ in the γ-th iteration by Equation (27), and update the position of p_ε by Equations (28) and (29).
(4) If the maximum number of iterations is reached, go to step 6. Otherwise, go to step 5.
(5) Reorder to determine the position of the gray wolf, and go to step 2.
(6) Output current optimal solution by Equation (29).
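The sketch below (again our own, with assumed names) follows the GWO steps above for one of the calibration problems; cost() is the summed squared residual of Equation (7) or (12) and the bounds follow step (1).

```python
import numpy as np

def gwo(cost, lo, hi, n_wolves=30, iters=200):
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = lo.size
    x = np.random.uniform(lo, hi, (n_wolves, dim))           # initial pack
    for g in range(iters):
        fitness = np.array([cost(xi) for xi in x])
        alpha, beta, delta = x[np.argsort(fitness)[:3]]       # three best wolves
        psi = 2.0 * (1.0 - g / iters)                         # convergence factor: 2 -> 0
        for i in range(n_wolves):
            candidates = []
            for leader in (alpha, beta, delta):
                r1, r2 = np.random.rand(dim), np.random.rand(dim)
                h = 2.0 * r1                                  # random weight of Eq. (27)
                k = 2.0 * psi * r2 - psi                      # random weight of Eq. (28)
                f = np.abs(h * leader - x[i])                 # distance to the leader
                candidates.append(leader - k * f)
            x[i] = np.clip(np.mean(candidates, axis=0), lo, hi)   # centroid, Eq. (29)
    fitness = np.array([cost(xi) for xi in x])
    return x[fitness.argmin()]
```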

4. Calculation of Human Lower Limbs Joint Angles

To calculate the joint angle, the coordinate system of each limb needs to be constructed first. The previous method of establishing the limb coordinate system is to let the subject stand in a standard standing posture. The limitations of this method have been analyzed in Section 1. According to the calibration algorithm in Section 3, we can obtain the installation position and direction information of human lower limb sensors and use this information to establish the coordinate system attached to a limb.

4.1. Establish the Coordinate System Attached to a Limb

In human kinematic analysis, it is crucial to determine the spatial relationship between two adjacent limbs. Establishing this relationship depends on the coordinate frame fixed on each limb, i.e., the coordinate system attached to a limb. A commonly used approach is to establish the coordinate frame on the axis of the proximal or posterior joint of each limb [28]. In this work, according to the standards of the International Society of Biomechanics (ISB) [24], we establish the coordinate system attached to each limb at the proximal joint, and all coordinate systems are right-handed Cartesian coordinates. As shown in Figure 4, the pelvis, upper leg, lower leg, and foot of the left leg are denoted as the rods L_1, L_2, L_3, and L_4, respectively, and the coordinate system attached to each limb is denoted as {L_1}, {L_2}, {L_3}, {L_4}. In most human motions, the pelvis generally undergoes only translation without rotation; therefore, when only the joint angles are calculated, the L_1 rod can be considered fixed.
As shown in Figure 5a, {L_2} is established at J_H. The perpendicular from J_H to r_{J_K} is drawn, and its foot is N. The y-axis of the attached coordinate system points from N to J_H. The z-axis is parallel to the knee joint rotation axis and points from the left leg to the right leg. The x-axis is perpendicular to the y- and z-axes.
In Figure 5b, the perpendicular from the point J_A to the knee joint axis is drawn, and its foot is O; {L_3} is established at O. The y-axis points from J_A to O. The z-axis coincides with the knee joint axis and points from the left leg to the right leg. The x-axis is perpendicular to the y- and z-axes. {L_4} is established at J_A. At the initial stage, {L_1} and {L_2} coincide, and {L_3} and {L_4} are parallel.
The attitude transformation between two adjacent sensors is defined by Equation (30).
R_B^A(t) = \left( R_A^{O_g}(t) \right)^T R_B^{O_g}(t), \quad R_C^B(t) = \left( R_B^{O_g}(t) \right)^T R_C^{O_g}(t), \quad R_D^C(t) = \left( R_C^{O_g}(t) \right)^T R_D^{O_g}(t),
where R_B^A is the attitude transformation matrix between A and B, R_C^B is the attitude transformation matrix between B and C, and R_D^C is the attitude transformation matrix between C and D. The attitude transformation matrix of each IMU relative to the global coordinate system at time t is denoted as R_S^{O_g}, S ∈ {A, B, C, D}. It is worth noting that only the sensor coordinate system O_S is time-dependent, which is why R_S^{O_g} changes with time, whereas O_g is fixed. We analyze the fusion process in detail in Section 4.3. Before motion capture begins, while the subject stands still, the attitude transformations between the IMUs are denoted by R_B^A(0), R_C^B(0), and R_D^C(0).
According to the analysis in Section 3, V_{J_H,B}, V_{J_K,B}, V_{J_K,C}, V_{J_A,C}, V_{J_A,D}, r_{J_K,B}, and r_{J_K,C} can be obtained. As shown in Figure 5a,b, V_{BN} and V_{CO} need to be calculated to obtain the coordinate system attached to each limb. The equation of the knee joint axis is established by Equation (31):
\frac{x + x_2}{r_x} = \frac{y + y_2}{r_y} = \frac{z + z_2}{r_z},
where [x_2, y_2, z_2]^T denotes the coordinates of V_{J_H,B}, [x_3, y_3, z_3]^T denotes the coordinates of V_{J_K,B}, and [r_x, r_y, r_z]^T denotes the coordinates of r_{J_K,B}. [x, y, z]^T is an arbitrary point on the line, and the point N lies on this line. Assuming that the coordinates of V_{BN} are [a, b, c]^T, the equation of the knee joint axis becomes Equation (32):
\frac{a + x_2}{r_x} = \frac{b + y_2}{r_y} = \frac{c + z_2}{r_z}
The normal line of r J K can be expressed by Equation (33).
r_x (a + x_3) + r_y (b + y_3) + r_z (c + z_3) = 0
According to Equations (32) and (33), V B N can be calculated, and { L 2 } is calculated by Equation (34).
z_{L_2} = r_{J_K,B}, \quad y_{L_2} = \frac{V_{J_H,B} + V_{BN}}{\left\| V_{J_H,B} + V_{BN} \right\|}, \quad x_{L_2} = y_{L_2} \times z_{L_2}
The attitude transformation matrix from { L 2 } to the coordinate system of sensor B is denoted by Equation (35).
R_B^{L_2} = \left[ x_{L_2}, y_{L_2}, z_{L_2} \right]^T
Similarly, we can obtain V C O , and the attitude transformation matrix from { L 3 } to the coordinate system of sensor C is denoted by Equation (36).
R_C^{L_3} = \left[ x_{L_3}, y_{L_3}, z_{L_3} \right]^T
At the initial stage, {L_1} and {L_2} coincide, and {L_3} and {L_4} are parallel, i.e.,
R_{L_2}^{L_1} = I, \quad R_D^{L_4} = R_C^{L_3} R_D^C,
where I is the 3 × 3 identity matrix, R_{L_2}^{L_1} is the rotation matrix between L_1 and L_2, and R_D^{L_4} is the attitude transformation matrix from L_4 to the coordinate system of sensor D.
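As an assumed, illustrative sketch (not the authors' code), the frame construction of Equations (34)–(35) can be written as follows, with all inputs expressed in the coordinate system of sensor B and the sign conventions taken directly from Equation (34).

```python
import numpy as np

def frame_L2(v_JH_B, v_BN, r_JK_B):
    # z-axis: parallel to the estimated knee axis (Equation (34))
    z = r_JK_B / np.linalg.norm(r_JK_B)
    # y-axis: along V_JH,B + V_BN, normalized (direction from N to J_H)
    y = v_JH_B + v_BN
    y = y / np.linalg.norm(y)
    # x-axis: completes the right-handed frame
    x = np.cross(y, z)
    x = x / np.linalg.norm(x)
    # Equation (35): rows of R_B^{L2} are the {L2} axes expressed in O_B
    return np.vstack([x, y, z])
```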

4.2. Joint Angles Calculation

According to the ISB standard [24], each joint angle of the lower limbs describes the motion of a limb relative to the adjacent proximal segment, i.e., the upper leg relative to the pelvis, the lower leg relative to the upper leg, and the foot relative to the lower leg. Within the 3-DOF of the joint angles, flexion/extension is β, the angle of rotation about the z-axis; abduction/adduction is ϕ, the angle of rotation about the x-axis; and internal/external rotation is δ, the angle of rotation about the y-axis. The joint angles are calculated according to the Z-X-Y Euler rotation order. At time t, the rotation matrix of limb L_i relative to L_{i−1} (i = 2, 3, 4) can be obtained by Equation (38).
R_{L_2}^{L_1}(t) = R_B^{L_2} R_B^{O_g}(t), \quad R_{L_3}^{L_2}(t) = R_B^{L_2} \left( R_B^{O_g}(t) \right)^T R_C^B(0) R_C^{O_g}(t) \left( R_C^{L_3} \right)^T, \quad R_{L_4}^{L_3}(t) = R_C^{L_3} \left( R_C^{O_g}(t) \right)^T R_D^C(0) R_D^{O_g}(t) \left( R_D^{L_4} \right)^T
where R L i L i 1 can be expressed by Equation (39).
R_{L_i}^{L_{i-1}} = \begin{bmatrix} c\delta c\phi - s\delta s\beta s\phi & -s\delta c\beta & c\delta s\phi + s\delta s\beta c\phi \\ s\delta c\phi + c\delta s\beta s\phi & c\delta c\beta & s\delta s\phi - c\delta s\beta c\phi \\ -c\beta s\phi & s\beta & c\beta c\phi \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix},
where c denotes cos and s denotes sin. The Euler angles can be calculated by Equations (40)–(42). When cos β ≠ 0:
\beta = \mathrm{atan2}\left( r_{32}, \sqrt{r_{12}^2 + r_{22}^2} \right), \quad \delta = \mathrm{atan2}\left( -\frac{r_{12}}{\cos\beta}, \frac{r_{22}}{\cos\beta} \right), \quad \phi = \mathrm{atan2}\left( -\frac{r_{31}}{\cos\beta}, \frac{r_{33}}{\cos\beta} \right)
When β = 90°:
\beta = 90^{\circ}, \quad \delta = 0, \quad \phi = \mathrm{atan2}(r_{13}, r_{23})
When β = −90°:
\beta = -90^{\circ}, \quad \delta = 0, \quad \phi = \mathrm{atan2}(r_{13}, r_{23})
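A hedged sketch of the angle extraction in Equations (40)–(42) is shown below (our own code with assumed names); it takes a 3 × 3 rotation matrix of the form of Equation (39) and returns the flexion/extension, abduction/adduction, and internal/external rotation angles in degrees.

```python
import numpy as np

def joint_angles(R):
    # R: 3x3 rotation matrix R_{Li}^{Li-1} as in Equation (39)
    r12, r22, r13, r23 = R[0, 1], R[1, 1], R[0, 2], R[1, 2]
    r31, r32, r33 = R[2, 0], R[2, 1], R[2, 2]
    cb = np.sqrt(r12**2 + r22**2)
    beta = np.arctan2(r32, cb)                          # flexion/extension, Eq. (40)
    if cb > 1e-8:
        delta = np.arctan2(-r12 / np.cos(beta), r22 / np.cos(beta))   # int/ext rotation
        phi = np.arctan2(-r31 / np.cos(beta), r33 / np.cos(beta))     # abd/adduction
    else:                                               # gimbal lock, Eqs. (41)-(42)
        delta = 0.0
        phi = np.arctan2(r13, r23)
    return np.degrees([beta, phi, delta])
```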

4.3. Single IMU Attitude Fusion

To improve the accuracy of the attitude obtained from a single IMU, we need to fuse the attitude rotation matrices in Equation (30). The quaternion-based attitude fusion algorithm can effectively combine the error characteristics of the gyroscope and accelerometer and improve the accuracy of the attitude calculation [29]. The expression of a quaternion is defined by Equation (43).
q = q 0 + q 1 i + q 2 j + q 3 k ,
where i, j, k are imaginary units, q_0, q_1, q_2, q_3 are real numbers, and each quaternion is a linear combination of 1, i, j, and k.
(1) Quaternion initialization
At time t, the quaternion of attitude change is q t = q 0 , q 1 , q 2 , q 3 T , the attitude calculation error is ξ t = ξ x , ξ y , ξ z T . At the initial stage, q 0 and ξ 0 are defined by Equation (44).
q 0 = 1 , 0 , 0 , 0 T , ξ 0 = 0 , 0 , 0 T ,
(2) Correction of angular velocity error
Based on the definition of cosine matrix and Euler angles in [30], the gravity vector of the global coordinate system can be rotated to the sensor coordinate system by Equation (45).
g_t^{O_s} = R_{O_s}^{O_g}(t) \cdot g_t^{O_g} = \begin{bmatrix} q_0^2 + q_1^2 - q_2^2 - q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & q_0^2 - q_1^2 + q_2^2 - q_3^2 & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2 - q_1^2 - q_2^2 + q_3^2 \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix},
where R_{O_s}^{O_g}(t) is the rotation matrix, expressed through the quaternion, of O_s relative to O_g, g_t^{O_g} is the acceleration of gravity, and g_t^{O_s} = [g_x^{O_s}, g_y^{O_s}, g_z^{O_s}]^T is the acceleration of gravity after rotation from O_g to O_s. Equation (45) simplifies to Equation (46):
g_x^{O_s} = 2(q_1 q_3 - q_0 q_2), \quad g_y^{O_s} = 2(q_2 q_3 + q_0 q_1), \quad g_z^{O_s} = q_0^2 - q_1^2 - q_2^2 + q_3^2.
During the IMU attitude rotation, the gravity vector measured by the accelerometer is a_t^{O_s} = [a_x^{O_s}, a_y^{O_s}, a_z^{O_s}]^T, and the gravity vector calculated from the attitude integrated from the gyroscope is g_t^{O_s} = [g_x^{O_s}, g_y^{O_s}, g_z^{O_s}]^T. The error vector between them, ξ_t = [ξ_x, ξ_y, ξ_z]^T, i.e., the error between the gyroscope-integrated attitude and the accelerometer-measured attitude, can be expressed as a cross product; ξ_t is defined by Equation (47):
\xi_x = a_y^{O_s} \cdot g_z^{O_s} - a_z^{O_s} \cdot g_y^{O_s}, \quad \xi_y = a_z^{O_s} \cdot g_x^{O_s} - a_x^{O_s} \cdot g_z^{O_s}, \quad \xi_z = a_x^{O_s} \cdot g_y^{O_s} - a_y^{O_s} \cdot g_x^{O_s}.
(3) Data fusion
The cross-product error is adjusted by a proportional–integral (PI) controller [31] to correct the bias of the gyroscope. By adjusting the two parameters λ_p and λ_i, the speed at which the accelerometer corrects the gyroscope-integrated attitude can be controlled. Here, λ_p is the proportional adjustment coefficient, which controls how quickly the error converges towards the accelerometer measurement: once there is a deviation in the system, the proportional term immediately produces a corrective effect that reduces the error. λ_i is the integral adjustment coefficient, which controls the convergence speed of the gyroscope bias, eliminating the steady-state error and improving the accuracy of the system.
(1) At time t, integrate the cross product error by Equation (48).
\xi_x^{+} = \xi_x^{-} + \xi_x \cdot \lambda_i \cdot \tfrac{1}{2}\Delta t, \quad \xi_y^{+} = \xi_y^{-} + \xi_y \cdot \lambda_i \cdot \tfrac{1}{2}\Delta t, \quad \xi_z^{+} = \xi_z^{-} + \xi_z \cdot \lambda_i \cdot \tfrac{1}{2}\Delta t,
(2) The fused gyro measurements are defined by Equation (49).
w_x^{O_s+} = w_x^{O_s-} + \lambda_p \cdot \xi_x + \xi_x^{+}, \quad w_y^{O_s+} = w_y^{O_s-} + \lambda_p \cdot \xi_y + \xi_y^{+}, \quad w_z^{O_s+} = w_z^{O_s-} + \lambda_p \cdot \xi_z + \xi_z^{+},
where λ_p = 2Δt, Δt is the sampling period, and the angular velocities measured by the gyroscope in the IMU coordinate system are denoted as w_t^{O_s} = [w_x^{O_s}, w_y^{O_s}, w_z^{O_s}]^T. The superscript "−" denotes the prior estimate and "+" the posterior estimate. Since the PI controller parameters need to be adjusted to the experimental requirements, the value of λ_i cannot be too large in this work. As shown in Figure 6, after several rounds of parameter tuning, when λ_i is greater than 0.2Δt, the ξ defined by Equation (48) increases gradually. Therefore, any value less than 0.2Δt is acceptable, and we set λ_i = 0.1Δt.
(4) Update quaternion
\begin{bmatrix} q_0(t + \Delta t) & q_1(t + \Delta t) & q_2(t + \Delta t) & q_3(t + \Delta t) \end{bmatrix} = \begin{bmatrix} q_0(t) & q_1(t) & q_2(t) & q_3(t) \end{bmatrix} \begin{bmatrix} 1 & \frac{w_x^{O_s}}{2}\Delta t & \frac{w_y^{O_s}}{2}\Delta t & \frac{w_z^{O_s}}{2}\Delta t \\ -\frac{w_x^{O_s}}{2}\Delta t & 1 & -\frac{w_z^{O_s}}{2}\Delta t & \frac{w_y^{O_s}}{2}\Delta t \\ -\frac{w_y^{O_s}}{2}\Delta t & \frac{w_z^{O_s}}{2}\Delta t & 1 & -\frac{w_x^{O_s}}{2}\Delta t \\ -\frac{w_z^{O_s}}{2}\Delta t & -\frac{w_y^{O_s}}{2}\Delta t & \frac{w_x^{O_s}}{2}\Delta t & 1 \end{bmatrix}
(5) Convert the updated quaternions into matrix forms by R O s O g of Equation (45), and the attitude transformation matrix of each IMU relative to the global coordinate system can be obtained.
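The complete fusion loop of Section 4.3 can be sketched as follows (our own hedged implementation with assumed names, and λ_p, λ_i chosen as in the text); it corrects the gyroscope rates with the accelerometer-derived gravity direction and integrates the quaternion as in Equation (50).

```python
import numpy as np

def fuse_attitude(acc, gyro, dt=0.01):
    # acc, gyro: (n, 3) accelerometer [g] and gyroscope [rad/s] samples of one IMU
    q = np.array([1.0, 0.0, 0.0, 0.0])          # Equation (44)
    xi_int = np.zeros(3)                        # integrated cross-product error
    lam_p, lam_i = 2 * dt, 0.1 * dt             # PI gains as chosen in the text
    quats = []
    for a, w in zip(acc, gyro):
        q0, q1, q2, q3 = q
        # gravity direction predicted from the current quaternion, Equation (46)
        g = np.array([2 * (q1 * q3 - q0 * q2),
                      2 * (q2 * q3 + q0 * q1),
                      q0**2 - q1**2 - q2**2 + q3**2])
        a_n = a / (np.linalg.norm(a) + 1e-12)   # normalized measured gravity
        xi = np.cross(a_n, g)                   # cross-product error, Equation (47)
        xi_int += xi * lam_i * 0.5 * dt         # integral term, Equation (48)
        w_corr = w + lam_p * xi + xi_int        # corrected rates, Equation (49)
        # first-order quaternion integration, Equation (50)
        wx, wy, wz = w_corr * 0.5 * dt
        q = q + np.array([-q1 * wx - q2 * wy - q3 * wz,
                           q0 * wx + q2 * wz - q3 * wy,
                           q0 * wy - q1 * wz + q3 * wx,
                           q0 * wz + q1 * wy - q2 * wx])
        q /= np.linalg.norm(q)
        quats.append(q.copy())
    return np.asarray(quats)                    # one quaternion per sample
```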

5. Experimental Analysis

5.1. Measurement Equipment

In Figure 7, the four IMUs (Yost Labs, USA), marked with red points, were bound to the limbs with medical tape. Each IMU includes a tri-axial accelerometer with a range of ±16 g and a tri-axial gyroscope with a range of ±2000 deg/s. The four IMUs are bound to the waist, upper leg, lower leg, and foot, respectively, and are connected by a 3.5 mm retractable spring cable. All IMUs record data synchronously during motion capture at a sampling frequency of 100 Hz, and the data are transmitted to the computer wirelessly. The white marked points belong to a wireless motion capture device, the Perception Neuron Pro (Noitom, CN), which is used as the reference system and worn on the limbs simultaneously with the IMUs for synchronous capture. The motion capture device is only used to verify the performance of the calibration algorithms.
Considering that the different gait characteristics of different people may affect the experimental results, and to avoid chance findings, accelerometer and gyroscope data were acquired in the data acquisition phase from three subjects: one female with a height of 165 cm (subject 1) and two males with heights of 175 cm (subject 2) and 180 cm (subject 3). The data of the motion capture device are used only for reference; they are separate from the data collected by the IMUs, and the two do not affect each other. As shown in Figure 8, the IMUs are mounted at position 1 and position 2, respectively, for two experiments, and accelerometer and gyroscope data are collected at both positions. To study the randomness and accuracy of the calibration algorithms, the binding positions of the IMUs do not coincide with the motion capture device, i.e., the IMU positions are displaced. Before formal motion capture, the subjects stood still for 10 s to obtain the attitude rotation matrices between adjacent IMUs in the initial state. During motion capture, the subjects walked for 15 s. To avoid deviations from their normal walking pattern, the subjects were not informed that the walking data would be used for calibration. The data collected by the IMUs were substituted into the calibration algorithms to calculate the angle changes of the hip, knee, and ankle during walking and compared with the reference values of the motion capture equipment.

5.2. Data Analysis

To analyze the accuracy of the algorithms, we compared the deviations between the three algorithms and the reference values. The angles calculated by the GN, DWPSO, and GWO are the estimated values, and the angles calculated by the motion capture equipment are the reference values. The root mean squared error (RMSE) between the estimated values and the reference value for each DOF is calculated by Equation (51).
RMSE = \sqrt{ \frac{1}{t} \sum_{m=1}^{t} \left( H_{IMU,m} - H_{MCS,m} \right)^2 },
where H_{IMU,m} is the estimated angle from the IMUs and H_{MCS,m} is the reference angle calculated by the Perception Neuron Pro motion capture equipment.
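For completeness, a short NumPy equivalent of Equation (51) is given below (our own sketch with assumed array names).

```python
import numpy as np

def rmse(h_imu, h_mcs):
    # Equation (51): root mean squared error between estimated and reference angles
    h_imu, h_mcs = np.asarray(h_imu), np.asarray(h_mcs)
    return np.sqrt(np.mean((h_imu - h_mcs) ** 2))
```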

5.3. Results and Analysis

Figure 9, Figure 10 and Figure 11 show the RMSE comparison of the three algorithms when the IMUs on the three subjects were bound in the two positions. All three algorithms achieve position calibration at both positions. In either position, the RMSE of the DWPSO is the lowest of the three algorithms, i.e., it is closest to the reference value. It is worth mentioning that the reference value changes when the IMUs change position. When the IMUs were placed in the second position, the accuracy ranking of the three algorithms remained the same. When the GWO is used for position calibration, the initial population tends to be unevenly distributed and lacks global communication, so the final solution easily falls into a local optimum. In the DWPSO algorithm, we introduce the dynamic weight to control the speed of the initial population and improve the accuracy of the algorithm. Therefore, the calibration performance of the GWO is lower than that of the DWPSO. However, the introduction of the dynamic weight increases the complexity of the PSO algorithm and reduces the efficiency of the DWPSO.
Table 1 shows the average and standard deviation (SD) of the computation times over 15 runs of the three algorithms, all completed on the same computer. As shown in Table 1, the GWO has the shortest average computation time, followed by the DWPSO, and the GN takes the longest. When high calibration accuracy together with fast execution is required, the GWO can be used for calibration. However, the SD of the GWO is the highest, indicating that the algorithm is less stable than the DWPSO and GN, which may reduce its efficiency. The DWPSO algorithm is relatively stable, and its optimization performance is better than that of the other two algorithms. When there is no requirement on speed, the DWPSO may be the best choice.
Combined with the analyses in Table 2 and Figure 9, Figure 10 and Figure 11, although the heights and sexes of the subjects differ, the variation ranges of the results are roughly the same for each subject, and the ranking of the calibration algorithms is also the same. This is because the three calibration algorithms operate under the same joint constraints, which are identical for every subject and are not affected by the subjects' different gait characteristics. Therefore, subject 1 is selected as the sample for analysis. Figure 12 shows the variation of the joint angles over 5 s with the IMUs in position 1. The angle variation waveform of each joint is consistent with the reference value, with only an up-and-down translation in amplitude. This indicates that the IMU position offset error is fixed and does not change over time.
Table 2 shows the RMSE results when the IMUs on the three subjects were bound in position 1. In H_FE, H_IE, K_FE, and A_IE, the performances of the three algorithms are close to each other. In H_AA, K_AA, K_IE, A_FE, and A_AA, the calibration performances of the DWPSO and GWO are better than that of the GN. A possible explanation is that, in these DOF, the variation of the joint is not significant, which affects the calculation of the Jacobian matrix and thus the calibration accuracy. The DWPSO and GWO do not rely on the Jacobian matrix, and their accuracy is significantly higher than that of the GN. Additionally, the results in Figure 12 show that when the joint angle is around 0°, the values of the DWPSO and GWO are closer to the reference value than those of the GN, e.g., in K_AA or A_FE. This is because such a DOF is not the main motion of the joint, which also affects the performance of the GN algorithm.
To evaluate more intuitively the consistency between the results of the three calibration algorithms and the reference, we selected the angle values of H_AA and K_IE in Figure 12 as samples and plotted the Bland–Altman diagrams for analysis. As shown in Figure 13, the x-axis is the mean of the reference and estimated values for each sample, and the y-axis is their difference. The two red lines in the figure are the upper and lower limits of the 95% consistency interval, the purple dotted line indicates a mean difference of 0, and the green line is the mean of the differences between the reference and estimated values. The closer the green line is to the purple dotted line, the higher the consistency between the reference and the estimated values. As shown in Figure 13a, for H_AA, the mean difference of the DWPSO is closest to 0 and its consistency with the reference value is the highest, the GWO is second, and the GN is the lowest. As shown in Figure 13b, the consistency for K_IE is also highest for the DWPSO. These results agree with the curves in Figure 12. Additionally, most of the points in Figure 13 lie within the confidence interval, which explains why the waveforms of the estimated and reference values are similar in Figure 12.
The above analysis shows that, for the IMU position calibration of the three joints of the human lower limbs, all three algorithms achieve good calibration results, and the calibration accuracy of the population algorithms is better than that of the GN. When the joint varies sufficiently in a certain DOF, the results of the three algorithms are close; when the joint variation is insufficient, the calibration accuracy of the population algorithms is clearly better than that of the GN. Between the two population algorithms, the DWPSO and the GWO, the choice can be made according to the practical application.

6. Conclusions

In this work, we introduce the DWPSO, GWO, and GN algorithms to realize the dynamic calibration of IMUs’ positions based on human lower limb joint constraints. The performance of the algorithm is evaluated by gait experiments. The results show that the three algorithms have achieved IMU position calibration and are suitable for estimating the angles of the hip, knee, and ankle of humans during free walking. The simulation results show that the DWPSO has the best calibration performance, followed by the GWO and GN. When the joint rotation is sufficient or the joint is in the main motion, the performances of the three algorithms are close. When the joint rotation is insufficient, the performances of the DWPSO and the GWO are significantly better than the GN.
At present, our work has achieved IMU position calibration for the human lower limbs. However, when applied to whole-body calibration, the large amount of data may degrade the search capability of the DWPSO and GWO. In future work, we need to conduct further experiments.
Another avenue of future work is that, when the IMU position offset error drifts slowly over a short period, the accelerometer and gyroscope can be combined to estimate the knee joint axis and further improve the position calibration accuracy.

Author Contributions

L.L. and Q.H. proposed the concept and method. Q.H. designed the experimental scheme. Q.H., F.M. and C.Y. performed the experiments and collected and analyzed the data. Q.H. wrote the code. L.L. reviewed the draft, made substantial comments, and was responsible for funding. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 61801180 and 81460275, and the Jiangxi Provincial Natural Science Foundation, grant numbers 20202BAB202003 and 20192BAB207007.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Written informed consent was obtained from the volunteers to publish this paper.

Data Availability Statement

All measurement data in this paper are presented within the article and may be used by peers for related research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Muro-De-La-Herran, A.; Garcia-Zapirain, B.; Mendez-Zorrilla, A. Gait analysis methods: An overview of wearable and non-wearable systems, highlighting clinical applications. Sensors 2014, 14, 3362–3394. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Tao, W.; Tao, L.; Zheng, R.; Feng, H. Gait Analysis Using Wearable Sensors. Sensors 2012, 12, 2255–2283. [Google Scholar] [CrossRef] [PubMed]
  3. Blake, A.; Grundy, C. Evaluation of motion capture systems for Golf Swings: Optical vs. gyroscopic. In Proceedings of the ITI 2008—30th International Conference on Information Technology Interfaces, Cavtat, Croatia, 23–26 June 2008; pp. 409–414. [Google Scholar]
  4. Ghasemzadeh, H.; Jafari, R. Coordination Analysis of Human Movements with Body Sensor Networks: A Signal Processing Model to Evaluate Baseball Swings. IEEE Sens. J. 2011, 11, 603–610. [Google Scholar] [CrossRef]
  5. Walsh, M.; Barton, J.; O’Flynn, B.; O’Mathuna, C.; Tyndyk, M. Capturing the overarm throw in darts employing wireless inertial measurement. In Proceedings of the SENSORS, 2011 IEEE, Limerick, Ireland, 28–31 October 2011; pp. 1441–1444. [Google Scholar]
  6. Valentina, C.; Elena, B.; Fantozz, S.; Vannozzi, G. Trends Supporting the In-Field Use of Wearable Inertial Sensors for Sport Performance Evaluation: A Systematic Review. Sensors 2018, 18, 873. [Google Scholar]
  7. Salarian, A.; Russmann, H.; Vingerhoets, F.; Dehollain, C.; Aminian, K. Gait assessment in Parkinson’s disease: Toward an ambulatory system for long-term monitoring. IEEE Trans. Biomed. Eng. 2004, 51, 1434–1443. [Google Scholar] [CrossRef] [PubMed]
  8. Anwary, A.; Yu, H.; Vassallo, M. An Automatic Gait Feature Extraction Method for Identifying Gait Asymmetry Using Wearable Sensors. Sensors 2018, 18, 676. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Palermo, E.; Rossi, S.; Marini, F.; Cappa, P. Experimental evaluation of accuracy and repeatability of a novel body-to-sensor calibration procedure for inertial sensor-based gait analysis. Measurement 2014, 52, 145–155. [Google Scholar] [CrossRef]
  10. Roetenberg, D.; Luinge, H.; Slycke, P. Xsens MVN: Full 6DOF human motion tracking using miniature inertial sensors. Xsens Motion Technol. BV 2009, 3, 1–9. [Google Scholar]
  11. Robert-Lachaine, X.; Mecheri, H.; Larue, C.; Plamondon, A. Validation of inertial measurement units with an optoelectronic system for whole-body motion analysis. Med. Biol. Eng. Comput. 2016, 55, 1–11. [Google Scholar] [CrossRef] [PubMed]
  12. Favre, J.; Aissaoui, R.; Jolles, B.M.; Guise, J.; Aminian, K. Functional calibration procedure for 3D knee joint angle description using inertial sensors. J. Biomech. 2009, 42, 2330–2335. [Google Scholar] [CrossRef] [PubMed]
  13. Cutti, A.G.; Ferrari, A.; Garofalo, P.; Raggi, M.; Cappello, A.; Ferrari, A. ’Outwalk’: A protocol for clinical gait analysis based on inertial and magnetic sensors. Med. Biol. Eng. Comput. 2010, 48, 17. [Google Scholar] [CrossRef] [PubMed]
  14. Nazarahari, M.; Rouhani, H. Semi-Automatic Sensor-to-Body Calibration of Inertial Sensors on Lower Limb Using Gait Recording. IEEE Sens. J. 2019, 19, 12465–12474. [Google Scholar] [CrossRef]
  15. Elias, L.S.V.; Elias, A.; Rocon, E.; Freire, T.; Frizera, A. An IMU-to-Body Alignment Method Applied to Human Gait Analysis. Sensors 2016, 16, 2090. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Cheng, P.; Oelmann, B. Joint-Angle Measurement Using Accelerometers and Gyroscopes-A Survey. IEEE Trans. Instrum. Meas. 2010, 59, 404–414. [Google Scholar] [CrossRef]
  17. Miezal, M.; Bleser, G.; Schmitz, N.; Stricker, D. A Generic Approach to Inertial Tracking of Arbitrary Kinematic Chains. In Proceedings of the 8th International Conference on Body Area Networks, Boston, MA, USA, 30 September–2 October 2013; Number 4. pp. 189–192. [Google Scholar]
  18. Seel, T.; Schauer, T.; Raisch, J. Joint axis and position estimation from inertial measurement data by exploiting kinematic constraints. In Proceedings of the 2012 IEEE International Conference on Control Applications, Dubrovnik, Croatia, 3–5 October 2012; pp. 45–49. [Google Scholar]
  19. Seel, T.; Raisch, J.; Schauer, T. IMU-Based Joint Angle Measurement for Gait Analysis. Sensors 2014, 14, 6891–6909. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Nerino, R.; Contin, L.; Gonçalves da Silva Pinto, W.J.; Massazza, G.; Actis, M.; Capacchione, P.; Chimienti, A.; Pettiti, G. A BSN Based Service for Post-Surgical Knee Rehabilitation at Home. In Proceedings of the 8th International Conference on Body Area Networks; ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering), Boston, MA, USA, 30 September–2 October 2013; pp. 401–407. [Google Scholar]
  21. O’Donovan, K.J.; Kamnik, R.; O’Keeffe, D.T.; Lyons, G.M. An inertial and magnetic sensor based technique for joint angle measurement. J. Biomech. 2007, 40, 2604–2611. [Google Scholar] [CrossRef] [PubMed]
  22. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation Proceedings. IEEE World Congress on Computational Intelligence (Cat. No.98TH8360), Anchorage, AK, USA, 4–9 May 1998; pp. 69–73. [Google Scholar]
  23. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  24. Wu, G.; Siegler, S.; Allard, P.; Kirtley, C.; Leardini, A.; Rosenbaum, D.; Whittle, M.; D'Lima, D.D.; Cristofolini, L.; Witte, H.; et al. ISB recommendation on definitions of joint coordinate system of various joints for the reporting of human joint motion—Part I: Ankle, hip, and spine. J. Biomech. 2002, 35, 543–548. [Google Scholar] [CrossRef]
  25. Cappozzo, A.; Catani, F.; Leardini, A.; Benedetti, M.; Della Croce, U. Position and orientation in space of bones during movement: Experimental artefacts. Clin. Biomech. 1996, 11, 90–100. [Google Scholar] [CrossRef]
  26. Craig, J.J. Introduction to Robotics: Mechanics and Control; Pearson: New York City, NY, USA, 1986. [Google Scholar]
  27. Gao, Y.; Song, D. A New Improved Genetic Algorithms and its Property Analysis. In Proceedings of the 2009 Third International Conference on Genetic and Evolutionary Computing, Guilin, China, 14–17 October 2009; pp. 73–76. [Google Scholar]
  28. Paul, R.P. Robot Manipulators: Mathematics, Programming and Control; The MIT Press: Cambridge, MA, USA, 1981. [Google Scholar]
  29. Zhou, Y.; Zhang, H.; Zhang, T.; Wang, L.; Wei, R.; Luan, M.; Liu, H.; Shi, J. A fusion attitude determination method based on quaternion for MEMS gyro/accelerometer/magnetometer. In Proceedings of the 2013 25th Chinese Control and Decision Conference (CCDC), Guiyang, China, 25–27 May 2013; pp. 3228–3232. [Google Scholar]
  30. Premerlani, W.; Bizard, P. Direction Cosine Matrix IMU: Theory; Diy Drone: USA, 2009; pp. 13–15. Available online: https://www.researchgate.net/publication/265755808_Direction_Cosine_Matrix_IMU_Theory (accessed on 19 September 2021).
  31. Goodwin, G.C.; Graebe, S.F.; Salgado, M.E. Control System Design; Prentice Hall New: Hoboken, NJ, USA, 2001; Volume 240. [Google Scholar]
Figure 1. A model of a human left lower limb connected by joints.
Figure 2. Spherical coordinates of r_{J_K,B} and r_{J_K,C}.
Figure 3. Convergence curves of dynamic weight and fixed weight with different numbers of particles. (a) N = 50; (b) N = 150; (c) N = 350.
Figure 4. Structural diagram of human lower limb.
Figure 5. Structure diagram of limbs and position vector of each joint. (a) Limb L_2; (b) Limb L_3.
Figure 6. Convergence curves of ξ for different values of λ_i.
Figure 7. The motion capture test equipment. (a) The IMUs on the leg; (b) the IMUs on the waist.
Figure 8. The IMUs are mounted in position 1 and position 2 for the experiment, respectively. (a) The waist; (b) the upper leg; (c) the lower leg; (d) the foot.
Figure 9. The RMSE (°) comparison of three algorithms when IMUs on subject 1 were bound in two positions. (a) The IMUs in position 1; (b) the IMUs in position 2.
Figure 10. The RMSE (°) comparison of three algorithms when IMUs on subject 2 were bound in two positions. (a) The IMUs in position 1; (b) the IMUs in position 2.
Figure 11. The RMSE (°) comparison of three algorithms when IMUs on subject 3 were bound in two positions. (a) The IMUs in position 1; (b) the IMUs in position 2.
Figure 12. The variation of the joint DOF when IMUs on subject 1 were bound in position 1. (a–c) The hip joint; (d–f) the knee joint; (g–i) the ankle joint.
Figure 13. The Bland–Altman consistency analysis of estimated and reference values of three algorithms when IMUs in position 1. (a) H_AA; (b) K_IE.
Table 1. Average and standard deviation (SD) of 15 computation times of the DWPSO, GWO, and GN.
Algorithm Type | Average (s) | SD
DWPSO          | 1076.1      | 2.01
GWO            | 576.3       | 3.76
GN             | 1556.4      | 2.98
Table 2. The RMSE (°) comparison of three algorithms when IMUs on three subjects were bound in position 1.
      | Subject 1             | Subject 2             | Subject 3
      | DWPSO | GWO   | GN    | DWPSO | GWO   | GN    | DWPSO | GWO   | GN
H_FE  | 8.65  | 9.09  | 9.36  | 10.63 | 10.97 | 11.05 | 8.17  | 9.65  | 11.90
H_AA  | 3.72  | 6.42  | 7.90  | 5.97  | 7.36  | 12.53 | 3.42  | 7.83  | 9.61
H_IE  | 4.53  | 5.13  | 5.26  | 3.29  | 4.15  | 4.69  | 5.71  | 6.08  | 6.86
K_FE  | 5.77  | 6.86  | 7.45  | 4.35  | 5.61  | 5.99  | 2.06  | 3.41  | 3.76
K_AA  | 1.12  | 3.42  | 5.02  | 6.01  | 7.86  | 10.34 | 4.98  | 6.93  | 9.28
K_IE  | 3.16  | 5.26  | 7.02  | 4.54  | 6.37  | 7.63  | 1.57  | 4.25  | 6.97
A_FE  | 4.03  | 5.69  | 7.36  | 3.81  | 4.08  | 6.62  | 5.43  | 8.62  | 9.45
A_AA  | 5.76  | 6.83  | 7.71  | 3.55  | 5.67  | 9.71  | 3.26  | 4.54  | 4.86
A_IE  | 21.05 | 23.07 | 23.45 | 25.41 | 26.06 | 26.83 | 20.25 | 21.79 | 23.67
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
