Article

Robust Plug-and-Play Joint Axis Estimation Using Inertial Sensors

1 Systems and Control, Department of Information Technology, Uppsala University, SE-75105 Uppsala, Sweden
2 Delft Center for Systems and Control, Delft University of Technology, 2628 CD Delft, The Netherlands
3 Control Systems Group, Technische Universität Berlin, 10623 Berlin, Germany
4 Department of Mechatronics, Campus Estado de Mexico, Tecnologico de Monterrey, Monterrey 64849, NL, Mexico
* Author to whom correspondence should be addressed.
Sensors 2020, 20(12), 3534; https://doi.org/10.3390/s20123534
Submission received: 17 April 2020 / Revised: 29 May 2020 / Accepted: 16 June 2020 / Published: 22 June 2020
(This article belongs to the Special Issue Human and Animal Motion Tracking Using Inertial Sensors)

Abstract:
Inertial motion capture relies on accurate sensor-to-segment calibration. When two segments are connected by a hinge joint, for example in human knee or finger joints as well as in many robotic limbs, the joint axis vector must be identified in the intrinsic sensor coordinate systems. Methods for estimating the joint axis using accelerations and angular rates of arbitrary motion have been proposed, but the user must perform sufficiently informative motion in a predefined initial time window to accomplish complete identifiability. Another drawback of state-of-the-art methods is that the user has no way of knowing whether the calibration was successful. To achieve plug-and-play calibration, it is therefore important that (1) sufficiently informative data can be extracted even if large portions of the data set consist of non-informative motions, and (2) the user knows when the calibration has reached a sufficient level of accuracy. In the current paper, we propose a novel method that achieves both of these goals. The method combines acceleration and angular rate information and finds a globally optimal estimate of the joint axis. Methods for sample selection that overcome the limitation of a dedicated initial calibration time window are proposed. The sample selection allows estimation to be performed using only a small subset of samples from a larger data set, as it deselects non-informative and redundant measurements. Finally, an uncertainty quantification method that assures validity of the estimated joint axis parameters is proposed. Experimental validation of the method is provided using a mechanical joint performing a large range of motions. Angular errors on the order of 2° were achieved using 125–1000 selected samples. The proposed method is the first truly plug-and-play method that overcomes the need for a specific calibration phase and, regardless of the user's motions, it provides an accurate estimate of the joint axis as soon as possible.

1. Introduction

Wearable inertial measurement units (IMUs) have become a key technology for a range of applications, from performance assessment and optimization in sports [1], to objective measurements and progress monitoring in health care [2], as well as real-time motion tracking for feedback-controlled robotic or neuroprosthetic systems [3]. In all these application domains, IMUs are used to track or capture the motion of mechatronic or biological joint systems such as robotic or human limbs. In this work we consider such systems where the joint is a hinge joint with one degree of freedom. Examples of hinge joints include the knee and finger joints, which are essential in applications targeting lower limb [4] and hand [5] kinematics.
In contrast to stationary optical motion tracking systems, miniature IMU networks can be used in ambulatory settings and facilitate motion tracking outside lab environments. While this is an important step towards ubiquitous sensing, one major limitation of the technology is that the IMUs’ local coordinate systems must be aligned with the anatomical axes of the joints and body segments to which they are attached. This sensor-to-segment calibration is a crucial step that establishes the connection between the motion of the IMUs and the motion of the joint system to which they are attached.
Several different approaches have been proposed for sensor-to-segment calibration of inertial sensor networks, ranging from precise sensor attachment that aligns the sensor axes with the body axes to predefined calibration poses and motions; see, e.g., [6,7,8,9]. However, in all of these cases, the calibration crucially depends on the knowledge and skills of the person who attaches the sensors or the person who performs the calibration procedure. This might be acceptable in supervised settings with trained and able-bodied users, but it represents a major limitation of IMU-based motion tracking and capture in clinical applications and in motion assessment of elderly and children. Finding solutions for these application domains and enabling ubiquitous sensing in daily life requires the development of less restrictive methods for sensor-to-segment calibration.
Ideally, wearable IMU networks should be plug-and-play, and the sensor-to-segment calibration should be performed by the network autonomously, which means without additional effort or requirements on the user’s knowledge or on the performed motion. An important step towards this goal was the development of methods that exploit the kinematic constraints of the joints to identify sensor-to-segment calibration parameters from almost arbitrary motions [10,11]. For joints with one degree of freedom (DOF), the feasibility of this approach has been demonstrated [12,13,14,15]. Methods have been proposed that require the user to perform a sufficiently informative but otherwise arbitrary motion during an initial calibration time window and determine the functional joint axis in intrinsic coordinates of both IMUs, cf. Figure 1. It was recently shown that almost every motion, including purely sequential motions and simultaneous planar motions, is informative enough to render the joint axis identifiable unless the joint remains stiff throughout the motion [16].
Several methods targeting different types of joints or sensor-to-segment calibration parameters have been developed. In [17], a method for identifying the joint axes of a joint with two DOF was proposed. Methods for identifying the position of the joint center relative to sensors attached to adjacent segments have been proposed in [12,18,19]. A method enabling automatic pairing of sensors to lower limb segments has been proposed in [20].
The published kinematic-constraint-based methods constitute an important step forward but still impose undesirable and unnecessary limitations. If the user does not move during the initial calibration time window or if the motion is not sufficiently informative, the calibration will be wrong and all subsequently derived motion parameters will be subject to unpredictable errors. For a truly plug-and-play system, it is therefore crucial that the IMU network is able to
  • Recognize how informative motions are and whether they render the joint axis identifiable;
  • Wait for sufficiently informative data to be generated and combine useful data even if it is spread out and interspersed with useless data;
  • Determine how accurate the current estimate of the joint axis is and provide only sufficiently reliable estimates.
An IMU network with such properties can be used without the aforementioned limitations. Once it is installed, it will autonomously gather all available useful information and provide reliable calibration parameters as soon as possible, which immediately enable calculation of accurate motion parameters from the incoming raw data as well as from already recorded data. To explain the practical value of the proposed concept of plug-and-play calibration, we briefly compare this concept to the aforementioned existing calibration concepts that use predefined motions [6,7,8,9] or arbitrary motions [10,11,12,13,14,15]:
Predefined-Motions: The calibration is based on the assumption that the user performs a sequence of predefined motions and poses with sufficient precision within a predefined initial time interval. The approach fails and provides inaccurate calibration without warning if
(a) The user performs the sequence of predefined motions and poses without sufficient precision;
(b) The user performs the sequence with sufficient precision but not within the predefined initial time interval;
(c) The user performs sufficiently informative but otherwise arbitrary motions;
(d) The user performs no sufficiently informative motion at all, e.g., he/she moves with a stiff joint.
Arbitrary-Motions: The calibration is based on the assumption that the user performs sufficiently informative but otherwise arbitrary motions within a predefined initial time interval. The motion does not need to be precise, and it has been shown that sufficient excitation is provided by almost every motion for which the joint does not remain stiff [16]. However, the approach fails and provides inaccurate calibration without warning if
(a) The user performs a sequence of predefined motions but not within the predefined initial time interval;
(b) The user performs sufficiently informative arbitrary motions but not within the predefined initial time interval;
(c) The user performs no sufficiently informative motion at all, e.g., he/she moves with a stiff joint.
Plug-and-Play: The proposed sensor-to-segment calibration approach. It works well for all mentioned cases and exceptions in the sense that it always provides accurate calibration parameters as soon as the user’s motions are sufficiently informative, and it clearly indicates at all times whether the desired calibration accuracy has yet been reached.
It is important to note that the cases without warning are very dangerous, because inaccurate information is provided and claimed as accurate. In many applications, this leads to unacceptable risks. This and the other listed differences between the two existing approaches and the proposed new method have large implications for the way wearable IMU networks can be used in offline and online applications.
Offline Applications include motion capture for ergonomic workplace assessment [21], for monitoring of movement disorders [2] and for sport performance analysis [1]. In state-of-the-art solutions, the user performs an initial calibration procedure before (or after) recording data from the motions to be analyzed. The user can only hope that the calibration was accurate enough. If the calibration was inaccurate, then all recorded data is corrupted and might lead to false interpretation and conclusions. In contrast, when the calibration is plug-and-play, the user starts recording data from motions that should be analyzed immediately after attaching the sensors. Calibration automatically takes place as soon as sufficiently informative data has been gathered. The system indicates that calibration has been successful, and the user can be sure that all obtained measurements are valid and accurate. The identified calibration parameters are used to evaluate the data that was recorded before and after the moment at which accurate calibration was achieved.
Online Applications include real-time motion tracking for wearable biofeedback systems [22] as well as robotic and neuroprosthetic motion support systems [23]. In state-of-the-art solutions, the user first performs an initial calibration procedure before the sensor system is connected to an assistive device that uses the measurements to provide e.g., biofeedback or motion support. The user can only hope that the calibration was accurate enough. If the calibration was inaccurate, then the provided biofeedback or motion support might be wrong and dangerous. In contrast, when the calibration is plug-and-play, the user instead attaches the sensors and starts moving. As soon as the desired calibration accuracy has been achieved, the sensor system automatically provides measurements to the assistive device. The user can be sure that all provided biofeedback and motion support is based on valid and accurate measurements.
In the present contribution we propose the first joint axis identification method for one-dimensional joints that is plug-and-play in the aforementioned sense. The main contributions of the present work are the following:
  • We leverage recent results on joint axis identifiability [16] to develop a sample selection method that overcomes the limitation of a dedicated initial calibration time window.
  • To assure that the motion needs to fulfill only the minimum required conditions, we combine accelerometer-based and gyroscope-based joint constraints and weight them according to the information contained in both signals.
  • We propose an uncertainty quantification method that assures validity of the estimated joint axis parameters and thereby eradicates the risk of false calibration.
  • We provide an experimental validation in a mechanical joint performing a large range of different motions with different identifiability properties.
In the proposed system, successful calibration no longer depends on performing certain motions in a predefined manner or time window but only on fulfilling the minimum required conditions at some point. Moreover, the system knows when these conditions are fulfilled and provides only reliable calibration parameters.

2. Inertial Measurement Models

Inertial sensors collectively refer to accelerometers and gyroscopes, which are sensors used to measure linear acceleration and angular velocity, respectively. When a sensor has three sensitive axes that are orthogonal to each other, it can measure these quantities in three dimensions; such sensors are referred to as triaxial. An IMU is a single sensor unit that contains one triaxial accelerometer and one triaxial gyroscope. The measurements from the IMU are obtained with respect to (w.r.t.) a reference frame, referred to as the sensor frame (S), whose axes and origin correspond to those of the accelerometer triad. The axes of the gyroscope are assumed to be aligned with the axes of the accelerometer. The measured quantities describe the motion of the sensor frame w.r.t. a global frame (G) that is fixed w.r.t. the environment.
The accelerometer measurements at time $t_k$, where the integer $k$ is used as a sample index, can be modeled as
$$ y_a^S(t_k) = R^{SG}(t_k)\left(a^G(t_k) + g^G\right) + b_a^S + e_a^S(t_k), \tag{1} $$
where $a^G \in \mathbb{R}^3$ is the acceleration of the sensor w.r.t. the global frame and $g^G \in \mathbb{R}^3$ is the gravitational acceleration, which is assumed to be constant in the environment. The measurements are corrupted by a constant additive bias $b_a^S$ and noise $e_a^S(t_k) \in \mathbb{R}^3$, which is assumed to be Gaussian, $e_a^S(t_k) \sim \mathcal{N}(0, \Sigma_a)$, with zero mean and covariance matrix $\Sigma_a$. The superscripts $S$ and $G$ denote in which reference frame a quantity is expressed, and the rotation matrix $R^{SG}$ describes the rotation from the global frame to the sensor frame, i.e., we have that
$$ R^{SG}(t_k)\left(a^G(t_k) + g^G\right) = a^S(t_k) + g^S(t_k). \tag{2} $$
The multiplication between a rotation matrix and a vector is equivalent to a change of orthonormal basis.
The gyroscope measurements are modeled as
$$ y_\omega^S(t_k) = R^{SG}(t_k)\,\omega^G(t_k) + b_\omega^S + e_\omega^S(t_k), \tag{3} $$
where $\omega^G \in \mathbb{R}^3$ is the angular velocity of the sensor frame in the global frame. Similar to the accelerometer, the measurements are corrupted by a constant additive bias $b_\omega^S$ and noise $e_\omega^S(t_k) \in \mathbb{R}^3$, which is assumed to be zero-mean Gaussian, $e_\omega^S(t_k) \sim \mathcal{N}(0, \Sigma_\omega)$. Note that the same rotation matrix $R^{SG}$ as in (1) is used to rotate quantities from the global frame to the sensor frame because the accelerometer and the gyroscope are contained in the same IMU and their axes are assumed to be aligned. The gyroscope bias term $b_\omega^S$ can be compensated for through pre-calibration of the gyroscopes [24]. In Section 7.5, we will evaluate the effect of uncompensated biases on the proposed method.
Biases and Gaussian measurement noise have been shown to be the dominating error sources, even for low-cost IMUs [25]. However, for longer experiments or for low-quality IMUs, there are other types of errors that may need to be considered. These errors can still be well compensated for by pre-calibration or by online auto-calibration methods. Therefore, we only consider biases in our models, as these are the dominating systematic errors. The bias terms $b_a$ and $b_\omega$ are not constant, but drift slowly over time [26]. Sensor manufacturers typically provide a bias stability metric for their sensors, which tells the user the expected rate of the bias drift. Bias instability in inertial sensors is primarily caused by low-frequency flicker noise in the electronics and by temperature fluctuations [27]. If the bias drift is significant enough that it needs to be compensated for, there are methods that model the biases as time- or temperature-dependent, enabling continuous estimation of drifting biases (see, e.g., [28,29]). Such methods can be used in combination with the method proposed in this paper. Low-quality IMUs may be affected by other systematic errors such as non-unit scale factors and misalignments/non-orthogonalities of the sensor axes. If the effects of these types of errors are non-negligible, it is advised to perform a more sophisticated pre-calibration of the sensors to compensate for them. Methods for in-field pre-calibration of such errors exist; see, e.g., [30,31,32,33].
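To make the measurement models concrete, the following minimal sketch generates one synthetic IMU sample according to (1) and (3). It assumes NumPy; the rotation matrix, bias vectors and noise levels are illustrative placeholders, not values from the paper.

```python
import numpy as np

def simulate_imu_sample(R_SG, a_G, omega_G, g_G=np.array([0.0, 0.0, 9.81]),
                        b_a=np.zeros(3), b_w=np.zeros(3),
                        sigma_a=0.05, sigma_w=0.01,
                        rng=np.random.default_rng(0)):
    """Generate one accelerometer and gyroscope sample following (1) and (3).

    R_SG rotates global-frame vectors into the sensor frame; a_G and omega_G are
    the true acceleration and angular velocity of the sensor in the global frame.
    """
    y_a = R_SG @ (a_G + g_G) + b_a + rng.normal(0.0, sigma_a, 3)  # accelerometer model (1)
    y_w = R_SG @ omega_G + b_w + rng.normal(0.0, sigma_w, 3)      # gyroscope model (3)
    return y_a, y_w
```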

3. Kinematics

The kinematic model of the hinge joint system has been described in previous works [12,15,16], and is recapitulated here in Section 3.1 and Section 3.2 for completeness.

3.1. Kinematic Constraints of Two Segments in a Kinematic Chain

Consider the kinematic chain model in which two rigid body segments are connected by a joint. The joint can have 1, 2 or 3 degrees of freedom (DOF). Furthermore, consider the case where each segment has one IMU rigidly attached to it in an arbitrary position and orientation. We therefore have two sensor frames, denoted by $S_1$ and $S_2$, that are fixed in the center of the accelerometer triad of each IMU. The DOF of the joint determines how many angles are required to describe the orientation of $S_2$ w.r.t. $S_1$ and vice versa. We let the subscript $i \in \{1, 2\}$ denote quantities belonging to a specific sensor frame. Rigid body kinematics gives
$$ a_i^{S_i}(t) = a_0^{S_i}(t) + \omega_i^{S_i}(t) \times \left( \omega_i^{S_i}(t) \times r_i^{S_i} \right) + \dot{\omega}_i^{S_i}(t) \times r_i^{S_i}, \tag{4} $$
where $a_i$ are the accelerations of the sensor frames with $i \in \{1, 2\}$, $a_0$ is the acceleration of the joint center, $\omega_i$ and $\dot{\omega}_i$ are the angular velocities and angular accelerations of the sensor frames, and $t$ is used to denote time-dependence of the kinematic variables. The positions of the joint center with respect to each sensor frame are denoted by $r_i$, which we assume to be unknown and constant for each sensor. All quantities in (4) are vectors in $\mathbb{R}^3$ since they describe 3D motion. The acceleration of the joint center expressed in either of the sensor frames has the same magnitude but a different orientation. We have that
$$ a_0^G(t) = R^{GS_1}(t)\, a_0^{S_1}(t) = R^{GS_2}(t)\, a_0^{S_2}(t), \tag{5} $$
where $R^{GS_i}$ are the rotation matrices that map a vector expressed in $S_i$ into the global frame.
For convenience, we shall drop the use of the superscripts for the remainder of this document except where needed. Hence, the sensor frame of a kinematic variable will be given by the subscript $i \in \{1, 2\}$. We will also drop the use of $t$ to denote time-dependence unless we want to refer to the kinematic variables at specific time instances. The relationship in (4) is linear in $a_0$ and $r_i$ and can equivalently be formulated as
$$ a_i = a_0^{S_i} + K(\omega_i, \dot{\omega}_i)\, r_i, \tag{6} $$
where
$$ K(\omega, \dot{\omega}) = \begin{bmatrix} -\omega_y^2 - \omega_z^2 & \omega_x \omega_y - \dot{\omega}_z & \omega_x \omega_z + \dot{\omega}_y \\ \omega_x \omega_y + \dot{\omega}_z & -\omega_x^2 - \omega_z^2 & \omega_y \omega_z - \dot{\omega}_x \\ \omega_x \omega_z - \dot{\omega}_y & \omega_y \omega_z + \dot{\omega}_x & -\omega_x^2 - \omega_y^2 \end{bmatrix}, \tag{7} $$
and where the subscripts $x, y, z$ denote the elements of the three-dimensional vectors. For convenience of notation, we will write $K_i = K(\omega_i, \dot{\omega}_i)$.
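As a cross-check of (6) and (7), the following sketch constructs $K(\omega, \dot{\omega})$ from skew-symmetric matrices, using the identity $\omega \times (\omega \times r) + \dot{\omega} \times r = \left( [\omega]_\times [\omega]_\times + [\dot{\omega}]_\times \right) r$. It assumes NumPy and is only an illustration of the kinematic relation, not code from the paper.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def K(omega, omega_dot):
    """Matrix K(omega, omega_dot) in (7), so that (6) reads a_i = a_0 + K @ r_i."""
    return skew(omega) @ skew(omega) + skew(omega_dot)

# Example: the rotational part of (4) equals K(omega, omega_dot) @ r.
omega, omega_dot, r = np.array([0.1, 0.2, 0.3]), np.array([0.0, 0.5, 0.0]), np.array([0.0, 0.0, 0.1])
assert np.allclose(np.cross(omega, np.cross(omega, r)) + np.cross(omega_dot, r),
                   K(omega, omega_dot) @ r)
```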

3.2. Kinematic Constraints of a Hinge Joint System

For a 1-DOF joint, the two segments can only rotate independently with respect to each other about the joint axis. We let $\|\cdot\|$ denote the Euclidean vector norm; the joint axis is then defined by the unit vector $j \in \mathbb{R}^3$, $\|j\| = 1$. We refer to such a joint as a hinge joint. We let $j_1$ and $j_2$ denote the direction of the joint axis in the respective sensor frames. Since the two IMUs are assumed to be rigidly attached to the segments, $j_1$ and $j_2$ are constant. The joint axis $j$ expressed in the global frame must then satisfy
$$ j^G(t) = R^{GS_1}(t)\, j_1^{S_1} = R^{GS_2}(t)\, j_2^{S_2}, \tag{8} $$
meaning that the vectors $j_i$ expressed in the two sensor frames have the same direction as $j$ in the global frame, see Figure 1, and time-dependence is only caused by the rotations of the sensor frames in the global frame. We can decompose the angular velocities into one component that is parallel to the joint axis and one that is perpendicular to the joint axis
$$ \omega_i = \omega_i^{\parallel j} + \omega_i^{\perp j}, \tag{9} $$
$$ \omega_i^{\parallel j} = \left( j_i^\top \omega_i \right) j_i, \tag{10} $$
$$ \omega_i^{\perp j} = \omega_i - \omega_i^{\parallel j} = \omega_i - \left( j_i^\top \omega_i \right) j_i. \tag{11} $$
Since the two segments can only rotate independently about the joint axis, it follows that the perpendicular components must have the same magnitude regardless of reference frame
$$ \left\| \omega_1^{\perp j} \right\| = \left\| \omega_2^{\perp j} \right\|. \tag{12} $$
The magnitude of the perpendicular component can also be computed from the cross product between the angular velocity and the joint axis
$$ \left\| \omega_i - \left( j_i^\top \omega_i \right) j_i \right\| = \left\| \omega_i \times j_i \right\|. \tag{13} $$
Combining (12) and (13) we formulate the angular velocity constraint
$$ \left\| \omega_1 \times j_1 \right\| - \left\| \omega_2 \times j_2 \right\| = 0, \tag{14} $$
which must be satisfied by hinge joint systems.
Looking at the projection of the accelerations onto the joint axis, from (6) we have that
$$ \left( j_i^\top a_i \right) j_i = \left( j_i^\top a_0^{S_i} \right) j_i + \left( j_i^\top K_i r_i \right) j_i. \tag{15} $$
Because $j_i$ has the same direction as $j$ in the global frame, the projection of $a_0$ onto $j_i$ must also be the same in both sensor frames; it follows from (5) and (8) that
$$ \left( j^{G\top} a_0^G \right) j^G = R^{GS_1} j_1 \left( j_1^\top a_0^{S_1} \right) = R^{GS_2} j_2 \left( j_2^\top a_0^{S_2} \right) \;\Rightarrow\; j_1^\top a_0^{S_1} = j_2^\top a_0^{S_2}. \tag{16} $$
By projecting the accelerations onto the joint axis and subtracting one from the other we get
$$ j_1^\top a_1 - j_2^\top a_2 = j_1^\top a_0^{S_1} - j_2^\top a_0^{S_2} + j_1^\top K_1 r_1 - j_2^\top K_2 r_2 = j_1^\top K_1 r_1 - j_2^\top K_2 r_2, \tag{17} $$
where we see that only the rotational components of the accelerations remain on the right hand side. The relationship (17) is the exact acceleration constraint of the hinge joint system. The right hand side (r.h.s.) of (17) is zero if and only if either $K_i r_i \perp j_i$ or $K_i = 0$ is satisfied for all $i \in \{1, 2\}$. It is clear that if the rotational acceleration components along the direction of the joint axis are small ($j_i^\top K_i r_i \approx 0$, $\forall i$), the r.h.s. will vanish,
$$ j_1^\top a_1 - j_2^\top a_2 \approx 0, \tag{18} $$
which forms the approximate acceleration constraint for the hinge joint system.

4. Joint Axis Estimation

We assume that we have two IMUs, one attached to each segment of a hinge joint system, and that measurements from a completely unspecified motion have been collected. We will use $y_{\omega,i}$ to refer to the gyroscope measurements (3) and $y_{a,i}$ to refer to the accelerometer measurements (1) from Sensor $i \in \{1, 2\}$. We will use the non-indexed $y_\omega$ and $y_a$ to refer to measurements from both sensors as
$$ y_\omega = \begin{bmatrix} y_{\omega,1} \\ y_{\omega,2} \end{bmatrix}, \tag{19} $$
and similarly for $y_a$. We let $\mathcal{D}_N = \{ y_\omega^N, y_a^N \}$ denote our data, which consist of $N$ samples of recorded motion. Each sample in the data set is assigned a sample index $k \in \{1, \ldots, N\}$, such that $t_k$ refers to the sampling time of the $k$th measurement relative to the beginning of the recorded motion.
Given the data $\mathcal{D}_N$ from the two IMUs, the variables we want to estimate are the unit vectors $j_i$, which correspond to the directions of the joint axis $j$ in the two sensor frames. We let $\hat{j}_i$ denote the estimate of $j_i$. Note that the joint axis in one sensor frame can be described by either $\pm j_i$, since a clockwise rotation w.r.t. the positive axis is equivalent to a counter-clockwise rotation w.r.t. the negative axis. However, we require both $j_1$ and $j_2$ to have the same sign (direction), corresponding to either $+j$ or $-j$ in the global frame; otherwise a clockwise rotation for one sensor might be considered a counter-clockwise rotation for the other sensor and vice versa. That is, the sign pairing of the joint axes in the sensor coordinate frames is important. Consequently, $(\pm j_1, \pm j_2)$ is the correct sign pairing and $(\pm j_1, \mp j_2)$ is the wrong sign pairing.

4.1. Formulating the Optimization Problem

We parametrize $j_i$ using spherical coordinates to enforce the unit vector constraint
$$ x = \begin{bmatrix} \theta_1 & \phi_1 & \theta_2 & \phi_2 \end{bmatrix}^\top, \tag{20} $$
$$ j_i(x) = \begin{bmatrix} \cos\theta_i \cos\phi_i & \cos\theta_i \sin\phi_i & \sin\theta_i \end{bmatrix}^\top, \tag{21} $$
which then become the unknown parameters to estimate. The estimation problem for the joint axis is formulated as
$$ \hat{x} = \arg\min_x V(x), \tag{22} $$
$$ V(x) = \sum_{k=1}^{N} \left[ e_\omega(k, x) \right]^2 + \left[ e_a(k, x) \right]^2, \tag{23} $$
where $e_\omega(k, x)$ and $e_a(k, x)$ are scalar residual terms based on the angular velocity constraint (14) and the acceleration constraint (18) of the hinge joint system
$$ e_\omega(k, x) = w_\omega \left[ \left\| y_{\omega,1}(t_k) \times j_1(x) \right\| - \left\| y_{\omega,2}(t_k) \times j_2(x) \right\| \right], \tag{24} $$
$$ e_a(k, x) = w_a \left[ j_1(x)^\top y_{a,1}(t_k) - j_2(x)^\top y_{a,2}(t_k) \right]. \tag{25} $$
Two scalars $w_\omega$ and $w_a$ are used to change the relative weighting of the residuals.
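The following sketch, assuming NumPy, evaluates the spherical parametrization (21) and the residuals (24)–(25) for a batch of $N$ samples, with the measurements stored as $N \times 3$ arrays. It is an illustrative implementation of the cost (23), not code provided by the authors.

```python
import numpy as np

def joint_axis(theta, phi):
    """Unit joint axis from the spherical parametrization (21)."""
    return np.array([np.cos(theta) * np.cos(phi),
                     np.cos(theta) * np.sin(phi),
                     np.sin(theta)])

def residuals(x, y_w1, y_w2, y_a1, y_a2, w_w=1.0, w_a=1.0):
    """Stacked residuals (24)-(25); each y array has one sample per row."""
    j1, j2 = joint_axis(x[0], x[1]), joint_axis(x[2], x[3])
    e_w = w_w * (np.linalg.norm(np.cross(y_w1, j1), axis=1)
                 - np.linalg.norm(np.cross(y_w2, j2), axis=1))
    e_a = w_a * (y_a1 @ j1 - y_a2 @ j2)
    return np.concatenate([e_w, e_a])

def cost(x, *args):
    """Cost function V(x) in (23)."""
    e = residuals(x, *args)
    return float(e @ e)
```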

4.2. Identifiability and Local Minima

For the gyroscope measurements to contain information about the joint axis, they have to be recorded from motions where the joint angle is excited, i.e., when the two segments rotate independently. These motions should contain either simultaneous planar rotations, where the segments rotate simultaneously in the plane perpendicular to the joint axis, or sequential rotations of the segments. However, stiff joint motions, which can have a significant angular rate but no independent rotation of the segments, do not facilitate identifiability of the joint axis [16]. For the non-informative stiff joint motions, the relative rotation of the two sensors can be described by a time-invariant rotation matrix R and we have that
$$ \left\| \omega_2(t_k) \times j_2 \right\| = \left\| R \left( \omega_1(t_k) \times j_1 \right) \right\| = \left\| \omega_1(t_k) \times j_1 \right\|, \tag{26} $$
where we see that for any choice of $j_1$, the vector $j_2 = R j_1$ will minimize the gyroscope residual (24). Therefore, we want motions where $\omega_1(t_k) \neq \omega_2(t_k)$, which implies that the segments are rotating independently, and we require motions where $\|\omega_i(t_k)\| > 0$ for at least some time, since $\omega_i(t_k) = 0 \Rightarrow \omega_i(t_k) \times j_i = 0$ for all $j_i$.
If only acceleration information is considered, we get the following over-determined system of linear equations
$$ \underbrace{\begin{bmatrix} a_1^\top(t_1) & -a_2^\top(t_1) \\ \vdots & \vdots \\ a_1^\top(t_M) & -a_2^\top(t_M) \end{bmatrix}}_{A} \begin{bmatrix} j_1 \\ j_2 \end{bmatrix} = 0, \tag{27} $$
which has a unique solution if $\operatorname{rank}(A) = 5$, in which case $\begin{bmatrix} j_1^\top & j_2^\top \end{bmatrix}^\top$ lies in the null-space of $A$. This holds when the acceleration constraint holds exactly for all $t_k$, the accelerations measured are exact, and the angular rates and angular accelerations of the sensors are parallel with $j$ [16]. Therefore, for the accelerometer, we want measurements that increase the separation between the column-space and the null-space of $A$.
The proposed method uses both gyroscope and accelerometer information, and their relative contribution to the cost function is controlled by the weight parameters $w_\omega$ and $w_a$. Figure 2 shows how the weights affect the cost function in the case that $w_a = 1$ and $w_\omega$ is allowed to vary. For small $w_\omega$, the local minima correspond to the correct sign pairing $(\pm j_1, \pm j_2)$, whereas the local maxima correspond to the wrong sign pairing $(\pm j_1, \mp j_2)$. Note that each local minimum is equally valid for small $w_\omega$ because of the periodicity of the spherical coordinates. The acceleration residuals are relatively large whereas the gyroscope residuals are relatively small at the locations corresponding to the wrong sign pairing. Therefore, as $w_\omega$ increases, the gyroscope residuals will contribute more to the cost function. The peaks associated with the wrong sign pairing are flattened and new local minima will eventually appear at these locations. Therefore, for large $w_\omega$ an optimization method (solver) can end up in the wrong local minimum. However, regardless of which sign pairing the solver finds, the opposite sign pairing can always be obtained by negating one of the estimated axes, e.g., $j_2$, which in the parametrization (21) corresponds to replacing $(\theta_2, \phi_2)$ by $(-\theta_2, \phi_2 + \pi)$. Therefore, if our solver finds the estimate $\hat{x}^{(1)}$ we can reinitialize at
$$ \begin{bmatrix} \hat{\theta}_1^{(1)} & \hat{\phi}_1^{(1)} & -\hat{\theta}_2^{(1)} & \hat{\phi}_2^{(1)} + \pi \end{bmatrix}^\top, \tag{28} $$
and obtain a new estimate $\hat{x}^{(2)}$. Then we select the local minimum with the smallest value of the cost function as our estimate
$$ \hat{x} = \arg\min_{x \in \{ \hat{x}^{(1)}, \hat{x}^{(2)} \}} V(x). \tag{29} $$
Therefore, it is possible to find the correct sign pairing as long as $V(\hat{x}^{(2)})$ is numerically distinguishable from $V(\hat{x}^{(1)})$. As discussed in this section and shown in Figure 2, the relative weighting of the residuals determines how easy it is to distinguish a correct local minimum from a wrong one. If $w_\omega$ is set to be significantly larger than $w_a$, we expect the acceleration residuals to eventually become so small relative to the gyroscope residuals that the solver is no longer sensitive enough to detect the difference between correct and wrong local minima.

4.3. Solving the Optimization Problem

The optimization problem (22) is a nonlinear least-squares problem. An efficient solver for such problems is the Gauss–Newton method [34]. Given an initial estimate $\hat{x}^{(0)}$, the Gauss–Newton method iteratively updates the estimate according to
$$ \hat{x}^{(k+1)} = \hat{x}^{(k)} - \alpha \left( J(\hat{x}^{(k)})^\top J(\hat{x}^{(k)}) \right)^{-1} J(\hat{x}^{(k)})^\top e(\hat{x}^{(k)}) = \hat{x}^{(k)} - \alpha\, \Delta x^{(k)}, \tag{30} $$
where $k$ is only used here as an integer index denoting the iterations of the method and is not to be confused with the sample index. The method uses the Jacobian matrix $J(x) \in \mathbb{R}^{2N \times 4}$, which contains all first-order partial derivatives of $e_\omega$ and $e_a$,
$$ J(x) = \begin{bmatrix} \frac{\partial e_\omega(1,x)}{\partial \theta_1} & \frac{\partial e_\omega(1,x)}{\partial \phi_1} & \frac{\partial e_\omega(1,x)}{\partial \theta_2} & \frac{\partial e_\omega(1,x)}{\partial \phi_2} \\ \vdots & \vdots & \vdots & \vdots \\ \frac{\partial e_\omega(N,x)}{\partial \theta_1} & \frac{\partial e_\omega(N,x)}{\partial \phi_1} & \frac{\partial e_\omega(N,x)}{\partial \theta_2} & \frac{\partial e_\omega(N,x)}{\partial \phi_2} \\ \frac{\partial e_a(1,x)}{\partial \theta_1} & \frac{\partial e_a(1,x)}{\partial \phi_1} & \frac{\partial e_a(1,x)}{\partial \theta_2} & \frac{\partial e_a(1,x)}{\partial \phi_2} \\ \vdots & \vdots & \vdots & \vdots \\ \frac{\partial e_a(N,x)}{\partial \theta_1} & \frac{\partial e_a(N,x)}{\partial \phi_1} & \frac{\partial e_a(N,x)}{\partial \theta_2} & \frac{\partial e_a(N,x)}{\partial \phi_2} \end{bmatrix}, \tag{31} $$
and $e(x) \in \mathbb{R}^{2N}$ is the residual vector
$$ e(x) = \begin{bmatrix} e_\omega(1,x) & \cdots & e_\omega(N,x) & e_a(1,x) & \cdots & e_a(N,x) \end{bmatrix}^\top. \tag{32} $$
The term $J(x)^\top J(x)$ in (30) is an approximation of the Hessian of $V(x)$, which is given by
$$ \frac{\mathrm{d}^2 V(x)}{\mathrm{d} x^2} = J(x)^\top J(x) + \sum_{k=1}^{N} e_\omega(k,x) \frac{\mathrm{d}^2 e_\omega(k,x)}{\mathrm{d} x^2} + \sum_{k=1}^{N} e_a(k,x) \frac{\mathrm{d}^2 e_a(k,x)}{\mathrm{d} x^2}, \tag{33} $$
where the higher-order terms are ignored, yielding
$$ \frac{\mathrm{d}^2 V(x)}{\mathrm{d} x^2} \approx J(x)^\top J(x). \tag{34} $$
The partial derivatives of the residuals (24) and (25) in the Jacobian (31) are computed in the following way using the chain rule
$$ \frac{\partial e_\omega(k,x)}{\partial x} = w_\omega\, \frac{\partial e_\omega(k,x)}{\partial j}\, \frac{\partial j}{\partial x}, \tag{35} $$
$$ \frac{\partial e_\omega(k,x)}{\partial j} = \begin{bmatrix} \dfrac{\partial \left\| y_{\omega,1}(t_k) \times j_1(x) \right\|}{\partial j_1} & -\dfrac{\partial \left\| y_{\omega,2}(t_k) \times j_2(x) \right\|}{\partial j_2} \end{bmatrix}, \tag{36} $$
$$ \frac{\partial \left\| y_{\omega,i}(t_k) \times j_i(x) \right\|}{\partial j_i} = \frac{\left( \left( y_{\omega,i}(t_k) \times j_i \right) \times y_{\omega,i}(t_k) \right)^\top}{\left\| y_{\omega,i}(t_k) \times j_i(x) \right\|}, \tag{37} $$
$$ \frac{\partial e_a(k,x)}{\partial x} = \frac{\partial e_a(k,x)}{\partial j}\, \frac{\partial j}{\partial x}, \tag{38} $$
$$ \frac{\partial e_a(k,x)}{\partial j} = w_a \begin{bmatrix} y_{a,1}(t_k)^\top & -y_{a,2}(t_k)^\top \end{bmatrix}, \tag{39} $$
$$ \frac{\partial j}{\partial x} = \begin{bmatrix} \dfrac{\partial j_1}{\partial x_1} & 0 \\ 0 & \dfrac{\partial j_2}{\partial x_2} \end{bmatrix}, \tag{40} $$
$$ \frac{\partial j_i}{\partial x_i} = \begin{bmatrix} -\sin\theta_i \cos\phi_i & -\cos\theta_i \sin\phi_i \\ -\sin\theta_i \sin\phi_i & \cos\theta_i \cos\phi_i \\ \cos\theta_i & 0 \end{bmatrix}. \tag{41} $$
The term $\Delta x$ in (30) defines the search direction, and $-\Delta x$ is a descent direction, meaning that moving our estimate in that direction will decrease the value of the cost function. The scalar $0 < \alpha \leq 1$ is known as the step length, which controls how far our estimate moves in the descent direction. By using a method known as backtracking line search [35], we find a value for $\alpha$ that is guaranteed to lower the value of the cost function. If no such $\alpha$ is found or the change in the value of $V(x)$ is too small, below a set tolerance level $V_{\text{tol}}$, the Gauss–Newton method terminates and returns the estimate corresponding to the current iteration, $\hat{x} = \hat{x}^{(k)}$.
The complete joint axis estimation method, including the steps of the Gauss–Newton method and the re-initialization step (28) required to identify the minimum corresponding to the correct sign pairing, is described in Algorithm 1.
Algorithm 1 Joint axis estimation
Require: Data $\mathcal{D}_N = \{ y_\omega^N, y_a^N \}$, initial estimate $\hat{x}^{(0)}$, tolerance $V_{\text{tol}}$, residual weights $w_\omega$ and $w_a$.
1: for $i \in \{1, 2\}$ do
2:  $k \leftarrow 0$.  ▹ Begin Gauss–Newton.
3:  $\Delta V \leftarrow V_{\text{tol}}$.
4:  $V^{(0)} \leftarrow V(\hat{x}^{(0)})$.  ▹ $V(x)$ defined by (23).
5:  while $\Delta V \geq V_{\text{tol}}$ do
6:   Compute the Jacobian $J(\hat{x}^{(k)})$ and the residuals $e(\hat{x}^{(k)})$ according to (31) and (32).
7:   $\Delta x^{(k)} \leftarrow \left( J(\hat{x}^{(k)})^\top J(\hat{x}^{(k)}) \right)^{-1} J(\hat{x}^{(k)})^\top e(\hat{x}^{(k)})$.
8:   Obtain step length $\alpha$ using backtracking line search.
9:   $\hat{x}^{(k+1)} \leftarrow \hat{x}^{(k)} - \alpha\, \Delta x^{(k)}$.
10:   $k \leftarrow k + 1$.
11:   $V^{(k)} \leftarrow V(\hat{x}^{(k)})$.
12:   $\Delta V \leftarrow \left| V^{(k-1)} - V^{(k)} \right|$.
13:  end while
14:  $\hat{x} \leftarrow \hat{x}^{(k)}$.  ▹ End Gauss–Newton.
15:  $\hat{x}^{(i)} = \begin{bmatrix} \hat{\theta}_1^{(i)} & \hat{\phi}_1^{(i)} & \hat{\theta}_2^{(i)} & \hat{\phi}_2^{(i)} \end{bmatrix}^\top \leftarrow \hat{x}$.
16:  $\hat{x}^{(0)} \leftarrow \begin{bmatrix} \hat{\theta}_1^{(i)} & \hat{\phi}_1^{(i)} & -\hat{\theta}_2^{(i)} & \hat{\phi}_2^{(i)} + \pi \end{bmatrix}^\top$.  ▹ Initialize at $-\hat{j}_2$.
17: end for
18: $\hat{x} \leftarrow \arg\min_{x \in \{ \hat{x}^{(1)}, \hat{x}^{(2)} \}} V(x)$.  ▹ Correct sign pairing.
19: return $j(\hat{x})$.
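As an illustration of Algorithm 1, the sketch below reuses the residuals and joint_axis helpers from the sketch in Section 4.1 and lets scipy.optimize.least_squares stand in for the Gauss–Newton iteration with backtracking line search; the re-initialization at the opposite sign of $\hat{j}_2$ follows (28). This is a simplified stand-in under these assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_joint_axes(x0, y_w1, y_w2, y_a1, y_a2, w_w=50.0, w_a=1.0):
    """Estimate (j1, j2) from both sign pairings and keep the one with lower cost."""
    def solve(x_init):
        return least_squares(residuals, x_init,
                             args=(y_w1, y_w2, y_a1, y_a2, w_w, w_a))
    sol1 = solve(np.asarray(x0, dtype=float))
    # Re-initialize at the opposite sign of j2, cf. (28): negate theta2, shift phi2 by pi.
    x_flip = sol1.x.copy()
    x_flip[2], x_flip[3] = -x_flip[2], x_flip[3] + np.pi
    sol2 = solve(x_flip)
    best = min((sol1, sol2), key=lambda s: s.cost)  # correct sign pairing, cf. (29)
    return joint_axis(best.x[0], best.x[1]), joint_axis(best.x[2], best.x[3])
```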

5. Sample Selection

A key feature of plug-and-play estimation is that it should not require specific calibration data, recorded from predetermined motions. Rather, such plug-and-play methods should be able to use data recorded from arbitrary motions. Such data sets could be very large, and using all available data for identification is often unnecessary and resource-demanding. It is also possible that very few samples in the data set contain information about the joint axis. In a sense, too much bad information might ruin the good information. To handle this, we propose a method for selecting samples to use for estimation.
In the following sections we assume that a maximum of $N_{\max}$ gyroscope and accelerometer measurements can be used to identify the joint axis, but that we have $N > N_{\max}$ measurements available to choose from.

5.1. Gyroscope

To distinguish between informative and non-informative motions, we use the difference in angular velocity magnitude measured by the two gyroscopes,
$$ \Delta_\omega(k) = \left\| y_{\omega,1}(t_k) \right\| - \left\| y_{\omega,2}(t_k) \right\|, \tag{42} $$
which is a sufficient metric for detecting independent rotations of the sensors, and hence of the two segments. For stationary segments, $\Delta_\omega(k) = 0$. One thing to note is that $\Delta_\omega(k)$ cannot differentiate between informative motions where the two angular velocity magnitudes happen to be equal and non-informative stiff joint rotations. For example, the two segments can undergo simultaneous planar rotations, where the two segments rotate in different directions but with approximately the same magnitude. However, for realistic motions, especially for motions performed by humans, it is unlikely that independent rotations will have the same magnitude, even for short moments.
Each gyroscope measurement is given a score
$$ s_\omega(k) = \Delta_\omega(l^\ast), \tag{43} $$
$$ l^\ast = \arg\min_{l} \left| \Delta_\omega(l) \right|, \quad l \in (k - n, k + n), \tag{44} $$
that is equal to the $\Delta_\omega$ with the smallest magnitude in a window of $2n + 1$ samples. This is to avoid selecting large outliers of $\Delta_\omega$. For example, if the system is not completely rigid or the sensors are not rigidly attached, the kinematic constraints are violated, and some samples of stiff joint motion can obtain a large $\Delta_\omega$ value. However, if the outliers are relatively few, there should be $\Delta_\omega$ with smaller magnitude among neighboring samples. In some sense, $s_\omega(k)$ assigns a conservative score to each sample.
When the score $s_\omega$ has been computed for all measurements, the list of measurements is sorted in descending order such that $s_\omega(k') \geq s_\omega(k'+1)$, $\forall k' \in (1, N-1)$, where $k'$ is a new index variable used to denote the sorted order. The first and last $N_{\max}/2$ of the sorted gyroscope measurements are selected or, equivalently, the measurements corresponding to the middle of the list, i.e., with index $k' \in (N_{\max}/2 + 1, N - N_{\max}/2)$, are removed from the set of measurements. By doing this, the algorithm makes sure that measurements with excitation in both sensors are selected, since $\Delta_\omega > 0$ means that Sensor 1 has a larger angular rate than Sensor 2 and vice versa for $\Delta_\omega < 0$. The gyroscope sample selection method is described in Algorithm 2; in essence, the algorithm picks half the required points from either end of the sorted list.
Algorithm 2 Gyroscope sample selection
Require: Gyroscope data $y_\omega^N$, number of allowed measurements $N_{\max}$, window size $n$.
1: if $N > N_{\max}$ then
2:  Compute $s_\omega(k)$, $\forall k$, according to (43).
3:  Obtain the sorted order $k'$ such that $s_\omega(k') \geq s_\omega(k'+1)$, $\forall k' \in (1, N-1)$.
4:  Remove the $N - N_{\max}$ samples $y_\omega(t_{k'})$, $k' \in (N_{\max}/2 + 1, N - N_{\max}/2)$, from $y_\omega$.
5: end if
6: return $y_\omega^{N_{\max}}$
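A minimal NumPy sketch of Algorithm 2 is given below; it returns the indices of the selected samples and assumes the gyroscope data are stored as $N \times 3$ arrays. Parameter names and the return convention are illustrative choices.

```python
import numpy as np

def select_gyro_samples(y_w1, y_w2, N_max, n=21):
    """Keep the N_max/2 samples with the largest positive and the N_max/2 samples
    with the most negative conservative scores s_omega (43)."""
    N = y_w1.shape[0]
    if N <= N_max:
        return np.arange(N)
    d = np.linalg.norm(y_w1, axis=1) - np.linalg.norm(y_w2, axis=1)  # Delta_omega, eq. (42)
    s = np.empty(N)
    for k in range(N):  # conservative score: smallest |Delta_omega| in a window of 2n+1 samples
        window = d[max(0, k - n):min(N, k + n + 1)]
        s[k] = window[np.argmin(np.abs(window))]
    order = np.argsort(-s)  # sort scores in descending order
    keep = np.concatenate([order[:N_max // 2], order[-(N_max // 2):]])
    return np.sort(keep)
```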

5.2. Accelerometer

The acceleration constraint is accurate when the angular rate and angular accelerations are small, since that makes the right hand side of (17) vanish. Note that the linear acceleration terms in (17), which are collected in $a_0$, always cancel out. Therefore, we do not use the energy of the accelerometer measurements to determine if the acceleration constraint is valid. Instead, we give each acceleration measurement a penalty based on the average angular rate energy
$$ E_i(k) = \frac{1}{2n+1} \sum_{l=k-n}^{k+n} \left\| y_{\omega,i}(t_l) \right\|^2, \quad n < k \leq N - n, \tag{45} $$
where the average is calculated over a window of size $2n + 1$, centered around each sample. This angular rate energy statistic has been shown to be an effective detector of stationarity in foot-mounted inertial navigation [36], so-called zero-velocity detection.
Small $E_i(k)$ indicates that Sensor $i$ is stationary. For the hinge joint system, it is sufficient for one sensor to be stationary, since acceleration components in the plane normal to the joint axis do not change the r.h.s. of (17). If one sensor is stationary, then the other sensor can only have accelerations that are induced by independent rotation, which have to be in that plane. For this reason, the penalty given to each pair of acceleration measurements is chosen as
$$ s_a(k) = \min\{ E_1(k), E_2(k) \}. \tag{46} $$
As a first step of the accelerometer sample selection, measurements with $s_a(k) > E_{\text{th}}$ are removed, where $E_{\text{th}}$ is a scalar threshold parameter, which should be chosen to remove measurements for which it is likely that the motion violates the acceleration constraint.
We also need to consider the conditions for identifiability of the joint axis. That is, we want our measurements to increase the separation between the column-space and the null-space of the matrix A in (27). In practice, A will have full rank regardless of the motion, since the measurements are corrupted by noise and bias and the acceleration constraint does not hold for arbitrary motions. However, if A has one singular value that is relatively small compared to the other singular values, it can be considered to be approximately rank 5. Consider the singular value decomposition (SVD) of A
$$ A = U \Sigma W^\top, \tag{47} $$
where the diagonal elements $\sigma_1$ to $\sigma_6$ of $\Sigma \in \mathbb{R}^{M \times 6}$ are the singular values, and the columns of $U$ and $W$ represent orthonormal bases in $\mathbb{R}^M$ and $\mathbb{R}^6$, respectively. The columns of $W$ are known as the right-singular vectors of $A$, and each is associated with a corresponding singular value, i.e., if
$$ \operatorname{diag}(\Sigma) = \begin{bmatrix} \sigma_1 & \sigma_2 & \sigma_3 & \sigma_4 & \sigma_5 & \sigma_6 \end{bmatrix}, \tag{48} $$
$$ W = \begin{bmatrix} w_1 & w_2 & w_3 & w_4 & w_5 & w_6 \end{bmatrix}, \tag{49} $$
the right-singular vector $w_1$ is associated with $\sigma_1$. The singular values are ordered $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_6 \geq 0$. We have that $w_1$ is the direction in $\mathbb{R}^6$ where the rows of $A$ are most coherent, meaning that
$$ w_1 = \arg\max_{w,\, \|w\| = 1} \left\| A w \right\|, \tag{50} $$
which has the interpretation that $w_1$ is the direction that is most separated from the null-space of $A$. The information about $j$ that is contained in $A$ is directly linked to the separation between the null-space and the column space of $A$. The intuition behind this can be seen by comparing the system of linear equations in (27) to the definition of $w_1$ in (50), where it appears most unlikely that $j$ should be parallel with $w_1$. In fact, the least-squares estimator for $j$ given by
$$ \hat{j} = \arg\min_{j} \left\| A j \right\|^2, \tag{51} $$
has solutions on the line in $\mathbb{R}^6$ which is spanned by $w_6$, the right-singular vector associated with the smallest singular value. If we add the constraints $\|j_1\| = \|j_2\| = 1$, the two solutions with correct sign pairing, corresponding to $(j_1, j_2)$ and $(-j_1, -j_2)$, can be obtained through normalization. A problem arises when multiple singular values are close to zero, in which case the value of $\|A j\|^2$ will be small in more than one direction, and the uncertainty in the estimate increases. If $A$ is only allowed to have $N_{\max}$ rows, we should therefore only remove measurements whose rows in $A$ are most coherent with $w_1$, the direction with most information. This way, we make sure that space is always allocated for measurements with rows that do not align with $w_1$, which over time should increase the discrepancy between the two smallest singular values and increase the certainty of the least-squares estimator.
The coherence between a row in $A$ and the right-singular vector $w_1$ is computed as the vector $c \in \mathbb{R}^M$, with the elements
$$ c_k = \frac{\left| A_k w_1 \right|}{\left\| A_k \right\| \left\| w_1 \right\|}, \tag{52} $$
$$ A_k = \begin{bmatrix} y_{a,1}(t_k)^\top & -y_{a,2}(t_k)^\top \end{bmatrix}, \tag{53} $$
where $A_k$ is the $k$th row vector in $A$, and $c_k$ has a value of 1 if $A_k$ is parallel to $w_1$ and 0 if they are orthogonal. A $c_k > 0.5$ means that $A_k$ has most of its magnitude in the direction of $w_1$. Therefore, we choose to remove the measurements with the largest $s_a(k)$ where $c_k > 0.5$. This ensures that we also keep good measurements in the $w_1$ direction, while allocating space for measurements with new information about $j$. The algorithm for selecting accelerometer samples is described in Algorithm 3.
Algorithm 3 Accelerometer sample selection
Require: Data $\mathcal{D}_N = \{ y_\omega^N, y_a^N \}$, number of allowed measurements $N_{\max}$, window size $n$, threshold $E_{\text{th}}$.
1: if $N > N_{\max}$ then
2:  Compute $s_a(k)$, $\forall k$, according to (46) using window size $n$.
3:  Remove measurements where $s_a(k) > E_{\text{th}}$ from $y_a$.
4:  $N \leftarrow | y_a |$.
5:  while $N > N_{\max}$ do
6:   Compute the SVD $A = U \Sigma W^\top$, with $A$ given by (27).
7:   Compute the coherence $c$ according to (52).
8:   Remove the measurement with the largest $s_a(k)$ where $c_k > 0.5$ from $y_a$.
9:   $N \leftarrow | y_a |$.  ▹ $A$ changes in subsequent iterations.
10:  end while
11: end if
12: return $y_a^{N_{\max}}$.
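The sketch below illustrates Algorithm 3 with NumPy, returning the indices of the retained accelerometer samples. The moving-average energy and the fallback when no row is coherent with $w_1$ are simplifications made for this illustration.

```python
import numpy as np

def select_accel_samples(y_a1, y_a2, y_w1, y_w2, N_max, n=21, E_th=1.0):
    """Drop high angular-rate-energy samples, then iteratively remove the worst
    samples whose rows of A (27) are coherent with the dominant direction w1."""
    win = np.ones(2 * n + 1) / (2 * n + 1)
    E1 = np.convolve(np.sum(y_w1**2, axis=1), win, mode='same')  # average energy, cf. (45)
    E2 = np.convolve(np.sum(y_w2**2, axis=1), win, mode='same')
    s_a = np.minimum(E1, E2)                                     # penalty (46)
    idx = np.where(s_a <= E_th)[0]                               # thresholding step
    while idx.size > N_max:
        A = np.hstack([y_a1[idx], -y_a2[idx]])                   # rows A_k as in (53)
        _, _, Vt = np.linalg.svd(A, full_matrices=False)
        w1 = Vt[0]                                               # right-singular vector of sigma_1
        c = np.abs(A @ w1) / (np.linalg.norm(A, axis=1) + 1e-12) # coherence (52), ||w1|| = 1
        coherent = np.where(c > 0.5)[0]
        candidates = coherent if coherent.size > 0 else np.arange(idx.size)
        worst = candidates[np.argmax(s_a[idx][candidates])]
        idx = np.delete(idx, worst)
    return idx
```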

5.3. Online Implementation

The two proposed sample selection algorithms can be implemented for online applications. For Algorithm 2, simply save the scores $s_\omega$ and re-use them when a new batch of data is available; new $s_\omega$ only need to be computed for the previously unseen measurements. The same principle holds for Algorithm 3 and $s_a$.

6. Uncertainty Quantification

When identifying an unknown quantity, it is useful for the user of the method to know if they can expect their estimate to be accurate given the data that is available, or if more informative data needs to be collected. Here we propose a method for quantifying both local and global uncertainty of an estimate x ^ .
The local uncertainty is obtained through estimating the covariance matrix of the estimation errors using the Jacobian of the cost function. Global uncertainty is obtained through solving multiple parallel or sequential optimization problems with different random initializations, then comparing the resulting estimates to see if they correspond to the same joint axis.
The local and global uncertainty metrics are combined into an algorithm that can be used to determine if a current estimate is of acceptable accuracy or if more informative data needs to be collected.

6.1. Local Uncertainty

We approximate the cost function $V(x)$ in (23) as a quadratic function near the estimate $\hat{x}$,
$$ \hat{V}(x) = V(\hat{x}) + \frac{1}{2} (x - \hat{x})^\top H(\hat{x}) (x - \hat{x}), \tag{54} $$
where $H(\hat{x}) \in \mathbb{R}^{4 \times 4}$ is the approximate Hessian of $V(x)$ evaluated at $\hat{x}$ according to (34).
We make the assumption that the uncertainty can be captured by a Gaussian distribution. Given the estimate $\hat{x}$ and the covariance matrix $P_x$, the probability that $x$ is the true parameter vector is given by the probability density function (PDF)
$$ p(x \mid \hat{x}, P_x) = \mathcal{N}(\hat{x}, P_x) = \frac{1}{\sqrt{(2\pi)^4 |P_x|}} \exp\left( -\frac{1}{2} (x - \hat{x})^\top P_x^{-1} (x - \hat{x}) \right). \tag{55} $$
This is the same as assuming the estimation errors $x - \hat{x}$ to be zero-mean Gaussian with covariance $P_x$. We are interested in finding $P_x$ to quantify the uncertainty of estimates. We now consider the negative log-likelihood of this PDF
$$ -\log p(x \mid \hat{x}, P_x) = \log\sqrt{(2\pi)^4 |P_x|} + \frac{1}{2} (x - \hat{x})^\top P_x^{-1} (x - \hat{x}). \tag{56} $$
Note the similarities to $\hat{V}(x)$ in (54). If (54) is a good local approximation of the cost function and our estimator is unbiased, the distribution of the estimation errors $x - \hat{x}$ will be asymptotically ($N \to \infty$) zero-mean Gaussian with covariance matrix [37]
$$ P_x \approx \left( J_s(\hat{x})^\top J_s(\hat{x}) \right)^{-1}, \tag{57} $$
where $J_s(\hat{x})$ is the Jacobian from (31) in which the partial derivatives of the gyroscope and acceleration residuals have been scaled by $1/\operatorname{std}(e_\omega(k, \hat{x}))$ and $1/\operatorname{std}(e_a(k, \hat{x}))$, respectively. Here, $\operatorname{std}(e(k, x))$ denotes the sample standard deviation of the residuals
$$ \operatorname{std}(e(k, x)) = \sqrt{ \frac{1}{N-1} \sum_{k=1}^{N} \left( e(k, x) - \frac{1}{N} \sum_{k=1}^{N} e(k, x) \right)^2 }. \tag{58} $$
We want to measure the uncertainty in terms of angular deviation
$$ \operatorname{AD}(v_1, v_2) = \cos^{-1}\left( \frac{v_1^\top v_2}{\|v_1\| \|v_2\|} \right), \tag{59} $$
where $v_1$ and $v_2$ are vectors of the same dimension and $\operatorname{AD}(v_1, v_2)$ returns the positive angle between the two vectors. Let
$$ z = h(x) = \begin{bmatrix} \operatorname{AD}(j_1(x), j_1(\hat{x})) \\ \operatorname{AD}(j_2(x), j_2(\hat{x})) \end{bmatrix}, \tag{60} $$
$$ x_i = \begin{bmatrix} \theta_i & \phi_i \end{bmatrix}^\top; \tag{61} $$
then we want to find the probability distribution $p(z)$ or its first two moments (mean $\mu_z$ and covariance matrix $P_z$).
We use a Monte Carlo method to estimate the mean $\mu_z$ and covariance $P_z$ [38]
$$ x_l \sim \mathcal{N}(\mu_x, P_x), \quad l = 1, \ldots, L, \tag{62} $$
$$ z_l = h(x_l), \tag{63} $$
$$ \mu_z = \frac{1}{L} \sum_{l=1}^{L} z_l, \tag{64} $$
$$ P_z = \frac{1}{L-1} \sum_{l=1}^{L} (z_l - \mu_z)(z_l - \mu_z)^\top, \tag{65} $$
where we let $\mu_x = \hat{x}$, $P_x$ is obtained as in (57) and $h(x_l)$ is given by (60). The covariance matrix $P_z$ is estimated by the unbiased sample covariance estimator, hence the division by $L - 1$.
The metric we will be using to determine local uncertainty is the mean plus two standard deviations, $\mu_z + 2\sigma_z$, where $\sigma_z = \sqrt{\operatorname{diag}(P_z)}$.
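A compact NumPy sketch of the Monte Carlo procedure (62)–(65) is shown below, reusing the joint_axis helper from the sketch in Section 4.1. The residual-scaled Jacobian J_s is assumed to be provided by the caller; all names are illustrative.

```python
import numpy as np

def local_uncertainty(x_hat, J_s, L=1000, rng=np.random.default_rng(0)):
    """Return (mu_z, sigma_z): mean and standard deviation of the angular deviations (60)
    under the Gaussian approximation (57), estimated via Monte Carlo (62)-(65)."""
    P_x = np.linalg.inv(J_s.T @ J_s)                       # covariance approximation (57)
    samples = rng.multivariate_normal(x_hat, P_x, size=L)  # draws from N(x_hat, P_x), eq. (62)

    def ang_dev(v1, v2):                                   # angular deviation (59), in degrees
        c = np.clip(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)), -1.0, 1.0)
        return np.degrees(np.arccos(c))

    j1_hat, j2_hat = joint_axis(x_hat[0], x_hat[1]), joint_axis(x_hat[2], x_hat[3])
    z = np.array([[ang_dev(joint_axis(x[0], x[1]), j1_hat),
                   ang_dev(joint_axis(x[2], x[3]), j2_hat)] for x in samples])
    mu_z, sigma_z = z.mean(axis=0), z.std(axis=0, ddof=1)
    return mu_z, sigma_z   # accept locally if np.all(mu_z + 2 * sigma_z < E_max)
```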

6.2. Global Uncertainty

The cost function $V(x)$ may have multiple local minima. In the case that the local minima correspond to either the correct or the wrong sign pairing of $j_1$ and $j_2$, we can find the correct one by comparing minima located near the opposite sign of either $j_1$ or $j_2$. If these minima are not distinctly different in terms of the values of $V(x)$, we expect the estimates to have the correct sign half of the times our method finds a solution, given that the initial estimates are uniformly spread over the parameter space. Furthermore, in the case where our data have little information about $j$, there may be other local minima that correspond to wrong solutions. Wrong local minima can still have low local uncertainty, meaning that if our estimates are initialized near them, it is likely that wrong solutions are found. Therefore, to be confident that the method has found the global minimum, we need to solve the optimization problem multiple times with different initial estimates and compare the angular deviations of the sequential estimates.
We compute estimates $\hat{j}_i(t)$ for $t = 1, 2, \ldots, T$ as
$$ \hat{j}(t) = \begin{cases} \hat{j}, & t = 1, \\ \arg\min_{j \in \{ +\hat{j}, -\hat{j} \}} \min_{i \in \{1,2\}} \operatorname{AD}(j_i, \hat{j}_i(t-1)), & t > 1, \end{cases} \tag{66} $$
where $\hat{j}_i(t)$ is chosen as either $\pm \hat{j}$, such that one of the two estimated joint axes $\hat{j}_i$ always has the sign that is most consistent with its previous estimate. Note that this only forces either $\hat{j}_1$ or $\hat{j}_2$ to be consistent with the previous estimate, whereas the other one may still be inconsistent. We then consider the maximum sequential angular deviation as our metric for whether the estimate at time $t$ corresponds to the same minimum as the estimate at time $t - 1$,
$$ \operatorname{SEQAD}(t) = \begin{cases} 180^\circ, & t = 1, \\ \max_{i \in \{1,2\}} \operatorname{AD}(\hat{j}_i(t), \hat{j}_i(t-1)), & t > 1. \end{cases} \tag{67} $$
The $\operatorname{SEQAD}(t)$ metric corresponds to the angular deviation of the joint axis estimate that is most inconsistent with its previous estimate. Consecutive estimates will differ when there is no clear and consistent global minimum. Therefore, if we observe that $\operatorname{SEQAD}(t) \to 0$ as $t$ increases, we can be more certain that the local minimum found by our solver corresponds to a global minimum.
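The sign-consistent update (66) and the sequential angular deviation (67) can be computed as in the following NumPy sketch, where an estimate pair is a tuple (j1_hat, j2_hat); the names are illustrative.

```python
import numpy as np

def ang_dev(v1, v2):
    """Angular deviation (59) in degrees."""
    c = np.clip(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)), -1.0, 1.0)
    return np.degrees(np.arccos(c))

def sign_consistent(j_new, j_prev):
    """Choose +/- of the new estimate pair that is most consistent with the previous one, cf. (66)."""
    flipped = (-j_new[0], -j_new[1])
    return min((j_new, flipped),
               key=lambda j: min(ang_dev(j[0], j_prev[0]), ang_dev(j[1], j_prev[1])))

def seqad(j_new, j_prev):
    """Sequential angular deviation (67); 180 degrees if there is no previous estimate."""
    if j_prev is None:
        return 180.0
    j_t = sign_consistent(j_new, j_prev)
    return max(ang_dev(j_t[0], j_prev[0]), ang_dev(j_t[1], j_prev[1]))
```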

6.3. Identifying Estimates with Acceptable Uncertainty

Suppose that we receive data sequentially, i.e., we obtain $\mathcal{D}_{N(t)} = \{ y_\omega^{N(t)}, y_a^{N(t)} \}$, $t \in \{0, \ldots, T\}$, the sets of $N(t)$ gyroscope and $N(t)$ accelerometer measurements that have been recorded from time $t = 0$ to $t$. If we use sample selection according to Algorithms 2 and 3, then $N(t) \leq N_{\max}$, $\forall t$. For each $\mathcal{D}_{N(t)}$ we obtain an estimate $\hat{j} = j(\hat{x})$ by solving the optimization problem (22). Furthermore, we select the estimate associated with time $t$ to be $\hat{j}(t)$ as in (66), such that either $\hat{j}_1$ or $\hat{j}_2$ is consistent with the sign of the previous estimate.
We now want to assess if $\hat{j}(t)$ has acceptable uncertainty. Let $E_{\max}$ denote the maximum uncertainty that we accept. We use the following two criteria to determine if the local and global uncertainties are sufficiently small:
  • We require that $\mu_z + 2\sigma_z < E_{\max}$, where $\mu_z$ and $\sigma_z$ are obtained from (64) and (65) through the procedure described in Section 6.1.
  • We require that the sequential angular deviations given by (67) satisfy $\operatorname{SEQAD}(t) < E_{\max}$ for a minimum of $n_{\min}$ consecutive estimates that were randomly initialized uniformly over the parameter space. This is equivalent to
$$ \max_{t' \in [t - n_{\min} + 1,\, t]} \operatorname{SEQAD}(t') < E_{\max}. \tag{68} $$
We summarize the method for selecting an estimate $\hat{j}(t)$ of acceptable uncertainty in Algorithm 4.
Algorithm 4 Identifying an estimate of acceptable uncertainty
Require: Data $\mathcal{D}_{N(t)} = \{ y_\omega^{N(t)}, y_a^{N(t)} \}$, $t \in \{1, \ldots, T\}$, number of Monte Carlo samples $L$, maximum acceptable uncertainty $E_{\max}$, threshold for the minimum number of sequential estimates with acceptable deviation $n_{\min}$.
1: $n \leftarrow 0$
2: for $t \in \{1, \ldots, T\}$ do
3:  Obtain an estimate $\hat{j} = j(\hat{x})$ by solving the optimization problem (22) using the data $\mathcal{D}_{N(t)}$ and Algorithm 1.
4:  Obtain $\hat{j}(t)$ from (66).
5:  Compute the covariance matrix $P_x$ according to (57).
6:  Compute $\mu_z$ and $P_z$ according to the Monte Carlo method (62)–(65).
7:  Compute $\operatorname{SEQAD}(t')$, $t' \in (t - n_{\min} + 1, t)$, as in (67).
8:  if $\mu_z + 2\sigma_z < E_{\max}$ AND $\max\{\operatorname{SEQAD}(t')\} < E_{\max}$ then
9:   return $\hat{j}(t)$.
10:  end if
11: end for

7. Experiment

7.1. Data Acquisition

Data were collected from a 3D-printed hinge joint system [39] with one wireless IMU (Xsens MTw) attached to each segment; see Figure 3. The sampling rate was set to 50 Hz for both IMUs. The operating ranges of the IMUs were ±160 m/s² for the accelerometers and ±21 rad/s for the gyroscopes. The IMUs were attached in sockets such that the joint axis was parallel with the positive y-axes of the sensors, both pointing in the same direction (same sign), that is,
$$ j_1 = j_2 = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix}^\top. \tag{69} $$
The data consist of 14 recorded motions, listed in order below
  1. Stationary system;
  2. Free rotation, stiff joint, free joint axis;
  3. Sequential rotation, horizontal joint axis;
  4. Sequential rotation, tilting joint axis;
  5. Simultaneous planar rotation, horizontal joint axis;
  6. Simultaneous planar rotation, tilting joint axis;
  7. Simultaneous free rotation, free joint axis;
  8–14. Same motions as 1–7, respectively, but with faster rotations.
The recorded angular velocity magnitudes for these motions are shown in Figure 4. For the sequential rotations, Segment 1 was always rotated first while Segment 2 was stationary, which was followed by the converse motion. Horizontal joint axis means that the joint axis was aligned to be approximately orthogonal to the gravitational acceleration vector. For the tilting joint axis case, the angle between the joint axis and the gravitational acceleration vector was maintained at 45° for the duration of the motion. For the free joint axis motions, the joint axis was not constrained to any particular orientation, but rotated freely in space. The hinge joint system was equipped with a screw, which when tightened prevented independent rotation of the two segments. The screw was tightened when the system was stationary and during the stiff joint rotations. Measurements of the transitions from one motion to another were removed from the recorded data, such that only the specified motions of interest could be isolated. The first set of stationary data was used to estimate the gyroscope bias $b_\omega$ in (3), which was then subtracted from all subsequent measurements.
Because the optimization problem (22) is formulated based on the kinematics at each sample time, and does not contain dynamics, we are allowed to shuffle around the measurements in our data set. Using the 14 different motions, four scenarios were designed where the motions appeared in different sequences. The sequences of motion for the different scenarios were
  • Scenario 1: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14;
  • Scenario 2: 1, 3*, 10*, 8, 2, 9, 3, 10, 4, 11, 5, 12, 6, 13, 7, 14 (* only 500 samples (10 s) from motions 3 and 10, which contain motion in only Segment 1);
  • Scenario 3: 6*, 1, 8, 2, 9 (* only 1000 samples from motion 6);
  • Scenario 4: 1, 8, 2, 9, 6*, 2, 9 (* only 1000 samples (20 s) from motion 6, samples divided in half).
Scenario 1 is the original sequence in which the motions were recorded. Scenario 2 starts with the sensors being stationary, then there is motion in only Segment 1, after which the system alternates between the slower and faster motions, starting with non-informative stiff-joint motions. For this scenario we expect to have good estimates of j 1 before we have any excitation in Segment 2. Scenario 3 has early excitation of both segments followed by measurements from a stationary system and non-informative stiff joint motion and Scenario 4 has the converse case where the excitation comes late in the sequence. Scenarios 3 and 4 are also designed to contain more non-informative motions, as the only informative motion is contained in the 1000 samples (20 seconds) from motion 6.

7.2. Evaluating Robustness of the Residual Weighting

To experimentally evaluate the robustness of the proposed method for different weights w ω and w a , we estimated the joint axis using data from motions 3 to 7 and 10 to 14. Data from a stationary system and rotations with a stiff joint were not used in these evaluations since the joint axis is not identifiable for these motions. The weights were chosen as
$$ w_\omega = \sqrt{w_0}, \quad w_a = \frac{1}{\sqrt{w_0}}, \tag{70} $$
where we let $w_0 = w_\omega / w_a$ determine the relative weighting of the residuals $e_\omega$ and $e_a$. We estimated the joint axis for 100 different values of $w_0$, which had a logarithmic distribution on the interval $(10^{-3}, 10^{10})$. For all different motions and for each value of $w_0$, the initial estimates of $x$ were selected deterministically such that all possible sign pairings $(\pm j_1, \pm j_2)$ and $(\pm j_1, \mp j_2)$ were selected equally many times. The initial estimates for $j_1$ and $j_2$ were selected from a grid on the unit sphere of $\mathbb{R}^3$ with 6 grid points on the positive and negative axes. With all possible combinations for $j_1$ and $j_2$, this resulted in $M = 36$ different initial conditions for the optimization algorithm. Here we use the root-mean-square angular error (RMSAE) for both joint axes as the metric to evaluate performance
$$ \operatorname{RMSAE} = \sqrt{ \frac{1}{2M} \sum_{k=1}^{M} \operatorname{AD}\!\left( j_1, j_1(\hat{x}^{(k)}) \right)^2 + \operatorname{AD}\!\left( j_2, j_2(\hat{x}^{(k)}) \right)^2 }, \tag{71} $$
where $\operatorname{AD}$ is the angular deviation metric given by (59), and here we let the superscript $k$ denote the estimates obtained from different initializations. Since we consider $(\pm j_1, \pm j_2)$ to be correct sign pairings of the joint axis, we select the sign of $\hat{j}_1$ which has the lowest $\operatorname{AD}$. If, as a result of this, $\hat{j}_1$ changes sign, the sign of $\hat{j}_2$ is also changed. This way, the $\operatorname{AD}$ for $\hat{j}_1$ will always be $\leq 90^\circ$, whereas the $\operatorname{AD}$ for $\hat{j}_2$ can be up to $180^\circ$, which corresponds to an error in the sign pairing.
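The RMSAE metric (71), including the sign-pairing convention described above, can be computed as in this NumPy sketch (names are illustrative):

```python
import numpy as np

def rmsae(j1_true, j2_true, j1_estimates, j2_estimates):
    """Root-mean-square angular error (71) over M estimates, in degrees."""
    def ang_dev(v1, v2):
        c = np.clip(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)), -1.0, 1.0)
        return np.degrees(np.arccos(c))
    errs = []
    for j1_hat, j2_hat in zip(j1_estimates, j2_estimates):
        # Flip both axes together if that lowers the error of j1;
        # correct sign pairings are (+j1, +j2) and (-j1, -j2).
        if ang_dev(j1_hat, j1_true) > 90.0:
            j1_hat, j2_hat = -j1_hat, -j2_hat
        errs.append(ang_dev(j1_hat, j1_true) ** 2 + ang_dev(j2_hat, j2_true) ** 2)
    return float(np.sqrt(np.sum(errs) / (2 * len(errs))))
```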

7.3. Evaluating Sample Selection

To evaluate the proposed sample selection in Algorithms 2–3, we use the data according to the four scenarios specified in Section 7.1, starting with the first second of recorded motion and incrementally adding subsequent data in batches of one second at a time. That is, we receive sequential batches of data D N ( t ) , t ∈ { 1 , … , T } , where T is the duration of the scenario in seconds.
We compute one estimate j ^ i ( t ) for each D N ( t ) . For each new batch, the joint axis is estimated again starting from a new initial estimate (i.e., no warm-start of the optimization method), which is randomly selected from a uniform distribution. The reason for this is that we also want to evaluate if the estimates are consistent over time, regardless of initialization of the optimization method. The relative weighting of the residuals (70) was set to w 0 = 50 .
For all four scenarios, we compare the method with the proposed sample selection to the case where all available data is used for estimation ( N max = N ( t ) ). When using the sample selection in Algorithms 2–3, the maximum sample sizes N max ∈ { 1000 , 500 , 250 , 125 } were compared.
Apart from N max , the remaining user-chosen parameters of the sample selection relate to the angular rate energy penalty (45). A window size of n = 21 samples was used, which means the average energy is computed over 0.42 s of motion for our sensors. The threshold used in Algorithm 3 to determine whether the accelerometer is stationary was set to E th = 1 rad²/s². This is roughly 10 times higher than the threshold of the angular rate energy detector suggested for zero-velocity detection in human gait [40]. Measurements discarded in line 2 of Algorithm 3 are therefore likely to correspond to significant motion.
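The windowed angular rate energy can be sketched as below; this is a simplified stand-in for (45) and the stationarity check in Algorithm 3, with the 50 Hz sampling rate inferred from the reported 0.02 s sample period and all names chosen by us.

```python
import numpy as np

def angular_rate_energy(gyro, n=21):
    """Moving average of ||omega||^2 over a window of n samples.

    gyro: (N, 3) array of angular rates in rad/s (sampled at 50 Hz here,
    so n = 21 spans roughly 0.42 s). Returns an (N,) array in rad^2/s^2.
    """
    sq_norm = np.sum(gyro ** 2, axis=1)
    return np.convolve(sq_norm, np.ones(n) / n, mode="same")

def stationary_mask(gyro, e_th=1.0, n=21):
    """Flag samples whose windowed angular rate energy is below E_th,
    i.e., samples where the accelerometer is treated as stationary."""
    return angular_rate_energy(gyro, n) < e_th
```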

7.4. Evaluating Uncertainty Quantification

To evaluate the efficacy of the proposed uncertainty quantification, we use the same sequential batches of data D N ( t ) , t ∈ { 1 , … , T } , for all four scenarios as in Section 7.3, but with a fixed maximum number of samples N max = 1000 chosen by Algorithms 2–3. Similarly to the procedure used to evaluate the sample selection, one estimate j ^ i ( t ) is obtained for each new batch, and the initial estimates are independently randomized from a uniform distribution over the parameter space at each t. The relative weighting of the residuals (70) was set to w 0 = 50 .
Here we use Algorithm 4, which returns an estimate j ^ when the local and global criteria indicate that the uncertainty is acceptable. This requires the user to choose the threshold for acceptable uncertainty, E max , and the minimum number of sequential estimates that must have angular deviations below this threshold, n min . For our evaluation we set E max = 3° and n min ∈ { 3 , 10 } . Algorithm 4 is deemed successful if AD ( j i , j ^ i ) ≤ E max . This procedure is repeated 100 times for each scenario, with different randomized initial estimates each time.
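As an illustration of the global, sequential part of the acceptance rule (the actual Algorithm 4 additionally checks the local uncertainty metric), a simplified check could look as follows; the consecutive-comparison form, the assumption of sign-consistent estimates, and all names are our own simplifications.

```python
import numpy as np

def angular_deviation(u, v):
    """Angle in degrees between two unit vectors."""
    return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

def sequential_acceptance(estimates, e_max=3.0, n_min=10):
    """Simplified global acceptance check: return True once the last n_min
    sequential axis estimates (one per one-second batch, sign-consistent)
    deviate from each other by less than e_max degrees."""
    if len(estimates) < n_min:
        return False
    recent = estimates[-n_min:]
    return all(angular_deviation(recent[i], recent[i + 1]) < e_max
               for i in range(n_min - 1))
```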

7.5. Evaluating Robustness to Sensor Bias

We evaluated the robustness of the complete method, comprising Algorithms 1–4, to measurement bias. Measurement bias refers to b a and b ω in the measurement models (1)–(3). In the other evaluations presented here, we compensated for gyroscope bias by estimating b ω from the initial stationary data (Motion 1) and subtracting it from the subsequent measurements. We did not compensate for any accelerometer bias, since it cannot be estimated from a single stationary position of the sensor. In this section, we study the effect of sensor biases by adding artificially generated biases to both the previously bias-compensated gyroscope measurements and to the accelerometer measurements. These artificial biases have fixed magnitudes ∥ b a ∥ = 1 m/s² and ∥ b ω ∥ = 1°/s, but their directions are randomized by generating random unit vectors. To evaluate the effect of the added artificial bias, M = 100 estimation runs are performed for all four scenarios, with and without the added artificial bias. The artificial biases are first added to the measurements, and then the proposed method is applied as described in Section 7.4. Here, we set N max = 500 , E max = 1° and n min = 10 ; other parameters are the same as in the previous sections. We use the RMSAE metric (71) and the maximum angular error (MAXAE)
$$ \mathrm{MAXAE} = \max_{1 \le k \le M} \Big\{ \mathrm{AD}\big(j_1, j(\hat{x}_1^{(k)})\big),\; \mathrm{AD}\big(j_2, j(\hat{x}_2^{(k)})\big) \Big\}, \tag{72} $$
to evaluate the performance of the method across all M = 100 estimation rounds for all scenarios.
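The artificial bias injection described above can be reproduced in a few lines; this sketch assumes the gyroscope data are stored in rad/s and uses our own function names rather than anything from the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_bias(magnitude):
    """Bias vector of fixed magnitude with a uniformly random direction."""
    v = rng.normal(size=3)  # isotropic direction via a Gaussian draw
    return magnitude * v / np.linalg.norm(v)

def add_artificial_bias(acc, gyro):
    """Add the Section 7.5 biases to (N, 3) measurement arrays:
    1 m/s^2 to the accelerometer and 1 deg/s (in rad/s) to the gyroscope."""
    return acc + random_bias(1.0), gyro + random_bias(np.deg2rad(1.0))
```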

8. Results

8.1. Robustness

The RMSAE for the different motions and weights w 0 is shown in Figure 5. Here, estimation is performed separately for each motion using all of its samples, i.e., without sample selection. The optimal choice of w 0 appears to lie in the interval ( 10 , 10⁵ ), where the errors are small for all motions.

8.2. Sample Selection

The angular errors for j ^ 1 and j ^ 2 obtained from testing the proposed sample selection as described in Section 7.3 are shown in Figure 6. From these results, we can compare the use of Algorithms 2–3 for different sample sizes N max . This includes the case N max = N ( t ) , where N ( t ) is the total number of samples observed up to t seconds.
Figure 7 shows which samples were selected for N max = 1000 , for Scenarios 1 and 2 at the times given by the vertical axes.

8.3. Uncertainty Quantification

With the parameter n min = 10 , the final errors were below E max = 3° for all 100 estimation rounds for all four scenarios, meaning the estimates obtained from Algorithm 4 were acceptable 100 % of the time. With n min = 3 and the same E max , the estimates were acceptable 8 % of the time for Scenario 1, 81 % of the time for Scenario 2, 100 % of the time for Scenario 3 and 0 % of the time for Scenario 4. Figure 8 compares the local and global uncertainty metrics to the angular errors of a single estimation round for each scenario and shows when Algorithm 4 accepted an estimate j ^ for n min = 3 (leftmost vertical dashed lines) and for n min = 10 (rightmost vertical dashed lines).

8.4. Robustness to Sensor Bias

The resulting RMSAE and MAXAE for the M = 100 estimation rounds with and without added artificial bias are shown for all four scenarios in Table 1. Without the added artificial biases, the errors were at most 2.16°, and with the added artificial biases of magnitudes 1 m/s² and 1°/s the errors were at most 4.84°.

9. Discussion

9.1. The Method Is Not Sensitive to the Relative Weighting w 0

The parameter w 0 , defined in (70), controls the relative weighting of the residuals e ω and e a . As w 0 increases, the relative weight of the gyroscope residual increases. As seen in Figure 5, the optimal choice of w 0 in terms of RMSAE (71) lies, for most motions, somewhere in the large range between 10 and 10⁵. The errors are also small ( < 3° ) for w 0 < 10 for the slower planar motions (3–6), which shows that the acceleration information can be reliable for these motions. However, larger errors can be observed for small w 0 for the faster planar motion 12, and the errors are also significantly larger for the free axis rotations (motions 7 and 14), which can be explained by the fact that these motions violate the acceleration constraint, i.e., the right-hand side of (17) is nonzero.
Since we can select any w 0 from within such a large interval and still obtain similar performance, the method is not sensitive to the relative weighting of the residuals. It makes sense that w 0 > 10 , since the angular velocities, measured in rad/s, have smaller magnitudes than the accelerations, which typically fluctuate around 9.82 m/s² due to the gravitational acceleration. Furthermore, the angular velocity constraint always holds for a rigid hinge joint system; hence, we expect the angular velocity information to be more reliable. The method is robust for larger w 0 up to 10⁵, where the RMSAE becomes large for motions 3 and 10. This large increase in RMSAE occurs when the acceleration residual becomes numerically indistinguishable from zero at the tolerance of the optimization algorithm, and it then becomes more likely that the method selects an estimate corresponding to the wrong sign pairing. Therefore, as w 0 increases, the RMSAE approaches 90°: the AD for j ^ 1 is still small, but the probability of selecting either sign of j ^ 2 approaches 0.5 , meaning that approximately half of the estimates will have the wrong sign pairing. This also depends on the numerical tolerance and stopping criteria of the optimization method, since a global minimum corresponding to the correct sign pairing might not be significantly different from other local minima that correspond to the wrong sign pairing.
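To summarize the structure of the weighted cost discussed in this section, the following sketch combines the two residual types; it assumes the hinge-joint gyroscope constraint of [12] for e ω, leaves the acceleration residual e a (Equation (17)) as a user-supplied function since its exact form is not reproduced here, and uses the weight convention w ω = √w 0 , w a = 1/√w 0 from (70). It is an illustrative sketch under these assumptions, not the paper's exact cost.

```python
import numpy as np

def gyro_residual(j1, j2, w1, w2):
    """Hinge constraint on angular rates (cf. [12]): for each sample,
    ||omega_1 x j_1|| - ||omega_2 x j_2|| should be zero for a rigid hinge."""
    return (np.linalg.norm(np.cross(w1, j1), axis=1)
            - np.linalg.norm(np.cross(w2, j2), axis=1))

def weighted_cost(j1, j2, w1, w2, acc_residual, w0=50.0):
    """Weighted sum of squared residuals, with w0 = w_omega / w_a setting
    the relative weighting (weights applied to the squared residual sums,
    one common convention)."""
    w_omega, w_a = np.sqrt(w0), 1.0 / np.sqrt(w0)
    e_omega = gyro_residual(j1, j2, w1, w2)
    e_a = acc_residual(j1, j2)  # assumed callable implementing (17)
    return w_omega * np.sum(e_omega ** 2) + w_a * np.sum(e_a ** 2)
```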

9.2. Sample Selection Offers Substantial Benefits

From the results shown in Figure 6, we see that similar, and in some cases even better, performance can be achieved by selecting relatively few measurements for estimation out of all N ( t ) measurements observed up to time t. For Scenario 1, N max ∈ { 1000 , 500 , 250 } gives angular errors within 0.5° of the case with N max = N ( t ) , and N max = 125 gives errors within 1°.
In Scenario 2, the errors for j ^ 2 drop below 2° at t = 85 s for the methods using sample selection, but it takes until t = 135 s for the method with N max = N ( t ) to stay consistently below 2°. However, the method with N max = 125 again shows a slightly larger deviation from the others, with some momentary spikes in error around t = 200 s and t = 300 s. Note that Scenario 2 is designed to have no independent rotation of Segment 2 until t = 255 s, so the only information about j 2 until then has to come from the accelerometer. This shows that carefully selecting accelerometer samples according to Algorithm 3 is beneficial, especially when angular velocity information is missing. Comparing the results from Scenarios 1 and 2, the final errors are very similar, indicating that the methods are not sensitive to the sequence of motions.
Scenarios 3 and 4 represent challenging cases where only a small minority of the samples contain motion with independent rotation (only 20 s of motion 6). In Scenario 3, the final error for N max = N ( t ) is significantly larger than for N max ∈ { 1000 , 500 , 250 } , and for N max = 125 the final error for j ^ 2 is at the same level as for N max = N ( t ) . Scenario 4 shows similar performance in terms of final errors. However, Scenario 4 does not have any motion with independent rotation of the segments until around t = 180 s. The only information about the joint axis until that point comes from the accelerometer, in Motions 1, 8, 2 and 9. The errors start to decrease around t = 150 s when data from Motion 9 come in, but do not settle until after Motion 6. The large fluctuations in the errors for N max = 1000 during Motion 9 indicate that at this point there are still at least two local minima corresponding to the wrong joint axis. Errors for N max < 1000 are smaller during Motion 9, but still vary between 5° and 20°.
Using Algorithms 2–3 is therefore beneficial, not only because it reduces the computational complexity of the optimization problem, but also because it can improve performance in situations where gyroscope information is limited. However, judging by Scenario 4 in particular, N max = 1000 appears to be the best choice in terms of overall performance. Even then, N max = 1000 is only a small fraction of the total number of measurements: with a sample period of 0.02 s, we have N ( t = 700 ) = 35,000 and N ( t = 250 ) = 12,500.
Figure 7 shows the samples that were selected over time for Scenarios 1 and 2 with N max = 1000 and which motions these samples come from. For both scenarios, we see that gyroscope samples from the non-informative motions 1, 2, 8 and 9 are all deselected by the end. Samples from these motions are only kept until enough samples from informative motions have been parsed by the algorithm. For the accelerometer, samples from stationary sensors are preferred, since many samples from motions 1 and 8 are kept, which is in line with the penalty we assign to samples based on the angular rate energy. It is also important that samples from other motions are selected, since the criterion for identifiability requires a strong separation between the nullspace and the column space of the matrix A given by (27). Had the selection criterion of the accelerometer been based only on the angular rate energy, we would risk ending up in the situation where all samples are selected from the same stationary position, in which case all rows of A are linearly dependent. Lines 5–10 in Algorithm 3 prevent this by removing the worst samples, namely those most coherent with the right-singular vector of the largest singular value. This can be thought of as allocating space in the A matrix for novel information by removing redundant information; a minimal sketch of this idea is given below.
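The sketch below conveys the idea of coherence-based deselection without reproducing the exact bookkeeping of Algorithm 3; the row normalization and tie-breaking are our own simplifications.

```python
import numpy as np

def deselect_redundant_rows(A, n_remove):
    """Drop the n_remove rows of A that are most coherent with the dominant
    right-singular vector, i.e., rows that mostly repeat information about
    the already best-excited direction. Returns indices of the kept rows."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    v1 = Vt[0]  # right-singular vector of the largest singular value
    coherence = np.abs(A @ v1) / (np.linalg.norm(A, axis=1) + 1e-12)
    keep = np.argsort(coherence)[: A.shape[0] - n_remove]
    return np.sort(keep)
```

In the actual method, this deselection interacts with the angular rate energy penalty and the sample budget N max ; the function above only isolates the redundancy-removal step.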

9.3. Reliability of the Proposed Uncertainty Quantification

We obtained reliable estimates, with errors below the maximum acceptable error E max = 3°, 100 % of the time when n min = 10 . However, the estimates were not reliable for n min = 3 , where the results were particularly bad for Scenario 1, with 8 % of estimates having acceptable error, and for Scenario 4, with 0 % of estimates having acceptable error. Both of these scenarios contained no informative motions in the beginning, and we found that the returned estimates often had not used any batch of informative data, because the criteria for local and global uncertainty were satisfied prematurely by Algorithm 4.
In Figure 8 it can be seen that the local uncertainty metric μ z + 2 σ z can be below E max (horizontal dashed lines) while the actual angular error fluctuates between values below and above E max . This occurs when there exist local minima other than those corresponding to the true joint axis. Furthermore, Figure 8 shows how the SEQAD remains large while the angular errors fluctuate. Interestingly, Scenario 2 appears to fluctuate between one correct and one wrong local minimum between t = 66 s and t = 82 s. If we assume that the probability of finding the correct local minimum is 0.5 , then with n min = 3 the probability of ending up in the wrong local minimum three times in a row is 0.5³ = 12.5 %. This matches well with the results obtained for Scenario 2, where 88 % of the estimates were acceptable for n min = 3 .
For Scenarios 1 and 4, where the results were significantly worse for n min = 3 , it appears that wrong local minima were dominating. These two scenarios contain sequences of stiff-joint motion before any informative motions are observed, which can explain why wrong local minima were found more frequently. Scenario 3, which had informative motion in the beginning, did not have this issue, and hence 100 % of the estimates were acceptable even for n min = 3 .
We can conclude that setting the parameter n min sufficiently large is important for fully capturing the global uncertainty. Sequential data dominated by non-informative motions in the beginning are more sensitive to the choice of n min . The results showed that, with n min = 10 , Algorithm 4 successfully identified all of the estimates that satisfied the accuracy criterion E max = 3°. Here, this corresponds to 10 consecutive estimates (computed once per second) that differed by at most 3°.

9.4. The Method Is Robust to Realistic and Uncompensated Sensor Bias

As shown in Table 1, even with added artificial biases of relatively large magnitudes, 1 m/s² and 1°/s, the errors were at most 4.84° across all M = 100 estimation runs for all four scenarios. The average errors in terms of RMSAE were less than 2° even with the added artificial biases. As a comparison, the IMUs used in our experiments had bias magnitudes in the order of 0.1 m/s² for the accelerometer and 0.5°/s for the gyroscope, so the artificial biases were significantly larger. This shows that the method is robust to sensor biases of at least these magnitudes. However, we had to lower the threshold E max from 3° to 1° to achieve this, which makes Algorithm 4 more conservative in selecting an estimate. With added artificial bias and E max = 3°, the method would sometimes terminate prematurely, before any informative motion had been observed, because a global minimum that satisfied this threshold had been found. Lowering E max was therefore required to achieve robustness to the added artificial biases. It is still highly recommended that pre-calibration of the biases is performed whenever possible. If the bias drift over the duration of the experiment is large enough to exceed the magnitudes tested here, it is advised to use a method that allows for online compensation of biases alongside the proposed method. Lowering E max is only an optional measure for the unusual case where a large, uncompensated bias occurs.

10. Conclusions

We have proposed a method which facilitates plug-and-play sensor-to-segment calibration for two IMUs attached to the segments of a hinge joint system. The method identifies the direction of the joint axis j in the intrinsic reference frames of each sensor, thus providing the user with information about the sensors’ orientation with respect to the joint. Accurate sensor-to-segment calibration is crucial for tracking the motion of the segments.
The method was experimentally validated on data collected from a mechanical joint, which performed a wide range of motions with different identifiability properties. As soon as sufficiently informative data was available, the method achieved a sensor-to-segment calibration accuracy in the order of 2°, assessed as the angular deviation from the ground truth of the joint axis.
The proposed method includes the following features that were evaluated separately using the experimental data:
  • Gyroscope and accelerometer information are weighted and combined, which makes the joint axis identifiable for a wider range of different motions. Experimental evaluation showed that the method is not sensitive to the weighting parameters, and that it performs comparably well for a wide range of different motions across a large interval of weights.
  • A method to select a smaller subset of samples to use from a long sequence of recorded motion is proposed. Samples are selected from motions that yield identifiability, and measurements of non-informative motions are automatically discarded. The experimental evaluation showed that using between 125 and 1000 samples can achieve similar, and in some cases even better, performance than using all available samples collected from a long sequence of motions. Sample selection was shown to be particularly beneficial when the data consisted of more non-informative than informative motions. Furthermore, using fewer samples for estimation reduces the computational complexity of the estimation.
  • A method to quantify local and global uncertainty properties of sequential estimates, which provides the user with an estimate when the criteria for acceptable uncertainty are met. The method successfully identified estimates that satisfied the uncertainty criterion ( E max = 3° ).
The proposed method is the first truly plug-and-play calibration method, and it directly enables plug-and-play motion tracking in hinge joints. For the first time, the user can simply start using the sensors instead of performing precise or sufficiently informative motions in a predefined initial time window, and the proposed method provides reliable calibration parameters as soon as possible, which immediately enables calculation of accurate motion parameters from the incoming raw data as well as from already recorded data. Regardless of the performed motion, it provides only parameters that are actually accurate, which is not guaranteed by any state-of-the-art method. This enables the kind of truly non-restrictive and reliable motion tracking that is needed in a range of application domains, from ubiquitous motion assessment to wearable biofeedback systems.
In future work, the method could be extended to different joint types and be applied to motion tracking in mechatronic and biomechanical systems. For the latter case in particular, it would be of great interest to study the reliability of the method in non-rigid systems, such as human limbs, where motion of soft tissue is significant.

Author Contributions

Conceptualization, F.O., M.K., T.S. and K.H.; Data curation, F.O. and T.S.; Formal analysis, F.O.; Funding acquisition, K.H.; Investigation, F.O. and T.S.; Methodology, F.O., M.K., T.S. and K.H.; Software, F.O.; Supervision, M.K., T.S. and K.H.; Validation, F.O.; Visualization, F.O.; Writing—original draft, F.O. and T.S.; Writing—review and editing, F.O., M.K., T.S. and K.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the project “Mobile assessment of human balance” (Contract number: 2015-05054), funded by the Swedish Research Council.

Acknowledgments

The authors would like to thank Dustin Lehmann (TU Berlin) for providing the 3D printed hinge joint system and for his assistance with the data acquisition.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Camomilla, V.; Bergamini, E.; Fantozzi, S.; Vannozzi, G. Trends supporting the in-field use of wearable inertial sensors for sport performance evaluation: A systematic review. Sensors 2018, 18, 873.
  2. Jalloul, N. Wearable sensors for the monitoring of movement disorders. Biomed. J. 2018, 41, 249–253.
  3. Valtin, M.; Seel, T.; Raisch, J.; Schauer, T. Iterative learning control of drop foot stimulation with array electrodes for selective muscle activation. IFAC Proc. Vol. 2014, 47, 6587–6592.
  4. Picerno, P. 25 years of lower limb joint kinematics by using inertial and magnetic sensors: A review of methodological approaches. Gait Posture 2017, 51, 239–246.
  5. Kortier, H.G.; Sluiter, V.I.; Roetenberg, D.; Veltink, P.H. Assessment of hand kinematics using inertial and magnetic sensors. J. Neuroeng. Rehabil. 2014, 11, 70.
  6. Favre, J.; Jolles, B.; Aissaoui, R.; Aminian, K. Ambulatory measurement of 3D knee joint angle. J. Biomech. 2008, 41, 1029–1035.
  7. O’Donovan, K.J.; Kamnik, R.; O’Keeffe, D.T.; Lyons, G.M. An inertial and magnetic sensor based technique for joint angle measurement. J. Biomech. 2007, 40, 2604–2611.
  8. Favre, J.; Aissaoui, R.; Jolles, B.M.; De Guise, J.A.; Aminian, K. Functional calibration procedure for 3D knee joint angle description using inertial sensors. J. Biomech. 2009, 42, 2330–2335.
  9. Cutti, A.G.; Ferrari, A.; Garofalo, P.; Raggi, M.; Cappello, A.; Ferrari, A. ‘Outwalk’: A protocol for clinical gait analysis based on inertial and magnetic sensors. Med. Biol. Eng. Comput. 2010, 48, 17.
  10. Taetz, B.; Bleser, G.; Miezal, M. Towards self-calibrating inertial body motion capture. In Proceedings of the 19th International Conference on Information Fusion (FUSION), Heidelberg, Germany, 5–8 July 2016; pp. 1751–1759.
  11. Nazarahari, M.; Rouhani, H. Semi-automatic sensor-to-body calibration of inertial sensors on lower limb using gait recording. IEEE Sens. J. 2019, 19, 12465–12474.
  12. Seel, T.; Schauer, T.; Raisch, J. Joint axis and position estimation from inertial measurement data by exploiting kinematic constraints. In Proceedings of the International Conference on Control Applications, Dubrovnik, Croatia, 3–5 October 2012; pp. 45–49.
  13. McGrath, T.; Fineman, R.; Stirling, L. An auto-calibrating knee flexion-extension axis estimator using principal component analysis with inertial sensors. Sensors 2018, 18, 1882.
  14. Küderle, A.; Becker, S.; Disselhorst-Klug, C. Increasing the robustness of the automatic IMU calibration for lower extremity motion analysis. Curr. Dir. Biomed. Eng. 2018, 4, 439–442.
  15. Olsson, F.; Seel, T.; Lehmann, D.; Halvorsen, K. Joint axis estimation for fast and slow movements using weighted gyroscope and acceleration constraints. In Proceedings of the 22nd International Conference on Information Fusion (FUSION), Ottawa, ON, Canada, 2–5 July 2019; pp. 1–8.
  16. Nowka, D.; Kok, M.; Seel, T. On motions that allow for identification of hinge joint axes from kinematic constraints and 6D IMU data. In Proceedings of the 18th European Control Conference (ECC), Naples, Italy, 25–28 June 2019; pp. 4325–4331.
  17. Laidig, D.; Müller, P.; Seel, T. Automatic anatomical calibration for IMU-based elbow angle measurement in disturbed magnetic fields. Curr. Dir. Biomed. Eng. 2017, 3, 167–170.
  18. Salehi, S.; Bleser, G.; Reiss, A.; Stricker, D. Body-IMU autocalibration for inertial hip and knee joint tracking. In Proceedings of the 10th EAI International Conference on Body Area Networks, Sydney, Australia, 28–30 September 2015; pp. 51–57.
  19. Olsson, F.; Halvorsen, K. Experimental evaluation of joint position estimation using inertial sensors. In Proceedings of the 20th International Conference on Information Fusion (FUSION), Xi’an, China, 10–13 July 2017; pp. 1–8.
  20. Graurock, D.; Schauer, T.; Seel, T. Automatic pairing of inertial sensors to lower limb segments—A plug-and-play approach. Curr. Dir. Biomed. Eng. 2016, 2, 715–718.
  21. Schall, M.C., Jr.; Sesek, R.F.; Cavuoto, L.A. Barriers to the adoption of wearable sensors in the workplace: A survey of occupational safety and health professionals. Hum. Factors 2018, 60, 351–362.
  22. Passon, A.; Schauer, T.; Seel, T. Hybrid inertial-robotic motion tracking for upper limb rehabilitation with posture biofeedback. In Proceedings of the International Conference on Biomedical Robotics and Biomechatronics (BioRob), Enschede, The Netherlands, 26–29 August 2018; pp. 1163–1168.
  23. Salchow-Hömmen, C.; Callies, L.; Laidig, D.; Valtin, M.; Schauer, T.; Seel, T. A tangible solution for hand motion tracking in clinical applications. Sensors 2019, 19, 208.
  24. Poddar, S.; Kumar, V.; Kumar, A. A comprehensive overview of inertial sensor calibration techniques. J. Dyn. Syst. Meas. Control 2017, 139.
  25. Kok, M.; Hol, J.D.; Schön, T.B. Using inertial sensors for position and orientation estimation. Found. Trends Signal Process. 2017, 11, 1–153.
  26. El-Sheimy, N.; Hou, H.; Niu, X. Analysis and modeling of inertial sensors using Allan variance. IEEE Trans. Instrum. Meas. 2007, 57, 140–149.
  27. Woodman, O.J. An Introduction to Inertial Navigation; Technical Report 696; University of Cambridge Computer Laboratory: Cambridge, UK, 2007.
  28. Gulmammadov, F. Analysis, modeling and compensation of bias drift in MEMS inertial sensors. In Proceedings of the 4th International Conference on Recent Advances in Space Technologies, Istanbul, Turkey, 11–13 June 2009; pp. 591–596.
  29. El Hadri, A.; Benallegue, A. Attitude estimation with gyros-bias compensation using low-cost sensors. In Proceedings of the 48th IEEE Conference on Decision and Control (CDC) held jointly with the 28th Chinese Control Conference, Shanghai, China, 15–18 December 2009; pp. 8077–8082.
  30. Fong, W.; Ong, S.; Nee, A. Methods for in-field user calibration of an inertial measurement unit without external equipment. Meas. Sci. Technol. 2008, 19, 085202.
  31. Qureshi, U.; Golnaraghi, F. An algorithm for the in-field calibration of a MEMS IMU. IEEE Sens. J. 2017, 17, 7479–7486.
  32. Olsson, F.; Kok, M.; Halvorsen, K.; Schön, T.B. Accelerometer calibration using sensor fusion with a gyroscope. In Proceedings of the Statistical Signal Processing Workshop (SSP), Palma de Mallorca, Spain, 26–29 June 2016; pp. 1–5.
  33. Frosio, I.; Pedersini, F.; Borghese, N.A. Autocalibration of MEMS accelerometers. IEEE Trans. Instrum. Meas. 2008, 58, 2034–2041.
  34. Nocedal, J.; Wright, S. Numerical Optimization, 2nd ed.; Springer Series in Operations Research; Springer: New York, NY, USA, 2006.
  35. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: New York, NY, USA, 2004.
  36. Skog, I.; Händel, P.; Nilsson, J.O.; Rantakokko, J. Zero-velocity detection—An algorithm evaluation. IEEE Trans. Biomed. Eng. 2010, 57, 2657–2666.
  37. Gustafsson, F.; Ljung, L.; Millnert, M. Signal Processing; Studentlitteratur: Lund, Sweden, 2010.
  38. Hendeby, G.; Gustafsson, F. On nonlinear transformations of stochastic variables and its application to nonlinear filtering. In Proceedings of the 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, Las Vegas, NV, USA, 31 March–4 April 2008; pp. 3617–3620.
  39. Lehmann, D.; Laidig, D.; Deimel, R.; Seel, T. Magnetometer-Free Inertial Motion Tracking of Arbitrary Joints with Range of Motion Constraints. Available online: https://arxiv.org/abs/2002.00639 (accessed on 22 June 2020).
  40. Skog, I.; Nilsson, J.O.; Händel, P. Evaluation of zero-velocity detectors for foot-mounted inertial navigation systems. In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation (IPIN), Zurich, Switzerland, 15–17 September 2010; pp. 1–6.
Figure 1. The hinge joint system that we consider. The two segments rotate independently with respect to each other only along the joint axis j. The sensor frames S i are rigidly fixed to their respective segments and their relative orientation can be described by one joint angle, that corresponds to a rotation about the joint axis. The joint axis expressed in local sensor coordinates is an important sensor-to-segment calibration parameter in joint systems with one degree of freedom (DOF).
Figure 2. Shape of the cost function V ( x ) for a motion with simultaneous planar rotations of the segments. The parameters θ 1 and ϕ 1 are fixed near their true values and θ 2 and ϕ 2 are allowed to vary. From left to right, we see how the geometry changes as w ω increases while w a = 1 is constant. As w ω increases, new local minima appear near the locations at ϕ 2 + π from the previously existing local minima. These new local minima correspond to the wrong sign pairing ( ± j 1 , ∓ j 2 ) .
Figure 3. The 3D printed hinge joint system, design by Dustin Lehmann, with the two IMUs (orange boxes, 34 × 58 mm) attached.
Figure 4. The angular velocity magnitudes for the 14 different motions that were recorded. The vertical lines and numbered sections indicate when the different motions begin and end.
Figure 5. Root-mean-square angular error (RMSAE) (71) for different motions and weights w 0 . Normal speed motions are shown in the top plots and faster motions are shown in the bottom plots. The plots on the right show the same results as the plots to their respective lefts, but zoomed in.
Figure 6. Angular errors over time for the four scenarios, see (a–d). Comparing the case N max = N ( t ) , where all samples up to time t are used for estimation, to the cases where N max ∈ { 1000 , 500 , 250 , 125 } samples are chosen by Algorithms 2–3 at each integer second t. Colored lines distinguish these cases as given by the legend in the top right. Vertical dashed lines and the numbers 1–14 indicate from which motions (see Section 7.1) the data come.
Figure 7. The figure shows which samples from Scenario 1 (a) and Scenario 2 (b) were selected by Algorithms 2–3 with N max = 1000 , at the times given by the vertical axes. Black/white indicates that a sample was selected/not selected, respectively. As time increases and more samples become available, some previously selected samples are deselected in favor of new samples that are deemed superior by the algorithms. Vertical dashed lines and the numbers 1–14 indicate from which motions (see Section 7.1) the data come.
Figure 8. The plots show the local and global uncertainty metrics compared to the angular errors (red) for the four scenarios, see (a–d). Local uncertainty is quantified by μ z + 2 σ z , where μ z is the estimated mean AD (64) and σ z is the standard deviation computed from the estimated covariance matrix (65). Local uncertainty (blue) is shown for both j ^ 1 and j ^ 2 for each scenario. Global uncertainty is quantified by (68); the global uncertainty (green) with n min = 10 is shown for each scenario. Horizontal dashed lines show the accuracy threshold E max = 3°. Vertical dashed lines show when estimates j ^ were accepted by Algorithm 4. For each scenario, the leftmost vertical lines show the case n min = 3 and the rightmost vertical lines show the case n min = 10 , where Algorithm 4 terminates when the estimates have reached the desired accuracy w.r.t. ground truth.
Table 1. RMSAE (71) and MAXAE (72) after M = 100 runs, with and without added artificial bias, for the four scenarios.
Scenario   b_a [m/s²]   b_ω [°/s]   RMSAE [°]   MAXAE [°]
1          0            0           1.55        1.67
1          1            1           1.73        4.41
2          0            0           1.58        2.16
2          1            1           1.97        4.84
3          0            0           1.50        2.07
3          1            1           1.58        3.09
4          0            0           1.47        1.98
4          1            1           1.30        2.32
