
Sensors 2019, 19(12), 2794; https://doi.org/10.3390/s19122794

Article
Simultaneous Floating-Base Estimation of Human Kinematics and Joint Torques
1. Dynamic Interaction Control at Istituto Italiano di Tecnologia, Center for Robotics and Intelligent Systems, Via San Quirico 19D, 16163 Genoa, Italy
2. Machine Learning and Optimisation, The University of Manchester, Manchester M13 9PL, UK
3. DIBRIS, University of Genova, 16145 Genova, Italy
* Author to whom correspondence should be addressed.
Received: 1 May 2019 / Accepted: 17 June 2019 / Published: 21 June 2019

Abstract

The paper presents a stochastic methodology for the simultaneous floating-base estimation of the human whole-body kinematics and dynamics (i.e., joint torques, internal and external forces). The paper builds upon our former work, where a fixed-base formulation had been developed for the human estimation problem. The presented approach is validated with experimental results of a healthy subject, equipped with a wearable motion tracking system and a pair of shoes sensorized with force/torque sensors, performing different motion tasks, e.g., walking on a treadmill. The results show that the joint torque estimates obtained with the floating-base and fixed-base approaches match satisfactorily, thus validating the present approach.
Keywords:
floating-base dynamics estimation; human joint torque analysis; human wearable dynamics

1. Introduction

In physical human–robot interaction (pHRI) domains, a wide variety of applications requires robots to actively collaborate with humans. More and more frequently, robots are required to be endowed with the capability to control physical collaboration through intentional interaction with humans. The simultaneous whole-body estimation of the human kinematics (i.e., motion) and dynamics (i.e., joint torques and internal forces) is a crucial component for modeling, estimating and controlling the interaction. In assistive and rehabilitation scenarios, for instance, the demand for physical robotic assistance to humans is ever growing, and the estimation is a pivotal component for creating technologies capable of helping and assisting humans. The importance of controlling the pHRI calls for the design of a framework in which the concurrent estimation of the human kinematics and dynamics can be exploited by the robots [1].
In general, several routes for pHRI have been explored over the years. Minimum-jerk-based methods [2,3], imitation learning [4] and retargeting techniques [5,6] are only some of the relevant examples. None of them, however, deals with the simultaneous estimation of the human kinematics and dynamics. Although real-time solutions for whole-body motion tracking are widely marketed (e.g., marker-based motion capture such as Vicon, or marker-less systems such as the Microsoft Kinect and the Xsens wearable suit system), the dynamics estimation is still a challenging problem, especially in those scenarios for which an online estimation is a crucial requirement (e.g., health monitoring or manufacturing ergonomics).
This paper builds upon our former work [7], where a probabilistic algorithm to estimate the human whole-body fixed-base dynamics is described. We propose here a stochastic methodology for the simultaneous floating-base estimation of the human kinematics and dynamics. The estimation is computed by means of a sensor-fusion-based tool able to provide an estimation of the whole-body kinematics and dynamics of the human (torques, internal forces exchanged across joints, and external forces acting on links) by leveraging the reliability of the available measurements. The core of the algorithm has been adapted here to address the floating-base estimation, in which the position and velocity of the human base are not assumed to be known a priori. The floating-base formalism is not a novelty in robotics research and has been used in multiple contexts for years. Inverse dynamics control for humanoids and legged robots [8], modeling and control of humanoids in dynamic environments [9], and identification of humanoid inertial parameters [10,11] are only a few examples of applications. In [12,13], the formalism has even been used in the context of human–robot experiments. The human dynamics, however, was not computed under the floating-base formalism; instead, human motion capture data were used to generate human-like motions to be retargeted onto floating-base humanoids.
Our experimental validation setup considered a sensorized human subject walking on a treadmill. In general, gait analysis requires: (i) classifying the human walking state, i.e., the recursive switching pattern from right leg single support → double support → left leg single support; and (ii) defining an algorithm able to detect the pattern classification. In a recent study on the estimation of human joint muscular torques during gait [14], two dynamical models were considered separately for the legs to overcome the problem of switching contact detection and to avoid the increasing complexity of the control algorithm for pattern classification. In this work, we made a different choice: we considered the dynamics of the human as a whole (with the pelvis as the floating base) and developed an algorithm to detect the feet contacts via additional sensor readings (force/torque sensors). In general, the algorithm constitutes a powerful and versatile tool to arbitrarily analyze all those tasks for which a switching contact condition is required, without changing the base inside the algorithm.
The paper is structured as follows. Section 2 defines the mathematical notation and describes the human kinematics and dynamics modeling. Section 3 describes the steps required to compute the whole-body estimation of kinematics and joint torques for the floating-base formalism. In Section 4, the experimental setup and data analysis are described. Conclusions, limitations and several considerations on future developments are discussed in Section 5.

2. Background

2.1. Notation

  • Let $\mathbb{R}$ and $\mathbb{N}$ be the sets of real and natural numbers, respectively.
  • Let $\mathbf{x} \in \mathbb{R}^n$ denote an n-dimensional column vector, while $x$ denotes a scalar quantity. We advise the reader to pay attention to the notation style: vectors and matrices are denoted with bold small and capital letters, respectively, and scalars with a non-bold style.
  • Let $|\mathbf{x}|$ be the norm of the vector $\mathbf{x}$.
  • Let $0_m$ and $1_m$ be the zero and identity matrices $\in \mathbb{R}^{m \times m}$, respectively. The notation $0_{m \times n}$ represents the zero matrix $\in \mathbb{R}^{m \times n}$.
  • Let $\mathcal{I}$ be an inertial frame with the z-axis pointing against gravity ($g$ denotes the norm of the gravitational acceleration). Let $\mathcal{B}$ denote the base frame, i.e., a frame attached to the base link. Let $\mathcal{L}$ be the generic frame attached to a link, and $\mathcal{J}$ the frame of a joint.
  • Let each frame be identified by an origin and an orientation, e.g., $\mathcal{I} = (O_\mathcal{I}, [\mathcal{I}])$ or $\mathcal{L}[\mathcal{I}] = (O_\mathcal{L}, [\mathcal{I}])$.
  • Let ${}^\mathcal{I}\mathbf{o}_\mathcal{B} \in \mathbb{R}^3$ be the coordinate vector connecting $O_\mathcal{I}$ with $O_\mathcal{B}$, pointing towards $O_\mathcal{B}$, expressed with respect to (w.r.t.) frame $\mathcal{I}$.
  • Let ${}^\mathcal{I}R_\mathcal{B} \in SO(3)$ be a rotation matrix such that ${}^\mathcal{I}\mathbf{o}_\mathcal{L} = {}^\mathcal{I}\mathbf{o}_\mathcal{B} + {}^\mathcal{I}R_\mathcal{B}\,{}^\mathcal{B}\mathbf{o}_\mathcal{L}$.
  • Let $S(\mathbf{x}) \in \mathbb{R}^{3 \times 3}$ denote the skew-symmetric matrix such that $S(\mathbf{x})\,\mathbf{y} = \mathbf{x} \times \mathbf{y}$, with $\times$ the cross product operator in $\mathbb{R}^3$.
  • Let ${}^\mathcal{I}\dot{\mathbf{o}}_\mathcal{B}$ and ${}^\mathcal{I}\ddot{\mathbf{o}}_\mathcal{B}$ denote the first-order and second-order time derivatives of ${}^\mathcal{I}\mathbf{o}_\mathcal{B}$, respectively.
  • Given a stochastic variable $\mathbf{x}$, let $p(\mathbf{x})$ denote its probability density and $p(\mathbf{x}|\mathbf{y})$ the conditional probability of $\mathbf{x}$ given that another stochastic variable $\mathbf{y}$ has occurred.
  • If $E[\mathbf{x}]$ is the expected value of a stochastic variable $\mathbf{x}$, let $\mu_x = E[\mathbf{x}]$ and $\Sigma_x = E[(\mathbf{x} - \mu_x)(\mathbf{x} - \mu_x)^\top]$ be the mean and covariance of $\mathbf{x}$, respectively. Let $\mathbf{x} \sim \mathcal{N}(\mu_x, \Sigma_x)$ denote the normal distribution of $\mathbf{x}$.

2.2. Human Kinematics and Dynamics Modeling

The human is modeled as a rigid multi-body system with $n \in \mathbb{N}$ internal Degrees of Freedom (DoFs). The system is composed of $N_B$ rigid bodies, called links (denoted with $L$), connected by joints (denoted with $J$). Links are numbered from $0$ to $N_B - 1$, with link $0$ being the floating base. Furthermore, $\lambda(L)$ and $\mu(L)$ represent the parent and child links of $L$, respectively. The topological order is such that links $L$ and $\lambda(L)$ are coupled by joint $J$. The joint motion constraint is modeled with the motion freedom subspace $S \in \mathbb{R}^6$. We assume that all the joints have one DoF each, which implies that the joints are numbered from $1$ to $n = N_B - 1$. No parent joint is assumed for the floating base. See the overall representation in Figure 1.
We also assume that none of the links has an a priori known constant pose w.r.t. $\mathcal{I}$. Thus, we say that the system is floating base. More precisely, the system configuration space is a Lie group $\mathbb{Q} = \mathbb{R}^3 \times SO(3) \times \mathbb{R}^n$ such that $q = (q_b, s) \in \mathbb{Q}$, with $q_b = ({}^\mathcal{I}\mathbf{o}_\mathcal{B}, {}^\mathcal{I}R_\mathcal{B}) \in \mathbb{R}^3 \times SO(3)$ the pose of the base frame $\mathcal{B}$ w.r.t. $\mathcal{I}$ and $s \in \mathbb{R}^n$ the joint positions vector capturing the topology of the system. The velocity of the system is represented by $\nu = ({}^\mathcal{I}\mathbf{v}_\mathcal{B}, \dot{s}) \in \mathbb{R}^{6+n}$, with ${}^\mathcal{I}\mathbf{v}_\mathcal{B} = ({}^\mathcal{I}\dot{\mathbf{o}}_\mathcal{B}, {}^\mathcal{I}\boldsymbol{\omega}_\mathcal{B}) \in \mathbb{R}^6$ the velocity of $\mathcal{B}$ w.r.t. $\mathcal{I}$ (the angular velocity of the base ${}^\mathcal{I}\boldsymbol{\omega}_\mathcal{B}$ is such that ${}^\mathcal{I}\dot{R}_\mathcal{B} = S({}^\mathcal{I}\boldsymbol{\omega}_\mathcal{B})\,{}^\mathcal{I}R_\mathcal{B}$) and $\dot{s} \in \mathbb{R}^n$ the joint velocities vector. If the system interacts with the external environment by exchanging $n_c$ wrenches, the dynamics of the floating-base system can be described by adopting the Euler–Poincaré formalism ([15], Ch. 13.5):
$$M(q)\,\dot{\nu} + h(q, \nu) = B\tau + \sum_{k=1}^{n_c} J_{C_k}^\top(q)\,\mathbb{f}^x_k, \qquad (1)$$

where $M \in \mathbb{R}^{(n+6)\times(n+6)}$ is the mass matrix, $h \in \mathbb{R}^{n+6}$ is a vector accounting for the Coriolis and gravity terms, $B := (0_{n \times 6},\ 1_n)^\top \in \mathbb{R}^{(n+6)\times n}$ is a selector matrix, $\tau \in \mathbb{R}^n$ represents the joint torques, and $\mathbb{f}^x_k = (\mathbf{f}^x_k, \mathbf{m}^x_k) \in \mathbb{R}^6$ is the external wrench acting on the link that has the $k$th contact point, with $\mathbf{f}^x_k$ and $\mathbf{m}^x_k$ the external force and moment, respectively. The Jacobian $J_{C_k}(q) \in \mathbb{R}^{6\times(6+n)}$ is the operator that maps the system velocity $\nu$ into the velocity of the $k$th contact frame, such that

$${}^\mathcal{I}\mathbf{v}_{C_k} = J_{C_k}(q)\,\nu = \begin{pmatrix} J_b(q) & J_s(q) \end{pmatrix} \begin{pmatrix} {}^\mathcal{I}\mathbf{v}_\mathcal{B} \\ \dot{s} \end{pmatrix}, \qquad (2)$$

with $J_b(q) \in \mathbb{R}^{6\times6}$ and $J_s(q) \in \mathbb{R}^{6\times n}$ the Jacobians related to the base and joint configuration, respectively.
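As a quick numerical illustration of the structure of Equation (1), the selector matrix $B$ routes the joint torques onto the actuated coordinates only, leaving the six floating-base coordinates unactuated. The following sketch (ours, with an arbitrary toy dimension) is not part of the paper's implementation:

```python
import numpy as np

def selector_matrix(n):
    """Selector matrix B = (0_{n x 6}, 1_n)^T mapping the joint torques
    tau in R^n into the (n+6)-dimensional generalized-force space."""
    return np.vstack([np.zeros((6, n)), np.eye(n)])

n = 3                          # toy model with 3 internal DoFs
B = selector_matrix(n)
tau = np.array([1.0, -2.0, 0.5])
gen_force = B @ tau            # first 6 entries (floating base) are zero
print(gen_force)
```

Applying $B$ makes explicit that no direct actuation acts on the base: any base motion must result from the joint torques and the contact wrenches through the dynamics.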

2.3. Case-Study Human Model

Our case-study human body model has $N_B = 67$ links and $n = 66$ internal DoFs. The links were modeled with simple geometric shapes (parallelepiped, cylinder, and sphere) whose dimensions were estimated via inertial measurement unit (IMU) readings (i.e., the Xsens motion capture system provides the position of several anatomical bony landmarks w.r.t. the origin of each link). The dynamic properties of each link (i.e., inertias and center of mass) were computed via the anthropometric data available in the literature by: (i) exploiting the relation between the total body mass and the mass of each link [16,17]; and (ii) assuming geometric approximations and homogeneous density for the links [18,19].

3. Simultaneous Floating-Base Estimation of Human Whole-Body Kinematics and Dynamics

In this section, we describe step-by-step the simultaneous floating-base estimation algorithm for the human whole-body kinematics and dynamics.

3.1. Offline Estimation of Sensor Position

The first objective was to develop a Universal Robot Description Format (URDF) model for the human with the properties listed in Section 2.3 (see Figure 2, right). The URDF is an XML-based file format for representing the kinematics and dynamics of multi-body systems. A crucial step for the URDF generation is to identify the position of each sensor w.r.t. the attached link frame (Figure 2, left). Xsens exposes the sensor linear acceleration, the link angular velocity and acceleration, and the sensor and link orientations w.r.t. the inertial frame. However, the sensor position is not provided by its framework. A procedure to estimate the sensor position by processing IMU data was therefore adopted. The procedure is very similar to the one in [20], where it is used for humanoid robots.
More precisely, if $S$ is the frame associated with the sensor and $L$ is the frame of the link to which the sensor is rigidly attached, then the measurement equation is

$$\mathbf{a}_S = {}^S R_\mathcal{I}\left({}^\mathcal{I}\ddot{\mathbf{o}}_S - {}^\mathcal{I}\mathbf{g}\right) = {}^S R_\mathcal{I}\left({}^\mathcal{I}\ddot{\mathbf{o}}_L + {}^\mathcal{I}\dot{\boldsymbol{\omega}}_L \times {}^\mathcal{I}R_L\,{}^L\mathbf{o}_S + {}^\mathcal{I}\boldsymbol{\omega}_L \times \left({}^\mathcal{I}\boldsymbol{\omega}_L \times {}^\mathcal{I}R_L\,{}^L\mathbf{o}_S\right) - {}^\mathcal{I}\mathbf{g}\right) = {}^S R_\mathcal{I}\left({}^\mathcal{I}\ddot{\mathbf{o}}_L + \left[S({}^\mathcal{I}\dot{\boldsymbol{\omega}}_L) + S({}^\mathcal{I}\boldsymbol{\omega}_L)^2\right]{}^\mathcal{I}R_L\,{}^L\mathbf{o}_S - {}^\mathcal{I}\mathbf{g}\right), \qquad (3)$$

with ${}^\mathcal{I}\mathbf{g} = [0\ \ 0\ \ {-9.81}]^\top$.
Equation (3) can be rearranged in the following form:

$$\underbrace{\left[S({}^\mathcal{I}\dot{\boldsymbol{\omega}}_L) + S({}^\mathcal{I}\boldsymbol{\omega}_L)^2\right]{}^\mathcal{I}R_L}_{A}\ {}^L\mathbf{o}_S = \underbrace{{}^\mathcal{I}R_S\,\mathbf{a}_S - \left({}^\mathcal{I}\ddot{\mathbf{o}}_L - {}^\mathcal{I}\mathbf{g}\right)}_{\mathbf{b}}. \qquad (4)$$

Given $N_m$ measurements, the position of the sensor w.r.t. the link, i.e., ${}^L\mathbf{o}_S$, is the solution of the following optimization problem:

$${}^L\mathbf{o}_S^{\,*} = \arg\min_{{}^L\mathbf{o}_S} \left|\bar{A}\,{}^L\mathbf{o}_S - \bar{\mathbf{b}}\right|^2, \qquad (5)$$

with $\bar{A} = [A_1^\top\ A_2^\top\ \cdots\ A_{N_m}^\top]^\top$ and $\bar{\mathbf{b}} = [\mathbf{b}_1^\top\ \mathbf{b}_2^\top\ \cdots\ \mathbf{b}_{N_m}^\top]^\top$.
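The stacked least-squares problem in Equation (5) can be solved directly with an off-the-shelf linear solver. The Python sketch below (ours, not the paper's implementation) generates synthetic angular velocity and acceleration samples, builds $\bar{A}$ and $\bar{\mathbf{b}}$, and recovers a hypothetical sensor offset; all numerical values are illustrative:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix S(v) such that S(v) y = v x y."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

rng = np.random.default_rng(0)
o_S_true = np.array([0.1, -0.05, 0.2])   # hypothetical sensor offset L_o_S [m]

A_rows, b_rows = [], []
for _ in range(50):                       # N_m = 50 synthetic samples
    w = rng.normal(size=3)                # angular velocity sample
    dw = rng.normal(size=3)               # angular acceleration sample
    R = np.eye(3)                         # identity link orientation, for simplicity
    A = (skew(dw) + skew(w) @ skew(w)) @ R
    b = A @ o_S_true                      # synthetic noise-free right-hand side
    A_rows.append(A)
    b_rows.append(b)

A_bar = np.vstack(A_rows)
b_bar = np.concatenate(b_rows)
o_S_est, *_ = np.linalg.lstsq(A_bar, b_bar, rcond=None)
print(o_S_est)                            # recovers o_S_true
```

With real IMU data, $b$ would come from the measured accelerations as in Equation (4), and noise would make the least-squares averaging over $N_m$ samples essential.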

3.2. Estimation of Human Kinematics

The objective of this section is to derive algorithms for estimating the human kinematic configuration q = ( q b , s ) and its derivatives.
For each link pair $[\lambda(L), L]$ coupled by the joint $J$, the joint positions $s \in \mathbb{R}^n$ were computed by solving an optimization problem with Ipopt [21]. The problem is formulated so as to minimize the distance between the measured, i.e., ${}^{\lambda(L)}R_L^{meas}$, and the computed, i.e., ${}^{\lambda(L)}R_L$, relative rotations between the frames attached to the link pair, such that

$$s_J^* = \arg\min_{s_J}\ \mathrm{distance}\left({}^{\lambda(L)}R_L^{meas},\ {}^{\lambda(L)}R_L\right), \qquad \text{s.t.}\ s_{J,min} < s_J < s_{J,max}, \qquad (6)$$

with the distance defined in terms of the rotation error parameterized in Euler angles, and $(s_{J,min}, s_{J,max})$ the joint limits. We refer to Equation (6) as a link-pairwise inverse kinematics (IK) problem. Joint velocities and accelerations $\dot{s}, \ddot{s} \in \mathbb{R}^n$ were computed with a third-order polynomial Savitzky–Golay filter applied over moving windows of samples [22].
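The Savitzky–Golay differentiation step can be sketched as follows: fit a third-order polynomial over a centered moving window and evaluate its derivative at the window center. The window length and the test signal below are our illustrative choices, not the paper's settings:

```python
import numpy as np

def savgol_derivative(x, window, poly, order, dt):
    """Minimal Savitzky-Golay differentiation sketch: least-squares fit of
    a polynomial of degree `poly` on each centered window, evaluated at
    the window center for the `order`-th derivative."""
    half = window // 2
    out = np.full_like(x, np.nan)              # edges left undefined here
    t_win = (np.arange(window) - half) * dt
    for i in range(half, len(x) - half):
        coeffs = np.polyfit(t_win, x[i - half:i + half + 1], poly)
        deriv_poly = np.polyder(np.poly1d(coeffs), order)
        out[i] = deriv_poly(0.0)
    return out

fs = 50.0                                      # sampling rate [Hz], as in the experiments
t = np.arange(0, 2, 1 / fs)
s = np.sin(2 * np.pi * t)                      # synthetic joint trajectory [rad]
s_dot = savgol_derivative(s, window=11, poly=3, order=1, dt=1 / fs)
s_ddot = savgol_derivative(s, window=11, poly=3, order=2, dt=1 / fs)
```

Compared with plain finite differences, the polynomial fit smooths measurement noise while preserving low-frequency derivative content, which is why it is a common choice for processing motion-capture joint trajectories.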
The base pose $q_b$ was obtained via IMU readings. The pivotal modification for the floating-base formalism concerns the computation of the velocity ${}^\mathcal{I}\mathbf{v}_\mathcal{B}$ of the floating base. It is assumed that holonomic constraints of the form $c(q) = 0$ act on the system in Equation (1). In the human experimental framework, constraints occur when the system is in contact with the ground, such that the feet can be considered as end-effectors with zero velocity (i.e., ${}^\mathcal{I}\mathbf{v}_{C_k} = 0$). This yields

$$0 = J_{C_k}(q)\,\nu. \qquad (7)$$
If $RF$ and $LF$ are the contact frames associated with the right and left foot, respectively, then by Equation (2) we can write Equation (7) as

$$0 = J_{b,RF}(q)\,{}^\mathcal{I}\mathbf{v}_\mathcal{B} + J_{s,RF}(q)\,\dot{s}, \qquad (8a)$$
$${}^\mathcal{I}\mathbf{v}_\mathcal{B}^* = \arg\min_{{}^\mathcal{I}\mathbf{v}_\mathcal{B}} \left|J_{b,RF}(q)\,{}^\mathcal{I}\mathbf{v}_\mathcal{B} + J_{s,RF}(q)\,\dot{s}\right|^2, \qquad (8b)$$

if only the contact in $RF$ occurs, and

$$0 = J_{b,LF}(q)\,{}^\mathcal{I}\mathbf{v}_\mathcal{B} + J_{s,LF}(q)\,\dot{s}, \qquad (9a)$$
$${}^\mathcal{I}\mathbf{v}_\mathcal{B}^* = \arg\min_{{}^\mathcal{I}\mathbf{v}_\mathcal{B}} \left|J_{b,LF}(q)\,{}^\mathcal{I}\mathbf{v}_\mathcal{B} + J_{s,LF}(q)\,\dot{s}\right|^2, \qquad (9b)$$

if the contact occurs in $LF$.
If, instead, the system is simultaneously constrained by both feet, we need to consider the overall effect on the system:

$$0 = \underbrace{\begin{pmatrix} J_{b,RF}(q) \\ J_{b,LF}(q) \end{pmatrix}}_{\bar{J}_b(q)} {}^\mathcal{I}\mathbf{v}_\mathcal{B} + \underbrace{\begin{pmatrix} J_{s,RF}(q) \\ J_{s,LF}(q) \end{pmatrix}}_{\bar{J}_s(q)}\,\dot{s}, \qquad (10a)$$
$${}^\mathcal{I}\mathbf{v}_\mathcal{B}^* = \arg\min_{{}^\mathcal{I}\mathbf{v}_\mathcal{B}} \left|\bar{J}_b(q)\,{}^\mathcal{I}\mathbf{v}_\mathcal{B} + \bar{J}_s(q)\,\dot{s}\right|^2. \qquad (10b)$$
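Once the contact Jacobians are known, the base velocity computation in Equations (8b)–(10b) reduces to an ordinary linear least-squares problem. A sketch for the double-support case with random, purely hypothetical Jacobians and an arbitrary toy dimension:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10                                    # toy model: 10 internal DoFs

# Hypothetical contact Jacobians for the right and left foot (6x6, 6xn).
Jb_RF, Js_RF = rng.normal(size=(6, 6)), rng.normal(size=(6, n))
Jb_LF, Js_LF = rng.normal(size=(6, 6)), rng.normal(size=(6, n))
s_dot = rng.normal(size=n)                # measured joint velocities

# Double support: stack both contact constraints (Eq. (10a)) and solve
# the least-squares problem (Eq. (10b)) for the 6D base velocity.
Jb_bar = np.vstack([Jb_RF, Jb_LF])        # 12 x 6
Js_bar = np.vstack([Js_RF, Js_LF])        # 12 x n
v_B, *_ = np.linalg.lstsq(Jb_bar, -Js_bar @ s_dot, rcond=None)
```

In single support, only the Jacobians of the contacting foot would be used, which is exactly the branch selected by the contact classification of Section 3.3.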

3.3. Offline Contact Classification

We implemented an offline algorithm to detect which foot is in contact with the ground, i.e., double support state, left single support state, or right single support state (see Algorithm 1). The contact classification is determined via force/torque (FT) sensor readings and depends on a self-tuned force threshold value $T_{f_z}$. The threshold defines how large the double-support region is considered to be. When a single support occurs, the algorithm classifies which foot is in contact with the ground by reading and comparing the FT sensor values.
Algorithm 1 Offline Feet Contact Classification.
Require: FT sensor forces (z component) for right foot $RF_{f_z}$ and left foot $LF_{f_z}$
1: procedure
2:     $N$ ← number of samples
3:     $T_{f_z}$ ← threshold on $f_z$ = mean($RF_{f_z}$ + $LF_{f_z}$)
4:     main loop:
5:     for $j = 1$ to $N$ do
6:         if abs($RF_{f_z}(j)$ − $LF_{f_z}(j)$) ≤ $T_{f_z}$ then
7:             Classify $j$ as double support sample
8:         else
9:             if $RF_{f_z}(j)$ > $LF_{f_z}(j)$ then
10:                Classify $j$ as right single support sample
11:            else
12:                Classify $j$ as left single support sample
13:            end if
14:        end if
15:    end for
16: end procedure
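Algorithm 1 can be transcribed almost literally into Python. The self-tuned threshold below follows the pseudocode (the mean of the summed vertical forces); in practice the value would be tuned on the recorded data, and the force values in the example are purely illustrative:

```python
import numpy as np

def classify_feet_contact(rf_fz, lf_fz):
    """Offline feet contact classification (Algorithm 1).

    rf_fz, lf_fz: z-components of the right/left foot FT forces [N].
    Returns one label per sample: 'double', 'right' or 'left'.
    """
    rf_fz, lf_fz = np.asarray(rf_fz, float), np.asarray(lf_fz, float)
    T_fz = np.mean(rf_fz + lf_fz)          # self-tuned threshold, as in the pseudocode
    labels = []
    for rf, lf in zip(rf_fz, lf_fz):
        if abs(rf - lf) <= T_fz:
            labels.append('double')        # both feet share the load
        elif rf > lf:
            labels.append('right')         # right single support
        else:
            labels.append('left')          # left single support
    return labels

# Illustrative samples: double support, right support, left support, double support.
print(classify_feet_contact([400, 800, 0, 100], [400, 0, 800, 100]))
# → ['double', 'right', 'left', 'double']
```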

3.4. Maximum-A-Posteriori Algorithm for Floating-Base Dynamics Estimation

The simultaneous estimation of the human kinematics and dynamics is performed by means of a Maximum-A-Posteriori (MAP) algorithm. The advantages of this algorithm are discussed in [7]. Here, the objective is to describe how the core of the algorithm was modified to fit the floating-base formalism. The main difference lies in a new representation for the acceleration. Instead of using the proper body acceleration $\in \mathbb{R}^6$ (e.g., as in [23]), i.e.,

$$\mathbf{a}_L^g = \dot{\mathbf{v}}_L - \begin{pmatrix} {}^L R_\mathcal{I}\,{}^\mathcal{I}\mathbf{g} \\ 0_{3\times1} \end{pmatrix} = \begin{pmatrix} {}^\mathcal{I}\ddot{\mathbf{o}}_L \\ {}^\mathcal{I}\dot{\boldsymbol{\omega}}_L \end{pmatrix} - \begin{pmatrix} {}^L R_\mathcal{I}\,{}^\mathcal{I}\mathbf{g} \\ 0_{3\times1} \end{pmatrix}, \qquad (11)$$

we decided to adopt the proper sensor acceleration $\in \mathbb{R}^6$, i.e.,

$$\boldsymbol{\alpha}_L^g = \boldsymbol{\alpha}_L - \begin{pmatrix} {}^L R_\mathcal{I}\,{}^\mathcal{I}\mathbf{g} \\ 0_{3\times1} \end{pmatrix} = {}^L X_{L[\mathcal{I}]}\,{}^{L[\mathcal{I}]}\dot{\mathbf{v}}_L - \begin{pmatrix} {}^L R_\mathcal{I}\,{}^\mathcal{I}\mathbf{g} \\ 0_{3\times1} \end{pmatrix} = \begin{pmatrix} {}^L R_\mathcal{I} & 0_3 \\ 0_3 & {}^L R_\mathcal{I} \end{pmatrix} \begin{pmatrix} {}^\mathcal{I}\ddot{\mathbf{o}}_L \\ {}^\mathcal{I}\dot{\boldsymbol{\omega}}_L \end{pmatrix} - \begin{pmatrix} {}^L R_\mathcal{I}\,{}^\mathcal{I}\mathbf{g} \\ 0_{3\times1} \end{pmatrix} = \begin{pmatrix} {}^L R_\mathcal{I}\left({}^\mathcal{I}\ddot{\mathbf{o}}_L - {}^\mathcal{I}\mathbf{g}\right) \\ \dot{\boldsymbol{\omega}}_L \end{pmatrix}, \qquad (12)$$

with $X \in \mathbb{R}^{6\times6}$ the adjoint transformation matrix for motion vectors. The main advantage is that the linear part of Equation (12) corresponds to Equation (3) when the frame of the accelerometer coincides with the frame $L$ of the link (same origin and orientation). In general, several modifications were required for the floating-base formalism, as follows.
  • We used the new acceleration representation by exploiting the relation between Equation (11) and Equation (12), i.e.,

    $$\mathbf{a}_L^g = \boldsymbol{\alpha}_L^g - \bar{\boldsymbol{\alpha}}_L^g, \qquad \text{with} \qquad \bar{\boldsymbol{\alpha}}_L^g = \begin{pmatrix} \left({}^L R_\mathcal{I}\,{}^\mathcal{I}\dot{\mathbf{o}}_L\right) \times \boldsymbol{\omega}_L \\ 0_{3\times1} \end{pmatrix}. \qquad (13)$$

  • Since we broke the univocal relation between each link and its parent joint, we redefined the serialization of all the kinematic and dynamic quantities in the vector $\mathbf{d}$ w.r.t. the fixed-base serialization of the same vector in Section 4 of [7], thus

    $$\mathbf{d} = \begin{pmatrix} \mathbf{d}_{link} \\ \mathbf{d}_{joint} \end{pmatrix} \in \mathbb{R}^{12N_B + 7n}, \qquad (14)$$

    with

    $$\mathbf{d}_{link} = \begin{pmatrix} \boldsymbol{\alpha}_0^{g\top} & \mathbb{f}_0^{x\top} & \boldsymbol{\alpha}_1^{g\top} & \mathbb{f}_1^{x\top} & \cdots & \boldsymbol{\alpha}_{N_B-1}^{g\top} & \mathbb{f}_{N_B-1}^{x\top} \end{pmatrix}^\top \in \mathbb{R}^{12N_B}, \qquad (15a)$$
    $$\mathbf{d}_{joint} = \begin{pmatrix} \mathbb{f}_1^\top & \mathbb{f}_2^\top & \cdots & \mathbb{f}_n^\top & \ddot{s}_1 & \ddot{s}_2 & \cdots & \ddot{s}_n \end{pmatrix}^\top \in \mathbb{R}^{7n}. \qquad (15b)$$

    In the new serialization, $\boldsymbol{\alpha}^g \in \mathbb{R}^6$ is the proper sensor acceleration of Equation (12) and $\mathbb{f}^x \in \mathbb{R}^6$ is the external wrench acting on each link. Similarly, for the joint quantities, $\mathbb{f} \in \mathbb{R}^6$ is the internal wrench (or joint wrench) exchanged from $\lambda(L)$ to $L$ through the joint $J$, while $\ddot{s}$ is the joint acceleration.
  • The variable $\tau$ was removed from $\mathbf{d}$. The joint torque can be obtained as the projection of the joint wrench onto the motion freedom subspace, such that $\tau_J = S_J^\top\,\mathbb{f}_J$, for each joint $J$ of the model.
Within this new formalism, we rewrite the equation of the acceleration propagation and the recursive Newton–Euler equations. The Newton–Euler formalism is an equivalent representation of Equation (1) (more details about this choice can be found in Section 3.3 of [24]). For the sake of simplicity, in the following we refer to $L$ and $J$ as compact forms for $\mathcal{L}_L$ and $\mathcal{J}_J$, respectively, thus:

$$\boldsymbol{\alpha}_L^g = {}^L X_{\lambda(L)}\,\boldsymbol{\alpha}_{\lambda(L)}^g + S_J\,\ddot{s}_J + \begin{pmatrix} 0_{3\times1} \\ \boldsymbol{\omega}_L \end{pmatrix} \times S_J\,\dot{s}_J + \left(\bar{\boldsymbol{\alpha}}_L^g - {}^L X_{\lambda(L)}\,\bar{\boldsymbol{\alpha}}_{\lambda(L)}^g\right), \qquad (16)$$

$$M_L\,\boldsymbol{\alpha}_L^g + \begin{pmatrix} 0_{3\times1} \\ \boldsymbol{\omega}_L \end{pmatrix} \times^* M_L \begin{pmatrix} 0_{3\times1} \\ \boldsymbol{\omega}_L \end{pmatrix} = \mathbb{f}_L^x + \mathbb{f}_J - \sum_{\mu(L)} {}^J X^*_{J_\mu}\,\mathbb{f}_{J_\mu}, \qquad (17)$$

where $X^* \in \mathbb{R}^{6\times6}$ and $\times^*$ are the adjoint transformation matrix and the dual cross product operator for force vectors, respectively. In general, these equations appear much more complex than the ones obtained by using the proper body acceleration in [7]. They have, however, the convenient property of being agnostic to the linear velocity of each link. This property drastically simplifies the generalization to the floating-base case, in which the linear velocity of the floating base is, in general, not available.
As already described in [24], the estimation problem can be compactly arranged in a matrix form, as follows:
$$\begin{pmatrix} Y(s) \\ D(s) \end{pmatrix} \mathbf{d} + \begin{pmatrix} b_Y(s, \dot{s}) \\ b_D(s, \dot{s}) \end{pmatrix} = \begin{pmatrix} \mathbf{y} \\ 0_{12N_B\times1} \end{pmatrix}. \qquad (18)$$
In more detail:
  • The first set of equations $Y(s)\,\mathbf{d} + b_Y(s,\dot{s}) = \mathbf{y}$ accounts for the sensor measurements. The number of equations depends on how many sensors are conveyed into the vector $\mathbf{y}$ and it does not depend on the number of links in the model (more than one sensor could be associated with the same link, e.g., the combination of an IMU + an FT sensor). In general, the sensor matrices are not changed within the new floating-base formalism. The only difference is that the accelerometer has a different relation with the acceleration of the body. In particular, if the frame $L$ of a link and the frame associated with the IMU located on the same link are rigidly connected, then

    $$\mathbf{y}_{L,IMU} = \boldsymbol{\alpha}_{IMU}^g = {}^L X_\mathcal{B}\,\boldsymbol{\alpha}_\mathcal{B}^g + \begin{pmatrix} {}^L R_\mathcal{B}\left(\boldsymbol{\omega}_\mathcal{B} \times \left(\boldsymbol{\omega}_\mathcal{B} \times {}^\mathcal{B}\mathbf{o}_{IMU}\right)\right) \\ 0_{3\times1} \end{pmatrix}. \qquad (19)$$

    Similarly, for the FT sensor frames rigidly connected to the feet frames, the measurement equation is

    $$\mathbf{y}_{L,FT} = {}^L X^*_{FT}\,\mathbb{f}_{FT}. \qquad (20)$$
  • The second set of equations $D(s)\,\mathbf{d} + b_D(s,\dot{s}) = 0$ represents the compact matrix form of Equations (16) and (17) given the new serialization of $\mathbf{d}$ in Equation (14). The matrix $D \in \mathbb{R}^{12N_B \times d}$ has $12N_B$ rows and $d$ columns, i.e., the number of rows of $\mathbf{d}$ in Equation (14). The matrix blocks in $D$ for the acceleration propagation of Equation (16) are recursively the following:

    $$D_{\alpha_L} = 1_6, \qquad (21a)$$
    $$D_{\alpha_{\lambda(L)}} = \begin{cases} -{}^L X_{\lambda(L)} \in \mathbb{R}^{6\times6} & \text{if } L \neq \mathcal{B} \\ 0_6 & \text{otherwise} \end{cases} \qquad (21b)$$
    $$D_{\ddot{s}_J} = \begin{cases} -S_J \in \mathbb{R}^{6} & \text{if } L \neq \mathcal{B} \\ 0_{6\times1} & \text{otherwise} \end{cases} \qquad (21c)$$

    The blocks in $D$ for the Newton–Euler equations related to Equation (17) are instead:

    $$D_{\alpha_L} = M_L \in \mathbb{R}^{6\times6}, \qquad (22a)$$
    $$D_{f_L^x} = -1_6, \qquad (22b)$$
    $$D_{f_J} = \begin{cases} -1_6 & \text{if } L \neq \mathcal{B} \\ 0_6 & \text{otherwise} \end{cases} \qquad (22c)$$
    $$D_{f_{J_\mu}} = \begin{cases} {}^J X^*_{J_\mu} \in \mathbb{R}^{6\times6} & \text{if } \mu(L) \text{ exists} \\ 0_6 & \text{otherwise} \end{cases} \qquad (22d)$$

    All the other blocks in $D$ are equal to $0$. Unlike $D$, the term $b_D \in \mathbb{R}^{12N_B}$ is affected by the new representation of the acceleration w.r.t. the one in [7]. Each subterm $b_{D_L} \in \mathbb{R}^{12}$ is such that

    $$b_{D_L} = \begin{cases} \begin{pmatrix} -\bar{\boldsymbol{\alpha}}_\mathcal{B}^g \\[1ex] \begin{pmatrix} 0_{3\times1} \\ \boldsymbol{\omega}_\mathcal{B} \end{pmatrix} \times^* M_\mathcal{B} \begin{pmatrix} 0_{3\times1} \\ \boldsymbol{\omega}_\mathcal{B} \end{pmatrix} \end{pmatrix} & \text{if } L = \mathcal{B} \\[3ex] \begin{pmatrix} -\begin{pmatrix} 0_{3\times1} \\ \boldsymbol{\omega}_L \end{pmatrix} \times S_J\,\dot{s}_J - \left(\bar{\boldsymbol{\alpha}}_L^g - {}^L X_{\lambda(L)}\,\bar{\boldsymbol{\alpha}}_{\lambda(L)}^g\right) \\[1ex] \begin{pmatrix} 0_{3\times1} \\ \boldsymbol{\omega}_L \end{pmatrix} \times^* M_L \begin{pmatrix} 0_{3\times1} \\ \boldsymbol{\omega}_L \end{pmatrix} \end{pmatrix} & \text{otherwise} \end{cases} \qquad (23)$$
The solution of the system in Equation (18) is computed in a Gaussian domain via a MAP estimator. Within this framework, $\mathbf{d}$ and $\mathbf{y}$ are stochastic variables with Gaussian distributions, and the problem is solved by maximizing the conditional probability of $\mathbf{d}$ given the measurements $\mathbf{y}$, i.e.,

$$\mathbf{d}^{MAP} = \arg\max_{\mathbf{d}}\ p(\mathbf{d}\,|\,\mathbf{y}). \qquad (24)$$

Equation (24) corresponds to the mean of the conditional probability $\mu_{d|y}$, such that

$$\Sigma_{d|y} = \left(\bar{\Sigma}_D^{-1} + Y^\top \Sigma_y^{-1}\,Y\right)^{-1}, \qquad (25a)$$
$$\mu_{d|y} = \Sigma_{d|y}\left[Y^\top \Sigma_y^{-1}\left(\mathbf{y} - b_Y\right) + \bar{\Sigma}_D^{-1}\,\bar{\mu}_D\right]. \qquad (25b)$$
More details on the MAP solution are provided in Appendix A.
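Equations (25a) and (25b) are the standard information-form update of a linear-Gaussian model. The toy-sized Python sketch below (arbitrary dimensions and values, ours, not the paper's model) computes the posterior covariance and mean exactly as written:

```python
import numpy as np

rng = np.random.default_rng(2)
nd, ny = 4, 3                         # toy sizes for d and y

Y = rng.normal(size=(ny, nd))         # measurement model: y = Y d + b_Y
b_Y = rng.normal(size=ny)
y = rng.normal(size=ny)               # synthetic measurement vector

Sigma_y = 0.1 * np.eye(ny)            # measurement covariance
Sigma_D = np.eye(nd)                  # prior covariance (bar Sigma_D)
mu_D = np.zeros(nd)                   # prior mean (bar mu_D)

# Eq. (25a): posterior covariance; Eq. (25b): posterior mean, i.e., d_MAP.
Sigma_dy = np.linalg.inv(np.linalg.inv(Sigma_D)
                         + Y.T @ np.linalg.inv(Sigma_y) @ Y)
mu_dy = Sigma_dy @ (Y.T @ np.linalg.inv(Sigma_y) @ (y - b_Y)
                    + np.linalg.inv(Sigma_D) @ mu_D)
```

In the actual estimation problem, $Y$ would be the sensor matrix of Equation (18) and the prior would encode the dynamic model through $D$; the algebra, however, is identical.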

4. Experiments and Analysis

4.1. Experimental Setup

The objective of the experiment was to assess the accuracy of the estimation algorithm. An experimental session was carried out at Istituto Italiano di Tecnologia (IIT), Genoa, Italy, with a healthy male subject. The participant was equipped with an Xsens wearable motion tracking system with 17 IMUs to capture the whole-body kinematics. A pair of sensorized shoes developed at IIT was used to detect the ground reaction forces. Each shoe was equipped with two six-axis FT sensors able to measure 6D wrenches (3 forces and 3 moments), as shown in Figure 3. The subject was asked to perform a set of different tasks, as listed in Table 1.
Data were recorded at 50 Hz via a YARP-based [25] framework for wearable sensors that allows synchronously collecting data coming from multiple sources (see open-source code available on the Github repository https://github.com/robotology-playground/wearables). Data processing was performed in MathWorks MATLAB®. The MAP computation code (open-source at https://github.com/claudia-lat/MAPest) relies on the C++ based iDynTree multi-body dynamics library designed for free-floating robots [26]. iDynTree is released as open-source code available on Github: https://github.com/robotology/idyntree. Furthermore, it is worth remarking that an important modification of the IK computation was introduced here w.r.t. the one in [7]. We removed the OpenSim IK toolbox dependency and computed the whole-body joint angles with an Ipopt-based IK (see Section 3.2).
Data coming from the shoe FT sensors were analyzed to detect the feet contacts, as described in Algorithm 1 in Section 3.3. The overall estimation thus considered Equation (10b) for T1, Equation (8b) for Sequence 2 of T2, and Equation (9b) for Sequence 2 of T3. In task T4 (Figure 4), for instance, the algorithm detected the switching contact condition of the feet (Figure 5) and applied the proper base velocity computation. Figure 6 shows the feet contact detection for the tasks in Table 1.

4.2. Comparison between Measurement and Estimation

The primary objective of the floating-base MAP algorithm is to estimate simultaneously the kinematic and dynamic quantities related to the links and the joints of the human model. The vector $\mathbf{d}$ in Equation (14) contains variables measurable via sensors ($\boldsymbol{\alpha}_{lin}^g$, $\mathbf{f}^x$ and $\mathbf{m}^x$) and variables that cannot be measured in humans ($\mathbb{f}$ and $\tau$) but only estimated with the algorithm. The MAP algorithm represents the probabilistic way to estimate those quantities for which a quantitative measure does not exist. The objective here is to compare the same variables (measured and estimated) to assess the accuracy of the proposed algorithm. Figure 7 shows the comparison for the base linear proper sensor acceleration $\boldsymbol{\alpha}_{lin}^g$ [m/s²] between the measurement (mean and standard deviation, in red) and the estimation via the algorithm (mean, in blue), for tasks T1, T2, T3 and T4, respectively. The same comparison for the external force $\mathbf{f}^x$ [N] and external moment $\mathbf{m}^x$ [Nm] is shown in Figure 8a,b for the left foot and right foot, respectively. The validation was complemented with a Root Mean Square Error (RMSE) investigation for linear accelerations and external wrenches (Table 2). Error range values are shown in Table 3.
It is worth remarking on the importance of the choice of the covariance matrix associated with the sensors. It can be manually tuned as a parameter of the measurement trust. In this experimental analysis, covariances were chosen in a range from $10^{-6}$ to $10^{-4}$: the higher the level of sensor trust (i.e., the lower the covariance), the lower the RMSE associated with the sensor variable.
Figure 9 represents the norm of the overall error of the joint acceleration $\ddot{s}$ [rad/s²], for tasks T1, T2, T3 and T4. The error norm $|\varepsilon(\ddot{s})| = |\ddot{s}^{measured} - \ddot{s}^{estimated}|$ was computed by considering the entire set of joints of the model, such that $|\varepsilon_{T1}(\ddot{s})| = 0.007 \pm 4.9 \times 10^{-5}$ [rad/s²], $|\varepsilon_{T2}(\ddot{s})| = 0.008 \pm 0.001$ [rad/s²], $|\varepsilon_{T3}(\ddot{s})| = 0.007 \pm 3.5 \times 10^{-4}$ [rad/s²], and $|\varepsilon_{T4}(\ddot{s})| = 0.009 \pm 0.002$ [rad/s²].

4.3. Human Joint Torques Estimation during Gait

The floating-base MAP algorithm provides the whole-body joint torque estimation. The estimated torques have no measured counterpart to be compared against; we can trust the estimation only as a consequence of the validation analysis in Section 4.2. The algorithm becomes particularly useful when dealing with human gait analysis. Figure 10 shows the joint torque estimations along with the joint angles for (Figure 10a) the left leg and (Figure 10b) the right leg, respectively. We decided, here, to show only the most representative results for the walking task.

4.4. Comparison between Fixed-Base and Floating-Base Algorithms

We performed a comparison between the floating-base estimation and the fixed-base estimation of [24] by computing the norm of the error between the two formalisms, for tasks T1, T2 and T3. In general, the error norm was computed as the norm of the difference between each fixed-base and floating-base estimated variable, i.e., $|\varepsilon(variable)| = |variable_{estimated}^{Fixed} - variable_{estimated}^{Floating}|$. Figure 11 shows the norm of the error for the proper body linear acceleration $\varepsilon(\mathbf{a}_{lin}^g)$ [m/s²] and angular acceleration $\varepsilon(\mathbf{a}_{ang}^g)$ [rad/s²] of the base (i.e., the pelvis) between the estimations with the two formalisms. The proper body acceleration for the floating-base MAP estimation was obtained via Equation (13). The same comparison was performed for the overall set of external force $\varepsilon(\mathbf{f}^x)$ [N] and moment $\varepsilon(\mathbf{m}^x)$ [Nm] errors (Figure 12), and for the entire set of joint acceleration $\varepsilon(\ddot{s})$ [rad/s²] and torque $\varepsilon(\tau)$ [Nm] errors (Figure 13). Table 4 shows the mean and standard deviation of the error norms.
In addition to the analytical modifications for the floating-base formalism described in Section 3.4, there is another important advantage in using the floating-base MAP. Unlike the fixed-base estimation, where it is necessary to change the base among the tasks (left foot for tasks T1 and T3, and right foot for T2), this limitation does not exist for the floating-base algorithm: the pelvis remains the base for all the tasks. Furthermore, the floating-base formalism allows us to recover the external force estimation on the link appointed as the model base, which is missing in the fixed-base algorithm.

4.5. A Word of Caution on the Covariances Choice

The MAP algorithm estimation depends on the covariance values chosen by the end-user. Equations (25a) and (25b) show the role of: (i) the measurement covariance $\Sigma_y$; and (ii) a covariance $\bar{\Sigma}_D$ that, in turn, takes into account the model reliability (via covariance $\Sigma_D$) and the prior on the estimation (via covariance $\Sigma_d$) (see details in Appendix A, Equations (A5a) and (A5b)). In general, the procedural approach consists in assigning:
  • low values for the covariance Σ y if trusting in the sensor measurements;
  • low values for the model covariance Σ D for trusting the dynamic model; and
  • high values for the covariance Σ d , which means that the end-user does not know any a priori information on the estimation.
The combined contribution of this set of covariances affects the final estimation. The estimation vector $\mathbf{d}$ contains variables that are also measured (e.g., linear acceleration, external wrench, and joint acceleration), for which the measurement covariance $\Sigma_y$ plays a predominant role (a minor role is due to the covariance $\Sigma_d$ of the prior). A problem arises, however, when considering the angular acceleration $\boldsymbol{\alpha}_{ang}^g$. This variable is part of the vector $\mathbf{d}$ as an estimated quantity, but it is not measured. At the current stage, the only way to adjust the trust in the angular acceleration is, therefore, to tune $\Sigma_d$. A forthcoming investigation will deal with the possibility of integrating the angular acceleration into the measurement vector $\mathbf{y}$.
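The trade-off described above can be seen in a one-dimensional toy example of Gaussian fusion: the posterior mean is a precision-weighted average of prior and measurement, so lowering a covariance pulls the estimate toward the corresponding source. This scalar sketch is ours, for illustration only:

```python
def gaussian_fuse(mu_prior, var_prior, meas, var_meas):
    """Scalar analogue of the MAP update: the posterior mean is a
    precision-weighted average of the prior and the measurement."""
    w = (1 / var_meas) / (1 / var_meas + 1 / var_prior)
    return w * meas + (1 - w) * mu_prior

meas, mu_prior = 2.0, 0.0
# High sensor trust (low measurement covariance): the estimate follows the sensor.
trusted = gaussian_fuse(mu_prior, var_prior=1e4, meas=meas, var_meas=1e-6)
# Low sensor trust (high measurement covariance): the estimate stays near the prior.
distrusted = gaussian_fuse(mu_prior, var_prior=1e-6, meas=meas, var_meas=1e4)
print(trusted, distrusted)   # ≈ 2.0, ≈ 0.0
```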

5. Conclusions

In this paper, we presented a stochastic methodology for the simultaneous floating-base estimation of the whole-body human kinematics and dynamics (joint torques, internal forces and external forces). The novelty consists in the possibility of performing the estimation in a floating-base framework. The floating base can be arbitrarily chosen among the model links, and the algorithm requires estimating the pose and 6D velocity of the base w.r.t. the inertial frame.
The algorithm was validated by carrying out a four-task experimental session with a healthy subject equipped with a wearable motion tracking system, to capture the whole-body kinematics, and a pair of force/torque sensorized shoes. We performed the tasks in Table 1 by considering the human pelvis as the system (floating) base. This choice proved particularly convenient for the walking task on the treadmill. In general, the algorithm allows analyzing all those tasks for which a switching contact condition is implicitly required (e.g., the human gait) without manually changing the model base inside the algorithm.
Current limitations of the methodology concern: (i) the generation of the human URDF model and the estimation of the sensor positions w.r.t. the attached links (see Section 3.1); and (ii) the feet contact classification (see Section 3.3), since both are currently carried out in an offline post-processing step. The impending objective is to develop an online procedure that automates the human model generation from real-time acquisitions, together with a real-time algorithm for classifying the feet contacts. These two features will substantially improve the existing tool for the online estimation of human joint torques (open-source code available in the GitHub repository https://github.com/robotology/human-dynamics-estimation).

Author Contributions

Methodology, C.L., S.T., F.N., and D.P.; development, C.L. and S.T.; software, S.T., D.F., Y.T., and L.R.; validation, C.L. and F.J.A.C.; data curation, C.L., Y.T., and L.R.; writing—original draft preparation, C.L.; and supervision, D.P.

Funding

This paper was supported by the EU An.Dy Project, which has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 731540.

Conflicts of Interest

The content of this publication is the sole responsibility of the authors. The European Commission or its services cannot be held responsible for any use that may be made of the information it contains.

Appendix A

Appendix A provides the reader with a description of the Maximum-A-Posteriori (MAP) estimator as a tool for the whole-body estimation of human kinematics and dynamics. More details can be found in [24].
The problem in Equation (18) is here treated in a Gaussian domain. Let $d$ and $y$ be stochastic variables with Gaussian distributions. The conditional probability of $d$ given $y$ is

$$p(d \mid y) = \frac{p(d, y)}{p(y)} = \frac{p(d)\, p(y \mid d)}{p(y)} \propto p(d)\, p(y \mid d), \tag{A1}$$

since the term $p(y)$ does not depend on $d$. The MAP estimator yields

$$d_{\mathrm{MAP}} = \arg\max_{d}\, p(d \mid y) \equiv \arg\max_{d}\, p(d, y). \tag{A2}$$

The solution requires computing $p(y \mid d)$, $p(d)$, and finally $p(d \mid y)$.
  • The conditional probability $p(y \mid d)$ is
    $$p(y \mid d) \propto \exp\left\{-\frac{1}{2}\left(y - \mu_{y|d}\right)^{\top} \Sigma_{y|d}^{-1} \left(y - \mu_{y|d}\right)\right\} = \exp\left\{-\frac{1}{2}\left[y - (Y d + b_Y)\right]^{\top} \Sigma_{y|d}^{-1} \left[y - (Y d + b_Y)\right]\right\}, \tag{A3}$$
    which implicitly assumes that the set of measurement equations $Y(s)\, d + b_Y(s, \dot{s}) = y$ is affected by a Gaussian noise with zero mean and covariance $\Sigma_{y|d}$.
  • Let $d \sim \mathcal{N}(\bar{\mu}_D, \bar{\Sigma}_D)$ be the normal distribution of $d$. The probability density $p(d)$ is
    $$p(d) \propto \exp\left\{-\frac{1}{2}\, (d - \bar{\mu}_D)^{\top}\, \bar{\Sigma}_D^{-1}\, (d - \bar{\mu}_D)\right\}, \tag{A4}$$
    where the covariance and the mean are, respectively,
    $$\bar{\Sigma}_D = \left(D^{\top} \Sigma_D^{-1} D + \Sigma_d^{-1}\right)^{-1}, \tag{A5a}$$
    $$\bar{\mu}_D = \bar{\Sigma}_D \left(\Sigma_d^{-1} \mu_D - D^{\top} \Sigma_D^{-1} b_D\right). \tag{A5b}$$
    In particular, the covariances $\Sigma_D$ and $\Sigma_d$ account for the reliability of the model constraints $D(s)\, d + b_D(s, \dot{s}) = 0$ and of the estimation prior, respectively.
  • To compute $p(d \mid y)$, it suffices to combine Equations (A3) and (A4):
    $$p(d \mid y) \propto \exp\left\{-\frac{1}{2}\left[(d - \bar{\mu}_D)^{\top}\, \bar{\Sigma}_D^{-1}\, (d - \bar{\mu}_D) + \left[y - (Y d + b_Y)\right]^{\top} \Sigma_{y|d}^{-1} \left[y - (Y d + b_Y)\right]\right]\right\}, \tag{A6}$$
    with covariance matrix and mean as follows:
    $$\Sigma_{d|y} = \left(\bar{\Sigma}_D^{-1} + Y^{\top} \Sigma_{y|d}^{-1} Y\right)^{-1}, \tag{A7a}$$
    $$\mu_{d|y} = \Sigma_{d|y} \left[Y^{\top} \Sigma_{y|d}^{-1} (y - b_Y) + \bar{\Sigma}_D^{-1} \bar{\mu}_D\right]. \tag{A7b}$$
    In the Gaussian domain, the MAP solution coincides with the mean in Equation (A7b):
    $$d_{\mathrm{MAP}} = \mu_{d|y}. \tag{A8}$$
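The appendix computation can be sketched numerically as follows. This is a minimal numpy sketch, not the authors' implementation: it assumes the matrices $Y$, $D$ and biases $b_Y$, $b_D$ have already been evaluated at the current joint state $(s, \dot{s})$, and the function name `map_estimate` is illustrative.

```python
import numpy as np

def map_estimate(Y, b_Y, Sigma_y, D, b_D, Sigma_D, mu_d, Sigma_d, y):
    """Gaussian MAP estimate of d given the measurements y.

    Prior:        d ~ N(mu_d, Sigma_d)
    Model:        D d + b_D = 0, with constraint covariance Sigma_D
    Measurements: Y d + b_Y = y, with measurement covariance Sigma_y
    Returns the posterior mean mu_{d|y} (the MAP solution) and
    covariance Sigma_{d|y}.
    """
    # Prior combined with the dynamic-model constraints
    Sigma_D_bar = np.linalg.inv(D.T @ np.linalg.inv(Sigma_D) @ D
                                + np.linalg.inv(Sigma_d))
    mu_D_bar = Sigma_D_bar @ (np.linalg.inv(Sigma_d) @ mu_d
                              - D.T @ np.linalg.inv(Sigma_D) @ b_D)
    # Posterior covariance and mean after fusing the measurements
    Sigma_d_y = np.linalg.inv(np.linalg.inv(Sigma_D_bar)
                              + Y.T @ np.linalg.inv(Sigma_y) @ Y)
    mu_d_y = Sigma_d_y @ (Y.T @ np.linalg.inv(Sigma_y) @ (y - b_Y)
                          + np.linalg.inv(Sigma_D_bar) @ mu_D_bar)
    return mu_d_y, Sigma_d_y
```

For the full-size human model, the explicit inverses would typically be replaced by Cholesky factorizations for speed and numerical robustness; they are kept here to preserve a term-by-term correspondence with the equations above.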

References

  1. Tirupachuri, Y.; Nava, G.; Latella, C.; Ferigo, D.; Rapetti, L.; Tagliapietra, L.; Nori, F.; Pucci, D. Towards Partner-Aware Humanoid Robot Control Under Physical Interactions. arXiv 2019, arXiv:1809.06165. [Google Scholar]
  2. Flash, T.; Hogan, N. The coordination of arm movements: An experimentally confirmed mathematical model. J. Neurosci. 1985, 5, 1688–1703. [Google Scholar] [CrossRef] [PubMed]
  3. Maeda, Y.; Hara, T.; Arai, T. Human-robot cooperative manipulation with motion estimation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems. Expanding the Societal Role of Robotics in the Next Millennium (Cat. No.01CH37180), Maui, HI, USA, 29 October–3 November 2001; Volume 4, pp. 2240–2245. [Google Scholar] [CrossRef]
  4. Schaal, S.; Ijspeert, A.; Billard, A. Computational approaches to motor learning by imitation. Philosoph. Trans. R. Soc. Lond. Ser. B Biol. Sci. 2003, 358, 537–547. [Google Scholar] [CrossRef] [PubMed]
  5. Amor, H.B.; Neumann, G.; Kamthe, S.; Kroemer, O.; Peters, J. Interaction primitives for human-robot cooperation tasks. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 2831–2837. [Google Scholar] [CrossRef]
  6. Penco, L.; Brice, C.; Modugno, V.; Mingo Hoffmann, E.; Nava, G.; Pucci, D.; Tsagarakis, N.; Mouret, J.B.; Ivaldi, S. Robust Real-time Whole-Body Motion Retargeting from Human to Humanoid. In Proceedings of the IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids), Beijing, China, 6–9 November 2018. [Google Scholar]
  7. Latella, C.; Lorenzini, M.; Lazzaroni, M.; Romano, F.; Traversaro, S.; Akhras, M.A.; Pucci, D.; Nori, F. Towards real-time whole-body human dynamics estimation through probabilistic sensor fusion algorithms. Auton. Robots 2018. [Google Scholar] [CrossRef]
  8. Mistry, M.; Buchli, J.; Schaal, S. Inverse dynamics control of floating base systems using orthogonal decomposition. In Proceedings of the IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 3406–3412. [Google Scholar] [CrossRef]
  9. Nava, G.; Pucci, D.; Guedelha, N.; Traversaro, S.; Romano, F.; Dafarra, S.; Nori, F. Modeling and Control of Humanoid Robots in Dynamic Environments: iCub Balancing on a Seesaw. In Proceedings of the IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), Birmingham, UK, 15–17 November 2017. [Google Scholar]
  10. Ayusawa, K.; Venture, G.; Nakamura, Y. Identification of humanoid robots dynamics using floating-base motion dynamics. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 2854–2859. [Google Scholar] [CrossRef]
  11. Mistry, M.; Schaal, S.; Yamane, K. Inertial parameter estimation of floating base humanoid systems using partial force sensing. In Proceedings of the 2009 9th IEEE-RAS International Conference on Humanoid Robots, Paris, France, 7–10 December 2009; pp. 492–497. [Google Scholar] [CrossRef]
  12. Dasgupta, A.; Nakamura, Y. Making feasible walking motion of humanoid robots from human motion capture data. In Proceedings of the 1999 IEEE International Conference on Robotics and Automation (Cat. No.99CH36288C), Detroit, MI, USA, 10–15 May 1999; Volume 2, pp. 1044–1049. [Google Scholar] [CrossRef]
  13. Zheng, Y.; Yamane, K. Human motion tracking control with strict contact force constraints for floating-base humanoid robots. In Proceedings of the 2013 13th IEEE-RAS International Conference on Humanoid Robots (Humanoids), Atlanta, GA, USA, 15–17 October 2013; pp. 34–41. [Google Scholar] [CrossRef]
  14. Li, M.; Deng, J.; Zha, F.; Qiu, S.; Wang, X.; Chen, F. Towards Online Estimation of Human Joint Muscular Torque with a Lower Limb Exoskeleton Robot. Appl. Sci. 2018, 8, 1610. [Google Scholar] [CrossRef]
  15. Marsden, J.E.; Ratiu, T. Introduction to Mechanics and Symmetry: A Basic Exposition of Classical Mechanical Systems, 2nd ed.; Texts in Applied Mathematics; Springer-Verlag: New York, NY, USA, 1999. [Google Scholar] [CrossRef]
  16. Winter, D. Biomechanics and Motor Control of Human Movement, 4th ed.; Wiley: Hoboken, NJ, USA, 1990. [Google Scholar]
  17. Herman, I. Physics of the Human Body; Springer: Basel, Switzerland, 2016. [Google Scholar] [CrossRef]
  18. Hanavan, E.P. A Mathematical Model of the Human Body. Available online: https://apps.dtic.mil/docs/citations/AD0608463 (accessed on 19 June 2019).
  19. Yeadon, M.R. The simulation of aerial movement—II. A mathematical inertia model of the human body. J. Biomech. 1990, 23, 67–74. [Google Scholar] [CrossRef]
  20. Rotella, N.; Mason, S.; Schaal, S.; Righetti, L. Inertial sensor-based humanoid joint state estimation. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 1825–1831. [Google Scholar] [CrossRef]
  21. Wächter, A.; Biegler, L.T. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program. 2006, 106, 25–57. [Google Scholar] [CrossRef]
  22. Savitzky, A.; Golay, M. Smoothing and Differentiation of Data by Simplified Least Squares Procedures. Anal. Chem. 1964, 36, 1627–1639. [Google Scholar] [CrossRef]
  23. Featherstone, R. Rigid Body Dynamics Algorithms; Springer US: New York, NY, USA, 2008. [Google Scholar] [CrossRef]
  24. Latella, C.; Kuppuswamy, N.; Romano, F.; Traversaro, S.; Nori, F. Whole-Body Human Inverse Dynamics with Distributed Micro-Accelerometers, Gyros and Force Sensing. Sensors 2016, 16, 727. [Google Scholar] [CrossRef]
  25. Metta, G.; Fitzpatrick, P.; Natale, L. YARP: Yet Another Robot Platform. Int. J. Adv. Robot. Syst. 2006, 3, 8. [Google Scholar] [CrossRef]
  26. Nori, F.; Traversaro, S.; Eljaik, J.; Romano, F.; Del Prete, A.; Pucci, D. iCub Whole-Body Control through Force Regulation on Rigid Non-Coplanar Contacts. Front. Robot. AI 2015, 2. [Google Scholar] [CrossRef]
Figure 1. Graphical representation of the system topological order for links and joints.
Figure 2. Human model with distributed inertial measurement units. Joint reference frames are shown by using RGB (Red–Green–Blue) convention for xyz-axes. (left) Detail of the sensor position on the link.
Figure 3. Subject equipped with the Xsens wearable motion tracking system and six-axis force/torque shoes.
Figure 4. Task T4, Sequence 2: walking on a treadmill.
Figure 5. Classification of feet contacts: (left) single support on the right foot; (middle) double support; and (right) single support on the left foot.
Figure 6. Tasks representation from initial time t_i to final time t_f (left); and feet contact pattern classification obtained via Algorithm 1 (right).
Figure 7. The base linear proper sensor acceleration α_lin^g [m/s²]: comparison between measurement (mean and standard deviation, in red) and floating-base MAP estimation (mean, in blue), for tasks T1, T2, T3 and T4.
Figure 8. The external force f^x [N] and moment m^x [Nm]: comparison between measurement (mean and standard deviation, in red) and floating-base MAP estimation (mean, in blue) for (a) the left foot and (b) the right foot, respectively, for tasks T1, T2, T3 and T4.
Figure 9. Norm of the overall error ε(s̈) [rad/s²] of the entire set of joint accelerations between measurement and floating-base MAP estimation, for tasks T1, T2, T3 and T4.
Figure 10. Floating-base MAP estimation in task T4 of the joint torques τ [Nm] (in blue) for the (b) left leg and (a) right leg, respectively, along with the related joint angles s [deg] (in black). Gait estimations were performed by following the feet contact classification procedure in Algorithm 1.
Figure 11. Norm of the error of the base proper body linear acceleration ε(a_lin^g) [m/s²] and angular acceleration ε(a_ang^g) [rad/s²] between the fixed-base and floating-base MAP estimations, for tasks T1, T2 and T3. The proper body acceleration for the floating-base estimation is obtained via Equation (13).
Figure 12. Norm of the error over the overall set of external forces ε(f^x) [N] and moments ε(m^x) [Nm] between the fixed-base and floating-base MAP estimations, for tasks T1, T2 and T3.
Figure 13. Norm of the error over the overall set of joint accelerations ε(s̈) [rad/s²] and torques ε(τ) [Nm] between the fixed-base and floating-base MAP estimations, for tasks T1, T2 and T3.
Table 1. Tasks performed for the estimation analysis.

| Task | Type | Description |
|------|------|-------------|
| T1 | Static double support | Neutral pose, standing still |
| T2 | Static right single support | Sequence 1: static double support. Sequence 2: weight balancing on the right foot |
| T3 | Static left single support | Sequence 1: static double support. Sequence 2: weight balancing on the left foot |
| T4 | Static–walking–static | Sequence 1: static double support. Sequence 2: walking on a treadmill (Figure 4). Sequence 3: static double support |
Table 2. RMSE analysis of the base linear proper sensor acceleration α_lin^g [m/s²], the external force f^x [N] and moment m^x [Nm] estimated by the floating-base algorithm w.r.t. the measurements, for tasks T1, T2, T3 and T4.

| Task | Link | α_lin,x^g | α_lin,y^g | α_lin,z^g | f_x^x | f_y^x | f_z^x | m_x^x | m_y^x | m_z^x |
|------|------|-----------|-----------|-----------|-------|-------|-------|-------|-------|-------|
| T1 | Base (Pelvis) | 0.008 | 0.014 | 0.002 | – | – | – | – | – | – |
|    | Left foot | – | – | – | 0.050 | 0.030 | 2.5 × 10⁻⁴ | 8.3 × 10⁻⁴ | 0.002 | 2.1 × 10⁻⁵ |
|    | Right foot | – | – | – | 0.031 | 0.048 | 0.004 | 0.0015 | 0.001 | 1.0 × 10⁻⁵ |
| T2 | Base (Pelvis) | 0.003 | 0.027 | 0.018 | – | – | – | – | – | – |
|    | Left foot | – | – | – | 0.153 | 0.071 | 0.009 | 0.002 | 4.7 × 10⁻⁴ | 1.7 × 10⁻⁴ |
|    | Right foot | – | – | – | 0.013 | 0.074 | 0.005 | 0.002 | 4.2 × 10⁻⁴ | 4.3 × 10⁻⁵ |
| T3 | Base (Pelvis) | 0.012 | 0.007 | 0.007 | – | – | – | – | – | – |
|    | Left foot | – | – | – | 0.075 | 0.019 | 0.002 | 6.0 × 10⁻⁴ | 0.002 | 5.2 × 10⁻⁵ |
|    | Right foot | – | – | – | 0.065 | 0.018 | 0.003 | 6.1 × 10⁻⁴ | 0.002 | 1.2 × 10⁻⁴ |
| T4 | Base (Pelvis) | 0.011 | 0.018 | 0.033 | – | – | – | – | – | – |
|    | Left foot | – | – | – | 0.089 | 0.056 | 0.012 | 0.002 | 0.003 | 1.3 × 10⁻⁴ |
|    | Right foot | – | – | – | 0.084 | 0.056 | 0.019 | 0.002 | 0.003 | 9.7 × 10⁻⁴ |
Table 3. Max and min values for the RMSE analysis in Table 2, for tasks T1, T2, T3 and T4 (LF: left foot; RF: right foot).

| Variable | T1 min | T1 max | T2 min | T2 max | T3 min | T3 max | T4 min | T4 max |
|----------|--------|--------|--------|--------|--------|--------|--------|--------|
| α_lin,x^g | 0.006 | 0.0086 | 2.0 × 10⁻⁵ | 0.0087 | 0.0029 | 0.0187 | 1.6 × 10⁻⁷ | 0.035 |
| α_lin,y^g | 0.012 | 0.015 | 0.008 | 0.053 | 1.3 × 10⁻⁵ | 0.0214 | 1.3 × 10⁻⁴ | 0.055 |
| α_lin,z^g | 1.9 × 10⁻⁴ | 0.003 | 0.0062 | 0.026 | 0.0015 | 0.0165 | 1.3 × 10⁻⁵ | 0.147 |
| f_LF,x^x | 0.045 | 0.054 | 0.0016 | 0.030 | 0.0071 | 0.0893 | 0.0016 | 0.318 |
| f_LF,y^x | 0.026 | 0.0329 | 0.0317 | 0.1258 | 6.2 × 10⁻⁴ | 0.042 | 0.0012 | 0.144 |
| f_LF,z^x | 6.7 × 10⁻⁶ | 4.8 × 10⁻⁴ | 2.1 × 10⁻⁴ | 0.021 | 5.4 × 10⁻⁵ | 0.0034 | 1.0 × 10⁻⁵ | 0.094 |
| m_LF,x^x | 7.3 × 10⁻⁴ | 9.5 × 10⁻⁴ | 9.1 × 10⁻⁴ | 0.004 | 2.3 × 10⁻⁸ | 0.0014 | 1.4 × 10⁻⁵ | 0.0047 |
| m_LF,y^x | 0.0015 | 0.0013 | 2.1 × 10⁻⁵ | 9.4 × 10⁻⁴ | 3.0 × 10⁻⁴ | 0.0029 | 1.6 × 10⁻⁵ | 0.010 |
| m_LF,z^x | 1.7 × 10⁻⁵ | 2.5 × 10⁻⁵ | 4.6 × 10⁻⁵ | 3.0 × 10⁻⁴ | 3.3 × 10⁻⁵ | 7.9 × 10⁻⁵ | 1.5 × 10⁻⁷ | 7.8 × 10⁻⁴ |
| f_RF,x^x | 0.026 | 0.0362 | 3.2 × 10⁻⁴ | 0.038 | 0.0066 | 0.077 | 1.4 × 10⁻⁵ | 0.296 |
| f_RF,y^x | 0.045 | 0.051 | 0.033 | 0.1261 | 1.5 × 10⁻⁴ | 0.040 | 4.3 × 10⁻⁶ | 0.141 |
| f_RF,z^x | 0.0036 | 0.0045 | 0.0015 | 0.014 | 7.3 × 10⁻⁵ | 0.0061 | 5.5 × 10⁻⁵ | 0.113 |
| m_RF,x^x | 0.0014 | 0.0016 | 9.8 × 10⁻⁴ | 0.004 | 1.5 × 10⁻⁵ | 0.0014 | 4.6 × 10⁻⁶ | 0.005 |
| m_RF,y^x | 9.5 × 10⁻⁴ | 0.0013 | 2.6 × 10⁻⁶ | 0.0012 | 3.3 × 10⁻⁴ | 0.0029 | 2.6 × 10⁻⁵ | 0.0097 |
| m_RF,z^x | 6.2 × 10⁻⁶ | 1.4 × 10⁻⁴ | 4.0 × 10⁻⁶ | 6.7 × 10⁻⁵ | 7.0 × 10⁻⁶ | 1.7 × 10⁻⁴ | 5.9 × 10⁻⁵ | 3.0 × 10⁻⁴ |
Table 4. Mean and standard deviation of the error norm for: (i) the base proper body linear acceleration related to Figure 11; (ii) the external wrench related to Figure 12; and (iii) the joint acceleration and torque related to Figure 13, for tasks T1, T2 and T3.

| Error norm | T1 | T2 | T3 |
|------------|----|----|----|
| ‖ε(a_lin^g)‖ [m/s²] | 0.008 ± 1.6 × 10⁻⁴ | 0.0170 ± 0.0176 | 0.014 ± 0.010 |
| ‖ε(a_ang^g)‖ [rad/s²] | 0.0357 ± 5.4 × 10⁻⁴ | 0.024 ± 0.012 | 0.036 ± 0.005 |
| ‖ε(f^x)‖ [N] | 0.026 ± 5.3 × 10⁻⁴ | 0.033 ± 0.010 | 0.031 ± 0.006 |
| ‖ε(m^x)‖ [Nm] | 9.33 × 10⁻⁴ ± 1.9 × 10⁻⁵ | 0.001 ± 3.8 × 10⁻⁴ | 0.001 ± 3.0 × 10⁻⁴ |
| ‖ε(s̈)‖ [rad/s²] | 0.003 ± 3.7 × 10⁻⁵ | 0.003 ± 7.4 × 10⁻⁴ | 0.003 ± 3.0 × 10⁻⁴ |
| ‖ε(τ)‖ [Nm] | 7.199 ± 0.076 | 3.756 ± 0.845 | 5.988 ± 0.943 |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).