Drift-Free Joint Angle Calculation Using Inertial Measurement Units without Magnetometers: An Exploration of Sensor Fusion Methods for the Elbow and Wrist

Joint angles of the lower extremities have been calculated using gyroscope and accelerometer measurements from inertial measurement units (IMUs) without sensor drift by leveraging kinematic constraints. However, it is unknown whether these methods are generalizable to the upper extremity due to differences in motion dynamics. Furthermore, the extent that post-processed sensor fusion algorithms can improve measurement accuracy relative to more commonly used Kalman filter-based methods remains unknown. This study calculated the elbow and wrist joint angles of 13 participants performing a simple ≥30 min material transfer task at three rates (slow, medium, fast) using IMUs and kinematic constraints. The best-performing sensor fusion algorithm produced total root mean square errors (i.e., encompassing all three motion planes) of 6.6°, 3.6°, and 2.0° for the slow, medium, and fast transfer rates for the elbow and 2.2°, 1.7°, and 1.5° for the wrist, respectively.


Introduction
Inertial motion capture systems (IMCs) are touted for their biomechanical motion analysis capabilities without the capture volume constraints imposed by traditional optical motion capture systems (OMCs). Various IMCs contain the battery and memory capacity for biomechanical motion analysis over several hours, which allows the quantification of exposures to non-neutral postures for assessing human performance in occupational ergonomics and rehabilitation applications [1][2][3]. Although commercially available IMCs are increasingly used in human motion studies [4,5], opportunities exist to (i) improve their accuracy and (ii) characterize their error characteristics. Understanding an IMC's error characteristics and failure modes may be more advantageous to a user than potential increases in accuracy that may come from a proprietary solution [6].
IMCs rely on magnetometers to provide a directional reference around the gravitational vector, assuming that the magnetometer accurately measures the strength and direction of Earth's local magnetic field. A major limitation of IMCs is the reduction in accuracy associated with magnetic disturbance [4,7-9], which occurs when the assumption of a homogeneous local magnetic field is violated (e.g., in indoor environments). Orientation deviations of up to 180° have been reported [8]. Without magnetometers, IMCs are limited to short-duration (i.e., minutes) full-body motion capture to avoid errors associated with gyroscopic drift [10,11], or to making measurements relative to gravity (i.e., as inclinometers) [3,12-15]; both approaches limit their functionality. Various sensor fusion algorithms account for gyroscopic drift by leveraging kinematic constraints, but these have yet to be assessed within the context of upper extremity kinematics, where sensor excitation can potentially be more limited.
In summary, the most promising studies that leveraged motion constraints to obtain joint angles from IMUs without magnetometers primarily focused on the lower extremity. Studies involving the upper extremity have contained fewer participants and shorter measurement durations. The overall objectives of this study are to characterize (i) the capability and limitations of calculating joint angles of the elbow and wrist without magnetometers at various motion speeds over >30 min among a sample of 13 participants using motion constraints and (ii) the capability of post-processed sensor fusion algorithms (rts smoother and map) to further increase measurement accuracy. In this paper, we (i) describe an rts smoother formulation of the filter from [21]. We then (ii) incorporate a rotational constraint for the elbow that limits abduction/adduction motion and a rotational constraint for the wrist that limits internal/external rotation motion, analogous to [28]. Finally, we (iii) evaluate the effects of motion speed on the accuracy of elbow and wrist kinematics using three different sensor fusion methods (mekf, rts, map) and two different models (linear motion constraint with a degree of freedom rotational motion constraint (dof) and without a degree of freedom motion constraint (acc)), totaling six different sensor fusion algorithms (mekf-acc, mekf-dof, rts-acc, rts-dof, map-acc, map-dof). We hypothesize that the measurement accuracy for a given sensor fusion algorithm will increase with (i) increased motion speed, (ii) the use of smoothers for sensor fusion, and (iii) the addition of the rotational constraint. We further hypothesize that the map sensor fusion method will provide higher measurement accuracy compared to the rts sensor fusion method and that the wrist joint will produce lower error magnitudes compared to the elbow joint.

Quaternion Definition
We define the quaternion vector q as a Hamiltonian quaternion: a 4 × 1 column vector consisting of the scalar term q_0 as the first element and the vector term q_v as the subsequent three elements (1):

q = [q_0, q_1, q_2, q_3]^T = [q_0, q_v^T]^T (1)

Given quaternion vectors q_a and q_b, the quaternion product ⊗ can be calculated as follows:

q_a ⊗ q_b = [ q_{a,0} q_{b,0} − q_{a,v}^T q_{b,v} ;  q_{a,0} q_{b,v} + q_{b,0} q_{a,v} + [q_{a,v}]_× q_{b,v} ]

where [·]_× is the skew-symmetric operator and I_n is an n × n identity matrix. The skew-symmetric operator for vector u = [u_1, u_2, u_3]^T is defined as:

[u]_× = [ 0, −u_3, u_2 ; u_3, 0, −u_1 ; −u_2, u_1, 0 ]

A unit quaternion q^{GL} provides a rotation from local coordinate Frame L to global coordinate Frame G:

v^G = q^{GL} ⊗ [0, (v^L)^T]^T ⊗ (q^{GL})*

where (q^{GL})* is the quaternion conjugate of q^{GL}, v^L is a vector in Frame L, and v^G is its corresponding vector in Frame G. q^{GL} can be converted into its equivalent rotation matrix R^{GL} as follows:

R^{GL} = (q_0^2 − q_v^T q_v) I_3 + 2 q_v q_v^T + 2 q_0 [q_v]_×

which can subsequently be used to obtain v^G from v^L.
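As a minimal sketch (not the paper's MATLAB/C++ code) of the Hamilton product and frame rotation defined above:

```python
import math

def quat_mul(a, b):
    """Hamilton product a ⊗ b; quaternions are [q0, q1, q2, q3] (scalar first)."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return [a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0]

def quat_conj(q):
    """Quaternion conjugate q* = [q0, -q_v]."""
    return [q[0], -q[1], -q[2], -q[3]]

def rotate(q, v):
    """v_G = q ⊗ [0, v_L] ⊗ q*  (rotation from local Frame L to global Frame G)."""
    p = quat_mul(quat_mul(q, [0.0] + list(v)), quat_conj(q))
    return p[1:]

# a 90° rotation about z maps the x-axis onto the y-axis
q = [math.cos(math.pi/4), 0.0, 0.0, math.sin(math.pi/4)]
vG = rotate(q, [1.0, 0.0, 0.0])
```

The scalar-first (Hamilton) convention here matches the paper's definition; libraries that store the scalar last would need the elements reordered.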

Coordinate Frame Notations and Definitions
We define G as the global frame, I as the IMU local frame, and S as the local frame of a given body segment. We define → r i as the distance from the joint center to the position of the IMU in Frame I. R GI provides the orientation of IMU relative to Frame G, R GS provides the orientation of its underlying body segment relative to Frame G, and R IS is the offset between the IMU local frame and the local frame of its underlying body segment ( Figure 1).

Joint Angle Calculation Process
The process of calculating joint angles from IMU measurements is shown in Figure 2.

Figure 1. Coordinate frame definitions used within this study. G denotes the global frame. I denotes the IMU local frame. S denotes the segment local frame. R^{GI} provides the orientation of the IMU relative to Frame G, R^{GS} provides the orientation of its underlying body segment relative to Frame G, and R^{IS} is the offset between the IMU local frame and the local frame of its underlying body segment. r is the distance from the joint center to the position of the IMU in Frame I. Note that the segment coordinate frame is shown shifted in relation to the IMU local frame in the figure for legibility.

Given two IMUs attached to adjacent body segments, i = 1 denotes the proximal segment and i = 2 denotes the distal segment. We determine the joint angles from angular velocity ω_{t,i} and linear acceleration a_{t,i} measurements from the gyroscope and accelerometer contained within each IMU by first calculating r_i using the translational alignment process. Specifically, r_i is solved through optimization by minimizing the difference in linear acceleration vector magnitude of the proximal and distal segments at the joint center (Section 2.3.1). The orientation at time t of the IMU attached to the proximal body segment (q^{GI}_{t,1}) and the orientation of the IMU attached to the distal body segment (q^{GI}_{t,2}) are subsequently calculated from r_i, a_{t,i}, and ω_{t,i} using either mekf-acc, rts-acc, or map-acc (Section 2.2).
Rotational alignment: R^{IS}_1 and R^{IS}_2 are subsequently calculated using q^{GI}_{t,1}, q^{GI}_{t,2} and ω_1, ω_2 by either minimizing abduction/adduction (ab/ad) for the elbow joint or internal/external rotation (in/ex) for the wrist joint (Section 2.3.2). q^{GS}_{t,1} and q^{GS}_{t,2} can subsequently be calculated using any of the six sensor fusion algorithms (Section 2.2) given a_1, ω_1, r_1, R^{IS}_1 and a_2, ω_2, r_2, R^{IS}_2. Finally, the joint angle can be calculated from q^{GS}_{t,1} and q^{GS}_{t,2}. Note that an IMU must be attached to the upper arm and forearm to obtain elbow joint angles and to the hand and forearm to obtain wrist joint angles. Depending on the joint, the rotational constraint limits either elbow ab/ad or wrist in/ex. Our source code is provided as Supplemental Material.

Figure 2. Joint angle calculation process. The translational alignment process is used to calculate the distance from the joint center to the IMU attached to the proximal and distal body segments (r_i). This information, combined with IMU linear acceleration a_i and angular velocity ω_i measurements, is used to calculate the orientation of the IMU relative to global Frame G (q^{GI}_{t,i}), where i = 1 denotes the proximal body segment and i = 2 the distal body segment. The rotational offset between IMU Frame I and body segment Frame S (R^{IS}_i) is determined from q^{GI}_{t,1}, q^{GI}_{t,2} and ω_1, ω_2. The joint angle is subsequently calculated from a_1, ω_1, r_1, R^{IS}_1 and a_2, ω_2, r_2, R^{IS}_2.

Sensor Fusion
The combination of the sensor fusion method (e.g., mekf, rts, map) and kinematic constraint (linear acceleration constraint; linear acceleration and degree of freedom-based rotational constraint) provided six different sensor fusion algorithms: mekf-acc, rts-acc, map-acc, mekf-dof, rts-dof, map-dof. All sensor fusion algorithms presented consider the following states (an 8 × 1 vector consisting of the two segment orientations), where q̃^{GI}_{t,i} is the segment orientation estimated by a given sensor fusion algorithm. Since the quaternion orientation vector is overconstrained, instead of estimating q̃^{GI}_{t,i} directly, the sensor fusion algorithms presented will instead estimate orientation deviations η̂_t, a 6 × 1 vector containing the orientation deviation for the proximal segment (η̂_{t,1}) and distal segment (η̂_{t,2}).
The orientation deviation is subsequently used to correct the orientation obtained by integrating the gyroscope; η̂_t is then reset to zero. The details of their implementation are shown in subsequent sections.

Multiplicative Extended Kalman Filter
The mekf was obtained from [21], which used a linear constraint assuming that the acceleration at the joint center is identical when calculated using acceleration measurements from either IMU. Equations (12)-(26) assume solely a link constraint. Equations (27)-(34) provide the extensions that include the rotational constraint to limit elbow ab/ad and wrist in/ex. These methods assume the IMU is already aligned with the underlying body segment. Spatial orientation is obtained by first integrating ω with respect to time, where ∆t is the sampling period.
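A minimal sketch of this integration step (assuming ω is constant over each sample period and using the exact quaternion exponential; not the paper's implementation):

```python
import math

def quat_mul(a, b):
    """Hamilton product, scalar-first quaternions."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return [a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0]

def gyro_step(q, w, dt):
    """One integration step: q_t = q_{t-1} ⊗ exp_q(w·dt/2),
    assuming w (rad/s, body frame) is constant over the sampling period dt."""
    wn = math.sqrt(w[0]**2 + w[1]**2 + w[2]**2)
    if wn < 1e-12:
        return list(q)
    half = 0.5 * wn * dt
    s = math.sin(half) / wn
    dq = [math.cos(half), w[0]*s, w[1]*s, w[2]*s]
    return quat_mul(q, dq)

# constant 90°/s about z for 1 s at 128 Hz (the paper's IMU rate) → 90° total
q = [1.0, 0.0, 0.0, 0.0]
dt = 1.0 / 128
for _ in range(128):
    q = gyro_step(q, [0.0, 0.0, math.pi/2], dt)
```

Without a correction step, gyroscope bias and noise accumulate in this recursion, which is the drift the measurement updates below are designed to remove.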
Note that the Kalman filter state transition equation is unnecessary since η̂_t is zero at the start of each time update, and the inputs are assumed to be zero-mean Gaussian.
The predicted estimate covariance P_{t|t−1} is calculated using Equation (13):

P_{t|t−1} = F_{t−1} P_{t−1|t−1} F_{t−1}^T + G Q G^T (13)
Gyroscope orientation deviation transition matrix F, gyroscope noise Jacobian G, and gyroscope noise covariance Q are calculated as shown in (14)-(16), where σ²_ω is the gyroscope white noise and 0_n is an n × n matrix of zeros. Kalman gain K_t is calculated as follows:

K_t = P_{t|t−1} H_t^T (H_t P_{t|t−1} H_t^T + R)^{−1}

with measurement Jacobian H defined by the linear acceleration constraint, where ω̇ is the derivative of ω calculated using a 3rd-order approximation [56]; H_t = H_{t,acc} with solely a link constraint. Measurement noise covariance matrix R is defined using the acceleration constraint noise σ²_acc. η̂_t is calculated as follows:

η̂_t = K_t δz_t

where δz_t = δz_{t,acc} with solely a link constraint, and the updated estimate covariance P_{t|t} is calculated as

P_{t|t} = (I_6 − K_t H_t) P_{t|t−1}

Finally, η̂_{t,i} is used to correct q̃^{GI}_{t|t−1,i} through multiplication by the first-order error quaternion δq(η̂_{t,i}) ≈ [1, η̂_{t,i}^T/2]^T.
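The predict-gain-update covariance algebra above can be sketched numerically on a generic scalar system (illustrative noise values, not the paper's 6-state filter or its gyroscope/accelerometer settings):

```python
# Generic scalar Kalman covariance recursion mirroring the
# P_{t|t-1}, K_t, P_{t|t} pattern of the mekf equations.
F, G, Q = 1.0, 1.0, 0.01   # state transition, noise Jacobian, process noise
H, R = 1.0, 0.04           # measurement Jacobian, measurement noise covariance
P = 1.0                    # initial estimate covariance
for _ in range(50):
    P = F * P * F + G * Q * G       # predicted estimate covariance P_{t|t-1}
    K = P * H / (H * P * H + R)     # Kalman gain K_t
    P = (1.0 - K * H) * P           # updated estimate covariance P_{t|t}
```

After a few iterations P converges to the steady-state Riccati solution, which is why filter accuracy depends on the noise settings (Tables 1 and 2) rather than the initial covariance.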

Addition of Rotational Constraint
The degree of freedom rotational constraint for the elbow (27), adopted from [33], was added as an additional measurement update to limit elbow ab/ad. The corresponding Jacobian is calculated using (28). Unit vectors u_1, u_2, and u_3 are defined in (29). Similarly, the degree of freedom rotational constraint for the wrist (30) was added as an additional measurement update to limit wrist in/ex. The corresponding Jacobian is calculated using (31). With the addition of the rotation constraint, δz_t, H_t, and R become (32)-(34), where dof corresponds to either el_dof or wl_dof, depending on the joint of interest.

Rauch-Tung-Striebel Smoother
The rts first requires running the mekf, with the values of P_{t|t−1}, F_{t−1}, P_{t|t}, η̂_t, and q̃^{GI}_{t|t,i} from (13), (14), (23), (25), and (26), respectively, stored. These values are used when computing the smoothed orientation deviation η̂_{t|n}, its associated covariance P_{t|n}, and the smoothed orientation q̃^{GI}_{t|n,i} by executing (35) through (38) from t = n − 1 to t = 1.
Since the iterations are executed 'backwards', P_{t+1|t} is used instead of P_{t|t−1}. η̂_{t+1|n} is initialized to the value of η̂_t at t = n, and P_{t+1|n} is initialized to the value of P_{t|t} at t = n, where n is the number of measurements. Consistent with the mekf, (11) is used to calculate η̂_{t|n,i} from η̂_{t|n}. More information on the implementation of rts can be obtained from [50].
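The forward-filter/backward-pass structure can be sketched on a scalar random-walk model (illustrative values; not the paper's orientation-deviation states):

```python
# Scalar RTS smoother: forward Kalman filter storing predicted and filtered
# moments, then the backward recursion from t = n-1 down to t = 1.
F, Q, H, R = 1.0, 0.05, 1.0, 0.2
z = [0.1, 0.3, 0.2, 0.4, 0.35]           # illustrative measurements
x, P = 0.0, 1.0
xs_f, Ps_f, xs_p, Ps_p = [], [], [], []  # filtered and predicted histories
for zt in z:
    xp, Pp = F * x, F * P * F + Q        # predict
    K = Pp * H / (H * Pp * H + R)        # Kalman gain
    x, P = xp + K * (zt - H * xp), (1 - K * H) * Pp
    xs_p.append(xp); Ps_p.append(Pp); xs_f.append(x); Ps_f.append(P)

# backward pass (0-indexed: t = n-2 .. 0); last smoothed state = last filtered
xs_s, Ps_s = list(xs_f), list(Ps_f)
for t in range(len(z) - 2, -1, -1):
    C = Ps_f[t] * F / Ps_p[t + 1]        # smoother gain
    xs_s[t] = xs_f[t] + C * (xs_s[t + 1] - xs_p[t + 1])
    Ps_s[t] = Ps_f[t] + C * (Ps_s[t + 1] - Ps_p[t + 1]) * C
```

The smoothed covariance is never larger than the filtered covariance at any step, which is the mechanism behind the accuracy gains reported for rts over mekf.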

Maximum a Posteriori
The map sensor fusion algorithm with a linear acceleration constraint from [21] was implemented. Levenberg-Marquardt was used as the solver instead of Gauss-Newton for convergence stability. Generalized implementation information on map can be obtained from [54]. In brief, map for this application considers the objective function (39), where e_{init,i}, e_{t,ω,i}, e_{t,acc} are the residuals associated with the initial orientation, the orientation derived from gyroscope measurements, and the linear acceleration constraint, respectively, and Σ_init, Σ_ω, Σ_acc are the corresponding noise covariance matrices. Note that the minimization in (39) assumes that the residuals are Gaussian with zero mean; minimizing the residual is then equivalent to maximizing the stochastic likelihood. This objective function was solved by iterating the Levenberg-Marquardt solver until the solution converged. The orientation deviation at iteration k is calculated by iterating (40) and (41), where λ is the damping parameter and W is a diagonal matrix containing the weights (42), where diag(·) is the diagonal operator and {a, m} repeats value a m times. The error function e is defined as follows, where log_q(q) ≈ q_v for small rotations, e_acc = δz_acc, which is calculated with (24), q_init is the initial orientation value, and q̂_{1,i} is the estimated value of q̂_i at t = 1. Note that e_{init,i} has a dimension of 3 × 1, e_{ω,i} has a dimension of 3(n − 1) × 1, and e_link has a dimension of 3n × 1.
The corresponding Jacobian matrix J is defined blockwise, with its 3 × 3 blocks at time t calculated from the orientation estimates.

Addition of Rotational Constraint
The objective function, error function, corresponding Jacobian, and weight matrix with the addition of the rotation constraint follow the same pattern, with separate formulations for the elbow and for the wrist.
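The Levenberg-Marquardt step dx = −(JᵀWJ + λI)⁻¹JᵀWe used by the map solver can be sketched on a one-parameter toy residual (a sketch with a fixed small damping parameter; the adaptive λ schedule of a full implementation is omitted):

```python
import math

def lm_solve(x, lam=1e-8, tol=1e-10, max_iter=25):
    """Levenberg-Marquardt on the toy residual e(x) = x² - 2 (minimizer √2)."""
    for _ in range(max_iter):
        e = x * x - 2.0                 # residual
        J = 2.0 * x                     # Jacobian de/dx
        dx = -(J * e) / (J * J + lam)   # damped Gauss-Newton step
        x += dx
        if abs(dx) < tol:               # converged
            break
    return x

root = lm_solve(1.0)
```

With λ near zero the step reduces to Gauss-Newton; increasing λ shortens the step toward gradient descent, which is the stability property motivating the switch from Gauss-Newton in this study.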

Translational Alignment
The distance from the joint center to each IMU (r_i) is calculated using the method from [56], which solves for these distances by minimizing the difference in acceleration magnitude at the joint center. The cost function is as follows, where K_{t,i} is defined in (20). The parameter vector Φ is defined as Φ = [r_{x,1}, r_{y,1}, r_{z,1}, r_{x,2}, r_{y,2}, r_{z,2}]^T, and Φ is solved by iterating the following until convergence.
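The link constraint underlying this cost function can be illustrated with a synthetic single-joint example (our notation and sign conventions, which may differ from the paper's K_{t,i}): an IMU at offset p from the joint center measures a_imu = a_joint + K p with K = [ω̇]_× + [ω]_×[ω]_×, so with the correct offsets both IMUs recover the same joint-center acceleration.

```python
def skew(u):
    """Skew-symmetric matrix [u]_x such that [u]_x v = u × v."""
    return [[0.0, -u[2], u[1]],
            [u[2], 0.0, -u[0]],
            [-u[1], u[0], 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def K(w, wd):
    """Acceleration propagation matrix K = [wd]_x + [w]_x [w]_x."""
    Sd, SS = skew(wd), matmul(skew(w), skew(w))
    return [[Sd[i][j] + SS[i][j] for j in range(3)] for i in range(3)]

def imu_accel(a_joint, w, wd, p):
    """Acceleration an IMU at offset p from the joint would measure (gravity omitted)."""
    Kt = K(w, wd)
    return [a_joint[i] + matvec(Kt, p)[i] for i in range(3)]

# two IMUs on adjacent segments, each with its own angular motion and offset
a_joint = [0.1, -0.2, 0.0]                        # shared joint-center acceleration
w1, wd1, p1 = [0.0, 0.0, 2.0], [0.0, 0.0, 0.5], [0.0, -0.12, 0.0]
w2, wd2, p2 = [0.5, 0.0, 1.0], [0.1, 0.0, 0.0], [0.0, 0.10, 0.0]
a1 = imu_accel(a_joint, w1, wd1, p1)
a2 = imu_accel(a_joint, w2, wd2, p2)

# with the correct offsets, the joint-center acceleration recovered from either
# IMU coincides; the alignment cost penalizes the residual magnitude mismatch
rec1 = [a1[i] - matvec(K(w1, wd1), p1)[i] for i in range(3)]
rec2 = [a2[i] - matvec(K(w2, wd2), p2)[i] for i in range(3)]
```

With wrong offsets the recovered accelerations disagree, and the optimizer adjusts Φ until the mismatch is minimized over all samples.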

Rotational Alignment
IMUs were rotationally aligned to the underlying upper arm and forearm body segments using the extrinsic calibration method from [27]. Modifications were made to omit magnetometer measurements: orientation estimates were instead obtained from a sensor fusion algorithm using the linear acceleration constraint, and the heading error parameters in [27] used to mitigate magnetometer disturbance were therefore excluded. Additionally, parameters were solved using Levenberg-Marquardt instead of Gauss-Newton to improve reliability.
The joint axes j_1 and j_2 were parameterized in spherical coordinates, with an alternative parameterization obtained by swapping the roles of the angles, where θ_i and ψ_i are scalar values that define angles in spherical coordinates and ensure j_i is a unit vector. To alleviate singularities, the two parameterizations for j_i were switched from one to the other whenever |sin(θ_i)| < 0.5. The 4 × 1 parameter vector Φ, containing θ_i and ψ_i for both axes, and its corresponding error function e(Φ) are defined using the vector dot product (·) and vector cross product (×), where R^{GI}_{t,i} is the orientation of the IMU (local Frame I) relative to the global Frame G.
Φ is solved by iterating the following until convergence.
where J is the partial derivative of e(Φ) with respect to Φ. The rotational offset from inertial Frame I to the segment intermediate Frame S′ is calculated using the axis-angle quaternion

q(α, u) = [cos(α/2), sin(α/2) u^T]^T

The rotational offset from the inertial frame to the segment frame, q^{IS}_i, is then calculated as follows, where α_0 is the measured orientation and α_r is the reference orientation for the joint segment, which we define as the orientation of a given segment at neutral posture.
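A common spherical parameterization of a unit axis (an assumption for illustration; the paper's exact form is not reproduced) is unit-norm by construction, which is the property the θ_i, ψ_i parameterization exploits:

```python
import math

def axis(theta, psi):
    """Unit joint-axis candidate from two spherical angles (illustrative form)."""
    return [math.cos(theta) * math.cos(psi),
            math.cos(theta) * math.sin(psi),
            math.sin(theta)]

j = axis(0.3, 1.2)
```

Near |sin(θ)| ≈ 1 the ψ direction loses sensitivity (a pole of the parameterization), which motivates the switch between the two parameterizations at |sin(θ_i)| < 0.5.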
Once q^{IS}_i is obtained, gyroscope measurements ω^s_{t,i}, accelerometer measurements a^s_{t,i}, and the distances from the joint center to the adjacent IMUs r^s_i in the segment frame are calculated and used in the sensor fusion equations instead of ω_{t,i}, a_{t,i}, and r_i to solve for spatial orientations in the segment frame. Note that R^{IS}_i is the rotation matrix equivalent of q^{IS}_i. Elbow orientation q_{t,el} is calculated as follows, where i = 1 denotes the upper arm orientation and i = 2 denotes the forearm orientation.
The rotational alignment for the wrist was obtained by solving for the parameter vector Φ containing the orientation deviations corresponding to the IMUs' rotational alignment to the forearm and hand. The first three elements correspond to the alignment of the forearm IMU, while the last three elements correspond to the hand alignment. Specifically, the following was iterated until convergence, where Φ for the rotational alignment of the wrist is a 6 × 1 vector; i = 1 and i = 2 correspond to the forearm and hand segments, respectively; and R^{IS}_i is the rotation matrix equivalent of q^{IS}_i. Equations (78)-(80) are subsequently used to transform the angular velocity, linear acceleration, and lever arm from the IMU frame to the segment frame, respectively.

Data Collection
We refer to our previous work [6,7,15,57] for a comprehensive description of our data collection methods. In brief, 13 right-handed participants (11 male; mean age = 27.2 ± 6.6 years) each completed six trials of a material transfer task that involved transferring wooden dowels from a waist-height container directly in front of the participant to a shoulder-height container for one minute. The exclusion criteria included any self-reported cases of (i) a physician-diagnosed MSD in the previous six months, (ii) pain during the two weeks prior to study enrollment, and (iii) a history of orthopedic surgery in the upper extremity (shoulder, elbow, wrist, hand). Written informed consent was obtained prior to data collection. The study protocol was approved by the University of Iowa Institutional Review Board.
The six trials consisted of two at each of three material transfer rates (15, 30, and 45 transfers per minute) dictated by a metronome. The transfer rate for each trial was randomly assigned. Each trial was followed by a five-minute rest period and a 'practice' period for the participant to acquaint themselves with the given transfer rate. IMUs (Opal, APDM) were attached to the dominant upper arm, forearm, and hand using Velcro® straps for each participant. The IMUs were oriented with the positive x-axis parallel to the body segment and pointed distally, the positive y-axis parallel to the sagittal plane and pointed forward, and the positive z-axis perpendicular to the sagittal plane and pointed laterally when the participant was in the 'I-pose'. IMU and OMC measurements were recorded at 128 and 120 Hz, respectively. An OMC (OptiTrack Flex 13, NaturalPoint, Corvallis, OR, USA) was used to track four reflective markers attached to each IMU's top surface. OMC spatial orientation was calculated for each marker cluster using the manufacturer-provided software package (Motive: Body version 1.10.0, NaturalPoint, Inc., Corvallis, OR, USA). IMU measurements were recorded continuously and exceeded 30 min in duration for each participant.

Implementation of Sensor Fusion Algorithms and IMU-to-Segment Alignment
All computation was accomplished using MATLAB (R2021a, MathWorks) except for map-acc and map-dof; the map algorithms were implemented in C++ using the Eigen 3.4.0 linear algebra library due to computational overhead. The settings for the sensor fusion algorithms are shown in Table 1 for the elbow and Table 2 for the wrist. The initial quaternion was set to [1, 0, 0, 0] for all sensor fusion algorithms. Map was iterated until one of the following was reached: (i) a local minimum, (ii) a maximum of 25 iterations, or (iii) a tolerance of 0.01. A damping parameter (λ) of 1 × 10⁻⁸ was used. Translational and rotational alignment were determined using the same trial with a 'fast' transfer rate. For both the elbow and wrist joints, the translational alignment algorithm was initialized to zeros and executed for 20 iterations. The rotational alignment was calculated with the spatial orientation measurements provided by rts-acc under the assumption that the IMU was translationally aligned to the underlying body segment. The rotational alignment for the elbow was iterated until one of the following was reached: (i) a local minimum, (ii) a maximum of 100 iterations, or (iii) a tolerance of 0.001; the λ damping parameter was set to 1. The rotational alignment for the wrist was iterated until one of the following was reached: (i) a local minimum, (ii) a maximum of 50 iterations, or (iii) a tolerance of 0.001; the λ damping parameter was set to 0.0001.
The offset between the OMC-derived relative orientation and the IMU-derived relative orientation was determined using the method described in [58] and applied to the OMC measurements. This procedure was consistent with [21]. Error at a given time is defined according to [59] as the orientation difference q_{t,ERR} between q_{t,REF} and q_{t,IMU}, where q_{t,REF} is the relative orientation from the OMC with the alignment offsets added and q_{t,IMU} is the IMU-derived relative orientation. The OMC was aligned to the IMU using the same trial used for IMU-to-segment (I2S) alignment. For the elbow, q_{t,ERR} was decomposed into a z-y′-x″ Euler rotation sequence to obtain the error in fl/ex, ab/ad, and in/ex, respectively, at time t. For the wrist, q_{t,ERR} was decomposed into an x-y′-z″ Euler rotation sequence corresponding to in/ex, fl/ex, and ulnar/radial deviation (ul/ra). Total error (θ_{t,tot}) is calculated as the root-sum-square of the three decomposed angle errors:

θ_{t,tot} = sqrt( Σ_j θ²_{t,j} )

where j indexes the three motion planes.
RMSE for direction j is calculated as:

RMSE_j = sqrt( (1/n) Σ_{t=1}^{n} θ²_{t,j} )

where n is the number of samples.
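A sketch of these error metrics (per-plane RMSE and root-sum-square total error, our reading of θ_{t,tot}; the per-sample errors are illustrative, not study data):

```python
import math

err = {  # illustrative per-sample angle errors (deg) in each motion plane
    "fl/ex": [1.0, -2.0, 1.5, 0.5],
    "ab/ad": [0.5, 0.5, -1.0, 0.0],
    "in/ex": [0.2, -0.3, 0.1, 0.4],
}

def rmse(x):
    """Root mean square of a sequence."""
    return math.sqrt(sum(v * v for v in x) / len(x))

rmse_by_plane = {j: rmse(v) for j, v in err.items()}

n = len(err["fl/ex"])
# total error per sample: root-sum-square across the three motion planes
total = [math.sqrt(sum(err[j][t] ** 2 for j in err)) for t in range(n)]
total_rmse = rmse(total)
```

A useful consequence of these definitions is that the squared total RMSE equals the sum of the squared per-plane RMSEs, so the 'total' columns in the results tables aggregate the three planes consistently.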

Results
Three of the six trials from one of the thirteen participants were discarded from both the elbow and wrist joint angle analyses because the forearm IMU had shifted significantly prior to the start of the third trial. Since the affected trials corresponded to a 'medium' and a 'fast' transfer rate, the first trial, with a 'slow' transfer rate, was also discarded to maintain a balanced dataset across all transfer rates. Data from another participant were discarded for the wrist joint angles because the rotational alignment method failed to converge, resulting in RMS joint angle errors >50° for the methods that leverage the rotational constraint for the wrist. Figure 3 shows the upper arm elevation displacements (e.g., pitch) and the corresponding elbow fl/ex for the last two trials for a participant, along with the rest and practice periods between the trials. The presence of gyroscopic drift in the upper arm elevation measurements is indicated by the 'upward' trajectory from t = 900 s to t = 1200 s and the 'downward' trajectory from t = 1200 s to t = 1400 s, which was not present in the elbow fl/ex displacements. RMSEs across all sensor fusion methods, motion constraints, and transfer rates are shown in Tables 3 and 4 for the elbow and wrist, respectively. Total RMSE was <10.2° for the elbow across all fusion methods, motion constraints, and transfer rates. As expected, accuracy increased with the transfer rate. For the 'fast' transfer rate, the total RMS error decreased to <3.7°. Across all transfer rates and motion constraints, smoothers decreased total RMS errors to <9.2°, while the map smoother decreased total RMS errors to <7.3°. The rotational motion constraint decreased total RMSE by between 0.5° and 1.4° across all transfer rates for the filtering approach. It decreased total RMSE by between 0.7° and 2.1° for the 'slow' transfer rate across the two smoothing methods.
Except for the 'slow' transfer rate, total RMSEs were practically identical (differing by ≤0.1°) for all smoothers (i.e., rts-acc, map-acc, rts-dof, and map-dof) for a given transfer rate, with total errors of ~3.5° for the 'medium' transfer rate and ~2.0° for the 'fast' transfer rate. The similarities of elbow fl/ex calculated using rts-acc and rts-dof for the 'fast' transfer rate and the deviations during the 'slow' transfer rate are shown in Figure 4. Elbow fl/ex calculated using omc, mekf-acc, and map-acc for a 'slow' motion trial and its corresponding sample-to-sample error for one trial and one cycle are shown in Figures 5 and 6, respectively.

Figures 5 and 6. Elbow flexion/extension for a single one-minute trial with a 'slow' transfer rate calculated using an optical motion capture system (OMC) and the linear acceleration-based kinematic constraint, and its sample-to-sample difference, calculated using two different sensor fusion algorithms: a multiplicative extended Kalman filter (mekf-acc) and a maximum a posteriori smoother (map-acc).

Total RMSE was <4.4° for the wrist across all fusion methods, motion constraints, and transfer rates. As expected, accuracy increased with the transfer rate. Across all transfer rates and motion constraints, smoothers decreased total RMS errors to <2.6°. The total RMS error decreased to <2.2° for the 'fast' transfer rate. For the wrist, the rotational motion constraint increased total RMSE by between 0.4° and 1.5° for the 'slow' transfer rate but decreased total RMSE by 0.4° for the 'fast' transfer rate. Except for the 'slow' transfer rate, total RMSEs were practically identical (differing by ≤0.2°) for all smoothers (i.e., rts-acc, map-acc, rts-dof, and map-dof) for a given transfer rate, with total errors of ~1.7° for the 'medium' transfer rate and ~1.5° for the 'fast' transfer rate.

Discussion
Our results indicate that elbow and wrist kinematics can be calculated for extended durations (>30 min) using solely angular velocity and linear acceleration measurements from IMUs, with RMSE <4.7° for a given motion plane across the tested motion speeds (i.e., dowel transfer rates). As hypothesized, for a given sensor fusion algorithm, (i) measurement accuracy improved with increased motion speeds and (ii) RMSE was smaller for the wrist than the elbow. The better wrist performance may be attributed to greater linear accelerations at the joint center due to increased motion speeds and the longer lever arms associated with the wrist compared to the elbow.
As expected, smoothers decreased measurement error compared to the evaluated filters. The decrease in measurement error was more pronounced with lower linear accelerations, as experienced at the elbow joint during the 'slow' transfer rate; in this situation, map provided an improvement over rts. Otherwise, map did not produce an appreciable difference relative to rts (differing by <0.2° total RMSE). As hypothesized, the addition of the rotational motion constraint decreased measurement error for the elbow joint. However, in contrast to our hypothesis, adding the rotational motion constraint increased measurement errors from <2.9° to <4.4° for the wrist joint during the 'slow' transfer rate. This increase in error magnitude could be attributed to the need for a more refined rotational kinematic model. It is also likely that the linear acceleration at the wrist joint provided sufficient sensor excitation for our range of testing conditions, such that the rotational constraints did not provide an added benefit. Except for the elbow joint during the 'slow' transfer rate, the choice of motion constraint did not produce an appreciable difference for the smoothers (differing by <0.4° total RMSE).
For the elbow joint, the relatively large variations in RMSE among the trials for the 'slow' transfer rate suggest that the linear acceleration at the joint center for this motion condition provided insufficient excitation for the sensor fusion algorithm for multiple participants. This limitation was mitigated through rts-acc and map-acc and the addition of a rotational motion constraint. The resulting decrease in measurement error for the 'slow' transfer rate is pertinent because it represents the highest error magnitude within our study and is representative of extended work durations (i.e., hours) where high motion speeds are unlikely to be sustained. While the rotational constraint did provide notable benefits for the elbow joint, the use of this constraint is (i) joint specific (i.e., a different formulation is required for the wrist) and (ii) will cause errors when the axis of constraint (e.g., the axis of ab/ad for elbow) is nearly vertical due to limited observability.
The choice of sensor fusion method is application specific. Within the context of the three different sensor fusion methods presented within this study, mekf is the only viable option if a joint angle must be computed as data are streamed. If joint angles can be calculated after collecting sensor measurements, the results indicate that rts and map can further reduce measurement error. The map sensor fusion method consistently provided the most accurate results for the elbow, especially during the 'slow' transfer rate. However, it did not provide any additional benefit for the wrist joint. We hypothesize that any benefits of the map sensor fusion method were not observed for the wrist joint because our choice of material transfer rates provided sufficient sensor excitation (i.e., linear acceleration).
The goal of this study was not limited to determining the lowest error magnitudes for IMU-based motion capture but was also to understand the effects of movement speed on the error magnitude for various sensor fusion algorithms. In general, results for mekf-dof at the 'slow' transfer rate for the elbow were consistent with the literature investigating magnetometer-free IMCs on the upper extremity. Our filter- and smoother-based results at the 'medium' transfer rate for the elbow were consistent with the respective filters and smoothers proposed for calculating lower extremity joint kinematics. Our results for the wrist joint and for the smoothers at the 'fast' transfer rate are the most accurate among the known literature.
Regarding the upper extremity, Luinge et al. [28] used a rotational constraint for the elbow joint and reported RMSE of <10° for one of their two tasks and >20° RMSE for the other. El-Gohary et al. [23] similarly used a rotational constraint for the elbow joint; they reported 6.5° and 5.5° RMSE for elbow flexion and rotation in a study involving four subjects with a recording timeframe of 2 min. Our study's filter-based approach with a rotational constraint (mekf-dof) offered a marginal increase in measurement accuracy for the elbow joint: RMSE <9° was observed overall, with 6.5° and 4.5° RMSE for elbow flexion and rotation, respectively. Using smoothers under increased motion speeds offered further reductions in error magnitudes. We hypothesize that our increase in measurement accuracy with mekf-dof compared to these studies may be attributed to differences in motion, the choice of kinematic model, and the model-based method for determining the rotational alignment of the IMUs. In contrast, Miezal et al. [22] reported a total RMSE of <3.5° for the shoulder, elbow, and wrist using validation from a single participant with a measurement timeframe of <30 s. The increase in accuracy in their study may be attributed to using a position-based linear motion constraint instead of the acceleration-based constraint used in this study, differences in motion, and using OMCs to derive I2S parameters.
Studies of the lower extremity may not be directly comparable due to differences in motion dynamics. Similar to the observed decrease in error magnitudes for the wrist joint compared to the elbow joint in our study, we also expect lower error magnitudes for the lower extremity than the upper extremity due to the larger sensor excitations afforded by longer lever arms. Nevertheless, these studies represent the state of the art in magnetometer-free IMC regarding reported error magnitudes, measurement timeframe, and number of participants. Weygers et al. [21] evaluated mekf-acc and map-acc for calculating knee kinematics among 11 healthy participants walking for 7 min; mean RMSEs among the participants were <4.5° and <3.7° for each of the three motion planes for mekf-acc and map-acc, respectively. McGrath et al. [52] used a map smoother and reported 3.8° RMSE for knee flexion/extension and 3.7°, 6.3°, and 4.6° for hip flexion/extension, internal/external rotation, and abduction/adduction, respectively, for 30 min of walking across 12 participants. Potter et al. [29] reported that RMSEs were generally <5° for all ankle joint angles and for flexion/extension and abduction/adduction of the hips and knees in their study of 20 participants who performed one-minute walking trials, each consisting of different walking speeds as well as backward and lateral walking. Similar to our study, RMSEs were observed to be joint and motion dependent; left ankle dorsiflexion, for example, was 5.5° for backward walking but decreased to 4.0° for fast walking. Assuming that the motion dynamics of the lower extremity are equivalent to those of our elbow joint at the 'medium' transfer rate, our filtering approach (mekf-acc) provided improvements in measurement accuracy (<3.4° RMSE for each of the three motion planes) over these results. Teufl et al. [25] reported RMSE of <2.4° for all joints and motion planes of the lower extremity (left and right hip, knee, and ankle) using a smoothing-based approach among 26 participants walking for 6 min, while our smoother approach provided equivalent error magnitudes (<2.4° RMSE for each of the three motion planes) under this same assumption.
The primary strengths of this study are (i) the number of participants and total measurement duration per participant, (ii) the use of field-capable model-based methods for determining I2S, (iii) the consideration of two smoother-based sensor fusion methods while controlling for modeling differences, and (iv) the inclusion of both elbow and wrist joint angles. The extended measurement duration is important to confirm that gyroscopic drift is eliminated. A relatively large number of participants are required to understand how variations in motion dynamics caused by individual differences will affect the performance of the sensor fusion algorithms. The use of smoothers showed the capability of this underutilized method to improve measurement accuracy. The potential benefit of the map smoother over the rts smoother was also demonstrated. Results for the elbow and wrist joints indicate that the accuracy of joint angles depends on sensor excitation, which is attributed to joint location and motion speed.
This study's primary limitation is its use of a cyclic motion pattern and the lack of OMC information during rest periods. Therefore, it is unknown how well these algorithms perform for non-cyclic motion patterns and during rest periods, in which motion may be considered static or quasi-static. A secondary limitation is that the IMUs were attached to the underlying segment using Velcro®, which could allow the sensors to shift relative to the underlying body segment over time, affecting measurement accuracy. Consistent with any sensor fusion algorithm that leverages motion constraints, soft tissue artifacts are expected to adversely affect measurement accuracy.
Future work may entail investigating the optimal conditions for conducting model-based I2S, validating the performance of these sensor fusion methods under static, quasi-static, and non-cyclic motion conditions of the elbow and wrist, and evaluating the computational costs of these sensor fusion algorithms.
Supplementary Materials: The sensor fusion and alignment algorithms described in this paper are available at www.github.com/uah-crablab (accessed on 12 October 2022).