Validation of Novel Relative Orientation and Inertial Sensor-to-Segment Alignment Algorithms for Estimating 3D Hip Joint Angles

Wearable sensor-based algorithms for estimating joint angles have seen great improvements in recent years. While the knee joint has garnered most of the attention in this area, algorithms for estimating hip joint angles are less available. Herein, we propose and validate a novel algorithm for this purpose with innovations in sensor-to-sensor orientation and sensor-to-segment alignment. The proposed approach is robust to sensor placement and does not require specific calibration motions. The accuracy of the proposed approach is established relative to optical motion capture and compared to existing methods for estimating relative orientation, hip joint angles, and range of motion (ROM) during a task designed to exercise the full hip ROM and during fast walking, using root mean square error (RMSE) and regression analysis. The RMSE of the proposed approach was less than that for existing methods when estimating sensor orientation (12.32° and 11.82° vs. 24.61° and 23.76°) and flexion/extension joint angles (7.88° and 8.62° vs. 14.14° and 15.64°). Also, ROM estimation error was less than 2.2° during the walking trial using the proposed method. These results suggest the proposed approach presents an improvement over existing methods and provides a promising technique for remote monitoring of hip joint angles.


Introduction
Joint angle estimates are emerging as important metrics for the analysis of human health and performance. They may soon play key roles in the identification and treatment of a variety of conditions impacting mobility such as osteoarthritis (OA) [1], multiple sclerosis (MS) [2], Parkinson's disease (PD) [3], stroke [4], and rehabilitation from joint injury [5,6]. Hip range of motion, in particular, has emerged as an important indicator of lower-limb motor impairment [7,8], and differentiates healthy and obese children [9]. However, advancement in the use of joint angles for the analysis of human health and performance is largely inhibited by limitations of the current measurement modalities employed in both clinical and research contexts.
The accepted standard for joint angle estimation is optical motion capture (OMC). OMC works by using infra-red (IR) cameras to locate IR-reflective markers in 3D space. Markers are placed on specific anatomical landmarks (e.g., anterior-superior iliac spines, femoral epicondyles, etc.) which allow the reconstruction of segment frames. The relative orientation of adjacent frames is then used to compute joint angles.
Building on previous work in this area, the objectives of this study were to carefully validate novel sensor-to-sensor relative orientation and sensor-to-segment alignment algorithms by assessing performance in the estimation of relative orientations and hip joint angles in human subjects. To provide context for these results, their performance was also compared to other methods previously described in the literature [11,13]. Finally, to examine the potential clinical utility, the computed joint ROMs are also compared to those from OMC.

Measurement Protocol
Twenty subjects (N = 10 male, N = 10 female, 22.9 ± 5.4 years old; Inclusion: able to perform daily activities without difficulty; exclusion: diagnosis of a balance or mobility impairment, inability to complete the in-lab activities of daily living without assistance, opioid-dependent) participated in the study. The study was carried out following the rules of the Declaration of Helsinki of 1975, and all study activities were approved by the University of Vermont Institutional Review Board (CHRBSS 18-0518, approved 24 May 2018) and subjects gave written informed consent prior to participation.
Subjects were instrumented with wearable MIMU sensors and reflective markers for OMC, as in Figure 1. Two calibration trials were performed: a static standing trial and a star calibration motion (the 'StarArc' of [36], using five combined movements instead of seven). Following the calibration trials, calibration-only markers (trochanters, medial femoral epicondyles, lateral tibial condyles, medial tibial condyles) were removed before subjects completed a series of activities including standard functional assessments (e.g., standing-sitting transitions) as well as simulated activities of daily living (e.g., walking, lying down). Each subject performed each task once. Herein we consider data sampled during trials in which subjects completed the star calibration and walked on the treadmill for one minute ('walking' trial). The star calibration was included because it exercised the hip about all three axes of rotation, while the treadmill walking was included as a common daily activity that is also used in clinical assessments. Over all subjects, the star calibration task had a duration of 56.41 ± 9.99 s (min: 38.44 s, max: 74.55 s). Treadmill walking speed was self-selected fast (resulting speed: 4.89 ± 0.68 kph, min: 3.50 kph, max: 6.40 kph; kph: kilometers per hour), with the fast speed chosen to validate on a more dynamic motion than slower walking would provide.

Optical Motion Capture
Reflective markers were tracked at a rate of 100 Hz with a 19-camera optical motion capture system (VICON Motion Systems) covering two overlapping capture volumes (a force plate walkway and treadmill). Subjects had markers placed on body segments and segment anatomical landmarks, following the locations suggested in [37][38][39]. Rigid clusters of three markers (see Figure 1) were also attached to each inertial sensor.
Reference rotations from a thigh sensor to the pelvis sensor were computed as the ground truth. This required both the cluster-to-global orientation and the sensor-to-cluster orientation. To find the time-invariant sensor-to-cluster orientation, the cluster angular velocity was computed per

$[\omega_C]_\times = ({}^{G}_{C}R)^{\mathsf{T}}\,{}^{G}_{C}\dot{R}$

where $\omega$ is the angular velocity, $[\cdot]_\times$ is the skew-symmetric matrix form, C indicates the cluster frame and G the global/world frame, and ${}^{A}_{B}R$ is the rotation matrix from frame B to frame A. The OMC global frame is aligned with the capture volume used, and is consistent per subject for all OMC computations. ${}^{G}_{C}R$ was computed using the cluster marker positions. The cluster angular velocity was then compared to the measured sensor angular velocity to obtain ${}^{C}_{S}R$ (S is the sensor reference frame) using the singular value decomposition (SVD) [40]. The OMC-based sensor-to-sensor rotation can then be found per

${}^{S_1}_{S_2}R = ({}^{C_1}_{S_1}R)^{\mathsf{T}}\,({}^{G}_{C_1}R)^{\mathsf{T}}\,{}^{G}_{C_2}R\,{}^{C_2}_{S_2}R$

where 1 and 2 indicate a sensor; for example, 1 and 2 could indicate the lumbar and a thigh sensor, respectively. Hip joint angles were computed from OMC data following an established approach. First, functional joint centers were computed using a least squares geometric fit method [41] with an additional bias correction optimization to compensate for any soft-tissue artefact (STA) in marker trajectories [42]. Segment anatomical frames were defined per ISB standards [37,38] during the static calibration trial, and constant cluster-to-anatomical frame rotations were computed. Hip joint angles were computed as suggested in the ISB standards [37] with angle range corrections in the flexion-extension (FE) and internal-external rotation (IER) directions [43]. Specifically, hip joint axes (e1, e2, e3) were defined such that e1 is aligned with the Z-axis of the pelvis (X, Y, Z are the pelvis axes), e3 is aligned with the y-axis of the thigh (x, y, z are the thigh axes), and e2 is perpendicular to each (e2 = e3 × e1).
Hip joint angles are then defined per Equations (3)-(5), where α is the FE angle, γ is the IER angle, β is the ad/abduction (AA) angle, $\|\cdot\|_2$ is the vector 2-norm, and η is a correction that ensures the sign conventions are maintained. The OMC-based hip joint angles and sensor-to-sensor rotations serve as ground truth values for comparing the existing and novel MIMU-based algorithms described next.
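To make the joint-coordinate-system construction concrete, the following sketch computes hip angles from pelvis and thigh anatomical axes. It is a simplified illustration: it omits the range and sign corrections of [43], the η correction, and assumes the axis conventions described above (the function name and exact angle conventions are ours, not the paper's).

```python
import numpy as np

def hip_angles(R_p, R_t):
    """Sketch of joint-coordinate-system hip angles (degrees).

    R_p, R_t: 3x3 matrices whose columns are the pelvis (X, Y, Z) and thigh
    (x, y, z) anatomical axes, expressed in a common frame.
    """
    X, Y, Z = R_p.T                       # pelvis axes (columns of R_p)
    x, y, z = R_t.T                       # thigh axes (columns of R_t)
    e1 = Z                                # fixed pelvis axis
    e3 = y                                # fixed thigh axis
    e2 = np.cross(e3, e1)                 # floating axis
    e2 /= np.linalg.norm(e2)
    # FE: rotation of the floating axis away from the pelvis X axis, about e1
    fe = np.degrees(np.arctan2(np.dot(np.cross(X, e2), e1), np.dot(X, e2)))
    # AA: deviation of the thigh long axis from perpendicular to e1
    aa = np.degrees(np.arccos(np.clip(np.dot(e3, Z), -1.0, 1.0))) - 90.0
    # IER: rotation of the thigh x axis away from the floating axis, about e3
    ier = np.degrees(np.arctan2(np.dot(np.cross(e2, x), e3), np.dot(e2, x)))
    return fe, aa, ier
```

For example, a thigh frame rotated 30° about the pelvis Z axis relative to an identity pelvis frame yields 30° of flexion and zero in the other directions under these conventions.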

Wearable Magnetic and Inertial Sensors
Eight MIMU sensors (Opal v2, APDM, Inc.; 'Opal') were each seated in plastic clips and attached to the feet, shanks, thighs, lumbar region, and sternum in manufacturer-suggested locations (see Figure 1) via double-sided adhesive tape and Velcro straps to prevent slipping against the skin. However, the algorithm presented herein requires a minimum of three permanent sensors and one temporary sensor (used only initially for joint center estimation) for bilateral estimation, whereas only two permanent sensors and one temporary sensor are required for unilateral joint angle estimation. Herein, only the lumbar and thigh sensors are considered for joint angle estimation, with the shank sensors utilized only for joint center location estimation.
Data from all Opals were time-synchronized with each other, and were synchronized with the OMC system via an electronic trigger. Acceleration, angular velocity, and magnetic field were recorded from each sensor at a sampling frequency of 128 Hz.

Sensor-to-Sensor Rotation
Performance of the current state of the art algorithms for estimating sensor orientations on these data was established using the proprietary orientation estimation provided by the Opal sensor manufacturer APDM ('APDM' orientation method). As the Opal orientation output is per sensor relative to a world frame (defined by gravity and magnetic north), the sensor-to-sensor orientation was defined as

${}^{1}_{2}R = ({}^{W}_{1}R)^{\mathsf{T}}\,{}^{W}_{2}R$

where W indicates the world frame and 1 and 2 the two sensors.
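The composition of the two world-referenced outputs, and the extraction of the single rotation angle later used for comparison, can be sketched as follows (function names are illustrative):

```python
import numpy as np

def sensor_to_sensor(R_w1, R_w2):
    # Each world-referenced orientation R_wi maps sensor-i coordinates into
    # the shared world frame, so the sensor-2-to-sensor-1 rotation follows
    # by composition: R_12 = R_w1^T R_w2.
    return R_w1.T @ R_w2

def rotation_angle(R):
    # Magnitude of the rotation (axis-angle convention), in degrees,
    # recovered from the trace of the rotation matrix.
    return np.degrees(np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)))
```

For instance, if both sensors' world orientations differ only by a 30° rotation about a shared axis, `rotation_angle(sensor_to_sensor(...))` returns 30.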

Hip Angles
Performance of current state of the art hip angle estimation algorithms was established using one of the few, and potentially the only, open-source algorithms available (functional calibration-strapdown integration (FC-STI)) [11][12][13] (MATLAB source code was ported to Python and validated against the provided sample data; original MATLAB code available at https://codeocean.com/capsule/1305245/tree/v1). This method requires a series of functional calibration activities (left and right hip ad/abduction, squatting, trunk rotations, standing) that are used to determine anatomical rotation axes. The orientation of these axes over time is computed using strapdown integration, with joint kinematic constraints used to correct for drift error. Best performance is achieved when data from sensors deployed to all lower-body segments (shanks, thighs, pelvis) and the sternum are considered. This approach provides the pelvis and thigh anatomical frames necessary for computing hip joint angles as per Equations (3)-(5).

Novel MIMU Methods
As part of this work, two novel methods were developed. First, we propose a novel KF-based method for estimating sensor-to-sensor relative orientations (SSRO), which allows data in the thigh-sensor local frame to be transformed into the pelvis-sensor local frame. Next, a novel method for obtaining the sensor-to-segment alignments was developed, utilizing computed joint center locations to form the anatomical functional rotation axes (the 'proposed' method). These axes correspond, as closely as possible, to those used by OMC for computing hip joint angles. Finally, joint angles were computed as the angle between specific anatomical axes, as in the OMC method.

Data Preprocessing
Measured angular velocity, acceleration, and magnetometer readings were low-pass zero-phase filtered with a 15 Hz cutoff frequency. Angular acceleration was calculated from angular velocity using a second-order central difference, per

$\dot{\omega}^{k}_{i} = \dfrac{\omega^{k+1}_{i} - \omega^{k-1}_{i}}{2\Delta t}$

where the overhead dot indicates the time derivative, $\dot{\omega}^{k}_{i}$ is the angular acceleration of the i-th sensor at time point k, and Δt is the time difference between adjacent samples. After calculation, angular acceleration was low-pass zero-phase filtered with a cutoff frequency of 12 Hz.
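The differentiation-then-filter step can be sketched as below, assuming the paper's 128 Hz sampling rate and 12 Hz cutoff (the Butterworth filter order is our assumption; the text does not specify it):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def angular_acceleration(omega, fs=128.0, fc=12.0):
    """Differentiate an N x 3 angular velocity array with second-order
    (central) differences, then zero-phase low-pass filter at fc Hz."""
    # np.gradient uses central differences on the interior and one-sided
    # differences at the two endpoints.
    alpha = np.gradient(omega, 1.0 / fs, axis=0)
    b, a = butter(4, fc / (fs / 2.0))        # 4th order is an assumption
    return filtfilt(b, a, alpha, axis=0)     # zero-phase filtering
```

A linearly increasing angular velocity, for example, yields the expected constant angular acceleration.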

Sensor-to-Sensor Rotation
Sensor orientations were calculated relative to adjacent sensors (e.g., left thigh to lumbar) using the SSRO approach. A KF [44,45] was used for the estimation, where, briefly, the direction of gravity and magnetic north in each sensor's frame were assumed to be the same up to some time-varying rotation between the frames, expressed as

$g_1 = {}^{1}_{2}q \otimes g_2 \otimes {}^{1}_{2}q^{-1}, \qquad m_1 = {}^{1}_{2}q \otimes m_2 \otimes {}^{1}_{2}q^{-1}$

where g is the direction of gravity (unit-length vector), m is the measured magnetic field vector (raw or normalized), ⊗ indicates quaternion multiplication (where quaternion multiplication of a vector v treats v as the pure quaternion [0, v]), and ${}^{1}_{2}q$ is the rotation quaternion (unit length) from frame 2 to frame 1. Time indices are left off for clarity unless different time points are used in the same equation. While magnetic field readings are frequently affected by magnetic disturbances, because the adjacent sensors are relatively close to each other, these disturbances were assumed to have an equivalent effect on both sensors. This in turn yields the assumption that the magnetic field vectors from both sensors were the same (albeit expressed in different frames).
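The quaternion operations above can be sketched as follows, assuming scalar-first [w, x, y, z] quaternions (helper names are ours):

```python
import numpy as np

def qmult(p, q):
    # Hamilton product of two scalar-first quaternions.
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qrotate(q, v):
    # Rotate vector v by unit quaternion q: treat v as the pure quaternion
    # [0, v] and compute q (x) [0, v] (x) q^-1 (the conjugate, since q is
    # unit length), then drop the scalar part.
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])
    return qmult(qmult(q, np.r_[0.0, v]), qc)[1:]
```

With this convention, rotating the gravity vector measured by sensor 2 through ¹₂q expresses it in sensor 1's frame, as in the equality above.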
The measured accelerometer signal (a) can be modeled as a linear combination of the true/body acceleration (ã), gravitational acceleration, and white Gaussian measurement noise (n_a), per

$a = \tilde{a} + G\,g + n_a$   (10)

where g is the unit gravity direction in the sensor frame and G is the gravitational acceleration in m/s². One component of the SSRO is then a model for estimating the direction of gravity. The state vector of the KF is defined as a combination of the gravity vectors and the rotation quaternion between sensor 2 and sensor 1, per

$\chi = [\,g_1^{\mathsf{T}}\; g_2^{\mathsf{T}}\; {}^{1}_{2}q^{\mathsf{T}}\,]^{\mathsf{T}}$

where χ is the state vector. The time update for this vector is defined using a first-order approach, per

$s_{k+1} = s_k + \dot{s}_k\,\Delta t$   (12)

where s is a placeholder that can be either g or q. Because the gravity direction vector is part of the orientation matrix of its sensor, the same time-derivative equation used for rotation matrices [46] can be used for the vector, per

$\dot{g} = -[\omega]_\times\, g$

where $[\omega]_\times$ is the skew-symmetric matrix representation of ω. Similarly for the rotation quaternion, the time derivative can be related to the angular velocity [47] per

${}^{1}_{2}\dot{q} = \tfrac{1}{2}\,{}^{1}_{2}q \otimes \omega_{diff}$

where $\omega_{diff}$ is the difference between the two sensors' angular velocities (i.e., the angular velocity of sensor 2 less that of sensor 1, expressed in a common frame). These time derivatives can then be substituted into the general time update equation (Equation (12)), yielding

$A_g = I_3 - \Delta t\,[\omega]_\times, \qquad A_q = I_4 + \tfrac{\Delta t}{2}\,[\omega_{diff}]_\otimes$

where $A_g$ and $A_q$ are the state update matrices for gravity and the rotation quaternion, respectively, $[\cdot]_\otimes$ is the 4 × 4 matrix form of right quaternion multiplication by the pure quaternion [0, ω], and I is the identity matrix, with the subscript indicating its size. These matrices can be combined into one larger state update matrix for the state vector, with $A_{g_1}$, $A_{g_2}$, and $A_q$ along the diagonal. By rearranging, the process covariance can then be defined [34] in terms of the gravity and rotation quaternion process noise covariance matrices $Q_g$ and $Q_q$ and the gyroscope measurement noise variance $\sigma_\omega$; these are combined along the diagonal in the same way as the state update matrix. In order to have a measurement for the gravity part of the KF, a first-order model of the true acceleration was used [34], per

$\tilde{a}_k = c\,\tilde{a}_{k-1} + \epsilon_k$

where c is a constant between 0 and 1 which sets the cutoff frequency and ε is the time-varying error of the model.
Combined with Equation (10) and rearranged, an equality was formed, per

$\zeta_k = a_k - c\,a_{k-1} + c\,G\,\hat{g}_{k-1} = H_g\,g_k + \text{noise}, \qquad H_g = G\,I_3$

where $\hat{g}_k$ indicates the a priori estimate of $g_k$, G is the gravitational acceleration in m/s², H is the observation matrix, and ζ is the measurement. The measurement for the rotation quaternion was then a combination of the gravity vectors and magnetic field vectors. First, the rotation quaternion from $g_2$ to $g_1$ was found per

${}^{\partial}_{2}q(\hat{g}_2, \hat{g}_1) = \begin{bmatrix} \cos\!\left(\tfrac{1}{2}\arccos(\hat{g}_2 \cdot \hat{g}_1)\right) \\[4pt] \sin\!\left(\tfrac{1}{2}\arccos(\hat{g}_2 \cdot \hat{g}_1)\right)\dfrac{\hat{g}_2 \times \hat{g}_1}{\|\hat{g}_2 \times \hat{g}_1\|_2} \end{bmatrix}$

where ∂ indicates a frame that is partially aligned with that of sensor 1 (i.e., rotation from gravity vectors does not account for heading). To complete the rotation to the frame of sensor 1, the magnetic field vectors can be used, first removing the component in the direction of gravity, per

$m_{xy|i} = m_i - (m_i \cdot \hat{g}_i)\,\hat{g}_i$

The final rotation ${}^{1}_{\partial}q$ can then be found from the rotation required to align ${}^{\partial}_{2}q \otimes m_{xy|2} \otimes {}^{\partial}_{2}q^{-1}$ with $m_{xy|1}$, using the same method as for the gravity vectors. These partial rotations are then combined to yield the measurement, per

${}^{1}_{2}q_{meas} = {}^{1}_{\partial}q \otimes {}^{\partial}_{2}q$

Since a measurement is directly provided for the rotation quaternion, the observation matrix for the rotation quaternion part of the state vector is simply $H_q = I_4$. As with the state update and process covariance matrices, the full observation matrix was formed with those of gravity and the rotation quaternion on the diagonal. The gravity direction measurement noise covariance matrix was defined per [34] in terms of the accelerometer noise variance $\sigma_a$ and the number of samples N used for the moving average, where M is the measurement noise covariance matrix. The rotation quaternion measurement noise covariance was defined as

$M_q = \mu\,I_4$

where μ is an error factor term. Gravity direction and rotation quaternion matrices were combined as the other matrices were. With the state update and measurement models, the rotation between two sensors could be obtained for the given MIMU data using the standard KF recursion, per

$\hat{\chi}_k^- = A_{k-1}\,\chi_{k-1}$
$P_k^- = A_{k-1}\,P_{k-1}\,A_{k-1}^{\mathsf{T}} + Q_{k-1}$
$K_k = P_k^-\,H^{\mathsf{T}}\,(H\,P_k^-\,H^{\mathsf{T}} + M)^{-1}$
$\chi_k = \hat{\chi}_k^- + K_k\,(\zeta_k - H\,\hat{\chi}_k^-)$
$P_k = (I - K_k\,H)\,P_k^-$

where P is the state covariance matrix, and K is the Kalman gain.
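The gravity-based partial alignment step (the quaternion ∂₂q rotating one unit vector onto another) can be sketched as follows (the helper name is ours; scalar-first quaternions assumed):

```python
import numpy as np

def align_quat(v_from, v_to):
    """Unit quaternion [w, x, y, z] rotating unit vector v_from onto v_to.

    Axis: v_from x v_to (normalized); angle: arccos(v_from . v_to) --
    mirroring the gravity-vector alignment step in the text."""
    axis = np.cross(v_from, v_to)
    n = np.linalg.norm(axis)
    if n < 1e-12:                          # already aligned (degenerate case)
        return np.array([1.0, 0.0, 0.0, 0.0])
    theta = np.arccos(np.clip(np.dot(v_from, v_to), -1.0, 1.0))
    return np.r_[np.cos(theta / 2.0), np.sin(theta / 2.0) * axis / n]
```

The same helper can then be reused on the gravity-deflated magnetic field vectors to supply the heading portion of the measurement.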
Finally, at each time point, the true acceleration was calculated per

$\tilde{a}_i = a_i - G\,g_i$

where $g_i$ comes from the corresponding part of the state vector χ (e.g., i = 1 uses the first three elements, while i = 2 uses elements four through six), and G is the gravitational acceleration in m/s². This leaves five parameters to be set for the orientation estimation: $\sigma_\omega$, $\sigma_a$, c, N, and μ. $\sigma_\omega$ and $\sigma_a$ were determined to be 1 × 10⁻³ and 6 × 10⁻³, respectively, for the Opal sensors (determined approximately from the sensors when placed on a stationary surface), while c was set to 0.003 and N to 64. μ was set to 5 × 10⁻⁸, which balanced the reliance on angular velocity integration for the rotation quaternion against the use of the gravity and magnetic field vectors.
Initialization of the gravity vectors for the KF used the mean of the first few samples of the measured acceleration, while the rotation quaternion was initialized by computing the measurement value in the KF.

Sensor-to-Segment Alignment
The proposed method for estimating sensor-to-segment alignments utilizes the ability to estimate joint center locations to find the anatomical axes of the thigh and pelvis.
Joint center locations were estimated by leveraging inherent kinematic constraints as described previously [48,49]. Given two adjacent segments with sensors on each, linked via a common joint, the acceleration of the joint center as computed from each sensor's measurements must be equal to within a relative rotation between frames. Mathematically, this is defined per

$\bar{a}_1 = {}^{1}_{2}R\,\bar{a}_2, \qquad \bar{a}_i = \tilde{a}_i - \dot{\omega}_i \times r_i - \omega_i \times (\omega_i \times r_i)$   (34)

where ā is the acceleration at the joint center and $r_i$ is the vector from the joint center to the i-th sensor's origin. With some manipulation, Equation (34) can be rearranged into an indeterminate linear system. The joint center location is fixed over time, allowing the formation of the over-determined linear system

$\begin{bmatrix} K_1^k & -{}^{1}_{2}R^k\,K_2^k \end{bmatrix}\begin{bmatrix} r_1 \\ r_2 \end{bmatrix} = \tilde{a}_1^k - {}^{1}_{2}R^k\,\tilde{a}_2^k, \qquad K_i^k = [\dot{\omega}_i^k]_\times + [\omega_i^k]_\times[\omega_i^k]_\times$   (35)

where the superscripts indicate the time index. A minimum of 1500 observations were used to inform a least-squares solution for the joint center locations $r_1$ and $r_2$. Points were selected by considering the largest angular velocities measured by sensors 1 and 2. While the hip experiences enough 3D rotation for good location estimation, the quasi-1D nature and limited AA and IER ranges of the knee can result in the estimated location lying anywhere along the knee FE axis. A correction for this location error relies upon an estimate of the knee FE axis, found by enforcing the kinematic constraint

$\|\omega_1 \times j_1\|_2 - \|\omega_2 \times j_2\|_2 = 0$

where $j_i$ is the rotation axis in the i-th sensor's reference frame. The rotation axes can be found via a least squares minimization algorithm such as gradient descent, as in [14,15]. Once the axes have been obtained, the knee joint centers are corrected per

$r_i = \tilde{r}_i - \tfrac{1}{2}\left(\tilde{r}_1 \cdot j_1 + \tilde{r}_2 \cdot j_2\right)j_i$

where $\tilde{r}_i$ is the initial estimate of the knee joint center in the i-th sensor's frame obtained from Equation (35); for the knee, the two sensors are the shank and thigh sensors. This process shifts the estimated joint center location closer to the sensors, and results in a better approximation of the true joint center.
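The stacked least-squares solution for the joint-center vectors can be sketched as follows (a minimal illustration under the r-from-joint-to-sensor convention above; function and argument names are ours, and point selection by angular velocity magnitude is omitted):

```python
import numpy as np

def joint_center(acc1, gyr1, alp1, acc2, gyr2, alp2, R12):
    """Least-squares joint-center vectors r1, r2 from N synchronized samples.

    acc*, gyr*, alp*: N x 3 true acceleration, angular velocity, and angular
    acceleration per sensor; R12: N x 3 x 3 sensor-2-to-sensor-1 rotations."""
    def K(w, a):
        # With r pointing from the joint center to the sensor origin:
        # a_sensor = a_joint + a x r + w x (w x r) = a_joint + K(w, a) r
        wx = np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])
        ax = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
        return ax + wx @ wx
    A, b = [], []
    for k in range(len(acc1)):
        K1, K2 = K(gyr1[k], alp1[k]), K(gyr2[k], alp2[k])
        A.append(np.hstack([K1, -R12[k] @ K2]))   # [K1, -R12 K2][r1; r2]
        b.append(acc1[k] - R12[k] @ acc2[k])      #   = a1 - R12 a2
    r, *_ = np.linalg.lstsq(np.vstack(A), np.hstack(b), rcond=None)
    return r[:3], r[3:]
```

With sufficiently varied rotation (as the hip provides), the stacked system is full rank and the two offset vectors are recovered uniquely.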
Once the joint centers were calculated, the pelvis and thigh fixed axes were calculated following the conventions for axes directions in [37]. The pelvis and thigh anatomical frames were then formed using a static standing trial as follows:
1. Rotate the fixed axes into common frames (e.g., the left thigh fixed axis from the left thigh sensor frame to the pelvis sensor frame).
2. Create the left and right hip joint coordinate systems per ISB standards [37]: e1 = pelvis fixed axis; e3 = left/right thigh fixed axis; e2 = e3 × e1.
3. Create the pelvis anatomical frame from the hip joint coordinate systems, and each thigh anatomical frame from its hip joint coordinate system.
The anatomical frames and axes were created during the most still period of the static calibration trial, and remained constant relative to their respective sensors (e.g., the left thigh anatomical frame is constant in the left thigh sensor frame).

Joint Angles
Following the sensor-to-segment alignments found in the proposed method, hip joint angles were then calculated by rotating the thigh anatomical frames into the pelvis frame at each time point using the SSRO algorithm described above, and computing the angles as per Equations (3)-(5).
All analysis was performed in Python 3.7, and the algorithms presented herein have been developed into a package, which can be found at https://github.com/M-SenseResearchGroup/pykinematics.

Relative Orientations
Angles from the axis-angle convention for rotation were extracted from the sensor-to-sensor relative orientations from the thigh to lumbar sensor (i.e., ${}^{1}_{2}R$ for the OMC and APDM methods and ${}^{1}_{2}q$ for the SSRO method). SSRO and APDM angles were downsampled to 100 Hz to match those from OMC. To ensure that RMSE values accurately reflected the minimum distance between angles, any angle differences larger than 180° had 360° either added or removed, depending on the sign of the difference (i.e., an angle difference of 350° would become 350 − 360 = −10°). RMSE was then calculated per

$\mathrm{RMSE} = \sqrt{\dfrac{1}{N}\sum_{k=1}^{N}(\Delta\alpha_k)^2}$

where N is the number of samples in the trial and Δα is the corrected angle difference. Additionally, regression analysis was performed between the SSRO and APDM methods and the OMC ground truth (i.e., OMC angles were the independent variable, and SSRO or APDM angles the dependent variable). This yielded slope and intercept values: the slope indicates how well changes in the SSRO or APDM rotation angles track those in the OMC rotation angles, while the intercept provides a measure of the bias.
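The wrapping-then-RMSE computation can be sketched compactly (the function name is ours):

```python
import numpy as np

def angle_rmse(est_deg, ref_deg):
    """RMSE between two angle traces (degrees), after wrapping differences
    to (-180, 180] so that, e.g., a raw difference of 350 counts as -10."""
    d = np.asarray(est_deg, dtype=float) - np.asarray(ref_deg, dtype=float)
    d = (d + 180.0) % 360.0 - 180.0        # minimum-distance angle difference
    return np.sqrt(np.mean(d ** 2))
```

The modulo form is equivalent to the add-or-subtract-360° rule in the text, applied element-wise.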
The mean and standard deviation (SD) of statistics were reported for each comparison (SSRO vs. OMC, APDM vs. OMC) and trial type (star calibration and walking).

Joint Angles
Agreements between FE, AA, and IER hip angles using the proposed method to those from OMC were established via RMSE, slope, intercept, ROM difference (ROMD), and drift. To provide context for these results, the same quantities were computed for the comparison between the FC-STI and OMC methods. Proposed and FC-STI method joint angle results were downsampled from 128 Hz to 100 Hz to match OMC joint angle results. RMSE is typically reported for assessing agreement for angle estimations, while slope and intercept provide information about tracking changes and bias respectively. Hip ROM is a clinically-relevant metric that has been shown to differ statistically significantly between, for example, persons with MS and healthy controls [2] and between those with PD and healthy controls [3]. As such, assessing the ROMD between the MIMU and OMC methods is an important component of establishing the applicability of the algorithm to clinical settings.
RMSE was computed in the same way as for the orientation angles, as were slope and intercept. Slope values were considered excellent if they were within 0.1 of 1, good if between 0.1 and 0.3 away from 1, and moderate if between 0.3 and 0.5 away. ROM values for each trial were computed as the difference between the maximum and minimum angles. If a trial was longer than 30 s, however, the first 10 s were excluded to ensure start-up effects were not present in the ROM estimates. ROMD was then the simple difference between the proposed or FC-STI method ROMs and those from OMC.
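The ROM rule, including the start-up exclusion, can be sketched as (thresholds from the text; the function name is ours):

```python
import numpy as np

def rom(angles_deg, fs=100.0, min_trial_s=30.0, skip_s=10.0):
    """Range of motion (max minus min) of an angle trace, dropping the
    first 10 s of trials longer than 30 s to avoid start-up effects."""
    a = np.asarray(angles_deg, dtype=float)
    if a.size / fs > min_trial_s:          # trial duration in seconds
        a = a[int(skip_s * fs):]           # discard the start-up window
    return a.max() - a.min()
```

ROMD then follows as `rom(mimu_angles) - rom(omc_angles)` for each trial and anatomical direction.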
Finally, the drift was assessed during the walking trials with linear regression analysis of the hip angle difference over time. Assessment of drift provides information regarding the ability of the KF framework to mitigate this error source during long non-stationary tasks, which is a critical aspect of most MIMU joint angle estimation algorithms. We do not assess drift in the star calibration trials as other sources of error (e.g., axis misalignment, bias) dominate, which causes the linear model-based approach to make unreliable estimates of the drift error. Drift distributions for each method were tested using the Wilcoxon test for non-zero slopes.
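The drift assessment reduces to a linear fit of the estimation error against time; a minimal sketch (the function name is ours):

```python
import numpy as np
from scipy.stats import linregress

def drift_rate(est_deg, ref_deg, fs=100.0):
    """Drift estimate in deg/s: slope of a linear regression of the
    angle estimation error over time, as in the walking-trial analysis."""
    err = np.asarray(est_deg, dtype=float) - np.asarray(ref_deg, dtype=float)
    t = np.arange(err.size) / fs
    return linregress(t, err).slope
```

Collecting these per-subject slopes and testing the distribution against zero (e.g., with `scipy.stats.wilcoxon`) reproduces the non-zero-drift test described above.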
The mean and standard deviation (SD) of statistics were reported for each comparison (proposed vs. OMC, FC-STI vs. OMC), trial type (star calibration and walking), and angle (FE, AA, IER).

Relative Orientations
Table 1 reports the agreement of methods for computing the orientation of the lumbar sensor relative to the left and right thigh sensors with OMC. The SSRO method exhibits much lower mean RMSE values (11.82° and 12.32° for star calibration and walking) than the APDM orientation (21.61° and 23.76°). Slopes for all trials and methods were above 0.85, indicating that both methods tracked the ground truth OMC angle with less than 15% variation. Intercept values for the SSRO method were again better than APDM, though for both methods very large SDs were present, indicating substantial variability in results between subjects.

Table 1. Mean (SD) of statistics for agreement between sensor-to-sensor relative orientation (SSRO) and APDM results and ground truth optical motion capture (OMC) rotation angles from the axis-angle representation.

Joint Angles
The violin plots of Figure 2 show the hip angle RMSE in each anatomical direction for the proposed and FC-STI methods. Results demonstrate that the median FE RMSE for the proposed method is below that of the FC-STI method. Inter-quartile ranges (IQRs) for AA and IER angles from both methods overlap, indicating similarity between the results.
Figure 3 shows several sample cycles of gait during fast walking on a treadmill for two different subjects. For both subjects, FE and IER proposed method angles exhibit close agreement with the OMC estimate. While the FC-STI method results track these angles well, there is a larger offset present. For the AA angles, the two subjects differ in tracking performance. The subject in Figure 3a shows poor tracking for both methods, while the subject in Figure 3b shows good tracking. Collectively, these results suggest that the proposed approach improves FE angle estimation while maintaining performance in the other anatomical directions.
These results for the subject in Figure 3a are reflected in the sample regression plots from the treadmill fast walk trial of Figure 4. Good agreement is observed between the proposed method and OMC, with slopes and r values that are close to 1. The circular pattern in the AA graphs is due to the cyclic nature of the motion, with proposed and FC-STI results cycling away and towards the OMC ground truth during each gait cycle. This is reflected in the lower r values, 0.852 and 0.604 for the proposed and FC-STI AA angles respectively.
The full report of results is found in Table 2. For the star calibration, the proposed method had half the FE RMSE (7.88 ± 3.64 • ) compared to the FC-STI method (14.49 ± 6.28 • ). Proposed AA and IER RMSE were higher than those of the FC-STI method, but still within the 1 SD range. Slopes were comparable for all three angles between both methods, as were intercepts, though FE and IER for the proposed method were slightly lower than the FC-STI method values. For the star calibration, the proposed method ROMD values were higher than those of the FC-STI method, though both methods showed large SDs.
Walking results showed similar trends to those in the star calibration. FE RMSE for the proposed method was approximately half that of the FC-STI method (8.62 ± 7.52° compared to 15.64 ± 10.24°), AA was higher (5.03 ± 6.42° compared to 5.65 ± 3.16°), and IER was slightly lower (9.99 ± 5.90° compared to 11.93 ± 6.04°). Slopes were again consistent across both methods, indicating that both methods track the ground truth OMC angles well. Intercepts followed a similar trend to that of the RMSE values, with FE being lower for the proposed method (−6.29 ± 9.15° compared to −10.17 ± 14.75°) and AA and IER being similar. ROMD values were very small for both methods, under 2.2° for the proposed and under 4.5° for the FC-STI method. SD values were also much smaller than those of the star calibration, indicating a tight clustering of ROMD values around zero. For ROMD, FE values from the proposed and FC-STI methods were similar, while the proposed method AA (0.80 ± 4.64°) and IER (0.77 ± 4.34°) were much lower than those of the FC-STI method (3.35 ± 5.55° and 4.49 ± 6.78°, respectively). Wilcoxon p-values showed statistical evidence that AA and IER drift values for both the proposed and FC-STI methods were not zero, especially FC-STI IER, with a drift of −0.12 ± 0.18°/s.

Discussion
In this study, novel sensor-to-sensor relative orientation and 3D joint angle estimation methods were presented. Only minimal calibration motions are required, and no specific sensor orientation or placement is necessary. Validation was performed on human subjects using an OMC system as ground truth, and performance was benchmarked against existing algorithms evaluated on the same data.
The proposed orientation method (SSRO) strongly outperforms the existing proprietary APDM orientation, with SSRO RMSE values approximately half those of the APDM (Table 1) for both the star calibration and treadmill fast walking. SSRO slopes were slightly closer to 1 than the APDM slopes, though large SD ranges indicate little difference between the two methods. Overall, the SSRO shows excellent tracking of the ground truth OMC orientations, though with some offset that is reflected in the higher RMSE and intercept values. Better performance of the SSRO method is likely due to better compensation for magnetic effects under the assumption that both sensors pick up the effect. Additionally, there are likely performance benefits from direct alignment of gravity and magnetic field vectors between sensors.
Proposed method RMSE values were all below 10.5° when compared to OMC for both trials, and slopes were all excellent (0.90-1.04) except for IER during walking, which was good (0.78). The same comparison for the FC-STI method yielded similar RMSEs for AA and IER directions, though proposed FE RMSEs were just over half of those from the FC-STI method (Table 2), as well as similar slopes. As the FC-STI method was originally designed and utilized for alpine skiing, this study also serves as a validation during the more common daily activity of gait. The original study for the FC-STI method reported mean errors of −10.7 ± 3.4°, −3.3 ± 4.1°, and 0.5 ± 4.8° for FE, AA, and IER angles [11]. Here, we observed mean errors of 6.35°, 3.81°, and 6.24° for proposed method FE, AA, and IER in the fast treadmill walk, and 10.71°, 1.84°, and −5.50° for the FC-STI method, all of which are comparable to or better than (e.g., proposed FE) the original study [11]. The improved performance of the proposed method for FE angles over the FC-STI method is likely due to the definition of the rotation axes. Whereas the proposed method uses a definition closely aligned with that of OMC, the FC-STI method relies on calibration motions that are assumed to align with the joint rotation axes and may result in more cross-talk between axes and therefore higher RMSE values, especially for FE, which is the prominent axis of motion.
Few other studies have implemented or validated hip angle algorithms in human subjects. Validation of a proprietary algorithm during manual material handling tasks reported less than 7.5° RMSE [31]. Another proprietary algorithm validation resulted in 9.6° and 27.6° RMSE for walking and running on a treadmill [35]. While these results are comparable to or worse than those presented here, no information regarding the algorithms employed by the sensor systems was given due to their proprietary nature, and the significant performance decrease from walking to running raises concerns regarding the utility of the systems. Additionally, the use of alternative sensors would not be possible given the proprietary nature of the algorithms. During walking, another study, which did not use proprietary algorithms, reported mean errors less than 6° [32]. However, upon detection of significant drift, trials were removed from analysis, resulting in approximately 60% of trials being removed (57% in the IER direction). Additionally, angles were taken to be 0° during static standing, which may not be repeatable or valid in populations with balance, mobility, or joint ROM impairments.
Drift in the joint angles was assessed for the proposed and FC-STI methods, with similar results. Both AA and IER showed statistical evidence of non-zero drift during walking for both methods. All mean drift magnitudes were less than 0.03°/s except for walking FC-STI IER (−0.12°/s). We suspect that the walking trial shows evidence of drift (significant non-zero slopes in the AA and IER directions) because it contains no still periods; for the SSRO KF, the absence of still periods degrades estimates of the direction of gravity. The treadmill walking task was one minute long, and future work should therefore explore this trend over longer, multi-minute tasks with the goal of developing effective mitigation strategies.
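The drift assessment described above can be sketched as a linear trend fit to the sensor-minus-OMC angle error over time, with the slope reported in °/s. This is our illustrative reconstruction of the analysis, assuming an ordinary least-squares fit:

```python
import numpy as np

def drift_rate(t, angle_sensor, angle_omc):
    """Estimate drift as the linear trend (deg/s) of the joint-angle error
    over time; a statistically non-zero slope indicates accumulating drift."""
    error = np.asarray(angle_sensor, dtype=float) - np.asarray(angle_omc, dtype=float)
    slope, _intercept = np.polyfit(np.asarray(t, dtype=float), error, 1)
    return float(slope)
```

Over a one-minute trial, a 0.03°/s trend accumulates to less than 2° of error, which helps put the reported drift magnitudes in context.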
Joint ROM is a highly relevant metric for a variety of clinical populations [1][2][3]. As such, assessing the difference between ROM from OMC and wearable sensor methods is critical to establishing performance. While the proposed method exhibits large ROMD with OMC during the star calibration task (8.74° to 18.08°), this set of motions is not typically performed. Gait ROMD yields a much more informative assessment for clinical application, and the proposed method shows very strong performance, with 2.17°, 0.80°, and 0.77° mean ROMD for FE, AA, and IER, respectively. The FC-STI method also performs well, with mean ROMDs of 1.86°, 3.55°, and 4.49°, slightly higher in the AA and IER directions.
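For clarity, ROM and ROMD as used here can be expressed compactly. The sketch below assumes ROM is the peak-to-peak excursion of an angle trace and ROMD is the sensor-minus-OMC difference of those excursions; the function names are ours for illustration:

```python
import numpy as np

def rom(angles):
    """Range of motion: peak-to-peak excursion of an angle trace (degrees)."""
    angles = np.asarray(angles, dtype=float)
    return float(angles.max() - angles.min())

def rom_difference(angles_sensor, angles_omc):
    """ROMD: sensor-derived ROM minus OMC-derived ROM (degrees)."""
    return rom(angles_sensor) - rom(angles_omc)
```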
Differences in ROM between healthy and impaired populations indicate that the accuracy achieved, especially by the proposed method, is sufficient to detect reported ROM differences. For example, in persons with OA, mean peak hip extension in the stance phase during walking was reported as 8.4 ± 7.0° compared to 14° in healthy individuals [3]. For the proposed method, FE ROMD was 2.17 ± 3.61° during treadmill walking, indicating that the proposed method has the resolution necessary to observe these differences between healthy and afflicted populations. With that said, future work should explore similar validation studies to establish performance characteristics of the proposed approach in these populations.
Future work on the SSRO should involve an observability and stability analysis of the KF form [50] to assess the ability of the KF to capture the dynamics of the problem. Future work should also involve a comparison against fluoroscopic methods to provide an even more detailed picture of the performance of the proposed approach. To this end, future work should also examine the repeatability of the proposed method to provide evidence for its use in studies looking for changes in kinematics over repeated visits. Additionally, while previous studies have not grouped results by sex [31,32,35], doing so should be explored in the future to examine potential performance differences associated with anatomical variation. Finally, extension and modification of the proposed methods to other joints (e.g., the knee, shoulder, etc.) should be explored.
Limitations of this study include the limited age range of subjects, with little to no representation of subjects over the age of 25, as well as the small subject pool. While typical of similar studies (N = 11 in [11]), a larger subject pool would allow for stronger conclusions to be drawn from the results. Furthermore, testing on healthy subjects does not indicate how well the algorithms will perform on individuals with mobility impairments. Additionally, while effort was made to ensure that subjects were instrumented for OMC by the same person in the same way, marker placement was in practice performed by two people, potentially introducing some variability between subjects in the initial creation of anatomical reference frames. Finally, while the sensor assigned to the pelvis was placed in the manufacturer's recommended location, which was also used in previous studies [11,32], it is not directly on the pelvis, so rotations from the spine could affect the results.

Conclusions
Wearable MIMU-based methods for estimating sensor-to-sensor relative orientation and 3D hip joint angles were proposed and validated against OMC on human subjects. Python implementations of these algorithms have been made available as open-source software. Innovations in sensor-to-sensor orientation and sensor-to-segment alignment yield improvements in estimation performance during the walking and star calibration tasks considered herein. Specifically, the SSRO showed much lower RMSE values (12.32° vs. 24.61°) than the proprietary orientation estimation algorithm provided by a commercially available MIMU system. Similarly, the proposed method for estimating hip joint angles also had lower RMSE (8.62° vs. 15.64°) than the FC-STI method. During walking, ROMDs for the proposed method were all 2.17° or below, further indicating their close agreement with OMC and the potential clinical utility of this approach. Overall, these results are comparable to and improve upon existing methods for estimating hip joint angles with wearable sensors.
Author Contributions: L.A. contributed to the conceptualization, methodology, investigation, data curation, validation, software, formal analysis, visualization, and writing. R.D.G. contributed to the methodology, investigation, data curation, validation, and review and editing. J.F. contributed to the investigation, data curation, and review and editing. A.T.U. contributed to the investigation, data curation, and review and editing. N.F. contributed to the review and editing. R.S.M. contributed to the conceptualization, methodology, resources, and review and editing. All authors reviewed and approved the final manuscript.
Funding: This research received no external funding.