Estimation of Three-Dimensional Lower Limb Kinetics Data during Walking Using Machine Learning from a Single IMU Attached to the Sacrum

Kinetics data such as ground reaction forces (GRFs) are commonly used as indicators for rehabilitation and sports performance; however, they are difficult to measure with convenient wearable devices. Therefore, researchers have attempted to accurately estimate unmeasured kinetics data with artificial neural networks (ANNs). Because the inputs to an ANN affect its performance, they must be carefully selected. The GRF and center of pressure (CoP) have a mechanical relationship with the center of mass (CoM) in three dimensions (3D). This biomechanical characteristic can be used to establish an appropriate input and structure for an ANN. In this study, an ANN for estimating gait kinetics with a single inertial measurement unit (IMU) was designed; the kinematics of the IMU placed on the sacrum, serving as a proxy for the CoM kinematics, were applied based on 3D spring mechanics. The walking data of 17 participants walking at various speeds were used to train and validate the ANN. The estimated 3D GRF, CoP trajectory, and joint torques of the lower limbs were reasonably accurate, with normalized root-mean-square errors (NRMSEs) of 6.7% to 15.6%, 8.2% to 20.0%, and 11.4% to 24.1%, respectively. This result implies that biomechanical characteristics can be used to estimate complete three-dimensional gait data with an ANN model and a single IMU.


Introduction
Kinetic walking data (such as the ground reaction force (GRF) and joint torques) can be used as quantitative indicators for diagnosis, rehabilitation, or sports performance [1][2][3][4]. The proportion of mediolateral (ML) components among the loads on the lower limb joints is approximately 30%, and the force or torque in the ML direction is used as an indicator to predict injuries in athletes and for the initial diagnosis of patients [5][6][7]. Therefore, the gait dynamics must be analyzed in three dimensions (3D), including the ML direction. However, gait analysis usually requires a laboratory environment in which the kinetics and kinematics data can be measured with force plates and a motion capture system. For rehabilitation, visiting the laboratory or hospital imposes temporal and spatial constraints, which are a considerable obstacle for patients.
Because the demand for diagnosis and gait analysis in daily life is increasing, the use of wearable monitoring devices that collect gait data is growing [8]. However, these wearable devices (e.g., Galaxy Watch, Apple Watch, and Garmin) usually include a single inertial measurement unit (IMU), and only simple gait information (e.g., the step length, step frequency, and walking speed) can be provided. Accordingly, estimating meaningful dynamic information, such as kinetics data, from a single IMU remains an open problem.
By applying the small-angle approximation, substituting θ⊥ + π/2 for θ (Figure 2B), we obtain the augmented state r = [l, θ⊥, φ, lθ⊥, lφ]T, which consists of the state variables of the 3D SLIP model (l, θ⊥, and φ) and their nonlinear terms (lθ⊥ and lφ), expressed in terms of the CoM position q and a sigmoidal function of time t. Based on the equation of motion of the 3D SLIP model (Equations (1) and (3)), the GRF F = [Fx, Fy, Fz]T and the CoP can then be expressed through the augmented state variable r. Because the GRF and CoP [F, qf]T could be expressed as a weighted sum of the CoM position, a sigmoidal function of time t, and a bias, we decided to use the CoM position as the model input and to attach the IMU sensor to the sacrum, the body part closest to the CoM, to estimate the 3D kinetics data.
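The relations described above can be summarized compactly. The following is a reconstruction from the prose (the original display equations are not reproduced in this excerpt); A, W, c, and b denote the weight matrices and bias vectors implied by the "weighted sum" description, and σ(t) is the sigmoidal function of time:

```latex
\mathbf{r}
= \begin{bmatrix} l & \theta_\perp & \varphi & l\theta_\perp & l\varphi \end{bmatrix}^{T}
\approx A \begin{bmatrix} \mathbf{q} \\ \sigma(t) \end{bmatrix} + \mathbf{c},
\qquad
\begin{bmatrix} \mathbf{F} \\ \mathbf{q}_f \end{bmatrix}
\approx W\,\mathbf{r} + \mathbf{b}.
```

Composing the two maps, [F, q_f]T is approximately linear in the CoM position q and σ(t), which motivates both the choice of input (sacral kinematics) and the shallow network structure used later.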

Experiments and Data Collection
Twenty young and healthy participants (12 females and 8 males, 24.7 ± 3.2 years) volunteered for the data collection, with an average height and mass of 166.7 ± 7.8 cm and 60.1 ± 8.9 kg. Data from three participants were removed due to equipment failure. Before the experiment, the participants signed informed consent forms approved by the KAIST Institutional Review Board on 15 October 2019 (KH2019-121). The participants walked on a customized split-belt treadmill at four different walking speeds (0.7, 1.0, 1.3, and 1.6 m/s) for 2 min at each speed.
The kinematics data of the CoM were measured by an IMU sensor (Trigno™ Avanti, Delsys, Natick, MA, USA) attached to the sacrum at a sampling frequency of 148 Hz. The GRFs and ground reaction moments were measured with two force plates (FP 6012®, Bertec, Columbus, OH, USA) underneath a split-belt treadmill at a sampling frequency of 200 Hz. Moreover, the trajectories of the optical markers attached to the body segments were measured by a motion capture system comprising 10 optical cameras (Hawk® and Osprey®, Motion Analysis Corporation, Rohnert Park, CA, USA) at a sampling frequency of 100 Hz. To use steady-state walking data, the 10 consecutive steps with the smallest deviation in the vertical GRF in each trial were used.
Subsequently, the kinetic and kinematics data were filtered with a fifth-order zero-phase Butterworth low-pass filter at cutoff frequencies of 20 and 10 Hz, respectively; all data were down-sampled to 100 Hz. The CoP trajectories were calculated from the GRFs and ground reaction moments. The 3D joint torques were calculated with the inverse dynamics of a 3D rigid body model, which consisted of six conical frustum-shaped segments for the lower limbs (the thighs, shanks, and feet) and a cylindrical segment for the trunk [26]. The segments were defined by optical markers attached to each lateral and medial joint. Furthermore, the 3D joint angles were determined based on the local frame axes of the segments, and the mass distribution was based on a previous study [27].
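The filtering and down-sampling step above can be sketched as follows; this is a minimal illustration (function name and placeholder data are this sketch's assumptions), not the authors' processing pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def zero_phase_lowpass(data, cutoff_hz, fs, order=5):
    """Fifth-order zero-phase Butterworth low-pass filter.

    filtfilt applies the filter forward and backward, so the result has
    no phase lag (important when aligning kinetics and kinematics).
    """
    b, a = butter(order, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, data, axis=0)

# Example: filter kinetic data sampled at 200 Hz with a 20 Hz cutoff,
# then down-sample to 100 Hz by taking every second sample.
fs_force = 200
grf = np.random.randn(2000, 3)            # stand-in for 10 s of 3D GRF samples
grf_filt = zero_phase_lowpass(grf, 20, fs_force)
grf_100hz = grf_filt[::2]                 # 200 Hz -> 100 Hz
```

Since all signals end up at 100 Hz, the kinematics stream (already 100 Hz) needs only the 10 Hz zero-phase filter.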

Data Process of IMU Measurement
We obtained the input data for the ANN model from the IMU data by defining the gait events, estimating the walking speed, and then calculating the velocity and position of the sacrum. We used the acceleration data in the local frame aligned with the forward direction, assuming that the difference between the acceleration in the local frame and that in the global frame is sufficiently small (Figure 3).
The acceleration data from the IMU sensor had to be segmented to be used for the stance phase. The gait events, the heel strike (HS) and toe-off (TO), were detected based on the sacral acceleration, by referring to the GRF (Figure 4) [17]. The cutoff frequencies of the low-pass filter were set heuristically to identify the acceleration characteristics for each event: 20 and 5 Hz for the HS and TO, respectively. First, to divide the step phase so that the bipedal support phase was in the center, the mid-stance was used as the starting point; it re-occurred after one-third of the duration between the successive local minima of the AP acceleration filtered at 5 Hz (Figure 4B). The HS was the last local minimum before the maximum of the step phase in the vertical acceleration filtered at 20 Hz, and the TO was the minimum of the step phase in the AP acceleration filtered at 5 Hz. When the detected HS came after the TO, the HS was re-identified as the previous local minimum of the originally detected HS until the HS was detected before the TO.
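The event rules above can be sketched as follows. This is a simplified illustration of the detection logic only (the function names, the per-step windowing, and the HS back-tracking rule are omitted or assumed here), not the authors' implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def local_minima(signal):
    """Indices of local minima (peaks of the negated signal)."""
    idx, _ = find_peaks(-signal)
    return idx

def detect_events(acc_vert, acc_ap, fs=148):
    """Sketch of the event rules: HS candidates come from the 20 Hz-filtered
    vertical acceleration, TO from the minimum of the 5 Hz-filtered AP
    acceleration within the (here: whole) step phase."""
    b20, a20 = butter(5, 20 / (fs / 2), btype="low")
    b5, a5 = butter(5, 5 / (fs / 2), btype="low")
    vert20 = filtfilt(b20, a20, acc_vert)
    ap5 = filtfilt(b5, a5, acc_ap)
    hs_candidates = local_minima(vert20)   # HS: last minimum before the vertical max
    to_idx = int(np.argmin(ap5))           # TO: minimum of the filtered AP acceleration
    return hs_candidates, to_idx

# Synthetic two-second example at the IMU's 148 Hz sampling rate.
fs = 148
t = np.arange(0, 2, 1 / fs)
acc_vert = np.sin(2 * np.pi * 2 * t)
acc_ap = np.cos(2 * np.pi * 2 * t)
hs_cand, to_idx = detect_events(acc_vert, acc_ap, fs)
```

In the full method, these rules are applied per step phase (anchored at mid-stance), and an HS detected after the TO is walked back to the previous local minimum.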
Figure 3. Average acceleration data from the IMU during the stride phase for all walking speeds of all 17 participants. The averages and standard deviations of the acceleration in the global frame are represented by thick and thin black solid lines, respectively, and the average and standard deviation of the acceleration in the local frame are represented by a thick white solid line and a gray shaded area, respectively. The normalized root-mean-square errors (NRMSEs) of the acceleration in the local frame are 9.5 ± 3.3% in the x-axis, 8.0 ± 3.5% in the y-axis, and 2.3 ± 0.7% in the z-axis.
The next step was to calculate the velocity and displacement of the sacrum by integrating the acceleration data measured by the IMU sensor. The drift error that occurred during integration had to be removed, and the integral constant had to be determined. Both problems could be solved by assuming steady-state walking for each stride. Because the average velocity during a stride phase was then zero, drift due to random errors could be removed linearly as follows:

v̂(t) = v(t) − (t/T) v(T),

where v̂, v, and T are the drift-removed velocity, the velocity obtained by integrating the acceleration over time t during the stride phase, and the duration of the stride, respectively. Because the walking was steady-state, the integral constants were set such that the average velocities in the vertical and ML directions were zero.
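One way to realize the linear drift removal and the integral-constant choice described above is sketched below; the function name, the rectangular integration, and the exact detrending form (subtracting a ramp so the stride ends where it starts, then removing the mean) are this sketch's assumptions:

```python
import numpy as np

def integrate_with_drift_removal(acc, fs):
    """Integrate one stride of acceleration and remove integration drift.

    The linear ramp (t/T) * v(T) is subtracted so the drift-removed
    velocity returns to its starting value after one stride, and the mean
    is removed so the average velocity over the stride is zero
    (steady-state assumption)."""
    t = np.arange(len(acc)) / fs
    T = t[-1]
    v = np.cumsum(acc) / fs              # simple rectangular integration
    v_hat = v - (t / T) * v[-1]          # linear drift removal
    v_hat -= v_hat.mean()                # integral constant: zero-mean velocity
    return t, v_hat

# One stride of noisy acceleration at the IMU's 148 Hz sampling rate.
rng = np.random.default_rng(1)
acc = rng.standard_normal(148)
t, v_hat = integrate_with_drift_removal(acc, 148)
```

The same two-step treatment (integrate, remove a linear trend) is applied again when integrating velocity to displacement.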
However, because the average velocity in the anteroposterior (AP) direction equals the walking speed, the walking speed had to be estimated. Using the walking frequency and the average acceleration amplitude, which are correlated with the walking speed, the walking speed v0 was estimated as follows:

v0 = a1 f + a2 A + a3,

where f and A are the walking frequency and the average magnitude of the acceleration, respectively, and a1, a2, and a3 are coefficient constants. The drift that occurred when calculating the displacement by integrating the velocity was also eliminated linearly by considering the average displacement over the stride phase. Although the average displacement in the vertical and ML directions was zero, the average displacement in the AP direction was determined by the walking speed. In contrast to the other directions, the data in the ML direction had opposite signs depending on the leg side, so they were unified to one side. The leg side in the stance phase was determined from the sign of the displacement at the 50% point of the stance phase. All kinematics data were processed in MATLAB 2017a.
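Fitting the coefficients a1, a2, and a3 is an ordinary linear least-squares problem. The sketch below uses synthetic data generated from an assumed ground-truth relation so the fit can be checked; in practice f, A, and v0 would come from the IMU data and the treadmill speeds:

```python
import numpy as np

# Synthetic example (values are illustrative, not the study's data):
f = np.array([0.70, 0.85, 1.00, 1.15])    # stride frequency (Hz)
A = np.array([1.2, 1.8, 2.5, 3.1])        # mean acceleration magnitude (m/s^2)
v0 = 0.5 * f + 0.3 * A + 0.1              # assumed ground-truth relation

# Fit v0 = a1*f + a2*A + a3 by linear least squares.
X = np.column_stack([f, A, np.ones_like(f)])
a1, a2, a3 = np.linalg.lstsq(X, v0, rcond=None)[0]
```

Once fitted, the model gives the per-trial walking speed needed to set the AP integral constant.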

Structure of ANN and Its Training and Test Procedures
In this study, the input and structure of the ANN model were designed based on the biomechanical relationship between the GRF, CoP, and CoM. Because the GRF and CoP can be approximated from the CoM position, the input to the ANN model was set to the measured data of the IMU attached to the sacrum; in addition, the ANN model consisted of one hidden layer with a sigmoidal activation function [25]. Two ANN models were built: one for predicting the GRF and CoP and one for predicting the joint torques. Each model consisted of one input layer, one hidden layer, and one output layer. For both models, a 10 × 1 column vector was fed to the 10 nodes of the input layer, consisting of the time t and the corresponding 3D kinematics of the sacrum, which acted as a proxy for the CoM: (t, xm, ym, zm, vx, vy, vz, ax, ay, az), where xm, ym, and zm are the ML, AP, and vertical positions, and v and a are the velocity and acceleration of the sacrum at time t, respectively. The hidden layer had 20 nodes with a sigmoidal activation function. The two models had five and seven output nodes (5 × 1 and 7 × 1 column vectors), corresponding to the 3D walking kinetics (GRFML, GRFAP, GRFvertical, CoPML, CoPAP) and (TAb/Ad,hip, TE/F,hip, TER/IR,hip, TAb/Ad,knee, TF/E,knee, TER/IR,knee, TPF/DF,ankle), respectively; T is the joint torque, the subscripts Ab, Ad, E, F, ER, IR, PF, and DF denote abduction, adduction, extension, flexion, external rotation, internal rotation, plantar flexion, and dorsiflexion, respectively, and the joint to which the torque is applied is indicated. The output layers used linear activation functions.
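The 10-20-5 architecture of the GRF/CoP model can be written as a single forward pass. The sketch below uses random weights purely to show the shapes and activations (the trained weights come from the Levenberg-Marquardt procedure described later); it is not the authors' trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from the paper: 10 inputs (t plus 3D position, velocity,
# and acceleration of the sacrum), 20 sigmoidal hidden nodes, and
# 5 linear outputs for the GRF/CoP model (7 for the joint-torque model).
W1, b1 = rng.standard_normal((20, 10)), rng.standard_normal(20)
W2, b2 = rng.standard_normal((5, 20)), rng.standard_normal(5)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    """One forward pass: sigmoidal hidden layer, linear output layer."""
    h = sigmoid(W1 @ x + b1)
    return W2 @ h + b2

x = rng.standard_normal(10)   # one (t, xm, ym, zm, vx, vy, vz, ax, ay, az) sample
y = forward(x)                # 5-element estimate (GRF_ML, GRF_AP, GRF_vert, CoP_ML, CoP_AP)
```

Swapping the output layer for a 7 × 20 weight matrix gives the joint-torque model with the same hidden layer.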
All input data except the time were normalized by their amplitudes to the range 0 to 1. The output data, namely the GRF, CoP, and joint torques, were normalized by the body weight, foot length, and body mass, respectively. All input and output data for a stance phase were interpolated linearly to 201 points to match the dataset length across all walking speeds and participants.
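The resampling and amplitude normalization above amount to two small helpers; the function names are this sketch's, and the 201-point grid corresponds to 0-100% of stance in 0.5% steps:

```python
import numpy as np

def to_stance_grid(signal, n_points=201):
    """Linearly resample one stance-phase signal to a fixed 201-point grid."""
    src = np.linspace(0.0, 1.0, len(signal))
    dst = np.linspace(0.0, 1.0, n_points)
    return np.interp(dst, src, signal)

def normalize_amplitude(signal):
    """Scale a signal into [0, 1] by its own amplitude (min-max)."""
    lo, hi = signal.min(), signal.max()
    return (signal - lo) / (hi - lo)

# Example: a stance phase of arbitrary length mapped onto the common grid.
stance = np.sin(np.linspace(0, np.pi, 87))
resampled = to_stance_grid(stance)
scaled = normalize_amplitude(resampled)
```

Output normalization differs per quantity (body weight for GRF, foot length for CoP, body mass for torques) but follows the same pattern of dividing by a subject-specific constant.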
To validate the proposed estimation model, the datasets of all 17 participants were used to train and test the ANN with the leave-one-out (LOO) validation method. The ANN model for each of the 17 participants was trained with the training set from 13 of the other participants, and the datasets from the remaining three participants were used for early stopping (validation). The mean squared error (MSE) between the predicted and measured values was used as the loss function, with Levenberg-Marquardt optimization for the backpropagation. When the MSE for the validation set failed to decrease for 20 epochs, the training was stopped, and the parameters with the lowest validation MSE were chosen for the final models. To evaluate the estimation accuracy of the final models, all 40 stance phases from each participant were used, and the normalized root-mean-square error (NRMSE) of the predicted values, normalized to the amplitude of the output for each stance phase, was computed. The training of the neural network was conducted with the "train" function of MATLAB 2017a.
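The amplitude-normalized NRMSE used for evaluation can be sketched as follows (the function name is this sketch's assumption; the normalization by max − min of the measured signal follows the description above):

```python
import numpy as np

def nrmse_percent(measured, predicted):
    """Root-mean-square error normalized by the amplitude (max - min)
    of the measured stance-phase signal, reported in percent."""
    rmse = np.sqrt(np.mean((measured - predicted) ** 2))
    return 100.0 * rmse / (measured.max() - measured.min())

# Example: a constant 0.1 offset on a signal of amplitude 1.0 gives 10% NRMSE.
measured = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
predicted = measured + 0.1
err = nrmse_percent(measured, predicted)   # -> 10.0
```

Because each stance phase is normalized by its own amplitude, the metric is comparable across walking speeds and output quantities.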

Results
The 3D kinetics data during walking were estimated using the ANN model with a single IMU attached to the sacrum, treated as a proxy for the CoM. The predicted kinetics data were the vertical, AP, and ML GRFs; the AP and ML CoP; the 3D joint torques at the hip and knee; and the ankle flexion torque.
We obtained walking data for the stance phase by estimating the gait events (the HS and TO) from the acceleration of the IMU sensor and comparing them to the gait events defined by the GRF. The mean absolute errors (MAEs) of the HS and TO were 50 ± 41 and 13 ± 11 ms, respectively, for a total of 680 stance phases from the 17 participants (Table 1). With the proposed method, the estimation accuracy for the TO was higher than that for the HS at all walking speeds. The estimation accuracy of the HS did not correlate with speed (r = 0.045, p = 0.24), whereas the accuracy of the TO increased with increasing speed (r = −0.29, p < 0.001). The MAE of the walking speeds estimated from all trials using the estimated gait events was 0.11 ± 0.11 m/s. We obtained the displacement and velocity of the sacrum for the stance phase by integrating the acceleration from the IMU sensor (Figure 5). The estimated position and velocity of the sacrum had NRMSEs of 20% and 21%, respectively, compared with the kinematics data of an optical marker attached to the sacrum (Table 2).

The estimation of the 3D kinetics from the ANN agreed closely with the experimental data of all 17 participants (Figures 6 and 7). The NRMSEs of the GRF, CoP, and joint torques were approximately 10%, 14%, and 17%, respectively (Table 3). The coefficients of determination (R2) of the GRF, CoP, and joint torques were approximately 0.81, 0.44, and 0.46, respectively (Table 4). The GRF and joint torques estimated by the ANN showed the same trends across all walking speeds as the GRF measured with the force plate and the joint torques calculated with inverse dynamics. Except for the vertical GRF, all variables had the largest NRMSE at the slowest walking speed. The NRMSE was higher for the joint torques than for the GRF and CoP (p < 0.001 for both), the latter two being the quantities that can be approximated as a weighted sum of the CoM kinematics.

Discussion
By designing the ANN model based on the 3D spring mechanics of walking, the 3D kinetics data during walking could be estimated with a single IMU sensor. The GRF and CoP were approximated by formulating them as functions of the CoM. This functional relationship between the CoM, GRF, and CoP enabled the use of an ANN with a single hidden layer for the estimation [25]. Based on this approximation, the ANN model, in which the sacrum kinematics were fed to the input nodes, could predict a total of 12 walking kinetics outputs at various gait speeds with a single IMU sensor. The estimated GRF, CoP, and joint torques agreed closely with the measured kinetics data (Figures 6 and 7). In previous studies, only the 2D GRF and joint dynamics were estimated, using the 2D sacrum acceleration based on the 2D spring walking model [17]. By adopting a 3D spring walking model that represents the 3D GRF and CoP [24], the 3D kinetics data could be predicted from the 3D CoM kinematics. Furthermore, because the training datasets came from multiple participants, the applicability of the method across participants, despite their considerable variability, was confirmed.
The performance of the ANN model proposed in this study reflects the walking reproducibility of the 3D SLIP model. Although the 3D SLIP model closely represents the GRF, its reproducibility of the GRF in the ML direction is lower than in the AP and vertical directions [24]. This may explain why the estimation accuracy in the ML direction is lower than those in the other directions. This low accuracy in the ML direction is also found in other existing 3D GRF estimation studies [9,16,28]. The high variability of the GRF in the ML direction compared to the other directions [29] may also contribute to the low estimation accuracy. Because the ML CoP is much smaller than the AP CoP, the CoP moves only in the AP direction in the 3D SLIP model. The limited reproducibility of the model for the ML CoP is reflected in its estimation performance; thus, the error is larger than those of the GRF or AP CoP, and the SD of the estimation result is smaller than the SD of the experimental data. In addition, just as the GRF peak value of the 3D SLIP model did not reach the peak of the experimental data, the estimated GRF peak error was approximately 5% to 22% in this study, which exceeded the average error over the entire phase. The 3D joint torques had a large error compared to the GRF and CoP. Because the 3D joint kinetics have no mechanical relationship to the CoM, the joint torques have a large estimation error compared to the GRF and CoP. Similar to the GRF peak, the joint torque peak errors were 16% to 27% (hip), 27% to 43% (knee), and 15% (ankle); thus, they were much greater than the average error over the entire phase. In the 2D case, the lower limb kinematics, lower limb kinetics, and GRF can be estimated based on the SLIP model with an ankle joint, which reveals the dependency between the CoM and the lower limb kinematics [17].
However, because the dependency of the CoM and lower limb kinematics is limited to the 2D case, the 3D kinematics could not be estimated with the ANN model and biomechanical characteristics of the CoM. In addition, by assuming a point mass, the rotation of the CoM and angular velocity of the IMU sensor were not considered despite the relationship between the joint torque and rotation. Insufficient input data for the estimation of the joint torque variability can decrease the estimated SD with respect to that of the experimental data. Therefore, using the angular velocity in the gyro as an additional input could lead to a similar SD with higher accuracy.
Considering the biomechanical characteristics when estimating the GRF and CoP can drastically reduce the necessary number of sensors. Several researchers have estimated the 3D GRFs and CoP with kinematics data but without considering the biomechanical characteristics of walking [9,16,28]. The 3D GRF and CoP could be predicted with the 3D accelerations and angular velocities from 17 IMU sensors attached across the entire body. Despite the use of more sensors, the prediction results for the GRFs were similar to (for the AP and vertical directions) or only slightly more accurate (for the ML direction) than those of the proposed method; in addition, the prediction errors of the CoP were 1.5-2 times larger than those of the proposed method. These results imply a high correlation between the CoM kinematics and the GRF and CoP. With machine learning, the 3D GRFs can also be predicted from walking kinematics data without any biomechanical characteristics. The 3D GRF and joint torques were estimated with a general regression neural network and kinematics data recorded during walking. However, the motion data of the entire body were measured with optical markers in a laboratory environment; thus, their use in daily life is limited. The predictions in the AP and vertical directions exhibited smaller errors (approximately 1% to 2% lower), but the ML direction exhibited errors more than 4% larger than the results of the proposed method. By using a feed-forward neural network with only two IMU sensors attached to both shanks [10], the accuracy of the GRF estimation was improved, which increases its potential for field applications. However, no LOO validation was used to validate that model; hence, the error may increase when the test data do not include data from the training participants. Furthermore, the requirement of multiple sensors remains.
In this study, a single IMU sufficed to estimate the 3D GRFs and CoP because biomechanical domain knowledge was used to design the ANN model.
Previously, the 3D lower limb joint torques were predicted with machine learning and gait data that had to be measured in the laboratory [13,14]. Ardestani et al. estimated 3D joint torques with a wavelet neural network based on surface electromyography data and GRFs; the NRMSEs were 5% or less for all joint torques except hip flexion (6.4%) [13]. In another study, Mundt et al. used a long short-term memory network with the 3D joint angles of the lower limbs, measured by a motion capture system, to estimate the 3D joint torques [14]. Compared with the results of this study, only the NRMSE of the hip flexion had a larger error (18%); the NRMSEs of the other torques were smaller (6 to 15%). However, in that study, errors were reported for only one participant; hence, the average NRMSE across all joint torques and participants may be larger. By using biomechanical domain knowledge in the ANN design, the 3D joint torques can be estimated with a wearable device.
Estimating kinetics data with machine learning and biomechanical domain knowledge may resolve the tradeoff between the wearing convenience and data usability of wearable devices. The rehabilitation stage of a stroke patient can be assessed based on the propulsion of the paretic leg [30,31]. However, the patient must visit the motion analysis laboratory for the measurement. If a patient could obtain motion analysis data of comparable accuracy by using a wearable device without visiting the motion analysis laboratory, efficient rehabilitation diagnosis in daily life would be possible. Regarding the asymmetric walking of mildly hemiplegic patients, the difference in the AP GRF was approximately 50 N [30], and the minimal detectable changes were 2.9% and 4.7% body weight (BW) for the AP and vertical GRF, respectively, when stroke patients walk on a treadmill [32]. The proposed estimation method provided reliable estimations with average errors of 7.4%, 3.1%, and 2.2% (6.8%, 2.9%, and 2.1% at 0.7 m/s) BW for the vertical, AP, and ML GRFs, respectively; nevertheless, the error in the vertical direction must still be improved. The difference between the left and right knee adduction moments of osteoarthritis patients was 0.5% BW multiplied by height [33]. In this study, the knee adduction moment was estimated with an error of 0.06% BW multiplied by height. Considering the changes in the GRF and joint torque in stroke and arthritis patients, the proposed estimation method provides reliable prediction and diagnosis results. However, because the reported errors are averages across the stance phase, the error can be larger or smaller in the clinically relevant parts of the phase, such as around the peak or minimum values. Furthermore, using the method in rehabilitation and in the diagnosis of sports injuries requires further evaluation.
In a previous study, the 3D GRF in running and sidestep motions was estimated with a relative RMSE of 20% to 30% using a convolutional neural network (CNN) and data from an IMU attached to the sacrum [34]. Changing the footstrike type during running alters the kinetics by approximately 20% BW in the GRF, 0.5 Nm/kg in the knee adduction moment, and 0.6 Nm/kg in the ankle plantar flexor moment [35]. Moreover, the risk of injury increases when the braking force exceeds 30% BW [36]. Given these margins, the proposed estimation method may also be applicable to running.
Our study has several limitations in the measurement and processing of the IMU data. In this study, data from walking on a treadmill were used to estimate the gait kinetics. To apply the proposed method in daily life, it is essential to identify meaningful walking phases. Recently, a study was reported in which repetitive walking phases were classified using IMU and GPS data that recorded movements in daily life for ten days [37]. After the gait phase is determined, the gait event detection step must follow. If the gait events are estimated with higher accuracy, the CoM displacement and velocity can be calculated more accurately. Thus, the input to the ANN model can be provided more accurately, which increases the model's accuracy (data not shown). In the previous study on 2D walking data prediction, the MAE of the HS estimation was 25 ms, which was smaller than ours [17]. Whereas the local minima used to estimate the HS were reliably found for barefoot walking, they were sometimes absent in shod walking, as in this study, which lowered the HS estimation accuracy. Wearing shoes reduces the impact of the GRF transmitted to the waist [38]. For a similar reason, the accuracy of event detection is lower when using an IMU attached to a proximal segment far from the foot [39]. Therefore, machine learning methods that can estimate the gait events with high accuracy, even from the small signals at the waist, could improve the accuracy of the estimated 3D kinetics data. Because the sacrum was used instead of the CoM, whose relationship to the GRF follows the spring dynamics, the input data to the ANN model differ from the velocity and position of the CoM [40,41]. The displacement magnitude of the sacrum in the vertical direction is larger than that of the CoM, and this difference increases as the walking speed increases [40]. Nevertheless, because the input is normalized to its amplitude, the effect of the amplitude difference in velocity and position is sufficiently small.
The horizontal velocity of the CoM lags behind the sacrum; however, the difference decreases as walking speed increases [41]. The detected HSs were estimated to be ahead of the HSs defined by the force plates, in which case the error of the estimated velocity could be reduced. Nevertheless, the error of the velocity and displacement in other directions should be minimized through accurate detection.
The characteristics of the CoM dynamics were used as biomechanical knowledge to estimate unmeasured gait data using an ANN with only one IMU. By attaching the IMU to the sacrum, the GRF, CoP, and lower limb kinetics during walking at various speeds could be estimated, attributable to the biomechanical property that the GRF and CoP can be expressed as functions of the CoM. The resulting reliable estimation accuracy suggests that combining biomechanical characteristics with machine learning can produce adequate results.