Article

Bi-Directional Long Short-Term Memory-Based Gait Phase Recognition Method Robust to Directional Variations in Subject’s Gait Progression Using Wearable Inertial Sensor

Mechanical Engineering Department, Soongsil University, Seoul 06978, Republic of Korea
*
Author to whom correspondence should be addressed.
Sensors 2024, 24(4), 1276; https://doi.org/10.3390/s24041276
Submission received: 15 January 2024 / Revised: 30 January 2024 / Accepted: 15 February 2024 / Published: 17 February 2024
(This article belongs to the Special Issue Applications of Body Worn Sensors and Wearables)

Abstract

Inertial Measurement Unit (IMU) sensor-based gait phase recognition is widely used in medical and biomechanics fields requiring gait data analysis. However, it has several limitations owing to the low reproducibility of IMU sensor attachment and the expression of sensor outputs relative to a fixed reference frame; the prediction algorithm may malfunction when the user changes walking direction. In this paper, we propose a gait phase recognition method robust to user body movements based on a floating body-fixed frame (FBF) and bi-directional long short-term memory (bi-LSTM). Data from four IMU sensors attached to the shanks and feet of both legs of three subjects, collected via the FBF method, are processed through preprocessing and the sliding window label overlapping method before being input into the bi-LSTM for training. To improve the model's recognition accuracy, we selected parameters that influence both training and test accuracy and conducted a sensitivity analysis using level average analysis of the Taguchi method to identify the optimal parameter combination. The model, trained with the optimal parameters, was validated on a new subject, achieving a high test accuracy of 86.43%.

1. Introduction

In human locomotion, walking constitutes a reflexive biomechanical process characterized by repetitive phases, encompassing various physiological and mechanical factors, including muscular activity, neurological control, cutaneous sensation, and kinematic dynamics like force exertion and velocity [1]. Gait data, reflecting distinct physical and mechanical attributes within each phase of the movement, are utilized extensively not only in rehabilitation [2,3], healthcare [4], and medical care [5,6] but also in providing daily assistance [7,8] and support in physical training [9].
In analyzing gait data, the accurate identification and prediction of distinct gait phases [10] are crucial for reliable interpretation and subsequent applications. Commonly, the detection of gait phases employs a variety of sensors, such as wearable Inertial Measurement Unit (IMU) sensors, vision-based sensors, surface Electromyography (sEMG) sensors, and Force-Sensitive Resistor (FSR) sensors. Although motion capture systems, a type of vision sensor, offer high accuracy in gait phase detection, they are susceptible to limitations such as light sensitivity, occlusion, and restricted workspaces [11]. While sEMG sensors can effectively collect muscle activation signals for enhanced intent recognition in gait analysis, they are limited by individual variances in body fat and physical condition and require extensive signal processing to address noise-related challenges [12]. FSR sensors offer higher recognition accuracy than the other sensors; however, each sensor must be positioned accurately on the foot to ensure this accuracy, and their durability is low because they are exposed to strong impacts [13,14]. Therefore, an IMU sensor, known for its wearability and reliability, is predominantly used either independently or in conjunction with an FSR as an auxiliary device to augment its capabilities [15]. Zhen et al. [16] achieved a 91.8% accuracy rate in gait phase recognition using a combination of three IMU sensors and the LSTM-DNN algorithm. However, because IMU sensors always represent data relative to a global frame, maintaining a consistent initial posture and orientation is essential to ensure uniform data acquisition. In that study, gait data acquisition and validation were conducted on a treadmill to ensure uniformity and enhance recognition accuracy, and the gait cycle was categorized into only two distinct phases: stance and swing. Vu et al. [14] utilized an RNN with one IMU and two FSR sensors to recognize gait phases, achieving consistent gait phase prediction with an average estimation error of only 2.1 ± 0.1%. Similar to the previous study, this research also performed data collection and validation on a treadmill for consistent data values. Moreover, a variability in error deviation among different subjects was noted, attributed to the limitations of the RNN model.
In studies that integrate IMU and FSR sensors for gait phase recognition, challenges include the IMU sensors’ low reproducibility in attachment and decreased accuracy due to movement [17]. Moreover, the difficulty distinguishing between stance phases, due to minimal variation in sensor data, often leads to poor recognition accuracy. This also results in a generalized classification of gait phases into broad categories like swing and stance, without further subdivision into sub-phases [18,19].
In previous studies on gait phase recognition using IMU sensors or a combination of IMU and FSR sensors, certain limitations commonly arose, which are detailed below:
  • Recognition accuracy was reduced in subsequent experiments due to the low reproducibility of sensor attachment when using IMU sensors.
  • The nature of IMU sensors’ dependency on a global reference frame necessitates maintaining a fixed initial posture and orientation for consistent data gathering.
  • Labeling and differentiating the subtle sub-phases of the stance phase becomes challenging due to minimal variability in IMU sensor outputs.
In this study, to address the identified limitations, the floating body-fixed frame (FBF) concept from our previous research was adopted [20]. This approach ensures consistent IMU sensor data collection, irrespective of attachment variability or changes in the wearer's posture. Additionally, the high phase recognition accuracy of FSR sensors is utilized for more precise gait phase division and labeling, aiding in the differentiation of subtle sub-phases within the stance phase. To enhance the gait phase recognition model's performance, this study focused on identifying and analyzing the most impactful parameters and hyperparameters. Through a thorough selection process and sensitivity analysis, the optimal parameter set was determined, ultimately leading to higher test accuracy in the model.

2. Materials and Methods

2.1. Gait Phase Division and Labeling Method

Generally, in biomechanics and ergonomics, the gait phase is divided into the swing phase (SW) and the stance phase, based on foot–ground contact, as shown in Figure 1. The stance phase is further subdivided into heel strike (HS), full contact (FC), heel off (HO), and toe off (TO) [21]. However, it is observed that as a person’s walking speed increases, the relative duration of the heel-off (HO) and toe-off (TO) phases within the entire gait cycle decreases [22]. In this study, based on previous research that demonstrates the toe-off (TO) phase constitutes only a minor portion (1.59%) of the general gait cycle, TO was merged with the heel-off (HO) phase. Consequently, the gait cycle was divided into four distinct phases: swing (SW, label 5), heel strike (HS, label 4), full contact (FC, label 3), and heel off (HO, label 2) [23].
To predict the four selected gait phases using IMU and FSR sensors, a measurement unit is necessary that can label the IMU sensor output data within the same measurement cycle. Figure 2 displays a wireless insole unit equipped with FSRs, designed to label gait phases concurrently as IMU sensor data are collected. To collect gait data, IMU sensors are attached with straps at specific body locations: one on the waist, one on the top of each foot, and one on each shank. These sensors capture lower-extremity gait data at a sampling rate of 100 Hz. For the FSR insole unit, which labels the IMU sensor data, an FSR sensor was placed in each of three key foot areas: the distal phalangeal tuberosity of the first toe, the metatarsophalangeal joint, and the calcaneus. This configuration is based on the patterns of foot pressure distribution during the various phases of contact with the ground. The gait phase corresponding to each combination of FSR sensor inputs is shown in Figure 1. A single-board computer (SBC) with a Bluetooth module, developed to be strap-attached to the ankle, wirelessly transmits the data acquisition (DAQ) output and gait phase recognition results from the 3-channel FSR array. Consequently, by utilizing the wireless FSR insole unit and IMU sensors developed in this study, we were able to gather the subjects' gait data and label them effectively without any workspace limitations.
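The mapping from the 3-channel FSR readings to a phase label can be sketched as below. The labels (SW = 5, HS = 4, FC = 3, HO = 2) follow Section 2.1; the specific on/off combinations are assumptions for illustration, not the paper's published truth table.

```python
# Hypothetical sketch: map binary contact states of the three FSRs (toe,
# metatarsophalangeal joint, heel) to the four phase labels used here.
# The exact combination-to-phase rules below are assumed, not from the paper.

def label_gait_phase(toe: bool, meta: bool, heel: bool) -> int:
    """Return a gait phase label from binary FSR contact states."""
    if not (toe or meta or heel):
        return 5            # swing: no foot-ground contact
    if heel and not (toe or meta):
        return 4            # heel strike: heel contact only
    if heel:
        return 3            # full contact: heel plus forefoot contact
    return 2                # heel off: forefoot contact, heel lifted
```

Labeling windows with a rule of this shape lets the FSR insole annotate each IMU sample in the same measurement cycle.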

2.2. Sensor Calibration and Floating Body-Fixed Frame

The primary challenge in wearable IMU sensor-based gait phase recognition research is maintaining a consistent sensor orientation: the attachment reproducibility of the sensor is low, and because the sensor-fixed frame is expressed relative to a global reference frame, any deviation from the initial calibration pose distorts the orientation. As shown in Figure 3 (left), even if the subject performs the same action while changing pose, the same pattern of data is not always collected, and the features that affect phase recognition change, resulting in low overall recognition accuracy. Therefore, this study solves the aforementioned problem by creating the floating body-fixed frame {FB_f}, as proposed in our previous research [20].
Figure 4 illustrates the process of generating the {FB_f} through a calibration gesture, followed by aligning the orientations of all IMU sensors to match the orientation of the newly created {FB_f}. As in Algorithm 1, the frontal horizontal axis can be determined through a stand–stoop motion, and the sagittal horizontal axis can be ascertained by computing its cross product with the vertical axis, which remains parallel to the z-axis. In essence, a new reference frame, {B_f}, is established, aligned with the human body's anatomical plane. To ensure that the wearable IMU sensors, which may have varied orientations due to attachment variability, align with the orientation of the generated {B_f}, Algorithm 2 is employed. This algorithm calculates a rotation matrix that represents the orientation difference between the IMU sensor in the initial stand position and the {B_f} orientation, and then adjusts the IMU sensors to match the pose of {B_f}. At this stage, the reference frame is modified such that the data output from the wearable IMU sensor, initially expressed in the global reference frame, is now represented in the newly established {B_f}. However, as {B_f} is a fixed constant within the global reference frame, it requires compensation to maintain alignment with the anatomical plane of the human body during movement. Therefore, as outlined in Algorithm 3, the orientation changes of the wearable IMU sensor attached to the waist are incorporated into {B_f}, ensuring that the {FB_f} consistently aligns with the human body's anatomical plane, regardless of the person's movement. In other words, the IMU sensor values are consistently represented relative to the new reference frame {FB_f}, which is aligned with the human body's anatomical plane. This alignment ensures that the IMU sensor data remain consistent even when the user changes posture or moves, as illustrated on the right of Figure 3.
The formation of and detailed information regarding the {FB_f} are available in our previous research [20].
Algorithm 1. Create a body-fixed frame
1:  procedure(sensor data: R_{Sf,j}^G, a_{Sf,j}^G, ω_{Sf,j}^G)
2:    while about 5 s
3:      maintain standing posture
4:      save orientation data R_{Sf,j}^G
5:    end
6:    update R_{f,j.stand}^G ← avg(saved orientation data)
7:    while about 5 s
8:      maintain stooping posture
9:      save orientation data R_{Sf,j}^G
10:   end
11:   update R_{f,j.stoop}^G ← avg(saved orientation data)
12:   calculate vector k ← (R_{f,j.stoop}^G)^T R_{f,j.stand}^G
13:   calculate vector x ← k × [0 0 1]^T
14:   return body-fixed frame {x_{Bf}, k_{Bf}, z_{Bf}}
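Algorithm 1 can be sketched in numpy as follows. Two steps are our assumptions about the intended computation: the averaged orientations are re-projected onto SO(3) via SVD, and "vector k" is read as the rotation axis of the relative stand-to-stoop rotation, extracted from its skew-symmetric part.

```python
import numpy as np

# Sketch of Algorithm 1: average the stand and stoop orientations, take the
# rotation axis of the relative stand-to-stoop rotation as the frontal
# horizontal axis k, and obtain the sagittal axis x as k x global vertical.
# The SVD re-projection and the axis extraction are assumed details.

def average_rotation(rotations):
    """Chordal mean of rotation matrices, re-projected onto SO(3) via SVD."""
    u, _, vt = np.linalg.svd(np.mean(rotations, axis=0))
    return u @ vt

def create_body_fixed_frame(stand_rots, stoop_rots):
    r_stand = average_rotation(stand_rots)
    r_stoop = average_rotation(stoop_rots)
    r_rel = r_stoop.T @ r_stand                    # relative stand->stoop rotation
    # Rotation axis from the skew-symmetric part (our reading of "vector k").
    k = np.array([r_rel[2, 1] - r_rel[1, 2],
                  r_rel[0, 2] - r_rel[2, 0],
                  r_rel[1, 0] - r_rel[0, 1]])
    k /= np.linalg.norm(k)
    z = np.array([0.0, 0.0, 1.0])                  # vertical axis: global z
    x = np.cross(k, z)                             # sagittal horizontal axis
    x /= np.linalg.norm(x)
    return np.column_stack([x, k, z])              # columns of {B_f}
```

With identity stand orientations and a stoop rotated about the mediolateral axis, the returned frame is orthonormal and its second column is that axis.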
Algorithm 2. Align all sensor-fixed frames equally
1:  procedure(sensor data, body-fixed frame, orientation of the initial posture)
2:    calculate rotated frame mapping
        R_{C,j}^{Sf,j.stand} = (R_{Sf,j.stand}^G)^T R_{Bf}^G
3:    calculate sensor orientation with respect to {B_f}
        R_{C,j}^{Bf} = (R_{Bf}^G)^T R_{Sf,j}^G R_{C,j}^{Sf,j.stand}
4:    calculate sensor acceleration with respect to {B_f}
        a_{Sf,shank/foot}^{Bf} = (R_{Bf}^G)^T R_{Sf,shank/foot}^G a_{Sf,shank/foot}^G
5:    calculate sensor rate of turn with respect to {B_f}
        ω_{Sf,shank/foot}^{Bf} = (R_{Bf}^G)^T R_{Sf,shank/foot}^G ω_{Sf,shank/foot}^G
6:    return sensor data with respect to {B_f}
Algorithm 3. Updating the body-fixed frame according to changes in the subject's time-varying body alignment
1:  procedure(sensor data, orientation of the initial posture)
2:    calculate floating body-fixed frame
        R_{FBf}^G = R_{Sf,waist}^G R_{Sf,waist.stand}^G
3:    calculate sensor orientation with respect to {FB_f}
        R_{C,j}^{FBf} = (R_{FBf}^G)^T R_{Sf,j}^G R_{C,j}^{Sf,j.stand}
4:    calculate sensor acceleration with respect to {FB_f}
        a_{Sf,shank/foot}^{FBf} = (R_{FBf}^G)^T R_{Sf,shank/foot}^G a_{Sf,shank/foot}^G
5:    calculate sensor rate of turn with respect to {FB_f}
        ω_{Sf,shank/foot}^{FBf} = (R_{FBf}^G)^T R_{Sf,shank/foot}^G ω_{Sf,shank/foot}^G
6:    return sensor data with respect to {FB_f}
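The update and re-expression steps can be sketched as below. Composing the waist orientation change as R_waist · R_waist,stand^T before rotating {B_f} is our assumption about the update in Algorithm 3; the extracted pseudocode does not show a transpose explicitly.

```python
import numpy as np

# Hedged sketch of Algorithms 2-3: the floating body-fixed frame {FB_f}
# follows the waist IMU, and global-frame sensor vectors are re-expressed
# in {FB_f}. The delta composition below is an assumed reading of the update.

def update_fbf(r_g_waist, r_g_waist_stand, r_g_bf):
    """Rotate {B_f} by the waist orientation change to obtain {FB_f}."""
    delta = r_g_waist @ r_g_waist_stand.T    # waist orientation change since stand
    return delta @ r_g_bf

def express_in_fbf(r_g_fbf, vec_g):
    """Re-express a global-frame vector (acceleration or rate of turn) in {FB_f}."""
    return r_g_fbf.T @ vec_g
```

If the waist has not moved since calibration, {FB_f} coincides with {B_f}; once the subject turns, the same limb motion produces the same {FB_f}-frame data.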

2.3. Data Acquisition and Preprocessing

2.3.1. Data Acquisition

In this study, to collect labeled walking data, four subjects with diverse physical conditions (as detailed in Table 1) had markers attached to their heads for measuring walking trajectories. They walked at speeds ranging from 0.24 to 1.37 m/s, freely changing walking direction, on a 4 m × 4 m flat surface where a Prime13 motion capture camera (OptiTrack, Corvallis, OR, USA) was installed, as depicted in Figure 5. We collected approximately 29,000 data points comprising orientation, acceleration, and angular rate. Approximately 24,000 data points from three of the four subjects (subjects 1, 2, and 3) served as the training dataset for developing the prediction models, while around 5000 data points from the remaining subject (subject 4) were used as a test dataset to validate the learned model.

2.3.2. Data Augmentation

A large amount of data is required to improve the performance of recognition models and prevent overfitting. Because it is difficult to achieve high accuracy and generalization performance with the initially collected 24,000 training data points, data augmentation was performed. Augmentation of the gait data collected from IMU sensors, which are inherently time series, must preserve their temporal dependency. Among the various methods available for augmenting time-series data, techniques applied in the time domain are considered the most practical and widely used [24]. Of these, noise injection with Gaussian noise is adopted in this study because such noise resembles the sensor noise that arises when collecting IMU data. As described in Equation (1), data augmentation is executed by adding Gaussian noise W_N to each feature of the labeled IMU gait data S_N. This Gaussian noise has a magnitude equivalent to 10% of ±(max(|feature data|) − |mean|), ensuring the addition aligns with the distribution characteristics of the original sensor data. Through this process, approximately 24,000 training data points were augmented to a total of 359,924, an increase of about 15-fold.
S_N + W_N = N_n    (1)
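A minimal sketch of this noise-injection step, assuming the stated magnitude is used as the per-feature noise standard deviation and that the number of noisy copies is a free parameter:

```python
import numpy as np

# Noise-injection augmentation: Gaussian noise W_N with a per-feature
# amplitude of 10% of (max(|feature|) - |mean(feature)|) is added to the
# data S_N. Using that amplitude as the standard deviation is an assumption.

def augment_with_noise(data, copies=15, scale=0.10, rng=None):
    """data: (samples, features) array; returns original plus noisy copies."""
    rng = np.random.default_rng(rng)
    amplitude = scale * (np.abs(data).max(axis=0) - np.abs(data.mean(axis=0)))
    noisy = [data + rng.normal(0.0, 1.0, data.shape) * amplitude
             for _ in range(copies)]
    return np.vstack([data] + noisy)
```

Because max(|x|) ≥ |mean(x)| for every feature, the amplitude is always non-negative, and the noisy copies stay within the distribution of the original sensor data.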

2.3.3. Standardization

Figure 6a plots the gait data collected from the IMU sensors attached to each lower-limb segment, by feature, before standardization. As seen in the figure, the features differ considerably in scale. When a recognition model is trained on feature data with such significant scale differences, features with a wide distribution dominate learning, while features with a narrow distribution contribute little. In other words, recognition accuracy decreases because fewer features effectively influence gait phase recognition. Therefore, standardization was performed through Equations (2) and (3) so that all feature data have a similar scale.
σ_j = sqrt( (1/N) Σ_{i=1}^{N} ( x_{i,j} − E[x_j] )² )    (2)
x̃_{i,j} = ( x_{i,j} − x̄_j ) / σ_j    (3)
Here, i corresponds to the time stamp, and j represents each feature column. x_{i,j} is the raw feature data, E[x_j] is the mean value of feature x_j, and σ_j and x̃_{i,j} are the standard deviation and standardization result of feature data x_j, respectively. After standardization, as shown in Figure 6b, the influence of the scale of specific feature data was significantly reduced, and the distribution pattern of features between labels became clearly visible, which is expected to be a positive factor in training the multi-gait phase recognition model.
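Equations (2) and (3) amount to per-feature z-scoring, which can be sketched directly:

```python
import numpy as np

# Per-feature standardization from Equations (2) and (3): each feature column
# is centered on its mean and divided by its (population) standard deviation.

def standardize(x):
    """x: (timestamps, features) array of raw feature data x_{i,j}."""
    mean = x.mean(axis=0)                             # x̄_j = E[x_j]
    sigma = np.sqrt(((x - mean) ** 2).mean(axis=0))   # σ_j, Equation (2)
    return (x - mean) / sigma                         # x̃_{i,j}, Equation (3)
```

After this step every feature has zero mean and unit standard deviation, so no single channel dominates training.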

2.3.4. Sliding Window Label Overlapping Method

Bi-directional LSTM, used as the gait phase recognition model in this study, mainly relies on the sliding window method to continuously extract and predict data features in every cycle [25]. However, if label overlapping is allowed in time-series data where ordered labels appear repeatedly, two labels are included in any sliding window where the labels transition, as shown in Figure 7, so labeling these windows is also important. In this study, we estimate the current phase by referring to past data through a window with an experimentally determined optimal size of 22 (height: features) × 14 (width: time steps); when the last N labels of the sliding window are identical, the window is encoded with that label. How many of the last N labels within the 14 time steps must agree before label encoding is performed also significantly impacts the prediction model's recognition accuracy. Therefore, we set the label overlapping ratio to 30%, 50%, and 70% and select the optimal value through the experiments in the next section.
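The windowing and label-encoding rule above can be sketched as follows; the rounding rule for deriving N from the overlapping ratio is an assumption.

```python
import numpy as np

# Sliding-window label-overlapping sketch: 14-step windows slide over the
# (time, 22-feature) sequence; a window receives a label only when its last
# n labels agree, with n set by the overlapping ratio (e.g. 30% of 14 ≈ 4).

def make_windows(features, labels, width=14, overlap_ratio=0.3):
    """features: (T, 22); labels: (T,). Returns (22 x width) windows and labels."""
    n = max(1, round(width * overlap_ratio))
    windows, window_labels = [], []
    for end in range(width, len(labels) + 1):
        tail = labels[end - n:end]
        if np.all(tail == tail[-1]):                      # last n labels agree
            windows.append(features[end - width:end].T)   # (22, width) window
            window_labels.append(tail[-1])
    return np.array(windows), np.array(window_labels)
```

Windows that straddle a phase transition without a consistent label tail are simply skipped, which is one way to handle the two-label windows shown in Figure 7.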

2.4. Bidirectional LSTM-Based Gait Phase Recognition Model

In this study, we used bi-directional LSTM based on previous research showing that it can recognize not only the instantaneous mode of repetitive, regular motion but also the long-term dependency between modes [20]; the detailed model structure is shown in Figure 8. The 22 gait features for each lower-extremity segment, consisting of orientation, acceleration, and angular rate collected through the FBF method and the IMU sensors, are captured at 1 ms intervals through standardization and the sliding window label overlapping method with a window of 22 × 14 size. Each captured 22 × 14 window passes through a forward LSTM model and a backward LSTM model to identify the walking context. Afterward, the output hidden vector is mapped through the activation function to one of the four gait phases.
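A minimal numpy sketch of this forward/backward pass is given below. It is illustrative only, not the paper's TensorFlow implementation: weights are random, and all layer details beyond the 22 × 14 input and 4-class output are assumptions.

```python
import numpy as np

# Bi-directional LSTM sketch: a forward and a backward LSTM pass over a
# (22 features x 14 time steps) window; the two final hidden states are
# concatenated and mapped to softmax probabilities over 4 gait phases.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_pass(x_seq, w, u, b, hidden):
    """Run one LSTM over x_seq (time, features); return the final hidden state."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in x_seq:
        zi, zf, zo, zg = np.split(w @ x + u @ h + b, 4)
        i, f, o, g = sigmoid(zi), sigmoid(zf), sigmoid(zo), np.tanh(zg)
        c = f * c + i * g                 # update cell state
        h = o * np.tanh(c)                # update hidden state
    return h

def bilstm_predict(window, hidden=64, n_classes=4, rng=None):
    """window: (22 features, 14 time steps) -> softmax over 4 gait phases."""
    rng = np.random.default_rng(rng)
    x_seq = window.T                      # (time, features)
    feats = x_seq.shape[1]
    def params():                         # random weights for illustration
        return (rng.normal(0, 0.1, (4 * hidden, feats)),
                rng.normal(0, 0.1, (4 * hidden, hidden)),
                np.zeros(4 * hidden))
    h_fwd = lstm_pass(x_seq, *params(), hidden)         # forward context
    h_bwd = lstm_pass(x_seq[::-1], *params(), hidden)   # backward context
    h = np.concatenate([h_fwd, h_bwd])
    logits = rng.normal(0, 0.1, (n_classes, 2 * hidden)) @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()                    # phase probabilities
```

In practice the same structure is expressed with a trained framework layer (e.g., a bidirectional LSTM in TensorFlow); the sketch only shows how the forward and backward hidden states jointly produce the phase prediction.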

3. Experiment and Conclusions

3.1. Sensitivity Analysis

The selection and setting of parameters and hyperparameters in training deep learning models significantly impact both learning and test accuracy [26,27]. Therefore, it is important to select the parameters that influence model training and find their optimal values. In this study, eight parameters were selected as key design variables to maximize the accuracy of gait phase recognition. The activation functions chosen were tanh, selected for its ability to appropriately model nonlinear data characteristics and provide a suitable output range for LSTM gates, and LeakyReLU, utilized to mitigate the issue of gradient vanishing. The overlapping ratio was set to 30%, 50%, and 70% according to the number of the last N labels, as described in the previous section. The number of layers was selected in consideration of the model's complexity and computational efficiency, and the optimizer candidates were Adam, Nadam, and AdamW, chosen to increase convergence efficiency in model training. The size of the hidden units and the learning rate were considered crucial factors for determining the model's ability to process complex data and the speed of efficient learning, respectively, and were set within appropriate ranges. The dropout ratio was adopted at an appropriate level to prevent overfitting and enhance the model's generalization ability. Finally, the batch size was chosen considering both the stability of gradient descent and memory efficiency. These parameters are vital in optimizing the performance of the model, and the levels for each parameter were selected, as shown in Table 2, to find the optimal combination through level average analysis of the Taguchi method with an L18 (2^1 × 3^7) orthogonal array.
In the experiment, the approximately 360,000 data points collected from three subjects were divided into a training dataset and a validation dataset at a 6:4 ratio, and bi-directional LSTM training was conducted for each of the 18 design variable combinations in Table 3. Additionally, the test accuracy of each trained model was confirmed with the approximately 5000 data points collected from the one subject not used for training. Figure 9 presents the sensitivity analysis results for the test accuracy of the trained models, and Table 4 outlines the optimal parameter set derived through the sensitivity analysis. For the activation function, LeakyReLU showed significantly higher test accuracy than tanh, largely attributable to LeakyReLU's ability to mitigate the vanishing gradient problem. The overlapping ratio yielded the highest test accuracy at 30%, and a single layer was found to be the most efficient; this is likely because gait phase data have the simple characteristic of repeated cycles, so a deeper architecture may actually cause overfitting. The optimizer analysis demonstrates that basic Adam finds the most effective convergence path. For hidden units, selecting 64 balances model complexity against overfitting, allowing accurate gait data modeling without excessive computational load. For the learning rate, 0.003 was confirmed as optimal, indicating that the model can converge quickly while maintaining stability. For the dropout rate, 0.5 was confirmed as optimal, which suggests that substantial regularization is necessary: even with simple, repetitive gait phase data, overfitting to the gait patterns of the three training subjects can severely impair generalization performance. Finally, a batch size of 7000 was found to be optimal, showing that the global minimum can be found stably when each update is based on a large amount of data.
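Level average analysis over an orthogonal array reduces to averaging the response per factor level and picking the best level per factor, as the toy sketch below shows. The array and accuracy values here are illustrative, not the paper's Table 3 data.

```python
import numpy as np

# Level-average analysis sketch for a Taguchi orthogonal array: for each
# factor, average the responses (test accuracies) per level and select the
# level with the highest mean as the optimum for that factor.

def level_averages(array, responses):
    """array: (runs, factors) of level indices; responses: (runs,) accuracies."""
    best = []
    for f in range(array.shape[1]):
        levels = np.unique(array[:, f])
        means = {lv: responses[array[:, f] == lv].mean() for lv in levels}
        best.append(max(means, key=means.get))   # best level for this factor
    return best
```

Applied to the L18 array, this yields one recommended level per design variable, which is how the optimal parameter set in Table 4 is assembled.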

3.2. Optimal Parameter Learning Result and Conclusions

The bi-directional LSTM model with the optimal parameters, coded in Python 3.11 and TensorFlow 2.14.0, was trained and tested for gait phase prediction on a computer equipped with an AMD Ryzen 5950X (4th-generation) CPU and an RTX 3090 GPU. As a result, the model achieved a high test accuracy of 86.43%. This corresponds to approximately 122% of the lowest test accuracy among the orthogonal array combinations (set 5) and about 105% of the highest (set 15). Table 5 shows that the SW phase test accuracy was the highest at 91.39%, followed by nearly uniform accuracy across the remaining stance phases: FC at 85.45%, HS at 85.08%, and HO at 83.82%. Figure 10 shows the confusion matrix of the prediction results. In the figure, SW and HS show high true positive rates on the diagonal, while FC and HO frequently exhibit adjacent off-diagonal values. This can be attributed to two main factors. Firstly, as seen in the left walking data of Figure 7, most of the IMU sensor features change little during the transition from label 3 (FC) to label 2 (HO), making these labels difficult to distinguish. Additionally, this study uses a sliding window method that allows label overlapping, which contributes to the tendency to misclassify between FC and HO. Therefore, future research is needed to improve accuracy in the transition sections of ordered time-series data when predicting phases with the sliding window label overlapping method, particularly where features distinguishing the phases are scarce.
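The per-phase accuracies reported above are row-normalized diagonal entries of the confusion matrix, which can be computed as below. The example matrix is illustrative, not the Figure 10 data.

```python
import numpy as np

# Per-phase accuracy from a confusion matrix: each diagonal count divided
# by its row total (true instances of that phase). Rows = true labels.

def per_class_accuracy(cm):
    """cm: (classes, classes) confusion matrix; returns per-class recall."""
    return np.diag(cm) / cm.sum(axis=1)

# Illustrative matrix for the four phases in order (HO, FC, HS, SW);
# the adjacent HO/FC confusion mimics the pattern discussed in the text.
cm = np.array([[80, 15,  3,  2],
               [12, 85,  2,  1],
               [ 2,  3, 90,  5],
               [ 1,  1,  4, 94]])
```

Off-diagonal mass concentrated in the HO/FC block, as in this example, is exactly the adjacent-phase confusion the text attributes to near-constant features during the FC-to-HO transition.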

Author Contributions

Conceptualization, D.L. and H.J.; methodology, D.L. and H.J.; software, H.J.; validation, H.J.; formal analysis, H.J. and D.L.; investigation, H.J. and D.L.; resources, D.L. and H.J.; data curation, H.J. and D.L.; writing—original draft preparation, D.L. and H.J.; writing—review and editing, D.L. and H.J.; visualization, H.J. and D.L.; supervision, D.L.; project administration, D.L. and H.J.; funding acquisition, D.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2023 R1F1A1074704); the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korean Government (MSIT) (No. 2023-0-00218); the MSIT (Ministry of Science and ICT), Korea, under the Innovative Human Resource Development for Local Intellectualization support program (IITP-2024-RS-2022-00156360) supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation); and a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korean Government (MOTIE) (P0017123, HRD Program for Industrial Innovation).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Written informed consent has been obtained from the patient(s) to publish this paper.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no competing financial interests.

Nomenclature

R — Rotation matrix
{G} — Global reference frame
{B_f} — Body-fixed frame
{FB_f} — Floating body-fixed frame
{S_f,j} — Sensor-fixed frame of the jth IMU sensor
{C_j} — Calibrated sensor-fixed frame
{S_f,j.stand} — Sensor-fixed frame at the initial standing posture
{S_f,j.stoop} — Sensor-fixed frame at the initial stooping posture
a — Acceleration
ω — Angular rate

References

  1. Perry, J. Gait Analysis: Normal and Pathological Function; SLACK: Haddonfield, NJ, USA, 2010. [Google Scholar]
  2. Chen, W.; Xu, Y.; Wang, J.; Zhang, J. Kinematic analysis of human gait based on wearable sensor system for gait rehabilitation. J. Med. Biol. Eng. 2016, 36, 843–856. [Google Scholar] [CrossRef]
  3. Chen, G.; Qi, P.; Guo, Z.; Yu, H. Gait-event-based synchronization method for gait rehabilitation robots via a bioinspired adaptive oscillator. IEEE Trans. Biomed. Eng. 2016, 64, 1345–1356. [Google Scholar] [CrossRef]
  4. Yang, P.; Xie, L.; Wang, C.; Lu, S. IMU-Kinect: A motion sensor-based gait monitoring system for intelligent healthcare. In Proceedings of the Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers, London, UK, 9–13 September 2019; pp. 350–353. [Google Scholar]
  5. Lukšys, D.; Jatužis, D.; Jonaitis, G.; Griškevičius, J. Application of continuous relative phase analysis for differentiation of gait in neurodegenerative disease. Biomed. Signal Process. Control 2021, 67, 102558. [Google Scholar] [CrossRef]
  6. Wren, T.A.; Gorton, G.E., III; Ounpuu, S.; Tucker, C.A. Efficacy of clinical gait analysis: A systematic review. Gait Posture 2011, 34, 149–153. [Google Scholar] [CrossRef] [PubMed]
  7. Martindale, C.F.; Roth, N.; Hannink, J.; Sprager, S.; Eskofier, B.M. Smart annotation tool for multi-sensor gait-based daily activity data. In Proceedings of the 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), Athens, Greece, 19–23 March 2018; pp. 549–554. [Google Scholar]
Figure 1. Four gait sub-phases according to the 3-channel FSR (force-sensing resistor) measurement results.
Figure 2. Sensor configuration and attachment location for data acquisition.
Figure 3. Sensor data output according to subject posture, with and without the floating body-fixed frame. (a) The same pose as the initial reference frame; (b) rotated by 90°; (c) rotated by 180°.
Figure 4. Procedure for creating the body-fixed frame and performing sensor calibration.
Figure 5. Subject-specific gait trajectories and speeds measured with an OptiTrack motion-capture system during data acquisition. (a) Walking trajectory; (b) walking speed.
Figure 6. Distribution range of gait data features. (a) Before standardization; (b) after standardization.
Figure 7. Generation and labeling of training data using the sliding-window label overlapping method.
Figure 8. Structure of the bi-directional LSTM-based gait phase recognition model.
Figure 9. Result of the level average analysis.
Figure 10. Confusion matrix of the prediction results for the test dataset.
Table 1. Detailed physical information of the participants in the experiment.

Subject | Gender | Height [cm] | Weight [kg] | Foot Size [mm]
1 | Male | 177 | 100 | 270
2 | Male | 171 | 73 | 260
3 | Male | 172 | 88 | 265
4 | Male | 179.9 | 73 | 270
Table 2. Design parameters and levels for each parameter.

Level | Activation Function | Overlapping Ratio [%] | Layer No. | Optimizer | Hidden Unit | Learning Rate | Dropout Rate | Batch Size
1 | Tanh | 30 | 1 | Adam | 32 | 0.001 | 0.3 | 3000
2 | LeakyReLU | 50 | 2 | Nadam | 64 | 0.002 | 0.5 | 5000
3 | - | 70 | 3 | AdamW | 128 | 0.003 | 0.7 | 7000
Table 3. Orthogonal array table. Each entry is the level index of the corresponding design parameter in Table 2.

Set | Activation Function | Overlapping Ratio [%] | Layer No. | Optimizer | Hidden Unit | Learning Rate | Dropout Rate | Batch Size | Test Accuracy [%]
1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 76.36
2 | 1 | 1 | 2 | 2 | 2 | 2 | 2 | 2 | 75.76
3 | 1 | 1 | 3 | 3 | 3 | 3 | 3 | 3 | 76.43
4 | 1 | 2 | 1 | 1 | 2 | 2 | 3 | 3 | 74.29
5 | 1 | 2 | 2 | 2 | 3 | 3 | 1 | 1 | 71.10
6 | 1 | 2 | 3 | 3 | 1 | 1 | 2 | 2 | 73.49
7 | 1 | 3 | 1 | 2 | 1 | 3 | 2 | 3 | 78.01
8 | 1 | 3 | 2 | 3 | 2 | 1 | 3 | 1 | 74.55
9 | 1 | 3 | 3 | 1 | 3 | 2 | 1 | 2 | 74.15
10 | 2 | 1 | 1 | 3 | 3 | 2 | 2 | 1 | 81.08
11 | 2 | 1 | 2 | 1 | 1 | 3 | 3 | 2 | 78.35
12 | 2 | 1 | 3 | 2 | 2 | 1 | 1 | 3 | 79.95
13 | 2 | 2 | 1 | 2 | 3 | 1 | 3 | 2 | 80.13
14 | 2 | 2 | 2 | 3 | 1 | 2 | 1 | 3 | 80.33
15 | 2 | 2 | 3 | 1 | 2 | 3 | 2 | 1 | 82.66
16 | 2 | 3 | 1 | 3 | 2 | 3 | 1 | 2 | 78.07
17 | 2 | 3 | 2 | 1 | 3 | 1 | 2 | 3 | 79.53
18 | 2 | 3 | 3 | 2 | 1 | 2 | 3 | 1 | 75.28
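The level average analysis of the Taguchi method reduces the 18 runs of Table 3 to one choice per design parameter: for each parameter, the test accuracies of all sets sharing a level are averaged, and the level with the highest average is kept. A minimal sketch of this computation (the arrays simply transcribe Tables 2 and 3; this is an illustrative reimplementation, not the authors' code):

```python
import numpy as np

# Level indices of the L18 orthogonal array (Table 3), one row per training set.
# Columns: activation, overlapping ratio, layer no., optimizer,
# hidden units, learning rate, dropout rate, batch size.
runs = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 2, 2, 2, 2, 2, 2], [1, 1, 3, 3, 3, 3, 3, 3],
    [1, 2, 1, 1, 2, 2, 3, 3], [1, 2, 2, 2, 3, 3, 1, 1], [1, 2, 3, 3, 1, 1, 2, 2],
    [1, 3, 1, 2, 1, 3, 2, 3], [1, 3, 2, 3, 2, 1, 3, 1], [1, 3, 3, 1, 3, 2, 1, 2],
    [2, 1, 1, 3, 3, 2, 2, 1], [2, 1, 2, 1, 1, 3, 3, 2], [2, 1, 3, 2, 2, 1, 1, 3],
    [2, 2, 1, 2, 3, 1, 3, 2], [2, 2, 2, 3, 1, 2, 1, 3], [2, 2, 3, 1, 2, 3, 2, 1],
    [2, 3, 1, 3, 2, 3, 1, 2], [2, 3, 2, 1, 3, 1, 2, 3], [2, 3, 3, 2, 1, 2, 3, 1],
])
acc = np.array([76.36, 75.76, 76.43, 74.29, 71.10, 73.49, 78.01, 74.55, 74.15,
                81.08, 78.35, 79.95, 80.13, 80.33, 82.66, 78.07, 79.53, 75.28])

# Level values of each design parameter (Table 2).
levels = [
    ["Tanh", "LeakyReLU"],       # activation function
    [30, 50, 70],                # overlapping ratio [%]
    [1, 2, 3],                   # layer no.
    ["Adam", "Nadam", "AdamW"],  # optimizer
    [32, 64, 128],               # hidden unit
    [0.001, 0.002, 0.003],       # learning rate
    [0.3, 0.5, 0.7],             # dropout rate
    [3000, 5000, 7000],          # batch size
]

# Level average analysis: average accuracy per level of each parameter,
# then keep the level whose average is highest.
optimal = []
for j, vals in enumerate(levels):
    means = [acc[runs[:, j] == lv + 1].mean() for lv in range(len(vals))]
    optimal.append(vals[int(np.argmax(means))])

print(optimal)
# ['LeakyReLU', 30, 1, 'Adam', 64, 0.003, 0.5, 7000]
```

Running this on the Table 3 data reproduces the combination reported in Table 4: LeakyReLU, 30% overlap, 1 layer, Adam, 64 hidden units, learning rate 0.003, dropout 0.5, batch size 7000.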
Table 4. Optimal design parameter combination.

Activation Function | Overlapping Ratio [%] | Layer No. | Optimizer | Hidden Unit | Learning Rate | Dropout Rate | Batch Size
LeakyReLU | 30 | 1 | Adam | 64 | 0.003 | 0.5 | 7000
Table 5. Test accuracy for each gait phase.

Phase | SW | HS | FC | HO | Total
Accuracy [%] | 91.39 | 85.08 | 85.45 | 83.82 | 86.43
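The reported total accuracy is consistent with the unweighted (macro) average of the four per-phase accuracies, which suggests the four phases were weighted equally in the summary figure. A quick check (values transcribed from Table 5):

```python
# Per-phase test accuracies [%] reported in Table 5.
per_phase = {"SW": 91.39, "HS": 85.08, "FC": 85.45, "HO": 83.82}

# Unweighted mean over the four gait phases (macro average).
macro_avg = sum(per_phase.values()) / len(per_phase)
print(f"{macro_avg:.3f}")  # 86.435, within rounding of the reported 86.43
```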
Share and Cite

MDPI and ACS Style

Jeon, H.; Lee, D. Bi-Directional Long Short-Term Memory-Based Gait Phase Recognition Method Robust to Directional Variations in Subject’s Gait Progression Using Wearable Inertial Sensor. Sensors 2024, 24, 1276. https://doi.org/10.3390/s24041276
