
Novel Feature Extraction and Locomotion Mode Classification Using Intelligent Lower-Limb Prosthesis

College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Machines 2023, 11(2), 235; https://doi.org/10.3390/machines11020235
Submission received: 15 January 2023 / Revised: 27 January 2023 / Accepted: 1 February 2023 / Published: 5 February 2023

Abstract

Intelligent lower-limb prostheses have come into public view because of their attractive potential: they can help amputees restore mobility and return to normal life. To realize natural transitions between locomotion modes, locomotion mode classification is the top priority. There are five main steady-state, periodic motions, namely LW (level walking), SA (stair ascent), SD (stair descent), RA (ramp ascent) and RD (ramp descent), while ST (standing) can also be regarded as a locomotion mode (at the start or end of walking). This paper proposes four novel features, TPDS (thigh phase diagram shape), KAT (knee angle trajectory), CPO (center position offset) and GRFPV (ground reaction force peak value), and designs an ST classifier and an artificial neural network (ANN) classifier, built on a user-dependent dataset, to classify the six locomotion modes. Gaussian distributions are applied to these features to model the uncertainty and variability of human gaits. An angular velocity threshold and the GRFPV feature are used in the ST classifier, and the ANN classifier learns the mapping between the features and the locomotion modes. The results show that the proposed method reaches a high accuracy of 99.16% ± 0.38%. The method can provide the controller with the amputee's accurate motion intent and greatly improve the safety of intelligent lower-limb prostheses. The simple structure of the ANN used in this paper makes adaptive online learning algorithms possible in the future.

1. Introduction

According to statistics from the World Health Organization (WHO), about 15% (975 million) of the world's population have physical disabilities of varying degrees [1]. Some of them suffer from lower-limb amputation (LLA). For people with LLA, a lower-limb prosthesis is an important tool to help them restore mobility and live a better life. However, at present, the majority of commercial prosthetic legs are passive, and walking with them consumes 20–30% more energy than healthy individuals expend [2]. Moreover, the obvious asymmetry between the sound side and the affected side can lead to secondary damage. When people with LLA are in a complex walking environment, even stability becomes a luxury [3]. Research on intelligent lower-limb prostheses has therefore grown recently because of their ability to adapt to different terrains.
The common control framework of intelligent lower-limb prostheses is the hierarchical control system [4] shown in Figure 1. The high-level controller focuses on human intent recognition: it distinguishes locomotion modes in real time and obtains gait parameters and the gait phase, which are sent to the middle-level controller to compute desired joint angles and torques. The low-level controller makes the actuators output the desired torques. This paper focuses on locomotion mode classification, which belongs to the high-level controller of the prosthesis control system.
Pattern recognition (PR), machine learning (ML) and statistical methods have been widely applied to classify locomotion modes from sources ranging from surface electromyogram (sEMG) signals [5] to mechanical sensors. Ref. [6] collects sEMG signals to interpret motion modes. J. A. Spanias et al. developed an adaptive PR system that adapts to changes in the user's neural information during ambulation and consistently identifies the user's intent over multiple days with a classification accuracy of 96.31% ± 0.91% [7]. Zhang et al. present a robust environmental feature recognition system (EFRS) to predict the locomotion modes of amputees and estimate environmental features with a depth camera [8]. Ref. [9] combines EMG and mechanical sensors, using linear discriminant analysis (LDA) to reach 86% accuracy. Quadratic discriminant analysis (QDA) in [10] achieves a similar result with EMG alone. Ref. [11] infers the user's intent with a Gaussian mixture model (GMM), combining foot force and EMG to distinguish the intent to stand, sit and walk. Ref. [12] combines all sensors to recognize six locomotion modes and five mode transitions with a support vector machine (SVM), reaching a high accuracy of 95%. Ref. [13] adopts a dynamic Bayesian network (DBN) to recognize level walking (LW), stair ascent (SA), stair descent (SD), ramp ascent (RA) and ramp descent (RD) with a load cell and a six-axis inertial measurement unit (IMU). Ref. [14] depends only on the ground reaction force (GRF) to distinguish LW and SD with an artificial neural network (ANN). Recently, Ref. [15] encoded IMU data as 2D images and fed them into a convolutional neural network (CNN) that outputs the probabilities of five steady states and eight transition states, improving the accuracy to 95.8%. Similarly, Kang et al. developed a deep learning (DL)-based, user-independent classifier for five steady states that achieved an overall accuracy of 98.84% ± 0.47% [16]. Ref. [17] uses ML methods to compare user-independent and user-dependent intent recognition systems for powered prostheses; the results show that the user-dependent method has better accuracy. These previous works achieved good results in locomotion mode classification. However, the traditionally extracted features are generally the average, maximum, minimum, median and variance of sensor data, which lack a physical explanation, and DL methods require large data collections, which is burdensome for amputees.
Based on two IMUs (placed on the thigh and shank of the amputee's sound leg, respectively) and a GRF insole (placed in the shoe of the sound leg), shown in Figure 2, which are used to collect a user-dependent dataset, this paper extracts four novel features and designs classifiers to recognize locomotion modes. The main contributions of this paper are as follows: (1) four novel features are extracted from the processed sensor data; (2) the fluctuation of gait data is expressed by Gaussian distributions to model the variability of human motion trajectories; (3) the designed classifiers achieve a higher accuracy, 99.16% ± 0.38%, than previous works.

2. Materials and Methods

2.1. Data Acquisition and Processing

2.1.1. Data Collection

Eight able-bodied subjects agreed to participate in the data acquisition experiments: six males (heights 1.63–1.81 m, weights 58–76 kg) and two females (heights 1.65–1.70 m, weights 50–55 kg). They are required to wear the intelligent prosthesis shown in Figure 3a. The intelligent lower-limb prosthesis is designed for able-bodied subjects; its knee and ankle are active joints that provide power in the sagittal plane. Finite-state-machine impedance control [18] is applied to the intelligent prosthesis, and the joint torques τ_i are determined according to Equation (1):
$$\tau_i = k_i \left( \theta_i - \theta_{e,i} \right) + b_i \dot{\theta}_i \tag{1}$$
where $i$ denotes the prosthesis knee or ankle, $\theta$ is the joint angle and $\dot{\theta}$ is the joint angular velocity. $k$, $b$ and $\theta_e$ are the stiffness, damping and equilibrium position, respectively, and are assigned different values in different states and locomotion modes. The impedance parameters are tuned so that the subject walks comfortably.
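To make the control law concrete, the sketch below implements Equation (1) as a lookup of per-state impedance parameters followed by the torque computation. It is a minimal illustration, not the authors' implementation: the state names and parameter values are invented placeholders.

```python
# Minimal sketch of finite-state impedance control (Equation (1)).
# State names and parameter values are illustrative placeholders,
# not the tuning used in the paper.
IMPEDANCE_PARAMS = {
    # (mode, gait_state) -> {joint: (k, b, theta_e)}
    ("LW", "stance"): {"knee": (3.0, 0.05, 0.10), "ankle": (4.0, 0.08, 0.0)},
    ("LW", "swing"):  {"knee": (0.5, 0.02, 0.90), "ankle": (0.8, 0.01, 0.0)},
}

def joint_torque(mode, gait_state, joint, theta, theta_dot):
    """tau_i = k_i * (theta_i - theta_e_i) + b_i * theta_dot_i."""
    k, b, theta_e = IMPEDANCE_PARAMS[(mode, gait_state)][joint]
    return k * (theta - theta_e) + b * theta_dot
```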
Moreover, data acquisition sensors, including the IMUs and the GRF insole, are mounted as shown in Figure 3b. The IMU data consist of angles and angular velocities about the x, y and z axes, sampled at 100 Hz. The GRF insole measures the pressure in the vertical direction, also at 100 Hz.
This paper aims to classify the 6 locomotion modes shown in Figure 4; the locomotion settings, an incline-adjustable treadmill and two staircases of different step heights, are shown in Figure 5. The ramp inclines are set to ±3.6°, ±5.8° and ±9.5°. At each incline, subjects walk for 30 s at low, normal and fast speeds (0.56 m/s, 0.83 m/s and 1.11 m/s). To keep the data sizes of the different modes roughly equal, each subject walks on level ground (LG) for 90 s at each speed. For SA and SD, subjects complete 10 trials at each stair height (11.8 cm and 14.5 cm). Finally, subjects maintain a relaxed standing posture on level ground for 60 s.
Our final dataset includes data from 3 sensors: 2 IMUs placed on the thigh and shank of the sound leg and a GRF insole placed in the shoe on the same side (Figure 2). Only the IMU data in the sagittal plane are used. The raw sensor dataset under mode $\omega_m$ is recorded as:
$$D^{raw}_{\omega_m} = \left\{ \left( \theta_t, \dot{\theta}_t, \theta_s, G_h, G_t \right) \mid \omega_m \right\} \tag{2}$$
For clarity, we list some parameters in Table 1 below and some abbreviations in Table A1 of Appendix A.

2.1.2. Gait Phase Variable

Experiments with motion capture systems have shown that human thigh motion can uniquely and continuously represent the gait cycle [19]. The continuous gait phase variable is defined by the atan2 function:
$$\varphi(t) = \operatorname{atan2}\left( \dot{\theta}_t, \theta_t \right) + \pi \tag{3}$$
However, the thigh angular velocity $\dot{\theta}_t$ contains noise and ground-impact disturbances. A linear polynomial fitting method is therefore adopted to smooth the $\dot{\theta}_t(t)$ curve, as shown in Figure 6a:
$$\dot{\theta}_{t\_s}(t) = (1 - r)\,\dot{\theta}_t(t) + r \left[ a(t) \cdot t + b(t) \right] \tag{4}$$
where $a(t)$ and $b(t)$ are linear fitting parameters calculated online by the least-squares method according to Equation (5), and $r$ is the smoothing coefficient. In Equation (5), $n$ is the number of samples. Here, $n = 40$ and $r = 0.9$.
$$a(t) = \frac{\sum_{k=1}^{n} (t - k t_s)\,\dot{\theta}_t(t - k t_s) - n \bar{x} \bar{y}}{\sum_{k=1}^{n} (t - k t_s)^2 - n \bar{x}^2}, \quad \bar{x} = \frac{1}{n}\sum_{k=1}^{n} (t - k t_s), \quad \bar{y} = \frac{1}{n}\sum_{k=1}^{n} \dot{\theta}_t(t - k t_s), \quad b(t) = \bar{y} - a(t)\,\bar{x} \tag{5}$$
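As a minimal sketch of Equations (4) and (5), the following function (an assumed, illustrative implementation, not the authors' code) blends the current raw sample with a linear least-squares fit over the $n$ preceding samples, evaluated at the current time:

```python
import numpy as np

def smooth_angular_velocity(history, ts=0.01, n=40, r=0.9):
    """Sketch of Equations (4)-(5). history is a 1-D array of raw
    angular-velocity samples; history[-1] is the current sample and
    at least n earlier samples are assumed to be available."""
    prev = np.asarray(history[-(n + 1):-1], dtype=float)  # the n samples before now
    x = -ts * np.arange(n, 0, -1)      # sample times t - k*ts, taking t = 0
    x_bar, y_bar = x.mean(), prev.mean()
    a = ((x * prev).sum() - n * x_bar * y_bar) / ((x ** 2).sum() - n * x_bar ** 2)
    b = y_bar - a * x_bar              # the fit evaluated at t = 0 is simply b
    return (1 - r) * history[-1] + r * b
```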
Translation makes the phase-diagram trajectory wrap around the origin so that the phase variable varies circularly, and normalization yields similar phase-diagram trajectories for the same locomotion mode at different walking speeds. The transformation formulas [19] are:
$$\theta_{t\_tn}(t) = \alpha_x(t) \left[ \theta_t(t) + \beta_x(t) \right] \gamma, \qquad \dot{\theta}_{t\_stn}(t) = \alpha_y(t) \left[ \dot{\theta}_{t\_s}(t) + \beta_y(t) \right] \gamma \tag{6}$$
where $\gamma = 180$ is the scale factor, $\alpha(t)$ is the normalization factor and $\beta(t)$ is the translation factor, calculated as:
$$\alpha_x(t) = \frac{1}{\max[\theta_t(t)] - \min[\theta_t(t)]}, \quad \alpha_y(t) = \frac{1}{\max[\dot{\theta}_{t\_s}(t)] - \min[\dot{\theta}_{t\_s}(t)]}, \quad \beta_x(t) = -\frac{\max[\theta_t(t)] + \min[\theta_t(t)]}{2}, \quad \beta_y(t) = -\frac{\max[\dot{\theta}_{t\_s}(t)] + \min[\dot{\theta}_{t\_s}(t)]}{2} \tag{7}$$
The maxima and minima are calculated in real time over the gait cycle preceding time $t$. The new continuous gait phase variable (Figure 6b) is defined as
$$\varphi_{new}(t) = \operatorname{atan2}\left[ \dot{\theta}_{t\_stn}(t), \theta_{t\_tn}(t) \right] + \pi \tag{8}$$
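A compact sketch of Equations (6)-(8), again illustrative rather than the authors' code, translates and normalizes the thigh phase diagram over roughly the previous gait cycle and returns the new phase variable:

```python
import numpy as np

def gait_phase(theta_t, theta_t_dot_s, gamma=180.0):
    """Sketch of Equations (6)-(8). Inputs are arrays of the thigh angle and
    smoothed thigh angular velocity over about one previous gait cycle;
    the last element of each array is the current sample."""
    beta_x = -(theta_t.max() + theta_t.min()) / 2.0      # translation (Eq. (7))
    beta_y = -(theta_t_dot_s.max() + theta_t_dot_s.min()) / 2.0
    alpha_x = 1.0 / (theta_t.max() - theta_t.min())      # normalization (Eq. (7))
    alpha_y = 1.0 / (theta_t_dot_s.max() - theta_t_dot_s.min())
    x = alpha_x * (theta_t[-1] + beta_x) * gamma         # Eq. (6)
    y = alpha_y * (theta_t_dot_s[-1] + beta_y) * gamma
    return np.arctan2(y, x) + np.pi                      # Eq. (8), in (0, 2*pi]
```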

2.1.3. Knee Angle and GRF Value

The knee angle is calculated as:
$$\theta_k = \theta_s - \theta_t \tag{9}$$
The smoothing method of Equation (4) (with $n = 20$ and $r = 0.9$) is applied to the knee angle to obtain a smoother curve $\theta_{k\_s}(t)$. To obtain similar knee trajectories under the same locomotion mode at different walking speeds, $\theta_{k\_s}(t)$ is normalized to $\theta_{k\_sn}(t)$.
The GRFs of the heel and toe are recorded as $G_h(t)$ and $G_t(t)$, respectively. Their latest peak values over the preceding gait cycle are recorded as $G_{hp}(t)$ and $G_{tp}(t)$.
Then, the processed dataset $D_{\omega_m}$ is
$$D_{\omega_m} = \left\{ \left( \theta_{t\_tn}, \dot{\theta}_{t\_stn}, \varphi_{new}, \theta_{k\_sn}, \beta_x, \beta_y, G_{tp}, G_{hp} \right) \mid \omega_m \right\} \tag{10}$$
The phase interval $[0, 2\pi)$ is discretized into $f_p$ parts of equal length according to Equation (11):
$$\varphi_j = \left[ \frac{2\pi j}{f_p}, \frac{2\pi (j+1)}{f_p} \right) \quad (j = 0, 1, 2, \ldots, f_p - 1) \tag{11}$$
$D_{\omega_m}$ is divided into subsets $D_{\omega_m, \varphi_j}$ according to the part that $\varphi_{new}$ falls into. The divided datasets are used to calculate the feature distributions of the different modes. The workflow of offline data processing and feature distribution calculation is shown in Figure 7.
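For example, the bin index $j$ of a phase sample can be computed with the following assumed one-line helper, consistent with Equation (11):

```python
import numpy as np

def phase_bin(phi_new, f_p):
    """Map a continuous phase in [0, 2*pi) to its bin index j (Equation (11))."""
    return int(np.floor(phi_new * f_p / (2.0 * np.pi))) % f_p
```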

2.2. Feature Distributions and Extractions

2.2.1. Thigh Phase Diagram Shape (TPDS)

The thigh phase diagram plots the processed thigh angle $\theta_{t\_tn}$ on the x-axis against the processed thigh angular velocity $\dot{\theta}_{t\_stn}$ on the y-axis. The trajectory of $(\theta_{t\_tn}, \dot{\theta}_{t\_stn})$ during walking is named the TPDS, and it differs between locomotion modes. Gaussian distributions are used to record the standard TPDS of each locomotion mode. For example, the TPDS under LW mode is shown in Figure 8a.
As Figure 8b shows, the points $(\theta_{t\_tn}, \dot{\theta}_{t\_stn}) \in D_{\omega_m, \varphi_j}$ approximately follow a two-dimensional Gaussian distribution $N_{t|\omega_m,\varphi_j}(\mu, \Sigma)$, whose parameters are calculated as:
$$\mu = \begin{bmatrix} \mu_1 \\ \mu_2 \end{bmatrix}, \quad \Sigma = \begin{bmatrix} \mathrm{Cov}(X_1, X_1) & \mathrm{Cov}(X_1, X_2) \\ \mathrm{Cov}(X_2, X_1) & \mathrm{Cov}(X_2, X_2) \end{bmatrix}$$
$$\mu_1 = \frac{\sum_{D_{\omega_m,\varphi_j}} \theta_{t\_tn}}{\left| D_{\omega_m,\varphi_j} \right|}, \quad \mu_2 = \frac{\sum_{D_{\omega_m,\varphi_j}} \dot{\theta}_{t\_stn}}{\left| D_{\omega_m,\varphi_j} \right|}$$
$$\mathrm{Cov}(X_1, X_1) = \frac{\sum_{D_{\omega_m,\varphi_j}} \left( \theta_{t\_tn} - \mu_1 \right)^2}{\left| D_{\omega_m,\varphi_j} \right| - 1}, \quad \mathrm{Cov}(X_2, X_2) = \frac{\sum_{D_{\omega_m,\varphi_j}} \left( \dot{\theta}_{t\_stn} - \mu_2 \right)^2}{\left| D_{\omega_m,\varphi_j} \right| - 1}$$
$$\mathrm{Cov}(X_1, X_2) = \mathrm{Cov}(X_2, X_1) = \frac{\sum_{D_{\omega_m,\varphi_j}} \left( \theta_{t\_tn} - \mu_1 \right)\left( \dot{\theta}_{t\_stn} - \mu_2 \right)}{\left| D_{\omega_m,\varphi_j} \right| - 1} \tag{12}$$
The $f_p$ Gaussian distributions $N_{t|\omega_m,\varphi_0}(\mu, \Sigma)$ through $N_{t|\omega_m,\varphi_{f_p-1}}(\mu, \Sigma)$ together represent the TPDS under mode $\omega_m$. The TPDSs of the different locomotion modes are shown in Figure 9a.
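A sketch of this offline fitting step is shown below; it assumes every phase bin contains at least two samples and uses numpy's unbiased covariance, which matches the $|D| - 1$ denominators of Equation (12):

```python
import numpy as np

def fit_tpds_gaussians(points, phase, f_p):
    """Fit one 2-D Gaussian per phase bin for a single mode (Equation (12)).
    points: (N, 2) array of (theta_t_tn, theta_t_dot_stn) samples of the mode;
    phase:  (N,) array of the corresponding phi_new values."""
    bins = np.floor(phase * f_p / (2.0 * np.pi)).astype(int) % f_p
    means, covs = [], []
    for j in range(f_p):
        pts = points[bins == j]                 # subset D_{mode, phi_j}
        means.append(pts.mean(axis=0))          # mu
        covs.append(np.cov(pts.T))              # Sigma, divides by |D| - 1
    return means, covs
```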
The degree of overlap between the real-time thigh phase-diagram trajectory and the standard TPDSs of the different modes, shown in Figure 9b, serves as a similarity index for classifying locomotion modes. The summation of the probability density at each sample point evaluates the overlap degree:
$$sum_{t|\omega_m}(t) = \sum_{k=0}^{L_{tw}-1} f_{t|\omega_m,\varphi_j}\left( \theta_{t\_tn}(t - k t_s),\ \dot{\theta}_{t\_stn}(t - k t_s) \right) \tag{13}$$
where $f_{t|\omega_m,\varphi_j}(x, y)$ is the probability density function of $N_{t|\omega_m,\varphi_j}(\mu, \Sigma)$. The conditional probability $P(\omega_m \mid TPDS(t))$ of each mode is then:
$$P(\omega_m \mid TPDS(t)) = \frac{sum_{t|\omega_m}(t)}{\sum_{i=1}^{M} sum_{t|\omega_i}(t)} \tag{14}$$
where $TPDS(t)$ represents the TPDS feature at time $t$.
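Equations (13) and (14) can then be evaluated online as in the sketch below (illustrative; scipy is assumed for the Gaussian density). The KAT probabilities of Equations (15) and (16) follow the same pattern with scipy.stats.norm in one dimension.

```python
import numpy as np
from scipy.stats import multivariate_normal

def p_mode_given_tpds(window_pts, window_bins, means, covs):
    """Sketch of Equations (13)-(14). means[m][j] and covs[m][j] hold the
    per-mode, per-bin Gaussians; window_pts holds the last L_tw samples of
    (theta_t_tn, theta_t_dot_stn) and window_bins their phase-bin indices."""
    M = len(means)
    sums = np.zeros(M)
    for m in range(M):
        for pt, j in zip(window_pts, window_bins):
            sums[m] += multivariate_normal.pdf(pt, mean=means[m][j], cov=covs[m][j])
    return sums / sums.sum()    # P(mode | TPDS(t)) for every mode
```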

2.2.2. Knee Angle Trajectory (KAT)

The KAT is the normalized knee-angle trajectory along the gait-phase axis. Similarly to the TPDS, the samples $\theta_{k\_sn} \in D_{\omega_m,\varphi_j}$ follow a one-dimensional normal distribution $N_{k|\omega_m,\varphi_j}(\mu, \Sigma)$. The standard KATs of the different locomotion modes are shown in Figure 10.
The summation of probability densities evaluates the overlap between the real-time KAT and the standard KAT under mode $\omega_m$:
$$sum_{k|\omega_m}(t) = \sum_{k=0}^{L_{tw}-1} f_{k|\omega_m,\varphi_j}\left( \theta_{k\_sn}(t - k t_s) \right) \tag{15}$$
where $f_{k|\omega_m,\varphi_j}(x)$ is the probability density function of $N_{k|\omega_m,\varphi_j}(\mu, \Sigma)$. The conditional probability $P(\omega_m \mid KAT(t))$ of each mode is:
$$P(\omega_m \mid KAT(t)) = \frac{sum_{k|\omega_m}(t)}{\sum_{i=1}^{M} sum_{k|\omega_i}(t)} \tag{16}$$
where $KAT(t)$ represents the KAT feature at time $t$.

2.2.3. Center Position Offset (CPO)

During the translation of the thigh phase diagram in Equation (7), the translation vector $(\beta_x, \beta_y)$ reflects the range of thigh motion; this vector is called the CPO. The distributions of $(\beta_x, \beta_y) \in D_{\omega_m}$ in the different locomotion modes are shown in Figure 11a. A two-dimensional normal distribution $N_{c|\omega_m}(\mu, \Sigma)$ describes the CPO under mode $\omega_m$. Its probability density function is $f_{c|\omega_m}(x, y)$, and the conditional probability under CPO is
$$P(\omega_m \mid CPO(t)) = \frac{f_{c|\omega_m}(\beta_x(t), \beta_y(t))}{\sum_{i=1}^{M} f_{c|\omega_i}(\beta_x(t), \beta_y(t))} \tag{17}$$
where $CPO(t)$ represents the CPO feature at time $t$.

2.2.4. Ground Reaction Force Peak Value (GRFPV)

The peak values of the toe force and heel force have different distributions in different locomotion modes, as shown in Figure 11b. We assume that the GRFPV points of LW gather near a line segment, while those of the other modes gather within circles.
The line segment $AB$ is fitted by the least-squares method:
$$a_{\omega_m} x + b_{\omega_m} y + c_{\omega_m} = 0, \quad x_A \le x \le x_B, \quad \omega_m = \mathrm{LW} \tag{18}$$
The circle center of each of the other modes is
$$C_{\omega_m}\left( x_{\omega_m}, y_{\omega_m} \right), \quad \omega_m \ne \mathrm{LW} \tag{19}$$
Then the Euclidean distance is used to compute the relative probability $RP_{\omega_m}$:
$$RP_{\omega_m} = \frac{3}{1 + 2\exp(d/100)} \tag{20}$$
where d is
$$d = \begin{cases} \begin{cases} |pA|, & \angle pAB > 90^\circ \\ |pB|, & \angle pBA > 90^\circ \\ \dfrac{\left| a_{\omega_m} G_{hp}(t) + b_{\omega_m} G_{tp}(t) + c_{\omega_m} \right|}{\sqrt{a_{\omega_m}^2 + b_{\omega_m}^2}}, & \text{else} \end{cases} & \omega_m = \mathrm{LW} \\[2ex] \left| p\,C_{\omega_m} \right|, & \omega_m \ne \mathrm{LW} \end{cases} \tag{21}$$
where $p(G_{hp}(t), G_{tp}(t))$ is the real-time GRFPV point. The conditional probability $P(\omega_m \mid GRFPV(t))$ is:
$$P(\omega_m \mid GRFPV(t)) = \frac{RP_{\omega_m}}{\sum_{i=1}^{M} RP_{\omega_i}} \tag{22}$$
where $GRFPV(t)$ represents the GRFPV feature at time $t$.
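The GRFPV probability of Equations (18)-(22) reduces to a point-to-segment distance for LW and point-to-center distances for the other modes, as in this illustrative sketch (the segment endpoints and circle centers are assumed to come from the offline fitting step):

```python
import numpy as np

def p_mode_given_grfpv(p, seg_a, seg_b, centers, mode_names):
    """Sketch of Equations (18)-(22). p = (G_hp, G_tp) is the real-time peak
    point; seg_a and seg_b are the fitted LW segment endpoints A and B;
    centers maps each non-LW mode name to its circle center."""
    def dist_to_lw_segment(p):
        ab, ap, bp = seg_b - seg_a, p - seg_a, p - seg_b
        if np.dot(ap, ab) < 0:             # angle pAB > 90 deg: closest to A
            return np.linalg.norm(ap)
        if np.dot(bp, -ab) < 0:            # angle pBA > 90 deg: closest to B
            return np.linalg.norm(bp)
        cross = ab[0] * ap[1] - ab[1] * ap[0]
        return abs(cross) / np.linalg.norm(ab)   # point-to-line distance

    rp = []
    for mode in mode_names:
        d = dist_to_lw_segment(p) if mode == "LW" else np.linalg.norm(p - centers[mode])
        rp.append(3.0 / (1.0 + 2.0 * np.exp(d / 100.0)))   # Equation (20)
    rp = np.asarray(rp)
    return rp / rp.sum()                                    # Equation (22)
```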
It should be noted that ST is not a periodic movement and has no stable phase variable, so the TPDS, KAT and CPO features are not calculated for ST.

3. Results

The workflow of real-time feature extraction and classification is shown in Figure 12. This paper designs two classifiers: a standing (ST) classifier that identifies the ST mode, and an artificial neural network (ANN) classifier that classifies the other five locomotion modes.

3.1. ST Classifier

An angular velocity threshold and the GRFPV feature are used in the ST classifier, as shown in Figure 12. The threshold $Th$ satisfies
$$DT = \frac{1}{L_{tw}} \sum_{k=0}^{L_{tw}} \left| \dot{\theta}_t(t - k t_s) \right| < Th \tag{23}$$
where $DT$ is the dynamic trend. When the subject is standing, the $DT$ curve is as shown in Figure 13a, and the maximum $DT$ stays below 2. Ten thousand sample points randomly drawn from the other modes have the $DT$ distribution shown in Figure 13b, with a minimum greater than 7. Here, we take $Th = 5$.
The threshold ensures that there is little movement, while the GRFPV feature inequality ensures that the subject is standing on the ground. The ST classifier achieved 100% accuracy in testing because standing has an obvious static signature that differs from the periodic locomotion modes.
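A sketch of the ST decision rule follows. The threshold $Th$ comes from the analysis above; the ground-contact check stands in for the GRFPV feature inequality, whose exact form and bound (grf_min below) are assumptions of this illustration:

```python
import numpy as np

def is_standing(theta_t_dot_window, g_hp, g_tp, th=5.0, grf_min=100.0):
    """Sketch of the ST classifier (Equation (23)): a small dynamic trend plus
    ground contact indicates standing. grf_min is an illustrative placeholder,
    not a value from the paper."""
    dt = np.abs(theta_t_dot_window).mean()   # dynamic trend DT
    return dt < th and (g_hp + g_tp) > grf_min
```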

3.2. ANN Classifier

In the ANN classifier, the conditional probabilities of each mode under each feature are fed into a fully connected neural network with one hidden layer, as shown in Figure 14, and the final outputs are the probabilities of the locomotion modes. The hidden-layer and output-layer calculations are given in Equation (24):
$$y_j = g\!\left( \sum_{i=1}^{4M} \omega^{l1}_{ij} x_i + b^{l1}_j \right), \qquad z_j = g\!\left( \sum_{i=1}^{H} \omega^{l2}_{ij} y_i + b^{l2}_j \right) \tag{24}$$
where $\omega^{l1}_{ij}$ and $b^{l1}_j$ are the weights and biases between the input and hidden layers, $\omega^{l2}_{ij}$ and $b^{l2}_j$ are the weights and biases between the hidden and output layers, and $g$ is the tansig function. During training, the training and validation datasets are strictly separated.
The network was trained per subject, and the ANN classifier was evaluated by five-fold cross-validation. The test results of the eight subjects are listed in Table 2 below, and the combined confusion matrix is shown in Table 3. The ANN classifier achieves an average accuracy across all subjects of 99.16% ± 0.38%.
Compared with a traditional confusion matrix, we add a "None" column to represent the unclassified mode, assigned when
$$\max_k \left( p_{\omega_k} \right) < 0.5, \quad k = 1, 2, \ldots, M \tag{25}$$
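A minimal sketch of the forward pass of Equation (24) and the "None" rule of Equation (25) is given below; the weights are assumed to come from training, and tansig is written out explicitly (it is equivalent to tanh):

```python
import numpy as np

def tansig(x):
    """MATLAB-style tansig activation, equivalent to tanh."""
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

def classify(x, W1, b1, W2, b2, modes=("LW", "SA", "SD", "RA", "RD")):
    """Sketch of Equations (24)-(25). x holds the 4*M conditional probabilities
    (TPDS, KAT, CPO and GRFPV for each of the M modes); W1 has shape (H, 4*M)
    and W2 has shape (M, H), with H hidden neurons."""
    y = tansig(W1 @ x + b1)           # hidden layer (Equation (24))
    z = tansig(W2 @ y + b2)           # output layer
    k = int(np.argmax(z))
    return modes[k] if z[k] >= 0.5 else "None"   # Equation (25)
```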

4. Discussion

4.1. Network Hyperparameters

The performance of the ANN is closely related to the network structure. Here, two hyperparameters are considered: (1) the number of neurons in the hidden layer ($N_{hidden}$) and (2) the number of groups of training data per mode ($N_{train}$) in the training set. $N_{hidden}$ affects the network structure, and $N_{train}$ may lead to overfitting or underfitting. Based on M1's user-dependent dataset, tests are carried out under different parameters, with the results shown in Figure 15a. A larger $N_{train}$ yields higher classification accuracy; however, the accuracy already reaches 95% at $N_{train} = 200$, which shows that the proposed method achieves good results with a small amount of training data. With $N_{train} = 2000$ and $N_{hidden} = 25$, we obtain the best classification accuracy for M1.

4.2. Ramp Slope

From Table 2 and Table 3, the classification errors are concentrated in RA and RD. The proportion of the total RA and RD error contributed by each slope is shown in Figure 15b. Most of the error occurs at the low slopes, which indicates that gaits at low slopes resemble one another.

4.3. Time Window Length

The time-window size determines how much information is available for classification, but a larger window brings a higher delay. Based on M1's user-dependent dataset, we tested the average accuracy under different window sizes with $N_{train} = 2000$ and $N_{hidden} = 25$. The result is shown in Figure 16: once the window size reaches 50, the classification accuracy increases only very slowly.

5. Conclusions

The high accuracy of locomotion mode classification ensures prosthetic users' safety and is the foundation of natural transitions between locomotion modes. In this paper, four novel features are proposed based on data from two IMUs and one GRF insole. Gaussian distributions describe the TPDS, KAT and CPO features, after distribution-fitting tools were used to analyze the data, and Euclidean distances in the GRFPV diagram are used to compute the relative probabilities of the different locomotion modes. To the authors' knowledge, these features have not been proposed or applied before. The designed ST and ANN classifiers achieve high accuracies of 100% and 99.16% ± 0.38%, respectively.
Moreover, the proposed method holds potential for future research. The real-time classified walking data can be used to adjust the feature distributions to adapt to an amputee's gait, and newly extracted features can be conveniently added to our framework. The ANN used in this paper has a simple structure, which makes online training possible. Additionally, human locomotion modes are not limited to those listed here: when the predicted class is "None", the unclassified data can be collected and clustering algorithms applied to discover new modes. These evolutionary and adaptive abilities are what we will study next.
To further this study, volunteers with disabilities will be invited to test the proposed method. Beyond locomotion mode classification, more information, such as slope, step stride and stair height, will be predicted by analyzing the walking dataset.

Author Contributions

Conceptualization, Y.L. and H.A.; methodology, Y.L. and H.A.; software, Y.L.; validation, Y.L., H.A. and H.M.; formal analysis, Y.L.; data curation, Y.L.; writing—original draft preparation, Y.L.; writing—review and editing, H.A., H.M. and Q.W.; visualization, Y.L.; supervision, H.A., H.M. and Q.W.; project administration, H.A.; funding acquisition, H.A., H.M. and Q.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China, grant number 2018YFC2001304.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Abbreviations in this paper are shown in Table A1.
Table A1. Abbreviations used in this paper.

LW: level walking
SA: stair ascent
SD: stair descent
RA: ramp ascent
RD: ramp descent
ST: standing
TPDS: thigh phase diagram shape
KAT: knee angle trajectory
CPO: center position offset
GRFPV: ground reaction force peak value
ANN: artificial neural network
WHO: World Health Organization
LLA: lower-limb amputation
PR: pattern recognition
ML: machine learning
sEMG: surface electromyogram
EFRS: environmental feature recognition system
LDA: linear discriminant analysis
QDA: quadratic discriminant analysis
GMM: Gaussian mixture model
DBN: dynamic Bayesian network
IMU: inertial measurement unit
GRF: ground reaction force
CNN: convolutional neural network
DL: deep learning
LG: level ground
M: man
W: woman
DT: dynamic trend

References

  1. Filmer, D. Disability, Poverty, and Schooling in Developing Countries. Soc. Sci. Electron. Publ. 2008, 22, 141–163.
  2. Au, S.K.; Weber, J.; Herr, H.M. Powered Ankle–Foot Prosthesis Improves Walking Metabolic Economy. IEEE Trans. Robot. 2009, 25, 51–66.
  3. Wang, Q.N.; Zheng, E.H.; Chen, B.J.; Mai, J.G. Recent Progress and Challenges of Robotic Lower-limb Prostheses for Human-robot Integration. Acta Autom. Sin. 2016, 42, 1780–1793.
  4. Tucker, M.R.; Olivier, J.; Pagel, A.; Bleuler, H.; Bouri, M.; Lambercy, O.; Gassert, R. Control strategies for active lower extremity prosthetics and orthotics: A review. J. NeuroEngineering Rehabil. 2015, 12, 1–30.
  5. Fleming, A.; Stafford, N.; Huang, S.; Hu, X.; Ferris, D.P.; Huang, H.H. Myoelectric control of robotic lower limb prostheses: A review of electromyography interfaces, control paradigms, challenges and future directions. J. Neural Eng. 2021, 18, 041004.
  6. Simao, M.; Mendes, N.; Gibaru, O.; Neto, P. A Review on Electromyography Decoding and Pattern Recognition for Human-Machine Interaction. IEEE Access 2019, 7, 39564–39582.
  7. Spanias, J.A.; Simon, A.M.; Finucane, S.B.; Perreault, E.J.; Hargrove, L.J. Online adaptive neural control of a robotic lower limb prosthesis. J. Neural Eng. 2018, 15, 016015.
  8. Zhang, K.; Xiong, C.; Zhang, W.; Liu, H.; Lai, D.; Rong, Y.; Fu, C. Environmental Features Recognition for Lower Limb Prostheses Toward Predictive Walking. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 465–476.
  9. Young, A.J.; Simon, A.M.; Fey, N.P.; Hargrove, L.J. Classifying the intent of novel users during human locomotion using powered lower limb prostheses. In Proceedings of the 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER), San Diego, CA, USA, 6–8 November 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 311–314.
  10. Ha, K.H.; Varol, H.A.; Goldfarb, M. Volitional Control of a Prosthetic Knee Using Surface Electromyography. IEEE Trans. Biomed. Eng. 2010, 58, 144–151.
  11. Varol, H.A.; Sup, F.; Goldfarb, M. Multiclass Real-Time Intent Recognition of a Powered Lower Limb Prosthesis. IEEE Trans. Biomed. Eng. 2009, 57, 542–551.
  12. Huang, H.; Zhang, F.; Hargrove, L.J.; Dou, Z.; Rogers, D.R.; Englehart, K.B. Continuous Locomotion-Mode Identification for Prosthetic Legs Based on Neuromuscular-Mechanical Fusion. IEEE Trans. Biomed. Eng. 2011, 58, 2867–2875.
  13. Young, A.J.; Simon, A.M.; Fey, N.P.; Hargrove, L.J. Intent Recognition in a Powered Lower Limb Prosthesis Using Time History Information. Ann. Biomed. Eng. 2013, 42, 631–641.
  14. Au, S.; Berniker, M.; Herr, H. Powered ankle-foot prosthesis to assist level-ground and stair-descent gaits. Neural Netw. 2008, 21, 654–666.
  15. Su, B.Y.; Wang, J.; Liu, S.Q.; Sheng, M.; Jiang, J.; Xiang, K. A CNN-Based Method for Intent Recognition Using Inertial Measurement Units and Intelligent Lower Limb Prosthesis. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 1032–1042.
  16. Kang, I.; Molinaro, D.D.; Choi, G.; Camargo, J.; Young, A.J. Subject-Independent Continuous Locomotion Mode Classification for Robotic Hip Exoskeleton Applications. IEEE Trans. Biomed. Eng. 2022, 69, 3234–3242.
  17. Bhakta, K.; Camargo, J.; Donovan, L.; Herrin, K.; Young, A. Machine learning model comparisons of user independent & dependent intent recognition systems for powered prostheses. IEEE Robot. Autom. Lett. 2020, 5, 5393–5400.
  18. Lawson, B.E.; Mitchell, J.; Truex, D.; Shultz, A.; Ledoux, E.; Goldfarb, M. A robotic leg prosthesis: Design, control, and implementation. IEEE Robot. Autom. Mag. 2014, 21, 70–81.
  19. Quintero, D.; Lambert, D.J.; Villarreal, D.J.; Gregg, R.D. Real-time continuous gait phase and speed estimation from a single sensor. In Proceedings of the 2017 IEEE Conference on Control Technology and Applications (CCTA), Kohala Coast, HI, USA, 27–30 August 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 847–852.
Figure 1. The hierarchical control system [4] of an intelligent powered lower-limb prosthesis based on the gait phase.
Figure 2. Sensor installation positions and variable descriptions. $\theta_t$ is the thigh IMU angle in the sagittal plane, $\theta_s$ is the shank IMU angle in the sagittal plane and $\theta_k$ is the knee angle. $G_h$ and $G_t$ are the raw force sensor data of the heel and toe.
Figure 3. (a) Intelligent lower-limb prosthesis designed for able-bodied subjects. (b) The able-bodied subject equipped with a prosthesis and data acquisition sensors.
Figure 4. Six locomotion modes including level walking (LW), stair ascent (SA), stair descent (SD), ramp ascent (RA), ramp descent (RD), and standing (ST).
Figure 5. (a) Incline adjustable treadmill. (b) Stairs with 14.5 cm stair height. (c) Stairs with 11.8 cm stair height.
Figure 6. (a) Smoothing effect on the normalized thigh angular velocity. (b) The new gait phase variable shows better monotonicity along the time axis.
Figure 7. Offline data processing and feature distribution calculation.
Figure 8. (a) TPDS of LW mode. Pink points belong to the dataset $D_{LW}$. Points in the green circle are in the dataset $D_{LW,\varphi_0}$. (b) The distributions of the x- and y-coordinates of the points in the green circle are approximately normal, according to the histograms.
Figure 9. (a) TPDSs in different locomotion modes. The solid line is the mean trajectory, and the transparent area is ±1 standard deviation. (b) The red part is the standard TPDS of LW mode; the blue points form the real-time thigh phase-diagram trajectory, collected under LW mode.
Figure 10. KATs in different locomotion modes. The x-axis is the continuous thigh phase variable $\varphi_{new}(t)$, which represents a whole gait cycle, and the y-axis is the normalized knee angle $\theta_{k\_sn}(t)$. The solid line is the mean trajectory, and the transparent area is ±1 standard deviation.
Figure 11. (a) CPO distributions in different modes. Ellipses represent probability density contours, and the points represent the translation vectors $(\beta_x, \beta_y)$ in different modes. (b) GRFPV feature distributions in different modes. Points represent a subset of the sample points $(G_{hp}, G_{tp})$. The line segment is fitted to the points of $D_{LW}$. Each circle contains all sample points of one mode with the smallest radius.
Figure 12. Real-time feature extraction and classification. C1 is the standing (ST) classifier, and C2 is the artificial neural network (ANN) classifier. The blue points are real-time feature points under LW.
Figure 13. (a) The dynamic trend under ST mode. (b) The DT distribution of ten thousand sample points randomly sampled from the other modes.
Figure 14. The structure of the designed neural network.
Figure 15. (a) Classification accuracy under different $N_{hidden}$ and $N_{train}$. The error bars represent ±1 standard deviation. (b) The proportion of errors at each slope under RA and RD.
Figure 16. Classification accuracy under different time window sizes. Error bars represent ±1 standard deviation.
Table 1. Variable descriptions.

$M$: total number of modes except ST ($M = 5$)
$\dot{\theta}_t$: thigh IMU angular velocity in the sagittal plane
$\omega_m$: locomotion mode, $\omega_m \in \{\mathrm{LW, SA, SD, RA, RD, ST}\}$
$t_s$: sampling period, $t_s = 0.01$ s
$L_{tw}$: length of the time window
Table 2. Classification accuracy (%) for the 8 subjects (M = man, W = woman).

Subject   LW       SA       SD       RA       RD       Total
M1        100.0    99.89    99.08    99.10    98.71    99.36
M2        100.0    99.67    99.90    97.55    97.59    98.94
M3        100.0    99.71    99.11    98.88    98.65    99.27
M4        100.0    99.15    99.19    97.73    97.32    98.68
M5        99.99    99.89    99.39    97.36    96.90    98.71
M6        100.0    100.0    99.59    99.33    99.58    99.70
W1        100.0    98.69    99.53    100.0    99.63    99.57
W2        99.94    98.48    98.85    99.16    98.75    99.04
Table 3. The confusion matrix of the accuracy tests (%). Rows are the actual class and columns the predicted class; empty cells are zero.

Actual    LW       SA       SD       RA       RD       None
LW        99.99    –        –        –        0.01     –
SA        –        99.44    0.56     –        –        –
SD        –        0.67     99.33    –        –        –
RA        –        0.05     –        98.64    1.30     0.01
RD        –        –        –        1.61     98.38    0.01