Search Results (12)

Search Parameters:
Keywords = lower limb movement intention recognition

24 pages, 7261 KB  
Article
IFIANet: Frequency Attention Network for Time–Frequency in sEMG-Based Motion Intent Recognition
by Gang Zheng, Jiankai Lin, Jiawei Zhang, Heming Jia, Jiayang Tang and Longtao Shi
Sensors 2026, 26(1), 169; https://doi.org/10.3390/s26010169 - 26 Dec 2025
Viewed by 424
Abstract
Lower limb exoskeleton systems require accurate recognition of the wearer’s movement intentions prior to action execution in order to achieve natural and smooth human–machine interaction. Surface electromyography (sEMG) signals can reflect neural activation of muscles before movement onset, making them a key physiological source for movement intention recognition. To improve sEMG-based recognition performance, this study proposes an innovative deep learning framework, IFIANet. First, a CNN–TCN-based spatiotemporal feature learning network is constructed, which efficiently models and represents multi-scale temporal–frequency features while effectively reducing model parameter complexity. Second, an IFIA (Frequency-Informed Integration Attention) module is designed to incorporate global frequency information, compensating for frequency components potentially lost during time–frequency transformations, thereby enhancing the discriminability and robustness of temporal–frequency features. Extensive ablation and comparative experiments on the publicly available MyPredict1 dataset demonstrate that the proposed framework maintains stable performance across different prediction times and achieves over 82% average recognition accuracy in within-subject experiments involving nine participants. The results indicate that IFIANet effectively fuses local temporal–frequency features with global frequency priors, providing an efficient and reliable approach for sEMG-based movement intention recognition and intelligent control of exoskeleton systems. Full article
(This article belongs to the Special Issue Advanced Sensors for Human Health Management)
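
The core mechanism this abstract describes — local temporal features reweighted by a global frequency prior — can be illustrated with a minimal PyTorch sketch. This is not the authors’ IFIANet; FrequencyAttentionBlock, the layer sizes, and the window shape are hypothetical stand-ins for the general idea.

```python
# Illustrative sketch (not the authors' IFIANet): temporal convolution features
# of an sEMG window are reweighted by attention derived from the window's
# global frequency content. Shapes and layer sizes are arbitrary assumptions.
import torch
import torch.nn as nn


class FrequencyAttentionBlock(nn.Module):
    """Reweights temporal feature channels using global FFT magnitudes."""

    def __init__(self, in_channels: int, feat_channels: int, fft_bins: int):
        super().__init__()
        # Temporal branch: plain 1-D convolutions stand in for a CNN-TCN stack.
        self.temporal = nn.Sequential(
            nn.Conv1d(in_channels, feat_channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(feat_channels, feat_channels, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Frequency branch: map the averaged FFT magnitude spectrum to one
        # attention weight per feature channel.
        self.freq_gate = nn.Sequential(
            nn.Linear(fft_bins, feat_channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, emg_channels, window_length)
        feats = self.temporal(x)                          # (B, F, T)
        spectrum = torch.fft.rfft(x, dim=-1).abs()        # (B, C, T//2 + 1)
        spectrum = spectrum.mean(dim=1)                   # average over sEMG channels
        gate = self.freq_gate(spectrum).unsqueeze(-1)     # (B, F, 1)
        return feats * gate                               # frequency-informed reweighting


if __name__ == "__main__":
    window = torch.randn(8, 4, 256)                       # 8 windows, 4 sEMG channels, 256 samples
    block = FrequencyAttentionBlock(in_channels=4, feat_channels=32, fft_bins=256 // 2 + 1)
    print(block(window).shape)                            # torch.Size([8, 32, 256])
```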

17 pages, 3783 KB  
Article
A Dual-Task Improved Transformer Framework for Decoding Lower Limb Sit-to-Stand Movement from sEMG and IMU Data
by Xiaoyun Wang, Changhe Zhang, Zidong Yu, Yuan Liu and Chao Deng
Machines 2025, 13(10), 953; https://doi.org/10.3390/machines13100953 - 16 Oct 2025
Viewed by 722
Abstract
Recent advances in exoskeleton-assisted rehabilitation have highlighted the significance of lower limb movement intention recognition through deep learning. However, discrete motion phase classification and continuous real-time joint kinematics estimation are typically handled as independent tasks, leading to temporal misalignment or delayed assistance during dynamic movements. To address this issue, this study presents iTransformer-DTL, a dual-task learning framework with an improved Transformer designed to identify end-to-end locomotion modes and predict joint trajectories during sit-to-stand transitions. Employing a learnable query mechanism and a non-autoregressive decoding approach, the proposed iTransformer-DTL can produce the complete output sequence at once, without relying on any previously generated elements. The proposed framework has been tested with a dataset of lower limb movements involving seven healthy individuals and seven stroke patients. The experimental results indicate that the proposed framework achieves satisfactory performance in dual tasks. An average angle prediction Mean Absolute Error (MAE) of 3.84° and a classification accuracy of 99.42% were obtained in the healthy group, while 4.62° MAE and 99.01% accuracy were achieved in the stroke group. These results suggest that iTransformer-DTL could support adaptable rehabilitation exoskeleton controllers, enhancing human–robot interactions. Full article
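
The dual-task, non-autoregressive decoding idea can be sketched roughly as follows. This is a hedged PyTorch illustration under assumed dimensions, not the authors’ iTransformer-DTL: DualTaskDecoder, the learnable-query setup, and the two heads are hypothetical.

```python
# Rough sketch of the dual-task idea: a Transformer encoder over sensor windows,
# a set of learnable queries decoded non-autoregressively into a full joint-angle
# trajectory, plus a classification head for the locomotion mode.
import torch
import torch.nn as nn


class DualTaskDecoder(nn.Module):
    def __init__(self, in_dim=12, d_model=64, out_len=100, n_joints=2, n_modes=4):
        super().__init__()
        self.embed = nn.Linear(in_dim, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        # Learnable queries: one per output time step, decoded in a single pass.
        self.queries = nn.Parameter(torch.randn(out_len, d_model))
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.angle_head = nn.Linear(d_model, n_joints)   # continuous joint angles
        self.mode_head = nn.Linear(d_model, n_modes)     # discrete motion phase/mode

    def forward(self, x):
        # x: (batch, window_length, sensor_channels), e.g. fused sEMG + IMU features
        memory = self.encoder(self.embed(x))
        queries = self.queries.unsqueeze(0).expand(x.size(0), -1, -1)
        decoded = self.decoder(queries, memory)          # no autoregressive masking
        angles = self.angle_head(decoded)                # (B, out_len, n_joints)
        mode_logits = self.mode_head(memory.mean(dim=1)) # (B, n_modes)
        return angles, mode_logits


if __name__ == "__main__":
    x = torch.randn(4, 200, 12)
    angles, logits = DualTaskDecoder()(x)
    print(angles.shape, logits.shape)                    # (4, 100, 2) (4, 4)
```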

25 pages, 18664 KB  
Article
Study on Lower Limb Motion Intention Recognition Based on PO-SVMD-ResNet-GRU
by Wei Li, Mingsen Wang, Daxue Sun, Zhuoda Jia and Zhengwei Yue
Processes 2025, 13(10), 3252; https://doi.org/10.3390/pr13103252 - 13 Oct 2025
Viewed by 520
Abstract
This study aims to enhance the accuracy of human lower limb motion intention recognition based on surface electromyography (sEMG) signals and proposes a signal denoising method based on Sequential Variational Mode Decomposition (SVMD) optimized by the Parrot Optimization (PO) algorithm and a joint motion angle prediction model combining Residual Network (ResNet) with Gated Recurrent Unit (GRU) for the two aspects of signal processing and predictive modeling, respectively. First, for the two motion conditions of level walking and stair climbing, sEMG signals from the rectus femoris, vastus lateralis, semitendinosus, and biceps femoris, as well as the motion angles of the hip and knee joints, were simultaneously collected from five healthy subjects, yielding a total of 400 gait cycle data points. The sEMG signals were denoised using the method combining PO-SVMD with wavelet thresholding. Compared with denoising methods such as Empirical Mode Decomposition, Partial Ensemble Empirical Mode Decomposition, Independent Component Analysis, and wavelet thresholding alone, the signal-to-noise ratio (SNR) of the proposed method was increased to a maximum of 23.42 dB. Then, the gait cycle information was divided into training and testing sets at a 4:1 ratio, and five models—ResNet-GRU, Transformer-LSTM, CNN-GRU, ResNet, and GRU—were trained and tested individually using the processed sEMG signals as input and the hip and knee joint movement angles as output. Finally, the root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (R2) were used as evaluation metrics for the test results. The results show that for both motion conditions, the evaluation metrics of the ResNet-GRU model in the test results are superior to those of the other four models. The optimal evaluation metrics for level walking are 2.512 ± 0.415°, 1.863 ± 0.265°, and 0.979 ± 0.007, respectively, while the optimal evaluation metrics for stair climbing are 2.475 ± 0.442°, 2.012 ± 0.336°, and 0.98 ± 0.009, respectively. The method proposed in this study achieves improvements in both signal processing and predictive modeling, providing a new method for research on lower limb motion intention recognition. Full article
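
The wavelet-thresholding step and the SNR metric used to compare denoising methods can be sketched generically. This is standard soft-threshold denoising with PyWavelets, not the paper’s PO-SVMD pipeline; the wavelet choice, decomposition level, and universal threshold are assumptions.

```python
# Generic wavelet soft-threshold denoising plus an SNR (dB) metric; the sine
# wave below is a placeholder for a real sEMG recording.
import numpy as np
import pywt


def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]


def snr_db(reference, denoised):
    noise = reference - denoised
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))


if __name__ == "__main__":
    t = np.linspace(0, 1, 2000)
    clean = np.sin(2 * np.pi * 5 * t)                 # stand-in for an sEMG envelope
    noisy = clean + 0.2 * np.random.randn(t.size)
    print(f"SNR after denoising: {snr_db(clean, wavelet_denoise(noisy)):.2f} dB")
```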

29 pages, 32678 KB  
Article
An Active Control Method for a Lower Limb Rehabilitation Robot with Human Motion Intention Recognition
by Zhuangqun Song, Peng Zhao, Xueji Wu, Rong Yang and Xueshan Gao
Sensors 2025, 25(3), 713; https://doi.org/10.3390/s25030713 - 24 Jan 2025
Cited by 5 | Viewed by 2885
Abstract
This study presents a method for the active control of a follow-up lower extremity exoskeleton rehabilitation robot (LEERR) based on human motion intention recognition. Initially, to effectively support body weight and compensate for the vertical movement of the human center of mass, a vision-driven follow-and-track control strategy is proposed. Subsequently, an algorithm for recognizing human motion intentions based on machine learning is proposed for human–robot collaboration tasks. A muscle–machine interface is constructed using a bi-directional long short-term memory (BiLSTM) network, which decodes multichannel surface electromyography (sEMG) signals into flexion and extension angles of the hip and knee joints in the sagittal plane. The hyperparameters of the BiLSTM network are optimized using the quantum-behaved particle swarm optimization (QPSO) algorithm, resulting in a QPSO-BiLSTM hybrid model that enables continuous real-time estimation of human motion intentions. Further, to address the uncertain nonlinear dynamics of the wearer–exoskeleton robot system, a dual radial basis function neural network adaptive sliding mode controller (DRBFNNASMC) is designed to generate control torques, thereby enabling the precise tracking of motion trajectories generated by the muscle–machine interface. Experimental results indicate that the follow-up-assisted frame can accurately track human motion trajectories. The QPSO-BiLSTM network outperforms traditional BiLSTM and PSO-BiLSTM networks in predicting continuous lower limb motion, while the DRBFNNASMC controller demonstrates superior gait tracking performance compared to the fuzzy compensated adaptive sliding mode control (FCASMC) algorithm and the traditional proportional–integral–derivative (PID) control algorithm. Full article
(This article belongs to the Section Wearables)
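
The muscle–machine interface concept — a bidirectional LSTM mapping a window of multichannel sEMG to hip and knee angles — can be sketched minimally in PyTorch. This is not the paper’s QPSO-BiLSTM (no hyperparameter optimization is shown); BiLSTMAngleDecoder, the layer sizes, and the sliding-window setup are assumptions.

```python
# Minimal BiLSTM regressor from an sEMG window to two joint angles.
import torch
import torch.nn as nn


class BiLSTMAngleDecoder(nn.Module):
    def __init__(self, emg_channels=8, hidden=64, n_angles=2):
        super().__init__()
        self.lstm = nn.LSTM(emg_channels, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_angles)   # hip and knee angle (sagittal plane)

    def forward(self, emg):
        # emg: (batch, window_length, emg_channels)
        out, _ = self.lstm(emg)
        return self.head(out[:, -1, :])               # angle estimate at the window end


if __name__ == "__main__":
    model = BiLSTMAngleDecoder()
    window = torch.randn(16, 100, 8)                  # 16 windows, 100 samples, 8 channels
    loss = nn.MSELoss()(model(window), torch.randn(16, 2))
    loss.backward()                                   # standard regression training step
```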

20 pages, 2957 KB  
Article
Recognition of Human Lower Limb Motion and Muscle Fatigue Status Using a Wearable FES-sEMG System
by Wenbo Zhang, Ziqian Bai, Pengfei Yan, Hongwei Liu and Li Shao
Sensors 2024, 24(7), 2377; https://doi.org/10.3390/s24072377 - 8 Apr 2024
Cited by 18 | Viewed by 4597
Abstract
Functional electrical stimulation (FES) devices are widely employed for clinical treatment, rehabilitation, and sports training. However, existing FES devices are inadequate in terms of wearability and cannot recognize a user’s intention to move or muscle fatigue. These issues impede the user’s ability to incorporate FES devices into their daily life. In response to these issues, this paper introduces a novel wearable FES system based on customized textile electrodes, driven by movement intention decoded from surface electromyography (sEMG) signals. A parallel-structured deep learning model is used with the wearable FES device, which enables the identification of both the type of motion and muscle fatigue status without being affected by electrical stimulation. Five subjects took part in an experiment to test the proposed system, and the results showed that our method achieved a high level of accuracy for lower limb motion recognition and muscle fatigue status detection. The preliminary results presented here demonstrate the effectiveness of the novel wearable FES system in terms of recognizing lower limb motions and muscle fatigue status. Full article
(This article belongs to the Special Issue Wearable Sensors for Physical Activity Monitoring and Motion Control)
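
The parallel (multi-task) structure described here — one shared sEMG feature extractor with separate outputs for motion type and fatigue status — can be sketched as follows. The architecture, class counts, and joint loss below are assumptions, not the paper’s model.

```python
# Sketch of a shared 1-D CNN backbone with two classification heads trained
# with a summed cross-entropy loss.
import torch
import torch.nn as nn


class ParallelEMGClassifier(nn.Module):
    def __init__(self, emg_channels=4, n_motions=6, n_fatigue_levels=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(emg_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.motion_head = nn.Linear(64, n_motions)        # lower limb motion type
        self.fatigue_head = nn.Linear(64, n_fatigue_levels)  # muscle fatigue status

    def forward(self, x):                                  # x: (batch, emg_channels, samples)
        z = self.backbone(x)
        return self.motion_head(z), self.fatigue_head(z)


if __name__ == "__main__":
    model = ParallelEMGClassifier()
    x = torch.randn(8, 4, 512)
    motion_logits, fatigue_logits = model(x)
    # Joint training: sum the two cross-entropy losses.
    loss = (nn.CrossEntropyLoss()(motion_logits, torch.randint(0, 6, (8,))) +
            nn.CrossEntropyLoss()(fatigue_logits, torch.randint(0, 3, (8,))))
    loss.backward()
```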

13 pages, 3011 KB  
Article
Motion-Based Control Strategy of Knee Actuated Exoskeletal Gait Orthosis for Hemiplegic Patients: A Feasibility Study
by Yoon Heo, Hyuk-Jae Choi, Jong-Won Lee, Hyeon-Seok Cho and Gyoo-Suk Kim
Appl. Sci. 2024, 14(1), 301; https://doi.org/10.3390/app14010301 - 29 Dec 2023
Cited by 9 | Viewed by 2173
Abstract
In this study, we developed a unilateral knee actuated exoskeletal gait orthosis (KAEGO) for hemiplegic patients to conduct gait training in real-world environments without spatial limitations. For this purpose, it is crucial that the controller interacts with the patient’s gait intentions. This study newly proposes a simple gait control strategy that detects the gait state and recognizes the patient’s gait intentions using only the motion information of the lower limbs obtained from an embedded inertial measurement unit (IMU) and a knee angle sensor, without employing ground reaction force (GRF) sensors. In addition, a torque generation method based on negative damping was applied to determine the appropriate amount of assistive torque to support flexion or extension movements of the knee joint. To validate the performance of the developed KAEGO and the effectiveness of our proposed gait control strategy, we conducted walking tests with a hemiplegic patient. These tests included verifying the accuracy of gait recognition and comparing the metabolic cost of transport (COT). The experimental results confirmed that our gait control approach effectively recognizes the patient’s gait intentions without GRF sensors and reduces the metabolic cost by approximately 8% compared to not wearing the device. Full article
(This article belongs to the Special Issue Rehabilitation Robot with Intelligent Sensing System)
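
A negative-damping assist law of the kind mentioned can be illustrated with a few lines: the actuator injects torque in the direction of the measured knee angular velocity rather than opposing it. The gain and torque limit below are illustrative assumptions, not the KAEGO controller’s values.

```python
# Back-of-the-envelope sketch of a negative-damping assist rule for the knee.

def negative_damping_torque(knee_velocity_rad_s: float,
                            gain_nm_s_per_rad: float = 1.5,
                            torque_limit_nm: float = 10.0) -> float:
    """Assistive torque in the same direction as the measured knee velocity."""
    tau = gain_nm_s_per_rad * knee_velocity_rad_s      # positive gain -> "negative damping"
    return max(-torque_limit_nm, min(torque_limit_nm, tau))


if __name__ == "__main__":
    for omega in (-2.0, -0.5, 0.0, 0.5, 2.0):           # knee angular velocity in rad/s
        print(f"omega={omega:+.1f} rad/s -> tau={negative_damping_torque(omega):+.2f} N*m")
```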

16 pages, 2775 KB  
Article
Research on Lower Limb Step Speed Recognition Method Based on Electromyography
by Peng Zhang, Pengcheng Wu and Wendong Wang
Micromachines 2023, 14(3), 546; https://doi.org/10.3390/mi14030546 - 26 Feb 2023
Cited by 7 | Viewed by 2128
Abstract
Wearable exoskeletons play an important role in daily life, for example in helping stroke and amputation patients carry out rehabilitation training. Accurately judging the wearer’s action intention is a basic requirement for an exoskeleton to complete the corresponding task. Traditional exoskeleton control signals, such as pressure values, joint angles and acceleration values, only reflect the current motion of the human lower limbs and cannot be used to predict motion. The electromyography (EMG) signal precedes the corresponding movement, so it can serve as an input signal for predicting the target’s gait speed and movement. In this study, the generalization ability of a BP neural network is fused with the temporal modeling capability of a hidden Markov chain. Experiments show that, using the same training samples, the recognition accuracy of a three-layer BP neural network alone is only 91%, while the fusion discriminant model proposed in this paper reaches 95.1%. The results show that fusing a BP neural network with a hidden Markov chain is well suited to the task of recognizing the target step speed with a wearable exoskeleton. Full article
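
The fusion concept — frame-wise class posteriors from a BP (multilayer perceptron) network smoothed over time by a hidden Markov chain — can be sketched with a small Viterbi decoder. The posteriors and transition matrix below are made up for illustration; the paper’s actual fusion scheme may differ.

```python
# Viterbi decoding over step-speed classes, given per-window posteriors that
# would come from the network's softmax output.
import numpy as np


def viterbi(posteriors, transition, prior):
    """posteriors: (T, K) per-window class probabilities from the network."""
    T, K = posteriors.shape
    log_delta = np.log(prior) + np.log(posteriors[0])
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = log_delta[:, None] + np.log(transition)      # (K, K) transition scores
        backptr[t] = scores.argmax(axis=0)
        log_delta = scores.max(axis=0) + np.log(posteriors[t])
    path = [int(log_delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]


if __name__ == "__main__":
    # Three step-speed classes; noisy per-window posteriors (e.g., MLP softmax output).
    post = np.array([[0.7, 0.2, 0.1], [0.4, 0.5, 0.1], [0.6, 0.3, 0.1],
                     [0.2, 0.2, 0.6], [0.1, 0.2, 0.7]])
    trans = np.array([[0.8, 0.15, 0.05], [0.1, 0.8, 0.1], [0.05, 0.15, 0.8]])
    print(viterbi(post, trans, prior=np.full(3, 1 / 3)))       # temporally smoothed class sequence
```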

14 pages, 834 KB  
Review
Measurement, Evaluation, and Control of Active Intelligent Gait Training Systems—Analysis of the Current State of the Art
by Yi Han, Chenhao Liu, Bin Zhang, Ning Zhang, Shuoyu Wang, Meimei Han, João P. Ferreira, Tao Liu and Xiufeng Zhang
Electronics 2022, 11(10), 1633; https://doi.org/10.3390/electronics11101633 - 20 May 2022
Cited by 7 | Viewed by 3778
Abstract
Gait recognition and rehabilitation have been research hotspots in recent years due to their importance to medical care and elderly care. Active intelligent rehabilitation and assistance systems for the lower limbs integrate mechanical design, sensing technology, intelligent control, and robotics technology, and are one of the effective ways to address these needs. In this review, crucial technologies and typical prototypes of active intelligent rehabilitation and assistance systems for gait training are introduced. The limitations, challenges, and future directions in gait measurement and intention recognition, gait rehabilitation evaluation, and gait training control strategies are discussed. To address the core problems of sensing, evaluation, and control in active intelligent gait training systems, possible future research directions are proposed. Firstly, different sensing methods need to be proposed for decoding human movement intention. Secondly, human walking ability evaluation models should be developed by integrating clinical knowledge with lower limb movement data. Lastly, personalized gait training strategies for collaborative control of human–machine systems need to be implemented in clinical applications. Full article
(This article belongs to the Special Issue Physical Diagnosis and Rehabilitation Technologies)

24 pages, 2026 KB  
Review
Identification of Lower-Limb Motor Tasks via Brain–Computer Interfaces: A Topical Overview
by Víctor Asanza, Enrique Peláez, Francis Loayza, Leandro L. Lorente-Leyva and Diego H. Peluffo-Ordóñez
Sensors 2022, 22(5), 2028; https://doi.org/10.3390/s22052028 - 4 Mar 2022
Cited by 28 | Viewed by 7888
Abstract
Recent engineering and neuroscience applications have led to the development of brain–computer interface (BCI) systems that improve the quality of life of people with motor disabilities. In this area, a significant number of studies have been conducted on identifying or classifying upper-limb movement intentions. By contrast, few works have addressed movement intention identification for the lower limbs. Nevertheless, lower-limb neurorehabilitation is a major topic in medical settings, as many people suffer from lower-limb mobility problems, including those diagnosed with neurodegenerative disorders such as multiple sclerosis and people with hemiplegia or quadriplegia. In particular, conventional pattern recognition (PR) systems are among the most suitable computational tools for electroencephalography (EEG) signal analysis, as explicit knowledge of the features involved in the PR process is crucial for both improving signal classification performance and providing more interpretability. In this regard, there is a real need for overview and comparative studies gathering benchmark and state-of-the-art PR techniques that allow for a deeper understanding of them and a proper selection of a specific technique. This study conducted a topical overview of specialized papers covering lower-limb motor task identification through PR-based BCI/EEG signal analysis systems. To do so, we first established search terms and inclusion and exclusion criteria to find the most relevant papers on the subject. As a result, we identified the 22 most relevant papers. Next, we reviewed their experimental methodologies for recording EEG signals during the execution of lower-limb tasks. In addition, we reviewed the algorithms used in the preprocessing, feature extraction, and classification stages. Finally, we compared all the algorithms and determined which of them are the most suitable in terms of accuracy. Full article
(This article belongs to the Section Biomedical Sensors)

22 pages, 4467 KB  
Article
Recognition of Upper Limb Action Intention Based on IMU
by Jian-Wei Cui, Zhi-Gang Li, Han Du, Bing-Yan Yan and Pu-Dong Lu
Sensors 2022, 22(5), 1954; https://doi.org/10.3390/s22051954 - 2 Mar 2022
Cited by 29 | Viewed by 5629
Abstract
Using motion information of the upper limb to control a prosthetic hand has become a hotspot of current research. The operation of the prosthetic hand must also be coordinated with the user’s intention. Therefore, identifying the action intention of the upper limb from its motion information is key to controlling the prosthetic hand. Since a wearable inertial sensor offers the advantages of small size, low cost, and little interference from the external environment, we employ an inertial sensor to collect angle and angular velocity data during movement of the upper limb. Aiming at the classification of putting on socks, putting on shoes, and tying shoelaces, this paper proposes a recognition model based on the Dynamic Time Warping (DTW) algorithm applied to motion units. Based on whether the upper limb is moving, the complete motion data are divided into several motion units. Considering the delay associated with controlling the prosthetic hand, this paper only performs feature extraction on the first and second motion units, and recognizes the action with different classifiers. The experimental results reveal that the motion-unit-based DTW algorithm achieves a higher recognition rate and lower running time: the recognition rate reaches as high as 99.46%, and the average running time is 8.027 ms. In order to enable the prosthetic hand to understand the grasping intention of the upper limb, this paper proposes a Generalized Regression Neural Network (GRNN) model based on 10-fold cross-validation. The motion state of the upper limb is subdivided, and the static state is used as the signal for controlling the prosthetic hand. This paper applies a 10-fold cross-validation method to train the neural network model and find the optimal smoothing parameter. In addition, the recognition performance of different neural networks is compared. The experimental results show that the GRNN model based on 10-fold cross-validation exhibits a high accuracy rate, reaching 98.28%. Finally, the two algorithms proposed in this paper are implemented in an experiment in which the prosthetic hand reproduces an action, and the feasibility and practicability of the algorithms are verified experimentally. Full article
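
The DTW-based matching idea — comparing a segmented motion unit against stored templates and taking the nearest label — can be sketched briefly. The templates, the segmentation, and the random traces below are placeholders, not the paper’s data.

```python
# Multivariate DTW distance plus nearest-template classification of a motion unit.
import numpy as np


def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW over multivariate sequences a (n, d) and b (m, d)."""
    n, m = len(a), len(b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return float(acc[n, m])


def classify_motion_unit(unit, templates):
    """templates: list of (label, sequence) pairs; returns the nearest label."""
    return min(templates, key=lambda t: dtw_distance(unit, t[1]))[0]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sock = np.cumsum(rng.normal(size=(60, 3)), axis=0)     # fake angle/velocity traces
    shoe = np.cumsum(rng.normal(size=(80, 3)), axis=0)
    probe = sock[5:55] + 0.1 * rng.normal(size=(50, 3))    # noisy piece of the "sock" template
    print(classify_motion_unit(probe, [("sock", sock), ("shoe", shoe)]))
```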

15 pages, 34275 KB  
Article
A Lightweight Exoskeleton-Based Portable Gait Data Collection System
by Md Rejwanul Haque, Masudul H. Imtiaz, Samuel T. Kwak, Edward Sazonov, Young-Hui Chang and Xiangrong Shen
Sensors 2021, 21(3), 781; https://doi.org/10.3390/s21030781 - 24 Jan 2021
Cited by 19 | Viewed by 6952
Abstract
For the controller of wearable lower-limb assistive devices, quantitative understanding of human locomotion serves as the basis for human motion intent recognition and joint-level motion control. Traditionally, the required gait data are obtained in gait research laboratories, utilizing marker-based optical motion capture systems. Despite the high accuracy of measurement, marker-based systems are largely limited to laboratory environments, making it nearly impossible to collect the desired gait data in real-world daily-living scenarios. To address this problem, the authors propose a novel exoskeleton-based gait data collection system, which provides the capability of conducting independent measurement of lower limb movement without the need for stationary instrumentation. The basis of the system is a lightweight exoskeleton with articulated knee and ankle joints. To minimize interference with a wearer’s natural lower-limb movement, a unique two-degrees-of-freedom joint design is incorporated, integrating a primary degree of freedom for joint motion measurement with a passive degree of freedom to allow natural joint movement and improve the comfort of use. In addition to the joint-embedded goniometers, the exoskeleton also features multiple positions for the mounting of inertial measurement units (IMUs) as well as foot-plate-embedded force sensing resistors to measure the foot plantar pressure. All sensor signals are routed to a microcontroller for data logging and storage. To validate the exoskeleton-provided joint angle measurement, a comparison study on three healthy participants was conducted, which involved locomotion experiments in various modes, including overground walking, treadmill walking, and sit-to-stand and stand-to-sit transitions. Joint angle trajectories measured with an eight-camera motion capture system served as the benchmark for comparison. Experimental results indicate that the exoskeleton-measured joint angle trajectories closely match those obtained through the optical motion capture system in all modes of locomotion (correlation coefficients of 0.97 and 0.96 for knee and ankle measurements, respectively), clearly demonstrating the accuracy and reliability of the proposed gait measurement system. Full article
(This article belongs to the Special Issue Feature Papers in Physical Sensors Section 2020)
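
The validation metric reported here (correlation coefficients of 0.97 and 0.96) boils down to a Pearson correlation between the exoskeleton goniometer trajectory and the mocap trajectory for the same joint, after alignment to a common time base. The synthetic signals below are placeholders for real recordings.

```python
# Pearson correlation between two joint-angle trajectories.
import numpy as np


def pearson_r(x: np.ndarray, y: np.ndarray) -> float:
    return float(np.corrcoef(x, y)[0, 1])


if __name__ == "__main__":
    t = np.linspace(0, 10, 1000)                             # 10 s of walking at 100 Hz
    mocap_knee = 30 + 25 * np.sin(2 * np.pi * 1.0 * t)       # idealized knee angle (deg)
    exo_knee = mocap_knee + np.random.normal(0, 2, t.size)   # goniometer with measurement noise
    print(f"knee correlation: {pearson_r(exo_knee, mocap_knee):.3f}")
```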

19 pages, 5442 KB  
Article
Research on Lower Limb Motion Recognition Based on Fusion of sEMG and Accelerometer Signals
by Qingsong Ai, Yanan Zhang, Weili Qi, Quan Liu and Kun Chen
Symmetry 2017, 9(8), 147; https://doi.org/10.3390/sym9080147 - 6 Aug 2017
Cited by 64 | Viewed by 6808
Abstract
Since surface electromyographic (sEMG) signals are non-invasive and capable of reflecting humans’ motion intention, they have been widely used for the motion recognition of upper limbs. However, limited research has been conducted for lower limbs, because the sEMGs of lower limbs are easily affected by body gravity and muscle jitter. In this paper, sEMG signals and accelerometer signals are acquired and fused to recognize the motion patterns of lower limbs. A curve fitting method based on median filtering is proposed to remove accelerometer noise. As for movement onset detection, an sEMG power spectral correlation coefficient method is used to detect the start and end points of active signals. Then, the time-domain features and wavelet coefficients of sEMG signals are extracted, and a dynamic time warping (DTW) distance is used for feature extraction of acceleration signals. Finally, five lower limb motions are classified and recognized using Gaussian kernel-based linear discriminant analysis (LDA) and a support vector machine (SVM), respectively. The results prove that the fused feature-based classification outperforms the classification with only sEMG signals or accelerometer signals, and the fused feature can achieve 95% or higher recognition accuracy, demonstrating the validity of the proposed method. Full article
(This article belongs to the Special Issue Information Technology and Its Applications)
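
The flavor of such a fusion pipeline — classic time-domain sEMG features concatenated with an accelerometer-derived feature, then fed to a classifier — can be sketched with scikit-learn. The feature set (MAV, RMS, zero crossings, waveform length), window length, and random data are illustrative assumptions, not the paper’s exact configuration.

```python
# Time-domain sEMG features fused with an accelerometer feature, classified by an SVM.
import numpy as np
from sklearn.svm import SVC


def td_features(window: np.ndarray) -> np.ndarray:
    """window: (samples, channels) of sEMG; returns per-channel MAV/RMS/ZC/WL."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([mav, rms, zc, wl])


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # 200 windows, 256 samples, 4 sEMG channels; one accelerometer feature per window
    # (e.g., a DTW distance to a reference template).
    emg = rng.normal(size=(200, 256, 4))
    accel_feat = rng.normal(size=(200, 1))
    X = np.hstack([np.stack([td_features(w) for w in emg]), accel_feat])
    y = rng.integers(0, 5, size=200)                     # five lower-limb motion labels
    clf = SVC(kernel="rbf").fit(X[:150], y[:150])
    # Accuracy is chance-level here because the data are random; the pipeline is the point.
    print("held-out accuracy on random data:", clf.score(X[150:], y[150:]))
```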