Fusion Learning for sEMG Recognition of Multiple Upper-Limb Rehabilitation Movements
Abstract
1. Introduction
- Data representation stage: to preserve the validity and integrity of the time–frequency characteristics, a multiscale time–frequency information fusion method is proposed to capture the dynamic, correlated information in multichannel sEMG signals (a minimal sketch of this representation follows the list).
- Feature extraction stage: a multiple feature fusion network (MFFN) is designed to combine the current-layer and cross-layer features of a CNN, reducing the loss of time–frequency information of the sEMG signals during the convolution operations.
- Classification stage: a deep belief network (DBN) is introduced as the classification model of the MFFN to realize abstract representation and self-reconstruction of the time–frequency features of sEMG signals, supporting a larger set of upper-limb movement classes.
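To make the data-representation stage concrete, the sketch below shows one plausible reading of it. This is a minimal illustration, not the authors' exact procedure: it assumes a 20–450 Hz band-pass filter, a complex Morlet CWT as a stand-in for the generalized Morse wavelets cited in Section 2.2.2, and averaging the per-channel scalograms into three groups so the result matches the network's 224 × 224 × 3 input; the function name, band limits, scale range, and grouping are all illustrative assumptions.

```python
import numpy as np
import pywt
from scipy.signal import butter, filtfilt
from scipy.ndimage import zoom

def semg_to_tf_image(emg, fs=1000, band=(20.0, 450.0), n_scales=64, out_size=224):
    """Turn a (channels, samples) sEMG window into a fused time-frequency image.

    Hypothetical sketch: each channel is band-pass filtered, transformed with
    a complex Morlet CWT (a stand-in for the generalized Morse wavelets the
    paper cites), and the per-channel scalograms are averaged into 3 groups to
    form an image matching the network's 224 x 224 x 3 input. Assumes >= 3 channels.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    scales = np.geomspace(2, 128, n_scales)          # multiscale analysis

    scalograms = []
    for ch in emg:
        ch = filtfilt(b, a, ch)                      # zero-phase band-pass
        coef, _ = pywt.cwt(ch, scales, "cmor1.5-1.0", sampling_period=1 / fs)
        mag = np.abs(coef)                           # scalogram magnitude
        mag = zoom(mag, (out_size / mag.shape[0], out_size / mag.shape[1]))
        scalograms.append(mag / (mag.max() + 1e-12)) # per-channel normalization

    # Fuse the channel scalograms into 3 groups -> (224, 224, 3) tensor.
    groups = np.array_split(np.stack(scalograms), 3)
    return np.stack([g.mean(axis=0) for g in groups], axis=-1)
```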
2. Methodology
2.1. Data Acquisition
2.2. Data Representation
2.2.1. Signal Preprocessing
2.2.2. Multiscale Time–Frequency Information Fusion Representation
2.3. Components of MFFN
2.3.1. DenseNet
2.3.2. The Deep Belief Network
2.4. Designing Architecture of MFFN
3. Experimental Results and Discussion
3.1. Parameter Selection
3.2. Experimental Comparison and Analysis
3.2.1. Longitudinal Contrast Experiment
3.2.2. Transverse Contrast Experiment
3.2.3. Analysis and Discussion
4. Conclusions and Future Outlook
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Oonishi, Y.; Oh, S.; Hori, Y. A New Control Method for Power-Assisted Wheelchair Based on the Surface Myoelectric Signal. IEEE Trans. Ind. Electron. 2010, 57, 3191–3196.
- Jang, G.; Choi, Y. EMG-based continuous control method for electric wheelchair. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 3549–3554.
- Veneman, J.F.; Kruidhof, R.; Hekman, E.E.G.; Ekkelenkamp, R.; Van Asseldonk, E.H.F.; van der Kooij, H. Design and Evaluation of the LOPES Exoskeleton Robot for Interactive Gait Rehabilitation. IEEE Trans. Neural Syst. Rehabil. Eng. 2007, 15, 379–386.
- Kiguchi, K.; Hayashi, Y. An EMG-Based Control for an Upper-Limb Power-Assist Exoskeleton Robot. IEEE Trans. Syst. Man Cybern. 2012, 42, 1064–1071.
- Clearpath Robotics Drives Robot with Arm Motions. Available online: http://www.clearpathrobotics.com/press_release/drive-robot-with-arm-motion/ (accessed on 20 February 2014).
- Kamali, T.; Boostani, R.; Parsaei, H. A Multi-Classifier Approach to MUAP Classification for Diagnosis of Neuromuscular Disorders. IEEE Trans. Neural Syst. Rehabil. Eng. 2014, 22, 191–200.
- Nair, S.S.; French, R.M.; Laroche, D.; Thomas, E. The application of machine learning algorithms to the analysis of electromyographic patterns from arthritic patients. IEEE Trans. Neural Syst. Rehabil. Eng. 2010, 18, 174–184.
- Xie, Q.; Meng, Q.; Zeng, Q.; Fan, Y.; Dai, Y.; Yu, H. Human-Exoskeleton Coupling Dynamics of a Multi-Mode Therapeutic Exoskeleton for Upper Limb Rehabilitation Training. IEEE Access 2021, 9, 61998–62007.
- Chen, S.H.; Lien, W.M.; Wang, W.W. Assistive Control System for Upper Limb Rehabilitation Robot. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 24, 1199–1209.
- Wang, W.Q.; Hou, Z.G.; Cheng, L. Toward Patients' Motion Intention Recognition: Dynamics Modeling and Identification of iLeg-An LLRR Under Motion Constraints. IEEE Trans. Syst. Man Cybern. Syst. 2016, 46, 980–992.
- Chu, J.; Moon, I.; Lee, Y.; Kim, S.; Mun, M. A Supervised Feature-Projection-Based Real-Time EMG Pattern Recognition for Multifunction Myoelectric Hand Control. IEEE/ASME Trans. Mechatron. 2007, 12, 282–290.
- Duan, F.; Dai, L. Recognizing the Gradual Changes in sEMG Characteristics Based on Incremental Learning of Wavelet Neural Network Ensemble. IEEE Trans. Ind. Electron. 2017, 64, 4276–4286.
- Xiao, F.; Wang, Y.; He, L.; Wang, H.; Li, W.; Liu, Z. Motion Estimation from Surface Electromyogram Using Adaboost Regression and Average Feature Values. IEEE Access 2019, 7, 13121–13134.
- He, C.; Zhuo, T.; Ou, D.; Liu, M.; Liao, M. Nonlinear Compressed Sensing-Based LDA Topic Model for Polarimetric SAR Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 972–982.
- Shi, W.T.; Lyu, Z.J.; Tang, S.T.; Chia, T.L.; Yang, C.Y. A bionic hand controlled by hand gesture recognition based on surface EMG signals: A preliminary study. Biocybern. Biomed. Eng. 2017, 38, 126–135.
- Ding, H.J.; He, Q.; Zeng, L.; Zhou, Y.J.; Shen, M.M.; Dan, G. Motion intent recognition of individual fingers based on mechanomyogram. Pattern Recognit. Lett. 2017, 88, 41–48.
- Khezri, M.; Jahed, M. A Neuro-Fuzzy Inference System for sEMG-Based Identification of Hand Motion Commands. IEEE Trans. Ind. Electron. 2011, 58, 1952–1960.
- Chu, J.U.; Moon, I.; Mun, M.S. A Real-Time EMG Pattern Recognition System Based on Linear-Nonlinear Feature Projection for a Multifunction Myoelectric Hand. IEEE Trans. Biomed. Eng. 2006, 53, 2232–2239.
- Tsujimura, T.; Hashimoto, T.; Izumi, K. Genetic reasoning for finger sign identification based on forearm electromyogram. Int. Conf. Appl. Electron. 2014, 7, 297–302.
- Ajiboye, A.B.; Weir, R.F. A heuristic fuzzy logic approach to EMG pattern recognition for multifunctional prosthesis control. IEEE Trans. Neural Syst. Rehabil. Eng. 2005, 13, 280–291.
- Momen, K.; Krishnan, S.; Chau, T. Real-Time Classification of Forearm Electromyographic Signals Corresponding to User-Selected Intentional Movements for Multifunction Prosthesis Control. IEEE Trans. Neural Syst. Rehabil. Eng. 2007, 15, 535–542.
- Geethanjali, P.; Ray, K.K. A Low-Cost Real-Time Research Platform for EMG Pattern Recognition-Based Prosthetic Hand. IEEE/ASME Trans. Mechatron. 2015, 20, 1948–1955.
- Kiguchi, K.; Tanaka, T.; Fukuda, T. Neuro-fuzzy control of a robotic exoskeleton with EMG signals. IEEE Trans. Fuzzy Syst. 2004, 12, 481–490.
- Yang, Y.K.; Duan, F.; Ren, J.; Xue, J.N.; Lv, Y.Z.; Zhu, C.; Yokoi, H.S. Performance Comparison of Gesture Recognition System Based on Different Classifiers. IEEE Trans. Cogn. Dev. Syst. 2021, 13, 141–150.
- Duan, F.; Dai, L.; Chang, W.; Chen, Z.; Zhu, C.; Li, W. sEMG-Based Identification of Hand Motion Commands Using Wavelet Neural Network Combined with Discrete Wavelet Transform. IEEE Trans. Ind. Electron. 2016, 63, 1923–1934.
- Wei, W.; Wong, Y.; Du, Y.; Hu, Y.; Kankanhalli, M.; Geng, W. A multi-stream convolutional neural network for sEMG-based gesture recognition in muscle-computer interface. Pattern Recognit. Lett. 2019, 119, 131–138.
- Azami, H.; Hassanpour, H.; Escudero, J.; Sanei, S. An intelligent approach for variable size segmentation of non-stationary signals. J. Adv. Res. 2015, 6, 687–698.
- Gokgoz, E.; Subasi, A. Comparison of decision tree algorithms for EMG signal classification using DWT. Biomed. Signal Process. Control 2015, 18, 138–144.
- Wu, Y.; Zheng, B.; Zhao, Y. Dynamic Gesture Recognition Based on LSTM-CNN. In Proceedings of the 2018 Chinese Automation Congress (CAC), Xi'an, China, 30 November–2 December 2018; pp. 2446–2450.
- Alkan, A.; Günay, M. Identification of EMG signals using discriminant analysis and SVM classifier. Expert Syst. Appl. 2012, 39, 44–47.
- Godfrey, A.; Conway, R.; Leonard, M.; Meagher, D.; Olaighin, G.M. A Continuous Wavelet Transform and Classification Method for Delirium Motoric Subtyping. IEEE Trans. Neural Syst. Rehabil. Eng. 2009, 17, 298–307.
- Shi, X.; Qin, P.; Zhu, J.; Zhai, M.; Shi, W. Feature Extraction and Classification of Lower Limb Motion Based on sEMG Signals. IEEE Access 2020, 8, 132882–132892.
- De Luca, C.J.; Gilmore, L.D.; Kuznetsov, M.; Roy, S.H. Filtering the surface EMG signal: Movement artifact and baseline noise contamination. J. Biomech. 2010, 43, 1573–1579.
- Potvin, J.R.; Brown, S.H.M. Less is more: High pass filtering, to remove up to 99% of the surface EMG signal power, improves EMG-based biceps brachii muscle force estimates. J. Electromyogr. Kinesiol. 2004, 14, 389–399.
- Li, D.L.; Wang, J.H.; Xu, J.C.; Fang, X.K. Densely Feature Fusion Based on Convolutional Neural Networks for Motor Imagery EEG Classification. IEEE Access 2019, 7, 132720–132730.
- Lilly, J.M.; Olhede, S.C. Generalized Morse Wavelets as a Superfamily of Analytic Wavelets. IEEE Trans. Signal Process. 2012, 60, 6036–6041.
- Lilly, J.M.; Olhede, S.C. Higher-Order Properties of Analytic Wavelets. IEEE Trans. Signal Process. 2009, 57, 146–160.
- Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269.
- Sarikaya, R.; Hinton, G.E.; Deoras, A. Application of Deep Belief Networks for Natural Language Understanding. IEEE/ACM Trans. Audio Speech Lang. Process. 2014, 22, 778–784.
- Hinton, G.E.; Osindero, S.; Teh, Y.W. A Fast Learning Algorithm for Deep Belief Nets. Neural Comput. 2006, 18, 1527–1554.
- Chen, Y.; Zhao, X.; Jia, X. Spectral-Spatial Classification of Hyperspectral Data Based on Deep Belief Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2381–2392.
- Pan, T.Y.; Tsai, W.L.; Chang, C.Y.; Yeh, C.W.; Hu, M.C. A Hierarchical Hand Gesture Recognition Framework for Sports Referee Training-Based EMG and Accelerometer Sensors. IEEE Trans. Cybern. 2020, 10, 1–12.
- Kalayeh, M.M.; Shah, M. Training Faster by Separating Modes of Variation in Batch-Normalized Models. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 1483–1500.
- Chen, Z.D.; Deng, L.; Li, G.Q.; Sun, J.W.; Hu, X.; Liang, L.; Ding, Y.F.; Xie, Y. Effective and Efficient Batch Normalization Using a Few Uncorrelated Data for Statistics Estimation. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 348–362.
- Banerjee, C.; Mukherjee, T.; Pasiliao, E. Feature representations using the reflected rectified linear unit (RReLU) activation. Big Data Min. Anal. 2020, 3, 102–120.
- Deng, R.; Liu, S. Relative Depth Order Estimation Using Multiscale Densely Connected Convolutional Networks. IEEE Access 2019, 7, 38630–38643.
- Wang, Z.; Liu, G.; Tian, G. A Parameter Efficient Human Pose Estimation Method Based on Densely Connected Convolutional Module. IEEE Access 2018, 6, 58056–58063.
- Matsubara, T.; Morimoto, J. Bilinear Modeling of EMG Signals to Extract User-Independent Features for Multiuser Myoelectric Interface. IEEE Trans. Biomed. Eng. 2013, 60, 2205–2213.
- Ye, Q.; Chu, M.; Grethler, M. Upper Limb Motion Recognition Using Gated Convolution Neural Network via Multi-Channel sEMG. In Proceedings of the 2021 IEEE International Conference on Power Electronics, Computer Applications (ICPECA), Shenyang, China, 22–24 January 2021; pp. 397–402.
Layers | Output Size | Multiple Feature Fusion Network
---|---|---
Input Layer | 224 × 224 × 3 | --
Convolution | 112 × 112 × 64 | filter size: [7,7], num. filters: 64, stride: 2, padding: [3,3,3,3]
BN | 112 × 112 × 64 | channels: 64
ReLU | 112 × 112 × 64 | --
Pooling | 56 × 56 × 64 | [3,3] max pool, stride: [2,2]
Dense Block (1) | 56 × 56 × 256 | [BN, ReLU, 1 × 1 conv, BN, ReLU, 3 × 3 conv] × 6
Transition Layer (1) | 56 × 56 × 256 | BN
 | 56 × 56 × 256 | ReLU
 | 56 × 56 × 128 | [1,1] conv
 | 28 × 28 × 128 | [2,2] average pool, stride: 2
Dense Block (2) | 28 × 28 × 512 | [BN, ReLU, 1 × 1 conv, BN, ReLU, 3 × 3 conv] × 12
Transition Layer (2) | 28 × 28 × 512 | BN
 | 28 × 28 × 512 | ReLU
 | 28 × 28 × 256 | [1,1] conv
 | 14 × 14 × 256 | [2,2] average pool, stride: 2
Classification Layer | 1024 × 1 | 1024 × 50,176 RBM
 | 1500 × 1 | 1500 × 1024 RBM
 | 12 × 1 | 12-way fully connected, softmax
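The table maps almost line-for-line onto a network definition. Below is a minimal PyTorch sketch assembled solely from the table, not the authors' released code: the growth rate of 32 is inferred from the dense-block output sizes (64 + 6 × 32 = 256 and 128 + 12 × 32 = 512), a padding of 1 on the max pool is assumed since it is required to reach the listed 56 × 56 output, and the DBN classifier is approximated as stacked sigmoid layers, omitting the layer-wise RBM pre-training described in Section 2.3.2.

```python
import torch
import torch.nn as nn

def dense_layer(in_ch, growth=32):
    # BN -> ReLU -> 1x1 conv -> BN -> ReLU -> 3x3 conv, per the dense block rows
    return nn.Sequential(
        nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, 4 * growth, 1, bias=False),
        nn.BatchNorm2d(4 * growth), nn.ReLU(inplace=True),
        nn.Conv2d(4 * growth, growth, 3, padding=1, bias=False),
    )

class DenseBlock(nn.Module):
    def __init__(self, in_ch, n_layers, growth=32):
        super().__init__()
        self.layers = nn.ModuleList(
            dense_layer(in_ch + i * growth, growth) for i in range(n_layers)
        )
    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)  # fuse current- and cross-layer features
        return x

def transition(in_ch, out_ch):
    # BN -> ReLU -> 1x1 conv -> 2x2 average pool, per the transition layer rows
    return nn.Sequential(
        nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, 1, bias=False),
        nn.AvgPool2d(2, stride=2),
    )

class MFFN(nn.Module):
    def __init__(self, n_classes=12):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),     # padding=1 assumed for 56 x 56
        )
        self.block1 = DenseBlock(64, 6)               # -> 56 x 56 x 256
        self.trans1 = transition(256, 128)            # -> 28 x 28 x 128
        self.block2 = DenseBlock(128, 12)             # -> 28 x 28 x 512
        self.trans2 = transition(512, 256)            # -> 14 x 14 x 256
        # DBN classifier approximated by stacked sigmoid layers (RBM pre-training omitted)
        self.classifier = nn.Sequential(
            nn.Flatten(),                             # 14 * 14 * 256 = 50,176
            nn.Linear(50176, 1024), nn.Sigmoid(),
            nn.Linear(1024, 1500), nn.Sigmoid(),
            nn.Linear(1500, n_classes),
        )
    def forward(self, x):
        x = self.trans2(self.block2(self.trans1(self.block1(self.stem(x)))))
        return self.classifier(x)                     # softmax applied in the loss
```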
Test Subjects | MFFN_FC | MFFN_DBN
---|---|---
subject 1, subject 2 | 83.94% | 86.10%
subject 3, subject 4 | 63.23% | 65.11%
subject 5, subject 9 | 79.17% | 79.37%
subject 6, subject 10 | 75.00% | 77.78%
subject 3, subject 9 | 62.33% | 65.89%
subject 1, subject 4 | 76.39% | 77.39%
subject 7, subject 10 | 62.69% | 62.78%
subject 2, subject 5 | 83.33% | 85.10%
subject 6, subject 7 | 64.17% | 66.56%
subject 7, subject 9 | 68.28% | 68.83%
Average | 71.85% | 73.49%
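As a quick arithmetic check, the Average row is the unweighted mean of the ten subject-pair accuracies; the snippet below reproduces both reported values from the table.

```python
mffn_fc  = [83.94, 63.23, 79.17, 75.00, 62.33, 76.39, 62.69, 83.33, 64.17, 68.28]
mffn_dbn = [86.10, 65.11, 79.37, 77.78, 65.89, 77.39, 62.78, 85.10, 66.56, 68.83]
print(f"MFFN_FC:  {sum(mffn_fc) / len(mffn_fc):.2f}%")   # 71.85%
print(f"MFFN_DBN: {sum(mffn_dbn) / len(mffn_dbn):.2f}%") # 73.49%
```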
Test Subjects | LDA [14] | SVM [48] | G_CNN [49] | LCNN [26] | Proposed Method
---|---|---|---|---|---
subject 1, subject 2 | 80.26% | 85.03% | 85.29% | 80.11% | 86.10%
subject 3, subject 4 | 52.43% | 60.29% | 62.59% | 65.29% | 65.11%
subject 5, subject 9 | 70.21% | 72.98% | 78.39% | 75.22% | 79.37%
subject 6, subject 10 | 65.16% | 69.26% | 75.00% | 70.92% | 77.78%
subject 3, subject 9 | 59.98% | 65.53% | 65.39% | 65.98% | 65.89%
subject 1, subject 4 | 70.56% | 72.20% | 75.65% | 74.61% | 77.39%
subject 7, subject 10 | 60.92% | 60.89% | 61.85% | 61.79% | 62.78%
subject 2, subject 5 | 78.23% | 78.29% | 80.59% | 85.49% | 85.10%
subject 6, subject 7 | 62.51% | 64.93% | 65.29% | 63.46% | 66.56%
subject 7, subject 9 | 61.09% | 69.22% | 67.92% | 62.34% | 68.83%
Average | 66.14% | 69.86% | 71.80% | 70.52% | 73.49%
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).