Deep Learning for Sensor-Based Rehabilitation Exercise Recognition and Evaluation
Abstract
1. Introduction
2. Related Work
3. Sensor-Based Rehabilitation Exercise Recognition
3.1. State Transition Probability CNN (S-CNN)
3.1.1. Quantization
3.1.2. Flowchart of the Lempel-Ziv-Welch (LZW) Coding
3.1.3. PFSA Construction
Algorithm 2: Multipath Convolutional Neural Network

```
Task 1: Encode the sequence S and find the LZW table T
Initialization: initialize table T with the single characters occurring in S
Output: code and table T

// Pass 1: build the table
Set P = first input character in S
while not end of the sequence S
    C = next input character in S
    if P + C is in the table T
        P = P + C
    else
        add P + C to the table T
        P = C
    end if
end while

// Pass 2: emit the codes
Set P = first input character in S
while not end of the sequence S
    C = next input character in S
    if P + C is in the table T
        P = P + C
    else
        output the code for P
        P = C
    end if
end while
output the code for P
```
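For concreteness, here is a minimal Python sketch of Algorithm 1. It assumes the quantized, symbolized sensor stream is available as a string of symbols; the function name `modified_lzw` is ours, not the paper's.

```python
def modified_lzw(S):
    """Two-pass ("modified") LZW, following Algorithm 1.

    Pass 1 builds the complete phrase table T from the symbol
    sequence S; pass 2 re-scans S and emits codes against the
    finished table, so recurring patterns map to stable codes.
    """
    # Initialization: table T holds every single symbol occurring in S.
    T = {c: i for i, c in enumerate(sorted(set(S)))}

    # Pass 1: grow the table; nothing is output here.
    P = S[0]
    for C in S[1:]:
        if P + C in T:
            P = P + C
        else:
            T[P + C] = len(T)
            P = C

    # Pass 2: encode S using the completed table.
    codes = []
    P = S[0]
    for C in S[1:]:
        if P + C in T:
            P = P + C
        else:
            codes.append(T[P])
            P = C
    codes.append(T[P])  # output the code for the final phrase
    return codes, T
```

On a toy input such as `"abcabcabc"`, pass 1 learns the phrases `"ab"`, `"bc"`, `"ca"`, `"abc"`, and `"cab"`, and pass 2 then emits just three codes (one per `"abc"`) for the nine input symbols.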
3.1.4. S-CNN Model
3.2. Dynamic CNN (D-CNN)
3.2.1. Gaussian mixture model–Gaussian mixture regression (GMM-GMR) Model
3.2.2. Dynamic Assignment
3.3. Sensor-Based Rehabilitation Exercise Recognition by the Multipath CNN (MP-CNN)
Algorithm 2: Multipath Convolutional Neural Network

```
Task 1: Learning the S-CNN model
    Input: raw sensor signals
    Output: state transition probabilities
    Step 1: Quantization
    Step 2: Symbolization
    Step 3: LZW coding
    Step 4: PFSA construction
    Step 5: Obtain the state transition probabilities
    Step 6: S-CNN model training
End

Task 2: Learning the D-CNN model
    Input: raw sensor signals
    Output: classification results
    Step 1: Feature extraction (gravity and body features)
    Step 2: GMM-GMR model learning
    Step 3: Data partition and channel fitting
    Step 4: D-CNN model learning
End

Task 3: Learning the MP-CNN
    Input: raw sensor signals
    Output: classification results
    Step 1: Model setup (as depicted in Figure 4)
    Step 2: Use the trained S-CNN and D-CNN weights for pre-training
    Step 3: MP-CNN training
End
```
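To make Task 3's model setup concrete, the following PyTorch sketch fuses an S-CNN path over the state transition probability matrix with a D-CNN path over the fitted signal channels. All layer sizes and input shapes (`n_states`, `n_channels`, `window`) are illustrative assumptions, not the exact architecture of Figure 4, and the checkpoint file names are hypothetical.

```python
import torch
import torch.nn as nn

class MPCNN(nn.Module):
    """Skeleton of the multipath setup in Algorithm 2, Task 3.

    Two convolutional paths -- S-CNN over the n_states x n_states
    state transition probability matrix, D-CNN over the dynamically
    assigned signal channels -- are fused before a shared classifier.
    """
    def __init__(self, n_states=32, n_channels=9, window=128, n_classes=12):
        super().__init__()
        # S-CNN path: 2-D convolutions over the transition matrix.
        self.s_path = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(16 * (n_states // 2) ** 2, 128), nn.ReLU(),
        )
        # D-CNN path: 1-D convolutions along time over the channels.
        self.d_path = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Flatten(),
            nn.Linear(32 * (window // 2), 128), nn.ReLU(),
        )
        # Fusion: classify from the concatenated path features.
        self.classifier = nn.Linear(128 + 128, n_classes)

    def forward(self, trans_prob, signals):
        # trans_prob: (B, 1, n_states, n_states); signals: (B, n_channels, window)
        feat = torch.cat([self.s_path(trans_prob), self.d_path(signals)], dim=1)
        return self.classifier(feat)

model = MPCNN()
# Step 2: warm-start each path from the separately trained models
# ("s_cnn.pt" / "d_cnn.pt" are hypothetical checkpoint names):
# model.s_path.load_state_dict(torch.load("s_cnn.pt"))
# model.d_path.load_state_dict(torch.load("d_cnn.pt"))
```

Loading the two per-path state dicts before joint training corresponds to Step 2, after which the whole network is fine-tuned end to end in Step 3.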
4. Sensor-Based Rehabilitation Exercise Evaluation
4.1. Prediction Loss
4.2. Condition Loss
4.3. Evaluation Loss
5. Results
5.1. Dataset
5.2. Experimental Results of the MP-CNN
5.3. Action Evaluation Results
6. Discussion
6.1. Discussion about the MP-CNN Results
6.2. Discussion about the Action Evaluation Results
7. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Veeraraghavan, A.; Roy-Chowdhury, A.; Chellappa, R. Role of shape and kinematics in human movement analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 27 June–2 July 2004. [Google Scholar]
- Bobick, A.; Davis, J. The recognition of human movement using temporal templates. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 257–267. [Google Scholar] [CrossRef]
- Blank, M.; Gorelick, L.; Shechtman, E.; Irani, M.; Basri, R. Actions as space-time shapes. In Proceedings of the International Conference on Computer Vision, Beijing, China, 17–20 October 2005. [Google Scholar]
- Chennuru, S.; Chen, P.-W.; Zhu, J.; Zhang, J. Mobile life. In International Conference on Mobile Computing, Applications, and Services; Springer: Berlin/Heidelberg, Germany, 2010; pp. 263–281. [Google Scholar]
- Wu, P.; Zhu, J.; Zhang, J.Y. Mobisens: A versatile mobile sensing platform for real-world applications. Mob. Netw. Appl. 2013, 18, 60–80. [Google Scholar] [CrossRef]
- Wu, P.; Peng, H.-K.; Zhu, J.; Zhang, Y. Senscare. In International Conference on Mobile Computing, Applications, and Services; Springer: Berlin/Heidelberg, Germany, 2011; pp. 1–19. [Google Scholar]
- Forster, K.; Roggen, D.; Troster, G. Unsupervised classifier self-calibration through repeated context occurrences: Is there robustness against sensor displacement to gain? In Proceedings of the International Symposium on Wearable Computers (ISWC ’09), Linz, Austria, 4–7 September 2009; pp. 77–84. [Google Scholar]
- Parkka, J.; Ermes, M.; Korpipaa, P.; Mantyjarvi, J.; Peltola, J.; Korhonen, I. Activity classification using realistic data from wearable sensors. IEEE Trans. Inf. Technol. Biomed. 2006, 10, 119–128. [Google Scholar] [CrossRef] [PubMed]
- Kao, T.P.; Lin, C.W.; Wang, J.S. Development of a portable activity detector for daily activity recognition. In Proceedings of the IEEE International Symposium on Industrial Electronics, Seoul, Korea, 5–8 July 2009; pp. 115–120. [Google Scholar]
- Ermes, M.; Parkka, J.; Cluitmans, L. Advancing from offline to online activity recognition with wearable sensors. In Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 20–25 August 2008; pp. 4451–4454. [Google Scholar]
- Krause, A.; Siewiorek, D.; Smailagic, A.; Farringdon, J. Unsupervised, dynamic identification of physiological and activity context in wearable computing. In Proceedings of the Seventh IEEE International Symposium on Wearable Computers, White Plains, NY, USA, 21–23 October 2003; pp. 88–97. [Google Scholar]
- Huynh, T.; Schiele, B. Analyzing features for activity recognition. In Proceedings of the Joint Conference on Smart Objects and Ambient Intelligence: Innovative Context-Aware Services: Usages and Technologies, Grenoble, France, 12–14 October 2005. [Google Scholar]
- Hearst, M.A.; Dumais, S.T.; Osuna, E.; Platt, J.; Scholkopf, B. Support vector machines. IEEE Intell. Syst. Their Appl. 1998, 13, 18–28. [Google Scholar] [CrossRef]
- Rabiner, L.R. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE 1989, 77, 257–286. [Google Scholar] [CrossRef]
- Reyes-Ortiz, J.-L.; Oneto, L.; Ghio, A.; Sama, A.; Anguita, D.; Parra, X. Human activity recognition on smartphones with awareness of basic activities and postural transitions. In International Conference on Artificial Neural Networks; Springer: Cham, Switzerland, 2014; pp. 177–184. [Google Scholar]
- Nguyen-Dinh, L.-V.; Roggen, D.; Calatroni, A.; Troster, G. Improving online gesture recognition with template matching methods in accelerometer data. In Proceedings of the 12th International Conference on Intelligent Systems Design and Applications (ISDA), Kochi, India, 27–29 November 2012; pp. 831–836. [Google Scholar]
- Nguyen-Dinh, L.-V.; Calatroni, A.; Troster, G. Robust online gesture recognition with crowdsourced annotation. J. Mach. Learn. Res. 2014, 15, 3187–3220. [Google Scholar]
- Hartmann, B.; Link, N. Gesture recognition with inertial sensors and optimized DTW prototypes. In Proceedings of the IEEE International Conference on Systems Man and Cybernetics (SMC), Istanbul, Turkey, 10–13 October 2010; pp. 2102–2109. [Google Scholar]
- Kern, N.; Schiele, B.; Junker, H.; Lukowicz, P.; Troster, G. Wearable Sensing to Annotate Meeting Recordings. Pers. Ubiquitous Comput. 2003, 7, 263–274. [Google Scholar] [CrossRef]
- Lukowicz, P.; Ward, J.A.; Junker, H.; Starner, T. Recognizing Workshop Activity Using Body Worn Microphones and Accelerometers. In Pervasive Computing; Springer: Berlin/Heidelberg, Germany, 2004; pp. 18–23. [Google Scholar]
- Lee, S.W.; Mase, K. Activity and location recognition using wearable sensors. IEEE Pervasive Comput. 2002, 1, 24–32. [Google Scholar]
- Li, N.; Dai, Y.; Wang, R.; Shao, Y. Study on Action Recognition Based on Kinect and Its Application in Rehabilitation Training. In Proceedings of the IEEE Fifth International Conference on Big Data and Cloud Computing, Dalian, China, 26–28 August 2015. [Google Scholar]
- Leightley, D.; Darby, J.; Li, B.; McPhee, J.S.; Yap, M.H. Human Activity Recognition for Physical Rehabilitation. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Manchester, UK, 13–16 October 2013. [Google Scholar]
- Venkataraman, V.; Turaga, P.; Lehrer, N.; Baran, M.; Rikakis, T.; Wolf, S.L. Attractor-Shape for Dynamical Analysis of Human Movement: Applications in Stroke Rehabilitation and Action Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Portland, OR, USA, 23–28 June 2013. [Google Scholar]
- Ha, S.; Yun, J.-M.; Choi, S. Multi-modal convolutional neural networks for activity recognition. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Kowloon, China, 9–12 October 2015; pp. 3017–3022. [Google Scholar]
- Yang, J.B.; Nguyen, M.N.; San, P.P.; Li, X.L.; Krishnaswamy, S. Deep convolutional neural networks on multichannel time series for human activity recognition. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015), Buenos Aires, Argentina, 25–31 July 2015; pp. 25–31. [Google Scholar]
- Ordóñez, F.J.; Roggen, D. Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition. Sensors 2016, 16, 115. [Google Scholar] [CrossRef] [PubMed]
- Palumbo, F.; Gallicchio, C.; Pucci, R.; Micheli, A. Human activity recognition using multisensory data fusion based on reservoir computing. J. Ambient Intell. Smart Environ. 2016, 8, 87–107. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Proc. Syst. 2012, 25, 1106–1114. [Google Scholar] [CrossRef]
- Sermanet, P.; Kavukcuoglu, K.; Chintala, S.; LeCun, Y. Pedestrian detection with unsupervised multi-stage feature learning. arXiv, 2013; arXiv:1212.0142. [Google Scholar]
- You, C.-H.; Chiang, C.-K. Dynamic convolutional neural network for activity recognition. In Proceedings of the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Jeju, Korea, 13–16 December 2016. [Google Scholar]
- Wilson, J.; Najjar, N.; Hare, J.; Gupta, S. Human activity recognition using lzw-coded probabilistic finite state automata. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 3018–3023. [Google Scholar]
- Karantonis, D.M.; Narayanan, M.R.; Mathie, M.; Lovell, N.H.; Celler, B.G. Implementation of a real-time human movement classifier using a triaxial accelerometer for ambulatory monitoring. IEEE Trans. Inf. Technol. Biomed. 2006, 10, 156–167. [Google Scholar] [CrossRef] [PubMed]
- Krassnig, G.; Tantinger, D.; Hofmann, C.; Wittenberg, T.; Struck, M. User-friendly system for recognition of activities with an accelerometer. In Proceedings of the 4th International Conference on Pervasive Computing Technologies for Healthcare, Munich, Germany, 22–25 March 2010; pp. 1–8. [Google Scholar]
- Bruno, B.; Mastrogiovanni, F.; Sgorbissa, A.; Vernazza, T.; Zaccaria, R. Analysis of human behavior recognition algorithms based on acceleration data. In Proceedings of the IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; p. 1602. [Google Scholar]
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
- Zappi, P.; Lombriser, C.; Stiefmeier, T.; Farella, E.; Roggen, D.; Benini, L.; Tröster, G. Activity Recognition from On-Body Sensors: Accuracy-Power Trade-Off by Dynamic Sensor Selection. In Wireless Sensor Networks; Springer: Berlin/Heidelberg, Germany, 2008; pp. 17–33. [Google Scholar]
- Jiang, W.; Yin, Z. Human activity recognition using wearable sensors by deep convolutional neural networks. In Proceedings of the 23rd ACM International Conference on Multimedia (MM ’15), Brisbane, Australia, 26–30 October 2015; pp. 1307–1310. [Google Scholar]
- Vollmer, C.; Gross, H.-M.; Eggert, J.P. Learning Features for Activity Recognition with Shift-Invariant Sparse Coding; Springer: Berlin/Heidelberg, Germany, 2013; pp. 367–374. [Google Scholar]
- Zeng, M.; Nguyen, L.T.; Yu, B.; Mengshoel, O.J.; Zhu, J.; Wu, P.; Zhang, J. Convolutional neural networks for human activity recognition using mobile sensors. In Proceedings of the 6th International Conference on Mobile Computing, Applications and Services (MobiCASE), Austin, TX, USA, 6–7 November 2014; pp. 197–205. [Google Scholar]
- Alsheikh, M.A.; Selim, A.; Niyato, D.; Doyle, L.; Lin, S.; Tan, H.P. Deep activity recognition models with triaxial accelerometers. arXiv, 2016; arXiv:1511.04664. [Google Scholar]
- Xingjian, S.H.I.; Chen, Z.; Wang, H.; Yeung, D.Y.; Wong, W.K.; Woo, W.C. Convolutional LSTM network: A Machine Learning Approach for Precipitation Nowcasting. In Advances in Neural Information Processing Systems; Curran Associates: New York, NY, USA, 2015; pp. 802–810. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv, 2014; arXiv:1409.1556. [Google Scholar]
| Method | Accuracy |
|---|---|
| MP-CNN-1(1,2,3) | 77.87% |
| MP-CNN-1(2,3) | 77.09% |
| MP-CNN-1(3) | 78.52% |
| MP-CNN-1(0) | 73.74% |
| MP-CNN-2(0) | 75.55% |
| MP-CNN-2(3) | 79.43% |

| Method | Accuracy |
|---|---|
| GB-CNN | 73.88% |
| S-CNN | 69.64% |
| D-CNN | 75.26% |
| MP-CNN-1 | 78.52% |
| MP-CNN-2 | 79.43% |
| Method | Accuracy | Method | Accuracy |
|---|---|---|---|
| SVM | 50.27% | SI [38] | 66.51% |
| KNN | 55.78% | AI [38] | 71.45% |
| NN | 57.71% | MP-CNN-1 | 78.52% |
| GMM | 66.31% | MP-CNN-2 | 79.43% |
| Method | Accuracy |
|---|---|
| KNN | 87.67% |
| SVM | 43.70% |
| NN | 74.99% |
| SC [39] | 84.50% |
| PCNN [40] | 88.19% |
| AI [38] | 84.33% |
| HMM-CNN [41] | 89.38% |
| MP-CNN-1 | 94.09% |
| MP-CNN-2 | 94.69% |
| Architecture | Training acc. | Testing acc. |
|---|---|---|
| ConvLSTM [42] | 99.64% | 89.62% |
| VGG16 [43] | 99.81% | 90.13% |
| ResNet50 [28] | 99.79% | 90.25% |
| Proposed | 100.00% | 90.63% |
| Feature Dim. | Epoch | Training acc. | Testing acc. |
|---|---|---|---|
| DIM_96 | 800 | 97.50% | 84.65% |
| DIM_128 | 600 | 98.50% | 86.85% |
| DIM_150 | 500 | 99.17% | 87.69% |
| DIM_196 | 600 | 99.33% | 88.64% |
| DIM_224 | 700 | 99.67% | 98.73% |
| Dataset | Training acc. | Testing acc. |
|---|---|---|
| Subject_8 | 98.50% | 86.65% |
| Subject_19 | 98.60% | 90.23% |
| Subject_36 | 99.67% | 90.63% |
| Action | Precision | Recall | F1-Measure | Support |
|---|---|---|---|---|
| 0 | 0.87 | 0.95 | 0.91 | 65 |
| 1 | 0.88 | 0.85 | 0.86 | 71 |
| 2 | 0.90 | 0.93 | 0.92 | 76 |
| 3 | 0.91 | 0.94 | 0.92 | 65 |
| 4 | 0.94 | 0.92 | 0.93 | 65 |
| 5 | 0.92 | 0.92 | 0.92 | 66 |
| 6 | 0.86 | 0.92 | 0.89 | 62 |
| 7 | 0.91 | 0.92 | 0.92 | 76 |
| 8 | 0.91 | 0.82 | 0.86 | 61 |
| 9 | 0.90 | 0.81 | 0.85 | 68 |
| 10 | 0.91 | 0.88 | 0.89 | 56 |
| 11 | 0.84 | 0.88 | 0.86 | 59 |
| Average | 0.90 | 0.90 | 0.90 | 790 |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).