Continuous Human Activity Recognition through Parallelism LSTM with Multi-Frequency Spectrograms
Abstract
1. Introduction
- A single-channel SFCW radar is utilized to collect data streams of continuous sequences of human activities at multiple frequencies simultaneously. Through the proposed frequency-division short-time Fourier transform (STFT) strategy, spectrograms at multiple frequencies are generated, capturing activity features with different scattering properties and frequency resolutions.
- A parallelism LSTM framework is proposed to extract and integrate the various types of temporal feature information in multi-frequency spectrograms for continuous HAR. The spectrograms of different frequencies are fed in parallel to multiple LSTM sub-networks, which are trained separately and tested simultaneously. The activity classification probabilities from these sub-networks are fused by addition at the decision level, achieving superior recognition accuracy.
- The performances of parallelism unidirectional LSTM (Uni-LSTM) and Bi-LSTM for continuous HAR with multi-frequency spectrograms are analyzed, and Bi-LSTM is shown to outperform Uni-LSTM.
- To simulate realistic scenarios, during data collection 11 participants were asked to perform six activities continuously in sequence, with three different transition orders and random durations.
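The frequency-division STFT idea above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the echo layout, window length, and hop size are assumptions, and the SFCW echo is simulated with random data.

```python
import numpy as np

def frequency_division_spectrograms(echo, win_len=64, hop=16):
    """Sketch of frequency-division STFT: the SFCW echo matrix
    (slow time x frequency channel) is split by carrier frequency,
    and a per-channel STFT along slow time yields one micro-Doppler
    spectrogram per frequency.

    echo: complex array, shape (n_pulses, n_freqs)
    returns: spectrogram magnitudes, shape (n_freqs, win_len, n_frames)
    """
    n_pulses, n_freqs = echo.shape
    window = np.hanning(win_len)
    n_frames = 1 + (n_pulses - win_len) // hop
    spect = np.empty((n_freqs, win_len, n_frames))
    for f in range(n_freqs):              # one spectrogram per frequency
        sig = echo[:, f]
        for t in range(n_frames):
            seg = sig[t * hop : t * hop + win_len] * window
            # FFT along slow time gives the Doppler spectrum of this frame
            spect[f, :, t] = np.abs(np.fft.fftshift(np.fft.fft(seg)))
    return spect

# toy echo: 512 slow-time samples at 5 carrier frequencies
rng = np.random.default_rng(0)
echo = rng.standard_normal((512, 5)) + 1j * rng.standard_normal((512, 5))
spec = frequency_division_spectrograms(echo)
print(spec.shape)  # (5, 64, 29)
```

Each of the five slices `spec[f]` would then be fed to its own LSTM sub-network in the parallel framework.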
2. Data Collection and Preprocessing
2.1. Data Collection
2.2. Multi-Frequency Spectrogram Formation Based on Frequency-Division Processing
3. Model and Method
3.1. The Cell Structure of Uni-LSTM and Bi-LSTM
3.2. Network Fusion Strategy and Label Recognition Method
4. Experimental Results and Analyses
4.1. Uni-LSTM Sub-Network for Single-Frequency Spectrogram
4.2. Uni-LSTM Sub-Network Based on Spectrogram of Different Frequencies
4.3. Decision Level Fusion of Parallelism Uni-LSTM
4.4. Performance Analysis of Bi-LSTM Network before and after Multi-Frequency Fusion
4.5. Performance Analysis on Variable Numbers of Multi-Frequency Fusion and Variant Learning Rate
4.6. Performance Comparison of the Three Models
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Layer of Uni-LSTM Sub-Network | Layer of Bi-LSTM Sub-Network | Size | Properties |
|---|---|---|---|
| Input | Input | 256 | Number of frequency bins of the input spectrogram |
| Uni-LSTM | Bi-LSTM | 600 | Length of the output feature vector of the LSTM cell |
| Uni-LSTM | Bi-LSTM | 600 | Length of the output feature vector of the LSTM cell |
| FC | FC | 6 | Equal to the number of possible output classes |
| Output | Output | 1 | Single label of the identified activity |
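The sub-network layout in the table above can be sketched in PyTorch (the paper does not specify a framework, so this is an illustrative assumption): 256 frequency bins per time step, two stacked (Bi-)LSTM layers with a 600-dimensional output, and an FC layer over the six activity classes.

```python
import torch
import torch.nn as nn

class LSTMSubNetwork(nn.Module):
    """Sub-network per the table: input (256 bins) -> two stacked
    (Bi-)LSTM layers with 600-dim output -> FC over 6 classes."""
    def __init__(self, bidirectional=False, n_classes=6):
        super().__init__()
        # For a Bi-LSTM the forward and backward halves concatenate to 600,
        # so each direction uses hidden size 300 (an assumption; the table
        # only gives the 600-dim output length).
        hidden = 300 if bidirectional else 600
        self.lstm = nn.LSTM(input_size=256, hidden_size=hidden,
                            num_layers=2, batch_first=True,
                            bidirectional=bidirectional)
        self.fc = nn.Linear(600, n_classes)

    def forward(self, x):             # x: (batch, time, 256)
        out, _ = self.lstm(x)         # (batch, time, 600)
        return self.fc(out)           # per-frame class scores (batch, time, 6)

x = torch.randn(2, 100, 256)          # 2 spectrograms, 100 time frames
logits = LSTMSubNetwork(bidirectional=True)(x)
print(logits.shape)  # torch.Size([2, 100, 6])
```

Emitting a score vector per time frame is what allows a continuous activity stream to be labeled frame by frame rather than clip by clip.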
| Frequency | 1.7 GHz | 1.8 GHz | 1.9 GHz | 2.0 GHz | 2.1 GHz |
|---|---|---|---|---|---|
| Accuracy | 74.99% | 75.05% | 79.75% | 80.06% | 79.32% |

| Frequency | 1.7 GHz | 1.8 GHz | 1.9 GHz | 2.0 GHz | 2.1 GHz |
|---|---|---|---|---|---|
| Accuracy | 86.47% | 86.37% | 87.90% | 89.82% | 89.20% |
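The decision-level fusion of the per-frequency results, described in the contributions as adding the classification probabilities of the parallel sub-networks, can be sketched as follows; the array shapes and random inputs are illustrative assumptions.

```python
import numpy as np

def fuse_decisions(prob_stack):
    """Decision-level fusion: per-frame class probabilities from the
    parallel sub-networks (one per frequency) are summed, and the
    label with the largest fused score is taken for each time frame.

    prob_stack: (n_freqs, n_frames, n_classes) softmax outputs
    returns: (n_frames,) fused label sequence
    """
    fused = prob_stack.sum(axis=0)     # add probabilities across frequencies
    return fused.argmax(axis=-1)       # pick the winning class per frame

# toy example: 5 frequencies, 4 time frames, 6 activity classes
rng = np.random.default_rng(1)
scores = rng.random((5, 4, 6))
probs = scores / scores.sum(axis=-1, keepdims=True)  # normalize per frame
labels = fuse_decisions(probs)
print(labels.shape)  # (4,)
```

Summing (rather than multiplying) the probabilities keeps a single weak sub-network from vetoing an otherwise confident consensus, which is consistent with the accuracy gain of the fused model over any single frequency in the tables above.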
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Ding, C.; Jia, Y.; Cui, G.; Chen, C.; Zhong, X.; Guo, Y. Continuous Human Activity Recognition through Parallelism LSTM with Multi-Frequency Spectrograms. Remote Sens. 2021, 13, 4264. https://doi.org/10.3390/rs13214264