Human Activity Recognition Through Augmented WiFi CSI Signals by Lightweight Attention-GRU
Abstract
1. Introduction
- A comprehensive system that uses a GRU-based architecture as the backbone for time-series feature representation. After GRU feature extraction, a self-attention mechanism focuses on the most relevant activity information, and the model is then pruned to remove weight redundancy, substantially improving efficiency while maintaining high accuracy (a model sketch follows this list). An ablation study analyzes the impact of pruning on accuracy, demonstrating that our method preserves performance despite a substantial reduction in model complexity.
- Several augmentation techniques are applied to address the small sample size of the available WiFi CSI dataset. The augmentation, in combination with pruning, helps prevent overfitting and enhances model generalization, while also reducing computational costs. Our experiments further assess how different augmentation strategies contribute to maintaining high accuracy in a resource-constrained environment.
- Extensive experimental evidence across multiple datasets is provided to support our evaluation, validating the model’s robustness and applicability in various scenarios.
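For illustration, the following is a minimal PyTorch sketch of the kind of architecture described above: a single-layer GRU backbone followed by scaled dot-product self-attention over the output sequence. The class name, the mean-pooling readout, and the default dimensions (taken from the tuned values in Section 5.3.1: GRU hidden dim 128, attention dim 32; 6 classes as in ARIL) are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class AttentionGRU(nn.Module):
    """Minimal sketch: GRU backbone + scaled dot-product self-attention."""
    def __init__(self, in_dim: int, hid_dim: int = 128, attn_dim: int = 32,
                 n_classes: int = 6):
        super().__init__()
        self.gru = nn.GRU(in_dim, hid_dim, batch_first=True)
        # Linear projections for query, key, and value over GRU outputs.
        self.q = nn.Linear(hid_dim, attn_dim)
        self.k = nn.Linear(hid_dim, attn_dim)
        self.v = nn.Linear(hid_dim, attn_dim)
        self.fc = nn.Linear(attn_dim, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.gru(x)                        # (batch, time, hid_dim)
        q, k, v = self.q(h), self.k(h), self.v(h)
        scores = q @ k.transpose(1, 2) / (k.size(-1) ** 0.5)  # (B, T, T)
        ctx = torch.softmax(scores, dim=-1) @ v   # attended sequence
        return self.fc(ctx.mean(dim=1))           # mean-pool over time
```

Mean-pooling the attended sequence is one simple readout choice; the paper's exact aggregation may differ.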
2. Related Works
2.1. Hand-Crafted Methods
2.2. CNN-Based Methods
2.3. RNN-Based Methods
2.4. Transformer Methods
3. Preliminaries
3.1. Self-Attention
3.2. GRU
- Update gate ($z_t$): The update gate determines how much past information is retained and passed forward to future time steps, enabling selective retention or forgetting of information. It is defined as $z_t = \sigma(W_z x_t + U_z h_{t-1} + b_z)$.
- Reset gate ($r_t$): The reset gate decides how much past information should be forgotten, allowing the model to discard irrelevant details. It is defined as $r_t = \sigma(W_r x_t + U_r h_{t-1} + b_r)$.
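The two gates then combine with a candidate state to produce the new hidden state. In the standard GRU formulation (one common convention, reproduced here for completeness):

$$
\begin{aligned}
\tilde{h}_t &= \tanh\!\left(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h\right),\\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t,
\end{aligned}
$$

where $\odot$ denotes element-wise multiplication and $\sigma$ the logistic sigmoid.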
3.3. Network Pruning and Fine-Tuning
4. Proposed System
4.1. System Overview
4.2. Attention-GRU
4.3. Network Training, Pruning, and Fine-Tuning
4.4. Data Augmentation
4.4.1. Adding Gaussian Noise
4.4.2. Data Shifting
4.4.3. MixUp
5. Experiments
5.1. Databases
5.2. Experimental Settings
5.3. Experiment I: Finding the Best Hyperparameters
5.3.1. Determining the Best Hidden Dimensions of Attention-GRU
5.3.2. Network Pruning
5.3.3. Data Augmentation by Adding Gaussian Noise
5.3.4. Data Augmentation with MixUp
5.3.5. Data Augmentation by Temporal Shifting
5.4. Experiment II: Ablation Study of Each Mechanism
5.4.1. Attention-GRU with Pruning
5.4.2. Data Augmentation with Tuned Hyperparameters
5.5. Experiment III: Comparison with SOTA
5.6. Summary of Results and Discussion
5.6.1. Summary of Results
5.6.2. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Ogbuabor, G.; La, R. Human Activity Recognition for Healthcare using Smartphones. In Proceedings of the 2018 10th International Conference on Machine Learning and Computing, Macau, China, 26–28 February 2018; pp. 41–46.
- Wang, Y.; Cang, S.; Yu, H. A survey on wearable sensor modality centred human activity recognition in health care. Expert Syst. Appl. 2019, 137, 167–190.
- Zhou, X.; Liang, W.; Wang, K.I.-K.; Wang, H.; Yang, L.T.; Jin, Q. Deep-Learning-Enhanced Human Activity Recognition for Internet of Healthcare Things. IEEE Internet Things J. 2020, 7, 6429–6438.
- Subasi, A.; Khateeb, K.; Brahimi, T.; Sarirete, A. Human activity recognition using machine learning methods in a smart healthcare environment. In Innovation in Health Informatics; Elsevier: Amsterdam, The Netherlands, 2020; pp. 123–144.
- Bianchi, V.; Bassoli, M.; Lombardo, G.; Fornacciari, P.; Mordonini, M.; De Munari, I. IoT wearable sensor and deep learning: An integrated approach for personalized human activity recognition in a smart home environment. IEEE Internet Things J. 2019, 6, 8553–8562.
- Mehr, H.D.; Polat, H. Human Activity Recognition in Smart Home with Deep Learning Approach. In Proceedings of the 2019 7th International Istanbul Smart Grids and Cities Congress and Fair (ICSG), Istanbul, Turkey, 25–26 April 2019; pp. 149–153.
- Kim, K.; Jalal, A.; Mahmood, M. Vision-Based Human Activity Recognition System Using Depth Silhouettes: A Smart Home System for Monitoring the Residents. J. Electr. Eng. Technol. 2019, 14, 2567–2573.
- Du, Y.; Lim, Y.; Tan, Y. A Novel Human Activity Recognition and Prediction in Smart Home Based on Interaction. Sensors 2019, 19, 4474.
- Bouchabou, D.; Nguyen, S.M.; Lohr, C.; LeDuc, B.; Kanellos, I. A Survey of Human Activity Recognition in Smart Homes Based on IoT Sensors Algorithms: Taxonomies, Challenges, and Opportunities with Deep Learning. Sensors 2021, 21, 6037.
- Xing, Y.; Lv, C.; Wang, H.; Cao, D.; Velenis, E.; Wang, F.Y. Driver Activity Recognition for Intelligent Vehicles: A Deep Learning Approach. IEEE Trans. Veh. Technol. 2019, 68, 5379–5390.
- Braunagel, C.; Kasneci, E.; Stolzmann, W.; Rosenstiel, W. Driver-Activity Recognition in the Context of Conditionally Autonomous Driving. In Proceedings of the 2015 IEEE 18th International Conference on Intelligent Transportation Systems, Gran Canaria, Spain, 15–18 September 2015; pp. 1652–1657.
- Ohn-Bar, E.; Martin, S.; Tawari, A.; Trivedi, M.M. Head, Eye, and Hand Patterns for Driver Activity Recognition. In Proceedings of the 2014 IEEE 22nd International Conference on Pattern Recognition, Stockholm, Sweden, 24–28 August 2014; pp. 660–665.
- Nutter, M.; Crawford, C.H.; Ortiz, J. Design of Novel Deep Learning Models for Real-time Human Activity Recognition with Mobile Phones. In Proceedings of the 2018 IEEE International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8.
- Javed, A.R.; Faheem, R.; Asim, M.; Baker, T.; Beg, M.O. A smartphone sensors-based personalized human activity recognition system for sustainable smart cities. Sustain. Cities Soc. 2021, 71, 102970.
- Fu, B.; Damer, N.; Kirchbuchner, F.; Kuijper, A. Sensing Technology for Human Activity Recognition: A Comprehensive Survey. IEEE Access 2020, 8, 83791–83820.
- Dang, L.M.; Min, K.; Wang, H.; Piran, M.J.; Lee, C.H.; Moon, H. Sensor-based and vision-based human activity recognition: A comprehensive survey. Pattern Recognit. 2020, 108, 107561.
- Beddiar, D.R.; Nini, B.; Sabokrou, M.; Hadid, A. Vision-based human activity recognition: A survey. Multimed. Tools Appl. 2020, 79, 30509–30555.
- Poppe, R. A survey on vision-based human action recognition. Image Vis. Comput. 2010, 28, 976–990.
- Wang, W.; Lian, C.; Zhao, Y.; Zhan, Z. Sensor-Based Gymnastics Action Recognition Using Time-Series Images and a Lightweight Feature Fusion Network. IEEE Sens. J. 2024, 24, 42573–42583.
- Lian, C.; Zhao, Y.; Sun, T.; Shao, J.; Liu, Y.; Fu, C.; Lyu, X.; Zhan, Z. Incorporating image representation and texture feature for sensor-based gymnastics activity recognition. Knowl.-Based Syst. 2025, 311, 113076.
- Zhao, Y.; Ren, X.; Lian, C.; Han, K.; Xin, L.; Li, W.J. Mouse on a Ring: A Mouse Action Scheme Based on IMU and Multi-Level Decision Algorithm. IEEE Sens. J. 2021, 21, 20512–20520.
- Zhao, Y.; Liu, J.; Lian, C.; Liu, Y.; Ren, X.; Lou, J.; Chen, M.; Li, W.J. A Single Smart Ring for Monitoring 20 Kinds of Multi-Intensity Daily Activities—From Kitchen Work to Fierce Exercises. Adv. Intell. Syst. 2022, 4, 2200204.
- Chen, K.; Zhang, D.; Yao, L.; Guo, B.; Yu, Z.; Liu, Y. Deep Learning for Sensor-based Human Activity Recognition: Overview, Challenges, and Opportunities. ACM Comput. Surv. 2021, 54, 1–40.
- De-La-Hoz-Franco, E.; Ariza-Colpas, P.; Quero, J.M.; Espinilla, M. Sensor-Based Datasets for Human Activity Recognition—A Systematic Review of Literature. IEEE Access 2018, 6, 59192–59210.
- Chen, Z.; Zhang, L.; Jiang, C.; Cao, Z.; Cui, W. WiFi CSI Based Passive Human Activity Recognition Using Attention Based BLSTM. IEEE Trans. Mob. Comput. 2018, 18, 2714–2724.
- Ding, J.; Wang, Y. WiFi CSI-based human activity recognition using deep recurrent neural network. IEEE Access 2019, 7, 174257–174269.
- Ma, Y.; Zhou, G.; Wang, S.; Zhao, H.; Jung, W. SignFi: Sign Language Recognition Using WiFi. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2018, 2, 1–21.
- Yadav, S.K.; Sai, S.; Gundewar, A.; Rathore, H.; Tiwari, K.; Pandey, H.M.; Mathur, M. CSITime: Privacy-preserving human activity recognition using WiFi channel state information. Neural Netw. 2022, 146, 11–21.
- Memmesheimer, R.; Theisen, N.; Paulus, D. Gimme Signals: Discriminative signal encoding for multimodal activity recognition. arXiv 2020, arXiv:2003.06156.
- Chen, P.; Li, C.; Zhang, X. Degradation trend prediction of pumped storage unit based on a novel performance degradation index and GRU-attention model. Sustain. Energy Technol. Assess. 2022, 54, 102807.
- Abdelli, K.; Grießer, H.; Pachnicke, S. A Machine Learning-Based Framework for Predictive Maintenance of Semiconductor Laser for Optical Communication. J. Light. Technol. 2022, 40, 4698–4708.
- Zou, H.; Zhou, Y.; Yang, J.; Gu, W.; Xie, L.; Spanos, C. Multiple Kernel Representation Learning for WiFi-based Human Activity Recognition. In Proceedings of the 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), Cancun, Mexico, 18–21 December 2017; pp. 268–274.
- Guo, L.; Wang, L.; Lin, C.; Liu, J.; Lu, B.; Fang, J.; Liu, Z.; Shan, Z.; Yang, J.; Guo, S. Wiar: A Public Dataset for Wifi-Based Activity Recognition. IEEE Access 2019, 7, 154935–154945.
- Yan, H.; Zhang, Y.; Wang, Y.; Xu, K. WiAct: A Passive WiFi-Based Human Activity Recognition System. IEEE Sens. J. 2020, 20, 296–305.
- Huang, J.; Liu, B.; Miao, C.; Lu, Y.; Zheng, Q.; Wu, Y.; Liu, J.; Su, L.; Chen, C.W. PhaseAnti: An Anti-Interference WiFi-Based Activity Recognition System Using Interference-Independent Phase Component. IEEE Trans. Mob. Comput. 2023, 22, 2938–2954.
- Guo, W.; Yamagishi, S.; Jing, L. Human Activity Recognition via Wi-Fi and Inertial Sensors with Machine Learning. IEEE Access 2024, 12, 18821–18836.
- Zhang, R.; Jiang, C.; Wu, S.; Zhou, Q.; Jing, X.; Mu, J. Wi-Fi Sensing for Joint Gesture Recognition and Human Identification From Few Samples in Human-Computer Interaction. IEEE J. Sel. Areas Commun. 2022, 40, 2193–2205.
- Moshiri, P.F.; Shahbazian, R.; Nabati, M.; Ghorashi, S.A. A CSI-Based Human Activity Recognition Using Deep Learning. Sensors 2021, 21, 7225.
- Wang, F.; Feng, J.; Zhao, Y.; Zhang, X.; Zhang, S.; Han, J. Joint Activity Recognition and Indoor Localization. arXiv 2019, arXiv:1904.04964.
- Islam, M.; Jannat, M.; Hossain, M.N.; Kim, W.S.; Lee, S.W.; Yang, S.H. STC-NLSTMNet: An Improved Human Activity Recognition Method Using Convolutional Neural Network with NLSTM from WiFi CSI. Sensors 2022, 23, 356.
- Yun, S.; Han, D.; Oh, S.J.; Chun, S.; Choe, J.; Yoo, Y. CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features. arXiv 2019, arXiv:1905.04899.
- Zhang, H.; Cisse, M.; Dauphin, Y.N.; Lopez-Paz, D. mixup: Beyond empirical risk minimization. arXiv 2018, arXiv:1710.09412.
- Zhang, C.; Jiao, W. ImgFi: A High Accuracy and Lightweight Human Activity Recognition Framework Using CSI Image. IEEE Sens. J. 2023, 23, 21966–21977.
- Brinke, J.K.; Meratnia, N. Scaling Activity Recognition Using Channel State Information Through Convolutional Neural Networks and Transfer Learning. In Proceedings of the First International Workshop on Challenges in Artificial Intelligence and Machine Learning for Internet of Things (AIChallengeIoT’19), New York, NY, USA, 10–13 November 2019; pp. 56–62.
- Zhang, Y.; Zheng, Y.; Qian, K.; Zhang, G.; Liu, Y.; Wu, C. Widar3.0: Zero-Effort Cross-Domain Gesture Recognition with Wi-Fi. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 8671–8688.
- Chen, X.; Zou, Y.; Li, C.; Xiao, W. A Deep Learning Based Lightweight Human Activity Recognition System Using Reconstructed WiFi CSI. IEEE Trans. Hum.-Mach. Syst. 2024, 54, 68–78.
- Zhao, Y.; Shao, J.; Lin, X.; Sun, T.; Li, J.; Lian, C.; Lyu, X.; Si, B.; Zhan, Z. CIR-DFENet: Incorporating cross-modal image representation and dual-stream feature enhanced network for activity recognition. Expert Syst. Appl. 2025, 266, 125912.
- Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv 2019, arXiv:1905.11946.
- Li, Y.; Yang, G.; Su, Z.; Li, S.; Wang, Y. Human activity recognition based on multienvironment sensor data. Inf. Fusion 2023, 91, 47–63.
- Zhang, J.; Wu, F.; Wei, B.; Zhang, Q.; Huang, H.; Shah, S.W.; Cheng, J. Data Augmentation and Dense-LSTM for Human Activity Recognition Using WiFi Signal. IEEE Internet Things J. 2021, 8, 4628–4641.
- Shang, S.; Luo, Q.; Zhao, J.; Xue, R.; Sun, W.; Bao, N. LSTM-CNN network for human activity recognition using WiFi CSI data. J. Phys. Conf. Ser. 2021, 1883, 012139.
- Shalaby, E.; ElShennawy, N.; Sarhan, A. Utilizing deep learning models in CSI-based human activity recognition. Neural Comput. Appl. 2022, 34, 5993–6010.
- Kang, H.; Kim, D.; Toh, K.-A. Human Activity Recognition based on the GRU with Augmented Wi-Fi CSI Signals. In Proceedings of the IEEE TENCON, Singapore, 1–4 December 2024.
- Schäfer, J.; Barrsiwal, B.R.; Kokhkharova, M.; Adil, H.; Liebehenschel, J. Human Activity Recognition Using CSI Information with Nexmon. Appl. Sci. 2021, 11, 8860.
- Yang, M.; Zhu, H.; Zhu, R.; Wu, F.; Yin, L.; Yang, Y. WiTransformer: A Novel Robust Gesture Recognition Sensing Model with WiFi. Sensors 2023, 23, 2612.
- Yang, Z.; Zhang, Y.; Zhang, G.; Zheng, Y.; Chi, G. Widar 3.0: WiFi-Based Activity Recognition Dataset; IEEE Dataport: Piscataway, NJ, USA, 2020.
- Bian, J. SwinFi: A CSI Compression Method based on Swin Transformer for Wi-Fi Sensing. arXiv 2024, arXiv:2405.03957.
- Luo, F.; Khan, S.; Jiang, B.; Wu, K. Vision Transformers for Human Activity Recognition Using WiFi Channel State Information. IEEE Internet Things J. 2024, 11, 28111–28122.
- Wang, Z.; Oates, T. Imaging Time-Series to Improve Classification and Imputation. arXiv 2015, arXiv:1506.00327.
- Xu, H.; Li, J.; Yuan, H.; Liu, Q.; Fan, S.; Li, T.; Sun, X. Human Activity Recognition Based on Gramian Angular Field and Deep Convolutional Neural Network. IEEE Access 2020, 8, 199393–199405.
- Eckmann, J.P.; Kamphorst, S.O.; Ruelle, D. Recurrence Plots of Dynamical Systems. World Sci. Ser. Nonlinear Sci. Ser. A 1995, 16, 441–446.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. arXiv 2017, arXiv:1706.03762.
- Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. arXiv 2014, arXiv:1412.3555.
- Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780.
| Method | Paper | Techniques | Dataset and Activities | Data Augmentation |
|---|---|---|---|---|
| Hand-crafted | Zou et al., 2017 [32] | MKRL and Auto-HSRL | 4 activities | None |
| | Guo et al., 2019 [33] | PCA, DWT, NB, RF, DT, KNN, and SVM classifiers | 16 activities | None |
| | Yan et al., 2020 [34] | AACA | 10 activities | None |
| | Huang et al., 2023 [35] | PhaseAnti, PCA, and KNN-DTW classifiers | 10 activities | None |
| | Guo et al., 2024 [36] | SVM, MLP, DT, RF, Logistic Regression, and KNN | 8 activities | Increasing the number of IMUs |
| CNN | Zhang et al., 2022 [37] | WiGesID (3D CNN) | Categories not in the train set | None |
| | Ma et al., 2018 [27] | 9-layer CNN | SignFi [27] | None |
| | Moshiri et al., 2021 [38] | 2D CNN | 7 activities | None |
| | Yadav et al., 2022 [28] | Kernel extraction and attention module | ARIL [39], SignFi [27], StanFi [40] | CutMix [41], MixUp [42] |
| | Zhang et al., 2023 [43] | 3-layer CNN with GASF, GADF, MTF, RP, and STFT | Wiar [33], SAR [44], Widar3.0 [45], and a self-test dataset | None |
| | Chen et al., 2024 [46] | Dilated convolution with residual connection and Hadamard product | 6 types of activity in office and lab | None |
| | Zhao et al., 2025 [47] | Hybrid CNN-LSTM model | 6 types of activity | Data interpolation |
| | Memmesheimer et al., 2020 [29] | EfficientNet [48] | ARIL [39], NTU RGB+D 120, and UTD-MHAD | Interpolating, sampling, scaling, filtering, adding noise |
| | Li et al., 2023 [49] | Wide-time-domain CNN | 5 datasets with 13 to 25 activities | None |
| RNN | Chen et al., 2018 [25] | ABLSTM (attention-based BLSTM) | 6 common daily activities | None |
| | Ding et al., 2019 [26] | HARNN, an LSTM on FFT and STFT features | 6 types of activity at home | None |
| | Zhang et al., 2021 [50] | Dense-LSTM with PCA and SIFT | 10 activities | Gaussian noise, time stretch, spectrum shift, spectrum scale, frequency filter |
| | Shang et al., 2021 [51] | LSTM-CNN network | Static and dynamic movements | None |
| | Shalaby et al., 2022 [52] | CNN-GRU | 6 activities | None |
| | Kang et al., 2024 [53] | Input-modified GRU | ARIL [39], SignFi [27], HAR [54] | Adding Gaussian noise, shifting, CutMix [41] |
| Transformer | Yang et al., 2023 [55] | United and Separated Spatiotemporal Transformers | Widar3.0 [56] | None |
| | Bian, 2024 [57] | Swin Transformer-based autoencoder–decoder | 21 classes recorded in a meeting room | CutMix [41] |
| | Luo et al., 2024 [58] | Vision Transformer | UT-HAR, NTU-Fi HAR | None |
| Dataset | Source Link |
|---|---|
| ARIL | https://github.com/geekfeiw/ARIL (accessed on 25 February 2025) |
| StanWiFi | https://www.researchgate.net/figure/Summary-of-the-StanWiFi-dataset_tbl2_366717201 (accessed on 25 February 2025) |
| Sign-Fi | https://yongsen.github.io/SignFi/ (accessed on 25 February 2025) |
| Nexmon HAR | https://www.semanticscholar.org/paper/Human-Activity-Recognition-Using-CSI-Information-Sch%C3%A4fer-Barrsiwal/9ddb0cf17a3ac4e9d73bd7df525ff66ab2af73d1 (accessed on 25 February 2025) |
| Brief Descriptions of the Experiments | Databases |
|---|---|
| Experiment I: Finding the best hyperparameter settings. (a) Determine the GRU hidden dimension and the attention module hidden dimension; (b) determine the pruning ratio k and threshold s; (c) determine the Gaussian noise variance σ²; (d) determine the shifting step n; (e) determine the MixUp mixing strength α and the reflection epoch ratio r | ARIL [39] |
| Experiment II: Ablation study of each mechanism. (a) Evaluate Attention-GRU with pruning; (b) assess data augmentation | ARIL [39] |
| Experiment III: Comparison of the proposed method with SOTA | ARIL [39], StanFi [40], Sign-Fi [27], HAR [54] |
| Hyperparameter | Value |
|---|---|
| Model | 1 GRU layer (hidden dim = 128) and attention (hidden dim = 32) |
| Batch size | 128 (512 after augmentation) |
| Training epochs | 100 (50 for the HAR dataset [54]) |
| Adam β₁ | 0.9 |
| Adam β₂ | 0.999 |
| Learning rate | 1 × 10⁻³ |
| Scheduler | CosineAnnealingLR (T_max = 100) |
| Loss function | Cross-entropy loss |
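These settings map directly onto a standard PyTorch training loop, sketched below using the AttentionGRU class from Section 1. The dummy data shapes (192 time steps, 52 subcarriers, 6 classes) and the loader are placeholder assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

# Placeholder data: 100 samples of shape (time=192, subcarriers=52), 6 classes.
train_loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(100, 192, 52),
                                   torch.randint(0, 6, (100,))),
    batch_size=128, shuffle=True)  # 512 after augmentation, per the table

model = AttentionGRU(in_dim=52, hid_dim=128, attn_dim=32, n_classes=6)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)
criterion = nn.CrossEntropyLoss()

for epoch in range(100):               # 50 epochs for the HAR dataset [54]
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()                   # cosine annealing with T_max = 100
```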
Accuracy (%):

| GRU Dim \ Attention Dim | 32 | 64 | 128 | 256 | 512 |
|---|---|---|---|---|---|
| 32 | 87.77 | 90.29 | 89.57 | 90.29 | 88.13 |
| 64 | 95.68 | 94.60 | 93.53 | 96.04 | 95.32 |
| 128 | 98.56 | 97.48 | 97.48 | 98.20 | 98.56 |
| 256 | 98.56 | 98.92 | 99.28 | 98.56 | 98.92 |
| 512 | 98.56 | 98.56 | 98.56 | 98.56 | 98.56 |
Total parameters (K):

| GRU Dim \ Attention Dim | 32 | 64 | 128 | 256 | 512 |
|---|---|---|---|---|---|
| 32 | 11.7 | 15.0 | 21.8 | 35.4 | 62.5 |
| 64 | 29.1 | 35.6 | 45.5 | 74.4 | 126.1 |
| 128 | 82.5 | 95.1 | 120.3 | 170.8 | 271.6 |
| 256 | 263.0 | 287.9 | 337.7 | 437.3 | 636.4 |
| 512 | 918.9 | 968.3 | 1067.3 | 1265.3 | 1660.9 |
Total FLOPs (G):

| GRU Dim \ Attention Dim | 32 | 64 | 128 | 256 | 512 |
|---|---|---|---|---|---|
| 32 | 0.00159 | 0.00159 | 0.00160 | 0.00161 | 0.00164 |
| 64 | 0.00436 | 0.00436 | 0.00438 | 0.00440 | 0.00445 |
| 128 | 0.01343 | 0.01344 | 0.01347 | 0.01352 | 0.01362 |
| 256 | 0.04574 | 0.04576 | 0.04581 | 0.04591 | 0.04595 |
| 512 | 0.04611 | 0.16697 | 0.16702 | 0.16732 | 0.16771 |
| GRU Dim | Attention Dim | Accuracy (%) | Total FLOPs (G) | Total Parameters (K) |
|---|---|---|---|---|
| 128 | 32 | 98.56 | 0.0134 | 82.5 |
| 128 | 64 | 97.84 | 0.0134 | 95.1 |
| 128 | 128 | 98.56 | 0.0135 | 120.3 |
| 256 | 32 | 98.56 | 0.0457 | 263.0 |
| 256 | 64 | 98.20 | 0.0458 | 287.9 |
| 256 | 128 | 99.28 | 0.0458 | 337.7 |
| 512 | 32 | 98.56 | 0.0461 | 918.9 |
| 512 | 64 | 98.56 | 0.1670 | 968.3 |
| 512 | 128 | 98.56 | 0.1670 | 1067.3 |
| k (Prune Ratio) | s (Threshold) | Accuracy (%) | Total Parameters (K) |
|---|---|---|---|
| 0.7 | 0.5 | 98.92 | 60.6 |
| 0.7 | 0.8 | 98.56 | 57.8 |
| 0.7 | 0.9 | 98.92 | 57.8 |
| 0.8 | 0.5 | 98.56 | 65.9 |
| 0.8 | 0.8 | 98.92 | 65.9 |
| 0.8 | 0.9 | 98.56 | 65.9 |
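As a rough sketch of the pruning step, the snippet below applies magnitude-based pruning with PyTorch's pruning utilities and then bakes the mask into the weights before fine-tuning. The paper defines the precise roles of the ratio k and threshold s in Section 4.3; treating k as the per-layer fraction of smallest-magnitude weights removed, and omitting s, is a simplifying assumption here, not the authors' exact procedure.

```python
import torch
import torch.nn.utils.prune as prune

def prune_and_finalize(model: torch.nn.Module, k: float = 0.7) -> torch.nn.Module:
    """Illustrative magnitude pruning: zero the smallest-|w| fraction k
    of each linear layer's weights (the threshold s is not modeled)."""
    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=k)
            prune.remove(module, "weight")  # make the pruning permanent
    return model

# Usage: prune, then fine-tune briefly (cf. the "+14 s" in the ablation table).
# model = prune_and_finalize(model, k=0.7)
```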
| Variance σ² | Accuracy (%) |
|---|---|
| 0.01 | 92.81 |
| 0.001 | 93.53 |
| 0.0001 | 94.61 |
| 0.00001 | 94.24 |
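The augmentation itself is a one-liner: each CSI sample gets an additive zero-mean Gaussian perturbation. A NumPy sketch follows (the function name is ours; the default uses the best-scoring variance from the table above).

```python
import numpy as np

def add_gaussian_noise(csi: np.ndarray, variance: float = 1e-4) -> np.ndarray:
    """Return a noisy copy of a CSI sample; σ² = 1e-4 scored best above."""
    return csi + np.random.normal(0.0, np.sqrt(variance), size=csi.shape)
```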
Accuracy (%):

| α \ Reflection Ratio r | 0 | 0.3 | 0.5 | 0.7 | 1 |
|---|---|---|---|---|---|
| 0.5 | 80.22 | 91.73 | 96.40 | 97.12 | 96.40 |
| 1.0 | 82.73 | 96.04 | 96.04 | 97.84 | 97.84 |
| 1.5 | 83.09 | 94.60 | 96.76 | 96.76 | 97.84 |
| 2.0 | 82.01 | 95.68 | 95.68 | 96.76 | 97.48 |
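A sketch of standard MixUp [42] with these hyperparameters is shown below. Reading the reflection ratio r as the fraction of training epochs during which mixed samples are used is our assumption about the schedule, not a detail confirmed by the table.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha: float = 1.0):
    """Standard MixUp [42]: convex combination of two samples and their
    one-hot labels, with mixing weight drawn from Beta(alpha, alpha)."""
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Assumed schedule for r: apply MixUp only while epoch < r * total_epochs.
```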
| Step Size n | () | () | () | () | () | () | () |
|---|---|---|---|---|---|---|---|
| Accuracy (%) | 88.49 | 91.73 | 94.60 | 94.25 | 97.48 | 98.92 | 98.20 |
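Temporal shifting can be realized as a roll of the sample along its time axis; a circular shift is one plausible implementation choice (our assumption), with the step size n left to the caller.

```python
import numpy as np

def temporal_shift(csi: np.ndarray, n: int) -> np.ndarray:
    """Shift a CSI sequence by n time steps (circular shift along axis 0)."""
    return np.roll(csi, shift=n, axis=0)  # axis 0 = time
```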
| Model Description | Accuracy (%) | Training Time | Total Parameters (K) |
|---|---|---|---|
| Without Data Augmentation | | | |
| GRU (baseline) | 68.92 | 1 m 48 s | 69.7 |
| Attention-GRU (baseline) | 70.86 | 2 m 22 s | 82.1 |
| Pruned Attention-GRU | 69.42 | 2 m 22 s + 14 s | 57.9 |
| With Data Augmentation | | | |
| Shifting + MixUp | 97.12 | 48 m 46 s + 4 m 55 s | 57.8 |
| Gaussian Noise + Shifting | 89.93 | 36 m 1 s + 3 m 35 s | 57.8 |
| Gaussian Noise + MixUp | 83.81 | 48 m 39 s + 4 m 56 s | 57.8 |
| Dataset | Model | Accuracy (%) | GFLOPs | Total Parameters (M) |
|---|---|---|---|---|
| ARIL [39] | CSITime [28] | 98.20 | 18.06 | 252.10 |
| | Gimme Signals [29] | 94.91 | 1.02 | 25.13 |
| | Our system | 98.92 | 0.01 | 0.0578 |
| StanFi [40] | STC-NLSTMNet [40] | 99.88 | 0.044 | 0.0871 |
| | Our system | 99.33 | 0.0083 | 0.0680 |
| Sign-Fi (lab) [27] | CSITime [28] | 99.22 | 18.06 | 252.10 |
| | Our system | 99.32 | 0.05 | 0.2818 |
| HAR [54] | LSTM [54] | 98.95 | 3.72 | 0.2482 |
| | GRU with past input [53] | 100 | 0.245 | 0.2469 |
| | Our system | 100 | 0.180 | 0.1660 |
Accuracy (%) on the Sign-Fi scenarios:

| Model | Lab | Home | Lab + Home | Lab-5 |
|---|---|---|---|---|
| CSITime [28] | 99.22 | 97.39 | 96.23 | 88.83 |
| Ours | 99.32 | 99.64 | 97.52 | 89.60 |
| Details for Each Experiment | Configuration/Observation |
|---|---|
| Best model dimension settings | GRU dim 128, attention dim 32, with 98.56% accuracy |
| Optimal pruning ratio and threshold | 98.93% accuracy (tuned k and s; see Section 5.3.2) |
| Best settings for MixUp | 97.84% accuracy (tuned α and r; see Section 5.3.4) |
| Attention-GRU | +10% in total parameters and +1.94% accuracy |
| Attention-GRU with pruning | 25% reduction in total parameters and +0.50% accuracy |
| Data augmentation | Shifting > MixUp > Gaussian noise; about +20% accuracy |
| Comparison on ARIL | 1000× fewer GFLOPs and +0.72% accuracy |
| Comparison on StanFi | 20× fewer GFLOPs and 99.33% accuracy |
| Comparison on Sign-Fi | 500× fewer GFLOPs and +0.1% accuracy |
| Comparison on HAR | 100% accuracy |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).