Article

Attention-Based CNN-BiGRU-Transformer Model for Human Activity Recognition

1 School of Mechanical Engineering, Jiangsu Ocean University, Cangwu Road, Lianyungang 222005, China
2 School of Mechatronical Engineering, Beijing Institute of Technology, Zhongguancun South Street, Beijing 100081, China
3 Department of Precision Instrument, Tsinghua University, Qinghua East Road, Beijing 100081, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(23), 12592; https://doi.org/10.3390/app152312592
Submission received: 2 October 2025 / Revised: 19 November 2025 / Accepted: 25 November 2025 / Published: 27 November 2025

Abstract

Human activity recognition (HAR) based on wearable sensors is a key technology in smart sensing and health monitoring. With the rapid development of deep learning, its powerful feature extraction capabilities have significantly enhanced recognition performance and reduced reliance on traditional handcrafted feature engineering. However, current deep learning models still struggle to capture the complex temporal dependencies in long-term time-series sensor data and to suppress information redundancy, which limits recognition accuracy and generalization ability. To address these issues, this paper proposes a CNN-BiGRU-Transformer hybrid deep learning model aimed at improving the accuracy and robustness of human activity recognition. The proposed model integrates a multi-branch Convolutional Neural Network (CNN) to extract multi-scale local spatial features and combines a Bidirectional Gated Recurrent Unit (BiGRU) with a Transformer in a hybrid module that models temporal dependencies and extracts temporal features from long-term time-series data. Additionally, an attention mechanism dynamically allocates weights, suppressing redundant information and enhancing key features to further improve recognition performance. To demonstrate the capability of the proposed model, evaluations are performed on three public datasets: WISDM, PAMAP2, and UCI-HAR. The model achieves recognition accuracies of 98.41%, 95.62%, and 96.74% on the three datasets, respectively, outperforming several state-of-the-art methods. These results confirm that the proposed approach effectively addresses feature extraction and redundancy challenges in long-term sensor time-series data and provides a robust solution for wearable sensor-based human activity recognition.
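The attention mechanism mentioned in the abstract, which weights timesteps to suppress redundant information, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the scoring vector `w` and the function `attention_pool` are hypothetical stand-ins for a learned parameter and layer, applied here to random data in place of BiGRU/Transformer hidden states.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(H, w):
    """Temporal attention pooling over a sequence of hidden states.

    H: (T, d) array of per-timestep features (e.g. recurrent outputs).
    w: (d,) scoring vector (a learned parameter in a real model).
    Returns the attention-weighted context vector (d,) and weights (T,).
    """
    scores = H @ w            # (T,) one relevance score per timestep
    alpha = softmax(scores)   # weights are non-negative and sum to 1
    context = alpha @ H       # (d,) weighted sum emphasizes key timesteps
    return context, alpha

# Toy usage: 128 timesteps of 64-dimensional features.
rng = np.random.default_rng(0)
H = rng.standard_normal((128, 64))
w = rng.standard_normal(64)
context, alpha = attention_pool(H, w)
print(context.shape, alpha.shape)
```

Because the weights sum to one, timesteps with low scores contribute little to the pooled context vector, which is the sense in which such a mechanism "suppresses redundant information" before classification.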
Keywords: BiGRU; CNN; deep learning; HAR; transformer

Share and Cite

MDPI and ACS Style

Miao, M.; Yan, W.; Gao, X.; Yang, L.; Zhou, J.; Zhang, W. Attention-Based CNN-BiGRU-Transformer Model for Human Activity Recognition. Appl. Sci. 2025, 15, 12592. https://doi.org/10.3390/app152312592

AMA Style

Miao M, Yan W, Gao X, Yang L, Zhou J, Zhang W. Attention-Based CNN-BiGRU-Transformer Model for Human Activity Recognition. Applied Sciences. 2025; 15(23):12592. https://doi.org/10.3390/app152312592

Chicago/Turabian Style

Miao, Mingda, Weijie Yan, Xueshan Gao, Le Yang, Jiaqi Zhou, and Wenyi Zhang. 2025. "Attention-Based CNN-BiGRU-Transformer Model for Human Activity Recognition" Applied Sciences 15, no. 23: 12592. https://doi.org/10.3390/app152312592

APA Style

Miao, M., Yan, W., Gao, X., Yang, L., Zhou, J., & Zhang, W. (2025). Attention-Based CNN-BiGRU-Transformer Model for Human Activity Recognition. Applied Sciences, 15(23), 12592. https://doi.org/10.3390/app152312592


