Sensor-Based Human Activity Recognition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 15 December 2025 | Viewed by 12044

Special Issue Editor


Dr. Kimiaki Shirahama
Guest Editor
Department of Information Systems Design, Doshisha University, 1-3 Tatara Miyakodani, Kyotanabe 610-0394, Kyoto, Japan
Interests: multimedia information processing; machine learning; data mining; sensor-based human activity recognition

Special Issue Information

Dear Colleagues,

Sensor-based human activity recognition (HAR) aims to recognise activities by using and integrating data obtained from various sensors, such as wearable, object, ambient, and vision sensors. Here, activities are high-level descriptions that can be extracted from low-level sensor data; they include not only a person's physical behaviours but also their mental states (e.g., emotions) and health conditions. Sensor-based HAR for recognising such activities can lead to innovative and useful applications in various domains, including healthcare, smart homes, human–computer interaction, and autonomous driving. Several sensor-based HAR systems are already in practical use thanks to recent advances in sensor technologies, wireless communication networks, and machine learning techniques. However, devising truly useful sensor-based HAR systems still requires solving many problems, such as the scarcity of large-scale datasets, the treatment of missing or defective sensors, the compositional nature of activities, and the personalisation and calibration of a system for each user. This Special Issue is dedicated to collecting the latest research achievements and findings on the following topics:

  • System architecture for sensor-based HAR;
  • Sensing devices and technologies;
  • Signal processing for sensor recordings;
  • Machine learning for sensor-based HAR;
  • Multimodal deep learning for sensor-based HAR;
  • Knowledge discovery and data mining for sensor-based HAR;
  • Personalisation and calibration of a sensor-based HAR system;
  • Explainability of a sensor-based HAR system;
  • Sensor-based HAR using large language models (LLMs);
  • Visualisation and user interface for sensor-based HAR;
  • Datasets to benchmark sensor-based HAR systems;
  • Real-world applications of sensor-based HAR;
  • Surveys on sensor-based HAR.

Dr. Kimiaki Shirahama
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human activity recognition (HAR)
  • sensor technology
  • artificial intelligence
  • pervasive computing
  • wearable computing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research

26 pages, 3311 KB  
Article
Towards User-Generalizable Wearable-Sensor-Based Human Activity Recognition: A Multi-Task Contrastive Learning Approach
by Pengyu Guo and Masaya Nakayama
Sensors 2025, 25(22), 6988; https://doi.org/10.3390/s25226988 - 15 Nov 2025
Viewed by 392
Abstract
Human Activity Recognition (HAR) using wearable sensors has shown great potential for personalized health management and ubiquitous computing. However, existing deep learning-based HAR models often suffer from poor user-level generalization, which limits their deployment in real-world scenarios. In this work, we propose a novel multi-task contrastive learning framework that jointly optimizes activity classification and supervised contrastive objectives to enhance generalization across unseen users. By leveraging both activity and user labels to construct semantically meaningful contrastive pairs, our method improves representation learning while maintaining user-agnostic inference at test time. We evaluate the proposed framework on three public HAR datasets using cross-user splits, achieving comparable results to both supervised and self-supervised baselines. Extensive ablation studies further confirm the effectiveness of our design choices, including multi-task training and the integration of user-aware contrastive supervision. These results highlight the potential of our approach for building more generalizable and scalable HAR systems. Full article
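The supervised contrastive objective described above can be illustrated with a minimal NumPy sketch in the style of Khosla et al.'s supervised contrastive loss, where in-batch samples sharing a label act as positives. The paper's exact construction of activity- and user-aware pairs may differ; treat this as an assumption-laden illustration, not the authors' implementation.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss: in-batch samples sharing a label are
    treated as positives and pulled together; all others are pushed apart."""
    labels = np.asarray(labels)
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)        # exclude self-similarity
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    losses = []
    for i in range(len(labels)):
        pos = labels == labels[i]
        pos[i] = False                    # a sample is not its own positive
        if pos.any():
            losses.append(-log_prob[i, pos].mean())
    return float(np.mean(losses))
```

In a multi-task setup like the one sketched in the abstract, the total loss would combine a classification term with contrastive terms built from activity labels and from user labels (the weighting between them is not specified here and would be a tuning choice).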
(This article belongs to the Special Issue Sensor-Based Human Activity Recognition)

26 pages, 4563 KB  
Article
Personalized Smart Home Automation Using Machine Learning: Predicting User Activities
by Mark M. Gad, Walaa Gad, Tamer Abdelkader and Kshirasagar Naik
Sensors 2025, 25(19), 6082; https://doi.org/10.3390/s25196082 - 2 Oct 2025
Cited by 1 | Viewed by 1405
Abstract
A personalized framework for smart home automation is introduced, utilizing machine learning to predict user activities and allow for the context-aware control of living spaces. Predicting user activities, such as ‘Watch_TV’, ‘Sleep’, ‘Work_On_Computer’, and ‘Cook_Dinner’, is essential for improving occupant comfort, optimizing energy consumption, and offering proactive support in smart home settings. The Edge Light Human Activity Recognition Predictor, or EL-HARP, is the main prediction model used in this framework to predict user behavior. The system combines open-source software for real-time sensing, facial recognition, and appliance control with affordable hardware, including the Raspberry Pi 5, ESP32-CAM, Tuya smart switches, NFC (Near Field Communication), and ultrasonic sensors. In order to predict daily user activities, three gradient-boosting models—XGBoost, CatBoost, and LightGBM (Gradient Boosting Models)—are trained for each household using engineered features and past behaviour patterns. Using extended temporal features, LightGBM in particular achieves strong predictive performance within EL-HARP. The framework is optimized for edge deployment with efficient training, regularization, and class imbalance handling. A fully functional prototype demonstrates real-time performance and adaptability to individual behavior patterns. This work contributes a scalable, privacy-preserving, and user-centric approach to intelligent home automation. Full article
(This article belongs to the Special Issue Sensor-Based Human Activity Recognition)

16 pages, 1698 KB  
Article
Fall Detection by Deep Learning-Based Bimodal Movement and Pose Sensing with Late Fusion
by Haythem Rehouma and Mounir Boukadoum
Sensors 2025, 25(19), 6035; https://doi.org/10.3390/s25196035 - 1 Oct 2025
Viewed by 824
Abstract
The timely detection of falls among the elderly remains challenging. Single-modality sensing approaches using inertial measurement units (IMUs) or vision-based monitoring systems frequently exhibit high false-positive rates and compromised accuracy under suboptimal operating conditions. We propose a novel deep learning-based bimodal sensing framework to address the problem, leveraging a memory-based autoencoder neural network for inertial abnormality detection and an attention-based neural network for visual pose assessment, with late fusion at the decision level. Our experimental evaluation on a custom dataset of simulated falls and routine activities, captured with waist-mounted IMUs and RGB cameras under dim lighting, shows a significant performance improvement from the described bimodal late-fusion system, with an F1-score of 97.3% and, most notably, a false-positive rate of 3.6%, significantly lower than the 11.3% and 8.9% obtained with IMU-only and vision-only baselines, respectively. These results confirm the robustness of the described fall detection approach and validate its applicability to real-time fall detection under different light settings, including nighttime conditions. Full article
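Decision-level late fusion of the two detectors can be sketched as a weighted combination of per-modality fall probabilities. The weights and threshold below are hypothetical placeholders, not the paper's learned fusion rule.

```python
def late_fuse(p_imu, p_vision, w_imu=0.5, threshold=0.6):
    """Fuse independent fall probabilities from the IMU and vision branches
    at the decision level; a fall is flagged only when the weighted score
    clears the threshold, which suppresses single-modality false alarms."""
    score = w_imu * p_imu + (1.0 - w_imu) * p_vision
    return score >= threshold
```

With these (hypothetical) settings, a confident detection in both branches triggers an alarm, while a spurious spike in one branch alone does not, which is the mechanism behind the lower false-positive rate reported for the fused system.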
(This article belongs to the Special Issue Sensor-Based Human Activity Recognition)

18 pages, 2961 KB  
Article
Office Posture Detection Using Ceiling-Mounted Ultra-Wideband Radar and Attention-Based Modality Fusion
by Wei Lu, Christopher Bird, Moid Sandhu and David Silvera-Tawil
Sensors 2025, 25(16), 5164; https://doi.org/10.3390/s25165164 - 20 Aug 2025
Viewed by 924
Abstract
Prolonged sedentary behavior in office environments is a key risk factor for musculoskeletal disorders and metabolic health issues. While workplace stretching interventions can mitigate these risks, effective monitoring solutions are often limited by privacy concerns and constrained sensor placement. This study proposes a ceiling-mounted ultra-wideband (UWB) radar system for the privacy-preserving classification of working and stretching postures in office settings. Data were collected from ten participants in five scenarios: four posture classes (seated working, seated stretching, standing working, standing stretching) and an empty environment. Distance and Doppler information extracted from the UWB radar signals was transformed into modality-specific images, which were then used as inputs to two classification models: ConcatFusion, a baseline model that fuses features by concatenation, and AttnFusion, which introduces spatial attention and convolutional feature integration. Both models were evaluated using leave-one-subject-out cross-validation. The AttnFusion model outperformed ConcatFusion, achieving a testing accuracy of 90.6% and a macro F1-score of 90.5%. These findings demonstrate the effectiveness of ceiling-mounted UWB radar combined with attention-based modality fusion for unobtrusive office posture monitoring. The approach offers a privacy-preserving solution with potential applications in real-time ergonomic assessment and integration into workplace health and safety programs. Full article
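The contrast between the two models can be sketched in a few lines: ConcatFusion simply concatenates per-modality features, whereas attention-style fusion weights modalities by learned scores. The NumPy toy below uses a made-up scoring vector and is not the paper's AttnFusion architecture; it only shows the weighting mechanism.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def concat_fusion(feat_distance, feat_doppler):
    # baseline: plain feature concatenation
    return np.concatenate([feat_distance, feat_doppler])

def attention_fusion(feat_distance, feat_doppler, w_att):
    # attention-style fusion: per-modality weights from learned scores (w_att)
    feats = np.stack([feat_distance, feat_doppler])   # shape (2, d)
    alpha = softmax(feats @ w_att)                    # shape (2,), sums to 1
    return (alpha[:, None] * feats).sum(axis=0)       # weighted combination
```

When one modality's features score much higher against `w_att`, the fused vector is dominated by that modality, which is the behavior attention fusion adds over plain concatenation.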
(This article belongs to the Special Issue Sensor-Based Human Activity Recognition)

23 pages, 2407 KB  
Article
Replication of Sensor-Based Categorization of Upper-Limb Performance in Daily Life in People Post Stroke and Generalizability to Other Populations
by Chelsea E. Macpherson, Marghuretta D. Bland, Christine Gordon, Allison E. Miller, Caitlin Newman, Carey L. Holleran, Christopher J. Dy, Lindsay Peterson, Keith R. Lohse and Catherine E. Lang
Sensors 2025, 25(15), 4618; https://doi.org/10.3390/s25154618 - 25 Jul 2025
Viewed by 820
Abstract
Background: Wearable movement sensors can measure upper limb (UL) activity, but single variables may not capture the full picture. This study aimed to replicate prior work identifying five multivariate categories of UL activity performance in people with stroke and controls and expand those findings to other UL conditions. Methods: Demographic, self-report, and wearable sensor-based UL activity performance variables were collected from 324 participants (stroke n = 49, multiple sclerosis n = 19, distal UL fracture n = 40, proximal UL pain n = 55, post-breast cancer n = 23, control n = 138). Principal component (PC) analyses (12, 9, 7, or 5 accelerometry input variables) were followed by cluster analyses and numerous assessments of model fit across multiple subsets of the total sample. Results: Two PCs explained 70–90% variance: PC1 (overall UL activity performance) and PC2 (preferred-limb use). A five-variable, five-cluster model was optimal across samples. In comparison to clusters, two PCs and individual accelerometry variables showed higher convergent validity with self-report outcomes of UL activity performance and disability. Conclusions: A five-variable, five-cluster model was replicable and generalizable. Convergent validity data suggest that UL activity performance in daily life may be better conceptualized on a continuum, rather than categorically. These findings highlight a unified, data-driven approach to tracking functional changes across UL conditions and severity of functional deficits. Full article
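The analysis pipeline described here (principal component analysis on accelerometry variables, then clustering of the component scores) can be sketched with scikit-learn. The synthetic two-group data below stands in for real participant variables, and the paper's five-variable, five-cluster model is simplified to two clusters for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# synthetic accelerometry-style variables (5 per participant) -- hypothetical
rng = np.random.default_rng(1)
low_activity = rng.normal([1, 1, 0.2, 0.2, 0.5], 0.1, size=(50, 5))
high_activity = rng.normal([4, 4, 0.8, 0.8, 0.5], 0.1, size=(50, 5))
X = StandardScaler().fit_transform(np.vstack([low_activity, high_activity]))

# PCA first (the paper found 2 PCs explained 70-90% of variance),
# then cluster the component scores
pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
```

On this toy data the first component separates the two activity-level groups cleanly, mirroring the paper's PC1 ("overall UL activity performance") in spirit only.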
(This article belongs to the Special Issue Sensor-Based Human Activity Recognition)

22 pages, 3223 KB  
Article
An EMG-Based GRU Model for Estimating Foot Pressure to Support Active Ankle Orthosis Development
by Praveen Nuwantha Gunaratne and Hiroki Tamura
Sensors 2025, 25(11), 3558; https://doi.org/10.3390/s25113558 - 5 Jun 2025
Cited by 4 | Viewed by 1845
Abstract
As populations age, particularly in countries like Japan, mobility impairments related to ankle joint dysfunction, such as foot drop, instability, and reduced gait adaptability, have become a significant concern. Active ankle–foot orthoses (AAFO) offer targeted support during walking; however, most existing systems rely on rule-based or threshold-based control, which is often limited to sagittal-plane movements and lacks adaptability to subject-specific gait variations. This study proposes an approach driven by neuromuscular activation, using surface electromyography (EMG) and a Gated Recurrent Unit (GRU)-based deep learning model to predict plantar pressure distributions at the heel, midfoot, and toe regions during gait. EMG signals were collected from four key ankle muscles, and plantar pressures were recorded using a customized sandal-integrated force-sensitive resistor (FSR) system. The data underwent comprehensive preprocessing and were segmented using a sliding-window method. Root mean square (RMS) values were extracted as the primary input feature due to their consistent performance in capturing muscle activation intensity. The GRU model generalized successfully across subjects, enabling accurate real-time inference of critical gait events such as heel strike, mid-stance, and toe-off. The biomechanical evaluation demonstrated strong signal compatibility while also identifying individual variations in electromechanical delay (EMD). The proposed predictive framework offers a scalable and interpretable approach to improving real-time AAFO control by synchronizing assistance with user-specific gait dynamics. Full article
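The sliding-window RMS feature extraction described as the model's primary input can be sketched in a few lines; the window and step sizes below are arbitrary placeholders, not the paper's settings.

```python
import numpy as np

def sliding_rms(emg, window, step):
    """Segment a single EMG channel with a sliding window and return the
    RMS of each window, a standard proxy for muscle activation intensity."""
    starts = range(0, len(emg) - window + 1, step)
    return np.array([np.sqrt(np.mean(emg[s:s + window] ** 2)) for s in starts])
```

Each RMS value (one per window, per muscle channel) would then form the input sequence fed to the GRU.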
(This article belongs to the Special Issue Sensor-Based Human Activity Recognition)

20 pages, 18281 KB  
Article
IMU Sensor-Based Worker Behavior Recognition and Construction of a Cyber–Physical System Environment
by Sehwan Park, Minkyo Youm and Junkyeong Kim
Sensors 2025, 25(2), 442; https://doi.org/10.3390/s25020442 - 13 Jan 2025
Cited by 5 | Viewed by 2922
Abstract
According to South Korea's Ministry of Employment and Labor, approximately 25,000 construction workers suffered injuries between 2015 and 2019. Additionally, about 500 fatalities occur annually, and multiple studies are being conducted to prevent these accidents and to quickly identify their occurrence so as to secure the golden time for the injured. Recently, AI-based video analysis systems for detecting safety accidents have been introduced. However, these systems are limited to areas where CCTV is installed, and locations like construction sites have numerous blind spots due to the limitations of CCTV coverage. To address this issue, there is active research on the use of MEMS (micro-electromechanical systems) sensors to detect abnormal conditions in workers. In particular, methods are being studied that use accelerometers and gyroscopes within MEMS sensors to acquire data based on workers' angles, that utilize three-axis accelerometers and barometric pressure sensors to improve the accuracy of fall detection systems, and that measure the wearer's gait using the x-, y-, and z-axis data from accelerometers and gyroscopes. However, most methods involve MEMS sensors embedded in smartphones, typically attached to one or two specific body parts. Therefore, in this study, we developed a novel miniaturized IMU (inertial measurement unit) sensor that can be simultaneously attached to multiple body parts of construction workers (head, body, hands, and legs). The sensor integrates accelerometers, gyroscopes, and barometric pressure sensors to measure various worker movements in real time (e.g., walking, jumping, standing, and working at heights). Additionally, incorporating PPG (photoplethysmography), body temperature, and acoustic sensors enables the comprehensive observation of both physiological signals and environmental changes. The collected sensor data are preprocessed using Kalman and extended Kalman filters, among others, and an algorithm is proposed to evaluate workers' safety status and update health-related data in real time. Experimental results demonstrated that the proposed IMU sensor can classify work activities with over 90% accuracy even at a low sampling rate of 15 Hz. Furthermore, by integrating internal filtering, communication modules, and server connectivity within an application, we established a cyber–physical system (CPS), enabling real-time monitoring and immediate alert transmission to safety managers. Through this approach, we verified improved performance in terms of miniaturization, measurement accuracy, and server integration compared to existing commercial sensors. Full article
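The Kalman-filter preprocessing step can be illustrated with a minimal scalar Kalman filter that smooths one noisy sensor channel. The process- and measurement-noise values below are hypothetical, and the paper's actual filters (including the extended Kalman filter for nonlinear states) are more elaborate.

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.5):
    """Minimal scalar Kalman filter: smooth a noisy constant-ish channel.
    q = process noise variance, r = measurement noise variance (hypothetical)."""
    x, p = measurements[0], 1.0   # initial state estimate and covariance
    out = []
    for z in measurements:
        p = p + q                 # predict: covariance grows by process noise
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update: blend prediction with measurement
        p = (1 - k) * p
        out.append(x)
    return np.array(out)
```

On a noisy near-constant channel, the filtered output has markedly lower variance than the raw signal, which is the point of the preprocessing stage before activity classification.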
(This article belongs to the Special Issue Sensor-Based Human Activity Recognition)

20 pages, 5455 KB  
Article
A New Iterative Algorithm for Magnetic Motion Tracking
by Tobias Schmidt, Johannes Hoffmann, Moritz Boueke, Robert Bergholz, Ludger Klinkenbusch and Gerhard Schmidt
Sensors 2024, 24(21), 6947; https://doi.org/10.3390/s24216947 - 29 Oct 2024
Cited by 1 | Viewed by 1338
Abstract
Motion analysis is of great interest to a variety of applications, such as virtual and augmented reality and medical diagnostics. Hand movement tracking systems, in particular, are used as a human–machine interface. In most cases, these systems are based on optical or acceleration/angular-rate sensors. These technologies are already well researched and used in commercial systems. In special applications, it can be advantageous to use magnetic sensors to supplement an existing system or even replace the existing sensors. The core of a motion tracking system is a localization unit. The relatively complex localization algorithms present a problem in magnetic systems, leading to relatively large computational complexity. In this paper, a new approach for pose estimation of a kinematic chain is presented. The new algorithm is based on spatially rotating magnetic dipole sources. A spatial feature is extracted from the sensor signal: the dipole direction for which the maximum magnitude value is detected at the sensor. This is introduced as the "maximum vector". A relationship between this feature, the location vector (pointing from the magnetic source to the sensor position), and the sensor orientation is derived and subsequently exploited. By modelling the hand as a kinematic chain, the posture of the chain can be described in two complementary ways: through the magnetic correlations and through the structure of the kinematic chain. Both are combined in an iterative algorithm with very low complexity. The algorithm was implemented in a real-time framework and evaluated in simulation and in first laboratory tests. In tests without movement, no significant deviation between the simulated and estimated poses was found. In tests with periodic movements, an error in the range of 1° was found. Of particular interest is the required computing power, which was evaluated in terms of the required computing operations and computing time. Initial analyses showed that a computing time of 3 μs per joint is required on a personal computer. Lastly, the first laboratory tests demonstrate the basic functionality of the proposed methodology. Full article
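The "maximum vector" feature has a simple geometric interpretation: for a point dipole, the field magnitude at a fixed sensor position is largest when the dipole moment points along the source-to-sensor direction. The brute-force NumPy check below illustrates this numerically; it is a toy verification under idealized dipole-field assumptions, not the paper's derivation or algorithm.

```python
import numpy as np

def dipole_field(m, r_vec):
    """Magnetic dipole field at position r_vec for moment m, up to a
    constant prefactor: B ∝ (3(m·r̂)r̂ − m) / r³."""
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return (3.0 * np.dot(m, r_hat) * r_hat - m) / r ** 3

def maximum_vector(r_vec, n=5000, seed=0):
    """Sample unit dipole directions and return the one producing the
    largest field magnitude at the sensor (the 'maximum vector')."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    mags = [np.linalg.norm(dipole_field(d, r_vec)) for d in dirs]
    return dirs[int(np.argmax(mags))]
```

Since |3(m·r̂)r̂ − m|² = 3(m·r̂)² + 1 for unit m, the magnitude is maximized when m is parallel (or antiparallel) to r̂, which is why the maximum vector encodes the source-to-sensor location vector exploited by the algorithm.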
(This article belongs to the Special Issue Sensor-Based Human Activity Recognition)
