Human Activity Recognition Using Sensors and Machine Learning: 2nd Edition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Wearables".

Deadline for manuscript submissions: 1 December 2025 | Viewed by 10136

Special Issue Editors


Dr. Dalin Zhang
Guest Editor
Department of Computer Science, Aalborg University, 9220 Aalborg, Denmark
Interests: deep learning; mobile computing; pervasive computing; Internet of Things; brain–computer interface; health informatics

Dr. Kaixuan Chen
Guest Editor
Department of Computer Science, Aalborg University, DK-9220 Aalborg, Denmark
Interests: data mining; deep learning; sensor-based human activity recognition

Prof. Dr. Xu Cheng
Guest Editor
School of Computer Science and Engineering, Tianjin University of Technology, Tianjin 300381, China
Interests: digital twins of ships; big data processing and optimization applications

Dr. Huan Liu
Guest Editor
School of Computer Science and Technology, MOEKLINNS Lab, Xi’an Jiaotong University, Xi’an, China
Interests: machine learning; deep learning; computer vision; weakly supervised learning; multi-modal emotion analysis; EEG emotion analysis

Special Issue Information

Dear Colleagues,

Recent advances in hardware and acquisition devices have accelerated the deployment of the Internet of Things, enabling myriad applications of human activity recognition. Human activity recognition is a time series classification task that involves predicting user behavior from sensor data. The task is challenging in real-world applications due to many inherent issues and various practical problems in different scenarios. The chief inherent issue is how to filter noisy sensor data and extract high-quality features for better recognition performance. Practical problems include lightweight algorithms for wearable devices, modeling human behaviors with fewer annotated data, learning to recognize complex activities, and continually learning patterns from streaming data. Recently, we have witnessed compelling evidence from successful investigations of machine learning for activity recognition. While machine learning has proven effective and achieves state-of-the-art performance, the growing number of related studies indicates that, in both the academic and industrial communities, there is considerable demand for more advanced machine learning algorithms to tackle these challenges and further improve recognition performance. It is therefore vital and timely to offer an opportunity to report progress in human activity recognition using sensors and machine learning. The research foci of this Special Issue include theoretical studies, model designs, development, and advanced applications of machine learning algorithms for sensor-based activity data.
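To make the task concrete for readers from neighbouring fields, the minimal sketch below illustrates the sliding-window formulation described above, with hand-crafted statistical features and an off-the-shelf classifier; the window width, stride, features, and model are illustrative assumptions, not recommendations.

```python
# Minimal sliding-window HAR pipeline: segment raw inertial signals,
# extract simple statistical features, and train a classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def sliding_windows(signal, labels, width=128, stride=64):
    """Cut a (T, channels) signal into fixed-width windows.

    Each window is labelled by the majority label inside it.
    """
    X, y = [], []
    for start in range(0, len(signal) - width + 1, stride):
        X.append(signal[start:start + width])
        y.append(np.bincount(labels[start:start + width]).argmax())
    return np.stack(X), np.array(y)

def basic_features(windows):
    """Hand-crafted per-channel statistics: mean, std, min, max."""
    return np.concatenate(
        [windows.mean(1), windows.std(1), windows.min(1), windows.max(1)],
        axis=1,
    )

# Synthetic stand-in for a real (T, 3) accelerometer stream.
rng = np.random.default_rng(0)
signal = rng.normal(size=(10_000, 3))
labels = rng.integers(0, 4, size=10_000)  # 4 hypothetical activities

X, y = sliding_windows(signal, labels)
X_train, X_test, y_train, y_test = train_test_split(
    basic_features(X), y, test_size=0.2, random_state=0
)
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```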

Dr. Dalin Zhang
Dr. Kaixuan Chen
Prof. Dr. Xu Cheng
Dr. Huan Liu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • supervised learning
  • semi-supervised learning
  • unsupervised learning
  • active learning
  • transfer learning
  • online learning
  • imbalance learning
  • representation learning
  • ensemble methods
  • automated machine learning (AutoML)
  • data segmentation
  • explainable AI

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)

Research

23 pages, 12677 KiB  
Article
CLUMM: Contrastive Learning for Unobtrusive Motion Monitoring
by Pius Gyamenah, Hari Iyer, Heejin Jeong and Shenghan Guo
Sensors 2025, 25(4), 1048; https://doi.org/10.3390/s25041048 - 10 Feb 2025
Viewed by 698
Abstract
Traditional approaches for human monitoring and motion recognition often rely on wearable sensors, which, while effective, are obtrusive and cause significant discomfort to workers. More recent approaches have employed unobtrusive, real-time sensing using cameras mounted in the manufacturing environment. While these methods generate large volumes of rich data, they require extensive labeling and analysis for machine learning applications. Additionally, these cameras frequently capture irrelevant environmental information, which can hinder the performance of deep learning algorithms. To address these limitations, this paper introduces a novel framework that leverages a contrastive learning approach to learn rich representations from raw images without the need for manual labeling. This framework mitigates the effect of environmental complexity by focusing on critical joint coordinates relevant to manufacturing tasks. This approach ensures that the model learns directly from human-specific data, effectively reducing the impact of the surrounding environment. A custom dataset of human subjects simulating various tasks in a workplace setting is used for training and evaluation. By fine-tuning the learned model for a downstream motion classification task, we achieve up to 90% accuracy, demonstrating the effectiveness of our proposed solution in real-time human motion monitoring.
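The paper's exact pipeline is not reproduced here, but the underlying idea of contrastive pre-training on joint coordinates without manual labels can be sketched roughly as follows; the encoder, augmentation, and all dimensions are assumptions for illustration only.

```python
# Rough sketch of SimCLR-style contrastive pre-training on pose keypoints.
# Encoder, augmentation, and dimensions are illustrative assumptions only.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss over two augmented views of the same batch."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, d), unit norm
    sim = z @ z.t() / temperature                 # scaled cosine similarities
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float('-inf'))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)          # positives = paired views

def jitter(x, sigma=0.02):
    """Simple augmentation: Gaussian noise on joint coordinates."""
    return x + sigma * torch.randn_like(x)

# Flattened keypoint vectors, purely synthetic: e.g. 17 joints x 2D coords.
encoder = torch.nn.Sequential(
    torch.nn.Linear(34, 128), torch.nn.ReLU(), torch.nn.Linear(128, 64)
)
x = torch.randn(32, 34)
loss = nt_xent(encoder(jitter(x)), encoder(jitter(x)))
loss.backward()
```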

30 pages, 4162 KiB  
Article
Enhancing Deep-Learning Classification for Remote Motor Imagery Rehabilitation Using Multi-Subject Transfer Learning in IoT Environment
by Joharah Khabti, Saad AlAhmadi and Adel Soudani
Sensors 2024, 24(24), 8127; https://doi.org/10.3390/s24248127 - 19 Dec 2024
Viewed by 993
Abstract
One of the most promising applications for electroencephalogram (EEG)-based brain–computer interfaces (BCIs) is motor rehabilitation through motor imagery (MI) tasks. However, current MI training requires physical attendance, while remote MI training can be applied anywhere, facilitating flexible rehabilitation. Providing remote MI training raises challenges in ensuring accurate recognition of MI tasks by healthcare providers, in addition to managing computation and communication costs. The MI tasks are recognized through EEG signal processing and classification, which can drain sensor energy due to the complexity of the data and the presence of redundant information, often influenced by subject-dependent factors. To address these challenges, we propose in this paper a multi-subject transfer-learning approach for an efficient MI training framework in remote rehabilitation within an IoT environment. For efficient implementation, we propose an IoT architecture that includes cloud/edge computing as a solution to enhance the system's efficiency and reduce the use of network resources. Furthermore, deep-learning classification with and without channel selection is applied in the cloud, while multi-subject transfer-learning classification is utilized at the edge node. Various transfer-learning strategies, including different epochs, freezing layers, and data divisions, were employed to improve accuracy and efficiency. To validate this framework, we used the BCI IV 2a dataset, focusing on subjects 7, 8, and 9 as targets. The results demonstrated that our approach significantly enhanced the average accuracy in both multi-subject and single-subject transfer-learning classification. In three-subject transfer-learning classification, the FCNNA model achieved up to 79.77% accuracy without channel selection and 76.90% with channel selection. For two-subject and single-subject transfer learning, the application of transfer learning improved the average accuracy by up to 6.55% and 12.19%, respectively, compared to classification without transfer learning. This framework offers a promising solution for remote MI rehabilitation, providing both accurate task recognition and efficient resource usage.
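As a rough illustration of the transfer-learning strategy the abstract describes (pre-train on source subjects, then freeze early layers and fine-tune on the target), the toy sketch below freezes a feature extractor and adapts only the classification head; the network and hyperparameters are assumptions, not the paper's FCNNA model.

```python
# Toy illustration of multi-subject transfer learning with frozen layers.
# The network and all hyperparameters are assumptions, not the paper's FCNNA.
import torch
import torch.nn as nn

class ToyEEGNet(nn.Module):
    def __init__(self, channels=22, classes=4):
        super().__init__()
        self.features = nn.Sequential(            # early, generic layers
            nn.Conv1d(channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(16),
        )
        self.head = nn.Sequential(                # subject-specific layers
            nn.Flatten(), nn.Linear(32 * 16, classes)
        )

    def forward(self, x):
        return self.head(self.features(x))

model = ToyEEGNet()
# 1) ... pre-train `model` on source subjects (omitted) ...
# 2) Freeze the feature extractor; fine-tune only the head on the target.
for p in model.features.parameters():
    p.requires_grad = False
opt = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
x, y = torch.randn(8, 22, 250), torch.randint(0, 4, (8,))  # synthetic EEG
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
```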

30 pages, 11972 KiB  
Article
Identifying Infant Body Position from Inertial Sensors with Machine Learning: Which Parameters Matter?
by Joanna Duda-Goławska, Aleksander Rogowski, Zuzanna Laudańska, Jarosław Żygierewicz and Przemysław Tomalski
Sensors 2024, 24(23), 7809; https://doi.org/10.3390/s24237809 - 6 Dec 2024
Cited by 1 | Viewed by 1356
Abstract
The efficient classification of body position is crucial for monitoring infants' motor development. It may fast-track the early detection of developmental issues related not only to the acquisition of motor milestones but also to postural stability and movement patterns. In turn, this may facilitate and enhance opportunities for early intervention that are crucial for promoting healthy growth and development. The manual classification of human body position based on video recordings is labour-intensive, leading to the adoption of Inertial Motion Unit (IMU) sensors. IMUs measure acceleration, angular velocity, and magnetic field intensity, enabling the automated classification of body position. Many research teams are currently employing supervised machine learning classifiers that utilise hand-crafted features for data segment classification. In this study, we used a longitudinal dataset of IMU recordings made in the lab during three different play activities of infants aged 4–12 months. The classification was conducted based on manually annotated video recordings. We found superior performance of the CatBoost Classifier over the Random Forest Classifier in the task of classifying five positions based on IMU sensor data from infants, yielding excellent classification accuracy for the Supine (97.7%), Sitting (93.5%), and Prone (89.9%) positions. Moreover, using data ablation experiments and analysing the SHAP (SHapley Additive exPlanations) values, the study assessed the importance of various groups of features from both the time and frequency domains. The results highlight that both accelerometer and magnetometer data, especially their statistical characteristics, are critical contributors to improving the accuracy of body position classification.
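The general recipe, gradient-boosted classification of windowed IMU features followed by SHAP attribution, can be sketched as below; the data and feature layout are synthetic placeholders, not the study's dataset.

```python
# Sketch: CatBoost classification of hand-crafted IMU features + SHAP values.
# Features and labels here are synthetic placeholders.
import numpy as np
import shap
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))    # e.g. per-window acc/mag statistics
y = rng.integers(0, 5, size=500)  # 5 positions: supine, sitting, prone, ...

model = CatBoostClassifier(iterations=200, verbose=False)
model.fit(X, y)

# SHAP values quantify each feature's contribution to each prediction,
# which supports the kind of feature-group importance analysis described.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
```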

16 pages, 4572 KiB  
Article
Latent Space Representation of Human Movement: Assessing the Effects of Fatigue
by Thomas Rousseau, Gentiane Venture and Vincent Hernandez
Sensors 2024, 24(23), 7775; https://doi.org/10.3390/s24237775 - 4 Dec 2024
Viewed by 1169
Abstract
Fatigue plays a critical role in sports science, significantly affecting recovery, training effectiveness, and overall athletic performance. Understanding and predicting fatigue is essential to optimize training, prevent overtraining, and minimize the risk of injuries. The aim of this study is to leverage Human Activity Recognition (HAR) through deep learning methods for dimensionality reduction. The use of Adversarial AutoEncoders (AAEs) is explored to assess and visualize fatigue in a two-dimensional latent space, focusing on both semi-supervised and conditional approaches. By transforming complex time-series data into this latent space, the objective is to evaluate motor changes associated with fatigue within the participants' motor control by analyzing shifts in the distribution of data points and providing a visual representation of these effects. It is hypothesized that increased fatigue will cause significant changes in point distribution, which will be analyzed using clustering techniques to identify fatigue-related patterns. The data were collected using a Wii Balance Board and three Inertial Measurement Units, which were placed on the hip and both forearms (distal part, close to the wrist) to capture dynamic and kinematic information. The participants followed a fatigue-inducing protocol that involved repeating sets of 10 repetitions of four different exercises (Squat, Right Lunge, Left Lunge, and Plank Jump) until exhaustion. Our findings indicate that the AAE models are effective in reducing data dimensionality, allowing for the visualization of fatigue's impact within a 2D latent space. The latent space representation provides insights into motor control variations, revealing patterns that can be used to monitor fatigue levels and optimize training or rehabilitation programs.
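In outline, an adversarial autoencoder pairs a reconstruction loss with a discriminator that pushes the 2D latent codes toward a chosen prior. The minimal sketch below shows the three standard training phases for a generic, unconditional AAE; it is not the paper's semi-supervised or conditional variant, and all architectural choices are assumptions.

```python
# Generic adversarial autoencoder sketch with a 2D latent space.
# Architecture and training details are illustrative assumptions.
import torch
import torch.nn as nn

dim_in, dim_z = 60, 2   # e.g. flattened IMU window -> 2D latent code
enc = nn.Sequential(nn.Linear(dim_in, 64), nn.ReLU(), nn.Linear(64, dim_z))
dec = nn.Sequential(nn.Linear(dim_z, 64), nn.ReLU(), nn.Linear(64, dim_in))
disc = nn.Sequential(nn.Linear(dim_z, 64), nn.ReLU(), nn.Linear(64, 1))

opt_ae = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(64, dim_in)                    # synthetic movement windows

# 1) Reconstruction phase: train encoder + decoder.
recon_loss = nn.functional.mse_loss(dec(enc(x)), x)
opt_ae.zero_grad(); recon_loss.backward(); opt_ae.step()

# 2) Regularisation phase: discriminator pushes q(z) toward the prior.
z_fake = enc(x).detach()
z_real = torch.randn_like(z_fake)              # Gaussian prior samples
d_loss = bce(disc(z_real), torch.ones(64, 1)) + \
         bce(disc(z_fake), torch.zeros(64, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# 3) Generator phase: encoder tries to fool the discriminator.
g_loss = bce(disc(enc(x)), torch.ones(64, 1))
opt_ae.zero_grad(); g_loss.backward(); opt_ae.step()
```

Once trained, `enc` maps each movement window to a 2D point, so fatigue-related drift can be inspected directly as shifts in the latent scatter plot.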

21 pages, 956 KiB  
Article
BodyFlow: An Open-Source Library for Multimodal Human Activity Recognition
by Rafael del-Hoyo-Alonso, Ana Caren Hernández-Ruiz, Carlos Marañes-Nueno, Irene López-Bosque, Rocío Aznar-Gimeno, Pilar Salvo-Ibañez, Pablo Pérez-Lázaro, David Abadía-Gallego and María de la Vega Rodrigálvarez-Chamarro
Sensors 2024, 24(20), 6729; https://doi.org/10.3390/s24206729 - 19 Oct 2024
Cited by 1 | Viewed by 2375
Abstract
Human activity recognition is a critical task for various applications across healthcare, sports, security, gaming, and other fields. This paper presents BodyFlow, a comprehensive library that seamlessly integrates human pose estimation, multi-person detection and tracking, and activity recognition modules. BodyFlow enables users to effortlessly identify common activities and 2D/3D body joints from input sources such as videos, image sets, or webcams. Additionally, the library can simultaneously process inertial sensor data, offering users the flexibility to choose their preferred input, thus facilitating multimodal human activity recognition. BodyFlow incorporates state-of-the-art algorithms for 2D and 3D pose estimation and three distinct models for human activity recognition.
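BodyFlow's actual API is not reproduced here; the hypothetical sketch below only illustrates the multimodal idea the abstract describes, fusing pose-derived features with an inertial stream in a single classifier.

```python
# Generic late-fusion sketch for multimodal HAR (pose + inertial input).
# This is NOT BodyFlow's API; all modules and shapes are hypothetical.
import torch
import torch.nn as nn

class LateFusionHAR(nn.Module):
    def __init__(self, pose_dim=51, imu_dim=6, classes=5):
        super().__init__()
        self.pose_branch = nn.Sequential(nn.Linear(pose_dim, 64), nn.ReLU())
        self.imu_branch = nn.GRU(imu_dim, 64, batch_first=True)
        self.classifier = nn.Linear(64 + 64, classes)

    def forward(self, pose, imu):
        p = self.pose_branch(pose)        # (batch, 64) pose features
        _, h = self.imu_branch(imu)       # final hidden state: (1, batch, 64)
        return self.classifier(torch.cat([p, h[-1]], dim=1))

model = LateFusionHAR()
pose = torch.randn(4, 51)                 # e.g. 17 joints x 3D coordinates
imu = torch.randn(4, 100, 6)              # 100 timesteps of acc + gyro
logits = model(pose, imu)                 # (4, 5) activity scores
```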

26 pages, 5154 KiB  
Article
A Robust Deep Feature Extraction Method for Human Activity Recognition Using a Wavelet Based Spectral Visualisation Technique
by Nadeem Ahmed, Md Obaydullah Al Numan, Raihan Kabir, Md Rashedul Islam and Yutaka Watanobe
Sensors 2024, 24(13), 4343; https://doi.org/10.3390/s24134343 - 4 Jul 2024
Cited by 4 | Viewed by 3002
Abstract
Human Activity Recognition (HAR), alongside Ambient Assisted Living (AAL), is an integral component of smart homes, sports, surveillance, and investigation activities. To recognize daily activities, researchers are focusing on lightweight, cost-effective, wearable sensor-based technologies, as traditional vision-based technologies compromise elderly privacy, a fundamental right of every human. However, it is challenging to extract potential features from 1D multi-sensor data. Thus, this research focuses on extracting distinguishable patterns and deep features from spectral images obtained by time-frequency-domain analysis of 1D multi-sensor data. Wearable sensor data, particularly accelerometer and gyroscope data, act as input signals for different daily activities and provide potential information through time-frequency analysis. This time series information is mapped into spectral images known as scalograms, derived from the continuous wavelet transform. The deep activity features are extracted from the activity images using deep learning models such as CNN, MobileNetV3, ResNet, and GoogLeNet and subsequently classified using a conventional classifier. To validate the proposed model, the SisFall and PAMAP2 benchmark datasets are used. Based on the experimental results, the proposed model shows optimal performance for activity recognition, obtaining an accuracy of 98.4% for SisFall and 98.1% for PAMAP2 using Morlet as the mother wavelet with ResNet-101 and a softmax classifier, and it outperforms state-of-the-art algorithms.
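The scalogram construction can be approximated with a continuous wavelet transform as sketched below; the sampling rate, scales, and signal are illustrative assumptions rather than the paper's settings.

```python
# Sketch: turn a 1D accelerometer channel into a scalogram image with a
# Morlet continuous wavelet transform; all values here are illustrative.
import numpy as np
import pywt

fs = 100                                   # assumed sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)
signal = np.sin(2 * np.pi * 3 * t) + 0.3 * np.random.randn(t.size)

scales = np.arange(1, 65)                  # 64 scales -> 64-row image
coeffs, freqs = pywt.cwt(signal, scales, 'morl', sampling_period=1 / fs)
scalogram = np.abs(coeffs)                 # (64, len(signal)) image

# `scalogram` can then be resized and fed to a CNN such as ResNet-101.
print(scalogram.shape, freqs[:3])
```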