Search Results (12)

Search Parameters:
Keywords = UniMiB SHAR

27 pages, 3836 KB  
Article
A Feature Engineering Method for Smartphone-Based Fall Detection
by Pengyu Guo and Masaya Nakayama
Sensors 2025, 25(20), 6500; https://doi.org/10.3390/s25206500 - 21 Oct 2025
Viewed by 1298
Abstract
A fall is defined as an event in which a person inadvertently comes to rest on the ground, floor, or another lower level. It is the second leading cause of unintentional death worldwide, with the elderly population (aged 65 and above) at the highest risk. In addition to preventing falls, timely and accurate detection is crucial to enable effective treatment and reduce potential injury. In this work, we propose a smartphone-based method for fall detection, employing K-Nearest Neighbors (KNN) and Support Vector Machine (SVM) classifiers to predict fall events from accelerometer data. We evaluated the proposed method on two simulated datasets (UniMiB SHAR and MobiAct) and one real-world fall dataset (FARSEEING), performing both same-dataset and cross-dataset evaluations. In same-dataset evaluation on UniMiB SHAR, the method achieved an average accuracy of 98.45% in Leave-One-Subject-Out (LOSO) cross-validation. On MobiAct, it achieved a peak accuracy of 99.89% using KNN. In cross-dataset validation on MobiAct, the highest accuracy reached 96.41%, while on FARSEEING, the method achieved 95.35% sensitivity and 98.12% specificity. SHAP-based interpretability analysis was further conducted to identify the most influential features and provide insights into the model’s decision-making process. These results demonstrate the high effectiveness, robustness, and transparency of the proposed approach in detecting falls across different datasets and scenarios.
(This article belongs to the Special Issue Sensing Technology and Wearables for Physical Activity)
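As a rough illustration of the pipeline this abstract describes, the sketch below trains a KNN fall detector with Leave-One-Subject-Out cross-validation on synthetic accelerometer windows. The magnitude statistics used as features, the window length, and the simulated impact spike are illustrative assumptions, not the authors' engineered feature set.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Synthetic tri-axial accelerometer windows: falls get a large impact spike.
def make_window(is_fall):
    sig = rng.normal(0, 1, (151, 3))               # ~3 s at 50 Hz (assumed)
    if is_fall:
        sig[70:80] += rng.normal(8, 1, (10, 3))    # simulated impact
    return sig

def features(win):
    mag = np.linalg.norm(win, axis=1)              # acceleration magnitude
    return [mag.max(), mag.min(), mag.mean(), mag.std(), np.ptp(mag)]

labels = rng.integers(0, 2, 300)                   # 1 = fall, 0 = ADL
subjects = rng.integers(0, 10, 300)                # 10 simulated subjects
X = np.array([features(make_window(y)) for y in labels])

# LOSO: each fold holds out every window of one subject, so the classifier
# is always tested on a person it has never seen.
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, labels,
                         groups=subjects, cv=LeaveOneGroupOut())
print(round(scores.mean(), 3))
```

On this deliberately separable synthetic data the mean LOSO accuracy is near 1.0; the 98.45% reported above is on the real UniMiB SHAR data.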

14 pages, 1326 KB  
Article
Fall Detection Based on Recurrent Neural Networks and Accelerometer Data from Smartphones
by Natalia Bartczak, Marta Glanowska, Karolina Kowalewicz, Maciej Kunin and Robert Susik
Appl. Sci. 2025, 15(12), 6688; https://doi.org/10.3390/app15126688 - 14 Jun 2025
Cited by 2 | Viewed by 1843
Abstract
An aging society increases the demand for solutions that enable quick reactions, such as calling for help in response to events that may threaten life or health. One such event is a fall, a common cause (or consequence) of injuries among the elderly that can lead to health problems or even death. A fall may also be a symptom of a serious health problem, such as a stroke or a heart attack. This study addresses the fall detection problem. We propose a fall detection solution based on accelerometer data from smartphone devices. The proposed model is based on a Recurrent Neural Network employing a Gated Recurrent Unit (GRU) layer. We compared the results with state-of-the-art solutions from the literature using the UniMiB SHAR dataset, which contains accelerometer data collected with smartphone devices. The dataset provides validation splits prepared for evaluation with the Leave-One-Subject-Out (LOSO-CV) and 5-Fold Cross-Validation (CV) strategies; consequently, we used both for evaluation. Our solution achieves the highest result for Leave-One-Subject-Out and a comparable result for the k-Fold Cross-Validation strategy, achieving 98.99% and 99.82% accuracy, respectively. We believe it has the potential for adoption in production devices, which could be helpful, for example, in nursing homes, improving the provision of assistance, especially when combined with other sensors into a multimodal system. We also make all the data and code used in our experiments publicly available, allowing other researchers to reproduce our results.
(This article belongs to the Section Computing and Artificial Intelligence)
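The core of the model described above is a Gated Recurrent Unit layer. A minimal NumPy sketch of the GRU recurrence is shown below; the layer sizes and weight initialisation are assumptions, not the authors' trained network.

```python
import numpy as np

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(x @ Wz + h @ Uz)                  # update gate
    r = sig(x @ Wr + h @ Ur)                  # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)  # candidate hidden state
    return (1 - z) * h + z * h_tilde          # convex blend of old and new

rng = np.random.default_rng(1)
d_in, d_h = 3, 8                              # 3 accelerometer axes, 8 hidden units
W = [rng.normal(0, 0.1, s) for s in [(d_in, d_h), (d_h, d_h)] * 3]

# Run a 151-sample tri-axial window through the cell; the final hidden
# state summarises the sequence and would feed a fall/no-fall classifier head.
h = np.zeros(d_h)
for x_t in rng.normal(0, 1, (151, d_in)):
    h = gru_cell(x_t, h, *W)
print(h.shape)
```

Because each step blends the previous state with a tanh-bounded candidate, the hidden state stays in (-1, 1), which keeps the recurrence numerically stable over long windows.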

20 pages, 8117 KB  
Article
Enhancing the Transformer Model with a Convolutional Feature Extractor Block and Vector-Based Relative Position Embedding for Human Activity Recognition
by Xin Guo, Young Kim, Xueli Ning and Se Dong Min
Sensors 2025, 25(2), 301; https://doi.org/10.3390/s25020301 - 7 Jan 2025
Cited by 2 | Viewed by 4034
Abstract
The Transformer model has received significant attention in Human Activity Recognition (HAR) due to its self-attention mechanism, which captures long-range dependencies in time series. However, for Inertial Measurement Unit (IMU) sensor time-series signals, the Transformer model does not effectively exploit the a priori knowledge that such signals exhibit strong, complex temporal correlations. Therefore, we proposed using multi-layer convolutional layers as a Convolutional Feature Extractor Block (CFEB). CFEB enables the Transformer model to leverage both local and global time-series features for activity classification. Meanwhile, the absolute position embedding (APE) in existing Transformer models cannot accurately represent the distance relationship between different time points. To further explore positional correlations in temporal signals, this paper introduces the Vector-based Relative Position Embedding (vRPE), aiming to provide more relative temporal position information within sensor signals for the Transformer model. Combining these innovations, we conduct extensive experiments on three HAR benchmark datasets: KU-HAR, UniMiB SHAR, and USC-HAD. Experimental results demonstrate that our proposed enhancement scheme substantially elevates the performance of the Transformer model in HAR.
(This article belongs to the Special Issue Transformer Applications in Target Tracking)
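The exact vRPE formulation is the paper's; the general idea of a vector-based relative position term can be sketched as follows, assuming a clipped relative-distance table whose entries are added as a bias to the dot-product attention scores (sizes and clipping distance are made up for illustration).

```python
import numpy as np

rng = np.random.default_rng(2)
seq_len, d_head, max_dist = 6, 4, 3

# Clipped relative distance between every query/key position pair.
pos = np.arange(seq_len)
rel = np.clip(pos[None, :] - pos[:, None], -max_dist, max_dist)  # (L, L)

# One learnable vector per distinct distance in [-max_dist, max_dist].
table = rng.normal(0, 0.1, (2 * max_dist + 1, d_head))
rel_emb = table[rel + max_dist]              # (L, L, d_head) lookup

# Relative term added to the usual dot-product attention scores: unlike
# an absolute position embedding, it depends only on the offset k - q.
q = rng.normal(0, 1, (seq_len, d_head))
k = rng.normal(0, 1, (seq_len, d_head))
scores = q @ k.T + np.einsum("qd,qkd->qk", q, rel_emb)
print(scores.shape)
```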

33 pages, 2156 KB  
Article
Identification of Optimal Data Augmentation Techniques for Multimodal Time-Series Sensory Data: A Framework
by Nazish Ashfaq, Muhammad Hassan Khan and Muhammad Adeel Nisar
Information 2024, 15(6), 343; https://doi.org/10.3390/info15060343 - 11 Jun 2024
Cited by 12 | Viewed by 6780
Abstract
Recently, the research community has shown significant interest in the continuous temporal data obtained from motion sensors in wearable devices. These data are useful for classifying and analysing different human activities in many application areas, such as healthcare, sports, and surveillance. The literature has presented a multitude of deep learning models that aim to derive a suitable feature representation from temporal sensory input. However, a substantial quantity of annotated training data is crucial to adequately train deep networks. The data originating from wearable devices are vast but largely unlabeled, which hinders our ability to train models effectively and leads to overfitting. The contribution of the proposed research is twofold: firstly, it involves a systematic evaluation of fifteen different augmentation strategies to address the scarcity of labeled data, which plays a critical role in classification tasks. Secondly, it introduces an automatic feature-learning technique, proposing a Multi-Branch Hybrid Conv-LSTM network to classify human activities of daily living using multimodal data from different wearable smart devices. The objective of this study is to introduce an ensemble deep model that effectively captures intricate patterns and interdependencies within temporal data. The term “ensemble model” refers to the fusion of distinct deep models, with the objective of leveraging their individual strengths to develop a solution that is more robust and efficient. A comprehensive assessment of ensemble models is conducted using data-augmentation techniques on two prominent benchmark datasets: CogAge and UniMiB-SHAR. The proposed network employs a range of data-augmentation methods to improve the accuracy of atomic and composite activities, yielding a 5% increase in accuracy for composite activities and a 30% increase for atomic activities.
(This article belongs to the Special Issue Human Activity Recognition and Biomedical Signal Processing)
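A few of the augmentation strategies a study like this evaluates can be sketched in NumPy. The parameter values below (noise scale, scaling spread, warp strength) are illustrative assumptions, not the values the paper identifies as optimal.

```python
import numpy as np

rng = np.random.default_rng(3)

def jitter(x, sigma=0.05):
    """Add small Gaussian noise to every sample."""
    return x + rng.normal(0, sigma, x.shape)

def scaling(x, sigma=0.1):
    """Multiply each channel by a random factor close to 1."""
    return x * rng.normal(1, sigma, (1, x.shape[1]))

def time_warp(x, knots=4):
    """Resample along a smoothly distorted time axis."""
    n = len(x)
    anchors = np.linspace(0, n - 1, knots)
    warped = np.sort(anchors + rng.normal(0, n * 0.03, knots))
    t = np.interp(np.arange(n), anchors, warped)   # distorted time axis
    return np.stack([np.interp(t, np.arange(n), x[:, c])
                     for c in range(x.shape[1])], axis=1)

window = rng.normal(0, 1, (128, 3))                # one tri-axial window
for aug in (jitter, scaling, time_warp):
    assert aug(window).shape == window.shape       # labels are preserved
```

Each transform keeps the window shape and label, so augmented copies can be mixed directly into the training set.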

23 pages, 4366 KB  
Article
Deep Wavelet Convolutional Neural Networks for Multimodal Human Activity Recognition Using Wearable Inertial Sensors
by Thi Hong Vuong, Tung Doan and Atsuhiro Takasu
Sensors 2023, 23(24), 9721; https://doi.org/10.3390/s23249721 - 9 Dec 2023
Cited by 12 | Viewed by 3950
Abstract
Recent advances in wearable systems have made inertial sensors, such as accelerometers and gyroscopes, compact, lightweight, multimodal, low-cost, and highly accurate. Wearable inertial sensor-based multimodal human activity recognition (HAR) methods utilize the rich sensing data from embedded multimodal sensors to infer human activities. However, existing HAR approaches either rely on domain knowledge or fail to address the time-frequency dependencies of multimodal sensor signals. In this paper, we propose a novel method called deep wavelet convolutional neural networks (DWCNN) designed to learn features from the time-frequency domain and improve accuracy for multimodal HAR. DWCNN introduces a framework that combines continuous wavelet transforms (CWT) with enhanced deep convolutional neural networks (DCNN) to capture the dependencies of sensing signals in the time-frequency domain, thereby enhancing the feature representation ability for multiple wearable inertial sensor-based HAR tasks. Within the CWT, we further propose an algorithm to estimate the wavelet scale parameter. This helps enhance the performance of CWT when computing the time-frequency representation of the input signals. The output of the CWT then serves as input for the proposed DCNN, which consists of residual blocks for extracting features from different modalities and attention blocks for fusing these features of multimodal signals. We conducted extensive experiments on five benchmark HAR datasets: WISDM, UCI-HAR, Heterogeneous, PAMAP2, and UniMiB SHAR. The experimental results demonstrate the superior performance of the proposed model over existing competitors.
(This article belongs to the Section Wearables)
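The CWT front-end can be approximated with a hand-rolled Morlet wavelet and plain convolution. The paper additionally proposes a scale-estimation algorithm, which is not reproduced here; a fixed range of scales is assumed instead.

```python
import numpy as np

def morlet(scale, w0=5.0):
    """Complex Morlet wavelet sampled at the given scale."""
    t = np.arange(-4 * scale, 4 * scale + 1)
    return (np.exp(1j * w0 * t / scale)
            * np.exp(-0.5 * (t / scale) ** 2) / np.sqrt(scale))

def cwt(sig, scales):
    """Convolve the signal with a wavelet at each scale: a time-frequency map."""
    return np.stack([np.abs(np.convolve(sig, morlet(s), mode="same"))
                     for s in scales])

rng = np.random.default_rng(4)
t = np.arange(256) / 50.0                     # ~5 s at 50 Hz (assumed rate)
sig = np.sin(2 * np.pi * 2.0 * t) + 0.3 * rng.normal(size=t.size)
scalogram = cwt(sig, scales=np.arange(2, 32))
print(scalogram.shape)                        # one row per scale
```

The resulting scalogram is a 2-D image (scales × time) per sensor channel, which is what the downstream residual and attention blocks consume.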

20 pages, 2177 KB  
Article
Human Activity Recognition Based on an Efficient Neural Architecture Search Framework Using Evolutionary Multi-Objective Surrogate-Assisted Algorithms
by Xiaojuan Wang, Mingshu He, Liu Yang, Hui Wang and Yun Zhong
Electronics 2023, 12(1), 50; https://doi.org/10.3390/electronics12010050 - 23 Dec 2022
Cited by 9 | Viewed by 2925
Abstract
Human activity recognition (HAR) is a popular and challenging research topic driven by various applications. Deep learning methods have been used to improve the accuracy and efficiency of HAR models. However, such methods have many manually tuned parameters, which cost researchers substantial time in training and testing, making it challenging to design a suitable model. In this paper, we propose HARNAS, an efficient approach for automatic architecture search for HAR. Inspired by the popular multi-objective evolutionary algorithm, which has a strong capability for solving problems with multiple conflicting objectives, we set the weighted F1-score, FLOPs, and the number of parameters as objectives. Furthermore, we use a surrogate model to select models with high scores from the large candidate set. Moreover, the chosen models are added to the training set of the surrogate model, so the surrogate model is updated along the search process. Our method avoids manually designing the network structure, and the experimental results demonstrate that it can reduce training costs, in both time and computing resources, by 40% on the OPPORTUNITY dataset and by 75% on the UniMiB-SHAR dataset. Additionally, we also demonstrate the portability of the trained surrogate model and HAR model by transferring them from the training dataset to a new dataset.
(This article belongs to the Section Artificial Intelligence)
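The selection step of a multi-objective search like this rests on Pareto dominance over the stated objectives. A minimal sketch, with made-up candidate scores for (1 − weighted F1, FLOPs, parameter count), all to be minimised:

```python
import numpy as np

def nondominated(points):
    """Boolean mask of points not dominated by any other (minimisation)."""
    n = len(points)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # j dominates i if it is no worse everywhere and better somewhere.
            if i != j and np.all(points[j] <= points[i]) \
                      and np.any(points[j] < points[i]):
                mask[i] = False
                break
    return mask

# Hypothetical candidate architectures: (1 - F1, MFLOPs, params in MB).
cands = np.array([[0.08, 120, 0.9],
                  [0.05, 300, 1.5],
                  [0.08, 150, 1.2],   # dominated by the first row
                  [0.12,  80, 0.4]])
print(nondominated(cands))
```

The surviving (nondominated) candidates form the Pareto front from which the surrogate model picks architectures to evaluate in full.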

21 pages, 745 KB  
Article
The Applications of Metaheuristics for Human Activity Recognition and Fall Detection Using Wearable Sensors: A Comprehensive Analysis
by Mohammed A. A. Al-qaness, Ahmed M. Helmi, Abdelghani Dahou and Mohamed Abd Elaziz
Biosensors 2022, 12(10), 821; https://doi.org/10.3390/bios12100821 - 3 Oct 2022
Cited by 40 | Viewed by 4933
Abstract
In this paper, we study the applications of metaheuristic (MH) optimization algorithms in human activity recognition (HAR) and fall detection based on sensor data. MH algorithms have been utilized in complex engineering and optimization problems, including feature selection (FS). In this regard, this paper used nine MH algorithms as FS methods to boost the classification accuracy of HAR and fall detection applications. The applied MH algorithms were the Aquila optimizer (AO), arithmetic optimization algorithm (AOA), marine predators algorithm (MPA), artificial bee colony (ABC) algorithm, genetic algorithm (GA), slime mold algorithm (SMA), grey wolf optimizer (GWO), whale optimization algorithm (WOA), and particle swarm optimization (PSO) algorithm. First, we applied efficient preprocessing and segmentation methods to reveal the motion patterns and reduce the time complexity. Second, we developed a lightweight feature extraction technique using advanced deep learning approaches. The developed model, ResRNN, was composed of several building blocks from deep learning networks, including convolutional neural networks (CNN), residual networks, and bidirectional recurrent neural networks (BiRNN). Third, we applied the mentioned MH algorithms to select the optimal features and boost classification accuracy. Finally, support vector machine and random forest classifiers were employed to classify each activity in the multi-classification case and to detect fall and non-fall actions in the binary classification case. We used seven different and complex datasets for the multi-classification case: the PAMAP2, SisFall, UniMiB SHAR, OPPORTUNITY, WISDM, UCI-HAR, and KU-HAR datasets. In addition, we used the SisFall dataset for binary classification (fall detection). We compared the results of the nine MH optimization methods using different performance indicators and concluded that MH optimization algorithms show promising performance in HAR and fall detection applications.
(This article belongs to the Special Issue Wearable Sensing for Health Monitoring)
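A wrapper-style FS loop of the kind the nine metaheuristics implement can be sketched with a simple hill climber standing in for the metaheuristic: a binary mask selects features, and classifier accuracy is the fitness. The data here are synthetic, and none of the nine named algorithms is reproduced.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Synthetic HAR-style feature matrix: 5 informative columns out of 20.
X = rng.normal(0, 1, (200, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(int)

def fitness(mask):
    """Cross-validated accuracy of KNN on the selected feature subset."""
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# Hill-climbing stand-in for the metaheuristic search loop:
# flip one random feature bit, keep the change if fitness does not drop.
mask = rng.random(20) < 0.5
best = fitness(mask)
for _ in range(60):
    cand = mask.copy()
    cand[rng.integers(20)] ^= True
    f = fitness(cand)
    if f >= best:
        mask, best = cand, f
print(mask.sum(), round(best, 3))
```

A real metaheuristic (PSO, GA, GWO, ...) replaces the single-bit flip with its own population-based update rule, but the mask encoding and wrapper fitness are the same.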

24 pages, 14754 KB  
Article
HARNAS: Human Activity Recognition Based on Automatic Neural Architecture Search Using Evolutionary Algorithms
by Xiaojuan Wang, Xinlei Wang, Tianqi Lv, Lei Jin and Mingshu He
Sensors 2021, 21(20), 6927; https://doi.org/10.3390/s21206927 - 19 Oct 2021
Cited by 14 | Viewed by 3523
Abstract
Human activity recognition (HAR) based on wearable sensors is a promising research direction. The resources of handheld terminals and wearable devices limit the performance of recognition and require lightweight architectures. With the development of deep learning, the neural architecture search (NAS) has emerged in an attempt to minimize human intervention. We propose an approach for using NAS to search for models suitable for HAR tasks, namely, HARNAS. The multi-objective search algorithm NSGA-II is used as the search strategy of HARNAS. To make a trade-off between the performance and computation speed of a model, the F1 score and the number of floating-point operations (FLOPs) are selected, resulting in a bi-objective problem. However, the computation speed of a model not only depends on the complexity, but is also related to the memory access cost (MAC). Therefore, we expand the bi-objective search to a tri-objective strategy. We use the Opportunity dataset as the basis for most experiments and also evaluate the portability of the model on the UniMiB-SHAR dataset. The experimental results show that HARNAS, designed without manual adjustments, can achieve better performance than the best model tweaked by humans. HARNAS obtained an F1 score of 92.16% with a parameter size of 0.32 MB on the Opportunity dataset.
(This article belongs to the Special Issue AI-Enabled Advanced Sensing for Human Action and Activity Recognition)
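Two of the three search objectives, FLOPs and memory access cost, can be estimated analytically per layer rather than measured. A back-of-the-envelope sketch for a 1-D convolution, using common (assumed) counting conventions; the paper's exact cost model may differ.

```python
def conv1d_cost(length, c_in, c_out, k):
    """Per-layer cost terms of the kind traded off in the tri-objective search."""
    out_len = length - k + 1                      # no padding, stride 1
    params = c_in * c_out * k + c_out             # weights + biases
    flops = 2 * out_len * c_out * c_in * k        # each multiply-add = 2 FLOPs
    mac = (length * c_in) + params + (out_len * c_out)  # read input and
    return params, flops, mac                     # weights, write output

# A 1-D conv over a 128-sample, 3-axis accelerometer window with 16 filters:
params, flops, mac = conv1d_cost(128, 3, 16, k=5)
print(params, flops, mac)
```

Note how MAC and FLOPs diverge: a layer can be cheap in FLOPs yet expensive in memory traffic, which is exactly why the bi-objective search is expanded to three objectives.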

19 pages, 4640 KB  
Article
Margin-Based Deep Learning Networks for Human Activity Recognition
by Tianqi Lv, Xiaojuan Wang, Lei Jin, Yabo Xiao and Mei Song
Sensors 2020, 20(7), 1871; https://doi.org/10.3390/s20071871 - 27 Mar 2020
Cited by 30 | Viewed by 4437
Abstract
Human activity recognition (HAR) is a popular and challenging research topic, driven by a variety of applications. More recently, with significant progress in the development of deep learning networks for classification tasks, many researchers have made use of such models to recognise human activities in a sensor-based manner, achieving good performance. However, sensor-based HAR still faces challenges; in particular, recognising similar activities that differ only in their sequential order, and classifying activities with large inter-personal variability. This means that some human activities have large intra-class scatter and small inter-class separation. To deal with this problem, we introduce a margin mechanism to enhance the discriminative power of deep learning networks. We modified four kinds of common neural networks with our margin mechanism to test the effectiveness of the proposed method. The experimental results demonstrate that the margin-based models outperform the unmodified models on the OPPORTUNITY, UniMiB-SHAR, and PAMAP2 datasets. We also extend our research to the problem of open-set human activity recognition and evaluate the proposed method’s performance in recognising new human activities.
(This article belongs to the Section Intelligent Sensors)
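The abstract does not spell out the margin formulation, so the sketch below uses additive-margin softmax as a representative instance of a margin mechanism: subtracting a margin from the target logit forces the network to separate classes by more than the decision boundary alone requires, shrinking intra-class scatter.

```python
import numpy as np

def cross_entropy(logits, target):
    """Numerically stable softmax cross-entropy for one example."""
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[target])

def margin_cross_entropy(logits, target, m=0.35):
    """Additive margin: the target logit must beat the rest by at least m."""
    z = logits.copy()
    z[target] -= m                    # harder positive -> tighter clusters
    return cross_entropy(z, target)

logits = np.array([2.0, 0.5, -1.0])   # scores for 3 activity classes
plain = cross_entropy(logits, target=0)
margin = margin_cross_entropy(logits, target=0)
print(round(plain, 4), round(margin, 4))
```

The margin loss is always at least as large as the plain loss on correct predictions, so training pushes the target logit further from the others; the margin value 0.35 here is an arbitrary illustration.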

19 pages, 977 KB  
Article
Asymmetric Residual Neural Network for Accurate Human Activity Recognition
by Jun Long, Wuqing Sun, Zhan Yang and Osolo Ian Raymond
Information 2019, 10(6), 203; https://doi.org/10.3390/info10060203 - 6 Jun 2019
Cited by 46 | Viewed by 5800
Abstract
Human activity recognition (HAR) using deep neural networks has become a hot topic in human–computer interaction. Machines can effectively identify naturalistic human activities by learning from a large collection of sensor data. Activity recognition is not only an interesting research problem but also has many real-world practical applications. Building on the success of residual networks in automatically learning high-level feature representations, we propose a novel asymmetric residual network, named ARN. ARN is implemented using two path frameworks consisting of (1) a short time window, which is used to capture spatial features, and (2) a long time window, which is used to capture fine temporal features. The long-time-window path can be made very lightweight by reducing its channel capacity, while still being able to learn useful temporal representations for activity recognition. In this paper, we mainly focus on proposing a new model to improve the accuracy of HAR. To demonstrate the effectiveness of the ARN model, we carried out extensive experiments on benchmark datasets (i.e., OPPORTUNITY, UniMiB-SHAR) and compared the results with conventional and state-of-the-art learning-based methods. We discuss the influence of network parameters on performance to provide insights into its optimization. Results from our experiments show that ARN is effective in recognizing human activities from wearable-sensor datasets.
(This article belongs to the Special Issue Application of Artificial Intelligence in Sports)
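The two-window idea can be illustrated loosely with hand-crafted statistics standing in for each path's learned convolutional features (the real ARN paths are convolutional, and the long path saves compute by reducing channel capacity rather than by subsampling; everything below is an assumption for illustration).

```python
import numpy as np

rng = np.random.default_rng(6)
signal = rng.normal(0, 1, (250, 3))          # 5 s of tri-axial data at 50 Hz

def stats(x):
    """Cheap per-channel summary standing in for a learned feature map."""
    return np.concatenate([x.mean(axis=0), x.std(axis=0)])

# Asymmetric two-path idea: a short window at full resolution for spatial
# detail, and the long window summarised more cheaply for slower structure.
short_feat = stats(signal[-50:])             # last 1 s, full resolution
long_feat = stats(signal[::5])               # whole 5 s, subsampled 5x
fused = np.concatenate([short_feat, long_feat])
print(fused.shape)
```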

22 pages, 2627 KB  
Article
Comparison of Feature Learning Methods for Human Activity Recognition Using Wearable Sensors
by Frédéric Li, Kimiaki Shirahama, Muhammad Adeel Nisar, Lukas Köping and Marcin Grzegorzek
Sensors 2018, 18(2), 679; https://doi.org/10.3390/s18020679 - 24 Feb 2018
Cited by 282 | Viewed by 17203
Abstract
Getting a good feature representation of data is paramount for Human Activity Recognition (HAR) using wearable sensors. An increasing number of feature learning approaches—in particular deep-learning based—have been proposed to extract an effective feature representation by analyzing large amounts of data. However, getting an objective interpretation of their performances faces two problems: the lack of a baseline evaluation setup, which makes a strict comparison between them impossible, and the insufficiency of implementation details, which can hinder their use. In this paper, we attempt to address both issues: we firstly propose an evaluation framework allowing a rigorous comparison of features extracted by different methods, and use it to carry out extensive experiments with state-of-the-art feature learning approaches. We then provide all the codes and implementation details to make both the reproduction of the results reported in this paper and the re-use of our framework easier for other researchers. Our studies carried out on the OPPORTUNITY and UniMiB-SHAR datasets highlight the effectiveness of hybrid deep-learning architectures involving convolutional and Long Short-Term Memory (LSTM) layers to obtain features characterising both short- and long-term time dependencies in the data.
(This article belongs to the Section Intelligent Sensors)
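A baseline evaluation setup of the kind proposed here starts from a fixed sliding-window segmentation, so every feature learning method sees identical inputs. A sketch with assumed window and step sizes:

```python
import numpy as np

def sliding_windows(sig, size, step):
    """Segment a (T, C) recording into overlapping (size, C) windows."""
    starts = range(0, len(sig) - size + 1, step)
    return np.stack([sig[s:s + size] for s in starts])

rng = np.random.default_rng(7)
recording = rng.normal(0, 1, (1000, 3))      # 20 s of tri-axial data at 50 Hz
wins = sliding_windows(recording, size=128, step=64)  # 50% overlap
print(wins.shape)
```

Fixing the segmentation (and the train/test split) is what makes scores from different feature extractors directly comparable, which is the framework's central point.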

19 pages, 541 KB  
Article
UniMiB SHAR: A Dataset for Human Activity Recognition Using Acceleration Data from Smartphones
by Daniela Micucci, Marco Mobilio and Paolo Napoletano
Appl. Sci. 2017, 7(10), 1101; https://doi.org/10.3390/app7101101 - 24 Oct 2017
Cited by 527 | Viewed by 25652
Abstract
Smartphones, smartwatches, fitness trackers, and ad hoc wearable devices are being increasingly used to monitor human activities. Data acquired by the hosted sensors are usually processed by machine-learning-based algorithms to classify human activities. The success of those algorithms mostly depends on the availability of training (labeled) data that, if made publicly available, would allow researchers to make objective comparisons between techniques. Nowadays, only a few data sets are publicly available; they often contain samples from subjects with too-similar characteristics and very often lack the specific information that would make it possible to select subsets of samples according to specific criteria. In this article, we present a new dataset of acceleration samples acquired with an Android smartphone and designed for human activity recognition and fall detection. The dataset includes 11,771 samples of both human activities and falls performed by 30 subjects aged 18 to 60 years. Samples are divided into 17 fine-grained classes grouped in two coarse-grained classes: one containing samples of 9 types of activities of daily living (ADL) and the other containing samples of 8 types of falls. The dataset has been stored so as to include all the information useful for selecting samples according to different criteria, such as the type of ADL performed, age, gender, and so on. Finally, the dataset has been benchmarked with four different classifiers and two different feature vectors. We evaluated four different classification tasks: fall vs. no fall, 9 activities, 8 falls, and 17 activities and falls. For each classification task, we performed a 5-fold cross-validation (i.e., including samples from all the subjects in both the training and the test dataset) and a leave-one-subject-out cross-validation (i.e., the test data include the samples of one subject only, and the training data include the samples of all the other subjects). Regarding the classification tasks, the major findings can be summarized as follows: (i) it is quite easy to distinguish between falls and ADLs, regardless of the classifier and the feature vector selected, because these classes of activities present quite different acceleration shapes that simplify the recognition task; (ii) on average, it is more difficult to distinguish between types of falls than between types of activities, regardless of the classifier and the feature vector selected, owing to the similarity between the acceleration shapes of different kinds of falls, whereas the acceleration shapes of ADLs present differences except for a small group. Finally, the evaluation shows that the presence of samples of the same subject in both the training and the test datasets increases the performance of the classifiers regardless of the feature vector used. This happens because each human subject differs from other subjects in performing activities, even when they share the same physical characteristics.
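The two evaluation protocols described above map directly onto standard scikit-learn splitters. A sketch on synthetic stand-in data (not the actual dataset files, which must be downloaded separately):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, StratifiedKFold

rng = np.random.default_rng(8)
n = 600
subjects = rng.integers(0, 30, n)            # 30 subjects, as in the dataset
y = rng.integers(0, 2, n)                    # fall vs. no fall
X = rng.normal(0, 1, (n, 5))                 # placeholder feature vectors

# Leave-one-subject-out: every fold tests on one unseen subject,
# so no subject's samples leak between train and test.
loso = LeaveOneGroupOut()
print(loso.get_n_splits(X, y, groups=subjects))   # one fold per subject

# 5-fold CV mixes each subject's samples across train and test, which is
# why it reports higher scores than LOSO, as the evaluation above notes.
kfold = StratifiedKFold(n_splits=5)
print(kfold.get_n_splits())
```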
