Sensor-Based Human Activity Recognition in Real-World Scenarios

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 April 2022) | Viewed by 33191

Special Issue Editors

Guest Editor
School of Computer Science, University of St Andrews, North Haugh, St Andrews, Fife, UK
Interests: human activity recognition; sensor data analysis; smart environments; context awareness; uncertainty reasoning; temporal reasoning; ontologies; mobile computing; wearable assistive technologies; multi-sensory integration

Guest Editor
Department of Computer Science, University of Milan, Milan, Italy
Interests: context awareness; sensor-based human activity recognition; smart homes; wearable systems; hybrid data-driven and knowledge-based reasoning methods for pervasive computing

Special Issue Information

Dear Colleagues,

As we have witnessed over the last few decades, more and more smart homes, wearable-based systems, and real-world testbeds are emerging, showing promising value in applications such as healthcare, wellbeing, and smart environments. For example, care homes are deploying sensorized assisted living platforms to identify elderly residents’ daily routines and provide them with personalised care services, and the smart home industry is pursuing energy-efficient solutions that configure air purification or heating based on detected human behaviours. One of the core enabling technologies underlying these applications is sensor-based human activity recognition, which infers high-level activities from low-level sensor data to support context-aware applications. The ability to correctly identify and predict users’ activities underpins the success of these applications.

Studying human behaviours using unobtrusive sensors (including environmental and/or wearable sensors) is a popular research area, and a large number of data- and knowledge-driven techniques have been proposed. However, developing robust human activity recognition systems for long-term, real-world deployments still faces many research challenges, including the lack of high-quality labelled data, the need for continual learning, the emergence of new activities, and privacy concerns. This Special Issue serves as a forum for researchers and practitioners to present their latest research findings and engineering experiences from empirical studies, including novel techniques for activity recognition in real-world scenarios.

Dr. Juan Ye
Dr. Gabriele Civitarese
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • semi-supervised learning
  • weakly supervised learning
  • unsupervised learning
  • collaborative learning
  • federated learning
  • continual learning
  • data augmentation for activity recognition
  • transfer learning
  • privacy-aware activity recognition
  • knowledge-based reasoning
  • hybrid knowledge- and data-driven activity recognition
  • novel sensing technologies for activity recognition
  • novel datasets for activity recognition
  • novel applications for activity recognition

Published Papers (9 papers)


Research


20 pages, 946 KiB  
Article
Marfusion: An Attention-Based Multimodal Fusion Model for Human Activity Recognition in Real-World Scenarios
by Yunhan Zhao, Siqi Guo, Zeqi Chen, Qiang Shen, Zhengyuan Meng and Hao Xu
Appl. Sci. 2022, 12(11), 5408; https://doi.org/10.3390/app12115408 - 26 May 2022
Cited by 4 | Viewed by 1854
Abstract
Human activity recognition (HAR) plays an important role in the field of ubiquitous computing and can benefit various human-centric applications such as smart homes, health monitoring, and aging systems. HAR typically leverages smartphones and wearable devices to collect sensory signals labeled with activity annotations and trains machine learning models to recognize individuals’ activities automatically. Deploying HAR models in real-world scenarios, however, faces two major barriers. First, sensor data and activity labels are traditionally collected using special experimental equipment in a controlled environment, so models trained on these datasets may generalize poorly to real-life scenarios. Second, existing studies focus on a single modality, or only a few modalities, of sensor readings, neglecting the useful information and inter-modality relations present in multimodal sensor data. To tackle these issues, we propose Marfusion, a novel activity recognition model for multimodal sensory data fusion, and MarSense, an experimental data collection platform for HAR tasks in real-world scenarios. Specifically, Marfusion extensively uses a convolutional structure to extract sensory features for each smartphone sensor modality and then fuses the multimodal features using an attention mechanism. MarSense can automatically collect a large amount of smartphone sensor data from multiple users under natural usage conditions and environments. To evaluate the proposed platform and model, we conducted a data collection experiment with university students in real life and then compared Marfusion with several state-of-the-art models on the collected datasets. The experimental results not only indicate that the proposed platform successfully collected HAR data in a real-world scenario, but also verify the advantages of Marfusion over existing HAR models.
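
The attention-based fusion idea can be sketched compactly: each modality gets its own convolutional encoder, and the resulting feature vectors are combined by a softmax-weighted sum before classification. The following is a minimal sketch of that pattern, not the authors’ Marfusion implementation; layer sizes and names such as `ModalityEncoder` are illustrative assumptions.

```python
# Minimal sketch of attention-based multimodal fusion (PyTorch).
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """1-D CNN turning one sensor modality (assumed 3-axis) into a
    fixed-size feature vector."""
    def __init__(self, in_channels: int = 3, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, feat_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),      # (B, feat_dim, 1)
        )

    def forward(self, x):                  # x: (B, in_channels, T)
        return self.net(x).squeeze(-1)     # (B, feat_dim)

class AttentionFusion(nn.Module):
    """Scores each modality's feature vector and fuses them as a
    softmax-weighted sum before classification."""
    def __init__(self, n_modalities: int, feat_dim: int, n_classes: int):
        super().__init__()
        self.encoders = nn.ModuleList(
            [ModalityEncoder(3, feat_dim) for _ in range(n_modalities)]
        )
        self.score = nn.Linear(feat_dim, 1)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, inputs):             # list of (B, 3, T) tensors
        feats = torch.stack(
            [enc(x) for enc, x in zip(self.encoders, inputs)], dim=1
        )                                  # (B, M, feat_dim)
        weights = torch.softmax(self.score(feats), dim=1)  # (B, M, 1)
        fused = (weights * feats).sum(dim=1)               # (B, feat_dim)
        return self.classifier(fused)
```

In use, each element of `inputs` would be one windowed smartphone sensor stream (e.g., accelerometer or gyroscope), so the attention weights indicate how much each modality contributes to a given prediction.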

22 pages, 3870 KiB  
Article
UCA-EHAR: A Dataset for Human Activity Recognition with Embedded AI on Smart Glasses
by Pierre-Emmanuel Novac, Alain Pegatoquet, Benoît Miramond and Christophe Caquineau
Appl. Sci. 2022, 12(8), 3849; https://doi.org/10.3390/app12083849 - 11 Apr 2022
Cited by 9 | Viewed by 2583
Abstract
Human activity recognition can help in elderly care by monitoring the physical activities of a subject and identifying a degradation in physical abilities. Vision-based approaches require setting up cameras in the environment, while most body-worn sensor approaches can be a burden on the elderly due to the need to wear additional devices. Another solution is to use smart glasses, a much less intrusive device that also leverages the fact that the elderly often already wear glasses. In this article, we propose UCA-EHAR, a novel dataset for human activity recognition using smart glasses, which addresses the lack of usable smart-glasses data for human activity recognition. The data were collected from a gyroscope, an accelerometer, and a barometer embedded in smart glasses, with 20 subjects performing 8 different activities (STANDING, SITTING, WALKING, LYING, WALKING_DOWNSTAIRS, WALKING_UPSTAIRS, RUNNING, and DRINKING). Results of the classification task are provided using a residual neural network. Additionally, the neural network is quantized and deployed on the smart glasses using the open-source MicroAI framework in order to provide a live human activity recognition application based on our dataset. Power consumption during live inference on the smart glasses’ microcontroller is also analysed.
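
For readers unfamiliar with residual networks on inertial data, the sketch below shows the general shape of such a classifier for windows of the seven sensor channels named above (3-axis gyroscope, 3-axis accelerometer, barometer). It is an illustrative assumption, not the paper’s reference architecture.

```python
# Minimal 1-D residual network for IMU windows (PyTorch).
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        y = self.relu(self.conv1(x))
        y = self.conv2(y)
        return self.relu(x + y)            # identity shortcut

class GlassesHAR(nn.Module):
    def __init__(self, n_classes: int = 8):
        super().__init__()
        self.stem = nn.Conv1d(7, 32, 5, padding=2)   # 7 sensor channels
        self.blocks = nn.Sequential(ResBlock1d(32), ResBlock1d(32))
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                  # x: (B, 7, T)
        y = self.blocks(self.stem(x))
        return self.head(y.mean(dim=-1))   # global average pooling
```

For microcontroller deployment, the trained weights would then typically be quantized to low-bit-width integers and converted to embedded C, the step that a framework such as MicroAI automates.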

20 pages, 6696 KiB  
Article
Human Activity Signatures Captured under Different Directions Using SISO and MIMO Radar Systems
by Sahil Waqar, Muhammad Muaaz and Matthias Pätzold
Appl. Sci. 2022, 12(4), 1825; https://doi.org/10.3390/app12041825 - 10 Feb 2022
Cited by 7 | Viewed by 2027
Abstract
In this paper, we highlight and resolve the shortcomings of single-input single-output (SISO) millimeter wave (mm-Wave) radar systems for human activity recognition (HAR). A 2×2 distributed multiple-input multiple-output (MIMO) radar framework is presented to capture human activity signatures under realistic conditions in indoor environments. We propose to distribute the two pairs of collocated transmitter–receiver antennas in order to illuminate the indoor environment from different perspectives. For the proposed MIMO system, we measure the time-variant (TV) radial velocity distribution and the TV mean radial velocity to observe the signatures of human activities. We deploy the Ancortek SDR-KIT 2400T2R4 mm-Wave radar in a SISO as well as in a 2×2 distributed MIMO configuration, and corroborate the limitations of SISO configurations by recording real human activities performed in different directions. It is shown that, unlike the SISO radar configuration, the proposed MIMO configuration obtains superior human activity signatures in all directions. To demonstrate the importance of the proposed 2×2 MIMO radar system, we compare the performance of a SISO radar-based passive step counter with a distributed MIMO radar-based passive step counter. As the proposed 2×2 MIMO radar system is able to detect human activity in all directions, it fills a research gap in radio frequency (RF)-based HAR systems.
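
The TV mean radial velocity used above is essentially the first moment of the Doppler spectrogram after mapping Doppler frequency to radial velocity via v = c·f_D/(2·f_c). A minimal sketch, assuming a 24 GHz carrier and illustrative STFT settings:

```python
# Sketch: time-variant mean radial velocity from complex radar samples.
import numpy as np
from scipy.signal import stft

C = 3e8    # speed of light (m/s)
FC = 24e9  # assumed radar carrier frequency (24 GHz)

def tv_mean_radial_velocity(iq: np.ndarray, fs: float):
    """iq: complex baseband samples; fs: sampling rate (Hz).
    Returns the time axis and the TV mean radial velocity."""
    f, t, S = stft(iq, fs=fs, nperseg=256, noverlap=192,
                   return_onesided=False)
    f = np.fft.fftshift(f)                        # ascending Doppler axis
    P = np.abs(np.fft.fftshift(S, axes=0)) ** 2   # TV Doppler power
    v = C * f / (2 * FC)                          # Doppler shift -> velocity
    mean_v = (v[:, None] * P).sum(axis=0) / (P.sum(axis=0) + 1e-12)
    return t, mean_v
```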

31 pages, 3650 KiB  
Article
A Mini-Survey and Feasibility Study of Deep-Learning-Based Human Activity Recognition from Slight Feature Signals Obtained Using Privacy-Aware Environmental Sensors
by Hirokazu Madokoro, Stephanie Nix, Hanwool Woo and Kazuhito Sato
Appl. Sci. 2021, 11(24), 11807; https://doi.org/10.3390/app112411807 - 12 Dec 2021
Cited by 4 | Viewed by 2487
Abstract
Numerous methods and applications have been proposed for human activity recognition (HAR). This paper presents a mini-survey of recent HAR studies together with two benchmark datasets that we originally developed using environmental sensors. For the first dataset, we specifically examine human pose estimation and slight-motion recognition related to activities of daily living (ADL). Our proposed method employs OpenPose to describe feature vectors that are unaffected by objects or scene features, and a convolutional neural network (CNN) with a VGG-16 backbone that recognizes behavior patterns after the obtained images are split into learning and verification subsets. The first dataset comprises time-series panoramic images obtained using a monocular fisheye-lens camera with a wide field of view. We attempted to recognize five behavior patterns: eating, reading, operating a smartphone, operating a laptop computer, and sitting. Even with panoramic images that include distortions, the results demonstrate the capability of recognizing properties and characteristics of slight motions and pose-based behavior patterns. The second dataset was obtained using five environmental sensors: a thermopile sensor, a CO2 sensor, and air pressure, humidity, and temperature sensors. Our proposed sensor system obviates physical constraints on subjects and preserves each subject’s privacy. Using a long short-term memory (LSTM) network combined with a CNN, a deep learning model for time-series features, we recognized eight behavior patterns: eating, operating a laptop computer, operating a smartphone, playing a game, reading, exiting, taking a nap, and sitting. The recognition accuracy on the second dataset was lower than on the first, image-based dataset, but we demonstrated that behavior patterns can be recognized from time series of weak sensor signals. After accuracy evaluation, the recognition results for the first dataset can be reused as automatically annotated labels for the second dataset. Our proposed method thus actualizes semi-automatic annotation, detection of falsely recognized categories, and sensor calibration. The feasibility study results show new possibilities for ADL-oriented HAR based on these two unique sensor types.
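
A common way to combine a CNN with an LSTM for multichannel environmental time series, as in the second dataset, is to let the convolution extract local features and the LSTM summarize their temporal evolution. The sketch below illustrates that pattern; channel counts and layer sizes are assumptions, not the paper’s configuration.

```python
# Sketch of a CNN + LSTM classifier for 5-channel environmental
# time series (thermopile, CO2, pressure, humidity, temperature).
import torch
import torch.nn as nn

class CnnLstmHAR(nn.Module):
    def __init__(self, n_channels: int = 5, n_classes: int = 8):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64,
                            batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):             # x: (B, n_channels, T)
        y = self.cnn(x)               # (B, 32, T//2)
        y = y.transpose(1, 2)         # (B, T//2, 32) for the LSTM
        _, (h, _) = self.lstm(y)
        return self.head(h[-1])       # last hidden state -> logits
```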

14 pages, 6308 KiB  
Article
Using Homemade Pressure Device to Improve Plantar Pressure—A Case Study on the Patient with Lower Limb Lymphedema
by Jong-Chen Chen, Yao-Te Wang and Ying-Sheng Lin
Appl. Sci. 2021, 11(20), 9629; https://doi.org/10.3390/app11209629 - 15 Oct 2021
Viewed by 1125
Abstract
Feet play an important and indispensable role in people’s lives. Patients with lymphedema often suffer from collapsed (or even deformed) foot arches as a result of lower-extremity edema. This changes the normal pressure distribution on the soles of their feet, affecting their mobility and physical health, and when the patient is unaware that the plantar pressure distribution has changed significantly, the deformation of the sole can become severe. In response to this problem, our team used a set of homemade sensor insoles to help understand plantar pressure points in different situations and actions. The subject in this study was a patient with lower-extremity edema. The entire study was carried out with the consent of the patient, under the guidance of the physician, and with the approval of the Ethics Committee of National Taiwan University Hospital (No: 201805068 RINB, date: 18 June 2018). Using the homemade sensor insole, we analyzed the patient’s plantar pressure distribution before and after surgery for lower-extremity edema. The results show that the operation effectively relieved the high pressure in the central and rear foot regions during different movements (standing, walking, and biking). This not only increased the patient’s stability when standing and walking but also significantly improved walking speed and step distance.
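
Analyses of this kind typically reduce each insole frame to per-region averages before comparing pre- and post-operative sessions. A minimal sketch of that reduction, assuming a hypothetical 3×10 sensor grid (the authors’ actual hardware layout is not specified here):

```python
# Sketch: summarising an insole frame into forefoot/midfoot/rearfoot
# averages. Grid shape and ADC scale are illustrative assumptions.
import numpy as np

REGIONS = {"forefoot": slice(0, 4), "midfoot": slice(4, 7),
           "rearfoot": slice(7, 10)}

def regional_pressure(frame: np.ndarray) -> dict:
    """frame: (3, 10) raw sensor values, toe row 0 .. heel row 9."""
    return {name: float(frame[:, rows].mean())
            for name, rows in REGIONS.items()}

# Example: pre- vs post-operative difference per region.
pre = np.random.randint(200, 900, size=(3, 10))
post = np.random.randint(100, 600, size=(3, 10))
print({k: regional_pressure(pre)[k] - regional_pressure(post)[k]
       for k in REGIONS})
```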

29 pages, 3080 KiB  
Article
Human Activity Recognition Using CSI Information with Nexmon
by Jörg Schäfer, Baldev Raj Barrsiwal, Muyassar Kokhkharova, Hannan Adil and Jens Liebehenschel
Appl. Sci. 2021, 11(19), 8860; https://doi.org/10.3390/app11198860 - 23 Sep 2021
Cited by 29 | Viewed by 7978
Abstract
In the Wi-Fi IEEE 802.11 standard, radio frequency waves are mainly used for communication on various devices such as mobile phones, laptops, and smart televisions. Apart from communication applications, recent research in wireless technology has turned Wi-Fi towards other possibilities such as human activity recognition (HAR). HAR is a field of study that aims to predict motion and movement made by a person, or even by several people. There are numerous possibilities for Wi-Fi-based HAR in human-centric applications such as intelligent surveillance, including human fall detection in the healthcare sector or in nursing homes for the elderly, smart homes for temperature control, light control applications, and motion detection applications. The focal point of this paper is to classify human activities such as EMPTY, LYING, SIT, SIT-DOWN, STAND, STAND-UP, WALK, and FALL with long short-term memory (LSTM) deep neural networks and support vector machines (SVMs). Special care was taken to address practical issues such as using available commodity hardware. Therefore, the open-source tool Nexmon was used for channel state information (CSI) extraction on inexpensive hardware (Raspberry Pi 3B+, Pi 4B, and Asus RT-AC86U routers). We conducted three different types of experiments using different algorithms, all demonstrating similar prediction accuracy for HAR: between 97% and 99.7% (Raspberry Pi) and between 96.2% and 100% (Asus RT-AC86U) for the best models, which is superior to previously published results. We also provide the acquired datasets and disclose details of the experimental setups.
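
Classifying CSI windows with an LSTM follows the usual sequence-classification pattern, where each time step is the vector of per-subcarrier amplitudes. A minimal sketch; the subcarrier count and model sizes are assumptions rather than the paper’s setup.

```python
# Sketch: LSTM over windows of CSI amplitudes (PyTorch).
import torch
import torch.nn as nn

N_SUBCARRIERS = 52   # assumed usable subcarriers per CSI frame
N_CLASSES = 8        # EMPTY, LYING, SIT, SIT-DOWN, STAND, STAND-UP,
                     # WALK, FALL

class CsiLstm(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(N_SUBCARRIERS, 128, num_layers=2,
                            batch_first=True, dropout=0.2)
        self.head = nn.Linear(128, N_CLASSES)

    def forward(self, x):          # x: (B, T, N_SUBCARRIERS) amplitudes
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])    # last hidden state -> class logits
```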

28 pages, 4724 KiB  
Article
The Use of Transfer Learning for Activity Recognition in Instances of Heterogeneous Sensing
by Netzahualcoyotl Hernandez-Cruz, Chris Nugent, Shuai Zhang and Ian McChesney
Appl. Sci. 2021, 11(16), 7660; https://doi.org/10.3390/app11167660 - 20 Aug 2021
Cited by 3 | Viewed by 2376
Abstract
Transfer learning is a growing field that can address the variability of activity recognition problems by reusing knowledge from previous experiences to recognise activities under different conditions, thereby saving resources such as training and labelling effort. Although integrating ubiquitous sensing technology and transfer learning seems promising, there are research opportunities that, if addressed, could accelerate the development of activity recognition. This paper presents TL-FmRADLs, a framework that combines a feature fusion strategy with a teacher/learner approach based on active learning to automate the self-training process of the learner models. TL-FmRADLs is evaluated on InSync, an open-access dataset introduced for the first time in this paper. Results show promising effects towards mitigating the scarcity of labelled data by enabling the learner model to outperform the teacher’s performance.
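
The teacher/learner self-training idea can be pictured as a loop in which the current model pseudo-labels the unlabelled samples it is most confident about, and a new model is retrained on the enlarged pool. The sketch below is a generic illustration under assumed thresholds and models, not the TL-FmRADLs implementation.

```python
# Sketch of a teacher/learner self-training loop (scikit-learn).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def teacher_learner_loop(X_lab, y_lab, X_unlab, rounds=5, conf=0.9):
    """Pseudo-label high-confidence unlabelled samples, then retrain."""
    model = RandomForestClassifier(random_state=0).fit(X_lab, y_lab)
    X_pool, y_pool = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        proba = model.predict_proba(X_unlab)
        confident = proba.max(axis=1) >= conf
        if not confident.any():
            break  # nothing the teacher trusts enough to label
        pseudo = model.classes_[proba[confident].argmax(axis=1)]
        X_pool = np.vstack([X_pool, X_unlab[confident]])
        y_pool = np.concatenate([y_pool, pseudo])
        X_unlab = X_unlab[~confident]
        model = RandomForestClassifier(random_state=0).fit(X_pool, y_pool)
    return model
```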

11 pages, 3585 KiB  
Article
TickPhone App: A Smartphone Application for Rapid Tick Identification Using Deep Learning
by Zhiheng Xu, Xiong Ding, Kun Yin, Ziyue Li, Joan A. Smyth, Maureen B. Sims, Holly A. McGinnis and Changchun Liu
Appl. Sci. 2021, 11(16), 7355; https://doi.org/10.3390/app11167355 - 10 Aug 2021
Cited by 1 | Viewed by 3311
Abstract
Ticks are considered the second leading vector of human diseases. Different ticks can transmit a variety of pathogens that cause various tick-borne diseases (TBDs), such as Lyme disease. Lyme disease currently remains challenging to diagnose because of its non-specific symptoms. Rapid and accurate identification of tick species therefore plays an important role in predicting the potential disease risk for tick-bitten patients and ensuring timely and effective treatment. Here, we developed, optimized, and tested a smartphone-based deep learning algorithm (termed the “TickPhone app”) for tick identification. The deep learning model was trained on more than 2000 tick images and optimized over different parameters, including image sizes, deep learning architectures, image styles, and training–testing dataset distributions. The optimized model achieved a training accuracy of ~90% and a validation accuracy of ~85%. The TickPhone app was used to identify 31 independent tick species and achieved an accuracy of 95.69%. Such a simple and easy-to-use app shows great potential to support epidemiological estimation of tick-borne disease risk, help healthcare providers better predict potential disease risk for tick-bitten patients, and ultimately enable timely and effective medical treatment.
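
For small image datasets such as this, the standard recipe is to fine-tune an ImageNet-pretrained backbone. A minimal sketch of that recipe; the backbone choice, class count, and hyperparameters are assumptions, not the TickPhone training configuration.

```python
# Sketch: fine-tuning a pretrained MobileNetV2 for tick classification.
import torch
import torch.nn as nn
from torchvision import models

N_CLASSES = 3  # assumed number of tick classes

model = models.mobilenet_v2(weights="IMAGENET1K_V1")
for p in model.features.parameters():
    p.requires_grad = False                      # freeze the conv backbone
model.classifier[1] = nn.Linear(model.last_channel, N_CLASSES)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# Training then iterates over (images, labels) batches:
#   loss = criterion(model(images), labels); loss.backward(); optimizer.step()
```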

Review


15 pages, 1247 KiB  
Review
A Survey of IoT-Based Fall Detection for Aiding Elderly Care: Sensors, Methods, Challenges and Future Trends
by Mohamed Esmail Karar, Hazem Ibrahim Shehata and Omar Reyad
Appl. Sci. 2022, 12(7), 3276; https://doi.org/10.3390/app12073276 - 23 Mar 2022
Cited by 28 | Viewed by 7688
Abstract
Remote monitoring of fall conditions or activities of daily living (ADL) in elderly patients has become one of the essential purposes of modern telemedicine. Internet of Things (IoT) and artificial intelligence (AI) techniques, including machine learning and deep learning models, have recently been applied in the medical field to automate the diagnosis of abnormal and diseased cases. They also have many other applications, including the real-time identification of fall accidents in elderly patients. The goal of this article is to review recent research focused on developing AI algorithms and methods for fall detection systems (FDSs) in the IoT environment. In addition, the usability of different sensor types, such as gyroscopes and accelerometers in smartwatches, is described and discussed together with the current limitations and challenges of realizing successful FDSs. The limited availability of public fall datasets for evaluating proposed detection algorithms is also addressed. Finally, the article concludes by proposing advanced techniques, such as lightweight deep models, as one solution and prospect for future smart IoT-enabled systems for accurate fall detection in the elderly.
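
Many of the wearable fall detectors surveyed build on a simple kinematic heuristic, which machine and deep learning models then refine: a free-fall dip in acceleration magnitude followed shortly by an impact spike. A minimal sketch of that heuristic with illustrative thresholds:

```python
# Sketch: threshold-based fall detection on accelerometer windows.
import numpy as np

FREE_FALL_G = 0.4   # assumed threshold (in g) for the free-fall dip
IMPACT_G = 2.5      # assumed threshold (in g) for the impact spike
MAX_GAP = 50        # max samples between dip and spike (~0.5 s at 100 Hz)

def detect_fall(acc: np.ndarray) -> bool:
    """acc: (T, 3) accelerometer samples in units of g."""
    mag = np.linalg.norm(acc, axis=1)          # acceleration magnitude
    dips = np.flatnonzero(mag < FREE_FALL_G)   # candidate free-fall samples
    spikes = np.flatnonzero(mag > IMPACT_G)    # candidate impact samples
    # A fall is a dip followed by a spike within MAX_GAP samples.
    return any(0 < s - d <= MAX_GAP for d in dips for s in spikes)
```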
