Human Activity Recognition (HAR) in Healthcare, 2nd Edition

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Applied Biosciences and Bioengineering".

Deadline for manuscript submissions: 30 March 2025 | Viewed by 13,508

Special Issue Editors


Dr. Luigi Bibbò
Guest Editor
Department of Civil, Energy, Environmental and Materials Engineering (DICEAM), Mediterranean University of Reggio Calabria, Reggio Calabria, Italy
Interests: biomedical signal processing and sensors; photonics; optical fibers; MEMS; metamaterials; nanotechnology; artificial intelligence; neural networks; virtual reality; augmented reality; indoor navigation

Prof. Dr. J. Artur Serrano
Guest Editor
Department of Neuromedicine and Movement Science, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology (NTNU), 7491 Trondheim, Norway
Interests: medical informatics applications; eHealth; social media; learning

Special Issue Information

Dear Colleagues,

Technological advances, including those in the medical field, have improved patients' quality of life. These advances have contributed to a growing elderly population with a greater demand for healthcare, a demand that is difficult to meet because caregivers are expensive and scarce. Progress in artificial intelligence, wireless communication systems, and nanotechnology makes it possible to build intelligent human health monitoring systems that avoid hospitalization, with clear cost containment. Human activity recognition (HAR), especially approaches based on data collected by sensors or on images captured by cameras, is fundamental to such health monitoring systems. These systems can provide activity recognition, vital sign monitoring, traceability, fall detection with safety alarms, and cognitive assistance. The rapid development of the Internet of Things (IoT) supports research on a wide range of automated and interconnected solutions that improve the quality of life and independence of older people. With IoT, it is possible to create innovative solutions in ambient intelligence (AmI) and ambient assisted living (AAL).

Dr. Luigi Bibbò
Prof. Dr. J. Artur Serrano
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human activity recognition
  • machine learning
  • wearable sensor
  • Internet of Things
  • ambient assisted living
  • ambient intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)

Research

14 pages, 1413 KiB  
Article
Enhanced Speech Emotion Recognition Using Conditional-DCGAN-Based Data Augmentation
by Kyung-Min Roh and Seok-Pil Lee
Appl. Sci. 2024, 14(21), 9890; https://doi.org/10.3390/app14219890 - 29 Oct 2024
Viewed by 623
Abstract
With the advancement of Artificial Intelligence (AI) and the Internet of Things (IoT), research in the field of emotion detection and recognition has been actively conducted worldwide in modern society. Among this research, speech emotion recognition has gained increasing importance in various areas of application such as personalized services, enhanced security, and the medical field. However, subjective emotional expressions in voice data can be perceived differently by individuals, and issues such as data imbalance and limited datasets fail to provide the diverse situations necessary for model training, thus limiting performance. To overcome these challenges, this paper proposes a novel data augmentation technique using Conditional-DCGAN, which combines CGAN and DCGAN. This study analyzes the temporal signal changes using Mel-spectrograms extracted from the Emo-DB dataset and applies a loss function calculation method borrowed from reinforcement learning to generate data that accurately reflects emotional characteristics. To validate the proposed method, experiments were conducted using a model combining CNN and Bi-LSTM. The results, including augmented data, achieved significant performance improvements, reaching WA 91.46% and UAR 91.61%, compared to using only the original data (WA 79.31%, UAR 78.16%). These results outperform similar previous studies, such as those reporting WA 84.49% and UAR 83.33%, demonstrating the positive effects of the proposed data augmentation technique. This study presents a new data augmentation method that enables effective learning even in situations with limited data, offering a progressive direction for research in speech emotion recognition.
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 2nd Edition)
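
As a concrete illustration of the label-conditioned generation step described in the abstract, here is a minimal PyTorch sketch of a conditional DCGAN generator that emits Mel-spectrogram-like patches, one per Emo-DB emotion class. The layer sizes, the 64×64 patch shape, and the embedding dimension are assumptions for illustration, not the authors' published architecture.

```python
# Sketch: label-conditioned DCGAN generator for 64x64 mel-spectrogram
# patches. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    def __init__(self, n_emotions=7, z_dim=100, emb_dim=50):  # Emo-DB has 7 emotions
        super().__init__()
        self.label_emb = nn.Embedding(n_emotions, emb_dim)
        self.net = nn.Sequential(
            # project (noise + label embedding) up to a 4x4 feature map,
            # then upsample 4 -> 8 -> 16 -> 32 -> 64
            nn.ConvTranspose2d(z_dim + emb_dim, 512, 4, 1, 0), nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh(),  # 1-channel spectrogram patch
        )

    def forward(self, z, labels):
        cond = torch.cat([z, self.label_emb(labels)], dim=1)
        return self.net(cond.unsqueeze(-1).unsqueeze(-1))

# Usage: one synthetic spectrogram per emotion class.
g = CondGenerator()
fake = g(torch.randn(7, 100), torch.arange(7))  # shape: (7, 1, 64, 64)
```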

15 pages, 5549 KiB  
Article
Thermal Threat Monitoring Using Thermal Image Analysis and Convolutional Neural Networks
by Mariusz Marzec and Sławomir Wilczyński
Appl. Sci. 2024, 14(19), 8878; https://doi.org/10.3390/app14198878 - 2 Oct 2024
Viewed by 732
Abstract
Monitoring of the vital signs or environment of disabled people is currently very popular because it increases their safety, improves their quality of life and facilitates remote care. The article proposes a system for automatic protection against burns, based on the detection of thermal threats, intended for blind or visually impaired people. Deep learning methods and CNNs were used to analyze images recorded by mobile thermal cameras. The proposed algorithm analyzes thermal images covering the field of view of a user for the presence of objects with high or very high temperatures. If the user’s hand appears in such an area, the procedure warning about the possibility of burns is activated and the algorithm generates an alarm. To achieve this effect, the thermal images were analyzed using the 15-layer convolutional neural network proposed in the article. The proposed solution detected threat situations with over 99% efficiency on a set of more than 21,000 images. Tests were carried out for various network configurations and architectures; both the accuracy and precision of hand detection were 99.5%, whereas sensitivity reached 99.7%. The effectiveness of burn risk detection, i.e., a hot object and the hand appearing simultaneously in the image, was 99.7%. The presented method allows for quick, effective and automatic warning against thermal threats. The optimization of the model structure allows for its use with mobile devices such as smartphones and mobile thermal imaging cameras.
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 2nd Edition)
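
The alarm logic combines two signals: a CNN that detects the user's hand in the thermal frame, and a temperature threshold that marks hot objects. The sketch below shows that combination under stated assumptions; the tiny stand-in network, the 120×160 frame size, and the 60 °C threshold are illustrative placeholders for the paper's 15-layer CNN and tuned parameters.

```python
# Sketch: burn alarm fires only when a hot region AND the user's hand
# are both present in the same thermal frame. Threshold and network
# are assumptions, not the paper's.
import numpy as np
import torch
import torch.nn as nn

hand_net = nn.Sequential(            # stand-in for the paper's 15-layer CNN
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 30 * 40, 2),  # hand / no-hand, for 120x160 input
)

def burn_alarm(frame_celsius: np.ndarray, hot_thresh: float = 60.0) -> bool:
    hot = frame_celsius > hot_thresh                     # hot-object mask
    x = torch.from_numpy(frame_celsius).float()[None, None] / 100.0
    hand_present = hand_net(x).argmax(1).item() == 1     # CNN hand detector
    return bool(hot.any() and hand_present)              # both conditions required

frame = np.random.uniform(20, 80, size=(120, 160)).astype(np.float32)
print(burn_alarm(frame))
```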

15 pages, 2143 KiB  
Article
A Virtual Reality-Based Simulation Tool for Assessing the Risk of Falls in Older Adults
by Muhammad Asif Ahmad, Élvio Rúbio Gouveia and Sergi Bermúdez i Badia
Appl. Sci. 2024, 14(14), 6251; https://doi.org/10.3390/app14146251 - 18 Jul 2024
Viewed by 1082
Abstract
Falls are considered a significant cause of disability, pain, and premature deaths in older adults, often due to sedentary lifestyles and various risk factors. Combining immersive virtual reality (IVR) with physical exercise, or exergames, enhances motivation and personalizes training, effectively preventing falls by improving strength and balance in older people. IVR technology may increase the ecological validity of the assessments. The main goal of our study was to assess the feasibility of using a KAVE-based VR platform combining simulations of Levadas and a cable car to perform a balance assessment and profiling of the older adult population for high risk of falls, and the related user experience. A VR-based platform using a Wii balance board and a CAVE was developed to assess balance and physical fitness. Validated by the Biodex Balance System (BBS), 25 older adults participated in this study. Usability and presence were measured through the System Usability Scale and ITC-SOPI questionnaires, respectively. The IVR system showed high presence and a good usability score of 75. Significant effects were found in the maximum excursion of the centre of pressure (COP) on the anterior–posterior axis during the cable car simulation (CCS), correlating with BBS metrics. Multiple discriminative analysis models and the support vector machine classified fall risk with moderate to high accuracy, precision, and recall. The system accurately identified all high-risk participants using the leave-one-out method. This study suggests that an IVR-based platform based on simulations with high ecological validity can be used to assess physical fitness and identify individuals at a higher risk of falls.
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 2nd Edition)
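
A hedged sketch of the analysis pipeline the abstract implies: the maximum anterior–posterior COP excursion as a feature, and leave-one-out classification of fall risk with a support vector machine. The synthetic COP traces, the single-feature design, and the RBF kernel are assumptions, not the study's actual pipeline.

```python
# Sketch: COP excursion feature + leave-one-out SVM, on toy data.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def max_ap_excursion(cop_ap: np.ndarray) -> float:
    """Max deviation of the anterior-posterior COP trace from its mean."""
    return float(np.max(np.abs(cop_ap - cop_ap.mean())))

rng = np.random.default_rng(0)
# 25 synthetic participants (matching the study's sample size), one feature each
X = np.array([[max_ap_excursion(rng.normal(0, s, 500))]
              for s in rng.uniform(0.5, 3.0, 25)])
y = (X[:, 0] > np.median(X[:, 0])).astype(int)   # toy high/low risk labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2f}")
```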

17 pages, 5957 KiB  
Article
Inertial and Flexible Resistive Sensor Data Fusion for Wearable Breath Recognition
by Mehdi Zabihi, Bhawya, Parikshit Pandya, Brooke R. Shepley, Nicholas J. Lester, Syed Anees, Anthony R. Bain, Simon Rondeau-Gagné and Mohammed Jalal Ahamed
Appl. Sci. 2024, 14(7), 2842; https://doi.org/10.3390/app14072842 - 28 Mar 2024
Cited by 1 | Viewed by 3141
Abstract
This paper proposes a novel data fusion technique for a wearable multi-sensory patch that integrates an accelerometer and a flexible resistive pressure sensor to accurately capture breathing patterns. It utilizes an accelerometer to detect breathing-related diaphragmatic motion and other body movements, and a flex sensor for muscle stretch detection. The proposed sensor data fusion technique combines inertial and pressure sensors to eliminate nonbreathing body motion-related artifacts, ensuring that the filtered signal exclusively conveys information pertaining to breathing. The fusion technique mitigates the limitations of relying solely on one sensor’s data, providing a more robust and reliable solution for continuous breath monitoring in clinical and home environments. The sensing system was tested against gold-standard spirometry data from multiple participants for various breathing patterns. Experimental results demonstrate the effectiveness of the proposed approach in accurately monitoring breathing rates, even in the presence of nonbreathing-related body motion. The results also demonstrate that the multi-sensor patch presented in this paper can accurately distinguish between varying breathing patterns both at rest and during body movements.
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 2nd Edition)
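
The fusion idea is to trust the flex sensor only when the accelerometer indicates the body is otherwise still. Below is a minimal sketch under assumptions: the breathing band (0.1–0.7 Hz), the sampling rate, and the motion threshold are all illustrative, and the masking rule is a simplification of the paper's fusion technique.

```python
# Sketch: band-pass the flex signal in the breathing band and mask out
# segments where accelerometer energy indicates non-breathing motion.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 50.0  # Hz, assumed sampling rate

def bandpass(x, lo=0.1, hi=0.7):
    sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, x)

def fuse(flex, accel, motion_thresh=0.5):
    flex_bp = bandpass(flex)
    motion = np.abs(accel - np.median(accel))   # gross body-motion proxy
    mask = motion < motion_thresh               # keep quiet segments only
    return np.where(mask, flex_bp, np.nan)      # artifact samples masked out

t = np.arange(0, 60, 1 / FS)
flex = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(t.size)  # ~15 breaths/min
accel = 0.05 * np.random.randn(t.size)
print(np.nanstd(fuse(flex, accel)))
```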

19 pages, 3672 KiB  
Article
Estimation of Systolic and Diastolic Blood Pressure for Hypertension Identification from Photoplethysmography Signals
by Hygo Sousa De Oliveira, Rafael Albuquerque Pinto, Eduardo James Pereira Souto and Rafael Giusti
Appl. Sci. 2024, 14(6), 2470; https://doi.org/10.3390/app14062470 - 14 Mar 2024
Cited by 1 | Viewed by 2234
Abstract
Continuous monitoring plays a crucial role in diagnosing hypertension, characterized by the increase in Arterial Blood Pressure (ABP). The gold-standard method for obtaining ABP involves the uncomfortable and invasive technique of cannulation. Conversely, ABP can be acquired non-invasively by using Photoplethysmography (PPG). This non-invasive approach offers the advantage of continuous BP monitoring outside a hospital setting and can be implemented in cost-effective wearable devices. PPG and ABP signals differ in scale values, which creates a non-linear relationship, opening avenues for the utilization of algorithms capable of detecting non-linear associations. In this study, we introduce Neural Model of Blood Pressure (NeuBP), which estimates systolic and diastolic values from PPG signals. The problem is treated as a binary classification task, distinguishing between Normotensive and Hypertensive categories. Furthermore, our research investigates NeuBP’s performance in classifying different BP categories, including Normotensive, Prehypertensive, Grade 1 Hypertensive, and Grade 2 Hypertensive cases. We evaluate our proposed method by using data from the publicly available MIMIC-III database. The experimental results demonstrate that NeuBP achieves results comparable to more complex models with fewer parameters. The mean absolute errors for systolic and diastolic values are 5.02 mmHg and 3.11 mmHg, respectively.
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 2nd Edition)
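
As a stand-in for the kind of model the abstract describes, the sketch below maps a fixed-length PPG window to systolic/diastolic estimates with a small 1-D CNN trained against an L1 objective, which matches the reported mean-absolute-error metric. The architecture and the 5 s / 125 Hz window are assumptions; NeuBP's actual design is defined in the paper.

```python
# Sketch: 1-D CNN regressing (SBP, DBP) in mmHg from a PPG window.
import torch
import torch.nn as nn

class PPG2BP(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                 # global pooling over time
        )
        self.head = nn.Linear(64, 2)                 # -> (SBP, DBP)

    def forward(self, ppg):                          # ppg: (batch, 1, samples)
        return self.head(self.features(ppg).squeeze(-1))

model = PPG2BP()
sbp_dbp = model(torch.randn(8, 1, 625))              # 5 s windows at 125 Hz (assumed)
loss = nn.L1Loss()(sbp_dbp, torch.tensor([[120.0, 80.0]]).expand(8, 2))  # MAE objective
```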

22 pages, 1026 KiB  
Article
Real-Time Human Activity Recognition on Embedded Equipment: A Comparative Study
by Houda Najeh, Christophe Lohr and Benoit Leduc
Appl. Sci. 2024, 14(6), 2377; https://doi.org/10.3390/app14062377 - 12 Mar 2024
Cited by 1 | Viewed by 1351
Abstract
As living standards improve, the growing demand for energy, comfort, and health monitoring drives the increased importance of innovative solutions. Real-time human activity recognition (HAR) in smart homes is of significant relevance, offering varied applications to improve the quality of life of fragile individuals. These applications include facilitating autonomy at home for vulnerable people, early detection of deviations or disruptions in lifestyle habits, and immediate alerting in the event of critical situations. The first objective of this work is to develop a real-time HAR algorithm for embedded equipment. The proposed approach incorporates dynamic event windowing based on spatio-temporal correlation and knowledge of activity trigger sensors to recognize activities as new events are recorded. The second objective is to approach the HAR task from the perspective of edge computing. In concrete terms, this involves implementing a HAR algorithm in a “home box”, a low-power, low-cost computer, while guaranteeing performance in terms of accuracy and processing time. To achieve this goal, a HAR algorithm was first developed to perform these recognition tasks in real time. The proposed algorithm was then ported to three hardware architectures for comparison: (i) a NUCLEO-H753ZI microcontroller from STMicroelectronics, using two programming languages, C and MicroPython; (ii) an ESP32 microcontroller, often used for smart-home devices; and (iii) a Raspberry Pi, optimized to maintain activity classification accuracy within requirements on processing time, memory resources, and energy consumption. The experimental results show that the proposed algorithm can be effectively implemented on a resource-constrained hardware architecture. This could allow the design of an embedded system for real-time human activity recognition.
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 2nd Edition)
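
The dynamic-windowing idea can be sketched in a few lines: accumulate sensor events, close the current window when events drift too far apart in time, and close it immediately when a known activity trigger sensor fires. The sensor names, the 30 s gap, and the trigger map below are hypothetical; the paper's actual spatio-temporal correlation criteria are richer.

```python
# Sketch: event-driven dynamic windowing for smart-home HAR.
from dataclasses import dataclass

@dataclass
class Event:
    t: float      # timestamp (s)
    sensor: str   # sensor identifier

TRIGGERS = {"stove_on": "cooking", "shower_on": "showering"}  # hypothetical map
MAX_GAP = 30.0  # s between events before the window is closed (assumed)

def window_events(events):
    window, windows = [], []
    for ev in events:
        if window and ev.t - window[-1].t > MAX_GAP:
            windows.append(window); window = []   # temporal break closes window
        window.append(ev)
        if ev.sensor in TRIGGERS:                 # trigger sensor closes window
            windows.append(window); window = []
    if window:
        windows.append(window)
    return windows

evs = [Event(0, "kitchen_motion"), Event(5, "stove_on"), Event(120, "shower_on")]
for w in window_events(evs):
    print([e.sensor for e in w])
```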

31 pages, 6952 KiB  
Article
Device Position-Independent Human Activity Recognition with Wearable Sensors Using Deep Neural Networks
by Sakorn Mekruksavanich and Anuchit Jitpattanakul
Appl. Sci. 2024, 14(5), 2107; https://doi.org/10.3390/app14052107 - 3 Mar 2024
Cited by 1 | Viewed by 2464
Abstract
Human activity recognition (HAR) identifies people’s motions and actions in daily life. HAR research has grown with the popularity of internet-connected, wearable sensors that capture human movement data to detect activities. Recent deep learning advances have enabled more HAR research and applications using data from wearable devices. However, prior HAR research often focused on a few sensor locations on the body. Recognizing real-world activities poses challenges when device positioning is uncontrolled or initial user training data are unavailable. This research analyzes the feasibility of deep learning models for both position-dependent and position-independent HAR. We introduce an advanced residual deep learning model called Att-ResBiGRU, which excels at accurate position-dependent HAR and delivers excellent performance for position-independent HAR. We evaluate this model using three public HAR datasets: Opportunity, PAMAP2, and REALWORLD16. Comparisons are made to previously published deep learning architectures for addressing HAR challenges. The proposed Att-ResBiGRU model outperforms existing techniques in accuracy, cross-entropy loss, and F1-score across all three datasets. We assess the model using k-fold cross-validation. The Att-ResBiGRU achieves F1-scores of 86.69%, 96.23%, and 96.44% on the PAMAP2, REALWORLD16, and Opportunity datasets, surpassing state-of-the-art models across all datasets. Our experiments and analysis demonstrate the exceptional performance of the Att-ResBiGRU model for HAR applications.
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 2nd Edition)
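
For readers unfamiliar with this architecture family, here is a guess at the flavour of an attention/residual bidirectional-GRU classifier for wearable HAR. Every size below (channels, hidden units, classes, window length) is a placeholder; the real Att-ResBiGRU layout is defined in the paper itself.

```python
# Sketch: stacked BiGRUs with a residual connection and attention pooling.
import torch
import torch.nn as nn

class AttBiGRU(nn.Module):
    def __init__(self, n_channels=6, hidden=64, n_classes=12):
        super().__init__()
        self.gru1 = nn.GRU(n_channels, hidden, batch_first=True, bidirectional=True)
        self.gru2 = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * hidden, 1)     # additive attention scores
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                       # x: (batch, time, channels)
        h1, _ = self.gru1(x)
        h2, _ = self.gru2(h1)
        h = h1 + h2                             # residual link between GRU blocks
        w = torch.softmax(self.att(h), dim=1)   # per-timestep attention weights
        ctx = (w * h).sum(dim=1)                # attention-pooled context vector
        return self.fc(ctx)

logits = AttBiGRU()(torch.randn(4, 128, 6))     # 4 windows, 128 steps, 6 IMU axes
```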

Other

14 pages, 3354 KiB  
Study Protocol
Protocol for the Development of Automatic Multisensory Systems to Analyze Human Activity for Functional Evaluation: Application to the EYEFUL System
by Paula Obeso-Benítez, Marta Pérez-de-Heredia-Torres, Elisabet Huertas-Hoyas, Patricia Sánchez-Herrera-Baeza, Nuria Máximo-Bocanegra, Sergio Serrada-Tejeda, Marta Marron-Romera, Javier Macias-Guarasa, Cristina Losada-Gutierrez, Sira E. Palazuelos-Cagigas, Jose L. Martin-Sanchez and Rosa M. Martínez-Piédrola
Appl. Sci. 2024, 14(8), 3415; https://doi.org/10.3390/app14083415 - 18 Apr 2024
Viewed by 881
Abstract
The EYEFUL system represents a pioneering initiative designed to leverage multisensory systems for the automatic evaluation of functional ability and determination of dependency status in people performing activities of daily living. This interdisciplinary effort, bridging the gap between engineering and health sciences, aims to overcome the limitations of current evaluation tools, which often lack objectivity and fail to capture the full range of functional capacity; until now, evaluation has been derived from subjective reports and observational methods. By integrating wearable sensors and environmental technologies, EYEFUL offers an innovative approach to quantitatively assess an individual’s ability to perform activities of daily living, providing a more accurate and unbiased evaluation of functionality and personal independence. This paper describes the protocol planned for the development of the EYEFUL system, from the initial design of the methodology to the deployment of multisensory systems and the subsequent clinical validation process. The implications of this research are far-reaching, offering the potential to improve clinical evaluations of functional ability and ultimately the quality of life of people with varying levels of dependency. With its emphasis on technological innovation and interdisciplinary collaboration, the EYEFUL system sets a new standard for objective evaluation, highlighting the critical role of advanced screening technologies in addressing the challenges of modern healthcare. We expect that the publication of the protocol will help similar initiatives by providing a structured approach and a rigorous validation process.
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 2nd Edition)
