Human Activity Recognition (HAR) in Healthcare, 3rd Edition

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Applied Biosciences and Bioengineering".

Deadline for manuscript submissions: 31 May 2026 | Viewed by 13234

Special Issue Editors


Dr. Luigi Bibbò
Guest Editor
Department of Civil Engineering, Energy, Environment and Materials (DICEAM), Mediterranea University of Reggio Calabria, Via Zehender, 89124 Reggio Calabria, Italy
Interests: biomedical signal processing and sensors; photonics; optical fibers; MEMS; metamaterials; nanotechnology; artificial intelligence; neural networks; virtual reality; augmented reality; indoor navigation

Prof. Dr. J. Artur Serrano
Guest Editor
Department of Neuromedicine and Movement Science, Faculty of Medicine and Health Sciences, NTNU/Norwegian University of Science and Technology, 7491 Trondheim, Norway
Interests: medical informatics applications; eHealth; social media; learning

Special Issue Information

Dear Colleagues,

Technological advances, including those in the medical field, have improved patients’ quality of life. One consequence is a growing elderly population with a greater demand for healthcare, a demand that is difficult to meet given the high cost and scarce availability of caregivers. Advances in artificial intelligence, wireless connectivity, and nanotechnology make it possible to build intelligent human health monitoring systems that avoid hospitalization, with clear cost savings. Human activity recognition (HAR), particularly approaches based on data collected through sensors or on images captured by cameras, is fundamental to such health monitoring systems. HAR systems can provide activity recognition, monitoring of vital signs, traceability, fall detection and safety alarms, and cognitive assistance. The rapid development of the Internet of Things (IoT) supports research on a wide range of automated and interconnected solutions that improve the quality of life and independence of older people. With IoT, it is possible to create innovative solutions in ambient intelligence (AmI) and ambient assisted living (AAL).

Dr. Luigi Bibbò
Prof. Dr. J. Artur Serrano
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human activity recognition
  • machine learning
  • wearable sensor
  • Internet of Things
  • ambient assisted living
  • ambient intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (6 papers)


Research


27 pages, 2223 KB  
Article
Off-the-Shelf AAL—A Practical Approach to Face the Population Shift
by Gerhard Leitner
Appl. Sci. 2026, 16(5), 2251; https://doi.org/10.3390/app16052251 - 26 Feb 2026
Viewed by 304
Abstract
Although the concept of Active and Assisted Living (AAL) has been a prominent topic in academia and in industry for decades, the widespread adoption of related technologies remains well below expectations. The underlying causes are multifaceted. The installation and retrofitting of such systems typically require substantial financial investments, significant manual effort, and specialized expertise for setup and maintenance. Existing solutions lack flexibility and are difficult to tailor to the individual living situations and diverse needs of the primary target group, older adults. While state-of-the-art smart home platforms would, in principle, be capable of supporting a broad range of AAL functionalities and could be adapted to different usage contexts, much of the research in this domain has been conducted in artificial settings, such as laboratory environments or model houses, conditions that fail to fully capture the complexity and variability of real-world living environments of the elderly population. In this paper, we explore the potential, opportunities, and limitations of integrating low-cost hardware with open-source software components in residential environments representative of older adults’ everyday lives. Our work is based on a longitudinal case study conducted over several years in an actual household, focusing on delivering fundamental AAL functionality. By documenting the iterative development and real-world deployment of the system, this study offers practical insights into the feasibility and challenges of implementing on-site AAL support under realistic conditions.
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 3rd Edition)

19 pages, 580 KB  
Article
VERA: A Privacy-Preserving Framework for Deep Learning Data Collection and Object Detection in Private Settings
by Manuel H. Jimenez, Onur Toker and Luis G. Jaimes
Appl. Sci. 2026, 16(4), 2144; https://doi.org/10.3390/app16042144 - 23 Feb 2026
Viewed by 540
Abstract
This paper introduces VERA (Vision Expert Real Analysis), a privacy-supporting cyber-physical framework designed for real-time data collection and visual analysis in healthcare environments. VERA limits exposure to identifiable RGB content by ensuring that annotators interact only with non-identifiable edge-based representations, while original images remain encrypted at rest using AES-CFB, with integrity verification performed before in-memory decryption. The system integrates edge-based obfuscation, secure annotation, in-memory decryption, and dynamic data augmentation to train YOLO-based person detection models without compromising patient privacy. Experimental results on a curated COCO subset show that VERA enables effective person detection, improving mean Average Precision (mAP) from an intentionally minimal baseline of 0.61 percent to 99.94 percent after full training and augmentation. This baseline is used solely to illustrate the contribution of the secure data preparation pipeline and is not intended to represent a fully optimized YOLO configuration. The results demonstrate that privacy-supportive workflows can maintain strong model performance while aligning with data protection practices common in regulated environments. Although this work focuses on person detection as a foundational stage, the VERA architecture is designed to support future extensions toward privacy-preserving Human Activity Recognition (HAR) tasks in clinical and assisted-living settings.
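The encrypt-at-rest-with-integrity-check workflow the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' code: it substitutes a keyed XOR keystream for AES-CFB (the real system uses AES-CFB) and an HMAC-SHA256 tag for whichever integrity mechanism VERA employs; all function names are hypothetical.

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from (key, nonce) using SHA-256 blocks.
    Toy stand-in for a real cipher such as AES-CFB."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt_at_rest(image_bytes: bytes, key: bytes) -> dict:
    """Encrypt image bytes and attach an integrity tag over nonce + ciphertext."""
    nonce = os.urandom(16)
    ks = _keystream(key, nonce, len(image_bytes))
    ciphertext = bytes(a ^ b for a, b in zip(image_bytes, ks))
    tag = hmac.new(key, nonce + ciphertext, hashlib.sha256).digest()
    return {"nonce": nonce, "ciphertext": ciphertext, "tag": tag}

def decrypt_in_memory(record: dict, key: bytes) -> bytes:
    """Verify integrity FIRST, then decrypt; refuse tampered records."""
    expected = hmac.new(key, record["nonce"] + record["ciphertext"],
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, record["tag"]):
        raise ValueError("integrity check failed; refusing to decrypt")
    ks = _keystream(key, record["nonce"], len(record["ciphertext"]))
    return bytes(a ^ b for a, b in zip(record["ciphertext"], ks))
```

The key design point mirrored here is the ordering: the tag is checked before any plaintext is reconstructed, so a tampered record never reaches the decryption step.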
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 3rd Edition)

22 pages, 3752 KB  
Article
An IoT-Enabled Smart Pillow with Multi-Spectrum Deep Learning Model for Real-Time Snoring Detection and Intervention
by Zhuofu Liu, Kotchoni K. O. Perin, Gaohan Li, Jian Wang, Tian He, Yuewen Xu and Peter W. McCarthy
Appl. Sci. 2025, 15(24), 12891; https://doi.org/10.3390/app152412891 - 6 Dec 2025
Viewed by 3151
Abstract
Snoring, a common sleep-disordered breathing phenomenon, impairs sleep quality for both the sufferer and any bed partner. While mild snoring primarily disrupts sleep continuity, severe cases often indicate obstructive sleep apnea (OSA), a disorder affecting 9–17% of the global population, linked to significant comorbidities and socioeconomic burden (see Introduction for supporting data). Here, we propose a low-cost, real-time snoring detection and intervention system that integrates a multiple-spectrum deep learning framework with an Internet of Things (IoT)-enabled smart pillow. The modified Parallel Convolutional Spatiotemporal Network (PCSN) combines three parallel convolutional neural network (CNN) branches processing Constant-Q Transform (CQT), Synchrosqueezing Wavelet Transform (SWT), and Hilbert–Huang Transform (HHT) features with a Long Short-Term Memory (LSTM) network to capture spatial and temporal characteristics of sounds associated with snoring. The smart pillow prototype incorporates two Micro-Electro-Mechanical System (MEMS) microphones, an off-the-shelf ESP8266 board, a speaker, and two vibration motors for real-time audio acquisition, cloud-based processing via Arduino Cloud, and closed-loop haptic/audio feedback that encourages positional changes without fully awakening the snorer. Experiments demonstrated that the modified PCSN model achieves 98.33% accuracy, 99.29% sensitivity, 98.34% specificity, 98.3% recall, and 98.32% F1-score, outperforming existing systems. Hardware costs are under USD 8 and a smartphone app provides authorized users with real-time visualization and secure data access. This solution offers a cost-effective and accurate approach for home-based OSA screening and intervention.
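Each of the three CNN branches consumes a time–frequency representation of the microphone signal. As a toy stand-in for those features (a plain DFT magnitude spectrum rather than the CQT/SWT/HHT transforms the paper actually uses), a frame-level spectrum can be computed like this:

```python
import math

def dft_magnitude(frame):
    """Magnitude spectrum of one audio frame via a direct DFT.
    Illustrative only: real systems would use an FFT and the
    CQT/SWT/HHT features described in the abstract."""
    n = len(frame)
    mags = []
    for k in range(n // 2 + 1):  # non-negative frequency bins
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags
```

Stacking such spectra over successive frames yields the 2D time–frequency image that each convolutional branch ingests, with the LSTM then modeling how those spectra evolve across frames.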
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 3rd Edition)

34 pages, 11523 KB  
Article
Hand Kinematic Model Construction Based on Tracking Landmarks
by Yiyang Dong and Shahram Payandeh
Appl. Sci. 2025, 15(16), 8921; https://doi.org/10.3390/app15168921 - 13 Aug 2025
Cited by 2 | Viewed by 2545
Abstract
Visual body-tracking techniques have seen widespread adoption in applications such as motion analysis, human–machine interaction, tele-robotics and extended reality (XR). These systems typically provide 2D landmark coordinates corresponding to key limb positions. However, to construct a meaningful 3D kinematic model for body joint reconstruction, a mapping must be established between these visual landmarks and the underlying joint parameters of individual body parts. This paper presents a method for constructing a 3D kinematic model of the human hand using calibrated 2D landmark-tracking data augmented with depth information. The proposed approach builds a hierarchical model in which the palm serves as the root coordinate frame, and finger landmarks are used to compute both forward and inverse kinematic solutions. Through step-by-step examples, we demonstrate how measured hand landmark coordinates are used to define the palm reference frame and solve for joint angles for each finger. These solutions are then used in a visualization framework to qualitatively assess the accuracy of the reconstructed hand motion. As future work, the proposed model offers a foundation for model-based hand kinematic estimation and has utility in scenarios involving occlusion or missing data. In such cases, the hierarchical structure and kinematic solutions can be used as generative priors in an optimization framework to estimate unobserved landmark positions and joint configurations. The novelty of this work lies in its model-based approach using real sensor data, without relying on wearable devices or synthetic assumptions. Although current validation is qualitative, the framework provides a foundation for future robust estimation under occlusion or sensor noise. It may also serve as a generative prior for optimization-based methods and be quantitatively compared with joint measurements from wearable motion-capture systems.
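The forward-kinematics step the abstract describes can be illustrated in miniature (this is not the authors' implementation): a single finger modeled as a planar chain of phalanx links rooted at the palm frame, with the fingertip position obtained by accumulating relative joint angles along the chain. Link lengths and angles below are arbitrary.

```python
import math

def finger_forward_kinematics(link_lengths, joint_angles):
    """Planar forward kinematics for one finger: each joint angle is
    relative to the previous link, so angles accumulate along the chain,
    and each phalanx contributes its rotated length to the fingertip."""
    x = y = 0.0
    theta = 0.0
    for length, angle in zip(link_lengths, joint_angles):
        theta += angle
        x += length * math.cos(theta)
        y += length * math.sin(theta)
    return x, y
```

The inverse problem, recovering joint angles from observed fingertip or landmark positions, inverts this mapping, which is why the paper's hierarchical palm-rooted frame is useful: it fixes the base of every such chain.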
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 3rd Edition)

14 pages, 1992 KB  
Article
G-CTRNN: A Trainable Low-Power Continuous-Time Neural Network for Human Activity Recognition in Healthcare Applications
by Abdallah Alzubi, David Lin, Johan Reimann and Fadi Alsaleem
Appl. Sci. 2025, 15(13), 7508; https://doi.org/10.3390/app15137508 - 4 Jul 2025
Viewed by 4019
Abstract
Continuous-time Recurrent Neural Networks (CTRNNs) are well-suited for modeling temporal dynamics in low-power neuromorphic and analog computing systems, making them promising candidates for edge-based human activity recognition (HAR) in healthcare. However, training CTRNNs remains challenging due to their continuous-time nature and the need to respect physical hardware constraints. In this work, we propose G-CTRNN, a novel gradient-based training framework for analog-friendly CTRNNs designed for embedded healthcare applications. Our method extends Backpropagation Through Time (BPTT) to continuous domains using TensorFlow’s automatic differentiation, while enforcing constraints on time constants and synaptic weights to ensure hardware compatibility. We validate G-CTRNN on the WISDM human activity dataset, which simulates realistic wearable sensor data for healthcare monitoring. Compared to conventional RNNs, G-CTRNN achieves superior classification accuracy with fewer parameters and greater stability—enabling continuous, real-time HAR on low-power platforms such as MEMS computing networks. The proposed framework provides a pathway toward on-device AI for remote patient monitoring, elderly care, and personalized healthcare in resource-constrained environments.
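CTRNN dynamics are conventionally written as τ · dh/dt = −h + tanh(W h + U x + b). A forward-Euler sketch with positive time constants enforced by clipping (loosely mirroring the hardware constraints the abstract mentions) might look like the following; the dimensions, bound, and step size are illustrative, not the paper's.

```python
import math

TAU_MIN = 0.05  # illustrative lower bound keeping time constants physical

def ctrnn_step(h, x, W, U, b, tau, dt=0.01):
    """One forward-Euler step of tau * dh/dt = -h + tanh(W h + U x + b).
    h: hidden state, x: input, W/U: recurrent/input weights, b: bias,
    tau: per-neuron time constants (clipped to TAU_MIN)."""
    n = len(h)
    h_new = []
    for i in range(n):
        pre = b[i]
        pre += sum(W[i][j] * h[j] for j in range(n))
        pre += sum(U[i][k] * x[k] for k in range(len(x)))
        tau_i = max(tau[i], TAU_MIN)  # enforce the hardware constraint
        dh = (-h[i] + math.tanh(pre)) / tau_i
        h_new.append(h[i] + dt * dh)
    return h_new
```

Unrolling this step over a sensor window and differentiating through it (as the paper does with TensorFlow's automatic differentiation) is what turns the continuous-time model into something BPTT can train.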
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 3rd Edition)

Review


31 pages, 5014 KB  
Review
Flexible Micro-Neural Interface Devices: Advances in Materials Integration and Scalable Manufacturing Technologies
by Jihyeok Lee, Sangwoo Kang and Suck Won Hong
Appl. Sci. 2026, 16(1), 125; https://doi.org/10.3390/app16010125 - 22 Dec 2025
Cited by 1 | Viewed by 1758
Abstract
Flexible microscale neural interfaces are advancing current strategies for recording and modulating electrical activity in the brain and spinal cord. The aim of this review is to colligate recent progress in thin-film micro-electrocorticography (μECoG) systems and establish a framework for their translation toward spinal bioelectronic implants. We first outline substrate and electrode material design, ranging from polymeric and hydrogel-based materials to nanostructured conductive materials that enable high-fidelity recording on mechanically compliant platforms. We then summarize structural design rules for μECoG arrays, including electrode size, pitch, and channel scaling, and relate these to data-driven μECoG applications in brain–computer interfaces and closed-loop neuromodulation. Bidirectional μECoG architectures for simultaneous stimulation and recording are examined, with emphasis on safe charge injection, electrochemical and thermal limits, and state-of-the-art hardware and algorithmic strategies for stimulation-artifact suppression. Building upon these cortical technologies, we briefly describe adaptation to spinal interfaces, where anatomical constraints demand optimized mechanical properties. Finally, we discuss the convergence of flexible bioelectronics, wireless power and telemetry, and embedded AI decoding as a path toward autonomous, clinically translatable μECoG and spinal neuroprosthetic systems. Ultimately, by synthesizing these multidisciplinary advances, this review provides a strategic roadmap for overcoming current translational barriers and realizing the full clinical potential of soft bioelectronics.
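The safe-charge-injection constraint mentioned in the abstract reduces to simple arithmetic: charge per phase Q = I × t_pulse, and the charge density Q/A must stay below the electrode material's injection limit. A hedged sketch with illustrative numbers (the limit passed in below is a placeholder, not a value taken from the review):

```python
def charge_density_per_phase(current_ua, pulse_width_us, electrode_area_cm2):
    """Charge density per phase in µC/cm²: Q = I * t_pulse, divided by
    geometric electrode area. µA * µs = pC, so scale by 1e-6 to get µC."""
    charge_uc = current_ua * pulse_width_us * 1e-6
    return charge_uc / electrode_area_cm2

def is_within_injection_limit(density_uc_cm2, limit_uc_cm2):
    """Compare against the material's charge-injection limit; the caller
    supplies the limit (material-dependent, often quoted in tens of
    µC/cm² for platinum — treat any specific value as illustrative)."""
    return density_uc_cm2 <= limit_uc_cm2
```

For example, a 100 µA, 200 µs pulse on a 0.001 cm² microelectrode deposits 0.02 µC per phase, i.e., 20 µC/cm², which is why shrinking electrode area (as μECoG scaling requires) tightens this constraint so quickly.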
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 3rd Edition)
