
Human Activity Recognition in Smart Sensing Environment

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Wearables".

Deadline for manuscript submissions: 31 December 2024 | Viewed by 21777

Special Issue Editors


Dr. Daniela Micucci
Guest Editor
Department of Informatics, Systems and Communication, University of Milano-Bicocca, 20126 Milan, Italy
Interests: software architecture; mobile-based systems; m-health; e-health; self-healing; self-repairing

Dr. Marco Mobilio
Guest Editor
Department of Informatics, University of Milano-Bicocca, 20125 Milan, Italy
Interests: software architecture; self-healing; self-repairing; cloud computing; monitoring

Special Issue Information

Dear Colleagues,

Research regarding smart sensing environments has been increasing in prevalence in recent years due to the availability of increasingly high-performance sensors. Smart sensing environments that can detect activities performed by users can contribute significantly to an individual's well-being, both mental (because their needs are automatically met) and physical (because their health can be monitored constantly and in real time).

Human Activity Recognition (HAR) is a field of research that defines and experiments with approaches in order to recognize human activities. Usually, such recognition is performed by exploiting data from sensors that may be available in the environment (also known as environmental sensors) or that are directly on the subject (also known as wearable sensors). The most common kind of data obtained from environmental sensors are images and videos from cameras; however, other environmental sensors are often employed, including, but not limited to: temperature, humidity, pressure, audio, and vibration sensors. On the other hand, the most commonly used wearable sensors are accelerometers, followed by magnetometers and gyroscopes, which today are commonly found in smartphones (which can be considered as wearables) and proper wearable devices such as smartwatches and fitness bands.

The aim of this Special Issue, entitled “Human Activity Recognition in Smart Sensing Environment”, is to attract high-quality, innovative, and original papers related to the field of exploiting sensor data in order to perform HAR-related tasks, regardless of the nature of the sensors themselves, which may be environmental, wearable, or a combination of the two.

Topics of interest include, but are not limited to, the following:

  • Sensor data fusion;
  • Smartphone sensors;
  • Wearable sensors;
  • Human Activity Recognition (HAR);
  • Smart environments;
  • Ambient intelligence;
  • Ambient sensing;
  • Ambient assisted living;
  • Machine and deep learning solutions exploiting sensor data in HAR.

Dr. Daniela Micucci
Dr. Marco Mobilio
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human activity recognition
  • machine learning
  • deep learning
  • environmental sensors
  • wearable sensors
  • ambient sensing
  • ambient assisted living

Published Papers (14 papers)


Research


23 pages, 944 KiB  
Article
Unobtrusive Cognitive Assessment in Smart-Homes: Leveraging Visual Encoding and Synthetic Movement Traces Data Mining
by Samaneh Zolfaghari, Annica Kristoffersson, Mia Folke, Maria Lindén and Daniele Riboni
Sensors 2024, 24(5), 1381; https://doi.org/10.3390/s24051381 - 21 Feb 2024
Viewed by 517
Abstract
The ubiquity of sensors in smart-homes facilitates the support of independent living for older adults and enables cognitive assessment. Notably, there has been a growing interest in utilizing movement traces for identifying signs of cognitive impairment in recent years. In this study, we introduce an innovative approach to identify abnormal indoor movement patterns that may signal cognitive decline. This is achieved through the non-intrusive integration of smart-home sensors, including passive infrared sensors and sensors embedded in everyday objects. The methodology involves visualizing user locomotion traces and discerning interactions with objects on a floor plan representation of the smart-home, and employing different image descriptor features designed for image analysis tasks and synthetic minority oversampling techniques to enhance the methodology. This approach distinguishes itself by its flexibility in effortlessly incorporating additional features through sensor data. A comprehensive analysis, conducted with a substantial dataset obtained from a real smart-home, involving 99 seniors, including those with cognitive diseases, reveals the effectiveness of the proposed functional prototype of the system architecture. The results validate the system’s efficacy in accurately discerning the cognitive status of seniors, achieving a macro-averaged F1-score of 72.22% for the two targeted categories: cognitively healthy and people with dementia. Furthermore, through experimental comparison, our system demonstrates superior performance compared with state-of-the-art methods. Full article
(This article belongs to the Special Issue Human Activity Recognition in Smart Sensing Environment)
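The study above balances its two classes ("cognitively healthy" vs. "people with dementia") with synthetic minority oversampling. The following minimal sketch illustrates the core SMOTE idea, interpolating between a minority sample and one of its nearest minority-class neighbours; the feature matrix and parameters below are toy values, not the study's image descriptors.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, rng=None):
    """Generate n_new synthetic minority samples by interpolating between
    each sampled point and one of its k nearest minority-class neighbours
    (the core idea of SMOTE)."""
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # distances from the chosen sample to all minority samples
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()                    # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)

# Toy minority-class feature vectors (hypothetical 2-D descriptors)
X_minority = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3], [0.4, 0.2], [0.3, 0.4]])
X_new = smote_oversample(X_minority, n_new=10, rng=0)
print(X_new.shape)  # (10, 2)
```

Because each synthetic point lies on a segment between two real minority samples, the augmented set stays inside the region the minority class already occupies.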

17 pages, 1554 KiB  
Article
A Hybrid Protection Scheme for the Gait Analysis in Early Dementia Recognition
by Francesco Castro, Donato Impedovo and Giuseppe Pirlo
Sensors 2024, 24(1), 24; https://doi.org/10.3390/s24010024 - 19 Dec 2023
Cited by 1 | Viewed by 1153
Abstract
Human activity recognition (HAR) through gait analysis is a very promising research area for early detection of neurodegenerative diseases because gait abnormalities are typical symptoms of some neurodegenerative diseases, such as early dementia. While working with such biometric data, the performance parameters must be considered along with privacy and security issues. In other words, such biometric data should be processed under specific security and privacy requirements. This work proposes an innovative hybrid protection scheme combining a partially homomorphic encryption scheme and a cancelable biometric technique based on random projection to protect gait features, ensuring patient privacy according to ISO/IEC 24745. The proposed hybrid protection scheme has been implemented along a long short-term memory (LSTM) neural network to realize a secure early dementia diagnosis system. The proposed protection scheme is scalable and implementable with any type of neural network because it is independent of the network’s architecture. The conducted experiments demonstrate that the proposed protection scheme enables a high trade-off between safety and performance. The accuracy degradation is at most 1.20% compared with the early dementia recognition system without the protection scheme. Moreover, security and computational analyses of the proposed scheme have been conducted and reported. Full article

21 pages, 7596 KiB  
Article
Visible Light Communications-Based Assistance System for the Blind and Visually Impaired: Design, Implementation, and Intensive Experimental Evaluation in a Real-Life Situation
by Alin-Mihai Căilean, Sebastian-Andrei Avătămăniței, Cătălin Beguni, Eduard Zadobrischi, Mihai Dimian and Valentin Popa
Sensors 2023, 23(23), 9406; https://doi.org/10.3390/s23239406 - 25 Nov 2023
Viewed by 853
Abstract
Severe visual impairment and blindness significantly affect a person’s quality of life, leading sometimes to social anxiety. Nevertheless, instead of concentrating on a person’s inability, we could focus on their capacities and on their other senses, which in many cases are more developed. On the other hand, the technical evolution that we are witnessing is able to provide practical means that can reduce the effects that blindness and severe visual impairment have on a person’s life. In this context, this article proposes a novel wearable solution that has the potential to significantly improve a blind person’s quality of life by providing personal assistance with the help of Visible Light Communications (VLC) technology. To prevent the wearable device from drawing attention and to not further emphasize the user’s deficiency, the prototype has been integrated into a smart backpack that has multiple functions, from localization to obstacle detection. To demonstrate the viability of the concept, the prototype has been evaluated in a complex scenario where it is used to receive the location of a certain object and to safely travel towards it. The experimental results have: i. confirmed the prototype’s ability to receive data at a Bit-Error Rate (BER) lower than 10⁻⁷; ii. established the prototype’s ability to provide support for a 3 m radius around a standard 65 × 65 cm luminaire; iii. demonstrated the concept’s compatibility with light dimming in the 1–99% interval while maintaining the low BER; and, most importantly, iv. proved that the use of the concept can enable a person to obtain information and guidance, enabling a safer and faster way of traveling to a certain unknown location. As far as we know, this work is the first one to report the implementation and the experimental evaluation of such a concept. Full article

20 pages, 5594 KiB  
Article
Intelligent ADL Recognition via IoT-Based Multimodal Deep Learning Framework
by Madiha Javeed, Naif Al Mudawi, Abdulwahab Alazeb, Sultan Almakdi, Saud S. Alotaibi, Samia Allaoua Chelloug and Ahmad Jalal
Sensors 2023, 23(18), 7927; https://doi.org/10.3390/s23187927 - 16 Sep 2023
Viewed by 1528
Abstract
Smart home monitoring systems via the Internet of Things (IoT) are required for taking care of elders at home. They provide the flexibility of monitoring elders remotely for their families and caregivers. Activities of daily living are an efficient way to effectively monitor elderly people at home and patients at caregiving facilities. The monitoring of such actions depends largely on IoT-based devices, either wireless or installed at different places. This paper proposes an effective and robust layered architecture using multisensory devices to recognize the activities of daily living from anywhere. Multimodality refers to sensory devices of multiple types working together to achieve the objective of remote monitoring. Therefore, the proposed multimodal-based approach fuses IoT data, such as wearable inertial sensor readings and videos recorded during daily routines. The data from these multi-sensors are processed through a pre-processing layer in different stages, such as data filtration, segmentation, landmark detection, and 2D stick modeling. In the next layer, called feature processing, different features from the multimodal sensors are extracted, fused, and optimized. The final layer, called classification, recognizes the activities of daily living via a deep learning technique known as a convolutional neural network. The results of the proposed IoT-based multimodal layered system show that an acceptable mean accuracy rate of 84.14% has been achieved. Full article

30 pages, 10215 KiB  
Article
Intelligent Localization and Deep Human Activity Recognition through IoT Devices
by Abdulwahab Alazeb, Usman Azmat, Naif Al Mudawi, Abdullah Alshahrani, Saud S. Alotaibi, Nouf Abdullah Almujally and Ahmad Jalal
Sensors 2023, 23(17), 7363; https://doi.org/10.3390/s23177363 - 23 Aug 2023
Cited by 12 | Viewed by 1213
Abstract
Ubiquitous computing has been a green research area that has managed to attract and sustain the attention of researchers for some time now. As ubiquitous computing applications, human activity recognition and localization have also been popularly worked on. These applications are used in healthcare monitoring, behavior analysis, personal safety, and entertainment. A robust model has been proposed in this article that works over IoT data extracted from smartphone and smartwatch sensors to recognize the activities performed by the user and, in the meantime, classify the location at which the human performed that particular activity. The system starts by denoising the input signal using a second-order Butterworth filter and then uses a Hamming window to divide the signal into small data chunks. Multiple stacked windows are generated using three windows per stack, which, in turn, prove helpful in producing more reliable features. The stacked data are then transferred to two parallel feature extraction blocks, i.e., human activity recognition and human localization. The respective features are extracted for both modules that reinforce the system’s accuracy. A recursive feature elimination is applied to the features of both categories independently to select the most informative ones among them. After the feature selection, a genetic algorithm is used to generate ten different generations of each feature vector for data augmentation purposes, which directly impacts the system’s performance. Finally, a deep neural decision forest is trained for classifying the activity and the subject’s location while working on both of these attributes in parallel. For the evaluation and testing of the proposed system, two openly accessible benchmark datasets, the ExtraSensory dataset and the Sussex-Huawei Locomotion dataset, were used. 
The system outperformed the available state-of-the-art systems by recognizing human activities with an accuracy of 88.25% and classifying the location with an accuracy of 90.63% over the ExtraSensory dataset, while, for the Sussex-Huawei Locomotion dataset, the respective results were 96.00% and 90.50% accurate. Full article
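The pre-processing front end described above (second-order Butterworth denoising followed by Hamming-window segmentation) can be sketched as follows. The sampling rate, cutoff frequency, and window/step sizes are illustrative assumptions, not the values used by the authors.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 50.0  # assumed sampling rate in Hz (not stated in the abstract)

def denoise(signal, cutoff=5.0, order=2):
    """Zero-phase second-order low-pass Butterworth filter."""
    b, a = butter(order, cutoff / (FS / 2), btype="low")
    return filtfilt(b, a, signal)

def windowed_chunks(signal, win=128, step=64):
    """Split the signal into overlapping chunks and taper each
    chunk with a Hamming window."""
    w = np.hamming(win)
    return np.array([signal[i:i + win] * w
                     for i in range(0, len(signal) - win + 1, step)])

# Toy inertial signal: 1 Hz sinusoid plus noise, 10 s long
t = np.arange(0, 10, 1 / FS)
x = np.sin(2 * np.pi * 1.0 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
chunks = windowed_chunks(denoise(x), win=128, step=64)
print(chunks.shape)  # (6, 128): six overlapping windowed chunks
```

Consecutive chunks could then be stacked in groups of three, as the abstract describes, before feature extraction.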

16 pages, 3400 KiB  
Article
Vision-Based Recognition of Human Motion Intent during Staircase Approaching
by Md Rafi Islam, Md Rejwanul Haque, Masudul H. Imtiaz, Xiangrong Shen and Edward Sazonov
Sensors 2023, 23(11), 5355; https://doi.org/10.3390/s23115355 - 05 Jun 2023
Cited by 1 | Viewed by 1469
Abstract
Walking in real-world environments involves constant decision-making, e.g., when approaching a staircase, an individual decides whether to engage (climbing the stairs) or avoid. For the control of assistive robots (e.g., robotic lower-limb prostheses), recognizing such motion intent is an important but challenging task, primarily due to the lack of available information. This paper presents a novel vision-based method to recognize an individual’s motion intent when approaching a staircase before the potential transition of motion mode (walking to stair climbing) occurs. Leveraging the egocentric images from a head-mounted camera, the authors trained a YOLOv5 object detection model to detect staircases. Subsequently, an AdaBoost and gradient boost (GB) classifier was developed to recognize the individual’s intention of engaging or avoiding the upcoming stairway. This novel method has been demonstrated to provide reliable (97.69%) recognition at least 2 steps before the potential mode transition, which is expected to provide ample time for the controller mode transition in an assistive robot in real-world use. Full article
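As a rough illustration of the intent-classification step, the sketch below fits a gradient-boosting classifier to hypothetical staircase-detection features. The feature definitions and the labelling rule are invented for the example; the paper's actual features come from YOLOv5 detections on egocentric video.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical per-frame features derived from a detected staircase box:
# [box height ratio, box-centre offset from image centre, normalized distance]
X = rng.uniform(size=(200, 3))
# Toy stand-in for ground truth: label "engage" (1) when the staircase
# fills the view and sits near the image centre, else "avoid" (0)
y = ((X[:, 0] > 0.5) & (X[:, 1] < 0.5)).astype(int)

clf = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.score(X, y))
```

A real system would evaluate on held-out walking trials rather than training accuracy, and would feed the predicted intent to the prosthesis controller ahead of the mode transition.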

36 pages, 8924 KiB  
Article
Automated Implementation of the Edinburgh Visual Gait Score (EVGS) Using OpenPose and Handheld Smartphone Video
by Shri Harini Ramesh, Edward D. Lemaire, Albert Tu, Kevin Cheung and Natalie Baddour
Sensors 2023, 23(10), 4839; https://doi.org/10.3390/s23104839 - 17 May 2023
Viewed by 2181
Abstract
Recent advancements in computing and artificial intelligence (AI) make it possible to quantitatively evaluate human movement using digital video, thereby opening the possibility of more accessible gait analysis. The Edinburgh Visual Gait Score (EVGS) is an effective tool for observational gait analysis, but human scoring of videos can take over 20 min and requires experienced observers. This research developed an algorithmic implementation of the EVGS from handheld smartphone video to enable automatic scoring. Participant walking was video recorded at 60 Hz using a smartphone, and body keypoints were identified using the OpenPose BODY25 pose estimation model. An algorithm was developed to identify foot events and strides, and EVGS parameters were determined at relevant gait events. Stride detection was accurate within two to five frames. The level of agreement between the algorithmic and human reviewer EVGS results was strong for 14 of 17 parameters, and the algorithmic EVGS results were highly correlated (r > 0.80, “r” represents the Pearson correlation coefficient) to the ground truth values for 8 of the 17 parameters. This approach could make gait analysis more accessible and cost-effective, particularly in areas without gait assessment expertise. These findings pave the way for future studies to explore the use of smartphone video and AI algorithms in remote gait analysis. Full article

22 pages, 11919 KiB  
Article
Flight Controller as a Low-Cost IMU Sensor for Human Motion Measurement
by Artur Iluk
Sensors 2023, 23(4), 2342; https://doi.org/10.3390/s23042342 - 20 Feb 2023
Viewed by 2229
Abstract
Human motion analysis requires information about the position and orientation of different parts of the human body over time. Widely used are optical methods such as the VICON system and sets of wired and wireless IMU sensors to estimate absolute orientation angles of extremities (Xsens). Both methods require expensive measurement devices and have disadvantages such as the limited rate of position and angle acquisition. In the paper, the adaptation of the drone flight controller was proposed as a low-cost and relatively high-performance device for the human body pose estimation and acceleration measurements. The test setup with the use of flight controllers was described and the efficiency of the flight controller sensor was compared with commercial sensors. The practical usability of sensors in human motion measurement was presented. The issues related to the dynamic response of IMU-based sensors during acceleration measurement were discussed. Full article

13 pages, 525 KiB  
Article
Incremental Learning of Human Activities in Smart Homes
by Sook-Ling Chua, Lee Kien Foo, Hans W. Guesgen and Stephen Marsland
Sensors 2022, 22(21), 8458; https://doi.org/10.3390/s22218458 - 03 Nov 2022
Cited by 2 | Viewed by 1560
Abstract
Sensor-based human activity recognition has been extensively studied. Systems learn from a set of training samples to classify actions into a pre-defined set of ground truth activities. However, human behaviours vary over time, and so a recognition system should ideally be able to continuously learn and adapt, while retaining the knowledge of previously learned activities, and without failing to highlight novel, and therefore potentially risky, behaviours. In this paper, we propose a method based on compression that can incrementally learn new behaviours, while retaining prior knowledge. Evaluation was conducted on three publicly available smart home datasets. Full article

22 pages, 4385 KiB  
Article
Group Decision Making-Based Fusion for Human Activity Recognition in Body Sensor Networks
by Yiming Tian, Jie Zhang, Qi Chen, Shuping Hou and Li Xiao
Sensors 2022, 22(21), 8225; https://doi.org/10.3390/s22218225 - 27 Oct 2022
Cited by 2 | Viewed by 1127
Abstract
Ensemble learning systems (ELS) have been widely utilized for human activity recognition (HAR) with multiple homogeneous or heterogeneous sensors. However, traditional ensemble approaches for HAR cannot always work well due to insufficient accuracy and diversity of base classifiers, the absence of ensemble pruning, as well as the inefficiency of the fusion strategy. To overcome these problems, this paper proposes a novel selective ensemble approach with group decision-making (GDM) for decision-level fusion in HAR. As a result, the fusion process in the ELS is transformed into an abstract process that includes individual experts (base classifiers) making decisions with the GDM fusion strategy. Firstly, a set of diverse local base classifiers are constructed through the corresponding mechanism of the base classifier and the sensor. Secondly, the pruning methods and the number of selected base classifiers for the fusion phase are determined by considering the diversity among base classifiers and the accuracy of candidate classifiers. Two ensemble pruning methods are utilized: mixed diversity measure and complementarity measure. Thirdly, component decision information from the selected base classifiers is combined by using the GDM fusion strategy and the recognition results of the HAR approach can be obtained. Experimental results on two public activity recognition datasets (The OPPORTUNITY dataset; Daily and Sports Activity Dataset (DSAD)) suggest that the proposed GDM-based approach outperforms the well-known fusion techniques and other state-of-the-art approaches in the literature. Full article
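Decision-level fusion of base classifiers can be illustrated with a simple accuracy-weighted vote. This is a deliberately simplified stand-in for the paper's group-decision-making strategy; the probability vectors and validation accuracies below are hypothetical.

```python
import numpy as np

def weighted_vote(probas, weights):
    """Decision-level fusion: combine per-classifier class-probability
    vectors using normalized accuracy-derived weights."""
    probas = np.asarray(probas, dtype=float)   # shape (n_classifiers, n_classes)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                            # normalize the expert weights
    fused = (w[:, None] * probas).sum(axis=0)  # weighted average per class
    return int(np.argmax(fused)), fused

# Three base classifiers (e.g. one per body sensor), four activity classes
p = [[0.6, 0.2, 0.1, 0.1],
     [0.1, 0.7, 0.1, 0.1],
     [0.5, 0.3, 0.1, 0.1]]
acc = [0.90, 0.60, 0.85]   # hypothetical validation accuracies
label, fused = weighted_vote(p, acc)
print(label)  # class 0 wins: two accurate classifiers agree on it
```

Ensemble pruning, as in the paper, would first drop base classifiers that add little diversity or accuracy before this fusion step.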

22 pages, 3314 KiB  
Article
A Novel Segmentation Scheme with Multi-Probability Threshold for Human Activity Recognition Using Wearable Sensors
by Bangwen Zhou, Cheng Wang, Zhan Huan, Zhixin Li, Ying Chen, Ge Gao, Huahao Li, Chenhui Dong and Jiuzhen Liang
Sensors 2022, 22(19), 7446; https://doi.org/10.3390/s22197446 - 30 Sep 2022
Cited by 4 | Viewed by 1262
Abstract
In recent years, much research has been conducted on time-series-based human activity recognition (HAR) using wearable sensors. Most existing work for HAR is based on manual labeling. However, complete time-series signals not only contain different types of activities, but also include many transitional and atypical ones. Thus, effectively filtering out these activities has become a significant problem. In this paper, a novel machine-learning-based segmentation scheme with a multi-probability threshold is proposed for HAR. Threshold segmentation (TS) and slope-area (SA) approaches are employed according to the characteristics of the small fluctuation of static activity signals and the typical peaks and troughs of periodic-like ones. In addition, a multi-label weighted probability (MLWP) model is proposed to estimate the probability of each activity. The HAR error can be significantly decreased, as the proposed model can solve the problem that a fixed window usually contains multiple kinds of activities, while unknown activities can be accurately rejected to reduce their impact. Compared with other existing schemes, computer simulation reveals that the proposed model maintains high performance on the UCI and PAMAP2 datasets. The average HAR accuracies reach 97.71% and 95.93%, respectively. Full article

13 pages, 9972 KiB  
Article
Cadence Detection in Road Cycling Using Saddle Tube Motion and Machine Learning
by Bernhard Hollaus, Jasper C. Volmer and Thomas Fleischmann
Sensors 2022, 22(16), 6140; https://doi.org/10.3390/s22166140 - 17 Aug 2022
Cited by 2 | Viewed by 2167
Abstract
Most commercial cadence-measurement systems in road cycling are strictly limited in their function to the measurement of cadence. Other relevant signals, such as roll angle, inclination or a round kick evaluation, cannot be measured with them. This work proposes an alternative cadence-measurement system with fewer of these restrictions, without the need for distinct cadence-measurement apparatus attached to the pedal and shaft of the road bicycle. The proposed design applies an inertial measurement unit (IMU) to the saddle tube of the bike. In an experiment, the motion data were gathered. A total of four different road cyclists participated in this study to collect different datasets for neural network training and evaluation. In total, over 10 h of road cycling data were recorded and used to train the neural network. The network’s aim was to detect each revolution of the crank within the data. The evaluation of the data has shown that using pure accelerometer data from all three axes led to the best result in combination with the proposed network architecture. A working proof of concept was achieved with an accuracy of approximately 95% on test data. As the proof of concept can also be seen as a new method for measuring cadence, the method was compared with the ground truth. Comparing the ground truth and the predicted cadence, it can be stated that for the relevant range of 50 rpm and above, the prediction over-predicts the cadence by approximately 0.9 rpm, with a standard deviation of 2.05 rpm. The results indicate that the proposed design is fully functioning and can be seen as an alternative method to detect the cadence of a road cyclist. Full article

21 pages, 7753 KiB  
Article
Predicting Activity Duration in Smart Sensing Environments Using Synthetic Data and Partial Least Squares Regression: The Case of Dementia Patients
by Miguel Ortiz-Barrios, Eric Järpe, Matías García-Constantino, Ian Cleland, Chris Nugent, Sebastián Arias-Fonseca and Natalia Jaramillo-Rueda
Sensors 2022, 22(14), 5410; https://doi.org/10.3390/s22145410 - 20 Jul 2022
Cited by 1 | Viewed by 1852
Abstract
The accurate recognition of activities is fundamental for following up on the health progress of people with dementia (PwD), thereby supporting subsequent diagnosis and treatment. When monitoring the activities of daily living (ADLs), it is feasible to detect behaviour patterns, parse out the disease's evolution, and consequently provide effective and timely assistance. However, this task is affected by uncertainties derived from differences in smart home configurations and in the way each person undertakes the ADLs. One adjacent pathway is to train a supervised classification algorithm on large datasets; nonetheless, obtaining real-world data is costly and involves a challenging recruitment process. The resulting activity datasets are therefore small and may not capture each person's intrinsic properties. Simulation approaches have arisen as an efficient alternative, but synthetic data can differ significantly from real data. Hence, this paper proposes the application of Partial Least Squares Regression (PLSR) to approximate the real activity duration of various ADLs from synthetic observations. First, the real activity duration of each ADL is contrasted with the duration derived from an intelligent environment simulator. Then, different PLSR models were evaluated for estimating real activity duration from synthetic variables. A case study covering eight ADLs was used to validate the proposed approach. The results revealed that simulated and real observations differ significantly for some ADLs (p-value < 0.05); nevertheless, the synthetic variables can be further modified to predict the real activity duration with high accuracy (R²(pred) > 90%).

Review


18 pages, 3463 KiB  
Review
Sonification for Personalised Gait Intervention
by Conor Wall, Peter McMeekin, Richard Walker, Victoria Hetherington, Lisa Graham and Alan Godfrey
Sensors 2024, 24(1), 65; https://doi.org/10.3390/s24010065 - 22 Dec 2023
Cited by 2 | Viewed by 1073
Abstract
Mobility challenges threaten physical independence and quality of life. Mobility can often be improved through gait rehabilitation, specifically through cueing with prescribed auditory, visual, and/or tactile cues. Each modality has been shown to help rectify abnormal gait patterns and thereby improve mobility. Yet a limitation remains: long-term engagement with cueing modalities. A paradigm shift towards personalised cueing approaches, which consider an individual's unique physiological condition, may offer a contemporary way to ensure longitudinal and continuous engagement. Sonification could be a useful auditory cueing technique when integrated within personalised gait rehabilitation systems. Sonification has previously shown encouraging results, notably in reducing freezing of gait, mitigating spatial variability, and bolstering gait consistency in people with Parkinson's disease (PD). Specifically, manipulating acoustic features and applying advanced audio processing techniques (e.g., time-stretching) allow auditory cueing interventions to be tailored and enhanced. Used in conjunction, these methods optimize gait characteristics and subsequently improve mobility, enhancing the effectiveness of the intervention. The aim of this narrative review is to further understand and unlock the potential of sonification as a pivotal tool in auditory cueing for gait rehabilitation, while highlighting that continued clinical research is needed to ensure comfort and desirability of use.
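The time-stretching idea, adapting an auditory cue's tempo to an individual's gait, can be illustrated with a minimal NumPy sketch. All names and parameters here are illustrative assumptions; production sonification systems use phase-vocoder stretching to preserve pitch, which this naive resampling does not:

```python
import numpy as np

SR = 22050  # audio sample rate (Hz)

def click_track(step_rate_hz, duration_s, tone_hz=880.0):
    """Auditory cue: short tone bursts at a target step rate."""
    t = np.arange(int(SR * duration_s)) / SR
    phase = (t * step_rate_hz) % 1.0
    burst = phase < 0.1                      # 10% duty-cycle click
    return np.sin(2 * np.pi * tone_hz * t) * burst

def time_stretch(signal, rate):
    """Naive resampling 'time-stretch': rate > 1 shortens the cue
    (faster effective tempo), rate < 1 lengthens it."""
    n_out = int(len(signal) / rate)
    src = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(src, np.arange(len(signal)), signal)

cue = click_track(2.0, 3.0)        # 2 steps/s cue, 3 s long
slower = time_stretch(cue, 0.8)    # stretched for a slower gait
```

In a personalised intervention, `rate` would be driven by the wearer's measured cadence rather than fixed in advance.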
