Search Results (111)

Search Parameters:
Keywords = Ambient/Active Assisted Living

27 pages, 4029 KiB  
Article
Modelling Key Health Indicators from Sensor Data Using Knowledge Graphs and Fuzzy Logic
by Aurora Polo-Rodríguez, Isabel Valenzuela López, Raquel Diaz, Almudena Rivadeneyra, David Gil and Javier Medina-Quero
Electronics 2025, 14(12), 2459; https://doi.org/10.3390/electronics14122459 - 17 Jun 2025
Viewed by 393
Abstract
This paper describes the modelling of Key Health Indicators (KHI) of frail individuals through non-invasive sensors located in their environment and wearable devices. Primary care professionals defined four indicators for daily health monitoring: sleep patterns, excretion control, physical mobility, and caregiver social interaction. A minimally invasive and low-cost sensing architecture was implemented, combining indoor localisation and physical activity tracking through environmental sensors and wrist-worn wearables. The health outcomes are modelled using a knowledge-based framework that integrates knowledge graphs to represent control variables and their relationships with data streams, and fuzzy logic to linguistically define temporal patterns based on expert criteria. The proposed approach was validated in a real-world case study with an older adult living independently in Granada, Spain. Over several days of deployment, the system successfully generated interpretable daily summaries reflecting relevant behavioural patterns, including rest periods, bathroom usage, activity levels, and caregiver proximity. In addition, supervised machine learning models were trained on the indicators derived from the fuzzy logic system, achieving average accuracy and F1 scores of 93% and 92%, respectively. These results confirm the potential of combining expert-informed semantics with data-driven inference to support continuous, explainable health monitoring in ambient assisted living environments. Full article
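The fuzzy-logic layer described in this abstract can be illustrated with a minimal sketch: trapezoidal membership functions map a measured quantity (here, a hypothetical nightly rest duration in hours) to linguistic terms of the kind an expert might define. All term names and breakpoints below are invented for illustration; the paper's expert-defined sets are not reproduced here.

```python
def trapmf(x, a, b, c, d):
    """Trapezoidal membership function: rises a->b, flat b->c, falls c->d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical linguistic terms for nightly rest duration (hours),
# standing in for the expert-defined fuzzy sets used in the paper.
REST_TERMS = {
    "short":    lambda h: trapmf(h, 0, 0, 3, 5),
    "adequate": lambda h: trapmf(h, 4, 6, 8, 9),
    "long":     lambda h: trapmf(h, 8, 10, 24, 24),
}

def describe_rest(hours):
    """Return the linguistic label with the highest membership degree."""
    return max(REST_TERMS, key=lambda term: REST_TERMS[term](hours))
```

A daily summary generator would evaluate such terms over each monitored indicator and report the dominant labels, which is what makes the output interpretable to care staff.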

27 pages, 4299 KiB  
Article
A Structured and Methodological Review on Multi-View Human Activity Recognition for Ambient Assisted Living
by Fahmid Al Farid, Ahsanul Bari, Abu Saleh Musa Miah, Sarina Mansor, Jia Uddin and S. Prabha Kumaresan
J. Imaging 2025, 11(6), 182; https://doi.org/10.3390/jimaging11060182 - 3 Jun 2025
Cited by 1 | Viewed by 2107
Abstract
Ambient Assisted Living (AAL) leverages technology to support the elderly and individuals with disabilities. A key challenge in these systems is efficient Human Activity Recognition (HAR). However, no study has systematically compared single-view (SV) and multi-view (MV) Human Activity Recognition approaches. This review addresses this gap by analyzing the evolution from single-view to multi-view recognition systems, covering benchmark datasets, feature extraction methods, and classification techniques. We examine how activity recognition systems have transitioned to multi-view architectures using advanced deep learning models optimized for Ambient Assisted Living, thereby improving accuracy and robustness. Furthermore, we explore a wide range of machine learning and deep learning models—including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, Temporal Convolutional Networks (TCNs), and Graph Convolutional Networks (GCNs)—along with lightweight transfer learning methods suitable for environments with limited computational resources. Key challenges such as data remediation, privacy, and generalization are discussed, alongside potential solutions such as sensor fusion and advanced learning strategies. This study offers comprehensive insights into recent advancements and future directions, guiding the development of intelligent, efficient, and privacy-compliant Human Activity Recognition systems for Ambient Assisted Living applications. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)

34 pages, 9384 KiB  
Article
MEMS and IoT in HAR: Effective Monitoring for the Health of Older People
by Luigi Bibbò, Giovanni Angiulli, Filippo Laganà, Danilo Pratticò, Francesco Cotroneo, Fabio La Foresta and Mario Versaci
Appl. Sci. 2025, 15(8), 4306; https://doi.org/10.3390/app15084306 - 14 Apr 2025
Cited by 2 | Viewed by 2650
Abstract
The aging population has created a significant challenge affecting the world; social and healthcare systems need to ensure elderly individuals receive the necessary care services to improve their quality of life and maintain their independence. In response to this need, developing integrated digital solutions, such as IoT-based wearable devices combined with artificial intelligence applications, offers a technological platform for creating Ambient Intelligence (AI) and Assisted Living (AAL) environments. These advancements can help reduce hospital admissions and lower healthcare costs. In this context, this article presents an IoT application based on MEMS (micro-electro-mechanical systems) sensors integrated into a state-of-the-art microcontroller (STM55WB) for recognizing the movements of older individuals during daily activities. Human activity recognition (HAR) is a field within computational engineering that focuses on automatically classifying human actions through data captured by sensors. This study has multiple objectives: to recognize movements such as grasping, leg flexion, circular arm movements, and walking in order to assess the motor skills of older individuals. The implemented system allows these movements to be detected in real time and transmitted to a monitoring system server, where healthcare staff can analyze the data. The analysis methods employed include machine learning algorithms to identify movement patterns, statistical analysis to assess the frequency and quality of movements, and data visualization to track changes over time. These approaches enable the accurate assessment of older people’s motor skills and facilitate the prompt identification of abnormal situations or emergencies. Additionally, a user-friendly technological solution is designed to be acceptable to the elderly, minimizing discomfort and stress associated with using technology. Finally, the goal is to ensure that the system is energy-efficient and cost-effective, promoting sustainable adoption. The results obtained are promising; the model achieved a high level of accuracy in recognizing specific movements, thus contributing to a precise assessment of the motor skills of the elderly. Notably, movement recognition was accomplished using a Random Forest model. Full article
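As a hedged illustration of the kind of per-window features a Random Forest classifier is typically trained on for tri-axial MEMS accelerometer data (the paper does not list its exact feature set, so the features below are generic assumptions):

```python
import math

def window_features(ax, ay, az):
    """Summary features of one window of tri-axial accelerometer samples.

    Computes the per-sample acceleration magnitude and reduces the window
    to a small feature vector of the kind commonly fed to tree ensembles.
    """
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in zip(ax, ay, az)]
    n = len(mags)
    mean = sum(mags) / n
    std = math.sqrt(sum((m - mean) ** 2 for m in mags) / n)
    return {"mean_mag": mean, "std_mag": std,
            "min_mag": min(mags), "max_mag": max(mags)}
```

Each labelled window (e.g. "walking", "grasping") would contribute one such feature vector to the Random Forest training set.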
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 2nd Edition)

29 pages, 6970 KiB  
Review
Advancements in Smart Wearable Mobility Aids for Visual Impairments: A Bibliometric Narrative Review
by Xiaochen Zhang, Xiaoyu Huang, Yiran Ding, Liumei Long, Wujing Li and Xing Xu
Sensors 2024, 24(24), 7986; https://doi.org/10.3390/s24247986 - 14 Dec 2024
Cited by 3 | Viewed by 4354
Abstract
Research into new solutions for wearable assistive devices for the visually impaired is an important area of assistive technology (AT). This plays a crucial role in improving the functionality and independence of the visually impaired, helping them to participate fully in their daily lives and in various community activities. This study presents a bibliometric analysis of the literature published over the last decade on wearable assistive devices for the visually impaired, retrieved from the Web of Science Core Collection (WoSCC) using CiteSpace, to provide an overview of the current state of research, trends, and hotspots in the field. The narrative focuses on prominent innovations in recent years related to wearable assistive devices for the visually impaired based on sensory substitution technology, describing the latest achievements in haptic and auditory feedback devices, the application of smart materials, and the growing concern about the conflicting interests of individuals and societal needs. It also summarises the current opportunities and challenges facing the field and discusses the following insights and trends: (1) optimization of the transmission of haptic and auditory information while multitasking; (2) advance research on smart materials and foster cross-disciplinary collaboration among experts; and (3) balance the interests of individuals and society. Given the two essential directions, the low-cost, stand-alone pursuit of efficiency and the high-cost pursuit of high-quality services that are closely integrated with accessible infrastructure, the latest advances will gradually allow more freedom for ambient assisted living by using robotics and automated machines, while using sensor and human–machine interaction as bridges to promote the synchronization of machine intelligence and human cognition. Full article
(This article belongs to the Section Wearables)

21 pages, 3367 KiB  
Article
Optimized Edge-Cloud System for Activity Monitoring Using Knowledge Distillation
by Daniel Deniz, Eduardo Ros, Eva M. Ortigosa and Francisco Barranco
Electronics 2024, 13(23), 4786; https://doi.org/10.3390/electronics13234786 - 4 Dec 2024
Viewed by 1234
Abstract
Driven by the increasing care needs of residents in long-term care facilities, Ambient Assisted Living paradigms have become very popular, offering new solutions to alleviate this burden. This work proposes an efficient edge-cloud system for indoor activity monitoring in long-term care institutions. Action recognition from video streams is implemented via Deep Learning networks running at edge nodes. Edge Computing stands out for its power efficiency, reduction in data transmission bandwidth, and inherent protection of residents’ sensitive data. To implement Artificial Intelligence models on these resource-limited edge nodes, complex Deep Learning networks are first distilled. Knowledge distillation allows for more accurate and efficient neural networks, boosting recognition performance of the solution by up to 8% without impacting resource usage. Finally, the central server runs a Quality and Resource Management (QRM) tool that monitors hardware qualities and recognition performance. This QRM tool performs runtime resource load balancing among the local processing devices ensuring real-time operation and optimized energy consumption. Also, the QRM module conducts runtime reconfiguration switching the running neural network to optimize the use of resources at the node and to improve the overall recognition, especially for critical situations such as falls. As part of our contributions, we also release the manually curated Indoor Action Dataset. Full article
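Knowledge distillation, as used here to fit networks onto resource-limited edge nodes, can be sketched in a few lines: the student is trained to match the teacher's temperature-softened output distribution. This is the generic Hinton-style formulation, not the paper's exact training code.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T yields a softer distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between softened teacher and student distributions.

    Scaled by T^2, as in the standard formulation, so the distillation
    gradients keep a magnitude comparable to the hard-label loss term.
    """
    p = softmax(teacher_logits, temperature)   # teacher soft targets
    q = softmax(student_logits, temperature)   # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl
```

In practice this term is combined with the ordinary cross-entropy on ground-truth labels when training the distilled edge model.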

22 pages, 10759 KiB  
Article
Design of a Cyber-Physical System-of-Systems Architecture for Elderly Care at Home
by José Galeas, Alberto Tudela, Óscar Pons, Juan Pedro Bandera and Antonio Bandera
Electronics 2024, 13(23), 4583; https://doi.org/10.3390/electronics13234583 - 21 Nov 2024
Cited by 1 | Viewed by 1480
Abstract
The idea of introducing a robot into an Ambient Assisted Living (AAL) environment to provide additional services beyond those provided by the environment itself has been explored in numerous projects. Moreover, new opportunities can arise from this symbiosis, which usually requires both systems to share the knowledge (and not just the data) they capture from the context. Thus, by using knowledge extracted from the raw data captured by the sensors deployed in the environment, the robot can know where the person is and whether he/she should perform some physical exercise, as well as whether he/she should move a chair away to allow the robot to successfully complete a task. This paper describes the design of an Ambient Assisted Living system where an IoT scheme and robot coexist as independent but connected elements, forming a cyber-physical system-of-systems architecture. The IoT environment includes cameras to monitor the person’s activity and physical position (lying down, sitting…), as well as non-invasive sensors to monitor the person’s heart or breathing rate while lying in bed or sitting in the living room. Although this manuscript focuses on how both systems handle and share the knowledge they possess about the context, a couple of example use cases are included. In the first case, the environment provides the robot with information about the positions of objects in the environment, which allows the robot to augment the metric map it uses to navigate, detecting situations that prevent it from moving to a target. If there is a person nearby, the robot will approach them to ask them to move a chair or open a door. In the second case, even more use is made of the robot’s ability to interact with the person. When the IoT system detects that the person has fallen to the ground, it passes this information to the robot so that it can go to the person, talk to them, and ask for external help if necessary. Full article
(This article belongs to the Special Issue Emerging Artificial Intelligence Technologies and Applications)

17 pages, 469 KiB  
Article
Emergency Detection in Smart Homes Using Inactivity Score for Handling Uncertain Sensor Data
by Sebastian Wilhelm and Florian Wahl
Sensors 2024, 24(20), 6583; https://doi.org/10.3390/s24206583 - 12 Oct 2024
Cited by 3 | Viewed by 1859
Abstract
In an aging society, the need for efficient emergency detection systems in smart homes is becoming increasingly important. For elderly people living alone, technical solutions for detecting emergencies are essential to receiving help quickly when needed. Numerous solutions already exist based on wearable or ambient sensors. However, existing methods for emergency detection typically assume that sensor data are error-free and contain no false positives, which cannot always be guaranteed in practice. Therefore, we present a novel method for detecting emergencies in private households that detects unusually long inactivity periods and can process erroneous or uncertain activity information. We introduce the Inactivity Score, which provides a probabilistic weighting of inactivity periods based on the reliability of sensor measurements. By analyzing historical Inactivity Scores, anomalies that potentially represent an emergency can be identified. The proposed method is compared with four related approaches on seven different datasets. Our method surpasses existing approaches when considering the number of false positives and the mean time to detect emergencies. It achieves an average detection time of approximately 05:23:28 h with only 0.09 false alarms per day under noise-free conditions. Moreover, unlike related approaches, the proposed method remains effective with noisy data. Full article
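A minimal sketch of the Inactivity Score idea, under two assumptions not taken from the paper: a simple multiplicative reliability weighting, and a mean-plus-k-sigma rule over historical scores as the anomaly test.

```python
def inactivity_score(minutes_inactive, sensor_reliability):
    """Weight an observed inactivity duration by how much we trust the
    sensors that reported it (reliability in [0, 1]), so uncertain
    readings contribute a discounted score rather than a hard decision."""
    return minutes_inactive * sensor_reliability

def is_anomalous(current_score, history, k=3.0):
    """Flag an emergency candidate when the current score exceeds the
    historical mean by k standard deviations. This is a common baseline
    rule; the paper's exact detection criterion may differ."""
    mean = sum(history) / len(history)
    var = sum((s - mean) ** 2 for s in history) / len(history)
    return current_score > mean + k * var ** 0.5
```

The key property, as in the paper, is that noisy or uncertain sensor data degrades the score gracefully instead of producing an immediate false alarm.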
(This article belongs to the Special Issue Multi-sensor for Human Activity Recognition: 2nd Edition)

22 pages, 2375 KiB  
Article
Real-Time Prediction of Resident ADL Using Edge-Based Time-Series Ambient Sound Recognition
by Cheolhwan Lee, Ah Hyun Yuh and Soon Ju Kang
Sensors 2024, 24(19), 6435; https://doi.org/10.3390/s24196435 - 4 Oct 2024
Cited by 4 | Viewed by 1316
Abstract
To create an effective Ambient Assisted Living (AAL) system that supports the daily activities of patients or the elderly, it is crucial to accurately detect and differentiate user actions to determine the necessary assistance. Traditional intrusive methods, such as wearable or object-attached devices, can interfere with the natural behavior of patients and may lead to resistance. Furthermore, non-intrusive systems that rely on video or sound data processed by servers or the cloud can generate excessive data traffic and raise concerns about the security of personal information. In this study, we developed an edge-based real-time system for detecting Activities of Daily Living (ADL) using ambient noise. Additionally, we introduced an online post-processing method to enhance classification performance and extract activity events from noisy sound in resource-constrained environments. The system, tested with data collected in a living space, achieved high accuracy in classifying ADL-related behaviors in continuous events and successfully generated user activity logs from time-series sound data, enabling further analyses such as ADL assessments. Future work will focus on enhancing detection accuracy and expanding the range of detectable behaviors by integrating the activity logs generated in this study with additional data sources beyond sound. Full article
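The online post-processing step can be illustrated with a standard majority filter over a sliding window of per-frame sound-classification labels; isolated misclassifications in the noisy stream are suppressed before events are logged. This is a generic stand-in, not the authors' exact method.

```python
def smooth_events(labels, window=3):
    """Majority filter over a sliding window of per-frame class labels.

    Each output label is the most frequent label in a neighborhood
    centered on that frame, which removes one-off spurious detections.
    """
    half = window // 2
    smoothed = []
    for i in range(len(labels)):
        lo, hi = max(0, i - half), min(len(labels), i + half + 1)
        neighborhood = labels[lo:hi]
        smoothed.append(max(set(neighborhood), key=neighborhood.count))
    return smoothed
```

Runs of identical smoothed labels can then be collapsed into timestamped activity events for the ADL log.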
(This article belongs to the Special Issue Internet of Medical Things and Smart Healthcare)

26 pages, 5154 KiB  
Article
A Robust Deep Feature Extraction Method for Human Activity Recognition Using a Wavelet Based Spectral Visualisation Technique
by Nadeem Ahmed, Md Obaydullah Al Numan, Raihan Kabir, Md Rashedul Islam and Yutaka Watanobe
Sensors 2024, 24(13), 4343; https://doi.org/10.3390/s24134343 - 4 Jul 2024
Cited by 9 | Viewed by 3648 | Correction
Abstract
Human Activity Recognition (HAR) and Ambient Assisted Living (AAL) are integral components of smart homes, sports, surveillance, and investigation activities. To recognize daily activities, researchers are focusing on lightweight, cost-effective, wearable sensor-based technologies, as traditional vision-based technologies compromise the privacy of the elderly, a fundamental right of every human. However, it is challenging to extract potential features from 1D multi-sensor data. Thus, this research focuses on extracting distinguishable patterns and deep features from spectral images by time-frequency-domain analysis of 1D multi-sensor data. Wearable sensor data, particularly accelerometer and gyroscope data, act as input signals of different daily activities and provide potential information using time-frequency analysis. This potential time series information is mapped into spectral images known as ’scalograms’, derived from the continuous wavelet transform. The deep activity features are extracted from the activity images using deep learning models such as CNN, MobileNetV3, ResNet, and GoogleNet, and subsequently classified using a conventional classifier. To validate the proposed model, the SisFall and PAMAP2 benchmark datasets are used. Based on the experimental results, the proposed model shows optimal performance for activity recognition, obtaining an accuracy of 98.4% for SisFall and 98.1% for PAMAP2 using Morlet as the mother wavelet with ResNet-101 and a softmax classifier, and outperforms state-of-the-art algorithms. Full article
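A scalogram of the kind described, i.e. the magnitudes of a continuous wavelet transform using a Morlet mother wavelet, can be computed naively in pure Python. This is an illustrative O(n²)-per-scale sketch; practical pipelines use an optimized CWT implementation, and the wavelet parameters below are assumptions.

```python
import math

def morlet(t, scale, w0=6.0):
    """Real part of the Morlet mother wavelet at time offset t and scale."""
    x = t / scale
    return math.exp(-0.5 * x * x) * math.cos(w0 * x) / math.sqrt(scale)

def scalogram(signal, scales, dt=1.0):
    """Continuous wavelet transform magnitudes: one row per scale, one
    column per time step -- the 2D 'image' fed to the CNN backbone."""
    n = len(signal)
    rows = []
    for s in scales:
        row = []
        for b in range(n):
            coeff = sum(signal[k] * morlet((k - b) * dt, s)
                        for k in range(n)) * dt
            row.append(abs(coeff))
        rows.append(row)
    return rows
```

Each windowed sensor signal thus becomes a small image whose rows index scale (inverse frequency) and whose columns index time, which is what allows image-oriented networks like ResNet to classify 1D sensor activity data.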

27 pages, 7047 KiB  
Article
Using Graphs to Perform Effective Sensor-Based Human Activity Recognition in Smart Homes
by Srivatsa P and Thomas Plötz
Sensors 2024, 24(12), 3944; https://doi.org/10.3390/s24123944 - 18 Jun 2024
Cited by 2 | Viewed by 2522
Abstract
There has been a resurgence of applications focused on human activity recognition (HAR) in smart homes, especially in the field of ambient intelligence and assisted-living technologies. However, such applications present numerous significant challenges to any automated analysis system operating in the real world, such as variability, sparsity, and noise in sensor measurements. Although state-of-the-art HAR systems have made considerable strides in addressing some of these challenges, they suffer from a practical limitation: they require successful pre-segmentation of continuous sensor data streams prior to automated recognition, i.e., they assume that an oracle is present during deployment, and that it is capable of identifying time windows of interest across discrete sensor events. To overcome this limitation, we propose a novel graph-guided neural network approach that performs activity recognition by learning explicit co-firing relationships between sensors. We accomplish this by learning a more expressive graph structure representing the sensor network in a smart home in a data-driven manner. Our approach maps discrete input sensor measurements to a feature space through the application of attention mechanisms and hierarchical pooling of node embeddings. We demonstrate the effectiveness of our proposed approach by conducting several experiments on CASAS datasets, showing that the resulting graph-guided neural network outperforms the state-of-the-art method for HAR in smart homes across multiple datasets and by large margins. These results are promising because they push HAR for smart homes closer to real-world applications. Full article
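The co-firing relationships the graph structure encodes can be illustrated by simply counting which sensors trigger within the same time window. The paper learns the graph end-to-end with attention mechanisms, so this counting sketch is only a conceptual stand-in with invented sensor names.

```python
from collections import defaultdict
from itertools import combinations

def cofiring_matrix(event_windows):
    """Count how often each pair of sensors fires within the same time
    window -- a simple data-driven estimate of co-firing strength that
    could seed an adjacency matrix over the smart-home sensor network."""
    counts = defaultdict(int)
    for window in event_windows:
        for a, b in combinations(sorted(set(window)), 2):
            counts[(a, b)] += 1
    return dict(counts)
```

Pairs with high counts (e.g. a kitchen motion sensor and a cabinet contact sensor) become strong edges, giving the network an explicit structural prior over discrete sensor events.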
(This article belongs to the Special Issue Intelligent Sensors in Smart Home and Cities)

16 pages, 2456 KiB  
Article
A Biologically Inspired Movement Recognition System with Spiking Neural Networks for Ambient Assisted Living Applications
by Athanasios Passias, Karolos-Alexandros Tsakalos, Ioannis Kansizoglou, Archontissa Maria Kanavaki, Athanasios Gkrekidis, Dimitrios Menychtas, Nikolaos Aggelousis, Maria Michalopoulou, Antonios Gasteratos and Georgios Ch. Sirakoulis
Biomimetics 2024, 9(5), 296; https://doi.org/10.3390/biomimetics9050296 - 15 May 2024
Cited by 5 | Viewed by 1675
Abstract
This study presents a novel solution for ambient assisted living (AAL) applications that utilizes spiking neural networks (SNNs) and reconfigurable neuromorphic processors. As demographic shifts result in an increased need for eldercare, due to a large elderly population that favors independence, there is a pressing need for efficient solutions. Traditional deep neural networks (DNNs) are typically energy-intensive and computationally demanding. In contrast, this study turns to SNNs, which are more energy-efficient and mimic biological neural processes, offering a viable alternative to DNNs. We propose asynchronous cellular automaton-based neurons (ACANs), which stand out for their hardware-efficient design and ability to reproduce complex neural behaviors. By utilizing the remote supervised method (ReSuMe), this study improves spike train learning efficiency in SNNs. We apply this to movement recognition in an elderly population, using motion capture data. Our results highlight a high classification accuracy of 83.4%, demonstrating the approach’s efficacy in precise movement activity classification. This method’s significant advantage lies in its potential for real-time, energy-efficient processing in AAL environments. Our findings not only demonstrate SNNs’ superiority over conventional DNNs in computational efficiency but also pave the way for practical neuromorphic computing applications in eldercare. Full article
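A simplified, illustrative version of the ReSuMe update rule: desired output spikes potentiate a synapse and erroneous actual output spikes depress it, each scaled by an exponentially decaying trace of recent pre-synaptic activity. The scalar-weight framing and all constants below are assumptions for illustration, not the paper's implementation.

```python
import math

def resume_update(w, pre_spikes, post_spikes, desired_spikes,
                  lr=0.05, a=0.01, tau=10.0):
    """One pass of a simplified ReSuMe-style weight update.

    pre_spikes / post_spikes / desired_spikes are lists of spike times.
    """
    def trace(t):
        # Exponentially decaying eligibility of pre-synaptic spikes up to t.
        return sum(math.exp(-(t - tp) / tau) for tp in pre_spikes if tp <= t)

    for t in desired_spikes:
        w += lr * (a + trace(t))   # potentiate toward desired spike timing
    for t in post_spikes:
        w -= lr * (a + trace(t))   # depress erroneous output spikes
    return w
```

When the output train already matches the desired train, the two terms cancel and the weight is stable, which is the property that drives the supervised spike-timing learning described in the abstract.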
(This article belongs to the Special Issue Biologically Inspired Vision and Image Processing)

18 pages, 1837 KiB  
Systematic Review
Acceptability of Remote Monitoring in Assisted Living/Smart Homes in the United Kingdom and Associated Use of Sounds and Vibrations—A Systematic Review
by Ki Tong, Keith Attenborough, David Sharp, Shahram Taherzadeh, Manik Deepak-Gopinath and Jitka Vseteckova
Appl. Sci. 2024, 14(2), 843; https://doi.org/10.3390/app14020843 - 19 Jan 2024
Cited by 2 | Viewed by 2269
Abstract
The ageing of populations is increasing pressure on health and social care systems. Potentially, assistive technologies are a way to support the independence of older adults in their daily activities. Among existing assistive technologies, ambient sensing technologies have received less attention than wearable systems. Moreover, there has been little research into cheaper technologies capable of using multiple modalities. A systematic review of the acceptability of assisted living or smart homes in the United Kingdom and the simultaneous use of sounds and vibrations in remote monitoring of assisted living or smart homes will inform and encourage the use of digital monitoring technologies. The acceptability of sensing technologies depends on whether there is any social stigma about their use, for example, the extent to which they invade privacy. The United Kingdom studies reviewed suggest a lack of measurements of the perceived efficacy or effectiveness of the monitoring devices. The primary use of vibration or acoustic technologies has been for detecting falls rather than monitoring health. The review findings suggest the need for further exploration of the acceptability and applicability of remote monitoring technologies, as well as a need for more research into the simultaneous use of sounds and vibrations in health monitoring. Full article

17 pages, 2393 KiB  
Article
Detection of Anomalies in Daily Activities Using Data from Smart Meters
by Álvaro Hernández, Rubén Nieto, Laura de Diego-Otón, María Carmen Pérez-Rubio, José M. Villadangos-Carrizo, Daniel Pizarro and Jesús Ureña
Sensors 2024, 24(2), 515; https://doi.org/10.3390/s24020515 - 14 Jan 2024
Cited by 9 | Viewed by 2857
Abstract
The massive deployment of smart meters in most Western countries in recent decades has allowed the creation and development of a significant variety of applications, mainly related to efficient energy management. The information provided about energy consumption has also been dedicated to the areas of social work and health. In this context, smart meters are considered single-point non-intrusive sensors that might be used to monitor the behaviour and activity patterns of people living in a household. This work describes the design of a short-term behavioural alarm generator based on the processing of energy consumption data coming from a commercial smart meter. The device captured data from a household for a period of six months, thus providing the consumption disaggregated per appliance at an interval of one hour. These data were used to train different intelligent systems, capable of estimating the predicted consumption for the next one-hour interval. Four different approaches have been considered and compared when designing the prediction system: a recurrent neural network, a convolutional neural network, a random forest, and a decision tree. By statistically analysing these predictions and the actual final energy consumption measurements, anomalies can be detected in the undertaking of three different daily activities: sleeping, breakfast, and lunch. The recurrent neural network achieves an F1-score of 0.8 in the detection of these anomalies for the household under analysis, outperforming other approaches. The proposal might be applied to the generation of a short-term alarm, which can be involved in future deployments and developments in the field of ambient assisted living. Full article
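The predict-then-compare scheme can be sketched with a naive moving-average predictor standing in for the paper's recurrent network: an anomaly in a daily activity is flagged when the actual hourly consumption deviates from the prediction beyond a tolerance. The predictor, tolerance, and threshold rule here are illustrative assumptions.

```python
def predict_next(history, window=3):
    """Naive moving-average stand-in for the paper's RNN predictor:
    forecasts the next one-hour consumption from recent intervals."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def activity_anomaly(history, actual, tolerance=0.5):
    """Flag an anomaly when actual consumption deviates from the
    prediction by more than `tolerance` as a fraction of the forecast."""
    predicted = predict_next(history)
    if predicted == 0:
        return actual > 0
    return abs(actual - predicted) / predicted > tolerance
```

A breakfast-time hour with near-zero consumption against a substantial forecast, for example, would trigger the short-term behavioural alarm described above.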
(This article belongs to the Section Sensor Networks)

22 pages, 6493 KiB  
Article
Integrating Abnormal Gait Detection with Activities of Daily Living Monitoring in Ambient Assisted Living: A 3D Vision Approach
by Giovanni Diraco, Andrea Manni and Alessandro Leone
Sensors 2024, 24(1), 82; https://doi.org/10.3390/s24010082 - 23 Dec 2023
Cited by 5 | Viewed by 1905
Abstract
Gait analysis plays a crucial role in the early detection and monitoring of various neurological and musculoskeletal disorders. This paper presents a comprehensive study of the automatic detection of abnormal gait using 3D vision, with a focus on non-invasive and practical data acquisition methods suitable for everyday environments. We explore various configurations, including multi-camera setups placed at different distances and angles, as well as daily activities performed in different directions. An integral component of our study is the combination of gait analysis with the monitoring of activities of daily living (ADLs), given the paramount relevance of this integration in the context of Ambient Assisted Living. To achieve this, we investigate cutting-edge deep neural network approaches, such as the Temporal Convolutional Network, Gated Recurrent Unit, and Long Short-Term Memory Autoencoder. Additionally, we scrutinize different data representation formats, including Euclidean-based representations, angular adjacency matrices, and rotation matrices. Our system's performance evaluation leverages both publicly available datasets and data we collected ourselves, while accounting for individual variations and environmental factors. The results underscore the effectiveness of the proposed configurations in accurately classifying abnormal gait, shedding light on the optimal setup for non-invasive and efficient data collection. Full article
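Among the data representation formats the abstract names, a Euclidean-based representation can be built as the pairwise distance matrix between a frame's 3D joint positions. The sketch below is illustrative only — the joint names and coordinates are hypothetical, and the paper does not publish this code.

```python
# Sketch: a Euclidean-based skeleton representation for one frame.
# The pairwise joint-distance matrix is symmetric with a zero diagonal,
# and is invariant to camera translation and rotation.
from math import dist

def euclidean_representation(joints):
    """Pairwise joint-distance matrix for a list of (x, y, z) positions."""
    return [[dist(a, b) for b in joints] for a in joints]

frame = [(0.0, 1.7, 0.0),   # head (illustrative joint set)
         (0.0, 1.4, 0.0),   # torso
         (0.3, 0.9, 0.1)]   # knee
m = euclidean_representation(frame)
print(round(m[0][1], 2))    # head-torso distance
```

A sequence of such matrices, one per frame, is the kind of input a Temporal Convolutional Network or LSTM autoencoder can consume for abnormal-gait classification.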

16 pages, 4885 KiB  
Article
Deep Convolutional Neural Network with Symbiotic Organism Search-Based Human Activity Recognition for Cognitive Health Assessment
by Mohammed Alonazi, Haya Mesfer Alshahrani, Fadoua Kouki, Nabil Sharaf Almalki, Ahmed Mahmud and Jihen Majdoubi
Biomimetics 2023, 8(7), 554; https://doi.org/10.3390/biomimetics8070554 - 19 Nov 2023
Cited by 2 | Viewed by 1764
Abstract
Cognitive assessment plays a vital role in clinical care and research fields related to cognitive aging and cognitive health. Lately, researchers have worked towards providing solutions to measure individual cognitive health; however, it is still difficult to apply those solutions in the real [...] Read more.
Cognitive assessment plays a vital role in clinical care and research fields related to cognitive aging and cognitive health. Lately, researchers have worked towards providing solutions to measure individual cognitive health; however, it is still difficult to apply those solutions in the real world, and therefore using deep neural networks to evaluate cognitive health has become a hot research topic. Deep learning and human activity recognition are two domains that have received attention in the past few years: the former for its excellent performance and recent achievements in various fields of application, such as speech and image recognition, and the latter for its relevance in application fields like health monitoring and ambient assisted living. This research develops a novel Symbiotic Organism Search with a Deep Convolutional Neural Network-based Human Activity Recognition (SOSDCNN-HAR) model for cognitive health assessment. The goal of the SOSDCNN-HAR model is to recognize human activities in an end-to-end way. For noise elimination, the presented SOSDCNN-HAR model employs the Wiener filtering (WF) technique. In addition, it uses a RetinaNet-based feature extractor for automated extraction of features. Moreover, the SOS procedure is exploited as a hyperparameter optimization tool to enhance recognition efficiency. Furthermore, a gated recurrent unit (GRU) model is employed as a classifier to assign proper class labels. The performance of the SOSDCNN-HAR model is validated using a set of benchmark datasets. An extensive experimental examination showed the superiority of the SOSDCNN-HAR model over current approaches, with enhanced precision of 86.51% and 89.50% on the Penn Action and NW-UCLA datasets, respectively. Full article
(This article belongs to the Special Issue Biomimetic and Bioinspired Computer Vision and Image Processing)
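The Symbiotic Organism Search used above for hyperparameter tuning is a population metaheuristic with three phases: mutualism, commensalism, and parasitism. The following is a minimal, generic SOS loop sketched from the standard formulation of the algorithm, not the authors' implementation; the objective here is a toy stand-in for "validation loss of the DCNN under candidate hyperparameters", and all parameter values are illustrative.

```python
# Sketch: a minimal Symbiotic Organism Search (SOS) optimizer.
import random

def sos(objective, dims, lo, hi, pop_size=10, iters=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dims)] for _ in range(pop_size)]
    fit = [objective(x) for x in pop]

    def clamp(vec):
        return [min(max(v, lo), hi) for v in vec]

    def try_replace(k, cand):
        # Greedy acceptance: an organism changes only if the candidate is fitter.
        f = objective(cand)
        if f < fit[k]:
            pop[k], fit[k] = cand, f

    for _ in range(iters):
        for i in range(pop_size):
            best = pop[fit.index(min(fit))]
            j = rng.choice([k for k in range(pop_size) if k != i])
            # Mutualism phase: i and j both drift toward the best organism.
            mutual = [(a + b) / 2 for a, b in zip(pop[i], pop[j])]
            bf1, bf2 = rng.choice((1, 2)), rng.choice((1, 2))
            try_replace(i, clamp([a + rng.random() * (b - m * bf1)
                                  for a, m, b in zip(pop[i], mutual, best)]))
            try_replace(j, clamp([a + rng.random() * (b - m * bf2)
                                  for a, m, b in zip(pop[j], mutual, best)]))
            # Commensalism phase: i benefits from j; j is unaffected.
            try_replace(i, clamp([a + rng.uniform(-1, 1) * (b - c)
                                  for a, b, c in zip(pop[i], best, pop[j])]))
            # Parasitism phase: a mutated copy of i competes against j.
            parasite = [rng.uniform(lo, hi) if rng.random() < 0.5 else v
                        for v in pop[i]]
            try_replace(j, parasite)
    k = fit.index(min(fit))
    return pop[k], fit[k]

# Toy surrogate objective: minimize the sphere function over two "hyperparameters".
best, loss = sos(lambda x: sum(v * v for v in x), dims=2, lo=-5.0, hi=5.0)
print(best, loss)
```

Because every phase uses greedy acceptance, each organism's fitness is non-increasing, so the search converges toward the objective's minimum without any gradient information, which is what makes it usable for tuning non-differentiable quantities like recognition accuracy.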
