Special Issue "From Sensors to Ambient Intelligence for Health and Social Care"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (1 September 2019).

Special Issue Editors

Prof. Ciprian Dobre
Guest Editor
Computer Science, University Politehnica of Bucharest, 060042 Bucharest, Romania
Interests: mobile wireless networks and computing applications, pervasive services, context-awareness, and people-centric sensing
Dr. Susanna Spinsante
Guest Editor
Department of Information Engineering, Marche Polytechnic University, Ancona, Italy
Interests: electronic measurements; wearable sensors; ambient assisted living; depth sensors

Special Issue Information

Dear Colleagues,

The increase in medical expenses driven by societal issues such as demographic ageing puts strong pressure on the sustainability of health and social care systems, on labour participation, and on the quality of life of older people and persons with disabilities. This Special Issue disseminates solutions in the science and technology of integrating sensors and biosensors with processing and actuating capabilities, leading to ambient intelligence in which data are used for the benefit of the older person, allowing her to live safely, comfortably, and healthily at home. It aims to promote solutions for the provision of AAL/IoT/sensor-based infrastructures and services for independent or more autonomous living, via the seamless integration of information and communication technologies within homes and residences. Such solutions aim, fundamentally, to increase quality of life and autonomy for older adults and persons with disabilities, or to support an Active Aging lifestyle, maintaining one’s home (the preferred living environment) for as long as possible and thereby avoiding disruption of the web of social and family interactions.

The particular feature of this direction is that the sensory data have to be analysed in relation to models coming not only from Ambient Assisted Living, but also from the social sciences, psychology, and medical disciplines. Most efforts towards the realization of ambient-assisted living systems are based on developing pervasive platforms that integrate sensory readings and use Ambient Intelligence to construct a safe environment. However, the interaction of the multiple stakeholders who must collaborate to provide environments supporting a multitude of care services (actuators) is still missing, and barriers to innovation in the markets concerned, in government, and in the health and care sector mean that such innovations have not yet taken place on a relevant scale.

Many fundamental issues remain open. Most current efforts still do not fully capture the role of human beings, nor the importance of integrating sensor data with a model capable of accurately describing the power of social connections and societal activities. Additionally, effective solutions require appropriate ICT algorithms and Internet of Things and Smart Objects architectures and platforms, with a view to advancing science in this area and developing new and innovative connected solutions (particularly in the area of pervasive and mobile systems). In this sense, the Special Issue provides a platform for the dissemination of research efforts and the presentation of advances that explicitly address these challenges.

Prof. Ciprian Dobre
Dr. Susanna Spinsante
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Biomedical and environmental home monitoring
  • Ambient Intelligence
  • Health and Social Care
  • Internet of Things and Smart Objects for Ambient-Assisted Living

Published Papers (20 papers)


Research


Open Access Article
A Novel Human Activity Recognition and Prediction in Smart Home Based on Interaction
Sensors 2019, 19(20), 4474; https://doi.org/10.3390/s19204474 - 15 Oct 2019
Cited by 5
Abstract
Smart homes are widely considered the ultimate solution to everyday living problems, especially for the health care of older and disabled people, power saving, and so on. Human activity recognition in smart homes is the key to achieving home automation, enabling smart services to run automatically according to the occupant’s intent. Recent research has made considerable progress in this field; however, most approaches can only recognize predefined activities, which may not be what smart home services actually need. In addition, low scalability makes such approaches infeasible outside the laboratory. In this study, we address this issue and propose a novel framework that not only recognizes human activity but also predicts it. The framework contains three stages: recognition after the activity, recognition in progress, and activity prediction in advance. Furthermore, because passive RFID tags are used, the hardware cost of our framework is low enough to popularize it. The experimental results demonstrate that our framework achieves good performance in both activity recognition and prediction, with high scalability.
(This article belongs to the Special Issue From Sensors to Ambient Intelligence for Health and Social Care)

Open Access Article
An Unsupervised Framework for Online Spatiotemporal Detection of Activities of Daily Living by Hierarchical Activity Models
Sensors 2019, 19(19), 4237; https://doi.org/10.3390/s19194237 - 29 Sep 2019
Abstract
Automatic detection and analysis of human activities captured by various sensors (e.g., sequences of images captured by an RGB camera) play an essential role in various research fields for understanding the semantic content of a captured scene. The main focus of earlier studies has largely been on the supervised classification problem, where a label is assigned to a given short clip. Nevertheless, in real-world scenarios, such as Activities of Daily Living (ADL), the challenge is to automatically browse long-term (days and weeks) streams of video to identify segments with semantics corresponding to the model activities and their temporal boundaries. This paper proposes an unsupervised solution to this problem by generating hierarchical models that combine global trajectory information with local dynamics of the human body. Global information helps in modeling the spatiotemporal evolution of long-term activities and, hence, their spatial and temporal localization. Moreover, the local dynamic information incorporates complex local motion patterns of daily activities into the models. Our proposed method is evaluated using realistic datasets captured in observation rooms in hospitals and nursing homes. The experimental data from a variety of monitoring scenarios in hospital settings reveal how this framework can be exploited to provide timely diagnosis and medical intervention for cognitive disorders, such as Alzheimer’s disease. The obtained results show that our framework is a promising approach, capable of generating activity models without any supervision.

Open Access Article
About the Accuracy and Problems of Consumer Devices in the Assessment of Sleep
Sensors 2019, 19(19), 4160; https://doi.org/10.3390/s19194160 - 25 Sep 2019
Cited by 2
Abstract
Commercial sleep devices and mobile-phone applications for scoring sleep are gaining ground. In order to provide reliable information about the quantity and/or quality of sleep, their performance needs to be assessed against the current gold standard, i.e., polysomnography (PSG; measuring brain, eye, and muscle activity). Here, we assessed some commercially available sleep trackers: an activity tracker, the Mi Band (Xiaomi, Beijing, China); a scientific actigraph, the MotionWatch 8 (CamNtech, Cambridge, UK); and a widely used mobile phone application, Sleep Cycle (Northcube, Gothenburg, Sweden). We recorded 27 nights in healthy sleepers using PSG and these devices and compared the results. Surprisingly, all devices had poor agreement with the PSG gold standard. Sleep parameter comparisons revealed that the Mi Band and the Sleep Cycle application, in particular, had difficulties in detecting wake periods, which negatively affected their total sleep time and sleep-efficiency estimates. However, all three devices were good at detecting the most basic parameter, the actual time in bed. In summary, our results suggest that, to date, the available sleep trackers do not provide meaningful sleep analysis but may be of interest for simply tracking time in bed. A much closer interaction with the scientific field seems necessary if reliable information is to be derived from such devices in the future.
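As an illustration of the kind of epoch-by-epoch comparison against PSG described in the abstract, the following sketch computes agreement, total sleep time (TST), and wake epochs missed by the device. The sleep/wake scores are invented for illustration and are not data from the study:

```python
# Epoch-by-epoch comparison of a consumer device against PSG.
# 1 = sleep, 0 = wake, one value per 30 s epoch (invented data).
psg    = [0, 0, 1, 1, 1, 1, 0, 1, 1, 0]
device = [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]

# Fraction of epochs where the device agrees with PSG.
agreement = sum(p == d for p, d in zip(psg, device)) / len(psg)

# Total sleep time in minutes (each epoch is 0.5 min).
tst_psg = sum(psg) * 0.5
tst_dev = sum(device) * 0.5

# Wake epochs scored as sleep by the device: this inflates the device's
# TST estimate, the failure mode the paper reports for the Mi Band and
# the Sleep Cycle application.
missed_wake = sum(1 for p, d in zip(psg, device) if p == 0 and d == 1)

print(agreement, tst_psg, tst_dev, missed_wake)
```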

Open Access Article
Multi-Layer IoT Security Framework for Ambient Intelligence Environments
Sensors 2019, 19(18), 4038; https://doi.org/10.3390/s19184038 - 19 Sep 2019
Cited by 1
Abstract
Ambient intelligence is a new paradigm in the Internet of Things (IoT) world that brings smartness to living environments to make them more sensitive, adaptive, and personalized to human needs. A critical area where ambient intelligence can be applied is health and social care, where it can improve and sustain quality of life without increasing financial costs. The adoption of this new paradigm for health and social care largely depends on the technology deployed (sensors and wireless networks), the software used for decision-making, and the security, privacy, and reliability of the information. IoT sensors and wearables collect sensitive data and must respond to input changes in near real time. An IoT security framework is meant to offer the versatility and modularization needed to sustain such applications. Our framework was designed to integrate easily with different health and social care applications, separating security tasks from functional ones, with independent modules for each layer (Cloud, gateway, and IoT device) that offer functionality relative to that layer.

Open Access Article
Non-Invasive Ambient Intelligence in Real Life: Dealing with Noisy Patterns to Help Older People
Sensors 2019, 19(14), 3113; https://doi.org/10.3390/s19143113 - 14 Jul 2019
Cited by 1
Abstract
This paper aims to contribute to the field of ambient intelligence from the perspective of real environments, where noise levels in datasets are significant, by showing how machine learning techniques can contribute to knowledge creation through software sensors. The created knowledge can be made actionable in features that help deal with problems related to minimally labelled datasets. A case study is presented and analysed with the goal of inferring high-level rules that can help anticipate abnormal activities, and the potential benefits of integrating these technologies are discussed in this context. The contribution also analyses the use of the models for knowledge transfer when different sensors with different settings contribute to the noise levels. Finally, based on the authors’ experience, a framework proposal for creating valuable and aggregated knowledge is outlined.

Open Access Article
The SmartHabits: An Intelligent Privacy-Aware Home Care Assistance System
Sensors 2019, 19(4), 907; https://doi.org/10.3390/s19040907 - 21 Feb 2019
Cited by 3
Abstract
Many researchers and product developers are striving toward ICT-enabled independence of older adults by setting up Enhanced Living Environments (ELEs). Technological solutions, often based on the Internet of Things (IoT), show great potential in providing support for Active Aging. To enhance the quality of life of older adults and help individuals achieve their full potential in terms of physical, social, and mental well-being, numerous proof-of-concept systems have been built. These systems, often labeled Ambient Assisted Living (AAL), vary greatly in the user needs they target. This paper presents SmartHabits, an intelligent privacy-aware home care assistance system. The novel system, comprising smart-home-based and cloud-based parts, uses machine learning to provide peace of mind to informal caregivers of persons living alone. It does so by learning the user’s typical daily activity patterns and automatically issuing warnings if an unusual situation is detected. The system was designed and implemented from scratch, building upon existing practices from IoT reference architectures and microservices. It was deployed in several homes of real users for six months, and the findings are reported in this paper.

Open Access Article
Deep Learning for Sensor-Based Rehabilitation Exercise Recognition and Evaluation
Sensors 2019, 19(4), 887; https://doi.org/10.3390/s19040887 - 20 Feb 2019
Cited by 4
Abstract
In this paper, a multipath convolutional neural network (MP-CNN) is proposed for rehabilitation exercise recognition using sensor data. It consists of two novel components: a dynamic convolutional neural network (D-CNN) and a state transition probability CNN (S-CNN). In the D-CNN, Gaussian mixture models (GMMs) are exploited to capture the distribution of sensor data for the body movements of the physical rehabilitation exercises. The input signals and the GMMs are then screened into different segments, which form multiple paths in the CNN. The S-CNN uses a modified Lempel–Ziv–Welch (LZW) algorithm to extract the transition probabilities of hidden states as discriminative features of different movements. The D-CNN and the S-CNN are then combined to build the MP-CNN. To evaluate rehabilitation exercises, a special evaluation matrix is proposed along with the deep learning classifier, which learns a general feature representation for each class of rehabilitation exercise at different levels. Any rehabilitation exercise can then be classified by the deep learning model and compared to the learned best features; the distance to the best feature is used as the evaluation score. We demonstrate our method on our collected dataset and several activity recognition datasets. The classification results are superior to those obtained using other deep learning models, and the evaluation scores are effective for practical applications.

Open Access Article
Danger-Pose Detection System Using Commodity Wi-Fi for Bathroom Monitoring
Sensors 2019, 19(4), 884; https://doi.org/10.3390/s19040884 - 20 Feb 2019
Cited by 3
Abstract
A bathroom has a higher probability of accidents than other rooms due to its slippery floor and temperature changes. Because of high privacy and humidity, it is difficult to monitor the inside of a bathroom using traditional healthcare methods based on cameras and wearable sensors. In this paper, we present a danger-pose detection system using commodity Wi-Fi devices, which can be applied to bathroom monitoring while preserving privacy. A machine-learning-based detection method usually requires data collected in the target situations, which is difficult to obtain for dangerous situations. We therefore employ a machine-learning-based anomaly-detection method that requires only a small amount of data in anomaly conditions, minimizing the training data that must be collected in dangerous conditions. We first derive the amplitude and phase shift from Wi-Fi channel state information (CSI) to extract low-frequency components related to human activities. We then separately extract static and dynamic features from the CSI changes over time. Finally, the static and dynamic features are fed into a one-class support vector machine (SVM), used as an anomaly detector, to classify whether a user is out of the bathtub, bathing safely, or in a dangerous condition. We conducted experimental evaluations and demonstrated that our danger-pose detection system achieves high detection performance in a non-line-of-sight (NLOS) scenario.
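The anomaly-detection step described in the abstract can be illustrated with scikit-learn's OneClassSVM. The features below are synthetic stand-ins; the paper's CSI-derived static/dynamic features are not reproduced here:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Illustrative stand-ins for CSI-derived features: "safe bathing"
# features cluster in one region, while a danger pose drifts far away.
safe_features = rng.normal(loc=0.0, scale=0.3, size=(200, 4))
danger_sample = np.full((1, 4), 3.0)

# One-class SVM fitted only on normal-condition data: the anomaly-detection
# setting the paper uses to avoid collecting data in dangerous conditions.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(safe_features)

# sklearn convention: +1 = inlier, -1 = outlier.
inlier_rate = (clf.predict(safe_features) == 1).mean()
print(clf.predict(danger_sample)[0])  # -1: flagged as anomalous
print(inlier_rate)                    # most training samples remain inliers
```

Setting `nu=0.05` bounds the fraction of training points treated as outliers, which matches the intent of training almost entirely on safe data.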

Open Access Article
A Telemedicine Robot System for Assisted and Independent Living
Sensors 2019, 19(4), 834; https://doi.org/10.3390/s19040834 - 18 Feb 2019
Cited by 7
Abstract
The emerging demographic trend toward an aging population demands new ways and solutions to improve the quality of elderly life. These include prolonged independent living, improved health care, and reduced social isolation. Recent technological advances in the field of assistive robotics bring higher sophistication and various assistive abilities that can help achieve these goals. In this paper, we present the design and validation of a low-cost telepresence robot that can assist the elderly and their professional caregivers in everyday activities. The developed robot structure and its control objectives were tested in both simulation and experimental environments. Field experiments were conducted in a private elderly care center, with elderly persons and caregivers as participants. The goal of the evaluation study was to test the software architecture and the robot’s capabilities for navigation, as well as the robot manipulator. Moreover, participants’ reactions toward a possible adoption of the developed robot system in everyday activities were assessed. The results of the evaluation study are also presented and discussed.

Open Access Article
Gesture Prediction Using Wearable Sensing Systems with Neural Networks for Temporal Data Analysis
Sensors 2019, 19(3), 710; https://doi.org/10.3390/s19030710 - 09 Feb 2019
Cited by 4
Abstract
A human gesture prediction system can be used to estimate human gestures in advance of the actual action to reduce delays in interactive systems. Hand gestures are particularly important for human–computer interaction, so the gesture prediction system must be able to capture hand movements that are both complex and quick. We have previously reported a method that allows strain sensors and wearable devices to be fabricated simply and easily using pyrolytic graphite sheets (PGSs). The wearable electronics can detect various types of human gestures with high sensitivity, high durability, and fast response. In this study, we demonstrated hand gesture prediction by artificial neural networks (ANNs) using gesture data obtained from data gloves based on PGSs. Our experiments entailed measuring the hand gestures of subjects for training purposes, and we used these data to create four-layered ANNs, which enabled the proposed system to successfully predict hand gestures in real time. A comparison of the proposed method with other algorithms for temporal data analysis suggests that the hand gesture prediction system using ANNs can forecast various types of hand gestures from resistance data obtained from wearable devices based on PGSs.

Open Access Article
Validation of Thigh Angle Estimation Using Inertial Measurement Unit Data against Optical Motion Capture Systems
Sensors 2019, 19(3), 596; https://doi.org/10.3390/s19030596 - 31 Jan 2019
Cited by 1
Abstract
Inertial measurement units (IMUs) are commonly used to estimate the orientation of segments of the human body in inertial navigation systems. Most of the algorithms used for orientation estimation are computationally expensive and difficult to implement in real-time embedded systems with restricted capabilities. This paper discusses a computationally inexpensive orientation estimation algorithm (the Gyro-Integration-Based Orientation Filter, GIOF) used to estimate the forward and backward swing angle of the thigh (thigh angle) for a vision-impaired navigation aid. The algorithm fuses the accelerometer and gyroscope readings to derive a single-dimension orientation: the orientation is corrected using the accelerometer reading when it reads gravity only, and otherwise the gyro reading is integrated to estimate the orientation. This strategy reduces the drift caused by gyro integration. The thigh angle estimated by GIOF was compared against a Vicon optical motion capture system and reported a mean correlation of 99.58% over 374 walking trials, with a standard deviation of 0.34%. The root mean square error (RMSE) of the thigh angle estimated by GIOF compared with the Vicon measurement was 1.8477°. The computation time on an 8-bit microcontroller running at 8 MHz is about half that of a complementary filter implementation. Although GIOF was only implemented and tested for estimating the pitch of the IMU, it can easily be extended to 2D to estimate both pitch and roll.
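The correction strategy described in the abstract (trust the accelerometer when it reads gravity alone, otherwise integrate the gyro) can be sketched for a single axis as follows. The function name, signature, and stillness tolerance are illustrative assumptions, not the authors' implementation:

```python
import math

def giof_update(theta, gyro_rate, accel, dt, g=9.81, tol=0.05):
    """One step of a gyro-integration-based orientation filter for a
    single axis (pitch).

    theta     -- current pitch estimate (rad)
    gyro_rate -- angular rate about the pitch axis (rad/s)
    accel     -- (ax, az) accelerometer components in the pitch plane (m/s^2)
    dt        -- time step (s)
    """
    ax, az = accel
    magnitude = math.hypot(ax, az)
    if abs(magnitude - g) < tol * g:
        # Accelerometer measures gravity alone (sensor nearly static):
        # correct the orientation directly from the gravity direction.
        return math.atan2(ax, az)
    # Otherwise integrate the gyro rate, accepting slow drift until
    # the next gravity-only correction.
    return theta + gyro_rate * dt

# Static sensor: gravity along z, so pitch snaps to 0.
print(giof_update(0.5, 0.0, (0.0, 9.81), 0.01))
# Moving sensor: acceleration magnitude != g, so the gyro is integrated.
print(giof_update(0.2, 1.0, (3.0, 3.0), 0.01))
```

The gravity-only check is what limits drift without the full per-step blending of a complementary filter, consistent with the lower computation time reported.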

Open Access Article
A Comparison of Machine Learning and Deep Learning Techniques for Activity Recognition using Mobile Devices
Sensors 2019, 19(3), 521; https://doi.org/10.3390/s19030521 - 26 Jan 2019
Cited by 1
Abstract
We compared the performance of different machine learning techniques for human activity recognition. Experiments were performed using a benchmark dataset in which each subject wore a device in the pocket and another on the wrist. The dataset comprises thirteen activities, including physical activities, common postures, working activities, and leisure activities. We apply a methodology known as the activity recognition chain, a sequence of steps involving preprocessing, segmentation, feature extraction, and classification for traditional machine learning methods; we also tested convolutional deep learning networks that operate on raw data instead of computed features. The results show that combining the two sensors does not necessarily improve accuracy. We found that the best results are obtained by the extremely randomized trees approach operating on precomputed features from the wrist sensor. The tested deep learning architecture did not produce competitive results.
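A minimal sketch of the activity recognition chain named in the abstract (segmentation into windows, feature extraction, classification with extremely randomized trees) on synthetic accelerometer-like data. The window size, step, and feature set are illustrative choices, not those of the paper:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

def windows(signal, size, step):
    """Segmentation step: fixed-size sliding windows over a 1-D signal."""
    return np.stack([signal[i:i + size]
                     for i in range(0, len(signal) - size + 1, step)])

def features(win):
    """Feature extraction step: simple per-window statistics."""
    return np.column_stack([win.mean(axis=1), win.std(axis=1),
                            win.min(axis=1), win.max(axis=1)])

rng = np.random.default_rng(1)
# Synthetic stand-in for wrist accelerometer magnitude:
# 'sitting' is flat, 'walking' oscillates.
sitting = rng.normal(1.0, 0.02, 4000)
walking = 1.0 + 0.5 * np.sin(np.arange(4000) / 5) + rng.normal(0, 0.05, 4000)

X = np.vstack([features(windows(sitting, 128, 64)),
               features(windows(walking, 128, 64))])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) - len(X) // 2))

# Classification step: extremely randomized trees on precomputed features.
clf = ExtraTreesClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy on this easy synthetic task
```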

Open Access Article
IoT/Sensor-Based Infrastructures Promoting a Sense of Home, Independent Living, Comfort and Wellness
Sensors 2019, 19(3), 485; https://doi.org/10.3390/s19030485 - 24 Jan 2019
Cited by 2
Abstract
This paper presents the results of three interrelated studies concerning the specification and implementation of ambient assisted living (AAL)/Internet of Things (IoT)/sensor-based infrastructures to support resident wellness and person-centered care delivery in a residential care context. Overall, the paper reports on the emerging wellness management concept and IoT solution. The three studies adopt a stakeholder evaluation approach to requirements elicitation and solution design. Human factors research combines several qualitative human–machine interaction (HMI) design frameworks/methods, including realist ethnography, process mapping, persona-based design, and participatory design. Software development activities are underpinned by SCRUM/Agile frameworks. Three structuring principles underpin the resident’s lived experience and the proposed ‘sensing’ framework: (1) resident wellness, (2) the resident’s environment (i.e., the room and the broader social spaces that constitute ‘home’ for the resident), and (3) care delivery. The promotion of resident wellness, autonomy, quality of life, and social participation depends on adequate monitoring and evaluation of information pertaining to (1), (2), and (3). Furthermore, the application of ambient assisted living technology in a residential setting depends on a clear definition of the related care delivery processes and allied social and interpersonal communications. It is argued that independence (and quality of life for older adults) is linked to technology that enables interdependence, and specifically technology that supports social communication between key roles, including residents, caregivers, and family members.

Open Access Article
Human Physical Activity Recognition Using Smartphone Sensors
Sensors 2019, 19(3), 458; https://doi.org/10.3390/s19030458 - 23 Jan 2019
Cited by 15
Abstract
Because the number of elderly people is predicted to increase quickly in the upcoming years, “aging in place” (living at home regardless of age and other factors) is becoming an important topic in the area of ambient assisted living. In this paper, we therefore propose a human physical activity recognition system based on data collected from smartphone sensors. The proposed approach involves developing a classifier using three sensors available on a smartphone: the accelerometer, gyroscope, and gravity sensor. We chose to implement our solution on mobile phones because they are ubiquitous and do not require subjects to carry additional sensors that might impede their activities. We target walking, running, sitting, standing, and ascending and descending stairs. We evaluated the solution against two datasets (an internal one collected by us and an external one) with good results: all six activities are recognized accurately, with especially good results for walking, running, sitting, and standing. The system is fully implemented on a mobile device as an Android application.
(This article belongs to the Special Issue From Sensors to Ambient Intelligence for Health and Social Care)
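The pipeline the abstract above describes (windowed smartphone sensor readings mapped to one of six activities) can be sketched as follows. The window size, the two magnitude statistics, the centroid values, and the nearest-centroid classifier are all illustrative assumptions for this sketch, not the paper’s actual model:

```python
import math

def window_features(samples):
    """Compute simple statistics over one window of 3-axis accelerometer
    samples. samples: list of (x, y, z) tuples in m/s^2.
    Returns (mean magnitude, standard deviation of magnitude)."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    n = len(mags)
    mean = sum(mags) / n
    var = sum((m - mean) ** 2 for m in mags) / n
    return (mean, math.sqrt(var))

def nearest_centroid(feature, centroids):
    """Label a feature vector with the activity whose centroid is closest
    (squared Euclidean distance)."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda label: dist(feature, centroids[label]))

# Hypothetical per-activity centroids: standing sits near gravity (~9.8 m/s^2)
# with little variance; running has a higher mean magnitude and much more
# variance. Real centroids would be learned from labeled windows.
centroids = {
    "standing": (9.8, 0.1),
    "walking": (10.5, 1.5),
    "running": (12.0, 4.0),
}

# A synthetic 'standing' window: near-constant gravity on the z axis.
still_window = [(0.0, 0.0, 9.8 + 0.01 * (i % 3)) for i in range(50)]
features = window_features(still_window)
print(nearest_centroid(features, centroids))  # prints "standing"
```

In practice, the gyroscope and gravity sensor streams mentioned in the abstract would contribute further features per window, and the classifier would be trained rather than hand-set.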

Open Access Article
A Social Virtual Reality-Based Application for the Physical and Cognitive Training of the Elderly at Home
Sensors 2019, 19(2), 261; https://doi.org/10.3390/s19020261 - 10 Jan 2019
Cited by 6
Abstract
Frailty is a clinical condition affecting the elderly population which results in an increased risk of falls. Previous studies demonstrated that falls prevention programs are effective, but they suffer from low adherence, especially when subjects have to train unsupervised in their homes. To improve treatment adherence, virtual reality and social media have been proposed as promising strategies for increasing users’ motivation and thus their willingness to practice. In the context of smart homes, this work presents SocialBike, a virtual reality-based application aimed at improving the clinical outcomes of older frail adults in their houses. SocialBike is integrated in the “house of the future” framework and proposes a Dual Task training program in which users are required to cycle on a stationary bike while recognizing target animals or objects appearing along the way. It also implements the possibility of training with other users, thus reducing the risk of social isolation. Within SocialBike, users can choose the multiplayer mode they prefer (i.e., collaborative or competitive) and are allowed to train according to their own preferences. SocialBike’s validation, refinement, and business model are currently under development and are briefly discussed as future work. Full article
(This article belongs to the Special Issue From Sensors to Ambient Intelligence for Health and Social Care)

Open Access Article
An Intelligent System for Monitoring Skin Diseases
Sensors 2018, 18(8), 2552; https://doi.org/10.3390/s18082552 - 04 Aug 2018
Cited by 14
Abstract
The growing interest in intelligent technologies has driven rapid development of sensors and automatic mechanisms for smart operations. Implementations concentrate on technologies which avoid unnecessary actions on the user’s side while examining health conditions. One important aspect is the constant inspection of skin health, owing to possible diseases such as melanomas that can develop under excessive exposure to sunlight. Smart homes can be equipped with a variety of motion sensors and cameras which can be used to detect and identify possible disease development. In this work, we present a smart home system which uses built-in sensors and the proposed artificial intelligence methods to diagnose the skin health condition of the residents of the house. The proposed solution has been tested and is discussed with regard to its potential use in practice. Full article
(This article belongs to the Special Issue From Sensors to Ambient Intelligence for Health and Social Care)

Open Access Article
A Globally Generalized Emotion Recognition System Involving Different Physiological Signals
Sensors 2018, 18(6), 1905; https://doi.org/10.3390/s18061905 - 11 Jun 2018
Cited by 7
Abstract
Machine learning approaches for human emotion recognition have recently demonstrated high performance, but mostly for subject-dependent approaches, in a variety of applications like advanced driver assistance systems, smart homes, and medical environments. The focus is therefore shifting towards subject-independent approaches, which are more universal: the emotion recognition system is trained using a specific group of subjects and then tested on totally new persons, possibly using other sensors for the same physiological signals, in order to recognize their emotions. In this paper, we explore a novel robust subject-independent human emotion recognition system, which consists of two major models. The first is an automatic feature calibration model and the second is a classification model based on Cellular Neural Networks (CNN). The proposed system produces state-of-the-art results, with an accuracy rate between 80% and 89% when using the same elicitation materials and physiological sensor brands for both training and testing, and an accuracy rate of 71.05% when the elicitation materials and physiological sensor brands used in training are different from those used in testing. The following physiological signals are involved: ECG (electrocardiogram), EDA (electrodermal activity), and ST (skin temperature). Full article
(This article belongs to the Special Issue From Sensors to Ambient Intelligence for Health and Social Care)
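The two-stage design in the abstract above (feature calibration, then classification) can be illustrated with a minimal sketch. The paper’s calibration model is not specified here; per-subject z-score standardization is a common baseline for making physiological features comparable across subjects and sensor brands, and is used below purely as an assumption, with hypothetical heart-rate values:

```python
def standardize(values):
    """Per-subject z-score normalization: removes a subject-specific
    baseline and scale so that features from different people (or from
    different sensor brands) become directly comparable."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5 or 1.0  # guard against a zero-variance window
    return [(v - mean) / std for v in values]

# Two hypothetical subjects with different resting heart-rate baselines:
# the raw values differ by a constant offset, so they are not comparable,
# but the standardized trajectories coincide.
subject_a_hr = [60.0, 62.0, 64.0, 90.0]   # arousal spike at the end
subject_b_hr = [75.0, 77.0, 79.0, 105.0]  # same pattern, higher baseline

za = standardize(subject_a_hr)
zb = standardize(subject_b_hr)
```

A subject-independent classifier (the CNN model in the paper) would then be trained on calibrated features from one group of subjects and tested on calibrated features from entirely new ones.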

Review


Open Access Review
A Survey of Vision-Based Human Action Evaluation Methods
Sensors 2019, 19(19), 4129; https://doi.org/10.3390/s19194129 - 24 Sep 2019
Cited by 3
Abstract
The fields of human activity analysis have recently begun to diversify. Many researchers have taken a strong interest in developing action recognition or action prediction methods. Research on human action evaluation differs in that it aims to design computation models and evaluation approaches for automatically assessing the quality of human actions. This line of study has become popular because of its rapidly emerging real-world applications, such as physical rehabilitation, assistive living for elderly people, skill training on self-learning platforms, and sports activity scoring. This paper presents a comprehensive survey of approaches and techniques in action evaluation research, including motion detection and preprocessing using skeleton data, handcrafted feature representation methods, and deep learning-based feature representation methods. The benchmark datasets from this research field and some evaluation criteria employed to validate the algorithms’ performance are introduced. Finally, the authors present several promising future directions for further studies. Full article
(This article belongs to the Special Issue From Sensors to Ambient Intelligence for Health and Social Care)
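Among the handcrafted skeleton-data features that surveys like the one above cover, joint angles are a representative example: the quality of a rehabilitation exercise can be scored from how closely a joint angle tracks a reference trajectory. The sketch below computes the angle at one joint from 3-D coordinates; the hip/knee/ankle positions are hypothetical values for illustration:

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by segments b->a and b->c,
    computed from 3-D joint coordinates. A standard handcrafted
    feature for skeleton-based action analysis."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(u * v for u, v in zip(v1, v2))
    n1 = math.sqrt(sum(u * u for u in v1))
    n2 = math.sqrt(sum(v * v for v in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Hypothetical hip, knee, and ankle positions for one frame of a squat:
hip, knee, ankle = (0.0, 1.0, 0.0), (0.0, 0.5, 0.3), (0.0, 0.0, 0.0)
knee_angle = joint_angle(hip, knee, ankle)  # a bent knee, well below 180°
```

Computed per frame across a sequence, such angles form a trajectory that either handcrafted scoring rules or a learned evaluation model can assess.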

Open Access Review
A Technological Review of Wearable Cueing Devices Addressing Freezing of Gait in Parkinson’s Disease
Sensors 2019, 19(6), 1277; https://doi.org/10.3390/s19061277 - 13 Mar 2019
Cited by 6
Abstract
Freezing of gait is one of the most debilitating symptoms of Parkinson’s disease and is an important contributor to falls, making it a major cause of hospitalization and nursing home admissions. When the management of freezing episodes cannot be achieved through medication or surgery, non-pharmacological methods such as cueing have received attention in recent years. Novel cueing systems were developed over the last decade and have been evaluated predominantly in laboratory settings. However, to provide benefit to people with Parkinson’s and improve their quality of life, these systems must have the potential to be used at home as a self-administered intervention. This paper provides a technological review of the literature on wearable cueing systems, focusing on current auditory, visual, and somatosensory cueing systems which may provide a suitable intervention for use in home-based environments. The paper describes the technical operation and effectiveness of the different cueing systems in overcoming freezing of gait. The “What Works Clearinghouse (WWC)” tool was used to assess the quality of each study described. The paper’s findings should prove instructive for researchers looking to enhance the effectiveness of future cueing systems. Full article
(This article belongs to the Special Issue From Sensors to Ambient Intelligence for Health and Social Care)

Open Access Review
A Comprehensive Survey of Vision-Based Human Action Recognition Methods
Sensors 2019, 19(5), 1005; https://doi.org/10.3390/s19051005 - 27 Feb 2019
Cited by 24
Abstract
Although widely used in many applications, accurate and efficient human action recognition remains a challenging area of research in the field of computer vision. Most recent surveys have focused on narrow problems such as human action recognition methods using depth data, 3D-skeleton data, still image data, spatiotemporal interest point-based methods, and human walking motion recognition. However, there has been no systematic survey of human action recognition. To this end, we present a thorough review of human action recognition methods and provide a comprehensive overview of recent approaches in human action recognition research, including progress in hand-designed action features in RGB and depth data, current deep learning-based action feature representation methods, advances in human–object interaction recognition methods, and the current prominent research topic of action detection methods. Finally, we present several analysis recommendations for researchers. This survey paper provides an essential reference for those interested in further research on human action recognition. Full article
(This article belongs to the Special Issue From Sensors to Ambient Intelligence for Health and Social Care)
