Sensors
  • Article
  • Open Access

16 October 2021

Robotic-Based Well-Being Monitoring and Coaching System for the Elderly in Their Daily Activities

1. ETSII (Escuela Técnica Superior de Ingeniería Industrial), Technical University of Cartagena, St. Dr. Fleming, s/n, 30203 Cartagena, Spain
2. AASS (Applied Autonomous Sensor Systems), Örebro University, 70281 Örebro, Sweden
3. The Hamlyn Centre for Robotic Surgery, Imperial College London, London SW7 2AZ, UK
4. Department of Evolutionary and Educational Psychology, Faculty of Psychology, Campus Regional Excellence Mare Nostrum, University of Murcia, 30100 Murcia, Spain
This article belongs to the Special Issue Sensor and Assistive Technologies for Smart Life

Abstract

The increasingly ageing population and the tendency to live alone have led science and engineering researchers to search for health care solutions. During the COVID-19 pandemic, the elderly have been seriously affected, in addition to suffering from isolation and its associated psychological consequences. This paper provides an overview of the RobWell (Robotic-based Well-Being Monitoring and Coaching System for the Elderly in their Daily Activities) system, a system focused on the field of artificial intelligence for mood prediction and coaching. This paper presents a general overview of the initially proposed system as well as the preliminary results related to the home automation subsystem, autonomous robot navigation and mood estimation through machine learning prior to the final system integration, which will be discussed in future works. The main goal is to improve the mental well-being of the elderly during their daily household activities. The system is composed of ambient intelligence with intelligent sensors, actuators and a robotic platform that interacts with the user. A test smart home was set up in which the sensors, actuators and robotic platform were integrated and tested. For artificial intelligence applied to mood prediction, we used machine learning to classify several physiological signals into different moods. In robotics, it was concluded that the ROS autonomous navigation stack and its autodocking algorithm were not reliable enough for this task, while the robot’s autonomy was sufficient; semantic navigation, artificial intelligence and computer vision alternatives are being sought.

1. Introduction

Scientific advances, in general terms, have improved living conditions, as reflected in such basic aspects as longevity. According to several studies, around 125 million people over the age of 65 are expected to live in the EU in 2030. Life expectancy has indeed improved considerably in recent years. Spain is one of the countries at the top of this list despite a decline of 1.6 years in 2020 due to the COVID-19 pandemic, as has happened in other countries [1].
Increased longevity is not always accompanied by greater happiness [2]. Cities are getting bigger and bigger and, at the same time, there are more lonely people. According to the survey published by the INE (the Spanish National Institute of Statistics), 42.7% of women over the age of 85 lived alone, compared to 23.6% of men [3]. Since humans are naturally social, this is becoming a significant psychological problem [4,5,6]. A clear example is the current COVID-19 pandemic, which has brought about social isolation with repercussions on mental health. Lockdowns and quarantines have forced even more of the elderly to live alone. INE statistics show that the number of people over 65 who lived alone in 2020 was 2,131,400 compared to 2,009,100 in 2019 [7], so loneliness is already being described as one of the main epidemics of the 21st century.
Despite the loneliness involved, older people defend their right to stay at home and in many cases refuse to move in with relatives or to go to residences for the elderly, even though their mobility and ability to look after themselves may be progressively reduced [8]. Although assistance can be given by home caregivers, it will be difficult to cover all demands for home assistance in the near future due to a shortage of available health workers and doctors, a result of greater life expectancy and a low birth rate. There is thus an urgent need for innovative forms of support and health care to maintain the physical and mental well-being of the elderly; society should prepare for this by adapting to the greater health assistance and monitoring requirements.
Such technological support also has the advantage of reaching populations with scarce resources that are difficult to reach by traditional means [9]. In recent years, psychology has contributed to promoting interest in health and well-being during aging, the relevant goal being that people live longer with quality of life and, above all, with autonomy, health and well-being [10].
The RobWell Project began in 2019 with the aim of finding technological solutions to these needs. The team is a multidisciplinary group of researchers, engineers and psychologists from the Technical University of Cartagena, the University of Murcia (Spain), Örebro University (Sweden) and Kyushu University (Japan). It is part of the larger HIMTAE Project in cooperation with the University Carlos III of Madrid, which includes physical assistance in the kitchen with a manipulating robot (not described in this paper). The RobWell Project includes a mobile robotic platform integrated with ambient intelligence in a smart home, and also includes estimating the user’s mood through wearables with a proposal for emotional coaching strategies.
From the user’s point of view, the idea is to create a relatively simple system that can be installed in homes to monitor the user’s daily activities and his/her state of health and mood by means of distributed home automation sensors, medical devices and smart bands. The data are interpreted by artificial intelligence to estimate the habits and mood of those living alone. If strange behavior or a low mood is detected, emotional coaching strategies are proposed by a small robot the size of a robot vacuum cleaner [9] through smart speakers. Alerts can also be generated to reach caregivers, family members or emergency services if necessary. Strange behavior would include, for example, spending too long in bed, not going to the bathroom for a long time, or not opening the refrigerator all day. Emotional coaching strategies are recommendations such as: “You should go for a walk”, “Why don’t you call someone on the phone?” or “You haven’t eaten anything for a long time”. The robotic-based ambient intelligence system will not only detect situations that suggest the person needs help but can also suggest activities to improve the person’s mood, such as leaving the house to meet other people.
This system requires expertise in ambient assisted living, sensor data analysis and machine learning. There is an additional distributed artificial intelligence system that coordinates all the subsystems, with a symbiosis of physical and virtual robotic and AI subsystems to assist the elderly living alone during their daily activities. Advances in smart home devices and activity wristbands may make it possible to design assistive environments with low-cost devices and thus achieve a relatively simple system that can be marketed at an affordable price to potential users.
One of the most important aspects of this project is the artificial intelligence system involved in mood recognition. As we pointed out in a previous paper [10], good results have been obtained in laboratory studies [11,12]. However, there are many problems with making these experiments a reality. The biggest impediments encountered are the number of uncontrollable variables involved in daily activities and the inconvenience of including the sensors used in the studies in daily tasks. Advances have been made in wearable devices, including the possibility of transferring these studies to other scenarios (such as clinics [13] or experiments without laboratory models [14]). Ecological momentary assessments (EMAs) have been used in this project as proposed in [10].
The main objective of this article is to present the RobWell project, including the system architecture and integration of the distributed sensor ecosystem, data collection and method of mood prediction using machine learning algorithms. To do so, the following points will be discussed:
  • General scheme of the proposed software and hardware. This section will present the general hardware/software composition, including elements such as the robotic platform, the sensors for home automation and user monitoring, the actuators, the data acquisition system for mood estimation, the user interface and so forth.
  • Integration of the proposed elements. The proposed test system and the integration achieved in the field of mobile robotic platform and home automation will be discussed.
  • Robustness of the navigation of the robotic platform. The navigation system implemented on the robotic platform will be discussed together with the continuous operation tests. In this way it will be possible to discuss whether autonomous navigation needs more sophisticated navigation strategies. The autodocking algorithm offered for the Kobuki robotic base will also be tested for robustness in continuous operation. With the continuous running test, the autonomy of the robotic system will be observed.
  • Mood prediction. The data acquisition system designed for conducting the experiment in an everyday environment is discussed. On the other hand, the machine learning strategy to be used for mood prediction is also discussed. In addition, the psychological tools to be used (questionnaires and EMAs) are presented.

3. Materials and Methods

3.1. Proposed System Design

3.1.1. Hardware Architecture

The proposed system consists of different elements. As in any control system, a distinction can be made between sensors, control and actuators. In the hardware architecture of the proposed system, the different nodes (sensor, control and actuator) are composed of the elements shown in Table 1.
Table 1. General description of the system elements.
All these elements are represented in Figure 1. It should be noted that all these elements will be dealt with in depth in later sections.
Figure 1. Hardware schematic of the system (only some representative elements of each subsystem are shown).

3.1.2. Software Design

The main problem with commercial devices is that, even though they may comply with the open Zigbee standard, each manufacturer has its own gateway and user application (Ikea, LIDL, Xiaomi, Philips, etc.). This makes it impossible to use devices with another brand’s gateway, whereas the aim of this project was to integrate devices of different brands. One of the main software tasks was thus to provide inter-device interoperability and transparency.
As can be seen in Figure 2, the system is composed of a variety of software components for different domains. The elements that need specific software are as follows: robotic platforms, home automation system, positioning system, data acquisition system for predicting affective state and integration software. Most of these elements have already been developed or are being improved, except for the indoor positioning system, which is still under development. The rest of the systems are listed below with a brief description:
Figure 2. Software schematic of the system.
  • Robotic platforms. Of the robotic platforms in Figure 1, only the emotional coaching platform will be explained, because the other platform is being developed within the HIMTAE Project at the Carlos III University of Madrid. The teleoperated mapping, autonomous navigation, autodocking and power management software has already been implemented and tested to determine its performance in continuous operation. Further features and enhancements are under development.
  • Smart home. The software that will run all the home automation logic is the Home Assistant operating system [93], selected because of its integration tools and the associated community for troubleshooting. It will run a Node-RED server and a Mosquitto MQTT broker. In addition, it also allows the use of the CC2531 USB dongle.
  • Acquisition system for affective state prediction. As can be seen in Figure 1, at the hardware level, this system consists of an Empatica E4 medical device and a smartphone. Two Android applications are used for the extraction of the data used in mood prediction. The first one, E4 RealTime, is the official application offered by Empatica for the extraction of physiological data. There is also a self-developed application for conducting tests and questionnaires to relate physiological data to mood states. The final idea is that this whole process will take place in a single application.
  • General integration system. This is the system in charge of carrying out the integration between the elements: they are differentiated only as sensors and actuators, regardless of brand or manufacturer. This whole system will be developed on Ubuntu 16.04 and ROS. The use of ROS for this task is justified by the fact that the integration is, in some cases, direct; furthermore, its structure of nodes and topics following the publisher/subscriber policy makes development more accessible. The development of this application is not yet complete; for the time being, the integration paths have been developed and tested in a lightweight way. For example, the integration between Node-RED and ROS has been tested, as has the integration between elements within ROS, between Zigbee and Node-RED elements with MQTT, and between Android and Node-RED using MQTT (a minimal sketch of one such path is shown after this list). The integration application itself is currently in the conceptual development phase.
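As an illustration of one of these integration paths, the following sketch republishes MQTT messages on a ROS topic directly in Python (using paho-mqtt and rospy) rather than through Node-RED. The broker address, MQTT topic and ROS topic names are placeholders, not the project's actual configuration.

```python
# Minimal bridge between MQTT and ROS (paho-mqtt 1.x style client, rospy).
# The broker host, MQTT topic and ROS topic below are illustrative placeholders.
import json

import paho.mqtt.client as mqtt
import rospy
from std_msgs.msg import String


def main():
    rospy.init_node("mqtt_ros_bridge")
    ros_pub = rospy.Publisher("/smart_home/motion", String, queue_size=10)

    def on_message(client, userdata, msg):
        # Zigbee2MQTT publishes device state as a JSON payload.
        payload = json.loads(msg.payload.decode("utf-8"))
        ros_pub.publish(String(data=json.dumps(payload)))

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("localhost", 1883)      # Mosquitto broker on the Raspberry Pi
    client.subscribe("zigbee2mqtt/livingroom_motion")
    client.loop_start()                    # MQTT network loop in a background thread
    rospy.spin()                           # keep the ROS node alive
    client.loop_stop()


if __name__ == "__main__":
    main()
```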

3.2. Home Automation

3.2.1. Sensor Environment

In the introduction, we talked about the different types of smart homes associated with health and wellbeing. This leads us to select sensors according to the activities to be monitored. Elements belonging to other fields can also be added, such as security (against both external and internal agents), entertainment or energy saving, all depending on the user’s needs.
The sensor environment consists of a set of commercial and self-developed sensors whose main function is to monitor the environment surrounding the user in the least intrusive way possible. By intrusiveness we refer here to possible building work or modifications to the home. One of the objectives to keep in mind is that most sensors should be small, unobtrusive and wireless. Obviously, this will not be achieved in all cases, as some sensors will require mains power. The monitoring is focused on mental and physical health (a smart home oriented towards well-being and health). At this point, one might think directly of the typical activity bracelets that provide information on physiological parameters. While these are a valuable source of information for predicting the user’s affective state, there are many other sources in the home environment; for example, the frequency of personal grooming, opening the windows, raising the blinds to let the sun in, the time spent in bed or on the couch, and so forth (Table 2).
Table 2. Proposed monitoring activities and devices.
With the information obtained from these sensors, it will be possible to extract the necessary information about the user’s well-being and health. This covers the health and wellness part of the smart home. In addition, there will be further elements for security, such as gas, fire and water leak detectors. The use of smart light bulbs and smart plugs is also proposed to achieve greater home automation and improve household energy performance.
So far, only a few commercial sensors have been tested in the system, in addition to those built into the Kobuki iClebo base. The iClebo base sensors are included here because they are interconnected with the system, although they will not be dealt with in depth; everything related to the robotic platform (sensors, actuators and software) is dealt with in depth later on. Table 3 shows the sensors that we plan to include in the system. As some devices are not yet available, we decided to perform a test setup with the commercial sensors presently available. The column headed “Inc.” specifies the sensors included in the test system (“Y”) and those which are not (“N”).
Table 3. Home automation system devices.
As can be seen in Figure 2, the communication protocols used are Zigbee, WiFi, MQTT and IR. Most of the commercial sensors were selected with the Zigbee protocol for power consumption reasons; if the system is to be as non-intrusive as possible, wireless devices should be selected. Figure 3 shows the plan of the house where the system has been tested.
Figure 3. (a) Map of proposed elements in the final system. (b) Map of test system elements.

3.2.2. Computer Platform for Home Automation

As can be seen in Figure 1 and Figure 2, the single-board low-priced Raspberry Pi 4 Model B is the computational node in charge of home automation. It was selected for its extensive use with Home Assistant and its good cost/performance ratio.
In this type of device, the operating system is usually installed on a removable SD card. The problem is that, in previous experience with systems like the one proposed in this article, the memory card ended up being corrupted, so an external SSD was added. This requires an expansion board that allows the SSD to be integrated with the Raspberry Pi (Raspberry Pi Foundation, Cambridge, UK). In this case, the ONE M.2 case was selected, which allows the integration of both elements (SSD and Raspberry Pi). The SSD selected was the Kingston A400 M.2 (Kingston Corporation, Fountain Valley, CA, USA).
As already mentioned, most of the sensors use the Zigbee protocol, so a Zigbee gateway is needed to coordinate all the elements of the system. Normally, each manufacturer offers a gateway for its branded products; however, the use of these gateways would make the system more closed and proprietary, not to mention that most of them are cloud solutions that always need an internet connection. To overcome this problem, we propose the use of a generic CC2531 USB dongle (Texas Instruments, Dallas, TX, USA), a USB device on which a CC2531, a SoC for IEEE 802.15.4, ZigBee and RF4CE applications, is mounted. All the elements used can be seen in Figure 4.
Figure 4. (a) Raspberry Pi, (b) Raspberry Pi Aluminum Case, (c) SSD Memory Disk, (d) USB Dongle cc2531.
The software used in this project is Home Assistant, free, open-source software that is widely used for home automation. Home Assistant manages home automation locally, which provides greater security and speed compared to cloud-based systems, and it has a multitude of add-ons developed for the integration of different devices. We therefore installed a Node-RED server, a Mosquitto MQTT broker and the Zigbee2MQTT software needed to manage the CC2531. Node-RED is a programming tool that connects hardware devices, APIs and online services through flow-based programming; in this case it is used for integrating ROS, via the node-red-contrib-ros node distribution that allows us to publish and subscribe to ROS topics. The Mosquitto MQTT broker follows the publisher/subscriber policy, with the broker as the central block of the structure: it is the server that accepts published messages and distributes them to the subscribing nodes. The Zigbee2MQTT add-on uses this broker to publish sensor information on MQTT topics, so this information can easily be accessed from Node-RED and published in ROS.
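As an example of the kind of logic that can be built on top of the Zigbee2MQTT payloads, the sketch below tracks a refrigerator door sensor and flags when it has not been opened for too long, one of the "strange behavior" cases mentioned in the Introduction. The topic suffix, the "contact" payload field and the 12-h threshold are assumptions for illustration only.

```python
# Illustrative activity rule on top of Zigbee2MQTT door-sensor payloads:
# raise a flag when the refrigerator has not been opened for too long.
# Topic suffix, "contact" field and the 12-h threshold are assumptions.
import json
import time

LAST_OPENED = {"fridge": time.time()}
MAX_IDLE_S = 12 * 3600  # alert if the fridge stays closed for 12 h


def on_sensor_message(topic, payload):
    """Update the activity timestamp from a door/contact sensor message."""
    state = json.loads(payload.decode("utf-8"))
    if topic.endswith("fridge_door") and state.get("contact") is False:
        # On typical Zigbee door sensors, contact == False means the door is open.
        LAST_OPENED["fridge"] = time.time()


def check_inactivity():
    """Return True if an 'unusual behavior' alert should be raised."""
    return (time.time() - LAST_OPENED["fridge"]) > MAX_IDLE_S
```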

3.3. Robotic Platform

The Turtlebot II commercial solution was chosen for the robotic platform. This is a mobile robotic base widely used in education and research: a differential-drive mobile robot with two support wheels located in a rhomboidal arrangement. The mobile base on which the robot is mounted is the Kobuki iClebo (YUJIN ROBOT Co., Ltd., Yeonsu-gu, Incheon, Korea) (Figure 5b), which can power most of the systems (computer, sensors and other devices).
Figure 5. (a) Hokuyo UST-10LX; (b) IClebo robotic base; (c) INTEL NUC computer.
As this platform should map and navigate autonomously around the house, a Hokuyo UST-10LX LIDAR (Hokuyo Automatic Co., Ltd., Osaka, Japan) (Figure 5a) was added to the base. The robotics computing tasks are carried out by a compact Intel NUC computer (Intel Corporation, Santa Clara, CA, USA) (Figure 5c), characterized mainly by its small dimensions; another advantage is the possibility of expanding its features as required.
The robotic platform uses Ubuntu 16.04 as operating system and ROS as the software robotics framework. The mapping and navigation stack offered by ROS [94] (Figure 6) is used to map the home in which the robot is installed and autonomously navigate once mapped.
Figure 6. ROS navigation stack setup [95].
In addition to this, continuous operation experiments were carried out by monitoring the battery status. The package used for robot initialization is turtlebot_bringup [96] with a “.launch” file called minimal.launch, which launches the relevant nodes that publish the desired battery information on a topic. Figure 7 shows the topic containing the battery information and the field name. To interpret the evolution of the battery status, a node will be created that subscribes to the topic shown in Figure 7; each time the desired data are updated, they will be saved in a text file together with a timestamp (a minimal sketch of such a node is shown after Figure 7).
Figure 7. ROS nodes and topics from turtlebot_bringup.
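A minimal sketch of the battery-logging node described above is given below. It assumes the usual Kobuki topic and message type (/mobile_base/sensors/core, kobuki_msgs/SensorState); the actual topic and field are those listed in Figure 7, and the output path is illustrative.

```python
# Battery-logging node: subscribe to the Kobuki sensor topic and append each
# battery reading with a timestamp to a text file. Topic and message type are
# the usual Kobuki ones; the output path is illustrative.
import time

import rospy
from kobuki_msgs.msg import SensorState

LOG_FILE = "battery_log.txt"


def battery_callback(msg):
    # SensorState.battery reports the voltage in units of 0.1 V.
    with open(LOG_FILE, "a") as f:
        f.write("{:.0f}\t{:.1f}\n".format(time.time(), msg.battery * 0.1))


if __name__ == "__main__":
    rospy.init_node("battery_logger")
    rospy.Subscriber("/mobile_base/sensors/core", SensorState, battery_callback)
    rospy.spin()
```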
The continuous operation experiment consists of autonomous navigation to certain points in the house according to the time of day. The idea is to create timed circuits that start and end at the charging station (Table 4); in this way, it is tested whether the autodocking algorithm provided by Kobuki is reliable enough for the proposed purpose (a sketch of such a routine is shown after Table 4).
Table 4. Navigation routines.
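The following sketch illustrates how such a timed circuit could be commanded through the ROS navigation stack with a move_base action client; the waypoint coordinates are placeholders, and the Kobuki autodocking behaviour would be triggered at the end of the routine.

```python
# Timed navigation routine: send the robot through a sequence of move_base
# goals and finish near the docking station. Waypoint coordinates are placeholders.
import actionlib
import rospy
from geometry_msgs.msg import Quaternion
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

ROUTINE = [(1.5, 0.0), (3.2, 2.1), (0.2, 0.0)]  # last point: in front of the dock


def goto(client, x, y):
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation = Quaternion(0.0, 0.0, 0.0, 1.0)
    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()


if __name__ == "__main__":
    rospy.init_node("navigation_routine")
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()
    for x, y in ROUTINE:
        goto(client, x, y)
    # The Kobuki autodocking behaviour would be triggered here.
```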

3.4. Mood Estimation

3.4.1. Acquisition and Processing of Physiological Signals

According to Figure 1 and Figure 2, this section proposes a system consisting of a device capable of sensing physiological variables and an application that delivers tests and questionnaires. Physiological and activity measurements are performed by means of an E4 wristband device from Empatica. This CE-certified device, commercially available but intended for research, enables data acquisition of accelerometry, skin temperature, electrodermal activity and blood volume pulse signals for up to 32 h (in recording and data-logging mode). Streaming mode (24 h battery autonomy) is used in conjunction with a 4G internet-connected smartphone, which acts as the gateway to the cloud storage services provided by Empatica. Information about the Empatica E4 sensors can be found in Table 5.
Table 5. Empatica E4 built-in sensors.
(a) Physiological Signal Processing and Feature Extraction
The different physiological signals were first preprocessed and filtered as summarized in Table 6. Features were extracted using a sliding-window approach with a window size of 60 s, a size widely reported as appropriate when working with physiological signals [14,97,98]. We also set an overlap of 10% between consecutive windows to reduce the boundary effect when signals are filtered.
Table 6. Signal preprocessing.
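A minimal sketch of the segmentation step is shown below: 60-s windows with a 10% overlap between consecutive windows. The statistics computed here (mean, standard deviation, minimum, maximum) are only a simplified stand-in for the full feature sets listed in Tables 7–10.

```python
# Sliding-window segmentation: 60-s windows, 10% overlap between consecutive
# windows. The per-window statistics are a simplified stand-in for Tables 7-10.
import numpy as np


def sliding_window_features(signal, fs, win_s=60.0, overlap=0.10):
    """Return one simple feature vector per window."""
    win = int(win_s * fs)              # samples per window
    step = int(win * (1.0 - overlap))  # 10% overlap -> 90% hop
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append([w.mean(), w.std(), w.min(), w.max()])
    return np.asarray(feats)


# Example: one hour of EDA sampled at 4 Hz yields about 66 windows.
eda = np.random.randn(3600 * 4)
print(sliding_window_features(eda, fs=4.0).shape)
```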
(b) 3-axis Accelerometer (72 features)
Three-axis accelerometer signals were sampled at a frequency of 32 Hz; the Euclidean norm of the acceleration vector was then calculated, and the four time series were filtered with a 3rd-order Butterworth band-pass filter with a 0.2 Hz to 10 Hz bandwidth [100]. Afterwards, features in the time domain [85,100,101] and in the frequency domain [102] were extracted. Table 7 lists the features extracted from the accelerometer.
Table 7. Accelerometer features.
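The following sketch reproduces the accelerometer preprocessing described above (Euclidean norm followed by a 3rd-order Butterworth band-pass filter between 0.2 and 10 Hz at the 32 Hz sampling rate) using SciPy; for brevity only the norm is filtered here, and feature extraction is omitted.

```python
# Accelerometer preprocessing: Euclidean norm of the three axes, then a
# 3rd-order Butterworth band-pass filter (0.2-10 Hz) at the 32 Hz sampling rate.
# Only the norm is filtered here; feature extraction is omitted.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 32.0  # Hz


def preprocess_acc(acc_xyz):
    """acc_xyz has shape (n_samples, 3); returns the filtered norm."""
    norm = np.linalg.norm(acc_xyz, axis=1)
    b, a = butter(N=3, Wn=[0.2, 10.0], btype="bandpass", fs=FS)
    return filtfilt(b, a, norm)
```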
(c) Peripheral Skin Temperature (13 features)
The peripheral temperature signal is sampled at a frequency of 4 Hz. A threshold of 2 °C is set to discard incorrectly gathered data whenever an increase or decrease in skin temperature exceeds this threshold. Features in the time domain [85,101,103] and frequency domain [102] are then extracted. Table 8 lists the features extracted from the skin temperature.
Table 8. Skin Temperature Features.
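One possible reading of the 2 °C rule is sketched below: a sample is discarded when it jumps more than 2 °C with respect to the previous valid sample. The exact artifact-rejection criterion used in the project may differ.

```python
# One reading of the 2 °C rule: mark a sample as invalid when it differs from
# the previous valid sample by more than 2 °C (skin temperature sampled at 4 Hz).
import numpy as np


def mask_temperature_artifacts(temp, threshold=2.0):
    """Return a boolean mask of valid samples."""
    valid = np.ones(len(temp), dtype=bool)
    last_good = temp[0]
    for i in range(1, len(temp)):
        if abs(temp[i] - last_good) > threshold:
            valid[i] = False  # discard the jump
        else:
            last_good = temp[i]
    return valid
```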
(d) Heart Rate Variability (27 features)
Since it was not possible to reliably reconstruct the tachogram from the IBI data read from the E4, the HR time series provided by the manufacturer was used for heart rate variability analysis. This instantaneous heart rate, sampled at 1 Hz, was then normalized on a daily basis, taking the first ten minutes of each day as a baseline so that data from different participants could be compared. Features in both the time domain [85,100,101] and frequency domain [104] were then extracted. Table 9 lists the features extracted from the heart rate.
Table 9. Heart rate features.
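The sketch below shows one way to implement the daily normalization described above, taking the first ten minutes of each day (600 samples at 1 Hz) as baseline; the use of a z-score is an assumption, as the exact normalization formula is not specified here.

```python
# Daily HR normalization: the first ten minutes of each day (600 samples at
# 1 Hz) serve as baseline; a z-score against that baseline is assumed here.
import numpy as np


def normalize_daily_hr(hr_1hz, baseline_s=600):
    baseline = hr_1hz[:baseline_s]
    mu, sigma = baseline.mean(), baseline.std()
    return (hr_1hz - mu) / (sigma if sigma > 0 else 1.0)
```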
(e) Electrodermal Activity (82 features)
The EDA signal, sampled at 4 Hz by the E4, was low-pass filtered with a 3rd-order Butterworth filter at 1.5 Hz before extracting both the tonic (skin conductance level, SCL) and phasic (skin conductance response, SCR) components [99]. Features in both the time [85,99] and frequency domains were calculated for each component [99,102]. Table 10 lists the features extracted from the electrodermal activity.
Table 10. Electrodermal activity features.
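A sketch of the EDA preprocessing is given below: a 3rd-order low-pass Butterworth filter at 1.5 Hz (4 Hz sampling), followed by a simple tonic/phasic split in which a very-low-pass component stands in for the SCL and the residual for the SCR. This split is only a stand-in for the decomposition method cited in [99].

```python
# EDA preprocessing: 3rd-order low-pass Butterworth at 1.5 Hz (fs = 4 Hz),
# followed by a simple tonic/phasic split (very-low-pass ~ SCL, residual ~ SCR)
# used here as a stand-in for the decomposition method cited in the paper.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 4.0  # Hz


def split_eda(eda):
    b, a = butter(N=3, Wn=1.5, btype="lowpass", fs=FS)
    eda_filtered = filtfilt(b, a, eda)
    b2, a2 = butter(N=3, Wn=0.05, btype="lowpass", fs=FS)  # slow drift ~ SCL
    scl = filtfilt(b2, a2, eda_filtered)
    scr = eda_filtered - scl                               # fast component ~ SCR
    return scl, scr
```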

3.4.2. Mobile App for Ground Truth

The application was developed for the Android operating system. The app asks the user about two parameters: level of happiness and level of activity. To answer, the user must select the level of two bars divided into discrete levels, with 0 being not at all happy/active and 4 being very happy/active. The test interface can be seen in Figure 8a.
Figure 8. (a) Mobile application to collect mood states labels (on the left); (b) Affection state model [76].
As the aim is to carry out five tests during the day, the application was designed so that the user can configure the times at which he/she wants to be notified to take the test. The notification alerts the user within an uncertainty interval of ±10 min. The data extracted are shown in Table 11.
Table 11. Information collected by the application.
All the information extracted through the application is saved in local files and Excel sheets in the cloud. Each user is assigned an ID code to identify him/her, so that no personal data is stored in the cloud. The designed application has two operation modes: user mode and administrator mode. The user mode only has the interface necessary to carry out the test, while the administrator mode allows the configuration of the test times (Figure 9).
Figure 9. (a) Administrator mode main menu. (b) Administrator mode test time settings.
As a significant amount of labelled data is needed to train a machine-learning-based algorithm, in this paper the EMA answer registered for each patient is extended a number of minutes before and after the answer. We start from the assumption that the corresponding physiological response is present in the body for this length of time before and after answering the EMA, using windows of 30, 60 and 120 min.
As the participants can answer the questionnaires outside the assigned time slot, the following situations can occur: (1) the time between two consecutive answers is less than half of the fixed window; (2) the first and the last EMA of the day may be close to the time of the initial and final data collection. The procedure followed to resolve these situations consists of evenly splitting the available data between the answers involved in each situation. Figure 10 shows an example of both situations.
Figure 10. Illustration of the extrapolation of the mood states obtained from the EMA answers received in one day with a 60-min centered window for temperature signal. Red line represents the mood state assigned (−1 represents no mood state). Situation A is when two consecutive EMA answers are given within an interval of time less than half the size of the EMA window. Situation B is when there is no available data in the initial or last EMA answered.
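The labelling logic can be sketched as follows: each EMA answer labels the samples within ± half the EMA window of its answer time, and −1 marks samples without a mood state. Overlaps between close answers (situation A) are resolved here by assigning each sample to the nearest answer, which evenly splits the data between the two; this is one reading of the procedure described above.

```python
# Label extension: each sample takes the mood label of the nearest EMA answer
# if that answer lies within half the EMA window; otherwise it keeps -1.
import numpy as np


def extend_ema_labels(timestamps, ema_times, ema_labels, window_s=3600.0):
    half = window_s / 2.0
    labels = np.full(len(timestamps), -1, dtype=int)
    for i, t in enumerate(timestamps):
        d = np.abs(ema_times - t)
        j = int(np.argmin(d))       # nearest EMA answer
        if d[j] <= half:
            labels[i] = ema_labels[j]
    return labels
```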

3.4.3. Signal Classification

The system introduced in this work is based on two elements: on the one hand, a wristband-type device that obtains information on physiological variables and, on the other, an Android application that collects the user’s responses to the tests and questionnaires [105]. The wristband is provided with sensors to monitor blood volume pulse (BVP), electrodermal activity (EDA) and peripheral skin temperature. It is also equipped with a 3-axis accelerometer and a built-in application to derive the heart rate (HR) and interbeat interval (IBI) from the BVP signal.
The emotional state of the monitored person is collected following the model proposed by Russell et al. [76], a multi-dimensional approach that defines emotions along two dimensions: arousal (or activeness) and pleasure (or happiness), pleasure covering the range of negative and positive emotions and arousal representing their active or passive degree, as explained in Section 3.4.2. Each emotional state can therefore be placed as a point in, in this case, a two-dimensional space (see Figure 8b). The collection of these two dimensions is done via a mobile app (see Figure 8a). The participant is asked to rate the happiness and activeness felt at a certain time on a 5-point Likert scale, quantifying his/her emotional well-being with only two concise questions, known as an Ecological Momentary Assessment (EMA). The small mental burden of this technique makes it suitable for the environment proposed in this work: emotion recognition during daily activities. The user is asked to answer this questionnaire five times a day, and more answers can be included if required.
The features extracted from the signals and the emotional states collected with the mobile app define the classification problem proposed to estimate mood states. The problem follows a supervised learning approach: the training samples are the feature vectors extracted with a 60-s window for each of the signals mentioned above, and the labels are the mood states collected by the app as explained in Section 3.4.2. We refer to Figure 10 in the previous section to emphasize the integration of the mood states with the collected physiological data; it exemplifies the logic followed to assign a label to every vector. The sliding window is placed at every data point whose label value differs from −1 (as reflected in Figure 10). Once every feature vector is assigned a mood state, we proceed to train a Support Vector Machine (SVM) with the goal of mapping feature vectors into mood states by minimizing a loss function.

3.5. Psychological Instruments

In addition to the experiments explained in the previous sections to monitor emotional well-being, psychological questionnaires were also included. These questionnaires measured the levels of stress, anxiety, depression and general well-being at the beginning and end of the experiment. A positive finding was that the experimental setup (wearable sensor, app to answer EMAs) did not cause stress to the participants. In addition to the personal interviews at the beginning and end of the experiment, each participant answered the first 20 questions of the STAI questionnaire, which provides potential insights into symptoms of depression. A frequency analysis was carried out to contrast the correlation between the arousal (or activeness) and pleasure (or happiness) dimensions and the anxiety levels registered; the correlations between the affective dimensions and the anxiety state were not statistically significant. Detailed information on the tests used during the data collection process is given below:
  • A sociodemographic survey that provided information on age, gender, marital status and knowledge of digital devices (mobile, computer, etc.).
  • A short Inventory of Emotional Intelligence for the Elderly (EQ-I-M20) by Pérez-Fuentes, Gázquez, Mercader and Molero [106]. This is a Spanish adaptation and validation of the Bar-On and Parker [107] instrument for the elderly, consisting of 20 items on a Likert scale. It is made up of the following factors: (a) Intrapersonal; (b) Interpersonal; (c) Stress management; (d) Adaptability; and (e) General mood. It has a Cronbach’s alpha of 0.89 for reliability on the general scale.
  • CECAVIR (Quality of Life Assessment Questionnaire) prepared by Molero, Pérez-Fuentes, Gázquez and Mercader [108] consisting of 56 items ranging from 1 to 5 composed of six quality of life dimensions: Health, Social and family relationships, Activity and leisure, Environmental quality, Functional capacity and Satisfaction with life. It has a Cronbach’s Alpha of 0.865 on the global scale.
  • STAI, State Trait Anxiety Questionnaire [109]. It is a questionnaire made up of 40 items with two scales. It can evaluate anxiety as a transitory state (Anxiety/state) and as a latent trait (Anxiety/trait). The alpha coefficient for State Anxiety ranges between 0.90 and 0.93 and for Trait Anxiety between 0.84 and 0.87 [109].
  • Hamilton scale on depressive/anxiety symptoms (1960) [110], made up of 14 items on a Likert scale ranging from 0 to 4 to evaluate somatic and respiratory symptoms, depression, and so forth. A higher score indicates anxiety/depressive symptoms, and the scale provides information on psychic and somatic anxiety.
  • Abbreviated Yesavage questionnaire (GDS) on depression in its Spanish version for the elderly [111], a questionnaire that evaluates depression in people over 65 years of age, made up of 15 items that score dichotomously (0–1). A total score over 5 indicates depression. It has an internal consistency of 0.994.
  • Mini-cognitive exam (MEC-35) [112], the Spanish adaptation of the Mini-Mental State Examination. It is an instrument that detects cognitive deterioration and examines different cognitive functions: orientation, memory, attention, calculation, language and construction, praxis and reasoning. The maximum score is 35 points, the optimal cut-off point for cognitive impairment in a population over 65 years of age with a low educational level being 24 points.
  • The Global Deterioration Scale (GDS) by Reisberg, Ferris, de León and Crook (1982) [113] assesses the seven different phases of Alzheimer’s disease: stage 1 (normal), stage 2 (subjective memory complaint), stage 3 (mild cognitive impairment), stage 4 (mild dementia), stage 5 (moderate dementia), stage 6 (moderately severe dementia) and stage 7 (severe dementia). The internal consistency is very good, with a Cronbach’s alpha of 0.82 [113].
  • The Katz index [114], which assesses the level of dependence/independence of daily activities. It has eight possible levels: (A) Independent in all its functions; (B) Independent in all functions except one; (C) Independent in all functions except in the bathroom or other; (D) Independent in all functions except in the bathroom, dress and any other; (E) Independent in all functions except in the bathroom, dress, use of the toilet and any other; (F) Independence in all functions except in the bathroom, dress, use of the toilet, mobility and any other of the remaining two; (G) Dependent in all functions; (H) Dependent in at least two functions, but not classifiable as C, D, E or F. Levels A–B (0–1 points) indicate absence of disability or mild disability, levels C–D (2–3 points) indicate moderate disability and E–G levels (4–6 points) indicate severe disability.
  • Psychological Well-being Scale (Díaz, Rodríguez-Carvajal, Blanco, Moreno-Jiménez, Gallardo and Valle, 2006) [115], an instrument consisting of 29 items that range from 1 (totally disagree) to 6 (totally agree). It has six subscales that evaluate self-acceptance, personal growth, purpose in life, positive relationships with others, environmental mastery and autonomy. Its Cronbach’s alpha ranges between 0.70 and 0.83 in all dimensions (Díaz et al., 2006) [115].
  • Scales on positive and negative affect (PANAS), translated into Spanish by Robles and Páez [116] (2003), a scale composed of two factors of 10 items (ranging from 1 not at all or slightly to 5 a lot) that measures positive and negative affect and presents a Cronbach’s alpha of 0.92 for positive and 0.88 for negative affect (Robles and Páez, 2003) [116].
  • NEO-FFI Questionnaire (Costa and McCrae, 1999) [117], a reduced version of the well-known NEO-PI-R. The questionnaire consists of 60 items to assess personality according to the “big five” model (Neuroticism, Extraversion, Openness, Friendliness and Responsibility).

3.6. Experimental Design for Mood Prediction

This study was carried out using the dataset collected in [10]. Individual interviews were performed with each candidate to collect demographic information, and the following questionnaires were used to assess their mental health: STAI [109], Hamilton anxiety scale [110], Yesavage geriatric depression scale [111], MEC-35 [112], Ryff scale of psychological well-being [115], Global Deterioration Scale [113], Katz index [114] and PANAS [116]. After the interview, the EMA app was installed on their personal smartphones, and they received an Empatica E4 wristband. Data were collected for 15 consecutive days, during which the participants were asked to wear the device during the day and remove it before sleeping. They also received instructions on how to answer the daily EMAs sent to their smartphones. After the data collection, they came back to the laboratory and repeated the personal interview; they were also asked to complete a satisfaction survey and return the E4 wristband. According to the questionnaires, they did not show any symptoms of anxiety, mental disorder or disability, nor were there any significant differences with respect to the first questionnaire, the results remaining stable. Demographic information and the collected data are summarized in Table 12.
Table 12. Participants’ details.

4. Results

4.1. Mood Prediction Results

We built a mood classifier using the support vector machine (SVM) library libSVM [118] with a radial basis function (RBF) kernel. The parameters C and γ were obtained by grid search using cross-validation, and the feature vectors were scaled to the range [−1, 1].
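This training step can be sketched with scikit-learn's SVC (a wrapper around libSVM): features scaled to [−1, 1], an RBF kernel, and C and γ selected by cross-validated grid search. The data and grid values below are placeholders; 194 is simply the sum of the per-signal feature counts in Tables 7–10.

```python
# SVM training sketch with scikit-learn (SVC wraps libSVM): scaling to [-1, 1],
# RBF kernel, and C / gamma chosen by cross-validated grid search.
# X, y and the grid values are placeholders; 194 = 72 + 13 + 27 + 82 features.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X = np.random.randn(400, 194)     # placeholder feature vectors
y = np.random.randint(0, 3, 400)  # placeholder mood labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y)
pipe = make_pipeline(MinMaxScaler(feature_range=(-1, 1)), SVC(kernel="rbf"))
grid = GridSearchCV(pipe, {"svc__C": [1, 10, 100],
                           "svc__gamma": [1e-3, 1e-2, 1e-1]}, cv=5)
grid.fit(X_tr, y_tr)
print("test accuracy:", grid.score(X_te, y_te))
```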
We propose two experiments to assess the ability of our model to solve the proposed emotion recognition problem. The datasets were balanced owing to the lack of labels for certain mood states: all classes representing less than 10% of the total dataset were discarded. In addition, the arousal and happiness dimension values were classified for the originally sampled datasets, that is, without dropping non-representative mood states. The first experiment consists of splitting the dataset into two subsets, so that 75% of the samples are used for training and the remaining 25% for testing (Figure 11). Additionally, we include confusion matrices as a qualitative result (Figure 12, Figure 13, Figure 14 and Figure 15).
Figure 11. Accuracy metric obtained from predicting mood (a), arousal (or activeness) (b), and pleasure (or happiness) (c) values. Error bars represent standard deviation.
Figure 12. Confusion matrix for patient P1 for windows (a) 30-, (b) 60-, (c) 120-min. Predicted mood states: Pleasure, Sleepiness and Contentment.
Figure 13. Confusion matrix for patient P2 for windows (a) 30-, (b) 60-, (c) 120-min. Predicted mood states: Pleasure, Excitement, Arousal and Contentment.
Figure 14. Confusion matrix for patient P3 for windows (a) 30-, (b) 60-, (c) 120-min. Predicted mood states: Excitement, Arousal and Depression.
Figure 15. Confusion matrix for patient P4 for windows (a) 30-, (b) 60-, (c) 120-min. Predicted mood states: Pleasure, Excitement and Sleepiness.
The second experiment is named leave-one(day)-out: one full day of data is used to test the model and the rest of the days are used to train the classifier. The accuracy was computed as the mean over all tested days (Figure 16). The leave-one(day)-out approach is of particular interest for assessing the ability of the classifier to predict the next day’s emotional well-being from a historical data register.
Figure 16. Accuracy metric obtained for the leave-one(day)-out experiment to predict pleasure (or happiness) values. Error bars represent standard deviation. Red asterisks and circle dots represent the mean pleasure accuracy for datasets of 10 and 20 days, respectively, as reported by LiKamWa et al. [119].
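The leave-one(day)-out evaluation can be sketched with scikit-learn's LeaveOneGroupOut, where the group of each feature vector is the day on which it was recorded; the data below are placeholders.

```python
# Leave-one(day)-out evaluation: each day is held out in turn as the test set,
# the rest train the classifier, and the accuracy is averaged over days.
# X, y and the day indices are placeholders.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X = np.random.randn(300, 194)
y = np.random.randint(0, 3, 300)
days = np.repeat(np.arange(10), 30)   # day index of each feature vector

model = make_pipeline(MinMaxScaler(feature_range=(-1, 1)), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, groups=days, cv=LeaveOneGroupOut())
print("mean accuracy over days:", scores.mean())
```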

4.2. Robotic Platform Continuous Operation Results

This section presents the results obtained on the charging and discharging patterns of the robotic platform battery and on the continuous operation experiments, as well as the main observations made during the process. The testing environment was a real house, and the mapping task was tested in two different parts of the house (Figure 17).
Figure 17. (a) Map of the first zone of the house. (b) Map of the second zone of the house.
The scenario shown in Figure 17b was selected for the battery charging and discharging experiments and the continuous operation experiments with navigation according to an hourly routine. To prove that the script developed to monitor the battery status worked correctly, four experiments were carried out to see if the battery discharge profile resembled the one provided by the manufacturer (Figure 18).
Figure 18. Battery discharge profiles: (a) First and second experiments, (b) Third and fourth experiments.
Figure 19 shows the evolution of the battery status on a normal day without failures at the time of the auto docking task. Figure 20a shows the first continuous operation experiment and Figure 20b the second one.
Figure 19. Battery charge and discharge profile for a normal operating day.
Figure 20. (a) First continuous operation experiment. (b) Second continuous operation experiment.

5. Discussion

5.1. Mood Prediction

The first experiment consisted of dividing the dataset into 75%–25% proportions for training and testing, respectively. We repeated this process ten times to ensure a certain degree of confidence. Figure 11 shows the accuracy obtained from training a classifier to predict mood states. We tried different EMA window sizes of 30, 60 and 120 min. The highest accuracies achieved in this first experiment correspond to the metrics obtained by patient three (P3) in estimating mood with 30-, 60- and 120-min windows: 83.44% ± 1.6, 86.68% ± 1.43 and 88.75% ± 0.78, respectively. It is interesting to note that patient three (P3) had the highest EMA-answering average, as reported in Table 12.
When inspecting a confusion matrix, one looks for the highest possible values on the main diagonal, which would mean high precision of the trained algorithm. It can be observed in Figure 12, Figure 13, Figure 14 and Figure 15 that the highest values lie on the main diagonal. Nevertheless, there is also a certain degree of confusion among the different classes. Figure 12 shows confusion in patient P1 between contentment and pleasure, and between sleepiness and pleasure. Figure 13 shows that for patient P2 the algorithm misclassifies pleasure or arousal as excitement, and contentment is confused with pleasure or arousal. Figure 14 shows for patient P3 confusion between arousal and excitement, and between depression and excitement. Finally, Figure 15 shows pleasure confused with excitement or sleepiness in patient P4. Inspecting the circumplex model introduced by Russell [76], all the aforementioned confused emotions are placed quite close to each other on the circle. As most of the mood states contain a high pleasure value (representing positive moods), we conclude that achieving a fine-grained algorithm able to fit data that lie close to each other in the feature space would require collecting more samples, which would increase the complexity of the algorithm. The case of patient P3, for whom depression was confused with excitement, also caught our attention, since these two mood states are far from each other in Russell’s circumplex model [76]. Future research will inspect the nature of the depression- and excitement-labelled raw vectors and how they fall spatially into the dimensionally reduced feature space.
To validate the precision of our trained algorithm in this second experiment, we compared it with the study by LiKamWa et al. [119], which shares some similarities with ours. To carry out this comparison, we trained a model for the pleasure value, which is the dimension used in [119]. We are aware that, to compare two learning algorithms, one should use the same dataset; however, the approach followed by [119] is exactly what we are pursuing in this second experiment, and the comparison is used to confirm that the obtained results point in the right direction. Moreover, the work in [119] uses periods of 10 and 20 days (about 3 weeks) to train their models, whereas we used the number of days available for each participant, which, as can be observed, differs from those reported in [119]. Figure 16 shows our results lying within the range of results reported in [119]. This leads us to conclude that, with a bigger dataset, that is, collecting more signals over more days, higher accuracy metrics will be achieved.

5.2. Robotic Platform

The adaptation of the Hokuyo UST-10LX for mapping and navigation of the robotic platform works correctly, as can be seen in Figure 17. The areas not so well defined on the map are those inaccessible to the robot, for example, the toilets in Figure 17. In the case of Figure 17b that area is not defined at all, because there is a step at the entrance to the bathroom that the robot could not cross and thus could not map, similarly to the case in Figure 17a. In general, good results were obtained in terms of mapping, although the main disadvantage was that it had to be teleoperated.
The discharge profiles in Figure 18 have a shape similar to that provided by the manufacturer. The main difference is that the manufacturer’s graphs show 9 h of operation for a single 4S2P battery, while our two 4S2P batteries operated in a range between 6 and 7 h. This result is not surprising considering that the manufacturer’s curves account only for the base consumption, while our setup additionally powered a computer and the Hokuyo UST-10LX, which explains the reduced autonomy.
The main difference between the curves in Figure 18a,b is that in the case of (a) the battery was charged while the computer was on; this continuous consumption prevented it from reaching full charge. In the case of (b), the base was completely switched off for charging, so the start of the graphs in Figure 18b is higher than in Figure 18a and, in addition, the discharge lasts longer.
The autonomous navigation tests using the Hokuyo UST-10LX were thus successful. Using the ROS navigation stack, it was possible to navigate autonomously throughout the house, avoiding both dynamic and static obstacles. The main navigation problem occurred at doorways, where the robot sometimes got stuck because the navigation algorithm determined that a collision was imminent.
On the other hand, the continuous operation experiment was not very encouraging. As can be seen in Figure 20, the main problem occurred in executing the autodocking task. The specification for the most reliable autodocking is a clear area 2 m wide by 5 m long around the station. Initially, an attempt was made to reduce this space to see how reliable the algorithm was under a relaxed specification, bearing in mind that not every house has a clear 2 × 5 m space. The results obtained are shown in Figure 20a: as can be seen, there were many problems with autodocking. To remedy this, the station was relocated to a larger area that met the specifications. The results in Figure 20b show that the autodocking algorithm then performed reasonably better; however, it was still flawed and at times required human intervention.

6. Conclusions

Some conclusions can be drawn in the order in which the systems were described. Starting with the smart home, the system was in operation for 21 days without failures, with the devices connected continuously. There were periods in which the system was without power due to power cuts, and the devices reconnected normally after the Raspberry Pi restarted. Based on this experience, it can be concluded that the system works satisfactorily and offers robust connectivity.
It was also concluded that the robotic navigation and autodocking algorithms used are not sufficiently robust. Although the navigation algorithm works well in most cases, the robot needs human intervention when it fails; some of the scenarios in which failures occurred were navigation through narrow doorways and corridors. The autodocking algorithm runs normally if the surrounding conditions required for it to function correctly are met (a 2 × 5 m empty area in front of the station), and when it fails, manual intervention is required. These results were observed during the evolution of the continuous operation experiments. In the discharge profiles shown in Figure 20, a failure can only be detected as an abnormal discharge profile; as discussed, the most common faults causing this situation were navigation in narrow areas and failures in performing the autodocking. To address the navigation problems, we are currently working on the implementation of semantic navigation algorithms to improve autonomous navigation in a changing environment and on the use of computer vision to detect the docking station.
Regarding the machine learning application for mood prediction, the model shows a promising ability to predict mood values from physiological data alone. The experiments show that a 120-min EMA window works significantly better in predicting mood values, while the accuracy in predicting arousal, pleasure and mood values is comparable to, or better than, other methods reported in the literature.
The performance metrics obtained for the leave-one(day)-out experiment point to a path for future work, although the accuracy is similar to that found in other works. Reference [119] reported that 40 to 60 days (about 2 months) were needed to achieve a performance of over 80%, so our datasets need to be extended. The present system relies only on physiological data; future research can combine different data modalities, such as smartphone information (agenda events, screen use, etc.) or audio signals, to provide greater model robustness.
As the learnt mood models also need generalization and transferability, we propose increasing the complexity of the classifier by using deep learning techniques as more data are collected.
The experimental data in this work were collected during daily activities with no restrictions on the participants’ behavior, which shows that our technology is viable for daily life use. The results obtained seem to indicate that the recognition of the user’s mood is sufficiently effective to be able to trigger the coaching actions that the team of psychologists considers appropriate for each user in each situation.
In terms of applying these technologies, research in artificial intelligence and home automation is especially relevant for therapeutic applications in mental health services. The interventions were especially directed towards caring for the elderly. It should be noted that our project focused on innovating by providing a responsible approach (under the supervision of a trained mental health professional) and taking into account the ethical implications of incorporating artificial intelligence and home automation into people’s lives (ecological sensors in the house so as not to interfere, the use of wristbands, etc.). Most importantly, our project moves out of the laboratory, since smart devices allow the subjects to carry out their daily activities while being monitored. Another fundamental aspect is the daily monitoring, after a previous evaluation, which makes it possible to evaluate moods and mental states and to make diagnoses where appropriate. From an ethical and responsible perspective, this project will bring important benefits from the application of robotics and artificial intelligence to mental health, allowing new modes of treatment, opportunities to involve hard-to-reach populations, improved patient adherence and response, and freeing up time for specialists through combined care models. That is why we argue that the union of artificial intelligence and home automation is a promising approach in the entire field of mental health, especially in innovative mental health care.

Author Contributions

F.M.C.-N. carried out this research as part of his Ph.D. thesis, under the supervision of J.A.V.-R. and F.J.O.; E.G.-M. carried out this research as part of his Ph.D. thesis, under the supervision of O.M.M.; Conceptualization, O.M.M., F.J.O.; methodology, O.M.M., F.J.O.; software architecture, F.J.O., O.M.M., software development, F.M.C.-N., O.M.M., D.B.-S., E.G.-M.; systems integration, F.M.C.-N., M.J.-B., J.A.V.-R., J.R.G., home automation, F.M.C.-N., M.J.-B., J.A.V.-R., acquisition and processing of physiological signals O.M.M., E.G.-M., D.B.-S., J.R.G.; validation, O.M.M., E.G.-M., D.B.-S.; formal analysis, O.M.M., E.G.-M., D.B.-S.; psychological instruments, I.M., C.R.-E.; investigation, all authors; resources, O.M.M., D.B.-S.; writing—original draft preparation, all authors; writing—review and editing, all authors; supervision, O.M.M., F.J.O.; project administration, O.M.M., F.J.O.; funding acquisition, O.M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Spanish Ministerio de Ciencia, Innovación y Universidades, Agencia Estatal de Investigación (AEI) and the European Regional Development Fund (ERDF) under project ROBWELL (RTI2018-095599-A-C22) and by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki. The participants signed informed consent. We did not keep any reference or data that could identify the participants, and the datasets remain completely anonymous. Therefore, ethical review and approval were waived for this study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy issues.

Acknowledgments

The authors would like to thank the “Research Programme for Groups of Scientific Excellence at Region of Murcia” of the Seneca Foundation (Agency for Science and Technology of the Region of Murcia—19895/GERM/15).

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. EUROSTAT. Mortality and Life Expectancy Statistics. 2021. Available online: https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Mortality_and_life_expectancy_statistics (accessed on 5 March 2021).
  2. Freire Rodríguez, C.; Ferradás Canedo, M.d.M. Calidad de Vida y Bienestar en la Vejez; Ediciones Pirámide: Madrid, Spain, 2016. [Google Scholar]
  3. INE. Encuesta Continua de Hogares (ECH) 2018. 2018. Available online: https://www.ine.es/prensa/ech_2018.pdf (accessed on 10 March 2021).
  4. Unsar, S.; Dindar, I.; Kurt, S. Activities of daily living, quality of life, social support and depression levels of elderly individuals in Turkish society. J. Pak. Med. Assoc. 2015, 65, 14. [Google Scholar]
  5. Holt-Lunstad, J. The Potential Public Health Relevance of Social Isolation and Loneliness: Prevalence, Epidemiology, and Risk Factors. Public Policy Aging Rep. 2017, 27, 127–130. [Google Scholar] [CrossRef]
  6. Holt-Lunstad, J.; Smith, T.B.; Layton, J.B. Social relationships and mortality risk: A meta-analytic review. PLoS Med. 2010, 7, e1000316. [Google Scholar] [CrossRef] [PubMed]
  7. INE. Encuesta Continua de Hogares (ECH) 2020. 2021. Available online: https://www.ine.es/prensa/ech_2020.pdf (accessed on 12 March 2021).
  8. Pinazo-Hernandis, S. Impacto psicosocial de la COVID-19 en las personas mayores: Problemas y retos. Rev. Esp. Geriatr. Gerontol. 2020, 55, 249–252. [Google Scholar] [CrossRef]
  9. Turtlebot. TurtleBot2. Available online: https://www.turtlebot.com/turtlebot2/ (accessed on 14 March 2021).
  10. Bautista-Salinas, D.; Gonzalez, J.R.; Mendez, I.; Mozos, O.M. Monitoring and Prediction of Mood in Elderly People during Daily Life Activities. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Berlin, Germany, 23–27 July 2019. [Google Scholar] [CrossRef]
  11. Sun, F.T.; Kuo, C.; Cheng, H.T.; Buthpitiya, S.; Collins, P.; Griss, M. Activity-aware mental stress detection using physiological sensors. In Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST; Springer: Berlin/Heidelberg, Germany, 2012; Volume 76. [Google Scholar]
  12. Mozos, O.M.; Sandulescu, V.; Andrews, S.; Ellis, D.; Bellotto, N.; Dobrescu, R.; Ferrandez, J.M. Stress detection using wearable physiological and sociometric sensors. Int. J. Neural Syst. 2017, 27, 1650041. [Google Scholar] [CrossRef]
  13. Hovsepian, K.; Al’absi, M.; Ertin, E.; Kamarck, T.; Nakajima, M.; Kumar, S. CStress: Towards a gold standard for continuous stress assessment in the mobile environment. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Osaka, Japan, 7–11 September 2015. [Google Scholar] [CrossRef]
  14. Healey, J.A.; Picard, R.W. Detecting stress during real-world driving tasks using physiological sensors. IEEE Trans. Intell. Transp. Syst. 2005, 6, 156–166. [Google Scholar] [CrossRef]
  15. Digital AV Magazine. El Robot Panasonic Hospi Rimo Ayuda en el Transporte de Fármacos a los Pacientes Hospitalizados. Available online: https://www.digitalavmagazine.com/2013/10/29/el-robot-panasonic-hospi-rimo-ayuda-en-el-transporte-de-farmacos-a-los-pacientes-hospitalizados/ (accessed on 16 March 2021).
  16. Grupo ADD. Robot Moxi. Available online: https://grupoadd.es/el-robot-moxi (accessed on 20 March 2021).
  17. Bhattacharjee, T.; Lee, G.; Song, H.; Srinivasa, S.S. Towards Robotic Feeding: Role of Haptics in Fork-Based Food Manipulation. IEEE Robot. Autom. Lett. 2019, 4, 1485–1492. [Google Scholar] [CrossRef]
  18. Werle, J.; Hauer, K. Design of a bath robot system—User definition and user requirements based on International Classification of Functioning, Disability and Health (ICF). In Proceedings of the 25th IEEE International Symposium on Robot and Human Interactive Communication, New York, NY, USA, 26–31 August 2016. [Google Scholar] [CrossRef]
  19. King, C.H.; Chen, T.L.; Jain, A.; Kemp, C.C. Towards an assistive robot that autonomously performs bed baths for patient hygiene. In Proceedings of the IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010. [Google Scholar] [CrossRef]
  20. Samsung. Samsung Bots, los Compañeros del Futuro. 2019. Available online: https://news.samsung.com/co/samsung-bot-los-mejores-companeros-roboticos-en-camino-a-enriquecer-la-calidad-de-vida-de-las-personas (accessed on 2 April 2021).
  21. García, E. Rassel, el Robot que Ayuda a las Personas Mayores. 2018. Available online: https://cadenaser.com/emisora/2018/12/07/radio_valencia/1544174190_652009.html (accessed on 2 April 2021).
  22. Pages, J.; Marchionni, L.; Ferro, F. TIAGo: The Modular Robot That Adapts to Different Research Needs. In Proceedings of the IROS Workshop on Robot Modularity; 2016; pp. 1–4. Available online: https://clawar.org/wp-content/uploads/2016/10/P2.pdf (accessed on 2 April 2021).
  23. Kamel, E.; Memari, A.M. State-of-the-Art Review of Energy Smart Homes. J. Archit. Eng. 2019, 25, 03118001. [Google Scholar] [CrossRef]
  24. Labonnote, N.; Høyland, K. Smart home technologies that support independent living: Challenges and opportunities for the building industry–a systematic mapping study. Intell. Build. Int. 2017, 9, 40–63. [Google Scholar] [CrossRef]
  25. Tsukiyama, T. In-home health monitoring system for solitary elderly. Procedia Comput. Sci. 2015, 63, 229–235. [Google Scholar] [CrossRef]
  26. Pigini, L.; Bovi, G.; Panzarino, C.; Gower, V.; Ferratini, M.; Andreoni, G.; Sassi, R.; Rivolta, M.W.; Ferrarin, M. Pilot Test of a New Personal Health System Integrating Environmental and Wearable Sensors for Telemonitoring and Care of Elderly People at Home (SMARTA Project). Gerontology 2017, 63, 281–286. [Google Scholar] [CrossRef] [PubMed]
  27. Bora, R.; De La Pinta, J.R.; Alvarez, A.; Maestre, J.M. Integration of service robots in the smart home by means of UPnP: A surveillance robot case study. Rob. Auton. Syst. 2013, 61, 153–160. [Google Scholar] [CrossRef]
  28. Wienke, J.; Wrede, S. A middleware for collaborative research in experimental robotics. In Proceedings of the 2011 IEEE/SICE International Symposium on System Integration, Kyoto, Japan, 20–22 December 2011. [Google Scholar] [CrossRef]
  29. Wrede, S.; Leichsenring, C.; Holthaus, P.; Hermann, T.; Wachsmuth, S. The Cognitive Service Robotics Apartment: A Versatile Environment for Human–Machine Interaction Research. KI Kunstl. Intell. 2017, 31, 299–304. [Google Scholar] [CrossRef]
  30. Bellocchio, E.; Costante, G.; Cascianelli, S.; Valigi, P.; Ciarfuglia, T.A. SmartSEAL: A ros based home automation framework for heterogeneous devices interconnection in smart buildings. In Proceedings of the IEEE 2nd International Smart Cities Conference: Improving the Citizens Quality of Life, Trento, Italy, 12–15 September 2016. [Google Scholar] [CrossRef]
  31. Uchechukwu, D.; Siddique, A.; Maksatbek, A.; Afanasyev, I. ROS-based Integration of Smart Space and a Mobile Robot as the Internet of Robotic Things. In Proceedings of the Conference of Open Innovation Association, FRUCT, Helsinki, Finland, 5–8 November 2019. [Google Scholar] [CrossRef]
  32. Ray, P.P. Internet of Robotic Things: Concept, Technologies, and Challenges. IEEE Access 2016, 4, 9489–9500. [Google Scholar] [CrossRef]
  33. Chakraborti, T.; Srivastava, S.; Pinto, A.; Kambhampati, S. An ROS-based shared communication middleware for plug and play modular intelligent design of smart systems. arXiv 2017, arXiv:1706.01133. [Google Scholar]
  34. Picard, R.W.; Healey, J. Affective wearables. Pers. Technol. 1997, 1, 231–240. [Google Scholar] [CrossRef]
  35. Pentland, A. Social Physics: How Social Networks Can Make Us Smarter. 2014. Available online: https://www.amazon.es/Social-Physics-Networks-Make-Smarter-ebook/dp/B00DMCUYRM (accessed on 22 March 2021).
  36. Sarkar, N. Psychophysiological control architecture for human-robot coordination-concepts and initial experiments. In Proceedings of the 2002 IEEE International Conference on Robotics and Automation (Cat. No.02CH37292), Washington, DC, USA, 11–15 May 2002; Volume 4, pp. 3719–3724. [Google Scholar] [CrossRef]
  37. Rani, P.; Sarkar, N.; Smith, C.A.; Kirby, L.D. Anxiety detecting robotic system—Towards implicit human-robot collaboration. Robotica 2004, 22, 85–95. [Google Scholar] [CrossRef]
  38. Bethel, C.L.; Salomon, K.; Murphy, R.R.; Burke, J.L. Survey of Psychophysiology Measurements Applied to Human-Robot Interaction. In Proceedings of the RO-MAN 2007—The 16th IEEE International Symposium on Robot and Human Interactive Communication, Jeju Island, Korea, 26–29 August 2007; pp. 732–737. [Google Scholar] [CrossRef]
  39. Swan, M. Emerging patient-driven health care models: An examination of health social networks, consumer personalized medicine and quantified self-tracking. Int. J. Environ. Res. Public Health 2009, 6, 492–525. [Google Scholar] [CrossRef] [PubMed]
  40. Aarts, E.; Harwig, R.; Schuurmans, M. Ambient intelligence. In The Invisible Future: The Seamless Integration of Technology into Everyday Life; McGraw-Hill, Inc.: New York, NY, USA, 2001; pp. 235–250. [Google Scholar]
  41. Kleinberger, T.; Becker, M.; Ras, E.; Holzinger, A.; Müller, P. Ambient Intelligence in Assisted Living: Enable Elderly People to Handle Future Interfaces; Springer: Berlin/Heidelberg, Germany, 2007; pp. 103–112. [Google Scholar] [CrossRef]
  42. Garzo, A.; Montalban, I.; León, E.; Schlatter, S. Sentient: An Approach to Ambient Assisted Emotional Regulation. 2010. Available online: https://www.researchgate.net/profile/Ainara-Garzo/publication/259459808_Sentient_An_approach_to_Ambient_Assisted_Emotional_Regulation/links/0a85e533d3bffc5e98000000/Sentient-An-approach-to-Ambient-Assisted-Emotional-Regulation.pdf (accessed on 28 April 2021).
  43. Karahanoğlu, A.; Erbuğ, Ç. Perceived Qualities of Smart Wearables: Determinants of User Acceptance; Association for Computing Machinery: New York, NY, USA, 2011; pp. 1–8. [Google Scholar] [CrossRef]
  44. Rawassizadeh, R.; Price, B.A.; Petre, M. Wearables: Has the age of smartwatches finally arrived? Commun. ACM 2014, 58, 45–47. [Google Scholar] [CrossRef]
  45. Oung, Q.W.; Hariharan, M.; Lee, H.L.; Basah, S.N.; Sarillee, M.; Lee, C.H. Wearable multimodal sensors for evaluation of patients with Parkinson disease. In Proceedings of the 2015 IEEE International Conference on Control System, Computing and Engineering (ICCSCE), Penang, Malaysia, 27–29 November 2015; pp. 269–274. [Google Scholar] [CrossRef]
  46. Schwartz, B.; Baca, A. Wearables and Apps—Modern Diagnostic Frameworks for Health Promotion through Sport. Dtsch. Z. Sportmed. 2016, 2016, 131–136. [Google Scholar] [CrossRef]
  47. Ometov, A.; Shubina, V.; Klus, L.; Skibińska, J.; Saafi, S.; Pascacio, P.; Flueratoru, L.; Gaibor, D.Q.; Chukhno, N.; Chukhno, O.; et al. A Survey on Wearable Technology: History, State-of-the-Art and Current Challenges. Comput. Netw. 2021, 193, 108074. [Google Scholar] [CrossRef]
  48. John Dian, F.; Vahidnia, R.; Rahmati, A. Wearables and the Internet of Things (IoT), Applications, Opportunities, and Challenges: A Survey. IEEE Access 2020, 8, 69200–69211. [Google Scholar] [CrossRef]
  49. Häkkilä, J. Designing for Smart Clothes and Wearables—User Experience Design Perspective. In Smart Textiles: Fundamentals, Design, and Interaction; Schneegass, S., Amft, O., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 259–278. [Google Scholar] [CrossRef]
  50. Fairburn, S.; Steed, J.; Coulter, J. Spheres of Practice for the Co-design of Wearables. J. Text. Des. Res. Pract. 2016, 4, 85–109. [Google Scholar] [CrossRef]
  51. Jones, J.; Gouge, C.; Crilley, M. Design principles for health wearables. Commun. Des. Q. 2017, 5, 40–50. [Google Scholar] [CrossRef]
  52. Klebbe, R.; Steinert, A.; Müller-Werdan, U. Wearables for Older Adults: Requirements, Design, and User Experience. In Perspectives on Wearable Enhanced Learning (WELL): Current Trends, Research, and Practice; Buchem, I., Klamma, R., Wild, F., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 313–332. [Google Scholar] [CrossRef]
  53. Motti, V.G.; Caine, K. Users’ Privacy Concerns About Wearables; Springer: Berlin/Heidelberg, Germany, 2015; pp. 231–244. [Google Scholar] [CrossRef]
  54. Piwek, L.; Ellis, D.A.; Andrews, S.; Joinson, A. The Rise of Consumer Health Wearables: Promises and Barriers. PLoS Med. 2016, 13, e1001953. [Google Scholar] [CrossRef] [PubMed]
  55. Yetisen, A.K.; Martinez-Hurtado, J.L.; Ünal, B.; Khademhosseini, A.; Butt, H. Wearables in Medicine. Adv. Mater. 2018, 30, 1706910. [Google Scholar] [CrossRef]
  56. Chiauzzi, E.; Rodarte, C.; DasMahapatra, P. Patient-centered activity monitoring in the self-management of chronic health conditions. BMC Med. 2015, 13, 77. [Google Scholar] [CrossRef] [PubMed]
  57. Espay, A.J.; Bonato, P.; Nahab, F.B.; Maetzler, W.; Dean, J.M.; Klucken, J.; Eskofier, B.M.; Merola, A.; Horak, F.; Lang, A.E.; et al. Technology in Parkinson’s disease: Challenges and opportunities. Mov. Disord. 2016, 31, 1272–1282. [Google Scholar] [CrossRef] [PubMed]
  58. Adams, J.L.; Dinesh, K.; Xiong, M.; Tarolli, C.G.; Sharma, S.; Sheth, N.; Aranyosi, A.J.; Zhu, W.; Goldenthal, S.; Biglan, K.M.; et al. Multiple Wearable Sensors in Parkinson and Huntington Disease Individuals: A Pilot Study in Clinic and at Home. Digit. Biomark. 2017, 1, 52–63. [Google Scholar] [CrossRef]
  59. Del Din, S.; Elshehabi, M.; Galna, B.; Hobert, M.A.; Warmerdam, E.; Suenkel, U.; Brockmann, K.; Metzger, F.; Hansen, C.; Berg, D.; et al. Gait analysis with wearables predicts conversion to Parkinson disease. Ann. Neurol. 2019, 86, 357–367. [Google Scholar] [CrossRef] [PubMed]
  60. Pau, M.; Caggiari, S.; Mura, A.; Corona, F.; Leban, B.; Coghe, G.; Lorefice, L.; Marrosu, M.G.; Cocco, E. Clinical assessment of gait in individuals with multiple sclerosis using wearable inertial sensors: Comparison with patient-based measure. Mult. Scler. Relat. Disord. 2016, 10, 187–191. [Google Scholar] [CrossRef] [PubMed]
  61. Bradshaw, M.J.; Farrow, S.; Motl, R.W.; Chitnis, T. Wearable biosensors to monitor disability in multiple sclerosis. Neurol. Clin. Pract. 2017, 7, 354–362. [Google Scholar] [CrossRef] [PubMed]
  62. Moon, Y.; McGinnis, R.S.; Seagers, K.; Motl, R.W.; Sheth, N.; Wright, J.A., Jr.; Ghaffari, R.; Sosnoff, J.J. Monitoring gait in multiple sclerosis with novel wearable motion sensors. PLoS ONE 2017, 12, e0171346. [Google Scholar] [CrossRef] [PubMed]
  63. Psarakis, M.; Greene, D.A.; Cole, M.H.; Lord, S.R.; Hoang, P.; Brodie, M. Wearable technology reveals gait compensations, unstable walking patterns and fatigue in people with multiple sclerosis. Physiol. Meas. 2018, 39, 075004. [Google Scholar] [CrossRef]
  64. Sparaco, M.; Lavorgna, L.; Conforti, R.; Tedeschi, G.; Bonavita, S. The Role of Wearable Devices in Multiple Sclerosis. Mult. Scler. Int. 2018, 2018, e7627643. [Google Scholar] [CrossRef] [PubMed]
  65. Frechette, M.L.; Meyer, B.M.; Tulipani, L.J.; Gurchiek, R.D.; McGinnis, R.S.; Sosnoff, J.J. Next Steps in Wearable Technology and Community Ambulation in Multiple Sclerosis. Curr. Neurol. Neurosci. Rep. 2019, 19, 80. [Google Scholar] [CrossRef] [PubMed]
  66. Angelini, L.; Hodgkinson, W.; Smith, C.; Dodd, J.M.; Sharrack, B.; Mazzà, C.; Paling, D. Wearable sensors can reliably quantify gait alterations associated with disability in people with progressive multiple sclerosis in a clinical setting. J. Neurol. 2020, 267, 2897–2909. [Google Scholar] [CrossRef]
  67. González-Landero, F.; García-Magariño, I.; Lacuesta, R.; Lloret, J. Green Communication for Tracking Heart Rate with Smartbands. Sensors 2018, 18, 2652. [Google Scholar] [CrossRef]
  68. Qaim, W.B.; Ometov, A.; Molinaro, A.; Lener, I.; Campolo, C.; Lohan, E.S.; Nurmi, J. Towards Energy Efficiency in the Internet of Wearable Things: A Systematic Review. IEEE Access 2020, 8, 175412–175435. [Google Scholar] [CrossRef]
  69. Pal, D.; Vanijja, V.; Arpnikanondt, C.; Zhang, X.; Papasratorn, B. A Quantitative Approach for Evaluating the Quality of Experience of Smart-Wearables From the Quality of Data and Quality of Information: An End User Perspective. IEEE Access 2019, 7, 64266–64278. [Google Scholar] [CrossRef]
  70. Benbunan-Fich, R. User Satisfaction with Wearables. AIS Trans. Hum.-Comput. Interact. 2020, 12, 1–27. [Google Scholar] [CrossRef]
  71. Oh, J.; Kang, H. User engagement with smart wearables: Four defining factors and a process model. Mob. Media Commun. 2021, 9, 314–335. [Google Scholar] [CrossRef]
  72. Benbunan-Fich, R. An affordance lens for wearable information systems. Eur. J. Inf. Syst. 2019, 28, 256–271. [Google Scholar] [CrossRef]
  73. Ledger, D.; McCaffrey, D. Inside Wearables: How the Science of Human Behavior Change Offers the Secret to Long-Term Engagement; Endeavour Partners LLC: Cambridge, MA, USA, 2014; pp. 1–17. [Google Scholar]
  74. Canhoto, A.I.; Arp, S. Exploring the factors that support adoption and sustained use of health and fitness wearables. J. Mark. Manag. 2017, 33, 32–60. [Google Scholar] [CrossRef]
  75. Smuck, M.; Odonkor, C.A.; Wilt, J.K.; Schmidt, N.; Swiernik, M.A. The emerging clinical role of wearables: Factors for successful implementation in healthcare. NPJ Digit. Med. 2021, 4, 1–8. [Google Scholar] [CrossRef] [PubMed]
  76. Russell, J.A. A circumplex model of affect. J. Pers. Soc. Psychol. 1980, 39, 1161–1178. [Google Scholar] [CrossRef]
  77. Pepa, L.; Capecci, M.; Ceravolo, M.G. Smartwatch based emotion recognition in Parkinson’s disease. In Proceedings of the 2019 IEEE 23rd International Symposium on Consumer Technologies (ISCT), Ancona, Italy, 19–21 June 2019; pp. 23–24. [Google Scholar] [CrossRef]
  78. Costa, J.; Guimbretière, F.; Jung, M.F.; Choudhury, T. BoostMeUp: Improving Cognitive Performance in the Moment by Unobtrusively Regulating Emotions with a Smartwatch. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2019, 3, 1–23. [Google Scholar] [CrossRef]
  79. Jiang, S.; Li, Z.; Zhou, P.; Li, M. Memento: An Emotion-driven Lifelogging System with Wearables. ACM Trans. Sens. Netw. 2019, 15, 1–23. [Google Scholar] [CrossRef]
  80. Al Nahian, M.J.; Ghosh, T.; Uddin, M.N.; Islam, M.M.; Mahmud, M.; Kaiser, M.S. Towards Artificial Intelligence Driven Emotion Aware Fall Monitoring Framework Suitable for Elderly People with Neurological Disorder; Springer International Publishing: Cham, Switzerland, 2020; pp. 275–286. [Google Scholar] [CrossRef]
  81. Vaizman, Y.; Ellis, K.; Lanckriet, G. Recognizing Detailed Human Context In-the-Wild from Smartphones and Smartwatches. arXiv 2017, arXiv:1609.06354. [Google Scholar] [CrossRef]
  82. Vaizman, Y.; Ellis, K.; Lanckriet, G.; Weibel, N. ExtraSensory App: Data Collection In-the-Wild with Rich User Interface to Self-Report Behavior. In Proceedings of the CHI ’18: CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–12. [Google Scholar] [CrossRef]
  83. Sultana, M.; Al-Jefri, M.; Lee, J. Using Machine Learning and Smartphone and Smartwatch Data to Detect Emotional States and Transitions: Exploratory Study. JMIR mHealth uHealth 2020, 8, e17818. [Google Scholar] [CrossRef] [PubMed]
  84. Saganowski, S.; Dutkowiak, A.; Dziadek, A.; Dzieżyc, M.; Komoszyńska, J.; Michalska, W.; Polak, A.; Ujma, M.; Kazienko, P. Emotion Recognition Using Wearables: A Systematic Literature Review—Work-in-progress. In Proceedings of the 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), Austin, TX, USA, 23–27 March 2020; pp. 1–6. [Google Scholar] [CrossRef]
  85. Picard, R.W.; Vyzas, E.; Healey, J. Toward machine emotional intelligence: Analysis of affective physiological state. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 1175–1191. [Google Scholar] [CrossRef]
  86. Hänsel, K.; Alomainy, A.; Haddadi, H. Large Scale Mood and Stress Self-Assessments on a Smartwatch; Association for Computing Machinery: New York, NY, USA, 2016; pp. 1180–1184. [Google Scholar] [CrossRef]
  87. Costa, J.; Guimbretière, F.; Jung, M.F.; Khalid Choudhury, T. BoostMeUp: A Smartwatch App to Regulate Emotions and Improve Cognitive Performance. GetMobile Mob. Comput. Commun. 2020, 24, 25–29. [Google Scholar] [CrossRef]
  88. Miri, P.; Uusberg, A.; Culbertson, H.; Flory, R.; Uusberg, H.; Gross, J.J.; Marzullo, K.; Isbister, K. Emotion Regulation in the Wild: Introducing WEHAB System Architecture; Association for Computing Machinery: New York, NY, USA, 2018; pp. 1–6. [Google Scholar] [CrossRef]
  89. Reddy Nadikattu, R. The Emerging Role of Artificial Intelligence in Modern Society. Int. J. Creat. Res. Thoughts 2016, 4, 906–911. [Google Scholar]
  90. Shiffman, S.; Stone, A.A.; Hufford, M.R. Ecological momentary assessment. Annu. Rev. Clin. Psychol. 2008, 4, 1–32. [Google Scholar] [CrossRef] [PubMed]
  91. Kim, H.; Lee, S.H.; Lee, S.E.; Hong, S.; Kang, H.J.; Kim, N. Depression prediction by using ecological momentary assessment, actiwatch data, and machine learning: Observational study on older adults living alone. JMIR mHealth uHealth 2019, 7, e14149. [Google Scholar] [CrossRef] [PubMed]
  92. Kocielnik, R.; Sidorova, N.; Maggi, F.M.; Ouwerkerk, M.; Westerink, J.H.D.M. Smart technologies for long-term stress monitoring at work. In Proceedings of the CBMS 2013—26th IEEE International Symposium on Computer-Based Medical Systems, Porto, Portugal, 20–22 June 2013. [Google Scholar] [CrossRef]
  93. Home Assistant. Available online: https://www.home-assistant.io/ (accessed on 10 May 2021).
  94. ROS.org. ROS Navigation. 2020. Available online: http://wiki.ros.org/navigation (accessed on 2 May 2021).
  95. ROS.org. Setup and Configuration of the Navigation Stack on a Robot. 2018. Available online: http://wiki.ros.org/navigation/Tutorials/RobotSetup (accessed on 5 June 2021).
  96. ROS.org. Turtlebot Bringup. 2013. Available online: http://wiki.ros.org/turtlebot_bringup (accessed on 2 June 2021).
  97. Healey, J.; Nachman, L.; Subramanian, S.; Shahabdeen, J.; Morris, M. Out of the lab and into the fray: Towards modeling emotion in everyday life. In Pervasive Computing; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6030. [Google Scholar] [CrossRef]
  98. Plarre, K.; Raij, A.; Hossain, S.M.; Ali, A.A.; Nakajima, M.; Al’Absi, M.; Ertin, E.; Kamarck, T.; Kumar, S.; Scott, M.; et al. Continuous inference of psychological stress from sensory measurements collected in the natural environment. In Proceedings of the 10th ACM/IEEE International Conference on Information Processing in Sensor Networks, Chicago, IL, USA, 12–14 April 2011. [Google Scholar]
  99. Zangróniz, R.; Martínez-Rodrigo, A.; Pastor, J.M.; López, M.T.; Fernández-Caballero, A. Electrodermal activity sensor for classification of calm/distress condition. Sensors 2017, 17, 2324. [Google Scholar] [CrossRef]
  100. Vandecasteele, K.; Lázaro, J.; Cleeren, E.; Claes, K.; van Paesschen, W.; van Huffel, S.; Hunyadi, B. Artifact detection of wrist photoplethysmograph signals. In Proceedings of the BIOSIGNALS 2018—11th International Conference on Bio-Inspired Systems and Signal Processing, Funchal, Portugal, 19–21 January 2018. [Google Scholar]
  101. Ghasemzadeh, H.; Loseu, V.; Guenterberg, E.; Jafari, R. Sport training using body sensor networks: A statistical approach to measure wrist rotation for golf swing. In Proceedings of the BODYNETS 2009—4th International ICST Conference on Body Area Networks, Los Angeles, CA, USA, 1–3 April 2011. [Google Scholar] [CrossRef]
  102. Zhang, Z.; Song, Y.; Cui, L.; Liu, X.; Zhu, T. Emotion recognition based on customized smart bracelet with built-in accelerometer. PeerJ 2016, 2016, e2258. [Google Scholar] [CrossRef] [PubMed]
  103. Gjoreski, M.; Luštrek, M.; Gams, M.; Gjoreski, H. Monitoring stress with a wrist device using context. J. Biomed. Inform. 2017, 73, 159–170. [Google Scholar] [CrossRef]
  104. Ramshur, J. Design, Evaluation, and Application of Heart Rate Variability Analysis Software (HRVAS). Master’s Thesis, University of Memphis, Memphis, TN, USA, 2010. [Google Scholar]
  105. Empatica Inc. Available online: https://www.empatica.com/ (accessed on 25 June 2021).
  106. Pérez Fuentes, M.d.C.; Jesús Gázquez Linares, J.; Rubio, I.M.; Molero Jurado, M.d.M. Brief emotional intelligence inventory for senior citizens (EQ-i-M20). Psicothema 2014, 26, 524–530. [Google Scholar] [CrossRef]
  107. Bar-On, R.; Parker, J.D.A. Bar-On Emotional Quotient Inventory: Youth Version (BarOn EQ-i:YV), Technical Manual; MHS: Toronto, ON, Canada, 2000; Volume 1872. [Google Scholar]
  108. Molero, M.d.M.; Pérez-Fuentes, M.d.C.; Gázquez, J.J.; Mercader, I. Construction and Initial Validation of a Questionnaire to Assess Quality of Life in Older Institutionalized People. Eur. J. Investig. Health Psychol. Educ. 2012, 2, 53–65. [Google Scholar] [CrossRef]
  109. Spielberger, C.; Gorsuch, R.; Lushene, R. STAI Manual for the State-Trait Anxiety Inventory. Self-Evaluation Questionnaire; TEA Ediciones: Madrid, Spain, 1970. [Google Scholar]
  110. Hamilton, M. A rating scale for depression. J. Neurol. Neurosurg. Psychiatry 1960, 23, 56–62. [Google Scholar] [CrossRef]
  111. Martínez de la Iglesia, J.; Onís Vilches, M.C.; Dueñas Herrero, R.; Albert Colomer, C.; Aguado Taberné, C.; Luque Luque, R. Versión española del cuestionario de Yesavage abreviado (GDS) para el despistaje de depresión en mayores de 65 años: Adaptación y validación. MEDIFAM Rev. Med. Fam. Comunitaria 2002, 12, 620–630. [Google Scholar] [CrossRef]
  112. Lobo, A.; Ezquerra, J.; Gómez Burgada, F.; Sala, J.M.; Seva Díaz, A. El miniexamen, cognoscitivo (un “test” sencillo, práctico, para detectar alteraciones intelectuales en pacientes médicos). Actas Luso-Esp. Neurol. Psiquiatr. Cienc. Afines 1979, 7, 189–202. [Google Scholar]
  113. Reisberg, B.; Ferris, S.H.; de Leon, M.J.; Crook, T. The global deterioration scale for assessment of primary degenerative dementia. Am. J. Psychiatry 1982, 139, 1136–1139. [Google Scholar] [CrossRef]
  114. Katz, S.; Ford, A.B.; Moskowitz, R.W.; Jackson, B.A.; Jaffe, M.W. Studies of Illness in the Aged: The Index of ADL: A Standardized Measure of Biological and Psychosocial Function. JAMA J. Am. Med. Assoc. 1963, 185, 914–919. [Google Scholar] [CrossRef] [PubMed]
  115. Díaz, D.; Rodríguez-Carvajal, R.; Blanco, A.; Moreno-Jiménez, B.; Gallardo, I.; Valle, C.; van Dierendonck, D. Adaptación española de las escalas de bienestar psicológico de Ryff. Psicothema 2006, 18, 572–577. [Google Scholar]
  116. Robles, R.; Páez, F. Estudio sobre la traducción al Español y las propiedades psicométricas de las escalas de Afecto Positivo y Negativo (PANAS). Salud Ment. 2003, 26, 69–75. [Google Scholar]
  117. Costa, P.T.; McCrae, R.R. Inventario de Personalidad NEO Revisado (NEO PI-R) e Inventario NEO Reducido de Cinco Factores (NEO-FFI). In Man. Prof.; TEA Ediciones: Madrid, Spain, 1999. [Google Scholar]
  118. Chang, C.C.; Lin, C.J. LIBSVM: A Library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 1–27. [Google Scholar] [CrossRef]
  119. Likamwa, R.; Liu, Y.; Lane, N.D.; Zhong, L. MoodScope: Building a mood sensor from smartphone usage patterns. In Proceedings of the MobiSys 2013—11th Annual International Conference on Mobile Systems, Applications, and Services, Taipei, Taiwan, 25–28 June 2013. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
