Article

CLARA: Building a Socially Assistive Robot to Interact with Elderly People

by Adrián Romero-Garcés, Juan Pedro Bandera, Rebeca Marfil, Martín González-García and Antonio Bandera *

Departamento Tecnologia Electronica, ETSI Telecomunicacion, University of Málaga, 29010 Málaga, Spain

* Author to whom correspondence should be addressed.
Designs 2022, 6(6), 125; https://doi.org/10.3390/designs6060125
Submission received: 22 October 2022 / Revised: 6 December 2022 / Accepted: 8 December 2022 / Published: 13 December 2022
(This article belongs to the Special Issue Design of Reliable Framework for Healthcare Data Assessment)

Abstract

Although the global population is aging, the proportion of potential caregivers is not keeping pace. Society needs to adapt to this demographic change, and new technologies are a powerful resource for achieving this. New tools and devices can help to ease independent living and alleviate the workload of caregivers. Among them, socially assistive robots (SARs), which assist people through social interactions, are an interesting tool for caregivers thanks to their proactivity, autonomy, interaction capabilities, and adaptability. This article describes the different design and implementation phases of a SAR, the CLARA robot, from both a physical and a software point of view, from 2016 to 2022. During this period, the design methodology evolved from traditional approaches based on technical feasibility to user-centered co-creative processes. The cognitive architecture of the robot, CORTEX, keeps its core idea of using an inner representation of the world to enable inter-procedural dialogue between perceptual, reactive, and deliberative modules. However, CORTEX also evolved by incorporating components that use non-functional properties to maximize efficiency through adaptability. The robot has been employed in several projects for different uses in hospitals and retirement homes. This paper describes the main outcomes of the functional and user experience evaluations of these experiments.

1. Introduction

The world’s population is aging rapidly. Demographic change is relevant to all countries, but especially the United States, Japan, and the European Union (EU). Increasing economic resources will not be enough to guarantee the quality of life of this aging population. By 2030, a shortfall of more than 150,000 paid caregivers is expected in the United States, a figure projected to double by 2040. In this scenario, family and friends step in to help the elderly in their daily lives. However, the evolution of the caregiver support ratio (CSR) shows that the availability of potential caregivers is very limited and that there will be increasing pressure on fewer individuals within families to care for elderly relatives aged eighty and beyond who may require assistance [1].
It is not easy to find a cost-effective solution to compensate for the absence of qualified professional carers. It is clear that medical facilities (hospitals, day centers, etc.) will have to be upgraded to increase their efficiency and reach. Technological solutions include equipping everyday living environments with a certain level of care-related infrastructure. Smart environments have the potential to assist in the monitoring of the health status of older people and can use different interfaces to encourage them to perform rehabilitation or maintenance exercises or provide remote assistance. As the aging population becomes more familiar with the use of technology, these new solutions represent interesting business opportunities [2].
Robotic solutions are not outside this market sector. Assistive robots can enable disabled or elderly people to lead healthy and independent lives [3]. Considering contact with the person as the critical factor in assistance, two extremes have been taxonomically established: physically assistive robots, which help people through physical interactions, and socially assistive robots (SARs), which provide assistance through social interactions [4,5,6,7]. According to its original definition [8], a SAR can only assist users through social interactions and cannot have any kind of physical contact with them (a characteristic that, in fact, facilitates its deployment in real scenarios [9]). Thus, SARs are considered service robots [9] defined as the intersection between assistive robots and socially interactive robots (SIRs). The purpose of a SAR is to provide assistance but also, as with a SIR, to develop a close and effective relationship with the user [10]. This characteristic links the success of SARs to the emotional bonds that may appear between the human user and the robot. Provided that false expectations do not lead to rejection, these bonds can be useful, for instance, for improving motivation and adherence to certain treatments [11,12,13,14,15]. By exploiting these benefits, SARs have been deployed as therapeutic robots. For instance, the robotic seal Paro was used in therapies for elderly people with dementia or Alzheimer’s [16]. The NAO robot was employed in therapies for Autism Spectrum Disorder (ASD) [17] and for rehabilitation purposes for children [18]. In the work of Ali et al. [17], a robot was endowed with a novel software solution for categorizing the severity of autism in children. In NAOTherapist, the NAO robot proposed exercises to the child and monitored how the motions were conducted; it could ask the child to correct the position of their arms and autonomously provide motivational messages [18]. At no time is there any physical contact with the child. The framework was intensively evaluated and is now offered as a commercial solution. Kaspar [15,19] and RoboParrot [20] are also examples of robots designed to interact with children with ASD. Kaspar is mainly used as a tool in robot-assisted therapy and education for children with communication and/or social interaction difficulties. The first versions of the RoboParrot robot were successfully employed for ASD screening, and newer versions have been used for interacting with both children and elderly people [20]. Kompai is a social robot designed by Robosoft for use in assisted living facilities, providing cognitive and social support to the elderly [21]. As mentioned above, some of these proposals have been employed for screening; a recent example of a SAR used for clinical screening interviews is the 3D-printed social robot presented in [22]. SARs can also be useful for monitoring the elderly at home. For instance, Bauer et al. [23] proposed using a Sanbot Elf robot to detect unusual conditions or fallen people and send warning messages for assistance. Examples of SARs that are able to address tasks by socially interacting with people are the TIAGO [24] and Lio [25] robots. Lio is defined as a personal assistant robot for care applications. Unlike the previously presented SARs, it has a gripper for manipulating objects; thus, it can open and close doors or move objects from one place to another. It is endowed with a relevant set of sensors, which allow it to navigate, understand, and interact with the environment and people. Similarly, TIAGO is a customizable robot that can be endowed with single or dual arms, each equipped with an interchangeable end-effector; thus, it can also accomplish complex manipulation tasks [24].
Significantly, the previous taxonomy was omitted from the only ISO standard related to these robots, ISO 13482:2014 [26]. Here, robots that perform tasks to improve the quality of life of the intended user (personal care robots) are grouped into mobile servant robots (those that cooperate with the user to complete a task), physical assistant robots (those that compensate for or support a physical problem in the user, for example, an exoskeleton), and person carrier robots (those that transport people, such as a robotic wheelchair). The strict exclusion of any physical contact in the definition of a SAR has motivated some researchers to adopt a broader reading of the term socially assistive robot, referring to robots that can engage users in social interactions and also have some form of physical contact with them [10,27]. Regardless of their role and classification, the design, implementation, and commercialization of these robotic solutions should carefully consider a person-centered ethical approach, which is especially relevant for reliably addressing the social, emotional, and physical needs of elderly people in a way that respects their dignity and privacy. Although robots that can socialize, adapt their responses to each user, and even proactively push toward creating emotional bonds may seem to be an interesting replacement for humans, this idea is not viable from either an ethical or a functional perspective [28]. Hence, the design and use of SARs should focus on supporting caregivers as useful tools, instead of trying to do their work [29].
This paper describes the design process of CLARA, a socially assistive robot, from both a hardware and a software point of view. This process started in 2016, and Figure 1 depicts its main phases. In the last six years, CLARA has evolved from a robot designed to automate data collection from elderly people (as part of Comprehensive Geriatric Assessment (CGA) procedures) to a robot that performs various tasks in nursing homes for the elderly. Although several articles have been published in journals and conferences addressing specific aspects of our proposal, the current paper provides a complete view of the design process, focusing on describing and justifying the decisions made along the way.
The rest of the paper is organized as follows. Section 2 describes the first steps in the design of CLARA. These steps were completely conditioned by the use cases to be solved: different types of tests used in the CGA. These use cases determined the sensors mounted on CLARA but not the software architecture, which was selected to facilitate the inclusion of new use cases. It is important to note that what is described in this section refers to the year 2016. CLARA was evaluated at the end of 2016 in a highly controlled environment by medical professionals. The main outcome of this evaluation was the need to include a team of experts in user-centered technology design in our consortium. Section 3 describes how, together with these experts, the robot’s housing was designed and the interfaces for interaction with both patients and medical professionals were remodeled. CLARA was connected to an external server, where the data from the user sessions were stored, the agendas for the sessions were scheduled, and the reports required by the medical professionals were generated. The complete system was tested repeatedly by different groups of users and, finally, in a geriatric center in tests involving real patients. Section 4 describes the final changes made to the CLARA casing to make the design more robust, as well as some hardware modifications, before three units were manufactured and deployed in health centers in Seville (Spain) and a hospital in Reims (France) for a period of six months. The results of this evaluation showed that users were satisfied with the robot but that their degree of autonomy was limited, often requiring the assistance of the center’s staff to explain how the session was to be carried out. Research on the automation of the CGA ended in early 2019. Although CLARA is capable of performing questionnaire-based tests or capturing and assessing users’ gaits autonomously, these use cases did not arise from end-users’ needs. Medical professionals prefer to tackle these evaluation tasks on their own, as they greatly influence their decision making. CLARA is interesting because it allows them to capture all the data from the sessions (video and audio), but they still prefer to be present at the assessments. Based on these conclusions, new projects started in 2019. In these projects, the CLARA robot was deployed in the Vitalia Teatinos retirement home in Malaga (Spain). This new phase, which ended in 2022, is described in Section 5. The use cases were proposed by the caregivers working in the residence once they became familiar with the features of the robot itself. The design of the robot changed only slightly, incorporating sensors that allowed it to identify the user (it now interacts with users in an open environment).

2. Designing a Robot for Automating the CGA

The initial requirements for CLARA were driven by a PDTI (public end-user-driven technological innovation) project funded by the EU project ECHORD++ [30]. The aim of the CLARC project was to develop a robot capable of helping staff in a geriatric department. More precisely, the robot should be able to autonomously drive certain tests in the Comprehensive Geriatric Assessment (CGA) procedures (see [30,31] for details). The CGA is an integrated clinical procedure for evaluating the status of frail old people and creating therapy plans to improve their quality and quantity of life. By automating data capture and test scoring, the robot gives clinical experts more time to speak with the patient and family members to decide on an individualized care plan, which is the final and most important phase of the CGA process (https://echord.eu/pdti/pdti-healthcare/index.php.html, accessed on 1 December 2022).
Thus, within the CLARC project, the fundamental service that CLARA had to provide was the ability to autonomously conduct some tests typically included in the CGA [30]. Briefly, the robot needed to autonomously conduct a questionnaire-based test (the Barthel test), a test for evaluating the patient’s gait (the get-up-and-go test), and a test for cognitive evaluation (the mini-mental test). However, before conducting the tests, the robot needed to introduce itself as an accessible and helpful assistant (or, at least, a tool) [32]. CLARA was designed to be able to receive and accompany patients and their families to the medical consulting room and, once there, help the physician to capture and manage their data during CGA procedures.
To meet these requirements, we had to develop a robot that could talk to or navigate between standing people, but whose perceptual resources were mainly designed for two situations: interacting with a seated person being interviewed in a questionnaire-type test, and capturing the gait of a person in a test such as the get-up-and-go test, in which the person walks up to four meters away from the robot [31]. Therefore, the height of the robot was set at 1.2 m, which allowed the sensors mounted on the robot to capture information correctly while remaining in a comfortable position for the user during HRI sessions. This height is similar to that of other current robots, such as Lio [25]. The first step was to select a base (Section 2.1) and then add the sensors needed to solve the tasks required by this project (Section 2.2). Simultaneously, we endowed the robot with a software architecture that allowed it to address the required use cases. As detailed in Section 2.3, we instantiated in CLARA a version of the CORTEX software architecture [33,34].

2.1. Choosing a Robotic Platform

The first step in the design of CLARA was to choose a robotic base. Table 1 summarizes the main topics considered in this procedure. Several proposals were studied, and the table covers the options considered the most relevant. Prior deployment in the healthcare sector was considered a relevant criterion. The Giraff robot was a relatively inexpensive platform deployed in the Danderyd Hospital in Stockholm [35]. The SCITOS G5 from MetraLabs GmbH was deployed in the Rehabilitation Clinic in Bad Liebenstein [36], and the RB-1 base from Robotnik in the Nueva Fe Hospital in Valencia (Spain) and the Stella Maris Hospital in Pisa (Italy) (https://robotnik.eu/products/mobile-robots/rb-1-base-en/, accessed on 1 December 2022). The TIAGO robot from PAL Robotics was the platform employed in the SACRO project (https://pal-robotics.com/robots/tiago/, accessed on 1 December 2022). Price was also a factor taken into consideration. In 2016, prices ranged from EUR 29,750 for the TIAGO IRON to EUR 9500 for the Giraff robot. However, the available sensors and software functionalities also differed widely. The TIAGO IRON was a truly autonomous robot equipped with a multitude of sensors and actuators, whereas the Giraff robot was a telepresence platform equipped only with microphones and a camera.
After analyzing all the factors, we decided to build CLARA using the SCITOS G3 from MetraLabs GmbH. This base was smaller and cheaper than the SCITOS G5 but retained its main features. Several issues were considered:
  • Robust base and navigation skill—The SCITOS G3 is comparable to the MiR100, RB-1, or TIAGO bases.
  • Flexibility—The SCITOS G3 is a complete and modular platform that can be adapted to our specific requirements. This was considered a relevant feature, as the external appearance of the robot (and also other behavioral aspects) had to be adapted to our scenario and use cases.
  • Feasibility analysis—The SCITOS G3 was designed to deal with HRI scenarios, and MetraLabs provided all the low-level, fundamental functionalities needed for fast prototyping and testing of the scenario. This could also be provided by the improved MobiNa platform (Fraunhofer IPA), the RB-1 from Robotnik, or the TIAGO IRON from PAL Robotics; other companies focused mainly on the base platform and the ability to navigate.
Figure 2 shows a snapshot of the SCITOS G3 platform used for one of our CLARA robots. This robotic base has proven to combine the robustness and longevity of industrial solutions (our four CLARA robots have operated without any technical problems since 2016) with the flexibility of a research solution (over the years, various sensors and software modules have been added and exchanged according to the needs of the use cases). The SCITOS G3 moves using a maintenance-free differential drive. The drive system can move the 60 kg platform at a speed of up to 1.4 m/s, and it handles payloads of up to 50 kg without any difficulty.

2.2. Sensors

CLARA must navigate in an office-like environment and interact with the elderly during the tests. Figure 3 shows the appearance of the CLARA robot before it was enclosed in its housing. The robot was built using the aforementioned SCITOS G3 base, which included a LIDAR sensor (used for navigation and localization) and loudspeakers. Attached to a pillar mounted over this base were the touchscreen, the RGBD camera with a microphone array, and one webcam to record the tests (in order to adequately evaluate the user’s experiences). Two computers were also integrated into the robot, one fully dedicated to processing the data coming from the Kinect V2 sensor and the other running the cognitive architecture and connecting with the SCITOS G3 base. At this stage, CLARA used Microsoft’s Kinect V2 camera. For managing the audio channel, the Kinect’s microphone array worked well in lab environments but failed to capture the voices of users in crowded scenarios such as care centers or retirement homes. Although the Kinect SDK provided software to cancel echo and suppress noise, even with these features the automatic speech recognition (ASR) rates were not adequate. Hence, the Kinect microphone array was replaced with an Audio-Technica AT875 short-condenser shotgun microphone, connected to the computer using an Icicle XLR-to-USB mic converter/preamp (see Figure 3). The shotgun microphone has a narrow pickup pattern, which allowed the robot to capture only the audio source located directly in front of the microphone. The use of the shotgun microphone improved the ASR results, although this solution still faced challenges when deployed in real scenarios.

2.3. The Software Architecture CORTEX

In order to conduct the CGA tests, CLARA had to address several tasks. These tasks required the robot to ask the elderly person questions and collect their answers, ask the person to complete small cognitive exercises (where hand or face gestures were made), or capture the person’s gait. In addition, the robot had to be able to navigate through rooms shared with people and modify its behavior to respond appropriately to situations not included in the nominal course of action. The large and diverse set of modules involved in this use case, as well as the close interaction required between the reactive and deliberative modules, motivated us to equip CLARA with a cognitive architecture that, at the time, was in a very preliminary stage of development.
The CORTEX cognitive architecture [33,34] organizes software functionality into agents that always communicate with each other using a blackboard or state representation. The key element of CORTEX is the Deep State Representation (DSR). The DSR is a multi-labeled directed graph that groups symbolic and geometric information within the same structure. As a hybrid representation, the DSR’s nodes store concepts that can be symbolic, metric, or a mixture. Symbolic tokens are defined as logic attributes in the nodes of this graph. Metric concepts describe the numerical quantities of objects in the world, which can be structures such as a three-dimensional mesh, scalars such as the mass of a link, or lists such as revision dates. Edges represent the relationships between the symbols. Hence, predicates that relate symbolic concepts are stored as edges between the nodes (e.g., an edge labeled ‘is waiting’ may appear between the node ‘robot’ and the node ‘answer’). On the other hand, the geometric spatial transformation between two nodes is also represented by an edge, containing in this case the transformation matrix that encodes that transformation. More details on DSRs can be found in [33,34,37].
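To make the structure concrete, the following minimal Python sketch shows the flavor of such a graph: nodes mix symbolic and metric attributes, and edges carry either a predicate or a geometric transform. All names here are illustrative; the actual DSR is implemented within the CORTEX/RoboComp frameworks.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    attrs: dict = field(default_factory=dict)  # symbolic and metric attributes

@dataclass
class Edge:
    src: str
    dst: str
    label: str              # predicate (e.g., 'is_waiting') or 'RT' for a transform
    transform: list = None  # optional 4x4 pose matrix for geometric edges

class DSR:
    """A toy multi-labeled directed graph with change notification."""
    def __init__(self):
        self.nodes, self.edges, self._subscribers = {}, [], []

    def subscribe(self, callback):
        self._subscribers.append(callback)  # publish/subscribe hook for agents

    def add_node(self, name, **attrs):
        self.nodes[name] = Node(name, attrs)

    def link(self, src, dst, label, transform=None):
        edge = Edge(src, dst, label, transform)
        self.edges.append(edge)
        for callback in self._subscribers:
            callback(edge)  # notify subscribed agents of the change

world = DSR()
world.add_node("robot", battery_level=0.87)       # metric attribute
world.add_node("answer", modality="touchscreen")  # symbolic attribute
world.add_node("room")
world.link("robot", "answer", "is_waiting")       # symbolic predicate edge
world.link("room", "robot", "RT",                 # geometric edge with transform
           transform=[[1, 0, 0, 2.0], [0, 1, 0, 3.5], [0, 0, 1, 0], [0, 0, 0, 1]])
```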
Figure 4 shows an overview of the first version of the CORTEX architecture implemented in CLARA. In this first implementation, the components of CORTEX were divided between two computers: one running Linux for most of the software and the other running Windows to deal with the software components that used the Kinect sensor and the Windows Speech SDK, which were selected to produce and recognize speech. Additionally, a Raspberry Pi mini-computer was included in the system to display a pair of animated eyes on the head screen; it was also in charge of battery monitoring. The CORTEX agents were divided into perceptive, reactive, and deliberative agents (blocks colored green, orange, and blue, respectively). The perceptive agents were responsible for internalizing the context in the DSR. The reactive and deliberative agents provided fast responses to changes in the DSR. However, the responses of the reactive agents were hard-coded into their algorithms, whereas the deliberative agents used a schema that allowed them to make decisions based on context and past experiences to achieve a given goal; they are more flexible than reactive agents, although not necessarily slower. In any case, there were actions, such as avoiding an obstacle, that had to be carried out immediately, whereas others, such as answering a user, could afford some latency (though not an excessive amount).
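Building on the toy DSR class above, the following sketch illustrates this blackboard-style coordination (the API is hypothetical): a reactive agent fires when another agent annotates a sentence to be spoken and writes the completion back into the graph, where it can trigger further agents in turn.

```python
class SpeechAgent:
    """Reactive agent: watches the DSR for 'to_say' annotations."""
    def __init__(self, dsr):
        self.dsr = dsr
        dsr.subscribe(self.on_change)

    def on_change(self, edge):
        if edge.label == "to_say":
            text = self.dsr.nodes[edge.dst].attrs["text"]
            print(f"[TTS] {text}")                    # drive the speech hardware
            self.dsr.link("robot", edge.dst, "said")  # report completion

agent = SpeechAgent(world)
world.add_node("sentence-1", text="Good morning! Shall we start the test?")
world.link("robot", "sentence-1", "to_say")  # any agent can trigger speech
```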
At the end of this initial phase, CLARA was an autonomous robot able to navigate in the care center without supervision. Localization, navigation, and obstacle avoidance functionalities were provided by the CogniDrive software running over the MIRA middleware (http://www.mira-project.org/joomla-mira/, accessed on 1 December 2022). The connection between the DSR, developed using the RoboComp framework (https://github.com/robocomp, accessed on 1 December 2022), and the navigation modules, running in MIRA, was programmed in a specific agent. Both RoboComp and MIRA use similar communication models, so the design of this bridge was relatively easy. Perceptive agents were in charge of detecting people, capturing upper-body motion, analyzing the user’s gait, and performing automatic speech recognition (ASR). The use of the Microsoft SDK helped us to implement a multi-language interface, which was able to interact with the patient in French, English, or Spanish. Our Speech agent can be considered a perceptive module (as it is used to recognize speech) and also a reactive module (as it speaks a sentence when this is annotated in the DSR). There was an agent for managing the touchscreen on the torso of the robot. For certain activities in the mini-mental cognitive test, the user had to use an external graphics tablet (it is very hard for an elderly person to draw a letter or shape on the touchscreen). The tablet was connected via Bluetooth to the robot, and its interface was managed by a specific agent.
The PELEA agent provided the deliberative skills for this version of CORTEX. It is an instance of the Planning, Learning, and Execution Architecture (PELEA) presented in [38], which maintains its own internal memory and software modules to monitor the course of action. It interacted with the rest of the agents using the same procedure (i.e., by adding or updating concepts or relationships as nodes or edges of the DSR). The use of this automated planning framework was one of the major milestones in the development of CLARA. The advantages were not only the ability to react to unexpected or unforeseen situations but also the high speed of the response. PELEA made it possible, for example, to manage a conversation with a person in a natural way. Automated planning managed the domain description, in terms of the available actions, and generated a plan that allowed a goal to be achieved from an initial state. A symbolic, high-level model of the world was employed to perform a forward projection, reasoning in depth in terms of goals, preconditions, resources, and timing constraints. In general, this deliberative scheme allowed CLARA to react to situations that were not considered in the nominal course of action and also to recover from bad decisions.
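The following toy forward-search planner conveys the flavor of this symbolic reasoning. It is illustrative only: PELEA operates on full planning domains with timing and resource constraints, whereas the domain below is a three-action caricature invented for the example.

```python
from collections import deque

# Invented toy domain: each action maps to (preconditions, add effects, delete effects).
ACTIONS = {
    "ask_question":  ({"at_patient", "test_active"}, {"waiting_answer"}, set()),
    "record_answer": ({"waiting_answer"}, {"answer_stored"}, {"waiting_answer"}),
    "close_test":    ({"answer_stored"}, {"test_done"}, {"test_active"}),
}

def plan(initial, goal):
    """Breadth-first forward search over sets of facts; returns action names."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # all goal facts hold
            return steps
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:                   # action applicable in this state
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                                # no plan found

print(plan({"at_patient", "test_active"}, {"test_done"}))
# -> ['ask_question', 'record_answer', 'close_test']
```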
The PELEA and Speech agents implemented a task-oriented dialogue system. Dialogue systems, such as the popular voice assistants Alexa, Cortana, and Siri (from Amazon, Microsoft, and Apple, respectively), are alternatives for addressing the questionnaire-based tests of the CGA. However, these systems are mainly designed to handle questions from the users [22]. Moreover, CLARA is not only a voice assistant: it was able to capture human motion in CGA tests such as the get-up-and-go test and to conduct other tasks in the care center, as presented in the next sections. Other robots (or computers with audio systems) have been used for interview tasks [39]. As in CLARA, the audio systems in these proposals focused on playing a list of prerecorded questions and recording the responses.

2.4. Encoding the Use Cases in CORTEX

As mentioned, the use of CORTEX allowed us to design the use cases. When a new use case needed to be implemented in CLARA, its specifics were first modeled by hand at design time. The result was a sequence of DSR graphs (see Figure 5 for an example) representing the nominal use case, i.e., the sequence in which all prerequisites are successfully fulfilled and, therefore, all steps run without problems. This model of the use case was shared with the caregivers and medical experts to approve the use case flow and also to foresee situations that could alter the nominal execution (e.g., the user gets up from the chair before answering a question). This feedback was processed to extend the use case and to update or add modules to the software or hardware architectures of the robot. During execution, the use cases were managed by the deliberative module (in our case, the aforementioned PELEA agent), which could modify the course of action when required (e.g., calling the doctor if the user gets up from the chair before completing a questionnaire-based test). In summary, we included modules in the architecture to capture all the features considered at design time.
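A minimal sketch of this idea, with invented step and edge names: the nominal use case is an ordered sequence of expected DSR configurations, and a monitor flags any observed state that deviates from it so the deliberative agent can replan.

```python
# The nominal use case as an ordered list of expected DSR edge sets (invented names).
NOMINAL = [
    {("robot", "patient", "greets")},
    {("robot", "question", "asks")},
    {("patient", "question", "answers")},
    {("robot", "test", "closes")},
]

def check_step(step_idx, observed_edges):
    """Return True if the observed DSR edges contain the nominal step."""
    if NOMINAL[step_idx] <= observed_edges:
        return True
    # Deviation (e.g., the patient stood up): hand control back to the planner.
    print(f"deviation at step {step_idx}: replanning required")
    return False

# The patient stood up instead of answering, so step 2 is not satisfied.
check_step(2, {("patient", "chair", "stands_up")})
```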

3. User-Centered Design

CLARA’s ability to manage CGA tests was validated at the Hospital Sant Antoni Abat (Vilanova i la Geltrú, Spain) in tests involving medical professionals who took on the role of patients to verify the system. Although CLARA demonstrated that it could successfully and fairly autonomously handle the different tests, the evaluation results also showed that the design had been carried out without involving the main user group: the elderly. Interfaces had to be changed, and an ergonomic, comfortable alternative to the touchscreen had to be found. The housing for the robot also had to be designed. To support these tasks, a group from the Université de Technologie de Troyes (France), led by Dr. Voilmy and focused on the design of human-centered technologies, joined our consortium.

3.1. Design of the Robot Housing

The robot was evaluated in the Living Lab ActivAgeing (LL2A) in Troyes (France) and the Hospital Virgen del Rocío in Seville (Spain). As described by Lan et al. [32,40], housing had to be added to CLARA, and the aim was for its appearance to be designed considering the preferences of the end users. Driven by experts in participatory and user-centered design approaches from the Université de Technologie de Troyes, several internal meetings were organized from April to October 2017. In the first phase, two meetings took place in Troyes and Seville, where the impressions of the focus group were collected and the major features of the robot housing were defined. This design was refined in a second phase, in which design engineers from MetraLabs GmbH joined the focus group (initially consisting only of end users—elderly people, nurses, doctors, etc.). A set of questions had to be answered (e.g., Where should the physical button to call the doctor be placed? Where should the sensors go? How big should the touchscreen be?). Considering all these issues, an online design of the robot was provided by MetraLabs GmbH. In an interactive session with users in Troyes and robotics engineers in Ilmenau (Germany, MetraLabs GmbH), this design was dynamically changed according to the preferences of the end users. A first draft of the housing was evaluated by the end users in June and October 2017 in Troyes and Seville, respectively. Finally, the ability of the system to successfully manage the required use cases was intensively checked in Malaga (Spain) in November 2017. Figure 6 (left) shows the external appearance of CLARA after adding the first version of the chassis. The touchscreen is lowered so that seated users can use it more comfortably. The RGBD camera allows the robot to correctly track the gestures or hand movements performed by a user standing close to the robot; this system can also detect people up to a distance of about four meters. A small screen on the face allows the robot to display gestures. Finally, an IP camera (white, on the robot’s right shoulder) was added to CLARA to allow its behavior to be monitored via a web-based interface.
Figure 7 (left) shows how a person interacts with the touchscreen. Intense use of this quasi-vertical touchscreen forces the patient to adopt an uncomfortable position, not only because they have to keep their arm extended but also because the robot cannot be close enough to prevent them from bending forward and straightening up every time the screen has to be touched. This issue led us to add a third element to the interface: a remote control device, designed based on ideas provided in a co-creation session with end users (see Figure 7 (right)). This device allowed the user to answer questions using large buttons. There were buttons associated with the Barthel test question options, as well as buttons to control the audio volume, answer yes or no, or call a doctor. In addition, the device incorporated a tablet, which was required to perform certain activities in the mini-mental test. As further studies showed [41], this device is the preferred channel for users to answer questions in the Barthel test.

3.2. Connecting CLARA to End Users

In its deployment in a health center, the system included not only the CLARA robot but also an external server, CGAmed, where the captured data were stored and the robot’s agenda was managed. It is also where the reports were generated, reviewed, and closed by the medical professionals. In total, the complete system featured, in addition to the robot’s interface with the elderly person used in the tests, a robot interface for medical professionals (to set up a test or monitor a session in real time) and a server interface for medical professionals (to review the tests, customize the reports, etc.) (Figure 8). The first of these interfaces had been designed without the end users in mind and now had to be redesigned to cater to their preferences. The rest of the system was then implemented.
Figure 9 provides a snapshot of the robot control interface. This interface provides the user with tools for (a) visualizing the agenda of the CLARA robot, (b) manually launching or stopping a CGA test, and (c) supervising a session online. The schedule interface (Figure 10) provides the user with tools to manage the agenda of the CLARA robot, allowing patients and sessions to be added to it. These data can be visualized in the robot control interface.
The test results interface provides a physician with tools for the offline analysis of a recorded session, including visualizing videos, editing scores, comparing the results from several sessions, or managing the automatic report generated to summarize a session. Figure 11 shows an example of the editing of a Barthel test. In this interface, the questions that were not answered by the patient are shown in red, the questions whose answers indicated independence are shown in green, and the rest of the answers are shown in gray. The interface allows the clinician to see the full video of the session or the video associated with a specific question of the Barthel test (view question), edit the score of a specific question, or generate a medical report that can be copied and pasted.
Finally, the patient–robot interfaces were redesigned. Shapes and colors are used to connect the remote control device with the options shown on the touchscreen. Figure 12 shows a snapshot of the robot managing a Barthel test. More details about the update of this interface can be found in Voilmy et al. [30].

4. Large-Scale Evaluation in Care Centers

The final step in the CLARC project involved an intensive, long-term evaluation of the system in real settings that lasted for six months. This evaluation led to a new update in the design of the robot, as detailed below.
A relevant milestone of the previous stage, which demonstrated how problematic it was to establish the use cases without involving the end users, was that the mini-mental cognitive test was removed from CLARA’s responsibilities. This decision was made by the medical professionals at the Virgen del Rocío Hospital in Seville and the Sant Antoni Abat Hospital in Vilanova i la Geltrú, who did not feel it was appropriate for a robot to attempt to evaluate a person’s cognitive state. These are complicated tests for which a robot can be useful for recording sessions and uploading data to the server, but for which a medical professional must always be present.

4.1. Redesign of the Housing

The robot was evaluated in depth, and users were consistently satisfied with the robot’s appearance, voice, and the way it moved. The only problem with this design was the fragile positioning of the sensors, such as the IP camera and, especially, the shotgun microphone, which were located outside the housing. The robot moves among people, and a slight bump to one of these sensors can displace it so that it no longer captures information properly.
To integrate the IP camera into the housing, it was moved into the head of the robot, eliminating the small screen, which was replaced by two cameras. The lenses of the cameras acted as, and resembled, the robot’s eyes, which allowed users to share attention with the robot, a characteristic that significantly helped to improve acceptability. Figure 13 shows this modification in the design phase. The shotgun microphone was also moved to a safer place (see Figure 6 (right)). Figure 14 depicts the new external appearance of CLARA. The chassis was organized into six pieces that can be easily removed if needed.
As mentioned at the beginning of this section, the mini-mental test was omitted from this evaluation. This meant that the tablet connected to the remote control device was no longer necessary. The major drawback of the first version of this device was its large size. Moreover, it had to be present in the room where the questionnaire-based test was taking place (e.g., on the table). A more practical option was for the robot to carry the device and offer it to the user before starting the test. As the information was displayed on the touchscreen on the robot’s torso, we designed a new version of the device without a screen (Figure 15). This new device was 24 × 15 × 8 centimeters in size and could be carried by CLARA in a small tray just below the touchscreen (see Figure 14). As with the old version, the device included buttons to select up to four options (1 to 4, with different colors and shapes), buttons to increase or decrease CLARA’s voice volume, and a large red button that allowed the user to call a caregiver if necessary. This device was very useful for administering questionnaire-based tests in the CGA.

4.2. Evaluation Results

Details about the results collected during this field trial are extensively discussed in [40]. In summary, these tests showed the feasibility of an automated CGA procedure, with promising results in terms of performance and user satisfaction. People liked using the robot and could do so without major problems (96% of users were able to complete the tests adequately). However, these experiments also revealed that it would be naive to accept these initial results at face value, even after such a long-term evaluation. Human–robot interaction remains a complex solution, much more so than other solutions in the ambient assisted-living domain, and hence it has to be approached cautiously to avoid drawing inadequate conclusions [29]. For instance, a voice interface, considered a priori the best choice for users, was scarcely used in real, noisy, daily life settings. This was due to the difficulties an automatic speech recognition system faces when trying to maintain a conversation with an elderly person not used to it, and also to the fact that other interfaces were preferred, even if they were less natural [40]. The CLARC project was a technical success, and it incorporated an adequate user-centered perspective in the design of the platform, but the core of the CLARC project, i.e., automating CGA procedures, was debatable, as further work demonstrated.

5. A Tool for Caregivers in a Retirement Home

In 2018, after the CLARC project, two regional projects involving the use of CLARA in retirement homes, ROSI and ITERA, started. Unlike in the CLARC project, there were no predefined use cases to develop; instead, they were defined in a co-creative process involving caregivers, residents, technicians, and administrative staff. Aiming to address the current main challenges of socially assistive robots [9], these projects focused on performing long-term evaluations of the functionality, acceptability, utility, and accessibility of a socially assistive robot deployed in a real scenario. The scenario was new, requiring more effort in navigation and open interaction, and the use cases would be defined and implemented following a user-centered design procedure.

5.1. User-Driven Definition of the Use Cases

The initial co-creative sessions performed to capture the needs of the users in Vitalia Teatinos revealed an important insight: caregivers working in the retirement home did not want a robot to perform CGA procedures. They claimed, reasonably, that these tests involve not only numerical results but also a deep, expert evaluation from a healthcare professional (e.g., “will the robot know when the person lies about their hygiene habits?”). Having a robot moving around the retirement home trying to conduct these tests would mean more work for them, as they would have to program the agenda of the robot, supervise its behavior, and go over all the results it collected.
These co-creative sessions showed that both residents and caregivers would prefer the robot to focus on performing repetitive, non-specialized tasks that took valuable time away from professionals. Among the different options discussed in these sessions, a first simple use case emerged as the initial behavior to be performed by CLARA in the ROSI project. In this use case, CLARA had to announce the menu, the daily events, and/or the weather forecast to the residents. This use case allowed the robot to be introduced to the users and helped to polish the design and functionality of the robot thanks to user feedback [42]. It also facilitated the generation of ideas for new use cases from residents and caregivers as they became familiar with the robot. Moreover, this simple task allowed for the testing and improvement of the autonomous navigation system and the interaction modalities of the robot, along with its physical design.
CLARA was located in a retirement home when the COVID-19 pandemic started. In this confinement scenario, the possibility of using the robot as a mobile teleconference stand was proposed. Hence, a use case was developed in which relatives could book video calls using a web interface. Then, when it was time for the video call, the CLARA robot autonomously announced it and moved to a particular spot, where it waited for the resident to arrive. Once the person was close to the robot, it used its equipment (screen, speakers, microphone, and camera) to conduct the video call. The cognitive architecture was able to robustly incorporate new software components to enable this communication. The potential of this use case has also been identified by many companies and research groups. In fact, the CLARA robot is currently being employed in parallel with another platform (the GoBe robot) in the DIH-Hero EU project SUSTAIN to evaluate user experiences when telepresence robots are deployed in retirement homes and care centers.

5.2. Redesigning the Sensor Configuration

In the ROSI and ITERA projects, the use cases were kept simple, as the objective was to evaluate the acceptability of a robot working in a real setting. However, these projects revealed three main issues that needed to be tackled before the robot could be effectively used in meaningful use cases in a real social context. It was concluded that the robot should (i) move around safely; (ii) be able to maintain at least simple conversations; and (iii) recognize people and contexts to adapt its behavior accordingly. The CLARA robot successfully achieved safe navigation, but its conversation abilities were very limited, and it was not able to recognize people at all.
In order to increase the perceptual abilities of the robot, the Kinect V2 sensor was replaced with an Intel D435i camera and, in some prototypes, a pair of RFID antennas and dedicated face-recognition hardware (Intel F455) were also attached to the chassis (Figure 16). The robot hardware was also simplified by moving the speech generator component to a Linux PC and removing the Windows PC. Finally, the head’s screen was definitively removed, as moving eyes tend to be disturbing rather than helpful in real human–robot interactions.

5.3. Software Updates

Regarding the cognitive architecture, Figure 17 shows the current implementation of CORTEX employed in CLARA. New components in charge of computing metrics related to non-functional properties have been included, along with adaptation components that use those metrics to maximize the performance of the robot [41].
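As a rough illustration of such an adaptation loop (the metric, threshold, and modalities below are hypothetical, not the published components), a non-functional property such as interaction reliability can be tracked as a running score and used to switch the robot's behavior:

```python
class ReliabilityMetric:
    """Exponentially weighted moving average of interaction successes."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha   # weight given to the newest observation
        self.value = 1.0     # start fully confident

    def observe(self, success):
        self.value = (1 - self.alpha) * self.value + self.alpha * (1.0 if success else 0.0)

def choose_modality(metric):
    # Adaptation rule: fall back to the handheld device when speech degrades.
    return "voice" if metric.value > 0.6 else "remote_control"

metric = ReliabilityMetric()
for ok in [True, False, False, False, False]:  # a run of failed voice interactions
    metric.observe(ok)
print(round(metric.value, 2), choose_modality(metric))  # 0.41 remote_control
```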
On the other hand, working in real daily settings exposed the need to improve the reactive and opportunistic execution capabilities of the robot. Hence, a new approach to the execution of use cases was tested. In order to execute different simple use cases simultaneously, the planner modules were removed from the cognitive architecture, and the complexity of the agents connecting the components with the DSR was increased. The execution of the use cases relied entirely on the stigmergic relationships of the agents through modifications performed in the DSR. This schema allowed for very fast responses and the possibility of performing several use cases simultaneously (e.g., the robot could be moving to a room to announce the menu, interrupt that use case to ask a person if she wanted to check the photos taken at the last event, and continue moving after that interaction); a toy illustration is sketched below. However, although it was an interesting exercise to test the capabilities of CORTEX, this approach proved very difficult to scale. Without advanced coding tools to help programmers identify configurations of the DSR that had already occurred, or to allow easy checking of how the DSR evolves from particular configurations, it was too complex to increase the number of use cases that could be executed this way. Hence, planners were again included in CORTEX, and the simultaneous execution of use cases is currently managed via hierarchical planning. Nevertheless, after that glimpse into the possibilities of stigmergy through the DSR, our group is currently researching how to effectively include small planners, e.g., behavior trees, in each agent, which would allow them to work as autonomously as possible without always having to rely on the dedicated deliberative modules.
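The following toy example, with invented edge names and rules, shows the stigmergic principle: each agent fires purely on the current DSR configuration and writes its result back, so the use case emerges from chained annotations without a central planner.

```python
# Edges are (source, destination, label) triples in a shared set standing in for the DSR.
dsr = {("robot", "menu", "must_announce")}

def navigation_agent(edges):
    # Rule: if a menu must be announced and we are not yet in place, go there.
    if ("robot", "menu", "must_announce") in edges and \
       ("robot", "dining_room", "at") not in edges:
        return {("robot", "dining_room", "at")}    # pretend navigation succeeded
    return set()

def speech_agent(edges):
    # Rule: announce the menu once we are in place, and mark it done.
    if ("robot", "menu", "must_announce") in edges and \
       ("robot", "dining_room", "at") in edges and \
       ("robot", "menu", "announced") not in edges:
        print("[TTS] Today's menu is ...")
        return {("robot", "menu", "announced")}
    return set()

# Agents repeatedly read and annotate the shared state until it stabilizes.
agents = [navigation_agent, speech_agent]
changed = True
while changed:
    new = set().union(*(agent(dsr) for agent in agents))
    changed = not new <= dsr
    dsr |= new
```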
Finally, the last software update of CLARA was related to the need to perform object and person recognition. Although individual sensors have limited capabilities, a multimodal recognition system can fuse information coming from different sources to produce more robust results. In [43], the first step toward integrating such a system into CORTEX is presented. The approach has so far produced interesting results without significantly increasing the computational complexity of the system.
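A small late-fusion sketch illustrates the idea (weights and scores are invented; the actual system in [43] is more elaborate): per-modality identity scores are combined into a single ranking, so a weak RFID read can still confirm an uncertain face match.

```python
def fuse(scores_by_modality, weights):
    """scores_by_modality: {modality: {person_id: score in [0, 1]}}."""
    fused = {}
    for modality, scores in scores_by_modality.items():
        w = weights[modality]
        for person, score in scores.items():
            fused[person] = fused.get(person, 0.0) + w * score
    return max(fused, key=fused.get), fused

best, fused = fuse(
    {"face": {"alice": 0.55, "bob": 0.50}, "rfid": {"alice": 0.90}},
    weights={"face": 0.6, "rfid": 0.4},
)
print(best, fused)  # alice {'alice': 0.69, 'bob': 0.3}
```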

5.4. Evaluation Results

The complete evaluation of the results collected in the ROSI and ITERA projects is currently being performed and will be submitted for publication next year. Overall, the initial results demonstrate that the user-centered design approach proved effective in achieving high long-term acceptability: the robot was positively received by the residents and caregivers, who were willing to continue working with it [42]. The inclusion of the robot in the daily routines of the retirement home also fostered the generation of ideas for new use cases. Among these contributions, one was particularly interesting: using the robot to collect the residents’ menu choices for the next day. It was claimed that this routine could take up to one hour a day from a caregiver, who would rather use that time for more meaningful activities with the residents. Hence, a new national project is starting in 2022 in which the CLARA robot will perform this task.

6. Conclusions and Future Work

When we started working on the design of CLARA, we had to decide between designing a closed architecture, which would strictly enforce the use cases contemplated in a CGA, and an open architecture, which would provide the flexibility to add new agents, modify or replace those already included, and define and implement new tasks or use cases. The idea of using a central representation to which agents are connected and disconnected has been a success. This strategy has allowed us to use agents designed in different frameworks (RoboComp, ROS, MIRA, etc.), even those deployed on different machines. The only two requirements were that agents be able to connect to the interfaces supported by the DSR (generic for all agents and based on the publish/subscribe concept) and that they know in advance how to react to changes in the context (by annotating the DSR) or in the DSR (by acting, and thus causing changes in the context if necessary). This last requirement is undoubtedly the most challenging, as it requires the whole team to be highly coordinated and to respect, in the design of each agent, the grammar defined in the DSR for the use case. Hence, the progress of a use case depends on software agents correctly reading the DSR and annotating it at the right moments. An agent missing an annotation (e.g., the navigation agent not reporting that a destination has been reached) or not responding to a given DSR configuration (e.g., the battery checker agent not detecting a low-battery situation) may cause the robot to fail in its functionality.

In order to simplify the complex process of producing functional and robust use cases that are able to adapt to the environment while the robot pursues its goals, we are working on debugging tools that allow us to visualize how the DSR graph changes during execution and which agents have been activated, and in which order, storing all this information in synchronized log files. Moreover, we are working on the definition of quality-of-service metrics associated with the correct internal functioning of the use cases, which make it possible to determine when an anomalous situation has been encountered (for example, the robot does not speak even though it has been asked to say a sentence); a sketch of such a watchdog is given below.
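As an illustration of this kind of internal monitoring (names and timeout are hypothetical, not the actual metric components), a watchdog can verify that every request to speak is followed by a completion annotation within a deadline:

```python
import time

class SpeechWatchdog:
    """Flags 'to_say' annotations that are never followed by a 'said' annotation."""
    def __init__(self, timeout_s=5.0):
        self.timeout_s = timeout_s
        self.pending = {}                      # sentence id -> request time

    def on_edge(self, label, sentence_id):
        now = time.monotonic()
        if label == "to_say":
            self.pending[sentence_id] = now    # a sentence was requested
        elif label == "said":
            self.pending.pop(sentence_id, None)  # the robot actually spoke

    def check(self):
        now = time.monotonic()
        return [sid for sid, t in self.pending.items()
                if now - t > self.timeout_s]   # overdue sentences = anomalies

wd = SpeechWatchdog(timeout_s=0.1)
wd.on_edge("to_say", "greeting-42")
time.sleep(0.2)                                # the 'said' edge never arrives
print(wd.check())                              # ['greeting-42'] -> anomaly detected
```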
As for the hardware design, the initial idea was to develop a prototype that would be functional and also pleasant for the technical team itself. This perception changed when we included Dr. Dimitry Voilmy’s team at the Université de Technologie de Troyes (France) in the consortium. The strategy and considerations taken into account in the design of CLARA to meet the needs and preferences of end users have been presented in several papers [32,44]. It is worth mentioning that the acceptance of the robot has always been high among end users, according to the AUSUS evaluation framework employed in these projects [45]. Regarding the robustness of the design, the robots have been working for more than six years, and in this time, we have only had to replace the batteries.

Author Contributions

Conceptualization, R.M., J.P.B., A.R.-G. and A.B.; Funding acquisition, M.G.-G., J.P.B. and A.B.; Investigation, A.R.-G., R.M., J.P.B. and A.B.; Methodology, A.B.; Software, A.R.-G., J.P.B. and R.M.; Supervision, A.B. and M.G.-G.; Validation, R.M. and A.R.-G.; Writing—original draft, A.B.; Writing—review and editing, A.R.-G., J.P.B., M.G.-G. and A.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been partially funded by the EU ECHORD++ project (FP7-ICT-601116), the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 825003 (DIH-HERO SUSTAIN), the RoQME and MiRON Integrated Technical Projects funded, in turn, by the EU RobMoSys project (H20202-732410), the project RTI2018-099522-B-C41, funded by the Gobierno de España and FEDER funds, the AT17-5509-UMA and UMA18-FEDERJA-074 projects funded by the Junta de Andalucía, and the ARMORI (CEIATECH-10) and B1-2021_26 projects funded by the University of Málaga.

Institutional Review Board Statement

The studies in which the robot was deployed in retirement homes were conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of Junta de Andalucía in a session held on 24 September 2020.

Informed Consent Statement

Written informed consent was obtained from the subjects who participated in the experimentation. The subjects cannot be identified from the information presented in this paper.

Data Availability Statement

The data generated and analyzed during the projects in which the robot was involved are not publicly available because they contain sensitive personal information. However, some of the data are available upon reasonable request.

Acknowledgments

The authors warmly thank the members of the “Amis du Living Lab” community and the volunteers at the Seville and Malaga centers for their participation in this research. CLARA is the result of collaborative work with researchers from the Universities Carlos III of Madrid, Extremadura, and Jaén, the Hospital Virgen del Rocío of Seville, and MetraLabs GmbH.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ribeiro, O.; Araújo, L.; Figueiredo, D.; Paúl, C.; Teixeira, L. The Caregiver Support Ratio in Europe: Estimating the Future of Potentially (Un)Available Caregivers. Healthcare 2022, 10, 11.
  2. Kohlbacher, F.; Herstatt, C. The Silver Market Phenomenon: Business Opportunities in an Era of Demographic Change; Springer: Berlin/Heidelberg, Germany, 2008.
  3. Christoforou, E.G.; Avgousti, S.; Ramdani, N.; Novales, C.; Panayides, A.S. The Upcoming Role for Nursing and Assistive Robotics: Opportunities and Challenges Ahead. Front. Digit. Health 2020, 2, 585656.
  4. Choe, Y.-K.; Jung, H.-T.; Baird, J.; Grupen, R.A. Multidisciplinary stroke rehabilitation delivered by a humanoid robot: Interaction between speech and physical therapies. Aphasiology 2013, 27, 252–270.
  5. Fasola, J.; Mataric, M. Robot exercise instructor: A socially assistive robot system to monitor and encourage physical exercise for the elderly. In Proceedings of the 19th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Viareggio, Italy, 13–15 September 2010; pp. 416–421.
  6. Kitt, E.R.; Crossman, M.K.; Matijczak, A.; Burns, G.B.; Kazdin, A.E. Evaluating the Role of a Socially Assistive Robot in Children’s Mental Health Care. J. Child Fam. Stud. 2021, 30, 1722–1735.
  7. Suárez-Mejías, C.; Echevarria, C.; Nuñez, P.; Manso, L.; Bustos, P.; Leal, S.; Parra, C. Ursus: A robotic assistant for training of children with motor impairments. In Converging Clinical and Engineering Research on Neurorehabilitation; Biosystems & Biorobotics; Springer: Berlin/Heidelberg, Germany, 2013; Volume 1, pp. 249–253.
  8. Feil-Seifer, D.; Mataric, M.J. Defining socially assistive robotics. In Proceedings of the 9th International Conference on Rehabilitation Robotics (ICORR), Chicago, IL, USA, 28 June–1 July 2005; pp. 465–468.
  9. SPARC: The Partnership for Robotics in Europe. Robotics 2020 Multi-Annual Roadmap for Robotics in Europe; The EU Framework Programme for Research and Innovation Report; euRobotics Aisbl: Brussels, Belgium, 2015.
  10. Payr, S. Towards Human–Robot Interaction Ethics. In A Construction Manual for Robots’ Ethical Systems; Cognitive Technologies; Trappl, R., Ed.; Springer: Cham, Switzerland, 2015.
  11. Boccanfuso, L.; O’Kane, J.M. Charlie: An adaptive robot design with hand and face tracking for use in autism therapy. Int. J. Soc. Robot. 2011, 3, 337–347.
  12. Dehkordi, P.S.; Moradi, H.; Mahmoudi, M.; Pouretemad, H.R. The design, development, and deployment of RoboParrot for screening autistic children. Int. J. Soc. Robot. 2015, 7, 513–522.
  13. Kozima, H.; Michalowski, M.P.; Nakagawa, C. Keepon. Int. J. Soc. Robot. 2008, 1, 3–18.
  14. Mataric, M.; Eriksson, J.; Feil-Seifer, D.; Winstein, C. Socially assistive robotics for post-stroke rehabilitation. J. Neuroeng. Rehabil. 2007, 4, 5.
  15. Wainer, J.; Dautenhahn, K.; Robins, B.; Amirabdollahian, F. A pilot study with a novel setup for collaborative play of the humanoid robot KASPAR with children with autism. Int. J. Soc. Robot. 2013, 6, 45–65.
  16. Chang, W.-L.; Šabanović, S. Interaction expands function: Social shaping of the therapeutic robot PARO in a nursing home. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI ’15), Portland, OR, USA, 2–5 March 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 343–350.
  17. Ali, S.; Mehmood, F.; Ayaz, Y.; Sajid, M.; Sadia, H.; Nawaz, R. An Experimental Trial: Multi-Robot Therapy for Categorization of Autism Level Using Hidden Markov Model. J. Educ. Comput. Res. 2022, 60, 722–741.
  18. Pulido, J.C.; González, J.C.; Suárez-Mejías, C.; Bandera, A.; Bustos, P.; Fernández, F. Evaluating the Child–Robot interaction of the NAOTherapist platform in pediatric rehabilitation. Int. J. Soc. Robot. 2017, 9, 343–358.
  19. Wainer, J.; Robins, B.; Amirabdollahian, F.; Dautenhahn, K. Using the humanoid robot KASPAR to autonomously play triadic games and facilitate collaborative play among children with autism. IEEE Trans. Auton. Ment. Dev. 2014, 6, 183–199.
  20. Shayan, A.M.; Sarmadi, A.; Pirastehzad, A.; Moradi, H.; Soleiman, P. RoboParrot 2.0: A multi-purpose social robot. In Proceedings of the 2016 4th International Conference on Robotics and Mechatronics (ICROM), Tehran, Iran, 26–28 October 2016; pp. 422–427.
  21. Granata, C.; Pino, M.; Legouverneur, G.; Vidal, J.S.; Bidaud, P.; Rigaud, A.S. Robot services for elderly with cognitive impairment: Testing usability of graphical user interfaces. Technol. Health Care 2013, 21, 217–231.
  22. Do, H.M.; Sheng, W.; Harrington, E.E.; Bishop, A.J. Clinical screening interview using a social robot for geriatric care. IEEE Trans. Autom. Sci. Eng. 2020, 18, 1229–1242.
  23. Bauer, J.; Gruendel, L.; Seßner, J.; Meiners, M.; Lieret, M.; Lechler, T.; Konrad, C.; Franke, J. Camera-based fall detection system with the service robot Sanbot ELF. In Smart Public Building 2018 Conference Proceedings; University of Applied Sciences Stuttgart: Stuttgart, Germany, 2018; pp. 15–28.
  24. Jauhri, S.; Peters, J.; Chalvatzaki, G. Robot Learning of Mobile Manipulation With Reachability Behavior Priors. IEEE Robot. Autom. Lett. 2022, 7, 8399–8406.
  25. Miseikis, J.; Caroni, P.; Duchamp, P.; Gasser, A.; Marko, R.; Miseikiene, N.; Zwilling, F.; de Castelbajac, C.; Eicher, L.; Fruh, M.; et al. Lio-A Personal Robot Assistant for Human-Robot Interaction and Care Applications. IEEE Robot. Autom. Lett. 2020, 5, 5339–5346.
  26. Jacobs, T.; Virk, G.S. ISO 13482—The new safety standard for personal care robots. In Proceedings of the ISR/Robotik 2014; 41st International Symposium on Robotics, Munich, Germany, 2–3 June 2014; pp. 1–6.
  27. Pareto-Boada, J.; Román-Maestre, B.; Torras Genís, C. The ethical issues of social assistive robotics: A critical literature review. Technol. Soc. 2021, 67, 101726.
  28. Vandemeulebroucke, T.; Casterle, B.D.; Gastmans, C. Ethics of socially assistive robots in aged-care settings: A socio-historical contextualisation. J. Med. Ethics 2020, 46, 128–136.
  29. Seibt, J.; Damholdt, M.F.; Vestergaard, C. Integrative social robotics, value-driven design, and transdisciplinarity. Interact. Stud. 2020, 21, 111–144.
  30. Voilmy, D.; Suárez, C.; Romero-Garcés, A.; Reuther, C.; Pulido, J.C.; Marfil, R.; Manso, L.J.; Lan Hing Ting, K.; Iglesias, A.; González, J.C.; et al. CLARC: A cognitive robot for helping geriatric doctors in real scenarios. In Proceedings of the ROBOT 2017: Third Iberian Robotics Conference, Sevilla, Spain, 22–24 November 2017; Advances in Intelligent Systems and Computing; Ollero, A., Sanfeliu, A., Montano, L., Lau, N., Cardeira, C., Eds.; Springer: Cham, Switzerland, 2018.
  31. Bandera, J.P.; Marfil, R.; Romero-Garcés, A.; Voilmy, D. A new paradigm for autonomous human motion description and evaluation: Application to the Get Up & Go test use case. Pattern Recognit. Lett. 2019, 118, 51–60.
  32. Lan Hing Ting, K.; Voilmy, D.; Iglesias, A.; Pulido, J.C.; García, J.; Romero-Garcés, A.; Bandera, J.P.; Marfil, R.; Dueñas, A. Integrating the users in the design of a robot for making Comprehensive Geriatric Assessments (CGA) to elderly people in care centers. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28 August–1 September 2017; pp. 483–488.
  33. Bustos, P.; Manso, L.; Bandera, A.; Bandera, J.; García-Varea, I.; Martínez-Gómez, J. The CORTEX cognitive robotics architecture: Use cases. Cogn. Syst. Res. 2019, 55, 107–123.
  34. Marfil, R.; Romero-Garces, A.; Bandera, J.; Manso, L.; Calderita, L.; Bustos, P.; Bandera, A.; Garcia-Polo, J.; Fernandez, F.; Voilmy, D. Perceptions or Actions? Grounding How Agents Interact Within a Software Architecture for Cognitive Robotics. Cogn. Comput. 2020, 12, 479–497.
  35. Boman, I.-L. Health Professionals’ Perceptions of the Robot ‘Giraff’ in Brain Injury Rehabilitation; Assistive Technology Research Series 33; IOS Press: Amsterdam, The Netherlands, 2013; pp. 115–119.
  36. Gross, H.M.; Debes, K.; Einhorn, E.; Mueller, S.; Scheidig, A.; Weinrich, C.; Bley, A.; Martin, C. Mobile Robotic Rehabilitation Assistant for walking and orientation training of stroke patients: A report on work in progress. In Proceedings of the 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), San Diego, CA, USA, 5–8 October 2014; pp. 1880–1887.
  37. Romero-Garcés, A.; Hidalgo-Paniagua, A.; González-García, M.; Bandera, A. On Managing Knowledge for MAPE-K Loops in Self-Adaptive Robotics Using a Graph-Based Runtime Model. Appl. Sci. 2022, 12, 8583.
  38. Alcázar, V.; Madrid, I.; Guzmán, C.; Prior, D.; Borrajo, D.; Castillo, L.; Onaindía, E. PELEA: Planning, learning and execution architecture. In Proceedings of the 28th Workshop of the UK Planning and Scheduling Special Interest Group (PlanSIG’10), Brescia, Italy, 1 December 2010.
  39. Kurth, A.E.; Martin, D.P.; Golden, M.R.; Weiss, N.S.; Heagerty, P.J.; Spielberg, F.; Handsfield, H.H.; Holmes, K.K. A comparison between audio computer-assisted self-interviews and clinician interviews for obtaining the sexual history. Sex. Transm. Dis. 2004, 31, 719–726.
  40. Lan Hing Ting, K.; Voilmy, D.; De Roll, Q.; Iglesias, A.; Marfil, R. Fieldwork and Field Trials in Hospitals: Co-Designing a Robotic Solution to Support Data Collection in Geriatric Assessment. Appl. Sci. 2021, 11, 3046.
  41. Romero-Garcés, A.; Martínez-Cruz, J.; Inglés-Romero, J.; Vicente-Chicote, C.; Marfil, R.; Bandera, A. Measuring Quality of Service in a Robotized Comprehensive Geriatric Assessment Scenario. Appl. Sci. 2020, 10, 6618.
  42. Iglesias, A.; Viciana, R.; Pérez-Lorenzo, J.M.; Ting, K.L.H.; Tudela, A.; Marfil, R.; Dueñas, A.; Bandera, J.P. Towards long term acceptance of socially assistive robots in retirement houses: Use case definition. In Proceedings of the 2020 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Ponta Delgada, Portugal, 15–17 April 2020; pp. 134–139.
  43. Cruces, A.; Tudela, A.; Romero-Garcés, A.; Bandera, J.P. Multimodal object recognition module for social robots. In Proceedings of the ROBOT2022: Fifth Iberian Robotics Conference, Zaragoza, Spain, 23–25 November 2022; Lecture Notes in Networks and Systems; Tardioli, D., Matellán, V., Heredia, G., Silva, M.F., Marques, L., Eds.; Springer: Cham, Switzerland, 2023; Volume 590.
  44. Martínez, J.; Romero-Garcés, A.; Suárez-Mejías, C.; Marfil, R.; Lan Hing Ting, K.; Iglesias, A.; García, J.; Fernández, F.; Dueñas-Ruiz, A.; Calderita, L.V.; et al. Towards a robust robotic assistant for Comprehensive Geriatric Assessment procedures: Updating the CLARC system. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018; pp. 820–825.
  45. Iglesias, A.; García, J.; García-Olaya, A.; Fuentetaja, R.; Fernández, F.; Romero-Garcés, A.; Marfil, R.; Bandera, A.; Lan Hing Ting, K.; Voilmy, D.; et al. Extending the Evaluation of Social Assistive Robots With Accessibility Indicators: The AUSUS Evaluation Framework. IEEE Trans. Hum.-Mach. Syst. 2021, 51, 601–612.
Figure 1. Evolution of the CLARA SAR.
Figure 2. The SCITOS G3 as the base platform for CLARA.
Figure 3. (Left) The internal structure of the CLARA robot and (right) a detailed view showing the shotgun microphone and cameras (see text for details).
Figure 4. The CORTEX cognitive architecture instantiated in CLARA (see text for details).
Figure 5. Graphs showing the nominal evolution of the DSR during a Barthel test question. (Left) The robot has asked a question, and the person can now provide an answer. (Middle) Time is allowed for the user to respond (“person thinking”) using any of the available channels (voice, touch, remote control). (Right) If there is an answer, the corresponding agent updates the DSR (“the person has provided an answer”).
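To make the mechanism behind Figure 5 concrete: as the interaction advances, each interface agent rewrites a small set of labelled edges in the shared DSR graph, so deliberative agents always read a single current interaction state rather than a log of events. The following minimal Python sketch mimics the three nominal steps. It is illustrative only; the DSR class, the set_edge method, and the has_asked/is_thinking/has_answered labels are hypothetical stand-ins for the actual CORTEX/DSR interfaces described in [33,37].

    # Illustrative sketch of the nominal DSR evolution shown in Figure 5.
    # All names here are hypothetical: the real CORTEX architecture shares
    # a distributed graph among its agents, not a local Python object.

    class DSR:
        """Toy shared world model: nodes connected by labelled edges."""

        def __init__(self):
            self.edges = set()

        def set_edge(self, src, label, dst):
            # Overwrite any previous edge between src and dst, mirroring
            # how an agent replaces the current interaction state.
            self.edges = {(s, l, d) for (s, l, d) in self.edges
                          if (s, d) != (src, dst)}
            self.edges.add((src, label, dst))

    dsr = DSR()

    # (Left) The robot has asked a Barthel question.
    dsr.set_edge("robot", "has_asked", "person")

    # (Middle) The person is given time to think and respond.
    dsr.set_edge("person", "is_thinking", "robot")

    # (Right) An interface agent (voice, touchscreen, or remote control)
    # detects the answer and updates the shared representation.
    def on_answer(channel, value):
        dsr.set_edge("person", "has_answered", "robot")
        print(f"Answer '{value}' received via {channel}: {dsr.edges}")

    on_answer("remote control", "yes")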
Figure 6. External appearance of CLARA after adding (left) the first version and (right) the second version of the external housing.
Figure 7. (Left) A user answering questions in a Barthel test through the touchscreen. (Right) A resident using the first version of the remote control while CLARA asks questions using both voice and text on the touchscreen.
Figure 8. Once logged in, the medical professional or caregiver has access to the interface with the robot (robot control), the server (test results), or the robot’s agenda (schedule) and can also configure the language.
Figure 9. The robot control interface.
Figure 10. The schedule interface.
Figure 11. The test results interface: editing a Barthel test.
Figure 12. CLARA managing a Barthel test. The user can answer using voice, the touchscreen, or the large buttons on the remote control device.
Figure 13. (Left) The bottom part of the head of CLARA showing the camera disposition and (right) the fields of view of both cameras [41].
Figure 14. (Left) The final appearance of CLARA and (right) its external structure.
Figure 15. The final version of the remote control device. In both cases, the large buttons on the device are used by the user to express their choice of answer.
Figure 16. One of the CLARA robots with the two RFID antennas.
Figure 17. The final version of the CORTEX cognitive architecture instantiated in CLARA.
Table 1. Comparative analysis for choosing the power base for CLARA (based on the responses from companies or research institutes in 2016).

Criterion | GIRAFF | SCITOS G5 | RB-1 Base | TIAGO | MiR100 | Mobina
Maintenance | No | Yes | Yes | Yes | Yes | Yes
Relevant customers in healthcare | Yes | Yes | Yes | SACRO project | Yes | Yes
Relevant expertise | Extensive experience in real human–robot interaction use cases (TERESA, ExCITE, or GiraffPlus projects) | Experience in real healthcare scenarios focusing on HRI applications (ROREAS, ALIAS, ROBOT-ERA, HOBBIT, CompanionAble projects) | Experience in providing robotic platforms for real use cases (ROBO-SPECT, RUBICON, RADIO projects) | Expertise in robots designed to interact with people: social HRI (e.g., the socSMCs FET project) and European projects (Factory in a Day FP7) | Experience in deployment in real scenarios (healthcare systems) | Experience in providing robotic platforms for real use cases (WiMi-Care, EFFIROB, Elevon, SeRoDi projects)
Sales channels | Direct sales | Direct sales | Direct sales | Direct sales | Direct sales/EU distributors | R&D services
Price | EUR 9500 | ca. EUR 25,000 | ca. EUR 15,000 | EUR 29,750 | ca. EUR 22,200 (incl. VAT) | ca. EUR 10,000
Payload | 5 kg | 50 kg | 50 kg | 30 kg | 100 kg | 10 kg
Optional sensors | No | Customizable | Customizable | Force/torque sensor; 10 m laser upgrade; rear sonars; additional RGBD camera in the base; additional speaker | No | Customizable
Interface with the patient | Monitor and microphone/speakers | Needs to be added | Needs to be added | Mobile head with RGBD camera and microphones; multilanguage text-to-speech; speakers | Needs to be added | RGBD camera, microphones, and a touchscreen
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
