
Prospects of Robots in Assisted Living Environment

by Safdar Mahmood 1,*, Kwame Owusu Ampadu 1, Konstantinos Antonopoulos 2, Christos Panagiotou 2, Sergio Andres Pertuz Mendez 3, Ariel Podlubne 3, Christos Antonopoulos 2,*, Georgios Keramidas 4, Michael Hübner 1, Diana Goehringer 3 and Nikolaos Voros 2

1 Chair of Computer Engineering, Brandenburg University of Technology Cottbus-Senftenberg, 03046 Cottbus, Germany
2 Department of Electrical and Computer Engineering, University of Peloponnese, 263 34 Patra, Greece
3 Adaptive Dynamic Systems Chair, Technische Universität Dresden, 01062 Dresden, Germany
4 School of Informatics, Aristotle University of Thessaloniki, 541 24 Thessaloniki, Greece
* Authors to whom correspondence should be addressed.
Electronics 2021, 10(17), 2062; https://doi.org/10.3390/electronics10172062
Submission received: 31 May 2021 / Revised: 12 August 2021 / Accepted: 24 August 2021 / Published: 26 August 2021
(This article belongs to the Special Issue Robots in Assisted Living)

Abstract

From caretaking activities for elderly people to assistive roles in healthcare settings, mobile and non-mobile robots have the potential to be highly applicable and serviceable. The ongoing pandemic has shown that human-to-human contact in healthcare institutions and senior homes must be limited. In this scenario, the elderly and immunocompromised individuals must be especially protected. Robots are a promising way to overcome this problem in assisted living environments. In addition, the advent of AI and machine learning will pave the way for intelligent robots with cognitive abilities, enabling them to be more aware of their surroundings. In this paper, we discuss the general perspectives, potential research opportunities, and challenges arising in the area of robots in assisted living environments and present our research work on specific application scenarios, i.e., robots in rehabilitation and robots in hospital environments and pandemics, which, in turn, exhibits the growing prospects and interdisciplinary nature of the field of robots in assisted living environments.

1. Introduction

The field of robotics, in general, is exceedingly broad and, in turn, participates in a wide range of application scenarios. In recent years, one of the most important concerns of robotics has been its impactful applicability in the healthcare sector, more precisely, in assisted living environments [1]. This includes assistance with, but is not limited to, physical and mental rehabilitation, nursing for the elderly [2], help with daily chores, home automation environments for people with special needs [2], and, particularly, a matter of growing importance in recent years, i.e., pandemics and epidemics [3,4].
“Movement is life” is a saying used among medical doctors, highlighting the importance of mobility for every patient. Respiratory problems, weakness, muscle atrophy, cachexia, and sarcopenia [5,6] are common pathologies that are worsened by a lack of movement. This is exacerbated when further restrictions are applied to patient mobility in pandemics, such as COVID-19. Providing close, detailed, disciplined, and full-time mobility surveillance of patients who return to, or are treated in, their homes using low-cost, user-friendly, and non-intrusive systems is still an open issue. The use of rehabilitation assessment systems is the dominant practice that can provide experts with fast and accurate screening and conditioning methods, mainly targeted at patients and older adults. However, such systems are hard to acquire and use on a daily basis, since they have high costs (e.g., $3000 for the Biodex Balance System SD) [7] and require a high level of expertise.
The increasing average age of the population [8] has led to new requirements in the healthcare domain, more precisely in cases such as home assistance, rehabilitation, and the early detection of diseases. Assistance systems that can improve the quality of life for older people are becoming more and more necessary to help them live an active and productive life [9]. Technology can be integrated into the daily activities of the elderly, providing safety, a higher quality of life, happiness, and a longer period of independent living.
The ambient assisting living (AAL) concept defines services that are able to build intelligent environments for the assistance of elderly people, relying on accurate localization systems to provide time-critical and reliable services. Knowing the position and actions of the elderly is vital for medical observation, timely accident prevention, behavioral pattern characterization, or anomaly detection [10].
In addition to this, pandemics also pose a threat to a particular risk group, including people with special needs, individuals with compromised immune systems, and the elderly, both in hospitals and in nursing-homes [3,4,11]. In robot-assisted living environments, a reduction of human-to-human contact for repeated tasks is usually desired as this would greatly reduce the probability of pathogen transmission [12]. Examples of repeated tasks could be caretakers accessing doorways frequently and coming into contact with physical objects, such as door handles, knobs, and surfaces, or not being able to observe social distancing while delivering medicines or vaccination to individuals.
In view of the above-mentioned scope of problems, this article goes through different aspects of how robots and associated technologies in ambient assisted living (AAL) provide excellent research opportunities and challenges, along with application scenarios in these areas, exploring the prospects of robots and related inter-disciplinary fields in assisted-living environments. In these application scenarios, we present our research work to address issues in the areas of rehabilitation, localization, social distancing, and sanitization. This paper is organized as follows:
  • Section 2 provides a brief background of the related work and a few projects that have been carried out in this field. Specifically, it lays down the contributions of a few EU Horizon2020 Projects [13] that target the area of robots in assisted-living environments.
  • Section 3 includes a general perspective of the research opportunities and challenges that the field of robots in assisted-living environments (ALEs) offers. It shows that the field, when combined with emerging modern technologies, enables a broad spectrum of research opportunities and challenges. This section also includes a special subsection that discusses the tremendous applicability of embedded systems pertaining to the combination of AI/ML, robots, and the need for executing computation-intensive algorithms on-robot.
  • Section 4 gives detailed insight into our research contributions under two different scenarios: (1) robots in rehabilitation and (2) robots in the hospital environment and pandemics. Under these scenarios, we propose methodologies to address the use cases discussed in the sub-sections therein. In particular, the systems discussed under robots in rehabilitation, i.e., the A-Balance system and indoor localization, are extensions of approaches also developed under the project roadmap of the RADIO project [14,15,16], which used TurtleBot [17] as a base robot platform. Under robots in hospital environments, we propose methodologies that use machine learning and computer vision algorithms implemented on the Pepper robot [18] to address sanitization and social distancing.
  • Section 5 concludes the work.

2. Background: Existing Research Work and Projects

2.1. Related Work in the Field of Robots in Assisted Living Environment

Robots in assisted-living environments have been gaining importance and relevance for a long time. Different approaches for assisted living (AL) have been proposed in the past for applications ranging from caring for people in nursing homes to supporting persons with special needs in their own private spaces [19]. Human–robot interaction from a socially acceptable outlook is one of the main areas of focus when it comes to assisted living. An approach has also been demonstrated where a smart environment interface for human–robot interactions was developed [20]. This work uses ECHONET, an ISO-certified home network standard, the universAAL (uAAL) platform for assisted living, and the Pepper humanoid robot to enable human–robot interactions by way of verbal (natural language processing) and non-verbal communication, such as a touch interface. This combination, in turn, is employed in a use case where a user can control a device on the network. The idea of assistive robots being deployed in hospitals, care homes, and personal spaces is very promising. Japan’s “Machine Industry Memorial Foundation” estimates that approximately 21 billion US$ can be saved through the use of assistive robots where they are needed [21]. The ongoing COVID-19 pandemic has also yielded the realization of how useful such systems can be in situations where the health of a vulnerable group is at risk due to human-to-human interactions and temporary isolation for medical attention is desired [3,4]. For various ambient assisted living (AAL) solutions, some recent non-robotic approaches have also been proposed, such as SMARTCARE, which focuses on integrating AAL and home monitoring systems using IoT devices [22]; such solutions, in turn, can eventually be used with a robot integrated within the aforementioned scenario to enable healthcare.

2.2. Research Projects on Assisted Living

The EU has identified and highlighted the needs of Europe’s increasingly aging population, namely, issues regarding the risks of cognitive impairment, frailty, and social exclusion, which have considerable negative consequences on independence, quality of life, and the sustainability of healthcare systems. Calls for proposals, such as PHC-19 [23], focused on the development of robotic services applied in the context of AAL for aging populations. Various funded projects presented innovative solutions in the context of robotics and AAL.
The RAMCIP Project [24] presented an autonomously moving service robot with an onboard touch screen and a dexterous robotic hand and arm. Its objectives were focused on serving elderly people that suffer from mild cognitive impairments and dementia through the development of high-level cognitive functions.
The GROWMEUP Project [25] also focused on improving the well-being of elders. In this context, a robotic platform was designed and developed with enhanced learning capabilities, cloud connectivity, and environment-awareness skills. These characteristics facilitated its adaptation to user behaviors, habits, and daily routines.
Another project that was funded in the context of PHC-19 was the ENRICHme Project [26]. ENRICHme deals with robotics in AAL environments from a different perspective. ENRICHme aims to improve the quality of life of elderly people suffering from MCI by introducing three levels of intelligence: robot, ambient, and social. These three key concepts are realized in the project through a robot that encompasses safe navigation, monitoring, and interactions with a person along with home automation and ambient sensing for long-term monitoring of human motion and life activities.
Finally, the RADIO Project [15] developed an integrated smart home/assistant robot system. Its main focus was to develop a sensing/actuation system that would monitor the daily activities of elderly people, focusing mainly on user friendliness and familiarization with sensor technologies. The objective was to allow elderly people to focus on their daily chores without disrupting their own activities.

3. Robots in Assisted Living Environment: General Perspective, Research Opportunities and Challenges

Robots that assist humans in any task, also known as assistive robots, have become more common in the last ten years [27]. Assistive robots can help achieve extended automation based on a division of labor between humans and robots in the workplace. Workers are increasingly interchangeable with automated technology and intervene mostly to make up for robots’ shortcomings, portraying a future workplace in which human operators serve machines rather than vice versa; however, this does not mean removing humans from the workplace. Humans’ formidable flexibility, object recognition, physical dexterity, and fine motor coordination are still necessary for many workplace tasks, such as in warehouses and retail. That need generates interdependency between humans and robots, i.e., robot introduction makes human interactions with those systems crucial rather than removing humans from the picture. In this sense, the value that assistive robots and automation provide to workers must remain at the center of human–robot collaboration in the workplace [28].
For robots to work side by side with humans, they need to have feedback from their surroundings. Smart environments equipped with different technologies can capture thermal image data via infrared sensors, visual data via optical sensors or cameras, and spatial data through position sensors. This information is crucial for the optimal and safe control of robots by allowing the pervasive datafication of workers, objects, and activities. Similarly, wearable devices, such as glasses, bracelets, or gloves, capable of capturing kinematic data through accelerometers, gyroscopes, or speed scans, can be used for feedback in the system and ease human–robot collaboration in the workplace.
Assistive robots are also used to aid humans in domestic or health tasks. According to their purpose, these robots can be classified into different categories, such as (1) service robots that aid disabled and older adults in their daily routines; (2) mobility aid robots that help transport a person from one place to another; (3) serving and feeding assistive robots that serve people’s meals in hospitals and care centers; and (4) monitoring robots that are useful for people who need constant supervision [28]. Furthermore, assistive robots can also give psychological comfort and improve the emotional well-being of patients in need, engaging them with mental activities. These assistive robots accompany children in hospitals, providing relaxation, supporting home nursing for the elderly, and keeping dementia patients company. A robot’s human-like resemblance and its ability to interact are critical for this type of robot, since they make it more user-friendly. Engineers have explored different techniques for successful and natural human–robot interactions. In this sense, defining innovative methods of human–robot interaction remains an essential research avenue.
Speech recognition is an essential form of human–robot interaction in complex and changing environments because speech is the most used human form of communication. Communicating with a robot through speech enables the robot to have more reference information for autonomous task execution and, potentially, to make smarter decisions. Many speech recognition tools have been made available recently; however, integrating them with service robots is an ongoing challenge. Examples in this field include a voice-controlled wheelchair that aids a disabled person in moving around without hand commands [29,30,31] and voice-controlled humanoid robots that can navigate autonomously to do home chores [32]. In reality, simple voice commands do not give enough information to enable continuous and safe robot motion in a complex environment. It is also challenging to decide the correct time for giving voice commands during robot motion. Human voice commands may not be received promptly by the robot’s autonomy stack due to speech processing and network latency. This demands urgent attention to resolve the time-delay problem in voice-controlled systems.
On the other hand, non-verbal communication modalities, such as gestures and facial expressions can be more useful in certain situations, when it is faster or more advantageous not to make noise. Additionally, human gestures provide a more schematic way of controlling a robot, enabling their use in a wide range of service-robot applications. In this regard, human hand gestures are natural and permit a more user-friendly way to control a robot, as in [33], where a Pepper robot was controlled with gestures that were recorded by a camera in real-time. Another approach includes the use of surface electromyography (sEMG) sensors, which are useful non-invasive methods to record the electrical activity of muscles. These signals can be applied in controlling robot movements that are similar to those of humans. However, some users, mainly disabled people, tend to avoid using their extremities to ask for robot help and hand-arm gestures are not the only gestures humans can make. A “hands-free” way to interact with a robot can be with one’s face or with head gestures, such as in [34].
Thus far, we have discussed human–robot collaborations; however, another opportunity to be tackled is robot–robot coordination. In multi-robot systems, the actions of each agent must be planned considering every other part in order to maximize the performance of the team and avoid collisions. Cooperative planning strategies for multi-robot systems consider localization, target tracking, object recognition, exploration, motion planning, and more. With this in mind, and similar to human–robot interactions, communication is an essential part of effective robot–robot coordination. Typical characteristics of real-world applications, such as limitations of communication resources, unreliable networks, and interference susceptibility, hinder the sustainability of many of the aforementioned planning strategies for multi-robot systems. Such scenarios motivate the search for solutions that allow reducing these issues while also maintaining a reasonable coordination performance.

3.1. Research Opportunities in Conjunction with Embedded Systems

To fulfil the requirements of assistance with humanoid robots, several fields are to be explored further. First of all, it is important to have a broad understanding of human–machine interaction. This is of particular interest as it is vital that humans, usually in delicate situations, feel comfortable in the presence of a robot, despite its humanoid appearance. This could be done by taking advantage of cameras coupled to machine learning algorithms to recognize certain gestures and to deduce human emotions [35]. Moreover, the interfaces presented for humans to interact with robots have to be simple enough, allowing straightforward access to the provided functionality, as shown in Section 4.2, where a Pepper robot has a tablet-based Android interface. Not only is this interaction vital, but how robots interact with one another is also important. This enables them to collectively solve complex tasks following the distributed computing paradigm.
It can be further deduced that robots in assisted-living environments will be endowed with multiple heterogeneous sensors, not only to be aware of their surroundings, but also to perceive human emotions. Usually, this set of sensors produces large amounts of data concurrently, which need to be processed accordingly. Moreover, these sorts of systems have strict real-time constraints. This brings the use of embedded computers with reduced computation resources to their limit. Note that these are not only computationally intensive but also I/O-intensive tasks. FPGAs are good candidates for these systems, as they offer notable features for I/O management and can also offer data paths that are fully customized to the application’s computational requirements. Moreover, the most recent embedded FPGA devices exhibit performance per watt close to that of application-specific integrated circuits (ASICs), while still offering outstanding I/O performance [36]. In addition, they can pre-process data very close to the sensors and, being programmable, provide the versatility to design hardware specifically according to needs. Lastly, they offer the possibility of modifying some hardware blocks dynamically at run-time via dynamic partial reconfiguration (DPR) techniques. This characteristic could be of particular interest for switching algorithms on the fly depending on current environmental conditions. The benefit would be to avoid having an FPGA programmed for all potential circumstances and instead switch these blocks according to the current needs, leading to reduced power consumption [37], since FPGAs of a moderate size can be used.
There are several commercial robots designed as social humanoids with the ability to recognize faces and basic human emotions, such as SoftBank Robotics’ Pepper [38]. The problem with these is that they are closed systems, and interfacing their embedded computers with FPGAs is not possible. In addition, the cost of solutions like SoftBank Robotics’ Pepper is prohibitively high for them to be widely used in home environments. Therefore, new techniques are needed to enhance the embedded computers of robots with FPGAs [39]. Most of these robotic platforms are compatible with the robot operating system (ROS), which has become the mainstream middleware for robotics, not only in research but also in industry. As it is a software solution, it can run on any CPU capable of running Linux. Therefore, to circumvent the lack of FPGA interfaces on the robots, ROS can be used as a bridge between these two computational platforms. This means that the most computationally intensive tasks can be offloaded to a hardware IP on the FPGA, specifically designed for a given task. Moreover, it is possible to feed the I/O data directly from sensors to the IP blocks (bypassing the embedded CPU), resulting in noticeable savings in terms of memory bandwidth and power consumption.
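To make this bridging pattern concrete, the following is a minimal sketch of a ROS node (in Python, using rospy) that offloads a per-frame filtering step to a hypothetical FPGA accelerator exposed as a memory-mapped device file; the device path (/dev/fpga_filter) and its byte-stream protocol are illustrative assumptions, not a real driver API.

```python
# Minimal sketch of the ROS-as-bridge pattern: the node forwards raw camera
# frames to a hypothetical FPGA IP (exposed by a vendor driver as a device
# file) instead of filtering them on the embedded CPU.
import rospy
from sensor_msgs.msg import Image

class FpgaOffloadNode:
    def __init__(self):
        # Open the (hypothetical) FPGA accelerator device once at startup.
        self.accel = open("/dev/fpga_filter", "r+b", buffering=0)
        self.pub = rospy.Publisher("/camera/filtered", Image, queue_size=1)
        rospy.Subscriber("/camera/raw", Image, self.on_frame, queue_size=1)

    def on_frame(self, msg):
        # Offload the compute-intensive step: write raw pixels to the FPGA IP
        # and read back the processed frame of the same size.
        self.accel.write(bytes(msg.data))
        msg.data = self.accel.read(len(msg.data))
        self.pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("fpga_offload")
    FpgaOffloadNode()
    rospy.spin()
```

Because ROS topics decouple the producer and consumer, the same node could transparently fall back to a CPU implementation when no accelerator is present.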
Even though FPGAs are a good match for robotic applications, they are not that straightforward to program compared to the most common available software solutions for robotics, such as ROS. Therefore, several research fields are to be further explored:
  1. Modeling: An abstraction in the design process has advantages towards simplified programmability. This would have to take into account how hardware architectures for robotics need to be modeled to comply with the requirements dictated by pre-existing software models and specifications. DPR techniques need to be studied to determine how models should express them. The end goal is to obtain hardware models from software specifications.
  2. Automation techniques: Considering that the workflow to obtain an RTL design could become cumbersome, code generation and automation techniques, leveraging the models, need to be explored and developed to provide effortless deployment. Further research has to focus on how robotic architectures and applications can be modeled and which modeling techniques can be useful for FPGA designs.
  3. Adaptivity and reusability: One of the key features of ROS implementation is the reusability of its components. Thereby, the hardware-based ones that result from automation techniques should also follow this. This could be ensured by relying on model-driven engineering and automation tools. This would also be beneficial for adaptivity of said components for multiple platforms, whether robotics or FPGAs.
However, it should be pointed out that, during the last few years, we have experienced tremendous improvements in FPGA development tools. For example, remote access and programmability of FPGA devices (even from languages such as Python) is rapidly becoming a mature technology. Furthermore, new (more performance- or power-efficient) FPGA bitstreams can be instantiated in AAL robots through cloud-based downward links. This is expected to create new business and economic models (especially for SMEs and mid-cap companies) similar to FPGA-based cloud solutions.

3.2. Challenges

Social distancing and pandemic-oriented lockdowns have given rise to many different research challenges in the field of robots in assisted-living environments. In hospitals, nursing homes, and senior citizen homes, the sanitization of objects that come into contact with humans is a problem that can be addressed using robots. Use cases of such a solution include sanitizing the handles and knobs of doors in a hospital corridor or a patient’s room. In recent work, a human-support robot uses a deep learning framework in such a case to detect door handles, relying on a given dataset, and sanitizes a specific door handle upon detection [40,41]. To extend this application area, a neural network can also be trained to detect fire in care homes, raise alarms, report incidences of fire outbreak to a fire department, or provide a first emergency response, such as extinguishing a fire. Off-the-shelf humanoid robots like Pepper have enough computing resources to support their emotional engines but only limited memory for heavy learning algorithms. They are usually slow when responding to external commands. Their field of view is limited to the angles imposed by their front-facing input cameras, and they may not report fires that are out of range. However, they offer a good range of sensors (e.g., laser, RGB cameras, microphones) on which to build algorithms that address the challenges described in this section.
From a technical point of view, the challenge is how these algorithms are to be implemented, considering that they would require going beyond the available computational resources of off-the-shelf robots. A solution to this problem is addressed in Section 3.1. By enhancing an off-the-shelf robot with FPGAs, one can take advantage of the sensors and actuators it already has and extend its processing capabilities with specific algorithms for any given task, such as those described in the following sections.
In this area, a significant challenge is the ability of a robot, and of the neural networks responsible for detection, to recognize fire from dissimilar features, such as flame, smoke, and heat. This is a particular challenge that can possibly be addressed through cognitive machine learning [41], where an intelligent system can also perceive dissimilar objects by guessing their functions. Moreover, wireless sensor networks can be combined with a robot equipped with cameras or a similar set of sensors to exploit the cognitive features provided by AI and machine learning, for example, where an object with slightly dissimilar features attached to a window is perceived as a handle to be disinfected.
In general, the closer a robot is in appearance to the human form, the more demands humans inherently place on it, leading to disappointment as the provided functionality may not always meet user needs [19]. Humanoid robots are sensitive to humans within a one-meter radius and get confused when interacting with more than one person. Another challenge programmers face is personalizing the velocity to suit the walking paces of different elderly people in care homes [20]. Furthermore, a training curve is necessary to adapt to their use, which may be uninviting for the ailing elderly in care homes. Private data collection for experiential robot learning increases power consumption and raises ethical concerns [19].
On the other hand, machine learning models have been applied to various robotic challenges involving sensing, control, and estimation in biomedical applications. In this context, there are cases in which a real-time response is required and is achieved by hardware implementations, which often involve such data-driven methods. As the literature shows, machine learning has great applicability for creating models that interpret complex bio-signals. These bio-signals are especially useful for service robotics, since they may work as a communication bridge between a person’s wishes and the robot. Moreover, such mathematical abstractions favor efficient hardware implementation due to their structure, which is composed of simple entities, such as neurons or support vector machines.
The inherent parallel architecture of artificial neural networks (ANNs) can be exploited with field-programmable gate arrays (FPGAs). These devices are flexible logic structures that allow data to be processed in parallel at high speed on a single device. A recent review on the topic of ANN solutions embedded on FPGAs and their advantages over other heterogeneous computing platforms is given in [42]. Recently, FPGA solutions for deep artificial neural networks have also been studied for both the training and inference stages. In [42], the authors reported the most important features and advantages of hardware implementations of deep ANNs, namely their lower power consumption and inherent reconfigurability.
Nonetheless, FPGAs have some drawbacks, such as hardware resource utilization and power consumption. The former is an issue considering that the reconfigurable hardware inside an FPGA is not limitless; not every architecture will fit into a specific device. So, if the solution to a problem is to get a bigger FPGA, the latter drawback comes into play, and, as a consequence, a larger device or architecture will draw more power. Several solutions might come to mind for these problems, such as architecture optimization or better resource utilization; however, there are limitations regarding the availability of resources and the real advantages of optimizing the architecture. To meet the strict requirements of embedded applications in terms of performance, power consumption, and physical dimensions, a novel solution that gets past these issues is the implementation of dynamically partially reconfigurable (DPR) systems.
Another significant domain that robots are expected to contribute to in ambient assisted living (AAL) environments is rehabilitation. It is highly acknowledged that rehabilitation has a major effect on the quality of life of people with disabilities, whether they relate to chronic impairments or not. Injuries, such as spinal cord injury (SCI), are a dominant factor that greatly affect a person’s physical and mental health, along with their financial dependence. Therefore, it is of paramount importance for patients to engage in rehabilitation programs and get appropriate treatment. According to the World Health Organization (WHO), every year, between 250,000 and 500,000 people suffer a SCI while people who suffer from SCIs are two to five times more likely to die prematurely [43].
However, from an economic perspective, the costs of providing adequate rehabilitation routines are quite high. In parallel, devices dedicated to SCI treatment are limited and available only in rehab centers and hospitals. With the cost of hospitalization remaining quite high, the pressure on social care systems limits people’s access to such treatments.

4. Robots in Assisted-Living Environment: Application Scenarios and Use Cases

4.1. Robots in Rehabilitation

The design and development of low-cost solutions based on new technologies that could be applied in a patient’s personal space seem to be very promising. In this context, the “A-Balance” system was designed and developed in collaboration with medical experts.
The initial scope of A-Balance, as shown in Figure 1, is posture estimation and balance evaluation of patients who are in rehab due to a spinal cord injury (SCI), whether it was caused by an ischemic stroke or an accident. Specific requirements were defined in order to develop an appealing solution for both patients and doctors. Such requirements include a low cost of purchase and maintenance, portability, convenience of use, and being as unobtrusive as possible.
As instructed by medical experts, users are prompted to follow specific movements that give valuable feedback to the doctors regarding the neuromuscular condition of the patient in terms of controlling their torso. An inability or reduced ability to follow the exercises given by experts provides valuable information about a patient’s status. The rehabilitation process is executed remotely, and the final recovery of patients suffering from SCI is affected by various factors. The most critical is the consistency and accuracy of exercise execution. Patients, in most cases, fail to execute the exercises on their own, either due to misinterpretation of the guidelines or due to negligence.
The presence of a home rehabilitation setup that emphasizes a low cost and high rates of engagement by users could address many of the problems that unattended rehabilitation presents.

4.1.1. System Description

The authors of this work, in collaboration with medical experts, identified the impact of such problems, which were further aggravated by the latest pandemic, and designed and developed the A-Balance system. A-Balance aims to support people that suffer from SCI through a wearable and portable solution that can easily be applied in their home environment, as depicted in Figure 1.
A-Balance relies heavily on advanced micro-electronic (AME) systems and smart system integration (SSI). To keep the sensor modalities highly accurate, open COTS IMU (inertial measurement unit) devices offering ultra-low-power wireless interfaces (e.g., BLE) are integrated in order to offer increased maneuverability to the end user.
The current practice in balance and posture assessment requires expensive and bulky systems that cannot be applied outside the environment of a clinic/hospital [44]. Moreover, the use of such devices requires training, and they cannot be operated by non-medical personnel, such as patients or their caregivers. In this context, people that need to improve their stability and balance must follow the required routines in hospitals, guided by clinicians. Thus, the overall cost for patients and/or insurance companies and the health system is greatly increased.
Wearable cyber-physical systems have already been used to mitigate the increased medical expenses posed by specialized equipment. The authors in [45] presented an algorithm that uses an accelerometer along with different barometer features. They compared two algorithms for classifying posture transitions and falls by processing accelerometer and barometer features.
The authors in [46] utilized accelerometers to develop and evaluate balance assessment algorithms that contribute towards the overall objective of moving the primary balance assessment from hospitals to the home environment. The authors in [47,48] utilized the increased use of smartphones in everyday life to deliver balance assessments in patients with stroke. However, a holistic approach that is ready to be adopted in daily practice is not yet available. The A-Balance aims to fill the gaps and proposes an end-to-end approach that utilizes wearable devices and edge computing along with AI techniques delivered as cloud services through efficient integration patterns.
In this context, A-Balance (Figure 2) is a multimodal system that incorporates cyber-physical systems and IoT technologies to realize a holistic ICT approach to rehabilitation. A-Balance follows a 3-layer architecture that consists of:
  • Layer 1: Layer of the physical devices (accelerometers, pressure sensors, cameras, etc.);
  • Layer 2: Layer of network integration and cloud interconnection (gateway);
  • Layer 3: Atlas [49] cloud.
Figure 2. A-Balance system.

4.1.2. End Layer

The layer of the physical devices incorporates various sensing components that are responsible for collecting the multiple modalities that will be processed in a later step to facilitate the estimation of a patient’s posture and the evaluation of the rehab routine’s execution. The main modality that is integrated in the current version of the system is the inertial values as reported by a 3-axis accelerometer and 3-axis gyroscope. Both sensors are integrated in a low-cost wireless and wearable device, the SimpleLink™ multi-standard CC2650 SensorTag from Texas Instruments (TI) [50]. Some of the main characteristics of the device can be summarized as follows:
  • Multiple low power wireless interface support (BLE, ZigBee);
  • Based on ARM® Cortex®-M3 CC2650 wireless MCU;
  • Ultra-low power operation;
  • Small form factor;
  • Support for 10 low-power sensors, including ambient light, digital microphone, magnetic sensor, humidity, pressure, accelerometer, gyroscope, magnetometer, object temperature and ambient temperature;
  • Mainly ambient/kinetic sensor oriented;
  • Low cost;
  • Highly configurable.
Measurements from the accelerometer and gyroscope are collected from the TI SensorTag and transmitted wirelessly through the available BLE interface. This setup allows the transmission of data for a significant period of time before battery replacement is needed. The sensor integrated on the TI SensorTag device is the MPU-9250 MEMS motion tracking device. The MPU-9250 is a 9-axis motion processing unit, widely used in smartphones, tablets, and wearable devices. It is a system in a package (SiP) that combines two chips: the MPU-6500, which contains a 3-axis gyroscope and a 3-axis accelerometer, and the AK8963 3-axis digital compass, along with an onboard processor. The current consumption of the device is quite low, around 3.5 mA when all sensors are enabled.
The triple-axis MEMS gyroscope gives digital outputs in the range of ±250, ±500, ±1000, and ±2000°/s through the integrated 16-bit ADC. The operating current is estimated at 3.2 mA, and at 8 μA in sleep mode. The accelerometer gives digital outputs with a programmable range of ±2 g, ±4 g, ±8 g, and ±16 g through a similar 16-bit ADC. The default sampling rate is 1000 Hz, and the embedded low-pass filter is set to a cutoff frequency of 184 Hz.
Angles and accelerations are transmitted to the ATLAS cloud through the ATLAS gateway, and the patient’s posture is estimated. This posture is presented to the end user as inclination angles through a graphical interface. Additionally, the position of the torso is visualized in real time through a human body graphic, along with a moving dot that gives an indication of the user’s center of gravity.
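As a worked sketch of the scaling and angle computation implied above, the following converts the signed 16-bit ADC counts reported over BLE into physical units using the full-scale ranges quoted in the text, and derives torso inclination from the measured gravity vector; the raw sample values and the pitch/roll convention are illustrative assumptions.

```python
# Sketch: scale raw MPU-9250 counts (signed 16-bit ADC) by the configured
# full-scale range, then compute torso inclination from the gravity vector.
import math

ACCEL_RANGE_G = 2.0      # configured full-scale range: +/-2 g
GYRO_RANGE_DPS = 250.0   # configured full-scale range: +/-250 deg/s

def accel_g(raw):
    return raw / 32768.0 * ACCEL_RANGE_G

def gyro_dps(raw):
    return raw / 32768.0 * GYRO_RANGE_DPS

def inclination_deg(ax, ay, az):
    # Pitch/roll of the torso relative to gravity, in degrees.
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))
    return pitch, roll

# Example: made-up raw counts for a torso leaning slightly forward.
ax, ay, az = accel_g(2785), accel_g(0), accel_g(16138)
print(inclination_deg(ax, ay, az))  # approx. (9.8, 0.0)
```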
Furthermore, using a low-cost camera and already existing vision-based algorithms, an estimation of the movement of the user’s center of gravity will be provided and fused with the rest of the modalities in order to increase event detection accuracy. A critical point here is that no device is required to be on the end user for camera detection, since the respective algorithms can detect changes between frames and thus estimate the center of gravity of the moving part that the camera is recording.
The distribution of the patient’s weight is an important modality that needs to be integrated in the system. This measurement is collected by an array of pressure sensors installed in an exercise/yoga mat, supported by a controller that carries the appropriate low-power network interface (BLE) for the transmission of the collected data.

4.1.3. Network Integration

All the aforementioned devices transmit their data to the gateway. This gateway has a two-fold contribution to the overall system. Firstly, it delivers the required integration in terms of communication. Secondly, it can achieve improved utilization of the network by moving less processing-intensive tasks closer to the data sources, following a fog computing approach, and thus reduce network-related delays. The gateway, based on the popular, low-cost Raspberry Pi 3, carries three different network interfaces, namely WiFi, Bluetooth/BLE, and ZigBee (via an additional module).

4.1.4. Cloud Layer

Finally, the cloud layer of the system is based on ATLAS [49], a cloud platform built on cutting-edge technologies that is able to handle large volumes of data streams originating from a constellation of IoT devices. Inside the ATLAS core, loosely coupled services are supported following the micro-service architectural pattern, which offers many advantages with respect to robustness, maintainability, and scalability. The communication from the physical domain to the core domain, as well as between the core services, is based on the MQTT communication paradigm. Finally, at the top of the ATLAS platform, services are responsible for data collection, processing, and storage.
The benefits of such an architecture are multifaceted in terms of resources, complexity, efficiency, and scalability. Of course, focus is also given to non-functional requirements that are strongly related to the ethical aspects of the application. Data collected from the user are transmitted over encrypted channels and stored in anonymized form. Strict authentication/authorization mechanisms are implemented for every interaction between the components and the users.
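As an illustration of the gateway-to-cloud path, the sketch below publishes an anonymized posture sample over an encrypted MQTT connection using the paho-mqtt client; the broker address, credentials, topic layout, and payload schema are illustrative assumptions, not the actual ATLAS interface.

```python
# Sketch of a gateway-side service pushing posture samples to the cloud over
# MQTT with TLS, matching the encrypted-channel requirement described above.
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="abalance-gateway")
client.username_pw_set("gateway", "secret")   # placeholder credentials
client.tls_set()                              # encrypted channel
client.connect("atlas.example.org", 8883)     # assumed broker endpoint
client.loop_start()

def publish_sample(patient_id, pitch, roll):
    # Anonymized patient identifier only; no personal data in the payload.
    payload = json.dumps({"ts": time.time(), "pitch": pitch, "roll": roll})
    client.publish(f"abalance/{patient_id}/posture", payload, qos=1)

publish_sample("p-0042", 9.8, 0.3)
```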
Moreover, special care has also been given to the graphical interfaces of the A-Balance platform in order to deliver an intuitive environment that will facilitate the use of the system and increase user engagement through adopting gamification.

4.1.5. Indoor Localization

As location information is the key to providing a variety of services in indoor computing environments, it is becoming an important feature in smart homes and, in general, for devices that change location frequently. There are plenty of localization methods, which can be divided into two main categories: range-based and range-free [51]. Range-based localization methods rely on indirect measurements of the distance or angle between sensors. The most common range-based methods are received signal strength indicator (RSSI), time of arrival (ToA), time difference of arrival (TDoA), and angle of arrival (AoA). On the other hand, range-free methods estimate distance by measuring the number of hops between any pair of sensors using numerical or statistical methods. Common range-free mechanisms are distance vector hop (DV-Hop) and proximity-based methods.
Our proposed approach for accurately estimating the position of a device in an indoor environment is a mixed localization technique that uses both range-based and range-free methods in different steps of the localization process. During the early steps, a proximity-based, range-free technique is used to remove outliers. By outliers, we mean fixed-position devices (beacons) that will be excluded from the next steps of the localization process. Next, we adopt an RSSI-based, range-based localization technique in order to estimate the current position of the tracked device. Our proposed localization method is detailed as a process and is shown in Figure 3.
Our approach is based on the assumption that high-strength (high-RSSI) signals cannot be observed at long distances, whereas low-strength (low-RSSI) signals can be observed at different distances, both near and far, due to signal propagation phenomena that reduce the signal strength. Taking this assumption into account, high-strength signals are classified as more accurate for the reduction and filtering of the collected signals.
Our estimation process starts with the collection of different signals using an RSSI-sampler component. We utilize a data transmission frequency of 10 Hz with a sampling period of 2 s, which means that each process is able to analyze up to ~20 RSSI samples per node. The second step is to remove RSSI outliers (outlier removal) with respect to the high-strength signals. The third step (envelope identifier) of our proposed process is to identify a sub-part of the RSSI samples, equal to or greater than three samples, with respect to the high-strength signals. In the fourth step (signal normalizer), the signal normalization action is triggered, which normalizes the signal in favor of high-strength signals, also taking into account the previous normalization processes. Finally, the last two steps (distance estimator and location estimator) calculate the distance between the fixed-position devices and the tracked device, utilizing the log-normal shadowing model, and, based on the distances calculated, estimate the location by applying the non-linear least squares algorithm.
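A minimal sketch of these last two steps, under the stated models, is shown below: the log-normal shadowing model maps a filtered RSSI value to a distance, and non-linear least squares fits the position against the beacon geometry. The beacon coordinates, RSSI values, and model parameters (path-loss exponent, reference RSSI at 1 m) are illustrative assumptions.

```python
# Sketch of the distance estimator and location estimator steps.
import numpy as np
from scipy.optimize import least_squares

N_PATH_LOSS = 2.0    # path-loss exponent (environment-dependent)
RSSI_D0 = -45.0      # assumed RSSI in dBm at reference distance d0 = 1 m

def rssi_to_distance(rssi):
    # Log-normal shadowing: RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0)
    return 10 ** ((RSSI_D0 - rssi) / (10 * N_PATH_LOSS))

def locate(beacons, rssi_values):
    dists = np.array([rssi_to_distance(r) for r in rssi_values])

    def residuals(p):
        # Difference between geometric and RSSI-estimated distances.
        return np.linalg.norm(beacons - p, axis=1) - dists

    return least_squares(residuals, x0=beacons.mean(axis=0)).x

beacons = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
print(locate(beacons, [-57.0, -62.0, -60.0, -66.0]))  # estimated (x, y)
```

Using the beacon centroid as the initial guess keeps the non-linear solver inside the deployment area, which helps it converge even when individual distance estimates are noisy.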

4.1.6. Results

The evaluation of our proposed localization approach was carried out in an area of 16 m², where we placed six fixed-position devices with a maximum distance of 4 m between them. The tracking device, where the localization process is deployed, runs on a Raspberry Pi 3. The experimental results are shown in Table 1 and Table 2 below:
The tables above show that the estimation error mainly fluctuates between 10 and 20 cm. Whether this error is acceptable depends on the object we want to localize. For objects that cover a relatively large area, this estimation error is acceptable. For smaller objects, it may be considered unacceptable, but this is also relative to the application(s) we are planning to adapt the localization method to. One use case where this error can be characterized as acceptable is relative localization, where the target is to identify whether the location of the target object is close to other related objects.
As future work, following related works that accomplish more accurate localization, machine learning (ML) techniques will be adapted to our approach to increase localization accuracy.

4.2. Assistive Robots in a Hospital Environment

4.2.1. Pandemic and Robotics

The COVID-19 pandemic and its associated fear of the unknown have massively impacted the behavior of people worldwide [52]. The World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC) in North America have used various means to raise awareness of the hygiene measures necessary to reduce the spread of viruses. In the heart of Europe, organizations such as the German Center for Neurodegenerative Diseases (DZNE) and the European Centre for Disease Prevention and Control (ECDC) have also coordinated efforts to emphasize the need for intense personal hygiene and social distancing. From the wearing of masks and gloves in care homes, shopping malls, entertainment centers, and on public transport to home-office work practices, every effort is directly aimed at reducing the spread of the coronavirus.
In this new form of work environment, workers are being encouraged to work from home, and medical doctors can reach their patients via telepresence [53]. The absence of caregivers in the assisted living environment is also a serious issue that has been known for some years and has been intensified by the global pandemic, making the use of robotics and machine learning relevant. In view of this, the Brandenburg University of Technology has started two projects, CleanMeAI and InjectMeAI, which target improving the functionality of existing robots to support relevant staff in hospitals and assisted-living spaces.
In many domains, robots are taking over human tasks in order to reduce contact and the spread of the virus. In the entertainment industry, for example, autonomous robots with tablets on their chests are now responsible for scanning codes on tickets at the entrances to cinemas and theaters [54,55]. Many institutions also enforce hygiene practices by providing hand sanitizers at entrances for public use [56,57]; however, humans under time constraints hardly give these isolated gadgets the attention needed and ignore them. In the CleanMeAI project, a humanoid robot [58] will be trained using machine learning to welcome visitors, provide them with doses of hand sanitizer, and detect (see Figure 4 and Figure 5) and disinfect door handles, a common source of virus contagion [59]. The methodology consists of attaching a sanitizer system to the autonomous humanoid robot, which can walk independently to interact with humans, request to sanitize their hands, and continue to sanitize door handles at regular intervals.
A sanitizer system in the form of a backpack comprising a spray-can controlled by a servo motor through an Arduino board will hang on the back of the Pepper robot. This battery-supported system attached to the robot will receive commands from the robot through an IP address. The robot will observe and classify images of doors and humans through a model based on that of [60], implemented in Android on its tablet, and send alerts to the Arduino-based hand sanitizer system. Preliminary tests reveal that this detection can be done in under 1.18 s.
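As a sketch of the robot-to-backpack link described above, the following sends a trigger to the Arduino-controlled spray system over its network address; the port number and the one-line command protocol are illustrative assumptions.

```python
# Sketch of the robot-side trigger for the Arduino-driven spray backpack,
# which listens at a fixed IP address as described in the text.
import socket

BACKPACK_ADDR = ("192.168.1.50", 5005)  # assumed address of the backpack

def spray(duration_ms=300):
    # Send a single command; the Arduino pulses the servo for duration_ms.
    with socket.create_connection(BACKPACK_ADDR, timeout=2.0) as s:
        s.sendall(f"SPRAY {duration_ms}\n".encode())

spray()  # dispense one dose of sanitizer
```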

4.2.2. Implementation

We implemented our approach for the CleanMeAI project using the Pepper robot [58], which has been widely used in other works dealing with human interactions [18,61,62]. Unlike humans, autonomous robots like Pepper (shown previously in Figure 5) do not tire of repetitive tasks. Pepper has two RGB cameras and depth sensors in its head that detect a human presence to support its emotion engine. The cameras are used to scan its surroundings for humans and door handles. When a human is detected, Pepper will request the person’s hands, spray a dose of sanitizer, and proceed to disinfect door handles. Since the quantity of sanitizer in the backpack is limited, the robot has to keep track of the door handles that have been recently cleaned in order to prioritize the disinfection of others. This is accomplished by assigning time stamps to the detected door handles. The difference between the current time and the last detection time then determines whether a door handle must be disinfected again. For our project, a duration of one hour was set, allowing the robot to disinfect each door handle again after one hour of use.
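A minimal sketch of this time-stamping logic follows; keying handles by an identifier (here a coarse location label) is an assumption made for illustration.

```python
# Sketch: re-sanitize a detected handle only after the one-hour interval
# set in the project has elapsed since its last disinfection.
import time

RESANITIZE_AFTER_S = 3600  # one hour, as set in the project
last_cleaned = {}          # handle ID -> timestamp of last disinfection

def should_sanitize(handle_id, now=None):
    now = time.time() if now is None else now
    if now - last_cleaned.get(handle_id, 0.0) >= RESANITIZE_AFTER_S:
        last_cleaned[handle_id] = now
        return True
    return False

print(should_sanitize("corridor-door-3"))  # True: never cleaned before
print(should_sanitize("corridor-door-3"))  # False: cleaned just now
```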
The project, shown in Figure 6, consists of software (and hardware) components developed using QiSDK, an Android software development kit (SDK) [63] for the Pepper robot. Figure 7 depicts the overall flow of the algorithm involved in detecting, sanitizing, and re-sanitizing door handles and the hands of detected individuals. To create applications and deploy them on the Pepper robot, this SDK provides an API for the robot’s controls. The application uses the PyTorch library to detect door handles and people, exploiting YOLOv5, which we custom-trained on our dataset. The dataset used for training and testing consisted of manually annotated images of door knobs and handles within our institute premises for initial experiments. For data augmentation, we captured images from different angles and perspectives and annotated them accordingly.
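The detection step can be sketched as follows with PyTorch Hub’s YOLOv5 loader; the weights file name and the class labels are assumptions standing in for the custom-trained model and dataset described above.

```python
# Sketch of running a custom-trained YOLOv5 detector via PyTorch Hub.
import torch

# Load custom weights trained on the annotated door-handle/person dataset
# (file name is an assumption).
model = torch.hub.load("ultralytics/yolov5", "custom", path="handles_people.pt")
model.conf = 0.5  # confidence threshold

results = model("corridor.jpg")         # run inference on one frame
detections = results.pandas().xyxy[0]   # boxes with class names and scores

for _, det in detections.iterrows():
    if det["name"] == "door_handle":
        print("handle at", det["xmin"], det["ymin"], det["xmax"], det["ymax"])
    elif det["name"] == "person":
        print("person detected; offer hand sanitizer")
```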
The InjectMeAI project, on the other hand, seeks to reduce the spread of bodily fluids among caregivers, especially at vaccination centers, where contact with bare skin is essential. Medical personnel have the best training regarding the treatment of patients suffering from the pandemic; however, their close contact with humans puts them at a higher risk of infection. Infection rates among nurses and doctors are increasing substantially [64]. It is therefore necessary to find a means to further reduce contact with patients, especially during vaccinations. This is where assisting humanoid robots come in handy.
Administering injections requires person-to-person contact. In this circumstance, the spread of bodily fluids, and consequently the coronavirus, becomes imminent. The objective is to attach an injection system to an autonomous humanoid robot that can independently interact with patients in a specified position to deliver injections at the shoulder through a needle attached to one finger.
The appropriate dose will be delivered when the robot detects, through pose estimation methods, that a patient is in a sitting (see Figure 8) or lying position. Bare shoulder classification and injection point identification are crucial and must be precisely estimated, as these vary from one person to another. Existing computer vision algorithms and object detection methods make such computations realistic. Other issues being addressed include determining the trajectory of motion to the patient and raising the robot’s hand to the appropriate height after finding the injection position. The TensorFlow Lite and Keras libraries will be used for the Android implementation of pose estimation, based on [65].
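A sketch of the sitting-position check based on TensorFlow Lite pose estimation is given below, assuming a MoveNet-style single-pose model; the model file, keypoint ordering, and the hip/knee height heuristic are illustrative assumptions rather than the project’s actual implementation.

```python
# Sketch: run a TFLite pose model on a camera frame and apply a simple
# keypoint heuristic to decide whether the patient is seated.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="pose_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def keypoints(frame):
    # frame: uint8 RGB image already resized to the model's input shape.
    interpreter.set_tensor(inp["index"], frame[np.newaxis, ...])
    interpreter.invoke()
    # MoveNet-style output: [1, 1, 17, 3] = (y, x, score) per keypoint.
    return interpreter.get_tensor(out["index"])[0, 0]

HIP, KNEE = 11, 13  # left hip / left knee indices in the assumed ordering

def is_sitting(kps):
    # Heuristic: when seated, hip and knee sit at roughly the same height.
    return abs(kps[HIP][0] - kps[KNEE][0]) < 0.08

frame = np.zeros((192, 192, 3), dtype=np.uint8)  # placeholder camera frame
print(is_sitting(keypoints(frame)))
```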
The algorithm that continuously processes video streams received through the front-facing cameras and directs the humanoid robot to deliver an injection to COVID-19 patients is described in Figure 9.
In Table 3, we summarize some of the key differences between several similar approaches and our propositions pertaining to CleanMeAI and InjectMeAI projects as an all-in-one approach.

4.2.3. Equipping Robots with Cognitive Skills

Earlier versions of robots were equipped with the much simpler Choregraphe programming interface in a NAO environment. The transition from the Choregraphe platform to Android for imbuing cognitive skills in the Pepper autonomous robot permits the use of current object detection and pose estimation algorithms that elevate its intelligence. Roboticists, as well as robot engineers, have had their fair share of challenges arising from pandemic restrictions. The Pepper Android emulator does not support the use of video and audio inputs, so programs incorporating such data must be implemented and tested directly on a real robot. Machine learning models requiring a continuous image feed become difficult to test outside the physical presence of robots quarantined at research laboratories. The robot’s autonomous engine further constrains the use of the robot while it is connected to the charging station. Thus, keeping the robot active and accessing it remotely for test purposes is also ill-advised, compounding the challenges of transferring cognitive abilities to the technological system.

5. Conclusions

A particular application scenario pertaining to robots in assisted-living environments requires an amalgamation of more than just a single research area. It is discernible from the past, present, and future perspectives within this area that it is a multi-disciplinary field when a particular use-case is considered [1,2,3,4,19,20,21,25,26,29,31,34,39,40,41,61,62,64,66]. More specifically, complex robotic systems or intelligent robots of the future, which can operate independently in an assisted living environment, could combine knowledge from AI, machine learning, cognitive machine intelligence, sophisticated robotics, embedded systems, IoT, and healthcare engineering. From our discussion of the projects, applications, ideas, and research work, it can be inferred that the prospects of robotics in assisted living are expanding and will be broadly diversified.

Author Contributions

Conceptualization, S.M., K.O.A., K.A., C.P. and C.A.; Funding acquisition, M.H., D.G. and N.V.; Investigation, S.M., K.O.A., K.A., C.P., S.A.P.M. and A.P.; Methodology, K.O.A., K.A. and C.P.; Software, K.A. and C.P.; Supervision, C.A., M.H., D.G. and N.V.; Visualization, S.M., K.O.A., K.A., C.P. and C.A.; Writing–original draft, S.M., K.O.A., K.A., C.P., S.A.P.M. and A.P.; Writing–review & editing, C.A., G.K., M.H., D.G. and N.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dimitrievski, A.; Zdravevski, E.; Lameski, P.; Trajkovik, V. A survey of Ambient Assisted Living systems: Challenges and opportunities. In Proceedings of the 2016 IEEE 12th International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, Romania, 8–10 September 2016. [Google Scholar]
  2. Pineau, J.; Montemerlo, M.; Pollack, M.; Roy, N. Towards Robotic Assistants in Nursing Homes: Challenges and Results. Rob. Auton. Syst. 2002, 42, 271–281. [Google Scholar] [CrossRef]
  3. Tavakoli, M.; Carriere, J.; Torabi, A. Robotics For COVID-19: How Can Robots Help Health Care in the Fight against Coronavirus. 2020. Available online: https://archive.ph/N4Hlf (accessed on 26 August 2021).
  4. Khan, Z.H.; Siddique, A.; Lee, C.W. Robotics Utilization for Healthcare Digitization in Global COVID-19 Management. Int. J. Environ. Res. Public Health 2020, 17, 3819. [Google Scholar] [CrossRef]
  5. Nagano, A.; Wakabayashi, H.; Maeda, K.; Kokura, Y.; Miyazaki, S.; Mori, T.; Fujiwara, D. Respiratory Sarcopenia and Sarcopenic Respiratory Disability: Concepts, Diagnosis, and Treatment. J. Nutr. Health Aging 2021, 25, 1–9. [Google Scholar] [CrossRef]
  6. Larsson, L.; Degens, H.; Li, M.; Salviati, L.; Lee, Y.I.; Thompson, W.; Kirkland, J.L.; Sandri, M. Sarcopenia: Aging-Related Loss of Muscle Mass and Function. Physiol. Rev. 2019, 99, 427–511. [Google Scholar] [CrossRef]
  7. NEW Balance System™ SD—Balance—Physical Medicine | Biodex 2021. Available online: https://archive.ph/nMJoQ (accessed on 24 August 2021).
  8. Kanasi, E.; Ayilavarapu, S.; Jones, J. The aging population: Demographics and the biology of aging. Periodontology 2000 2016, 72, 13–18. [Google Scholar] [CrossRef]
  9. Rashidi, P.; Mihailidis, A. A Survey on Ambient-Assisted Living Tools for Older Adults. IEEE J. Biomed. Health Inform. 2013, 17, 579–590. [Google Scholar] [CrossRef]
  10. Chiarini, G.; Ray, P.; Akter, S.; Masella, C.; Ganz, A. mHealth Technologies for Chronic Diseases and Elders: A Systematic Review. IEEE J. Sel. Areas Commun. 2013, 31, 6–18. [Google Scholar] [CrossRef] [Green Version]
  11. Bloom, D.E.; Cadarette, D. Infectious Disease Threats in the Twenty-First Century: Strengthening the Global Response. Front. Immunol. 2019, 10, 549. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Nande, A.; Adlam, B.; Sheen, J.; Levy, M.Z.; Hill, A.L. Dynamics of COVID-19 under social distancing measures are driven by transmission network structure. PLOS Comput. Biol. 2021, 17, e1008684. [Google Scholar] [CrossRef] [PubMed]
  13. Horizon 2020 Sections. Horiz. 2020 Eur. Comm. 2021. Available online: https://ec.europa.eu/programmes/horizon2020/en/home (accessed on 24 August 2021).
  14. RADIO | Unobtrusive, Efficient, Reliable and Modular Solutions for Independent Ageing; Springer: Berlin/Heidelberg, Germany, 2021.
  15. Antonopoulos, C.; Keramidas, G.; Voros, N.; Huebner, M.; Schwiegelshohn, F.; Goehringer, D.; Dagioglou, M.; Stavrinos, G.; Konstantopoulos, S.; Karkaletsis, V. Robots in Assisted Living Environments as an Unobtrusive, Efficient, Reliable and Modular Solution for Independent Ageing: The RADIO Experience. In Proceedings of the International Symposium on Applied Reconfigurable Computing, Darmstadt, Germany, 9–11 April 2018; ISBN 978-3-319-78889-0. [Google Scholar]
  16. Schwiegelshohn, F.; Hubner, M.; Wehner, P.; Gohringer, D. Tackling the New Health-Care Paradigm Through Service Robotics: Unobtrusive, efficient, reliable, and modular solutions for assisted-living environments. IEEE Consum. Electron. Mag. 2017, 6, 34–41. [Google Scholar] [CrossRef]
  17. TurtleBot2 2021. Available online: https://www.turtlebot.com/turtlebot2/ (accessed on 24 August 2021).
  18. RobotLAB Inc. Business Robots—Pepper Humanoid Robot 2021. Available online: https://business.robotlab.com/pepper-use-cases/ (accessed on 24 August 2021).
  19. Iglesias, A.; José, R.V.; Perez-Lorenzo, M.; Ting, K.L.H.; Tudela, A.; Marfil, R.; Dueñas, Á.; Bandera, J.P. Towards long term acceptance of Socially Assistive Robots in retirement houses: Use case definition. In Proceedings of the 2020 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Ponta Delgada, Portugal, 15–17 April 2020; pp. 134–139. [Google Scholar]
  20. Bui, H.; Chong, N.Y. An Integrated Approach to Human-Robot-Smart Environment Interaction Interface for Ambient Assisted Living. In Proceedings of the 2018 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), Genova, Italy, 27–29 September 2018; pp. 32–37. [Google Scholar]
  21. Elmer, A.; Matusiewicz, D.; Sulzberger, C.; Göttelmann, P.V.; Codourey, M. Die Digitale Transformation der Pflege | Wandel. Innovation. In Smart Services; MeadWestvaco: Richmond, VA, USA, 2019; pp. 280–281. ISBN 978-3-95466-404-7. [Google Scholar]
  22. Achirei, S.D.; Zvoristeanu, O.; Alexandrescu, A.; Botezatu, N.A.; Stan, A.; Rotariu, C.; Lupu, R.G.; Caraiman, S. SMARTCARE: On the Design of an IoT Based Solution for Assisted Living. In Proceedings of the 2020 International Conference on e-Health and Bioengineering (EHB), Iasi, Romania, 29–30 October 2020; pp. 1–4. [Google Scholar]
  23. Call for Proposals for Personalising Health and Care. Available online: https://euroalert.net/call/2868/call-for-proposals-for-personalising-health-and-care (accessed on 24 August 2021).
  24. RAMCIP. RAMCIP 2015. Available online: https://ramcip-project.eu/system/files/ramcip_1st_newsletter_v1.0.pdf (accessed on 24 August 2021).
  25. GrowMeUp Project: An Innovative Service Robot for Ambient Assisted Living Environments; European Commission: Brussels, Belgium, 2015.
  26. EnrichMe—PAL Robotics: Leading Service Robotics. PAL Robot: Barcelona, Spain. Available online: https://pal-robotics.com/collaborative-projects/enrichme/ (accessed on 24 August 2021).
  27. De Gauquier, L.; Brengman, M.; Willems, K. The Rise of Service Robots in Retailing: Literature Review on Success Factors and Pitfalls. In Retail Futures; Emerald Insight: Bingley, UK, 2020; ISBN 978-1-83867-664-3. [Google Scholar]
  28. Delfanti, A.; Frey, B. Humanly Extended Automation or the Future of Work Seen through Amazon Patents. Sci. Technol. Hum. Values 2020, 46, 655–682. [Google Scholar] [CrossRef]
  29. Pires, G.; Nunes, U. A Wheelchair Steered through Voice Commands and Assisted by a Reactive Fuzzy-Logic Controller. J. Intell. Robot. Syst. 2002, 34, 301–314. [Google Scholar] [CrossRef]
  30. Chen, Y.; Song, K. Voice control design of a mobile robot using shared-control approach. In Proceedings of the 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB, Canada, 5–8 October 2017; pp. 105–110. [Google Scholar]
  31. Sinyukov, D.A.; Li, R.; Otero, N.W.; Gao, R.; Padir, T. Augmenting a voice and facial expression control of a robotic wheelchair with assistive navigation. In Proceedings of the 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), San Diego, CA, USA, 5–8 October 2014; pp. 1088–1094. [Google Scholar]
  32. Silva, J.R.; Simão, M.; Mendes, N.; Neto, P. Navigation and obstacle avoidance: A case study using Pepper robot. In Proceedings of the IECON 2019—45th Annual Conference of the IEEE Industrial Electronics Society, Lisbon, Portugal, 14–17 October 2019; Volume 1, pp. 5263–5268. [Google Scholar]
  33. Goto, K.; Nishino, H.; Yatsuda, A.; Tsutsumi, H.; Haramaki, T. A method for driving humanoid robot based on human gesture. Int. J. Mech. Eng. Robot. Res. 2020, 9, 447–452. [Google Scholar] [CrossRef]
  34. Haseeb, M.A.; Kyrarini, M.; Jiang, S.; Ristic-Durrant, D.; Gräser, A. Head gesture-based control for assistive robots. In Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, Corfu, Greece, 26–29 June 2018; pp. 379–383. [Google Scholar]
  35. Zhang, L.; Jiang, M.; Farid, D.; Hossain, M.A. Intelligent facial emotion recognition and semantic-based topic detection for a humanoid robot. Expert Syst. Appl. 2013, 40, 5160–5168. [Google Scholar] [CrossRef]
  36. Kuon, I.; Rose, J. Measuring the Gap Between FPGAs and ASICs. IEEE Trans. Comput. Des. Integr. Circuits Syst. 2007, 26, 203–215. [Google Scholar] [CrossRef] [Green Version]
  37. Kamaleldin, A.; Hosny, S.; Mohamed, K.; Gamal, M.; Hussien, A.; Elnader, E.; Shalash, A.; Obeid, A.M.; Ismail, Y.; Mostafa, H. A reconfigurable hardware platform implementation for software defined radio using dynamic partial reconfiguration on Xilinx Zynq FPGA. In Proceedings of the 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), Boston, MA, USA, 6–9 August 2017; pp. 1540–1543. [Google Scholar]
  38. Pepper the humanoid and programmable robot | SoftBank Robotics 2021. Available online: https://www.softbankrobotics.com/emea/en/pepper (accessed on 14 October 2019).
  39. Podlubne, A.; Göhringer, D. FPGA-ROS: Methodology to Augment the Robot Operating System with FPGA Designs. In Proceedings of the 2019 International Conference on ReConFigurable Computing and FPGAs (ReConFig), Cancun, Mexico, 9–11 December 2019; pp. 1–5. [Google Scholar]
  40. Cresswell, K.; Sheikh, A. Can Disinfection Robots Reduce the Risk of Transmission of SARS-CoV-2 in Health Care and Educational Settings? J. Med. Internet Res. 2020, 22, e20896. [Google Scholar] [CrossRef]
  41. Shi, Z. Cognitive Machine Learning. Int. J. Intell. Sci. 2019, 9, 111–121. [Google Scholar] [CrossRef] [Green Version]
  42. Guo, K.; Zeng, S.; Yu, J.; Wang, Y.; Yang, H. A Survey of FPGA-Based Neural Network Accelerator. arXiv 2018, arXiv:1712.08934 [cs]. [Google Scholar]
  43. World Health Organization Spinal Cord Injury. Available online: https://www.who.int/news-room/fact-sheets/detail/spinal-cord-injury (accessed on 24 August 2021).
  44. Korebalance19 2021. Available online: https://archive.ph/CxThq (accessed on 26 August 2021).
  45. Rodríguez-Martín, D.; Samà Monsonís, A.; Pérez, C.; Català, A. Posture Transitions Identification Based on a Triaxial Accelerometer and a Barometer Sensor. In International Work-Conference on Artificial Neural Networks; Springer: Cham, Switzerland, 2017; ISBN 978-3-319-59147-6. [Google Scholar]
  46. Pitt, W.; Chou, L.-S. Reliability and practical clinical application of an accelerometer-based dual-task gait balance control assessment. Gait Amp Posture 2019, 71, 279–283. [Google Scholar] [CrossRef]
  47. Hou, Y.-R.; Chiu, Y.-L.; Chiang, S.-L.; Chen, H.-Y.; Sung, W.-H. Development of a Smartphone-Based Balance Assessment System for Subjects with Stroke. Sensors 2019, 20, 88. [Google Scholar] [CrossRef] [Green Version]
  48. Hou, Y.-R.; Chiu, Y.-L.; Chiang, S.-L.; Chen, H.-Y.; Sung, W.-H. Feasibility of a Smartphone-Based Balance Assessment System for Subjects with Chronic Stroke. Comput. Methods Programs Biomed. 2018, 161. [Google Scholar] [CrossRef]
  49. Antonopoulos, C.P.; Antonopoulos, K.; Panagiotou, C.; Voros, N.S. Tackling Critical Challenges towards Efficient CyberPhysical Components & Services Interconnection: The ATLAS CPS Platform Approach. J. Signal Process. Syst. 2019, 91, 1273–1281. [Google Scholar] [CrossRef]
  50. CC2650 Data Sheet, Product Information and Support | TI.com 2016. Available online: https://www.ti.com/product/CC2650 (accessed on 24 August 2021).
  51. Singh, S.P.; Sharma, S.C. Range Free Localization Techniques in Wireless Sensor Networks: A Review. Procedia Comput. Sci. 2015, 57, 7–16. [Google Scholar] [CrossRef] [Green Version]
  52. Saadat, S.; Rawtani, D.; Hussain, C.M. Environmental perspective of COVID-19. Sci. Total Environ. 2020, 728, 138870. [Google Scholar] [CrossRef] [PubMed]
  53. Facilitate a Smooth Connection between People with Pepper’s Telepresence Capabilities! | SoftBank Robotics 2021. Available online: https://www.softbankrobotics.com (accessed on 24 August 2021).
  54. Pepper Telepresence Toolkit. Available online: https://github.com/softbankrobotics-labs/pepper-telepresence-toolkit (accessed on 24 August 2021).
  55. Provide New Services | SoftBank Robotics EMEA 2021. Available online: https://archive.ph/pgfJF (accessed on 24 August 2021).
  56. Cure, L.; Van Enk, R. Effect of hand sanitizer location on hand hygiene compliance. Am. J. Infect. Control 2015, 43, 917–921. [Google Scholar] [CrossRef]
  57. Pradhan, D.; Biswasroy, P.; Kumar Naik, P.; Ghosh, G.; Rath, G. A Review of Current Interventions for COVID-19 Prevention. Arch. Med. Res. 2020, 51, 363–374. [Google Scholar] [CrossRef]
  58. Pandey, A.K.; Gelin, R. A Mass-Produced Sociable Humanoid Robot: Pepper: The First Machine of Its Kind. IEEE Robot. Autom. Mag. 2018, 25, 40–48. [Google Scholar] [CrossRef]
  59. Ding, Z.; Qian, H.; Xu, B.; Huang, Y.; Miao, T.; Yen, H.-L.; Xiao, S.; Cui, L.; Wu, X.; Shao, W.; et al. Toilets dominate environmental detection of severe acute respiratory syndrome coronavirus 2 in a hospital. Sci. Total Environ. 2021, 753, 141710. [Google Scholar] [CrossRef]
  60. ultralytics/yolov5: v5.0—YOLOv5-P6 1280 Models, AWS, Supervise.ly and YouTube Integrations 2021. Available online: https://archive.ph/HJqRN (accessed on 24 August 2021).
  61. Sato, M.; Yasuhara, Y.; Osaka, K.; Ito, H.; Dino, M.J.S.; Ong, I.L.; Zhao, Y.; Tanioka, T. Rehabilitation care with Pepper humanoid robot: A qualitative case study of older patients with schizophrenia and/or dementia in Japan. Enfermería Clínica 2020, 30, 32–36. [Google Scholar] [CrossRef]
  62. Beyer-Wunsch, P.; Reichstein, C. Effects of a Humanoid Robot on the Well-being for Hospitalized Children in the Pediatric Clinic—An Experimental Study. Procedia Comput. Sci. 2020, 176, 2077–2087. [Google Scholar] [CrossRef]
  63. Pepper SDK for Android—QiSDK 2021. Available online: https://qisdk.softbankrobotics.com/sdk/doc/pepper-sdk/index.html (accessed on 24 August 2021).
  64. Kursumovic, E.; Lennane, S.; Cook, T.M. Deaths in healthcare workers due to COVID-19: The need for robust data and analysis. Anaesthesia 2020, 75, 989–992. [Google Scholar] [CrossRef] [PubMed]
  65. Cao, Z.; Simon, T.; Wei, S.-E.; Sheikh, Y. Realtime Multi-person 2D Pose Estimation Using Part Affinity Fields. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: New York, NY, USA, 2017; pp. 1302–1310. [Google Scholar]
  66. Ramalingam, B.; Yin, J.; Rajesh Elara, M.; Tamilselvam, Y.K.; Mohan Rayguru, M.; Muthugala, M.A.V.J.; Félix Gómez, B. A Human Support Robot for the Cleaning and Maintenance of Door Handles Using a Deep-Learning Framework. Sensors 2020, 20, 3543. [Google Scholar] [CrossRef] [PubMed]
  67. Yin, J.; Koppaka, G.S.A.; Tamilselvam, Y.; Mohan, R.E.; Ramalingam, B.; Anh Vu, L. Table Cleaning Task by Human Support Robot Using Deep Learning Technique. Sensors 2020, 20, 1698. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  68. Bačík, J.; Tkáč, P.; Hric, L.; Alexovič, S.; Kyslan, K.; Olexa, R.; Perduková, D. Phollower—The Universal Autonomous Mobile Robot for Industry and Civil Environments with COVID-19 Germicide Addon Meeting Safety Requirements. Appl. Sci. 2020, 10, 7682. [Google Scholar] [CrossRef]
  69. Sundar raju, G.; Sivakumar, K.; Ramakrishnan, A.; Selvamuthukumaran, D.; Sakthivel Murugan, E. Design and fabrication of sanitizer sprinkler robot for COVID-19 hospitals. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2021; Volume 1059, p. 12070. [Google Scholar] [CrossRef]
Figure 1. A-Balance in action.
Figure 3. Range-based localization technique.
Figure 4. Detection of door knobs and handles by our custom-trained machine learning model based on YOLOv5 [60]. The door knob in the background, a relatively small object, is detected with a probability of 0.75, compared with 0.89 for the hand in the foreground.
Figure 5. Detection of people inside a room. The left panel shows our Pepper robot equipped with an on-robot tablet running Android OS. Our application, which runs YOLOv5 for door handle and person detection, executes on the tablet and can communicate with one of the on-robot cameras. The right panel shows multiple-object detection with YOLOv5 in action. The bounding boxes appear to overlap because the objects are almost in the same position, i.e., the door handles and the hand of the person holding the knob, detected with confidences of 0.66, 0.51, and 0.69, respectively. Confidence tends to decrease for small objects, which are usually difficult for such networks; as the robot approaches an object, the object in the camera's perspective is detected with greater confidence, as also shown in Figure 4.
Figure 6. Autonomous hand sanitizer setup.
Figure 7. Algorithm for the autonomous hand sanitizer.
Figure 8. Predicting the patient's pose for injection.
Figure 9. Algorithm for autonomous vaccination.
Table 1. Localization: actual and estimated distance (m).

| Actual Distance (m) | Estimated Distance (m) |
|---------------------|------------------------|
| 0.5                 | 0.35                   |
| 1                   | 1.20                   |
| 1.5                 | 1.35                   |
| 2                   | 2.15                   |
| 2.5                 | 2.65                   |
| 3                   | 3.10                   |
| 3.5                 | 3.30                   |
| 4                   | 4.10                   |
Table 2. Localization: x,y actual and estimated positions (m).

| Actual X (m) | Actual Y (m) | Estimated X (m) | Estimated Y (m) |
|--------------|--------------|-----------------|-----------------|
| 1.5          | 1            | 1.7             | 1.2             |
| 2            | 0.5          | 2.2             | 0.6             |
| 1            | 1            | 1               | 0.7             |
| 1.8          | 1            | 1.95            | 0.9             |
| 2.5          | 1            | 2.5             | 0.5             |
Table 3. Comparison of projects CleanMeAI and InjectMeAI with similar techniques.

| Research Work | Cleans Human Hands | Cleans Door Handles | Scalable Cleaning Agent | Overcomes Vaccination Challenges | Human Emotion Support | Adaptable to COVID-19 Operations |
|---------------|--------------------|---------------------|-------------------------|----------------------------------|-----------------------|----------------------------------|
| [a] [66]      |                    |                     |                         |                                  |                       |                                  |
| [b] [67]      |                    |                     |                         |                                  |                       |                                  |
| [c] [68]      |                    |                     |                         |                                  |                       |                                  |
| [d] [69]      |                    |                     |                         |                                  |                       |                                  |
| CleanMeAI     |                    |                     |                         |                                  |                       |                                  |
| InjectMeAI    |                    |                     |                         |                                  |                       |                                  |

[a] Ramalingam et al., "A human support robot for the cleaning and maintenance of door handles using a deep-learning framework." [b] Yin et al., "Table cleaning task by human support robot using deep learning technique." [c] Bačík et al., "Phollower—The Universal Autonomous Mobile Robot for Industry and Civil Environments with COVID-19 Germicide Addon Meeting Safety Requirements." [d] Sivakumar et al., "Design and fabrication of sanitizer sprinkler robot for COVID-19 hospitals." Legend: → Adaptable.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
