Prospects of Robots in Assisted Living Environment

Abstract: From caretaking activities for elderly people to assistance in healthcare settings, mobile and non-mobile robots have the potential to be highly applicable and serviceable. The ongoing pandemic has shown that human-to-human contact in healthcare institutions and senior homes must be limited. In this scenario, elderly and immunocompromised individuals must be especially protected. Robots are a promising way to overcome this problem in assisted living environments. In addition, the advent of AI and machine learning will pave the way for intelligent robots with cognitive abilities, while enabling them to be more aware of their surroundings. In this paper, we discuss the general perspectives, potential research opportunities, and challenges arising in the area of robots in assisted living environments and present our research work pertaining to certain application scenarios, i.e., robots in rehabilitation and robots in hospital environments and pandemics, which, in turn, exhibits the growing prospects and interdisciplinary nature of the field of robots in assisted living environments.


Introduction
The field of robotics, in general, is exceedingly broad and, in turn, participates in a wide range of application scenarios. In recent years, one of the most important concerns of robotics has been its impactful applicability in the healthcare sector, more precisely, in assisted living environments [1]. This includes assistance with, but not limited to, physical and mental rehabilitation, nursing for the elderly [2], help with daily chores, home automation environments for people with special needs [2], and, particularly for a matter of growing importance in recent years, pandemics and epidemics [3,4].
"Movement is life" is a saying used among medical doctors, highlighting the importance of mobility for every patient. Respiratory problems, weakness, muscle atrophy, cachexia, and sarcopenia [5,6] are common pathologies that are worsened by a lack of movement. This is exacerbated when further restrictions are applied to patient mobility during pandemics, such as with COVID. Providing close, detailed, disciplined, and full-time mobility surveillance of patients who return to, or are treated in, their homes using low-cost, user-friendly, and non-intrusive systems is still an open issue. The use of rehabilitation assessment systems is the dominant practice that can provide experts with fast and accurate screening and conditioning methods, mainly targeted at patients and older adults. However, such systems are hard to acquire and use on a daily basis, since they have high costs (e.g., $3000 for the Biodex Balance System SD) [7] and require a high level of expertise.
The increasing average age of the population [8] has led to new requirements in the healthcare domain, more precisely in cases such as home assistance, rehabilitation, and the early detection of diseases. Assistance systems that can improve the quality of life for older people are becoming more and more necessary to help them live an active and productive life [9]. Technology can be integrated into the daily activities of the elderly, providing safety, a high quality of life, happiness, and a longer period of independent living.
The ambient assisted living (AAL) concept defines services that are able to build intelligent environments for the assistance of elderly people, relying on accurate localization systems to provide time-critical and reliable services. Knowing the position and actions of the elderly is vital for medical observation, timely accident prevention, behavioral pattern characterization, and anomaly detection [10].
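To make the localization requirement concrete, the following is a minimal sketch (not the system described later in this paper) of a common AAL approach: converting BLE beacon RSSI readings to distances with a log-distance path-loss model and trilaterating a 2D position. All parameter values (e.g., the 1 m reference power of -59 dBm) are illustrative assumptions.

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Estimate distance (m) from an RSSI reading using the log-distance
    path-loss model: RSSI = TxPower - 10 * n * log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def trilaterate(anchors, distances):
    """Least-squares 2D position from >= 3 anchors at known (x, y) and
    estimated distances, linearized against the last anchor."""
    (xn, yn), dn = anchors[-1], distances[-1]
    rows, rhs = [], []
    for (x, y), d in zip(anchors[:-1], distances[:-1]):
        # Subtracting the last anchor's circle equation removes the
        # quadratic terms, leaving a linear system in (x, y).
        rows.append((2 * (xn - x), 2 * (yn - y)))
        rhs.append(d**2 - dn**2 - x**2 + xn**2 - y**2 + yn**2)
    # Solve the 2x2 normal equations A^T A p = A^T b by hand.
    a11 = sum(r[0] * r[0] for r in rows); a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * v for r, v in zip(rows, rhs))
    b2 = sum(r[1] * v for r, v in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

In practice, RSSI is noisy indoors, so such estimates are usually filtered (e.g., with a Kalman filter) before being used for accident prevention or anomaly detection.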
In addition to this, pandemics also pose a threat to particular risk groups, including people with special needs, individuals with compromised immune systems, and the elderly, both in hospitals and in nursing homes [3,4,11]. In robot-assisted living environments, a reduction of human-to-human contact for repeated tasks is usually desired, as this would greatly reduce the probability of pathogen transmission [12]. Examples of repeated tasks could be caretakers accessing doorways frequently and coming into contact with physical objects, such as door handles, knobs, and surfaces, or not being able to observe social distancing while delivering medicines or vaccinations to individuals.
In view of the above-mentioned scope of problems, this article goes through different aspects of how robots and associated technologies in ambient assisted living (AAL) provide excellent research opportunities and challenges, along with application scenarios in these areas, exploring the prospects of robots and related interdisciplinary fields in assisted-living environments. In these application scenarios, we present our research work to address issues in the areas of rehabilitation, localization, social distancing, and sanitization. This paper is organized as follows:
• Section 2 provides a brief background on related work and a few projects that have been carried out in this field. Specifically, it lays down the contributions of a few EU Horizon 2020 projects [13] that target the area of robots in assisted-living environments.
• Section 3 includes a general perspective on the research opportunities and challenges that the field of robots in ALEs offers. It shows that, when combined with emerging modern technologies, this field enables a broad spectrum of research opportunities and challenges. This section also includes a special subsection that discusses the tremendous applicability of embedded systems pertaining to the combination of AI/ML, robots, and the need for executing computation-intensive algorithms on the robot.
• Section 4 gives detailed insight into our research contributions under two different scenarios: (1) robots in rehabilitation and (2) robots in the hospital environment and pandemics. Under these scenarios, we propose methodologies to address the use cases discussed in the corresponding sub-sections. In particular, the systems discussed under robots in rehabilitation, i.e., the A-Balance system and indoor localization, are extensions of approaches also developed under the roadmap of the RADIO project [14][15][16], which used Turtlebot [17] as a base robot platform. Under robots in hospital environments, we propose methodologies that use machine learning and computer vision algorithms, implemented on the Pepper robot [18], to address sanitization and social distancing.
• Section 5 concludes the work.

Related Work in the Field of Robots in Assisted Living Environment
Robots in assisted-living environments have been gaining importance and relevance for a long time. Different approaches for assisted living (AL) have been proposed in the past for applications ranging from caring for people in nursing homes to supporting persons with special needs in their own private spaces [19]. Human-robot interaction from a socially acceptable outlook is one of the main areas of focus when it comes to assisted living. An approach has also been demonstrated in which a smart environment interface for human-robot interactions was developed [20]. This work uses ECHONET, an ISO-certified home network standard, the universAAL (uAAL) platform for assisted living, and the Pepper humanoid robot to enable human-robot interactions by way of verbal (natural language processing) and non-verbal communication, such as a touch interface. This combination, in turn, is employed in a use case where a user can control a device on the network. The idea of assistive robots being deployed in hospitals, care homes, and personal spaces is very promising. Japan's "Machine Industry Memorial Foundation" estimates that approximately 21 billion US$ can be saved through the use of assistive robots where they are needed [21]. The recent pandemic due to COVID-19 has also demonstrated how useful such systems can be in situations where the health of a vulnerable group is at risk due to human-to-human interactions and temporary isolation for medical attention is desired [3,4]. For various ambient assisted living (AAL) solutions, some recent non-robotic approaches have also been proposed, such as SMARTCARE, which focuses on integrating AAL and home monitoring systems using IoT devices [22]; such solutions, in turn, can eventually be used with a robot integrated within the aforementioned scenario to enable healthcare.

Research Projects on Assisted Living
The EU has identified and highlighted the needs of Europe's aging population, namely issues regarding the risks of cognitive impairment, frailty, and social exclusion, which have considerable negative consequences on independence, quality of life, and the sustainability of healthcare systems. Calls for proposals, such as PHC-19 [23], focused on the development of robotic services applied in the context of AAL for aging populations. Various funded projects presented innovative solutions in the context of robotics and AAL.
The RAMCIP Project [24] presented an autonomously moving service robot with an onboard touch screen and a dexterous robotic hand and arm. Its objectives were focused on serving elderly people that suffer from mild cognitive impairments and dementia through the development of high-level cognitive functions.
The GROWMEUP Project [25] also focused on improving the well-being of elders. In this context, a robotic platform was designed and developed with enhanced learning capabilities, cloud connectivity, and environment-awareness skills. These characteristics facilitated its adaptation to user behaviors, habits, and daily routines.
Another project that was funded in the context of PHC-19 was the ENRICHme Project [26]. ENRICHme deals with robotics in AAL environments from a different perspective. ENRICHme aims to improve the quality of life of elderly people suffering from MCI by introducing three levels of intelligence: robot, ambient, and social. These three key concepts are realized in the project through a robot that encompasses safe navigation, monitoring, and interactions with a person along with home automation and ambient sensing for long-term monitoring of human motion and life activities.
Finally, the RADIO Project [15] developed an integrated smart home/assistant robot system. Its main focus was to develop a sensing/actuation system that would monitor the daily activities of elderly people, focusing mainly on user friendliness and familiarization with sensor technologies. The objective was to allow elderly people to focus on their daily chores without having their own activities disrupted.

Robots in Assisted Living Environment: General Perspective, Research Opportunities and Challenges
Robots that assist humans in any task, also known as assistive robots, have become more common in the last ten years [27]. Assistive robots can help achieve extended automation based on a labor division between humans and robots in the workplace. Workers are increasingly interchangeable with automated technology and intervene mostly to make up for robots' shortcomings, portraying a future workplace in which human operators serve machines rather than vice versa; however, this does not mean removing humans from the workplace. Humans' formidable flexibility, object recognition, physical dexterity, and fine motor coordination are still necessary for many workplace tasks, such as in warehouses and retail. That need generates interdependency between humans and robots, i.e., robot introduction makes human interactions with those systems crucial, rather than removing humans from the picture. In this sense, the value that assistive robots and automation provide to workers must remain at the center of human-robot collaboration in the workplace [28].
For robots to work side by side with humans, they need to have feedback from their surroundings. Smart environments equipped with different technologies can capture thermal image data via infrared sensors, visual data via optical sensors or cameras, and spatial data through position sensors. This information is crucial for the optimal and safe control of robots by allowing the pervasive datafication of workers, objects, and activities. Similarly, wearable devices, such as glasses, bracelets, or gloves, capable of capturing kinematic data through accelerometers, gyroscopes, or speed scans, can be used for feedback in the system and ease human-robot collaboration in the workplace.
Assistive robots are also used to aid humans in domestic or health tasks. According to their purpose, these robots can be classified into different categories: (1) service robots that aid disabled and older adults in their daily routines; (2) mobility aid robots that help transport a person from one place to another; (3) serving and feeding assistive robots that serve people's meals in hospitals and care centers; and (4) monitoring robots that are useful for people who need constant supervision [28]. Furthermore, assistive robots can also give psychological comfort and improve the emotional well-being of patients in need, engaging them in mental activities. Such assistive robots accompany children in hospitals, provide relaxation, support home nursing for the elderly, and keep dementia patients company. A robot's human-like resemblance and its ability to interact are critical for this type of robot, since they make it more user-friendly. Engineers have explored different techniques for successful and natural human-robot interactions. In this sense, defining innovative methods of human-robot interaction remains an essential research avenue.
Speech recognition is an essential form of human-robot interaction in complex and changing environments because speech is the most used human form of communication.
Communicating with a robot through speech gives the robot more reference information for autonomous task execution and, potentially, enables smarter decisions. Many speech recognition tools have been made available recently; however, integrating them with service robots is an ongoing challenge. Ongoing examples in this field include a voice-controlled wheelchair that aids a disabled person in moving around without hand commands [29][30][31] and voice-controlled humanoid robots that can navigate autonomously to do home chores [32]. In reality, simple voice commands do not give enough information to enable continuous and safe robot motion in a complex environment. It is also challenging to decide the correct time for giving voice commands during robot motion. Human voice commands may not be received promptly by the robot due to speech processing and network latency. This demands urgent attention to resolve the time-delay problem in voice-controlled systems.
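One simple safeguard against the time-delay problem described above is to timestamp each recognized command and discard commands whose end-to-end latency exceeds a safety threshold before they reach the motion controller. The following is a minimal sketch of that idea; the threshold value and function names are illustrative assumptions, not part of any specific system.

```python
STALE_AFTER_S = 1.5  # illustrative threshold: commands older than this are unsafe to act on

def filter_stale_commands(commands, now):
    """Drop voice commands whose end-to-end latency (speech processing +
    network transfer) exceeds the staleness threshold; keep the rest.

    `commands` is a list of (command_text, issued_at_seconds) tuples and
    `now` is the current monotonic time in seconds."""
    accepted, rejected = [], []
    for cmd, issued_at in commands:
        if now - issued_at <= STALE_AFTER_S:
            accepted.append(cmd)
        else:
            rejected.append(cmd)  # e.g., ask the user to repeat the command
    return accepted, rejected
```

A real deployment would also need clock synchronization between the speech front-end and the robot, which is itself non-trivial over unreliable networks.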
On the other hand, non-verbal communication modalities, such as gestures and facial expressions, can be more useful in certain situations, when it is faster or more advantageous not to make noise. Additionally, human gestures provide a more schematic way of controlling a robot, enabling their use in a wide range of service-robot applications. In this regard, human hand gestures are natural and permit a more user-friendly way to control a robot, as in [33], where a Pepper robot was controlled with gestures recorded by a camera in real time. Another approach includes the use of surface electromyography (sEMG) sensors, which are a useful non-invasive method to record the electrical activity of muscles. These signals can be applied to controlling robot movements that are similar to those of humans. However, some users, mainly disabled people, tend to avoid using their extremities to ask for robot help, and hand-arm gestures are not the only gestures humans can make. A "hands-free" way to interact with a robot can be through facial or head gestures, such as in [34].
Thus far, we have discussed human-robot collaborations; however, another opportunity to be tackled is robot-robot coordination. In multi-robot systems, the actions of each agent must be planned considering every other part in order to maximize the performance of the team and avoid collisions. Cooperative planning strategies for multi-robot systems consider localization, target tracking, object recognition, exploration, motion planning, and more. With this in mind, and similar to human-robot interactions, communication is an essential part of effective robot-robot coordination. Typical characteristics of real-world applications, such as limitations of communication resources, unreliable networks, and interference susceptibility, hinder the sustainability of many of the aforementioned planning strategies for multi-robot systems. Such scenarios motivate the search for solutions that allow reducing these issues while also maintaining a reasonable coordination performance.
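As a concrete (and deliberately simplified) example of the cooperative planning mentioned above, consider assigning one target task to each robot in a small team so that total travel distance is minimized. The exhaustive search below is a sketch that ignores collisions and communication constraints, and is only feasible for small teams; all names are illustrative.

```python
from itertools import permutations
import math

def assign_tasks(robots, tasks):
    """Exhaustively assign one task per robot, minimizing the total
    Euclidean travel distance. `robots` and `tasks` are lists of (x, y)
    positions; returns ({robot_index: task_index}, total_distance)."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(tasks)), len(robots)):
        cost = sum(math.dist(robots[i], tasks[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return {i: best[i] for i in range(len(robots))}, best_cost
```

For larger teams, polynomial-time methods such as the Hungarian algorithm, or distributed auction-based schemes that tolerate unreliable communication, would replace the brute-force search.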

Research Opportunities in Conjunction with Embedded Systems
To fulfil the requirements of assistance with humanoid robots, several fields are to be explored further. First of all, it is important to have a broad understanding of human-machine interactions. This is of particular interest as it is vital that humans, usually in delicate situations, feel comfortable in the presence of a robot, despite its humanoid appearance. This could be done by taking advantage of cameras coupled to machine learning algorithms to recognize certain gestures and to deduce human emotions [35]. Moreover, the interfaces presented for humans to interact with robots have to be simple enough, allowing straightforward access to the provided functionality, as shown in Section 4.2, where a Pepper robot has a tablet-based Android interface. Not only is this human-robot interaction vital, but how robots interact with one another is also important. This enables them to collectively solve complex tasks following the distributed computing paradigm.
It can be further deduced that robots in assisted-living environments will be endowed with multiple heterogeneous sensors, not only to be aware of their surroundings but also to perceive human emotions. Usually, this set of sensors produces large amounts of data concurrently, which need to be processed accordingly. Moreover, these sorts of systems have hard real-time constraints. This pushes embedded computers with reduced computational resources to their limit. Note that the workload is not only computationally intensive, but also I/O-intensive. FPGAs are good candidates for these systems, as they offer notable features for I/O management and can also offer data paths that are fully customized to the application's computational requirements. Moreover, the most recent embedded FPGA devices exhibit performance per watt close to that of application-specific integrated circuits (ASICs), while still offering outstanding I/O performance [36]. In addition, they can pre-process data very close to the sensors and, being programmable, provide the versatility to design hardware specifically according to needs. Lastly, they offer the possibility of modifying some hardware blocks dynamically at run-time via dynamic partial reconfiguration (DPR) techniques. This characteristic could be of particular interest for switching algorithms on the fly depending on current environmental conditions. The benefit would be to avoid having an FPGA programmed for all potential circumstances and instead switch these blocks according to current needs, leading to reduced power consumption [37], since FPGAs of a moderate size can be used.
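The on-the-fly block swapping that DPR enables can be sketched, at a very high level, as a scheduler that reconfigures a single region only when a different accelerator is requested. The pure-Python model below is an illustrative abstraction, not an actual FPGA API; `load_bitstream` stands in for whatever vendor-specific partial-reconfiguration call a real system (e.g., a PYNQ/Vivado flow) would use.

```python
class DPRManager:
    """Sketch of a dynamic-partial-reconfiguration scheduler: one
    reconfigurable region holds a single accelerator at a time, and
    partial bitstreams are swapped on demand instead of keeping every
    accelerator resident (which would require a much larger FPGA)."""

    def __init__(self, load_bitstream):
        self._load = load_bitstream  # callback standing in for the vendor DPR call
        self.active = None
        self.swaps = 0

    def ensure(self, accel_name):
        """Reconfigure the region only when a different accelerator is needed."""
        if accel_name != self.active:
            self._load(accel_name)   # e.g., download the partial bitstream
            self.active = accel_name
            self.swaps += 1
        return self.active
```

The point of the sketch is the invariant it encodes: repeated requests for the currently loaded accelerator cost nothing, so the (slow, power-hungry) reconfiguration only happens when environmental conditions actually change.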
There are several commercial robots designed as social humanoids with the ability to recognize faces and basic human emotions, such as SoftBank Robotics' Pepper [38]. The problem with these is that they are closed systems, and no interface exists between their embedded computers and FPGAs. In addition, the cost of solutions like SoftBank Robotics' Pepper is prohibitively high for it to be widely used in home environments. Therefore, new techniques are needed to enhance the embedded computers of robots with FPGAs [39]. Most of these robotic platforms are compatible with the robot operating system (ROS), which has become the mainstream middleware for robotics, not only in research but also in industry. As it is a software solution, it can run on any CPU capable of running Linux. Therefore, to circumvent the lack of FPGA interfaces on the robots, ROS can be used as a bridge between these two computational platforms. This means that the most computationally intensive tasks can be offloaded to a hardware IP on the FPGA, specifically designed for a given task. Moreover, it is possible to feed the I/O data directly from sensors to the IP blocks (bypassing the embedded CPU), resulting in noticeable savings in terms of memory bandwidth and also in power consumption.
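The bridging idea can be illustrated with a small dispatcher that routes each task to a registered hardware IP when one exists and falls back to the CPU implementation otherwise. This is a hypothetical sketch, not an actual ROS or FPGA API; in a real system the handlers would be ROS service calls wrapping the FPGA IP and the native software node.

```python
class OffloadDispatcher:
    """Sketch of ROS-as-bridge offloading: tasks with a matching hardware
    IP are routed to the FPGA path, everything else falls back to the
    CPU implementation. All names are illustrative."""

    def __init__(self):
        self._fpga_ips = {}
        self._cpu_fallbacks = {}

    def register(self, task, cpu_fn, fpga_fn=None):
        """Every task needs a CPU implementation; an FPGA IP is optional."""
        self._cpu_fallbacks[task] = cpu_fn
        if fpga_fn is not None:
            self._fpga_ips[task] = fpga_fn

    def run(self, task, payload):
        """Return (execution_target, result) for the given task."""
        handler = self._fpga_ips.get(task, self._cpu_fallbacks[task])
        where = "fpga" if task in self._fpga_ips else "cpu"
        return where, handler(payload)
```

A usage example: registering an `edge_detect` task with both paths and a `logging` task with only a CPU path would route the former to the FPGA and the latter to the CPU.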
Even though FPGAs are a good match for robotic applications, they are not that straightforward to program compared to the most common available software solutions for robotics, such as ROS. Therefore, several research fields are to be further explored:

1. Modeling: An abstraction in the design process has advantages towards simplified programmability. This would have to take into account how hardware architectures for robotics need to be modeled to comply with the requirements dictated by pre-existing software models and specifications. DPR techniques need to be studied to determine how models should express them. The end goal is to obtain hardware models from software specifications.
2. Automation techniques: Considering that the workflow to obtain an RTL design could become cumbersome, code generation and automation techniques, leveraging the models, need to be explored and developed to provide effortless deployment. Further research has to focus on how robotic architectures and applications can be modeled and which modeling techniques can be useful for FPGA designs.
3. Adaptivity and reusability: One of the key features of ROS is the reusability of its components. The hardware-based components that result from automation techniques should follow suit. This could be ensured by relying on model-driven engineering and automation tools. This would also be beneficial for the adaptivity of said components to multiple platforms, whether robots or FPGAs.
However, it should be pointed out that over the last few years we have seen tremendous improvements in FPGA development tools. For example, remote access to and programmability of FPGA devices (even from languages such as Python) is rapidly becoming a mature technology. Furthermore, new (more performance- or power-efficient) FPGA bitstreams can be instantiated in AAL robots through cloud-based download links. This is expected to create new business and economic models (especially for SMEs and mid-cap companies) similar to FPGA-based cloud solutions.

Challenges
Social distancing and pandemic-oriented lockdowns have given rise to many different research challenges in the field of robots in assisted-living environments. In hospitals, in nursing homes, and in senior citizen homes, the sanitization of objects that come into contact with humans is a problem that can be addressed using robots. Use cases for such a solution include sanitizing the handles and knobs of doors in a hospital corridor or a patient's room. In recent work, a human-support robot uses a deep learning framework for such a case, detecting door handles based on a given dataset and sanitizing them by targeting a specific door handle [40,41]. To extend this application area, a neural network can also be trained to detect fire in care homes, raise alarms, report the incidence of a fire outbreak to a fire department, or provide a first emergency response, such as extinguishing a fire. Off-the-shelf humanoid robots like Pepper have enough computing resources to support their emotion engines but only limited memory for heavy learning algorithms. They are usually slow when responding to external commands. Their field of view is limited to the angles imposed by their front-facing input cameras, and they may not report fires that are out of range. However, they offer a good range of sensors (e.g., laser, RGB cameras, microphones) on which to build algorithms that address the challenges described in this section.
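The detector itself is the deep-learning component, but the surrounding logic is simple enough to sketch: filter the detector's raw output for sanitization-relevant objects above a confidence threshold and order them so the robot visits the nearest target first. The data layout, labels, and threshold below are illustrative assumptions, not the pipeline of [40,41].

```python
import math

def select_sanitization_targets(detections, min_conf=0.6, robot_xy=(0.0, 0.0)):
    """Filter a detector's output for door handles/knobs above a
    confidence threshold and sort them by distance from the robot, so
    the sanitization routine visits the nearest target first.

    Each detection is (label, confidence, (x, y)) in room coordinates;
    the (x, y) position would come from projecting the bounding box
    into the robot's map frame."""
    targets = [d for d in detections
               if d[0] in {"door_handle", "door_knob"} and d[1] >= min_conf]
    return sorted(targets, key=lambda d: math.dist(robot_xy, d[2]))
```

The same post-processing shape (filter by class and confidence, then prioritize) would apply to a fire detector, with the priority inverted toward the most dangerous detection rather than the nearest one.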
From a technical point of view, the challenge is how these algorithms are to be implemented, considering that they would require more than the available computational resources of off-the-shelf robots. A solution to this problem is addressed in Section 3.1: by enhancing an off-the-shelf robot with FPGAs, one can take advantage of the sensors and actuators it already has while extending its processing capabilities with algorithms specific to any given task, such as those described in the following sections.
In this area, a significant challenge is the ability of a robot, and of the neural networks responsible for fire detection, to recognize fire from dissimilar features, such as flame, smoke, and heat. This is a challenge that can possibly be addressed through cognitive machine learning [41], where an intelligent system can also perceive dissimilar objects by inferring their functions. Moreover, wireless sensor networks can be combined with a robot equipped with cameras or a similar set of sensors to exploit the cognitive features provided by AI and machine learning; for example, an object with slightly dissimilar features attached to a window can still be perceived as a handle to be disinfected.
In general, the closer a robot is in appearance to the human form, the more demands humans inherently place on it, leading to disappointment, as the provided functionality may not always meet user needs [19]. Humanoid robots are sensitive to humans within a one-meter radius and become confused when interacting with more than one person. Another challenge programmers face is personalizing a robot's velocity to suit the walking pace of different elderly residents in care homes [20]. Furthermore, a learning curve is necessary to adapt to their use, which may be uninviting for ailing elderly people in care homes. Private data collection for experiential robot learning increases power consumption and raises ethical concerns [19].
On the other hand, machine learning models have been applied to various robotic challenges involving sensing, control, and estimation in biomedical applications. In this context, cases in which a real-time response is required are often solved by hardware implementations of such data-driven methods. As the literature shows, machine learning has great applicability for creating models that interpret complex bio-signals. These bio-signals are especially useful for service robotics, since they may work as a communication bridge between a person's wishes and the robot. Moreover, such mathematical abstractions favor efficient hardware implementation due to their structure, which is composed of simple entities, such as neurons or support vector machines.
ANNs' inherent parallel architecture can be exploited with field programmable gate arrays (FPGAs). These devices are flexible logic structures that allow data processing in parallel with high-speed performance on a single device. A recent review on the topic of ANN-embedded solutions on FPGAs and their advantages over other heterogeneous computing platforms is given in [42]. Recently, FPGA solutions for deep artificial neural networks have also been studied for both the training and inference stages. In [42], the authors reported the most important features and advantages of hardware implementations for deep ANNs, namely their lower power consumption and inherent reconfigurability.
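The "simple, replicated entities" that make ANNs FPGA-friendly can be seen in a single fixed-point neuron: a multiply-accumulate chain followed by rescaling and an activation, which hardware replicates in parallel across the layer. The Q-format width below is an illustrative assumption; real deployments choose bit widths per layer from profiling.

```python
def quantize(values, frac_bits=8):
    """Map floats to fixed-point integers (Q-format), as is commonly
    done before deploying ANN layers on FPGA fabric."""
    scale = 1 << frac_bits
    return [round(v * scale) for v in values]

def fixed_point_neuron(x_q, w_q, bias_q, frac_bits=8):
    """One neuron as a multiply-accumulate over fixed-point inputs,
    followed by rescaling and a ReLU. Each MAC is exactly the kind of
    simple, replicated entity that parallel FPGA logic exploits."""
    acc = sum(xi * wi for xi, wi in zip(x_q, w_q))  # Q(f) * Q(f) -> Q(2f)
    acc = (acc >> frac_bits) + bias_q               # rescale to Q(f), add bias
    return max(acc, 0)                              # ReLU activation
```

In fabric, each product in the loop maps to a DSP slice and the sum to an adder tree, so the whole neuron evaluates in a few clock cycles regardless of input width.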
Nonetheless, FPGAs have some drawbacks, namely hardware resource utilization and power consumption. The former is an issue because the reconfigurable hardware inside an FPGA is not limitless; not every architecture will fit into a specific device. If the solution is to use a bigger FPGA, the latter drawback comes into play: a larger device or architecture will draw more power. Several solutions might come to mind for these problems, such as architecture optimization or better resource utilization; however, there are limits to the availability of resources and to the real gains of optimizing the architecture. To meet the strict requirements of embedded applications in terms of performance, power consumption, and physical dimensions, a novel solution that gets past these issues is the implementation of dynamically partially reconfigurable (DPR) systems.
Another significant domain that robots are expected to contribute to in ambient assisted living (AAL) environments is rehabilitation. It is widely acknowledged that rehabilitation has a major effect on the quality of life of people with disabilities, whether they relate to chronic impairments or not. Injuries such as spinal cord injury (SCI) are a dominant factor that greatly affects a person's physical and mental health, along with their financial independence. Therefore, it is of paramount importance for patients to engage in rehabilitation programs and get appropriate treatment. According to the World Health Organization (WHO), every year between 250,000 and 500,000 people suffer an SCI, while people who suffer from SCIs are two to five times more likely to die prematurely [43].
However, from an economic perspective, the costs of providing adequate rehabilitation routines are quite high. In parallel, devices dedicated to SCI treatment are limited and available only in rehab centers and hospitals. With the cost of hospitalization remaining quite high, the pressure on social care systems limits people's access to such treatments.

Robots in Rehabilitation
The design and development of low-cost solutions based on new technologies that could be applied in a patient's personal space seem to be very promising. In this context, the "A-Balance" system was designed and developed in collaboration with medical experts.
The initial scope of the A-Balance, as shown in Figure 1, is for it to be used in posture estimation and balance evaluations of patients that are in rehab due to a spinal cord injury (SCI), whether caused by an ischemic stroke or an accident. Specific requirements were defined in order to develop an appealing solution, both for patients and doctors. Such requirements include a low cost of purchase and maintenance, portability, convenience of use, and for it to be as unobtrusive as possible. As instructed by medical experts, users are prompted to follow specific movements that give valuable feedback to the doctors regarding the neuromuscular condition of the patient in terms of controlling their torso. An inability or reduced ability to follow the exercises given by experts gives valuable information about a patient's status. The rehabilitation process is executed remotely, and the final recovery of patients suffering from SCI is affected by various factors, the most critical being the consistency and accuracy of exercise execution. Patients, in most cases, fail to execute the exercises on their own, either due to misinterpretation of the guidelines or due to negligence.
The presence of a home rehabilitation setup that emphasizes a low cost and high rates of engagement by users could address many of the problems that unattended rehabilitation presents.
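A wearable-IMU balance assessment of the kind described here ultimately reduces to estimating torso orientation from accelerometer samples and summarizing how much it deviates during an exercise. The following is a minimal sketch of one plausible approach (standard gravity-based roll/pitch plus a mean-absolute-deviation sway metric), not the actual A-Balance algorithm; the metric and names are illustrative.

```python
import math

def torso_tilt_deg(ax, ay, az):
    """Roll and pitch of the torso (degrees) from one accelerometer
    sample, assuming the sensor is quasi-static so gravity dominates."""
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    return roll, pitch

def sway_score(samples):
    """Simple balance metric over a session: mean absolute deviation of
    roll and pitch from their session means. Larger values indicate
    poorer postural control. `samples` is a list of (ax, ay, az)."""
    rolls, pitches = zip(*(torso_tilt_deg(*s) for s in samples))
    def mad(xs):
        m = sum(xs) / len(xs)
        return sum(abs(x - m) for x in xs) / len(xs)
    return mad(rolls) + mad(pitches)
```

A clinically meaningful system would add gyroscope fusion, calibration for sensor placement, and per-exercise reference ranges agreed with the medical experts.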

System Description
The authors of this work, in collaboration with medical experts, identified the need for such solutions, which was further aggravated by the latest pandemic, and designed and developed the A-Balance system. The A-Balance aims to support people that suffer from SCI through a wearable and portable solution that can easily be used in their home environment, as depicted in Figure 1.
The A-Balance relies heavily on advanced micro-electronic (AME) systems and smart system integration (SSI). With respect to sensor modalities, highly accurate, open COTS IMU (inertial measurement unit) devices offering ultra-low-power wireless interfaces (e.g., BLE) are integrated in order to offer increased maneuverability to the end user.
The current practice in balance and posture assessment requires expensive and bulky systems that cannot be applied outside the environment of a clinic/hospital [44]. Moreover, the use of such devices requires training, and they cannot be operated by non-medical personnel, such as patients or their caregivers. In this context, people that need to improve their stability and balance must follow the required routines in hospitals, guided by clinicians. Thus, the overall cost for patients and/or insurance companies and the health system is greatly increased.
Wearable cyber-physical systems have already been used to mitigate the increased medical expenses posed by specialized equipment. The authors in [45] compared two algorithms for classifying posture transitions and falls by processing accelerometer and barometer features.
The authors in [46] utilized accelerometers to develop and evaluate balance assessment algorithms that contribute towards the overall objective of moving the primary balance assessment from hospitals to the home environment. The authors in [47,48] utilized the increased use of smartphones in everyday life to deliver balance assessments in patients with stroke. However, a holistic approach that is ready to be adopted in daily practice is not yet available. The A-Balance aims to fill the gaps and proposes an end-to-end approach that utilizes wearable devices and edge computing along with AI techniques delivered as cloud services through efficient integration patterns.
In this context, the A-Balance ( Figure 2) is a multimodal system that incorporates cyberphysical systems and IoT technologies that realize a holistic approach of ICT approaches in rehabilitation. The A-Balance follows a 3-layer architecture that consists of:

End Layer
The layer of the physical devices incorporates various sensing components that are responsible for collecting the multiple modalities that will later be processed to estimate a patient's posture and evaluate the execution of the rehab routine. The main modality integrated in the current version of the system is the inertial values reported by a 3-axis accelerometer and a 3-axis gyroscope. Both sensors are integrated in a low-cost wireless and wearable device, the SimpleLink™ multi-standard CC2650 SensorTag from Texas Instruments (TI) [50]. Measurements from the accelerometer and gyroscope are collected from the TI SensorTag and transmitted wirelessly through the available BLE interface. This setup allows the transmission of data for a significant period of time before battery replacement is needed. The sensor integrated on the TI SensorTag device is the MPU-9250 MEMS motion tracking device. The MPU-9250 is a 9-axis motion processing unit, widely used in smartphones, tablets, and wearable devices. It is a system in a package (SiP) that combines two chips: the MPU-6500, which contains a 3-axis gyroscope and a 3-axis accelerometer, and the AK8963 3-axis digital compass, along with an onboard processor. The power consumption of the device is quite low, around 3.5 mA when all sensors are enabled.
The triple-axis MEMS gyroscope provides digital outputs in the ranges of ±250, ±500, ±1000, and ±2000 °/s through the integrated 16-bit ADC. The operating current is estimated at 3.2 mA, dropping to 8 µA in sleep mode. The accelerometer provides digital outputs with a programmable range of ±2 g, ±4 g, ±8 g, and ±16 g through a similar 16-bit ADC. The default sampling rate is 1000 Hz and the embedded low-pass filter is set to a cutoff frequency of 184 Hz.
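Because the full-scale range and the 16-bit ADC together determine the sensitivity, converting raw samples to physical units is a simple scaling. The following sketch illustrates this for the standard MPU-9250 sensitivities (e.g., at ±2 g, 32768/2 = 16384 LSB per g); the helper names are ours, not part of the TI SDK.

```python
# Scale factors implied by the full-scale range and the signed 16-bit ADC.
ACCEL_LSB_PER_G = {2: 16384.0, 4: 8192.0, 8: 4096.0, 16: 2048.0}
GYRO_LSB_PER_DPS = {250: 131.0, 500: 65.5, 1000: 32.8, 2000: 16.4}

def accel_to_g(raw: int, fs_range: int = 2) -> float:
    """Convert a signed 16-bit accelerometer sample to g."""
    return raw / ACCEL_LSB_PER_G[fs_range]

def gyro_to_dps(raw: int, fs_range: int = 250) -> float:
    """Convert a signed 16-bit gyroscope sample to degrees per second."""
    return raw / GYRO_LSB_PER_DPS[fs_range]
```

The same pattern applies to any of the programmable ranges; only the divisor changes.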
Angles and accelerations are transmitted to the ATLAS cloud through the ATLAS gateway, and the patient's posture is estimated. This posture is presented in the end user's graphical interface as inclination angles. Additionally, the position of the torso is visualized in real time through a human body graphic along with a moving dot that gives an indication of the user's center of gravity.
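A common way to derive inclination angles from a body-worn accelerometer is to use the gravity components; the sketch below shows one standard formulation (the exact convention used in the A-Balance is not specified in the text, so this is illustrative).

```python
import math

def inclination_angles(ax: float, ay: float, az: float) -> tuple:
    """Estimate pitch and roll (degrees) from the gravity components
    measured by a 3-axis accelerometer (values in g, device at rest)."""
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```

With the device lying flat (only gravity on the z-axis), both angles are zero; tilting the torso shifts them accordingly.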
Furthermore, using a low-cost camera and already existing vision-based algorithms, an estimation of the movement regarding the center of gravity of the user will be provided, which is fused with the rest of the modalities in order to increase event detection accuracy. A critical point here is that no device is required to be on the end user for camera detection since the respective algorithms can detect changes between frames and thus estimate the center of gravity of the moving part that the camera is recording.
The distribution of the patient's weight is an important modality that needs to be integrated in the system. This measurement is collected by an array of pressure sensors installed in an exercise/yoga mat, supported by a controller that carries an appropriate low-consumption network interface (BLE) for the transmission of the collected data.

Network Integration
All the aforementioned devices transmit their data to the gateway. This gateway has a two-fold contribution to the overall system. Firstly, it delivers the required integration in terms of communication. Secondly, it can achieve improved utilization of the network by moving less processing-intensive tasks closer to the data sources, following a fog computing approach, and thus reduce network-related delays. The gateway, based on the popular, low-cost Raspberry Pi 3, carries three different network interfaces, namely WiFi, Bluetooth/BLE, and ZigBee (via an additional module).
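The fog-computing role of the gateway can be sketched as follows: raw BLE samples are reduced locally (here by window averaging) before being forwarded to the cloud, which cuts network traffic. The topic name is illustrative, and `client` stands in for any paho-mqtt-style client; this is a sketch, not the project's actual pipeline.

```python
import json

def downsample(samples, window=10):
    """Average consecutive windows of (ax, ay, az) tuples on the gateway,
    reducing the stream forwarded upstream by a factor of `window`."""
    out = []
    for i in range(0, len(samples) - window + 1, window):
        chunk = samples[i:i + window]
        out.append(tuple(sum(v) / window for v in zip(*chunk)))
    return out

def forward(client, samples, topic="abalance/imu"):
    """Publish the reduced stream via an MQTT client's publish() method."""
    for ax, ay, az in downsample(samples):
        client.publish(topic, json.dumps({"ax": ax, "ay": ay, "az": az}))
```

More elaborate edge tasks (filtering, event detection) would follow the same pattern of processing before publishing.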

Cloud Layer
Finally, the cloud layer of the system is based on ATLAS [49], a cloud platform built on cutting-edge technologies that is able to handle large volumes of data streams originating from a constellation of IoT devices. Inside the ATLAS core, loosely coupled services are supported following the micro-service architectural pattern, which offers many advantages with respect to robustness, maintainability, and scalability. The communication from the physical domain to the core domain, as well as among the core services, is based on the MQTT communication paradigm. Finally, at the top of the ATLAS platform, services are responsible for data collection, processing, and storage.
The benefits of such an architecture are multifaceted in terms of resources, complexity, efficiency, and scalability. Of course, focus is also given to non-functional requirements that are strongly related to the ethical aspects of the application. Data collected by the user are transmitted over encrypted channels and stored anonymized. Strict authentication/authorization mechanisms are implemented for every interaction between the components and the users.
Moreover, special care has also been given to the graphical interfaces of the A-Balance platform in order to deliver an intuitive environment that will facilitate the use of the system and increase user engagement through adopting gamification.

Indoor Localization
As location information is the key to providing a variety of services for indoor computing environments, it is becoming an important feature in smart homes and, in general, for devices that change location frequently. There are plenty of localization methods, which can be divided into two main categories: range-based and range-free [51]. Range-based localization methods rely on indirect measurements of the distance or angle between sensors. The most common range-based methods are received signal strength indicator (RSSI), time of arrival (ToA), time difference of arrival (TDoA), and angle of arrival (AoA). On the other hand, range-free methods estimate distances based on measuring the number of hops between any pair of sensors using numerical or statistical methods. Common range-free mechanisms are distance vector hop (DV-Hop) and proximity-based methods.
Our proposed approach for accurately estimating the position of a device in an indoor environment is to use a mixed localization technique, employing both range-based and range-free methods in different steps of the localization process. During the early steps, a proximity-based, range-free technique is used to remove outliers. With the term outliers, we mean fixed-position devices (beacons) that will be excluded from the next steps of the localization process. Next, we adopt an RSSI-based, range-based localization technique in order to estimate the current position of the tracking device. Our proposed localization method is detailed as a process and is shown in Figure 3. Our approach is based on the assumption that high-strength (high RSSI) signals cannot be observed at long distances, whereas low-strength (low RSSI) signals can be observed at different distances, both near and far, due to signal propagation phenomena that reduce the signal strength. Taking this assumption into account, high-strength signals are classified as more accurate for the reduction and filtering of the collected signals.
Our estimation process starts with the collection of different signals by an RSSI-sampler component. We utilize a data transmission frequency of 10 Hz with a sampling period of 2 s, which means that each process is able to analyze up to ~20 RSSI samples per node. The second step is to remove RSSI outliers (outlier removal) with respect to the high-strength signals. The third step (envelope identifier) identifies a sub-part of the RSSI samples, consisting of at least three samples, with respect to the high-strength signals. In the fourth step (signal normalizer), the signal normalization action is triggered, which normalizes the signal in favor of high-strength signals, also taking into account the previous normalization processes. Finally, the last two steps (distance estimator and location estimator) calculate the distance between the fixed-position devices and the tracking device using the log-normal shadowing model and, based on the calculated distances, estimate the location by applying the non-linear least squares algorithm.
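The final estimation steps can be sketched as follows. The log-normal shadowing model gives a distance from an RSSI reading, and the positions are then recovered by least squares over the beacon circles; for compactness the sketch uses a linearized (rather than the full non-linear) least-squares multilateration, and the reference RSSI, path-loss exponent, and threshold values are illustrative assumptions.

```python
import math

def filter_low_rssi(readings, threshold=-75.0):
    """Proximity-based pre-filter: drop weak (far/unreliable) beacons.
    `readings` is a list of ((x, y), rssi) pairs."""
    return [(pos, rssi) for pos, rssi in readings if rssi >= threshold]

def rssi_to_distance(rssi, rssi_d0=-40.0, n=2.0, d0=1.0):
    """Log-normal shadowing: RSSI(d) = RSSI(d0) - 10 n log10(d / d0),
    inverted to obtain the distance in meters."""
    return d0 * 10 ** ((rssi_d0 - rssi) / (10.0 * n))

def estimate_position(beacons, distances):
    """Linearized least-squares multilateration in 2-D: subtract the first
    circle equation from the rest and solve the 2x2 normal equations."""
    (x1, y1), d1 = beacons[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(beacons[1:], distances[1:]):
        A.append((2 * (xi - x1), 2 * (yi - y1)))
        b.append(d1 ** 2 - di ** 2 + xi ** 2 - x1 ** 2 + yi ** 2 - y1 ** 2)
    s11 = sum(a[0] * a[0] for a in A)
    s12 = sum(a[0] * a[1] for a in A)
    s22 = sum(a[1] * a[1] for a in A)
    t1 = sum(a[0] * bi for a, bi in zip(A, b))
    t2 = sum(a[1] * bi for a, bi in zip(A, b))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)
```

With noise-free distances the linearized solution is exact; with noisy RSSI-derived distances it serves as the initial estimate that a non-linear refinement would improve.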

Results
The evaluation of our proposed localization approach was carried out in an area of 16 m², where we placed six fixed-position devices with a maximum distance of 4 m between them. The tracking device, where the localization process is deployed, runs on a Raspberry Pi 3. The experimental results are shown in Tables 1 and 2. The tables indicate that the estimation error mainly fluctuates between 10 and 20 cm. Whether this error is acceptable depends on the object we want to localize. For objects that cover a relatively large area, this estimation error is acceptable. For smaller objects, it may not be, although this also depends on the application(s) to which we plan to adapt the localization method. One use case where this error can be characterized as acceptable is relative localization, where the target is to identify whether the tracked object is close to other related objects.
As future work, and following related works that accomplish more accurate localization, machine learning (ML) techniques will be integrated into our approach to increase localization accuracy.

Pandemic and Robotics
The COVID-19 pandemic and its associated fear of the unknown has massively impacted the behavior of people worldwide [52]. The World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC) in North America have used various means to raise awareness of the hygiene measures necessary to reduce the spread of viruses. In the heart of Europe, organizations such as the German Center for Neurodegenerative Diseases (DZNE) and the European Centre for Disease Prevention and Control (ECDC) have also coordinated efforts to emphasize the need for intense personal hygiene and social distancing. From the wearing of masks and gloves in care homes, shopping malls, entertainment centers, and on public transport to home-office work practices, every effort is directly aimed at reducing the spread of the coronavirus.
In this new form of work environment, workers are being encouraged to work from home, and medical doctors can reach their patients via telepresence [53]. The absence of caregivers in the assisted living environment is also a serious issue that has been known for some years and has been intensified by the global pandemic, making the use of robotics and machine learning relevant. In view of this, the Brandenburg University of Technology has started two projects, CleanMeAI and InjectMeAI, which aim to improve the functionality of existing robots to support relevant staff in hospitals and assisted-living spaces.
In many domains, robots are taking over human tasks in order to reduce contact and the spread of the virus. In the entertainment industry, for example, autonomous robots with tablets on their chests are now responsible for scanning codes on tickets at the entrances of cinemas and theaters [54,55]. Many institutions also enforce hygiene practices by providing hand sanitizers at entrances for public use [56,57]; however, humans under time constraints hardly give these isolated gadgets the attention needed and ignore them. In the CleanMeAI project, a humanoid robot [58] will be trained using machine learning to welcome visitors, provide them with doses of hand sanitizer, and detect (see Figures 4 and 5) and disinfect door handles, a common source of virus contagion [59]. The methodology consists of attaching a sanitizer system to the autonomous humanoid robot, which can walk independently to interact with humans, request to sanitize their hands, and continue to sanitize door handles at regular intervals. Figure 5 shows the Pepper robot equipped with an on-robot tablet running Android OS (left panel). Our application, which runs yolov5 for door handle and person detection, runs on the tablet and can communicate with one of the on-robot cameras. The right panel shows multiple object detection with the yolov5 neural network in action. The bounding boxes in the right panel overlap because the objects are almost in the same position, i.e., the door handles and the hand of the person holding the knob, with confidence scores of 0.66, 0.51, and 0.69, respectively. The confidence tends to decrease for small objects, whose detection is usually difficult for such networks. As the robot approaches an object, the object occupies more of the camera's perspective and is detected with greater confidence, as also shown in Figure 4.
A sanitizer system in the form of a backpack, comprising a spray can controlled by a servo motor through an Arduino board, will hang on the back of the Pepper robot. This battery-supported system attached to the robot will receive commands from the robot via its IP address. The robot will observe and classify images of doors and humans through a model based on that of [60], implemented in Android on its tablet, and send alerts to the Arduino-based hand sanitizer system. Preliminary tests reveal that this detection can be performed in under 1.18 s.

Implementation
We implemented our approach for the CleanMeAI project using the Pepper robot [58], which has been widely used in other works dealing with human interactions [18,61,62]. Unlike humans, autonomous robots like Pepper, shown previously in Figure 5, do not tire of repetitive tasks. Pepper has two RGB cameras and depth sensors in its head that detect a human presence to support its emotion engine. The cameras are used to scan its surroundings for humans and door handles. When a human is detected, Pepper will request the person's hands, spray a dose of sanitizer, and proceed to disinfect door handles. Since the quantity of sanitizer in the backpack is limited, the robot has to keep track of the door handles that have been recently cleaned in order to prioritize the disinfection of others. This is accomplished by assigning time stamps to the detected door handles. Thus, the difference between the current time and the last detection time determines whether a door handle must be disinfected again. For our project, a duration of one hour was set, allowing the robot to disinfect each door handle again after an hour of use.
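The timestamp bookkeeping described above can be captured in a few lines; the class and method names are ours, used only to illustrate the prioritization logic.

```python
import time

class HandleTracker:
    """Track when each detected door handle was last disinfected, so the
    robot can skip recently cleaned handles and conserve sanitizer."""

    def __init__(self, interval: float = 3600.0):  # one hour, as in the project
        self.interval = interval
        self.last_cleaned = {}  # handle id -> timestamp of last cleaning

    def needs_cleaning(self, handle_id, now=None) -> bool:
        """A handle needs cleaning if it was never cleaned or if the
        configured interval has elapsed since the last cleaning."""
        now = time.time() if now is None else now
        last = self.last_cleaned.get(handle_id)
        return last is None or now - last >= self.interval

    def mark_cleaned(self, handle_id, now=None) -> None:
        self.last_cleaned[handle_id] = time.time() if now is None else now
```

In practice the handle id would come from the detector (e.g., a stable identifier derived from the handle's location), which is the harder part of the problem.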
The project, shown in Figure 6, consists of software (and hardware) components developed using QiSDK, an Android software development kit (SDK) [63] for the Pepper robot. Figure 7 depicts the overall flow of the algorithm used to detect, sanitize, and re-sanitize door handles and the hands of detected individuals. To create applications and deploy them on the Pepper robot, this SDK provides APIs to the robot controls. The application uses the PyTorch library to detect door handles and people, exploiting yolov5, which we custom-trained on our datasets. The dataset used for training and testing consisted of manually annotated images of door knobs and handles within our institute premises for the initial experiments. For data augmentation, we captured images from different angles and perspectives and annotated them accordingly.

The InjectMeAI project, on the other hand, seeks to reduce the spread of bodily fluids among caregivers, especially at vaccination centers, where contact with bare skin is essential. Medical personnel have the best training regarding the treatment of patients suffering from the pandemic; however, their close contact with humans puts them at a higher risk of infection. Statistics on the infection rate of nurses and doctors are rising substantially [64]. It is therefore necessary to find a means to further reduce contact with patients, especially during vaccinations. This is where assisting humanoid robots come in handy.
Administering injections requires person-to-person contact. In this circumstance, the spread of bodily fluids, and consequently the coronavirus, becomes imminent. The objective is to attach an injection system to an autonomous humanoid robot that can independently interact with patients in a specified position and deliver injections into the shoulder through a needle attached to one finger.
The appropriate dose will be delivered when the robot detects, through pose estimation methods, that a patient is in a sitting (see Figure 8) or lying position. Bare shoulder classification and injection point identification are crucial and must be estimated precisely, as these vary from one person to another. Existing computer vision algorithms and object detection methods make such computations realistic. Other issues being addressed include determining the trajectory of motion towards the patient and raising the robot's hand to the appropriate height after finding the injection position. TensorFlow Lite and the Keras libraries will be used for the Android implementation of pose estimation, based on [65]. The algorithm that continuously processes the video streams received through the front-facing cameras and directs the humanoid robot to deliver an injection to COVID-19 patients is described in Figure 9. In Table 3, we summarize some of the key differences between several similar approaches and our propositions pertaining to the CleanMeAI and InjectMeAI projects as an all-in-one approach.
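The post-processing of pose-estimation output can be sketched as follows, using COCO-style keypoint names (as produced by common models such as MoveNet) with normalized (x, y) coordinates where y grows downwards. The sitting heuristic, tolerance, and shoulder offset are our illustrative assumptions, not the project's actual method.

```python
def is_sitting(kp: dict, tol: float = 0.08) -> bool:
    """Crude sitting test: when seated, hips and knees appear at a
    similar image height (normalized y grows downwards)."""
    hip_y = (kp["left_hip"][1] + kp["right_hip"][1]) / 2
    knee_y = (kp["left_knee"][1] + kp["right_knee"][1]) / 2
    return abs(knee_y - hip_y) < tol

def injection_point(kp: dict, offset: float = 0.05):
    """Target a point slightly below the shoulder keypoint, roughly
    over the deltoid, where intramuscular vaccines are given."""
    x, y = kp["left_shoulder"]
    return (x, y + offset)
```

A production system would additionally verify that the shoulder is bare (e.g., via a skin classifier) and map the image point to robot coordinates using the depth sensor.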

Equipping Robots with Cognitive Skills
Earlier versions of robots were equipped with the much simpler Choregraphe programming interface in a NAO environment. The transition from the Choregraphe platform to Android for imbuing the Pepper autonomous robot with cognitive skills permits the use of current object detection and pose estimation algorithms that elevate its intelligence. Roboticists, as well as robot engineers, have had their fair share of challenges arising from pandemic restrictions. The Pepper Android emulator does not support the use of video and audio inputs, so programs incorporating such data must be implemented and tested directly on a real robot. Machine learning models requiring a continuous image feed become difficult to test without the physical presence of robots quarantined at research laboratories. The robot's autonomy engine further constrains its use while it is connected to the charging station. Thus, keeping the robot active and accessing it remotely for test purposes is also ill-advised, compounding the challenges of transferring cognitive abilities to the technological system.

Conclusions
A particular application scenario pertaining to robots in assisted-living environments requires an amalgamation of more than just a single research area. It is discernible from the past, present, and future perspectives within this area that it is a multi-disciplinary field when a particular use case is considered [1–4,19–21,25,26,29,31,34,39–41,61,62,64,66]. More specifically, complex robotic systems or intelligent robots of the future, which can operate independently in an assisted living environment, could combine knowledge from AI, machine learning, cognitive machine intelligence, sophisticated robotics, embedded systems, IoT, and healthcare engineering. From our discussion of the projects, applications, ideas, and research work, it can be inferred that the prospects of robotics in assisted living are expanding and will be broadly diversified.