Service Robots in the Healthcare Sector

Traditionally, advances in robotic technology have been concentrated in the manufacturing industry, driven by the need for collaborative robots. The same cannot be said of the service sectors, especially healthcare. This lack of emphasis on the healthcare sector has created new opportunities for developing service robots that aid patients with illnesses, cognitive challenges, and disabilities. Furthermore, the COVID-19 pandemic has acted as a catalyst for the development of service robots in the healthcare sector in an attempt to overcome the difficulties and hardships caused by this virus. The use of service robots is advantageous: they not only help prevent the spread of infection and reduce human error, but also allow front-line staff to limit direct contact, focusing their attention on higher-priority tasks while creating separation from direct exposure to infection. This paper presents a review of various types of robotic technologies and their uses in the healthcare sector. The reviewed technologies are collaborations between academia and the healthcare industry, demonstrating the research and testing needed before service robots can be deployed in real-world applications and use cases. We focus on how robots can benefit patients, healthcare workers, customers, and organisations during the COVID-19 pandemic. Furthermore, we investigate the emerging focal issues of effective cleaning, logistics of patients and supplies, reduction of human error, and remote monitoring of patients to increase system capacity, efficiency, and resource equality in hospitals and related healthcare environments.


Introduction
Over the years, the field of service robotics has grown significantly, especially in the industrial sector. However, less attention has been given to the healthcare sector than to others, perhaps due to the challenges associated with providing interpersonal care, the very nature of the service. Although this delayed the development of robots to help with patient care needs, the situation has changed since the outbreak of the Coronavirus. Robotic technology is being quickly re-evaluated and used to help prevent the spread of viruses and bacteria through disinfection, logistics, and telehealth. Furthermore, there is a demand for service robots to assist nurses and healthcare workers to increase productivity [1], limit further person-to-person contact, and compensate for the significant loss of healthcare workers due to the highly infectious nature of this disease. Without compromising the quality of care, robotics applications could offer the promise of sustainable and affordable healthcare provision, and although robot technology can be used in many areas, it could make a notable difference in clinical care. For instance, robots could be used to measure temperatures through the use of thermal sensors.

Service Robotics
The field of service robotics is relatively new; it is predicted that the demand for professional service robots to support healthcare staff will reach 38 billion USD by 2022 [3], as robots will not only lower the workload for healthcare staff but also aid in complex tasks that need to be carried out [4]. The first service robot definition was coined in 1993 by the Fraunhofer Institute for Manufacturing Engineering and Automation (Fraunhofer IPA): "A service robot is a freely programmable kinematic device that performs services semi- or fully automatically. Services are tasks that do not contribute to the industrial manufacturing of goods but are the execution of useful work for humans and equipment" [5]. Since then, many definitions have been proposed; the International Organisation for Standardisation defines the term as "a robot that performs useful tasks for humans or equipment, excluding industrial automation applications" [6], whereas the International Federation of Robotics emphasises the robot's autonomy in its definition: "a service robot is a robot which operates semi- or fully autonomously to perform services useful to the well-being of humans and equipment, excluding manufacturing operations" [7]. As the term "service robot" has continued to evolve, its definition has blurred due to the crossover between the industrial and service sectors. For instance, mobile robots and automated guided vehicles (AGVs) are used both in industrial automation applications and as service robots in new environments such as hospitals.
The majority of this paper focuses on service robots designed for the healthcare sector; we therefore categorise a service robot as one that carries out tasks, either partially (semi-) or fully autonomously, in a clinical setting. In the subsection on social care, we give examples of how personal service robots can be used to mitigate loneliness and promote productivity during social isolation. Service robots in a clinical setting can be used for many different purposes, such as disinfection, surgery, logistics, monitoring, rehabilitation, and endoscopy. They have an important place in healthcare as they can provide precise control of instruments, increase safety, monitor patients, and perform some diagnostics [8]. In most cases, these robots share the environment with humans and, as such, should be able to recognise faces, gestures, and speech as well as objects. Successful performance of these tasks enables obstacle avoidance and emotion-based communication [8]. Fundamentally, service robots are machines that can carry out a series of actions. They are capable of autonomous decision making based on the input they receive from their sensors, cameras, and microphones, and they can adapt to the situation and learn from previous actions [9]. Many service robots are connected to larger systems, such as cloud-based systems that store information like the user's details and transaction data. When combined with biometrics such as face recognition, service robots can identify people and personalise their service at a marginal cost [9]. In terms of appearance, service robots tend to be either machine-like or human-like, possessing some human-like features, often stylised [10].
With regards to the types of tasks that they carry out, duties can either be logistical (operational tasks for transportation), analytic (image analysis for diagnosis) or emotional (directly dealing with people).
A major challenge in this field is the acceptance of service robots. Robots are only accepted if they can benefit one's work, making it more efficient and pleasant [11]. However, there is still the fear that robots will replace their human counterparts, resulting in job losses. To promote positive attitudes towards and acceptance of service robots, it would be beneficial to hold public engagement campaigns that introduce these robots and train healthcare staff in how to operate them. Furthermore, by giving service robots more exposure-making them visible in everyday environments-they are more likely to be accepted [12]. On one hand, service robots can relieve pressure on healthcare staff by assisting them with daily activities, giving them more time to focus on higher-priority tasks. On the other hand, the increasing use of service robots can lead to less interpersonal contact for patients, which is especially relevant during COVID-19 as conscious efforts are being made to reduce person-to-person contact, notwithstanding the resulting patient isolation. Ethically, this is an issue because older patients in particular may feel there is an aspect of dehumanisation, so efforts must be made to provide these users with social care. Another common issue with service robots is reliability; service robots need robust safety features as they interact with humans, as well as cyber-security measures that can deal with the challenges associated with remote connectivity, such as unauthorised access or data breaches. Aside from safety, it is important to verify that there are no false positive or negative results when robots are used for detection or diagnosis, as such errors can pose a dangerous risk to public health. It is therefore of the utmost importance that ongoing technological progress in sensor and actuator development is carried out, as well as further research into deep learning and human interaction.

Sterilisation
According to Spagnolo et al. [73], up to 17% of infections in hospitals are due to unclean surgical rooms. Despite rigorous protocols, data from hospitals have shown that fatal infections are increasing, even with more efficient cleaning practices [74]. In fact, despite improved cleaning interventions, research suggests up to 30% of surfaces still harbour biofilms even after cleaning [75,76]. These statistics suggest that the current course of action is not enough to protect susceptible patients from serious, life-threatening infections such as COVID-19. Indeed, COVID-19 can spread not only via close contact but also through unclean surfaces; it can persist on inanimate surfaces-metal, glass, or plastic-for days [77]. Therefore, surfaces and objects need to be sterilised; sterilisation kills all forms of microbial life on an object or surface [78].

Ultraviolet Sterilisation
In an effort to reduce the environmental bioburden and decrease the risks posed by microorganisms in patient care areas, the use of "no-touch" technologies, such as ultraviolet light disinfection, has been gaining considerable interest in the last year as an effective and comprehensive environmental disinfection strategy. This technology has many advantages, including residue-free surfaces after treatment, a broad spectrum of action, rapid exposure times, and no need to change a room's ventilation during treatment [79]. Ultraviolet sterilisation uses ultraviolet light within the range of 100-280 nm (UVC) to help eradicate microorganisms from hospitals. Light in this wavelength range can be used to sterilise surfaces, disinfect water, and kill microorganisms in food and air. The light is absorbed by the DNA and RNA of the microorganism and causes the dimerisation of adjacent pyrimidine bases, thereby inhibiting cellular replication through the resulting molecular lesions. To calculate the exposure time required for sterilisation, two factors matter: the irradiance received at the target surface and the duration of exposure. The delivered UV dose is the product of irradiance and time, so the exposure time required for a given dose is inversely proportional to the irradiance at the surface.
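The dose arithmetic above can be made concrete in a few lines. The sketch below is a minimal illustration (the function names are ours, and the point-source inverse-square idealisation is an assumption, not a property of any specific robot reviewed here):

```python
def required_exposure_time(target_dose_mj_cm2, irradiance_mw_cm2):
    """Time (s) needed to deliver a target UVC dose.

    Dose (mJ/cm^2) = irradiance (mW/cm^2) x time (s), so the required
    time is simply the dose divided by the irradiance at the surface.
    """
    if irradiance_mw_cm2 <= 0:
        raise ValueError("irradiance must be positive")
    return target_dose_mj_cm2 / irradiance_mw_cm2


def irradiance_at_distance(ref_irradiance_mw_cm2, ref_distance_m, distance_m):
    """Approximate irradiance falloff with distance using the
    inverse-square law (a point-source idealisation)."""
    return ref_irradiance_mw_cm2 * (ref_distance_m / distance_m) ** 2
```

One consequence worth noting: doubling the distance from lamp to surface quarters the irradiance and therefore quadruples the exposure time needed for the same dose, which is why shadowed or distant surfaces are the weak point of room-scale UVC disinfection.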
However, UVC light does have its limitations; UVC exposure is harmful to humans if exposed for prolonged times due to its carcinogenic and cataractogenic nature. Additionally, hospitals typically have many different types of surfaces and objects jutting out at angles so it can be difficult to reach areas that are hidden under shadows or behind objects. A recent study carried out by Lindblad et al. [80] demonstrated that UVC light failed to reach places like the wardrobe and the sink which were partly or completely in shadow as readings from the radiometer were significantly lower in these areas.
To combat these issues, Chanprakon et al. [13] designed a UVC robot that can navigate around a room and thoroughly sterilise an entire operating room with or without human intervention. The robot is equipped with three UVC lamps mounted in a circular pattern 120° apart, and six ultrasonic sensors (Figure 1). Each lamp has a 19.3-watt output. It uses four free wheels for locomotion (0-1.4 m/s) and is controlled from a central command centre. If an obstacle is detected, the microcontroller steers the robot's wheels via a motor driver to avoid a collision. To test the effectiveness of the UVC robot, sample plates of Staphylococcus aureus were exposed to UVC light at a distance of 35 cm from the robot, and a range of exposure times was tested. After the samples were exposed to UVC light, they were incubated for 24 h at 37 °C. Chanprakon et al. [13] demonstrated a significant reduction in bacteria after only 2 s: colony counts fell from 23 to 7, and by 8 s all bacteria had been eradicated.
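Reductions like the one reported above (23 colonies down to 7) are commonly expressed as log10 reductions. A small helper for that conversion (our sketch, not part of the cited study's methodology):

```python
import math

def log10_reduction(initial_count, surviving_count):
    """Log10 reduction between initial and surviving colony counts;
    e.g. a 2-log reduction means 99% of organisms were killed."""
    if initial_count <= 0 or surviving_count <= 0:
        raise ValueError("counts must be positive (substitute a "
                         "detection-limit value for complete kill)")
    return math.log10(initial_count / surviving_count)
```

For the counts above, log10(23/7) is roughly 0.52, i.e. about a 70% reduction after 2 s; complete eradication cannot be expressed this way directly, which is why studies substitute a detection limit for a zero count.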
A Danish-based company called UVD Robotics [14] has introduced clinically tested and verified mobile UVC light robots into hospitals to efficiently disinfect rooms (Figure 2a). This robot is the only commercial disinfection solution capable of locating and positioning itself in a hospital room while getting close enough to all critical objects during the 10-min disinfection process. The UVD (ultraviolet disinfection) robot is used as part of a regular cleaning cycle and aims to prevent the spread of infections and diseases.
To use this robot, a member of the cleaning staff orders it via an app; the robot then takes the elevator, opens the door, and enters the required room at a maximum speed of 5.4 km/h. Staff then go through a security checklist, and the robot is left to disinfect the room using high-intensity UVC light with a wavelength of 254 nm, killing 99.99% of bacteria in 10 min [14]. Disinfection is done without exposing front-line hospital staff to the risk of disease, while eliminating human error. Once disinfection is complete, the UVD robot notifies the staff and a report is generated. The robot is battery powered (with a 3 h charge time) and can disinfect up to 10 rooms on a single charge.
In the USA, a company called Xenex [15] produces the LightStrike UV disinfection robot (Figure 2b), a patented system powered by Xenon SureStrike technology. According to Xenex [15], this robot uses pulsed UV in the range of 200-315 nm, delivering 4300 times greater UV pathogen-killing intensity than traditional UVC mercury vapour sources (a single wavelength of UVC at 254 nm), and can disinfect a room in 20 min. These robots have been deployed in over 400 hospitals worldwide to elevate their environmental hygiene practices. Beal et al. [81] carried out the first UK trial of the Xenex robot (PX-UV) to investigate how well it can remove vancomycin-resistant enterococci (VRE) and reduce total aerobic colony counts on surfaces. The study was conducted in 10 single-bed, en-suite rooms in a teaching hospital at three different time points: environmental sampling was carried out after patient discharge, after a regular cleaning cycle performed by cleaning staff using the national standard cleaning protocol, and following the completion of three PX-UV disinfection cycles. Sampling points included areas at risk of direct contamination and areas that may be difficult to access for cleaning. Using contact plates at the sampling points, Beal et al. [81] found a 76% reduction in bacterial bioburden following manual cleaning and a further 14% reduction in total aerobic colony counts following UV disinfection. The researchers concluded that three 5-min cycles of UV disinfection were not enough to eradicate VRE, but that longer UV emission times might increase the effectiveness of this device against VRE.
Omron introduced the LD-UVC robot specifically to fight the COVID-19 pandemic, automating the UV disinfection process safely and intelligently. This robot is used for effective disinfection of high contact points and rooms and is suitable for healthcare, hospitality, assisted living, and commercial buildings [16]. The LD-UVC uses LiDAR sensors for navigation and obstacle avoidance, moving autonomously through a mapped route, as well as motion sensors for human detection: if a human is detected, the UVC light turns off immediately. The robot has eight UVC lamps, which give it the ability to kill 99.90% of airborne and droplet pathogens in a 360° radius [16]. Furthermore, the robot supports infrastructure such as elevators, automatic doors, and PABX systems, and thanks to its OMRON LD base it can be used for various other applications such as material handling, catering, and other logistical needs.

Geek+ Robotics has developed a disinfection robot to ensure safe working environments during COVID-19. Lavender, their UVC disinfection robot, kills 99.99% of pathogens using a UVC wavelength of 253.5 nm and can operate 24/7 without supervision, chemicals, or manual labour [17]. Similar to the LD-UVC [16] and UV Sterilisation [13] robots, Lavender has UV lamps arranged in a circular pattern to deliver light to all areas. The robot has three different functions: a standard mode, which automatically performs unmanned full-map disinfection; a user-defined mode, which carries out disinfection along a pre-set route; and a static mode, whereby the robot reaches a specific spot and completes disinfection there [17]. To traverse complex environments, Lavender uses SLAM navigation and supports elevator integration.

An alternative to UVC is far-UVC, which uses shorter wavelengths between 207 and 222 nm. Although the use of far-UVC for disinfection is still in its early stages, it could be advantageous.
For instance, far-UVC light has been shown to be as effective at killing microorganisms as conventional germicidal UV light, without the associated human health risks [82]. This is due to its very limited penetration range in biological materials: it cannot reach living human cells in the skin or eyes. However, according to Buonanno et al. [83][84][85] and Ponnaiya et al. [86], far-UVC light can still reach and penetrate viruses and bacteria, potentially offering the same germicidal properties as UVC light without the associated health risks. A potential idea, therefore, would be to incorporate far-UVC lamps into standard light fittings so that they could be used in public spaces such as hospitals, airports, and train stations. If more research is carried out on far-UVC and it is proven safe, this approach may prove ideal for disinfecting spaces with a continuous stream of people, such as a 24-h market, or for providing constant disinfection in hospitals [87]. A recent development by Healthe [88], a light manufacturer, is the "Cleanse Portal", claimed to be the first-ever human-safe far-UVC technology for businesses looking to mitigate the spread of Coronavirus as economies re-open and employees head back to work. This portal sanitises clothing and personal belongings using far-UVC light: upon entering a new space, one simply steps into the Cleanse Portal and makes a slow 360° turn for 20 s [88]. The far-UVC light disrupts the ability of viruses and bacteria to replicate and reduces the microbial load. Healthe believes that if nurses and doctors passed through this portal archway as they entered or left a unit such as intensive care, it would drastically reduce the chance of them bringing germs in on their clothes, skin, or personal belongings. More importantly, it would also dramatically reduce the chance of healthcare staff taking pathogens home with them after their shift [88].
When implementing UV disinfection, UV light can be applied either as a continuous wave or as pulsed light. Continuous-wave irradiation is based on low-intensity monochromatic or polychromatic light and is the most commonly used. However, according to Vilhunen and Sillanpää [90], the mercury lamps used for continuous-wave irradiation are problematic due to their poor energy efficiency, their environmental footprint (owing to the use of mercury and their short bulb lifetime), and their inability to switch on and off at high frequency. Pulsed light, on the other hand, does not use conventional heating technology but is instead based on the application of broad-spectrum, high-intensity pulses of light. However, the xenon lamps used have a high energy demand, which can affect efficiency, as the pulse frequency becomes limited when they overheat [91]. A potential alternative to mercury and xenon lamps would therefore be LED lights. LEDs are long-lasting, use less energy than mercury and xenon lamps, and are more environmentally friendly, as they are made with non-toxic materials and waste little energy as heat [90].
The effectiveness of continuous UVC and pulsed UVC for decontamination depends on many factors: the length of exposure time, the intensity of the irradiation, the penetration level, and the presence of any objects shielding surfaces to be decontaminated. Varying results have previously been reported on their effectiveness. McDonald et al. [92] demonstrate that pulsed UVC is significantly more effective in inactivating Bacillus subtilis spores in aqueous suspensions and on surfaces than continuous UVC for the same conditions. In contrast, Nerandzic et al. [89], found that continuous UVC is similarly effective or more effective than pulsed UVC; pulsed UVC did not perform as well as continuous UVC when reducing pathogens in hospital rooms with a 10-min exposure time. Additionally, the efficacy of the pulsed UVC device was dramatically reduced when the distance from the surfaces to be decontaminated increased.
In a study carried out by Nyangaresi et al. [93], inactivation results with UVC LEDs were comparable under pulsed and continuous-wave irradiation, suggesting that the wavelength range is more important than the irradiation mode. They also discovered that pulsed operation of UVC LEDs effectively suppresses the temperature rise usually found with continuous-wave UVC LEDs, thus significantly improving the thermal management of the LEDs.
McLeod et al. [94] reported that UV of different wavelengths produces varying levels of effectiveness in inhibiting the growth of microorganisms. For instance, a continuous-wave exposure of 0.05 J/cm² was found to be as effective as a pulsed-light exposure of 1.25 J/cm², demonstrating that continuous-wave UVC has a higher germicidal effect for the same delivered exposure [94]. However, pulsed UVC demonstrated higher levels of penetration, reducing bacterial numbers further [95]. This is demonstrated in a study by Otaki et al. [96], who state that continuous UVC is not as effective as pulsed UVC for the decontamination of food, as pulsed UVC can penetrate the organic load present in wastewater. Therefore, although the pulsed UVC device reduced contamination on surfaces, residual contamination can remain, and alternative technologies such as hydrogen peroxide vapour may be better suited to the elimination of pathogens [89].

Hydrogen Peroxide
To control the spread of pathogens in hospitals, strict hygiene protocols are paramount. Traditionally, chemicals such as chlorine and 5% chloramine have been used for surface disinfection in patients' rooms and on furniture throughout hospitals [18]. However, hydrogen peroxide is a stronger disinfectant than chlorine and chloramine, making it well suited for decontamination during outbreaks [97,98]. In fact, the routine use of hydrogen peroxide resulted in a significant reduction in C. difficile infection in a US hospital [99].
Andersen et al. [18] present a programmable robot that carries out surface decontamination using a dry aerosol of hydrogen peroxide disinfectant in test and surgery rooms, ambulances, and on medical equipment. The pre-programmable robot (Figure 3) works by producing hydrogen peroxide as electrically charged particles that move through the air as a dry aerosol. The hydrogen peroxide then adheres to particles in the atmosphere and on surfaces, forming a dry film on these particles. To achieve the most effective concentration of the dry aerosol, the robot was placed 2 m away from objects, and a concentration between 12 and 60 ppm was used; the required concentration depends on the volume of the room to be decontaminated. When examining the test rooms and the surgery department, the robot was left in the corner of each enclosed room and the required concentration of hydrogen peroxide was entered into the device. Three cycles of the dry aerosol were then carried out, each with a diffusion time of 12 min, and the contact times were increased for each cycle (30, 60, and 120 min). When decontaminating the ambulance, the robot was placed in the corner of an enclosed garage and all ventilation systems were shut down. The diffusion time in this case was 29 min.
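The dependence of concentration on room volume can be made concrete with a little ideal-gas arithmetic. The sketch below is our illustration, not the device's actual dosing logic: it estimates the mass of hydrogen peroxide that must be aerosolised to reach a target ppm-by-volume concentration in a sealed room, assuming ideal-gas behaviour and neglecting decomposition and adsorption onto surfaces.

```python
MOLAR_VOLUME_L = 24.45    # litres per mole of ideal gas at 25 degC, 1 atm
H2O2_MOLAR_MASS = 34.01   # g/mol

def h2o2_grams_for_ppm(room_volume_m3, target_ppm):
    """Grams of H2O2 to vaporise for a target ppm-by-volume
    concentration in a sealed room of the given volume."""
    room_volume_l = room_volume_m3 * 1000.0
    # moles of H2O2 = (volume fraction) x (total moles of air in the room)
    moles = target_ppm * 1e-6 * room_volume_l / MOLAR_VOLUME_L
    return moles * H2O2_MOLAR_MASS
```

For a 50 m³ room at the upper 60 ppm level, this works out to only around 4 g of H2O2 in the gas phase, which illustrates why the same device setting cannot serve rooms of very different sizes.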
To control the effect of the decontamination, spore strips of Bacillus atrophaeus Raven were placed in various positions and on various surfaces in rooms (under tables, high up on walls, and on the ceiling and floor), in ambulances, and on inner and outer parts of the medical equipment. The cultures were then left for seven days to grow at 37 °C. On examination, the treatment was found to be effective in 87% of the test rooms, 100% of the operating department, 62.3% of the medical equipment, and 100% of the ambulance experiment [18]. A similar study was carried out by Fu et al. [100], who investigated the efficacy, efficiency, and safety aspects of hydrogen peroxide vapour. They used the Bioquell robot system [19], a mobile and robust hydrogen peroxide vapour generator that can be used hospital-wide to eliminate nosocomial pathogens. To create a condensing hydrogen peroxide vapour, the robot vaporises 30% liquid hydrogen peroxide at 120 °C. It also has an aeration unit to break down the hydrogen peroxide vapour after the exposure period, and an instrumental module to measure the concentration of hydrogen peroxide, temperature, and humidity in the room. To release the hydrogen peroxide vapour, the robot uses a rotating nozzle until the air is completely saturated (the dew point). The vapour then begins to condense onto surfaces and rapidly kills microorganisms. As this is a hazardous substance, rooms should not be entered until the measurable concentration of hydrogen peroxide is less than 1 ppm. It has been shown that a single Bioquell system is more effective and faster than two aerosolised hydrogen peroxide robots; the Bioquell robot successfully deactivated 100% of the biological indicators over a series of tests, compared with only 50% for the aerosolised robots [101].
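The re-entry rule above (wait until the measured concentration falls below 1 ppm) is naturally expressed as a polling loop. A sketch, with `read_ppm` standing in for the instrumental module's concentration sensor (a hypothetical interface of ours, not Bioquell's API):

```python
import time

def wait_until_safe(read_ppm, threshold_ppm=1.0, poll_interval_s=30,
                    sleep=time.sleep):
    """Block until the measured H2O2 concentration drops below the
    re-entry threshold, polling the sensor at a fixed interval.

    `sleep` is injectable so the loop can be tested without waiting.
    """
    while read_ppm() >= threshold_ppm:
        sleep(poll_interval_s)
```

In a real deployment this gate would feed the same notification channel the robot uses to report a completed cycle, so staff are only alerted once the room is actually safe to enter.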
To test the efficiency of the system, sample plates of methicillin-resistant Staphylococcus aureus (MRSA), A. baumannii, and C. difficile were placed in test locations throughout two rooms [100]. Both rooms had sealed doors to prevent the escape of hydrogen peroxide vapour. When the Bioquell robot completed the decontamination, the test discs were incubated at 57 °C for seven days. Results showed that the Bioquell system inactivated, on average, 92.5% of the biological indicators, demonstrating that hydrogen peroxide vapour is effective for decontamination [100]. However, this study had a few limitations: few pathogens were tested, with only one strain per pathogen, and actual surface contamination was not measured. Despite these limitations, the study data indicated that the robot was safe to operate, and that the hydrogen peroxide vapour system was slightly faster and achieved a greater level of biological inactivation than aerosolised hydrogen peroxide.

Disinfection
The goal of environmental control in the hospital, be it in patients' rooms or operating rooms, is to keep microorganisms, including drug-resistant bacteria, to an irreducible minimum. However, sterilisation methods are not always available, so surfaces and objects need to be disinfected; disinfection kills harmful microorganisms on an object or surface but cannot kill all types of microbes, specifically bacterial spores [78]. Therefore, a cost-effective solution (both in time and in minimising the risk of exposure) to the manual disinfection of surfaces would be to use robots [102]. A plethora of new disinfection robots has been developed to help clean and disinfect these high-risk areas. These include human support robots that sanitise high contact points and automated solutions for cleaning walls and floors.

Disinfecting High Contact Points
High contact points such as doors, lifts, and handrails are prone to contamination and are key mediums for spreading bacteria. Therefore, the creation of a robot that could carry out routine cleaning of these surfaces would help ensure safety as well as improve efficiency.
In an effort to mitigate the cleaning burden and high-risk of infection, Ramalingam et al. [20] introduced a human support robot for the cleaning and maintenance of door handles.
This fully autonomous robot cleans and wipes door handles to avoid direct human contact with bacteria and viruses. The human support robot uses an arm manipulator with a spraying unit to spray disinfecting liquids and a cleaning brush to clean the sprayed region (Figure 4a). This cleaning process can be complex as it requires locomotion, object detection, and control of the arm manipulator. For object detection, the robot is equipped with an RGBD camera (RGB and depth perception), used to capture images in front of the robot, as well as a wide-angle camera, which helps locate the position of the door handles. The captured images are then used as inputs to a deep learning-based object detection framework. Additionally, the robot can carry out localisation and mapping, and is fitted with pneumatic bumpers to prevent collisions. To test the efficiency of the robot, an experiment was carried out in three phases: training the deep learning framework, mapping the indoor environment, and configuring and testing. To train the model, a convolutional neural network was built, and 4500 images were collated and used as the dataset. These images were sorted into three class types: circle, lever, and bar handles. For each class, 1200 images were used for training and 300 for validation. After the network was successfully trained, the detection accuracy was examined offline using 120 images, resulting in a 95% confidence level. When examining the detection accuracy in real time under different lighting conditions, door handles were detected with a confidence level of 88-92%. On average, the framework took 11 s to detect a door handle, and 18 min to clean four door handles. However, a pitfall of this study's robustness was the effect of occlusion. That being said, this shortcoming could be mitigated by designing a different approach direction for a single door.
For instance, if the robot fails to detect the door handle from its designated position, it could move to another position in order to detect the handle.
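The fallback strategy just described (try another approach pose when the handle is occluded) can be sketched as a simple loop. `move_to` and `detect_handle` are hypothetical stand-ins for the robot's navigation and vision interfaces, not functions from the cited work:

```python
def detect_with_fallback(candidate_poses, move_to, detect_handle):
    """Visit candidate approach poses in order until the handle
    detector succeeds; returns (pose, detection), or None if the
    handle cannot be found from any pose."""
    for pose in candidate_poses:
        move_to(pose)
        detection = detect_handle()
        if detection is not None:
            return pose, detection
    return None
```

The candidate poses could be pre-computed per door during the mapping phase, so a failed detection costs only one short repositioning move rather than a full re-plan.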
Yin et al. [21] developed the Human Support Robot further by modifying it to clean and inspect tables (Figure 4b). The researchers propose a deep learning algorithm to detect food litter and a path-planner algorithm to clean tables in a standard food court setting. Two cleaning methods are used for the table cleaning task: sweeping and wiping. The sweeping method clears litter using either group-based sweeping, to clean multiple pieces of litter at once, or straight sweeping, to remove separated pieces of food litter. The wiping method is used to remove stains using a zig-zag technique.
To train the model, a 16-layer convolutional neural network was used. The RGBD camera was employed to capture the litter dataset; 3000 images were captured in total, each with different table backgrounds and various types of waste (food, liquid, and stains). The dataset was then divided into two subsets, single litter (solid or liquid) and mixed litter (solid and liquid), each with 1500 images. K-fold cross-validation was used to validate the dataset. To test the efficiency of the algorithm, the robot was placed in a workspace containing square and circular dining tables with various food litter scattered on them. The robot was able to detect litter at a 97% confidence level, and stains and liquid spillage were recognised at a 96% confidence level. The execution module took an average of 210.5 s to clean three to five pieces of litter. In cases where litter was not detected or was incorrectly identified, the overall miss rate and false rate were less than 3% and 2%, respectively. These errors arose due to the lack of depth features and the effect of noise on the depth sensor. Therefore, the neural network needs to be customised to give acceptable solid/liquid classification on the table with only depth information. That being said, the proposed cleaning framework detected litter with the highest detection accuracy among the methods considered in their study (Faster RCNN with ResNet and MobileNet V2 SSD).
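The two error metrics reported above can be computed directly from test counts. A minimal sketch (the numbers in the usage note are illustrative, not the study's raw data):

```python
def miss_and_false_rates(n_items, n_missed, n_detections, n_false):
    """Miss rate: fraction of real litter items the detector failed
    to find. False rate: fraction of reported detections that were
    not actually litter."""
    if n_items <= 0 or n_detections <= 0:
        raise ValueError("item and detection counts must be positive")
    return n_missed / n_items, n_false / n_detections
```

For example, missing 3 of 100 litter items while making 2 spurious detections out of 100 reports gives a 3% miss rate and a 2% false rate, matching the upper bounds reported by the study.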

Sanitising Floor and Wall Surfaces
Since the outbreak of COVID-19, more robust cleaning, disinfection and sterilisation protocols are being followed, with additional monitoring and auditing implemented. Nevertheless, safety and efficiency are major concerns when human labour is used in this process. Therefore, the use of robotic solutions for the disinfection of contaminated surfaces such as walls and floors needs more attention.
Cepolina et al. [22] proposed the design of GeckoH13 (Figure 5a), a climbing robot dedicated to wall and ceiling cleaning. The robot adheres pneumatically using four rigid vacuum cups to climb walls; automatic suspensions lift and press the cups against the wall while the robot is moving. The robot has been designed with different levels of cleaning in mind and uses different detergent liquids for each task: low-level disinfection for waiting rooms, an increased level of disinfection in patient rooms in case of outbreaks of infection, and sterilisation of surfaces using chemical detergents for surgical suites. It uses a steam generator and ejector to minimise the cleaning liquid needed, as well as micro-channels which recover the dirty liquid and pump it into a tank after the cleaning operation has been completed, preventing the spread of potential microorganisms. The liquid is sprayed onto the walls using nozzles, and a skirt along the perimeter of the robot prevents detergent from spreading into the environment. Additionally, the system produces reports on the cleaning carried out, its duration, the detergent used, and the temperature employed. Cepolina et al. [22] used virtual reality tests of a typical hospital to examine the robot's static and dynamic behaviour as well as the performance of the proposed design. However, as this robot is in its preliminary stages, it is hard to foresee the feasibility of the design; the robot may damage walls, has a risk of falling, and may have difficulty adhering to multiple surfaces. An alternative solution would be to employ a mobile robot with an extending and articulating arm. A hygiene company called Taski [103] introduced the Swingobot 2000 (Figure 5c), a scrubbing, drying, and floor cleaning robot. This robot delivers autonomous cleaning as well as simple manual operation.
The robot is equipped with sensors that provide 360° views, allowing the scrubber to operate hands-free. It also uses 2D lidar and sonar to detect objects and avoid collisions; if someone walks across the robot's path, it will stop within 0.48 s using instant braking. Furthermore, the machine's touch sensors are an added safety feature, allowing it to stop all movement if it comes into contact with any object. When the Swingobot 2000 has finished cleaning an area, it communicates with the operator via the cloud for full visibility and generates detailed reports including login time, end of shift, and areas cleaned. This robot not only doubles efficiency at the cleaning site, but is also sustainable, saving up to 76% in water and chemical consumption compared to traditional scrubbers [23].
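For intuition, the quoted 0.48 s stop time can be turned into a stopping distance once a travel speed is assumed. A back-of-the-envelope sketch, assuming uniform deceleration and an illustrative speed of 1.0 m/s (the Swingobot's actual travel speed is not given here):

```python
# Stopping distance under constant deceleration to rest: d = v * t / 2.
# The 0.48 s stop time is from the source; the 1.0 m/s speed is an
# illustrative assumption.

def stopping_distance(speed_m_s, stop_time_s):
    """Distance covered while decelerating uniformly from speed to rest."""
    return speed_m_s * stop_time_s / 2

d = stopping_distance(1.0, 0.48)
print(round(d, 2))  # 0.24 m
```

So even at walking-pace speeds the scrubber would travel only a couple of decimetres after a person steps into its path.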
The eXtreme Disinfection roBOT (XDBOT) (Figure 5b), developed by researchers at Nanyang Technological University (NTU), Singapore [24], is a semi-autonomous disinfection robot that quickly disinfects large surfaces. XDBOT has a 6-axis cobot arm that can reach and disinfect awkward areas such as under tables and beds, as well as light switches. The robot can also be wirelessly controlled, removing the need for cleaning staff to be in contact with surfaces. To navigate semi-autonomously, XDBOT uses lidar and HD cameras, while an operator controls the arm from up to 30 m away. To disinfect an area, the robot is equipped with an 8.5-litre tank and an electrostatically charged nozzle that ensures a wider and further area can be covered. The chemicals discharged from the nozzle carry a positive electrical charge and adhere to negatively charged surfaces; if a surface has already been sprayed, the disinfectant is repelled. Capable of running continuously for 4 h on a single charge, making it ideal for disinfecting public areas, XDBOT has been successfully tested in public areas on the NTU campus and is currently undergoing a pilot in a local public hospital [24].
As well as releasing the Lavender robot, Geek+ also developed Jasmin, a smart spray disinfection robot that uses liquid agents to execute swift, automated sterilisation. This robot can kill up to 99.99% of germs, depending on the type and concentration of disinfectant used. What sets this robot apart from other liquid disinfection robots is its ability to support various disinfection liquids, such as hydrogen peroxide, peracetic acid, and chlorine [104]. Similar to Lavender, Jasmin can perform around-the-clock, unmanned sterilisation operations, using SLAM navigation to operate in complex environments [25]. Furthermore, Jasmin uses 3D vision, radar, and human-presence sensing to detect people and objects while operating in these environments.
Fetch Robotics and Build with Robots have developed the first patented, autonomous disinfection robot built for large-scale facilities [26]. This robot is designed to protect employees and passengers not only from dangerous microorganisms, but also from harmful cleaning agents. Breezy One can autonomously decontaminate 100,000 square feet in one and a half hours, eliminating 99.9999% of pathogens [105]. Once an area has been disinfected, individuals can re-enter the space within 2 h with no risk of harmful residue. A common challenge in choosing a disinfectant is selecting one that is strong enough to kill harmful pathogens, yet safe enough for people to re-enter the building in a timely manner. Therefore, Breezy One uses an EPA-registered disinfectant that has been tested by nine government agencies. Furthermore, Breezy One has been shown to sanitise Albuquerque International Airport effectively and safely, carrying out nightly sanitising runs to ensure that the airport is clean and safe for everyone. The robot's paths, schedules, and frequencies can be changed remotely by the user [105].

COVID-19 Monitoring and Testing
One of the best ways to prevent contracting COVID-19 is to avoid exposure, which can be achieved by maintaining social distancing, wearing masks, and washing hands frequently. Nevertheless, exposure cannot always be avoided, and fever is one of the most common symptoms among people infected with COVID-19. Therefore, remotely monitoring public spaces for early signs of the disease, as well as ensuring that social distancing is maintained, could prove beneficial in protecting the public and taking pressure off healthcare workers. Furthermore, anyone exhibiting symptoms of infection must get tested. However, carrying out swab sampling for testing can be dangerous for healthcare staff, who face a high risk of infection due to close person-to-person contact [27]. To combat this, the use of nasopharyngeal and oropharyngeal swabbing robots can reduce the spread of infection and allow healthcare staff to focus on higher priority tasks. In fact, due to COVID-19, there has been an unprecedented effort to accomplish testing by means of robotics.

Monitoring
Remotely monitoring public places for social distance adherence, mask compliance, and signs of fever are all approaches that have been used to help protect the public from the spread of infection. For instance, Sathyamoorthy et al. [64] present a novel method for automatically detecting pairs of humans in a crowded scenario who are not adhering to the social distance constraint of at least 6 feet from each other. They introduce the COVID-robot, a robot equipped with an RGBD camera and 2D lidar to perform collision-free navigation in crowds (of both low and high densities) using a hybrid deep reinforcement learning method, as well as a thermal camera to flag individuals with higher than normal temperatures. The robot can calculate the distance between detected individuals in the camera's field of view, and when used indoors, it can be combined with CCTV cameras to improve performance, as a greater number of breaches can be detected. Once social distancing breaches have been detected, the COVID-robot navigates to the group and encourages compliance with social distancing norms by displaying an alert message on a mounted screen. If the individuals are non-compliant, the robot tracks and pursues them with further warnings. Although the concept of this study is quite novel, it does have a few limitations: the robot does not distinguish between strangers and people from the same household, and the current approach of issuing warnings by following individuals may be seen as a violation of privacy [64].
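The core pairwise check behind this kind of breach detection can be sketched in a few lines; in practice the positions would come from the RGBD pipeline, and the 6-foot threshold is converted to metres. All coordinates below are illustrative:

```python
# Minimal sketch of the pairwise social-distance check: given estimated 3D
# positions of detected people, flag every pair closer than 6 feet (~1.83 m).
import math

SIX_FEET_M = 1.83

def breaches(positions, threshold=SIX_FEET_M):
    """Return index pairs of people violating the distance constraint."""
    out = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) < threshold:
                out.append((i, j))
    return out

people = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.0), (5.0, 5.0, 0.0)]
print(breaches(people))  # [(0, 1)]
```

The check is quadratic in the number of detections, which is acceptable for the small groups visible in a single camera frame.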
In an effort to provide safe and effective virus protection, a China-based company, Ubtech Robotics [65], has introduced the Aimbot robot as an anti-epidemic solution. Its core features include non-contact temperature measurement, automatic disinfection, a public address system, and mask detection with an accuracy of over 99% [65]. This robot can scan the temperatures of 200 people a minute from up to nine feet away, while also reminding the public of social distancing. Aimbot can also spray 10 litres of disinfectant in 45 min, covering 11,000 square feet of space. This robot has been used not only in hospitals, but also in public transport, public buildings, and airports. Furthermore, the Aimbot has played a vital role in monitoring hospitals and schools in China; the robot takes a person's temperature and checks whether they are wearing their mask correctly in an effort to reduce cross-infection risks [66]. Ubtech has since expanded its robot team with the addition of the Cruzr robot. The Cruzr robot is being used not only as an initial contact point in quarantine and outpatient areas, but also for remote consultation with a medical professional [66]. Akin to the Aimbot, Cruzr can carry out mask detection, broadcast health recommendations, and vocalise reminders.
The Tsinghua Advanced Mechanism and Robotised Equipment Laboratory and the Yantai Tsingke+ Robot Joint Research Institute have developed SHUYU and SHUYUmini [67], rapid temperature screening systems suitable for use outdoors. These robots can detect drivers, passengers, and pedestrians using dual infrared cameras with 98% accuracy and a temperature accuracy of 0.2 °C [67]. The SHUYU robot (Figure 6a) has a two-DOF translational parallel manipulator with a closed-loop passive limb. It also has two high-accuracy thermometers installed at each end of the parallel manipulator, and ultrasound sensors to calculate the distance between itself and the vehicle. When a vehicle approaches, the robot identifies the total number of people in the car as well as their positions; it takes less than 20 s to detect a driver and 2 s for a passenger. Once detected, SHUYU guides the driver and passengers through temperature measurement using a voice broadcast system. The driver and passengers extend their wrists for measurement and the temperature is recorded.
When screening the temperature of pedestrians, a SHUYUmini (Figure 6b) is used. The SHUYUmini has a parallelogram mechanism with one DOF and a thermometer connected to the body of the robot. To identify the position of a pedestrian, the SHUYUmini uses three laser-ranging sensors. Similarly, pedestrians are required to extend their wrist for temperature measurement, but the robot can adjust the direction of the thermometer to find a comfortable measurement pose. Currently, the SHUYU technology has been used in the car parks of Yantai ETDA and Tsinghua University, and the SHUYUmini has been used in Peking Union Medical College Hospital, the First Affiliated Hospital of Tsinghua University, and the LeeShauKee Building of Tsinghua University [67].
Misty II (Figure 6c) is a similar contactless temperature screening robot developed by Misty Robotics in 2017. This commercial robot is fully automated and designed as an efficient approach to COVID-19 screening in the workplace [68]. The Misty II robot is a reliable, consistent screening management system that uses a thermal camera for increased accuracy (within 0.5 °C). Misty II works in the following manner: upon entering the building, the individual is asked to step into the robot's measurement zone, where they are asked some COVID-19 health screening questions. Then, the individual's temperature is taken and, based on the outcome, the robot either allows the person to proceed or gives further instructions. Records of each interaction are stored and made available to administration.
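The screening flow described above (questionnaire followed by a temperature check) can be sketched as a simple decision function. The 38.0 °C fever threshold and the way the ±0.5 °C camera tolerance is handled are illustrative assumptions, not details from the source:

```python
# Illustrative decision logic for a Misty II-style screening workflow.
# FEVER_THRESHOLD_C is an assumed cut-off; CAMERA_TOLERANCE_C reflects the
# quoted +/-0.5 degree C thermal-camera accuracy.

FEVER_THRESHOLD_C = 38.0
CAMERA_TOLERANCE_C = 0.5

def screen(answers_all_clear, measured_temp_c):
    """Return 'proceed' or 'further instructions' for one visitor."""
    if not answers_all_clear:
        return "further instructions"
    # Err on the side of caution: add the camera tolerance so borderline
    # readings are flagged for follow-up.
    if measured_temp_c + CAMERA_TOLERANCE_C >= FEVER_THRESHOLD_C:
        return "further instructions"
    return "proceed"

print(screen(True, 36.8))   # proceed
print(screen(True, 38.9))   # further instructions
print(screen(False, 36.5))  # further instructions
```

Adding the tolerance rather than subtracting it biases the system toward false alarms, which is the safer failure mode for infection screening.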
These monitoring robots are advantageous to use in public settings as they not only protect the public, but also potentially relieve pressure on healthcare staff by alerting individuals if they show early signs of the infection. They can free up healthcare workers for higher priority tasks, and save companies money by using fully automated solutions.

Testing
Since the outbreak of COVID-19, there has been an unprecedented demand for testing, which has led to a high risk of infection for frontline staff due to close contact. In response, many nasopharyngeal and oropharyngeal swabbing robots have been developed to cater for testing. For instance, Wang et al. [27] introduced a preliminary, low-cost miniature robot to assist with nasopharyngeal sampling (Figure 7a). The robot is 3D printed and has an active 2-degree-of-freedom (DOF) end-effector for actuating a swab and a generic 6-DOF passive arm for global positioning. It also has a specially designed detachable swab gripper which incorporates force-sensing ability using an optoelectronic sensor. To control the robot, a phone-based user interface is used to set current locations, direct motor control, and the depth of the linear stage, and provides an emergency stop feature. The performance of the robot was verified, and the force needed for the procedure quantified, by performing nasopharyngeal swabs on a commercial nasopharynx phantom and pig noses. Wang et al. [27] determined that the forces used were within the robot's maximum payload and the sensing range of the gripper. However, this study was preliminary (testing was carried out on phantom designs and pig noses), so more work must be conducted with real participants. LifeLine Robotics created the world's first commercial throat swabbing robot in only four weeks [28]. Rather than developing the robot from scratch, they used existing components that could be combined with new elements for a fast and reliable solution. For instance, this robot uses a conventional UR3 cobot arm fitted with a custom 3D-printed end-effector and a simple headrest for test subjects to lean into (Figure 7b). The robot works by first scanning a patient's ID card; it then prepares a sample kit consisting of a container with the patient's ID printed onto a label.
When the patient opens their mouth, the robot picks up a swab and identifies the right points in the patient's throat by using an artificially intelligent vision system. The swab is then placed into a jar and the robot screws on the lid. The entire process takes 7 min in total, with the swab sample taking 25 s. The robot has been designed to be gentle and consistent; by automating this process, the quality of the samples can be guaranteed as the exact procedure is repeated again and again, and the chain of infection can be broken as patients and healthcare workers are not at risk.
The Shenyang Institute of Automation of the Chinese Academy of Sciences and the First Affiliated Hospital of Guangzhou Medical University followed suit, developing a semi-automatic oropharyngeal swab robot (Figure 7c). The robot consists of a binocular endoscope, a serpentine robotic arm, wireless transmission equipment, and a terminal used for human-robot interaction. The effectiveness of the robot was investigated by applying different sampling forces to 20 participants. The success rate was determined by completion on the first attempt, and the quality was determined based on the threshold cycle for glyceraldehyde 3-phosphate dehydrogenase (GAPDH) [29]. A comparison study was made using manual sampling in 30 participants.
The sampling success rate was 95% in the first group of participants, with 97% of specimens qualifying under the threshold cycle criterion [29]. Although no congestion or injury was apparent, when the sampling force was above 40 g there was evidence of sore throats.
Compared with manual sampling, no differences were found in success rate, swab quality, or pathogen discovery. That being said, some issues still need attention, such as the shapes and sizes of patients' throats, gag reflexes, and patient fear, as these could affect the efficacy of the robot.
Seo et al. [30] propose a telerobotic system for swab sampling (Figure 7d) in order to minimise close person-to-person contact. This robot comprises a master and a slave system: the slave robot collects a swab sample from the upper respiratory system and is controlled at the master site, where the position and orientation for swab insertion are determined via camera. The slave robot has six servo motors with forearms and rods; the rods support a top plate that holds the swab insertion unit. A head and chin rest is placed at the robot's base in order to maintain distance between the user and the robot. The master system is composed of a positioning device and a swab insertion device. The insertion device is based on a rack-and-pinion structure; when the swab is moved, the recorded insertion length is transferred to the slave robot, which then moves the swab an equal distance. The master and slave systems are connected via a local area network.
The proposed system is yet to be validated through clinical trials, but experiments were carried out in the workplace using a head phantom model to examine accuracy and virtual target sampling [30]. They found that the robot can track the movement of the master device within an error of 3.4 mm and can successfully take a swab sample. One pitfall of this research is the 33 ms delay in data transmission, so an upgrade to teleconference codecs is required. However, an advantage of this system over an industrial serial robot arm is that it does not need large joint movements to determine the position and insertion motion of the swab.
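A toy sketch of the master-slave mirroring described above: the slave replays each recorded insertion length, and the tracking error is checked against the reported 3.4 mm bound. The simulated 0.2 mm lag and all values are illustrative, not measurements from the study:

```python
# Toy model of master-slave position mirroring for swab insertion.
# The slave's 0.2 mm lag is a made-up stand-in for real controller error.

class SlaveRobot:
    def __init__(self):
        self.insertion_mm = 0.0

    def move_to(self, target_mm):
        # A real controller has dynamics; here the slave simply lands
        # within a small simulated tracking error of the target.
        self.insertion_mm = target_mm - 0.2

def mirror(master_positions_mm, max_error_mm=3.4):
    """Replay master insertion lengths on the slave, checking the error bound."""
    slave = SlaveRobot()
    for target in master_positions_mm:
        slave.move_to(target)
        assert abs(slave.insertion_mm - target) <= max_error_mm
    return slave.insertion_mm

print(mirror([10.0, 25.0, 40.0]))  # 39.8
```

In the real system this loop runs over a local area network, which is where the reported 33 ms transmission delay enters.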

Logistics
An emerging application for autonomously navigating robots is hospital delivery. As many materials are transported in hospitals each day, such as medicine, medical supplies, laboratory samples, food, and linen, the healthcare sector is an ideal area for this application. Although solutions such as pneumatic tubes and automated guided vehicles (AGVs) have already been developed to accommodate some of these deliveries, they are not the most flexible solutions, as they are not easily modified [106]. For instance, there are issues regarding installation in older buildings, navigating through narrow, snake-like corridors, and high human footfall. Due to these complex environments, many medical materials are still delivered by staff. Although technical challenges remain to be overcome, the use of autonomous mobile robots would not only increase efficiency in the healthcare sector without exposing front-line hospital staff to the risk of disease, but would also reduce human error and allow staff to focus their attention on higher priority tasks.

Delivery
Nurses spend as much as 30% of their time away from patient bedside care tracking down medications, supplies, and lab results, as well as carrying out logistic duties [107]. As a result, there have been logistical developments to alleviate pressure on nurses and other staff. Furthermore, the use of robotics for delivery can minimise the spread of disease by requiring fewer healthcare staff to move from location to location in search of medical supplies.
HelpMate was one of the first mobile robots developed for hospitals, designed to deliver pharmacy supplies and patient records between hospital departments and nursing stations [31]. Not only was HelpMate a trackless robot courier, but it also supported fleets of autonomous robots. The robot used sensor-based motion planning algorithms, revolutionary at the time, as they addressed navigation in unknown and unstructured environments [31]. However, technology has since come a long way. For instance, Bačík et al. [32] developed a prototype AGV for hospital logistics, transporting medical supplies from the warehouse to clinics around the hospital. The Pathfinder (Figure 8a) is inspired by traditional AGVs, but is optimised for the requirements of working in a hospital.
It uses state-of-the-art simultaneous localisation and mapping technology, which creates a map and continually updates it. To explore the capabilities of the robot, preliminary on-site tests were carried out in a hospital [32]. The biggest challenge for the robot was navigating across the hospital's 11 floors, as it had to use a human-operated elevator to gain access to different floors. Developing this further, Aethon created the TUG robot (Figure 8b) to perform delivery and transportation for healthcare staff, giving them more time to focus on patient care [33]. TUG is an autonomous mobile robot that efficiently delivers medical supplies and reduces cost-per-delivery by up to 80%. The robot can be used by any department that needs materials transported, as a wide variety of carts can be rolled onto its lift for delivery, and materials can be secured in an integrated drawer with biometric access and pin-code security. Load-carrying modules are custom-designed for tasks such as delivering medications, blood or tissue specimens, trays of food to and from the kitchen, linen, and regulated waste. To use the robot, a member of staff loads the cart and touches a screen to send it to the correct destination. TUG stores a map of the hospital by importing CAD drawings of the building, and this is used to create an on-board map. It uses a scanning laser, infrared, and ultrasonic sensors to detect and model the environment in real time to maintain an accurate position and avoid obstacles [33]. TUG can also transmit instructions to the elevator, indicating the floor it needs, via a wireless network. Once the delivery has been completed, the robot returns to its charging dock. This system is akin to another robotic assistant, HOSPI [34], an autonomous hospital delivery robot manufactured by Panasonic that has been under development since 1998. In order to avoid obstacles, the robot is programmed with the hospital's map data.
This also enhances the robot's flexibility as new hospital routes may be planned in advance. HOSPI has a similar security protocol to the TUG as it is equipped with security features to prevent tampering and theft, so only staff with ID cards can access its contents.
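TUG's on-board map, built from imported CAD drawings, can be modelled as an occupancy grid for route planning. A minimal sketch using breadth-first search, with 0 marking a free cell and 1 an obstacle (the production planner used by TUG is not described in the source):

```python
# Shortest-path planning on an occupancy grid via breadth-first search.
# The grid, start, and goal below are illustrative.
from collections import deque

def plan(grid, start, goal):
    """Return a shortest 4-connected path from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # goal unreachable

floor = [[0, 0, 0],
         [1, 1, 0],
         [0, 0, 0]]
print(plan(floor, (0, 0), (2, 0)))  # route skirts the blocked middle row
```

The static plan is then combined, as the text describes, with laser, infrared, and ultrasonic sensing for real-time obstacle avoidance.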
A common challenge for hospitals is transporting meals in a timely manner. This is especially true in recent times, as all person-to-person contact must be kept to a minimum in order to mitigate the risk of spreading disease. Manually transporting meals can take between 10 and 30 min, sufficient time for a meal's temperature to fall below 60 °C; below 60 °C, bacteria can begin to grow and multiply [108]. In response, Carreira et al. [35] developed the i-MERC (Figure 8c), a prototype mobile robot aimed at increasing the quality of meal transportation in hospitals and healthcare centres. i-MERC can deliver personalised dishes to patients (data are available via a touch screen) and return the dishes to the washroom after mealtimes. To use the i-MERC, kitchen staff load dishes inside the robot's hot and cold compartments and then indicate the desired location for the robot to travel to. Once the i-MERC reaches the respective location, staff give the meals to the patients and then put the empty dishes back into the cold compartment. The robot then returns to the washroom.
One advantage the i-MERC has over other robots is its heating system, which keeps the temperature of meals above 60 °C at all times. If the temperature falls below 65 °C, the system is switched on, and if the temperature rises above 70 °C, the system is switched off [35]. Carreira et al. [35] used this physical prototype to simulate the service when presenting the i-MERC as part of a Master's thesis.
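The heating rule above is a classic hysteresis (bang-bang) controller: switch on below one threshold, off above another, and otherwise keep the previous state. A minimal sketch using the two published thresholds:

```python
# Hysteresis controller for the i-MERC heating system: on below 65 C,
# off above 70 C, unchanged inside the band.

def heater_state(temp_c, currently_on):
    """Return the heater's new on/off state given the current temperature."""
    if temp_c < 65.0:
        return True          # switch heating on
    if temp_c > 70.0:
        return False         # switch heating off
    return currently_on      # inside the band: keep previous state

print(heater_state(63.0, False))  # True
print(heater_state(71.5, True))   # False
print(heater_state(67.0, True))   # True (unchanged inside the band)
```

The 5 °C dead band prevents the heater from rapidly cycling on and off around a single set point, which would wear out the hardware.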
In an effort to replace heavier delivery platforms such as the Pathfinder, TUG, and i-MERC, there have been further developments in the mobile robot field. One example is the MiR100 mobile robot (Figure 9a) developed by Mobile Industrial Robots (MiR). The MiR100 is a safe, cost-effective mobile robot deployed at Zealand University Hospital in Denmark to make deliveries to and from the hospital's sterilisation centre [38]. The MiR100 has automated the transport of sterile equipment since June 2018; it carries loads of up to 100 kg and travels more than 10 km each week. With the use of this delivery robot, service has improved, storage space is minimised, and shortages have been prevented. At the sterilisation centre, a member of staff loads the MiR100's cart-top cabinets. The robot then travels from centre to centre, as well as to different stops within the hospital, where service assistants empty the carts. The MiR100 also politely warns staff when it is approaching. The robot is currently undergoing a pilot project for the delivery of equipment for planned operations, in anticipation of the hospital's expansion. The pilot has shown that operation plans can be programmed into the robot's daily program, ensuring that operating equipment arrives at the correct operating theatre at the right time [38]. Diligent Robotics, a start-up company, has developed Poli, a one-armed mobile robot that helps nurses with fetching tasks and has already been tested in several hospitals in Texas [36]. What makes this robot different from other logistical robots is that Poli is designed to work collaboratively alongside humans in human environments [36]. Diligent has since modified and adapted the design of the Poli robot to create a completely new robot called Moxi [109]. Moxi uses a Freight mobile base to help it navigate and avoid obstacles, and has a compliant arm and hand for manipulation (Figure 9b).
Moxi is socially intelligent with an expressive face and social awareness and it learns from humans by continually adapting to hospital workflows [37]. Moxi has been successfully piloted to carry out tasks such as gathering patient supplies and lab samples, distributing PPE, and delivering medications [37]. This helps staff spend more time with patients as well as provide more efficient routine care support.
Although developments in the mobile robot field have been beneficial, robots are still needed to deliver goods across a wider range of locations, especially during COVID-19 lockdowns. Unity Drive Innovation (UDI), a Chinese start-up, has been deploying fleets of self-driving vans to deliver food to communities in lockdown. The Hercules robot can also provide contactless delivery in order to reduce the risk of person-to-person contact. This unmanned vehicle uses lidar sensors, cameras, and three learning algorithms to drive itself, along with 1000 kg of food, across communities in eastern China [39]. The first algorithm uses a convolutional neural network to detect and classify objects, the second processes images to identify road signs and traffic lights, and the third matches point clouds and IMU data to a global map, allowing Hercules to self-localise [110]. Navigation can be quite difficult for Hercules due to the narrow streets of Shenzhen, double-parked cars, and motorcycles. In fact, human intervention was required in some cases that the vehicle did not know how to handle, such as traffic congestion and false detection of traffic lights at night [110]. Although the vehicle can drive at a higher speed, it is limited to 30 kph for safety reasons, and can drive for 100 km before needing to charge its lithium-ion battery, which takes 5 h [39]. In a similar vein, UNIDO's Investment and Technology Promotion Office (ITPO) and White Rhino Auto have joined forces to deploy unmanned vehicles at Wuhan's Guanggu Field Hospital [40]. Two White Rhino vehicles were used to transport medical supplies and deliver meals to healthcare staff and patients. This not only helps reduce exposure, but also reduces the workload of doctors and nurses [40]. The robot uses 3D lidar, deep learning, and rule-based algorithms for obstacle avoidance, and can detect the position of obstacles in a 360° range in real time.
The robot also has a security system with multiple lines of defence. For instance, the vehicles have a top speed of 25 kph to prevent injury, calculate collision risk in real time, and can take the required emergency measures when necessary to ensure safety [111]. Since then, White Rhino Auto has set an initiative to deploy 100 unmanned delivery vehicles in Shanghai over the next three years.

Testing and Sorting Blood Samples
Despite being the most commonly performed medical procedure in the world, diagnostic blood testing has variable success rates, as drawing blood depends not only on the physician's skill, but also on the patient's physiology [41]. Furthermore, the results of blood samples are generally produced in labour-intensive labs. These issues, on top of the unprecedented demand for same-day COVID-19 test results, have further strained healthcare staff and resources.
A potential solution to these challenges is a venipuncture robot presented by Balter et al. [41] that delivers end-to-end blood testing by not only drawing blood from patients, but also providing diagnostic results (Figure 10a). The proposed device can localise blood vessels and reconstruct them in 3D using near-infrared and ultrasound imaging together with image analysis. It also uses miniaturised robotics to place a needle in the centre of the indicated vein. To test the feasibility of the robot, the researchers investigated density centrifugation, fluorescence microscopy, and image processing. First, to simulate venous blood flow, a blood-mimicking fluid is pushed through phantom blood vessels. The robot then uses an electromagnetic mechanism to pick up a needle, which is inserted into the phantom blood vessel to extract 3 ml of the sample [41]. The blood sample is then pumped into the blood analyser, dispensed onto a centrifuge chip, and spun for 5 min. Once this has been completed, the area of the sample is measured and used to quantify the bead concentration. This robot has since been developed further (Figure 10b), resulting in the first-in-human evaluation of a hand-held automated venipuncture device for rapid venous blood draws [42]. The previously developed bench-top venipuncture robot proved too large, resulting in a lack of mobility, and its large number of DOFs made it difficult to use in a number of clinical scenarios. The most recent version is a miniaturised, hand-held device which can guide a needle into the blood vessel with the use of 2D ultrasound imaging. It consists of a needle manipulator with 2 DOF, an ultrasound sensor which detects the force at the needle tip, and an electromagnetic needle loader [42]. To assess the safety and accuracy of the device, the robot placed a needle tip into participants' vessels to draw blood.
Participants were divided into two groups: those with difficult venous access, and those without. The device demonstrated an overall success rate of 87% in those with difficult venous access and a 97% success rate for those without. Furthermore, no uncomfortable sensations, bruising or punctures occurred during the study.
Aalborg University Hospital, the largest hospital in Denmark, has up to 3000 blood samples arriving in its lab every day. Each sample must be tested and sorted, which is a time-consuming process. To relieve the pressure on healthcare staff and reduce the ergonomic strain that arises from day-to-day physical loading, the hospital has now automated this procedure [43]. Two KUKA robots (Figure 11a) were installed, one to open intelligent transport boxes and remove samples, and the other to sort the samples for further clinical analysis. The intelligent transport boxes have an integrated RFID data logger that not only tracks the location of the box, but also records the temperature inside it. This is to make sure that the temperature of the blood sample is 21 °C. The KUKA robots needed an average of 1.5 min per box, the equivalent of 40 boxes an hour.
Copenhagen University Hospital followed suit by employing two UR5 robots to optimise the handling and sorting of blood samples (Figure 11b). These collaborative robots can operate unfenced alongside healthcare staff, unlike traditional robots that are bolted down inside safety enclosures, as they have a built-in safety system that stops the UR robot arm if it encounters objects or people in its path [44]. The first UR robot picks up samples and places them in a barcode scanner. The second UR robot picks and places samples for centrifugation and analysis. Together, the robots can handle up to 3000 samples per day, about eight tubes per minute. By using these robots, the lab delivers over 90% of results within one hour despite a 20% increase in samples arriving for analysis [44].
Singapore has been using ABB's high-precision robots to automate steps in the COVID-19 sampling process (Figure 11c). This robot is designed for fast, repeatable and articulate point-to-point movements and excels at jobs that require high precision and reliability, making it well suited to COVID-19 sampling. Two sets of the Rapid Volume Enhancer (RAVE) robots can process up to 4000 samples a day while also minimising test contamination and the risk of infection for healthcare workers [45]. To bring the product to market quickly, ABB's RobotStudio simulation software was used to simulate and test the robot installation in a realistic 3D environment. Furthermore, YuMi, another of ABB's collaborative robots, is being used to support hospitals in serological testing for COVID-19. It is projected that the YuMi robot will be able to automate 77% of the operations needed to carry out testing and analysis of up to 450 samples an hour [46]. This is beneficial as it can relieve laboratory staff of clinical disorders such as inflamed tendons caused by the repetitive nature of serological tests [46]. YuMi works by moving the plate into position, drawing the cleaning solution from the reservoir and filling the plate. Then, the robot draws the solution back out of each well and discards it. This process is repeated three times and takes 3 min to complete.

Social Care
The worldwide quarantine measures put in place to restrict the spread of COVID-19, which require people to live in social isolation, have had both immediate and long-term effects on health [112]. Impaired social relationships due to lockdown and social isolation can lead to loneliness, which in turn can have a serious effect on an individual's physical, cognitive, and mental health [113]. Therefore, social robots can provide social interaction for those who are physically isolated from their friends and family, as well as monitor mental and emotional states and assist in everyday life. Furthermore, these robots can be used to promote physical activity for the elderly as well as playtime for children in social isolation.

Social Interaction
One of the best-known robots in this area is Pepper [47], designed by SoftBank Robotics for interaction and entertainment purposes. Pepper is capable of voice interaction and mood recognition and has been used as a social interaction robot for dementia patients in an acute hospital in Japan [114]. Pepper was programmed to carry out two 30 min programs designed for light physical activity and social activities such as singing. It was found that 45% of patients were happier after their social activity [114]. Another robot designed to mitigate isolation is Vector, a miniature companion robot developed by Anki and designed for the home [48]. Vector provides social connectivity, shares daily activities such as lunch and dinner, and takes part in enjoyable activities like dancing [115]. It can also answer questions, take photos, act as a timer and play card games. Vector has also been used as a personal assistant providing functional support. Powered by artificial intelligence, it can engage with its surroundings. Vector uses an HD camera to identify people and navigate through its space while avoiding obstacles [48]. It also has a four-microphone array for directional hearing, as well as touch sensors and an accelerometer, so it is aware when it is being moved or touched. Despite these positive results, neither Pepper nor Vector is designed for the manipulation of objects, so their functionality is limited.
In contrast, Lio, a mobile robot with a multi-functional arm, is considered an all-in-one platform for human interaction and personal care tasks [49]. This robot has already been deployed in healthcare facilities, where it autonomously assists healthcare staff and patients. Lio is used not only as a delivery robot, but also to engage patients in simple exercises. Lio can also serve as a reminder system, alerting patients about upcoming events, playing music, offering snacks and taking menu orders. The robot is continually updated and is currently being made more proactive. This will help the robot find patients around the facility, actively approaching them to determine whether an action has been carried out, e.g., taking medication or staying hydrated [49].
As older people are the group most vulnerable to COVID-19, avoidance of close personal contact with other people is recommended for them. However, as social isolation has been shown to negatively affect the body [113], social support would be beneficial. Fasola and Matarić [50] present a solution to this problem in the form of a socially assistive robot (SAR) (Figure 12a). This robot is used to engage older users in physical exercise using hands-off interaction strategies (speech, gestures, and facial expressions) to provide assistance with exercises. The robot consists of a biomimetic anthropomorphic platform made from a humanoid torso mounted on a mobile base. The torso has 19 DOF, including two arms with grippers, a neck, eyebrows, and an expressive mouth.
There are four different exercise games available to users, all variations on arm exercises: workout, sequence, imitation, and memory [50]. The workout game demonstrates an arm exercise which the user imitates, and the sequence game builds on the workout game by introducing more workouts that must be carried out in sequence. The imitation game reverses the roles and the user becomes the instructor, showing the robot what exercises to do; this encourages users to get creative. Lastly, the memory game further develops the sequence game with an ever-longer list of exercises, making it a competitive scoring game. In each game the robot monitors, instructs, evaluates and encourages users to perform exercises. The user responds to the robot via a wireless button control interface which allows the user to answer "yes" or "no" prompts, and a visual algorithm is used to recognise the user's arm gestures in real time. The researchers carried out a multi-session user study with older adults to evaluate the robot's overall effectiveness. Participants felt that the exercise system was enjoyable, useful and helpful and that the robot was a good companion with a social presence. All in all, it was found that participants rated the robot favourably in terms of social interaction and as an exercise partner [50]. Another robot designed to act as a companion for older adults is TIAGo (Figure 12b), developed by PAL Robotics [51]. This robot is designed to help older people with their daily routine, acting as an alarm for appointments and a reminder for medications, and suggesting suitable meals for a balanced diet [116]. Since its invention, it has been developed further and can monitor daily vitals (temperature, respiratory rate and heart rate) as well as locate everyday objects in a domestic environment such as keys, a wallet, glasses, or a phone [117]. A visual interface is used to help the user locate an object.
The user selects the object he or she wants to find and the robot displays the rooms in descending order of the probability of finding the object in each room. Hints with increasing levels of detail are given to the user instead of showing exactly where the object is. This is to support the maintenance of cognitive capabilities [117]. To improve these abilities further, nine different games were developed, ranging from number and letter games to guessing games and puzzles. TIAGo was deployed with elderly residents in Greece, the UK, and Poland for testing and evaluation. It was found that participants were pleased with the level of customisation and the interaction capabilities, and they felt that the robot was a good companion: "there was someone to talk to, waiting for them at home, and making the environment friendly" [117].
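The room-ranking step of the hint system reduces to sorting candidate rooms by the probability of containing the selected object. The following minimal sketch uses hypothetical probabilities and function names; it is an illustration of the idea, not TIAGo's actual implementation:

```python
# Minimal sketch: rank rooms by the probability of containing an object.
# The probability table is hypothetical; a deployed system would estimate
# these values from past observations of where each object was found.
def rank_rooms(object_name, room_probs):
    """Return rooms sorted in descending order of probability for the object."""
    probs = room_probs.get(object_name, {})
    return sorted(probs, key=probs.get, reverse=True)

room_probs = {
    "keys": {"hallway": 0.6, "kitchen": 0.25, "bedroom": 0.15},
    "glasses": {"bedroom": 0.5, "living room": 0.4, "kitchen": 0.1},
}

print(rank_rooms("keys", room_probs))  # ['hallway', 'kitchen', 'bedroom']
```

Presenting this ranked list (rather than the exact location) is what allows the system to give progressively more detailed hints while still exercising the user's memory.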

Development of Social Skills
Children are also a vulnerable group affected by the COVID-19 outbreak, as prolonged school closures and home confinement can have negative effects on their physical and mental health. A possible way to maintain engagement and learning during closures could be a socially assistive robot that helps develop communication skills. For instance, Scassellati et al. [52] conduct a one-month, home-based study for increasing social communication skills in children with autism spectrum disorder (ASD). Each day, children took part in social skill games conducted by an autonomous, socially assistive robot. The robot modelled social gaze behaviours like eye contact and sharing attention throughout the sessions. Six games targeting different social skills (social and emotional understanding, perspective-taking, ordering and sequencing) were played. To engage the child, the robot would first tell a story before playing three games which varied from day to day [52]. The robot was then able to adapt the difficulty of the game based on the child's history of performance in each skill set in order to keep children from disengaging. By the end of the study, it was reported that children increased their amount of eye contact, made more attempts to initiate communication and responded more to communication bids [52]. Furthermore, parents of the children reported an increase in social skill performance between their child and other people. Although this work was preliminary, it demonstrated the potential for robot intervention studies as well as the feasibility of continued education during times of social isolation.
Another robot that has been used to develop dyadic interactions is the Keepon robot. Keepon was designed to get children involved in playful interaction. Its simple appearance and predictable responses encourage children to develop interpersonal communication skills in a playful yet relaxed mood [53]. In fact, when studying the interaction between children and the robot, it was found that Keepon functioned as a pivot for interpersonal interactions between children and their carers [53]. Keepon has a CCD camera with a wide-angle lens, a microphone for a nose, and a body that deforms when touched or moved thanks to an internal gimbal. The body has four DOF (nodding, shaking, rocking, and bobbing), which allow it to carry out two types of action: attentive and emotive. The attentive action orients the robot towards a specific target in the environment, simulating eye contact and joint attention. The emotive action rocks the robot's body from side to side while expressing emotions like pleasure and excitement. Due to its simple action states, this robot is easy to understand [53]. The robot also has two different modes: automatic and manual. In the manual mode, an operator in another room controls the robot. Keepon alternates its gaze between user and toy, and when the user engages with it, whether through eye contact, touch, or speech, the robot expresses a positive emotion.
QTrobot was developed by LuxAI [54] to act as a teaching aid to help children with autism develop new skills (Figure 13b). This robot helps increase children's engagement through games, role-playing, and social narratives while also collaborating with users in order to keep them calm, concentrated, and motivated [118]. In fact, it has been shown to increase the attention and engagement of children by 27% as well as decrease disruptive behaviours by almost a third [54]. QTrobot has three different modes: tutor, reinforcer, and facilitator. As a tutor, the robot presents content and provides stimuli to keep children engaged. It then prompts the user to answer the question correctly. When QTrobot acts as a reinforcer, it focuses on keeping the user engaged in order to promote collaboration and motivation. Lastly, as a facilitator, the robot concentrates on building relationships between itself, the child, and the educator. The idea is that QTrobot can motivate the child to take part in activities by diverting their attention to the educator. To communicate with the user, QTrobot combines simple language and images on a tablet in order to help the user fully understand the content of the lesson. Furthermore, this robot can use emotions and gestures in each lesson to help develop dyadic abilities [118].
The iPal robot (Figure 13a) is another robot used as a teaching aid. This robot was designed to help bridge the communication gap between children and the people around them. It is engaging, interactive and predictable, making interactions simple and motivating [119]. iPal can conduct 20 lessons and activities, each designed to improve children's social skills while also reducing anxiety and minimising the risk of over-stimulation. These lessons range from one-on-one social interaction and imitation tasks for interpersonal coordination to fun games that teach different skills [119]. Furthermore, iPal can serve as a teacher's assistant, taking roll calls and further enhancing the teaching process. This robot can aid in the teaching process by presenting content in an engaging way, supporting the development of social skills and encouraging an interest in science subjects [55].

Telehealth
As many countries are facing the challenges of an ageing population, there is a need for remote monitoring and management of patients with deteriorating health, as well as declining mental and physical capabilities. For instance, daily vital measurements are still taken with handheld instruments even during the present pandemic. This is not only time-consuming, but can also expose healthcare workers to infection. Moreover, monitoring can provide reassurance to patients, especially in case of emergency, alerting staff if there has been an accident such as a fall. It could also be of great advantage on a larger scale, such as scanning the temperatures of a multitude of people and checking whether masks are worn correctly. This is where telehealth comes into play; the term telehealth has been defined as the "use of medical information that is exchanged from one site to another through electronic communication to improve a patient's health" [120]. This is especially important in the era of COVID-19 because service robots can be used for social interaction as well as to reduce the quantity of PPE required and the risk of frontline workers contracting COVID-19 [121].

Patient Monitoring
In a study carried out by Broadbent et al. [122], attitudes towards healthcare robots in a retirement village were investigated. Participants in this study (managers, caregivers, and retirement residents) felt that robots were better suited to taking patients' vital signs than humans. Results of the study illustrated that the participants believed robots could make better diagnoses when monitoring patients' daily vitals, as well as provide reassurance about the time and place to people who have Alzheimer's disease. Furthermore, the participants reported that robots could help nursing home residents maintain a sense of independence, not only by providing reassurance and monitoring, but also by alerting staff if help is needed or assistance with medical assessments is required. For instance, a robot could learn words such as "help" or "fire" and respond appropriately, or have a help function that responds to voice commands when the user needs assistance, as alarm buttons can be difficult to reach. That being said, 24-hour surveillance is not acceptable, as it is seen as an invasion of privacy unless it purely monitors and assesses movement without sending images; it would therefore perhaps be best to have robots check rooms if residents do not report for meals, or to alert staff when patients fall [122].
Mamun et al. [56] propose a remote patient physical-condition monitoring service module for the iWard robot (part of an intelligent robot swarm for attendance, recognition, cleaning, and delivery) that provides interchangeable service modules in hospitals. The iWard robot can monitor a patient's physical condition by taking the patient's vital signs and then processing and analysing them. The patient's information is then stored on a server that can be accessed by healthcare staff with suitable security clearance. In order to differentiate between patients, each one is given a unique ID. This is similar to Carebot, a robot system for healthcare environments developed by Ahn et al. [57]. This robot gathers the patient's personal information and acquires data about their current health; it asks patients health questions and helps them take their vital signs using a sensor manager system. The sensor manager system uses specific sensors to measure blood pressure, pulse, and blood oxygen level. Carebot then instructs the patient on how to take measurements and, when the process is complete, sends the measured data to a medical server.
Arif et al. [59] present a telemedical assistant designed for medical applications and for minimising human error in hospitals (Figure 14a). The robot performs a variety of tasks: it can record patients' vitals at regular intervals and store the information in a database, dispense medicine to patients according to a schedule in order to prevent incorrect dosages, inform staff if an emergency has occurred, and act as a virtual presence to communicate with physicians or the patient's family. The robot can work autonomously when dispensing medication or taking vitals by using a decision process. The robot repeatedly checks the database until it is time to give medicine to a patient, then looks up the patient's information to find out where they are located (bed number). Once the robot has arrived at the patient's location, it takes the patient's vitals and uses a medicine rack with an LED to indicate which medicine needs to be taken. If the patient is unable to take the medicine, a nurse must be present. To take the patient's vitals, the robot uses image processing and an optical character recognition algorithm instead of individual sensors in order to keep the cost of the prototype down [59]. The robot can also enter a telepresence mode if a doctor needs to check on the patient remotely. The physician can control the movement of the robot, speak with the patient, and access their information in the database. This is particularly useful in emergencies, as nurses can be advised on the right course of treatment when no doctors are physically present. Another telepresence robot, the Temi robot [60], allows families and friends to communicate with a loved one remotely. Furthermore, Temi allows healthcare workers to treat patients remotely. While not originally designed for healthcare, the Temi robot (Figure 14b) has since been fitted with a thermal camera as well as a thermometer.
It can carry out collision-free, autonomous navigation with the aid of lidar, cameras, and proximity sensors. Temi also uses speech recognition and has user detection and following, which allows the robot to interact with patients and provide remote video consultations with healthcare professionals [60].
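The medication-dispensing decision process described above for the telemedical assistant [59] amounts to a scheduling pass over a prescription database. The sketch below is a minimal illustration of that workflow under stated assumptions: every class and method name (`Prescription`, `run_cycle`, `light_rack_led`, and so on) is a hypothetical placeholder, not the authors' code, and the real robot reads vitals via image processing and OCR rather than dedicated sensors:

```python
# Hedged sketch of the telemedical assistant's autonomous dispensing cycle.
# All names here are hypothetical placeholders for the robot's subsystems.

class Prescription:
    def __init__(self, patient_id, bed, medicine, due):
        self.patient_id, self.bed, self.medicine, self.due = patient_id, bed, medicine, due
        self.given = False

def run_cycle(prescriptions, robot, now):
    """One pass over the schedule: visit each patient whose dose is due."""
    for rx in prescriptions:
        if rx.given or rx.due > now:
            continue                        # keep polling until a dose is due
        robot.navigate_to(rx.bed)           # locate the patient by bed number
        robot.record_vitals(rx.patient_id)  # vitals read via camera + OCR
        robot.light_rack_led(rx.medicine)   # LED marks the correct medicine
        rx.given = True

class LogRobot:
    """Stand-in robot that just logs the actions it would perform."""
    def __init__(self): self.log = []
    def navigate_to(self, bed): self.log.append(("goto", bed))
    def record_vitals(self, pid): self.log.append(("vitals", pid))
    def light_rack_led(self, med): self.log.append(("led", med))

robot = LogRobot()
schedule = [Prescription("p1", bed=3, medicine="aspirin", due=8),
            Prescription("p2", bed=5, medicine="insulin", due=12)]
run_cycle(schedule, robot, now=9)   # only p1's dose is due at t=9
print(robot.log)                    # [('goto', 3), ('vitals', 'p1'), ('led', 'aspirin')]
```

In the actual system this cycle would run continuously against the hospital database, with the nurse-alert and telepresence paths layered on top.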
PAL Robotics has been exploring ways to combat COVID-19 through collaboration and adaptation. Similar to the Temi robot, PAL Robotics' ARI (Figure 14c) was previously designed for another use: multi-modal expressive gestures and behaviours [61]. ARI uses deep learning for natural language processing, face recognition and reinforcement learning to learn from each interaction, adapting its behaviour to each user. It also has a visual SLAM system for indoor environments. The combination of ORB-SLAM and ARI's front-torso RGBD camera allows it to detect features and perform loop closure in order to update its map of the environment. However, ARI is now being adapted to interact with patients and ask them questions regarding COVID-19, such as their symptoms and close contacts [123]. Furthermore, plans are being made to process patients' medical data and measure people's temperature with a thermal infrared camera [124]. The goal would be to use ARI in various use cases such as welcoming patients, providing information on consultations, aiding with the completion of health forms, and acting as a guide [123]. ARI can also be used as a telepresence robot, allowing caregivers to interact with and support patients without physical contact.
Yang et al. [62] develop the idea of telepresence further with the proposal of a teleoperation system (Figure 14d) for remote care in an isolation ward, in order to reduce the risk of contact between patients and healthcare staff. The robot consists of two systems: teleoperation and telepresence. The teleoperation system consists of a wearable motion detection device that captures the motion data of the healthcare worker and controls the robot arm remotely. Furthermore, data gloves are worn to record finger motions and teleoperate the robot's grippers. The telepresence system comprises a tablet mounted on the front of the robot. This is used for remote daily medical check-ups, along with a voice wake-up function to facilitate patient operation and a deep neural network to monitor the patients' emotional states. The robot can conduct remote auscultation using a Doppler ultrasound stethoscope attached to the end of its arm, and it can also achieve remote object delivery using a two-finger gripper [62]. This allows the robot to grip medicine and deliver it to the patient. Furthermore, by attaching a stylus pen to the end of the gripper, the robot can operate medical instruments remotely.
However, the iWard robot system has a distinct advantage over the aforementioned systems, as it can also be used for fall detection. For instance, while carrying out routine checks in a hospital environment, the iWard robot may come across a patient who has fallen out of bed. In this situation, the robot analyses the sensor data and attempts to detect the patient on the floor. The robot then alerts the nurses' station so that action can be taken. To detect a patient who has fallen, a three-layer artificial neural network is trained using back-propagation supervised learning. The neural network is presented with 6700 images of patients on the floor (positive) and of non-human objects and other persons (negative), which are used as training samples (Figure 15). After the neural network was trained, a new set of 4000 images was presented as test data, and the success rate for identifying patients lying on the floor was 95.5%. These results therefore demonstrate that the robot can be used for patient monitoring and for alerting staff to adverse events such as trips and falls.
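The training scheme described above, a three-layer network fitted by back-propagation on labelled positive/negative examples, can be illustrated with a toy sketch. The synthetic two-dimensional "features" below stand in for the 6700 labelled images; this is a didactic example of the technique, not the iWard implementation:

```python
import numpy as np

# Toy three-layer network (input -> hidden -> output) trained with
# back-propagation, the same supervised scheme as the fall detector.
rng = np.random.default_rng(0)

# Synthetic features: class 1 ("patient on floor") clusters away from class 0.
X = np.vstack([rng.normal(0.0, 0.5, (200, 2)), rng.normal(2.0, 0.5, (200, 2))])
y = np.hstack([np.zeros(200), np.ones(200)]).reshape(-1, 1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)   # input -> hidden weights
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)   # hidden -> output weights

lr = 0.5
for _ in range(500):                        # plain batch gradient descent
    h = sigmoid(X @ W1 + b1)                # hidden activations
    p = sigmoid(h @ W2 + b2)                # predicted "fall" probability
    d2 = (p - y) / len(X)                   # output error (cross-entropy grad)
    d1 = (d2 @ W2.T) * h * (1 - h)          # error back-propagated to hidden
    W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(0)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(0)

acc = ((p > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {acc:.3f}")      # well above chance on separable data
```

The real system faces a far harder input space (raw images of cluttered hospital rooms), which is why it needs thousands of labelled examples and a held-out test set to report its 95.5% success rate.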
Wei et al. [63] propose a diagnostic robot for COVID-19. The robot carries out a consultation with a patient, during which it can detect coughing events. First, the robot measures the patient's temperature via infrared imaging, and then a typical doctor consultation takes place, posing questions such as "Do you have any cold or shown fever symptoms in the last 14 days? Have you been to the public areas of high risk in the last 14 days?" [63]. During the consultation, the robot records the dialogue. If no cough is detected during the visit, the patient is asked to cough naturally. The cough detection is trained using a multi-layer convolutional neural network which extracts cough features. This proposed model achieved an accuracy of 98.8% when detecting coughs. Then, if a cough is detected, a classification network trained to identify COVID-19 among other respiratory diseases is applied; this model achieved an accuracy of 76% [63]. Although this work is preliminary, future work will focus on tracing people at high risk of COVID-19, potentially reducing the strain on public resources.
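The core idea behind convolutional cough detection, a kernel slid over the audio signal that responds strongly to cough-like bursts, can be shown with a single hand-picked 1D kernel. This is a didactic stand-in (hypothetical kernel, threshold, and signal), not the learned multi-layer network of Wei et al. [63]:

```python
import numpy as np

# A 1-D convolution slides a burst-shaped kernel over an audio energy
# envelope; a strong response flags a cough-like transient. In a real CNN
# the kernels are learned from labelled recordings rather than hand-picked.
def conv1d(signal, kernel):
    n = len(signal) - len(kernel) + 1
    return np.array([signal[i:i + len(kernel)] @ kernel for i in range(n)])

kernel = np.array([-1.0, 2.0, 2.0, -1.0])   # responds to short energy bursts
quiet = np.full(16, 0.1)                    # steady background level
cough = quiet.copy(); cough[7:9] = 1.0      # brief high-energy burst

def has_cough(envelope, threshold=1.5):
    return bool(conv1d(envelope, kernel).max() > threshold)

print(has_cough(quiet), has_cough(cough))   # False True
```

Stacking many such learned filters, with nonlinearities and pooling between layers, is what lets the full network separate coughs from speech and background noise.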

Remote Surgery
Surgical robots, also known as collaborative robots (cobots), are sophisticated robots used to assist surgeons or medical experts in performing surgeries [125]. Using these types of service robots in the medical sector has improved the safety and quality of health management systems considerably in comparison to manual, people-led systems [126]. Robotic surgeries in particular can be advantageous as they not only protect both the patient and healthcare workers from exposure to COVID-19, but have also shown decreased recovery times and shorter hospital stays [127]. Nowadays, robotic surgery is more precise due to the small size and flexibility of robotic instruments, which leads to significantly less pain and a lower risk of infection, as only small incisions are made. This not only makes surgical care available to patients who would otherwise go untreated, but also reduces the cost and environmental footprint by eliminating unnecessary patient and surgeon travel [128].
One of the first available surgical robots was the DaVinci surgical system (Figure 16a). This robot is used for minimally invasive surgery and it consists of three different components: a vision system, surgical arm cart, and a surgeon console. The surgeon uses the surgeon console to perform the surgery by using hand and foot controls and a 3D visualiser [69]. The robot has four arms, a central arm which holds a telescope and three other arms which hold interchangeable instruments. The arms are quite large and obtrusive which has been deemed an issue in operating rooms as they frequently lead to collisions [126]. Furthermore, their large size makes it difficult to move from room to room. In 1995, Zeus was introduced as a minimally invasive surgical robot (Figure 16b). Since then, this robot has been approved for cardiac, abdominal, gynecology and urology surgeries with a surgeon present [70]. In fact, studies [129][130][131] have shown that the use of a laparoscopic tool such as this has significantly reduced pain in patients after surgery as well as hospital stay duration due to quicker recovery times. Zeus has three arms which are placed at a table. One of the arms holds an automated endoscopic system for optimal positioning (AESOP), which shows the interior of the operating field. The other two arms hold surgical instruments; a variety of instruments can be connected to the robot's arms, allowing the surgeon to use graspers, scissors, and hooks [70]. The surgeon controls the robotic arms via a control console and polarised glasses.
Zeus paved the way for today's telepresent surgery as it was the first surgical robot system to be configured to broadcast over networks, allowing for remote telepresence surgery [132]. Operation Lindbergh was the first ever remote, transatlantic human telesurgery, conducted on a Zeus robot with over 6000 km of distance between patient and surgeon. Despite a 'dry run' on a set of animal experiments, there were multiple latency and video quality issues. To overcome these issues, the latency was counterbalanced with a delay in the other direction, and a higher-quality video codec was used with a greater allocation of bandwidth for video [128]. The cholecystectomy procedure was successful and the patient was discharged from the hospital two days later. It was this telesurgery that made the first step toward making remote telemanipulation more generally available [128]. Since then, the DaVinci robot system has been upgraded and has successfully carried out six remote porcine pyeloplasty surgeries from Ontario to Halifax, over 1700 km away [133].
In the last decade, more funding has been invested in telehealth for routine check-ups and surgery. Telehealth has now advanced to the point that it is available not only in rural areas, but also in orbit. This was demonstrated in NASA's NEEMO missions and in the Space Shuttle and parabolic-flight surgery experiments [134]. In 2004, five tests were performed: ultrasound examination of abdominal organs, ultrasound-guided abscess drainage, repair of vascular injury, cystoscopy, and renal stone removal. Due to latency, it took 10 min to tie a single knot [134]. Telerobots have also been used to carry out ultrasounds (Figure 17). This is advantageous as it mitigates the exposure of high-risk patients to COVID-19. It works by allowing a sonographer to perform diagnostic ultrasounds remotely using the MELODY system. The MELODY system consists of a robotic arm that manipulates an ultrasound probe, and a control box which allows the sonographer to control the ultrasound probe remotely. This technology allowed sonographers to successfully carry out an ultrasound 605 km away from the patient [71]. A videoconferencing system was used to allow the sonographer and patient to communicate with one another. A similar study was carried out by [72], who used a robotic tele-echography system to conduct a remote diagnosis of a patient for pneumonia 700 km away. Examinations were successfully performed on the patient's lungs, heart, and blood vessels. However, an issue with remote surgeries, diagnoses, and check-ups can be the quality of imagery. Due to limited bandwidth, images were not as sharp as desired [135,136]. As human life can be at stake, issues related to safety and the detection of errors are of key importance; future studies must therefore be carried out on 5G networks in order to obtain low-delay, high-speed and stable networks.

Advantages and Challenges
There are both advantages and challenges associated with robots in the areas of disease prevention and management (sterilisation and disinfection), logistics, and telehealth and social care. Service robots used for disease prevention and management are highly beneficial to society and public health. These robots can work around the clock to disinfect areas, relieving healthcare staff and reducing person-to-person contact in doing so. This is especially useful when conducting COVID-19 tests, as the repetitive nature of the work can be exhausting for healthcare staff as well as dangerous to their safety. Nevertheless, challenges exist in this domain. For instance, UVC disinfection has its limitations: areas that are difficult to reach are not always successfully disinfected, which can be detrimental if the virus gets a chance to spread. Furthermore, UVC is harmful to humans and can cause the degeneration of cells, tissue and blood vessels. Hydrogen peroxide is also dangerous, as it causes lipid peroxidation, irritation of the gastrointestinal tract and foaming of the mouth [137]. Rooms should not be entered until the measurable concentration of hydrogen peroxide is less than 1 ppm.
In logistics, especially for service robots that can manipulate objects, there exists an ethical risk to society: if robots physically evolve to have the same dexterity as humans, do they pose a risk of job loss, given that a large number of jobs could then be performed by robots [138]? Although this is not something to worry about for the next two decades, it is a cause for concern for some people. That being said, two limitations temper this concern. First, although robots used for logistics in hospitals are helpful in the transport of medical supplies, they are limited by their speed and by the size of the hospital; rather than conducting all logistics tasks, these robots will coexist with other solutions that offer high transportation speeds, such as tube mail [139], and therefore cannot entirely replace manual labour. Second, service robots used for logistics are quite expensive due to the integration of sophisticated sensors such as LiDAR. Furthermore, these robots have difficulty manoeuvring in busy streets and narrow roads, so they need to be under human supervision [140]. In fact, many of these robots are only suitable for corridors and not for more complex environments such as areas with patients or surgical wards. Therefore, before these types of service robots are developed further, a move towards lower-cost sensor alternatives should be considered in order to make them more accessible, along with further research into machine learning. Taking these challenges into account, there are still many advantages to logistical robots. For instance, unlike humans, these robots can not only work 24/7 carrying out logistical tasks, but they can also reduce human labour costs, avoid cross-infection, and increase productivity and efficiency.
In the telehealth and social care domain, two main issues stand out. First, there can be an issue with the privacy of users' data with regard to telepresence robots and personal assistant robots. There needs to be a strict protocol concerning where the data are stored, who can access them, and whether the data are recorded with consent [141]. Although personal robots are generally used for entertainment purposes such as music and dance, there is a potential ethical risk of personal robots exchanging personal data with private companies [142]. For instance, recordings of users' voices, faces, and body images may not be used appropriately [140]. Second, and specific to personal robots, is their anthropomorphic nature. Robots with human- or animal-like features are designed with the goal of being trustworthy and sociable and of encouraging users to bond with them [143]. However, this is a double-edged sword: the more anthropomorphic the robot, the more likely the user is to start expecting real empathetic care, which the robot cannot provide [144]. A solution is to design humanoid robots with a visible demarcation between human and robotic features, for instance by giving the robot wheels instead of legs [12]. All things considered, personal assistant robots with support functions have been shown not only to assist older people but also to improve users' quality of life, encouraging beneficial behaviour through physical and mental exercises [140,145]. With regard to telepresence robots, one advantage is the timely treatment of patients with urgent needs, which can lead to shorter hospital stays [139]. Furthermore, these robots allow friends and family to communicate with the user, which is beneficial, especially if the user is in social isolation. Lastly, many of these robots are equipped with various sensors (thermal, camera, microphone, infrared, etc.), making them multi-functional; robots can check temperatures, analyse coughs, monitor emotions, and conduct questionnaires.

Future Areas of Research
In this section, we look at potential avenues of future work in the healthcare sector. These avenues are divided into what we believe are short, medium, and long-term technology directions.
In the short term, there is room for far-UVC to be examined further, as more experiments need to be carried out to investigate its effects on humans. Furthermore, by combining far-UVC with general-practice cleaning robots, the process of sterilisation could potentially become more autonomous. This could prove ideal for disinfecting areas that require frequent cleaning in hospitals while also mitigating the spread of the Coronavirus by reducing the spread of germs and pathogens. Existing solutions in this area require more training and greater sensing abilities; they need to be able to switch off within microseconds if a secured room is breached, carry out regular cleaning routines without human intervention, and connect to hospital facilities via Wi-Fi. However, training and protocol adherence may be a barrier to implementation.
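The switch-off requirement described above amounts to a safety interlock. The sketch below is an assumed design, not an existing product: a real system would use a safety-rated hardware circuit, but the logic it must enforce can be stated in a few lines, with the lamp permitted to run only while every interlock condition holds.

```python
# Illustrative far-UVC lamp interlock (assumed design). The lamp may only
# run while cleaning is requested AND the room is sealed AND no human
# presence is detected; any breach immediately drives the output False,
# which a real controller would use to cut lamp power.

def lamp_should_be_on(cleaning_requested, door_closed, presence_clear):
    """Return True only when every safety condition for UVC emission holds."""
    return cleaning_requested and door_closed and presence_clear
```

Because the output is recomputed on every sensor update, opening the door or detecting a person flips the result to False on the very next control cycle, which is what allows a fast electronic cut-off.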
With regard to medium-term goals, an area worth mentioning is robot manipulation and grasping (Figure 18) for logistics. Specifically, advances in grasping in cluttered environments and in the manipulation of deformable objects would greatly expand what could be transported. This is a difficult task, however, as deformable objects such as rope, cloth, or even muscle tissue are malleable and change shape when handled, making them impossible to model perfectly. Nevertheless, deformable objects still need to be manipulated, and to achieve this we need to improve a robot's ability to handle such objects. Developing robot manipulation and grasping further could also allow medical robots to perform difficult tasks in surgery, as well as make hospital beds, handle patients' clothes, and prepare food. According to McConachie et al. [146], this challenge could be overcome by representing the object and task in terms of distance constraints and formulating control and planning methods based on this representation.
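The distance-constraint idea can be sketched in miniature. In the toy example below, a rope is represented as a chain of 2D points, and a candidate manipulation state is checked against the constraint that adjacent points never stretch beyond an assumed material limit; the function name, geometry, and limit are illustrative stand-ins, not the formulation of McConachie et al. [146].

```python
import math

# Illustrative sketch of a distance-constraint check for a deformable
# object. A rope is modelled as a chain of 2D points; a manipulation
# state is invalid if any adjacent pair of points is stretched further
# apart than the material allows. All numbers are assumptions.

def violates_stretch_limit(points, max_segment_length):
    """Return True if any adjacent pair of rope points exceeds the
    allowed segment length."""
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        if math.hypot(x2 - x1, y2 - y1) > max_segment_length:
            return True
    return False
```

A planner built on this representation would reject motions that drive the check to True, which sidesteps the need for a perfect physical model of how the rope deforms.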
Lastly, stepping back from specific robotic applications to ideas that could improve existing ones, there is potential to increase the sophistication of service robotic frameworks in the long term, specifically in the areas of patient monitoring and social care. For instance, with the advent of high-end continuous monitoring and cheaper wearables, solutions could exist on a patient's body, connected to a patient monitor or e-pharmacy that raises alerts when there is an incident. Healthcare staff could also register a patient who requires medication on a dedicated health app; this app would alert a robotic pharmacy, saving healthcare staff time, reducing errors, and benefiting stock management. Furthermore, social interaction is a major challenge for robotics due to its significant perceptual demands: if a response to a social cue or body language is delayed, it can be interpreted as a sign of uncertainty or mistrust. Therefore, it can be argued that robots need to better understand people and their intentions, as well as predict their movement, especially when robots are added to the workplace. As safety is vital, it is critical that robots can understand what is in their environment and react accordingly. One way of doing this would be to study body language, because as humans we communicate a great deal through body movement, posture, and facial expression. There have been substantial advances in machine perception (Figure 19); Carnegie Mellon University is currently looking to capture these subtle, nonverbal cues by creating dynamic 3D reconstructions of people, their body poses, and their motions from images [147]. However, to capture the dynamic nature of social interaction, next-generation systems will need to fuse prediction models with perceptual systems [148].
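The wearable-to-pharmacy alerting idea described above can be sketched as a threshold check on vital-sign readings. The field names and normal ranges below are assumptions for illustration, not clinical guidance or the interface of any deployed system.

```python
# Illustrative sketch of wearable vital-sign alerting (assumed schema and
# thresholds). Each reading is compared against an assumed normal range;
# out-of-range vitals would be forwarded to the health app / e-pharmacy.

NORMAL_RANGES = {
    "heart_rate_bpm": (50, 110),
    "temperature_c": (35.5, 38.0),
    "spo2_percent": (92, 100),
}

def check_vitals(reading):
    """Return the list of vitals in `reading` that fall outside their
    assumed normal range and should therefore trigger an alert."""
    alerts = []
    for vital, (low, high) in NORMAL_RANGES.items():
        value = reading.get(vital)
        if value is not None and not (low <= value <= high):
            alerts.append(vital)
    return alerts
```

In a fuller design, each alert record would carry the patient identifier and timestamp so the robotic pharmacy or monitoring staff can act on it; missing fields are simply skipped rather than treated as incidents.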
Machine learning is another large area requiring focus, as robots need decision-making capabilities and reliable sensing, and must adapt to their environments and learn from their actions in order to improve over time. By adopting this approach, robots may be able to carry out more complex tasks at higher rates, as well as use low-cost sensor alternatives in the future. For instance, in the case of a medical emergency, intelligent robotic systems could assist an emergency medical technician (EMT) with tasks such as inserting breathing tubes or intravenous lines, as well as aid in preparing the patient for transport to the hospital. This could significantly improve an EMT's ability to provide urgent care. The main challenge here is adapting to circumstances that have not been observed before, so it is important to scope the task correctly and be specific about what the robot should be able to do.

Conclusions
One significant aspect of this research is the close collaboration between academia, the healthcare provider industry, and the end-user in the form of healthcare workers and their day-to-day environments. This level of collaboration provides significant advantages: it ensures that research is not merely theoretical but relevant to today's challenges, with the engaged partners providing the context and guidance that keep the academic work aligned with the real-world demands the current pandemic creates.
In this paper, the concept of service robotics is defined and the challenges of acceptance and reliability are discussed. Then, a summary of state-of-the-art applications in healthcare systems is given, highlighting the emerging focal areas of disease prevention and management (sterilisation and disinfection), logistics, and telehealth and social care. In the disease prevention and management section, specific service robots are identified that sterilise hospital environments using UVC light and hydrogen peroxide, alongside human support robots that sanitise high-contact points, automated solutions for cleaning walls and floors, and robot-assisted nasopharyngeal swabbing. With regard to logistics, we discuss service robots used for the delivery of medical supplies, laboratory samples, and food, as well as robots that can test and sort blood samples. Lastly, in the telehealth and social care section, we describe systems that act as companions for people in isolation, help develop social skills, take patients' vital signs, detect falls, and act as telemedical assistants/surgeons. We then expand on this by investigating mass monitoring, that is, observing large crowds for breaches of social distancing and for high temperatures to prevent the spread of disease. Following on from this, the current advantages and challenges of these different types of service robots are explored. To conclude, future areas of research are discussed, specifically focusing on short-, medium-, and long-term technology directions in an effort to emphasise the need to improve current service robotics applications.
Funding: This publication has emanated from research supported in part by a research grant from Science Foundation Ireland (SFI) under Grant Number SFI 16/RC/3918, co-funded by the European Regional Development Fund.
Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.

Conflicts of Interest:
The authors declare no conflict of interest.