Robotic Nursing Assistant Applications and Human Subject Tests through Patient Sitter and Patient Walker Tasks

This study presents the implementation of basic nursing tasks and human subject tests with a mobile robotic platform (PR2) for hospital patients. The primary goal of this study is to define the requirements for a robotic nursing assistant platform. The overall designed application scenario consists of a PR2 robotic platform, a human subject as the patient, and a tablet for patient–robot communication. The PR2 robot understands the patient's request and performs the requested task through automated action steps. Two task categories containing three tasks are defined: patient sitter tasks, including object fetching and temperature measurement, and a patient walker task, in which the robot supports the patient while they are using the walker. For this designed scenario and these tasks, human subject tests are performed with 27 volunteers in the Assistive Robotics Laboratory at the University of Texas at Arlington Research Institute (UTARI). Results and observations from human subject tests are provided. These activities are part of a larger effort to establish adaptive robotic nursing assistants (ARNA) for physical tasks in hospital environments.


Introduction
Patients with disabilities and reduced mobility often require one-to-one assistance to manage their daily activities. Due to the increasing number of patients, nurses are unable to offer enough care and attention to every patient [1]. By using robotic assistants for nursing tasks, some of the nurses' time can be freed so that they can prioritize patients with severe health conditions. In the literature, various robotic systems have been developed to help patients with activities of daily living without needing much help from others.
Devices such as wheelchairs are specifically designed for mobility, and they offer limited support in performing everyday tasks. Being confined to a wheelchair most of the time, people with disabilities often face difficulty in performing their everyday tasks. There are various studies in the literature in which manually controlled robotic manipulators are used to help disabled people with their everyday activities, some of which are worth mentioning here. A joystick-controlled manipulator robot [2] is presented in the literature that is specifically designed to help people with eating. "Handy 1" [3], a rehabilitation robot, uses grippers to find a proper gripping point to hold objects such as plastic bottles and cups. Another study conducted in [21] uses markings on objects to fetch household items such as mugs, books, pencils, and toothbrushes. A similar technique is used in our current project to fetch objects: the objects used in our project are marked with unique AR tags, and the "Personal Robot 2" (PR2) uses the AR tags to identify and fetch them in this study.
Fetching large objects can also be challenging, since such objects cannot be held in a single gripper and thus require additional help. This problem can be overcome by using additional robots to fetch objects instead of just one. For instance, Pettinaro et al. introduced a study in which several tiny "S-Bots" [22] are used to fetch large objects that cannot be held by a single gripper. Although it is a good solution, using several robots is not feasible in a hospital environment. In our project, we use two grippers to hold objects that cannot be held with one, such as when fetching a patient walker, as discussed in the following sections.
Patient walkers are widely used in hospitals to support patients while walking. They increase mobility and allow patients to move freely, yet in some cases the patient requires additional assistance from nurses or caregivers when using a walker. Thus, our motivation is to develop algorithms for robotic platforms to assist the patient with the walker equipment. "XR4000", a walker robot [23] with an inbuilt walker, assists elderly people in walking to a destination autonomously using a pre-defined map template. The PAMM (Personal Aid for Mobility and Monitoring) robot discussed in [24] adds functionalities such as obstacle avoidance and navigation guidance to existing walkers. While the walker is in use, the robot monitors the health condition of the user and informs caregivers if an emergency situation is detected.
Based on their surroundings, people tend to walk at different speeds. To assist the user in such situations, robotic platforms have to change their speed with respect to the user's actions. An omnidirectional moving robot discussed in [25,26] uses information from various sensors, such as force, laser, and tilt sensors, to predict the user's actions. Based on the prediction, the robot adjusts its speed, allowing users to move at a variable speed depending on the situation. The PAM-AID (Personal Adaptive Mobility) robot discussed in [27] detects the surroundings and delivers that information to the user, helping blind people to navigate and interact with their environment. A similar technique is used in our project to assist patients with a walker. The PR2 platform used in this study supports users and prevents them from falling, similar to the robot discussed in [28].
Vital signs such as heart rate, temperature, and blood pressure help doctors to understand the patient's health condition and to decide on the treatments that should be given. Hence, reliable and error-free measurement recording is crucial, especially in situations such as measuring the patient's heartbeat [29] during surgeries and measuring the activity [30] of the patient during rehabilitation. Among the vital signs discussed above, temperature measurement is a widely and commonly used method to monitor the patient's health condition. Several temperature measurement techniques have been proposed in earlier studies to monitor the patient's body temperature with precision. For instance, the mobile robot "Rollo" [30] uses an IR (infrared) sensor to measure the temperature of the patient, and a robotic system introduced in [31] uses a temperature sensor to do so. However, temperature measurement alone is not enough to understand the patient's health condition, especially when they are confined to bed. To understand the health condition of such patients better, more vital signs must be measured in addition to temperature. The "SleepSmart" [32] multi-vitals monitoring bed measures the person's blood pressure, oxygen levels, breathing rate, heartbeat, and temperature to monitor their health condition. Another study monitors blood pressure, blood oxygen level, body temperature, pulse rate, and galvanic skin response by using a modular health assistant robot called "Charles" [33]. The robot can also measure other vital signs, such as blood glucose levels, by interfacing with additional equipment (a blood glucose monitoring system) to understand the patient's health condition better.
Although these devices can measure vital signs accurately, they require additional sensors, restricting them to performing only certain tasks. On the other hand, the PR2 robot used in our project measures the temperature of the patient using a contactless home IR thermometer without any additional sensors. We utilize computer vision techniques to read the temperature from the thermometer's screen, and that information can be sent to nurses or caregivers for further analysis.
Toward our larger goal of developing ARNA platforms, our main focus in this paper is to study three specific applications: "object fetching" and "temperature measurement" as patient sitter tasks, and a "patient walker" task. The results obtained from this study will be part of development efforts for ARNA platforms. In this paper, we present the developed algorithms, parameter analysis, and observations from human subject tests. We build these efforts upon our previous studies, and further details about previous research on ARNA can be found in [34,35,36,37,38]. The original contributions of this paper include (i) identifying basic nursing tasks and designing an application pipeline for those tasks in order to implement them with a robotic platform, (ii) proposing solutions for the integration of the physical environment/objects and the robotic platform in a hospital-like setup, (iii) performing parameter analysis to emphasize the different effects of variables on the designed nursing task applications, (iv) conducting human subject tests to demonstrate practical aspects of the designed nursing implementations, and (v) a general feasibility assessment of the developed algorithms for basic nursing tasks by providing human subject test results along with feedback and comments from human subjects.
The remainder of the paper is organized as follows. The next section describes the developed algorithms in this study. The hardware and workspace used and parameter analysis are presented in Section 3 and Section 4, respectively. Section 5 provides information about human subject test design, scenario details, results, and observations from participants. In the final section, conclusions are presented.

Navigation Algorithm
Navigation is one of the crucial tasks in this study. Since the hospital environment is unstructured and cluttered, a robotic platform operating in such an environment can face several challenges. It needs to know its environment accurately to avoid obstacles and reach the goal position precisely. The major objective of this task is to construct safe and collision-free navigation for the PR2 that can meet the above-stated challenges. We adopt the 'ROS 2D navigation stack' [39] for this purpose. The modular software package constructs a 3D map of the surroundings and localizes the PR2 on the map. It combines data from the PR2's base LiDAR and torso LiDAR to construct a 3D occupancy grid, which is flattened into a 2D occupancy cost map of the surrounding obstacles. The cost map of the surroundings is fused with odometry sensor data by a probabilistic localization library, AMCL, to localize the PR2 on the map [40]. The AMCL library implements an adaptive Monte Carlo localization algorithm to predict the PR2's location and to track its position during navigation. Using the 2D cost map and the PR2's location and position, we construct a navigation map of the environment. The map is updated with new obstacles in real time, and a new navigation plan is prepared using that information. In this study, several pre-defined waypoints are used for the patient's bed location, start position, and goal position. When a task is requested by the user in the experiments, the PR2 uses its base LiDAR and torso LiDAR to estimate its location. Using the initial and destination points, the PR2 prepares a navigation plan, which is translated into velocity commands and sent to the base controller for navigation. For all experiments discussed in this study, the PR2 navigates to the patient's bed and waits for a request from the user at the beginning of the experiment.
When requested, the PR2 robot navigates to the goal position, performs the task (for instance, fetch an item), and returns to the patient's location to hand over the object, or to complete some other task.
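As an illustration of the plan-to-command step described above, the sketch below implements a minimal proportional waypoint controller in plain Python. It is our own simplification, not the ROS navigation stack: the waypoint coordinates loosely mirror the experiment layout, and the gains and velocity cap are assumed values.

```python
import math

# Illustrative waypoints loosely mirroring the experiment layout (not from the paper).
WAYPOINTS = {"start": (0.0, 0.0), "bed": (2.7, 0.0), "table": (2.7, 6.1)}

def velocity_command(pose, goal, k_lin=0.5, k_ang=1.5, v_max=0.25):
    """Proportional controller: rotate toward the goal, then drive forward.
    pose = (x, y, heading) in metres/radians; returns (linear_vel, angular_vel)."""
    x, y, theta = pose
    gx, gy = goal
    dist = math.hypot(gx - x, gy - y)
    bearing = math.atan2(gy - y, gx - x)
    # Wrap the heading error into [-pi, pi].
    ang_err = math.atan2(math.sin(bearing - theta), math.cos(bearing - theta))
    # Only drive forward once roughly facing the goal; cap the linear speed.
    v = min(k_lin * dist, v_max) if abs(ang_err) < 0.3 else 0.0
    w = k_ang * ang_err
    return v, w
```

In the real system these commands would be published to the base controller; here they are returned for inspection.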

Object Position Detection Algorithm
To detect the position of the objects in this study, we use an open-source ROS AR tag tracking library, "ar_track_alvar", which detects AR tags in real time [41]. The library detects the position and pose of the objects using AR tags. We chose this library because it performs tag detection with high accuracy, even in poor lighting conditions; additionally, it can detect multiple AR tags at the same time. AR tags with a fixed size and resolution are generated using this library. Objects used in the experiments are labelled with the generated AR tags and placed on a table, as shown in Figure 1, for the robot to pick up and fetch. Adding AR tags to objects is intended to increase detection performance, as these tags have unique patterns that help the developed algorithm with detection. The PR2 uses its stereo camera to identify objects placed on the table. The library uses the AR tags on objects to estimate information such as position, orientation, and distance from the camera in order to plan the arm motion to fetch the objects.
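A minimal sketch of how a requested object can be selected from the tag detections is shown below. The dictionary-based detection format, the function name, and the tag ids are our own illustrative assumptions, not the actual `ar_track_alvar` message API.

```python
import math

def select_tag(detections, tag_id):
    """Pick the detection matching the requested AR tag id and report its
    straight-line distance from the camera. Returns None if the tag is absent."""
    for det in detections:
        if det["id"] == tag_id:
            x, y, z = det["position"]  # metres, camera frame
            return det["position"], math.sqrt(x * x + y * y + z * z)
    return None
```

The returned position and distance would then feed the arm motion planner described later.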

Human Face Detection Algorithm
The face detection technique in this study is used to find the forehead location on a patient's face in the temperature measurement task. After reaching a goal position, the PR2 uses its stereo camera to look for the "face" of the patient. The images are then processed by the "face_detector" [42] ROS library to find faces and their orientations. The library implements the Haar cascades technique to detect faces in real time. The Haar cascades [43] technique uses pre-compiled model templates that can recognize facial features, such as the eyes, nose, and mouth, in images. Other facial features, such as the distance between the eyes, the depth of the eye sockets, and the size of the nose [44], are used to generate unique fingerprints of a face. The captured images are then compared with pre-compiled fingerprints to detect faces. Any false positives in the images are removed by using depth data of objects from the stereo camera. In addition to removing false positives, the stereo camera's depth data are also used to calculate the position (x, y, z) and orientation (roll, pitch, and yaw) of the patient's face with reference to the stereo camera's frame. The ROS "tf" library [45] provides several functions to keep track of coordinate frames and to transform between them without tracking them manually. The calculated coordinate frame is tracked with respect to various other coordinate frames (base, arms, head) in a tree structure by the "tf" library. Figure 2 shows details of several coordinate frames associated with the PR2. Some of these frames are generated in real time using various PR2 sensors, while the others are hard-coded. The PR2 keeps track of the patient's face with respect to the camera frame and re-calculates the face coordinates when the face moves. In our experiments, the PR2 is able to track patients whether they are standing, sitting, or lying on the bed. Even when the user is moving away from the robot, the technique can efficiently keep track of the patient's face from a long distance.
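The conversion from a detected face pixel plus stereo depth to a 3D camera-frame position follows the standard pinhole back-projection, and implausible depths can be used to reject false positives, as described above. The sketch below is generic; the intrinsic parameters and the depth bounds are illustrative assumptions, not PR2 calibration values.

```python
def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a depth reading into the camera frame
    using the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def plausible_face(depth, near=0.4, far=3.0):
    """Reject face candidates whose depth is physically implausible for a
    patient near the bed (bounds are illustrative)."""
    return near <= depth <= far
```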

Motion Planning Algorithm for the Robot Arm
The patient's face is used as a virtual target frame to plan motion for the robotic arm. In our study, for human subject tests, a safety offset called the "safe distance" (Figure 2) is added to the virtual target frame to increase the patient's comfort and to prevent the robotic arm from getting too close to the patient. The offset parameter can be adjusted based on the user's comfort. The motion trajectory planning system uses the virtual target frame as the target frame. The coordinates of the target frame are checked to verify whether they lie in the currently defined workspace. After verifying the coordinates, the target frame is compared to check whether any further movement is needed to reach the patient. If movement is needed, i.e., the PR2 arm cannot reach the target frame, the system calculates the distance the robot must move to reach the desired target frame position. We use inverse kinematics to calculate the parameters for each joint (seven joints for the PR2) of the PR2's arm. Since there can be several possible solutions, different constraints, such as trajectory time, the effort required to perform the motion, and power consumption, are imposed on the possible solutions to select a feasible one. After calculating the required parameters, the OMPL (Open Motion Planning Library) planner [46] from the "MoveIt" [47] ROS library is used to plan motion for the robotic arm. The library allows the user to configure virtual joints, the collision matrix, and some other motion parameters. The GUI also allows the user to tune optimization parameters, such as the search timeout, by selecting a suitable kinematics solver. The various parameters, such as the target frame, joint parameters, and solver, are used by the KDL (Kinematics and Dynamics Library) to calculate the translation and rotation that the robot should perform to reach the desired goal, as shown in Figure 3. The values are then used by the arm controller to perform collision-free arm motion.
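The "safe distance" offset on the virtual target frame can be sketched as pulling the arm target back along the approach line, as below. This is our own simplified illustration; the coordinate convention and the default offset value are assumptions, not the system's actual parameters.

```python
import math

def offset_target(face, base, safe_distance=0.25):
    """Pull the arm target back from the face position by 'safe_distance'
    along the line from the robot to the face (coordinates in metres)."""
    dx, dy, dz = (face[i] - base[i] for i in range(3))
    reach = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Never move the target behind the robot itself.
    scale = max(reach - safe_distance, 0.0) / reach
    return tuple(base[i] + (face[i] - base[i]) * scale for i in range(3))
```

The resulting point would be handed to the motion planner as the arm's goal pose.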

Thermometer Digit Detection Algorithm using OCR
For the temperature-measurement task, a high-resolution camera is mounted on the PR2's shoulder to record images of the thermometer's screen. Using the robot's odometry sensors, we estimate the orientation of the thermometer and use perspective geometry to perform image tilt correction. The captured image is then cropped to show only the thermometer screen region and an additional buffer for better contour detection. An ROI (region of interest) is extracted from the captured image. A black hat morphological operation is performed on the image to separate dark (digits region) and light regions (backlit screen) of the image. The digits are joined together to create a continuous blob for each character using the fill technique. The ROI is further processed to extract the contours of the digits. A threshold is applied to the resultant image to extract larger regions in the image to filter out any noise. The image is then cropped using the contour area information to show just the region of the digits. A template-matching OCR technique is applied to the final cropped image. This technique matches the input image to a reference image to recognize digits. A seven-segment royalty-free image (Figure 4) is used as the reference image in this algorithm. An additional fill operation is applied to this image to make the digits continuous, the same as the input image. A distance function is used to calculate scores for the pre-processed contours by using the reference image. The digit with the highest score is selected to estimate the temperature reading. The OCR algorithm can be described as follows. Let us call the input image I(x, y) and the reference image (template) T(x, y). The goal of the template-matching OCR technique is to find the highest matching pair using the function S(I, T). The 'correlation coefficient matching' technique is used to calculate scores for the input image using the equations below [48,49].
S(I, T) = \sum_{x=0}^{w-1} \sum_{y=0}^{h-1} T'(x, y) \cdot I'(x, y)

where w and h are the width and height of the template image, and T' and I' are the mean-subtracted template and input images:

T'(x, y) = T(x, y) - \frac{1}{w h} \sum_{x', y'} T(x', y'), \qquad I'(x, y) = I(x, y) - \frac{1}{w h} \sum_{x', y'} I(x', y')

The ROS Tesseract library is used for this OCR recognition task in our study [50]. The library creates a bounding box of the recognized region and displays the temperature reading on the image. The reading can be sent to the nurses for monitoring the patient's health condition. Further, the PR2 can be programmed to take multiple temperature readings for better accuracy and to take them at regular intervals (hourly, bihourly, trihourly, etc.) to monitor the patient's health.
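To make the correlation coefficient matching concrete, the sketch below scores flattened binary digit patches against tiny toy templates. The 3×5 patterns are our own illustrative stand-ins, not the paper's seven-segment reference image or the Tesseract pipeline.

```python
def ccoeff(patch, template):
    """Zero-mean ('correlation coefficient') match score between two
    equally sized, flattened image patches."""
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    return sum((p - mp) * (t - mt) for p, t in zip(patch, template))

# Toy 3x5 binary digit templates (illustrative, not the actual reference image).
TEMPLATES = {
    "1": [0,1,0, 0,1,0, 0,1,0, 0,1,0, 0,1,0],
    "7": [1,1,1, 0,0,1, 0,0,1, 0,0,1, 0,0,1],
}

def recognize(patch):
    """Return the template digit with the highest correlation score."""
    return max(TEMPLATES, key=lambda d: ccoeff(patch, TEMPLATES[d]))
```

In the real pipeline the same scoring runs over the cropped digit contours extracted from the thermometer screen.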

Patient Walker Algorithm
The patient walker task involves multiple forms of autonomous navigation. The robot makes use of the ROS navigation stack and the 2DNav (two-dimensional navigation) method for navigating in dynamic, cluttered environments full of obstacles. In addition, the robot uses a modified 2DNav and another, simpler base controller for the patient walker task. ROS 2DNav is designed to flatten the robot and environment geometry into a two-dimensional plane for path planning and obstacle avoidance. This works well with small objects being carried by the robot's grippers, but fails if the robot needs to move a larger object. In a hospital-like environment, the robot can be programmed to move a cart, a walker, or an IV (intravenous) pole, which affects the algorithm's ability to separate the carried items from the robot and the dynamic environment when flattening. The flattened robot footprint was therefore expanded to include the area occupied by either the IV pole or the walker. This both defines the object as rigidly attached to the robot and avoids collisions between the carried object and the environment.
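The footprint expansion described above can be approximated by taking a bounding box over the robot's polygon together with the carried object's polygon, as sketched below. This is a conservative, simplified stand-in for what the navigation stack's footprint parameter achieves; the dimensions in the usage example are made-up.

```python
def expanded_footprint(robot_pts, carried_pts):
    """Axis-aligned bounding box covering both the robot footprint and the
    carried object, used as a conservative planning footprint."""
    pts = list(robot_pts) + list(carried_pts)
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return [(min(xs), min(ys)), (max(xs), min(ys)),
            (max(xs), max(ys)), (min(xs), max(ys))]
```

A tighter convex hull could be used instead; the bounding box simply guarantees the carried object never protrudes past the planned footprint.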
Since the robot follows and supports the user in this task, the robot's motion with the walker should be smooth and easy to operate. The user pushes the walker, thereby applying force on the grippers and leading the robot to a desired location, and the robot interprets the user's intention to walk in the corresponding direction. A traditional force-based PID controller tries to maintain a desired force at all times during the motion. Since our experiment requires operating at variable speeds, a PID controller is not suitable for this task. Instead, we adopted a custom controller, called a 'stiffness controller', for this task [51]. When the user selects the 'Start Walker' function on the Android tablet, this controller is initialized by the PR2. Two parameters, a task position in front of the PR2 and a stiffness force parameter, are set for the controller before starting the experiment. When the user applies a force greater than the stiffness parameter, the PR2 grippers move freely to a new position, changing the gripper coordinates. This motion creates an error in the task space, and to minimize this error, the PR2 drives its base so that its grippers move close to the home pose. This technique is used by the PR2 to coordinate and move along with the patient walker. The motion continues until the patient selects the "Stop Walker" functionality on the Android tablet. The controller allows the robot to follow the walker while applying a directionally adjustable level of stiffness to the walker for stability. The walking mode could also allow the patient to adjust the stiffness with which the robot's arms hold the walker in position, letting different patients use the walking mode more comfortably with different settings.
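One control cycle of the stiffness behaviour can be sketched in one dimension as below: a push above the stiffness threshold lets the gripper yield, and the base velocity then chases the resulting task-space error back toward the home pose. The threshold, gain, and step size are illustrative assumptions, not the controller's actual parameters.

```python
def stiffness_step(gripper, home, force, threshold=5.0, k_base=0.5, yield_step=0.01):
    """One control cycle of the stiffness behaviour (1-D sketch).
    A push above the threshold lets the gripper yield; the base velocity then
    chases the task-space error so the gripper returns toward its home pose."""
    if abs(force) > threshold:
        gripper += yield_step if force > 0 else -yield_step  # gripper yields
    error = gripper - home
    base_velocity = k_base * error  # base follows the user's lead
    return gripper, base_velocity
```

Running this loop at the controller rate makes the base trail the user's pushes while small disturbances below the threshold leave the walker held rigidly.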

PR2 Robotic Platform
PR2 is equipped with two onboard computers that run on quad-core Nehalem processors [52]. The PR2 has a 1.3 kWh Li-ion battery pack, which provides an average runtime of 2 h. The computers can be accessed remotely from a base station to operate PR2 functions [53]. A wide-angle stereo camera and a narrow-angle stereo camera are mounted on the PR2's head; the wide-angle camera is used for face detection and object detection in the experiments. In addition to these, a 5 MP (megapixel) camera and a projector are mounted on the head. Further, a high-definition camera with optical zoom capability is mounted on the PR2's shoulder, as shown in Figure 5a. The camera is angled to record objects held in the PR2's grippers; in this study, this camera is used for thermometer digit detection. The grippers are equipped with pressure-sensor arrays to detect objects held in them. A BLE (Bluetooth Low-Energy) speaker, as shown in Figure 5b, is mounted on the PR2's shoulder to repeat received commands out loud. Two LiDAR scanners are present on the PR2: one mounted on its torso and the other on its base. The PR2's base is omnidirectional. The motion of the PR2 can also be controlled with a joystick and/or a keyboard from the base station.

Experiment Workspace
Experiments for the project were conducted in the Assistive Robotics Laboratory at UTARI. In the laboratory, a hospital setup is created to mimic the real-world environment. Several obstacles such as chairs and tables are added to create a cluttered space. The setup consists of a hospital bed for patients and a table holding the objects for the PR2 to pick up and fetch. The hospital bed and the table are placed 20 ft (6.1 m) apart, and the PR2 start point is placed 9 ft (2.7 m) away from the bed for our experiments. The PR2 start point and the table are also placed 20 ft (6.1 m) apart, as shown in Figure 6. We use a 'Hill-Rom' hospital bed in this setup.

Thermometer
A contactless body thermometer (SinoPie forehead thermometer) is used to measure the temperature of the patient in this study. A foam base is mounted to the thermometer to stand it upright. A glare filter is added to the thermometer's screen to reduce the effect of surrounding lighting when recording the temperature. A Bluetooth microcontroller, an 'Adafruit Feather 32u4 Bluefruit LE' (Figure 7), is attached to the thermometer in order to trigger it remotely. The PR2 connects to this module and triggers the thermometer during the temperature measurement task.

Patient Walker
In this study, we use the 'Drive Medical Walker HX5 9JP', model no. 10226-1, for the patient walker experiments. The four-wheeled walker provides easy steering, and the aluminum build makes the walker lightweight, so it requires less effort to walk with. The walker can hold up to 350 lbs (158.8 kg), has dimensions of 16.75 in × 25 in (0.4 m × 0.6 m, length × width), and comes with 5 in (0.1 m) wheels. It provides easy mobility for people with disabilities and elderly people. The walker is modified with a handle so that the PR2 robot grippers can hold it, and a shelf is added to place the tablet on during experiments. The final design of the walker is shown in Figure 8.
In order for the patient to be able to rotate more easily, the walker was modified to have four caster wheels. In a traditional setting, the extra caster wheels could reduce the stability granted by the walker, but in this case the robot is used to increase stability for the patient. The casters allow the robot to make use of its dexterous holonomic base and allow the patient to choose between multiple paths to reach the same goal position.

Tablet and Android App User Interface
In order to provide a remotely controlled interface, an Android application (running on Android 5.1 or higher) is developed. The application (app), named ARNA, includes a custom graphical user interface (GUI) for interacting with the PR2 (running ROS). The application is developed to communicate and send instructions/information between the tablet and the PR2. For this study, we use the Indigo version of ROS on an Ubuntu 14.04 computer. Since Android and ROS are not directly compatible, we use ROSJAVA for Android to develop the app; ROSJAVA enables ROS nodes to run on Android devices. The Android tablet acts as a client, which requests items, information, and actions to be performed by the robot (PR2). The robot acts as the server, which receives the client's requests and processes them; it also sends information over the network to the tablet. Figure 9 shows a screen layout of the user interface. The app provides two main features for the users: (1) sending commands to the robot and (2) displaying the camera view that the robot sees. To send commands to the PR2 robot, the app allows participants to use either buttons or voice. To implement voice commands, Google Android speech recognition is adopted to process audio from participants. After processing the audio, the app receives text sentences and extracts keywords that match commands of interest. The camera view displays a live video stream from the PR2 cameras, which is useful when the robot performs tasks away from the user's view.
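The keyword extraction step can be sketched as a simple lookup over the recognized sentence, as below. The command names and the keyword table are our own illustrative assumptions; the actual app's command set and parsing are richer.

```python
# Hypothetical keyword table; the real app's vocabulary is larger.
ITEMS = {"water": "water bottle", "soda": "soda bottle", "cereal": "cereal box"}

def parse_command(text):
    """Extract a (command, argument) pair from recognized speech by keyword
    matching, mirroring how the app maps sentences onto robot requests."""
    words = text.lower().split()
    if "walker" in words:
        return ("start_walker", None)
    if "temperature" in words:
        return ("measure_temperature", None)
    for key, item in ITEMS.items():
        if key in words:
            return ("fetch", item)
    return (None, None)  # no command of interest recognized
```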

Temperature Measurement Task
A parameter analysis is performed to determine the best set of parameters for thermometer screen digit detection in 15 cases, varying the following parameters: the threshold applied to the average score (Th), the aspect ratio (AR) for detected contours, size limits for detected contours (Cntr Limits, width and height), the size of the structural element, rectangle (Rect) or square (Sq), the morphological operation to fill the gaps (Fill), and full image or cropped image (Crop). The list of the 15 cases with the values of these parameters is given in Table 1. The results of the analysis are evaluated considering three values: detection rate (DR), number of detected contours (#Cntr), and average matching score (AS). The detection rate equals the number of true digits that the algorithm detects over the total number of actual digits. The number of contours gives the total of the contours detected, which may include false positive detections. The results are given in the last three columns of Table 1. According to the results, it can be interpreted that the contour-limiting parameters (width and height) and the aspect ratio have an effect on eliminating contours other than the digits of interest. In addition, applying a threshold to the average score is very effective in eliminating false positives. On the other hand, the morphological operations (structural element size), fill, and crop parameters/cases affect whether the digits are detected correctly. Figure 10 shows sample case outputs from the analysis. As seen in Table 1 and Figure 10, the cases are listed in order of increasing performance. The performance of the detection algorithm is better when the detection rate is higher and the contour number equals the number of digits on the screen; a contour number greater than the actual digit number indicates false positives. The best case is when the detection rate is 100% and the contour number is 3, because the actual temperature reads 94.1 °F in the parameter analysis (Figure 10).
In many cases, the detection rate is 100%, but the contour number is higher than 3. The last case, Case 15, has the parameters that give the best results: 100% detection rate and no false positives. These parameters are used for human subject tests.

Patient Walker Task
A parameter analysis is performed with 11 cases to optimize the robot navigation while retrieving the walker. The analysis is performed along the preferred path the robot has access to during the human subject testing. The following parameters are varied: the maximum linear velocity (V_limit), the maximum angular velocity (W_limit), the forward proportional gain (P_x), the sideward proportional gain (P_y), and the angular proportional gain (P_w). These cases are listed with the values of the parameters in Table 2.
The analysis is evaluated by considering four force values and three velocity values: the maximum recorded force (F_max), the minimum recorded force (F_min), the mean of the recorded force (F_mean), the variance in the recorded force (F_var), the maximum recorded velocity (V_max), the mean of the recorded velocity (V_mean), and the variance in the recorded velocity (V_var). The force values can be separated between the left and right grippers with l and r subscripts, such as F_lmax and F_rmax. The results show that an increase in the proportional gains leads to an increase in the output force values in the corresponding direction. Similarly, increasing the velocity limits results in higher output force values.
The input parameters of Case 11 are used during the human subject testing. These parameters are chosen in order to reduce the maximum force measured in both grippers when contacting the walker handle, so that the handle is not pushed out of the open grippers before grasping, to complete the experiment in a timely manner, and to move the robot without aggressive maneuvers. Case 11 does not have the lowest force for either gripper, but it keeps the force in both grippers low without raising the force on the opposing gripper and without increasing the angular speed of the robot. The values in Case 11 thus make it more likely that the grippers contact the walker handle and grasp it without pushing the handle out.

Object Fetching Task
Object fetching task experiments with human subjects are conducted at UTARI with a total of 11 volunteer participants (10 nursing students and 1 engineering student). The purpose of the experiments is to investigate how people interact with the robot and how the robot detects and responds to this interaction. The tablet with the developed app is used to request fetching three different objects. Subjects either sit or lie on the bed and interact with the robot following the experiment scenario described below. Each subject requests the robot to fetch three different objects. The time to complete each task is recorded and plotted for three trials (three objects are fetched) to show the required average time for this task (Figure 11). The overall fetching task is also broken down into 17 individual smaller tasks, and the time to complete each of these tasks is depicted in Figure 12. The scenario below describes the fetching task, which takes about 2 minutes between taking the command and releasing the fetched item to the user.
Scenario:
• A human subject is asked to sit or lie on a hospital bed (pretending to be a patient in a hospital). The subject is asked to use buttons on the tablet to interact with the PR2 during the experiment.
• The PR2 robot's starting position is near the patient, about 6 feet (1.8 m) away.
• The PR2 robot detects a human face and starts tracking the subject's face position.
• The PR2 robot says "Please interact with the tablet".
• The subject pushes a button on the tablet to request a fetch task. Objects that can be fetched are a soda bottle, water, or a cereal box. Once the PR2 receives the tablet input, it first moves to its starting pose to start the experiment (step 2 in Figure 11).
• The PR2 robot acknowledges the subject's command from the tablet and starts moving toward a table located about 20 feet (6.1 m) away from the bed.
• The PR2 robot stops near the table and picks up the requested object from the table (Figure 13).
• The PR2 robot brings the object near the bed, about 3 to 4 feet (0.9-1.2 m) away from the subject.
• The subject is asked to take the object from the robot.
• The robot releases the object (Figure 14).
• This task is repeated a total of three times for each subject.
Observations:
• The robot's navigation velocity is programmed to a max limit of 0. Considering that a person would complete the same fetching task in a few seconds, the robot's speed needs to be improved for better efficiency.
• The fetching tasks are completed with a success rate of 94.12% over 34 trials (11 subjects × 3 trials + 1 additional trial for one subject). This rate is based on the robot returning the correct object requested through the tablet input. The failures (only two occurrences) include the robot returning the wrong object due to an incorrect detection (computer vision) and the robot returning with nothing due to a bad grasp.
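The reported success rate follows directly from the trial counts given above, as this short check shows:

```python
# Checking the reported fetching success rate: 11 subjects x 3 trials
# plus 1 additional trial for one subject gives 34 trials, with the 2
# recorded failures (one wrong detection, one bad grasp).
subjects, trials_per_subject, extra_trials = 11, 3, 1
failures = 2

total_trials = subjects * trials_per_subject + extra_trials  # 34 trials
successes = total_trials - failures                          # 32 successes
success_rate = 100.0 * successes / total_trials

print(f"{success_rate:.2f}%")  # matches the reported 94.12%
```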
• The robot was stuck two times during navigation due to moving over the bed sheet. The robot is sensitive to obstacles under the wheels: when the wheels pass over the cloth, they pull it closer to the robot, blocking some of the sensors, which impedes the path planning.
• In one trial, the subject unknowingly pushes multiple buttons, and multiple item retrieval messages are sent to the robot. Each additional input is interpreted as a correction or change of command and overwrites the prior item message.
• The robot's arm hits the table two times when reaching out for objects, on two separate trials. The path planning for arm manipulation is not appropriate when the distance between the robot and the table is reduced.
• Comments are collected from the human subjects. Some examples of those comments are as follows:
-"The fetching speed is slow."
-"Face tracking is a good feature making the robot more human-like in interaction; however, the constant tracking and searching can cause negative effects. Depending on the requirements of the patient profile, the face-tracking behavior should vary."
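The multiple-button-press observation above describes a last-command-wins policy: the robot holds a single pending request, and each new tablet input overwrites it. A minimal sketch of that behavior, with a hypothetical CommandBuffer class (not the actual tablet interface code), looks like this:

```python
# Sketch of the tablet command handling described above: the robot
# keeps one pending request, and any new tablet input is treated as a
# correction that overwrites the prior item message.
# CommandBuffer is illustrative, not the study's actual interface code.

class CommandBuffer:
    def __init__(self):
        self._pending = None

    def on_tablet_input(self, item):
        """Each button press replaces any not-yet-executed request."""
        self._pending = item

    def take(self):
        """The robot consumes the latest request when it starts the task."""
        item, self._pending = self._pending, None
        return item

buf = CommandBuffer()
# The subject accidentally presses several buttons in a row:
for press in ["water", "soda bottle", "cereal box"]:
    buf.on_tablet_input(press)
fetched = buf.take()  # only the last request survives
```

One design consequence, visible in the trial where the subject pressed several buttons unknowingly, is that accidental presses silently discard earlier requests; a confirmation prompt on the tablet would make this policy safer.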

Temperature Measurement Task
Human subject tests are performed with eight volunteers over 2 days for the temperature measurement task. The designed test scenario is as follows: a human subject is asked to lie on the bed, and once the PR2 receives the temperature measurement task request, it navigates next to the table to pick up the thermometer (Figure 15), navigates back next to the patient, finds the patient's face in order to direct the thermometer, and moves its arm with the thermometer to the calculated position (Figure 16). Then, the thermometer is triggered by a Bluetooth module. Finally, the PR2 moves its arm with the thermometer close to the high-definition camera, and a single image is saved for detection purposes.
The thermometer digit detection results from the human subject tests are given in Table 3. In two out of eight human subject cases, the system reads the thermometer screen 100% correctly with no false positive contours. The system also achieves a 100% detection rate in two more cases; however, there are 1 and 3 false positive detections in those cases, respectively. Three out of the remaining four cases end up with a 33% detection rate, and there is one case with a 66% detection rate. Some examples of resulting images from the human subject tests are shown in Figure 17.
When the parameter analysis is performed, we define a region of interest (ROI) in the image using the known locations of the PR2's arm, the camera, and the thermometer. During the human subject tests, we realize that, depending on how the PR2 picks up the thermometer, the orientation of the thermometer in the gripper may change. Even though the orientation difference is very small, it strongly affects the performance of the detection algorithm. Additionally, lighting conditions may contribute to the high false positive rate.
Additional comments collected from the human subjects include:
-"It looks like the robot from the Jetsons."
-"The speed of the robot is too slow, and the tablet interface can be improved."
-"Can the supplies be put on the robot?"
The possible solutions to improve detection include (i) modifying the thermometer so that the PR2 can pick it up the exact same way every time, (ii) adding LED lights around the camera to improve the visibility of the digits, and (iii) defining a dynamic and adaptive ROI using visual markers around the thermometer screen.
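One common way to suppress false positive contours of the kind described above is to gate candidate detections on the ROI and on plausible digit geometry before recognition. The sketch below illustrates this idea on bounding boxes; the ROI coordinates and the size/aspect thresholds are assumptions for illustration, not the values used in the study.

```python
# Illustrative geometric filtering of candidate digit contours, one way
# to reduce false positive detections on the thermometer screen.
# Contours are represented by (x, y, w, h) bounding boxes; all numeric
# thresholds below are assumed values, not the study's parameters.

def inside_roi(box, roi):
    x, y, w, h = box
    rx, ry, rw, rh = roi
    return rx <= x and ry <= y and x + w <= rx + rw and y + h <= ry + rh

def plausible_digit(box, min_h=20, max_h=60, min_aspect=0.3, max_aspect=0.8):
    _, _, w, h = box
    if not (min_h <= h <= max_h):
        return False
    return min_aspect <= w / h <= max_aspect

def filter_digit_candidates(boxes, roi):
    return [b for b in boxes if inside_roi(b, roi) and plausible_digit(b)]

roi = (100, 50, 120, 60)  # assumed thermometer-screen region (x, y, w, h)
candidates = [
    (110, 60, 15, 40),  # plausible digit inside the ROI -> kept
    (130, 60, 14, 42),  # plausible digit inside the ROI -> kept
    (10, 10, 15, 40),   # outside the ROI -> rejected
    (150, 60, 50, 40),  # too wide (aspect > 0.8) -> rejected
]
digits = filter_digit_candidates(candidates, roi)
```

A marker-based adaptive ROI, as in solution (iii), would simply replace the fixed `roi` tuple with one computed per image, leaving the geometric gating unchanged.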

Patient Walker Task
Human subject tests are performed for the patient walker task with a total of eight volunteers. The patient walker task begins with the patient in a bed, with access to the tablet to communicate with the robot. A customized walker is stored in a separate location.
When the patient selects the walker task on the tablet, the robot navigates to retrieve the walker using the ROS 2DNav algorithm [39]. Once the robot is positioned in front of the walker, it places its arms into the gripping position. The multimodal proportional controller is used to contact the walker. The robot closes its grippers and uses the controller to gently push the walker to the patient's bed. The patient can then stand up, place the tablet onto the walker, and use the tablet to turn on the robot's walking mode. The patient can then push and pull the walker in any direction. The robot senses the motion of the walker and follows it while limiting the walker's speed for stability. When the patient arrives at the desired location, he/she can turn off the walking mode, and the robot will hold the walker rigidly in place. A snapshot from a test run is depicted in Figure 18.
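The follow-with-speed-limit behavior described above can be sketched as a proportional response to the sensed walker displacement, saturated at a maximum speed. The gain and speed limit below are illustrative assumptions, not the tuned values of the study's multimodal proportional controller.

```python
# Minimal sketch of the walker-following behavior: a proportional
# controller tracks the walker's sensed displacement while saturating
# the commanded speed for stability. kp and max_speed_mps are assumed
# illustrative values, not the study's tuned controller parameters.

def clamp(v, limit):
    return max(-limit, min(limit, v))

def follow_command(walker_offset_m, kp=1.5, max_speed_mps=0.3):
    """Return a robot speed command from the walker's displacement (m)."""
    return clamp(kp * walker_offset_m, max_speed_mps)

# A small push yields a proportional response; a large push saturates.
v_small = follow_command(0.05)  # about 0.075 m/s
v_large = follow_command(0.50)  # clamped to the 0.3 m/s limit
```

The saturation step is what keeps the walker stable: no matter how hard the patient pushes or pulls, the commanded speed never exceeds the limit in either direction.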
The comments from volunteers and observations during the human subject tests are given below; they are provided as recommendations for the development of custom ARNA platforms. Observations: • The patient cannot be sure when to press the button (Test 1).

Conclusions
In this study, we present the outcomes of nursing assistant task design, analysis, and human subject tests using an assistive robot (PR2). Our main focus is the implementation of three tasks for assisting patients with basic needs: object fetching and temperature measurement (patient sitter tasks), and a patient walker task. Parameter analysis is performed, and the parameters with the best results are selected for use in the human subject tests. Human subject tests are performed with 27 volunteers in total. In the experiments with human subjects, the algorithms successfully assist the volunteers with the corresponding tasks. This study is part of a larger research effort in which the system is aimed to be integrated on an adaptive robotic nursing assistant (ARNA) platform.