Healthcare Students’ Ethical Considerations of Care Robots in The Netherlands

Background: Older adults are a rapidly growing group worldwide, requiring an increasing amount of healthcare. Technological innovations such as care robots may support the growing demand for care. However, hardly any studies address those who will collaborate most closely with care robots: the (trainee) healthcare professional. Methods: This study examined the moral considerations, perceptions of utility, and acceptance among trainee healthcare professionals toward different types of care robots in an experimental questionnaire design (N = 357). We also examined possible differences between participants at intermediate and higher educational levels. Results: The results show that the potential maleficence of care robots dominated the discussion in both educational groups. Assistive robots were seen as potentially the most maleficent. Both groups deemed companion robots least maleficent and most acceptable, while monitoring robots were perceived as least useful. Results further show that the acceptance of robots in care was more strongly associated with the participants' moral considerations than with utility. Conclusions: Professional care education should include the moral considerations and utility of robotics as emerging care technology. The healthcare and nursing students of today will collaborate with the robotic colleagues of tomorrow.


Introduction
Over the past decades, a shift in healthcare needs has become evident. World population is ageing, whereas replacement fertility is dropping [1] (pp. 2-5). Life expectancy is higher than ever [2] and older adults require an increasing amount of healthcare [3,4]. By 2040, the number of older dementia patients is estimated to be 81.1 million worldwide [5]. These people often require more specialised care [6,7]. For example, loneliness among older adults may lead to excess morbidity and mortality [8].
The changes in the amount and specialisation of healthcare needs also increase the costs [9]. On top of severe healthcare budget-cuts in most Western countries, a shortage of educated care professionals is expected in the area of specialised care [2]. This shortage of hands together with the aftermath of a global financial crisis foreshadows a future of low-quality eldercare against high costs [10].
Progressively, the call for care technology becomes louder and care robots seem to be in the vanguard of that development [11]. Care robots come in many forms, from surgery machines (e.g., Da Vinci Surgical System) to cuddle toys (e.g., PARO). Generally, three types of care robots may be distinguished: assistive, monitoring, and companion robots [12]. Assistive robots help with physical care tasks. When people perceive a device as harmful or impractical, intentions to use new technology drop [25]. The reverse also seems valid: intention to use and actual use increase if people feel that a device will perform well, is useful, and is easy to use (ibid.). Hence, if care professionals see that a robot is practical and functional, will their initial moral objections become less of an issue [31] or will the two co-exist?
Our RQ2, then, is: To what extent do care students believe that assistive, monitoring, and companion robots are acceptable and that robots will be useful in their future occupations? Furthermore, RQ3 asks: Does high perceived usefulness perhaps downplay earlier ethical concerns? To investigate ethical and occupational concerns of care students with different types of care robots, we designed a questionnaire study, probing the contrast between principles of ethics and considerations of utility and acceptance.
In other words, the current study seeks to investigate how care professionals' moral concerns relate to different types of care robots, whether these differ among the higher and lower educated professionals, how care professionals perceive a care robot's usefulness, and how their moral concerns relate to perceived utility and acceptability.
Recruitment occurred via three healthcare institutes (intermediate vocational) and two universities of applied sciences (higher vocational) in The Netherlands. These students were expected to work as care professionals within four years. A link to an online questionnaire was sent to all 7065 students who were enrolled in care courses at one of these institutes. The response rate was 36% for higher and 5% for intermediate vocational students; the latter group received one reminder.
The research design crossed the three-levelled factor Robot Type (assistive, monitoring, and companion; between-subjects) with the two-levelled factor Education Level (intermediate vs. higher vocational); the dependent variables were measures of medical Ethics (autonomy, beneficence, maleficence, and justice), Utility, and Acceptance. Gender and age were also included as control variables.
Age naturally co-varies with an intermediate or higher vocational level because higher vocational usually follows intermediate. All participants were treated according to the university's ethical guidelines.

Materials and Procedure
We created a questionnaire based on the literature discussed above and on pilot focus group sessions in which care professionals provided feedback for improvements. Participants in the focus groups did not join the actual study. The improved questionnaire was made available online through Qualtrics. Upon opening the questionnaire link, participants read a brief introduction about healthcare robots, providing a definition and describing one of three different types of care robots (i.e., either assistive, monitoring, or companion robots) illustrated by a picture.
To avoid cross-comparisons among the robot types, participants were assigned randomly to conditions. We used the same questionnaire in each condition. After the introduction, a test question validated the comprehensibility of the task. Then, participants were asked to evaluate the care robot on each of the following items.

Measures
Each construct was measured by Likert-type items, balanced for indicative and counter-indicative items, each followed by six-point rating scales (1 = strongly disagree, 6 = strongly agree), avoiding the neutral position [32]. Items were recoded afterwards so that they all point in the same direction as indicated here. Measurement scales were optimized based on principal component analysis (PCA) and reliability analyses (SPSS-V21). PCA was run with direct Oblimin rotation, an oblique rotation that allows factors to correlate (Field, 2009, p. 671). The final analysis comprised four factors explaining 65.09% of the variance, including 18 items with primary loadings over 0.5 and no cross-loadings larger than 0.3. Composite scores were created for each of the four factors based on the mean of the items. The internal consistency of the resulting scales was well beyond the minimum, with Cronbach's alphas > 0.70 [33] (see Table 2). Below, we describe how we measured each of the ethical categories.
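As an illustration of the reliability step (the study used SPSS; this is only a minimal numpy sketch on synthetic Likert data, not the study's data), Cronbach's alpha can be computed directly from a respondent-by-item score matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic six-point Likert data: four items driven by one latent trait,
# mimicking a scale that should clear the alpha > 0.70 criterion.
rng = np.random.default_rng(42)
latent = rng.normal(3.5, 1.0, size=(500, 1))
items = np.clip(np.rint(latent + rng.normal(0, 0.5, size=(500, 4))), 1, 6)
print(round(cronbach_alpha(items), 2))
```

Items sharing a strong common factor, as here, yield an alpha well above the 0.70 cut-off; uncorrelated items would drive it toward zero.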
Autonomy (3 items; α = 0.87) was defined as a person's ability to make his/her own choices independently without the interference of other people or outsiders [20]. Example items are: "A care robot increases the independence of a patient" (indicative) and "A care robot makes the patient more dependent" (counter-indicative).
Beneficence referred to a person's sense of health and well-being [20]. A sample item is: "A care robot treats the patient well." Non-maleficence refers to a device or treatment not doing harm to patients [20]. Because care professionals actually think in terms of maleficence, we omitted the equivocal "non" from the original scale label. Although "maleficence" is not exactly the opposite of non-maleficence, this phrasing was more comprehensible. A sample item is: "A care robot is an obstacle for improving health." However, results of the scale optimization showed a better fit for a single scale covering both Beneficence and Maleficence (4 items; α = 0.85). The dominant items on that scale indicated the name Maleficence.
Justice pertained to the need to distribute care evenly across patients [20]. An example item is: "A care robot keeps its promises." However, the items did not form a reliable scale and we therefore excluded this construct from further analysis. Utility (5 items; α = 0.78) concerned the degree to which care professionals believed a robot is practical, "handy," and useful during the execution of their jobs [24]. A sample item is: "A care robot is practical." Acceptance (6 items; α = 0.87) referred to care professionals giving admittance and approval to a care robot without protest or refutation, while regarding a robot as a proper and normal care utensil [25]. A sample item is: "I will accept the assistance of a care robot."
An approximately normal distribution was evident for the composite score data in the current study, with skewness and kurtosis scores between −1 and 1, thus making the data well-suited for parametric statistical analyses.
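The −1 to 1 screening rule can be reproduced in a few lines of numpy; the composite scores below are simulated for illustration and are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated composite scores on a 1-6 rating scale (illustrative only).
scores = np.clip(rng.normal(4.0, 0.9, size=400), 1, 6)

z = (scores - scores.mean()) / scores.std()
skewness = np.mean(z ** 3)             # 0 for a symmetric distribution
excess_kurtosis = np.mean(z ** 4) - 3  # 0 for a normal distribution
parametric_ok = -1 <= skewness <= 1 and -1 <= excess_kurtosis <= 1
print(round(skewness, 2), round(excess_kurtosis, 2), parametric_ok)
```

Scores drawn from an approximately normal distribution, as here, stay well within the ±1 band and thus pass the screen for parametric analysis.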

Results
The Box M value was associated with a p-value of 0.04. Note, however, that the Box M test is highly sensitive; for samples of N ≥ 200, its outcome may be disregarded unless p ≤ 0.001 [34]. Because our sample exceeded 200 and p = 0.04 > 0.001, it was appropriate to proceed with MANOVA.
To answer RQ3, we analysed the extent to which acceptance of a care robot was determined by maleficence and autonomy or by utility. Therefore, we conducted a multiple regression analysis with an interaction term [35] to predict the Acceptance scores through interactions with Autonomy, Utility, and Maleficence. Age and Education Level served as control variables (i.e., moderators).
Regarding assumptions, the Durbin-Watson statistic of 2.032 was well within the 1.5-2.5 range, indicating the absence of autocorrelation. The variance-inflation factors (VIF) were acceptable (< 5.0), suggesting no multicollinearity due to interdependency of variables (Rogerson, 2001). Three residual outliers with z-scores beyond ±3.29 (the cut-off for the most extreme 0.1% of a normal distribution) were removed. The predictors were mean centred. There were no linearity problems, and the plot of standardized predicted values against residuals showed no heteroscedasticity. Overall, it was suitable to perform a regression analysis.
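These diagnostics are straightforward to compute by hand. The following numpy-only sketch derives the Durbin-Watson statistic and variance-inflation factors on simulated predictor scores (the variable names and data are illustrative, not the study's):

```python
import numpy as np

def durbin_watson(resid: np.ndarray) -> float:
    """DW near 2 indicates no first-order autocorrelation in the residuals."""
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

def vif(X: np.ndarray, j: int) -> float:
    """VIF_j = 1 / (1 - R^2) from regressing column j on the other columns."""
    y = X[:, j]
    A = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r2 = 1 - (y - A @ beta).var() / y.var()
    return 1.0 / (1.0 - r2)

rng = np.random.default_rng(1)
n = 400
autonomy = rng.normal(4, 1, n)       # illustrative predictor scores
maleficence = rng.normal(3, 1, n)
utility = rng.normal(4, 1, n)
X = np.column_stack([autonomy, maleficence, utility])
y = 0.2 * autonomy - 0.5 * maleficence + 0.4 * utility + rng.normal(0, 1, n)

# Fit OLS with an intercept, then inspect the residual diagnostics.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta
print(round(durbin_watson(resid), 2))           # near 2 for independent errors
print([round(vif(X, j), 2) for j in range(3)])  # near 1 for independent predictors
```

With independently drawn errors and predictors, DW falls inside the 1.5-2.5 band and every VIF stays well below the 5.0 threshold used in the analysis.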
To test the relative weights of moral considerations and utility perceptions on the acceptance of healthcare robots, we performed a hierarchical regression (method Enter) with Autonomy and Maleficence (both indicating moral concerns) entered as predictors in the first block, and Utility in the second block, with the Acceptance measure as dependent variable. Autonomy and Maleficence together significantly explained Acceptance, R² = 0.48, R²adj = 0.47, F(2, 409) = 184.75, p < 0.001, which was mainly due to Maleficence. Utility added a significant 9% increase in explained variance (R² change = 0.09, F(1, 408) = 81.97, p < 0.001). The three predictors together explained a substantial amount of variance in Acceptance (R² = 0.56, R²adj = 0.56, F change(1, 408) = 81.97, p < 0.001), the largest part of which was explained by the health professionals' moral considerations about care robots.
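The block structure of such a hierarchical regression, including the F-test for the change in R², can be sketched in numpy. The simulated coefficients below are hypothetical, chosen only to mimic the reported pattern of moral concerns outweighing utility:

```python
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 of an OLS fit of y on X (intercept added internally)."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return 1 - (y - A @ beta).var() / y.var()

rng = np.random.default_rng(7)
n = 412
autonomy = rng.normal(0, 1, n)
maleficence = rng.normal(0, 1, n)
utility = rng.normal(0, 1, n)
# Hypothetical weights: moral considerations dominate, utility adds some.
acceptance = (0.2 * autonomy - 0.7 * maleficence + 0.35 * utility
              + rng.normal(0, 0.6, n))

# Block 1: moral predictors only. Block 2: add Utility.
r2_1 = r_squared(np.column_stack([autonomy, maleficence]), acceptance)
r2_2 = r_squared(np.column_stack([autonomy, maleficence, utility]), acceptance)

# F-change for adding m = 1 predictor, k = 3 predictors in the full model.
m, k = 1, 3
f_change = ((r2_2 - r2_1) / m) / ((1 - r2_2) / (n - k - 1))
print(round(r2_1, 2), round(r2_2, 2), round(f_change, 1))
```

The first block already captures most of the explained variance; adding the third predictor raises R² by a smaller increment, and the F-change statistic tests whether that increment is significant.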

Discussion
The current study examined how trainee healthcare and nursing professionals at intermediate and higher educational levels appraised the moral considerations, utility, and acceptance of different types of care robots. We also analysed the relative contributions of perceived utility and ethics to robot acceptance. Results showed that trainee care professionals evaluated assistive robots as more maleficent than either monitoring or companion robots. Companion robots were also more likely than monitoring and assistive robots to be accepted as collaborators. Furthermore, monitoring robots were considered more maleficent than companion robots. Participants also thought that monitoring robots are less useful than companion or assistive robots. Considerations of autonomy did not differentiate between the robot types. No significant differences between the intermediate and higher educational levels were found. Finally, results show that moral concerns weigh more heavily than practical utility in the willingness to work with a robot, although both contribute significantly to, and together explain a substantial amount of variance in, accepting robots on the work floor.
Healthcare and nursing students were each presented with one of three types of care robots to examine their moral considerations, acceptance, and utility perceptions. Results show that these trainee healthcare professionals saw little harm in companion robots and worried most about assistive robots, irrespective of their schooling. From the debriefing interviews, it appeared that assistive robots were perceived as most maleficent because they can actually physically drop someone or make a wrong move. Importantly, our results provide a more positive view of attitudes toward healthcare robots than previous studies showed. Only a few years ago, healthcare students feared that robots would replace them and considered employing robots in care unacceptable and not very useful [26][27][28][29]. Today, trainee healthcare and nursing professionals are willing to accept care robots on the work floor provided that the technology does not harm the patient and that patient safety is guaranteed. Acceptance levels vary according to the type of robot, with companion robots being the most acceptable and useful.
It is argued in the extant literature that companion robots that, for instance, are able to play the favourite music of an elderly patient who is suffering from dementia, could alleviate loneliness and increase feelings of well-being [36][37][38]. Hence, a robot might provide a perfect interface for accessing such a resource, providing music as treatment with a greater degree of interactivity simulating personal interaction with a human being.
Results further showed that morality was more important than utility in accepting a robot in care although utility was not trivial. This is an important addition to prevailing theories on technology acceptance [25], which primarily focus on utility and ease of use but neglect the possible moral or ethical considerations. The participants' position in our study is in line with Stephany and Majkowski [24] (p. 131) who opposed the trend that utility considerations often overthrow a "moral sense of care." In our study, morality and utility together explained a substantial amount of variance in the willingness to accept healthcare robots. Notably, when considering the relative weights in a hierarchical regression analysis, the largest part is explained by moral considerations such as robots doing harm or increasing a patient's independence. Thus, indeed, moral senses of care overthrow utility considerations in (trainee) care professionals.
Our findings suggest that care robots have the potential to solve urgent problems in care [10], provided that, before implementation, the moral concerns of healthcare and nursing professionals are taken into account. Our results indicate that each type of robot should follow its own line of introduction because their different functionalities come with different moral concerns. Companion robots may take the lead and pave the way because they were seen as highly useful, most acceptable, and least harmful. For the development and implementation of care technology, our results suggest that companion robots can be employed right away without much resistance from caregivers, both on the work floor (lower/intermediate vocational) and in operational management (higher vocational). It also means that more work has to be put into making robots less threatening. Particularly for assistive machines, this means that robots performing physical tasks should comply with the highest safety regulations and technology standards. Technically, they should be flawless, which should be demonstrated by extensive user tests. Compare, for example, the aviation industry, where fear of flying is countered by showing that it is the safest form of transportation available [39].
A limitation might be that the measurement of the moral considerations was based on the principles of medical ethics [20], which are generally used to assess medical and care procedures, treatments, interventions, and technologies. However, no clear measurements of these constructs yet existed (cf. available measurements for utility and acceptance [25]). Therefore, we carefully constructed items following the definitions of these four basic principles of medical ethics. Psychometric analyses showed that the items for "justice" could not be indexed as a reliable scale. Furthermore, the items for "non-maleficence" and "beneficence" showed overlap and had to be merged into one scale rather than two separate constructs. Although "(non)-maleficence" and "beneficence" are not just the opposite [20], the overlap in empirical observation is comprehensible. Finally, all scales used for analyses were internally consistent.
Another methodological consideration is the lower response rate of participants at the lower/intermediate educational level, which left the subsamples in the current study unequally distributed. Even though MANOVA is a robust technique and Box's test showed that the homogeneity of the variance-covariance matrices was not violated [34], it is desirable that groups be of similar size. The relatively low overall response rate (7065 persons were approached, and only 406 of them completed the questionnaire) is due to a mistake in the planning of our study. School holidays are staggered across different regions, and the questionnaire happened to be sent to students' school e-mail addresses during a school holiday. Moreover, the deadline for completing the questionnaire fell within this holiday period. Therefore, most likely, many students did not open their school e-mail. For this exploratory study, we reasoned that the number of participants is still sufficient for interesting insights.
Relatedly, it is interesting to note that 75% of all participants at the intermediate vocational level attended a Christian (Reformed) school. Because their holiday was scheduled differently from the secular one, this might explain their relatively higher response rate. Whether religion could have any influence on the acceptance of care robots is unclear. It might be interesting to include religion in future research.
Implications of our results highlight that moral considerations are important in professional healthcare. Previous research emphasized the responsibilities of nurse educators and healthcare employers to provide learning opportunities for new care professionals in technical skills, to maintain patient safety, and to provide "good care" [40]. To achieve these goals, nursing students and trainee care professionals should understand the importance of using evidence-based guidelines and develop a reflective approach toward the performance of technical tasks. This requires nuanced care education that welcomes innovative technology while consistently testing such innovations against ethical principles. With the increasing importance of healthcare technology, it is imperative that (trainee) healthcare professionals learn skills and gain knowledge concerning health technology and medical informatics [41,42]. This should also include ethical considerations.

Conclusions
Trainee healthcare professionals today seem more willing to accept care robots than those in previous research [26][27][28][29]. The willingness to accept a care robot in the current study was mainly associated with moral considerations and less so with utility. The participants' discussion of applying healthcare robots was dominated by potential maleficence, particularly for assistive care robots, irrespective of participants' educational level. Companion robots were considered most acceptable and least harmful. Overall, we suggest enriching the curriculum of care students with classes on the ethical implications of robots as a new care technology. Based on applied research, students should learn which (type of) machine has value for which specific care task. Future care professionals should be prepared to encounter new mechanical colleagues on the work floor, be able to make the most of their virtues, and come to grips with their inconveniences.