Article

Acceptance of Medical Treatment Regimens Provided by AI vs. Human

1 Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China
2 Department of Psychology, School of Philosophy, Wuhan University, Wuhan 430079, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(1), 110; https://doi.org/10.3390/app12010110
Submission received: 30 October 2021 / Revised: 9 December 2021 / Accepted: 17 December 2021 / Published: 23 December 2021
(This article belongs to the Special Issue New Trends in Robotics, Automation and Mechatronics (RAM))

Abstract

With the rapid development of information technology, interactions between artificial intelligence (AI) and humans are becoming ever more frequent. In this context, a phenomenon called “medical AI aversion” has emerged, in which the same behaviors elicit different responses depending on whether they come from medical AI or from humans. Medical AI aversion can be understood in terms of how people attribute mind capacities to different targets. It has been demonstrated that when medical professionals dehumanize patients—attributing fewer mental capacities to them and, to some extent, not perceiving or treating them as fully human—they tend to choose more painful but more effective treatment options. From the patient’s perspective, will a painful treatment option become unacceptable when it comes from a doctor who is perceived as human yet disregards the patient’s own mental capacities? Conversely, might a painful treatment plan be acceptable precisely because the provider is an AI? The current study investigated these questions and the phenomenon of medical AI aversion in a medical context. Three experiments showed that: (1) when faced with the same treatment plan, patients accepted it more readily from a human doctor than from an AI; (2) the treatment provider and the nature of the treatment plan interacted to affect acceptance of the plan; and (3) perceived experience capacities mediated the relationship between treatment provider (AI vs. human) and treatment plan acceptance. Overall, this study attempts to explain medical AI aversion from the perspective of mind perception theory, and the findings are informative at the applied level for guiding the more rational use of medical AI and for persuading patients.

1. Introduction

Since AlphaGo’s comprehensive victory over human intelligence in Go, artificial intelligence (AI) has received widespread public attention. AI has developed so rapidly that it can now rival, or even surpass, humans in some tasks, and medicine may serve as one of the best examples [1]. In 2018, Stanford University developed a convolutional neural network algorithm called CheXNet; after being trained on a chest X-ray dataset, CheXNet could outperform professional physicians in identifying diseases such as pneumonia. In 2021, Montfort developed a system for the treatment of mental illness: not only can it monitor patients’ motor, cognitive, and mental states through smartphone sensors, it also uses AI to analyze and diagnose, promising to transform classical psychiatry into an exact science. The performance of medical robots is also impressive; for example, the da Vinci surgical robot developed by Intuitive was approved for minimally invasive surgery in 2005. IBM Watson, with its powerful natural language processing capabilities, can develop a care plan for a cancer patient in 10 minutes, whereas the same task would take a team of human experts 160 hours to complete [2]. Recent studies have shown that medical AI also plays an important role in telehealth [3], care services [4], and emergency department triage [5].
The rise of AI in the medical field seems to place it in opposition to humans, and whether physicians will be replaced by AI has become a popular topic. The British Medical Journal (BMJ) published a debate titled “Could artificial intelligence make doctors obsolete?” [6]. Proponents argued that AI would largely replace human physicians because of its enormous advantages in data processing, knowledge updating, working hours, training costs, and remote monitoring, as well as its lower susceptibility to bias and greater impartiality, since it has no conflicts of interest with patients. Opponents, however, suggested that AI cannot bring true comfort to patients when they are at their most vulnerable and in need of care, and thus human physicians still have an important role in the physician-patient relationship.
It follows that physicians and AI each have their own strengths in disease treatment: the convenience of AI lies in high efficiency and low cost, while the strength of physicians lies in human identity and emotional care. There is no doubt that both protect human health, but a comparison of treatment providers should also consider the standpoint of the recipient. Will patients who have long been accustomed to being diagnosed by a human physician accept the change when an AI takes on the same role? If patients show a negative attitude toward medical AI, even the most cutting-edge technology will go unused because of the resistance of its audience. This would waste technological and medical resources, potentially increase the burden on physicians, and pose a potential threat to patient health [7]. Therefore, the current research aims to compare patients’ acceptance of physicians and medical AI and to explore the psychological mechanisms behind the possible difference. More importantly, we tried to find ways to influence acceptance so as to better support decision-making in medical settings.

2. Literature Review and Theoretical Hypothesis

2.1. Algorithm Aversion

Patients’ attitudes toward human physicians and toward AI differ; that is to say, patients tend to resist AI. Patients not only perceive AI as lacking essential qualities of medical practitioners, such as enthusiasm and responsibility [8], but also have greater confidence in human experts [9]. They believe that AI is only an aid for physicians and that physicians do not have to follow AI advice at all times [10]. Cadario et al. suggested that consumers generally believe that AI is unable to meet their unique needs, unable to take responsibility for medical errors, and performs worse in medical work than humans [11]. People need physicians to take responsibility for their health, and thus they are more willing to listen to physicians’ advice [12]. Physicians, for their part, rely more on their own experience in treatment than on statistical models provided by AI [13], and reliance on statistical models may even be seen as a sign of a lack of expertise [14].
Resistance to AI is not only seen in medicine, but also in business, ethics, gaming, education, and other fields [15,16,17,18]. People are more likely to accept recommendations concerning books and movies from their friends rather than AI [19]. In addition, people would trust the impartiality of human interviewers more in the hiring process [20], would be more willing to take advice from human experts when investing [21], and would be more averse to ethical decisions that were made by AI [22].
The resistance stems from concerns about possible replacement by AI [23] and distrust of AI in life-or-death decisions [24], which Dietvorst referred to as algorithm aversion [25]. Algorithm aversion is a widespread phenomenon [26], so it is reasonable to assume that patients will also show an “aversion” to AI during medical treatment. Therefore, the first objective of this study is to verify the phenomenon of “medical AI aversion” and to formulate the first hypothesis:
Hypothesis 1.
Patients show less acceptance of the same treatment plan when it is proposed by an AI algorithm than when it is proposed by a human physician.

2.2. Acceptance of Treatment Regimens

In addition to whether the treatment is performed by a physician or an AI, the treatment plan itself can also influence the patient’s attitude. When treatment is needed, especially surgery, patients often experience anxiety due to concerns about the disease and fear of pain [27,28]. If the anxiety cannot be calmed, then patients often show resistance to treatment options [29]. Use of anesthetics to reduce the patient’s distress can be effective in reducing anxiety, thus contributing to healing and recovery [30].
In the past, the medical practitioner was usually responsible for formulating an effective treatment plan based on expertise. Nowadays, with both physicians and AI acting as medical practitioners [31], patients can choose between a doctor and a medical AI. Given the prevalence of “algorithm aversion” and the potential for “medical AI aversion”, we are concerned that patients will, because of their attitudes, not receive effective treatment when it is provided by AI, and we predict that the content of the treatment plan will influence and change patients’ aversion to medical AI. With the widespread use of AI technologies in medical image recognition, disease diagnosis, health management, disease prediction, and drug development, validating this discrepancy has become not only an academic issue but also a practical one. After all, the application of medical AI ultimately depends on patient acceptance of this new technology [32].
Based on the above, the second hypothesis is that,
Hypothesis 2.
Although patients are less receptive to medical AI, this phenomenon is also influenced by the content of the treatment plan. Specifically, when faced with a humane treatment plan (less painful but only moderately effective), patients are more resistant to medical AI, whereas when faced with a utilitarian treatment plan (painful but eradicating the disease), there is no difference in patients’ acceptance of physicians and AI.

2.3. Mind Perception

In previous research on the acceptance of medical AI, its efficacy has not been adequately discussed; patients may perceive AI as incapable of treating disease and may therefore be more resistant to it [12]. However, studies have demonstrated that statistical algorithmic models outperform human physicians in medical scenarios [33], and patients are not resistant to medical AI for this reason [32]. An important reason for believing that AI cannot replace physicians is the latter’s human identity [6]. This implies that patients see physicians, but not AI, as their own kind. Can such a perception be the reason for medical AI aversion?
People judge whether a target belongs to their own kind (human) based on their perception of its mental capacities [34]. Mind perception theory suggests that, in people’s perception, other people, animals, robots, and even the dead and gods have minds, and that these mental capacities are perceived along two dimensions: agency and experience [35]. Agency refers to the ability to think, reason, make plans, and realize intentions, while experience refers to the ability to feel emotions and sensations, such as experiencing pain [36].
Typically, robots are perceived as having near-human levels of agency but almost no experience [35]. At the same time, owing to the nature of their profession, physicians are perceived as having higher agency and lower experience than ordinary humans [37], and thus patients tend to view physicians as “empty vessels” [38]. Even so, although physicians, as human beings, are perceived to have less experience than ordinary humans, their perceived experience is still higher than that of AI, which provides a psychological basis for patients to see physicians as their own kind.
Based on the mind perception theory, we need to answer two questions. The first is whether the patients’ perception of the therapists’ mental capacities would have an impact on their attitudes. The second is whether the content of the treatment plan, which is an important message for physician-patient communication, could change the outcome of the patients’ mind perception of the therapists. Accordingly, the third hypothesis of this study is proposed:
Hypothesis 3.
The aversion to medical AI is influenced by people’s perception of its mental capacities, and the content of the treatment plan can change patients’ mind perception of the therapist, which, in turn, changes their original attitude toward medical AI. Specifically, patients perceive fewer mental capacities in AI than in physicians, which leads to lower acceptance of medical AI; moreover, compared with “humane” treatment plans, “utilitarian” treatment plans reduce patients’ evaluation of physicians’ mental capacities, which in turn can change patients’ aversion to medical AI.

3. Methodology

This study is about aversion to medical AI. Therefore, we first give an operational definition of aversion to medical AI, that is, when human doctors and medical AI provide the same treatment plan, individuals will accept the medical AI’s treatment less. On this basis, we need to answer three questions. First, is there a phenomenon where individuals have difficulty accepting medical AI treatment options? Second, if this phenomenon can be changed [39], will it be influenced by the content of specific treatment regimens? Third, do these differences in acceptance of medical providers reflect differences in mind perceptions of AI and humans to some extent?
We designed three experiments to answer these questions. Experiment 1 used a single-factor between-subjects design. Using a simulated medical treatment scenario in which either a doctor or a medical AI provided the same treatment plan, we expected to demonstrate the existence of aversion to medical AI by comparing participants’ acceptance of the two providers.
Experiment 2 used a 2 × 2 design in which different treatment regimens were added to the original provider manipulation. Through ANOVA, we expected to obtain evidence that the content of the treatment regimen can influence aversion to medical AI.
Experiment 3 also used a 2 × 2 design, with a measure of mind perception added. We expected to re-examine the influence of the treatment regimen on provider acceptance and to obtain evidence of the mediating role of mind perception between provider and acceptance through conditional process analysis and simple effect analysis. (Figure 1 shows the relationship between the three experiments, with Experiments 1–3 abbreviated as E1–E3.)
We chose the psychological experimental method for the following reasons. First, compared with questionnaire surveys, psychological experiments are better able to establish causality. Second, psychological experiments can compare differences between levels of an independent variable, such as AI vs. human providers or different types of regimens. Third, psychological experiments can collect data while preserving a degree of psychological realism.
All of the experimental data were analyzed using IBM SPSS Statistics 27. Given that all of the variables involved in this study are manifest variables, the mediation and moderation analyses of Experiment 3 were conducted using the PROCESS plug-in. Compared with SEM, PROCESS conveniently separates direct effects from indirect effects, enabling readers to better understand the mediating and moderating roles of mind perception and regimen content.
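As a reading aid, the decomposition that PROCESS reports can be sketched with ordinary least-squares regressions. The following minimal Python sketch uses simulated data and hypothetical variable names (provider, experience, acceptance); it illustrates the logic only and is not the authors’ analysis code.

```python
# Minimal sketch (not the authors' code): decomposing a total effect into
# direct and indirect components with two OLS regressions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
provider = rng.integers(0, 2, n)                     # hypothetical: 0 = AI, 1 = human physician
experience = 0.8 * provider + rng.normal(size=n)     # mediator
acceptance = 0.3 * provider + 0.5 * experience + rng.normal(size=n)
df = pd.DataFrame({"provider": provider, "experience": experience,
                   "acceptance": acceptance})

total = smf.ols("acceptance ~ provider", df).fit()                 # path c
mediator = smf.ols("experience ~ provider", df).fit()              # path a
outcome = smf.ols("acceptance ~ provider + experience", df).fit()  # paths c' and b

indirect = mediator.params["provider"] * outcome.params["experience"]  # a * b
print("total effect c  :", total.params["provider"])
print("direct effect c':", outcome.params["provider"])
print("indirect a*b    :", indirect)
```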

4. Experiment 1

Experiment 1 focuses on verifying the phenomenon of patient aversion to medical AI (Hypothesis 1). Specifically, we predicted that when faced with a treatment plan with the same content, patients will show lower acceptance of the treatment plan that was formulated by AI than human physicians.

4.1. Methods

We began by calculating the required sample size using G*Power 3.1 [40]. With ANOVA as the statistical approach, the parameters were set to a one-way ANOVA, a medium effect size f = 0.25, a significance level α = 0.05, and two groups. The calculation indicated that at least 106 participants were needed to achieve 80% statistical power. A total of 140 Chinese participants were recruited from the questionnaire platform Baidu Smart Cloud (an online survey platform based on the largest Chinese search engine, similar to Amazon Mechanical Turk), including 62 males and 78 females, with an average age of 28.72 ± 6.98 years. There were no restrictions on occupation, income level, or education, and each participant received monetary compensation at the end of the experiment. The experiment used a single-factor between-subjects design in which the participants were randomly and equally divided into two groups (the “human physician” group vs. the “AI” group). At the beginning of the experiment, all participants were asked to read a scenario about a medical visit.
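For readers without G*Power, a comparable a priori power calculation can be sketched in Python with statsmodels. This is only an illustration; the exact sample size depends on the test family and options selected in G*Power.

```python
# Sketch of an a priori power analysis comparable to the reported G*Power setup:
# one-way ANOVA, f = 0.25, alpha = 0.05, power = 0.80, 2 groups.
from statsmodels.stats.power import FTestAnovaPower, TTestIndPower

anova_power = FTestAnovaPower()
n_total = anova_power.solve_power(effect_size=0.25, alpha=0.05,
                                  power=0.80, k_groups=2)
print("total N for one-way ANOVA, f = 0.25:", round(n_total))

# With two groups the same question can be framed as an independent t-test
# (f = 0.25 corresponds to d = 0.5); results may differ slightly from G*Power.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print("n per group for t-test, d = 0.5:", round(n_per_group))
```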
The human physician group read the following content: “Please imagine the following scenario: During a medical checkup, a 7 mm diameter polyp is found in your abdominal wall, which will seriously affect your health in the long run, so you go to the general surgery department of the best local hospital to register. The physician you see is Dr. Zhang. You learn through official channels that Dr. Zhang has been working for 8 years, has treated more than 500 similar cases with a cure rate of 98%, and is an expert in general surgery. After reading your physical examination report, Dr. Zhang believes that surgical treatment is needed and develops a surgical plan for you: the polyp is removed in a painless and minimally invasive way, but the recurrence rate of polyps after surgery is around 20–25% because of the stimulation of the anesthetic drugs”.
This material focuses on the cause and course of events and uses fictitious practitioner data to enable participants to form a basic judgment about the therapist. The treatment regimen is imperfect: although it causes the patient less pain, it does not eradicate the disease.
The AI group read roughly the same material as the human physician group, with the difference that the treatment plan was made by an AI: “During a medical checkup, a 7 mm diameter polyp is found in your abdominal wall that can seriously affect your health in the long run, so you come to the general surgery department of the best local hospital to register. When you arrive at the hospital, you find out that, thanks to the popularity of AI technology in the medical field, abdominal wall polyps are all diagnosed here by a medical AI called ZAPRO-6, which only needs to scan your physical examination results to complete the diagnosis. Through official channels, you know that ZAPRO-6 has been in use in this hospital for 8 years, during which it has treated more than 500 similar cases with a cure rate of 98%, fully up to the level of a general surgery specialist. After scanning your physical examination report, ZAPRO-6 considers that surgical treatment is needed and develops a surgical plan for you: the polyp is removed in a painless and minimally invasive way, but the recurrence rate of polyps after surgery is around 20–25% because of the stimulation of the anesthetic drugs”.
After reading the material, the participants were asked to answer the following question, “To what extent would you be able to accept this treatment option?” [32], using a 7-point Likert scale (1 = not at all acceptable, 7 = fully acceptable). Finally, we included the question “What is the probability of recurrence of the disease under this regimen? (A: 30–35%; B: 20–25%)” as an attention check.
After the above procedures were completed, the participants reported their gender and age, received their payment, and ended the experiment.

4.2. Results

We retained data from participants who passed the attention check and completed the experiment within a reasonable time (within the mean response duration ± 1 SD), yielding a total of 121 valid responses. An independent-samples t-test on the acceptance scores of the two groups showed that the human physician group (M = 4.76, SD = 1.56) was more receptive to the treatment plan than the AI group (M = 4.12, SD = 1.47), t = 2.227, p = 0.025, Cohen’s d = 0.422.
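The comparison above is an independent-samples t-test with a pooled-standard-deviation effect size. A minimal sketch of that computation on hypothetical data (a long-format table with group and acceptance columns; not the authors’ analysis script) is shown below.

```python
# Sketch: independent-samples t-test and pooled-SD Cohen's d for two groups.
import numpy as np
import pandas as pd
from scipy import stats

def cohens_d(x, y):
    """Cohen's d using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Hypothetical long-format data: one row per participant.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": ["human"] * 60 + ["AI"] * 61,
    "acceptance": np.r_[rng.normal(4.76, 1.56, 60), rng.normal(4.12, 1.47, 61)],
})
human = df.loc[df.group == "human", "acceptance"]
ai = df.loc[df.group == "AI", "acceptance"]

res = stats.ttest_ind(human, ai)  # two-sided, equal variances assumed
print(f"t = {res.statistic:.3f}, p = {res.pvalue:.3f}, d = {cohens_d(human, ai):.3f}")
```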

4.3. Discussion

With Experiment 1, we obtained preliminary evidence of patients’ aversive attitudes toward medical AI. Specifically, participants showed lower acceptance of treatment regimens that were formulated by medical AI. In the next experiments, we will continue to validate this phenomenon and try to demonstrate that the content of treatment regimens, especially the difference between utilitarian and humane ones, plays a moderating role in the impact of treatment subjects on patients’ acceptance.

5. Experiment 2

In Lammers and Stapel’s study, a medical dilemma was presented by providing the participants with two different treatment options, one painful but more effective and the other less painful but less effective [41]. We followed this approach in Experiment 2 and named the two options the “utilitarian option” and the “humane option”; either the human physician or the AI provided the patient with one of these two surgical options. We aimed to explore whether participants would still show lower acceptance of the AI-provided regimen and whether the content of the regimen would also influence acceptance.

5.1. Methods

We began by calculating the required sample size using G*Power 3.1 [40]. With ANOVA as the statistical approach, the parameters were set to a one-way ANOVA, a medium effect size f = 0.25, a significance level α = 0.05, and two groups. The calculation indicated that at least 196 participants were needed to achieve 80% statistical power. We recruited 205 Chinese participants on the same questionnaire platform, including 113 males and 92 females, with an average age of 26.31 ± 7.22 years. There were no restrictions on occupation, income level, or education, and each participant received remuneration at the end of the experiment.
Experiment 2 used a 2 (treatment subject: human physician vs. AI) × 2 (regimen content: utilitarian vs. humane) between-subjects design. The participants were randomly and equally divided into four groups, and all of them read contextual material at the beginning of the experiment whose content was largely the same as in Experiment 1. The difference was the inclusion of a new surgical regimen: “Without anesthesia, the possibility of recurrence of polyps is completely eliminated, although you will be in great pain during the procedure”.
After reading the material, the participants rated their acceptance of the treatment option (as in Experiment 1) and rated the degree of pain and the effectiveness of the surgical option on 7-point Likert scales as a manipulation check: “How painful do you think this option is? (1 = very mild pain, 7 = very great pain)”; “How effective do you think this regimen is? (1 = completely unable to eradicate the disease, 7 = completely able to eradicate the disease)”. Finally, the participants provided demographic information (gender and age), completed the experiment, and received payment.

5.2. Results

A manipulation check of the regimen content was performed first. A full-model ANOVA was conducted on the perceived degree of surgical pain and the perceived effectiveness, with the treatment subject and regimen content groups as fixed factors. The results showed a significant effect of the regimen content manipulation on perceived surgical pain (M_utilitarian = 5.82, SD_utilitarian = 0.89; M_humane = 3.11, SD_humane = 1.03; t = 10.96, p < 0.001, d = 2.82), as well as on perceived surgical effectiveness (M_utilitarian = 6.13, SD_utilitarian = 0.77; M_humane = 5.24, SD_humane = 0.63; t = 8.22, p < 0.001, d = 1.27). Therefore, the manipulation of the content of the surgical regimen was successful.
Next, we examined the interaction of treatment subject and regimen content on acceptance. A full-model ANOVA was conducted with the treatment subject and regimen content groups as fixed factors and acceptance as the dependent variable. The results showed a significant main effect of treatment subject (M_physician = 4.17, SD_physician = 0.87; M_AI = 3.89, SD_AI = 0.96; F(1, 205) = 5.752, p = 0.017, d = 0.31), while the main effect of regimen content was not significant (F = 0.542, p = 0.462). More importantly, there was an interaction effect of treatment subject and regimen content on acceptance (F = 4.110, p = 0.044, d = 0.24): participants were more accepting of human physicians than of AI under the humane regimen (M_physician = 4.26, SD_physician = 0.71; M_AI = 3.70, SD_AI = 0.91; t(96) = 3.34, p < 0.001, d = 0.69), whereas under the utilitarian regimen there was no significant difference between acceptance of human physicians and AI (M_physician = 4.10, SD_physician = 0.99; M_AI = 4.05, SD_AI = 0.97; t(109) = 0.23, p = 0.818, d = 0.05; Figure 2).
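For concreteness, the 2 × 2 analysis can be sketched as a full-factorial ANOVA followed by simple-effect tests of provider within each regimen. The example below uses simulated data and hypothetical column names (provider, regimen, acceptance); it is illustrative only and not the authors’ SPSS procedure.

```python
# Sketch: 2 (provider) x 2 (regimen) between-subjects ANOVA with simple effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from scipy import stats

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "provider": np.repeat(["human", "AI"], 100),
    "regimen": np.tile(np.repeat(["humane", "utilitarian"], 50), 2),
})
# Hypothetical acceptance scores: provider matters mainly in the humane cell.
effect = np.where((df.provider == "human") & (df.regimen == "humane"), 0.6, 0.0)
df["acceptance"] = 4.0 + effect + rng.normal(0, 1, len(df))

# Full-factorial ANOVA (Type II sums of squares).
model = smf.ols("acceptance ~ C(provider) * C(regimen)", df).fit()
print(anova_lm(model, typ=2))

# Simple effects: provider comparison within each regimen.
for level in ["humane", "utilitarian"]:
    sub = df[df.regimen == level]
    res = stats.ttest_ind(sub.loc[sub.provider == "human", "acceptance"],
                          sub.loc[sub.provider == "AI", "acceptance"])
    print(f"{level}: t = {res.statistic:.2f}, p = {res.pvalue:.3f}")
```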

5.3. Discussion

In Experiment 2, we reconfirmed patients’ lower acceptance of AI; the aversion to medical AI persisted even in treatment contexts where different regimens were presented.
Experiment 2 also examined the mechanism of medical AI aversion by manipulating the content of the surgical regimen. Specifically, people showed lower acceptance of medical AI only when faced with a humane (painless but not fully effective) surgical regimen, while patients’ aversion to medical AI disappeared when they were faced with a utilitarian (painful but eradicative) surgical regimen. In Experiment 3, we will measure patients’ mind perception of the treatment subject and continue to examine the effects of the content of the surgical regimen.

6. Experiment 3

The purpose of Experiment 3 was twofold: first, to test the mediating effect of mind perception; second, because the content of the surgical regimen affects acceptance alongside the treatment subject, to test regimen content as a moderating variable in the model.

6.1. Methods

We began by calculating the required sample size using G*Power 3.1 [40]. With ANOVA as the statistical approach, the parameters were set to a one-way ANOVA, a medium effect size f = 0.25, a significance level α = 0.05, and two groups. The calculation indicated that at least 196 participants were needed to achieve 80% statistical power. A total of 227 Chinese participants (102 males and 125 females; mean age 29.82 ± 3.15 years), with no restriction on occupation, income level, or education, were recruited on the online questionnaire platform Baidu Smart Cloud. Each participant received remuneration at the end of the experiment.
This experiment also used a 2 (treatment subject: human physician vs. AI) × 2 (regimen content: utilitarian vs. humane) between-subjects design. The procedure followed Experiment 2, with all participants randomly and equally assigned to one of four groups, but with one modification: after the introduction of the treatment subject, all participants were presented with the following material.
“There are generally two surgical options for removing abdominal wall polyps: Option A is less painful but has the potential for recurrence, which means that the polyp is removed in a painless and minimally invasive way, but the recurrence rate of polyps after surgery is around 20–25% because of the stimulation of the anesthetic drugs. Option B is more painful but more effective, which means that, with no anesthesia, you will feel great pain during the procedure, but the possibility of recurrence of polyps is completely eliminated”.
After this description, we told the participants that the treatment subject had chosen one of the options for them. By presenting both options at the same time, we wanted to create a strong contrast so that the participants understood the treatment provider’s choice.
The four groups of participants rated their acceptance of the surgical regimen on a 7-point scale after reading the contextual material and completing an attention check. Next, following Gray et al.’s measure of mental capacities, the participants rated the treatment subject on a 7-point scale (1 = completely non-conforming, 7 = completely conforming) on the ten mental capacities with the highest factor loadings on the experience and agency dimensions [35]. The five capacities corresponding to experience were hunger, fear, pain, pleasure, and anger; the five corresponding to agency were making plans, emotion recognition, morality, memory, and self-control.
After answering all of the questions, the participants completed their demographic measures, received payment, and ended the experiment.

6.2. Results

First, an ANOVA was conducted with treatment subject and regimen content as independent variables and acceptance as the dependent variable. The results showed a significant main effect of treatment subject, with higher acceptance in the human physician group than in the AI group (M_physician = 4.14, SD_physician = 1.14; M_AI = 3.77, SD_AI = 1.32; F(1, 227) = 4.524, p = 0.035, d = 0.30), while the main effect of regimen content was not significant (F(1, 227) = 1.423, p = 0.234). In addition, the interaction between treatment subject and regimen content on acceptance was significant (F(1, 227) = 13.864, p < 0.001, d = 0.059). Specifically, acceptance of AI was lower than acceptance of human physicians for humane regimens (M_physician = 4.51, SD_physician = 0.95; M_AI = 3.58, SD_AI = 1.21; t(120) = 4.23, p < 0.001, d = 0.85), whereas for utilitarian regimens there was no significant difference between regimen developers (M_physician = 3.73, SD_physician = 1.21; M_AI = 3.98, SD_AI = 1.16; t(107) = −1.10, p = 0.272). Furthermore, within the human physician group, patients were less receptive to the utilitarian regimen (M_utilitarian = 3.73, SD_utilitarian = 1.21; M_humane = 4.51, SD_humane = 0.95; t(118) = −3.93, p < 0.001, d = 0.77), while there was no significant difference within the AI group (M_utilitarian = 3.98, SD_utilitarian = 1.16; M_humane = 3.58, SD_humane = 1.21; t(109) = 1.60, p = 0.112; Figure 2).
Next, we averaged the capacities corresponding to each of the two dimensions of mental ability to obtain experience and agency scores, and then examined the mediating effects of experience and agency separately.
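Each dimension score is simply the mean of its five item ratings; a small illustration with hypothetical column names (not the authors’ scoring script) follows.

```python
# Sketch: forming experience and agency composites as the mean of their items.
import pandas as pd

experience_items = ["hunger", "fear", "pain", "pleasure", "anger"]
agency_items = ["planning", "emotion_recognition", "morality", "memory", "self_control"]

# Hypothetical ratings: one row per participant, one column per 7-point item.
df = pd.DataFrame({item: [4, 5, 3] for item in experience_items + agency_items})
df["experience"] = df[experience_items].mean(axis=1)
df["agency"] = df[agency_items].mean(axis=1)
print(df[["experience", "agency"]])
```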
The mediating effect of mind perception in the relationship between treatment subject and acceptance was tested using Model 4 of the SPSS PROCESS macro developed by Hayes [42]. The results (see Table 1 and Table 2) showed that the difference in acceptance between treatment subjects was significant (β = 0.373, t = 2.287, p = 0.023) and was no longer significant when the relationship was mediated by experience (β = −0.034, t = −0.176, p = 0.861), indicating that experience fully mediated the relationship between treatment subject and acceptance. In addition, the effect of treatment subject on experience was significant (β = 1.337, t = 10.683, p < 0.001). Specifically, the participants rated experience higher for human physicians, and experience was a significant positive predictor of regimen acceptance (β = 0.305, t = 3.599, p = 0.004).
Similarly, we proceeded to test whether the other dimension of mental capacity, agency, also affected medical AI aversion. Model 4 of the SPSS PROCESS macro [42] was used to test the mediating effect of agency in the relationship between treatment subject and acceptance. The results (see Table 3 and Table 4) showed that the difference in acceptance between participants facing different treatment subjects was significant (β = 0.373, t = 2.287, p = 0.023) and, when the mediating variable was entered, was no longer significant (β = 0.249, t = 1.492, p = 0.137), indicating that agency also fully mediated the relationship between treatment subject and acceptance. In addition, the effect of treatment subject on agency was significant (β = 0.610, t = 4.070, p < 0.001). Specifically, the participants rated human physicians higher on agency, and agency was a significant positive predictor of acceptance (β = 0.205, t = 2.864, p = 0.046).
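PROCESS Model 4 amounts to the mediator and outcome regressions plus a bootstrap confidence interval for the indirect effect a × b. The sketch below reproduces that resampling logic on simulated data with hypothetical variable names; it uses a percentile bootstrap for simplicity, whereas PROCESS defaults to bias-corrected intervals, so it is an approximation rather than the authors’ analysis.

```python
# Sketch: percentile bootstrap CI for the indirect effect (a * b), analogous
# to a PROCESS Model 4 analysis with 5000 resamples.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effect(data):
    a = smf.ols("experience ~ provider", data).fit().params["provider"]
    b = smf.ols("acceptance ~ provider + experience", data).fit().params["experience"]
    return a * b

rng = np.random.default_rng(42)
n = 200
provider = rng.integers(0, 2, n)                      # hypothetical: 0 = AI, 1 = human physician
experience = 1.3 * provider + rng.normal(size=n)
acceptance = 0.1 * provider + 0.3 * experience + rng.normal(size=n)
df = pd.DataFrame({"provider": provider, "experience": experience,
                   "acceptance": acceptance})

boot = np.array([indirect_effect(df.sample(frac=1.0, replace=True))
                 for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```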
Finally, we conducted a conditional process analysis. With treatment subject as the independent variable, acceptance as the dependent variable, experience as the mediating variable, and regimen content as the moderating variable, the test was conducted using the bootstrap method suggested by Hayes [42]. Model 8 was chosen (Model 8 assumes that both the first stage of the mediation model and the direct path are moderated), with the bootstrap sample size set at 5000 and 95% bias-corrected confidence intervals. The results of the analysis (see Table 5) showed that, after entering regimen content into the model, the product term of treatment subject and regimen content had a significant effect on regimen acceptance (β = −1.113, t = −3.405, p = 0.001), while its effect on experience was in the same direction but did not reach significance (β = −0.356, t = −1.189, p = 0.235). This indicates that regimen content moderated the effect of treatment subject on acceptance and suggests, more tentatively, that it may also have influenced the effect of treatment subject on experience. Further simple slope analysis showed that, as shown in Figure 3, the patients were more receptive to human physicians than to AI for the humane surgical option (simple slope = 0.519, t = 2.032, p = 0.043), while they were less receptive to human physicians than to AI for the utilitarian surgical option (simple slope = −0.503, t = −2.076, p = 0.039). As can be seen in Figure 4, the patients rated the experience of human physicians higher than that of AI when faced with humane regimens (simple slope = 1.638, t = 9.656, p < 0.001); when faced with utilitarian regimens, they also rated the experience of human physicians higher than that of AI, although this effect was weaker (simple slope = 0.995, t = 5.542, p < 0.001). Through the above analyses, we verified the mediating role of experience in the relationship between treatment subject and acceptance, with the direct path (and, to a weaker extent, the first stage of the mediation) influenced by the content of the regimen, yielding the model results shown in Figure 4 and Figure 5.
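In a Model 8-style specification, the provider × regimen product term enters both the mediator equation and the outcome equation, and the indirect effect is then evaluated at each level of the moderator. A minimal sketch under those assumptions (simulated data, hypothetical variable names; not the authors’ PROCESS output) follows.

```python
# Sketch: PROCESS Model 8-style specification -- the provider x regimen product
# term enters both the mediator model and the outcome model, and the indirect
# effect is evaluated at each level of the moderator.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 240
provider = rng.integers(0, 2, n)        # hypothetical: 0 = AI, 1 = human physician
regimen = rng.integers(0, 2, n)         # hypothetical: 0 = humane, 1 = utilitarian
experience = (1.6 - 0.6 * regimen) * provider + rng.normal(size=n)
acceptance = (0.5 - 1.0 * regimen) * provider + 0.3 * experience + rng.normal(size=n)
df = pd.DataFrame({"provider": provider, "regimen": regimen,
                   "experience": experience, "acceptance": acceptance})

med = smf.ols("experience ~ provider * regimen", df).fit()
out = smf.ols("acceptance ~ provider * regimen + experience", df).fit()

for r in (0, 1):  # conditional indirect effect a(r) * b at each regimen level
    a_r = med.params["provider"] + med.params["provider:regimen"] * r
    b = out.params["experience"]
    label = "humane" if r == 0 else "utilitarian"
    print(f"{label}: conditional indirect effect = {a_r * b:.3f}")
print("index of moderated mediation =",
      round(med.params["provider:regimen"] * out.params["experience"], 3))
```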
Again, with treatment subject as the independent variable, acceptance as the dependent variable, agency as the mediating variable, and regimen content as the moderating variable, the test was conducted using the bootstrap method suggested by Hayes [42]. Model 5 was chosen (Model 5 assumes that only the direct path of the mediation model is moderated), with the bootstrap sample size set at 5000 and 95% bias-corrected confidence intervals. The results showed that the regimen content moderated the direct effect of treatment subject on regimen acceptance (β = −1.113, t = −3.405, p = 0.001).

6.3. Discussion

The results of Experiment 3 reconfirmed the two main conclusions obtained previously: (1) there is an aversion to medical AI, meaning that patients were less receptive to AI than to human physicians; and (2) patients’ inherent resistance to AI is modifiable. In particular, changes in the pain level and effectiveness of the surgical regimen moderated patients’ aversion to AI. Specifically, a regimen that was less painful but less effective amplified the original aversion to AI, making people more willing to accept human physicians; conversely, a regimen that was painful but eradicated the lesion reversed the medical AI aversion (i.e., people were more willing to accept the AI). This result was not entirely consistent with the findings of Experiment 2, which we believe is because the material in Experiment 3 showed participants that the physician had chosen between the two options on the patient’s behalf.
In addition to reconfirming the above findings, Experiment 3 also revealed new findings. Patients’ attributions of mental capacity (both experience and agency) to the physician and the AI fully mediated their aversion to the AI. Participants rated human physicians higher on experience and agency than the AI, and greater attribution of mind capacities to the treatment subject positively predicted acceptance.
Not only that, but the content of the surgical regimen also played a role in both mediation models. In the model with experience as the mediating variable, the regimen content not only moderated the direct path from treatment subject to acceptance but also appeared to influence the relationship between treatment subject and experience: when the surgical regimen was less painful but only moderately effective, the patient’s evaluation of the human physician’s experience was further enhanced, whereas when the surgical regimen was more utilitarian, that evaluation decreased. Similarly, in the model with agency as the mediating variable, the regimen content moderated the direct effect of treatment subject on acceptance.

7. Conclusions

The present research demonstrates a lower acceptance of AI by patients in medical contexts. However, this phenomenon is changeable. The differences in patients’ attitudes toward the treatment subject can be eliminated by presenting different surgical options, especially when the condition necessitates a utilitarian treatment approach. The difference in acceptance exhibited by patients operates through their mind perception of the therapist, and different surgical regimens can produce different mind perceptions of the treatment subject. When a human physician has a choice but draws up a utilitarian regimen, the patient lowers his or her evaluation of the physician’s mental capacities and thus shows a lower level of acceptance. Perhaps patients are not resisting the pain that they would suffer in treatment; they simply reject that it comes from a fellow human.
The main objective of the current study was to verify the phenomenon of patients’ aversion to medical AI. Algorithm aversion has been studied by psychologists in areas where AI is relatively widespread, starting with Meehl’s research [43]; in addition to medicine [44], a similar phenomenon occurs in business, law, management, logistics [45,46,47,48], and other fields [25]. The results of all three experiments in the present study show a much lower acceptance of AI by the participants, which revalidates and extends the previous studies.
At the same time, we tried to change patients’ aversion to medical AI and succeeded in doing so by presenting different surgical options. In fact, people’s aversion to algorithms is by no means constant, and past research has demonstrated the existence of algorithm appreciation. Logg et al. found that, in many judgment tasks, people weighted advice more heavily when they believed it came from an algorithm rather than from a human [39]. When considering the potential for discrimination in the hiring process, people perceive AI’s selection rules as fairer [49]. In moral dilemmas, AI agents are also blamed less than human agents for making the same utilitarian decision [50]. Although we did not shift patients’ attitudes toward medical AI from aversion to appreciation through our experimental manipulation, we found that there was no difference in patient acceptance of human physicians and AI when they were faced with treatment options that were clearly utilitarian in nature, implying that there are ways to change or eliminate this algorithm aversion.
In addition, we found that patients’ perceptions of the therapist’s mental capacities mediate the impact of the treatment subject on aversion to medical AI. Specifically, we demonstrated that the regimens drawn up by the treatment subject could alter patients’ mind perception, which, in turn, affects the degree of acceptance. In fact, the psychological mechanism of algorithm aversion reflects the patient’s perception of the human identity of the therapist. In general, in human perception processes, animals are seen as lower than humans in agency and robots as lower than humans in experience. In special cases, however, people also perceive pets and humanoid robots as more mentally competent and similar to humans, a phenomenon known as anthropomorphism [51]. Conversely, people also show differences in perceiving the minds of different humans. For example, Black people, women, and elderly people are sometimes perceived as having fewer mental capacities, and such denial of the human identity of others is referred to as dehumanization [52,53,54,55]. Anthropomorphism and dehumanization are both results of mind perception, but they have very different effects on people. Anthropomorphizing driverless cars can lead to more tolerance and trust [56], as well as to building trust and repairing relationships in human-computer interaction [57]. Dehumanization, on the other hand, triggers more harm toward and less help for the target [58], such as unfriendliness, indifference, and rejection [59,60,61]. Dehumanization is prevalent in the physician-patient relationship. Physicians and nurses are often confronted with life and death, so dehumanizing the patient as a “case” can reduce empathy and make it possible to take difficult decisions, such as chemotherapy, radiation, and amputation, to save lives [62]. In one study, when participants who played physicians showed a greater degree of dehumanization of patients, they were more inclined to adopt painful but efficient treatments [41]. When individuals perceive that others dehumanize them, they come to perceive those others in the same way, a process known as “meta-dehumanization” [63].
The present study suggests that patients are likely to perceive dehumanization by the medical practitioner. In Experiment 3 in particular, when participants learned that the human physician had chosen the utilitarian option from the two available, they could regard the doctor as ignoring their own mind (especially experience, e.g., the capacity to feel pain). In response, patients meta-dehumanize the therapist, that is, they also perceive fewer mental capacities in the therapist. One of the situations that patients resisted most in this study was when the surgeon chose a surgical option for them without anesthetics, which we believe is a manifestation of the patient’s meta-dehumanization. Some studies support this speculation; for example, utilitarianism is considered an important reason for dehumanization [64]. From the physician’s perspective, choosing to eradicate the disease completely without anesthetics is more utilitarian than a painless surgical option that leaves a risk of recurrence. In addition, people prefer AI, rather than humans, to make utilitarian decisions. From the patient’s perspective, medical AI is not human in the first place and, therefore, there is no difference in acceptance between its two options, whereas the physician, as a human, is harder to accept after choosing the more utilitarian option. Mind perception theory intuitively reflects the similarities and differences between human doctors and medical AI on the dimensions of agency and experience. Mind perception also affects people’s attitudes toward targets, but since the outcome of perception is affected by context, patients’ attitudes toward medical AI diagnosis may also change. We believe that curing disease is not just about eradicating disease through medical technology, but also about caring for patients during treatment and recovery. The results of this study can provide some reference for real-world medical work, namely making decisions that are more acceptable given patients’ perceptions of medical providers. This is a useful attempt to apply mind perception theory to medical AI: it not only confirms the perceived difference between human beings and inanimate agents but also helps extend the theory’s influence on the real world.
AI algorithms are bound to play an important role in future medical applications because of their huge advantages in information processing and image recognition. When a patient is in a life-or-death situation and urgently needs medical treatment, the solution with the greatest probability of saving a life is the best solution. However, patients’ widespread algorithm aversion is likely to bias their judgment. Therefore, by documenting the aversion to medical AI, we also hope that patients will come to properly understand the statistical models behind AI algorithms and make more rational decisions when faced with health problems [65]. Moreover, since resistance is not conducive to patient recovery and healing, in treatment situations where both human physicians and AI are present, different therapists can propose different solutions to enhance the persuasive effect.
The current research explored the phenomenon of patient aversion to medical AI and found a mediating role of mind perception as well as a moderating role of the treatment plan. Nevertheless, there are still some limitations, which imply directions for future research. First, this study did not recruit patient participants; we used written scenarios instead of real medical settings, taking into account in the design stage that most people have been to a hospital and will face choices between different treatment plans. In fact, patients in hospitals usually exhibit different psychological states from the general population, such as deeper anxiety and distress, a stronger need for social support, and more pronounced fear [66]. Moreover, people are often skeptical when AI is involved in decisions that concern their lives [67]. In addition, different clinical settings [68,69] and different types of diseases [70] may also influence patients’ attitudes toward physicians. All of the above factors are likely to affect patients’ treatment acceptance, and patients’ aversion to AI may be expressed even more strongly during real surgical treatment. To obtain more external validity and make our results more robust, more control variables can be included in future studies. It should also be noted that the samples in this study are from China, so cultural differences should be taken into consideration in future research. Furthermore, in this study, the difference between patient acceptance of the physician and the AI was influenced by presenting surgical regimens with different content. Future studies could look beyond the content of surgical regimens, for example by examining patients’ aversion to AI for different regimens in the diagnosis, treatment, and recovery phases.

Author Contributions

Conceptualization, F.Y. and L.X.; methodology, J.W.; writing—original draft preparation, J.W. and L.X.; writing—review and editing, F.Y., L.X. and K.P.; supervision, F.Y. and K.P.; funding acquisition, L.X. and F.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by National Natural Science Foundation of China (Grant No. 72101132) and National Social Science Foundation of China (Grant No. 20CZX059).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data available on request from the authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Parkin, S. The Artificially Intelligent Doctor Will Hear You Now. MIT Technology Review, 25 March 2018. Available online: https://www.technologyreview.com/s/600868/the-artificially-intelligent-doctor-will-hear-you-now/ (accessed on 22 December 2021).
  2. Hutson, M. Self-taught artificial intelligence beats doctors at predicting heart attacks. Science 2017, 14, 151516856.
  3. Wosik, J.; Fudim, M.; Cameron, B.; Gellad, Z.F.; Cho, A.; Phinney, D.; Curtis, S.; Roman, M.; Poon, E.G.; Ferranti, J.; et al. Telehealth transformation: COVID-19 and the rise of virtual care. J. Am. Med. Inform. Assoc. 2020, 27, 957–962.
  4. Keesara, S.; Jonas, A.; Schulman, K. Covid-19 and Health Care’s Digital Revolution. N. Engl. J. Med. 2020, 382, e82.
  5. Goto, T.; Camargo, C.A.; Faridi, M.K.; Freishtat, R.; Hasegawa, K. Machine Learning-Based Prediction of Clinical Outcomes for Children During Emergency Department Triage. JAMA Netw. Open 2019, 2, e186937.
  6. Goldhahn, J.; Rampton, V.; Weiss, B.; Spinas, G.A. Could artificial intelligence make doctors obsolete? BMJ 2018, 363, k4563.
  7. Gao, S.; He, L.; Chen, Y.; Li, D.; Lai, K. Public Perception of Artificial Intelligence in Medical Care: Content Analysis of Social Media. J. Med. Internet Res. 2020, 22, e16649.
  8. Haslam, N. Dehumanization: An Integrative Review. Pers. Soc. Psychol. Rev. 2006, 10, 252–264.
  9. Hripcsak, G.; Wilcox, A. Reference standards, judges, and comparison subjects: Roles for experts in evaluating system performance. J. Am. Med. Inform. Assoc. 2002, 9, 1–15.
  10. Arkes, H.A.; Dawes, R.M.; Christensen, C. Factors influencing the use of a decision rule in a probabilistic task. Organ. Behav. Hum. Decis. Process. 1986, 37, 93–110.
  11. Cadario, R.; Longoni, C.; Morewedge, C.K. Understanding, explaining, and utilizing medical artificial intelligence. Nat. Hum. Behav. 2021, 12, 1636–1642.
  12. Promberger, M.; Baron, J. Do patients trust computers? J. Behav. Decis. Mak. 2006, 19, 455–468.
  13. Keeffe, B.; Subramanian, U.; Tierney, W.M.; Udris, E.; Willems, J.; McDonell, M.; Stephan, D. Provider response to computer-based care suggestions for chronic heart failure. Med. Care 2005, 43, 461–465.
  14. Palmeira, M.; Spassova, G. Consumer reactions to professionals who use decision aids. Eur. J. Mark. 2013, 49, 302–326.
  15. Sanders, N.R.; Manrodt, K.B. The Efficacy of Using Judgmental versus Quantitative Forecasting Methods in Practice. Omega Int. J. Manag. Sci. 2003, 31, 511–522.
  16. Kleinberg, J.; Lakkaraju, H.; Leskovec, J.; Ludwig, J.; Mullainathan, S. Human Decisions and Machine Predictions. Q. J. Econ. 2018, 133, 237–293.
  17. Boatsman, J.R.; Moeckel, C.; Pei, B.K.W. The Effects of Decision Consequences on Auditors’ Reliance on Decision Aids in Audit Planning. Organ. Behav. Hum. Decis. Process. 1997, 71, 211–247.
  18. Kuncel, N.R.; Klieger, D.M.; Connelly, B.S.; Ones, D.S. Mechanical versus clinical data combination in selection and admissions decisions: A meta-analysis. J. Appl. Psychol. 2013, 98, 1060–1072.
  19. Sinha, R.; Swearingen, K. Comparing Recommendations Made by Online Systems and Friends. In Personalisation and Recommender Systems in Digital Libraries (DELOS 2001); Second DELOS Network of Excellence Workshop: Dublin, Ireland, 2001.
  20. Dineen, B.R.; Noe, R.A.; Wang, C. Perceived fairness of web-based applicant screening procedures: Weighing the rules of justice and the role of individual differences. Hum. Resour. Manag. 2004, 43, 127–145.
  21. Önkal, D.; Goodwin, P.; Thomson, M.; Gonul, S.; Pollock, A. The relative influence of advice from human experts and statistical methods on forecast adjustments. J. Behav. Decis. Mak. 2009, 22, 390–409.
  22. Bigman, Y.E.; Gray, K. People are averse to machines making moral decisions. Cognition 2018, 181, 21–34.
  23. Ford, M. The Rise of the Robots: Technology and the Threat of Mass Unemployment; Oneworld: New York, NY, USA, 2015.
  24. Kramer, M.F.; Borg, J.S.; Conitzer, V.; Sinnott-Armstrong, W. When Do People Want AI to Make Decisions? In Proceedings of the First Annual AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES-18), New Orleans, LA, USA, 13 February 2018.
  25. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Algorithm aversion: People erroneously avoid algorithms after seeing them err. J. Exp. Psychol. Gen. 2015, 144, 114.
  26. Kleinmuntz, D.N.; Schkade, D.A. Information Displays and Decision Processes. Psychol. Sci. 1993, 4, 221–227.
  27. Ferreira-Valente, M.A.; Pais-Ribeiro, J.L.; Jensen, M.P. Validity of four pain intensity rating scales. Pain 2011, 152, 2399–2404.
  28. Sullivan, E.E. The evolution of preoperative holding areas. J. PeriAnesthesia Nurs. 2009, 24, 119–121.
  29. Gilmartin, J.; Wright, K. Day surgery: Patients’ felt abandoned during the preoperative wait. J. Clin. Nurs. 2008, 17, 2418–2425.
  30. Apfelbaum, J.L.; Chen, C.; Mehta, S.S.; Gan, T.J. Postoperative pain experience: Results from a national survey suggest postoperative pain continues to be undermanaged. Anesth. Analg. 2004, 97, 534–540.
  31. Dawes, R.M.; Faust, D.; Meehl, P.E. Clinical versus Actuarial Judgment. Science 1989, 243, 1668–1674.
  32. Longoni, C.; Bonezzi, A.; Morewedge, C.K. Resistance to Medical Artificial Intelligence. J. Consum. Res. 2019, 46, 629–650.
  33. Eastwood, J.; Snook, B.; Luther, K. What People Want from Their Professionals: Attitudes Toward Decision-Making Strategies. J. Behav. Decis. Mak. 2012, 25, 458–468.
  34. Wegner, D.M.; Gray, K. The Mind Club; Viking: New York, NY, USA, 2017.
  35. Gray, H.M.; Gray, K.; Wegner, D.M. Dimensions of Mind Perception. Science 2007, 315, 619.
  36. Gray, K.; Wegner, D.M. Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition 2012, 125, 125–130.
  37. Goranson, A.; Sheeran, P.; Katz, J.; Gray, K. Doctors are seen as Godlike: Moral typecasting in medicine. Soc. Sci. Med. 2020, 258, 113008.
  38. Schroeder, J.; Fishbach, A. The “empty vessel” physician. Soc. Psychol. Personal. Sci. 2015, 6, 940–949.
  39. Logg, J.; Minson, J.; Moore, D.A. Algorithm Appreciation: People Prefer Algorithmic to Human Judgment; Social Science Electronic: New York, NY, USA, 2017.
  40. Faul, F.; Erdfelder, E.; Lang, A.-G.; Buchner, A. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav. Res. Methods 2007, 39, 175–191.
  41. Lammers, J.; Stapel, D.A. Power increases dehumanization. Group Process. Intergroup Relat. 2011, 14, 113–126.
  42. Hayes, A.F. PROCESS: A Versatile Computational Tool for Observed Variable Mediation, Moderation, and Conditional Process Modeling. 2012. Available online: http://www.Afhayes.com/public/process2012.pdf (accessed on 16 December 2021).
  43. Meehl, P.E. Clinical Versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence; University of Minnesota Press: Minneapolis, MN, USA, 1954.
  44. Chen, L.; Mislove, A.; Wilson, C. An Empirical Analysis of Algorithmic Pricing on Amazon Marketplace. In Proceedings of the 25th International Conference on World Wide Web—WWW ’16, Montreal, QC, Canada, 11–15 April 2016; pp. 1339–1349.
  45. Validi, S.; Bhattacharya, A.; Byrne, P. A solution method for a two-layer sustainable supply chain distribution model. Comput. Oper. Res. 2015, 54, 204–217.
  46. Shapiro, A. Reform predictive policing. Nature 2017, 541, 458–460.
  47. Cai, X.; Li, K. A genetic algorithm for scheduling staff of mixed skills under multi-criteria. Eur. J. Oper. Res. 2000, 125, 359–369.
  48. Cárdenas-Barrón, L.E.; Treviño-Garza, G.; Wee, H.M. A simple and better algorithm to solve the vendor managed inventory control system of multi-product multi-constraint economic order quantity model. Expert Syst. Appl. 2012, 39, 3888–3895.
  49. Lee, M.K. Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data Soc. 2018, 5, 205395171875668.
  50. Malle, B.F.; Scheutz, M.J.; Arnold, T.; Voiklis, J.; Cusimano, C. Sacrifice One for the Good of Many? People Apply Different Moral Norms to Human and Robot Agents. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, Portland, OR, USA, 2–5 March 2015.
  51. Epley, N.; Waytz, A.; Cacioppo, J.T. On seeing human: A three-factor theory of anthropomorphism. Psychol. Rev. 2007, 114, 864–886.
  52. Goff, P.A.; Jackson, M.C.; Di Leone, B.A.L.; Culotta, C.M.; DiTomasso, N.A. The essence of innocence: Consequences of dehumanizing Black children. J. Pers. Soc. Psychol. 2014, 106, 526–545.
  53. Bernard, P.; Gervais, S.J.; Allen, J.; Delmée, A.; Klein, O. From sex objects to human beings: Masking sexual body parts and humanization as moderators to women’s objectification. Psychol. Women Quart. 2015, 39, 432–446.
  54. Wiener, R.L.; Gervais, S.J.; Brnjic, E.; Nuss, G.D. Dehumanization of older people: The evaluation of hostile work environments. Psychol. Public Policy Law 2015, 20, 384–397.
  55. Haslam, N.; Loughnan, S. Dehumanization and Infrahumanization. Annu. Rev. Psychol. 2014, 65, 399–423.
  56. Waytz, A.; Heafner, J.; Epley, N. The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. J. Exp. Soc. Psychol. 2014, 52, 113–117.
  57. de Visser, E.J.; Monfort, S.S.; McKendrick, R.; Smith, M.A.B.; McKnight, P.E.; Krueger, F.; Parasuraman, R. Almost human: Anthropomorphism increases trust resilience in cognitive agents. J. Exp. Psychol. Appl. 2016, 22, 331–349.
  58. Haslam, N.; Stratemeyer, M. Recent research on dehumanization. Curr. Opin. Psychol. 2016, 11, 25–29. [Google Scholar] [CrossRef]
  59. Nagar, D.; Sadhu, P. Relationship between workplace spirituality with Work Performance, Well-being and Health. Indian J. Psychol. Educ. 2015, 5, 1–11. [Google Scholar]
  60. Viki, G.T.; Osgood, D.; Phillips, S. Dehumanization and self-reported proclivity to torture prisoners of war. J. Exp. Soc. Psychol. 2013, 49, 325–328. [Google Scholar] [CrossRef]
  61. Tam, O.K.; Tan, G.S. Ownership, governance and firm performance in Malaysia. Corp. Gov. Int. Rev. 2007, 15, 208–222. [Google Scholar] [CrossRef]
  62. Schulman-Green, D. Coping mechanisms of physicians who routinely work with dying patients. OMEGA J. Death Dying 2003, 47, 253–264. [Google Scholar] [CrossRef]
  63. Kteily, N.; Hodson, G.; Bruneau, E. They see us as less than human: Metadehumanization predicts intergroup conflict via reciprocal dehumanization. J. Personal. Soc. Psychol. 2016, 110, 343–370. [Google Scholar] [CrossRef] [Green Version]
  64. Zhang, H.; Chan, D.K.-S.; Cao, Q. Deliberating on Social Targets’ Goal Instrumentality Leads to Dehumanization: An Experimental Investigation. Soc. Cogn. 2014, 32, 181–189. [Google Scholar] [CrossRef]
  65. Grove, W.M.; Zald, D.H.; Lebow, B.S.; Snitz, B.E.; Nelson, C. Clinical versus mechanical prediction: A meta-analysis. Psychol. Assess. 2000, 12, 19. [Google Scholar] [CrossRef]
  66. Kulik, J.A.; Mahler, H.I. Social support and recovery from surgery. Health Psychol. 1989, 8, 221. [Google Scholar] [CrossRef] [PubMed]
  67. Davis, R.; Libon, D.J.; Au, R.; Pitman, D.; Penney, D.L. Think: Inferring cognitive status from subtle behaviors. AI Mag. 2014, 4, 2898–2905. [Google Scholar] [CrossRef] [Green Version]
  68. Ramnarayan, P.; Kapoor, R.R.; Coren, M.; Nanduri, V.; Tomlinson, A.L.; Taylor, P.M.; Wyatt, J.C.; Britto, J.F. Measuring the impact of diagnostic decision support on the quality of clinical decision making: Development of a reliable and valid composite score. J. Am. Med. Inform. Assoc. 2003, 10, 563–572. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  69. Yun, J.H.; Lee, E.J.; Kim, D.H. Behavioral and neural evidence on consumer responses to hu-man doctors and medical artificial intelligence. Psychol. Mark. 2021, 38, 610–625. [Google Scholar] [CrossRef]
  70. Berner, E.S. Diagnostic Decision Support Systems: How to Determine the Gold Standard? J. Am. Med. Inform. Assoc. 2003, 10, 608–610. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Research framework.
Figure 2. Interaction of treatment provider (human vs. AI) and regimen type on acceptance.
Figure 3. Interaction of treatment provider (human vs. AI) and regimen type on acceptance.
Figure 4. Moderating effect of regimen content on experience.
Figure 5. Moderating effect of regimen content on acceptance.
Table 1. Mediated model test of experience.

              Acceptance             Acceptance             Experience
              B        t             B        t             B        t
treatment     −0.034   −0.176        0.373    2.287 *       1.337    10.683 **
experience    0.305    3.599 **
R²            0.071                  0.023                  0.337
F             9.229                  5.232                  14.164

Note. * p < 0.05; ** p < 0.001.
Table 2. Decomposition of the total effect, direct effect, and mediated effect.

                                 Effect Value   Boot Standard Error   Boot CI Lower Limit   Boot CI Upper Limit
total effect                     0.373          0.163                 0.052                 0.695
direct effect                    −0.034         0.195                 −0.419                0.351
mediated effect of experience    0.329          0.106                 0.126                 0.540
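For readers who wish to see how bootstrapped quantities of the kind reported in Tables 1 and 2 are typically obtained, the sketch below estimates an indirect (mediated) effect with a percentile bootstrap on simulated data. It is a minimal illustration in the spirit of a PROCESS-style analysis, not the authors' script; the variable names, sample size, and effect sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (hypothetical): 'treatment' codes the provider (0 = AI, 1 = human),
# 'experience' is the rated experience capacity, 'acceptance' the regimen acceptance.
n = 200
treatment = rng.integers(0, 2, n).astype(float)
experience = 1.3 * treatment + rng.normal(0, 1, n)
acceptance = 0.3 * experience + rng.normal(0, 1, n)

def ols_coefs(x_mat, y):
    """OLS coefficients for y ~ 1 + x_mat."""
    X = np.column_stack([np.ones(len(y)), x_mat])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def indirect_effect(t, m, y):
    a = ols_coefs(t, m)[1]                        # path a: treatment -> mediator
    b = ols_coefs(np.column_stack([t, m]), y)[2]  # path b: mediator -> outcome, controlling for treatment
    return a * b

# Percentile bootstrap of the mediated (indirect) effect, analogous to Table 2.
n_boot = 5000
boot = np.empty(n_boot)
for i in range(n_boot):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(treatment[idx], experience[idx], acceptance[idx])

point = indirect_effect(treatment, experience, acceptance)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

The point estimate is the product of the a path (treatment → experience) and the b path (experience → acceptance, controlling for treatment), and the confidence interval is read off the bootstrap distribution, as in Table 2.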
Table 3. Mediated model test for agency.

              Acceptance             Acceptance             Agency
              B        t             B        t             B        t
treatment     0.249    1.492         0.373    2.287 *       0.610    4.070 **
agency        0.205    2.864 *
R²            0.057                  0.023                  0.069
F             6.801                  5.232                  16.568

Note. * p < 0.05; ** p < 0.001.
Table 4. Decomposition of the total, direct, and mediated effects.

                             Effect Value   Boot Standard Error   Boot CI Lower Limit   Boot CI Upper Limit
total effect                 0.373          0.163                 0.052                 0.695
direct effect                0.249          0.167                 −0.080                0.577
mediating effect of agency   0.125          0.051                 0.037                 0.232
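As a consistency check, the values in Tables 3 and 4 follow the standard decomposition of a simple mediation model: the total effect c equals the direct effect c′ plus the product of the a and b paths. Using the coefficients reported in Table 3 (a = 0.610, b = 0.205), with small discrepancies due to rounding:

$$ c = c' + ab \quad\Longrightarrow\quad 0.373 \approx 0.249 + 0.610 \times 0.205 = 0.374. $$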
Table 5. Mediated model tests with moderation.

                                      Experience              Acceptance
                                      β        t              β        t
treatment subject × regimen content   −0.644   −2.605 **      −1.021   −3.223 **
treatment subject                     1.335    10.827 **      0.037    0.195
regimen content                       −0.172   −1.397         −0.169   −1.083
experience                                                    0.25     2.957 **
R²                                    0.019                   0.041
F                                     6.786                   10.388

Note. ** p < 0.01.
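The moderated model summarized in Table 5 amounts to adding the treatment subject × regimen content interaction term to both the mediator (experience) and the outcome (acceptance) regressions. The sketch below shows one way such models can be fitted; the data are simulated and all variable names, codings, and effect sizes are hypothetical rather than taken from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated illustration (hypothetical coding):
#   subject : treatment provider, 0 = AI, 1 = human
#   regimen : regimen content, 0 = painless, 1 = painful
rng = np.random.default_rng(1)
n = 400
subject = rng.integers(0, 2, n)
regimen = rng.integers(0, 2, n)
experience = 1.3 * subject - 0.6 * subject * regimen + rng.normal(0, 1, n)
acceptance = 0.25 * experience - 1.0 * subject * regimen + rng.normal(0, 1, n)
df = pd.DataFrame({"subject": subject, "regimen": regimen,
                   "experience": experience, "acceptance": acceptance})

# Mediator model: experience ~ subject + regimen + subject:regimen
mediator_model = smf.ols("experience ~ subject * regimen", data=df).fit()
# Outcome model: acceptance ~ subject + regimen + subject:regimen + experience
outcome_model = smf.ols("acceptance ~ subject * regimen + experience", data=df).fit()

print(mediator_model.params)
print(outcome_model.params)
```

A conditional indirect effect at each level of regimen content can then be obtained by combining the interaction-adjusted a path from the mediator model with the b path (the experience coefficient) from the outcome model.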
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
