Big Data and Cognitive Computing
  • Editor’s Choice
  • Article
  • Open Access

30 May 2023

Breaking Barriers: Unveiling Factors Influencing the Adoption of Artificial Intelligence by Healthcare Providers

1 Department of Urology, Father Muller Medical College, Mangalore 575002, Karnataka, India
2 iTRUE (International Training and Research in Uro-Oncology and Endourology) Group, Manipal 576104, Karnataka, India
3 Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
4 Neuro-Informatics Laboratory, Department of Neurological Surgery, Mayo Clinic, Rochester, MN 55905, USA
This article belongs to the Special Issue Deep Network Learning and Its Applications

Abstract

Artificial intelligence (AI) is an emerging technological system that provides a platform to manage and analyze data by emulating human cognitive functions with greater accuracy, revolutionizing patient care and introducing a paradigm shift to the healthcare industry. The purpose of this study is to identify the underlying factors that affect the adoption of artificial intelligence in healthcare (AIH) by healthcare providers and to answer the question “What factors influence healthcare providers’ behavioral intention to adopt AIH in their routine practice?” An integrated survey was conducted among healthcare providers, including consultants, residents/students, and nurses. The survey included items related to performance expectancy, effort expectancy, initial trust, personal innovativeness, task complexity, and technology characteristics. The collected data were analyzed using structural equation modeling. A total of 392 healthcare professionals participated in the survey, 72.4% of whom were male and 50.7% of whom were 30 years old or younger. The results showed that performance expectancy and initial trust positively influence healthcare providers’ behavioral intention to use AIH. Personal innovativeness and technology characteristics positively influence effort expectancy, while task complexity and effort expectancy positively influence performance expectancy for AIH. The study’s empirically validated model sheds light on healthcare providers’ intention to adopt AIH, and its findings can be used to develop strategies to encourage this adoption. However, further investigation is necessary to understand the individual factors affecting the adoption of AIH by healthcare providers.

1. Introduction

With its myriad applications, artificial intelligence (AI) holds the promise to truly revolutionize patient care. Artificial intelligence in healthcare (AIH) as an emerging technological system enables healthcare providers to manage and analyze data by emulating human cognitive functions with greater accuracy []. This technology is introducing a paradigm shift to the healthcare industry, aided by the increasing availability of healthcare data and the progress of analytical techniques []. To reflect the legal, internationally accepted status of AI in healthcare, the medical device category of software as a medical device (SaMD) defines it as “analytical software with a significant potential for automating routine functions normally performed by a human”; such software does not simply “simulate” human analysis, but rather performs analytical tasks previously performed only by a human.
The primary aim of many AI applications within the healthcare space is to analyze and understand the relationships between prevention and/or treatment options and their related patient outcomes []. Natural language processing (NLP) techniques and machine learning techniques (MLT) are the two main categories of AIH. NLP augments structured medical data that can eventually be analyzed with MLT by utilizing unstructured data, including medical journals and clinical records. MLT, on the other hand, seeks to predict disease outcomes and cluster patients by their characteristics by analyzing structured medical data, including imaging and genetic profiles [,]. While research on AI’s potential in the healthcare industry is often directed towards validating its efficacy in improving care, the risks it may introduce to both patients and providers, such as algorithmic bias and machine morality issues, are also worthy of exploration. These possible challenges additionally highlight the need to regulate such technology []. This also underscores the importance of ethically integrating AI and big data into the current healthcare landscape, in order to develop armamentariums that are satisfactory to patients and providers in all strata. To do so, we must first appraise the various factors that affect the adoption of artificial intelligence by healthcare professionals.
The world is on the brink of what is often called “The Fourth Industrial Revolution”. Rapid advances in technology are helping to blur the lines between physical, biological, and digital realms to completely reimagine the way we experience most aspects of life. Exponential leaps in computing power and the increased availability of large volumes of data are hastening this progress and have led to a world in which AI is already ubiquitous []. From virtual assistants to drones and self-driving cars, most industries are seeking to explore its potential, and healthcare is no exception. Today, AI is commonly harnessed in the field of medicine to perform a vast range of functions, including administrative support, clinical decision making, and interventions in management; it has also been shown to reduce overall healthcare costs and medical errors, among other benefits []. These advantages could greatly improve a system that is additionally challenged by a surge in non-communicable diseases and an aging population. The availability of AI, however, does not guarantee its adoption. Throughout the world, healthcare providers are faced with an increased presence of technology in their day-to-day practice and must learn how to effectively channel these new resources into improving patient outcomes, and perhaps physician satisfaction as well []. Many questions remain unanswered about the opportunities and threats posed by AI in healthcare, as well as its effects on individual practitioners, policymakers, and institutions. Therefore, it is critical to learn what drives and what prevents healthcare professionals from integrating AI into their daily work [,,]. The current research aims to achieve this by extensively investigating such variables. One can ensure the successful application of AI in healthcare by first identifying potential barriers to adoption, and then developing effective tactics to overcome them.
This study aims to contribute significantly to the literature on AI in healthcare by investigating the factors that influence its uptake by healthcare practitioners. Through a comprehensive investigation of these factors, we may better understand what prevents wider acceptance and create strategies to resolve those barriers. Understanding the drivers of, and barriers to, the adoption of AI by healthcare providers is crucial to ensuring its smooth implementation in the industry. Our research has the potential to inform and influence healthcare authorities and organizations, as they are responsible for promoting the appropriate and productive use of AI.

3. Methodology

3.1. Study Setting

This cross-sectional survey was conducted with approval from the Institute Ethics Committee (ethical approval number FMIEC-102/2022), with all participants providing informed consent before taking part. A survey tool was constructed to determine the factors influencing, and the barriers preventing, the adoption of new technology by healthcare professionals with experience working in Indian hospitals. The structured questionnaire collected the demographic profile, including age, gender, and professional rank, and assessed up to four pertinent questions in each of nine major constructs: performance expectancy (PE), effort expectancy (EE), personal innovativeness in information technology (PIIT), task complexity (TAC), technological characteristics (TC), initial trust (IT), social influence (SI), perceived substitution crisis (PSC), and behavioral intention (BI). This is depicted in Table 1. All questionnaire items were adapted from preceding investigations with proven eligibility, with minor wording amendments to fit the context of the survey [,,]. Each item was assessed on a 5-point Likert scale, with answers ranging from one (strongly disagree) to five (strongly agree). The survey received 392 responses from 1 October 2022 to 28 February 2023 in the form of an online questionnaire, administered as an anonymized Google Form, and was disseminated via e-mail, social media, and messaging apps (Facebook, WhatsApp, and Messenger). The study proposed and evaluated 12 hypotheses, as depicted in Table 1. These hypotheses were formulated based on the objectives of the study, and the structural equation model was specified accordingly.
The results of the study indicated significant relationships between the constructs and provided insights into the factors affecting the implementation of AIH.
Table 1. Hypotheses based on the constructs.
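As a concrete illustration of how 5-point Likert responses of this kind are typically prepared for modeling, the sketch below averages each respondent's items into a per-construct score. The item values and the construct shown are hypothetical examples, not the study's actual data.

```python
import numpy as np

# Hypothetical responses from three participants to four performance-
# expectancy items (PE1..PE4), coded 1 = strongly disagree ... 5 = strongly
# agree. These values are illustrative only.
pe_items = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
])

# A common pre-analysis step: average each respondent's items within a
# construct to obtain one score per respondent per construct.
pe_scores = pe_items.mean(axis=1)
print(pe_scores)  # per-respondent PE construct scores
```

The same averaging would be repeated for each of the nine constructs before the construct scores enter the structural model.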

3.2. Data Analysis

The data collected from 392 respondents were analyzed using Partial Least Squares (PLS) structural equation modeling (SEM). PLS is a widely used SEM method that is often preferred over alternatives, such as covariance-based SEM (CB-SEM), when dealing with complex models or small sample sizes. It is a form of regression analysis that estimates the relationships between latent variables by finding linear combinations of observed variables that maximize the explained variance of the dependent latent variable. SmartPLS 3, a software program designed specifically for structural equation modeling using the PLS algorithm, was used to analyze the model and test the hypotheses formulated based on the objectives of the study. The study aimed to evaluate the relationships between latent constructs and identify the factors affecting the implementation of AIH (Artificial Intelligence in Healthcare). The constructs were derived from the questionnaire data, and the values for each construct were calculated from the survey responses. The flow of the constructs is illustrated in Figure 1, and Table 2 provides a detailed explanation of the constructs framed for the questions asked in the questionnaire. Overall, the use of PLS allowed us to evaluate the relationships between different constructs and provided valuable insights into the factors affecting the implementation of AIH. The findings have important implications for healthcare providers, policymakers, and researchers interested in advancing the use of AIH.
Figure 1. Structural model design based on hypotheses.
Table 2. Measurements and constructs utilized in the survey.

4. Results

4.1. Demographic Characteristics

In this study, the data of 392 respondents were examined, comprising 72.4% (284) male and 27.6% (108) female respondents. Respondents aged ≤30 years represented 50.7% (199), and the majority had less than 5 years of experience. Medical students accounted for 33.67% (132) of respondents, and the most common sub-specialization was urology (33.16%, n = 130) (Table 3).
Table 3. Socio-demographic characteristics.

4.2. Model Evaluation

Structural equation modeling (SEM) is used to define and analyze structured connections between variables, some of which may reflect unobservable notions deduced from directly observed variables. One-way arrows represent the hypothesized direction of causation of the links, and the resulting path model is represented by a set of equations. The goal of PLS in SEM, like that of multiple linear regression, is to maximize the explained variance of the dependent variables, and the model is largely assessed by its predictive power. PLS uses composites to represent unobserved variables; these must be viewed as proxies for the factors they replace, much as principal components serve as proxies in relation to factor analysis. The composites are simply linear combinations of the corresponding block of indicator variables. As a result, they are numerically specified and computed by a sequence of ordinary least squares regressions for each of the path model’s subparts, which are linked iteratively until convergence. The final estimated construct scores are then utilized to compute the remaining parameters of interest (weights, loadings, and path coefficients). This sequential procedure lowers the overall complexity of the model and makes it less sensitive to sample size. Consequently, PLS can handle increasingly complicated models while remaining computationally economical, and it can reach high levels of statistical power even with smaller samples.
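The iterative composite estimation described above can be sketched for the simplest case of a single path between two constructs. The following is a stripped-down illustration (mode A outer estimation with a centroid inner scheme) on simulated indicator data; SmartPLS implements the full multi-block algorithm with additional weighting schemes, so this is an assumption-laden teaching sketch, not the study's actual procedure.

```python
import numpy as np

def standardize(a):
    return (a - a.mean(axis=0)) / a.std(axis=0)

def two_block_pls(X1, X2, max_iter=300, tol=1e-8):
    """Minimal two-block PLS path estimation: composites are linear
    combinations of their indicator blocks, refined by alternating
    OLS-style steps until the outer weights converge."""
    X1, X2 = standardize(X1), standardize(X2)
    w1, w2 = np.ones(X1.shape[1]), np.ones(X2.shape[1])
    for _ in range(max_iter):
        y1, y2 = standardize(X1 @ w1), standardize(X2 @ w2)  # composites
        s = np.sign(np.corrcoef(y1, y2)[0, 1])               # centroid scheme
        nw1 = X1.T @ (s * y2) / len(X1)  # mode A: indicator-composite corr.
        nw2 = X2.T @ (s * y1) / len(X2)
        converged = max(np.abs(nw1 - w1).max(), np.abs(nw2 - w2).max()) < tol
        w1, w2 = nw1, nw2
        if converged:
            break
    y1, y2 = standardize(X1 @ w1), standardize(X2 @ w2)
    return np.corrcoef(y1, y2)[0, 1]     # standardized path coefficient

# Simulated example: one latent factor driving another (true path ~ 0.6),
# each measured by four noisy indicators.
rng = np.random.default_rng(42)
xi = rng.normal(size=500)
eta = 0.6 * xi + 0.8 * rng.normal(size=500)
X1 = xi[:, None] + 0.4 * rng.normal(size=(500, 4))
X2 = eta[:, None] + 0.4 * rng.normal(size=(500, 4))
print(round(two_block_pls(X1, X2), 2))   # recovers a coefficient near 0.6
```

The centroid sign and mode A correlation weights here stand in for the richer scheme choices (factorial, path weighting) that full PLS-SEM software offers.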
Model evaluation is the first step in PLS analysis; it examines the reliability and validity of each construct and thereby comprehensively reveals and validates the measured constructs in the AI adoption model for healthcare professionals. This is the essential prerequisite to validating the model based on the questionnaire. Table 4 exhibits the reliability and convergent validity indexes for the scale. Cronbach’s α represents internal consistency, with a value higher than 0.7 indicating good reliability. Since Table 4 shows a Cronbach’s α of more than 0.7 for every construct, the scale exhibits a high degree of reliability.
Table 4. Reliability and convergent validity.
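The Cronbach's α check described above can be computed on any respondents-by-items matrix. The sketch below uses simulated Likert-style data, not the study's responses, to show how the statistic is obtained from item and total-score variances.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores for one construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative data: four items driven by one latent trait, so responses
# are internally consistent and alpha should clear the 0.7 threshold.
rng = np.random.default_rng(0)
latent = rng.normal(size=200)
items = latent[:, None] + 0.5 * rng.normal(size=(200, 4))
print(round(cronbach_alpha(items), 3))  # > 0.7 indicates good reliability
```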
Construct validity comprises convergent validity and discriminant validity. Convergent validity is the degree to which an indicator relates to the other indicators of the same construct, and it is measured by three criteria: item loading, composite reliability, and the average variance extracted (AVE) for each construct (an AVE of >0.5 signifies that the construct explains more than half of the variance of its indicators), together with the outer loading of each indicator. The item loading of all standardized items is more than 0.70 and statistically significant. Composite reliability (CR) should be higher than 0.8; in our survey, the CR for each construct is more than 0.8, and the AVE is more than 0.5. Therefore, the criteria for a high level of convergent validity hold.
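CR and AVE are both simple functions of the standardized loadings. The loadings below are hypothetical values chosen to match the pattern reported here (all above 0.70), not the study's actual numbers.

```python
import numpy as np

def composite_reliability(loadings):
    lam = np.asarray(loadings)
    errors = 1 - lam**2                  # indicator error variances
    return lam.sum()**2 / (lam.sum()**2 + errors.sum())

def ave(loadings):
    lam = np.asarray(loadings)
    return (lam**2).mean()               # average variance extracted

# Hypothetical standardized loadings for one construct's four items.
loadings = [0.78, 0.82, 0.75, 0.80]
print(round(composite_reliability(loadings), 3),
      round(ave(loadings), 3))           # exceed the 0.8 and 0.5 thresholds
```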
Discriminant validity determines whether constructs in the model that should not be connected are indeed distinct. It is evaluated by examining the cross-loadings (the outer loading of an indicator on its linked construct should be greater than all of its loadings on the other constructs). Then, by rule of thumb, the square root of the AVE should be greater than all the correlation values between constructs. In this study, the square root of the AVE exceeds every correlation coefficient. Had this criterion not held, we would have checked the variance inflation factor (VIF): if the VIF of a construct is more than 10, the constructs are highly collinear, indicating multicollinearity; if multicollinearity is absent, the data are suitable for regression analysis. Table 5 exhibits the correlation analysis between each pair of constructs, and Cronbach’s α for each construct is reported in Table 4.
Table 5. Correlation analysis between constructs.
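Both checks described above, the Fornell-Larcker square-root-of-AVE comparison and the VIF fallback, can be expressed in a few lines. The construct scores and AVE values below are simulated and illustrative, not the values behind Tables 4 and 5.

```python
import numpy as np

# Simulated construct scores (respondents x 3 constructs) with modest
# inter-construct correlations; the AVE values are illustrative.
rng = np.random.default_rng(7)
z = rng.normal(size=(392, 3))
scores = np.column_stack([z[:, 0],
                          0.3 * z[:, 0] + z[:, 1],
                          0.2 * z[:, 0] + 0.3 * z[:, 1] + z[:, 2]])
ave_values = np.array([0.62, 0.58, 0.66])

# Fornell-Larcker criterion: sqrt(AVE) of each construct must exceed its
# correlations with every other construct for discriminant validity.
corr = np.corrcoef(scores, rowvar=False)
max_corr = np.abs(corr - np.eye(3)).max()
print(np.sqrt(ave_values).min() > max_corr)   # True -> criterion holds

# Variance inflation factor: regress each construct on the others;
# a VIF above 10 would flag problematic multicollinearity.
def vif(X, j):
    others = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
    resid = X[:, j] - others @ beta
    r2 = 1 - resid.var() / X[:, j].var()
    return 1 / (1 - r2)

print([round(vif(scores, j), 2) for j in range(3)])  # all well below 10
```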

4.3. Structural Model Evaluation

The second stage of PLS analysis is structural model evaluation for hypothesis testing. Once the measurement model’s validity is established, the examination of the structural model outcomes allows for an appreciation of the proposed model’s predictive capabilities. The primary criteria include the coefficients of determination (R2 values) of the endogenous variables and, if applicable, the predictive significance. Because PLS focuses on exploration, the quantification of the predicted links inside the inner model is of primary interest and frequently emerges as the primary outcome of the study. The path coefficients and effect sizes are the major estimates. Path coefficients (β) have standardized values ranging from −1 to +1, and they are interpreted similarly to standardized regression coefficients; within the structural model, they directly measure the hypothesized connections. Bootstrapping is used to produce the corresponding p-values and confidence intervals. The structural model for our study is shown in Figure 1, and the structural model estimates are shown in Table 6. In this study, EE (β = 0.041, p > 0.05), SI (β = 0.088, p > 0.05), and PSC (β = 0.004, p > 0.05) demonstrate no significant effects on the behavioral intention to adopt AIH. Hence, we rejected hypotheses H2, H4, and H12. Moreover, PE (β = 0.000, p < 0.05) and IT (β = 0.468, p < 0.05) show significant effects on behavioral intention, supporting hypotheses H1 and H5. In addition, all five constructs together explain 61.6% (R2 = 0.616) of the variation in behavioral intention. PE (β = 0.229, p < 0.05) and EE (β = 0.225, p < 0.05) demonstrate significant effects on IT, and together they explain 69.2% (R2 = 0.692) of the variation in IT. Therefore, H6 and H7 are supported. Task complexity (β = 0.148, p < 0.05) and EE (β = 0.722, p < 0.05) demonstrate significant effects on PE, together explaining 67.0% (R2 = 0.670) of the variation in PE. Therefore, we accept H3 and H10. Technology characteristics (β = 0.451, p < 0.05) and personal innovativeness in IT (β = 0.457, p < 0.05) demonstrate significant effects on EE, together explaining 69.8% (R2 = 0.698) of the variation in EE. Therefore, H9 and H11 are supported.
Table 6. Structural estimates (hypothesis testing).
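The bootstrapping step that yields the p-values and confidence intervals reported above can be illustrated for a single path. The data, the effect size, and the pair of constructs below are simulated stand-ins, not the study's estimates; the point is the resampling mechanics.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 392                                    # matches the survey sample size
ee = rng.normal(size=n)                    # simulated EE construct scores
pe = 0.7 * ee + 0.7 * rng.normal(size=n)   # simulated PE with a real effect

def std_beta(x, y):
    # For a single predictor, the standardized slope equals the Pearson r.
    return np.corrcoef(x, y)[0, 1]

boot = np.empty(2000)
for b in range(2000):
    idx = rng.integers(0, n, n)            # resample respondents w/ replacement
    boot[b] = std_beta(ee[idx], pe[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"beta = {std_beta(ee, pe):.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
# A bootstrap CI that excludes zero corresponds to p < 0.05 for that path.
```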

5. Discussion

Despite the ability of AI to generate high-quality data, there are reservations about its real-world application, in part due to the lack of clarity surrounding a technology that is often poorly understood []. The demand for AI-based services may also be affected by a wide spectrum of factors. A study by Kang S et al. comparing ward and non-ward nurses showed that individuals working in non-ward departments demonstrated a higher demand for AI tools to increase productivity than their ward counterparts []. The collective concerns of accuracy and efficiency must also be skillfully balanced against individual concerns of privacy and liability [].
This study aimed to investigate the attitude of healthcare professionals, specifically doctors, towards the use of artificial intelligence in their routine clinical practice. Most of the initial hypotheses of our model were supported. However, some unexpected results contradicted our original assumptions. The authors hereby provide insights into the determinant factors affecting the adoption of artificial intelligence by healthcare professionals.
  • Performance expectancy (PE), initial trust (IT), and propensity to trust (PT)
The degree to which an individual believes that utilizing a specific system will enhance his or her job performance is defined as performance expectancy (PE) (10). Healthcare professionals are inclined to act more cautiously than other technology adopters when deciding which technologies to adopt in order to provide high-quality healthcare services. The user’s perception of the AIH system’s ability represents a considerable dimension of trust in the AIH system, which is highly associated with the mathematical representation of error, the quality of the input data, and the algorithms involved in the decision-making process. McKnight and collaborators described IT with respect to technology as beliefs about a technology’s capability regardless of its antecedent motives or will []. Fan and collaborators considered PE and IT to be meaningful predictors of behavioral intention to adopt AIH (10). Furthermore, IT was demonstrated to have the most robust impact, which is inconsistent with the trend in previous studies, where PE is commonly found to have the greatest impact []. Nakrem and collaborators demonstrated that healthcare providers are less likely to adopt digital technology if they do not understand the underlying rationale for using it or do not consider that it can improve the quality of care delivered []. PT is defined as a stable contributing factor referring to a person’s overall predisposition to trust other people or technology []. Mayer’s trust theory reflects the pivotal role of PT as an antecedent factor shaping people’s trust, especially when there is no preceding interaction between trustee and trustor [].
  • Effort expectancy (EE) and social influence (SI)
Effort expectancy (EE) refers to the level of ease associated with technology adoption (19). Ease of use and usefulness are therefore closely tied to effort expectancy, as well as to performance expectancy, all of which are regarded as robust predictors of AIH adoption by healthcare providers (20). Social influence (SI) refers to the degree to which a person can convince others and expect them to adopt a new system []. SI was found to be a meaningful predictor of an individual’s intention to adopt health and mobile diet apps and other mobile health services, and behavioral intention (BI) is more affected by SI and facilitating conditions than by PE. As a therapeutic instance, when Chinese physicians encountered AIH-based technology, their therapeutic point of view was more prone to be driven by individuals with whom they share close relationships, e.g., hospital leaders, colleagues, and friends []. This concept, aligned with the ideology of a Chinese philosophy called “utilitarian Guanxi”, which combines profit with objective intentions, reveals the culture of vertical collectivism []. Moreover, social propaganda, such as news stories regarding the successful use of AIH-based technology by healthcare providers, is likely to affect physicians’ perceptions of adopting such technologies [,]. While the aforementioned contributing factors were argued to significantly affect the adoption of AIH-based technology in an appreciable number of studies, others did not report consistent results. Other factors, including standardization of healthcare practices and process orientation, corresponded dramatically with an augmented perceived ease of adoption of AIH-based technologies. With respect to the technology acceptance model, it is not unexpected that all the above-mentioned psychological agents could positively or negatively affect people’s intention to adopt AIH-based technologies. This, in turn, depends thoroughly on how professionals perceive the subsequent results of adopting such technologies and the extent to which they can trust them (20).
  • Personal innovativeness (PI)
Personal innovativeness (PI) refers to the degree to which a person is receptive to novel views and independently makes innovative choices []. According to innovation diffusion theory, differences in personal innovativeness lead to different individual reactions, yielding a predisposed tendency towards using an innovation []. As noted by Webster and collaborators, PI is a comparatively stable descriptor of each person and remains invariant across different situations and internal variables []. A growing trend in the literature has regarded PI as a pivotal contributing factor in the adoption of novel mobile commerce, consumer products, mobile payment, and mobile diet apps; it also appears to be an antecedent of the intention to adopt information and communication technologies []. Yi and collaborators highlighted the positive link between PI and perceived ease of adoption in the context of PDA acceptance by healthcare providers [].
  • Task complexity (TAC) and technology characteristics (TC)
Task complexity (TAC) is defined as the level of required load and perceived difficulty involved in performing a task successfully []; thus, if healthcare providers are convinced that their given tasks are challenging, they are more prone to accept AIH-based support to augment their PE []. According to one study, Canadian medical students perceived a promising role for AIH in addressing task complexity, as it has been efficient enough to build personalized medication, perform robotic surgery, and provide diagnosis and prognosis. In the case of complex diagnostic tasks that threaten a patient’s survival, healthcare professionals may perceive a higher appreciation of AIH-based technologies than of classic ones []. Easier-to-use processes and more transparent interfaces could make operating the technology less complex []. Zhou and collaborators verified that technological characteristics play a pivotal role as a direct antecedent of EE towards the intention to adopt mobile banking []. However, healthcare professionals have raised some concerns regarding the competency of digital platforms to consider socio-demographic characteristics, including gender, race, and ethnicity [].
  • Perceived substitution crisis (PSC)
With the rapid evolution of AIH applications, the hypothesis that doctors’ authority may be challenged, or that doctors may even be replaced, by AIH products has recently raised several concerns and remains one of the most serious barriers facing the adoption of AIH-based technology [,]. According to one study, 49% of English medical students did not express enthusiasm about pursuing radiology as a profession because of AIH, and 17% of German medical students were convinced that healthcare professionals might be replaced by AIH (29, 30). Fan et al. [] indicated that PSC has no meaningful impact on healthcare providers’ intention to adopt AIH systems. Optimally, AIH will not entirely replace professionals, but will instead enable them to focus more on the important aspects of health care [].
  • Background characteristics
Gardner and collaborators addressed age as another contributing factor, as younger healthcare professionals were the most likely to use automated pain recognition procedures []. According to another investigation, minorities and male individuals were more likely to adopt AIH services than to consult conventional physicians [].
  • Patient’s medical characteristics
According to a study among patients scheduled for magnetic resonance imaging or computed tomography, the severity of illness was proposed as the determinant factor for the adoption of AIH; in other words, AIH acceptance was meaningfully higher for diseases of medium-to-low severity than for high-severity diseases []. The role of previous experience of missed or delayed diagnoses should also be highlighted as a positively correlated factor in adopting AIH []. In one study in our literature review, patients suffering from cancer who had received traditional Chinese medicine, patients who had never experienced chemotherapy, and those who had undergone an operation were more likely to adopt AIH services [].
  • Fundamental pitfalls towards AIH adoption
At a time when the adoption of artificial intelligence is significantly changing the landscape of the healthcare industry, India is uniquely positioned to leverage AI to address its many shortcomings in patient care. As of June 2018, India had 0.76 doctors and 2.09 nurses per 1000 population, compared with the WHO’s recommendations of 1 doctor and 2.5 nurses per 1000 population []. The gap in patient care due to this shortage is further widened by overwhelmed primary healthcare services, the absence of uniformity in physician training, the inaccessibility of standardized testing facilities, and the poor maintenance of patient records. Another major barrier to the delivery of good care is the lack of access to quality healthcare services. A total of 67% of all doctors in India are clustered in urban areas, serving 33% of its population, and under-equipped public healthcare systems result in 78.8% of urban and 71.7% of rural cases being treated in private (and invariably more expensive) facilities []. India’s public health expenditure amounts to 1.28% of its GDP, one of the lowest figures globally; this corresponds to USD 62 per capita, far behind most BRIC and Southeast Asian countries []. Affordability also remains an issue, with a significant number of people (~63 million) being driven into poverty every year as a result of healthcare expenses. As of 2015, private expenditure accounted for approximately 70% of healthcare costs, ~62% of which was calculated to be out-of-pocket expenditure, estimated to be the highest of any country in the world. The liquidation of assets and the borrowing of loans remain the primary means of financing healthcare expenditure in ~47% of rural and ~31% of urban households, respectively. Needless to say, the poorest of the poor and the more marginalized sections of society are the worst affected [].
Several challenges remain for the management of big data, including a lack of interoperability and unstructured, unorganized data []. Moreover, data security, ethical use, and data privacy pose global challenges that remain substantial issues, particularly in developing countries []. Minority populations tend to be under-enrolled in the datasets used to develop AIH algorithms []. Many AIH algorithms are regarded as black boxes and cannot readily be evaluated for bias []. According to Precedence Research, the global AIH market size is projected to reach approximately USD 187.95 billion by 2030, expanding at a compound annual growth rate of 37% from 2022 to 2030 []. Combined private and public sector investment in AIH had exceeded USD 6.6 billion by 2021, indicating the scope of its utilization []. By applying AIH solutions, the industry is estimated to save approximately USD 150 billion per year by 2026 [].
The underrepresentation of certain populations in the datasets used to develop AIH algorithms, and the opaqueness of these algorithms with respect to bias evaluation, are concerning []. It is essential to ensure that AIH algorithms are developed with fairness and inclusivity in mind, to prevent the perpetuation of systemic biases and inequities in society. It is therefore critical to address the issues of underrepresentation and bias evaluation in the development of AIH algorithms in order to achieve equitable and just outcomes [,,].
  • Limitations
The current study suffers from several limitations, including that data collection was geographically limited to India, which warrants cross-regional studies. Subsequent studies in multidisciplinary/interdisciplinary healthcare settings containing more items, such as patients’ historical procedures, the severity of illness, and underlying disorders or cultural factors, could shed more light on this subject. The specific attributes of the different AI models being deployed in different healthcare settings are bound to differ and would require specific, relevant studies to understand the pearls and pitfalls of individual AI models. Furthermore, given the lack of specialized AI training and experience within our respondent cohort, it would be ideal to expand the study by giving the same questionnaire to a properly trained group of practitioners, with the correct definition of AI taught at the beginning of the survey, and to compare the two study groups to draw valuable evidence-based conclusions.

6. Conclusions

In summary, the purpose of this research was to establish the factors that prompt healthcare practitioners to embrace AI. The study found that healthcare providers are more likely to adopt AIH when they hold favorable expectations about its performance and the effort it requires, when they trust it, and when personal innovativeness, task complexity, and the characteristics of the technology are conducive. These results have major ramifications for applying AIH in clinical settings. The study's empirically validated model sheds light on healthcare practitioners' intentions to implement AIH. As our results show, performance expectancy, effort expectancy, and initial trust are major determinants of healthcare professionals' behavioral intentions to adopt AIH. Personal innovativeness, task complexity, and technology characteristics were also found to predict effort expectancy for AIH. These findings have important implications for healthcare organizations and policymakers, including the need to consider both individual and organizational aspects of the adoption process. Accordingly, healthcare organizations should prioritize training and assistance to raise healthcare professionals' performance and effort expectations, and should build initial trust in AIH by making information about the technology and its benefits readily available. AIH deployments should also account for individual innovativeness, the complexity of the task at hand, and the nature of the technology being used. There are, however, caveats to the study's findings. First, the sample is restricted to healthcare professionals from a single geographical area and may not be universally representative. Second, the study's cross-sectional design precludes conclusions about cause and effect. These caveats call for further research into the relationships among the identified factors and their effect on AIH adoption.
The results shed light on factors affecting healthcare professionals’ willingness to adopt AIH. These findings can be applied to the design of policies that promote AIH adoption and, eventually, better medical treatment.
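The two-layer structure summarized above (personal innovativeness, task complexity, and technology characteristics predicting effort expectancy; performance expectancy, effort expectancy, and initial trust predicting behavioral intention) can be illustrated with a minimal path-coefficient sketch. This is a hedged illustration on synthetic composite scores using ordinary least squares, not the paper's actual structural equation modeling analysis or its data; all coefficient values below are assumptions chosen for the example.

```python
import numpy as np

# Hypothesized paths (simplified; the study itself used full SEM):
#   effort_expectancy   <- personal_innovativeness + task_complexity + tech_characteristics
#   behavioral_intention <- performance_expectancy + effort_expectancy + initial_trust
rng = np.random.default_rng(0)
n = 392  # same sample size as the survey

# Synthetic composite scores standing in for Likert-scale constructs
pi, tc, tech = rng.normal(size=(3, n))
ee = 0.4 * pi + 0.3 * tc + 0.3 * tech + rng.normal(scale=0.5, size=n)
pe, it = rng.normal(size=(2, n))
bi = 0.5 * pe + 0.3 * ee + 0.2 * it + rng.normal(scale=0.5, size=n)

def path_coefs(y, X):
    """Ordinary least squares estimates of the path coefficients for y."""
    X1 = np.column_stack([np.ones_like(y), X])  # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]  # drop the intercept

b_ee = path_coefs(ee, np.column_stack([pi, tc, tech]))
b_bi = path_coefs(bi, np.column_stack([pe, ee, it]))
print("effort expectancy paths:", b_ee.round(2))
print("behavioral intention paths:", b_bi.round(2))
```

With all true path weights positive, every recovered coefficient comes out positive, mirroring the direction of the effects reported in the study; a real SEM analysis would additionally model latent constructs and measurement error.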

Author Contributions

Conceptualization, B.Z.H., N.N., S.I., P.C., B.P.R. and B.K.S.; Data curation, N.S.T. and M.J.S.; Formal analysis, S.I., N.S.T. and P.H.; Investigation, D.P. and P.H.; Methodology, B.Z.H., N.N., M.J.S. and P.C.; Project administration, P.C.; Resources, B.Z.H., M.J.S. and D.P.; Software, N.S.T. and P.H.; Supervision, N.N., P.C., B.P.R. and B.K.S.; Validation, N.S.T. and P.H.; Visualization, B.K.S.; Writing—original draft, B.Z.H., S.I., M.J.S. and D.P.; Writing—review and editing, N.N., P.C., B.P.R. and B.K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This research was conducted with permission from the Father Muller Medical College, Institute Ethics Committee (ethical approval number FMIEC-102/2022). All procedures performed in this study involving human participants were in accordance with the Declaration of Helsinki.

Data Availability Statement

The datasets analyzed for this study can be retrieved upon request from the corresponding author.

Conflicts of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Laï, M.-C.; Brian, M.; Mamzer, M.-F. Perceptions of artificial intelligence in healthcare: Findings from a qualitative survey study among actors in France. J. Transl. Med. 2020, 18, 14. [Google Scholar] [CrossRef] [PubMed]
  2. Jiang, F.; Jiang, Y.; Zhi, H.; Dong, Y.; Li, H.; Ma, S.; Wang, Y.; Dong, Q.; Shen, H.; Wang, Y. Artificial intelligence in healthcare: Past, present and future. Stroke Vasc. Neurol. 2017, 2, 230–243. [Google Scholar] [CrossRef] [PubMed]
  3. Darcy, A.M.; Louie, A.K.; Roberts, L.W. Machine Learning and the Profession of Medicine. JAMA 2016, 315, 551–552. [Google Scholar] [CrossRef] [PubMed]
  4. Price, I.; Nicholson, W. Artificial intelligence in health care: Applications and legal issues. SciTech Lawyer 2017, 14, 10–13. [Google Scholar]
  5. The Fourth Industrial Revolution: What It Means and How to Respond. World Economic Forum. Available online: https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/ (accessed on 11 June 2021).
  6. Ransing, R.; Nagendrappa, S.; Patil, A.; Shoib, S.; Sarkar, D. Potential role of artificial intelligence to address the COVID-19 out-break-related mental health issues in India. Psychiatry Res. 2020, 290, 113176. [Google Scholar] [CrossRef]
  7. Secinaro, S.; Calandra, D.; Secinaro, A.; Muthurangu, V.; Biancone, P. The role of artificial intelligence in healthcare: A structured literature review. BMC Med. Inform. Decis. Mak. 2021, 21, 125. [Google Scholar] [CrossRef]
  8. Miller, D.D.; Brown, E.W. Artificial intelligence in medical practice: The question to the answer? Am. J. Med. 2018, 131, 129–133. [Google Scholar] [CrossRef]
  9. Kang, S.; Baek, H.; Jung, E.; Hwang, H.; Yoo, S. Survey on the demand for adoption of Internet of Things (IoT)-based services in hospitals: Investigation of nurses’ perception in a tertiary university hospital. Appl. Nurs. Res. 2019, 47, 18–23. [Google Scholar] [CrossRef]
  10. Venkatesh, V.; Thong, J.Y.; Xu, X. Consumer acceptance and use of information technology: Extending the unified theory of ac-ceptance and use of technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
  11. Khanijahani, A.; Iezadi, S.; Dudley, S.; Goettler, M.; Kroetsch, P.; Wise, J. Organizational, professional, and patient characteristics associated with artificial intelligence adoption in healthcare: A systematic review. Health Policy Technol. 2022, 11, 100602. [Google Scholar] [CrossRef]
  12. Zhai, H.; Yang, X.; Xue, J.; Lavender, C.; Ye, T.; Li, J.-B.; Xu, L.; Lin, L.; Cao, W.; Sun, Y. Radiation Oncologists’ Perceptions of Adopting an Artificial Intelligence–Assisted Contouring Technology: Model Development and Questionnaire Study. J. Med. Internet Res. 2021, 23, e27122. [Google Scholar] [CrossRef]
  13. Pinto dos Santos, D.; Giese, D.; Brodehl, S.; Chon, S.H.; Staab, W.; Kleinert, R.; Maintz, D.; Baeßler, B. Medical students’ attitude towards artificial intelligence: A multicentre survey. Eur. Radiol. 2019, 29, 1640–1646. [Google Scholar] [CrossRef]
  14. Sit, C.; Srinivasan, R.; Amlani, A.; Muthuswamy, K.; Azam, A.; Monzon, L.; Poon, D.S. Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: A multicentre survey. Insights Imaging 2020, 11, 14. [Google Scholar] [CrossRef]
  15. Lennartz, S.; Dratsch, T.; Zopfs, D.; Persigehl, T.; Maintz, D.; Hokamp, N.G.; dos Santos, D.P. Use and Control of Artificial Intelligence in Patients Across the Medical Workflow: Single-Center Questionnaire Study of Patient Perspectives. J. Med. Internet Res. 2021, 23, e24221. [Google Scholar] [CrossRef]
  16. Naik, N.; Ibrahim, S.; Sircar, S.; Patil, V.; Hameed, B.M.; Rai, B.P.; Chłosta, P.; Somani, B.K. Attitudes and perceptions of outpatients towards adoption of telemedicine in healthcare during COVID-19 pandemic. Ir. J. Med. Sci. (1971-) 2021, 191, 1505–1512. [Google Scholar] [CrossRef] [PubMed]
  17. Tavares, J.; Ong, F.S.; Ye, T.; Xue, J.; He, M.; Gu, J.; Lin, H.; Xu, B.; Cheng, Y. Psychosocial Factors Affecting Artificial Intelligence Adoption in Health Care in China: Cross-Sectional Study. J. Med. Internet Res. 2019, 21, e14316. [Google Scholar] [CrossRef]
  18. Fan, W.; Liu, J.; Zhu, S.; Pardalos, P.M. Investigating the impacting factors for the healthcare professionals to adopt artificial in-telligence-based medical diagnosis support system (AIMDSS). Ann. Oper. Res. 2020, 294, 567–592. [Google Scholar] [CrossRef]
  19. Topol, E.J. High-performance medicine: The convergence of human and artificial intelligence. Nat. Med. 2019, 25, 44–56. [Google Scholar] [CrossRef]
  20. McKnight, D.H. Trust in information technology. Blackwell Encycl. Manag. 2005, 7, 329–331. [Google Scholar]
  21. Dulle, F.W.; Minishi-Majanja, M. The suitability of the Unified Theory of Acceptance and Use of Technology (UTAUT) model in open access adoption studies. Inf. Dev. 2011, 27, 32–45. [Google Scholar] [CrossRef]
  22. Nakrem, S.; Solbjør, M.; Pettersen, I.N.; Kleiven, H.H. Care relationships at stake? Home healthcare professionals’ experiences with digital medicine dispensers—A qualitative study. BMC Health Serv. Res. 2018, 18, 26. [Google Scholar] [CrossRef] [PubMed]
  23. Du, J.; Vantilborgh, T. Cultural Differences in The Content of Employees’ Psychological Contract: A Qualitative Study Comparing Belgium and China. Psychol. Belg. 2020, 60, 132. [Google Scholar] [CrossRef] [PubMed]
  24. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An Integrative Model of Organizational Trust. Acad. Manag. Rev. 1995, 20, 709. [Google Scholar] [CrossRef]
  25. Huang, C.Y.; Yang, M.C. Empirical Investigation of Factors Influencing Consumer Intention to Use an Artificial Intelligence-Powered Mobile Application for Weight Loss and Health Management. Telemed. J. e-Health Off. J. Am. Telemed. Assoc. 2020, 26, 1240–1251. [Google Scholar] [CrossRef] [PubMed]
  26. Wu, L.; Li, J.-Y.; Fu, C.-Y. The adoption of mobile healthcare by hospital’s professionals: An integrative perspective. Decis. Support Syst. 2011, 51, 587–596. [Google Scholar] [CrossRef]
  27. Webster, J.; Martocchio, J.J. Microcomputer Playfulness: Development of a Measure with Workplace Implications. MIS Q. 1992, 16, 201–226. [Google Scholar] [CrossRef]
  28. Yi, M.Y.; Jackson, J.D.; Park, J.S.; Probst, J.C. Understanding information technology acceptance by individual professionals: Toward an integrative view. Inf. Manag. 2006, 43, 350–363. [Google Scholar] [CrossRef]
  29. Zhou, T.; Lu, Y.; Wang, B. Integrating TTF and UTAUT to explain mobile banking user adoption. Comput. Hum. Behav. 2010, 26, 760–767. [Google Scholar] [CrossRef]
  30. Singh, R.P.; Hom, G.L.; Abramoff, M.D.; Campbell, J.P.; Chiang, M.F. Current challenges and barriers to real-world artificial intelli-gence adoption for the healthcare system, provider, and the patient. Transl. Vis. Sci. Technol. 2020, 9, 45. [Google Scholar] [CrossRef]
  31. Haider, H. Barriers to the Adoption of Artificial Intelligence in Healthcare in India; Institute of Development Studies: Brighton, UK, 2020. [Google Scholar]
  32. Shuaib, A.; Arian, H.; Shuaib, A. The Increasing Role of Artificial Intelligence in Health Care: Will Robots Replace Doctors in the Future? Int. J. Gen. Med. 2020, 13, 891. [Google Scholar] [CrossRef]
  33. Gardner, R.M.; Lundsgaarde, H.P. Evaluation of User Acceptance of a Clinical Expert System. J. Am. Med. Inform. Assoc. 1994, 1, 428–438. [Google Scholar] [CrossRef]
  34. Yang, K.; Zeng, Z.; Peng, H.; Jiang, Y. Attitudes of Chinese Cancer Patients toward The Clinical Use of Artificial Intelligence. Patient Prefer. Adherence 2019, 13, 1867. [Google Scholar] [CrossRef]
  35. Meyer, A.N.D.; Giardina, T.D.; Spitzmueller, C.; Shahid, U.; Scott, T.M.T.; Singh, H. Patient Perspectives on the Usefulness of an Artificial Intelligence–Assisted Symptom Checker: Cross-Sectional Survey Study. J. Med. Internet Res. 2020, 22, e14679. [Google Scholar] [CrossRef]
  36. National Strategy for AI Discussion. Available online: https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf#page24 (accessed on 11 June 2022).
  37. The Bengal Chamber. Reimagining the Possible in the Indian Healthcare Ecosystem with Emerging Technologies; PwC: Kolkata, India, 2018. [Google Scholar]
  38. Ministry of Health and Family Welfare. National Health Profile (NHP) of India-2020. Available online: https://www.cbhidghs.nic.in/showfile.php?lid=1155 (accessed on 12 June 2021).
  39. Guo, J.; Li, B. The Application of Medical Artificial Intelligence Technology in Rural Areas of Developing Countries. Health Equity 2018, 2, 174–181. [Google Scholar] [CrossRef]
  40. Gujral, G.; Shivarama, J.; Mariappan, M. Artificial Intelligence (AI) and Data Science for Developing Intelligent Health Informatics Systems. In Proceedings of the National Conference on AI in HI & VR, SHSS-TISS, Mumbai, India, 30–31 August 2020. [Google Scholar]
  41. Artificial Intelligence in Healthcare Market Size to Hit US$187.95 Bn by 2030. GlobeNewswire. Available online: https://www.globenewswire.com/en/news-release/2022/04/11/2420243/0/en/Artificial-Intelligence-in-Healthcare-Market-Size-to-Hit-US-187-95-Bn-By-2030.html (accessed on 11 June 2022).
  42. Naik, N.; Hameed, B.M.Z.; Shetty, D.K.; Swain, D.; Shah, M.; Paul, R.; Aggarwal, K.; Ibrahim, S.; Patil, V.; Smriti, K.; et al. Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility? Front. Surg. 2022, 9, 862322. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
