1. Introduction
Artificial intelligence (AI) has recently been at the forefront of global technological innovation, making headlines all over the world [1]. It emerged from the growing use of automation and technology to simulate human abilities in critical thinking and decision-making [2]. The field traces back to 1950, when Alan Turing raised his widely discussed question, “Can machines think?” [3].
Historically, rule-based systems marked the inception of machine intelligence, performing only what programmers specified; over time, they advanced into more sophisticated algorithms capable of imitating the human brain in performing countless complex tasks, collectively referred to today as AI [4]. In practice, AI encompasses diverse techniques, including machine learning (ML), deep learning (DL), natural language processing (NLP), and computer vision (CV) [4,5]. The capabilities offered by these applications include, but are not limited to, visual perception, speech recognition, decision-making, and translation between languages [6].
Since its emergence, AI has been adopted and employed across many sectors, including manufacturing, e-commerce, banking, education, and healthcare [6]. In healthcare, for example, AI adoption has been credited with improving patient outcomes, including the diagnosis of medical conditions, development of customized treatment plans, provision of preventive interventions, and discovery of drugs [7]. As a result, a notable transformation has been experienced in specialties such as medicine, radiology, pathology, and dermatology [8]. Beyond this, AI has gained a crucial role in supporting medical practice in the prognosis of conditions and the development of novel therapies.
AI applications are increasingly being used to improve clinical and administrative performance, which in turn improves overall patient outcomes. For example, NLP is being utilized to ease the burden of entering textual data into medical records [9]. Additionally, CV technology has improved the reading of radiological images by supporting their interpretation and analysis, while ML has assisted with data analysis and provided health professionals with insights [10].
Even though the adoption of AI has transformed the healthcare industry and produced exceptional performance in the services provided, a review of the literature revealed several challenges that hinder its full integration and application in clinical practice. The study of Aldhafeeri [11], conducted in the Saudi Arabian context, reported that the highest-scored concerns were uncertainty about the outcomes of AI systems and patients’ reliance on AI. Meanwhile, the study of Elnaggar et al. [12] reported that the replacement of healthcare providers’ jobs and patient privacy were the concerns respondents perceived as most important.
Despite the substantial body of literature targeting the ethical challenges associated with AI integration into clinical practice, there is a dearth of solid guidelines, standardized frameworks, and international collaboration that fully address healthcare AI-related ethical and practical issues. Therefore, further assessment of such challenges at the global level is warranted to promote the appropriate application of AI in healthcare. Additionally, proactive assessment of the ethical concerns related to AI integration will help close the current knowledge gap and guarantee the provision of safe, reliable, and high-quality patient care [13,14]. To address this gap, this study aimed to identify the ethical and practical considerations, from physicians’ and nurses’ perspectives, regarding AI integration in clinical practice in Saudi Arabia.
2. Materials and Methods
2.1. Study Design
A cross-sectional study was carried out including practicing physicians and nurses in Saudi Arabia, with the exclusion of dentists and those assigned administrative responsibilities. The sample size was calculated using the formula n = Z²P(1 − P)/d², where Z = 1.96 for a 95% confidence level, P = 0.5 as the assumed proportion, and d = 0.05 as the margin of error. Accordingly, the estimated sample size was 385 respondents.
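As an illustrative check (not part of the original analysis), the calculation can be reproduced in a few lines of Python; the rounded-up result matches the reported 385:

```python
import math

# Cochran's sample-size formula: n = Z^2 * P * (1 - P) / d^2
Z = 1.96   # z-score for a 95% confidence level
P = 0.5    # assumed proportion (0.5 maximizes the required sample)
d = 0.05   # margin of error

n = (Z**2 * P * (1 - P)) / d**2   # 384.16
print(math.ceil(n))               # 385
```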
2.2. Questionnaire Development and Validation
A closed-ended online questionnaire was developed by the research team and was voluntarily reviewed by three independent subject-matter experts to establish its validity. It included 26 questions divided into four sections: demographics (4 items), practitioners’ experience with AI (5 items), practitioners’ concerns about the integration of AI in clinical practice (10 items), and ethical challenges of integrating AI into clinical practice (7 items) (Supplementary Materials).
After receipt of IRB approval, the questionnaire was piloted on 30 randomly selected practitioners from different healthcare institutions who met the inclusion criteria, and the research tool was modified based on their feedback, comments, and suggestions. This step aimed to ensure the face validity of the developed questionnaire and to measure its internal consistency (reliability) using Cronbach’s alpha coefficient. The internal consistency was found to be good, with an overall Cronbach’s alpha of 0.866.
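For readers unfamiliar with this reliability measure, a minimal sketch of how Cronbach’s alpha is computed from a respondents-by-items score matrix is shown below. The pilot data here are hypothetical placeholders; the study’s value of 0.866 came from its own pilot responses:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # each item's variance across respondents
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: 5 respondents answering 4 Likert-scale items
pilot = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(pilot), 3))
```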
2.3. Conceptual Framework
Because the current study employed a descriptive design, aiming to present a population’s perspectives on the integration of AI in clinical practice without testing theoretical models or relationships among variables, a conceptual framework was not fundamental to achieving the study’s objectives and was therefore omitted.
2.4. Data Collection
Following IRB approval, data collection commenced. Questionnaires were distributed online via Google Forms on social media, including the X (formerly Twitter), WhatsApp, Telegram, and LinkedIn platforms. All collected responses were de-identified through anonymous participation, stored securely, and kept confidential. Participation was voluntary; respondents had to consent before answering any questions. Participants also had the right to withdraw at any point, in which case their responses were excluded immediately.
2.5. Statistical Analysis
Questionnaires were analyzed using the Statistical Package for the Social Sciences (SPSS) version 29. Descriptive data were summarized using frequencies and measures of central tendency and dispersion (mean and standard deviation (SD)), while inferential comparisons were made using the independent samples t-test.
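As an illustrative sketch only (the study used SPSS, and the scores below are hypothetical placeholders), the same group comparison can be expressed with SciPy:

```python
import numpy as np
from scipy import stats

# Hypothetical mean concern scores for the two professional groups
physicians = np.array([3.8, 4.1, 3.5, 4.4, 3.9, 4.0, 3.6])
nurses = np.array([4.2, 4.5, 4.0, 4.6, 4.3, 4.1, 4.4])

# Two-sided independent samples t-test; set equal_var=False for Welch's variant
t_stat, p_value = stats.ttest_ind(physicians, nurses, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```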
4. Discussion
AI has a promising impact on the provision of healthcare services. However, to ensure its effective and reliable operation in healthcare, AI must be comprehensively assessed and evaluated. This cross-sectional study addressed physicians’ and nurses’ perspectives regarding the ethical and practical considerations of AI integration in clinical practice. Their feedback represents healthcare providers’ perspectives on the use of AI in streamlining processes and supporting medical decision-making. Overall, the study reveals that both physicians and nurses lack the skills to use AI effectively, and it emphasizes that AI algorithms developed for one population might not be suitable for another.
Participants’ responses revealed a discrepancy: most do not apply AI in clinical practice, even though they are willing to do so. This willingness may indicate enthusiasm and a degree of knowledge; however, it appears that the organizations in which they practiced did not encourage or support healthcare providers in using this technology to provide medical care. As a result, the gap between willingness and actual practice is evident. Harnessed properly, the enthusiasm of physicians and nurses could promote the use of AI in clinical fields.
Regarding AI-related concerns from the perspective of healthcare professionals, our results reveal that physicians and nurses believe a lack of essential skills contributed to the ineffective adoption of AI in clinical fields. A lack of skills might mean losing the opportunity to leverage AI in providing better and safer clinical care. This implies a demand for fostering an organizational culture that supports the integration of AI in healthcare by equipping practitioners with the required knowledge and training to improve AI literacy and implementation [15]. The skills gap might exist because AI is still a relatively new technology in the medical field and is not used extensively in hospitals. This result aligns with Naik et al.’s study [16], which emphasized the importance of clinicians’ competencies and proficiency in augmenting the benefits gained from AI. Thus, for successful adoption of AI-supported clinical care, healthcare leadership should take a multidimensional approach in which AI governance is ensured by cultivating a culture of AI literacy and integration, focusing on strategic planning, providing mentoring and resources, and investing in training and development. In addition, to meet the potential demands of the healthcare sector, leaders in healthcare education should proactively equip the future healthcare workforce with the essential skills to fully harness AI’s potential. This is consistent with the study of Hamd et al. [17], which recommends teaching medical students about the use of AI as part of the medical school curriculum.
Another key finding of the study is the potential manipulation of AI databases by third parties. Most participants expressed worries that the quality of data produced by AI-based systems could lead to poor clinical outcomes and, eventually, patient safety issues. From a technical point of view, AI algorithms are trained on massive databases [18]. Consequently, inadequate data may be incorporated and affect the provision of healthcare services, which could result in substandard clinical outcomes. According to the study of Goktas and Grzybowski [19], the data themselves have the potential to negatively influence clinical recommendations, leading to suboptimal diagnosis of medical conditions, therapeutic plans, and overall patient health outcomes. Likewise, biased or manipulated data were the most common concerns perceived by healthcare professionals [18]. Moreover, the studies of Obermeyer et al. [20] and Hantel et al. [21] support our findings; they reported that AI algorithms exhibited racial bias affecting the prediction of health status among White and Black patients. Inadvertently, AI algorithms may carry potential biases that could lead to inequitable diagnosis or treatment [22]. In dermatology, for example, AI may exhibit racial bias when identifying skin disorders in people with darker skin compared with those with lighter skin types [23]. For rare dermatological conditions, as discussed by Refolo et al. [24], the lack of adequate image criteria and the absence of standardized guidelines mean that the data utilized for AI training can lead to biased AI algorithms and consequently expose patients to system vulnerabilities, including inequitable care.
Additionally, the study showed that physicians and nurses do not believe AI can accurately understand patients’ medical conditions. Accordingly, they consider the management and diagnosis of medical conditions by physicians more reliable than such technology. In contrast, the study of Elendu et al. [22] highlighted that the introduction of AI in healthcare has refined the role of practitioners in one way or another. Moreover, AI may facilitate and streamline the analysis of huge volumes of data and the identification of patterns that challenge physicians and other healthcare providers [25]. Nevertheless, failure to appropriately understand patients’ medical status might expose them to diagnostic errors and potential harm [24]. In addition, the study of Gundlack et al. [26] reported that AI cannot simulate essential human characteristics, such as patient–provider rapport in general, and empathy and emotional intelligence in particular, which can dramatically impact health outcomes. Despite the rapidly increasing introduction of AI in clinical practice, the study of Pressman et al. [27] demonstrated the inability of AI to substitute for physicians’ knowledge and judgment. This supports the view that AI-based tools and systems must be leveraged to help with and improve the provision of healthcare services, not to replace clinicians [28].
Regarding the challenges of AI, the current study identified three major issues as perceived by participants. First, algorithms programmed in one culture may not be appropriate for another. This is consistent with Tilala et al.’s findings [25], which concluded that an AI customized for one group might not be suitable for another, particularly if the two populations have different cultures and norms. Owing to such differences, AI might produce inaccurate outcomes. For example, AI algorithms that are widely disseminated and utilized in the US to determine the clinical care required for African American patients give different information for Caucasian patients, even when they have the same score [29]. This is supported by the findings of Monteith et al.’s study [30], which reported that AI models do not perform well when deployed in settings where the population’s characteristics differ from those used for training.
Another key point to highlight is the need to protect patient privacy and ensure data security in healthcare. According to the participants in this study, the privacy and security issues associated with the integration of AI-based systems are still not adequately addressed. This matter has been raised by many studies, such as Weiner et al. [31], He et al. [32], and Currie and Hawk [33]. It is evident that the availability of patients’ data is crucial for training and testing AI models; therefore, the lack of such data leads to limited training and eventually hinders the potential benefits of AI tools [6]. The tendency of AI systems to gather and analyze enormous volumes of patient data makes them a desirable target for hackers, with possible consequences including identity theft, monetary loss, harm to an individual’s reputation, and loss of trust [8]. This emphasizes the need for strong data protection protocols.
According to participants’ perspectives, this study found the cross-border issue to be one of the main challenges: information exchange and international collaboration might necessitate the adoption of different regulations and standards of care. The study of Lewin et al. [34] raised concerns about sharing patient information, stressing that this cutting-edge technology should not affect or breach individuals’ privacy. However, previous studies have emphasized that sharing patient information is fundamental to feeding AI models with real data; such models have the potential to assist healthcare providers in advancing precision medicine and optimizing care plans [5]. Despite the urgent need for regulatory rules to govern the disclosure of patient data at the national and international levels, the systematic review conducted by Karimian et al. [35] reported a lack of a comprehensive ethical framework for AI in healthcare. Nevertheless, sharing patient medical information mitigates potential biases and ensures that developers have open access to these data to enhance the testing and training of AI models [36].
Our study had several strengths, including a sample drawn from different healthcare settings and locations in Saudi Arabia, which increases the generalizability of the findings. In alignment with AI integration as part of the Saudi Healthcare Sector Transformation Program (HSTP), this study is well-timed and is, to our knowledge, the first at the national level to explore the ethical and practical issues related to AI integration. Moreover, the notable physicians’ responses signal the urgency for policymakers to accelerate AI integration as part of the Saudi HSTP. Another strength was the inferential analysis conducted: independent samples t-tests were used to examine the differences between physicians’ and nurses’ perspectives regarding AI concerns and the ethical challenges of its integration into clinical practice.
The study has four limitations. Firstly, it targeted only physicians and nurses; involving dentists and allied health professionals might have enriched the findings with insightful information. Secondly, the number of responses received is somewhat small compared with the number of physicians and nurses in the Kingdom of Saudi Arabia. Thirdly, the study focused on quantitative data only, using predetermined concerns and challenges; individual semi-structured interviews might have explored more sensitive factors related to the effective integration of AI in clinical practice. Fourthly, years of experience were not examined, although they may affect the awareness and willingness of healthcare providers to integrate AI into clinical practice.