Article

ChatGPT in Health Professions Education: Findings and Implications from a Cross-Sectional Study Among Students in Saudi Arabia

by Muhammad Kamran Rasheed, Fay Alonayzan, Nouf Alresheedi, Reema I. Aljasir, Ibrahim S. Alhomoud and Alian A. Alrasheedy *

Department of Pharmacy Practice, College of Pharmacy, Qassim University, Qassim 51452, Saudi Arabia

* Author to whom correspondence should be addressed.
Int. Med. Educ. 2026, 5(1), 6; https://doi.org/10.3390/ime5010006
Submission received: 16 November 2025 / Revised: 23 December 2025 / Accepted: 25 December 2025 / Published: 30 December 2025

Abstract

The integration of artificial intelligence (AI) tools, such as the chat generative pre-trained transformer (ChatGPT), into health professions education is rapidly accelerating, creating new opportunities for personalized learning and clinical preparation. These tools have demonstrated the potential to enhance learning efficiency and critical thinking. However, concerns regarding reliability, academic integrity, and potential overreliance highlight the need to better understand how healthcare students adopt and perceive these technologies to guide their effective and responsible integration into educational frameworks. This nationwide, cross-sectional, survey-based study was conducted between February and April 2024 among undergraduate students enrolled in medical, pharmacy, nursing, dental, and allied health programs in Saudi Arabia. An online questionnaire collected data on ChatGPT usage patterns, satisfaction, perceived benefits and risks, and attitudes toward integrating the tool into curricula. Among 1044 participants, the prevalence of ChatGPT use was 69.25% (n = 723). Students primarily utilized the tool for content summarization, assignment preparation, and exam-related study. Key motivators included time efficiency and convenience, with improved learning efficiency and reduced study stress identified as major benefits. Conversely, major challenges included subscription costs and difficulties in formulating effective prompts. Furthermore, concerns regarding overreliance and academic misconduct were frequently reported. In conclusion, the adoption of generative AI tools such as ChatGPT among healthcare students in Saudi Arabia was high, driven by their perceived ability to enhance learning efficiency and personalization. To maximize the benefits and minimize the risks, institutions should establish clear policies, provide faculty oversight, and integrate AI literacy into the education of health professionals.

1. Introduction

The digitization of healthcare services in Saudi Arabia in the past decade has resulted in improved patient-provider communication, decreased medication and treatment-related costs, enhanced healthcare delivery, and better diagnostic accuracy and patient satisfaction [1,2]. These achievements were made possible by the successful implementation of digital healthcare apps (e.g., Sehatty), electronic healthcare records, clinical decision support systems in hospitals, and healthcare integration [1,3]. The Saudi Data and Artificial Intelligence Authority (SDAIA), under the Vision 2030 plan, focuses on artificial intelligence (AI), machine learning (ML), and deep learning tools to advance the country’s healthcare system and overcome challenges in healthcare delivery, services, and operations [4].
AI-powered tools are gaining global popularity and revolutionizing how information is sought and presented. In the healthcare sector, AI-powered technologies are utilized to provide relevant medical information to patients and providers, assist physicians in diagnosing diseases, analyze healthcare data to predict treatment outcomes, and provide preventative measures to the community [5,6,7]. The chat generative pre-trained transformer (ChatGPT) chatbot can generate human-like language responses and create content (e.g., images, tables, texts, and videos) tailored to personalized needs and requirements [8,9]. ChatGPT has the potential to transform healthcare by supporting healthcare professionals and individuals in making evidence-based and informed decisions about their health [10,11]. As with any AI tool, however, careful evaluation is needed to ensure the reliability and appropriateness of its outputs in educational and clinical settings [12].
Analysis of large datasets and identification of patterns in patient data, aided by AI algorithms, can facilitate healthcare professionals’ informed clinical decision-making, leading to improved treatment outcomes and a reduction in medical errors [13]. For example, AI-powered medical imaging analysis, exemplified by systems detecting breast cancer from mammograms and diabetic retinopathy from retinal images, showcases its potential for precise and efficient diagnosis [14]. Similarly, AI health technologies (AIHTs), such as virtual healthcare assistant apps, are predicting patient health status changes, enabling nurses to provide personalized patient care [15]. By utilizing ChatGPT and other AI algorithms, pharmacists can evaluate complex patient data to identify potential adverse drug events, assess the safety and efficacy of medicines, and make accurate and evidence-based clinical decisions [16]. Healthcare systems are increasingly adopting AI tools to predict and diagnose a wide range of disease states, including liver disease, ophthalmic disorders, HIV and other sexually transmitted infections, and cardiovascular conditions, highlighting their expanding application in clinical practice [17,18,19,20,21].
Integrating ChatGPT in medical, dental, pharmacy, and public health education is gaining attention among university students; however, its use has received mixed responses among academic professionals [22]. ChatGPT in health professions education can simplify complex concepts and facilitate the comprehension of challenging medical content, thereby enhancing students’ learning outcomes [22,23]. Additionally, ChatGPT can serve as a virtual teaching assistant, delivering real-time feedback to students’ queries, providing access to updated healthcare information, and monitoring the progress of healthcare students in a purposeful and accessible learning environment [24,25]. However, academics and policymakers are concerned about ChatGPT’s impartiality, overreliance, plagiarism, influence on the critical thinking of students, and copyright issues [23,24,25].
Given the rapidly evolving landscape and advances in generative and agentic AI and their applications, it is pivotal to explore the role of ChatGPT in health professions education in Saudi Arabia. Accordingly, this study aimed at examining this topic from the perspective of healthcare students. The study’s specific objectives were (1) to evaluate the prevalence and patterns of ChatGPT use among healthcare students; (2) to examine healthcare students’ perceptions on the usefulness, perceived benefits, and concerns related to the use of ChatGPT in their studies; and (3) to identify factors influencing the use of ChatGPT among healthcare students. This study’s findings could assist educational institutions and policymakers in integrating AI-driven technologies into medical education curricula and in enhancing the educational outcomes and competencies of future healthcare professionals.

2. Methods

2.1. Study Design, Population, and Setting

This cross-sectional study used an online self-reported questionnaire. The study population comprised undergraduate university students from health colleges (medicine, pharmacy, dentistry, nursing, and allied health programs) across Saudi Arabia. This study was conducted between February and April 2024.

2.2. Sample Method and Sample Size Calculations

The sample size was calculated using the Raosoft® sample size calculator [26]. With a 95% confidence interval, a 5% margin of error, and a 50% level of variance, the minimum sample size was estimated to be 384. As the study utilized an online survey, convenience sampling was used to distribute the survey link and electronically invite the target population through various social media platforms.
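The minimum of 384 follows from the standard (Cochran) formula for estimating a proportion, which calculators such as Raosoft implement. A minimal sketch in Python (the variable names are ours, not the calculator's):

```python
# Cochran's sample size formula for a proportion: n = z^2 * p * (1 - p) / e^2
z = 1.96   # z-score for a 95% confidence level
p = 0.50   # assumed response distribution (50% maximizes the required sample)
e = 0.05   # margin of error

n = (z ** 2) * p * (1 - p) / (e ** 2)
print(round(n))  # ~384, the minimum sample size reported in the study
```

With the maximally conservative p = 0.50, any true prevalence yields at most this required sample, which is why 50% is the default assumption.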

2.3. Development of the Study Questionnaire

The development of the initial draft of the survey questionnaire was informed by prior studies in the literature on this topic [7,22,27,28] and discussions among the research team. Subsequently, the questionnaire was evaluated by two senior researchers with experience in questionnaire development and health professions education. After considering their feedback and suggestions, the questionnaire was pilot-tested with 20 undergraduate students. The students were asked to comment on the questionnaire’s language, usability, layout, and relevance. Subsequently, the questionnaire was discussed and finalized by the research team.
The questionnaire consisted of six parts. In the first part, participants were asked if they had used ChatGPT before the study. Only those who answered “yes” to this question proceeded to the rest of the questionnaire. The second part of the questionnaire collected demographic information from the participants. The third part of the questionnaire included items and questions related to the participants’ utilization of ChatGPT, and the fourth part evaluated their satisfaction with the information and responses provided by ChatGPT. The fifth part included items related to the participants’ perceptions of the usefulness and disadvantages of ChatGPT. The final part assessed the participants’ perspectives on ChatGPT coverage in the healthcare curriculum and activities. The questionnaire is presented in Supplementary Material S1.

2.4. Data Collection Process and Study Flow

Google Forms was used to create the online questionnaire owing to its convenience, ease of use, and security. After creating the online survey, the URL link was shared with the undergraduate students’ social media groups on WhatsApp, X, and Telegram. Additionally, the survey link was shared with university staff from health colleges in Saudi Arabia, who were asked to share it with their students. Before commencement of the survey, study participants were prompted to read the study information sheet and provide their consent to participate. Several measures were built into the online survey, including preventing repeated and duplicate responses from the same device and requiring all items to be answered before submission, thereby preventing incomplete responses.
During the study period, a total of 1044 students participated in the survey. In the first question of the questionnaire, the participants were asked whether they had ever used ChatGPT before this study, and 723 out of the 1044 (69.25%) responded with “yes”, while the rest (n = 321; 30.75%) responded with “no.” Consequently, the prevalence of ChatGPT use among healthcare students was 69.25%. Those who reported prior use of ChatGPT (i.e., ChatGPT users) were asked to complete the questionnaire, while those who had never used ChatGPT (i.e., non-users of ChatGPT) were asked to exit the survey and submit the form, and were not included in the study analysis. This approach was adopted because the study aimed at exploring the experiences, satisfaction, and perceptions of ChatGPT users. Accordingly, all analyses and reported outcomes pertain to participants who reported prior use of ChatGPT (n = 723). The study flow is presented in Figure 1.
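As a quick consistency check on the headline figure, the prevalence, together with a 95% confidence interval, can be derived directly from the counts above. The interval is our addition (a normal-approximation sketch) and is not reported in the paper:

```python
import math

# Prevalence of ChatGPT use: 723 users out of 1044 respondents.
users, total = 723, 1044
prevalence = users / total
print(f"{prevalence:.2%}")  # 69.25%

# 95% confidence interval via the normal approximation -- our addition,
# not a figure reported in the study.
se = math.sqrt(prevalence * (1 - prevalence) / total)
lower, upper = prevalence - 1.96 * se, prevalence + 1.96 * se
print(f"95% CI: {lower:.2%} to {upper:.2%}")
```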

2.5. Data Analysis

Data from the completed questionnaires were downloaded from Google Forms as a Microsoft® Excel® sheet and imported into IBM® SPSS® Statistics for Windows, version 20.0. Descriptive and inferential statistical analyses were conducted. Descriptive statistics (frequencies and percentages) were used to summarize data. The chi-square test assessed the association between demographic characteristics and study variables. Statistical significance was set at p < 0.05.
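To illustrate the inferential step, the following hand-rolled chi-square test of independence runs on a hypothetical 2x2 table (year of study vs. frequent ChatGPT use). The counts are illustrative only, not the study's data; SPSS's chi-square procedure computes the same statistic:

```python
# Hypothetical 2x2 contingency table (NOT the study's data):
# rows = early vs. later years of study; columns = frequent users vs. others.
observed = [[60, 160],
            [100, 140]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# chi2 = sum over cells of (observed - expected)^2 / expected,
# with expected = row_total * col_total / grand_total.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        chi2 += (obs - expected) ** 2 / expected

# df = (rows - 1) * (cols - 1) = 1; critical value at alpha = 0.05 is 3.841.
print(f"chi2 = {chi2:.2f}, significant: {chi2 > 3.841}")  # chi2 = 10.48, significant: True
```

In practice, `scipy.stats.chi2_contingency` would also return the exact p-value; the manual version above just makes the arithmetic behind the reported p < 0.05 thresholds explicit.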

3. Results

3.1. Demographic Data and Characteristics of Participants Using ChatGPT (n = 723)

Most participants were female students (n = 430; 59.47%). The participants were students from different healthcare programs, including medicine (n = 219; 30.29%), pharmacy (n = 178; 24.62%), nursing (n = 105; 14.52%), dentistry (n = 57; 7.88%), and other allied healthcare programs (n = 164; 22.68%). They included students enrolled in the programs’ first through sixth years of study, as well as those in the one-year internship period. They were from all regions of Saudi Arabia, with the majority from the central region (n = 316; 43.71%), followed by the western (n = 133; 18.40%) and eastern regions (n = 108; 14.94%). Participant characteristics are presented in Table 1.

3.2. Utilization of ChatGPT Among Healthcare Students

As shown in Table 2, 305 (42.19%) participants reported using ChatGPT “sometimes,” while 250 (34.58%) used it either “often” or “always,” and the rest (n = 168, 23.24%) used it “rarely.” The top three factors influencing healthcare students’ decision to use ChatGPT in their studies were time-saving (n = 521; 72.06%), convenience (n = 289; 39.97%), and trust in the accuracy of the information provided (n = 237; 32.78%). The purposes of using ChatGPT in their studies included summarizing academic articles (n = 368, 50.90%) and writing assignments (n = 325, 44.95%). Of the participants, 523 (72.34%) encountered limitations or challenges while using ChatGPT. These included costs (n = 302; 57.74%), a lack of expertise (n = 221; 42.26%), and technical or connectivity issues (n = 183; 34.99%).

3.3. Participants’ Satisfaction with the Information and Responses Provided by ChatGPT

In this study, 459 (63.49%) participants were satisfied with the information and responses provided by ChatGPT, 211 (29.18%) were neutral, and 53 (7.33%) were dissatisfied (Figure 2).

3.4. Participants’ Perceived Usefulness and Disadvantages of ChatGPT

In this study, most healthcare students considered ChatGPT’s usefulness in helping with their courses as “extremely/very helpful” (n = 282; 39.00%) or “moderately helpful” (n = 244; 33.75%), while the rest (n = 197; 27.25%) rated it “slightly” or “not helpful at all.” Many students who participated in this study believed that ChatGPT could be beneficial in improving learning efficiency (n = 408; 56.43%), reducing study stress (n = 387; 53.53%), and enhancing critical thinking skills (n = 282; 39.00%). However, the perceived disadvantages of using ChatGPT included overreliance (n = 399; 55.19%), academic integrity/misuse (n = 374; 51.73%), lack of human interaction (n = 288; 39.83%), and lack of clear regulations and guidelines for the responsible use of ChatGPT (n = 181; 25.03%) (Table 3).

3.5. ChatGPT in Health Professions Curriculum and Activities

More than half of the study participants (n = 383; 52.97%) indicated that they “never” discussed the use of ChatGPT with their instructors, whereas 191 (26.42%) indicated “rarely,” 120 (16.60%) indicated “sometimes,” and 29 (4.01%) “often” discussed it with their instructors. Students believed that the current coverage of ChatGPT in their program curriculum and activities was “very low” (n = 273; 37.76%); 314 (43.43%) considered it “average,” while only 136 (18.81%) considered it “high.” However, most participants (n = 575; 79.53%) believed that the extent of teaching about the appropriate use of ChatGPT in healthcare should be at least average or above. Furthermore, 421 (58.23%) students believed that integrating ChatGPT into education could positively impact the learning experiences of healthcare students (Table 4).

3.6. Association Between Participants’ Demographics and Frequency of Utilization of ChatGPT, Satisfaction with It, and Their Perceived Usefulness of ChatGPT

There were statistically significant associations between participants’ demographics and some variables (p < 0.05) (Table 5). A higher proportion of healthcare students in the 3rd year (41.07%), 4th year (41.51%), 5th year (40.79%), and 6th year (48.78%) “often” or “always” used ChatGPT compared to students in their 1st year (25.58%), 2nd year (30.23%), and internship year (29.89%) (p = 0.005). No significant differences in ChatGPT utilization, satisfaction, or perceived usefulness among healthcare students across academic health programs were found in this study.
A statistically significant difference was observed between the geographical locations of study participants and their satisfaction with ChatGPT use (p = 0.037). A higher proportion of students from the northern region (73.27%) of Saudi Arabia were satisfied with ChatGPT than those from the eastern (51.85%), central (64.87%), western (63.16%), and southern (61.54%) regions.

4. Discussion

In this study, we investigated healthcare students’ perceptions of ChatGPT within their academic programs. As AI rapidly transforms learning frameworks and practices worldwide, understanding students’ perspectives on the use, perceived benefits, and potential limitations of generative AI tools in health professions education is crucial. Our study provides valuable insights by capturing responses from a large cohort of healthcare students across diverse academic programs and regions in Saudi Arabia. Overall, the study findings reveal that AI-assisted tools, particularly ChatGPT, were widely adopted by healthcare students. This indicates a paradigm shift in contemporary health professions education toward technology-enhanced learning. Students primarily utilized ChatGPT to support academic tasks and educational activities, driven by motivations of time efficiency and convenience, and perceived benefits such as enhanced learning efficiency and reduced study-related stress. Conversely, challenges such as technical competencies (i.e., prompt engineering) and concerns regarding overreliance and potential academic misconduct highlight the cognitive and ethical complexities of integrating AI into the educational process. Consequently, the students expressed support for the inclusion of AI technologies in their curriculum to promote effective, optimal, and responsible use of technology in their studies.
The findings indicate that the prevalence of ChatGPT use was 69.25%. This is in line with recent studies showing the increasing use of AI chatbots, including ChatGPT, among students in healthcare programs [29,30,31,32,33,34,35]. The prevalence of ChatGPT use was reported as 48.9% among medical students in one 2023 US study, and 52.0% in another [29,30]. In China, a study reported that 62.9% of medical students used this chatbot in their studies [31]. In Egypt, a study involving medical students reported that 78.5% of students had used the tool [32]. Furthermore, a study among pharmacy students in Zambia reported a prevalence of 78.7% [33]. Consequently, the findings of our study, along with the cited literature, indicate the widespread use of ChatGPT among healthcare students.
In this study, approximately one-third of participants (34.58%) reported frequent use of ChatGPT (“often” or “always”). More than half of the respondents (58.23%) favored the increased integration of ChatGPT in health professions education, as they perceived its potential to improve learning outcomes. This chatbot can serve as a valuable tool to build core competencies in healthcare students through the interpretation and recall of foundational knowledge [36]. Furthermore, healthcare students reported using ChatGPT for advanced academic applications, including refining history-taking skills, receiving immediate feedback, reviewing concepts, and simulating patient interactions [37,38,39]. Participants in this study reported several key benefits of this tool, with over half of the healthcare students identifying enhanced learning efficiency (56.43%) and reduced study-related stress (53.53%) as the top learning benefits. Recent research supports this perspective, suggesting ChatGPT’s potential as a self-learning tool for healthcare students preparing for professional exams, including the Saudi Medical Licensing Exam (SMLE) [40,41,42,43]. Additionally, 39% of the study participants recognized the chatbot’s role in enhancing critical thinking. Healthcare students perceived ChatGPT as an educational tool that enabled them to manage academic demands, reduce stress, and build core competencies by providing quick access to information, simplifying complex topics, and offering timely feedback [36]. Notably, 31.54% of healthcare students reported using this tool to enhance their communication skills. Given that English is a second language for healthcare students in Saudi Arabia and healthcare programs are primarily taught in English, ChatGPT’s ability to refine written communication is particularly valuable. Furthermore, 29.74% of students considered it beneficial for clinical preparation.
Despite the wide-ranging advantages of ChatGPT identified by healthcare students, concerns about its potential challenges and limitations were also expressed. Over half of the students (55.19%) noted the risk of overreliance on ChatGPT, while about half (51.73%) were concerned about academic integrity and potential misuse. This highlights the students’ concerns regarding the potential of ChatGPT to compromise independent thinking and analytical skills, both of which are core competencies in health professions education. A systematic review highlighted ethical issues such as bias, privacy, and plagiarism as significant risks associated with ChatGPT, pointing to serious implications for academic and clinical integrity [7,44]. In health professions education, where evidence-based information is essential, ChatGPT could provide inaccurate information and generate nonexistent references [7,44]. These limitations are acknowledged by the chatbot itself, which indicates that it can generate responses that appear credible but may include incorrect or misleading information [45]. While it can reduce language barriers in health professions education, reliance solely on ChatGPT content introduces a significant risk due to the potential for misleading translations [46]. Additionally, 39.83% of students expressed concern about the lack of human interaction. Although this chatbot provides some level of personalized learning opportunities, the lack of direct engagement with mentors and peers may limit essential learning experiences in the healthcare field, which heavily depends on interpersonal skills. Moreover, 25.03% of the healthcare students pointed to a lack of clear regulations and guidelines. As the utilization of ChatGPT expands, there is a clear demand for structured guidance to preserve academic integrity and maximize the benefits of this revolutionary technology.
Healthcare students consider several factors when deciding to use ChatGPT for academic purposes. According to the respondents, the primary reasons for relying on it were its time-saving benefits (72.06%), convenience (39.97%), and accuracy of the information provided (32.78%). Healthcare students applied the tool for a range of academic purposes. Recognizing these factors can guide the development of educational frameworks that encourage the optimal and responsible use of ChatGPT among healthcare students. Notably, half of the respondents (50.90%) reported using the chatbot to summarize academic articles. Given its efficiency and breadth in providing topic overviews, it can be a valuable tool for summarizing literature reviews. However, it currently lacks depth and contextual understanding [47], and therefore it can be used for preliminary literature reviews, while a comprehensive human review remains essential to validate the accuracy and relevance of the content [47]. ChatGPT was also utilized for various other academic purposes. For instance, 44.95% of the healthcare students used it to write assignments. This widespread use has raised concerns within academic communities about how this chatbot could compromise the intended purpose of assignments [48,49]. Other uses include preparing for exams (41.49%), writing effective emails (29.18%), generating study materials (28.77%), and practicing clinical scenarios (24.62%).
Our study did not show a statistically significant difference between academic health programs regarding the frequency of ChatGPT utilization, levels of satisfaction, or perceived usefulness. This suggests that healthcare students across different healthcare programs report broadly comparable engagement with this chatbot. These findings indicate that the potential benefits of ChatGPT as an educational tool can be applied to broader healthcare curricula. Furthermore, the study data suggest that the frequency of ChatGPT utilization tends to increase as students advance in their studies. This observation may reflect the growing academic demand and reliance on this chatbot for educational purposes to improve learning efficiency.

5. Strengths and Limitations

This study has several strengths. It included a large cohort of students from diverse medical and health programs in all regions of Saudi Arabia. However, convenience sampling, although practically necessary for an online survey, is a limitation of this study and could limit the generalizability of the findings to the entire target population. Additionally, we focused on one AI chatbot, ChatGPT, and did not explore other chatbots or AI apps. Moreover, because the study was cross-sectional, it represents the status at the time of the study. Consequently, future studies should monitor progress given the rapid adoption and development of generative and agentic AI in Saudi Arabia and globally.

6. Conclusions and Implications for Health Professions Education in Saudi Arabia

This study reveals a high prevalence of ChatGPT use among healthcare students in Saudi Arabia. Furthermore, it shows that healthcare students had a positive perception of the usefulness of AI and ChatGPT for educational purposes. This finding aligns with Saudi Arabia’s vision of incorporating more AI technologies into healthcare and education, as well as the increasing integration of generative AI tools into educational practices globally [49]. Moreover, there is a recognized benefit in integrating ChatGPT into educational settings, as educators in Saudi Arabia are increasingly adopting it to enhance the teaching and learning process [50]. The study findings show that the primary drivers of ChatGPT use were its time-saving and convenient features for summarizing content and assisting with study assignments. The findings also suggest that healthcare students perceive this innovative technology as beneficial for personalized learning, critical thinking, and communication skills, and for preparing them for clinical practice. Collectively, these factors provide a strong foundation for advancing the use of AI technology in health professions education. However, to ensure the effective and responsible adoption of AI chatbots in education, a comprehensive approach is required to maximize the benefits and address any potential risks (e.g., overreliance). First, establishing robust and clear guidelines for their responsible and effective use is essential to ensuring their integration leads to meaningful educational advancement without compromising academic integrity, the educational process, or students’ educational experiences. Second, all stakeholders, including faculty members and students, should receive structured education and training on the appropriate use of AI chatbots, covering ethical and responsible use, as well as technical aspects (i.e., prompt engineering). Third, colleges must ensure that their curricula are regularly updated to incorporate emerging AI technologies and related advances.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/ime5010006/s1, Supplementary Materials S1: The questionnaire used in the study.

Author Contributions

Conceptualization, M.K.R. and A.A.A.; methodology, M.K.R. and A.A.A.; software, M.K.R. and A.A.A.; validation, M.K.R. and A.A.A.; formal analysis, A.A.A.; investigation, M.K.R., F.A., N.A. and R.I.A.; resources, A.A.A., M.K.R., F.A., N.A. and R.I.A.; data curation, M.K.R., F.A., N.A. and R.I.A.; writing—original draft preparation, A.A.A., I.S.A., M.K.R., F.A., N.A. and R.I.A.; writing—review and editing, A.A.A. and I.S.A.; visualization, A.A.A.; supervision, M.K.R. and A.A.A.; project administration, M.K.R. and A.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was approved by the Regional Research Ethics Committee of the Qassim Region, Saudi Arabia (Reference No. 607-45-013310). This study was conducted in accordance with the principles of the Declaration of Helsinki.

Informed Consent Statement

Before commencement of the online survey, the participants were prompted to read the study information sheet and provide their consent to participate. In addition, they were informed that their participation was voluntary and that their responses were anonymous.

Data Availability Statement

The data associated with this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

AI: artificial intelligence
AIHTs: artificial intelligence health technologies
ChatGPT: chat generative pre-trained transformer
ML: machine learning
SDAIA: Saudi Data and Artificial Intelligence Authority
SMLE: Saudi Medical Licensing Exam

Figure 1. Study flowchart.
Figure 2. Participants’ satisfaction with the information and responses provided by ChatGPT.
Table 1. Demographic data and characteristics of study participants (n = 723).
Variable | n (%) *
Sex
  Male | 293 (40.53)
  Female | 430 (59.47)
Year of study
  First year | 172 (23.79)
  Second year | 129 (17.84)
  Third year | 112 (15.49)
  Fourth year | 106 (14.66)
  Fifth year | 76 (10.51)
  Sixth year | 41 (5.67)
  Internship year | 87 (12.03)
College (healthcare program)
  Medicine | 219 (30.29)
  Pharmacy | 178 (24.62)
  Nursing | 105 (14.52)
  Dentistry | 57 (7.88)
  Allied healthcare program | 164 (22.68)
Geographical location in Saudi Arabia
  Central region | 316 (43.71)
  Western region | 133 (18.40)
  Northern region | 101 (13.97)
  Southern region | 65 (8.99)
  Eastern region | 108 (14.94)
* Note: Percentages may not total 100% due to rounding.
Table 2. Utilization of ChatGPT among healthcare students (n = 723).
Variable | n (%) ***
1. How often do you use ChatGPT?
    Rarely | 168 (23.24)
    Sometimes | 305 (42.19)
    Often | 156 (21.58)
    Always | 94 (13.00)
2. Factors influencing students’ decision to use ChatGPT in their studies *
    Time-saving | 521 (72.06)
    Convenience | 289 (39.97)
    Trust in the accuracy of the information provided | 237 (32.78)
    Lack of access to other resources | 227 (31.40)
    Peer recommendation | 168 (23.24)
    Instructor recommendation | 144 (19.92)
3. Purpose of using ChatGPT in their studies *
    Summarizing academic articles | 368 (50.90)
    Writing assignments | 325 (44.95)
    Preparing for exams | 300 (41.49)
    Writing effective emails | 211 (29.18)
    Generating study materials | 208 (28.77)
    Practicing clinical scenarios | 178 (24.62)
    Not using it in my academic studies | 88 (12.17)
4. Encountering limitations or challenges while using ChatGPT
    Yes | 523 (72.34)
    No | 200 (27.66)
5. Limitations or challenges encountered while using ChatGPT (n = 523) **
    ChatGPT requires a paid subscription for full use, limiting its accessibility to students | 302 (57.74)
    Difficulty formulating questions when searching for specific answers | 221 (42.26)
    Technical issues (e.g., connectivity problems) | 183 (34.99)
Notes: * This is a multiple-response question. ** Those participants (n = 523) who responded with “yes” to Question 4 were asked to proceed to this question and mention the type of challenge(s) they encountered. *** Percentages may not total 100% due to rounding.
Table 3. Healthcare students’ perceptions of the usefulness, benefits, and disadvantages of ChatGPT (n = 723).
Variable | n (%) **
Usefulness of ChatGPT in helping with their healthcare courses
   Extremely helpful | 77 (10.65)
   Very helpful | 205 (28.35)
   Moderately helpful | 244 (33.75)
   Slightly helpful | 136 (18.81)
   Not helpful at all | 61 (8.44)
How can ChatGPT be beneficial for healthcare students? *
   Improving learning efficiency | 408 (56.43)
   Reducing study stress | 387 (53.53)
   Enhancing critical thinking skills | 282 (39.00)
   Developing communication skills | 228 (31.54)
   Preparing for clinical practice | 215 (29.74)
   Other (rare diseases, new ideas, simplified information, links to resources, etc.) | 27 (3.73)
Perceived disadvantages of using ChatGPT in studies *
   Overreliance: overdependence on ChatGPT could hinder the critical and independent thinking crucial for healthcare professionals. | 399 (55.19)
   Academic integrity: misuse of ChatGPT for plagiarism poses ethical and academic integrity concerns. | 374 (51.73)
   Lack of human interaction: the absence of human interaction and feedback from instructors or peers can limit valuable learning opportunities. | 288 (39.83)
   Regulations and guidelines: clear guidelines and regulations for the responsible use of ChatGPT in health professions education are still evolving. | 181 (25.03)
Note: * Multiple-response question. ** Percentages may not total 100% due to rounding.
Table 4. Healthcare students’ perspectives on ChatGPT in curriculum and activities (n = 723).
Variable | n (%) *
Frequency of discussing the use of ChatGPT with instructors
   Never | 383 (52.97)
   Rarely | 191 (26.42)
   Sometimes | 120 (16.60)
   Often | 29 (4.01)
   Always | 0 (0.00)
Current coverage of ChatGPT in your program curriculum
   Very low | 99 (13.69)
   Low | 174 (24.07)
   Average | 314 (43.43)
   High | 101 (13.97)
   Very high | 35 (4.84)
To what extent do you believe that the effective use of ChatGPT should be taught in healthcare colleges?
   Very low | 53 (7.33)
   Low | 95 (13.14)
   Average | 341 (47.16)
   High | 162 (22.41)
   Very high | 72 (9.96)
Should ChatGPT be used more by healthcare students?
   Yes | 421 (58.23)
   No | 88 (12.17)
   Unsure | 214 (29.60)
* Note: Percentages may not total 100% due to rounding.
Table 5. Association between participants’ demographics and frequency of utilization of ChatGPT, satisfaction with it, and their perceived usefulness of ChatGPT.
Columns 2–4: frequency of utilization of ChatGPT, n (%); columns 6–8: satisfaction, n (%); columns 10–12: perceived usefulness, n (%). Each p value applies to its demographic variable as a whole and is reported on the first row of each block.
Variable | Rarely | Sometimes | Often/Always | p Value * | Unsatisfied/Very Unsatisfied | Neutral | Satisfied/Very Satisfied | p Value * | Not Helpful/Slightly Helpful | Moderately Helpful | Very/Extremely Helpful | p Value *
Sex
Male | 74 (25.26) | 114 (38.91) | 105 (35.84) | 0.310 | 22 (7.51) | 75 (25.60) | 196 (66.89) | 0.213 | 81 (27.65) | 100 (34.13) | 112 (38.23) | 0.939
Female | 94 (21.86) | 191 (44.42) | 145 (33.72) | | 31 (7.21) | 136 (31.63) | 263 (61.16) | | 116 (26.98) | 144 (33.49) | 170 (39.53) |
Year of study
First year | 44 (25.58) | 84 (48.84) | 44 (25.58) | 0.005 a | 20 (11.63) | 56 (32.56) | 96 (55.81) | 0.067 | 57 (33.14) | 43 (25.00) | 72 (41.86) | 0.086
Second year | 26 (20.16) | 64 (49.61) | 39 (30.23) | | 7 (5.43) | 36 (27.91) | 86 (66.67) | | 38 (29.46) | 40 (31.01) | 51 (39.53) |
Third year | 27 (24.11) | 39 (34.82) | 46 (41.07) | | 10 (8.93) | 31 (27.68) | 71 (63.39) | | 33 (29.46) | 44 (39.29) | 35 (31.25) |
Fourth year | 19 (17.92) | 43 (40.57) | 44 (41.51) | | 3 (2.83) | 24 (22.64) | 79 (74.53) | | 21 (19.81) | 39 (36.79) | 46 (43.40) |
Fifth year | 12 (15.79) | 33 (43.42) | 31 (40.79) | | 4 (5.26) | 23 (30.26) | 49 (64.47) | | 16 (21.05) | 26 (34.21) | 34 (44.74) |
Sixth year | 11 (26.83) | 10 (24.39) | 20 (48.78) | | 5 (12.20) | 9 (21.95) | 27 (65.85) | | 13 (31.71) | 14 (34.15) | 14 (34.15) |
Internship year | 29 (33.33) | 32 (36.78) | 26 (29.89) | | 4 (4.60) | 32 (36.78) | 51 (58.62) | | 19 (21.84) | 38 (43.68) | 30 (34.48) |
College (healthcare program)
Medicine | 61 (27.85) | 95 (43.38) | 63 (28.77) | 0.099 | 22 (10.05) | 72 (32.88) | 125 (57.08) | 0.191 | 77 (35.16) | 65 (29.68) | 77 (35.16) | 0.149
Pharmacy | 40 (22.47) | 65 (36.52) | 73 (41.01) | | 8 (4.49) | 49 (27.53) | 121 (67.98) | | 40 (22.47) | 66 (37.08) | 72 (40.45) |
Nursing | 20 (19.05) | 53 (50.48) | 32 (30.48) | | 10 (9.52) | 28 (26.67) | 67 (63.81) | | 30 (28.57) | 34 (32.38) | 41 (39.05) |
Dentistry | 14 (24.56) | 26 (45.61) | 17 (29.82) | | 1 (1.75) | 16 (28.07) | 40 (70.18) | | 11 (19.30) | 21 (36.84) | 25 (43.86) |
Allied healthcare program | 33 (20.12) | 66 (40.24) | 65 (39.63) | | 12 (7.32) | 46 (28.05) | 106 (64.63) | | 39 (23.78) | 58 (35.37) | 67 (40.85) |
Geographical location in Saudi Arabia
Central region | 65 (20.57) | 145 (45.89) | 106 (33.54) | 0.291 | 15 (4.75) | 96 (30.38) | 205 (64.87) | 0.037 a | 77 (24.37) | 114 (36.08) | 125 (39.56) | 0.764
Western region | 37 (27.82) | 56 (42.11) | 40 (30.08) | | 11 (8.27) | 38 (28.57) | 84 (63.16) | | 42 (31.58) | 39 (29.32) | 52 (39.10) |
Northern region | 28 (27.72) | 39 (38.61) | 34 (33.66) | | 7 (6.93) | 20 (19.80) | 74 (73.27) | | 30 (29.70) | 31 (30.69) | 40 (39.60) |
Southern region | 12 (18.46) | 23 (35.38) | 30 (46.15) | | 7 (10.77) | 18 (27.69) | 40 (61.54) | | 21 (32.31) | 21 (32.31) | 23 (35.38) |
Eastern region | 26 (24.07) | 42 (38.89) | 40 (37.04) | | 13 (12.04) | 39 (36.11) | 56 (51.85) | | 27 (25.00) | 39 (36.11) | 42 (38.89) |
* Note: Statistical tests were performed using the chi-squared test. a Statistically significant result at p < 0.05.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

