Article

Anxiety among Medical Students Regarding Generative Artificial Intelligence Models: A Pilot Descriptive Study

by Malik Sallam 1,2,*, Kholoud Al-Mahzoum 3, Yousef Meteb Almutairi 3, Omar Alaqeel 3, Anan Abu Salami 3, Zaid Elhab Almutairi 3, Alhur Najem Alsarraf 3 and Muna Barakat 4

1 Department of Pathology, Microbiology and Forensic Medicine, School of Medicine, The University of Jordan, Amman 11942, Jordan
2 Department of Clinical Laboratories and Forensic Medicine, Jordan University Hospital, Amman 11942, Jordan
3 School of Medicine, The University of Jordan, Amman 11942, Jordan
4 Department of Clinical Pharmacy and Therapeutics, Faculty of Pharmacy, Applied Science Private University, Amman 11931, Jordan
* Author to whom correspondence should be addressed.
Int. Med. Educ. 2024, 3(4), 406-425; https://doi.org/10.3390/ime3040031
Submission received: 16 August 2024 / Revised: 9 September 2024 / Accepted: 2 October 2024 / Published: 9 October 2024
(This article belongs to the Special Issue New Advancements in Medical Education)

Abstract
Despite the potential benefits of generative artificial intelligence (genAI), concerns about its psychological impact on medical students, especially about job displacement, are apparent. This pilot study, conducted in Jordan during July–August 2024, aimed to examine the specific fears, anxieties, mistrust, and ethical concerns medical students harbor towards genAI. Using a cross-sectional survey design, data were collected from 164 medical students studying in Jordan across various academic years, employing a structured self-administered questionnaire with an internally consistent FAME scale—representing Fear, Anxiety, Mistrust, and Ethics—comprising 12 items, with 3 items for each construct. Exploratory and confirmatory factor analyses were conducted to assess the construct validity of the FAME scale. The results indicated variable levels of anxiety towards genAI among the participating medical students: 34.1% reported no anxiety about genAI's role in their future careers (n = 56), while 41.5% were slightly anxious (n = 61), 22.0% were somewhat anxious (n = 36), and 2.4% were extremely anxious (n = 4). Among the FAME constructs, Mistrust was the most agreed upon (mean: 12.35 ± 2.78), followed by the Ethics construct (mean: 10.86 ± 2.90), Fear (mean: 9.49 ± 3.53), and Anxiety (mean: 8.91 ± 3.68). Sex, academic level, and Grade Point Average (GPA) did not significantly affect the students' perceptions of genAI. However, there was a notable direct association between the students' general anxiety about genAI and elevated scores on the Fear, Anxiety, and Ethics constructs of the FAME scale. Prior exposure to and use of genAI did not significantly modify the scores on the FAME scale. These findings highlight the critical need for refined educational strategies to address the integration of genAI into medical training. The results demonstrate notable anxiety, fear, mistrust, and ethical concerns among medical students regarding the deployment of genAI in healthcare, indicating the necessity of curriculum modifications that focus specifically on these areas. Interventions should be tailored to increase familiarity and competency with genAI, which would alleviate apprehensions and equip future physicians to engage with this inevitable technology effectively. This study also highlights the importance of incorporating ethical discussions into medical courses to address mistrust and concerns about the human-centered aspects of genAI. In conclusion, this study calls for the proactive evolution of medical education to prepare students for new AI-driven healthcare practices to ensure that physicians are well prepared, confident, and ethically informed in their professional interactions with genAI technologies.

1. Introduction

The widespread availability of generative artificial intelligence (genAI) models (e.g., ChatGPT, Gemini, Microsoft Copilot, and Llama) is set to transform various occupational sectors. This includes transformative changes to the higher education sector, especially medical education and healthcare practice [1,2,3,4,5,6]. For example, genAI can help with the automation of routine administrative and educational tasks, aid in the response to student inquiries, and assist in the delivery of basic educational content with an interesting and personalized style [7,8]. Subsequently, genAI models can help educators and administrators focus on the more complex, value-added activities of higher education (e.g., personalized teaching and research activities) [9,10].
Additionally, genAI can be extremely helpful in medical education by offering sophisticated, novel simulations and modeling, especially for practical training, which are invaluable in healthcare education [2,3,11,12,13]. Moreover, genAI models can facilitate educational initiatives without substantial additional resources, which helps to promote educational equity at the global level [14].
Recently, several potential benefits of genAI models in healthcare education, research, and practice have been well recognized [3,11,15]. However, there are growing concerns about the negative implications of genAI models for the workforce, particularly about job displacement and the changing nature of health professional roles [16,17,18]. Consequently, these concerns could lead to resistance to genAI implementation among health professionals, which could hinder the full realization of genAI advantages in healthcare [19].
Medical students are considered a key group of the future healthcare workforce. Therefore, the ability of medical students to adapt their career paths to emerging technologies, such as genAI, is essential for them to thrive in a future that will be driven by genAI integration [20,21].
The rapid adoption of genAI models into the healthcare sector has raised concerns about job security, especially for medical students who are set to enter the healthcare workforce [22,23]. Thus, evaluating medical students’ anxieties and fears regarding potential job displacement by genAI is a timely research investigation topic. Understanding the concerns of medical students towards genAI models can offer insights into the effects of this novel technology on health professionals’ identities and their future competitiveness, as well as the ethical dilemmas that may arise as a result of genAI models’ integration into healthcare [24,25,26].
The increasing popularity of genAI models in healthcare has heightened job displacement concerns, with evidence suggesting gaps in practical genAI experience and knowledge among currently practicing health professionals [27,28]. A genAI-driven shift in healthcare practice would redirect the focus from traditional patient-centered care to technology-centered methods, raising questions about the need to redefine the future roles of health professionals [29]. Subsequently, genAI-driven changes in healthcare could lead to the depersonalization of care, which is a major challenge to the core healthcare values of empathy and human judgment. This emphasizes the critical need to explore the evolving dynamics of genAI-driven healthcare with a comprehensive and evidence-based approach [30,31].
On a related note, ethical considerations are central to medical students’ perceptions of genAI models’ utility in healthcare practice [32,33]. Concerns about patient privacy, data security, and potential genAI-induced healthcare disparities illustrate the complex ethical challenges that future physicians will face [34]. Additionally, there are significant questions regarding whether current medical education practices adequately prepare medical students for a genAI-dominated healthcare practice [35].
Based on the aforementioned points, the integration of genAI models into healthcare is expected to provoke anxiety among medical students, with subsequent fears of job displacement, loss of professional identity, and ethical dilemmas [36,37,38]. Therefore, investigating medical students’ perspectives on genAI models and their concerns about this emerging technology is important. This area of investigation could be crucial from the medical educational perspective, to prepare future physicians for a genAI-driven era of healthcare practice and to equip future physicians with the necessary tools to improve patient care and healthcare outcomes [1,39,40].
Therefore, this study aimed to assess medical students’ fears, anxieties, and concerns regarding genAI models’ roles in healthcare. Specifically, this study aimed to answer the following research questions: (1) What are the primary fears, anxieties, and concerns that medical students have regarding the role of genAI in healthcare? (2) What key factors drive these fears and anxieties?
This study’s objectives included assessing the possible factors associated with these fears and anxieties. The ultimate aim was to provide preliminary evidence to guide the development of targeted genAI integration interventions in medical education (e.g., policy modifications and improvements in genAI implementation strategies). Addressing these genAI-related concerns among medical students would help ensure that future physicians are prepared and confident in AI-integrated healthcare settings. Understanding the factors driving their concerns is crucial for developing targeted interventions that ensure students are both confident and competent in using genAI responsibly.

2. Materials and Methods

2.1. Study Design and Ethical Permission

This pilot study was based on a cross-sectional survey involving medical students currently studying in Jordan. A convenience sampling strategy was employed to expedite participant recruitment given the timeliness of this study’s topic. The recruitment of potential participants was performed using social media and instant messaging applications, including Facebook, X (formerly Twitter), Instagram, LinkedIn, Messenger, and WhatsApp, all of which are popular among the target demographic, namely medical students in Jordan.
The sampling process started with the authors, who were medical students at the time of the survey’s distribution (Y.M.A., O.A., A.A.S., Z.E.A., and A.N.A.), and who were encouraged to further distribute the survey among their acquaintances who were also medical students in Jordan (snowball sampling). The survey was distributed in Arabic via Google Forms as the questionnaire host, and no incentives were offered for participation. The survey distribution took place during July–August 2024.
This study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of the Faculty of Pharmacy at the Applied Science Private University (reference number: 2024-PHA-25), which was granted on 4 July 2024.

2.2. Survey Instrument Development

The initial phase of survey development involved an independent literature review by the first and senior authors (M.S. and M.B.) using Google Scholar, and their searches concluded on 4 June 2024. The following search terms were used to ensure a comprehensive understanding of the current role of genAI in healthcare and medical education: “generative AI in healthcare education”; “generative AI and anxiety of health professionals”; “generative AI concerns among health professionals”; “fear of generative AI in healthcare”; “healthcare job displacement by generative AI”; “ChatGPT anxiety in healthcare”; “ChatGPT fear in healthcare”; “ChatGPT concerns in healthcare”; “healthcare job displacement by ChatGPT”; “medical job displacement by ChatGPT”; and “AI anxiety among medical students”. This was followed by the identification of research records in English that were deemed relevant for the development of a tailored survey instrument for this study’s objectives by discussions between the first and senior authors (M.S. and M.B.) [3,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55].
A collaborative effort involving the first, second, and senior authors was conducted to critically analyze the retrieved literature, leading to the development of a set of survey questions for this study’s objectives. The first and senior authors have a combined 13 years of teaching experience in healthcare, while the second author is a recently graduated Doctor of Medicine (M.D.). This combination of broad academic expertise and recent practical experience of medical learning enhanced the content validity of the survey questions, with the subsequent identification and integration of four themes subjectively deemed relevant to meet this study’s objectives, as follows: (1) fear related to possible job displacement by the availability of efficient, robust, and beneficial genAI; (2) anxiety regarding long-term medical career prospects in light of genAI models’ availability; (3) mistrust related to concerns about genAI reducing the human role in healthcare practice, leading to the dehumanization of medical interactions and decision making; and (4) ethical dilemmas and concerns that may arise if genAI models’ utilization becomes integrated as a routine part of healthcare practice. Based on the initials of the dimensions, the scale was referred to as the “FAME” scale.
To enhance the validity of the survey, we opted to check the content of the initial draft by seeking feedback from two lecturers involved in medical education: a lecturer in Microbiology and Immunology, specializing in basic medical sciences, and a lecturer in Internal Medicine with a specialty in Endocrinology, with extensive involvement in supervising fourth-year medical students during their introductory course on clinical rotations. This process involved seeking specific feedback on the contextual relevance and comprehensiveness of the included four themes, thereby enhancing the content validity of this novel survey instrument.
Afterward, the survey instrument underwent a pilot test involving five medical students selected to represent a diverse range of perspectives within medical education (three third-year students and two fifth-year students). The feedback required included notes on the clarity, relevance, and language of the survey items. Following the obtained feedback, minor refinements of the survey items were conducted to improve the clarity of the language of the survey items by adjustments involving simplifying complex language and enhancing the flow of the questionnaire.

2.3. Final Survey Used

Before participation, all the medical students were provided with detailed information about this study’s objectives, inclusion criteria, and confidentiality measures to protect the anonymity of the responses obtained. Electronic consent was mandatory from each participant, with a “Yes” response to that item required to open the survey (Supplementary S1).
The survey started with a demographics section that gathered data on age (as a scale variable); sex (male vs. female); current year of study (1st, 2nd, 3rd, 4th, 5th, or 6th year), later classified as pre-clinical (1st, 2nd, or 3rd) vs. clinical (4th, 5th, or 6th year); latest Grade Point Average (GPA), classified as unsatisfactory, satisfactory, good, very good, or excellent, and later classified as low (unsatisfactory, satisfactory, or good) vs. high (very good or excellent); and the desired future specialty (Pediatrics, General Surgery, Forensic Medicine, Orthopedics, Neurosurgery, Ophthalmology, Plastic Surgery, Internal Medicine, Psychiatry, Emergency Medicine, Obstetrics and Gynecology, Urology, Anesthesiology, Radiology, Pathology, Dermatology, or Other).
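A brief, hypothetical sketch of this recoding in Python (pandas) is shown below; the file name and column names are illustrative assumptions rather than the actual questionnaire variables.

```python
import pandas as pd

# Hypothetical export of the survey responses (assumed file and column names).
df = pd.read_csv("fame_survey_responses.csv")

# Year of study (1-6) -> pre-clinical (1st-3rd) vs. clinical (4th-6th year)
df["academic_level"] = df["year_of_study"].map(
    lambda year: "pre-clinical" if year in (1, 2, 3) else "clinical"
)

# Latest GPA category -> low (unsatisfactory/satisfactory/good) vs. high (very good/excellent)
gpa_map = {
    "unsatisfactory": "low", "satisfactory": "low", "good": "low",
    "very good": "high", "excellent": "high",
}
df["gpa_group"] = df["gpa_category"].str.lower().map(gpa_map)
```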
Then, the survey included the major question, considered the primary study measure, which was assessed on a 4-point Likert scale (not at all, slightly anxious, somewhat anxious, extremely anxious) to gauge the level of anxiety medical students feel towards generative AI technologies, such as ChatGPT. This question was “How anxious are you about genAI models like ChatGPT as a future physician?”.
Next, the participants were asked about their previous use of genAI models (ChatGPT, MS Copilot, Gemini, My AI Snapchat, others).
Finally, the main body of the survey consisted of twelve items assessed on a 5-point Likert scale (strongly agree, agree, neutral, disagree, strongly disagree). These items were designed with three questions per dimension, as follows: (1) “I feel concerned that AI will reduce the number of available jobs in healthcare”; (2) “I am concerned that AI will exceed the need for human skills in many areas of healthcare”; (3) “The advancement of AI in healthcare makes me anxious about my long-term career”; (4) “I am worried that I will not be able to compete with AI for jobs in the healthcare”; (5) “The thought of competing with AI for career opportunities in healthcare makes me feel anxious”; (6) “I am worried that AI will demand competencies beyond the current scope of medical teaching”; (7) “I believe that AI is missing the aspects of insight and empathy needed in medical practice”; (8) “I believe that AI does not take into account the personal and emotional aspects of patient care”; (9) “I believe that the essence of the medical profession will not be affected by AI technologies”; (10) “I believe that AI will lead to ethical dilemmas in healthcare”; (11) “I fear that AI in healthcare will compromise patient privacy and data security”; and (12) “I worry that AI will increase inequalities in patient care”.
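Since each of the four constructs is assessed with three 5-point Likert items, a plausible scoring approach is to sum the item responses per construct, yielding sub-scale scores ranging from 3 to 15 (consistent with the construct means reported in Section 3.4). The sketch below assumes that the twelve items listed above map onto Fear (items 1–3), Anxiety (items 4–6), Mistrust (items 7–9), and Ethics (items 10–12) in the order given, with no reverse coding; neither assumption is stated explicitly here, so the mapping is illustrative only.

```python
# Assumed mapping of the 12 items (in the order listed above) to the FAME constructs.
FAME_ITEMS = {
    "Fear":     ["item01", "item02", "item03"],
    "Anxiety":  ["item04", "item05", "item06"],
    "Mistrust": ["item07", "item08", "item09"],
    "Ethics":   ["item10", "item11", "item12"],
}

# Assumed coding direction: strongly disagree = 1 ... strongly agree = 5 (no reverse coding).
LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

def score_fame(responses):
    """Sum the three Likert items per construct (possible range: 3-15)."""
    scored = responses.copy()
    for construct, items in FAME_ITEMS.items():
        numeric = scored[items].apply(lambda col: col.str.lower().map(LIKERT))
        scored[construct] = numeric.sum(axis=1)
    return scored

df = score_fame(df)  # continues the hypothetical DataFrame from the earlier sketch
```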

2.4. Sample Size Calculation

The sample size was calculated using G*Power software (Version 3.1.9.7) to ensure sufficient statistical power for detecting differences between two groups: medical students who reported no anxiety about genAI and those who expressed any level of anxiety [56,57]. A small-to-medium effect size (0.3) was assumed based on the exploratory nature of this study. We set the significance level (α) at 0.05 and targeted a power of 95% to minimize the risk of Type II errors and to ensure that this study could reliably detect meaningful differences between the two aforementioned medical student groups. Accordingly, the recruitment of 147 participants was determined to be essential to ensure adequate statistical power for a comparison between the two groups (medical students who were not anxious at all regarding genAI vs. medical students who expressed anxiety at any level).
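Because the exact G*Power test family and settings (tails, allocation ratio) are not reported here, the following sketch shows two common configurations using statsmodels; neither is guaranteed to reproduce the reported 147 participants exactly, and the outputs are for orientation only.

```python
from statsmodels.stats.power import GofChisquarePower, TTestIndPower

alpha, power, effect = 0.05, 0.95, 0.3

# Two independent groups (t-test analogue, d = 0.3): required n per group
n_per_group = TTestIndPower().solve_power(effect_size=effect, alpha=alpha, power=power)

# Chi-square-style calculation (w = 0.3, df = 1): required total n
n_total = GofChisquarePower().solve_power(effect_size=effect, n_bins=2, alpha=alpha, power=power)

print(round(n_per_group), round(n_total))  # illustrative values; may differ from G*Power's 147
```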

2.5. Statistical and Data Analysis

Statistical analyses were conducted using IBM SPSS Statistics for Windows, Version 27.0 (Armonk, NY: IBM Corp), with statistical significance established at p < 0.050. The Kolmogorov–Smirnov test was employed to assess data normality. Cronbach’s α was calculated to evaluate the internal consistency of the survey constructs. The Intraclass Correlation Coefficient (ICC) was employed to assess the reliability of the measurements under a one-way random model given the uniform style of survey administration, to ensure that any measurement error would be random and not due to systematic differences in how the measurements were taken. For the effect size between the two groups, Cohen’s d was utilized with Hedges’ correction, which adjusts for bias in the estimation of the standard deviation (SD) in small samples. To take into account the non-normality of the scale variables, the effect size analysis was supplemented by measuring the point-biserial correlation coefficients using bivariate Pearson correlations, with the correlation coefficients (r) acting as surrogates for the effect sizes. Nonparametric tests, including the Mann–Whitney U test for two independent samples and the Kruskal–Wallis test for more than two groups, were applied, given that the scale variable did not meet normality assumptions (p ≤ 0.001 using the Kolmogorov–Smirnov test). Additionally, chi-square tests were used to explore associations between the categorical variables.
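As a hedged illustration of how these procedures could be reproduced outside SPSS, the sketch below continues the hypothetical DataFrame from the earlier snippets (column names such as "genai_anxiety" and "year_of_study" are assumptions) and uses scipy and pingouin as open-source stand-ins for the described analyses.

```python
import pingouin as pg
from scipy import stats

# Dichotomize the 4-point anxiety item: "not at all" vs. any level of anxiety (assumed labels).
df["anxiety_group"] = (df["genai_anxiety"].str.lower() != "not at all").map(
    {True: "any", False: "none"}
)

# Internal consistency (Cronbach's alpha) for one sub-scale
fear_items = df[FAME_ITEMS["Fear"]].apply(lambda col: col.str.lower().map(LIKERT))
alpha_fear, alpha_ci = pg.cronbach_alpha(data=fear_items)

# ICC on long-format data: targets = respondents, raters = items
long = fear_items.reset_index().melt(id_vars="index", var_name="item", value_name="score")
icc_table = pg.intraclass_corr(data=long, targets="index", raters="item", ratings="score")

# Kolmogorov-Smirnov normality check for a construct score
ks_stat, ks_p = stats.kstest(df["Fear"], "norm", args=(df["Fear"].mean(), df["Fear"].std()))

# Mann-Whitney U: not anxious at all vs. any level of anxiety
no_anx = df.loc[df["anxiety_group"] == "none", "Fear"]
any_anx = df.loc[df["anxiety_group"] == "any", "Fear"]
u_stat, u_p = stats.mannwhitneyu(no_anx, any_anx)

# Effect size with Hedges' correction, plus a point-biserial surrogate via Pearson r
hedges_g = pg.compute_effsize(no_anx, any_anx, eftype="hedges")
r_pb, r_p = stats.pearsonr((df["anxiety_group"] == "any").astype(int), df["Fear"])

# Kruskal-Wallis for more than two groups (e.g., across years of study)
h_stat, h_p = stats.kruskal(*[grp["Fear"] for _, grp in df.groupby("year_of_study")])
```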
Medical specialties were categorized based on the risk of job displacement by generative AI, factoring in the extent to which each specialty relies on procedural skills, personalized interactions, and automatable tasks. This categorization was agreed upon by the first, second, and senior authors as follows: high-risk specialties, such as Radiology, Pathology, and Dermatology, which involve the significant use of diagnostic imaging and pattern recognition that AI could replace; middle-risk specialties, like Internal Medicine, Psychiatry, Emergency Medicine, Obstetrics and Gynecology, Urology, and Anesthesiology, which could see moderate impacts from AI but retain crucial human elements; and low-risk specialties, including Pediatrics, General Surgery, Forensic Medicine, Orthopedics, Neurosurgery, Ophthalmology, and Plastic Surgery, which involve complex decision making and personalized care that are difficult to automate.
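This categorization can be expressed as a simple lookup, as sketched below with the same hypothetical DataFrame; the tier assignments follow the list above, while the column names remain assumptions.

```python
# Specialty -> AI job-displacement risk tier, following the categorization above.
SPECIALTY_RISK = {
    "Radiology": "high", "Pathology": "high", "Dermatology": "high",
    "Internal Medicine": "middle", "Psychiatry": "middle", "Emergency Medicine": "middle",
    "Obstetrics and Gynecology": "middle", "Urology": "middle", "Anesthesiology": "middle",
    "Pediatrics": "low", "General Surgery": "low", "Forensic Medicine": "low",
    "Orthopedics": "low", "Neurosurgery": "low", "Ophthalmology": "low",
    "Plastic Surgery": "low",
}
df["specialty_risk"] = df["desired_specialty"].map(SPECIALTY_RISK)  # "Other" stays unmapped (NaN)
```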
An exploratory factor analysis (EFA) was conducted using JASP software (Version 0.19.0) to assess the underlying factor structure of the 12-item scale [58]. The analysis employed maximum likelihood factoring with an oblimin rotation, assuming that the factors are correlated. The sample adequacy for factor analysis was evaluated using the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy and Bartlett’s test of sphericity. A confirmatory factor analysis (CFA) was conducted to assess the factor structure of the 12-item scale using two latent variables: Fear and Anxiety and Mistrust and Ethics. The model fit was evaluated using multiple indices, including the Root Mean Square Error of Approximation (RMSEA), the Standardized Root Mean Square Residual (SRMR), and the Goodness of Fit Index (GFI). Items with weak factor loadings or high error terms were considered for removal to improve the model fit. Three items were subsequently removed from the Mistrust and Ethics factor based on their low factor loadings and high uniqueness values.
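A minimal sketch of the EFA and CFA steps in Python is shown below, using factor_analyzer and semopy in place of JASP; the item names and the assignment of the three removed items (Mistrust (3), Ethics (2), and Ethics (3)) to specific columns follow the assumed item order from the scoring sketch and are therefore illustrative only.

```python
from factor_analyzer import FactorAnalyzer, calculate_bartlett_sphericity, calculate_kmo
import semopy

# Numeric 1-5 responses for the 12 items (assumed column names and coding)
item_cols = [f"item{i:02d}" for i in range(1, 13)]
items = df[item_cols].apply(lambda col: col.str.lower().map(LIKERT))

# Sampling adequacy (KMO) and Bartlett's test of sphericity
kmo_per_item, kmo_overall = calculate_kmo(items)
bartlett_chi2, bartlett_p = calculate_bartlett_sphericity(items)

# EFA: maximum likelihood extraction with oblimin rotation, two factors
efa = FactorAnalyzer(n_factors=2, method="ml", rotation="oblimin")
efa.fit(items)
loadings = efa.loadings_

# CFA with two latent variables, after dropping the three weak items
# (under the assumed order: item09 = Mistrust (3), item11 = Ethics (2), item12 = Ethics (3))
model_desc = """
FearAnxiety =~ item01 + item02 + item03 + item04 + item05 + item06
MistrustEthics =~ item07 + item08 + item10
"""
cfa = semopy.Model(model_desc)
cfa.fit(items)
fit_indices = semopy.calc_stats(cfa)  # includes RMSEA and GFI among other indices
```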

3. Results

3.1. General Features of Participating Medical Students

The final study sample comprised a total of 164 medical students with a mean age of 21.1 ± 2.3 years, with a majority being male (n = 88, 53.7%) and at the basic educational level (n = 113, 68.9%, Table 1).
Slightly less than three-quarters of this study’s participants reported prior use of at least one genAI model, while slightly more than half of this study’s participants reported a cumulative GPA of very good or excellent (Table 1).
For the genAI models used, the most commonly reported model was ChatGPT (n = 105, 64.0%), followed by My AI Snapchat (n = 29, 17.7%), Gemini (n = 17, 10.4%), and Copilot (n = 13, 7.9%, Figure 1).

3.2. The Level of Anxiety Toward genAI and Its Associated Determinants

Slightly over a third of this study’s sample reported being not anxious at all regarding the role of genAI models such as ChatGPT in their future careers as physicians (n = 56, 34.1%), while 61 students reported being slightly anxious (41.5%), 36 students reported being somewhat anxious (22.0%), and only 4 students reported being extremely anxious (2.4%). The demographic data did not show any statistically significant differences in the participants’ level of anxiety regarding genAI (Table 2).

3.3. FAME Constructs’ Reliability

Cronbach’s α values for the four FAME sub-scales were as follows: 0.874 for the Fear sub-scale, 0.880 for the Anxiety sub-scale, 0.724 for the Mistrust sub-scale, and 0.695 for the Ethics sub-scale. For the 12 items combined, Cronbach’s α was 0.853, reflecting robust internal consistency.
In terms of the ICC, the Fear sub-scale showed high reliability, with an ICC of 0.678 for the single measures and 0.863 for the average measures. The Anxiety sub-scale also exhibited high reliability, with an ICC of 0.673 for the single measures and 0.860 for the average measures. The Mistrust sub-scale displayed a moderate reliability of 0.405 for the single measures, but this increased to a good reliability of 0.671 for the average measures. Similarly, the Ethics sub-scale showed a moderate reliability of 0.388 for the single measures and improved to 0.656 for the average measures, indicating enhanced reliability and consistency when averaged across the respondents. The ICC values for the FAME sub-scales are detailed in Figure 2.
An EFA was conducted to identify the underlying structure of the dataset and to evaluate the relationships among the observed variables. The KMO test indicated a high level of sampling adequacy with an overall measure of 0.842, confirming that the data were suitable for factor analysis, with the individual measures of sampling adequacy (ranging from 0.731 to 0.936) all exceeding the acceptable threshold of 0.7. Bartlett’s test of sphericity was significant (χ2 = 1099.176, df = 66, p < 0.001), indicating that the correlations between items were sufficient for factor analysis. A chi-squared test of the model fit also yielded a significant result (χ2 = 127.379, df = 43, p < 0.001). The EFA revealed a two-factor solution based on a scree plot and eigenvalues greater than 1 (Figure 3).
The unrotated solution yielded eigenvalues of 4.854 and 2.558 for Factor 1 and Factor 2, respectively, accounting for 55.1% of the total variance (37.3% for Factor 1 and 17.8% for Factor 2). After applying an oblimin rotation, Factor 1 was composed of the six items representing the Fear and Anxiety constructs, with loadings ranging from 0.674 to 0.931, and moderate-to-low uniqueness values, indicating that these items were well explained by the underlying factor. Factor 2 consisted of six items representing the Mistrust and Ethics constructs, with loadings ranging from 0.461 to 0.871. The cumulative variance explained by the rotated factors was 54.7%, with Factor 1 (Fear and Anxiety) accounting for 34.7% and Factor 2 (Mistrust and Ethics) for 20.0%. Factor 1 had a sum of squared loadings of 4.163, accounting for 34.7% of the variance, while Factor 2 accounted for 20.0% of the variance, with a sum of squared loadings of 2.405.
The CFA revealed a two-factor structure, with the latent variables labeled as Fear and Anxiety and Mistrust and Ethics. The latent variable Fear and Anxiety was strongly associated with the observed variables of the initial six items of the FAME scale, with factor loadings ranging from 0.74 to 0.94. The error terms for these items ranged from 0.11 to 0.45, indicating that most of the variance was well explained by the Fear and Anxiety latent variable. The second latent variable, Mistrust and Ethics, initially included the remaining six items of the FAME scale. However, after reviewing the model fit, three items (Ethics (2), Ethics (3), and Mistrust (3)) were removed from the model due to their weaker loadings and higher error terms. These items had factor loadings ranging from 0.45 to 0.59 and exhibited high uniqueness values (0.65 to 0.80), suggesting they did not significantly contribute to explaining the Mistrust and Ethics construct. After the removal of these items, the remaining observed variables had factor loadings from 0.74 to 0.90, demonstrating stronger relationships with the Mistrust and Ethics construct. The removal of these three items improved the overall model fit. The RMSEA was 0.137 with a 90% confidence interval of 0.110 to 0.164 (p < 0.001). The SRMR was 0.096, which was still slightly above the desired threshold of 0.08. The GFI was high at 0.991. Taken together, these indices indicate a reasonable but imperfect model fit. The two latent variables were found to be moderately correlated (r = 0.20), indicating that “Fear & Anxiety” and “Mistrust & Ethics” are related but distinct constructs (Figure 4).

3.4. FAME Construct Scores

For the four FAME constructs, a higher level of agreement among the participants was seen for the Mistrust construct (mean: 12.35 ± 2.78), followed by the Ethics construct (mean: 10.86 ± 2.90) and Fear construct (mean: 9.49 ± 3.53), while the lowest level of agreement was seen for the Anxiety construct (mean: 8.91 ± 3.68, Figure 5). Statistically significant lower levels of agreement were seen among the participating medical students who were not anxious at all compared to those who showed any level of anxiety as future physicians towards genAI for three constructs, namely Fear, Anxiety, and Ethics (Figure 6).

3.5. Determinants of Anxiety about genAI among Participating Medical Students

Table 3 summarizes the determinants of Fear, Anxiety, Mistrust, and Ethics among the participating medical students toward genAI. No significant differences in Fear, Anxiety, Mistrust, or Ethics scores were found across the sexes, academic levels (basic vs. clinical), or GPA categories; however, lower Anxiety scores were marginally significant (p = 0.082) among students with higher academic achievement, reflected by a higher GPA.
When analyzed by their desired specialty, no significant differences emerged among the students aspiring to low-, middle-, or high-risk specialties, indicating uniform perceptions across the different desired fields. The number of genAI models used by the students also did not significantly influence the Fear, Anxiety, Mistrust, or Ethics scores, suggesting a consistent perception regardless of exposure level to genAI tools.
Importantly, the students who reported not being at all anxious about genAI models like ChatGPT had significantly lower Fear and Anxiety scores (p < 0.001 for both) and a lower Ethics score (p = 0.014) compared to the students who expressed any level of anxiety towards genAI (Table 3). However, this statistically significant difference did not extend to the Mistrust sub-scale between the two groups (p = 0.590).
In our analysis to assess the impact of anxiety towards generative AI models on the perceptions of Fear, Anxiety, and Ethics, significant effects were observed, as indicated by substantial effect sizes calculated using Cohen’s d and Hedges’ g. Specifically, for the Fear construct, Cohen’s d yielded a point estimate of 2.332 with a 95% confidence interval (CI) of 2.035 to 2.627, indicating a very large effect size (Pearson r = 0.411, p < 0.001). Similarly, Hedges’ correction resulted in a point estimate of 2.327 (95% CI: 2.031–2.621). For the Anxiety construct, Cohen’s d provided a point estimate of 2.038 (95% CI: 1.768–2.306), and Hedges’ correction gave a point estimate of 2.033 (95% CI: 1.764–2.300, Pearson r = 0.319, p < 0.001). For the Mistrust construct, Cohen’s d provided a point estimate of 3.831 (95% CI: 3.387–4.272), and Hedges’ correction gave a point estimate of 3.822 (95% CI: 3.379–4.263, Pearson r = 0.058, p = 0.462). Lastly, for the Ethics construct, Cohen’s d revealed a point estimate of 3.243 (95% CI: 2.858–3.625), with Hedges’ correction closely aligning with a point estimate of 3.235 (95% CI: 2.852–3.617, Pearson r = 0.205, p = 0.008).

4. Discussion

The analysis of fear, anxiety, mistrust, and ethical concerns among medical students in this study regarding genAI models reveals helpful insights. These insights highlight the psychological and ethical dimensions that would influence the adoption of these emerging technologies in medical education. This pilot study employed a new survey instrument, and both the EFA and CFA provided preliminary evidence for the validity of the novel FAME scale. Two distinct but related factors were identified: Fear and Anxiety and Mistrust and Ethics, capturing the key dimensions of medical students’ concerns about genAI in healthcare. The moderate correlation between the two factors suggests they are related but distinct. The results demonstrate the FAME scale’s ability to capture both the emotional and ethical dimensions of genAI integration into medical education.
Our study reveals significant concerns among medical students in Jordan regarding the implications of genAI models for their future careers. Surveying 164 students, we found that a substantial majority, over 72%, have already used genAI models, particularly ChatGPT. This high usage rate is consistent with global trends suggesting that the reliance on genAI models for academic support is becoming increasingly normalized. For example, a multi-national study involving participants from Brazil, India, Japan, the UK, and the USA highlighted that most students use ChatGPT for assistance with assignments and expect their peers to do the same, signaling a shift towards widespread acceptance of genAI tools in academic settings [59].
Moreover, a comprehensive study across several Arab countries—including Iraq, Kuwait, Egypt, Lebanon, and Jordan—engaged 2240 participants, and revealed that nearly half were aware of ChatGPT, and over half of them had used it before the study [60]. The favorable disposition towards genAI exemplified by ChatGPT was driven by its ease of use, positive technological attitudes, social influence, perceived usefulness, and minimal perceived risks and anxiety [60]. Similarly, a recent study from the United Arab Emirates (UAE) supports these findings, with the majority of the students surveyed reporting their routine use of ChatGPT, driven by its utility, ease of use, and the positive influence of social and cognitive factors [61]. Furthermore, a recent study of medical students in the U.S. showed that almost half of the surveyed students used ChatGPT in their medical studies [62]. Taken together, our findings, along with those of recent studies highlighted here, indicate a broad acceptance and integration of genAI models among university students, shaped by genAI models’ utility, ease of integration into daily tasks, and the broader, positive social perception of technological engagement [63,64,65,66].
In this study, approximately two-thirds of the medical students reported experiencing at least a mild level of anxiety about genAI. This anxiety was notable across different demographics, including sex, academic level, and varying GPA categories. This result reflects the widespread apprehension about genAI among the participants. This widespread concern is understandable given the broader levels of apprehension about genAI observed among university students globally. For example, a recent study that was conducted in Hong Kong involving 399 students across six universities and ten faculties revealed significant concerns [66]. Students feared that genAI would undermine the value of their university education and impede the development of essential skills, such as teamwork, problem solving, and leadership [66]. Additionally, a study among medical students in the UAE reported that a majority of the participants were worried that genAI would reduce trust in physicians, besides worries regarding the ethical impact of genAI in healthcare [21]. Moreover, a study among health students in Jordan, based on the technology acceptance model (TAM), revealed an overall anxious attitude towards ChatGPT among the participants [49]. This highlights the need for educational strategies to effectively integrate genAI into medical curricula while addressing the underlying justifiable anxiety among medical students.
Despite the notable anxiety reported in this study, the findings reveal even more pronounced levels of mistrust and ethical concerns among the participating medical students. Interestingly, while a related study among college students in Japan found a significant focus on unemployment as a major ethical issue with AI [67], our findings suggest that the concerns extend beyond personal job security to include broader ethical and trust issues associated with genAI applications in healthcare.
Of note, this study’s findings elucidate the significant psychological and ethical impacts of anxiety toward genAI models on medical students, revealing profound concerns regarding fear, anxiety, mistrust, and ethical considerations. The strong link with the Fear construct illustrates how apprehensions about genAI correlate with both general anxiety and specific fears concerning the future of medical practice and job security—a common trend observed with the introduction of new technologies [68,69,70,71]. These anxieties are likely fueled by uncertainties over how genAI might transform traditional medical roles, potentially replacing tasks currently undertaken by humans, thus sparking fears of job displacement and the diminution of human-centric skills in healthcare [72].
The fear of job displacement by genAI is not unique to the healthcare sector, as it resonates across various other occupational sectors. For example, studies in fields ranging from accounting to manufacturing have identified a correlation between the rise of AI and increased job displacement concerns, with policy recommendations often advocating for talent retention, investment in upskilling programs, and support mechanisms for those adversely affected by AI adoption [73]. In Germany, a manufacturing sector survey indicated that employee fears regarding AI are among the top barriers to its adoption, with non-managerial staff particularly expressing apprehension about AI implications for job security and workplace dynamics [74]. A study from Turkey revealed that while teacher candidates across various disciplines, ages, and genders show no apprehension about learning AI, they do express significant anxiety about its potential effects on employment and social dynamics [75].
In this study, the concerns among medical students about job displacement, while significant, were overshadowed by issues of ethics and mistrust. These findings reflect apprehensions about the ethical and empathetic dimensions of care—areas where AI is often perceived as lacking, as noted by Farhud and Zokaei [36]—despite the presence of recent evidence contradicting this viewpoint, including the promising potential of clinically oriented genAI models [76,77]. The pronounced levels of mistrust and ethical concerns found in this study may indicate that medical students fear not only potential job displacement but also doubt the capacity of genAI to fulfill the crucial humanistic aspects of healthcare, such as empathy and ethical judgment [78]. Our findings support this perspective, with the Mistrust construct receiving the highest level of agreement among the participants. This skepticism is deeply rooted in doubts about genAI’s ability to effectively handle the complex aspects of empathy and ethical decision making, as perceived by the medical students involved in our study.

4.1. Recommendations Based on This Study’s Findings

The findings of this study highlight the critical need for evolving medical curricula that incorporate comprehensive AI coverage, including genAI training [79,80,81,82,83]. This modification is recommended to illuminate the technical capabilities of genAI and to clarify its role in supplementing rather than replacing the role of human physicians [3,84]. This involves emphasizing genAI’s potential to enhance diagnostic precision, personalize treatment plans, and improve administrative efficiency, which have been thoroughly shown in recent literature [3,6,11,85,86]. Based on the current study’s results, incorporating both technical genAI training and ethical discussions into medical education appears crucial. These steps could help prepare medical students to address critical issues, such as data privacy, algorithmic bias, and patient autonomy. In a recent study, D’Souza et al. suggested a roadmap with twelve steps viewed as essential strategies for embedding ethics within genAI education in medicine [87].
The early introduction of medical students to genAI technology in medicine is expected to make them more comfortable with this inevitable technology and deepen their understanding of its impact on healthcare. An early narrative review suggested that transparency, explainability, and usability are key to building trust and ensuring effective collaboration between healthcare professionals and genAI in clinical settings [78]. Furthermore, Alam et al. stressed the importance of ethical considerations, robust technical infrastructure, and comprehensive faculty training to responsibly integrate genAI into medical education [88]. It is recommended that medical schools develop specific modules or courses focused on genAI ethics, highlighting transparency and clarity to foster trust among students, faculty, and genAI technologies [87,88,89]. These educational efforts are crucial for developing skilled and ethically informed practitioners who can adeptly handle the complexities of genAI in healthcare.
To address the considerable ethical concerns and mistrust regarding genAI among medical students, it is important to encourage ethical discussions on AI usage, data privacy, and patient-centered care in the medical training framework [90,91]. Incorporating role-playing, case studies, and ethical debates will help students train competently for the intricate moral issues they will encounter in their professional lives [39].
Moreover, as AI automates many technical tasks, enhancing uniquely human soft skills like emotional intelligence, communication, leadership, and adaptability will become crucial [92]. Promoting AI as a collaborative tool in healthcare rather than a competitor, and highlighting examples where AI and human physicians synergistically improve patient outcomes, will help position AI as an indispensable partner in healthcare [93].
Additionally, advocating for policies that protect healthcare workers’ job security in the wake of AI integration is important [22,72,94]. Clear guidelines on AI’s role in healthcare will ensure that it supports rather than replaces medical professionals [95,96]. Ongoing research into AI’s impacts, coupled with open dialogue among AI developers, healthcare professionals, educators, and policymakers, will create strategies to ensure that AI enhances rather than disrupts healthcare services [1,97].
This study advocates for medical curricula to thoroughly prepare future healthcare providers to integrate AI into their practices effectively, ensuring they deliver compassionate, competent, and ethically sound healthcare [98].

4.2. Study Limitations

The interpretation of this study’s results must be approached with caution due to several limitations, as follows. First, the cross-sectional survey design precluded establishing causality between medical students’ perceptions of genAI and the other study variables. Longitudinal studies are needed to track changes in students’ perceptions of AI as genAI technology progresses rapidly.
Second, this study relied on convenience and snowball sampling approaches for swift data collection, which are expected to introduce a notable sampling bias. These methods depend heavily on existing social networks and a participant’s willingness to engage, potentially misrepresenting the broader medical student population in Jordan and beyond. Consequently, these approaches, while cost effective, are prone to sampling bias and may not yield a sample representative of the broader population. We acknowledge that these factors limit the generalizability of our findings to the broader medical student community in Jordan or other demographic profiles. Given these considerations, we advise caution in extending our conclusions beyond the participants sampled.
Third, using social media and instant messaging platforms for student recruitment likely biased this study’s sample toward students who hold specific views on technology that may not reflect the broader medical students’ perspectives. Distributing the survey solely in Arabic further limited the diversity of responses, potentially impacting the depth of insights into how students’ perceptions vary across different cultural or educational backgrounds. Another limitation is the online data collection, which may have led to misunderstandings or biases. While it allowed for a broader reach, in-person data collection could improve the clarity and accuracy of future studies.
Fourth, although the CFA results indicate a coherent two-factor structure, several limitations must be noted due to the pilot nature of this study. The relatively high RMSEA (0.137) and SRMR (0.096) suggest that the model may not fully capture the relationships among the variables, leaving room for improvement. Additionally, the small sample size limits the generalizability of the findings. Further research with larger samples and model refinements is needed to confirm these results and improve the model fit.
Finally, while the literature review for constructing the survey instrument was thorough, the selection of sources and subsequent survey questions may have been influenced by our subjective biases, shaped by our backgrounds and personal experiences in healthcare education. This subjective approach might have resulted in overlooking other relevant themes or emerging genAI-related concerns. Thus, further testing and validation of the survey instrument used in this study are strongly recommended in future studies.

5. Conclusions

This study illustrates that while medical students are anxious about the impact of genAI on their future job prospects, their deeper concerns are centered on the ethical and trust-related implications of genAI in the medical profession. These findings highlight the importance of addressing both the psychological and ethical dimensions of genAI integration into medical education. Understanding how genAI can complement human capabilities in healthcare is essential for preparing future physicians. The notable levels of anxiety, fear, mistrust, and ethical concerns expressed by the medical students in this study highlight the need for increased genAI familiarity and competency within medical training, alongside discussions that emphasize the ethical challenges of genAI in healthcare. Addressing these concerns is crucial to ensure that future physicians are equipped to function in AI-driven future healthcare settings with confidence and ethical awareness.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ime3040031/s1, (Supplementary S1) The full questionnaire used in this study translated into English.

Author Contributions

Conceptualization, M.S.; methodology, M.S., K.A.-M., Y.M.A., O.A., A.A.S., Z.E.A., A.N.A. and M.B.; software, M.S.; validation, M.S. and M.B.; formal analysis, M.S.; investigation, M.S., K.A.-M., Y.M.A., O.A., A.A.S., Z.E.A., A.N.A. and M.B.; resources, M.S.; data curation, M.S., K.A.-M., Y.M.A., O.A., A.A.S., Z.E.A., A.N.A. and M.B.; writing—original draft preparation, M.S.; writing—review and editing, M.S., K.A.-M., Y.M.A., O.A., A.A.S., Z.E.A., A.N.A. and M.B.; visualization, M.S.; supervision, M.S.; project administration, M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of the Faculty of Pharmacy at the Applied Science Private University (reference number: 2024-PHA-25).

Informed Consent Statement

Informed consent was obtained from all the subjects involved in this study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors upon request.

Acknowledgments

We are deeply thankful to Hiba Abbasi and Khaled Al-Salahat for their feedback on the content of the initial draft of the developed survey instrument.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

AI: Artificial Intelligence
CI: Confidence Interval
FAME: Fear, Anxiety, Mistrust, and Ethics
genAI: Generative Artificial Intelligence
GFI: Goodness of Fit Index
GPA: Grade Point Average
ICC: Intraclass Correlation Coefficient
KMO: Kaiser–Meyer–Olkin Measure
RMSEA: Root Mean Square Error of Approximation
SD: Standard Deviation
SRMR: Standardized Root Mean Square Residual
TAM: Technology Acceptance Model
UAE: United Arab Emirates

References

  1. Alowais, S.A.; Alghamdi, S.S.; Alsuhebany, N.; Alqahtani, T.; Alshaya, A.I.; Almohareb, S.N.; Aldairem, A.; Alrashed, M.; Bin Saleh, K.; Badreldin, H.A.; et al. Revolutionizing healthcare: The role of artificial intelligence in clinical practice. BMC Med. Educ. 2023, 23, 689. [Google Scholar] [CrossRef]
  2. Sallam, M.; Salim, N.A.; Barakat, M.; Al-Tammemi, A.B. ChatGPT applications in medical, dental, pharmacy, and public health education: A descriptive study highlighting the advantages and limitations. Narra J. 2023, 3, e103. [Google Scholar] [CrossRef] [PubMed]
  3. Sallam, M. ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns. Healthcare 2023, 11, 887. [Google Scholar] [CrossRef] [PubMed]
  4. Hashmi, N.; Bal, A.S. Generative AI in higher education and beyond. Bus. Horiz. 2024, in press. [CrossRef]
  5. Lee, D.; Arnold, M.; Srivastava, A.; Plastow, K.; Strelan, P.; Ploeckl, F.; Lekkas, D.; Palmer, E. The impact of generative AI on higher education learning and teaching: A study of educators’ perspectives. Comput. Educ. Artif. Intell. 2024, 6, 100221. [Google Scholar] [CrossRef]
  6. Yilmaz Muluk, S.; Olcucu, N. The Role of Artificial Intelligence in the Primary Prevention of Common Musculoskeletal Diseases. Cureus 2024, 16, e65372. [Google Scholar] [CrossRef]
  7. Sheikh Faisal, R.; Nghia, D.-T.; Niels, P. Generative AI in Education: Technical Foundations, Applications, and Challenges. In Artificial Intelligence for Quality Education; Seifedine, K., Ed.; IntechOpen: Rijeka, Croatia, 2024. [Google Scholar] [CrossRef]
  8. Acar, O.A. Commentary: Reimagining marketing education in the age of generative AI. Int. J. Res. Mark. 2024, in press. [Google Scholar] [CrossRef]
  9. Chiu, T.K.F. The impact of Generative AI (GenAI) on practices, policies and research direction in education: A case of ChatGPT and Midjourney. Interact. Learn. Environ. 2023, 2023, 1–17. [Google Scholar] [CrossRef]
  10. Barakat, M.; Salim, N.A.; Sallam, M. Perspectives of University Educators Regarding ChatGPT: A Validation Study Based on the Technology Acceptance Model. Res. Sq. 2024, 1–25. [Google Scholar] [CrossRef]
  11. Sallam, M.; Al-Farajat, A.; Egger, J. Envisioning the Future of ChatGPT in Healthcare: Insights and Recommendations from a Systematic Identification of Influential Research and a Call for Papers. Jordan Med. J. 2024, 58, 236–249. [Google Scholar] [CrossRef]
  12. Mijwil, M.; Abotaleb, M.; Guma, A.L.I.; Dhoska, K. Assigning Medical Professionals: ChatGPT’s Contributions to Medical Education and Health Prediction. Mesopotamian J. Artif. Intell. Healthc. 2024, 2024, 76–83. [Google Scholar] [CrossRef] [PubMed]
  13. Roos, J.; Kasapovic, A.; Jansen, T.; Kaczmarczyk, R. Artificial Intelligence in Medical Education: Comparative Analysis of ChatGPT, Bing, and Medical Students in Germany. JMIR Med. Educ. 2023, 9, e46482. [Google Scholar] [CrossRef] [PubMed]
  14. Lim, W.M.; Gunasekara, A.; Pallant, J.L.; Pallant, J.I.; Pechenkina, E. Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. Int. J. Manag. Educ. 2023, 21, 100790. [Google Scholar] [CrossRef]
  15. Safranek, C.W.; Sidamon-Eristoff, A.E.; Gilson, A.; Chartash, D. The Role of Large Language Models in Medical Education: Applications and Implications. JMIR Med. Educ. 2023, 9, e50945. [Google Scholar] [CrossRef]
  16. Wani, S.U.D.; Khan, N.A.; Thakur, G.; Gautam, S.P.; Ali, M.; Alam, P.; Alshehri, S.; Ghoneim, M.M.; Shakeel, F. Utilization of Artificial Intelligence in Disease Prevention: Diagnosis, Treatment, and Implications for the Healthcare Workforce. Healthcare 2022, 10, 608. [Google Scholar] [CrossRef]
  17. Howard, J. Artificial intelligence: Implications for the future of work. Am. J. Ind. Med. 2019, 62, 917–926. [Google Scholar] [CrossRef]
  18. George, A.S.; George, A.S.H.; Martin, A.S.G. ChatGPT and the Future of Work: A Comprehensive Analysis of AI’s Impact on Jobs and Employment. Partn. Univers. Int. Innov. J. 2023, 1, 154–186. [Google Scholar] [CrossRef]
  19. Yang, Y.; Ngai, E.W.T.; Wang, L. Resistance to artificial intelligence in health care: Literature review, conceptual framework, and research agenda. Inf. Manag. 2024, 61, 103961. [Google Scholar] [CrossRef]
  20. Stoumpos, A.I.; Kitsios, F.; Talias, M.A. Digital Transformation in Healthcare: Technology Acceptance and Its Applications. Int. J. Environ. Res. Public Health 2023, 20, 3407. [Google Scholar] [CrossRef]
  21. Alkhaaldi, S.M.I.; Kassab, C.H.; Dimassi, Z.; Oyoun Alsoud, L.; Al Fahim, M.; Al Hageh, C.; Ibrahim, H. Medical Student Experiences and Perceptions of ChatGPT and Artificial Intelligence: Cross-Sectional Study. JMIR Med. Educ. 2023, 9, e51302. [Google Scholar] [CrossRef]
Figure 1. The generative artificial intelligence (genAI) models used, as self-reported by this study's participants.
Figure 2. The Intraclass Correlation Coefficient (ICC) for the items of the four FAME sub-scales. Higher correlations are indicated by deeper shades of green.
Figure 3. Scree plot representing the eigenvalues of the factors identified by the exploratory factor analysis (EFA).
Figure 4. Path diagram of the two-factor confirmatory factor analysis (CFA) model: Fear and Anxiety (FA), and Mistrust and Ethics (ME). F: Fear; A: Anxiety; M: Mistrust; E: Ethics.
Figure 5. Whisker plots showing the distribution of the four FAME (Fear, Anxiety, Mistrust, and Ethics) construct scores.
Figure 6. Error bars showing the four FAME construct scores stratified by the participating medical students' level of anxiety towards generative artificial intelligence (genAI). CI: confidence interval of the mean.
Table 1. General features of participating medical students (n = 164).

Variable | Category | Count | Percentage
Sex | Male | 88 | 53.7%
  | Female | 76 | 46.3%
Academic year | First year | 25 | 15.2%
  | Second year | 52 | 31.7%
  | Third year | 36 | 22.0%
  | Fourth year | 20 | 12.2%
  | Fifth year | 19 | 11.6%
  | Sixth year | 12 | 7.3%
GPA 1 | Unsatisfactory | 8 | 4.9%
  | Satisfactory | 15 | 9.1%
  | Good | 57 | 34.8%
  | Very good | 65 | 39.6%
  | Excellent | 19 | 11.6%
Desired specialty classification based on the risk of job loss due to genAI 2 | Low risk 3 | 68 | 51.5%
  | Middle risk 4 | 48 | 36.4%
  | High risk 5 | 16 | 12.1%
How anxious are you about genAI models, like ChatGPT, as a future physician? | Not at all | 56 | 34.1%
  | Slightly anxious | 68 | 41.5%
  | Somewhat anxious | 36 | 22.0%
  | Extremely anxious | 4 | 2.4%
Number of genAI models used | 0 | 45 | 27.4%
  | 1 | 77 | 47.0%
  | 2 | 33 | 20.1%
  | 3 | 5 | 3.0%
  | 4 | 4 | 2.4%
1 GPA: Grade Point Average; 2 genAI: generative artificial intelligence; 3 Low-risk specialties: Pediatrics, General Surgery, Forensic Medicine, Orthopedics, Neurosurgery, Ophthalmology, and Plastic Surgery; 4 Middle-risk specialties: Internal Medicine, Psychiatry, Emergency Medicine, Obstetrics and Gynecology, Urology, and Anesthesiology; 5 High-risk specialties: Radiology, Pathology, and Dermatology.
Table 2. Anxiety of the participating medical students regarding generative artificial intelligence (genAI) models as future physicians.

How anxious are you about genAI models, like ChatGPT, as a future physician?
Variable | Category | Not at All, Count (%) | Slightly Anxious, Somewhat Anxious, or Extremely Anxious, Count (%) | p Value
Age | Mean ± SD 2 | 21.66 ± 3.12 | 20.8 ± 1.61 | 0.186
Sex | Male | 29 (33.0) | 59 (67.0) | 0.729
  | Female | 27 (35.5) | 49 (64.5) |
Level | Basic | 36 (31.9) | 77 (68.1) | 0.358
  | Clinical | 20 (39.2) | 31 (60.8) |
GPA 1 | Unsatisfactory, satisfactory, good | 26 (32.5) | 54 (67.5) | 0.664
  | Very good, excellent | 30 (35.7) | 54 (64.3) |
Desired specialty | Low risk 3 | 19 (27.9) | 49 (72.1) | 0.504
  | Middle risk 4 | 18 (37.5) | 30 (62.5) |
  | High risk 5 | 6 (37.5) | 10 (62.5) |
Number of genAI models used | 0 | 14 (31.1) | 31 (68.9) | 0.895
  | 1 | 28 (36.4) | 49 (63.6) |
  | 2 | 10 (30.3) | 23 (69.7) |
  | 3 | 2 (40.0) | 3 (60.0) |
  | 4 | 2 (50.0) | 2 (50.0) |
1 GPA: Grade Point Average; 2 SD: standard deviation; 3 Low-risk specialties: Pediatrics, General Surgery, Forensic Medicine, Orthopedics, Neurosurgery, Ophthalmology, and Plastic Surgery; 4 Middle-risk specialties: Internal Medicine, Psychiatry, Emergency Medicine, Obstetrics and Gynecology, Urology, and Anesthesiology; 5 High-risk specialties: Radiology, Pathology, and Dermatology.
Table 3. The determinants of anxiety of the participating medical students towards genAI, including the FAME sub-scales.

Variable | Category | Fear, Mean ± SD 6 (p Value) | Anxiety, Mean ± SD (p Value) | Mistrust, Mean ± SD (p Value) | Ethics, Mean ± SD (p Value)
Sex | Male | 9.55 ± 3.31 (0.972) | 9.00 ± 3.61 (0.739) | 12.3 ± 2.75 (0.728) | 10.86 ± 3.10 (0.983)
  | Female | 9.42 ± 3.77 | 8.80 ± 3.78 | 12.41 ± 2.82 | 10.86 ± 2.66
Level | Basic | 9.57 ± 3.54 (0.683) | 8.85 ± 3.62 (0.781) | 12.33 ± 2.70 (0.596) | 10.65 ± 2.86 (0.108)
  | Clinical | 9.31 ± 3.52 | 9.04 ± 3.84 | 12.39 ± 2.97 | 11.31 ± 2.96
GPA 1 | Unsatisfactory, satisfactory, good | 9.51 ± 3.53 (0.803) | 9.36 ± 3.81 (0.082) | 12.14 ± 2.87 (0.277) | 10.86 ± 2.88 (0.987)
  | Very good, excellent | 9.46 ± 3.54 | 8.48 ± 3.52 | 12.55 ± 2.69 | 10.86 ± 2.93
Desired specialty | Low risk 3 | 9.85 ± 3.43 (0.504) | 9.09 ± 3.62 (0.796) | 12.35 ± 2.87 (0.953) | 11.18 ± 2.88 (0.812)
  | Middle risk 4 | 9.10 ± 3.58 | 8.79 ± 3.92 | 12.44 ± 2.74 | 11.19 ± 2.71
  | High risk 5 | 8.94 ± 3.57 | 9.63 ± 3.58 | 12.94 ± 1.88 | 10.75 ± 2.98
Number of genAI 2 models used | 0 | 10.31 ± 3.38 (0.362) | 9.47 ± 3.49 (0.581) | 12.27 ± 2.59 (0.106) | 10.87 ± 2.52 (0.496)
  | 1 | 9.06 ± 3.56 | 8.51 ± 3.61 | 12.57 ± 2.91 | 10.78 ± 3.05
  | 2 | 9.39 ± 3.60 | 8.88 ± 3.85 | 12.64 ± 2.19 | 11.18 ± 2.78
  | 3 | 9.80 ± 4.21 | 10.40 ± 5.90 | 9.80 ± 4.44 | 11.40 ± 4.93
  | 4 | 8.75 ± 3.20 | 8.75 ± 3.20 | 9.75 ± 2.63 | 9.00 ± 2.58
How anxious are you about genAI models, like ChatGPT, as a future physician? | Not at all | 7.48 ± 3.62 (<0.001) | 7.29 ± 4.06 (<0.001) | 12.13 ± 3.04 (0.590) | 10.04 ± 2.97 (0.014)
  | Slightly anxious, somewhat anxious, or extremely anxious | 10.53 ± 3.00 | 9.75 ± 3.17 | 12.46 ± 2.64 | 11.29 ± 2.78
1 GPA: Grade Point Average; 2 genAI: generative artificial intelligence; 3 Low-risk specialties: Pediatrics, General Surgery, Forensic Medicine, Orthopedics, Neurosurgery, Ophthalmology, and Plastic Surgery; 4 Middle-risk specialties: Internal Medicine, Psychiatry, Emergency Medicine, Obstetrics and Gynecology, Urology, and Anesthesiology; 5 High-risk specialties: Radiology, Pathology, and Dermatology; 6 SD: standard deviation. Statistically significant p values are highlighted in bold.
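Tables 2 and 3 report p values for comparisons of anxiety categories and FAME construct scores across student groups. As a generic, hedged illustration only (the exact tests and software behind the published values are not restated here, and the score arrays below are placeholders rather than the study dataset), comparisons of this general kind can be computed in Python with SciPy: a chi-squared test of independence for count data such as the sex-by-anxiety cross-tabulation in Table 2, and a Mann-Whitney U test for two-group comparisons of construct scores such as those in Table 3.

```python
import numpy as np
from scipy import stats

# --- Count comparison (Table 2 style): anxiety category by sex ---
# Rows: Male, Female; columns: "Not at all", any level of anxiety (counts from Table 2).
observed = np.array([[29, 59],
                     [27, 49]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(observed)
print(f"Chi-squared test: chi2 = {chi2:.3f}, dof = {dof}, p = {p_chi2:.3f}")

# --- Score comparison (Table 3 style): Fear construct scores in two anxiety groups ---
# Placeholder arrays standing in for per-student Fear scores (possible range 3-15);
# group sizes mirror the "Not at all" (n = 56) and anxious (n = 108) groups.
rng = np.random.default_rng(seed=1)
fear_not_anxious = rng.integers(3, 16, size=56)
fear_anxious = rng.integers(3, 16, size=108)
u_stat, p_mwu = stats.mannwhitneyu(fear_not_anxious, fear_anxious,
                                   alternative="two-sided")
print(f"Mann-Whitney U test: U = {u_stat:.1f}, p = {p_mwu:.3f}")
```

Substituting real per-respondent scores for the placeholder arrays yields p values comparable in kind, though not necessarily in method, to those tabulated above.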
