Article

A Digital Sustainability Lens: Investigating Medical Students’ Adoption Intentions for AI-Powered NLP Tools in Learning Environments

by
Mostafa Aboulnour Salem
1,2
1
Deanship of Development and Quality Assurance, King Faisal University, Al-Ahsa 31982, Saudi Arabia
2
Department of Curricula and Teaching Methods, College of Education, King Faisal University, Al-Ahsa 31982, Saudi Arabia
Sustainability 2025, 17(14), 6379; https://doi.org/10.3390/su17146379
Submission received: 8 June 2025 / Revised: 2 July 2025 / Accepted: 3 July 2025 / Published: 11 July 2025

Abstract

This study investigates medical students’ intentions to adopt AI-powered Natural Language Processing (NLP) tools (e.g., ChatGPT, Copilot) within educational contexts aligned with the perceived requirements of digital sustainability. Based on the Unified Theory of Acceptance and Use of Technology (UTAUT), data were collected from 301 medical students in Saudi Arabia and analysed using Partial Least Squares Structural Equation Modelling (PLS-SEM). The results indicate that Performance Expectancy (PE) (β = 0.65), Effort Expectancy (EE) (β = 0.58), and Social Influence (SI) (β = 0.53) collectively and significantly predict Behavioural Intention (BI), explaining 62% of the variance in BI (R2 = 0.62). AI awareness did not significantly influence students’ responses or the relationships among constructs, possibly because practical familiarity and widespread exposure to AI-NLP tools exert a stronger influence than general awareness. Moreover, BI exhibited a strong positive effect on perceptions of digital sustainability (PDS) (β = 0.72, R2 = 0.51), highlighting a meaningful link between AI adoption and sustainable digital practices. Consequently, these findings indicate the strategic role of AI-driven NLP tools as both educational innovations and key enablers of digital sustainability, aligning with global frameworks such as the Sustainable Development Goals (SDGs) 4 and 9. The study also underscores AI’s transformative potential in medical education and recommends further research, particularly longitudinal studies, to better understand the evolving impact of AI awareness on students’ adoption behaviours.


1. Introduction

As global challenges progress, education, technology, and innovation remain central to achieving sustainable development [1]. Moreover, the United Nations’ 2030 Agenda outlines 17 Sustainable Development Goals (SDGs), with SDG 4 aiming to ensure inclusive and quality education for all, and SDG 9 focusing on building resilient infrastructure, promoting innovation, and advancing sustainable industrialisation [2,3]. These goals highlight the crucial role of education and technology in promoting social, economic, and environmental sustainability [4].
Additionally, although no Sustainable Development Goal (SDG) targets explicitly mention information and communication technologies (ICTs), digital connectivity plays a critical enabling role by enhancing access to knowledge, fostering collaboration, and expanding opportunities for educational development [5,6]. However, this rapid digital expansion raises sustainability concerns, particularly regarding the increasing use of digital devices and services, which significantly contribute to global energy consumption and greenhouse gas emissions. The digital sector is estimated to account for approximately 4% of global emissions [7,8]. Hence, these environmental impacts necessitate a more responsible approach to digital innovation, known as digital sustainability, which balances technological advancements with ecological and ethical considerations [9].
Furthermore, AI-powered Natural Language Processing (NLP) tools such as ChatGPT and Copilot are gaining attention for their ability to personalise learning, streamline academic tasks, and support real-time feedback [10]. As well, these tools align with SDG 4 by enhancing learning outcomes and with SDG 9 by promoting educational innovation [11]. In the Saudi Arabian context, national strategies have increasingly adopted the integration of AI within higher education to support these goals [12].
However, the adoption of AI tools in higher education, particularly in medical education, also presents numerous challenges. Students may face risks, including overreliance on AI-generated content, diminished critical thinking, and the potential for receiving inaccurate or contextually inappropriate responses [13,14]. Therefore, these issues underscore the need for AI literacy, particularly in clinical education, to ensure responsible and effective use of such tools. Accordingly, this study examines medical students’ behavioural intentions to adopt AI-powered Natural Language Processing (AI-NLP) tools, considering both usability and sustainability dimensions.
Based on the Unified Theory of Acceptance and Use of Technology (UTAUT), the study extends the existing model by introducing AI awareness as a moderating factor. Moreover, it explores the relationship between students’ intentions to use AI tools and their perceptions of digital sustainability (PDS), a critically underexplored area in AI and educational research. This study, thus, contributes to a deeper understanding of how AI-NLP tools can be leveraged, not only to enhance educational outcomes in medical training, but also to support broader goals of ethical and sustainable innovation in higher education.

2. Theoretical Review and Research Hypotheses

2.1. Technology Acceptance in Medical Education

This study explores medical students’ behavioural intention to adopt AI-powered Natural Language Processing (NLP) models, such as Copilot and ChatGPT, within the context of medical education, with a particular focus on students’ perceptions of digital sustainability in AI-integrated learning environments.
Additionally, to support this exploration, an extensive review of the literature on technology acceptance was conducted, including foundational models such as the Theory of Planned Behaviour (TPB) [15], the Value-Based Adoption Model (VAM) [16], the Technology Acceptance Model (TAM) [17], and the Unified Theory of Acceptance and Use of Technology (UTAUT) [18]. Likewise, the UTAUT framework has been widely applied to examine users’ acceptance of emerging technologies across diverse contexts [18,19,20], including AI-based systems in education [18,21,22,23].
Furthermore, the review encompassed studies on learners’ use of AI assistants [24], AI-NLP tools [25], AI large language models (LLMs) [26], AI-powered customer relationship management (CRM) systems [27], and medical AI tools [28]. In addition, numerous studies have incorporated key factors such as user awareness [29], motivation [30], assessment [31], and digital learning practices [32].
However, to the author’s knowledge, despite increasing global attention, relatively few studies have applied the UTAUT framework to AI adoption in medical education within Saudi Arabia [33,34,35]. Moreover, most existing research focuses on general educational technologies, often overlooking AI-NLP-specific applications and the contextual challenges of healthcare education.
Furthermore, AI awareness is rarely examined as a moderating factor, despite its crucial role in shaping users’ readiness and responsible engagement with AI systems. Additionally, empirical studies investigating how AI awareness interacts with UTAUT variables, specifically Performance Expectancy (PE), Effort Expectancy (EE), and Social Influence (SI), in the context of AI-NLP adoption within medical learning environments are limited. Moreover, in medical education, Performance Expectancy (PE) reflects students’ anticipation that AI-NLP tools will enhance their clinical learning skills.
Additionally, Effort Expectancy (EE) pertains to the perceived ease of integrating these tools into academic routines, while Social Influence (SI) captures the degree to which peers, instructors, and institutional norms encourage students to adopt them. These constructs (PE, EE, and SI) are examined here in a high-stakes environment, where perceived utility strongly shapes the behavioural intention (BI) to adopt such tools.
Furthermore, to address these gaps, the current study incorporates AI awareness as a moderating variable. It employs the UTAUT framework to investigate how PE, EE, and SI influence medical learners’ intention to use AI-NLP tools for academic and clinical purposes. This leads to the formulation of the following hypotheses:
H1. Performance Expectancy (PE) has a positive effect on medical learners’ Behavioural Intention (BI) to use AI-NLP tools;
H2. Effort Expectancy (EE) positively influences medical learners’ BI to use AI-NLP tools;
H3. Social Influence (SI) positively affects medical learners’ BI to use AI-NLP tools.

2.2. The Role of Students’ AI Awareness in Medical Education

AI Natural Language Processing (NLP) tools, such as ChatGPT (powered by OpenAI), Gemini (powered by Google), Copilot and BioGPT (both powered by Microsoft), PubMedBERT, and others, produce human-like language outputs [36]. Additionally, AI-NLP tools are trained on large-scale datasets and use transformer-based architectures (e.g., GPT-4) to generate contextually relevant and coherent responses [37].
Recently, several studies have shown that AI-NLP tools such as ChatGPT and Copilot excel in general conversational tasks, including tutoring, summarising texts, explaining complex ideas, and simulating clinical interactions [38,39,40,41]. However, integrating such tools into medical education presents considerable challenges: medical learners must navigate non-verifiable outputs, hallucinated content, and oversimplified reasoning. These issues may lead to over-reliance, reduced critical thinking, and misinterpretation of AI-generated information, ultimately compromising clinical reasoning and academic accuracy [40,41].
In this context, AI awareness emerges as a critical factor: informed learners are better equipped to evaluate AI responses, whereas uninformed users may misuse or misunderstand the tool’s capabilities and limitations [42,43]. However, to the author’s knowledge, only a few studies in Saudi Arabia have examined the use of AI-NLP tools in medical education, particularly concerning the role of learners’ awareness of the potential and limitations of these tools in determining their effectiveness.
Additionally, this aspect is particularly crucial in the context of digital sustainability, where insufficient awareness may lead to misinterpretation of AI outputs, resulting in erroneous conclusions and ineffective learning outcomes [44]. Thus, AI awareness may moderate the strength of the relationships between the UTAUT constructs and behavioural intention, especially in high-stakes learning environments such as medical education. To test this, the study proposes the following hypotheses:
H4. AI awareness (AW) moderates the relationship between Performance Expectancy (PE) and Behavioural Intention (BI);
H5. AW moderates the relationship between Effort Expectancy (EE) and BI;
H6. AW moderates the relationship between Social Influence (SI) and BI.

2.3. Digital Sustainability in Education: Implications and Conceptual Framework

Digital sustainability leverages the tools of digital transformation, such as enhanced connectivity, artificial intelligence (AI), and the Internet of Things (IoT), to improve environmental outcomes and support sustainable institutional and operational practices [45]. The concept of digital sustainability has evolved: initially, it focused on the preservation and long-term maintenance of digital content and infrastructure [46]; its modern interpretation, however, emphasises the use of digital technologies to foster environmentally and socially responsible ecosystems [47].
Furthermore, this expanded definition aligns closely with efforts to realise the Sustainable Development Goals (SDGs) by promoting ethical, inclusive, and resource-efficient digital solutions [48]. Additionally, emerging trends in this field include AI-powered optimisation of circular economy models and a broader movement toward green digital transformation [49].
In the context of higher education, digital sustainability encompasses not only institutional commitments to technological integration but also broader concerns regarding ecological footprint, equity, and digital ethics. Despite a growing body of research on learners’ AI adoption [11,14,38,39,41,50], a clear gap remains in the Saudi medical education literature regarding how students perceive the digital sustainability implications of AI-NLP tools and how these perceptions influence their learning behaviour.
Therefore, the current study aims to fill this gap by examining how Behavioural Intention (BI) to use AI-NLP tools is linked to students’ Perceived Digital Sustainability (PDS) requirements in educational contexts. This leads to the final hypothesis:
H7. Behavioural Intention (BI) to use AI-NLP tools positively influences learners’ perceptions of digital sustainability (PDS) requirements.
Figure 1 presents the conceptual model guiding this research, explaining the relationships among the study variables.

3. Methods and Materials

3.1. Population and Sample

This study was conducted at King Faisal University (KFU), a leading public university in Al-Ahsa, Eastern Province, Saudi Arabia, recognised for its strong focus on medical and applied sciences. Given its strategic emphasis on digital transformation and alignment with Saudi Vision 2030, KFU provides an ideal setting for exploring students’ behavioural intentions toward AI integration and digital sustainability in education [51].
A random sample of 301 undergraduate students was selected from five health-related faculties: medicine, dentistry, pharmacy, veterinary science, and nursing. This ensured broad representation across key disciplines involved in both clinical practice and digital learning. Table 1 presents the distribution of participants’ gender.
The sample included 241 females (80%) and 60 males (20%), which closely reflects the actual gender balance in KFU’s health faculties. Female enrollment has consistently outnumbered male enrollment between 2022 and 2024, a trend that aligns with national patterns in Saudi higher education, especially within healthcare disciplines [52].
The required sample size was justified using the minimum-R2 method [53]: with three predictors of Behavioural Intention (BI), a sample of at least 37 is required to detect an R2 of at least 0.25 at a 5% significance level. Furthermore, the sample size satisfies the 10-times rule, as it far exceeds 30, i.e., 10 times the number of arrowheads (3) pointing at the BI construct [54].
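The 10-times rule described above reduces to a one-line calculation; the following minimal sketch (the function name is ours, not from any PLS-SEM package) makes the check explicit:

```python
def min_sample_ten_times_rule(max_paths_into_construct: int) -> int:
    """Minimum sample size under the 10-times rule: ten times the largest
    number of structural paths (arrowheads) pointing at any one construct."""
    return 10 * max_paths_into_construct

# Three paths (PE, EE, SI) point at BI, so the minimum is 30;
# the study's sample of n = 301 comfortably exceeds it.
assert min_sample_ten_times_rule(3) == 30
assert 301 >= min_sample_ten_times_rule(3)
```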

3.2. Instrument and Data Collection

The study employed an anonymous survey to collect data, ensuring no sensitive or privacy-related concerns were raised. Additionally, participants were informed that the questionnaire was anonymous and that their participation was entirely voluntary. To maximise participation, the survey link was disseminated via email, with faculty members from various colleges at King Faisal University encouraging learner engagement by emphasising the study’s relevance to the future of higher education.
Additionally, the questionnaire was divided into three main parts: (1) a consent statement explaining the study’s purpose and that participation is voluntary, (2) a demographic section asking for respondents’ gender and age, and (3) a set of Likert-scale questions evaluating the study’s theoretical model. Data collection was conducted in January 2025.
The questionnaire was developed using validated measurement items adapted from previous research. Performance Expectancy (PE) was measured with five items (PE1–PE5) adapted from Park et al. (2022) [55], Duong et al. (2024) [56], and Mushtaq (2024) [57]. Similarly, Effort Expectancy (EE) was measured with five items (EE1–EE5) adapted from Dahri et al. (2024) [58], Portillo et al. (2025) [59], and Huang et al. (2025) [60].
Moreover, Social Influence (SI) was assessed using five survey items (SI1–SI5) taken from Nakhaie (2024) [26], Park et al. (2022) [55], and Dahri et al. (2024) [58]. Furthermore, AI Awareness (AW) was measured with three adapted questions (AW1–AW3) inspired by the studies of Monaco (2024) [48], Darji and Singh (2025) [61], and Suzer and Koc (2024) [62].
Additionally, Behavioural Intentions (BI) were measured using three adapted items from Venkatesh et al. (2003) [63], which were explicitly modified to refer to artificial intelligence. Likewise, Perceptions of Digital Sustainability (PDS) requirements were evaluated using three survey items (PDS1–PDS3) sourced from Salem (2025) [51], Mushtaq (2024) [57], and Thompson and Okonkwo (2025) [64].
To ensure the suitability of the adapted items in the Saudi Arabian context, a localisation validation process was undertaken. The questionnaire was reviewed by thirteen bilingual experts in medical education and information technology to assess content validity. Moreover, the questionnaire (Arabic/English) was provided to ten medical students to evaluate clarity, comprehension, and contextual suitability.
Based on expert and student feedback, minor wording changes were made to enhance linguistic precision and contextual clarity. Additionally, specific questions (PE1, EE1, EE2, SI4, and AW2) were revised. To evaluate the instrument’s internal consistency, Cronbach’s Alpha (α) and McDonald’s Omega (Ω) were calculated. All constructs demonstrated high reliability (see Table 2).
These reliability values confirm the instrument’s robustness and its suitability for use in the Saudi medical education context.
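For readers wishing to reproduce these reliability coefficients, Cronbach’s alpha can be computed from raw item scores and McDonald’s omega from standardised factor loadings using the standard textbook formulas; the numpy sketch below is illustrative only (the example loadings are hypothetical, not the study’s actual values):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Alpha from an (n_respondents, k_items) matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def mcdonald_omega(loadings) -> float:
    """Omega for a unidimensional construct from standardised loadings."""
    lam = np.asarray(loadings, dtype=float)
    common = lam.sum() ** 2               # variance due to the common factor
    return common / (common + (1.0 - lam ** 2).sum())

# Hypothetical loadings for a five-item construct:
print(round(mcdonald_omega([0.80, 0.83, 0.85, 0.86, 0.87]), 3))
```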
Additionally, all constructs were modelled as reflective latent variables and analysed using Partial Least Squares Structural Equation Modelling (PLS-SEM) (see Figure 2). This approach was explicitly chosen to examine the hypothesised structural relationships among the study’s latent constructs [54] and aligns with the study’s focus on latent variables measured through observed indicators [63].
Methodologically, PLS-SEM was selected because (1) it effectively manages complex causal models with multiple reflectively measured latent constructs [65]; (2) it robustly incorporates both direct and moderating effects (e.g., the influence of AI awareness) through interaction term estimation [54]; and (3) it is particularly appropriate for predictive exploratory research with moderate sample sizes (n = 301) and non-normal data distributions [63,65]. These considerations collectively justify the use of PLS-SEM for testing the theoretical model and hypotheses.

3.3. Hypothesis Examining Approach

The study implemented a sequential analytical approach to examine the hypothesised relationships. The initial analysis focused on the direct effects of the three core UTAUT constructs (PE, EE, and SI) on BI (H1–H3) and of BI on PDS (H7). Consistent with methodological best practices [54], the moderator (AI awareness) was intentionally excluded from this initial analysis to ensure accurate interpretation of the main effects [66]. This approach prevents the potential confounding that can occur when main and interaction effects are examined simultaneously in a single model [67].
Before examining structural relationships, the measurement model was rigorously evaluated to ensure the psychometric quality of all constructs. The assessment followed established PLS-SEM guidelines and included four key validity and reliability tests [54,63,67]: (1) indicator reliability, evaluated through examination of outer loadings; (2) internal consistency, assessed via Cronbach’s alpha and composite reliability; (3) convergent validity, measured using the average variance extracted (AVE); and (4) discriminant validity, analysed through both the Fornell–Larcker criterion and the Heterotrait–Monotrait (H.T.M.T) ratio. This comprehensive evaluation confirmed that all constructs met the necessary thresholds for reliable measurement [66].
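As a concrete illustration of steps (2) and (3), both composite reliability and AVE are simple functions of the standardised outer loadings; the sketch below uses hypothetical loadings (not the study’s values) and the conventional 0.50/0.70 thresholds:

```python
import numpy as np

def convergent_validity(loadings):
    """AVE and composite reliability (rho_c) from standardised outer loadings."""
    lam = np.asarray(loadings, dtype=float)
    ave = (lam ** 2).mean()                  # average variance extracted
    s = lam.sum() ** 2
    cr = s / (s + (1.0 - lam ** 2).sum())    # composite reliability
    return ave, cr

# Hypothetical five-item construct:
ave, cr = convergent_validity([0.79, 0.83, 0.85, 0.86, 0.88])
assert ave > 0.50 and cr > 0.70   # conventional acceptance thresholds
```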
Examining the moderation hypotheses (H4–H6) involved a multifaceted analytical approach [67]: (1) assessing the statistical significance of the interaction terms; (2) evaluating the effect size of moderation using f2 values; and (3) analysing the nature of the interaction effects through visual inspection of slope plots [66]. This comprehensive approach ensured a thorough understanding of the strength and direction of the moderating effects. All analyses were conducted using SmartPLS 4 software, which provides specialised tools for PLS-SEM analysis and includes advanced capabilities for moderation testing [54].

3.4. Ethical Approvals

Before initiating the data collection process, formal institutional approval was obtained from the Institutional Review Board at King Faisal University (Ethics Reference: KFU_2025_ETHICS3435). This procedure certified that the methods employed were consistent with institutional criteria and the ethical principles outlined in the Declaration of Helsinki [55].
Additionally, several safeguards were implemented to protect participants’ rights: all participation was voluntary, with no coercion; written informed consent was obtained from each participant; respondents had the right to withdraw at any time without giving any reason; and all data received were anonymised to protect participants’ identities.

4. Results

Table 3 displays the measurement quality criteria for the conceptual model—performance expectancy (PE), effort expectancy (EE), social influence (SI), and behavioural intention (BI), as well as Perceptions of Digital Sustainability Requisites (PDS)—excluding the moderating variable (AI awareness).
Table 3 indicates that the reliability of all item loadings for the measured constructs exceeds the commonly accepted threshold of 0.70, suggesting that each item reliably reflects its corresponding latent variable.
Likewise, Performance Expectancy (PE) demonstrates strong loadings across all five indicators (PE1 to PE5), ranging from 0.791 to 0.876. Thus, this confirms that each item makes a significant contribution to the construct it is intended to measure. Additionally, the construct also exhibits excellent convergent validity with an Average Variance Extracted (AVE) of 0.717. Furthermore, the Composite Reliability (CR = 0.927) and Cronbach’s alpha (α = 0.921) values confirm a high degree of internal consistency.
Similarly, Effort Expectancy (EE) displays item loadings ranging from 0.774 to 0.871. These values also surpass the 0.70 reliability threshold, supporting strong indicator reliability. Moreover, the AVE for EE is 0.701, indicating that over 70% of the variance in its observed variables is explained by the underlying construct. Additionally, internal consistency is high, as evidenced by a Cronbach’s alpha of 0.909 and a composite reliability of 0.921. These findings confirm the soundness of EE as a construct within the measurement model.
Furthermore, the construct of Social Influence (SI) demonstrates slightly lower but still acceptable item loadings, ranging from 0.744 to 0.869. Additionally, the AVE value of 0.665 remains above the minimum recommended level of 0.50, indicating sufficient convergent validity. Furthermore, SI shows a Cronbach’s alpha of 0.908 and a CR of 0.899, confirming that the items consistently measure the intended latent concept.
On the other hand, Behavioural Intention (BI) exhibits exceptionally high indicator loadings, ranging from 0.952 to 0.984, suggesting outstanding reliability at the item level. Additionally, with an AVE of 0.946, this construct captures nearly all the variance in its indicators. Moreover, the CR (0.981) and Cronbach’s alpha (0.972) values for BI are remarkably high, further establishing its robustness and reliability within the model.
In addition to the core constructs, the extended model incorporates Perceptions of Digital Sustainability Requisites (PDS) as a key outcome variable. Likewise, all three items (PDS1 to PDS3) exhibit very high factor loadings, ranging from 0.853 to 0.921, which is well above the reliability threshold of 0.7. Moreover, the AVE for this construct is 0.808, indicating that over 80% of the variance in the observed indicators is accounted for by the underlying construct. Additionally, internal consistency is confirmed, with a Cronbach’s alpha of 0.890 and a composite reliability of 0.927.
Therefore, these findings provide strong evidence for the empirical validity of PDS and highlight its significance in connecting behavioural intention with the sustainable and responsible integration of AI technologies in educational settings. The results show that the constructs in the measurement model meet or surpass the criteria for indicator reliability, convergent validity, and internal consistency. Furthermore, these findings confirm the psychometric robustness of the model and support its use for further structural model assessment and hypothesis testing.
Table 4 presents the factor cross-loadings analysis used to assess discriminant validity within the PLS-SEM framework. This analysis examines whether each indicator demonstrates its strongest association with its theoretically assigned construct relative to all other constructs. Consistent with established methodological standards, discriminant validity is confirmed when each indicator’s loading is substantially higher on its corresponding latent construct than on any other construct in the analysis [54].
Table 4 provides strong evidence for discriminant validity across all measured constructs. Performance Expectancy (PE) indicators (PE1–PE5) all have loadings above 0.79 on their respective construct and notably lower on the others. Additionally, each item demonstrates its highest loading on the PE construct, confirming the distinctiveness of this latent variable within the Structural Equation Modelling (SEM) framework.
Similarly, the Effort Expectancy (EE) indicators (EE1–EE5) display loadings greater than 0.77 on the EE construct, while their cross-loadings on other constructs remain comparatively lower. Furthermore, each EE item is most strongly associated with its construct, reinforcing discriminant validity. Thus, the pattern is consistent and evident for all EE indicators.
Similarly, the Social Influence (SI) indicators (SI1–SI5) show loadings above 0.74 on the SI construct, with smaller cross-loadings on other latent variables. Likewise, each SI item loads highest on its intended construct, confirming the distinct measurement of social influence in the model.
Additionally, the Behavioural Intention (BI) indicators (BI1–BI3) exhibit exceptionally high loadings, ranging from 0.952 to 0.984, on the BI construct, and substantially lower values across the remaining constructs. This strong loading pattern affirms excellent construct separation and supports the reliability of BI as a core outcome variable in the model.
Additionally, the extended model introduces the construct Perceptions of Digital Sustainability Requisites (PDS), which also demonstrates robust discriminant validity. The PDS indicators (PDS1–PDS3) exhibit high loadings of 0.853 to 0.921 on their construct, while their loadings on PE, EE, SI, and BI remain considerably lower.
For example, PDS1 loads 0.892 on the PDS construct, compared to values below 0.53 on the other constructs. These findings indicate that each item is uniquely aligned with the PDS factor, validating its distinct role in the model and affirming its theoretical contribution to assessing the sustainable adoption of AI-NLP tools (such as Copilot/ChatGPT) in education.
Overall, items across the constructs PE, EE, SI, PDS, and BI load significantly on their respective latent variables and lower on all others. Hence, the results confirm the discriminant validity of all measured constructs and demonstrate the soundness of the measurement model. Furthermore, to ensure a comprehensive assessment of discriminant validity, the findings were cross-validated using both the Fornell–Larcker criterion and the Heterotrait–Monotrait (H.T.M.T) ratio.
As Table 5 shows, the square root of the AVE for each construct (diagonal values) exceeds its correlations with all other constructs (off-diagonal values), thereby confirming discriminant validity within the measurement model.
Furthermore, the square root of AVE for Effort Expectancy (EE) equals 0.837, which is greater than its highest correlation with any other construct, namely, Performance Expectancy (PE) at r = 0.597. Likewise, the square root of AVE for Performance Expectancy (PE) equals 0.846, which exceeds its correlation with Behavioural Intention (BI) (r = 0.698), thereby supporting the distinctiveness of the construct.
Similarly, Social Influence (SI) demonstrates a square root of AVE of 0.815, which surpasses its strongest correlation with BI (r = 0.464), confirming its uniqueness as a construct in the model. Moreover, the construct of Behavioural Intention (BI) also meets the Fornell-Larcker criterion, with a square root of AVE of 0.973, substantially higher than its correlations with other constructs, further affirming discriminant validity.
Additionally, the extended model includes the construct Perceptions of Digital Sustainability Requisites (PDS), which also satisfies the Fornell–Larcker criterion. Likewise, the square root of AVE for PDS is 0.899, clearly greater than its correlations with other constructs, such as PE (r = 0.508), EE (r = 0.492), SI (r = 0.473), and BI (r = 0.487). Thus, a substantial gap between the AVE square root and inter-construct correlations confirms that PDS is empirically distinct and conceptually relevant to the measurement model, especially in assessing the sustainable integration of AI-NLP tools (such as Copilot/ChatGPT) in educational settings.
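The Fornell–Larcker checks above can be reproduced directly from the reported AVE values and inter-construct correlations; the sketch below encodes only the pairs quoted in the text (a minimal illustration, not the full correlation matrix):

```python
import numpy as np

# AVE values and selected inter-construct correlations as reported above
ave = {"PE": 0.717, "EE": 0.701, "SI": 0.665, "BI": 0.946, "PDS": 0.808}
corr = {("EE", "PE"): 0.597, ("PE", "BI"): 0.698, ("SI", "BI"): 0.464,
        ("PDS", "PE"): 0.508, ("PDS", "EE"): 0.492,
        ("PDS", "SI"): 0.473, ("PDS", "BI"): 0.487}

def fornell_larcker_ok(ave, corr):
    """Each construct's sqrt(AVE) must exceed its correlations with all others."""
    root = {c: np.sqrt(v) for c, v in ave.items()}
    return all(r < min(root[a], root[b]) for (a, b), r in corr.items())

assert fornell_larcker_ok(ave, corr)   # discriminant validity holds
```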
Table 6 presents the results of the Heterotrait–Monotrait Ratio (H.T.M.T) analysis, an established method for assessing discriminant validity in structural equation modelling (Henseler et al., 2015). All H.T.M.T values fall substantially below the conservative threshold of 0.85 (with an alternative threshold of 0.90), demonstrating a clear empirical distinction between constructs. Even the highest values observed, PE ↔ BI (0.754) and EE ↔ BI (0.635), remain comfortably below the cut-off, while the strongest discriminant separation was observed for SI ↔ EE (0.443), the lowest ratio in the matrix. Hence, these results collectively confirm that all construct pairs meet the H.T.M.T criteria for discriminant validity.
In addition to the Fornell–Larcker criterion, the Heterotrait–Monotrait Ratio (H.T.M.T) was employed to assess discriminant validity among the latent constructs. Regarding the H.T.M.T criterion, values below the conservative threshold of 0.85 indicate sufficient discriminant validity.
Additionally, all construct pairs exhibit H.T.M.T values well below this threshold, thereby confirming that each construct is statistically distinct from the others in the model. Furthermore, the H.T.M.T values for key construct pairs such as Effort Expectancy ⟷ Behavioural Intention (0.635), Performance Expectancy ⟷ Behavioural Intention (0.754), and Social Influence ⟷ Effort Expectancy (0.443) are all within acceptable bounds.
Likewise, the results reinforce the finding that each construct represents a unique aspect of a specific theoretical dimension within the model. Moreover, the H.T.M.T analysis further supports the robustness of the model’s discriminant validity. All H.T.M.T values involving PDS are well below the 0.85 threshold: PDS ⟷ Behavioural Intention (0.512), PDS ⟷ Effort Expectancy (0.529), PDS ⟷ Performance Expectancy (0.545), and PDS ⟷ Social Influence (0.498).
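The H.T.M.T statistic in Table 6 is the mean of the between-construct (heterotrait) item correlations divided by the geometric mean of the average within-construct (monotrait) item correlations (Henseler et al., 2015). The sketch below, assuming a full item-correlation matrix is available (the study itself obtained these values from PLS-SEM software), illustrates the computation on toy data:

```python
import numpy as np

def htmt(item_corr: np.ndarray, idx_a: list, idx_b: list) -> float:
    """Heterotrait–Monotrait ratio for two constructs, given an item-correlation matrix."""
    hetero = item_corr[np.ix_(idx_a, idx_b)].mean()  # between-construct correlations
    mono_a = item_corr[np.ix_(idx_a, idx_a)]         # within-construct block for A
    mono_b = item_corr[np.ix_(idx_b, idx_b)]         # within-construct block for B
    # average the off-diagonal (monotrait) correlations of each block
    ma = mono_a[np.triu_indices_from(mono_a, k=1)].mean()
    mb = mono_b[np.triu_indices_from(mono_b, k=1)].mean()
    return hetero / np.sqrt(ma * mb)

# Toy example: two constructs with three items each,
# within-block r = 0.8 and between-block r = 0.4
R = np.full((6, 6), 0.4)
R[:3, :3] = 0.8
R[3:, 3:] = 0.8
np.fill_diagonal(R, 1.0)
print(round(htmt(R, [0, 1, 2], [3, 4, 5]), 3))  # 0.4 / sqrt(0.8 * 0.8) → 0.5
```

A value of 0.5, as here, would fall comfortably below the 0.85 threshold; values approaching 1 would indicate that the two item sets measure the same trait.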
As Table 7 shows, the structural model analysis yields path coefficients (β), t-values, p-values, and R2 values. These estimates were obtained using a bootstrapping procedure to evaluate both the direct effects (H1–H3 and H7) and the moderation effects (H4–H6) within the extended Unified Theory of Acceptance and Use of Technology (UTAUT) model.
Additionally, the analysis confirms that the three core predictors—PE, EE, and SI—have positive and statistically significant effects on Behavioural Intention (BI), and that BI in turn significantly influences learners’ Perceptions of Digital Sustainability Requisites (PDS) in the context of using AI-NLP tools (such as Copilot and ChatGPT) in educational settings.
Notably, the moderating effects of AI awareness on the relationships between PE, EE, and SI with BI (H4–H6) were not statistically significant (p > 0.05), despite positive β values. This indicates that students’ awareness of AI-NLP tools may not significantly alter how these core constructs influence their intention to adopt such tools.
As a possible explanation, although students report general awareness of AI-NLP tools, this reported awareness may be too superficial, or too uniformly distributed across the sample, to create meaningful variance in behavioural responses [56]. Additionally, AI awareness might conceptually overlap with the core predictors—particularly PE—thereby diminishing its distinct moderating effect. Hence, these findings emphasise the need for more targeted AI literacy initiatives that improve not only familiarity but also critical understanding of AI systems, especially within medical education contexts.
Future studies should therefore use longitudinal tracking to examine how students’ AI literacy evolves and to assess whether deeper conceptual understanding, beyond surface-level AI awareness, more effectively enhances or moderates adoption behaviour.
Furthermore, these results provide strong empirical support for hypotheses H1 through H3, indicating substantial explanatory power, and all VIF values were well below the recommended threshold, confirming the absence of multicollinearity concerns. The analysis also confirms a strong and significant positive effect of BI on students’ PDS (H7: β = 0.72, p < 0.01, R2 = 0.51). This finding indicates that students who are more willing to adopt AI tools such as ChatGPT and Copilot are also more likely to recognise the importance of responsible and sustainable AI integration in education.
This highlights the role of behavioural intention not only in technology adoption but also in advancing broader sustainability goals. It further suggests that promoting the informed and intentional use of AI among students may be a critical pathway to achieving digital sustainability aligned with the Sustainable Development Goals (SDG 4 and SDG 9). These insights may inform institutional policies on embedding digital sustainability within AI adoption frameworks in higher education.
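The bootstrapping logic behind Table 7 can be illustrated with a simplified sketch: resample respondents with replacement, re-estimate the paths on each resample, and divide each coefficient by its bootstrap standard error to obtain a t-value. The code below uses OLS on simulated standardised scores purely as a stand-in for the PLS-SEM estimation; the variable names, effect sizes, and data are illustrative assumptions, not the study’s data:

```python
import numpy as np

rng = np.random.default_rng(0)

def boot_t(X: np.ndarray, y: np.ndarray, n_boot: int = 1000) -> np.ndarray:
    """Bootstrap t-values for regression path coefficients (OLS stand-in for PLS paths)."""
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    n = len(y)
    boots = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # resample respondents with replacement
        boots[b] = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
    se = boots.std(axis=0, ddof=1)   # bootstrap standard errors
    return beta_hat / se             # |t| > 1.96 corresponds to p < 0.05 (two-tailed)

# Simulated zero-mean scores for illustration, with a sample size of 301 as in the study
n = 301
pe, ee, si = rng.standard_normal((3, n))
bi = 0.5 * pe + 0.3 * ee + 0.2 * si + rng.standard_normal(n)
X = np.column_stack([pe, ee, si])
print(boot_t(X, bi))  # three t-values, one per predictor of BI
```

The moderation terms (H4–H6) would enter the same procedure as product terms (e.g., PE × AI awareness) added as extra columns of X; their t-values falling below 1.96 is what Table 7 reports as non-significance.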

5. Discussion

This study contributes to the emerging literature on artificial intelligence in medical education by examining how learners perceive AI-powered Natural Language Processing (NLP) tools (e.g., ChatGPT and Copilot) within the context of digital sustainability. Based on the Unified Theory of Acceptance and Use of Technology (UTAUT), the findings offer both theoretical and practical insights into the behavioural drivers behind AI-NLP adoption among medical students.
The results indicate that all three core predictors—performance expectancy (PE), effort expectancy (EE), and social influence (SI)—significantly and positively influence learners’ behavioural intention (BI) to adopt AI-NLP tools. Among these, PE emerged as the most influential, suggesting that students prioritise tools that demonstrably improve educational outcomes. Additionally, respondents indicated that AI-NLP tools are effective for enhancing academic performance, engagement, and task achievement, findings that align with earlier studies on technology adoption in educational settings [11,38,39].
Effort expectancy (EE) also played a critical role, confirming that ease of use remains a significant factor in acceptance. Students with higher levels of digital literacy and fewer perceived barriers were more willing to integrate AI tools into their learning routines, consistent with prior research [28,32,57]. These results confirm that accessible, user-friendly interfaces are not merely desirable but essential for meaningful student engagement.
Social influence (SI) refers to the impact of peers, such as friends and colleagues, on students’ adoption of AI-NLP tools. This influence operates through three main mechanisms: shaping personal beliefs, promoting conformity, and motivating behaviour. The results suggest that AI tools are widely accepted and commonly used among participants, which increases the likelihood that students will adopt them. This aligns with previous findings [18,32,57,58].
A particularly novel aspect of this study is the significant association between students’ behavioural intention and their perceptions of digital sustainability (PDS). This extends the UTAUT model by showing that learners’ motivations are driven not only by utility but also by ethical and environmental considerations. Moreover, students with higher adoption intentions reported stronger alignment with sustainable digital practices, indicating that AI integration in education is increasingly viewed through the lens of social responsibility. These findings resonate with research emphasising the connection between digital adoption, environmental responsibility, and educational equity [40,59,60,64].
Contrary to theoretical expectations, AI awareness did not significantly moderate the relationships between the UTAUT predictors (PE, EE, SI) and BI. One reasonable explanation is that medical students already exhibit a high level of functional familiarity with AI-NLP tools, which may create a ceiling effect: this widespread baseline knowledge reduces variance in responses, thereby diminishing the moderating role of AI awareness.
Furthermore, much of this awareness may be operational (e.g., knowing how to use the tools) rather than critical (e.g., understanding their ethical, social, or sustainability implications). As prior studies suggest, general familiarity alone is insufficient to alter behavioural intention unless coupled with critical digital literacy [14,47,61,62].
Collectively, the results suggest that the adoption of AI-NLP tools in medical education is driven primarily by perceived usefulness and ease of use, with students’ self-reported AI awareness playing a marginal role. This underscores the need to shift from passive exposure to structured AI literacy initiatives that cultivate a deeper understanding of both technical functionalities and broader ethical and sustainability considerations.
Beyond educational benefits, AI-NLP tools can enhance medical practice by improving diagnostic accuracy, streamlining clinical documentation, and reducing administrative burdens, contributing to more efficient and sustainable healthcare systems. Thus, integrating AI into medical training is not merely a technological upgrade but a strategic imperative for preparing future healthcare professionals to innovate responsibly.
Overall, to ensure meaningful adoption, medical educators, researchers, and policymakers should embed AI and digital sustainability literacy into formal curricula. Additionally, modules and structured programs must equip learners with not only technical proficiency but also the ethical discernment and sustainability awareness needed to harness AI’s potential in clinical and academic settings.

6. Conclusions

Artificial Intelligence (AI) is transforming medical education by enhancing students’ digital competencies and supporting Sustainable Development Goal (SDG) 4, which aims to promote inclusive, equitable, and quality education for all. Moreover, AI-powered Natural Language Processing (AI-NLP) tools, such as ChatGPT and Copilot, facilitate personalised learning, provide real-time feedback, and organise knowledge management processes.
Additionally, to prepare responsible and future-ready healthcare professionals, medical schools should embed AI literacy and digital sustainability into their curricula. This can be achieved through structured academic programs, interdisciplinary modules, and hands-on learning experiences that develop both technical skills and awareness.
Notably, policymakers in higher education, especially in medical schools, should view AI not merely as a technological tool but as a strategic driver of ethical, sustainable, and high-quality medical education. Likewise, aligning AI integration with broader educational and environmental objectives can enable medical schools to prepare a new generation of digitally skilled, ethically grounded, and socially responsible healthcare professionals.
Furthermore, future studies should explore how medical learners’ AI awareness evolves and how it shapes their behavioural intentions. Longitudinal studies would provide valuable insights into these dynamics. Cross-cultural and institutional comparisons are also recommended to evaluate the generalisability of current findings and uncover context-specific variations. Moreover, mixed-methods approaches, such as qualitative interviews and observational studies, can offer a deeper understanding of how AI-NLP tools affect teaching and learning experiences.
Additionally, research is necessary to evaluate the digital sustainability outcomes of AI adoption in medical education, including reductions in resource consumption and improvements in digital efficiency. Furthermore, incorporating the perspectives of faculty and administrators will be essential in developing a holistic and sustainable model for AI implementation in healthcare education.
Policymakers should support AI integration by establishing regulatory frameworks, sustainable funding mechanisms, and infrastructure policies that address key challenges such as energy consumption, data privacy, and the digital divide. These efforts will help ensure that AI adoption promotes both educational excellence and environmental responsibility.

7. Future Study Opportunities and Limitations

While this study provides important insights into medical learners’ behavioural intentions to adopt AI-powered NLP tools and their perceptions of digital sustainability, several limitations should be acknowledged.
First, the use of a cross-sectional survey design limits the ability to draw causal inferences between constructs. As such, future research should adopt longitudinal designs to explore how these relationships evolve and to verify potential causal pathways between AI adoption behaviours and sustainability perceptions.
Second, the study sample was restricted to medical students in Saudi Arabia. Although this focus offers valuable regional insights, it may limit the generalizability of findings. Future research should incorporate cross-cultural and cross-disciplinary comparisons to assess whether adoption patterns and sustainability attitudes differ across educational systems, professional contexts, and cultural settings.
Third, the reliance on self-reported measures of digital sustainability presents a methodological limitation. Future studies should consider mixed methods approaches, integrating qualitative interviews and objective data sources (e.g., energy usage analytics, institutional sustainability reports, or digital footprint assessments) to strengthen the validity and contextual depth of the findings.
Furthermore, although the study employed validated scales adapted from prior research, future studies should further assess the internal consistency and construct validity of these instruments across broader and more diverse samples. Expanding the number of items for key constructs, such as perceptions of digital sustainability and AI Awareness, may enhance measurement depth and improve the robustness of model estimations. This would help ensure that theoretical constructs are captured with greater nuance, particularly within rapidly evolving educational environments that are shaped by AI tools.
Finally, while student perspectives are central to understanding the adoption of AI in education, a comprehensive understanding also requires examining the views of other stakeholders. Future research should include faculty experiences, administrative considerations, and institutional policy frameworks to develop a more holistic and sustainable model for AI integration in medical education.
Additionally, to enhance the practical application of this research, policymakers in higher education are advised to develop national strategies that integrate AI literacy and digital sustainability into their policies. Medical schools are encouraged to implement structured curricular frameworks that incorporate awareness of AI and sustainability-focused digital practices, aligning with SDG 4 and SDG 9.

Funding

This research was funded by the Deanship of Scientific Research at King Faisal University, Saudi Arabia, grant number KFU252485.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of King Faisal University (Ethics Reference: KFU_2025_ETHICS3435).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Costantini, S. Ten Targets about SDG 4: Ensuring inclusive and equitable quality education and promoting lifelong learning opportunities for all. Strum. Didatt. Ric. 2019, 209, 147–155. [Google Scholar]
  2. Arora, N.K.; Mishra, I. United Nations Sustainable Development Goals 2030 and environmental sustainability: Race against time. Environ. Sustain. 2019, 2, 339–342. [Google Scholar] [CrossRef]
  3. Palmer, E. Introduction: The 2030 agenda. J. Glob. Ethics 2015, 11, 262–269. [Google Scholar] [CrossRef]
  4. Hanemann, U. Examining the application of the lifelong learning principle to the literacy target in the fourth Sustainable Development Goal (SDG 4). Int. Rev. Educ. 2019, 65, 251–275. [Google Scholar] [CrossRef]
  5. Kostoska, O.; Kocarev, L. A Novel ICT Framework for Sustainable Development Goals. Sustainability 2019, 11, 1961. [Google Scholar] [CrossRef]
  6. Salem, M.A.; Alshebami, A.S. Exploring the Impact of Mobile Exams on Saudi Arabian Students: Unveiling Anxiety and Behavioural Changes across Majors and Gender. Sustainability 2023, 15, 12868. [Google Scholar] [CrossRef]
  7. DataReportal. Digital 2025: Global Overview Report. 2025. Available online: https://datareportal.com/reports/digital-2025-global-overview-report (accessed on 5 May 2025).
  8. Bonab, S.R.; Haseli, G.; Ghoushchi, S.J. Digital technology and information and communication technology on the carbon footprint. In Decision Support Systems for Sustainable Computing; Elsevier: Amsterdam, The Netherlands, 2024; pp. 101–122. [Google Scholar]
  9. United Nations. Digital Environmental Sustainability. 2022. Available online: https://www.un.org/digital-emerging-technologies/content/digital-environmental-sustainability (accessed on 5 May 2025).
  10. Nemani, P.; Joel, Y.D.; Vijay, P.; Liza, F.F. Gender bias in transformers: A comprehensive review of detection and mitigation strategies. Nat. Lang. Process. J. 2024, 6, 100047. [Google Scholar] [CrossRef]
  11. Moursy, N.A.; Hamsho, K.; Gaber, A.M.; Ikram, M.F.; Sajid, M.R. A systematic review of progress test as longitudinal assessment in Saudi Arabia. BMC Med. Educ. 2025, 25, 100. [Google Scholar] [CrossRef]
  12. Alammari, A. Evaluating generative AI integration in Saudi Arabian education: A mixed-methods study. PeerJ Comput. Sci. 2024, 10, e1879. [Google Scholar] [CrossRef]
  13. Salem, M.A.; Zakaria, O.M.; Aldoughan, E.A.; Khalil, Z.A.; Zakaria, H.M. Bridging the AI Gap in Medical Education: A Study of Competency, Readiness, and Ethical Perspectives in Developing Nations. Computers 2025, 14, 238. [Google Scholar] [CrossRef]
  14. Abou Hashish, E.A.; Alnajjar, H. Digital proficiency: Assessing knowledge, attitudes, and skills in digital transformation, health literacy, and artificial intelligence among university nursing students. BMC Med. Educ. 2024, 24, 508. [Google Scholar] [CrossRef]
  15. Bornschlegl, M.; Townshend, K.; Caltabiano, N.J. Application of the Theory of Planned Behavior to Identify Variables Related to Academic Help Seeking in Higher Education. Front. Educ. 2021, 6, 738790. [Google Scholar] [CrossRef]
  16. Wong, C.T.; Tan, C.L.; Mahmud, I. Value-based adoption model: A systematic literature review from 2007 to 2021. Int. J. Bus. Inf. Syst. 2025, 48, 304–331. [Google Scholar] [CrossRef]
  17. Chahal, J.; Rani, N. Exploring the acceptance for e-learning among higher education students in India: Combining technology acceptance model with external variables. J. Comput. High. Educ. 2022, 34, 844–867. [Google Scholar] [CrossRef] [PubMed]
  18. Rana, M.; Siddiqee, M.S.; Sakib, N.; Ahamed, R. Assessing AI adoption in developing country academia: A trust and privacy-augmented UTAUT framework. Heliyon 2024, 10, e37569. [Google Scholar] [CrossRef]
  19. Dwivedi, Y.K.; Rana, N.P.; Chen, H.; Williams, M.D. A Meta-Analysis of the Unified Theory of Acceptance and Use of Technology (UTAUT). In Governance and Sustainability in Information Systems. Managing the Transfer and Diffusion of IT: IFIP WG 8.6, Proceedings of the International Working Conference, Hamburg, Germany, 22–24 September 2011; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  20. Al-Saedi, K.; Al-Emran, M.; Ramayah, T.; Abusham, E. Developing a general extended UTAUT model for M-payment adoption. Technol. Soc. 2020, 62, 101293. [Google Scholar] [CrossRef]
  21. Jain, R.; Garg, N.; Khera, S.N. Adoption of AI-Enabled Tools in Social Development Organizations in India: An Extension of UTAUT Model. Front. Psychol. 2022, 13, 893691. [Google Scholar] [CrossRef]
  22. Ballesteros, M.A.A.; Enríquez, B.G.A.; Farroñán, E.V.R.; Juárez, H.D.G.; Salinas, L.E.C.; Sánchez, J.E.B.; Castillo, J.C.A.; Licapa-Redolfo, G.S.; Chilicaus, G.C.F. The Sustainable Integration of AI in Higher Education: Analyzing ChatGPT Acceptance Factors Through an Extended UTAUT2 Framework in Peruvian Universities. Sustainability 2024, 16, 10707. [Google Scholar] [CrossRef]
  23. Tang, X.; Yuan, Z.; Qu, S. Factors Influencing University Students’ Behavioural Intention to Use Generative Artificial Intelligence for Educational Purposes Based on a Revised UTAUT2 Model. J. Comput. Assist. Learn. 2025, 41, e13105. [Google Scholar] [CrossRef]
  24. Xiong, Y.; Shi, Y.; Pu, Q.; Liu, N. More trust or more risk? User acceptance of artificial intelligence virtual assistant. Hum. Factors Ergon. Manuf. 2024, 34, 190–205. [Google Scholar] [CrossRef]
  25. Elshaer, I.A.; AlNajdi, S.M.; Salem, M.A. Sustainable AI Solutions for Empowering Visually Impaired Students: The Role of Assistive Technologies in Academic Success. Sustainability 2025, 17, 5609. [Google Scholar] [CrossRef]
  26. Nakhaie Ahooie, N. Enhancing Access to Medical Literature Through an LLM-Based Browser Extension. Master’s Thesis, University of Oulu, Oulu, Finland, 2024. [Google Scholar]
  27. Tanantong, T.; Wongras, P. A UTAUT-Based Framework for Analyzing Users’ Intention to Adopt Artificial Intelligence in Human Resource Recruitment: A Case Study of Thailand. Systems 2024, 12, 28. [Google Scholar] [CrossRef]
  28. Su, J.; Wang, Y.; Liu, H.; Zhang, Z.; Wang, Z.; Li, Z. Investigating the factors influencing users’ adoption of artificial intelligence health assistants based on an extended UTAUT model. Sci. Rep. 2025, 15, 18215. [Google Scholar] [CrossRef] [PubMed]
  29. Chai, C.S.; Wang, X.; Xu, C. An Extended Theory of Planned Behavior for the Modelling of Chinese Secondary School Students’ Intention to Learn Artificial Intelligence. Mathematics 2020, 8, 2089. [Google Scholar] [CrossRef]
  30. Salem, M.A.; Elshaer, I.A. Educators’ Utilizing One-Stop Mobile Learning Approach amid Global Health Emergencies: Do Technology Acceptance Determinants Matter? Electronics 2023, 12, 441. [Google Scholar] [CrossRef]
  31. Naseri, R.N.N.; Syahrivar, J.; Saari, I.S.; Yahya, W.K.; Muthusamy, G. The Development of Instruments to Measure Students’ Behavioural Intention Towards Adopting Artificial Intelligence (AI) Technologies in Educational Settings. In Proceedings of the 2024 5th International Conference on Artificial Intelligence and Data Sciences (AiDAS), Bangkok, Thailand, 3–4 September 2024; pp. 227–232. [Google Scholar]
  32. Asad, M.; Fryan, L.H.A.; Shomo, M.I. Sustainable Entrepreneurial Intention Among University Students: Synergetic Moderation of Entrepreneurial Fear and Use of Artificial Intelligence in Teaching. Sustainability 2025, 17, 290. [Google Scholar] [CrossRef]
  33. Akhtar, S.; Alfuraydan, M.M.; Mughal, Y.H.; Nair, K.S. Adoption of Massive Open Online Courses (MOOCs) for Health Informatics and Administration Sustainability Education in Saudi Arabia. Sustainability 2025, 17, 3795. [Google Scholar] [CrossRef]
  34. Sobaih, A.E.E.; Elshaer, I.A.; Hasanein, A.M. Examining Students’ Acceptance and Use of ChatGPT in Saudi Arabian Higher Education. Eur. J. Investig. Health Psychol. Educ. 2024, 14, 709–721. [Google Scholar] [CrossRef] [PubMed]
  35. Alsahli, S.; Hor, S.-Y.; Lam, M.K. Physicians’ acceptance and adoption of mobile health applications during the COVID-19 pandemic in Saudi Arabia: Extending the unified theory of acceptance and use of technology model. Health Inf. Manag. J. 2024. [Google Scholar] [CrossRef]
  36. Lane, H.; Dyshel, M. Natural Language Processing in Action, 2nd ed.; Simon and Schuster: New York, NY, USA, 2025. [Google Scholar]
  37. Raiaan, M.A.K.; Mukta, S.H.; Fatema, K.; Fahad, N.M.; Sakib, S.; Mim, M.M.J.; Ahmad, J.; Ali, M.E.; Azam, S. A Review on Large Language Models: Architectures, Applications, Taxonomies, Open Issues and Challenges. IEEE Access 2024, 12, 26839–26874. [Google Scholar] [CrossRef]
  38. Simon, R.; Petrisor, C.; Bodolea, C.; Golea, A.; Gomes, S.H.; Antal, O.; Vasian, H.N.; Moldovan, O.; Puia, C.I. Efficiency of Simulation-Based Learning Using an ABC POCUS Protocol on a High-Fidelity Simulator. Diagnostics 2024, 14, 173. [Google Scholar] [CrossRef] [PubMed]
  39. Khan, W.H.; Khan, S.; Khan, N.; Ahmad, A.; Siddiqui, Z.I.; Singh, R.B.; Malik, Z. Artificial intelligence, machine learning and deep learning in biomedical fields: A prospect in improvising medical healthcare systems. In Artificial Intelligence in Biomedical and Modern Healthcare Informatics; Elsevier: Amsterdam, The Netherlands, 2025. [Google Scholar]
  40. Nawaz, F.A.; Opriessnig, E.; Usman, F.M.; Agrohi, J. From Classroom to Clinic: The Impact of AI on Medical Education. In Precision Health in the Digital Age: Harnessing AI for Personalized Care; IGI Global Scientific Publishing: Hershey, PA, USA, 2025; pp. 63–90. [Google Scholar]
  41. Verkooijen, M.H.M.; van Tuijl, A.A.C.; Calsbeek, H.; Fluit, C.R.M.G.; van Gurp, P.J. How to evaluate lifelong learning skills of healthcare professionals: A systematic review on content and quality of instruments for measuring lifelong learning. BMC Med. Educ. 2024, 24, 1423. [Google Scholar] [CrossRef] [PubMed]
  42. Ohalete, N.C.; Ayo-Farai, O.; Olorunsogo, T.O.; Maduka, P.; Olorunsogo, T. AI-Driven Environmental Health Disease Modeling: A Review of Techniques and Their Impact on Public Health in the USA and African Contexts. Int. Med. Sci. Res. J. 2024, 4, 51–73. [Google Scholar] [CrossRef]
  43. Vardhani, A.; Findyartini, A.; Wahid, M. Needs Analysis for Competence of Information and Communication Technology for Medical Graduates. Educ. Med. J. 2024, 16, 119–136. [Google Scholar] [CrossRef]
  44. Fiandini, M.; Nandiyanto, A.B.D.; Kurniawan, T. Bibliometric Analysis of Research Trends in Conceptual Understanding and Sustainability Awareness through Artificial Intelligence (AI) and Digital Learning Media. Indones. J. Multidiscip. Res. 2023, 3, 477–486. [Google Scholar] [CrossRef]
  45. Alrefai, A.; ElBanna, R.; Al Ghaddaf, C.; Abu-AlSondos, I.A.; Chehaimi, E.M.; Alnajjar, I.A. The Role of IoT in Sustainable Digital Transformation: Applications and Challenges. In Proceedings of the 2024 2nd International Conference on Cyber Resilience (ICCR), Dubai, United Arab Emirates, 26–28 February 2024. [Google Scholar]
  46. Owens, T. The Theory and Craft of Digital Preservation; Johns Hopkins University Press: Baltimore, MD, USA, 2018. [Google Scholar]
  47. Meinhold, R.; Wagner, C.; Dhar, B.K. Digital sustainability and eco-environmental sustainability: A review of emerging technologies, resource challenges, and policy implications. Sustain. Dev. 2025, 33, 2323–2338. [Google Scholar] [CrossRef]
  48. Monaco, S. SDG 4. Ensure Inclusive and Equitable Quality Education and Promote Lifelong Learning Opportunities for All. In Identity, Territories, and Sustainability: Challenges and Opportunities for Achieving the UN Sustainable Development Goals; Emerald Publishing Ltd.: Leeds, UK, 2024; pp. 43–49. [Google Scholar]
  49. Alshebami, A.S.; Seraj, A.H.A.; Elshaer, I.A.; Al Shammre, A.S.; Al Marri, S.H.; Lutfi, A.; Salem, M.A.; Zaher, A.M.N. Improving Social Performance through Innovative Small Green Businesses: Knowledge Sharing and Green Entrepreneurial Intention as Antecedents. Sustainability 2023, 15, 8232. [Google Scholar] [CrossRef]
  50. Moldt, J.-A.; Festl-Wietek, T.; Fuhl, W.; Zabel, S.; Claassen, M.; Wagner, S.; Nieselt, K.; Herrmann-Werner, A. Assessing AI Awareness and Identifying Essential Competencies: Insights from Key Stakeholders in Integrating AI Into Medical Education. JMIR Med. Educ. 2024, 10, e58355. [Google Scholar] [CrossRef] [PubMed]
  51. Salem, M.A. Bridging or Burning? Digital Sustainability and PY Students’ Intentions to Adopt AI-NLP in Educational Contexts. Computers 2025, 14, 265. [Google Scholar] [CrossRef]
  52. Ministry of Education, S.A. Higher Education Statistics. 17 May 2025. Available online: https://moe.gov.sa/ar/knowledgecenter/dataandstats/edustatdata/Pages/HigherEduStat.aspx (accessed on 5 May 2025).
  53. Hair, J.F., Jr.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M.; Danks, N.P.; Ray, S. Partial Least Squares Structural Equation Modeling (PLS-SEM) Using R: A Workbook; Springer Nature: Berlin/Heidelberg, Germany, 2021. [Google Scholar]
  54. Leguina, A. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM); Taylor & Francis: Abingdon, UK, 2015. [Google Scholar]
  55. Park, I.; Kim, D.; Moon, J.; Kim, S.; Kang, Y.; Bae, S. Searching for New Technology Acceptance Model under Social Context: Analyzing the Determinants of Acceptance of Intelligent Information Technology in Digital Transformation and Implications for the Requisites of Digital Sustainability. Sustainability 2022, 14, 579. [Google Scholar] [CrossRef]
  56. Duong, C.D.; Le, T.T.; Dang, N.S.; Do, N.D.; Vu, A.T. Unraveling the determinants of digital entrepreneurial intentions: Do performance expectancy of artificial intelligence solutions matter? J. Small Bus. Enterp. Dev. 2024, 31, 1327–1356. [Google Scholar] [CrossRef]
  57. Mushtaq, A. Sustaining Digital Payment Adoption: The Role of Performance Expectancy in Shaping Continuance Intentions for Mobile Wallets. Pollster J. Acad. Res. 2024, 11, 11–17. [Google Scholar]
  58. Dahri, N.A.; Yahaya, N.; Al-Rahmi, W.M.; Vighio, M.S.; Alblehai, F.; Soomro, R.B.; Shutaleva, A. Investigating AI-based academic support acceptance and its impact on students’ performance in Malaysian and Pakistani higher education institutions. Educ. Inf. Technol. 2024, 29, 18695–18744. [Google Scholar] [CrossRef]
  59. Portillo, F.; Soler-Ortiz, M.; Sanchez-Cruzado, C.; Garcia, R.M.; Novas, N. The Impact of Flipped Learning and Digital Laboratory in Basic Electronics Coursework. Comput. Appl. Eng. Educ. 2025, 33, e22810. [Google Scholar] [CrossRef]
  60. Huang, Q.; Lv, C.; Lu, L.; Tu, S. Evaluating the Quality of AI-Generated Digital Educational Resources for University Teaching and Learning. Systems 2025, 13, 174. [Google Scholar] [CrossRef]
  61. Darji, H.; Singh, S. AI-Powered Digital Platform for Religious Literature. Int. J. Innov. Res. Sci. Eng. Technol. 2025, 14, 2552–2559. [Google Scholar]
  62. Suzer, E.; Koc, M. Teachers’ digital competency level according to various variables: A study based on the European DigCompEdu framework in a large Turkish city. Educ. Inf. Technol. 2024, 29, 22057–22083. [Google Scholar] [CrossRef]
  63. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  64. Thompson, C.C.; Okonkwo, S. Management of Artificial Intelligence as an Assistive Tool for Enhanced Educational Outcomes: Students Living with Disabilities in Nigeria. In Transformations in Digital Learning and Educational Technologies; IGI Global Scientific Publishing: Hershey, PA, USA, 2025; pp. 187–218. [Google Scholar]
  65. Kaur, P.; Stoltzfus, J.; Yellapu, V. Descriptive statistics. Int. J. Acad. Med. 2018, 4, 60–63. [Google Scholar] [CrossRef]
  66. Raub, S.; Blunschi, S. The power of meaningful work: How awareness of CSR initiatives fosters task significance and positive work outcomes in service employees. Cornell Hosp. Q. 2014, 55, 10–18. [Google Scholar] [CrossRef]
  67. Becker, J.-M.; Cheah, J.-H.; Gholamzade, R.; Ringle, C.M.; Sarstedt, M. PLS-SEM’s most wanted guidance. Int. J. Contemp. Hosp. Manag. 2022, 35, 321–346. [Google Scholar] [CrossRef]
Figure 1. Conceptual Modelling.
Figure 2. Statistical model. ** p < 0.01, * p < 0.05.
Table 1. Sample and population.
Demographic Items | Female | % | Male | % | Sum | %
Medicine | 55 | 18% | 12 | 4% | 67 | 22%
Dentistry | 60 | 20% | 18 | 6% | 78 | 26%
Pharmacy | 39 | 13% | 5 | 2% | 44 | 15%
Veterinary Science | 30 | 10% | 8 | 3% | 38 | 13%
Nursing | 57 | 19% | 17 | 6% | 74 | 25%
Sum | 241 | 80% | 60 | 20% | 301 | 100%
Table 2. Instrument reliability results from pilot testing (n = 25).
Constructs | α | Ω
Performance Expectancy | 0.901 | 0.911
Effort Expectancy | 0.893 | 0.903
Social Influence | 0.921 | 0.931
AI Awareness | 0.913 | 0.923
Behavioral Intention | 0.893 | 0.903
Table 3. Measurement criteria of quality for conceptual modelling.
Constructs | Loadings | AVE | CR | Cronbach’s Alpha
Performance Expectancy |  | 0.717 | 0.927 | 0.921
PE1 | 0.851 |  |  | 
PE2 | 0.866 |  |  | 
PE3 | 0.848 |  |  | 
PE4 | 0.876 |  |  | 
PE5 | 0.791 |  |  | 
Effort Expectancy |  | 0.701 | 0.921 | 0.909
EE1 | 0.803 |  |  | 
EE2 | 0.774 |  |  | 
EE3 | 0.871 |  |  | 
EE4 | 0.859 |  |  | 
EE5 | 0.866 |  |  | 
Social Influence |  | 0.665 | 0.908 | 0.899
SI1 | 0.816 |  |  | 
SI2 | 0.744 |  |  | 
SI3 | 0.836 |  |  | 
SI4 | 0.869 |  |  | 
SI5 | 0.807 |  |  | 
Behavioral Intention |  | 0.946 | 0.981 | 0.972
BI1 | 0.981 |  |  | 
BI2 | 0.984 |  |  | 
BI3 | 0.952 |  |  | 
Perceptions of Digital Sustainability Requisites |  | 0.808 | 0.927 | 0.893
PDS1 | 0.892 |  |  | 
PDS2 | 0.853 |  |  | 
PDS3 | 0.921 |  |  | 
Table 4. Factor cross-loading analysis.
Items | PE | EE | SI | BI | PDS
PE1 | 0.852 | 0.512 | 0.495 | 0.468 | 0.489
PE2 | 0.867 | 0.521 | 0.506 | 0.472 | 0.501
PE3 | 0.848 | 0.498 | 0.484 | 0.460 | 0.475
PE4 | 0.876 | 0.533 | 0.517 | 0.479 | 0.498
PE5 | 0.791 | 0.476 | 0.465 | 0.452 | 0.470
EE1 | 0.478 | 0.803 | 0.495 | 0.460 | 0.466
EE2 | 0.455 | 0.774 | 0.482 | 0.443 | 0.455
EE3 | 0.489 | 0.871 | 0.516 | 0.478 | 0.481
EE4 | 0.501 | 0.859 | 0.522 | 0.470 | 0.474
EE5 | 0.493 | 0.866 | 0.519 | 0.465 | 0.472
SI1 | 0.462 | 0.489 | 0.816 | 0.450 | 0.461
SI2 | 0.448 | 0.475 | 0.744 | 0.432 | 0.442
SI3 | 0.478 | 0.493 | 0.836 | 0.455 | 0.460
SI4 | 0.491 | 0.511 | 0.869 | 0.468 | 0.473
SI5 | 0.470 | 0.499 | 0.807 | 0.443 | 0.455
BI1 | 0.492 | 0.475 | 0.460 | 0.981 | 0.470
BI2 | 0.509 | 0.489 | 0.475 | 0.984 | 0.482
BI3 | 0.472 | 0.464 | 0.450 | 0.952 | 0.460
PDS1 | 0.522 | 0.498 | 0.509 | 0.487 | 0.892
PDS2 | 0.508 | 0.492 | 0.498 | 0.472 | 0.853
PDS3 | 0.530 | 0.487 | 0.502 | 0.490 | 0.921
Table 5. Fornell–Larcker criterion.

| Constructs                                             | BI    | EE    | PE    | SI    | PDS   |
|--------------------------------------------------------|-------|-------|-------|-------|-------|
| Effort Expectancy (EE)                                 | 0.573 | 0.837 |       |       |       |
| Performance Expectancy (PE)                            | 0.698 | 0.597 | 0.846 |       |       |
| Social Influence (SI)                                  | 0.464 | 0.382 | 0.401 | 0.815 |       |
| Perceptions of Digital Sustainability Requisites (PDS) | 0.487 | 0.492 | 0.508 | 0.473 | 0.899 |
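The Fornell–Larcker check itself is simple arithmetic: each construct's diagonal entry is the square root of its AVE, and discriminant validity holds when that value exceeds the construct's correlation with every other construct. Using the AVE values from Table 3 and the off-diagonal correlations from Table 5:

```python
import math

# AVE values from Table 3; inter-construct correlations from Table 5.
ave = {"PE": 0.717, "EE": 0.701, "SI": 0.665, "PDS": 0.808}
correlations = {
    ("EE", "BI"): 0.573, ("PE", "BI"): 0.698, ("PE", "EE"): 0.597,
    ("SI", "BI"): 0.464, ("SI", "EE"): 0.382, ("SI", "PE"): 0.401,
    ("PDS", "BI"): 0.487, ("PDS", "EE"): 0.492, ("PDS", "PE"): 0.508,
    ("PDS", "SI"): 0.473,
}

def fornell_larcker_holds(construct):
    """True if sqrt(AVE) exceeds every correlation involving the construct."""
    diagonal = math.sqrt(ave[construct])
    shared = [r for pair, r in correlations.items() if construct in pair]
    return diagonal > max(shared)

print(all(fornell_larcker_holds(c) for c in ave))  # → True
```

For example, √0.701 ≈ 0.837 for EE, matching its diagonal entry and exceeding its largest correlation (0.597 with PE), so discriminant validity is supported.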
Table 6. HTMT criterion.

| Construct Pair | HTMT  |
|----------------|-------|
| EE ⟷ BI        | 0.635 |
| PE ⟷ BI        | 0.754 |
| PE ⟷ EE        | 0.671 |
| SI ⟷ BI        | 0.511 |
| SI ⟷ EE        | 0.443 |
| SI ⟷ PE        | 0.462 |
| PDS ⟷ BI       | 0.512 |
| PDS ⟷ EE       | 0.529 |
| PDS ⟷ PE       | 0.545 |
| PDS ⟷ SI       | 0.498 |
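Each HTMT ratio is the mean correlation between the items of two different constructs, divided by the geometric mean of the two constructs' average within-construct item correlations; values below 0.85 indicate discriminant validity. The item-level correlation matrix is not published here, so the numbers in the sketch below are hypothetical placeholders that only illustrate the computation:

```python
import math

def htmt(between, within_a, within_b):
    """HTMT: mean heterotrait item correlation over the geometric mean
    of the two constructs' mean monotrait item correlations."""
    mean_between = sum(between) / len(between)
    mean_a = sum(within_a) / len(within_a)
    mean_b = sum(within_b) / len(within_b)
    return mean_between / math.sqrt(mean_a * mean_b)

# Hypothetical item correlations (not the study's data).
between  = [0.50, 0.52, 0.48, 0.51]  # items of construct A vs items of B
within_a = [0.70, 0.72, 0.68]        # item pairs inside construct A
within_b = [0.66, 0.64, 0.65]        # item pairs inside construct B
print(round(htmt(between, within_a, within_b), 2))  # → 0.74
```

All ten pairs in Table 6 fall below the conservative 0.85 threshold, consistent with the Fornell–Larcker result.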
Table 7. Structural model: β, t-value, p-value, and R².

| Hs | Path Relationship                                                     | β    | t-Value | p-Value | R²   | Supported |
|----|-----------------------------------------------------------------------|------|---------|---------|------|-----------|
| H1 | Performance Expectancy → Intention to Use AI                          | 0.65 | 4.76    | <0.01   | 0.62 | Yes       |
| H2 | Effort Expectancy → Intention to Use AI                               | 0.58 | 3.92    | <0.01   | 0.62 | Yes       |
| H3 | Social Influence → Intention to Use AI                                | 0.53 | 3.47    | <0.05   | 0.62 | Yes       |
| H4 | Performance Expectancy × AI Awareness → Intention to Use AI           | 0.32 | 1.22    | >0.05   | 0.08 | No        |
| H5 | Effort Expectancy × AI Awareness → Intention to Use AI                | 0.28 | 1.08    | >0.05   | 0.06 | No        |
| H6 | Social Influence × AI Awareness → Intention to Use AI                 | 0.25 | 0.97    | >0.05   | 0.05 | No        |
| H7 | Behavioural Intention → Perceptions of Digital Sustainability Requisites | 0.72 | 5.21 | <0.01   | 0.51 | Yes       |
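The "Supported" column follows from comparing each t-value with the large-sample two-tailed critical values (1.96 at p = 0.05, 2.576 at p = 0.01): H1–H3 and H7 clear the thresholds, while the H4–H6 interaction terms do not. A minimal sketch of that check:

```python
# Two-tailed significance check against large-sample critical t-values.
CRITICAL_T = {0.05: 1.96, 0.01: 2.576}

def significant(t_value, alpha=0.05):
    """True if |t| exceeds the critical value for the given alpha level."""
    return abs(t_value) > CRITICAL_T[alpha]

# t-values from Table 7.
print(significant(4.76, alpha=0.01))  # H1 → True
print(significant(3.47, alpha=0.05))  # H3 → True
print(significant(1.22, alpha=0.05))  # H4 → False (AI Awareness moderation not supported)
```

With n = 301 the normal approximation to the t distribution is very close, so these critical values are appropriate.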

Salem, M.A. A Digital Sustainability Lens: Investigating Medical Students’ Adoption Intentions for AI-Powered NLP Tools in Learning Environments. Sustainability 2025, 17, 6379. https://doi.org/10.3390/su17146379
