Article

Adopting Generative AI in Higher Education: A Dual-Perspective Study of Students and Lecturers in Saudi Universities

1 Department of Information System & Cyber Security, College of Computing and Information Technology, University of Bisha, Bisha 61922, Saudi Arabia
2 College of Computing, Birmingham City University, Birmingham B4 7BD, UK
3 IBM, Auckland 1010, New Zealand
* Author to whom correspondence should be addressed.
Data & AI specialist and Quantum Ambassador.
Big Data Cogn. Comput. 2025, 9(10), 264; https://doi.org/10.3390/bdcc9100264
Submission received: 12 August 2025 / Revised: 9 October 2025 / Accepted: 11 October 2025 / Published: 18 October 2025

Abstract

The integration of Generative Artificial Intelligence (GenAI) tools, such as ChatGPT, into higher education has introduced new opportunities and challenges for students and lecturers alike. This study investigates the psychological, ethical, and institutional factors that shape the adoption of GenAI tools in Saudi Arabian universities, drawing on an extended Technology Acceptance Model (TAM) that incorporates constructs from Self-Determination Theory (SDT) and ethical decision-making. A cross-sectional survey was administered to 578 undergraduate students and 309 university lecturers across three major institutions in Southern Saudi Arabia. Quantitative analysis using Structural Equation Modelling (SmartPLS 4) revealed that perceived usefulness, intrinsic motivation, and ethical trust significantly predicted students’ intention to use GenAI. Perceived ease of use influenced intention both directly and indirectly through usefulness, while institutional support positively shaped perceptions of GenAI’s value. Academic integrity and trust-related concerns emerged as key mediators of motivation, highlighting the ethical tensions in AI-assisted learning. Lecturer data revealed a parallel set of concerns, including fear of overreliance, diminished student effort, and erosion of assessment credibility. Although many faculty members had adapted their assessments in response to GenAI, institutional guidance was often perceived as lacking. Overall, the study offers a validated, context-sensitive model for understanding GenAI adoption in education and emphasises the importance of ethical frameworks, motivation-building, and institutional readiness. These findings offer actionable insights for policy-makers, curriculum designers, and academic leaders seeking to responsibly integrate GenAI into teaching and learning environments.

1. Introduction

The rapid emergence of Generative Artificial Intelligence (GenAI) tools such as ChatGPT, Bard, and Claude has sparked profound changes across the higher education landscape. These tools can generate human-like text, assist in summarisation, language translation, and coding, and offer real-time feedback. Their accessibility and versatility have made them appealing to both students and educators, especially for academic support and content creation tasks. As a result, the use of GenAI in university settings has surged, prompting urgent questions about its pedagogical benefits, ethical implications, and impact on academic integrity.
Recent surveys reveal that a significant proportion of students have already incorporated GenAI into their learning routines. For instance, Shata and Hartley [1] reported that over 40% of students in the United States aged 18–29 had used ChatGPT for academic purposes within the first year of its release. Similarly, in the Middle East, early evidence indicates rising interest in using GenAI to support learning, especially in resource-constrained educational environments [2,3]. While this trend opens doors to more personalised, accessible, and efficient learning, it also introduces concerns. Critics warn that excessive reliance on GenAI may undermine critical thinking, facilitate academic misconduct, and challenge traditional assessment paradigms [4,5].
In response to these developments, scholars have increasingly applied technology acceptance frameworks such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) to explain students’ and educators’ attitudes toward GenAI [6,7]. Core constructs such as perceived usefulness, perceived ease of use, and behavioural intention remain foundational. However, GenAI’s distinctive characteristics also necessitate the inclusion of additional constructs like trust, academic integrity, ethical perceptions, institutional support, and intrinsic motivation [8,9]. Beyond student perspectives, lecturers play a critical role in shaping GenAI adoption in higher education. They serve as both gatekeepers of academic standards and potential users of GenAI tools for instructional design and content generation. However, existing research highlights a growing gap between student enthusiasm and faculty caution. Educators often express concerns about unregulated AI use, insufficient institutional policy, and the erosion of deep learning skills among students [10]. Understanding this divergence is essential to creating balanced, inclusive policies that align technological innovation with educational integrity.
Despite growing scholarly interest, few studies have jointly examined the views of both students and lecturers within the same institutional context, particularly in underrepresented regions such as the Kingdom of Saudi Arabia (KSA). As Saudi universities seek to align with Vision 2030’s national emphasis on digital transformation and artificial intelligence, understanding the drivers and barriers to GenAI adoption becomes a strategic educational priority. Therefore, the aim of this study is to investigate the factors influencing the adoption of Generative AI in higher education by examining both student and lecturer perspectives in Saudi universities, using an extended TAM framework that integrates motivational, ethical, and institutional support dimensions.

2. Literature Review

Generative AI tools are rapidly making their way into higher education, offering new ways for students to learn and complete academic tasks. Recent surveys show a sharp rise in usage; for instance, in 2024, about 43% of young adults (18–29) in the U.S. had already used ChatGPT [1]. This surge has sparked both excitement and concern in universities worldwide. Researchers are now examining why students choose to use generative AI in their studies, using established technology adoption frameworks to guide their investigations. In particular, models such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) have been widely employed to identify key drivers of GenAI adoption [6]. These models, and extensions of them, point to various psychological and contextual constructs, ranging from perceived usefulness and ease of use to trust, ethics, and institutional support, that shape a student’s intention to use generative AI. This work investigates those constructs and their ability to explain students’ intentions to use GenAI in education, and also highlights the perspective of lecturers on this emerging trend.

2.1. Students’ Perception

2.1.1. Perceived Usefulness and Ease of Use (Technology Acceptance Model)

One of the most fundamental frameworks is TAM, which suggests that two beliefs, perceived usefulness (PU) and perceived ease of use (PEOU), drive users’ attitudes and ultimately their intention to use a technology [11,12,13]. In the context of generative AI, perceived usefulness refers to the degree to which students believe tools like ChatGPT enhance their learning or productivity, while ease of use refers to how effortless and user-friendly they find these tools. Empirical studies consistently find perceived usefulness to be a strong positive predictor of students’ intention to use GenAI. For example, multiple studies report that when students see clear academic benefits from GenAI (e.g., improved understanding of complex concepts or time saved on assignments), their willingness to adopt the technology increases significantly [1,2]. By contrast, perceived ease of use tends to have a more modest effect; it generally supports adoption (students are more inclined to use AI if it is easy to interact with), but its impact can diminish in populations already comfortable with technology [14]. In one TAM-based study, perceived usefulness had a significant influence on students’ attitudes toward using AI, whereas ease of use showed no significant direct effect on attitude, “suggesting familiarity with technology reduces the role of ease of use” [7]. This implies that in tech-savvy student groups, simply knowing an AI tool is beneficial matters more than the simplicity of its interface. Overall, TAM’s core constructs (usefulness and ease) are confirmed as important: students flock to generative AI when it clearly helps them learn or work more efficiently, and to a lesser extent when it is hassle-free to use [1]. These factors often work by shaping a positive attitude toward the AI. In other words, students who find GenAI useful and easy develop a favourable view of using it, which then drives their behavioural intention [8]. Given TAM’s explanatory power, many researchers use it as a base model, then extend it with additional constructs to better capture the nuances of AI in education [6].

2.1.2. Trust in AI Tools

Trust has emerged as a critical construct in explaining GenAI adoption by students [15]. Trust in this context means the degree to which students believe the AI tool is reliable, accurate, and will act in their best interest (e.g., providing correct information, maintaining privacy) [16]. High trust can alleviate students’ fears about using AI and make them more comfortable incorporating it into their learning [17]. Studies have found that greater trust in generative AI correlates with stronger intentions to use it for learning [8]. Trust can also play a moderating or mediating role between other factors and adoption. For instance, a study of university professors (lecturers) using TAM plus Social Cognitive Theory noted that trust significantly shaped their GenAI adoption decisions, even acting as a mediator that enhanced the effect of perceived usefulness on intention [1]. In other words, if the user trusts the AI, the perceived benefits of the AI are more likely to translate into actual willingness to use it. On the flip side, a lack of trust, perhaps due to concerns about inaccurate outputs (“hallucinations”) or data security, can hinder usage. Building students’ trust in AI (through transparency, reliability, and clear guidelines) is therefore seen as key to promoting acceptance [7]. Some theoretical models explicitly incorporate trust alongside TAM variables; for example, Mustofa, Kuncoro, Atmono, Hermawan, and Sukirman [7] extended TAM by adding trust as a factor and found trust exerted a direct influence on attitudes toward AI adoption (though it did not significantly moderate usage behaviour as initially hypothesised).

2.1.3. Ethical Considerations and Academic Integrity

The rise of generative AI in academia has brought ethical considerations to the forefront, particularly surrounding academic integrity. Two related constructs frequently discussed are students’ perception of ethical use of AI and their concerns about plagiarism or cheating. Researchers are beginning to integrate these into models of AI adoption. For example, Mustofa, Kuncoro, Atmono, Hermawan, and Sukirman [7] added an Ethics construct to TAM and found that students who viewed GenAI use as ethically acceptable had more positive attitudes toward using it. In that study, Ethics had a direct positive impact on attitude toward using AI, highlighting that if students feel using ChatGPT is “the right thing to do” or at least not wrong, they are more inclined to use it. Conversely, concern about violating academic integrity can dampen students’ intentions. A multi-factor analysis of students introduced a specific variable for fear of plagiarism, and the results showed that fear of plagiarism had a significant negative relationship with intention to use GenAI [8]. In other words, students worried that using AI might lead them to accidentally commit plagiarism or be accused of cheating were less likely to intend to use such tools. This aligns with qualitative findings as well: in interviews, students and faculty alike have flagged the potential for AI-facilitated plagiarism as a major downside of GenAI in education [4]. Faculty members often emphasise that unchecked use of AI could undermine the honesty of student work, citing examples of students having AI do their assignments or produce exam answers (which the students then submit as their own). Such misuse undermines academic honesty and is a serious concern for educators [4].
Importantly, students’ awareness of ethical guidelines is not always high. A survey of Saudi university students by Aldossary, Aljindi, and Alamri [2], one of the first studies in the Kingdom of Saudi Arabia (KSA) on GenAI in education, found that while students broadly recognised various risks of GenAI (data privacy breaches were the top concern, and many also worried about reduced human interaction and critical-thinking skills), the lowest-rated concern was using GenAI without following ethical principles. In that KSA sample, students were far less worried about the ethical misuse of AI than about other issues, suggesting a gap in awareness or emphasis. Aldossary, Aljindi, and Alamri [2] interpreted this as indicating that students either do not fully grasp the academic integrity implications of AI or lack guidance on proper usage. This finding reinforces calls for better ethics education and clear academic integrity policies related to AI. Overall, ethical considerations, encompassing students’ values, fears of misconduct, and understanding of proper use, are becoming key determinants of GenAI adoption. Models like the Theory of Planned Behaviour (which includes moral norms) and extended TAM frameworks now incorporate these elements to explain student intentions. The evidence so far suggests that addressing academic integrity upfront can shape AI adoption: when students feel confident they can use AI responsibly (or will not be penalised unfairly for it), they are more open to using it; but if they fear crossing ethical lines, they may avoid AI tools [8].

2.1.4. Motivation, Enjoyment, and Confidence

Beyond usefulness and ethics, motivational factors also influence students’ inclination to use generative AI. One such factor is intrinsic motivation, for example, the enjoyment or satisfaction students get from using AI tools. Some researchers have incorporated perceived enjoyment (an intrinsic motivator) into acceptance models and found it to be a significant positive predictor of intention [8]. The logic is that if interacting with a GenAI tool is fun or engaging, students will be more likely to keep using it voluntarily [18]. Generative AI can spark curiosity and provide instant, interactive feedback, which may increase students’ enjoyment in learning tasks [19]. Another closely related construct is self-efficacy, or confidence in using AI: essentially, the student’s belief in their ability to effectively use the tool [20]. Higher self-efficacy often translates to greater technology use. Some studies frame this as AI literacy or competence; student data showed AI literacy was positively associated with intention to use GenAI (students who felt knowledgeable and capable with AI had higher intentions) [8]. Moreover, generative AI can itself strengthen student confidence and reduce anxiety in learning [4]. Students reported that having an ever-available, non-judgmental AI assistant to help with tasks (like practicing language skills or getting feedback on writing) lowered their stress levels and made them more confident in their abilities [4]. This stress reduction can, in turn, motivate continued use. For example, students who feel anxious about writing assignments might be highly motivated to use AI tools that give them immediate help and boost their confidence in producing good work. On the other hand, if students lack confidence in the tool (or in their own ability to use it correctly), they may avoid it. Interestingly, the role of self-efficacy can differ between students and faculty: one survey of U.S. college professors found that faculty’s own tech self-efficacy had minimal impact on their GenAI adoption, compared to other factors like trust and peer influence [1]. For students, however, feeling competent with AI is generally a facilitator of use. In summary, motivational and affective factors (enjoying the tool, feeling less anxious, and feeling capable of using it) all feed into a student’s intention to embrace generative AI. This suggests that making AI tools more engaging and providing training to build students’ confidence can further increase adoption.

2.1.5. Social Influence and Institutional Support

The decision to use generative AI is not made in a vacuum; social and institutional contexts significantly shape student behaviour. In technology acceptance research, social influence (or subjective norms) refers to the effect of peers, instructors, or societal expectations on an individual’s intention to use a technology [21,22]. If students perceive that their classmates and teachers endorse or use AI tools, they may be more inclined to try them. Recent studies confirm that social influence positively impacts students’ behavioural intention to use GenAI [6]. For instance, in a Middle Eastern multi-country study, Mohamed, Goktas, Khalaf, Kucukkuya, Al-Faouri, Seleem, Ibraheem, Abdelhafez, Abdullah, Zaki, and Nashwan [6] showed that social influence has a significant positive path coefficient (β ≈ 0.31) toward intention, meaning students were swayed by the attitudes and usage of those around them. This aligns with the Unified Theory of Acceptance and Use of Technology (UTAUT), which emphasises social influence as a key determinant of technology adoption, especially in communal or classroom settings. From a practical standpoint, when respected faculty encourage AI as a learning aid, or when many peers start using ChatGPT for assignments, it creates a normative pressure (or encouragement) for others to follow suit.
Hand-in-hand with social norms is the concept of institutional support. This can include the resources, guidance, and infrastructure provided by the university to facilitate technology use, analogous to what UTAUT calls facilitating conditions. Research shows that a supportive environment can greatly enhance GenAI adoption. Facilitating Conditions (e.g., availability of access, support, and training) have a significant positive effect on students’ intention (β ≈ 0.19) [6]. Essentially, when students feel their institution makes it easy to use generative AI, by providing adequate technology access, IT support, or integration into coursework, they are more likely to use it. Conversely, a lack of support or unclear institutional stance can be a barrier. A case study of faculty perspectives in the UAE revealed that insufficient institutional support and unclear guidelines were key barriers in handling AI-related academic integrity issues [10]. Many instructors felt the university had not given them the necessary tools or policies to confidently manage students’ use of GenAI, leading to uncertainty and hesitancy in both faculty and students [23].
To address these issues, scholars and practitioners alike are calling for robust institutional frameworks. Universities are urged to develop clear policies, guidelines, and training programs around generative AI. This serves a dual purpose: it guides students on how to use AI ethically and effectively (addressing integrity and skill concerns), and it signals institutional support for positive uses of AI (addressing facilitating conditions). For example, Hasanein and Sobaih [4], after interviewing stakeholders in Saudi Arabia and Egypt, recommend that higher education institutions “establish and promote clear guidelines on the responsible use of ChatGPT”, accompanied by training sessions for both students and faculty. Such guidelines should clarify acceptable vs. unacceptable uses of AI, outline academic integrity expectations, and provide practical tips for using AI as a learning aid rather than a cheating shortcut. Additionally, they advise that institutions invest in tools (like AI output detectors) and encourage practices (like cross-verifying AI-generated content with credible sources) to support responsible use [4]. Similarly, Alshamy, Al-Harthi, and Abdullah [3] emphasised the need for supervisory frameworks, full institutional guidelines, and specialised training workshops to ensure that academic usage of GenAI is ethical and responsible, so that the technology’s benefits can be harnessed without compromising academic standards. In summary, strong institutional support, in the form of policy, infrastructure, and culture, is a pivotal factor in students’ adoption of generative AI. When done right, it creates an ecosystem where students feel encouraged and safe to use AI for learning, guided by positive peer examples and clear rules of the road.

2.2. Lecturer Perspectives on Student Use of GenAI

Lecturers and academic staff play a dual role in the adoption of generative AI: they are stakeholders who might use AI in their teaching, and they are gatekeepers influencing how students use (or do not use) AI in coursework. Understanding lecturers’ perspectives provides valuable context to student adoption, since instructors’ attitudes and policies can significantly affect student behaviour. The literature reveals that lecturers recognise the potential of GenAI in education but are often more cautious about its risks than students. Alshamy, Al-Harthi, and Abdullah [3] surveyed both students and academics and found a notable perception gap: students reported frequent use of GenAI for help with learning (e.g., brainstorming ideas, getting help on assignments), whereas academics were using GenAI for teaching prep (developing materials, lesson plans) and voicing stronger concerns about the technology’s downsides. Both groups saw GenAI as a tool for enhancing efficiency and innovation in academia. Yet academics consistently show greater concerns about issues such as plagiarism, academic misconduct, over-reliance on AI, and the erosion of critical thinking skills [3]. This aligns with anecdotal reports in many universities since ChatGPT’s debut: professors are often the ones raising alarms about academic integrity, while students may explore the tool’s capabilities more carelessly.
Lecturers’ concerns are not unfounded. As discussed earlier, unrestricted use of generative AI can lead to plagiarism or shallow learning. Educators have observed that over-reliance on tools like ChatGPT might impede the development of students’ own problem-solving and critical thinking abilities [4]. There is a fear that if an AI can produce an answer instantly, students might skip the deep engagement with material that learning traditionally requires [5]. Moreover, faculty have to grapple with detecting AI-generated work and ensuring fair assessment. These challenges put pressure on instructors to adapt [24]. Many lecturers thus approach student use of AI with cautious optimism: they see benefits (time saved in answering common questions, personalised tutoring at scale, etc.) but also feel responsible for mitigating the drawbacks. A common sentiment is a preference for “educative over punitive” approaches in managing AI usage [10]. Rather than outright bans or harsh penalties, instructors favour educating students about how to use AI appropriately and integrating it into teaching in a guided manner. For example, faculty in one study preferred to teach students about AI tools and set clear guidelines (what is allowed vs. cheating) rather than rely solely on punishment if misuse occurs [10]. This indicates that lecturers need backup from their institutions (in terms of clear rules and tools) to effectively handle GenAI in the classroom.
It is also insightful to examine what influences lecturers’ own adoption of AI, since their personal use can translate into pedagogical practice. Shata and Hartley [1] applied TAM and Social Cognitive Theory to faculty adoption of GenAI and found that, similar to students, perceived usefulness was a prime driver of faculty’s intent to use AI; professors were more likely to adopt ChatGPT if they saw it as beneficial for their teaching or research. Interestingly, Shata and Hartley [1] noted that perceived ease of use was less impactful for faculty than usefulness, perhaps because most faculty already have experience with educational technologies. Moreover, social factors played a strong role for faculty: trust in AI and social reinforcement (e.g., encouragement or modelling by colleagues) significantly influenced professors’ decision to use GenAI. In fact, trust and social reinforcement acted as mediators linking the TAM factors to usage; for example, a professor might only act on the perceived usefulness of AI if they also trust the tool and see their peers or institution endorsing it [1]. Notably, Shata and Hartley [1] found that faculty’s self-efficacy (confidence in their own ability with AI) had minimal impact, suggesting that even less tech-confident professors might use AI if they perceive benefits and have a supportive peer environment. This emphasises how important a collegial and policy context is: when faculty feel that using AI is accepted, supported, and effective, they are more inclined to integrate it into their teaching. And when lecturers do integrate AI (for example, by allowing or encouraging certain uses in assignments), it normalises the tool for students too.
In summary, lecturers tend to be cautious champions of generative AI. They can envision and even experience the advantages of GenAI in making education more efficient and personalised, but they are also keenly aware of its pitfalls in the hands of students. From the lecturer’s perspective, the key is finding a balance: leveraging GenAI’s strengths while safeguarding academic integrity and learning quality. Thus, many advocate for clear guidelines and proactive training (for both faculty and students) as the way forward [3,4]. By engaging lecturers in policy-making and equipping them with the needed support, higher education can develop a coherent approach where students’ use of generative AI is encouraged in beneficial ways and curbed in dishonest ones. The dialogue between student adoption factors and lecturer perspectives is crucial: understanding both helps in designing interventions (like honour codes, AI literacy workshops, or assessment redesigns) that address concerns without stifling innovation.
Research into students’ intention to use generative AI in higher education reveals a multifaceted picture. Classical acceptance constructs like perceived usefulness and ease of use remain central; if students find AI tools genuinely helpful for learning, they are inclined to use them [1]. But beyond that, extended factors and new considerations play a significant role. Trust in the technology, perceived enjoyment, and confidence (or anxiety) can all tilt the balance toward or away from adoption [4,8]. Critically, ethical concerns and academic integrity issues weigh on adoption decisions: some students hesitate out of fear of plagiarism or violating rules, while others plunge in, perhaps unaware of such pitfalls [2,8]. This makes the role of universities and lecturers extremely important. Social influences (peer and instructor attitudes) and institutional support (policies, training, IT infrastructure) are the scaffolding that can either encourage responsible AI use or, if absent, leave students unsure and divided [6,10]. Lecturers generally want to harness AI’s benefits for education but also protect academic standards, and their perspectives highlight the need for balanced strategies; neither an uncritical embrace of GenAI nor a blanket ban, but a guided integration.
In practical terms, the literature suggests moving toward an academic environment where generative AI is acknowledged and managed: clear guidelines define what counts as acceptable AI-assisted work, honour codes are updated to include AI usage, and students are taught how to use AI as a learning support rather than a shortcut. Such steps, combined with faculty training and dialogue, can build a culture of ethical AI proficiency. Conceptual models like TAM, TPB, and UTAUT (augmented with trust, ethics, and other context-specific factors) provide a roadmap by highlighting which levers influence student intentions the most. Universities, including those in regions like the Middle East where governments (e.g., through Saudi Arabia’s Vision 2030) are heavily investing in AI [6], can utilise these insights to encourage positive usage (emphasising AI’s usefulness for learning, addressing anxiety through tutorials, and fostering trust via reliable tools) while mitigating risks (through academic integrity education and supportive monitoring). By addressing both the “appeal and concerns of GenAI in learning” [8], institutions can help students and lecturers alike to integrate generative AI in education in a responsible, effective manner. The conversation between students’ needs and lecturers’ concerns is ongoing, but the emerging consensus is that with the right constructs in place (usefulness, ease, trust, ethics, and support), generative AI can be a powerful ally in higher education rather than a threat.

3. The Conceptual Model in Higher Education GenAI Adoption

3.1. Core TAM Constructs: Perceived Usefulness and Ease of Use

The inclusion of perceived usefulness (PU) and perceived ease of use (PEOU) as fundamental determinants is well-grounded in prior research on technology acceptance [25,26,27,28]. Numerous studies confirm that if students find a new tool easy to use and useful for their academic tasks, they form positive attitudes and intentions toward using it [29]. In the context of generative AI tools (e.g., ChatGPT), perceived usefulness consistently emerges as a strong predictor of student adoption. For instance, a TAM-based study of Hong Kong undergraduates found that perceived benefits (analogous to usefulness) had the most significant impact on how frequently students used GenAI tools [30]. A large-scale survey of over 5000 Chinese university students similarly showed that perceived usefulness was the primary driver of intention to use GenAI (β ≈ 0.44), far outweighing most other factors [31]. These prior findings motivated hypothesis H1, which posits that PU positively influences students’ intention to use GenAI.
The model’s treatment of perceived ease of use is also supported by literature, though with some nuance. Generally, if an educational AI tool is easy to interact with, students are more likely to see its value and intend to use it [29]. PEOU often acts as an antecedent to PU, as documented in many TAM studies: easier-to-use systems tend to be seen as more useful by students, thereby indirectly boosting adoption [32,33]. Empirical evidence in higher education backs this up. For example, researchers have observed that effortlessness in using ChatGPT correlates with greater perceived value and intention among students [34]. The hypothesised direct positive effect of EOU on intention is likewise in line with prior findings that usability can directly encourage technology acceptance [35,36]. However, recent studies on GenAI usage indicate that PEOU’s influence might diminish when the technology is already familiar or user-friendly. Mustofa, Kuncoro, Atmono, Hermawan, and Sukirman [7] found that perceived ease of use did not significantly affect attitudes toward AI tools among university students, suggesting that today’s digital-native students may take ease of use for granted once a tool meets a basic usability threshold. In summary, the model correctly retains TAM’s core: PU remains a central predictor of GenAI adoption intention [31], and PEOU contributes by enhancing PU and (to a lesser extent) intention, though its effect could be less pronounced if the GenAI tools are intuitively designed.

3.2. Intrinsic Motivation as a Driver of Adoption

In recognising motivation (Mov), particularly intrinsic motivation, the model extends beyond TAM into the realm of Self-Determination Theory (SDT) [37]. This addition is timely, as educators note that students’ intrinsic drive (curiosity, interest, enjoyment) can strongly influence engagement with new learning technologies. The idea that a student’s internal desire to learn or experiment would push GenAI usage is consistent with prior motivational models. Some recent evidence aligns with the hypothesis (motivation → intention). For example, a UTAUT2-based study identified hedonic motivation (the enjoyment or satisfaction from using the tool) as a significant positive influence on students’ intention to adopt generative AI technologies [9]. This suggests that students who find GenAI tools fun or inherently rewarding are more inclined to use them in their studies. Likewise, SDT would predict that if GenAI use aligns with students’ intrinsic goals (e.g., mastering content or creative exploration), their intention to use such tools will increase [38,39]. That said, findings on intrinsic motivation in the specific context of GenAI are somewhat mixed. Zogheib and Zogheib [40] combined TAM with SDT constructs and found that intrinsic motivation alone was not a significant predictor of behavioural intention to use ChatGPT, whereas certain extrinsic motivators were impactful. Students’ intention was driven more by external goals (e.g., improving grades or efficiency) and social influences, while pure interest or enjoyment did not reach significance [40]. By contrast, other researchers have reported that perceived enjoyment and curiosity can facilitate acceptance of AI tutors and chatbots in learning [9]. This discrepancy could be due to contextual differences (e.g., academic pressure emphasising extrinsic outcomes) or how motivation is measured (intrinsic interest vs. hedonic enjoyment). Overall, the literature acknowledges motivation as an important facet of technology acceptance in education. Incorporating motivation into the model is justified: a student who is genuinely motivated, feeling autonomous, and interested in using GenAI for learning, is more likely to integrate such tools into their study routine. Recent global reviews on ChatGPT adoption indeed call for considering motivational drivers alongside TAM factors [40]. Thus, while intrinsic motivation may not universally overpower other factors, its inclusion captures a critical human element: students are not just passive users of GenAI, but active learners whose drive to learn can amplify or hinder their adoption of new AI tools.

3.3. Ethical Disposition: Trust, Responsibility, and Academic Integrity

The model’s unique contribution is embedding ethical and responsible use factors, namely trust and responsibility (T-R) and academic integrity, as antecedents of motivation and technology acceptance. This reflects a growing consensus in recent literature that students’ attitudes toward AI are shaped not only by utility but also by their ethical comfort level with these tools. As universities contend with the implications of GenAI for academic honesty, students’ trust in the technology and their sense of responsibility in using it appropriately have become pivotal considerations [29].
There is strong support for the hypothesis that posits that trust and responsibility positively influence motivation. Trust in AI tools has been identified as a key determinant of adoption in multiple studies. Zhang and Wang [31], using a combined TAM-TPB model across thousands of students, found that trust in GenAI systems was the second-strongest predictor of students’ intention to use GenAI (β ≈ 0.22), behind only usefulness. Crucially, the analysis revealed that trust plays a mediating role between perceived usefulness and intention: in other words, students only act on an AI tool’s perceived benefits if they also trust the tool and feel secure about using it [31]. This reinforces that students’ intrinsic motivation to engage with GenAI will be higher when they trust the tool’s outputs and believe they can use it responsibly. Similarly, Mustofa, Kuncoro, Atmono, Hermawan, and Sukirman [7] extended TAM with ethics and found that students’ ethical perceptions directly shape their attitude toward AI: higher ethics (e.g., seeing the AI as aligning with one’s moral and responsible use) led to a significantly more positive attitude toward using the tool. Thus, fostering ethical AI usage and trust-building is vital to improving acceptance in educational contexts [7]. All this evidence suggests that when students trust a GenAI platform (in terms of accuracy, data security, and alignment with their values) and feel a sense of responsibility in how they use it (e.g., not to cheat or plagiarise), they are intrinsically more driven to integrate it into their learning. Trust thus feeds motivation and intention, rather than being merely an afterthought in technology adoption.
The construct of academic integrity in the model taps into students’ ethical disposition toward honest academic work in the age of AI. Recent surveys indicate that concerns about academic integrity heavily influence student behaviour with GenAI. For example, over half of U.S. college students voice concerns about whether using AI tools might constitute cheating, and in China, 76% of students reported worries about GenAI leading to integrity violations in coursework [31]. Such concerns can dampen students’ ease and willingness to use these tools. Conversely, if students perceive that using GenAI can be done honestly and with approval (e.g., for formative learning or skill improvement, rather than outright cheating), they are likely to feel less apprehension or guilt, making the tools psychologically easier to use and more motivating. Indeed, clear ethical guidelines and norms can remove the ambiguity that might otherwise hinder adoption. Educators around the world have been rushing to clarify how GenAI may or may not be used in assignments, precisely because ambiguity in this area breeds fear and resistance [29]. Research in educational technology suggests that when institutional policies encourage responsible use of new tools (as opposed to outright bans), students develop greater trust and see the tools as legitimate aids, thereby lowering barriers to use [29]. We expect that a student who values academic integrity and sees GenAI as compatible with it under proper use will report higher perceived ease (no moral conflict acting as friction) and greater intrinsic motivation to use the technology for learning. This reasoning is supported by the broader literature on ethics in AI adoption: ethical concerns are significant barriers to acceptance, and reducing those concerns (through trust and integrity assurances) can remove psychological hurdles [41]. In summary, the model’s incorporation of T-R and Academic Integrity is strongly validated by recent studies that emphasise trust and ethics as direct influences on attitudes and intentions with AI [7,31]. By accounting for these factors, the model captures how responsible-use perceptions can either enable or stifle a student’s willingness to embrace GenAI in their academic life.

3.4. The Role of Institutional Support

Finally, the model recognises an organisational-level factor, Institutional Support (Ins-Sup), which aligns with extended acceptance frameworks like TAM3/UTAUT that include environmental enablers (e.g., facilitating conditions or support). Hypothesis H7 posits that institutional support positively influences the perceived usefulness of GenAI for students. There is plenty of literature to support this link in a higher education setting. Institutional support refers to the guidance, training, resources, and encouragement that a university provides regarding a technology. When universities actively support GenAI integration (for example, by offering workshops, providing access to AI tools, or establishing clear usage policies), students are more likely to recognise valuable academic uses for those tools [42]. In effect, support from the institution can both reduce the perceived complexity of using GenAI and increase its perceived relevance to coursework, two factors closely tied to perceived usefulness, as reported by Jeilani and Abubakar [42]. The researchers [42] noted that perceived institutional support significantly boosts students’ perceptions of AI’s value in learning, in part by helping students overcome initial barriers and see clear benefits. This directly corroborates H7: when a university signals that GenAI tools are beneficial and provides help in using them, students tend to believe those tools will enhance their academic performance, i.e., higher PU.
Moreover, integrating institutional support echoes the facilitating conditions construct of UTAUT, which has been applied to GenAI adoption. In one UTAUT-based investigation, lack of sufficient support (facilitating conditions) surprisingly showed a negative effect on intention [9]. The authors interpreted this to mean that students who did not perceive strong support systems in place were less inclined to intend continued GenAI use, highlighting how crucial institutional facilitation is. Other educational technology research similarly concludes that perceived organisational support can directly influence TAM variables like usefulness and ease of use [42]. For example, Al-Rahmi, et al. [43] found that in e-learning, institutional support reduced perceived complexity and increased perceived usefulness of the platform for students. In our context, when an institution provides a supportive environment (e.g., integrates GenAI into curricula, trains students on proper use, addresses ethical issues), students interpret GenAI as a legitimate and useful academic tool rather than a banned or risky novelty. This increases the tool’s usefulness in the students’ eyes because it is now tied to institutional value (improving learning and outcomes with official endorsement). In sum, recent global perspectives underscore that without institutional encouragement, students may underutilise GenAI or remain sceptical of its benefits [9]. The conceptual model’s inclusion of Ins-Sup → PU is therefore well-founded: supportive academic infrastructures positively shape students’ perceived usefulness and uptake of generative AI [42], ultimately facilitating more effective adoption.
Overall, the proposed conceptual model, an augmented TAM framework for student GenAI adoption, is strongly substantiated by contemporary research. Perceived usefulness and ease of use remain fundamental, proven predictors of students’ intention to use GenAI tools, consistent with decades of TAM studies [29,31]. Building on that base, the model wisely integrates motivational factors and ethical considerations that are particularly salient in higher education’s encounter with AI. Recent peer-reviewed studies from around the world highlight that students’ intrinsic and extrinsic motivations, their trust in AI and sense of academic integrity, and the support provided by their institutions all critically influence GenAI acceptance [7,40,42]. The hypothesised paths H1–H7 reflect relationships that have empirical backing: for example, usefulness driving intention (H1) [30]; ease of use strengthening usefulness and intention (H2, H3) [34]; motivation (especially when supported by external incentives or enjoyment) encouraging usage (H4) [9]; ethical trust and responsibility cultivating positive attitudes and motivation (H5, H6) [7,31]; and institutional support creating an environment where GenAI is seen as useful (H7) [42]. By accounting for both the human motivational aspect and the ethical-institutional context, the model is well-positioned to explain student intentions in a global higher education setting navigating generative AI. This comprehensive approach is in line with the latest calls in the literature for more holistic, multi-theory models to understand educational technology adoption [41]. In conclusion, recent studies not only validate the model’s individual components but also reinforce the model’s overarching premise: successful integration of GenAI in universities depends on usability and usefulness plus students’ motivation and ethical comfort, all supported by a conducive institutional framework. Such an evidence-based model can be tested statistically and can yield insights to help universities promote effective and responsible use of generative AI in learning [7,29].

3.5. Model Structure and Hypothesised Paths

The resulting model proposes the following relationships:
H1: PU positively predicts Intention to Use GenAI.
H2: EOU positively predicts PU.
H3: EOU positively predicts Intention.
H4: Mov positively predicts Intention.
H5: Academic Integrity positively predicts EOU and Mov.
H6: T-R positively predicts Mov.
H7: Ins-Sup positively predicts PU.
This structure allows for testing both direct and mediated effects, particularly the mediating roles of PU and Mov between institutional or ethical antecedents and intention. The model is flexible and statistically testable via Structural Equation Modelling (SEM), with capacity for both reflective and formative constructs. A conceptual diagram (Figure 1) illustrates the hypothesised relationships among constructs. Arrows represent the proposed causal paths grounded in theory and empirical logic.
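To illustrate how these paths could be specified for estimation, the sketch below encodes H1–H7 in lavaan-style syntax using the open-source semopy package. This is a covariance-based SEM analogue offered purely for illustration (the study itself uses PLS-SEM in SmartPLS 4); the item names and data file are hypothetical placeholders, not the authors' instrument.

```python
# Illustrative sketch only: covariance-based SEM encoding of H1-H7 via semopy.
# The study uses PLS-SEM (SmartPLS 4); item names and file are hypothetical.
import pandas as pd
import semopy

MODEL_DESC = """
# Measurement model (reflective indicators)
PU        =~ PU1 + PU2 + PU3
EOU       =~ EOU1 + EOU2
Mov       =~ Mov1 + Mov2
Integrity =~ Integrity1 + Integrity2
TR        =~ TR1 + TR2 + TR3
InsSup    =~ InsSup1 + InsSup2 + InsSup3
Intention =~ Intention1 + Intention2 + Intention3

# Structural model
PU        ~ EOU + InsSup         # H2, H7
EOU       ~ Integrity            # H5 (integrity -> ease of use)
Mov       ~ Integrity + TR       # H5, H6
Intention ~ PU + EOU + Mov       # H1, H3, H4
"""

df = pd.read_csv("student_items.csv")  # hypothetical item-level responses
model = semopy.Model(MODEL_DESC)
model.fit(df)
print(model.inspect())  # estimated paths, standard errors, p-values
```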
This TAM-based model is well-suited to the emerging domain of GenAI adoption in education. It reflects both individual cognitive appraisals (PU, EOU), motivational forces (Mov), and institutional context (Ins-Sup), while also incorporating ethical dimensions (integrity, T-R). Such integration is essential for understanding technology adoption in academic settings where integrity and critical thinking are paramount. This model not only aligns with existing frameworks like TAM2/TAM3 and UTAUT but also advances them by embedding constructs specific to the GenAI discourse in education. The proposed model thus emphasises both cognitive beliefs and contextual supports.

4. Methodology

4.1. Research Design

This study employed a quantitative, cross-sectional survey design to explore the perceptions, concerns, and institutional readiness surrounding the use of Generative AI (GenAI) in higher education. Data were collected from two distinct but complementary participant groups: university lecturers and undergraduate students. This dual-perspective approach enabled a comparative understanding of attitudes toward GenAI across the educational spectrum, aligning with calls for multi-stakeholder insights in digital education research [44].

4.2. Participants and Sampling

Data were collected between 1 May and 30 June 2025. In total, 309 lecturers and 578 undergraduate students participated from three public universities in the south of Saudi Arabia: Najran University, Jazan University, and University of Bisha.
Lecturers.
Lecturers were recruited using purposive and snowball sampling via professional academic networks and represented a broad range of disciplines, including computing/IT, engineering, education, business/economics, and humanities/social sciences. Teaching experience was categorised into 0–5 years, 6–10 years, and >10 years. Lecturers also reported their familiarity with GenAI (none to advanced/regular use), enabling subgroup comparisons by experience and expertise.
Students.
Students were recruited via convenience sampling from the Colleges/Departments of Computer Science, Engineering, Economics/Business, Human Sciences, and Education across the three universities. Students reported their own use of GenAI in academic tasks and completed measures of Perceived Usefulness (PU), Perceived Ease of Use (EOU), Behavioural Intention, and Institutional Support. The student sample reflected diversity in gender, year of study, and prior exposure to AI tools.

4.3. Instrumentation

Two separate but aligned survey instruments were developed, one for lecturers and one for students, sharing common constructs based on the extended Technology Acceptance Model (TAM) and Self-Determination Theory (SDT). Both instruments used five-point Likert scales and covered: perceived usefulness (PU), perceived ease of use (EOU), motivation and learner autonomy (students), academic integrity and assessment adaptation (lecturers), trust and responsibility (with a focus on concerns about student use; students), institutional support (training and policy guidance), and behavioural intention to use GenAI.
Open-ended items (students).
“In one or two sentences, how do you feel about using generative AI (e.g., ChatGPT) for your learning?”
“How do you think your university should respond to student use of generative AI?”
“List the main ways you currently use generative AI in your studies.”
Open-ended items (lecturers).
“What are your main concerns (if any) about student use of generative AI in assignments?”
“What kinds of support or policies should institutions provide to manage generative AI in teaching and assessment?”
Validity and reliability. Face validity was established via expert review by specialists in educational technology and AI ethics. Internal consistency was high across constructs (Cronbach’s α > 0.80 in both instruments). Open-ended responses were analysed using directed content analysis aligned to the constructs above; category frequencies are reported as percentages for transparency and comparability. Full details of the distributed survey are provided in Appendix A.
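For transparency, the sketch below shows how Cronbach’s α is conventionally computed for one multi-item construct; the toy item scores are illustrative assumptions, not the study’s data.

```python
# Illustrative sketch: Cronbach's alpha for a single multi-item construct,
# matching the internal-consistency criterion reported above (alpha > 0.80).
# The toy scores below are hypothetical, not the study's data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

pu_items = np.array([[4, 5, 4],
                     [3, 3, 4],
                     [5, 5, 5],
                     [2, 3, 2],
                     [4, 4, 5]])
print(round(cronbach_alpha(pu_items), 3))
```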

4.4. Data Collection Procedure

The survey was administered online using Google Forms. Participants were informed about the research purpose and assured of confidentiality and anonymity. Ethical approval was obtained from the lead institution, and digital consent was required before participation. The survey remained open for four weeks.

4.5. Data Analysis

Quantitative data were analysed using IBM SPSS v30 and SmartPLS 4. Analytic techniques included: descriptive statistics to summarise perceptions of GenAI usage and its educational impact; one-way ANOVA and Tukey HSD tests to compare perceptions across teaching experience and familiarity levels; correlation and cross-tabulation to assess relationships between constructs (motivation, intention, integrity); and Structural Equation Modelling (SEM) using SmartPLS to validate the conceptual model and test direct and indirect effects among variables, including mediation via PU and Motivation.
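As an illustration of the ANOVA step, the following sketch runs a one-way ANOVA with a Tukey HSD follow-up across teaching-experience bands using scipy and statsmodels; the file name and column names are hypothetical placeholders.

```python
# Illustrative sketch of the ANOVA step: one-way ANOVA across teaching-
# experience bands with a Tukey HSD follow-up. Names are hypothetical.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("lecturer_responses.csv")       # hypothetical data
groups = [g["pu_score"].to_numpy()
          for _, g in df.groupby("experience_band")]
f_stat, p_val = stats.f_oneway(*groups)          # omnibus test

if p_val < 0.05:                                 # follow up only if significant
    print(pairwise_tukeyhsd(df["pu_score"], df["experience_band"]))
```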
Cross-group comparisons for common constructs. For constructs administered identically to both groups (e.g., perceived usefulness (PU), perceived ease of use (EOU), behavioural intention, institutional support), we compared students vs. lecturers using independent-samples tests. After screening for outliers and missingness, we assessed univariate normality via Shapiro–Wilk (acknowledging reduced sensitivity at large n) and homogeneity of variances via Levene’s test. When normality and variance assumptions held, we used the Student t-test (equal variances) or Welch’s t-test (unequal variances). If normality was not supported, we used the Mann–Whitney U-test. We report group means, SDs, mean differences with 95% CIs, test statistics, and two-tailed p-values; to control for multiple comparisons across constructs, we applied Holm adjustment. Effect sizes are reported as Hedges’ g with 95% CIs (small ≈ 0.20, medium ≈ 0.50, large ≈ 0.80).
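The decision flow described above can be sketched as follows; the alpha threshold, p-values, and data are illustrative assumptions rather than the study’s results.

```python
# Illustrative sketch of the described decision flow for one shared construct.
# Thresholds, p-values, and data are assumptions for demonstration only.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def compare_groups(students, lecturers, alpha=0.05):
    """Shapiro-Wilk -> Levene -> Student t / Welch t / Mann-Whitney U."""
    normal = (stats.shapiro(students).pvalue > alpha
              and stats.shapiro(lecturers).pvalue > alpha)
    if not normal:
        return "Mann-Whitney U", stats.mannwhitneyu(
            students, lecturers, alternative="two-sided").pvalue
    equal_var = stats.levene(students, lecturers).pvalue > alpha
    name = "Student t" if equal_var else "Welch t"
    return name, stats.ttest_ind(students, lecturers,
                                 equal_var=equal_var).pvalue

def hedges_g(a, b):
    """Cohen's d with the small-sample (Hedges) correction."""
    n1, n2 = len(a), len(b)
    pooled = np.sqrt(((n1 - 1) * np.var(a, ddof=1)
                      + (n2 - 1) * np.var(b, ddof=1)) / (n1 + n2 - 2))
    d = (np.mean(a) - np.mean(b)) / pooled
    return d * (1 - 3 / (4 * (n1 + n2) - 9))

# Holm adjustment across the constructs compared (PU, EOU, Intention, Ins-Sup)
raw_p = [0.012, 0.048, 0.230, 0.004]             # illustrative p-values
reject, p_holm, _, _ = multipletests(raw_p, method="holm")
```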

5. Results

5.1. Students

The student cohort was predominantly 21+ years (377; 65.2%), with 18–20 years comprising 201 (34.8%; Table 1). The gender split was 55.01% female (318) and 44.98% male (260). Reported fields of study clustered mainly in Computing/IT/AI (183; 31.7%), with smaller proportions in Engineering (89; 15.39%), Economics/Business (101; 17.47%), and Education (120; 20.76%); the remainder were Other (85; 14.70%). Academic standing spanned Year 1–5+, with the largest groups in Year 1 (139; 24.0%) and Year 2 (122; 21.1%), followed by Year 3 (115; 19.9%), Year 4 (104; 18.0%), and Year 5+ (98; 17.0%).
The study explored university students’ perceptions, usage patterns, and expectations regarding Generative AI (GenAI) in academic contexts. Thematic analysis of open-ended survey responses revealed diverse perspectives, with results categorised into key domains: emotional response (excitement and concern), preferred institutional response, actual use cases, and overall sentiment.

5.1.1. Emotional Response Toward GenAI

Among responses addressing emotional reactions to GenAI, 28.81% were classified as neutral or expressing no concerns, while 33.90% were too vague or unclassifiable. The remaining responses revealed both enthusiasm and apprehension. Positive themes included “efficiency and convenience” (6.78%) and “accessibility of knowledge”, both emphasising GenAI’s utility in accelerating tasks like writing, coding, and summarising. However, concerns were more pronounced: 12.71% highlighted overreliance and potential cognitive decline, 9.32% focused on academic integrity risks, and 5.08% feared misinformation. A smaller fraction (3.39%) worried about uniformity and loss of originality.
These results suggest that while some students appreciate GenAI’s academic support capabilities, a substantial segment fears erosion of critical thinking and ethical boundaries. Notably, fewer students emphasised the positive aspects of GenAI compared to those voicing concerns.

5.1.2. Institutional Response Preferences

When asked how universities should respond to GenAI use in learning, the dominant theme (39.32%) again fell under “Other/Unclassified”, indicating a lack of clarity or specificity in student expectations. However, a significant portion (23.93%) supported the integration of GenAI into learning, while 11.11% called for guidance and training on responsible use. A small minority (3.42%) advocated for openness and experimentation, and only 0.85% suggested regulation or control.
These findings reveal a student body generally open to GenAI, with a preference for structured support rather than restrictive policies. The minimal support for prohibition highlights the need for balanced governance that fosters responsible experimentation and literacy.

5.1.3. Patterns of GenAI Use

Analysis of usage patterns indicated that students primarily utilise GenAI for idea generation (125 responses) and summarising readings (91), which align with preparatory and cognitively active tasks. Coding assistance (83) and proofreading (58) were also common. However, 66 students admitted to using GenAI for drafting assignments, raising potential concerns about academic integrity if these texts are submitted without proper modification or acknowledgment.
This usage pattern highlights a tension between legitimate academic support and possible misconduct. While most uses fall into acceptable or grey areas, the prevalence of assignment drafting necessitates clearer institutional policies on ethical boundaries.

5.1.4. Sentiment Distribution and Gender Differences

A sentiment analysis of the qualitative data showed a dominance of neutral responses (n = 250), with only 55 negative and 20 positive reactions. The lack of strong sentiment may reflect uncertainty or limited awareness of GenAI’s implications. Gender differences were modest: females contributed more neutral and slightly more negative responses, while both genders offered similarly low levels of positive sentiment. This cautious emotional landscape indicates a need for awareness-building among students to support informed engagement with GenAI.

5.1.5. Synthesis and Educational Implications

Students overwhelmingly desire integration, training, and guidance, not bans or restrictions. The rarity of prohibition-oriented suggestions (only two respondents) signals an opportunity for universities to lead with constructive, transparent frameworks. Simultaneously, the large proportion of vague or neutral responses, especially regarding how institutions should act, suggests a lack of student preparedness or engagement with GenAI policy discourse. Together, these findings imply that student attitudes toward GenAI are potentially marked by ambivalence, curiosity, and caution, with a clear demand for institutional leadership. Educational strategies should therefore prioritise: (a) developing GenAI literacy and critical usage skills; (b) embedding ethical and creative uses of AI within curricula; (c) encouraging informed experimentation within academic integrity boundaries.

5.1.6. Structural Model Assessment

The structural model was evaluated using SmartPLS, incorporating key TAM-based and contextual constructs: perceived ease of use (EOU), perceived usefulness (PU), motivation (Mov), institutional support (Ins-Sup), academic integrity, trust-responsibility (T-R), and intention to use GenAI. Model performance was assessed using multiple criteria, including path coefficients, R2 values, mediation analysis, reliability and validity statistics, multicollinearity diagnostics, and model fit indices.
Measurement Model: Factor Loadings and Indicator Validity
For the construct Perceived Ease of Use (EOU), two indicators were used: Ease of Use1 and Ease of Use2, which exhibited factor loadings of 0.664 and 0.913, respectively (Table 2). While the first item was slightly below the recommended 0.70 threshold, it was retained due to acceptable AVE (0.638) and composite reliability (0.774), indicating adequate convergent validity. Motivation (Mov) was measured using two items (Mov1 and Mov2), which both showed exceptionally high factor loadings of 0.945 and 0.944, respectively (Table 2). These values reflect excellent indicator reliability and strong representation of the underlying latent construct. This was further supported by the construct’s AVE of 0.892 and composite reliability of 0.943. For perceived usefulness (PU), three items were included with loadings of 0.897 (PU1), 0.864 (PU2), and 0.796 (PU3) (Table 2). All values exceeded the minimum recommended threshold, indicating a strong reflective measurement model for this construct. The high internal consistency of PU was further confirmed by a composite reliability of 0.889 and AVE of 0.728.
Trust-responsibility (T-R) was assessed using three items (T-R1 = 0.649, T-R2 = 0.608, T-R3 = 0.859). Although two of the indicators fell slightly below the 0.70 threshold, the average variance extracted (AVE = 0.509) was still within acceptable limits, suggesting that the construct retains adequate convergent validity. Given the theoretical relevance of this construct and acceptable internal consistency (composite reliability = 0.753), the items were retained for further analysis. The construct institutional support (Ins-Sup) also included three items: ins-sup1 (0.794), ins-sup2 (0.801), and ins-sup3 (0.582). Two of the three indicators showed high loadings, while one fell slightly below 0.60. Despite this, the composite reliability was acceptable at 0.773, and the AVE was 0.537, suggesting sufficient convergent validity for a three-item construct in an exploratory study. Integrity was measured by two items with factor loadings of 0.946 (Integrity1) and 0.623 (Integrity2). The first item strongly represents the construct, while the second is marginally below the optimal threshold. Nevertheless, the construct’s AVE (0.641) and composite reliability (0.774) support its validity.
Finally, the construct intention to use was captured using three items. The indicators exhibited loadings of 0.870 (Intention1), 0.885 (Intention2), and 0.720 (Intention3), indicating a well-functioning measurement model. The internal consistency of this construct is also high (composite reliability = 0.867; AVE = 0.686), suggesting reliable and valid measurement. In summary, while a few items had loadings below the ideal threshold of 0.70, the overall measurement model demonstrates good psychometric properties in terms of convergent validity, internal consistency, and indicator reliability. The constructs are deemed robust enough for further structural modelling and hypothesis testing.
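Both reliability statistics follow mechanically from the standardised loadings in Table 2. As a transparency aid, the following minimal Python sketch (function names are ours, purely illustrative) recomputes AVE and composite reliability for EOU from its two loadings; small deviations from the tabled values reflect rounding of the inputs.

# Minimal sketch: AVE and composite reliability (CR) from standardised loadings
def ave(loadings):
    # AVE = mean of the squared standardised loadings
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # where each indicator's error variance is 1 - loading^2
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

eou = [0.664, 0.913]  # Ease of Use1, Ease of Use2 (Table 2)
print(round(ave(eou), 3))                    # ~0.638, the reported AVE
print(round(composite_reliability(eou), 3))  # 0.774, the reported CR

The same two functions reproduce the corresponding figures for the remaining constructs in Table 2.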
Cross-loadings (Table 3) indicate that each indicator loads highest on its intended construct, supporting discriminant validity. Most primary loadings exceed the 0.70 guideline (e.g., PU1–PU3 = 0.796–0.897; Mov1–Mov2 = 0.944–0.945; Intention1–Intention3 = 0.720–0.885; EOU2 = 0.913; Integrity1 = 0.946), with lower but acceptable primary loadings for the remaining items (e.g., T-R1–T-R2 = 0.608–0.649; ins-sup3 = 0.582; Integrity2 = 0.623). Cross-loadings on non-target constructs are consistently lower, with typical separations ≥ 0.20 (e.g., EOU2: 0.913 on EOU vs. 0.659 on Mov; Mov1–Mov2: 0.944–0.945 on Motivation vs. 0.661–0.722 on PU; PU1–PU3: 0.796–0.897 on PU vs. 0.608–0.651 on Mov). A small number of conceptually adjacent constructs (Motivation ↔ PU; EOU ↔ Motivation; T-R ↔ Integrity) show moderate secondary associations, which is theoretically consistent with the model (ease can facilitate motivation; motivation relates to perceived usefulness; responsible use relates to integrity). Overall, the cross-loading pattern, together with the Fornell–Larcker and HTMT results reported below, supports convergent validity of the intended constructs and adequate discriminant validity across the measurement model.
Construct Reliability and Validity
Internal consistency reliability was assessed using Cronbach’s alpha, composite reliability, and average variance extracted (AVE). Most constructs met the recommended thresholds (Cronbach’s alpha > 0.70, AVE > 0.50), with some exceptions. For instance, EOU had a low alpha (0.464) but acceptable composite reliability (0.774) and AVE (0.638), suggesting minimal risk despite limited indicators (Table 4). Constructs such as Mov (α = 0.879, AVE = 0.892), PU (α = 0.813, AVE = 0.728), and intention (α = 0.772, AVE = 0.686) all demonstrated strong reliability and convergent validity (Table 4). Discriminant validity was verified using inter-construct correlations and the Fornell–Larcker criterion (Table 5). All square roots of AVE were higher than inter-construct correlations, supporting discriminant validity. High correlations between PU, Mov, and intention were expected due to the theoretical links among them, but did not exceed AVE thresholds (Table 5).
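Formally, the Fornell–Larcker criterion requires the square root of each construct’s AVE to exceed its correlation with every other construct; in LaTeX notation:

\sqrt{\mathrm{AVE}_j} \;>\; \max_{k \neq j} \lvert r_{jk} \rvert \quad \text{for every construct } j

where r_{jk} is the correlation between latent constructs j and k. The HTMT ratios discussed next complement this construct-level check.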
We note that the correlation between PU and Mov exceeds the 0.7 threshold (Table 5). Because PU and Mov are theoretically adjacent in TAM/SDT models, we examined inner collinearity among the predictors of each endogenous construct. All inner VIFs were below recommended thresholds (3.3/5.0): Mov → Intention = 2.471, PU → Intention = 2.298, EOU → Intention = 1.764; EOU → PU = 1.189, Institutional Support → PU = 1.189; Integrity → Motivation = 1.074, Trust/Responsibility → Motivation = 1.074. Item-level VIFs were similarly low (maximum 2.589). These diagnostics indicate no harmful multicollinearity. Discriminant validity is supported by the updated Fornell–Larcker matrix (√AVE on the diagonal exceeds inter-construct correlations) and by HTMT, where the largest value (PU–Mov = 0.869) remains below the 0.90 guideline.
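These inner VIFs can be reproduced outside SmartPLS by regressing each predictor’s latent variable scores on its co-predictors. The sketch below illustrates the procedure with statsmodels; the file and column names are hypothetical placeholders for exported latent scores, not part of our instrument.

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical: latent variable scores exported from SmartPLS
scores = pd.read_csv("latent_scores.csv")  # columns: EOU, PU, Mov, ...

# Inner VIFs for the predictors of Intention (EOU, PU, Mov)
X = sm.add_constant(scores[["EOU", "PU", "Mov"]])
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, round(variance_inflation_factor(X.values, i), 3))

Values below 3.3 (conservative) or 5.0 (liberal) indicate that collinearity does not distort the structural estimates.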
Coefficient of Determination (R2) and Predictive Power
The R2 values represent the proportion of variance in the dependent variables explained by the predictors. The R2 for motivation (Mov) was 0.165, for perceived usefulness (PU) it was 0.357, and for intention to use it was 0.348 (Table 6). These values suggest moderate explanatory power for PU and intention, and relatively lower explanatory power for Mov. The adjusted R2 values were closely aligned with the unadjusted figures, indicating model stability.
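This stability is expected: with n = 578 respondents and few predictors per endogenous construct, the adjustment shrinks R2 only marginally, as the standard formula (in LaTeX notation) makes explicit:

\bar{R}^{2} = 1 - (1 - R^{2}) \frac{n - 1}{n - k - 1}

where n is the sample size and k the number of predictors of the endogenous construct.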
Effect Sizes (f2) and Contribution of Predictors
Effect size analysis revealed that EOU had a large effect on PU (f2 = 0.405), consistent with its theoretical importance in TAM (Table 7). Moderate effect sizes were observed for T-R → Mov (f2 = 0.108) and Mov → Intention (f2 = 0.069), while PU → Intention had a smaller but meaningful effect (f2 = 0.034; Table 7). Other paths, such as Ins-Sup → PU (f2 = 0.011) and Integrity → Mov (f2 = 0.041; Table 7), had small effect sizes, and the direct effect of EOU on Intention (f2 = 0.008) was minimal, emphasising the mediated role of PU.
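These f2 values follow Cohen’s formulation, which expresses a predictor’s contribution as the change in R2 when that predictor is omitted from the model:

f^{2} = \frac{R^{2}_{\mathrm{included}} - R^{2}_{\mathrm{excluded}}}{1 - R^{2}_{\mathrm{included}}}

For example, with R2 = 0.357 for PU and f2 = 0.405 for EOU → PU, omitting EOU would reduce PU’s explained variance to roughly 0.357 − 0.405 × (1 − 0.357) ≈ 0.10, which is why this path dominates the prediction of usefulness.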
The reliability analysis indicates that most constructs achieved satisfactory internal consistency and measurement stability, supporting confidence in the reported findings. Cronbach’s α and composite reliability values exceeded or approached the accepted thresholds (α ≥ 0.70; CR ≥ 0.70), and all AVE scores were above 0.50, confirming adequate convergent validity. Although a few constructs, specifically ease of use (α = 0.46) and integrity (α = 0.50), showed lower alpha values, their composite reliabilities remained acceptable, suggesting that indicator homogeneity was sufficient for exploratory research. Moreover, discriminant validity checks (Fornell–Larcker and HTMT) and low inner-VIF statistics confirmed that multicollinearity was not a concern. Together, these results demonstrate that the measurement model is statistically reliable and that the observed relationships among constructs are unlikely to be artifacts of measurement error. Nevertheless, the moderate R2 values (0.16–0.35) imply that additional unmeasured factors may influence GenAI adoption and should be explored in future studies to further strengthen reliability and generalisability.
Path Coefficients and Hypothesis Testing
The direct path analysis (Figure 2 and Figure 3) revealed several statistically significant relationships. EOU → PU was highly significant (β = 0.555, t = 14.841, p < 0.001), confirming the foundational TAM hypothesis that perceived ease of use positively affects perceived usefulness (Figure 3). Similarly, PU → Intention (β = 0.227, t = 3.873, p < 0.001) and Mov → Intention (β = 0.334, t = 5.342, p < 0.001) were both strong predictors of behavioural intention, with motivation emerging as the strongest direct influence. The path from T-R → Mov was also significant (β = 0.314, t = 8.055, p < 0.001), indicating that ethical responsibility and trust contribute meaningfully to internal motivational states. The influence of Institutional Support → PU (β = 0.094, t = 2.384, p = 0.017) was statistically significant but relatively weaker in magnitude. Furthermore, the model confirmed a modest yet significant effect from EOU → Intention (β = 0.097, t = 2.276, p = 0.023), suggesting some direct influence beyond the mediated PU pathway. Lastly, the path from Integrity → Mov was also significant (β = 0.193, t = 4.223, p < 0.001), affirming the role of personal ethical alignment in shaping students’ motivation to use GenAI tools.
Mediation Analysis
Mediation effects were explored to test the indirect mechanisms linking antecedents to behavioural intention. The indirect path EOU → PU → Intention was significant (β = 0.126, t = 3.860, p < 0.001), highlighting that ease of use indirectly influences intention through perceived usefulness. Similarly, T-R → Mov → Intention produced a strong and significant indirect effect (β = 0.104, t = 4.572, p < 0.001), affirming that motivation serves as a key mediator linking trust to intention. Another significant mediation was observed in the path Integrity → Mov → Intention (β = 0.064, t = 3.001, p = 0.003), suggesting a motivational channel through which integrity influences behavioural intention. However, the mediation from Ins-Sup → PU → Intention approached but did not achieve statistical significance (β = 0.020, t = 1.755, p = 0.079).
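The significance of these indirect effects rests on bootstrapping the product of the component paths (a × b) rather than assuming its normality. The following Python sketch illustrates the percentile-bootstrap logic on synthetic standardised scores; it is purely illustrative, as SmartPLS performs the equivalent resampling on the full model.

import numpy as np

rng = np.random.default_rng(42)
n = 578  # student sample size

# Synthetic standardised scores standing in for EOU, PU, and Intention
x = rng.normal(size=n)                      # antecedent (e.g., EOU)
m = 0.5 * x + rng.normal(size=n)            # mediator (e.g., PU)
y = 0.3 * m + 0.1 * x + rng.normal(size=n)  # outcome (e.g., Intention)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                   # a-path: mediator on antecedent
    X = np.column_stack([m, x, np.ones(len(x))])
    b = np.linalg.lstsq(X, y, rcond=None)[0][0]  # b-path, controlling for x
    return a * b

# Percentile bootstrap of the indirect effect a*b
boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)  # resample cases with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% CI for a*b: [{lo:.3f}, {hi:.3f}]")  # significant if the CI excludes 0

A mediation path is judged significant when the percentile confidence interval excludes zero, mirroring the bootstrap output reported above.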

5.2. Lecturer’s Perspectives

Data were collected from 309 lecturers. Those with more than 10 years of teaching experience formed the majority (~75%, n = 232), lecturers with 6–10 years of experience represented 11.33%, and the remainder had 0–5 years of experience.
A cross-tabulation was performed to explore the relationship between lecturers’ teaching experience and their self-reported familiarity with Generative AI tools such as ChatGPT. The results, visualised in Figure 4, show notable differences across experience groups. Lecturers with more than 10 years of experience displayed the highest levels of GenAI familiarity, with 50% identifying at level 4 and 25.86% at level 5. This suggests that senior academics may have more exposure or institutional encouragement to experiment with emerging technologies. In contrast, lecturers with 0–5 years of experience showed greater polarisation: 30.95% reported minimal familiarity (level 1) while a similar proportion reported moderate to high familiarity. Interestingly, lecturers with 6–10 years of experience had the highest proportion in the “familiar” category (level 4, at 54.29%) but a noticeable drop in the lowest and middle categories. Overall, the results suggest that teaching experience is not linearly associated with GenAI familiarity. Rather, both early-career and veteran lecturers demonstrate bimodal familiarity patterns, while mid-career lecturers appear to consolidate their confidence primarily at higher levels of familiarity. These trends underscore the importance of targeted professional development, particularly for newer faculty who may require structured exposure to GenAI tools to build confidence and responsible integration in teaching practices.
To further explore lecturers’ experience with GenAI, we examined three key aspects: their beliefs about student usage of GenAI in assignments, their experiences with submissions suspected to be largely GenAI-generated, and their own use of GenAI in generating teaching materials. The analysis, stratified by years of teaching experience, reveals notable differences in perception and practice. Lecturers with more than 10 years of experience were the most likely to believe their students used GenAI in assignments. A clear majority in this group responded “Yes”, suggesting heightened awareness or detection ability stemming from longer academic exposure. In contrast, lecturers in the 0–5 years category showed a more even distribution between “Yes”, “No”, and “Not Sure”, indicating uncertainty or less confidence in detecting AI-assisted work.
Again, senior lecturers (Figure 5) were more likely to report having encountered suspicious submissions. This may stem from their experience in identifying non-authentic writing patterns or deviations from student norms. Conversely, early-career lecturers were more likely to respond “No”, which may reflect either a lack of detection experience or lower expectations regarding student AI usage. When asked about their own use of GenAI tools for teaching or preparation, the mid-career group (6–10 years) showed relatively higher usage rates, suggesting this group may be more proactive in exploring instructional innovations. In contrast, early-career lecturers had the highest proportion of non-use, possibly due to a lack of institutional support or uncertainty about acceptable AI integration. Interestingly, many senior lecturers also reported high levels of GenAI use, likely owing to more autonomy and access to institutional resources.

5.2.1. Main Concerns About GenAI in Student Assignments

The main concerns raised were: (a) Overreliance and laziness: students may rely entirely on GenAI without making the effort to understand, think critically, or verify content; GenAI use can reduce motivation and cognitive effort and promote laziness. (b) Loss of learning and academic skills: the risk that students learn nothing, skip learning objectives, and miss out on acquiring essential knowledge, skills, and values; some lecturers believe GenAI has already decreased the development of research, analytical, and verification skills. (c) Academic integrity issues: increased potential for plagiarism and academic dishonesty, with GenAI use seen as undermining academic integrity and degrading original thinking. (d) Degradation of creativity and critical thinking: concerns that AI may hinder students’ creativity, reduce their ability to think “out of the box”, and harm their capacity for deep, independent thinking. (e) Ethical and unregulated use: use without ethical considerations or institutional regulation, and concern over the lack of guidelines or training on responsible AI usage. Some lecturers feel outdated or challenged by students’ rapid adoption of AI tools. A few acknowledged that GenAI can be helpful for learning and for simplifying assignment development, but stressed that it must be used responsibly and not as a substitute for understanding, emphasising the need to train students on both the advantages and drawbacks of GenAI use.

5.2.2. Institutions’ Support in Responding to GenAI

The main types of support suggested by lecturers to effectively respond to the rise of Generative AI in academic settings centre around six key areas. Training and workshops were the most frequently cited form of support. Participants emphasised the need for organised workshops, seminars, and hands-on training sessions for both faculty and students. These should cover practical applications, best practices, and the ethical integration of GenAI into teaching and learning processes. A second major area involves the establishment of clear guidelines and institutional policies. Respondents recommended that universities develop standardised regulations that define acceptable levels of GenAI usage and specify appropriate use-cases, particularly concerning student assignments.
Another widely supported suggestion was providing access to AI detection and plagiarism tools, such as Turnitin. This would assist faculty in identifying potential misuse or over-reliance on GenAI, thereby supporting academic integrity. Lecturers also highlighted the importance of facilitating access to AI tools and resources. Institutions were encouraged to offer subscriptions to professional GenAI platforms and invest in virtual labs, along with financial support to enable educators to explore and incorporate AI tools meaningfully in their pedagogy. A more proactive recommendation was to promote innovation by encouraging faculty to experiment with and develop custom AI applications tailored to educational needs. Sharing successful examples of GenAI integration was seen as a way to increase faculty confidence and stimulate wider adoption.
Finally, a smaller subset of participants proposed restrictions on GenAI usage in specific environments, such as computer labs or exam settings, to preserve the integrity of independent learning and assessment outcomes.

5.2.3. Motivation and Autonomy

Regarding the proposition that students may use GenAI to enhance, not bypass, learning, most lecturers responded positively: nearly 46% agreed (rating 4), and about 12% strongly agreed (rating 5), suggesting a broad belief that GenAI can support genuine learning when used responsibly. Only 6% disagreed (ratings 1–2), while 36% remained neutral. Meanwhile, lecturers showed concern regarding the assumption that GenAI may demotivate deep effort in assessments: over 50% agreed (rating 4), and 12% strongly agreed (rating 5), indicating a strong perceived risk that GenAI might reduce students’ academic effort; only about 8% disagreed. Lecturers therefore appear to recognise simultaneously the constructive and disruptive potentials of GenAI: while many believe that students could use these tools to support meaningful learning, an equally strong contingent worries that GenAI may undermine academic effort and intrinsic motivation, especially in assessments. This duality highlights the need for careful policy framing and pedagogical support: enabling learning-enhancing uses while discouraging shortcuts or dependency. Lecturers with 6–10 years of experience expressed the highest agreement that GenAI can enhance learning (mean ≈ 4.00) and the strongest concern that it may demotivate student effort (mean ≈ 4.03) (Figure 6). Those with more than 10 years reported slightly more neutral views, suggesting a more tempered or nuanced stance likely informed by broader pedagogical experience. Early-career lecturers (0–5 years) showed the lowest belief in GenAI’s constructive role (mean ≈ 3.36) and the least concern about demotivation (mean ≈ 3.61) (Figure 6), possibly reflecting lower exposure or confidence in evaluating GenAI’s academic implications.
Lecturers who are slightly familiar with GenAI had the strongest dual perceptions: very high belief in GenAI’s potential for learning (mean = 4.00) and high concern about its potential to reduce student effort (mean = 4.00) (Figure 7). Those not familiar with GenAI showed lower optimism (mean = 3.37) and more concern (mean = 3.80) (Figure 7), suggesting that uncertainty might amplify suspicion. Interestingly, those who are moderately to very familiar demonstrated more balanced responses, suggesting that deeper familiarity with GenAI tempers both overly positive and overly negative assumptions.
The correlation analysis reinforces a key insight: familiarity and experience shape more balanced views on GenAI’s educational role. Professional development programs should aim to elevate both GenAI literacy and pedagogical reflection, especially among early-career and unfamiliar lecturers. This dual focus can help institutions foster evidence-based adoption without overlooking risks to academic motivation and integrity.

5.2.4. Influence of Teaching Experience and Familiarity with GenAI on Perceptions of GenAI’s Demotivational Impact

The analysis revealed a statistically significant effect of teaching experience on lecturers’ perceptions, F (2, 306) = 10.07, p < 0.001. This result indicates that at least one teaching experience group differed significantly in its views regarding GenAI’s demotivational influence. To identify the specific group differences, a Tukey HSD post-hoc test was performed. The results showed that lecturers with 6–10 years of experience reported significantly stronger agreement with the demotivation statement compared to both those with 0–5 years (mean difference = 0.66, p = 0.004) and those with more than 10 years of experience (mean difference = 0.73, p < 0.001). No significant difference was found between the 0–5 years and more than 10 years groups (p = 0.90).
These findings suggest that mid-career lecturers (6–10 years) are most concerned about GenAI’s potential to erode student motivation in assessments. This group may possess sufficient pedagogical experience to detect subtle shifts in student engagement, while still being actively involved in assessment design. In contrast, early-career lecturers may lack enough exposure to judge student adaptation to GenAI, and senior faculty may rely on longer-term trends or hold more nuanced views shaped by accumulated instructional resilience. These results empirically support the inclusion of teaching experience as a moderating contextual variable in conceptual models of GenAI integration in higher education.
Finally, the one-way ANOVA results, regarding lecturers’ familiarity with GenAI as a predictor of their belief that GenAI may demotivate deep student effort, revealed no statistically significant differences among the familiarity groups, F (4, 304) = 1.87, p = 0.116. This suggests that familiarity alone does not substantially shape perceptions of GenAI’s motivational impact.
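The same procedure, a one-way ANOVA followed by a Tukey HSD post hoc test, underpins the group comparisons in this and the following subsections. A minimal Python sketch, assuming the lecturer responses sit in a long-format DataFrame with hypothetical column names, is:

import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("lecturer_survey.csv")  # hypothetical export of lecturer responses

# One-way ANOVA: demotivation ratings across teaching-experience groups
groups = [g["demotivation"].to_numpy()
          for _, g in df.groupby("experience_group")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")

# Tukey HSD post hoc test for pairwise group differences
print(pairwise_tukeyhsd(df["demotivation"], df["experience_group"]))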

5.2.5. Perceived Threat of AI to Traditional Assessment Credibility

The analysis of lecturers’ responses to the item “AI threatens traditional assessment credibility” (T-R1) reveals a strong perception of concern: (a) a majority of lecturers agreed (56%) or strongly agreed (24%), indicating widespread acknowledgment that AI poses a serious challenge to the integrity of traditional assessment methods. (b) Only 2.6% disagreed, while 17.5% remained neutral, suggesting that skepticism is minimal, and uncertainty exists mainly among a minority. This response pattern underscores the urgency for rethinking assessment practices in higher education in light of GenAI tools. It also validates the inclusion of trust-related constructs in conceptual models exploring GenAI adoption in academic contexts.
A one-way ANOVA was conducted to determine whether this perception differed significantly based on lecturers’ familiarity with GenAI tools like ChatGPT. The results showed a statistically significant effect, F (4, 304) = 9.13, p < 0.001. This indicates that lecturers’ concern about AI’s impact on assessment credibility varies significantly by how familiar they are with such technologies. To explore these differences further, a Tukey HSD post hoc test was performed. The analysis revealed a significant difference between lecturers who were not familiar (Group 1) and those who were moderately familiar with GenAI tools (Group 3), with the latter reporting significantly higher concern (mean difference = 0.75, p < 0.001). No other pairwise comparisons reached statistical significance. These results suggest that moderate familiarity may represent a critical threshold of awareness, where educators begin to understand the implications of GenAI use in academic contexts but may not yet have fully adapted their assessment strategies. In contrast, those with little exposure may underestimate the risks, while highly familiar lecturers may have developed mitigation strategies or more nuanced views. This highlights the importance of professional development initiatives that foster informed, critical engagement with AI technologies in education. Finally, the teaching-experience groups showed no statistically significant differences, F(2, 306) = 2.86, p = 0.059, suggesting broadly shared concerns across experience levels.

5.2.6. Lecturer and Institutional Actions to Mitigate AI Misuse in Assessments

This section evaluates how lecturers and institutions are responding to the risks of GenAI misuse in academic settings by examining two key questions:
“I have adapted my assessments to reduce AI misuse” (lecturer action);
“My institution has provided guidance on handling GenAI” (institutional support).
A large proportion of lecturers reported taking action to adapt their assessments: 47% agreed and 10% strongly agreed, suggesting widespread adaptation, while only 7.8% disagreed and 35% were neutral, indicating some residual uncertainty. Regarding institutional support, responses were more evenly spread: only 25% agreed or strongly agreed (7% strongly) that their institution had provided guidance, while nearly 46% were neutral or disagreed, revealing a perceived gap in institutional policy and support.
One-Way ANOVA: Teaching Experience
For lecturer action, there were no significant differences across teaching experience groups, F (2, 306) = 1.01, p = 0.366, suggesting lecturers at all career stages are equally likely to adapt assessments in response to AI.
For institutional support, there was a significant difference, F(2, 306) = 7.15, p < 0.001. This implies that perceptions of institutional support differ by teaching experience, possibly reflecting longer exposure to institutional policy processes or greater involvement in strategic decisions among senior staff. The post hoc comparison for institutional support across teaching experience levels revealed that: (a) lecturers with more than 10 years of experience reported significantly higher agreement with the availability of institutional guidance than those with 0–5 years (mean diff = 0.47, p = 0.0008) and 6–10 years (mean diff = 0.83, p < 0.001). (b) This suggests that more experienced faculty may be more engaged with or aware of institutional policy development concerning GenAI or may have access to strategic-level communications not typically shared with junior staff.
One-Way ANOVA: Familiarity with GenAI
For lecturer action, significant differences emerged by familiarity level, F (4, 304) = 9.07, p < 0.001. This suggests that lecturers who are more familiar with GenAI are more likely to adapt their assessments to mitigate misuse. The Tukey HSD test showed statistically significant differences between lecturers with low familiarity (Group 1: Not familiar) and those with higher familiarity levels:
Group 1 vs. Group 3 (moderately familiar): mean diff = −1.52, p < 0.001;
Group 1 vs. Group 4 (familiar): mean diff = −0.83, p < 0.001;
Group 1 vs. Group 5 (very familiar): mean diff = −1.10, p < 0.001;
Group 2 (slightly familiar) also significantly differed from Group 3 (moderately familiar): mean diff = −1.98, p = 0.0001.
These results indicate that lecturers who are more familiar with GenAI tools are significantly more likely to adapt their assessment methods to prevent misuse. In contrast, those with little or no familiarity demonstrate lower levels of action, potentially due to a lack of awareness or perceived urgency. Similarly, institutional support differed significantly across familiarity levels, F(4, 304) = 12.70, p < 0.001. Lecturers more engaged with GenAI likely perceive or demand greater institutional guidance, highlighting the importance of professional development.
The findings illustrate a strong individual effort among lecturers to address GenAI misuse, but institutional responses appear less consistent and are perceived differently depending on teaching experience and GenAI familiarity. These results highlight the need for targeted institutional policies and professional training programs to support responsible AI integration across all academic levels.

5.3. Students vs. Lecturers on Shared TAM/SDT Constructs (Nonparametric Analysis)

For constructs administered identically across both groups, we report medians with interquartile ranges (IQRs), bootstrap median differences with 95% CIs, Mann–Whitney U-tests with Holm-adjusted p-values, and rank-biserial effect sizes with 95% CIs.
PU. Students (n = 578) median = 4.33 (IQR 3.67–5.00) vs. Lecturers (n = 309) median = 4.00 (IQR 3.67–4.33). Median difference = 0.33 (bootstrap 95% CI 0.00 to 0.33), indicating students are higher on this construct. Mann–Whitney U = 96,247.000, p < 0.001 (Holm-adjusted p < 0.001). Rank-biserial r = −0.25 (95% CI −0.32 to −0.17).
EOU. Students (n = 578) median = 4.00 (IQR 3.50–4.50) vs. Lecturers (n = 309) median = 4.00 (IQR 3.67–4.00). Median difference = 0.00 (bootstrap 95% CI 0.00 to 0.00), indicating students are similar on this construct. Mann–Whitney U = 86,996.500, p = 0.002 (Holm-adjusted p = 0.004). Rank-biserial r = −0.13 (95% CI −0.20 to −0.05).
Intention. Students (n = 578) median = 4.33 (IQR 3.67–5.00) vs. Lecturers (n = 309) median = 4.00 (IQR 3.67–4.67). Median difference = 0.33 (bootstrap 95% CI 0.00 to 0.33), indicating students are higher on this construct. Mann–Whitney U = 86,401.500, p = 0.004 (Holm-adjusted p = 0.004). Rank-biserial r = −0.12 (95% CI −0.20 to −0.04).
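For reproducibility, these comparisons combine scipy’s Mann–Whitney test with Holm adjustment and a rank-biserial effect size, commonly computed as r = 1 − 2U/(n1·n2). The sketch below uses random placeholder arrays where the real construct scores would go; note that sign conventions for r differ across packages, so magnitudes rather than signs should be compared.

import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Placeholder scores standing in for the real construct means (1-5 scale)
data = {name: (rng.uniform(1, 5, 578), rng.uniform(1, 5, 309))
        for name in ["PU", "EOU", "Intention"]}

rows, raw_p = [], []
for name, (students, lecturers) in data.items():
    u, p = mannwhitneyu(students, lecturers, alternative="two-sided")
    r = 1 - 2 * u / (len(students) * len(lecturers))  # rank-biserial r
    rows.append((name, u, r))
    raw_p.append(p)

# Holm adjustment across the three construct-level comparisons
adj_p = multipletests(raw_p, method="holm")[1]
for (name, u, r), ap in zip(rows, adj_p):
    print(f"{name}: U = {u:.1f}, Holm-adjusted p = {ap:.4f}, r = {r:.2f}")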

6. Discussion

This study aimed to understand the adoption of Generative AI (GenAI) in higher education by integrating constructs from the Technology Acceptance Model (TAM) with motivational, ethical, and institutional dimensions. The findings provide a multi-perspective analysis, capturing both student and lecturer views across universities in Southern Saudi Arabia. The results help understand how various factors shape students’ intention to use GenAI tools, and how lecturers perceive and respond to their use in academic contexts.
The reliability diagnostics lend credibility to the observed structural relationships; most constructs exhibited strong internal consistency, supporting the robustness of the results reported in Section 5.

6.1. Interpreting Cross-Group Differences (Students vs. Lecturers)

The direct comparisons between students and lecturers reveal a coherent pattern aligned with the extended TAM–SDT account of GenAI adoption. First, students reported higher perceived usefulness (PU) than lecturers (median +0.33 on a five-point scale; bootstrap 95% CI 0.00 to +0.33; Mann–Whitney, Holm-adjusted p < 0.001). This difference is practically small-to-moderate in magnitude (rank-biserial |r| ≈ 0.25) yet theoretically meaningful: students’ day-to-day study practices (summarising, ideation, coding assistance) likely translate more immediately into perceived performance gains, whereas lecturers’ usefulness judgments are filtered through concerns about assessment integrity, learning outcomes, and longer-term skills formation. In TAM terms, the PU → Intention pathway appears more strongly activated for students, consistent with their situated, task-support orientation.
Second, behavioural intention to use GenAI was also higher among students (median +0.33; bootstrap 95% CI: 0.00 to +0.33; Holm-adjusted p ≈ 0.004; small effect, |r| ≈ 0.12). Together with the PU result, this indicates that students are not only perceiving value but are also willing to convert that value perception into planned use. From the SDT perspective, this gap plausibly reflects differences in motivational regulation: students’ intentions are more closely tied to immediate task efficiency and perceived learning benefits, whereas lecturers’ intentions are moderated by responsibility and stewardship considerations (e.g., academic integrity, equitable assessment, and classroom norms).
By contrast, perceived ease of use (EOU) was statistically detectable but substantively negligible (median difference ≈ 0; bootstrap 95% CI: 0.00 to 0.00; Holm-adjusted p ≈ 0.004; very small effect, |r| ≈ 0.13). In practice, both groups find mainstream GenAI tools sufficiently usable. This is an important boundary condition: when EOU converges, differences in adoption are less about usability frictions and more about value, policy clarity, and ethics. In other words, raising adoption quality and alignment will not come from usability fixes but from institutional levers that shape value perceptions and ethical confidence.
These results have three actionable implications. (1) Institutional support → PU. Because students already see tangible value, clear, proactive institutional support (policy guidance, exemplars of appropriate use, assessment redesign) can further consolidate perceived usefulness into legitimate and learning-aligned practices. For lecturers, support that ties GenAI activities to construct-relevant evidence of learning (e.g., process artifacts, orals, reflective justifications) can mitigate perceived misalignment between tool use and learning outcomes, thereby elevating PU. (2) Integrity/responsibility → Motivation/intention. Explicit, discipline-specific integrity frameworks (what is permitted, how to attribute AI assistance, how outputs are validated) are likely to improve ethical comfort, increasing autonomous motivation for both groups and narrowing the intention gap. (3) Assessment adaptation as a bridging mechanism. Where lecturers see limited usefulness, it often traces back to the assessment regime: tasks designed to surface individual reasoning (e.g., staged drafts, viva components, data-to-insight tasks, replication/critique) allow GenAI to be used transparently while preserving evidential value. This reduces perceived risk and increases the legitimacy, and hence, the usefulness of GenAI-supported learning.
In sum, the EOU parity indicates that usability is not the bottleneck; rather, the student > lecturer gaps on PU and Intention highlight a policy-and-pedagogy problem: students experience immediate value and intend to use, while lecturers require clearer institutional scaffolding to reconcile value with integrity and assessment quality. Interventions that (a) make support visible and practical, (b) codify responsible use at the task level, and (c) re-align assessments to capture genuine learning processes are the most promising routes to convergence.

6.2. Theoretical Contributions

This research confirms and extends the classical TAM framework. Perceived usefulness (PU) emerged as a strong predictor of students’ intention to use GenAI tools, corroborating findings from earlier technology adoption studies [31,45]. Ease of Use (EOU) retained its traditional role as both a direct predictor and an antecedent of PU, aligning with studies that show usability enhances perceived value [32,34]. However, its relatively modest direct effect on intention supports emerging literature suggesting that Gen Z students, already adept with digital interfaces, may consider ease of use a baseline expectation rather than a decisive factor [7].
The inclusion of motivational and ethical constructs provided deeper explanatory power. Motivation (Mov), encompassing both intrinsic interest and perceived enjoyment, significantly influenced intention, highlighting the relevance of Self-Determination Theory [37]. This suggests that students who find GenAI tools enjoyable or autonomy-supportive are more inclined to use them. Furthermore, trust and responsibility (T-R) significantly predicted motivation, confirming that ethical comfort and confidence in the AI tool are central to shaping students’ willingness to engage. Academic integrity concerns, particularly fear of misuse or plagiarism, also contributed to motivational outcomes, supporting the notion that ethical alignment is not just a peripheral issue but a determinant of adoption behaviour.
Institutional support (Ins-Sup) played a relatively smaller yet significant role in influencing PU. Although the effect size was modest, it underscores the importance of a supportive academic infrastructure. When universities provide clear guidelines and access to tools, students perceive GenAI as more legitimate and beneficial [42]. Notably, this construct’s limited explanatory power in the model may reflect gaps in institutional readiness or student awareness of available support structures.

6.3. Integration with Lecturer Perspectives

The lecturer data provided a critical contextual layer. While many faculty members acknowledged GenAI’s potential to enhance learning, their concerns centred around over-reliance, loss of critical thinking, and threats to assessment credibility. This duality, of promise and peril, mirrors student ambivalence. Interestingly, faculty responses revealed that GenAI familiarity was not uniformly associated with optimism. In fact, moderately familiar lecturers were often the most concerned, perhaps due to awareness of the risks without yet having fully developed mitigation strategies. Moreover, faculty with 6–10 years of experience expressed the strongest concern about GenAI’s demotivational potential, suggesting that this mid-career group is particularly sensitive to changes in student engagement and effort. However, this concern was not matched by consistent institutional support: less than one-third of lecturers agreed that their institutions had provided adequate guidance. This institutional gap may exacerbate uncertainty and leave lecturers without the tools needed to guide students responsibly. Encouragingly, many lecturers reported adapting their assessments, reflecting individual commitment to safeguarding academic integrity.

6.4. Practical Implications

Several practical implications emerge from this study. First, universities should prioritise the development of clear, accessible policies on GenAI usage. As students and staff navigate ethical uncertainties, transparent guidelines can mitigate fears of misconduct and promote responsible exploration. These policies should include definitions of acceptable and unacceptable use, assignment-specific instructions, and integration of GenAI literacy into curricula. Second, professional development programs for faculty are essential. The findings show that familiarity with GenAI is associated with both awareness and action. Therefore, training should go beyond technical skills to include pedagogical strategies for integrating GenAI, designing AI-resilient assessments, and facilitating student discussions on ethics and creativity.
Third, GenAI adoption strategies should consider student motivation and trust. Institutions can enhance both by curating reliable AI tools, embedding them in coursework, and highlighting their benefits for learning rather than shortcuts. Promoting trust through transparency (explaining how AI tools work, where their limits lie, and how outputs should be validated) can reduce fear and increase student engagement.

6.5. Contributions to Middle Eastern Higher Education

The study contributes context-specific evidence from the Middle East, a region undergoing rapid AI-driven educational reform under national initiatives like Saudi Vision 2030. The high student demand for guidance and the perception gaps among lecturers reveal the urgency for regionally tailored policies and capacity-building. While international frameworks like TAM and UTAUT offer a foundation, local adaptation is crucial given the cultural and regulatory differences that influence both adoption and institutional response.

6.6. Limitations and Future Research

This study has several limitations. Sampling and generalisability: Lecturers were recruited using purposive and snowball approaches, and students via convenience sampling across three universities in southern Saudi Arabia (Najran University, Jazan University, University of Bisha) during 1 May–30 June 2025. As non-probability designs, these strategies may introduce selection and non-response bias and limit the generalisability of findings beyond similar institutional and cultural contexts. Replication with probability-based or stratified sampling frames across additional regions and sectors is warranted.
Design and measurement: The cross-sectional, self-report design precludes causal inference and may be susceptible to common-method variance and social desirability. Although we used validated multi-item constructs and applied multiple-comparison control and distribution-free tests in between-group analyses, future work should consider procedural and statistical remedies (e.g., temporal separation of measures, multi-source data, marker variables, latent CMV controls). In addition, only a subset of constructs (e.g., PU, EOU, Intention) was administered identically to both groups, which constrained direct comparisons. Future studies should ensure full instrument alignment across stakeholders and assess measurement invariance (configural, metric, scalar) before comparing latent means.
Context and temporal dynamics: The sample represents public universities in one national region; institutional policy maturity, disciplinary mixes, and technology access may differ elsewhere. Moreover, GenAI tools and governance frameworks are rapidly evolving; perceptions captured in May–June 2025 may shift as policies, assessments, and tool capabilities change. Longitudinal or panel designs, coupled with behavioural usage traces (e.g., learning-management-system logs or assignment artifacts), would help track how institutional support, integrity practices, and assessment adaptations shape sustained, responsible use.
Taken together, these limitations suggest caution in extrapolating beyond comparable settings; they also highlight clear avenues for strengthening external validity (probability sampling across sites), internal validity (longitudinal/multi-method designs), and construct comparability (invariance testing and fully harmonised instruments).

7. Conclusions

This study offers an examination of the factors influencing the adoption of Generative AI (GenAI) in higher education, integrating perspectives from both students and lecturers in the Saudi Arabian context. Grounded in an extended Technology Acceptance Model (TAM) and enriched by motivational, ethical, and institutional constructs, the study demonstrates that GenAI adoption is shaped by more than just technological utility; it is influenced by trust, academic values, personal motivation, and organisational context. The findings confirm that Perceived Usefulness remains the most robust predictor of students’ intention to use GenAI, supported indirectly by Ease of Use. Motivation, particularly driven by enjoyment and confidence, also plays a significant role, reflecting the relevance of Self-Determination Theory in technology use. Ethical dimensions (trust, responsibility, and academic integrity) emerged as critical antecedents, shaping both students’ motivation and their psychological readiness to engage with AI tools. Moreover, while Institutional Support showed a relatively weaker direct effect, it remains a vital enabler for building confidence in AI’s usefulness and ensuring responsible use through policy and training.
From the lecturers’ perspective, the study highlights a mixture of cautious optimism and concern. While many recognise GenAI’s potential to enhance instruction and streamline teaching practices, they also express valid worries about academic integrity, demotivation, and the erosion of critical thinking. Importantly, the data reveal uneven institutional responses; many educators are adapting their practices individually, often without adequate institutional guidance. This underlines a gap between policy ambition and operational readiness that universities must urgently address.
In practical terms, the study highlights the need for a dual approach: empowering students to use GenAI as a learning partner rather than a shortcut, and equipping lecturers to harness GenAI pedagogically while safeguarding assessment credibility. Institutions should prioritise developing AI literacy programs, clear usage guidelines, ethical standards, and robust support systems for both students and faculty. Rather than banning GenAI or ignoring its influence, universities must build an academic culture where its use is transparent, responsible, and learning-oriented.
Overall, the study contributes to the growing global discourse on GenAI in education by offering a validated conceptual model and empirical evidence from a rapidly digitising region. It reinforces the notion that successful GenAI adoption hinges not only on the tool’s features but also on the motivations, values, and support structures that shape how technology is understood and applied in academic life. As AI becomes more embedded in educational processes, sustained engagement with both students and educators will be critical to ensuring its benefits are realised ethically and effectively.

Author Contributions

D.M.B. and R.M. conceptualised the study, designed the survey, and led the analysis. D.M.B. contributed to data interpretation and literature review. S.B. supported model development and manuscript refinement. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by grant No. 13405 from the University of Bisha, Saudi Arabia.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of the University of Bisha Ethical Unit-Information Systems & Cybersecurity (protocol code 13405, approved 8 March 2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Dataset available on request from the authors.

Acknowledgments

The authors are thankful to the Deanship of Graduate Studies and Scientific Research at University of Bisha for supporting this work through the Fast-Track Research Support Program.

Conflicts of Interest

Sara Bilal was employed by IBM during the time of this research.

Appendix A. Questionnaire Sources and Items

Appendix A.1. Constructs Administered to Both Students and Lecturers

Appendix A.1.1. Perceived Usefulness (PU)—Adapted from [46,47]

  • Using GenAI would enhance my performance on academic/teaching tasks.
  • Using GenAI would improve the quality of my work.
  • GenAI would make it easier to accomplish key tasks.
  • Overall, I find GenAI useful for my academic/teaching activities.

Appendix A.1.2. Perceived Ease of Use (EOU)—Adapted from [46,48]

  • Learning to use GenAI is easy for me.
  • My interaction with GenAI is clear and understandable.
  • I find GenAI easy to use for the tasks I need.
  • It is easy to become skilful at using GenAI.

Appendix A.1.3. Behavioural Intention to Use GenAI—Adapted from [45,47]

  • I intend to use GenAI regularly for my academic/teaching tasks.
  • I plan to use GenAI whenever it is appropriate.
  • I expect to increase my use of GenAI in the near future.

Appendix A.1.4. Institutional Support (Training, Policy, Guidance)—Adapted from UTAUT “Facilitating Conditions” [47] and Perceived Organisational Support for Technology Enablement

  • My institution provides clear guidance/policy on appropriate GenAI use.
  • My institution offers training/resources that help me use GenAI effectively.
  • If I need help with GenAI, support is available.
  • Assessment/course policies are aligned with responsible GenAI use.

Appendix A.2. Constructs Administered to Students

Appendix A.2.1. Motivation for Learning with GenAI (Intrinsic/Identified)—Adapted from Self-Determination Theory Instruments [49,50]

  • I use GenAI because it helps me understand things better.
  • I find using GenAI for study tasks interesting/enjoyable.
  • GenAI helps me persist when tasks are challenging.
  • Using GenAI adds value to my learning process.

Appendix A.2.2. Learner Autonomy with GenAI—Adapted from SRQ-Learning/Autonomy Support [50,51]

  • When I use GenAI, I still make my own decisions about the final work.
  • I use GenAI to support my thinking, not to replace it.
  • I feel in control of how GenAI’s outputs are used in my assignments.
  • I can justify and explain the work even when GenAI assisted.

Appendix A.2.3. Trust and Responsibility (Student Perspective on Responsible Use)—Adapted from Technology Trust/Ethics Items [52,53] and Academic Integrity Framing

  • I can judge when GenAI outputs are reliable enough for my work.
  • I feel responsible for verifying GenAI outputs before using them.
  • I am confident I can use GenAI responsibly within course rules.
  • I would disclose/acknowledge meaningful GenAI assistance when required.

Appendix A.3. Constructs Administered to Lecturers

Appendix A.3.1. Academic Integrity (Concerns and Norms)—Adapted from Academic Integrity/Cheating Attitude Scales [54]—Adapted to GenAI

  • I am concerned that GenAI may undermine authentic student learning.
  • I am concerned about undisclosed GenAI use in assignments.
  • I believe students should acknowledge GenAI assistance in submitted work.
  • Clear course-level rules can mitigate integrity risks when GenAI is used.

Appendix A.3.2. Assessment Adaptation (Readiness to Redesign Assessment)—Study-Developed Items, Grounded in Assessment Literature [55,56] and Contemporary Guidance on AI-Resilient Assessment

  • I am willing to redesign assessments to support responsible GenAI use.
  • I see value in assessment tasks that require process evidence (drafts, orals, reflection) when GenAI is allowed.
  • I can align learning outcomes with tasks that permit transparent GenAI assistance.
  • With adequate guidance, I can integrate GenAI into assessment without compromising standards.

References

  1. Shata, A.; Hartley, K. Artificial intelligence and communication technologies in academia: Faculty perceptions and the adoption of generative AI. Int. J. Educ. Technol. High. Educ. 2025, 22, 14. [Google Scholar] [CrossRef]
  2. Aldossary, A.S.; Aljindi, A.A.; Alamri, J.M. The role of generative AI in education: Perceptions of Saudi students. Contemp. Educ. Technol. 2024, 16, ep536. [Google Scholar] [CrossRef] [PubMed]
  3. Alshamy, A.; Al-Harthi, A.S.A.; Abdullah, S. Perceptions of Generative AI Tools in Higher Education: Insights from Students and Academics at Sultan Qaboos University. Educ. Sci. 2025, 15, 501. [Google Scholar] [CrossRef]
  4. Hasanein, A.M.; Sobaih, A.E.E. Drivers and Consequences of ChatGPT Use in Higher Education: Key Stakeholder Perspectives. Eur. J. Investig. Health Psychol. Educ. 2023, 13, 2599–2614. [Google Scholar] [CrossRef]
  5. Zhai, C.; Wibowo, S.; Li, L.D. The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: A systematic review. Smart Learn. Environ. 2024, 11, 28. [Google Scholar] [CrossRef]
  6. Mohamed, M.G.; Goktas, P.; Khalaf, S.A.; Kucukkuya, A.; Al-Faouri, I.; Seleem, E.A.E.S.; Ibraheem, A.; Abdelhafez, A.M.; Abdullah, S.O.; Zaki, H.N.; et al. Generative artificial intelligence acceptance, anxiety, and behavioral intention in the middle east: A TAM-based structural equation modelling approach. BMC Nurs. 2025, 24, 703. [Google Scholar] [CrossRef]
  7. Mustofa, R.H.; Kuncoro, T.G.; Atmono, D.; Hermawan, H.D.; Sukirman. Extending the technology acceptance model: The role of subjective norms, ethics, and trust in AI tool adoption among students. Comput. Educ. Artif. Intell. 2025, 8, 100379. [Google Scholar] [CrossRef]
  8. Ittefaq, M.; Zain, A.; Arif, R.; Ahmad, T.; Khan, L.; Seo, H. Factors influencing international students’ adoption of generative artificial intelligence: The mediating role of perceived values and attitudes. J. Int. Stud. 2025, 15, 127–154. [Google Scholar] [CrossRef]
  9. Sergeeva, O.V.; Zheltukhina, M.R.; Shoustikova, T.; Tukhvatullina, L.R.; Dobrokhotov, D.A.; Kondrashev, S.V. Understanding higher education students’ adoption of generative AI technologies: An empirical investigation using UTAUT2. Contemp. Educ. Technol. 2025, 17, ep571. [Google Scholar] [CrossRef]
  10. Alsharefeen, R.; Al Sayari, N. Examining academic integrity policy and practice in the era of AI: A case study of faculty perspectives. Front. Educ. 2025, 10, 1621743. [Google Scholar] [CrossRef]
  11. Yao-Ping, P.M.; Xu, Y.; Xu, C. Enhancing students’ english Language learning via M-learning: Integrating technology acceptance model and S-O-R model. Heliyon 2023, 9, e13302. [Google Scholar] [CrossRef] [PubMed]
  12. Mohamed, M.G.; Islam, M.R.; Ahmed, S.K.; Khalaf, S.A.; Abdelall, H.A.; Mahmood, K.A.; Khalek, E.M.A.; Arulappan, J.; Dewan, S.M.R. Assessment of knowledge, attitude, anxiety level and perceived mental healthcare needs toward Mpox infection among nursing students: A multi-center cross-sectional study. Glob. Transit. 2024, 6, 203–211. [Google Scholar] [CrossRef]
  13. Dai, H.M.; Teo, T.; Rappa, N.A.; Huang, F. Explaining Chinese university students’ continuance learning intention in the MOOC setting: A modified expectation confirmation model perspective. Comput. Educ. 2020, 150, 103850. [Google Scholar] [CrossRef]
  14. Toros, E.; Asiksoy, G.; Sürücü, L. Refreshment students’ perceived usefulness and attitudes towards using technology: A moderated mediation model. Humanit. Soc. Sci. Commun. 2024, 11, 333. [Google Scholar] [CrossRef]
  15. Amoozadeh, M.; Daniels, D.; Nam, D.; Kumar, A.; Chen, S.; Hilton, M.; Alipour, M.A. Trust in Generative AI among Students: An exploratory study. In Proceedings of the 55th ACM Technical Symposium on Computer Science Education, Portland, OR, USA, 20–23 March 2024; pp. 67–73. [Google Scholar]
  16. Nazaretsky, T.; Mejia-Domenzain, P.; Swamy, V.; Frej, J.; Käser, T. The critical role of trust in adopting AI-powered educational technology for learning: An instrument for measuring student perceptions. Comput. Educ. Artif. Intell. 2025, 8, 100368. [Google Scholar] [CrossRef]
  17. Đerić, E.; Frank, D.; Milković, M. Trust in Generative AI Tools: A Comparative Study of Higher Education Students, Teachers, and Researchers. Information 2025, 16, 622. [Google Scholar] [CrossRef]
  18. Mohamed, A.M.; Shaaban, T.S.; Bakry, S.H.; Guillén-Gámez, F.D.; Strzelecki, A. Empowering the Faculty of Education Students: Applying AI’s Potential for Motivating and Enhancing Learning. Innov. High. Educ. 2025, 50, 587–609. [Google Scholar] [CrossRef]
  19. Wang, X.; Xu, X.; Zhang, Y.; Hao, S.; Jie, W. Exploring the impact of artificial intelligence application in personalized learning environments: Thematic analysis of undergraduates’ perceptions in China. Humanit. Soc. Sci. Commun. 2024, 11, 1644. [Google Scholar] [CrossRef]
  20. Chiu, T.K.F.; Çoban, M.; Sanusi, I.T.; Ayanwale, M.A. Validating student AI competency self-efficacy (SAICS) scale and its framework. Educ. Tech. Res. Dev. 2025, 73, 2785–2807. [Google Scholar] [CrossRef]
  21. Korchak, A.; Al Murshidi, G.; Getman, A.; Raouf, N.; Arshe, M.; Al Meheiri, N.; Costley, J. The role of social influence in generative artificial intelligence ChatGPT adoption intentions among undergraduate and graduate students. Innov. Educ. Teach. Int. 2025, 62, 1559–1573. [Google Scholar] [CrossRef]
  22. Ursavaş, Ö.F.; Yalçın, Y.; İslamoğlu, H.; Bakır-Yalçın, E.; Cukurova, M. Rethinking the importance of social norms in generative AI adoption: Investigating the acceptance and use of generative AI among higher education students. Int. J. Educ. Technol. High. Educ. 2025, 22, 38. [Google Scholar] [CrossRef]
  23. Lee, D.; Arnold, M.; Srivastava, A.; Plastow, K.; Strelan, P.; Ploeckl, F.; Palmer, E. The impact of generative AI on higher education learning and teaching: A study of educators’ perspectives. Comput. Educ. Artif. Intell. 2024, 6, 100221. [Google Scholar] [CrossRef]
  24. Elshall, A.S.; Badir, A. Balancing AI-assisted learning and traditional assessment: The FACT assessment in environmental data science education. Front. Educ. 2025, 10, 1596462. [Google Scholar] [CrossRef]
  25. Soares, A.; Lerigo-Sampson, M.; Barker, J. Recontextualising the Unified Theory of Acceptance and Use of Technology (UTAUT) Framework to higher education online marking. J. Univ. Teach. Learn. Pract. 2025, 21, 1–26. [Google Scholar]
  26. Lin, Y.; Yu, Z. Extending Technology Acceptance Model to higher-education students’ use of digital academic reading tools on computers. Int. J. Educ. Technol. High. Educ. 2023, 20, 34. [Google Scholar] [CrossRef]
  27. Shaqrah, A.; Almars, A. Examining the internet of educational things adoption using an extended unified theory of acceptance and use of technology. Internet Things 2022, 19, 100558. [Google Scholar] [CrossRef]
  28. Datt, G.; Singh, G. Acceptance and Barriers of Open Educational Resources in the Context of Indian Higher Education. Can. J. Learn. Technol. 2021, 47, 15. [Google Scholar] [CrossRef]
  29. Sousa, A.E.; Cardoso, P. Use of Generative AI by Higher Education Students. Electronics 2025, 14, 1258. [Google Scholar] [CrossRef]
  30. Li, K.C.; Chong, G.H.L.; Wong, B.T.M.; Wu, M.M.F. A TAM-Based Analysis of Hong Kong Undergraduate Students’ Attitudes Toward Generative AI in Higher Education and Employment. Educ. Sci. 2025, 15, 798. [Google Scholar] [CrossRef]
  31. Zhang, R.; Wang, J. Perceptions, adoption intentions, and impacts of generative AI among Chinese university students. Curr. Psychol. 2025, 44, 11276–11295. [Google Scholar] [CrossRef]
  32. Zhou, L.; Xue, S.; Li, R. Extending the Technology Acceptance Model to Explore Students’ Intention to Use an Online Education Platform at a University in China. SAGE Open 2022, 12, 215824402210852. [Google Scholar] [CrossRef]
  33. Mailizar, M.; Burg, D.; Maulina, S. Examining university students’ behavioural intention to use e-learning during the COVID-19 pandemic: An extended TAM model. Educ. Inf. Technol. 2021, 26, 7057–7077. [Google Scholar] [CrossRef]
  34. Al-kfairy, M. Factors Impacting the Adoption and Acceptance of ChatGPT in Educational Settings: A Narrative Review of Empirical Studies. Appl. Syst. Innov. 2024, 7, 110. [Google Scholar] [CrossRef]
  35. Al-Adwan, A.S.; Li, N.; Al-Adwan, A.; Abbasi, G.A.; Albelbisi, N.A.; Habibi, A. Extending the Technology Acceptance Model (TAM) to Predict University Students’ Intentions to Use Metaverse-Based Learning Platforms. Educ. Inf. Technol. 2023, 28, 15381–15413. [Google Scholar] [CrossRef] [PubMed]
  36. Putra, I.S.; Triatmanto, B.; Zuhro, D. The Effect of Perceived Ease of Use on User’s Intention to Use E-learning with Moodle Application in Higher Education Mediated by Perceived Usefulness. Manag. Econ. J. 2021, 5, 211–220. [Google Scholar] [CrossRef]
  37. Ryan, R.M.; Deci, E.L. Self-determination theory. In Encyclopedia of Quality of Life and Well-Being Research; Springer International Publishing: Cham, Switzerland, 2024; pp. 6229–6235. [Google Scholar]
  38. Annamalai, N.; Bervell, B.; Mireku, D.O.; Andoh, R.P.K. Artificial intelligence in higher education: Modelling students’ motivation for continuous use of ChatGPT based on a modified self-determination theory. Comput. Educ. Artif. Intell. 2025, 8, 100346. [Google Scholar] [CrossRef]
  39. Zhou, L.; Li, J.J. The impact of ChatGPT on learning motivation: A study based on self-determination theory. Educ. Sci. Manag. 2023, 1, 19–29. [Google Scholar] [CrossRef]
  40. Zogheib, S.; Zogheib, B. Understanding university students’ adoption of ChatGPT: Insights from TAM, SDT, and beyond. J. Inf. Technol. Educ. Res. 2024, 23, 25. [Google Scholar] [CrossRef]
  41. Shrivastava, P. Understanding acceptance and resistance toward generative AI technologies: A multi-theoretical framework integrating functional, risk, and sociolegal factors. Front. Artif. Intell. 2025, 8, 1565927. [Google Scholar] [CrossRef] [PubMed]
  42. Jeilani, A.; Abubakar, S. Perceived institutional support and its effects on student perceptions of AI learning in higher education: The role of mediating perceived learning outcomes and moderating technology self-efficacy. Front. Educ. 2025, 10, 1548900. [Google Scholar] [CrossRef]
  43. Al-Rahmi, W.M.; Uddin, M.; Alkhalaf, S.; Al-Dhlan, K.A.; Cifuentes-Faura, J.; Al-Rahmi, A.M.; Al-Adwan, A.S. Validation of an Integrated IS Success Model in the Study of E-Government. Mob. Inf. Syst. 2022, 2022, 8909724. [Google Scholar] [CrossRef]
  44. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education – where are the educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 39. [Google Scholar] [CrossRef]
  45. Davis, F.D.; Bagozzi, R.P.; Warshaw, P.R. User acceptance of computer technology: A comparison of two theoretical models. Manag. Sci. 1989, 35, 982–1003. [Google Scholar] [CrossRef]
  46. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  47. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  48. Venkatesh, V.; Bala, H. Technology acceptance model 3 and a research agenda on interventions. Decis. Sci. 2008, 39, 273–315. [Google Scholar] [CrossRef]
  49. Ryan, R.M.; Deci, E.L. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am. Psychol. 2000, 55, 68–78. [Google Scholar] [CrossRef] [PubMed]
  50. Black, A.E.; Deci, E.L. The effects of instructors’ autonomy support and students’ autonomous motivation on learning organic chemistry: A self-determination theory perspective. Sci. Educ. 2000, 84, 740–756. [Google Scholar] [CrossRef]
  51. Williams, G.C.; Deci, E.L. Internalization of biopsychosocial values by medical students: A test of self-determination theory. J. Personal. Soc. Psychol. 1996, 70, 767–779. [Google Scholar] [CrossRef]
  52. McKnight, D.H.; Choudhury, V.; Kacmar, C. The impact of initial consumer trust on intentions to transact with a web site: A trust building model. J. Strateg. Inf. Syst. 2002, 11, 297–323. [Google Scholar] [CrossRef]
  53. Gefen, D.; Karahanna, E.; Straub, D.W. Trust and TAM in online shopping: An integrated model. MIS Q. 2003, 27, 51–90. [Google Scholar] [CrossRef]
  54. McCabe, D.L.; Trevino, L.K. Academic dishonesty: Honor codes and other contextual influences. J. High. Educ. 1993, 64, 522–538. [Google Scholar] [CrossRef]
  55. Boud, D.; Falchikov, N. Aligning assessment with long-term learning. Assess. Eval. High. Educ. 2006, 31, 399–413. [Google Scholar] [CrossRef]
  56. Bearman, M.; Dawson, P.; Boud, D.; Bennett, S.; Hall, M.; Molloy, E. Support for assessment practice: Developing the Assessment Design Decisions Framework. Teach. High. Educ. 2016, 21, 545–556. [Google Scholar] [CrossRef]
Figure 1. Conceptual model for GenAI adoption in higher education (based on TAM and extended with motivation, integrity, and institutional support). T-R = trust and responsibility; Mov = motivation; EOU = ease of use; Ins-Sup = institutional support; PU = perceived usefulness.
Figure 2. Path analysis (SmartPLS).
Figure 3. Path analysis coefficients (t-values).
Figure 4. Distribution of GenAI familiarity by teaching experience.
Figure 5. Lecturers’ responses on whether they believe students use GenAI in their assignments, whether they have received submissions fully generated by GenAI, and whether they themselves use GenAI to generate teaching materials.
Figure 6. Lecturer perspectives, by years of teaching experience, on whether GenAI enhances students’ learning or demotivates their effort to produce high-quality deliverables.
Figure 7. Lecturer perspectives, by familiarity with GenAI, on whether GenAI enhances students’ learning or demotivates their effort to produce high-quality deliverables.
Table 1. Student demographics (N = 578).

| Variable | Category | n | % |
|---|---|---|---|
| Age | 18–20 years | 201 | 34.8 |
| | 21+ years | 377 | 65.2 |
| Gender | Female | 318 | 55.01 |
| | Male | 260 | 44.98 |
| Field of Study | Computing/IT/AI | 183 | 31.7 |
| | Engineering | 89 | 15.39 |
| | Economics/Business | 101 | 17.47 |
| | Education | 120 | 20.76 |
| | Other | 85 | 14.7 |
| Years of Study | Year 1 | 139 | 24 |
| | Year 2 | 122 | 21.1 |
| | Year 3 | 115 | 19.9 |
| | Year 4 | 104 | 18 |
| | Year 5+ | 98 | 17 |
Table 2. Factor Loadings.

| Item | Construct | Loading |
|---|---|---|
| Ease of Use1 | EOU | 0.664 |
| Ease of Use2 | EOU | 0.913 |
| Integrity1 | Integrity | 0.946 |
| Integrity2 | Integrity | 0.623 |
| Mov1 | Mov | 0.945 |
| Mov2 | Mov | 0.944 |
| PU1 | PU | 0.897 |
| PU2 | PU | 0.864 |
| PU3 | PU | 0.796 |
| T-R1 | T-R | 0.649 |
| T-R2 | T-R | 0.608 |
| T-R3 | T-R | 0.859 |
| ins-sup1 | Ins-Sup | 0.794 |
| ins-sup2 | Ins-Sup | 0.801 |
| ins-sup3 | Ins-Sup | 0.582 |
| intention1 | Intention | 0.870 |
| intention2 | Intention | 0.885 |
| intention3 | Intention | 0.720 |
Table 3. Cross-loadings of indicators.

| Item | EOU | Mov | PU | T-R | Ins-Sup | Integrity | Intention |
|---|---|---|---|---|---|---|---|
| Ease of Use1 | **0.664** | 0.264 | 0.339 | 0.355 | 0.284 | 0.265 | 0.207 |
| Ease of Use2 | **0.913** | 0.659 | 0.57 | 0.334 | 0.354 | 0.23 | 0.448 |
| Integrity1 | 0.217 | 0.281 | 0.292 | 0.17 | 0.207 | **0.946** | 0.396 |
| Integrity2 | 0.328 | 0.117 | 0.145 | 0.352 | 0.139 | **0.623** | 0.336 |
| Mov1 | 0.588 | **0.945** | 0.722 | 0.335 | 0.203 | 0.28 | 0.527 |
| Mov2 | 0.6 | **0.944** | 0.661 | 0.348 | 0.278 | 0.236 | 0.53 |
| PU1 | 0.512 | 0.62 | **0.897** | 0.236 | 0.27 | 0.289 | 0.535 |
| PU2 | 0.521 | 0.651 | **0.864** | 0.332 | 0.315 | 0.179 | 0.435 |
| PU3 | 0.483 | 0.608 | **0.796** | 0.235 | 0.208 | 0.283 | 0.367 |
| T-R1 | 0.24 | 0.206 | 0.204 | **0.649** | 0.264 | 0.335 | 0.19 |
| T-R2 | 0.251 | 0.111 | 0.136 | **0.608** | 0.268 | 0.334 | 0.287 |
| T-R3 | 0.372 | 0.36 | 0.286 | **0.859** | 0.31 | 0.077 | 0.238 |
| ins-sup1 | 0.366 | 0.148 | 0.235 | 0.186 | **0.794** | 0.135 | 0.227 |
| ins-sup2 | 0.2 | 0.176 | 0.233 | 0.302 | **0.801** | 0.002 | 0.198 |
| ins-sup3 | 0.309 | 0.237 | 0.213 | 0.359 | **0.582** | 0.361 | 0.338 |
| intention1 | 0.353 | 0.457 | 0.421 | 0.186 | 0.266 | 0.504 | **0.87** |
| intention2 | 0.457 | 0.54 | 0.534 | 0.246 | 0.315 | 0.266 | **0.885** |
| intention3 | 0.248 | 0.372 | 0.324 | 0.364 | 0.277 | 0.368 | **0.72** |

Note: bold values indicate each indicator’s loading on its own construct.
Table 4. Reliability and validity.

| Construct | Cronbach’s Alpha | Composite Reliability | Average Variance Extracted (AVE) |
|---|---|---|---|
| EOU | 0.464 | 0.774 | 0.638 |
| Mov | 0.879 | 0.943 | 0.892 |
| PU | 0.813 | 0.889 | 0.728 |
| T-R | 0.582 | 0.753 | 0.509 |
| Ins-Sup | 0.552 | 0.773 | 0.537 |
| Integrity | 0.501 | 0.774 | 0.641 |
| Intention | 0.772 | 0.867 | 0.686 |
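As a consistency check (a conventional reading of these statistics, not necessarily the authors’ exact computation), composite reliability and AVE follow from the standardized loadings in Table 2:

$$\mathrm{CR}=\frac{\left(\sum_{i}\lambda_{i}\right)^{2}}{\left(\sum_{i}\lambda_{i}\right)^{2}+\sum_{i}\left(1-\lambda_{i}^{2}\right)},\qquad \mathrm{AVE}=\frac{1}{n}\sum_{i}\lambda_{i}^{2}.$$

For EOU (loadings 0.664 and 0.913), AVE = (0.664² + 0.913²)/2 ≈ 0.637 and CR = 1.577²/(1.577² + 0.559 + 0.167) ≈ 0.774, matching the tabulated 0.638 and 0.774 up to rounding.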
Table 5. Discriminant validity.

| Constructs | EOU | Mov | PU | T-R | Ins-Sup | Integrity | Intention |
|---|---|---|---|---|---|---|---|
| EOU | 0.799 | | | | | | |
| Mov | 0.629 | 0.944 | | | | | |
| PU | 0.592 | 0.732 | 0.853 | | | | |
| T-R | 0.414 | 0.362 | 0.313 | 0.714 | | | |
| Ins-Sup | 0.399 | 0.254 | 0.312 | 0.384 | 0.733 | | |
| Integrity | 0.293 | 0.273 | 0.292 | 0.262 | 0.22 | 0.801 | |
| Intention | 0.44 | 0.56 | 0.528 | 0.307 | 0.345 | 0.444 | 0.828 |
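The diagonal of Table 5 appears to report the square root of each construct’s AVE, consistent with the Fornell–Larcker criterion, under which discriminant validity holds when this value exceeds the construct’s correlations with all other constructs:

$$\sqrt{\mathrm{AVE}_{j}}>\max_{k\neq j}\left|r_{jk}\right|,\qquad \text{e.g., for EOU: }\sqrt{0.638}\approx 0.799>0.629.$$

Here 0.629 is EOU’s largest off-diagonal correlation (with Mov), so the criterion is satisfied for that construct.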
Table 6. R-square.

| Construct | R-Square | R-Square Adjusted |
|---|---|---|
| Mov | 0.165 | 0.162 |
| PU | 0.357 | 0.355 |
| Intention | 0.348 | 0.345 |
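The adjusted values are consistent with the standard correction for the number of predictors k given n = 578 respondents; for example, assuming motivation is predicted by two constructs (T-R and Integrity, per the conceptual model in Figure 1):

$$R^{2}_{\text{adj}}=1-\left(1-R^{2}\right)\frac{n-1}{n-k-1}=1-(1-0.165)\,\frac{577}{575}\approx 0.162.$$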
Table 7. F-square.

| | Mov | PU | Intention |
|---|---|---|---|
| EOU | | 0.405 | 0.008 |
| Mov | | | 0.069 |
| PU | | | 0.034 |
| T-R | 0.108 | | |
| Ins-Sup | | 0.011 | |
| Integrity | 0.041 | | |
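For interpretation, the effect size f² reported by SmartPLS is the change in explained variance when a given predictor is omitted from the structural model, commonly read against Cohen’s benchmarks of 0.02 (small), 0.15 (medium), and 0.35 (large):

$$f^{2}=\frac{R^{2}_{\text{included}}-R^{2}_{\text{excluded}}}{1-R^{2}_{\text{included}}}.$$

On that reading, EOU exerts a large effect on PU (0.405), while the remaining paths fall in the small-to-medium range.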