2.1. Use of Generative AI Tools Among Different Academic Segments
“Artificial intelligence (AI) is a broad umbrella term used to encompass a wide variety of subfields dedicated to creating algorithms to perform tasks that mimic human intelligence” [5] (p. 96). One of the AI subfields is generative artificial intelligence (GenAI), which is “a technology that (i) leverages deep learning models to (ii) generate human-like content (e.g., images, words) in response to (iii) complex and varied prompts (e.g., languages, instructions, questions)” [6] (p. 2). Features such as engaging in human-like conversations, answering questions, and generating sophisticated text, code, videos, images, and other creative content—often indistinguishable from that produced by a human [7]—underpin the perception of generative AI tools as a revolutionary technology with the potential to usher in a new paradigm [8] by shaping the future of learning and teaching practices [9].
Artificial intelligence in education (AIED) is an interdisciplinary field that refers to the application of artificial intelligence techniques and systems to enhance educational practices [10], including personalized learning, intelligent tutoring systems, and the automation of administrative tasks [11]. Mittal et al. [12] argued that generative AI tools, such as ChatGPT, Microsoft Copilot, and Gemini, are essential for addressing gaps in traditional educational systems. For example, traditional classes “in which students march through a textbook, page by page (or teachers through lecture notes) in a valiant attempt to traverse all the factual material within a prescribed time are difficult to avoid” [13] (p. 15); such classes often fail to ignite curiosity, leave limited time to address additional questions and potential misunderstandings, and cannot dedicate sufficient attention to each individual. On the other hand, the findings of Abdelghani’s study emphasized the importance of GenAI in promoting a desire for knowledge-based learning [14] and in developing creativity while crafting prompts and formulating more sophisticated, research-oriented questions. Without fear of asking trivial questions in front of a full classroom, each student can now use GenAI tools to ask unlimited additional questions, requesting step-by-step explanations, examples, and personalized feedback that meet their unique requirements, learning pace, and preferences. This capability is especially valuable for higher education students, who are expected to develop a deeper understanding of complex topics and engage in scholarly discourse. Importantly, GenAI tools can also assist students with disabilities by offering text-to-speech and speech-to-text features, helping to ensure fair and equal educational opportunities.
Teachers and educators, on the other hand, can leverage GenAI tools to design adaptive course materials, simulations, and interactive content that enhance student engagement and comprehension. For example, Mittal et al. suggested that educators can use GenAI to create animated, audio–visual instructional content (for history and geography courses) that, “combined with metaverse technology, allows students to experience these disciplines virtually” [12] (p. 742) and [15]. Similarly, they suggested that educational practices should be enriched with advanced technologies, such as virtual labs and simulations, which would allow students to explore complex principles in a hands-on manner. AI-enabled gamification is another example of how emerging technologies could alter classroom dynamics and increase student engagement and motivation [16], as well as prepare students for the demands of a knowledge-based economy. Along with support in preparing curricula and class materials, AI-driven tools can streamline teachers’ grading process, especially for open-ended questions [17], provide frequent and fast personalized feedback, pinpoint areas needing improvement, adapt instructional content dynamically [3], and create additional assessments customized for each student based on their performance history.
Interestingly, discussions about educators’ use of GenAI tools still reflect contrasting viewpoints. One stream shares the concern that AI might replace essential educational experiences, depersonalize education, and diminish meaningful instructor–student interactions [18]. Other authors emphasize the need for a collaborative approach in which GenAI tools serve as a helpful resource for busy teachers by automating repetitive routine tasks and manual and administrative work, offering them more time to focus on higher-level activities, such as mentorship, guidance, and fostering critical thinking skills [3].
GenAI tools also offer a wide range of applications for researchers by supporting critical aspects of scholarly writing, such as brainstorming, drafting, and revising text while ensuring logical flow and a coherent structure. Additionally, these tools assist in aggregating, summarizing, and synthesizing extensive amounts of literature and academic articles, enabling researchers to dedicate more time to higher-level tasks, such as argument development, analysis of findings, and creative expression [19]. Moreover, researchers can utilize AI tools for practical tasks, such as translating their work into foreign languages for publication in international journals, or enhancing readability and style through assisted copy editing [20], which includes correcting grammar, spelling, punctuation, and syntactic errors.
2.4. Dimensions of User Trust in GenAI Tools in the Academic Environment
The integration of generative AI tools in the educational environment has sparked considerable interest as well as many debates among teachers, researchers, and policymakers. Central to the successful adoption of AI technology is the issue of trust, particularly concerning the accuracy and relevance of generated content, the privacy and protection of personal data, and the nonmaliciousness of generated content. Given the multifaceted nature of trust outlined above, this study adopts three key dimensions (accuracy and relevance, privacy protection, and nonmaliciousness) to comprehensively capture the core factors influencing user trust in GenAI tools within academic environments.
(1)
Trust in the accuracy and relevance of the generated content: The accuracy and relevance of information are cornerstones of scholarly integrity and effective learning. Accuracy refers to the correctness of the information provided, whereas relevance pertains to its applicability to the curriculum and alignment with learners’ needs. If GenAI tools provide incorrect, misleading, or irrelevant content, they can undermine academic quality and credibility. For example, students and researchers often use generative AI tools to draft ideas, summarize information, or generate citations. AI-generated factual inaccuracies and biases can lead to misleading arguments or incorrect references, undermining the quality of academic work. Similarly, irrelevant or incorrect information, which may blend truth with fabricated data [34], can hinder learners’ understanding of key concepts and contribute to the dissemination of misinformation. This results in both reduced trust and hesitation in the use of GenAI tools for academic purposes, especially among teachers [35], as well as reputational damage to institutions that encourage their use. A brief example is a survey among 4000 Stanford students, in which 17% reported using AI for brainstorming and outlining content, whereas 5% reported submitting written material directly from ChatGPT with little to no editing [36]. Concerns about misinformation, biases, and AI “hallucinations”, where AI produces plausible but ultimately incorrect information [37], are widespread. Such hallucinations occur because the LLMs underlying chatbots are programmed to predict text sequentially and cannot always evaluate the greater context of the prompt [38,39]. Relying on chatbot answers without verifying them, or without the knowledge needed to verify them, can result in AI hallucinations being included in research articles, undermining their accuracy. In November 2022, Meta launched Galactica, a large language model trained on millions of scientific articles, websites, textbooks, lecture notes, and encyclopedias, aimed at helping researchers and students summarize academic papers, solve math problems, generate articles, write scientific code, and more. However, just three days later, the LLM was shut down after users discovered that it produced biased and incorrect results, including the creation of fake papers and the misattribution of references to real scholars [40]. Michael Black, director at the Max Planck Institute for Intelligent Systems, criticized the model for being “wrong or biased but sounded right and authoritative,” labeling it as dangerous [41]. This and similar examples highlight the broader concerns about the reliability of AI-generated content, as well as the importance of accuracy and relevance in academic settings.
(2)
Trust in the protection of privacy and personal data: Central to the AI educational design framework is the concept of personalized learning, which analyzes student performance, identifies areas for improvement, and dynamically customizes learning content to closely match user preferences and needs. To deliver such outputs, “whether students, teachers, parents, and policymakers welcome it or not, so-called intelligent, adaptive, or personalized learning systems are increasingly being deployed in schools and universities around the world, gathering and analyzing enormous amounts of student big data, which significantly impacted the lives of students and educators” [13] (p. 9). There are many examples in the academic environment where sensitive personal data (including students’ biometrics, personal records, and behavior patterns), research subjects’ information, or novel academic findings are easily shared with GenAI tools. Data breaches or the mishandling of personal data, such as input prompts containing private or confidential information, could violate privacy laws (e.g., the GDPR) and potentially lead to identity theft, unauthorized surveillance, or misuse [42]. Similarly, the unauthorized use of data shared with GenAI tools could result in plagiarism, intellectual property theft [43], and reputational damage to individuals or institutions, undermining academic integrity. The MIT RAISE project, focused on responsible AI usage for social empowerment and education, highlights that “due to the vast amounts of student data being captured, the educational market ranks as the third-highest target for data hackers, trailing only the health and financial sectors” [44]. MIT also shed light on incidents such as the cyberattack on Illuminate Education, a leading provider of student tracking software, which resulted in the exposure of the personal information of more than a million current and former students, including names, dates of birth, races or ethnicities, test scores, and, in some cases, more intimate details, such as tardiness rates, migrant status, behavior incidents, and descriptions of disabilities [44,45].
(3)
Trust in the nonmaliciousness of GenAI tools: Nonmaliciousness involves ensuring that GenAI tools are neither intentionally nor inadvertently harmful, which is essential for maintaining a safe, fair, and inclusive academic environment. This includes, but is not limited to, (i) preventing bias by avoiding stereotypes and discriminatory content that could marginalize certain groups [46]; (ii) discouraging cheating or the generation of falsified data; and (iii) avoiding harmful outputs, such as fabricated information or unethical and inappropriate suggestions that could jeopardize academic work or an individual’s personal reputation. Concerns about AI being exploited for harmful purposes, such as generating deepfakes and synthetic media that are often indistinguishable from authentic content, are increasing. Such misuse undermines public trust in media and institutions, with potential implications for democracy and societal cohesion [47,48]. There have been alarming recent instances of students using AI to create explicit deepfake images of their peers, leading to psychological distress and breaches of privacy. Specifically, one high school student in Sydney generated deepfake pornographic images of female classmates from innocent photos taken from social media [49], illustrating how GenAI tools can be misused to produce nonconsensual explicit content and spread false information, undermining user trust and authenticity online. Moreover, the malicious use of GenAI tools is closely related to the protection of privacy and personal data. For example, LLMs can be manipulated to infer sensitive data from seemingly benign input–output observations, leading to data leakage. Likewise, unauthorized access can occur through various means, such as phishing attacks targeting school administrators or compromised credentials [50].
This study aims to address the identified literature gap by exploring, analyzing, and comparing the factors that shape the perception of trust in GenAI tools within academic settings across diverse user segments, including students at all levels, teachers, and researchers. Specifically, this study seeks to deepen understanding of how trust—defined across three dimensions, namely, accuracy and relevance, protection of privacy and personal data, and nonmaliciousness—influences user decisions regarding the usage and future adoption of these tools.
While trust in generative AI tools is conceptually multidimensional (encompassing accuracy, privacy protection, and nonmaliciousness), this study operationalizes it as a composite construct that reflects the overall level of trust users hold toward these tools. This approach aligns with prior research that models trust as a unidimensional predictor in technology adoption contexts when the focus is on general usage decisions rather than context-specific tasks. Nevertheless, the distinctions among the trust dimensions are acknowledged and discussed in the interpretation of the results, recognizing that trust in technical accuracy differs in nature from trust related to data privacy or the prevention of harmful use. This decision allows this research to focus on broader patterns of trust formation while leaving the exploration of trust subdimensions as a direction for future research.
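To make this operationalization concrete, the sketch below computes an unweighted composite trust score from per-dimension survey items. It is a minimal illustration only: the item names (acc1–acc3, priv1–priv3, mal1–mal3), the five-point Likert values, and the equal weighting of the three subscales are assumptions for demonstration, not the study’s actual instrument.

```python
import pandas as pd

# Hypothetical 5-point Likert items, three per trust dimension:
# accuracy/relevance (acc*), privacy protection (priv*), nonmaliciousness (mal*).
items = {
    "acc1": [4, 5, 3], "acc2": [4, 4, 3], "acc3": [5, 4, 2],
    "priv1": [3, 4, 2], "priv2": [3, 3, 2], "priv3": [4, 3, 1],
    "mal1": [4, 5, 3], "mal2": [4, 4, 2], "mal3": [5, 4, 3],
}
df = pd.DataFrame(items)

# Subscale mean per respondent, then an unweighted composite across dimensions.
df["trust_accuracy"] = df[["acc1", "acc2", "acc3"]].mean(axis=1)
df["trust_privacy"] = df[["priv1", "priv2", "priv3"]].mean(axis=1)
df["trust_nonmal"] = df[["mal1", "mal2", "mal3"]].mean(axis=1)
df["trust_composite"] = df[["trust_accuracy", "trust_privacy", "trust_nonmal"]].mean(axis=1)

print(df[["trust_accuracy", "trust_privacy", "trust_nonmal", "trust_composite"]])
```

Averaging the subscales, rather than pooling all nine items directly, keeps the three dimensions equally weighted in the composite even if the number of items per dimension were to differ.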
The proposed research model draws conceptually from the Unified Theory of Acceptance and Use of Technology (UTAUT), which identifies performance expectancy, effort expectancy, social influence, and facilitating conditions as key predictors of technology adoption. Building on this foundation, Venkatesh, Thong, and Xu (2012) [51] extended the original model by introducing three additional factors—hedonic motivation, price value, and habit—that influence both the intention to use technology and actual usage behavior. This enhanced framework, known as UTAUT2, has been widely applied in the context of technology adoption. The present study extends the UTAUT2 framework by incorporating trust as a critical antecedent of behavioral intention, acknowledging the distinctive ethical, technical, and social concerns associated with generative AI tools in academic environments (a schematic specification of this extended model is sketched after the research questions below). Grounded in the conceptualization of trust as a multidimensional yet operationally unified construct, the following research questions and hypotheses are proposed to examine how trust and related factors influence the adoption and use of generative AI tools in academia:
RQ1: How does user trust in generative AI tools differ across academic segments?
RQ2: What is the relationship between gender and trust in generative AI tools?
RQ3: How does the intensity of generative AI tool usage predict trust levels?
RQ4: In what ways does the length of experience with generative AI tools influence user trust?
RQ5: How does self-perceived proficiency in using generative AI tools affect trust?
RQ6: What is the relationship between trust in generative AI tools and the behavioral intention to use them?
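As foreshadowed above, one possible compact specification of the extended model is sketched next. The abbreviations follow the UTAUT2 labels (PE = performance expectancy, EE = effort expectancy, SI = social influence, FC = facilitating conditions, HM = hedonic motivation, PV = price value, HB = habit), TR denotes the composite trust construct introduced earlier, BI denotes behavioral intention, and the coefficients are placeholders rather than estimates; this is an illustrative linear form, not the study’s fitted model.

```latex
\mathrm{BI} = \beta_0 + \beta_1\,\mathrm{PE} + \beta_2\,\mathrm{EE} + \beta_3\,\mathrm{SI} + \beta_4\,\mathrm{FC}
            + \beta_5\,\mathrm{HM} + \beta_6\,\mathrm{PV} + \beta_7\,\mathrm{HB} + \beta_8\,\mathrm{TR} + \varepsilon
```

Under this notation, H6 below corresponds to the expectation that the trust coefficient is positive (β8 > 0).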
Therefore, the following hypotheses are developed:
H1: Academic roles are expected to influence trust in generative AI tools, with teachers and researchers exhibiting lower levels of trust than students across different educational levels (undergraduate, graduate, and doctoral).
Academic studies have outlined different perspectives on using GenAI tools in education. For example, Chan and Lee [52] addressed the impact of generational differences on ChatGPT adoption by comparing Gen Z students (born between 1995 and 2012) and their Gen X and Y teachers (born between 1960 and 1995). Their findings suggest that each generation’s attitudes and approaches toward AI are shaped by their generational experiences with technology. Gen Z grew up with constant access to digital technology and social media, which cultivated a digital-first mindset as well as preferences for hyperconnected, on-demand experiences. With an average attention span of only 8 s [53], they value immediate and personalized feedback, whereas their problem-solving nature and learning independence [54] lead to optimism toward GenAI tools. In contrast, their Gen X and Gen Y teachers, who experienced the transition from traditional to technology-based education, adopt more cautious attitudes. While they acknowledge the benefits of ChatGPT, they also express significant concerns about its ethical and pedagogical implications. Their skepticism aligns with research showing that these generations often approach new technologies with caution, focusing on potential risks and challenges. To incorporate GenAI effectively, Gen X/Y educators require clear guidelines and policies on responsible use, combined with tailored training and support that address their varied comfort levels with technology. The findings of another study [35] revealed similar results—a general trend of enthusiasm among students toward GenAI, contrasted with a slightly more cautious approach from professors—indicating a potential generational gap.
H2: Gender is anticipated to influence trust in generative AI tools, with men demonstrating higher levels of trust than women.
Scholars’ perspectives and research results on the influence of gender on the adoption and use of GenAI tools differ. As presented in a recent Harvard Business School study on gender gaps in GenAI tool usage [55], there are no significant gender differences in trust in the accuracy of AI-generated content or in concerns about privacy risks, such as data breaches, abuse, or compromised storage [56]. However, other scholars, such as Møgelvang et al. [57], who focused on gender differences in the use of GenAI chatbots in higher education, discovered that men exhibit heightened interest and more frequent engagement across a broader spectrum of GenAI applications; they are also more aware of the relevance of these tools to future career prospects. Women, on the other hand, primarily utilize GenAI chatbots for text-related tasks and express greater concerns regarding critical and independent thinking. They also demonstrate a stronger need to understand the circumstances in which those tools can be helpful and to be aware of when they can (or cannot) trust them. These findings suggest that women’s use of GenAI chatbots depends more on training and integration, whereas men express less concern about their critical use and the possible loss of independent thinking [57].
H3: The frequency and intensity of generative AI tool usage are hypothesized to affect trust levels, with individuals who use these tools more intensively showing greater trust than less frequent users.
Research among higher education teachers revealed that those with a comprehensive understanding of AI concepts and applications are more likely to trust the technology’s reliability and effectiveness in educational settings [58,59]. Frequent usage and an understanding of both the benefits and constraints of AI technology result in more effective implementation of these tools in academic work and environments [60]. Proficiency, as a consequence of continuous usage, enhances users’ exposure to different scenarios, which reduces the hesitation or misconceptions that might arise from unfamiliarity [10] and allows for the critical assessment of various AI products.
H4: The duration of experience with generative AI tools is expected to influence trust, with longer-term users exhibiting higher levels of trust in these tools.
Algorithm aversion refers to the avoidance of algorithms after observing their errors, even when they outperform human advisors [61]. This is closely related to the concept of user trust, which can be significantly eroded by mistakes. Therefore, users often require a prolonged period of error-free interactions to regain trust in a system that has previously failed [62]. Following the rationale for the previous hypothesis and other research results, prolonged interaction is assumed to enable the gradual development of familiarity with, confidence in, and reliance on GenAI tools [63].
H5: Self-assessed proficiency in using generative AI tools is anticipated to be positively associated with trust, with individuals who perceive their skills as higher demonstrating greater trust in the tools.
The rationale for this hypothesis is the assumption that greater competence may lead to greater comfort with, and reliance on, GenAI tools. In one study [64], respondents who considered themselves “tech-savvy” demonstrated greater reliance on AI recommendations, suggesting a connection between perceived technical competence and trust in algorithmic predictions. Individuals who feel confident in their technical abilities may be more inclined to formulate specific prompts that generate desired personalized responses and recommendations aligned with their needs, thereby making them more receptive to AI tools. Another study among undergraduate students [65] revealed that technological self-efficacy—trust in one’s ability to utilize new AI tools effectively for various purposes—affects the adoption of AI tools. Moreover, undergraduates with a higher degree of technological self-efficacy are more likely to explore and experiment with novel AI tools when they are launched into the public domain.
H6: Trust positively correlates with the behavioral intention to use generative AI tools, such that individuals with higher trust levels are more likely to express stronger intentions to adopt and utilize these tools.
Choung et al. [66] examined the role of trust in the use and acceptance of AI voice assistants among college students. They concluded that trust had a significant effect on the intention to use AI and that, therefore, future studies investigating the acceptance of AI must include trust as an integral component of their predictive models. Nazaretsky et al. [67] reported similar findings that emphasized the pivotal role of trust in shaping behavioral intentions to adopt AI technologies. Moreover, Bach et al. [68] conducted a systematic review that also identified user trust as one of the key elements of AI adoption because it enables the calibration of the user–AI relationship, regardless of context.
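As a closing illustration, the sketch below shows how two of the hypotheses could be examined once survey responses are collected: a one-way ANOVA across academic segments for H1 and a Pearson correlation between the composite trust score and behavioral intention for H6. All column names and values are fabricated placeholders for demonstration only; they are not study data, and the actual analysis may well use different tests.

```python
import pandas as pd
from scipy import stats

# Hypothetical survey extract: academic role, composite trust score (1-5),
# and behavioral intention (1-5). Columns and values are illustrative only.
df = pd.DataFrame({
    "role": ["undergraduate", "graduate", "doctoral", "teacher", "researcher"] * 4,
    "trust": [4.2, 3.9, 3.6, 3.1, 3.3, 4.0, 3.8, 3.5, 3.0, 3.2,
              4.1, 3.7, 3.4, 2.9, 3.1, 4.3, 4.0, 3.3, 3.2, 3.0],
    "intention": [4.5, 4.1, 3.8, 3.0, 3.4, 4.2, 4.0, 3.6, 2.9, 3.3,
                  4.4, 3.9, 3.5, 3.1, 3.2, 4.6, 4.2, 3.4, 3.3, 3.1],
})

# H1: do mean trust levels differ across academic segments? (one-way ANOVA)
groups = [g["trust"].values for _, g in df.groupby("role")]
f_stat, p_anova = stats.f_oneway(*groups)
print(f"H1 ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# H6: is trust positively correlated with behavioral intention? (Pearson's r)
r, p_corr = stats.pearsonr(df["trust"], df["intention"])
print(f"H6 correlation: r = {r:.2f}, p = {p_corr:.4f}")
```

Note that the ANOVA only indicates whether any segment differs; confirming H1’s directional claim (teachers and researchers trusting less than students) would additionally require post hoc pairwise comparisons.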