Article

Trust in Generative AI Tools: A Comparative Study of Higher Education Students, Teachers, and Researchers

1 Information and Communications Science—Postgraduate Program, University North, 48000 Koprivnica, Croatia
2 Department of Computer Science and Informatics, University North, 48000 Koprivnica, Croatia
3 Department of Multimedia, University North, 48000 Koprivnica, Croatia
* Authors to whom correspondence should be addressed.
Information 2025, 16(7), 622; https://doi.org/10.3390/info16070622
Submission received: 6 June 2025 / Revised: 17 July 2025 / Accepted: 17 July 2025 / Published: 21 July 2025
(This article belongs to the Section Artificial Intelligence)

Abstract

Generative AI (GenAI) tools, including ChatGPT, Microsoft Copilot, and Google Gemini, are rapidly reshaping higher education by transforming how students, educators, and researchers engage with learning, teaching, and academic work. Despite their growing presence, the adoption of GenAI remains inconsistent, largely due to the absence of universal guidelines and trust-related concerns. This study examines how trust, defined across three key dimensions (accuracy and relevance, privacy protection, and nonmaliciousness), influences the adoption and use of GenAI tools in academic environments. Using survey data from 823 participants across different academic roles, this study employs multiple regression analysis to explore the relationship between trust, user characteristics, and behavioral intention. The results reveal that trust is primarily experience-driven. Frequency of use, duration of use, and self-assessed proficiency significantly predict trust, whereas demographic factors, such as gender and academic role, have no significant influence. Furthermore, trust emerges as a strong predictor of behavioral intention to adopt GenAI tools. These findings reinforce trust calibration theory and extend the UTAUT2 framework to the context of GenAI in education. This study highlights that fostering appropriate trust through transparent policies, privacy safeguards, and practical training is critical for enabling responsible, ethical, and effective integration of GenAI into higher education.

1. Introduction

Over the past few years, generative AI has become a prominent buzzword, shaping discussions about digital transformation, productivity, and the evolving future of work and education. Since there is no global consensus or framework for implementing these tools in higher education institutions (HEIs), initiatives and policies governing their use vary considerably because they are set locally, at the level of individual universities. Moreover, fewer than one-third of the top 500 universities in 2022 had implemented such policies, and among them, approximately 67.4% allowed ChatGPT, whereas 32.6% banned its usage [1]. Consequently, the adoption of GenAI tools remains a personal choice influenced by an individual’s cognitive, emotional, and contextual concerns. According to Straub’s review of technology adoption and diffusion theories, “individuals construct unique yet malleable perceptions of technology that influence their adoption decisions” [2] (p. 625).
In addition to the numerous benefits that generative AI tools offer, such as personalized feedback, automated content generation, and support for research writing, they also present significant downsides. These include concerns about misinformation (e.g., AI hallucinations), privacy breaches, and the potential misuse of AI for generating unethical content, like deepfakes or plagiarized material. Moreover, issues such as embedded algorithmic biases, erosion of critical thinking skills due to overreliance on AI, degradation of academic competencies, intellectual property concerns, and lack of transparency in AI outputs further complicate their adoption in educational contexts. These advantages and disadvantages jointly shape adoption decisions across different academic roles. It is essential to highlight and consider the unique set of challenges that differentiate educational settings from other contexts [3] where generative AI is applied. Specifically, those challenges and concerns focus on ethical implications (authorship and copyright issues, transparency, responsibility, academic integrity) and user trust, which refers to the accuracy of generated content, privacy, safety, and nonmaleficence of GenAI tools. Although these two sets of challenges are strongly interconnected and reflect the concerns of many educational stakeholders, each of them should be separately examined and empirically tested.
It is important to note that this study primarily adopts the concept of appropriate trust, which refers to the alignment between users’ expectations and the actual capabilities and limitations of GenAI tools [4]. This framing is particularly relevant in educational settings, where both "overtrust" (uncritical reliance) and "undertrust" (complete avoidance) can lead to ineffective or even harmful use. While this study operationalizes trust as a unified construct, it acknowledges that the three trust dimensions—accuracy and relevance, privacy protection, and nonmaliciousness—represent conceptually distinct concerns. Accuracy relates to whether the AI performs technically as intended, privacy pertains to the responsible handling of personal data, and nonmaliciousness refers to users’ concerns that AI outputs will not be misused for harmful purposes. Recognizing these distinctions is critical for understanding the multifaceted nature of trust in the academic use of generative AI. Overall, the study findings can serve as actionable guidelines for developing institutional policies and initiatives. Therefore, this paper focuses on empirically testing the relationship between user trust and various aspects of the adoption and usage of GenAI tools among different academic roles (students, teachers, and researchers) in higher education.

2. Background

2.1. Use of Generative AI Tools Among Different Academic Segments

“Artificial intelligence (AI) is a broad umbrella term used to encompass a wide variety of subfields dedicated to creating algorithms to perform tasks that mimic human intelligence” [5] (p. 96). One of the AI subfields is generative artificial intelligence (GenAI), which is “a technology that (i) leverages deep learning models to (ii) generate human-like content (e.g., images, words) in response to (iii) complex and varied prompts (e.g., languages, instructions, questions)” [6] (p. 2). Features such as engaging in human-like conversations, answering questions, and generating sophisticated text, code, videos, images, and other creative content—often indistinguishable from those produced by a human [7]—reinforce the perception of generative AI tools as a revolutionary technology with the potential to bring about a paradigm shift [8] by shaping the future of learning and teaching practices [9].
Artificial intelligence in education (AIED) is an interdisciplinary field that refers to the application of artificial intelligence techniques and systems to enhance educational practices [10], including personalized learning, intelligent tutoring systems, and the automation of administrative tasks [11]. Mittal et al. [12] argued that generative AI tools, such as ChatGPT, Microsoft Copilot, and Gemini, are essential for addressing gaps in traditional educational systems. For example, traditional classes “in which students march through a textbook, page by page (or teachers through lecture notes) in a valiant attempt to traverse all the factual material within a prescribed time are difficult to avoid” [13] (p. 15); they often fail to ignite curiosity, leave little time to address additional questions and potential misunderstandings, and cannot dedicate sufficient attention to each individual. On the other hand, the findings of Abdelghani’s study emphasized the importance of GenAI in promoting a desire for knowledge-based learning [14] and in developing creativity while crafting prompts and formulating more sophisticated, research-oriented questions. Without fear of asking trivial questions in front of a full classroom, each student can now use GenAI tools to ask unlimited additional questions, requesting step-by-step explanations, examples, and personalized feedback to meet their unique requirements, learning pace, and preferences. This capability is especially valuable for higher education students, who are expected to develop a deeper understanding of complex topics and engage in scholarly discourse. Importantly, GenAI tools can also assist students with disabilities by offering text-to-speech and speech-to-text features to ensure fair and equal educational opportunities.
Teachers and educators, on the other hand, can leverage GenAI tools to design adaptive course materials, simulations, and interactive content that enhances student engagement and comprehension. For example, Mittal et al. suggested that educators can use GenAI to create animated, audio–visual instructional content (for history and geography courses) that “combined with metaverse technology, allows students to experience these disciplines virtually” [12] (p. 742), [15]. Similarly, they suggested that educational practices should be enriched with advanced technologies, such as virtual labs and simulations, which would allow students to explore complex principles in a hands-on manner. AI-enabled gamification is another example of how emerging technologies could alter classroom dynamics and increase student engagement and motivation [16], as well as prepare students for the demands of a knowledge-based economy. Along with support in preparing curriculum and class materials, AI-driven tools can streamline teachers’ grading process, especially for open-ended questions [17], provide frequent and fast personalized feedback, pinpoint areas needing improvement, adapt instructional content dynamically [3], and create additional assessments customized for each student based on their performance history.
Interestingly, discussions about educators’ use of GenAI tools still present contrasting viewpoints. One stream shares the concern that AI might replace essential educational experiences, depersonalize education, and diminish meaningful instructor–student interactions [18]. Other authors emphasize the need for a collaborative approach in which GenAI tools serve as a helpful resource for busy teachers, automating repetitive routine tasks and manual and administrative work, freeing up more time for higher-level activities, such as mentorship, guidance, and fostering critical thinking skills [3].
GenAI tools also offer a wide range of applications for researchers by supporting critical aspects of scholarly writing, like brainstorming, drafting, and revising text while ensuring logical flow and a coherent structure. Additionally, these tools assist in aggregating, summarizing, and synthesizing extensive amounts of literature and academic articles, enabling researchers to dedicate more time to higher-level tasks, such as argument development, analysis of findings, and creative expression [19]. Moreover, researchers can utilize AI tools for practical tasks, such as translating their work into foreign languages for publication in international journals, by enhancing readability and style through assisted copy editing [20], which includes correcting grammar, spelling, punctuation, and syntactic errors.

2.2. The Importance of Trust in Technology Adoption and Usage

AI systems offer multiple benefits across diverse domains and for various stakeholders. However, they often produce unpredictable, biased, or harmful outputs, disproportionately impacting marginalized groups and reinforcing societal inequities [21]. A common thread in many AI failures is the role of biased training data, leading to systemic outcome disparities. For example, Amazon’s AI Recruiting Tool, developed in 2014 to increase hiring efficiency, discriminated against women by favoring male candidates. This bias arose from training data dominated by male resumes, reflecting historical workplace inequalities [22]. Similarly, a South Korean AI chatbot trained on biased conversational data adopted hateful language targeting marginalized communities, including LGBTQ individuals and people with disabilities [23,24].
The opposite bias was observed in early versions of large language models [25]. Early versions of ChatGPT (GPT-3.5) falsified data when summarizing a NASA-funded study on CO2 levels and Earth’s greening, altering the findings to align with the narrative of man-made climate change. Instead of accurately presenting the study’s conclusions, which highlighted the beneficial greening effects of rising CO2 levels, ChatGPT fabricated information that diminished these positive aspects. Additionally, it inserted an unsolicited disclaimer emphasizing the dangers of climate change, despite the prompt not requesting such commentary, raising concerns that the model had been deliberately fine-tuned to reinforce a specific ideological stance. Likewise, Google Gemini exhibited racial bias in image generation by inaccurately depicting historical figures, such as Vikings and World War II soldiers, as people of color and similarly altering representations of America’s Founding Fathers. This led to public backlash and forced Google to adjust the model, highlighting concerns that AI-generated content may reflect the political and moral biases of those training the models. A recent study [26] found that more recent versions of ChatGPT exhibit a more neutral attitude compared to earlier iterations, likely due to updates in its training corpus. The researchers demonstrated a significant reduction in political bias, with newer versions displaying more balanced political tendencies in standardized political orientation tests. For instance, while earlier versions showed a strong left-libertarian orientation, the study found that newer iterations had shifted closer to political neutrality. The authors suggest that this shift may be attributed to OpenAI’s efforts to diversify training data and refine algorithms to mitigate biases toward specific political stances. Their findings indicate a broader trend of generative AI models becoming more ideologically balanced over time.
Even small demographic imbalances in AI training datasets can have an outsized effect on output accuracy, particularly in areas such as facial recognition. Studies have shown that these algorithms result in higher error rates for darker-skinned and female subjects than for lighter-skinned and male subjects, illustrating how skewed datasets amplify inequities [27]. A related issue was observed in IBM’s Watson for Oncology, which provided inaccurate and unsafe cancer treatment recommendations. The system’s reliance on synthetic training cases and limited real-world insights emphasized the importance of diverse, high-quality training inputs for building trustworthy AI solutions [28]. Beyond biased outputs, the potential for AI misuse also highlights the critical need for trust in technology. Generative AI tools, such as ChatGPT, Google Bard, and Claude, have been manipulated to generate convincing phishing emails by mimicking users’ writing styles, exacerbating cyberattacks. Research shows that AI-generated phishing emails are often as persuasive as those crafted by humans, highlighting how AI capabilities can be misused for harm [29,30].
The examples above highlight how biases in training data, discriminatory outputs, and the potential for malicious misuse significantly undermine user trust in GenAI systems. When these technologies fail to deliver fair, accurate, and safe results or are exploited for harmful purposes, they erode confidence in their reliability and ethical application, which shapes users’ perceptions, attitudes, and motivation to engage with AI technology.

2.3. Defining “User Trust” in Generative Artificial Intelligence

“Trust is a central component of the interaction between people and AI, in that ‘incorrect’ levels of trust may cause misuse, abuse or disuse of the technology” [31] (p. 624). Trust in AI can take different forms, such as appropriate trust, calibrated trust, contractual, justified, responsible, and warranted trust—each addressing specific ways in which users rely on AI. Mehrotra et al. [4] defined the alignment between the perceived and actual performance of the system as “appropriate trust,” where users understand both the system’s actual capabilities and limitations. This understanding can mitigate two extremes: overtrust (placing undue reliance on AI) and undertrust (avoiding AI use even when it is beneficial). By fostering appropriate trust, the harm and negative consequences of misuse and disuse can be significantly reduced.
In the early 1990s, the human–computer interaction (HCI) community began extensive research on the factors influencing human trust in automation. Many studies draw on psychological, sociological, and philosophical explanations of interpersonal relationships to understand how people develop trust in technology and AI systems. Interpersonal trust has long been recognized as a dynamic process that develops through understanding, interaction, and shared experiences. Jacovi et al. [31] highlighted that these insights into how humans trust each other provide valuable guidance for designing AI systems that encourage trust. For example, just as interpersonal trust relies on transparency and consistency, trust in AI depends on how well the system communicates its functions and performs reliably over time. Mehrotra et al. [4] claimed that insights from early HCI research on automation failures also highlight the importance of trust calibration—aligning user reliance with system reliability. For example, users who are provided accurate feedback about a system’s performance limitations tend to adjust their trust levels accordingly.
The evolution of AI from simple tools to decision aids, virtual agents, and AI teammates has made trust even more critical. Trust is not a one-size-fits-all concept because different types of trust address specific aspects of human–AI interactions. For example, contractual trust, as described by certain authors [31], emerges when users believe that the AI system will reliably fulfill specific tasks or roles. This type of trust is particularly important in high-stakes applications, such as autonomous driving or healthcare, where failure to meet predefined expectations can have serious consequences. In contrast, appropriate trust is more general and emphasizes that users align their trust with the actual capabilities of the AI system [4]. Appropriate trust ensures that users neither over-rely on nor unnecessarily dismiss AI systems, leading to safer and more effective interactions. This is particularly important with the rise of GenAI tools, which demonstrate remarkable capabilities but can also generate misleading or harmful content. Similarly, the development of deepfake technology illustrates how trust in AI can be exploited for malicious purposes [32,33]. Cases in which users cannot distinguish between original and fabricated content lead to global confusion, skepticism, and threats to privacy and security, which results in eroded public trust in AI systems.
Building on this foundation, this study conceptualizes trust primarily through the lens of appropriate trust, which emphasizes the alignment between users’ expectations and the actual capabilities and limitations of GenAI tools. This framing is especially relevant in educational contexts where both overtrust (e.g., uncritical reliance on AI outputs) and undertrust (e.g., avoiding AI use despite its benefits) can lead to suboptimal outcomes. Furthermore, the three trust dimensions operationalized in this study (accuracy and relevance, privacy protection, and nonmaliciousness) capture distinct but interrelated concerns. Trust in accuracy reflects a technical assessment of whether GenAI functions correctly and delivers relevant outputs. Privacy-related trust extends beyond technical reliability to include users’ trust in developers and institutions to safeguard personal data. Finally, trust in nonmaliciousness pertains to concerns that GenAI will not be misused for harmful, unethical, or deceptive purposes. These distinctions reflect different layers of the human–AI trust relationship, ranging from functional reliability to social and ethical considerations. Acknowledging this multidimensionality is critical to understanding how trust shapes adoption decisions in the educational context.

2.4. Dimensions of User Trust in GenAI Tools in the Academic Environment

The integration of generative AI tools in the educational environment has sparked considerable interest as well as many debates among teachers, researchers, and policymakers. Central to the successful adoption of AI technology is the issue of trust, particularly concerning the accuracy and relevance of generated content, the privacy and protection of personal data, and the nonmaliciousness of GenAI tools. Given the multifaceted nature of trust outlined above, this study adopts three key dimensions (accuracy and relevance, privacy protection, and nonmaliciousness) to comprehensively capture the core factors influencing user trust in GenAI tools within academic environments.
(1) Trust in the accuracy and relevance of the generated content: The accuracy and relevance of information are cornerstones for scholarly integrity and effective learning. Accuracy refers to the correctness of the information provided, whereas relevance pertains to its applicability to the curriculum and alignment with learners’ needs. If GenAI tools provide incorrect, misleading, or irrelevant content, they can undermine academic quality and credibility. For example, students and researchers often use generative AI tools to draft ideas, summarize information, or generate citations. AI-generated factual inaccuracies and biases can lead to misleading arguments or incorrect references, undermining the quality of academic work. Similarly, irrelevant or incorrect information, which may blend truth with fabricated data [34], can hinder learners’ understanding of key concepts and contribute to the dissemination of misinformation. This results in both reduced trust and hesitation in the use of GenAI tools for academic purposes, especially among teachers [35], as well as reputational damage to institutions that encourage their use. A brief example is a survey among 4000 Stanford students, where 17% reported using AI for brainstorming and outlining content, whereas 5% reported submitting written material directly from ChatGPT with little to no edits [36]. Concerns about misinformation, biases, and AI “hallucinations”, where AI produces plausible but ultimately incorrect information [37], are widespread. Such hallucinations occur because the LLMs underlying chatbots are programmed to predict text sequentially and cannot always evaluate the greater context of the prompt [38,39]. Reliance on answers produced by chatbots without verification or the knowledge to verify them can result in AI hallucinations being included in research articles, undermining their accuracy. In November 2022, Meta launched Galactica, a large language model trained on millions of scientific articles, websites, textbooks, lecture notes, and encyclopedias aimed at helping researchers and students summarize academic papers, solve math problems, generate articles, write scientific code, and more. However, just three days later, the LLM was shut down after users discovered that it produced biased and incorrect results, including the creation of fake papers and the misattribution of references to real scholars [40]. Michael Black, director at the Max Planck Institute for Intelligent Systems, criticized the model for being “wrong or biased but sounded right and authoritative,” labeling it as dangerous [41]. This and similar examples highlight the broader concerns about the reliability of AI-generated content, as well as the importance of accuracy and relevance for academic settings.
(2) Trust in the protection of privacy and personal data: Central to the AI educational design framework is the concept of personalized learning that analyzes student performance, identifies areas for improvement, and dynamically customizes learning content that closely matches user preferences and needs. To deliver such outputs, “whether students, teachers, parents, and policymakers welcome it or not, so-called intelligent, adaptive, or personalized learning systems are increasingly being deployed in schools and universities around the world, gathering and analyzing enormous amounts of student big data, which significantly impacted the lives of students and educators” [13] (p. 9). There are many examples in the academic environment where sensitive personal data (including students’ biometrics, personal records, and behavior patterns), research subjects’ information, or novel academic findings are easily shared with GenAI tools. Data breaches or mishandling of personal data, such as input prompts containing private or confidential information, could violate privacy laws (e.g., the GDPR) and potentially lead to identity theft, unauthorized surveillance, or misuse [42]. Similarly, the unauthorized use of data shared by GenAI tools could result in plagiarism, intellectual property theft [43], and reputational damage to individuals or institutions and undermine academic integrity. The MIT Raise project, focused on responsible AI usage for social empowerment and education, highlights that “due to the vast amounts of student data being captured, the educational market ranks as the third-highest target for data hackers, trailing only the health and financial sectors" [44]. MIT also shed light on incidents such as the cyberattack on Illuminate Education, a leading provider of student tracking software, which resulted in the exposure of the personal information of more than a million current and former students, including names, dates of birth, races or ethnicities, test scores, and, in some cases, more intimate details, such as tardiness rates, migrant status, behavior incidents, and descriptions of disabilities [44,45].
(3) Trust in the nonmaliciousness of GenAI tools: Nonmaliciousness involves ensuring that GenAI tools are not intentionally harmful or inadvertently harmful, which is essential for maintaining a safe, fair, and inclusive academic environment. This includes, but is not limited to, (i) preventing bias by avoiding stereotypes and discriminatory content that could marginalize certain groups [46]; (ii) discouraging cheating or generating falsified data; and (iii) avoiding harmful outputs, such as fabricated information or unethical and inappropriate suggestions that could jeopardize academic work or an individual’s personal reputation. Concerns about AI being exploited for harmful purposes, such as generating deepfakes and synthetic media, which are often indistinguishable from authentic content, are increasing. This undermines public trust in media and institutions, with potential implications for democracy and societal cohesion [47,48]. There have been alarming recent instances of students using AI to create explicit deepfake images of their peers, leading to psychological distress and breaches of privacy. Specifically, one high school student in Sydney generated deepfake pornographic images of female classmates via innocent photos from social media [49], which illustrates the harm and misuse of GenAI tools to produce nonconsensual explicit content and to spread false information, undermining user trust and authenticity online. Moreover, the malicious use of GenAI tools is closely related to the protection of privacy and personal data. Another example is that LLMs can be manipulated to infer sensitive data from seemingly benign input–output observations, leading to data leakage. Likewise, unauthorized access can occur through various means, such as phishing attacks targeting school administrators or compromised credentials [50].
This study aims to address the identified literature gap by exploring, analyzing, and comparing the factors that shape the perception of trust in GenAI tools within academic settings across diverse user segments, including students at all levels, teachers, and researchers. Specifically, this study seeks to deepen understanding of how trust—defined across three dimensions, namely, accuracy and relevance, protection of privacy and personal data, and nonmaliciousness—influences user decisions regarding the usage and future adoption of these tools.
While trust in generative AI tools is conceptually multidimensional (encompassing accuracy, privacy protection, and nonmaliciousness), this study operationalizes it as a composite construct that reflects the overall level of trust users hold toward these tools. This approach aligns with prior research that models trust as a unidimensional predictor in technology adoption contexts when the focus is on general usage decisions rather than context-specific tasks. Nevertheless, the distinctions among the trust dimensions are acknowledged and discussed in the interpretation of the results, recognizing that trust in technical accuracy differs in nature from trust related to data privacy or the prevention of harmful use. This decision allows this research to focus on broader patterns of trust formation while leaving the exploration of trust subdimensions as a direction for future research.
The proposed research model draws conceptually from the Unified Theory of Acceptance and Use of Technology (UTAUT), which identifies performance expectancy, effort expectancy, social influence, and facilitating conditions as key predictors of technology adoption. Building on this foundation, Venkatesh, Thong, and Xu (2012) [51] extended the original model by introducing three additional factors—hedonic motivation, price value, and habit—that influence both the intention to use technology and actual usage behavior. This enhanced framework, known as UTAUT2, has been widely applied in the context of technology adoption. The present study extends the UTAUT2 framework by incorporating trust as a critical antecedent of behavioral intention, acknowledging the distinctive ethical, technical, and social concerns associated with generative AI tools in academic environments. Grounded in the conceptualization of trust as a multidimensional yet operationally unified construct, the following research questions and hypotheses are proposed to examine how trust and related factors influence the adoption and use of generative AI tools in academia:
RQ1: How does user trust in generative AI tools differ across academic segments?
RQ2: What is the relationship between gender and trust in generative AI tools?
RQ3: How does the intensity of generative AI tool usage predict trust levels?
RQ4: In what ways does the length of experience with generative AI tools influence user trust?
RQ5: How does self-perceived proficiency in using generative AI tools affect trust?
RQ6: What is the relationship between trust in generative AI tools and the behavioral intention to use them?
Therefore, the following hypotheses are developed:
H1: 
Academic roles are expected to influence trust in generative AI tools, with teachers and researchers exhibiting lower levels of trust than students across different educational levels (undergraduate, graduate, and doctoral).
Academic studies have outlined the different perspectives on using GenAI tools in education. For example, Chan and Lee [52] addressed the impact of generational differences on ChatGPT adoption by comparing Gen Z students (born between 1995 and 2012) and their Gen X and Y teachers (born between 1960 and 1995). Their findings suggest that each generation’s attitudes and approaches toward AI are shaped by their generational experiences with technology. Gen Z grew up with constant access to digital technology and social media that cultivated a digital-first mindset as well as preferences for hyperconnected, on-demand experiences. With an average attention span of only 8 s [53], they value immediate and personalized feedback, whereas their problem-solving nature and learning independence [54] lead to optimism toward GenAI tools. In contrast, their Gen X and Gen Y teachers, who experienced the transition from traditional to technology-based education, embrace more cautious attitudes. While they acknowledge the benefits of ChatGPT, they also express significant concerns about its ethical and pedagogical implications. Their skepticism aligns with research showing that these generations often approach new technologies with caution, focusing on potential risks and challenges. To incorporate GenAI effectively, Gen X/Y educators require clear guidelines and policies on responsible use combined with tailored training and support that address their varied comfort levels with technology. The findings from another study [35] revealed similar results—a general trend of enthusiasm among students toward GenAI, contrasted with a slightly more cautious approach from professors—indicating a potential generational gap.
H2: 
Gender is anticipated to influence trust in generative AI tools, with men demonstrating higher levels of trust than women.
There are multiple scholars’ perspectives and different research results on the influence of gender on the adoption and use of GenAI tools. First, as presented in a recent study conducted by Harvard Business School on gender gaps in GenAI tool usage [55], there are no significant gender differences in trust in the accuracy of AI-generated content or concerns about privacy risks, such as data breaches, abuse, or compromised storage [56]. However, other scholars, such as Møgelvang et al. [57], who focused on gender differences in the use of GenAI chatbots in higher education, discovered that men exhibit heightened interest and more frequent engagement across a broader spectrum of GenAI applications. They are also aware of their relevance to future career prospects. On the other hand, women primarily utilize GenAI chatbots in text-related tasks and express greater concerns regarding critical and independent thinking. They also demonstrated a stronger need to understand the circumstances in which those tools can be helpful and to be aware of when they can (or cannot) trust them. These findings suggest that women’s use of GenAI chatbots depends more on training and integration, whereas men express less concern about their critical use and possible loss of independent thinking [57].
H3: 
The frequency and intensity of generative AI tool usage are hypothesized to affect trust levels, with individuals who use these tools more intensively showing greater trust than less frequent users.
Research among higher education teachers revealed that those with a comprehensive understanding of AI concepts and applications are more likely to trust the technology’s reliability and effectiveness in educational settings [58,59]. Frequent usage and understanding of both AI technology benefits and constraints result in more effective implementation of those tools in academic work and environment [60]. Proficiency, as a consequence of continuous usage, enhances users’ exposure to different scenarios, which reduces their hesitation or misconceptions that might arise from unfamiliarity [10] and allows critical assessment of various AI products.
H4: 
The duration of experience with generative AI tools is expected to influence trust, with longer-term users exhibiting higher levels of trust in these tools.
Algorithm aversion is a term that refers to avoiding the use of algorithms after observing their errors, even when they outperform human advisors [61]. This is closely related to the concept of user trust, which can be significantly eroded by mistakes. Therefore, users often require a prolonged period of error-free interactions to regain trust in a system that has previously failed [62]. Following the rationale for the previous hypothesis and other research results, prolonged interactions have been assumed to enable the dynamic development of familiarity, confidence, and reliance on GenAI tool usage [63].
H5: 
Self-assessed proficiency in using generative AI tools is anticipated to be positively associated with trust, with individuals who perceive their skills as higher demonstrating greater trust in the tools.
The rationale for this hypothesis is the assumption that greater competence may lead to greater comfort and reliance on GenAI tools. In one study [64], the respondents who considered themselves “tech-savvy” demonstrated greater reliance on AI recommendations, which suggests a connection between perceived technical competence and trust in algorithmic predictions. Individuals who feel confident in their technical abilities may be more inclined to formulate specific prompts that generate desired personalized responses and recommendations aligned with their needs, thereby making them more receptive to AI tools. Another study among undergraduate students [65] revealed that technological self-efficacy—trust in one’s ability to utilize new AI tools effectively for various reasons—affects the adoption of AI tools. Moreover, undergraduates with a higher degree of technological self-efficacy are more likely to explore and experiment with different novel AI tools when they are launched into the public domain.
H6: 
Trust positively correlates with the behavioral intention to use generative AI tools, so individuals with higher trust levels are more likely to express stronger intentions to adopt and utilize these tools.
Choung et al. [66] examined the role of trust in the use and acceptance of AI voice assistants among college students. They concluded that trust had a significant effect on the intention to use AI; therefore, future studies investigating the acceptance of AI must include trust as an integral component of their predictive models. Nazaretsky et al. [67] reported similar findings that emphasized the pivotal role of trust in shaping behavioral intentions to adopt AI technologies. Moreover, Bach et al. [68] conducted a systematic review, which also identified user trust as one of the key elements of AI adoption because it enables calibration of the user–AI relationship, regardless of context.

3. Methodology

This study utilized a quantitative research design, employing a structured online questionnaire as the primary data collection tool. The questionnaire consisted of 15 items distributed across four key dimensions: (1) user trust (3 items), (2) behavioral intention (3 items), (3) use behavior (4 items), and (4) self-assessed skill (5 items). This structure was designed to comprehensively capture respondents’ perceptions of trust, their current usage patterns, their future intentions to use generative AI tools, and their perceived proficiency with these technologies. Detailed descriptions of each dimension and corresponding items are provided in Section 3.1, Section 3.2, Section 3.3 and Section 3.4. The questionnaire was developed via Google Forms and distributed through official university email lists and the Moodle-based learning platform “Merlin” to engage the entire academic community at University North in Croatia. A pilot study was conducted with a small group of respondents to evaluate the reliability and validity of the questionnaire, and adjustments were made based on their feedback. The participants were divided into four distinct academic groups, which were treated as independent samples for analysis: (1) undergraduate students, (2) graduate students, (3) doctoral students, and (4) teachers and researchers. Doctoral students were initially surveyed as a separate academic group (n = 26); however, because of their relatively small sample size, they were merged with graduate students (n = 133) for the purposes of statistical analysis. This decision was made to ensure sufficient statistical power and to avoid unreliable parameter estimates associated with small group sizes. To verify respondent authenticity and eligibility, the Google Forms single sign-on (SSO) feature, which requires institutional credentials, was employed. Data analysis was conducted via IBM SPSS software (version 29.0.2.0) to ensure a thorough statistical evaluation of the findings. ChatGPT-4o was used for translation from Croatian to English, English grammar correction, and language enhancements, as well as to provide an additional interpretation of the results generated by IBM SPSS.
The analysis focuses on the role of trust—defined across three dimensions (accuracy and relevance, protection of privacy and personal data, nonmaliciousness)—in shaping users’ perceptions, usage patterns, and adoption decisions of generative AI tools, such as ChatGPT, Microsoft Copilot, and Gemini, among different segments of the academic population.
The research sample of 823 respondents was structured based on key demographic and academic characteristics, such as gender, age, and role at the university, as shown in Table 1. The sociodemographic structure of respondents comprised 55.29% women (n = 455), while men accounted for 44.71% (n = 368) of the sample. This balanced representation allowed for meaningful comparisons by gender in subsequent analyses. The largest age group was 18–25 years, accounting for 70.35% of the sample. The percentage of participants in the 26–35 years age range was 11.30%, whereas those aged 36–45 years accounted for 8.02%. Older age groups (56–65 and 66+) together represented less than 3% of the sample. This age distribution indicates a predominantly younger participant pool and a diverse but student-centered sample, which reflects the academic demographic of this study. Undergraduate students constituted the largest portion of the sample, representing 69.14% (n = 569), followed by graduate students, at 16.16% (n = 133), and doctoral students, at 3.16% (n = 26). Teachers and researchers, on the other hand, accounted for 11.54% (n = 95) of the sample.

3.1. Measuring Dimensions of User Trust

To measure three dimensions of user trust in GenAI tools within an academic environment, trust was assessed through accuracy and relevance, protection of privacy and personal data, and nonmaliciousness of GenAI tools. This was performed via three items rated on a 5-point Likert scale (1—Strongly disagree, 2—Disagree, 3—Neutral, 4—Agree, 5—Strongly agree), where the respondents indicated their level of agreement:
(1)
I trust that GenAI will produce accurate and relevant content.
(2)
I trust that GenAI will protect my personal data and privacy.
(3)
I trust that GenAI will not cause harm to me or others.
While conceptually, these three items reflect distinct facets of trust (accuracy, privacy, and nonmaliciousness), they were combined into a single composite score for the purposes of this analysis, reflecting the overall trust level, following the rationale discussed in Section 2.3. This approach aligns with common practices in technology adoption research when the goal is to measure overall behavioral intention rather than task-specific trust dimensions. A composite Trust Factor Score (TRfs) was calculated as the mean value of three items: TR1 (trust in accuracy and relevance), TR2 (trust in privacy and data protection), and TR3 (trust in nonmaliciousness). The internal consistency of this trust scale was acceptable, with a Cronbach’s alpha of 0.791, indicating satisfactory reliability (Table 2). The corrected item-total correlations ranged from 0.548 to 0.694, suggesting that each item contributed meaningfully to the overall construct. Removing any item would not substantially improve reliability, so all three items were retained in the composite score for further analysis. However, it is acknowledged that the trust construct combines conceptually different aspects: accuracy relates to the technical reliability of the AI system, privacy protection reflects trust in how personal data are handled, and nonmaliciousness pertains to concerns about harmful or unethical use. This multidimensionality is recognized as a potential limitation of the single-factor operationalization. Additionally, it is important to note that Principal Component Analysis (PCA) was not applied because factor analysis is not statistically appropriate for scales with fewer than three items per latent factor. Instead, Cronbach’s alpha and item-total correlations were considered sufficient indicators of internal consistency for this construct.
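To make the scoring procedure concrete, the following is a minimal sketch of how the composite TRfs score and Cronbach’s alpha reported above could be reproduced. Python/pandas is used here purely for illustration (the study itself used IBM SPSS), and the small data frame is hypothetical; only the item labels TR1–TR3 follow the questionnaire described in this section.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents)."""
    k = items.shape[1]                          # number of items
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical responses on the 5-point Likert scale (1-5)
df = pd.DataFrame({
    "TR1": [4, 3, 5, 2, 4],   # trust in accuracy and relevance
    "TR2": [3, 3, 4, 2, 3],   # trust in privacy and data protection
    "TR3": [4, 2, 5, 3, 4],   # trust in nonmaliciousness
})

# Composite Trust Factor Score = mean of the three items per respondent
df["TRfs"] = df[["TR1", "TR2", "TR3"]].mean(axis=1)
print(df["TRfs"].tolist(), round(cronbach_alpha(df[["TR1", "TR2", "TR3"]]), 3))
```

The Behavioral Intention composite (BIfs) described in Section 3.2 follows the same pattern, averaging items BI1–BI3.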

3.2. Measuring Behavioral Intention

Behavioral intention to use GenAI tools in research and work was assessed with the participants’ perceptions of their future use. This included three items, rated on the same 5-point Likert scale (1—Strongly disagree, 2—Disagree, 3—Neutral, 4—Agree, 5—Strongly agree), where the respondents indicated their level of agreement:
(1)
I intend to continue using GenAI.
(2)
I anticipate that I will use GenAI regularly.
(3)
I will always use GenAI in research and work.
A composite Behavioral Intention Factor Score (BIfs) was calculated as the mean value of the three items (BI1, BI2, and BI3). The internal consistency of this scale was excellent, with Cronbach’s alpha of 0.903, indicating high reliability. Corrected item-total correlations ranged from 0.775 to 0.869, confirming that all three items contributed substantially to the overall construct (Table 3). Additionally, the Cronbach’s Alpha if Item Deleted values indicated that removing any of the items would reduce reliability, further supporting the retention of all three items in the composite score.

3.3. Measuring Use Behavior

To measure the frequency of GenAI use and to capture the type of GenAI tool used (OpenAI’s ChatGPT, Microsoft Copilot, and Google Gemini as the most prominent, as well as all other GenAI tools), a frequency scale was used, namely, 1—Never, 2—Rarely (once a month or less), 3—Occasionally (several times a month), 4—Frequently (several times a week), and 5—Very frequently (daily), with the following questions:
(1)
I use Google Gemini (previously known as Bard AI).
(2)
I use Microsoft Copilot (previously known as Bing AI).
(3)
I use OpenAI ChatGPT.
(4)
I use other generative artificial intelligence (GenAI) tools.
The questionnaire was designed to distinguish between the GenAI tools that were most prominent at the time of measurement in order to determine which tools were used the most and whether there were different patterns of usage. This research focuses on the usage and frequency of use of any GenAI tool. As an approximation of any GenAI tool use, the AnyGenAIUB variable was constructed as the maximum of the frequencies stored in GeminiUB, CopilotUB, ChatGPTUB, and OtherGenAIUB.
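As a minimal illustration of this derivation (again in Python rather than SPSS, with hypothetical responses), AnyGenAIUB is simply the row-wise maximum across the four frequency items:

```python
import pandas as pd

# Hypothetical frequency-of-use responses (1 = Never ... 5 = Very frequently)
use = pd.DataFrame({
    "GeminiUB":     [1, 2, 1, 3],
    "CopilotUB":    [2, 1, 1, 4],
    "ChatGPTUB":    [5, 3, 1, 4],
    "OtherGenAIUB": [1, 1, 2, 2],
})

# Use of *any* GenAI tool = highest reported frequency across the four tools
use["AnyGenAIUB"] = use.max(axis=1)
print(use)
```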

3.4. Self-Assessment of Skill

Self-perceived skill in GenAI usage was measured through a single Likert-type item asking participants: “How would you assess your current level of knowledge and use of generative artificial intelligence (GenAI) tools?” This self-assessment was rated on a 5-point scale, where the ratings mean the following:
(1)
I do not know how to use GenAI tools.
(2)
I recognize GenAI tools and have tried using them.
(3)
I use GenAI tools for simple tasks.
(4)
I am skilled in using GenAI tools.
(5)
I am an expert in using a variety of GenAI tools that I integrate into work processes.
It is important to note that although the first option (“I do not know how to use GenAI tools”) reflects an absence of skill, it represents the lower bound of the skill continuum within the Likert-type scale, consistent with similar self-efficacy measures used in technology adoption research. The responses were designed to capture the whole spectrum of possibilities, ranging from GenAI illiterates to technically proficient users who are able to make advanced use of GenAI in their daily life and work. While this variable captures subjective self-assessment rather than objective skill, prior research has demonstrated that perceived self-efficacy plays a meaningful role in technology adoption behaviors. However, the limitations of using a single-item scale for measuring perceived competence are acknowledged in the discussion.

3.5. Choosing Adequate Statistical Methods

The choice of statistical methods in this study was determined by the objective to examine the relationships between user trust in generative AI tools, behavioral intention, and the participants’ sociodemographic and usage-related characteristics. Given that multiple independent variables were examined simultaneously—including gender, academic role, frequency of use, duration of use, and self-assessed skill level—a model-based approach was applied to account for potential interrelationships between predictors and to evaluate their unique contributions to the dependent variables. Initial analyses considered the use of non-parametric group comparisons (e.g., Kruskal–Wallis), but this approach was deemed insufficient given the interdependence of predictor variables such as gender, academic role, and frequency of use. Therefore, a multiple regression approach was chosen to enable the simultaneous evaluation of how each predictor contributes to trust and behavioral intention, while accounting for shared variance among them.
Multiple linear regression analysis was selected as the most appropriate method for this purpose. This approach allows for testing how each independent variable is associated with the dependent variables, while controlling for the influence of all other variables in the model. This is particularly relevant in the context of this study, as certain predictors, such as academic role and gender, may not be statistically independent. Regression analysis ensures that the effects of individual predictors are estimated accurately without being confounded by overlapping variance with other variables. Two linear regression models were constructed. The first model examined trust in GenAI tools (TRfs) as the dependent variable, with academic role (with doctoral students merged with graduate students), gender, frequency of GenAI use, duration of use, and perceived skill as independent variables. The second model examined behavioral intention to use GenAI (BIfs) as the dependent variable, including the same set of predictors as in the first model, with the addition of trust (TRfs) as a key predictor of behavioral intention.
This analytical strategy enabled a comprehensive understanding of how demographic factors, user experience, and trust are associated with the intention to use generative AI tools. Prior to performing the regression analyses, all variables were assessed for suitability regarding measurement level and distribution. Categorical variables, including gender and academic role, were dummy-coded for inclusion in the regression models. To address the small size of the doctoral student subgroup (n = 26), doctoral students were combined with graduate students (n = 133) into a single analytical category. This approach aligns with standard practices in quantitative research to ensure robust and reliable regression estimates. The assumptions of multiple linear regression were tested and met, including linearity, normality of residuals, homoscedasticity, and the absence of multicollinearity. Variance inflation factor (VIF) values confirmed that multicollinearity was not a concern, as all VIF scores were below acceptable thresholds.
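To illustrate how such a specification could look in code, the sketch below fits the two models with dummy-coded predictors using Python/statsmodels. This is not the SPSS procedure actually used; the data frame is randomly generated, and the variable names gender, role, duration, and skill are assumed labels, with only TRfs, BIfs, and AnyGenAIUB taken from the definitions given earlier.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical respondent-level data frame; in the actual study these variables
# come from the questionnaire described above.
rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "gender": rng.choice(["female", "male"], size=n),
    "role": rng.choice(
        ["undergraduate", "graduate_doctoral", "teacher_researcher"], size=n
    ),
    "AnyGenAIUB": rng.integers(1, 6, size=n),   # 1 = Never ... 5 = Very frequently
    "duration": rng.integers(1, 5, size=n),     # length of experience (ordinal)
    "skill": rng.integers(1, 6, size=n),        # self-assessed skill (1-5)
})
df["TRfs"] = 2 + 0.2 * df["AnyGenAIUB"] + rng.normal(0, 0.5, size=n)
df["BIfs"] = 1 + 0.4 * df["AnyGenAIUB"] + 0.3 * df["TRfs"] + rng.normal(0, 0.5, size=n)

# Model 1: trust (TRfs) regressed on demographics and usage characteristics;
# Treatment('undergraduate') dummy-codes academic role with undergraduates
# as the reference group, mirroring the analysis reported in the Results.
m1 = smf.ols(
    "TRfs ~ C(gender) + C(role, Treatment('undergraduate')) "
    "+ AnyGenAIUB + duration + skill",
    data=df,
).fit()

# Model 2: behavioral intention (BIfs) with the same predictors plus trust (TRfs).
m2 = smf.ols(
    "BIfs ~ C(gender) + C(role, Treatment('undergraduate')) "
    "+ AnyGenAIUB + duration + skill + TRfs",
    data=df,
).fit()

print(m1.summary())
print(m2.summary())
```

Standardized coefficients (β) of the kind reported in Section 4 would additionally require z-scoring the continuous variables before fitting.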

4. Results

4.1. Trust in GenAI Across Demographic and Usage-Related Factors

A multiple linear regression analysis was conducted to examine whether gender, academic role, frequency of GenAI use, duration of use, and self-assessed skill predict trust in generative AI tools. The overall model was statistically significant, F(6, 816) = 12.612, p < 0.001, indicating that the set of predictors reliably explains variance in trust. The model accounted for 8.5% of the variance in trust (R2 = 0.085, Adjusted R2 = 0.078), which reflects a small but meaningful effect size in the context of social science research (Table 4).
Among the predictors, frequency of GenAI use emerged as the only statistically significant factor, β = 0.192, t = 3.961, p < 0.001 (Table 5). This indicates that participants who use GenAI tools more frequently report higher levels of trust in these technologies. None of the sociodemographic variables, namely, gender (β = –0.012, p = 0.721), academic role as teachers/researchers (β = –0.041, p = 0.228), or graduates/doctoral students (β = –0.055, p = 0.109), were statistically significant predictors of trust when compared to the undergraduate reference group. Similarly, neither the duration of GenAI use (β = 0.071, p = 0.150) nor self-assessed skill (β = 0.053, p = 0.288) was a significant predictor in the model. This suggests that frequency of use is the primary driver of trust, independent of demographic characteristics or subjective perceptions of skill.
Examination of the regression diagnostics confirmed that the model met the key assumptions of linearity, normality, homoscedasticity, and absence of multicollinearity. The variance inflation factor (VIF) values ranged from 1.04 to 2.19, well below the commonly accepted threshold of 5, indicating no concerns regarding multicollinearity. Visual inspection of the histogram of standardized residuals and the normal P–P plot indicated that the assumption of normality was reasonably met. Additionally, the scatterplot of standardized residuals versus predicted values suggested that the assumption of homoscedasticity was satisfied, with no evidence of systematic patterns in the residuals.
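For illustration, the multicollinearity check could be reproduced along the following lines, reusing the hypothetical data frame from the regression sketch in Section 3.5 (the actual VIF values were obtained in SPSS):

```python
import pandas as pd
from patsy import dmatrix
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Build the same dummy-coded design matrix used in Model 1 (df from the earlier sketch).
X = dmatrix(
    "C(gender) + C(role, Treatment('undergraduate')) + AnyGenAIUB + duration + skill",
    data=df,
    return_type="dataframe",
)

# One VIF per predictor column; the intercept (column 0) is skipped.
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=X.columns[1:],
)
print(vif.round(2))   # values below 5 suggest no multicollinearity concern
```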

4.2. Behavioral Intention to Use GenAI Across Demographic and Usage-Related Factors

A multiple linear regression analysis was conducted to examine whether gender, academic role, frequency of GenAI use, duration of use, and self-assessed skill predict behavioral intention to use generative AI tools. The overall model was statistically significant, F(6, 816) = 82.477, p < 0.001, indicating that the set of predictors reliably explains the variance in behavioral intention. The model accounted for 37.8% of the variance in behavioral intention (R2 = 0.378, Adjusted R2 = 0.373), a substantial effect size by Cohen’s (1988) benchmarks in the context of social science research, indicating that the model explains a meaningful proportion of the variance in behavioral intention (Table 6).
Among the predictors, frequency of GenAI use emerged as the strongest and most statistically significant predictor (β = 0.431, t = 10.774, p < 0.001), indicating that participants who use generative AI tools more frequently are more likely to express a strong intention to continue using them in research and academic work.
In addition, both academic roles—teachers/researchers (β = 0.097, t = 3.450, p < 0.001) and graduate/doctoral students (β = 0.088, t = 3.139, p = 0.002)—were statistically significant predictors compared to the undergraduate reference group (Table 7). This indicates that, controlling for other factors, teachers/researchers and graduate-level students report higher behavioral intention to use generative AI tools than undergraduates. Furthermore, self-assessed skill (β = 0.147, t = 3.601, p < 0.001) and duration of use (β = 0.080, t = 1.965, p = 0.050) were also significant, albeit smaller, predictors. These findings suggest that both the perceived competence with AI tools and how long users have been engaging with them play a meaningful role in shaping their intention for future use. In contrast to the model predicting trust, gender was not a significant predictor (β = 0.030, p = 0.286), suggesting that intention to use generative AI does not vary by gender in this sample.
These findings collectively suggest that behavioral intention toward GenAI adoption is primarily driven by frequency of use, with notable contributions from perceived competence, academic role, and experience duration. Overall, frequency of use emerged as the strongest predictor, followed by perceived skills, duration of use, and academic role, while gender had no significant influence. The variance inflation factor (VIF) values ranged from 1.04 to 2.19, well below the threshold of 5, indicating no multicollinearity concerns. The normal P–P plot and histogram of standardized residuals suggested that the assumption of normality was reasonably satisfied, while the scatterplot of standardized residuals versus predicted values confirmed homoscedasticity.

4.3. The Relationship Between Trust and Behavioral Intention

Trust was hypothesized to correlate positively with behavioral intention to use generative AI tools, such that individuals with higher trust levels express stronger intentions to adopt and utilize these tools. A Pearson correlation analysis between trust in generative AI (TRfs) and behavioral intention (BIfs) revealed a moderate, statistically significant positive correlation (r = 0.495, p < 0.001, 95% CI [0.432, 0.552], N = 823). Individuals who report higher trust in generative AI tools are thus more likely to intend to use them in their work or daily activities; as trust increases, the intention to adopt and integrate GenAI tools into research and work processes strengthens. These findings support H6, indicating that trust in GenAI tools significantly influences users' behavioral intention to continue using them.
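
The reported correlation and its confidence interval can be reproduced along the following lines. This is a minimal sketch using the Fisher z-transformation for the 95% CI; the DataFrame and column names are assumptions, as above.

```python
# Sketch: Pearson r between composite trust (TRfs) and behavioral intention (BIfs)
# with a Fisher-z 95% confidence interval, as reported in Section 4.3.
import numpy as np
from scipy import stats

x = df["TRfs"].to_numpy()
y = df["BIfs"].to_numpy()

r, p = stats.pearsonr(x, y)

z = np.arctanh(r)                       # Fisher z-transform of r
se = 1.0 / np.sqrt(len(x) - 3)          # standard error of z
lo, hi = np.tanh([z - 1.96 * se, z + 1.96 * se])
print(f"r = {r:.3f}, p = {p:.3g}, 95% CI [{lo:.3f}, {hi:.3f}]")
```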

5. Discussion

The results of this study provide valuable insights into the role of trust in shaping the adoption and usage of generative AI (GenAI) tools across various academic segments. By empirically testing the proposed hypotheses through regression and correlation analyses, this study advances the understanding of how different factors—academic role, gender, frequency of use, duration of use, and self-perceived proficiency—influence trust and behavioral intentions toward GenAI tools. The findings from 823 respondents at University North are contextualized within the theoretical framework and prior research, highlighting both consistencies and divergences.
The first hypothesis (H1), which proposed that academic role predicts trust in GenAI tools, was not supported. The regression analysis revealed that academic role (teachers/researchers or graduate-level students versus undergraduates) was not a significant predictor of trust. Although undergraduates showed marginally higher trust than doctoral students or teachers, this variation was not statistically meaningful. This contrasts with prior research suggesting that educators tend to exhibit more cautious attitudes toward AI due to ethical concerns or pedagogical responsibilities [35,53]. One plausible explanation is the homogeneity in the academic environment in this sample, where exposure to shared academic experiences and technologies may flatten role-based differences in trust.
Similarly, the hypothesis that gender predicts trust (H2) was also not supported. The results indicated no significant difference in trust levels between male and female participants, which aligns with recent studies suggesting a diminishing gender gap in digital trust and technology adoption [56,57]. This finding diverges from older studies that often reported higher technology adoption rates among men [58]. A possible explanation is that the participants in this sample, regardless of gender, have reached a similar baseline in their exposure to and understanding of GenAI tools, reducing traditional gender-based disparities.
The hypothesis (H3) that the frequency of GenAI use predicts trust was strongly supported. Frequency of use emerged as the only significant predictor of trust in the regression model, with individuals who use GenAI tools more frequently reporting substantially higher trust levels. This is consistent with prior research highlighting how frequent engagement fosters greater confidence and reduces technological hesitation [59,62]. This supports the idea of “appropriate trust” [4], whereby continued interaction with a technology allows users to calibrate their expectations, become more familiar with both its capabilities and limitations, and develop justified trust over time. This finding emphasizes that encouraging regular, meaningful interaction with GenAI tools is critical for fostering trust within academic communities.
Contrary to expectations, the hypothesis (H4) that the duration of experience with GenAI predicts trust was not supported in the regression analysis. While duration showed a significant effect in earlier non-parametric tests, it did not remain significant when controlling for other variables in the regression model. This suggests that the recency and frequency of use are more important than the sheer length of time someone has been acquainted with GenAI. In other words, individuals who use these tools frequently develop trust regardless of whether they have been using them for weeks or years. This nuance highlights an important distinction: sustained engagement matters more than passive familiarity over time.
A weak-to-moderate bivariate correlation was found between self-perceived proficiency and trust, supporting the hypothesis (H5) at a correlational level but not as a significant predictor in the regression model. This means that individuals who perceive themselves as more skilled tend to report higher trust, but, when accounting for frequency of use and other factors, perceived skill alone does not independently predict trust. This partially aligns with previous research on technological self-efficacy and trust [66,67], which suggests that perceived competence boosts confidence in AI use. However, the current findings indicate that perceived skill may act more as a complementary factor rather than a primary driver of trust. This underlines the importance of facilitating hands-on usage rather than merely building self-perceptions of competence.
The final hypothesis (H6) that trust predicts behavioral intention was supported. The results revealed a moderate, statistically significant positive correlation between trust and behavioral intention to continue using GenAI tools. This is in line with a broad body of research on trust as a key driver of technology adoption [68]. The strength of this relationship (r = 0.495) suggests that trust plays a central role in shaping users’ willingness to integrate GenAI tools into their academic workflows and daily practices. This finding emphasizes that fostering trust is not merely an ethical imperative but a practical necessity for promoting sustained and responsible use of GenAI technologies in higher education.
In summary, this study underscores that frequency of use is the dominant factor in shaping trust in GenAI tools, while demographic factors, such as gender and academic role, as well as duration of experience and perceived proficiency, play less significant or secondary roles. The results also affirm the crucial role of trust in fostering behavioral intention to adopt and continuously use GenAI tools. From a practical perspective, these findings suggest that strategies aimed at increasing frequent, guided, and purposeful engagement with GenAI tools are likely to be more effective in building trust than merely offering passive exposure or training focused solely on skill development.

6. Limitations and Directions for Future Research

This study has several limitations that should be acknowledged. First, the unequal group sizes—particularly the overrepresentation of undergraduate students compared to doctoral students and teachers/researchers—may have influenced the findings related to academic role differences. Additionally, the reliance on self-reported data, including self-assessed proficiency, introduces potential biases, such as social desirability and inaccuracies in self-evaluation. Another key limitation is the single-institution context in Croatia, which may limit the generalizability of the findings to other cultural, institutional, or academic settings. Furthermore, the cross-sectional design restricts the ability to capture how trust and behavioral intentions develop or change over time. The lack of qualitative data limits the exploration of participants’ underlying motivations, concerns, and nuanced perceptions. Finally, the predominance of younger participants (70% aged 18–25) and the focus on prominent GenAI tools, such as ChatGPT, Microsoft Copilot, and Google Gemini, constrain the exploration of broader user demographics, alternative tools, and varying contextual factors that may shape trust dynamics.
A limitation of the current study is the operationalization of trust as a composite construct. While this approach aligns with prior models in technology adoption research, future studies should consider analyzing the distinct influence of accuracy, privacy protection, and nonmaliciousness independently, as each may play a unique role in shaping user behavior. Specifically, the inter-item correlation matrix and Cronbach’s alpha results suggest that trust may not be a fully unidimensional construct. Notably, TR1 (trust in accuracy and relevance) exhibits lower correlations with TR2 (trust in privacy protection) and TR3 (trust in nonmaliciousness) than TR2 and TR3 do with each other. TR1 correlates with TR2 at 0.521 and with TR3 at 0.479, whereas TR2 and TR3 demonstrate a stronger inter-correlation of 0.667. Additionally, the Cronbach’s alpha value for the overall trust construct (TRfs) is 0.791, but it would slightly improve if TR1 were excluded. This pattern suggests that trust in accuracy and relevance may reflect a cognitively distinct dimension from trust in privacy and safety. Future research should explore whether modeling trust as two separate but related constructs, functional trust (accuracy and relevance) versus ethical trust (privacy protection and nonmaliciousness), would provide a more valid and reliable understanding of user trust in GenAI tools. This refinement could offer greater specificity in understanding how distinct facets of trust influence adoption behavior. However, such a separation must be approached cautiously, as it risks fragmenting the broader, holistic concept of trust and may introduce complexity without substantive theoretical gain if the constructs are still highly correlated.
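
As a point of reference for such follow-up analyses, the item-level statistics discussed here (inter-item correlations, Cronbach's alpha, and alpha if an item is deleted) can be computed with a few lines of code. The sketch below assumes a DataFrame df containing the three trust items as columns TR1, TR2, and TR3.

```python
# Sketch: Cronbach's alpha and "alpha if item deleted" for the trust items,
# in the spirit of Table 2. `df` with columns TR1, TR2, TR3 is an assumption.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame whose columns are scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

items = df[["TR1", "TR2", "TR3"]]
print("Inter-item correlations:\n", items.corr().round(3))
print("Alpha (all items):", round(cronbach_alpha(items), 3))
for col in items.columns:
    reduced = items.drop(columns=col)
    print(f"Alpha if {col} deleted:", round(cronbach_alpha(reduced), 3))
```
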
Additionally, while the trust construct demonstrates acceptable internal consistency (Cronbach’s alpha = 0.791), the analysis revealed that removing one item could marginally improve reliability. However, given that the trust scale comprises only three items, reducing it further would undermine its conceptual comprehensiveness. This trade-off reflects a recognized limitation when employing concise scales in exploratory research. To improve both construct validity and reliability, future research should consider expanding the item pool to capture a broader range of trust-related concerns. Furthermore, preliminary evidence from the inter-item correlations reinforces the possibility of a two-factor structure, distinguishing between functional trust (e.g., accuracy and relevance) and ethical trust (e.g., privacy protection and nonmaliciousness). Applying exploratory or confirmatory factor analysis in future studies could help refine the measurement model and offer deeper insights into the multidimensional nature of trust in generative AI tools. Despite these limitations, the decision to operationalize trust as a composite construct aligns with prior technology adoption research, where the focus is on general usage decisions rather than task-specific trust dimensions. Nonetheless, expanding this approach in future studies would significantly enhance the precision and theoretical richness of the model.
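
Should future studies expand the item pool as suggested, an exploratory two-factor solution could be inspected along the following lines. This is only a sketch under the assumption of a hypothetical, larger set of trust items (item1 through item6); with only the current three items, a two-factor model would not be identifiable.

```python
# Sketch: exploratory two-factor solution on a hypothetical expanded trust item
# pool (columns item1..item6 are assumptions, not items used in this study).
# Items loading together would suggest "functional" vs. "ethical" trust facets.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

item_cols = [f"item{i}" for i in range(1, 7)]     # hypothetical expanded pool
X = df[item_cols].to_numpy()

fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
fa.fit(X)

loadings = pd.DataFrame(fa.components_.T, index=item_cols,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))
```
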
Future research should address these limitations while extending the current work in several meaningful directions. A key priority is integrating the findings with established technology adoption models, such as the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2), which offers a comprehensive framework for understanding technology adoption. Incorporating constructs like performance expectancy, effort expectancy, social influence, facilitating conditions, hedonic motivation, and habit can enrich the understanding of how these factors interact with trust dimensions (whether related to accuracy, privacy, or nonmaliciousness) to shape behavioral intentions toward GenAI tools. Moreover, longitudinal studies are essential for capturing how trust evolves over time, particularly as users gain more experience or as generative AI technologies rapidly advance. Expanding research to include cross-cultural and cross-institutional samples would also enhance the external validity of our findings. Finally, incorporating qualitative methods, such as interviews, focus groups, or think-aloud protocols, could uncover deeper insights into how users interpret trust, what drives their concerns, and how contextual or ethical considerations influence their willingness to adopt generative AI. These future directions are crucial for developing evidence-based strategies that foster responsible, ethical, and effective integration of generative AI tools in educational settings.

7. Conclusions

This study examined the role of trust in influencing the adoption and use of generative AI (GenAI) tools in academic settings, focusing on three core dimensions: accuracy and relevance, protection of privacy and personal data, and nonmaliciousness. By analyzing these trust dimensions across different academic roles—students, educators, and researchers—this study offers valuable insights into the factors that shape user perceptions and decisions regarding GenAI tools in higher education.
The results indicate that trust is a dynamic and multifaceted construct, primarily shaped by user experience and engagement rather than by demographic factors or academic roles. Contrary to initial expectations, no significant differences in trust were observed across academic roles (H1) or between genders (H2). This suggests that trust concerns related to GenAI tools are largely universal, transcending demographic boundaries within the academic community. These findings challenge earlier research that reported generational or gender-based differences in technology adoption, implying that shared academic environments may mitigate such disparities.
In contrast, trust was significantly influenced by user behavior. Frequency of use (H3) was the strongest, and in the regression model the only independent, predictor of trust, while longer experience (H4) and higher self-assessed proficiency (H5) were associated with greater trust only at the bivariate level and did not remain significant once frequency of use and other factors were controlled for. These findings align with theories of trust calibration, emphasizing that familiarity, regular interaction, and positive user experiences foster greater confidence and reduce skepticism toward AI technologies. Furthermore, this study confirmed a significant positive relationship between trust and behavioral intention (H6), demonstrating that trust is a critical determinant of whether users adopt and integrate GenAI tools into their academic workflows.
The analysis of trust dimensions further highlights key concerns influencing user decisions. Concerns related to accuracy and reliability, particularly issues like AI hallucinations and misinformation, underscore the need for trustworthy and precise outputs in academic contexts. Privacy concerns, including fears of data misuse, breaches, and intellectual property violations, emphasize the importance of robust privacy protections and clear institutional policies. Additionally, concerns about non-malicious use, such as the potential for GenAI tools to facilitate unethical behaviors or generate harmful content, stress the need for comprehensive ethical guidelines.
These findings have important practical implications for higher education institutions. To enable the responsible and effective integration of GenAI tools, institutions should prioritize training programs and support resources that build users’ technical proficiency and familiarity with these technologies. Additionally, developing and enforcing clear policies addressing privacy, data protection, ethical use, and the reliability of AI-generated content is essential for fostering trust. Providing hands-on opportunities to engage with GenAI tools—alongside open, transparent communication about both their capabilities and limitations—will help users approach these technologies with confidence and critical awareness.
This study also reinforces established theories of technology adoption, including Straub’s framework [2], which highlights the interconnected roles of user perceptions, technology characteristics, and contextual influences. Building on these foundations, future research should integrate broader models, such as UTAUT2, which can capture additional factors influencing trust and adoption, including performance expectancy, effort expectancy, social influence, and facilitating conditions. Furthermore, longitudinal studies are essential to understand how trust evolves as users gain more experience or as the technology itself advances. Expanding research to include cross-cultural and multi-institutional contexts, as well as employing qualitative methods, such as interviews or focus groups, will offer deeper insights into how trust develops and how GenAI tools can be integrated ethically, effectively, and sustainably into higher education environments.

Author Contributions

Conceptualization, E.Đ. and D.F.; methodology, E.Đ. and D.F.; software, D.F.; validation, D.F.; formal analysis, D.F.; investigation, E.Đ.; resources, E.Đ.; data curation, D.F.; writing—original draft preparation, E.Đ.; writing—review and editing, E.Đ.; visualization, E.Đ. and D.F.; supervision, D.F. and M.M.; project administration, E.Đ.; funding acquisition, D.F. and M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study based on the response and feedback from the Institutional Ethics Committee. The Ethics Committee was consulted before questionnaire distribution and confirmed that this research is not subject to their approval since the questionnaire was voluntary, anonymous, and quantitative. For questionnaire dissemination, the authors (D.F. and E.Đ.) were granted access to the official mailing list, which includes all university members (teachers, researchers, students of all levels, and administrative personnel).

Informed Consent Statement

All participants were informed about the study through the preamble of the questionnaire, and participation was voluntary.

Data Availability Statement

Data supporting the reported results can be found at https://docs.google.com/forms/d/1N5enFr0JhC_P8L8AnFH3lzga3zba_-KKVxT3QFPwSyk/edit#responses (accessed on 3 June 2025). However, because these data are still being analyzed for future research articles, they are not publicly available.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
GenAI: Generative Artificial Intelligence
AIED: Artificial Intelligence in Education
HEIs: Higher Education Institutions
T&R: Teachers and Researchers
UGSs: Undergraduate Students
GRSs: Graduate Students
DRSs: Doctoral Students

References

  1. Xiao, P.; Chen, Y.; Bao, W. Waiting, Banning, and Embracing: An Empirical Analysis of Adapting Policies for Generative AI in Higher Education; Social Science Research Network: Rochester, NY, USA, 2023; p. 4458269. [Google Scholar] [CrossRef]
  2. Straub, E.T. Understanding Technology Adoption: Theory and Future Directions for Informal Learning. Rev. Educ. Res. 2009, 79, 625–649. [Google Scholar] [CrossRef]
  3. Kayal, A. Transformative Pedagogy: A Comprehensive Framework for AI Integration in Education. In Explainable AI for Education: Recent Trends and Challenges; Singh, T., Dutta, S., Vyas, S., Rocha, Á., Eds.; Springer Nature Switzerland: Cham, Switzerland, 2024; pp. 247–270. [Google Scholar] [CrossRef]
  4. Mehrotra, S.; Degachi, C.; Vereschak, O.; Jonker, C.M.; Tielman, M.L. A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction: Trends, Opportunities and Challenges. ACM J. Responsib. Comput. 2024, 1, 1–45. [Google Scholar] [CrossRef]
  5. Mutasa, S.; Sun, S.; Ha, R. Understanding artificial intelligence based radiology studies: What is overfitting? Clin. Imaging 2020, 65, 96–99. [Google Scholar] [CrossRef] [PubMed]
  6. Lim, W.M.; Gunasekara, A.; Pallant, J.L.; Pallant, J.I.; Pechenkina, E. Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. Int. J. Manag. Educ. 2023, 21, 100790. [Google Scholar] [CrossRef]
  7. Dwivedi, Y.K.; Kshetri, N.; Hughes, L.; Slade, E.L.; Jeyaraj, A.; Kar, A.K.; Baabdullah, A.M.; Koohang, A.; Raghavan, V.; Ahuja, M.; et al. Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 2023, 71, 102642. [Google Scholar] [CrossRef]
  8. Gentile, M.; Città, G.; Perna, S.; Allegra, M. Do we still need teachers? Navigating the paradigm shift of the teacher’s role in the AI era. Front. Educ. 2023, 8, 1161777. [Google Scholar] [CrossRef]
  9. Lytras, M.D.; Pablos, P.O.D. Guest editorial: Active and transformative learning in higher education in times of artificial intelligence and ChatGPT. Interact. Technol. Smart Educ. 2024, 21, 489–498. [Google Scholar] [CrossRef]
  10. Wang, B.; Rau, P.-L.P.; Yuan, T. Measuring user competence in using artificial intelligence: Validity and reliability of artificial intelligence literacy scale. Behav. Inf. Technol. 2023, 42, 1324–1337. [Google Scholar] [CrossRef]
  11. Chaudhry, M.A.; Kazim, E. Artificial Intelligence in Education (AIEd): A high-level academic and industry note 2021. AI Ethics 2022, 2, 157–165. [Google Scholar] [CrossRef]
  12. Mittal, U.; Sai, S.; Chamola, V.; Sangwan, D. A Comprehensive Review on Generative AI for Education. IEEE Access 2024, 12, 142733–142759. [Google Scholar] [CrossRef]
  13. Holmes, W.; Bialik, M.; Fadel, C. Artificial Intelligence in Education. Promise and Implications for Teaching and Learning; Center for Curriculum Redesign: Boston, MA, USA, 2019. [Google Scholar]
  14. Abdelghani, R.; Wang, Y.-H.; Yuan, X.; Wang, T.; Lucas, P.; Sauzéon, H.; Oudeyer, P.-Y. GPT-3-driven pedagogical agents for training children’s curious question-asking skills. Int. J. Artif. Intell. Educ. 2024, 34, 483–518. [Google Scholar] [CrossRef]
  15. Lee, H.; Hwang, Y. Technology-Enhanced Education through VR-Making and Metaverse-Linking to Foster Teacher Readiness and Sustainable Learning. Sustainability 2022, 14, 4786. [Google Scholar] [CrossRef]
  16. Kurni, M.; Mohammed, M.S.; Srinivasa, K.G. AI-Enabled Gamification in Education. In A Beginner’s Guide to Introduce Artificial Intelligence in Teaching and Learning; Kurni, M., Mohammed, M.S., Srinivasa, K.G., Eds.; Springer International Publishing: Cham, Switzerland, 2023; pp. 105–114. [Google Scholar] [CrossRef]
  17. Stashevskaia, E. How Do Teachers Envision AI Grading for Open-Ended Questions in Universities? Available online: https://essay.utwente.nl/100864/ (accessed on 6 January 2025).
  18. Smolansky, A.; Cram, A.; Raduescu, C.; Zeivots, S.; Huber, E.; Kizilcec, R.F. Educator and Student Perspectives on the Impact of Generative AI on Assessments in Higher Education. In Proceedings of the Tenth ACM Conference on Learning @ Scale, Copenhagen, Denmark, 20–22 July 2023; pp. 378–382. [Google Scholar] [CrossRef]
  19. Nguyen, A.; Ngo, H.N.; Hong, Y.; Dang, B.; Nguyen, B.-P.T. Ethical principles for artificial intelligence in education. Educ. Inf. Technol. 2023, 28, 4221–4241. [Google Scholar] [CrossRef] [PubMed]
  20. Mohamed, Y.A.; Khanan, A.; Bashir, M.; Mohamed, A.H.H.M.; Adiel, M.A.E.; Elsadig, M.A. The Impact of Artificial Intelligence on Language Translation: A Review. IEEE Access 2024, 12, 25553–25579. [Google Scholar] [CrossRef]
  21. Ferrara, E. The Butterfly Effect in artificial intelligence systems: Implications for AI bias and fairness. Mach. Learn. Appl. 2024, 15, 100525. [Google Scholar] [CrossRef]
  22. Hunkenschroer, A.L.; Luetge, C. Ethics of AI-Enabled Recruiting and Selection: A Review and Research Agenda. J. Bus. Ethics 2022, 178, 977–1007. [Google Scholar] [CrossRef]
  23. Abdelhalim, E.; Anazodo, K.S.; Gali, N.; Robson, K. A framework of diversity, equity, and inclusion safeguards for chatbots. Bus. Horiz. 2024, 67, 487–498. [Google Scholar] [CrossRef]
  24. Gupta, A.; Royer, A.; Wright, C.; Khan, F.A.; Heath, V.; Galinkin, E.; Khurana, R.; Ganapini, M.B.; Fancy, M.; Sweidan, M.; et al. The State of AI Ethics Report (January 2021). arXiv 2021, arXiv:2105.09059. [Google Scholar] [CrossRef]
  25. Frank, D.; Bernik, A.; Milković, M. Efficient Generative AI-Assisted Academic Research: Considerations for a Research Model Proposal. In Proceedings of the 2024 IEEE 11th International Conference on Computational Cybernetics and Cyber-Medical Systems (ICCC), Hanoi, Vietnam, 4–6 April 2024; pp. 000025–000030. [Google Scholar] [CrossRef]
  26. Liu, Y.; Panwang, Y.; Gu, C. “Turning right”? An experimental study on the political value shift in large language models. Humanit. Soc. Sci. Commun. 2025, 12, 179. [Google Scholar] [CrossRef]
  27. Buolamwini, J.; Gebru, T. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Presented at the FAT Conference, New York, NY, USA, 23–24 February 2018. Available online: https://www.semanticscholar.org/paper/Gender-Shades%3A-Intersectional-Accuracy-Disparities-Buolamwini-Gebru/18858cc936947fc96b5c06bbe3c6c2faa5614540 (accessed on 12 January 2025).
  28. Generative AI @ Harvard. Teach with Generative AI. Available online: https://www.harvard.edu/ai/teaching-resources/ (accessed on 1 May 2024).
  29. Roy, S.S.; Thota, P.; Naragam, K.V.; Nilizadeh, S. From Chatbots to Phishbots?: Phishing Scam Generation in Commercial Large Language Models. In Proceedings of the IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 20–22 May 2024; IEEE Computer Society: Washington, DC, USA, 2024; pp. 36–54. [Google Scholar] [CrossRef]
  30. Ekekihl, E. Getting the General Public to Create Phishing Emails: A Study on the Persuasiveness of AI-Generated Phishing Emails Versus Human Methods. 2024. Available online: https://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-24094 (accessed on 6 January 2025).
  31. Jacovi, A.; Marasović, A.; Miller, T.; Goldberg, Y. Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, in FAccT ’21, Virtually, 3–10 March 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 624–635. [Google Scholar] [CrossRef]
  32. Dangers of Deepfake: What to Watch For|University IT. Available online: https://uit.stanford.edu/news/dangers-deepfake-what-watch (accessed on 13 January 2025).
  33. Deepfake—A Global Crisis—European Commission. Available online: https://intellectual-property-helpdesk.ec.europa.eu/news-events/news/deepfake-global-crisis-2024-08-28_en (accessed on 13 January 2025).
  34. Alkaissi, H.; McFarlane, S.I. Artificial Hallucinations in ChatGPT: Implications in Scientific Writing. Cureus 2023, 15, e35179. [Google Scholar] [CrossRef]
  35. Rasmussen, D.; Karlsen, T. Adopt or Abort? Mapping Students’ and Professors’ Attitudes Towards the Use of Generative AI in Higher Education. Master’s Thesis, Norwegian School of Economics, Bergen, Norway, 2023. Available online: https://openaccess.nhh.no/nhh-xmlui/handle/11250/3130093 (accessed on 14 January 2025).
  36. Christianson, J.S. End the AI detection arms race. Patterns 2024, 5, 101058. [Google Scholar] [CrossRef]
  37. Sun, Y.; Sheng, D.; Zhou, Z.; Wu, Y. AI hallucination: Towards a comprehensive classification of distorted information in artificial intelligence-generated content. Humanit. Soc. Sci. Commun. 2024, 11, 1278. [Google Scholar] [CrossRef]
  38. Jabotinsky, H.Y.; Sarel, R. Co-Authoring with an AI? Ethical Dilemmas and Artificial Intelligence; Social Science Research Network: Rochester, NY, USA, 2023; p. 4303959. [Google Scholar] [CrossRef]
  39. Roustan, D.; Bastardot, F. The Clinicians’ Guide to Large Language Models: A General Perspective With a Focus on Hallucinations. Interact. J. Med. Res. 2025, 14, e59823. [Google Scholar] [CrossRef] [PubMed]
  40. Williamson, B.; Macgilchrist, F.; Potter, J. Re-examining AI, automation and datafication in education. Learn. Media Technol. 2023, 48, 1–5. [Google Scholar] [CrossRef]
  41. Why Meta’s Latest Large Language Model Only Survived Three Days Online|MIT Technology Review. Available online: https://www.technologyreview.com/2022/11/18/1063487/meta-large-language-model-ai-only-survived-three-days-gpt-3-science/ (accessed on 1 April 2023).
  42. Huang, L. Ethics of Artificial Intelligence in Education: Student Privacy and Data Protection. Sci. Insights Educ. Front. 2023, 16, 2577–2587. [Google Scholar] [CrossRef]
  43. Strowel, A. ChatGPT and Generative AI Tools: Theft of Intellectual Labor? Institution’s Innov. Counc. 2023, 54, 491–494. [Google Scholar] [CrossRef]
  44. Nambiar, A.A. A Project for the Massachusetts Institute of Technology Responsible AI for Social Empowerment and Education. Available online: https://raise.mit.edu/ (accessed on 1 May 2025).
  45. Singer, N. A Cyberattack Illuminates the Shaky State of Student Privacy, The New York Times, 31 July 2022. Available online: https://www.nytimes.com/2022/07/31/business/student-privacy-illuminate-hack.html (accessed on 14 January 2025).
  46. Akter, S.; Sultana, S.; Mariani, M.; Wamba, S.F.; Spanaki, K.; Dwivedi, Y.K. Advancing algorithmic bias management capabilities in AI-driven marketing analytics research. Ind. Mark. Manag. 2023, 114, 243–261. [Google Scholar] [CrossRef]
  47. Pawelec, M. Decent deepfakes? Professional deepfake developers’ ethical considerations and their governance potential. AI Ethics 2024, 5, 2641–2666. [Google Scholar] [CrossRef]
  48. Etienne, H. The future of online trust (and why Deepfake is advancing it). AI Ethics 2021, 1, 553–562. [Google Scholar] [CrossRef]
  49. News.com.au Student Allegedly Creates Deepfake Porn of Female Students Using AI. Available online: https://nypost.com/2025/01/09/world-news/student-allegedly-creates-deepfake-porn-of-female-students-using-ai/ (accessed on 14 January 2025).
  50. Vaza, R.N.; Parmar, A.B.; Mishra, P.S.; Abdullah, I.; Velu, C.M. Security And Privacy Concerns In AI-Enabled Iot Educational Frameworks: An In-Depth Analysis. Educ. Adm. Theory Pract. 2024, 30, 8436–8445. [Google Scholar]
  51. Venkatesh, V.; Thong, J.Y.L.; Xu, X. Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
  52. Chan, C.K.Y.; Lee, K.K.W. The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and millennial generation teachers? Smart Learn. Environ. 2023, 10, 60. [Google Scholar] [CrossRef]
  53. Giunta, C. An Emerging Awareness of Generation Z Students for Higher Education Professors. Am. Bus. Rev. 2017, 5. [Google Scholar] [CrossRef]
  54. Hernandez-de-Menendez, M.; Escobar Díaz, C.A.; Morales-Menendez, R. Educational experiences with Generation Z. Int. J. Interact. Des. Manuf. 2020, 14, 847–859. [Google Scholar] [CrossRef]
  55. Otis, N.G.; Cranney, K.; Delecourt, S.; Koning, R. Global Evidence on Gender Gaps and Generative AI; Harvard Business School: Boston, MA, USA, 2024. [Google Scholar] [CrossRef]
  56. Carvajal, D.; Franco, C.; Isaksson, S. Will Artificial Intelligence Get in the Way of Achieving Gender Equality? Social Science Research Network: Rochester, NY, USA, 2022; p. 4759218. [Google Scholar] [CrossRef]
  57. Møgelvang, A.; Bjelland, C.; Grassini, S.; Ludvigsen, K. Gender Differences in the Use of Generative Artificial Intelligence Chatbots in Higher Education: Characteristics and Consequences. Educ. Sci. 2024, 14, 1363. [Google Scholar] [CrossRef]
  58. Al-Abdullatif, A.M. Modeling Teachers’ Acceptance of Generative Artificial Intelligence Use in Higher Education: The Role of AI Literacy, Intelligent TPACK, and Perceived Trust. Educ. Sci. 2024, 14, 1209. [Google Scholar] [CrossRef]
  59. Nazaretsky, T.; Cukurova, M.; Alexandron, G. An Instrument for Measuring Teachers’ Trust in AI-Based Educational Technology. In Proceedings of the LAK22: 12th International Learning Analytics and Knowledge Conference, Online, 21–25 March 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 56–66. [Google Scholar] [CrossRef]
  60. Long, D.; Magerko, B. What is AI Literacy? Competencies and Design Considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; ACM: Honolulu, HI, USA, 2020; pp. 1–16. [Google Scholar] [CrossRef]
  61. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Algorithm aversion: People erroneously avoid algorithms after seeing them err. J. Exp. Psychol. Gen. 2015, 144, 114–126. [Google Scholar] [CrossRef]
  62. Hoff, K.A.; Bashir, M. Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Hum Factors 2015, 57, 407–434. [Google Scholar] [CrossRef]
  63. Klingbeil, A.; Grützner, C.; Schreck, P. Trust and reliance on AI—An experimental study on the extent and costs of overreliance on AI. Comput. Hum. Behav. 2024, 160, 108352. [Google Scholar] [CrossRef]
  64. Biswas, M.; Murray, J. The Influence of Education and Self-Perceived Tech Savviness on AI Reliance: The Role of Trust. In TrustWorld Congress in Computer Science, Computer Engineering & Applied Computing; Springer Nature Switzerland: Cham, Switzerland, 2024. [Google Scholar] [CrossRef]
  65. Falebita, O.S.; Kok, P.J. Artificial Intelligence Tools Usage: A Structural Equation Modeling of Undergraduates’ Technological Readiness, Self-Efficacy and Attitudes. J. STEM Educ. Res. 2024, 8, 257–282. [Google Scholar] [CrossRef]
  66. Choung, H.; David, P.; Ross, A. Trust in AI and Its Role in the Acceptance of AI Technologies. Int. J. Hum. Comput. Interact. 2023, 39, 1727–1739. [Google Scholar] [CrossRef]
  67. Nazaretsky, T.; Mejia-Domenzain, P.; Swamy, V.; Frej, J.; Käser, T. The critical role of trust in adopting AI-powered educational technology for learning: An instrument for measuring student perceptions. Comput. Educ. Artif. Intell. 2025, 8, 100368. [Google Scholar] [CrossRef]
  68. Bach, T.A.; Khan, A.; Hallock, H.; Beltrão, G.; Sousa, S. A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective. Int. J. Hum. Comput. Interact. 2024, 40, 1251–1266. [Google Scholar] [CrossRef]
Table 1. Sociodemographic structure of the respondents (N = 823).
Questions | % | n
1. Gender
Male | 44.71% | 368
Female | 55.29% | 455
2. Age group
18–25 | 70.35% | 579
26–35 | 11.30% | 93
36–45 | 8.02% | 66
46–55 | 7.53% | 62
56–65 | 2.43% | 20
66+ | 0.36% | 3
3. Academic segment/role at the university
Teachers and researchers | 11.54% | 95
Undergraduate students | 69.14% | 569
Graduate students | 16.16% | 133
Doctoral students a | 3.16% | 26
a. For the purposes of statistical analysis, doctoral students (n = 26) were combined with graduate students (n = 133) into a single group because of the small size of the doctoral subgroup. However, they are presented separately in this table to accurately reflect the sample’s sociodemographic structure.
Table 2. Item-total statistics for trust (TRfs).
Item | Corrected Item-Total Correlation | Cronbach’s Alpha If Item Deleted
TR1 | 0.548 | 0.800
TR2 | 0.694 | 0.647
TR3 | 0.663 | 0.682
Table 3. Item-total statistics for behavioral intention (BIfs).
Item | Corrected Item-Total Correlation | Cronbach’s Alpha If Item Deleted
BI1 | 0.775 | 0.888
BI2 | 0.869 | 0.807
BI3 | 0.780 | 0.884
Table 4. Model summary and significance tests for predictors of trust in generative AI tools.
Model Summary a
Model | R | R Square | Adjusted R Square | Std. Error of the Estimate
1 | 0.291 | 0.085 | 0.078 | 0.81064
ANOVA a
Source | Sum of Squares | df | Mean Square | F | Sig.
Regression | 49.727 | 6 | 8.288 | 12.612 | <0.001
Residual | 536.227 | 816 | 0.657
Total | 585.954 | 822
a. Predictors: (Constant), gender (male), academic role (teachers), academic role (graduates), frequency of use, duration of use, skills. Dependent variable: TRfs.
Table 5. Regression coefficients for predictors of trust in generative AI tools.
Coefficients a
Predictor | B | SE | β | t | p
(Constant) | 2.319 | 0.085 | | 27.143 | <0.001
Gender (Male) | −0.021 | 0.059 | −0.012 | −0.357 | 0.721
Academic Role (Teachers) | −0.109 | 0.090 | −0.041 | −1.205 | 0.228
Academic Role (Graduates) | −0.117 | 0.073 | −0.055 | −1.606 | 0.109
Frequency of Use | 0.128 | 0.032 | 0.192 | 3.961 | <0.001
Duration of Use | 0.050 | 0.035 | 0.071 | 1.442 | 0.150
Skills | 0.042 | 0.040 | 0.053 | 1.063 | 0.288
a. Note. The reference group for gender is female; for academic role, the reference is undergraduate students. Frequency of use refers to the usage frequency of any generative AI tool.
Table 6. Model summary and significance testing for predictors of behavioral intention in generative AI tools.
Model Summary a
Model | R | R Square | Adjusted R Square | Std. Error of the Estimate
1 | 0.614 | 0.378 | 0.373 | 0.85331
ANOVA a
Source | Sum of Squares | df | Mean Square | F | Sig.
Regression | 360.323 | 6 | 60.054 | 82.477 | <0.001
Residual | 594.156 | 816 | 0.728
Total | 954.48 | 822
a. Predictors: (constant), gender (male), academic role (teachers), academic role (graduates), frequency of use, duration of use, skills. Dependent variable: BIfs.
Table 7. Regression coefficients for predicting behavioral intention in generative AI tools.
Coefficients a
Predictor | B | SE | β | t | p
(Constant) | 1.334 | 0.090 | | 14.835 | <0.001
Gender (Male) | 0.066 | 0.062 | 0.030 | 1.068 | 0.286
Academic Role (Teachers) | 0.329 | 0.095 | 0.097 | 3.450 | <0.001
Academic Role (Graduates) | 0.241 | 0.077 | 0.088 | 3.139 | 0.002
Frequency of Use | 0.368 | 0.034 | 0.431 | 10.774 | <0.001
Duration of Use | 0.072 | 0.036 | 0.080 | 1.965 | 0.050
Skills | 0.151 | 0.042 | 0.147 | 3.601 | <0.001
a. The reference group for gender is female; for academic role, the reference is undergraduate students. Frequency of use refers to the usage frequency of any generative AI tool.
