1. Introduction
Artificial intelligence (AI), especially generative AI (GenAI) chatbots, plays a pivotal role in the core activities of higher education, including research, teaching, and learning. In teaching and learning, it is used to enhance personalized learning, whereas in research it serves as a veritable tool for promoting efficiency. Researchers use GenAI tools to search for information, check grammar, translate text, and analyze data, among other tasks (
Falebita & Kok, 2024b). In higher education, a number of GenAI tools are used for several purposes; one of the most commonly used is ChatGPT (free version) (
Falebita & Kok, 2024b;
Neumann et al., 2023). Chatbots have been in the limelight as a form of GenAI tool. On these platforms, users in higher education can interact with AI technology to resolve issues related to teaching, learning, and research by simply entering questions or statements (prompts) and receiving responses drawn from the pool of data on which the models have been trained. Chatbots have emerged as powerful tools that provide personalized learning experiences, enhance student engagement, and streamline research activities among researchers (
Ayeni et al., 2024;
Davar et al., 2025;
Limna et al., 2023).
The educational system has experienced changes due to several technologies, including the internet, AI, social media, augmented reality, online learning platforms, gamification, and virtual reality. The GenAI chatbot has gained traction in the higher education sector and is facilitating student engagement. These AI-driven tools provide round-the-clock support and can answer questions about administrative and academic activities in higher education institutions. Students in higher education institutions frequently use AI chatbots, and this trend is growing (
Aksu Dünya & Yıldız Durak, 2024;
Stöhr et al., 2024). This could be attributed to the learner-centric educational system and the growing emphasis on AI technologies in education (
Cordero et al., 2025). Perhaps the primary factors influencing students’ adoption of AI-based chatbots are their simplicity, rapid resource retrieval, quick content generation, and the accuracy of the content generated (
Almulla, 2024;
Rahman et al., 2025). Despite their many potential benefits, whether individuals adopt these technologies for various activities could depend heavily on their trust in the chatbot brand.
GenAI chatbot brand trust is the degree of confidence and dependability that GenAI users have in a particular chatbot brand’s ability to offer accurate, useful, and consistent interactions and content (
Mai & Nguyen, 2025;
Obenza et al., 2024). While there is a growing presence of GenAI chatbots in higher education, several factors might interplay with the ability of the GenAI chatbot brand to gain trust among its users. Numerous studies have focused on the use of GenAI in the classroom because of the many opportunities it has offered to transform education radically (
Baidoo-Anu & Owusu Ansah, 2023;
Chiu et al., 2023).
Raman et al. (
2024a) show how ChatGPT aligns with SDG-related educational outcomes, which could reinforce the importance of chatbot trust not only for adoption but also for sustainable education. In higher education, students are key players in the system; they are the ones for whom the system is built. Therefore, their access to quality information strengthens the system and promotes a healthy society free of falsehood and misinformation (
Auberry, 2018). Students find chatbots engaging, hands-on, and simple to use (
Shim et al., 2023). However, they believe that chatbot responses can occasionally be inaccurate (
Goodman et al., 2023). Additionally, studies have shown that various users are concerned with the level of accuracy of the content generated by chatbots (
Cornelison et al., 2024;
Limna et al., 2023). It is argued that individual preferences for GenAI chatbots could be based on brand trust, which in turn may be shaped by access (premium or free), ease of use, the comprehensiveness and accuracy of the information generated, intentions, usefulness, and attitudes, among other factors.
Given that students typically engage with multiple chatbot platforms rather than a single application, this study deliberately adopts a broader view of “GenAI chatbots” as a category of technologies. This approach reflects the reality of student use, where platforms such as ChatGPT, MetaAI, Gemini, Copilot and institution-specific bots are often accessed interchangeably. Therefore, the Technology Acceptance Model (TAM) is applied to this aggregated category, focusing on the shared features of generative chatbots (natural language processing, generative response capability, and interactive academic support) rather than limiting the analysis to one specific brand.
Although the existing body of literature has investigated the use of AI technology in educational settings, there has been a notable shortage of focused research on specific determinants of chatbot brand trust in higher education. This gap provides an opportunity to explore how trust in GenAI chatbots is influenced by perceived usefulness, attitudes, behavioral intentions, perceived ease of use, and social influence, ultimately affecting the adoption of these chatbots. Therefore, this study investigates the determinants of GenAI chatbot brand trust among higher education students and the interplay between these constructs.
5. Discussion
As generative AI permeates higher education, changing how information is presented and processed, identifying what motivates trust in chatbot technologies is critical for their successful adoption. These research results provide insight into the complex connections between attitudes, behavioral intentions, and trust in GenAI chatbots in higher education. The results of this study show that attitudes toward chatbots determine behavioral intentions toward GenAI chatbots, indicating that users’ views of chatbot brands matter for their behavior toward these tools. This finding suggests that when students perceive GenAI chatbots positively, whether in terms of efficiency, reliability, or overall usefulness, they are more likely to develop the intention to use them. In this way, attitudes act as a critical motivational factor that channels perceptions into concrete behavioral intentions. The evidence reinforces the TAM, which positions attitude as a key antecedent of intention, and highlights the importance of cultivating favorable perceptions to encourage adoption. This finding aligns with the literature: positive attitudes contribute to intention, making students more likely to engage with particular chatbot brands (
Ajzen et al., 2018;
Kerschner & Ehlers, 2016). Additionally, attitudes toward chatbots were found to influence trust in GenAI chatbot brands. This indicates that when students view chatbots positively, they are also more likely to extend confidence to the brand behind the tool, underscoring the close connection between user perceptions and trust formation. In other words, when students have favorable perceptions of the efficiency of GenAI chatbots and their usefulness, they are more likely to trust the brand that develops these tools. This finding supports the claim of
Chenchu et al. (
2025) that brand trust directly depends on user experiences, which is why the consistency and reliability of interactions with the brand are important. In addition,
Kerschner and Ehlers (
2016) reported that positive attitudes may contribute to user loyalty and advocacy, which reflects the assumption that trust is built on positive attitudes and experiences. Therefore, educational institutions should cultivate environments that foster favorable attitudes toward GenAI chatbots, thereby increasing user confidence and supporting the broad adoption of these cutting-edge tools.
With respect to behavioral intention, the findings show that behavioral intention does not determine GenAI chatbot brand trust, suggesting a complex interrelation between users’ intentions and their trust in the technology. Although behavioral intention is commonly taken to indicate that a user will use a given tool, this research finds that intention alone is not a key determinant of brand trust. This divergence from traditional TAM expectations may be explained by the unique characteristics of GenAI chatbots, where intention may reflect curiosity or social influence rather than actual confidence in the system. In such contexts, trust appears to be formed less by willingness to engage and more by perceptions of the chatbot’s reliability, accuracy, and responsiveness over time. This highlights the importance of sustained, positive experiences with the technology, rather than stated intentions, as the basis of trust-building. A person may be willing to use a chatbot on the basis of recommendations or trends yet still harbor doubts about its reliability or efficiency. Trust therefore amounts to more than readiness to use the technology; it presupposes stable and reliable performance that meets user expectations. Comparative studies, such as
Raman et al. (
2024b), who contrasted ChatGPT and Bard in educational contexts, reinforce this interpretation: they found that perceived usefulness and accuracy were far stronger predictors of trust than behavioral intention. This suggests that in higher education, trust in GenAI chatbots is contingent on demonstrated performance and reliability rather than users’ initial willingness to adopt them. Theoretically, this highlights the importance of treating trust formation as contingent upon demonstrated reliability and consistent experience rather than mere stated willingness. Practically, institutions and developers must recognize that motivating students to intend to use a chatbot is not sufficient; trust must be earned through transparency, accuracy, and performance over time. Furthermore, it has been noted that trust is acquired and nurtured through experiences that showcase the technology’s value, which implies that to build and sustain trust in GenAI chatbots, educational organizations should prioritize the quality of interactions (
Kuchinka et al., 2018).
Additionally, the study revealed that the perceived ease of use of chatbots does not affect attitudes toward GenAI chatbot adoption. This result contradicts the traditional TAM assumption that ease of use is a key driver of user attitudes (
Davis, 1989). This implies that other factors, such as the perceived usefulness, effectiveness, and trustworthiness of the chatbot, may matter more to students than how easily it can be used. Although ease of use can enhance the overall user experience, as noted by
Rahman et al. (
2025), it is arguably unlikely to have a significant effect on attitudes if people feel that the chatbot is ineffective or of little value as a communication tool. This observation supports the view that educational technologies should not only be easy to use but also deliver visible outcomes for the user (
Wu et al., 2024). Additionally, a positive attitude toward a technology is often shaped less by its ease of use than by the perception of its ability to satisfy user needs. Therefore, although user-friendliness remains a notable concern, institutions should pair GenAI chatbots with practical benefits to foster positive perceptions and increase willingness to use them. The study also revealed that the perceived ease of use of chatbots does not affect behavioral intention toward GenAI chatbots, underscoring the complexity of user decision-making. This finding suggests that a user-friendly interface alone may not shape students’ intentions to engage with a particular chatbot brand. Rather, intentions may hinge on other factors, such as perceived usefulness and whether the chatbot supports their academic lives more broadly. In line with what
Al-Adwan et al. (
2023) indicate, intentions usually develop around the perceived value a technology can offer rather than its ease of use. Moreover, students may focus more on the concrete advantages a chatbot provides (better learning performance, less paperwork, etc.) than on how easy it is to use. Therefore, educational institutions and developers should concentrate on demonstrating the individual benefits of their GenAI chatbots, as this might foster more positive brand intent among users regardless of how easy the product is perceived to be. Furthermore, the study revealed that the perceived ease of use of chatbots determines the perceived usefulness of GenAI chatbots. This suggests that the easier a GenAI chatbot is to use, the more likely students are to find it a useful tool that enhances their learning. According to
Davis (
1989), ease of use is a strong determinant of technology acceptance that shapes how users judge a technology’s capabilities. This relationship is critical in learning settings, where learners tend to seek tools that are not only practical for learning but also require little effort. This finding aligns with the literature, which commonly reports a positive relationship between perceived ease of use and perceived usefulness (
Al-Adwan et al., 2023;
Falebita & Kok, 2025;
Wu et al., 2024). Notably, designing user-friendly chatbots is worthwhile given the positive outcomes associated with increased acceptance and integration in educational settings.
This study’s findings also show that the perceived usefulness of chatbots determines attitudes toward GenAI chatbot adoption. This finding reinforces the pivotal role that perceptions of a technology’s efficiency play in users’ willingness to adopt it. It implies that when students see a GenAI chatbot as useful, whether to improve their knowledge, streamline their workflow, or obtain timely help, they are more likely to develop a positive attitude toward using it. The TAM holds that this relationship is essential because perceived usefulness is among the strongest predictors of user acceptance (
Davis, 1989). This suggests that users who expect a chatbot to improve their academic performance or streamline administrative work are more likely to view it positively (
Rahman et al., 2025). In addition, chatbots that provide personal support and help students comprehend difficult topics create a more positive attitude (
Groothuijsen et al., 2024). Similarly, the perceived usefulness of chatbots was found to determine GenAI chatbot behavioral intention, indicating that users’ perceptions are crucial to their intention to adopt the tool for learning. Students who believe that chatbots may help improve their academic performance (for example, through personalized help or instant feedback) are more likely to be willing to use them regularly (
Falebita & Kok, 2024a;
Groothuijsen et al., 2024). This finding is in agreement with the TAM, which posits that perceived usefulness has a significant influence on the attitude and intention to use the technology (
Davis, 1989). Moreover, research by
Prastiawan et al. (
2021) confirms that recognizing the practical benefits of GenAI chatbots (time savings and better learning experiences) can minimize resistance to adoption. Thus, a better understanding of how an educational institution can benefit from these chatbots can make students more engaged and mutually supportive, ultimately benefiting the learning process.
This study revealed that social influence determines attitudes toward GenAI chatbot adoption. This suggests that when students observe their peers or academic leaders positively engaging or successfully using GenAI chatbots to solve problems or complete tasks, they develop a positive attitude toward these tools. As
Ajzen (
1991) observes, people tend to seek opinions within their social groups when forming attitudes and making decisions. This phenomenon can create a supportive environment in which the use of GenAI chatbots becomes socially acceptable, directly increasing students’ receptivity to exploring technological opportunities. Studies have shown that positive experiences and recommendations from peers also play an essential role in shaping attitudes and consequently increasing adoption (
Chai et al., 2020;
Chen & Li, 2010;
Falebita & Kok, 2025;
Prastiawan et al., 2021). Additionally, social influence was found to determine GenAI chatbot behavioral intention, indicating the significance of users’ shared experiences for brand perceptions. Seeing peers use and even promote a particular chatbot benefits the brand’s credibility and appeal. This is in line with what
Venkatesh et al. (
2003) reveal: social endorsements may foster brand loyalty and preference. Exposure of a chatbot within a social environment can form a positive feedback loop; the more people use it, the more credible it appears, which encourages further use. The implication is that developers should harness social proof, including testimonials and peer reviews, to boost behavioral intention and ultimately facilitate wider adoption of GenAI chatbots in learning institutions. Similarly, the study revealed that social influence determines the GenAI chatbot’s perceived ease of use, further highlighting how shared user experiences shape this perception. Influential students can allay prospective users’ fears by sharing positive experiences with a chatbot brand. As proposed by
Suh (
2023), social validation may diminish apprehension and evoke trust in new technologies. This implies that students’ perception that these tools are easy to access and use may stem from having watched other students successfully use GenAI chatbots. Finally, social influence was found to negatively determine the GenAI chatbot’s perceived usefulness. Notably, the negative coefficient indicates that stronger social influence is associated with lower perceptions of usefulness. This unexpected result contradicts conventional expectations, which often assume that peer influence increases usefulness perceptions, and warrants further exploration. A possible explanation is that heightened reliance on peer endorsement may lead some students to view the chatbot as less independently valuable, potentially undermining their personal sense of utility. Additionally, some students may perceive a chatbot brand as useful only after personal experience with it, rather than relying on their peers’ views.
8. Limitations and Future Research
The study concentrates mainly on higher education institutions in one region of Nigeria, which may restrict the generalizability of the findings to students in other regions. Future research could extend this work by conducting cross-cultural comparative studies to examine how contextual differences shape trust in GenAI chatbots, as well as longitudinal studies to track how users’ trust evolves with sustained exposure and interaction. Moreover, self-reported data might introduce bias, since participants may not report their actual perceptions of and actions related to GenAI chatbots. The use of convenience sampling further reduces generalizability; however, we minimized common method bias through procedural and statistical checks, and we recommend that future research adopt probability sampling and triangulate self-reports with alternative data sources such as classroom observations or system-generated usage data. Future studies should also consider a larger, more diverse sample to support external validity and examine how perceptions of usefulness and brand trust develop over time. Moreover, investigating how particular characteristics of GenAI chatbots influence user interactions and satisfaction might yield additional knowledge about the strengths of chatbots in higher education. A deeper understanding of users’ experiences and perceptions might also be achieved by expanding the methodology to include qualitative tools, such as interviews or focus groups, leading to a broader picture of the factors influencing the use of GenAI chatbots in higher education.
Additionally, a key methodological limitation concerns the measurement of “chatbot brand trust.” Although the construct was framed around confidence, dependability, and ethical assurance, the questionnaire did not anchor responses to a single identifiable brand. As a result, participants may have referred to different brands (e.g., ChatGPT, Gemini, MetaAI, Claude) or even multiple brands simultaneously, which introduces ambiguity in interpreting brand trust. This limitation may affect the internal validity of the construct; future studies should address this by explicitly specifying brand references or designing brand-specific instruments to ensure consistency in responses. In addition, regarding model fit, the SRMR values (0.081–0.083) were slightly above the strict cutoff of 0.08 but below the more lenient threshold of 0.10, while the NFI (0.731) fell below the ideal threshold of 0.90. These results indicate an acceptable yet improvable model fit. Future studies should test the model with larger samples or apply CB-SEM for stronger validation.