Article

Determinants of Chatbot Brand Trust in the Adoption of Generative Artificial Intelligence in Higher Education

by Oluwanife Segun Falebita 1,*, Joshua Abah Abah 1, Akorede Ayoola Asanre 1, Taiwo Oluwadayo Abiodun 2, Musa Adekunle Ayanwale 3 and Olubunmi Kayode Ayanwoye 4

1 Mathematics, Science and Technology Education Department, Faculty of Education, University of Zululand, KwaDlangezwa 3886, Richards Bay Private Bag X1001, South Africa
2 Department of Mathematics, Tai Solarin University of Education, Ijebu Ode P.M.B 2118, Nigeria
3 Department of Mathematics, Science and Technology Education, University of Johannesburg, Auckland Park, Johannesburg P.O. Box 524, South Africa
4 Science Education Department, Faculty of Education, Federal University Oye-Ekiti, Oye P.M.B. 373, Nigeria
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(10), 1389; https://doi.org/10.3390/educsci15101389
Submission received: 9 August 2025 / Revised: 23 September 2025 / Accepted: 26 September 2025 / Published: 17 October 2025
(This article belongs to the Topic AI Trends in Teacher and Student Training)

Abstract

The use of generative artificial intelligence (GenAI) chatbots by brands is growing rapidly, and higher education institutions are well aware of how such tools shape the attitudes and behavioral intentions of students. These chatbots can synthesize enormous amounts of input data and create contextually aware, human-like conversational content that is not limited to simple scripted responses. This study examines the factors that determine chatbot brand trust in the adoption of GenAI in higher education. By extending the Technology Acceptance Model (TAM) with the construct of brand trust, the study makes a novel contribution to the literature, offering fresh insights into how trust in GenAI chatbots develops within the academic context. Using the convenience sampling technique, a sample of 609 students from public universities in North Central and Southwestern Nigeria was selected. The collected data were analyzed via partial least squares structural equation modelling. The results indicated that attitudes toward chatbots determine behavioral intentions and GenAI chatbot brand trust. Surprisingly, behavioral intentions do not affect GenAI chatbot brand trust. Similarly, the perceived ease of use of chatbots does not determine behavioral intention or attitudes toward GenAI chatbot adoption but does determine perceived usefulness. Additionally, the perceived usefulness of chatbots affects behavioral intention and attitudes toward GenAI chatbot adoption. Moreover, social influence affects behavioral intention, perceived ease of use, perceived usefulness and attitudes toward GenAI chatbot adoption. The implication of these findings for higher education institutions is that homegrown GenAI chatbots aligned with the principles of the institution should be developed, creating an environment that promotes a positive attitude toward these technologies.
Specifically, the study recommends that policymakers and university administrators establish clear institutional guidelines for the design, deployment, and ethical use of homegrown GenAI chatbots, ensuring alignment with educational goals and safeguarding student trust.

1. Introduction

Artificial intelligence (AI), especially generative AI (GenAI) chatbots, plays a pivotal role in the core activities carried out in higher education, including research, teaching, and learning. In teaching and learning, it is used to enhance personalized learning, whereas in research, it is a veritable tool for promoting efficiency. Researchers use these GenAI tools for information searches, grammar checking, translation and data analysis, among other tasks (Falebita & Kok, 2024b). In higher education, a number of GenAI tools are used for several purposes; one of the most commonly used is ChatGPT (free version) (Falebita & Kok, 2024b; Neumann et al., 2023). Chatbots have been in the limelight as a form of GenAI tool. On these platforms, users in higher education can interact with AI technology to resolve issues related to teaching, learning, and research by submitting questions or statements (prompts) and receiving responses drawn from the data on which the models have been trained. They have emerged as powerful tools that provide personalized learning experiences, enhance students’ engagement, and streamline research activities among researchers (Ayeni et al., 2024; Davar et al., 2025; Limna et al., 2023).
The educational system has experienced changes due to several technologies, including the internet, AI, social media, augmented reality, online learning platforms, gamification and virtual reality. The GenAI chatbot has gained traction in the higher education sector and is facilitating student engagement. These AI-driven tools provide support around the clock and can answer questions about administrative and academic activities in higher education institutions. Students in higher education institutions frequently use AI chatbots, and this trend is growing (Aksu Dünya & Yıldız Durak, 2024; Stöhr et al., 2024). This could be because of the learner-centric educational system and the emphasis on the use of AI technologies in education (Cordero et al., 2025). The primary factors influencing students’ adoption of AI-based chatbots are likely their simplicity, speedy search for resources, quick content generation, and accuracy of the content generated (Almulla, 2024; Rahman et al., 2025). Despite their many potential benefits, whether individuals adopt these technologies for various activities could depend heavily on their trust in the chatbot brand.
GenAI chatbot brand trust is the degree of confidence and dependability that GenAI users have in a particular brand of chatbot’s ability to offer accurate, useful, and consistent interactions and content (Mai & Nguyen, 2025; Obenza et al., 2024). While there is a growing presence of GenAI chatbots in higher education, several factors might affect the ability of a GenAI chatbot brand to gain trust among its users. Numerous studies have focused on the use of GenAI in the classroom because of the many opportunities it offers to transform education radically (Baidoo-Anu & Owusu Ansah, 2023; Chiu et al., 2023). Raman et al. (2024a) show how ChatGPT aligns with SDG-related educational outcomes, which could reinforce the importance of chatbot trust not only for adoption but also for sustainable education. In higher education, students are key players in the system; they are the ones for whom the system is built. Therefore, their access to quality information helps the system and promotes a healthy society devoid of falsehood or fake information (Auberry, 2018). Students find chatbots engaging, hands-on, and simple to use (Shim et al., 2023). However, they believe that chatbot responses can occasionally be inaccurate (Goodman et al., 2023). Additionally, studies have shown that various users are concerned with the level of accuracy of the content generated by chatbots (Cornelison et al., 2024; Limna et al., 2023). It is argued that individual preferences for GenAI chatbots could be based on brand trust, which could be explained by access (premium or free), ease of use, comprehensiveness of the information generated, accuracy of the information, intentions, usefulness, and attitudes, among other factors.
Given that students typically engage with multiple chatbot platforms rather than a single application, this study deliberately adopts a broader view of “GenAI chatbots” as a category of technologies. This approach reflects the reality of student use, where platforms such as ChatGPT, MetaAI, Gemini, Copilot and institution-specific bots are often accessed interchangeably. Therefore, the Technology Acceptance Model (TAM) is applied to this aggregated category, focusing on the shared features of generative chatbots, natural language processing, generative response capability, and interactive academic support, rather than limiting the analysis to one specific brand.
There has been a remarkable shortage of focused research on some specific determinants of chatbot brand trust in higher education, although the existing body of literature has investigated the use of AI technology in educational settings. This vacuum provides an opportunity to explore how confidence in GenAI chatbots is influenced by perceived usefulness, attitudes, behavioral intentions, perceived ease of use, and social influence, ultimately affecting the adoption of these chatbots. Therefore, this study investigates the determinants of GenAI chatbot brand trust among higher education students and the interplay between these constructs.

2. Literature Review and Theoretical Framework

2.1. Technology Acceptance Model and Conceptual Framework

The Technology Acceptance Model (TAM) hypothesizes that a user’s behavioral intention (BI) to use a technology is the immediate determinant of actual system use, and that BI is in turn shaped by the user’s general attitude toward employing the technology (Davis, 1989; Venkatesh & Davis, 2000). The attitude toward technology usage depends on two belief variables: perceived usefulness and perceived ease of use. These two beliefs are theorized to exert direct effects on attitude and behavioral intention, forming the core causal chain of TAM (Davis, 1989). The TAM aims to explain how people may understand and accept technological innovations and how they may use them. For any new technology, many factors affect people’s decision-making regarding how and when to use it (Lee et al., 2010). TAM provides a useful framework for explaining individual end-user acceptance and intention to use a specific information system or technology, including emerging tools such as artificial intelligence (AI). It was not originally developed to account for adoption processes at the organizational level (Lee et al., 2010). For AI tools to be widely adopted, intuitive interfaces that require minimal training are needed. This includes user-friendly dashboards, straightforward integration with existing systems, and seamless interactions (Chenchu et al., 2025). Members of academic institutions are likely to adopt AI if it has clear benefits, such as improved efficiency, faster decision-making, and cost savings. For example, GenAI chatbots can automate repetitive writing tasks, freeing up students’ time for more strategic activities.
In this study, the TAM is extended beyond the acceptance of a single, specific technology to cover a heterogeneous category of “GenAI chatbots.” This extension is justified because students in higher education typically interact with multiple chatbot platforms interchangeably (Falebita & Kok, 2024b; Vázquez-Parra et al., 2024) (e.g., ChatGPT, MetaAI, Gemini, Copilot, institutional chatbots), rather than limiting themselves to one. It is important to note that the proposed model intentionally diverges from the traditional TAM specification. While TAM is typically structured around perceived usefulness (PU), perceived ease of use (PEU), attitude (ATT), behavioral intention (BI), and actual use (USE), our model introduces “brand trust” and “behavioral intention” as outcome variables that better capture the realities of GenAI chatbot adoption in higher education. This divergence is theoretically justified on two grounds. First, in this context, actual use is difficult to measure objectively since students engage with multiple chatbot platforms in varied and informal ways; therefore, brand trust serves as a more meaningful indicator of adoption. Second, given the strong influence of social and institutional factors in shaping adoption, we include “social influence” as an external construct, consistent with TAM extensions such as TAM2 and UTAUT. By adapting TAM in this way, the model maintains conceptual coherence while reflecting the unique dynamics of GenAI chatbot adoption, where brand perception and trust are central to explaining usage behaviors.
The preference for a chatbot (including one offered by a particular brand) can be analyzed through the constructs of TAM, where perceived usefulness and perceived ease of use influence users’ attitudes. This study focused on how these constructs, together with behavioral intention and brand trust, affect the adoption of AI chatbots among higher education students. The proposed model, as conceptualized in this study, is shown in Figure 1.

2.2. Attitudes and GenAI Chatbot Behavioral Intentions

An attitude comprises a person’s thoughts, feelings, and actions toward an object, an occasion, or a person. Attitude has a direct influence on individuals’ behavioral intention to take any action (Ajzen et al., 2018). In this study, attitudes are the feelings and thoughts of students towards the use and efficacy of GenAI chatbots. This may entail favorable or unfavorable opinions about the output of GenAI chatbots in their engagement with technological tools. With respect to the adoption of technology, Kerschner and Ehlers (2016) confirmed that customers’ attitudes vary when they adopt a technology. Attitudes are important because of their predictive value; positive attitudes tend to give rise to favorable behaviors, which may include adoption, loyalty, and advocacy, especially toward emerging technologies. In the case of new technological tools such as GenAI chatbots, which remain novel to many individual users, attitudes tend to act as an influential mediator between perceptions of the technology and the intention to use it. In addition, attitudes play a significant role in building brand perception and consumer loyalty; thus, they are necessary variables in strategic marketing (Kuchinka et al., 2018). Studies have shown that attitudes contribute to behavioral intentions (Bashir & Madhavaiah, 2015; Jung et al., 2016; Mailizar et al., 2021). The relationship between attitudes and intentions has been studied and found to be positive. It is therefore anticipated that students’ attitudes toward using GenAI chatbots will have a major influence on their behavioral intentions.

2.3. GenAI Chatbot Perceived Ease of Use

Perceived ease of use is the degree to which an individual believes that using a technological tool will be free of effort (Davis, 1989). According to Davis, a technology’s ease of use is a sign of its acceptability. Students are more likely to use chatbots to learn in an educational setting if an existing technological infrastructure is in place (Rahman et al., 2025). This type of infrastructure comprises messaging apps and user-friendly interfaces that are compatible with various internet-enabled phones. Institutions lower the access barrier by making setup simple and manageable, which makes it easier for individuals to adopt the technology (Singun, 2025). That is, if a technology is easy to understand and use, potential users will be prepared to embrace it. According to Hansen et al. (2018), individuals who use technology might have preconceived ideas about how easy or difficult it will be to use. It is important to consider ease of use when developing technological products because it has a large influence on users’ adoption of them (Prastiawan et al., 2021). For example, students are more inclined to embrace app-based GenAI chatbots if they are easy to navigate and can be accessed when needed. Beyond app versions that make them simple to use on laptops, tablets, and smartphones, GenAI chatbots can also be accessed through a browser on any web-responsive device with little difficulty. Additionally, the feature of natural interaction highlights the high level of usability that GenAI chatbots offer, perhaps promoting favorable opinions of them.

2.4. GenAI Chatbot Perceived Usefulness

Perceived usefulness is an essential variable that affects the usage of technology, especially with respect to GenAI chatbots. TAM asserts that users will better accept a new technology when they feel it will increase their performance or address a certain need (Davis, 1989). In the use of chatbots, perceived usefulness can refer to several advantages, including the ability to automate repetitive processes, fast access to information, and efficient communication. Chatbots provide tailored content to help students grasp difficult ideas and improve flexible, individualized learning (Davar et al., 2025). Chatbots act as educational assistants that offer clarifications and explanations, assisting students in finishing assignments more efficiently (Navas et al., 2024). Research shows that chatbots can improve learning in undergraduate courses such as programming by providing individualized support and elucidating difficult ideas (Groothuijsen et al., 2024). Studies have also shown that users who believe that a GenAI chatbot can bring benefits (e.g., becoming more efficient in their academic work or simplifying administrative procedures) will use it more (Falebita & Kok, 2024a; Osman, 2025; Prastiawan et al., 2021). This implies that perceived usefulness directly influences users’ attitudes towards technology adoption. Moreover, behavioral intention is influenced by perceived usefulness, such that users’ willingness to adopt particular chatbot solutions depends on how useful they perceive GenAI chatbots to be (Groothuijsen et al., 2024). The greater the utility and value of chatbots in the minds of users, the more they trust the brand behind the technology. Research has indicated that a positive perception of usefulness fosters strong brand loyalty and the intention to use services provided by that brand (Chenchu et al., 2025).
The effectiveness of GenAI chatbots in an academic setting can considerably influence their perceived usefulness, contribute to overall acceptance and favor particular brands by users who are in need of tools on which they can rely. Therefore, the perception of usefulness tends not only to determine attitudes toward the adoption of chatbots but also to determine behavioral intention, which can be indispensable in determining the application of GenAI chatbots in the educational environment.

2.5. GenAI Chatbot Social Influence

One of the driving forces of the use of GenAI chatbots is social influence, especially in higher education, where there are many collaborations for different academic purposes among both students and academics. According to Ajzen (1991), when developing attitudes and making decisions, a person is likely to seek the guidance of people who are important to them (subjective norms in TAM terminology; often referred to as ‘social influence’ in UTAUT). When users (students) see their fellows or academic leaders interacting positively with GenAI chatbots, there is a snowball effect, resulting in an atmosphere of acceptability and interest. This endorsement from others can work dramatically in increasing the value of these technologies to users, and therefore, they tend to use them as well. In addition, social influence may take informal forms, including word-of-mouth suggestions or academic discussions, which further define the attitudes of new users. Various studies have established that social influence tends to predict the adoption of new technologies (Chai et al., 2020; Chen & Li, 2010; Falebita & Kok, 2025; Prastiawan et al., 2021). Building on this, Abdalla et al. (2024) extend the discussion beyond the Technology Acceptance Model by employing the Diffusion of Innovation Theory, which emphasizes how early adopters and peer influence accelerate the spread of innovations like GenAI chatbots. Their findings suggest that peer validation not only normalizes adoption but also creates legitimacy for the technology within academic communities.
Like attitude, social influence is another key determinant of behavioral intention toward GenAI chatbots. When people recognize that their peers or those they look up to are using a certain chatbot, it not only makes them put more trust in the technology but also makes their intentions align with those of the brand that created the chatbot (Al-Oraini, 2025). It has been indicated that positive social endorsements have the propensity to result in higher brand fidelity and preference among brands that are popular or widely recognized in a society (Venkatesh et al., 2003). This forms a virtuous circle: the more people use the chatbot, the more visible and credible it becomes, which attracts even more users. This means that identifying the impact of social influence on behavioral intention is instrumental to the successful introduction of GenAI chatbots in organizations. This is further confirmed by various studies (Liang et al., 2024; Zhang et al., 2025) showing that social influence contributes to behavioral intentions. However, there is also a contrary view from the study of Falebita and Kok (2025), which suggests that social influence, conceived as social norms in their study, does not influence intention. These mixed findings serve as the basis for the investigation of social influence as a determinant of GenAI chatbot behavioral intention in this study. By incorporating both TAM and Diffusion of Innovation perspectives, this study highlights how social influence operates at multiple levels, shaping both individual behavioral intentions and the broader institutional culture around AI adoption.
Moreover, social influence tends to have a strong influence on the ease of use and usefulness of GenAI chatbots. The fact that influential peers can be asked to detail their positive experiences with the use of a chatbot means that potential users will be more inclined to recognize the technology as being accessible and useful to end users. This association between social validation and the user experience lowers fear and instils confidence in the use of new technologies (Suh, 2023). An example could be that a faculty member points out how a GenAI chatbot helps simplify some tasks or improve communication between students, which can not only promote its use but also help improve the general viewpoint towards the evidence of the utility of the technology. Consequently, the application of social influence to market GenAI chatbots will amount to a strategic initiative because having a friendly community can facilitate the communication and adoption of this new technology in higher learning institutions.

2.6. GenAI Chatbot Brand Trust

Brand trust occupies a central position in digital interaction reality, playing an important role in experiences and ensuring brand loyalty. When referring to GenAI chatbots, brand trust represents the individual user’s confidence in the trustworthiness, accuracy, and ethical use of the technology (Al-Oraini, 2025; Mostafa & Kasamani, 2021; Pande & Gupta, 2024), which, together, can determine the level of acceptance of the technology and the behavioral fidelity to the brand. The complexity of trust in AI-mediated communication requires knowledge of how chatbot-relevant factors are related to brand reputation and the predispositions of users so that trusting relationships can occur. Trust mediates the connection between chatbot features (responsiveness and emotional cues) and positive behavioral intentions (repurchase, electronic word-of-mouth, and brand loyalty) (Ali & Sheikh, 2025). In addition, both the technological features of the chatbot and the overall organizational brand have an impact on trust building (Hansen et al., 2018; Mostafa & Kasamani, 2021). The relationship between brand trust and intentions is insightful in terms of the adoption of AI chatbots, which further stresses the importance of designing trustworthy chatbots to encourage high rates of user acceptability and build a sustainable brand (Bandara & Jayarathne, 2019; Obenza et al., 2024).
In this study, however, brand trust is conceptualized as an outcome of students’ intention to use GenAI chatbots. This position reflects the experiential pathway where willingness to engage with a chatbot brand precedes the formation of confidence and dependability in it. Prior research suggests that trust often develops after repeated or intended interactions with technology or service providers (Gefen, 2000; McKnight et al., 2002), supporting our treatment of trust as a consequence of behavioral intention rather than solely as an antecedent. This approach is particularly relevant in the context of GenAI chatbots, where users’ initial readiness to use a platform often comes before the establishment of trust in its reliability and ethical assurances.
We therefore propose the following hypotheses in this study:
H1. Attitudes toward GenAI chatbot adoption determine the intention to use GenAI chatbots.
H2. Attitudes toward GenAI chatbot adoption determine GenAI chatbot brand trust.
H3. Intentions to use the GenAI chatbot determine its brand trust.
H4. GenAI chatbot perceived ease of use determines attitudes toward GenAI chatbot adoption.
H5. GenAI chatbot perceived ease of use determines the intention to use GenAI chatbots.
H6. GenAI chatbot perceived ease of use determines GenAI chatbot perceived usefulness.
H7. GenAI chatbot perceived usefulness determines attitudes toward GenAI chatbot adoption.
H8. GenAI chatbot perceived usefulness determines the intention to use GenAI chatbots.
H9. Social influence determines attitudes towards GenAI chatbot adoption.
H10. Social influence determines GenAI chatbot behavioral intention.
H11. Social influence determines the GenAI chatbot’s perceived ease of use.
H12. Social influence determines the GenAI chatbot’s perceived usefulness.
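The twelve hypotheses jointly define the structural model. As an illustrative sketch (the construct abbreviations ATT, BI, BT, PEU, PU, and SI are ours, not labels from the original instrument), the hypothesized paths can be encoded as a simple adjacency map from each endogenous construct to its hypothesized determinants:

```python
# Hypothesized structural paths (H1-H12), mapping each endogenous
# construct to the constructs hypothesized to determine it.
# Abbreviations (ours): ATT = attitude, BI = behavioral intention,
# BT = brand trust, PEU = perceived ease of use,
# PU = perceived usefulness, SI = social influence (exogenous).
PATHS = {
    "BI":  ["ATT", "PEU", "PU", "SI"],  # H1, H5, H8, H10
    "BT":  ["ATT", "BI"],               # H2, H3
    "ATT": ["PEU", "PU", "SI"],         # H4, H7, H9
    "PU":  ["PEU", "SI"],               # H6, H12
    "PEU": ["SI"],                      # H11
}

# Sanity check: the map should contain exactly twelve paths (H1-H12),
# and SI should appear only as a predictor, never as an outcome.
n_paths = sum(len(preds) for preds in PATHS.values())
assert n_paths == 12
assert "SI" not in PATHS
```

This representation makes the model's shape easy to audit: social influence is the sole exogenous construct, and brand trust is a terminal outcome with no outgoing paths, mirroring the study's departure from the standard TAM specification.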

3. Methodology

3.1. Research Design

A cross-sectional research design was adopted to address the study’s aim. This design enables the investigation of the connections among the variables considered by gathering data from participants at one point in time, without long-term monitoring (Creswell & Creswell, 2017). It allows the simultaneous evaluation of the variables and offers a complete picture of their state at a given point.

3.2. Sampling

The study targeted students from universities in North Central and Southwestern Nigeria, with a specific focus on a sample of 609 (457 undergraduates and 152 postgraduates) from public institutions. The participants were drawn from eight public universities located in four different states within the region. A convenience sampling method was employed to select the participants from the universities. This approach allows researchers to easily access and recruit readily available respondents from departments or faculty WhatsApp platforms (Etikan et al., 2015). Participation in the survey was voluntary; potential participants were informed about the study, and their consent was obtained before inclusion.

Demographic Characteristics of the Participants

The sample composition revealed by the demographic characteristics of the study participants is presented in Table 1. A total of 609 students participated, with a majority identified as male (57.1%) and 42.9% female. This gender distribution may indicate that within the university setting, particularly in Southwestern Nigeria, male involvement might typically be greater in some disciplines. In terms of age, most participants were within the age range of 20–24 years (51.6%), suggesting that the sample largely comprised younger students, perhaps in their early undergraduate years. This age group is significant since it represents digital natives (Sadiku et al., 2017). This makes their opinions extremely relevant; however, only a small number of participants were over 34 years old, so the research predominantly reflects the opinions of younger digital natives, which may limit the generalizability of the results to older populations. Considering the sample distribution across faculties, the sciences (43.8%) had the most representation, followed by education (20.7%), social sciences (14.9%), management science (12.6%), and arts (7.9%). This may reflect greater enrolment in science and technology-related disciplines. The status of the participants also revealed a strong skew toward undergraduates (75.0%), with postgraduates accounting for 25.0% of the sample.

3.3. Instrumentation

A questionnaire with six constructs related to chatbot adoption and brand trust, adapted from various studies, was used for data collection (see Appendix A). The constructs considered in this study, as adapted from other studies, are perceived usefulness, perceived ease of use, and attitude (Falebita & Kok, 2024a); brand trust (Delgado-Ballester, 2004); and social influence and behavioral intention (Mutambara & Chibisa, 2022). The adapted items were refined to improve their relevance and clarity and were specifically tailored to align with the objectives of this study, ensuring that the instrument effectively captures the relevant constructs and allows for a thorough analysis. Each construct consisted of five items rated on a 7-point Likert scale ranging from agreement to disagreement. Employing a 7-point scale enhances data reliability by offering a diverse range of responses, thus facilitating effective data collection and analysis (Sullivan & Artino, 2013). Thus, this scale enables a more comprehensive evaluation of respondents’ views, leading to a broader understanding of their perspectives. The instrument consisted of three sections: the first section provided brief information about the study and sought the consent of the participants, where those who were unwilling to participate could quit; the second section sought the demographic information of the participants; and the third section presented the items of the constructs.

3.4. Data Collection

This study collected data via an online questionnaire hosted through Google Forms; the URL to which it was linked was shared via WhatsApp. Researchers, colleagues, and students avidly distributed the link across several WhatsApp platforms. This enabled participants to complete the survey at their own pace and in a comfortable setting with easy access to the questionnaire. The questionnaire was left open for seven weeks with regular reminders to enable many students to respond to it.

3.5. Data Analysis

To analyze the data, we used the PLS‒SEM approach, which is considered suitable for this investigation. Its resilience and flexibility make it suitable for handling complex models with latent variables, particularly when dealing with non-normally distributed data and small sample sizes (J. F. Hair et al., 2014). We carried out the analysis following a two-stage approach: evaluation of the measurement model and then the structural model (J. Hair et al., 2017). In the first stage, the measurement model was examined, concentrating on the reliability and validity of the latent constructs. Reliability was examined via Cronbach’s alpha (CA) and composite reliability (CR). For validity, we looked at the average variance extracted (AVE) and used the Fornell–Larcker criterion to establish discriminant validity. Additionally, we analyzed the factor loadings of all the items to validate their contribution to the model. This procedure demonstrated that the observed variables accurately matched the theoretical constructs they were intended to measure (J. F. Hair et al., 2019). The structural model was examined in the second phase to study the predicted relationships between the latent variables. This stage revealed the significance and strength of the relationships across constructs, presenting empirical evidence that might either corroborate or refute the suggested hypotheses (Sarstedt et al., 2017). The two-step SEM analytical procedure enabled the confirmation of the measuring instruments’ psychometric qualities before the conceptual model’s substantive relationships were evaluated. This extensive analytical strategy is generally regarded as a standard practice in quantitative research that strives to examine theoretical frameworks via the use of latent variable approaches, ensuring both the robustness and the credibility of the results (J. F. Hair et al., 2021).
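The first-stage reliability and convergent-validity statistics named above follow standard closed-form definitions. As a minimal sketch (not the authors' software; PLS-SEM packages such as SmartPLS compute these internally), the formulas can be expressed directly from an item-response matrix and standardized outer loadings:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix:
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum(lambda))^2 / ((sum(lambda))^2 + sum(1 - lambda^2)),
    for a vector of standardized outer loadings of one construct."""
    s = loadings.sum() ** 2
    e = (1.0 - loadings ** 2).sum()
    return s / (s + e)

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted: mean of the squared standardized loadings.
    AVE >= 0.5 is the usual convergent-validity criterion."""
    return float((loadings ** 2).mean())
```

For example, a construct whose five items all load at 0.8 yields AVE = 0.64 (above the 0.5 criterion) and CR of about 0.90, comfortably inside the ranges reported in Table 2.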

4. Results

4.1. Measurement Model Assessment

As the first stage of PLS-SEM, we assessed the measurement model, which underpins the reliability of the study’s results. At this stage, we evaluated indicator reliability by examining the outer loadings. Outer loadings above 0.7 are desirable; however, one value above 0.60 was considered adequate because its VIF remained within the acceptable threshold and it contributed to the development of the model (J. F. Hair et al., 2014; Henseler et al., 2009). The outer loadings ranged from 0.696 to 0.904 (see Table 2), supporting the reliability of the measures used to evaluate chatbot brand trust. We subsequently assessed the internal consistency of the constructs via composite reliability (CR) and Cronbach’s alpha (CA). All the constructs show strong internal consistency, with CA values ranging from 0.800 to 0.919 and CR values ranging from 0.804 to 0.927 (see Table 2). These results indicate a robust basis for reliability throughout the measurement model. Additionally, multicollinearity among the indicators was assessed via the variance inflation factor (VIF). Using the threshold of 5.0 (J. F. Hair et al., 2010), we obtained VIF values ranging from 1.471 to 3.578, indicating no collinearity issues impeding the analysis. In addition, we examined construct validity via a dual evaluation approach covering both convergent and discriminant validity. We established convergent validity via the average variance extracted (AVE), with all the constructs exceeding the required criterion of 0.5 (Ayanwale & Sanusi, 2023; Fornell & Larcker, 1981). The AVE values varied from 0.610 to 0.755, indicating good convergent validity across the constructs linked with chatbot brand trust.
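The reliability and convergent-validity statistics above follow standard formulas: CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE = Σλ²/n, where λ are the standardized outer loadings. As an informal illustration only (the loadings below are invented, not the estimates in Table 2), these can be computed as:

```python
import numpy as np

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1.0 - lam**2).sum())

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings
    lam = np.asarray(loadings, dtype=float)
    return (lam**2).mean()

# Hypothetical standardized outer loadings for a five-item construct
# (illustrative values only, not taken from the study's data).
loadings = [0.72, 0.78, 0.81, 0.84, 0.79]

cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)
print(f"CR = {cr:.3f}, AVE = {ave:.3f}")  # CR above 0.70, AVE above 0.50
```

Both statistics clear the conventional cutoffs (CR > 0.70, AVE > 0.50) that the paragraph above applies to the study’s constructs.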
To establish discriminant validity, we employed the Fornell–Larcker criterion, which requires that the square root of each construct’s AVE exceeds its correlations with all other constructs (Fornell & Larcker, 1981), and the heterotrait–monotrait ratio (HTMT), which requires values below 0.90 to indicate distinction among the constructs (Henseler et al., 2015). Our results show Fornell–Larcker indices ranging from 0.138 to 0.869 (see Table 3), whereas the HTMT values range from 0.149 to 0.885 (see Table 3), demonstrating that the constructs are valid and contribute uniquely to the overall model.
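The Fornell–Larcker check can be stated compactly: each construct’s √AVE must exceed its absolute correlations with every other construct. A minimal sketch with invented AVEs and a hypothetical latent-variable correlation matrix (not the values in Table 3):

```python
import numpy as np

def fornell_larcker_ok(ave, corr):
    """True if sqrt(AVE_i) exceeds |r_ij| for every pair i != j."""
    sqrt_ave = np.sqrt(np.asarray(ave, dtype=float))
    corr = np.asarray(corr, dtype=float)
    n = len(sqrt_ave)
    return all(
        sqrt_ave[i] > abs(corr[i, j])
        for i in range(n) for j in range(n) if i != j
    )

# Hypothetical AVEs and correlations for three constructs (illustrative only).
ave = [0.62, 0.71, 0.68]
corr = np.array([
    [1.00, 0.55, 0.48],
    [0.55, 1.00, 0.61],
    [0.48, 0.61, 1.00],
])
print(fornell_larcker_ok(ave, corr))  # True: every sqrt(AVE) tops its correlations
```

If any inter-construct correlation rose above the smallest √AVE (here √0.62 ≈ 0.787), the check would fail, signaling a discriminant-validity problem.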

4.2. Structural Model Assessment

In our investigation of the factors determining chatbot brand trust, we assessed the structural model via several key metrics, including path coefficient size and significance (β and p values), t values, multicollinearity (VIF), effect sizes (f2), and predictive relevance (Q2). The assessment started with multicollinearity tests, where we considered the VIF of the paths to detect any possible collinearity concerns across the components. With VIF values ranging from 1.000 to 3.297 (see Table 4), we found no substantial collinearity, validating the robustness of the model. The path coefficients (β), shown in Figure 2 and Table 4, range from −0.148 to 0.868. The t values, which ranged from 0.523 to 29.355, enabled us to draw conclusions about the significance of the paths in support of the hypotheses. Of the hypotheses investigated, nine were supported and three were not. As shown in Table 4 and Table 5, attitudes toward chatbots determine behavioral intentions toward GenAI chatbots (β = 0.451; t = 10.793; p < 0.05) and GenAI chatbot brand trust (β = 0.594; t = 9.305; p < 0.05). However, behavioral intentions do not affect GenAI chatbot brand trust (β = −0.032, t = 0.544, p = 0.586). In addition, the chatbot’s perceived ease of use affects neither attitudes toward (β = 0.042; t = 0.523; p = 0.601) nor behavioral intentions toward the GenAI chatbot (β = 0.128; t = 1.787; p = 0.074). However, the chatbot’s perceived ease of use determines GenAI’s perceived usefulness (β = 0.868; t = 29.355; p < 0.05). Additionally, the perceived usefulness of chatbots affects attitudes toward GenAI chatbot adoption (β = 0.151; t = 2.278; p < 0.05) and behavioral intentions toward GenAI chatbots (β = 0.124; t = 2.076; p < 0.05).
Additionally, social influence affects attitudes toward GenAI chatbot adoption (β = 0.687; t = 21.096; p < 0.05), behavioral intentions toward GenAI chatbots (β = 0.244; t = 5.943; p < 0.05), the perceived ease of use of GenAI chatbots (β = 0.553; t = 17.555; p < 0.05), and GenAI chatbots’ perceived usefulness negatively (β = −0.148; t = 4.779; p < 0.05).
Additionally, to better comprehend the significance of these interactions, we measured the path effect sizes via Cohen’s criterion: f2 values of ≥0.02, ≥0.15, and ≥0.35 correspond to small, moderate, and large effect sizes, respectively (Ayanwale & Sanusi, 2023; Cohen, 1988; J. F. Hair et al., 2021). The f2 values, which range from 0.001 to 1.421, provide insights into the strength of relationships between constructs in the model. The moderate effects observed in the paths CATT → CBI and CATT → CTB highlight the pivotal role of attitudes in shaping both behavioral intentions and brand trust, underscoring attitude as a central driver in GenAI chatbot adoption. The small but significant effects for CPU → CATT, CSN → CBI, and CSN → CPU suggest that while perceived usefulness and social influence contribute to shaping attitudes and intentions, their influence is less direct and may operate through more complex mediational pathways. In addition, the small effect size for CPU → CBI indicates a statistically significant but negligible effect. By contrast, the large effects recorded in the paths CPEU → CPU, CSN → CATT, and CSN → CPEU demonstrate that perceived ease of use and social influence are critical determinants of both usefulness and attitudes, reinforcing the argument that usability and peer influence serve as key entry points for adoption. The very small and insignificant effects for CBI → CTB, CPEU → CATT, and CPEU → CBI indicate that behavioral intention is not sufficient to determine trust; likewise, ease of use alone is insufficient to drive attitudes or behavioral intentions, highlighting the need for value-based and socially reinforced experiences rather than mere technical simplicity.
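The f2 statistic reported above is conventionally computed as the change in R2 when a predictor is omitted, scaled by the endogenous construct’s unexplained variance. A small sketch with invented R2 values (not the study’s estimates):

```python
def f_squared(r2_included, r2_excluded):
    # f2 = (R2_included - R2_excluded) / (1 - R2_included)
    return (r2_included - r2_excluded) / (1.0 - r2_included)

def effect_size_label(f2):
    # Cohen (1988) thresholds: >= 0.35 large, >= 0.15 moderate, >= 0.02 small
    if f2 >= 0.35:
        return "large"
    if f2 >= 0.15:
        return "moderate"
    if f2 >= 0.02:
        return "small"
    return "negligible"

# Hypothetical R2 of an endogenous construct with and without one predictor
# (illustrative values only).
f2 = f_squared(0.650, 0.580)
print(f"f2 = {f2:.3f} ({effect_size_label(f2)})")  # f2 = 0.200 (moderate)
```

Dropping a predictor that costs 0.07 of R2 from a construct explained at 0.65 thus yields a moderate effect under Cohen’s thresholds.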
The structural model assessment further strengthens these observations. High R2 values for attitudes (0.606) and behavioral intention (0.650) suggest that the model accounts for substantial variance in these constructs, confirming that the proposed framework robustly explains key drivers of adoption. The predictive relevance (Q2) values above zero reinforce these conclusions (Chin, 1998), with CATT (Q2 = 0.574) and CBI (Q2 = 0.484) showing particularly strong predictive power, indicating that the model not only explains but also predicts user behavior effectively. The modest explanatory power (see Table 6) of CPEU (R2 = 0.306; Q2 = 0.303) shows that ease of use is a contributing factor but not a dominant one, while CTB (R2 = 0.326; Q2 = 0.178) reflects the more complex, multidimensional nature of trust formation. Interestingly, CPU (R2 = 0.633) demonstrates strong explanatory capacity, but its low predictive relevance (Q2 = 0.106) suggests that while perceived usefulness is well explained by other constructs, it may not translate into consistent predictive accuracy for future behaviors. Collectively, these results indicate that the model is particularly strong in explaining attitudes and intentions, but further refinement may be needed to enhance the predictive strength of constructs such as trust and perceived usefulness.
Table 7 shows the model fit indices. The model fit indices were examined to assess the adequacy of the structural model. The SRMR values (0.081 for the saturated model and 0.083 for the estimated model) are slightly above the conservative 0.08 cutoff but remain within the broader acceptable threshold of 0.10 (Henseler et al., 2015; Hu & Bentler, 1999). This suggests that the model demonstrates an acceptable level of fit. The discrepancy measures (d_ULS and d_G) and the Chi-square values are close between the saturated and estimated models, indicating stable results. Although the absolute values of these indices are less interpretable in isolation, their consistency strengthens confidence in the model’s specification. The NFI value of 0.731 falls below the commonly recommended cutoff of 0.90. However, in PLS-SEM research, particularly with complex models and moderate sample sizes, NFI values above 0.70 are often considered acceptable (J. Hair et al., 2017; J. F. Hair et al., 2019). Taken together, the global model fit indices indicate that the model achieves an acceptable fit, though opportunities exist for improvement. These findings, combined with the PLS-specific metrics (β, R2, f2, Q2), provide sufficient support for the adequacy of the structural model.
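For reference, SRMR is the square root of the mean squared residual between the observed and model-implied correlation matrices, taken over the unique off-diagonal elements. A toy sketch with invented matrices (not this model’s correlations):

```python
import numpy as np

def srmr(observed, implied):
    # Root mean square of residual correlations (unique off-diagonal entries).
    obs = np.asarray(observed, dtype=float)
    imp = np.asarray(implied, dtype=float)
    rows, cols = np.tril_indices_from(obs, k=-1)
    resid = obs[rows, cols] - imp[rows, cols]
    return float(np.sqrt(np.mean(resid**2)))

# Hypothetical observed vs. model-implied correlations (illustrative only).
observed = np.array([
    [1.00, 0.55, 0.48],
    [0.55, 1.00, 0.61],
    [0.48, 0.61, 1.00],
])
implied = np.array([
    [1.00, 0.50, 0.42],
    [0.50, 1.00, 0.55],
    [0.42, 0.55, 1.00],
])
print(f"SRMR = {srmr(observed, implied):.3f}")  # below the 0.08 cutoff
```

Smaller residuals between the two matrices drive SRMR toward zero, which is why values near or below 0.08 are read as acceptable fit.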

5. Discussion

As generative AI permeates higher education, changing how information is presented and processed, determining what motivates trust in chatbot technologies is important for successful adoption. These results provide insight into the complex connections between attitudes, behavioral intentions, and trust in GenAI chatbots in higher education. The results of this study show that attitudes toward chatbots determine behavioral intentions toward GenAI chatbots, indicating that users’ views of chatbot brands shape their behavior toward these tools. This finding suggests that when students perceive GenAI chatbots positively, whether in terms of efficiency, reliability, or overall usefulness, they are more likely to develop the intention to use them. In this way, attitudes act as a critical motivational factor that channels perceptions into concrete behavioral intentions. The evidence reinforces the TAM, which positions attitude as a key antecedent of intention, and highlights the importance of cultivating favorable perceptions to encourage adoption. This finding aligns with the literature: positive attitudes contribute to intention, making students more likely to engage with particular chatbot brands (Ajzen et al., 2018; Kerschner & Ehlers, 2016). Additionally, attitudes toward chatbots were found to influence trust in GenAI chatbot brands. This indicates that when students view chatbots positively, they are also more likely to extend confidence to the brand behind the tool, underscoring the close connection between user perceptions and trust formation. In other words, when students have favorable perceptions of the efficiency and usefulness of GenAI chatbots, they are more likely to trust the brand that develops these tools. This finding supports the claim of Chenchu et al.
(2025) that brand trust depends directly on user experiences, which is why the consistency and reliability of interactions with the brand are important. In addition, Kerschner and Ehlers (2016) reported that positive attitudes may contribute to user loyalty and advocacy, reflecting the assumption that trust is built on positive attitudes and experiences. Therefore, educational institutions should create an environment that fosters favorable attitudes toward the use of GenAI chatbots, increasing user confidence and thereby supporting the broad adoption of such cutting-edge tools.
With respect to behavioral intention, the findings also show that behavioral intention does not determine GenAI chatbot brand trust. That behavioral intention does not predict GenAI chatbot brand trust suggests a complex interrelation between users’ intentions and their trust in the technology. Although behavioral intention is commonly taken to signal that a user will use a given tool, this research finds that mere intention is not a key determinant of brand trust. This divergence from traditional TAM expectations may be explained by the unique characteristics of GenAI chatbots, where intention reflects curiosity or social influence rather than actual confidence in the system. In such contexts, trust appears to be formed less by willingness to engage and more by perceptions of the chatbot’s reliability, accuracy, and responsiveness over time. This highlights the importance of long-lasting, positive experiences with the technology as the basis for trust-building rather than a mere statement of intentions. A person can be willing to use a chatbot on the basis of recommendations or habits yet still harbor doubts about its reliability or efficiency. This means that trust involves more than readiness to utilize the technology; it presupposes stable and reliable performance that satisfies user expectations. Comparative studies, such as Raman et al. (2024b), who contrasted ChatGPT and Bard in educational contexts, reinforce this interpretation: they found that perceived usefulness and accuracy were far stronger predictors of trust than behavioral intention. This suggests that in higher education, trust in GenAI chatbots is contingent on demonstrated performance and reliability rather than users’ initial willingness to adopt them. Theoretically, this highlights the importance of treating trust formation as contingent upon demonstrated reliability and consistent experience rather than mere stated willingness.
Practically, institutions and developers must recognize that motivating students to intend to use a chatbot is not sufficient; trust must be earned through transparency, accuracy, and performance over time. Furthermore, it has been noted that trust is acquired and nurtured during the experiences that showcase the value of the technology, which implies that to build and sustain trust in GenAI chatbots, educational organizations should, first, pay attention to the quality of interactions (Kuchinka et al., 2018).
Additionally, the study revealed that the perceived ease of use of chatbots does not affect attitudes toward GenAI chatbot adoption. That ease of use does not influence attitudes toward GenAI chatbot adoption challenges the traditional assumption of the TAM, according to which ease of use is a key driver of user attitudes (Davis, 1989). This implies that factors such as perceived usefulness, effectiveness, and trust may matter more to students than the ease with which the chatbot can be used. Although ease of use can improve the overall user experience, as noted by Rahman et al. (2025), it is unlikely to significantly affect attitudes if people feel that the chatbot is ineffective or of little value as a means of communication. This observation supports the view that educational technologies should not only be easy to use but also deliver outcomes visible to the user (Wu et al., 2024). Additionally, a positive attitude toward a technology is commonly shaped less by its ease of use than by the perception of its ability to satisfy user needs. Therefore, although user-friendliness is a notable concern, institutions should pair GenAI chatbots with practical benefits to foster positive perceptions and increase the willingness to use them. The study also revealed that the perceived ease of use of chatbots does not affect behavioral intentions toward GenAI chatbots, emphasizing the complexity of user decision-making. This finding suggests that a user-friendly interface alone may not shape students’ intentions to engage with a certain chatbot brand. Rather, intentions can hinge on other aspects, including perceived usefulness and whether the chatbot supports their academic lives in general. In line with what Al-Adwan et al.
(2023) indicate, intentions usually develop around the perceived value that the technology can offer, as opposed to its ease of use. Moreover, students may focus more on the concrete advantages that a chatbot can provide (better learning performance, less paperwork, etc.) than on the ease of using the technology. Therefore, educational institutions and developers should concentrate on demonstrating the individual benefits of their GenAI chatbots, as this might foster more positive brand intent among users, regardless of how easy the product is perceived to be to use. Furthermore, the study revealed that the perceived ease of use of chatbots determines the perceived usefulness of GenAI. This suggests that the easier a GenAI chatbot is to use, the more likely students are to find it a useful tool that improves their learning process. According to Davis (1989), ease of use is a strong determinant of technology acceptance, affecting how users judge the capabilities of the technology. This relationship is critical in learning settings, where learners tend to seek tools that are not only practical for learning but also require little effort. This finding aligns with what is commonly reported in the literature, which reveals positive relationships between perceived ease of use and perceived usefulness (Al-Adwan et al., 2023; Falebita & Kok, 2025; Wu et al., 2024). Notably, designing user-friendly chatbots is necessary because of the positive outcomes associated with increased acceptance and incorporation in educational settings.
This study’s findings also show that the perceived usefulness of chatbots determines attitudes toward GenAI chatbot adoption. This further supports the pivotal role of perceived efficiency in shaping users’ willingness to adopt a technology. It implies that when students see a GenAI chatbot as useful, as a tool to improve their knowledge, streamline their workflow, or obtain timely help, they are more likely to build a positive attitude toward its use. The TAM holds that this relationship is essential because perceived usefulness is a key predictor of user acceptance (Davis, 1989). This suggests that users with higher expectations of how a chatbot will improve their academic performance or streamline administrative work are more likely to think positively about it (Rahman et al., 2025). In addition, chatbots that provide personal support and help students comprehend difficult topics create a more positive attitude (Groothuijsen et al., 2024). Similarly, the perceived usefulness of chatbots was found to determine behavioral intentions toward GenAI chatbots, indicating that user perception is crucial to the intention to adopt the tool in learning. Students who believe that chatbots may help improve their academic performance (for example, through personalized help or instant feedback) are more likely to be willing to use them regularly (Falebita & Kok, 2024a; Groothuijsen et al., 2024). This finding agrees with the TAM, which posits that perceived usefulness significantly influences the attitude and intention to use a technology (Davis, 1989). Moreover, research by Prastiawan et al. (2021) confirms that recognizing the practical benefits of GenAI chatbots (time savings and better learning experiences) can minimize resistance to adoption.
Thus, a better understanding of how an educational institution can benefit from these chatbots can make students more engaged and willing to help one another, ultimately benefiting the learning process.
This study revealed that social influence determines attitudes toward GenAI chatbot adoption. This suggests that when students observe their peers or academic leaders positively engaging with or successfully using GenAI chatbots to solve problems or complete tasks, they develop a positive attitude toward these tools. As Ajzen (1991) observes, people tend to seek opinions in their social groups when developing attitudes and making decisions. This phenomenon can generate a supportive environment in which the use of GenAI chatbots becomes socially acceptable, directly increasing students’ receptivity to exploring technological opportunities. Studies have shown that positive experiences and recommendations from peers play an essential role in influencing attitudes and consequently increasing adoption rates (Chai et al., 2020; Chen & Li, 2010; Falebita & Kok, 2025; Prastiawan et al., 2021). Additionally, social influence was found to determine behavioral intentions toward GenAI chatbots, indicating the significance of users’ shared experiences for brand perceptions. Seeing peers use and even promote a particular chatbot enhances the brand’s credibility and appeal. This is in line with Venkatesh et al. (2003), who reveal that social endorsements may foster brand loyalty and preference. Exposing a chatbot to a social environment can form a positive feedback loop: the more people use it, the more credible it appears, which encourages further use. The implication is that developers ought to utilize the power of social proof, including testimonies and peer reviews, to boost behavioral intention, which would eventually facilitate increased adoption of GenAI chatbots in learning institutions.
Similarly, the study revealed that social influence determines the GenAI chatbot’s perceived ease of use. This further highlights the importance of shared user experiences and how they can influence perceived ease of use. Influential students can allay the fears of prospective users by sharing pleasant experiences with a chatbot brand. As proposed by Suh (2023), social validation may diminish apprehension and evoke trust in new technologies. This implies that students may perceive these tools as easy to access and use because they have watched other students successfully use GenAI chatbots. Finally, social influence was found to negatively determine the GenAI chatbot’s perceived usefulness. Notably, the negative coefficient indicates that stronger social influence is associated with lower perceptions of usefulness. This unexpected result contradicts conventional expectations, which often assume that peer influence increases usefulness perceptions, and warrants further exploration. A possible explanation is that heightened reliance on peer endorsement may lead some students to view the chatbot as less independently valuable, potentially undermining their personal sense of utility. Additionally, some students may perceive a chatbot brand as useful only after personal experience with it, rather than relying on their peers’ views.

6. Conclusions

This study investigates the factors that determine behavioral intention and trust in the adoption of GenAI chatbots among students in higher education. The results indicate that chatbot attitudes play a pivotal role in shaping behavioral intentions as well as brand trust, which supports the need to ensure good user experiences and perceptions. Interestingly, although behavioral intention is often expected to influence brand trust, the results of this study indicate no direct BI → Brand Trust path in the model. This means that while intention reflects willingness to adopt GenAI chatbots, it does not automatically translate into trust in the brand, highlighting the need to distinguish between stated intention and the formation of brand-level confidence. Additionally, ease of use is not directly linked to attitudes or behavioral intentions but is essential in determining how useful the brand is perceived to be. This perceived usefulness, in turn, is crucial for attitudes and behavioral intentions, so chatbots should be designed such that their positive impact on the user is obvious. Social influence is a strong variable that shapes attitudes and behavioral intentions, which implies that peer approval and recommendations are vital in prompting adoption. The results of this research emphasize the necessity of developing approaches that build on users’ perceptions to promote higher levels of interaction with GenAI chatbots. Future studies should examine these relationships in various teaching environments and investigate other factors that can influence users’ adoption of emerging technologies.

7. Implications for Practice

The results show the critical influence of attitudes and perceived usefulness in developing students’ behavioral intentions and brand trust in adopting GenAI chatbots. It is paramount that higher education institutions develop homegrown GenAI chatbots and foster a positive attitude toward these technologies, since this will translate into positive behavioral intentions and trust, particularly among students. Raising awareness through AI literacy that highlights the advantages of GenAI chatbots, including their personalized learning experience and effective access to resources, may shape positive perceptions of such tools among students. When students perceive usefulness in these tools, they are more willing to engage with and promote them, and this ripple effect increases overall adoption. Another aspect that should not be disregarded is the influence of social factors on the development of attitudes and perceived ease of use. Institutions should use peer recommendations and testimonials to reinforce the importance of GenAI chatbots. Schools can develop a supportive community or peer-endorsed adoption strategies by welcoming the sharing of positive experiences and stories of how easy chatbots are to use. However, the negative link between social influence and perceived usefulness suggests that relying solely on peer endorsements may undermine students’ perceptions of GenAI chatbots’ usefulness. In practice, educators and developers should prioritize strategies that promote direct, hands-on experiences and personalized interactions, enabling students to independently recognize the tool’s value rather than depending on social influence. Moreover, whereas perceived ease of use does not directly determine attitudes and intentions regarding brands, it does significantly influence perceived usefulness.
Thus, institutions must be concerned with providing GenAI chatbots that can be easily used and accessed. This should be achieved by making them user-friendly and providing extensive training, allowing students to master these tools effectively. Overall adoption of GenAI chatbots can be increased by making higher education institutions more hospitable to their development and usability and by creating a positive social context in which faculty members encourage students to adopt GenAI chatbots in their learning, making the whole process more appealing and sustainable.

8. Limitations and Future Research

The study concentrates mainly on higher education institutions in selected regions of Nigeria, which may restrict the generalizability of the findings to students in higher education elsewhere. Future research could extend this work by conducting cross-cultural comparative studies to examine how contextual differences shape trust in GenAI chatbots, as well as longitudinal studies to track how users’ trust evolves over time with sustained exposure and interaction. Moreover, the use of convenience sampling reduces generalizability, and reliance on self-reported data may introduce bias, since participants might not report their actual perceptions of and actions related to GenAI chatbots; however, we minimized common method bias through procedural and statistical checks, and we recommend that future research adopt probability sampling and triangulate self-reports with alternative data sources such as classroom observations or system-generated usage data. Future studies should also consider a more diverse sample to support external validity and conduct longitudinal studies to determine how perceptions of usefulness and brand trust develop over time. Moreover, investigating how particular characteristics of GenAI chatbots influence user interactions and satisfaction might provide additional knowledge of the strengths of chatbots in higher education. A deeper understanding of users’ experiences and perceptions might also be achieved by expanding the methodology to include qualitative tools, e.g., interviews or focus groups, yielding a broader picture of the factors influencing the use of GenAI chatbots in higher education.
Additionally, a key methodological limitation concerns the measurement of “chatbot brand trust.” Although the construct was framed around confidence, dependability, and ethical assurance, the questionnaire did not anchor responses to a single identifiable brand. As a result, participants may have referred to different brands (e.g., ChatGPT, Gemini, MetaAI, Claude) or even multiple brands simultaneously, which introduces ambiguity in interpreting brand trust. This limitation may affect the internal validity of the construct; future studies should address this by explicitly specifying brand references or designing brand-specific instruments to ensure consistency in responses. In addition, regarding the model fitness, the SRMR values (0.081–0.083) were slightly above the strict cutoff of 0.08 but within the acceptable range of 0.10, while the NFI (0.731) fell below the ideal threshold of 0.90. These results indicate an acceptable, yet improvable model fit. Future studies should test the model with larger samples or apply CB-SEM for stronger validation.

Author Contributions

Conceptualization, Methodology and Data Analysis, O.S.F.; Writing—original draft preparation, O.S.F., J.A.A., A.A.A., T.O.A., O.K.A.; Writing—review and editing, O.S.F., J.A.A., A.A.A., T.O.A., O.K.A., M.A.A.; Resources, J.A.A., A.A.A., T.O.A., O.K.A., M.A.A.; Supervision, M.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was approved by the University Ethical Review Committee (UERC), Approval Code: ELRC/SE/EDU/001/25, Approval Date: 27 June 2024.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data presented in this study are available at https://drive.google.com/file/d/1MiiZzHmu5oJOZujff6uLWP_dVUTgleab/view?usp=sharing (accessed on 1 September 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Status: Undergraduate [ ] Postgraduate [ ]
Gender: Female [ ] Male [ ]
Institution:
Faculty:
Discipline/Field:
Favorite brands of generative AI chatbots:
Construct | Code | Item
Chatbot Brand Trust | CBT1 | The brand behind the GenAI chatbot is well-regarded in the education sector.
 | CBT2 | I trust the quality of service provided by this GenAI chatbot brand.
 | CBT3 | The brand has a positive reputation among my peers.
 | CBT4 | I believe this brand is committed to ethical practices.
 | CBT5 | The brand is known for its innovation in AI technology.
Attitude towards Chatbots | CATT1 | I feel optimistic about using GenAI chatbots in academic work.
 | CATT2 | I enjoy interacting with GenAI chatbots.
 | CATT3 | I believe that using GenAI chatbots is a good idea.
 | CATT4 | I see GenAI chatbots as valuable tools for learning.
 | CATT5 | I feel that using GenAI chatbots is beneficial for my education.
Chatbot Perceived Usefulness | CPU1 | Using GenAI chatbots enhances my learning efficiency.
 | CPU2 | GenAI chatbots improve my academic performance.
 | CPU3 | I find GenAI chatbots to be helpful in completing my assignments.
 | CPU4 | GenAI chatbots provide valuable information for my studies.
 | CPU5 | GenAI chatbots facilitate my understanding of complex topics.
Chatbot Perceived Ease of Use | CPEU1 | I find it easy to interact with GenAI chatbots.
 | CPEU2 | Learning to use GenAI chatbots is straightforward.
 | CPEU3 | I can easily navigate the functionalities of GenAI chatbots.
 | CPEU4 | The design of GenAI chatbots makes them user-friendly.
 | CPEU5 | I require little effort to use GenAI chatbots effectively.
Chatbot Social Influence | CSN1 | My peers encouraged me to use GenAI chatbots for academic purposes.
 | CSN2 | Most students I know use GenAI chatbots for their studies.
 | CSN3 | My instructors advocate for the use of GenAI chatbots.
 | CSN4 | I feel social pressure to use GenAI chatbots in my coursework.
 | CSN5 | People whose opinions I value use GenAI chatbots.
Chatbot Behavioral Intention | CBI1 | I intend to use GenAI chatbots for my studies in the future.
 | CBI2 | I plan to recommend GenAI chatbots to my peers.
 | CBI3 | I would actively seek out help from GenAI chatbots for academic purposes.
 | CBI4 | I expect to use GenAI chatbots regularly in my coursework.
 | CBI5 | I am likely to incorporate GenAI chatbots into my study routine.

Figure 1. Proposed model of the GenAI chatbot behavioral intention and brand trust.
Figure 2. Structural model of the GenAI chatbot behavioral intention and brand trust.
Table 1. Characteristics of the respondents.

Characteristics | Level | N | %
Gender | Male | 348 | 57.1%
 | Female | 261 | 42.9%
Age | Below 20 years | 58 | 9.5%
 | 20–24 years | 314 | 51.6%
 | 25–29 years | 119 | 19.5%
 | 30–34 years | 74 | 12.2%
 | 35–39 years | 18 | 3.0%
 | 40–44 years | 15 | 2.5%
 | Above 44 years | 11 | 1.8%
Faculty | Management Science | 77 | 12.6%
 | Sciences | 267 | 43.8%
 | Education | 126 | 20.7%
 | Social Sciences | 91 | 14.9%
 | Arts | 48 | 7.9%
Status | Postgraduates | 152 | 25.0%
 | Undergraduates | 457 | 75.0%
Total | | 609 | 100.0%
Table 2. Measurement model assessment results.

Construct | Indicator | Outer Loading | VIF | CA | CR | AVE
CATT | CATT1 | 0.800 | 1.905 | 0.898 | 0.899 | 0.714
 | CATT2 | 0.750 | 1.662
 | CATT3 | 0.880 | 3.102
 | CATT4 | 0.894 | 3.578
 | CATT6 | 0.890 | 3.200
CBI | CBI1 | 0.758 | 1.471 | 0.800 | 0.804 | 0.624
 | CBI4 | 0.817 | 1.674
 | CBI5 | 0.775 | 1.725
 | CBI6 | 0.808 | 1.760
CPEU | CPEU1 | 0.762 | 1.735 | 0.862 | 0.870 | 0.644
 | CPEU2 | 0.833 | 2.077
 | CPEU3 | 0.836 | 2.216
 | CPEU4 | 0.834 | 2.179
 | CPEU6 | 0.741 | 1.796
CPR | CTB1 | 0.860 | 2.607 | 0.919 | 0.927 | 0.755
 | CTB2 | 0.881 | 3.017
 | CTB3 | 0.885 | 3.125
 | CTB4 | 0.904 | 3.253
 | CTB6 | 0.812 | 2.027
CPU | CPU2 | 0.832 | 2.107 | 0.897 | 0.905 | 0.707
 | CPU3 | 0.828 | 2.197
 | CPU4 | 0.877 | 2.877
 | CPU5 | 0.823 | 2.250
 | CPU6 | 0.844 | 2.261
CSN | CSN1 | 0.775 | 1.731 | 0.840 | 0.851 | 0.610
 | CSN2 | 0.830 | 2.024
 | CSN3 | 0.844 | 2.351
 | CSN5 | 0.750 | 1.676
 | CSN6 | 0.696 | 1.633
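As a sanity check on the convergent-validity figures in Table 2, AVE and composite reliability can be recomputed from the outer loadings. A minimal sketch using the CATT loadings (AVE as the mean squared loading; rho_c via the standard composite-reliability formula — note that the reported CR may use a different estimator, so only AVE is compared against the table):

```python
# Hedged sketch: recomputing AVE (and rho_c) from the CATT outer loadings in Table 2.
loadings = [0.800, 0.750, 0.880, 0.894, 0.890]  # CATT1-CATT4, CATT6

def ave(lams):
    """Average variance extracted: mean of the squared standardized loadings."""
    return sum(l * l for l in lams) / len(lams)

def composite_reliability(lams):
    """rho_c = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    s = sum(lams)
    e = sum(1 - l * l for l in lams)  # summed indicator error variances
    return s * s / (s * s + e)

print(round(ave(loadings), 3))  # 0.714, matching the AVE reported for CATT
```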
Table 3. Discriminant validity.

Fornell–Larcker Criterion
Constructs | CATT | CBI | CPEU | CPR | CPU | CSN
CATT | 0.845
CBI | 0.757 | 0.790
CPEU | 0.540 | 0.605 | 0.802
CPR | 0.570 | 0.418 | 0.267 | 0.869
CPU | 0.412 | 0.492 | 0.786 | 0.138 | 0.841
CSN | 0.760 | 0.699 | 0.553 | 0.430 | 0.332 | 0.781

Heterotrait–Monotrait Ratio (HTMT)
Constructs | CATT | CBI | CPEU | CPR | CPU | CSN
CATT |
CBI | 0.885
CPEU | 0.599 | 0.725
CPR | 0.624 | 0.480 | 0.295
CPU | 0.447 | 0.574 | 0.877 | 0.149
CSN | 0.861 | 0.826 | 0.638 | 0.478 | 0.368
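Both discriminant-validity checks in Table 3 can be verified programmatically. An illustrative sketch with values transcribed from Tables 2 and 3 (Fornell–Larcker requires each construct's square root of AVE to exceed its correlations with every other construct; HTMT ratios should stay below the conservative 0.90 threshold):

```python
import math

# AVE values from Table 2; correlations and HTMT ratios from Table 3.
ave = {"CATT": 0.714, "CBI": 0.624, "CPEU": 0.644,
       "CPR": 0.755, "CPU": 0.707, "CSN": 0.610}
corr = {  # off-diagonal entries of the Fornell-Larcker matrix (lower triangle)
    ("CBI", "CATT"): 0.757, ("CPEU", "CATT"): 0.540, ("CPEU", "CBI"): 0.605,
    ("CPR", "CATT"): 0.570, ("CPR", "CBI"): 0.418, ("CPR", "CPEU"): 0.267,
    ("CPU", "CATT"): 0.412, ("CPU", "CBI"): 0.492, ("CPU", "CPEU"): 0.786,
    ("CPU", "CPR"): 0.138, ("CSN", "CATT"): 0.760, ("CSN", "CBI"): 0.699,
    ("CSN", "CPEU"): 0.553, ("CSN", "CPR"): 0.430, ("CSN", "CPU"): 0.332,
}
htmt = {
    ("CBI", "CATT"): 0.885, ("CPEU", "CATT"): 0.599, ("CPEU", "CBI"): 0.725,
    ("CPR", "CATT"): 0.624, ("CPR", "CBI"): 0.480, ("CPR", "CPEU"): 0.295,
    ("CPU", "CATT"): 0.447, ("CPU", "CBI"): 0.574, ("CPU", "CPEU"): 0.877,
    ("CPU", "CPR"): 0.149, ("CSN", "CATT"): 0.861, ("CSN", "CBI"): 0.826,
    ("CSN", "CPEU"): 0.638, ("CSN", "CPR"): 0.478, ("CSN", "CPU"): 0.368,
}

# Fornell-Larcker: sqrt(AVE) of both constructs must exceed their correlation.
fl_ok = all(math.sqrt(ave[a]) > r and math.sqrt(ave[b]) > r
            for (a, b), r in corr.items())
# HTMT: every ratio below 0.90.
htmt_ok = all(r < 0.90 for r in htmt.values())
print(fl_ok, htmt_ok)  # both checks pass for the reported values
```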
Table 4. Assessment of the structural model results.

Path | β | VIF | t Value | p Value | f² | Remark
CATT → CBI | 0.451 | 2.566 | 10.793 | 0.000 | 0.229 | S
CATT → CTB | 0.594 | 3.297 | 9.305 | 0.000 | 0.224 | S
CBI → CTB | −0.032 | 2.753 | 0.544 | 0.586 | 0.001 | NS
CPEU → CATT | 0.042 | 2.136 | 0.523 | 0.601 | 0.001 | NS
CPEU → CBI | 0.128 | 2.377 | 1.787 | 0.074 | 0.014 | NS
CPEU → CPU | 0.868 | 1.000 | 29.355 | 0.000 | 1.421 | S
CPU → CATT | 0.151 | 1.870 | 2.278 | 0.023 | 0.021 | S
CPU → CBI | 0.124 | 1.873 | 2.076 | 0.038 | 0.016 | S
CSN → CATT | 0.687 | 1.000 | 21.096 | 0.000 | 0.797 | S
CSN → CBI | 0.244 | 1.916 | 5.943 | 0.000 | 0.063 | S
CSN → CPEU | 0.553 | 2.619 | 17.555 | 0.000 | 0.441 | S
CSN → CPU | −0.148 | 2.359 | 4.779 | 0.000 | 0.041 | S

Note: S = significant; NS = not significant.
Table 5. Summary of hypothesis testing.

Hypothesis | Statement | Decision
H1 | Attitudes toward GenAI chatbot adoption determine the intention to use GenAI chatbots. | Accepted
H2 | Attitudes toward GenAI chatbot adoption determine GenAI chatbot brand trust. | Accepted
H3 | Intentions to use the GenAI chatbot determine its brand trust. | Rejected
H4 | GenAI chatbot perceived ease of use determines attitudes toward GenAI chatbot adoption. | Rejected
H5 | GenAI chatbot perceived ease of use determines the intention to use GenAI chatbots. | Rejected
H6 | GenAI chatbot perceived ease of use determines GenAI chatbot perceived usefulness. | Accepted
H7 | GenAI chatbot perceived usefulness determines attitudes toward GenAI chatbot adoption. | Accepted
H8 | GenAI chatbot perceived usefulness determines the intention to use GenAI chatbots. | Accepted
H9 | Social influence determines attitudes towards GenAI chatbot adoption. | Accepted
H10 | Social influence determines GenAI chatbot behavioral intention. | Accepted
H11 | Social influence determines the GenAI chatbot’s perceived ease of use. | Accepted
H12 | Social influence determines the GenAI chatbot’s perceived usefulness. | Accepted
Table 6. PLS predictive assessment summary.

Construct | R² | Q²predict
CATT | 0.606 | 0.574
CBI | 0.650 | 0.484
CPEU | 0.306 | 0.303
CTB | 0.326 | 0.178
CPU | 0.633 | 0.106
Table 7. Model fit indices for the structural model.

Fit Index | Saturated Model | Estimated Model
SRMR | 0.081 | 0.083
d_ULS | 2.841 | 2.981
d_G | 1.219 | 1.222
Chi-square | 3866.443 | 3876.088
NFI | 0.731 | 0.731