Article

Understanding Trust and Willingness to Use GenAI Tools in Higher Education: A SEM-ANN Approach Based on the S-O-R Framework

by Yue Zhang 1,†, Jiayuan Guo 2,†, Yun Wang 2, Shanshan Li 2, Qian Yang 2, Jiajin Zhang 2 and Zhaolin Lu 2,*
1 School of Architecture and Art, Hefei University of Technology, No. 193, Tunxi Road, Hefei 230009, China
2 School of Design and Arts, Beijing Institute of Technology, No. 5, South Street, Zhongguancun, Haidian District, Beijing 100081, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Systems 2025, 13(10), 855; https://doi.org/10.3390/systems13100855
Submission received: 21 August 2025 / Revised: 22 September 2025 / Accepted: 26 September 2025 / Published: 28 September 2025

Abstract

Student trust plays a pivotal role in shaping the future integration of artificial intelligence (AI) in higher education. This study investigates how AI Facilitating Conditions (FCs), Performance Expectancy (PE), and task type influence students’ System-like Trust (AST) and Human-like Trust (AHT) in AI and further examines the mediating role of human-like trust in fostering the willingness to continue AI-assisted learning. Drawing on valid data collected from 466 Chinese university students, we employed partial least squares structural equation modeling (PLS-SEM) in combination with artificial neural networks (ANNs) to test the hypothesized relationships, mediating mechanisms, and the relative importance of influencing factors. The findings indicate that AI facilitating conditions significantly enhance both system-like trust and usage intention, and that performance expectancy exerts a positive effect on both forms of trust, with particularly strong effects observed in subjective tasks. Moreover, system-like trust positively promotes human-like trust, and the two dimensions jointly strengthen students’ intention to engage in AI-assisted learning. Results from the ANN analysis further highlight that performance expectancy, system-like trust, and facilitating conditions are the primary determinants of system-like trust, human-like trust, and usage intention, respectively. This study extends the application of interpersonal trust theory to the AI domain and offers theoretical insights for fostering more positive and effective patterns of AI adoption in higher education.

1. Introduction

In recent years, with the continuous advancement of artificial intelligence (AI) technology, the emergence of Large Language Models (LLMs) has enabled machines to recognize and generate coherent text with remarkable accuracy [1]. As theoretical algorithms transition into practical applications, AI tools are evolving at an unprecedented pace, especially in the realm of Artificial Intelligence Generated Content (AIGC), which has demonstrated immense potential. In 2022, OpenAI introduced ChatGPT, an optimized conversational language model built on GPT-3.5 that quickly gained widespread recognition for its powerful capabilities in information processing, text generation, and human–computer interaction. ChatGPT’s rapid rise as a transformative tool across various industries highlights the profound impact of AIGC-driven technological advancements [2]. These tools are reshaping how individuals access information and create content, fundamentally altering the landscape of digital interaction.
ChatGPT and similar systems generate content from large text corpora, enabling users to solve complex problems and receive human-like responses. These tools not only reshape human–computer interaction but also introduce new forms of interactive learning [3]. As a demographic that rapidly adopts new technologies and has a strong demand for problem-solving, information acquisition, and learning efficiency, university students have emerged as one of the primary user groups of AI chatbots such as ChatGPT [4].
However, alongside the efficiency and convenience brought by AIGC tools, new challenges have emerged. From an algorithmic perspective, AI-generated information often operates as a “technical black box” due to its opacity, lack of interpretability, and inherent uncertainty [5]. When AI systems provide incorrect or misleading answers, potentially resulting in negative consequences for users, trust-related concerns may arise, leading to a “crisis of trust”. The trust relationship between humans and AI has become a crucial factor influencing user experience and the willingness to adopt AI tools such as ChatGPT. In this context, trust refers to users’ subjective confidence in AI-generated content, which directly affects the practical applications of AI tools [6].
Prior research suggests that the trust users place in AI significantly shapes their attitudes toward adopting the technology, ultimately determining AI’s long-term role in various domains [7]. Given the growing integration of AI in education, it is essential to investigate students’ perceptions and usage behaviors to facilitate the deeper and more effective application of AIGC in higher education. This study, therefore, focuses on two key aspects: user trust and willingness to use AI in human–AI interactions. Scholars have widely explored the topics of human–AI trust and AI acceptance. For instance, Das and Chernova [8] found that individuals are more likely to trust and rely on AI systems when tackling complex tasks. Wang et al. [9], drawing on the Theory of Planned Behavior (TPB) and AI literacy, found that college students’ attitudes mediate the effects of AI literacy and subjective norms on their intention to use generative AI, making AI literacy and social norms key factors in college students’ use of generative AI. Choung et al. [10], using the Technology Acceptance Model (TAM), examined the impact of human-like trust and functional trust on AI acceptance in education. Their findings demonstrated the significant role of trust in shaping users’ willingness to continue using AI and provided valuable insights for subsequent research on human–AI trust.
Although existing research provides valuable insights into AI adoption behavior in education, significant gaps remain. Theoretical frameworks such as TAM and Unified Theory of Acceptance and Use of Technology (UTAUT) mainly emphasize adoption behavior. While they examine factors such as trust and perceived usefulness, they typically conceptualize trust as a single dimension, without distinguishing its rational and emotional mechanisms [11]. As a result, the processes by which students develop trust in AI in educational contexts, and how such trust translates into sustained usage intention, remain insufficiently understood. This study addresses this gap by examining how rational (system-like) trust develops into affective (human-like) trust and how this dual-dimensional trust shapes continued usage intention.
At the same time, this study introduces antecedent variables including AI Facilitating Conditions (FCs), Performance Expectancy (PE), and perceived AI ability in subjective versus objective tasks (ST and OT). It adopts a combined approach of structural equation modeling (SEM) and artificial neural networks (ANNs) to comprehensively examine variable relationships and predictive significance. By integrating theoretical and methodological perspectives, this study seeks to advance the understanding of trust formation and application mechanisms for generative AI in education, offering both theoretical contributions and practical insights to encourage positive, rational, and sustainable use of AI tools among university students.

2. Literature Review

2.1. Artificial Intelligence in Higher Education

In the technological revolution driven by generative AI, particularly large language models (LLMs), the ways in which information is accessed and content is created in higher education are undergoing profound change. Tools such as ChatGPT and DeepSeek are rapidly shaping teaching and academic practices, drawing on the language processing and content generation capabilities of models pre-trained on massive datasets [3]. AI chatbots can assist university students by improving learning processes and enhancing performance across a variety of complex tasks [12]. Against this backdrop, scholars have examined AI applications in higher education and their outcomes. Lund et al. [13] found that AI chatbots improve research quality across stages from idea generation to editing and proofreading, offering particular support to students with language barriers through grammar assessment and correction, which enhances clarity, coherence, and overall writing proficiency. Tossell et al. [14] reported that generative AI is becoming a collaborative resource in academic writing, where calibrated trust enhances student experiences across writing stages. In addition, AI demonstrates strong auxiliary functions in creative and open-ended tasks by providing inspirational input from novel perspectives [15]. Collectively, these studies suggest that generative AI is driving innovation in higher education learning methods across multiple dimensions, offering students more efficient learning support.
At the same time, the widespread adoption of generative AI raises risks and challenges, particularly around user trust and willingness to adopt. If students lack trust in or resist AI tools, the expected benefits of applying them in higher education may not materialize [16]. Conversely, excessive trust, such as overreliance, can lead to misuse, dependency, and ethical concerns [17]. Users’ understanding of new technologies and their level of trust are mutually reinforcing, with continued use helping to calibrate trust to appropriate levels, thereby increasing confidence and effectiveness in using the technology [18]. Moreover, willingness to use AI tools is closely associated with student learning outcomes in higher education [19]. In light of this, this paper focuses on two themes: university students’ trust in generative AI chatbots and their willingness to continue using them for academic purposes.

2.2. Human-Intelligence Trust

Artificial intelligence tools exhibit complexity and uncertainty in user interactions, and their widespread adoption has drawn increasing scholarly attention to the issue of user trust in AI systems [7]. Previous studies have proposed various models for classifying technological trust. In the field of human–machine trust, McKnight et al. [20] developed a model comprising functionality, reliability, and usefulness. This framework emphasizes rational evaluations of a system’s capabilities, stability, and output quality, essentially defining machines as “tools.” However, as artificial intelligence technology advances, such traditional frameworks are insufficient for explaining trust in human–AI interaction, requiring models that incorporate the distinctive characteristics of generative AI.
McAllister [21] examined trust in both human–human and human–machine interactions, emphasizing that technologies integrate functional and social attributes, and categorized trust into cognition-based trust and affect-based trust. The former is grounded in rational assessments of competence, reliability, and expertise, whereas the latter arises from emotional bonds and interpersonal care. Research shows that these two types of trust differ in their formation mechanisms and behavioral consequences: cognition-based trust often precedes affect-based trust, while affect-based trust more effectively facilitates collaboration, supportive behavior, and long-term cooperation. Lee and See [22] proposed a tri-process trust model consisting of analytical, analogical, and affective processes. Analytical trust derives from rational risk assessment, analogical trust arises from experiential and categorical judgments, and affective trust becomes critical when rules fail or cognition is constrained, exerting a stronger influence on analytical judgments. Building on this, Choung et al. [10] further divided trust into two dimensions, anthropomorphic trust and functional trust, and demonstrated through two empirical studies that trust plays a central role in the Technology Acceptance Model (TAM). Anthropomorphic trust reflects users’ alignment with an AI’s ethical principles and values, while functional trust emphasizes its capabilities and reliability. This distinction enriches the multidimensional understanding of human–AI trust and offers a new theoretical perspective for exploring trust mechanisms in high-risk and human-centered AI applications.
Generative AI technologies exhibit a degree of autonomy and opacity, making traditional trust frameworks for general information systems less applicable. Building on established human–computer trust theories and drawing on the classifications of AI trust proposed by Choung et al. [10] and Singh et al. [23], this study develops a trust model tailored to generative AI. Within this model, university students’ trust in generative AI is divided into two dimensions: AI System-like Trust (AST) and AI Human-like Trust (AHT). System-like trust reflects rational evaluations of functionality, reliability, and usefulness in human–AI interactions, whereas human-like trust captures users’ emotional bonds formed through anthropomorphizing AI and establishing ethical confidence. This dual framework incorporates both the tool-oriented attributes of AI chatbots emphasized in earlier technology trust models and the human-like characteristics increasingly relevant to trust in generative AI. Prior classifications of trust and the dimensions adopted in this study are summarized in Table 1.

2.3. Trust and Willingness to Use AI

Research indicates that applying artificial intelligence technologies in educational settings can enhance learners’ motivation to acquire knowledge. Shahzad et al. [24] found that ChatGPT and AI-based generative chatbots can increase student learning motivation. To promote student acceptance and usage of AI products, factors influencing students’ willingness to adopt and use such products have garnered extensive attention [25,26]. Bilquise et al. [27] observed that when users are unfamiliar with a system, their level of trust influences adoption decisions. Low trust fosters skepticism towards new systems, thus affecting users’ willingness to adopt them [28]. Regarding the willingness to use AI chatbots and their application performance, Siau et al. [29] identified a lack of trust as a key barrier to the widespread and effective application of AI tools, noting that trust shapes people’s willingness to collaborate with AI. Glikson et al. [7] emphasized that effective integration of AI into organizational collaboration to improve work efficiency critically depends on the trust of individuals in AI technology. To explore human–AI trust issues in higher education settings and further analyze how different types of trust influence students’ willingness to collaborate with AI, this study introduces human–AI trust theory into university students’ learning contexts. It employs two dimensions, ‘System-like trust’ and ‘Human-like trust’, to categorize trust levels and examine how these types of trust affect university students’ willingness to use generative AI tools.

3. Research Model and Hypothesis

This paper constructs a hypothetical model framework based on the Stimulus-Organism-Response (S-O-R) theory. The S-O-R model is an important research framework in cognitive and educational psychology, primarily used in studies of user behavior, and has in recent years demonstrated considerable applicability in exploring social phenomena [30]. The model comprises three levels: stimulus, organism, and response, which represent, respectively, triggering factors from the external environment, the user’s emotional state or internal psychological mechanisms, and the subsequent attitude changes or behavioral responses arising from emotional and cognitive processing. In operation, stimulus factors act as external conditions that influence the user’s cognitive processes and psychological state, ultimately producing a response [31].
This study identifies three key motivating factors driving the use of AI chatbots in higher education: AI Facilitating Conditions (FCs), Task Performance Expectancy (PE), and task type. These external factors shape the internal perceptions of users, specifically system-like trust and human-like trust in AI chatbots, which ultimately translate into their behavioral willingness to engage in AI-assisted learning. Based on the S-O-R framework and supported by the previous literature, the proposed research model is illustrated in Figure 1. The model hypothesizes that AI facilitating conditions, task performance expectancy, and task type influence both system-like trust and human-like trust in AI. Furthermore, it examines how AI facilitating conditions, system-like trust, and human-like trust collectively affect the willingness of college students to continue adopting AI-assisted learning.

3.1. AI Facilitating Conditions and Trust

In this study, facilitating conditions primarily refer to students’ knowledge, resources, and capabilities in using artificial intelligence (AI) tools for learning [32,33]. Facilitating conditions encompass students’ understanding of the technologies and functionalities of various AI products, as well as their perceptions of the necessary resources, knowledge, and support required for utilizing them in academic activities [34]. Human–AI trust is a complex psychological construct shaped by multiple factors. Facilitating conditions help users develop a deeper understanding of AI, reduce uncertainty and perceived risks, and foster more favorable attitudes toward AI [35,36]. They also enable users to more accurately evaluate the reliability and effectiveness of AI systems, thereby enhancing their trust in AI [37]. Moreover, the presence of adequate facilitating conditions reflects a supportive, reliable, and safe environment, which allows users to better comprehend the capabilities and application scenarios of AI technologies, ultimately strengthening trust during actual use [38]. Based on this reasoning, we propose the following hypotheses:
H1. 
AI facilitating conditions positively influence AI system-like trust.
H2. 
AI facilitating conditions positively influence AI human-like trust.

3.2. Task Performance Expectancy and Trust

When students hold high expectations for the performance of AI tools and believe they can consistently deliver accurate and valuable information, they are more likely to trust the system. According to cognitive load theory, when tasks are complex, humans may experience cognitive load, leading to heightened expectations of AI performance and a greater tendency to rely on AI systems for support [39]. Existing research further indicates that performance expectancy is the most critical factor determining individuals’ attitudes toward technology. When faced with tasks involving higher performance expectations, users are more likely to develop cognitive trust in AI systems’ ability to address these challenges [40]. Moreover, since outcomes of tasks with higher performance expectations are often uncertain and difficult to verify independently, individuals may develop a form of affective trust toward AI [41]. Building on these theoretical insights, this study examines the influence of performance expectancy on both system-like and human-like trust and proposes the following hypotheses:
H3. 
Performance expectancy positively influences AI system-like trust.
H4. 
Performance expectancy positively influences AI human-like trust.

3.3. Type of Task and Trust

This study examines how university students utilize AI in different types of tasks and how these tasks influence trust in AI. Objectivity refers to the quality of being impartial, unbiased, and independent of human opinions [42]. Based on the level of objectivity, tasks can be classified into subjective tasks (STs) and objective tasks (OTs). Subjective tasks are characterized by lower objectivity, a greater role for personal opinions or emotions, less certainty in outcomes, and more open-ended responses [43]; examples include opinion-based discussions and creative ideation. Objective tasks, on the other hand, are quantifiable, fact-based, and rely more on rational thinking, with clearly defined correct or incorrect outcomes [44]; examples include data analysis and programming.
Recent empirical studies in educational contexts further indicate that students’ attitudes toward AI vary across task types. Students focus not only on AI’s performance in logical organization and functionality; its anthropomorphic characteristics also play a crucial role in trust formation. For instance, Pitts [45] points out that in subjective tasks, students often establish affective trust through AI’s anthropomorphic interactive features, and this human-like trust is significantly linked to their willingness to use it. In contrast, objective tasks with clear standard answers and evaluation criteria may lead students to focus more on AI’s system-like attributes. Yang et al. [46] investigated how task objectivity, time pressure, and cognitive load influence user trust in algorithms. They found that higher task objectivity and time pressure significantly enhance algorithmic trust, markedly increasing users’ reliance on the system’s performance. Based on this literature, we propose that subjective and objective tasks may influence students’ system-like and human-like trust in AI through distinct pathways, leading to the following hypotheses:
H5. 
Perceived AI ability in subjective tasks has a positive effect on AI system-like trust.
H6. 
Perceived AI ability in subjective tasks has a positive effect on AI human-like trust.
H7. 
Perceived AI ability in objective tasks has a positive effect on AI system-like trust.
H8. 
Perceived AI ability in objective tasks has a positive impact on AI human-like trust.

3.4. AI System-Like Trust (AST) and AI Human-Like Trust (AHT)

Drawing on Choung et al.’s [10] research on AI trust, this study distinguishes trust in AI into two dimensions: system-like trust and human-like trust. AI system-like trust reflects cognitive trust in the functionality, reliability, and usefulness of AI in human–AI interactions. In contrast, AI human-like trust captures users’ emotional and ethical trust in AI, such as trust in its benevolence, integrity, and dependability [47]. McAllister [21] demonstrated that when individuals develop strong cognitive trust in their collaborative partners, they also tend to establish high levels of emotional trust. Following this reasoning, this study proposes the following hypothesis regarding the relationship between AI system-like trust and AI human-like trust:
H9. 
AI system-like trust positively influences AI human-like trust.

3.5. Willingness to Continue Using AI-Assisted Learning

Building on prior research on human–AI trust in educational contexts, this study further examines how both system-like trust and human-like trust affect students’ willingness to use AI in learning, as well as which dimension exerts a stronger influence on continuance usage intention (CUI). Previous studies have emphasized that facilitating conditions constitute a critical factor in determining willingness to use and behavioral intention [48]. For example, Topsakal et al. [49] found that users’ perception of the convenience of generative AI directly shapes their trust in AI and their willingness to adopt AI tools. Integrating these insights with the UTAUT framework, this study argues that facilitating conditions—representing users’ cognitive understanding, mastery, and external support for AI—may not only enhance trust in AI (H1–H2) but also directly increase students’ willingness to continue using AI-assisted learning. Thus, the following hypothesis is proposed:
H10. 
AI facilitating conditions positively influence continuance usage intention.
Prior studies have also consistently shown that users’ trust in AI tools is positively associated with their willingness to adopt and continue using them. Topsakal et al. [50] demonstrated that trust in AI significantly enhances users’ willingness to engage with it, while Prakash et al. [51] found that trust in the functional capabilities of chatbots strongly predicts intention to continue use. Similarly, Delgosha et al. [52] confirmed that higher levels of functional trust foster users’ cognitive absorption and exploratory behaviors toward the technology. In the context of the sharing economy, Califf et al. [53] further revealed that both human-like and system-like trust positively affect enjoyment and continuance intention, with human-like trust exerting a particularly notable impact. These findings collectively suggest that trust in AI substantially shapes users’ willingness to engage with the technology. Accordingly, this study investigates the dual-dimensional pathways through which system-like and human-like trust influence continuance usage intention and proposes the following hypotheses:
H11. 
AI system-like trust positively influences continuance usage intention.
H12. 
AI human-like trust positively influences continuance usage intention.

4. Methodology

4.1. Data Collection and Sampling Method

The participants in this study were recruited from multiple universities in China, encompassing undergraduate, master’s, and doctoral students across diverse academic disciplines, including science and technology as well as humanities and social sciences. This broad participant pool enhances the representativeness of the sample. Students were invited to complete the questionnaire both online and offline through the Questionnaire Star platform. Prior to participation, all respondents were screened to confirm that they had prior experience using linguistic AI tools (e.g., ChatGPT), ensuring that the sample consisted exclusively of users familiar with AI applications.
A total of 500 questionnaires were collected through online and offline channels. After excluding 34 responses due to abnormal completion time or poor response quality, the final dataset consisted of 466 valid responses. All participants were current students (n = 466) with prior experience using linguistic AI tools and reported using such tools at least once per month. The sample distribution was as follows: master’s students comprised the largest proportion (n = 233, 50%), followed by undergraduate students (n = 216, 46.4%) and doctoral students (n = 17, 3.6%). The sample therefore contains a slightly higher proportion of master’s students than undergraduate students, with doctoral students comparatively underrepresented. As this study does not focus on differences in AI trust and usage across academic stages, the overall sample size is sufficient to support the SEM and ANN analyses. Nevertheless, perspectives may vary across students at different academic levels, and these background characteristics should be considered by future researchers when interpreting the findings.

4.2. Measurement Instrument

The questionnaire items were adapted from established scales and previous research, with minor modifications to align with the context of this study. All items were measured using a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree).
The measurement items for the dependent variable TRUST are adapted from Lee and See’s description of trust in human-automated systems and Muir and Moray’s proposed measurement dimensions for trust in automated systems [22,54] and are further divided into two parts—system-like trust and human-like trust—based on Choung et al.’s [10] research on trust in AI. The measurement items for AI facilitating conditions were adapted from the research of Balakrishnan et al. [55]. The questions regarding performance expectancy measure three dimensions: the ability, effectiveness, and efficiency of AI tools in addressing academic issues [11]. In measuring the type of task, this study draws upon existing research to categorize tasks into objective and subjective tasks based on their objectivity [56]. The questionnaire guides participants to distinguish different task contexts by providing examples (such as data analysis or creative generation) and separately assesses their perceptions of AI capabilities in subjective and objective tasks. Finally, continuance usage intention is measured through three questions regarding the willingness to use AI tools in learning [57].

4.3. Data Analysis

In this study, the data were analyzed using both Partial Least Squares Structural Equation Modeling (PLS-SEM) and Artificial Neural Networks (ANNs) to gain a more comprehensive understanding of the relationships among the constructs. The PLS-SEM analysis was conducted with SmartPLS version 4.1.0.9 [58]. PLS-SEM was chosen over covariance-based SEM (CB-SEM) because this study is exploratory rather than confirmatory; PLS-SEM also offers higher statistical power and can handle non-normally distributed data [59]. SEM is the primary method for testing whether the hypotheses hold and which factors are significant, but it can only test linear relationships [60]. Artificial neural networks, in contrast, can capture complex non-linear relationships [61]; however, owing to their “black box” nature, ANNs are not suitable for testing hypotheses and causal relationships [62]. Therefore, this study integrates SEM and ANN analyses to test the hypotheses and to rank the importance of predictors of each dependent variable.
The minimum sample size required for this study was estimated using G*Power 3.1.9.7 with f² = 0.15, α = 0.05, power (1 − β) = 0.80, and four predictors, yielding a required minimum of 85; the actual sample of 466 therefore exceeds this requirement.
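For readers who wish to verify this estimate without G*Power, the following sketch recomputes the same a priori power analysis from the noncentral F distribution. This is an illustrative check only: scipy was not part of the study’s toolchain, and the helper function name is ours.

```python
# Illustrative a priori sample-size check for a linear multiple regression
# F test with four predictors, f^2 = 0.15, alpha = 0.05, target power = 0.80.
from scipy import stats

def power_for_n(n, n_predictors=4, f2=0.15, alpha=0.05):
    """Power of the overall F test at sample size n (noncentral F setup, as in G*Power)."""
    u = n_predictors               # numerator degrees of freedom
    v = n - n_predictors - 1       # denominator degrees of freedom
    ncp = f2 * (u + v + 1)         # noncentrality parameter
    f_crit = stats.f.ppf(1 - alpha, u, v)
    return 1 - stats.ncf.cdf(f_crit, u, v, ncp)

n = 10
while power_for_n(n) < 0.80:
    n += 1
print(n)                  # should land at (or very near) the reported minimum of 85
print(power_for_n(466))   # power with the actual sample of 466 is essentially 1.0
```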

5. Results

5.1. Measurement Model Assessment (CFA)

Dijkstra-Henseler’s rho_A and Composite Reliability (CR) were used to assess internal reliability. As shown in Table 2, the rho_A and CR values for all variables except PE exceed the threshold of 0.70, indicating strong internal consistency [63]. The rho_A value for PE is 0.667; in exploratory research, reliability values between 0.60 and 0.70 are considered ‘acceptable’ [64]. The data reliability in this study therefore meets the conditions for analysis. Convergent Validity (CV) was evaluated using factor loadings (FLs) and average variance extracted (AVE). With the exception of PE1 and CUI2, all FL values surpass the 0.70 threshold [65]. According to [66], FL values between 0.40 and 0.70 are acceptable if AVE > 0.50 and CR > 0.70; however, values below 0.40 should be considered for removal. Since the constructs containing PE1 and CUI2 still achieve satisfactory AVE (above the 0.50 threshold) and CR values above 0.70, both items were retained. Simultaneously, the model was re-estimated after removing PE1 and CUI2. The results indicate that the path coefficients and significance levels of the main relationships remained essentially consistent with those of the original model. For instance, the path coefficient from PE to AHT changed from 0.176 to 0.167, while the coefficient from AST to AHT shifted from 0.416 to 0.422, both representing only minor variations. In addition, the significance results for each path were consistent with the original model, without altering the overall pattern of significance. This suggests that PE1 and CUI2 do not introduce bias into the conclusions and that their retention helps maintain the content validity of the constructs.
The AVE values for all constructs range from 0.595 to 0.823, consistently exceeding the minimum requirement of 0.50 [65]. These results confirm that the measurement model demonstrates satisfactory convergent validity. Discriminant validity (DV) was assessed using the heterotrait–monotrait ratio of correlations (HTMT), where values close to 1 indicate a lack of discriminant validity [65]. As shown in Table 3, all HTMT values remain below 0.85, demonstrating adequate discriminant validity for the model [63].
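For readers unfamiliar with how these reliability and validity indices are obtained, the sketch below shows the conventional computation of CR and AVE from standardized factor loadings. The loadings are placeholders, not the study’s actual estimates.

```python
# Conventional composite reliability (CR) and average variance extracted (AVE)
# computed from standardized factor loadings. The loadings below are placeholders.
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]."""
    lam = np.asarray(loadings, dtype=float)
    error_var = 1.0 - lam**2          # error variance of each standardized indicator
    return lam.sum()**2 / (lam.sum()**2 + error_var.sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam**2))

example = [0.82, 0.78, 0.74]          # hypothetical three-item construct
print(round(composite_reliability(example), 3))       # 0.824 > 0.70 -> acceptable reliability
print(round(average_variance_extracted(example), 3))  # 0.609 > 0.50 -> convergent validity
```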

5.2. Inspecting the Inner Structural Model

Model fit was evaluated using the standardized root mean square residual (SRMR) [66]. The SRMR values for both the saturated and estimated models are below 0.100, confirming a good fit for models using the PLS algorithm [67]. Multicollinearity was tested using the variance inflation factor (VIF), with all values ranging from 1.141 to 2.471, well below the critical threshold of 5.000, indicating no multicollinearity issues [68]. The structural model’s hypothesized paths were assessed using a bias-corrected and accelerated (BCa) bootstrap procedure with 5000 subsamples.
The results are presented in Table 4. Among the examined factors, FC (p < 0.001), PE (p < 0.001), and ST (p = 0.002) exerted significant positive effects on AST, whereas OT (p = 0.232) showed no significant influence. Similarly, AST (p < 0.001), performance expectancy (p = 0.001), and ST (p = 0.003) had significant positive effects on AHT, while FC (p = 0.690) and OT (p = 0.148) were not significant predictors. Finally, AST (p = 0.007), AHT (p < 0.001), and FC (p < 0.001) were all found to significantly enhance students’ continuance intention toward AI-assisted learning. Accordingly, hypotheses H2, H7, and H8 were not supported, whereas the remaining hypotheses received empirical support.
Effect sizes between variables were assessed using f2, as presented in Table 4. The results indicate that FC and ST exerted small effects on AST (f2 = 0.033, 0.023), whereas PE demonstrated a large effect (f2 = 0.219). For AHT, PE and ST had small effects (f2 = 0.032, 0.025), while AST exhibited a medium-to-large effect (f2 = 0.178). Regarding CUI, AST and AHT demonstrated small effects (f2 = 0.018, 0.031), whereas FC showed a substantial effect (f2 = 0.171).
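For reference, the f² values in Table 4 follow the standard effect-size definition for structural models; this is the conventional formulation rather than a formula quoted from the article:

```latex
f^2 = \frac{R^2_{\text{included}} - R^2_{\text{excluded}}}{1 - R^2_{\text{included}}}
```

where R²_included is the variance explained in the endogenous construct with the predictor present and R²_excluded the variance explained when that predictor is omitted.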

5.3. The Mediation Effect

This study examines the mediating effects of AI-related variables on AI trust and AI-assisted learning willingness. The bootstrapping method (5000 resamples) was used to estimate indirect effects, with the total indirect effects summarized in Table 5.
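To illustrate in principle how such bootstrapped indirect effects are estimated, the sketch below computes a product-of-paths indirect effect with a percentile bootstrap on synthetic data. The study itself relies on SmartPLS’s bias-corrected bootstrapping; the variable names here (fc, ast, cui) and the ordinary least squares estimation are purely illustrative.

```python
# Product-of-paths indirect effect (a * b) with a percentile bootstrap,
# on synthetic data. Purely illustrative; not the study's SmartPLS procedure.
import numpy as np

rng = np.random.default_rng(0)
n = 466
fc = rng.normal(size=n)                              # stimulus (e.g., facilitating conditions)
ast = 0.4 * fc + rng.normal(size=n)                  # mediator (e.g., system-like trust)
cui = 0.3 * ast + 0.2 * fc + rng.normal(size=n)      # outcome (e.g., continuance intention)

def indirect_effect(idx):
    """a (fc -> ast) times b (ast -> cui, controlling for fc) on a resample."""
    x, m, y = fc[idx], ast[idx], cui[idx]
    a = np.polyfit(x, m, 1)[0]                       # slope of mediator on predictor
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2] # slope of outcome on mediator
    return a * b

boot = np.array([indirect_effect(rng.integers(0, n, n)) for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
point = indirect_effect(np.arange(n))
print(f"indirect effect = {point:.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```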
The results reveal several notable indirect effects. First, PE demonstrated the strongest indirect influence in the model: it significantly affected AHT through a mediating pathway (indirect effect = 0.175, t = 5.611, p < 0.001) and exerted a significant indirect effect on CUI (indirect effect = 0.122, t = 5.039, p < 0.001). This finding suggests that, when using ChatGPT to complete learning tasks, students perceive performance expectancy as a key driver shaping their trust and attitudes toward AI.
Second, FC significantly and positively influenced AHT and CUI through mediating effects, with indirect values of 0.070 (t = 3.470, p = 0.001) and 0.033 (t = 2.108, p = 0.035), respectively. In addition, the indirect effect of AST on CUI was also significant (indirect effect = 0.074, t = 3.472).
Finally, differences emerged between subjective and objective task types. Under objective tasks, the indirect effects were not significant, indicating that users’ trust in AI and willingness to use it did not increase meaningfully through mediating pathways. In contrast, subjective tasks yielded significant indirect effects on both AHT (indirect effect = 0.055, p = 0.003) and CUI (indirect effect = 0.053, p = 0.002). These results highlight that AI applications in subjective task scenarios are more effective in fostering user trust and strengthening students’ willingness to adopt AI-assisted learning.

5.4. The Predictive Relevance and Effect Size

This study used R2 and Q2 to assess the explanatory and predictive power of the model. The model explains 37.1% of the variance in AST (R2 = 0.371), 35.3% in AI human-like trust (R2 = 0.353), and 30.5% in AI continuance usage intention (R2 = 0.305). The model’s predictive relevance was assessed using Stone–Geisser’s Q2 values [69], all of which are greater than zero, confirming its predictive validity (Table 6).
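In its conventional blindfolding form (a standard formulation, not quoted from the article), the Stone–Geisser criterion is

```latex
Q^2 = 1 - \frac{\sum_{D}\mathrm{SSE}_D}{\sum_{D}\mathrm{SSO}_D}
```

where SSE_D is the sum of squared prediction errors and SSO_D the sum of squared observed values for omission distance D; values above zero indicate that the model has predictive relevance for the construct.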

5.5. Artificial Neural Network Analysis

Since PLS-SEM is limited to capturing compensatory and linear relationships [70], this study further supplements the analysis by employing Artificial Neural Networks (ANNs) to explore potential nonlinear relationships that could enhance decision-making. Based on the SEM analysis results presented earlier, we constructed three ANN models for AST, AHT, and CUI, with their structures shown in Figure 2. The Root Mean Square Error (RMSE) results from 10-fold cross-validation of the model are shown in Table 7. The corresponding sensitivity analysis results are presented in Table 8.
To mitigate overfitting, a tenfold cross-validation procedure was implemented, using 90% of the data for training and 10% for testing [62]. To assess the predictive accuracy of the ANN models, RMSE values from ten neural network runs were averaged (Table 7). The cross-validated RMSE values for the test models were 0.093523 (AST), 0.102876 (AHT), and 0.127188 (CUI), while those for the training models were 0.094136, 0.112714, and 0.121637, respectively. The consistently low RMSE values indicate strong reliability, demonstrating the ANN models’ ability to accurately and consistently explain the relationships between predictors and outcomes [71].
The results of the Sensitivity Analysis (SA) for the independent variables are presented in Table 8. The relative importance of each predictor was calculated using ANN analysis and ranked accordingly. Predictor importance reflects the extent to which variations in the predictor values alter the predicted outcomes of the network model [72]. Based on the normalized importance scores, PE (SA = 100%) emerged as the most significant determinant of AST, followed by FC (SA = 42.64%) and ST (SA = 31.99%). For AHT, AST (SA = 100%) was identified as the strongest predictor, followed by PE (SA = 51.54%) and ST (SA = 31.13%). Regarding CUI, FC (SA = 99.41%) had the greatest influence, followed by AST (SA = 69.06%) and AHT (SA = 56.40%).
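As a rough illustration of the procedure described above, the sketch below trains a small multilayer perceptron with tenfold cross-validation and derives normalized importance scores from a simple permutation-based sensitivity analysis. It uses placeholder data and scikit-learn; the study’s own networks and its exact importance algorithm may differ.

```python
# Tenfold cross-validated ANN plus a permutation-based sensitivity analysis,
# on placeholder data. Illustrative only; not the study's actual models.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(466, 3))            # e.g., FC, AST, AHT as predictors of CUI
y = 0.5 * X[:, 0] + 0.35 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(scale=0.3, size=466)

rmse_test = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=1).split(X):
    scaler = StandardScaler().fit(X[train_idx])
    net = MLPRegressor(hidden_layer_sizes=(2,), max_iter=2000, random_state=1)
    net.fit(scaler.transform(X[train_idx]), y[train_idx])
    pred = net.predict(scaler.transform(X[test_idx]))
    rmse_test.append(np.sqrt(np.mean((pred - y[test_idx]) ** 2)))
print("mean test RMSE:", round(float(np.mean(rmse_test)), 4))

# Sensitivity: refit on all data, then measure how much the error grows when one
# input column is shuffled; rescale so the most important predictor equals 100%.
Xs = StandardScaler().fit_transform(X)
net = MLPRegressor(hidden_layer_sizes=(2,), max_iter=2000, random_state=1).fit(Xs, y)
base_mse = np.mean((net.predict(Xs) - y) ** 2)
increases = []
for j in range(X.shape[1]):
    Xp = Xs.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    increases.append(np.mean((net.predict(Xp) - y) ** 2) - base_mse)
importance = 100 * np.array(increases) / max(increases)
print("normalized importance (%):", np.round(importance, 1))
```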
When compared with the SEM results, the ANN analysis produced consistent findings. Both methods confirmed that PE, AST, and FC represent the most critical factors influencing AST, AHT, and students’ continuance usage intention, respectively.

6. Discussion

6.1. Key Findings

6.1.1. Facilitating Conditions as a Key Driver of Trust in AI

The findings of this study highlight facilitating conditions (FCs) as an important driver influencing students’ trust in AI systems and their willingness to continue using AI-assisted learning. As expected, students who are more familiar with AI operations, find it convenient to use AI tools, and possess richer AI-related knowledge demonstrate stronger trust in AI systems and a greater willingness to adopt AI for learning. This result is consistent with prior studies emphasizing the role of facilitating conditions in shaping trust and usage intentions [50,73]. When students believe they have sufficient access to resources, knowledge, and support to effectively utilize ChatGPT, their attitudes toward the technology become more trusting, which in turn strengthens their willingness to use it. Conversely, students with limited AI literacy or inadequate support are more likely to perceive the technology as difficult to use [74], thereby reducing their willingness to adopt it. This underscores the direct influence of supportive conditions on AI usage intention.
While prior studies have largely examined the influence of facilitating conditions on overall trust and behavioral intention, few have explored their differentiated effects on distinct dimensions of trust. This study contributes to filling this gap by showing that, although facilitating conditions significantly enhance students’ trust in the functional effectiveness and reliability of AI systems (supporting H1), they do not significantly shape trust in AI’s ethical or emotional dimensions (H2 not supported). This finding suggests that facilitating conditions primarily strengthen system-like trust, rather than human-like trust, in the context of AI-assisted learning.

6.1.2. The Dual Role of Performance Expectancy in Promoting AI Trust

The results confirm that performance expectancy is a key driver of both system-like and human-like trust in AI and exerts the strongest mediating effect on students’ willingness to continue using AI. When students expect ChatGPT to deliver accurate and useful information, they are more likely to trust the system, consistent with Dwivedi et al. [75], who identified performance expectancy as a critical determinant of technology adoption.
In complex or uncertain tasks, students may experience cognitive overload, and AI tools provide efficiency and problem-solving capacity beyond what they could achieve alone, thereby strengthening both functional and emotional trust [76]. For simpler tasks, reliance on AI is weaker. Thus, when tasks are perceived as challenging, students are more motivated to leverage AI, and if these tools consistently meet or exceed expectations, trust and sustained use are reinforced [11].

6.1.3. AI’s Trust Paradox in Different Task Types

The type of task had a significant effect on functional trust and emotional trust in this study. Unlike previous studies suggesting that people tend to trust AI more in objective tasks [77], the college students in this study exhibited stronger trust in AI when engaging in subjective tasks, while the effect of objective tasks was not significant. A plausible explanation is that advances in AI technology have enhanced users’ trust in AI for subjective questions: Castelo et al. [44] suggest that earlier concerns about AI’s inability to handle subjective tasks due to a lack of human emotions may no longer hold. When AI tools have not achieved high accuracy in objective tasks (as was the case with early image recognition systems), users’ trust remains low. In subjective tasks, however, AI models such as GPT-5 and DeepSeek have shown significant improvements in generating meaningful and contextually relevant content, leading to increased trust. This aligns with Song et al. [42], who argue that since subjective tasks allow for open-ended responses, trust in AI depends on individual opinions and preferences. Previously, consumers perceived AI with lower intelligence as incapable of accurately handling these tasks. However, more advanced AI models are now recognized for their ability to analyze complex information and provide reliable recommendations [78], thereby enhancing user trust [79].
Based on this finding, two hypotheses are suggested for future research. First, cognitive differences may shape trust across task types. Students aware of AI’s algorithmic limits may demand higher accuracy in objective tasks, reducing trust when errors occur, whereas in subjective tasks, they may prioritize inspiration and interaction, showing more tolerance for technical flaws. Second, risk perception may also matter. In objective tasks with clear right or wrong outcomes, especially in academic or medical domains, concerns about errors and responsibility can reduce trust. By contrast, subjective tasks such as brainstorming or discussion lack strict correctness standards, lowering perceived risks and making trust easier to build.

6.1.4. Rational Trust as the Foundation of Emotional Trust

This study confirms that trust in AI system-like attributes has a significant positive effect on trust in AI human-like attributes. This finding resonates with prior research on the impact of interpersonal trust on students in educational settings and further elucidates the relationship between the two types of trust [45]. Specifically, the higher the students’ trust in the functionality and effectiveness of AI, the stronger their emotional trust in its human-like qualities. This result suggests that, in the process of building students’ trust in and use of AI, system-like trust serves as a more fundamental and essential basis than human-like trust, with the development of trust evolving from rational trust to emotional trust. These findings also support Castelo et al.’s [44] perspective on the progression from system trust to human-like trust in task-driven contexts. By extending prior insights from interpersonal trust research, this study contributes to the domains of human–AI trust and human–machine trust.

6.1.5. The Influence of Dual-Dimensional Trust on Continuance Usage Intention

Finally, the findings demonstrate that both system-like trust and human-like trust are significantly associated with students’ willingness to continue using AI-assisted learning. Each type of trust serves as a mediator that strongly influences behavioral intention, highlighting the central role of trust in driving the adoption of AI learning tools. These results are consistent with prior user studies showing that both system trust and human trust exert positive effects on usage willingness and behavioral intention [80].
Specifically, students’ functional trust and emotional trust in AI tools jointly shape their willingness to adopt AI-assisted learning. The more students trust the effectiveness of AI in solving academic problems and the more confident they are in its ethical soundness and capabilities, the more inclined they are to use AI tools in their learning. This is in line with the findings of Tams et al. [81], who revealed that trust influences willingness to use by enhancing deep engagement and innovative experimentation with technology adoption through perceptions of self-efficacy. Functional trust motivates users to explore generative AI and fosters positive adoption intentions. Moreover, due to the natural language interaction capabilities of generative AI, human–AI communication increasingly resembles interpersonal interaction. Such human-like engagement cultivates emotional trust in generative AI [82], which further strengthens students’ willingness to incorporate AI tools into their learning practices.

6.2. Theoretical Implications

From a theoretical perspective, this study extends human–machine trust theory to generative AI in education. By adopting a dual-dimensional trust framework, it confirms that system-like trust fosters human-like trust (H9), showing how rational cognition reinforces emotional sentiment. The study also indicates that performance expectancy significantly enhances students’ dual-dimensional trust in AIGC tools (H3, H4), with trust levels in subjective tasks showing a significantly greater increase than in objective tasks (H5, H6 supported; H7, H8 not supported), offering a new angle for exploring dynamic trust in generative AI.
In addition, by integrating PLS-SEM and ANNs, this study reveals how facilitating conditions shape continuance usage intention both directly and indirectly through system-like trust (H1, H10 supported). This underscores the central role of facilitating conditions in building trust and adoption, providing theoretical insights and practical guidance for promoting the responsible and effective application of AI tools in higher education.
It is noteworthy that even though H2, H7, and H8 were not supported, these results still provide meaningful theoretical implications. First, facilitating conditions did not significantly influence human-like trust (H2). This indicates that facilitating conditions primarily strengthen system-like trust by enhancing students’ perceptions of AI functionality and reliability; their influence on human-like trust is more likely to occur indirectly through the pathway ‘FC → AST → AHT’ (β = 0.070, t = 3.470, p = 0.001, 95% CI [0.032, 0.111]). Second, objective tasks did not show significant effects on either system-like trust or human-like trust (H7, H8). Prior studies suggest that when students use AI tools in contexts requiring high accuracy or verifiable outcomes, an algorithm aversion effect may arise, reducing trust due to the increased visibility of errors and higher perceived risks [83]. In this study, errors in objective tasks such as data analysis or programming were more noticeable to users and carried high-stakes consequences (for example, implications for academic reliability), which likely undermined trust [84]. In contrast, subjective tasks such as writing or brainstorming produce results that are less strictly defined, with no single correct answer. This lower perception of risk may make students more likely to develop trust in artificial intelligence [85].

6.3. Ethical Considerations of Trust in Education and Responsible AI Use

This study reveals the mechanism linking human–AI trust and willingness to use AI. It not only supports the rational application of artificial intelligence in higher education but also provides insights for preventing over-reliance and misuse while advancing responsible adoption. While the findings highlight the positive role of trust in driving students’ willingness to use generative AI tools, they also underscore the need to address ethical risks that accompany their widespread use. A key concern is blind trust or excessive dependence on AI systems [86]. Such reliance may weaken students’ independent thinking and critical evaluation skills and, in extreme cases, lead to academic misconduct [87]. In addition, algorithmic bias and unequal access to AI tools may widen disparities among students, raising concerns of fairness in learning and assessment [88].
To mitigate these risks, higher education institutions should promote responsible AI usage. Educators need to guide students in combining technological understanding, critical evaluation, and ethical judgment, especially in open-ended and high-performance tasks, to build responsible models of human–AI collaboration [89]. Students should also strengthen AI literacy and adopt prudent practices such as cross-validating outputs and using task-specific strategies to avoid academic risks and dependence. Finally, academic policy bodies should establish clear guidelines to ensure ethical and accountable use. Embedding these considerations into practice is essential to maximize AI’s educational benefits while minimizing potential harms.

6.4. Limitation

This study has certain limitations. First, regarding the findings, although the data analysis showed no statistically significant differences between humanities and social sciences students and STEM students in system-like trust and human-like trust (p = 0.766 and 0.983, respectively), subtle variations in attitudes and engagement patterns across disciplines cannot be ruled out. Such differences may shape how students perceive and apply AI in learning activities. Future research could adopt group comparisons, multilevel modeling, or qualitative methods to further examine potential disciplinary variations in trust and usage intentions.
Second, the proposed model was validated mainly in Chinese higher education. Future studies should examine cross-cultural contexts and include additional factors such as disciplinary background, task–technology fit, and environmental influences to provide a more comprehensive understanding of trust in AI-assisted learning.
Finally, the cross-sectional design employed here does not capture the dynamic nature of trust. Prior studies suggest that trust evolves through processes of erosion and restoration, making it difficult to measure precisely. At the same time, large language models are advancing at a rapid pace, while evaluations of their impact often lag behind technological progress. Consequently, the findings of this study should be regarded as a milestone in the evolution of knowledge, not as a final destination. Future research could apply longitudinal or mixed-method approaches to better capture the dynamics of trust while maintaining theoretical foresight and practical relevance.

7. Conclusions

This study is among the first to examine, within the context of Chinese higher education, how AI facilitating conditions, performance expectancy, and perceived AI ability in subjective and objective tasks influence students’ dual-dimensional trust in AI, namely system-like trust and human-like trust, and how these forms of trust in turn shape continuance usage intention. The findings provide theoretical support for a deeper understanding of the application of generative AI tools in education, highlighting the close interconnection between trust formation and usage contexts. By validating the distinct yet complementary roles of rational (system-like) and emotional (human-like) trust, this research enriches the theoretical framework of human–AI trust and underscores trust as a critical mediating mechanism. Ultimately, this study offers insights for promoting more positive and effective integration of AI into higher education by leveraging trust to guide sustainable adoption.

Author Contributions

Conceptualization: J.G., Y.Z., and Z.L.; Methodology: J.G.; Formal analysis and investigation: J.G., Y.W., S.L., Q.Y., and J.Z.; Writing—original draft preparation: J.G.; Writing—review and editing: Y.Z. and Z.L.; Funding acquisition: Z.L., Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by “the National Natural Science Foundation of China, grant number 52275234”, “China Scholarship Council Visiting Scholar Program, grant number 202106695014”, “the Humanities and Social Science Fund of the Ministry of Education of China, grant number 22YJA760055”, and “the Science and Technology Innovation Program Project of Beijing Institute of Technology, grant number 2024CX01023”.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of Beijing Institute of Technology Ethics Committee (Approval No. BIT-EC-H-115). The anonymity and confidentiality of the participants were guaranteed, and participation was completely voluntary.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this manuscript are available by contacting the corresponding author.

Acknowledgments

We thank all participants for their commitment to research.

Conflicts of Interest

No potential conflicts of interest were reported by the author(s).

References

  1. Van Dis, E.A.; Bollen, J.; Zuidema, W.; Van Rooij, R.; Bockting, C.L. ChatGPT: Five priorities for research. Nature 2023, 614, 224–226. [Google Scholar] [CrossRef]
  2. Wu, T.; He, S.; Liu, J.; Sun, S.; Liu, K.; Han, Q.L.; Tang, Y. A brief overview of ChatGPT: The history, status quo and potential future development. IEEE/CAA J. Autom. Sin. 2023, 10, 1122–1136. [Google Scholar] [CrossRef]
  3. Hsiao, J.C.; Chang, J.S. Enhancing EFL reading and writing through AI-powered tools: Design, implementation, and evaluation of an online course. Interact. Learn. Environ. 2024, 32, 4934–4949. [Google Scholar] [CrossRef]
  4. Taktak, M.; Bellibas, M.S.; Özgenel, M. Use of ChatGPT in Education: Future Strategic Road Map with SWOT Analysis. Educ. Process. Int. J. 2024, 13, 7–21. [Google Scholar] [CrossRef]
  5. Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; García, S.; Gil-López, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115. [Google Scholar] [CrossRef]
  6. Rapp, A.; Curti, L.; Boldi, A. The human side of human-chatbot interaction: A systematic literature review of ten years of research on text-based chatbots. Int. J. Hum.-Comput. Stud. 2021, 151, 102630. [Google Scholar] [CrossRef]
  7. Glikson, E.; Woolley, A.W. Human trust in artificial intelligence: Review of empirical research. Acad. Manag. Ann. 2020, 14, 627–660. [Google Scholar] [CrossRef]
  8. Das, D.; Chernova, S. Leveraging rationales to improve human task performance. In Proceedings of the 25th International Conference on Intelligent User Interfaces, Cagliari, Italy, 17–20 March 2020; pp. 510–518. [Google Scholar]
  9. Wang, C.; Wang, H.; Li, Y.; Dai, J.; Gu, X.; Yu, T. Factors influencing university students’ behavioral intention to use generative artificial intelligence: Integrating the theory of planned behavior and AI literacy. Int. J. Hum.–Comput. Interact. 2025, 41, 6649–6671. [Google Scholar] [CrossRef]
  10. Choung, H.; David, P.; Ross, A. Trust in AI and its role in the acceptance of AI technologies. Int. J. Hum.–Comput. Interact. 2023, 39, 1727–1739. [Google Scholar] [CrossRef]
  11. Duong, C.D. Modeling the determinants of HEI students’ continuance intention to use ChatGPT for learning: A stimulus–organism–response approach. J. Res. Innov. Teach. Learn. 2024, 17, 391–407. [Google Scholar] [CrossRef]
  12. Yang, W.; Lu, Z.; Li, Z.; Cui, Y.; Dai, L.; Li, Y.; Ma, X.; Zhu, H. The impact of human-AIGC tools collaboration on the learning effect of college students: A key factor for future education? Kybernetes 2024. [Google Scholar] [CrossRef]
  13. Lund, B.D.; Wang, T. Chatting about ChatGPT: How may AI and GPT impact academia and libraries? Libr. Hi Tech News 2023, 40, 26–29. [Google Scholar] [CrossRef]
  14. Tossell, C.C.; Tenhundfeld, N.L.; Momen, A.; Cooley, K.; De Visser, E.J. Student perceptions of ChatGPT use in a college essay assignment: Implications for learning, grading, and trust in artificial intelligence. IEEE Trans. Learn. Technol. 2024, 17, 1069–1081. [Google Scholar] [CrossRef]
  15. Baidoo-Anu, D.; Ansah, L.O. Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. J. AI 2023, 7, 52–62. [Google Scholar] [CrossRef]
  16. Wang, F.; Li, N.; Cheung, A.C.K.; Wong, G.K.W. In GenAI we trust: An investigation of university students’ reliance on and resistance to generative AI in language learning. Int. J. Educ. Technol. High. Educ. 2025, 22, 59. [Google Scholar] [CrossRef]
  17. Jacovi, A.; Marasović, A.; Miller, T.; Goldberg, Y. Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in AI. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event, 3–10 March 2021; pp. 624–635. [Google Scholar]
  18. Đerić, E.; Frank, D.; Milković, M. Trust in Generative AI Tools: A Comparative Study of Higher Education Students, Teachers, and Researchers. Information 2025, 16, 622. [Google Scholar] [CrossRef]
  19. Shahzad, M.F.; Xu, S.; Javed, I. ChatGPT awareness, acceptance, and adoption in higher education: The role of trust as a cornerstone. Int. J. Educ. Technol. High. Educ. 2024, 21, 46. [Google Scholar] [CrossRef]
  20. Mcknight, D.H.; Carter, M.; Thatcher, J.B.; Clay, P.F. Trust in a specific technology: An investigation of its components and measures. ACM Trans. Manag. Inf. Syst. (TMIS) 2011, 2, 1–25. [Google Scholar] [CrossRef]
  21. McAllister, D.J. Affect- and cognition-based trust as foundations for interpersonal cooperation in organizations. Acad. Manag. J. 1995, 38, 24–59. [Google Scholar] [CrossRef]
  22. Lee, J.D.; See, K.A. Trust in automation: Designing for appropriate reliance. Hum. Factors 2004, 46, 50–80. [Google Scholar] [CrossRef]
  23. Singh, D.; Chandra, S. Mitigating Uncertainty and Enhancing Trust in AI: Harmonizing Human-Like, System-Like Features with Innovative Organizational Culture. In Proceedings of the Australasian Conference on Information Systems, ACIS, Canberra, Australia, 4–6 December 2024. Paper 22. [Google Scholar]
  24. Shahzad, M.F.; Xu, S.; An, X.; Javed, I. Assessing the impact of AI-chatbot service quality on user e-brand loyalty through chatbot user trust, experience and electronic word of mouth. J. Retail. Consum. Serv. 2024, 79, 103867. [Google Scholar] [CrossRef]
  25. Alhumaid, K.; Al Naqbi, S.; Elsori, D.; Al Mansoori, M. The adoption of artificial intelligence applications in education. Int. J. Data Netw. Sci. 2023, 7, 457–466. [Google Scholar] [CrossRef]
  26. Foroughi, B.; Senali, M.G.; Iranmanesh, M.; Khanfar, A.; Ghobakhloo, M.; Annamalai, N.; Naghmeh-Abbaspour, B. Determinants of intention to use ChatGPT for educational purposes: Findings from PLS-SEM and fsQCA. Int. J. Hum.–Comput. Interact. 2024, 40, 4501–4520. [Google Scholar] [CrossRef]
  27. Bilquise, G.; Ibrahim, S.; Salhieh, S.M. Investigating student acceptance of an academic advising chatbot in higher education institutions. Educ. Inf. Technol. 2024, 29, 6357–6382. [Google Scholar] [CrossRef]
  28. Chang, W.; Park, J. A comparative study on the effect of ChatGPT recommendation and AI recommender systems on the formation of a consideration set. J. Retail. Consum. Serv. 2024, 78, 103743. [Google Scholar] [CrossRef]
  29. Siau, K.; Wang, W. Building trust in artificial intelligence, machine learning, and robotics. Cut. Bus. Technol. J. 2018, 31, 47. [Google Scholar]
  30. Cao, X.; Sun, J. Exploring the effect of overload on the discontinuous intention of social media users: An SOR perspective. Comput. Hum. Behav. 2018, 81, 10–18. [Google Scholar] [CrossRef]
  31. Mehrabian, A.; Russell, J.A. An Approach to Environmental Psychology; MIT Press: Cambridge, MA, USA, 1974. [Google Scholar]
  32. Hemment, D.; Currie, M.; Bennett, S.J.; Elwes, J.; Ridler, A.; Sinders, C.; Vidmar, M.; Hill, R.; Warner, H. AI in the public eye: Investigating public AI literacy through AI art. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, Chicago, IL, USA, 12–15 June 2023; pp. 931–942. [Google Scholar]
  33. Wang, B.; Rau, P.L.P.; Yuan, T. Measuring user competence in using artificial intelligence: Validity and reliability of artificial intelligence literacy scale. Behav. Inf. Technol. 2023, 42, 1324–1337. [Google Scholar] [CrossRef]
  34. Menon, D.; Shilpa, K. “Chatting with ChatGPT”: Analyzing the factors influencing users’ intention to Use the Open AI’s ChatGPT using the UTAUT model. Heliyon 2023, 9, e20962. [Google Scholar] [CrossRef]
  35. Siegrist, M.; Árvai, J. Risk perception: Reflections on 40 years of research. Risk Anal. 2020, 40, 2191–2206. [Google Scholar] [CrossRef] [PubMed]
  36. Kerstan, S.; Bienefeld, N.; Grote, G. Choosing human over AI doctors? How comparative trust associations and knowledge relate to risk and benefit perceptions of AI in healthcare. Risk Anal. 2024, 44, 939–957. [Google Scholar] [CrossRef]
  37. Batut, A.; Prudhomme, L.; van Sambeek, M.; Chen, W. Do You Trust AI? Examining AI Trustworthiness Perceptions Among the General Public. In Proceedings of the International Conference on Human-Computer Interaction, Washington, DC, USA, 29 June–4 July 2024; Springer: Cham, Switzerland, 2024; pp. 15–26. [Google Scholar]
  38. Bayer, S.; Gimpel, H.; Markgraf, M. The role of domain expertise in trusting and following explainable AI decision support systems. J. Decis. Syst. 2022, 32, 110–138. [Google Scholar] [CrossRef]
  39. Hoff, K.A.; Bashir, M. Trust in automation: Integrating empirical evidence on factors that influence trust. Hum. Factors 2015, 57, 407–434. [Google Scholar] [CrossRef]
  40. Salimzadeh, S.; He, G.; Gadiraju, U. Dealing with uncertainty: Understanding the impact of prognostic versus diagnostic tasks on trust and reliance in human-AI decision making. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024; pp. 1–17. [Google Scholar]
  41. De-Arteaga, M.; Fogliato, R.; Chouldechova, A. A case for humans-in-the-loop: Decisions in the presence of erroneous algorithmic scores. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–12. [Google Scholar]
  42. Song, J.; Lin, H. Exploring the effect of artificial intelligence intellect on consumer decision delegation: The role of trust, task objectivity, and anthropomorphism. J. Consum. Behav. 2024, 23, 727–747. [Google Scholar] [CrossRef]
  43. Logg, J.M.; Minson, J.A.; Moore, D.A. Algorithm appreciation: People prefer algorithmic to human judgment. Organ. Behav. Hum. Decis. Process. 2019, 151, 90–103. [Google Scholar] [CrossRef]
  44. Castelo, N.; Bos, M.W.; Lehmann, D.R. Task-dependent algorithm aversion. J. Mark. Res. 2019, 56, 809–825. [Google Scholar] [CrossRef]
  45. Pitts, G.; Motamedi, S. Understanding Human-AI Trust in Education. arXiv 2025. [Google Scholar] [CrossRef]
  46. Yang, R.; Li, S.; Qi, Y.; Liu, J.; He, Q.; Zhao, H. Unveiling users’ algorithm trust: The role of task objectivity, time pressure, and cognitive load. Comput. Hum. Behav. Rep. 2025, 18, 100667. [Google Scholar] [CrossRef]
  47. Donati, D. Trust, Trustworthiness and the Moral Dimension in human-AI Interactions. Ethics Politics Soc. 2024, 7, 103–120. [Google Scholar] [CrossRef]
  48. Chai, C.S.; Wang, X.; Xu, C. An extended theory of planned behavior for the modelling of Chinese secondary school students’ intention to learn artificial intelligence. Mathematics 2020, 8, 2089. [Google Scholar] [CrossRef]
  49. Cai, R.; Cain, L.N.; Jeon, H. Customers’ perceptions of hotel AI-enabled voice assistants: Does brand matter? Int. J. Contemp. Hosp. Manag. 2022, 34, 2807–2831. [Google Scholar] [CrossRef]
  50. Topsakal, Y. How Familiarity, Ease of Use, Usefulness, and Trust Influence the Acceptance of Generative Artificial Intelligence (AI)-Assisted Travel Planning. Int. J. Hum.–Comput. Interact. 2025, 41, 9478–9491. [Google Scholar] [CrossRef]
  51. Prakash, A.V.; Joshi, A.; Nim, S.; Das, S. Determinants and consequences of trust in AI-based customer service chatbots. Serv. Ind. J. 2023, 43, 642–675. [Google Scholar] [CrossRef]
  52. Delgosha, M.S.; Hajiheydari, N. How human users engage with consumer robots? A dual model of psychological ownership and trust to explain post-adoption behaviours. Comput. Hum. Behav. 2021, 117, 106660. [Google Scholar] [CrossRef]
  53. Califf, C.B.; Brooks, S.; Longstreet, P. Human-like and system-like trust in the sharing economy: The role of context and humanness. Technol. Forecast. Soc. Chang. 2020, 154, 119968. [Google Scholar] [CrossRef]
  54. Muir, B.M.; Moray, N. Trust in automation. Part II. Experimental studies of trust and human intervention in a process control simulation. Ergonomics 1996, 39, 429–460. [Google Scholar] [CrossRef] [PubMed]
  55. Balakrishnan, J.; Abed, S.S.; Jones, P. The role of meta-UTAUT factors, perceived anthropomorphism, perceived intelligence, and social self-efficacy in chatbot-based services? Technol. Forecast. Soc. Chang. 2022, 180, 121692. [Google Scholar] [CrossRef]
  56. Inbar, Y.; Cone, J.; Gilovich, T. People’s intuitions about intuitive insight and intuitive choice. J. Personal. Soc. Psychol. 2010, 99, 232. [Google Scholar] [CrossRef]
  57. Ashfaq, M.; Yun, J.; Yu, S.; Loureiro, S.M.C. I, Chatbot: Modeling the determinants of users’ satisfaction and continuance intention of AI-powered service agents. Telemat. Inform. 2020, 54, 101473. [Google Scholar] [CrossRef]
  58. Ringle, C.; Wende, S.; Becker, J. SmartPLS 4; SmartPLS: Bönningstedt, Germany, 2024. [Google Scholar]
  59. Hair, J.F.; Risher, J.J.; Sarstedt, M.; Ringle, C.M. When to use and how to report the results of PLS-SEM. Eur. Bus. Rev. 2019, 31, 2–24. [Google Scholar] [CrossRef]
  60. Tan, G.W.H.; Ooi, K.B.; Leong, L.Y.; Lin, B. Predicting the drivers of behavioral intention to use mobile learning: A hybrid SEM-Neural Networks approach. Comput. Hum. Behav. 2014, 36, 198–213. [Google Scholar] [CrossRef]
  61. Priyadarshinee, P.; Raut, R.D.; Jha, M.K.; Gardas, B.B. Understanding and predicting the determinants of cloud computing adoption: A two staged hybrid SEM-Neural networks approach. Comput. Hum. Behav. 2017, 76, 341–362. [Google Scholar] [CrossRef]
  62. Liébana-Cabanillas, F.; Marinkovic, V.; De Luna, I.R.; Kalinic, Z. Predicting the determinants of mobile payment acceptance: A hybrid SEM-neural network approach. Technol. Forecast. Soc. Chang. 2018, 129, 117–130. [Google Scholar] [CrossRef]
  63. Bawack, R.E.; Wamba, S.F.; Carillo, K.D.A. Exploring the role of personality, trust, and privacy in customer experience performance during voice shopping: Evidence from SEM and fuzzy set qualitative comparative analysis. Int. J. Inf. Manag. 2021, 58, 102309. [Google Scholar] [CrossRef]
  64. Dolinting, P.P.; Pang, V. Assessing the validity and reliability of adapted classroom climates instrument for Malaysian rural schools using PLS-SEM. Int. J. Educ. Psychol. Couns. 2022, 7, 383–401. [Google Scholar] [CrossRef]
  65. Loh, X.M.; Lee, V.H.; Tan, G.W.H.; Hew, J.J.; Ooi, K.B. Towards a cashless society: The imminent role of wearable technology. J. Comput. Inf. Syst. 2022, 62, 39–49. [Google Scholar] [CrossRef]
  66. Hair, J.F. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM); Sage: Thousand Oaks, CA, USA, 2014. [Google Scholar]
  67. Hu, L.T.; Bentler, P.M. Fit indices in covariance structure modeling: Sensitivity to underparameterized model misspecification. Psychol. Methods 1998, 3, 424. [Google Scholar] [CrossRef]
  68. Lew, S.; Tan, G.W.H.; Loh, X.M.; Hew, J.J.; Ooi, K.B. The disruptive mobile wallet in the hospitality industry: An extended mobile technology acceptance model. Technol. Soc. 2020, 63, 101430. [Google Scholar] [CrossRef]
  69. Yan, L.Y.; Tan, G.W.H.; Loh, X.M.; Hew, J.J.; Ooi, K.B. QR code and mobile payment: The disruptive forces in retail. J. Retail. Consum. Serv. 2021, 58, 102300. [Google Scholar] [CrossRef]
  70. Lim, A.F.; Lee, V.H.; Foo, P.Y.; Ooi, K.B.; Tan, G.W.-H. Unfolding the impact of supply chain quality management practices on sustainability performance: An artificial neural network approach. Supply Chain Manag. Int. J. 2022, 27, 611–624. [Google Scholar] [CrossRef]
  71. Sternad Zabukovšek, S.; Kalinic, Z.; Bobek, S.; Tominc, P. SEM–ANN based research of factors’ impact on extended use of ERP systems. Cent. Eur. J. Oper. Res. 2019, 27, 703–735. [Google Scholar] [CrossRef]
  72. Chong, A.Y.L.; Bai, R. Predicting open IOS adoption in SMEs: An integrated SEM-neural network approach. Expert Syst. Appl. 2014, 41, 221–229. [Google Scholar] [CrossRef]
  73. Szymkowiak, A.; Melović, B.; Dabić, M.; Jeganathan, K.; Kundi, G.S. Information technology and Gen Z: The role of teachers, the internet, and technology in the education of young people. Technol. Soc. 2021, 65, 101565. [Google Scholar] [CrossRef]
  74. Malinka, K.; Peresíni, M.; Firc, A.; Hujnák, O.; Janus, F. On the educational impact of ChatGPT: Is artificial intelligence ready to obtain a university degree? In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1, Turku, Finland, 7–12 July 2023; pp. 47–53. [Google Scholar]
  75. Dwivedi, Y.K.; Rana, N.P.; Jeyaraj, A.; Clement, M.; Williams, M.D. Re-examining the unified theory of acceptance and use of technology (UTAUT): Towards a revised theoretical model. Inf. Syst. Front. 2019, 21, 719–734. [Google Scholar] [CrossRef]
  76. Duong, C.D.; Bui, D.T.; Pham, H.T.; Vu, A.T.; Nguyen, V.H. How effort expectancy and performance expectancy interact to trigger higher education students’ uses of ChatGPT for learning. Interact. Technol. Smart Educ. 2024, 21, 356–380. [Google Scholar] [CrossRef]
  77. Mahmud, H.; Islam, A.K.M.N.; Mitra, R.K. What drives managers towards algorithm aversion and how to overcome it? Mitigating the impact of innovation resistance through technology readiness. Technol. Forecast. Soc. Chang. 2023, 193, 122641. [Google Scholar] [CrossRef]
  78. Haenlein, M.; Kaplan, A. A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. Calif. Manag. Rev. 2019, 61, 5–14. [Google Scholar] [CrossRef]
  79. Bertrandias, L.; Ben, L.; Sadik-Rozsnyai, O.; Carricano, M. Delegating decision-making to autonomous products: A value model emphasizing the role of well-being. Technol. Forecast. Soc. Chang. 2021, 169, 120846. [Google Scholar] [CrossRef]
  80. Muhammad, S.S.; Dey, B.L.; Syed Alwi, S.F.; Kamal, M.M.; Asaad, Y. Consumers’ willingness to share digital footprints on social media: The role of affective trust. Inf. Technol. People 2023, 36, 595–625. [Google Scholar] [CrossRef]
  81. Tams, S.; Thatcher, J.B.; Craig, K. How and why trust matters in post-adoptive usage: The mediating roles of internal and external self-efficacy. J. Strateg. Inf. Syst. 2018, 27, 170–190. [Google Scholar] [CrossRef]
  82. De Visser, E.J.; Monfort, S.S.; McKendrick, R.; Smith, M.A.; McKnight, P.E.; Krueger, F.; Parasuraman, R. Almost human: Anthropomorphism increases trust resilience in cognitive agents. J. Exp. Psychol. Appl. 2016, 22, 331. [Google Scholar] [CrossRef]
  83. Jones-Jang, S.M.; Park, Y.J. How do people react to AI failure? Automation bias, algorithmic aversion, and perceived controllability. J. Comput.-Mediat. Commun. 2023, 28, zmac029. [Google Scholar] [CrossRef]
  84. Jussupow, E.; Benbasat, I.; Heinzl, A. Why are we averse towards algorithms? A comprehensive literature review on algorithm aversion. In Proceedings of the 28th European Conference on Information Systems (ECIS), Online, 15–17 June 2020. [Google Scholar]
  85. Peng, Z.; Wan, Y. Human vs. AI: Exploring students’ preferences between human and AI TA and the effect of social anxiety and problem complexity. Educ. Inf. Technol. 2024, 29, 1217–1246. [Google Scholar] [CrossRef]
  86. Zhang, Y.; Reusch, P. Trust in and Adoption of Generative AI in University Education: Opportunities, Challenges, and Implications. In Proceedings of the 2025 IEEE Global Engineering Education Conference (EDUCON), London, UK, 22–25 April 2025; IEEE: Piscataway, NJ, USA, 2025; pp. 1–10. [Google Scholar]
  87. Abuzar, M.; Mahmudulhassan; Muthoifin. University Students’ Trust in AI: Examining Reliance and Strategies for Critical Engagement. Int. J. Interact. Mob. Technol. 2025, 19, 70–82. [Google Scholar] [CrossRef]
  88. Baker, R.S.; Hawn, A. Algorithmic bias in education. Int. J. Artif. Intell. Educ. 2022, 32, 1052–1092. [Google Scholar] [CrossRef]
  89. George, A.S. Preparing students for an AI-driven world: Rethinking curriculum and pedagogy in the age of artificial intelligence. Partners Univers. Innov. Res. Publ. 2023, 1, 112–136. [Google Scholar]
Figure 1. Research model and hypotheses.
Figure 2. ANN models.
Table 1. Summary of representative trust models and their dimensions.
Conceptual Model | Dimensions of Trust | Core Essence | Source
Cognitive–Emotional Trust Model | Cognitive Trust, Emotional Trust | Cognitive trust relies on rational judgment (competence/integrity), while emotional trust is based on bonds formed through interaction. | [21]
Initial Trust in Web-based Systems | Functionality, Reliability, Usefulness | Technical trust reflects perceptions of system functionality, operational stability, and usefulness of outputs. | [20]
Tri-process Model of Trust in Automation | Analytic, Analogical, Affective | Analytic trust reflects rational, risk-based judgment; analogical trust derives from experience and categorization; affective trust arises from emotional responses and guides reliance when rules are insufficient. | [22]
Dual-Dimensional Trust in Functionality and Emotion | System-like Trust, Human-like Trust | System-like trust emphasizes functionality and reliability; human-like trust emphasizes sociality and ethics. | [10,23]
Table 2. Loadings, Dijkstra–Henseler rho_A, composite reliability (CR), and average variance extracted (AVE).
Constructs | Items | Loadings (p-Levels) | rho_A | CR | AVE
FC | FC1 | 0.862 (p < 0.001) | 0.718 | 0.837 | 0.632
   | FC2 | 0.717 (p < 0.001) |  |  |
   | FC3 | 0.800 (p < 0.001) |  |  |
PE | PE1 | 0.667 (p < 0.001) | 0.667 | 0.814 | 0.595
   | PE2 | 0.817 (p < 0.001) |  |  |
   | PE3 | 0.820 (p < 0.001) |  |  |
OT | OT1 | 0.866 (p < 0.001) | 0.907 | 0.903 | 0.823
   | OT2 | 0.947 (p < 0.001) |  |  |
ST | ST1 | 0.899 (p < 0.001) | 0.758 | 0.892 | 0.805
   | ST2 | 0.896 (p < 0.001) |  |  |
AST | AST1 | 0.838 (p < 0.001) | 0.769 | 0.866 | 0.683
    | AST2 | 0.821 (p < 0.001) |  |  |
    | AST3 | 0.821 (p < 0.001) |  |  |
AHT | AHT1 | 0.858 (p < 0.001) | 0.711 | 0.869 | 0.769
    | AHT2 | 0.896 (p < 0.001) |  |  |
CUI | CUI1 | 0.928 (p < 0.001) | 0.721 | 0.825 | 0.617
    | CUI2 | 0.644 (p < 0.001) |  |  |
    | CUI3 | 0.758 (p < 0.001) |  |  |
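For readers who wish to verify the reliability figures in Table 2, the following is a minimal Python sketch (an illustrative aid, not part of the original SmartPLS analysis) of the standard formulas for average variance extracted and composite reliability computed from a construct's standardized outer loadings; applied to the FC loadings it recovers the reported 0.632 and 0.837.

```python
import numpy as np

def ave_cr(loadings):
    """Average variance extracted (AVE) and composite reliability (CR)
    from a construct's standardized outer loadings."""
    l = np.asarray(loadings, dtype=float)
    ave = float(np.mean(l ** 2))                                     # mean squared loading
    cr = float(l.sum() ** 2 / (l.sum() ** 2 + np.sum(1 - l ** 2)))   # composite reliability
    return round(ave, 3), round(cr, 3)

# FC loadings from Table 2 -> (0.632, 0.837), matching the reported AVE and CR
print(ave_cr([0.862, 0.717, 0.800]))
```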
Table 3. Heterotrait–monotrait ratio (HTMT) results.
 | AHT | FC | AST | CUI | OT | ST | PE
AHT |
FC | 0.396
AST | 0.762 | 0.546
CUI | 0.531 | 0.694 | 0.537
OT | 0.343 | 0.451 | 0.335 | 0.733
ST | 0.450 | 0.437 | 0.415 | 0.808 | 0.395
PE | 0.659 | 0.611 | 0.763 | 0.536 | 0.365 | 0.384
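As a reproduction aid, the sketch below (Python; the item correlation matrix and index lists are hypothetical inputs, not the study data) implements the usual HTMT formula: the mean absolute between-construct item correlation divided by the geometric mean of each construct's mean within-construct item correlation.

```python
import numpy as np
from itertools import combinations

def htmt(item_corr, idx_a, idx_b):
    """Heterotrait-monotrait ratio of correlations for two constructs,
    given the item correlation matrix (numpy array) and each construct's item indices."""
    hetero = np.mean([abs(item_corr[i, j]) for i in idx_a for j in idx_b])
    mono_a = np.mean([abs(item_corr[i, j]) for i, j in combinations(idx_a, 2)])
    mono_b = np.mean([abs(item_corr[i, j]) for i, j in combinations(idx_b, 2)])
    return hetero / np.sqrt(mono_a * mono_b)

# e.g., FC items at columns 0-2 and PE items at columns 3-5 of a 6x6 correlation matrix:
# htmt(corr, [0, 1, 2], [3, 4, 5])
```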
Table 4. Structural model results.
Hypothesis | Path Coefficient | Sample Mean | STDEV | t-Value | p-Value | f2 | Remarks
H1: FC → AST | 0.169 | 0.171 | 0.046 | 3.644 | 0.000 | 0.033 | Supported
H2: FC → AHT | −0.020 | −0.019 | 0.050 | 0.398 | 0.690 | 0.000 | Not Supported
H3: PE → AST | 0.421 | 0.421 | 0.048 | 8.781 | 0.000 | 0.219 | Supported
H4: PE → AHT | 0.176 | 0.175 | 0.055 | 3.217 | 0.001 | 0.032 | Supported
H5: ST → AST | 0.132 | 0.131 | 0.042 | 3.144 | 0.002 | 0.023 | Supported
H6: ST → AHT | 0.139 | 0.139 | 0.046 | 3.025 | 0.003 | 0.025 | Supported
H7: OT → AST | 0.056 | 0.056 | 0.047 | 1.194 | 0.232 | 0.004 | Not Supported
H8: OT → AHT | 0.065 | 0.065 | 0.045 | 1.446 | 0.148 | 0.005 | Not Supported
H9: AST → AHT | 0.416 | 0.416 | 0.050 | 8.384 | 0.000 | 0.178 | Supported
H10: FC → CUI | 0.378 | 0.380 | 0.047 | 8.010 | 0.000 | 0.171 | Supported
H11: AST → CUI | 0.142 | 0.140 | 0.053 | 2.678 | 0.007 | 0.018 | Supported
H12: AHT → CUI | 0.178 | 0.180 | 0.047 | 3.758 | 0.000 | 0.031 | Supported
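The t- and p-values in Table 4 follow from the bootstrap output in the usual way: the path coefficient divided by its bootstrap standard deviation, with a two-tailed p-value. A minimal sketch using the rounded H1 figures (the small discrepancy from the reported 3.644 reflects rounding of the tabled values, and the normal reference distribution is an assumption):

```python
from scipy.stats import norm

path, stdev = 0.169, 0.046        # H1: FC -> AST, from Table 4
t = path / stdev                  # ~3.67; Table 4 reports 3.644 from unrounded values
p = 2 * (1 - norm.cdf(abs(t)))    # two-tailed p-value against the standard normal, ~0.000
print(round(t, 3), round(p, 3))
```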
Table 5. Results of the mediation effect test.
Path | Original Sample | Sample Mean | t Statistics | p Values | BCCI 2.5% | BCCI 97.5%
FC → AHT | 0.070 | 0.071 | 3.470 | 0.001 | 0.032 | 0.111
FC → CUI | 0.033 | 0.033 | 2.108 | 0.035 | 0.004 | 0.065
AST → CUI | 0.074 | 0.075 | 3.472 | 0.001 | 0.034 | 0.118
OT → AHT | 0.023 | 0.024 | 1.174 | 0.241 | −0.014 | 0.063
OT → CUI | 0.024 | 0.025 | 1.680 | 0.093 | −0.000 | 0.055
ST → AHT | 0.055 | 0.054 | 2.926 | 0.003 | 0.019 | 0.092
ST → CUI | 0.053 | 0.055 | 3.104 | 0.002 | 0.025 | 0.090
PE → AHT | 0.175 | 0.176 | 5.611 | 0.000 | 0.118 | 0.240
PE → CUI | 0.122 | 0.122 | 5.039 | 0.000 | 0.077 | 0.171
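The BCCI columns report bias-corrected bootstrap confidence intervals for the indirect effects. A minimal sketch of how such an interval can be obtained from a vector of bootstrap estimates (the draws below are simulated stand-ins, not the study's resamples):

```python
import numpy as np
from scipy.stats import norm

def bias_corrected_ci(boot, estimate, alpha=0.05):
    """Bias-corrected (no acceleration) bootstrap CI from resampled estimates."""
    boot = np.asarray(boot, dtype=float)
    z0 = norm.ppf(np.mean(boot < estimate))          # bias-correction factor
    lo = norm.cdf(2 * z0 + norm.ppf(alpha / 2))      # adjusted lower percentile
    hi = norm.cdf(2 * z0 + norm.ppf(1 - alpha / 2))  # adjusted upper percentile
    return np.quantile(boot, [lo, hi])

rng = np.random.default_rng(0)
boot_ab = rng.normal(0.07, 0.02, 5000)   # simulated stand-in for bootstrapped indirect effects
print(bias_corrected_ci(boot_ab, estimate=0.07))
```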
Table 6. Predictive relevance (Q2) and R2.
Construct | Q2 | Predictive Relevance | R2
AST | 0.232 | Q2 > 0 | 0.371
AHT | 0.270 | Q2 > 0 | 0.353
CUI | 0.180 | Q2 > 0 | 0.305
Table 7. RMSE for neural network model.
Model A: Input PE, FC, ST; Output AST. Model B: Input PE, ST, AST; Output AHT. Model C: Input FC, AST, AHT; Output CUI.
Neural Network | A: Training RMSE | A: Testing RMSE | B: Training RMSE | B: Testing RMSE | C: Training RMSE | C: Testing RMSE
ANN1 | 0.092 | 0.107 | 0.113 | 0.093 | 0.121 | 0.109
ANN2 | 0.089 | 0.107 | 0.111 | 0.107 | 0.129 | 0.129
ANN3 | 0.095 | 0.084 | 0.116 | 0.120 | 0.123 | 0.128
ANN4 | 0.090 | 0.089 | 0.110 | 0.109 | 0.121 | 0.123
ANN5 | 0.088 | 0.100 | 0.109 | 0.104 | 0.120 | 0.134
ANN6 | 0.109 | 0.102 | 0.112 | 0.087 | 0.122 | 0.114
ANN7 | 0.095 | 0.075 | 0.111 | 0.103 | 0.121 | 0.123
ANN8 | 0.094 | 0.091 | 0.117 | 0.098 | 0.117 | 0.138
ANN9 | 0.095 | 0.092 | 0.112 | 0.092 | 0.120 | 0.139
ANN10 | 0.094 | 0.089 | 0.114 | 0.116 | 0.121 | 0.135
Mean | 0.094 | 0.094 | 0.113 | 0.103 | 0.122 | 0.127
SD | 0.006 | 0.010 | 0.003 | 0.010 | 0.003 | 0.010
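Each entry in Table 7 is the root-mean-square error of one trained network, and the Mean and SD rows summarize the ten runs per model. A short sketch of both computations, using the Model A testing values transcribed from the table:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between observed and network-predicted values."""
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Testing RMSE of the ten Model A networks (Table 7); mean ~0.094, sample SD ~0.010
testing_a = [0.107, 0.107, 0.084, 0.089, 0.100, 0.102, 0.075, 0.091, 0.092, 0.089]
print(round(np.mean(testing_a), 3), round(np.std(testing_a, ddof=1), 3))
```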
Table 8. Sensitivity analysis.
Model A (Output: AST): inputs PE, FC, ST. Model B (Output: AHT): inputs AST, PE, ST. Model C (Output: CUI): inputs FC, AST, AHT.
Neural Network | A: PE | A: FC | A: ST | B: AST | B: PE | B: ST | C: FC | C: AST | C: AHT
ANN1 | 0.646 | 0.216 | 0.137 | 0.591 | 0.278 | 0.131 | 0.487 | 0.256 | 0.257
ANN2 | 0.682 | 0.167 | 0.151 | 0.556 | 0.288 | 0.156 | 0.386 | 0.343 | 0.270
ANN3 | 0.552 | 0.238 | 0.210 | 0.472 | 0.268 | 0.260 | 0.372 | 0.358 | 0.270
ANN4 | 0.472 | 0.342 | 0.186 | 0.397 | 0.382 | 0.221 | 0.440 | 0.210 | 0.350
ANN5 | 0.580 | 0.254 | 0.165 | 0.567 | 0.271 | 0.162 | 0.376 | 0.399 | 0.225
ANN6 | 0.502 | 0.253 | 0.245 | 0.619 | 0.204 | 0.178 | 0.450 | 0.312 | 0.238
ANN7 | 0.517 | 0.277 | 0.206 | 0.645 | 0.227 | 0.128 | 0.436 | 0.305 | 0.259
ANN8 | 0.530 | 0.213 | 0.257 | 0.433 | 0.363 | 0.204 | 0.484 | 0.289 | 0.227
ANN9 | 0.654 | 0.197 | 0.149 | 0.612 | 0.232 | 0.155 | 0.519 | 0.249 | 0.231
ANN10 | 0.689 | 0.241 | 0.070 | 0.682 | 0.179 | 0.139 | 0.534 | 0.311 | 0.155
Average relative importance | 0.582 | 0.240 | 0.178 | 0.557 | 0.269 | 0.173 | 0.448 | 0.303 | 0.248
Normalized relative importance (%) | 100.00 | 42.64 | 31.99 | 100.00 | 51.54 | 33.13 | 99.41 | 69.06 | 56.40
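The last two rows of Table 8 can be reconstructed from the per-network importances: the average relative importance is the column mean, and the normalized relative importance averages each network's importances after scaling them by that network's largest value. A sketch using the Model A columns (the small differences from the published 42.64% and 31.99% reflect rounding of the transcribed values):

```python
import numpy as np

# Relative importance of Model A inputs (PE, FC, ST) across the ten networks (Table 8)
imp = np.array([
    [0.646, 0.216, 0.137], [0.682, 0.167, 0.151], [0.552, 0.238, 0.210],
    [0.472, 0.342, 0.186], [0.580, 0.254, 0.165], [0.502, 0.253, 0.245],
    [0.517, 0.277, 0.206], [0.530, 0.213, 0.257], [0.654, 0.197, 0.149],
    [0.689, 0.241, 0.070],
])

avg = imp.mean(axis=0)                                         # average relative importance
normalized = 100 * (imp / imp.max(axis=1, keepdims=True)).mean(axis=0)
print(np.round(avg, 3), np.round(normalized, 2))               # ~[0.582 0.240 0.178], ~[100. 42.66 31.93]
```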
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
