1. Introduction
In recent years, with the continuous advancement of artificial intelligence (AI) technology, the emergence of Large Language Models (LLMs) has enabled machines to recognize and generate coherent text with remarkable accuracy [1]. As theoretical algorithms transition into practical applications, AI tools are evolving at an unprecedented pace, especially in the realm of Artificial Intelligence Generated Content (AIGC), which has demonstrated immense potential. In 2022, OpenAI introduced ChatGPT based on GPT-3.5, an optimized conversational language model that quickly gained widespread recognition for its powerful capabilities in information processing, text generation, and human–computer interaction. ChatGPT’s rapid rise as a transformative tool across various industries highlights the profound impact of AIGC-driven technological advancements [2]. These tools are reshaping how individuals access information and create content, fundamentally altering the landscape of digital interaction.
ChatGPT and similar systems generate content from large text corpora, enabling users to solve complex problems and receive human-like responses. These tools not only reshape human–computer interaction but also introduce new forms of interactive learning [3]. As a demographic that rapidly adopts new technologies and has a strong demand for problem-solving, information acquisition, and learning efficiency, university students have emerged as one of the primary user groups of AI chatbots such as ChatGPT [4].
However, alongside the efficiency and convenience brought by AIGC tools, new challenges have emerged. From an algorithmic perspective, AI-generated information often operates as a “technical black box” due to its opacity, lack of interpretability, and inherent uncertainty [5]. When AI systems provide incorrect or misleading answers, potentially resulting in negative consequences for users, trust-related concerns may arise, leading to a “crisis of trust”. The trust relationship between humans and AI has become a crucial factor influencing user experience and the willingness to adopt AI tools such as ChatGPT. In this context, trust refers to users’ subjective confidence in AI-generated content, which directly affects the practical applications of AI tools [6].
Prior research suggests that the trust users place in AI significantly shapes their attitudes toward adopting the technology, ultimately determining AI’s long-term role in various domains [7]. Given the growing integration of AI in education, it is essential to investigate students’ perceptions and usage behaviors to facilitate the deeper and more effective application of AIGC in higher education. This study, therefore, focuses on two key aspects: user trust and willingness to use AI in human–AI interactions. Scholars have widely explored the topics of human–AI trust and AI acceptance. For instance, Das and Chernova [8] found that individuals are more likely to trust and rely on AI systems when tackling complex tasks. Wang et al. [9], drawing on the Theory of Planned Behavior (TPB) and AI literacy, found that college students’ attitudes mediate the effects of AI literacy and subjective norms on their intention to use generative AI, making AI literacy and social norms key factors in adoption. Choung et al. [10], using the Technology Acceptance Model (TAM), examined the impact of human-like trust and functional trust on AI acceptance in education. Their findings demonstrated the significant role of trust in shaping users’ willingness to continue using AI and provided valuable insights for subsequent research on human–AI trust.
Although existing research provides valuable insights into AI adoption behavior in education, significant gaps remain. Theoretical frameworks such as TAM and the Unified Theory of Acceptance and Use of Technology (UTAUT) mainly emphasize adoption behavior. While they examine factors such as trust and perceived usefulness, they typically conceptualize trust as a single dimension, without distinguishing its rational and emotional mechanisms [11]. As a result, the processes by which students develop trust in AI in educational contexts, and how such trust translates into sustained usage intention, remain insufficiently understood. This study addresses this gap by examining how rational (system-like) trust develops into affective (human-like) trust and how this dual-dimensional trust shapes continued usage intention.
At the same time, this study introduces antecedent variables including AI Facilitating Conditions (FCs), Performance Expectancy (PE), and perceived AI ability in subjective and objective tasks (STs and OTs). It adopts a combined approach of structural equation modeling (SEM) and artificial neural networks (ANNs) to comprehensively examine variable relationships and predictive significance. By integrating theoretical and methodological perspectives, this study seeks to advance the understanding of trust formation and application mechanisms for generative AI in education, offering both theoretical contributions and practical insights to encourage positive, rational, and sustainable use of AI tools among university students.
3. Research Model and Hypotheses
This paper constructs a hypothetical model framework based on the Stimulus-Organism-Response (S-O-R) theory. The S-O-R model is an important research framework in the fields of cognitive and educational psychology, primarily used in studies of user behavior. In recent years, the S-O-R model has demonstrated considerable applicability in exploring social phenomena [30]. The model comprises three levels: stimuli, organism, and response, which represent, respectively, the triggering factors from the external environment, the user’s emotional state or internal psychological mechanisms, and the subsequent attitude changes or behavioral responses resulting from their emotional and cognitive processes. In operation, stimulus factors act as external conditions that influence the user’s cognitive processes and psychological state, ultimately producing a final response [31].
This study identifies three key motivating factors driving the use of AI chatbots in higher education: AI Facilitating Conditions (FCs), Task Performance Expectancy (PE), and task type. These external factors shape the internal perceptions of users, specifically system-like trust and human-like trust in AI chatbots, which ultimately translate into their behavioral willingness to engage in AI-assisted learning. Based on the S-O-R framework and supported by the previous literature, the proposed research model is illustrated in Figure 1. The model hypothesizes that AI facilitating conditions, task performance expectancy, and task type influence both system-like trust and human-like trust in AI. Furthermore, it examines how AI facilitating conditions, system-like trust, and human-like trust collectively affect the willingness of college students to continue adopting AI-assisted learning.
3.1. AI Facilitating Conditions and Trust
In this study, facilitating conditions primarily refer to students’ knowledge, resources, and capabilities in using artificial intelligence (AI) tools for learning [32,33]. Facilitating conditions encompass students’ understanding of the technologies and functionalities of various AI products, as well as their perceptions of the necessary resources, knowledge, and support required for utilizing them in academic activities [34]. Human–AI trust is a complex psychological construct shaped by multiple factors. Facilitating conditions help users develop a deeper understanding of AI, reduce uncertainty and perceived risks, and foster more favorable attitudes toward AI [35,36]. They also enable users to more accurately evaluate the reliability and effectiveness of AI systems, thereby enhancing their trust in AI [37]. Moreover, the presence of adequate facilitating conditions reflects a supportive, reliable, and safe environment, which allows users to better comprehend the capabilities and application scenarios of AI technologies, ultimately strengthening trust during actual use [38]. Based on this reasoning, we propose the following hypotheses:
H1. AI facilitating conditions positively influence AI system-like trust.
H2. AI facilitating conditions positively influence AI human-like trust.
3.2. Task Performance Expectancy and Trust
When students hold high expectations for the performance of AI tools and believe they can consistently deliver accurate and valuable information, they are more likely to trust the system. According to cognitive load theory, when tasks are complex, humans may experience cognitive load, leading to heightened expectations of AI performance and a greater tendency to rely on AI systems for support [39]. Existing research further indicates that performance expectations are the most critical factor determining individuals’ attitudes toward technology. When faced with tasks involving higher performance expectations, users are more likely to develop cognitive trust in AI systems’ ability to address these challenges [40]. Moreover, since outcomes of tasks with higher performance expectations are often uncertain and difficult to verify independently, individuals may develop a form of affective trust toward AI [41]. Building on these theoretical insights, this study examines the influence of performance expectancy on both system-like and human-like trust and proposes the following hypotheses:
H3. Performance expectancy positively influences AI system-like trust.
H4. Performance expectancy positively influences AI human-like trust.
3.3. Type of Task and Trust
This study examines how university students utilize AI in different types of tasks and how these tasks influence trust in AI. Objectivity refers to the quality of being impartial, unbiased, and independent of human opinions [42]. Based on the level of objectivity, tasks can be classified into subjective tasks (STs) and objective tasks (OTs). Subjective tasks are characterized by lower objectivity, greater personal opinions or emotions, less certainty in outcomes, and more open-ended responses [43]. Examples include opinion-based discussions and creative ideation. Objective tasks, on the other hand, are quantifiable, fact-based, and rely more on rational thinking, with clearly defined correct or incorrect outcomes [44]. Examples include data analysis and programming.
Recent empirical studies in educational contexts further indicate that students’ attitudes toward AI vary across different task types. Students focus not only on AI’s performance in logical organization and functionality; its anthropomorphic characteristics also play a crucial role in trust formation. For instance, Pitts [45] points out that in subjective tasks, students often establish affective trust through AI’s anthropomorphic interactive features, and this human-like trust is significantly linked to their willingness to use it. In contrast, objective tasks with clear standard answers and evaluation criteria may lead students to focus more on AI’s system-like attributes. Yang et al. [46] investigated how task objectivity, time pressure, and cognitive load influence user trust in algorithms. They found that higher task objectivity and time pressure significantly enhance algorithmic trust, markedly increasing users’ reliance on the system’s performance. Based on this literature, we propose that subjective and objective tasks may influence students’ system-like and human-like trust in AI through distinct pathways, leading to the following hypotheses:
H5. Perceived AI ability in subjective tasks has a positive effect on AI system-like trust.
H6. Perceived AI ability in subjective tasks has a positive effect on AI human-like trust.
H7. Perceived AI ability in objective tasks has a positive effect on AI system-like trust.
H8. Perceived AI ability in objective tasks has a positive effect on AI human-like trust.
3.4. AI System-Like Trust (AST) and AI Human-Like Trust (AHT)
Drawing on Choung et al.’s [10] research on AI trust, this study distinguishes trust in AI into two dimensions: system-like trust and human-like trust. AI system-like trust reflects cognitive trust in the functionality, reliability, and usefulness of AI in human–AI interactions. In contrast, AI human-like trust captures users’ emotional and ethical trust in AI, such as trust in its benevolence, integrity, and dependability [47]. McAllister et al. [21] demonstrated that when individuals develop strong cognitive trust in their collaborative partners, they also tend to establish high levels of emotional trust. Following this reasoning, this study proposes the following hypothesis regarding the relationship between AI system-like trust and AI human-like trust:
H9. AI system-like trust positively influences AI human-like trust.
3.5. Willingness to Continue Using AI-Assisted Learning
Building on prior research on human–AI trust in educational contexts, this study further examines how both system-like trust and human-like trust affect students’ willingness to use AI in learning, as well as which dimension exerts a stronger influence on continuance usage intention (CUI). Previous studies have emphasized that facilitating conditions constitute a critical factor in determining willingness to use and behavioral intention [48]. For example, Topsakal et al. [49] found that users’ perception of the convenience of generative AI directly shapes their trust in AI and their willingness to adopt AI tools. Integrating these insights with the UTAUT framework, this study argues that facilitating conditions—representing users’ cognitive understanding, mastery, and external support for AI—may not only enhance trust in AI (H1–H2) but also directly increase students’ willingness to continue using AI-assisted learning. Thus, the following hypothesis is proposed:
H10. AI facilitating conditions positively influence continuance usage intention.
Prior studies have also consistently shown that users’ trust in AI tools is positively associated with their willingness to adopt and continue using them. Topsakal et al. [50] demonstrated that trust in AI significantly enhances users’ willingness to engage with it, while Prakash et al. [51] found that trust in the functional capabilities of chatbots strongly predicts intention to continue use. Similarly, Delgosha et al. [52] confirmed that higher levels of functional trust foster users’ cognitive absorption and exploratory behaviors toward the technology. In the context of the sharing economy, Califf et al. [53] further revealed that both human-like and system-like trust positively affect enjoyment and continuance intention, with human-like trust exerting a particularly notable impact. These findings collectively suggest that trust in AI substantially shapes users’ willingness to engage with the technology. Accordingly, this study investigates the dual-dimensional pathways through which system-like and human-like trust influence continuance usage intention and proposes the following hypotheses:
H11. AI system-like trust positively influences continuance usage intention.
H12. AI human-like trust positively influences continuance usage intention.
4. Methodology
4.1. Data Collection and Sampling Method
The participants in this study were recruited from multiple universities in China, encompassing undergraduate, master’s, and doctoral students across diverse academic disciplines, including science and technology as well as humanities and social sciences. This broad participant pool enhances the representativeness of the sample. Students were invited to complete the questionnaire both online and offline through the Questionnaire Star platform. Prior to participation, all respondents were screened to confirm that they had prior experience using linguistic AI tools (e.g., ChatGPT), ensuring that the sample consisted exclusively of users familiar with AI applications.
A total of 500 questionnaires were collected through online and offline channels. After excluding 34 responses due to abnormal completion time or poor response quality, the final dataset consisted of 466 valid responses. All participants were current students (n = 466) with prior experience using linguistic AI tools and reported using such tools at least once per month. The sample distribution was as follows: master’s students comprised the largest proportion (n = 233, 50%), followed by undergraduate students (n = 216, 46.4%) and doctoral students (n = 17, 3.6%). The sample therefore contains a slightly higher proportion of master’s students than undergraduates, while doctoral students are comparatively underrepresented. As this study does not focus on differences in AI trust and usage across academic stages, the overall sample size is sufficient to support SEM and ANN analyses. Nevertheless, perspectives may vary across students at different academic levels, and these background characteristics should be considered by future researchers when interpreting the findings.
4.2. Measurement Instrument
The questionnaire items were adapted from established scales and previous research, with minor modifications to align with the context of this study. All items were measured using a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree).
The measurement items for the trust constructs were adapted from Lee and See’s description of trust in human–automation systems and Muir and Moray’s proposed measurement dimensions for trust in automated systems [22,54] and were further divided into two parts—system-like trust and human-like trust—based on Choung et al.’s [10] research on trust in AI. The measurement items for AI facilitating conditions were adapted from the research of Balakrishnan et al. [55]. The items for performance expectancy measure three dimensions: the ability, effectiveness, and efficiency of AI tools in addressing academic issues [11]. In measuring the type of task, this study draws upon existing research to categorize tasks into objective and subjective tasks based on their objectivity [56]. The questionnaire guides participants to distinguish different task contexts by providing examples (such as data analysis or creative generation) and separately assesses their perceptions of AI capabilities in subjective and objective tasks. Finally, continuance usage intention is measured through three items regarding the willingness to use AI tools in learning [57].
4.3. Data Analysis
In this study, the data were analyzed using both Partial Least Squares Structural Equation Modeling (PLS-SEM) and Artificial Neural Networks (ANNs) to obtain a more comprehensive understanding of the relationships among the constructs. For the structural equation component, PLS-SEM was conducted using SmartPLS version 4.1.0.9 [58]. PLS-SEM was chosen over covariance-based SEM (CB-SEM) because this study is exploratory rather than confirmatory in nature; PLS-SEM also offers higher statistical power and can handle non-normally distributed data [59]. SEM is the main method for testing whether the hypotheses hold and whether the factors are significant, but it can only test linear models [60]. Artificial neural network techniques can capture complex non-linear relationships [61], yet, owing to their “black box” nature, ANNs are not suitable for testing hypotheses and causal relationships [62]. Therefore, this study integrates SEM and ANN analyses to test the hypotheses and to determine the order of importance of predictors for the dependent variable.
The minimum sample size required for this study was estimated using G*Power 3.1.9.7 with an effect size f² = 0.15, α = 0.05, statistical power (1 − β) = 0.80, and four predictors. The calculated minimum sample size was 85, so the actual sample of 466 in this study is more than sufficient.
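For transparency, this a priori power calculation can be reproduced outside G*Power. The sketch below is a minimal illustration, assuming the conventional noncentral F test for multiple regression with noncentrality parameter λ = f²·N; it is not part of the study’s analysis pipeline.

```python
# Sketch: reproduce the a priori sample-size estimate for a multiple regression
# F test (f^2 = 0.15, alpha = 0.05, power = 0.80, 4 predictors), assuming the
# conventional noncentrality parameterization lambda = f^2 * N used by G*Power.
from scipy.stats import f as f_dist, ncf

def power_for_n(n, f2=0.15, k=4, alpha=0.05):
    """Power of the overall F test with k predictors and n observations."""
    df1, df2 = k, n - k - 1
    nc = f2 * n                                  # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, df1, df2)     # critical value under H0
    return 1 - ncf.cdf(f_crit, df1, df2, nc)     # P(reject H0 | H1)

n = 10
while power_for_n(n) < 0.80:                     # smallest n reaching 80% power
    n += 1
print(n)  # about 85, matching the G*Power estimate reported above
```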
5. Results
5.1. Measurement Model Assessment (CFA)
Dijkstra-Henseler’s rho_A and Composite Reliability (CR) were used to assess internal reliability. As shown in Table 2, the rho_A and CR values for all variables except PE exceed the threshold of 0.70, indicating strong internal consistency [63]. The rho_A value for PE is 0.667; in exploratory research, reliability values between 0.60 and 0.70 are considered acceptable [64]. The data reliability in this study therefore meets the conditions for analysis. Convergent Validity (CV) was evaluated using factor loadings (FLs) and average variance extracted (AVE). With the exception of PE1 and CUI2, all FL values surpass the 0.70 threshold [65]. According to [66], FL values between 0.40 and 0.70 are acceptable provided that AVE exceeds 0.50 and CR exceeds 0.70; values below 0.40, however, should be considered for removal. Since the constructs containing PE1 and CUI2 still achieve AVE values above 0.60 and CR values above the 0.70 threshold, both items were retained. As a robustness check, the model was also re-estimated after removing PE1 and CUI2. The path coefficients and significance levels of the main relationships remained essentially consistent with those of the original model; for instance, the path coefficient from PE to AHT changed from 0.176 to 0.167, while the coefficient from AST to AHT shifted from 0.416 to 0.422, both minor variations. The significance of each path was likewise unchanged, leaving the overall pattern of results intact. This suggests that PE1 and CUI2 do not bias the conclusions and that their retention helps maintain the content validity of the constructs.

The AVE values for all constructs range from 0.595 to 0.823, consistently exceeding the minimum requirement of 0.50 [65]. These results confirm that the measurement model demonstrates satisfactory CV. Discriminant validity (DV) was assessed using the heterotrait–monotrait (HTMT) ratio of correlations, where values close to 1 indicate a lack of discriminant validity [65]. As shown in Table 3, all HTMT values remain below 0.85, demonstrating adequate DV for the model [63].
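For readers less familiar with these reliability and convergent validity indices, the following minimal sketch shows how CR and AVE are computed from standardized outer loadings; the loadings are hypothetical and are not taken from Table 2.

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum(l))^2 / ((sum(l))^2 + sum(1 - l^2)) for standardized loadings."""
    l = np.asarray(loadings, dtype=float)
    return l.sum() ** 2 / (l.sum() ** 2 + (1 - l ** 2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    l = np.asarray(loadings, dtype=float)
    return (l ** 2).mean()

# Hypothetical three-item construct (illustrative values only)
loadings = [0.82, 0.78, 0.69]
print(round(composite_reliability(loadings), 3))       # ~0.81, above the 0.70 threshold
print(round(average_variance_extracted(loadings), 3))  # ~0.59, above the 0.50 threshold
```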
5.2. Inspecting the Inner Structural Model
Model fit was evaluated using the standardized root mean square residual (SRMR) [66]. The SRMR values for both the saturated and estimated models are below 0.100, confirming a good fit for models using the PLS algorithm [67]. Multicollinearity was tested using the variance inflation factor (VIF), with all values ranging from 1.141 to 2.471, well below the critical threshold of 5.000, indicating no multicollinearity issues [68]. The structural model’s hypothesized paths were assessed using a bias-corrected and accelerated (BCa) bootstrap procedure with 5000 subsamples.
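As an illustration of the collinearity check, the sketch below computes VIF values with statsmodels; the predictor scores are simulated placeholders rather than the latent-variable scores exported from SmartPLS.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools import add_constant

# Simulated placeholder scores; in practice, use the construct scores from SmartPLS
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(466, 4)), columns=["FC", "PE", "ST", "OT"])

exog = add_constant(X)  # include an intercept, as in regression-based VIF
vifs = {col: variance_inflation_factor(exog.values, i)
        for i, col in enumerate(exog.columns) if col != "const"}
print(vifs)  # values near 1 here; anything above 5 would flag multicollinearity
```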
The results are presented in Table 4. Among the examined factors, FC (p < 0.001), PE (p < 0.001), and ST (p = 0.002) exerted significant positive effects on AST, whereas OT (p = 0.232) showed no significant influence. Similarly, AST (p < 0.001), PE (p = 0.001), and ST (p = 0.003) had significant positive effects on AHT, while FC (p = 0.690) and OT (p = 0.148) were not significant predictors. Finally, AST (p = 0.007), AHT (p < 0.001), and FC (p < 0.001) were all found to significantly enhance students’ continuance intention toward AI-assisted learning. Accordingly, hypotheses H2, H7, and H8 were not supported, whereas the remaining hypotheses received empirical support.
Effect sizes between variables were assessed using f², as presented in Table 4. The results indicate that FC and ST exerted small effects on AST (f² = 0.033 and 0.023), whereas PE demonstrated a medium effect (f² = 0.219). For AHT, PE and ST had small effects (f² = 0.032 and 0.025), while AST exhibited a medium effect (f² = 0.178). Regarding CUI, AST and AHT demonstrated small effects (f² = 0.018 and 0.031), whereas FC showed a medium effect (f² = 0.171).
5.3. The Mediation Effect
This study examines the mediating effects of AI-related variables on AI trust and AI-assisted learning willingness. The bootstrapping method (5000 resamples) was used to estimate indirect effects, with the total indirect effects summarized in Table 5.
The results reveal several notable indirect effects. First, PE demonstrated the strongest indirect influence in the model: it significantly affected AHT through a mediating pathway (indirect effect = 0.175, t = 5.611, p < 0.001) and exerted a significant indirect effect on CUI (indirect effect = 0.122, t = 5.039, p < 0.001). This finding suggests that, when using ChatGPT to complete learning tasks, students perceive performance expectancy as a key driver shaping their trust and attitudes toward AI.
Second, FC significantly and positively influenced AHT and CUI through mediating effects, with indirect values of 0.070 (t = 3.470, p = 0.001) and 0.033 (t = 2.108, p = 0.035), respectively. In addition, the indirect effect of AST on CUI was also significant (indirect effect = 0.074, t = 3.472).
Finally, differences emerged between subjective and objective task types. Under objective tasks, the indirect effects were not significant, indicating that users’ trust in AI and willingness to use it did not increase meaningfully through mediating pathways. In contrast, subjective tasks yielded significant indirect effects on both AHT (indirect effect = 0.055, p = 0.003) and CUI (indirect effect = 0.053, p = 0.002). These results highlight that AI applications in subjective task scenarios are more effective in fostering user trust and strengthening students’ willingness to adopt AI-assisted learning.
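To make the mediation logic concrete, the sketch below bootstraps an indirect effect (the product of the a and b paths) on simulated data; the variables and values are illustrative only and do not reproduce SmartPLS’s internal estimation.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 466

# Simulated illustration of a PE -> AST -> AHT pathway (not the study's data)
pe = rng.normal(size=n)
ast = 0.45 * pe + rng.normal(scale=0.9, size=n)
aht = 0.40 * ast + 0.15 * pe + rng.normal(scale=0.9, size=n)

def indirect_effect(x, m, y):
    """a*b: slope of x->m times the partial slope of m->y controlling for x."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return a * b

boot = []
for _ in range(5000):                              # 5000 resamples, as in the paper
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(pe[idx], ast[idx], aht[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {indirect_effect(pe, ast, aht):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```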
5.4. The Predictive Relevance and Effect Size
This study used R² and Q² to assess the explanatory and predictive power of the model. The model explains 37.1% of the variance in AST (R² = 0.371), 35.3% in AI human-like trust (R² = 0.353), and 30.5% in AI continuance usage intention (R² = 0.305). The model’s predictive relevance was assessed using Stone–Geisser’s Q² values [69], all of which are greater than zero, confirming its predictive validity (Table 6).
5.5. Artificial Neural Network Analysis
Since PLS-SEM is limited to capturing compensatory and linear relationships [70], this study further supplements the analysis by employing Artificial Neural Networks (ANNs) to explore potential nonlinear relationships that could enhance decision-making. Based on the SEM analysis results presented earlier, we constructed three ANN models for AST, AHT, and CUI, with their structures shown in Figure 2. The Root Mean Square Error (RMSE) results from 10-fold cross-validation of the models are shown in Table 7, and the corresponding sensitivity analysis results are presented in Table 8.
To mitigate overfitting, a tenfold cross-validation procedure was implemented, using 90% of the data for training and 10% for testing [62]. To assess the predictive accuracy of the ANN models, RMSE values from ten neural network runs were averaged (Table 7). The cross-validated RMSE values for the test models were 0.093523 (AST), 0.102876 (AHT), and 0.127188 (CUI), while those for the training models were 0.094136, 0.112714, and 0.121637, respectively. The consistently low RMSE values indicate strong reliability, demonstrating the ANN models’ ability to accurately and consistently capture the relationships between predictors and outcomes [71].
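A minimal sketch of this validation step is shown below, assuming scikit-learn’s MLPRegressor as the network and simulated placeholder scores; the paper does not specify the software or architecture used, so this is an illustration rather than the actual procedure.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# Placeholder data: three predictors of CUI (e.g., FC, AST, AHT) and a target
rng = np.random.default_rng(1)
X = rng.normal(size=(466, 3))
y = X @ np.array([0.4, 0.15, 0.2]) + rng.normal(scale=0.5, size=466)

train_rmse, test_rmse = [], []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=1).split(X):
    net = MLPRegressor(hidden_layer_sizes=(2,), solver="lbfgs", max_iter=2000, random_state=1)
    net.fit(X[train_idx], y[train_idx])
    train_rmse.append(mean_squared_error(y[train_idx], net.predict(X[train_idx])) ** 0.5)
    test_rmse.append(mean_squared_error(y[test_idx], net.predict(X[test_idx])) ** 0.5)

print(np.mean(train_rmse), np.mean(test_rmse))  # averaged RMSE over the ten folds
```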
The results of the Sensitivity Analysis (SA) for the independent variables are presented in Table 8. The relative importance of each predictor was calculated from the ANN analysis and ranked accordingly. Predictor importance reflects the extent to which variations in the predictor values alter the predicted outcomes of the network model [72]. Based on the normalized importance scores, PE (SA = 100%) emerged as the most significant determinant of AST, followed by FC (SA = 42.64%) and ST (SA = 31.99%). For AHT, AST (SA = 100%) was identified as the strongest predictor, followed by PE (SA = 51.54%) and ST (SA = 31.13%). Regarding CUI, FC (SA = 99.41%) had the greatest influence, followed by AST (SA = 69.06%) and AHT (SA = 56.40%).
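The paper does not state how predictor importance was computed; one common approximation is permutation importance, normalized so that the strongest predictor equals 100%, as sketched below using the placeholder X, y, and network setup from the previous example.

```python
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

# X, y: placeholder predictor scores and target from the previous sketch
net = MLPRegressor(hidden_layer_sizes=(2,), solver="lbfgs", max_iter=2000, random_state=1).fit(X, y)

result = permutation_importance(net, X, y, n_repeats=30, random_state=1)
importance = result.importances_mean
normalized = 100 * importance / importance.max()  # scale the top predictor to 100%

for name, score in zip(["FC", "AST", "AHT"], normalized):
    print(f"{name}: {score:.1f}%")
```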
When compared with the SEM results, the ANN analysis produced consistent findings. Both methods confirmed that PE, AST, and FC represent the most critical factors influencing AST, AHT, and students’ continuance usage intention, respectively.
6. Discussion
6.1. Key Findings
6.1.1. Facilitating Conditions as a Key Driver of Trust in AI
The findings of this study highlight facilitating conditions (FCs) as an important driver influencing students’ trust in AI systems and their willingness to continue using AI-assisted learning. As expected, students who are more familiar with AI operations, find it convenient to use AI tools, and possess richer AI-related knowledge demonstrate stronger trust in AI systems and a greater willingness to adopt AI for learning. This result is consistent with prior studies emphasizing the role of facilitating conditions in shaping trust and usage intentions [50,73]. When students believe they have sufficient access to resources, knowledge, and support to effectively utilize ChatGPT, their attitudes toward the technology become more trusting, which in turn strengthens their willingness to use it. Conversely, students with limited AI literacy or inadequate support are more likely to perceive the technology as difficult to use [74], thereby reducing their willingness to adopt it. This underscores the direct influence of supportive conditions on AI usage intention.
While prior studies have largely examined the influence of facilitating conditions on overall trust and behavioral intention, few have explored their differentiated effects on distinct dimensions of trust. This study contributes to filling this gap by showing that, although facilitating conditions significantly enhance students’ trust in the functional effectiveness and reliability of AI systems (supporting H1), they do not significantly shape trust in AI’s ethical or emotional dimensions (H2 not supported). This finding suggests that facilitating conditions primarily strengthen system-like trust, rather than human-like trust, in the context of AI-assisted learning.
6.1.2. The Dual Role of Performance Expectancy in Promoting AI Trust
The results confirm that performance expectancy is a key driver of both system-like and human-like trust in AI and exerts the strongest mediating effect on students’ willingness to continue using AI. When students expect ChatGPT to deliver accurate and useful information, they are more likely to trust the system, consistent with Dwivedi et al. [75], who identified performance expectancy as a critical determinant of technology adoption.

In complex or uncertain tasks, students may experience cognitive overload, and AI tools provide efficiency and problem-solving capacity beyond what they could achieve alone, thereby strengthening both functional and emotional trust [76]. For simpler tasks, reliance on AI is weaker. Thus, when tasks are perceived as challenging, students are more motivated to leverage AI, and if these tools consistently meet or exceed expectations, trust and sustained use are reinforced [11].
6.1.3. AI’s Trust Paradox in Different Task Types
The type of task had a significant effect on functional trust and emotional trust in this study. Unlike previous studies suggesting that people tend to trust AI more in objective tasks [77], the college students in this study exhibited stronger trust in AI when engaging in subjective tasks, while the effect of objective tasks was not significant. A plausible explanation is that advancements in AI technology have strengthened users’ trust in it when addressing subjective questions: Castelo et al. [44] suggest that earlier concerns about AI’s inability to handle subjective tasks due to a lack of human emotions may no longer hold. When AI tools are perceived as not yet achieving high accuracy in objective tasks (as with early image recognition systems), users’ trust remains low. In subjective tasks, however, AI models such as GPT-5 and DeepSeek have shown significant improvements in generating meaningful and contextually relevant content, leading to increased trust. This aligns with Song et al. [42], who argue that since subjective tasks allow for open-ended responses, trust in AI depends on individual opinions and preferences. Previously, consumers perceived AI with lower intelligence as incapable of accurately handling these tasks; more advanced AI models, however, are now recognized for their ability to analyze complex information and provide reliable recommendations [78], thereby enhancing user trust [79].
Based on this finding, two hypotheses are suggested for future research. First, cognitive differences may shape trust across task types. Students aware of AI’s algorithmic limits may demand higher accuracy in objective tasks, reducing trust when errors occur, whereas in subjective tasks, they may prioritize inspiration and interaction, showing more tolerance for technical flaws. Second, risk perception may also matter. In objective tasks with clear right or wrong outcomes, especially in academic or medical domains, concerns about errors and responsibility can reduce trust. By contrast, subjective tasks such as brainstorming or discussion lack strict correctness standards, lowering perceived risks and making trust easier to build.
6.1.4. Rational Trust as the Foundation of Emotional Trust
This study confirms that trust in AI system-like attributes has a significant positive effect on trust in AI human-like attributes. This finding resonates with prior research on the impact of interpersonal trust on students in educational settings and further elucidates the relationship between the two types of trust [45]. Specifically, the higher the students’ trust in the functionality and effectiveness of AI, the stronger their emotional trust in its human-like qualities. This result suggests that, in the process of building students’ trust in and use of AI, system-like trust serves as a more fundamental and essential basis than human-like trust, with the development of trust evolving from rational trust to emotional trust. These findings also support Castelo et al.’s [44] perspective on the progression from system trust to human-like trust in task-driven contexts. By extending prior insights from interpersonal trust research, this study contributes to the domains of human–AI trust and human–machine trust.
6.1.5. The Influence of Dual-Dimensional Trust on Continuance Usage Intention
Finally, the findings demonstrate that both system-like trust and human-like trust are significantly associated with students’ willingness to continue using AI-assisted learning. Each type of trust serves as a mediator that strongly influences behavioral intention, highlighting the central role of trust in driving the adoption of AI learning tools. These results are consistent with prior user studies showing that both system trust and human trust exert positive effects on usage willingness and behavioral intention [80].
Specifically, students’ functional trust and emotional trust in AI tools jointly shape their willingness to adopt AI-assisted learning. The more students trust the effectiveness of AI in solving academic problems and the more confident they are in its ethical soundness and capabilities, the more inclined they are to use AI tools in their learning. This is in line with the findings of Tams et al. [81], who showed that trust fosters willingness to use by strengthening perceptions of self-efficacy, which in turn promote deep engagement and innovative experimentation with the technology. Functional trust motivates users to explore generative AI and fosters positive adoption intentions. Moreover, due to the natural language interaction capabilities of generative AI, human–AI communication increasingly resembles interpersonal interaction. Such human-like engagement cultivates emotional trust in generative AI [82], which further strengthens students’ willingness to incorporate AI tools into their learning practices.
6.2. Theoretical Implications
From a theoretical perspective, this study extends human–machine trust theory to generative AI in education. By adopting a dual-dimensional trust framework, it confirms that system-like trust fosters human-like trust (H9), showing how rational cognition reinforces emotional sentiment. The study also indicates that performance expectancy significantly enhances students’ dual-dimensional trust in AIGC tools (H3, H4), with trust levels in subjective tasks showing a significantly greater increase than in objective tasks (H5, H6 supported; H7, H8 not supported), offering a new angle for exploring dynamic trust in generative AI.
In addition, by integrating PLS-SEM and ANNs, this study reveals how facilitating conditions shape continuance usage intention both directly and indirectly through system-like trust (H1, H10 supported). This underscores the central role of facilitating conditions in building trust and adoption, providing theoretical insights and practical guidance for promoting the responsible and effective application of AI tools in higher education.
It is noteworthy that even though H2, H7, and H8 were not supported, these results still provide meaningful theoretical implications. First, facilitating conditions did not significantly influence human-like trust (H2). This indicates that facilitating conditions primarily strengthen system-like trust by enhancing students’ perceptions of AI functionality and reliability; their influence on human-like trust is more likely to occur indirectly through the pathway FC → AST → AHT (indirect effect = 0.070, t = 3.470, p = 0.001, 95% CI [0.032, 0.111]). Second, objective tasks did not show significant effects on either system-like trust or human-like trust (H7, H8). Prior studies suggest that when students use AI tools in contexts requiring high accuracy or verifiable outcomes, an algorithm aversion effect may arise, reducing trust due to the increased visibility of errors and higher perceived risks [83]. In this study, errors in objective tasks such as data analysis or programming were more noticeable to users and carried high-stakes consequences (for example, implications for academic reliability), which likely undermined trust [84]. In contrast, subjective tasks such as writing or brainstorming produce results that are less strictly defined, with no single correct answer. This lower perception of risk may make students more likely to develop trust in artificial intelligence [85].
6.3. Ethical Considerations of Trust in Education and Responsible AI Use
This study reveals the mechanism linking human–AI trust and usage intention. It not only supports the rational application of artificial intelligence in higher education but also provides insights for preventing over-reliance and misuse while advancing responsible adoption. While the findings highlight the positive role of trust in driving students’ willingness to use generative AI tools, they also underscore the need to address ethical risks that accompany their widespread use. A key concern is blind trust or excessive dependence on AI systems [86]. Such reliance may weaken students’ independent thinking and critical evaluation skills and, in extreme cases, lead to academic misconduct [87]. In addition, algorithmic bias and unequal access to AI tools may widen disparities among students, raising concerns of fairness in learning and assessment [88].
To mitigate these risks, higher education institutions should promote responsible AI usage. Educators need to guide students in combining technological understanding, critical evaluation, and ethical judgment, especially in open-ended and high-performance tasks, to build responsible models of human–AI collaboration [89]. Students should also strengthen AI literacy and adopt prudent practices such as cross-validating outputs and using task-specific strategies to avoid academic risks and dependence. Finally, academic policy bodies should establish clear guidelines to ensure ethical and accountable use. Embedding these considerations into practice is essential to maximize AI’s educational benefits while minimizing potential harms.
6.4. Limitations
This study has certain limitations. First, regarding the findings, although the data analysis showed no statistically significant differences between humanities and social sciences students and STEM students in system-like trust and human-like trust (p = 0.766 and 0.983, respectively), subtle variations in attitudes and engagement patterns across disciplines cannot be ruled out. Such differences may shape how students perceive and apply AI in learning activities. Future research could adopt group comparisons, multilevel modeling, or qualitative methods to further examine potential disciplinary variations in trust and usage intentions.
Second, the proposed model was validated mainly in Chinese higher education. Future studies should examine cross-cultural contexts and include additional factors such as disciplinary background, task–technology fit, and environmental influences to provide a more comprehensive understanding of trust in AI-assisted learning.
Finally, the cross-sectional design employed here does not capture the dynamic nature of trust. Prior studies suggest that trust evolves through processes of erosion and restoration, making it difficult to measure precisely. At the same time, large language models are advancing at a rapid pace, while evaluations of their impact often lag behind technological progress. Consequently, the findings of this study should be regarded as a milestone in the evolution of knowledge, not as a final destination. Future research could apply longitudinal or mixed-method approaches to better capture the dynamics of trust while maintaining theoretical foresight and practical relevance.
7. Conclusions
This study is among the first to examine, within the context of Chinese higher education, how AI facilitating conditions, performance expectancy, and perceived AI ability in subjective and objective tasks influence students’ dual-dimensional trust in AI, namely system-like trust and human-like trust, and how these forms of trust in turn shape continuance usage intention. The findings provide theoretical support for a deeper understanding of the application of generative AI tools in education, highlighting the close interconnection between trust formation and usage contexts. By validating the distinct yet complementary roles of rational (system-like) and emotional (human-like) trust, this research enriches the theoretical framework of human–AI trust and underscores trust as a critical mediating mechanism. Ultimately, this study offers insights for promoting more positive and effective integration of AI into higher education by leveraging trust to guide sustainable adoption.