Article

Understanding Continuance Intention of Generative AI in Education: An ECM-Based Study for Sustainable Learning Engagement

1 Department of Aviation Tourism, Hanseo University, Seosan-si 31962, Republic of Korea
2 HJ Institute of Technology and Management, Seoul 06134, Republic of Korea
* Author to whom correspondence should be addressed.
Sustainability 2025, 17(13), 6082; https://doi.org/10.3390/su17136082
Submission received: 27 May 2025 / Revised: 1 July 2025 / Accepted: 1 July 2025 / Published: 2 July 2025
(This article belongs to the Special Issue Artificial Intelligence in Education and Sustainable Development)

Abstract

Rapid advancements in artificial intelligence (AI) have led to the emergence of generative AI models that produce human-like responses and support a wide range of applications. This study explores the key factors influencing the continuance intention of generative AI among university students, drawing on established theoretical frameworks including the expectation–confirmation model and the technology acceptance model. Using data collected from 282 users, structural equation modeling was applied to examine relationships among knowledge application, perceived intelligence, perceived usefulness, confirmation, satisfaction, AI configuration, social influence, and continuance intention. The results show that both knowledge application and perceived intelligence significantly influence perceived usefulness and confirmation. Perceived usefulness was found to positively affect both satisfaction and continuance intention, while confirmation strongly influenced both perceived usefulness and satisfaction. Satisfaction emerged as a key predictor of continuance intention, as did social influence. However, AI configuration did not significantly impact continuance intention. The model explained 64.1% of the variance in continuance intention. These findings offer meaningful insights for improving the design, implementation, and promotion of AI-based language tools in educational settings.

1. Introduction

Rapid advancements in artificial intelligence (AI) have led to the emergence of sophisticated AI-based language models capable of generating human-like responses, significantly impacting various domains including education, healthcare, customer service, and entertainment [1,2,3,4,5]. Among these language models, generative AI has gained considerable attention due to its ability to understand and generate contextually relevant responses, providing a user-friendly interface for diverse applications [6]. Generative AI stands out as a novel and powerful language model, offering distinct advantages over traditional AI engines in terms of versatility, coherence, and user adaptability. One of the key strengths of generative AI is its remarkable few-shot learning capabilities [7]. This enables generative AI to generate coherent and contextually appropriate responses with minimal training data, which is a significant improvement over earlier models that required extensive fine-tuning. In this study, the term generative AI refers specifically to advanced AI-based systems that are capable of producing human-like text responses through natural language processing.
Among various types of generative AIs, the remarkable success of ChatGPT-3.5 is evident from its rapid user growth, as it amassed an astounding 1 million users within a mere 5-day span following its launch. This phenomenal achievement propelled the platform to become the fastest-growing service, reaching an impressive 100 million users by January 2023 [8]. ChatGPT provides substantial support for college students’ knowledge acquisition and learning activities by enabling information retrieval, integration, generation, and suggestion [9,10]. With the increasing adoption of AI-based language models in educational settings, it is vital to comprehend the determinants that impact users’ intention to persist in utilizing these technologies. Consequently, generative AI users’ continuance intention may be explained by the expectation–confirmation model (ECM) [11]. The ECM is a theoretical framework that explains user satisfaction, post-adoption behavior, and the continued use of a product, service, or technology. The ECM has been widely applied in the field of information systems to understand and predict user behavior regarding technology acceptance and usage [12,13,14,15]. Researchers have modified or extended the model to explicate the behaviors of AI users [16,17,18]. According to the ECM, users perceive higher usefulness when their expectations are met, leading to increased satisfaction and continuance intention.
However, the rapid expansion of its use, especially in academia, has also sparked critical discussions around its ethical implications and appropriate boundaries in educational contexts. From an educational and ethical point of view, students’ use of ChatGPT has caused various debates [19,20]. In this context, it is important to identify the intention of college students to use and derive empirical implications for constructive distribution in the future.
Generative AI intelligently discerns user needs and offers appropriate knowledge, allowing college students to apply it according to their goals [21,22]. The spread of generative AI was mainly driven by word-of-mouth effects, suggesting that social influence may encourage sustained usage. However, users might experience psychological anxiety due to generative AI’s remarkable abilities and speed, fearing job replacement or the potential risks of AI [23]. Many students use generative AI for practical purposes, making it a typical information system. Therefore, this study investigates the impact of knowledge application, perceived intelligence, social influence, and AI configuration on continuance intention based on the ECM. Specifically, this study aims to
  • Determine whether the ECM can effectively explain continuance intention in the context of generative AI;
  • Examine the effects of knowledge application and perceived intelligence on perceived usefulness and confirmation;
  • Investigate the influence of social influence and AI configuration on continuance intention.
Previous research has explored various aspects of AI-based language models, such as their efficiency, accuracy, and user satisfaction [24,25]. Nonetheless, the majority of existing studies have primarily concentrated on AI language models’ performance or their application in specific fields [26,27,28,29,30,31], leaving a gap in our understanding of the factors that drive continuance intention, particularly among university students. This study makes several key contributions to the literature on artificial intelligence in education. First, it offers theoretical advancement by integrating the ECM [11] with the technology acceptance model (TAM) [32] to better understand post-adoption behavior related to generative AI. Second, it shifts the focus from initial adoption to continuance intention, addressing a gap in existing research on sustained technology use [33,34,35]. Third, it provides the empirical validation of core constructs such as knowledge application, perceived intelligence, and satisfaction through structural equation modeling. Fourth, it reevaluates the role of confirmation by revealing its differentiated effects on perceived usefulness and satisfaction [36,37]. Fifth, it offers practical implications for educators and AI developers by identifying actionable factors that influence sustained engagement. Lastly, it identifies key predictors of continuance intention—namely, perceived usefulness, satisfaction, and social influence—thus contributing to the design of more effective AI-based learning tools.
The remainder of this paper is structured as follows. Section 2 reviews the relevant literature and presents the theoretical framework, including hypothesis development based on the ECM and TAM. Section 3 details the research methodology, including data collection, sample characteristics, and measurement constructs. Section 4 presents the results of the structural equation modeling analysis. Section 5 discusses the findings in light of prior research, highlights practical implications, outlines limitations, and suggests directions for future research. Finally, Section 6 concludes the paper by summarizing the theoretical contributions.

2. Theoretical Foundation and Research Hypotheses

This study is mainly anchored in established theoretical frameworks that elucidate the psychological and behavioral aspects of technology usage. Specifically, we draw extensively on the ECM [11], which provides a robust framework for understanding continuance decisions in technology usage by explaining how prior expectations and the subsequent confirmation of those expectations influence users’ perceived usefulness and satisfaction. Moreover, we integrate constructs from the Technology Acceptance Model (TAM) [32], which emphasizes perceived usefulness as a fundamental determinant of technology acceptance and usage behavior. The TAM has been extensively validated and applied in numerous studies investigating the adoption of new technologies and offers a critical perspective on the cognitive mechanisms that drive technology acceptance and continuance intention [38,39,40]. The integration of the ECM and TAM is particularly well-suited for investigating generative AI usage in educational contexts for several reasons. First, generative AI tools were not only initially adopted but are also frequently used and revisited by students, which makes understanding continuance intention more relevant than mere adoption. The ECM, which emphasizes users’ post-adoption evaluation—such as the confirmation of expectations and perceived usefulness—offers a valuable framework to explain why students choose to continue using such tools. Meanwhile, the TAM provides complementary insight by highlighting cognitive perceptions such as usefulness and functionality, which are especially critical for emerging, complex technologies like generative AI. Perceived intelligence and knowledge applicability shape users’ initial evaluations, while continued use is driven by ongoing satisfaction and value realization—core aspects captured by both the ECM and TAM. 
Thus, this dual-framework approach allows for a more comprehensive understanding of both the affective and cognitive factors that govern sustainable engagement with generative AI in education.
Additionally, we consider the role of social influence as posited by the Theory of Planned Behavior (TPB) [41], which asserts that social norms and peer influences can significantly affect individual behavior, particularly in the context of technology usage. This theory complements our model by providing a social lens through which to view the continuance intention of using generative AI.
To extend the classical ECM/TAM/TPB framework and better capture user behavior in the context of generative AI, we incorporate two novel constructs: perceived intelligence and AI configuration. Perceived intelligence reflects users’ recognition of the cognitive capabilities and competence of AI systems. Prior research has shown that perceptions of intelligence can shape cognitive trust and influence users’ willingness to rely on autonomous technologies [42]. In the context of generative AI, which mimics human reasoning and creativity, perceived intelligence serves as a key antecedent to both usefulness and confirmation, as it shapes users’ judgments of the tool’s problem-solving and learning support capacities. AI configuration, on the other hand, captures the emotional and affective responses users have toward the physical or interface embodiment of AI (e.g., humanoid avatars or anthropomorphic features). This construct extends the ECM by incorporating affective barriers—such as discomfort or fear—which may deter continued use despite cognitive evaluations of utility. Affective trust and perceived creepiness are increasingly relevant in human–AI interaction, especially in education, where psychological safety influences sustained engagement. Thus, AI configuration is introduced to enrich the model with emotional factors that may moderate or override rational decision-making.
In sum, the ECM explains the post-adoption process by focusing on confirmation, satisfaction, and perceived usefulness; the TAM adds cognitive constructs like perceived usefulness and perceived intelligence that influence user attitudes; and the TPB contributes the dimension of social influence, capturing normative pressures that shape continued usage. Together, these frameworks allow for a comprehensive analysis of both individual cognition and social context in shaping continuance intention toward generative AI tools.
Figure 1 illustrates the research model. The present study investigates the determinants of continuance intention for AI-based language models, focusing on university students using generative AI. The research model examines the relationships between knowledge application, perceived intelligence, perceived usefulness, confirmation, satisfaction, social influence, and AI configuration on continuance intention. These insights help better understand the factors affecting AI language model adoption among university students and can inform strategies to enhance user experience and promote the integration of AI tools in educational settings.

2.1. Knowledge Application

Knowledge application refers to the process of using acquired knowledge to solve problems, make decisions, or create new ideas [43]. It is an essential aspect of the learning process, as it enables individuals to apply their understanding to practical situations, thereby enhancing the value of knowledge [44]. When knowledge is effectively applied, it is likely to enhance the usefulness of systems and technologies, as individuals can better understand the potential benefits and leverage them for improved performance [45]. Knowledge application can influence confirmation through its effect on users’ ability to fully utilize and comprehend the potential benefits of a system or technology [46]. As individuals apply their knowledge, they become more adept at using the technology, leading to improved performance and a greater likelihood of meeting or exceeding initial expectations [47]. Users are more likely to continue using AI chatbots when they find the acquired knowledge practical and applicable [15]. As users gain tacit knowledge through the application of their explicit knowledge, they may develop a deeper understanding of the technology, further contributing to confirmation [48]. Therefore, based on the theoretical underpinnings and empirical evidence, the following hypotheses are proposed:
H1a. 
Knowledge application has a positive effect on perceived usefulness.
H1b. 
Knowledge application has a positive effect on confirmation.

2.2. Perceived Intelligence

Regarding AI, perceived intelligence refers to its ability to understand natural language and generate appropriate, effective responses [49]. Users form a more favorable emotional and cognitive attitude toward AI chatbots when they perceive them as more intelligent [42]. When students perceive a technology as more intelligent, they may be more likely to view it as useful [50]. Higher perceived intelligence may enable students to better understand and apply a technology, thus increasing its perceived usefulness [51]. Students who perceive a technology as more intelligent could also be more adept at utilizing it, potentially leading to experiences that meet or exceed their initial expectations [47]. Consequently, the following hypotheses are proposed:
H2a. 
Perceived intelligence has a positive effect on perceived usefulness.
H2b. 
Perceived intelligence has a positive effect on confirmation.

2.3. Perceived Usefulness

The concept of perceived usefulness relates to an individual’s belief that a particular system or technology would contribute to their efficiency and productivity [52]. When students perceive generative AI as useful, their satisfaction with the system could increase [52]. A higher perceived usefulness implies that students believe generative AI enhances their performance and assists them in achieving their goals, leading to greater satisfaction [11]. Perceived usefulness positively influences the decision to continue using a technology [11]. When students find generative AI useful, they are more likely to continue using it, as it fulfills their academic needs and expectations [51]. Users tend to have higher satisfaction levels when they perceive an AI chatbot to be more useful [16,53] and try to use it more continuously [16,53,54]. Therefore, based on the theoretical foundations and empirical evidence, the following hypotheses are proposed:
H3a. 
Perceived usefulness has a positive effect on satisfaction.
H3b. 
Perceived usefulness has a positive effect on continuance intention.

2.4. Confirmation

Confirmation is conceptualized as the degree to which individuals’ initial expectations of a system or technology are met or exceeded after using it [11]. High levels of confirmation are associated with increased user satisfaction and continued use [55]. Higher levels of confirmation suggest that students’ experiences align with their expectations, resulting in a stronger belief in the benefits and utility of AI-based systems [56]. When students’ initial expectations of AI-based systems are met or exceeded (confirmation), they tend to have a higher perception of the usefulness of these systems [57]. Confirmation is a prevailing contributor to user satisfaction [11]. When users’ expectations of AI are more fully met, they perceive it as more useful and are more satisfied with it [16]. Drawing on relevant theoretical frameworks and empirical evidence, this paper posits the following hypotheses:
H4a. 
Confirmation has a positive effect on perceived usefulness.
H4b. 
Confirmation has a positive effect on satisfaction.

2.5. Satisfaction

Satisfaction corresponds to users’ overall evaluation of their experience with an information system [11]. Recent studies have indicated that user satisfaction has a substantial impact on the intention to continue using technology-based systems [58]. Users who experience satisfaction with technology-driven tools are more inclined to continue using them as they perceive the system to fulfill their needs and expectations [59]. When users’ gratification with chatting with AI increases, they are more likely to perceive the system as valuable for their pursuits, leading to an increased likelihood of continued use [60,61]. According to the ECM [11,36,62], satisfaction is a direct antecedent of continuance intention. When users are satisfied with the usefulness and reliability of a system, they form positive attitudes that translate into behavioral loyalty. In the generative AI context, repeated satisfaction reinforces trust and habitual use, supporting continued engagement. Therefore, the following hypothesis is proposed:
H5. 
Satisfaction has a positive effect on continuance intention.

2.6. Social Influence

Social influence is delineated as the extent to which people think that others who are important to them think they should use a particular technology [63,64]. Research indicates that social influence plays a significant role in technology adoption and continuance intention [63,65]. Individuals are more likely to continue using technology if they perceive that their peers or other important individuals in their social network approve of and support its use [66]. The greater the influence of peers, the more willing users are to use AI [67]. In the context of university students using generative AI, a strong perceived social influence supporting its use could increase their intention to continue using it. The TPB emphasizes the role of subjective norms in shaping behavioral intention [41,68]. When peers, instructors, or influential figures endorse the use of generative AI, students are likely to internalize these cues and persist in their usage. Social reinforcement creates normative pressure, encouraging continued use even beyond initial adoption. Therefore, the following hypothesis is proposed:
H6. 
Social influence has a positive effect on continuance intention.

2.7. AI Configuration

AI configuration refers to the design and features of an AI-based system that may evoke emotional responses, such as fear or intimidation, in users [69]. It has been developed and validated as a measure of anxiety about AI [23]. Recent research indicates that users’ emotional responses to technology can significantly impact their intentions to continue using it [70]. Negative emotions, such as fear or intimidation, may discourage users from continuing their engagement with the system [71]. In the context of university students using generative AI, if an AI configuration evokes negative emotions, it could potentially decrease their continuance intention. Therefore, the following hypothesis is proposed:
H7. 
AI configuration has a negative effect on continuance intention.

3. Research Methodology

3.1. Measures

To evaluate the constructs within the context of generative AI usage, this study selected and adapted validated survey items from prior research. Specifically, the knowledge application scale was based on Al-Sharafi, et al. [15], who originally assessed AI chatbots in educational settings; this was contextually modified to reflect generative AI use. Similarly, items for perceived intelligence were adapted from Rafiq, et al. [42], who also focused on AI chatbots. Perceived usefulness was drawn from Davis [32], whose original work pertained to PC-based information systems, and confirmation and satisfaction scales were adapted from Bhattacherjee [11], originally applied in an online banking environment. Additional satisfaction items were derived from Nguyen, et al. [16], who addressed banking chatbots. The social influence construct was based on Venkatesh, et al. [72] with a mobile internet context, while AI configuration was adapted from Wang and Wang [23], focused on general AI interfaces. Lastly, continuance intention items were adopted from Bhattacherjee [11]. All selected items were carefully reworded to reflect the specific technological characteristics and educational use cases of generative AI while maintaining their original conceptual integrity.
The author created the questionnaire in English, which was then translated into Korean by a bilingual researcher in the information systems field. The Korean version was then back-translated into English by a bilingual professional in the marketing field, and any discrepancies were resolved by the author. Three researchers specializing in qualitative analysis and information systems reviewed the questionnaire items before implementation. A pilot survey was carried out to confirm the validity, reliability, and logical arrangement of the questions. Feedback from the pilot study was used to refine the final questionnaire and verify its effectiveness. All indicators were measured on a seven-point Likert scale, and Table 1 presents the measurement items for the constructs.

3.2. Data Collection

This study used a cross-sectional survey-based field study design. Several university faculty members assisted in the distribution and administration of the survey to ensure academic relevance and participant engagement. They encouraged their students to respond to the survey on the basis of voluntary participation. The survey was distributed and collected via a Google Forms questionnaire in March 2025. The purpose of the study and its intended academic publication were explained to potential participants on the first page of the online questionnaire. Only those who consented participated in the survey. After removing four responses identified as insincere—defined as those containing straight-lining patterns (e.g., selecting the same scale point for all items)—282 valid responses were retained for analysis. Figure 2 illustrates the survey data collection procedure.
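The straight-lining screen described above can be sketched in a few lines. This is a minimal illustration, assuming the responses sit in a pandas DataFrame; the item column names (KA1, KA2, PU1) are hypothetical and do not correspond to the study’s actual instrument.

```python
import pandas as pd

def drop_straight_liners(df: pd.DataFrame, item_cols: list) -> pd.DataFrame:
    """Drop respondents who selected the identical scale point on every Likert item."""
    straight = df[item_cols].nunique(axis=1) == 1  # one unique value across items
    return df.loc[~straight].reset_index(drop=True)

# Toy example: respondent r2 answers "4" to every item and is removed.
responses = pd.DataFrame({
    "KA1": [5, 4, 6],
    "KA2": [6, 4, 5],
    "PU1": [4, 4, 7],
}, index=["r1", "r2", "r3"])
clean = drop_straight_liners(responses, ["KA1", "KA2", "PU1"])
```

In practice, researchers often combine such an automatic screen with a manual review, since a uniform response pattern is suspicious but not conclusive on its own.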
Figure 3, Figure 4 and Figure 5 present pie charts illustrating the participants’ demographic information by gender, age, and major, respectively. The sample consisted of 41.1% male and 58.9% female respondents. Most participants were aged 20 or younger (60.6%), followed by those aged 23 or older (19.5%). Regarding academic majors, the largest groups were from Arts and Physical Education (32.6%) and Humanities and Social Sciences (31.6%), followed by Engineering (14.2%) and Natural Sciences (8.5%), as well as smaller proportions from Aviation, Tourism, Business, Health Sciences, and other disciplines.

4. Results

The present study applied the partial least squares (PLS) method to handle the formative factors and the large number of constructs. PLS has been widely used as an analysis tool in the IS/IT field [73,74]. It was chosen for this study because of its robustness and its relatively few restrictions on data distribution and sample size [75]. Because the study involved a large number of constructs, a two-stage SEM approach was adopted. In the first stage, we assessed the convergent validity, reliability, and discriminant validity of the measurement items. In the second stage, we tested the structural model.

4.1. Common Method Bias

To assess the potential for common method bias in our study, we analyzed variance inflation factors (VIFs) for each construct. All VIF values are below the recommended threshold of 3.3 [76], indicating that multicollinearity is not a significant issue in our data. This result provides reassurance that our findings are robust and not unduly influenced by methodological artifacts.
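For readers unfamiliar with the mechanics, a VIF regresses each variable on the remaining variables and takes 1 / (1 − R²). The following NumPy sketch illustrates the computation; it is a generic implementation for illustration only, not the study’s actual tooling.

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """VIF for each column of X (n_samples x n_features):
    regress each variable on the others and return 1 / (1 - R^2)."""
    n, k = X.shape
    out = np.empty(k)
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])  # intercept + remaining predictors
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - (resid @ resid) / (((y - y.mean()) ** 2).sum())
        out[j] = 1.0 / (1.0 - r2)
    return out
```

A variable that is nearly a linear combination of the others produces a large VIF; values below 3.3 are commonly read as evidence that neither multicollinearity nor common method bias is a serious concern.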

4.2. Measurement Model

To assess the convergent validity, reliability, and discriminant validity of the measurement scales, confirmatory factor analysis was conducted. Composite reliability (CR) and Cronbach’s alpha were used to assess scale reliability, and both were found to exhibit adequate reliability [77,78]. Convergent validity was also satisfied as the item loading values exceeded 0.70 [79], and the average variance extracted (AVE) was above 0.5 [80]. Table 2 shows the test results of reliability and convergent validity of measures.
Discriminant validity was assessed using the Fornell–Larcker criterion, which compares the square root of the AVE for each construct with the correlations between constructs. According to [80], a construct should share more variance with its indicators than with other constructs in the model. This means that the square root of the AVE for each construct should be greater than its highest correlation with any other construct. As shown in Table 3, the square roots of the AVEs (diagonal values) for all constructs are all greater than their respective inter-construct correlations. For instance, the correlation between knowledge application and perceived usefulness is 0.715, which is lower than the square root of the AVE for both constructs (0.894 and 0.868, respectively). Similarly, the correlation between confirmation and satisfaction is 0.862, which is lower than their AVE square roots of 0.925 and 0.932. These results indicate that each construct in the measurement model is empirically distinct from the others, confirming that discriminant validity is adequately established across all variables.
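The reliability and validity checks above reduce to a few simple formulas. The sketch below shows generic implementations of Cronbach’s alpha, AVE, and the Fornell–Larcker comparison; it is an illustration of the standard formulas, not the software used in the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for one construct; items is n_respondents x k_items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def ave(loadings: np.ndarray) -> float:
    """Average variance extracted from standardized item loadings."""
    return float((loadings ** 2).mean())

def fornell_larcker_ok(ave_a: float, ave_b: float, corr_ab: float) -> bool:
    """Fornell-Larcker check: sqrt(AVE) of both constructs must exceed |corr|."""
    return min(np.sqrt(ave_a), np.sqrt(ave_b)) > abs(corr_ab)
```

Using the figures reported above, `fornell_larcker_ok(0.894**2, 0.868**2, 0.715)` passes, matching the conclusion that knowledge application and perceived usefulness are empirically distinct.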
The model fit was assessed using the Standardized Root Mean Square Residual (SRMR). The SRMR value for the saturated model was 0.058 and that for the estimated model was 0.084, both of which fall within acceptable thresholds, indicating a good model fit [81].

4.3. Structural Model

SEM was performed to assess the proposed relationships among the constructs. The bootstrap resampling technique, with 5000 resamples, was employed to examine the significance of the hypothesized paths in the research framework. The model explained approximately 64.1% of the variance in continuance intention. Additionally, it accounted for 70.3% of the variance in perceived usefulness, 57.9% in confirmation, and 75.1% in satisfaction, indicating strong overall model performance across key constructs.
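The bootstrap logic can be illustrated on a single path. This simplified sketch uses an ordinary least squares slope as a stand-in for a PLS path coefficient (an assumption for illustration; the study’s estimator differs): respondents are resampled with replacement, the coefficient is re-estimated each time, and a 95% percentile interval is formed.

```python
import numpy as np

def bootstrap_coef(x: np.ndarray, y: np.ndarray, n_boot: int = 5000, seed: int = 0):
    """Bootstrap a simple slope: resample respondents with replacement,
    re-estimate the coefficient, and return the estimate with a 95% percentile CI."""
    rng = np.random.default_rng(seed)
    n = len(x)

    def slope(xi, yi):
        xc, yc = xi - xi.mean(), yi - yi.mean()
        return (xc @ yc) / (xc @ xc)

    est = slope(x, y)
    boots = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample respondent indices
        boots[b] = slope(x[idx], y[idx])
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return est, (lo, hi)
```

A path is judged significant when its bootstrap interval excludes zero, which is how a reported result such as β = 0.347, p < 0.001 is typically established in PLS-SEM.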
Table 4 presents the results of the structural model analysis. Most hypotheses were statistically supported. Specifically, knowledge application significantly influenced both perceived usefulness (H1a: β = 0.275, p < 0.001) and confirmation (H1b: β = 0.274, p < 0.001). Similarly, perceived intelligence had a significant effect on both perceived usefulness (H2a: β = 0.271, p < 0.001) and confirmation (H2b: β = 0.547, p < 0.001). Perceived usefulness positively affected satisfaction (H3a: β = 0.142, p < 0.01) and continuance intention (H3b: β = 0.303, p < 0.001). Confirmation strongly predicted both perceived usefulness (H4a: β = 0.394, p < 0.001) and satisfaction (H4b: β = 0.752, p < 0.001). Satisfaction significantly contributed to continuance intention (H5: β = 0.347, p < 0.001), as did social influence (H6: β = 0.264, p < 0.001). However, AI configuration did not have a significant impact on continuance intention (H7: β = −0.050, p = 0.323), and, thus, H7 was not supported.

5. Discussion

This study’s findings underscore the critical role of knowledge application in shaping users’ perceptions of and satisfaction with technology, specifically generative AI. This aligns with the research of Al-Sharafi et al. [15], which posits that the ability to apply knowledge enhances sustainability, fostering a utilitarian perspective towards technological tools. Moreover, the positive influence on confirmation corroborates Bhattacherjee’s [11] expectation–confirmation theory, suggesting that when knowledge application meets or exceeds prior expectations, users’ satisfaction is likely to increase. The insights of Gourlay [48] further affirm this dynamic, indicating that effective knowledge application validates users’ decision to engage with a technology. This bidirectional reinforcement between knowledge application and perceived value highlights the importance of generative AI’s role in educational contexts. Educators should consider integrating generative AI into assignments that require problem-solving, critical thinking, or the synthesis of new ideas. For instance, instructors can ask students to use generative AI to draft essays, generate research questions, or simulate discussions on complex topics. These activities not only help students actively apply what they have learned but also reinforce the tool’s perceived usefulness. When students can clearly see how generative AI contributes to their academic performance, they are more likely to remain engaged over time. Educational institutions can support this by offering faculty training sessions focused on instructional design that incorporates AI meaningfully into coursework.
Building on the cognitive dimension of generative AI use, the analysis reveals that perceived intelligence of generative AI is a pivotal factor in how students assess its utility and the degree to which it meets their expectations. This aligns with Rafiq, et al. [42], who emphasize that users’ perceptions of a system’s intelligence significantly influence their acceptance and continued use. Moreover, Wang [47] corroborated the notion that perceived intelligence can enhance the perceived usefulness of a technology, thereby leading to a higher rate of its adoption and integration into daily tasks. In essence, the more intelligent generative AI is deemed by users, the more beneficial and satisfactory the technology becomes. This relationship is crucial in the educational technology sector, as it suggests that perceived intelligence can directly affect both the practical adoption of AI tools and user satisfaction. AI developers should prioritize refining generative AI’s ability to understand context, maintain coherent dialogue, and deliver accurate, relevant information. These improvements contribute to students’ perception of the AI as a reliable and intelligent assistant. For example, features that allow students to ask follow-up questions or receive explanations in different formats—such as bullet points, diagrams, or simplified summaries—could make the system feel more responsive and personalized. Institutions might also implement user dashboards that track student progress or generate tailored learning suggestions based on AI interactions, further emphasizing the system’s intelligent support role.
Turning to the affective and functional aspects of user interaction with AI, the finding that perceived usefulness significantly influences satisfaction confirms the idea that when students find generative AI to be useful—contributing to their academic productivity or task efficiency—they are more likely to report higher satisfaction with the system. This outcome demonstrates the central role of functional utility in shaping user experiences and suggests that enhancing the tangible benefits of AI language models can meaningfully improve student satisfaction. It also indicates that perceived usefulness remains a core driver of affective responses to technology in educational contexts. In addition, the observed relationship between perceived usefulness and continuance intention lends further empirical backing to the TAM framework, which posits perceived usefulness as a determinant of user acceptance and subsequent usage behavior [32]. Ashfaq et al. [53] and Nguyen et al. [16] previously highlighted that the practical benefits recognized by users significantly contribute to their satisfaction levels. Furthermore, the noted impact on continuance intention mirrors Thong et al. [82] and Yang and Lee [54], who found that the perceived benefits of a system play a substantial role in users’ decisions to persist with a technology. These insights suggest that the utility that students derive from generative AI directly correlates with their contentment and loyalty to the platform, indicating that perceived usefulness is a strong predictor of both current satisfaction and future usage.
In addition to the role of perceived usefulness, the impact of expectation confirmation was also evident. The confirmation of expectations was found to significantly influence perceived usefulness. This relationship implies that when students’ experiences with generative AI align with or exceed their initial expectations, they are more likely to perceive the tool as useful. This validates the idea that expectations play a foundational role in shaping cognitive evaluations of AI systems. In practical terms, it suggests that managing and meeting user expectations—through clear communication, proper onboarding, or tailored examples of use—can enhance perceived value and facilitate continued engagement. Furthermore, the empirical evidence showing confirmation’s impact on satisfaction is a significant validation of expectancy confirmation theory within the context of educational technology. The influence on satisfaction is echoed by Nguyen et al. [16] and supported by Oliver [55], who found satisfaction to be directly tied to the confirmation of expectations. Therefore, this study substantiates the idea that for university students using generative AI, the fulfillment of anticipated outcomes enhances both the perceived value of the tool and their overall contentment with its performance.
Satisfaction emerged as another pivotal factor influencing users’ ongoing engagement. The research findings corroborate the fundamental role of user satisfaction as a strong predictor of continuance intention [60,61,83]. Satisfaction, as an affective response to the use of technology, directly influences a user’s decision to keep using a system. The implication for generative AI is clear: the more satisfied university students are with the AI tool, the more inclined they will be to persist in its use. This relationship is vital for the development and improvement of such technologies, indicating that positive user experiences with generative AI can lead to sustained engagement and loyalty over time. To increase satisfaction, developers and educators should focus on creating a seamless and enjoyable user experience. This includes minimizing technical issues, ensuring accessibility across devices, and incorporating user-friendly interfaces. Instructors can contribute by setting realistic expectations and providing students with onboarding materials or orientation sessions that explain how to use generative AI effectively. Peer mentoring programs, where experienced students guide others in using AI tools for academic tasks, may also foster greater confidence and satisfaction. These efforts can help students feel more in control and supported, which enhances their overall experience with the tools. Regarding the findings related to perceived usefulness, confirmation, and satisfaction, the results provide robust support for their intertwined roles in driving continuance intention, consistent with both the TAM and ECM frameworks.
To further capitalize on the effects of perceived intelligence and satisfaction, practical strategies can be implemented. To enhance perceived intelligence and user satisfaction, AI tools could integrate real-time feedback mechanisms that allow users to rate the relevance and usefulness of generated content. Additionally, adaptive learning systems that tailor responses based on user preferences, prior interactions, or domain-specific language could further improve the perceived intelligence of generative AI. Providing users with brief explanations of how responses are generated or offering suggestions to refine prompts can also build trust and engagement. These mechanisms not only foster a sense of control and transparency but also enable developers to refine AI performance based on user input.
Social dynamics also play a central role in shaping continuance behavior. The results converge with social influence theory, which posits that individual behaviors are often influenced by the perceptions and actions of others, particularly within one’s social circle [72]. Tanribilir [67] also emphasized the significance of social influence in the context of technology continuance, suggesting that individuals tend to continue using a technology if they perceive that important others believe they should use it. This demonstrates the importance of social factors in the decision-making process related to the ongoing use of technologies like generative AI among university students. It highlights that students are likely to continue using generative AI if they perceive that their peers, educators, or influential figures within their networks view the technology as beneficial and endorse its use. Educators can leverage this by openly discussing their own use of AI tools in teaching and encouraging students to share successful use cases in class. Institutions may also facilitate AI-focused student communities, workshops, or forums where users exchange tips and collaborate on projects using generative AI. Such initiatives create a culture that normalizes and encourages the productive use of AI in academic life, emphasizing its perceived value and increasing the likelihood of sustained engagement.
Interestingly, not all hypothesized predictors showed significant effects. The finding that AI configuration does not significantly affect continuance intention suggests that students’ long-term use of generative AI is not strongly influenced by its design or presentation features, such as human-like attributes or interface elements that might evoke fear or discomfort. This result contrasts with prior studies that identified AI configuration as a potential barrier to adoption due to emotional responses like intimidation or unease [23,71]. One possible explanation is that university students, as digital natives, may be more accustomed to interacting with advanced technologies and thus less likely to be influenced by anthropomorphic or emotionally evocative design elements. Their continued engagement with generative AI may be driven more by practical benefits such as academic support, ease of access, or peer influence rather than by affective reactions to its configuration. This finding indicates that while emotional design may play a role in initial impressions, it may be less critical in shaping users’ long-term engagement with AI tools in educational contexts.
In contrast to prior studies that have predominantly focused on traditional AI-based systems or chatbots, this study provides a novel contribution by examining generative AI within an educational context through an integrated ECM–TAM–TPB framework. It extends existing models by incorporating perceived intelligence and AI configuration as new constructs to capture both cognitive trust and affective barriers. Furthermore, by focusing on university students’ post-adoption behaviors rather than initial adoption, this study shifts the lens toward sustained engagement with emerging AI technologies. This emphasis on continuance intention in the generative AI domain—particularly in education—represents a significant and timely addition to the literature. Table 5 summarizes a comparison between the current study and related prior research.
This study offers several actionable implications by directly linking theoretical constructs to educational practice. The positive effect of knowledge application on perceived usefulness and confirmation suggests that educators should design assignments that require students to actively apply generative AI for critical thinking, synthesis, or problem-solving. These tasks reinforce the utility of AI tools, strengthening satisfaction and continuance intention. The influence of perceived intelligence underscores the need for platform designers to develop systems that demonstrate contextual awareness, maintain coherent dialogue, and provide adaptive, relevant feedback—fostering trust and engagement. The role of confirmation and satisfaction in predicting continued use highlights the value of aligning AI performance with user expectations. Institutions can address this by offering clear onboarding materials and training that help users set realistic expectations and navigate AI tools effectively. The significance of social influence indicates that cultivating supportive peer and faculty communities—such as student-led AI clubs or classroom discussions about AI use—can further encourage adoption. Furthermore, although AI configuration was not a significant predictor, future design considerations should still accommodate user comfort to avoid negative affective responses. In addition to promoting the effective use of generative AI in education, it is essential to address accompanying ethical and institutional responsibilities. As AI tools become integrated into academic settings, concerns over academic integrity—such as unauthorized use, plagiarism, or over-reliance on AI-generated content—must be proactively managed. Together, these recommendations bridge theoretical insights with practical strategies, helping stakeholders foster sustained, meaningful engagement with generative AI in educational settings.
This study has several limitations that should be considered in future research. First, the sample consisted solely of university students, whose experience with AI language models may not reflect that of the broader population, particularly across occupations and age groups. To enhance generalizability, future investigations could assess the factors influencing the continued use of AI tools among diverse user groups, including professionals and different age brackets. Another limitation is the exclusion of potentially influential factors beyond the scope of this study. Future research should consider examining additional elements, such as individual personality traits, cultural influences, or the role of specific technological features, to yield a more comprehensive understanding of what drives long-term user engagement with AI language models. Moreover, future research could adopt longitudinal study designs to observe how users’ perceptions, satisfaction, and continuance intentions toward generative AI tools change over time, particularly as users gain experience, encounter updates, or shift usage contexts. Additionally, incorporating qualitative methods such as interviews or focus groups would offer richer insight into users’ motivations, challenges, and contextual influences. Future research may also explore how domain-specific applications (e.g., medical, legal, or creative writing) influence continuance intention, particularly when moderated by task complexity or learning objectives.

6. Conclusions

This study provides significant theoretical advancements by extending and refining established models of technology continuance in the context of generative AI. It does so by integrating key constructs from the ECM [11], the TAM [32], and the TPB [41]. Unlike prior studies that largely focused on the initial adoption of AI or general user attitudes toward chatbots [90,91,92,93], this research concentrates on the post-adoption phase—continuance intention—within an educational technology context. By applying and testing these theoretical foundations in the use of a generative AI tool, this study addresses a notable gap in the literature and offers new insights into what sustains engagement with AI beyond initial use.
One of the most important contributions of this study is the empirical validation of knowledge application and perceived intelligence as core antecedents of perceived usefulness and confirmation. While previous studies have identified perceived usefulness as a central construct in technology use [32,63], they have not sufficiently explored the cognitive or experiential factors that enhance it in AI-based tools. This research shows that users’ ability to apply knowledge through generative AI significantly boosts both their perception of its usefulness and their confirmation of expectations. This finding expands on the work of Alavi and Leidner [45] by highlighting knowledge application as a dynamic process that not only facilitates effective learning but also strengthens users’ confidence in a technology’s value. Similarly, the incorporation of perceived intelligence addresses a limitation in previous research, which rarely examined how users’ impressions of a system’s intelligence impact downstream outcomes such as satisfaction and continued use. The present findings demonstrate that perceived intelligence plays a critical role in shaping both cognitive and emotional evaluations of generative AI, supporting insights from Rafiq et al. [42] and extending their relevance to post-adoption behavior.
Another theoretical contribution lies in the reexamination of confirmation’s role within the ECM. The model asserts that confirmation enhances both perceived usefulness and satisfaction, and previous studies have consistently supported this relationship across various technologies [11,13]. Aligning with these findings, the present study reconfirms that confirmation significantly affects both perceived usefulness and satisfaction, thus validating the ECM once again within the specific context of generative AI.
Additionally, while the path from AI configuration to continuance intention was not statistically significant, its inclusion in the model highlights the conceptual relevance of emotional and affective responses to AI system design. This aligns with Wang and Wang [23], who emphasize the importance of considering user discomfort or emotional dissonance when evaluating human–AI interaction. Rather than presenting AI configuration as a proven inhibitor, this study positions it as a theoretically meaningful but empirically unsupported factor within this context. Its non-significance may reflect the digital fluency of university students, who are more accustomed to AI systems and less affected by anthropomorphic or uncanny design elements. Future research should further investigate emotional and psychological barriers to AI adoption in different populations or usage contexts, contributing to an expanded framework that includes cognitive, affective, and environmental influences on continuance intention.

Author Contributions

Conceptualization, Y.M.J. and H.J.; methodology, Y.M.J. and H.J.; validation, H.J.; formal analysis, H.J.; writing—original draft preparation, Y.M.J. and H.J.; writing—review and editing, Y.M.J. and H.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study complied with ethical research standards as outlined in the Declaration of Helsinki and was conducted with full respect for participants’ rights and privacy. The survey was anonymous and designed in accordance with the Personal Information Protection Act of Korea, ensuring that no personally identifiable or sensitive information—such as health status, religious beliefs, or political opinions—was collected. Participation was entirely voluntary and posed no more than minimal risk to individuals. Hanseo University does not have a specific Institutional Review Board (IRB) procedure applicable to this type of minimal-risk, anonymous survey research; therefore, no formal IRB review was required for this study.

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

The data used in this study are available from the corresponding authors.

Acknowledgments

This study was supported by the 2025 In-House Research Grant Program of Hanseo University.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I. Improving Language Understanding by Generative Pre-Training; OpenAI: San Francisco, CA, USA, 2018. [Google Scholar]
  2. Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; Liu, P.J. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 2020, 21, 140. [Google Scholar]
  3. Su, H.; Mokmin, N.A.M. Unveiling the Canvas: Sustainable Integration of AI in Visual Art Education. Sustainability 2024, 16, 7849. [Google Scholar] [CrossRef]
  4. Holgado-Apaza, L.A.; Ulloa-Gallardo, N.J.; Aragon-Navarrete, R.N.; Riva-Ruiz, R.; Odagawa-Aragon, N.K.; Castellon-Apaza, D.D.; Carpio-Vargas, E.E.; Villasante-Saravia, F.H.; Alvarez-Rozas, T.P.; Quispe-Layme, M. The Exploration of Predictors for Peruvian Teachers’ Life Satisfaction through an Ensemble of Feature Selection Methods and Machine Learning. Sustainability 2024, 16, 7532. [Google Scholar] [CrossRef]
  5. Huangfu, J.; Li, R.; Xu, J.; Pan, Y. Fostering Continuous Innovation in Creative Education: A Multi-Path Configurational Analysis of Continuous Collaboration with AIGC in Chinese ACG Educational Contexts. Sustainability 2025, 17, 144. [Google Scholar] [CrossRef]
  6. Badini, S.; Regondi, S.; Frontoni, E.; Pugliese, R. Assessing the capabilities of ChatGPT to improve additive manufacturing troubleshooting. Adv. Ind. Eng. Polym. Res. 2023, 6, 278–287. [Google Scholar] [CrossRef]
  7. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 2020, 33, 1877–1901. [Google Scholar]
  8. Nerdynav. 91 Important ChatGPT Statistics & User Numbers in April 2023 (GPT-4, Plugins Update). Available online: https://nerdynav.com/chatgpt-statistics/ (accessed on 23 April 2025).
  9. Krishnan, R.; Kandasamy, L. How ChatGPT Shapes Knowledge Acquisition and Career Trajectories in Higher Education: Decoding Students’ Perceptions to Achieve Quality Education. In Indigenous Empowerment Through Human-Machine Interactions; Emerald Publishing Limited: Leeds, UK, 2025; pp. 73–92. [Google Scholar]
  10. Naznin, K.; Al Mahmud, A.; Nguyen, M.T.; Chua, C. ChatGPT Integration in Higher Education for Personalized Learning, Academic Writing, and Coding Tasks: A Systematic Review. Computers 2025, 14, 53. [Google Scholar] [CrossRef]
  11. Bhattacherjee, A. Understanding information systems continuance: An expectation-confirmation model. MIS Q. 2001, 25, 351–370. [Google Scholar] [CrossRef]
  12. Jo, H. Determinants of continuance intention towards e-learning during COVID-19: An extended expectation-confirmation model. Asia Pac. J. Educ. 2022, 45, 479–499. [Google Scholar] [CrossRef]
  13. Tam, C.; Santos, D.; Oliveira, T. Exploring the influential factors of continuance intention to use mobile Apps: Extending the expectation confirmation model. Inf. Syst. Front. 2020, 22, 243–257. [Google Scholar] [CrossRef]
  14. Cheng, Y.-M. Extending the expectation-confirmation model with quality and flow to explore nurses’ continued blended e-learning intention. Inf. Technol. People 2014, 27, 230–258. [Google Scholar] [CrossRef]
  15. Al-Sharafi, M.A.; Al-Emran, M.; Iranmanesh, M.; Al-Qaysi, N.; Iahad, N.A.; Arpaci, I. Understanding the impact of knowledge management factors on the sustainable use of AI-based chatbots for educational purposes using a hybrid SEM-ANN approach. Interact. Learn. Environ. 2022, 31, 7491–7510. [Google Scholar] [CrossRef]
  16. Nguyen, D.M.; Chiu, Y.-T.H.; Le, H.D. Determinants of Continuance Intention towards Banks’ Chatbot Services in Vietnam: A Necessity for Sustainable Development. Sustainability 2021, 13, 7625. [Google Scholar] [CrossRef]
  17. Brill, T.M.; Munoz, L.; Miller, R.J. Siri, Alexa, and other digital assistants: A study of customer satisfaction with artificial intelligence applications. J. Mark. Manag. 2019, 35, 1401–1436. [Google Scholar] [CrossRef]
  18. Eren, B.A. Determinants of customer satisfaction in chatbot use: Evidence from a banking application in Turkey. Int. J. Bank Mark. 2021, 39, 294–311. [Google Scholar] [CrossRef]
  19. Kasneci, E.; Sessler, K.; Küchemann, S.; Bannert, M.; Dementieva, D.; Fischer, F.; Gasser, U.; Groh, G.; Günnemann, S.; Hüllermeier, E.; et al. ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Differ. 2023, 103, 102274. [Google Scholar] [CrossRef]
  20. van Dis, E.A.; Bollen, J.; Zuidema, W.; van Rooij, R.; Bockting, C.L. ChatGPT: Five priorities for research. Nature 2023, 614, 224–226. [Google Scholar] [CrossRef]
  21. Fauzi, F.; Tuhuteru, L.; Sampe, F.; Ausat, A.M.A.; Hatta, H.R. Analysing the Role of ChatGPT in Improving Student Productivity in Higher Education. J. Educ. 2023, 5, 14886–14891. [Google Scholar] [CrossRef]
  22. Jimenez, K. ChatGPT in the Classroom: Here’s What Teachers and Students Are Saying. Available online: https://www.usatoday.com/story/news/education/2023/03/01/what-teachers-students-saying-ai-chatgpt-use-classrooms/11340040002/ (accessed on 25 April 2025).
  23. Wang, Y.-Y.; Wang, Y.-S. Development and validation of an artificial intelligence anxiety scale: An initial application in predicting motivated learning behavior. Interact. Learn. Environ. 2022, 30, 619–634. [Google Scholar] [CrossRef]
  24. Nass, C.I.; Brave, S. Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship; MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
  25. Shum, H.-y.; He, X.-d.; Li, D. From Eliza to XiaoIce: Challenges and opportunities with social chatbots. Front. Inf. Technol. Electron. Eng. 2018, 19, 10–26. [Google Scholar] [CrossRef]
  26. Balakrishnan, J.; Dwivedi, Y.K. Conversational commerce: Entering the next stage of AI-powered digital assistants. Ann. Oper. Res. 2021, 333, 653–687. [Google Scholar] [CrossRef]
  27. Chen, J.-S.; Le, T.-T.-Y.; Florence, D. Usability and responsiveness of artificial intelligence chatbot on online customer experience in e-retailing. Int. J. Retail Distrib. Manag. 2021, 49, 1512–1531. [Google Scholar] [CrossRef]
  28. Chopra, K. Indian shopper motivation to use artificial intelligence. Int. J. Retail Distrib. Manag. 2019, 47, 331–347. [Google Scholar] [CrossRef]
  29. Lee, C.T.; Pan, L.-Y.; Hsieh, S.H. Artificial intelligent chatbots as brand promoters: A two-stage structural equation modeling-artificial neural network approach. Internet Res. 2022, 32, 1329–1356. [Google Scholar] [CrossRef]
  30. Lin, T.; Zhang, J.; Xiong, B. Effects of Technology Perceptions, Teacher Beliefs, and AI Literacy on AI Technology Adoption in Sustainable Mathematics Education. Sustainability 2025, 17, 3698. [Google Scholar] [CrossRef]
  31. Wu, R.; Gao, L.; Li, J.; Huang, Q.; Pan, Y. Key Factors Influencing Design Learners’ Behavioral Intention in Human-AI Collaboration Within the Educational Metaverse. Sustainability 2024, 16, 9942. [Google Scholar] [CrossRef]
  32. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  33. Tiwari, C.K.; Bhat, M.A.; Khan, S.T.; Subramaniam, R.; Khan, M.A.I. What drives students toward ChatGPT? An investigation of the factors influencing adoption and usage of ChatGPT. Interact. Technol. Smart Educ. 2024, 21, 333–355. [Google Scholar] [CrossRef]
  34. Shahsavar, Y.; Choudhury, A. User Intentions to Use ChatGPT for Self-Diagnosis and Health-Related Purposes: Cross-sectional Survey Study. JMIR Hum. Factors 2023, 10, e47564. [Google Scholar] [CrossRef]
  35. Choudhury, A.; Shamszare, H. Investigating the Impact of User Trust on the Adoption and Use of ChatGPT: Survey Analysis. J. Med. Internet Res. 2023, 25, e47184. [Google Scholar] [CrossRef]
  36. Ngo, T.T.A.; An, G.K.; Nguyen, P.T.; Tran, T.T. Unlocking educational potential: Exploring students’ satisfaction and sustainable engagement with ChatGPT using the ECM model. J. Inf. Technol. Educ. Res. 2024, 23, 21. [Google Scholar]
  37. Chen, H.-J. Verifying the link of innovativeness to the confirmation-expectation model of ChatGPT of students in learning. J. Inf. Commun. Ethics Soc. 2025, 23, 433–447. [Google Scholar] [CrossRef]
  38. Liu, G.; Ma, C. Measuring EFL learners’ use of ChatGPT in informal digital learning of English based on the technology acceptance model. Innov. Lang. Learn. Teach. 2024, 18, 125–138. [Google Scholar] [CrossRef]
  39. Lai, C.Y.; Cheung, K.Y.; Chan, C.S. Exploring the role of intrinsic motivation in ChatGPT adoption to support active learning: An extension of the technology acceptance model. Comput. Educ. Artif. Intell. 2023, 5, 100178. [Google Scholar] [CrossRef]
  40. Saif, N.; Khan, S.U.; Shaheen, I.; Alotaibi, F.A.; Alnfiai, M.M.; Arif, M. Chat-GPT; validating Technology Acceptance Model (TAM) in education sector via ubiquitous learning mechanism. Comput. Hum. Behav. 2024, 154, 108097. [Google Scholar] [CrossRef]
  41. Ajzen, I. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 1991, 50, 179–211. [Google Scholar] [CrossRef]
  42. Rafiq, F.; Dogra, N.; Adil, M.; Wu, J.-Z. Examining consumer’s intention to adopt AI-chatbots in tourism using partial least squares structural equation modeling method. Mathematics 2022, 10, 2190. [Google Scholar] [CrossRef]
  43. Nonaka, I.; Takeuchi, H. The Knowledge-Creating Company; Oxford University Press: New York, NY, USA, 1995; Volume 304. [Google Scholar]
  44. Wang, S.; Noe, R.A. Knowledge sharing: A review and directions for future research. Hum. Resour. Manag. Rev. 2010, 20, 115–131. [Google Scholar] [CrossRef]
  45. Alavi, M.; Leidner, D.E. Knowledge management and knowledge management systems: Conceptual foundations and research issues. MIS Q. 2001, 1, 107–136. [Google Scholar] [CrossRef]
  46. Mun, Y.Y.; Hwang, Y. Predicting the use of web-based information systems: Self-efficacy, enjoyment, learning goal orientation, and the technology acceptance model. Int. J. Hum.-Comput. Stud. 2003, 59, 431–449. [Google Scholar]
  47. Wang, Y.S. Assessing e-commerce systems success: A respecification and validation of the DeLone and McLean model of IS success. Inf. Syst. J. 2008, 18, 529–557. [Google Scholar] [CrossRef]
  48. Gourlay, S. Conceptualizing knowledge creation: A critique of Nonaka’s theory. J. Manag. Stud. 2006, 43, 1415–1436. [Google Scholar] [CrossRef]
  49. Moussawi, S.; Koufaris, M.; Benbunan-Fich, R. How perceptions of intelligence and anthropomorphism affect adoption of personal intelligent agents. Electron. Mark. 2021, 31, 343–364. [Google Scholar] [CrossRef]
  50. Liu, Y.; Li, H.; Carlsson, C. Factors driving the adoption of m-learning: An empirical study. Comput. Educ. 2010, 55, 1211–1219. [Google Scholar] [CrossRef]
  51. Venkatesh, V.; Davis, F.D. A theoretical extension of the technology acceptance model: Four longitudinal field studies. Manag. Sci. 2000, 46, 186–204. [Google Scholar] [CrossRef]
  52. Davis, F.D.; Bagozzi, R.P.; Warshaw, P.R. User acceptance of computer technology: A comparison of two theoretical models. Manag. Sci. 1989, 35, 982–1003. [Google Scholar] [CrossRef]
  53. Ashfaq, M.; Yun, J.; Yu, S.; Loureiro, S.M.C. I, Chatbot: Modeling the determinants of users’ satisfaction and continuance intention of AI-powered service agents. Telemat. Inform. 2020, 54, 101473. [Google Scholar] [CrossRef]
  54. Yang, H.; Lee, H. Understanding user behavior of virtual personal assistant devices. Inf. Syst. E-Bus. Manag. 2019, 17, 65–87. [Google Scholar] [CrossRef]
  55. Oliver, R.L. A cognitive model of the antecedents and consequences of satisfaction decisions. J. Mark. Res. 1980, 17, 460–469. [Google Scholar] [CrossRef]
  56. Tarhini, A.; Masa’deh, R.e.; Al-Busaidi, K.A.; Mohammed, A.B.; Maqableh, M. Factors influencing students’ adoption of e-learning: A structural equation modeling approach. J. Int. Educ. Bus. 2017, 10, 164–182. [Google Scholar] [CrossRef]
  57. McArthur, D.; Lewis, M.; Bishary, M. The roles of artificial intelligence in education: Current progress and future prospects. J. Educ. Technol. 2005, 1, 42–80. [Google Scholar] [CrossRef]
  58. Alalwan, A.A.; Dwivedi, Y.K.; Rana, N.P.; Algharabat, R. Examining factors influencing Jordanian customers’ intentions and adoption of internet banking: Extending UTAUT2 with risk. J. Retail. Consum. Serv. 2018, 40, 125–138. [Google Scholar] [CrossRef]
  59. Tarhini, A.; Hone, K.; Liu, X. A cross-cultural examination of the impact of social, organisational and individual factors on educational technology acceptance between British and Lebanese university students. Br. J. Educ. Technol. 2015, 46, 739–755. [Google Scholar] [CrossRef]
  60. Han, S.; Yang, H. Understanding adoption of intelligent personal assistants: A parasocial relationship perspective. Ind. Manag. Data Syst. 2018, 118, 618–636. [Google Scholar] [CrossRef]
  61. Cheng, Y.; Jiang, H. How do AI-driven chatbots impact user experience? Examining gratifications, perceived privacy risk, satisfaction, loyalty, and continued use. J. Broadcast. Electron. Media 2020, 64, 592–614. [Google Scholar] [CrossRef]
  62. Duong, C.D.; Nguyen, T.H.; Ngo, T.V.N.; Pham, T.T.P.; Vu, A.T.; Dang, N.S. Using generative artificial intelligence (ChatGPT) for travel purposes: Parasocial interaction and tourists’ continuance intention. Tour. Rev. 2025, 80, 813–827. [Google Scholar] [CrossRef]
  63. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  64. Elshaer, I.A.; AlNajdi, S.M.; Salem, M.A. Sustainable AI Solutions for Empowering Visually Impaired Students: The Role of Assistive Technologies in Academic Success. Sustainability 2025, 17, 5609. [Google Scholar] [CrossRef]
  65. Chan, G.; Cheung, C.; Kwong, T.; Limayem, M.; Zhu, L. Online consumer behavior: A review and agenda for future research. BLED 2003 Proc. 2003, 43. [Google Scholar]
  66. Nikou, S.A.; Economides, A.A. Mobile-based assessment: Investigating the factors that influence behavioral intention to use. Comput. Educ. 2017, 109, 56–73. [Google Scholar] [CrossRef]
  67. Tanribilir, R.N. Analysing antecedence of an intelligent voice assistant use intention and behaviour. F1000Research 2021, 10, 496. [Google Scholar] [CrossRef]
  68. Bali, S.; Edi, S.; Tsai-Ching, C.; Cheng-Yi, L.; Liu, M.-C. Social Influence, Personal Views, and Behavioral Intention in ChatGPT Adoption. J. Comput. Inf. Syst. 2024, 1–12. [Google Scholar] [CrossRef]
  69. Gnewuch, U.; Morana, S.; Maedche, A. Towards Designing Cooperative and Social Conversational Agents for Customer Service. In Proceedings of the 38th International Conference on Information Systems (ICIS) 2017, Seoul, Republic of Korea, 10–13 December 2017. [Google Scholar]
  70. Treiblmaier, H.; Putz, L.-M.; Lowry, P.B. Setting a definition, context, and theory-based research agenda for the gamification of non-gaming applications. AIS Trans. Hum.-Comput. Interact. (THCI) 2018, 10, 129–163. [Google Scholar] [CrossRef]
  71. Li, X.; Hess, T.J.; Valacich, J.S. Why do we trust new technology? A study of initial trust formation with organizational information systems. J. Strateg. Inf. Syst. 2008, 17, 39–71. [Google Scholar] [CrossRef]
  72. Venkatesh, V.; Thong, J.Y.L.; Xu, X. Consumer Acceptance and Use of Information Technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
  73. Chin, W.W.; Marcolin, B.L.; Newsted, P.R. A partial least squares latent variable modeling approach for measuring interaction effects: Results from a Monte Carlo simulation study and an electronic-mail emotion/adoption study. Inf. Syst. Res. 2003, 14, 189–217. [Google Scholar] [CrossRef]
  74. Hair, J.F.; Sarstedt, M.; Ringle, C.M.; Mena, J.A. An assessment of the use of partial least squares structural equation modeling in marketing research. J. Acad. Mark. Sci. 2012, 40, 414–433. [Google Scholar] [CrossRef]
  75. Falk, R.F.; Miller, N.B. A Primer for Soft Modeling; University of Akron Press: Akron, OH, USA, 1992. [Google Scholar]
  76. Kock, N. WarpPLS 5.0 User Manual; ScriptWarp Systems: Laredo, TX, USA, 2015. [Google Scholar]
  77. Gefen, D.; Straub, D.W.; Boudreau, M.C. Structural equation modeling and regression: Guidelines for research practice. Commun. AIS 2000, 4, 1–79. [Google Scholar] [CrossRef]
  78. Nunnally, J.C. Psychometric Theory, 2nd ed.; McGraw-Hill Book Company: New York, NY, USA, 1978. [Google Scholar]
  79. Hair, J.; Anderson, R.; Tatham, B.R. Multivariate Data Analysis, 6th ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2006. [Google Scholar]
  80. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement Error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  81. Henseler, J.; Hubona, G.; Ray, P.A. Using PLS path modeling in new technology research: Updated guidelines. Ind. Manag. Data Syst. 2016, 116, 2–20. [Google Scholar] [CrossRef]
  82. Thong, J.Y.; Hong, S.-J.; Tam, K.Y. The effects of post-adoption beliefs on the expectation-confirmation model for information technology continuance. Int. J. Hum.-Comput. Stud. 2006, 64, 799–810. [Google Scholar] [CrossRef]
  83. Limayem, M.; Hirt, S.G.; Cheung, C.M. How habit limits the predictive power of intention: The case of information systems continuance. MIS Q. 2007, 31, 705–737. [Google Scholar] [CrossRef]
  84. Romero-Rodríguez, J.-M.; Ramírez-Montoya, M.-S.; Buenestado-Fernández, M.; Lara-Lara, F. Use of ChatGPT at university as a tool for complex thinking: Students’ perceived usefulness. J. New Approaches Educ. Res. 2023, 12, 323–339. [Google Scholar] [CrossRef]
  85. Alshammari, S.H.; Babu, E. The mediating role of satisfaction in the relationship between perceived usefulness, perceived ease of use and students’ behavioural intention to use ChatGPT. Sci. Rep. 2025, 15, 7169. [Google Scholar] [CrossRef]
  86. Rahaman, M.S.; Ahsan, M.T.; Anjum, N.; Dana, L.P.; Salamzadeh, A.; Sarker, D.; Rahman, M.M. ChatGPT in Sustainable Business, Economics, and Entrepreneurial World: Perceived Usefulness, Drawbacks, and Future Research Agenda. J. Entrep. Bus. Econ. 2024, 12, 88–123. [Google Scholar]
  87. Ma, J.; Wang, P.; Li, B.; Wang, T.; Pang, X.S.; Wang, D. Exploring user adoption of ChatGPT: A technology acceptance model perspective. Int. J. Hum.-Comput. Interact. 2025, 41, 1431–1445. [Google Scholar] [CrossRef]
  88. Sallam, M.; Elsayed, W.; Al-Shorbagy, M.; Barakat, M.; El Khatib, S.; Ghach, W.; Alwan, N.; Hallit, S.; Malaeb, D. ChatGPT usage and attitudes are driven by perceptions of usefulness, ease of use, risks, and psycho-social impact: A study among university students in the UAE. Front. Educ. 2024, 9, 1414758. [Google Scholar] [CrossRef]
  89. Kim, M.K.; Jhee, S.Y.; Han, S.-L. The Impact of ChatGPT’s Quality Factors on Perceived Usefulness, Perceived Enjoyment, and Continuous Usage Intention Using the IS Success Model. Asia Mark. J. 2025, 26, 243–254. [Google Scholar] [CrossRef]
  90. Schillaci, C.E.; de Cosmo, L.M.; Piper, L.; Nicotra, M.; Guido, G. Anthropomorphic chatbots for future healthcare services: Effects of personality, gender, and roles on source credibility, user satisfaction, and intention to use. Technol. Forecast. Soc. Chang. 2024, 199, 123025. [Google Scholar] [CrossRef]
  91. Liu, W.; Jiang, M.; Li, W.; Mou, J. How does the anthropomorphism of AI chatbots facilitate users’ reuse intention in online health consultation services? The moderating role of disease severity. Technol. Forecast. Soc. Chang. 2024, 203, 123407. [Google Scholar] [CrossRef]
  92. Casheekar, A.; Lahiri, A.; Rath, K.; Prabhakar, K.S.; Srinivasan, K. A contemporary review on chatbots, AI-powered virtual conversational agents, ChatGPT: Applications, open challenges and future research directions. Comput. Sci. Rev. 2024, 52, 100632. [Google Scholar] [CrossRef]
  93. Zhu, Y.; Zhang, R.; Zou, Y.; Jin, D. Investigating customers’ responses to artificial intelligence chatbots in online travel agencies: The moderating role of product familiarity. J. Hosp. Tour. Technol. 2023, 14, 208–224. [Google Scholar] [CrossRef]
Figure 1. Analytical model.
Figure 2. Survey data collection procedure.
Figure 3. Distribution of participants by gender (Female: 59%; Male: 41%).
Figure 4. Distribution of participants by age group (20 or younger: 61%; 21: 11%; 22: 9%; 23 or older: 19%).
Figure 5. Distribution of participants by academic major (e.g., Arts and Physical Education: 33%; Humanities and Social Sciences: 32%; Engineering: 14%; others: smaller proportions).
Table 1. List of constructs and items.
Construct | Item | Description | Source
Knowledge Application | KAP1 | Generative AI provides me with instant access to various types of knowledge. | Al-Sharafi et al. [15]
| KAP2 | Generative AI allows me to integrate different types of knowledge. |
| KAP3 | Generative AI can help us better manage the course materials within the university. |
Perceived Intelligence | PIE1 | I feel that generative AI for learning is competent. | Rafiq et al. [42]
| PIE2 | I feel that generative AI for learning is knowledgeable. |
| PIE3 | I feel that generative AI for learning is intelligent. |
Perceived Usefulness | PUS1 | I find generative AI useful in my daily life. | Davis [32]
| PUS2 | Using generative AI helps me to accomplish things more quickly. |
| PUS3 | Using generative AI increases my productivity. |
Confirmation | CON1 | My experience with using generative AI is better than what I expected. | Bhattacherjee [11]
| CON2 | The service level provided by generative AI is better than I expected. |
| CON3 | Overall, most of my expectations from using generative AI are confirmed. |
Satisfaction | SAT1 | I am very satisfied with generative AI. | Bhattacherjee [11]; Nguyen et al. [16]
| SAT2 | Generative AI meets my expectations. |
| SAT3 | Generative AI meets my needs and requirements. |
Social Influence | SOI1 | People who influence me think that I should use generative AI. | Venkatesh et al. [72]
| SOI2 | People who are important to me think I should use generative AI. |
| SOI3 | Most people who are important to me understand that I use generative AI. |
AI Configuration | CFG1 | I find humanoid generative AI scary. | Wang and Wang [23]
| CFG2 | I find humanoid generative AI intimidating. |
| CFG3 | I don’t know why, but humanoid generative AI scares me. |
Continuance Intention | COI1 | I intend to continue using generative AI in the future. | Bhattacherjee [11]
| COI2 | I will always try to use generative AI in my daily life. |
| COI3 | I will strongly recommend others to use generative AI. |
Table 2. Reliability and convergent validity.
Construct | Item | Mean | St. Dev. | Factor Loading | Cronbach’s Alpha | CR (rho_c) | AVE
Knowledge Application | KAP1 | 5.688 | 1.250 | 0.888 | 0.875 | 0.923 | 0.800
| KAP2 | 5.645 | 1.294 | 0.923 | | |
| KAP3 | 5.450 | 1.441 | 0.871 | | |
Perceived Intelligence | PIE1 | 5.851 | 1.232 | 0.868 | 0.829 | 0.897 | 0.745
| PIE2 | 5.234 | 1.486 | 0.827 | | |
| PIE3 | 5.496 | 1.300 | 0.892 | | |
Perceived Usefulness | PUS1 | 5.819 | 1.232 | 0.827 | 0.835 | 0.901 | 0.753
| PUS2 | 6.046 | 1.171 | 0.901 | | |
| PUS3 | 5.720 | 1.300 | 0.873 | | |
Confirmation | CON1 | 5.582 | 1.322 | 0.931 | 0.915 | 0.947 | 0.855
| CON2 | 5.546 | 1.339 | 0.938 | | |
| CON3 | 5.489 | 1.255 | 0.905 | | |
Satisfaction | SAT1 | 5.571 | 1.256 | 0.933 | 0.925 | 0.952 | 0.869
| SAT2 | 5.404 | 1.310 | 0.947 | | |
| SAT3 | 5.383 | 1.322 | 0.916 | | |
Social Influence | SOI1 | 4.582 | 1.737 | 0.920 | 0.863 | 0.917 | 0.787
| SOI2 | 4.511 | 1.791 | 0.921 | | |
| SOI3 | 5.376 | 1.313 | 0.816 | | |
AI Configuration | CFG1 | 4.862 | 1.818 | 0.941 | 0.839 | 0.862 | 0.678
| CFG2 | 4.259 | 1.818 | 0.745 | | |
| CFG3 | 3.238 | 1.927 | 0.771 | | |
Continuance Intention | COI1 | 5.755 | 1.275 | 0.885 | 0.871 | 0.921 | 0.795
| COI2 | 5.074 | 1.652 | 0.879 | | |
| COI3 | 5.280 | 1.448 | 0.912 | | |
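The CR (rho_c) and AVE columns in Table 2 follow the standard formulas CR = (Σλ)² / [(Σλ)² + Σ(1 − λ²)] and AVE = Σλ² / n, where λ denotes the standardized factor loadings. As an illustrative check (not part of the original analysis), the reported Knowledge Application loadings reproduce the table’s values:

```python
# Illustrative check: recompute composite reliability (rho_c) and AVE
# from the reported factor loadings of the Knowledge Application construct.
loadings = [0.888, 0.923, 0.871]  # KAP1-KAP3, Table 2

sum_l = sum(loadings)                       # sum of loadings
sum_l2 = sum(l ** 2 for l in loadings)      # sum of squared loadings
error = sum(1 - l ** 2 for l in loadings)   # sum of item error variances

cr = sum_l ** 2 / (sum_l ** 2 + error)  # composite reliability (rho_c)
ave = sum_l2 / len(loadings)            # average variance extracted

print(round(cr, 3), round(ave, 3))  # 0.923 0.8, matching Table 2
```

The same calculation applied to any other construct’s loadings should recover its reported CR and AVE to rounding error.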
Table 3. Discriminant validity (Fornell–Larcker criterion).
Construct | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
1. Knowledge Application | 0.894 | | | | | | |
2. Perceived Intelligence | 0.681 | 0.863 | | | | | |
3. Perceived Usefulness | 0.715 | 0.748 | 0.868 | | | | |
4. Confirmation | 0.647 | 0.734 | 0.771 | 0.925 | | | |
5. Satisfaction | 0.657 | 0.746 | 0.722 | 0.862 | 0.932 | | |
6. Social Influence | 0.513 | 0.558 | 0.564 | 0.637 | 0.705 | 0.887 | |
7. AI Configuration | 0.142 | 0.107 | 0.171 | 0.125 | 0.142 | 0.216 | 0.824 |
8. Continuance Intention | 0.666 | 0.661 | 0.694 | 0.717 | 0.746 | 0.670 | 0.108 | 0.892
Note: Diagonal values are the square roots of the AVE.
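The Fornell–Larcker criterion requires each diagonal entry (the square root of a construct’s AVE) to exceed that construct’s correlations with all other constructs. A minimal sketch of that check against the Table 3 values (an illustrative verification, not the authors’ software output):

```python
# Lower-triangular matrix from Table 3; diagonal entries are sqrt(AVE).
names = ["KAP", "PIE", "PUS", "CON", "SAT", "SOI", "CFG", "COI"]
m = [
    [0.894],
    [0.681, 0.863],
    [0.715, 0.748, 0.868],
    [0.647, 0.734, 0.771, 0.925],
    [0.657, 0.746, 0.722, 0.862, 0.932],
    [0.513, 0.558, 0.564, 0.637, 0.705, 0.887],
    [0.142, 0.107, 0.171, 0.125, 0.142, 0.216, 0.824],
    [0.666, 0.661, 0.694, 0.717, 0.746, 0.670, 0.108, 0.892],
]

# Each sqrt(AVE) must exceed every correlation in its row and column.
for i, row in enumerate(m):
    diag = row[i]
    others = row[:i] + [m[j][i] for j in range(i + 1, len(m))]
    assert all(diag > r for r in others), names[i]
print("Fornell-Larcker criterion satisfied for all constructs")
```

The tightest margin is Satisfaction (0.932) versus its correlation with Confirmation (0.862), consistent with the strong confirmation–satisfaction path reported in Table 4.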
Table 4. Analysis of path coefficients.
H | Predictor | Outcome | β | t | p | Result
H1a | Knowledge Application | Perceived Usefulness | 0.275 | 4.854 | 0.000 | Supported
H1b | Knowledge Application | Confirmation | 0.274 | 4.380 | 0.000 | Supported
H2a | Perceived Intelligence | Perceived Usefulness | 0.271 | 4.729 | 0.000 | Supported
H2b | Perceived Intelligence | Confirmation | 0.547 | 10.955 | 0.000 | Supported
H3a | Perceived Usefulness | Satisfaction | 0.142 | 3.039 | 0.002 | Supported
H3b | Perceived Usefulness | Continuance Intention | 0.303 | 4.998 | 0.000 | Supported
H4a | Confirmation | Perceived Usefulness | 0.394 | 5.829 | 0.000 | Supported
H4b | Confirmation | Satisfaction | 0.752 | 17.689 | 0.000 | Supported
H5a | Satisfaction | Continuance Intention | 0.347 | 4.277 | 0.000 | Supported
H6 | Social Influence | Continuance Intention | 0.264 | 3.878 | 0.000 | Supported
H7 | AI Configuration | Continuance Intention | −0.050 | 0.989 | 0.323 | Not Supported
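The p-values in Table 4 are two-tailed; because PLS-SEM bootstrapping uses a large number of resamples, the t statistics can be referred to a standard normal distribution to good approximation. The following sketch (an illustrative recomputation under that assumption, not the authors’ exact bootstrap procedure) recovers the reported p-values for the non-significant path H7 and the weakest significant path H3a:

```python
import math

def two_tailed_p(t):
    """Two-tailed p-value for a t statistic, using the standard normal
    approximation appropriate for the large df of PLS bootstrapping."""
    # Normal CDF via the error function: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    phi = 0.5 * (1 + math.erf(abs(t) / math.sqrt(2)))
    return 2 * (1 - phi)

print(round(two_tailed_p(0.989), 3))  # 0.323, matching H7 in Table 4
print(round(two_tailed_p(3.039), 3))  # 0.002, matching H3a
```

This makes clear why H7 is not supported: a |t| below roughly 1.96 cannot reach p < 0.05 in a two-tailed test.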
Table 5. Comparison of the current study with related studies.
Study | Research Focus | Theoretical Framework | Methodology | Sample Characteristics | Key Findings (PU) | Implications
Current study | Post-adoption of generative AI in education | TAM + ECM + TPB | PLS-SEM, online survey | 282 Korean university students | PU significantly affects satisfaction and continuance intention | Suggests design and training to improve PU
[84] | Acceptance of ChatGPT | UTAUT2 | Online survey | 400 Spanish students | Experience strongly influences PU | Promote experience-based learning with ChatGPT
[85] | Role of satisfaction in ChatGPT use | TAM | AMOS SEM | 297 students | PU affects satisfaction and behavioral intention | Reinforce usefulness to sustain use
[86] | ChatGPT in business/entrepreneurship | Scoping review | Thematic synthesis | 40 studies (mixed domains) | PU helps decision-making | Need for reliability improvement
[87] | Adoption of ChatGPT | TAM | Survey, SEM | 784 Chinese users | PU directly influences intention | Improve PU and PEU for adoption
[88] | Usage and attitudes in the UAE | TAME-ChatGPT (based on TAM) | Cross-sectional e-survey | 608 UAE students | PU and PEU drive usage and attitudes | Address risks to maximize PU
[89] | IS Success Model and ChatGPT | IS Success Model | Survey | 225 experienced users | Quality factors → PU → usage intention | Improve design and functionality
Note: PU = perceived usefulness; PEU = perceived ease of use.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Jung, Y.M.; Jo, H. Understanding Continuance Intention of Generative AI in Education: An ECM-Based Study for Sustainable Learning Engagement. Sustainability 2025, 17, 6082. https://doi.org/10.3390/su17136082

