Article

Modeling Student Loyalty in the Age of Generative AI: A Structural Equation Analysis of ChatGPT’s Role in Higher Education

National Institute for Lifelong Education, Seoul 07800, Republic of Korea
Systems 2025, 13(10), 915; https://doi.org/10.3390/systems13100915
Submission received: 29 August 2025 / Revised: 15 October 2025 / Accepted: 15 October 2025 / Published: 17 October 2025

Abstract

Lately, there has been a notable surge in the use of AI-driven dialogue systems like ChatGPT-3.5 within the realm of education. Understanding the factors that are associated with student engagement in these digital platforms is crucial for maximizing their potential and long-term efficacy. This study aims to systematically identify the key drivers behind university students’ loyalty to ChatGPT. Data gathered from university students were analyzed using structural equation modeling. The findings indicate that novelty value is positively associated with both task attraction and hedonic value. Perceived intelligence shows significant associations with knowledge acquisition, task attraction, and hedonic value. Moreover, knowledge acquisition is positively related to task attraction and hedonic value, while creepiness is negatively related to them. Both task attraction and hedonic value demonstrate significant relationships with satisfaction and loyalty, with trust also positively associated with satisfaction. These insights provide a clearer understanding of what motivates university students to engage with AI conversational platforms like ChatGPT. This information is invaluable for stakeholders aiming to augment the adoption and effective use of such tools in educational contexts.

1. Introduction

In our contemporary digital society, artificial intelligence (AI) plays an increasingly important role, influencing areas such as communication, learning, and decision-making. ChatGPT, developed by OpenAI, has attracted attention for its ability to generate human-like dialogue, though its applications remain under continuous scrutiny [1]. ChatGPT has shown potential across domains such as education, customer service, and content generation, but evidence of its long-term effectiveness is still emerging [2]. University students, as digital natives, are among the early adopters of this cutting-edge technology, leveraging ChatGPT for various academic purposes [3]. Despite ChatGPT’s rising profile and widespread adoption, however, the factors that drive university students’ loyalty to and satisfaction with this AI conversational agent remain relatively unexplored. Understanding the underlying factors that contribute to students’ satisfaction and loyalty is critical in enhancing the user experience and fostering the technology’s long-term success. This research aims to bridge the existing knowledge gap by investigating the determinants of loyalty and satisfaction toward ChatGPT among university students.
ChatGPT, rooted in the GPT-4 design, represents an advancement in natural language processing, although debates remain regarding the extent of its educational benefits [4]. Its strength lies in generating dialogue that mirrors human interaction, grasping nuances, and sustaining open-ended conversations [5,6]. This sets it apart from simpler predecessors, which often struggled to deliver consistent replies. ChatGPT’s advanced language models enable it to offer precise, context-sensitive answers, cementing its role in domains ranging from academia to customer service and entertainment [7]. Its ability to process complex queries has led to growing interest in academic use, such as supporting students’ learning tasks [8]. However, recent analyses caution that the empirical evidence on AI in education may not be as robust as it appears, particularly due to limitations in outcome measures and overalignment between interventions and assessments [9]. The distinctiveness of ChatGPT, with its advanced linguistic capabilities, positions it as a noteworthy development in the AI landscape, though its novelty value in academic contexts requires critical examination. Given that university students are typically exposed to both traditional and cutting-edge technologies, their perception of and interaction with something as novel as ChatGPT could deviate from the norm. Furthermore, as students’ engagement with technology is often influenced by its novelty, the unique attributes of ChatGPT can be expected to significantly impact their adoption behaviors and overall learning experiences.
ChatGPT, with its advanced AI capabilities, demonstrates its intelligence through its ability to comprehend intricate questions, offer precise and pertinent answers, and sustain substantive interactions with its users [10]. For students in higher education, information quality is paramount, and ChatGPT’s perceived intelligence bears directly on their learning and overall academic experience [11]. The perceived intelligence of ChatGPT plays a crucial role in influencing students’ reliance on and engagement with the platform. As university students prioritize the quality and depth of information, their perception of ChatGPT’s intelligence directly shapes their trust in and utilization of it for academic purposes. Consequently, understanding this perceived intelligence is essential in gauging the tool’s efficacy and potential impact on students’ academic experiences and outcomes.
Central to tertiary education is the absorption of knowledge [12]. When students discern enriching insights from chatbots, they may deem it more beneficial and captivating [13]. With ChatGPT, students get immediate gateways to data, scholarly assets, and specialized expertise, enriching their educational pursuits [14,15]. Moreover, the process of learning through interaction with AI chatbots can be engaging and enjoyable for students, further enhancing their perception of the technology [16]. As students identify ChatGPT as a rich source of valuable insights, they are more inclined to embrace it as a vital educational tool, recognizing its potential to enhance and supplement their learning journeys.
AI has the potential to evoke a sense of creepiness when users interact with it [17]. The concept of creepiness in human-computer interaction (HCI) often arises from a user’s uneasiness or discomfort when interacting with technology that exhibits human-like qualities. Creepiness as a major factor can significantly impact user engagement and adoption of ChatGPT among university students. If students perceive ChatGPT as creepy, they may be less inclined to use the technology, ultimately affecting its effectiveness and potential benefits in an academic setting.
When examining ChatGPT usage by university students, it is essential to understand the roles of task attraction and hedonic value as antecedents of satisfaction and loyalty, for several reasons. First, these elements are regarded as key drivers of technology adoption and continued use, as reflected in models such as the Technology Acceptance Model (TAM) [18] and the Unified Theory of Acceptance and Use of Technology (UTAUT) [19]. These well-established frameworks hold that when individuals find a technology both useful and enjoyable, they are more inclined to use it, value it, and remain committed to it over time. Furthermore, university students, as ChatGPT users, seek tools that support scholarly tasks such as research, discussion, and learning. If they regard ChatGPT as an effective and engaging aid for their academic goals, their satisfaction with the tool naturally increases, fostering deeper loyalty. Past research confirms the positive links between perceived usefulness, enjoyment, satisfaction, and loyalty across different technology settings [20,21,22,23]. Therefore, this research focuses on usefulness and enjoyment as pivotal to understanding satisfaction and loyalty.
In one incident, a ChatGPT bug exposed users’ conversation histories [24]. One of the leading companies in consumer electronics prohibited its employees from using ChatGPT after discovering that data had leaked from the AI language model [25]. As these cases illustrate, trust is a fundamental component of user acceptance and adoption of any technology, particularly when it involves sensitive data or personal information [26]. University students may share private or sensitive information while interacting with ChatGPT, and their trust in the chatbot’s ability to handle and protect this data is vital to their satisfaction with the technology. The increasing prevalence of cyber threats and data breaches has raised concerns about the security of online platforms and applications [27]. Consequently, users are more cautious about the technologies they choose to engage with, especially in an academic context. Trust in information security can alleviate these concerns, leading to increased satisfaction and ultimately influencing users’ willingness to adopt and continue using ChatGPT.
This article addresses underexplored areas and contributes novel insights to the existing literature, emphasizing several facets of ChatGPT usage among university students. First, it broadens the prevailing understanding of chatbots by examining the role of perceived intelligence with respect to ChatGPT, an advanced AI-driven conversational agent developed for a wide range of purposes, including academic ones. While prior studies have examined the perceived intelligence of various AI systems [28,29,30], our investigation focuses specifically on how perceived intelligence relates to knowledge acquisition, perceived value, and user enjoyment in the context of ChatGPT. In doing so, we provide a holistic view of the factors that shape user satisfaction and loyalty toward AI conversational tools in academic settings. Second, our work enriches the academic discourse by incorporating the notion of creepiness when evaluating ChatGPT’s engagement with university students. This inclusion highlights a potential barrier to user adoption of and engagement with chatbots, a facet often overlooked in prior work. By examining the effect of creepiness on user enjoyment, we offer practical guidance for mitigating negative perceptions and improving the user experience of AI chatbots. Third, this article underscores the significance of knowledge acquisition as an antecedent of perceived usefulness and enjoyment when interacting with ChatGPT. This contribution extends the existing literature by illuminating the interplay between chatbot-facilitated knowledge acquisition and user perceptions, which in turn shape user satisfaction and loyalty. Therefore, while this study acknowledges ChatGPT’s potential for education, it also aligns with calls for a more balanced perspective that considers both benefits and risks, including possible de-skilling effects and overstated evidence [9].
Finally, our work fills a gap in the literature by foregrounding trust in information security as an antecedent of satisfaction with ChatGPT among university students. This dimension responds to escalating concerns about data protection and confidentiality in AI applications, illuminating the role of trust in data protection in shaping user satisfaction and the adoption of AI chatbots in academic settings.

2. Literature Review

2.1. Novelty, Intelligence, Knowledge, and Creepiness

The unique features and capabilities of ChatGPT distinguish it from other AI tools, leading to a surge in its user base. ChatGPT has been reported to reach a user base of 1.6 billion, making it one of the fastest-growing online services in history; it attracted a million users within a mere five days of its launch [31]. Its exceptional proficiency lies in surpassing traditional chatbots in intelligence. College students often utilize and acquire knowledge from this technology, but some may feel uneasy due to its remarkable intelligence and data-collecting capabilities.
Novelty value pertains to the distinct and innovative attributes of a technology that captivate and motivate users to delve deeper [32]. Prior research underscores its positive influence on attributes like perceived utility and user enjoyment [33,34]. Within AI chatbot arenas, the allure of novelty is pivotal in drawing university students to engage with platforms like ChatGPT.
The emphasis on perceived intelligence in determining evaluations of AI-driven systems is well-documented, touching upon facets like knowledge assimilation, perceived utility, and enjoyment [13,35,36]. Perceived intelligence encompasses users’ assessment of a technology’s cognitive prowess, including context comprehension and apt response generation [37,38]. Ashfaq et al. [39] posited a positive link between chatbot response quality and user involvement and satisfaction. Concurrently, Rapp et al. [40] found that user perceptions of chatbot intelligence greatly influence their inclination to interact with it and depend on its information. Hence, this article centers on perceived intelligence when explaining loyalty to ChatGPT.
Research has highlighted the positive impact of information acquisition on the valuation and enjoyment of technological interfaces [41,42,43]. The connection between knowledge acquisition through ChatGPT and task attraction among university students is supported by studies suggesting that increased understanding and mastery of a technology or service can enhance users’ perceptions of its utility [44,45]. Within AI chatbot spheres, knowledge acquisition is paramount to user interaction, playing a seminal role in university students’ academic achievements.
Creepiness is characterized by unsettling or eerie feelings that users might experience due to the human-like responses generated by AI chatbots [46]. Creepiness can stem from the growing concerns surrounding privacy, trust, and the uncanny nature of AI-based systems [47,48]. Users may find AI chatbots creepy due to their advanced natural language processing capabilities, which can sometimes result in an unsettlingly human-like conversation [46]. Additionally, the potential for AI chatbots to collect, store, and analyze personal data raises privacy concerns among users, further contributing to the perception of creepiness [49,50]. Some scholars have identified creepiness as a factor that may negatively impact users’ perceived enjoyment and willingness to engage with AI-based systems [51,52]. Rajaobelina et al. [51] found that creepiness negatively affected users’ trust in AI systems and accelerated the formation of negative emotions. Olivera-La Rosa et al. [46] concluded that creepiness was associated with lower user acceptance of chatbots. Addressing the creepiness factor is essential for AI chatbots like ChatGPT to ensure a seamless and enjoyable user experience.
Table 1 synthesizes prior research that informs this study, showing how novelty, intelligence, knowledge, and creepiness shape technology adoption and continuance. Studies on novelty [32,33,34] consistently highlight its role in enhancing utilitarian and hedonic evaluations, with novelty value boosting enjoyment and strengthening continuance intention across innovation, mobile gaming, and AI assistant contexts. Research on perceived intelligence [12] underscores its critical role in chatbot adoption, revealing that intelligence perceptions drive user engagement through the S-O-R framework. Investigations of knowledge factors [41,45] demonstrate that knowledge acquisition, application, and sharing significantly affect perceived usefulness and ease of use, thus reinforcing acceptance of mobile learning. Finally, studies addressing creepiness [51] reveal its detrimental effect, showing that creepiness erodes loyalty directly and indirectly by undermining trust and generating negative emotions, while usability mitigates these effects. Together, these works illustrate that novelty and intelligence function as positive cognitive and affective enablers, knowledge facilitates learning-based acceptance, and creepiness operates as a perceptual barrier that weakens long-term loyalty to AI-driven technologies.

2.2. Task Attraction and Hedonic Value

Task attraction refers to the allure or appeal an activity holds for users, influencing the extent to which they engage with a particular tool or system [53]. In the case of AI-driven models like ChatGPT, task attraction can be understood as the propensity of users to utilize these platforms for specific tasks because they find them effective, efficient, or simply enjoyable. Prior studies have identified a significant link between a system’s utility and the user’s inclination to embrace it [54,55,56]. In AI contexts, platforms that can efficiently cater to user queries, support decision-making, or facilitate problem-solving inherently exhibit higher task attraction [57,58,59,60]. Such systems are not merely tools; they evolve into integral components of the user’s workflow, ensuring consistent utilization [61,62,63].
Beyond utility, the hedonic value of a system, capturing its capacity to provide pleasure or enjoyment, is another crucial determinant of user behavior [64]. Hedonic value encapsulates the emotional and affective responses a user may experience while interacting with a platform [33,65]. In the context of AI, this translates to the feeling of novelty, fun, or intrigue derived from conversing with a seemingly sentient entity. Researchers emphasize that hedonic aspects often drive user satisfaction and loyalty [66,67,68]. AI systems with higher hedonic value ensure a richer user experience, translating to increased user engagement, prolonged usage durations, and higher user satisfaction [69].
Research has consistently demonstrated that both task attraction and hedonic value play pivotal roles in shaping user behavior, especially in AI settings [70,71,72,73,74]. In AI contexts like ChatGPT, the synergistic effect of task attraction and hedonic value is evident. Here, task attraction could be a student sourcing academic material or seeking clarifications on concepts, while hedonic value might arise from the sheer novelty of obtaining information from an AI, the humor it might exhibit, or the simulated human-like interaction. Both task attraction and hedonic value serve as fundamental pillars supporting the edifice of user satisfaction and loyalty. If an AI tool consistently meets (task attraction) or exceeds (through hedonic experiences) these expectations, satisfaction naturally ensues. Furthermore, loyalty, a long-term commitment to re-use a product or service, is strengthened when users not only find value in the tasks they accomplish through the AI but also enjoy the process of doing so. Given the intertwined nature of these constructs, it is crucial to consider task attraction and hedonic value in explaining satisfaction and loyalty of university students.
Table 2 summarizes key studies addressing task attraction and hedonic value of AI-powered technologies across contexts such as personal assistants, digital wallets, mobile apps, voice assistants, FinTech robo-advisors, and tourism chatbots. The reviewed studies collectively show that task attraction and hedonic value are central in explaining adoption and continuance behaviors of AI-powered technologies. For instance, Han and Yang [53] demonstrated that interpersonal attraction, including task attraction, directly affects the adoption of intelligent personal assistants, highlighting that users are motivated when the technology is perceived as beneficial for accomplishing tasks. In parallel, Akdim et al. [64] and Jo [33] emphasized the role of hedonic value, where perceived enjoyment significantly shaped continuance intention in both social mobile apps and AI assistants. Studies on chatbots [39,71] also confirmed that beyond utilitarian benefits, satisfaction and enjoyment are decisive drivers of repeated use. Meanwhile, research in FinTech and digital wallets [55,73] further validated that while usefulness and trust matter, hedonic value strengthens user engagement by making technology appealing beyond mere efficiency. Together, these findings suggest that sustainable engagement with AI in education or service contexts requires balancing task attraction (practical utility) with hedonic value (enjoyable experience), reinforcing their dual importance for long-term loyalty.

2.3. Trust

The growing traction of chatbots across sectors, including education, has spurred research on the significance of trust in data protection for chatbot interaction [75,76]. Confidence in data protection is pivotal for user behavior, since users engage more with chatbots when they sense their data confidentiality is intact [77]. A chatbot’s perceived safety is a primary driver of such trust [78]. According to Hamad and Yeferny [79] and Wang et al. [80], users exhibit greater trust in a chatbot when they deem the platform secure, bolstering their satisfaction and loyalty. Effective security measures, clearly communicated to users, can strengthen trust in data protection, fostering satisfaction and loyalty [51,81,82]. The literature thus emphasizes the significance of trust in data security for user satisfaction and loyalty in chatbot engagement. Given the sensitive information AI chatbots may handle and the AI-driven feedback they provide, trust becomes paramount for ChatGPT, especially to engender student satisfaction and sustained involvement.
Table 3 summarizes the key studies on trust across various contexts. Across the reviewed studies, trust consistently emerges as a pivotal determinant shaping user engagement, satisfaction, and loyalty toward AI-based technologies such as chatbots, digital assistants, and voice interfaces. Nguyen et al. [77] demonstrated that trust significantly drives continuance intention in voice-user interfaces, with gender differences influencing how risk and trust are perceived. Extending this, Wang et al. [80] reconceptualized trust as a multidimensional construct encompassing functionality, reliability, and data protection, showing that higher trust enables innovative chatbot use. In the context of digital assistants, Brill et al. [81] emphasized that satisfaction with AI applications stems from expectation and confirmation processes, both of which strengthen trust in these systems. Meanwhile, Hasan et al. [82] highlighted the dual role of trust and risk in shaping brand loyalty, where trust and novelty reinforce positive outcomes but perceived risk diminishes loyalty. Similarly, Rajaobelina et al. [51] revealed that creepiness erodes loyalty directly and indirectly by undermining trust, while usability enhances it. Even in the educational domain, Okonkwo and Ade-Ibijola [75] noted that future adoption of chatbots depends not only on their functionality but also on cultivating user trust through personalized and reliable services. Taken together, these findings indicate that trust operates as both a direct driver and a mediating mechanism: it enhances satisfaction, reduces perceptions of risk or creepiness, and ultimately strengthens loyalty and sustained use of AI technologies.

3. Research Model

This study’s framework explores the mechanisms underlying university students’ adoption of ChatGPT, focusing on novelty value, perceived intelligence, knowledge acquisition, creepiness, task attraction, hedonic value, trust, satisfaction, and loyalty. In specifying the model, only theoretically justified paths were included, while other potential connections (e.g., direct effects from perceived intelligence to trust) were excluded to maintain parsimony and theoretical clarity. Prior work on technology acceptance highlights that perceived intelligence primarily operates through task- and affect-related appraisals rather than exerting a direct effect on trust [13,80]. Thus, we modeled its influence as mediated through task attraction and hedonic value. Similarly, full mediation was assumed in moving from early-stage perceptions (e.g., novelty, perceived intelligence, knowledge acquisition, creepiness) to distal outcomes (satisfaction and loyalty). This specification is consistent with established frameworks such as TAM and UTAUT [18,19], which emphasize the sequential process of cognitive appraisals shaping affective responses, which then drive behavioral outcomes.
While some studies include direct paths for exploratory purposes, our model follows a confirmatory approach aligned with theory-driven SEM practices [83]. The full mediation assumption allows us to test the role of task attraction, hedonic value, and trust as central mechanisms linking perceptual antecedents to satisfaction and loyalty. This approach is also supported by evidence in human–AI interaction studies, which show that user trust and enjoyment often act as intervening mechanisms rather than outcomes directly shaped by initial perceptions [33,51,82]. By clearly specifying these constraints, our model balances explanatory power and theoretical consistency, while avoiding unnecessary complexity. Figure 1 illustrates the research model.
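The full-mediation structure described above maps directly onto regression lines in lavaan-style model syntax, as accepted by SEM tools such as R’s lavaan or Python’s semopy. The sketch below is illustrative only: the construct names are hypothetical placeholders for the study’s latent variables, and the measurement equations linking each construct to its questionnaire items are omitted.

```python
# Illustrative sketch of the structural paths in the research model (H1a-H7b),
# written in lavaan-style syntax. Construct names are hypothetical placeholders;
# a complete specification would also include measurement equations
# (e.g. "novelty =~ nov1 + nov2 + nov3") tying each latent construct to its items.
STRUCTURAL_MODEL = """
# Proximal appraisals regressed on early-stage perceptions
knowledge ~ intelligence                                   # H2a
task_attraction ~ novelty + intelligence + knowledge       # H1a, H2b, H3a
hedonic ~ novelty + intelligence + knowledge + creepiness  # H1b, H2c, H3b, H4a
trust ~ creepiness                                         # H4b

# Distal outcomes regressed on proximal appraisals (full mediation)
satisfaction ~ task_attraction + hedonic + trust           # H5a, H6a, H7a
loyalty ~ task_attraction + hedonic + trust                # H5b, H6b, H7b
"""

# With semopy, estimation could then proceed along these lines:
#   from semopy import Model
#   model = Model(STRUCTURAL_MODEL)   # plus the measurement equations
#   model.fit(survey_df)              # survey_df: one column per indicator
#   print(model.inspect())            # path estimates and p-values
```

Note that the commented semopy calls assume that library’s standard `Model`/`fit`/`inspect` interface; the substantive point is simply that each hypothesis corresponds to one directed path, and that no direct paths run from the early-stage perceptions to satisfaction or loyalty, reflecting the full-mediation assumption.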

3.1. Novelty Value

Novelty value typically refers to the unique and innovative attributes of a technology that capture users’ attention and pique their interest in further exploration [84]. In the context of technology adoption, the concept of novelty has been linked to both the functionality and the pleasure derived from a technological tool [33,85]. Specifically, when users encounter a novel technological feature, they are often motivated to integrate it into their tasks due to its distinctive attributes, leading to task attraction [86,87]. Simultaneously, the fresh and unparalleled experiences provided by novel technologies can evoke positive emotions and feelings of joy, associating with the hedonic value [88,89,90]. Considering ChatGPT’s innovative characteristics in the realm of AI chatbots, it is plausible that the novelty value of ChatGPT is positively associated with both task attraction and hedonic value among university students. Thus, this study suggests the following hypotheses.
H1a. 
Novelty value of ChatGPT is positively associated with task attraction.
H1b. 
Novelty value of ChatGPT is positively associated with hedonic value.

3.2. Perceived Intelligence

Perceived intelligence pertains to users’ assessment of a technology’s cognitive capabilities, such as comprehending context and producing coherent responses [13]. In the domain of AI-based systems, perceived intelligence has been closely tied to user evaluations of the system’s efficacy in promoting learning and knowledge acquisition. For instance, when users perceive high levels of intelligence in an AI tool, they are more likely to view it as a valuable resource for gaining new knowledge [35,36]. Moreover, an AI system’s perceived intelligence can also enhance its appeal for task-oriented functions [91]. A tool perceived as intelligent is likely to be seen as capable of assisting users in achieving their objectives efficiently, leading to greater task attraction [92]. Simultaneously, the perception of intelligence in a technological system can elevate the user’s sense of enjoyment [58]. Engaging with an intelligent system can evoke feelings of amazement and pleasure, contributing to its hedonic value [33]. Given these associations, it can be posited that the perceived intelligence of ChatGPT would positively influence knowledge acquisition, task attraction, and hedonic value. Thus, this study suggests the following hypotheses.
H2a. 
Perceived intelligence of ChatGPT is positively associated with knowledge acquisition.
H2b. 
Perceived intelligence of ChatGPT is positively associated with task attraction.
H2c. 
Perceived intelligence of ChatGPT is positively associated with hedonic value.

3.3. Knowledge Acquisition

Knowledge acquisition pertains to the process of acquiring, updating, and refining information and skills [93]. In an educational setting, when students successfully acquire knowledge through a technological tool, their perception of the tool’s value in aiding tasks can heighten. For instance, when university students gain insights and learn through a platform like ChatGPT, they are likely to find the platform increasingly attractive for academic and related tasks, validating the assertion that effective knowledge acquisition can boost task attraction [94,95]. Moreover, the process of acquiring knowledge can be inherently rewarding and pleasurable for learners [96]. Engaging with a tool that facilitates learning can offer a sense of achievement and satisfaction [97]. When students perceive a system as a significant knowledge source, they may tend to derive enjoyment from their interactions with it, thereby elevating the hedonic value they attribute to the system. Drawing on these insights, it is plausible to suggest that university students’ knowledge acquisition via ChatGPT can enhance both their task attraction to the platform and their hedonic enjoyment of it. Thus, this study suggests the following hypotheses.
H3a. 
Knowledge acquisition via ChatGPT is positively associated with task attraction.
H3b. 
Knowledge acquisition via ChatGPT is positively associated with hedonic value.

3.4. Creepiness

Creepiness involves the unsettling feeling or apprehension that users experience when interacting with technology [46]. Such feelings can significantly impact users’ overall pleasure and joy derived from a platform. In particular, an eerie or uncanny interaction with technology can detract from the hedonic value users attribute to it, as they might associate the platform with discomfort rather than enjoyment [98,99]. Furthermore, trust is an essential element in the user-technology relationship, fostering prolonged engagement and commitment [100,101]. Trust, in this context, is based on the belief that the platform will function in a predictable and beneficial manner [77]. Experiencing a sense of unease or apprehension can erode this trust. If users find an interaction with technology, like a chatbot, to be creepy, they may doubt its intentions, reliability, and security, leading to diminished trust in the platform [51]. In light of these insights, it is conceivable that feelings of creepiness in relation to a technological tool can decrease both its perceived hedonic value and the trust users place in it. Thus, this study suggests the following hypotheses.
H4a. 
Creepiness is negatively associated with hedonic value.
H4b. 
Creepiness is negatively associated with trust.

3.5. Task Attraction

Task attraction refers to the perceived value of a technology in assisting users to accomplish specific tasks efficiently [53]. In the realm of technology adoption, when a tool like ChatGPT is seen as instrumental in performing tasks, users are more likely to derive satisfaction from it [102,103]. Satisfaction, understood as the fulfillment of one’s expectations or needs, often stems from the perceived efficiency and utility of a tool in task accomplishment [104]. Moreover, when users perceive a tool as beneficial for their tasks, they are not just satisfied; they also develop a sense of loyalty to it [105,106]. Loyalty, in this sense, indicates a user’s preference, repeated use, and recommendation of the tool to others, stemming from its task-related benefits [105]. Tools that provide tangible benefits in task accomplishment cement their position as indispensable assets, fostering deeper loyalty among users. Given that task attraction fundamentally influences both satisfaction and loyalty by bridging the gap between user needs and tool capabilities, this study suggests the following hypotheses.
H5a. 
Task attraction derived from ChatGPT is positively associated with satisfaction.
H5b. 
Task attraction derived from ChatGPT is positively associated with loyalty.

3.6. Hedonic Value

Hedonic value pertains to the intrinsic pleasure and enjoyment users derive from their interaction with a technology, regardless of any tangible benefits it might offer [107]. This emotional and subjective assessment stems from the enjoyable experiences users associate with the use of a tool like ChatGPT [108]. Researchers underscore that the gratification and pleasure derived from a technology have a significant influence on a user’s satisfaction with it [109,110,111]. Furthermore, when users gain hedonic value from a platform, it not only enhances their satisfaction but also fortifies their loyalty to it [112,113]. Technologies that manage to strike a chord by offering hedonic value often create a lasting bond with the user, paving the way for sustained loyalty. Recognizing the inherent connection between hedonic value, satisfaction, and loyalty, this study suggests the following hypotheses.
H6a. 
Hedonic value derived from ChatGPT is positively associated with satisfaction.
H6b. 
Hedonic value derived from ChatGPT is positively associated with loyalty.

3.7. Trust

Trust in information security refers to the assurance users feel regarding the protection of their personal and confidential data against unauthorized access or misuse [114]. The literature on AI has reinforced the idea that trust profoundly impacts user satisfaction [101,115,116,117,118]. When users are confident about a platform’s information security measures, they are more inclined to use it frequently [119]. Trust serves as a cornerstone in cultivating loyalty towards AI chatbots [51]. Given that trust acts as a linchpin in both the satisfaction and loyalty users feel toward platforms like ChatGPT, this study suggests the following hypotheses.
H7a. 
Trust is positively associated with satisfaction.
H7b. 
Trust is positively associated with loyalty.

3.8. Satisfaction

Satisfaction refers to the degree to which users’ expectations are met or exceeded when engaging with a technology [81,120]. In technology acceptance research, satisfaction has consistently been shown to play a critical role in shaping user loyalty [116,121]. Satisfied users are more likely to continue using the technology and recommend it to others [122,123]. In the context of AI-based systems, positive user experiences can reinforce attachment and lead to sustained usage [70,124,125]. Therefore, examining the association between satisfaction and loyalty is important for understanding the factors that contribute to long-term adoption. Based on this, the current study posits that satisfaction is likely to drive loyalty toward ChatGPT among university students.
H8. 
Satisfaction is positively associated with loyalty.

3.9. Gender and Age

Both gender and age have been found to affect users’ attitudes, perceptions, and behaviors when interacting with technology [126,127,128]. In terms of gender, research has shown that males and females may differ in their preferences, perceptions, and usage patterns concerning technology [129,130]. Age, on the other hand, has been found to impact technology adoption and usage in several ways [131]. Incorporating gender and age as control variables, this study seeks to mitigate the influence of these demographic elements on the core relationships examined, thus strengthening the integrity and credibility of the results.

4. Research Methodology

4.1. Development of Measurement Tools

To ensure construct reliability and validity, the measurement tools in this study were derived from scales previously validated in scholarly works. Multiple items were utilized to represent the different facets of each construct comprehensively. A seven-point Likert scale, where 1 represents ‘strongly disagree’ and 7 signifies ‘strongly agree,’ was applied across all items to gauge respondents’ attitudes.
Table A1 in the Appendix A provides a comprehensive list of constructs and their corresponding items. The construct novelty value is sourced from [82] and represents items that convey the uniqueness and educational value of using ChatGPT. The perceived intelligence construct, with references to [13], comprises items that highlight the competency and intelligence of ChatGPT for learning. Knowledge acquisition, credited to [28], focuses on ChatGPT’s capability to facilitate new knowledge generation and acquisition. The construct of creepiness, rooted in the work of [51], encompasses items detailing feelings of unease or threat when using ChatGPT. Task attraction references [53] and underscores ChatGPT’s usefulness and efficiency in aiding task completion. The hedonic value construct, as attributed to [107], centers on the enjoyment and positive feelings users derive from ChatGPT. Trust, referencing [77], encompasses items that express users’ confidence in ChatGPT’s data security measures. The satisfaction construct, based on [132], delves into users’ satisfaction levels with ChatGPT’s performance. Lastly, the loyalty construct, sourced from [105], covers items illustrating users’ preference and commitment to continue using ChatGPT in comparison to other chatbots.
Prior to the primary data gathering phase, an initial pilot survey was carried out with a select group of participants (N = 20) to evaluate the clarity and understandability of the questionnaire items. Feedback from this preliminary study led to slight modifications in the phrasing of certain items to enhance their clarity.

4.2. Subject and Data Collection

This study employed a quantitative research approach to examine the relationships between trust in information security, satisfaction, and loyalty in the context of chatbots among university students. The target population comprised university students across various disciplines who had used ChatGPT at least once in the past three months. Purposive sampling was employed because it allows for the deliberate selection of particular participants who are well-informed or have particular experiences pertinent to the research questions [133]. This strategy is particularly valuable when one wants to gain deep insights from a specific subgroup of a population, ensuring that the data collected is both rich and contextually relevant [134]. An online survey was developed and distributed through various university-related platforms to reach the target audience. These platforms included class communities and online forums. In addition, the survey link was shared with university student clubs and organizations to maximize the response rate. A short description of the study’s purpose and assurance of confidentiality was provided in the survey invitation to encourage participation. Informed consent was obtained from all participants. The data collection period lasted for four weeks. A total of 253 responses were received, out of which 242 were deemed valid and complete for the final analysis. The final sample consisted of 106 female and 142 male university students. Table 4 shows the demographic information of the sample.

4.3. Statistical Analyses

Partial Least Squares Structural Equation Modeling (PLS-SEM) was employed as the primary analytical technique to test the hypothesized relationships among constructs. The choice of PLS-SEM over covariance-based SEM (CB-SEM) was guided by both theoretical and methodological considerations. PLS-SEM is particularly well-suited for exploratory and prediction-oriented research that focuses on theory development rather than theory confirmation [135]. Given that this study integrates new constructs such as creepiness and task attraction into the existing technology adoption framework, a variance-based approach was preferred to assess complex interrelationships among latent constructs.
From a methodological standpoint, PLS-SEM is more robust in handling non-normal data distributions and suitable for studies with moderate sample sizes (n < 300) [136,137]. It also allows for simultaneous estimation of both reflective and formative constructs, making it ideal for behavioral and technology adoption research [138]. Furthermore, PLS-SEM enables the assessment of model fit indices, which align with the predictive and practical objectives of this study.
Model estimation was performed using SmartPLS 4.0, following a two-step analytical procedure: (1) evaluation of the measurement model to verify construct reliability, convergent validity, and discriminant validity; and (2) evaluation of the structural model to examine path coefficients and R2 values. Bootstrapping with 5000 resamples was applied to assess the significance of path coefficients. Model fit indices were also reported to provide a holistic understanding of the model’s adequacy.
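To make the bootstrapping step concrete, the following minimal Python sketch illustrates how a 5000-resample percentile bootstrap can be used to judge the significance of a single path coefficient. The synthetic data, the simple one-predictor path, and all variable names are illustrative assumptions, not the study’s dataset or SmartPLS output.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative data: a single "path" from X to Y (synthetic, not study data)
n = 242                      # matches the study's valid sample size
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=1.0, size=n)

def path_coefficient(x, y):
    """OLS slope of y on x (a stand-in for a structural path estimate)."""
    return np.polyfit(x, y, 1)[0]

# Percentile bootstrap with 5000 resamples, as in the study
B = 5000
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)   # resample cases with replacement
    boot[b] = path_coefficient(x[idx], y[idx])

ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
significant = not (ci_low <= 0.0 <= ci_high)  # "significant" if CI excludes 0
```

A path is deemed significant when its bootstrap confidence interval excludes zero; SmartPLS applies the same logic simultaneously to all paths in the structural model.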
In line with open science practices, the SmartPLS project file (.splsm) and associated analysis syntax can be made available upon request for replication or further scholarly use, in compliance with institutional data-sharing policies.

5. Results

5.1. Common Method Bias (CMB)

The study proactively addressed the concern of CMB as per Podsakoff, MacKenzie [139], since the data for all variables were collected through a single survey. A preliminary exploration using Harman’s single-factor test revealed three distinct factors, with the principal factor accounting for 41.451% of the variance, which does not cross the threshold of concern. Furthermore, the study examined variance inflation factors (VIFs) to gauge CMB, adhering to Kock’s [140] criterion that VIFs above 3.3 may indicate potential bias issues. The VIFs calculated for perceived intelligence and knowledge acquisition stood at 1.00, and for knowledge acquisition and hedonic value at 2.056, all well below the stipulated 3.3 threshold. This suggests a minimal likelihood of CMB influencing the study’s findings.
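For readers wishing to replicate the Harman’s single-factor check, the sketch below (using synthetic two-factor item data, not the study’s responses) computes the share of total variance captured by the first unrotated factor of the item correlation matrix; values below 50% are conventionally taken to indicate that CMB is not a dominant concern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic item responses: two latent factors, six items (illustrative only)
n = 242
f1 = rng.standard_normal(n)
f2 = rng.standard_normal(n)
items = np.column_stack([
    f1 + 0.6 * rng.standard_normal(n),   # items loading on factor 1
    f1 + 0.6 * rng.standard_normal(n),
    f1 + 0.6 * rng.standard_normal(n),
    f2 + 0.6 * rng.standard_normal(n),   # items loading on factor 2
    f2 + 0.6 * rng.standard_normal(n),
    f2 + 0.6 * rng.standard_normal(n),
])

# Harman's single-factor check: share of total variance captured by the
# first unrotated component of the item correlation matrix
corr = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
first_factor_share = eigvals[0] / eigvals.sum()   # compare against 0.50
```

With two genuinely distinct factors, as here, the first factor falls well short of half the total variance, mirroring the pattern the study reports.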

5.2. Measurement Model

The assessment of the measurement model for this study involved rigorous testing for both reliability and validity (Table 5). To ascertain internal consistency reliability, the analysis employed Cronbach’s alpha and composite reliability (CR) indices, with the benchmark for acceptability set at a value greater than 0.7 as per [141], ensuring that the constructs are consistently measured. For convergent validity, which measures the extent to which items of a construct are in agreement, the average variance extracted (AVE) was calculated. AVE values exceeding 0.5 were considered acceptable, indicating a sufficient level of variance captured by the construct from its indicators [142]. The outcomes of these assessments were systematically tabulated, providing a detailed insight into the factor analysis and reliability statistics.
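The CR and AVE benchmarks above follow directly from standardized indicator loadings. The short sketch below, using hypothetical loadings for a four-item construct (not values from this study), shows the standard computations under the usual assumption of uncorrelated measurement errors.

```python
# Composite reliability (CR) and average variance extracted (AVE) from
# standardized loadings; the loadings are hypothetical, for illustration only
loadings = [0.82, 0.78, 0.85, 0.80]

# CR: squared sum of loadings over (squared sum + summed error variances)
cr = sum(loadings) ** 2 / (sum(loadings) ** 2 + sum(1 - l**2 for l in loadings))

# AVE: mean of the squared loadings
ave = sum(l**2 for l in loadings) / len(loadings)
```

Here CR ≈ 0.89 (> 0.7) and AVE ≈ 0.66 (> 0.5), so a construct with these loadings would satisfy both benchmarks.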
The study evaluated discriminant validity using the Fornell-Larcker criterion and the heterotrait-monotrait (HTMT) ratio, a newer and more stringent criterion. HTMT values should be below 0.85 to confirm discriminant validity [143]. Table 6 reports the correlation matrix and provides a discriminant validity assessment for the constructs under investigation. On the diagonal, the square roots of the AVEs are presented; being higher than the inter-construct correlations (off-diagonal values), they attest to discriminant validity. This means that each construct is more closely related to its own measures than to those of other constructs, ensuring that the constructs are distinct and measure different phenomena. The off-diagonal entries of the matrix indicate the correlations among different constructs. These values are crucial for understanding the relationships and potential overlaps between constructs, but for discriminant validity, they should not surpass the diagonal AVE square roots, ensuring that each construct is unique and captures a phenomenon not measured by others in the model.
Table 7 presents the HTMT matrix of the constructs involved in the study. The HTMT is a ratio of the between-construct correlations to the within-construct correlations and serves as a measure of discriminant validity [144]. Values closer to 1 indicate a lack of discriminant validity, while values significantly lower than 1 indicate satisfactory discriminant validity. The values in the table reveal the discriminant validity of the constructs, as all HTMT values are below the threshold of 0.85 or 0.90, which are common benchmarks in the literature [143].
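The HTMT ratio can be computed directly from the item correlation matrix: the mean correlation between items of different constructs is divided by the geometric mean of the average within-construct correlations. The sketch below uses a hypothetical two-construct, six-item example; the correlation values are assumptions for illustration, not the study’s data.

```python
import numpy as np

# Hypothetical item correlation matrix: construct A (items 1-3) and
# construct B (items 4-6)
R = np.array([
    [1.00, 0.70, 0.68, 0.30, 0.28, 0.32],
    [0.70, 1.00, 0.72, 0.29, 0.31, 0.30],
    [0.68, 0.72, 1.00, 0.33, 0.27, 0.29],
    [0.30, 0.29, 0.33, 1.00, 0.66, 0.69],
    [0.28, 0.31, 0.27, 0.66, 1.00, 0.71],
    [0.32, 0.30, 0.29, 0.69, 0.71, 1.00],
])
a, b = slice(0, 3), slice(3, 6)

def mean_offdiag(block):
    """Average of the off-diagonal entries of a square correlation block."""
    m = block.shape[0]
    return (block.sum() - np.trace(block)) / (m * (m - 1))

hetero = R[a, b].mean()   # mean heterotrait-heteromethod correlation
htmt = hetero / np.sqrt(mean_offdiag(R[a, a]) * mean_offdiag(R[b, b]))
```

In this example HTMT ≈ 0.43, comfortably below the 0.85 threshold; values approach 1 only when between-construct correlations rival within-construct ones.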
While constructs such as novelty value and hedonic value may appear conceptually adjacent—both relating to the enjoyment of new experiences—they were treated as distinct dimensions following established theoretical foundations in technology acceptance and experiential value research [82,107]. Specifically, novelty value captures users’ cognitive appraisal of uniqueness and educational curiosity, whereas hedonic value reflects the emotional pleasure derived from using ChatGPT. This conceptual distinction was supported by discriminant validity tests (Fornell–Larcker criterion and HTMT). Moreover, knowledge acquisition in this study was operationalized as a perceived cognitive gain, consistent with prior studies examining users’ self-assessed learning outcomes rather than objective performance metrics [28]. This approach aligns with the study’s focus on user perception and behavioral intention rather than measurable academic performance. Nonetheless, future research should expand this construct by incorporating objective indicators such as task accuracy, grades, or performance analytics to triangulate perception-based measures. Finally, the creepiness construct was measured primarily as affective discomfort, emphasizing users’ emotional unease when interacting with AI. While this captures one essential dimension of creepiness, future work should broaden the scope to encompass perceived privacy risks, algorithmic bias, and ethical concerns—factors increasingly recognized as central to user trust and AI ethics in educational contexts [51,98,99]. These refinements will help strengthen the conceptual clarity and ecological validity of future generative AI research frameworks.

5.3. Model Fit

To evaluate the quality of the structural model, we assessed the model fit using standard PLS-SEM criteria. Model fit in PLS-SEM is typically judged by the standardized root mean square residual (SRMR), the squared Euclidean distance (d_ULS), the geodesic distance (d_G), and the normed fit index (NFI) [136,145]. SRMR values below 0.08 are considered acceptable [146], while lower values of d_ULS and d_G indicate better model fit. NFI values above 0.80 suggest a reasonable fit, even though this index is less stringent than in CB-SEM.
For this study, the saturated model yielded an SRMR of 0.050, which is well below the recommended threshold, indicating a good model-data fit. The estimated model reported an SRMR of 0.095, which is slightly above the commonly suggested cut-off, but still within a tolerable range in PLS-SEM applications where complex models are tested. Additional indices were as follows: d_ULS (saturated = 1.107; estimated = 3.914), d_G (saturated = 0.661; estimated = 0.754), and NFI (saturated = 0.819; estimated = 0.814). Although the Chi-square values (saturated = 1010.396; estimated = 1034.162) were relatively high, this is not unexpected given the sensitivity of Chi-square to sample size [136]. Overall, the model demonstrates an acceptable fit, with SRMR and NFI supporting the adequacy of the structural specification.
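As a reference point for the SRMR values reported above, the index is essentially the root mean square of the discrepancies between the sample correlation matrix and the model-implied correlation matrix. A toy computation follows; both 3 × 3 matrices are hypothetical and do not reflect the study’s covariance structure.

```python
import numpy as np

# Sample correlations (hypothetical)
S = np.array([[1.00, 0.52, 0.40],
              [0.52, 1.00, 0.35],
              [0.40, 0.35, 1.00]])

# Model-implied correlations (hypothetical)
Sigma = np.array([[1.00, 0.48, 0.44],
                  [0.48, 1.00, 0.30],
                  [0.44, 0.30, 1.00]])

# SRMR over the unique (lower-triangle, incl. diagonal) elements
idx = np.tril_indices_from(S)
srmr = np.sqrt(np.mean((S[idx] - Sigma[idx]) ** 2))
```

The resulting value (≈ 0.03) would fall below the 0.08 cut-off; larger residuals between the two matrices push SRMR upward.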

5.4. Testing of Hypotheses

To verify the proposed associations among the study’s constructs, SEM analysis was performed. The study utilized a bootstrapping method, with 5000 subsamples, to evaluate the hypothesized pathways and path coefficients. As depicted in Figure 2, the research model comprises fourteen hypothesized paths.
Overall, the structural framework explained approximately 61.5 percent of the variation in loyalty. Table 8 details the results of hypothesis testing.

6. Discussion

Aligning with prior research, the findings of this study confirmed that novelty value is positively associated with both task attraction and hedonic value [88,89,90]. The results underscore the human penchant for novelty, especially among university students. When tasks are perceived as fresh and captivating, there’s an uptick in student engagement. This attraction to new experiences isn’t a mere surface-level fascination but taps into deeper cognitive layers, sparking curiosity and encouraging exploration. Interestingly, ChatGPT’s novelty doesn’t just amplify task commitment. It also resonates emotionally, offering users genuine joy. In essence, ChatGPT seems to offer a dual advantage: it efficiently aids tasks, addressing practical requirements, while also satisfying emotional and psychological needs. This harmony between utility and pleasure could pave the way for enduring user engagement, highlighting ChatGPT’s multi-dimensional promise in educational contexts.
Additionally, the results indicated that the perceived intelligence of ChatGPT is significantly associated with knowledge acquisition, task attraction, and hedonic value. There have been previous studies that have validated the association between perceived intelligence with knowledge acquisition [35,36], affective attitude (similar to task attraction) [13], and cognitive attitude (similar to hedonic value) [13]. The findings suggest that students’ willingness to use ChatGPT for learning is closely related to their perception of its intelligence. Furthermore, this perception appears to shape not only students’ evaluation of ChatGPT’s functional utility but also their emotional responses, enhancing its hedonic appeal and overall enjoyment. This dual association underscores the multifaceted role of perceived intelligence, suggesting that it contributes to both practical effectiveness and affective engagement in students’ learning experiences with ChatGPT.
Moreover, our findings demonstrated that knowledge acquisition is positively related to task attraction and hedonic value, supporting earlier research [94,95,96]. The interpretation suggests that students’ perception of acquiring knowledge through ChatGPT is associated with higher motivation and a stronger affinity toward the tool. When they discern tangible learning outcomes, they are more inclined to engage with ChatGPT. This isn’t just about the academic benefits, but also the intrinsic joy derived from the learning process. In essence, the effectiveness of ChatGPT as a knowledge-acquisition tool enhances both its utility and its appeal, creating a more holistic and fulfilling user experience for students.
The results also revealed a significant negative association between creepiness and hedonic value, which aligns with prior studies on negative emotions and enjoyment [99,100]. This negative association highlights the need to address concerns related to the uncanny nature of AI chatbots to improve user experiences and mitigate potential negative emotions. The findings further indicate that creepiness is negatively associated with trust. This observation aligns with prior research on human-computer interaction, where the perception of creepiness has been shown to negatively impact trust in technology [46,51]. When university students perceive ChatGPT as creepy, their trust in the platform tends to decline, which may be associated with less favorable overall experiences and lower usage intentions.
Furthermore, the current study found that task attraction is significantly associated with satisfaction and loyalty. The results corroborate previous research verifying the significant effects of task attraction on satisfaction [102,103] and on loyalty [105,106]. This finding aligns with the foundational principle that utility and practicality are primary determinants of user commitment. When students discern tangible benefits from ChatGPT in assisting with their tasks, it triggers a twofold response: an immediate sense of satisfaction and a longer-term loyalty. In essence, when a tool seamlessly integrates into a user’s workflow and offers discernible advantages, it does not just meet immediate needs but also fosters sustained allegiance. This underscores the idea that the true value of a tool lies not just in its features but in its ability to reliably enhance user productivity and satisfaction over time.
Similarly, the results confirmed that hedonic value is positively associated with satisfaction and loyalty, which is consistent with earlier findings on satisfaction [109,110,111] and loyalty [112,113]. The emotional resonance of a tool, as evidenced by ChatGPT, plays a significant role in user satisfaction and loyalty. While the tangible benefits of functionality are undeniably crucial, the intangible joy and pleasure derived from its usage are equally pivotal. This interplay between the functional and emotional aspects of a user’s experience reaffirms the importance of holistic tool design. It isn’t enough for a tool to just “work”; it needs to resonate emotionally with its users. The findings underscore that in the realm of technology adoption, the heart’s contentment can be as compelling as the mind’s recognition of utility.
Additionally, the empirical results validated a significant relationship between trust and satisfaction, supporting prior research on trust and user satisfaction [116,117,118]. The positive relationship between trust and satisfaction emphasizes the importance of establishing and maintaining trust in ChatGPT’s information security to ensure user satisfaction. Contrary to Rajaobelina, Prom Tep [51], however, trust was not significantly associated with loyalty, a finding that departs from established perspectives in consumer behavior, where trust is typically regarded as a foundational element in building and maintaining loyalty. In many contexts, trust is seen as a precursor to loyalty; when users trust a product or service, they are more likely to remain committed. However, in the AI context, trust may function as a necessary but not sufficient condition for continued use. University students may trust ChatGPT’s functional reliability and data security but still perceive its limitations in accuracy, bias, or ethical concerns, thereby constraining emotional or brand-like loyalty. This finding also aligns with studies suggesting that in technology-mediated interactions, trust fosters short-term adoption or task satisfaction rather than enduring loyalty, particularly when the technology is rapidly evolving or substitutable [147,148]. This divergence underscores that while trust remains crucial for initial acceptance, sustained loyalty toward generative AI may depend more on experiential and contextual factors, such as novelty, perceived intelligence, or emotional engagement, rather than on traditional relational trust.
The analysis reveals that satisfaction does not exhibit a significant association with loyalty among university students, which contradicts many previous studies in the technology adoption literature. Typically, satisfaction has been shown to be a strong predictor of loyalty, as satisfied users are more likely to continue using a service and recommend it to others [116,120,121]. One plausible explanation for this divergence may lie in the unique usage dynamics of ChatGPT, where repeated and task-driven engagement patterns lead to habitual use rather than affective attachment. In such contexts, users may continue using the platform not because of satisfaction, but because it has become an embedded academic habit or due to the absence of equally competent alternatives [149,150]. Moreover, the rapid technological evolution of generative AI tools and students’ perceived dependency on them could create a “functional loyalty” that is behaviorally maintained despite fluctuating satisfaction levels. This result suggests that, for ChatGPT, satisfaction alone may not be sufficient to ensure long-term commitment from users, as behavioral continuity might be sustained by convenience, perceived utility, or the lack of substitutes.
The analysis indicated that gender was not significantly associated with loyalty, whereas age showed a positive relationship with loyalty among university students. Gender-based preferences or biases thus do not appear to play a substantial role in shaping long-term commitment or allegiance in this context. On the other hand, as users grow older, they are more inclined to display loyalty. This could be attributed to various reasons. Older individuals might value consistency and reliability over novelty, or they may have more established routines and preferences, making them less inclined to switch or experiment with alternatives. Mature users may simply discern the lasting value in staying with a familiar tool or service.
While the present study utilized PLS-SEM due to its suitability for prediction-oriented research and moderate sample sizes, it is important to recognize that this methodological choice places an emphasis on exploratory and predictive accuracy rather than strict model confirmation. PLS-SEM enabled the examination of complex relationships among emergent constructs such as novelty, creepiness, and task attraction; however, confirmatory validation using CB-SEM is recommended in future research. A replication of these findings with larger and more heterogeneous samples using CB-SEM would allow for rigorous testing of model fit and theoretical assumptions, enhancing the robustness and generalizability of the proposed framework. Such methodological triangulation would also strengthen confidence in causal inference and deepen theoretical refinement across educational contexts.

7. Conclusions

7.1. Theoretical Contributions

This research notably enhances the current body of knowledge by zeroing in on the determinants impacting university students’ acceptance and utilization of ChatGPT. Past studies predominantly centered on the integration of AI-chatbots in diverse sectors, like customer support [39,71,151] and healthcare [152,153,154], leaving the academic domain relatively untouched. This investigation fills that void by emphasizing the educational landscape, providing invaluable insights into students’ interactions and perceptions of ChatGPT. The investigation of the impact of novelty value, perceived intelligence, knowledge acquisition, creepiness, and trust on task attraction, hedonic value, satisfaction, and loyalty offers valuable insights for scholars seeking to understand AI chatbot adoption in diverse settings.
Another notable theoretical advancement stems from the fusion of diverse theoretical underpinnings, encompassing realms of knowledge, intelligence, and trust. By interweaving these perspectives, the research offers a nuanced comprehension of ChatGPT’s acceptance among tertiary students. In juxtaposition with prior works that may have adopted a singular theoretical lens, this research’s multifaceted approach illuminates the intricate web of determinants propelling ChatGPT’s endorsement. Such a holistic stance sets a robust platform for ensuing scholarly explorations in this domain. For academicians, this signifies the value of embracing interdisciplinary insights to decode the complexities of technology adoption. As a directive, future scholars might consider delving deeper into the synergies and conflicts between these intertwined theories, enriching the academic dialogue in the realm of AI chatbot assimilation.
This study also emphasizes the need to address the creepiness factor in AI chatbot adoption, an aspect that has been largely overlooked in previous research. Creepiness in AI chatbots can stem from various factors, such as uncanny resemblance to human behavior, unsolicited personalization, and privacy concerns. When users encounter unsettling interactions with ChatGPT, they may become wary of the platform, leading to reduced trust and subsequent disengagement. The findings confirm that managing the uncanny nature of AI chatbots and mitigating potential negative emotions can significantly enhance user experiences and encourage adoption [155]. Scholars are encouraged to further explore the psychological aspects of AI chatbot interactions, such as the Uncanny Valley phenomenon, to develop a more comprehensive understanding of users’ emotional responses to AI chatbots.
In addition, the research findings highlight the importance of perceived intelligence and knowledge acquisition in influencing users’ perceptions of ChatGPT’s usefulness and enjoyment. These results underscore the need for researchers to investigate the cognitive and learning aspects of AI-chatbot interactions in the educational context [21,156,157]. Future studies could explore the role of individual differences in shaping users’ perceptions and experiences with AI chatbots, as well as the long-term impact of AI chatbot use on students’ academic performance and engagement.

7.2. Practical Implications

The actionable insights from this research hold significant value for educators and creators of AI linguistic frameworks like ChatGPT. Recognizing the elements that drive the acceptance and utilization of ChatGPT by university students empowers educators to make judicious choices regarding the integration of AI chatbots into academic settings. For instance, the findings on the importance of perceived intelligence and knowledge acquisition suggest that educators should focus on selecting AI tools that demonstrate a high level of competence and can effectively facilitate learning [158,159]. By prioritizing these aspects, educators can enhance students’ perceptions of the AI chatbot’s usefulness and enjoyment, ultimately improving their overall learning experiences.
For developers of AI chatbots, this study provides crucial insights into the features and attributes that university students find most appealing and beneficial. The positive relationship between novelty value and both task attraction and hedonic value highlights the importance of designing AI language models that are not only functional but also engaging and entertaining. Developers should consider incorporating elements of gamification [160], interactive features [161], and personalized learning experiences [162] to create AI models that captivate and motivate users. By doing so, developers can increase the likelihood of widespread adoption and long-term success of their AI chatbots in educational settings.
The findings related to creepiness emphasize the need for AI chatbot developers to carefully consider the emotional aspects of user interactions. By addressing potential concerns related to the uncanny nature of AI artifacts, developers can create more user-friendly and emotionally appealing chatbots. This could involve refining the chatbot’s appearance, communication style, and responsiveness to users’ emotions to create a more relatable and less intimidating user experience. Consequently, it is essential for developers and designers of AI chatbots like ChatGPT to address and mitigate factors contributing to the perception of creepiness. For example, developers may choose to use a more human-like appearance or allow users to customize the chatbot’s avatar to their preferences to mitigate the creepiness factor [163].
In addition to the implications for educators and developers, the study’s findings also hold practical relevance for university administrators and policymakers. The positive relationship between task attraction, hedonic value, satisfaction, and loyalty suggests that the successful integration of AI models like ChatGPT in university settings can foster long-term student engagement and loyalty. University administrators should consider investing in AI chatbot technologies and supporting faculty in their efforts to incorporate these tools into their curricula. This could involve providing training and resources to help educators effectively utilize chat-based AI and promoting a culture of innovation and experimentation within the institution.
Furthermore, universities can play a proactive role in strategically integrating AI tools like ChatGPT into teaching and learning practices. Institutions should develop clear policies and ethical guidelines governing AI use in coursework, emphasizing academic integrity, data privacy, and transparency. Professional development programs can help faculty design AI-supported assignments, ensuring that generative tools are used to complement—not replace—critical thinking and problem-solving skills. Universities may also consider creating AI literacy programs to equip students with the competencies needed to responsibly and effectively engage with AI systems. These initiatives would not only enhance students’ educational experiences but also mitigate potential risks, such as overreliance on AI or de-skilling effects, ensuring that AI adoption aligns with pedagogical and ethical standards.
The findings also carry substantial implications for the ed-tech sector. By understanding the determinants of AI chatbot acceptance among university students, ed-tech firms can tailor their products to the specific needs and preferences of this demographic. These insights can inform product design, marketing, and customer-support strategies, fostering AI chatbot solutions that are more effective and better suited to the higher education context. For instance, ed-tech firms might partner with academic institutions to conduct user evaluations and gather direct feedback from students and faculty, which could be pivotal in refining their AI chatbot features to address the actual needs of their intended market.

7.3. Limitations and Future Research

This study has several limitations that should be acknowledged and addressed in future research.
First, the use of cross-sectional survey data restricts the ability to infer causal relationships among the constructs. The relationships observed between variables such as novelty, perceived intelligence, and satisfaction should therefore be interpreted as associative rather than causal. To strengthen causal inference, future research could adopt longitudinal or experimental designs to verify temporal ordering and identify potential causal mechanisms.
Second, the sample comprised 242 university students from a single country, which limits the generalizability of the findings. This relatively homogeneous group—university students with high digital literacy and technology acceptance—may not represent broader populations such as faculty or professionals. Future studies should replicate this research across different educational levels, cultural settings, and professional domains to enhance external validity and capture sociocultural variation in perceptions of generative AI.
Third, the study relied exclusively on self-reported Likert-scale measures, which may introduce common-method bias (CMB) and limit objectivity. Although Harman’s single-factor test was conducted, it cannot fully eliminate potential bias inherent in self-report data. In particular, students’ perceived knowledge acquisition may not accurately reflect actual learning outcomes. Future research should incorporate objective indicators—such as academic grades, task-completion quality, or system-generated analytics—and combine behavioral or experimental data with survey responses to achieve a more balanced and empirically grounded understanding of ChatGPT’s educational impact.
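To illustrate the screening procedure mentioned above: Harman's single-factor test loads all survey items onto one unrotated factor and flags potential common-method bias when that single factor absorbs the majority (a common rule of thumb is more than 50%) of the total variance. A minimal sketch using a single principal component as the unrotated factor; the Likert responses here are simulated for illustration only and are not the study's data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)

# Simulated Likert-style responses: 242 respondents x 27 items
# (nine constructs x three items, mirroring Table A1); values 1-5.
n_respondents, n_items = 242, 27
data = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)

# Harman's single-factor test: fit ONE unrotated component to all
# items at once and check how much total variance it explains.
pca = PCA(n_components=1)
pca.fit(data)
first_factor_variance = float(pca.explained_variance_ratio_[0])

# CMB is suspected when a single factor dominates (> 50% of variance).
cmb_suspected = bool(first_factor_variance > 0.50)
print(f"First factor explains {first_factor_variance:.1%} of variance; "
      f"CMB suspected: {cmb_suspected}")
```

With independent random responses, as here, no single factor dominates; in a real administration the test is run on the observed item matrix, and, as noted above, a passing result still cannot rule out method bias entirely.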
Fourth, the analysis focused solely on ChatGPT as a representative generative AI platform, which may introduce a technology-specific bias. While ChatGPT is currently the most prominent conversational model, it does not represent the diversity of systems such as Google Gemini, Anthropic Claude, or Meta LLaMA, each differing in design, data sources, and interaction style. Comparative studies across multiple AI platforms would allow researchers to identify both common and unique determinants of user trust, satisfaction, and loyalty, thus offering a broader and more generalizable understanding of generative AI adoption in education.
Fifth, the purposive sampling of students who had used ChatGPT within the previous three months may involve self-selection bias. Because ChatGPT usage has become nearly ubiquitous among Korean university students, constructing a valid non-user control group was not feasible within the data-collection period. Moreover, given ongoing updates to ChatGPT, new data collection could have produced inconsistent results between cohorts. Future work should nevertheless include comparative analyses of users and non-users across different AI ecosystems (e.g., Gemini, Claude, Copilot) to obtain a more balanced perspective.
Sixth, this study did not distinguish between different usage contexts—such as academic writing, coding, brainstorming, or casual conversation—which may shape satisfaction, trust, and loyalty differently. Future research should segment use cases to test whether these relationships vary across educational and task contexts, thereby improving ecological validity.
Finally, the findings must be viewed in light of the rapidly evolving nature of generative AI technologies. As platforms like ChatGPT undergo frequent updates, user perceptions and experiences are likely to shift. Periodic replication studies are therefore recommended to track longitudinal changes in trust, satisfaction, and loyalty as AI literacy and regulatory frameworks continue to mature.

Funding

This research received no external funding.

Institutional Review Board Statement

This study adhered to the Declaration of Helsinki guidelines. The nature of the research and the type of data collected did not involve sensitive information, as defined under Article 23 of the Personal Information Protection Act of Korea; therefore, approval from an Ethics Committee was not required.

Informed Consent Statement

Informed consent was obtained in written form from all individual participants included in the study.

Data Availability Statement

The data used in this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Table A1. List of Constructs and Items.

| Construct | Item | Description | Source |
|---|---|---|---|
| Novelty Value | NVT1 | Using ChatGPT is a unique experience. | Hasan, Shams [83] |
| | NVT2 | Using ChatGPT is an educational experience. | |
| | NVT3 | The experience of using ChatGPT satisfies my curiosity. | |
| Perceived Intelligence | PIE1 | I feel that ChatGPT for learning is competent. | Rafiq, Dogra [13] |
| | PIE2 | I feel that ChatGPT for learning is knowledgeable. | |
| | PIE3 | I feel that ChatGPT for learning is intelligent. | |
| Knowledge Acquisition | KAQ1 | ChatGPT allows me to generate new knowledge based on my existing knowledge. | Al-Sharafi, Al-Emran [29] |
| | KAQ2 | ChatGPT enables me to acquire knowledge through various resources. | |
| | KAQ3 | ChatGPT assists me to acquire the knowledge that suits my needs. | |
| Creepiness | CPN1 | When using ChatGPT, I had a queasy feeling. | Rajaobelina, Prom Tep [52] |
| | CPN2 | When using ChatGPT, I felt uneasy. | |
| | CPN3 | When using ChatGPT, I somehow felt threatened. | |
| Task Attraction | TAT1 | ChatGPT is beneficial for my tasks. | Han and Yang [54] |
| | TAT2 | ChatGPT aids me in accomplishing tasks more quickly. | |
| | TAT3 | ChatGPT enhances my productivity. | |
| Hedonic Value | HEV1 | I enjoy using ChatGPT. | Kim and Han [108] |
| | HEV2 | ChatGPT elicits positive feelings. | |
| | HEV3 | Engaging with ChatGPT is genuinely enjoyable. | |
| Trust | TRU1 | I trust that my personal information won’t be misused. | Nguyen, Ta [78] |
| | TRU2 | I am confident that my personal data is safeguarded. | |
| | TRU3 | I believe my personal data is securely stored. | |
| Satisfaction | SAT1 | I am very satisfied with ChatGPT. | Kim, Wong [133] |
| | SAT2 | ChatGPT meets my expectations. | |
| | SAT3 | ChatGPT fits my needs/wants. | |
| Loyalty | LYT1 | I prefer ChatGPT to other Chatbots. | Daud, Farida [106] |
| | LYT2 | I will continue to use ChatGPT in the future. | |
| | LYT3 | I am willing to refer ChatGPT to other people or friends. | |
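Each construct in Table A1 is measured with three Likert items, and the internal consistency of such multi-item scales is conventionally summarized with Cronbach's alpha. A minimal sketch of the standard alpha formula; the `cronbach_alpha` helper is our own illustrative code, and the three Satisfaction-item responses are simulated (a shared latent score plus item noise), not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # per-item variance
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of scale total
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Simulated 5-point responses to three items (labeled after SAT1-SAT3):
# each respondent's answers share a latent satisfaction score, so the
# items correlate and alpha should come out well above zero.
rng = np.random.default_rng(7)
latent = rng.normal(3.5, 0.8, size=(242, 1))
sat_items = np.clip(np.rint(latent + rng.normal(0.0, 0.5, size=(242, 3))), 1, 5)

alpha = cronbach_alpha(sat_items)
print(f"Cronbach's alpha (simulated SAT1-SAT3): {alpha:.2f}")
```

Values of roughly 0.7 or higher are typically read as acceptable reliability; because alpha rises mechanically with inter-item correlation, it complements rather than replaces the validity checks discussed in the limitations above.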

References

  1. van Dis, E.A.; Bollen, J.; Zuidema, W.; van Rooij, R.; Bockting, C.L. ChatGPT: Five priorities for research. Nature 2023, 614, 224–226. [Google Scholar] [CrossRef]
  2. George, A.S.; George, A.H. A review of ChatGPT AI’s impact on several business sectors. Partn. Univers. Int. Innov. J. 2023, 1, 9–23. [Google Scholar]
  3. Dwivedi, Y.K.; Kshetri, N.; Hughes, L.; Slade, E.L.; Jeyaraj, A.; Kar, A.K.; Baabdullah, A.M.; Koohang, A.; Raghavan, V.; Ahuja, M.; et al. “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 2023, 71, 102642. [Google Scholar] [CrossRef]
  4. Cheng, K.; Sun, Z.; He, Y.; Gu, S.; Wu, H. The potential impact of ChatGPT/GPT-4 on surgery: Will it topple the profession of surgeons? Int. J. Surg. 2023, 109, 1545–1547. [Google Scholar] [CrossRef]
  5. Gilson, A.; Safranek, C.W.; Huang, T.; Socrates, V.; Chi, L.; Taylor, R.A.; Chartash, D. How does ChatGPT perform on the united states medical licensing examination? The implications of large language models for medical education and knowledge assessment. JMIR Med. Educ. 2023, 9, e45312. [Google Scholar] [CrossRef]
  6. Lin, C.-C.; Huang, A.Y.; Yang, S.J. A review of ai-driven conversational chatbots implementation methodologies and challenges (1999–2022). Sustainability 2023, 15, 4012. [Google Scholar] [CrossRef]
  7. Alkaissi, H.; McFarlane, S.I. Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus 2023, 15, e35179. [Google Scholar] [PubMed]
  8. Firat, M. How Chat GPT Can Transform Autodidactic Experiences and Open Education; Department of Distance Education Open Education Faculty, Anadolu University: Eskişehir, Turkey, 2023. [Google Scholar]
  9. Bardach, L.; Emslander, V.; Kasneci, E.; Eitel, A.; Lindner, M.; Bailey, D. Research Syntheses on AI in Education Offer Limited Educational Insights; Technical University of Munich (TUM): Munich, Germany, 2025. [Google Scholar]
  10. Kalla, D.; Smith, N. Study and Analysis of Chat GPT and its Impact on Different Fields of Study. Int. J. Innov. Sci. Res. Technol. 2023, 8, 827–833. [Google Scholar]
  11. Essel, H.B.; Vlachopoulos, D.; Tachie-Menson, A.; Johnson, E.E.; Baah, P.K. The impact of a virtual teaching assistant (chatbot) on students’ learning in Ghanaian higher education. Int. J. Educ. Technol. High. Educ. 2022, 19, 57. [Google Scholar] [CrossRef]
  12. Feroz, H.M.B.; Zulfiqar, S.; Noor, S.; Huo, C. Examining multiple engagements and their impact on students’ knowledge acquisition: The moderating role of information overload. J. Appl. Res. High. Educ. 2022, 14, 366–393. [Google Scholar] [CrossRef]
  13. Rafiq, F.; Dogra, N.; Adil, M.; Wu, J.-Z. Examining consumer’s intention to adopt AI-chatbots in tourism using partial least squares structural equation modeling method. Mathematics 2022, 10, 2190. [Google Scholar] [CrossRef]
  14. Sun, G.H.; Hoelscher, S.H. The ChatGPT storm and what faculty can do. Nurse Educ. 2023, 48, 119–124. [Google Scholar] [CrossRef]
  15. Smutny, P.; Schreiberova, P. Chatbots for learning: A review of educational chatbots for the Facebook Messenger. Comput. Educ. 2020, 151, 103862. [Google Scholar] [CrossRef]
  16. Köchling, A.; Wehner, M.C.; Warkocz, J. Can I show my skills? Affective responses to artificial intelligence in the recruitment process. Rev. Manag. Sci. 2023, 17, 2109–2138. [Google Scholar] [CrossRef]
  17. Woźniak, P.W.; Karolus, J.; Lang, F.; Eckerth, C.; Schöning, J.; Rogers, Y.; Niess, J. Creepy technology: What is it and how do you measure it? In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–13. [Google Scholar]
  18. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  19. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward a Unified View. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  20. Selamat, M.A.; Windasari, N.A. Chatbot for SMEs: Integrating customer and business owner perspectives. Technol. Soc. 2021, 66, 101685. [Google Scholar] [CrossRef]
  21. Rahim, N.I.M.; Iahad, N.A.; Yusof, A.F.; Al-Sharafi, M.A. AI-Based Chatbots Adoption Model for Higher-Education Institutions: A Hybrid PLS-SEM-Neural Network Modelling Approach. Sustainability 2022, 14, 12726. [Google Scholar] [CrossRef]
  22. Følstad, A.; Brandtzaeg, P.B. Users’ experiences with chatbots: Findings from a questionnaire study. Qual. User Exp. 2020, 5, 3. [Google Scholar] [CrossRef]
  23. Jenneboer, L.; Herrando, C.; Constantinides, E. The impact of chatbots on customer loyalty: A systematic literature review. J. Theor. Appl. Electron. Commer. Res. 2022, 17, 212–229. [Google Scholar] [CrossRef]
  24. Derico, B. ChatGPT Bug Leaked Users’ Conversation Histories. Available online: https://www.bbc.com/news/technology-65047304 (accessed on 5 May 2025).
  25. Gurman, M. Samsung Bans Staff’s AI Use After Spotting ChatGPT Data Leak. Available online: https://www.bloomberg.com/news/articles/2023-05-02/samsung-bans-chatgpt-and-other-generative-ai-use-by-staff-after-leak#xj4y7vzkg (accessed on 5 May 2025).
  26. Matemba, E.D.; Li, G. Consumers’ willingness to adopt and use WeChat wallet: An empirical study in South Africa. Technol. Soc. 2018, 53, 55–68. [Google Scholar] [CrossRef]
  27. Cheng, L.; Liu, F.; Yao, D. Enterprise data breach: Causes, challenges, prevention, and future directions. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2017, 7, e1211. [Google Scholar] [CrossRef]
  28. Al-Sharafi, M.A.; Al-Emran, M.; Iranmanesh, M.; Al-Qaysi, N.; Iahad, N.A.; Arpaci, I. Understanding the impact of knowledge management factors on the sustainable use of AI-based chatbots for educational purposes using a hybrid SEM-ANN approach. Interact. Learn. Environ. 2022, 31, 7491–7510. [Google Scholar] [CrossRef]
  29. Tussyadiah, I.P.; Wang, D.; Jung, T.H.; Tom Dieck, M.C. Virtual reality, presence, and attitude change: Empirical evidence from tourism. Tour. Manag. 2018, 66, 140–154. [Google Scholar] [CrossRef]
  30. Yu, C.-E. Humanlike robots as employees in the hotel industry: Thematic content analysis of online reviews. J. Hosp. Mark. Manag. 2020, 29, 22–38. [Google Scholar] [CrossRef]
  31. Graw, M. 50+ ChatGPT Statistics for May 2023—Data on Usage & Revenue. Available online: https://www.business2community.com/statistics/chatgpt (accessed on 4 May 2025).
  32. Koc, T.; Bozdag, E. Measuring the degree of novelty of innovation based on Porter’s value chain approach. Eur. J. Oper. Res. 2017, 257, 559–567. [Google Scholar] [CrossRef]
  33. Jo, H. Continuance intention to use artificial intelligence personal assistant: Type, gender, and use experience. Heliyon 2022, 8, e10662. [Google Scholar] [CrossRef] [PubMed]
  34. Merikivi, J.; Nguyen, D.; Tuunainen, V.K. Understanding perceived enjoyment in mobile game context. In Proceedings of the 2016 49th Hawaii International Conference on System Sciences (HICSS), Koloa, HI, USA, 5–8 January 2016; pp. 3801–3810. [Google Scholar]
  35. Lv, Y.; Hu, S.; Liu, F.; Qi, J. Research on Users’ Trust in Customer Service Chatbots Based on Human-Computer Interaction; Springer: Singapore, 2022; pp. 291–306. [Google Scholar]
  36. Alam, A. Possibilities and apprehensions in the landscape of artificial intelligence in education. In Proceedings of the 2021 International Conference on Computational Intelligence and Computing Applications (ICCICA), Nagpur, India, 26–27 November 2021; pp. 1–8. [Google Scholar]
  37. McGinn, C.; Cullinan, M.F.; Otubela, M.; Kelly, K. Design of a terrain adaptive wheeled robot for human-orientated environments. Auton. Robot. 2019, 43, 63–78. [Google Scholar] [CrossRef]
  38. Petisca, S.; Dias, J.; Paiva, A. More social and emotional behaviour may lead to poorer perceptions of a social robot. In Proceedings of the Social Robotics: 7th International Conference, ICSR 2015, Paris, France, 26–30 October 2015; Proceedings 7; pp. 522–531. [Google Scholar]
  39. Ashfaq, M.; Yun, J.; Yu, S.; Loureiro, S.M.C. I, Chatbot: Modeling the determinants of users’ satisfaction and continuance intention of AI-powered service agents. Telemat. Inform. 2020, 54, 101473. [Google Scholar] [CrossRef]
  40. Rapp, A.; Curti, L.; Boldi, A. The human side of human-chatbot interaction: A systematic literature review of ten years of research on text-based chatbots. Int. J. Hum. Comput. Stud. 2021, 151, 102630. [Google Scholar] [CrossRef]
  41. Al-Emran, M.; Mezhuyev, V.; Kamaludin, A. Towards a conceptual model for examining the impact of knowledge management factors on mobile learning acceptance. Technol. Soc. 2020, 61, 101247. [Google Scholar] [CrossRef]
  42. Al-Maroof, R.; Ayoubi, K.; Alhumaid, K.; Aburayya, A.; Alshurideh, M.; Alfaisal, R.; Salloum, S. The acceptance of social media video for knowledge acquisition, sharing and application: A comparative study among YouYube users and TikTok users’ for medical purposes. Int. J. Data Netw. Sci. 2021, 5, 197. [Google Scholar] [CrossRef]
  43. Khan, M.N.; Ashraf, M.A.; Seinen, D.; Khan, K.U.; Laar, R.A. Social media for knowledge acquisition and dissemination: The impact of the COVID-19 pandemic on collaborative learning driven social media adoption. Front. Psychol. 2021, 12, 648253. [Google Scholar] [CrossRef] [PubMed]
  44. Al-Emran, M.; Teo, T. Do knowledge acquisition and knowledge sharing really affect e-learning adoption? An empirical study. Educ. Inf. Technol. 2020, 25, 1983–1998. [Google Scholar] [CrossRef]
  45. Al-Emran, M.; Mezhuyev, V.; Kamaludin, A. Is M-learning acceptance influenced by knowledge acquisition and knowledge sharing in developing countries? Educ. Inf. Technol. 2021, 26, 2585–2606. [Google Scholar] [CrossRef]
  46. Olivera-La Rosa, A.; Arango-Tobón, O.E.; Ingram, G.P. Swiping right: Face perception in the age of Tinder. Heliyon 2019, 5, e02949. [Google Scholar] [CrossRef] [PubMed]
  47. Phelan, C.; Lampe, C.; Resnick, P. It’s creepy, but it doesn’t bother me. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, 7–12 May 2016; pp. 5240–5251. [Google Scholar]
  48. Laakasuo, M.; Palomäki, J.; Köbis, N. Moral uncanny valley: A robot’s appearance moderates how its decisions are judged. Int. J. Soc. Robot. 2021, 13, 1679–1688. [Google Scholar] [CrossRef]
  49. Oravec, J.A. Negative Dimensions of Human-Robot and Human-AI Interactions: Frightening Legacies, Emerging Dysfunctions, and Creepiness. In Good Robot, Bad Robot: Dark and Creepy Sides of Robotics, Autonomous Vehicles, and AI; Springer: Berlin/Heidelberg, Germany, 2022; pp. 39–89. [Google Scholar]
  50. McWhorter, R.R.; Bennett, E.E. Creepy technologies and the privacy issues of invasive technologies. In Research Anthology on Privatizing and Securing Data; IGI Global: Hershey, PA, USA, 2021; pp. 1726–1745. [Google Scholar]
  51. Rajaobelina, L.; Prom Tep, S.; Arcand, M.; Ricard, L. Creepiness: Its antecedents and impact on loyalty when interacting with a chatbot. Psychol. Mark. 2021, 38, 2339–2356. [Google Scholar] [CrossRef]
  52. Söderlund, M. Service robots with (perceived) theory of mind: An examination of humans’ reactions. J. Retail. Consum. Serv. 2022, 67, 102999. [Google Scholar] [CrossRef]
  53. Han, S.; Yang, H. Understanding adoption of intelligent personal assistants: A parasocial relationship perspective. Ind. Manag. Data Syst. 2018, 118, 618–636. [Google Scholar] [CrossRef]
  54. Moussawi, S.; Koufaris, M.; Benbunan-Fich, R. How perceptions of intelligence and anthropomorphism affect adoption of personal intelligent agents. Electron. Mark. 2021, 31, 343–364. [Google Scholar] [CrossRef]
  55. Shetu, S.N.; Islam, M.M.; Promi, S.I. An Empirical Investigation of the Continued Usage Intention of Digital Wallets: The Moderating Role of Perceived Technological Innovativeness. Future Bus. J. 2022, 8, 43. [Google Scholar] [CrossRef]
  56. Gatzioufa, P.; Saprikis, V. A literature review on users’ behavioral intention toward chatbots’ adoption. Appl. Comput. Inform. 2022, ahead-of-print. [Google Scholar] [CrossRef]
  57. Burton, S.L. Grasping the cyber-world: Artificial intelligence and human capital meet to inform leadership. Int. J. Econ. Commer. Manag. 2019, 7, 707–759. [Google Scholar]
  58. Seo, K.; Tang, J.; Roll, I.; Fels, S.; Yoon, D. The impact of artificial intelligence on learner–instructor interaction in online learning. Int. J. Educ. Technol. High. Educ. 2021, 18, 54. [Google Scholar] [CrossRef]
  59. Deranty, J.-P.; Corbin, T. Artificial intelligence and work: A critical review of recent research from the social sciences. AI Soc. 2022, 39, 675–691. [Google Scholar] [CrossRef]
  60. Zhang, J.; Zhang, W.; Xu, J. Bandwidth-efficient multi-task AI inference with dynamic task importance for the Internet of Things in edge computing. Comput. Netw. 2022, 216, 109262. [Google Scholar] [CrossRef]
  61. Yang, B.; Wei, L.; Pu, Z. Measuring and Improving User Experience Through Artificial Intelligence-Aided Design. Front. Psychol. 2020, 11, 595374. [Google Scholar] [CrossRef]
  62. Fares, O.H.; Butt, I.; Lee, S.H.M. Utilization of artificial intelligence in the banking sector: A systematic literature review. J. Financ. Serv. Mark. 2022, 28, 835–852. [Google Scholar] [CrossRef]
  63. Zhang, Q.; Lu, J.; Jin, Y. Artificial intelligence in recommender systems. Complex Intell. Syst. 2021, 7, 439–457. [Google Scholar] [CrossRef]
  64. Akdim, K.; Casaló, L.V.; Flavián, C. The role of utilitarian and hedonic aspects in the continuance intention to use social mobile apps. J. Retail. Consum. Serv. 2022, 66, 102888. [Google Scholar] [CrossRef]
  65. Hamouda, M. Purchase intention through mobile applications: A customer experience lens. Int. J. Retail Distrib. Manag. 2021, 49, 1464–1480. [Google Scholar] [CrossRef]
  66. Lee, S.; Kim, D.-Y. The effect of hedonic and utilitarian values on satisfaction and loyalty of Airbnb users. Int. J. Contemp. Hosp. Manag. 2018, 30, 1332–1351. [Google Scholar] [CrossRef]
  67. Evelina, T.Y.; Kusumawati, A.; Nimran, U.; Sunarti. The influence of utilitarian value, hedonic value, social value, and perceived risk on customer satisfaction: Survey of e-commerce customers in Indonesia. Bus. Theory Pract. 2020, 21, 613–622. [Google Scholar] [CrossRef]
  68. Ponsignon, F.; Lunardo, R.; Michrafy, M. Why are international visitors more satisfied with the tourism experience? The role of hedonic value, escapism, and psychic distance. J. Travel Res. 2021, 60, 1771–1786. [Google Scholar] [CrossRef]
  69. Xie, C.; Wang, Y.; Cheng, Y. Does artificial intelligence satisfy you? A meta-analysis of user gratification and user satisfaction with AI-powered chatbots. Int. J. Hum. Comput. Interact. 2022, 40, 613–623. [Google Scholar] [CrossRef]
  70. Kim, J.; Merrill Jr, K.; Collins, C. AI as a friend or assistant: The mediating role of perceived usefulness in social AI vs. functional AI. Telemat. Inform. 2021, 64, 101694. [Google Scholar] [CrossRef]
  71. Pillai, R.; Sivathanu, B. Adoption of AI-based chatbots for hospitality and tourism. Int. J. Contemp. Hosp. Manag. 2020, 32, 3199–3226. [Google Scholar] [CrossRef]
  72. Balakrishnan, J.; Dwivedi, Y.K.; Hughes, L.; Boy, F. Enablers and Inhibitors of AI-Powered Voice Assistants: A Dual-Factor Approach by Integrating the Status Quo Bias and Technology Acceptance Model. Inf. Syst. Front. 2021, 26, 921–942. [Google Scholar] [CrossRef]
  73. Belanche, D.; Casaló, L.V.; Flavián, C. Artificial Intelligence in FinTech: Understanding robo-advisors adoption among customers. Ind. Manag. Data Syst. 2019, 119, 1411–1430. [Google Scholar] [CrossRef]
  74. Vlačić, B.; Corbo, L.; e Silva, S.C.; Dabić, M. The evolving role of artificial intelligence in marketing: A review and research agenda. J. Bus. Res. 2021, 128, 187–203. [Google Scholar] [CrossRef]
  75. Okonkwo, C.W.; Ade-Ibijola, A. Chatbots applications in education: A systematic review. Comput. Educ. Artif. Intell. 2021, 2, 100033. [Google Scholar] [CrossRef]
  76. Belen Saglam, R.; Nurse, J.R.; Hodges, D. Privacy concerns in Chatbot interactions: When to trust and when to worry. In Proceedings of the HCI International 2021-Posters: 23rd HCI International Conference, HCII 2021, Virtual Event, 24–29 July 2021; Proceedings, Part II 23. pp. 391–399. [Google Scholar]
  77. Nguyen, Q.N.; Ta, A.; Prybutok, V. An integrated model of voice-user interface continuance intention: The gender effect. Int. J. Hum. Comput. Interact. 2019, 35, 1362–1377. [Google Scholar] [CrossRef]
  78. Følstad, A.; Nordheim, C.B.; Bjørkli, C.A. What Makes Users Trust a Chatbot for Customer Service? An Exploratory Interview Study; Springer: Cham, Switzerland, 2018; pp. 194–208. [Google Scholar]
  79. Hamad, S.; Yeferny, T. A Chatbot for Information Security. arXiv 2020, arXiv:2012.00826. [Google Scholar]
  80. Wang, X.; Lin, X.; Shao, B. Artificial intelligence changes the way we work: A close look at innovating with chatbots. J. Assoc. Inf. Sci. Technol. 2023, 74, 339–353. [Google Scholar] [CrossRef]
  81. Brill, T.M.; Munoz, L.; Miller, R.J. Siri, Alexa, and other digital assistants: A study of customer satisfaction with artificial intelligence applications. J. Mark. Manag. 2019, 35, 1401–1436. [Google Scholar] [CrossRef]
  82. Hasan, R.; Shams, R.; Rahman, M. Consumer trust and perceived risk for voice-controlled artificial intelligence: The case of Siri. J. Bus. Res. 2021, 131, 591–597. [Google Scholar] [CrossRef]
  83. Hair, J.F.; Risher, J.J.; Sarstedt, M.; Ringle, C.M. When to use and how to report the results of PLS-SEM. Eur. Bus. Rev. 2019, 31, 2–24. [Google Scholar] [CrossRef]
  84. Wellsandta, S.; Rusak, Z.; Ruiz Arenas, S.; Aschenbrenner, D.; Hribernik, K.A.; Thoben, K.-D. Concept of a Voice-Enabled Digital Assistant for Predictive Maintenance in Manufacturing. In Proceedings of the 9th International Conference on Through-life Engineering Services, Cranfield University, Bedford, UK, 3–4 November 2020. [Google Scholar]
  85. Lai, P.C. The literature review of technology adoption models and theories for the novelty technology. J. Inf. Syst. Technol. Manag. 2017, 14, 21–38. [Google Scholar] [CrossRef]
  86. Jeong, S.C.; Kim, S.-H.; Park, J.Y.; Choi, B. Domain-specific innovativeness and new product adoption: A case of wearable devices. Telemat. Inform. 2017, 34, 399–412. [Google Scholar] [CrossRef]
  87. Adapa, S.; Fazal-e-Hasan, S.M.; Makam, S.B.; Azeem, M.M.; Mortimer, G. Examining the antecedents and consequences of perceived shopping value through smart retail technology. J. Retail. Consum. Serv. 2020, 52, 101901. [Google Scholar] [CrossRef]
  88. Oyman, M.; Bal, D.; Ozer, S. Extending the technology acceptance model to explain how perceived augmented reality affects consumers’ perceptions. Comput. Hum. Behav. 2022, 128, 107127. [Google Scholar] [CrossRef]
  89. Matt, C.; Benlian, A.; Hess, T.; Weiß, C. Escaping from the filter bubble? The effects of novelty and serendipity on users’ evaluations of online recommendations. In Proceedings of the 2014 International Conference on Information Systems (ICIS 2014), Auckland, New Zealand, 14–17 December 2014. [Google Scholar]
  90. Nguyen, D. Understanding Perceived Enjoyment and Continuance Intention in Mobile Games. Master’s Thesis, Aalto University, Espoo, Finland, 2015. [Google Scholar]
  91. Hernández-Orallo, J. Evaluation in artificial intelligence: From task-oriented to ability-oriented measurement. Artif. Intell. Rev. 2017, 48, 397–447. [Google Scholar] [CrossRef]
  92. Yang, Y.; Luo, J.; Lan, T. An empirical assessment of a modified artificially intelligent device use acceptance model—From the task-oriented perspective. Front. Psychol. 2022, 13, 975307. [Google Scholar] [CrossRef]
  93. Lee, J.; Park, D.-H.; Han, I. The effect of negative online consumer reviews on product attitude: An information processing view. Electron. Commer. Res. Appl. 2008, 7, 341–352. [Google Scholar] [CrossRef]
  94. Fergusson, L. Learning by… Knowledge and skills acquisition through work-based learning and research. J. Work-Appl. Manag. 2022, 14, 184–199. [Google Scholar] [CrossRef]
  95. Joseph, R.P.; Arun, T.M. Models and Tools of Knowledge Acquisition. In Computational Management: Applications of Computational Intelligence in Business Management; Patnaik, S., Tajeddini, K., Jain, V., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 53–67. [Google Scholar] [CrossRef]
  96. Ngoc Thang, N.; Anh Tuan, P. Knowledge acquisition, knowledge management strategy and innovation: An empirical study of Vietnamese firms. Cogent Bus. Manag. 2020, 7, 1786314. [Google Scholar] [CrossRef]
  97. Rotar, O. Online student support: A framework for embedding support interventions into the online learning cycle. Res. Pract. Technol. Enhanc. Learn. 2022, 17, 2. [Google Scholar] [CrossRef]
  98. Tene, O.; Polonetsky, J. A theory of creepy: Technology, privacy and shifting social norms. Yale JL Tech. 2013, 16, 59. [Google Scholar]
  99. Langer, M.; König, C.J.; Fitili, A. Information as a double-edged sword: The role of computer experience and information on applicant reactions towards novel technologies for personnel selection. Comput. Hum. Behav. 2018, 81, 19–30. [Google Scholar] [CrossRef]
  100. Yang, R.; Wibowo, S. User trust in artificial intelligence: A comprehensive conceptual framework. Electron. Mark. 2022, 32, 2053–2077. [Google Scholar] [CrossRef]
  101. Lukyanenko, R.; Maass, W.; Storey, V.C. Trust in artificial intelligence: From a Foundational Trust Framework to emerging research opportunities. Electron. Mark. 2022, 32, 1993–2020. [Google Scholar] [CrossRef]
  102. Jo, H. Examining the key factors influencing loyalty and satisfaction toward the smart factory. J. Bus. Ind. Mark. 2023, 38, 484–493. [Google Scholar] [CrossRef]
  103. Jo, H.; Park, S. Success factors of untact lecture system in COVID-19: TAM, benefits, and privacy concerns. Technol. Anal. Strateg. Manag. 2022, 36, 1385–1397. [Google Scholar] [CrossRef]
  104. Doménech-Betoret, F.; Abellán-Roselló, L.; Gómez-Artiga, A. Self-Efficacy, Satisfaction, and Academic Achievement: The Mediator Role of Students’ Expectancy-Value Beliefs. Front. Psychol. 2017, 8, 1193. [Google Scholar] [CrossRef]
  105. Daud, A.; Farida, N.; Razak, M. Impact of customer trust toward loyalty: The mediating role of perceived usefulness and satisfaction. J. Bus. Retail Manag. Res. 2018, 13, 235–242. [Google Scholar] [CrossRef]
  106. Almahamid, S.; Mcadams, A.C.; Al Kalaldeh, T.; MO’TAZ, A.-S.E. The relationship between perceived usefulness, perceived ease of use, perceived information quality, and intention to use e-government. J. Theor. Appl. Inf. Technol. 2010, 11, 30–44. [Google Scholar]
  107. Kim, B.; Han, I. The role of utilitarian and hedonic values and their antecedents in a mobile data service environment. Expert Syst. Appl. 2011, 38, 2311–2318. [Google Scholar] [CrossRef]
  108. Rodríguez-Ardura, I.; Meseguer-Artola, A.; Fu, Q. The utilitarian and hedonic value of immersive experiences on WeChat: Examining a dual mediation path leading to users’ stickiness and the role of social norms. Online Inf. Rev. 2023, ahead-of-print. [Google Scholar] [CrossRef]
  109. Weli, W. Student satisfaction and continuance model of Enterprise Resource Planning (ERP) system usage. Int. J. Emerg. Technol. Learn. 2019, 14, 71. [Google Scholar] [CrossRef]
  110. Isaac, O.; Abdullah, Z.; Ramayah, T.; Mutahar, A.M.; Alrajawy, I. Integrating user satisfaction and performance impact with technology acceptance model (TAM) to examine the internet usage within organizations in Yemen. Asian J. Inf. Technol. 2018, 17, 60–78. [Google Scholar]
  111. Bae, S.; Jung, T.H.; Moorhouse, N.; Suh, M.; Kwon, O. The influence of mixed reality on satisfaction and brand loyalty in cultural heritage attractions: A brand equity perspective. Sustainability 2020, 12, 2956. [Google Scholar] [CrossRef]
  112. Lee, R.; Murphy, J. The moderating influence of enjoyment on customer loyalty. Australas. Mark. J. 2008, 16, 11–21. [Google Scholar] [CrossRef]
  113. Rao, Q.; Ko, E. Impulsive purchasing and luxury brand loyalty in WeChat Mini Program. Asia Pac. J. Mark. Logist. 2021, 33, 2054–2071. [Google Scholar] [CrossRef]
  114. Kim, J.; Lennon, S.J. Effects of reputation and website quality on online consumers’ emotion, perceived risk and purchase intention: Based on the stimulus-organism-response model. J. Res. Interact. Mark. 2013, 7, 33–56. [Google Scholar] [CrossRef]
  115. Shin, D. The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. Int. J. Hum. Comput. Stud. 2021, 146, 102551. [Google Scholar] [CrossRef]
  116. Kassim, E.S.; Jailani, S.F.A.K.; Hairuddin, H.; Zamzuri, N.H. Information System Acceptance and User Satisfaction: The Mediating Role of Trust. Procedia Soc. Behav. Sci. 2012, 57, 412–418. [Google Scholar] [CrossRef]
  117. Martínez-Navalón, J.-G.; Gelashvili, V.; Gómez-Ortega, A. Evaluation of User Satisfaction and Trust of Review Platforms: Analysis of the Impact of Privacy and E-WOM in the Case of TripAdvisor. Front. Psychol. 2021, 12, 750527. [Google Scholar] [CrossRef]
  118. Montesdioca, G.P.Z.; Maçada, A.C.G. Measuring user satisfaction with information security practices. Comput. Secur. 2015, 48, 267–280. [Google Scholar] [CrossRef]
  119. Pal, D.; Babakerkhell, M.D.; Zhang, X. Exploring the Determinants of Users’ Continuance Usage Intention of Smart Voice Assistants. IEEE Access 2021, 9, 162259–162275. [Google Scholar] [CrossRef]
  120. Oliver, R.L. A cognitive model of the antecedents and consequences of satisfaction decisions. J. Mark. Res. 1980, 17, 460–469. [Google Scholar] [CrossRef]
  121. Kim, D.J.; Ferrin, D.L.; Rao, H.R. A trust-based consumer decision-making model in electronic commerce: The role of trust, perceived risk, and their antecedents. Decis. Support Syst. 2008, 44, 544–564. [Google Scholar] [CrossRef]
  122. Oliver, R.L. Satisfaction: A Behavioral Perspective on the Consumer New York; McGraw-Hill: New York, NY, USA, 1997. [Google Scholar]
  123. Chen, Q.; Lu, Y.; Gong, Y.; Xiong, J. Can AI chatbots help retain customers? Impact of AI service quality on customer loyalty. Internet Res. 2023, ahead-of-print. [Google Scholar] [CrossRef]
  124. Singh, P.; Singh, V. The power of AI: Enhancing customer loyalty through satisfaction and efficiency. Cogent Bus. Manag. 2024, 11, 2326107. [Google Scholar] [CrossRef]
  125. Koenig, P.D. Attitudes toward artificial intelligence: Combining three theoretical perspectives on technology acceptance. AI Soc. 2024, 40, 1333–1345. [Google Scholar] [CrossRef]
  126. Palmquist, A.; Jedel, I. Influence of Gender, Age, and Frequency of Use on Users’ Attitudes on Gamified Online Learning; Springer: Cham, Switzerland, 2021; pp. 177–185. [Google Scholar]
  127. Chawla, D.; Joshi, H. The moderating role of gender and age in the adoption of mobile wallet. Foresight 2020, 22, 483–504. [Google Scholar] [CrossRef]
  128. White Baker, E.; Al-Gahtani, S.S.; Hubona, G.S. The effects of gender and age on new technology implementation in a developing country. Inf. Technol. People 2007, 20, 352–375. [Google Scholar] [CrossRef]
  129. Cirillo, D.; Catuara-Solarz, S.; Morey, C.; Guney, E.; Subirats, L.; Mellino, S.; Gigante, A.; Valencia, A.; Rementeria, M.J.; Chadha, A.S. Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare. NPJ Digit. Med. 2020, 3, 81. [Google Scholar] [CrossRef] [PubMed]
  130. Kim, A.; Cho, M.; Ahn, J.; Sung, Y. Effects of Gender and Relationship Type on the Response to Artificial Intelligence. Cyberpsychology Behav. Soc. Netw. 2019, 22, 249–253. [Google Scholar] [CrossRef]
  131. Yap, Y.-Y.; Tan, S.-H.; Choon, S.-W. Elderly’s intention to use technologies: A systematic literature review. Heliyon 2022, 8, e08765. [Google Scholar] [CrossRef] [PubMed]
  132. Kim, M.-K.; Wong, S.F.; Chang, Y.; Park, J.-H. Determinants of customer loyalty in the Korean smartphone market: Moderating effects of usage characteristics. Telemat. Inform. 2016, 33, 936–949. [Google Scholar] [CrossRef]
  133. Neuman, C.; Rossman, G. Basics of Social Research Methods: Qualitative and Quantitative Approaches; Allyn and Bacon: Boston, MA, USA, 2006. [Google Scholar]
  134. Palinkas, L.A.; Horwitz, S.M.; Green, C.A.; Wisdom, J.P.; Duan, N.; Hoagwood, K. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Adm. Policy Ment. Health Ment. Health Serv. Res. 2015, 42, 533–544. [Google Scholar] [CrossRef]
  135. Hair, J.; Hollingsworth, C.L.; Randolph, A.B.; Chong, A.Y.L. An updated and expanded assessment of PLS-SEM in information systems research. Ind. Manag. Data Syst. 2017, 117, 442–458. [Google Scholar] [CrossRef]
  136. Hair, J.F.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM); Sage Publications: Thousand Oaks, CA, USA, 2021. [Google Scholar]
  137. Sarstedt, M.; Cheah, J.-H. Partial least squares structural equation modeling using SmartPLS: A software review. J. Mark. Anal. 2019, 7, 196–202. [Google Scholar] [CrossRef]
  138. Chin, W.W. Issues and opinion on structural equation modeling. MIS Q. 1998, 22, 7–16. [Google Scholar]
  139. Podsakoff, P.M.; MacKenzie, S.B.; Lee, J.-Y.; Podsakoff, N.P. Common method biases in behavioral research: A critical review of the literature and recommended remedies. J. Appl. Psychol. 2003, 88, 879–903. [Google Scholar] [CrossRef] [PubMed]
  140. Kock, N. WarpPLS 5.0 User Manual; ScriptWarp Systems: Laredo, TX, USA, 2015. [Google Scholar]
  141. Nunnally, J.C. Psychometric Theory, 2nd ed.; Mcgraw Hill Book Company: New York, NY, USA, 1978. [Google Scholar]
  142. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  143. Henseler, J.; Ringle, C.M.; Sarstedt, M. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 2015, 43, 115–135. [Google Scholar] [CrossRef]
  144. Roemer, E.; Schuberth, F.; Henseler, J. HTMT2—An improved criterion for assessing discriminant validity in structural equation modeling. Ind. Manag. Data Syst. 2021, 121, 2637–2650. [Google Scholar] [CrossRef]
  145. Henseler, J.; Hubona, G.; Ray, P.A. Using PLS path modeling in new technology research: Updated guidelines. Ind. Manag. Data Syst. 2016, 116, 2–20. [Google Scholar] [CrossRef]
  146. Henseler, J.; Dijkstra, T.K.; Sarstedt, M.; Ringle, C.M.; Diamantopoulos, A.; Straub, D.W.; Ketchen, D.J.; Hair, J.F.; Hult, G.T.M.; Calantone, R.J. Common beliefs and reality about PLS: Comments on Rönkkö and Evermann (2013). Organ. Res. Methods 2014, 17, 182–209. [Google Scholar] [CrossRef]
  147. Gefen, D. E-commerce: The role of familiarity and trust. Omega 2000, 28, 725–737. [Google Scholar] [CrossRef]
  148. McKnight, D.H.; Choudhury, V.; Kacmar, C. Developing and validating trust measures for e-commerce: An integrative typology. Inf. Syst. Res. 2002, 13, 334–359. [Google Scholar] [CrossRef]
  149. Limayem, M.; Hirt, S.G.; Cheung, C.M. How habit limits the predictive power of intention: The case of information systems continuance. MIS Q. 2007, 31, 705–737. [Google Scholar] [CrossRef]
  150. Ouellette, J.A.; Wood, W. Habit and intention in everyday life: The multiple processes by which past behavior predicts future behavior. Psychol. Bull. 1998, 124, 54. [Google Scholar] [CrossRef]
  151. Fotheringham, D.; Wiles, M.A. The effect of implementing chatbot customer service on stock returns: An event study analysis. J. Acad. Mark. Sci. 2022, 51, 802–822. [Google Scholar] [CrossRef]
  152. Xu, L.; Sanders, L.; Li, K.; Chow, J.C. Chatbot for health care and oncology applications using artificial intelligence and machine learning: Systematic review. JMIR Cancer 2021, 7, e27850. [Google Scholar] [CrossRef]
  153. Nadarzynski, T.; Miles, O.; Cowie, A.; Ridge, D. Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study. Digit. Health 2019, 5, 2055207619871808. [Google Scholar] [CrossRef]
  154. Powell, J. Trust me, I’m a chatbot: How artificial intelligence in health care fails the Turing test. J. Med. Internet Res. 2019, 21, e16222. [Google Scholar] [CrossRef]
  155. Benke, I.; Gnewuch, U.; Maedche, A. Understanding the impact of control levels over emotion-aware chatbots. Comput. Hum. Behav. 2022, 129, 107122. [Google Scholar] [CrossRef]
  156. Kumar, J.A. Educational chatbots for project-based learning: Investigating learning outcomes for a team-based design course. Int. J. Educ. Technol. High. Educ. 2021, 18, 65. [Google Scholar] [CrossRef] [PubMed]
  157. Mageira, K.; Pittou, D.; Papasalouros, A.; Kotis, K.; Zangogianni, P.; Daradoumis, A. Educational AI Chatbots for Content and Language Integrated Learning. Appl. Sci. 2022, 12, 3239. [Google Scholar]
  158. Moraes, C.L. Chatbot as a Learning Assistant: Factors Influencing Adoption and Recommendation. Master’s Thesis, Universidade NOVA de Lisboa, Lisboa, Portugal, 2021. [Google Scholar]
  159. Davies, J.N.; Verovko, M.; Verovko, O.; Solomakha, I. Personalization of E-Learning Process Using AI-Powered Chatbot Integration; Springer: Cham, Switzerland, 2023; pp. 209–216. [Google Scholar]
  160. Hidayatulloh, I.; Pambudi, S.; Surjono, H.D.; Sukardiyono, T. Gamification on chatbot-based learning media: A review and challenges. Electron. Inform. Vocat. Educ. 2021, 6, 71–80. [Google Scholar] [CrossRef]
  161. Nicolescu, L.; Tudorache, M.T. Human-Computer Interaction in Customer Service: The Experience with AI Chatbots–A Systematic Literature Review. Electronics 2022, 11, 1579. [Google Scholar]
  162. Pataranutaporn, P.; Danry, V.; Leong, J.; Punpongsanon, P.; Novy, D.; Maes, P.; Sra, M. AI-generated characters for supporting personalized learning and well-being. Nat. Mach. Intell. 2021, 3, 1013–1022. [Google Scholar] [CrossRef]
  163. Van Pinxteren, M.M.E.; Pluymaekers, M.; Lemmink, J.G.A.M. Human-like communication in conversational agents: A literature review and research agenda. J. Serv. Manag. 2020, 31, 203–225. [Google Scholar] [CrossRef]
Figure 1. Research Model.
Figure 2. PLS Algorithm Results.
Table 1. Summary of Key Literature on Novelty, Intelligence, Knowledge, and Creepiness.

| Author(s) | Key Variable | Method | Key Findings |
|---|---|---|---|
| Koc and Bozdag [32] | novelty | conceptual model, case study | Proposed value chain-based model; fuel cell technology had highest novelty among alternatives. |
| Jo [33] | novelty, hedonic value, continuance intention | survey, partial least squares (PLS-SEM) | Novelty value affects utilitarian and hedonic value; continuance intention influenced by utilitarian and hedonic factors. |
| Merikivi, Nguyen [34] | novelty, perceived enjoyment | survey, SEM | Design aesthetics, ease of use, and novelty influence perceived enjoyment; enjoyment drives continuance intention in games. |
| Rafiq, Dogra [13] | perceived intelligence, AI-chatbot adoption | survey, PLS-SEM | Adoption influenced by multiple factors under S-O-R framework; supported ten hypotheses on chatbot adoption in tourism. |
| Ashfaq, Yun [39] | information quality, perceived usefulness, satisfaction, continuance intention | survey, SEM | Information and service quality enhance satisfaction; satisfaction predicts continuance intention; need for human interaction moderates effects. |
| Al-Emran, Mezhuyev [41] | knowledge management, m-learning | survey, PLS-SEM | Knowledge acquisition, application, and protection positively influence perceived usefulness and ease of use; knowledge sharing partly supported. |
| Al-Emran, Mezhuyev [45] | knowledge acquisition, knowledge sharing | survey, PLS-SEM | Knowledge acquisition positively influences ease of use and usefulness in both countries; sharing affects usefulness in Oman but not Malaysia. |
| Rajaobelina, Prom Tep [51] | creepiness, loyalty | survey, SEM | Creepiness reduces loyalty directly and indirectly through trust and emotions; usability reduces creepiness, privacy concerns increase it. |
Table 2. Summary of Key Literature on Task Attraction and Hedonic Value.

| Author(s) | Key Variable | Method | Key Findings |
|---|---|---|---|
| Han and Yang [53] | continuance intention, interpersonal attraction, privacy risk | survey, PLS-SEM | Task, social, and physical attraction, along with privacy/security risk, influence IPA adoption; PSR is validated in this context. |
| Shetu, Islam [55] | digital wallet adoption, technological innovativeness | survey, SEM | Perceived usefulness, ease of use, compatibility, and insecurity influence adoption; innovativeness did not moderate intention. |
| Akdim, Casaló [64] | utilitarian vs. hedonic value, continuance intention | survey, PLS-SEM, multi-group analysis | Perceived usefulness, ease of use, and enjoyment explain continuance intention; utilitarian factors dominate in utility apps, enjoyment in hedonic apps. |
| Balakrishnan, Dwivedi [72] | AI voice assistants adoption, resistance | survey, SEM | Status quo bias factors and TAM variables explain adoption resistance; perceived value reduces resistance; inertia varies by gender/age. |
| Belanche, Casaló [73] | AI in FinTech, robo-advisors adoption | survey, SEM, multi-sample analysis | Attitude, mass media, and subjective norms drive adoption; familiarity with robots moderates effects of usefulness and norms across demographics. |
| Pillai and Sivathanu [71] | chatbot adoption in tourism | interviews + survey, mixed-methods, PLS-SEM | Ease of use, usefulness, trust, perceived intelligence, and anthropomorphism predict adoption; technological anxiety not significant; human-agent stickiness negatively moderates intention-usage link. |
Table 3. Summary of Key Literature on Trust.

| Author(s) | Key Variable(s) | Method | Key Findings |
|---|---|---|---|
| Okonkwo and Ade-Ibijola [75] | chatbot use in education | systematic review (53 articles) | Identified benefits, challenges, and future research directions of chatbots in education; emphasized personalized services for students and staff. |
| Nguyen, Ta [77] | trust, continuance intention, perceived enjoyment, risk | survey, SEM | Trust, risk, enjoyment, and self-efficacy influenced continuance intention; gender differences shaped perception and behavior. |
| Wang, Lin [80] | trust, innovative use of chatbots | survey, SEM | Trust conceptualized as functionality, reliability, and data protection; knowledge support and work-life balance increased trust and innovative use. |
| Brill, Munoz [81] | trust, satisfaction with digital assistants | survey, PLS-SEM | Expectation and confirmation significantly influenced satisfaction; evidence of positive customer experience with AI assistants. |
| Hasan, Shams [82] | trust, risk, novelty, loyalty | survey, SEM | Perceived risk negatively affected loyalty, while trust, interaction, and novelty value positively influenced brand loyalty in Siri users. |
| Rajaobelina, Prom Tep [51] | trust, creepiness, loyalty | survey, SEM | Creepiness reduced loyalty directly and indirectly via trust and emotions; usability reduced creepiness, privacy concerns increased it. |
Table 4. Demographic characteristics of the sample (N = 242).

| Demographic | Item | Frequency | Percentage |
|---|---|---|---|
| Gender | Male | 102 | 42.1% |
| Gender | Female | 140 | 57.9% |
| Age | 20 or younger | 101 | 41.7% |
| Age | 21 | 21 | 8.7% |
| Age | 22 | 26 | 10.7% |
| Age | 23 or older | 94 | 38.8% |
Table 5. Factor Analysis and Reliability.

| Construct | Item | Mean | St. Dev. | Factor Loading | Cronbach’s Alpha | CR | AVE |
|---|---|---|---|---|---|---|---|
| Novelty Value | NVT1 | 5.686 | 1.370 | 0.721 | 0.700 | 0.833 | 0.625 |
| | NVT2 | 5.360 | 1.402 | 0.787 | | | |
| | NVT3 | 5.554 | 1.304 | 0.858 | | | |
| Perceived Intelligence | PIE1 | 5.488 | 1.220 | 0.878 | 0.864 | 0.917 | 0.785 |
| | PIE2 | 5.178 | 1.329 | 0.888 | | | |
| | PIE3 | 5.289 | 1.354 | 0.893 | | | |
| Knowledge Acquisition | KAQ1 | 4.901 | 1.320 | 0.861 | 0.870 | 0.920 | 0.794 |
| | KAQ2 | 5.066 | 1.261 | 0.907 | | | |
| | KAQ3 | 5.095 | 1.264 | 0.904 | | | |
| Creepiness | CPN1 | 5.017 | 1.774 | 0.885 | 0.887 | 0.929 | 0.813 |
| | CPN2 | 4.508 | 1.752 | 0.934 | | | |
| | CPN3 | 3.727 | 1.820 | 0.886 | | | |
| Task Attraction | TAT1 | 5.384 | 1.225 | 0.908 | 0.904 | 0.940 | 0.840 |
| | TAT2 | 5.438 | 1.226 | 0.940 | | | |
| | TAT3 | 5.335 | 1.216 | 0.901 | | | |
| Hedonic Value | HEV1 | 4.806 | 1.402 | 0.899 | 0.812 | 0.889 | 0.728 |
| | HEV2 | 4.917 | 1.283 | 0.867 | | | |
| | HEV3 | 4.343 | 1.533 | 0.790 | | | |
| Trust | TRU1 | 3.748 | 1.656 | 0.928 | 0.936 | 0.959 | 0.886 |
| | TRU2 | 3.649 | 1.600 | 0.956 | | | |
| | TRU3 | 3.463 | 1.598 | 0.940 | | | |
| Satisfaction | SAT1 | 5.012 | 1.235 | 0.932 | 0.924 | 0.952 | 0.868 |
| | SAT2 | 4.917 | 1.283 | 0.953 | | | |
| | SAT3 | 4.905 | 1.284 | 0.910 | | | |
| Loyalty | LYT1 | 5.066 | 1.478 | 0.899 | 0.901 | 0.938 | 0.835 |
| | LYT2 | 5.045 | 1.427 | 0.938 | | | |
| | LYT3 | 5.331 | 1.295 | 0.902 | | | |
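As a sanity check on Table 5, the CR and AVE columns can be reproduced directly from the standardized loadings. The sketch below (illustrative only; variable names are ours, not the paper’s) uses the three Novelty Value items:

```python
# Composite reliability (CR) and average variance extracted (AVE) from
# standardized factor loadings, using the Novelty Value items NVT1-NVT3.
loadings = [0.721, 0.787, 0.858]

sum_l = sum(loadings)
sum_sq = sum(l * l for l in loadings)
# CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
cr = sum_l ** 2 / (sum_l ** 2 + sum(1 - l * l for l in loadings))
# AVE = mean of squared loadings
ave = sum_sq / len(loadings)

print(round(cr, 3), round(ave, 3))  # 0.833 0.625
```

Both values match the Novelty Value row of Table 5, which suggests the reported reliability statistics are internally consistent.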
Table 6. Correlation Matrix and Discriminant Assessment.

| Construct | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| 1. Novelty Value | 0.790 | | | | | | | | |
| 2. Perceived Intelligence | 0.568 | 0.886 | | | | | | | |
| 3. Knowledge Acquisition | 0.586 | 0.660 | 0.891 | | | | | | |
| 4. Creepiness | 0.042 | 0.166 | −0.025 | 0.902 | | | | | |
| 5. Task Attraction | 0.589 | 0.706 | 0.650 | 0.054 | 0.916 | | | | |
| 6. Hedonic Value | 0.594 | 0.586 | 0.657 | −0.180 | 0.613 | 0.853 | | | |
| 7. Trust | 0.189 | 0.181 | 0.135 | −0.146 | 0.138 | 0.233 | 0.941 | | |
| 8. Satisfaction | 0.548 | 0.706 | 0.613 | 0.079 | 0.615 | 0.557 | 0.282 | 0.932 | |
| 9. Loyalty | 0.562 | 0.565 | 0.537 | −0.164 | 0.660 | 0.711 | 0.239 | 0.565 | 0.914 |

Note: diagonal values are the square root of each construct’s AVE.
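The Fornell-Larcker criterion behind Table 6 requires the square root of each construct’s AVE (the diagonal) to exceed its correlations with every other construct. A minimal sketch, using a hypothetical three-construct subset of the reported values:

```python
import math

# AVE values from Table 5; inter-construct correlations from Table 6.
ave = {"Novelty Value": 0.625, "Trust": 0.886, "Loyalty": 0.835}
corr = {
    ("Novelty Value", "Trust"): 0.189,
    ("Novelty Value", "Loyalty"): 0.562,
    ("Trust", "Loyalty"): 0.239,
}

def fornell_larcker_ok(ave, corr):
    # Each correlation must be smaller than both constructs' sqrt(AVE).
    return all(
        abs(r) < min(math.sqrt(ave[a]), math.sqrt(ave[b]))
        for (a, b), r in corr.items()
    )

print(fornell_larcker_ok(ave, corr))  # True
```

The check passes for this subset, consistent with the paper’s claim that discriminant validity holds.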
Table 7. HTMT matrix.

| Construct | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| 1. Novelty Value | | | | | | | | | |
| 2. Perceived Intelligence | 0.706 | | | | | | | | |
| 3. Knowledge Acquisition | 0.736 | 0.756 | | | | | | | |
| 4. Creepiness | 0.087 | 0.194 | 0.045 | | | | | | |
| 5. Task Attraction | 0.726 | 0.793 | 0.732 | 0.084 | | | | | |
| 6. Hedonic Value | 0.780 | 0.696 | 0.777 | 0.203 | 0.713 | | | | |
| 7. Trust | 0.231 | 0.194 | 0.147 | 0.155 | 0.149 | 0.267 | | | |
| 8. Satisfaction | 0.660 | 0.788 | 0.679 | 0.088 | 0.670 | 0.637 | 0.300 | | |
| 9. Loyalty | 0.704 | 0.635 | 0.606 | 0.183 | 0.730 | 0.830 | 0.259 | 0.614 | |
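The HTMT values in Table 7 are ratios of item-level correlations. A minimal sketch of the computation, using hypothetical item correlations (not the study’s raw data, which is not reported here):

```python
import math

def htmt(hetero, mono_a, mono_b):
    # HTMT: mean heterotrait-heteromethod correlation divided by the
    # geometric mean of the two constructs' average monotrait-heteromethod
    # correlations.
    return (sum(hetero) / len(hetero)) / math.sqrt(
        (sum(mono_a) / len(mono_a)) * (sum(mono_b) / len(mono_b))
    )

# Hypothetical item-level correlations for two three-item constructs:
hetero = [0.5] * 9            # 3 x 3 cross-construct item correlations
mono_a = [0.64, 0.64, 0.64]   # within-construct correlations, construct A
mono_b = [0.64, 0.64, 0.64]   # within-construct correlations, construct B
print(round(htmt(hetero, mono_a, mono_b), 3))  # 0.781
```

All values in Table 7 fall below the 0.85 threshold of Henseler et al. [143], supporting discriminant validity.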
Table 8. Test Results.

| H | Predictor | Outcome | β | t | p | Result |
|---|---|---|---|---|---|---|
| H1a | Novelty Value | Task Attraction | 0.199 | 3.455 | 0.001 | Supported |
| H1b | Novelty Value | Hedonic Value | 0.266 | 3.790 | 0.000 | Supported |
| H2a | Perceived Intelligence | Knowledge Acquisition | 0.660 | 14.459 | 0.000 | Supported |
| H2b | Perceived Intelligence | Task Attraction | 0.426 | 6.466 | 0.000 | Supported |
| H2c | Perceived Intelligence | Hedonic Value | 0.259 | 3.603 | 0.000 | Supported |
| H3a | Knowledge Acquisition | Task Attraction | 0.253 | 3.534 | 0.000 | Supported |
| H3b | Knowledge Acquisition | Hedonic Value | 0.324 | 4.377 | 0.000 | Supported |
| H4a | Creepiness | Hedonic Value | −0.227 | 4.496 | 0.000 | Supported |
| H4b | Creepiness | Trust | −0.146 | 2.072 | 0.038 | Supported |
| H5a | Task Attraction | Satisfaction | 0.438 | 6.316 | 0.000 | Supported |
| H5b | Task Attraction | Loyalty | 0.310 | 4.096 | 0.000 | Supported |
| H6a | Hedonic Value | Satisfaction | 0.252 | 3.421 | 0.001 | Supported |
| H6b | Hedonic Value | Loyalty | 0.444 | 5.802 | 0.000 | Supported |
| H7a | Trust | Satisfaction | 0.163 | 3.164 | 0.002 | Supported |
| H7b | Trust | Loyalty | 0.065 | 1.545 | 0.122 | Not Supported |
| H8 | Satisfaction | Loyalty | 0.099 | 1.699 | 0.089 | Not Supported |
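The p-values in Table 8 are consistent with a standard normal approximation applied to the bootstrap t-statistics (a common convention in PLS-SEM reporting; we assume it here rather than knowing the exact software setting). A brief sketch that reproduces the two non-significant paths:

```python
import math

def two_tailed_p(t):
    # Two-tailed p-value under the standard normal approximation,
    # Phi(t) = 0.5 * (1 + erf(t / sqrt(2))).
    return 2 * (1 - 0.5 * (1 + math.erf(t / math.sqrt(2))))

print(round(two_tailed_p(1.545), 3))  # 0.122 (H7b: Trust -> Loyalty)
print(round(two_tailed_p(1.699), 3))  # 0.089 (H8: Satisfaction -> Loyalty)
```

Both values match the reported p-values for H7b and H8, the only hypotheses not supported at the 0.05 level.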
Share and Cite

Ahn, H.Y. Modeling Student Loyalty in the Age of Generative AI: A Structural Equation Analysis of ChatGPT’s Role in Higher Education. Systems 2025, 13, 915. https://doi.org/10.3390/systems13100915