Article

How Perceived Motivations Influence User Stickiness and Sustainable Engagement with AI-Powered Chatbots—Unveiling the Pivotal Function of User Attitude

1 School of New Media and Communication, Tianjin University, Tianjin 300072, China
2 Department of Linguistics, University of Konstanz, 78464 Konstanz, Germany
* Authors to whom correspondence should be addressed.
J. Theor. Appl. Electron. Commer. Res. 2025, 20(3), 228; https://doi.org/10.3390/jtaer20030228
Submission received: 21 June 2025 / Revised: 16 August 2025 / Accepted: 18 August 2025 / Published: 1 September 2025

Abstract

Artificial intelligence (AI) is reshaping customer service, with AI-powered chatbots serving as a critical component in delivering continuous support across sales, marketing, and service domains, thereby enhancing operational efficiency. However, consumer engagement remains suboptimal: many users favor human interaction because of concerns about chatbots’ ability to address complex issues and their perceived lack of empathy, which reduces satisfaction and sustainable usage. This study examines the determinants of user attitude and identifies factors influencing sustainable chatbot use. Drawing on survey data from 735 Chinese university students who have engaged with AI-powered chatbots, the analysis reveals that four key motivational categories, namely utilitarian (information acquisition), hedonic (enjoyment and time passing), technology (media appeal), and social (social presence and interaction) motivations, significantly influence user attitude toward chatbot services. Conversely, privacy invasion exerts a negative impact on user attitude, suggesting that while chatbots provide certain benefits, privacy concerns can significantly undermine user satisfaction. Moreover, the findings indicate that user attitude is a pivotal determinant of both user stickiness and sustainable usage of chatbot services. This study advances prior U&G-, TAM-, and ECM-based research by applying these frameworks to AI-powered chatbots in business communication, refining the U&G model with four specific motivations, integrating perceived privacy invasion to bridge gratification theory with risk perception, and directly linking user motivations to business outcomes such as attitude and stickiness. It underscores that optimizing chatbot functionalities to enhance user gratification while mitigating privacy risks can substantially improve user satisfaction and stickiness, offering valuable implications for businesses aiming to strengthen customer loyalty through AI-powered services.

1. Introduction

In the contemporary digital era, artificial intelligence (AI) is positioned to fundamentally alter the dynamics of the labor market, with profound implications for text-based conversational agents, commonly known as AI-powered chatbots [1,2,3]. AI-powered chatbots are reshaping the customer service profession, offering considerable benefits for both users and businesses [1,4,5]. Currently, AI-powered chatbots provide round-the-clock services across various domains, including sales, support, and marketing. Among these, AI-powered chatbots are most frequently employed in sales, followed by support and marketing [5]. Notably, their integration has been linked to a 67% increase in average sales, with 26% of total sales being facilitated through chatbot interactions [6]. AI-powered chatbots utilize natural language processing to communicate with users, enabling them to interpret and provide relevant responses to user queries effectively [7,8,9]. When well-designed and systematically implemented, chatbots can deliver substantial benefits, such as significant savings in resources and time [3,10].
Numerous prominent platforms, including Facebook, Amazon, WeChat, and eBay, have integrated AI-powered chatbots to enhance e-service experiences [7,11]. AI-powered chatbots now function not only as automated information providers but also as digital representatives capable of facilitating personalized user interactions [1,12]. Consumers increasingly expect these technologies to replicate interpersonal exchanges akin to those in physical retail environments [13,14]. These sophisticated programs fulfill diverse roles, ranging from personal assistants to intelligent virtual agents and companions, as emphasized by Ernest Mfumbilwa et al. [15]. By offering uninterrupted customer service and reducing response times, AI-powered chatbots contribute significantly to favorable customer attitudes [8,13]. While chatbots offer clear advantages in terms of availability and scalability, their success in cultivating enduring user relationships remains questionable.
The existing literature provides limited exploration of chatbots’ applications in business communication. Additionally, studies on the Uses and Gratifications (U&G) of chatbot consumers remain scarce, as the framework has been predominantly used to analyze traditional and social media consumption patterns [14,16,17]. Despite the widespread adoption of chatbots, a critical analysis of the literature reveals several underexplored issues. First, most prior research has been descriptive rather than analytical, emphasizing technical capabilities and efficiency outcomes without sufficiently theorizing user-level continuance behaviors. Second, although the constructs of user stickiness and sustainable usage are critical for long-term business value, they have received limited theoretical elaboration in the context of AI-powered interfaces. While user satisfaction and continuance intention have been discussed in general technology contexts, few studies have specifically examined the motivational and cognitive mechanisms that sustain chatbot usage in the long run [2,5,15].
In particular, consumer resistance to chatbot interactions remains strong. Studies indicate that approximately 87% of users still prefer human agents due to perceptions of emotional deficiency, cognitive rigidity, and lack of empathy in chatbot responses [10]. Moreover, concerns regarding limited trust, information credibility, and privacy invasion further reduce users’ willingness to sustainably engage with chatbots [1,18]. Yet, few empirical studies have systematically examined why users discontinue chatbot use, or how perceptions of trust, satisfaction, and data security interact to shape sustainable usage and loyalty-like behaviors [7,12].
Addressing these gaps, this study proposes a multi-theoretical framework to explain the antecedents of user stickiness and sustainable usage in chatbot-mediated business contexts. Specifically, the study integrates the U&G theory, the Technology Acceptance Model (TAM), the Expectation–Confirmation Model (ECM), and the Information Systems Success Model (ISSM). Although U&G theory has been valuable for revealing users’ underlying motivations, such as entertainment, information-seeking, and efficiency, most prior applications have been limited to traditional or one-way media, making it insufficient for capturing the reciprocal, adaptive, and algorithm-driven nature of chatbot interactions. Similarly, TAM and ECM, while widely used to model technology adoption and continuance, tend to focus on functional perceptions and post-adoption evaluations (e.g., perceived usefulness, satisfaction) without adequately addressing the richer spectrum of social, hedonic, and contextual motivations that often drive chatbot engagement [19,20]. The inclusion of ISSM further allows consideration of system quality, information quality, and service reliability, factors that are crucial in chatbot contexts. This theoretical integration is not merely additive but complementary: U&G addresses the why of initial user engagement (motivational gratifications), TAM and ECM focus on the how of sustainable use (cognitive evaluation and expectation alignment), while ISSM highlights the what of system-related determinants (technical and service quality). This triangulated model provides a more comprehensive explanation of user attitudes, privacy invasion, user stickiness, and sustainable usage, thereby addressing a critical gap in the chatbot literature. Building on this framework, the study aims to investigate the following research questions.
RQ1. 
To what extent do perceived motivations and privacy invasion influence user attitude toward text-based AI-powered chatbots?
RQ2. 
How does user attitude affect user stickiness and sustainable usage within the context of text-based AI-powered chatbots?
RQ3. 
What is the association between user stickiness and sustainable usage within the context of text-based AI-powered chatbots?
This study makes several notable contributions to the scholarly discourse on smart media and customer engagement. Firstly, this research extends the scope of U&G theory by identifying key motivational dimensions derived from AI-powered chatbots across diverse markets and examining their impact on user attitudes. Secondly, the study enriches the literature on privacy invasion by investigating how perceptions of privacy invasion influence user attitude in interactions with smart media. Thirdly, it reframes user stickiness and sustainable usage as complex, multi-dimensional outcomes shaped by the interplay of user attitude, system evaluation, and perceived risk. Lastly, this study offers actionable implications for practitioners aiming to design chatbot systems that foster deeper user commitment, higher satisfaction, and more ethically responsible engagement strategies.

2. Theoretical Framework and Hypotheses Development

2.1. Research Model

Figure 1 presents the theoretical model. The study delineates five distinct types of factors: utilitarian motivation (including information seeking), hedonic motivation (comprising perceived enjoyment and time passing), technology motivation (embodying media appeal), social motivation (encompassing social interaction and social presence), and privacy invasion. The research model integrates constructs derived from diverse theoretical frameworks, such as motivation theories and communication theory, to holistically capture a range of motivational factors.

2.2. Linking Utilitarian and Hedonic Motivations to User Attitude

Motivation is conceptualized as a driving force behind actions, representing an individual’s overarching inclination towards the fulfillment of personal needs and desires [21,22,23,24]. This multifaceted construct is intricately tied to one’s subjective assessment of the values or rewards expected through consumptive activities. Within the framework of a goal-directed behavior model, motivation assumes a pivotal role in the transformation of one’s evaluative judgments into actionable behaviors [25,26]. Motivations can be broadly categorized into two distinct forms: appetitive and volitive. Appetitive motivations are characterized by consumers’ yearnings or aspirations for acquisition, stimulating actions to address latent or concealed desires, often associated with biological needs. In contrast, volitive motivations prompt consumers to deliberate on rationales for behavior, fostering self-commitments to action through intentional processing of motivational factors [21,23]. Despite their nuanced differences, both appetitive and volitive motivations possess the capacity to transmute grounds for action into self-regulated motivations to engage in specific behaviors. Bagozzi [27] posits that the genesis of motivations can be ascribed to attitudinal, cognitive, emotional, and social factors, providing a foundation for elucidating latent fundamental human needs. These elements are identified as instrumental in augmenting volitive motivations while concurrently diminishing appetitive motivations.
Utilitarian motivation encompasses the satisfaction of individuals’ utility demands, particularly the desire to acquire accurate, relevant, and timely information. This motivation aligns with the cognitive dimension of media use, where users engage with chatbots primarily to fulfill informational demands, such as retrieving product details, resolving service issues, or obtaining personalized recommendations [6,28]. Unlike other motivational types, utilitarian motivations are task-driven and emphasize functionality, efficiency, and utility in problem-solving. Previous research in the domain of U&G has underscored the significance of utilitarian motivations [17,23]. In the context of this study, particular emphasis is placed on cognitive information requirements, representative of utilitarian motivations that cater to information sharing or seeking needs within chatbots. Chatbots allow for the versatile transfer of information through various formats, such as text, photos, or videos. Historically, the provision of information about products, services, or brands has been a fundamental role of chatbots in marketing communication. A notable example is Gucci’s chatbot, which garnered a positive reception from customers owing to its adeptness in delivering valuable, personalized information and fostering meaningful engagement with each individual customer [1]. Central to the acceptance of new technology is the concept of perceived usefulness, a critical factor highlighted in the technology adoption literature [2,29]. Substantiating this, extant research has consistently shown the substantial influence of perceived usefulness on utilitarian motivations toward specific forms of technology [29,30]. Anifa & Sanaji [29] assert that the utility of technology exerts a profound influence on the consumer experience.
Building upon this premise, numerous scholars have delved into the exploration of the impact of perceived usefulness on customer happiness and experience [13]. This body of research underscores the intrinsic connection between the utilitarian motivations of chatbots and the enhancement of customer satisfaction and overall experiential outcomes.
Hedonic motivation, by contrast, is rooted in users’ desire for emotional gratification, entertainment, and pleasure [31]. It reflects the pursuit of enjoyment, mood enhancement, and diversion from boredom through interaction with chatbots [32]. Users may engage with chatbots not to complete a specific task, but to experience novelty, playful conversation, or emotionally positive responses that enrich their overall digital experience [33]. This motivational category centers on affective rewards rather than practical utility. Extant research has consistently demonstrated the pivotal role of hedonic pleasure in elucidating user motivations across various technological platforms, including commercial websites, cellphones, and mobile messaging tools [22,30]. AI-powered smart media, exemplified by chatbots, initially found their inception in the realm of amusement, responding to user inputs online through basic machine languages [15]. Ernest Mfumbilwa et al. [15], through descriptive interviews, underscored that chatbots could effectively fulfill human demands for leisure and entertainment. In the context of this study, entertainment serves as an illustrative instance of hedonic motivation, depicting the manner in which consumers may engage with chatbots for the sole purpose of amusement or enjoyment. Perceived entertainment constitutes a pivotal aspect of the hedonic dimension, holding inherent value in the realm of online commerce by gauging users’ emotional responses. Extensive research has consistently demonstrated that users who engage with chatbots and derive enjoyment from the experience are more inclined to make purchases and to return repeatedly [12,15].
The virtual interaction between chatbots and consumers assumes a critical role, necessitating an entertaining interface to elicit feelings of enjoyment and pleasure in the consumer. The generation of entertainment serves as a strategic mechanism for chatbots, fostering the establishment of enduring relationships with their customers [12,15]. Consequently, recognizing hedonic motivation as a fundamental concept is imperative, as it emerges as an indispensable element in the enhancement of a positive customer experience within online environments. Accordingly, we formulate the following hypotheses.
H1. 
Utilitarian motivation is positively associated with user attitude.
H2. 
Hedonic motivation is positively associated with user attitude.

2.3. Linking Technology and Social Motivations to User Attitude

The third category examined in this study is technological motivation, which refers to the capacity of emerging technologies to rapidly and seamlessly engage individuals. Schmuck et al. [34] highlighted that social media tools on mobile devices fulfill users’ technology motivations by providing the flexibility to access and engage with information anytime and anywhere. Florenthal [16] incorporated the concept of technology motivations to explore the instant reactions elicited by cell phones among adopters. Media appeal, as defined by Tamborini et al. [35], pertains to the ease and immediacy with which individuals can communicate with others using a medium, and has been identified as a technology motivation in previous literature. Gan & Li [36] identified media appeal as a significant motivator influencing continuous use of social media, and Anifa & Sanaji [29] assert that user-generated media benefit from being easy to use and control. Building on this perspective, Xu et al. [17] investigated users’ technology motivations, such as media appeal, in video game streaming. This study specifically examines the technology motivations behind chatbot usage, emphasizing how interactivity and accessibility are enhanced through technological support across multiple devices and platforms. Unlike utilitarian motivation, which focuses on what is achieved through chatbot use, technological motivation emphasizes how the interaction is delivered: its responsiveness, novelty, and perceived sophistication. Furthermore, in comparison to human agents, machine agents possess the potential to demonstrate greater objectivity and resolve issues with enhanced precision and efficiency, thereby increasing their appeal as a communication medium among current users [3,8,10].
Social motivation captures users’ interest in connecting with others or replicating social presence through chatbot use. This motivation is distinct from hedonic enjoyment in that it is relational rather than purely affective. As a distinct category, social motivation is essential in enhancing interactions among media users and others. A substantial body of literature has explored how social media tools contribute to social motivations, encompassing aspects like social interactions and social presence [36,37,38]. Several scholars have embraced and applied social presence theory to deepen our understanding of this concept. For example, Johnson & Hong [37] applied social presence theory in their study, investigating the extent to which individuals utilize a medium to cultivate a psychological sense of connection with others. Social presence is defined as the extent to which the other person in an interaction is perceived as salient, including the connection, perception, and emotional engagement of customers toward another intellectual entity [38]. Visual human-like cues, such as figures, are often employed to express social presence, heightening the feeling of salience. The establishment of social presence creates a psychological connection between customers and technology, leading to positive experiences [4]. Araujo [39] also suggested that users adopted agency bots because of their social presence, that is, the feeling that another being (whether living or synthetic) exists in the world and appears to react to you. Consequently, research has explored the impact of social presence on perceived human warmth in digital communication environments [39]. Integrating social presence attributes into technology has been proven to be predictive of customer experiences with technological interfaces [4]. Social interaction is a crucial dimension of chatbot usage, addressing individuals’ social motivations. 
As a key factor in sustaining user engagement, social interaction has been recognized as a significant motivator for sustainable chatbot use. In the context of human–robot communication, Ernest Mfumbilwa et al. [15]’s study emphasized the role of “small-talk”-oriented chatbots in facilitating social interactions. Similarly, Chakraborty & Biswal [40] observed that online social interaction plays a significant role in shaping users’ information-sharing behavior within social communities. Chong et al. [41] emphasized the pivotal role of social interaction during the bidding phase in influencing online auction continuance intentions. The perception of enhanced social motivations through interactions with chatbots may thus foster a stronger intention among users to continue engaging with these technologies [42]. Building on these findings, the following hypotheses are formulated:
H3. 
Technology motivation is positively associated with user attitude.
H4. 
Social motivation is positively associated with user attitude.

2.4. Linking Privacy Invasion to User Attitude

In addition to examining user attitude, this study delves into the perceived privacy invasion associated with the use of chatbots, denoting users’ apprehension about utilizing chatbot services due to potential adverse consequences arising from the disclosure of personal information [7]. Unlike general risk perception, which broadly encompasses any potential negative outcome arising from technology use (e.g., financial loss, system failure), privacy invasion centers on the emotional and cognitive discomfort stemming from unauthorized or excessive access to personal data. It is affect-laden and more personalized, often evoking feelings of vulnerability, surveillance, or loss of control. Notably, concerns regarding the collection of personal information have been observed when users access website personalization services, and system-initiated personalization, while enhancing convenience, has been noted to heighten users’ privacy anxieties [7,8]. In the realm of customer service, emerging media tools such as mobile payment systems, mobile banking, and smartwatches have been recognized as carrying inherent privacy risks from the user’s standpoint [9]. Similarly, businesses integrating chatbots into their communication strategies may struggle when users view chatbot interactions as invasive or compromising their privacy. This becomes especially pronounced in chatbot-facilitated transactions, where users risk privacy invasion if their personal information, such as phone numbers, names, or addresses, is mishandled or disclosed to unauthorized entities [9]. From a theoretical standpoint, privacy invasion can undermine users’ trust and satisfaction by violating expectations of data autonomy and informational integrity. Researchers have extensively examined the adverse ramifications of privacy invasion on user attitude. For instance, Söderlund [43] demonstrated that concerns related to privacy and security can diminish customer satisfaction with the online environment.
Thus, we propose the following hypothesis:
H5. 
Privacy invasion is negatively associated with user attitude.

2.5. Linking User Attitude and Stickiness to Sustainable Usage

In accordance with Aguirre-Rodriguez et al. [21], intention is defined as an individual’s subjective probability of performing an actual behavior. In the context of this study, our focus lies on the intentions of chatbot adopters to use the technology sustainably. This conceptualization has been widely employed in previous U&G literature to comprehend adopters’ intentions of continued use and its correlation with user attitude [23]. For instance, Mailizar et al. [19] posited that user attitude has a positive correlation with future behavioral intentions. Franque et al. [44] confirmed a robust relationship between user attitude and the continued use of an information system. Akdim et al. [30] discovered that user satisfaction significantly enhances users’ intention to persist in using social networking sites. Xu et al. [17] highlighted that user attitude on video game streaming leads to more active engagement. Beyond intentions for sustainable use, scholars, such as Shao et al. [45], contended that stickiness represents a significant outcome of user attitude. Khairawati [46] emphasized that stickiness has been extensively investigated in business communication research as an intentional customer behavior, capturing a sustained commitment to repurchasing or re-patronizing preferred brands, even when faced with the marketing efforts of competitive brands. Previous studies have highlighted the critical role of user attitude in cultivating stickiness among long-term users [2,10,45]. For instance, the satisfaction derived from a service encounter can bolster the trust users place in recommendation agents, subsequently influencing their willingness to purchase products or services from a brand [1,47]. The pleasure and enjoyment customers experience from interacting with social media or chatbots can potentially influence brand perception, increase the likelihood of purchase, and raise brand awareness [12]. 
Moreover, customer stickiness is expected to be closely linked to intentions of sustainable usage of chatbot services. Grounded in agency theory, Van Lierop, Badami, & El-Geneidy [48] demonstrated that stickiness can exert a significant influence on sustainable usage. Jung & Shin [49] similarly found that the likelihood of customers becoming sticky to internet-based banks, as evidenced by repeat purchasing behavior and the willingness to recommend to others, is positively correlated with the likelihood of sustainable usage of such financial services. Building upon these insights, we thus propose the following hypotheses:
H6. 
User attitude is positively associated with sustainable usage.
H7. 
User attitude is positively associated with user stickiness.
H8. 
User stickiness is positively associated with sustainable usage.

3. Research Methodology

3.1. Subjects and Data Collection

The questionnaire was designed and administered using a professionally developed online survey instrument (the Sojump app). The study recruited participants from a preeminent public university in northern mainland China with the aim of securing a diverse and reliable sample. Students from various majors, including journalism, psychology, and education, were included to provide a representative cross-section of the university student population. To ensure response authenticity, participants were first provided with instructions explaining the concept of chatbots and were presented with real-world examples to facilitate their understanding of AI-powered chatbot offerings. This preliminary step was completed before participants proceeded to the primary questionnaire. The study focused exclusively on text-based chatbots, as they represent the most prevalent type of chatbot in use. Participants were then prompted to choose a single brand from a predefined list if they had previously used an AI-powered chatbot and were confident in answering questions about that service. Subsequently, participants were asked to recall their most recent interaction with a text-based chatbot for customer support and to complete the survey based on that experience. Filter questions were used to enroll only individuals who had adopted corporate chatbot services, and attention-check questions were incorporated to maintain the quality of the survey responses. After participating, students could receive small gifts or nominal cash incentives. In total, 750 students completed the survey; invalid or incomplete responses were removed, leaving 735 valid responses for analysis. The demographic data collected encompassed gender, age, and chatbot usage experience. Of the 735 valid respondents, 375 were male (51%) and 360 were female (49%). Nearly all respondents (90.2%) reported having access to an AI-powered chatbot. Table 1 provides a detailed breakdown of the demographic information.

3.2. Measurement

This study’s questionnaire is structured into two key sections. The initial section focuses on gathering demographic details from respondents, including their gender, age, and prior interactions with chatbots. The second section encompasses eight constructs: utilitarian motivation, hedonic motivation, technological motivation, social motivation, privacy invasion, user attitude, sustainable usage, and user stickiness. To ensure the validity and relevance of the measures, the constructs were derived from prior research, as detailed in Table 2. The survey was administered within the context of chatbot usage in China, employing a back-translation technique to ensure linguistic fidelity and conceptual consistency. In the first step, a native Chinese-speaking researcher translated the original English questionnaire into Chinese (forward translation). In the second step, another researcher performed a back-translation of the Chinese version into English (back-translation) to verify content consistency. Any discrepancies were resolved through discussion, resulting in the finalized version of the questionnaire. Prior to administering the full-scale survey, a pilot test was carried out with 30 university students who had differing degrees of familiarity with chatbot technology. Insights from their feedback facilitated refinements to enhance the questionnaire’s clarity and readability, ultimately strengthening its validity.

3.2.1. Utilitarian Motivation

The development of this six-item utilitarian motivation scale was informed by existing research to measure respondents’ utilitarian motivation in the context of AI-powered chatbots (e.g., “I use AI-powered Chatbots to find what I’m looking for”, “AI-powered Chatbots helps me with finding the information that I need”) [50]. All items were rated on a 5-point Likert scale ranging from “1 = strongly disagree” to “5 = strongly agree” (M = 3.87, SD = 0.68, α = 0.939).
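The reliability coefficients (α) reported for each scale in this section are Cronbach’s alpha, computed from the raw item responses. As a minimal sketch of the standard formula, assuming Python with NumPy and synthetic illustrative data (not the study’s actual responses):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) response matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(summed scale))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic example: 100 respondents, 6 correlated five-point Likert items
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))                    # shared underlying trait
noise = rng.normal(scale=0.7, size=(100, 6))          # item-specific error
scores = np.clip(np.round(3 + latent + noise), 1, 5)  # map onto the 1-5 scale
print(f"alpha = {cronbach_alpha(scores):.3f}")
```

With highly intercorrelated items such as these, the coefficient lands well above the conventional 0.7 reliability threshold, comparable to the values reported for the scales in this section.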

3.2.2. Hedonic Motivation

We employed seven items that were adapted from previous studies and adjusted to the AI-powered chatbot context [50,51]. Example statements include “It is fun and enjoyable to share a conversation with AI-powered chatbots” and “I was absorbed in the conversation with AI-powered chatbots”. Participants rated each statement on a 5-point Likert scale ranging from “1 = strongly disagree” to “5 = strongly agree” (M = 2.49, SD = 0.85, α = 0.946).

3.2.3. Technology Motivation

This seven-item scale was adapted to gauge the degree to which consumers engage with AI-powered chatbots to fulfill their technology motivations (e.g., “Using AI-powered chatbots is more efficient than other forms of communication”, “AI-powered chatbots save a tremendous amount of time”) [36]. A five-point Likert scale, spanning from “1 = strongly disagree” to “5 = strongly agree” (M = 3.75, SD = 0.71, α = 0.916), was implemented to assess the responses to each item.

3.2.4. Social Motivation

A six-item scale was utilized to evaluate respondents’ social motivations for engaging with AI-powered chatbots, specifically focusing on aspects like social interaction and social presence (e.g., “In your interactions with chatbot, you are interacting with an intelligent being?”, “In your interactions with chatbot, you are not alone?”) [39]. Participants evaluated each statement on a 5-point Likert scale, with response options ranging from “1 = strongly disagree” to “5 = strongly agree.” (M = 3.56, SD = 0.81, α = 0.903).

3.2.5. Privacy Invasion

This five-item scale was modified to measure university students’ perceptions regarding the potential invasion of privacy by AI-powered chatbots [7,9]. For instance, statements such as “My information can be used in a way I do not foresee” and “The information I submit can be misused” were included in the scale. To evaluate these items, the study employed a 5-point Likert scale spanning from “1 = strongly disagree” to “5 = strongly agree” (M = 3.56, SD = 0.81, α = 0.937).

3.2.6. User Attitude

Participants provided responses to a set of inquiries designed to assess their perspectives on the role of AI-powered chatbots in their everyday interactions. Specifically, they were presented with seven questions aimed at assessing their attitudes toward AI-powered chatbots (e.g., “I am satisfied with chatbot service agent”, “This company’s chatbot service agent did a good job”) [51]. A 5-point Likert scale, ranging from “1 = strongly disagree” to “5 = strongly agree,” was designed to evaluate the items (M = 3.49, SD = 0.75, α = 0.919).

3.2.7. Sustainable Usage

A five-item scale, adapted from prior research, was employed to assess sustainable usage. Example statements include “I will continue to use this company’s chatbot service agent” and “I will use this company’s chatbots for other purposes than my current usage” [36]. All items were rated on a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree; M = 3.51, SD = 0.70, α = 0.854).

3.2.8. User Stickiness

A five-item measure, thoroughly reviewed and adapted from the relevant literature, was used to evaluate user stickiness (e.g., “I intend to keep purchasing products/services from this company’s chatbots”) [52]. The items were assessed using a 5-point Likert scale from “1 = strongly disagree” to “5 = strongly agree” (M = 3.27, SD = 0.86, α = 0.900).

4. Results

4.1. Measurement Model

First, Harman’s one-factor test was conducted to determine whether a single factor accounted for the majority of the variance, which would indicate potential common method bias. The first factor accounted for 33.1% of the total variance, well below the 50% threshold, suggesting that common method bias is not a serious concern in this study. To evaluate the proposed theoretical model and the relationships among the constructs, a rigorous two-step analysis was employed. The validity of the measurement model, construct reliability, and overall model fit were examined using confirmatory factor analysis (CFA) in AMOS 22.0; the structural links between the model components were then examined using structural equation modeling (SEM). The first phase evaluated the measurement model to confirm the reliability and validity of the constructs; the second phase tested the structural relationships between the constructs. Model fit was assessed through both absolute fit indices (χ2/d.f. = 2.067; RMSEA = 0.065; RMR = 0.064) and incremental fit indices (CFI = 0.928; AGFI = 0.864; IFI = 0.930; TLI = 0.917). These indices fell within acceptable thresholds, collectively suggesting that the model adequately fit the data (Table 3). To ensure internal consistency, the study utilized both Cronbach’s alpha and composite reliability (CR). All measures exceeded the acceptable level (Cronbach’s alpha > 0.7, CR > 0.7), indicating excellent reliability across the constructs. Convergent validity was further assessed by examining factor loadings, average variance extracted (AVE), and squared multiple correlations (SMC). All factor loadings were above 0.7, the AVE for each construct exceeded 0.5, and SMC values were similarly above 0.5, confirming strong convergent validity for the measurement model (Table 4).
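To make the reliability and convergent-validity criteria concrete, the following sketch computes Cronbach’s alpha, composite reliability, and AVE with numpy. This is an illustration only, not the authors’ AMOS workflow; the loading values are invented, not taken from Table 4.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha from an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    lam = np.asarray(loadings, dtype=float)
    return float(lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum()))

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam ** 2).mean())

# Hypothetical standardized loadings for one construct
loadings = np.array([0.82, 0.85, 0.79, 0.88, 0.81])
cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)
assert cr > 0.7 and ave > 0.5  # the thresholds applied in this study
```

Under these hypothetical loadings, both criteria are met, mirroring the decision rule applied to each construct in Table 4.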
Discriminant validity was evaluated by comparing the square root of AVE (shown on the diagonal of Table 5) to the inter-construct correlations (off-diagonal values). The square roots of AVE exceeded the correlation coefficients, confirming the distinctiveness of each construct and affirming the robustness of the measurement model.
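The Fornell–Larcker comparison used here can be sketched as follows; the AVE values and correlation matrix below are hypothetical placeholders for illustration, not the figures reported in Table 5.

```python
import numpy as np

def fornell_larcker(ave: np.ndarray, corr: np.ndarray) -> bool:
    """Discriminant validity holds if the square root of each construct's
    AVE exceeds its correlation with every other construct."""
    sqrt_ave = np.sqrt(ave)
    k = len(ave)
    for i in range(k):
        for j in range(k):
            if i != j and sqrt_ave[i] <= corr[i, j]:
                return False
    return True

# Hypothetical AVEs and inter-construct correlations for three constructs
ave = np.array([0.69, 0.72, 0.61])
corr = np.array([[1.00, 0.55, 0.48],
                 [0.55, 1.00, 0.52],
                 [0.48, 0.52, 1.00]])
assert fornell_larcker(ave, corr)
```

With these placeholder values, every diagonal square root of AVE exceeds the off-diagonal correlations, which is exactly the pattern the paper reports for its eight constructs.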

4.2. Structural Model

The model fit indices for the proposed structural model indicate a strong fit (χ2/d.f. = 2.725 < 3; RMSEA = 0.067 < 0.08; RMR = 0.066 < 0.08; CFI = 0.916 > 0.9; AGFI = 0.811 > 0.8; IFI = 0.916 > 0.9; TLI = 0.905 > 0.9). The structural model was then evaluated to test the proposed relationships. Standardized path coefficients provided strong support for the hypotheses. Utilitarian motivation (β = 0.239, p < 0.001), hedonic motivation (β = 0.196, p < 0.001), technology motivation (β = 0.326, p < 0.001), and social motivation (β = 0.210, p < 0.01) all exhibit positive associations with user attitude, supporting Hypotheses 1, 2, 3, and 4. These findings highlight that user attitudes toward AI-powered chatbots are positively shaped by utilitarian, hedonic, technological, and social motivations. In contrast, privacy invasion (β = −0.094, p < 0.05) negatively affects user attitude, supporting Hypothesis 5. Furthermore, user attitude exerts a significant influence on sustainable usage (β = 0.676, p < 0.001) and user stickiness (β = 0.768, p < 0.001), confirming Hypotheses 6 and 7. User stickiness (β = 0.344, p < 0.01) is also positively associated with sustainable usage, providing statistical support for Hypothesis 8 and indicating that user stickiness significantly influences sustainable usage. Figure 2 displays the results of the hypothesis tests, and the path coefficients are summarized in Table 6.

5. Discussion

This study investigates the factors that affect individual users’ attitudes toward AI-powered chatbots in China. The results show that five factors have significant impacts on users’ attitudes toward chatbots: utilitarian motivation (information seeking), hedonic motivation (perceived enjoyment and time passing), technology motivation (media appeal), social motivation (social presence and social interaction), and privacy invasion.
Technology motivation, particularly through the lens of media appeal, emerges as the most significant driver shaping user attitude toward chatbot use. This motivation extends beyond mere convenience; it reflects users’ evolving expectations for immediacy, availability, and frictionless interaction within digitally mediated environments. Unlike traditional interfaces, chatbots embody a form of “ambient technology” that integrates seamlessly into users’ daily routines, responding to the increasing desire for real-time, low-effort service access. This explains why users develop favorable attitudes when chatbots provide on-demand, personalized support through intuitive modalities such as text and voice. In theoretical terms, this finding advances U&G theory by specifying technology motivation as a distinct gratification category that captures users’ need for immediacy, seamless integration into daily routines, and multi-modal accessibility—elements not fully addressed in traditional U&G categories such as entertainment or information-seeking. This motivation, in turn, fosters stronger intentions for sustainable use of chatbot services. These findings align with the conclusions of Gan and Li [36], who highlight the pivotal influence of technology motivation (media appeal) in shaping user interactions and influencing user engagement. This study further underscores the significance of utilitarian motivation, particularly information-seeking behaviors, as a primary driver behind their engagement with chatbot services. As an effective communication tool, chatbots fulfill utilitarian functions by delivering relevant responses and providing essential information that fosters users’ intention to continue using the service. However, this utilitarian value should not be interpreted as a standalone explanatory factor. Instead, its impact appears to be contingent upon the chatbot’s ability to deliver relevant, context-sensitive content that aligns with users’ immediate goals. 
This expands TAM’s perceived usefulness construct by demonstrating that “goal-aligned adaptiveness” is as critical as efficiency in sustaining interaction. These informational capabilities play a pivotal role in shaping user attitude and indirectly impact sustainable usage and customer stickiness, aligning with prior research [6,30]. Akdim et al. [30] specifically highlight that perceived utilitarian motivations within online brand communities play a pivotal role in determining business outcomes, including customer satisfaction and sustained engagement. While this reinforcement of utilitarian motivation confirms established understandings of chatbots as transactional tools, it is important to consider the limited differentiation it provides: chatbots’ utilitarian function overlaps heavily with that of other digital interfaces (e.g., mobile apps, websites), and thus their competitive advantage may rely more on their affective and interactive capabilities than purely on information delivery.
Additionally, this study found that hedonic and social motivations were linked to user attitude and sustainable usage, although the observed connections were relatively weaker. While this corroborates earlier U&G findings [22,53], it invites deeper theoretical reflection. The limited impact of hedonic and social motivations, despite increasing efforts to anthropomorphize chatbots, suggests a potential disconnect between design intention and user perception. Users may still perceive chatbots as functional tools rather than relational agents, underscoring the inherent boundary of emotional engagement with non-human entities. This has implications for theories of social presence and the media equation: even when chatbots simulate humanlike cues, users may not fully ascribe social agency or experience genuine social gratification. From a TAM perspective, these results expand the construct of perceived ease of use to include affective comfort: users may continue using a chatbot not only when it is functionally efficient but also when it is emotionally non-intrusive and socially pleasant. This aligns with previous findings by Ernest Mfumbilwa et al. [15], which indicate that, besides the increasing importance of various social factors in promoting AI adoption, hedonic gratification plays a significant role in promoting the adoption of AI across business activities. Moreover, the result that social interaction and social presence enhance user attitude points to the evolving expectations of users in digitally mediated environments. Previous studies have indicated that, within the landscape of developing countries, social value emerges as a pivotal factor in shaping consumer behaviors [3,15].
Furthermore, findings from the SEM analysis reaffirm that privacy invasion is a crucial determinant that negatively affects customer attitude and could diminish intentions for sustainable use of chatbot services [12]. The findings suggest that when businesses fail to adequately address users’ privacy protection expectations, heightened perceived privacy risks can hinder customer satisfaction, as predicted by the Expectation–Confirmation Model. This, in turn, may hinder users’ sustained engagement with chatbot services and negatively impact customer stickiness. However, users appear to experience a privacy paradox: they are drawn to the convenience and responsiveness of chatbots but remain apprehensive about how their data is used. This duality suggests that privacy concerns may not lead to outright rejection of chatbot services but may instead manifest as restrained or conditional usage. Additionally, the analysis revealed that user attitude positively influences both user stickiness and sustainable usage of chatbot services. These results underscore the critical role of user attitude in fostering sustained use of, and attachment to, AI-powered chatbots. Notably, a consumer’s sustainable intention to use AI-powered chatbots does not materialize unless they exhibit a positive attitude toward the platform [2,3]. The robust correlation between user satisfaction and stickiness further suggests that when users hold favorable perceptions of the brand and their expectations are met, satisfaction levels increase, which in turn promotes future sustainable usage. Customers who experience high levels of satisfaction are more inclined to demonstrate brand stickiness and continue utilizing its services [5,8]. Theoretically, this resonates with attitude–behavior consistency models, in which favorable experiences continuously strengthen attitude–behavior alignment.
As anticipated, the study also found that user stickiness positively affects the sustainable usage of AI-powered chatbots, reinforcing the importance of user stickiness in ensuring sustained engagement with these services. These insights point to the value of developing hybrid models that integrate cognitive, emotional, and relational factors to better predict long-term adoption and loyalty in human–AI interactions.

6. Implications, Limitations, and Directions for Future Research

6.1. Theoretical Implications

This research contributes to the ongoing application and contextual refinement of U&G theory by situating it within the rapidly evolving domain of AI-powered chatbots [23]. Although prior research has employed the U&G, TAM, and ECM frameworks across various digital platforms, relatively few studies have systematically investigated how these models function in the context of intelligent conversational agents embedded within business communication. While the proliferation of AI-driven chatbots has transformed media consumption habits, empirical research remains limited in exploring the specific motivations users derive from corporate chatbot services. In light of the increasing adoption of AI into business communication, offering innovative solutions to meet diverse customer demands, this study enhances the traditional U&G framework by identifying and examining four primary user motivations associated with top brands’ commercial chatbots: social motivations, hedonic motivations, utilitarian motivations, and technology motivations. This nuanced categorization reflects the multifaceted roles chatbots now play in shaping customer experiences and expectations, offering a conceptual refinement of the U&G framework in light of AI integration. Second, this study provides a detailed examination of privacy invasion, highlighting its influence on user attitude toward chatbot services. The results reveal that while users benefit from various motivations, such as fulfilling informational needs and experiencing entertainment through AI technology, they simultaneously express significant apprehensions about privacy invasion. Specifically, users are concerned that their personal data may be exploited or misused in unforeseen ways, leading to uncertainty and discomfort during interactions with chatbots, particularly in commercial transactions. 
These findings highlight the imperative for businesses to implement robust privacy measures to mitigate user anxiety, address data security concerns, and cultivate trust in AI-powered services. By empirically demonstrating how privacy-related anxieties coexist with gratification-seeking behaviors, this study offers a conceptual bridge between gratification theory and risk perception, thus enriching the explanatory power of U&G and extending its relevance to trust-sensitive AI applications. Finally, this study addresses an omission in the U&G literature by linking user motivations not only to media usage but also to critical business outcomes, such as user attitude and stickiness. The majority of the existing U&G literature has predominantly centered on the technological attributes of media [23,54], often neglecting the critical linkages between these technological features and key business outcomes, including user attitude and user stickiness. While the role of chatbots in enhancing customer engagement, particularly in marketing communications like luxury branding [1], is well recognized, there remains a gap in studies exploring how technology motivations impact these business outcomes. By mapping these motivational pathways, this study contributes to conceptual innovation within the U&G paradigm, underscoring its utility in decoding user behavior in technologically mediated business ecosystems. In doing so, this study addresses this gap by elucidating how motivations derived from chatbot technology shape user perceptions and behaviors, with profound implications for customer attitudes and stickiness.

6.2. Practical Implications

First, the findings underscore the critical role of smart media appeal in driving customer satisfaction. To meet or exceed user expectations, corporate service providers should adopt specific chatbot design strategies such as minimizing response latency, incorporating natural language processing to enable more human-like interactions, and ensuring intuitive interface navigation. Chatbots should be optimized to recognize common customer intents and provide contextually relevant, time-saving solutions. For example, Sephora’s Facebook Messenger chatbot allows customers to book in-store makeover appointments within seconds and offers tailored product recommendations, reducing the time needed for service requests. Incorporating fallback mechanisms (e.g., seamless handoff to human agents) when chatbot limitations are reached can also help maintain user trust and prevent frustration. Second, the study identifies privacy invasion as a key concern that negatively influences user attitudes. Beyond general calls for data protection, companies should implement explicit privacy communication strategies, such as privacy-by-design frameworks, just-in-time notifications during sensitive data exchanges, and clearly worded consent dialogues. Additionally, allowing users to control the level of data shared and access logs of past interactions can further reinforce perceptions of safety and control. These strategies can increase transparency, reassure users about data handling practices, and mitigate privacy-related anxieties, particularly in commercial chatbot applications involving personal or financial information. Lastly, to cultivate sustainable use and enhance customer stickiness, brand managers must tailor chatbot experiences based on a granular understanding of user motivations. Rather than offering one-size-fits-all experiences, businesses should segment chatbot features based on different user motivation profiles. 
Utilitarian users may prefer concise, goal-oriented support, while hedonic users may respond better to gamified features or playful conversational styles. Socially motivated users may appreciate community elements or peer recommendations, whereas technology enthusiasts might engage more with cutting-edge features or AI transparency. For example, Duolingo’s AI-driven conversation bots engage hedonic users through gamified language challenges, while H&M’s Ada chatbot streamlines shopping, size inquiries, and return processes for utilitarian users. Personalization strategies, enabled by ethically sourced user data, can dynamically tailor interactions, while periodic sentiment analysis and user feedback collection ensure chatbots continue to evolve with user preferences. Together, these approaches help drive deeper engagement, promote positive emotional responses, and ultimately improve both user satisfaction and stickiness.

6.3. Limitations and Directions for Future Research

Despite the substantial contributions of this pioneering study in advancing the understanding of smart media applications in business communication, several limitations warrant consideration. Firstly, this study did not account for key control variables that may influence user behavior, such as prior technology experience, personality traits (e.g., need for cognition or privacy sensitivity), and digital literacy levels. The omission of these factors may limit the interpretive depth of the findings. Future research should incorporate such variables to better isolate the unique effects of specific motivations on chatbot engagement, thereby offering a more nuanced understanding of the user decision-making process. Secondly, the study did not investigate the privacy paradox phenomenon in the context of chatbots. As Gerber et al. [55] pointed out, users often exhibit a paradoxical tendency to place greater trust in machines than in human agents when sharing personal data, willingly disclosing sensitive information despite expressing apprehensions about privacy risks. Future studies should operationalize constructs such as “heuristic AI trust,” “perceived privacy trade-off,” and “protective discontinuance” to empirically capture this paradox and examine their moderating or mediating roles in chatbot usage.
Thirdly, the assumption that media users must consistently form intentions to develop habitual usage patterns may oversimplify the intricacies of user behavior. As modern digital media continuously evolves, users’ needs may be constantly shifting, potentially driven by the affordances provided by new and emerging technologies [44]. Thus, future studies might benefit from applying U&G 2.0, a more contemporary framework, which posits that traditional U&G models (i.e., “U&G 1.0”) may be insufficient for capturing the underlying motivations driving users’ engagement with social media or AI tools [23]. While U&G 2.0 offers a more adaptive and context-sensitive lens, future research should move beyond conceptual discussions by operationalizing specific constructs such as “dynamic gratification cycles” (tracking changes in user motives over time), “social algorithmic engagement” (user interaction with algorithm-curated content), and “adaptive AI affordance alignment” (fit between evolving AI capabilities and user needs). These constructs could be directly applied to AI functionalities such as proactive content recommendations, emotionally responsive dialogues, and context-aware personalization. Fourthly, an important yet underexplored dimension relates to the role of ethical or moral motivations in shaping user attitudes and behavior in AI-mediated interactions. Although this study primarily focused on utilitarian, hedonic, social, and technology-related drivers, we acknowledge that ethical concerns, such as fairness, transparency, environmental responsibility, and the perceived moral agency of AI systems, may significantly influence user trust, resistance, and advocacy behaviors, especially as AI becomes more embedded in socially sensitive domains. Future research should consider incorporating ethical orientations into theoretical models of AI acceptance and engagement to capture users’ value-driven decision-making processes more comprehensively.
Fifthly, the sample employed in this study was composed exclusively of students from a single university in China. While this cohort may yield valuable insights into the perceptions and behaviors of early adopters of AI-powered chatbot services—particularly younger, digitally literate individuals whose usage patterns may signal broader adoption trends—the homogeneity of the sample limits the external validity and generalizability of the findings. In particular, it may fail to reflect the perspectives of users from other age groups, cultural contexts, educational backgrounds, or levels of digital literacy. To enhance the robustness and applicability of future research, it is essential to incorporate more diverse and representative participant pools, especially those drawn from underrepresented or marginalized demographics. It would be worthwhile to extend the research group to include the Baby Boomer generation, who may have distinct expectations, trust levels, and barriers when interacting with AI-powered tools. Lastly, while this study logically clustered motivations into distinct categories, empirical verification of this structure remains necessary. Additionally, the potential existence of reciprocal causal relationships between motivations, sustainable usage, and customer stickiness was not comprehensively addressed. Future studies should extend this cross-sectional research by employing experimental designs to test these causal links and deepen insights into the mechanisms underlying chatbot usage and customer engagement.

Author Contributions

Conceptualization, H.P.; methodology, H.P., Z.H. and L.W.; writing—draft preparation, H.P. and Z.H.; analysis and interpretation of data, H.P., Z.H. and L.W.; writing—revision and editing, H.P., Z.H. and L.W.; funding acquisition, H.P. and L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Tianjin Philosophy and Social Science Planning Project (Grant No. TJWHSX2302-01). The authors also acknowledge support by the Open Access Publication Funds of the University of Konstanz.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Tianjin University (19CXW035).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chaturvedi, R.; Verma, S. Opportunities and challenges of AI-driven customer service. In Artificial Intelligence in Customer Service: The Next Frontier for Personalized Engagement; Springer: Berlin/Heidelberg, Germany, 2023; pp. 33–71. [Google Scholar] [CrossRef]
  2. Foroughi, B.; Huy, T.Q.; Iranmanesh, M.; Ghobakhloo, M.; Rejeb, A.; Nikbin, D. Why users continue E-commerce chatbots? Insights from PLS-fsQCA-NCA approach. Serv. Ind. J. 2024, 45, 935–965. [Google Scholar] [CrossRef]
  3. Sundjaja, A.M.; Utomo, P.; Colline, F. The determinant factors of continuance use of customer service chatbot in Indonesia e-commerce: Extended expectation confirmation theory. J. Sci. Technol. Policy Manag. 2024, 16, 182–203. [Google Scholar] [CrossRef]
  4. Tsai, W.-H.S.; Liu, Y.; Chuan, C.-H. How chatbots’ social presence communication enhances consumer engagement: The mediating role of parasocial interaction and dialogue. J. Res. Interact. Mark. 2021, 15, 460–482. [Google Scholar] [CrossRef]
  5. Shahzad, M.F.; Xu, S.; An, X.; Javed, I. Assessing the impact of AI-chatbot service quality on user e-brand loyalty through chatbot user trust, experience and electronic word of mouth. J. Retail. Consum. Serv. 2024, 79, 103867. [Google Scholar] [CrossRef]
  6. Ashfaq, M.; Yun, J.; Yu, S.; Loureiro, S.M.C. I, Chatbot: Modeling the determinants of users’ satisfaction and continuance intention of AI-powered service agents. Telemat. Inform. 2020, 54, 101473. [Google Scholar] [CrossRef]
  7. Yang, J.; Chen, Y.-L.; Por, L.Y.; Ku, C.S. A systematic literature review of information security in chatbots. Appl. Sci. 2023, 13, 6355. [Google Scholar] [CrossRef]
  8. Rajaobelina, L.; Prom Tep, S.; Arcand, M.; Ricard, L. Creepiness: Its antecedents and impact on loyalty when interacting with a chatbot. Psychol. Mark. 2021, 38, 2339–2356. [Google Scholar] [CrossRef]
  9. Bhardwaj, V.; Khan, S.S.; Singh, G.; Patil, S.; Kuril, D.; Nahar, S. Risks for Conversational AI Security. Conversational Artif. Intell. 2024, 557–587. [Google Scholar] [CrossRef]
  10. Nguyen, T.H.; Le, X.C. Artificial intelligence-based chatbots–a motivation underlying sustainable development in banking: Standpoint of customer experience and behavioral outcomes. Cogent Bus. Manag. 2025, 12, 2443570. [Google Scholar] [CrossRef]
  11. Pang, H.; Ruan, Y.; Zhang, K. Deciphering technological contributions of visibility and interactivity to website atmospheric and customer stickiness in AI-driven websites: The pivotal function of online flow state. J. Retail. Consum. Serv. 2024, 78, 103795. [Google Scholar] [CrossRef]
  12. Rese, A.; Ganster, L.; Baier, D. Chatbots in retailers’ customer communication: How to measure their acceptance? J. Retail. Consum. Serv. 2020, 56, 102176. [Google Scholar] [CrossRef]
  13. Trivedi, J. Examining the customer experience of using banking chatbots and its impact on brand love: The moderating role of perceived risk. J. Internet Commer. 2019, 18, 91–111. [Google Scholar] [CrossRef]
  14. Pang, H.; Wang, Y. Deciphering dynamic effects of mobile app addiction, privacy concern and cognitive overload on subjective well-being and academic expectancy: The pivotal function of perceived technostress. Technol. Soc. 2025, 81, 102861. [Google Scholar] [CrossRef]
  15. Ernest Mfumbilwa, E.; Mnong’one, C.; Chao, E.; Amani, D. Invigorating continuance intention among users of AI chatbots in the banking industry: An empirical study from Tanzania. Cogent Bus. Manag. 2024, 11, 2419482. [Google Scholar] [CrossRef]
  16. Florenthal, B. Students’ motivation to participate via mobile technology in the classroom: A uses and gratifications approach. J. Mark. Educ. 2019, 41, 234–253. [Google Scholar] [CrossRef]
  17. Xu, X.-Y.; Tayyab, S.M.U.; Jia, Q.; Huang, A.H. A multi-model approach for the extension of the use and gratification theory in video game streaming. Inf. Technol. People 2023, 38, 137–179.
  18. Luo, X.; Tong, S.; Fang, Z.; Qu, Z. Frontiers: Machines vs. humans: The impact of artificial intelligence chatbot disclosure on customer purchases. Mark. Sci. 2019, 38, 937–947.
  19. Mailizar, M.; Burg, D.; Maulina, S. Examining university students’ behavioural intention to use e-learning during the COVID-19 pandemic: An extended TAM model. Educ. Inf. Technol. 2021, 26, 7057–7077.
  20. Pang, H.; Zhang, K. Determining influence of service quality on user identification, belongingness, and satisfaction on mobile social media: Insight from emotional attachment perspective. J. Retail. Consum. Serv. 2024, 77, 103688.
  21. Aguirre-Rodriguez, A.; Bagozzi, R.P.; Torres, P.L. Beyond craving: Appetitive desire as a motivational antecedent of goal-directed action intentions. Psychol. Mark. 2021, 38, 2169–2190.
  22. Widagdo, B.; Roz, K. Hedonic shopping motivation and impulse buying: The effect of website quality on customer satisfaction. J. Asian Financ. Econ. Bus. 2021, 8, 395–405.
  23. Camilleri, M.A.; Falzon, L. Understanding motivations to use online streaming services: Integrating the technology acceptance model (TAM) and the uses and gratifications theory (UGT). Span. J. Mark. ESIC 2021, 25, 217–238.
  24. Yin, J.; Goh, T.-T.; Hu, Y. Interactions with educational chatbots: The impact of induced emotions and students’ learning motivation. Int. J. Educ. Technol. High. Educ. 2024, 21, 47.
  25. Kim, J.S.; Lee, T.J.; Kim, N.J. What motivates people to visit an unknown tourist destination? Applying an extended model of goal-directed behavior. Int. J. Tour. Res. 2021, 23, 13–25.
  26. Pang, H.; Hu, Z. Detrimental influences of social comparison and problematic WeChat use on academic achievement: Significant role of awareness of inattention. Online Inf. Rev. 2025, 49, 552–569.
  27. Bagozzi, R.P. The self-regulation of attitudes, intentions, and behavior. Soc. Psychol. Q. 1992, 55, 178–204.
  28. Ashraf, A.R.; Thongpapanl, N. Connecting with and converting shoppers into customers: Investigating the role of regulatory fit in the online customer’s decision-making process. J. Interact. Mark. 2015, 32, 13–25.
  29. Anifa, N.; Sanaji, S. Augmented reality users: The effect of perceived ease of use, perceived usefulness, and customer experience on repurchase intention. J. Bus. Manag. Rev. 2022, 3, 252–274.
  30. Akdim, K.; Casaló, L.V.; Flavián, C. The role of utilitarian and hedonic aspects in the continuance intention to use social mobile apps. J. Retail. Consum. Serv. 2022, 66, 102888.
  31. Thongpapanl, N.; Ashraf, A.R.; Lapa, L.; Venkatesh, V. Differential effects of customers’ regulatory fit on trust, perceived value, and m-commerce use among developing and developed countries. J. Int. Mark. 2018, 26, 22–44.
  32. Ashraf, A.R.; Razzaque, M.A.; Thongpapanl, N.T. The role of customer regulatory orientation and fit in online shopping across cultural contexts. J. Bus. Res. 2016, 69, 6040–6047.
  33. Wang, K.-Y.; Ashraf, A.R.; Thongpapanl, N.; Iqbal, I. How perceived value of augmented reality shopping drives psychological ownership. Internet Res. 2025, 35, 1213–1251.
  34. Schmuck, D.; Karsay, K.; Matthes, J.; Stevic, A. “Looking up and feeling down”. The influence of mobile social networking site use on upward social comparison, self-esteem, and well-being of adult smartphone users. Telemat. Inform. 2019, 42, 101240.
  35. Tamborini, R.; Eden, A.; Bowman, N.D.; Grizzard, M.; Weber, R.; Lewis, R.J. Predicting media appeal from instinctive moral values. Mass Commun. Soc. 2013, 16, 325–346.
  36. Gan, C.; Li, H. Understanding the effects of gratifications on the continuance intention to use WeChat in China: A perspective on uses and gratifications. Comput. Hum. Behav. 2018, 78, 306–315.
  37. Johnson, E.K.; Hong, S.C. Instagramming social presence: A test of social presence theory and heuristic cues on Instagram sponsored posts. Int. J. Bus. Commun. 2023, 60, 543–559.
  38. Kreijns, K.; Xu, K.; Weidlich, J. Social presence: Conceptualization and measurement. Educ. Psychol. Rev. 2022, 34, 139–170.
  39. Araujo, T. Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Comput. Hum. Behav. 2018, 85, 183–189.
  40. Chakraborty, U.; Biswal, S.K. Is digital social communication effective for social relationship? A study of online brand communities. J. Relatsh. Mark. 2024, 23, 94–118.
  41. Chong, S.-E.; Lim, X.-J.; Ng, S.I.; Kamal Basha, N. Unlocking the enigma of social commerce discontinuation: Exploring the approach and avoidance drivers. Mark. Intell. Plan. 2025, 43, 952–976.
  42. Wang, K.-Y.; Ashraf, A.R.; Thongpapanl, N.T.; Nguyen, O. Influence of social augmented reality app usage on customer relationships and continuance intention: The role of shared social experience. J. Bus. Res. 2023, 166, 114092.
  43. Söderlund, M. Employee encouragement of self-disclosure in the service encounter and its impact on customer satisfaction. J. Retail. Consum. Serv. 2020, 53, 102001.
  44. Franque, F.B.; Oliveira, T.; Tam, C.; Santini, F.d.O. A meta-analysis of the quantitative studies in continuance intention to use an information system. Internet Res. 2021, 31, 123–158.
  45. Shao, Z.; Zhang, L.; Chen, K.; Zhang, C. Examining user satisfaction and stickiness in social networking sites from a technology affordance lens: Uncovering the moderating effect of user experience. Ind. Manag. Data Syst. 2020, 120, 1331–1360.
  46. Khairawati, S. Effect of customer loyalty program on customer satisfaction and its impact on customer loyalty. Int. J. Res. Bus. Soc. Sci. 2020, 9, 15–23.
  47. Ashraf, A.R.; Thongpapanl, N.T.; Spyropoulou, S. The connection and disconnection between e-commerce businesses and their customers: Exploring the role of engagement, perceived usefulness, and perceived ease-of-use. Electron. Commer. Res. Appl. 2016, 20, 69–86.
  48. Van Lierop, D.; Badami, M.G.; El-Geneidy, A.M. What influences satisfaction and loyalty in public transport? A review of the literature. Transp. Rev. 2018, 38, 52–72.
  49. Jung, J.-H.; Shin, J.-I. The effect of choice attributes of internet specialized banks on integrated loyalty: The moderating effect of gender. Sustainability 2019, 11, 7063.
  50. Pöyry, E.; Parvinen, P.; Malmivaara, T. Can we get from liking to buying? Behavioral differences in hedonic and utilitarian Facebook usage. Electron. Commer. Res. Appl. 2013, 12, 224–235.
  51. Chung, M.; Ko, E.; Joung, H.; Kim, S.J. Chatbot e-service and customer satisfaction regarding luxury brands. J. Bus. Res. 2020, 117, 587–595.
  52. Godey, B.; Manthiou, A.; Pederzoli, D.; Rokka, J.; Aiello, G.; Donvito, R.; Singh, R. Social media marketing efforts of luxury brands: Influence on brand equity and consumer behavior. J. Bus. Res. 2016, 69, 5833–5841.
  53. Kang, Y.J.; Lee, W.J. Effects of sense of control and social presence on customer experience and e-service quality. Inf. Dev. 2018, 34, 242–260.
  54. Pang, H.; Zhao, Y.; Yang, Y. Struggling or shifting? Deciphering potential influences of cyberbullying perpetration and communication overload on mobile app switching intention through social cognitive approach. Inf. Process. Manag. 2025, 62, 104167.
  55. Gerber, N.; Gerber, P.; Volkamer, M. Explaining the privacy paradox: A systematic review of literature investigating privacy attitude and behavior. Comput. Secur. 2018, 77, 226–261.
Figure 1. The conceptual research model.
Figure 2. Path analysis result of the structural model. Note: * p < 0.05; ** p < 0.01; *** p < 0.001.
Table 1. Summary of demographic statistics (n = 735).

                                            Frequency      %
Gender
  Males                                         375       51.0
  Females                                       360       49.0
Age
  18–21                                         375       51.0
  22–25                                         320       43.5
  26–29                                          35        4.8
  30–33                                           5        0.7
Have you ever used an AI-powered chatbot?
  Yes                                           663       90.2
  No                                             72        9.8
How often do you use AI-powered chatbots?
  Almost every day                              125       17.0
  Several times a week                          300       40.8
  Several times a month                         195       26.5
  Very rarely                                   115       15.7
Table 2. Measurement and questionnaire.

Utilitarian motivation (source: [50])
(1) I use AI-powered chatbots to find what I’m looking for.
(2) AI-powered chatbots help me find the information I need.
(3) I like using AI-powered chatbots to search for information that meets my needs.
(4) AI-powered chatbots provide sufficient information.
(5) Through AI-powered chatbots, I get the information I need on time.
(6) Information provided by AI-powered chatbots is useful.

Hedonic motivation (source: [50,51])
(1) It is fun and enjoyable to have conversations with AI-powered chatbots.
(2) I am absorbed when conversing with AI-powered chatbots.
(3) I enjoy passing the time using AI-powered chatbots.
(4) Compared to other activities, using AI-powered chatbots is truly enjoyable.
(5) I enjoy using AI-powered chatbots for their own sake, not just for any specific information.
(6) Conversations with AI-powered chatbots are exciting.
(7) When I’m bored, using AI-powered chatbots makes me happy.

Technology motivation (source: [36])
(1) Using AI-powered chatbots is more efficient than other forms of communication.
(2) AI-powered chatbots save a tremendous amount of time.
(3) Interaction with AI-powered chatbots does not require much mental effort.
(4) I find AI-powered chatbots easy to use.
(5) Using AI-powered chatbots helps me accomplish things more quickly.
(6) Using AI-powered chatbots increases my productivity.
(7) I like AI-powered chatbots because they allow me to communicate with others immediately.

Social motivation (source: [39])
(1) When I interact with AI-powered chatbots, I feel like I am engaging with an intelligent being.
(2) When I interact with AI-powered chatbots, I feel that I am not alone.
(3) When I interact with AI-powered chatbots, I feel that an intelligent being is responding to me.
(4) When I interact with AI-powered chatbots, I can be myself and show who I really am.
(5) I feel good when AI-powered chatbots agree with my comments.
(6) When using AI-powered chatbots, I feel like I am in a virtual reality.

Privacy invasion (source: [7,9])
(1) My information can be used in ways I do not foresee.
(2) The information I submit can be misused.
(3) There is too much uncertainty associated with using AI-powered chatbots.
(4) When I use AI-powered chatbots, I think my private information might be used for big-data model training.
(5) There are too many uncertainties in using AI-powered chatbots.

User attitude (source: [51])
(1) I am satisfied with AI-powered chatbots.
(2) AI-powered chatbots perform well.
(3) AI-powered chatbots meet my expectations.
(4) I am happy with AI-powered chatbots.
(5) I think using AI-powered chatbots is wise.
(6) I think using AI-powered chatbots is beneficial.
(7) I think using AI-powered chatbots is rewarding.

Sustainable usage (source: [36])
(1) I will continue to use this company’s AI-powered chatbots.
(2) I will use this company’s AI-powered chatbots for purposes other than my current usage.
(3) I will explore the company’s AI-powered chatbots other than the one(s) that I’m currently using.
(4) I intend to keep using this company’s AI-powered chatbots rather than switch to alternative tools.
(5) I will always try to use the company’s other AI-powered chatbots in my daily life.

User stickiness (source: [52])
(1) I intend to keep purchasing products/services from this company’s AI-powered chatbots.
(2) I will recommend this company’s AI-powered chatbots to others.
(3) I consider myself loyal to this company’s AI-powered chatbots.
(4) In the future, I will maintain or increase the frequency of my visits to this company’s AI-powered chatbots, and even extend my stay time.
(5) I’m satisfied with the company’s AI-powered chatbots I’m using now and wouldn’t switch to another platform.
Table 3. Fit indices for the measurement model.

Model Fit Measure                           Criterion    Index Value    Good Model Fit (Y/N)
Absolute fit indices
  RMSEA                                     <0.08        0.065          Y
  RMR                                       <0.08        0.064          Y
  χ²/d.f. (χ² = 2195.656, d.f. = 1062)      <3           2.067          Y
Incremental fit indices
  CFI                                       >0.9         0.928          Y
  AGFI                                      >0.8         0.864          Y
  IFI                                       >0.9         0.930          Y
  TLI                                       >0.9         0.917          Y
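The normed chi-square reported in Table 3 is simply χ² divided by its degrees of freedom, and each index is judged against its conventional cutoff. A minimal sketch reproducing the table's arithmetic (the `fit_indices` dictionary just restates the reported values):

```python
# Reproduce the normed chi-square (chi^2 / d.f.) from Table 3 and
# check every reported index against its criterion.
chi_square = 2195.656
df = 1062

normed_chi_square = chi_square / df
print(round(normed_chi_square, 3))  # 2.067, below the <3 criterion

# (index value, cutoff, direction) as reported in Table 3
fit_indices = {
    "RMSEA": (0.065, 0.08, "lt"),
    "RMR":   (0.064, 0.08, "lt"),
    "CFI":   (0.928, 0.90, "gt"),
    "AGFI":  (0.864, 0.80, "gt"),
    "IFI":   (0.930, 0.90, "gt"),
    "TLI":   (0.917, 0.90, "gt"),
}
all_pass = all(
    value < cutoff if rule == "lt" else value > cutoff
    for value, cutoff, rule in fit_indices.values()
) and normed_chi_square < 3
print(all_pass)  # True: every index meets its criterion
```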
Table 4. Statistical outcomes of confirmatory factor analysis.

Constructs and Items             Loading (>0.7)   SMC (>0.5)   CR (>0.7)   AVE (>0.5)
Utilitarian Motivation (UM)                                    0.940       0.724
  UM1                            0.888            0.789
  UM2                            0.849            0.721
  UM3                            0.846            0.716
  UM4                            0.813            0.661
  UM5                            0.845            0.714
  UM6                            0.863            0.745
Hedonic Motivation (HM)                                        0.946       0.716
  HM1                            0.728            0.530
  HM2                            0.813            0.661
  HM3                            0.875            0.766
  HM4                            0.877            0.770
  HM5                            0.846            0.716
  HM6                            0.897            0.805
  HM7                            0.877            0.770
Technology Motivation (TM)                                     0.918       0.617
  TM1                            0.760            0.578
  TM2                            0.815            0.664
  TM3                            0.777            0.604
  TM4                            0.716            0.513
  TM5                            0.831            0.691
  TM6                            0.845            0.714
  TM7                            0.747            0.558
Social Motivation (SM)                                         0.904       0.611
  SM1                            0.763            0.582
  SM2                            0.812            0.659
  SM3                            0.788            0.621
  SM4                            0.854            0.729
  SM5                            0.736            0.542
  SM6                            0.730            0.533
Privacy Invasion (PI)                                          0.937       0.750
  PI1                            0.915            0.837
  PI2                            0.892            0.796
  PI3                            0.852            0.726
  PI4                            0.856            0.733
  PI5                            0.810            0.656
User Attitude (UA)                                             0.918       0.617
  UA1                            0.812            0.659
  UA2                            0.749            0.561
  UA3                            0.754            0.569
  UA4                            0.771            0.594
  UA5                            0.809            0.654
  UA6                            0.795            0.632
  UA7                            0.805            0.648
Sustainable Usage (SU)                                         0.853       0.537
  SU1                            0.730            0.533
  SU2                            0.727            0.529
  SU3                            0.707            0.501
  SU4                            0.749            0.561
  SU5                            0.751            0.564
User Stickiness (US)                                           0.902       0.650
  US1                            0.784            0.615
  US2                            0.743            0.552
  US3                            0.864            0.746
  US4                            0.871            0.759
  US5                            0.759            0.576
Notes: SMC, squared multiple correlation; CR, construct reliability; AVE, average variance extracted. CR indicates internal consistency; loadings, SMC, and AVE indicate convergent validity.
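The CR and AVE columns in Table 4 follow the standard construct-reliability and average-variance-extracted formulas computed from the standardized loadings. A sketch (the helper `cr_ave` is an illustrative name, not from the paper) reproduces the utilitarian-motivation row:

```python
# Construct reliability (CR) and average variance extracted (AVE)
# from standardized factor loadings, using the UM loadings in Table 4.

def cr_ave(loadings):
    """CR = (sum(l))^2 / ((sum(l))^2 + sum(1 - l^2)); AVE = mean(l^2)."""
    squared = [l * l for l in loadings]
    sum_l = sum(loadings)
    error = sum(1 - s for s in squared)  # residual (error) variance per item
    cr = sum_l ** 2 / (sum_l ** 2 + error)
    ave = sum(squared) / len(squared)
    return cr, ave

um_loadings = [0.888, 0.849, 0.846, 0.813, 0.845, 0.863]
cr, ave = cr_ave(um_loadings)
print(round(cr, 3), round(ave, 3))  # 0.940 0.724, matching Table 4
```

The SMC column is just the squared loading of each item, which is why, e.g., UM1's 0.888 loading yields an SMC of 0.789.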
Table 5. Discriminant validity.

        PI      SM      TM      HM      UM      UA      US      SU
PI      0.866
SM      0.190   0.781
TM      0.176   0.289   0.785
HM      0.309   0.473   0.329   0.846
UM      0.144   0.246   0.486   0.310   0.850
UA      0.256   0.401   0.451   0.537   0.434   0.781
US      0.197   0.308   0.347   0.412   0.334   0.430   0.806
SU      0.241   0.377   0.424   0.505   0.408   0.526   0.469   0.733
Notes: PI, Privacy invasion; SM, Social motivation; TM, Technology motivation; HM, Hedonic motivation; UM, Utilitarian motivation; UA, User attitude; US, User stickiness; SU, Sustainable usage. Diagonal elements represent the square root of the AVE; off-diagonal elements represent the correlations between constructs.
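Table 5 applies the Fornell–Larcker criterion: discriminant validity holds when each construct's √AVE (the diagonal) exceeds its correlations with every other construct. A small sketch checking the reported matrix (the helper `discriminant_valid` is an illustrative name):

```python
import math

# Fornell-Larcker check against Table 5: each diagonal entry (sqrt of AVE)
# must exceed all correlations in its row and column.
constructs = ["PI", "SM", "TM", "HM", "UM", "UA", "US", "SU"]
lower = [  # lower-triangular matrix as reported (diagonal = sqrt(AVE))
    [0.866],
    [0.190, 0.781],
    [0.176, 0.289, 0.785],
    [0.309, 0.473, 0.329, 0.846],
    [0.144, 0.246, 0.486, 0.310, 0.850],
    [0.256, 0.401, 0.451, 0.537, 0.434, 0.781],
    [0.197, 0.308, 0.347, 0.412, 0.334, 0.430, 0.806],
    [0.241, 0.377, 0.424, 0.505, 0.408, 0.526, 0.469, 0.733],
]

# Symmetrize into a full correlation matrix.
n = len(constructs)
corr = [[0.0] * n for _ in range(n)]
for i, row in enumerate(lower):
    for j, v in enumerate(row):
        corr[i][j] = corr[j][i] = v

def discriminant_valid(corr):
    """True when every diagonal entry exceeds all off-diagonal correlations."""
    return all(
        corr[i][i] > max(corr[i][j] for j in range(n) if j != i)
        for i in range(n)
    )

print(discriminant_valid(corr))  # True: criterion satisfied for all constructs

# Sanity check: diagonals equal sqrt(AVE) from Table 4, e.g. PI with AVE = 0.750.
assert round(math.sqrt(0.750), 3) == 0.866
```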
Table 6. Path results of the structural model.

Hypothesis   Path                                        Standardized Coefficient   p-Value
H1           Utilitarian motivation → User attitude      0.239                      <0.001
H2           Hedonic motivation → User attitude          0.196                      <0.001
H3           Technology motivation → User attitude       0.326                      <0.001
H4           Social motivation → User attitude           0.210                      0.008
H5           Privacy invasion → User attitude            −0.094                     0.032
H6           User attitude → Sustainable usage           0.676                      <0.001
H7           User attitude → User stickiness             0.768                      <0.001
H8           User stickiness → Sustainable usage         0.344                      <0.001
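Because user attitude reaches sustainable usage both directly (H6) and indirectly through user stickiness (H7, H8), its total effect can be sketched with the standard product-of-coefficients decomposition; this is an illustrative calculation from the Table 6 estimates, not a result reported by the authors:

```python
# Total effect of user attitude on sustainable usage, decomposed from the
# standardized path coefficients in Table 6.
direct = 0.676                   # H6: User attitude -> Sustainable usage
via_stickiness = 0.768 * 0.344   # H7 x H8: attitude -> stickiness -> usage

total = direct + via_stickiness
print(round(via_stickiness, 3), round(total, 3))  # 0.264 0.940
```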
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Pang, H.; Hu, Z.; Wang, L. How Perceived Motivations Influence User Stickiness and Sustainable Engagement with AI-Powered Chatbots—Unveiling the Pivotal Function of User Attitude. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 228. https://doi.org/10.3390/jtaer20030228
