Article

Unpacking AI Chatbot Dependency: A Dual-Path Model of Cognitive and Affective Mechanisms

1 School of English Language and Culture, Xi’an Fanyi University, Xi’an 710105, China
2 School of Foreign Languages, Xi’an Jiaotong University, Xi’an 710049, China
3 School of Humanities and Social Science, Xi’an Jiaotong University, Xi’an 710049, China
* Author to whom correspondence should be addressed.
Information 2025, 16(12), 1025; https://doi.org/10.3390/info16121025
Submission received: 27 October 2025 / Revised: 20 November 2025 / Accepted: 21 November 2025 / Published: 24 November 2025

Abstract

With AI chatbots becoming increasingly embedded in everyday life, growing concerns have emerged regarding users’ psychological dependency on these systems. While previous studies have mainly addressed utilitarian drivers, less attention has been paid to the cognitive and affective mechanisms driving chatbot dependency. Drawing upon Uses and Gratifications Theory, Compensatory Internet Use Theory, and Attachment Theory, this study proposes a dual-path model that investigates how instrumental motivations (e.g., information-seeking, entertainment, efficiency) and affective motivations (e.g., companionship, loneliness, anxiety) influence chatbot dependency through two mediating mechanisms: cognitive reliance and emotional attachment. Using survey data collected from 354 participants, the model was tested through structural equation modeling (SEM). The results indicate that information-seeking and efficiency significantly predict cognitive reliance, which subsequently enhances chatbot dependency. In contrast, entertainment does not exhibit a significant influence. Furthermore, affective motivations such as companionship, loneliness, and anxiety are indirectly linked to dependency through emotional attachment, with loneliness demonstrating the strongest indirect effect. These findings underscore the dual influence of functional cognition and emotional vulnerability in fostering chatbot dependency, emphasizing the importance of emotionally sensitive and ethically responsible AI design.

1. Introduction

As artificial intelligence (AI) chatbots become increasingly integrated into daily life, their role has evolved from simple information providers to socially interactive agents capable of simulating empathy, building user trust, and fostering emotional connections [1]. Modern AI systems, such as OpenAI’s ChatGPT, Google’s Bard, and Baidu’s Ernie Bot, engage users through human-like conversations that simulate empathy, provide companionship, and influence emotional states [2,3]. These interactions not only address informational needs but also fulfill social and emotional gratifications, demonstrating the broadening scope of AI applications in enhancing human–computer relationships. While such advancements offer advantages including enhanced efficiency, ubiquitous access, and perceived emotional support, they have also raised concerns about the psychological consequences of prolonged engagement—particularly the emergence of chatbot dependency, defined as a habitual or emotional reliance on AI systems [4].
Early research on chatbot use has primarily adopted a functionalist and instrumental lens, emphasizing utilitarian outcomes such as information acquisition [5,6], task efficiency [7], and customer service optimization [8,9]. In these contexts, chatbots are positioned as task-oriented tools evaluated through metrics such as response time, interaction fluency, and task completion. However, the emergence of emotionally intelligent agents has shifted scholarly attention toward affective and psychosocial dimensions of human–AI interaction. Recent research indicates that users can form emotional attachments and relational bonds with AI chatbots through perceived empathy and anthropomorphism, which in turn influence their sustained engagement and dependency behaviors [10,11]. These findings suggest that chatbot engagement is not solely driven by rational or instrumental goals but is also shaped by users’ emotional vulnerabilities and relational needs.
Despite the increasing interest in AI chatbot research, the current literature remains theoretically fragmented, particularly regarding how users develop psychological dependency. While some studies have explored either cognitive or affective drivers of chatbot use, few have examined how these mechanisms jointly operate through distinct psychological pathways. Specifically, the mediating roles of cognitive reliance and emotional attachment are often under-theorized or untested, leaving a gap in understanding how user motivations translate into habitual or emotional reliance on AI agents. To address this gap, the present study proposes an integrated dual-path model informed by Uses and Gratifications Theory, Compensatory Internet Use Theory, and Attachment Theory. This framework conceptualizes chatbot dependency as a psychological outcome shaped by two distinct motivational pathways: (1) instrumental motivations (e.g., information-seeking, entertainment, and efficiency) are posited to drive cognitive reliance, and (2) affective motivations (e.g., companionship, loneliness, and anxiety) are theorized to foster emotional attachment. By modeling these parallel mechanisms, the study provides a more comprehensive and theory-driven explanation of how diverse user needs contribute to excessive or maladaptive chatbot use.
To guide the empirical investigation, the study is structured around the following research questions:
RQ1
How do instrumental motivations (e.g., information-seeking, entertainment, and efficiency) predict cognitive reliance on AI chatbots?
RQ2
How do affective motivations (e.g., companionship, loneliness, and anxiety) predict emotional attachment to AI chatbots?
RQ3
How do cognitive reliance and emotional attachment, as mediators, differentially influence the development of AI chatbot dependency?

2. Literature Review

2.1. AI Chatbot Dependency

AI chatbot dependency has emerged as a multidimensional psychological and behavioral phenomenon characterized by users’ habitual or even excessive reliance on AI-driven conversational agents [11]. Rather than being limited to goal-oriented or instrumental interactions, such dependency reflects a deeper level of engagement wherein users increasingly turn to chatbots not only for informational support but also for emotional regulation, social connection, and perceived companionship [12]. This form of dependency can be conceptualized through two interrelated mechanisms: cognitive and affective dependency.
From a cognitive perspective, dependency involves the delegation of reasoning and decision-making responsibilities to AI systems, often manifesting in users’ uncritical acceptance of algorithm-generated outputs. This phenomenon, commonly referred to as automation bias, may result in diminished critical thinking and overreliance on system recommendations [13,14]. In contexts such as education, healthcare, and personal decision-making, such cognitive offloading can significantly undermine user autonomy and engagement, especially when users exhibit algorithm appreciation and overtrust automated systems despite known limitations [15]. In contrast, the affective dimension of dependency centers on users’ emotional involvement with chatbots that simulate empathy, attentiveness, and emotional warmth. Such interactions often foster parasocial relationships, wherein users develop one-sided emotional attachments by projecting interpersonal needs onto the AI agent. These tendencies are especially pronounced among individuals experiencing loneliness, social anxiety, or a lack of fulfilling offline relationships [5,16]. Emotionally intelligent chatbots such as Replika and Woebot are designed to simulate empathy and responsiveness, fostering emotional bonds with users. Evidence shows these systems can reduce psychological distress in the short term, particularly among those experiencing loneliness or anxiety [17]. However, concerns remain about their potential to create a false sense of companionship, displacing real human interaction and reinforcing social withdrawal over time [18].
Taken together, these affective and cognitive processes jointly contribute to the formation of chatbot dependency. To examine these mechanisms systematically, the present study draws upon three theoretical frameworks: Uses and Gratifications Theory, Compensatory Internet Use Theory, and Attachment Theory, which together provide a foundation for modeling both the functional and emotional antecedents of AI chatbot dependency.

2.2. Theoretical Frameworks

This study draws on three complementary theoretical perspectives—Uses and Gratifications Theory (UGT), Compensatory Internet Use Theory (CIUT), and Attachment Theory—to build an integrated framework explaining psychological dependency on AI chatbots. Each framework offers a distinct but converging lens on user behavior, together providing a holistic view of both rational and emotional drivers of chatbot dependency. UGT explains how users engage with chatbots to fulfill instrumental needs such as information acquisition, task efficiency, and entertainment. CIUT adds an affective dimension, emphasizing how emotionally vulnerable users may rely on chatbots as coping mechanisms rather than for intentional gratification. Attachment Theory extends this perspective by explaining how repeated emotionally satisfying interactions can lead to emotional bonds and sustained reliance. Together, these theories inform a dual-path model in which instrumental motivations predict cognitive reliance and affective motivations predict emotional attachment—two psychological mechanisms that jointly shape chatbot dependency. The following subsections elaborate on each theory and its relevance to this framework.

2.2.1. Uses and Gratifications Theory

Uses and Gratifications Theory (UGT), originally proposed by Katz, Blumler, and Gurevitch [19], offers a user-centered framework for understanding how individuals actively engage with media to satisfy various psychological, informational, and social needs. Departing from earlier media effects models that portrayed audiences as passive recipients, UGT emphasizes that media consumers are purposive actors who selectively utilize media channels to achieve specific goals [19]. This perspective has since been applied across a broad spectrum of media contexts, from traditional broadcast systems to contemporary digital technologies, including AI-based platforms [20,21,22,23].
A key contribution of UGT is its classification of user gratifications into five principal types: cognitive (e.g., acquiring knowledge), affective (e.g., experiencing emotional stimulation), personal integrative (e.g., enhancing self-identity or confidence), social integrative (e.g., fostering interpersonal connections), and tension release (e.g., relaxation or escapism) [21]. These gratification types have demonstrated sustained relevance, as users continue to seek similar psychological and social outcomes through increasingly personalized and interactive media environments [24,25]. Building on this conceptual foundation, UGT has been effectively applied to emerging forms of human–AI interaction, particularly in the context of AI-powered conversational agents such as chatbots. These systems deliver real-time responses, tailored dialogue, and affective cues that correspond to diverse user needs, from information-seeking to emotional companionship [26]. Such applications exemplify how intelligent media technologies can operationalize traditional gratification categories, thereby reaffirming the theoretical robustness of UGT in the context of algorithmic and personalized media. Nevertheless, UGT’s explanatory strength primarily lies in understanding deliberate, goal-oriented media use. It is less equipped to account for compulsive, habitual, or affect-driven engagement patterns that are increasingly common in digital consumption. To address these behavioral dynamics, the following section introduces Compensatory Internet Use Theory as a complementary perspective.

2.2.2. Compensatory Internet Use Theory

Compensatory Internet Use Theory (CIUT) offers a psychological framework for understanding technology use as a coping strategy in response to negative life circumstances or emotional distress. Developed by Kardefelt-Winther [27], the theory challenges the traditional assumption that media use is always driven by gratification-seeking. Instead, it posits that individuals may turn to digital media, such as social platforms, games, or AI systems, not primarily to pursue pleasure, but to alleviate negative emotions such as loneliness, stress, anxiety, or boredom. Central to CIUT is the distinction between goal-directed use and compensatory use. The former aligns with voluntary engagement based on utility or entertainment, whereas the latter emerges as a maladaptive or emotion-driven behavior aimed at avoiding unpleasant psychological states [28]. Empirical studies have shown that individuals experiencing higher levels of emotional distress are more likely to report excessive or habitual use of digital technologies, particularly when offline coping resources are limited [28,29].
Recent research has extended CIUT to explain interactions with emotionally responsive technologies, including AI chatbots. These systems offer simulated empathy, nonjudgmental dialogue, and 24/7 availability, making them appealing tools for users seeking relief from social isolation, anxiety, or low self-worth [30,31]. For some users, particularly those lacking supportive human relationships, chatbots may serve as compensatory agents that provide a sense of connection or control. CIUT offers a lens to understand how emotional vulnerabilities, not just goal-oriented motivations, can drive media use. Building on this affective perspective, the next section introduces Attachment Theory to explain emotional bonds with non-human agents.

2.2.3. Attachment Theory

Attachment Theory, originally developed by Bowlby [32], explains how emotional bonds between individuals, particularly between infants and caregivers, form and influence behavior. The theory posits that humans are biologically predisposed to seek proximity to attachment figures, especially during times of stress or uncertainty, to gain a sense of security and emotional regulation. Over time, these patterns of attachment generalize beyond caregivers to romantic partners, friends, and, more recently, mediated entities such as pets, fictional characters, and intelligent technologies. In the context of human–technology interaction, attachment theory has been extended to explore how users develop emotional connections with non-human agents that simulate responsiveness, empathy, or availability [33]. Technologies perceived as emotionally supportive, such as AI chatbots, can fulfill attachment-related functions by offering companionship, consistent presence, and perceived understanding. These agents may act as “surrogate attachment figures” for individuals experiencing social isolation, anxiety, or emotional vulnerability [34,35].
Recent research has found that emotionally expressive AI systems can trigger affective responses similar to human attachment, leading to increased feelings of trust, empathy, and dependence [36,37]. Users may attribute anthropomorphic qualities to chatbots and form parasocial relationships, characterized by one-sided emotional involvement. These emotional bonds can deepen over time, particularly when chatbots engage in consistent, personalized, and emotionally attuned dialogue [38]. Attachment Theory thus provides a compelling framework for understanding how users form emotional attachment to AI chatbots, beyond utilitarian or compensatory motivations. This perspective is especially relevant in the age of emotionally intelligent machines that increasingly mimic human relational behavior.
These insights provide a theoretical foundation for understanding emotional attachment in human–AI interaction. Building on this foundation, the following section introduces the conceptual variables and hypotheses that structure the present study.

2.3. Conceptual Variables and Hypotheses Development

To clarify the conceptual framework of the proposed model, this section introduces the key variables investigated in the study. Each construct is defined with reference to relevant theoretical and empirical sources. Table 1 summarizes their operational definitions and supporting literature. The variables are organized by their functional roles in the model—namely, instrumental motivations, affective motivations, and mediators. The following subsections elaborate on each construct and present the corresponding hypotheses.

2.3.1. Cognitive Reliance

Cognitive reliance refers to the extent to which individuals delegate cognitive tasks, such as information processing, problem-solving, and decision-making, to external agents or tools [39]. It represents a shift from autonomous mental effort toward dependence on external systems, often motivated by convenience, cognitive efficiency, or perceived authority [46]. While this phenomenon has traditionally been studied in contexts like automation and decision support systems, it has become increasingly salient in the age of intelligent technologies. In particular, AI-based conversational agents have become common targets of cognitive delegation. Recent empirical studies have shown that users tend to rely more on chatbots when they perceive the system as competent, efficient, and accurate. Küper and Krämer [47] found that users interacting with anthropomorphic AI assistants were significantly more likely to develop trust and rely on system-generated suggestions, particularly when the chatbot exhibited socially intelligent and emotionally responsive behavior. Similarly, Liao et al. [48] demonstrated that productivity-oriented chatbots elicited strong user reliance when anthropomorphic cues, such as conversational tone and personality, were combined with system recommendations that enhanced perceived task efficiency. Over time, this repeated acceptance may result in automation bias, whereby users begin to reduce critical evaluation and default to AI-generated content [49]. As chatbot integration deepens in daily cognitive activities, users may even develop patterns of emotional or psychological dependency [50], highlighting the need to examine long-term implications of such reliance. Based on this rationale, the following hypothesis is proposed:
H1. 
Cognitive reliance positively predicts AI chatbot dependency.

2.3.2. Emotional Attachment

Emotional attachment refers to a user’s affective bond with a technological agent, marked by feelings of closeness, comfort, and emotional security [40]. Originally derived from interpersonal attachment theory [32], this construct has been extended to human–AI interactions, where users perceive AI systems, such as chatbots, as emotionally responsive or socially present [51]. Unlike mere satisfaction or preference, emotional attachment reflects a deeper psychological connection that can influence continued engagement and even dependency. Recent empirical studies provide robust support for the role of emotional attachment in AI-mediated contexts. For instance, Hastuti [52] found that users interacting with emotionally expressive chatbots reported stronger attachment feelings and exhibited signs of parasocial bonding. Similarly, Edalat et al. [53] showed that emotional vulnerability, such as anxiety or loneliness, heightened users’ tendency to form affective connections with chatbots, particularly when these agents simulated empathy or companionship. These attachments can be powerful, often fulfilling unmet social needs and contributing to prolonged or excessive chatbot use [54]. Given its capacity to shape user attitudes and behavior, emotional attachment is theorized to function as a key affective mechanism in the development of chatbot dependency [51]. Users who form strong emotional bonds with chatbots are more likely to seek comfort or support from them, even in non-utilitarian contexts. Accordingly, the following hypothesis is proposed:
H2. 
Emotional attachment positively predicts AI chatbot dependency.

2.3.3. Information-Seeking

Information-seeking refers to the deliberate effort to acquire relevant, accurate, or novel information to fulfill one’s cognitive needs [41]. This motivation aligns with the instrumental dimension of the UGT, wherein users engage with media technologies in a goal-directed manner to gain knowledge or solve problems. In the context of AI-mediated interaction, chatbots have emerged as accessible and responsive tools for information retrieval, often replacing traditional search engines or static content sources. For instance, Hong [55] observed that users frequently turned to AI chatbots to address perceived insufficiencies in domain-specific knowledge, particularly in risk-related contexts. Liao et al. [56] further demonstrated that motivations to seek information from AI agents were distinct from those driving human-based inquiry, with users attributing efficiency, neutrality, and convenience to chatbot interactions. Supporting this, Subaveerapandiyan et al. [57] reported that AI chatbots significantly enhanced users’ information discovery behavior, particularly by reducing search time and cognitive load. As users increasingly depend on chatbots to process, filter, and interpret complex information, this may lead to cognitive reliance, where individuals outsource information processing and decision support to AI systems. Based on this, we propose the following hypothesis:
H3. 
Information-seeking positively predicts cognitive reliance on AI chatbots.

2.3.4. Efficiency

Efficiency refers to the desire to use media technologies to save time, reduce effort, and streamline task completion [42]. Within the UGT framework, it is categorized under instrumental gratifications, emphasizing task-oriented goals such as productivity and convenience [41]. As AI-based tools become increasingly integrated into daily life, this motivation has gained prominence. Specifically, in the chatbot domain, efficiency is frequently cited as a key driver of adoption. For instance, Zhou & Li [58] found that users preferred AI chatbots over traditional search engines due to their faster, more targeted, and less effortful information delivery. Similarly, Pham et al. [59] observed that perceived efficiency strongly predicted AI tool adoption for task management and decision support. When chatbots consistently provide fast, accurate responses with low cognitive demand, users may begin to cognitively rely on them for routine problem-solving. This aligns with findings by Hong [55], who reported that efficiency-driven motivations significantly influenced users’ habitual use of AI systems in health and finance contexts. Building on this, we propose the following hypothesis:
H4. 
Efficiency positively predicts cognitive reliance on AI chatbots.

2.3.5. Entertainment

Entertainment refers to the desire to use media technologies for enjoyment, amusement, and emotional satisfaction [43]. Within the UGT framework, it is categorized as a hedonic gratification, emphasizing pleasure, relaxation, and escapism [19]. As AI chatbots evolve, their ability to provide engaging and enjoyable interactions has become a key factor in their widespread adoption. According to Mei [60], entertainment plays a significant role in driving AI chatbot engagement, as users seek enjoyable, interactive experiences, which increases their overall motivation to use these systems. Similarly, Yang & Lee [61] found that the entertainment value provided by virtual assistants significantly influenced user satisfaction and continued use. In the chatbot context, entertainment motivation can also shape cognitive reliance. When users find chatbots enjoyable, they are more likely to rely on them for both casual and recreational tasks. Park et al. [62] suggested that emotional satisfaction derived from AI interactions reduces cognitive load, which, in turn, encourages users to depend on these systems for various tasks by lowering mental effort. Therefore, we hypothesize:
H5. 
Entertainment positively predicts cognitive reliance on AI chatbots.

2.3.6. Companionship

Companionship refers to an individual’s desire to seek social connection, emotional presence, or perceived relational closeness through mediated interaction [21]. It reflects a social integrative need, wherein users turn to communication technologies not just for utility, but to fulfill interpersonal or affective gaps [63]. In AI-mediated contexts, socially intelligent chatbots offer the potential for companionship by simulating empathy, remembering personal details, or maintaining conversational presence. Empirical studies have shown that users experiencing social isolation or interpersonal deficits may engage with chatbots to fulfill companionship needs. For example, Pani et al. [64] found that companionship motivation significantly predicted emotional attachment to AI agents. Similarly, Zou et al. [65] reported that users who engaged with AI assistants for social presence, rather than productivity, were more likely to form parasocial bonds. Since companionship is inherently relational, it is more likely to influence emotional attachment than task-based reliance. Users motivated by companionship are therefore expected to form stronger emotional attachments to chatbots. Thus, we advance the following hypothesis:
H6. 
Companionship positively predicts emotional attachment to AI chatbots.

2.3.7. Loneliness

Loneliness is defined as a subjective psychological state in which individuals perceive a gap between desired and actual social relationships [44]. Rather than the objective absence of interaction, it reflects unmet social needs and has been consistently linked to increased engagement with media that simulate social connection. In AI-mediated environments, chatbots capable of expressing empathy or conversational presence may serve as surrogate companions for those experiencing loneliness. Empirical studies have demonstrated that lonely individuals are more likely to form emotional bonds with socially responsive chatbots. For example, Peng et al. [66] found that loneliness significantly increased users’ intention to interact with anthropomorphized AI agents, primarily through enhanced parasocial interactions. Similarly, Yan et al. [67] observed that users with higher loneliness levels reported stronger emotional attachment to AI agents, particularly when those agents conveyed emotional warmth or attentiveness. These findings suggest that loneliness can be a precursor to deeper human–AI affective ties. Therefore, users experiencing higher levels of loneliness are more likely to develop emotional attachments to AI chatbots. On the basis of these findings, we posit:
H7. 
Loneliness positively predicts emotional attachment to AI chatbots.

2.3.8. Anxiety

Anxiety is a negative affective state marked by heightened physiological arousal, emotional discomfort, and an increased desire for control in uncertain or threatening situations [45]. Within the domain of media and technology use, anxiety is understood as a psychological trigger that motivates individuals to seek emotionally stabilizing experiences. AI chatbots, offering predictable, nonjudgmental, and responsive interactions, can serve as affective anchors for users experiencing such emotional distress. Recent research provides support for this view. For example, Heng and Zhang [68] found that individuals with higher levels of attachment-related anxiety were more likely to exhibit problematic engagement with conversational AI, with emotional attachment mediating this relationship. Likewise, Wu et al. [69] reported that anxious users demonstrated greater intention to adopt AI chatbots for emotional regulation, particularly when trust and relational consistency were perceived. These findings suggest that anxiety not only increases reliance on AI agents for emotional relief but also facilitates the development of emotional bonds. Building on this, we propose the following hypothesis:
H8. 
Anxiety positively predicts emotional attachment to AI chatbots.

2.4. Conceptual Research Model

Based on the theoretical framework and prior empirical evidence, this study proposes a dual-path model to explain the development of AI chatbot dependency. The model integrates instrumental and affective motivations, highlighting two mediating mechanisms: cognitive reliance and emotional attachment. In the cognitive pathway, information-seeking, efficiency, and entertainment are hypothesized to influence users’ reliance on chatbots for cognitive tasks. In contrast, the affective pathway posits that companionship, loneliness, and anxiety increase users’ emotional attachment to chatbots. Both cognitive reliance and emotional attachment are, in turn, expected to positively predict chatbot dependency. This integrated approach allows for a more nuanced understanding of how both rational and emotional factors contribute to users’ psychological engagement with AI systems. The proposed research model is presented in Figure 1.
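To make the dual-path structure concrete, the sketch below states the hypothesized measurement and structural relations in lavaan-style syntax and estimates them with the open-source Python package semopy. This is an illustrative specification only: the study itself fitted the model in IBM SPSS AMOS (Section 3.3), the data file name is hypothetical, and the item codes follow Table A1.

```python
import pandas as pd
import semopy

# Measurement part (latent =~ indicators) followed by the structural part
# (H3-H5: cognitive path; H6-H8: affective path; H1-H2: mediators -> dependency).
MODEL_DESC = """
InfoSeeking   =~ IS1 + IS2 + IS3
Efficiency    =~ EF1 + EF2 + EF3
Entertainment =~ EN1 + EN2 + EN3
Companionship =~ CO1 + CO2 + CO3
Loneliness    =~ LO1 + LO2 + LO3
Anxiety       =~ AN1 + AN2 + AN3
CogReliance   =~ CR1 + CR2 + CR3
EmoAttachment =~ EA1 + EA2 + EA3
Dependency    =~ AICD1 + AICD2 + AICD3
CogReliance   ~ InfoSeeking + Efficiency + Entertainment
EmoAttachment ~ Companionship + Loneliness + Anxiety
Dependency    ~ CogReliance + EmoAttachment
"""

data = pd.read_csv("chatbot_survey.csv")   # hypothetical item-level survey file
model = semopy.Model(MODEL_DESC)
model.fit(data)                            # maximum likelihood estimation (default)
print(model.inspect(std_est=True))         # path estimates with standardized values
print(semopy.calc_stats(model))            # chi-square/df, CFI, TLI, RMSEA, GFI, etc.
```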

3. Methodology

3.1. Participants and Procedures

A total of 403 responses were collected through the Chinese online survey platform Wenjuanxing between June and July 2025. Recruitment links were distributed via major social media platforms in China, including WeChat, Weibo, Douyin, and Zhihu. Eligibility criteria required participants to be 18 years or older and to have used at least one AI chatbot (e.g., DeepSeek, Baidu Ernie Bot, Doubao, or ChatGPT via third-party access) within the past month. Prior to participation, all respondents were presented with an informed-consent statement describing the study’s purpose, confidentiality safeguards, and their right to withdraw at any time without penalty. Participation was voluntary and anonymous, and no personally identifiable information was collected. After removing incomplete submissions and responses that failed attention-check items or showed signs of careless answering, 354 valid samples were retained for data analysis, resulting in a valid response rate of 87.84%. A detailed summary of participant characteristics and chatbot usage patterns is shown in Table 2.

3.2. Instruments

The survey instrument consisted of 9 constructs measured by 27 items (see Table A1 in Appendix A). All items were adapted from previously validated scales and modified to reflect the context of AI chatbot use. For example, terms such as “media” or “technology” in the original items were replaced with “AI chatbots” to align with the focus of this study. Items measuring information-seeking were adapted from Papacharissi and Rubin [70], Whiting and Williams [71], and Xie et al. [72], reflecting the use of AI chatbots for acquiring information and learning. Items for efficiency were based on LaRose and Eastin [73], Venkatesh et al. [74], and Zhai and Ma [75], focusing on saving time and improving task efficiency through chatbot use. For entertainment, we followed Whiting and Williams [71], Sundar et al. [20], and Seok et al. [76], capturing users’ hedonic use of chatbots. The companionship items were adapted from Wang et al. [77] and Cheng et al. [78], assessing whether users perceive AI chatbots as emotional companions. To measure emotional triggers, we included three items on loneliness adapted from Shen and Wang [79] and Zhang et al. [80], and three items on anxiety from Abd-Alrazaq et al. [81] and Kim et al. [82], reflecting the psychological states that may prompt chatbot use. Cognitive reliance was evaluated using items adapted from LaRose and Eastin [73] and Xie et al. [83], indicating the extent to which users depend on chatbots for decision-making and information retrieval. Emotional attachment was assessed using items adapted from Heng and Zhang [68], exploring users’ emotional bonds with their frequently used AI chatbots. Lastly, items for AI chatbot dependency were drawn from Shawar and Atwell [84], Kwon et al. [85], and Montag and Elhai [86], measuring problematic use and behavioral symptoms of overreliance. All items were rated using a seven-point Likert-type scale ranging from 1 (strongly disagree) to 7 (strongly agree). The questionnaire was originally developed in English and translated into Chinese using the back-translation method to ensure language equivalence. Internal consistency of the constructs was assessed using Cronbach’s alpha, and all scales demonstrated acceptable reliability levels above 0.80.
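As a concrete illustration of the reliability check mentioned above, the short sketch below computes Cronbach’s alpha for one three-item construct; the response values are invented for demonstration and are not the study’s data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative 7-point Likert responses for a three-item construct
responses = np.array([
    [6, 7, 6],
    [5, 5, 6],
    [7, 6, 7],
    [4, 5, 4],
    [6, 6, 5],
])
print(round(cronbach_alpha(responses), 2))  # 0.84 for these illustrative values
```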

3.3. Data Analysis

Structural equation modeling (SEM) was conducted using IBM SPSS AMOS 29.0. Parameters were estimated via the maximum likelihood (ML) method, which is widely used due to its robustness under the assumption of multivariate normality [87]. The sample size (N = 354) met the recommendations for SEM, exceeding the minimum threshold of 150 observations for models of moderate complexity [88]. Following the standard two-step approach, the measurement model was first evaluated through confirmatory factor analysis (CFA). Internal consistency was assessed using Cronbach’s alpha and composite reliability (CR), both exceeding the recommended threshold of 0.70 [89,90]. Convergent validity was confirmed through standardized factor loadings greater than 0.70 and average variance extracted (AVE) values above 0.50 [88]. Discriminant validity was assessed using the heterotrait–monotrait ratio of correlations (HTMT), a method recognized for its higher sensitivity in detecting construct overlap compared to legacy approaches [91]. All HTMT values were below the conservative threshold of 0.85, supporting adequate discriminant validity across constructs. In the structural model evaluation, model fit was assessed using multiple indices: χ²/df < 3.0; CFI, TLI, and GFI > 0.90; and RMSEA and SRMR < 0.08 [87,88,92,93], all indicating acceptable to good model fit. The model’s explanatory power was evaluated through R² values for each endogenous variable, while local effect sizes (f²) were calculated for each predictor, following Cohen’s thresholds for small (0.02), medium (0.15), and large (0.35) effects [94]. Hypotheses were tested by examining the significance of standardized path coefficients.
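To make the construct-validity criteria above concrete, the following minimal Python sketch (not the authors’ AMOS workflow) shows how composite reliability, AVE, and the HTMT ratio can be computed from standardized loadings and an item correlation matrix; the numeric values are illustrative assumptions, not the study’s estimates.

```python
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    errors = 1.0 - loadings ** 2                   # standardized loadings assumed
    return float(loadings.sum() ** 2 / (loadings.sum() ** 2 + errors.sum()))

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of squared standardized loadings."""
    return float(np.mean(loadings ** 2))

def htmt(item_corr: np.ndarray, idx_a: list, idx_b: list) -> float:
    """Heterotrait-monotrait ratio for two constructs, given the item
    correlation matrix and the column indices of each construct's items."""
    hetero = item_corr[np.ix_(idx_a, idx_b)].mean()              # between-construct r
    mono_a = item_corr[np.ix_(idx_a, idx_a)]
    mono_b = item_corr[np.ix_(idx_b, idx_b)]
    within_a = mono_a[np.triu_indices_from(mono_a, k=1)].mean()  # within-construct r
    within_b = mono_b[np.triu_indices_from(mono_b, k=1)].mean()
    return float(hetero / np.sqrt(within_a * within_b))

loads = np.array([0.82, 0.85, 0.88])             # illustrative standardized loadings
print(composite_reliability(loads))               # ~0.89, above the 0.70 benchmark
print(average_variance_extracted(loads))          # ~0.72, above the 0.50 criterion

corr = np.array([                                 # illustrative correlations for two 2-item constructs
    [1.00, 0.72, 0.41, 0.38],
    [0.72, 1.00, 0.43, 0.40],
    [0.41, 0.43, 1.00, 0.69],
    [0.38, 0.40, 0.69, 1.00],
])
print(htmt(corr, [0, 1], [2, 3]))                 # ~0.57, below the 0.85 cutoff
```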

4. Results

4.1. The Measurement Model

Confirmatory factor analysis (CFA) was conducted to assess the reliability and validity of the latent constructs. As presented in Table 3, internal consistency was confirmed with Cronbach’s alpha values ranging from 0.841 to 0.884, all exceeding the recommended threshold of 0.70 [89]. Similarly, composite reliability (CR) values ranged from 0.858 to 0.922, indicating acceptable construct reliability. Convergent validity was supported by standardized factor loadings between 0.799 and 0.907, all above the 0.70 benchmark [88], and average variance extracted (AVE) values between 0.669 and 0.797, surpassing the 0.50 criterion [89], suggesting that each construct explains a majority of the variance in its items. All HTMT values fell below the conservative threshold of 0.85 [91], indicating that the constructs are empirically distinct (see Table 4). Together, the results demonstrate that the measurement model meets established criteria for reliability, convergent validity, and discriminant validity, providing a solid foundation for the structural model analysis.

4.2. The Structural Model and Hypotheses Testing

The structural model was evaluated based on (a) global fit indices, (b) the explained variance (R²) of endogenous constructs, (c) the effect size measured by f², and (d) the statistical significance of standardized path coefficients. These elements together assess the model’s overall adequacy, explanatory power, and the strength of individual relationships.
The structural model demonstrated a satisfactory fit across multiple indices. As shown in Table 5, all fit statistics satisfied conventional thresholds: χ²/df = 2.36, CFI = 0.961, TLI = 0.948, RMSEA = 0.047, SRMR = 0.041, and GFI = 0.931, indicating that the model fits the data well. The model’s predictive power was assessed using R² values for key outcomes. As shown in Table 6, cognitive reliance was predicted by instrumental motivations (R² = 0.523), explaining over 52% of its variance. Emotional attachment was shaped by affective motivations (R² = 0.471), indicating moderate explanatory power. Chatbot dependency, explained by both cognitive reliance and emotional attachment, had a robust R² of 0.592. The f² values, also shown in Table 6, indicate the strength of the relationships between variables. The effects of cognitive reliance on AI chatbot dependency, information-seeking on cognitive reliance, and loneliness on emotional attachment are of medium magnitude, while other relationships demonstrate small effect sizes. The R² and f² values together demonstrate moderate to strong predictive power, surpassing the thresholds proposed by Hair et al. [95] and Cohen [94], thus supporting the model’s validity and predictive strength.
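For reference, the local effect sizes reported above follow Cohen’s f², which is conventionally obtained by re-estimating the model with and without a given predictor (the article reports only the resulting magnitudes); a standard formulation is:

```latex
f^{2} = \frac{R^{2}_{\text{included}} - R^{2}_{\text{excluded}}}{1 - R^{2}_{\text{included}}}
```

Here, R²_included and R²_excluded denote the variance explained in the endogenous construct when the predictor is retained in or dropped from the structural model, with 0.02, 0.15, and 0.35 read as small, medium, and large effects, respectively [94].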
All hypotheses were tested via SEM analysis (see Figure 2 and Table 6). For RQ1, information-seeking (β = 0.422, p < 0.001) and efficiency (β = 0.349, p < 0.001) significantly predicted cognitive reliance, while entertainment (β = 0.071, p = 0.09) did not, supporting H3 and H4 but rejecting H5. For RQ2, loneliness (β = 0.389, p < 0.001), companionship (β = 0.243, p < 0.01), and anxiety (β = 0.221, p < 0.05) all significantly influenced emotional attachment, supporting H6, H7, and H8. For RQ3, both cognitive reliance (β = 0.473, p < 0.001) and emotional attachment (β = 0.360, p < 0.001) significantly affected chatbot dependency, confirming H1 and H2. These findings collectively provide strong empirical support for the proposed framework, illustrating how motivational antecedents shape user cognition and affect, which in turn influence behavioral dependency on AI chatbots.

5. Discussion

This study provides robust empirical support for a dual-pathway model in which instrumental and affective motivations shape users’ dependency on AI chatbots through the mediating roles of cognitive reliance and emotional attachment. The structural model exhibits a good fit and strong predictive power for all key constructs, indicating that chatbot dependency is better understood as the outcome of interacting psychological processes rather than a simple by-product of novelty or overuse.
In the instrumental domain, our results show that information-seeking and efficiency motivations significantly increase users’ cognitive reliance on AI chatbots. This pattern suggests that users tend to treat chatbots as problem-solving partners when facing complex, goal-oriented tasks that require processing large amounts of information. These findings are consistent with prior research emphasizing users’ preference for chatbots as task-oriented tools [96]. For example, Chen et al. [97] found that students and professionals increasingly rely on generative AI for academic and professional tasks, highlighting the functional rationale behind such reliance. Similarly, recent work shows that cognitive offloading to AI systems is often driven by the intent to boost productivity and alleviate cognitive load in complex decision-making contexts [98,99]. Our results extend this line of research by demonstrating that instrumental motivations not only increase chatbot use but also translate into deeper cognitive reliance.
At the same time, the influence of instrumental motivations appears to be context-dependent. Our findings are in line with evidence that users primarily engage generative chatbots for pragmatic purposes, rather than for entertainment [100]. This helps explain why motivations related to task efficiency and information acquisition are more strongly linked to cognitive reliance, whereas more entertainment-oriented motives may play a weaker or non-significant role in driving sustained dependence. Moreover, prior work suggests that users’ reliance on AI systems may decline in tasks requiring emotional engagement or creativity, where AI cannot provide adequate support [101]. Taken together, these results indicate that AI chatbots are most effective as cognitive tools in structured, goal-directed contexts, whereas their instrumental value diminishes in loosely defined or emotionally nuanced situations. Future research could further explore these “gray areas,” where instrumental and affective functions overlap, and where the boundary between work-related and leisure use becomes blurred [102].
On the affective side, our findings show that loneliness, need for companionship, and anxiety significantly shape users’ emotional attachment to AI chatbots, with loneliness exerting the strongest influence. This corroborates prior evidence that individuals experiencing loneliness often seek comfort and companionship through AI-based agents [66]. Similarly, Zhang et al. [103] reported that chatbots are increasingly used to meet emotional needs, with users viewing them as supportive companions. These findings reinforce the view that affective needs are key drivers of emotional attachment to AI.
However, this affective pathway also operates within clear boundaries. Prior studies suggest that while chatbots may temporarily alleviate loneliness, they cannot fully replace human interaction, and their effectiveness may diminish over time [104,105]. Sustained emotional reliance on chatbots may also reduce social interaction and deepen dependence on AI for emotional well-being. Building on Wu et al. [69], who showed that individuals with anxious attachment styles seek emotionally safe interactions with AI in therapeutic contexts, and Zou et al. [65], who found that individuals with social anxiety view AI companions as less emotionally threatening, our findings imply that chatbots may offer a psychologically safer space for some users. Yet the limited emotional intelligence of AI systems raises questions about whether they can genuinely meet the deeper emotional needs of highly vulnerable individuals. Emotional attachment to AI chatbots is therefore nuanced and conditional; it depends on users’ emotional profiles, attachment styles, and interaction context [106]. Future research should examine the long-term emotional consequences of frequent chatbot use, including possible impacts on social skills and mental health.
Finally, the simultaneous influence of cognitive reliance and emotional attachment on chatbot dependency confirms the dual-pathway structure hypothesized in this research. This finding aligns with Huang and Huang [107], who demonstrated that both practical and emotional factors predict sustained AI use. Nevertheless, the relevance of this dual-pathway model seems to be context-dependent. In contexts where task efficiency is paramount, instrumental motivations dominate, whereas emotional attachment may take precedence in more interpersonal or personalized interactions [3,108]. This highlights the contextual variability of chatbot dependency, suggesting that its formation depends on the nature of the task, user needs, and emotional salience of the interaction.
In addition, prior work highlights the psychological dimensions of AI dependence and underscores the central role of emotional attachment in shaping long-term reliance on AI systems [109,110]. This perspective can be further enriched by considering individual differences and cognitive factors alongside affective processes. For instance, individuals high in neuroticism may be more prone to forming strong emotional attachments [68], whereas those with higher cognitive self-regulation may maintain a more utilitarian perspective on chatbot use [111]. These results suggest that emotional attachment should not be viewed in isolation but understood as interacting with cognitive processes. When unbalanced, affective bonds may enhance user engagement but also contribute to overdependence [112].
Overall, chatbot dependency reflects a dynamic interplay between instrumental motivations and affective needs. However, as discussed, this dual-pathway mechanism operates within clear boundaries shaped by contextual factors and individual psychological differences. Recognizing these constraints is essential for interpreting the findings appropriately and guiding responsible AI application and design.

6. Implications

6.1. Theoretical Implications

This study extends dual-process models of media dependency to the context of AI-powered conversational agents, addressing a theoretical gap in existing literature that has primarily focused on traditional media or internet technologies. By integrating cognitive reliance (e.g., productivity, task assistance) and emotional attachment (e.g., companionship, affective comfort) into a unified framework, this research offers a more comprehensive model that captures the multifaceted nature of user-AI relationships. Furthermore, the study highlights the role of motivational antecedents, such as efficiency needs and emotional needs, as key predictors of dependency on AI systems, providing empirical support for the importance of these factors in shaping AI usage behaviors. In addition, the findings suggest directions for future research, particularly regarding the complex interplay between cognitive and emotional motivations in AI engagement, thereby offering a foundation for further theoretical development in the field.

6.2. Practical Implications

For AI developers, platform designers, and policymakers, the findings offer valuable insights for optimizing user experience and ensuring ethical AI deployment. First, understanding that users often develop emotional attachments driven by feelings of loneliness and anxiety underscores the need for ethical design principles that avoid exploiting these vulnerabilities. Features that enhance emotional well-being without promoting over-dependence (e.g., boundary-setting prompts, real-human referrals) may improve long-term user outcomes [113]. Additionally, from a cognitive perspective, the fact that information-seeking and efficiency are core motivators suggests that chatbot systems should prioritize transparency, response accuracy, and usability to sustain trust and reliance [114]. From a policy standpoint, these results may inform the development of guidelines on the psychological risks and benefits of prolonged AI engagement, ensuring that both emotional and cognitive needs are responsibly addressed.

7. Limitations and Future Research Directions

Several limitations of the current study must be acknowledged, and future research can build upon these insights in several important ways. First, the reliance on self-reported data presents potential biases, such as social desirability bias and common method variance, which could affect the accuracy of the findings. Although SEM modeling helps mitigate some statistical threats, future research could enhance causal inference by employing experimental or behavioral-tracking designs, which would provide a more objective measure of AI dependency and its impact on users. Second, the sample used in this study may not fully capture the cultural and demographic diversity of the broader population, limiting the generalizability of the results. Future studies should aim to include more diverse samples, incorporating cross-cultural comparisons and examining different age groups to explore how motivational structures and patterns of emotional attachment to AI vary across cultures and demographics. For example, moderators such as culture or personality traits (e.g., introversion vs. extraversion) may influence how users engage with AI and develop dependency, and these factors should be explored further. Third, the current study primarily focuses on generative AI chatbots. However, different types of AI interfaces, such as embodied agents or voice assistants, may engage distinct dependency mechanisms. Future research should expand beyond chatbots to compare how these various AI systems affect user dependency and emotional engagement. Finally, this study predominantly examines cognitive and emotional motivations behind AI dependency. However, the ethical and practical concerns surrounding long-term AI engagement, such as potential social withdrawal and mental health implications, should be addressed more thoroughly. Future research could investigate the broader impacts of AI dependency on user well-being, exploring outcomes like social isolation, emotional distress, or digital resilience.

8. Conclusions

This study contributes to the understanding of AI chatbot dependency by proposing a dual-pathway model, highlighting both instrumental and emotional motivations. The findings reveal that cognitive reliance is driven by information-seeking and efficiency needs, while emotional attachment is shaped by loneliness, companionship, and anxiety. These two pathways work together to explain sustained user engagement with AI chatbots, suggesting that dependency arises from both functional utility and emotional fulfillment. By advancing theoretical perspectives on AI reliance, this study provides valuable insights into how users interact with AI technologies beyond task completion. Despite these contributions, the study has limitations, such as its cross-sectional design and focus on specific user groups, which suggest the need for future research exploring these dynamics across diverse populations and over extended periods. Such research would deepen our understanding of the long-term evolution of AI dependency and its broader societal implications.

Author Contributions

Conceptualization, N.Z.; methodology, N.Z.; software, N.Z.; validation, X.D.; formal analysis, N.Z.; investigation, N.Z.; resources, N.Z.; data curation, N.Z.; writing—original draft preparation, N.Z.; writing—review and editing, X.D.; visualization, N.Z.; supervision, X.M.; project administration, X.M.; funding acquisition, N.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the 2023 Annual Project of the Shaanxi Province “14th Five-Year Plan” for Educational Science, Grant No. SGH23Y2779.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Xi’an Fanyi University (Protocol Code: 250031; Approval Date: 10 March 2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Questionnaire items used in the study.

Construct | Item | Description | Reference
Information-Seeking | IS1 | I use AI chatbots to obtain information quickly and accurately. | Papacharissi and Rubin [70]
Information-Seeking | IS2 | I rely on AI chatbots to learn new things that interest me. | Whiting and Williams [71]
Information-Seeking | IS3 | AI chatbots help me stay informed about topics I care about. | Xie et al. [72]
Efficiency | EF1 | Using AI chatbots saves me time in completing daily or work-related tasks. | LaRose and Eastin [73]
Efficiency | EF2 | AI chatbots make my work or study more efficient. | Venkatesh et al. [74]
Efficiency | EF3 | I use AI chatbots because they simplify complicated tasks. | Zhai and Ma [75]
Entertainment | EN1 | I use AI chatbots for fun or relaxation. | Whiting and Williams [71]
Entertainment | EN2 | Chatting with AI chatbots is enjoyable. | Sundar et al. [20]
Entertainment | EN3 | I use AI chatbots when I want to relieve boredom. | Seok et al. [76]
Companionship | CO1 | I use AI chatbots because they make me feel accompanied. | Wang et al. [77]
Companionship | CO2 | AI chatbots can act as my companions when I feel alone. | Cheng et al. [78]
Companionship | CO3 | Interacting with AI chatbots makes me feel cared for. |
Loneliness | LO1 | I use AI chatbots when I feel lonely. | Shen and Wang [79]
Loneliness | LO2 | I turn to AI chatbots when I lack someone to talk to. | Zhang et al. [80]
Loneliness | LO3 | When I am alone, I prefer chatting with AI chatbots. |
Anxiety | AN1 | I use AI chatbots to relieve my stress or anxiety. | Abd-Alrazaq et al. [81]
Anxiety | AN2 | Talking with AI chatbots helps calm me down when I feel nervous. |
Anxiety | AN3 | I turn to AI chatbots when I feel significant anxiety or emotional distress. | Kim et al. [82]
Cognitive Reliance | CR1 | I often depend on AI chatbots to make decisions for me. | LaRose and Eastin [73]
Cognitive Reliance | CR2 | I feel uneasy when I cannot access AI chatbots for help. | Xie et al. [83]
Cognitive Reliance | CR3 | I find myself checking AI chatbots even for simple questions. |
Emotional Attachment | EA1 | I feel emotionally connected to the AI chatbot I often use. | Heng and Zhang [68]
Emotional Attachment | EA2 | I miss interacting with AI chatbots when I cannot use them. |
Emotional Attachment | EA3 | I consider my favorite AI chatbot to be an important part of my daily life. |
AI Chatbot Dependency | AICD1 | I find it hard to control the amount of time I spend using AI chatbots. | Shawar and Atwell [84]
AI Chatbot Dependency | AICD2 | I feel restless or anxious when I cannot use AI chatbots. | Kwon et al. [85]
AI Chatbot Dependency | AICD3 | My use of AI chatbots sometimes interferes with my normal activities. | Montag and Elhai [86]

References

1. Murtaza, Z.; Sharma, I.; Carbonell, P. Examining chatbot usage intention in a service encounter: Role of task complexity, communication style, and brand personality. Technol. Forecast. Soc. Change 2024, 209, 123806.
2. Badu, D.; Joseph, D.; Kumar, R.M.; Alexander, E.; Sasi, R.; Joseph, J. Emotional AI and the rise of pseudo-intimacy: Are we trading authenticity for algorithmic affection? Front. Psychol. 2025, 16, 1679324.
3. Richet, J.-L. AI companionship or digital entrapment? Investigating the impact of anthropomorphic AI-based chatbots. J. Innov. Knowl. 2025, 10, 100835.
4. Moylan, K.; Doherty, K. Expert and Interdisciplinary Analysis of AI-Driven Chatbots for Mental Health Support: Mixed Methods Study. J. Med. Internet Res. 2025, 27, e67114.
5. Ciechanowski, L.; Przegalinska, A.; Magnuski, M.; Gloor, P. In the shades of the uncanny valley: An experimental study of human–chatbot interaction. Future Gener. Comput. Syst. 2019, 92, 539–548.
6. Liao, Q.V.; Geyer, W.; Muller, M.; Khazaen, Y. Conversational interfaces for information search. In Understanding and Improving Information Search: A Cognitive Approach; Springer: Berlin/Heidelberg, Germany, 2020; pp. 267–287.
7. Hsu, P.-F.; Nguyen, T.; Wang, C.-Y.; Huang, P.-J. Chatbot commerce—How contextual factors affect Chatbot effectiveness. Electron. Mark. 2023, 33, 14.
8. Adam, M.; Wessel, M.; Benlian, A. AI-based chatbots in customer service and their effects on user compliance. Electron. Mark. 2021, 31, 427–445.
9. Følstad, A.; Brandtzæg, P.B. Chatbots and the new world of HCI. Interactions 2017, 24, 38–42.
10. Zhang, Y.; Yu, Z. Emotional attachment as the key mediator: Expanding UTAUT2 to examine how perceived anthropomorphism in intelligent agents influences the sustained use of DouBao (Cici) among EFL learners. Educ. Inf. Technol. 2025, 30, 1–23.
11. Huang, H.; Shi, L.; Pei, X. When AI Becomes a Friend: The “Emotional” and “Rational” Mechanism of Problematic Use in Generative AI Chatbot Interactions. Int. J. Hum. Comput. Interact. 2025, 41, 1–19.
12. Yankouskaya, A.; Babiker, A.; Rizvi, S.; Alshakhsi, S.; Liebherr, M.; Ali, R. LLM-D12: A Dual-Dimensional Scale of Instrumental and Relational Dependencies on Large Language Models. ACM Trans. Web 2025, 19, 1–33.
13. Logg, J.M.; Minson, J.A.; Moore, D.A. Algorithm appreciation: People prefer algorithmic to human judgment. Organ. Behav. Hum. Decis. Process. 2019, 151, 90–103.
14. Glikson, E.; Woolley, A.W. Human trust in artificial intelligence: Review of empirical research. Acad. Manag. Ann. 2020, 14, 627–660.
15. Dietvorst, B.J.; Simmons, J.P.; Massey, C. Algorithm aversion: People erroneously avoid algorithms after seeing them err. J. Exp. Psychol. Gen. 2015, 144, 114.
16. Schuetzler, R.M.; Grimes, G.M.; Scott Giboney, J. The impact of chatbot conversational skill on engagement and perceived humanness. J. Manag. Inf. Syst. 2020, 37, 875–900.
17. Gaffney, H.; Mansell, W.; Tai, S. Conversational agents in the treatment of mental health problems: Mixed-method systematic review. JMIR Ment. Health 2019, 6, e14166.
18. Vaidyam, A.N.; Wisniewski, H.; Halamka, J.D.; Kashavan, M.S.; Torous, J.B. Chatbots and conversational agents in mental health: A review of the psychiatric landscape. Can. J. Psychiatry 2019, 64, 456–464.
19. Katz, E.; Blumler, J.G.; Gurevitch, M. Uses and gratifications research. Public Opin. Q. 1973, 37, 509–523.
20. Sundar, S.S.; Bellur, S.; Oh, J.; Jia, H.; Kim, H.-S. Theoretical importance of contingency in human-computer interaction: Effects of message interactivity on user engagement. Commun. Res. 2016, 43, 595–625.
21. Rubin, A.M. Uses-and-gratifications perspective on media effects. In Media Effects; Routledge: Abingdon-on-Thames, UK, 2009; pp. 181–200.
22. Ruggiero, T.E. Uses and gratifications theory in the 21st century. Mass Commun. Soc. 2000, 3, 3–37.
23. Chang, Y.; Lee, S.; Wong, S.F.; Jeong, S.-P. AI-powered learning application use and gratification: An integrative model. Inf. Technol. People 2022, 35, 2115–2139.
24. Biru, I.R.; Rahmawati, R.F.; Salsabila, B.A.D. Analysis of Motivation for Continued Use of Meta AI on WhatsApp: Uses and Gratification Theory Approach. J. Artif. Intell. Eng. Appl. (JAIEA) 2025, 4, 2162–2168.
25. Xie, C.; Wang, Y.; Cheng, Y. Does artificial intelligence satisfy you? A meta-analysis of user gratification and user satisfaction with AI-powered chatbots. Int. J. Hum.–Comput. Interact. 2024, 40, 613–623.
26. Pathak, A. AI Chatbots and Interpersonal Communication: A Study on Uses and Gratification amongst Youngsters. IIS Univ. J. Arts 2024, 13, 355–366.
27. Kardefelt-Winther, D. A conceptual and methodological critique of internet addiction research: Towards a model of compensatory internet use. Comput. Hum. Behav. 2014, 31, 351–354.
28. Boursier, V.; Gioia, F.; Musetti, A.; Schimmenti, A. Facing loneliness and anxiety during the COVID-19 isolation: The role of excessive social media use in a sample of Italian adults. Front. Psychiatry 2020, 11, 586222.
29. Servidio, R.; Soraci, P.; Pisanti, R.; Boca, S. From loneliness to problematic social media use: The mediating roles of fear of missing out, self-esteem, and social comparison. Psicol. Soc. 2025, 20, 89–112.
30. Meng, J.; Rheu, M.; Zhang, Y.; Dai, Y.; Peng, W. Mediated social support for distress reduction: AI Chatbots vs. Human. In Proceedings of the ACM on Human-Computer Interaction, Hamburg, Germany, 23–28 April 2023; Volume 7, pp. 1–25.
31. Yao, R.; Qi, G.; Sheng, D.; Sun, H.; Zhang, J. Connecting self-esteem to problematic AI chatbot use: The multiple mediating roles of positive and negative psychological states. Front. Psychol. 2025, 16, 1453072.
  32. Bowlby, J. Attachment and Loss, 3rd ed.; The Hogarth Press and Institute of Psycho-Analysis: London, UK, 1969; Volume 1. [Google Scholar]
  33. Hoffman, G.; Forlizzi, J.; Ayal, S.; Steinfeld, A.; Antanitis, J.; Hochman, G.; Hochendoner, E.; Finkenaur, J. Robot presence and human honesty: Experimental evidence. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, Portland, OR, USA, 2–5 March 2015; pp. 181–188. [Google Scholar]
  34. Złotowski, J.; Yogeeswaran, K.; Bartneck, C. Can we control it? Autonomous robots threaten human identity, uniqueness, safety, and resources. Int. J. Hum. Comput. Stud. 2017, 100, 48–54. [Google Scholar] [CrossRef]
  35. Skjuve, M.; Følstad, A.; Fostervold, K.I.; Brandtzaeg, P.B. My chatbot companion-a study of human-chatbot relationships. Int. J. Hum. Comput. Stud. 2021, 149, 102601. [Google Scholar] [CrossRef]
  36. Shah, T.R.; Purohit, S.; Das, M.; Arulsivakumar, T. Do I look real? Impact of digital human avatar influencer realism on consumer engagement and attachment. J. Consum. Mark. 2025, 42, 416–430. [Google Scholar] [CrossRef]
  37. Wetzels, R.; Wetzels, M.; Grewal, D.; Doek, B. Evoking trust in smart voice assistants. J. Serv. Manag. 2025, 36, 1–27. [Google Scholar] [CrossRef]
  38. Sokolova, K.; Perez, C. You follow fitness influencers on YouTube. But do you actually exercise? How parasocial relationships, and watching fitness influencers, relate to intentions to exercise. J. Retail. Consum. Serv. 2021, 58, 102276. [Google Scholar] [CrossRef]
  39. Parasuraman, R.; Riley, V. Humans and automation: Use, misuse, disuse, abuse. Hum. Factors 1997, 39, 230–253. [Google Scholar] [CrossRef]
  40. Truong, T.T.H.; Chen, J.S. When empathy is enhanced by human–AI interaction: An investigation of anthropomorphism and responsiveness on customer experience with AI chatbots. Asia Pac. J. Mark. Logist. 2025, 37, 1–18. [Google Scholar] [CrossRef]
  41. Sundar, S.S.; Limperos, A.M. Uses and grats 2.0: New gratifications for new media. J. Broadcast. Electron. Media 2013, 57, 504–525. [Google Scholar] [CrossRef]
  42. Rose, S. Towards an Integrative Theory of Human-AI Relationship Development; Association for Information Systems: Atlanta, GA, USA, 2024. [Google Scholar]
  43. Raney, A.A.; Janicke-Bowles, S.H.; Oliver, M.B.; Dale, K.R. Introduction to Positive Media Psychology; Routledge: Abingdon-on-Thames, UK, 2020. [Google Scholar]
  44. Hawkley, L.C.; Cacioppo, J.T. Loneliness matters: A theoretical and empirical review of consequences and mechanisms. Ann. Behav. Med. 2010, 40, 218–227. [Google Scholar] [CrossRef] [PubMed]
  45. Spielberger, C.D. Assessment of state and trait anxiety: Conceptual and methodological issues. South. Psychol. 1985, 2, 6–16. [Google Scholar]
  46. Gnewuch, U.; Morana, S.; Maedche, A. Towards Designing Cooperative and Social Conversational Agents for Customer Service. In Proceedings of the 38th International Conference on Information Systems (ICIS), Seoul, Republic of Korea, 10–13 December 2017. [Google Scholar]
  47. Küper, A.; Krämer, N. Psychological traits and appropriate reliance: Factors shaping trust in AI. Int. J. Hum.–Comput. Interact. 2025, 41, 4115–4131. [Google Scholar] [CrossRef]
  48. Liao, Q.V.; Gruen, D.; Miller, S. Questioning the AI: Informing design practices for explainable AI user experiences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–15. [Google Scholar]
  49. Romeo, G.; Conti, D. Exploring automation bias in human–AI collaboration: A review and implications for explainable AI. AI Soc. 2025, 40, 1–20. [Google Scholar] [CrossRef]
  50. Aghaziarati, A.; Rahimi, H. The Future of Digital Assistants: Human Dependence and Behavioral Change. J. Foresight Health Gov. 2025, 2, 52–61. [Google Scholar]
  51. Alalwan, A.A.; Algharabat, R.; Abu El Samen, A.; Albanna, H.; Al-Okaily, M. Examining the impact of anthropomorphism and AI-chatbots service quality on online customer flow experience–exploring the moderating role of telepresence. J. Consum. Mark. 2025, 42, 448–471. [Google Scholar] [CrossRef]
  52. Hastuti, N.T. Redefining Emotional Presence: Lived Experiences of AI-Mediated Romantic Relationships with Chatbot-Based Virtual Companions. Humanexus J. Humanist. Soc. Connect. Stud. 2025, 1, 245–252. [Google Scholar]
  53. Edalat, A.; Hu, R.; Patel, Z.; Polydorou, N.; Ryan, F.; Nicholls, D. Self-initiated humour protocol: A pilot study with an AI agent. Front. Digit. Health 2025, 7, 1530131. [Google Scholar] [CrossRef] [PubMed]
  54. Mei, K.-Q.; Chen, Y.-J.; Liu, S.-T. Navigating “Limerence”: Designing a Customized ChatGPT-Based Assistant System. In Proceedings of the International Conference on Human-Computer Interaction, Gothenburg, Sweden, 22–27 June 2025; pp. 162–172. [Google Scholar]
  55. Hong, S.J. What drives AI-based risk information-seeking intent? Insufficiency of risk information versus (Un) certainty of AI chatbots. Comput. Hum. Behav. 2025, 162, 108460. [Google Scholar] [CrossRef]
  56. Liao, W.; Weisman, W.; Thakur, A. On the motivations to seek information from artificial intelligence agents versus humans: A risk information seeking and processing perspective. Sci. Commun. 2024, 46, 458–486. [Google Scholar] [CrossRef]
  57. Subaveerapandiyan, A.; Vijay Kumar, R.; Prabhu, S. Marine information-seeking behaviours and AI chatbot impact on information discovery. Inf. Discov. Deliv. 2025, 53, 206–216. [Google Scholar] [CrossRef]
  58. Zhou, T.; Li, S. Understanding user switch of information seeking: From search engines to generative AI. J. Librariansh. Inf. Sci. 2024, 56, 09610006241244800. [Google Scholar] [CrossRef]
  59. Pham, V.K.; Pham Thi, T.D.; Duong, N.T. A study on information search behavior using AI-powered engines: Evidence from chatbots on online shopping platforms. Sage Open 2024, 14, 21582440241300007. [Google Scholar] [CrossRef]
  60. Mei, Y. AI & entertainment: The revolution of customer experience. Lect. Notes Educ. Psychol. Public Media 2023, 30, 274–279. [Google Scholar] [CrossRef]
  61. Yang, H.; Lee, H. Understanding user behavior of virtual personal assistant devices. Inf. Syst. E-Bus. Manag. 2019, 17, 65–87. [Google Scholar] [CrossRef]
  62. Park, G.; Chung, J.; Lee, S. Effect of AI chatbot emotional disclosure on user satisfaction and reuse intention for mental health counseling: A serial mediation model. Curr. Psychol. 2023, 42, 28663–28673. [Google Scholar] [CrossRef]
  63. Shahab, H.; Mohtar, M.; Ghazali, E.; Rauschnabel, P.A.; Geipel, A. Virtual reality in museums: Does it promote visitor enjoyment and learning? Int. J. Hum.–Comput. Interact. 2023, 39, 3586–3603. [Google Scholar] [CrossRef]
  64. Pani, B.; Crawford, J.; Allen, K.-A. Can generative artificial intelligence foster belongingness, social support, and reduce loneliness? A conceptual analysis. In Applications of Generative AI; Lyu, Z., Ed.; Springer International Publishing: Cham, Switzerland, 2024; pp. 261–276. [Google Scholar] [CrossRef]
  65. Zou, W.; Liu, Z.; Lin, C. The influence of individuals’ emotional involvement and perceived roles of AI chatbots on emotional self-efficacy. Inf. Commun. Soc. 2025, 28, 1–21. [Google Scholar] [CrossRef]
  66. Peng, C.; Zhang, S.; Wen, F.; Liu, K. How loneliness leads to the conversational AI usage intention: The roles of anthropomorphic interface, para-social interaction. Curr. Psychol. 2025, 44, 8177–8189. [Google Scholar] [CrossRef]
  67. Yan, W.; Xiaowei, G.; Xiaolin, Z. From Para-social Interaction to Attachment: The Evolution of Human-AI Emotional Relationships. J. Psychol. Sci. 2025, 48, 948–961. [Google Scholar]
  68. Heng, S.; Zhang, Z. Attachment Anxiety and Problematic Use of Conversational Artificial Intelligence: Mediation of Emotional Attachment and Moderation of Anthropomorphic Tendencies. Psychol. Res. Behav. Manag. 2025, 18, 1775–1785. [Google Scholar] [CrossRef]
  69. Wu, X.; Liew, K.; Dorahy, M.J. Trust, Anxious Attachment, and Conversational AI Adoption Intentions in Digital Counseling: A Preliminary Cross-Sectional Questionnaire Study. JMIR AI 2025, 4, e68960. [Google Scholar] [CrossRef]
  70. Papacharissi, Z.; Rubin, A.M. Predictors of Internet use. J. Broadcast. Electron. Media 2000, 44, 175–196. [Google Scholar] [CrossRef]
  71. Whiting, A.; Williams, D. Why people use social media: A uses and gratifications approach. Qual. Mark. Res. Int. J. 2013, 16, 362–369. [Google Scholar] [CrossRef]
  72. Xie, Y.; Zhao, S.; Zhou, P.; Liang, C. Understanding continued use intention of AI assistants. J. Comput. Inf. Syst. 2023, 63, 1424–1437. [Google Scholar] [CrossRef]
  73. LaRose, R.; Eastin, M.S. A social cognitive theory of Internet uses and gratifications: Toward a new model of media attendance. J. Broadcast. Electron. Media 2004, 48, 358–377. [Google Scholar] [CrossRef]
  74. Venkatesh, V.; Thong, J.Y.; Xu, X. Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
  75. Zhai, N.; Ma, X. Automated writing evaluation (AWE) feedback: A systematic investigation of college students’ acceptance. Comput. Assist. Lang. Learn. 2022, 35, 2817–2842. [Google Scholar] [CrossRef]
  76. Seok, J.; Lee, B.H.; Kim, D.; Bak, S.; Kim, S.; Kim, S.; Park, S. What Emotions and Personalities Determine Acceptance of Generative AI?: Focusing on the CASA Paradigm. Int. J. Hum.–Comput. Interact. 2025, 41, 11436–11458. [Google Scholar] [CrossRef]
  77. Wang, X.; Lin, X.; Shao, B. How does artificial intelligence create business agility? Evidence from chatbots. Int. J. Inf. Manag. 2022, 66, 102535. [Google Scholar] [CrossRef]
  78. Cheng, X.; Bao, Y.; Zarifis, A.; Gong, W.; Mou, J. Exploring consumers’ response to text-based chatbots in e-commerce: The moderating role of task complexity and chatbot disclosure. Internet Res. 2022, 32, 496–517. [Google Scholar] [CrossRef]
  79. Shen, X.; Wang, J.-L. Loneliness and excessive smartphone use among Chinese college students: Moderated mediation effect of perceived stressed and motivation. Comput. Hum. Behav. 2019, 95, 31–36. [Google Scholar] [CrossRef]
  80. Zhang, Y.; Li, Y.; Xia, M.; Han, M.; Yan, L.; Lian, S. The relationship between loneliness and mobile phone addiction among Chinese college students: The mediating role of anthropomorphism and moderating role of family support. PLoS ONE 2023, 18, e0285189. [Google Scholar] [CrossRef]
  81. Abd-Alrazaq, A.A.; Rababeh, A.; Alajlani, M.; Bewick, B.M.; Househ, M. Effectiveness and safety of using chatbots to improve mental health: Systematic review and meta-analysis. J. Med. Internet Res. 2020, 22, e16021. [Google Scholar] [CrossRef]
  82. Kim, M.; Lee, S.; Kim, S.; Heo, J.-I.; Lee, S.; Shin, Y.-B.; Cho, C.-H.; Jung, D. Therapeutic potential of social chatbots in alleviating loneliness and social anxiety: Quasi-experimental mixed methods study. J. Med. Internet Res. 2025, 27, e65589. [Google Scholar] [CrossRef] [PubMed]
  83. Xie, T.; Pentina, I.; Hancock, T. Friend, mentor, lover: Does chatbot engagement lead to psychological dependence? J. Serv. Manag. 2023, 34, 806–828. [Google Scholar] [CrossRef]
  84. Shawar, B.A.; Atwell, E. Chatbots: Are they really useful? J. Lang. Technol. Comput. Linguist. 2007, 22, 29–49. [Google Scholar] [CrossRef]
  85. Kwon, S.K.; Shin, D.; Lee, Y. The application of chatbot as an L2 writing practice tool. Lang. Learn. Technol. 2023, 27, 1–19. [Google Scholar] [CrossRef]
  86. Montag, C.; Elhai, J.D. Introduction of the AI-Interaction Positivity Scale and its relations to satisfaction with life and trust in ChatGPT. Comput. Hum. Behav. 2025, 172, 108705. [Google Scholar] [CrossRef]
  87. Kline, R.B. Principles and Practice of Structural Equation Modeling; Guilford Publications: New York, NY, USA, 2023. [Google Scholar]
  88. Hair, J.F.; Risher, J.J.; Sarstedt, M.; Ringle, C.M. When to use and how to report the results of PLS-SEM. Eur. Bus. Rev. 2019, 31, 2–24. [Google Scholar] [CrossRef]
  89. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  90. Nunnally, J.; Bernstein, I. Psychometric Theory, 3rd ed.; McGraw-Hill: New York, NY, USA, 1994. [Google Scholar]
  91. Henseler, J.; Ringle, C.M.; Sarstedt, M. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 2015, 43, 115–135. [Google Scholar] [CrossRef]
  92. Hu, L.T.; Bentler, P.M. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct. Equ. Model. A Multidiscip. J. 1999, 6, 1–55. [Google Scholar] [CrossRef]
  93. Jöreskog, K.G.; Sörbom, D. LISREL 8: Structural Equation Modeling with the SIMPLIS Command Language; Scientific Software International: Chapel Hill, NC, USA, 1993. [Google Scholar]
  94. Cohen, J. Statistical Power Analysis for the Behavioral Sciences; Routledge: Abingdon-on-Thames, UK, 2013. [Google Scholar]
  95. Hair, J.F. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM); Sage: Thousand Oaks, CA, USA, 2014. [Google Scholar]
  96. Lim, J.; Hwang, W. Proactivity of chatbots, task types and user’s characteristics when interacting with artificial intelligence (AI) chatbots. Int. J. Hum.–Comput. Interact. 2025, 41, 11848–11866. [Google Scholar] [CrossRef]
  97. Chen, Y.; Ma, X.; Wu, C. The concept, technical architecture, applications and impacts of satellite internet: A systematic literature review. Heliyon 2024, 10, e33793. [Google Scholar] [CrossRef] [PubMed]
  98. Zhang, K.; Xie, Y.; Chen, D.; Ji, Z.; Wang, J. Effects of attractions and social attributes on peoples’ usage intention and media dependence towards chatbot: The mediating role of parasocial interaction and emotional support. BMC Psychol. 2025, 13, 986. [Google Scholar] [CrossRef]
  99. Yeh, S.-F.; Wu, M.-H.; Chen, T.-Y.; Lin, Y.-C.; Chang, X.; Chiang, Y.-H.; Chang, Y.-J. How to guide task-oriented chatbot users, and when: A mixed-methods study of combinations of chatbot guidance types and timings. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 30 April–5 May 2022; pp. 1–16. [Google Scholar]
  100. Lin, Z.; Ng, Y.-L. Unraveling gratifications, concerns, and acceptance of generative artificial intelligence. Int. J. Hum.–Comput. Interact. 2025, 41, 10725–10742. [Google Scholar] [CrossRef]
  101. Lee, J.H.; Shin, D.; Hwang, Y. Investigating the capabilities of large language model-based task-oriented dialogue chatbots from a learner’s perspective. System 2024, 127, 103538. [Google Scholar] [CrossRef]
  102. Sánchez Cuadrado, J.; Pérez-Soler, S.; Guerra, E.; De Lara, J. Automating the Development of Task-oriented LLM-based Chatbots. In Proceedings of the 6th ACM Conference on Conversational User Interfaces, Luxembourg, 8–10 June 2024; pp. 1–10. [Google Scholar]
  103. Zhang, N.; Xu, J.; Zhang, X.; Wang, Y. Social robots supporting children’s learning and development: Bibliometric and visual analysis. Educ. Inf. Technol. 2024, 29, 12115–12142. [Google Scholar] [CrossRef]
  104. Wyatt, Z. Talk to Me: AI Companions in the Age of Disconnection. Med. Clin. Sci. 2025, 7, 050. [Google Scholar]
  105. Adewale, M.D.; Muhammad, U.I. From virtual companions to forbidden attractions: The seductive rise of artificial intelligence love, loneliness, and intimacy—A systematic review. J. Technol. Behav. Sci. 2025, 3, 1–18. [Google Scholar] [CrossRef]
  106. Liu, A.R.; Pataranutaporn, P.; Maes, P. The Heterogeneous Effects of AI Companionship: An Empirical Model of Chatbot Usage and Loneliness and a Typology of User Archetypes. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, Madrid, Spain, 20–22 October 2025; pp. 1585–1597. [Google Scholar]
  107. Huang, Y.; Huang, H. Exploring the effect of attachment on technology addiction to generative AI chatbots: A structural equation modeling analysis. Int. J. Hum.–Comput. Interact. 2025, 41, 9440–9449. [Google Scholar] [CrossRef]
  108. Dinh, C.-M.; Park, S. How to increase consumer intention to use Chatbots? An empirical analysis of hedonic and utilitarian motivations on social presence and the moderating effects of fear across generations. Electron. Commer. Res. 2024, 24, 2427–2467. [Google Scholar] [CrossRef]
  109. Wang, Y. Emotional Dependence Path of Artificial Intelligence Chatbot Based on Structural Equation Modeling. Procedia Comput. Sci. 2024, 247, 1089–1094. [Google Scholar] [CrossRef]
  110. Biswas, M.; Murray, J. “Incomplete Without Tech”: Emotional Responses and the Psychology of AI Reliance. In Proceedings of the Annual Conference Towards Autonomous Robotic Systems, London, UK, 21–23 August 2024; pp. 119–131. [Google Scholar]
  111. Lee, Y.-F.; Hwang, G.-J.; Chen, P.-Y. Technology-based interactive guidance to promote learning performance and self-regulation: A chatbot-assisted self-regulated learning approach. Educ. Technol. Res. Dev. 2025, 73, 1–26. [Google Scholar] [CrossRef]
  112. Guan, R.; Raković, M.; Chen, G.; Gašević, D. How educational chatbots support self-regulated learning? A systematic review of the literature. Educ. Inf. Technol. 2025, 30, 4493–4518. [Google Scholar] [CrossRef]
  113. Saracini, C.; Cornejo-Plaza, M.I.; Cippitani, R. Techno-emotional projection in human–GenAI relationships: A psychological and ethical conceptual perspective. Front. Psychol. 2025, 16, 1662206. [Google Scholar] [CrossRef]
  114. Shen, J.; DiPaola, D.; Ali, S.; Sap, M.; Park, H.W.; Breazeal, C. Empathy toward artificial intelligence versus human experiences and the role of transparency in mental health and social support chatbot design: Comparative study. JMIR Ment. Health 2024, 11, e62679. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The proposed conceptual research model.
Figure 2. Path coefficients of the research model. Note: p < 0.05 (*), p < 0.01 (**), p < 0.001 (***).
Table 1. Conceptual variables, definitions, and supporting literature.
Group | Construct | Definition in This Study | Supporting Literature
Mediator | Cognitive Reliance | The extent to which individuals delegate cognitive tasks to external agents, such as chatbots, to simplify decision-making or problem-solving. | [39]
Mediator | Emotional Attachment | A user’s affective bond with a chatbot, marked by feelings of closeness, comfort, and emotional security. | [40]
Instrumental Motivation | Information-Seeking | The deliberate use of chatbots to acquire relevant or novel information to fulfill cognitive needs. | [41]
Instrumental Motivation | Efficiency | The desire to use chatbots to save time, reduce effort, and streamline tasks. | [42]
Instrumental Motivation | Entertainment | The use of chatbots for enjoyment, amusement, and emotional satisfaction. | [43]
Affective Motivation | Companionship | The desire to experience social connection or relational closeness via chatbot interaction. | [21]
Affective Motivation | Loneliness | A perceived gap between desired and actual social relationships, driving engagement with emotionally responsive chatbots. | [44]
Affective Motivation | Anxiety | A negative emotional state that motivates users to seek comfort, control, or predictability through chatbot interaction. | [45]
Table 2. Demographic profile of respondents.
Variable | Category | Frequency (n) | Percentage (%)
Gender | Male | 161 | 45.48
Gender | Female | 192 | 54.24
Gender | Prefer not to say | 1 | 0.28
Age (years) | 18–24 | 238 | 67.23
Age (years) | 25–34 | 97 | 27.40
Age (years) | 35–45 | 19 | 5.37
Education Level | Undergraduate | 202 | 57.06
Education Level | Master’s | 118 | 33.33
Education Level | Doctoral | 12 | 3.39
Education Level | Other | 22 | 6.22
Occupation | Student | 218 | 61.58
Occupation | Working professional | 123 | 34.75
Occupation | Other | 13 | 3.67
Chatbot Usage Frequency | Daily | 136 | 38.42
Chatbot Usage Frequency | Several times per week | 151 | 42.65
Chatbot Usage Frequency | Occasionally | 67 | 18.93
AI Chatbot Used in the Past Month | DeepSeek R1 | 110 | 31.07
AI Chatbot Used in the Past Month | Baidu Ernie Bot 4.0 | 82 | 23.16
AI Chatbot Used in the Past Month | Doubao 9.0 | 70 | 19.77
AI Chatbot Used in the Past Month | ChatGPT-4o (third-party access) | 40 | 11.30
AI Chatbot Used in the Past Month | Tencent Hunyuan T1 | 25 | 7.15
AI Chatbot Used in the Past Month | Others (e.g., iFlytek Spark 4.0, Kimi 1.5, and Qwen3) | 27 | 7.55
Table 3. Descriptive statistics and convergent validity of the measurement model.
Construct/Items | Mean | Std. Dev. | Standardized Factor Loading (>0.70) | Cronbach’s α (>0.70) | Composite Reliability (>0.70) | Average Variance Extracted (>0.50)
AI Chatbot Dependency | | | | 0.875 | 0.913 | 0.777
AICD1 | 4.221 | 0.687 | 0.895 | | |
AICD2 | 4.353 | 0.713 | 0.882 | | |
AICD3 | 4.271 | 0.664 | 0.867 | | |
Cognitive Reliance | | | | 0.879 | 0.922 | 0.797
CR1 | 4.262 | 0.657 | 0.889 | | |
CR2 | 4.414 | 0.632 | 0.907 | | |
CR3 | 4.332 | 0.614 | 0.882 | | |
Emotional Attachment | | | | 0.868 | 0.906 | 0.763
EA1 | 4.011 | 0.697 | 0.874 | | |
EA2 | 4.092 | 0.683 | 0.860 | | |
EA3 | 4.115 | 0.651 | 0.886 | | |
Information-Seeking | | | | 0.871 | 0.916 | 0.785
IS1 | 4.313 | 0.650 | 0.892 | | |
IS2 | 4.237 | 0.682 | 0.861 | | |
IS3 | 4.370 | 0.701 | 0.905 | | |
Efficiency | | | | 0.884 | 0.919 | 0.790
EF1 | 4.452 | 0.602 | 0.901 | | |
EF2 | 4.328 | 0.619 | 0.879 | | |
EF3 | 4.381 | 0.670 | 0.887 | | |
Entertainment | | | | 0.841 | 0.858 | 0.669
EN1 | 3.892 | 0.782 | 0.823 | | |
EN2 | 4.003 | 0.747 | 0.831 | | |
EN3 | 3.951 | 0.723 | 0.799 | | |
Companionship | | | | 0.849 | 0.865 | 0.682
CP1 | 3.943 | 0.731 | 0.812 | | |
CP2 | 3.893 | 0.763 | 0.837 | | |
CP3 | 3.981 | 0.700 | 0.828 | | |
Loneliness | | | | 0.881 | 0.909 | 0.769
LO1 | 4.052 | 0.679 | 0.878 | | |
LO2 | 4.120 | 0.642 | 0.892 | | |
LO3 | 4.081 | 0.663 | 0.861 | | |
Anxiety | | | | 0.866 | 0.901 | 0.752
AN1 | 4.176 | 0.721 | 0.869 | | |
AN2 | 4.104 | 0.703 | 0.855 | | |
AN3 | 4.158 | 0.712 | 0.878 | | |
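As a reading aid for the convergent-validity columns in Table 3, the short Python sketch below recomputes composite reliability and average variance extracted from the reported standardized loadings, using the standard Fornell and Larcker [89] formulas. It is an illustrative check written for this summary, not code from the study itself; the example loadings are the AI Chatbot Dependency items taken directly from the table.

```python
# Minimal sketch: reproduce the composite reliability (CR) and average variance
# extracted (AVE) in Table 3 from standardized loadings alone, following the
# usual Fornell-Larcker formulas [89].

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]."""
    s = sum(loadings)
    error = sum(1 - l**2 for l in loadings)  # error variance of each standardized item
    return s**2 / (s**2 + error)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l**2 for l in loadings) / len(loadings)

aicd_loadings = [0.895, 0.882, 0.867]  # AICD1-AICD3 from Table 3
print(round(composite_reliability(aicd_loadings), 3))       # ~0.913, matching Table 3
print(round(average_variance_extracted(aicd_loadings), 3))  # ~0.777, matching Table 3
```

Applying the same check to the other constructs reproduces the remaining CR and AVE values in Table 3 within rounding.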
Table 4. Discriminant validity of constructs using the HTMT criterion.
Constructs | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
1. AI Chatbot Dependency | | | | | | | | |
2. Cognitive Reliance | 0.763 | | | | | | | |
3. Emotional Attachment | 0.792 | 0.721 | | | | | | |
4. Information-Seeking | 0.692 | 0.749 | 0.664 | | | | | |
5. Efficiency | 0.747 | 0.733 | 0.683 | 0.702 | | | | |
6. Entertainment | 0.717 | 0.699 | 0.765 | 0.662 | 0.666 | | | |
7. Companionship | 0.744 | 0.678 | 0.804 | 0.655 | 0.662 | 0.761 | | |
8. Loneliness | 0.724 | 0.707 | 0.780 | 0.635 | 0.651 | 0.700 | 0.686 | |
9. Anxiety | 0.713 | 0.705 | 0.770 | 0.626 | 0.648 | 0.713 | 0.714 | 0.730
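For readers unfamiliar with the criterion behind Table 4, the HTMT index of Henseler et al. [91] divides the average heterotrait-heteromethod correlation between two constructs' items by the geometric mean of their average monotrait-heteromethod correlations. The formula below is the standard formulation from that source, restated here for convenience (K_i and K_j are the numbers of items for constructs i and j, and r denotes item correlations); all entries in Table 4 fall below the commonly used 0.85 cutoff.

```latex
\mathrm{HTMT}_{ij}
  = \frac{\frac{1}{K_i K_j}\sum_{g=1}^{K_i}\sum_{h=1}^{K_j} r_{ig,jh}}
         {\left( \frac{2}{K_i(K_i-1)}\sum_{g=1}^{K_i-1}\sum_{h=g+1}^{K_i} r_{ig,ih}
                 \cdot
                 \frac{2}{K_j(K_j-1)}\sum_{g=1}^{K_j-1}\sum_{h=g+1}^{K_j} r_{jg,jh}
          \right)^{1/2}}
```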
Table 5. Model fit indices.
Fit Index | Value | Recommended Threshold | Reference
χ²/df | 2.360 | <3.0 | Hair et al. [88]
CFI | 0.961 | >0.90 (good), >0.95 (excellent) | Hu and Bentler [92]
TLI | 0.948 | >0.90 | Hu and Bentler [92]
RMSEA | 0.047 | <0.08 (acceptable), <0.05 (excellent) | Kline [87]
SRMR | 0.041 | <0.08 | Hu and Bentler [92]
GFI | 0.931 | >0.90 | Jöreskog and Sörbom [93]
Table 6. Hypotheses testing results.
Hypothesis | Structural Path | Path Coefficient | R² | f² | Empirical Evidence
H1 | CR → AICD | 0.473 | 0.592 | 0.164 | Supported
H2 | EA → AICD | 0.360 | | 0.098 | Supported
H3 | IS → CR | 0.422 | 0.523 | 0.168 | Supported
H4 | EF → CR | 0.349 | | 0.084 | Supported
H5 | EN → CR | 0.071 | | 0.031 | Not supported
H6 | CP → EA | 0.243 | 0.471 | 0.076 | Supported
H7 | LO → EA | 0.389 | | 0.151 | Supported
H8 | AN → EA | 0.221 | | 0.076 | Supported
Note: R² is reported once for each endogenous construct (AICD = 0.592, CR = 0.523, EA = 0.471).
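To connect Table 6 back to the mediation claims, the sketch below forms simple product-of-coefficients indirect effects on chatbot dependency from the reported path coefficients. This is a back-of-the-envelope illustration under the usual a × b mediation logic; the indirect effects in the article itself may have been estimated differently.

```python
# Illustrative only: product-of-coefficients indirect effects on AI chatbot
# dependency (AICD), computed from the path coefficients reported in Table 6.

paths_to_mediator = {
    "IS -> CR": 0.422, "EF -> CR": 0.349, "EN -> CR": 0.071,  # cognitive path
    "CP -> EA": 0.243, "LO -> EA": 0.389, "AN -> EA": 0.221,  # affective path
}
mediator_to_outcome = {"CR": 0.473, "EA": 0.360}  # paths from mediators to AICD

for path, a in paths_to_mediator.items():
    mediator = path.split(" -> ")[1]
    b = mediator_to_outcome[mediator]
    print(f"{path} -> AICD: {a * b:.3f}")

# Among the affective motivations, loneliness yields the largest product
# (0.389 * 0.360 = 0.140), consistent with the reported finding that loneliness
# shows the strongest indirect effect via emotional attachment.
```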