Article

CASA in Action: Dual Trust Pathways from Technical–Social Features of AI Agents to Users’ Active Engagement Through Cognitive–Emotional Trust

by Qinbo Xue 1,2, Magdalena Dzitkowska-Zabielska 2, Liguo Wang 1 and Jiaolong Xue 3,*
1 Faculty of Sport and Leisure, Guangdong Ocean University, Zhanjiang 524000, China
2 Faculty of Physical Education, Gdańsk University of Physical Education and Sport, 80-336 Gdańsk, Poland
3 Department of Marketing, Business School, Sichuan University, Chengdu 610065, China
* Author to whom correspondence should be addressed.
J. Theor. Appl. Electron. Commer. Res. 2026, 21(1), 11; https://doi.org/10.3390/jtaer21010011
Submission received: 31 October 2025 / Revised: 8 December 2025 / Accepted: 9 December 2025 / Published: 2 January 2026

Abstract

As artificial intelligence (AI) agents become deeply integrated into fitness systems, trustworthy human–AI interaction has become pivotal for user engagement on smart home fitness (SHF) e-commerce platforms. Grounded in the Computers Are Social Actors (CASA) framework, this study empirically investigates how the technical and social features of AI agents acting as AI fitness coaches shape users’ active engagement in the in-home social e-commerce context. A mixed-methods approach was employed, combining computational text mining of 17,582 user reviews from fitness e-commerce platforms with a survey (N = 599) of Chinese consumers. The results show that (1) the technical–social features of AI agents serving as AI fitness coaches include visibility, gamification, interactivity, humanness, and sociability; (2) these five technical–social features positively influence user compliance via both cognitive and emotional trust in AI agents; and (3) they also positively impact active engagement via both cognitive and emotional trust in AI agents. This study extends the CASA framework to the domain of AI coaching by demonstrating the parallel roles of cognitive and emotional trust in AI agents. For designers and managers in the fitness e-commerce industry, this study offers actionable insights for designing AI agents that integrate functional and social features to foster trust and drive behavioral outcomes.

1. Introduction

The global market for smart fitness mirrors (equipped with embedded AI agents) is projected to grow from $359.5 million in 2025 to $625.4 million by 2034. These promising prospects coincide with broader industry growth: according to the International Health, Racquet & Sportsclub Association (IHRSA), the global fitness market was valued at $96 billion in 2023, with home fitness solutions accounting for a significant portion of the smart home fitness e-commerce market [1]. In particular, China’s fitness population is projected to expand from 374 million in 2022 to 464 million by 2027, creating substantial growth opportunities for smart home fitness products. Underlying this market expansion is technological advancement. Driven by innovations in the Internet of Things (IoT), artificial intelligence, and computer vision, smart fitness mirrors are evolving into comprehensive platforms that integrate hardware, content, services, and social features [2,3]. Beyond functioning as independent devices, these AI-powered fitness systems, embedded with AI agents that double as e-commerce platforms, represent a significant convergence of digital health and e-commerce [4,5]. Specifically, by functioning as personalized AI fitness coaches, these AI agents deliver tailored guidance through voice interaction, real-time movement correction, and dynamically adaptable training programs [6]. From a structural perspective, this integrated framework comprises three core components: the smart mirror as the hardware interface, the AI coach as the cognitive engine, and the e-commerce platform as the commercial backbone. Given this architecture, AI agents (acting as AI fitness coaches) serve as a central coordinator, seamlessly integrating the physical interface (smart mirror), intelligent processing layer (AI agent), and commercial infrastructure (e-commerce system) into a unified service ecosystem.
Furthermore, these AI agents can intelligently recommend and facilitate the purchase of complementary products, services, and subscriptions, unlocking sustainable business models. For instance, the incorporation of e-commerce services—such as long-term membership subscriptions, paid courses, fitness cashback rewards, online community group buying, and accessory upgrades—effectively enhances the consumer experience and strengthens user retention. Thus, this integration establishes a pioneering paradigm for data-driven e-commerce service delivery and sustainable value creation in the digital fitness industry.
From a technical perspective, AI agents (i.e., AI fitness coaches) also provide real-time posture guidance and correction, interactive gamified workouts, progress tracking functionalities, and access to online fitness course products for consumers. These personalized characteristics are consistent with previous findings that AI-enabled personalization (AIP) redefines the customer journey and optimizes user interaction experiences [7], and they extend research on how such features influence user behavior. Compared to traditional in-gym personal trainers, these systems address critical limitations of home workouts by helping users achieve fitness goals safely, minimizing injury risks, and avoiding the high costs of one-on-one coaching. As such, a central marketing strategy involves reframing monotonous home workouts into engaging, sustainable, and socially interactive gamified experiences across diverse contexts, such as at home, in gyms, or while traveling. Moreover, the rise of AI-empowered fitness platforms has stimulated e-commerce activities, including the sale of smart wearable devices, nutritional supplements, and online subscription services, forming an expanding digital health market [8]. Nevertheless, despite these technological and commercial advances, sustaining user engagement and ensuring long-term compliance with AI-generated recommendations remain challenging. User trust in AI agents, shaped by both technical and social functionalities, is critical for adoption and continuous usage [9]. From an e-commerce perspective, understanding how trust in AI agents develops and subsequently influences user behavior is equally essential for platform retention, cross-selling, and revenue models [10]. In addition, this study does not replicate well-documented findings from the existing literature (such as consumer attitudes and perceptions).
Instead, by incorporating the unique characteristics of the health and fitness domain, it innovatively introduces two key variables—user compliance and active engagement—thereby expanding the application depth of interactive marketing theory in vertical scenarios [11]. Therefore, a nuanced investigation into the technological and psychological determinants of trust and behavior is urgently needed.
The Computers Are Social Actors (CASA) framework offers a valuable lens for analyzing these dynamics [12]. CASA posits that individuals naturally apply human social rules to computers and AI systems, responding to them as they would to other people, particularly when these systems exhibit human-like attributes [13,14]. By leveraging this tendency, our study conceptualizes AI agents (acting as AI fitness coaches) as social actors, not just technological tools. To this end, we focus on how five key features (humanness, visibility, gamification, interactivity, and sociability) influence users’ behavioral intentions and outcomes, encompassing both user compliance and active engagement. Specifically, humanness, the assignment of human traits to AI [15], enables users to form relational bonds, creating a sense of companionship and understanding [13], and imbues the AI with human-like qualities that enhance user trust [16]. Visibility ensures that AI processes, recommendations, feedback, and progress tracking are transparent and easily understood, thereby enhancing user confidence in AI processes [17]. Further, previous work showed that gamification effectively enhances user engagement [18], transforming routine fitness tasks into engaging challenges. Moreover, interactivity is crucial in shaping a favorable user experience [19] and can serve as a foundation for building user trust [20,21]. For instance, from a functional perspective, AI agents (e.g., AI fitness coaches) provide real-time, 24/7 personalized guidance through responsive two-way communication. In addition, sociability fosters a sense of community or companionship through shared goals and virtual interactions. These social elements have been established as key antecedents for cultivating trust in AI agents and are directly associated with behavioral engagement [22]. Building on these insights, we posit that these five features may foster initial user trust and create a more engaging experience.
In many AI-related studies, researchers have categorized trust into two distinct types: cognitive trust (CT) and emotional trust (ET) [23,24]. CT reflects a user’s confidence in an AI’s competence, reliability, and logical decision-making capabilities, contingent upon the perceived trustworthiness of the system, whereas ET stems from the user’s comfort, perceived empathy, and affective connection with the AI, shaped by feelings, emotions, and moods [25,26]. Notably, these two trust dimensions operate complementarily and mutually reinforce each other across different relationship stages [27]. Trust within human–robot interaction, specifically, has been characterized as an individual’s readiness to engage with automated systems to achieve particular goals, and it is a vital factor influencing user adoption and consumers’ online purchasing intentions [28,29]. While existing literature has explored trust in AI or applied the CASA paradigm, few studies have integrated CASA with a dual-path (cognitive and emotional) trust framework specifically within the context of AI fitness coaching. Focusing on Chinese smart home fitness e-commerce platforms, our study elucidates how these two dimensions mediate the relationship between AI agents’ features and user engagement. Specifically, we illustrate how social cues can act as triggers for emotional trust, while functional cues play a key role in strengthening cognitive trust. By targeting AI agents (e.g., AI fitness coaches) on Chinese smart fitness e-commerce platforms, this approach bridges a crucial gap in understanding the mechanisms that drive user compliance and active engagement in AI coaching and commerce environments. The following questions direct our study:
RQ1: Which technical and social features of AI agents (acting as AI fitness coaches) effectively foster users’ cognitive and emotional trust in AI agents on Chinese smart fitness e-commerce platforms?
RQ2: How do cognitive and emotional trust in AI agents influence user behaviors within AI-mediated fitness interactions, particularly in driving user compliance with AI recommendations and active engagement?
RQ3: How does trust in AI agents mediate the relationship between AI agents’ features and behavioral outcomes, as posited by CASA?
This research offers several important insights based on a mixed-methods approach that combines text mining and structural equation modeling (SEM). First, by leveraging the CASA framework, our unique contribution lies in demonstrating that for AI agents—which are inherently hybrid entities blending functional tools with social roles—both cognitive (performance-based) and emotional (relationship-based) trust are parallel and critical mechanisms translating the technical–social features of AI agents into behavioral engagement in digital e-commerce environments. The identification of five key features—visibility, gamification, interactivity, humanness, and sociability—and their role in trust formation extend the CASA framework into the context of AI-mediated fitness e-commerce. Second, this study proposes and empirically validates a dual-path trust mechanism comprising cognitive trust (rational reliance) and emotional trust (affective bond) in AI agents to elucidate the psychological mechanisms underlying user interactions on Chinese smart fitness e-commerce platforms. This mechanism is particularly relevant on smart home fitness e-commerce platforms, where trust drives purchase intention, subscription renewals, and brand loyalty. Third, both cognitive and emotional trust significantly influence user behaviors, including compliance with AI-generated recommendations and active engagement with AI agents (acting as AI fitness coaches). These behaviors contribute directly to business outcomes on Chinese smart fitness e-commerce platforms, including higher user retention, repeated transactions, and improved conversion rates for health and fitness products.
In addition, examining the dual-path trust mechanism—encompassing both cognitive and emotional trust in AI agents—and its behavioral consequences carries important theoretical and practical implications. Theoretically, while trust is widely acknowledged as pivotal in human–AI interaction, the distinct pathways through which technical–social features of an AI agent (e.g., AI fitness coach) foster cognitive trust in AI agents (rooted in perceived competence) versus emotional trust in AI agents (rooted in perceived empathy and social connection) remain underexplored. By integrating the CASA paradigm with a dual-path trust model, this study advances the literature beyond adoption-focused theories (e.g., TAM, UTAUT) toward a relational and process-oriented understanding of sustained AI-mediated engagement. Practically, for designers and platform managers in the smart home fitness e-commerce sector, a granular comprehension of which features drive which form of trust, and subsequently which behavioral outcomes, offers actionable guidance. Rather than relying on intuition, developers can strategically emphasize specific features (e.g., visibility and gamification for cognitive trust in AI agents, humanness and sociability for emotional trust in AI agents) to enhance user compliance and active engagement, thereby supporting higher retention, recurring subscriptions, and long-term platform viability. Thus, clarifying these mechanisms not only fills a conceptual gap but also provides an evidence-based framework for designing AI agents (e.g., AI fitness coaches) that are both functionally reliable and socially resonant.
From a practical perspective, this research provides actionable insights for designers and marketers seeking to build trustworthy AI interfaces that facilitate user satisfaction and commercial success in the growing online fitness market.

2. Background Literature

2.1. AI Agents in E-Commerce

The concept of AI agents traces its origins to philosophical inquiry before evolving within the fields of computer science and AI. Defined as intelligent entities capable of perceiving environmental and informational inputs, AI agents autonomously make decisions based on their knowledge and toolkits to accomplish designated tasks [30]. They are characterized by four core modules: perception, memory, planning, and action. The emergence of large language models has significantly accelerated their evolution, transforming AI agents from passive responders into proactive task initiators. This shift enables dynamic strategy generation and supports widespread application across diverse scenarios, such as market analysis, financial services, healthcare, and sports. With the support of large models, AI agents are now transitioning from mere tools to intelligent collaborators, capable of working alongside humans in increasingly sophisticated ways. The integration of AI agents in e-commerce represents a paradigm shift from traditional transaction platforms to intelligent commerce ecosystems. For instance, unlike standalone large language models (LLMs), AI agents (acting as AI fitness coaches) use technologies such as computer vision (CV), natural language processing (NLP), motion recognition, and machine learning (ML) algorithms to develop personalized training programs and diet plans based on individual preferences, objectives, and fitness levels. This capability revolutionizes how consumers discover, evaluate, and purchase products online [5] (Table 1).
Unlike traditional AI robots and large language models, an AI agent—functioning as an AI fitness coach (seller)—integrates core modules including perception, memory, planning, and action [31], which together systematically enhance its visibility, gamification, interactivity, humanness, and sociability within smart home fitness e-commerce platforms (see Table 2): (1) Perception, enabled by computer vision and natural language processing, continuously captures user actions, voice commands, and contextual data, forming the basis for real-time interactivity [31,32,33,34]. (2) Memory stores and organizes user preferences, progress, and historical interactions [31,32,34,35], allowing the planning module to generate highly personalized and empathetic feedback, thereby strengthening the AI agent’s interactivity and humanness. (3) Planning not only devises tailored workout and nutrition programs but also breaks down fitness goals into engaging game-like strategies [31,33,35]—such as point systems and achievement levels—enabling rich visibility and gamification. (4) The action module brings these plans to life through a clear virtual avatar and seamless API integrations [31,32,33,34,35]. It executes adapted exercise rules, displays real-time guidance for greater visibility and interactivity, and connects users to social challenges and community leaderboards, thereby fostering a sense of sociability. Together, these four modules form a cohesive loop that makes AI fitness coaches visible, interactive, human-like, gamified, and socially connected, offering an engaging user experience (see Table 2). Accordingly, this review explicitly and critically surveys prior work on AI agent modules (perception, memory, planning, action) and their applications [36], both within and outside the fitness domain [37], to establish the current state of knowledge and to identify the specific gap regarding the integrated application and experiential mapping of these modules in smart home fitness e-commerce contexts.
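The perception–memory–planning–action loop described above can be sketched in miniature. The class below is a hypothetical illustration of how the four modules might cooperate in an AI fitness coach; the thresholds, feedback messages, and method names are our own assumptions for exposition, not the implementation of any commercial system discussed in this paper.

```python
from dataclasses import dataclass, field

@dataclass
class FitnessAgent:
    """Illustrative perception-memory-planning-action loop for an AI fitness coach."""
    memory: list = field(default_factory=list)  # memory module: interaction history

    def perceive(self, raw_input: dict) -> dict:
        # Perception: normalize pose/voice input into a structured observation
        return {"pose_error": raw_input.get("pose_error", 0.0),
                "command": raw_input.get("command", "")}

    def plan(self, observation: dict) -> str:
        # Planning: choose a coaching decision from the current observation
        if observation["pose_error"] > 0.3:
            return "correct_posture"
        if observation["command"] == "harder":
            return "increase_difficulty"
        return "encourage"

    def act(self, decision: str) -> str:
        # Action: render the decision as visible, human-like feedback
        messages = {
            "correct_posture": "Straighten your back a little!",
            "increase_difficulty": "Great, let's add 5 more reps.",
            "encourage": "Nice pace, keep it up!",
        }
        return messages[decision]

    def step(self, raw_input: dict) -> str:
        obs = self.perceive(raw_input)
        self.memory.append(obs)  # memory retains every observation for later personalization
        return self.act(self.plan(obs))

agent = FitnessAgent()
print(agent.step({"pose_error": 0.5}))    # posture-correction path (visibility/interactivity)
print(agent.step({"command": "harder"}))  # difficulty-adjustment path (gamification)
```

Each `step` runs one full perception–planning–action cycle while accumulating history in `memory`, mirroring the cohesive loop that the text maps onto the five experiential features.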
Table 2 is presented as an integrative framework synthesized for this study, rather than being directly drawn from a single existing source. The purpose of this table is to visually consolidate and map the functional modules of the AI agent (perception, memory, planning, action) onto the key AI agent features (visibility, gamification, interactivity, humanness, sociability) that we identified as critical within the smart home e-commerce context. The structures and mapping presented in Table 2 are synthesized from both the theoretical review and the findings of Study 1 in this research.
First, it clarifies the theoretical novelty of our approach. This study establishes a first-of-its-kind, systematic mapping between the four core technical modules of an AI agent (perception, memory, planning, and action) and the five key features of AI agents (visibility, gamification, interactivity, humanness, and sociability) critical to the smart home fitness context.
Second, it explains the empirical foundation of the framework. This mapping is not merely a theoretical deduction but is directly informed and justified by the user-centric, qualitative insights derived from Study 1. The empirical findings validate the connections between specific technical functions and targeted user experiences.
Third, it defines the core contribution of the research. The study’s principal value lies in bridging a significant gap in the literature: the disconnect between generic, technology-driven agent models and the specific, user-centered practical needs within a defined application context, namely smart home fitness e-commerce. Our framework thus serves as a conceptual bridge between established theory and context-specific application.

2.2. Computers Are Social Actors (CASA)

According to Media Equation Theory (MET), individuals often perceive media as if they were real people rather than mere tools, with the Computers Are Social Actors (CASA) framework central to this concept [38]. This paradigm has been increasingly used to explain how people interact with AI [13,39]. Within this framework, social cues including language, facial expressions, voice, emotions, interactivity, and engagement collectively form social signals that convey personalities, emotions, interactivity, engagement, and more [40]. Prior research has proposed that casual, humanistic, and congenial language styles elicit significantly more positive social responses to technology than formal, non-humanistic, and impersonal communication [41]. Consistent with previous findings, in our study, social cues include human-like vocal encouragement using friendly language styles, action-recognition reminders, and interactive features. While social cues and signals can influence users’ social reactions to AI, they are not the only variables. In previous studies, users’ social reactions included social presence, perceived credibility, compliance behavior, and intentions to use [13]. Consequently, this research focuses on two typical social responses: user compliance and active engagement.
This study employs the Computers Are Social Actors (CASA) paradigm as its core theoretical framework for the following reasons:
Firstly, the CASA paradigm is highly congruent with our research questions. The core of this study is to investigate how users interact with AI agents acting as “fitness coaches” within a social context and build trust, rather than merely evaluating their instrumental utility. The foundational premise of CASA—that humans instinctively apply social rules and expectations to computers—provides the most direct theoretical lens for this investigation. It allows us to treat the AI agent as a “social actor” and systematically examine how its techno-social features (e.g., humanness, sociability) elicit user socio-emotional responses (e.g., cognitive and emotional trust), which in turn drive specific behaviors (e.g., compliance and active engagement). This perspective is essential for understanding the long-term, dynamic “human–AI coach” relationship.
The widespread application of the CASA paradigm reflects a growing tendency to perceive AI systems as social actors. Thus, the relationship between humans and AI is deepening, with AI roles ranging from monitors and assistants to coaches, and the focus of user expectations shifting from utility to humanness, hedonism, socialization, and interactivity. In line with this perspective, our study employs a social actor lens to explain how AI agents (e.g., AI fitness coaches) present their core features or potential capabilities (e.g., humanness, visibility, gamification, interactivity, and sociability; see Figure 1) to users. We propose that these features shape users’ cognitive and emotional trust, which in turn drive two key behavioral outcomes: user compliance and active engagement. Overall, this lens offers a theoretical foundation for understanding how users perceive and respond to AI agents (e.g., AI fitness coaches) during exercise.
Secondly, by integrating CASA with a dual-path trust model, this study fills a critical gap through the introduction of a more precise mechanism. Rather than stopping at the general observation that “users respond socially,” we delve deeper to elucidate the underlying processes—specifically, by exploring the concurrent cultivation of rational, performance-driven confidence (cognitive trust) and relational, emotionally based bonds (emotional trust). This methodology enables us to enrich the CASA framework within the realm of AI coaching, revealing that the “social actor” attribute is not a uniform phenomenon but functions through dual, complementary trust-building channels that echo human interactions.
Moreover, our model serves as a bridge between disparate research streams in HCI and AI, which tend to focus either on usability and functional performance (in line with cognitive trust) or solely on social presence. This nuanced perspective constitutes the central theoretical innovation of our study. By leveraging CASA to delineate social features and then connecting these features to a dual-path trust model, this study provides a far more detailed and nuanced understanding. Specifically, we illustrate how social features can act as triggers for emotional trust, while technical features play a key role in strengthening cognitive trust.
Thirdly, we acknowledge the limitations of other dominant technology acceptance models. While models such as the Technology Acceptance Model (TAM) or the Unified Theory of Acceptance and Use of Technology (UTAUT) are highly effective in predicting technology adoption based on perceptions of usefulness and ease of use, they primarily focus on pre-adoption intentions. They offer less granularity for capturing the rich socio-emotional mechanisms inherent in sustained interaction. For instance, TAM struggles to adequately explain the formation of “emotional trust,” while UTAUT has limited explanatory power for how social cues like “humanness” influence long-term engagement behaviors. Moreover, these theories alone cannot account for this study’s hypotheses, a limitation consistent with the viewpoints of Wang [11,42]. Therefore, while these models offer valuable reference points, they are insufficient as the core theoretical framework for this study to deeply explain the complex trust and behavioral mechanisms triggered by an AI agent acting as a “social coach.”
In summary, the CASA paradigm is the optimal theoretical framework for this study due to its direct focus on social perception and the process of human–computer interaction, enabling a nuanced explanation of how users perceive, feel, and respond to an AI agent (e.g., AI fitness coach) endowed with social attributes.

2.3. Cognitive and Emotional Trust in AI Agents

Trust is a core element in maintaining relationships with users and an essential factor in technology adoption [43]. Defined as one party’s willingness to be vulnerable to the actions of another, trust has been extensively applied to human–AI interactions [44]. In AI contexts, trust reflects a mental state held by the trustor regarding the trustee’s actions under ambiguity and vulnerability [45]. As a keystone of social interaction, trust strengthens human–robot interaction [46]. In the context of smart home fitness e-commerce platforms, prior work implies that trust in AI agents has mostly been examined from a cognitive perspective while largely neglecting emotional dimensions. Prior research has primarily examined consumer trust in social shopping, healthcare, and law enforcement contexts [29]. To our knowledge, no study has investigated user trust in AI agents from an emotional perspective, particularly in fitness contexts. As a multidimensional concept, trust possesses different dimensions that differently influence consumer behavior [47].
This study divided trust into two categories: CT in AI agents and ET in AI agents. CT in AI agents refers to a rational confidence in a partner’s competence to fulfill role-specific tasks or activities, grounded in knowledge, facts, and rational choice; conversely, ET in AI agents refers to an affective sense of security in depending on a trustee, emerging from shared principles, values, emotions, and goodwill [48]. In sports training, AI supports trainers by identifying and correcting movement errors, improving performance, and providing data-driven recommendations [49], thereby fostering user trust in AI agents. Previous studies demonstrated that trust changes over time and can differ markedly between its cognitive and emotional dimensions; notably, these two dimensions of trust are complementary at different relationship stages [27]. In addition, by leveraging CASA to delineate social features and then connecting these features to a dual-path trust model, this study provides a more detailed and nuanced understanding of how the “social actor” attribute operates through dual, complementary trust-building channels that echo human interactions.

3. Methodology

This research employs a sequential mixed-methods design in which Study 1 and Study 2 are functionally connected and mutually reinforcing. This study utilizes a combination of text mining (Study 1) and SEM (Study 2), specifically adapted to scenarios in the field of AI agents (acting as AI fitness coaches). In Study 1, text-mining techniques were applied to qualitative user data to inductively identify the core features of AI agents (i.e., visibility, gamification, interactivity, humanness, and sociability). These dimensions formed the foundational constructs for our conceptual model. Study 2 then operationalized these constructs into measurable scales and tested the hypothesized relationships among them using PLS-SEM. Study 1 adopts an exploratory design to identify emerging features of AI agents (acting as AI fitness coaches) through the lens of CASA. Combining the results of Study 1 with trust theory, Study 2 then develops and empirically assesses a conceptual model that outlines how technical and social features of AI agents (acting as AI fitness coaches) influence user compliance and active engagement. Thus, the qualitative phase (Study 1) generated the theory-grounded constructs, while the quantitative phase (Study 2) statistically validated the model’s structure and relationships. This two-phase approach ensures that the final framework is both empirically grounded and rigorously validated.

4. Study 1: Text Mining

Firstly, in December 2024, this research selected data sources from the dominant Chinese e-commerce platforms: JD.com, Taobao, Tmall, and Pinduoduo, including product reviews, image-text composites, and community discussions. The reason for specifically focusing on leading smart home fitness brands in China, such as Xiaodu, AEKE, and FITURE, is that these brands represent central hubs for users who integrate AI fitness agents into their home workout routines. Focusing on these leading brands allows us to tap into the experiences of engaged, regular users who are deeply integrated into an AI-agent-mediated fitness routine. Content concerning these brands reveals insights into long-term engagement, habitual use, and nuanced interaction with the features of AI agents.
Our sampling strategy was designed to capture user perspectives across the complete consumer journey, from purchase consideration to in-home usage, across the dominant channels in China’s market. These platforms represent the primary commercial channels where consumers purchase, experience, and review fitness-tech products. Sampling from all four major platforms ensures we capture a broad spectrum of the general consumer population across different demographics, purchasing behaviors, and platform loyalties. The collected user-generated content (reviews, image-text composites, discussions) reflects post-purchase, real-world usage experiences, providing insights into pragmatic satisfaction, usability issues, and value perception.
Secondly, the study utilized GooSeeker's integrated text-mining platform, which has recently gained significant adoption in Chinese textual-data research [50], to collect and process user-generated content from the four e-commerce platforms (JD.com, Taobao, Tmall, and Pinduoduo). Raw textual data containing the target keywords, including product reviews and forum discussions, were automatically crawled using predefined URL rules and pagination protocols; in the end, 689,540 words of comments and 468,309 words of image-associated text were captured. The platform's browser interface allows data to be imported directly without installing software and generates downloadable tokenized reports and frequency tables. Subsequently, using the segmentation and classification search platform in GooSeeker (14.1), we configured stop-word lists and custom dictionaries, merged synonyms, and extracted high-frequency words. In addition, the platform applies a machine-learning-based Chinese analysis method, enabling effective text analysis, and facilitates automatic text classification and matching through custom-tagged lexicons.
Thirdly, this study adopted a three-stage mixed-methods approach. In the first stage, term frequency analysis was conducted using the GooSeeker (V12.3.4) tool, combined with the TF-IDF algorithm, to automatically extract 37 candidate terms characterized by both high frequency and strong distinctiveness from the raw corpus. In the second stage, two researchers independently performed qualitative induction and categorization of the candidate terms to preliminarily construct a thematic framework. Cohen's Kappa coefficient was then calculated using SPSS 28.0 to assess classification consistency: percent agreement between the two coders was 89.19%, and the resulting coefficient (k = 0.795) reached the "substantial agreement" level of reliability, leading to the final establishment of a six-dimensional thematic analysis framework. In the third stage, all corpus materials were coded according to this framework, and the TF-IDF algorithm was applied again to calculate the comprehensive weight of each thematic dimension. Ranking these weights in descending order identified five core features, in the following sequence: interactivity, gamification, sociability, visibility, and humanness.
Accordingly, interactivity holds a predominant position across all indicators, establishing itself as the most distinctive and indispensable core concept in AI-powered fitness. Established research affirms its centrality as a key driver of user experience [5,20]. Operationally, this study defines interactivity as the bidirectional human–machine communication process characterized by voice-based control, real-time responsiveness, and personalized guidance from the AI agent. Concurrently, paralleling gamification, sociability functions as another fundamental pillar of the system, emphasizing interpersonal connectivity [51]. For this research, sociability denotes the inherent attribute and functional capacity of an AI system to support, promote, and sustain human social interactions and relationship-building processes. Sociability refers to the degree to which the AI agent facilitates a perceived connection to a social environment, often by enabling social comparisons or social interactions among users to satisfy the innate need for connection with others (e.g., friends and family members) and to receive social support (e.g., "The AI coach makes me feel I'm part of a group challenge"). This concept goes beyond the simple user–AI dyadic interaction. Gamification refers to the process of applying game design elements of AI agents to non-game contexts in order to create an enjoyable experience for users. Visibility refers to the extent to which the AI agent visually presents information to users, such as complete exercise regimens, technical movements, and performance results, on screen in real time. Humanness is conceptualized as the tendency to imbue the real or imagined behavior of AI fitness coaches with humanlike characteristics or emotions, focusing on static attributes that mimic human forms, such as a human-like name or voice tone (e.g., "The AI coach sounds like a real person"). Its primary role is to make the interaction feel less machine-like.
Therefore, AI agents integrate the five core features of interactivity, gamification, sociability, visibility, and humanness (see Table 3).

5. Study 2: Research Model and Hypotheses

5.1. The Conceptual Model

This study employs these five themes identified through text mining to integrate the CASA framework with trust in AI agents, thereby enhancing the explanation of user behaviors towards AI agents. As shown in the conceptual model (Figure 2), the independent variables include visibility, gamification, interactivity, humanness, and sociability. User compliance and active engagement serve as dependent variables, while cognitive trust and emotional trust in AI agents function as mediating variables.

5.2. Hypothesis Development

5.2.1. Humanness and Trust in AI Agents

Previous studies have demonstrated that humanness in smart services is positively correlated with trust [52,53,54], and that humanness affects emotional trust most profoundly [23]. Similar to anthropomorphism, this concept involves the tendency to attribute human-like traits, motivations, intentions, or emotions to the behaviors of nonhuman agents, whether real or imagined [15]. Recent scholarship has increasingly explored the influence of perceived humanness on individuals' trust in AI systems [23]. For instance, researchers have found a positive link between conversational human voice (CHV) and consumer trust [55]; human-like vocal qualities in virtual assistants can positively impact users' perceived friendliness, empathy, and security, thereby increasing trust and overall acceptance of the technology; and increased perceptions of humanness in chat agents correlate with greater trust in the organization [56]. Thus, we posit that humanness may increase cognitive and emotional trust in AI agents.
In terms of AI coaching, we propose that humanness enhances trust through dual pathways, aligning with core relational and functional expectations in digital fitness and e-commerce. First, by leveraging machine learning and NLP, acting as AI fitness coaches, AI agents simulate human-like reasoning and communication, enhancing users’ rational assessment of AI capabilities. As such, we can speculate that humanness may enhance cognitive trust in AI agents. Second, acting as AI fitness coaches, AI agents can use different vocal tones to communicate with users and enhance users’ affective bonds with the system. Thus, we posit that humanness may increase emotional trust in AI agents. Therefore, we hypothesize:
H1a. 
Humanness positively influences cognitive trust in AI agents.
H2a. 
Humanness positively influences emotional trust in AI agents.

5.2.2. Visibility and Trust in AI Agents

Extant research has shown that visibility positively influences trust [20,57], for instance by visually displaying products to users [58]. Visibility proves particularly valuable in fitness, where traditional home workouts often lack real-time guidance, resulting in both reduced engagement and increased injury risk [59]. In contrast, AI agents (acting as AI fitness coaches) address these limitations through enhanced visibility features: users can synchronously view exercise regimens, technical movements, and performance results. This represents a paradigm shift from conventional video-based workouts and text-based conversational agents [60]. In fitness e-commerce contexts, visibility may therefore increase cognitive and emotional trust in AI agents.
Leveraging AI-generated content (AIGC) technology, visibility enables AI agents (acting as AI fitness coaches) to convey their professional skills and emotions through on-screen displays. The professional presentation of entire exercise programs, real-time progress tracking, and accurate feedback makes it easier for users to believe in the system's professionalism, contributing to increased cognitive trust in AI agents. Simultaneously, aesthetic elements such as colors, interface shapes, and dynamic effects can trigger users' emotional resonance, which may increase emotional trust in AI agents. Therefore, we infer:
H1b. 
Visibility positively influences cognitive trust in AI agents.
H2b. 
Visibility positively influences emotional trust in AI agents.

5.2.3. Gamification and Trust in AI Agents

Gamification is primarily characterized as the incorporation of game design elements within non-game contexts [61]. This process integrates various motivational components, including points, levels, leaderboards, virtual badges, and avatars, into interactions and reward systems, making the user experience more enjoyable [62,63]. Researchers have shown that a verified badge can influence consumer trust [64]. Extant research often conceptualizes gamification through two first-order factors: hedonic and utilitarian features [65]. Moreover, gamification exerts a notable and beneficial influence on hedonic and utilitarian motivation by introducing elements of entertainment [66]. Hedonic motivations, in turn, can affect trust in technology, while enjoyment as a gamification element aims to provide joy during technology use [67]. Given that perceived enjoyment has been shown to positively influence trust [68], we posit that gamification may increase cognitive and emotional trust in AI agents.
The integration of gamification in AI agents (acting as AI fitness coaches in this study) can foster user trust through dual pathways in the context of fitness e-commerce. First, gamified settings (e.g., progress bars, mission challenges, and narrative storylines) in AI agents (acting as AI fitness coaches) can provide challenging goals and inspire users’ desire to actively participate, increasing trust in AI agents in the effectiveness of functionality. As such, we can speculate that gamification may enhance cognitive trust in AI agents. Second, instant rewards (e.g., celebratory animations and sound effects) can correlate system use with a sense of enjoyment [69], forming emotional attachment to keep exercising. Additionally, immersive narrative elements can trigger users’ emotional projection and develop perceived companionship (“ally support”), thereby increasing emotional trust in AI agents. Therefore, we formulate the following hypothesis:
H1c. 
Gamification positively influences cognitive trust in AI agents.
H2c. 
Gamification positively influences emotional trust in AI agents.

5.2.4. Interactivity and Trust in AI Agents

Previous researchers have established a link between interactivity and consumer trust [20,70,71]. In AI coaching contexts, interactivity manifests through personalized guidance, voice control, and responsiveness. Empirical studies indicate that personalization techniques and responsiveness in the realm of e-commerce directly enhance cognitive and affective trust [57,72]. Moreover, high-quality communication in AI chatbot services has been found to be crucial for building cognitive and emotional trust [73]. Hence, we posit that interactivity may increase cognitive and emotional trust in AI agents.
The integration of advanced generative AI technologies (e.g., GANs for real-time motion synthesis) and general-purpose language models (e.g., GPT-4, Doubao, and Deepseek-R1 for adaptive dialogue), combined with NLP, enables continuous, 24/7 real-time interactions. This supports trust development through two key mechanisms: First, personalized, on-demand coaching services can enhance perceived competence and autonomy, thereby increasing perceived trust in AI agents (AI fitness coaches). These AI-driven interactive features are increasingly integral to e-commerce applications. Second, empathetic and real-time responsiveness help foster emotional connections and a sense of belonging, strengthening emotional trust in AI agents (AI fitness coaches). Therefore, we hypothesize:
H1d. 
Interactivity positively influences cognitive trust in AI agents.
H2d. 
Interactivity positively influences emotional trust in AI agents.

5.2.5. Sociability and Trust in AI Agents

Sociability reflects the fundamental human need for social connection and support, whether from human coaches, friends, or family [51]. Disregarding or improperly implementing these needs may undermine motivation. Previous studies have demonstrated that social support and social presence positively correlate with trust in social commerce contexts and encourage social sharing intentions [74]. In online settings, perceptions of social support positively affect participants' swift trust [75], and both informational and emotional social support have been shown to significantly influence trust in e-commerce [76]. Thus, we propose that sociability enhances cognitive and emotional trust in AI agents.
In AI coaching, on one hand, AI agents (acting as AI fitness coaches) enable information exchange with human coaches, friends, family members, and co-users on the platforms, while systematically integrating user-generated content (e.g., reviews, ratings, and use cases) to provide credible third-party validation. This enhances perceived professionalism and reliability, thereby strengthening cognitive trust in AI agents. On the other hand, designed social features of AI coach agents, such as simulated interpersonal care mechanisms and structured social validation processes, can create psychological safety, develop meaningful user-platform attachment, counteract social isolation, and establish lasting affective bonds, consequently enhancing emotional trust in AI agents. Therefore, we hypothesize:
H1e. 
Sociability positively influences cognitive trust in AI agents.
H2e. 
Sociability positively influences emotional trust in AI agents.

5.2.6. Trust in AI Agents and Compliance

Previous evidence demonstrates that users' initial trust, encompassing both cognitive and emotional trust, significantly increases compliance with a robot's directives [25]. Furthermore, a prior study revealed that the propensity to trust can greatly affect the likelihood of adhering to AI-generated recommendations [77]. These findings collectively suggest that cognitive and emotional trust serve as critical antecedents to user compliance in human–AI interactions.
First, when users perceive AI agents (acting as AI fitness coaches) to possess high professional competence and analytical capabilities to analyze their workouts thoroughly, they are more likely to trust and comply with exercise recommendations. Thus, we deduce that cognitive trust in AI agents might positively affect user compliance. Second, through emotional language (e.g., “the progress is remarkable!”) and the sense of belonging and identity from online community interactions, AI agents (acting as AI coaches) can transform affective bonds into sustained compliance with system recommendations, indicating that emotional trust in AI agents also enhances compliance. Therefore, we hypothesize:
H3a. 
Cognitive trust in AI agents positively influences user compliance.
H3b. 
Emotional trust in AI agents positively influences user compliance.

5.2.7. Trust in AI Agents and Active Engagement

Research has demonstrated that trust significantly enhances user engagement [9,78]. Recent work has revealed that brand trust serves as a mediating construct linking brand anthropomorphism to the multidimensional aspects of consumer-brand engagement, which include cognitive, affective, and behavioral dimensions [16]. Conversely, trust deficiency is regarded as a primary factor contributing to consumer disengagement [79]. Thus, we posit that cognitive and emotional trust in AI agents might affect active engagement.
Similarly, based on the professionalism of AI agents (acting as AI fitness coaches to provide scientific and reasonable pre-exercise assessments and personalized exercise plans), users will trust their competence, which promotes adherence to recommendations, consumption of related content, and consistent habit formation. Thus, cognitive trust in AI agents (acting as AI coaches) might positively increase active engagement. Furthermore, empathic expression and community features, such as leaderboards, facilitate emotional connections and community interaction, reinforcing emotional trust in these AI agents and encouraging active engagement. Thus, we propose that:
H4a. 
Cognitive trust in AI agents positively influences active engagement.
H4b. 
Emotional trust in AI agents positively influences active engagement.

5.2.8. User Compliance and Active Engagement

Research has established a robust connection between exercise adherence and user engagement, with gamification serving as a crucial catalyst for both outcomes [80,81]. When a robot engages in human-like behaviors, users' initial trust increases dramatically, resulting in higher compliance with the robot's commands [78]. Notably, a previous study also demonstrated that humanness significantly affects user compliance [82], which might further enhance active engagement.
In AI coaching contexts, users are more likely to adhere to exercise programs because of the system's professionalism, which subsequently supports further active exploration. Additionally, through gamification design, AI agents (acting as AI fitness coaches) gradually transform passive task completion into active challenges, enabling users to actively participate in, share, and even recommend fitness activities within their social networks. Therefore, we hypothesize:
H5. 
User compliance positively influences active engagement.

5.3. Measurement

According to the research hypotheses, the proposed theoretical framework comprises nine underlying variables: humanness, visibility, gamification, interactivity, sociability, cognitive trust in AI agents, emotional trust in AI agents, user compliance, and active engagement. All measurement instruments were adopted from previously validated studies to ensure validity, reliability, and contextual fit. First, for the technical features of the AI agent, humanness was measured with 3 items from [83], visibility with 3 items from [17], gamification with 3 items from [84], and interactivity with 5 items from [85]. For the social features, sociability was measured with 3 items from [86]. Second, user compliance was assessed with 4 items adapted from [87], and active engagement was measured with 4 items from [88]. Third, trust in AI agents was measured using scale items from [27,48] for both the cognitive and affective components. All scales used a 7-point Likert scale, ranging from "strongly disagree (1)" to "strongly agree (7)". Demographic data (i.e., age, gender, educational level, and exercise frequency) were also gathered.

5.4. Data Collection and Analysis

Firstly, we carried out an online survey in China via Credamo (https://www.credamo.com/ (accessed on 5 December 2024)), collecting questionnaire data from December 2024 to January 2025. Credamo is a professional online data collection platform widely used in academic and market research in China; it employs stratified sampling and attention-check mechanisms to improve sample quality, and its reliability and data quality have been validated in numerous peer-reviewed international studies. Before launching the survey, the questionnaire underwent a two-stage review by experts in psychology and physical education, followed by two pilot tests (each with N = 30 participants). The first pilot test identified two ambiguous items, which were revised to ensure accuracy; the second pilot confirmed the clarity of the revisions. After two typographical errors were corrected, the final version was deployed for full-scale data collection. These steps ensured that the constructs, such as cognitive trust in AI agents, emotional trust in AI agents, and active engagement, were accurately operationalized and culturally appropriate.
Secondly, this study, conducted in accordance with the ethical principles of the Declaration of Helsinki, received approval from the Ethics Committee of Guangdong Ocean University. Prior to participation, all individuals provided written informed consent. Participation was entirely voluntary, and the confidentiality and anonymity of all data were strictly maintained throughout the research.
Thirdly, participants began by watching a three-minute video introducing the technical features and functions of AI agents (acting as AI fitness coaches). They then completed questions covering demographic information (e.g., gender, education, income, and experience) and the focal constructs, including interactivity, humanness, visibility, gamification, sociability, cognitive trust in AI agents, emotional trust in AI agents, user compliance, and active engagement. The average completion time ranged from 8 to 12 min. Lastly, participants were randomly awarded electronic cash prizes as incentives. In summary, our sampling design accounted for key demographic representativeness, utilized a reputable platform for quota-based recruitment, and incorporated rigorous pre-testing and quality checks. These steps strengthen the reliability and generalizability of our findings.
Finally, out of 630 responses, 599 valid questionnaires were retained after excluding 31 invalid responses. The data were analyzed quantitatively by using Partial Least Squares Structural Equation Modeling (PLS-SEM) in SmartPLS 3.2.8 to test the hypothesized relationships among variables and evaluate the overall model fit. The survey phase provides structured, psychometrically validated data that tests specific theoretical pathways and controls for extraneous variables.
Our choice of PLS-SEM was deliberate and is justified by several characteristics of our research model and data. First, the primary objective of our study is prediction and theory development regarding the key driver constructs of trust in AI agents, rather than theory confirmation alone, which aligns with the strengths of PLS-SEM. Second, our model includes formative constructs in the measurement of the AI agent's features, and PLS-SEM is particularly suitable for handling such complex models with formative indicators. Finally, PLS-SEM does not impose strict assumptions about data normality, making it a robust choice for our data distribution.

5.5. Results

5.5.1. Participants’ Demographic Characteristics

Table 4 illustrates that females slightly outnumbered males in the sample, accounting for 57.8% and 42.2%, respectively. The most represented age groups were 26–30 (21.4%) and 31–35 (41.6%), indicating a higher likelihood of AI fitness coach usage among young working-age adults. Educationally, 61.4% of participants held a bachelor's degree and 31.9% had a graduate degree. Income distribution revealed that 24.0% earned 5001–8000 RMB monthly, while 18.9% reported incomes above 15,001 RMB. More participants were married (75.9%) than single (23.9%). Additionally, 69.3% of participants exercised more than 3 times per week, indicating an active user base.

5.5.2. Common-Method Variance Bias Test (CMV)

To begin with, we implemented procedural remedies during the survey design stage. Specifically, we improved the item phrasing to reduce ambiguity, used different scale formats for predictors and outcomes where possible, and assured respondents of the confidentiality and academic purpose of the study to reduce evaluation apprehension.
In addition, we used two approaches to test for common method bias (CMB). Firstly, Harman's single-factor test assessed the potential for common-method bias: a single factor accounted for just 34.59% of the variance, well below the 50% benchmark, indicating that CMV does not significantly threaten our results. Secondly, CMB was assessed using the common method factor approach [89]. The average method variance of the items is 0.004, while the average substantive variance is 0.645; the ratio of substantively explained variance to method-based variance is approximately 161:1, indicating that common method bias is not a concern in this study (see Table 5).
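The logic of Harman's single-factor test can be reproduced transparently in Python: standardize the items, take the eigenvalues of their correlation matrix, and check whether the first unrotated factor captures less than 50% of the total variance. The sketch below uses simulated item responses, not our survey data, so the variance share it reports is illustrative only.

```python
# Harman's single-factor test on simulated data (placeholder for survey items).
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_items = 599, 31
items = rng.normal(size=(n_respondents, n_items))  # simulated Likert-style items

# Eigen-decompose the item correlation matrix; eigenvalues sum to n_items.
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sorted descending

first_factor_share = eigenvalues[0] / eigenvalues.sum()
print(f"First unrotated factor explains {first_factor_share:.1%} of total variance")
# A share below 0.50 suggests no single method factor dominates the data.
```

In the study's actual data this share was 34.59%, comfortably under the 50% benchmark.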
Last but not least, procedural measures, such as randomizing the question sequence and ensuring participant anonymity, were employed to minimize bias further.

5.5.3. Measurement Model

The measurement model demonstrated robust psychometric properties across key validity and reliability criteria (see Table 6 and Table 7). All standardized factor loadings exceeded 0.72, and the average variance extracted (AVE) values ranged from 0.582 to 0.673, exceeding the recommended thresholds of 0.7 and 0.5, respectively. Reliability assessments revealed Cronbach's α coefficients between 0.719 and 0.856 and composite reliability (CR) scores ranging from 0.84 to 0.89, all exceeding the 0.70 benchmark. Furthermore, Heterotrait–Monotrait Ratio (HTMT) scores for discriminant validity remained below 0.85 (Max = 0.834, Min = 0.459), confirming distinct construct boundaries.
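For readers less familiar with these criteria, AVE and CR follow directly from the standardized loadings. The Python sketch below computes both for a hypothetical three-item construct; the loadings are invented for illustration and are not the study's estimates.

```python
# AVE and composite reliability from standardized loadings (hypothetical values).
import numpy as np

def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

def composite_reliability(loadings):
    """CR = (sum lam)^2 / ((sum lam)^2 + sum(1 - lam^2))."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return float(num / (num + np.sum(1.0 - lam ** 2)))

loadings = [0.78, 0.81, 0.74]  # hypothetical 3-item construct
print(f"AVE = {ave(loadings):.3f}, CR = {composite_reliability(loadings):.3f}")
# AVE > 0.5 and CR > 0.7 indicate adequate convergent validity and reliability.
```

For these hypothetical loadings, AVE is about 0.604 and CR about 0.820, both above the conventional cutoffs cited above.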

5.5.4. Structural Model

The structural model was estimated to evaluate the relationships among technical–social features, trust dimensions, and user outcomes. The empirical results of the proposed model are comprehensively summarized in Figure 3 and Table 7. Key findings are outlined below:
Firstly, as demonstrated in Table 8, the structural model showed a good fit, with a standardized root mean square residual (SRMR) value of 0.063, below the recommended threshold of 0.080 [90]. Secondly, based on the model fit indices, the hypothesized nine-factor model demonstrates statistically superior fit compared to both competing models (the five-factor and four-factor models), supporting its adoption as the preferred theoretical framework. A detailed examination of the fit indices reveals consistent advantages for the nine-factor model. Most notably, it achieves the lowest SRMR value (0.063) among all models, falling below the recommended threshold of 0.08 and indicating better approximate fit. The nine-factor model also shows substantially lower discrepancy measures, with both d_ULS (1.840) and d_G (0.529) values markedly smaller than those of the five-factor (d_ULS = 2.310, d_G = 0.612) and four-factor (d_ULS = 2.314, d_G = 0.586) models. The chi-square statistic provides further evidence, with the nine-factor model yielding the lowest value (1815.375) compared to both the five-factor (2048.289) and four-factor (1947.436) models, suggesting better reproduction of the observed covariance structure. Additionally, the nine-factor model achieves the highest NFI value (0.763), outperforming both the five-factor (0.733) and four-factor (0.746) models. While the principle of parsimony might initially favor simpler models, the substantial improvement in fit indices across all measures justifies the additional complexity of the nine-factor structure; this consistent pattern of superior performance strongly suggests that the more nuanced theoretical representation offered by the nine-factor model better captures the underlying constructs and their relationships.
In conclusion, based on comprehensive model fit evaluation, the nine-factor model represents the most appropriate framework for understanding the phenomenon under investigation, as it provides significantly better fit to the data without compromising theoretical coherence.
Thirdly, all variance inflation factor (VIF) values ranged from 1.326 to 2.160, falling below 3.3, which indicates no multicollinearity.
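The collinearity check behind these numbers is straightforward: each predictor is regressed on the remaining predictors, and VIF_j = 1 / (1 - R_j^2). The Python sketch below illustrates this with simulated predictors standing in for the five feature constructs; the data and the induced correlation are illustrative assumptions, not the study's data.

```python
# Variance inflation factors via ordinary least squares (simulated predictors).
import numpy as np

def vif(X):
    """Return the VIF for each column of X (predictors only, no outcome)."""
    X = np.asarray(X, dtype=float)
    vifs = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])  # intercept + other predictors
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - resid.var() / y.var()
        vifs.append(1.0 / (1.0 - r2))
    return vifs

rng = np.random.default_rng(1)
X = rng.normal(size=(599, 5))   # five simulated feature constructs
X[:, 1] += 0.5 * X[:, 0]        # induce mild collinearity between two features
vifs = vif(X)
print([round(v, 2) for v in vifs])  # all values should sit well below 3.3
```

Even with the induced correlation, the VIFs stay far below the 3.3 cutoff applied in the study.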
Fourthly, the analysis revealed that socio-technical features of AI agents (acting as AI fitness coaches) influence cognitive trust with an R2 score of 49.7% and emotional trust with an R2 score of 49.5%. Furthermore, these two dimensions of trust (CT and ET) in AI agents explained approximately 42.8% of the variance in user compliance and 48.5% in active engagement.
Fifthly, as demonstrated in Table 9, AI agents’ technical and social aspects strongly enhance CT and ET. In particular, technical aspects (HU: β = 0.183, p < 0.001; VI: β = 0.152, p < 0.001; and GA: β = 0.179, p < 0.001) and social aspects (IN: β = 0.270, p < 0.001; SO: β = 0.139, p < 0.001) significantly enhance CT, and these findings support H1a–H1e. Similarly, technical aspects (HU: β = 0.229, p < 0.001; VI: β = 0.165, p < 0.01; GA: β = 0.162, p < 0.001) and social aspects (IN: β = 0.242, p < 0.001; SO: β = 0.123, p < 0.01) also positively affect ET, thereby supporting H2a–H2e. Both trust dimensions drive behavioral outcomes: CT significantly predicts UC (β = 0.417, p < 0.001; supporting H3a) and AE (β = 0.274, p < 0.001; supporting H4a), while ET also enhances UC (β = 0.317, p < 0.001; supporting H3b) and AE (β = 0.364, p < 0.001; supporting H4b). Further, UC further contributes to AE (β = 0.177, p < 0.001), supporting the behavioral interdependence hypothesized in H5.

5.5.5. Mediation Analysis

Bootstrap analysis (n = 5000) examined whether trust in AI agents mediates how AI agents’ social and technical features affect user compliance and active engagement. As shown in Table 10 and Figure 3, the results indicate that:
(1) CT mediates the effects of technical features (HU: β = 0.076, p < 0.001; VI: β = 0.063, p < 0.001; GA: β = 0.075, p < 0.001) and social features (IN: β = 0.113, p < 0.001; SO: β = 0.058, p < 0.01) on UC, as well as their effects on AE (HU: β = 0.050, p < 0.01; VI: β = 0.042, p < 0.01; GA: β = 0.049, p < 0.01; IN: β = 0.074, p < 0.001; SO: β = 0.038, p < 0.01).
(2) ET similarly mediates technical features (HU: β = 0.072, p < 0.001; VI: β = 0.052, p < 0.001; GA: β = 0.051, p < 0.001) and social features (IN: β = 0.077, p < 0.001; SO: β = 0.039, p < 0.01) on UC, along with their influences on AE (HU: β = 0.083, p < 0.01; VI: β = 0.060, p < 0.01; GA: β = 0.059, p < 0.01; IN: β = 0.088, p < 0.001; SO: β = 0.045, p < 0.01).
(3) UC significantly mediates the relationship between both trust in AI agents (CT: β = 0.074, p < 0.001; ET: β = 0.056, p < 0.001) and active engagement, demonstrating sequential mediation pathways.
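The percentile-bootstrap logic behind these mediation tests can be sketched as follows: resample the data with replacement, re-estimate the a-path (feature to trust) and b-path (trust to engagement, controlling for the feature), and check whether the 95% interval of the a*b product excludes zero. The data below are simulated with an assumed true indirect effect, so the interval is illustrative only.

```python
# Percentile bootstrap (5000 resamples) for a simple indirect effect a*b.
import numpy as np

rng = np.random.default_rng(2)
n = 599
feature = rng.normal(size=n)                    # e.g., a simulated feature score
trust = 0.4 * feature + rng.normal(size=n)      # mediator with assumed a-path 0.4
engage = 0.3 * trust + 0.1 * feature + rng.normal(size=n)  # assumed b-path 0.3

def indirect_effect(f, m, y):
    a = np.polyfit(f, m, 1)[0]                  # slope of f -> m
    A = np.column_stack([np.ones_like(f), f, m])
    b = np.linalg.lstsq(A, y, rcond=None)[0][2] # m -> y slope, controlling for f
    return a * b

boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)                 # resample cases with replacement
    boot.append(indirect_effect(feature[idx], trust[idx], engage[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```

Because the simulated indirect effect is 0.4 x 0.3 = 0.12, the resulting interval should exclude zero, mirroring the significance pattern reported in Table 10.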

6. Discussion and Conclusions

6.1. Conclusions

In summary, our findings reveal that AI agents' key technical and social features, including humanness, visibility, gamification, interactivity, and sociability, positively impact user compliance and active engagement, mediated by cognitive and affective trust. All the hypotheses are supported. These results align with and extend prior work in four key ways. First, the findings align with the CASA framework, emphasizing the significance of human-like interactions in AI design. Through the humanness of AI agents (e.g., AI fitness coaches), a name, an emotional voice, and visual effects can effectively attract more users and promote active participation. This demonstrates the important role of humanness in human–nonhuman interactions [87] and is consistent with prior studies [87,91,92]. Zhou & Wang [91] argued that naming AI agents can positively affect responsible behaviors in shared-economy contexts, with interaction frequency as a mediating mechanism, indicating that AI agents with names can significantly alter user behavior. Westphal et al. [92] empirically demonstrated the significant impact of social cues on the credibility of virtual influencers in influencer marketing, with anthropomorphism serving a moderating role, suggesting a connection between anthropomorphism and credibility. Kim & Im [87] pointed out that anthropomorphic response depends on perceptions of agent appearance, showing that users perceive more humanness in highly intelligent but disembodied agents than in highly intelligent agents with poorly designed appearances; thus, the appearance of AI agents may influence human–AI interactions. Together, these results demonstrate that human-like features can modify user behavior by enhancing trust. Second, our findings are consistent with research highlighting the significance of gamification and sociability in enhancing trust and fostering user compliance and engagement [64,81,92].
For example, Liao [64] confirmed through three experiments that verified badges of micro-influencers can influence followers’ trust and behavioral intentions in influencer marketing, confirming the effectiveness of gamification on social media. As noted above [92], the impact of social cues on user trust has been extensively explored, and prior findings suggest that deploying games and gamification can enhance user engagement in smoking cessation interventions [81]. Third, the findings demonstrate that trust operates as a critical mechanism strengthening users’ confidence in AI capabilities, thereby fostering greater adherence to system recommendations. This aligns with the trust–compliance paradigm in AI systems and extends it to the fitness domain by revealing how cognitive assessments of competence and emotional bonds jointly contribute to this effect [93]. Finally, enhanced compliance with the fitness recommendations of AI agents (acting as AI fitness coaches) directly predicts active engagement—a link rarely demonstrated in previous studies—highlighting the importance of designing systems that sustain user interaction and proving particularly crucial for enabling individuals to sustain prolonged exercise regimens regardless of movement skill constraints, location, or time [94]. This extends the work of Gao et al. [95], who found that, for intelligent customer service robots, customer engagement mediates the relationship between perceived interactivity (PI) and value co-creation.

6.2. Theoretical Contribution

Grounded in the Computers Are Social Actors (CASA) framework, this study empirically investigates how AI agents’ technical and social features, when the agents act as AI fitness coaches, shape users’ active engagement in the smart home fitness e-commerce context. It advances the literature on AI agents by applying the CASA paradigm to demonstrate how trust in AI agents mediates the relationship between agents’ features and user behaviors. From a theoretical perspective, our findings make three key contributions.
Firstly, while prior studies have predominantly examined the features of AI through a technical lens [96,97], this study extends the literature by applying the CASA paradigm to demonstrate how the technical and social features of AI agents influence active engagement via trust, and it expands research on the core characteristics of AI agents—previously synthesized through an extensive review of AI articles in 16 top marketing journals [98]—to the context of AI-enabled fitness interactive marketing. More specifically, our findings deepen theoretical understanding by integrating CASA with trust research, demonstrating not only that users attribute human-like qualities to AI agents, but also that specific dimensions of anthropomorphism—such as interactivity, humanness, and sociability—directly foster both cognitive and emotional trust. Key technical (humanness, visibility, gamification) and social (interactivity, sociability) features of AI agents can leverage these tendencies by making agents (e.g., AI fitness coaches) more relatable, transparent, engaging, and socially connected; prior work has likewise shown that trust in AI agents is contingent on such attributes [99]. Building on the established relationships of humanness [57], interactivity [17,23], and visibility [20] with user trust, this study makes novel contributions by (1) empirically validating gamification and sociability as distinct yet complementary trust-building mechanisms and (2) demonstrating their combined effects on both cognitive and emotional trust dimensions in fitness contexts. This moves beyond prior applications of CASA that often focus on social presence alone [40,41] and offers a more nuanced, trust-centered pathway through which anthropomorphism shapes user behavior (compliance and active engagement).
Secondly, the findings imply that trust in AI agents develops through both cognitive and emotional dimensions in the context of smart home fitness e-commerce platforms. Although prior studies have explored consumer trust in contexts such as social shopping, healthcare, and law enforcement [29,99], the emotional aspects of trust—critical for fostering user engagement and loyalty in fitness-related e-commerce—remain unexamined. To date, no study has specifically investigated how emotional trust in AI agents shapes user experience and decision-making on smart home fitness platforms. This paper provides a theoretical framework for researchers examining both types of trust in AI agents. In addition, we present the first comprehensive model demonstrating how humanness, visibility, gamification, interactivity, and sociability foster emotional and cognitive trust, respectively. Through this dual-path conception of trust (cognitive and emotional), the study extends previous research [64,70]. Liao [64] discovered that consumers are likely to transfer account trust and post trust to a verified badge when they trust a social media platform, and Samarah et al. [70] found that perceived brand interactivity is positively related to brand trust and social media customer brand engagement (CBE); however, neither study explicitly distinguished which specific dimension of trust these effects target. Our study therefore deepens research on how technological and social characteristics affect user trust. For instance, highly anthropomorphic AI agents (acting as AI fitness coaches) may evoke emotional trust, while clear visibility of their decision-making processes can strengthen cognitive trust; interactivity and sociability further deepen trust by improving users’ feelings of connection and involvement.
Thus, this framework advances beyond single-dimension models by revealing how feature combinations simultaneously address users’ rational assessments and affective needs, offering theoretical insights for human–AI interaction research.
Finally, this paper also contributes theoretically by outlining the unique psychological mechanisms involved in active engagement with AI agents (acting as AI fitness coaches), particularly the transition from initial adoption and product purchases (e.g., online fitness courses) to dependency on smart home fitness e-commerce platforms. As AI-powered fitness coaches, these agents create a compelling value proposition for e-commerce platforms by delivering 24/7 personalized guidance, real-time feedback, and continuous progress tracking. This always-available support not only differentiates them from traditional offerings but also builds the digital trust necessary to drive purchase decisions and foster user loyalty.
By bridging the Computers Are Social Actors (CASA) paradigm with a dual-dimensional conception of trust (cognitive and emotional) in AI agents, this study moves beyond examining social presence in isolation [100] and beyond prior work that treated social presence as an outcome [101,102], positioning trust as the critical mediator that translates perceived humanness into behavioral engagement on SHF e-commerce platforms. It provides a holistic framework that advances theoretical understanding of how the technical and social features of AI agents shape trust formation and, subsequently, user compliance and active engagement in AI fitness settings. This interdisciplinary approach also bridges gaps in the AI fitness literature by connecting social psychology principles with technology design, offering transferable insights that may extend to other AI-assisted domains, such as education and healthcare, where similar trust-building mechanisms and human–AI interaction dynamics may operate.

6.3. Practical Implications

This study offers actionable insights for designers, product managers, and e-commerce strategists in the AI agent (e.g., AI fitness coach) and smart home fitness e-commerce industries. Firstly, we found that the five identified technical–social features of AI agents (visibility, gamification, interactivity, humanness, and sociability) positively influence cognitive and emotional trust in AI agents, and we argue that managers should leverage both technological and social features to build such trust within fitness e-commerce and live-streaming platforms. Sales teams should receive training that combines technical product knowledge, such as real-time movement correction and adaptive workout planning, with brand storytelling. This approach helps translate interpersonal trust into product trust during short-form videos or live shopping demonstrations. Incorporating social commerce elements, such as group fitness challenges with exclusive coupon incentives, shared achievement badges, and community leaderboards, can stimulate participation and improve user retention. These strategies will foster existing users’ sense of belonging while simultaneously attracting new participants.
Secondly, this study confirms that social features of AI agents, such as humanness and sociability, mainly influence active engagement positively via emotional trust in AI agents, and we contend that developers should imbue agents with human-like qualities that foster trust and align with e-retail strategies to enhance active engagement. Designers can incorporate personalized names, natural language dialogue, and context-aware responses to simulate empathy and understanding. These features not only improve user satisfaction but also support the emotional connections essential for subscription models and repeat purchases. For instance, given our finding that the humanness of the AI agent was a primary driver of emotional trust, we recommend that designers invest in nuanced anthropomorphic cues, such as natural language processing for more conversational interaction, to foster deeper user connection.
Thirdly, we find that technical features of AI agents mainly influence user compliance positively via cognitive trust in AI agents, and we confirm that leveraging users’ interactive behavioral data from fitness e-commerce platforms can enable hyper-personalized marketing and enhance users’ tolerance for uncertainty, thereby improving adoption and trust. Technically, optimizing motion tracking responsiveness and feedback accuracy remains essential to reduce latency-related distrust and support premium service experiences. Platform developers should prioritize real-time form correction and dynamically adaptive workout plans, as our results confirm that such functional reliability is critical for building user compliance and engagement.
Fourthly, given the strong influence of gamification and interactivity on compliance and engagement, it is crucial to keep these elements fresh and motivating. By regularly updating fitness courses and content—such as workouts designed by Olympic champions, celebrity coaches, and all-star instructors—AI agents (acting as AI fitness coaches) can strengthen users’ cognitive trust and drive their purchase behaviors. By integrating advanced fitness accessories (e.g., heart rate monitors, smart scales, smart grips, and posture sensors), AI agents (acting as AI fitness coaches and product recommenders) enable more scientific physical assessments and personalized recommendations. Simultaneously, by refreshing gamification elements—including achievement badges, personalized challenges, and progress visualizations—AI agents (acting as AI fitness coaches) can help maintain user motivation, compliance, and engagement. It is essential, however, to implement game mechanics in a way that avoids excessive competition and reinforces trust and intrinsic motivation in the home fitness e-commerce landscape. Together, these multifaceted approaches address the core requirements for active engagement and reduce churn in competitive fitness e-marketplaces.
Finally, the study identifies both cognitive and emotional trust in AI agents as critical mediators of active engagement and compliance, finding that features such as accuracy and reliability enhance cognitive trust, while empathetic responses and personalized interactions evoke emotional trust—particularly in fitness e-commerce settings where consumers assess product efficacy before purchase. For instance, an AI fitness coach (agent) can integrate data-driven guidance with supportive feedback to sustain user confidence and connection. By aligning feature design with trust-building and e-commerce synergy, firms with in-house AI agent design and implementation capabilities can improve user experience, stimulate long-term engagement, and capture value in the competitive digital health market. By explicitly linking each design and management action to the specific features and trust pathways identified in this study, practitioners can make informed, evidence-based decisions to enhance user engagement and commercial success in AI-empowered fitness e-commerce.

6.4. Limitations and Further Research

This study has certain limitations that simultaneously offer valuable directions for future research. Firstly, concerning contextual limitations, while our findings from the Chinese context provide valuable insights, the cultural specificity of the sample may limit the generalizability of the results, and cross-cultural validation is needed to determine the universality of the proposed framework across national and cultural settings. In particular, unique cultural characteristics of the Chinese sample, such as high collectivism and power distance, may have systematically shaped the observed relationships between the antecedents of trust in AI agents and active engagement: collectivism might amplify the role of social proof in trust formation, while power distance could affect users’ reliance on platform authority. These culturally specific dynamics suggest that the model’s applicability beyond similar cultural contexts is uncertain, and we recommend that future studies employ cross-cultural comparative designs to disentangle the universal and culture-specific mechanisms documented in this research.
To mitigate sample bias, future studies should employ stratified or cross-cultural sampling strategies to include demographically and experientially diverse user groups, or adopt longitudinal designs to examine how user perceptions evolve over time.
Secondly, the cross-sectional design captures user trust and active engagement at a single point in time, which limits our understanding of long-term trust dynamics. Longitudinal data tracking continued use after adoption could provide valuable insights into how trust evolves over time.
Thirdly, in terms of instrument constraints, because our findings were based on text mining and a questionnaire survey, subsequent investigations might employ qualitative interviews or user experience data to provide deeper insight into the psychological mechanisms underlying trust formation. To improve measurement validity, researchers may develop and validate context-specific scales, or integrate multi-method approaches such as physiological sensors or behavioral logs to complement self-reported data. For example, incorporating objective behavioral metrics (e.g., actual usage data and exercise adherence rates) alongside self-reported measures could strengthen the validity of engagement assessments.
Pursuing these methodological advancements would address current limitations while expanding the theoretical and practical understanding of trust and engagement in e-commerce fitness contexts.

Author Contributions

Conceptualization, J.X. and Q.X.; methodology, J.X.; software, J.X.; validation, J.X., L.W. and M.D.-Z.; formal analysis, J.X.; investigation, Q.X. and L.W.; resources, Q.X. and J.X.; data curation, J.X.; writing—original draft preparation, Q.X. and J.X.; writing—review and editing, J.X. and Q.X.; visualization, J.X. and L.W.; supervision, J.X.; project administration, J.X. and M.D.-Z.; funding acquisition, J.X. and M.D.-Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China [72102238, 72372110, 72172100], the Fundamental Research Funds for the Central Universities of China [2024ZY-SX06], the China Postdoctoral Science Foundation [2021M702319], and the Sichuan Federation of Social Science Associations [SCJJ24ND117].

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki. The study involving human participants was approved by the Ethics Committee of Guangdong Ocean University (approval code: GDOU-CSL-20240025; approval date: 10 September 2024).

Informed Consent Statement

Written informed consent was obtained from all participants prior to their involvement in the study. Participation was voluntary, and the confidentiality and anonymity of all collected data were guaranteed.

Data Availability Statement

Data will be available on request from the corresponding author.

Acknowledgments

This paper received the Best Paper Award at the Second Interactive Marketing Symposium and Workshop and has been revised according to feedback received during the conference presentation. We extend our sincere gratitude to the editors and anonymous reviewers for their valuable and constructive feedback, which has significantly improved this manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Singh, A. Smart Fitness Mirrors Market—By Type, By Frame Material, By Price Range, By Shape, By Screen Size, By End-Use, By Distribution Channel, Forecast 2025–2034; Global Market Insights Inc.: Pune, MH, India, 2024; Available online: https://www.gminsights.com/zh/industry-analysis/smart-fitness-mirror-market (accessed on 2 May 2025).
  2. Bocean, C.G.; Popescu, L.; Simion, D.; Sperdea, N.M.; Puiu, C.; Marinescu, R.C.; Maria, E. The Role of AI Technologies in E-Commerce Development: A European Comparative Study. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 225. [Google Scholar] [CrossRef]
  3. Qi, Y.; Sajadi, S.M.; Baghaei, S.; Rezaei, R.; Li, W. Digital Technologies in Sports: Opportunities, Challenges, and Strategies for Safeguarding Athlete Wellbeing and Competitive Integrity in the Digital Era. Technol. Soc. 2024, 77, 102496. [Google Scholar] [CrossRef]
  4. Khasentino, J.; Belyaeva, A.; Liu, X.; Yang, Z.; Furlotte, N.A.; Lee, C.; Schenck, E.; Patel, Y.; Cui, J.; Schneider, L.D.; et al. A Personal Health Large Language Model for Sleep and Fitness Coaching. Nat. Med. 2025, 31, 3394–3403. [Google Scholar] [CrossRef] [PubMed]
  5. Wang, L.; Xue, Q.; Xue, J. My Fitness Coach Is AI: Unraveling the Influential Mechanisms of AI Coach-User-Human Coach Interactions on Users’ Behavioral Intention. In Proceedings of the International Conference on Computer Vision, Robotics, and Automation Engineering (CRAE 2024), Kunming, China, 21–23 June 2024; Cheng, Q., Chen, L., Eds.; SPIE: Bellingham, WA, USA, 2024; p. 44. [Google Scholar]
  6. Joshitha, K.L.; Madhanraj, P.; Rithick Roshan, B.; Prakash, G.; Monish Ram, V.S. AI-FIT COACH—Revolutionizing Personal Fitness With Pose Detection, Correction and Smart Guidance. In Proceedings of the 2024 International Conference on Communication, Computing and Internet of Things (IC3IoT), Chennai, India, 17–18 April 2024; IEEE: New York, NY, USA, 2024; pp. 1–5. [Google Scholar]
  7. Gao, Y.; Liu, H. Artificial intelligence-enabled personalization in interactive marketing: A customer journey perspective. J. Res. Interact. Mark. 2023, 17, 663–680. [Google Scholar] [CrossRef]
  8. Zhang, D.; Tao, X. How Do Technology and Gamification Influence Continued Usage Intention of AI-Based Wearable Devices? Evidence from SEM-ANN. Asia Pac. J. Mark. Logist. 2025; ahead-of-print, 1–22. [Google Scholar] [CrossRef]
  9. Ahmad, A.; Ghani, N.A.; Hamid, S. Examining the Predictors of Consumer Trust and Social Commerce Engagement: A Systematic Literature Review. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 247. [Google Scholar] [CrossRef]
  10. Zhao, X.; You, W.; Zheng, Z.; Shi, S.; Lu, Y.; Sun, L. How Do Consumers Trust and Accept AI Agents? An Extended Theoretical Framework and Empirical Evidence. Behav. Sci. 2025, 15, 337. [Google Scholar] [CrossRef]
  11. Wang, C.L. Editorial—What is an interactive marketing perspective and what are emerging research areas? J. Res. Interact. Mark. 2024, 18, 161–165. [Google Scholar] [CrossRef]
  12. Nass, C.; Moon, Y. Machines and Mindlessness: Social Responses to Computers. J. Soc. Issues 2000, 56, 81–103. [Google Scholar] [CrossRef]
  13. De Kervenoael, R.; Schwob, A.; Hasan, R.; Psylla, E. SIoT Robots and Consumer Experiences in Retail: Unpacking Repeat Purchase Intention Drivers Leveraging Computers Are Social Actors (CASA) Paradigm. J. Retail. Consum. Serv. 2024, 76, 103589. [Google Scholar] [CrossRef]
  14. Lee, H.; Yi, Y. Humans vs. Service Robots as Social Actors in Persuasion Settings. J. Serv. Res. 2024, 28, 150–167. [Google Scholar] [CrossRef]
  15. Epley, N.; Waytz, A.; Cacioppo, J.T. On Seeing Human: A Three-Factor Theory of Anthropomorphism. Psychol. Rev. 2007, 114, 864–886. [Google Scholar] [CrossRef] [PubMed]
  16. Patrizi, M.; Šerić, M.; Vernuccio, M. Hey Google, I Trust You! The Consequences of Brand Anthropomorphism in Voice-Based Artificial Intelligence Contexts. J. Retail. Consum. Serv. 2024, 77, 103659. [Google Scholar] [CrossRef]
  17. Pang, H.; Ruan, Y.; Zhang, K. Deciphering Technological Contributions of Visibility and Interactivity to Website Atmospheric and Customer Stickiness in AI-Driven Websites: The Pivotal Function of Online Flow State. J. Retail. Consum. Serv. 2024, 78, 103795. [Google Scholar] [CrossRef]
  18. Bitrián, P.; Buil, I.; Catalán, S. Enhancing User Engagement: The Role of Gamification in Mobile Apps. J. Bus. Res. 2021, 132, 170–185. [Google Scholar] [CrossRef]
  19. Sun, Y.; Chen, J.; Sundar, S.S. Chatbot Ads with a Human Touch: A Test of Anthropomorphism, Interactivity, and Narrativity. J. Bus. Res. 2024, 172, 114403. [Google Scholar] [CrossRef]
  20. Zhang, M.; Liu, Y.; Wang, Y.; Zhao, L. How to Retain Customers: Understanding the Role of Trust in Live Streaming Commerce with a Socio-Technical Perspective. Comput. Hum. Behav. 2022, 127, 107052. [Google Scholar] [CrossRef]
  21. Pitardi, V.; Marriott, H.R. Alexa, She’s Not Human But… Unveiling the Drivers of Consumers’ Trust in Voice-based Artificial Intelligence. Psychol. Mark. 2021, 38, 626–642. [Google Scholar] [CrossRef]
  22. Edwards, O.V.; Taasoobshirazi, G. Social Presence and Teacher Involvement: The Link with Expectancy, Task Value, and Engagement. Internet High. Educ. 2022, 55, 100869. [Google Scholar] [CrossRef]
  23. Li, Y.; Lee, S.O. Navigating the Generative AI Travel Landscape: The Influence of ChatGPT on the Evolution from New Users to Loyal Adopters. IJCHM 2024, 37, 1421–1447. [Google Scholar] [CrossRef]
  24. Chen, Q.Q.; Park, H.J. How Anthropomorphism Affects Trust in Intelligent Personal Assistants. IMDS 2021, 121, 2722–2737. [Google Scholar] [CrossRef]
  25. Glikson, E.; Woolley, A.W. Human Trust in Artificial Intelligence: Review of Empirical Research. Acad. Manag. Ann. 2020, 14, 627–660. [Google Scholar] [CrossRef]
  26. Kaplan, A.D.; Kessler, T.T.; Brill, J.C.; Hancock, P.A. Trust in Artificial Intelligence: Meta-Analytic Findings. Hum. Factors 2023, 65, 337–359. [Google Scholar] [CrossRef] [PubMed]
  27. Duenas, N.; Mangen, C. Trust in International Cooperation: Emotional and Cognitive Trust Complement Each Other over Time. Crit. Perspect. Account. 2023, 92, 102328. [Google Scholar] [CrossRef]
  28. Ngo, T.T.A.; Bui, C.T.; Chau, H.K.L.; Tran, N.P.N. Electronic Word-of-Mouth (eWOM) on Social Networking Sites (SNS): Roles of Information Credibility in Shaping Online Purchase Intention. Heliyon 2024, 10, e32168. [Google Scholar] [CrossRef]
  29. Wu, W.; Wang, S.; Ding, G.; Mo, J. Elucidating Trust-Building Sources in Social Shopping: A Consumer Cognitive and Emotional Trust Perspective. J. Retail. Consum. Serv. 2023, 71, 103217. [Google Scholar] [CrossRef]
  30. Liu, B.; Li, X.; Zhang, J.; Wang, J.; He, T.; Hong, S.; Liu, H.; Zhang, S.; Song, K.; Zhu, K.; et al. Advances and Challenges in Foundation Agents: From Brain-Inspired Intelligence to Evolutionary. arXiv 2025, arXiv:2504.01990. [Google Scholar] [CrossRef]
  31. Xie, J.; Chen, Z.; Zhang, R.; Wan, X.; Li, G. Large multimodal agents: A survey. arXiv 2024, arXiv:2402.15116. [Google Scholar] [CrossRef]
  32. Zhang, H.; Du, W.; Shan, J.; Zhou, Q.; Du, Y.; Tenenbaum, J.B.; Shu, T.; Gan, C. Building cooperative embodied agents modularly with large language models. arXiv 2023, arXiv:2307.02485. [Google Scholar]
  33. Li, Y.; Liu, Z.; Li, Z.; Zhang, X.; Xu, Z.; Chen, X.; Shi, H.; Jiang, S.; Wang, X.; Wang, J. Perception, reason, think, and plan: A survey on large multimodal reasoning models. arXiv 2025, arXiv:2505.04921. [Google Scholar] [CrossRef]
  34. Zheng, J.; Shi, C.; Cai, X.; Li, Q.; Zhang, D.; Li, C.; Yu, D.; Ma, Q. Lifelong learning of large language model based agents: A roadmap. arXiv 2025, arXiv:2501.07278. [Google Scholar] [CrossRef]
  35. Azam, M.; Rafiq, T.; Naz, F.G.; Ghafoor, M.; Nisa, M.U.; Malik, H. A novel model of narrative memory for conscious agents. Int. J. Inf. Syst. Comput. Technol. 2024, 3, 12–22. [Google Scholar] [CrossRef]
  36. Hu, Q.; Lu, Y.; Pan, Z.; Gong, Y.; Yang, Z. Can ai artifacts influence human cognition? The effects of artificial autonomy in intelligent personal assistants. Int. J. Inf. Manag. 2021, 56, 102250. [Google Scholar] [CrossRef]
  37. Hughes, L.; Dwivedi, Y.K.; Li, K.; Appanderanda, M.; Al-Bashrawi, M.A.; Chae, I. AI Agents and Agentic Systems: Redefining Global it Management. J. Glob. Inf. Technol. Manag. 2025, 3, 175–185. [Google Scholar] [CrossRef]
  38. Nass, C.; Steuer, J.; Tauber, E.R. Computers Are Social Actors. In Proceedings of the Conference Companion on Human Factors in Computing Systems—CHI ’94, Boston, MA, USA, 24–28 April 1994; ACM Press: New York, NY, USA, 1994; p. 204. [Google Scholar]
  39. Zhan, Y.; Xiong, Y.; Han, R.; Lam, H.K.S.; Blome, C. The Impact of Artificial Intelligence Adoption for Business-to-Business Marketing on Shareholder Reaction: A Social Actor Perspective. Int. J. Inf. Manag. 2024, 76, 102768. [Google Scholar] [CrossRef]
  40. Xu, K.; Chen, X.; Huang, L. Deep Mind in Social Responses to Technologies: A New Approach to Explaining the Computers Are Social Actors Phenomena. Comput. Hum. Behav. 2022, 134, 107321. [Google Scholar] [CrossRef]
  41. Ham, J.; Li, S.; Looi, J.; Eastin, M.S. Virtual Humans as Social Actors: Investigating User Perceptions of Virtual Humans’ Emotional Expression on Social Media. Comput. Hum. Behav. 2024, 155, 108161. [Google Scholar] [CrossRef]
  42. Wang, C.L. Editorial: Demonstrating contributions through storytelling. J. Res. Interact. Mark. 2025, 19, 1–4. [Google Scholar] [CrossRef]
  43. Mou, J.; Benyoucef, M. Consumer Behavior in Social Commerce: Results from a Meta-Analysis. Technol. Forecast. Soc. Chang. 2021, 167, 120734. [Google Scholar] [CrossRef]
  44. Afroogh, S.; Akbari, A.; Malone, E.; Kargar, M.; Alambeigi, H. Trust in AI: Progress, Challenges, and Future Directions. Humanit. Soc. Sci. Commun. 2024, 11, 1568. [Google Scholar] [CrossRef]
  45. Ho, S.S.; Cheung, J.C. Trust in Artificial Intelligence, Trust in Engineers, and News Media: Factors Shaping Public Perceptions of Autonomous Drones through UTAUT2. Technol. Soc. 2024, 77, 102533. [Google Scholar] [CrossRef]
  46. Liu, X.S.; Yi, X.S.; Wan, L.C. Friendly or Competent? The Effects of Perception of Robot Appearance and Service Context on Usage Intention. Ann. Tour. Res. 2022, 92, 103324. [Google Scholar] [CrossRef]
  47. Jiang, Y.; Yang, X.; Zheng, T. Make Chatbots More Adaptive: Dual Pathways Linking Human-like Cues and Tailored Response to Trust in Interactions with Chatbots. Comput. Hum. Behav. 2023, 138, 107485. [Google Scholar] [CrossRef]
  48. Chai, J.C.Y.; Malhotra, N.K.; Alpert, F. A Two-Dimensional Model of Trust–Value–Loyalty in Service Relationships. J. Retail. Consum. Serv. 2015, 26, 23–31. [Google Scholar] [CrossRef]
  49. Ihsan, F.; Nasrulloh, A.; Nugroho, S.; Yuniana, R. A Review of the Use of Technology in Sport Coaching: Current Trends and Future Directions. Health Sport Rehabil. 2025, 11, 85–101. [Google Scholar] [CrossRef]
  50. Hu, T.; Geng, J. Research on the Perception of the Terrain Image of the Tourism Destination Based on Multimodal User-Generated Content Data. PeerJ Comput. Sci. 2024, 10, e1801. [Google Scholar] [CrossRef]
  51. Kusuma, A.A.; Afiff, A.Z.; Gayatri, G.; Hati, S.R.H. Is Visual Content Modality a Limiting Factor for Social Capital? Examining User Engagement within Instagram-Based Brand Communities. Humanit. Soc. Sci. Commun. 2024, 11, 9. [Google Scholar] [CrossRef]
  52. Liu, K.; Tao, D. The Roles of Trust, Personalization, Loss of Privacy, and Anthropomorphism in Public Acceptance of Smart Healthcare Services. Comput. Hum. Behav. 2022, 127, 107026. [Google Scholar] [CrossRef]
  53. Yim, A.; Cui, A.P.; Walsh, M. The role of cuteness on consumer attachment to artificial intelligence agents. J. Res. Interact. Mark. 2024, 18, 127–141. [Google Scholar] [CrossRef]
  54. Qu, Y.; Baek, E. Let virtual creatures stay virtual: Tactics to increase trust in virtual influencers. J. Res. Interact. Mark. 2024, 18, 91–108. [Google Scholar] [CrossRef]
  55. Lu, L.; McDonald, C.; Kelleher, T.; Lee, S.; Chung, Y.J.; Mueller, S.; Vielledent, M.; Yue, C.A. Measuring Consumer-Perceived Humanness of Online Organizational Agents. Comput. Hum. Behav. 2022, 128, 107092. [Google Scholar] [CrossRef]
  56. Calahorra-Candao, G.; Martín-de Hoyos, M.J. The Effect of Anthropomorphism of Virtual Voice Assistants on Perceived Safety as an Antecedent to Voice Shopping. Comput. Hum. Behav. 2024, 153, 108124. [Google Scholar] [CrossRef]
  57. Wen, J.; Li, X. AI Digital Human Responsiveness and Consumer Purchase Intention: The Mediating Role of Trust. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 246. [Google Scholar] [CrossRef]
  58. Sun, Y.; Shao, X.; Li, X.; Guo, Y.; Nie, K. How Live Streaming Influences Purchase Intentions in Social Commerce: An IT Affordance Perspective. Electron. Commer. Res. Appl. 2019, 37, 100886. [Google Scholar] [CrossRef]
  59. Jin, H.; Chen, T.; Shi, H. Beyond the Fitness: A Big Data Analysis of Home Exercisers Demand. Sage Open 2024, 14, 21582440241274533. [Google Scholar] [CrossRef]
  60. Noh, E.; Won, J.; Jo, S.; Hahm, D.-H.; Lee, H. Conversational Agents for Body Weight Management: Systematic Review. J. Med. Internet Res. 2023, 25, e42238. [Google Scholar] [CrossRef]
  61. Deterding, S.; Dixon, D.; Khaled, R.; Nacke, L. From Game Design Elements to Gamefulness: Defining “Gamification”. In Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments, Tampere, Finland, 28–30 September 2011; ACM: New York, NY, USA, 2011; pp. 9–15. [Google Scholar]
  62. Li, N.; Aumeboonsuke, V. How Gamification Features Drive Brand Loyalty: The Mediating Roles of Consumer Experience and Brand Engagement. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 113. [Google Scholar] [CrossRef]
  63. Eppmann, R.; Bekk, M.; Klein, K. Gameful Experience in Gamification: Construction and Validation of a Gameful Experience Scale [GAMEX]. J. Interact. Mark. 2018, 43, 98–115. [Google Scholar] [CrossRef]
  64. Liao, C.H.; Hsieh, J.-K.; Kumar, S. Does the verified badge of social media matter? The perspective of trust transfer theory. J. Res. Interact. Mark. 2024, 18, 1017–1033. [Google Scholar] [CrossRef]
  65. Singh, Y.; Milan, R. Utilitarian and Hedonic Values of Gamification and Their Influence on Brand Engagement, Loyalty, Trust and WoM. Entertain. Comput. 2025, 52, 100868. [Google Scholar] [CrossRef]
  66. Elmashhara, M.G.; De Cicco, R.; Silva, S.C.; Hammerschmidt, M.; Silva, M.L. How Gamifying AI Shapes Customer Motivation, Engagement, and Purchase Behavior. Psychol. Mark. 2024, 41, 134–150. [Google Scholar] [CrossRef]
  67. Wirani, Y.; Nabarian, T.; Romadhon, M.S. Evaluation of Continued Use on Kahoot! As a Gamification-Based Learning Platform from the Perspective of Indonesia Students. Procedia Comput. Sci. 2022, 197, 545–556. [Google Scholar] [CrossRef]
  68. To, A.T.; Trinh, T.H.M. Understanding Behavioral Intention to Use Mobile Wallets in Vietnam: Extending the Tam Model with Trust and Enjoyment. Cogent Bus. Manag. 2021, 8, 1891661. [Google Scholar] [CrossRef]
  69. Ma, C.; Shao, J. Modeling Mobile Game Design Features Through Grounded Theory: Key Factors Influencing User Behavior. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 132. [Google Scholar] [CrossRef]
  70. Samarah, T.; Bayram, P.; Aljuhmani, H.Y.; Elrehail, H. The role of brand interactivity and involvement in driving social media consumer brand engagement and brand loyalty: The mediating effect of brand trust. J. Res. Interact. Mark. 2022, 16, 648–664. [Google Scholar] [CrossRef]
  71. Zhao, H.; Furuoka, F. Linking Social Capital to Green Purchasing Intentions in Livestreaming Sessions: The Moderating Role of Perceived Visible Heterogeneity. Humanit. Soc. Sci. Commun. 2025, 12, 492. [Google Scholar] [CrossRef]
  72. Shin, D. The Actualization of Meta Affordances: Conceptualizing Affordance Actualization in the Metaverse Games. Comput. Hum. Behav. 2022, 133, 107292. [Google Scholar] [CrossRef]
  73. Wang, J.; Shahzad, F.; Ashraf, S.F. Elements of Information Ecosystems Stimulating the Online Consumer Behavior: A Mediating Role of Cognitive and Affective Trust. Telemat. Inform. 2023, 80, 101970. [Google Scholar] [CrossRef]
  74. Fan, J.; Zhou, W.; Yang, X.; Li, B.; Xiang, Y. Impact of Social Support and Presence on Swift Guanxi and Trust in Social Commerce. IMDS 2019, 119, 2033–2054. [Google Scholar] [CrossRef]
  75. Leung, W.K.S.; Chang, M.K.; Cheung, M.L.; Shi, S. Swift Trust Development and Prosocial Behavior in Time Banking: A Trust Transfer and Social Support Theory Perspective. Comput. Hum. Behav. 2022, 129, 107137. [Google Scholar] [CrossRef]
  76. Monfared, A.R.K.; Ghaffari, M.; Barootkoob, M.; Malmiri, M.M. The Role of Social Commerce in Online Purchase Intention: Mediating Role of Social Interactions, Trust, and Electronic Word of Mouth. J. Int. Bus. Entrep. Dev. 2021, 13, 22. [Google Scholar] [CrossRef]
  77. Gurney, N.; Pynadath, D.V.; Wang, N. Comparing Psychometric and Behavioral Predictors of Compliance During Human-AI Interactions. In Persuasive Technology; Meschtscherjakov, A., Midden, C., Ham, J., Eds.; Lecture Notes in Computer Science; Springer Nature: Cham, Switzerland, 2023; Volume 13832, pp. 175–197. ISBN 978-3-031-30932-8. [Google Scholar]
  78. Jingchuan, J.; Wu, S. How Trust in Human-like AI-Based Service on Social Media Will Influence Customer Engagement: Exploratory Research to Develop the Scale of Trust in Human-like AI-Based Service. Asia Mark. J. 2024, 26, 129–144. [Google Scholar] [CrossRef]
  79. Kosiba, J.P.; Boateng, H.; Okoe, A.F.; Hinson, R. Trust and Customer Engagement in the Banking Sector in Ghana. Serv. Ind. J. 2020, 40, 960–973. [Google Scholar] [CrossRef]
  80. Fatima, S.; Augusto, J.C.; Moseley, R.; Urbonas, P.; Elliott, A.; Payne, N. Applying Motivational Techniques for User Adherence to Adopt a Healthy Lifestyle in a Gamified Application. Entertain. Comput. 2023, 46, 100571. [Google Scholar] [CrossRef]
  81. White, J.S.; Toussaert, S.; Raiff, B.R.; Salem, M.K.; Chiang, A.Y.; Crane, D.; Warrender, E.; Lyles, C.R.; Abroms, L.C.; Westmaas, J.L.; et al. Evaluating the Impact of a Game (Inner Dragon) on User Engagement Within a Leading Smartphone App for Smoking Cessation: Randomized Controlled Trial. J. Med. Internet Res. 2024, 26, e57839. [Google Scholar] [CrossRef]
  82. Adam, M.; Wessel, M.; Benlian, A. AI-Based Chatbots in Customer Service and Their Effects on User Compliance. Electron. Mark. 2021, 31, 427–445. [Google Scholar] [CrossRef]
  83. Li, X.; Sung, Y. Anthropomorphism Brings Us Closer: The Mediating Role of Psychological Distance in User–AI Assistant Interactions. Comput. Hum. Behav. 2021, 118, 106680. [Google Scholar] [CrossRef]
  84. Liu, R.; Benitez, J.; Zhang, L.; Shao, Z.; Mi, J. Exploring the Influence of Gamification-Enabled Customer Experience on Continuance Intention towards Digital Platforms for e-Government: An Empirical Investigation. Inf. Manag. 2024, 61, 103986. [Google Scholar] [CrossRef]
  85. Shi, X.; Evans, R.; Shan, W. Solver Engagement in Online Crowdsourcing Communities: The Roles of Perceived Interactivity, Relationship Quality and Psychological Ownership. Technol. Forecast. Soc. Chang. 2022, 175, 121389. [Google Scholar] [CrossRef]
  86. Rupp, M.A.; Michaelis, J.R.; McConnell, D.S.; Smither, J.A. The Role of Individual Differences on Perceptions of Wearable Fitness Device Trust, Usability, and Motivational Impact. Appl. Ergon. 2018, 70, 77–87. [Google Scholar] [CrossRef]
  87. Kim, J.; Im, I. Anthropomorphic Response: Understanding Interactions between Humans and Artificial Intelligence Agents. Comput. Hum. Behav. 2023, 139, 107512. [Google Scholar] [CrossRef]
  88. Xue, J.; Liang, X.; Xie, T.; Wang, H. See Now, Act Now: How to Interact with Customers to Enhance Social Commerce Engagement? Inf. Manag. 2020, 57, 103324. [Google Scholar] [CrossRef]
  89. Liang, H.; Saraf, N.; Hu, Q.; Xue, Y. Assimilation of enterprise systems: The effect of institutional pressures and the mediating role of top management. MIS Q. 2007, 31, 59–87. [Google Scholar] [CrossRef]
  90. Ringle, C.M.; Sarstedt, M. Gain more insight from your PLS-SEM results: The importance-performance map analysis. Ind. Manag. Data Syst. 2016, 116, 1865–1886. [Google Scholar] [CrossRef]
  91. Zhou, L.; Wang, V.L. “What’s in a Name?”: The Effect of AI Agent Naming on Psychological Ownership and Responsible Behaviors in the Shared Economy. J. Appl. Bus. Behav. Sci. 2025, 1, 144–162. [Google Scholar] [CrossRef]
  92. Yoo, J.W.; Park, J.; Park, H. How can I trust you if you’re fake? Understanding human-like virtual influencer credibility and the role of textual social cues. J. Res. Interact. Mark. 2025, 19, 730–748. [Google Scholar] [CrossRef]
  93. Westphal, M.; Vössing, M.; Satzger, G.; Yom-Tov, G.B.; Rafaeli, A. Decision Control and Explanations in Human-AI Collaboration: Improving User Perceptions and Compliance. Comput. Hum. Behav. 2023, 144, 107714. [Google Scholar] [CrossRef]
  94. Xie, G.; Wang, X. Exploring the Impact of AI Enhancement on the Sports App Community: Analyzing Human-Computer Interaction and Social Factors Using a Hybrid SEM-ANN Approach. Int. J. Hum.-Comput. Interact. 2024, 41, 8734–8755. [Google Scholar] [CrossRef]
  95. Gao, L.; Li, G.; Tsai, F.; Gao, C.; Zhu, M.; Qu, X. The impact of artificial intelligence stimuli on customer engagement and value co-creation: The moderating role of customer ability readiness. J. Res. Interact. Mark. 2023, 17, 317–333. [Google Scholar] [CrossRef]
  96. Richter, C.; O’Reilly, M.; Delahunt, E. Machine Learning in Sports Science: Challenges and Opportunities. Sports Biomech. 2024, 23, 961–967. [Google Scholar] [CrossRef]
  97. Su, Z.; Ge, S.; Li, L.; Su, Y. Review Study of Integrating AI Technology into Sports Training System. Kuey 2024, 30, 5. [Google Scholar] [CrossRef]
  98. Peltier, J.W.; Dahl, A.J.; Schibrowsky, J.A. Artificial intelligence in interactive marketing: A conceptual framework and research agenda. J. Res. Interact. Mark. 2024, 18, 54–90. [Google Scholar] [CrossRef]
  99. Riley, B.K.; Dixon, A. Emotional and Cognitive Trust in Artificial Intelligence: A Framework for Identifying Research Opportunities. Curr. Opin. Psychol. 2024, 58, 101833. [Google Scholar] [CrossRef]
  100. Gong, L. How social is social responses to computers? The function of the degree of anthropomorphism in computer representations. Comput. Hum. Behav. 2008, 24, 1494–1509. [Google Scholar] [CrossRef]
  101. Hassanein, K.; Head, M. The impact of infusing social presence in the web interface: An investigation on online shopping experience. Interact. Comput. 2005, 17, 384–396. [Google Scholar] [CrossRef]
  102. Nowak, K.L.; Biocca, F. The Effect of the Agency and Anthropomorphism on Users’ Sense of Telepresence, Copresence, and Social Presence in Virtual Environments. Presence Teleoperators Virtual Environ. 2003, 12, 481–494. [Google Scholar] [CrossRef]
Figure 1. The five technical and social features of AI agents in SHF e-commerce platforms.
Figure 2. Research model.
Figure 3. Theoretical model results. Notes: * = p < 0.05, ** = p < 0.01, *** = p < 0.001.
Table 1. The differences between LLMs and AI Agents.
| Items | LLMs | AI Agents |
|---|---|---|
| Concept | An LLM is a subclass of AI, falling under natural language processing (NLP) models. It is trained on large-scale data and uses deep learning (typically the Transformer architecture) to generate and understand natural language. | An AI agent is a system capable of perceiving its environment, setting goals, making decisions, and taking actions. It often uses LLMs or other AI models as its “thinking/reasoning engine.” |
| Type | A model (tool) | A system |
| Core modules | Natural language understanding and generation. | Perception, memory, planning, and action. |
| Initiative | No; requires external instructions. | Yes; can autonomously use tools and execute actions. |
| Memory | Usually lacks long-term memory. | Usually has long-term memory. |
| Role positioning in AI fitness | Serving as fitness advisors, LLMs leverage vast datasets to process and generate human-like text, enabling personalized fitness guidance and powering modern e-commerce recommendations. | AI personal fitness agents can act as personalized trainers and savvy shopping assistants, seamlessly integrating fitness planning with e-commerce recommendation. By understanding a user’s unique goals, progress, and habits, the AI coach can craft effective workout regimens and suggest relevant products to enhance the entire fitness journey. |
| Core strength in the smart home-based e-commerce context | An LLM’s core strength lies in understanding nuanced intent and generating contextually relevant, personalized text responses; LLMs can generate tailored fitness plans and drive fitness product recommendations. | An AI fitness agent can perceive, act, and create dynamic workout plans. It memorizes and assesses user data while leveraging real-time training performance to recommend relevant products, thereby achieving a seamless integration of personalized fitness and e-commerce. |
Table 2. The interconnected relationships between AI agent features and the agent’s core modules.
| Features | Perception | Memory | Planning | Action |
|---|---|---|---|---|
| Visibility (making the agent’s intelligence and value apparent) | Demonstrates awareness by recognizing user state (e.g., heart rate, form) and commenting on it (“I notice…”). | Proves it “knows” the user by recalling personal details and history, making the agent seem knowledgeable. | Makes strategic thinking transparent by explaining the “why” behind a new training plan. | Turns internal decisions into tangible outputs like real-time feedback and updated plans. |
| Gamification (using game-like elements for motivation) | Tracks and validates achievements (e.g., counting reps, judging perfect form) as the basis for rewards. | Stores progress and history to unlock badges and level up the user, building a persistent growth system. | Acts as the game designer, structuring challenges, quests, and milestones scaled to the user’s ability. | Delivers the game elements, such as instant achievement notifications and progress bars, providing immediate satisfaction. |
| Interactivity (enabling a dynamic, two-way dialogue) | Serves as the primary channel for interaction, using NLP to understand user commands and questions. | Ensures contextual continuity, remembering previous conversations to make exchanges fluid and natural. | Enables real-time adaptation and replanning based on user input (e.g., modifying a workout after a user complaint). | Serves as the agent’s side of the dialogue, providing responses that are direct consequences of the user’s input. |
| Humanness (being relatable, empathetic, and personable) | Detects the user’s emotional state (e.g., frustration from tone) to form the basis for an empathetic response. | Builds a “relationship” by remembering personal events (birthdays) and past struggles, showing “care.” | Demonstrates understanding and flexibility, like replanning a workout after a user has a stressful day. | Communicates with a “human touch”: conversational language, humor, and empathetic encouragement. |
| Sociability (extending interaction to a community or social dimension) | Scales to perceive group dynamics, tracking multiple users’ performance in a team challenge. | Remembers group interactions and relationships, storing team contributions for meaningful social engagement. | Creates socially driven plans, like generating friendly competitions or cooperative challenges for a group of friends. | Facilitates social connections by sharing achievements to a feed, sending team messages, or matching users for sessions. |
| Sources | Xie et al. [32]; Zhang et al. [33]; Li et al. [34]; Zheng et al. [35]. | Xie et al. [32]; Zhang et al. [33]; Zheng et al. [35]; Azam et al. [36]. | Xie et al. [32]; Li et al. [34]; Azam et al. [36]. | Xie et al. [32]; Zhang et al. [33]; Zheng et al. [35]; Azam et al. [36]. |
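The perception–memory–planning–action cycle that Tables 1 and 2 attribute to AI agents can be made concrete with a minimal, illustrative Python sketch. The class, method names, messages, and the heart-rate threshold below are hypothetical, not drawn from any real fitness platform.

```python
from dataclasses import dataclass, field

@dataclass
class FitnessAgent:
    """Toy AI fitness coach cycling through perceive -> remember -> plan -> act."""
    memory: list = field(default_factory=list)  # long-term store of observed states

    def perceive(self, sensor_reading: dict) -> dict:
        # Perception: recognize the user's current state (e.g., heart rate, reps).
        return {"heart_rate": sensor_reading.get("heart_rate", 0),
                "reps": sensor_reading.get("reps", 0)}

    def remember(self, state: dict) -> None:
        # Memory: keep the observation so later sessions can be personalized.
        self.memory.append(state)

    def plan(self, state: dict) -> str:
        # Planning: adapt the workout to the perceived state (threshold is illustrative).
        return "lower_intensity" if state["heart_rate"] > 160 else "continue_program"

    def act(self, decision: str) -> str:
        # Action: turn the internal decision into visible, conversational feedback.
        messages = {
            "lower_intensity": "Heart rate is high; let's slow down.",
            "continue_program": "Great pace, keep going!",
        }
        return messages[decision]

    def step(self, sensor_reading: dict) -> str:
        state = self.perceive(sensor_reading)
        self.remember(state)
        return self.act(self.plan(state))

agent = FitnessAgent()
print(agent.step({"heart_rate": 172, "reps": 12}))  # high heart rate triggers the slow-down message
```

Each feature in Table 2 would attach to one of these four hooks: visibility surfaces `plan`’s reasoning, gamification rewards what `perceive` counts, and sociability extends `act` beyond a single user.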
Table 3. The technical and social features of AI agents (acting as AI fitness coaches) and their definitions based on text mining.
| Construct | Definition | Text Mining Keywords and Frequency |
|---|---|---|
| Humanness | The tendency to imbue real or imagined behavior of AI agents with human-like characteristics, intentions, or emotions, focusing on static attributes that mimic human forms, such as a human-like name or voice tone. | Stick Figure (257); Sports Partner (986); Conversational AI (1513); AI-Enabled Companionship (182) |
| Interactivity | The dynamic, bidirectional exchange of information between humans and AI agents, orchestrated through sequential voice controls and instantaneous responsiveness to deliver personalized guidance. | Interaction (1321); Personalization (10,574); Voice Control (3403); Responsiveness (3697) |
| Visibility | The property that enables users to clearly see the system interface, operational processes, and displayed information. | Vision (687); HD Viewing (267); Data-Driven Insights (1189) |
| Sociability | The degree to which the AI agent facilitates a perceived connection to a social environment, often by enabling social comparisons or social interactions among users to satisfy the innate need for connection with others (e.g., friends and family members) and receive social support. | Fun for All Ages (10,945); Two-Player Mode (1788); Virtual Communities (521); Gather and Exercise (517); In-Person Gatherings (152) |
| Gamification | The process of applying game design elements of AI agents in non-game contexts to create an enjoyable experience for users. | Game (8363); Play-to-Train (5702); Badges & Rankings (437); PK (1763); Happy (3932) |
Table 4. Demographic information (N = 599).
| Variables | Category | Frequency (N) | Percentage (%) |
|---|---|---|---|
| Gender | Male | 253 | 42.2 |
| | Female | 346 | 57.8 |
| Age | 18–25 years | 92 | 15.4 |
| | 26–30 years | 128 | 21.4 |
| | 31–35 years | 249 | 41.6 |
| | 36–40 years | 88 | 14.7 |
| | 41–45 years | 21 | 3.5 |
| | 46–50 years | 8 | 1.3 |
| | ≥51 years | 13 | 2.2 |
| Education degree | High school or below | 8 | 1.3 |
| | College | 32 | 5.3 |
| | Bachelor | 368 | 61.4 |
| | Master | 191 | 31.9 |
| Monthly disposable income (RMB) | 1000 or below | 12 | 2.0 |
| | 1001–3000 | 150 | 25.0 |
| | 3001–5000 | 39 | 6.5 |
| | 5001–8000 | 144 | 24.0 |
| | 8001–10,000 | 44 | 7.3 |
| | 10,001–15,000 | 97 | 16.2 |
| | 15,001 and over | 113 | 18.9 |
| Marital status | Single | 144 | 24.1 |
| | Married without child | 35 | 5.8 |
| | Married with child | 420 | 70.1 |
| Exercise frequency | No exercise | 9 | 1.5 |
| | 1–2 times a week | 175 | 29.2 |
| | 3–4 times a week | 257 | 42.9 |
| | 5–6 times a week | 82 | 13.7 |
| | Exercise every day | 76 | 12.7 |
Table 5. Results of common method factor analysis.
| Construct | Indicator | R1² | R1 | R2² | R2 |
|---|---|---|---|---|---|
| HU | HU → HU1 | 0.630 | 0.794 *** | 0.003 | 0.055 |
| | HU → HU2 | 0.690 | 0.831 *** | 0.002 | -0.039 |
| | HU → HU3 | 0.682 | 0.826 *** | 0.000 | -0.020 |
| VI | VI → VI1 | 0.672 | 0.820 *** | 0.000 | 0.013 |
| | VI → VI2 | 0.619 | 0.787 *** | 0.000 | -0.002 |
| | VI → VI3 | 0.683 | 0.826 *** | 0.000 | -0.012 |
| GA | GA → GA1 | 0.765 | 0.875 *** | 0.003 | -0.054 |
| | GA → GA2 | 0.501 | 0.708 *** | 0.011 | 0.105 ** |
| | GA → GA3 | 0.765 | 0.875 *** | 0.002 | -0.047 |
| IN | IN → IN1 | 0.718 | 0.847 *** | 0.001 | -0.026 |
| | IN → IN2 | 0.468 | 0.684 *** | 0.009 | 0.092 |
| | IN → IN3 | 0.656 | 0.810 *** | 0.000 | 0.014 |
| | IN → IN4 | 0.483 | 0.695 *** | 0.011 | 0.105 * |
| | IN → IN5 | 0.894 | 0.945 *** | 0.034 | -0.184 *** |
| SO | SO → SO1 | 0.747 | 0.864 *** | 0.004 | -0.064 * |
| | SO → SO2 | 0.504 | 0.710 *** | 0.007 | 0.084 * |
| | SO → SO3 | 0.698 | 0.835 *** | 0.000 | -0.016 |
| CT | CT → CT1 | 0.663 | 0.814 *** | 0.000 | 0.007 |
| | CT → CT2 | 0.566 | 0.752 *** | 0.001 | 0.029 |
| | CT → CT3 | 0.695 | 0.834 *** | 0.001 | -0.035 |
| ET | ET → ET1 | 0.658 | 0.811 *** | 0.001 | 0.032 |
| | ET → ET2 | 0.637 | 0.798 *** | 0.000 | -0.013 |
| | ET → ET3 | 0.674 | 0.821 *** | 0.000 | -0.021 |
| UC | UC → UC1 | 0.521 | 0.722 *** | 0.010 | 0.102 ** |
| | UC → UC2 | 0.558 | 0.747 *** | 0.000 | 0.005 |
| | UC → UC3 | 0.864 | 0.930 *** | 0.011 | -0.105 ** |
| AE | AE → AE1 | 0.686 | 0.828 *** | 0.006 | -0.075 |
| | AE → AE2 | 0.520 | 0.721 *** | 0.003 | 0.056 |
| | AE → AE3 | 0.536 | 0.732 *** | 0.000 | 0.000 |
| | AE → AE4 | 0.593 | 0.770 *** | 0.000 | 0.017 |
| Average | | 0.645 | | 0.004 | |
Notes: HU: Humanness; VI: Visibility; GA: gamification; IN: interactivity; SO: sociability; CT: cognitive trust in AI agents; ET: emotional trust in AI agents; UC: user compliance; AE: active engagement; R1 = Substantive Factor Loading; R2 = Method Factor Loading; *: p < 0.05, **: p < 0.01, ***: p < 0.001.
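The “Average” row of Table 5 can be reproduced from the reported loadings: in the single-method-factor procedure of Liang et al. [89], each substantive (R1) and method (R2) loading is squared and the two columns are averaged. A short Python cross-check, with the thirty loading pairs transcribed from the table:

```python
# Substantive (R1) and method (R2) factor loadings, in Table 5 order
# (HU1-HU3, VI1-VI3, GA1-GA3, IN1-IN5, SO1-SO3, CT1-CT3, ET1-ET3, UC1-UC3, AE1-AE4).
r1 = [0.794, 0.831, 0.826, 0.820, 0.787, 0.826, 0.875, 0.708, 0.875,
      0.847, 0.684, 0.810, 0.695, 0.945, 0.864, 0.710, 0.835, 0.814,
      0.752, 0.834, 0.811, 0.798, 0.821, 0.722, 0.747, 0.930, 0.828,
      0.721, 0.732, 0.770]
r2 = [0.055, -0.039, -0.020, 0.013, -0.002, -0.012, -0.054, 0.105, -0.047,
      -0.026, 0.092, 0.014, 0.105, -0.184, -0.064, 0.084, -0.016, 0.007,
      0.029, -0.035, 0.032, -0.013, -0.021, 0.102, 0.005, -0.105, -0.075,
      0.056, 0.000, 0.017]

# Average explained variance of each factor = mean of squared loadings.
avg_substantive = sum(x**2 for x in r1) / len(r1)
avg_method = sum(x**2 for x in r2) / len(r2)

print(round(avg_substantive, 3))           # ≈ 0.645, the R1² average in Table 5
print(round(avg_method, 3))                # ≈ 0.004, the R2² average in Table 5
print(avg_substantive / avg_method)        # substantive-to-method variance ratio
```

The large ratio of substantive to method variance (roughly two orders of magnitude) is the evidence usually cited that common method bias is not a serious concern.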
Table 6. Results of reliability and convergent validity analysis.
| Constructs | Items | Loading | CA | CR | AVE |
|---|---|---|---|---|---|
| HU | HU1 | 0.847 | 0.749 | 0.856 | 0.665 |
| | HU2 | 0.796 | | | |
| | HU3 | 0.803 | | | |
| VI | VI1 | 0.833 | 0.740 | 0.852 | 0.658 |
| | VI2 | 0.786 | | | |
| | VI3 | 0.814 | | | |
| GA | GA1 | 0.829 | 0.757 | 0.860 | 0.673 |
| | GA2 | 0.800 | | | |
| | GA3 | 0.831 | | | |
| IN | IN1 | 0.822 | 0.856 | 0.897 | 0.634 |
| | IN2 | 0.766 | | | |
| | IN3 | 0.820 | | | |
| | IN4 | 0.790 | | | |
| | IN5 | 0.784 | | | |
| SO | SO1 | 0.817 | 0.727 | 0.846 | 0.647 |
| | SO2 | 0.780 | | | |
| | SO3 | 0.816 | | | |
| CT | CT1 | 0.819 | 0.719 | 0.842 | 0.640 |
| | CT2 | 0.779 | | | |
| | CT3 | 0.802 | | | |
| ET | ET1 | 0.843 | 0.737 | 0.851 | 0.655 |
| | ET2 | 0.787 | | | |
| | ET3 | 0.797 | | | |
| UC | UC1 | 0.815 | 0.720 | 0.843 | 0.641 |
| | UC2 | 0.746 | | | |
| | UC3 | 0.839 | | | |
| AE | AE1 | 0.766 | 0.761 | 0.848 | 0.582 |
| | AE2 | 0.776 | | | |
| | AE3 | 0.726 | | | |
| | AE4 | 0.782 | | | |
Notes: HU: Humanness; VI: Visibility; GA: gamification; IN: interactivity; SO: sociability; CT: cognitive trust in AI agents; ET: emotional trust in AI agents; UC: user compliance; AE: active engagement. CA = Cronbach’s α; CR = Composite reliability; AVE = Average variance extracted.
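The CR and AVE columns of Table 6 follow the standard formulas: AVE is the mean of the squared standardized loadings, and CR is (Σλ)² / ((Σλ)² + Σ(1 − λ²)). A quick Python check using the Humanness loadings reproduces the reported values:

```python
# Item loadings for the Humanness (HU) construct, from Table 6 (HU1-HU3).
loadings = [0.847, 0.796, 0.803]

# AVE: mean of squared standardized loadings.
ave = sum(l**2 for l in loadings) / len(loadings)

# CR: (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
# where each item's error variance is 1 - loading^2.
sum_l = sum(loadings)
cr = sum_l**2 / (sum_l**2 + sum(1 - l**2 for l in loadings))

print(round(ave, 3))  # 0.665, matching Table 6
print(round(cr, 3))   # 0.856, matching Table 6
```

Since AVE = 0.665 > 0.5 and CR = 0.856 > 0.7, the construct clears the conventional convergent-validity thresholds; the same computation applies to the other eight constructs.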
Table 7. Discriminant validity.
| | HU | VI | GA | IN | SO | CT | ET | UC | AE |
|---|---|---|---|---|---|---|---|---|---|
| HU | 0.553 | | | | | | | | |
| VI | 0.475 | 0.811 | | | | | | | |
| GA | 0.459 | 0.473 | 0.820 | | | | | | |
| IN | 0.536 | 0.433 | 0.478 | 0.797 | | | | | |
| SO | 0.509 | 0.338 | 0.390 | 0.520 | 0.804 | | | | |
| CT | 0.553 | 0.487 | 0.518 | 0.592 | 0.494 | 0.800 | | | |
| ET | 0.574 | 0.496 | 0.509 | 0.577 | 0.484 | 0.581 | 0.809 | | |
| UC | 0.526 | 0.437 | 0.492 | 0.561 | 0.515 | 0.602 | 0.559 | 0.801 | |
| AE | 0.525 | 0.406 | 0.448 | 0.506 | 0.518 | 0.592 | 0.622 | 0.546 | 0.763 |
Notes: HU: Humanness; VI: Visibility; GA: gamification; IN: interactivity; SO: sociability; CT: cognitive trust in AI agents; ET: emotional trust in AI agents; UC: user compliance; AE: active engagement.
Table 8. Comparison of the fit indices of the hypothesized model against competing models.
Table 8. Comparison of the fit indices of the hypothesized model against competing models.
Competing ModelsPathsSRMRd_ULSChi-Squared_GNFI
Nine-factor model
(the hypothesized model)
HU, VI, GA, IN, SO → (CT, ET) → UC → AE0.0631.8401815.3750.5290.763
Five-factor modelTSF (HU + VI + GA + IN + SO) → (CT, ET) → UC → AE0.0702.3102048.2890.6120.733
Four-factor modelTSF (HU + VI + GA + IN + SO) → CT + ET → UC → AE0.0712.3141947.4360.5860.746
Notes: TSF = Technical–social features of AI Agents; HU: Humanness; VI: Visibility; GA: gamification; IN: interactivity; SO: sociability; CT: cognitive trust in AI agents; ET: emotional trust in AI agents; UC: user compliance; AE: active engagement; SRMR: standardized root mean square residual; d_ULS: unweighted least squares discrepancy or distance; d_G: geodesic discrepancy or distance; NFI: normed fit index.
Table 9. The results of path analysis.
| Hypothesis | Paths | β | T-Value | Results |
|---|---|---|---|---|
| H1a | HU → CT | 0.183 *** | 4.196 | Accepted |
| H1b | VI → CT | 0.152 *** | 4.051 | Accepted |
| H1c | GA → CT | 0.179 *** | 4.639 | Accepted |
| H1d | IN → CT | 0.270 *** | 5.364 | Accepted |
| H1e | SO → CT | 0.139 *** | 3.705 | Accepted |
| H2a | HU → ET | 0.229 *** | 5.498 | Accepted |
| H2b | VI → ET | 0.165 *** | 4.474 | Accepted |
| H2c | GA → ET | 0.162 *** | 4.155 | Accepted |
| H2d | IN → ET | 0.242 *** | 4.856 | Accepted |
| H2e | SO → ET | 0.123 ** | 3.166 | Accepted |
| H3a | CT → UC | 0.417 *** | 10.999 | Accepted |
| H3b | ET → UC | 0.317 *** | 8.192 | Accepted |
| H4a | CT → AE | 0.274 *** | 5.604 | Accepted |
| H4b | ET → AE | 0.364 *** | 9.147 | Accepted |
| H5 | UC → AE | 0.177 *** | 3.852 | Accepted |
Notes: HU: Humanness; VI: Visibility; GA: gamification; IN: interactivity; SO: sociability; CT: cognitive trust in AI agents; ET: emotional trust in AI agents; UC: user compliance; AE: active engagement; ** = p < 0.01, *** = p < 0.001.
Table 10. The results of the mediation analysis.
| Paths | β | T-Value |
|---|---|---|
| HU → CT → UC | 0.076 *** | 3.935 |
| VI → CT → UC | 0.063 *** | 3.823 |
| GA → CT → UC | 0.075 *** | 4.130 |
| IN → CT → UC | 0.113 *** | 4.738 |
| SO → CT → UC | 0.058 ** | 3.390 |
| HU → ET → UC | 0.072 *** | 4.308 |
| VI → ET → UC | 0.052 *** | 3.693 |
| GA → ET → UC | 0.051 *** | 3.696 |
| IN → ET → UC | 0.077 *** | 4.390 |
| SO → ET → UC | 0.039 ** | 2.774 |
| HU → CT → AE | 0.050 ** | 3.293 |
| VI → CT → AE | 0.042 ** | 3.384 |
| GA → CT → AE | 0.049 ** | 3.948 |
| IN → CT → AE | 0.074 *** | 3.812 |
| SO → CT → AE | 0.038 ** | 2.975 |
| HU → ET → AE | 0.083 *** | 4.676 |
| VI → ET → AE | 0.060 *** | 3.969 |
| GA → ET → AE | 0.059 *** | 3.623 |
| IN → ET → AE | 0.088 *** | 4.384 |
| SO → ET → AE | 0.045 ** | 2.910 |
| CT → UC → AE | 0.074 *** | 3.527 |
| ET → UC → AE | 0.056 *** | 3.561 |
Notes: HU: Humanness; VI: Visibility; GA: gamification; IN: interactivity; SO: sociability; CT: cognitive trust in AI agents; ET: emotional trust in AI agents; UC: user compliance; AE: active engagement; ** = p < 0.01, *** = p < 0.001.
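Each specific indirect effect in Table 10 is the product of the two corresponding direct paths in Table 9, so the mediation estimates can be verified by multiplication (within rounding of the reported coefficients). A small Python sketch; the `indirect` helper is illustrative:

```python
# Selected direct-path coefficients (β) from Table 9.
direct = {
    ("HU", "CT"): 0.183,
    ("IN", "CT"): 0.270,
    ("VI", "ET"): 0.165,
    ("CT", "UC"): 0.417,
    ("ET", "UC"): 0.317,
    ("UC", "AE"): 0.177,
}

def indirect(a: str, m: str, b: str) -> float:
    """Specific indirect effect of a on b through mediator m: product of the two paths."""
    return round(direct[(a, m)] * direct[(m, b)], 3)

print(indirect("HU", "CT", "UC"))  # 0.076, as reported in Table 10
print(indirect("IN", "CT", "UC"))  # 0.113
print(indirect("VI", "ET", "UC"))  # 0.052
print(indirect("CT", "UC", "AE"))  # 0.074
```

A few entries (e.g., HU → ET → UC) differ from the product by 0.001, which is expected when multiplying coefficients already rounded to three decimals.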