Article

The Effect of Perceived Interactivity on Continuance Intention to Use AI Conversational Agents: A Two-Stage Hybrid PLS-ANN Approach

1 Faculty of Art and Communication, Kunming University of Science and Technology, Kunming 650500, China
2 Department of Smart Experience Design, Graduate School of Techno Design, Kookmin University, Seoul 02707, Republic of Korea
* Author to whom correspondence should be addressed.
J. Theor. Appl. Electron. Commer. Res. 2025, 20(4), 255; https://doi.org/10.3390/jtaer20040255
Submission received: 22 July 2025 / Revised: 10 September 2025 / Accepted: 11 September 2025 / Published: 24 September 2025

Abstract

As a pivotal carrier of emerging human–computer interaction technologies, artificial intelligence (AI) conversational agents (CAs) hold critical significance for research on the mechanisms of users’ continuance usage behaviour, which is essential for technological optimization and commercial transformation. However, the differential impact pathways of multidimensional perceived interactivity on continuance usage intention, particularly the synergistic mechanisms between technical and affective dual-path dimensions, remain unclear. This study investigates the personalized AI-based CA project “Dialogue with Great Souls,” launched on a Chinese social platform, using survey data from 305 users. A hybrid approach combining partial least squares structural equation modelling (PLS-SEM) and artificial neural networks (ANN) was employed for empirical analysis. The results indicate that technical dimensions, such as control and responsiveness, are key factors influencing trust, while affective interactive dimensions, including communication, personalization, and playfulness, significantly affect social presence, thereby shaping users’ continuance usage intention. ANN results corroborated most PLS-SEM findings but revealed inconsistencies in the predictive importance of personalization and communication for social presence, highlighting the complementary nature of linear and nonlinear interaction mechanisms. By expanding the interactivity model and adopting a hybrid methodology, this study constructs a novel framework for AI CAs. The empirical findings suggest that developers should strengthen socio-emotional bonds in anthropomorphic interactions while ensuring technical credibility to enhance users’ continuance usage intention. This research not only advances theoretical perspectives on the integration of technical and affective dimensions in agent systems but also provides practical recommendations for optimizing the design and development of AI CAs.

1. Introduction

With breakthroughs in generative AI, communication with intelligent agents through natural human language has become a common activity [1]. CAs have evolved from single-task-oriented services (e.g., customer service responses) to intelligent agents capable of complex interactions [2]. This breakthrough lies in their ability to generate contextually coherent and appropriate responses through algorithmic processes, enhancing the realism of AI-driven interactions [3]. According to industry experts, by 2025, generative AI will be embedded in 80% of conversational AI solutions, leading more companies to actively use AI CAs to enhance customer satisfaction and retention rates [4]. Recent advancements in multimodal AI, such as integration with visual and auditory inputs, further amplify their potential to deliver immersive experiences [5].
Globally, platforms such as ChatGPT and DeepSeek have emerged as widely adopted AI conversation systems. In China, social platforms like Soul have introduced projects such as “Dialogue with Great Souls,” integrating conversations with visual scenarios to enable immersive interactions with virtualized historical figures such as Einstein and Aristotle [6]. These multimodal interaction scenarios reflect a shift in the core competitiveness of AI systems from functional implementation to experience optimization, with AI CA systems increasingly meeting users’ personalized demands [7]. By leveraging personalized interactions and improved conversation management, these systems enhance overall user experiences across applications such as online education [8,9,10], social platforms [11,12], healthcare [8,13], and e-commerce [14,15,16,17].
In recent years, academic research on the usage of CAs has gained significant attention. Scholars have primarily employed technology adoption models and theoretical frameworks—such as the Technology Acceptance Model [18,19], trust theory [19], and perceived interactivity theory [18,20]—to investigate factors influencing user experiences. Research findings indicate that, among various design features, interactivity has been proven to significantly affect user experience and behavioural decisions [18,21]. It encompasses technical attributes such as conversational agency and response efficiency, as well as emotional dimensions such as social needs [19,22] and psychological ownership [18]. This underscores the importance of adopting a multifaceted perspective that integrates both technological and social factors in studying AI CAs.
However, the existing literature on the AI CA industry still exhibits three primary gaps. First, prior studies often treat perceived interactivity as a monolithic construct [16,21,23], lacking systematic integration of its multidimensional aspects—particularly as modern systems increasingly incorporate emerging features like entertainment [24] and personalized customization [25]. Second, while extensive research focuses on improving user behavioural intentions through technical enhancements, few studies analyse how users’ social attributes and psychological factors influence sustained usage [26]. Although system efficacy remains critical, users’ emotional resonance and trust in AI CAs during interactions are increasingly recognized as vital to behavioural outcomes [27,28,29]. Yet, research on the relationships between interactivity, social presence, and trust in the AI CA industry remains scarce. Third, despite the growing adoption of AI CAs, studies reveal that most users eventually reduce or discontinue usage, with initial enthusiasm for novel technologies diminishing over time [18]. While recent research has introduced social presence theory to explain AI-driven services [26,30], the psychological mechanisms through which perceived interactivity affects long-term usage behavior remain underexplored. Consequently, understanding the dynamic coupling between multidimensional interactivity variables and users’ continuance usage intentions through affective pathways constitutes the core focus of this study.
To address these gaps, this study aims to investigate how perceived interactivity influences users’ continuance intention to use AI conversational agents, with a focus on the mediating roles of social presence and trust. The primary objective is to develop a comprehensive model that integrates multidimensional interactivity variables and psychological factors to explain sustained usage behavior. By employing a two-stage hybrid PLS-SEM and ANN approach, this study seeks to uncover both linear and nonlinear relationships, providing a nuanced understanding of user behavior in AI CA interactions. This hybrid approach is particularly suited to the dynamic nature of AI CAs, where user perceptions and behaviors are influenced by both structured and emergent factors. Furthermore, the study extends the application of perceived interactivity theory by incorporating playfulness and personalization as critical dimensions, offering new insights into user engagement in emerging AI applications such as virtual assistants and social platforms.
To address these theoretical gaps and provide actionable insights for designing and developing AI CAs, this study aims to investigate three research questions:
(1) How does perceived interactivity influence users’ continuance intention to use artificial intelligence conversational agents?
(2) How do social presence and trust mediate the relationship between interactivity and continuance intention?
(3) How can insights into perceived interactivity inform targeted strategies for improving user experiences?
The remainder of this paper is structured as follows: Section 2 introduces the theoretical foundation and hypothesis development. Section 3 details the research method, followed by data analysis results in Section 4. Section 5 discusses key findings, while Section 6 elaborates on theoretical and practical implications. Finally, limitations and future research directions are presented.

2. Literature Review and Theoretical Foundation

2.1. AI Conversational Agent

Traditional CAs are rule-based systems that rely on predefined scripts or decision trees [31]. Although such systems can handle user interactions to a certain extent, their limitation lies in the inability to understand users’ contextual intentions and emotional states. Supported by advances in natural language processing and generative AI, a new domain has emerged—AI-based conversational agents. These systems are designed to comprehend user input, generate appropriate responses, and progressively enhance the quality of interaction through adaptive learning mechanisms [32,33].
Recent breakthroughs in deep learning-based natural language processing have enabled these systems to understand contextual information, produce human-like responses, and enhance empathetic capabilities through affective computing [34]. As a result, their applications have expanded across various domains, including virtual assistants [35,36], online customer service [16,37,38], and social media platforms [11,39].
A significant technological advancement underpinning these developments is the self-attention mechanism proposed by Vaswani et al. [40], which enables models to process long-range semantic dependencies in parallel. This innovation has not only greatly improved the coherence of generated conversations but also paved the way for multimodal learning, allowing interactions to move beyond text to include speech, vision, and even touch. In this context, the Computers as Social Actors (CASA) theory proposed by Nass and Moon [41] offers a critical framework for understanding the user experience with contemporary AI-based CA systems. Their experiments demonstrated that users, even when fully aware they are interacting with a machine, tend to unconsciously apply social norms—such as politeness and reciprocity—during interactions, particularly when the system employs multimodal feedback through synchronized voice, facial expressions, and gestures. This phenomenon transforms CAs from functional tools into quasi-social entities, thereby enhancing users’ sense of immersion through anthropomorphism and interactivity. An illustrative example is Duolingo’s AI-based language learning agent, Lily, which engages learners via video calls. Lily can accurately interpret and respond to spoken input in real time, with a voice that closely mimics human tone and rhythm. It dynamically adjusts responses based on user feedback to correct language errors, thereby improving both learning efficiency and emotional satisfaction simultaneously [42].

2.2. Perceived Interactivity Theory

The concept of perceived interactivity was initially proposed by Newhagen et al. [43], who defined it along two primary dimensions: “psychological sense of efficacy” and “perceived interactivity of the media system.” This early conceptualization emphasized the role of users’ subjective cognition in technological interactions as a driving force behind behavioural decision-making. Subsequent scholars have considered perceived interactivity to be a multidimensional construct [44,45], with its definition evolving alongside advancements in technology [46]. Liu and Shrum [47] further systematized the theoretical framework by introducing a three-dimensional model of perceived interactivity, comprising active control, two-way communication, and synchronicity. This model has since served as a foundational framework for empirical research across various fields.
Building upon this foundation, Yang et al. [48] proposed that perceived interactivity in mobile commerce contexts encompasses control, responsiveness, and personalization. In the domain of online advertising, McMillan et al. [49] identified communication, user control, and time as key dimensions. Zhao et al. [45] extended this framework to social media platforms, suggesting four dimensions: control, playfulness, connectedness, and responsiveness. While scholars have offered differing classifications of the dimensions that constitute perceived interactivity, most agree on the central roles of control, responsiveness, and communication. Notably, control and responsiveness are generally regarded as technical features, whereas communication pertains to the communicative capacity between interacting agents [50].
This study focuses on how perceived interactivity, as experienced by users during conversations with AI systems, influences their intention to continue using such systems. Accordingly, the interaction of interest is human–computer interaction. Synthesizing insights from prior research, this study adopts a five-dimensional model of perceived interactivity: control, responsiveness, communication, playfulness, and personalization.
Control refers to users’ perception that they can independently choose what to do and how to do it [49]. In the context of AI CAs, this is reflected in the user’s capacity to direct the conversation toward task completion and to extract specific, satisfactory information from system feedback [18]. This perception is closely linked to the user’s level of digital literacy and the reliability of the system.
Responsiveness is defined as the degree to which one party’s response is a function of the other party’s input [51]. Within AI CAs, it pertains to users’ perceptions of how promptly and coherently the system reacts to their commands, the relevance of the generated content, and the overall continuity during the interaction.
Communication denotes the depth and bidirectionality of information exchange [49]. AI CAs have transcended the functional limitations of traditional human–computer interaction by employing contextual reasoning capabilities, such as recognizing users’ implicit emotional shifts or simulating non-verbal cues and social support behaviours common in human conversation. For instance, the digital health tool Woebot uses CAs to detect users’ emotional and psychological states and provide appropriate support, which has been shown to alleviate anxiety and depression [52].
Playfulness refers to the user’s experience of pleasure and curiosity, stimulated by the system’s use of gamification, humour, or dynamic feedback. This dimension serves to reduce the instrumental nature of interaction and enhance intrinsic motivation [53]. In human–AI interactions, incorporating enjoyable design elements can significantly improve user experience and satisfaction [54].
Finally, personalization reflects the system’s adaptive capacity to meet the user’s unique needs. As noted by Blomsma et al. [55], AI CA systems can leverage behavioural data, user preferences, and contextual information to dynamically tailor responses, thereby enhancing user engagement.

2.3. Social Presence and Perceived Interactivity

The concept of social presence, introduced by Short, Williams, and Christie in 1976, refers to the degree to which individuals perceive the presence of others within a communication medium [56]. This theory posits that a medium’s capacity for interaction directly influences users’ experiences of social presence [57]. In AI-based conversational systems, multimodal interactions extend beyond traditional spatial immersion, enabling users not only to feel “situated” within a virtual environment but also to form cognitive, quasi-social relationships with AI agents.
Empirical studies have demonstrated a positive correlation between perceived interactivity and social presence [58,59]. Biocca and Harms suggest that users’ control over interactive scenarios activates “behavioral presence,” fostering a sense of others’ existence through active participation [60]. Xue et al. [20] indicate that multimodal control in smart home voice assistants positively influences users’ perceived capability. In the context of this paper, social presence can be regarded as a form of user perception of the voice assistant’s capability. Rettie [61] notes that immediate responses from a system can replicate the continuity of face-to-face communication, thereby reducing media-induced alienation. Experimental evidence indicates that response delays exceeding eight seconds can lead to user frustration, diminishing social presence [62]. Kang et al. [18] note that CAs’ timely and effective responses may foster a sense of care and friendliness toward the smart agent, while also building a sense of belonging and social connection as users’ needs are met. Consequently, high-responsiveness designs can enhance users’ perceptions of social entities by reinforcing “conversational presence.” Based on this, the following hypotheses are proposed:
H1a. 
Control positively influences social presence.
H1b. 
Responsiveness positively influences social presence.
Wang et al. [63] highlight that communication in social media positively affects users’ perceptions of social presence. Applying Media Richness Theory [64], this can be extended to AI systems, which convey nonverbal cues through vocal intonation or visual representations, compensating for the lack of social cues in text-based interactions. For instance, Amazon Alexa’s social robot employs emotive expressions to enhance users’ perceived quality of system communication and their sense of the system’s anthropomorphism [65]. Algharabat et al. [66] point out that business social communication in emerging markets positively impacts social presence. Boutet et al. [67] argue that in digital communication, using positive emojis can enhance the sender’s approachability, and this warm-hearted communication can strengthen social presence. Accordingly, we propose:
H1c. 
Communication positively influences social presence.
Marcel et al. [68] argue that robots with superior social skills have a greater impact on social presence, thereby increasing perceived enjoyment. Biocca et al. [60] emphasize in their theory of the “affective layer” of social presence that entertaining designs can strengthen users’ anthropomorphic empathy toward AI agents by eliciting positive emotions. Xue et al. [20] note that adult playfulness positively influences users’ perception of voice assistant interaction capabilities. Meanwhile, research by Mishra et al. [69] indicates that receiving humorous and entertaining responses during interactions with voice assistants positively influences users’ attitudes toward these assistants. Hsieh et al. [70] point out that in mobile instant messaging, playful and entertaining messages foster social connections and enhance social presence. In such cases, users exhibit greater willingness to accept and engage with robots. Thus, we propose:
H1d. 
Playfulness positively influences social presence.
Finally, Peter et al. [55] suggest that the personality of AI robots can enhance social presence and conversational quality. For example, Replika, an AI chatbot designed for emotional companionship, allows users to customize the AI’s “personality type” and adapt its conversational style, enabling tailored interactions to combat loneliness and improve mental health [71]. Xie et al. [72] further demonstrate that CAs exhibiting humor positively influence user satisfaction and social presence. Based on these findings, we propose:
H1e. 
Personalization positively influences social presence.

2.4. Trust and Perceived Interactivity

With the increasingly widespread application of AI in fields such as information acquisition and decision support, its technological autonomy is significantly higher than that of traditional technological tools [73]. While this autonomy contributes to improved efficiency, it also intensifies users’ concerns about the opacity of algorithmic decision-making—often described as the “black-box” effect [74]. Consequently, exploring how users develop trust in AI conversation systems has become a pressing issue.
A widely accepted definition of trust is offered by Mayer et al. [75], who describe it as a person’s willingness to be vulnerable to another party based on an assessment of the latter’s ability, benevolence, and integrity. In the context of AI CAs, this concept can be interpreted as users’ belief that the system is capable of successfully completing assigned tasks, and that its actions are driven not by self-interest but by honest and transparent intentions.
While little empirical research has examined how specific dimensions of interactivity affect trust in AI CAs, numerous studies in related fields suggest a close relationship between perceived interactivity and trust in human–computer interactions [76,77,78]. Batch et al. [76] identified social ethics, technological features, and user characteristics as key factors influencing trust in AI systems. Among these, controllability and responsiveness—as technical attributes—can enhance perceived system predictability and transparency, which in turn improves users’ attitudes, trust, and perceptions toward AI [79,80]. Based on the above research, we propose the following hypotheses:
H2a. 
Control positively influences trust.
H2b. 
Responsiveness positively influences trust.
Chakraborty [81] found that generative AI can increase trust by offering tailored responses based on user needs and facilitating human-like communication. In the context of e-commerce, Chen et al. [82] demonstrated that chatbots exhibiting empathy and friendliness positively affect users’ trust in the system. Yue et al. [83] point out that transparent communication is key to enhancing trust. Effective communication positively influences trust, which is crucial for maintaining close interpersonal relationships [84]. Similarly, Hamacher et al. [85] found that expressive communication helps build trust and can even compensate for operational errors by robots. Therefore, we propose the following hypothesis:
H2c. 
Communication positively influences trust.
The role of playfulness in fostering trust is debated among scholars. Kim et al. [86] indicate that in online marketing, playfulness positively influences trust in virtual influencers and brand trust. Holflod et al. [87] point out that playfulness in higher education can promote trust and cooperation and break down disciplinary and professional boundaries. Nikghalb et al. [88] argued that playful interaction contributes positively to trust in human–AI relationships. Xue et al. [20] point out that adult playfulness is a key factor enhancing users’ perception of smart device capability. However, Xie et al. [72] suggested that while humor and entertainment improve satisfaction in casual conversations, they may become distracting or even counterproductive in task-oriented dialogue scenarios. Accordingly, we propose:
H2d. 
Playfulness positively influences trust.
Lastly, personalization has generally been found to positively affect trust in CAs [89]. Personalized agents are seen as intelligent systems that combine both functional and relational advantages by learning from earlier interactions [90]. Sipos et al. [91] point out that AI-driven personalization is one of the most effective strategies for fostering consumer trust in e-commerce. In the self-driving field, Sun et al. [92] note that personalized interactions with in-vehicle systems can enhance users’ trust in autonomous driving. This capacity to adapt to user preferences and contexts has been shown to enhance trust in CAs [1]. Additionally, AI customer service agents that dynamically adjust responses based on user preferences enhance trust and engagement [93]. Based on the literature reviewed, the following hypothesis is proposed:
H2e. 
Personalization positively influences trust.

2.5. Social Presence and Trust

There is substantial evidence demonstrating a significant association between social presence and user trust in AI CAs [94,95,96,97]. According to social presence theory [56], when users perceive human-like agency in a CA—such as the exhibition of human emotions, intentions, or autonomy—they are more likely to assess its trustworthiness in the same way they would evaluate a human interlocutor [98]. Studies by Toader et al. [30] and Fan et al. [99] have shown that social presence plays a critical role in enhancing users’ trust in chatbots. Pavone et al. [97], in the context of AI in autonomous vehicles, found that the use of a female voice increased perceived social presence by conveying warmth and perceived competence, which in turn positively influenced trust. Similarly, Lee et al. [100] validated the positive effect of social presence on trust through a theoretical model of human–machine communication trust. Based on the above research, the following hypothesis is proposed.
H3. 
Social presence positively influences trust.

2.6. User’s Continuance Intention

Behavioral intention is defined as an individual’s conscious plan to perform a specific behavior [101]. This study focuses on AI CAs with primarily commercial applications, such as agents, social apps, and e-commerce customer service agents. These systems are designed not only for short-term interactions but also for medium- to long-term engagement [102], making continued usage intention a critical determinant of business sustainability. In this context, continued usage intention refers to a user’s willingness to keep using the current product or service and serves as a core indicator of system success [103].
This study posits that continuance intention of AI CAs is reinforced by enhanced social presence and trust. Previous research has confirmed the impact of both trust and social presence on continuance usage [104,105,106,107]. Hsieh et al. [105] and Lina [19], for example, have demonstrated that the social presence of AI CAs positively influences continuance intention through increased trust. Similarly, Ella et al. [108] emphasized that trust is fundamentally an emotional response; once users perceive that AI CAs can operate reliably and predictably, they are more likely to develop dependence on it, thereby strengthening continuance usage intentions. Based on this literature, the following hypotheses are proposed:
H4. 
Social presence positively influences continuance intention.
H5. 
Trust positively influences continuance intention.
Accordingly, this study proposes an integrated model of continuance intention based on perceived interactivity. The independent variables include control (CON), responsiveness (RES), communication (COM), playfulness (PLA), and personalization (PER), which influence the dependent variable continuance intention (CI) through two mediators: social presence (SP) and trust (TR). All hypothesized paths are illustrated in the theoretical model (see Figure 1).
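For readers who script their analyses, the hypothesized structure can be written down compactly. The mapping below is only an illustrative encoding of the thirteen hypothesized paths (H1a–H5) using the construct abbreviations defined above; the construct names come from the text, while the data structure itself is an assumption for illustration.

```python
# Illustrative encoding of the hypothesized structural model as an
# adjacency mapping: each key is a predictor construct, each value lists
# the constructs it is hypothesized to influence.
MODEL_PATHS = {
    "CON": ["SP", "TR"],   # H1a, H2a
    "RES": ["SP", "TR"],   # H1b, H2b
    "COM": ["SP", "TR"],   # H1c, H2c
    "PLA": ["SP", "TR"],   # H1d, H2d
    "PER": ["SP", "TR"],   # H1e, H2e
    "SP":  ["TR", "CI"],   # H3, H4
    "TR":  ["CI"],         # H5
}

n_hypotheses = sum(len(targets) for targets in MODEL_PATHS.values())
print(n_hypotheses)  # 13 hypothesized paths
```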

3. Methods

3.1. Research Method Design

To investigate the influence of perceived interactivity on users’ continuance intention to use AI CAs, this study adopts a two-stage hybrid methodology combining Partial Least Squares Structural Equation Modeling (PLS-SEM) and Artificial Neural Networks (ANN). In the first stage, PLS-SEM is employed to test the hypothesized relationships among perceived interactivity variables, social presence, trust, and continuance intention. PLS-SEM is suitable for this study due to its ability to handle complex models with latent variables and smaller sample sizes, as supported by Hair et al. [109]. In the second stage, ANN is used to capture nonlinear relationships and validate the predictive accuracy of the PLS-SEM results, leveraging its strength in modeling complex, non-compensatory interactions [110]. This hybrid approach ensures a robust analysis of both linear and nonlinear dynamics in user behavior, aligning with the dynamic nature of AI CA interactions.
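The second (ANN) stage of such a design can be sketched in a few lines. The snippet below is a minimal, illustrative NumPy implementation and not the authors’ actual analysis pipeline: it trains a single-hidden-layer feed-forward network on synthetic stand-in data, runs 10-fold cross-validation to report out-of-sample RMSE, and derives normalized predictor importance via a permutation-based sensitivity analysis. All data, layer sizes, and hyperparameters here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (assumption): two predictors retained from the
# PLS-SEM stage (e.g., social presence and trust scores) and a
# continuance-intention outcome, for a sample of 305 respondents.
n = 305
X = rng.normal(size=(n, 2))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.2, size=n)

def train_ann(X, y, hidden=4, lr=0.1, epochs=3000):
    """Single-hidden-layer feed-forward network (tanh hidden units),
    trained with full-batch gradient descent on mean squared error."""
    d = X.shape[1]
    W1 = rng.normal(scale=0.5, size=(d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=hidden);      b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                 # hidden activations
        err = (H @ W2 + b2) - y                  # prediction error
        gW2 = H.T @ err / len(y); gb2 = err.mean()
        gH = np.outer(err, W2) * (1 - H ** 2)    # backprop through tanh
        gW1 = X.T @ gH / len(y);  gb1 = gH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xn: np.tanh(Xn @ W1 + b1) @ W2 + b2

# Ten-fold cross-validation: out-of-sample RMSE plus sensitivity-based
# predictor importance (the RMSE increase when one predictor is shuffled).
folds = np.array_split(rng.permutation(n), 10)
rmses, importances = [], []
for k in range(10):
    test = folds[k]
    train = np.concatenate([folds[j] for j in range(10) if j != k])
    model = train_ann(X[train], y[train])
    rmse = np.sqrt(np.mean((model(X[test]) - y[test]) ** 2))
    rmses.append(rmse)
    imp = []
    for j in range(X.shape[1]):
        Xp = X[test].copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        imp.append(np.sqrt(np.mean((model(Xp) - y[test]) ** 2)) - rmse)
    importances.append(imp)

rel = np.mean(importances, axis=0)
rel = np.clip(rel, 0, None) / rel.max()  # strongest predictor scaled to 1.0
print("mean cross-validated RMSE:", round(float(np.mean(rmses)), 3))
print("normalized relative importance:", np.round(rel, 2))
```

Reporting each predictor’s importance relative to the strongest one mirrors the normalized-importance tables common in hybrid PLS-SEM/ANN studies.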
The experimental platform selected for this study is the Dialogue with Great Souls project hosted on Soul APP (https://www.soulapp.cn/en) (accessed on 11 January 2025), jointly developed by Soul APP, the EchoVerse APP (Version 1.42.0), and the China Academy of Art. The project includes both an offline digital art interactive exhibition and an online platform experience. It leverages AI technology to integrate Soul APP with its subsidiary EchoVerse APP, and features eight AI-based virtual “great souls.” These AI CAs are modeled after influential historical figures, including Charles Darwin, Zhuangzi, Albert Einstein, Aristotle, Georg Wilhelm Friedrich Hegel, Arthur Schopenhauer, Friedrich Nietzsche, and Mark Twain.
At the physical exhibition site, visitors can interact with these eight figures through various interactive installations, enabling diverse and engaging exchanges. Online users can participate in the experience via the EchoVerse APP, where they engage in “cross-temporal” conversations with the great souls (see Figure 2).
Selecting an appropriate platform is critical for exploring user experiences with AI CAs. The decision to use this platform was based on the following four considerations:
Advanced technology: The platform uses advanced AI language models to process user queries and provide timely responses, and its CAs draw on a rich knowledge graph to ensure the relevance of answer content.
Communicative richness: Each “great soul” is endowed with visual representations and matching voice profiles, enabling users to feel as though they are engaging in real-time conversation with a human interlocutor.
Playfulness: The offline exhibition enhances enjoyment through digital human displays and interactive installations, while the online experience offers diverse functions—such as the “Book of Answers,” which provides responses from the great souls through a randomized draw.
Personalization: The eight figures represent distinguished contributors across various domains of human knowledge. Users can inquire about emotional, academic, or professional topics, or engage in deep philosophical discussions within the respective expert domains of these historical figures.

3.2. Questionnaire Design

In the first part of the questionnaire, participants were provided with a brief introduction to Soul APP’s Dialogue with Great Souls project to ensure they had a clear understanding of the experimental context. A download link to the app was included, and participants were instructed to engage with the project for a minimum of 10 min, during which they were encouraged to interact with multiple AI representations of the great historical figures.
The second part of the questionnaire collected respondents’ demographic information, including gender, age, education level, familiarity with AI CAs, and frequency of use.
The third part consisted of 25 items measuring eight constructs: CON, RES, COM, PLA, PER, SP, TR, and CI. All items were measured using a 7-point Likert scale (1 = strongly disagree, 7 = strongly agree). To ensure content validity, the items for each variable were adapted from the established literature and contextualized for AI CAs. Specifically, items for control, responsiveness, and communication were based on the work of Park et al. [46] and Shi et al. [111]; items for playfulness and social presence were adapted from Anubhav et al. [69,112]; the personalization scale was drawn from Lee et al. [105,113]; trust items were adapted from Fernandes et al. [114] and Gao [115]; and continued usage intention was measured using the scale developed by Bhattacherjee et al. [103] and Ashfaq et al. [112]. A detailed itemization is provided in Table 1.
As most of the original measurement scales were developed in English and the study aimed to collect data from participants in China, two professional translators were employed to translate the questionnaire into Chinese to ensure accuracy. Prior to finalizing the survey, three researchers with over one year of experience using AI CAs and one university professor were invited to review the translated version for clarity and accuracy. From an ethical standpoint, informed consent was obtained from all participants prior to their participation. Respondents were assured that their privacy would be strictly protected and that all data collected would be used solely for academic research purposes, with no commercial intent.

3.3. Data Collection and Analysis

Data for this study were collected using Questionnaire Star (www.wjx.cn) (accessed on 19 January 2025), a professional online survey platform widely used in China. The questionnaire link and corresponding QR code generated on the platform were distributed via social media channels including WeChat (version 8.0.54), QQ (version 9.0.90.614), and others. To optimize sample diversity, the research team initially disseminated the survey through WeChat and QQ group chats and encouraged respondents to recommend potential participants from various backgrounds. This approach aimed to mitigate the clustering bias typically associated with snowball sampling methods [116]. To enhance participation, an incentive mechanism was implemented. Respondents who completed the survey were entered into a random draw to win small prizes, including WeChat cash red packets (valued at 1 or 5 RMB) and an electronic thank-you letter. Participants were instructed to complete the questionnaire immediately after their interaction with the Dialogue with Great Souls project to ensure the timeliness and accuracy of their feedback. All participation was voluntary, and no conflict of interest was present throughout the study.
The sample selection process was designed to ensure representativeness and relevance to the study’s objectives. Participants were selected based on two primary criteria: (1) they had actively engaged with the Dialogue with Great Souls platform (via the EchoVerse APP or offline exhibition) at least once within the past three months, ensuring recent and relevant interaction with AI CAs; and (2) they were active users of social media platforms (WeChat or QQ), facilitating survey distribution and response collection. To address the sample size requirements for the two-stage hybrid methodology (PLS-SEM and ANN), the minimum sample size was determined using a priori power analysis. For PLS-SEM, Hill et al. [117] suggest that the total sample size should be at least ten times the number of observed variables corresponding to each latent variable. For ANN, the rule of thumb is that the sample size should be at least ten times the number of weights in the network [118]. Following these guidelines, a target of 300 valid responses was set to ensure robust predictive accuracy. The resulting sample size therefore not only conforms to general norms in the social sciences but also provides good representativeness and adequate statistical power.
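The two sample-size rules of thumb cited above can be sketched in a few lines; the indicator counts and network dimensions below are illustrative assumptions, not the study’s exact model configuration.

```python
# Hedged sketch of the two a priori sample-size rules of thumb.
# All model dimensions below are illustrative assumptions.

def plssem_min_n(indicators_per_construct):
    """Hill et al.-style rule: at least 10x the largest number of
    observed variables (indicators) attached to any one latent variable."""
    return 10 * max(indicators_per_construct)

def ann_min_n(n_inputs, n_hidden, n_outputs=1):
    """Rule of thumb: at least 10x the number of trainable weights,
    counting bias weights for the hidden and output layers."""
    n_weights = (n_inputs + 1) * n_hidden + (n_hidden + 1) * n_outputs
    return 10 * n_weights

# Assumed example: 8 constructs with 2-4 indicators each, and an MLP
# with 5 input neurons and 2 hidden neurons.
print(plssem_min_n([3, 3, 3, 3, 3, 4, 3, 3]))  # -> 40
print(ann_min_n(n_inputs=5, n_hidden=2))        # (6*2 + 3*1) = 15 weights -> 150
```

Both rules are screening heuristics; the 305 valid responses collected comfortably clear thresholds of this magnitude.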
The survey was conducted over a one-month period, from 19 January to 19 February 2025. A total of 327 responses were collected. After thorough screening by three researchers, 22 invalid responses were excluded, resulting in 305 valid samples for data analysis. This final sample size exceeds the minimum requirements for both PLS-SEM and ANN, enhancing the reliability and generalizability of the findings. The screening process involved cross-checking responses for completion and consistency, with inter-rater agreement among researchers exceeding 90% to ensure data quality. Descriptive statistics for demographic characteristics are presented in Table 2.

4. Results

4.1. Model Fit

To enhance the reliability and validity of the research findings, this study conducted a model fit assessment of the measurement model. With the theoretical framework estimated using Partial Least Squares Structural Equation Modeling (PLS-SEM), model fit was evaluated through standard global fit indices.
Specifically, two key indices were employed to assess model fit: the Standardized Root Mean Square Residual (SRMR) and the Normed Fit Index (NFI). SRMR is defined as the difference between the observed correlation matrix and the model-implied correlation matrix, with a value of SRMR ≤ 0.08 indicating a good model fit [119]. NFI is an incremental fit index, where values closer to 1 suggest better model fit [120]. Empirical results showed that the SRMR for this study was 0.036, and the NFI reached 0.867, indicating a relatively good model fit. These results support the adequacy of the model, as illustrated in Table 3.
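The SRMR computation described above can be sketched directly: it is the root mean square of the residuals between the observed and model-implied correlation matrices. The two small matrices below are illustrative, not the study’s actual matrices.

```python
import numpy as np

# Hedged sketch: SRMR over the lower triangle (including the diagonal,
# which contributes zero for correlation matrices).

def srmr(r_obs, r_implied):
    r_obs, r_implied = np.asarray(r_obs), np.asarray(r_implied)
    idx = np.tril_indices_from(r_obs)        # lower triangle incl. diagonal
    resid = r_obs[idx] - r_implied[idx]      # correlation residuals
    return float(np.sqrt(np.mean(resid ** 2)))

# Illustrative 3-variable example with small residuals.
r_obs = [[1.00, 0.30, 0.20],
         [0.30, 1.00, 0.40],
         [0.20, 0.40, 1.00]]
r_imp = [[1.00, 0.28, 0.22],
         [0.28, 1.00, 0.38],
         [0.22, 0.38, 1.00]]
print(round(srmr(r_obs, r_imp), 4))  # small residuals -> well below 0.08
```

With residuals of ±0.02 the SRMR is about 0.014, comfortably under the 0.08 cut-off; the study’s reported value of 0.036 sits in the same “good fit” region.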

4.2. Common Method Bias

In survey-based research, common method bias (CMB) is a critical concern as a potential source of systematic error. The Variance Inflation Factor (VIF), widely used in formative measurement models, serves as a diagnostic tool for assessing multicollinearity among constructs. This metric quantifies the degree of variance inflation due to collinearity among variables, with higher values indicating greater risk of multicollinearity. In empirical research, a VIF threshold of 3.0 is commonly used as a conservative benchmark [121].
Data analysis in this study revealed that the VIF values for all measured items ranged from 1.152 to 1.362, which are well below the critical threshold of 3.0 (see Table 4). This provides statistical evidence that multicollinearity is not a concern and that the measurement items do not suffer from significant common method bias.
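The VIF diagnostic can be reproduced with a short sketch: each item is regressed on all other items and VIF = 1 / (1 − R²). The data matrix here is synthetic, standing in for the survey item scores.

```python
import numpy as np

# Hedged sketch: per-item VIF via ordinary least squares against all
# other items. X is a synthetic (assumed) item-score matrix.

def vif(X):
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])     # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))                  # VIF_j = 1 / (1 - R^2_j)
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))            # near-independent synthetic columns
print([round(v, 2) for v in vif(X)])     # values near 1 -> no collinearity
```

For near-independent columns the VIFs hover around 1; values approaching the 3.0 benchmark would flag collinearity concerns.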

4.3. Reliability and Validity Analysis

To evaluate the reliability and validity of the questionnaire, this study employed PLS-SEM for a systematic assessment of the measurement instruments. Specifically, Cronbach’s alpha (α), Composite Reliability (CR), and Average Variance Extracted (AVE) were calculated.
Cronbach’s alpha assesses the internal consistency and stability of scale items. When both α and CR values exceed 0.7, the measurement tool is considered acceptably reliable; values above 0.8 indicate high reliability [122]. Factor loadings reflect the correlation between observed variables and their corresponding latent constructs, indicating the extent to which the construct explains variance in the observed item. Loadings greater than 0.7 are generally considered strong, indicating a high degree of relevance between items and constructs [123].
In this study, all factor loadings and CR values exceeded the 0.7 threshold, and AVE values for all constructs were above 0.5, suggesting that the measurement model demonstrates adequate convergent validity [124]. Empirical results (see Table 5) show that the Cronbach’s alpha coefficients and composite reliability values (rho_A and rho_C) for all latent variables exceeded 0.8. Regarding convergent validity, the Average Variance Extracted (AVE) values for each construct ranged between 0.80 and 1.00. In addition, all standardized factor loadings were above the critical threshold of 0.5. These findings collectively indicate strong internal consistency, reliability, and convergent validity of the measurement model, confirming the robustness of the study’s results.
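The reliability statistics reported above can be computed from standardized loadings and raw item scores as sketched below; the loadings and item scores are illustrative values, not those in Table 5.

```python
import numpy as np

# Hedged sketch: CR and AVE from standardized loadings, and Cronbach's
# alpha from an item-score matrix. All inputs are illustrative.

def composite_reliability(loadings):
    lam = np.asarray(loadings)
    errs = 1 - lam ** 2                   # standardized error variances
    return float(lam.sum() ** 2 / (lam.sum() ** 2 + errs.sum()))

def ave(loadings):
    lam = np.asarray(loadings)
    return float(np.mean(lam ** 2))       # mean squared loading

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)        # rows = respondents
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return float(k / (k - 1) * (1 - item_vars / total_var))

lam = [0.82, 0.85, 0.88]                          # illustrative loadings
print(round(composite_reliability(lam), 3))       # > 0.7 -> acceptable
print(round(ave(lam), 3))                         # > 0.5 -> convergent validity
```

With loadings in the 0.8 range, CR is about 0.89 and AVE about 0.72, matching the pattern of thresholds the text describes.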
For validity testing, we further assessed discriminant validity using the Fornell–Larcker criterion, Heterotrait–Monotrait ratio (HTMT), and cross-loadings. As shown in Table 6, the square root of each construct’s AVE was greater than its correlations with other constructs, indicating adequate discriminant validity [122]. The HTMT values ranged from 0.206 to 0.397—well below the recommended threshold of 0.85 (see Table 7), further supporting the presence of strong discriminant validity [122]. Additionally, each observed variable demonstrated a higher factor loading on its corresponding latent construct than on any other construct (see Table 8), further confirming satisfactory construct differentiation [122].

4.4. Hypothesis Testing

Based on 305 valid responses, path analysis was conducted using the bootstrap resampling method with 5000 iterations to estimate the parameters of the theoretical model. The PLS-SEM results (see Figure 3 and Table 9) indicate that 12 out of 13 hypothesized paths were statistically significant, while one was not supported.
Specifically, control (β = 0.139, p = 0.010), responsiveness (β = 0.110, p < 0.05), communication (β = 0.182, p < 0.01), playfulness (β = 0.163, p < 0.05), and personalization (β = 0.195, p < 0.01) had significant positive effects on social presence. These results support hypotheses H1a–e, confirming that the perceived interactivity dimensions significantly influence users’ sense of social presence.
In terms of trust, significant positive effects were observed for control (β = 0.179, p < 0.01), responsiveness (β = 0.157, p < 0.01), communication (β = 0.133, p < 0.05), and playfulness (β = 0.141, p < 0.05), supporting hypotheses H2a–d. However, personalization (β = 0.087, p = 0.116) did not have a significant effect on trust; thus, hypothesis H2e was not supported.
Finally, the study found that social presence positively influenced trust (β = 0.133, p < 0.05), and both social presence (β = 0.272, p < 0.01) and trust (β = 0.209, p < 0.01) significantly predicted users’ continuance intention to use AI-based CAs. These findings provide empirical support for hypotheses H3 through H5.
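The bootstrap logic behind these estimates can be sketched as follows. Synthetic data and a simple OLS slope stand in for the survey responses and a PLS path coefficient; the true slope of 0.2 and sample size of 305 are illustrative.

```python
import numpy as np

# Hedged sketch of bootstrap resampling for a path estimate: resample
# respondents with replacement, re-estimate the coefficient, and read a
# 95% percentile confidence interval off the resampled distribution.

rng = np.random.default_rng(42)
n = 305
x = rng.normal(size=n)                          # e.g., a predictor composite
y = 0.2 * x + rng.normal(scale=1.0, size=n)     # outcome, assumed slope 0.2

def slope(x, y):
    return np.polyfit(x, y, 1)[0]               # OLS slope

boot = []
for _ in range(5000):                           # 5000 bootstrap iterations
    idx = rng.integers(0, n, size=n)            # resample with replacement
    boot.append(slope(x[idx], y[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(lo, 3), round(hi, 3))               # CI excluding zero -> significant
```

A path is judged significant when the 95% interval excludes zero, which is how the p-values above are operationalized in the bootstrapped PLS-SEM output.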

4.5. Mediation Analysis

Following the guidelines proposed by Preacher et al. [125], a mediation effect is considered statistically significant when the 95% confidence interval (CI) derived from the bootstrapping procedure does not include zero. In this study, the 95% CIs for the indirect effects of CON, RES, COM, and PLA all excluded zero, indicating significant mediation effects through SP and TR (see Table 10). However, the confidence interval for the indirect path from perceived personalization (PER) through trust included zero, suggesting a non-significant mediating effect.
To further assess the strength of these mediating effects, the Variance Accounted For (VAF) metric was used in the PLS-SEM analysis. VAF is calculated as the ratio of the indirect effect to the total effect (VAF = ab/c), and is used to evaluate the magnitude of mediation. A VAF value between 20% and 80% indicates a partial mediation effect, while a value greater than 80% suggests full mediation.
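The VAF computation and its cut-offs can be expressed directly; the path coefficients below are illustrative values, not the study’s estimates.

```python
# Hedged sketch: VAF = indirect effect / total effect, with the usual
# 20%/80% cut-offs for partial vs. full mediation. The coefficients
# below are illustrative.

def vaf(a, b, direct):
    indirect = a * b            # a: X -> mediator, b: mediator -> Y
    total = indirect + direct   # total effect = indirect + direct
    return indirect / total

def classify(v):
    if v > 0.80:
        return "full mediation"
    if v >= 0.20:
        return "partial mediation"
    return "no/weak mediation"

v = vaf(a=0.30, b=0.40, direct=0.14)   # indirect 0.12, total 0.26
print(round(v * 100, 2), classify(v))  # ~46.15% -> partial mediation
```

A VAF near 46% mirrors the magnitudes reported below for control, all of which fall in the partial-mediation band.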
According to the results, the mediating roles of social presence and trust accounted for 46.83% and 48.10% of the total effect of control, respectively, indicating partial mediation. Responsiveness exhibited a partial mediation effect through trust, accounting for 50% of the total effect. Communication demonstrated a partial mediation effect through social presence, accounting for 59.76% of the total effect. Playfulness also showed partial mediation, with 56.41% of its effect mediated through social presence and 36.70% through trust. Finally, social presence’s indirect effect through trust was relatively weak, accounting for only 10.23% of the total effect.

4.6. Artificial Neural Network (ANN) Analysis

This study adopted a two-stage modeling strategy by incorporating ANN analysis alongside PLS-SEM to better capture nonlinear relationships and enhance predictive accuracy. While SEM is well-suited for evaluating linear associations and compensatory effects among constructs, it has limitations in modeling complex nonlinear interactions [126]. In contrast, ANN is highly capable of identifying non-normal distributions and nonlinear dependencies between exogenous and endogenous variables [127].
ANN also demonstrates strong tolerance to data noise, outliers, and relatively small sample sizes. Moreover, it is applicable to non-compensatory models, where a deficiency in one factor does not necessarily need to be offset by an increase in another [128]. Therefore, ANN was integrated into this study as an extended analytical tool to more systematically identify and explain the key drivers influencing users’ continued adoption of AI-based CAs.

4.6.1. Structure of ANN Model

Based on the SPSS 26 software platform, neural network modeling was conducted using a Multilayer Perceptron (MLP) architecture, which includes an input layer, one or more hidden layers, and an output layer. During model training, the feedforward–backpropagation (FFBP) algorithm was employed: inputs are processed in the forward pass, while the estimation error is propagated backward through the network to update the weights and improve predictive performance [129].
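The FFBP training loop described above can be sketched in plain NumPy. The layer sizes, learning rate, epoch count, and synthetic data are all assumptions for illustration, not the SPSS configuration used in the study.

```python
import numpy as np

# Hedged sketch of feedforward-backpropagation: one hidden layer of
# sigmoid units trained by full-batch gradient descent on toy data.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.uniform(size=(305, 5))                     # 5 input neurons (assumed)
y = X.mean(axis=1, keepdims=True)                  # toy target in [0, 1]

W1 = rng.normal(scale=0.5, size=(5, 3)); b1 = np.zeros(3)   # hidden layer
W2 = rng.normal(scale=0.5, size=(3, 1)); b2 = np.zeros(1)   # output layer
lr = 0.5

out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
rmse0 = float(np.sqrt(np.mean((out - y) ** 2)))    # error before training

for _ in range(3000):
    h = sigmoid(X @ W1 + b1)                       # forward pass
    out = sigmoid(h @ W2 + b2)
    err = out - y                                  # backward pass
    d_out = err * out * (1 - out)                  # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)             # hidden-layer delta
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

rmse = float(np.sqrt(np.mean((out - y) ** 2)))
print(round(rmse0, 3), round(rmse, 3))             # error shrinks over training
```

The point of the sketch is the two-phase update: a forward pass computes predictions, and the error deltas are propagated backward to adjust each layer’s weights.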
Statistically significant variables identified through the SEM analysis were transformed into input neurons in the neural network. Social presence, trust, and continuance intention were used as output nodes in three separate ANN models, respectively. The corresponding neural network structures are illustrated in Figure 4, Figure 5 and Figure 6.

4.6.2. Result of ANN

According to the methodology proposed by Guo et al. [127], this study used 90% of the sample data for training and 10% for testing. The hidden and output layers adopted the sigmoid activation function to handle nonlinear relationships. All input and output variables were normalized to a scale of 0 to 1 to improve computational efficiency [130]. To prevent overfitting, a 10-fold cross-validation procedure was applied, with Root Mean Square Error (RMSE) as the evaluation metric. The iterative learning process was designed to minimize error and enhance predictive accuracy [131].
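The evaluation protocol (min–max normalization to [0, 1], 10-fold cross-validation, RMSE) can be sketched as follows; a simple linear model stands in for the trained ANN so that the protocol itself stays in focus, and all data are synthetic.

```python
import numpy as np

# Hedged sketch: normalize to [0, 1], then estimate out-of-sample RMSE
# with 10-fold cross-validation. The linear model is a stand-in for
# the MLP; the data and weights are illustrative.

def minmax(a):
    return (a - a.min(axis=0)) / (a.max(axis=0) - a.min(axis=0))

rng = np.random.default_rng(1)
X = minmax(rng.normal(size=(305, 5)))
y = minmax(X @ np.array([0.4, 0.3, 0.1, 0.1, 0.1])
           + rng.normal(scale=0.05, size=305))

folds = np.array_split(rng.permutation(305), 10)   # 10 disjoint folds
rmses = []
for k in range(10):
    test = folds[k]
    train = np.concatenate([folds[i] for i in range(10) if i != k])
    A = np.column_stack([np.ones(len(train)), X[train]])
    beta, *_ = np.linalg.lstsq(A, y[train], rcond=None)  # fit on 9 folds
    pred = np.column_stack([np.ones(len(test)), X[test]]) @ beta
    rmses.append(np.sqrt(np.mean((pred - y[test]) ** 2)))  # test on the 10th

print(round(float(np.mean(rmses)), 3))             # average test RMSE
```

Averaging the per-fold test RMSEs, as done here, is how the training and testing RMSE values reported in Table 11 are summarized.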
Experimental results (see Table 11) showed relatively low RMSE values for both training and testing phases, indicating strong predictive performance and good model fit [128]. The average training RMSE values for Models A, B, and C were 0.168, 0.173, and 0.178, respectively, while the average testing RMSE values were 0.142, 0.166, and 0.177, respectively.
In addition, a sensitivity analysis (see Table 10) was conducted to determine the normalized relative importance of the input neurons [128]. In ANN Model A, COM emerged as the most critical predictor of SP, with a normalized importance of 90.78%, followed by PER at 84.98%, PLA at 63.55%, CON at 57.41%, and RES at 45.83%. In ANN Model B, control was the most influential predictor of TR, with a normalized importance of 95.66%, followed by responsiveness at 76.45%, playfulness at 69.41%, and communication at 63.64%. In ANN Model C, social presence was the strongest predictor of CI, with a normalized importance of 93.9%, followed by TR at 84.66%.
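A sensitivity analysis of this kind can be approximated with permutation importance, normalized so the strongest input reads as 100%. The “model” below is a linear stand-in for the trained ANN, and the weights and data are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: permutation importance as a sensitivity analysis.
# Shuffling one input column breaks its signal; the resulting rise in
# prediction error is that input's importance.

rng = np.random.default_rng(7)
X = rng.uniform(size=(305, 3))
w = np.array([0.5, 0.3, 0.1])                      # assumed true influences
y = X @ w + rng.normal(scale=0.02, size=305)
predict = lambda X: X @ w                          # stand-in for the ANN

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

base = rmse(predict(X), y)
imp = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])           # break column j's signal
    imp.append(rmse(predict(Xp), y) - base)        # error increase = importance

norm = 100 * np.array(imp) / max(imp)              # scale top predictor to 100%
print([round(float(v), 1) for v in norm])
```

Normalizing against the largest importance value yields the percentage-of-top-predictor figures reported for Models A, B, and C.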
Finally, a comparison between the PLS-SEM and ANN results was conducted to examine differences in predictor rankings (see Table 11). In ANN Model A, the ranking of predictors differed slightly from that in the SEM model. While the SEM model identified personalization (β = 0.195, p = 0.001) as the strongest predictor of social presence, followed by communication (β = 0.182, p < 0.001), the ANN model indicated that communication (90.78%) was the most important predictor, followed by personalization (84.98%). However, in Models B and C, the ranking of predictors was consistent across both PLS-SEM and ANN methods.

5. Discussion

This study, grounded in perceived interactivity theory, investigates how the interactive features of AI CAs influence users’ experience and their continuance intention. By employing a two-stage modeling approach (PLS-SEM and ANN), the study confirms several hypothesized relationships while also revealing notable and unexpected insights.
First, the structural equation model supported hypotheses H1a to H1e: CON, RES, COM, PLA, and PER all exerted significant positive effects on users’ perceived SP within AI CAs. Among these, personalization emerged as the strongest predictor of social presence, followed closely by communication. This is consistent with findings from Hsieh et al. [105] and Wang et al. [63]. A possible explanation is that social presence may be more strongly driven by dimensions related to contextual and personality construction—such as communication and personalization. When CAs adapt to user characteristics and demonstrate fluent, natural interaction abilities, users may feel a heightened sense of “real presence,” as if they are conversing with a human.
Interestingly, the ANN analysis yielded a different ranking of predictor importance: communication was found to be more critical than personalization. This discrepancy can be attributed to methodological differences: PLS-SEM emphasizes linear, significance-based paths, whereas ANN captures more complex nonlinear interactions and synergistic effects [132]. It can be inferred that personalization may exert a more direct and perceptible influence, while communication potentially forms stronger interactions with other predictors and thus demonstrates higher predictive power overall. Moreover, playfulness was confirmed as a significant factor: CAs incorporating humor and enjoyable interactions enhance user perceptions of realism and satisfaction, in line with findings by Xie et al. [133].
Conversely, the effects of control and responsiveness on social presence were relatively weaker, which contrasts with Zhao et al.’s findings [134]. One possible reason is that control mainly concerns the user’s ability to guide dialogue direction or interface operations, while responsiveness reflects system speed and accuracy—basic functions often taken for granted by users. These technical features may exhibit diminishing marginal utility: once fundamental interaction needs are met, their incremental contribution to social presence becomes limited. This speculation was supported by ANN results, where control and responsiveness were ranked lowest in importance.
Second, the study found that control, responsiveness, communication, and playfulness all positively influenced trust (supporting H2a–d). Among them, control was the most influential, consistent with Li et al.’s research [135]. In AI interaction contexts, perceived control fosters trust by reassuring users that their intentions are correctly understood and promptly addressed. Responsiveness ranked second, reinforcing Wang et al.’s [136] finding that timely feedback not only ensures interactional coherence but also reduces uncertainty, thereby enhancing perceived system reliability [45]. Playfulness, through lighthearted and engaging interactions, helps ease user apprehension and increase affinity, findings echoed by Kim et al. [86]. Similarly, communication had a positive impact on trust, aligning with Hamacher et al. [85], owing to its natural and human-like qualities.
However, personalization did not significantly influence trust (H2e not supported), diverging from findings by Foroughi et al. [132]. This could be due to users’ ambivalent perceptions of personalized interaction [85]. While customization may improve satisfaction, it may also raise privacy or algorithmic manipulation concerns—as Knote et al. have suggested [137].
Third, the study confirmed that both social presence and trust significantly predict users’ continuance intention (supporting H4 and H5). Social presence transforms technology from a functional medium into a social entity during interaction, evoking emotional projections that increase engagement and stickiness. Trust, in turn, stabilizes user behavior by reducing perceived uncertainty and technological risk—findings consistent with Attar et al. [104]. The ANN results further corroborated this, with social presence ranking as the most important predictor, followed by trust. This may indicate that in highly anthropomorphic environments, users prioritize “social functionality” over basic technical reliability.
Last but not least, the study revealed a positive mediating role of social presence in fostering trust, which subsequently promotes continuance intention, consistent with findings by Lina et al. [19]. When AI CAs exhibit high social presence (e.g., human-like communication, continuity of memory, emotional bonding), users are more likely to perceive them as socially capable and trustworthy entities. This forms a dual-path mechanism driving continuance intention, through both emotional connection and trust formation.

6. Implications

6.1. Theoretical and Methodological Implications

This study advances theoretical development in the field of AI CAs by applying a multidimensional interactivity framework and an innovative hybrid methodology. Unlike the traditional three-dimensional model proposed by Liu and Shrum [47], this research expands the construct of perceived interactivity to include five distinct dimensions: control, responsiveness, communication, playfulness, and personalization. This extension reflects users’ evolving expectations for emotional resonance and personalized services as CA technology matures, providing a comprehensive understanding of how interactivity affects both user experience and continuance intention.
Moreover, at the mechanism level, while recent studies have explored the role of perceived competence and warmth [20], psychological ownership and subjective well-being [18], few have systematically examined how perceived interactivity drives behavioral intention through the dual mediators of social presence and trust. This study empirically reveals differentiated pathways: social-cue-related dimensions such as communication and personalization significantly affect behavioral intention via enhanced social presence, whereas technically oriented dimensions such as control and responsiveness primarily influence trust. Playfulness, notably, has significant impacts on both mediators.
These findings extend social presence theory beyond traditional computer-mediated communication (CMC) into embodied intelligent interaction contexts [56], highlighting the dual nature of user decision-making in AI: the need for both social attributes and functional reliability.
From a methodological perspective, this research is among the first to integrate both PLS-SEM and ANN analysis in the context of CA. It expands the prevailing reliance on single-model approaches by showing that different methods yield nuanced insights [20,138]. The SEM results identified personalization as the most critical driver of social presence, while ANN analysis emphasized communication as most predictive. This difference underscores the value of methodological triangulation in capturing both linear and nonlinear relationships, offering a more holistic understanding of how AI CAs shape users’ perceptions. This methodological innovation provides a framework that balances explanatory power and predictive accuracy—particularly valuable for future studies on the dynamic interplay between technology and emotion in anthropomorphized AI CA systems.

6.2. Practical Implications

This research provides valuable practical insights, enabling developers of AI CAs and service providers to understand the factors that affect users’ continuous use of CAs. The findings highlight the importance of perceived interactivity, social presence, and trust in fostering long-term engagement. These implications are categorized into recommendations for employees, managers, and organizations, with each including explanations, good practice examples, and solutions.

6.2.1. Practical Implications for Employees

Firstly, in human–AI CA interaction, control and responsiveness, as core basic technical capabilities, are fundamental elements that shape users’ willingness to use AI CAs by building trust [20]. For control in particular, both the SEM results and the ANN sensitivity analysis in this study identify it as the most important factor influencing trust. Therefore, employees should ensure reliability and stability, for instance by reducing input recognition errors or improving conversational coherence [139]. Technical employees in the industry, such as AI developers, UX designers, and customer support staff, play a direct role in implementing the technical and interactive features of AI CAs. A good practice example is integrating real-time feedback mechanisms, like adaptive error correction in platforms such as ChatGPT [140], where the system acknowledges misunderstandings and refines responses iteratively. This has been shown to enhance users’ sense of control and reduce risk perception [1].

6.2.2. Practical Implications for Managers

Further analysis indicates that the construction of social presence is the core mechanism driving users’ long-term engagement, with communication, personalization, and playfulness all serving as important influencing factors. Therefore, at the management level, while ensuring the basic technical reliability of AI CAs, it has proven effective to mitigate the negative experience of technical delays through active communication and sincere expression [85]. Through this process, functional reliability can be transformed into an opportunity for emotional connection. This requires that, beyond technological research and development, managers also attend to user experience design, treating emotional connection as part of the product’s functional dimension. For instance, managers may consider embedding multimodal technologies, such as virtual visual avatars and anthropomorphic tones. In addition, by simulating social cues from interpersonal interaction, such as personalized forms of address, or by differentiating users’ short-term and long-term dialogues, instrumental interactions can be transformed into more emotionally sticky “human-like conversations”, thereby enhancing users’ emotional identification with CAs and promoting continued use.

6.2.3. Practical Implications for Organizations

Lastly, at the organizational level, organizations should view perceived interactivity as a synergy between utility and user experience that influences overall sustainability. For example, in healthcare organizations using AI agents such as IBM Watson Health [141], pairing communicative skill with credible expertise can build trust and social presence, while educational AI tools could emphasize playfulness through gamified narratives to boost learners’ social presence. Health-focused agents, in particular, need to balance personalized care with credible expertise. Developers should build dynamic, context-specific interaction models to meet varied user needs. These insights not only guide the design of more effective AI CA systems but also offer a theoretical foundation for understanding the synergy between technical utility and user experience. Ultimately, these recommendations can enhance product sustainability and long-term user engagement in real-world applications.

7. Limitations and Future Work

Despite its contributions, this study has several limitations that should be acknowledged. First, data collection was limited to users within a single country, which may constrain the generalizability of the findings. Cultural differences can significantly influence users’ familiarity with and acceptance of AI technologies. Future research should include more diverse cultural and regional populations to test the robustness of the proposed interactivity mechanism and develop culturally adaptive design frameworks.
Second, due to research constraints, this study mainly focuses on general AI CA. However, there may be significant differences in the interactivity requirements of agents in vertical fields such as medical consultation and educational companionship. For example, educational agents may rely more heavily on playfulness, while medical agents must enhance the credibility of communication. Additionally, the unique interaction needs of special user groups such as children with autism or older adults with cognitive impairments were not considered in this study. Future research should explore customized CA for these populations and develop tailored evaluation tools to investigate the boundaries and optimization paths of interactivity dimensions across use cases.
Third, this study did not incorporate Natural Language Processing (NLP) techniques, which are pivotal in conversational agents for analyzing and responding to user sentiment [142]. Understanding the emotional state of users during interactions can greatly enhance an agent’s effectiveness and user engagement [143]. NLP enables AI conversational agents (CAs) to process and understand human language in a more natural way, facilitating tasks such as intent recognition, entity extraction, and dialogue management, which are crucial for creating personalized and context-aware responses. By incorporating NLP, CAs can improve user satisfaction through empathetic and adaptive interactions, ultimately transforming human–computer communication [142]. Therefore, future research should emphasize incorporating NLP to detect and adapt to user emotions, thereby addressing this gap and improving the interactivity framework.

Author Contributions

Conceptualization, K.Z. (Kewei Zhang); methodology, K.Z. (Kewei Zhang); software, K.Z. (Kewei Zhang), J.L., Q.H. and K.Z. (Kuan Zhang); validation, J.L., Q.H. and K.Z. (Kuan Zhang); formal analysis, K.Z. (Kewei Zhang); investigation, K.Z. (Kewei Zhang) and J.L.; data curation, K.Z. (Kewei Zhang); writing—original draft preparation, K.Z. (Kewei Zhang); writing—review and editing, J.D.; supervision, J.D.; All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study due to local regulations of the institution’s location (https://www.law.go.kr/LSW//lsLinkCommon-Info.do?lspttninfSeq=75929&chrClsCd=010202) (accessed on 11 January 2025). All participants were informed and consented to participate in the study before it commenced. Additionally, the study adhered to the local government requirements of the data collection site. According to Chapter III Ethical Review, Article 32 of the “Implementation of Ethical Review Measures for Human-Related Life Science and Medical Research” issued by the Chinese government, this study used anonymized information for research purposes, did not pose any harm to the subjects, and did not involve sensitive personal information or commercial interests; therefore, it was exempt from ethical review and approval (https://www.gov.cn/zhengce/zhengceku/2023-02/28/content_5743658.htm) (accessed on 11 January 2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All data generated or analyzed during this study are included in this article. The raw data are available from the corresponding author upon reasonable request.

Acknowledgments

The authors thank all the participants in this study for their time and willingness to share their experiences and feelings.

Conflicts of Interest

The authors declare no conflicts of interest concerning the research, authorship, and publication of this article.

References

  1. Rheu, M.; Shin, J.Y.; Peng, W.; Huh-Yoo, J. Systematic Review: Trust-Building Factors and Implications for Conversational Agent Design. Int. J. Hum. Comput. Interact. 2021, 37, 81–96. [Google Scholar] [CrossRef]
  2. Mannekote, A. Towards a Neural Era in Dialogue Management for Collaboration: A Literature Survey. arXiv 2023, arXiv:2307.09021. [Google Scholar] [CrossRef]
  3. Alexander, K. The Analysis of The Efficiency of Generative Ai Algorithms for Creating a Natural Dialogue. Am. J. Interdiscip. Innov. Res. 2024, 6, 26–34. [Google Scholar] [CrossRef]
  4. Lau, T. The Rise of AI and Generative AI. In Banking on (Artificial) Intelligence: Navigating the Realities of AI in Financial Services; Lau, T., Ed.; Springer: Cham, Switzerland, 2025; pp. 21–42. ISBN 978-3-031-81647-5. [Google Scholar]
  5. Dobbala, M.K.; Lingolu, M.S.S. Conversational AI and Chatbots: Enhancing User Experience on Websites. Am. J. Comput. Sci. Technol. 2024, 7, 62–70. [Google Scholar] [CrossRef]
  6. Forbes KIC and Soul APP Launch “Dialogue with Great Souls”, a Digital Art Journey Across Time and Space. Available online: https://forbeschina.com/innovation/67968 (accessed on 17 April 2025).
  7. Teepapal, T. AI-Driven Personalization: Unraveling Consumer Perceptions in Social Media Engagement. Comput. Hum. Behav. 2025, 165, 108549. [Google Scholar] [CrossRef]
  8. Schachner, T.; Keller, R.; von Wangenheim, F. Artificial Intelligence-Based Conversational Agents for Chronic Conditions: Systematic Literature Review. J. Med. Internet Res. 2020, 22, e20701. [Google Scholar] [CrossRef]
  9. Song, D.; Oh, E.Y.; Rice, M. Interacting with a Conversational Agent System for Educational Purposes in Online Courses. In Proceedings of the 2017 10th International Conference on Human System Interactions (HSI), Ulsan, Republic of Korea, 17–19 July 2017; pp. 78–82. [Google Scholar]
  10. Yusuf, H.; Money, A.; Daylamani-Zad, D. Pedagogical AI Conversational Agents in Higher Education: A Conceptual Framework and Survey of the State of the Art. Educ. Tech. Res. Dev. 2025, 73, 815–874. [Google Scholar] [CrossRef]
  11. Chaudhari, V.; Bhangale, S.K. Enhancing Engagement and Communication Using Artificial Intelligence on Social Media. Int. J. Educ. Mod. Manag. Appl. Sci. Amp Soc. Sci. 2024, 6, 35–39. [Google Scholar] [CrossRef]
  12. Symeonaki, E.; Arvanitis, K.; Papageorgas, P.; Piromalis, D. AI-Based Chatbot System Integration to a Social Media Platform for Controlling IoT Devices in Smart Agriculture Facilities. In Information and Communication Technologies for Agriculture—Theme IV: Actions; Bochtis, D.D., Pearson, S., Lampridi, M., Marinoudi, V., Pardalos, P.M., Eds.; Springer: Cham, Switzerland, 2021; pp. 193–209. ISBN 978-3-030-84156-0. [Google Scholar]
  13. Ly, K.H.; Ly, A.-M.; Andersson, G. A Fully Automated Conversational Agent for Promoting Mental Well-Being: A Pilot RCT Using Mixed Methods. Internet Interv. 2017, 10, 39–46. [Google Scholar] [CrossRef]
  14. Anastasia, N.; Harlili; Yulianti, L.P. Designing Embodied Virtual Agent in E-Commerce System Recommendations Using Conversational Design Interaction. In Proceedings of the 2021 8th International Conference on Advanced Informatics: Concepts, Theory and Applications (ICAICTA), Online, 29–30 September 2021; pp. 1–6. [Google Scholar]
  15. Chattaraman, V.; Kwon, W.; Gilbert, J.E.; In Shim, S. Virtual Agents in E–commerce: Representational Characteristics for Seniors. J. Res. Interact. Mark. 2011, 5, 276–297. [Google Scholar] [CrossRef]
  16. Chen, J.-S.; Le, T.-T.-Y.; Florence, D. Usability and Responsiveness of Artificial Intelligence Chatbot on Online Customer Experience in E-Retailing. Int. J. Retail Distrib. Manag. 2021, 49, 1512–1531. [Google Scholar] [CrossRef]
  17. García-Serrano, A.M.; Martínez, P.; Hernández, J.Z. Using AI Techniques to Support Advanced Interaction Capabilities in a Virtual Assistant for E-Commerce. Expert Syst. Appl. 2004, 26, 413–426. [Google Scholar] [CrossRef]
  18. Kang, W.; Shao, B.; Zhang, Y. How Does Interactivity Shape Users’ Continuance Intention of Intelligent Voice Assistants? Evidence from SEM and fsQCA. Psychol. Res. Behav. Manag. 2024, 17, 867–889. [Google Scholar] [CrossRef] [PubMed]
  19. Lina, S.; Ali, T.; Acikgoz, F. AI-Enabled Service Continuance: Roles of Trust and Privacy Risk. J. Comput. Inf. Syst. 2025, 0, 1–16. [Google Scholar] [CrossRef]
  20. Xue, J.; Niu, Y.; Liang, X.; Yin, S. Unraveling the Effects of Voice Assistant Interactions on Digital Engagement: The Moderating Role of Adult Playfulness. Int. J. Hum. Comput. Interact. 2024, 40, 4934–4955. [Google Scholar] [CrossRef]
  21. Zhou, T.; Ma, X. Examining Generative AI User Continuance Intention Based on the SOR Model. Aslib. J. Inf. Manag. 2025. ahead-of-print. [Google Scholar] [CrossRef]
  22. Suh, A. How Users Cognitively Appraise and Emotionally Experience the Metaverse: Focusing on Social Virtual Reality. Inf. Technol. People 2023, 37, 1613–1641. [Google Scholar] [CrossRef]
  23. Li, Y.; Sun, M. The Impact of Perceived Interactivity on Purchase Intention in Interactive Video: A CAB-Based Chain Mediation Model of Empathy, Immersion, and Arousal. Front. Commun. 2025, 10, 1615509. [Google Scholar] [CrossRef]
  24. Huang, F.; Zou, B. English Speaking with Artificial Intelligence (AI): The Roles of Enjoyment, Willingness to Communicate with AI, and Innovativeness. Comput. Hum. Behav. 2024, 159, 108355. [Google Scholar] [CrossRef]
  25. Soonpipatskul, N.; Pal, D.; Watanapa, B.; Charoenkitkarn, N. Personality Perceptions of Conversational Agents: A Task-Based Analysis Using Thai as the Conversational Language. IEEE Access 2023, 11, 94545–94562. [Google Scholar] [CrossRef]
  26. Al-Oraini, B. Chatbot Dynamics: Trust, Social Presence and Customer Satisfaction in AI-Driven Services. J. Innov. Digit. Transform. 2025, 2, 109–130. [Google Scholar] [CrossRef]
  27. Gillath, O.; Ai, T.; Branicky, M.S.; Keshmiri, S.; Davison, R.B.; Spaulding, R. Attachment and Trust in Artificial Intelligence. Comput. Hum. Behav. 2021, 115, 106607. [Google Scholar] [CrossRef]
  28. Holzwarth, M.; Janiszewski, C.; Neumann, M. The Influence of Avatars on Online Consumer Shopping Behavior. J. Mark. 2006, 70, 19–36. [Google Scholar] [CrossRef]
  29. Wang, L.C.; Baker, J.; Wagner, J.A.; Wakefield, K. Can A Retail Web Site Be Social? J. Mark. 2007, 71, 143–157. [Google Scholar] [CrossRef]
  30. Toader, D.-C.; Boca, G.; Toader, R.; Măcelaru, M.; Toader, C.; Ighian, D.; Rădulescu, A.T. The Effect of Social Presence and Chatbot Errors on Trust. Sustainability 2020, 12, 256. [Google Scholar] [CrossRef]
  31. Abd-Alrazaq, A.A.; Alajlani, M.; Alalwan, A.A.; Bewick, B.M.; Gardner, P.; Househ, M. An Overview of the Features of Chatbots in Mental Health: A Scoping Review. Int. J. Med. Inf. 2019, 132, 103978. [Google Scholar] [CrossRef]
  32. Kusal, S.; Patil, S.; Choudrie, J.; Kotecha, K.; Mishra, S.; Abraham, A. AI-Based Conversational Agents: A Scoping Review From Technologies to Future Directions. IEEE Access 2022, 10, 92337–92356. [Google Scholar] [CrossRef]
  33. Mctear, M. Conversational AI: Dialogue Systems, Conversational Agents, and Chatbots. Synth. Lect. Hum. Lang. Technol. 2020, 13, 12–14. [Google Scholar] [CrossRef]
  34. Young, T.; Hazarika, D.; Poria, S.; Cambria, E. Recent Trends in Deep Learning Based Natural Language Processing. IEEE Comput. Intell. Mag. 2018, 13, 55–75. [Google Scholar] [CrossRef]
  35. Abid, A.; Sheikh, M.T.; Patil, H.; Chinchkhede, W.; Chavhan, R.; Asutkar, D.G.M.; Wankhede, M.J. An Audio-Visual Virtual Personal Assistant. Int. J. Multidiscip. Res. 2024. [Google Scholar] [CrossRef]
  36. Këpuska, V.; Bohouta, G. Next-Generation of Virtual Personal Assistants (Microsoft Cortana, Apple Siri, Amazon Alexa and Google Home). In Proceedings of the 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 8–10 January 2018; pp. 99–103. [Google Scholar] [CrossRef]
  37. Boichenko, A.V.; Boichenko, O.A. Online Education Empowerment with Artificial Intelligence Tools. Artif. Intell. 2020, 25, 22–29. [Google Scholar] [CrossRef]
  38. Perez-Vega, R.; Kaartemo, V.; Lages, C.R.; Razavi, N.; Männistö, J. Reshaping the Contexts of Online Customer Engagement Behavior via Artificial Intelligence: A Conceptual Framework. J. Bus. Res. 2021, 129, 902–910. [Google Scholar] [CrossRef]
  39. Aggarwal, D.; Sharma, D.; Saxena, A. Exploring the Role of AI for Enhancement of Social Media Marketing. J. Media Cult. Commun. 2024, 4, 1–11. [Google Scholar] [CrossRef]
  40. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Curran Associates Inc.: Red Hook, NY, USA; pp. 6000–6010. [Google Scholar]
  41. Nass, C.; Moon, Y. Machines and Mindlessness: Social Responses to Computers. J. Soc. Issues 2000, 56, 81–103. [Google Scholar] [CrossRef]
  42. Daniluk, A. The Effects of Gameful and Playful Design on Users’ Behavior: The Case of Duolingo. Master’s Thesis, Tilburg University, Tilburg, The Netherlands, February 2024. [Google Scholar]
  43. Newhagen, J.E.; Cordes, J.W.; Levy, M.R. Nightly@nbc.com: Audience Scope and the Perception of Interactivity in Viewer Mail on the Internet. J. Commun. 1995, 45, 164–175. [Google Scholar] [CrossRef]
  44. Lin, H.-C.; Chang, C.-M. What Motivates Health Information Exchange in Social Media? The Roles of the Social Cognitive Theory and Perceived Interactivity. Inf. Manag. 2018, 55, 771–780. [Google Scholar] [CrossRef]
  45. Zhao, L.; Lu, Y. Enhancing Perceived Interactivity through Network Externalities: An Empirical Study on Micro-Blogging Service Satisfaction and Continuance Intention. Decis. Support Syst. 2012, 53, 825–834. [Google Scholar] [CrossRef]
  46. Park, M.; Yoo, J. Effects of Perceived Interactivity of Augmented Reality on Consumer Responses: A Mental Imagery Perspective. J. Retail Consum. Serv. 2020, 52, 101912. [Google Scholar] [CrossRef]
  47. Liu, Y.; Shrum, L.J. What Is Interactivity and Is It Always Such a Good Thing? Implications of Definition, Person, and Situation for the Influence of Interactivity on Advertising Effectiveness. J. Advert. 2002, 31, 53–64. [Google Scholar] [CrossRef]
  48. Yang, S.; Lee, Y.J. The Dimensions of M-Interactivity and Their Impacts in the Mobile Commerce Context. Int. J. Electron. Commer. 2016, 21, 548–571. [Google Scholar] [CrossRef]
  49. McMillan, S.J.; Hwang, J.-S. Measures of Perceived Interactivity: An Exploration of the Role of Direction of Communication, User Control, and Time in Shaping Perceptions of Interactivity. J. Advert. 2002, 31, 29–42. [Google Scholar] [CrossRef]
  50. Wu, G. The Mediating Role of Perceived Interactivity in the Effect of Actual Interactivity on Attitude Toward the Website. J. Interact. Advert. 2005, 5, 29–39. [Google Scholar] [CrossRef]
  51. Johnson, G.J.; Bruner II, G.C.; Kumar, A. Interactivity and Its Facets Revisited: Theory and Empirical Test. J. Advert. 2006, 35, 35–52. [Google Scholar] [CrossRef]
  52. Fitzpatrick, K.; Darcy, A.M.; Vierhile, M. Delivering Cognitive Behavior Therapy to Young Adults With Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial. JMIR Ment. Health 2017, 4, e7785. [Google Scholar] [CrossRef]
  53. Hassenzahl, M. Experience Design: Technology for All the Right Reasons; Morgan & Claypool Publishers: San Rafael, CA, USA, 2010; Volume 3. [Google Scholar]
  54. Liapis, A.; Guckelsberger, C.; Zhu, J.; Harteveld, C.; Kriglstein, S.; Denisova, A.; Gow, J.; Preuss, M. Designing for Playfulness in Human-AI Authoring Tools. In Proceedings of the 18th International Conference on the Foundations of Digital Games, Lisbon, Portugal, 11–14 April 2023. [Google Scholar] [CrossRef]
  55. Blomsma, P.; Skantze, G.; Swerts, M. Backchannel Behavior Influences the Perceived Personality of Human and Artificial Communication Partners. Front. Artif. Intell. 2022, 5, 835298. [Google Scholar] [CrossRef]
  56. Short, J.; Williams, E.; Christie, B. The Social Psychology of Telecommunications; Wiley: Hoboken, NJ, USA, 1976; ISBN 978-0-471-01581-9. [Google Scholar]
  57. Lee, K.M. Presence, Explicated. Commun. Theory 2004, 14, 27–50. [Google Scholar] [CrossRef]
  58. Li, W.; Mao, Y.; Zhou, L. The Impact of Interactivity on User Satisfaction in Digital Social Reading: Social Presence as a Mediator. Int. J. Hum. Comput. Interact. 2021, 37, 1636–1647. [Google Scholar] [CrossRef]
  59. Tu, C.; Mcisaac, M. The Relationship of Social Presence and Interaction in Online Classes. Am. J. Distance Educ. 2002, 16, 131–150. [Google Scholar] [CrossRef]
  60. Biocca, F.; Harms, C.; Burgoon, J. Towards A More Robust Theory and Measure of Social Presence: Review and Suggested Criteria. Presence 2003, 12, 456–480. [Google Scholar] [CrossRef]
  61. Rettie, R. Connectedness, Awareness and Social Presence. Ph.D. Thesis, Kingston University, London, UK, 2008. [Google Scholar]
  62. Abbas, T.; Gadiraju, U.; Khan, V.-J.; Markopoulos, P. Understanding User Perceptions of Response Delays in Crowd-Powered Conversational Systems. Proc. ACM Hum.-Comput. Interact. 2022, 6, 1–42. [Google Scholar] [CrossRef]
  63. Wang, Y.T.; Wu, L.L.; Chen, H.C.; Yeh, M.Y. Interactivity of Social Media and Online Consumer Behavior: The Moderating Effects of Opinion Leadership. 2012. Available online: https://scholar.archive.org/work/cmnyprehbjd5rg7gyxn3alg7my/access/wayback/https://aisel.aisnet.org/cgi/viewcontent.cgi?referer=&httpsredir=1&article=1124&context=icis2012 (accessed on 26 January 2025).
  64. Daft, R.L.; Lengel, R.H. Organizational Information Requirements, Media Richness and Structural Design. Manag. Sci. 1986, 32, 554–571. [Google Scholar] [CrossRef]
  65. Purington, A.; Taft, J.G.; Sannon, S.; Bazarova, N.N.; Taylor, S. “Alexa Is My New BFF”: Social Roles, User Satisfaction, and Personification of the Amazon Echo. In Proceedings of 2017 CHI Conference Extended Abstracts on Human Factors in Computing System, Denver, CO, USA, 6–11 May 2017. [Google Scholar] [CrossRef]
  66. Algharabat, R.S.; Rana, N.P. Social Commerce in Emerging Markets and Its Impact on Online Community Engagement. Inf. Syst. Front. 2021, 23, 1499–1520. [Google Scholar] [CrossRef]
  67. Boutet, I.; LeBlanc, M.; Chamberland, J.A.; Collin, C.A. Emojis Influence Emotional Communication, Social Attributions, and Information Processing. Comput. Hum. Behav. 2021, 119, 106722. [Google Scholar] [CrossRef]
  68. Heerink, M.; Kröse, B.J.A.; Evers, V.; Wielinga, B. The Influence of Social Presence on Acceptance of a Companion Robot by Older People. J. Phys. Agents 2008, 2, 33–40. [Google Scholar] [CrossRef]
  69. Mishra, A.; Shukla, A.; Sharma, S.K. Psychological Determinants of Users’ Adoption and Word-of-Mouth Recommendations of Smart Voice Assistants. Int. J. Inf. Manag. 2022, 67, 102413. [Google Scholar] [CrossRef]
  70. Hsieh, S.H.; Tseng, T.H. Playfulness in Mobile Instant Messaging: Examining the Influence of Emoticons and Text Messaging on Social Interaction. Comput. Hum. Behav. 2017, 69, 405–414. [Google Scholar] [CrossRef]
  71. Siemon, D.; Strohmann, T.; Khosrawi-Rad, B.; de Vreede, T.; Elshan, E.; Meyer, M. Why Do We Turn to Virtual Companions? A Text Mining Analysis of Replika Reviews. 2022. Available online: https://www.alexandria.unisg.ch/server/api/core/bitstreams/c36ed62b-4a0d-4ede-8aac-e1da3f48838b/content (accessed on 26 January 2025).
  72. Xie, Y.; Liang, C.; Zhou, P.; Zhu, J. When Should Chatbots Express Humor? Exploring Different Influence Mechanisms of Humor on Service Satisfaction. Comput. Hum. Behav. 2024, 156, 108238. [Google Scholar] [CrossRef]
  73. Kovari, A. AI for Decision Support: Balancing Accuracy, Transparency, and Trust Across Sectors. Information 2024, 15, 725. [Google Scholar] [CrossRef]
  74. Wanner, J.; Herm, L.-V.; Heinrich, K.; Janiesch, C. The Effect of Transparency and Trust on Intelligent System Acceptance: Evidence from a User-Based Study. Electron. Mark. 2022, 32, 2079–2102. [Google Scholar] [CrossRef]
  75. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An Integrative Model of Organizational Trust. Acad. Manag. Rev. 1995, 20, 709–734. [Google Scholar] [CrossRef]
  76. Bach, T.A.; Khan, A.; Hallock, H.P.; Beltrao, G.; Sousa, S. A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective. Int. J. Hum. Comput. Interact. 2022, 40, 1251–1266. [Google Scholar] [CrossRef]
  77. Lee, J.-E.; Nass, C. Trust in Computers: The Computers-Are-Social-Actors (CASA) Paradigm and Trustworthiness Perception in Human-Computer Communication. In Trust and Technology in a Ubiquitous Modern Environment: Theoretical and Methodological Perspectives; IGI Global Scientific Publishing: Hershey, PA, USA, 2010; pp. 1–15. [Google Scholar] [CrossRef]
  78. Sanders, T.; Kaplan, A.; Koch, R.; Schwartz, M.; Hancock, P.A. The Relationship Between Trust and Use Choice in Human-Robot Interaction. Hum. Factors 2019, 61, 614–626. [Google Scholar] [CrossRef] [PubMed]
  79. Balasubramaniam, N.; Kauppinen, M.; Rannisto, A.; Hiekkanen, K.; Kujala, S. Transparency and Explainability of AI Systems: From Ethical Guidelines to Requirements. Inf. Softw. Technol. 2023, 159, 107197. [Google Scholar] [CrossRef]
  80. Ismatullaev, U.; Kim, S.-H. Review of the Factors Affecting Acceptance of AI-Infused Systems. Hum. Factors 2022, 66, 126–144. [Google Scholar] [CrossRef] [PubMed]
  81. Chakraborty, D.; Kumar Kar, A.; Patre, S.; Gupta, S. Enhancing Trust in Online Grocery Shopping through Generative AI Chatbots. J. Bus. Res. 2024, 180, 114737. [Google Scholar] [CrossRef]
  82. Cheng, X.; Bao, Y.; Zarifis, A.; Gong, W.; Mou, J. Exploring Consumers’ Response to Text-Based Chatbots in e-Commerce: The Moderating Role of Task Complexity and Chatbot Disclosure. Internet Res. 2021, 32, 496–517. [Google Scholar] [CrossRef]
  83. Yue, C.A.; Men, L.R.; Ferguson, M.A. Bridging Transformational Leadership, Transparent Communication, and Employee Openness to Change: The Mediating Role of Trust. Public Relat. Rev. 2019, 45, 101779. [Google Scholar] [CrossRef]
  84. Forbush, A.; LeBaron-Black, A.B.; Saxey, M.T.; Suxo-Sanchez, S.; Holmes, E.K.; Yorgason, J. Can I Trust You? Bidirectional, Longitudinal Associations between Trust and Various Topics of Couple Communication. J. Soc. Pers. Relatsh. 2025, 42, 1778–1799. [Google Scholar] [CrossRef]
  85. Hamacher, A.; Bianchi-Berthouze, N.; Pipe, A.G.; Eder, K. Believing in BERT: Using Expressive Communication to Enhance Trust and Counteract Operational Error in Physical Human-Robot Interaction. In Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 26–31 August 2016; pp. 493–500. [Google Scholar]
  86. Kim, J.; Kim, M.; Lee, S.-M. Unlocking Trust Dynamics: An Exploration of Playfulness, Expertise, and Consumer Behavior in Virtual Influencer Marketing. Int. J. Hum. Comput. Interact. 2025, 41, 378–390. [Google Scholar] [CrossRef]
  87. Holflod, K. Playful Learning and Boundary-Crossing Collaboration in Higher Education: A Narrative and Synthesising Review. J. Furth. High. Educ. 2023, 47, 465–480. [Google Scholar] [CrossRef]
  88. Nikghalb, M.R.; Cheng, J. Interrogating AI: Characterizing Emergent Playful Interactions with ChatGPT. Proc. ACM Hum.Comput. Interact. 2024, 9, 1–23. [Google Scholar] [CrossRef]
  89. Mariani, M.M.; Hashemi, N.; Wirtz, J. Artificial Intelligence Empowered Conversational Agents: A Systematic Literature Review and Research Agenda. J. Bus. Res. 2023, 161, 113838. [Google Scholar] [CrossRef]
  90. Araujo, T.; Bol, N. From Speaking like a Person to Being Personal: The Effects of Personalized, Regular Interactions with Conversational Agents. Comput. Hum. Behav. Artif. Hum. 2024, 2, 100030. [Google Scholar] [CrossRef]
  91. Sipos, D. The Effects of AI-Powered Personalization on Consumer Trust, Satisfaction, and Purchase Intent. Eur. J. Appl. Sci. Eng. Technol. 2025, 3, 14–24. [Google Scholar] [CrossRef]
  92. Sun, X.; Li, J.; Tang, P.; Zhou, S.; Peng, X.; Li, H.N.; Wang, Q. Exploring Personalised Autonomous Vehicles to Influence User Trust. Cogn. Comput. 2020, 12, 1170–1186. [Google Scholar] [CrossRef]
  93. Raut, G.; Goel, A.; Taneja, U. Humanizing E-Tail Experiences: Navigating User Acceptance, Social Presence, and Trust in the Realm of Conversational AI Agents. Pers. Ubiquit Comput. 2024, 28, 895–906. [Google Scholar] [CrossRef]
  94. Cicco, R.D.; Silva, S.; Alparone, F. Millennials’ Attitude toward Chatbots: An Experimental Study in a Social Relationship Perspective. Int. J. Retail Distrib. Manag. 2020, 48, 1213–1233. [Google Scholar] [CrossRef]
  95. Janson, A. How to Leverage Anthropomorphism for Chatbot Service Interfaces: The Interplay of Communication Style and Personification. Comput. Hum. Behav. 2023, 149, 107954. [Google Scholar] [CrossRef]
  96. Fu, J.; Mouakket, S.; Sun, Y. The Role of Chatbots’ Human-like Characteristics in Online Shopping. Electron. Commer. Res. Appl. 2023, 61, 101304. [Google Scholar] [CrossRef]
  97. Pavone, G.; Desveaud, K. Gendered AI in Fully Autonomous Vehicles: The Role of Social Presence and Competence in Building Trust. J. Consum. Mark. 2025, 42, 240–254. [Google Scholar] [CrossRef]
  98. Lalot, F.; Bertram, A.-M. When the Bot Walks the Talk: Investigating the Foundations of Trust in an Artificial Intelligence (AI) Chatbot. J. Exp. Psychol. Gen. 2024, 154, 533. [Google Scholar] [CrossRef]
  99. Fan, M.; Zou, F.; He, Y.; Xuan, J. Research on Users’ Trust of Chatbots Driven by AI: An Empirical Analysis Based on System Factors and User Characteristics. In Proceedings of the 2021 IEEE International Conference on Consumer Electronics and Computer Engineering (ICCECE), Guangzhou, China, 15–17 January 2021; pp. 55–58. [Google Scholar]
  100. Lee, S.K.; Sun, J. Testing a Theoretical Model of Trust in Human-Machine Communication: Emotional Experience and Social Presence. Behav. Inf. Technol. 2023, 42, 2754–2767. [Google Scholar] [CrossRef]
  101. Agudo-Peregrina, Á.F.; Hernández-García, Á.; Pascual-Miguel, F.J. Behavioral Intention, Use Behavior and the Acceptance of Electronic Learning Systems: Differences between Higher Education and Lifelong Learning. Comput. Hum. Behav. 2014, 34, 301–314. [Google Scholar] [CrossRef]
  102. Nißen, M.; Selimi, D.; Janssen, A.; Cardona, D.R.; Breitner, M.H.; Kowatsch, T.; von Wangenheim, F. See You Soon Again, Chatbot? A Design Taxonomy to Characterize User-Chatbot Relationships with Different Time Horizons. Comput. Hum. Behav. 2022, 127, 107043. [Google Scholar] [CrossRef]
  103. Bhattacherjee, A. Understanding Information Systems Continuance: An Expectation-Confirmation Model. MIS Q. 2001, 25, 351–370. [Google Scholar] [CrossRef]
  104. Attar, R.; Amidi, A.; Hajli, N. The Role of Social Presence and Trust on Customer Loyalty. Br. Food J. 2022, 125, 96–111. [Google Scholar] [CrossRef]
  105. Hsieh, S.H.; Lee, C.T. The AI Humanness: How Perceived Personality Builds Trust and Continuous Usage Intention. J. Prod. Brand Manag. 2024, 33, 618–632. [Google Scholar] [CrossRef]
  106. Nadeem, W.; Khani, A.; Schultz, C.D.; Adam, N.A.; Attar, R.; Hajli, N. How Social Presence Drives Commitment and Loyalty with Online Brand Communities? The Role of Social Commerce Trust. J. Retail Consum. Serv. 2020, 55, 102136. [Google Scholar] [CrossRef]
  107. Thesia, F.A.B.; Aruan, D.T. The Effect of Social Presence on the Trust and Repurchase of Social Commerce Tiktok Shop Users. J. Soc. Res. 2023, 2, 3776–3785. [Google Scholar] [CrossRef]
  108. Glikson, E.; Woolley, A.W. Human Trust in Artificial Intelligence: Review of Empirical Research. Acad. Manag. Ann. 2020, 14, 627–660. [Google Scholar] [CrossRef]
  109. Hair, J.F.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M.; Danks, N.P.; Ray, S. Partial Least Squares Structural Equation Modeling (PLS-SEM) Using R: A Workbook; Classroom Companion: Business; Springer: Cham, Switzerland, 2021; ISBN 978-3-030-80518-0. [Google Scholar]
  110. Sohaib, O.; Hussain, W.; Asif, M.; Ahmad, M.; Mazzara, M. A PLS-SEM Neural Network Approach for Understanding Cryptocurrency Adoption. IEEE Access 2020, 8, 13138–13150. [Google Scholar] [CrossRef]
  111. Shi, X.; Evans, R.; Shan, W. Solver Engagement in Online Crowdsourcing Communities: The Roles of Perceived Interactivity, Relationship Quality and Psychological Ownership. Technol. Forecast. Soc. Change 2022, 175, 121389. [Google Scholar] [CrossRef]
  112. Ashfaq, M.; Yun, J.; Yu, S.; Loureiro, S.M.C. I, Chatbot: Modeling the Determinants of Users’ Satisfaction and Continuance Intention of AI-Powered Service Agents. Telemat. Inform. 2020, 54, 101473. [Google Scholar] [CrossRef]
  113. Lee, T. The Impact of Perceptions of Interactivity on Customer Trust and Transaction Intentions in Mobile Commerce. J. Electron. Commer. Res. 2005, 6, 165. [Google Scholar]
  114. Fernandes, T.; Oliveira, E. Understanding Consumers’ Acceptance of Automated Technologies in Service Encounters: Drivers of Digital Voice Assistants Adoption. J. Bus. Res. 2021, 122, 180–191. [Google Scholar] [CrossRef]
  115. Gao, T. Research on the Influence of Perceived Interaction on the Willingness to Continuously Use Generative AI from the Perspective of Human-AI Trust. Master’s Thesis, Jilin University, Changchun, China, 2024. [Google Scholar]
  116. Leighton, K.; Kardong-Edgren, S.; Schneidereith, T.; Foisy-Doll, C. Using Social Media and Snowball Sampling as an Alternative Recruitment Strategy for Research. Clin. Simul. Nurs. 2021, 55, 37–42. [Google Scholar] [CrossRef]
  117. Hill, D.R. What Sample Size Is “Enough” in Internet Survey Research? Interpers. Comput. Technol. Electron. J. 21st Century 1998, 6, 1–12. [Google Scholar]
  118. Alwosheel, A.; Van Cranenburgh, S.; Chorus, C.G. Is Your Dataset Big Enough? Sample Size Requirements When Using Artificial Neural Networks for Discrete Choice Analysis. J. Choice Model. 2018, 28, 167–182. [Google Scholar] [CrossRef]
  119. Hair, J.F.; Risher, J.J.; Sarstedt, M.; Ringle, C.M. When to Use and How to Report the Results of PLS-SEM. Eur. Bus. Rev. 2019, 31, 2–24. [Google Scholar] [CrossRef]
  120. Akram, K.; Saeed, A.; Bresciani, S.; Rehman, S.U.; Ferraris, A. Factors Affecting Environmental Performance during the Covid-19 Period in the Leather Industry: A Moderated-Mediation Approach. J. Compet. 2022, 14, 5–22. [Google Scholar] [CrossRef]
  121. Becker, J.-M.; Rai, A.; Ringle, C.M.; Völckner, F. Discovering Unobserved Heterogeneity in Structural Equation Models to Avert Validity Threats. MIS Q. 2013, 37, 665–694. [Google Scholar] [CrossRef]
  122. Fornell, C.; Larcker, D.F. Evaluating Structural Equation Models with Unobservable Variables and Measurement Error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  123. Child, D. The Essentials of Factor Analysis; A&C Black: London, UK, 2006; ISBN 978-0-8264-8000-2. [Google Scholar]
  124. Bagozzi, R.P.; Yi, Y. On the Evaluation of Structural Equation Models. JAMS 1988, 16, 74–94. [Google Scholar] [CrossRef]
  125. Preacher, K.J.; Hayes, A.F. Asymptotic and Resampling Strategies for Assessing and Comparing Indirect Effects in Multiple Mediator Models. Behav. Res. Methods 2008, 40, 879–891. [Google Scholar] [CrossRef]
  126. Wong, L.-W.; Tan, G.; Ooi, K.-B.; Lin, B.; Dwivedi, Y. Artificial Intelligence-Driven Risk Management for Enhancing Supply Chain Agility: A Deep-Learning-Based Dual-Stage PLS-SEM-ANN Analysis. Int. J. Prod. Res. 2022, 62, 5535–5555. [Google Scholar] [CrossRef]
  127. Guo, M. The Influence of Educational and Emotional Support on E-Learning Continuance Intention: A Two-Stage Approach PLS-SEM and ANN Analysis. SAGE Open 2024, 14, 21582440241280793. [Google Scholar] [CrossRef]
  128. Lee, V.-H.; Dwivedi, Y.K.; Tan, G.W.-H.; Ooi, K.-B.; Wong, L.-W. How Does Information Technology Capabilities Affect Business Sustainability? The Roles of Ambidextrous Innovation and Data-Driven Culture. RD Manag. 2024, 54, 750–774. [Google Scholar] [CrossRef]
  129. Liébana-Cabanillas, F.; Marinković, V.; Kalinić, Z. A SEM-Neural Network Approach for Predicting Antecedents of m-Commerce Acceptance. Int. J. Inf. Manag. 2017, 37, 14–24. [Google Scholar] [CrossRef]
  130. Xu, Y.; Zhang, W.; Bao, H.; Zhang, S.; Xiang, Y. A SEM–Neural Network Approach to Predict Customers’ Intention to Purchase Battery Electric Vehicles in China’s Zhejiang Province. Sustainability 2019, 11, 3164. [Google Scholar] [CrossRef]
  131. Taufiq-Hail, G.A.-M.; Sarea, A.; Hawaldar, I.T. The Impact of Self-Efficacy on Feelings and Task Performance of Academic and Teaching Staff in Bahrain during COVID-19: Analysis by SEM and ANN. J. Open Innov. Technol. Mark. Complex. 2021, 7, 224. [Google Scholar] [CrossRef]
  132. Wang, G.; Tan, G.W.-H.; Yuan, Y.-P.; Ooi, K.-B.; Dwivedi, Y.K. Revisiting TAM2 in behavioral targeting advertising: A deep learning-based dual-stage SEM-ANN analysis. Technol. Forecast. Soc. Change 2022, 174, 121345. [Google Scholar] [CrossRef]
  133. Xie, Y.; Liang, C.; Zhou, P.; Jiang, L. Exploring the Influence Mechanism of Chatbot-Expressed Humor on Service Satisfaction in Online Customer Service. J. Retail Consum. Serv. 2024, 76, 103599. [Google Scholar] [CrossRef]
  134. Zhao, C.L.; Wang, X.; Ma, C.X. Impact of Perceived Interactivity on the Online Learners’ Continuance Intention: Based on S-O-R Perspective. Mod. Distance Educ. 2018, 177, 12–20. [Google Scholar]
  135. Li, Y.; Wang, H.; Zeng, X.; Yang, S.; Wei, J. Effects of Interactivity on Continuance Intention of Government Microblogging Services: An Implication on Mobile Social Media. Int. J. Mob. Commun. 2020, 18, 420–442. [Google Scholar] [CrossRef]
  136. Wang, H.; Meng, Y.; Wang, W. The Role of Perceived Interactivity in Virtual Communities: Building Trust and Increasing Stickiness. Connect. Sci. 2013, 25, 55–73. [Google Scholar] [CrossRef]
  137. Knote, R. Towards Solving the Personalization-Privacy Paradox for Smart Personal Assistants. 2019. Available online: https://www.alexandria.unisg.ch/entities/publication/0e64a973-314e-4f23-a6fc-26b1b9e727a4 (accessed on 20 February 2025).
  138. Jiang, Y.; Yang, X.; Zheng, T. Make Chatbots More Adaptive: Dual Pathways Linking Human-like Cues and Tailored Response to Trust in Interactions with Chatbots. Comput. Hum. Behav. 2022, 138, 107485. [Google Scholar] [CrossRef]
  139. Mehra, S.; Ranga, V.; Agarwal, R. Improving Speech Command Recognition through Decision-Level Fusion of Deep Filtered Speech Cues. SIViP 2024, 18, 1365–1373. [Google Scholar] [CrossRef]
  140. Fang, T.; Yang, S.; Lan, K.; Wong, D.F.; Hu, J.; Chao, L.S.; Zhang, Y. Is ChatGPT a Highly Fluent Grammatical Error Correction System? A Comprehensive Evaluation. arXiv 2023, arXiv:2304.01746. [Google Scholar] [CrossRef]
  141. Shader, R.I. Some Reflections on IBM Watson and on Women’s Health. Clin. Ther. 2016, 38, 1–2. [Google Scholar] [CrossRef]
  142. Supriyono; Wibawa, A.P.; Suyono; Kurniawan, F. Advancements in Natural Language Processing: Implications, Challenges, and Future Directions. Telemat. Inform. Rep. 2024, 16, 100173. [Google Scholar] [CrossRef]
  143. Adiwardana, D.; Luong, M.-T.; So, D.R.; Hall, J.; Fiedel, N.; Thoppilan, R.; Yang, Z.; Kulshreshtha, A.; Nemade, G.; Lu, Y.; et al. Towards a Human-like Open-Domain Chatbot. arXiv 2020, arXiv:2001.09977. [Google Scholar]
Figure 1. Theoretical model.
Figure 2. User Interface of Dialogue with Great Souls.
Figure 3. Results of the PLS structural model (solid black lines indicate significant paths; red dotted lines indicate non-significant paths).
Figure 4. The structure of the ANN model A.
Figure 5. The structure of the ANN model B.
Figure 6. The structure of the ANN model C.
Table 1. Questionnaire scales and references.
Variable | Items | Questions | Reference
Control (CON) | 3 items
  • I feel that I have a high degree of control when using the CA.
  • I feel I can operate the CA to obtain the information I want.
  • My actions determine the kind of experience I have when using the CA.
[46]
Responsiveness (RES) | 4 items
  • The CA retrieves information very quickly.
  • The CA processes my input promptly.
  • I can obtain the information I want without delay when using the CA.
  • When using the CA, I feel that I receive information instantly.
[46]
Communication (COM), 3 items
  • The CA facilitates communication between the information seeker and the solution provider.
  • I feel that the CA listens to me.
  • The CA gives me opportunities to provide feedback on issues.
[111]
Playfulness (PLA), 3 items
  • Interacting with the CA is fun and enjoyable.
  • I enjoy chatting with the CA.
  • Conversations with the CA are exciting.
[69,112]
Personalization (PER), 2 items
  • The CA allows me to access products or services tailored to my needs.
  • This CA makes me feel like a unique customer.
[105,113]
Social Presence (SP), 3 items
  • Interacting with the CA makes me feel comfortable, like being with a friend.
  • I feel a sense of human contact when interacting with the voice assistant.
  • Interacting with the voice assistant gives me a sense of social connection.
[69]
Trust (TR), 4 items
  • I feel I can rely on the virtual assistant to complete tasks I need done.
  • I trust the information provided by the virtual assistant is accurate.
  • The CA sincerely considers and addresses user issues.
  • The CA does not misuse the information and advantages it holds over users during interaction.
[114,115]
Continuance Intention (CI), 3 items
  • I intend to continue using the CA in the future.
  • I will continue trying to use this CA in my daily life.
  • I would strongly recommend others to use it.
[103,112]
Table 2. Participant demographic information (n = 305).
Measure | Category | Frequency | Percent
Gender | Male | 169 | 55.41%
 | Female | 136 | 44.59%
Age | <18 | 47 | 12.13%
 | 18–30 | 109 | 35.74%
 | 31–40 | 89 | 29.18%
 | >40 | 60 | 22.95%
Education | Junior high school and below | 23 | 7.54%
 | High school/secondary school | 35 | 11.48%
 | Undergraduate | 174 | 57.10%
 | Master and above | 73 | 23.93%
Experience of using CA | Less than one year | 69 | 22.62%
 | 1–2 years | 104 | 34.10%
 | 2.1–3 years | 80 | 26.33%
 | Over 3 years | 52 | 17.05%
Frequency of using CA | Never used before | 28 | 9.18%
 | 1–5 times | 86 | 28.20%
 | 5–10 times | 67 | 21.97%
 | More than 10 times | 124 | 40.66%
Table 3. The results of the construct assessment.
Fit Index | Computed Value | Reference
SRMR | 0.036 | [119]
NFI | 0.867 | [120]
Table 4. The results of the collinearity diagnostics.
Path | VIF
CON→SP | 1.215
CON→TR | 1.241
RES→SP | 1.210
RES→TR | 1.227
COM→SP | 1.201
COM→TR | 1.246
PLA→SP | 1.267
PLA→TR | 1.303
PER→SP | 1.199
PER→TR | 1.250
SP→TR | 1.362
SP→CI | 1.152
TR→CI | 1.152
Table 5. Reliability and validity analysis.
Variable | Item | Factor Loading | AVE | CR (rho_a) | CR (rho_c) | α
CON | CON1 | 0.959 | 0.842 | 0.933 | 0.941 | 0.906
 | CON2 | 0.868
 | CON3 | 0.924
RES | RES1 | 0.897 | 0.847 | 0.941 | 0.957 | 0.940
 | RES2 | 0.934
 | RES3 | 0.930
 | RES4 | 0.920
COM | COM1 | 0.889 | 0.829 | 0.898 | 0.936 | 0.897
 | COM2 | 0.925
 | COM3 | 0.916
PLA | PLA1 | 0.920 | 0.844 | 0.912 | 0.942 | 0.908
 | PLA2 | 0.916
 | PLA3 | 0.920
PER | PER1 | 0.961 | 0.918 | 0.913 | 0.957 | 0.910
 | PER2 | 0.955
SP | SP1 | 0.904 | 0.819 | 0.892 | 0.931 | 0.889
 | SP2 | 0.885
 | SP3 | 0.926
TR | TR1 | 0.925 | 0.845 | 0.940 | 0.956 | 0.939
 | TR2 | 0.925
 | TR3 | 0.902
 | TR4 | 0.924
CI | CI1 | 0.894 | 0.810 | 0.884 | 0.928 | 0.883
 | CI2 | 0.897
 | CI3 | 0.908
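As a sanity check on Table 5, the AVE and composite-reliability columns can be recomputed from the reported loadings. The sketch below is not the authors' code; it assumes the standard formulas AVE = mean(λ²) and ρ_c = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) for standardized loadings.

```python
def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    return sum(l * l for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """Composite reliability (rho_c) computed from standardized loadings."""
    squared_sum = sum(loadings) ** 2
    error = sum(1 - l * l for l in loadings)  # item error variances
    return squared_sum / (squared_sum + error)

con = [0.959, 0.868, 0.924]  # CON item loadings from Table 5
print(round(ave(con), 3))                    # 0.842, matching the AVE column
print(round(composite_reliability(con), 3))  # 0.941, matching CR (rho_c)
```

Running the same check on RES (loadings 0.897, 0.934, 0.930, 0.920) reproduces its reported AVE of 0.847 and ρ_c of 0.957, so the table is internally consistent.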
Table 6. Discriminant validity (Fornell–Larcker criterion).
 | CI | COM | CON | PER | PLA | RES | SP | TR
CI | 0.900
COM | 0.200 | 0.910
CON | 0.336 | 0.293 | 0.918
PER | 0.272 | 0.186 | 0.280 | 0.958
PLA | 0.300 | 0.313 | 0.266 | 0.332 | 0.919
RES | 0.252 | 0.279 | 0.291 | 0.257 | 0.300 | 0.920
SP | 0.348 | 0.341 | 0.323 | 0.350 | 0.355 | 0.300 | 0.905
TR | 0.308 | 0.334 | 0.368 | 0.296 | 0.354 | 0.351 | 0.364 | 0.919
Note: diagonal entries are the square roots of the AVEs.
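The Fornell–Larcker test in Table 6 follows mechanically from Table 5: each diagonal entry is the square root of that construct's AVE, and discriminant validity holds when it exceeds every inter-construct correlation in its row and column. A minimal sketch of that check, using the reported AVEs (not the authors' code):

```python
import math

# AVE values taken from Table 5
ave = {"CI": 0.810, "COM": 0.829, "CON": 0.842, "PER": 0.918,
       "PLA": 0.844, "RES": 0.847, "SP": 0.819, "TR": 0.845}

# sqrt(AVE) reproduces the diagonal of Table 6
diag = {k: round(math.sqrt(v), 3) for k, v in ave.items()}
print(diag["CON"])  # 0.918, as reported

# Example row check for SP: sqrt(AVE) must exceed all of SP's correlations
sp_correlations = [0.348, 0.341, 0.323, 0.350, 0.355, 0.300, 0.364]
print(all(diag["SP"] > r for r in sp_correlations))  # True
```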
Table 7. Discriminant validity (HTMT values).
 | CI | COM | CON | PER | PLA | RES | SP | TR
CI | 
COM | 0.225
CON | 0.372 | 0.317
PER | 0.304 | 0.206 | 0.302
PLA | 0.335 | 0.346 | 0.285 | 0.366
RES | 0.277 | 0.304 | 0.311 | 0.278 | 0.323
SP | 0.390 | 0.381 | 0.353 | 0.389 | 0.392 | 0.328
TR | 0.338 | 0.364 | 0.395 | 0.321 | 0.380 | 0.372 | 0.397
Table 8. Discriminant validity (cross loadings).
Item | CI | COM | CON | PER | PLA | RES | SP | TR
CI1 | 0.894 | 0.200 | 0.312 | 0.227 | 0.267 | 0.234 | 0.330 | 0.253
CI2 | 0.897 | 0.172 | 0.277 | 0.219 | 0.279 | 0.229 | 0.338 | 0.269
CI3 | 0.908 | 0.170 | 0.320 | 0.292 | 0.263 | 0.218 | 0.268 | 0.310
COM1 | 0.179 | 0.889 | 0.276 | 0.173 | 0.288 | 0.246 | 0.308 | 0.287
COM2 | 0.193 | 0.925 | 0.308 | 0.190 | 0.284 | 0.276 | 0.295 | 0.315
COM3 | 0.175 | 0.916 | 0.218 | 0.147 | 0.283 | 0.240 | 0.327 | 0.311
CON1 | 0.342 | 0.328 | 0.959 | 0.310 | 0.302 | 0.304 | 0.345 | 0.385
CON2 | 0.254 | 0.165 | 0.868 | 0.199 | 0.167 | 0.217 | 0.223 | 0.302
CON3 | 0.320 | 0.291 | 0.924 | 0.248 | 0.245 | 0.271 | 0.305 | 0.317
PER1 | 0.275 | 0.203 | 0.284 | 0.961 | 0.322 | 0.256 | 0.348 | 0.289
PER2 | 0.246 | 0.153 | 0.252 | 0.955 | 0.314 | 0.236 | 0.323 | 0.278
PLA1 | 0.267 | 0.294 | 0.262 | 0.303 | 0.920 | 0.314 | 0.349 | 0.349
PLA2 | 0.304 | 0.303 | 0.238 | 0.295 | 0.916 | 0.267 | 0.320 | 0.314
PLA3 | 0.255 | 0.263 | 0.231 | 0.319 | 0.920 | 0.241 | 0.305 | 0.309
RES1 | 0.209 | 0.230 | 0.255 | 0.237 | 0.290 | 0.897 | 0.285 | 0.332
RES2 | 0.241 | 0.291 | 0.309 | 0.214 | 0.269 | 0.934 | 0.259 | 0.355
RES3 | 0.243 | 0.251 | 0.248 | 0.250 | 0.273 | 0.930 | 0.258 | 0.303
RES4 | 0.237 | 0.254 | 0.258 | 0.247 | 0.272 | 0.920 | 0.303 | 0.297
SP1 | 0.351 | 0.337 | 0.270 | 0.267 | 0.305 | 0.285 | 0.904 | 0.303
SP2 | 0.260 | 0.284 | 0.294 | 0.354 | 0.289 | 0.264 | 0.885 | 0.335
SP3 | 0.330 | 0.303 | 0.312 | 0.331 | 0.365 | 0.267 | 0.926 | 0.349
TR1 | 0.306 | 0.320 | 0.328 | 0.264 | 0.341 | 0.326 | 0.327 | 0.925
TR2 | 0.274 | 0.323 | 0.362 | 0.278 | 0.364 | 0.351 | 0.350 | 0.925
TR3 | 0.289 | 0.298 | 0.353 | 0.235 | 0.306 | 0.295 | 0.339 | 0.902
TR4 | 0.262 | 0.286 | 0.310 | 0.313 | 0.283 | 0.314 | 0.320 | 0.924
Table 9. Analysis of pathway relationships.
Hypothesis | β | SD | t-Value | p | Result
H1a: CON→SP | 0.139 | 0.054 | 2.568 | 0.010 | Supported
H2a: CON→TR | 0.179 | 0.056 | 3.184 | 0.001 | Supported
H1b: RES→SP | 0.110 | 0.055 | 2.014 | 0.044 | Supported
H2b: RES→TR | 0.157 | 0.055 | 2.835 | 0.005 | Supported
H1c: COM→SP | 0.182 | 0.052 | 3.510 | 0.000 | Supported
H2c: COM→TR | 0.133 | 0.062 | 2.131 | 0.033 | Supported
H1d: PLA→SP | 0.163 | 0.055 | 2.987 | 0.003 | Supported
H2d: PLA→TR | 0.141 | 0.056 | 2.505 | 0.012 | Supported
H1e: PER→SP | 0.195 | 0.057 | 3.394 | 0.001 | Supported
H2e: PER→TR | 0.087 | 0.056 | 1.572 | 0.116 | Unsupported
H3: SP→TR | 0.133 | 0.054 | 2.446 | 0.014 | Supported
H4: SP→CI | 0.272 | 0.057 | 4.776 | 0.000 | Supported
H5: TR→CI | 0.209 | 0.057 | 3.645 | 0.000 | Supported
Table 10. Direct and indirect effects.
Path | β | SD | t-Value | p | CIs (2.5–97.5%) | Result | VAF (%)
CON→CI | 0.079 | 0.027 | 2.945 | 0.003 | (0.033; 0.138) | Supported | 
RES→CI | 0.066 | 0.023 | 2.893 | 0.004 | (0.024; 0.113) | Supported | 
COM→CI | 0.082 | 0.023 | 3.583 | 0.000 | (0.040; 0.129) | Supported | 
PLA→CI | 0.078 | 0.023 | 3.344 | 0.001 | (0.036; 0.127) | Supported | 
PER→CI | 0.077 | 0.024 | 3.203 | 0.001 | (0.032; 0.127) | Supported | 
SP→CI | 0.299 | 0.057 | 3.645 | 0.000 | (0.184; 0.404) | Supported | 
CON→SP→CI | 0.038 | 0.019 | 2.039 | 0.042 | (0.008; 0.083) | Supported | 48.10
CON→TR→CI | 0.037 | 0.018 | 2.071 | 0.038 | (0.011; 0.083) | Supported | 46.83
RES→SP→CI | 0.030 | 0.017 | 1.707 | 0.088 | (0.002; 0.072) | Unsupported | 
RES→TR→CI | 0.033 | 0.015 | 2.158 | 0.031 | (0.009; 0.069) | Supported | 50.00
COM→SP→CI | 0.049 | 0.017 | 2.843 | 0.004 | (0.021; 0.088) | Supported | 59.76
COM→TR→CI | 0.028 | 0.016 | 1.776 | 0.076 | (0.004; 0.066) | Unsupported | 
PLA→SP→CI | 0.044 | 0.018 | 2.425 | 0.015 | (0.015; 0.087) | Supported | 56.41
PLA→TR→CI | 0.030 | 0.015 | 1.968 | 0.049 | (0.006; 0.068) | Supported | 38.46
PER→SP→CI | 0.037 | 0.016 | 2.283 | 0.022 | (0.021; 0.101) | Supported | 48.05
PER→TR→CI | 0.053 | 0.020 | 2.627 | 0.009 | (−0.002; 0.053) | Unsupported | 
SP→TR→CI | 0.028 | 0.013 | 2.075 | 0.038 | (0.007; 0.062) | Supported | 9.36
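The VAF column of Table 10 is the indirect effect divided by the corresponding total effect; values between roughly 20% and 80% are conventionally read as partial mediation. A short sketch of that arithmetic (not the authors' code; recomputing from the rounded β values can differ from the reported VAF in the last digit, since the authors presumably used unrounded estimates):

```python
def vaf(indirect, total):
    """VAF in percent: the share of the total effect carried by the mediator."""
    return round(indirect / total * 100, 2)

print(vaf(0.038, 0.079))  # CON→SP→CI: 48.1, matching the 48.10 in Table 10
print(vaf(0.049, 0.082))  # COM→SP→CI: 59.76
print(vaf(0.028, 0.299))  # SP→TR→CI: 9.36, well below the partial-mediation band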
Table 11. Comparison between PLS-SEM and ANN results.
PLS Path | Path Coefficient (PLS-SEM) | Normalized Importance, % (ANN) | Rank (PLS-SEM) | Rank (ANN) | Remark
Model A (output: SP)
CON→SP | 0.139 | 57.41 | 4 | 4 | Match
RES→SP | 0.110 | 45.83 | 5 | 5 | Match
COM→SP | 0.182 | 90.78 | 2 | 1 | Not match
PLA→SP | 0.163 | 63.55 | 3 | 3 | Match
PER→SP | 0.195 | 84.98 | 1 | 2 | Not match
Model B (output: TR)
CON→TR | 0.179 | 95.66 | 1 | 1 | Match
RES→TR | 0.157 | 76.45 | 2 | 2 | Match
COM→TR | 0.133 | 63.64 | 4 | 4 | Match
PLA→TR | 0.141 | 69.41 | 3 | 3 | Match
Model C (output: CI)
SP→CI | 0.272 | 93.9 | 1 | 1 | Match
TR→CI | 0.209 | 83.66 | 2 | 2 | Match
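The Remark column of Table 11 simply compares two rankings: predictors ordered by PLS-SEM path coefficient versus by ANN normalized importance. A sketch of that comparison for Model A, using the reported values (not the authors' code):

```python
# Model A predictors of SP: PLS-SEM path coefficients and ANN importances (Table 11)
pls = {"CON": 0.139, "RES": 0.110, "COM": 0.182, "PLA": 0.163, "PER": 0.195}
ann = {"CON": 57.41, "RES": 45.83, "COM": 90.78, "PLA": 63.55, "PER": 84.98}

def ranks(scores):
    """Map each predictor to its rank (1 = largest score)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {name: i + 1 for i, name in enumerate(ordered)}

r_pls, r_ann = ranks(pls), ranks(ann)
for name in pls:
    remark = "Match" if r_pls[name] == r_ann[name] else "Not match"
    print(f"{name}->SP: PLS rank {r_pls[name]}, ANN rank {r_ann[name]}: {remark}")
```

This reproduces the two mismatches in Model A: COM→SP ranks second on path coefficient but first on ANN importance, and PER→SP the reverse, which is the nonlinear-versus-linear divergence discussed in the abstract.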
Share and Cite

MDPI and ACS Style

Zhang, K.; Luo, J.; Huang, Q.; Zhang, K.; Du, J. The Effect of Perceived Interactivity on Continuance Intention to Use AI Conversational Agents: A Two-Stage Hybrid PLS-ANN Approach. J. Theor. Appl. Electron. Commer. Res. 2025, 20, 255. https://doi.org/10.3390/jtaer20040255
