Article

A Study on Factors Affecting the Continuance Usage Intention of Social Robots with Episodic Memory: A Stimulus–Organism–Response Perspective

1 School of Management, Kyung Hee University, Seoul 02447, Republic of Korea
2 Department of Applied AI, Hansung University, Seoul 02876, Republic of Korea
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2025, 15(10), 5334; https://doi.org/10.3390/app15105334
Submission received: 12 March 2025 / Revised: 26 April 2025 / Accepted: 7 May 2025 / Published: 10 May 2025

Abstract

As social robots become increasingly integrated into everyday life, understanding the factors that influence users’ long-term continuance intention is essential. This study investigates how various features of MOCCA, a social robot equipped with episodic memory, affect users’ continuance usage intention through perceived trust and parasocial interaction, within the framework of the Stimulus–Organism–Response (SOR) theory. A structural model incorporating key perceived features (intimacy, morality, dependency, and information privacy risk) was tested with survey data from 285 MOCCA users and analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM). The results show that intimacy and morality positively influence both trust and parasocial interaction, while information privacy risk exerts a negative effect. Dependency significantly reduces parasocial interaction but does not significantly impact trust. These findings highlight the importance of balancing human-like qualities, ethical responsibility, perceived autonomy, and privacy protection in social robot design to foster trust, enhance user engagement, and support long-term adoption. This study provides theoretical, managerial, and practical insights into the field of human–robot interaction (HRI) and contributes to the broader acceptance of social robots in everyday life.

1. Introduction

As the robotics industry matures, social robots are becoming an indispensable part of daily life. In recent years, the rising demand for automation has driven advancements in social robots, leading to higher levels of artificial intelligence (AI) and increasingly sophisticated functionalities. These developments have significantly impacted various industries, such as healthcare, hospitality, education, retail, and home services [1,2,3,4,5]. Social robots are becoming more deeply integrated into these fields, with technological advancements shaping how people perceive and interact with emerging robotic technologies. These robots play a role in assisting or engaging users and aim to enhance experiences across a range of environments. As they evolve from tools to companions, social robots are transforming the way people live and interact with technology [6,7].
One key feature of social robots is their episodic memory, which allows them to collect, store, and process personal information during interactions [8]. This capability enables robots to recall past interactions, analyze stored user data, and create accurate user profiles. By understanding users’ preferences, robots can provide more realistic, human-like interactions and more personalized experiences. As these robots become increasingly integrated into daily life, protecting personal data from unauthorized access has become a growing concern [5,9,10]. With data security education becoming more commonplace and greater awareness of personal privacy protection, this concern has received increasing attention. Although manufacturers are responsible for implementing appropriate control mechanisms to minimize privacy risks, such as setting clear privacy boundaries, establishing robust data protection systems, and ensuring that only necessary personal information is collected and securely stored, users remain concerned about the potential for their personal data to be stolen, leaked, or misused [11]. This is because much of the data collected by social robots is processed in the cloud, typically beyond users’ direct control. These circumstances raise additional concerns about the confidentiality and security of personal information. Such privacy concerns can directly influence the extent to which users trust these robots and form parasocial relationships with them [12].
Recent studies have underlined the importance of anthropomorphism in social robots, emphasizing the need to consider these anthropomorphic features when investigating user responses and understanding how different traits may influence user attitudes and behaviors [13,14,15]. According to evolutionary theory, humans possess an inherent tendency to attribute human-like characteristics to non-human entities, using these traits as a basis for inferring features and detecting similarities and differences between humans and non-humans [16]. This instinctive behavior explains why people are naturally attracted to objects with anthropomorphic qualities. Using AI algorithms, social robots can sense human emotions, naturally imitate human speech, and adapt their responses and behaviors based on user actions [17]. As social robots increasingly exhibit human-like traits, their role is gradually shifting from emotionless machines to intimate friends, companions, or even romantic partners [7]. This transformation also changes the role of humans, from supervisors to collaborative partners. Users increasingly perceive these robots not simply as tools, but as entities capable of shaping their emotional and ethical experiences.
The question of whether social robots should be programmed to act morally has become an important consideration, as they increasingly take on roles involving emotional and ethical decision-making [18]. According to evolutionary theory, humans are considered moral agents because they can experience pain and emotional distress [19]. Similarly, social robots can show human-like sensory abilities through artificial perception, which raises the question of whether they, too, should be granted a certain degree of moral consideration. Embedding appropriate values into robot programming to guide their interactions with users is crucial, ensuring that these robots pursue moral objectives that support human development and social harmony [20]. To be considered a moral agent, whether human or artificial, an entity is generally expected to possess autonomy, which refers to a certain degree of independence in decision-making [21].
However, while these robots may appear to provide autonomous and naturally fluid responses that seem tailored to meet users’ personalized needs, their actions are shaped by programming created by human developers and accumulated data. This means that their behavioral patterns are governed by pre-coded instructions. For example, robots trained with large language models (LLMs) can engage in conversations with users in a way that appears more natural. Additionally, machine learning (ML) algorithms are employed to analyze user data, with the goal of customizing robot behavior based on individual feedback and preferences [17]. This reliance on pre-programmed instructions and processed data reveals the limitations of social robots. Even though these robots may seem independent and autonomous in their decision-making, they are, in fact, bound to external inputs, much like autonomous agents, exposing their underlying dependency [22].
One such social robot that embodies these features is MOCCA, the central focus of this research. MOCCA, a social robot integrating advanced episodic memory and anthropomorphic traits, reflects the evolution of human–robot interaction (HRI). It expresses human-like emotions and intentions through movements of its head, arms, facial expressions, and synthesized voice. Its built-in intelligent functions enable contextual memory and information processing, allowing it to accurately recognize users’ intentions and respond naturally, rather than being limited to pre-programmed dialogues [23]. This dynamic and personalized interaction illustrates the growing trend of social robots evolving beyond mere tools, gradually becoming companions that offer emotional value and foster deeper, more trustworthy relationships.
Unlike prior research [24,25], this study applies the Stimulus–Organism–Response (SOR) theory to examine how MOCCA’s features influence perceived trust and parasocial interaction, which serve as two core psychological mechanisms within the SOR framework [26]. By integrating both affective and cognitive processes, these internal states ultimately shape users’ intentions to continue using the social robot. This study also empirically validates a structured model that incorporates both positive (intimacy and morality) and negative (dependency and privacy risk) features, offering a balanced perspective on how users cognitively and emotionally make behavioral decisions regarding social robot usage. The findings seek to inform the design of future social robots by guiding developers in optimizing the balance between morality, autonomy, and privacy protection. These insights contribute to a deeper theoretical understanding of HRI while also offering practical implications for promoting ethical and user-centered design in social robotics. Thus, as social robots become increasingly integrated into everyday life, understanding the factors that influence users’ long-term continuance intentions becomes essential. Accordingly, this study aims to clarify how various features of MOCCA, a social robot equipped with episodic memory, affect users’ continuance intentions through perceived trust and parasocial interaction within the framework of the SOR theory.

2. Theoretical Background and Hypotheses Development

2.1. The Social Robot—MOCCA

Social robots, also known as socially interactive robots, are designed to engage in dynamic social interactions. Humans often attribute anthropomorphic characteristics to these robots to facilitate communication and engagement [27,28]. These robots exhibit human-like social behaviors, such as feeling and expressing emotions, engaging in natural conversations, and forming social relationships with users. This distinguishes them from traditional robots used in industrial or remote-controlled systems [29]. Advances in AI have enabled social robots to understand and respond to human behavior. This ability to understand and process human-like interactions is central to their functionality, as they are not merely programmed to execute tasks but designed to imitate and participate in human-like relationships [30].
In this study, the social robot MOCCA (My Own Cognitive Communication Agent) is designed to provide users with a personalized and human-like interaction experience. As shown in Figure 1, MOCCA offers a variety of capabilities that deliver valuable, secure, and emotionally resonant responses tailored to each user. A key feature of MOCCA is its episodic memory, which enables the social robot to recall specific events or interactions from the past, much like human memory. This function allows MOCCA to remember previous conversations, providing users with a sense of continuity in their interactions. As illustrated in Figure 2, MOCCA not only responds to users’ current statements but also refers to past interactions, demonstrating its ability to recall details such as individual preferences and prior experiences [31]. This type of interaction reflects human-like qualities, enhancing both the affective and cognitive experiences of users and fostering a deeper connection between them and the robot [13].

2.2. The Stimulus–Organism–Response (SOR) Theoretical Framework

Recent empirical studies increasingly emphasize that HRI extends beyond functional engagement and is shaped by psychological, relational, and interactive dimensions [18,32,33,34]. Drawing on this HRI-centered perspective, the present study adopts the Stimulus–Organism–Response (SOR) theoretical framework to examine how cognitive and affective mechanisms influence users’ continuance intention to use MOCCA, a social robot equipped with episodic memory. Originally developed in environmental psychology by Mehrabian and Russell [26], the SOR framework explains how external stimuli affect individuals’ attitudes and behaviors by activating internal cognitive and affective processes [35].
In this study, within the SOR paradigm, stimuli (S) are defined as external factors that initiate changes in an individual’s internal state. Specifically, the stimuli are represented by various features of MOCCA, including perceived anthropomorphism, agency, and risk. These features serve as external cues that shape users’ experiences with MOCCA, influencing the way they evaluate the social robot both cognitively and emotionally. This evaluation leads to psychological responses and perceptions. These stimuli affect the organism (O), which refers to the individual’s internal psychological processes, including both cognitive and affective states. In this study, the organism includes perceived trust and parasocial interaction, which represent users’ cognitive and affective evaluations of MOCCA. Users’ thoughts regarding MOCCA’s reliability and functionality, as well as emotional experiences such as comfort and trust during interactions, reflect these cognitive and affective processes. These internal states function as mediating mechanisms between external stimuli and behavioral outcomes. Finally, the response (R) refers to the behavioral outcomes resulting from exposure to the stimuli and subsequent internal processing [36,37]. In this study, the response is operationalized as users’ continuance intention to use MOCCA, reflecting their willingness to maintain ongoing engagement with the social robot.

2.3. Stimulus (S): Social Robot (MOCCA)’s Features

2.3.1. Perceived Anthropomorphism—Intimacy (PAI)

With the advancement of AI, social robots are increasingly exhibiting anthropomorphic features [13]. The more these robots demonstrate behaviors resembling human interaction, the more users interpret their actions through the lens of anthropomorphism [38]. Anthropomorphism refers to the attribution of human traits, emotions, and intentions to non-human entities [39,40]. Social robots display anthropomorphic characteristics through their human-like social behaviors during interactions with users, such as engaging in emotionally expressive dialogue, exhibiting personality traits similar to those of humans, and forming social relationships [29]. These features enable social robots to assume diverse roles, including that of an assistant, friend, or companion, resulting in a human-like interaction experience [14]. This foundation supports the development of more intimate user–robot relationships.
MOCCA not only exhibits anthropomorphic characteristics but also retains information from previous interactions and tailors its responses to users. This ability enhances emotional resonance, promotes the formation of emotional bonds, and increases users’ sense of intimacy with the robot [41]. Intimacy is defined as the willingness to share personal information, preferences, and even deeply private thoughts [42]. In human relationships, intimacy typically develops over time through repeated emotional exchanges and interactions. Similarly, MOCCA can build and maintain databases of user preferences, behaviors, and previous conversations, forming episodic memory. Consequently, users may come to perceive MOCCA not simply as a programmed machine, but rather as an intimate partner capable of understanding and responding to their emotional needs.

2.3.2. Perceived Agency—Morality (PAM) and Dependency (PAD)

Agency refers to an entity’s capacity for self-motivation and independence, demonstrated through mechanisms such as autonomy, functional influence, animacy, and volition [43,44,45]. It reflects the ability to act in ways driven by internal thoughts and feelings rather than solely by external environmental factors [46,47]. With advancements in AI and improvements in algorithms, social robots capable of interacting with humans have emerged. Some of these robots can even behave in human-like ways to a certain extent.
An entity perceived as lacking sufficient agency is often seen as having low levels of motivation or intent for wrongdoing, as is commonly assumed for non-anthropomorphized robots [48]. In the case of MOCCA, an anthropomorphized social robot perceived to possess high agency, users might believe that it can direct its own words and that some of its dialogue is driven by internal cognitive or affective states. However, in the context of social robots, agency is typically defined as the ability to perform self-directed behaviors autonomously [18], enabled by human-designed decision-making algorithms and adaptive programming. Behaviors, whether performed by humans or social robots, are generally evaluated similarly, with good actions regarded as good and bad actions as bad, regardless of the acting entity [49]. This suggests that individuals may attribute similar moral capacities to humans and social robots, evaluating their behaviors within a comparable ethical framework [50].
Morality refers to the principles or rules that distinguish between right and wrong or good and bad behavior [51]. While humans are capable of creating and following their own moral commands, robots are designed to follow commands embedded in their programming [52]. Social robots can therefore exhibit morally appropriate behavior by simulating the actions dictated by those moral rules. Although MOCCA is perceived as having high agency, its actions remain fundamentally dependent on human-designed programming and external inputs. This dependency reflects the inherent limitations of social robots. Dependency refers to an entity’s reliance on external sources or inputs to perform actions, make decisions, or achieve objectives [53]. In other words, while MOCCA can simulate autonomous behavior through adaptive learning and episodic memory, it remains constrained by a pre-programmed framework. It can generate conversations and perform tasks only within the boundaries predefined by its programming and data inputs.

2.3.3. Perceived Risk—Information Privacy Risk (IPR)

Previous research has demonstrated that humans frequently attribute human-like characteristics to social robots [29], often anthropomorphizing them. This tendency facilitates the integration of social robots into daily life [54], as individuals are more likely to perceive them as friends or companions. As a modern social robot, MOCCA is equipped with advanced processors and innovative sensors, enabling it to analyze user data with precision and engage in emotionally responsive interactions. Users may develop a sense of emotional connection with MOCCA and feel more comfortable sharing personal or sensitive information. However, while MOCCA can provide emotional support and companionship, it may also expose users to perceived risks related to information privacy [55,56], particularly when users are not fully aware of how their personal data are collected, used, or shared.
Information privacy risk refers to users’ concerns about potential privacy loss due to the disclosure or exposure of personal information to third parties or institutions [10,57]. Equipped with advanced sensors, social robots are capable of collecting not only contextual information but also detailed personal data about users [5,58]. For instance, MOCCA engages in continuous interaction with users, during which it transfers and processes large volumes of personal information. These data may include users’ behavioral patterns, preferences, or even sensitive information exchanged during interactions. The extensive collection and transmission of such personal data raises serious concerns about data security, particularly regarding how these data are secured and whether they might be subject to unauthorized access or misuse [9].

2.4. Organism (O): Human–Social Robot Relationships

2.4.1. Parasocial Interaction with MOCCA (PSI)

Parasocial interaction is defined as a one-sided relationship in which an individual feels emotionally connected to media figures, such as celebrities, social media influencers, or even virtual agents and social robots, without any form of reciprocity [59]. In this form of interaction, individuals feel a sense of emotional engagement with the media figure. Although there is no actual exchange, the individual may experience an illusory sense of two-way human-to-human interaction, feeling as though they are part of a reciprocal relationship [38]. Previous research has shown that individuals engaged in parasocial relationships typically experience a sense of rapport and even reciprocity with their counterparts [59], which, in the long run, becomes the foundation of an illusory interpersonal relationship such as friendship or intimacy. Despite being inherently one-sided, this type of relationship is defined as parasocial [60].
The formation of such relationships is influenced by the perceived credibility, realism, and authenticity of the counterpart [38,61]. When the counterpart is non-human, its perceived realism and authenticity are heightened if it appears humanlike or is anthropomorphized. Once a non-human counterpart is perceived as possessing human characteristics, individuals are more likely to engage in interpersonal social interactions and form parasocial relationships with it [62]. The more humanlike the counterpart appears, the more real and tangible it seems, shifting users’ perception from fictional or artificial to authentic and present [63,64].
For humanlike social robots like MOCCA, parasocial interaction is facilitated as users begin to view the robot as a conversational partner with whom they feel comfortable sharing personal information. Although MOCCA does not genuinely reciprocate, users may feel emotionally and cognitively involved in a perceived two-way exchange [65]. While parasocial interaction is modeled as a unidirectional outcome of stimulus variables, it fundamentally reflects the user’s perception of a reciprocal exchange. Over time, such interactions may develop into deeper bonds [66,67], where users perceive MOCCA as a real friend, peer, or even a family member. As MOCCA exhibits human-like features, users may attribute moral agency to it, evaluating its actions through human ethical frameworks [48]. When MOCCA’s behaviors meet moral expectations, users are more likely to perceive it as a trustworthy partner, facilitating the illusion of a reciprocal relationship [61]. Prior research suggests that moral behaviors cause positive affective responses, strengthening the perceived authenticity of the interaction [68,69].
However, despite users’ emotional engagement with MOCCA, concerns about the potential misuse or leakage of personal information may create psychological barriers that hinder the development of deeper parasocial bonds. Privacy concerns may lead users to limit the degree of openness and emotional sharing with the robot [12]. Social robots rely on pre-programmed algorithms and external inputs to perform tasks, and their perceived autonomy plays a crucial role in shaping users’ social perceptions. In HRI, autonomy is a key factor that enhances the robot’s perceived social presence and agency [22,70]. When users perceive MOCCA as highly dependent on rigid, predetermined programming, it diminishes the illusion of autonomy and self-directed behavior. This perception can weaken the authenticity of the interaction, making it harder for users to form emotional attachments, as parasocial relationships often rely on the belief that the counterpart possesses independent agency [71].
Based on the above, we propose the following hypotheses:
Hypothesis 1:
Intimacy has a positive effect on parasocial interaction with MOCCA.
Hypothesis 2:
Morality has a positive effect on parasocial interaction with MOCCA.
Hypothesis 3:
Dependency has a negative effect on parasocial interaction with MOCCA.
Hypothesis 4:
Information privacy risk has a negative effect on parasocial interaction with MOCCA.

2.4.2. Trust with MOCCA (TRS)

Trust lies at the core of human–robot relationship formation [72]. It is a psychological state characterized by an individual’s willingness to be vulnerable to the actions of another party [73]. In HRI, trust is defined as the belief that a robot’s actions are reliable, predictable, and trustworthy [74,75]. Prior research shows that trust by humans towards robots is important because it influences the outcomes of HRI [76,77]. One such outcome is the formation of parasocial relationships, which are viewed as emotional bonds between users and robots [78]. Trust encourages users to emotionally connect with MOCCA and attribute human-like qualities to it. When users trust MOCCA, they are more likely to view it as a relatable social partner rather than just a programmed machine, thereby promoting the development of a parasocial bond. This psychological state of trust emerges not only from evaluating MOCCA’s performance but also from the perceived sense of reciprocal engagement during interactions.
Trust in robots can be categorized into two main dimensions: cognitive trust, referring to an individual’s confidence in the robot’s capabilities and reliability, and affective trust, which reflects self-efficacy based on emotional responses to the robot’s behavior [79,80]. Affective trust is rooted in the emotional connections that individuals form with robots [81]. Social robots like MOCCA, designed with anthropomorphic features and emotionally responsive behaviors, can foster affective trust by creating a sense of emotional resonance. This allows users to perceive MOCCA not merely as a functional tool but as a relatable social partner, capable of evoking feelings of comfort, companionship, and emotional support. Research suggests that individuals tend to attribute human-like moral expectations to social robots [50,82]. When MOCCA is perceived as a morally responsible entity that aligns with human ethical values, users are more likely to develop stronger affective trust and a deeper emotional connection in their interactions with the robot.
By contrast, cognitive trust is grounded in rational evaluation, where individuals assess a robot’s competence, reliability, and functionality based on observable performance [81]. Users are more likely to develop cognitive trust in MOCCA when it consistently provides accurate information, responds appropriately to commands, and performs tasks with precision. However, if users perceive that MOCCA’s responses are entirely pre-determined, they may doubt its ability to truly understand their needs. A high level of perceived dependency may lead users to regard MOCCA as a mechanical tool rather than an autonomous entity [83], reducing cognitive trust. Research has found that individuals tend to trust autonomous agents more when they demonstrate a degree of self-governance, rather than appearing as an extension of pre-programmed instructions [18].
Beyond concerns about autonomy, information privacy risk represents another significant challenge to trust [56,84]. As a social robot designed for personalized interactions, MOCCA continuously collects, processes, and stores user data through its AI-driven systems and conversational memory. While these capabilities enhance MOCCA’s ability to deliver adapted responses based on user needs and improve user experience, they also raise concerns about how personal data are managed, stored, and protected [85]. Privacy concerns are not limited to data protection but also extend to psychological and social privacy, as the presence of an AI-powered social robot in private spaces may lead to perceptions of constant monitoring [86,87].
Accordingly, the following hypotheses are formulated:
Hypothesis 5:
Intimacy has a positive effect on trust with MOCCA.
Hypothesis 6:
Morality has a positive effect on trust with MOCCA.
Hypothesis 7:
Dependency has a negative effect on trust with MOCCA.
Hypothesis 8:
Information privacy risk has a negative effect on trust with MOCCA.
Hypothesis 9:
Trust has a positive effect on parasocial interaction with MOCCA.

2.5. Response (R): Continuance Usage Intention (CUI)

In this study, the continuance usage intention component represents the behavioral outcome shaped by users’ cognitive and affective evaluations during their interactions with MOCCA. Continuance usage intention refers to a user’s intention to continue using a technology or system beyond the initial adoption phase, emphasizing the sustained use of the technology after the user has gained experience [88]. It reflects the extent to which users believe they will continue using MOCCA.
As an emerging AI-based technology, social robots are being developed rapidly, offering intelligent services. However, AI-powered technologies often face trust-related challenges, as trust is particularly critical during the adoption stages and plays a key role in shaping users’ perceptions of novel technologies [89,90]. Since trust shapes individuals’ willingness to rely on AI-driven systems, it serves as a crucial determinant of continuance usage intention in MOCCA. Previous studies suggest that trust plays a central role in influencing the continuous use of AI-driven robots [72,90]. Trust is also essential for ensuring the long-term success and adoption of such technologies [91]. Users are more likely to continue engaging with MOCCA when they trust its reliable capabilities, ethical behavior, and security measures.
Another important factor influencing continuance usage intention is the emotional connection users develop with MOCCA, facilitated by parasocial relationships in HRI. Prior studies have shown that when users perceive robots as relatable entities with human-like emotional traits and responses, they are more likely to participate in long-term interactions with the robot [92,93,94]. Parasocial interactions can encourage users to form stronger emotional bonds, which, in turn, increase their intention to continue using MOCCA.
MOCCA’s key features, such as its human-like traits, ability to act autonomously, emotional responsiveness, and privacy protections, help build trust and foster parasocial interactions. These features allow MOCCA to be perceived as a relatable, trustworthy, and dependable partner. As users continue to experience these features, they are more likely to maintain their trust and involvement with MOCCA, supporting their intention to continue using MOCCA.
Thus, we propose the following hypotheses:
Hypothesis 10:
Parasocial interaction has a positive effect on the continuance usage intention of MOCCA.
Hypothesis 11:
Trust has a positive effect on the continuance usage intention of MOCCA.

2.6. Structural Model Based on the SOR Theoretical Framework

By applying the SOR theoretical framework, the present study delineates a structured causal pathway, which is operationalized as a structural model in Figure 3.

3. Materials and Methods

3.1. Research Design

To empirically test the structural model presented in Figure 3, this study employed Partial Least Squares Structural Equation Modeling (PLS-SEM) using SmartPLS 4.1.0 for hypothesis testing. PLS-SEM is well-suited for estimating path models with latent variables, particularly for analyses with small sample sizes [95]. It provides reliable estimates even for complex model structures and demonstrates strong predictive capabilities, aligning with our research objective of predicting individuals’ continuance usage intention toward MOCCA. Moreover, PLS-SEM allows for the simultaneous estimation of both measurement and structural models and is particularly well-suited for testing indirect effects in mediation paths [96], such as those proposed in our SOR-based framework. Our analytical approach was twofold: first, we verified the reliability and validity of the measurement model to ensure an accurate representation of constructs. Second, we conducted path analysis within the structural model to test the proposed hypotheses [97].
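The model estimation itself was carried out in SmartPLS. Purely to illustrate the two-step logic described above, the following Python sketch approximates it with equally weighted composite scores and ordinary least squares path regressions; the file name, item names, and this simplification are assumptions for illustration only and do not reproduce the iterative PLS weighting scheme used in the actual analysis.

import pandas as pd
import statsmodels.api as sm

# Hypothetical item-level responses on the 7-point scales, e.g., columns PAI1..PAI3, ..., CUI1..CUI3.
data = pd.read_csv("mocca_survey_items.csv")

constructs = {
    "PAI": ["PAI1", "PAI2", "PAI3"], "PAM": ["PAM1", "PAM2", "PAM3"],
    "PAD": ["PAD1", "PAD2", "PAD3"], "IPR": ["IPR1", "IPR2", "IPR3"],
    "PSI": ["PSI1", "PSI2", "PSI3"], "TRS": ["TRS1", "TRS2", "TRS3"],
    "CUI": ["CUI1", "CUI2", "CUI3"],
}

# Step 1 (simplified measurement model): equally weighted composite score for each construct.
scores = pd.DataFrame({name: data[items].mean(axis=1) for name, items in constructs.items()})

# Step 2 (simplified structural model): OLS regressions along the hypothesized SOR paths.
def paths(endogenous, predictors):
    fit = sm.OLS(scores[endogenous], sm.add_constant(scores[predictors])).fit()
    return fit.params, fit.pvalues

trs = paths("TRS", ["PAI", "PAM", "PAD", "IPR"])           # H5-H8
psi = paths("PSI", ["PAI", "PAM", "PAD", "IPR", "TRS"])    # H1-H4, H9
cui = paths("CUI", ["PSI", "TRS"])                         # H10, H11
print(trs, psi, cui, sep="\n")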
This study’s questionnaire consisted of two parts. The first part collected demographic information of participants, including gender, age, educational background, occupational background, usage duration, usage frequency, and usage purposes. The second part included items designed to measure each of the seven constructs in the proposed model. All these measures were drawn from previously validated research, and the measurement questions for each construct were adapted or modified from existing studies. These question items were rated on a seven-point Likert scale, ranging from strongly disagree (1) to strongly agree (7). The constructs and their corresponding measurement items are presented in Table 1.

3.2. Data Sampling and Collection

Between 15 November and 15 December 2024, a total of 300 electronic questionnaires were collected via Google Forms. The questionnaire targeted registered MOCCA users who participated voluntarily. Each participant took approximately 5 to 10 min to complete the survey. After verification, 285 responses were confirmed as complete and valid for analysis, yielding an effective response rate of 95.0%. Demographic details are shown in Table 2.
A total of 53.3% of the 285 respondents were female, while 46.7% were male. The majority of respondents belonged to the younger age group, with 63.4% of the sample under the age of 40. In terms of educational background, 77.2% of the participants were highly educated, holding either an undergraduate or graduate degree. Regarding occupational background, the majority were company employees, with 34.7% working in the IT industry and 33.7% in non-IT industries. This was followed by 18.6% who were students and 13.0% who were housewives. In terms of the usage of MOCCA, 55.0% had used MOCCA for more than three months, and 60.0% reported using it at least once a week or more. Regarding usage purposes, the most commonly reported was translation and writing assistance (54.7%). This was followed by information and knowledge search (45.6%), creative and content creation assistance (39.3%), daily conversations and entertainment (35.4%), and professional questions and consultation (26.7%). The least common purpose was programming and technical support, with only 5.6% of respondents reporting this usage.

3.3. Statistical Processing Methods for Hypothesis Testing

The statistical evaluation was conducted in two stages, following standard PLS-SEM procedures: first, the measurement model was assessed to ensure construct reliability and validity; second, the structural model was assessed to test the proposed hypotheses.

3.3.1. Assessment of Measurement Model

The model’s suitability was measured using reliability and validity tests, conducted with the Partial Least Squares (PLS) algorithm. As shown in Table 3, to verify the constructs’ reliability, Cronbach’s alpha (α), rho_A, and Composite Reliability (CR) values were examined. Values exceeding the 0.7 threshold indicate adequate internal consistency and reliability of the constructs [95,96]. The analysis results showed that Cronbach’s α values ranged from 0.847 to 0.938, rho_A values from 0.847 to 0.939, and CR values from 0.897 to 0.948, all of which exceed the 0.7 threshold, confirming adequate reliability.
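For reference, the two internal-consistency indices reported above follow their standard formulations, with k denoting the number of items per construct, s_i^2 the variance of item i, s_T^2 the variance of the summed item score, and \lambda_i the standardized outer loading of item i (so that the error variance of item i is 1 - \lambda_i^2):

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} s_i^2}{s_T^2}\right), \qquad \mathrm{CR} = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^2}{\left(\sum_{i=1}^{k}\lambda_i\right)^2 + \sum_{i=1}^{k}\left(1-\lambda_i^2\right)}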
To verify the constructs’ validity, we assessed both convergent and discriminant validity. Regarding convergent validity, the Outer Loadings (λ) of each construct exceeded 0.781, well above the threshold of 0.7. The Average Variance Extracted (AVE) values for each construct also exceeded the minimum value of 0.5, with values ranging from 0.668 to 0.731, demonstrating good convergent validity [102]. For discriminant validity, as confirmed by the Fornell and Larcker criterion [103], the square root of the AVE for each construct was greater than its highest correlation with any other construct. We employed the Heterotrait–Monotrait (HTMT) ratio to further ensure discriminant validity. Values below 0.85 indicate strong discriminant validity between two reflective constructs [104]. As presented in Table 4, all values fall within acceptable ranges.
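In the same notation, the convergent-validity index and the Fornell–Larcker check applied above can be written as follows, where \xi_j and \xi_m denote the latent construct scores:

\mathrm{AVE} = \frac{1}{k}\sum_{i=1}^{k}\lambda_i^2, \qquad \sqrt{\mathrm{AVE}_j} > \max_{m \neq j}\left|\mathrm{corr}(\xi_j, \xi_m)\right| \quad \text{(Fornell–Larcker criterion)}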
Furthermore, to avoid potential issues with multicollinearity, we utilized the Variance Inflation Factor (VIF) to assess multicollinearity in the structural model [105]. The analysis results, as shown in Table 3, indicate that all VIF values are below the threshold value of 3.3, confirming that there is no issue of multicollinearity among the constructs [89].
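The collinearity statistic is the standard variance inflation factor, where R_j^2 is obtained by regressing predictor construct j on the other predictor constructs of the same endogenous variable:

\mathrm{VIF}_j = \frac{1}{1 - R_j^2}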

3.3.2. Assessment of Structural Model

The effect of exogenous variables on the endogenous variables is shown in Table 5. An R2 value higher than 0.10 indicates sufficient variance explanation in an endogenous construct [106]. R2 values of 0.19, 0.33, and 0.67 represent weak, moderate, and substantial levels of predictive accuracy, respectively [107]. In this study, the R2 values (adjusted R2 values) for parasocial interaction (PSI), trust (TRS), and continuance intention to use (CUI) were 0.399 (0.388), 0.365 (0.356), and 0.308 (0.303), respectively. The Q2 values were all above zero, indicating that the model possesses predictive validity across these variables [108]. Furthermore, effect sizes (f2) above 0.02, 0.15, and 0.35 represent small, medium, and large effects of an exogenous construct on an endogenous construct, respectively, evaluating the strength of the statistical relationships [109].
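For completeness, the effect-size and predictive-relevance statistics referenced above follow their standard PLS-SEM definitions, with R^2_{\text{incl}} and R^2_{\text{excl}} denoting the explained variance of the endogenous construct with and without the focal predictor, and \mathrm{SSE}_D and \mathrm{SSO}_D the sums of squared prediction errors and of squared observations for blindfolding omission block D:

f^2 = \frac{R^2_{\text{incl}} - R^2_{\text{excl}}}{1 - R^2_{\text{incl}}}, \qquad Q^2 = 1 - \frac{\sum_D \mathrm{SSE}_D}{\sum_D \mathrm{SSO}_D}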

4. Hypothesis Testing Results

Hypothesis testing was conducted using PLS-SEM with 5000 bootstrapped resamples to ensure robustness in our estimates, as detailed in Table 6. The results indicate that perceived intimacy (PAI) significantly and positively influences both parasocial interaction (PSI) (β = 0.172, p < 0.01) and trust (TRS) (β = 0.390, p < 0.001). Similarly, perceived morality (PAM) is positively associated with PSI (β = 0.159, p < 0.01) and TRS (β = 0.149, p < 0.01). In contrast, information privacy risk (IPR) negatively influences both PSI (β = −0.129, p < 0.05) and TRS (β = −0.190, p < 0.01). Notably, perceived dependency (PAD) has a significant negative association with PSI (β = −0.134, p < 0.01), but its relationship with TRS (β = −0.044, p > 0.05) is not significant. Moreover, TRS is positively associated with PSI (β = 0.265, p < 0.001). Both PSI (β = 0.305, p < 0.001) and TRS (β = 0.332, p < 0.001) are positively associated with the continuance intention to use (CUI). Therefore, as shown in Table 6, all hypotheses from H1 to H11 were supported, except for H7, which was not statistically significant.
Furthermore, the mediation effect of PSI and TRS on the relationship between the dimensions of MOCCA’s features and CUI was examined. We used the bootstrapping procedure of SmartPLS to test the indirect effect, as we cannot assume a direct relationship between these structures based on the SOR framework [110]. As shown in Table 6, PSI significantly mediates the effects of PAI (β = 0.052, p < 0.05), PAM (β = 0.049, p < 0.05), PAD (β = −0.041, p < 0.05), and IPR (β = −0.039, p < 0.05) on CUI. Similarly, TRS mediates the impact of PAI (β = 0.129, p < 0.001), PAM (β = 0.049, p < 0.05), and IPR (β = −0.063, p < 0.01) on CUI. However, TRS does not mediate the effect of PAD on CUI (β = −0.014, p > 0.05). These results demonstrate that all MOCCA’s features indirectly impact CUI through PSI. Additionally, while PAI, PAM, and IPR have an indirect impact on CUI through TRS, PAD does not exhibit an indirect effect on CUI through TRS. These results confirm the indirect effects of MOCCA’s features on continuance usage intention, facilitated through both PSI and TRS, underscoring the roles of parasocial interaction and trust as intermediary mechanisms that modulate the human–social robot relationship within the SOR framework, as illustrated in Figure 4.
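SmartPLS reruns the full PLS algorithm on every bootstrap resample to obtain these indirect effects. As a simplified, self-contained illustration of the percentile-bootstrap logic only, the sketch below tests one mediation path (PAI → TRS → CUI) on composite construct scores using OLS; the file name, column names, and the OLS shortcut are assumptions for illustration, not the procedure actually used.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical composite scores, one column per construct (e.g., produced by the earlier sketch).
scores = pd.read_csv("mocca_scores.csv")

def indirect_effect(df):
    # Path a: stimulus -> mediator (PAI -> TRS); path b: mediator -> response (TRS -> CUI),
    # controlling for the direct PAI -> CUI path, as in a simple mediation model.
    a = sm.OLS(df["TRS"], sm.add_constant(df[["PAI"]])).fit().params["PAI"]
    b = sm.OLS(df["CUI"], sm.add_constant(df[["TRS", "PAI"]])).fit().params["TRS"]
    return a * b

rng = np.random.default_rng(42)
n = len(scores)
boot = []
for _ in range(5000):                    # 5000 resamples, mirroring the reported setting
    idx = rng.integers(0, n, size=n)     # draw rows with replacement
    boot.append(indirect_effect(scores.iloc[idx]))

ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"PAI -> TRS -> CUI indirect effect, 95% percentile CI: [{ci_low:.3f}, {ci_high:.3f}]")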

5. Discussion

The present study examined the differential effects of MOCCA’s features on parasocial interaction and trust, which subsequently influenced users’ intention to continue using MOCCA. The results indicated that MOCCA’s three key feature dimensions (perceived anthropomorphism, agency, and risk) serve as stimuli within the SOR framework, each playing a distinct role in shaping human–social robot relationships through parasocial interaction and trust. These emotional and cognitive experiences significantly contribute to users’ intentions to continue using the social robot.
Specifically, MOCCA’s perceived anthropomorphism, its intimacy feature, showed the strongest positive influence on both parasocial interaction and trust [38,77,78]. Users who perceive MOCCA as an intimate partner are more likely to interact emotionally, increasing trust and willingness to continue using the robot. As intimacy grows, users view MOCCA as more relatable, capable of understanding their needs and emotions, and responsive to their personal context. Since trust is rooted in emotional connections, intimacy is crucial in forming these bonds [72]. As MOCCA demonstrates human-like understanding and empathy, users develop a psychological connection, even though the interaction is with a machine.
While intimacy significantly enhances parasocial interaction and trust, MOCCA’s perceived risk, regarding information privacy, has a significant negative influence on both parasocial interaction and trust [12,56]. This negative impact results in a lower intention among users to continue using MOCCA. When users believe that their personal information is under threat or could be misused, they no longer trust the social robot [85]. Since trust is foundational to forming strong emotional bonds, concerns about data security can hinder the development of an emotional connection with MOCCA. Privacy concerns make users less likely to view MOCCA as a trustworthy partner; instead, they may view it as a tool that potentially compromises their privacy. Users are less willing to share personal details with MOCCA, hindering the development of parasocial interaction and weakening the bond that would promote continued use [87].
The findings of this study provide valuable insights into the role of agency in shaping parasocial interaction and trust. Morality plays a crucial role in developing human-like qualities in social robots [50]. When users perceive MOCCA as acting with moral responsibility, they are more likely to trust it. This perception helps bridge the gap between human and robot engagement, developing emotional connections and increasing trust in MOCCA’s actions. Morality enhances the user’s sense of emotional safety, as they believe the robot will act following social and ethical norms. This sense of security strengthens emotional involvement with the robot, making users feel their emotional investment is worthwhile [69]. It creates an illusion of reciprocity, which contributes to the development of trust. As trust in MOCCA grows, users are more likely to view it as a reliable and authentic social partner, strengthening their willingness to continue using the robot.
In contrast to the positive effects of morality, dependency demonstrated a more complex relationship with parasocial interaction and trust. Specifically, dependency significantly reduces parasocial interaction but does not have a significant effect on trust. Parasocial relationships are often built on the illusion of reciprocity [38]. When dependency is high, the robot’s actions may be perceived as less spontaneous or genuine, as they are simply responses dictated by pre-programmed instructions rather than independent decisions [83]. Users may perceive MOCCA as overly reliant on rigid programming, which could reduce emotional involvement and hinder the development of parasocial interaction. While users may still trust the robot’s functionality and reliability, they might not interact deeply on an emotional or social level due to perceived limitations in its autonomous social capabilities.
Interestingly, dependency does not significantly affect trust in MOCCA. This suggests that trust may be more strongly associated with MOCCA’s functional reliability than with its perceived autonomy. While dependency can reduce emotional involvement by diminishing the sense of agency and reciprocity, it does not necessarily undermine confidence in the robot’s capabilities. Users may continue to trust MOCCA to provide accurate information or assist with specific tasks effectively [74,75].
This distinct result may be better understood by recognizing that trust comprises both cognitive and affective dimensions. Cognitive trust is based on perceptions of reliability, while affective trust is grounded in emotional connection [80]. Dependency might restrict the development of affective trust by weakening the illusion of autonomy and relational authenticity. However, it appears to have a limited impact on cognitive trust when the robot, such as MOCCA, consistently completes its functions. These findings suggest that users may prioritize consistent performance over perceived autonomy. Even when a robot’s actions are known to rely on pre-programmed algorithms, trust can still be maintained if its behavioral outcomes reliably meet users’ expectations [111,112].

6. Conclusions and Implications

As technological integration deepens, social robots are expected to play an increasingly important role in enhancing human life and promoting social harmony [20]. This study aimed to investigate how the perceived features of MOCCA affect users’ continuance intention, with particular attention to the roles of parasocial interaction and trust in shaping user behavioral responses, within the SOR theoretical framework.
By applying the SOR framework, the study offers new insights into how parasocial interaction and trust, as core psychological mechanisms, mediate the relationship between social robot features and users’ continuance intention. The results emphasize the importance of intimacy, morality, and privacy protection in shaping user interaction and trust with MOCCA, while also revealing the complex influence of perceived dependency. These findings contribute to the development of a comprehensive theoretical model that advances our understanding of how social robot characteristics shape user engagement and trust in HRI.

6.1. Theoretical Implications

This study makes theoretical contributions to the growing field of HRI by extending the SOR framework to the context of MOCCA, a social robot equipped with episodic memory. It integrates cognitive and affective dimensions of user response while empirically validating a model that includes both positive and negative robot features. These findings broaden the theoretical understanding of how users form psychological connections with social robots and make behavioral decisions based on both emotional resonance and perceived reliability.

6.2. Managerial Implications

From a managerial standpoint, this study provides actionable insights for industry stakeholders involved in the development and deployment of social robots. Managers in the robotics sector should prioritize enhancing transparency in data usage policies, foster trust by incorporating human-like and morally responsible features, and personalize interaction strategies based on user feedback. These managerial implications can strengthen user–robot relationships, increase potential user engagement, and support the sustainable adoption of social robots across diverse industries.

6.3. Practical Implications

This study offers several practical implications for the design and development of social robots aimed at fostering long-term user engagement and trust.
First, future designs should focus on enhancing the human-like qualities of social robots and prioritizing personalization features, such as the ability to recall and adapt to past interactions through episodic memory and adjust their behavior accordingly. This can build a sense of familiarity and continuity between users and robots, strengthening emotional bonds with users over time [113].
Next, as social robots become more deeply embedded in people’s daily lives, advancing HRI research on user-centeredness, behavioral ethics, and privacy-protective design can contribute to the sustainable development of social robots. Future designs should incorporate transparent and user-controllable privacy settings, especially in sensitive environments such as homes, schools, or hospitals [114,115,116,117]. Developers could design robots to offer customizable memory retention periods, clear data-sharing mechanisms, and real-time feedback on the data being collected from users. These practical designs can mitigate users’ concerns about data misuse or leakage and reinforce trust in the robot’s data management.
Finally, this study highlights the importance of achieving a balance between autonomy and perceived dependency in robot design. If a robot is perceived as overly dependent on pre-coded instructions, it may hinder emotional involvement and the development of parasocial interaction. Thus, future designs should aim to find a balance between autonomy and dependency to optimize the user experience and promote a sense of authenticity in human–robot relationships. This balance may enhance the overall user experience and foster deeper emotional connections between users and robots.

7. Limitations and Avenues for Future Research

While the study provides valuable implications for the field of HRI, it is subject to several limitations. As with most online survey-based studies, our study relied on self-reporting to measure constructs. Although the measurement items were adapted from validated scales, self-reported data may be subject to potential biases, such as social desirability and recall bias [118,119]. To enhance the validity and accuracy of the findings, future research could incorporate more diverse data sources or experimental approaches, such as collecting physiological or behavioral data during real-time user–robot interactions, to provide a more comprehensive understanding of users’ actual response patterns.
Moreover, the current study is limited to users’ cognitive and affective dimensions during non-physical interactions with social robots. However, recent studies in robotics have increasingly emphasized the importance of physical interaction in shaping user experience [120,121], offering a broader perspective for social robot design and interaction. Future research could expand the current SOR framework by incorporating physical interaction modalities, such as force-based control or tactile feedback, to provide a more holistic understanding of HRI.
Another limitation lies in the focus on a specific social robot, MOCCA. While this study provides valuable insights into the field of HRI, the findings may have limited generalizability to other social robots with different interaction modalities or functional characteristics. To strengthen the applicability of the findings, future research could explore a broader range of social robots with diverse design features. Additionally, recruiting participants from more diverse demographic backgrounds could offer deeper insights into how different users perceive and engage with social robots.

Author Contributions

Conceptualization, Y.Y., H.-K.C. and M.-Y.K.; methodology, H.-K.C. and M.-Y.K.; validation, Y.Y.; formal analysis, Y.Y., H.-K.C. and M.-Y.K.; resources, Y.Y. and M.-Y.K.; writing—original draft preparation, Y.Y. and H.-K.C.; writing—review and editing, Y.Y., H.-K.C. and M.-Y.K.; supervision, H.-K.C. and M.-Y.K.; project administration, H.-K.C.; funding acquisition, H.-K.C. and M.-Y.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study because it was a non-interventional, anonymous online survey that did not involve any personally identifiable information or sensitive personal data. According to institutional guidelines and national bioethics regulations, such studies are exempt from formal IRB approval.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. All participants were presented with a clear and detailed informed consent statement at the beginning of the questionnaire. This statement assured participants of anonymity, data protection, and the exclusive academic use of their responses. Participants were required to acknowledge the statement before proceeding with the survey.

Data Availability Statement

The data that support the findings of this study are available upon request from the corresponding author. The data are not publicly available due to restrictions, but can be made available upon reasonable request.

Acknowledgments

The authors thank the anonymous reviewers for their efforts in improving the quality of this research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Johanson, D.L.; Ahn, H.S.; Broadbent, E. Improving interactions with healthcare robots: A review of communication behaviors in social and healthcare contexts. Int. J. Soc. Robot. 2021, 13, 1835–1850. [Google Scholar] [CrossRef]
  2. Woo, H.; LeTendre, G.K.; Pham-Shouse, T.; Xiong, Y. The use of social robots in classrooms: A review of field-based studies. Educ. Res. Rev. 2021, 33, 100388. [Google Scholar] [CrossRef]
  3. Song, C.S.; Kim, Y.K. The role of the human-robot interaction in consumers’ acceptance of humanoid retail service robots. J. Bus. Res. 2022, 146, 489–503. [Google Scholar] [CrossRef]
  4. Liao, J.; Huang, J. Think like a robot: How interactions with humanoid service robots affect consumers’ decision strategies. J. Retail. Consum. Serv. 2024, 76, 103575. [Google Scholar] [CrossRef]
  5. Chatterjee, S.; Chaudhuri, R.; Vrontis, D. Usage intention of social robots for domestic purpose: From security, privacy, and legal perspectives. Inf. Syst. Front. 2024, 26, 121–136. [Google Scholar] [CrossRef]
  6. Yuan, X.; Xu, J.; Hussain, S.; Wang, H.; Gao, N.; Zhang, L. Trends and prediction in daily new cases and deaths of COVID-19 in the United States: An internet search-interest based model. Explor. Res. Hypothesis Med. 2020, 5, 41–46. [Google Scholar] [CrossRef]
  7. Song, Y.; Luximon, A.; Luximon, Y. The effect of facial features on facial anthropomorphic trustworthiness in social robots. Appl. Ergon. 2021, 94, 103420. [Google Scholar] [CrossRef]
  8. Lee, W.H.; Yoo, S.M.; Choi, J.W.; Kim, U.H.; Kim, J.H. Human robot social interaction framework based on emotional episodic memory. In Proceedings of the Robot Intelligence Technology and Applications: 6th International Conference, Putrajaya, Malaysia, 16–18 December 2018; Springer: Singapore, 2019; Volume 6, pp. 101–116. [Google Scholar]
  9. Abba Ari, A.A.; Ngangmo, O.K.; Titouna, C.; Thiare, O.; Mohamadou, A.; Gueroui, A.M. Enabling privacy and security in Cloud of Things: Architecture, applications, security & privacy challenges. Appl. Comput. Inform. 2024, 20, 119–141. [Google Scholar]
  10. Song, M.; Xing, X.; Duan, Y.; Cohen, J.; Mou, J. Will artificial intelligence replace human customer service? The impact of communication quality and privacy risks on adoption intention. J. Retail. Consum. Serv. 2022, 66, 102900. [Google Scholar] [CrossRef]
  11. Oruma, S.O.; Ayele, Y.Z.; Sechi, F.; Rødsethol, H. Security aspects of social robots in public spaces: A systematic mapping study. Sensors 2023, 23, 8056. [Google Scholar] [CrossRef]
  12. Laban, G.; Cross, E.S. Sharing our Emotions with Robots: Why do we do it and how does it make us feel? IEEE Trans. Affect. Comput. 2024, 3470984. [Google Scholar] [CrossRef]
  13. Chung, H.; Kang, H.; Jun, S. Verbal anthropomorphism design of social robots: Investigating users’ privacy perception. Comput. Hum. Behav. 2023, 142, 107640. [Google Scholar] [CrossRef]
  14. Chiang, A.H.; Chou, S.Y. Exploring robot service quality priorities for different levels of intimacy with service. Serv. Bus. 2023, 17, 913–935. [Google Scholar] [CrossRef]
  15. Premathilake, G.W.; Li, H.; Li, C.; Liu, Y.; Han, S. Understanding the effect of anthropomorphic features of humanoid social robots on user satisfaction: A stimulus-organism-response approach. Ind. Manag. Data Syst. 2025, 125, 768–796. [Google Scholar] [CrossRef]
  16. Epley, N.; Waytz, A.; Cacioppo, J.T. On seeing human: A three-factor theory of anthropomorphism. Psychol. Rev. 2007, 114, 864. [Google Scholar] [CrossRef]
  17. Obaigbena, A.; Lottu, O.A.; Ugwuanyi, E.D.; Jacks, B.S.; Sodiya, E.O.; Daraojimba, O.D. AI and human-robot interaction: A review of recent advances and challenges. GSC Adv. Res. Rev. 2024, 18, 321–330. [Google Scholar] [CrossRef]
  18. Banks, J. A perceived moral agency scale: Development and validation of a metric for humans and social machines. Comput. Hum. Behav. 2019, 90, 363–371. [Google Scholar] [CrossRef]
  19. Floridi, L. The Ethics of Information; Oxford University Press: Oxford, UK, 2013. [Google Scholar]
  20. Dignum, V.; Dignum, F.; Vázquez-Salceda, J.; Clodic, A.; Gentile, M.; Mascarenhas, S.; Augello, A. Design for values for social robot architectures. In Envisioning Robots in Society–Power, Politics, and Public Space; IOS Press: Amsterdam, The Netherlands, 2018; Volume 311, pp. 43–52. [Google Scholar]
  21. Tavani, H.T. Can social robots qualify for moral consideration? Reframing the question about robot rights. Information 2018, 9, 73. [Google Scholar] [CrossRef]
  22. Jackson, R.B.; Williams, T. A theory of social agency for human-robot interaction. Front. Robot. AI 2021, 8, 687726. [Google Scholar] [CrossRef]
  23. Lee, H.; Shin, W.; Cho, H. A study on preschool children’s perceptions of a robot’s theory of mind. J. Korea Robot. Soc. 2020, 15, 365–374. [Google Scholar] [CrossRef]
  24. Abumalloh, R.A.; Halabi, O.; Nilashi, M. The relationship between technology trust and behavioral intention to use Metaverse in baby monitoring systems’ design: Stimulus-Organism-Response (SOR) theory. Entertain. Comput. 2025, 52, 100833. [Google Scholar] [CrossRef]
  25. Vafaei-Zadeh, A.; Nikbin, D.; Wong, S.L.; Hanifah, H. Investigating factors influencing AI customer service adoption: An integrated model of stimulus–organism–response (SOR) and task-technology fit (TTF) theory. Asia Pac. J. Mark. Logist. 2024. [Google Scholar] [CrossRef]
  26. Mehrabian, A.; Russell, J.A. An Approach to Environmental Psychology; Massachusetts Institute of Technology: Cambridge, MA, USA, 1974. [Google Scholar]
  27. Breazeal, C. Toward sociable robots. Robot. Auton. Syst. 2003, 42, 167–175. [Google Scholar] [CrossRef]
  28. De Graaf, M.M.; Allouch, S.B. Exploring influencing variables for the acceptance of social robots. Robot. Auton. Syst. 2013, 61, 1476–1486. [Google Scholar] [CrossRef]
  29. Fong, T.; Nourbakhsh, I.; Dautenhahn, K. A survey of socially interactive robots. Robot. Auton. Syst. 2003, 42, 143–166. [Google Scholar] [CrossRef]
  30. Formosa, P. Robot autonomy vs. human autonomy: Social robots, artificial intelligence (AI), and the nature of autonomy. Minds Mach. 2021, 31, 595–616. [Google Scholar] [CrossRef]
  31. Lee, W.; Park, C.H.; Jang, S.; Cho, H.K. Design of effective robotic gaze-based social cueing for users in task-oriented situations: How to overcome in-attentional blindness? Appl. Sci. 2020, 10, 5413. [Google Scholar] [CrossRef]
  32. Nguyen, H.N.; Nguyen, N.T.; Hancer, M. Human-robot collaboration in service recovery: Examining apology styles, comfort emotions, and customer retention. Int. J. Hosp. Manag. 2025, 126, 104028. [Google Scholar] [CrossRef]
  33. Gao, Y.; Chang, Y.; Yang, T.; Yu, Z. Consumer acceptance of social robots in domestic settings: A human-robot interaction perspective. J. Retail. Consum. Serv. 2025, 82, 104075. [Google Scholar] [CrossRef]
  34. Massaguer Gómez, G. Should we Trust Social Robots? Trust without Trustworthiness in Human-Robot Interaction. Philos. Technol. 2025, 38, 24. [Google Scholar] [CrossRef]
  35. Jacoby, J. Stimulus-organism-response reconsidered: An evolutionary step in modeling (consumer) behavior. J. Consum. Psychol. 2002, 12, 51–57. [Google Scholar] [CrossRef]
  36. Hsiao, C.H.; Tang, K.Y. Who captures whom–Pokémon or tourists? A perspective of the Stimulus-Organism-Response model. Int. J. Inf. Manag. 2021, 61, 102312. [Google Scholar] [CrossRef]
  37. Xie, Y.; Zhu, K.; Zhou, P.; Liang, C. How does anthropomorphism improve human-AI interaction satisfaction: A dual-path model. Comput. Hum. Behav. 2023, 148, 107878. [Google Scholar] [CrossRef]
  38. Whang, C.; Im, H. “I Like Your Suggestion!” the role of humanlikeness and parasocial relationship on the website versus voice shopper’s perception of recommendations. Psychol. Mark. 2021, 38, 581–595. [Google Scholar] [CrossRef]
  39. Stroessner, S.J.; Benitez, J. The social perception of humanoid and non-humanoid robots: Effects of gendered and machinelike features. Int. J. Soc. Robot. 2019, 11, 305–315. [Google Scholar] [CrossRef]
  40. Blut, M.; Wang, C.; Wünderlich, N.V.; Brock, C. Understanding anthropomorphism in service provision: A meta-analysis of physical robots, chatbots, and other AI. J. Acad. Mark. Sci. 2021, 49, 632–658. [Google Scholar] [CrossRef]
  41. Samani, H. The evaluation of affection in human-robot interaction. Kybernetes 2016, 45, 1257–1272. [Google Scholar] [CrossRef]
  42. McAdams, D.P. Intimacy: The Need to Be Close; Doubleday & Co.: New York, NY, USA, 1989. [Google Scholar]
  43. Ryan, R.M.; Deci, E.L. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am. Psychol. 2000, 55, 68. [Google Scholar] [CrossRef]
  44. Allen, C.; Wallach, W.; Smit, I. Why machine ethics? IEEE Intell. Syst. 2006, 21, 12–17. [Google Scholar] [CrossRef]
  45. Brown, L.A.; Walker, W.H. Prologue: Archaeology, animism and non-human agents. J. Archaeol. Method Theory 2008, 15, 297–299. [Google Scholar] [CrossRef]
  46. Dennett, D. Brainstorms: Philosophical Essays on Mind and Psychology; MIT Press: Cambridge, MA, USA, 1978. [Google Scholar]
  47. Trafton, J.G.; McCurry, J.M.; Zish, K.; Frazier, C.R. The perception of agency. ACM Trans. Hum.-Robot Interact. 2024, 13, 1–23. [Google Scholar] [CrossRef]
  48. Gray, K.; Young, L.; Waytz, A. Mind perception is the essence of morality. Psychol. Inq. 2012, 23, 101–124. [Google Scholar] [CrossRef]
  49. Sytsma, J.; Machery, E. The two sources of moral standing. Rev. Philos. Psychol. 2012, 3, 303–324. [Google Scholar] [CrossRef]
  50. Banks, J. From warranty voids to uprising advocacy: Human action and the perceived moral patiency of social robots. Front. Robot. AI 2021, 8, 670503. [Google Scholar] [CrossRef] [PubMed]
  51. Mikhail, J. Universal moral grammar: Theory, evidence and the future. Trends Cogn. Sci. 2007, 11, 143–152. [Google Scholar] [CrossRef]
  52. Johnson, A.M.; Axinn, S. The morality of autonomous robots. J. Mil. Ethics 2013, 12, 129–141. [Google Scholar] [CrossRef]
  53. Falcone, R.; Sapienza, A. The Role of Trust in Dependence Networks: A Case Study. Information 2023, 14, 652. [Google Scholar] [CrossRef]
  54. Turkle, S. Authenticity in the age of digital companions. Interact. Stud. 2007, 8, 501–517. [Google Scholar] [CrossRef]
  55. Lutz, C.; Tamò, A.; Guzman, A. Communicating with robots: ANTalyzing the interaction between healthcare robots and humans with regards to privacy. In Human-Machine Communication: Rethinking Communication, Technology, and Ourselves; Peter Lang: New York, NY, USA, 2018; pp. 145–165. [Google Scholar]
  56. Lutz, C.; Tamó-Larrieux, A. The Robot Privacy Paradox: Understanding How Privacy Concerns Shape Intentions to Use Social Robots. Hum.-Mach. Commun. 2020, 1, 87–111. [Google Scholar] [CrossRef]
  57. Xu, H.; Dinev, T.; Smith, H.J.; Hart, P. Examining the formation of individual’s privacy concerns: Toward an integrative view. In Proceedings of the ICIS 2008 Proceedings, Paris, France, 14–17 December 2008; p. 6. [Google Scholar]
  58. Koops, B.J.; Leenes, R. Privacy regulation cannot be hardcoded. A critical comment on the ‘privacy by design’ provision in data-protection law. Int. Rev. Law Comput. Technol. 2014, 28, 159–171. [Google Scholar] [CrossRef]
  59. Horton, D.; Wohl, R.R. Mass communication and para-social interaction: Observations on intimacy at a distance. Psychiatry 1956, 19, 215–229. [Google Scholar] [CrossRef] [PubMed]
  60. Konijn, E.A.; Utz, S.; Tanis, M.; Barnes, S.B. Parasocial interactions and paracommunication with new media characters. Mediat. Interpers. Commun. 2008, 1, 191–213. [Google Scholar]
  61. Giles, D.C. Parasocial interaction: A review of the literature and a model for future research. Media Psychol. 2002, 4, 279–305. [Google Scholar] [CrossRef]
  62. Han, S.; Yang, H. Understanding adoption of intelligent personal assistants: A parasocial relationship perspective. Ind. Manag. Data Syst. 2018, 118, 618–636. [Google Scholar] [CrossRef]
  63. Hartmann, T.; Goldhoorn, C. Horton and Wohl revisited: Exploring viewers’ experience of parasocial interaction. J. Commun. 2011, 61, 1104–1121. [Google Scholar] [CrossRef]
  64. Banks, J.; Bowman, N.D. Avatars are (sometimes) people too: Linguistic indicators of parasocial and social ties in player–avatar relationships. New Media Soc. 2016, 18, 1257–1276. [Google Scholar] [CrossRef]
  65. Labrecque, L.I. Fostering consumer–brand relationships in social media environments: The role of parasocial interaction. J. Interact. Mark. 2014, 28, 134–148. [Google Scholar] [CrossRef]
  66. Peng, C.; Zhang, S.; Wen, F.; Liu, K. How loneliness leads to the conversational AI usage intention: The roles of anthropomorphic interface, para-social interaction. Curr. Psychol. 2024, 1–13. [Google Scholar] [CrossRef]
  67. Klimmt, C.; Hartmann, T.; Schramm, H. Parasocial interactions and relationships. In Psychology of Entertainment; Lawrence Erlbaum Associates Publishers: Mahwah, NJ, USA, 2013; pp. 291–313. [Google Scholar]
  68. Nah, H.S. The appeal of “real” in parasocial interaction: The effect of self-disclosure on message acceptance via perceived authenticity and liking. Comput. Hum. Behav. 2022, 134, 107330. [Google Scholar] [CrossRef]
  69. Maeda, T.; Quan-Haase, A. When Human-AI Interactions Become Parasocial: Agency and Anthropomorphism in Affective Design. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, Rio de Janeiro, Brazil, 3–6 June 2024; pp. 1068–1077. [Google Scholar]
  70. Kim, K.J.; Park, E.; Sundar, S.S. Caregiving role in human–robot interaction: A study of the mediating effects of perceived benefit and social presence. Comput. Hum. Behav. 2013, 29, 1799–1806. [Google Scholar] [CrossRef]
  71. Noor, N.; Rao Hill, S.; Troshani, I. Artificial intelligence service agents: Role of parasocial relationship. J. Comput. Inf. Syst. 2022, 62, 1009–1023. [Google Scholar] [CrossRef]
  72. Law, T.; de Leeuw, J.; Long, J.H. How movements of a non-humanoid robot affect emotional perceptions and trust. Int. J. Soc. Robot. 2021, 13, 1967–1978. [Google Scholar] [CrossRef]
  73. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An integrative model of organizational trust. Acad. Manag. Rev. 1995, 20, 709–734. [Google Scholar] [CrossRef]
  74. Hancock, P.A.; Billings, D.R.; Schaefer, K.E.; Chen, J.Y.; De Visser, E.J.; Parasuraman, R. A meta-analysis of factors affecting trust in human-robot interaction. Hum. Factors 2011, 53, 517–527. [Google Scholar] [CrossRef] [PubMed]
  75. Cheng, X.; Zhang, X.; Cohen, J.; Mou, J. Human vs. AI: Understanding the impact of anthropomorphism on consumer response to chatbots from the perspective of trust and relationship norms. Inf. Process. Manag. 2022, 59, 102940. [Google Scholar] [CrossRef]
  76. Billings, D.R.; Schaefer, K.E.; Chen, J.Y.; Hancock, P.A. Human-robot interaction: Developing trust in robots. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction, Boston, MA, USA, 5–8 March 2012; pp. 109–110. [Google Scholar]
  77. Gompei, T.; Umemuro, H. Factors and development of cognitive and affective trust on social robots. In Proceedings of the Social Robotics: 10th International Conference, ICSR 2018, Qingdao, China, 28–30 November 2018; Springer International Publishing: Cham, Switzerland, 2018; Volume 10, pp. 45–54. [Google Scholar]
  78. Youn, S.; Jin, S.V. “In AI we trust?” The effects of parasocial interaction and technopian versus luddite ideological views on chatbot-based customer relationship management in the emerging “feeling economy”. Comput. Hum. Behav. 2021, 119, 106721. [Google Scholar] [CrossRef]
  79. Lewis, J.D. Trust as social reality. Soc. Forces 1985, 63, 967–985. [Google Scholar] [CrossRef]
  80. McAllister, D.J. Affect-and cognition-based trust as foundations for interpersonal cooperation in organizations. Acad. Manag. J. 1995, 38, 24–59. [Google Scholar] [CrossRef]
  81. Burke, C.S.; Sims, D.E.; Lazzara, E.H.; Salas, E. Trust in leadership: A multi-level review and integration. Leadersh. Q. 2007, 18, 606–632. [Google Scholar] [CrossRef]
  82. Malle, B.F.; Ullman, D. A multidimensional conception and measure of human-robot trust. In Trust in Human-Robot Interaction; Academic Press: Cambridge, MA, USA, 2021; pp. 3–25. [Google Scholar]
  83. Sundar, S.S.; Waddell, T.F.; Jung, E.H. The Hollywood robot syndrome media effects on older adults’ attitudes toward robots and adoption intentions. In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; pp. 343–350. [Google Scholar]
  84. Tamò-Larrieux, A. Designing for Privacy and Its Legal Framework: Data Protection by Design and Default for the Internet of Things; Springer: Cham, Switzerland, 2018; Volume 9. [Google Scholar]
  85. Lutz, C.; Schöttler, M.; Hoffmann, C.P. The privacy implications of social robots: Scoping review and expert interviews. Mob. Media Commun. 2019, 7, 412–434. [Google Scholar] [CrossRef]
  86. Rueben, M.; Aroyo, A.M.; Lutz, C.; Schmölz, J.; Van Cleynenbreugel, P.; Corti, A.; Agrawal, S.; Smart, W.D. Themes and research directions in privacy-sensitive robotics. In Proceedings of the 2018 IEEE Workshop on Advanced Robotics and Its Social Impacts (ARSO), Genova, Italy, 27–29 September 2018; pp. 77–84. [Google Scholar]
  87. Chuah, S.H.W.; Aw, E.C.X.; Yee, D. Unveiling the complexity of consumers’ intention to use service robots: An fsQCA approach. Comput. Hum. Behav. 2021, 123, 106870. [Google Scholar] [CrossRef]
  88. Bhattacherjee, A. Understanding information systems continuance: An expectation-confirmation model. MIS Q. 2001, 25, 351–370. [Google Scholar] [CrossRef]
  89. Lankton, N.; McKnight, D.H.; Thatcher, J.B. Incorporating trust-in-technology into expectation disconfirmation theory. J. Strateg. Inf. Syst. 2014, 23, 128–145. [Google Scholar] [CrossRef]
  90. Liu, X.; He, X.; Wang, M.; Shen, H. What influences patients’ continuance intention to use AI-powered service robots at hospitals? The role of individual characteristics. Technol. Soc. 2022, 70, 101996. [Google Scholar] [CrossRef]
  91. Naneva, S.; Sarda Gou, M.; Webb, T.L.; Prescott, T.J. A systematic review of attitudes, anxiety, acceptance, and trust towards social robots. Int. J. Soc. Robot. 2020, 12, 1179–1201. [Google Scholar] [CrossRef]
  92. Ciechanowski, L.; Przegalinska, A.; Magnuski, M.; Gloor, P. In the shades of the uncanny valley: An experimental study of human–chatbot interaction. Future Gener. Comput. Syst. 2019, 92, 539–548. [Google Scholar] [CrossRef]
  93. Ferrario, A.; Loi, M.; Viganò, E. In AI we trust incrementally: A multi-layer model of trust to analyze human-artificial intelligence interactions. Philos. Technol. 2020, 33, 523–539. [Google Scholar] [CrossRef]
  94. Lee, M.; Park, J.S. Do parasocial relationships and the quality of communication with AI shopping chatbots determine middle-aged women consumers’ continuance usage intentions? J. Consum. Behav. 2022, 21, 842–854. [Google Scholar] [CrossRef]
  95. Hair, J.F.; Risher, J.J.; Sarstedt, M.; Ringle, C.M. When to use and how to report the results of PLS-SEM. Eur. Bus. Rev. 2019, 31, 2–24. [Google Scholar] [CrossRef]
  96. Hair, J.F., Jr.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M.; Danks, N.P.; Ray, S. Partial Least Squares Structural Equation Modeling (PLS-SEM) Using R: A Workbook; Springer Nature: Berlin/Heidelberg, Germany, 2021; p. 197. [Google Scholar]
  97. Henseler, J.; Ringle, C.M.; Sinkovics, R.R. The use of partial least squares path modeling in international marketing. In New Challenges to International Marketing; Emerald Group Publishing Limited: Leeds, UK, 2009; pp. 277–319. [Google Scholar]
  98. Chien, S.Y.; Lin, Y.L.; Chang, B.F. The effects of intimacy and proactivity on trust in human-humanoid robot interaction. Inf. Syst. Front. 2024, 26, 75–90. [Google Scholar] [CrossRef]
  99. Malhotra, N.K.; Kim, S.S.; Agarwal, J. Internet users’ information privacy concerns (IUIPC): The construct, the scale, and a causal model. Inf. Syst. Res. 2004, 15, 336–355. [Google Scholar] [CrossRef]
  100. Schaefer, K. The Perception and Measurement of Human-Robot Trust. Doctoral Dissertation, University of Central Florida, Orlando, FL, USA, 2013. Available online: https://stars.library.ucf.edu/etd/2688 (accessed on 26 April 2025).
  101. Ashfaq, M.; Yun, J.; Yu, S.; Loureiro, S.M.C. I, Chatbot: Modeling the determinants of users’ satisfaction and continuance intention of AI-powered service agents. Telemat. Inform. 2020, 54, 101473. [Google Scholar] [CrossRef]
  102. Nitzl, C.; Chin, W.W. The case of partial least squares (PLS) path modeling in managerial accounting research. J. Manag. Control 2017, 28, 137–156. [Google Scholar] [CrossRef]
  103. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  104. Henseler, J.; Ringle, C.M.; Sarstedt, M. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 2015, 43, 115–135. [Google Scholar] [CrossRef]
  105. Kock, N.; Lynn, G.S. Lateral collinearity and misleading results in variance-based SEM: An illustration and recommendations. J. Assoc. Inf. Syst. 2012, 13, 2. [Google Scholar] [CrossRef]
  106. Falk, R.F.; Miller, N.B. A Primer for Soft Modeling; University of Akron Press: Akron, OH, USA, 1992. [Google Scholar]
  107. Chin, W.W. The partial least squares approach to structural equation modeling. Mod. Methods Bus. Res. 1998, 295, 295–336. [Google Scholar]
  108. Tenenhaus, M.; Vinzi, V.E.; Chatelin, Y.M.; Lauro, C. PLS path modeling. Comput. Stat. Data Anal. 2005, 48, 159–205. [Google Scholar] [CrossRef]
  109. Sarstedt, M.; Ringle, C.M.; Hair, J.F. Partial least squares structural equation modeling. In Handbook of Market Research; Springer International Publishing: Cham, Switzerland, 2021; pp. 587–632. [Google Scholar]
  110. Zhao, X.; Lynch, J.G., Jr.; Chen, Q. Reconsidering Baron and Kenny: Myths and truths about mediation analysis. J. Consum. Res. 2010, 37, 197–206. [Google Scholar] [CrossRef]
  111. Wolffgramm, M.R.; Corporaal, S.; Groen, A.J. Operators and their human–robot interdependencies: Implications of distinct job decision latitudes for sustainable work and high performance. Front. Robot. AI 2025, 12, 1442319. [Google Scholar] [CrossRef]
  112. Ackermann, H.; Lange, A.L.; Hafner, V.V.; Lazarides, R. How adaptive social robots influence cognitive, emotional, and self-regulated learning. Sci. Rep. 2025, 15, 6581. [Google Scholar] [CrossRef] [PubMed]
  113. Mitchell, J.J.; Jeon, M. Exploring Emotional Connections: A Systematic Literature Review of Attachment in Human-Robot Interaction. Int. J. Hum. Comput. Interact. 2025, 1–22. [Google Scholar] [CrossRef]
  114. Torras, C. Ethics of Social Robotics: Individual and Societal Concerns and Opportunities. Annu. Rev. Control Robot. Auton. Syst. 2024, 7, 1–18. [Google Scholar] [CrossRef]
  115. Yang, D.; Chae, Y.J.; Kim, D.; Lim, Y.; Kim, D.H.; Kim, C.; Park, S.K.; Nam, C. Effects of social behaviors of robots in privacy-sensitive situations. Int. J. Soc. Robot. 2022, 14, 589–602. [Google Scholar] [CrossRef]
  116. Guggemos, J.; Seufert, S.; Sonderegger, S.; Burkhard, M. Social robots in education: Conceptual overview and case study of use. In Orchestration of Learning Environments in the Digital World; Springer International Publishing: Cham, Switzerland, 2022; pp. 173–195. [Google Scholar]
  117. González-González, C.S.; Violant-Holz, V.; Gil-Iranzo, R.M. Social robots in hospitals: A systematic review. Appl. Sci. 2021, 11, 5976. [Google Scholar] [CrossRef]
  118. Durmaz, A.; Dursun, İ.; Kabadayi, E.T. Mitigating the effects of social desirability bias in self-report surveys: Classical and new techniques. In Applied Social Science Approaches to Mixed Methods Research; IGI Global Scientific Publishing: Hershey, PA, USA, 2020; pp. 146–185. [Google Scholar]
  119. Koller, K.; Pankowska, P.K.; Brick, C. Identifying bias in self-reported pro-environmental behavior. Curr. Res. Ecol. Soc. Psychol. 2023, 4, 100087. [Google Scholar] [CrossRef]
  120. Xing, X.; Burdet, E.; Si, W.; Yang, C.; Li, Y. Impedance learning for human-guided robots in contact with unknown environments. IEEE Trans. Robot. 2023, 39, 3705–3721. [Google Scholar] [CrossRef]
  121. Chen, L.; Chen, L.; Chen, X.; Lu, H.; Zheng, Y.; Wu, J.; Wang, Y.; Zhang, Z.; Xiong, R. Compliance while resisting: A shear-thickening fluid controller for physical human-robot interaction. Int. J. Robot. Res. 2024, 43, 1731–1769. [Google Scholar] [CrossRef]
Figure 1. Capabilities of MOCCA.
Figure 2. An example of a conversation with MOCCA.
Figure 3. Structural model.
Figure 4. Hypothesis testing results.
Table 1. Constructs and items.
Constructs | Items | Measurements | Ref.
Intimacy | PAI1 | I feel that MOCCA is friendly to me, like a friend. | [98]
 | PAI2 | I think MOCCA is sociable and enjoys interacting with me. |
 | PAI3 | I would like to have more conversations with MOCCA. |
 | PAI4 | I feel emotionally close to MOCCA. |
 | PAI5 | I believe that MOCCA and I share a close relationship. |
Morality | PAM1 | MOCCA can think through whether a conversation is moral. | [18]
 | PAM2 | MOCCA feels obligated to engage in conversations in a moral way. |
 | PAM3 | MOCCA has a sense for what is right and wrong. |
 | PAM4 | MOCCA behaves according to moral rules in its conversations. |
 | PAM5 | MOCCA would refrain from doing things that cause painful conversations. |
Dependency | PAD1 | MOCCA can only converse in the way it is programmed to. | [18]
 | PAD2 | MOCCA’s conversations are the result of its programming. |
 | PAD3 | MOCCA can only do what humans tell it to do. |
 | PAD4 | MOCCA would never engage in a conversation it was not programmed to do. |
Information Privacy Risk | IPR1 | The personal information collected by MOCCA may be misused. | [56,99]
 | IPR2 | The personal information collected by MOCCA may be disclosed to others without my consent. |
 | IPR3 | The personal information collected by MOCCA may be used improperly. |
 | IPR4 | I believe it is difficult to ensure the confidentiality of personal information when using MOCCA. |
 | IPR5 | I believe there is a risk that MOCCA may collect personal conversations and share them with others. |
Parasocial Interaction | PSI1 | I feel like MOCCA understands me when I talk to it. | [38,62]
 | PSI2 | I feel like MOCCA knows I am involved in the conversation with it. |
 | PSI3 | I feel like MOCCA knows that I am aware of it during our interaction. |
 | PSI4 | I feel like MOCCA understands that I am focused on it during our conversation. |
 | PSI5 | I feel like MOCCA knows how I am reacting to it during our conversation. |
 | PSI6 | No matter what I say or do, MOCCA always responds to me. |
Trust | TRS1 | I believe I can trust what MOCCA says. | [100]
 | TRS2 | I believe MOCCA will provide me with accurate information. |
 | TRS3 | I believe MOCCA is honest. |
 | TRS4 | I believe I can rely on MOCCA. |
 | TRS5 | I believe MOCCA understands my needs and preferences. |
 | TRS6 | I believe MOCCA wants to remember my interests. |
 | TRS7 | I believe the services provided by MOCCA are trustworthy. |
 | TRS8 | I believe MOCCA is trustworthy enough to share my personal information with it. |
 | TRS9 | I believe the information provided by MOCCA is reliable. |
Continuance Usage Intention | CUI1 | I will always enjoy using MOCCA in my daily life. | [88,101]
 | CUI2 | I intend to continue using MOCCA in the future. |
 | CUI3 | I will use MOCCA more frequently in the future. |
 | CUI4 | I will strongly recommend MOCCA to others. |
Source(s): Table by the authors. Note: Ref. indicates the literature sources from which the measurement items were adapted or modified. Abbreviations: PAI = Perceived Anthropomorphism–Intimacy; PAM = Perceived Agency–Morality; PAD = Perceived Agency–Dependency; IPR = Perceived Risk–Information Privacy Risk; PSI = Parasocial Interaction; TRS = Trust; CUI = Continuance Usage Intention.
Table 2. Participant demographics (N = 285).
Classification | Distribution | Frequency | Percentage
Gender | Male | 133 | 46.7%
 | Female | 152 | 53.3%
Age | Under 20s (<20 years old) | 37 | 13.0%
 | 20s (20 to 29 years old) | 72 | 25.2%
 | 30s (30 to 39 years old) | 72 | 25.2%
 | 40s (40 to 49 years old) | 62 | 21.8%
 | 50s (50 to 59 years old) | 39 | 13.7%
 | 60 or older (≥60 years old) | 3 | 1.1%
Educational Background | High school or below | 65 | 22.8%
 | Currently in college (excluding junior college) | 73 | 25.6%
 | College graduate (excluding junior college) | 71 | 24.9%
 | Postgraduate or higher | 76 | 26.7%
Occupational Background | Student | 53 | 18.6%
 | Company employee (IT industry) | 99 | 34.7%
 | Company employee (non-IT industry) | 96 | 33.7%
 | Housewife | 37 | 13.0%
Usage Duration | Less than 1 month | 70 | 24.6%
 | 1 to 3 months | 58 | 20.4%
 | 3 to 6 months | 53 | 18.6%
 | 6 to 12 months | 46 | 16.1%
 | Over 12 months | 58 | 20.3%
Usage Frequency | Almost every day | 7 | 2.5%
 | 3 to 4 times a week | 53 | 18.6%
 | Once a week | 111 | 38.9%
 | 2 to 3 times a month | 55 | 19.3%
 | Less than once a month | 59 | 20.7%
Usage Purpose (multiple choices allowed) | Daily conversations and entertainment | 101 | 35.4%
 | Translation and writing assistance | 156 | 54.7%
 | Information and knowledge search | 130 | 45.6%
 | Professional questions and consultations | 76 | 26.7%
 | Programming and technical support | 16 | 5.6%
 | Creative and content creation assistance | 112 | 39.3%
Source(s): Table by the authors.
Table 3. Construct validity.
Constructs | Items | λ | Cronbach’s α | rho_A | AVE | CR | VIF
Intimacy | PAI1 | 0.839 | 0.908 | 0.908 | 0.731 | 0.931 | 2.263
 | PAI2 | 0.838 | | | | | 2.295
 | PAI3 | 0.868 | | | | | 2.682
 | PAI4 | 0.873 | | | | | 2.685
 | PAI5 | 0.856 | | | | | 2.492
Morality | PAM1 | 0.849 | 0.901 | 0.908 | 0.716 | 0.927 | 2.415
 | PAM2 | 0.855 | | | | | 2.352
 | PAM3 | 0.823 | | | | | 2.191
 | PAM4 | 0.844 | | | | | 2.322
 | PAM5 | 0.862 | | | | | 2.359
Dependency | PAD1 | 0.818 | 0.855 | 0.864 | 0.696 | 0.902 | 1.879
 | PAD2 | 0.821 | | | | | 1.983
 | PAD3 | 0.850 | | | | | 2.054
 | PAD4 | 0.848 | | | | | 1.916
Information Privacy Risk | IPR1 | 0.830 | 0.883 | 0.884 | 0.681 | 0.914 | 2.137
 | IPR2 | 0.824 | | | | | 2.074
 | IPR3 | 0.822 | | | | | 2.103
 | IPR4 | 0.827 | | | | | 2.055
 | IPR5 | 0.823 | | | | | 2.004
Parasocial Interaction | PSI1 | 0.834 | 0.904 | 0.904 | 0.675 | 0.926 | 2.304
 | PSI2 | 0.806 | | | | | 2.024
 | PSI3 | 0.824 | | | | | 2.249
 | PSI4 | 0.833 | | | | | 2.307
 | PSI5 | 0.808 | | | | | 2.109
 | PSI6 | 0.823 | | | | | 2.205
Trust | TRS1 | 0.781 | 0.938 | 0.939 | 0.668 | 0.948 | 2.140
 | TRS2 | 0.812 | | | | | 2.368
 | TRS3 | 0.804 | | | | | 2.331
 | TRS4 | 0.843 | | | | | 2.691
 | TRS5 | 0.828 | | | | | 2.581
 | TRS6 | 0.808 | | | | | 2.339
 | TRS7 | 0.833 | | | | | 2.636
 | TRS8 | 0.841 | | | | | 2.704
 | TRS9 | 0.804 | | | | | 2.291
Continuance Usage Intention | CUI1 | 0.834 | 0.847 | 0.847 | 0.685 | 0.897 | 1.944
 | CUI2 | 0.810 | | | | | 1.771
 | CUI3 | 0.834 | | | | | 1.957
 | CUI4 | 0.832 | | | | | 1.874
Source(s): Table by the authors. Note: λ = outer loading; AVE = Average Variance Extracted; CR = Composite Reliability; VIF = Variance Inflation Factor; construct-level statistics (Cronbach’s α, rho_A, AVE, CR) are reported on the first item row of each construct. An illustrative computation of AVE and CR is sketched below the table. Abbreviations: PAI = Perceived Anthropomorphism–Intimacy; PAM = Perceived Agency–Morality; PAD = Perceived Agency–Dependency; IPR = Perceived Risk–Information Privacy Risk; PSI = Parasocial Interaction; TRS = Trust; CUI = Continuance Usage Intention.
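The reliability indices in Table 3 can be reproduced directly from the standardized outer loadings using the standard formulas (AVE = mean of squared loadings; CR = (Σλ)² / [(Σλ)² + Σ(1 − λ²)]). The Python sketch below is illustrative only and is not part of the study’s PLS-SEM estimation pipeline; applied to the Intimacy loadings, it recovers the AVE and CR values reported above.

```python
import numpy as np

def average_variance_extracted(loadings):
    """AVE: mean of the squared standardized outer loadings."""
    loadings = np.asarray(loadings, dtype=float)
    return (loadings ** 2).mean()

def composite_reliability(loadings):
    """Composite reliability (rho_c) from standardized outer loadings."""
    loadings = np.asarray(loadings, dtype=float)
    numerator = loadings.sum() ** 2
    error_variance = (1.0 - loadings ** 2).sum()  # indicator error variances
    return numerator / (numerator + error_variance)

# Intimacy (PAI) loadings as reported in Table 3
pai = [0.839, 0.838, 0.868, 0.873, 0.856]
print(round(average_variance_extracted(pai), 3))  # 0.731
print(round(composite_reliability(pai), 3))       # 0.931
```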
Table 4. Discriminant validity (Fornell–Larcker criterion and HTMT).
Constructs | PAI | PAM | PAD | IPR | PSI | TRS | CUI
Intimacy (PAI) | 0.855 | 0.547 | 0.545 | 0.419 | 0.363 | 0.465 | 0.423
Morality (PAM) | 0.374 | 0.846 | 0.562 | 0.473 | 0.353 | 0.426 | 0.582
Dependency (PAD) | −0.372 | −0.404 | 0.835 | 0.485 | 0.442 | 0.484 | 0.525
Information Privacy Risk (IPR) | −0.406 | −0.447 | 0.369 | 0.825 | 0.425 | 0.499 | 0.453
Parasocial Interaction (PSI) | 0.477 | 0.441 | −0.395 | −0.434 | 0.821 | 0.457 | 0.423
Trust (TRS) | 0.539 | 0.397 | −0.319 | −0.431 | 0.519 | 0.817 | 0.411
Continuance Usage Intention (CUI) | 0.371 | 0.407 | −0.312 | −0.363 | 0.477 | 0.490 | 0.828
Source(s): Table by the authors. Note: Diagonal values (shaded in the original) are the square root of the AVE; values below the diagonal are construct correlations (Fornell–Larcker criterion) and values above the diagonal are Heterotrait–Monotrait ratios (HTMT). An illustrative HTMT computation is sketched below the table. Abbreviations: PAI = Perceived Anthropomorphism–Intimacy; PAM = Perceived Agency–Morality; PAD = Perceived Agency–Dependency; IPR = Perceived Risk–Information Privacy Risk; PSI = Parasocial Interaction; TRS = Trust; CUI = Continuance Usage Intention.
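For readers unfamiliar with the HTMT criterion [104], the ratio for a pair of constructs is the mean of the between-construct (heterotrait) item correlations divided by the geometric mean of the within-construct (monotrait) item correlations; values below 0.85–0.90 indicate discriminant validity. The sketch below is a minimal illustration, assuming a hypothetical item-level correlation matrix `item_corr` (a pandas DataFrame indexed by item labels) and a construct-to-items mapping; it is not the software used in the study.

```python
import numpy as np
import pandas as pd

def htmt(item_corr: pd.DataFrame, construct_items: dict) -> pd.DataFrame:
    """Heterotrait-Monotrait ratio of correlations for each construct pair."""
    names = list(construct_items)
    out = pd.DataFrame(np.nan, index=names, columns=names)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            ia, ib = construct_items[a], construct_items[b]
            # mean between-construct (heterotrait) correlation
            hetero = item_corr.loc[ia, ib].to_numpy().mean()
            # mean within-construct (monotrait) correlations, diagonal excluded
            mono_a = item_corr.loc[ia, ia].to_numpy()[np.triu_indices(len(ia), 1)].mean()
            mono_b = item_corr.loc[ib, ib].to_numpy()[np.triu_indices(len(ib), 1)].mean()
            out.loc[a, b] = hetero / np.sqrt(mono_a * mono_b)  # compare with 0.85/0.90 cut-off
    return out

# Hypothetical usage (names for illustration only):
# item_corr = survey_items.corr()
# htmt(item_corr, {"PAI": ["PAI1", "PAI2", "PAI3", "PAI4", "PAI5"],
#                  "TRS": ["TRS1", "TRS2", "TRS3"]})
```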
Table 5. The validity of the structural model.
Endogenous Variables | R Square (R2) | R Square Adjusted | Q2 | Exogenous Variables | f2
Parasocial Interaction (PSI) | 0.399 | 0.388 | 0.264 | PAI | 0.032
 | | | | PAM | 0.029
 | | | | PAD | 0.023
 | | | | IPR | 0.019
Trust (TRS) | 0.365 | 0.356 | 0.240 | PAI | 0.181
 | | | | PAM | 0.025
 | | | | PAD | 0.002
 | | | | IPR | 0.041
Continuance Usage Intention (CUI) | 0.308 | 0.303 | 0.204 | PSI | 0.098
 | | | | TRS | 0.116
Source(s): Table by the authors. Note: R2 = variance explained; Q2 = predictive relevance (Q2 = 1 − SSE/SSO); f2 = effect size of each exogenous variable. An illustrative computation of f2 and Q2 is sketched below the table. Abbreviations: PAI = Perceived Anthropomorphism–Intimacy; PAM = Perceived Agency–Morality; PAD = Perceived Agency–Dependency; IPR = Perceived Risk–Information Privacy Risk; PSI = Parasocial Interaction; TRS = Trust; CUI = Continuance Usage Intention.
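The f2 values in Table 5 follow Cohen’s formula, f2 = (R2_included − R2_excluded) / (1 − R2_included), i.e., the change in explained variance when one exogenous construct is omitted from the model. The Python sketch below restates both formulas; the excluded-model R2 in the example is back-solved for illustration and is not a value reported in the paper.

```python
def f_squared(r2_included: float, r2_excluded: float) -> float:
    """Effect size f2 = (R2_included - R2_excluded) / (1 - R2_included)."""
    return (r2_included - r2_excluded) / (1.0 - r2_included)

def q_squared(sse: float, sso: float) -> float:
    """Blindfolding-based predictive relevance, Q2 = 1 - SSE/SSO."""
    return 1.0 - sse / sso

# Illustration only: if omitting PAI lowered R2 for Trust from 0.365 to a
# hypothetical 0.250, the resulting effect size would match Table 5:
print(round(f_squared(0.365, 0.250), 3))  # 0.181, the PAI -> TRS entry
```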
Table 6. Results of hypothesis testing (bootstrapping with 5000 resamples).
Parameters | Path Coef. (β) | t-Value | p-Value | 95% CI [2.5%, 97.5%] | Supported?
Direct Effect (Path)
H1: PAI → PSI | 0.172 ** | 2.763 | 0.006 | [0.053, 0.294] | Yes
H2: PAM → PSI | 0.159 ** | 2.797 | 0.005 | [0.048, 0.270] | Yes
H3: PAD → PSI | −0.134 ** | 2.603 | 0.009 | [−0.238, −0.037] | Yes
H4: IPR → PSI | −0.129 * | 2.237 | 0.025 | [−0.241, −0.018] | Yes
H5: PAI → TRS | 0.390 *** | 7.392 | 0.000 | [0.281, 0.491] | Yes
H6: PAM → TRS | 0.149 ** | 2.604 | 0.009 | [0.035, 0.261] | Yes
H7: PAD → TRS | −0.044 (ns) | 0.793 | 0.428 | [−0.155, 0.060] | No
H8: IPR → TRS | −0.190 ** | 3.324 | 0.001 | [−0.302, −0.080] | Yes
H9: TRS → PSI | 0.265 *** | 4.476 | 0.000 | [0.148, 0.377] | Yes
H10: PSI → CUI | 0.305 *** | 5.614 | 0.000 | [0.194, 0.410] | Yes
H11: TRS → CUI | 0.332 *** | 6.050 | 0.000 | [0.224, 0.440] | Yes
Specific Indirect Effect (Mediation Path)
PAI → PSI → CUI | 0.052 * | 2.429 | 0.015 | [0.014, 0.099] | Yes
PAM → PSI → CUI | 0.049 * | 2.356 | 0.019 | [0.013, 0.093] | Yes
PAD → PSI → CUI | −0.041 * | 2.228 | 0.026 | [−0.082, −0.010] | Yes
IPR → PSI → CUI | −0.039 * | 2.090 | 0.037 | [−0.078, −0.005] | Yes
PAI → TRS → CUI | 0.129 *** | 4.848 | 0.000 | [0.080, 0.185] | Yes
PAM → TRS → CUI | 0.049 * | 2.211 | 0.027 | [0.010, 0.098] | Yes
PAD → TRS → CUI | −0.014 (ns) | 0.783 | 0.434 | [−0.054, 0.020] | No
IPR → TRS → CUI | −0.063 ** | 2.887 | 0.004 | [−0.110, −0.024] | Yes
Source(s): Table by the authors. Note: Path Coef. (β) values are standardized path coefficients, and CI is the confidence interval; *** p < 0.001; ** p < 0.01; * p < 0.05; (ns) = not significant (p > 0.05). An illustrative bootstrap of an indirect effect is sketched below the table. Abbreviations: PAI = Perceived Anthropomorphism–Intimacy; PAM = Perceived Agency–Morality; PAD = Perceived Agency–Dependency; IPR = Perceived Risk–Information Privacy Risk; PSI = Parasocial Interaction; TRS = Trust; CUI = Continuance Usage Intention.
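The specific indirect effects in Table 6 are products of the corresponding direct paths (for example, 0.172 × 0.305 ≈ 0.052 for PAI → PSI → CUI), and their significance is judged by whether the 95% percentile bootstrap interval from 5000 resamples excludes zero [110]. The sketch below illustrates that logic in miniature: two ordinary-least-squares slopes stand in for the PLS path coefficients, and `X`, `M`, and `Y` are hypothetical score vectors; the actual procedure re-estimates the full PLS model on every resample.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_indirect(X, M, Y, n_boot: int = 5000, alpha: float = 0.05):
    """Percentile bootstrap of an a*b indirect effect (OLS slopes as stand-ins for PLS paths)."""
    X, M, Y = map(np.asarray, (X, M, Y))
    n = len(X)
    draws = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)           # resample respondents with replacement
        a = np.polyfit(X[idx], M[idx], 1)[0]  # stimulus -> mediator slope
        b = np.polyfit(M[idx], Y[idx], 1)[0]  # mediator -> outcome slope
        draws[i] = a * b
    lower, upper = np.percentile(draws, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return draws.mean(), (lower, upper)       # supported if the interval excludes zero
```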
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
