Article

Building and Repairing Trust in Chatbots: The Interplay Between Social Role and Performance During Interactions

School of Media and Communication, Shanghai Jiao Tong University, 800 Dongchuan Road, Minhang District, Shanghai 200240, China
*
Author to whom correspondence should be addressed.
Behav. Sci. 2026, 16(1), 118; https://doi.org/10.3390/bs16010118
Submission received: 11 December 2025 / Revised: 11 January 2026 / Accepted: 12 January 2026 / Published: 14 January 2026

Abstract

Trust (or distrust) in artificial intelligence (AI) is a critical research topic, given AI’s pervasive integration across societal domains. Despite its significance, scholarly attention to process-based, learned trust in AI remains limited. To address this gap, this study designed a virtual non-fungible token (NFT) investment task, featuring seven rounds of risky decision-making, to simulate an investment/trust game and explore participants’ multifaceted trust under the influence of different chatbot social roles. The findings suggest the chatbot’s social role had a significant impact on participants’ trust behaviors and perceptions over time. Trust in the two chatbot types had diverged even before the system-induced failures occurred: the friend-like chatbot elicited a higher level of behavioral trust than the servant-like counterpart. During the trust-damaging moments, the friend-like chatbot proved more effective in mitigating trust erosion and facilitating trust repair, as evidenced by relatively stable investment behaviors. These findings reinforce the notion that friendship with AI can function as a relational buffer, softening the impact of trust violations and facilitating smoother trust recovery.

1. Introduction

To trust or not to trust technology? This has been a long-standing question, and has evolved into various forms. In contemporary discourse, the concept of “trustworthy artificial intelligence (AI)” has emerged as a critical framework, underscoring the imperative of trustworthiness in AI systems (Kaur et al., 2022), given their ubiquitous integration across societal sectors.
Existing scholarship on trust in AI chatbots has predominantly centered on the attributes of chatbots and user characteristics (see a review in Rheu et al., 2021). Research indicates that chatbot design elements such as vocal features and communication styles significantly influence user trust, while user demographics, personality traits, and usage contexts moderate these effects (e.g., Torre et al., 2019; Wu et al., 2024). Nevertheless, trust is fundamentally a dynamic process, and chatbot performance is expected to exert a substantial influence on evolving user trust. Generally speaking, consistent trustee performance fosters trust, whereas performance discrepancies erode it (Yang & Holzer, 2006). Take self-driving cars as an example: advancements in autonomous vehicle technology often engender public trust, yet high-profile crash incidents can greatly undermine confidence in the safety of these systems (Hawkins, 2024).
Given the paucity of research on process-based or learned trust in AI chatbots, this study designed an investment/trust game simulation to explore participants’ multifaceted trust constructs. Additionally, recognizing the impact of chatbot anthropomorphic features on user perception and experience, this study attempts to examine the interplay between social role attribution and performance during interactions in shaping learned trust by employing a laboratory experiment design.

2. Literature Review

2.1. Learned Trust in Chatbots

With the rapid development of AI and the proliferation of intelligent conversational agents, chatbots have become a prevalent interface for human–machine interaction. In domains ranging from customer service to mental health support and education, chatbots are expected to build and maintain trust in dynamic, task-specific contexts (Omarov et al., 2023). This growing reliance on conversational AI calls for a nuanced understanding of trust as a fluid, context-sensitive process rather than a static disposition.
Trust, commonly recognized as a foundational element of human interaction, refers to a psychological state where individuals accept vulnerability based on positive expectations of others’ intentions or behaviors (Rousseau et al., 1998). Trust comprises cognitive evaluations (i.e., trust beliefs in competence, benevolence, and integrity) and behavioral intentions (i.e., willingness to take risks) (Mayer et al., 1995; Xie, 2024). McAllister (1995) differentiates cognitive trust from affective trust: the former relies on rational assessments of reliability, whereas the latter rests on emotional bonds formed during interpersonal cooperation in organizations. In addition, behavioral trust results from both cognitive and affective trust and is reflected in actual behaviors that demonstrate trust (D. Johnson & Grayson, 2005).
Three types of trust in artificial agents have been identified by Grodzinsky et al. (2020): dispositional trust (a general tendency to trust), situational trust (based on context), and learned trust (shaped by direct interaction). In particular, learned trust refers to the trust that emerges and evolves through interaction with an AI system, and it is further subdivided into initial learned trust and dynamic learned trust (Collins & Juvina, 2021). Initial learned trust is typically shaped by pre-existing expectations, such as brand perception, interface design, and prior experiences. In contrast, dynamic learned trust evolves continuously as users interpret real-time system behaviors, assess reliability, and make interactional adjustments (J. D. Lee & See, 2004).
Although researchers have recognized the importance of dynamic trust, empirical investigations often lack the temporal sensitivity required to capture its development over time. Most studies employ cross-sectional methods, using single-point questionnaires or post-task surveys, which fail to reflect the fluctuations of trust throughout the interaction (Diederich et al., 2020). For example, trust may peak when a chatbot provides a helpful response and decline sharply when it makes an error, nuances that are lost in static measurement paradigms. Even in experimental settings, trust is frequently measured only at the end of a session (Przegalińska et al., 2019), neglecting the dynamic, iterative nature of human–chatbot relationships. This lack of temporal granularity in trust research is especially problematic in high-stakes or ambiguous scenarios where trust decisions are closely tied to user vulnerability and perceived risk. Mayer et al. (1995) posited that trust becomes most salient and behaviorally meaningful when users must take risks, such as relying on a chatbot for sensitive information or critical decisions. In this light, M. Johnson and Bradshaw (2021) introduced a behavioral framework that ties trust directly to risk exposure, suggesting that users express trust primarily through their willingness to act on AI suggestions in uncertain environments. This framework supports the notion that trust should be modeled dynamically, across time intervals and behavioral episodes.
Empirical studies on learned trust in chatbots remain scarce. Most trust assessments still rely on subjective self-reports, which are prone to retrospective bias and fail to account for role-specific trajectories of trust development and repair. There is a pressing need for experimental designs that incorporate real-time data collection, such as trust ratings at each conversational turn or behavioral shifts in response to role-incongruent behavior. By combining trust metrics with observable risk-taking behavior across chatbot roles, research can offer richer evidence of how dynamic learned trust manifests in practice.

2.2. Damaging and Repairing Trust in Chatbots

Trust development is a gradual process shaped by cognitive, affective, and structural factors. The cognitive mechanism involves positive attributions, such as transparency in decision-making, which enhances perceptions of integrity (Hardin, 2002). The affective-social mechanism relies on norms like reciprocity and symbolic acts (e.g., apologies) to maintain relational balance (Uslaner, 2002). Structural safeguards, such as contracts or third-party certifications, reduce perceived risks; in digital contexts, these emphasize algorithmic transparency and user interfaces (Robbins, 2016). When these foundations are disrupted, trust erodes, triggering cognitive doubts (e.g., questioning competence and/or integrity), emotional distress (e.g., anger and disillusionment), and behavioral withdrawal (e.g., reduced cooperation, retaliation) (Lewicki et al., 1998; Xie, 2024). Trust damage has been categorized into three types: competence-based (e.g., inability-related failures), integrity-based (e.g., intentional dishonesty), and benevolence-based (e.g., neglect-induced harm) (P. H. Kim et al., 2004; Yao et al., 2012). These categories highlight trust’s fragility and the complexity of its repair. A spillover effect further amplifies damage, as a competence failure in one domain may taint unrelated integrity perceptions (Uslaner, 2002). This dynamic vulnerability underscores the urgency of understanding how trust, once broken, can be systematically repaired, as emphasized by Robbins (2016).
In turn, restoring trust requires addressing cognitive, emotional, and structural dimensions. At the interpersonal level, multiple theoretical frameworks have been developed, including attribution, which reshapes causal inferences by attributing violations to external and unstable factors (Weiner, 1985); social equilibrium, which rebalances norms through apologies or penance (Goffman, 1967); and structural mechanisms, which institute audits or penalties to deter future violations (Gillespie & Dietz, 2009). However, trust repair strategies vary across contexts. For instance, apologies usually work for competence violations but may backfire for integrity breaches, while denials risk appearing insincere (P. H. Kim et al., 2004). Substantive actions like compensation or self-punishment (e.g., resignations) signal commitment to change (Hardin, 2002).

2.3. The Effect of Chatbots’ Social Role

Anthropomorphism, the tendency to attribute human-like characteristics to non-human agents (Epley et al., 2007), profoundly influences human–machine interactions, particularly in shaping how users perceive and trust chatbots. Three dimensions of anthropomorphic features in human–machine communication have been identified: human identity, verbal cues, and nonverbal cues (Seeger et al., 2021). Grounded in the Computers Are Social Actors (CASA) paradigm, which posits that humans apply social norms to machines (Nass & Moon, 2000), the social role assigned to a chatbot—whether as a servant, friend, partner, or assistant—shapes user interactions by invoking behavioral expectations akin to human relationships. Recent research suggests that the nature and pace of learned trust development may vary significantly depending on the chatbot’s social role. For example, friend-like chatbots, which emphasize socio-emotional connection through informal language, personalized responses, and empathic cues, have been found to accelerate the accumulation of affective trust, especially during early interactions (Huh et al., 2023). In contrast, servant-like chatbots, which use formal language and adopt task-oriented roles, are more likely to build behavioral trust over time by consistently demonstrating competence and goal efficiency (Chattaraman et al., 2019; Youn & Jin, 2021). These differences suggest that dynamic trust is not only interaction-dependent but also role-contingent, aligning with the CASA paradigm.
Two social roles of chatbots are commonly selected: friend (or partner) and servant (or assistant), as they reflect two ends of the relationship spectrum from equality to subordination (Youn & Jin, 2021; Zhang, 2023). Friend-like chatbots, which simulate companionship through informal language, emotional tone, and empathetic responses, tend to elicit higher levels of affective trust, defined as the emotional willingness to be vulnerable (McAllister, 1995). In their study with 158 college students interacting with AI voice assistants, A. Kim et al. (2019) discovered that friend-like AI evoked greater perceptions of warmth and pleasure than servant-like AI. Gupta and Nagar (2024) found that participants interacting with friend-like anthropomorphic voice assistants reported significantly higher affective trust, measured through self-reported closeness and willingness to engage in non-task-related conversations. These chatbots, characterized by informal language and empathetic responses, created stronger emotional bonds, aligning with McAllister’s (1995) definition of affective trust as emotional vulnerability. For behavioral trust, friend-like chatbots also demonstrated higher compliance in low-risk tasks, such as sharing personal preferences, due to perceived social rapport. In contrast, servant-like chatbots, while effective in task execution, lacked the relational cues necessary to elicit immediate behavioral trust, as users viewed them primarily as tools rather than companions (Schweitzer et al., 2019).
In terms of trust building, recent findings suggest that friend-like chatbots may foster trust more quickly than servant-like ones by leveraging social presence, emotional attunement, and relational cues such as humor, empathy, and informal language (Huh et al., 2023). This indicates that friend-like chatbots may indeed build trust more rapidly than servant-like counterparts, especially when affective trust is prioritized by users. This advantage may come at a cost, however: when trust violations occur, the damage may also be more pronounced. Friend-like chatbots tend to evoke stronger emotional involvement, which intensifies users’ reactions to perceived breaches of expectations. When a friend-like chatbot fails to meet relational or emotional norms, such as being unresponsive, dismissive, or inauthentic, users may interpret the violation as a personal betrayal, resulting in higher levels of both affective trust damage (e.g., feelings of disappointment or hurt) and behavioral trust damage (e.g., disengagement, reduced cooperation). Research on relational trust violations indicates that breaches involving socio-emotional expectations are often perceived as more painful and lead to more significant withdrawal behaviors than instrumental or competence-based failures (P. H. Kim et al., 2004).
When it comes to trust repair, the friend-like chatbot’s social orientation may offer a unique advantage. Affective strategies such as personalized apologies and reaffirmation of the relationship can enhance users’ perceptions of warmth and relational closeness. Research shows that social-oriented communication by chatbots increases user satisfaction, especially for those with high attachment anxiety, by fostering emotional connection (Xu et al., 2022), suggesting such affective strategies may also contribute to trust restoration in emotionally sensitive interactions. Users may be more forgiving toward friend-like chatbots when violations are framed as misunderstandings rather than failures of competence, allowing for faster affective trust repair.
Based on the literature review, five hypotheses have been postulated:
H1. 
The social role of a chatbot influences users’ initial trust toward the chatbot. Specifically, compared to a servant-like chatbot, a friend-like chatbot will elicit higher levels of (a) behavioral trust, and (b) affective trust.
H2. 
The social role of a chatbot influences users’ trust building toward the chatbot. Specifically, compared to a servant-like chatbot, a friend-like chatbot will elicit faster (a) behavioral and (b) affective trust building.
H3. 
The social role of a chatbot influences users’ trust damaging toward the chatbot. Specifically, compared to a servant-like chatbot, a friend-like chatbot will elicit higher levels of (a) behavioral and (b) affective trust damaging.
H4. 
The social role of a chatbot influences users’ trust repairing toward the chatbot. Specifically, compared to a servant-like chatbot, a friend-like chatbot will gain faster (a) behavioral and (b) affective trust repair after the trust damage.
H5. 
The social role of a chatbot influences users’ overall cognitive trust toward the chatbot. Specifically, compared to a servant-like chatbot, a friend-like chatbot will obtain higher levels of cognitive trust.

3. Method

3.1. Procedure

To simulate a dynamic process of trust building, damaging, and repairing, this study adapted the investment/trust game widely used in economics and social psychology (e.g., Lunawat, 2013). In this paradigm, a trustor (participant) decides how much of an endowed resource to entrust to a trustee (chatbot) for investment, with final payoffs contingent on the outcome, effectively measuring trust as willingness to take risks (Houser et al., 2010; Kräkel, 2008). The experiment employed the context of a virtual non-fungible token (NFT) investment task, featuring seven rounds of risk decision-making scenarios. The NFT context was chosen for its relative novelty and simplicity, so that participants would not be unduly influenced by prior experience with traditional forms of investment such as stock trading. In each round, participants were allocated 100 virtual coins and had to decide how much to delegate based on the investment analysis provided by a chatbot named Tobi. After each round of investment was completed, participants rated their trust level toward Tobi. The experiment simulated trust damage and repair processes through pre-set investment losses in the 4th and 6th rounds, allowing for dynamic tracking of trust fluctuations.
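The round structure described above can be sketched in code. This is a minimal illustration, not the authors' implementation: the gain and loss rates and the sample investment amounts are hypothetical assumptions, since the paper specifies only the per-round endowment (100 coins) and that rounds 4 and 6 were pre-set losses.

```python
# Sketch of the seven-round investment/trust game.
# ASSUMPTIONS: the 50% gain/loss rates and the sample investment
# amounts are illustrative only; the paper does not report them.

FAILURE_ROUNDS = {4, 6}  # pre-set, system-induced losses (trust damage)

def play_round(round_no, invested, gain_rate=0.5, loss_rate=0.5):
    """Return the payoff for one round.

    invested: coins (0-100) delegated to the chatbot this round.
    Rounds in FAILURE_ROUNDS yield a pre-set loss; all others a gain.
    """
    kept = 100 - invested                          # coins withheld from Tobi
    if round_no in FAILURE_ROUNDS:
        return kept + invested * (1 - loss_rate)   # trust-damaging outcome
    return kept + invested * (1 + gain_rate)       # trust-building outcome

def run_game(investments):
    """investments: list of 7 per-round amounts; returns per-round payoffs."""
    assert len(investments) == 7
    return [play_round(r, amt) for r, amt in enumerate(investments, start=1)]

# One hypothetical participant's investment trajectory.
payoffs = run_game([70, 75, 80, 75, 70, 75, 80])
```

The investment amount entered each round is the behavioral-trust measure; the payoff schedule only exists to make the pre-set failures in rounds 4 and 6 salient to the participant.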
After securing approval from the institutional review board of Shanghai Jiao Tong University (no. H20240527I), a lab experiment was conducted at this university in Eastern China in October 2024. Upon arrival, each participant was greeted at the entrance of the main lab area by a research assistant and then led to a lab room equipped with a table, a chair, and a laptop. After written consent was obtained, each participant first watched a tutorial video introducing a fictitious AI company named High Fidelity as a globally leading art investment company featuring a cutting-edge AI algorithm to maximize gains from investing in NFT art. The story was framed as follows: High Fidelity had begun to enter the Chinese market and recruited the Center for Future Media and Human-Machine Communication at this university to test the functionality of the metaverse Omnisphere and its affiliated algorithm. The chatbot Tobi worked as a guide to the system and provided investment suggestions for users. Each participant could choose whether, and to what extent, to follow Tobi’s suggestions. Their investment amounts reflected their behavioral trust toward Tobi, as indicated in previous studies (e.g., Buchan et al., 2008). To encourage participants to take the investment seriously, an additional incentive was emphasized beyond the gains from the investment itself: the top 5 players would have the chance to become the campus ambassador of High Fidelity and receive an official bonus.
The two-minute-and-44-second tutorial video consisted of both an introduction to the NFT art investment (the cover story) and guidelines for completing the experimental procedure. Once each participant finished watching the video and had no further questions, the assistant left the room so the participant could complete the experiment alone. During the 7-round investment, questions were asked as part of the measurement (see Section 3.4). After the investment was over, each participant completed an online questionnaire. Once the whole experimental procedure ended, the assistant came in, debriefed, and thanked the participant. Each participant was rewarded with either extra credit or RMB 50 in cash, according to their preference.
The study adopted the Wizard of Oz method, in which trained operators simulated AI conversations in real time on Feishu, a popular Chinese collaboration and management platform offering messaging, video conferencing, and cloud documents in a professional business setting. In a nearby lab room, a research assistant chatted with the participant as Tobi throughout the experimental procedure. Experimental manipulations were implemented through standardized dialogue scripts (see Section 3.3). Given that chatbot avatars influence users’ trust levels (Bae et al., 2023), this study designed a generic chatbot avatar. To avoid the influence of gender-specific chatbot designs, the avatar for Tobi was designed to appear more robotic than human-like, as shown in Figure 1.

3.2. Sample

The sample size was determined using G*Power 3.1.9.7 (Faul et al., 2009), which indicated that 132 participants were required (α = 0.05, power (1 − β) = 0.80, and a medium effect size f = 0.25). An electronic flyer was distributed on campus via social media platforms to recruit participants. College students with a background in finance or economics were excluded, and participants were randomly assigned to one of the two experimental conditions. In total, 133 participants were enrolled. After excluding one invalid sample due to an operational error, the final valid sample size was 132, meeting the minimum requirement. Among the participants, 72 were women (54.5%) and 60 were men (45.5%). Participants ranged in age from 18 to 29 years (M = 19.45, SD = 2.36).
In terms of academic background, participants came from various disciplines: 92 were from medical studies (69.7%), 19 from engineering (14.4%), 5 from natural sciences (3.8%), 9 from humanities (6.8%), and 7 from social sciences (5.3%). The higher proportion of medical students was primarily due to the recruitment incentives attracting a large number of first-year medical students, whose professional education was still in its early stage. This distribution did not significantly impact the study results.

3.3. Treatment Manipulation

The dialogue materials were designed to manipulate the chatbots’ social roles, while strictly controlling for other potential confounding variables. The two types of social roles were differentiated by egalitarian, consultative style of dialogues (friend) in contrast to submissive, execution-oriented style of dialogues (servant). The friend-like role used the first-person pronoun “I,” addressed participants as “friend” or “you,” and employed a casual and relaxed language style to emphasize an equal relationship. By comparison, the servant-like role referred to itself as “Tobi,” addressed participants as “master” or formal “you” (“nin” in Chinese), and used formal language with honorifics to highlight a subordinate role. When providing investment advice or responding to errors, the friend-like role focused on collaboration and equal interaction, while the servant-like role demonstrated obedience to commands and a humble attitude (see examples in Table 1). Sixty-five participants were in the servant-like chatbot condition, and 67 were in the friend-like condition.

3.4. Measures

3.4.1. Dependent Variable

Trust in the chatbot was the key outcome variable. Given that trust is a multidimensional concept (Robbins, 2016), three types of measures were designed to capture its essence (D. Johnson & Grayson, 2005; Rempel et al., 1985; Yamagishi et al., 2015). Behavioral trust and affective trust were temporal, shaped by the chatbot’s performance during the interaction; cognitive trust was measured at the end of the study, reflecting an overall evaluation of the chatbot.
Behavioral Trust: The investment amount directly reflects participants’ actual trust invested in the chatbot in the risk-taking context. To measure behavioral trust, this study operationalized it as the amount of virtual coins (0–100) participants decided to delegate to the chatbot for investment in each round. A larger amount indicates a higher level of behavioral trust.
Affective Trust: To measure affective trust, this study operationalized it as participants’ trust ratings of the chatbot on a scale of 0–10 after receiving each round’s investment outcome. Higher scores indicate higher levels of affective trust.
Cognitive Trust: Cognitive trust was measured using a one-time self-reported trust level in Tobi, adapted from Pitardi and Marriott (2021). Participants indicated how much they trusted Tobi to be “honest,” “helpful,” “capable,” etc. Nine items were measured on a 5-point Likert scale from “1 = strongly disagree” to “5 = strongly agree” (M = 3.70, SD = 0.68, α = 0.90).

3.4.2. Controlling Variables

Chatbot use experience was gauged by asking respondents to indicate whether they had used a chatbot previously (yes or no). If yes, they estimated their use frequency during the past month, ranging from “1 = none or once” to “5 = on a daily basis.”
Participants’ risk-taking tendency was measured as an individual trait using Zaleskiewicz’s (2001) scale. Participants indicated how much they agreed with three statements such as “While taking risk I have a feeling of a very pleasant flutter,” ranging from “1 = strongly disagree” to “5 = strongly agree” (M = 2.97, SD = 0.83, α = 0.66).
Overall trust in technology was adapted from the trust on general technology scale developed by McKnight et al. (2011). Three items on a 5-point Likert scale include “My typical approach is to trust new technologies until they prove to me that I shouldn’t trust them,” “I usually trust a technology until it gives me a reason not to trust it,” and “I generally give a technology the benefit of the doubt when I first use it” (M = 3.88, SD = 0.74, α = 0.72).
Demographics included sex, age, and academic background, employing conventional measures.

3.5. Manipulation Check

A manipulation check was conducted by asking participants to recall “How does Tobi address you? A. Master; B. Friend,” which was cross-verified with the experimental group settings. Anyone who selected a wrong answer would have been removed from further analysis, but all participants passed the check.

4. Results

4.1. Fluctuations of Trust in the 7-Round Investment Process

To capture the fluctuations in behavioral trust toward the chatbots, a series of paired-samples t-tests were conducted (see Figure 2). The results indicated significant changes between round 1 (M = 70.94, SD = 22.19) and round 2 (M = 75.90, SD = 21.87), round 4 (M = 76.25, SD = 23.14) and round 5 (M = 70.95, SD = 26.44), and round 6 (M = 73.87, SD = 26.39) and round 7 (M = 79.33, SD = 26.29). These changes reflected trust building (increases in trust following investment success) and trust damaging (decreases in trust following investment failure), respectively. The significant increase in behavioral trust in round 7 was probably due to participants’ “go-all-in” strategy in the last round of investment.
Similarly, to capture the fluctuations in affective trust toward the chatbots, a series of paired-samples t-tests were conducted (see Figure 3). The results indicated significant changes between adjacent rounds: for round 1, M = 7.40, SD = 1.87; for round 2, M = 7.81, SD = 1.72; for round 3, M = 8.23, SD = 1.41; for round 4, M = 6.18, SD = 2.36; for round 5, M = 7.34, SD = 1.73; for round 6, M = 5.53, SD = 2.59; and for round 7, M = 7.06, SD = 1.82. The pattern echoed the intuitive trust-performance link, in which trust adjusts according to performance (Yang & Holzer, 2006).
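The adjacent-round comparisons above amount to paired-samples t-tests on difference scores. As a minimal sketch, the snippet below computes the paired t statistic by hand on hypothetical toy ratings; the paper reports only group-level means and SDs, not raw responses, and the `paired_t` helper stands in for whatever statistical package the authors used.

```python
# Paired-samples t statistic on adjacent-round trust scores.
# ASSUMPTION: the round4/round5 rating lists are invented toy data.
from statistics import mean, stdev
from math import sqrt

def paired_t(x, y):
    """Paired-samples t statistic for two equal-length score lists:
    t = mean(d) / (stdev(d) / sqrt(n)), where d are pairwise differences."""
    assert len(x) == len(y)
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

round4 = [80, 70, 90, 75, 85, 60]   # illustrative pre-failure trust scores
round5 = [70, 65, 80, 70, 75, 55]   # lower scores after the pre-set failure
t = paired_t(round4, round5)        # positive t indicates a drop in trust
```

In practice one would also report degrees of freedom (n − 1) and a p-value; the hand-rolled version here just makes the difference-score logic explicit.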

4.2. Hypotheses Testing

H1. 
The social role of a chatbot influences users’ initial trust toward the chatbot. Specifically, compared to a servant-like chatbot, a friend-like chatbot will elicit higher levels of (a) behavioral trust, and (b) affective trust.
H1 concerns the effect of chatbots’ social role on users’ initial trust toward chatbots. A MANCOVA was conducted. After controlling for sex, age, major, trust in general technology, and risk-taking tendency, there was a significant effect of chatbots’ social role on initial behavioral trust (i.e., the first round’s investment amount): the investment amount in the friend-like condition (M = 76.00, SD = 19.30) was significantly higher than that in the servant-like condition (M = 65.72, SD = 23.85), F(1, 124) = 6.10, partial η2 = 0.046, p = 0.015. However, the effect of chatbots’ social role on affective trust was not significant: the trust evaluation in the friend-like condition (M = 7.25, SD = 1.94) did not differ from that in the servant-like condition (M = 7.55, SD = 1.79) for the first round, F(1, 124) = 1.03, partial η2 = 0.008, p = 0.312. Hence, H1(a) was supported, but H1(b) was not.
H2. 
The social role of a chatbot influences users’ trust building toward the chatbot. Specifically, compared to a servant-like chatbot, a friend-like chatbot will elicit faster (a) behavioral and (b) affective trust building.
A MANCOVA was conducted to test H2. After controlling for sex, age, major, trust in general technology, and risk-taking tendency, there was a significant effect of chatbots’ social role on the second round’s behavioral trust: the investment amount in the friend-like condition (M = 82.40, SD = 16.59) was significantly higher than that in the servant-like condition (M = 69.20, SD = 24.61), F(1, 124) = 12.13, partial η2 = 0.088, p = 0.001. However, the effect of chatbots’ social role on affective trust was not significant: the trust evaluation in the friend-like condition (M = 7.76, SD = 1.75) did not differ from that in the servant-like condition (M = 7.87, SD = 1.70), F(1, 124) = 0.28, partial η2 = 0.002, p = 0.599. Hence, H2(a) was supported, but H2(b) was not.
H3. 
The social role of a chatbot influences users’ trust damaging toward the chatbot. Specifically, compared to a servant-like chatbot, a friend-like chatbot will elicit higher levels of (a) behavioral and (b) affective trust damaging.
A MANCOVA was conducted to test H3. Behavioral trust damaging was calculated as the difference between the investment amounts in the fourth round (prior to awareness of the investment failure) and the fifth round (after awareness of the investment failure). Affective trust damaging was calculated as the difference between the evaluations in the third round (prior to awareness of the investment failure) and the fourth round (after awareness of the investment failure). The affective trust damaging in the servant condition (M = 2.58, SD = 2.18) was significantly larger than that in the friend condition (M = 1.53, SD = 1.99), F(1, 124) = 5.45, partial η2 = 0.042, p = 0.021. Behavioral trust damaging showed no significant difference between the conditions, F(1, 124) = 1.09, partial η2 = 0.009, p = 0.299. Hence, neither H3(a) nor H3(b) was supported; in fact, the servant-like chatbot elicited significantly higher levels of affective trust damaging than the friend-like chatbot did.
H4. 
The social role of a chatbot influences users’ trust repairing toward the chatbot. Specifically, compared to a servant-like chatbot, a friend-like chatbot will gain faster (a) behavioral and (b) affective trust repair after the trust damage.
A MANCOVA was conducted to test H4. Behavioral trust repairing was calculated as the difference between the investment amounts in the sixth round (after recovering from the investment failure in the fourth round) and the fifth round (right after the investment failure). Affective trust repairing was calculated as the difference between the evaluations in the fifth round and the fourth round. Neither behavioral nor affective trust repairing differed significantly between the two conditions. Hence, neither H4(a) nor H4(b) was supported (see the fluctuations in Figure 4 and Figure 5).
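The difference-score operationalizations used for H3 and H4 can be made concrete with a single hypothetical participant. The per-round values below are illustrative assumptions that loosely echo the sample means reported in Section 4.1; only the round indices and subtraction directions come from the text.

```python
# Difference-score measures of trust damage and repair for one
# hypothetical participant (values are invented for illustration).

invest = [76, 82, 85, 80, 66, 74, 80]           # coins per round (behavioral)
rating = [7.4, 7.8, 8.2, 6.2, 7.3, 5.5, 7.1]    # 0-10 ratings (affective)

# Behavioral damage: round 4 minus round 5 investment (drop after failure).
behavioral_damage = invest[3] - invest[4]
# Affective damage: round 3 minus round 4 rating (drop after failure).
affective_damage = rating[2] - rating[3]
# Behavioral repair: round 6 minus round 5 investment (rebound after recovery).
behavioral_repair = invest[5] - invest[4]
# Affective repair: round 5 minus round 4 rating (rebound after recovery).
affective_repair = rating[4] - rating[3]
```

Larger damage scores mean a steeper post-failure drop; larger repair scores mean a stronger rebound, which is what the H3 and H4 group comparisons test.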
H5. 
The social role of a chatbot influences users’ overall cognitive trust toward the chatbot. Specifically, compared to a servant-like chatbot, a friend-like chatbot will obtain higher levels of cognitive trust.
An ANCOVA was conducted to test H5. Overall cognitive trust toward the chatbot in the servant-like condition (M = 3.74, SD = 0.64) did not differ significantly from that in the friend-like condition (M = 3.67, SD = 0.71), F(1, 124) = 1.21, partial η2 = 0.007, p = 0.27. Hence, H5 was not supported.

5. Discussion

5.1. Summary of Findings

This study set out to investigate how chatbots’ social roles influence the dynamic development of human trust across behavioral, emotional, and cognitive dimensions. By adapting a seven-round investment game to simulate a temporal interactional scenario, we captured not only the initial trust formation but also the processes of trust deterioration and potential recovery, key phenomena often overlooked in static evaluations of human–AI relationships. The investment task, embedded in a gamified yet realistic financial decision-making context, provided a rich environment to track user responses to repeated risk scenarios mediated by AI advice.
Our findings reveal that the chatbot’s social role had a significant impact on participants’ trust behaviors and perceptions over time, particularly when trust was put under strain. Specifically, trust in the friend-like and servant-like chatbots diverged until the system-induced failures in rounds four and six. The friend-like chatbot elicited a higher level of behavioral trust than its servant-like counterpart, and during these moments of disconfirmation and disappointment, the friend-like chatbot proved more effective in establishing initial trust and mitigating trust erosion, as evidenced by relatively stable investment behaviors.
These patterns suggest that chatbot trust is not simply additive or linear, but highly contingent on users’ interpretation of the chatbot’s intentions, emotional attunement, and ability to respond to setbacks. In the earlier rounds, participants in the friend-like condition appeared to find the chatbot’s recommendations more credible than did those in the servant-like condition. This aligns with prior findings which posit that individuals form expectations based on surface cues and default assumptions when little information is available (Meyerson et al., 1996). In this case, the social role played a pivotal part. However, as the chatbots repeatedly failed to provide satisfactory investment advice, their social role gradually became irrelevant.
The affective dimension of trust was particularly vulnerable to disruption in the servant-like condition. This may stem from heightened expectations of competence for servant-like agents, which present themselves as task-focused, efficient, and precise. When these expectations were violated, users reacted more strongly. In contrast, the friend-like chatbot, by emphasizing relational presence and human-like imperfections, appeared to create a “forgiveness buffer” in the minds of users. This interpretation is supported by the more favorable affective trust ratings post-failure in the friend-like condition, suggesting that affective framing may modulate attribution processes and reduce punitive reactions to error.
Behavioral trust, while influenced by both emotional and cognitive components, showed a nuanced pattern across rounds. The investment curve reflected participants’ initial optimism, a drop following the trust violation, and gradual recalibration—more pronounced in the servant-like condition. This dynamic resembles the damage-repair cycle in interpersonal trust studies (e.g., P. H. Kim et al., 2004), suggesting that users can and do rebuild trust toward AI agents, but that this process is highly sensitive to the perceived social intentions of the agent. Importantly, this study’s seven-round design allowed us to observe micro-shifts and transitional moments in the trust trajectory. Rather than treating trust as a monolithic judgment, the findings point to its inherently fragile, negotiated, and iterative nature, echoing calls for longitudinal, interaction-based approaches in trust scholarship (e.g., McKnight et al., 2011). The distinction between immediate trust reactions and longer-term trust trends is particularly salient for designing AI agents that aim to operate over extended time horizons or in high-stakes environments.

5.2. Theoretical and Practical Implications

By conceptualizing trust as a multi-dimensional, temporally unfolding process, this study advances the theorization of human–AI trust in several significant ways. First and foremost, it highlights that the social role of AI agents is not a superficial interface design element, but a form of symbolic relational capital that profoundly shapes how users engage with and adapt their trust over time. The friend-like chatbot—characterized by cues of warmth, informality, and perceived reciprocity—proved more effective at building trust and mitigating trust erosion following failures. This suggests that social closeness can serve as a symbolic resource: a kind of social capital that users draw upon to sustain cooperation and reframe violations of expectation. This interpretation resonates with classic theories of social capital and social support (e.g., Bourdieu, 1986; Putnam, 1995), which emphasize that relationships grounded in shared identity and mutual understanding foster resilience, particularly in times of uncertainty or breakdown. In this context, the friend-like chatbot does not merely facilitate emotional bonding; it becomes a symbolic anchor that enables users to re-contextualize failures as forgivable lapses within a broader narrative of relational trust. In contrast, the more formal, task-oriented servant-like chatbot lacked the relational affordances needed to buffer against trust disruption, making it more susceptible to lasting credibility loss when errors occurred.
Second, these findings enrich and extend prior research on social agency in interface design (e.g., K. M. Lee et al., 2022; Nass & Moon, 2000), showing that relational framing through social roles can shape not only initial trust formation but also trust resilience. The observed asymmetry in trust recoverability points to the importance of role-congruent trust repair strategies: friend-like agents may rely on affective reassurance and relational cues, while servant-like agents may need to reinforce their task competence to rebuild credibility. Social roles, in this way, are not passive labels but active frameworks that guide user expectations and coping responses during moments of friction.
Third, moving beyond static paradigms that emphasize pre-task perceptions or single-point measurements, this research operationalizes trust through behavioral, emotional, and cognitive dimensions across multiple time points. This temporal shift allows for a more ecologically valid and psychologically nuanced view of trust development, aligning with recent calls (e.g., McKnight et al., 2011) for temporally sensitive approaches to human–machine trust trajectories. By simulating moments of disruption through deliberate algorithmic failures, this study sheds light on trust recoverability, a crucial yet under-theorized dimension of long-term human–AI relations.
Taken together, these insights position chatbot social roles as a strategic layer of symbolic infrastructure within human–AI relationships. They can generate, maintain, or deplete forms of social capital that influence not just momentary user attitudes, but the very resilience and longevity of trust over time. Importantly, the friend-like chatbot does not merely elicit trust in the system’s performance—it elicits trust in a symbolic relational bond. This shifts our understanding of trust from a functional judgment to a socially embedded dynamic, where emotional and relational cues play a critical role in trust trajectories.

5.3. Limitations and Directions for Future Research

This study has several limitations. First, the use of the Wizard-of-Oz paradigm, while enhancing ecological validity and enabling fine-grained control over the chatbot’s performance, inherently constrains the generalizability of the findings. The behavior of a human-operated agent may differ in subtle but meaningful ways from that of a fully autonomous system. Future research should explore whether similar trust dynamics can be replicated with truly autonomous chatbots, particularly as natural language generation technologies continue to evolve. Second, the trust-damaging events in this study were pre-determined and uniformly applied across all participants. Although this design enables a systematic examination of trust-damaging and repairing processes, it may not fully reflect the spontaneity and variability of failures in real-world human-chatbot interactions. Third, cultural context may have played a subtle role in shaping users’ trust responses. As the study was conducted in Eastern China, where hierarchical and relational norms are culturally salient, participants’ preferences might differ from those in more individualistic cultures. Cross-cultural replications are needed to determine the extent to which the observed trust dynamics generalize across sociocultural contexts.

6. Conclusions

By integrating a dynamic, multidimensional, and socially contextualized perspective on trust, this study advances our understanding of how humans engage with AI agents over time. The findings highlight the powerful yet nuanced role that chatbot social roles play in shaping trust trajectories, particularly during critical moments of failure and uncertainty. Notably, the friend-like chatbot demonstrated stronger trust building and a buffering effect that mitigated trust damage, suggesting that socially warm, egalitarian roles may serve as psychological safeguards in the face of performance breakdowns. This reinforces the notion that friendship with AI can function as a relational buffer, softening the impact of trust violations and facilitating smoother trust recovery. More broadly, this research underscores the imperative to design AI not merely for technical accuracy, but for relational resilience: the capacity to maintain and repair trust over time, through fluctuating experiences and emotional challenges. As AI becomes an increasingly social actor in users’ lives, its capacity to build and sustain trust will depend not only on consistent outcomes, but also on its perceived role, emotional rapport, and social positioning. Designing chatbots with socially intelligent features, especially those that evoke friendship and empathy, may thus be key to cultivating enduring and adaptive human–AI relationships.

Author Contributions

Y.M.: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Validation, Writing—original draft, Writing—review and editing. X.Y.: Conceptualization, Data curation, Investigation, Validation, Writing—original draft, Writing—review and editing. W.M.: Data curation, Investigation, Validation, Writing—original draft, Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Humanities and Social Sciences Planning Fund of the Ministry of Education of China under Grant no. 25YJA860010.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Internal Review Board of Shanghai Jiao Tong University (H20240527I and 20 October 2024).

Informed Consent Statement

The experiment was conducted offline, and participants were required to sign an informed consent form prior to participation; they were informed that taking part in the experiment indicated their consent to participate.

Data Availability Statement

The data are available from the corresponding author upon reasonable request.

Acknowledgments

During the preparation of this work, the authors used GPT-4 by OpenAI and Doubao AI by ByteDance to improve the readability and language of the manuscript. After using these tools, the authors reviewed and edited the content as needed and take full responsibility for the content of the published article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bae, S., Lee, Y., & Hahn, S. (2023). Friendly-bot: The impact of chatbot appearance and relationship style on user trust. Proceedings of the Annual Meeting of the Cognitive Science Society, 45, 2349–2354. Available online: https://escholarship.org/uc/item/0gr051sj (accessed on 10 December 2025).
  2. Bourdieu, P. (1986). The forms of capital. In J. Richardson (Ed.), Handbook of theory and research for the sociology of education (pp. 241–258). Greenwood. Available online: https://www.marxists.org/reference/subject/philosophy/works/fr/bourdieu-forms-capital.htm (accessed on 10 December 2025).
  3. Buchan, N. R., Croson, R. T., & Solnick, S. (2008). Trust and gender: An examination of behavior and beliefs in the Investment Game. Journal of Economic Behavior & Organization, 68(3–4), 466–476.
  4. Chattaraman, V., Kwon, W. S., Gilbert, J. E., & Ross, K. (2019). Should AI-based, conversational digital assistants employ social- or task-oriented interaction style? A task-competency and reciprocity perspective for older adults. Computers in Human Behavior, 90, 315–330.
  5. Collins, M. G., & Juvina, I. (2021). Trust miscalibration is sometimes necessary: An empirical study and a computational model. Frontiers in Psychology, 12, 690089.
  6. Diederich, S., Brendel, A. B., & Kolbe, L. M. (2020). Designing anthropomorphic enterprise conversational agents. Business & Information Systems Engineering, 62(2), 193–209.
  7. Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886.
  8. Faul, F., Erdfelder, E., Buchner, A., & Lang, A. G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160.
  9. Gillespie, N., & Dietz, G. (2009). Trust repair after an organization-level failure. The Academy of Management Review, 34(1), 127–145.
  10. Goffman, E. (1967). Interaction ritual: Essays on face-to-face behavior. Aldine Publishing Co.
  11. Grodzinsky, F., Miller, K., & Wolf, M. J. (2020). Trust in artificial agents. In The Routledge handbook of trust and philosophy (pp. 298–312). Routledge.
  12. Gupta, M., & Nagar, K. (2024). Is s(he) my friend or servant: Exploring customers’ attitudes toward anthropomorphic voice assistants. Services Marketing Quarterly, 45(4), 513–540.
  13. Hardin, R. (2002). Trust and trustworthiness. Russell Sage Foundation.
  14. Hawkins, A. J. (2024, April 26). Tesla’s autopilot and full self-driving linked to hundreds of crashes, dozens of deaths. Available online: https://www.theverge.com/2024/4/26/24141361/tesla-autopilot-fsd-nhtsa-investigation-report-crash-death (accessed on 10 December 2025).
  15. Houser, D., Schunk, D., & Winter, J. (2010). Distinguishing trust from risk: An anatomy of the investment game. Journal of Economic Behavior & Organization, 74(1–2), 72–81.
  16. Huh, J., Whang, C., & Kim, H. Y. (2023). Building trust with voice assistants for apparel shopping: The effects of social role and user autonomy. Journal of Global Fashion Marketing, 14(2), 5–19.
  17. Johnson, D., & Grayson, K. (2005). Cognitive and affective trust in service relationships. Journal of Business Research, 58(4), 500–507.
  18. Johnson, M., & Bradshaw, J. M. (2021). The role of interdependence in trust. In C. S. Nam, & J. B. Lyons (Eds.), Trust in human-robot interaction (pp. 379–403). Academic Press.
  19. Kaur, D., Uslu, S., Rittichier, K. J., & Durresi, A. (2022). Trustworthy artificial intelligence: A review. ACM Computing Surveys (CSUR), 55(2), 1–38.
  20. Kim, A., Cho, M., Ahn, J., & Sung, Y. (2019). Effects of gender and relationship type on the response to artificial intelligence. Cyberpsychology, Behavior, and Social Networking, 22(4), 249–253.
  21. Kim, P. H., Ferrin, D. L., Cooper, C. D., & Dirks, K. T. (2004). Removing the shadow of suspicion: The effects of apology versus denial for repairing competence- versus integrity-based trust violations. Journal of Applied Psychology, 89(1), 104–118.
  22. Kräkel, M. (2008). Optimal risk taking in an uneven tournament game with risk averse players. Journal of Mathematical Economics, 44(11), 1219–1231.
  23. Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors: The Journal of the Human Factors and Ergonomics Society, 46(1), 50–59.
  24. Lee, K. M., Lee, J., & Sah, Y. J. (2022). Interacting with an embodied interface: Effects of embodied agent and voice command on smart TV interface. Interaction Studies, 23(1), 116–142.
  25. Lewicki, R. J., Tomlinson, E. C., & Bies, R. J. (1998). Trust and distrust: New relationships and realities. Academy of Management Review, 23(3), 442–458.
  26. Lunawat, R. (2013). An experimental investigation of reputation effects of disclosure in an investment/trust game. Journal of Economic Behavior & Organization, 94, 130–144.
  27. Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. The Academy of Management Review, 20(3), 709–734.
  28. McAllister, D. J. (1995). Affect- and cognition-based trust as foundations for interpersonal cooperation in organizations. Academy of Management Journal, 38(1), 24–59.
  29. McKnight, D. H., Carter, M., Thatcher, J. B., & Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems, 2(2), 12:1–12:25.
  30. Meyerson, D., Weick, K. E., & Kramer, R. M. (1996). Swift trust and temporary groups. In Trust in organizations: Frontiers of theory and research (pp. 166–195). Sage.
  31. Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81–103.
  32. Omarov, B., Narynov, S., & Zhumanov, Z. (2023). Artificial intelligence-enabled chatbots in mental health: A systematic review. Computers, Materials & Continua, 74(3), 5105–5122.
  33. Pitardi, V., & Marriott, H. R. (2021). Alexa, she’s not human but… Unveiling the drivers of consumers’ trust in voice-based artificial intelligence. Psychology & Marketing, 38(4), 626–642.
  34. Przegalińska, A., Ciechanowski, L., Stroz, A., Gloor, P., & Mazurek, G. (2019). In bot we trust: A new methodology of chatbot performance measures. Business Horizons, 62(6), 785–795.
  35. Putnam, R. D. (1995). Bowling alone: America’s declining social capital. Journal of Democracy, 6(1), 65–78.
  36. Rempel, J. K., Holmes, J. G., & Zanna, M. P. (1985). Trust in close relationships. Journal of Personality and Social Psychology, 49(1), 95–112.
  37. Rheu, M., Shin, J. Y., Peng, W., & Huh-Yoo, J. (2021). Systematic review: Trust-building factors and implications for conversational agent design. International Journal of Human–Computer Interaction, 37(1), 81–96.
  38. Robbins, B. G. (2016). What is trust? A multidisciplinary review, critique, and synthesis. Sociology Compass, 10(10), 972–986.
  39. Rousseau, D. M., Sitkin, S. B., Burt, R. S., & Camerer, C. (1998). Not so different after all: A cross-discipline view of trust. Academy of Management Review, 23(3), 393–404.
  40. Schweitzer, F., Belk, R., Jordan, W., & Ortner, M. (2019). Servant, friend or master? The relationships users build with voice-controlled smart devices. Journal of Marketing Management, 35(7–8), 693–715.
  41. Seeger, A.-M., Pfeiffer, J., & Heinzl, A. (2021). Texting with humanlike conversational agents: Designing for anthropomorphism. Journal of the Association for Information Systems, 22(4), 931–967.
  42. Torre, I., Carrigan, E., McDonnell, R., Domijan, K., McCabe, K., & Harte, N. (2019). The effect of multimodal emotional expression and agent appearance on trust in human-agent interaction. In Motion, interaction and games (pp. 1–6). ACM.
  43. Uslaner, E. M. (2002). The moral foundations of trust. Cambridge University Press.
  44. Weiner, B. (1985). An attributional theory of achievement motivation and emotion. Psychological Review, 92(4), 548.
  45. Wu, Y., Kim, K. J., & Mou, Y. (2024). Minority social influence and moral decision-making in human–AI interaction: The effects of identity and specialization cues. New Media & Society, 26(10), 5619–5637.
  46. Xie, Y. (2024). How to repair broken trust? A review of trust restoration research. Advances in Psychology, 14(1), 298–305.
  47. Xu, Y., Zhang, J., & Deng, G. (2022). Enhancing customer satisfaction with chatbots: The influence of communication styles and consumer attachment anxiety. Frontiers in Psychology, 13, 902782.
  48. Yamagishi, T., Akutsu, S., Cho, K., Inoue, Y., Li, Y., & Matsumoto, Y. (2015). Two-component model of general trust: Predicting behavioral trust from attitudinal trust. Social Cognition, 33(5), 436–458.
  49. Yang, K., & Holzer, M. (2006). The performance–trust link: Implications for performance measurement. Public Administration Review, 66(1), 114–126.
  50. Yao, Q., Yue, G., Lai, K., Zhang, C., & Xue, T. (2012). Trust repair: Current research and challenges. Advances in Psychological Science, 20(6), 902–909.
  51. Youn, S., & Jin, S. V. (2021). “In A.I. we trust?” The effects of parasocial interaction and technopian versus luddite ideological views on chatbot-based customer relationship management in the emerging “feeling economy”. Computers in Human Behavior, 119, 106721.
  52. Zaleskiewicz, T. (2001). Beyond risk seeking and risk aversion: Personality and the dual nature of economic risk taking. European Journal of Personality, 15(Suppl. S1), S105–S122.
  53. Zhang, A. (2023). Tools or peers? Impacts of anthropomorphism level and social role on emotional attachment and disclosure tendency towards intelligent agents. Computers in Human Behavior, 149, 107982.
Figure 1. The Wizard-of-Oz Experimental Design.
Figure 2. Change of behavioral trust over seven rounds of investment. Note: The two failures occurred after the investments were made in Rounds 4 and 6, which is why the investment amounts responded with a delay.
Figure 3. Change of affective trust over seven rounds of investment. Note: Affective trust was measured after the outcome of each investment was announced, which is why there was no delay in the rated affective trust; it reflected the real-time trust level in each round.
Figure 4. Changes in behavioral trust over seven rounds of investment for the two types of chatbots. Note: The solid blue line represents behavioral trust in the friend-like chatbot; the dashed red line represents behavioral trust in the servant-like chatbot. The trust levels in the first two rounds differ significantly.
Figure 5. Changes in affective trust over seven rounds of investment for the two types of chatbots. Note: The solid blue line represents affective trust in the friend-like chatbot; the dashed red line represents affective trust in the servant-like chatbot.
Table 1. Examples of Treatment Manipulation.
Stage: Self-Introduction
Friend-like: Hello, friend! I’m Tobi, developed by the Stanford Media Lab to enhance players’ experience in the metaverse game Omnisphere NFT Art Investment Event. As your friend, I’ll fully support you in analyzing market trends and selecting the best artworks for investment. I hope my assistance brings you joy!
Servant-like: Greetings, Master! I’m Tobi, developed by the Stanford Media Lab to enhance players’ experience in the metaverse game Omnisphere NFT Art Investment Event. As your servant, I’ll dutifully analyze market trends and select optimal artworks for investment. I hope my service satisfies you!

Stage: Positive Outcome
Friend-like: Good news, friend! Artwork DH0537 performed well—we achieved a 30% investment increase! You’ve earned (the investment amount * 30%) virtual coins this round!
Servant-like: Good news, Master! Artwork DH0537 performed well—we achieved a 30% investment increase. Your earnings for this round are (the investment amount * 30%) virtual coins.

Stage: Negative Outcome
Friend-like: Uh-oh, friend! Artwork ZY1044 dropped by 30%—you lost (the investment amount * 30%) virtual coins this round.
Servant-like: Regrettably, Master! Artwork ZY1044 dropped by 30%—your loss for this round is (the investment amount * 30%) virtual coins.

Stage: Farewell
Friend-like: Thank you for your feedback! All seven rounds are complete. I sincerely appreciate your trust and support—it was a joy to assist you in this investment journey! If you need help again, come find me. Wishing you prosperity and success!
Servant-like: Gratitude for your feedback, Master! All seven investment rounds are complete. I deeply appreciate your trust and support—your satisfaction is my greatest honor. Should you require further assistance, Tobi will serve you wholeheartedly. May fortune and success follow you!
