Article

Negative Expressions by Social Robots and Their Effects on Persuasive Behaviors

by Chinenye Augustine Ajibo 1,*, Carlos Toshinori Ishi 1,2 and Hiroshi Ishiguro 1

1 Hiroshi Ishiguro Lab., ATR, Hikaridai, 2-2-2 Seika-cho, Kyoto 619-0288, Japan
2 Guardian Robot Project, RIKEN, ATR, Hikaridai, 2-2-2 Seika-cho, Kyoto 619-0288, Japan
* Author to whom correspondence should be addressed.
Electronics 2025, 14(13), 2667; https://doi.org/10.3390/electronics14132667
Submission received: 30 May 2025 / Revised: 25 June 2025 / Accepted: 29 June 2025 / Published: 1 July 2025
(This article belongs to the Special Issue Advancements in Robotics: Perception, Manipulation, and Interaction)

Abstract

The ability to effectively engineer robots with appropriate social behaviors that conform to acceptable social norms and with the potential to influence human behavior remains a challenging area in robotics. Given this, we sought to provide insights into “what can be considered a socially appropriate and effective behavior for robots charged with enforcing social compliance of various magnitudes”. To this end, we investigate how social robots can be equipped with context-inspired persuasive behaviors for human–robot interaction. For this, we conducted three separate studies. In the first, we explored how the android robot “ERICA” can be furnished with negative persuasive behaviors using a video-based within-subjects design with N = 50 participants. Through a video-based experiment employing a mixed-subjects design with N = 98 participants, we investigated how the context of norm violation and individual user traits affected perceptions of the robot’s persuasive behaviors in the second study. Lastly, we investigated the effect of the robot’s appearance on the perception of its persuasive behaviors, considering two humanoids (ERICA and CommU) through a within-subjects design with N = 100 participants. Findings from these studies generally revealed that the robot could be equipped with appropriate and effective context-sensitive persuasive behaviors for human–robot interaction. Specifically, the more assertive behaviors (displeasure and anger) of the agent were found to be effective (p < 0.01) as a response to a situation of repeated violation after an initial positive persuasion. Additionally, the appropriateness of these behaviors was found to be influenced by the severity of the violation. Specifically, negative behaviors were preferred for persuasion in situations where the violation affects other people (p < 0.01), as in the COVID-19 adherence and smoking prohibition scenarios. Our results also revealed that the preference for the negative behaviors of the robots varied with users’ traits, specifically compliance awareness (CA), agreeableness (AG), and the robot’s embodiment. The current findings provide insights into how social agents can be equipped with appropriate and effective context-aware persuasive behaviors. They also suggest the relevance of a cognitive-based approach in designing social agents, particularly those deployed in sensitive social contexts.

1. Introduction

The expanding use of social robots across domains of our society is rapidly changing the way we live. These agents (robotic and non-robotic) have been shown to perform tasks usually carried out by humans [1]. Currently, such robots are used in areas where they provide emotional and psychological support to users [2]: in elderly care homes and health care facilities, where they offer companionship [3] and assistive services [4]; in educational institutions, where they provide educational support [5]; and in public spaces, where they function as moral agents and encourage adherence to moral values [6]. In these environments, they are required to be equipped with functions that enable them not only to understand human emotions, moods, and intentions but also to express the same (social intelligence) during human–robot interaction (HRI) [7]. In some cases, they are required to motivate users [8]. However, the dynamism that characterizes human society, ranging from variation in personality and individuality to differences in cultural inclination and orientation, poses a challenge to the development of these agents [9]. Some studies therefore suggest the need for these agents to be autonomous and equipped with techniques to improve their internal capabilities, such as perception, decision-making, planning, and learning.
In human–human interaction (HHI), emotions such as moral anger, guilt, and disgust have been shown to play a critical role in motivating compliance and moral action [10]. Specifically, moral emotions are not only reactive but serve important social-regulatory functions—they signal norm violations and legitimize calls for behavioral change [11]. Additionally, studies on emotion-based persuasion indicate that negative emotions can outperform positive ones in certain contexts, particularly when the emotion is perceived as justified and aligned with social or moral norms [12]. For instance, anger perceived as morally grounded can increase the credibility and urgency of a message, thereby enhancing its persuasive impact [13]. Similarly, guilt appeals have been widely used in compliance and prosocial behavior literature to induce action [14].
In light of the above, some research has suggested the need to equip robots with moral reasoning capabilities [15,16]. Agents endowed with moral reasoning functionalities, also called artificial moral agents (AMA), are virtual agents (software) or physical agents (robots) capable of engaging in ethical behavior(s), or at least of avoiding immoral behavior(s) [17]. These agents serve social functions including preventing harm in public places, facilitating a better understanding of societal values, increasing public trust and confidence in social agents, preventing the immoral use of agents, and aiding a better understanding of human morality [18]. Previous studies have explored the possibilities of developing moral and ethical agents capable of facing moral situations that may arise during HRI [19,20]. For instance, the potential of equipping social agents (robotic and non-robotic) with expressive behaviors, focusing on facial expressions, head movements, utterance content, and body gestures for improved social acceptability, was explored in [21,22]. Moreover, body gestures for emotional and attitudinal expressions by agents were investigated in [23,24]. In the realm of game AI, Mahadevan et al. [25] demonstrated that expressive robot behaviors generated through large language models can significantly enhance user engagement by simulating human-like reactions, including emotional feedback to player actions. Notably, their findings suggest that negative emotional cues—such as frowns, sighs, or verbal disappointment—often lead players to adjust their behavior.
Given this, we situate our research in the domain of social persuasion as captured in Figure 1. We consider persuasive social agents along three dimensions: (a) the embodiment of the agent, (b) the context of persuasion, and (c) the persuasive strategies and the method of generating them. Humanoids with varying embodiments (ERICA and CommU) and persuasive strategies of different levels of assertiveness (negative and non-negative) were evaluated. Concerning the expression modalities and behaviors of the robots, we focused on the audio and visual (facial and body gesture) modalities. The behaviors of the robots were crafted using a rule-based approach in which gesture strokes were generated and inserted on focused words during expression; after each stroke, the robot’s hand(s) were held in place for the remainder of the phrase and returned to the rest position before the next stroke, throughout an entire utterance (see the sketch below). For the context of persuasion, we considered persuasion for social compliance (dieting, COVID-19, and smoking prohibition). These points are highlighted in red, indicated using asterisks (*) and hyphens (-), in Figure 1.
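A minimal sketch of this rule-based stroke/hold/rest scheduling is given below. The event representation, the word-level granularity, and the `schedule_gestures` helper are illustrative assumptions; the actual system drives ERICA’s arm and hand joints rather than emitting labels.

```python
from dataclasses import dataclass

@dataclass
class GestureEvent:
    kind: str        # "stroke", "hold", or "rest"
    word_index: int  # index of the utterance word the event aligns to

def schedule_gestures(words, focus_indices):
    """Insert a gesture stroke on each focused word, hold the hand(s) through
    the following words, and return to the rest position just before the
    next stroke (or at the end of the utterance)."""
    events = []
    focus = sorted(focus_indices)
    for n, idx in enumerate(focus):
        events.append(GestureEvent("stroke", idx))
        next_stroke = focus[n + 1] if n + 1 < len(focus) else len(words)
        for j in range(idx + 1, next_stroke - 1):   # hold phase
            events.append(GestureEvent("hold", j))
        events.append(GestureEvent("rest", max(idx + 1, next_stroke - 1)))
    return events

# example: strokes on "not" and "here" in a prohibition utterance
words = "You should not smoke here".split()
for ev in schedule_gestures(words, [2, 4]):
    label = words[ev.word_index] if ev.word_index < len(words) else "<end>"
    print(ev.kind, label)
```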
Two robotic platforms were employed in this work: ERICA, a highly anthropomorphic female adult-like humanoid with facial degrees of freedom (DoF) and synthetic skin, and CommU, a compact child-like robot with limited DoF and no facial expression capability. We grounded our examination in the following theoretical rationales from HRI: the uncanny valley, morphological priming, and expressive bandwidth. Regarding the uncanny valley and morphological priming, ERICA, with its realistic facial appearance and expressive DoF, may evoke the uncanny valley effect, eliciting more compliance through its behaviors than CommU in some contexts of persuasion [26]. Additionally, ERICA may project more authority than CommU, whose appearance may instead prompt perceptions of innocence and a non-threatening presence [27]. Concerning the facial DoF and expressive bandwidth of the robots, ERICA’s rich facial articulation enables nuanced emotional communication—microexpressions, eyebrow raises, lip movements—whereas CommU is restricted to body posture and vocal tone. This increased expressivity may enhance the perception of ERICA’s persuasive behaviors due to better emotional clarity [28].
Persuasion is considered a vital part of human–human interaction (HHI). It determines how people engage and interact with each other during social encounters. Persuasion is the process of changing a person’s attitude/behavior [29,30]. It is important to mention that significant studies in social psychology have examined several persuasive styles in HHI for different scenarios [31,32]. In HHI contexts, individuals who violate social norms risk facing negative reactions from their social environment, such as receiving an angry glance or experiencing negative persuasive behaviors [33]. Additionally, reactions to norm violators vary based on the observer’s impact from the violation, the type of violation, and the cultural context in which it takes place [34]. Several studies on persuasive robots have been conducted in HRI, and most of them have focused on specific tasks or situations [35,36,37,38]. However, autonomous agents or proactive robots should consider situational awareness to prevent engaging in inappropriate or disruptive behaviors during social interactions [39,40]. Additionally, there is existing evidence of variations in how people perceive expressive social agents. This variation arises because individuals hold different views on how these agents should be regarded in society. While some consider them social entities that should not be equated with humans, others view them purely as machines or devices, rejecting any attribution of human-like or social characteristics [41,42,43]. These differing perceptions also lead to varying opinions on what constitutes appropriate and effective behaviors for social agents in HRI contexts.
In light of this, we sought to provide some insights into what socially appropriate and effective behavior is for agents tasked with enforcing social compliance. In this regard, we conducted three studies to equip social agents with persuasive behaviors (negative and non-negative) for eliciting compliance. In the first study, we explored how the female android robot ERICA can be equipped with negative persuasive behaviors. In the second study, we evaluated the impact of contexts and scenarios on the impression of the robot’s persuasive behaviors. Subsequently, we investigated the effect of the agent’s embodiment and user traits on the perception of the persuasive behaviors by the agents.
The remaining sections of this paper are organized as follows: Section 2 presents a review of relevant literature for framing the studies. In Section 3, the first study is presented, highlighting the hypothesis, experimental design, procedures, results, and discussions. Section 4 presents the second study, pointing out the research hypothesis, experimental design, procedures, results, and discussion. In Section 5, an overview of the third study is presented, highlighting the hypothesis, experimental design, procedures, results, and discussions. The limitations and implications of the research are presented in Section 6 and Section 7, respectively. Finally, in Section 8, conclusions from the studies are highlighted.

2. Related Work

Social agents (robotic and non-robotic) are designed to interact with humans using human-inspired communication channels, whether verbal (linguistic), non-verbal (facial expressions, gestures, gaze, posture, and social touch), or a combination of both. These agents readily elicit social responses in users and, in some instances, respond to users’ emotions, moods, and intentions. As a result, users often attribute human-like qualities to these robots [44].

2.1. Emotional Social Agents

Significant research over the past decades has advanced emotion-based systems in robotics. Recent developments include computer graphics-driven expressive systems for humanoid robots, such as the EMOTION framework, which generates naturalistic facial expressions and gestures using large-language models to enhance non-verbal communication in HRI [45]. Other approaches have adopted animal-inspired expressive behaviors to reinforce emotional communication and improve user perception, e.g., integrating mechanical tail movements and biologically inspired gaze patterns in cat-like robots [46]. Human-like designs have also been adopted for expressing emotions in robots in the real world. The emotional expression exhibited by these robots during interaction has been evaluated to be comprehensible and independent of their characteristic appearance [47]. Considering that these robots are expected to engage in social interactions with humans in the real world, emphasis has been placed on equipping them with a human-like appearance to improve social acceptance [48]. Few studies have examined the importance of balanced emotional expression—incorporating both positive and negative emotions—in social robots. For instance, De Rooij et al. [49] demonstrated that robot facilitators displaying a mix of moods (positive, neutral, and negative) elicited corresponding emotional contagion, which in turn enhanced collaboration, satisfaction, and overall performance in group tasks.
Previous research has demonstrated the feasibility of equipping humanoid robots with natural, human-like positive emotional expressions, including synchronized facial expressions, head movements, vocalizations of surprise, and laughter [50,51]. However, very few studies have attempted to generate natural, human-like anger expressions for HRI. Recently, Nishiwaki et al. [52] investigated the design of anger expressions in robots to counteract abusive interactions, though this work did not target android platforms or integrate full-body gestures.

2.2. Persuasive Social Agents

Persuasion has been established to be a pivotal part of human–human interaction (HHI). It is a strong determinant of how people engage and interact with each other during social encounters. Several persuasive strategies have been examined in social psychology for HHI contexts [53,54]. In the same vein, studies in HRI have advanced the prospects of robots as persuasive technologies for influencing preferences and causing behavioral change in users. For instance, the persuasive strategies of social robots were explored for sustaining daily acceptance of nutrition bar recommendations and encouraging healthy behavior [38]. The persuasive impact of assertiveness in social robots was further explored in [55] to examine how children perceive and respond to assertive versus polite behavior in a robotic companion. The findings demonstrated that robots displaying more assertive behavior were perceived as more competent and influential, effectively shaping children’s decisions more than robots that communicated in a neutral or overly polite manner. Moreover, Gonzalez-Oliveras et al. [56] examined how a robot’s displayed confidence (certain, neutral, or uncertain) influenced high school students’ decision-making during a quiz game. Results from the study revealed that students were most persuaded by the robot when it expressed certainty, compared to the neutral and uncertain conditions; this highlights the importance of verbal and non-verbal confidence for user compliance. In addition, users’ responses to persuasive attempts made by anthropomorphic service robots and human persuaders were investigated in [57]. The study revealed that users with higher persuasion awareness were less likely to be influenced by socially present robots. Furthermore, Getson et al. [58] evaluated how persuasive behavior strategies employed by socially assistive robots could sustain engagement in long-term care environments. Through interviews with caregivers and interaction sessions with older adults, the study concluded that adaptive verbal encouragement and emotionally sensitive behaviors significantly enhanced user engagement and acceptance of the robot. Also, the role of gender cues and cuteness in a robot’s persuasive abilities in a hospitality setting has been examined [59]. Findings indicated that female participants with low self-perceived power were more likely to be persuaded by a robot with male-like characteristics. In addition, Ajibo et al. [6] evaluated how an agent’s perceived positive and negative expressions (utterances and gestures) could influence a user’s social compliance tendency. Findings from the study indicated that participants generally preferred polite behaviors by a robot, although participants with different levels of compliance awareness manifested different trends toward appropriateness and effectiveness for social compliance enforcement through the negative persuasive behaviors of the robot.

2.3. Situation Awareness and Persuasion

Studies in HHI have established that the reactions of individuals in the presence of a norm violation are influenced by several factors. For instance, Vriens et al. [60] identified factors that may determine whether a bystander will speak out in the face of uncivil behavior. These include (1) when such uncivil behavior has a direct impact on the bystander and (2) when there is a relationship between the bystander and the deviant (i.e., the violator). In addition, the study suggested that a bystander is more likely to speak out when he/she was angry before observing the infraction, when the person feels responsible for speaking out, or when the individual has the legitimacy to speak. On the other hand, Mitsuishi et al. [61] opined that the severity of the violation may influence the type of persuasive strategy a bystander adopts in addressing a deviant. This suggests that the appropriateness of a persuasive behavior may be situation-dependent. In a broad sense, “situation awareness” can be formally defined as the “perception of elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their future status” [40]. Studies in HRI suggest that social awareness is not only a central precondition for interaction with the environment in general but also essential in communication, where robots need to estimate their common ground with their interaction partners to determine which knowledge they can rely on and which information they need to provide [39,40]. In light of this, [6] investigated how people would react to the persuasive behaviors (positive and negative) of a social robot while eliciting adherence in various social contexts (home, park, and research institute) and violation scenarios (diet recommendation adherence, smoking prohibition, and COVID-19 guideline adherence), given differences in subjects’ compliance awareness (CA) (i.e., a person’s sense of values regarding the need to adhere to social rules geared towards the well-being of everyone) and agreeableness (AG).

3. Study 1: Negative Persuasive Behavior by Android Robot

3.1. Background and Hypothesis

The increasing prospect of integrating more social robots into human-centered spaces, such as homes, hospitals, malls, and schools, raises the need for these robots to be equipped with balanced (non-negative and negative) human-like behaviors for improved adaptation. Previous studies have explored how these agents can be equipped with non-negative behaviors, with few studies considering negative behaviors for these robots [25]. Inspired by the fact that studies in social sciences suggest that emotional expression plays a significant role in coordinating human social interaction by shaping people’s responses to their social environment, we sought to equip the android robot ERICA with audio-visual negative behavior for persuasion. Since emotional expressions coordinate human social interaction by influencing individual experiences and/or impacting the experiences of others [62], negative expressions may have some social benefits. To this end, we sought to generate natural human-like audio-visual negative behavior (anger) for the robot.
It is important to mention that the robot ERICA has a human-like appearance but is constrained by its limited DoFs; specifically, it has 13 DoFs in the face, 3 DoFs for head motion, 3 DoFs for upper-body motion, and 12 DoFs for each arm/hand. Based on this, we hypothesize that
H1. 
The android robot ERICA may be able to generate natural human-like anger expressions.
H2. 
The appropriateness of the gestures by the robot may be context-dependent.
These hypotheses were partly informed by the work of [63], which demonstrated that a higher DoF significantly enriches a robot’s nonverbal communication capabilities and [40], which suggested that the appropriateness and/or the effectiveness of an agent’s behavior may be situation-dependent.

3.2. Experiment Design and Procedures

For this study, we first carried out an audio-visual analysis of anger expression in HHI. For this, we adopted the MELD corpus, a multi-speaker English-based dataset. We extracted the anger utterances and analyzed the relations between different motion types and speech acts. The motion types and speech acts were labeled by two annotators (with a kappa inter-rater agreement of 0.74). Based on the analysis results, we adopted five dominant gesture types identified as conveying significant anger information in HHI: pointing (P), single-arm swing (SSw), single-arm spread (SSp), both-arm swing (BSw), and both-arm spread (BSp). For the dialogue acts, five dominant categories—declaring, asserting, questioning, suggesting, and disagreeing—were considered for the robot. Using a subjective experiment design in which all subjects watched and evaluated all video clips (anger-based gestures by ERICA: https://drive.google.com/file/d/1p78Qp72Rn7uolEvdRC5u_iG4ZxquAk_S/view?usp=sharing) (accessed on 28 June 2025), we verified the hypotheses (see gesture snapshots in Figure 2). A total of thirty (30) video clips were made, of which five (5) had no gestures (serving as a baseline for comparison), while the remaining twenty-five (25) had both utterances and gestures. For each utterance, the “No gesture” condition was shown first to serve as the baseline for the subject judgments, followed by the five gesture types in a randomized order, as sketched below. The evaluation was carried out using an online form through which subjects remotely viewed and appraised the video clips. A total of N = 50 subjects of American nationality were recruited using Amazon Mechanical Turk (AMT) for the study (35 male and 14 female; ages between 21 and 58 years, M = 35.4, SD = 11.3). After watching each clip, subjects answered the questionnaire (https://forms.gle/w1sCjUc8N1DDkmyS9) (accessed on 28 June 2025) developed for the study.
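The clip sequencing described above can be sketched as follows; the condition labels come from the gesture and dialogue-act categories above, while the function name and seeding are illustrative.

```python
import random

GESTURES = ["P", "SSw", "SSp", "BSw", "BSp"]           # five gesture types
DIALOGUE_ACTS = ["declaring", "asserting", "questioning",
                 "suggesting", "disagreeing"]          # five utterances

def presentation_order(seed=None):
    """Per utterance: the no-gesture baseline clip first, then the five
    gesture clips in a randomized order (5 x 6 = 30 clips in total)."""
    rng = random.Random(seed)
    order = []
    for act in DIALOGUE_ACTS:
        order.append((act, "no_gesture"))   # baseline always shown first
        shuffled = GESTURES[:]
        rng.shuffle(shuffled)               # randomize gesture order
        order.extend((act, g) for g in shuffled)
    return order

print(presentation_order(seed=1)[:6])  # the six clips for the first utterance
```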

3.3. Results and Discussions

Statistical significance tests were conducted through one-way repeated-measures ANOVA on the subjective scores. Significant differences after multiple comparisons (Ryan’s method) are indicated by p < 0.05. The analysis of the subjective responses for each hypothesis is presented below; a sketch of a comparable analysis pipeline follows.
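A comparable analysis can be sketched in Python as follows. The ratings are simulated placeholders, pingouin is an assumed library choice, and since Ryan’s method is uncommon in Python packages, Holm correction is shown as a stand-in for the pairwise follow-ups.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
conditions = ["no_gesture", "P", "SSw", "SSp", "BSw", "BSp"]

# simulated long-format ratings: 50 subjects x 6 gesture conditions
df = pd.DataFrame([
    {"subject": s, "gesture": g, "anger": int(rng.integers(1, 8))}
    for s in range(50) for g in conditions
])

# one-way repeated-measures ANOVA over the gesture conditions
aov = pg.rm_anova(data=df, dv="anger", within="gesture", subject="subject")

# pairwise follow-ups with Holm correction (stand-in for Ryan's method)
post = pg.pairwise_tests(data=df, dv="anger", within="gesture",
                         subject="subject", padjust="holm")
print(aov.round(3))
print(post[["A", "B", "p-corr"]].head())
```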
The result, as shown in the left panel (a) of Figure 3, revealed that the average scores of the generated motions are significantly higher than those of the no-gesture condition (where the impression is based on the voice alone). This indicates that gestures accompanying anger utterances effectively increase the perceived degree of anger for all gesture types. This finding is consistent with H1, as it suggests that the android robot, using a combination of its utterances and gestures, can generate natural human-like anger expressions. It also aligns with prior research. Li et al. [63] demonstrated that increased degrees of freedom (DoF) substantially enhance a robot’s capacity for nonverbal communication, while Fiorini et al. [41] showed that embodiment—especially through natural gestures and expressive behaviors—significantly improves users’ emotional engagement and the perceived social presence of the robot.
In addition, subjective results as captured in panels (b–d) of Figure 3 revealed different trends in the appropriateness of the gesture types for different dialogue acts. For the declaring behavior, a significant difference was found between BSp and the other gesture types—BSw, P, and SSp (p < 0.05). These results indicate that BSp gestures may be considered more appropriate for the robot when making an anger-based declaration. Furthermore, for the disagreeing behavior, it can be observed that BSp and BSw are adjudged to be more appropriate than P and SSp (p < 0.05). This implies that either the BSp or BSw may also be regarded as being appropriate for the robot while expressing anger-based disagreement. Concerning the suggestive utterance, the BSp had a significant difference from SSp and SSw gestures (p < 0.05). These results indicate that the BSp gestures may be considered more appropriate, while the SSw gestures may be less appropriate for the robot when suggesting angrily. These findings are consistent with the work of Senaratne et al. [40], who proposed that the perceived appropriateness of an agent’s behavior is influenced by the contextual factors surrounding the interaction.

4. Study 2: Robot’s Persuasive Behaviors and Contexts of Violation

4.1. Background and Hypothesis

Studies in HHI suggest that the reactions of bystanders in the presence of a norm violation may be influenced by factors including (1) whether the uncivil behavior has a direct impact on the bystander, (2) whether there is a relationship between the bystander and the violator [60], and (3) whether the bystander has the legitimacy to speak in such a context [61]. Motivated by this, we investigate how the persuasive behavior of a social robot in response to social violations is perceived relative to the context of the violation.
Situation awareness (SA) has been found to rely on the ability to perceive elements and events in order to better understand the actual situation of a scenario and the people in it, so as to make informed decisions or take action [40]. Some studies in HRI suggest that SA is not only a precondition for interaction with the environment in general but also essential in communication, where robots need to estimate their common ground with their interaction partners to determine which knowledge they can rely on and which information they need to provide [39,40]. This implies that the appropriateness and/or effectiveness of an agent’s behavior may be situation-dependent. Based on this background, we hypothesize that
H3. 
The perception of an agent’s persuasive behavior toward a violator may vary depending on the context of the violation.
H4. 
The perception of an agent’s persuasive behavior directed toward a violator may be influenced by individual differences in trait tendencies.
Specifically, H4 was formulated based on the works by [64], which suggests that people high on the agreeableness (AG) trait have high trust in others, including humanoid robots, which attract more trustworthiness due to their human-likeness and personification. Additionally, people with high AG have high empathetic concerns and a high tendency to advance social cohesion, and as such, may consider some behaviors appropriate for the robot in a specific context [6].

4.2. Experiment Design and Procedures

Previous studies in HRI suggest that robots that use multimodal behaviors (gaze, body language/gestures, speech) are more engaging and persuasive during interactions [65]. Based on this, we employed the female-type android robot ERICA to evaluate participants’ impressions of four persuasive behaviors—politeness, logical reasoning, expressions of displeasure, and anger—across three norm-violation contexts: adherence to diet recommendations, smoking prohibition, and compliance with COVID-19 guidelines. The selection of the three contexts was intended, in part, to examine how the perceived impact of a norm violation—whether primarily self-directed (as in diet recommendation adherence) or other-directed (as in smoking prohibition and adherence to COVID-19 guidelines)—influences individuals’ preferences for different persuasive strategies as shown in Figure 4.
Regarding participants’ traits and how they impact preferences for robot behavior, prior research in HRI indicates that individuals’ perceptions of robot behavior can vary based on personality characteristics. For instance, extroversion has been linked to preferences for closer robot approach distances, while openness has been associated with favorable evaluations of robot interfaces in cognitive tasks [66]. Drawing on these findings, the present study considers the influence of the Big Five personality trait of agreeableness (AG) and the construct of compliance awareness (CA). Individuals scoring high in AG are generally courteous and expect similar treatment from others [67]. Likewise, those high in CA have been found to prefer behaviors that promote adherence to established norms.
We conducted a video-based subjective experiment utilizing the services of Amazon Mechanical Turk (AMT). N = 98 participants residing in the United States (68 male and 30 female; ages between 21 and 77, M = 37.6, SD = 10.51) were recruited for the study. Informed consent was obtained from all participants, and all collected data were anonymized before analysis.
Before watching the videos, participants answered questions that evaluated their values (CA) regarding the importance and need to adhere to rules: social norms in general, medical recommendations, public rules, and WHO COVID-19 guidelines. Twenty questions were administered: five for each category of rules, and the answers were on a five-point scale where 1 indicates “strongly disagree” and 5 is “strongly agree” (https://forms.gle/YjD2te5h59DhzqReA) (accessed on 28 June 2025).
Next, the participants received a description of each scenario before watching the video clips for the scenario. Then, participants watched five videos in each scenario. The first was an introduction to the context and had a polite reminder/appeal to the violator. The subsequent videos for the four persuasive behaviors (polite, logical, displeased, and angry) were presented in a randomized fashion in the different scenarios to minimize possible carry-over effects.
After watching each video clip (persuasive behaviors by ERICA: https://drive.google.com/file/d/1wpp8ukBm7PFJm52xhQaTrvshHN9DhJoZ/view?usp=sharing8) (accessed on 28 June 2025), the participants evaluated the agent’s behaviors in terms of the following impression items: likeness for the agent as a member of the community, appropriateness, effectiveness, and competence of the agent’s persuasive behaviors, and participant’s willingness to comply through the persuasive behaviors in the given scenario.
Finally, participants were examined in terms of their personality trait tendencies. We utilized 20 questions from the Big Five questionnaire (https://forms.gle/34Xv6pwi9duQf5JAA) (accessed on 28 June 2025).

4.3. Results

We split participants into groups based on compliance awareness (CA) and agreeableness (AG) scores, drawing on the trait categorization approach proposed in [68]. We first estimated the mean score of each participant based on the responses to the questions assessing CA and AG. Based on the distribution of the mean scores for each category, we set thresholds as shown in Table 1. This ensured that participants were fairly split into two groups; a sketch of this procedure is given below.
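The following is a minimal sketch of this scoring and thresholding, assuming the 20-item, five-point CA questionnaire described above and using a median split as a stand-in (the actual thresholds are reported in Table 1, set from the observed distribution).

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# one row per participant; q1..q20 are 1-5 Likert responses (simulated)
responses = pd.DataFrame(rng.integers(1, 6, size=(98, 20)),
                         columns=[f"q{i}" for i in range(1, 21)])

# per-participant mean score across the 20 CA items
ca_mean = responses.mean(axis=1)

# threshold chosen from the score distribution so the two groups are
# roughly balanced; a median split is used here as an assumption
threshold = ca_mean.median()
groups = np.where(ca_mean >= threshold, "HCA", "LCA")
print(pd.Series(groups).value_counts())
```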
Subsequently, we carried out a three-way mixed ANOVA to check for interaction effects between the independent variables, which in this context are the subject traits (CA or AG), the scenario types, and the persuasive behaviors, on each of the impression items (likeness, appropriateness, effectiveness, competence, and willingness). For the statistical analysis, we used JavaScript-STAR (https://www.kisnet.or.jp/nappa/software/star/) (accessed on 28 June 2025), an open-source program for statistical analysis developed by Satoshi Tanaka at Joetsu University of Education. It is important to note that the three-way ANOVA revealed no significant three-way interaction effects across the impression items. However, significant two-way interaction effects were observed for the majority of impression items, as presented in Table 2. The main effects of the independent variables are summarized in Table 3. Additionally, multiple comparisons were conducted using the Holm method, and effect sizes were computed in terms of Cohen’s d for impression items with significant interaction effects (see the sketch below).
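JavaScript-STAR was used for the actual analysis; purely as an illustration of the follow-up comparisons, the sketch below runs a two-way mixed ANOVA (trait group between, behavior within) with Holm-corrected pairwise tests and Cohen’s d, using the assumed pingouin library on simulated data. The full three-way design is not covered by this function, so scenarios would be analyzed per level here.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(2)
behaviors = ["polite", "logical", "displeased", "angry"]

# simulated long-format ratings for a single scenario:
# 98 participants (split into CA groups) x 4 persuasive behaviors
df = pd.DataFrame([
    {"pid": p, "ca_group": "HCA" if p < 49 else "LCA",
     "behavior": b, "rating": int(rng.integers(1, 8))}
    for p in range(98) for b in behaviors
])

# two-way mixed ANOVA: trait group (between) x behavior (within)
aov = pg.mixed_anova(data=df, dv="rating", within="behavior",
                     between="ca_group", subject="pid")

# Holm-corrected multiple comparisons with Cohen's d effect sizes
post = pg.pairwise_tests(data=df, dv="rating", within="behavior",
                         between="ca_group", subject="pid",
                         padjust="holm", effsize="cohen")
print(aov.round(3))
```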
Figure 5 shows the mean subjective preferences for the different behaviors across the AG groups (LAG and HAG), regardless of the scenario. Preferences for the compliance awareness (CA) groups were deliberately excluded, as their distribution patterns closely mirrored those observed in the AG groups. A Pearson correlation test conducted between the two trait scores revealed a negative correlation, r(96) = −0.41, p < 0.01, so that LCA ≈ HAG and HCA ≈ LAG.

4.4. Discussions Relative to Hypothesis

First, regarding the differences among the three scenarios, which were designed to reflect differences in the context of the violation, our evaluation results indicated that regardless of the subject traits, subjects gave higher scores to the negative behaviors in S3 (COVID-19 guidelines) and S2 (smoking prohibition). Given the variations in preferences for the negative behaviors of the robot relative to the scenarios, one can infer that it is appropriate for an agent to use a negative attitude to persuade a violator, provided the violator has first been persuaded politely. Additionally, regardless of the subjects’ trait groups, the most negative attitude (anger) was most favored for persuading in S3. This may be attributable to the severity of harm that the action of a violator in this context (COVID-19) could pose to others. Similarly, in S2 (smoking prohibition), subjects rated the displeased attitude highest for persuading in this context (higher than the anger attitude). On the contrary, in S1 (diet coaching), no clear preference emerged for the agent’s persuasive behaviors, as participants were divided—some favoring more negative approaches, while others preferred less negative ones. This lack of consensus may be attributed to the fact that the violator’s actions primarily affect only themselves.
These findings support H3, demonstrating that preferences for the agent’s persuasive attitude vary according to the contextual scenario. It is also consistent with the findings of [69], which indicated that the nature of a task can influence users’ tendencies to comply.
Concerning H4, the analysis results showed that subjects in the LAG ≈ HCA group showed a higher preference, in terms of likeness and competence, for the negative attitudes of the agent for persuading a third-party violator. A plausible explanation for this trend is that participants in the HCA group place a high value on adherence to established norms. This may account for their perception that employing a negative persuasive attitude toward norm violators is appropriate for promoting compliance. With respect to LAG individuals, their lower concern for social cohesion may explain their greater preference for the use of negative persuasive attitudes toward violators. In contrast, participants in the HAG ≈ LCA group exhibited a lower preference for such negative approaches across the scenarios. This tendency may be attributed to the higher levels of empathetic concern and the stronger inclination to promote social cohesion commonly associated with HAG individuals. This may explain their lower preference for behaviors that conflict with these attributes. Finally, individuals in the LCA group tend to place less importance on adherence to established norms, which may account for their perception that the use of more negative persuasive attitudes toward violators is inappropriate. It is noteworthy that the findings related to compliance awareness (CA) align with those of our previous study [6], which demonstrated that individuals with high levels of CA are more likely to evaluate negatively toned persuasive behaviors toward a violator favorably. Similarly, our findings regarding the agreeableness (AG) groups are consistent with prior research [70], which reported that individuals low in agreeableness tend to respond more positively to leaders who display anger. These results suggest that both compliance awareness (CA) and agreeableness (AG) are valuable indicators for characterizing individual differences and may serve as effective criteria for selecting context-appropriate persuasive strategies.

5. Study 3: Robot’s Appearance on Persuasive Behaviors

5.1. Background and Hypothesis

Inspired by the fact that some HRI studies [71,72] suggest that a robot’s characteristics influence the way people experience interaction with the robot and comply with the persuasive attempts, we investigate the influence of an agent’s appearance on the perception of persuasive behaviors examined in Study 2. Two humanoid robots were employed: ERICA and CommU. For the context of norm violation, two scenarios, diet coaching (S1) and smoking prohibition (S2), were selected for evaluation as shown in the study design captured in Figure 6. Based on these conditions, we hypothesize that
H5. 
The perception of the persuasive behavior of a social robot towards a violator may vary based on the robot’s embodiment and the context of violation.
It is noteworthy that H5 was partially informed by the findings of [41], which demonstrated that embodiment, particularly through natural gestures and expressive behaviors, enhances emotional engagement and the perceived social presence of the robot.

5.2. Experiment Design and Procedures

As in Study 2, we adopted a video-based method for evaluating the hypothesis. Using the services of Amazon Mechanical Turk (AMT), we recruited N = 100 participants residing in the United States (69 male and 31 female; ages between 21 and 77, M = 37.7, SD = 10.49).
Before watching the videos (persuasive behaviors by CommU and ERICA: https://drive.google.com/file/d/1P7vbDVx_sRDw1sOG3IO6EepR-oiiTNiA/view?usp=sharing) (accessed on 28 June 2025), participants answered relevant questions that evaluated their CA. It is important to mention that participants received a description of each scenario before watching the video clips for the scenario. Participants watched five videos in each scenario. The first video was an introduction to the context and had a gentle/polite appeal to the violator. Subsequent videos contained the polite, logical, displeased, and angry behaviors administered in a randomized fashion for the different scenarios and robot types.
After watching each video clip, the participants evaluated the agent’s behaviors in terms of likeness for the agent as a member of the community, the competence of the agent to realize compliance through its persuasive behaviors in the scenarios, the subject’s tendency to obey the agent’s request through the persuasive behaviors, and the appropriateness and effectiveness of the behaviors for the agent relative to the scenarios. It is noteworthy that both robots employed an identical verbal script, and comparable gesture sets were adapted to ensure consistency across agents.

5.3. Results and Discussions

To evaluate the hypothesis, we analyzed the subjective results for the independent variables (scenarios and agent types) and the within factor (behaviors). A two-way mixed ANOVA was conducted to examine the influence on the likeness and competence scores; a sketch of this analysis is given below. The results revealed significant (p < 0.01) interaction effects for some impression items, as captured in Table 4 and visualized in Figure 7.
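As an illustration, the agent × behavior analysis can be sketched as a two-way repeated-measures ANOVA (both factors within-subject, matching the within-subjects design); the pingouin call and the simulated data are assumptions, and scenario would be handled by running the analysis per scenario or via a mixed model.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(3)

# simulated long-format data for one scenario: every participant rates
# both robots across the four behaviors (fully within-subjects)
df = pd.DataFrame([
    {"pid": p, "agent": a, "behavior": b, "competence": int(rng.integers(1, 8))}
    for p in range(100)
    for a in ["ERICA", "CommU"]
    for b in ["polite", "logical", "displeased", "angry"]
])

# two-way repeated-measures ANOVA: agent x behavior, both within-subject
aov = pg.rm_anova(data=df, dv="competence",
                  within=["agent", "behavior"], subject="pid")
print(aov.round(3))
```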
From the results, a significant difference is observed in the preference for the logical behavior of the robots in terms of competence. Specifically, this behavior was rated higher for the android robot than for the CommU robot in S1.
The observed trend may be explained by differences in the agents’ physical appearance and expressive capabilities. The android robot, with its human-like appearance and greater degrees of freedom, is capable of exhibiting more nuanced and human-like expressions. This may have led participants to perceive its persuasive behavior—particularly logical reprimands—as more authentic and appropriate, thereby contributing to higher competence ratings. In contrast, the CommU robot’s limited expressivity and mechanical appearance may have reduced its perceived effectiveness in delivering logical persuasion. These findings suggest that human-likeness and expressive richness may enhance the perceived credibility and competence of robotic agents in persuasive contexts.
Regarding the negative attitudes, lower scores were appraised for these behaviors in S1 compared to S2. Additionally, we observed that in S2, the displeased attitude was scored higher for the android robot, while the authoritative behaviors received a higher score for the CommU robot regarding competence. In agreement with Study 2, a possible justification for this trend may be the severity of the violation.
Concerning the perceived appropriateness and effectiveness of the persuasive behaviors across agents and scenarios, Figure 8 indicates that, overall, polite behavior received higher ratings when exhibited by the CommU robot, whereas logical behavior was more positively evaluated when performed by the android robot. For the negative behaviors, the less negative behavior (displeased) was rated more favorably for the CommU robot in both scenarios S1–S2, while the more negative behavior (authoritative) was preferred when displayed by the android robot. In addition, participants demonstrated a greater tendency to comply with the android robot when it adopted an authoritative behavior in both S1–S2, whereas a displeased attitude elicited higher obedience when exhibited by the CommU robot in the same scenarios.
A possible justification for this may be the size of the agent relative to the severity of the violation in the scenario. Perhaps the subjects felt that, in terms of these impression items, it was more fitting for the robot with a more human-like appearance and size to persuade with the authoritative behavior, whereas it was more fitting for the CommU robot to display a displeased attitude when enforcing compliance.
Our findings are consistent with our hypothesis (H5) and align with the results reported by [41], who demonstrated that higher levels of embodiment enhance emotional engagement and the perceived social presence—particularly the perceived authority—of a robotic agent. Moreover, our results further support the framework proposed by [40], which emphasizes that the appropriateness and effectiveness of an agent’s behavior are contingent upon the specific situational context.

6. Limitations

The findings from the studies presented in this work underscore the significance of moral-based negative behaviors in promoting social compliance. Notably, the more assertive behaviors were appraised as both appropriate and effective in contexts involving high-severity norm violations, such as noncompliance with COVID-19 safety protocols and smoking prohibitions. However, the generalizability of these results to other contexts remains uncertain, and the criteria for systematically quantifying violation severity require further investigation.
Another limitation of this study is its reliance on participants exclusively residing in the United States and the sample size. Prior work in HRI has demonstrated that user responses to robot behaviors—including emotional expressions, authority cues, and norm enforcement strategies—vary significantly across cultural contexts [73]. As such, the generalizability of our findings may be limited; our subsequent work will focus on cross-cultural validation to investigate if the behavioral strategies proposed in this work are adaptable and effective across diverse sociocultural settings.
Additionally, in this work, we relied exclusively on the use of video-based evaluation methods across all the experimental conditions. While video stimuli allow for controlled presentation and efficient data collection, they do not fully replicate the richness of real-world human–robot interaction, particularly in terms of embodiment, spatial dynamics, and reciprocal responsiveness [74]. Prior research has shown that participant responses in video-based HRI studies can differ from those observed during live interactions, especially when evaluating affective cues and persuasive intent [75]. Therefore, further investigation is needed to determine whether the trends observed in this study hold when robots are deployed in physically co-present, real-world settings.
Moreover, the participants recruited for this study represented a broad age range, which raises another concern about the generalizability of our findings. Previous research has emphasized the significance of adapting social robot design and interaction strategies to specific generational cohorts. In particular, the research by Osakwe et al. [76] highlights that for Generation Z (Gen Z) consumers, factors such as subjective norms, positive emotional responses, and performance expectancy are key determinants of service robot acceptance. Moreover, a systematic review in hospitality robotics emphasizes Gen Z’s enthusiastic adoption of RAISA (robots, AI, service automation) technologies, highlighting their preference for interactive, emotionally engaging service encounters [77]. These studies underscore that future design paradigms for social robots aimed at younger audiences must prioritize emotional resonance, social normative cues, and performance reliability, as their generational traits shape interaction preferences in ways that older cohorts may not share. Given this, our subsequent work will consider social compliance specifically in the context of Gen Z.
Finally, it is important to note that the robots employed in this study operated autonomously. However, prior research indicates that increased robot autonomy may diminish users’ perceived sense of agency, thereby influencing how moral responsibility and blame are assigned. For example, Collier and Nguyen [78] demonstrated that when robots act independently, users are more likely to attribute moral judgment directly to the machine. Similarly, Cantucci et al. [79] found that autonomy moderates the relationship between perceived competence and cognitive trust, such that higher autonomy leads to more stringent moral appraisals in the event of failure. These findings suggest that perceptions of a robot’s operational mode—autonomous versus teleoperated—can significantly shape users’ moral evaluations, including assessments of blame, trust, and responsibility. Consequently, we recommend that future research systematically investigate how different modes of operation interact with robot design features to influence moral attributions and social compliance outcomes.

7. Implications

Despite the above-mentioned limitations, the findings of this study offer valuable practical implications. Specifically, the results can inform the design of persuasive behaviors for social robots operating in both public and private environments, particularly in contexts requiring compliance with established norms. Moreover, the behavior strategies identified may be effectively applied in domestic and healthcare settings, where robots are increasingly being integrated as supportive agents.
A representative application in the home setting involves a robot adopting or supporting the role of a caregiver—such as a parent or guardian—by promoting appropriate behavior in children. In such scenarios, the robot would require cognitive capabilities to detect and interpret instances of misbehavior and respond with contextually appropriate persuasive strategies. These strategies should be tailored to reflect the behavioral preferences and personality traits of the supervising adult. Similar applications can be envisioned in educational environments, where robots assist teachers or caregivers in managing classroom behavior and reinforcing social norms.
In elderly care settings, the findings of this study can support the development of assistive robots capable of context-aware and ethically grounded persuasion in the absence of human caregivers. Specifically, robots endowed with cognitive and perceptual capabilities to detect situations in which a patient’s direct or indirect actions deviate from prescribed care routines can be programmed to initiate adaptive persuasive strategies. These strategies can be tailored to the individual’s personality traits and persuasion preferences, as identified in this research, thereby promoting compliance while preserving autonomy and dignity.
Our findings can be extended to public settings such as parks, shopping malls, cinemas, and museums, where social robots may be deployed to promote adherence to socially acceptable behaviors. By integrating cognitive models capable of perceiving and interpreting norm violations in context, these robots can employ tailored persuasive strategies that align with the personality traits and behavioral preferences of the individual violator. Such adaptive capabilities enhance the robot’s potential to intervene effectively while maintaining social appropriateness and user receptivity.

8. Conclusions

In this paper, we present findings from our effort towards equipping social robots with context-inspired persuasive behaviors for human–robot interaction. Three separate studies, aimed at (1) equipping social robots with negative persuasive behaviors, (2) investigating the impact of the context of violation and users’ traits on the impression of the robot’s persuasive behaviors, and (3) evaluating the effect of the agent’s embodiment on the perception of its persuasive behaviors for compliance, were conducted using the android robot ERICA (Studies 1 and 2) and both ERICA and the CommU robot (Study 3). Findings from our investigation revealed the possibility of equipping social robots with context-based persuasive behaviors for human–robot interaction. The more assertive behaviors by the robot were found to be appropriate and effective as a response to a situation of repeated violation after an initial positive persuasion. Moreover, users’ traits and the severity of the violations were found to influence the perception of the robot’s persuasive behaviors in terms of appropriateness and effectiveness. Additionally, our investigation revealed that the impression of the agent’s reprimand behaviors varied with the agent type and the scenarios.
The findings from these studies are significant for the field of human–robot interaction, as they offer valuable insights into how preferences for an agent’s persuasive attitudes change depending on the context of persuasion and the agent’s appearance. These factors are crucial to consider when designing and equipping social robots for diverse social settings.

Author Contributions

Conceptualization, C.A.A. and C.T.I.; formal analysis, C.A.A. and C.T.I.; investigation, C.A.A.; writing—original draft, C.A.A.; supervision, C.T.I. and H.I.; funding acquisition, H.I. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by JST, Moonshot R&D under Grant JPMJMS2011 (methodology conceptualization), and in part by the Grant-in-Aid for Scientific Research on Innovative Areas JP22H04875 (evaluation experiments).

Institutional Review Board Statement

All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of Advanced Telecommunication Research Institute International (ATR) (protocol codes 20-607, 21-605 approved in August 2020 and May 2021, respectively).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy and ethical reasons.

Acknowledgments

Ajibo wishes to thank the University of Nigeria, Nsukka, for the opportunity to embark on the doctoral program that resulted in the findings presented in this work.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding this work.

List of Abbreviations

AG: Agreeableness
AI: Artificial Intelligence
AMA: Artificial Moral Agents
AMT: Amazon Mechanical Turk
ANOVA: Analysis of Variance
CA: Compliance Awareness
COVID-19: Coronavirus Disease 2019
DoF: Degrees of Freedom
Gen Z: Generation Z
HHI: Human–Human Interaction
HRI: Human–Robot Interaction
MELD: Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
RAISA: Robots, AI, Service Automation
WHO: World Health Organization

References

  1. Sun, S.; Ye, H.; Law, R. Cognitive–analytical and emotional–social tasks achievement of service robots through human–robot interaction. Int. J. Contemp. Hosp. Manag. 2025, 37, 180–196. [Google Scholar] [CrossRef]
  2. Laban, G.; Morrison, V.; Cross, E.S. Social robots for health psychology: A new frontier for improving human health and well-being. Eur. Health Psychol. 2024, 23, 1095–1102. [Google Scholar]
  3. Arango, J.A.; Marco-Detchart, C.; Inglada, V.J. Personalized Cognitive Support via Social Robots. Sensors 2025, 25, 888. [Google Scholar] [CrossRef] [PubMed]
  4. Ahmed, E.; Buruk, O.O.; Hamari, J. Human–robot companionship: Current trends and future agenda. Int. J. Soc. Robot. 2024, 6, 1809–1860. [Google Scholar] [CrossRef]
  5. Lampropoulos, G. Social robots in education: Current trends and future perspectives. Information 2025, 16, 29. [Google Scholar] [CrossRef]
  6. Augustine Ajibo, C.; Ishi, C.T.; Ishiguro, H. Assessing the influence of an android robot’s persuasive behaviors and context of violation on compliance. Adv. Robot. 2024, 38, 1679–1689. [Google Scholar] [CrossRef]
  7. Ottoni, L.T.; Cerqueira, J.D. A systematic review of human–robot interaction: The use of emotions and the evaluation of their performance. Int. J. Soc. Robot. 2024, 16, 2169–2188. [Google Scholar] [CrossRef]
  8. Carnevale, A.; Raso, A.; Antonacci, C.; Mancini, L.; Corradini, A.; Ceccaroli, A.; Casciaro, C.; Candela, V.; de Sire, A.; D’Hooghe, P.; et al. Exploring the Impact of Socially Assistive Robots in Rehabilitation Scenarios. Bioengineering 2025, 12, 204. [Google Scholar] [CrossRef]
  9. Kabacińska, K.; Dosso, J.A.; Vu, K.; Prescott, T.J.; Robillard, J.M. Influence of User Personality Traits and Attitudes on Interactions With Social Robots: Systematic Review. Collabra Psychol. 2025, 11, 129175. [Google Scholar] [CrossRef]
  10. Zhang, C.; Zhang, Z.; Zhang, W.; Zeng, T.; Sun, B.; Zhao, J.; An, P. Exploring the Effects of AI Nonverbal Emotional Cues on Human Decision Certainty in Moral Dilemmas. arXiv 2024, arXiv:2412.15834. [Google Scholar]
  11. Bernhard, F.; Rudolph, U. Predicting Emotional and Behavioral Reactions to Collective Wrongdoing: Effects of Imagined Versus Experienced Collective Guilt on Moral Behavior. J. Behav. Decis. Mak. 2024, 37, e2410. [Google Scholar] [CrossRef]
  12. Carrasco-Farre, C. Large language models are as persuasive as humans, but how? About the cognitive effort and moral-emotional language of LLM arguments. arXiv 2024, arXiv:2404.09329. [Google Scholar]
  13. Demetriades, S.Z.; Kalny, C.S.; Turner, M.M.; Walter, N. Is all anger created equal? A meta-analytic assessment of anger elicitation in persuasion research. Emotion 2024, 24, 1428–1441. [Google Scholar] [CrossRef] [PubMed]
  14. Yan, Z.; Arpan, L.M.; Clayton, R.B. Assessing the Role of Self-Efficacy in Reducing Psychological Reactance to Guilt Appeals Promoting Sustainable Behaviors. Sustainability 2024, 16, 7777. [Google Scholar] [CrossRef]
  15. Reinecke, M.G.; Wilks, M.; Bloom, P. Developmental changes in the perceived moral standing of robots. Cognition 2025, 254, 105983. [Google Scholar] [CrossRef]
  16. Kumar, S.; Choudhury, S. AI humanoids as moral agents and legal entities: A study on the human–robot dynamics. J. Sci. Technol. Policy Manag. 2025. [Google Scholar] [CrossRef]
  17. Baum, K.; Dargasz, L.; Jahn, F.; Gros, T.P.; Wolf, V. Acting for the Right Reasons: Creating Reason-Sensitive Artificial Moral Agents. arXiv 2024, arXiv:2409.15014. [Google Scholar]
  18. Gabriel, I.; Manzini, A.; Keeling, G.; Hendricks, L.A.; Rieser, V.; Iqbal, H.; Tomašev, N.; Ktena, I.; Kenton, Z.; Rodriguez, M.; et al. The ethics of advanced ai assistants. arXiv 2024, arXiv:2404.16244. [Google Scholar]
  19. Arora, A.S.; Marshall, A.; Arora, A.; McIntyre, J.R. Virtuous integrative social robotics for ethical governance. Discov. Artif. Intell. 2025, 5, 8. [Google Scholar] [CrossRef]
  20. Raper, R. Is there a need for robots with moral agency? A case study in social robotics. In Proceedings of the 2024 IEEE International Conference on Industrial Technology (ICIT), Bristol, UK, 25–27 March 2024; pp. 1–6. [Google Scholar]
  21. Song, Y.; Luximon, Y. When Trustworthiness Meets Face: Facial Design for Social Robots. Sensors 2024, 24, 4215. [Google Scholar] [CrossRef]
  22. Fernández-Rodicio, E.; Castro-González, Á.; Gamboa-Montero, J.J.; Carrasco-Martínez, S.; Salichs, M.A. Creating Expressive Social Robots That Convey Symbolic and Spontaneous Communication. Sensors 2024, 24, 3671. [Google Scholar] [CrossRef] [PubMed]
  23. Wu, B.; Liu, C.; Ishi, C.T.; Shi, J.; Ishiguro, H. Extrovert or Introvert? GAN-Based Humanoid Upper-Body Gesture Generation for Different Impressions. Int. J. Soc. Robot. 2025, 17, 457–472. [Google Scholar] [CrossRef]
  24. Gao, Y.; Fu, Y.; Sun, M.; Gao, F. Multi-modal hierarchical empathetic framework for social robots with affective body control. IEEE Trans. Affect. Comput. 2024, 15, 1621–1633. [Google Scholar] [CrossRef]
  25. Mahadevan, K.; Chien, J.; Brown, N.; Xu, Z.; Parada, C.; Xia, F.; Zeng, A.; Takayama, L.; Sadigh, D. Generative expressive robot behaviors using large language models. In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 11–14 March 2024; pp. 482–491. [Google Scholar]
  26. Kang, H.; Santos, T.F.; Moussa, M.B.; Magnenat-Thalmann, N. Mitigating the Uncanny Valley Effect in Hyper-Realistic Robots: A Student-Centered Study on LLM-Driven Conversations. arXiv 2025, arXiv:2503.16449. [Google Scholar]
  27. Lawrence, S.; Jouaiti, M.; Hoey, J.; Nehaniv, C.L.; Dautenhahn, K. The Role of Social Norms in Human–Robot Interaction: A Systematic Review. ACM Trans. Hum.-Robot. Interact. 2025, 14, 1–44. [Google Scholar] [CrossRef]
  28. Berns, K.; Ashok, A. “You Scare Me”: The Effects of Humanoid Robot Appearance, Emotion, and Interaction Skills on Uncanny Valley Phenomenon. Actuators 2024, 13, 419. [Google Scholar] [CrossRef]
  29. Rachmad, Y.E. Social Influence Theory; United Nations Economic and Social Council: New York, NY, USA, 2025.
  30. Voorveld, H.A.; Meppelink, C.S.; Boerman, S.C. Consumers’ persuasion knowledge of algorithms in social media advertising: Identifying consumer groups based on awareness, appropriateness, and coping ability. Int. J. Advert. 2024, 43, 960–986. [Google Scholar] [CrossRef]
  31. Ji, J.; Hu, T.; Chen, M. Impact of COVID-19 vaccine persuasion strategies on social endorsement and public response on Chinese social media. Health Commun. 2025, 40, 856–867. [Google Scholar] [CrossRef]
  32. Windrich, I.; Kierspel, S.; Neumann, T.; Berger, R.; Vogt, B. Enforcement of Fairness Norms by Punishment: A Comparison of Gains and Losses. Behav. Sci. 2024, 14, 39. [Google Scholar] [CrossRef]
  33. Andersson, P.A.; Vartanova, I.; Västfjäll, D.; Tinghög, G.; Strimling, P.; Wu, J.; Hazin, I.; Akotia, C.S.; Aldashev, A.; Andrighetto, G.; et al. Anger and disgust shape judgments of social sanctions across cultures, especially in high individual autonomy societies. Sci. Rep. 2024, 14, 5591. [Google Scholar] [CrossRef]
  34. Liu, R.W.; Lapinski, M.K. Cultural influences on the effects of social norm appeals. Philos. Trans. R. Soc. B 2024, 379, 20230036. [Google Scholar] [CrossRef] [PubMed]
  35. Iwasaki, M.; Yamazaki, A.; Yamazaki, K.; Miyazaki, Y.; Kawamura, T.; Nakanishi, H. Perceptive Recommendation Robot: Enhancing Receptivity of Product Suggestions Based on Customers’ Nonverbal Cues. Biomimetics 2024, 9, 404. [Google Scholar] [CrossRef] [PubMed]
  36. Sakai, K.; Ban, M.; Mitsuno, S.; Ishiguro, H.; Yoshikawa, Y. Leveraging the Presence of Other Robots to Promote Acceptability of Robot Persuasion: A Field Experiment. IEEE Robot. Autom. Lett. 2024, 9, 9813–9819. [Google Scholar] [CrossRef]
  37. Belcamino, V.; Carfì, A.; Seidita, V.; Mastrogiovanni, F.; Chella, A. A Social Robot with Inner Speech for Dietary Guidance. arXiv 2025, arXiv:2505.08664. [Google Scholar]
  38. Xu, J.; van der Horst, S.A.; Zhang, C.; Cuijpers, R.H.; IJsselsteijn, W.A. Robot-Initiated Social Control of Sedentary Behavior: Comparing the Impact of Relationship-and Target-Focused Strategies. arXiv 2025, arXiv:2502.08428. [Google Scholar]
  39. Zhao, Q.; Zhao, X.; Liu, Y.; Cheng, W.; Sun, Y.; Oishi, M.; Osaki, T.; Matsuda, K.; Yao, H.; Chen, H. SAUP: Situation Awareness Uncertainty Propagation on LLM Agent. arXiv 2024, arXiv:2412.01033. [Google Scholar]
  40. Senaratne, H.; Tian, L.; Sikka, P.; Williams, J.; Howard, D.; Kulić, D.; Paris, C. A Framework for Dynamic Situational Awareness in Human Robot Teams: An Interview Study. arXiv 2025, arXiv:2501.08507. [Google Scholar] [CrossRef]
  41. Fiorini, L.; D’Onofrio, G.; Sorrentino, A.; Cornacchia Loizzo, F.G.; Russo, S.; Ciccone, F.; Giuliani, F.; Sancarlo, D.; Cavallo, F. The Role of Coherent Robot Behavior and Embodiment in Emotion Perception and Recognition During Human-Robot Interaction: Experimental Study. JMIR Hum. Factors 2024, 11, e45494. [Google Scholar] [CrossRef]
  42. Pekçetin, T.N.; Evsen, S.; Pekçetin, S.; Acarturk, C.; Urgen, B.A. Real-world implicit association task for studying mind perception: Insights for social robotics. In Proceedings of the Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 11–14 March 2024; pp. 837–841. [Google Scholar]
  43. Trafton, J.G.; McCurry, J.M.; Zish, K.; Frazier, C.R. The perception of agency. ACM Trans. Hum.-Robot. Interact. 2024, 13, 1–23. [Google Scholar] [CrossRef]
  44. Haresamudram, K.; Torre, I.; Behling, M.; Wagner, C.; Larsson, S. Talking body: The effect of body and voice anthropomorphism on perception of social agents. Front. Robot. AI 2024, 11, 1456613. [Google Scholar] [CrossRef]
  45. Huang, P.; Hu, Y.; Nechyporenko, N.; Kim, D.; Talbott, W.; Zhang, J. EMOTION: Expressive Motion Sequence Generation for Humanoid Robots with In-Context Learning. arXiv 2024, arXiv:2410.23234. [Google Scholar] [CrossRef]
  46. Wang, X.; Li, Z.; Wang, S.; Yang, Y.; Peng, Y.; Fu, C. Enhancing emotional expression in cat-like robots: Strategies for utilizing tail movements with human-like gazes. Front. Robot. AI 2024, 11, 1399012. [Google Scholar] [CrossRef] [PubMed]
  47. Wu, J.; Du, X.; Liu, Y.; Tang, W.; Xue, C. How the Degree of Anthropomorphism of Human-like Robots Affects Users’ Perceptual and Emotional Processing: Evidence from an EEG Study. Sensors 2024, 24, 4809. [Google Scholar] [CrossRef] [PubMed]
  48. Yang, W.; Xie, Y. Can robots elicit empathy? The effects of social robots’ appearance on emotional contagion. Comput. Hum. Behav. Artif. Humans 2024, 2, 100049. [Google Scholar] [CrossRef]
  49. de Rooij, A.; Broek, S.V.; Bouw, M.; de Wit, J. Co-Creating with a Robot Facilitator: Robot Expressions Cause Mood Contagion Enhancing Collaboration, Satisfaction, and Performance. Int. J. Soc. Robot. 2024, 16, 2133–2152. [Google Scholar] [CrossRef]
  50. Zhang, D.; Peng, J.; Jiao, Y.; Gu, J.; Yu, J.; Chen, J. ExFace: Expressive Facial Control for Humanoid Robots with Diffusion Transformers and Bootstrap Training. arXiv 2025, arXiv:2504.14477. [Google Scholar]
  51. Ding, B.; Kirtay, M.; Spigler, G. Imitation of human motion achieves natural head movements for humanoid robots in an active-speaker detection task. In Proceedings of the 2024 IEEE-RAS 23rd International Conference on Humanoid Robots (Humanoids), Nancy, France, 22–24 November 2024; pp. 645–652. [Google Scholar]
  52. Nishiwaki, K.; Brščić, D.; Kanda, T. Expressing Anger with Robot for Tackling the Onset of Robot Abuse. ACM Trans. Hum.-Robot. Interact. 2024, 14, 1–23. [Google Scholar] [CrossRef]
  53. Pfeuffer, A.; Hatfield, H.R.; Evans, N.; Kim, J. Illegally beautiful? The role of trust and persuasion knowledge in online image manipulation disclosure effects. Int. J. Advert. 2025, 44, 696–717. [Google Scholar] [CrossRef]
  54. Leite, J.A.; Razuvayevskaya, O.; Scarton, C.; Bontcheva, K. A cross-domain study of the use of persuasion techniques in online disinformation. In Proceedings of the Companion Proceedings of the ACM on Web Conference 2025, Sydney, Australia, 28 April–2 May 2025; pp. 1100–1103. [Google Scholar]
  55. Maj, K.; Grzybowicz, P.; Kopeć, J. “No, I Won’t Do That.” Assertive Behavior of Robots and its Perception by Children. Int. J. Soc. Robot. 2024, 16, 1489–1507. [Google Scholar] [CrossRef]
  56. Gonzalez-Oliveras, P.; Engwall, O.; Majlesi, A.R. Sense and Sensibility: What makes a social robot convincing to high-school students? arXiv 2025, arXiv:2506.12507. [Google Scholar]
  57. Lee, H.; Yi, Y. Humans vs. Service robots as social actors in persuasion settings. J. Serv. Res. 2025, 28, 150–167. [Google Scholar] [CrossRef]
  58. Getson, C.; Nejat, G. Investigating Persuasive Socially Assistive Robot Behavior Strategies for Sustained Engagement in Long-Term Care. arXiv 2024, arXiv:2408.14322. [Google Scholar]
  59. Peng, Z.L.; Mattila, A.S.; Sharma, A. Gendered robots and persuasion: The interplay of the robot’s gender, the consumer’s gender, and their power on menu recommendations. J. Hosp. Tour. Manag. 2025, 62, 294–303. [Google Scholar] [CrossRef]
  60. Vriens, E.; Andrighetto, G.; Tummolini, L. Risk, sanctions and norm change: The formation and decay of social distancing norms. Philos. Trans. R. Soc. B 2024, 379, 20230035. [Google Scholar] [CrossRef]
  61. Mitsuishi, K.; Kawamura, Y. Avoidance of altruistic punishment: Testing with a situation-selective third-party punishment game. J. Exp. Soc. Psychol. 2025, 116, 104695. [Google Scholar] [CrossRef]
  62. Corrao, F.; Nardelli, A.; Renoux, J.; Recchiuto, C.T. EmoACT: A Framework to Embed Emotions into Artificial Agents Based on Affect Control Theory. arXiv 2025, arXiv:2504.12125. [Google Scholar]
  63. Li, J.; Song, H.; Zhou, J.; Nie, Q.; Cai, Y. RMG: Real-Time Expressive Motion Generation with Self-collision Avoidance for 6-DOF Companion Robotic Arms. arXiv 2025, arXiv:2503.09959. [Google Scholar]
  64. Bartosik, B.; Wojcik, G.M.; Brzezicka, A.; Kawiak, A. Are you able to trust me? Analysis of the relationships between personality traits and the assessment of attractiveness and trust. Front. Hum. Neurosci. 2021, 15, 685530. [Google Scholar] [CrossRef]
  65. van Otterdijk, M.; Laeng, B.; Saplacan-Lindblom, D.; Baselizadeh, A.; Tørresen, J. Seeing Meaning: How Congruent Robot Speech and Gestures Impact Human Intuitive Understanding of Robot Intentions. Int. J. Soc. Robot. 2025, 1–4. [Google Scholar] [CrossRef]
  66. Staffa, M.; D’Errico, L.; Maratea, A. Influence of Social Identity and Personality Traits in Human–Robot Interactions. Robotics 2024, 13, 144. [Google Scholar] [CrossRef]
  67. A’yuninnisa, R.N.; Carminati, L.; Wilderom, C.P. Promoting employee flourishing and performance: The roles of perceived leader emotional intelligence, positive team emotional climate, and employee emotional intelligence. Front. Organ. Psychol. 2024, 2, 1283067. [Google Scholar] [CrossRef]
  68. Johnson, J.A. Calibrating personality self-report scores to acquaintance ratings. Personal. Individ. Differ. 2021, 169, 109734. [Google Scholar] [CrossRef]
  69. Watson, J.; Valsesia, F.; Segal, S. Assessing AI receptivity through a persuasion knowledge lens. Curr. Opin. Psychol. 2024, 58, 101834. [Google Scholar] [CrossRef] [PubMed]
  70. Van Kleef, G.A. Understanding the positive and negative effects of emotional expressions in organizations: EASI does it. Hum. Relations 2014, 67, 1145–1164. [Google Scholar] [CrossRef]
  71. Dennler, N.; Nikolaidis, S.; Matarić, M. Singing the Body Electric: The Impact of Robot Embodiment on User Expectations. arXiv 2024, arXiv:2401.06977. [Google Scholar]
  72. Schulz, D.; Unbehaun, D.; Doernbach, T. Investigating the Effects of Embodiment on Presence and Perception in Remote Physician Video Consultations: A Between-Participants Study Comparing a Tablet and a Telepresence Robot. i-com 2025. Available online: https://www.degruyterbrill.com/document/doi/10.1515/icom-2024-0045/html (accessed on 28 June 2025).
  73. Roselli, C.; Lapomarda, L.; Datteri, E. How culture modulates anthropomorphism in Human-Robot Interaction: A review. Acta Psychol. 2025, 255, 104871. [Google Scholar] [CrossRef]
  74. Hauser, E.; Chan, Y.C.; Modak, S.; Biswas, J.; Hart, J. Vid2Real HRI: Align video-based HRI study designs with real-world settings. In Proceedings of the 2024 33rd IEEE International Conference on Robot and Human Interactive Communication (ROMAN), Pasadena, CA, USA, 26–30 August 2024; pp. 542–548. [Google Scholar]
  75. Steinhaeusser, S.C.; Heckel, M.; Lugrin, B. The way you see me-comparing results from online video-taped and in-person robotic storytelling research. In Proceedings of the Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 11–15 March 2024; pp. 1018–1022. [Google Scholar]
  76. Osakwe, C.N.; Říha, D.; Elgammal, I.M.; Ramayah, T. Understanding Gen Z shoppers’ interaction with customer-service robots: A cognitive-affective-normative perspective. Int. J. Retail. Distrib. Manag. 2024, 52, 103–120. [Google Scholar] [CrossRef]
  77. Fu, M.; Fraser, B.; Arcodia, C. Digital natives on the rise: A systematic literature review on generation Z’s engagement with RAISA technologies in hospitality services. Int. J. Hosp. Manag. 2024, 122, 103885. [Google Scholar] [CrossRef]
  78. Collier, M.A.; Narayan, R.; Admoni, H. The Sense of Agency in Assistive Robotics Using Shared Autonomy. arXiv 2025, arXiv:2501.07462. [Google Scholar]
  79. Cantucci, F.; Marini, M.; Falcone, R. The Role of Robot Competence, Autonomy, and Personality on Trust Formation in Human-Robot Interaction. arXiv 2025, arXiv:2503.04296. [Google Scholar]
Figure 1. Research roadmap for the study depicting the position of strategies developed for the social agents.
Figure 2. Pictorials of the generated gestures on the android ERICA during the stroke phases.
Figure 3. (a) Subjective scores for the perceived degree of anger for each gesture type; (b–d) subjective scores for perceived gesture appropriateness for each dialogue act. p-value indicated by * p < 0.05.
Figure 4. Scenario design for evaluating the impressions of different persuasive behaviors.
Figure 5. Subjective scores for the robot’s behaviors in terms of likeness, competence, and willingness across AG groups; p-value indicated by * p < 0.05.
Figure 6. Scenario design for assessing the perception of different persuasive behaviors relative to the robot types.
Figure 7. Subjective scores for the robots’ behaviors in terms of likeness and competence for both agents; red frames indicate behaviors with * p < 0.01.
Figure 8. Subjective scores for appropriateness, effectiveness, and tendency to adhere for the behaviors of the two agents.
Table 1. Distribution of participants based on trait categories.

Trait | Threshold | Distribution
CA | 1.00 ≤ X ≤ 3.49 | Low CA (N = 47: 30 Male, 17 Female; M = 3.24, SD = 0.55)
CA | 3.50 ≤ X ≤ 5.00 | High CA (N = 51: 30 Male, 21 Female; M = 4.38, SD = 0.34)
AG | 1.00 ≤ X ≤ 2.99 | Low AG (N = 51: 35 Male, 16 Female; M = 2.69, SD = 0.58)
AG | 3.00 ≤ X ≤ 5.00 | High AG (N = 47: 25 Male, 22 Female; M = 4.47, SD = 0.46)
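For readers reproducing this categorization, the split in Table 1 amounts to fixed-threshold binning of 1–5 trait scores. Below is a minimal sketch assuming the scores sit in a pandas DataFrame; the column names and example values are illustrative, not taken from the study data.

```python
import pandas as pd

# Hypothetical participant trait scores on a 1-5 scale (illustrative values only).
df = pd.DataFrame({
    "participant": [1, 2, 3, 4],
    "CA": [3.1, 4.4, 3.6, 2.9],   # compliance awareness
    "AG": [2.7, 4.5, 3.2, 2.8],   # agreeableness
})

# Thresholds from Table 1: CA splits between 3.49 and 3.50, AG between 2.99 and 3.00.
df["CA_group"] = pd.cut(df["CA"], bins=[1.00, 3.49, 5.00],
                        labels=["Low CA", "High CA"], include_lowest=True)
df["AG_group"] = pd.cut(df["AG"], bins=[1.00, 2.99, 5.00],
                        labels=["Low AG", "High AG"], include_lowest=True)

print(df)
```

Unlike a median split, fixed thresholds of this kind keep group membership stable across samples, at the cost of potentially unbalanced group sizes.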
Table 2. Two-way interaction effects (F-values and significance levels) for each subjective impression item; * p < 0.05, ** p < 0.01, ns ≡ p > 0.05.

Impression Items | Scen-Beh F(6, 576) | CA-Beh F(3, 288) | AG-Beh F(3, 288)
Likeness | 3.4 * | 5.9 * | 14.4 **
Appropriateness | 22.7 ** | 1.8 (ns) | 6.5 *
Effectiveness | 15.0 ** | 2.2 (ns) | 1.6 (ns)
Competence | 1.8 (ns) | 4.3 * | 7.4 *
Willingness | 12.3 ** | 6.3 * | 8.4 **
Table 3. Main effects (F-values and significance levels) for each subjective impression item; * p < 0.05, ** p < 0.01, ns ≡ p > 0.05.

Impression Items | CA F(1, 96) | AG F(1, 96) | Scen F(2, 192) | Beh F(3, 288)
Likeness | 29.8 ** | 18.5 ** | 2.8 * | 65.2 **
Appropriateness | NaN (ns) | NaN (ns) | NaN (ns) | 17.0 **
Effectiveness | NaN (ns) | NaN (ns) | NaN (ns) | 5.9 *
Competence | 22.7 ** | 11.8 ** | 0.3 (ns) | 35.3 **
Willingness | 33.6 ** | 16.5 ** | 2.4 * | 34.1 **
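The F-statistics in Tables 2 and 3 come from a mixed-design ANOVA, with between-subjects trait groups (CA, AG) crossed with within-subjects factors (scenario, behavior). As an illustration of how one such effect could be computed, the sketch below runs a one-between (CA group) by one-within (behavior) mixed ANOVA with the pingouin library; the file name, column names, and rating scale are hypothetical, and the paper’s full multi-factor analysis is not reproduced here.

```python
import pandas as pd
import pingouin as pg  # pip install pingouin

# Hypothetical long-format ratings: one row per participant x behavior condition.
# Columns assumed: participant (ID), CA_group ("Low CA"/"High CA"),
# behavior (persuasive behavior condition), likeness (subjective rating).
data = pd.read_csv("ratings_long.csv")

# Mixed ANOVA: CA_group is between-subjects, behavior is within-subjects.
# pingouin reports F, degrees of freedom, p-values, and partial eta-squared
# for the two main effects and their interaction (cf. the CA-Beh cell in Table 2).
aov = pg.mixed_anova(data=data, dv="likeness", within="behavior",
                     subject="participant", between="CA_group")
print(aov.round(3))
```

Because pingouin’s mixed_anova handles only one within- and one between-subjects factor, each trait-by-behavior cell in Table 2 would need its own call; a full factorial mixed model (e.g., in R with afex) would be required to reproduce all effects in a single analysis.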
Table 4. Two-way interaction effects (F-values and significance levels) for each subjective impression item; * p < 0.01, ns ≡ p > 0.05.

Impression Items | Beh-Scen F(3, 300) | Beh-Agt F(3, 300)
Likeness | 7.3 * | 2.1 (ns)
Competence | 16.5 * | 8.6 *
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
