Proceeding Paper

Affective Embodied Agents and Their Effect on Decision Making †

by Adrian Acosta-Mitjans 1,‡, Dagoberto Cruz-Sandoval 1,‡, Ramon Hervas 2,‡, Esperanza Johnson 2,‡, Chris Nugent 3,‡ and Jesus Favela 1,*,‡

1 CICESE, Ensenada 22860 B.C., Mexico
2 Department of Technologies and Information Systems, University of Castilla-La Mancha, 13071 Ciudad Real, Spain
3 School of Computing, Ulster University, Jordanstown BT37 0QB, UK
* Author to whom correspondence should be addressed.
† Presented at the 13th International Conference on Ubiquitous Computing and Ambient Intelligence (UCAmI 2019), Toledo, Spain, 2–5 December 2019.
‡ All authors contributed equally to this work.
Proceedings 2019, 31(1), 71; https://doi.org/10.3390/proceedings2019031071
Published: 21 November 2019

Abstract: Embodied agents, such as avatars and social robots, are increasingly incorporating a capacity to enact affective states and recognize the mood of their interlocutor. This influences how users perceive these technologies and how they interact with them. We report on an experiment aimed at assessing perceived empathy and fairness among individuals interacting with avatars and robots when compared to playing against a computer or a fellow human being. Twenty-one individuals were asked to play the ultimatum game in the role of a responder against another person, a computer, an avatar and a robot, for a total of 32 games (8 per condition). We hypothesize that affective expressions by avatars and robots influence the emotional state of the users, leading them to irrational behavior by rejecting unfair proposals. We monitored the galvanic skin response and heart rate of the players in the period from when the offer was made by the proposer until the decision was announced by the responder. Our results show that most fair offers were accepted while most unfair offers were rejected. However, participants rejected more very unfair offers made by people and computers than by the avatars or robots.

1. Introduction

An embodied agent is an intelligent entity that interacts with the environment through a body. These include virtual agents whose body is represented as a graphical interface [1] and robots with a physical body [2]. These agents often exhibit affective states through the movement of their bodies, facial expressions, etc. Several studies have found that such embodiment affects how these agents are perceived by users [3]. Users, for instance, were able to interpret the affective state of a humanoid robot such as Nao by its body language [4].
Affective embodied agents fall under the category of Affective Computing, a term coined in the mid-1990s by Rosalind Picard [5] and defined as the study and development of systems that can recognize, interpret, process and simulate affect. While several affective avatars have been developed in the last twenty years, there has also been growing research and development in robots whose goal is to promote social interaction with humans. These social robots use speech, facial expressions and communicative gestures to enhance this interaction; for them to display socially intelligent behavior, they should be able to interpret and simulate affect [6].
Understanding how humans perceive affective embodied agents can help us design effective avatars and social robots. In this study we aim to evaluate whether the social presence projected by an avatar or a social robot influences decision making. This is relevant because software robots and social robots are increasingly being deployed to assist in tasks that range from restaurant booking to patient care. These solutions are gradually incorporating a capacity to simulate emotions. To deepen our understanding of the effects that these technologies could have on our daily lives, we conducted an experiment in which humans play the ultimatum game against different agents: a computer, humans, avatars and a social robot. The ultimatum game has been frequently used in experimental economics, psychology and computer science to understand rational decision making and fairness.
The next section introduces the ultimatum game and affective agents, and describes previous studies on how avatars and robots influence decision making. Section 3 describes the materials and methods used in the experiment. In Section 4 we present the results obtained. We end with a discussion and the conclusions drawn from the study.

2. Affective Communication in Avatars and Social Robots

Studies on decision making in economics, psychology and human-computer interaction often use simple models derived from game theory to study human behavior. The ultimatum game, in particular, has been used to analyze irrational behavior and the role of emotion in decision making [7]. The game is played by two individuals, each playing a different role, one as the “proposer” and the other as the “responder”. Players are presented with an amount of money to be split between the two. The proposer suggests how to split the amount between the two participants, and the responder decides whether to accept or reject the offer. If the offer is accepted, the money is distributed according to the proposal; if it is rejected, both players walk away with nothing. While the rational decision would be to accept any amount greater than zero, evidence shows that individuals often reject offers that they perceive as unfair. Moreover, the more unequal the split, the greater the probability of the offer being rejected. Thus, it is in the best interest of the proposer to offer the minimum amount of money that still has a good chance of being accepted. The fact that unfair offers are rejected shows that emotions play a role in decision making, as responders prefer a zero gain to rewarding the opponent with an unfair split of the amount of money at play. In light of this, the following two subsections explore in more depth the role emotions can play when interacting with different agents. Section 2.1 reviews related work on affective avatars, while Section 2.2 explores affective social robots.
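To make the payoff rule concrete, the following C# sketch (ours, not part of the experimental software) encodes the accept/reject outcome of a single game for a 100-point pot, matching the point splits used later in the experiment.

```csharp
// A minimal sketch of the ultimatum game payoff rule described above,
// using a 100-point pot as in the experiment reported in this paper.
using System;

public static class UltimatumGame
{
    public const int Pot = 100;

    // Returns (proposerPayoff, responderPayoff) for a given offer and decision.
    public static (int Proposer, int Responder) Payoff(int offerToResponder, bool accepted)
    {
        if (offerToResponder < 0 || offerToResponder > Pot)
            throw new ArgumentOutOfRangeException(nameof(offerToResponder));

        // If the responder rejects, both players walk away with nothing.
        return accepted ? (Pot - offerToResponder, offerToResponder) : (0, 0);
    }
}
```

For example, Payoff(10, accepted: false) yields (0, 0), the outcome a purely rational responder would avoid by accepting even a very unfair 90–10 split.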

2.1. Affective Avatars

To understand what makes an avatar affective, we must begin by looking into affective computing. In [8], Picard distinguishes four categories of affective computing, based on expression and recognition. Most computers and software applications fall into category I, which means they neither perceive nor express affect. At the other extreme, category IV comprises systems that can both express and perceive affect, with Picard noting that ‘this category maximizes the sentic communication between human and computer, potentially providing truly “personal” and “user-friendly” computing’.
Categories II and III correspond to systems that can either perceive or express affect, but not both. Given these categories, we would call an avatar affective if it belongs to category II, III or IV, depending on its perception or expression of affect. Several works deal with expressing affect or emotion. In [9], an avatar shows different emotions and is evaluated with a group of people with ASD (Autism Spectrum Disorder) to train emotion recognition and social skills. In another study [10], the population consisted of older adults, and the aim was to assess whether the presence of an avatar that showed emotion improved interaction. Finally, in [11,12], users interacted through touch with a virtual avatar, which expressed different emotions depending on the nature of the touch.
Other works can both express and recognize emotion. In [13], the authors use multi-modal signals and feed the results to an avatar, which mirrors the emotion and generates a voice message that sympathizes with the user’s mood. In a similar line, and using wearable computers, the authors of [14] use the gathered data to show the user’s emotional state back to them through an avatar. In [15], the authors recognize emotion using EEG and then display the resulting emotion through an avatar. Sometimes the avatar is not a humanoid representation but a more abstract one. This is the case in [16], where the authors study disclosure and co-presence by having dyads of participants interact with each other while the avatar is an emotibox, an abstract form that changes in color and shape according to the user’s perceived emotions.
As can be seen, there is a distinct lack of work dealing with avatars that purely recognize emotion. This is likely because, once emotion recognition and a visual avatar representation are in place, designers typically choose to add emotion expression to the avatar as well.

2.2. Affective Social Robots

Social robots are physically embodied agents that are able to engage humans in social interaction [17]. Social robots employ hands-off interaction strategies, including the use of speech, facial expressions and communicative gestures, to promote effective social interactions [18]. These robots have a physical and tangible representation in a real environment. This is the main difference with respect to virtual agents, whose avatars are digitally generated using computer algorithms [19].
Various studies have shown that a social robot can persuade humans to change their behavior, mood and perception. A social robot called Huggable was used in a study to promote the socio-emotional wellbeing of children in hospital [20]. A smartphone is used as part of the robot to hear, speak and show animated eyes that express various emotions. The study results revealed that the social robot could facilitate socially energetic and positive conversations more effectively than a virtual character or a plush toy. These findings are significant because increased positive emotion and social engagement are associated with positive patient outcomes. In [21], the authors explore the reciprocity between humans and robots. They implemented two different personalities in a Nao robot with social features such as speech and body gestures, and conducted an experiment using the Rock, Paper, Scissors game in which a bribing version and an honest version of the robot played with sixty participants. The results showed that the personality of the robot persuades participants to change their behavior. The authors suggest that participants are keen to reciprocate help to the robots when they ask for a favor; however, they reciprocate less with bribing robots than with honest ones. In [22], the authors examine the factors that lead robots to be recognized as social beings. Using the ultimatum game, the study concludes that people change their attitude depending on the agent’s appearance and behavior. The agent (robot, human or computer) in the role of proposer influences the number of rejected proposals; in particular, an android appearance is associated with a higher number of rejections. The examples above provide evidence that social robots, which can use para-linguistic communication signals such as gesture, facial expression, intonation, gaze direction or body posture [23], are capable of influencing people’s decisions and behavior.
While it has been shown that incidental affect, such as mood, influences judgement in decision making [24], there has been little work analyzing the use of affective agents in more depth. Experiments have shown that individuals accept unfair offers more frequently when playing against a computer than against another human player, indicating that rejecting unfair offers from other individuals is a form of social punishment or a way of enforcing fairness norms. Thus, with this work, we aim to study a middle ground between person and PC by introducing the variable of affective embodied agents, such as the ones described above. We also wish to explore how pronounced those differences are, with the goal of providing a better understanding of how perception and judgement can change depending on the agent.

3. Methods and Materials

In this section we describe the key elements of this work. In Section 3.1, we delve further into the subject of study, explaining the hypotheses proposed for this study. Section 3.2 presents the experimental conditions, covering the computer and person (Section 3.2.1), avatar (Section 3.2.2) and robot (Section 3.2.3) cases. Section 3.3 explains the experimental procedure, including participants and protocol. Finally, Section 3.4 details the setup for the experiment.

3.1. Subject of Study

We designed an experiment with the following research question in mind: will an affective embodied agent (avatar or robot) influence individuals emotionally so that they make irrational decisions? If people perceive embodied agents as having an independent capacity to act in accordance with their own will (agency), their unfair actions will create an emotional response in the users, prompting them to make decisions that are not in their best interest.
The experiment involves having participants play the ultimatum game against different types of rivals, namely a person (P), a social robot (R), an avatar (A) and a computer (C). Participants are confronted with both fair and unfair offers. Previous studies have established that individuals are more willing to accept unfair offers from computers, while unfair offers made by other individuals generate a physiological emotional response in the player, leading them to reject unfair offers more often [25]. Our hypothesis is that both the social robot and the avatar, which exhibit some affective behaviors, will be perceived by the players as making their own decisions and will elicit responses (acceptance of unfair offers, arousal) closer to those exhibited when playing against another person than to those exhibited when playing against a computer. This research is relevant to the field of Human-Robot Interaction, since it can inform designers in the development of new social robots, and to Ubiquitous Computing, since the sensing and proper interpretation of physiological responses can be used to adapt the agent’s behavior.
Thus, we propose the following hypotheses:
Hypothesis 1 (H1).
Playing against an affective embodied agent (avatar/robot) will lead people to make more irrational decisions than playing against a computer.
Hypothesis 1a (H1a).
Playing against an affective avatar will lead people to make more irrational decisions than playing against a computer.
Hypothesis 1b (H1b).
Playing against an affective robot will lead people to make more irrational decisions than playing against a computer.
Hypothesis 2 (H2).
Subjects will experience more frustration when receiving unfair offers from an affective embodied agent than when receiving these offers from a computer.
Hypothesis 2a (H2a).
Subjects will experience more frustration when receiving unfair offers from an affective avatar than when receiving these offers from a computer.
Hypothesis 2b (H2b).
Subjects will experience more frustration when receiving unfair offers from an affective robot than when receiving these offers from a computer.
To test H1 we will record the number of fair and unfair offers accepted by each participant when playing against P, R, A and C.
To test H2 we will measure emotional arousal from the time an offer is received until the player responds to it. For this, we will use wearable sensors to measure Galvanic Skin Response (GSR) and Heart Rate Variability (HRV), both commonly used to measure emotional arousal [25,26]. To allow sufficient time for frustration to set in and activate a physiological response, we ask the individual to wait 10 s from the moment they receive an offer until they respond to it.

3.2. Experimental Conditions

Participants will interact by playing against four different types of players. We describe the systems used for each type of player. All conditions include a fixation point of between 15 and 25 s, followed by the presentation of the opponent (except for the computer condition) for around 5 s. Then, the offer is presented by the opponent and the participant is given 10 s to respond. Once the offer is given, the recording of the GSR signal is triggered until the participant provides an answer. HR is measured with the smartwatch continuously throughout each session/condition.
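The per-game timing just described can be summarized as in the following illustrative C# sketch. The phase names are ours and the code is not the actual experiment controller; the durations follow the description above.

```csharp
// Illustrative sketch of the per-game timeline described above (not the
// actual experiment controller): 15–25 s fixation, ~5 s opponent
// presentation (omitted for the computer condition), then a 10 s response window.
using System;
using System.Collections.Generic;

public enum Phase { Fixation, OpponentPresentation, Offer, ResponseWindow }

public static class TrialTimeline
{
    public static List<(Phase Phase, double Seconds)> Build(bool computerCondition, Random rng)
    {
        var timeline = new List<(Phase Phase, double Seconds)>
        {
            (Phase.Fixation, 15 + rng.NextDouble() * 10)      // fixation point, 15–25 s
        };
        if (!computerCondition)
            timeline.Add((Phase.OpponentPresentation, 5));    // show the opponent for around 5 s
        timeline.Add((Phase.Offer, 0));                       // offer announced; GSR recording starts here
        timeline.Add((Phase.ResponseWindow, 10));             // participant has 10 s to accept or reject
        return timeline;
    }
}
```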

3.2.1. Computer and Person

An interactive application to support the experiment with these two types of rival was developed using the Unity game engine, the Visual Studio Community 2019 IDE and the C# programming language. The application immerses the user in a 3D environment simulating that they are sitting in front of a table. The user plays the ultimatum game in two different conditions: playing against a computer or playing against several human players. In the second case, the opponent is also the computer, but the system simulates that the player is actually playing against a human. The application shows a fictional photograph of the rival, which changes each round to minimize bias related to the opponent’s appearance. In both cases, the subject takes the responder’s role and plays eight games, and the offers made by the game appear in random order.

3.2.2. Avatar

The application version used to play against an avatar is aesthetically similar to the computer/person application. It was also developed using the Unity game engine, the Visual Studio Community 2019 IDE and the C# programming language, and likewise immerses the user in a 3D environment. The main difference is that a humanized avatar sits as the opponent on the other side of the table. There is a set of different avatars, some of them virtual human characters and others humanized animals. The opponent can be randomly selected or specified at the beginning of the game. Players can take the role of offering or being offered money (proposer or responder, respectively) (see Figure 1). If the user plays a round as proposer, they indicate the amount of money they want to offer, whereas they accept or decline an offer when playing as responder. Interaction with the application can be carried out by touching buttons and typing text, or by using voice commands (through IBM Watson cognitive services). The strategy and criteria of the avatar as responder and as proposer can be configured. For this experiment, the game was configured as follows: (a) the avatar is always the female human character; (b) the user acts only as a responder; (c) the voice-based interaction is disabled; (d) the opponent strategy is matched to the experimental procedure (Section 3.3).
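A hedged sketch of this configuration follows; the property names are ours and are only meant to mirror settings (a)–(d), not the actual Unity implementation.

```csharp
// Hypothetical configuration object mirroring settings (a)–(d) above; the
// names are illustrative, not taken from the actual Unity implementation.
public enum PlayerRole { Proposer, Responder }

public sealed class AvatarGameConfig
{
    public string AvatarCharacter { get; set; } = "FemaleHuman";          // (a) always the female human character
    public PlayerRole UserRole { get; set; } = PlayerRole.Responder;      // (b) the user acts only as responder
    public bool VoiceInteractionEnabled { get; set; } = false;            // (c) voice-based interaction disabled
    public string OpponentStrategy { get; set; } = "ExperimentProcedure"; // (d) offers follow Section 3.3
}
```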
Based on the experimental conditions described above, we developed eight personalities to be deployed in the robot. Each personality has a different voice, movements, dialog and color; thus, a different personality plays the role of the proposer for each offer. This strategy aimed to convey the idea that each game was independent. A fixation point (a rotating torus) was displayed on the screen for a set time between games, while the participant waited for the new player to make an offer.

3.2.3. Robot

We used an updated version of the social robot Eva, developed by one of the authors [27]. Verbal communication is the primary mode of interaction between the social robot and the participants. Eva includes features such as natural language processing and speech, implemented using cognitive services from Google (speech-to-text, Dialogflow) and IBM Watson (text-to-speech). A display shows WebGL animations that emulate facial expressions. For this study, we added 3 degrees of freedom (all of them for head movements) to represent body gestures. By synchronizing these elements (voice, facial expression and body gestures), Eva synthesizes basic states such as sadness, happiness and thinking. Moreover, a light ring was added to give participants visual feedback about the robot’s status (speaking, listening, powered off). Figure 2 shows the version of Eva used in this study.
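As an illustration of how such synchronization might be represented, the sketch below bundles the output channels mentioned above into a single enactment object. The type and member names are hypothetical; the paper does not describe Eva's internal API.

```csharp
// Hypothetical representation of a synchronized enactment for Eva; the paper
// does not expose the robot's actual API, so names and values are illustrative.
public sealed class EmotionEnactment
{
    public string Utterance { get; set; }        // text sent to the text-to-speech service
    public string FacialAnimation { get; set; }  // WebGL animation shown on the display
    public double HeadPanDegrees { get; set; }   // head movements (3 DOF) used as body gestures
    public double HeadTiltDegrees { get; set; }
    public string LightRingColor { get; set; }   // light ring feedback on the robot's status
}

public static class EmotionLibrary
{
    // Example: a "sadness" enactment combining the channels for one utterance.
    public static EmotionEnactment Sadness(string utterance) => new EmotionEnactment
    {
        Utterance = utterance,
        FacialAnimation = "sad_face",
        HeadPanDegrees = 0,
        HeadTiltDegrees = -10,
        LightRingColor = "blue"
    };
}
```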

3.3. Experimental Procedure

We will recruit 21 graduate students as participants, all of whom should be familiar with interacting with diverse computing technology.
The experimental design is within-subjects. That is, all participants will play against each type of rival (P, R, A and C). To account for learning and fatigue, the order in which they play each rival will be balanced, but they play all 8 games against each type of rival before moving to the next condition.
The procedure will be as follows:
  • Participants are instructed about the objectives of the experiment and are then asked to sign a consent form.
  • Instructions for playing are given to participants with a presentation and participants are asked to play a sample game. The participant is asked to make eight offers in the range 0–100 for the ultimatum game. They are told that these offers will be used to play against other players offline.
  • Participants play against each type of rival in the pre-defined sequence. An initial waiting period of 2 min is used to collect physiological signals at baseline.
  • Participants play 8 consecutive games against each rival.
  • Each game starts by showing the participant a fixation point for a period of 10, 15, 20 or 25 s (randomly selected). After that, an image of the proposer (avatar or person) is shown for 10 s. In the case of the robot, it wakes up and greets the participant, and in the case of the computer, a greeting message appears on the screen. Then the rival makes an offer of the form “I offer you 20 points and will keep 80” and indicates to the participant that they have 10 s to decide whether to accept or reject it. After 10 s the rival asks for the participant’s decision. The total number of points obtained is updated and displayed. The screen again shows a fixation point before a new game starts.
  • Once subjects complete all 32 games (8 per type of rival), they are given a snack.
Each type of rival makes 4 fair and 4 unfair offers. The four fair offers propose to split 100 points equally (50–50), while the four unfair offers include two 80–20 splits and two 90–10 splits. The order in which these offers are given is decided at random. While the rational decision is to accept all offers greater than zero, there is ample experimental evidence that offers of less than 20% are often rejected [28].
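A minimal sketch of this per-rival offer schedule (our illustration, not the experiment code):

```csharp
// Sketch of the per-rival offer schedule described above: four fair 50–50
// offers plus two 80–20 and two 90–10 unfair offers, presented in random order.
// Each value is the number of points offered to the responder out of 100.
using System;
using System.Collections.Generic;
using System.Linq;

public static class OfferSchedule
{
    public static List<int> Build(Random rng)
    {
        var offers = new List<int> { 50, 50, 50, 50, 20, 20, 10, 10 };
        return offers.OrderBy(_ => rng.Next()).ToList(); // shuffle the eight offers
    }
}
```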

3.4. Experimental Setup

The experiment was conducted in a laboratory with minimal noise, appropriate lighting and constant temperature. The participant was asked to sit at a desk on which there were a social robot (R) and a computer used for the three other rivals (P, A, C) (see Figure 3). We gave the participant a smartwatch (Huawei Watch 2 Classic) to measure heart rate and placed the finger straps with the electrodes of the galvanic skin response sensor (Grove GSR sensor) on the middle and index fingers of their left hand.
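The paper does not describe how the GSR samples were logged. Purely as an illustration, the sketch below assumes the sensor is read by a microcontroller that streams one numeric value per line over a serial connection; the port name, baud rate and sample format are placeholders.

```csharp
// Illustrative only: assumes a microcontroller streams one GSR reading per
// line over a serial port. Port name, baud rate and format are placeholders,
// not details taken from the paper.
using System;
using System.IO.Ports;

public static class GsrLogger
{
    public static void Record(TimeSpan duration, Action<DateTime, double> onSample)
    {
        using var port = new SerialPort("COM3", 9600);
        port.Open();
        var stop = DateTime.UtcNow + duration;
        while (DateTime.UtcNow < stop)
        {
            var line = port.ReadLine();                // one reading per line
            if (double.TryParse(line, out var value))
                onSample(DateTime.UtcNow, value);      // timestamped GSR sample
        }
    }
}
```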

4. Preliminary Results

A total of 21 subjects participated in the experiment (7 female, 14 male), which was conducted over a period of three consecutive days. Each session lasted approximately 35 min to complete the 32 games. The experiment took place at a research university in Ensenada, Mexico.
On average, the offers made offline by the participants before playing against the avatar/PC/person/robot were 43.63 points per game, with a standard deviation of 10.22. Only two players offered the opponent more than half (65% and 51.25%), and two offered on average 30% or less. This is consistent with reports in the literature that proposers tend to offer between 40% and 45% [28]. There is a moderate correlation of 59% between the offers made and the offers accepted by the participants.
Table 1 shows a summary of the accepted and rejected offers. As expected, most fair offers (50–50) were accepted (91.67%). One participant showed very unusual behavior, rejecting all but one fair offer; another accepted only half of the fair offers. Without these participants, the acceptance rate of fair offers increases to 97.4%. In contrast, most unfair offers were rejected (only 24.4% were accepted), with the acceptance rate diminishing as the offer became more unfair (33.33% for 80–20 offers versus only 15.45% for 90–10 offers). This is also consistent with expectations. One player accepted all offers (fully rational behavior) and another accepted all but one unfair offer. To show the variability of the results, Figure 4 presents the average number of offers accepted with 95% confidence intervals.
Contrary to our expectations, however, players accepted more unfair offers when playing against other humans (30%) than when playing against a PC (19%), with the acceptance rates for the avatar (26%) and robot (23%) conditions lying in between. These results suggest that the players did not have negative emotional reactions to being given unfair offers by other individuals.
Female participants accepted only 14% of the unfair offers made by the PC, a much lower acceptance rate than when playing against an avatar (36%), a robot (36%) or a person (39%) (see Table 2). The variation for male players is much lower, and for very unfair offers they accepted on average 11% of the offers in all conditions.
We illustrate the emotional effect of unfair offers on participants by presenting some of the results of participant P1 in the social robot condition. Changes in the autonomic nervous system can produce changes in electrodermal activity; in particular, an increase in sympathetic activity produces sweat and an increase in galvanic skin response [29]. It is thus expected that individuals who react emotionally to unfair offers will experience an increase in electrodermal activity a few seconds after the stimulus (the unfair offer) is received. P1 played eight games with the robot personalities (mean game duration = 37.85 s, SD = 2.94 s), where the duration depends on the length of the robot’s utterances and the time it took the user to make a decision. Figure 5 (bottom) shows the log of game 2 between P1 and the social robot. In this game, the robot proposed an unfair offer (90–10) to P1, who rejected it. There is an evident spike in the GSR values a few seconds after the offer is made (waiting time), arguably associated with the participant’s arousal when considering an offer that she interprets as unfair (see Figure 5 (top)). The same pattern is observed in the GSR signal of game 6, where P1 rejected another unfair offer from the robot (90–10). Both cases are consistent with evidence of skin conductance responses occurring 1–5 s after the offer is presented [25]. However, the signal behavior is clearly different in games 4 and 8, where the robot proposed fair offers that were accepted by P1.
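As an illustration of the kind of analysis this suggests (not the authors' actual processing pipeline), a skin conductance response could be flagged whenever the GSR signal rises above its pre-offer baseline within the 1–5 s window after the offer; the threshold below is an assumed, configurable value.

```csharp
// Illustrative detector (not the authors' pipeline): flag a skin conductance
// response if the GSR peak 1–5 s after the offer exceeds the pre-offer
// baseline by a configurable fraction.
using System;
using System.Linq;

public static class ScrDetector
{
    // samples: (secondsRelativeToOffer, gsrValue) pairs for one game.
    public static bool HasResponse((double T, double Gsr)[] samples, double thresholdFraction = 0.05)
    {
        var baseline = samples.Where(s => s.T < 0).Select(s => s.Gsr).DefaultIfEmpty(0).Average();
        var window = samples.Where(s => s.T >= 1 && s.T <= 5).Select(s => s.Gsr).ToList();
        if (baseline <= 0 || window.Count == 0)
            return false;
        return window.Max() > baseline * (1 + thresholdFraction);
    }
}
```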

5. Discussion

The ratio of accepted fair and unfair offers is consistent with those reported in the literature, with most fair offers being accepted and fewer offers accepted as they became more unfair. However, the comparison among the different opponents (person, computer, avatar and robot) showed unexpected findings.
The proportion of accepted unfair offers was significantly higher when playing against humans than against the computer. One possible explanation is that empathy played a role in decision making.
Moreover, the ratio of unfair offers accepted when playing against avatars and robots is also lower than when playing against a human. The acceptance rate of unfair offers is highest when playing against a person and lowest when playing against a PC (30% versus 19%). It is interesting to note that the avatar has the second highest rate of accepted unfair offers (26%), followed closely by the robot (23%). One possible explanation is that, rather than getting angry at an unfair proposal, the players reacted with empathy. In any case, the fact that the avatar and robot conditions fall between these extremes suggests that players perceive both of them as somewhere in between a person and a computer. Interestingly, the percentage of very unfair offers (90–10) accepted in the avatar and robot conditions is higher than that for the PC and person conditions. This could indicate that participants had more empathy toward these agents than toward a PC or another person.
While we tried to create in participants the perception that each game they played was independent of the next, we did not fully succeed. While playing, some participants made comments indicating that they would, for instance, reject an unfair offer in the hope that this would lead the opponent to make a better offer in the next game.

6. Conclusions

We performed an experiment to assess whether affective avatars and robots influence the emotional state of users, leading them to irrational behavior by rejecting unfair proposals. As expected, most unfair offers were rejected. However, the behavior of users playing against avatars or robots more closely resembles their behavior when playing against a fellow human being than when playing against a computer without an affective embodied agent. This shows that, despite being artificial entities, their mere presence in the interaction significantly affects user behavior.
The affective agents implemented for this study were designed to have limited emotional expressions, so that only their presence, and not their gestures, would influence decision making. In future experiments, we plan to explore how affective responses by the agents influence players. In addition, we plan to conduct a similar experiment in other countries to explore cultural differences.

Funding

This research was partially funded by CONACYT grant number A1-S-11287 and the UCLM research groups funding 2019-GRIN-26902, as well as the support for Research Assistants from the JCCM.

Acknowledgments

We especially want to thank the people who participated in this experiment. Many thanks also to Ángel Pavón for helping with the development of the material related to the avatar-based game.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cassell, J.; Sullivan, J.; Prevost, S.; Churchill, E.F. Embodied Conversational Agents; The MIT Press: Cambridge, MA, USA, 2000.
  2. Wainer, J.; Feil-Seifer, D.J.; Shell, D.A.; Mataric, M.J. The role of physical embodiment in human-robot interaction. In Proceedings of the ROMAN 2006—The 15th IEEE International Symposium on Robot and Human Interactive Communication, Hatfield, UK, 6–8 September 2006; pp. 117–122.
  3. Rosenthal-von der Pütten, A.M.; Krämer, N.C.; Herrmann, J. The Effects of Humanlike and Robot-Specific Affective Nonverbal Behavior on Perception, Emotion, and Behavior. Int. J. Soc. Robot. 2018, 10, 569–582.
  4. Beck, A.; Cañamero, L.; Bard, K.A. Towards an Affect Space for robots to display emotional body language. In Proceedings of the 19th International Symposium in Robot and Human Interactive Communication, Viareggio, Italy, 13–15 September 2010; pp. 464–469.
  5. Picard, R.W. Affective Computing; MIT Press: Cambridge, MA, USA, 1997.
  6. Castellano, G.; Pereira, A.; Leite, I.; Paiva, A.; McOwan, P.W. Detecting User Engagement with a Robot Companion Using Task and Social Interaction-based Features. In Proceedings of the 2009 International Conference on Multimodal Interfaces, ICMI-MLMI ’09, Cambridge, MA, USA, 2–4 November 2009; pp. 119–126.
  7. Güth, W.; Schmittberger, R.; Schwarze, B. An experimental analysis of ultimatum bargaining. J. Econ. Behav. Organ. 1982, 3, 367–388.
  8. Picard, R.W. Affective Computing; MIT Media Laboratory Perceptual Computing Section Technical Report No. 321; MIT Press: Cambridge, MA, USA, 1995; Volume 2139.
  9. Hopkins, I.M.; Gower, M.W.; Perez, T.A.; Smith, D.S.; Amthor, F.R.; Wimsatt, F.C.; Biasini, F.J. Avatar assistant: Improving social skills in students with an ASD through a computer-based intervention. J. Autism Dev. Disord. 2011, 41, 1543–1555.
  10. Ortiz, A.; del Puy Carretero, M.; Oyarzun, D.; Yanguas, J.J.; Buiza, C.; Gonzalez, M.F.; Etxeberria, I. Elderly users in ambient intelligence: Does an avatar improve the interaction? In Universal Access in Ambient Intelligence Environments; Springer: Berlin/Heidelberg, Germany, 2007; pp. 99–114.
  11. Johnson, E.; Hervás, R.; Gutiérrez López de la Franca, C.; Mondéjar, T.; Ochoa, S.F.; Favela, J. Assessing empathy and managing emotions through interactions with an affective avatar. Health Inform. J. 2018, 24, 182–193.
  12. Johnson, E.; Hervás, R.; Gutiérrez-López-Franca, C.; Mondéjar, T.; Bravo, J. Analyzing and predicting empathy in neurotypical and nonneurotypical users with an affective avatar. Mob. Inf. Syst. 2017, 2017.
  13. Bamidis, P.D.; Frantzidis, C.A.; Konstantinidis, E.I.; Luneski, A.; Lithari, C.; Klados, M.A.; Bratsas, C.; Papadelis, C.L.; Pappas, C. An integrated approach to emotion recognition for advanced emotional intelligence. In International Conference on Human-Computer Interaction; Springer: Berlin/Heidelberg, Germany, 2009; pp. 565–574.
  14. Nasoz, F.; Lisetti, C.L. MAUI avatars: Mirroring the user’s sensed emotions via expressive multi-ethnic facial avatars. J. Vis. Lang. Comput. 2006, 17, 430–444.
  15. Ko, K.E.; Yang, H.C.; Sim, K.B. Emotion recognition using EEG signals with relative power values and Bayesian network. Int. J. Control. Autom. Syst. 2009, 7, 865.
  16. Bailenson, J.N.; Yee, N.; Merget, D.; Schroeder, R. The effect of behavioral realism and form realism of real-time avatar faces on verbal disclosure, nonverbal disclosure, emotion recognition, and copresence in dyadic interaction. Presence Teleoperators Virtual Environ. 2006, 15, 359–372.
  17. Fong, T.; Nourbakhsh, I.; Dautenhahn, K. A survey of socially interactive robots. Robot. Auton. Syst. 2003, 42, 143–166.
  18. Fasola, J.; Mataric, M.J. Using Socially Assistive Human–Robot Interaction to Motivate Physical Exercise for Older Adults. Proc. IEEE 2012, 100, 2512–2526.
  19. Dautenhahn, K. Embodiment and Interaction in Socially Intelligent Life-Like Agents; Springer: Berlin/Heidelberg, Germany, 1999; pp. 102–141.
  20. Jeong, S.; Breazeal, C.; Logan, D.; Weinstock, P. Huggable: The Impact of Embodiment on Promoting Socio-emotional Interactions for Young Pediatric Inpatients. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18), Montreal, QC, Canada, 21–26 April 2018; ACM Press: New York, NY, USA, 2018; pp. 1–13.
  21. Sandoval, E.B.; Brandstetter, J.; Bartneck, C. Can a robot bribe a human? The measurement of the negative side of reciprocity in human robot interaction. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Christchurch, New Zealand, 7–10 March 2016; pp. 117–124.
  22. Nishio, S.; Ogawa, K.; Kanakogi, Y.; Itakura, S.; Ishiguro, H. Do robot appearance and speech affect people’s attitude? Evaluation through the Ultimatum Game. In Proceedings of the 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, 9–13 September 2012; pp. 809–814.
  23. Breazeal, C. Emotion and sociable humanoid robots. Int. J. Hum. Comput. Stud. 2003, 59, 119–155.
  24. Västfjäll, D.; Slovic, P.; Burns, W.J.; Erlandsson, A.; Koppel, L.; Asutay, E.; Tinghög, G. The Arithmetic of Emotion: Integration of Incidental and Integral Affect in Judgments and Decisions. Front. Psychol. 2016, 7, 325.
  25. Van’t Wout, M.; Kahn, R.S.; Sanfey, A.G.; Aleman, A. Affective state and decision-making in the Ultimatum Game. Exp. Brain Res. 2006, 169, 564–568.
  26. Dulleck, U.; Schaffner, M.; Torgler, B. Heartbeat and Economic Decisions: Observing Mental Stress among Proposers and Responders in the Ultimatum Bargaining Game. PLoS ONE 2014, 9, e108218.
  27. Cruz-Sandoval, D.; Favela, J. Semi-autonomous Conversational Robot to Deal with Problematic Behaviors from People with Dementia. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2017; Volume 10586, pp. 677–688.
  28. Knight, S.J. Fairness or anger in ultimatum game rejections? J. Eur. Psychol. Stud. 2012, 1, 2–14.
  29. Kreibig, S.D. Autonomic nervous system activity in emotion: A review. Biol. Psychol. 2010, 84, 394–421.
Figure 1. An avatar offers a 50–50 split to the participant, who plays as a responder.
Figure 2. (a) The robot Eva is thinking and waiting for the participant’s answer during 10 s; (b) Eva is listening to the participant’s answer.
Figure 3. Experimental setup.
Figure 4. Average number of fair (left) and unfair (right) offers accepted per condition, with confidence values.
Figure 5. The GSR signals from P1 of unfair rejected offers (game 2 and 6) and fair accepted offers (4 and 8) (top). Log of game 2 (bottom).
Table 1. Summary of results. Percentage of accepted fair and unfair offers per condition.

% Approved      PC     Avatar   Robot   Person   Average
Fair offers     92     93       90      92       91.67
Unfair offers   19     26       23      30       24.40
(80–20)         26     36       26      45       33.33
(90–10)         12     17       19      14       15.48
Table 2. Gender differences in the percentage of offers accepted per condition.

% Approved      PC     Avatar   Robot   Person
Female
Fair offers     96     100      93      100
Unfair offers   14     36       36      39
(80–20)         14     43       36      57
(90–10)         14     29       36      21
Male
Fair offers     89     89       89      88
Unfair offers   21     21       16      25
(80–20)         32     32       21      39
(90–10)         11     11       11      11
