Sensors 2013, 13(11), 15549-15581; doi:10.3390/s131115549

A Multimodal Emotion Detection System during Human–Robot Interaction

1 Robotics Lab, Universidad Carlos III de Madrid, Av. de la Universidad 30, Leganés, Madrid 28911, Spain
2 Institute for Systems and Robotics (ISR), North Tower, Av. Rovisco Pais 1, Lisbon 1049-001, Portugal
* Author to whom correspondence should be addressed.
Received: 7 August 2013; in revised form: 24 September 2013 / Accepted: 22 October 2013 / Published: 14 November 2013
(This article belongs to the Section Physical Sensors)
Abstract: This paper presents a multimodal user-emotion detection system for social robots. The system is intended for use during human–robot interaction and is integrated into the robot's overall interaction framework: the Robotics Dialog System (RDS). Two modalities are used to detect emotions: voice and facial expression analysis. To analyze the user's voice, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), written in the ChucK language. For emotion detection in facial expressions, another component, Gender and Emotion Facial Analysis (GEFA), has also been developed; it integrates two third-party solutions: the Sophisticated High-speed Object Recognition Engine (SHORE) and the Computer Expression Recognition Toolbox (CERT). Once GEVA and GEFA produce their results, a decision rule is applied to combine the information given by both. The outcome of this rule, the detected emotion, is passed to the dialog system through communicative acts. Each communicative act thus conveys, among other things, the detected emotion of the user to the RDS, which can then adapt its strategy to achieve greater user satisfaction during the human–robot dialog. Each of the new components, GEVA and GEFA, can also be used independently, and both are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results of applying this decision rule in these experiments show a high success rate in automatic user-emotion recognition, improving on the results given by the two information channels (audio and visual) separately.
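The abstract describes combining per-modality emotion estimates with a decision rule, but does not specify the rule itself. As an illustration only, the following is a minimal sketch of one plausible confidence-weighted fusion of two modality outputs; all function and variable names here are hypothetical, and the weighting scheme is an assumption, not the rule set empirically in the paper.

```python
# Hypothetical sketch: fusing per-emotion confidence scores from a
# voice analyzer (GEVA-like) and a facial analyzer (GEFA-like).
# The actual decision rule in the paper was tuned experimentally
# and is not reproduced here.

def fuse_emotions(voice_scores, face_scores, voice_weight=0.5):
    """Combine per-emotion confidences from two modalities.

    voice_scores, face_scores: dicts mapping emotion label to a
    confidence in [0, 1]. Returns the label with the highest
    weighted score.
    """
    labels = set(voice_scores) | set(face_scores)
    fused = {
        label: voice_weight * voice_scores.get(label, 0.0)
               + (1.0 - voice_weight) * face_scores.get(label, 0.0)
        for label in labels
    }
    return max(fused, key=fused.get)

# Example: the facial channel strongly indicates happiness,
# while the voice channel is ambiguous.
voice = {"happy": 0.4, "neutral": 0.5, "sad": 0.1}
face = {"happy": 0.8, "neutral": 0.15, "sad": 0.05}
print(fuse_emotions(voice, face))  # -> happy
```

With equal weights, "happy" wins here (0.6 versus 0.325 for "neutral"); shifting `voice_weight` toward one modality models trusting that channel more, which is one simple way the relative reliability found in the experiments could be encoded.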
Keywords: emotion recognition; affective computing; human–robot interaction; dialog systems; FACS


Cite This Article

MDPI and ACS Style

Alonso-Martín, F.; Malfaz, M.; Sequeira, J.; Gorostiza, J.F.; Salichs, M.A. A Multimodal Emotion Detection System during Human–Robot Interaction. Sensors 2013, 13, 15549-15581.


Sensors EISSN 1424-8220, published by MDPI AG, Basel, Switzerland.