Open Access Article

Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation

RoboLab, Robotics and Artificial Vision Laboratory, University of Extremadura, Escuela Politécnica, Avenida de la Universidad s/n, Cáceres, Spain
Author to whom correspondence should be addressed.
Sensors 2014, 14(5), 7711-7737
Received: 29 January 2014 / Revised: 22 April 2014 / Accepted: 22 April 2014 / Published: 28 April 2014
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Spain 2013)


This paper presents a multi-sensor humanoid robotic head for human-robot interaction. The design of the robotic head, Muecas, is based on ongoing research into the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the natural-language modalities of speech, body language and facial expressions. The robotic head has 12 degrees of freedom in a human-like configuration, including eyes, eyebrows, mouth and neck, and was designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided, along with the design of the most complex controllers. Muecas can be controlled directly through FACS (the Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third-party platforms and encourages the development of both imitation and goal-based systems: imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To demonstrate the flexibility and reliability of the robotic head, the paper presents a software architecture that detects, recognizes, classifies and generates facial expressions in real time using FACS. This system has been implemented on the robotics framework RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time operation of the whole system, including recognition and imitation of human facial expressions.
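The abstract describes driving the head's actuators directly from FACS Action Units, with expressions composed from AU combinations. As a rough illustration of that idea (the Action Unit numbers and the happiness combination AU6 + AU12 are standard FACS, but the actuator names, gains and expression blends below are hypothetical assumptions, not Muecas' actual joint map), a minimal sketch:

```python
# Hypothetical sketch of FACS-driven expression synthesis for a robotic head.
# Each Action Unit (AU) contributes normalized offsets to a set of actuators;
# an expression is a weighted blend of AUs, clipped to the actuator range.

# Illustrative AU -> actuator gains (NOT Muecas' real joint map).
AU_TO_ACTUATORS = {
    4:  {"brow_left": -0.6, "brow_right": -0.6},   # AU4: brow lowerer
    6:  {"cheek_left": 0.4, "cheek_right": 0.4},   # AU6: cheek raiser
    12: {"mouth_left": 0.8, "mouth_right": 0.8},   # AU12: lip corner puller
}

# Illustrative expression -> AU weights.
EXPRESSIONS = {
    "happiness": {6: 1.0, 12: 1.0},
    "anger": {4: 1.0},
}

def synthesize(expression, intensity=1.0):
    """Blend the AU activations of an expression into per-actuator
    targets, clipped to the normalized range [-1, 1]."""
    targets = {}
    for au, weight in EXPRESSIONS[expression].items():
        for actuator, gain in AU_TO_ACTUATORS[au].items():
            value = targets.get(actuator, 0.0) + intensity * weight * gain
            targets[actuator] = max(-1.0, min(1.0, value))
    return targets

pose = synthesize("happiness", intensity=0.5)
```

Because the interface is expressed in AU space rather than joint space, a recognition module (or any third-party FACS source) can drive the head without knowing its mechanics, which is the portability benefit the abstract attributes to FACS control.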
Keywords: human robot interaction; robotic head; imitation
This is an open access article distributed under the Creative Commons Attribution License (CC BY 3.0).

MDPI and ACS Style

Cid, F.; Moreno, J.; Bustos, P.; Núñez, P. Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation. Sensors 2014, 14, 7711-7737.

Sensors EISSN 1424-8220. Published by MDPI AG, Basel, Switzerland.