Sensors 2013, 13(9), 12406-12430; doi:10.3390/s130912406
Article

Teaching Human Poses Interactively to a Social Robot

Victor Gonzalez-Pacheco 1,*, Maria Malfaz 1, Fernando Fernandez 2 and Miguel A. Salichs 1
Received: 13 June 2013; in revised form: 15 August 2013 / Accepted: 5 September 2013 / Published: 17 September 2013
(This article belongs to the Section Physical Sensors)
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract: The main activity of social robots is to interact with people. To do so, a robot must be able to understand what the user is saying or doing. Typically, this capability is either pre-programmed or acquired through controlled learning processes executed before the social interaction begins. This paper presents a software architecture that enables a robot to learn poses in a similar way to how people do: by listening to its teacher’s explanations and acquiring new knowledge in real time. The architecture relies on two main components: an RGB-D (Red, Green, Blue, Depth)-based visual system, which gathers the user’s examples, and an Automatic Speech Recognition (ASR) system, which processes the speech describing those examples. The robot learns the poses the teacher shows to it while maintaining a natural interaction with the teacher. We evaluate our system with 24 users who teach the robot a predetermined set of poses. The experimental results show that, with only a few training examples, the system reaches high accuracy and robustness. The method shows how data from the visual and auditory systems can be combined to acquire new knowledge in a natural manner, enabling robots to learn from users even if they are not experts in robotics.
Keywords: interactive learning; human–robot interaction; robot learning
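The sketch below is a minimal illustration, not the authors' implementation, of the interactive learning loop the abstract describes: the RGB-D visual system supplies a feature vector for each pose example, the ASR system supplies the teacher's spoken label, and the robot stores the labelled examples and recognizes new poses from only a few of them. All names here (PoseLearner, add_example, classify) and the use of a nearest-neighbour rule are illustrative assumptions.

```python
import math
from collections import defaultdict


class PoseLearner:
    """Stores labelled pose examples and classifies new poses by nearest neighbour."""

    def __init__(self):
        # label -> list of feature vectors (e.g., normalised 3D joint positions
        # extracted from the RGB-D skeleton)
        self.examples = defaultdict(list)

    def add_example(self, label, features):
        """Called when the teacher shows a pose and names it via speech."""
        self.examples[label].append(tuple(features))

    def classify(self, features):
        """Return the label of the closest stored example, or None if nothing is known."""
        best_label, best_dist = None, float("inf")
        for label, vectors in self.examples.items():
            for v in vectors:
                d = math.dist(v, features)  # Euclidean distance in feature space
                if d < best_dist:
                    best_label, best_dist = label, d
        return best_label


# Usage: pair one visual feature vector (a toy 2D stand-in) with the spoken label.
learner = PoseLearner()
learner.add_example("hands up", [0.9, 0.1])      # features from the visual system
learner.add_example("arms crossed", [0.1, 0.8])  # label from the ASR system
print(learner.classify([0.85, 0.15]))            # -> "hands up"
```

With such an instance-based scheme, each new teacher demonstration is added immediately to the example store, which is what allows recognition to improve in real time after only a handful of examples.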



