Multimodal Technologies Interact. 2018, 2(4), 64;

ERIKA—Early Robotics Introduction at Kindergarten Age
MASCOR Institute, FH Aachen University of Applied Sciences, Eupener Str. 70, 52066 Aachen, Germany
Knowledge-Based Systems Group, RWTH Aachen University, 52056 Aachen, Germany
Correspondence: [email protected]
These authors contributed equally to this work.
Received: 27 July 2018 / Accepted: 14 September 2018 / Published: 27 September 2018


In this work, we report on our attempt to design and implement an early introduction to basic robotics principles for children at kindergarten age. One of the main challenges of this effort is to explain complex robotics content in such a way that pre-school children can follow the basic principles and ideas using examples from their world of experience. What sets our effort apart from other work is that part of the lecturing is actually done by a robot itself and that the quiz at the end of the lesson is conducted with robots as well. The humanoid robot Pepper from Softbank Robotics, a great platform for human–robot interaction experiments, presented a lecture on robotics by reading out the content to the children using its speech synthesis capability. A quiz in the style of the Runaround game show after the lecture activated the children to recap what they had learned about how mobile robots work in principle. In this quiz, two LEGO Mindstorms EV3 robots were used to implement a strongly interactive scenario. Exposed to mobile robots that also reacted to them, the children were very excited and, at the same time, very concentrated. We received very positive feedback from the children as well as from their educators. To the best of our knowledge, this is one of only few attempts to use a robot like Pepper not as a tele-teaching tool, but as the teacher itself in order to engage pre-school children with complex robotics content.
robotics; kindergarten; education; robot operating system; ROS; Pepper; human–robot interaction; HRI

1. Introduction

Today’s world is increasingly characterized by technological advancement. One of the most striking components of digitalization is the ever more popular artificial intelligence, which often appears in embodied form, that is, as robots.
In order to prepare coming generations for a future in which robots are an integral part of everyday life, we designed, implemented and conducted a teaching project for an early robotics introduction for children at kindergarten age. We have a substantial background in teaching robotics at the university level (both in Bachelor and in Master courses). We have also given summer courses and held one-day events for school students. We now extend these previous excursions into pre-university education to children in pre-school and kindergarten. While we do have experience with teaching robotics, it is focused on higher education. We are not educators for pre-school or kindergarten, and this report is not a study. Hence, the contribution of this work lies more in the technical implementation and the lessons learnt.
To this end, we took our basic robotics introduction course and tried to boil it down to its core principles. We made an effort to illustrate these principles so that young children can relate to them as much as possible. Then, we tried to spice things up in terms of how we convey the content. More precisely, we enrich the ‘lecture’ part of the event with videos and animation. Furthermore, we let a real robot, namely Softbank’s Pepper, present some essential parts of the lecture. Finally, the event is concluded by a quiz designed to be engaging and entertaining. The quiz is implemented in the form of a robot race between two teams. Every team has a robot that moves forward whenever the team gives the right answer and stays where it is if the answer is wrong. The robots used for this quiz were LEGO Mindstorms EV3, built to resemble the robot WALL-E known from the movie.
The rest of the paper is organized as follows. We first give a brief review of related work and set our effort apart from existing work in pre-school robotics education. Then, we describe and explain our conceptual design before we discuss important issues of the implementation. Finally, we analyse and discuss lessons learnt from a pilot instance conducted in a local kindergarten.

2. Related Work

Before going into detail about our approach and pilot run, we review related work in different dimensions. First, we look at the connections between robotics and education. Then, we take a particular look at robots used in pre-school activities. Finally, we review existing applications of the robot platform Pepper that we use.

2.1. Robotics and Education

Robotics is often used as a tool to foster interest and conduct early education in technology and engineering (e.g., [1,2]). The main focus lies on K–12 education to engage learners with STEM (Science, Technology, Engineering, Mathematics) subjects. The field of educational robotics is tackling this problem. Several initiatives exist to spark interest in young learners, for instance, the Roberta Initiative [3], the FIRST Lego League [4], or RoboCupJunior [5]. Besides these initiatives, the potential of using robots in school education (for different purposes) is widely acknowledged, see, e.g., [6]. While the mentioned initiatives mainly make use of a project-based approach to support their teaching message, there are also a number of approaches following constructivism or constructionism. For instance, Cejka et al. [7] describe a constructivist learning approach following Papert [8] for K–12 learners. A review of research trends in robotics education for young children has recently been published in [9]. This paper gives a thorough overview of the different approaches taken in educational robotics. In our own previous research, we were also engaged in robotics education for K–12 learners and university students coming from a disadvantaged background [10,11]. In the work reported in the present article, we now turn to pre-school education.
There is other research related to our approach concerned with the use of telepresence robots in education. Ref. [12] explores how telepresence robots can be used to allow elderly people to teach students from home. Robots may be operated by children [13] or may be used to help overcome a language barrier [14]. In [15], a telepresence robot is used for English tutoring. The tele-teaching robot Engkey is presented in [16]; it is tele-operated by a human English teacher from a remote site. An overview of telerobot-assisted language learning is given in [17]. In contrast to the works mentioned, we do not intend to make a human act through the body of a robot; instead, we want the robot to act on its own.

2.2. Robots in Pre-School

In this paper, we report on an initiative to spark first interest in the field of robotics at a pre-school level. Rather than setting up a whole curriculum or using robots as a vehicle for teaching STEM subjects, we aim at giving pre-school children a positive experience with the topic of robotics. They should experience that robotics is an interesting field and that controlling a robot is not some kind of magic, but computer science, engineering, and math. We think it is important to provide this experience in a positive, playful way. The potential of educational robotics has been acknowledged earlier, in particular its potential to facilitate curiosity and creativity [18].
There are a number of related approaches that focus on robotics at a pre-school level. One is KIBO, with the curriculum described in [19]. The focus there is on actually having the children build and program the robot in a seven-week course. Our time frame is much shorter: a single event of about two hours. In addition, we concentrate on giving an introduction to robotics only. Our aim is to explain what is behind the robots that children already see in their daily lives and will see much more of in the upcoming future. This is to spark interest in (the computer science parts of) robotics. It is also to sensitize the children to the complex inner workings of the fancy machines around them. Two novel concepts for educational robotics in kindergartens are presented in [20]. Both concepts are oriented towards hands-on experiences with robots. While one looks at the effect of robotics training on cognitive processes, the other focuses on cross-generational aspects. Ref. [21] reports that robots can be a captivating tool and that they increase children’s desire to explore. Preschoolers have been found to be very open towards robots and even integrate (very simple) robots into their social play [22]. The effectiveness of robot-assisted language learning for pre-schoolers is studied in [23].
A different line of research uses robots as a tool for storytelling. For example, storytelling with a kindergarten socially assistive robot is presented in [24]. The underlying idea is to engage young children in educational games with a robot. The results suggest that this activity improves the children’s performance and that children enjoy the interaction very much. Another interesting work in the field of kindergarten education with robots is [25]. The authors show that better learning results can be achieved in motivating pre-school children to solve algebra exercises when robots support the learning activity. Ref. [26] reports on a number of interaction applications for pre-school children with the robot NAO. In the conclusion, the authors mention that their NAOqi apps could also be used on the robot Pepper, which was used in our work. Ref. [27] suggests that using a socially assistive robot in pre-schooler learning activities increases engagement as well. While it is interesting to analyse the behaviour of robot–child interactions as is done in [28], this is not the aim or a contribution of the work presented here. We also do not conduct a larger study in any sense. Instead, we focus on the technical implementation of using robots to convey basic robotics concepts to pre-school children and evaluate this in a pilot run.

2.3. Socially Interactive Robot Platforms

As motivated already, we want to use a robot to conduct an early introduction to robotics for children in kindergarten. There is a large choice of robot kits that are commonly used to let young learners build and program robots. Our focus, however, is to offer an interesting yet informative experience of basic robotics principles, which is why we chose to let a robot itself conduct large parts of the presentation. Softbank Robotics’ Pepper robot (Figure 1) is an ideal platform for this endeavour. While the robot hardware is not very sophisticated, it is very appealing for humans to interact with.
Pepper was designed as a human–robot interaction (HRI) platform with a number of applications. For instance, the H2020 MuMMER project uses Pepper for its HRI activities [29]. Some of the main applications are information kiosks in large shopping malls. Results on how humans react to Pepper have been published, for example, in [30]. The work most related to ours is [31], where Pepper is used for remote teaching activities. The results there confirm that Pepper is a good platform for HRI, particularly when dealing with children. Apart from that, Pepper has been used in a number of different applications such as museum tour guides or information kiosks at airports or hospitals (e.g., [32,33,34,35,36,37]). Following an analysis in [38], Pepper is a good choice in terms of an affective anthropomorphic platform. In fact, Pepper has already been chosen as a (successful) teaching tool for young children before [39].

3. Conceptual Design

3.1. Course Preparation

As a first step in our endeavour, we posed ourselves a set of questions in order to design an event that would meet our vision sketched earlier. First, we thought about what is important in robotics and what (part of this) we want to and what part we are able to convey to the children at kindergarten age. Secondly, we looked at which sub-audience(s) at kindergarten age we can address with these topics and how heterogeneous this audience may really be. Then, we examined how we can make the children relate to what they should learn and understand. Finally, we considered how the event can be entertaining and informative at the same time and how we can assess what children really memorized and learned.
With the educators of the childcare facility, we coordinated and discussed important points about holding the lecture at the facility. They pointed out that the format of a frontal lecture was very new to the children and were curious whether the children would be able to follow it. As a supporting measure, they proposed that all the children should bring their own chairs into the gym where the lecture took place; this was meant to help the children settle and concentrate. As another important issue, the educators pointed out that, given the children’s shorter attention span, the lecture should include sections where less attention is needed and some activation of the children takes place. For instance, we included video sequences of robot failures, e.g., robots tipping over while trying to open a door, and incorporated a quiz section at the end of the lecture. The educators also remarked, as was already on our checklist, that we should choose all examples from the children’s world of experience. While our initial intention was to present the lecture only to final-year pre-school children, the educators asked us to open the lecture to younger children of the facility as well. They were aware that, with respect to attention span and discipline, this could bear certain risks for conducting the lecture, but they also wanted to expose the younger children to the social robots.
In the remainder of this section, we will present and discuss further details in three blocks, namely in subsections on content, form, and quiz.

3.2. Content

As a first step, we had to select the most important elements of robotics that we want to convey to young children. Drawing on previous experience in teaching robotics in higher education, we distilled that any presentation on mobile autonomous robots needs to include information on the robot hardware itself, on how the system knows where it is, and on how it gets from A to B to solve its task. In more detail, the following agenda builds the basis for our education event:
  • Introduction
    • existing robots and robotic competitions,
    • outline,
  • Sensors
    • different sensors available in general,
    • particular sensors of the robot Pepper,
    • at least one sensor (concept) in more detail,
  • Perception/Mapping
    • different kinds of maps,
    • building a map,
  • Localization
    • using a map to localize,
    • fitting sensor readings / landmarks,
  • Navigation
    • global navigation using path planning,
    • local navigation with collision avoidance.
While many of the topics could be detailed in arbitrary depth, we want to get across the main idea of the basic concepts. Given our target audience, this requires a specific form of presentation.

3.3. Form

To be able to convey the content above to children at kindergarten age, we needed to find examples and explanations that the children can relate to. Furthermore, almost none of the children in our target audience is able to read yet. Hence, using text is not an option in most cases, and even images such as pictograms have to be chosen carefully.
One of the main decisions we made here is to let a robot take over for presenting the content for large parts of the event. This appeared useful for several reasons. For one, the fascination of a machine talking to the children should raise the attention. For another, the information appears even more credible if it is presented by the very machine that the information is about. Finally, reading out textual information solves part of the reading issue mentioned above.
Since even the most sophisticated robots cannot keep up with the variability and fallibility of humans, let alone young children, and since a robot would not have sufficient flexibility to handle every situation that might arise, the parts given by the robot are interleaved with human moderation for answering questions and handling unexpected events.
Even though robots present an exciting element to (very) young children, the total duration of the event must be chosen carefully so as not to overload the audience. In addition, the attention span of children at kindergarten age is rather limited. There are rules of thumb stating that children can concentrate according to the formula: attention span for learning = chronological age + 1 min. Others estimate the ability to concentrate on learning in the age group between 5 and 7 years at about 15 min [40]. Earlier studies found that children at kindergarten age can concentrate on playing with a toy they like for not much more than half an hour [41]. Admittedly, these figures are for focusing on homework exercises and playing with a toy; following a presentation is a different thing. An additional disturbance factor for the attention span of the children was the form of presentation: children in this age group do not have much experience with frontal teaching and need to get used to the new situation first. This is why we chose to lighten up the content with some rather shallow elements such as fun videos, e.g., robots falling over, during which the concentration levels could drop again and the children could relax.
For the content itself, we tried to come up with examples and illustrations that as many of the children as possible can relate to. We give a set of examples of these choices in the following. For introducing the perception hardware of the robot, we first ask the children about their own senses. Thus, it should be easier to realize that robots need such senses too, just in the form of hardware. As an example sensor, we discuss sonar because it offers two great properties. First, bats use sonar to orient themselves, and many children may have heard of this already. Second, many cars use sonar sensors in their parking assistance systems. Hence, a lot of the children might know the beeping noises that their family car or some other vehicle makes. The latter example is exactly what we use to explain how time-of-flight works. Here, the echo metaphor is used as well. Some pictures from the illustration of the sonar sensor are given in Figure 2.
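The time-of-flight principle behind the sonar example can be captured in a few lines. The following snippet is our own illustration (not material shown to the children) of how the measured echo time translates into a distance:

```javascript
// Speed of sound in air at room temperature, in metres per second.
const SPEED_OF_SOUND = 343;

// A sonar sensor measures the round-trip time of a sound pulse.
// The pulse travels to the obstacle and back, so the one-way
// distance is half of speed times time.
function echoTimeToDistance(echoTimeSeconds) {
  return (SPEED_OF_SOUND * echoTimeSeconds) / 2;
}

// Example: an echo arriving after 10 ms means the obstacle is
// roughly 1.7 m away.
const distance = echoTimeToDistance(0.01);
```

This is the same halving that the clap-echo metaphor conveys: the sound has to make the trip twice.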
For explaining the localization (and later the navigation) method, we took a toy horse barn that can be taken apart. By then overlaying a top view picture of the stable with a picture of it partly disassembled with only the walls remaining, we could illustrate a laser-based localization of a robot, in our story, whose job it was to clean the barn. Images from the horse barn illustration are given in Figure 3.

3.4. Quiz

As with every teaching activity, also at this very young age, we wanted to include some form of test in order to check whether the children had memorized the most important elements of the lecture. To make the quiz more entertaining and exciting, and to invite children to participate actively, we opted for a team-style approach that includes robots in a game. The participants are divided into two teams: red and blue. Every team gets assigned one of two identical robots, LEGO Mindstorms EV3 in our case. The two robots are placed on a playing field that resembles a dragstrip-style race track. The two lanes are each marked with a color: red and blue, respectively. A picture of the race track and one of the robots is given in Figure 4.
The quiz works as follows. The teams are asked questions referring to content presented earlier in the lecture. Children can select one of three answers, much like in the “Runaround” quiz show. For every correct answer, the team’s robot moves forward a couple of squares; for a wrong answer, it either lifts its forklift, as if shrugging its shoulders, or turns in a circle instead, thus not getting any closer to the finish line.
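The quiz mechanics described above amount to a small piece of bookkeeping. The track length, step size, and function names in the following sketch are our own choices for illustration, not taken from the event software:

```javascript
// Minimal bookkeeping for the Runaround-style quiz race.
const TRACK_LENGTH = 8;        // number of squares per lane (assumed)
const SQUARES_PER_ANSWER = 1;  // advance per correct answer (assumed)

// Both robots start at the beginning of their lane.
function createRace() {
  return { red: 0, blue: 0 };
}

// Update the race state for one team's answer and return the
// command its robot should execute.
function applyAnswer(race, team, isCorrect) {
  if (isCorrect) {
    race[team] = Math.min(race[team] + SQUARES_PER_ANSWER, TRACK_LENGTH);
    return { robot: team, action: "move_forward", squares: SQUARES_PER_ANSWER };
  }
  // Wrong answer: the robot stays in place and either lifts its
  // forklift (as if shrugging) or turns in a circle.
  const gesture = Math.random() < 0.5 ? "lift_forklift" : "turn_circle";
  return { robot: team, action: gesture, squares: 0 };
}

function hasFinished(race, team) {
  return race[team] >= TRACK_LENGTH;
}
```

Separating the game state from the robot commands keeps the quiz logic testable without any robot attached.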
Apart from offering a very engaging and entertaining element, the quiz offers another opportunity to explain several robotic concepts in practice. After the quiz has taken place, the children can tele-operate the Mindstorms robots with a joystick. In addition, the moderators explain what the robots actually need to do to perform their task. That is, the robots need to read their color sensor’s value to orient themselves on the race track and to decide for how long to instruct the motors to turn, etc.

4. Implementation

We were aware that children in pre-school would possibly not be very enthusiastic about university lecturers teaching them the basics of mobile robotics. In order to engage them with this exciting topic, which also requires a lot of abstract thinking, we decided to have the mobile service robot Pepper present the lecture content. We further integrated LEGO Mindstorms EV3 robots in our Runaround-style quiz. For each correct answer, the robot belonging to the team that gave the answer moved some squares towards the finish line.

4.1. Technical Realization

As humanoid service robots go, Pepper has quite limited capabilities compared, for instance, to TIAGo [42], but it holds quite an appeal to the humans interacting with it. Pepper comes with a number of basic capabilities such as speech synthesis, speech recognition, or face detection, which come pre-installed or can be downloaded from the Softbank Robotics App Store. Moreover, the appeal of the robot immediately stimulates passers-by to interact with it. We learnt this on numerous occasions where we presented the robot to the broader public.
In particular, it comes with a mode called autonomous life, in which Pepper reacts to a set of sentences, answering questions and making human-like gestures while speaking. While the speech detection is quite limited in crowded, human-populated environments, Pepper can nonetheless be used quite well as an information kiosk. Figure 1 shows that it is equipped with a touch panel that can be used to display information and request user input. Information can be exchanged via an HTTP webserver.
In our setup, we use the robot middleware ROS [43] to interact with the sensors and actuators of the robot. We bridge the NAOqi API with ROS making use of the rosbridge package. This allows us to access Pepper’s motors and further built-in capabilities from within the ROS ecosystem. In the ERiKA lecture, we particularly made use of Pepper’s built-in gestures and its speech synthesis module. The whole content of the lecture was prepared as an HTML5 slide show using the reveal.js slide presentation system, which is implemented in JavaScript. In addition to a number of nice presentation features such as slide transitions and overview slides, it comes with a multiplex feature, which allows for controlling the presentation from different hosts. This way, we could advance the lecture and select the next slide either from the touch display installed on Pepper or from an external presentation laptop.
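As a rough illustration of the multiplex setup, a reveal.js configuration along the following lines lets several hosts share one slide show. All values below are placeholders of ours, not the ones used at the event:

```javascript
// reveal.js initialisation options with the multiplex plugin enabled.
// The id identifies the shared session; the secret is only set on a
// host that is allowed to control the slides (followers pass null).
// All values here are placeholders for illustration.
const revealConfig = {
  multiplex: {
    id: "erika-session",                  // shared session identifier
    secret: "change-me",                  // null on pure followers
    url: "http://presenter-laptop:1948"   // socket.io multiplex server
  }
};
// In the browser, this object would be passed to Reveal.initialize(revealConfig).
```

With such a configuration, both Pepper’s tablet and the presentation laptop connect to the same multiplex server and stay on the same slide.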
ROS was connected with some additional JavaScript code via the ROS JavaScript library roslibjs. With this library, we were able to communicate user inputs from the tablet to the high-level ROS application and, vice versa, to display live camera images from the robot in the ERiKA presentation. To connect the browser with ROS, the rosbridge needs to be running on the system. In our configuration, the following hosts and ports were used: Mti 02 00064 i001. At the startup of the presentation, one needs to connect to the rosbridge by calling the ROSLIB initialisation function: Mti 02 00064 i002. As mentioned before, we mainly used the animatedSay action from NAOqi for the lectures. This was encapsulated as a ROS action. The respective JavaScript code for starting the ROS action is: Mti 02 00064 i003. For the children, it was very exciting that Pepper (with some extra JavaScript and ROSLIB code) was able to read out all the content presented on the slides. This is a particularly important feature, as pre-school children are, in general, not able to read.
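Since the listings referred to above appear as images, the following sketch illustrates how such browser-side glue typically looks with roslibjs. The host, port, action name, and goal field layout are assumptions of ours for illustration, not the original configuration:

```javascript
// Sketch of the browser-side glue between the slides and ROS via
// roslibjs. Host, action name and goal fields are assumed, not the
// original setup.

// Build the goal message for the speech action. Kept as a separate,
// pure function so it is easy to test.
function buildSayGoal(text) {
  return { text: text };
}

// Connect to rosbridge and let Pepper read out the given text.
// Assumes a running rosbridge websocket server and that ROSLIB is
// available in the page (e.g. via <script src="roslib.min.js">).
function sayWithPepper(text) {
  const ros = new ROSLIB.Ros({ url: "ws://pepper.local:9090" }); // assumed host

  const sayClient = new ROSLIB.ActionClient({
    ros: ros,
    serverName: "/naoqi/animated_say",                        // assumed action name
    actionName: "naoqi_bridge_msgs/SpeechWithFeedbackAction"  // assumed action type
  });

  const goal = new ROSLIB.Goal({
    actionClient: sayClient,
    goalMessage: buildSayGoal(text)
  });

  goal.on("result", function () {
    console.log("Pepper finished speaking.");
  });
  goal.send();
}
```

Wiring the slide show to such a function means that advancing a slide can simply trigger `sayWithPepper` with that slide’s text.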
For the quiz, two teams of pre-school children had to compete against each other: the blue team against the red team. As described in Section 3.4, for each question, three different answers could be selected. The quiz was organized as a race between two LEGO Mindstorms robots (see Figure 4). For each correct answer, the robot moved a (couple of) square(s) forward; for an incorrect answer, it simply lifted its forklift, as if shrugging its shoulders, or turned in a circle. A team answering all questions correctly would reach the finish line.
Technically, this was realised as follows. The EV3 can be controlled under the Robot Operating System. Using this, we were easily able to encapsulate control commands for the LEGO EV3 as ROS actions and make them available via roslibjs to our HTML front-end application: Mti 02 00064 i004
The above code shows the JavaScript definition for the first of the two EV3 robots (called boobe3e_1) to move forward in case a correct answer was selected by one of the teams.
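The EV3 command described above can be sketched with roslibjs along the following lines. The action server name, message fields, and the number of squares are our assumptions for illustration, not the original code:

```javascript
// Sketch of the quiz front-end commanding one of the EV3 robots via
// roslibjs. The action server name and goal fields are assumed.

// How far a robot advances per correct answer (assumed).
const SQUARES_PER_CORRECT_ANSWER = 1;

// Pure helper building the goal message, easy to test in isolation.
function buildMoveGoal(squares) {
  return { squares: squares };
}

// Send the "move forward" goal to the EV3 of the answering team.
// Assumes `ros` is an already connected ROSLIB.Ros instance.
function moveRobotForward(ros, robotName) {
  const client = new ROSLIB.ActionClient({
    ros: ros,
    serverName: "/" + robotName + "/move_squares", // assumed server name
    actionName: "erika_msgs/MoveSquaresAction"     // assumed action type
  });
  new ROSLIB.Goal({
    actionClient: client,
    goalMessage: buildMoveGoal(SQUARES_PER_CORRECT_ANSWER)
  }).send();
}
// e.g. moveRobotForward(ros, "boobe3e_1") after a correct answer by the red team
```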
In summary, our system uses the rosbridge to access NAOqi functionality within ROS. All functions were encapsulated as ROS actions. Furthermore, the ROS JavaScript library roslibjs allowed us to call NAOqi functions from the HTML front-end that displayed the lecture and the quiz on Pepper’s tablet. Using ROS also allowed us to access and operate the LEGO Mindstorms EV3 robots used in the quiz from within the HTML presentation. Here, we had good experiences with the reveal.js slide presentation system, which also allowed for running the slide show in multiplex mode, i.e., controlling the presentation from an external laptop as well as on Pepper’s tablet. This came in very handy, particularly for the Runaround quiz, where the two teams of children had to select the correct answer on the tablet. We give some more information about the quiz in the next sections.

4.2. Pilot Run

As a pilot run, we conducted a first instance of the above concept in a regional kindergarten.

4.2.1. Setup

After arriving at the premises of the kindergarten, we set up the robot Pepper and the Mindstorms for the quiz in the gym. The robots were all connected to a local access point. Laptops were used to run the presentation on a projector and to start up the robots’ software. A picture of the room is given in Figure 5.
While we were setting up, the pre-school aged children were already given badges to indicate their participation in the quiz scheduled for later. They spent the time colouring these badges, forming two equally sized teams: a red team and a blue team. The team badges are depicted in Figure 6.

4.2.2. Lecture

The children entered the gym bringing their own chairs with them. It took some time for everybody to find an adequate position and for the children to calm down a little. Once everything settled, Pepper began with an introduction. For the course of the lecture, the authors of this paper and Pepper were presenting the slides in an interleaved fashion. While Pepper was ‘reading’ the slides’ content, the human teachers reacted to questions and remarks from the audience and tried to continuously activate the young audience.

4.2.3. Quiz

To give an impression of the quiz, we show two questions and different states of the graphical user interface (GUI) in Figure 7. The upper row shows two questions in their initial state: on the left, the question of how many senses humans have; on the right, the question of what localization is about. The children have three answers to choose from. While we tried to illustrate the possible answers with pictures as much as possible, we still read the three options out loud to allow for a well-informed decision. After both teams have chosen an answer, a ten-second countdown is started (Figure 7, lower row, left image). While there is time left, the teams can still change their answer. When the time is up, the quiz system automatically highlights the right answer (Figure 7, lower row, right image) and initiates the appropriate actions with the teams’ robots on the race track. That is to say, if a team gives the right answer, its robot moves forward; if the team’s answer is wrong, its robot does not move forward but instead turns in a circle or only lifts and lowers its forklift, not getting any closer to the goal. Luckily, both groups reached the goal line in the same (final) step. This, however, might depend on moderator intervention when asking the questions. As a prize for the contestants in the quiz, we prepared certificates and gave out little papercraft sheets for building a Pepper figure (at home).

4.2.4. Free Play

After the quiz, we gave the pre-schoolers the opportunity to tele-operate the Mindstorms robots with a joystick. Unfortunately, one of the two robots had slight network issues, which is why its controller had to be restarted every couple of minutes. Nevertheless, the children were enthusiastic about steering the robot around in the gym. In addition, we let the children and the educators freely interact with Pepper running the manufacturer-supplied autonomous life mode. We let Pepper say some utterances upon user request as well. Finally, the children (and educators) were also given the chance to take pictures with Pepper and the LEGO robots.

5. Pilot Evaluation

To improve the initial design and its implementation, we rely on the responses and feedback from different stakeholders. We give our own observations as well as children’s and educators’ feedback in the following.

5.1. Observations

During the pilot event and in reflections afterwards, the authors made a couple of observations, which we briefly report in the following.
The overall reception of the event by children and educators was very good. While the organizational part needed improvisation at times, this can be assumed to be normal for a pilot. In retrospect, some of the video sequences in the presentation appeared to be too long, most likely because the younger children’s attention span was smaller than expected. Overall, some video clips should rather be split into image sequences. This would allow for more fine-grained interaction with the audience: stopping a video to react to a remark from one of the children, for example, is rather complicated. In the explanation of the time-of-flight concept, we showed a formula for calculating the distance of an obstacle from the time needed for the sound reflection to come back to the sensor. This formula appeared too abstract for children at kindergarten age. We will probably keep it, but we will relate it to how waiting for the echo of a clap in a large reflective hallway is essentially the same thing.
A general improvement with respect to the quiz could be to introduce intermediate summary slides after every section of the presentation. This allows for a quick repetition shortly after new content has been presented. We hope that this increases the memorization of facts and hence also leads to better performance in the quiz later on.

5.2. Children’s Feedback

In the post-processing of the event, the kindergarten collected feedback from the children. We give the translated statements grouped into negative and positive feedback below.
Negative feedback
  • I didn’t like that one of the smaller robots did not work properly.
  • It wasn’t so nice that the robots in the video fell over.
  • It was a pity that Pepper did not drive around.
  • It was a pity that Pepper did not recognize all commands/questions.
  • The smaller children should have stayed for the quiz.
Positive feedback
  • Pepper looked really nice.
  • It was cool to be able to drive around with the smaller robots and that Pepper talked.
  • The films were funny.
  • The quiz was nice.
  • It was nice that Pepper could talk and that we could see her.
  • I like that we were allowed to drive around with the small robots.
  • The quiz was great.
  • There was a prize.
  • That we could talk with Pepper.
  • I can now build a paper-Pepper with my dad.
  • We could tele-operate the small robots with a joystick.

5.3. Educators’ Feedback

Some time after the pilot run, we also received feedback from the educators of the kindergarten, which we summarize in the following. Overall, the children found the event very fascinating, especially meeting the robot Pepper in person. If one assumes an average attention span of 20–30 min for pre-school children (age 5–6), then the attention and concentration that could be observed at the event, in particular among the younger children (age 3–4), were extremely high. For the kids, it was an exciting experience with lots of new information to process.
In the setting at the event, it was very important to connect the new content to the children’s reality. Drawing on the children’s prior experiences and using practical examples allows for a better understanding; at kindergarten age, generic terms and concepts are present only in a very limited form. Every activity and video clip was fascinating for the children. Their concentration could have been even higher had the event been given in a smaller group, as the younger children posed some distraction. The event was ambitious for all children and an amazing challenge for the pre-schoolers. The children are not used to frontal forms of teaching with such large audiences; they know such settings rather from concerts or cinema events, where they take on a passive role and expect to be entertained. The quiz and the free play with the robots served as a perfect conclusion of the event.

6. Conclusions

In this paper, we reported on our efforts to design, implement, and conduct an early introduction to robotics for children at kindergarten age. We laid out our conceptual design and methodological considerations, drawing from our experience in teaching robotics at higher education institutions. What was particularly challenging was dealing with the shorter attention span, finding examples from pre-school children’s world of experience, and boiling down complex contents in such a way that children of that age could follow a frontal lecture of about an hour. It has to be noted that this was the first time the children were exposed to frontal teaching in their kindergarten; they usually know learning only in the form of chair circles, which posed an additional challenge for them. Furthermore, to expose more children to the excitement of having a humanoid robot on the premises, children below the age of 5 were also allowed to attend the lecture, which caused additional distractions during the lecture.
In summary, we can state that Pepper was the right choice for presenting the content. This way, we overcame the hurdle that pre-schoolers, in general, cannot read, and it also raised the children’s attention. Content-wise, to cater to the children’s more limited attention span, we also integrated shallower elements such as video clips of robots tipping over. For the quiz, we used LEGO Mindstorm robots to activate the children for the second hour of the event. Finally, after the quiz, the children could also tele-operate and play around with the Mindstorm robots, which they greatly enjoyed. On top of that, each child participating in the quiz received a participation certificate, and all children got a cut-out sheet to build their own cardboard Pepper robot.
We got very positive feedback from the children as well as from the educators. In addition, we learnt our lessons: it was clearly not adequate to try to teach the formula for calculating the time of flight of an ultrasound signal to that age group. However, when we repeated the lecture with some updated material a few weeks later for 4th-grade primary school pupils, the learners were easily able to understand this formula. In summary, this new learning concept was well received in the local communities, and we received a number of requests to repeat the event in different kindergartens and pre-schools.

Author Contributions

Both authors made equal contributions to all parts of the manuscript.


Funding
This research received no external funding.


Acknowledgments
We would like to thank and acknowledge the intensive support from Christoph Gollock in the technical implementation of this work. In addition, the help of Marcel Stüttgen and Dennis Miltz with technical matters in realizing the pilot is greatly appreciated.

Conflicts of Interest

The authors declare no conflict of interest.


Figure 1. The robot Pepper from Softbank Robotics used in the education activity.
Figure 2. Some pictures of the illustration to explain sonar sensors.
Figure 3. A toy horse barn used to illustrate localization and mapping.
Figure 4. RaceTrack with two LEGO Mindstorm EV3 robots used for the final quiz.
Figure 5. Setup for the ERiKA pilot event in the gym of the kindergarten. In the left part, you can see the robot Pepper waiting for the children to enter. In the middle, you see the controlling laptop(s) and the projector showing the first slide. On the right, there is the race track with the two Mindstorm EV3 robots used for the final quiz.
Figure 6. Pre-school children were divided into two groups, one red and one blue, receiving badges as a form of distinction towards younger children and to form teams.
Figure 7. Two examples of questions used in the quiz and different states of the GUI.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Multimodal Technologies Interact. EISSN 2414-4088. Published by MDPI AG, Basel, Switzerland.