Article

Exploring the Role of Trust and Expectations in CRI Using In-the-Wild Studies

1 Institute of Philosophy, Jagiellonian University, 31-007 Krakow, Poland
2 Department of Mechanical Systems Engineering, Tokyo University of Agriculture and Technology, Tokyo 184-8588, Japan
3 Faculty of Computer Science, Electronics and Telecommunications, AGH University of Science and Technology, 30-059 Krakow, Poland
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Electronics 2021, 10(3), 347; https://doi.org/10.3390/electronics10030347
Received: 30 November 2020 / Revised: 25 January 2021 / Accepted: 27 January 2021 / Published: 2 February 2021
(This article belongs to the Special Issue Applications and Trends in Social Robotics)

Abstract:
Studying interactions of children with humanoid robots in familiar spaces and natural contexts has become a key issue for social robotics. To address this need, we conducted several Child–Robot Interaction (CRI) events with the Pepper robot in Polish and Japanese kindergartens. In this paper, we explore the role of trust and expectations towards the robot in determining the success of CRI. We present several observations from the video recordings of our CRI events and from the transcripts of free-format question-answering sessions with the robot conducted using the Wizard-of-Oz (WOZ) methodology. From these observations, we identify children’s behaviors that indicate trust (or lack thereof) towards the robot, e.g., challenging the robot’s behavior or interacting with it physically. We also gather insights into children’s expectations, e.g., their verification of the robot’s causal agency, or their expectations concerning the robot’s relationships, preferences and physical and behavioral capabilities. Based on our experiences, we suggest some guidelines for designing more effective CRI scenarios. Finally, we argue for the effectiveness of in-the-wild methodologies for planning and executing qualitative CRI studies.

1. Introduction

Due to a growing number of social robots being deployed to interact with children, both neurotypical and those with special needs, the field of Child–Robot Interaction (CRI henceforth) is facing many research challenges. One challenge is to design anthropomorphic manipulators that show human-like behaviors. For this, several machine-learning algorithms are being deployed [1,2]. Another is to extract emotions from motions and gestures, for which recurrent neural networks have been employed [3], and from signals collected using wearable wireless sensors, for which unsupervised machine learning approaches have been proposed [4]. Unsupervised learning is also applied in [5], where deep-learning auto-encoders are used for recognizing human activity and engagement. Advantages and problems of deep learning applications in robotics are discussed in [6], and [7] presented challenges and case studies related to deep learning in robotics.
Another major challenge in CRI is to design and conduct interaction scenarios that are based on trust and that fulfill children’s expectations. To study these two factors in real-life CRI situations, we followed an in-the-wild methodology, in which scenarios are designed for children to interact with a robot in their natural environment (e.g., their classroom) using activities with which they are familiar and which are interesting to them. Our studies were conducted in Poland and Japan with children in the 4–7-year age group and the humanoid robot Pepper.
We believe that the observations of CRI conducted in a non-laboratory environment, as presented in our study here, are necessary to obtain data about voluntary and intuitive behaviors of children, thereby allowing us to assess their trust and expectations towards the robot.

1.1. Distinguishing Trust and Expectations in CRI

Research on studying the human–robot relationship in terms of trust and expectations has a long tradition [8,9,10,11,12]. In the area of child–robot interaction, the issues of trust and expectations are especially important, and recent studies have provided useful insights [13,14,15,16]. Among these, Charisi et al. [17] listed elements such as social interaction (e.g., [18]), social acceptance (see [19]), emotional interaction, and learning process and outcomes [20], which were evaluated with various methods such as self-assessment, behavioral observations, psychophysiological metrics and task performance. Most of these studies were conducted in a laboratory setup rather than in ecological environments.
A few studies suggested predictors for the formation of a child–robot relationship that included appearance, behavior, functionality, adopting a human-like rather than a machine-like role, interacting accordingly to the situation and social skills [21]. For this, some researchers focused on methodologies for designing the robot’s personality in terms of the “Who”, “What” and “How” categories [22]. They emphasized the importance of designing a trustful personality that would meet people’s expectations, as users subconsciously attribute a personality to the robot [23,24].
A general definition of trust, provided by Hancock et al. (2011) [25], states that it is the belief by “an agent that actions prejudicial to the agent’s well-being will not be undertaken by influential others”, for example, a robot. In previous research [9], trust in HRI is considered to be composed of three sets of features: (1) those related to the robot’s features and capabilities; (2) those related to the human user’s traits and beliefs; and (3) those related to the environment. For characterizing the multidimensionality of trust in children’s relationships, some researchers distinguish among its cognitive, emotional and behavioral dimensions [26], while others [27] study features of trust and identify capabilities, ethical behaviors and reliability descriptions as the most measurable and distinctive ones. However, many researchers have emphasized the lack of a generally agreed definition of trust in human–robot interaction in the current literature [28,29] and the urgent need to rectify this [30]. In the context of our study on child–robot interaction, we use a working definition of trust as a set of children’s behaviors indicating their belief that a robot will not undertake actions detrimental to their well-being. We should also emphasize that we are considering trust from a human perspective towards a robot; there exist studies considering trust from the robot’s perspective towards a human [31].
Our goal is to determine ways to observe trust in CRI studies. Children are placed in a setting that creates uncertainty caused by the appearance of the robot, which is an unknown object, and they have to decide whether to perform risk-taking actions in this situation [26]. This setting allows us to observe children’s expressions, attitudes and behaviors associated with trust. Although it has been argued that trust cannot be observed [28], we aim to understand behaviors that indicate that the interaction between the robot and a group of children is based on mutual understanding [32]. We plan to use the checklist proposed by Jian et al. [33] for measuring trust between people and automation: feelings of security, integrity, dependability and reliability. Building on previous research, we analyze trust and its indicators by capturing expressive behaviors, children’s verbalized thoughts, and the interplay between the robot’s behaviors and the children’s reactions. According to Möllering [34], interpretation provides a basis for deciding further engagement and investment in a relationship; therefore, we aim to identify signs of children interpreting the robot’s behavior, and we analyze these interpretations.
The second construct, expectations, has been defined as the perception measured before the interaction with the robot, compared to the participant’s experience during the interaction, i.e., confirmation of initial beliefs [33]. Some researchers focus on the expectations generated by the robot’s appearance and how it changes the participant’s behavior [25,27], while others focus on the influence of the robot’s behavior and personality [9]. Expectation has also been considered a cognitive dimension of trust [9], which, based on the response of the other agent, changes during the interaction. An interesting light on expectations is shed by anthropomorphism, a common phenomenon in human–robot interaction that lets people project human abilities onto robots [29]. As people’s categorizations and expectations of others are inherently linked [30], we can assume that children’s expectations toward the robot somewhat mirror their expectations towards other children or adults. In this research, we use a working definition of expectations as children’s beliefs towards the robot as manifested in their behaviors and verbal expressions. These behaviors and expressions are externalized as elements of surprise, verification of assumptions or questions addressed to the robot. They all provide a window into the initial beliefs of the children towards the newly encountered robot.

1.2. Cognitive States

Although some studies have addressed children’s cognitive development with the support of robots [35], fewer have dealt with the impact of cognitive states on trust [21], where cognitive states refer to the children’s perceptions of, and thinking about, a robot during and after their interaction with it (e.g., anthropomorphism and perceived support of the robot toward the child). In our previous study on developing trust in children towards the robot, we underlined the need to focus on a friendly, accessible design [36]. As trust and expectations relate to the experience of users while interacting with a robot, some studies [22] proposed a methodology for the design of robotic applications, including desired features, to avoid negative CRI experiences, which may prevent children from further interacting with the robot and accepting it [37]. Some researchers based this design process on the idea of co-design [38], which attempts to involve all participants in the design process, whereas others propose designing a robot based on children’s ratings of the robots’ personality attributes [39].

1.3. Existing Research on Developing Trust and Expectations

Recent research provides some insights into how children as young as three years old perceive robot errors and develop trust. These studies show that children’s perception of a robot, and how they interact with it, are based on their understanding of people. For example, in [40], the robot deliberately misled the children, which influenced their subsequent interactions with it and their confidence in it. Another study [41] explored technological and interpersonal types of trust and posited three types of trust based on the answers provided by the children. In our study, we explore this issue further by focusing not only on trust but also on expectations and their interplay. Children’s answers are analyzed with the help of an open coding approach that allows exploring categories beyond specific types of trust. Other research focused on manipulating robot errors as a fundamental component in creating a psychological construct of trust [42]. The two domains of social and competency trust, and how they relate to social robotics, help to understand the development of social relationships between humans and robots. Importantly, a recent study [43] raised doubts on whether social behaviors in robots actually encourage building trust towards them.
Studies on gratification measures underline the meaning of expectations, understood here as the children’s prior experiences with robots and the impact of these experiences on how children envision the robot’s capabilities and the needs it may satisfy [44]. Interesting recommendations from these studies include a larger sample size with repeated studies, more distinctly measured items and finding ways of imparting knowledge about social robots to the children before the interaction. For conducting such studies, researchers could, for example, hold an orientation session before the CRI session to set user expectations and prevent an expectations gap by shaping user beliefs about the capabilities of the robot [22,45].

1.4. The Methodological Gap

A recent meta-study [21] found that most research studies conducted in CRI are quantitative. The authors found 9 qualitative, 57 quantitative and 20 mixed-methods studies in their analysis, and about half (48%) of the studies took place in a school environment. They underscored the need for a more holistic approach to studying CRI. This argument was reiterated by Jung and Hinds [46], who noted that contextual CRI studies are too few and, as a result, cannot predict with confidence the effects a robot will have across a variety of complex social situations. Recently, researchers have also made strong arguments to take an ethnographic approach for evaluating Human–Robot Interactions (HRI) [47]. It is emphasized that, in studying HRI, it is advantageous to apply methodologies, the efficacy of which has already been demonstrated in sociology and anthropology. The authors discussed a few such studies conducted over the past 15 years and made a persuasive case for more HRI studies using ethnographic techniques.
Other researchers [46] have underlined the need for building a generalized theory of HRI and pointed to growing evidence for the need to conduct in-situ interaction studies with robots. This is also known as the in-the-wild methodology for conducting natural observations instead of controlled lab experiments [48]. A goal of applying this methodology to CRI is to provide a principled understanding of what to expect from different types of robots performing different types of tasks in different types of social situations and cultures [46]. The importance of the CRI setting cannot be overestimated, as research has shown that the environment in which children interact with the robot affects the outcome [21]. Experiments conducted in a controlled laboratory setting may not reveal all the social aspects relevant to fully understanding CRI. Other researchers have confirmed that the experimental setup can strongly influence the observed results [17]. This study showed that CRI research uses laboratory settings much more than naturalistic user environments: 80% of the CRI studies surveyed used dyadic settings (one robot–one child) with an anthropomorphic robot (74%) and relied upon the Wizard-of-Oz (WOZ) protocol (62%) [21].
In another interesting piece of research, user studies were conducted to investigate the role of expectations in CRI [15]. The authors proposed three strategies for managing expectations in CRI: (1) understand children’s expectations; (2) educate children about robot’s capabilities; and (3) acknowledge that robots are a third kind of living entity, other than humans and animals.
Researchers recommend that the design and evaluation of social robots should be based on their situational context of use [22]. The authors considered the robot’s role in its environment and adjusted its form and personality to meet human needs. Another meta-study of trust in HRI showed that the attributes of a robot have more influence on trust than environmental and human-related factors [49].

1.5. Our Approach

In our research, we apply an in-the-wild methodology to explore the role of trust and expectations in CRI and identify factors that facilitate or hinder these. Other researchers (e.g., [48]) have argued that experiments in a laboratory setting may not provide sufficient information about relevant social aspects to fully understand Human–Robot Interaction (HRI). In our view, an in-the-wild approach to scientific exploration of CRI has many unique advantages, some of which are listed below:
  • It allows interaction and collaboration among robots and children to unfold naturally in an environment that is familiar to the children, for example their classroom, as opposed to some HRI lab. This facilitates intuitive and natural expressions by children, which provides useful data to assess factors that promote or hinder trust and expectations in CRI.
  • It allows unstructured, playful interactions between the children and the robot. The activity of playing is crucial for children’s development [50]: they find it easier to express their needs and emotions during free play, rather than in situations where they have to do structured tasks by following some rules [51].
  • It allows us to study safety issues in robots when children interact with them in complex social contexts. This is crucial for designing robots that can be deployed in real-world settings.
  • It helps to study the ethical aspects of CRI by observing children interacting with the robot in semi-supervised or unsupervised situations, thereby reducing the observer-expectancy effect [52] by the moderator.
  • It allows conducting long-term, longitudinal studies in CRI.

2. Motivations

Our primary motivation is to identify and understand factors affecting trust and expectations in children towards robots and examine their implications for designing the next generation of robots for CRI. Towards this goal, we plan to follow a qualitative research methodology for studying CRI, whereas most previous research has followed a quantitative approach. Conducting more qualitative research can be useful for compiling a set of interaction scenarios, guidelines for organizing new scenarios and helpful tips for conducting CRI with these scenarios.
There have been many studies conducted on CRI with special needs children [53]. However, in this study, we are focusing on neurotypical children, regarding whom there is less research. A holistic approach to designing robots and CRI scenarios should adapt to different groups of children: neurotypical children as well as those with different kinds of special needs.
Due to an increasing number of robots and CRI scenarios, it is important to explore how to design social robots considering the collaboration of children and robots in complex real-life environments, such as a kindergarten or a day-care center. This can help to develop good design practices and expose existing limitations. In the current state of the art, some such studies, for example on the impact of increased awareness of the robot’s capabilities, exist for older children [54] but not for younger children.
Investigating how existing robots (such as Pepper from Softbank Robotics [55]) work in a real environment and how they are perceived by inexperienced and attention-demanding users can bring much valuable information. Besides, working on frameworks for building human–robot relationships helps to promote robots for the social good and can contribute to positive socio-political changes [56].
Last, we aim to strengthen knowledge about studies conducted outside the laboratory in natural real-world settings.

3. Methodology

The methodology used in our study is known under different terms: in-the-wild [57,58,59], free interaction or field studies [60]. We use the first term in this paper and present our study as an example to motivate and inspire further research using in-the-wild methodologies. Our observations were designed to engage children in natural interaction with the robot, taking place in a familiar environment (kindergarten and day-care center), surrounded by people known to them (teachers) and using familiar activities liked by them. Although all the activities were pre-planned and the corresponding robot behaviors were pre-programmed, breaks between the activities were included to allow children to relax, talk freely or ask questions about the robot. One activity that was not pre-programmed was the free question-answering session with the robot, for which we used the WOZ paradigm, where an adult (typically one of the researchers conducting the study) answered the children’s questions, with the answers voiced through the robot. There were some slight differences between the Polish and Japanese versions of the event, considering the cultural background of the participants, as described below.

3.1. Participants

Children as Participants—A Methodological Challenge

To study the attitude of preschool children towards robots, exceptional attention and a special methodological approach are needed. At this age, children are curious and constantly explore their environment. They show less social inhibition (such as masking their boredom by pretending to be interested). Their attention span is short and they get easily bored or distracted (for example, the attention span of a six-year-old is about half that of a twelve-year-old [61,62]).
It is worth mentioning that some researchers use free-play episodes to allow a loosely structured environment for children to express a range of complex, dynamic and natural social behaviors that are not tied to an overly constructed social situation. Nevertheless, they still claim that the interaction is structured and can be validated [63].
All these factors make it quite challenging to design a robot interaction event with young children and observe their behavior. Another issue of which we must be careful is that the researchers’ preconception of children may influence the design of the study [41].
Another confounding issue is that young children are more prone to unpredictable behaviors during the interaction activity and may be more difficult for the researcher to manage, especially when the children are in a group with their peers in a familiar environment; yet both of these conditions are necessary to capture the naturalness of their behavior.
In [64], a self-report questionnaire was created to measure children’s impressions after they interacted with the NAO robot. This approach was aimed at children aged 8–11 because, as the authors stated, younger children are not capable of participating in surveys. Our experiments focused on younger children, so we had to apply a different methodology. In our study, children make voluntary comments about the robot and pose questions to it during a natural interaction.
Our study was focused on observing a child’s first interaction with a robot. In Poland, six groups of children participated in our study. They were from the same kindergarten but children in each group were from a different section. For all of them, this was their first encounter with a robot. In Japan, we had four groups of children. However, in two of the groups, some children from the previous groups also participated, which we allowed in the spirit of the in-the-wild methodology. The observations from these two groups are not included in this study. The composition of the group of participants in Poland and Japan is shown in Table 1.

3.2. Procedure

The children were brought into the room by their teacher, where a humanoid robot Pepper was waiting for them. The robot introduced itself briefly and encouraged children to play. In the first activity, the robot read a short story/poem to the children: in the Polish version, the poem How do you talk to a dog by Jan Brzechwa; and, in the Japanese version, Harry the Dirty Dog by Gene Zion for the first group and The Very Hungry Caterpillar by Eric Carle for the second. The second activity was dancing together with the children to a song with which they were familiar: for the Polish children, the English version of Head, shoulders, knees and toes; and, for the Japanese children, the Paprika song. In the third activity, the children were asked by the robot to draw something that was in the room. Although this activity was not time-limited, after about 15 min, the children either finished drawing or got bored and lost interest. At this point, the robot asked the children to show it their drawings. One by one, each child approached the robot and showed her or his drawing to it. The fourth and the last activity was a free question-answering session: children could ask the robot questions on any topic. Answers were given by the robot in the WOZ paradigm, in which a human behind the scene interacts with the participant via the robot, but the participant is led to believe that the robot is acting autonomously. In our study, one of the student programmers, out of sight of the children, answered their questions, which were voiced by the robot. This interaction paradigm has been used successfully in various CRI interactions [65,66,67]. After this activity, the children said goodbye to the robot. Figure 1 provides an explanation of the experimental setup described above.
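The WOZ control flow used in the question-answering activity can be pictured as a simple relay loop: a hidden operator supplies the answer, and the robot voices it while the session keeps a transcript. The sketch below is purely illustrative; the `Robot` class and all names are hypothetical stand-ins, not the study's actual software (which ran on Pepper's own speech interface).

```python
# Minimal Wizard-of-Oz relay sketch: a hidden operator supplies answers
# that the robot voices. `Robot` is a hypothetical stand-in for the real
# text-to-speech interface on the robot.

class Robot:
    """Stand-in for the robot's text-to-speech interface."""
    def __init__(self):
        self.spoken = []  # log of utterances, useful for later transcription

    def say(self, text):
        # A real robot would synthesize speech here; we only record it.
        self.spoken.append(text)


def woz_session(robot, get_operator_answer, questions):
    """Relay each child's question to the hidden operator and have the
    robot speak the operator's reply, keeping a transcript of Q/A pairs."""
    transcript = []
    for question in questions:
        answer = get_operator_answer(question)  # typed by the hidden "wizard"
        robot.say(answer)
        transcript.append((question, answer))
    return transcript


# Example: a scripted "operator" answering two (hypothetical) questions.
canned = {
    "Do you like dancing?": "Yes, I love to dance!",
    "Where are you from?": "I came from a robot workshop.",
}
robot = Robot()
log = woz_session(
    robot,
    lambda q: canned.get(q, "That is a good question!"),
    list(canned),
)
```

In the actual study, the operator's answers were typed live rather than scripted; the key property of the paradigm is only that the children attribute the answers to the robot.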

3.3. Data Collection and Analysis

We collected three types of data: video recordings filming the children’s actions from behind (to not disturb the interaction), sound recordings (later transcribed) and children’s drawings. The data from the children’s drawings were analyzed separately and are not included in the analysis presented here. Videos collected in our study were coded with MaxQDA software and transcribed. A part of the coding tree created for this analysis is presented in Figure 2.

3.4. Distinguishing Trust and Expectation towards the Robot

In this paper, the analysis of children’s expectations towards the robot is focused on discerning cultural differences between Polish and Japanese children. These expectations can be evidenced through the verbal behavior of the children before, during or after the interaction (e.g., a comment about an unmet need of a child to perform some activity with the robot); or they can be explicitly queried by questionnaires (e.g., by the Negative Attitudes towards Robots Scale (NARS)) [30,68] or interviews. Most of the existing studies focus on adults and their perceptions (e.g., [30,69]), investigating factors affecting the development of trust with robots [70]. For instance, there is some evidence [15] showing that people of age ten or above prefer to interact with smaller robots and communicate with them through speech. However, it is not clear if this also applies to younger children (under 10 years old). Some findings indicate that people, including children [11], tend to generalize a robot’s social capabilities in social situations, but these presumptions can be overridden by appropriate behavior designs. This approach can be applied, for instance, to build trust through social support and closeness in forming a child–robot relationship [71]. This study [71], however, focused on the development of measures for children aged 7–11, whereas our study focuses on children aged 4–7.
The data obtained from voluntary and direct verbal expressions during CRI provide insight into the children’s perception of the robot and might lead to novel CRI interfaces. Because our data are voluntary, certain assumptions implicit in the children’s expressions towards the robot are formulated later in this paper based on direct observations. The analysis of existing studies led us to conclude that research on trust and expectations conducted with children under the age of ten should engage smaller robots. We expect that the children’s behavior during this interaction would reveal their expectations and how these are modulated by the interaction. Finally, the data collection should be based on the children’s verbal and nonverbal voluntary expressions.
Some recent studies [72] show that the interaction experience can exceed the initial expectations towards the robot, but also that the robot might not meet some of the participant’s initial assumptions. In this study, the robot Robi achieved higher scores after the initial interaction with the participants concerning anthropomorphism, animacy, likeability and intelligence, whereas the robot MyKeepon scored significantly lower in likeability. This underlines the need to see how the perception of robots changes over time, especially after the first encounter. Interestingly, some expectations might have a negative influence on the interaction (an effect of the expectations gap), and the circumstances of the interaction (the setting) might also impact the attitude of the human participant [11], as people tend to generalize social capabilities more for anthropomorphic robots in more social settings. However, the expectations gap can be mitigated by appropriate changes in the robot’s behavior.

3.5. Environment

In studying social robotics and CRI, where children (often for the first time) come into contact with a robot, a creature that may seem both fascinating and disturbing to them, it is worth bearing in mind the guidelines laid out by previous researchers on how to engage children in research [73] to improve the quality of interaction. They proposed six steps:
  • Consideration of capacity and capability
  • Developing ethical protocols and processes
  • Developing trust and relationships
  • Selecting appropriate methods
  • Identifying appropriate forms of communication
  • Consideration of context
Of particular relevance to us are the last three steps, which highlight the importance of some actions that improve the quality of the interactions and help children accommodate them. In this regard, it is necessary to follow an approach that allows observations in an interesting and safe environment for the child, so that the child can interact with the robot naturally without limiting her or his desire to explore. Previous research, especially on ethnographic studies [57,74,75], suggests that an in-the-wild methodology is appropriate in all these respects for studying CRI, and we used it effectively in our past research as well [58].

3.6. Setting

Another novelty of our approach is to have children interact with the robot in a group setting. Previous studies on CRI have largely focused on the dynamics of one-on-one (dyadic) interaction between a robot and a child [76]. A few studies have focused on CRI in small groups (three children, for example) [77]. We believe children behave differently when they are interacting with a robot in a large group, where members know each other, compared to one-on-one interactions. As one potential use of social robots that is being explored these days is for education [78], it is important to study CRI when children are in a peer group using an in-the-wild methodology.
The pictures shown below capture some scenes of interactions with Pepper. The children were close to the robot but at a safe distance (in case of the robot’s hand movements), so that the robot was equally visible to all the participants. This was important during the drawing activity (Figure 3), during moments of independent exploration (Figure 4) and during the subsequent vocal interaction. Participants often made gestures familiar from their preschool activities, e.g., Polish children raising their hand when they wanted to speak (Figure 5 and Figure 6).

4. Results: Two Significant Factors in CRI In-the-Wild

In our study, we analyzed over 500 observations based on our recordings. We present 48 of these observations below, which were selected to showcase possible signs of trust and expectations. The observations in the ‘trust’ section are grouped into seven categories such as challenging robot behavior, curiosity and playing; and those in the ‘expectations’ section are grouped into eight categories such as asking about origins, gestures and human/non-human abilities.

4.1. Trust

Trust is a basic foundation of human–human relationships [79]: it facilitates seamless and natural interactions. It is also important in human–robot interaction and in CRI, especially as social robots are becoming more commonplace and engage with people in a variety of roles: companion, caregiver, guide, assistant, etc. [80]. Thus, there is an inevitable need to incorporate trust into HRI and CRI.
To elaborate further, we chose to focus on the role of trust in CRI for the following reasons.
  • The increasing number of social robots interacting with children (e.g., [76,81]) demands an analysis of trust building in CRI. Some researchers already work on factors affecting trust in adults [30]. For example, the qualitative study by Bethel et al. [82] indicates that preschool children not only applied the same social rules to adults and to a humanoid robot but also showed the same propensity to share a secret they were supposed to keep to themselves with a humanoid robot as with adults.
  • More research needs to be done on how different factors affect trust. Some studies on secret-sharing behavior [82] have shown that children are inclined to trust robots as willingly as they trust adults. Other complementary studies showcase the importance of cognitive development and children’s personal attachment history for building trust [13]. However, Di Dio et al. [13] compared trust towards a NAO robot with trust towards a human during play by testing children individually.
  • The psychological and physical safety of encounters between humans and robots is crucial [83,84], especially for younger participants, whose actions might be more unpredictable than those of adults.
  • Researchers need to know how to obtain trust in CRI and how to maintain the participants’ perception of being in a safe and engaging environment throughout the whole interaction, without them feeling overwhelmed by or anxious about unfamiliar conditions such as interacting with robots. Preliminary studies (e.g., [13]) found that, for preschool and elementary school children, a robot can be perceived as a solid, trusted partner in the interaction, which can also contribute to building better human–human relationships.
We claim that an open approach is necessary for pioneering research, especially on trust. By this, we mean an approach that does not make strong assumptions and in which, despite having specific research hypotheses and a scenario that mitigates obstacles that may arise, researchers remain open to novel conclusions emerging from the data. This sensitivity is particularly important during analysis, when, in addition to exploring the previous assumptions, researchers try to intuitively capture further possible clues hidden in the data [85]. This is akin to exploratory studies, which are conducted first, when researchers check which elements are particularly important, which attract the most attention from the participants, and which questions must be answered for effective interaction. Armed with this knowledge, with a greater awareness of the needs and possible problems, they then design interactions to measure the behavior of the participants.
To conclude, it is not possible to gather relevant real-world experimental results without properly preparing children’s trusting attitude towards robots. In our studies, we carefully strove to capture possible signs of trust, i.e., moments when children shared their belief that the robot is a trustworthy agent. These are described in the next subsection.

4.1.1. Possible Signs of Trust towards Robots

We analyzed and coded video transcripts to organize our observations on trust in child–robot interaction. Throughout the study, children’s level of trust was supported or hindered by the specific interaction types described in this section.
(a)
Challenging behavior towards the robot to determine its capabilities: This refers to when the children challenge the robot to do some specific activity.
Observation 1.“So you can’t walk?!”
Observation 2.“But it can’t even say ‘Good Morning!’ ”
Our observations led us to the assumption that one of the most interesting ways children show trust in a robot is to provoke it into specific actions. By challenging behavior, we mean the child demanding an answer or an action from the robot. These contentious exchanges give us insight into the child’s image of the robot’s capabilities. Challenging behaviors are also directly and voluntarily expressed perceptions and assumptions about the robot, based on the child’s mental model.
Observation 3.“For sure you cannot do it!”
Challenging behaviors also signal the level of trust in the child–robot relationship. For example, in our studies, children challenged Pepper about its physical abilities (e.g., asking it to perform a very specific movement or to breakdance, and teasing it).
(b)
Challenging behavior as an indicator of the level of trust: This refers to when the children challenge the robot for some specific information or experience.
Observation 4.Child 1: “Have you been to the cinema?” [children repeat this question a couple of times]
Pepper: “No, I haven’t been to the cinema.”
Child 2: “Do you know what cinema is at all?”
Pepper: “I know.”
Child 2: “So, tell us, what is that?”
Child 3: “And do you know what McDonald’s is?”
Child 4: “Do you know what McDonald’s is?”
[Pepper replies after a delay]
Pepper: “You can watch movies there.”
[children laugh]
Child 5: [amused] “At McDonald’s!”
[Children laugh. Due to the delayed answer, they interpret Pepper’s last answer as a response to the questions about McDonald’s]
Observation 5.“Do you know what France is?”
Observation 6.“Can you even do anything?”
Moreover, they asked some general knowledge questions, such as whether it knows what a cinema is. If it said yes, they dug deeper by asking it to describe one. The level of trust can be affected by the relevance of the robot’s answers (irrelevant answers led to laughter), the pace of answering questions, and the number of questions answered (some were skipped).
Observation 7.Child1: “How did you like Paris? And do you like to play?”
Teacher: “Wait, only one question at a time.”
Child2: “Do you like to play?”
Pepper: “I really enjoyed Paris!”
Child2: [confused] “Paris...?”
Observation 8.Child1: “Do you have parents?”
[The robot doesn’t answer immediately]
Child1 [urging]: “You have parents, well, Pepper?”
Child2: “...it has to think about it.”
As we followed the WOZ paradigm, the robot’s answers to the children’s questions were typed in real time by a researcher in the room. Due to the dynamics of the meeting (a large number of children and many questions asked simultaneously), the robot sometimes answered a question that was no longer relevant or interesting from the children’s perspective, or gave an answer that was incomprehensible, which influenced the perceived relevance of the robot’s answers. Sometimes, when there was a delay in the robot’s response (because the researcher was deciding what to say), the children may have felt the need to rush the robot.
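The mismatch caused by the operator’s typing delay can be illustrated with a minimal sketch of a WOZ answer queue. This is a hypothetical model for exposition only; none of the class or method names correspond to the control software actually used in the study.

```python
import collections
import dataclasses

@dataclasses.dataclass
class Utterance:
    t: float      # seconds since the session started
    speaker: str
    text: str

class WozSession:
    """Toy Wizard-of-Oz answer queue.

    The operator hears children's questions and types replies; each reply
    is spoken by the robot only after a typing delay.  If new questions
    arrive during that delay, the children tend to attribute the reply to
    the most recent question rather than the one it actually answers.
    """

    def __init__(self):
        self.pending = collections.deque()  # questions awaiting a reply

    def child_asks(self, t, text):
        self.pending.append(Utterance(t, "child", text))

    def robot_replies(self, t, text):
        """Operator's reply answers the OLDEST pending question."""
        answered = self.pending.popleft()
        # Children hear the reply right after the NEWEST question, so a
        # mismatch occurs whenever other questions queued up meanwhile.
        perceived = answered if not self.pending else self.pending[-1]
        return answered, perceived

# Replaying the cinema/McDonald's exchange (Observation 4):
s = WozSession()
s.child_asks(0.0, "So, tell us, what is a cinema?")
s.child_asks(2.5, "Do you know what McDonald's is?")
answered, perceived = s.robot_replies(6.0, "You can watch movies there.")
print(answered.text)   # the question the operator meant to answer
print(perceived.text)  # the question the children attribute the reply to
```

Under this model, shortening the typing delay or having the robot repeat the question it is answering would reduce such misattributions, which matches the laughter observed when Pepper’s delayed answer landed on the McDonald’s question.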
(c)
Curiosity and the demonstration of desired actions as a sign of trust: This focuses on the children’s questions about the robot’s capabilities and competences.
Observation 9.“Can you do that?”
Observation 10.Children are sitting in front of the robot on the carpet
Pepper: “Hello!” [waves]
[Children are amused by this action and most of them wave to the robot]
In our studies, we observed that children tried to challenge the robot by asking about its abilities while making a specific gesture towards it (e.g., waving or moving a fist). Some of them seemed curious about the robot’s abilities, whereas others took a nonchalant approach and showed off their own abilities.
Observation 11.Child1: “You have a tablet, can you display a movie?”
Pepper: “But now I don’t have anything [any movie] close at hand.”
Child1: “You always have armpits at hand.”
Child2: “Simple!” [unintelligible]
[A few children start laughing]
A demonstration of desired actions by the robot is also an interesting issue. For example, we observed that when the robot revealed a lack of competence, children questioned it or made fun of it. Potentially, a violation of social norms by a robot, e.g., knowing too much about a child, evokes negative affect (and thus decreases trust) in children [86], whereas failure to complete a task decreases competency trust [14].
(d)
Physical interaction—catching attention in brave ways and approaching the robot: This refers to physical interactions and proactive behaviors to gain the attention of the robot.
Observation 12.The start of the malfunction activity. The children are screeching, shouting: ‘What’s this?!’
Child1: “Anything else?”
[The children come closer to the robot. After a while, the teacher pulls the children back, while they keep asking questions “Do you know anything else?”]
Children were also prone to catching the robot’s attention in bold ways, like waving in front of the robot’s eyes, repeating or reformulating questions, or inviting the robot to play with them (demanding action from Pepper). This indicates that the children felt comfortable enough to decrease their physical distance from the robot and were willing to interact with it. Many children approached the robot right after entering the room where the interaction took place, eager to explore and touch it, and had to be restrained by the teachers. During the encounter, pre-schoolers gradually came closer to the robot, not caring about the limit set by the teachers and the researchers.
Observation 13.Child: “Can we pet him?”
Researcher: “I don’t know. Probably not. But guess what, guess what, we don’t touch Pepper because he has very delicate skin. Okay?”
Child1: “It’s not skin.”
Child2: “This is metal!”
Child3: “Metal!”
Many children were intent on a physical exploration of the robot: petting it, grabbing its hands, looking into its eyes or touching its tablet. Thus, another sign of trust is based on physical interaction. Our observations revealed that children came closer to the robot when they did not get a response immediately, especially when they were in the first row; children in the back stood up to see the robot and leaned towards it.
(e)
Social rules awareness and tool-like perspective: These observations refer to treating the robot as an object owned by someone and applying corresponding social rules.
Observation 14.Child: “Who’s steering you?”
Pepper: “[...] and IT specialists programmed me.”
Child: “Do you have batteries in you?”
An intriguing observation was that the children asked the teachers (or the researchers) for permission before touching the robot. This suggests two possible interpretations. One is that children are aware of social rules (in this case, “do not touch an object without asking its owner” or “always ask the teacher if you want to do something unusual”) and apply them to the interaction with the robot. The other is that children perceive the robot as a tool or an object without autonomy or preferences of its own, which is not helpful for researchers wanting to create a companion-like interaction with the robot. They are building trust towards a dependent agent, which is not the same as believing in the robot’s autonomous agency [87]. There is a need for further discussion of the role of the teacher and of how children would perceive the robot’s agency without her in the room.
(f)
Playing as a sign of trust: This refers to the verbal expressions of the children to encourage the robot to engage with them in fun activities.
Observation 15.“Pepper, let’s play together!”
Observation 16.“Will you play with us?”
Observation 17.“Will you show us a story on the tablet?”
Observation 18.“Please do some magic trick!”
Some research [88] suggests that children have the most fun when playing with a friend, compared to playing with a robot, with playing alone rated as the least fun option. Play provides children with an opportunity to practice and learn basic social skills, including conflict management. It also requires a certain amount of trust in the relationship: children who dislike each other rarely start to play together [51]. In our observations, similar patterns of wanting to play with the robot could be noticed, as expressed by children’s questions. These questions to Pepper suggest trust in the robot’s skills and abilities, and the children’s willingness to engage in joint play. As previous studies [89] point out, higher transparency about the robot’s lack of psychological capabilities (e.g., a lack of preferences or knowledge of social rules) negatively affects the formation of the child–robot relationship in terms of trust, whereas the children’s feelings of closeness remain unaffected. For this reason, an analysis of closeness-related behaviors and actions of children during CRI that indicate trust towards the robot is a key factor in interaction design.
(g)
Anthropomorphization: This focuses on children’s questions that reveal their beliefs that the robot is like a human.
Observation 19.“Do you like me?”
Observation 20.“Who gave birth to you?”
Children tend to anthropomorphize social robots [90]. As a consequence, they expect a more unconstrained, substantive and useful interaction with the robot than is possible with the current state of the art; their expectations towards robots are addressed more directly in the next sections of this paper. The children’s tendency to anthropomorphize the robot nevertheless has a significant impact on the trust that the user will place in it: if the child’s expectations are not met (the expectations gap), the child’s confidence in the robot may decrease.

4.1.2. General Remarks on Trust

Some of the questions the children asked Pepper (for example, Observations 3, 6 and 20) might be considered provocative, which, in older participants, might be a sign of trust towards the robot. However, it is also possible that some questions are not an indicator of pre-schoolers trusting the robot but of their tendency to explore it [91]. It should be noted, however, that an increased level of trust may be detectable in some children because of the group setting, which encourages more proactive and outgoing behavior.
The children’s enthusiastic approach towards the robot from the very beginning of the interaction is worth pointing out. It can be a sign of trust: children were not afraid of approaching the robot and were willing to explore it freely. It might also be an indicator of trust supported by the robot’s appearance.
It is important to mention that the teacher, who was present during the interaction but was asked to intervene as little as possible, might be a significant factor. First, children may consider the teacher a safety factor, looking for comfort and support when needed or when feeling overwhelmed by the unfamiliar situation. Some studies (e.g., [92]) indicate that close attachment-relevant relationships with teachers in early childhood are linked to the pattern of attachment to the parents and to the verbal development of the child. Secondly, the teacher can act as an authority when children become overstimulated by the robot’s presence or when a situation develops in which the children might physically harm the robot or themselves. Finally, the dynamics of the child–teacher relationship might indicate the level of trust in the robot: if a child decides to approach the teacher (e.g., to show a drawing) instead of the robot, it indicates less trust towards the robot.
Another remarkable factor is the presentation of the robot during the interaction. Is the robot already at the place of interaction when the participants arrive, or do the participants wait for it? We decided that Pepper would “anticipate” the children, so they could see it when entering the observation setting; we felt this might help children acclimate to the robot and give them some time to adjust. Some studies [14] present a scenario where the robot is introduced by the researcher, but we followed a script where the robot introduces itself. We feel that this increases the participants’ willingness to interact with the robot and their perception of the robot’s autonomy, resulting in higher trust.

4.2. Expectations

Forming expectations towards (humanoid) social robots is a complex process [15]. A child’s initial attitude towards the robot, even before the first real encounter with it, influences the subsequent interactions, especially when a misalignment is perceived between expectations and the robot’s abilities. This is the expectations gap mentioned earlier in this paper.
There are two main reasons for studying the role of expectations in CRI:
  • Research on expectations assumes that children have a basic concept of the robot, and various studies aim to elicit features of this concept through observations and experiments. An in-the-wild methodology is ideal in this regard as it facilitates natural, spontaneous and voluntary interactions with the robot. This elicits behavior patterns that cannot be observed through laboratory experiments, surveys or individual interviews. By analyzing recordings and transcripts of such free CRIs, we can understand children’s mental models of a robot and their expectations towards it. Below in this section, we present some such observations.
  • Another important reason for analyzing expectations is the low number of studies on this topic that focus on younger children. There are not as many measures (questionnaires and interviews) suitable for such young participants as can be offered to older subjects.
Lastly, observing and grasping the moments when children explore the robot and express expectations is a task which, in our analysis, followed an open coding of the transcripts and videos; it emerged from the qualitative data as a general conclusion from examining children’s behavior. During an interaction with a robot, a group forms the goal of better understanding the situation and uses different methods to reach that goal. Our working hypothesis, for the sake of this analysis, was that children make assumptions and try to test their hypotheses by asking questions, commenting, testing the robot’s reactions and so on. We focused on observing the moments when children saw Pepper for the first time, when Pepper did something new and/or unexpected, and when children were encouraged to ask Pepper questions.

4.2.1. Possible Signs of Expectations towards the Robot

Transcripts of our observations revealed an interesting set of codes supporting the expectations analysis. Each of these reveals an assumption or a hypothesis about the robot. For children, this drives the process of causal learning for explaining inconsistencies; causal explanations and exploratory behaviors operate in tandem as mechanisms for generating and testing hypotheses [93].
(a)
Verifying expectations as a causal process and agency: This refers to the indicators of the children validating their initial beliefs about the robot.
Observation 21.A boy is selected to ask a question
Child 1: “Pepper-kun… Pepper-kun. Can you run fast?”
Pepper: “I am not good at running.”
[All the children are surprised]
Observation 22.“It’s kind of nervous.”
Observation 23.“It’s a robot.”
Observation 24.“It can’t even say ‘Good morning’!”.
During the interaction, children commented on Pepper, performed different actions towards it and asked it different questions. The comments included voluntary remarks on the robot’s state of mind (Observation 22) and direct assumptions regarding Pepper’s identity (Observation 23) and “body” (“it’s not skin” from Observation 13). They also commented on its ability to communicate through speech (Observation 24). These comments show children’s assumptions that Pepper must belong to some kind, being a non-human robot with limited communication abilities but with some cognitive abilities. Children provoked the robot by claiming different things about it and checking what its reaction would be. This in itself shows that they assumed the robot can react to things happening in its environment and can understand their words. This challenging behavior, following the simple conclusion that Pepper can speak the children’s language, may be an example of causal learning. Moreover, the distress connected with not knowing what to expect from the robot was compounded by the error behavior the robot was designed to perform. The temporary lack of movement was interpreted by the children in two ways: as a situation that happened to Pepper or as its decision to “freeze”. This again shows different assumptions about Pepper as a being.
(b)
Asking about origins: This focuses on the children’s questions about how the robot was created.
Observation 25.“Where were you created?”
Observation 26.“How were you made?”
Observation 27.“Do you have parents, Pepper?”
Observation 28.“Do you have mum and dad?”
Children also made assumptions and tested them by asking about Pepper’s origins (created vs. born). This information seemed important to them, as the question appeared many times in different groups; interestingly, children asked about Pepper’s parents more often than about its creators.
(c)
Children’s questions as a source of insights: This focuses on the children’s hypotheses regarding their ideas and knowledge of the robot.
Observation 29.Pepper encourages children to ask it questions. After a few general questions about its appearance.
Child 1: “Why do you have this tablet attached?”
Children: “Right, why?”
Child 2: “So you don’t get sick!”
Child 3: “Why don’t you have hair?”
A few other children also repeat the last question, demanding the answer.
Observation 30.Child: “Pepper, can you play with a ball?”
Observation 31.A Q&A session is in progress. Children speak through themselves, often without giving the robot time to answer
Child 1: “Do you write something down?”
Child 2: “Why do you have such a small mouth?”
Child 3: “Where are your cameras?”
Child 4: “How do you look inside?”
Child 5: “In the forehead?” [laughter]
Child 6: “Do you have a camera in your mouth too?”
Child 7: “Can you fly?”
Children asking questions is the most natural activity to observe when learning about their hypotheses. The ability to ask questions is a powerful tool that allows children to gather the information they need to learn about the world and solve problems in it [91]. In their questions, children focused on a set of human-like features which helped them anthropomorphize the robot. Past research has shown that humans subconsciously anthropomorphize (ascribe human attributes to) and zoomorphize (ascribe animal attributes to) robots [94]. Some children, leaning towards transparency by asking de-anthropomorphizing questions, tried to situate the robot at the lower end of the human-likeness scale. Previous research has shown that transparency decreased children’s humanlike perception of the robot in terms of animacy, anthropomorphism, social presence and perceived similarity [89]. In our research, children made assumptions regarding the robot’s movement abilities, mechanical equipment, gender, experience, knowledge, etc.
(d)
Robots can have preferences and relationships, and children hypothesize about the robot’s emotional life: This focuses on the children’s expectations towards the robot’s preferences and possible relationships with others.
Observation 32.After a series of questions about the robot’s mobility—in particular, whether it will ride around the room—a girl asks a question
Child: “Do you like me?”
Observation 33.A group of girls are standing together. One of them addresses a question to Pepper
Girl: “Which of us do you like the most?”
Observation 34.“Do you like to play?”
Testing the robot’s ability to have preferences included asking Pepper about a set of abstract and non-abstract concepts such as travelling, food or music. Children assume the robot can have preferences and ask it to share them; we suspect they want to know its preferences in order to build a better relationship and seek common experiences. They are curious whether a robot can engage in social relationships. To this end, they ask if the robot likes playing, music, reading, travelling or walking around, and about its favorite food. They also ask if the robot likes the child who is asking and which child it likes the most.
Observation 35.Child 1: “Why don’t you have a wife?”
Child 2: “Or at least a friend!”
Children expected Pepper to have some social relationships. They asked about its parents to understand its origins, and about its friends to understand its willingness to build new relationships. They even asked about its wife, which shows they assumed Pepper can build more sophisticated relationships. Research shows that social robots perceived by children as social agents worth playing with [95] are more likely to be labeled a ‘friend’ than a ‘brother or sister’ or a ‘classmate’. In our research, the children may have assumed themselves to be in some kind of relationship with Pepper. This is especially interesting as younger children are still learning to regulate their emotions for building relationships [96]. As emotions provide the basic scaffolding for building human relationships, this behavior of children towards the robot shows that they presume Pepper can control and express emotions.
(e)
Human and non-human abilities: This focuses on the children’s comments related to the robot’s human and non-human capabilities.
Observation 36.Child: [observing Pepper dancing] “This robot totally has an AI”
Observation 37.Boy: “There is an AI in this robot probably.”
Observation 38.Child: “It knows everything! More than human!”
Observation 39.Child: “Can you help?”
Observation 40.Child: “Who are you working with?”
Observation 41.Child: “Are you a boy or a girl?”
Human abilities refer to learning, travelling, and doing things that are close to us, such as playing instruments, reading books, hiking, singing and playing. Children assume that the robot engages in similar activities, such as eating sweets, attending kindergarten/school, peeing, cleaning, sleeping, performing daily activities, brushing teeth and having vacations. Importantly, they assume that the robot can help them (Observation 39); this quality of giving or being ready to give help is considered a human asset. They also make assumptions about the robot’s age: they think Pepper should not be able to speak if it is two years old, and they even ask it to say tongue twisters, which is one way children challenge one another. This shows that the children try to find the limit up to which the robot can be treated as a human regarding its abilities. Children assume Pepper should be able to button its clothes and are surprised when it turns out that it cannot. Children also explore the robot’s relationships, such as Pepper’s colleagues (Observation 40), which shows that they assume the robot to be the age of a working adult. In addition, simple questions about having hair, sweating and its birthday, and asking it not to get ill, show expectations towards Pepper as a human being. They were, however, not sure what to expect (Observation 42).
Non-human abilities attributed to the robot by the children include being able to change shape and to fly. Children assume that a robot can move but are not sure what to expect in terms of its range of moving abilities compared to humans, for example its ability to dance. It is worth mentioning that a question about the robot’s capability is followed by a question about how this ability might be achieved/executed.
Questions about the mechanical aspects of the robot reveal expectations towards it as a non-human being. Children assume that the robot has an engine or batteries, i.e., an external source of power. They ask how Pepper looks inside, so they think of it as a toy. They are in general interested in what Pepper is built from. When it comes to appearance, expectations towards the robot include being made of metal, having a nose (or a bigger one), being a color other than white, and having wheels or two legs. They are also surprised by Pepper’s tablet and are interested in its function. Autonomy is another aspect of the robot’s characteristics that interested the children. They asked if someone was controlling or steering it, which clearly shows that they suspected Pepper to be a device or a toy rather than a human. This was further confirmed by the children asking the robot if it had any games on it (e.g., as on a mobile phone) and petting it.
(f)
Shifting approaches to the robots’ abilities: This focuses on how, in certain circumstances, children changed their perception of robot or its behavior.
Observation 42.Girl: “What kind of battery do you have?”
[Children keep asking question after the robot says ‘bye’ as if they didn’t notice the end of the interaction. Pepper is silent for a while]
Boy1: “It didn’t relax today.”
Boy2: “No, it went to bed!”
Children’s approach towards Pepper as a human or a non-human was constantly changing as they physically explored it. For them, the robot might be perceived as a toy, a device, a machine, or a human within the same interaction. They asked in general about its skills, regardless of whether these were human characteristics or not.
(g)
Surprise: This focuses on the observations where children were astonished at the robot’s behavior when it was different from their original assumptions.
Observation 43.Throughout the interaction, children express an interest in the robot’s mobility. Finally, one of the surprised children addresses the issue directly to Pepper
Child: “You can’t walk?!”
Surprise as an emotional reaction to a situation may be considered an indicator of an unmet expectation. Children express it freely through a loud tone of voice and facial expressions. A verification of their assumptions comes with an eye-opening experience of redefining their hypotheses and renewed attempts at exploration. For example, during our observation, children strongly and repeatedly expressed their surprise at Pepper’s inability to walk (Observation 43). For researchers, surprise may serve as a signal followed by either disappointment or relief; this should be observed and used in future studies and robot design.
(h)
Challenging the robot: This focuses on those situations where the children asked for specific information or experience, trying to confront their expectations with reality.
Observation 44.One boy is showing the flexibility of his wrist. Another one is watching him, repeating the movement and shouting to Pepper.
Boy1: “Can you do it?”
Pepper: “I think I cannot do it”
[After the robot’s statement, children keep demanding for the action]
Pepper: “I don’t have a ready program for that.”
Boy2: “Damn, what does it say?”
Observation 45.Boy: [impatiently] “Can you even fight?”
[Some children immediately decline vehemently]
Challenging behaviors can be interpreted as a negative reaction to the robot’s lack of ability to do something. In our observations, children provoked the robot by laughing at it, calling it by its name, or asking it to perform a specific movement. We consider such provocative behaviors to be expressions of expectations.

4.2.2. General Remarks on Expectations towards Robots

To conclude, a few things are worth mentioning when analyzing children’s expectations towards robots. One is that we need to take an open-minded approach to designing the robot’s appearance, behavior and personality. As researchers, we can gain valuable insights for designing future robots from the analysis of children’s expectations: it reveals children’s assumptions and mental models about robots, which are dynamic, changing during the interaction as children become more acquainted with the robot. Children ask about things that are familiar to them so that they can relate to the robot.
At the same time, we have to acknowledge that children see robots as a new kind of “living” entity, alongside humans and animals; designers will therefore have to take responsibility for the expectations their robots arouse [15].

4.3. Interplay of Expectations and Trust

Trust and expectations towards robots are not independent factors; rather, they are interdependent, and together they have a significant impact on the effectiveness of the child–robot encounter. The following observations provide evidence for this interaction.
We observed that a growing awareness of what to expect from the robot, as well as an improved image of Pepper as a companion, did not harm the children’s trust in the machine. At the end of the interaction, each group was willing to freely explore the robot, expressed by their eagerness to touch it, talk to it and look into its eyes. We take children expressing their expectations in front of the robot as a sign of trust: they not only share their internal mental states, but also expose themselves to group criticism.
Nevertheless, according to other researchers, social behaviors, such as the robot’s perceived agency, can influence trust in a machine’s competency and sociability both positively and negatively [43]. In our study, children did not lose trust in the robot overall, but they gradually discovered that it could not perform certain social behaviors, so their trust in its competency may have changed over the course of the observation.
Another important aspect is the role of trust in CRI observations aimed at understanding children’s expectations and their image of the robot. Observations in an in-the-wild setting aim to bring more insight into real-life, contextual interactions. Thanks to the positive attitude of children towards robots such as Pepper or Nao, we can gather more data about the nuances of such relationships. Children’s beliefs about Pepper’s state of mind and its physical and cognitive abilities can be gleaned from their attitude towards the robot and the questions they pose to it.
Trust can also be a way to validate expectations of a robot. By confronting the mental image of the robot created before the interaction, the child can modulate their expectations in real time and adapt them to the dynamics of the situation. One such path is the questions children ask the robot (see [91] for the role of questions in shaping a child’s knowledge). However, achieving the goal of exploration and cognition often requires some degree of trust: to approach the robot closely, to examine assumptions about its capabilities, to touch and explore its parts, or to leave one’s comfort zone to engage in the activity proposed by the robot and enjoy it. Further research in this domain may show which robot behaviors are undesirable in the interaction and which encourage children to seek answers to their assumptions about the robot.

5. Conclusions

We aimed to highlight the role of trust and expectations in CRI. We identified several behaviors in CRI that indicate trust (or lack thereof) of the children towards the robot. For example, children challenge the robot, observe its performance and approach it; they are eager to touch it; they willingly engage with the robot and play together; and they ask it direct questions (e.g., Observation 32). We observed that children sometimes apply the rules of human–human social interaction in CRI, but sometimes adopt a tool-like perspective.
Our study also yielded some conclusions regarding children’s expectations towards the robot during CRI. For instance, children are prone to perceive robots as beings with agency and origins, having both human and non-human attributes, able to engage in human-like relationships and to behave as part of a group. Pre-schoolers easily switch perspective from treating the robot as a human-like companion to treating it as a machine. At the same time, some children are reluctant to ascribe typical human attributes to the robot, contrary to the expectations of other group members.
We also gathered insights into children’s attitudes towards the robot’s physical appearance, specifically that of Pepper, which was the robot used in all the studies described here. Children expressed surprise that the robot’s face was not bigger, that it had no legs and that its nose was missing or very small. They also noticed that the robot’s movements are quite limited: for example, in the Polish dance activity, the robot does not exactly follow the choreography familiar to the children, as it does not touch its toes. Regardless, the children continue to make the movements appropriate to the dance. However, they do not point to their “eyes, ears, mouth, nose” as they follow Pepper’s behavior. During play activities, when it is natural for the children to move around, the robot’s limited ability to perform complex movements hinders a positive reception. This suggests that children expect the robot to have sophisticated movement abilities, e.g., walking.
Another feature worth implementing is giving the robot a personality in terms of likes and dislikes. Children clearly expressed an interest in the robot’s favorite things, places and people. Being able to answer these questions in a meaningful way would help build a trust-based relationship, as children ask such questions of each other as well [97,98], and the robot would be perceived as more of a social being. Moreover, it would be helpful for a social robot to address even some unrealistic expectations, such as being able to fly or having a family, as children are curious about them. For this, we need to provide the robot with a script that lets it answer such questions reasonably and consistently, so that the children can relate to it.
We should also emphasize that in analyzing children’s expectations towards the robot, we need to consider their unexpected and provocative behaviors towards it.
Another goal of our research was to demonstrate the effectiveness of in-the-wild methodology for planning and executing CRI qualitative studies. In this regard, we can draw the following conclusions:
  • In our study, children were able to express their expectations and beliefs freely. They asked questions and shared their insights voluntarily, directly and frequently, which resulted in a better understanding of the dynamics of interactions with the robot and provided a natural environment for establishing social relationships between the robot and the children.
  • We gathered a variety of qualitative data on children’s interaction with the robot, thanks to the children being able to naturally express their needs and emotions during free play episodes.
  • In contrast to dyadic settings (one child with one robot), our group setting allowed us to observe real-life CRI at multiple levels; for example, the role of the facilitating teacher and the impact of group dynamics on CRI.
  • By conducting observations in the natural settings of pre-schoolers, we can learn about potential undesirable behaviors from the children (e.g., a lack of interest or engagement) and about potential safety hazards (e.g., a child being accidentally hurt by approaching the robot too closely). This allows us to design safer and more congenial CRI scenarios.
Finally, our study provides some guidelines for designing CRI scenarios with pre-schoolers, which are summarized below:
  • We need to focus on children’s behaviors, including comments, challenging behaviors, performed actions towards the robot and the wide range of questions they ask to the robot. These provide direct expressions of children’s trust and expectations towards the robot.
  • We need to prepare appropriate scripts for Q&A sessions with the robot. Our study shows that children are interested in the robot’s likes and dislikes, bodily functions and physical abilities, so appropriate answers need to be provided in the conversation scripts given to the robot. Simply giving the robot preferences about abstract concepts (e.g., children’s ideas, cultural topics, mathematics and art) and concrete ones (e.g., favorite movies, countries, the interaction environment and the robot’s body parts) would help it connect with the children.
  • It is helpful to choose a robot that can move around and naturally interact with children using physical gestures. We should help children develop expectations towards the robot that let them understand its functioning, become familiar with its capabilities and aware of its limitations even before the interaction.
  • During CRI, it is helpful to use only those activities with which the child is familiar, or that can be easily learned within the group. We should avoid overwhelming children with the situation, and it is crucial to have a trusted adult (parent or a teacher) available to facilitate the interaction and comfort the child if needed.
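The conversation-script guideline above can be sketched as a simple keyword-matched lookup table for a Wizard-of-Oz operator. This is only an illustrative sketch: all question patterns and answers below are hypothetical examples, not transcripts or scripts from our study.

```python
import re

# Illustrative WOZ Q&A script: consistent, child-relatable answers covering
# the robot's preferences, body and abilities. All patterns and answers
# here are invented for illustration.
QA_SCRIPT = {
    ("favorite", "color"): "I like violet, because my tablet glows in that color.",
    ("favorite", "food"): "I do not eat, but I do enjoy a full battery!",
    ("can", "walk"): "I cannot walk, but I can roll around on my wheels.",
    ("can", "fly"): "I cannot fly, but I sometimes dream about it.",
    ("family",): "The other Pepper robots are like my brothers and sisters.",
}

FALLBACK = "That is a great question! I will have to think about it."

def answer(question: str) -> str:
    """Return the first scripted answer whose keywords all occur in the question."""
    words = set(re.findall(r"[a-z']+", question.lower()))
    for keywords, reply in QA_SCRIPT.items():
        if all(k in words for k in keywords):
            return reply
    return FALLBACK
```

A fallback answer keeps the robot’s responses consistent even for questions outside the script, which matters because unscripted questions are frequent in in-the-wild sessions.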

6. Limitations

Even though our study identified several factors that affect trust and expectations in CRI, it has some limitations. For example, the group setting used in our CRI activities makes it difficult to incorporate user-derived adaptation factors in the interaction [95]. Adaptation to user characteristics is an effective aid to engagement: the more personalized the experience of playing with a robot, the more engaging the interaction. For in-the-wild approaches in a group setting, such adaptation is more difficult and needs to be studied further.
Despite many advantages of the in-the-wild methodology, there are also some risks arising from the unpredictable course of each interaction. Although it is the investigator’s responsibility to carefully plan the interaction and ensure that the study is as reliable and accurate as possible, there may be differences that prevent or significantly hinder a comparative analysis between groups. For instance, in our study, we designed observations with two groups of children, using the same interaction scenario for each. Nonetheless, the exact course of interaction was not fully predictable and manageable due to the spontaneous and unpredictable responses of the participants. For example, we incorporated a robot malfunction behavior in our study, and one group of children started to mimic the sound of an ambulance during this episode, which was an unexpected behavior.
Another limitation is the potential variability among researchers in interpreting children’s behaviors during CRI. To address this, we need to develop standardized coding schemes and coding intervals [99] that do not show wide inter-subjective variance.
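Inter-observer variance of this kind can be quantified by comparing two researchers’ codings of the same intervals with a chance-corrected agreement statistic such as Cohen’s kappa. The sketch below uses invented behavior labels and data purely for illustration.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement (Cohen's kappa) between two coders."""
    n = len(coder_a)
    assert n == len(coder_b) and n > 0
    # Proportion of intervals where both coders assigned the same label.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance from each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[label] * freq_b[label]
                   for label in freq_a.keys() | freq_b.keys()) / n ** 2
    return (observed - expected) / (1 - expected)

# Two hypothetical researchers code the same ten 30-second intervals with
# behavior categories (labels and data invented for this example).
coder_a = ["trust", "challenge", "trust", "play", "trust",
           "challenge", "play", "trust", "challenge", "trust"]
coder_b = ["trust", "challenge", "play", "play", "trust",
           "challenge", "play", "trust", "trust", "trust"]
kappa = cohens_kappa(coder_a, coder_b)
```

A kappa near 1 indicates that the coding scheme yields consistent interpretations across researchers; low values signal the wide inter-subjective variance the scheme is meant to eliminate.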

7. Future Research

While we explored a few factors affecting trust and expectations in CRI using an in-the-wild methodology, several related research issues need to be further studied. For example, we need to apply theories of group dynamics and interpersonal communication to understand how children in a group behave towards the robot. In one study conducted in Turkey [100], we noticed that one boy had a very dominant personality, so other children followed his lead while interacting with the robot. An interesting question would be to compare observations of group dynamics in children with and without the robot, focusing on how children take on the roles of a leader, a follower or an observer. This requires conducting a comparative study with a control group where a human, unfamiliar to the children, performs the same role as the robot in the experimental group.
Another issue that can be studied using in-the-wild methodologies is the expectations gap [22], which appears when participants have excessive or misleading expectations of a robot due to a mistaken image of it. We observed this in our study as well: Pepper was not able to meet all of the children’s expectations regarding its physical abilities (flying and walking), which might have restrained them from interacting freely with the robot. The robot’s lack of human-like mobility seemed to be one of the most dissatisfying matters: “Does Pepper move?” was one of the most frequently asked questions; children’s comments also included “You look like you can’t move” and “Can you dance the waltz? I bet you cannot”. One way to avoid the expectations gap is to condition children’s expectations before they interact with the robot through presentations, videos and discussion. The expectations gap can also be studied by comparing the CRI patterns of two groups of children: one that received a prior briefing about the robot and another that received a similar briefing unrelated to the robot.
Another interesting path of research on CRI we would like to pursue in the future is to perform longitudinal studies using an in-the-wild paradigm. A meta-analysis dealing with the methodology of research in the field of social robotics indicates that longitudinal research is underrepresented [17]. Long-term studies, where CRI is repeated with the same participants in the same and in different situations over some time, can help us learn about how children’s mental models of robots evolve, as they become habituated to the robot. We are planning to conduct such studies by having robot-led interactive sessions with the same group of children regularly.
Another issue for future research is to study children’s challenging behaviors towards robots. In the past, some researchers have focused on children’s aggressive behaviors towards a robot [101], for example where children invade the robot’s space or engage in a harmful pattern of behavior. These are considered as negative aspects of CRI, which should be discouraged in the interaction. However, in our study, challenging behaviors (e.g., drawing the robot’s attention to itself by shouting its name or demanding an answer to a question) were perceived as positive aspects of the interaction, indicating a sense of comfort for the participants. Eventually, both these positive and negative aspects of children’s behaviors towards the robot need to be integrated in a larger framework. Only then will we be able to design a platform for safe, reliable and fun interaction between preschool children and humanoid robots.
As social robots proliferate in our society in ever larger numbers, there is a pressing need to further explore real-life contexts and interaction frameworks for them. Some studies are already being conducted: for instance, children and adults interacting with a robot in a shopping mall [102] or children dancing with a robot in the classroom [103]. Some researchers have focused on HCI and HRI applicability in real-life situations in general [104], while others have provided guidelines on using robots in children’s natural environment, such as in a kindergarten [105]. However, we need to conduct many more in-the-wild studies on CRI to be able to design and develop social robots that can be deployed effectively in the real world.

Author Contributions

P.Z. and A.K. collaborated on the design of observation, conducting the event itself, data analysis and writing the article. B.I. was essential in editing the article and overseeing the research work throughout the process. G.V. and B.S. provided the necessary tools, software and resources and helped in organizing the CRI events in Japan and Poland, respectively. All authors have read and agreed to the published version of the manuscript.

Funding

The research presented in this paper was supported in part by the National Center for Research and Development (NCBR) under Grant No. POLTUR2/5/2018 and The Scientific and Technological Research Council of Turkey (TUBITAK) under Grant No. 117E021.

Acknowledgments

We would like to thank Mateusz Jarosz, Filip Sondej, Takamune Izui, Maria Dziok, Anna Belowska, Wojciech Jędras and Luis-Enrique Coronado for programming the robot and Muge Igarashi for help in organizing the CRI events in Japan.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Su, H.; Qi, W.; Hu, Y.; Karimi, H.R.; Ferrigno, G.; De Momi, E. An Incremental Learning Framework for Human-like Redundancy Optimization of Anthropomorphic Manipulators. IEEE Trans. Ind. Inform. 2020. [Google Scholar] [CrossRef]
  2. Su, H.; Hu, Y.; Karimi, H.R.; Knoll, A.; Ferrigno, G.; De Momi, E. Improved recurrent neural network-based manipulator control with remote center of motion constraints: Experimental results. Neural Netw. 2020, 131, 291–299. [Google Scholar] [CrossRef] [PubMed]
  3. Loghmani, M.R.; Rovetta, S.; Venture, G. Emotional intelligence in robots: Recognizing human emotions from daily-life gestures. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 1677–1684. [Google Scholar] [CrossRef]
  4. Fiorini, L.; Mancioppi, G.; Semeraro, F.; Fujita, H.; Cavallo, F. Unsupervised emotional state classification through physiological parameters for social robotics applications. Knowl. Based Syst. 2020, 190, 105217. [Google Scholar] [CrossRef]
  5. Pattar, S.P.; Coronado, E.; Ardila, L.R.; Venture, G. Intention and Engagement Recognition for Personalized Human-Robot Interaction, an integrated and Deep Learning approach. In Proceedings of the 2019 IEEE 4th International Conference on Advanced Robotics and Mechatronics (ICARM), Osaka, Japan, 3–8 July 2019; pp. 93–98. [Google Scholar] [CrossRef]
  6. Sünderhauf, N.; Brock, O.; Scheirer, W.; Hadsell, R.; Fox, D.; Leitner, J.; Upcroft, B.; Abbeel, P.; Burgard, W.; Milford, M.; et al. The limits and potentials of deep learning for robotics. Int. J. Robot. Res. 2018, 37, 405–420. [Google Scholar] [CrossRef][Green Version]
  7. Károly, A.I.; Galambos, P.; Kuti, J.; Rudas, I.J. Deep Learning in Robotics: Survey on Model Structures and Training Strategies. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 266–279. [Google Scholar] [CrossRef]
  8. Wagner, A.R. The Role of Trust and Relationships in Human-Robot Social Interaction. Ph.D. Thesis, Georgia Institute of Technology, Atlanta, GA, USA, 2009. [Google Scholar]
  9. Schaefer, K. The Perception and Measurement of Human-Robot Trust; University of Central Florida: Orlando, FL, USA, 2013. [Google Scholar]
  10. Spence, P.R.; Westerman, D.; Edwards, C.; Edwards, A. Welcoming our robot overlords: Initial expectations about interaction with a robot. Commun. Res. Rep. 2014, 31, 272–280. [Google Scholar] [CrossRef]
  11. Kwon, M.; Jung, M.F.; Knepper, R.A. Human expectations of social robots. In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; pp. 463–464. [Google Scholar]
  12. Jokinen, K.; Wilcock, G. Expectations and first experience with a social robot. In Proceedings of the 5th International Conference on Human Agent Interaction, Bielefeld, Germany, 17–20 October 2017; pp. 511–515. [Google Scholar]
  13. Di Dio, C.; Manzi, F.; Peretti, G.; Cangelosi, A.; Harris, P.L.; Massaro, D.; Marchetti, A. Shall I Trust You? From Child–Robot Interaction to Trusting Relationships. Front. Psychol. 2020, 11, 469. [Google Scholar] [CrossRef][Green Version]
  14. van Straten, C.L.; Peter, J.; Kühne, R.; de Jong, C.; Barco, A. Technological and interpersonal trust in child-robot interaction: An exploratory study. In Proceedings of the 6th International Conference on Human-Agent Interaction, Southampton, UK, 15–18 December 2018; pp. 253–259. [Google Scholar]
  15. Ligthart, M.; Henkemans, O.B.; Hindriks, K.; Neerincx, M.A. Expectation management in child-robot interaction. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28–31 August 2017; pp. 916–921. [Google Scholar]
  16. Yadollahi, E.; Johal, W.; Dias, J.; Dillenbourg, P.; Paiva, A. Studying the Effect of Robot Frustration on Children’s Change of Perspective. In Proceedings of the 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), Cambridge, UK, 3–6 September 2019; pp. 381–387. [Google Scholar]
  17. Charisi, V.; Davison, D.; Reidsma, D.; Evers, V. Evaluation methods for user-centered child-robot interaction. In Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 22–27 August 2016; pp. 545–550. [Google Scholar] [CrossRef]
  18. Fior, M.; Nugent, S.; Beran, T.N.; Ramirez-Serrano, A.; Kuzyk, R. Children’s Relationships with Robots: Robot Is Child’s New Friend. J. Phys. Agents 2010, 4, 9–17. [Google Scholar] [CrossRef][Green Version]
  19. Cameron, D.; Fernando, S.; Millings, A.; Moore, R.; Sharkey, A.; Prescott, T. Children’s age influences their perceptions of a humanoid robot as being like a person or machine. In Conference on Biomimetic and Biohybrid Systems; Springer: Barcelona, Spain, 2015; pp. 348–353. [Google Scholar]
  20. Sullivan, A.; Bers, M.U. Robotics in the early childhood classroom: Learning outcomes from an 8-week robotics curriculum in pre-kindergarten through second grade. Int. J. Technol. Des. Educ. 2016, 26, 3–20. [Google Scholar] [CrossRef]
  21. van Straten, C.L.; Peter, J.; Kühne, R. Child–robot relationship formation: A narrative review of empirical research. Int. J. Soc. Robot. 2020, 12, 325–344. [Google Scholar] [CrossRef][Green Version]
  22. Tonkin, M.; Vitale, J.; Herse, S.; Williams, M.A.; Judge, W.; Wang, X. Design Methodology for the UX of HRI: A Field Study of a Commercial Social Robot at an Airport. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI), Chicago, IL, USA, 5–8 March 2018; HRI ’18. Association for Computing Machinery: New York, NY, USA, 2018; pp. 407–415. [Google Scholar] [CrossRef]
  23. de Graaf, M.M.A.; Allouch, S.B. Expectation Setting and Personality Attribution in HRI. In Proceedings of the 2014 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Bielefeld, Germany, 3–6 March 2014; pp. 144–145. [Google Scholar]
  24. Walters, M.L.; Syrdal, D.S.; Dautenhahn, K.; Te Boekhorst, R.; Koay, K.L. Avoiding the uncanny valley: Robot appearance, personality and consistency of behavior in an attention-seeking home scenario for a robot companion. Auton. Robot. 2008, 24, 159–178. [Google Scholar] [CrossRef][Green Version]
  25. Hancock, P.A.; Billings, D.R.; Schaefer, K.E. Can you trust your robot? Ergon. Des. 2011, 19, 24–29. [Google Scholar] [CrossRef]
  26. Szcześniak, M.; Colaço, M.; Rondón, G. Development of interpersonal trust among children and adolescents. Pol. Psychol. Bull. 2012, 43, 50–58. [Google Scholar] [CrossRef]
  27. Ullman, D.; Malle, B.F. What does it mean to trust a robot? Steps toward a multidimensional measure of trust. In Proceedings of the Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI), Chicago, IL, USA, 5–8 March 2018; pp. 263–264. [Google Scholar]
  28. Kok, B.C.; Soh, H. Trust in robots: Challenges and opportunities. Curr. Robot. Rep. 2020, 1, 297–309. [Google Scholar] [CrossRef]
  29. Baker, A.L.; Phillips, E.K.; Ullman, D.; Keebler, J.R. Toward an understanding of trust repair in human-robot interaction: Current research and future directions. ACM Trans. Interact. Intell. Syst. 2018, 8, 1–30. [Google Scholar] [CrossRef]
  30. Lewis, M.; Sycara, K.; Walker, P. The role of trust in human-robot interaction. In Foundations of Trusted Autonomy; Springer: Cham, Switzerland, 2018; pp. 135–159. [Google Scholar]
  31. Vinanzi, S.; Patacchiola, M.; Chella, A.; Cangelosi, A. Would a robot trust you? Developmental robotics model of trust and theory of mind. Philos. Trans. R. Soc. B 2019, 374, 20180032. [Google Scholar] [CrossRef]
  32. Azevedo, C.R.; Raizer, K.; Souza, R. A vision for human-machine mutual understanding, trust establishment, and collaboration. In Proceedings of the 2017 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA), Savannah, GA, USA, 27–31 March 2017; pp. 1–3. [Google Scholar]
  33. Jian, J.Y.; Bisantz, A.M.; Drury, C.G. Foundations for an empirically determined scale of trust in automated systems. Int. J. Cogn. Ergon. 2000, 4, 53–71. [Google Scholar] [CrossRef]
  34. Möllering, G. The nature of trust: From Georg Simmel to a theory of expectation, interpretation and suspension. Sociology 2001, 35, 403–420. [Google Scholar]
  35. Bakała, E.; Visca, J.; Tejera, G.; Seré, A.; Amorin, G.; Gómez-Sena, L. Designing child-robot interaction with Robotito. In Proceedings of the 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), New Delhi, India, 14–18 October 2019; pp. 1–6. [Google Scholar] [CrossRef]
  36. Zguda, P.; Kołota, A.; Jarosz, M.; Sondej, F.; Izui, T.; Dziok, M.; Belowska, A.; Jędras, W.; Venture, G.; Śnieżynski, B.; et al. On the Role of Trust in Child-Robot Interaction. In Proceedings of the 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), New Delhi, India, 14–18 October 2019; pp. 1–6. [Google Scholar] [CrossRef]
  37. De Graaf, M.M.; Allouch, S.B. Exploring influencing variables for the acceptance of social robots. Robot. Auton. Syst. 2013, 61, 1476–1486. [Google Scholar] [CrossRef]
  38. Sanders, E.B.N.; Stappers, P.J. Co-creation and the new landscapes of design. Co-Design 2008, 4, 5–18. [Google Scholar] [CrossRef][Green Version]
  39. Woods, S. Exploring the design space of robots: Children’s perspectives. Interact. Comput. 2006, 18, 1390–1418. [Google Scholar]
  40. Geiskkovitch, D.Y.; Thiessen, R.; Young, J.E.; Glenwright, M.R. What? That’s Not a Chair!: How Robot Informational Errors Affect Children’s Trust Towards Robots. In Proceedings of the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Korea, 11–14 March 2019; pp. 48–56. [Google Scholar] [CrossRef]
  41. Punch, S. Research with children: The same or different from research with adults? Childhood 2002, 9, 321–341. [Google Scholar] [CrossRef]
  42. Salem, M.; Lakatos, G.; Amirabdollahian, F.; Dautenhahn, K. Would You Trust a (Faulty) Robot? Effects of Error, Task Type and Personality on Human-Robot Cooperation and Trust. In Proceedings of the 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Portland, OR, USA, 2–5 March 2015; pp. 1–8. [Google Scholar]
  43. Stower, R. The Role of Trust and Social Behaviours in Children’s Learning from Social Robots. In Proceedings of the 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), Cambridge, UK, 3–6 September 2019; pp. 1–5. [Google Scholar] [CrossRef]
  44. de Jong, C.; Kühne, R.; Peter, J.; Straten, C.L.V.; Barco, A. What Do Children Want from a Social Robot? Toward Gratifications Measures for Child-Robot Interaction. In Proceedings of the 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), New Delhi, India, 14–18 October 2019; pp. 1–8. [Google Scholar] [CrossRef]
  45. Paepcke, S.; Takayama, L. Judging a bot by its cover: An experiment on expectation setting for personal robots. In Proceedings of the 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Osaka, Japan, 2–5 March 2010; pp. 45–52. [Google Scholar] [CrossRef][Green Version]
  46. Jung, M.; Hinds, P. Robots in the Wild: A Time for More Robust Theories of Human-Robot Interaction. ACM Trans. Hum. Robot. Interact. (THRI) 2018, 7, 2–5. [Google Scholar] [CrossRef][Green Version]
  47. Jacobs, A.; Elprama, S.A.; Jewell, C.I. Evaluating Human-Robot Interaction with Ethnography. In Human-Robot Interaction; Springer: Berlin/Heidelberg, Germany, 2020; pp. 269–286. [Google Scholar]
  48. Hansen, A.K.; Nilsson, J.; Jochum, E.A.; Herath, D. On the Importance of Posture and the Interaction Environment: Exploring Agency, Animacy and Presence in the Lab vs. Wild using Mixed-Methods. In Proceedings of the Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 24–26 March 2020; pp. 227–229. [Google Scholar]
  49. Savery, R.; Rose, R.; Weinberg, G. Establishing Human-Robot Trust through Music-Driven Robotic Emotion Prosody and Gesture. arXiv 2020, arXiv:cs.HC/2001.05863. [Google Scholar]
  50. Yogman, M.; Garner, A.; Hutchinson, J.; Hirsh-Pasek, K.; Golinkoff, R.M. Committee on Psychosocial Aspects of Child; Family Health, AAP Council on Communications And Media; The power of play: A pediatric role in enhancing development in young children. Pediatrics 2018, 142, e20182058. [Google Scholar] [CrossRef][Green Version]
  51. Cordoni, G.; Demuru, E.; Ceccarelli, E.; Palagi, E. Play, aggressive conflict and reconciliation in pre-school children: What matters? Behaviour 2016, 153, 1075–1102. [Google Scholar] [CrossRef]
  52. Balph, D.F.; Balph, M.H. On the psychology of watching birds: The problem of observer-expectancy bias. Auk 1983, 100, 755–757. [Google Scholar] [CrossRef]
  53. Coeckelbergh, M.; Pop, C.; Simut, R.; Peca, A.; Pintea, S.; David, D.; Vanderborght, B. A survey of expectations about the role of robots in robot-assisted therapy for children with ASD: Ethical acceptability, trust, sociability, appearance, and attachment. Sci. Eng. Ethics 2016, 22, 47–65. [Google Scholar] [CrossRef]
  54. Rossi, A.; Holthaus, P.; Dautenhahn, K.; Koay, K.L.; Walters, M.L. Getting to Know Pepper: Effects of People’s Awareness of a Robot’s Capabilities on Their Trust in the Robot. In Proceedings of the 6th International Conference on Human-Agent Interaction Southampton, Southampton, UK, 15–18 December 2018; HAI ’18. Association for Computing Machinery: New York, NY, USA, 2018; pp. 246–252. [Google Scholar] [CrossRef][Green Version]
  55. Pandey, A.K.; Gelin, R. A mass-produced sociable humanoid robot: Pepper: The first machine of its kind. IEEE Robot. Autom. Mag. 2018, 25, 40–48. [Google Scholar] [CrossRef]
  56. Lee, H.R.; Cheon, E.; de Graaf, M.; Alves-Oliveira, P.; Zaga, C.; Young, J. Robots for Social Good: Exploring Critical Design for HRI. In Proceedings of the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Korea, 11–14 March 2019; pp. 681–682. [Google Scholar] [CrossRef]
  57. Sabanovic, S.; Michalowski, M.P.; Simmons, R. Robots in the wild: Observing human-robot social interaction outside the lab. In Proceedings of the 9th IEEE International Workshop on Advanced Motion Control, Istanbul, Turkey, 27–29 March 2006; pp. 596–601. [Google Scholar]
  58. Venture, G.; Indurkhya, B.; Izui, T. Dance with me! Child-robot interaction in the wild. In International Conference on Social Robotics; Springer: Berlin/Heidelberg, Germany, 2017; pp. 375–382. [Google Scholar]
  59. Salter, T.; Michaud, F.; Larouche, H. How wild is wild? A taxonomy to characterize the ‘wildness’ of child-robot interaction. Int. J. Soc. Robot. 2010, 2, 405–415. [Google Scholar]
  60. Shiomi, M.; Kanda, T.; Howley, I.; Hayashi, K.; Hagita, N. Can a social robot stimulate science curiosity in classrooms? Int. J. Soc. Robot. 2015, 7, 641–652. [Google Scholar] [CrossRef]
  61. Brain Balance Normal Attention Span Expectations by Age. Available online: https://blog.brainbalancecenters.com/normal-attention-span-expectations-by-age (accessed on 31 January 2021).
  62. Ruff, H.A.; Lawson, K.R. Development of sustained, focused attention in young children during free play. Dev. Psychol. 1990, 26, 85. [Google Scholar] [CrossRef]
  63. Lemaignan, S.; Edmunds, C.E.; Senft, E.; Belpaeme, T. The PInSoRo dataset: Supporting the data-driven study of child-child and child-robot social dynamics. PLoS ONE 2018, 13, e0205999. [Google Scholar] [CrossRef]
  64. de Jong, C.; van Straten, C.; Peter, J.; Kuhne, R.; Barco, A. Children and social robots: Inventory of measures for CRI research. In Proceedings of the 2018 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), Genova, Italy, 27–28 September 2018; pp. 44–45. [Google Scholar]
  65. Tanevska, A.; Ackovska, N. Advantages of using the Wizard-of-Oz approach in assistive robotics for autistic children. In Proceedings of the 13th International Conference for Informatics and Information Technology (CiiT), Bitola, Macedonia, 22–24 April 2016. [Google Scholar]
  66. Charisi, V.; Gomez, E.; Mier, G.; Merino, L.; Gomez, R. Child-Robot Collaborative Problem-Solving and the Importance of Child’s Voluntary Interaction: A Developmental Perspective. Front. Robot. AI 2020, 7, 15. [Google Scholar] [CrossRef][Green Version]
  67. Ahmad, M.; Mubin, O.; Orlando, J. A systematic review of adaptivity in human-robot interaction. Multimodal Technol. Interact. 2017, 1, 14. [Google Scholar] [CrossRef][Green Version]
  68. Bartneck, C.; Nomura, T.; Kanda, T.; Suzuki, T.; Kato, K. Cultural Differences in Attitudes towards Robots; AISB: Budapest, Hungary, 2005. [Google Scholar]
  69. Wang, N.; Pynadath, D.V.; Rovira, E.; Barnes, M.J.; Hill, S.G. Is it my looks? or something i said? the impact of explanations, embodiment, and expectations on trust and performance in human-robot teams. In International Conference on Persuasive Technology; Springer: Berlin/Heidelberg, Germany, 2018; pp. 56–69. [Google Scholar]
  70. Gompei, T.; Umemuro, H. Factors and development of cognitive and affective trust on social robots. In International Conference on Social Robotics; Springer: Berlin/Heidelberg, Germany, 2018; pp. 45–54. [Google Scholar]
  71. Van Straten, C.L.; Kühne, R.; Peter, J.; de Jong, C.; Barco, A. Closeness, trust, and perceived social support in child-robot relationship formation: Development and validation of three self-report scales. Interact. Stud. 2020, 21, 57–84. [Google Scholar] [CrossRef]
  72. Haring, K.S.; Watanabe, K.; Silvera-Tawil, D.; Velonaki, M. Expectations towards two robots with different interactive abilities. In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; pp. 433–434. [Google Scholar]
  73. Johnson, V.; Hart, R.; Colwell, J. Steps for Engaging Young Children in Research: The Toolkit; Education Research Centre, University of Brighton: Brighton, UK, 2014. [Google Scholar]
  74. Andriella, A.; Torras, C.; Alenya, G. Short-term human-robot interaction adaptability in real-world environments. Int. J. Soc. Robot. 2019, 12, 639–657. [Google Scholar] [CrossRef]
  75. Björling, E.A.; Rose, E.; Ren, R. Teen-robot interaction: A pilot study of engagement with a low-fidelity prototype. In Proceedings of the Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 69–70. [Google Scholar]
  76. Coninx, A.; Baxter, P.; Oleari, E.; Bellini, S.; Bierman, B.; Henkemans, O.; Cañamero, L.; Cosi, P.; Enescu, V.; Espinoza, R.; et al. Towards Long-Term Social Child-Robot Interaction: Using Multi-Activity Switching to Engage Young Users. J. Hum. Robot. Interact. 2016, 5, 32–67. [Google Scholar] [CrossRef][Green Version]
  77. Leite, I.; McCoy, M.; Lohani, M.; Ullman, D.; Salomons, N.; Stokes, C.; Rivers, S.; Scassellati, B. Emotional storytelling in the classroom: Individual versus group interaction between children and robots. In Proceedings of the 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Portland, OR, USA, 2–5 March 2015; pp. 75–82. [Google Scholar]
  78. Green, A. Education robots offer leg-up to disadvantaged students. Financial Times 2020. Available online: https://www.ft.com/content/d8b3e518-3e0a-11ea-b84f-a62c46f39bc2 (accessed on 31 January 2021).
  79. Simpson, J.A. Psychological foundations of trust. Curr. Dir. Psychol. Sci. 2007, 16, 264–268. [Google Scholar] [CrossRef]
  80. Meghdari, A.; Alemi, M. Recent Advances in Social & Cognitive Robotics and Imminent Ethical Challenges. In Proceedings of the 10th International RAIS Conference on Social Sciences and Humanities, Princeton, NJ, USA, 22–23 August 2018. [Google Scholar]
  81. Kirstein, F.; Risager, R.V. Social robots in educational institutions they came to stay: Introducing, evaluating, and securing social robots in daily education. In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; pp. 453–454. [Google Scholar]
  82. Bethel, C.L.; Stevenson, M.R.; Scassellati, B. Secret-sharing: Interactions between a child, robot, and adult. In Proceedings of the 2011 IEEE International Conference on Systems, Man, and Cybernetics, Anchorage, AK, USA, 9–12 October 2011; pp. 2489–2494. [Google Scholar]
  83. Zacharaki, A.; Kostavelis, I.; Gasteratos, A.; Dokas, I. Safety bounds in human robot interaction: A survey. Saf. Sci. 2020, 127, 104667. [Google Scholar] [CrossRef]
  84. Lasota, P.A.; Fong, T.; Shah, J.A. A Survey of Methods for Safe Human-Robot Interaction; Now Publishers: Breda, The Netherlands, 2017. [Google Scholar]
  85. Backman, K.; Kyngäs, H.A. Challenges of the grounded theory approach to a novice researcher. Nurs. Health Sci. 1999, 1, 147–153. [Google Scholar] [CrossRef] [PubMed]
  86. Leite, I.; Lehman, J.F. The robot who knew too much: Toward understanding the privacy/personalization trade-off in child-robot conversation. In Proceedings of the 15th International Conference on Interaction Design and Children, Manchester, UK, 21–24 June 2016; pp. 379–387. [Google Scholar]
  87. Brink, K.A.; Wellman, H.M. Robot teachers for children? Young children trust robots depending on their perceived accuracy and agency. Dev. Psychol. 2020, 56, 1268. [Google Scholar] [CrossRef] [PubMed]
  88. Shahid, S.; Krahmer, E.; Swerts, M. Child-robot interaction: Playing alone or together? In CHI’11 Extended Abstracts on Human Factors in Computing Systems; Association for Computing Machinery: New York, NY, USA, 2011; pp. 1399–1404. [Google Scholar]
  89. Van Straten, C.L.; Peter, J.; Kühne, R.; Barco, A. Transparency about a robot’s lack of human psychological capacities: Effects on child-robot perception and relationship formation. ACM Trans. Hum. Robot Interact. 2020, 9, 1–22. [Google Scholar] [CrossRef][Green Version]
  90. Damiano, L.; Dumouchel, P. Anthropomorphism in human–robot co-evolution. Front. Psychol. 2018, 9, 468. [Google Scholar] [CrossRef]
  91. Chouinard, M.M.; Harris, P.L.; Maratsos, M.P. Children’s questions: A mechanism for cognitive development. Monogr. Soc. Res. Child Dev. 2007, 72, i-129. [Google Scholar]
  92. Veríssimo, M.; Torres, N.; Silva, F.; Fernandes, C.; Vaughn, B.E.; Santos, A.J. Children’s representations of attachment and positive teacher–child relationships. Front. Psychol. 2017, 8, 2270. [Google Scholar] [CrossRef][Green Version]
  93. Legare, C.H. Exploring explanation: Explaining inconsistent evidence informs exploratory, hypothesis-testing behavior in young children. Child Dev. 2012, 83, 173–185. [Google Scholar] [CrossRef]
  94. Fong, T.; Nourbakhsh, I.; Dautenhahn, K. A survey of socially interactive robots. Robot. Auton. Syst. 2003, 42, 143–166. [Google Scholar] [CrossRef][Green Version]
  95. Belpaeme, T.; Baxter, P.; Read, R.; Wood, R.; Cuayáhuitl, H.; Kiefer, B.; Racioppa, S.; Kruijff-Korbayová, I.; Athanasopoulos, G.; Enescu, V.; et al. Multimodal child-robot interaction: Building social bonds. J. Hum. Robot Interact. 2012, 1, 33–53. [Google Scholar] [CrossRef][Green Version]
  96. Denham, S.A. Dealing with feelings: How children negotiate the worlds of emotions and social relationships. Cogn. Brain Behav. 2007, 11, 1. [Google Scholar]
  97. Tatlow-Golden, M.; Guerin, S. ‘My favourite things to do’ and ‘my favourite people’: Exploring salient aspects of children’s self-concept. Childhood 2010, 17, 545–562. [Google Scholar] [CrossRef]
  98. Shutts, K.; Roben, C.K.P.; Spelke, E.S. Children’s use of social categories in thinking about people and social relationships. J. Cogn. Dev. 2013, 14, 35–62. [Google Scholar] [CrossRef] [PubMed][Green Version]
  99. Kushniruk, A.W.; Borycki, E.M. Development of a Video Coding Scheme for Analyzing the Usability and Usefulness of Health Information Systems; CSHI: Boston, MA, USA, 2015; pp. 68–73. [Google Scholar]
  100. Guneysu, A.; Karatas, I.; Asık, O.; Indurkhya, B. Attitudes of children towards dancing robot nao: A kindergarden observation. In Proceedings of the International Conference on Social Robotics (ICSR 2013): Workshop on Taking Care of Each Other: Synchronisation and Reciprocity for Social Companion Robots, Bristol, UK, 27–29 October 2013. [Google Scholar]
  101. Darling, K. Children Beating Up Robot Inspires New Escape Maneuver System. IEEE Spectrum, August 2015. Available online: https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/children-beating-up-robot (accessed on 31 January 2021).
  102. Aaltonen, I.; Arvola, A.; Heikkilä, P.; Lammi, H. Hello Pepper, may I tickle you? Children’s and adults’ responses to an entertainment robot at a shopping mall. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 6–9 March 2017; pp. 53–54. [Google Scholar]
  103. Tanaka, F.; Movellan, J.R.; Fortenberry, B.; Aisaka, K. Daily HRI evaluation at a classroom environment: Reports from dance interaction experiments. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction, Salt Lake City, UT, USA, 2–3 March 2006; pp. 3–9. [Google Scholar]
  104. Dondrup, C.; Baillie, L.; Broz, F.; Lohan, K. How can we transition from lab to the real world with our HCI and HRI setups? In Proceedings of the 4th Workshop on Public Space Human-Robot Interaction (PubRob), Barcelona, Spain, 3 September 2018. [Google Scholar]
  105. Tolksdorf, N.F.; Siebert, S.; Zorn, I.; Horwath, I.; Rohlfing, K.J. Ethical Considerations of Applying Robots in Kindergarten Settings: Towards an Approach from a Macroperspective. Int. J. Soc. Robot. 2020, 1–12. [Google Scholar] [CrossRef][Green Version]
Figure 1. The experimental setup.
Figure 2. A part of the coding tree.
Figure 3. Japanese participants.
Figure 4. Japanese boy staring at Pepper’s tablet.
Figure 5. Polish participants.
Figure 6. Polish children raising their hands to ask a question.
Table 1. The demographic data about the participants.
Polish Participants

Group | Size of the Group | Gender (F/M) | Age of Participants
1 | 20 | 8/12 | 4–6
2 | 20 | 10/10 | 5–6
3 | 22 | 11/11 | 5–6
4 | 20 | 8/12 | 5–7
5 | 23 | 11/12 | 5–7

Japanese Participants

Group | Size of the Group | Gender (F/M) | Age of Participants
1 | 20 | 7/13 | 4–5
2 | 19 | 5/14 | 5–6
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Zguda, P.; Kołota, A.; Venture, G.; Sniezynski, B.; Indurkhya, B. Exploring the Role of Trust and Expectations in CRI Using In-the-Wild Studies. Electronics 2021, 10, 347. https://doi.org/10.3390/electronics10030347