We Do Not Anthropomorphize a Robot Based Only on Its Cover: Context Matters Too!

Abstract: The increasing presence of robots in our society raises questions about how these objects are perceived by users. Individuals seem inclined to attribute human capabilities to robots, a phenomenon called anthropomorphism. Contrary to what intuition might suggest, these attributions vary according to different factors, not only robotic factors (related to the robot itself), but also situational factors (related to the interaction setting), and human factors (related to the user). The present review aims at synthesizing the results of the literature concerning the factors that influence anthropomorphism, in order to specify their impact on the perception of robots by individuals. A total of 134 experimental studies, published between 2002 and 2023, were included. The mere appearance hypothesis and the SEEK (sociality, effectance, and elicited agent knowledge) theory are two theories attempting to explain anthropomorphism. According to the present review, which highlights the crucial role of contextual factors, the SEEK theory better explains the observations on the subject compared to the mere appearance hypothesis, although it does not explicitly explain all the factors involved (e.g., the autonomy of the robot). Moreover, the large methodological variability in the study of anthropomorphism makes the generalization of results complex. Recommendations are proposed for future studies.


Introduction
A previous study ([1], see also [2,3] for a review) showed that some psychological processes, typically reported in the literature to be observed only from a specific age, could in fact be observed much earlier when a robot was used as an experimenter. This is especially true when the robot is introduced to the child as being ignorant and slow. This paradigm, the mentor-child paradigm, only works because children attribute intentions (trying to learn) and states of mind (having, or not having, a piece of information or a concept) to the robot. This is what we call anthropomorphism. The notion is not new and has been heavily discussed in the literature [4][5][6]. Much work has been conducted regarding robotic factors (the design of the robot itself). In this paper, we review this work and emphasize that other, contextual elements also contribute to anthropomorphism, and that these elements are not directly related to the robot itself. To show this, we will first discuss the definition of the concept of anthropomorphism and the psychological processes involved, before exploring the different factors that influence the emergence of this phenomenon. We will discuss three types of factors: robotic factors, of course, but also situational and human factors. Finally, we will present the boundaries of anthropomorphism, namely the uncanny valley, as well as measurement and methodological limitations.

Different Conceptions of What Anthropomorphism Is
Human beings show as much of a tendency to interact with artificial media as they do with other humans. This phenomenon, called anthropomorphism, can be observed in everyday life toward objects such as telephones, computers, or cars [7]. Although anthropomorphism is shared by all, there are large inter-individual variations [8]. The word "anthropomorphism" is derived from the Greek anthropos (meaning man) and morphe (meaning form). Anthropomorphism, thus, implies going beyond a simple description of actions (observable or imagined) to represent the mental or physical state of the agent in human terms: "The dog is affectionate" becomes "The dog loves me" [5]. The notion of anthropomorphism, thus, refers to the tendency to attribute human characteristics (such as motivations, intentions, or emotions) to the behavior of non-human agents or non-living objects [5]. Individuals can attribute a wide range of mental capacities when engaging in anthropomorphism, such as intentions or conscious experiences [5,9]. The presence of a robot seems to activate different types of socio-cognitive processes, stimulating both low-level processes, such as tracking the direction of the robot's gaze or representing the robot's movements as goal-directed actions [10,11], and high-level processes, such as the attribution of human mental states [5,12].
However, it is questionable whether individuals actually attribute mental states to non-human agents. Two forms of anthropomorphism can be distinguished [5]: on the one hand, the strong form, which refers to individuals who are sincerely convinced that the agent possesses the human characteristics attributed to it; on the other hand, the weak form, which refers to individuals who act as if the agent really had these characteristics, while knowing that it does not. An individual can attribute mental states to agents to explain their behavior yet deny the presence of a mind in them when explicitly asked [13,14]. Rather than a binary view opposing the weak and the strong forms, some authors consider anthropomorphism a matter of degree on a continuum [5]. The attribution of a mental state to an agent, therefore, does not necessarily imply belief in the reality of that mental state [15][16][17]. In this review, we will use the term anthropomorphism to refer to behaviors that imply taking the mental states of an agent into consideration, regardless of whether there is an explicit belief in the presence of these mental states.
How can we explain our tendency to attribute human characteristics to non-human agents, even when we know that they do not actually possess these attributes? Although several explanations have been proposed, this phenomenon is generally considered a mistake [18][19][20]. Anthropomorphism could be caused by the activation of a default schema that would also apply to non-social objects whose behavior cannot be explained otherwise, such as computers and robots [21]. Anthropomorphism would result from the human desire to project the most complex organization possible onto stimuli [18]. Since living beings endowed with intentions, namely humans, represent the "greatest complexity" of organization [22], individuals would seek to attribute human properties to any agent they encounter, insofar as its characteristics do not directly rule this out. Other papers [19,20] argue that anthropomorphism is the result of heuristics (as defined in [23]) that lead people to explain the behavior of non-human animals by analogy with our own human mind. This is in fact similar to the conception of anthropomorphism as an automatic and invariant psychological process [18,24]. However, the invariance of the process is debated: some non-human agents are more anthropomorphized than others, and some individuals show an increased tendency toward anthropomorphism [5]. Moreover, the same agent can be anthropomorphized or considered as an object depending on the situation: anthropomorphism is, therefore, situational [25]. We will detail this discussion in Section 3.2. As robots are non-human agents that are becoming more common, in this review, we will focus on models of anthropomorphism based on experimental studies with robots.
We will see that some of these models only take into account the appearance of the robot [26], while others also take into account the interaction situation the robot is involved in, either by exploring the psychological determinants of anthropomorphism [5] or by explaining anthropomorphism specifically toward robots. Thus, theoretical frameworks should also explain the process of anthropomorphism by taking into account factors related to the interaction situation.

Because an interaction context is always present, fully grasping the process of anthropomorphization requires understanding the psychological elements at work in such a situation [5]. This would also explain individual variability.

Why We Anthropomorphize: The Three Psychological Determinants of Anthropomorphism
According to the sociality, effectance, and elicited agent knowledge (SEEK) theory, the process of anthropomorphism, which essentially applies a default model of human interaction to artificial agents, is modulated by three psychological determinants: (1) the accessibility and applicability of anthropocentric knowledge; (2) the motivation to explain and understand the behavior of other agents; (3) the desire for social contact and affiliation [5]. On this account, it is the need to interact with and explain one's environment that prompts individuals to anthropomorphize an object.
The human observer uses their knowledge to explain the behavior of a robot: they automatically rely on the representation elaborated from their experiences with humans, as it is more accessible and more economical. It thus becomes the default model used when interacting with robots, in the absence of a more specific model targeted to them. This would result in the attribution of human characteristics to non-humans in order to "complete a partial representation" [38]. One study supports this aspect of the SEEK theory, as participants with more experience interacting with robots show a decreased propensity for anthropomorphism [39]. The more a person interacts with a robot, the more relevant a specific representation of an interaction with a robot becomes, and the less anthropomorphism is needed. Thus, children are more likely to anthropomorphize than adults [40].
The motivation to explain and understand artificial agents results from epistemophilic behavior, which reduces the uncertainty inherent in the interaction situation, all the more so when the situation is new [38,41]. Anthropomorphization thus answers individuals' need to explain the robot's behavior [42][43][44]. This phenomenon is all the more important when non-human entities are perceived as having intentions with unpredictable behavior (for instance, when the robot Asimo answers questions in a random fashion) [43]. The need of individuals to understand and predict their environment increases the tendency toward anthropomorphism, and in turn, anthropomorphism fills this need to explain the world [43,45]. This is particularly true for anxious people, as anthropomorphism increases their sense of control [46].
Finally, anthropomorphism satisfies the desire for social contact and affiliation by providing a framework to manage interactions with non-human agents at the lowest cognitive cost. Humans need to establish social links with other humans. Anthropomorphism could satisfy this need by providing a social connection with a non-human agent similar to the one that can be created with a human. The more strongly a person feels the need for social contact, the greater their tendency to anthropomorphize. Hence, a high level of social isolation leads to an increased tendency to anthropomorphize robots (for example, see [47] with AIBOs and [48] with NAO), pets [5], and objects, such as alarm clocks [46] or smartphones [49].
The explanation and prediction of the behavior of non-human agents can, therefore, be based on steps similar to those implemented to understand the behavior of human agents [5]. We will see to what extent anthropomorphism relies on capacities that are usually used for human interactions.

How We Anthropomorphize: The Theory of Mind
For some authors [5,27], anthropomorphism would be an extension of the Theory of Mind (ToM, the ability to attribute mental states in order to understand and predict behaviors [50]), whereas for others [51], the ToM would not be necessary for the anthropomorphization process and would only be a useful way of describing the agent or the situation [52]. So, is ToM a requirement for anthropomorphism or is it not?

Anthropomorphism as a Process Dependent on the Theory of Mind
If we refer to the definition of anthropomorphism as the attribution of human characteristics to non-human agents, such as objects, anthropomorphism is very similar to ToM in that it involves the attribution of mental states. Anthropomorphism is not limited to perceiving human physical features, such as hands or eyes (even very abstract representations of them, as with THYMIO, whose lights can be likened to two eyes), but also involves the attribution of human mental states (sensations, emotions, intentions): not only does the robot have eyes, it can see. In the same way that ToM is activated during human-human interactions to predict behaviors [53], anthropomorphism would be activated during interactions between human and non-human agents to predict, understand, and explain the behaviors of the latter [5,43], within a specific situation or context [54].
Neuroimaging studies highlight the activation of brain regions considered to be part of the ToM network when subjects engage in anthropomorphism [55]. This network corresponds to areas involved in tasks that require understanding and inferring the mental state of others. It is notably composed of the bilateral temporoparietal junction, the precuneus, and the medial prefrontal cortex [56,57].
The same circuit is used when interacting with non-human agents, whether simple geometric shapes and non-human animals, with the activation of parts of the temporoparietal junction [58,59], or biologically based animated characters, with the activation of this temporoparietal junction and the precuneus [60]. Furthermore, there is a correlation between a predisposition toward the anthropomorphization of non-human animals and a greater gray matter volume in the left temporoparietal junction [8]. This also applies to the observation of robot actions, with patterns of neural activation similar to those visible during the observation of human actions [61]. This result again emphasizes the connection between anthropomorphism and ToM highlighted in [27], which can perhaps be explained by common underlying processes.
While the previously cited studies show a link between anthropomorphism and ToM, other authors contest it.

Anthropomorphism as a Process Independent of the Theory of Mind
According to other studies [51,52], the ToM would not be necessary for anthropomorphism. It would only serve as a means to describe the agent or the situation. The process of anthropomorphism would be decomposed into two steps: we first rely on low-level perceptual processes, which are, in a second step, completed and interpreted using a language derived from ToM. This does not necessarily mean that participants actually believe in these mental states. Such a conception corresponds to the weak form of anthropomorphism that we previously discussed [5]. For example, even if participants stated "the triangle had a great idea", they did not really consider it a thinking entity. Thus, the actual attribution of mental states to non-human agents would depend, at least in part, on different processes than those involved in the attribution of mental states to a human [52].
Two arguments seem to support this theory. First, one study shows a correlation between the tendency to anthropomorphize cars and the activation of the fusiform face area, but not of the temporoparietal junction or the medial prefrontal cortex [62]. This result could indicate that anthropomorphism relies more on perceptual processes than on ToM processes. Second, another study indicates that there is a correlation between ToM abilities and situational anthropomorphism (when the measure of anthropomorphism takes into account the context of interaction, i.e., a specific character in an animated film), but not dispositional anthropomorphism (when the measure only takes into account general attitudes toward robots) [51]. This would indicate that anthropomorphism and ToM are not analogous: anthropomorphism would not be an extension of ToM.
Both of these arguments can be criticized. The concepts of ToM and of anthropomorphism are extremely broad "multidimensional constructs", so the question remains worth pursuing. Researchers have recently highlighted disparities between the various tasks used to measure ToM, raising the possibility that these tasks measure different cognitive processes [63]. We will see later that the same problem arises for anthropomorphism (Section 5.2, the measurement of anthropomorphism and its limits). Overall, anthropomorphism varies by object, situation, and agent (e.g., [25,64]).

Anthropomorphizing Factors: Robotic Factors Are Not Enough
According to the theories we have just seen, anthropomorphism is determined either by the interaction situation or by the appearance of the robot. In fact, these different factors jointly modulate the tendency to anthropomorphize a robot.
In light of the theories presented in the introduction, we set out to better understand the determinants of anthropomorphism. Our search for articles was first carried out on Google Scholar with the keywords "anthropomorphism+robot+experimental+psychology", yielding 16,500 results. As this review is not intended to be systematic, we were particularly interested in experimental papers dealing with the acceptance of robots and the attribution of mental states to them, either directly or indirectly. We excluded experimental papers dealing with the industrial application of robots, focusing on the use of robots in an interactive setting. In all, we selected 134 experimental studies, with publication years ranging from 2002 to 2023.
Anthropomorphism is a particular process of inductive inference. It can be influenced in two different ways: top-down induction and bottom-up induction [40,65]. Acting on bottom-up inferences requires modifying the design of the robot, i.e., its appearance and shape, voice, behavior, and the quality of its movements. Activating top-down inferences requires promoting anthropomorphic beliefs, for example by attributing human socio-cognitive abilities to the robot (e.g., suggesting to the participants that the robot feels pain if it falls from the table). The latter is heavily context-dependent: the situation itself can promote beliefs, and the user's own dispositions can also have an influence. We explore all of these elements below.

Robotic Factors: The Design of the Robot
Four characteristics allow us to circumscribe the design of robots: their appearance, voice, the nature of their social behaviors (verbal or non-verbal), and the quality of their movements. Table 1 summarizes the studies cited in this section.

A Human-like Appearance Promotes Anthropomorphism

An object will be perceived as more or less human-like depending on whether it has a human form [115] or human components [116]. Several studies have pointed out that the presence of human-like physical characteristics in a robot (NAO and ROBOVIE) could lead adults and children to anthropomorphize it [77,78]. Social robots generally have a human-like appearance, which is specifically intended to induce anthropomorphism in users [4], although individuals also anthropomorphize robots (PLEO and AIBO) that do not have a human appearance [33]. There is, nevertheless, a strong disparity in the design of the robots used.
A taxonomy of different social robots allows us to distinguish between several types of designs: abstract, iconic, and humanoid [38]. Abstract robots refer to robots whose appearance is strictly mechanical and does not include human morphological elements (e.g., LEGO MINDSTORMS). Iconic robots have human physical features, such as eyes, a mouth, and arms, but their appearance is still strongly mechanical, which allows them to be immediately identified as robots (e.g., NAO). Humanoid robots have an appearance that strongly resembles humans (e.g., SOPHIA); the term "android" designates a robot that strongly resembles humans both in appearance and behavior (e.g., GEMINOID or ERICA).
The head plays an essential role in the perception of humanity, in robots [71] and embodied virtual agents alike (see [117] for a review). The three most important components in a robot's face design are the eyes, the nose, and the mouth. Thus, on a robot's face, the number of human-like elements correlates with the level of anthropomorphism. As early as 17 months, infants recognize salient facial features and relevant social-interaction behaviors (e.g., initial eye contact in a 6-s video) in humans as well as robots (ROBOVIE 2) [97]. Individuals look more at an industrial robot (SAWYER) when it has a face (displayed on a tablet) [79]. They also adopt the perspective of a robot (BAXTER) more when it has a face or a head [26] (with no difference between the presence of a face and the presence of a head). A human-like robot face is considered warmer and more competent than a machine-like robot face, which in turn causes participants more discomfort [70]. The inversion effect, the fact that human bodies and faces are recognized more quickly and accurately when presented in their usual orientation rather than upside down [118,119], applies to robot body images regardless of the degree of human resemblance, i.e., whether the robots have weakly, moderately, or strongly human-like physical features. Concerning robot faces, the inversion effect applies only to robots with a high level of human-likeness (versus a low level): only robots with strongly human-like faces are cognitively anthropomorphized [82].
Thus, user reactions may differ depending on whether the robot looks like a human or a machine. Nevertheless, the impact of the robot's appearance varies across studies.
The human-like appearance of a robot would facilitate interactions, notably by increasing the perceived familiarity of the robot and by giving the impression of understandable and predictable behavior [36]. Human-like robots are judged as more likable [22], fun [75], and intelligent [73]. Whatever the age of the participants (4-8 y.o., 9-13 y.o., or adults), they prefer to interact with the iconic robot NAO rather than with the abstract robot TITAN [69]. Expectations of social and moral norms are more evident toward anthropomorphic robots [76,84]. Adults seem to cooperate more with robots that have some human elements in their appearance [80] and show more empathy toward humanoid robots (GEMINOID) than iconic robots (KOJIRO) [65]. They also develop more concern for them [81]: when participants were asked to choose which robot they would like to save during an earthquake, they favored the human-like robots (ANDREW and ALICIA) over the non-human-like robots (ROOMBA and AUR). Individuals express a preference for a care robot (PEOPLEBOT) when it has a human face rather than an iron face, a sculpture-like face, or no face at all, and they then attribute more mental abilities and positive personality traits to it [68]. An iconic robot (NAO) is rated as more believable, likable, and trustworthy than an abstract, less human-like robot (BAXTER) [22]. Nevertheless, BAXTER's credibility and perceived anthropomorphism increased when individuals first interacted with NAO, suggesting a generalization of anthropomorphism. Similarly, individuals trust an abstract robot (SCITOS G5) more when they have first seen an iconic, more human-like robot (iCUB) [85].
The attribution of mental states could also depend on the quality of resemblance: a human-like appearance would facilitate the application of ToM to the robot (i.e., an explanation of its behavior based on its mental abilities) [34]. A human-like robot may lead individuals to spontaneously consider the robot's perspective. Moderately human-like robots elicit more adoption of their point of view than weakly human-like robots, but less than strongly human-like robots [26]. Children aged 7-14 attribute more human mental abilities to an iconic robot with a human appearance (NAO) than to an abstract robot (COZMO) [67], and similar results are observed among 5-9 y.o. children, who assign more mental states to NAO (iconic) than to ROBOVIE (abstract) [77]. In adults, a robot's resemblance to humans increases the use of ToM toward it (OZOBOT, COZMO, NAO) [34] and the tendency to attribute mental states to it (NAO, PEPPER) [17,78].
Conversely, 3-5 y.o. children attribute as many biological properties to an iconic robot (NAO) as to an abstract robot (DASH) [72], and at 4-10 y.o., they consider humanoid and zoomorphic robots to have a similar moral status [83]. One study compared an iconic robot (NAO) with an abstract robot (the LEGO MINDSTORMS articulated arm) in a mini-dictator game [40]. It reported no effect of the robot's appearance on children, in contrast to the results observed in adults [65]. In other words, 4-5 y.o. and 8-9 y.o. children do not share their stickers with the iconic robot NAO any more than with the abstract robot LEGO MINDSTORMS. In this study, manipulating the affective state of the robots (attribution of feelings versus non-attribution) and presenting them as successive images may have made the anthropomorphic appearance of the robot less salient.
Thus, a human-like appearance seems to improve the quality of the interaction with a robot and to increase the tendency to attribute mental states to it. However, we will see that a robot's strong human-like appearance could also have negative effects on its perception, making it less likable (cf. Section 5.1). In addition, children may be less affected by appearance than adults, something we will discuss later in Section 4.3.1.
Although many studies focus on the appearance of the robot, other characteristics of the robot influence the perceptions of individuals. We will discuss these other characteristics in the next section.

A Human-like Voice Helps, but It Is Not Enough
Voice contributes to the anthropomorphism of the robot. Children aged 4-11 attribute as many mental states to a non-human-like agent with a human voice (ALEXA) as to a moderately human-like robot (NAO) [37]. Adapting the voice, sentence length, and speech rate to the interaction context and to the role occupied by the robot is thus important [120]. For example, it is relevant to give the robot (NAO) a higher-pitched voice when it is presented as a learner [111]. Indeed, the pitch of the voice influences the perception of the overall quality of the interaction [112]: a social receptionist robot (OLIVIA) with a higher-pitched voice is evaluated more positively than the same receptionist with a lower-pitched voice. Similarly, individuals cooperate more with a robot (NAO) when it expresses emotions in its voice [113]. The levels of pleasure and arousal experienced in interacting with a robot increase when the robot has a voice similar to the human voice [95,109,110]. Individuals apply more social norms to a robot (NAO) with a natural voice intonation than to one with a synthetic voice [121]. But the voice impacts the perception of the robot differently according to the behavior the robot displays: we trust a robot (NAO) that behaves honestly more when it has a synthetic voice, whereas if it acts dishonestly, we trust the same robot more when it has a natural voice [114].

Behavior Is a Crucial Factor
The robot's behavior may play a more important role in assigning human status than its form [122]. The robot can express different social behaviors, both verbal and nonverbal, which might have an effect on acceptance. Acceptance can be subdivided into intentional and behavioral acceptance [123]: intentional acceptance is the user's intention to act in a certain way with the technology (usually measured by a questionnaire), while behavioral acceptance refers to the user's actions when using the technology (behavioral measurement). Some studies conclude that there is no effect of nonverbal or verbal behavior on robot acceptance, whether intentional [88] or behavioral [103]. Conversely, other studies have shown an effect of the robot's verbal (e.g., encouragement) and nonverbal (e.g., behaving nicely toward the user, being user-oriented) social behavior on behavioral [87,88] and intentional acceptance [92,93].
The robot's verbal behaviors are of key importance in the interaction (especially when the robot shows collaborative behavior in the conversation with the user). Individuals feel more satisfaction and trust toward a robot with polite behavior [94], and a friendly robot is more appreciated than a robot with unfriendly behavior [36]. The level of interactivity in the conversation is a relevant factor: a robot with highly interactive behavior (enabling sophisticated communication with the participant) is judged more sociable and competent than a robot with lesser communication skills [90]. Interaction is valued more highly and experienced as more positive when the robot is animated rather than apathetic [98]. For example, children (3-5 y.o.) perceive a robot (NAO) expressing interjections (e.g., "Ah", "Uh") as more human-like [103]. A highly interactive robot, one that says "hello" warmly and recognizes children's first names, elicits more child engagement than a weakly interactive robot [102]. At age 5, children consider a robot with interactive behavior more intelligent than a robot that does not move, and consider it more likely to feel emotions [100]. Moreover, an unpredictable conversation triggers more anthropomorphism than one clearly following recognizable behavioral patterns [43]. Humans have pragmatic expectations regarding conversations, and details (including non-verbal details) such as timing and turn-taking can also have an impact on anthropomorphism [124].
The robot's nonverbal social behavior (e.g., looking in the direction of the interlocutor it is addressing or toward a target object, or reaching out toward that object) also modulates individuals' adoption of its perspective. The robot's point of view is taken into account more when the robot (NAO and BAXTER) is looking at the object than when it is looking to the side [26,32]. Individuals take little account of the perspective of an iconic (yet moderately human-like) robot when it does not show social behavior. A robot with its gaze directed toward the user increases the pleasure and arousal felt during the interaction [95]. Individuals trust a robot showing a human-like social gaze pattern more than one with a fixed gaze if that robot is physically human-like (iCUB), but not if the robot is non-anthropomorphic (SCITOS G5) [85]. A robot (SIMON) showing joint-attention behavior is rated more competent by participants [91]. Even the posture of the robot can influence users, who approach a sitting robot (NAO) more closely than a standing one [99].
The question of adapting the robot's behavior to the user's emotional state has also been raised, but the findings are mixed. One study shows no effect on intentional acceptance [96], while others highlight a beneficial effect of the robot's adaptive behavior on both intentional acceptance [86,89] and behavioral acceptance [96,104]. For example, a robot with personalized behavior allows children to have more fun and motivation in their interactions. Moreover, studies underline the importance of coherence between the appearance of a robot and its behavior, or between the intention it expresses and its behavior [101]. Customization of a robot by participants (by choosing the form and the social skills of the robot) increases their trust toward this robot and leads to less discomfort [125]. They also attribute more agency to the robot. This personalization has no effect on other measures of anthropomorphism (experience, perceived warmth, and competence).
The importance of the adaptation of the robot's behavior is strikingly similar to the natural adaptation of human behavior when communicating: this is the main interest of the entire field of conversational pragmatics [126][127][128] (note that conversational pragmatics has been shown to be a very important factor of human-likeness in conversations with artificial virtual agents, i.e., chatbots [129][130][131][132][133]). In other words, even when it comes to the way the robot reacts, context is paramount.

The Quality of Movements Can Reinforce Anthropomorphism
A robot performing gestures is more appreciated than a stationary robot, and individuals attribute more mental states to it [107]. What defines the quality of a robot's movement is its degrees of freedom (the number of independent axes along which a system can move or rotate). The impression of human resemblance is more striking when a robot can move its arm along multiple axes (multiple degrees of freedom at the shoulder) rather than along a single axis (a single degree of freedom), which only allows the arm to move up and down [134].
To promote anthropomorphism, the quality of movement is one of the most important cues because it gives the robot an impression of animation and liveliness [108]. Simple geometrical figures can be objects of anthropomorphism if their movements resemble human movements [35]. In virtual agents, motion triggers a stronger sense of social presence than in a static agent, yet this behavioral effect is only observable in non-human-like agents, as opposed to human-like agents [135]. Regarding robots, results are slightly different: the closer the movement is to human (biological) movement, the more pleasant the partner will consider the interaction, regardless of the robot's appearance. Thus, a robot (BAXTER) that moves naturally and smoothly (naturalistic movement) is perceived as more friendly than one with mechanical movement, whether its whole body is visible or only its arm [105]. Natural movement (following curves) gives it a greater sense of animation, but only when the robot's body is fully visible. A study comparing an arm with robotic movement and an arm with human movement found a positive effect of human-like motion on anthropomorphism: users better anticipated the trajectory of the arm in the human-movement condition [106]. Nevertheless, although moving robots are considered more human-like, they are not necessarily more appreciated. As we will see later, the perception of this animation can be disturbing or unsettling [105] (cf. Section 5.1).
While these factors related to the robot are generally well linked to the concept of anthropomorphism (robotic factors), the context of the interaction also has a crucial part to play in the anthropomorphization process of the robot (situational factors and human factors). We will focus on these situational factors in the next section.

Situational Factors: The Situation Itself Can Change the Level of Anthropomorphism
By situational factors, we mean the characteristics of the interaction. They include the way the robot is presented to individuals (the "anthropomorphic framing"), the role of the robot, the frequency of the interaction, and the perceived degree of autonomy of the robot. Table 2 summarizes the studies cited in this section.

Anthropomorphic Framing Shapes the Perception of the Robot

The way the robot is presented to individuals, also known as framing, affects interaction and the tendency toward anthropomorphism [65,137,143]. To place the robot in an anthropomorphic frame, that is, one that promotes anthropomorphism, studies rely on a humanized description of the robot, assigning it a first name, a personal history, or mental abilities. For instance, an "anthropomorphic framing" condition could involve the robot being described with a name and a personal history, including individual preferences such as its favorite color and hobbies, whereas a "non-anthropomorphic framing" condition would have the robot described in the manner of a tool.
The impact of anthropomorphic framing is debated. Some authors report no effect of anthropomorphic framing on the perceived resemblance of a robot (NAO) to a human [142] or on its intentional acceptance [145]. Similarly, the anthropomorphic framing of the robot does not increase prosocial behavior toward it [141]. Nevertheless, other studies suggest an impact of this framing. Individuals are less likely to use a hammer to hit a robot (HEXBUG) presented with a first name and a story (e.g., "He's friendly but easily distracted") than a robot presented as an object [137]. A robot (TELENOID) presented as having a personal story is considered more attractive by the participants, who then report a higher degree of perceived human likeness and a lower feeling of eeriness [139]. When robots are presented as part of a narrative story (in a situational context), they are appreciated more than robots presented solely from a technical point of view and are judged to be more intelligent and more human-like [143].
The social abilities attributed to the robot impact individuals' perceptions of the robot as well as their behavior with it. ToM skills are associated with more positive reactions and an increased desire to interact with the robot [136]. A robot (NAO) presented as having ToM skills (participants watch a video where the robot passes the Sally-Anne false-belief test) is perceived as more socially intelligent than a robot without these skills (in the video, the robot fails the false-belief test) [147]. Similarly, participants trust a robot (PEPPER) presented as having advanced ToM capabilities more than a robot with weak capabilities [140,144]. The perception of ToM capabilities in a domestic robot (HIWONDER) leads to a more positive evaluation of service quality, in contrast to a robot lacking these capabilities [146] (the authors used the same script as previous studies [140,147] to present the robot as having ToM capabilities). Individuals are more morally concerned about and less likely to sacrifice robots presented as having emotions [65], regardless of the robot's appearance (GEMINOID, an android robot, and KOJIRO, a less human-like robot).
Anthropomorphic framing also has an impact on children's interactions with robots. Indeed, at 3-7 y.o., when a robot (TEGA) is presented as a friend of the child, the child looks at it significantly longer than when the robot is presented as a machine [138]. In this study, in the anthropomorphic framing condition, the experimenter speaks directly to the robot: "You will explain to your new friend how to play, okay?". In the non-anthropomorphic condition, the experimenter refers to the robot in the third person: "The robot will explain to you how to play." Children also share more resources with a robot when it appears to have emotional states [40].
Finally, the impact of anthropomorphic framing can depend on the task to be performed: in a social task, individuals collaborate more with a robot perceived as having emotional abilities, but they prefer to collaborate with a non-emotional robot in an arithmetic task [163].

Giving a Robot the Role of a Companion Increases Acceptance
The role of the robot in the interaction is extremely variable. It can act as a peer, a helper, a pupil [3,164,165], a mentor, a teacher, or an experimenter [1,2,166,167]. Unfortunately, the above studies did not analyze the effect of the robot's role on acceptance by the participants, but we describe below other studies that have done so.
Children aged 6 to 9 show higher intentional acceptance of some robotic functions, such as when the robot supports learning or acts as a companion [160]. Similarly, adults are satisfied with having a robot clean their house, but not with it cooking for them [162] or praying for them [69]. Overall, participants judge an assistant robot more sociable than a competitor robot [90]. In contrast, in another study, robots triggered the same intentional acceptance across different roles (friend versus machine) [138]. One paper stated that the impact of the robot's role is yet to be determined [155]. The effect of the robot's role also depends on the age of the child: younger children (3-5 y.o.) are more interested in a story-reading activity led by the robot, while older children (5-8 y.o.) prefer to interact and discuss with the robot [161].
Anthropomorphism is heavily implied in these studies. Indeed, the fact that participants accept the robot in a given role indicates a fundamental level of anthropomorphism. Yet, to our knowledge, no study has directly investigated the influence of the role given to a robot on anthropomorphism itself.

The Frequency of the Interaction Decreases Anxiety and Anthropomorphism
The link between the frequency of interaction and the child's acceptance of the robot remains unclear. Some studies show a positive impact of frequency [154,157], while others find none [86,155]. Repeated interactions may nevertheless be preferable, as the expression of negative attitudes toward robots (especially anxiety) decreases over time, regardless of the robot type (GEMINOID HI-2 and ROBOVIE R2 in [36], KAROTZ in [154]) and regardless of the age of participants [158] (with NAO, anxiety was reduced for all participants). The more often an individual interacts with a robot (AIBO), the more they express a positive attitude toward robots in general [153], which can be interpreted as a mere exposure effect [154].
Individuals may change their attitude toward a robot after conversing with it, although this depends on the robot's appearance. In an ultimatum game, participants cooperate more with an android robot (GEMINOID HI-1) after talking to it, whereas talking to an iconic robot (ROBOVIE R2) does not increase cooperation with it [156]. The change in attitude toward the robot over time may also depend on its behavior. Over a period of 5 months, the quality of interaction of children aged 18 to 24 months with a robot (QRIO) decreases if the robot behaves in a predictable way, but increases again if the robot performs a variety of behaviors [159]. In the long term, the attribution of mental abilities to the robot decreases (STARSHIP ROBOT [39]), which may correspond to the end of a two-month novelty period [154].

A Robot Perceived as Autonomous Is Anthropomorphized More
A robot (RA-I) that appears to act autonomously is perceived as more trustworthy than a teleoperated robot (i.e., a robot directed remotely by a human), but elicits a weaker sense of social presence [150]. Children aged 4-8 attribute fewer anthropomorphic qualities to an explicitly remote-controlled robot than to an autonomous robot [148]. When children aged 7-10 are informed that the robot (NAO) is remotely operated, they perceive it as less autonomous and are less prone to anthropomorphism [152]. Other studies instead highlight a similar level of acceptability between the two types of robots [149,151]; however, when participants are explicitly informed of the robot's teleoperation, the perceived intelligence of the robot decreases [151].
Thus, the perception of robots and the behaviors expressed toward them vary according to the interaction situation. The information explicitly given to the subject by the experimenter, therefore, impacts their tendency to anthropomorphize.
Overall, how the robot is presented, its role, and its perceived degree of autonomy will influence its perception, although some studies are contradictory (especially those on the frequency of interaction). These discrepancies can be explained by the inter-individual variability of anthropomorphism, which leads us to focus on the characteristics of the person interacting with the robot in the next section.

Human Factors: Anthropomorphism Also Depends on the Users Themselves, Not Just the Robots
In addition to robotic and situational factors, the characteristics of the user modulate the perception and acceptability of robots [168]. A review paper highlighted the role of age, gender, personality, education, and experience with technology and with robots [155]. We can also cite the impact of the child's developmental type on the perception of robots. We discuss all these aspects below. Table 3 summarizes the studies cited in this section.

The Age of the User Modulates Acceptance and Anthropomorphism

Social robots are generally well accepted by children aged 5-9 [155], both intentionally (NAO [177]) and behaviorally (KEEPON [104]). We observe similar tendencies with older children (aged 10-15), with good intentional and behavioral acceptance of the robot. At the intentional level, they show a similar acceptance of robots as of a human [178] or a tablet [179]; at the behavioral level, they are more willing to switch devices when they perform an activity with a tablet than with a robot, indicating a preference for the robot [179]. When directly comparing groups of different ages, two studies highlight a similar acceptance of robots between young children aged 4-6 and older children aged 7-10, both at the intentional [173] and behavioral [104] levels.
Yet, results regarding the effect of age on robot acceptance are conflicting. Two other studies reach the opposite conclusion [160,171]. Indeed, children aged 6-9 are more accepting of robots than both preteens (10-12 y.o.) and teens (13-16 y.o.) at the intentional level [160]. At the behavioral level, children aged 3 trust a human more than a robot in a game, while at age 7 they trust the robot more (NAO [171]). In addition, when preschoolers interact with a robot for the first time, younger children (about 36 m.o.) are more easily distracted than older children (about 44 m.o.): they look at the robot less and show greater dependence on the experimenter [169]. A 10-month age difference may, thus, induce different levels of engagement with the robot. The question of the robot's role may be relevant to explain these results: 36-month-olds would be interested in the robot for story reading, while 44-month-olds would prefer more interaction and discussion [161].
Beyond mere acceptance, age also has an impact on the overall tendency to anthropomorphize. Neurotypical children anthropomorphize robots [148], as evidenced by the fact that they attribute goals to their movements [175], assign mental states to them [180], can help them [174], and feel morally concerned about them to some extent [83]. More specifically, younger children are more likely to anthropomorphize than older children [202]. At age 3, children are more likely to assign biological properties to a robot (DASH, NAO, KIROBO) than at age 5 or in adulthood [72,176]. At age 5, children assign more mental states to an iconic robot (NAO) than 7 and 9 y.o. children do [40,77]. At ages 4-8, children rate robots as significantly kinder than older children (9-13 y.o.) or adults do, and they would also appreciate the robot praying for them more [69]. At ages 5-11, children are more likely to attribute human characteristics to robots than adolescents (12-16 y.o.) [170]. At ages 9-12, children are more willing to consider robots as social beings than 15 y.o. adolescents; they are also more concerned about robots' moral interests and attribute more mental states to them [122]. Thus, as children grow older, they attribute fewer human characteristics to non-human agents (e.g., NAO, ALEXA, ROOMBA) [37].
The tendency toward anthropomorphism would decrease during development due to the accumulation of experience [40]. According to the SEEK theory [5], anthropomorphism would serve to complete a partial representation of robots. Thus, the more experience the child gains interacting with robots (with age), the more relevant their representation of robots becomes, and the less anthropomorphization is necessary. Several studies seem to confirm this theory: exposure to technology increases with age [37], and as children grow older, they have a more sophisticated understanding of the mental capacities of robots, as well as of their moral and social status [122].
This tendency even extends to adulthood: adults ascribe less free will to a robot (ROBOVIE) than to a human, whereas 5-7 y.o. children assign as much free will to the robot as to a human [172]. When comparing adults of different ages, we notice that they also perceive robots differently: older adults (over 60 y.o.) trust a robot more than younger adults do (when the robot is polite) [94] and judge robots as more useful than younger individuals do, but they also express more anxiety toward them [158]. These age-related differences could once again be due to experience with technology: young adults, having more experience with robots in daily life, would have a more accurate representation of robots' real capabilities, which would explain why they find them less trustworthy and useful, but report less anxiety.

A Same-Gender Robot Promotes Acceptance in Children, but Not Anthropomorphism
On the question of the impact of the user's gender on the acceptance of robots, the results are not clear-cut, which could be due to differences in measurements. In children, at the behavioral level, girls interact longer with an abstract robot than boys do [104], but boys show a higher level of interaction (which includes gaze time, emotional expression, and dependence on the experimenter) with an iconic robot than girls [190]. At the intentional level, when children interact with an abstract robot, no effect of the child's gender on acceptance is observed [173,197], whereas girls report more physical and social attraction to human-like robots than boys [197]. This could suggest that children behave with the robot and perceive it differently according to their gender only when the robot has a human-like appearance. In adults, a feminine iconic robot (OLIVIA) in the role of a receptionist generally receives higher ratings of interaction quality (on a Likert scale) from male participants than from female participants [112]. Behavioral differences linked to gender also exist in human-human interactions. In children, girls engage in interactions longer than boys, and boys initiate more episodes of interaction than girls [203]. In adults, men are more active in interaction than women (they talk more and give their opinion more often), while women show more positive social behavior (friendly behavior, approval) [204]. Thus, it is possible that humans simply reapply to robots the same model of social interactions that they already use with humans.
The user's acceptance of the robot also varies depending on whether the robot's gender matches their own. At the intentional level, boys are more likely than girls to prefer an iconic robot (NAO) of the same gender, but at the behavioral level, such a difference does not seem to be present: children smile more with a female robot than with a male one, irrespective of their own gender [193]. Yet, another study shows no effect of gender congruity on intentional acceptance [191]. These contradictory results can perhaps be explained by the experimental designs of the two studies. The first study [193] varied the voice and the name of the robot to convey its gender and then asked children about their explicit preferences regarding the robot's gender. The second study [191] only changed the name of the robot to indicate its gender and asked children to rate indirect statements about their preferences, such as "I would like to take Lucas/Laura (the possible names of the robot) home with me".
It is possible that the impact of gender varies with age. At 5-8 y.o., children prefer a robot of the same gender as themselves, while at 9-12 y.o. they report no particular preference [193]. These results are observed at the behavioral level (young children placed in interaction with a robot of the same gender play significantly longer [192] and smile more [193]) and at the intentional level (young children say they prefer a robot of the same gender as themselves [193]). In adults, one study shows that a robot of the opposite gender is judged more trustworthy, credible, and engaging than a robot of the same gender as the user [195]. In contrast, a same-gender preference, similar to that observed in children, can be found in cooperation tasks: male participants complete tasks faster with robots of the same gender, while female participants do not [189]. This pattern of same-gender preference is also observed in human-human interactions, in children (see [205] for a review) and in adults (see [206] for a review), and is called gender segregation.
The user's gender would also modulate anthropomorphism in adults, but not in children. Indeed, one study shows that boys and girls anthropomorphize a non-gendered robot in the same way [207], and in [191], no difference is observed between the anthropomorphization of robots of the same gender and robots of the opposite gender. Thus, the attribution of human characteristics to robots does not seem to depend on the child's gender, unlike acceptance, which does. In adults, men tend to rate an abstract robot as more human-like, whereas women rate it as more mechanical [194]. Conversely, in another study, women judged robotic movement to be more human-like than men did [186]. However, individuals attribute more mental abilities to a robot (FLOBI) with a human voice of the same gender as their own, compared to a voice of the opposite gender [109]. Men also report being psychologically closer to the robot with a male voice than to the one with a female voice; this effect does not seem to exist for female participants [109].
Gender stereotypes may also apply to robots. Individuals perceive a female-faced robot as warmer and more competent than a male-faced robot, which in turn evokes more discomfort [70]. When a robot (NAO) is implicitly presented as a man by giving it stereotypically masculine characteristics, it is judged more trustworthy and competent than when it is implicitly presented as a woman, in which case it is considered more pleasant [188]. However, it is important to note that these authors did not take the participant's gender into account in their analysis. In another study, giving the robot a gendered name and voice (male, female, or neutral) did not produce a difference in perceived competence between the genders [187]. When explicitly asked, adult participants chose a gender-neutral robot over a gendered one [196]. This discrepancy can potentially be explained by the methodology employed: some studies presented robots by video or by image [187,196], whereas in [188], participants interacted with a physically present robot.

Personality Traits Impact Anthropomorphism
Few studies have examined the influence of individual differences on the tendency to attribute human characteristics to robots [44]. In adults, individuals with the highest need for cognition (individuals who are more likely to engage in cognitively demanding activities) attribute fewer human characteristics (agency, sociability, and animation) to robots and show more positive attitudes, compared to individuals with a lower need for cognition. Conversely, individuals with the highest need for prediction (individuals who are uncomfortable with ambiguity and prefer order) attribute more anthropomorphic characteristics to robots (agency, sociability, and animation) and show more negative attitudes, compared to individuals with a lower need for prediction [45]. In addition, people with strong empathic personality traits are more reluctant to hit a robot (HEXBUG NANO) presented with a first name and a personalized story [137], and people with attachment anxiety (preoccupied with proximity, fearful of abandonment, and hypervigilant to social cues) anthropomorphize more than others [46]. In children aged 8-12, there is a link between the personality trait of openness to new experiences and intentional acceptance of robots: children who are more open to new experiences are more likely to want to interact with the robot again (EMYS [198]).

Cultural Differences Regarding Anthropomorphism
The perception of robots differs across cultures. In a study including seven nationalities (German, American, English, Chinese, Dutch, Japanese, and Mexican), attitudes toward robots, assessed with the Negative Attitude Toward Robots scale [208], were the most positive in the USA and the most negative in Mexico [153]. An educational robot is perceived more positively in Korea, where parents perceive it as a "friend of the child", than in Spain, where parents perceive it as a machine [181]. Both Chinese and Korean participants perceive a social robot (LEGO MINDSTORMS NXT) as more friendly, trustworthy, and satisfying than German participants do, and they also engage more in interactions with the robot [185]. Cultural differences are, therefore, likely to impact the agreeableness, satisfaction, and trust expressed toward the robot. Moreover, Japanese participants attribute more mental abilities to robots (ROBI, KEEPON) than Australian participants [184]. For Chinese individuals, the lonelier an individual is, the less they anthropomorphize robots, but this is not the case for American individuals [182]. Explanations for these variations could lie in the difference between individualistic and collectivist cultures: individualism would lead to a less positive attitude toward robots [185].
Finally, a robot presented as having the same cultural background as the user will be perceived more positively. Germans attribute more mental abilities to a robot presented with a German name than to a robot with a Turkish name, and report more psychological closeness and positive intentions toward it [183]. This result can be compared with the in-group bias observed in human-human interactions: people report more positive affect toward in-group individuals than toward out-group individuals [209] and show more prosocial behavior [210]. Thus, the same in-group bias could apply to interactions with robots.

But There Is More
Other user factors, such as previous experience with technology, education, expectations of robots, social isolation, and developmental type can impact robot acceptance.
Users who are more experienced with new technologies (computer training and/or knowledge of voice recognition devices) rate the social skills of two robots (OLIVIA and CYNTHIA) significantly lower than inexperienced users do. This result suggests that more experienced users tend to be less open to perceiving robots as social entities [112]. Similarly, children aged 4-7 y.o. with little or no experience with robots assign more psychological characteristics to them than experienced children [199]. Moreover, the more educated an individual is, the less likely they are to perceive the robot as a social entity [200].
Individuals judge a robot (KAROTZ) more positively before having met it than after the interaction, which suggests, on the one hand, that they generally hold high expectations toward robots and, on the other hand, that these expectations were not met when they encountered the robot [154]. Low initial expectations lead to less disappointment during the interaction [33]. In a long-term interaction, people who had the highest expectations toward the robot before the interaction show a higher drop-out rate than individuals with lower initial expectations [154].
In addition, socially isolated individuals would evaluate interactions with a robot (APRIL) more positively and rate the robot as more attractive than socially well-connected individuals [47]. Moreover, loneliness would lead to more anthropomorphic attributions toward an animal or a technological gadget [46]. This increased tendency toward anthropomorphism can vary with the robot's appearance, since socially isolated people attribute more human characteristics to a robot that looks like a human than to a robot that looks like an animal [48]. As we have seen before, it could also depend on the user's culture, as a reverse pattern was observed among Chinese participants, with lonelier individuals anthropomorphizing less [182].
The developmental type can also modulate robot perception and interaction. For example, individuals with autism spectrum disorder (ASD) would not show a human preference bias, unlike neurotypical individuals, who have more affinity with another human being than with an artificial object [201,211]; they also consider a human voice and a robotic voice to be similar [110]. They would also have more difficulty interpreting the robot's mental states than typically developing children [180].

Limits
We have seen the influence of a robot's design (robotic factors) on how robots are perceived and interacted with, as well as the importance of the context in which the interaction takes place (situational factors and human factors).
Despite some variability in the results, it seems that the more human-like a robot is perceived to be (whether this is due to its design or to the interaction situation, which includes the user), the more it will be appreciated. Nevertheless, the benefit of a human-like appearance has a limit, namely the uncanny valley: a strong resemblance can instead negatively impact the interaction. Furthermore, the results of studies on anthropomorphism are sometimes contradictory, which may be attributable to the significant heterogeneity of the methodologies employed. Thus, three types of limits emerge from the literature: (1) an intrinsic limit of robotic factors, the uncanny valley; (2) the measurement of anthropomorphism; and (3) the methodological limits observed in the study of human-robot interactions.

Intrinsic Limit for Robotic Factors: The Uncanny Valley
The uncanny valley refers to the following phenomenon: when an object (here, a robot) reaches a very high degree of human likeness, it triggers a feeling of uneasiness [212,213]. To highlight this phenomenon, two types of studies have been conducted: those that focus on the feeling of strangeness [214][215][216] and those that show a preference for machine-like [162,217] or moderately human-like robots [73]. In human-robot interactions, two elements generate the feeling of strangeness: a humanoid face [214], and/or a size and body mass similar to those of the interacting person [216]. As a consequence, humans can end up trusting and appreciating humanoid robots less than mechanical ones [73,143,218]. A meta-analysis (of 49 studies based on the Godspeed questionnaire) confirms the preference for robots with low to moderate human resemblance but fails to reach a conclusion on the negative effects induced by a strong resemblance to humans [219]. On the other hand, a study of 251 robots shows that a strong human appearance triggers a feeling of strangeness [215].
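To make the non-monotonic shape of this relation concrete, the following toy sketch evaluates a purely illustrative affinity curve; the functional form and parameters are arbitrary assumptions, not a model drawn from the literature:

```python
import math

def toy_affinity(human_likeness: float) -> float:
    """Purely illustrative uncanny-valley-shaped curve: affinity
    rises with human likeness, dips sharply near high (but imperfect)
    likeness, and recovers at full likeness. The functional form and
    parameters are arbitrary assumptions, not estimates from data."""
    rise = human_likeness
    valley = 0.9 * math.exp(-((human_likeness - 0.85) ** 2) / 0.005)
    return rise - valley

for h in (0.2, 0.5, 0.85, 1.0):
    print(f"human likeness {h:.2f} -> affinity {toy_affinity(h):+.3f}")
```

The point of the sketch is only that appreciation does not increase monotonically with human likeness: it collapses in a narrow band of high, but imperfect, resemblance.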
The explanation of the uncanny valley phenomenon rests on the expectations induced by the appearance of robots [220]: discrepancies between human expectations and robot behavior would be at the origin of this phenomenon [212,213,221]. The resemblance would cause individuals to judge the robot according to human normative expectations [222] and, from that point, deviations from the human norm make the robot seem scary [223]. Thus, the extent to which individuals attribute to the robot an ability to feel and perceive sensations significantly predicts the feeling of strangeness they report [214]. For this reason, the authors argue that the uncanny valley is a consequence of individuals' attribution of feeling and sensing abilities to robots. Note that the uncanny valley also impacts moral judgment: individuals evaluate moral choices made by a human-like robot (iCLOONEY and iROBOT) as less ethical than the same choices made by a human or by a non-human-like robot (ASIMO) [222].
Repeated interactions decrease feelings of strangeness regardless of the robot (e.g., GEMINOID or ROBOVIE), with both robots being perceived as less strange on the third interaction compared to the first. This indicates that the uncanny valley phenomenon is reduced by increased exposure to the robot [36]. This result seems consistent with the theory presented in [214]. Once individuals know the actual capabilities of the robot, they would rate it as less uncanny and then report more positive feelings. This could explain the beneficial effect of repeated interactions with robots, discussed in Section 4.2.3.
The uncanny valley phenomenon is present in children [84,224], but its age of onset is debated. For some authors, it would appear between 6 and 12 months [225]: 6 m.o. babies prefer to look at a strange avatar rather than at a picture of a human, while at 12 months, the opposite is observed. Other authors place its onset between 4 and 8 y.o. [69], from 9 y.o. [226], or even between 8 and 14 y.o. [84]. The explanation for these differences lies in the methodology used, particularly with regard to the measures of interest (fixation time for babies, image classification for children) or the choice of stimuli (robot video or robot image). The variability of the methods used to measure anthropomorphism and the perception of robots thus limits the interpretation of the results.
Thus, although robot designers seek to maximize the resemblance of robots to humans in order to improve interactions, human-like robots can trigger a feeling of strangeness for the user [215], and can even lead to a reduction of the trust granted to the robot [218]. In conclusion, improving the robotic factors of anthropomorphism alone does not necessarily have beneficial effects on the perception of and attitudes toward robots.

The Measure of Anthropomorphism and Its Limits
The methodologies employed in studies on human-robot interactions are highly heterogeneous [227]. Studies are mainly based on measurements from the user's point of view through questionnaires on the perception of robots [228,229]. In the next sections, after presenting the different types of questionnaires, we will examine their limitations.

Questionnaires and Implicit Measures
Different questionnaires (all based on Likert-type scales) are used in the literature to measure anthropomorphism from the user's perspective. A review of the literature shows that some authors have used questionnaires that address anthropomorphism in the broad sense (e.g., the Godspeed questionnaire [6] or the individual differences in anthropomorphism questionnaire [44], which correspond to weak anthropomorphism), whereas others focus specifically on cognitive anthropomorphism (e.g., the attribution of mental states questionnaire [97], which corresponds to strong anthropomorphism). The Godspeed questionnaire [6] is one of the most frequently used [79,140,147,230]; it consists of five subscales, each composed of several items scored on 5-point scales, assessing five domains: anthropomorphism, animacy (illusion of life), likeability, perceived intelligence, and perceived safety. Another frequently used questionnaire measures anthropomorphism through the attribution of mental states to agents presented as pictures [77]; it consists of 25 questions divided into 5 dimensions: perceptual, emotional, intentional, imaginative, and epistemic. As an example, for the perceptual dimension, participants answer the question, "Do you think he can feel heat or cold?". Nevertheless, the questionnaires measuring anthropomorphism vary widely, since many others have also been used, e.g., the individual differences in anthropomorphism questionnaire [44] and the robot interactive experiences questionnaire [231]. The former consists of 30 items scored on a 10-point Likert scale, assessing the attribution of anthropomorphic traits (e.g., intentions, emotions, free will, mind) and non-anthropomorphic traits (e.g., "durable", "active"); the latter includes 8 items scored on a 7-point Likert scale, assessing the individual's attitude in situations of engagement and social interaction with a robot. Moreover, the validity of verbal (explicit) measures for assessing mental state attribution has been questioned [16].
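To make the scoring logic of such Likert-based instruments concrete, here is a minimal hypothetical sketch: the subscale names echo the Godspeed domains, but the items, responses, and item-to-subscale mapping are invented for illustration:

```python
# Hypothetical scoring of Likert-type anthropomorphism items.
# Subscale names echo the Godspeed domains, but the items,
# mapping, and responses below are invented for illustration.
from statistics import mean

responses = {  # one participant, items rated 1-5
    "anthropomorphism_1": 4, "anthropomorphism_2": 3,
    "animacy_1": 5, "animacy_2": 4,
    "likeability_1": 2, "likeability_2": 3,
}

subscales = {
    "anthropomorphism": ["anthropomorphism_1", "anthropomorphism_2"],
    "animacy": ["animacy_1", "animacy_2"],
    "likeability": ["likeability_1", "likeability_2"],
}

# The subscale score is simply the mean of its items.
scores = {name: mean(responses[item] for item in items)
          for name, items in subscales.items()}
print(scores)  # {'anthropomorphism': 3.5, 'animacy': 4.5, 'likeability': 2.5}
```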
In the field of human-robot interactions, the importance of implicit measures deserves to be emphasized: the results obtained with these measures are not necessarily similar to those obtained with explicit measures, such as questionnaires [22,117,144]. Verbal and nonverbal measures of the attribution of mental states to a robot can lead to divergent results [16,217]. Indeed, children may show similar behavior when interacting with a robot and with a human, while attributing fewer mental states to the robot in their questionnaire responses [232]. For this reason, indirect (implicit) and more objective methods have been used to assess the tendency toward anthropomorphism, including social dilemma-type paradigms (such as the mini-dictator game and a resource-sharing task [40], a moral dilemma [65], or the ultimatum game [228]) that identify, for example, giving behaviors. Nonverbal paradigms have also been used to implicitly assess the attribution of mental states to the robot (e.g., the gaze anticipation paradigm [16], or the implicit association task [217]). Further studies are needed to determine which type of measure (explicit or implicit) better reflects mental state attributions to robots [16]. Until this debate is resolved, studies should include both types of anthropomorphism measurements.
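As an illustration of how such behavioral indices can be quantified, the sketch below computes the gap between tokens given to a human partner and to a robot partner in a mini-dictator-game setting; all data are hypothetical:

```python
# Hypothetical mini-dictator-game data: tokens (out of 10) that each
# participant allocated to a human partner vs. a robot partner.
# The closer giving to the robot tracks giving to the human, the more
# the robot is treated as a social agent.
tokens_to_human = [5, 4, 6, 5, 3, 5]
tokens_to_robot = [4, 4, 5, 2, 3, 5]

def mean(xs):
    """Arithmetic mean."""
    return sum(xs) / len(xs)

gap = mean(tokens_to_human) - mean(tokens_to_robot)
print(f"mean gift to human: {mean(tokens_to_human):.2f}/10")
print(f"mean gift to robot: {mean(tokens_to_robot):.2f}/10")
print(f"human-robot giving gap: {gap:.2f} tokens")
```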
Generally, the majority of studies on anthropomorphic robot designs (whether through the robot's appearance or the way it is presented to the user) aim to assess their impact on subjective measures, such as the robot's perceived intelligence, acceptance, or realism [73], and pay little attention to the impact on more objective measures (such as performance [79]). While the major advantage of questionnaires is their ease of administration [228], they have some limitations.

The Pragmatic Limits of Anthropomorphic Measures
The measures used are often non-standardized and subjective, making it difficult to compare results across studies. Self-reported measures may also be subject to social desirability bias [155,233,234]: participants tend to respond according to what they think is expected of them. Behavioral measures are less subject to this bias.
Research in language pragmatics provides new insights into some of the results obtained. According to the mere appearance hypothesis, it is the physical resemblance of the robot to a human that induces its perspective to be taken into account. A study reports little spontaneous perspective-taking toward a robot that does not have a human appearance (e.g., THYMIO), whereas the perspective of a human-like robot is taken into account, even when this robot triggers a feeling of strangeness (ERICA) or when it obviously has no mental capabilities (e.g., in the case of a mannequin or a wax figure) [26]. In this study, participants see the image of a robot facing a digit: in the participant's reading direction, the digit is a 9; in the robot's reading direction, it is a 6. The experimenter then asks the open-ended question, "What is the number on the table?" to measure participants' spontaneous adoption of the robot's perspective. The authors nevertheless question the effect of the experimenter's request. Indeed, according to the principle of cooperation [126], when the experimenter asks a question, participants try to infer their expectations in order to answer as well as possible. The use of a seemingly simple question may destabilize participants and encourage them to seek alternative interpretations in the environment. In this way, they may have inferred that they were expected to consider the agent's perspective when it seemed relevant, i.e., when a cue suggesting that the agent could read the digit was present [26]. This could explain why they adopt the perspective of a mannequin, but not that of an abstract robot (THYMIO looks like a box, which may not be a sufficient cue for reading digits). Thus, it is difficult to determine here whether the perspective-taking of human-like robots is truly spontaneous or induced by the experimenter's request: the result could be due to pragmatic factors rather than to appearance per se. Measures of anthropomorphism that rely on a question asked of the participant are, therefore, susceptible to pragmatic bias.
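A toy sketch of how responses in such an ambiguous-digit paradigm could be coded; the coding rule and the responses are hypothetical:

```python
# Hypothetical coding for the ambiguous-digit paradigm described above:
# the digit reads "9" from the participant's side and "6" from the
# agent's side, so answering "6" is coded as adopting the agent's
# perspective. The responses below are invented.
responses = ["9", "6", "9", "6", "6", "9", "9", "6"]

took_agent_perspective = [r == "6" for r in responses]
rate = sum(took_agent_perspective) / len(responses)
print(f"spontaneous perspective-taking rate: {rate:.0%}")  # 50%
```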

General Methodological Limitations
Anthropomorphism is a complex notion, now widely studied. However, studies conducted with robots employ a highly heterogeneous methodology [227] in terms of the choice of the measure of anthropomorphism (see Section 5.2.2), the type of robot used, the way it is presented, and the type of interaction proposed (see Tables 1 and 2). We present below some recommendations to address these potential issues.
We have seen that the means of evaluating anthropomorphism are themselves extremely varied and subject to criticism. They are based mostly on questionnaires, explicit and subjective measures whose validity has been questioned [16], notably because they are likely to be affected by social desirability bias or by the pragmatics of language: participants would answer questions according to the expectations they infer from the experimenter, rather than according to their spontaneous attributions toward the robots (e.g., [26,88,92,93,96]). We recommend the use of implicit measurement methods (mainly behavioral, such as non-verbal paradigms or social dilemmas), which would provide a more accurate measure of this phenomenon (e.g., [16,40,65,217,228]).
A wide variety of robots have been used in the literature, and the diversity of their appearances makes it difficult to generalize results from one robot to another. Some studies do not specify the type of robot used (e.g., [76,80,170,196]) or use a robot created by the research team itself (e.g., [60,70,71,173,194,225]). A first step toward standardization is provided by the Anthropomorphic roBOT (ABOT) database, which references a human-likeness score for 251 robots based on the ratings of 1000 participants. The scores seem coherent with the taxonomy we presented earlier [38], since the human-likeness of the abstract robot (LEGO MINDSTORMS) is judged low (15.92/100), that of the iconic robot (NAO) moderate (45.92/100), and that of the humanoid robot (SOPHIA) high (78.88/100) [235]. The iconic robot NAO might be a good choice because it has human characteristics, and can thus enjoy the benefits of anthropomorphism, without falling into the uncanny valley: it does not cause discomfort in children aged 3-18 y.o. [226] or in adults [219].
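A small sketch of how the ABOT scores cited above can be mapped onto the low/moderate/high bands of the taxonomy; the cut-offs are illustrative assumptions, since the database publishes continuous scores rather than bands:

```python
# ABOT human-likeness scores (0-100) for the three robots cited above.
abot_scores = {
    "LEGO MINDSTORMS": 15.92,  # abstract robot
    "NAO": 45.92,              # iconic robot
    "SOPHIA": 78.88,           # humanoid robot
}

def likeness_band(score: float) -> str:
    """Illustrative cut-offs; the ABOT database itself provides
    continuous scores, not predefined bands."""
    if score < 33:
        return "low"
    if score < 66:
        return "moderate"
    return "high"

for robot, score in abot_scores.items():
    print(f"{robot}: {score}/100 -> {likeness_band(score)} human-likeness")
```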
Moreover, participants are sometimes exposed to robots through images or videos rather than in vivo. Yet, the physical presence of a robot (NURSEBOT) increases the tendency toward anthropomorphism more than a projection of the robot on a screen [74], and a physically present robot is considered more trustworthy [94]. Another study reports that a physically embodied robot (AIBO) is evaluated more positively when participants can touch it; conversely, when touching the robot (APRIL) is prohibited, individuals rate interactions with a non-embodied robot more positively than with a physically embodied one [47]. According to a meta-analysis, a physically present robot is perceived more positively than a robot presented on a screen (AIBOT [236]), and this mode of presentation allows better performance in a puzzle-solving task (KEEPON [237]). However, in another study, a robot presented in vivo was not better accepted than a robot presented via a screen [238]. The different measures used (intentional vs. behavioral) could explain, at least in part, these discrepancies. Thus, it is difficult to settle this issue at this time.
The human appearance of a physically present (embodied) robot positively influences both subjective and objective measures of anthropomorphism, whereas the human appearance of a non-embodied robot (represented as an image, for example) has positive effects only on subjective measures [239]. The impact of the robot's appearance thus depends on how it is presented to participants. Generalizing results obtained with a non-embodied robot to interactions with an embodied robot could lead to overestimating the impact of human likeness on subjective measures and underestimating its effect on objective measures [239].
Similarly, studies that do not involve the physical presence of a robot often present robots as photos rather than videos (e.g., [26,65,69,84,141,228]). However, it seems that children appreciate robots more when they are presented in video form rather than in pictures [84], as observing the robot's behavior would help them understand its intentions. Thus, data collection through online studies (e.g., [65,69,141]) does not allow for conclusions about in vivo interactions conducted in the real world. Other studies do not use a robot at all to study anthropomorphism (for example, avatars and conversational agents, e.g., [9,12,37,44,46,110,179]). Overall, experimental conditions are rarely ecological, as most studies are carried out online or in the laboratory rather than in vivo (e.g., [9,39,49,65,69,70,94,136,141,153,228]). Studying anthropomorphism without actually placing the participant in a situation of interaction with a robot can distort the results obtained. Furthermore, the duration of interaction with the robots varies considerably across studies: the most common duration is 30 min per session (15-30 min on average, with few studies exceeding 60 min, and the longest lasting over 120 min) [229]. Overall, interactions with the robot are concentrated in a short period of time (often a single interaction, the duration of which is not always specified), and few studies are conducted over the long term [229].
We recommend referring to the ABOT database to systematically specify the human-likeness score of the robots used. The variability of the robots used should also be reduced, so that the results obtained can be generalized. As we have seen in this review, a moderately human-like appearance is preferable, due to the uncanny valley phenomenon. We also recommend privileging long-term, in vivo interactions with physically embodied robots, in order to approach, as much as possible, the real conditions of human-robot interactions.
In addition to the heterogeneity of the studies, there are methodological limitations that make the generalization of results questionable. A meta-analysis [219] noted that most studies use small sample sizes (the median of the 49 studies included was 21 participants) composed predominantly of young participants and students (the median age was 25 y.o.). As a consequence, it is not easy to draw conclusions about non-student adults, as age affects the way adults perceive robots [94,158]. The authors also point to the lack of methodological rigor in a substantial number of studies, which omit crucial information about participants (for example, their age or nationality, factors known to influence anthropomorphism). Future studies should use a rigorous and standardized methodology; a recent paper advised selecting larger sample sizes and reiterated the importance of greater transparency about the detailed characteristics of the sample [219].
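To give the sample size recommendation a concrete anchor, here is a minimal a priori power analysis sketch using statsmodels; the effect size, alpha, and power values are illustrative assumptions:

```python
# Minimal a priori power analysis for a two-group comparison,
# illustrating why samples of ~21 participants are often too small.
# The effect size, alpha, and power below are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # assumed medium effect (Cohen's d)
    alpha=0.05,       # conventional significance level
    power=0.80,       # conventional target power
)
print(f"required participants per group: {n_per_group:.0f}")  # ~64
```

Under these assumptions, roughly 64 participants per group would be needed, about three times the median sample size reported above.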
This lack of methodological consensus leads to divergent results, which do not allow conclusions to be drawn on certain factors. For example, neither the effect of the robot's voice on anthropomorphism nor the effect of the robot's autonomy on acceptance can be determined from the literature, as the results are contradictory. Concerning the robot's voice, one study showed no differences in children [37] but a preference for a human voice over a robotic voice in adults [110]. The discrepancy may be due to the age of the participants: children, who anthropomorphize robots more than adults do, may be less likely to perceive the difference between voices. Concerning the robot's autonomy, the mixed results are more difficult to interpret: two studies showed no effect of robot autonomy on acceptance [149,151], while another showed that robot autonomy had a positive effect on the trust attributed to the robot, but a negative effect on perceived social presence [150].
Is Anthropomorphizing a Robot Even a Good Thing?
Some ethical considerations could be highlighted regarding the anthropomorphization of robots (see [240] for a review).
Firstly, anthropomorphism does not have only positive consequences. A user who attributes human capabilities to a robot may experience negative outcomes (e.g., negative emotions may be triggered by the anthropomorphization of the agent, as in the uncanny valley). The discrepancy between human expectations and actual robot abilities (in non-transparent experimental situations) may provoke disappointment and frustration in users [241]. This limitation is particularly important for vulnerable individuals, such as people with schizophrenia [242], individuals with ASD [180], or older adults [243]. Even if avatars and robots are generally well accepted by these populations [242,244], the consequences of their interactions with robots could impact their social relationships. This is why some studies ensure that the robot is not perceived as a human [242].
Secondly, the induction of anthropomorphism may also be questioned. Although it is common in psychology studies to manipulate participants' representations, this manipulation could have serious consequences. Indeed, we have seen that the robot's design and the way it is presented convey social cues with the aim of facilitating social interactions. This encourages user anthropomorphism, fostering both the top-down and bottom-up inferences mentioned above [65]. However, these social cues mimic human behavior [245] without any actual associated mental states. This mismatch between the perceived and actual capacities of robots amounts to deception, which could have detrimental effects on users [241].
It, therefore, seems important for researchers to ask themselves why they have chosen to anthropomorphize robots, and whether this is really necessary. Indeed, the main reason to foster anthropomorphism is to trigger behaviors that would allow an interaction similar to that with a human agent. Is the goal, then, to replace humans? If so, serious consideration should be given to whether this replacement brings more benefits than detrimental effects for the end user and for society as a whole.

Conclusions
As we have seen in this review, human individuals generally tend to attribute human characteristics to robots. Despite the wide methodological differences observed in the literature, we can argue that several factors influence anthropomorphism: robotic, situational, and human factors. The main effects are summarized in Table 4.
We have seen that many studies focus on the appearance of robots to explain the tendency of individuals to attribute human characteristics to them. However, other robotic factors also influence how robots are perceived: in addition to its appearance, a robot's voice, behavior, and movement quality modulate the way it is perceived. Moreover, the context of the interaction plays a crucial role. Taking into account situational factors related to the interaction setting (anthropomorphic framing, the role held by the robot, its autonomy, and the frequency of interaction) and human factors related to the user (age, gender, personality, culture, and others) seems, therefore, essential in the study of anthropomorphism.
Robotic factors are mostly related to the design of the robot: anthropomorphism varies according to the appearance, voice, behavior, and movement quality of the robot. Anthropomorphizing robots (i.e., giving them a human-like form, which encourages individuals to anthropomorphize them) generates three types of effects: (i) effects on the quality of interaction, (ii) effects on the perception of the robot, and (iii) effects on the attribution of mental states. The robot's human appearance facilitates interactions [36,246], especially by increasing fun [75], engagement [55,79,247], and cooperation during the interaction [4,22,80].
Anthropomorphization also impacts the perception of robots. They are more appreciated, deemed more believable [22], and more intelligent [73,75]. Finally, humans show more empathy toward these robots [65,81], adopt their point of view more [26], and attribute more mental abilities to them [17,34,68,78]. Children also attribute more cognitive skills to human-like robots [67,77]. Nevertheless, robots that strongly resemble humans can cause feelings of discomfort in users; this is a phenomenon called the uncanny valley [45,212]. Individuals value these robots less [73,215] and rate them as less trustworthy [218]. In children aged 9 and older, the sense of discomfort induced by highly human-like robots is similar to that of adults [69,226], but the age of emergence of the phenomenon remains unclear. Among robotic factors, the elements that also play a crucial role are the voice, the behavior, and the movement quality. Individuals cooperate more with a robot (NAO) when its voice expresses emotions [113] and report more enjoyment from a robot with a human voice [95,109,110]. The robot is also more appreciated when it shows friendly behavior [36], animated behavior [98], and interactive behavior [100,102], as well as natural movements [105]. Robotic factors are, therefore, very important in human-robot interactions since they affect acceptance and anthropomorphism. However, other factors also have an impact on anthropomorphism: situational and human factors.
Situational factors refer to the way the robot is presented in the interaction situation. Anthropomorphism varies according to the framing of the interaction, the role of the robot, the frequency and duration of the interaction, and the degree of autonomy. When a robot is presented in a human way (for example, by giving it a name or a personal story), individuals show more empathy and indulgence toward it [65,137] and find it more attractive [139,143]. A robot that is assigned mental abilities is rated as more socially intelligent [147] and more trustworthy [140,144], and individuals show an increased desire to interact with it [136]. Children also trust a robot (NAO) more when it appears to have human psychological capabilities [40,202]. The role filled by the robot also modulates its perception. Adults are satisfied to have a robot clean their house, but not to have it cook for them [162] or pray [69]. Children appreciate robots more as companions or when they support their learning [160]. The frequency of interaction also plays a role in anthropomorphism, as repeated interactions with a robot increase its likability [153,154] but reduce anthropomorphism toward it [39]. Furthermore, a robot perceived as autonomous is more likely to be anthropomorphized by children than a remotely operated robot [148,152]. Regarding the impact of perceived autonomy on acceptance, the results are more mixed, since an autonomous robot increases feelings of trust (compared to a teleoperated robot) but decreases feelings of social presence [150].
The human factors concern the user themselves. A wide inter-individual variability is observed in anthropomorphism [5,42], depending on the individual's age, gender, personality, culture, previous experience with technology, education level, level of social isolation, and developmental type. Age-related differences are noted in the literature. Children are more likely to attribute human characteristics to robots than adolescents and adults [69,72,77,170,176]. In adults, a robot of the opposite gender to the user would be rated as more trustworthy, credible, and engaging than a robot of the same gender [195]. Children aged 5 to 8 prefer a robot of the same gender as themselves, while children aged 9 to 12 report no particular preference [170]. User personality also plays a role in anthropomorphism. Individuals with a high need for cognition attribute fewer human characteristics to robots and show more positive attitudes. Conversely, individuals with a high need for prediction attribute more anthropomorphic traits to robots and express more negative attitudes [45]. The perception of the robot also varies according to the culture of the user. A robot is perceived more positively in Korea than in Spain [181]; the attitude toward it is more positive in the USA than in Mexico [153]. Finally, other user factors impact anthropomorphism: The tendency to anthropomorphize seems to decrease with education [200] and experience with technology [112]. Individuals with the highest expectations toward robots tend to drop out of long-term interactions [154]. Social isolation would induce a more positive evaluation of robots [47], particularly human-like robots [48]. Developmental type is also an important factor. For example, children with ASD would have difficulty interpreting the mental states of a robot, unlike typically developing children [180].
Several theoretical frameworks have attempted to explain the nature of anthropomorphism, such as the mere appearance hypothesis [26] or the SEEK theory [5]. On the one hand, according to the mere appearance hypothesis, which is a context-free model, the robot's appearance would activate processes similar to those involved in human-human interactions, through a stimulus generalization mechanism. This theory is based solely on the robot's appearance, yet we have seen that other robotic factors also have an impact on anthropomorphism, such as the robot's voice, its behavior, and the quality of its movements.
The SEEK theory, on the other hand, takes the context of interaction into account. According to it, anthropomorphism would be a way for individuals to explain a robot's behavior in the most accessible and economical way possible, in order to satisfy their need to predict the environment while satisfying their desire for social contact. Thus, the mere appearance hypothesis is mostly based on robotic factors related to the robot's design, whereas the SEEK theory focuses on situational and human factors, including some factors we described in this paper (the frequency of interaction, the participant's social isolation, and personality). Nevertheless, this review highlighted other situational factors (anthropomorphic framing, the robot's role, and autonomy) and human factors (age, gender, culture, education level, prior experience with technology, and developmental type) that impact the acceptance of robots and anthropomorphism toward them, and that are not directly mentioned in the SEEK theory. We will see below that the impact of these factors could still potentially be explained by this theory.
Concerning situational factors, the SEEK theory may explain the impact of an anthropomorphic framing, the robot's role, and perceived autonomy by the accessibility of anthropocentric knowledge. Presenting the robot as a human (with a first name, a personal story) or as having a human social role makes human-related knowledge more accessible to the user, resulting in increased anthropomorphism. The same process could occur for robotic factors. Moreover, the behavior of a robot perceived as acting autonomously is more unpredictable for the participants, which increases their motivation to explain the agent's behavior and, therefore, their tendency to anthropomorphize it.
Concerning human factors, age modulates anthropomorphism. This effect may be linked to the experience with technology gained with age: the more experience individuals have with robots, the more they acquire a model for explaining behavior specific to this ontological category. In the same way, culture has an impact on experience with technology, as robots are more widely used in certain countries. The effect of gender we described may be explained by gender differences in human-human interactions that would be applied to robots. Since individuals use models based on interaction with humans to interact with robots, it is not surprising to find these differences in human-robot interactions. Likewise, the differences in interaction linked to developmental type observed in human-human interactions can be applied to human-robot interactions: individuals who have difficulty interpreting the mental states of a human will have the same difficulty interpreting the mental states of a robot.
In conclusion, there are several theories to explain anthropomorphism, but they should take greater account of the other factors highlighted in this review to enable the most exhaustive possible conception of anthropomorphism. The SEEK theory seems to be the most consistent with the results observed in the literature since it includes the majority of the factors cited. Indeed, the psychological determinants involved in this theory (i.e., the accessibility of anthropocentric knowledge, the motivation of individuals to understand the behavior of other agents, and to create social links) can be impacted by all the factors we have listed in this review.
Although we have observed that many contextual factors can be explained by the SEEK theory, certain questions remain unanswered and, thus, require further research. First, the same result can be interpreted differently by authors depending on their conception of anthropomorphism. For instance, some authors consider the perception of a robot as teleoperated to be evidence of anthropomorphism (i.e., individuals would judge the robot's behavior as resembling that of a human) [124], whereas in other studies, a robot perceived as not acting on its own reflects a lesser degree of anthropomorphism (since it is attributed less free will and agentivity) [9]. In this review, we have seen that when participants are informed that the robot is teleoperated, they attribute fewer mental states to it; inferring greater anthropomorphism when people declare that they believe the robot is teleoperated may, therefore, be an incorrect interpretation. Second, we observed similarities in people's behavior toward humans and robots (particularly in terms of differences in gender, personality, culture, and developmental type), but more studies are needed to determine whether interactions with non-human agents involve the same socio-cognitive mechanisms as interactions with humans (see Section 3.3). Some neuroimaging studies suggest that they do, implicating mechanisms such as the ToM [55,61]. However, others suggest that the ToM would not be necessary for anthropomorphism [51,52] but would only serve as a way to describe the situation. We could argue that the ToM may be involved in human-robot interactions regardless of robot appearance, since humans attribute mental states to non-human-like robots, albeit to a lesser extent than to human-like robots [33,34]. This highlights the importance of context in mental state attributions. Nevertheless, given the difficulty of differentiating between strong and weak anthropomorphism on the basis of self-reported measurements, it is complex to ensure that mental states are actually attributed to robots.
These shortcomings emphasize the need to revise the theories explaining anthropomorphism, and the means used to measure it, in order to better understand the phenomenon. It is also important to bear in mind the methodological limitations identified in this field of research because (1) they limit the interpretation of the results obtained, and (2) they prevent the results from being generalized. We recommend a precise description of the samples (age, gender, nationality) and of the robot used as these characteristics can have an impact on the interaction with the robots. The context in which the interaction takes place (the way the robot is presented, the role and autonomy assigned to it, and the duration and frequency of interaction) must also be taken into account when analyzing results. Researchers should be careful to rely on implicit measures of anthropomorphism, which are more objective, in order to circumvent the potential biases of explicit measures. In particular, implicit measures are less likely to be affected by pragmatic factors, and can, therefore, measure participants' spontaneous attributions more accurately.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:
SEEK: sociality, effectance, and elicited agent knowledge
ToM: theory of mind
ASD: autism spectrum disorder