Article

A Framework to Study and Design Communication with Social Robots

1 Faculty of Psychology, Ruhr University Bochum, 44801 Bochum, Germany
2 Department of Psychology and Ergonomics, Technische Universität Berlin, 10099 Berlin, Germany
* Authors to whom correspondence should be addressed.
Robotics 2022, 11(6), 129; https://doi.org/10.3390/robotics11060129
Submission received: 31 August 2022 / Revised: 4 November 2022 / Accepted: 5 November 2022 / Published: 15 November 2022
(This article belongs to the Special Issue Communication with Social Robots)

Abstract

Communication is a central component in social human–robot interaction that needs to be planned and designed prior to the actual communicative act. We therefore propose a pragmatic, linear view of communication design for social robots that corresponds to a sender–receiver perspective. Our framework is based on Lasswell’s 5Ws of mass communication: Who, says what, in which channel, to whom, with what effect. We extend and adapt this model to communication in HRI. In addition, we point out that, besides the predefined communicative acts of a robot, other characteristics, such as a robot’s morphology, can also have an impact on humans, since humans tend to assign meaning to every cue in robots’ behavior and appearance. We illustrate the application of the extended framework to three different studies on human–robot communication to demonstrate the incremental value as it supports a systematic evaluation and the identification of similarities, differences, and research gaps. The framework therefore offers the opportunity for meta-analyses of existing research and additionally draws the path for future robust research designs for studying human–robot communication.

1. Introduction

“A Social Robot Cannot Not Communicate.” (L. Kunold & L. Onnasch)

The preceding quote (which arose during the preparation of this article) is based on communication theorist Paul Watzlawick’s axiom “One cannot not communicate”, one of five principal axioms Watzlawick proposed in his theory of communication [1]. What Watzlawick tried to stress with this grammatically odd-sounding statement is that in human communication any perceivable behavior, even the absence of action, has the potential to be interpreted as a communicative act. This highlights the importance of the receiver of communication: it is the receiver’s perception and interpretation that is key to the communicative process, not necessarily the sender’s intention. This means that even if a person does not intend to send a message through verbal or non-verbal behavior, another person might still interpret the (absence of a) behavior as such.
Watzlawick’s theory was originally formulated to understand communication and misunderstandings within human families. However, we believe the same is true for communication with social robots.
So-called “social robots” can be recognized by their social interface [2] that allows them to communicate naturally with humans in a verbal or nonverbal manner. For these types of robots, communication is a core function (or capability) that is central to human–robot interaction (HRI). Social robots, such as Aldebaran’s Pepper, for instance, are used in therapeutic settings for cognitive stimulation via small talk or little games [3,4]. Moreover, in public places, such as train stations, social robots are deployed as a next generation of traveler services, using robotic heads that are supposed to take over customer service [5]. Regarding these examples, it is quite clear that the design of the robot’s speech (verbal channel) as well as the choice of its voice and the gestures that accompany the conversation (nonverbal channel) affect how human interaction partners might experience such an encounter. What is less clear is the influence of additional aspects on the communication with and the perception of robots, such as the situational context or other characteristics of the robot that are not directly associated with the communicative task. For example, what if the robot’s sensors detect movement at the ceiling or behind the robot and the robot gazes upward or turns away while talking to a person? Is such behavior interpreted as intentional? If so, is it interpreted technically, e.g., as a response to sensor input, or socially, e.g., as rudeness? What if a robot loses its Wi-Fi connection and stops talking? Such silence can also be interpreted, in terms of “not not communicating”, in various ways (e.g., technical error, intentional silence, keeping a secret, etc.).
Communication also plays an important role in functional robots that come into contact with humans: Imagine a robot vacuum cleaner, for instance. Although the core function of the robot is cleaning, it nevertheless needs channels to communicate certain states, such as “being ready to start” or “running out of battery”, via, e.g., auditory or visual feedback. These communicative signals can be intentionally designed so that humans (hopefully) understand them correctly, whereas an inactive robot vacuum cleaner that is stuck under the couch and doing nothing also communicates something (e.g., that it is in trouble). This is a communicative act that is established exclusively by the interpretation of the observer and was not intended by the robot or its designers and programmers. Further, single-arm industrial robots, even those that are not designed to work with humans, communicate from the human observer’s point of view, implicitly by performing an action, or explicitly when an error occurs or maintenance is required (via sounds, lights, or messages).
As the above examples illustrate, robots entering our workplaces and everyday life also enter our social environment, in which communication (intended or not) is unavoidable. Effective and efficient functioning implies that robots need to adapt to this environment, i.e., they need to become social to the extent that they use communication in a way that humans can understand [6]. This further means that robot developers should not only design communicative acts for robots but should also consider that other variables in the communication process (e.g., visible cues in the robot’s design or the absence of communication) affect interaction outcomes.
Human-centered approaches to designing communication skills for robots pave the way for this adaptation as they (1) build on social scripts that have already been learned in human–human interaction and are therefore intuitive to humans, (2) offer opportunities for mutual cooperation between humans and robots, which resembles interaction rather than “just” operating a machine, and (3) explicitly introduce communication concepts into HRI and do not leave it to chance how a robot’s actions are interpreted.
In accordance with the aforementioned importance of communication in HRI, an extensive body of research already exists on communication with robots: Studies have investigated what people talk about with robots [7] or how a robot should talk to humans [8,9]. Klüber and Onnasch [10], for example, showed that people prefer a robot that uses natural speech for communication instead of sounds or text output. Hoffmann et al. [11] revealed that robot errors do not negatively impact liking, trust, and acceptance if a robot uses warm and human-like language. Furthermore, studies have shown that appropriate vocal prosody has positive effects in terms of empathy [12] and the perception of social abilities [13]. Focusing on sound and noise as non-verbal communication in HRI, Joosse et al. [14] demonstrated that intentional noise, instead of constant noise, accompanying a robot’s approach led to more positive attitudes and helped communicate the robot’s goals to the user. Others have focused on non-verbal communication in HRI such as color and motion [15] or gestures [16]. With regard to the latter, Admoni et al. [17] have detailed a robot behavior model that accounts for top-down and bottom-up features of an HRI scene to decide when and how a robot should perform deictic references such as looking or pointing to improve task performance in collaboration with humans. The model adds to the growing body of research on the use of artificial gaze behavior to communicate a robot’s subsequent actions [18,19,20]. Further research addresses the data-driven generation of communicative acts from the robot to the human and the understanding of communication input such as natural speech [21,22]. For instance, Janssens and colleagues [23] proposed a data-driven method to generate situated conversation starters for HRI based on visual context. The model enables a robot to record visual data of the interacting human and to generate appropriate greetings to initiate an interaction. Engaging people in such small talk with robots is a promising approach to increase the naturalness and social character of the artificial conversational agent.
As the previous studies exemplify, researchers have already provided substantial knowledge on how to build language models for robots, choose the preferred mode of communication, design robot voices that are pleasant, or determine which content robots should communicate at all. However, a drawback that the aforementioned studies point to is that “communication” is an umbrella term for different perspectives and constructs. These variations and the lack of definitions complicate the comparison of results and the identification of relevant factors. This leads to a very unstructured state of the art and makes it difficult to draw distinct interpretations of experimental findings regarding the impact and design of communication in HRI. In consequence, insights often remain at the level of single use cases, from which it is difficult to derive design recommendations and/or predictions about the impact of a robot’s communication on humans. Here, it is particularly important to understand the interplay of factors such as the embodiment of a robot, the selected communication channel, and the addressed target group in which communication occurs and is assessed. Each individual factor can influence the outcome of communication and should therefore be taken into account both when evaluating study results and when designing communication for robots.
To support knowledge accumulation, to enable the formulation of design guidelines for communication in HRI, and to make research gaps visible, a meta-perspective is needed that allows a systematic classification of HRI communication research. Therefore, we propose a framework that uses the questions introduced by Lasswell in his theory of human communication (Who, says what, in which channel, to whom, with what effect) [24] to identify crucial variables in the design of communication for social robots, focusing on the intended effects of a robot’s communication on humans. We assume that the framework is useful, first, to structure existing knowledge on the impact of communicative cues and, second, to design such cues based on the intended effects of communication. This contributes to a more structured view of communicative acts for social robots, including a discussion of what can be regarded as communication and which channels are available and designable for a specific type of robot in a specific situation. The framework further supports a deeper understanding of the variety of factors affecting how communicative acts by social robots are perceived by humans. Finally, our framework helps in interpreting and comparing existing work on communication effects by identifying variables that influence the outcome, such as variations in the source, the message, the channel, or the receiver of communication.
In the following, we start by establishing common ground by defining social robots and communication in order to build the scope for the framework and its implications. We then refer to existing approaches to study communication, with and without robots, before we turn to the communication framework for social robots that we propose as an adaptation of traditional communication theory to HRI.

2. Definitions

2.1. Social Robots

In line with related research, we define social robots as embodied machines capable of interacting with humans in social contexts, which requires these robots to follow social, interpersonal norms and rules [25,26].
Social robots have already entered various settings in which they engage in social interactions, e.g., public spaces, private home environments, healthcare, and educational settings. Figure 1 depicts such examples. Although these robots differ with respect to tasks and appearance, their main distinguishing feature is their ability to communicate with humans using verbal (e.g., small talk, jokes, sentence complexity) or non-verbal behaviors (e.g., gestures, eye movements, voice quality, or even body temperature) that often resemble those of humans or animals. In consequence, these behaviors are interpreted by humans as cues to the sociality of the counterpart, so-called “social cues” (e.g., [27]). Some authors therefore emphasize that even if fluent, human-like communication between humans and robots is desirable, the machine-like nature of a robot must remain recognizable to be ethically acceptable [6].

2.2. Social Robot’s Communication

In a very basic understanding, communication between humans and robots can be understood as a dyadic process that includes a sender, a message, and a receiver [28]. Although communication represents a mutual process between at least two agents (e.g., [29]), communicative acts of robots need to be designed in advance of the actual interaction. Therefore, we argue that a linear understanding of communication, as in classical transmission or sender–receiver models (e.g., [28]), is the most appropriate starting point for communication design, as we manipulate the robot as the source of communication (sender) and examine the effects on a human as the receiver. In this sense, messages between humans and robots are exchanged through communication channels that allow for input and output [30]. Here, input describes how information is transmitted to the robot (e.g., through speech or touch), and output describes how the robot communicates with the human. Both input and output can include visual, auditory, and tactile channels, or in the words of interpersonal communication research: verbal and nonverbal behaviors. In HRI, these are typically visual displays, gestures, natural language, and physical interaction [31].
Based on these definitions and assumptions, we define a social robot’s communication as a designable output channel, verbal or nonverbal, that actively needs to be designed to achieve intended consequences (e.g., user engagement, coordination, trust) and to prevent unintended consequences (e.g., a social interpretation of non-social robot actions that might disturb the intended interaction). From this perspective, a robot’s communication is understood as an intentionally designed behavior. However, we also acknowledge that humans have a strong tendency to assign meaning to everything they perceive [32,33], including cues and behaviors that were not designed as such (e.g., Pepper’s posture in power-saving mode, bending forward, can be interpreted as “sad” or “intimidated”).
Conclusively, social robots, like humans, “cannot not communicate” [1] and designers of social robots’ communication should take all perceivable cues of a robot as well as the human audience into account when estimating communication effects (e.g., visible cables, an emergency stop button, the color with which an LED lights up, generally the morphology of a robot) [34].

3. Peculiarities of Communication with Social Robots

Communication with social robots is unique in contrast to other forms of digitalized communication because it enables interactions close to interpersonal encounters (see Table 1). Social robots allow for reciprocal communication (sending and receiving messages) and are typically co-located in humans’ physical environment. Furthermore, communication with social robots typically happens in real time while being sensitive and reactive to human behavior and the environment (temporal proximity). In addition, they allow for physical contact as they can touch humans, manipulate objects, and be touched by humans (bodily contact; compare the categorization in [35]). In contrast to physically embodied social robots, speech assistants, such as Alexa or Siri, can also engage in reciprocal interaction, and their voice can be perceived as co-located, but visual cues and the possibility of bodily contact are missing.
The same applies to virtual conversational agents. Whereas the communication can be categorized as reciprocal, the spatial proximity differs from that of physically embodied robots [36,37]. Robots’ physical embodiment in particular allows for a broad range of communicative acts, verbal as well as nonverbal. While disembodied agents such as speech assistants rely only on verbal communication, and virtually embodied conversational agents can add visible nonverbal cues such as gestures or gaze but no physical contact, social robots offer the full range of interaction possibilities: reciprocal, co-located, real-time, and embodied. Table 1 summarizes the possibilities of communication with different agents and technologies and again highlights the multifaceted communicative abilities of social robots, which closely resemble actual interpersonal communication (first and last rows).

4. Views on Communication

4.1. Communication Models in HRI

Most communication models that were specifically developed for HRI stress the reciprocity of communication and therefore address both entities separately, the robot and the human. For instance, Banks and De Graaf [29] have proposed an agent-agnostic transmission model that rejects the exclusive assignment of communicative roles (sender, message, channel, receiver) to traditionally held agents (typically humans). It instead focuses on evaluating agents according to their functions in order to consider which roles they hold in communication processes. In doing so, technological artefacts like robots can become sender and receiver, too, and are no longer limited to fixed roles within the transmission process.
In a similar line of thought, de Visser and colleagues have proposed an integrative model for trust in human–robot teams [38]. Although the model addresses communication only implicitly, it shares Banks and De Graaf’s [29] idea that interacting robots and humans can be equal agents, as both contribute to what de Visser and colleagues call relationship equity, an emotional resource that predicts the degree of goodwill between two actors and their subsequent behavior [38].
Frijns and colleagues [31] address the question of sender and receiver as well, but draw contrary conclusions. Their main argument is that in HRI or in interaction with any other technical artifact, communication can never be a symmetric process with regard to capabilities, components, and processes that are agent-inherent. Accordingly, they propose an asymmetric interaction model which differentiates (a) the human with regard to information analysis and action selection processes, (b) the situation including aspects of the interaction like interfaces but also the physical environment and interactional consequences such as a (situational) common ground, and (c) the robotic system in relation to (technical) information processing and action execution functions.
What the aforementioned models have in common is the reciprocal character of communication, highlighted by dynamic agent roles [29] or explicit feedback loops [31,38]. From a design perspective, the inclusion of feedback loops in social HRI represents a challenge, as humans are hard to predict in terms of perceptions, attitudes, affect, and behavior due to the enormous variety and complexity of human reactions. Accordingly, it seems reasonable to first think about human–robot communication as a discrete process instead of a dynamic and reciprocal one. Considering a robot’s communicative acts in a discrete and linear way makes it possible to identify relevant robot design aspects that facilitate communication for humans. Traditional transmission models of communication provide a good starting point as they represent unidirectional processes of communication (see Figure 2).

4.2. Transmission Models of Communication

The mathematical theory of communication proposed by Shannon and Weaver [28], originally introduced to optimize message exchange via telecommunication, depicts communication as a linear process in which messages are transferred from a source (on the left) to a destination (on the right, Figure 2). The information source in the model is the sender of a message, who decides what should be communicated. The message is then sent to the receiver through a transmitter and a channel. The transmitter changes the message into a signal that allows for transfer through a certain communication channel. Finally, the receiver decodes the signal back into a message. The focus of Shannon and Weaver’s model was to identify factors that lead to misunderstandings, which can be caused by noise in signal transmission and result in incorrect decoding. Regarding communication with robots, the robot can be regarded as the sender that starts the communication based on specific events, such as having arrived at a certain destination or having encountered an obstacle on its path that triggers asking for help. The robot’s verbal and nonverbal channels are the channels through which communicative signals are transmitted to the human interaction partner (receiver), who decodes the robot’s behavior, i.e., assigns meaning to it. Ambiguities in the encoded signal or other unintended communicative cues can affect whether the message is successfully transmitted as intended or not.
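To make this mapping concrete, consider the following minimal sketch of the transmission chain applied to a robot asking for help (our illustration in Python; the stage names follow Shannon and Weaver’s model, while the function names and the noise handling are hypothetical simplifications):

```python
# Minimal sketch of the Shannon-Weaver transmission chain in HRI.
# All names are illustrative; a real system would replace the stages
# with speech synthesis, acoustic transmission, and human perception.
import random

def encode(message: str) -> str:
    """Transmitter: turn the message into a signal for the chosen channel."""
    return f"<speech>{message}</speech>"

def transmit(signal: str, noise_level: float) -> str:
    """Channel: ambient noise may corrupt parts of the signal."""
    if random.random() < noise_level:
        return signal.replace("stuck", "...")  # a word is drowned out
    return signal

def decode(signal: str) -> str:
    """Receiver: the human assigns meaning to whatever signal arrives."""
    text = signal.removeprefix("<speech>").removesuffix("</speech>")
    return "ambiguous utterance" if "..." in text else text

# Information source: an event (an obstacle on the path) triggers the act.
message = "I am stuck, please help me."
print(decode(transmit(encode(message), noise_level=0.3)))
```

Depending on the noise, the receiver either recovers the intended message or is left with an ambiguous utterance, which is exactly the kind of decoding failure the model is meant to locate.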
A strength of this linear view is that it focuses on core communicative functions independent of agent type. However, with respect to HRI, we assume that characteristics of the agent (here: the social robot) affect how a communicative act is perceived by humans and further restrict the possible forms of communication (e.g., a robot without facial features cannot display facial expressions). Hence, we propose to consider another theory that shares the unidirectional process view, namely Lasswell’s 5W model of mass communication [24,39].
Similar to the transmission model, Lasswell’s model on mass communication [24] describes communication in a process-based linear way following five subsequent questions: Who (source) says What (message) in Which way (channel) to Whom (receiver) with What effect (Figure 3)?
An important addition of this model compared to Shannon and Weaver’s [28] is that Lasswell highlights the outcome of communication (‘What effect’). Effect in this context implies a change in the receiver caused by the message. This change has to be observable and measurable, and its extent depends on different elements within the communication process. The model can therefore be used as a helpful template for designing communication studies because it encourages consideration of the entire process, including independent variables (manipulable cues/behaviors), moderators (characteristics of the source and the receiver), and dependent variables (effects) in a study design. Furthermore, Lasswell’s focus on mass communication implies that communication is not spontaneous but planned. The model thus regards the communicative act as something that must and can be designed before the actual communication takes place, which is quite similar to what we assume for the design of communication for social robots.
Some researchers have criticized such models for their rigidity and linearity, which seem to lack the mutual character of communication [40]. However, while this is an appropriate argument for more complex communicative acts between humans, linear models seem suitable in their simplicity to systematize and understand the communicative process between humans and robots. In such scenarios, we usually do not focus on the human (because of the huge variety of human reactions) but on the robot as the agent that can be manipulated by design. A lot of studies in HRI research therefore focus on how a robot should communicate and how it and its messages are perceived by the human counterpart. This view does not take the mutual character of communication into account; however, this can be neglected because the design of a social robot’s communication is in focus (i.e., unidirectional communication from the robot to the human).
Taken together, we suggest applying a linear view of communication for the intentional and planned design of social robots’ communication based on the questions introduced by Lasswell [24]. With some adaptations and extensions, we believe that Lasswell’s approach has the potential to provide a profound basis for formulating design guidelines and making research gaps visible by offering a systematic classification of HRI communication research.

5. Our Application of Lasswell’s Theory to HRI

We adapted Lasswell’s five questions to build a framework that is useful for classifying communication with social robots (i.e., all robots that communicate with humans). The questions address exactly those variables that have an impact on the effect of a designated communicative act (Figure 4; please note that our application of Lasswell’s questions to communication with social robots should be regarded as an initial, not exhaustive, template to categorize communication research and the effects of social robots’ communication in HRI; extensions are highly welcome). Based on the framework, it is possible to analyze communication between robots and humans in a structured manner by answering the questions proposed by Lasswell [24]. It allows for systematic comparisons of existing (and upcoming) communication research in HRI to identify design and communication characteristics (e.g., human-like/machine-like language, use of lights, or movement trajectories) that determine what effect a robot’s communicative behavior has on humans, which further affects whether communication is perceived, understood, trusted, and accepted. Moreover, the questions and the framework make it possible to tailor a social robot’s communicative acts in order to achieve a certain effect on the human agent’s side (e.g., transparency/understanding, trust, and cooperation). It is therefore a useful tool for retrospective and prospective research and design questions. Figure 4 illustrates the adapted framework.
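To illustrate how the framework could be operationalized for coding communicative acts and studies, the following sketch represents each act as one record answering the adapted questions (a hypothetical Python structure of our own; the field names follow the categories in Figure 4 and do not represent a published coding scheme):

```python
from dataclasses import dataclass

@dataclass
class CommunicativeAct:
    """One communicative act classified along the adapted 5W framework.
    Values are free text or controlled vocabularies chosen by the analyst."""
    who: str                # source: robot morphology and role
    what_content: str       # message content: interactional/relational, instrumental, other
    what_quality: str       # message quality: humanlike, machinelike, hybrid
    channel_form: str       # verbal or nonverbal
    channel_modality: str   # auditory, visual, or tactile
    to_whom: str            # receiver characteristics (age, attitudes, expertise)
    effect: str             # intended or observed effect: task-related or social
    context: str            # application domain, e.g., service, education, healthcare
    evaluation_method: str  # e.g., laboratory HRI, field study, video-based online study
```

Coding existing studies into such records would make the comparisons discussed in Section 6 (identical receivers, differing channels, missing effect measures) a matter of comparing fields.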

5.1. The Source or the “Who” in HRI

In our framework, the source is supposed to be a physically embodied robot, although the framework is also applicable to other artificial entities that allow for communication (e.g., virtual conversational agents or speech assistants). If we consider embodied robots alone, they can fulfill different roles depending on the interaction context, and they can take a wide variety of forms, mainly representing anthropomorphic, zoomorphic, or technical morphologies [30]. The consideration of robot role and morphology is crucial as it shapes the subsequent communication process. Rau et al. [41], for example, revealed that a robot’s role affects people’s active response and engagement. In this study, participants had higher active response ratings when robots had a social role as teachers or tour guides compared to a robot acting as a security guard. Addressing appearance, Babel and colleagues further showed that people complied more with a technical-looking robot than with a humanlike robot when it requested priority over an elevator using the same verbal commands [42]. They conclude that politeness norms might be triggered more by an anthropomorphic robot design than by a technical appearance. Commands by an anthropomorphic robot might therefore be interpreted as impolite, whereas they seem to match a mechanical appearance.
Contrary findings about the communication of a particular message may hence be due to the appearance or role of a robot. It is therefore important to consider the entire robot to identify potential communicative cues that might shape the interaction and the primary message that is intended to be transferred to the human counterpart. Zhong et al. [43], for example, aimed to investigate whether voice determines robot likeability. For this purpose, they had participants evaluate two distinct robots (Pepper by SoftBank Robotics, Joey by Jinn-Bot Robotics & Design). Their results demonstrated that the robot with the more natural-sounding voice (here: Pepper) was liked more in general than the robot with a monotonous, machinelike voice (Joey). However, it is difficult to claim that the voice of the robot alone caused more or less likability, since two different robots with different morphological features were compared. In order to test for main effects of the robot’s voice, the appearance should be kept constant.
A robot’s morphology further determines which output channels are available and can be manipulated by design (e.g., gaze can only be designed for robots with a head and/or facial features). Visual morphological features have already been shown to elicit expectations about a robot’s capabilities [44]. Moreover, the presence of visible features can trigger expectations with regard to communicative capabilities: For instance, robots with facial features are expected to be able to communicate verbally and understand natural language input [45]. Hence, the mere presence of features such as eyes communicates meaning. If these features are not carefully designed and/or their communicative potential is ignored, unfavorable or unintended interpretations and reactions might occur [33]. Onnasch and colleagues, for example, showed that robot eyes that are purely decorative on an industrial robot lead to reduced trust in the robot and can even be detrimental in terms of performance, as they might distract humans from actual task fulfillment [46,47].

5.2. The Message or the “What” in HRI

What should be communicated can be decided based on the goals of communication. In the interpersonal context, message-centered definitions differentiate (a) interaction management, (b) relationship management, and (c) instrumental goals of communication [48]. Interaction management goals include, for example, initiating or ending a conversation, tailoring a message to an audience, and managing the impression one gives of oneself. Relational goals include building, maintaining, and restoring a relationship, and finally, instrumental goals include functions such as compliance, support, or entertainment [48].
Transferred to communication with robots, an instrumental goal of communication can be to fulfill a task, e.g., by giving advice. In contrast, a relational goal of a robot’s communication can be to establish a bond, e.g., through self-disclosure or reciprocity (e.g., [49]). An interaction management goal of a robot could be, for instance, to start a conversation (e.g., [23]) or to be trusted and liked by a person. This can be achieved through the use of politeness, flattery, or humor. Table 2 exemplifies these goals of communication in relation to the communicative act and intended effects.
With respect to the content of a message (see Figure 4), the examples show that a separation of interactional and relational content is difficult; we therefore decided to collapse messages that address these dimensions into one category. In conclusion, message content can thus be separated into “interactional/relational”, “instrumental”, and, if neither is applicable, “other” content.
Besides the content of the message, the question of how a message is communicated (quality, Figure 4) should also be considered. Whether communicative signals should be designed naturally (e.g., human-like), artificially (e.g., machine-like), or something in between (hybrid) is a much-discussed question (e.g., [6]). Correspondingly, research on communication in HRI often contrasts humanlike and machinelike communication designs. At the extremes, a robot could either speak in a natural, humanlike way using spoken natural language and a humanlike voice, or it could communicate in a machinelike way via command language displayed as text on a screen. However, several other combinations are possible to design communication with different qualities.
For example, related research showed that changes at the sentence/word level (e.g., humanlike: “Hello, my name is Pepper, what’s your name?” versus machinelike: “System up, person recognized. Enter name…”), while keeping the voice and output channel constant, affect the likeability of, trust in, and intention to use a humanoid robot in a service context [11].
Both the content and the quality of a message should hence be explicated when communication in HRI is analyzed or designed.
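As a compact illustration, both dimensions of the “What” can be captured as controlled vocabularies that could slot into a coding scheme such as the sketch in Section 5 (the labels follow the categories above; the enum structure itself is our hypothetical choice):

```python
from enum import Enum

class MessageContent(Enum):
    INTERACTIONAL_RELATIONAL = "interactional/relational"
    INSTRUMENTAL = "instrumental"
    OTHER = "other"

class MessageQuality(Enum):
    HUMANLIKE = "humanlike"      # e.g., natural spoken language, humanlike voice
    MACHINELIKE = "machinelike"  # e.g., command language as text on a screen
    HYBRID = "hybrid"            # combinations in between

# e.g., a greeting designed as natural small talk:
greeting = (MessageContent.INTERACTIONAL_RELATIONAL, MessageQuality.HUMANLIKE)
```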

5.3. The “Channel” in HRI

In HRI, the communication channel can be defined based on the communication form and output modality (Figure 4 above). By form we mean whether a message is transmitted verbally (using spoken or written language) or nonverbally (using body language). Moreover, the channel can be categorized depending on the output modality: According to Bonarini [34], robots’ use of different output modalities, i.e., auditory, visual, and tactile, addresses the equivalent human channels, i.e., hearing, sight, and touch. Examples of possible combinations of communication form and output modality are summarized in Table 3.
For example, when a robot communicates with its user via text messages on a tablet screen, the channel can be categorized as verbal and visual, because language is conveyed through words on a visible screen. In contrast, if a robot uses sound, such as non-linguistic utterances (e.g., beeps) like R2D2, the channel can be categorized as nonverbal (no words involved) and auditory. In addition, nonverbal aspects of speech, such as prosody, can also be categorized as nonverbal and auditory.
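This two-way categorization can be summarized in a small lookup (our illustration; the example entries paraphrase the cases just described and Table 3-style combinations, not an exhaustive taxonomy):

```python
# Channel = (form, modality). Entries follow the examples in the text.
CHANNEL = {
    "text on a tablet screen":      ("verbal",    "visual"),
    "spoken natural language":      ("verbal",    "auditory"),
    "non-linguistic beeps (R2D2)":  ("nonverbal", "auditory"),
    "prosody of speech":            ("nonverbal", "auditory"),
    "gestures and gaze":            ("nonverbal", "visual"),
    "touch on a person's arm":      ("nonverbal", "tactile"),
}

form, modality = CHANNEL["text on a tablet screen"]
print(form, modality)  # verbal visual
```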
The use of verbal communication by a robot depends on its ability to produce language output, e.g., through a text-to-speech module and speakers. In contrast, the use of nonverbal communication mainly depends on morphological features, e.g., eyes to gaze, a face to display facial expressions, or hand-like features to point to an object or touch a human. The role and consequences of a robot’s morphology for human interaction partners have already been stressed under “Who” (see Section 5.1).

5.4. The Receiver or the “to Whom” in HRI

The addressee of the robot’s communication in the model is defined as the human interaction partner, which can also be a group of people; for the sake of simplicity, however, we use a single individual to exemplify the question. Note that the model can also be reversed to describe communication from a human (who) to a robot (whom), but this is out of the scope of this paper.
Individual characteristics of the person, such as age [50], attitudes, anxiety, and prior experiences with robots [51], can influence how the communication is perceived. Birmingham et al. [52] observed, for example, that participants with more negative attitudes towards robots evaluated empathetic communication from a humanoid robot differently from participants with less negative attitudes. Those with negative attitudes preferred cognitive expressions more than affective empathetic expressions from a robot.
Hancock et al. [53,54] further differentiate ability-based factors that also have to be considered. For instance, a person’s expertise with regard to the task can influence which communication might be appropriate. For highly trained experts, a short goal-based message might be more instrumental, and perhaps even a robot’s gaze might be sufficient, whereas novices might need additional explanations or should even be guided step by step through the task by the robot. Hence, the “to whom” must be considered when designing communicative cues for a certain audience.

5.5. The Communication “Effect” in HRI

According to our model, the answers to the aforementioned questions (who, says what, how, to whom) determine the effect that a robot’s communication will have on humans. Effects can be differentiated into intentional effects (effects following a design decision) and unintentional effects, the latter summarizing interpretations of robots’ appearance or behavior as communication although not designed as such, because robots, too, cannot not communicate. Unintentional effects can, but do not have to, be unfavorable. An example of an unfavorable effect is the transfer of occupational stereotypes to HRI through robot appearance or voice. For example, Goetz et al. revealed that female-looking robots were more strongly associated with social roles, such as a drawing instructor or an actress, than male-looking robots [55]. Although other studies have not found that prevailing occupational gender stereotypes apply to robots [56,57], this should still be considered (or at best avoided) as a possible confounding factor when designing communication in HRI (e.g., appearance or gendered voice).
Furthermore, communication effects can be categorized according to their communication goals, i.e., task-related or social. Task-related effects help in accomplishing a task, e.g., by eliciting awareness, attention, transparency, or actions as responses to communication. Social effects help foster a favorable impression of the robot and of the interaction experience, measured by the evaluation of the robot, positive emotional reactions, and cooperative behaviors.
Communication goals have to be aligned with effect measure categories when planning studies on HRI. For example, when task-related coordination is the communication goal, behavioral outcome measures might be more informative with regard to goal achievement than perceptual measures (e.g., likeability) that only indirectly affect human–robot coordination [58].
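As a sketch of this alignment (our suggestion following the reasoning above, not a prescription from the cited work), a goal-to-measure lookup for study planning might look as follows:

```python
# Illustrative mapping from communication goal to effect-measure categories:
# behavioral measures for task-related goals, perceptual/affective (plus
# cooperative-behavior) measures for social goals.
MEASURES_BY_GOAL = {
    "task-related": ["task performance", "reaction time", "compliance"],
    "social": ["robot evaluation (e.g., likeability)", "emotional reactions",
               "cooperative behavior"],
}

for goal, measures in MEASURES_BY_GOAL.items():
    print(f"{goal}: {', '.join(measures)}")
```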

5.6. Additional Important Variables

The adaptation of Lasswell’s questions to communication with robots builds the core framework to systematize and design communicative acts from robots to humans. From a practical perspective, these core questions further have to be embedded into the context in which HRI takes place. Communication needs a referential frame that provides meaning but also constraints for the interaction. From a research perspective, another variable that has to be considered is the evaluation method used to conduct research in HRI. How insights on HRI are obtained plays a crucial role in interpreting results and drawing conclusions with regard to generalizability and transferability from single case studies. Both variables represent the bracket of our framework (see Figure 4).
In general, we understand as context everything that relates to the application domain in which a social robot is deployed. Examples include the service domain, therapeutic settings, and the education, healthcare, or entertainment domains. All of these have different requirements for communication within HRI. For example, several studies have already shown that anthropomorphic robot appearance and communication are beneficial in social task settings in which the interaction is (part of) the task goal itself (as in therapeutic settings) but might not be appropriate in industrial settings in which people expect to interact with tools instead of robotic teammates [59,60]. On the other hand, machine-like communication might be inappropriate in service and social settings in which people expect robots to adhere to certain social norms and to become part of the social network [11,61]. The context therefore has to be considered as a guardrail for the design of communication and interaction. Robots that are used in educational settings have to be tailored to the specific needs of children and to the robot’s role in these settings. The DragonBot developed at MIT, for instance, represents a tutoring robot that was designed as a peer that learns together with the child [62]. The success of this robot in tutoring and learning lies in creating a playful and motivating environment that is also suitable for long-term interactions [63]. Although robots in service environments should engage people in interaction, too, that context imposes different requirements. First of all, the service domain often represents a multi-person environment, which implies several challenges for communication, such as identifying the receiver or coping with the general noise level. In such unstructured environments with groups of people, unexpected social dynamics can emerge that also have to be considered for a robot implementation. For example, several studies have shown the potential for robot abuse by groups of children in public spaces like shopping malls. This implies specific requirements for the robot and communication design (which are in stark contrast to the requirements for robots in educational settings), e.g., robust materials, random and fast robot movements and trajectories (to be able to escape), as well as visual and behavioral feedback according to the physical force exerted on the robot [64,65,66]. Whereas all of this is not part of the communication design, which might focus on task-related aspects such as motivating people to buy a particular product or providing information on where to find certain shops, it has to be considered for the overall success of HRI.
In addition, the context can also determine the appropriateness of the communication channel. Verbal speech communication by a robot might be suitable in home settings but not in service environments with high noise levels. In the latter case, text-based verbal communication and/or visual communication via color or light might be more efficient, as it demands cognitive resources other than auditory input and can therefore be processed by the human counterpart more easily and unobtrusively [67]. Speech communication might also be inappropriate in multi-person settings, such as care facilities or hospitals, as some messages can contain personal data, such as medication plans, that should not be shared with bystanders (e.g., when the robot’s task is to remind a person to take their medication). In such cases, again, more indirect communication via text could be more suitable.
Another aspect determined by the context is the depth of HRI. Whereas in service domains communication between a robot and a human typically consists of only short encounters (e.g., a robot greeting a customer), other contexts require long-term interactions, e.g., the use of robots as companions in home settings. Requirements for short-term interactions with robots are comparable to requirements for walk-up-and-use products: They should be intuitive and easy to use for everyone without additional knowledge [68,69]. Although these requirements are also desirable for long-term interactions, the latter pose far more complex challenges for HRI. To engage people over an extended period of time, communication needs to be personalized [70]. Robots for long-term interaction need to develop a pervasive robot memory that goes beyond the mere passive storage of (symbolic) semantic information and enables the adaptation of communication over time and experience [71,72].
The second variable we added to the core framework of communication with social robots is the evaluation method, which describes by which means a social robot’s communication is researched. The choice of evaluation method is crucial for the generalizability and transferability of results. For example, Kunold et al. [73] revealed inconsistent findings in a conceptual, video-based replication of a laboratory experiment on nonverbal communication (here: affective touch). The discrepancies demonstrate that not all observation-based research results can be transferred to actual HRI experiences without restrictions. Moreover, there is a growing body of research indicating that physically embodied robots are sometimes perceived differently than virtual two-dimensional agents and have different effects depending on the task and interaction context [36,74,75]. These differences can partially be explained by variations in capability attributions depending on the agents’ embodiment (i.e., virtual or physical; [76]), but also by morphological differences [44]. Therefore, whether results on communication with robots stem from embodied HRI (laboratory vs. field studies) or other kinds of stimulus presentation (e.g., textual, audio or video recordings, simulated interactions in virtual environments) might further alter observable effects.
Considering these different forms of how communication is presented could further help to answer the question of whether observed effects are unique to embodied robots’ communication or are mere communication effects. The latter would imply that the same message triggers equal effects regardless of the source of communication and would support the approach of agent-agnostic communication models [29].
Furthermore, the choice of evaluation method also has a significant impact on the choice of outcome variables (Section 5.5). Studies on communication with robots that use video or audio recordings played to participants reduce the possible outcome variables to the perception of the robot and evaluative or affective consequences. The assessment of behavioral measures, such as task performance or cooperative behavior, would not be feasible due to method constraints. Another difference between such studies and physical HRI studies is that robot performance in video or audio recordings is often a perfected representation and lacks other stimuli, such as the noises caused by movements of the robot [77], the ventilation of the built-in processor (which might lead to a far more technical perception of the robot), or the impression of social presence evoked by the physical presence of a robot. Attempts to draw general conclusions from such studies about actual interactions should therefore be made with caution.

6. Example Application of the Framework to Existing Communication Studies

To demonstrate the anticipated use of the framework, we chose three exemplary works that investigated communication with social robots and structured the studies according to the framework. We chose one example of a study on a verbal form of communication [52] and two on nonverbal forms of communication [78,79] (see Figure 5).
The study by Birmingham et al. [52] investigated differences in viewers’ perceptions of cognitive and affective empathetic statements made by a robot in response to human disclosure. Participants perceived the robot that made affective empathetic statements as more empathetic than the robot that made cognitive empathetic statements. Additionally, results revealed that participants with more negative attitudes toward robots were more likely to rate the cognitive condition as more empathetic than the affective condition.
Castro-González and colleagues [78] investigated how the combination of bodily appearance and movement characteristics of a robot alters people’s attributions of animacy, likability, trustworthiness, and unpleasantness. The study showed that naturalistic motion was judged to be more animate than mechanical motion, but only when the robot resembled a human form. Naturalistic motion improved likeability regardless of the robot’s appearance. Finally, the robot with a human form was rated as more disturbing when it moved naturalistically.
The third study investigated the effect of robot touch on the robot’s perception/evaluation, affect, and behavior [79]. Results demonstrated that participants reacted to robot touch by smiling and laughing. Furthermore, participants who were touched by the robot complied significantly more frequently with a request posed by the robot during conversation and reported better feelings compared to those who were not touched. Touch had no effects on subjective evaluations of the robot or on the interaction experience.
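To show how such a study maps onto the framework, consider a hypothetical coding of the touch study [79] as a plain record (the field values paraphrase the description above and reflect our reading, not the original authors’ own coding):

```python
# Hypothetical framework coding of the robot-touch study [79].
touch_study = {
    "who":               "humanoid social robot, physically embodied",
    "what":              "constant verbal message (content not varied)",
    "channel":           "nonverbal tactile (touch) vs. verbal auditory only",
    "to_whom":           "healthy adult participants",
    "effect":            "affect (smiling/laughing), compliance, robot evaluation",
    "context":           "conversation scenario in the laboratory",
    "evaluation_method": "live laboratory experiment with an embodied robot",
}
```

Coding the other two studies [52,78] in the same way makes the following comparison systematic.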
The categorization of the studies according to our framework reveals similarities and differences between them. What is striking at first is that although all studies concern social communication with robots, they vary different aspects of communication. Birmingham et al. [52] at first glance implemented a very clear-cut design as they varied only one aspect, i.e., the content of the message, which was either a cognitive empathetic or an affective empathetic statement. However, if we look at the method in detail, we find that nonverbal expressions of the robot (i.e., gestures) were additionally included in the interaction. Based on our framework, it can be assumed that both aspects are relevant influencing factors and may have caused effects other than the pure variation of verbal expressions.
Hoffmann and Krämer [79], on the other hand, kept the message constant but varied whether the robot additionally communicated via nonverbal touch or used only verbal speech communication. Castro-González et al. [78] chose to vary two aspects that impact communication: First, they varied aspects of the sender (two-arm humanlike body vs. single-arm mechanical body), and second, the channel of communication (movement: humanlike smooth or mechanistic path). Furthermore, they introduced verbal communication via speech into the experimental setup but did not vary it. This already indicates a very complex design, which always has the potential for confounding or unintended effects. For example, the fact that trustworthiness was unaffected by the experimental manipulations in the latter study could be due to the fact that, before the experiment started, the robot welcomed participants, instructed them via speech about the following game, and supported the game flow via verbal turn-taking. This very strong social cue might have overridden other, more subtle effects such as movement. As speech is based on complex cognitive processes in humans, it might have been a primary source for participants’ trustworthiness evaluations of the robot. It would therefore be interesting to replicate the study without the robot’s verbal communicative component. This also applies to the study by Hoffmann and Krämer [79], which focused on the manipulation of nonverbal communication (i.e., affective touch from the robot to the participants) but did not investigate the verbal statements that accompanied touch. Accompanying humanlike/sympathetic statements such as “I understand” can strongly impact the perception of a robot and its perceived warmth and competence, and should hence be investigated in more detail in the future.
Furthermore, the categorization of the studies into the framework reveals that the receiver has not been defined in great detail, except by Birmingham et al. [52]. It is therefore difficult to make statements about the generalizability of the findings or to compare the effects of the communication on the receiver. To enable such statements, a first step would be to encourage researchers to provide more information on the receiver beyond age and gender.
What is also apparent from the comparison is that only the study by Hoffmann and Krämer [79] included different levels of effect measure categories, i.e., perceptual evaluations, affective measures, and behavioral measures. Behavioral measures in particular are underrepresented in social HRI, which might also be due to the fact that a huge body of research is conducted via (online) questionnaires that rely on observations of photo or video stimuli, making the introduction of behavioral measures difficult or impossible.
In favor of this method, the effects studied often focus on the perceived liking of the robot, which can easily be captured by self-report measures and does not necessarily require a live encounter with a robot. However, this does not take into account that the impression people develop without real (live) contact with a robot does not necessarily correspond to the experience in real interaction. Moreover, it does not elaborate on the consequences of such judgments, e.g., how liking and sympathy will affect individuals’ behavior in real interaction. Merely asking “How would you behave in the situation?” is difficult here, since many individuals lack previous experience with a real robot and their anticipated reaction might differ from actual behavior (e.g., [80]). To strengthen results, multi-method approaches are desirable. We therefore strongly welcome further efforts to assess the effect of communication in HRI in a multifaceted way on different levels. This brings us to another category of the framework that is of interest with regard to the comparison of studies: the evaluation method. Two of the three studies investigated communication in a real-life laboratory experiment with an embodied robot [78,79]. In contrast, Birmingham et al. [52] used video recordings of scripted HRI with actors, which participants had to evaluate. This reveals an important difference between the studies. Whereas in the laboratory studies participants were the actual interaction partner and receiver of communication, participants in the online study had a passive, third-person perspective and evaluated the interaction from the meta-perspective of an observer. The transfer of results to actual HRI settings therefore has to be made with caution (for a discussion on the comparability of live interaction and observation, see [73]). This calls for further research to validate results.

7. Summary

In this paper, we have argued that communication is a central interaction component in social HRI. In doing so, however, we observed that it is an overly broad term that needs to be defined more precisely to gain a deeper understanding of the implications of various forms of robot communication. To support our arguments, we first defined what we mean by communication and social robots before presenting different views on communication from HRI and interpersonal research. We also explained that we understand a social robot’s communication to be a designable property. Hence, similar to mass communication, it needs to be designed and implemented in advance to achieve a certain effect on the human side. Accordingly, we propose a pragmatic, linear view of communication design for social robots. This view corresponds to a sender–receiver perspective as advocated by Shannon and Weaver [28]. We then turned to Lasswell’s 5Ws of mass communication [24], which we believe precisely address the critical variables in communication design for social robots, namely: Who, says what, in which channel, to whom, with what effect. We applied this interpersonal theory to HRI and built a framework around the questions proposed by Lasswell. Moreover, we added the context and evaluation method as crucial variables that should be considered when communication between humans and social robots is designed and studied. The resulting framework should help to better systematize existing and future communication research in HRI. In addition, we pointed out that besides the predefined communicative acts of a robot, other characteristics, such as a robot’s morphology or doing nothing, can also have an impact on humans. This is because humans tend to assign meaning to every cue in robots’ behavior and design. Conclusively, social robots ‘cannot not communicate’. To derive design guidelines for social robots’ communication, static characteristics of the robot should also be considered when analyzing effects in HRI. Especially features that do not serve a communicative function (e.g., eyes that do not “see” but are attached as a decorative feature; [46]) or features that do not meet human expectations (e.g., vision sensors placed elsewhere than in a robot’s eyes) can disrupt intended communication.
With regard to the frequently discussed question of how communicative acts should be designed in detail (e.g., natural versus artificial, human-like versus machine-like; e.g., [6]), more research is needed to decide which form of design is better suited to and contributes to a specific communicative goal. Based on the framework, it can be assumed that different styles should be designed appropriately for different contexts, target groups, and intended goals of the communication.
The exemplary application of the framework to three different studies on communication with social robots demonstrated its incremental value, as it supports the identification of similarities (e.g., the receivers in all studies were healthy adults, and all variations mainly targeted social effects as outcome variables) as well as differences (e.g., different robots and different forms and channels of communication were used) between studies on communication. In addition, research gaps become apparent concerning confounding factors in existing research, such as accompanying robot behaviors, e.g., gestures (e.g., [52]) and speech (e.g., [79]). A simplified, programmatic illustration of such a comparison follows below.
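Continuing the sketch above, study descriptions could be instantiated and compared programmatically. The encodings below are deliberately simplified, hypothetical paraphrases of the categories discussed in the text, not data taken from the original studies:

```python
# Reuses the CommunicationDesign class from the sketch above.
lab_study = CommunicationDesign(
    sender="embodied social robot",
    message="affective/persuasive communicative acts",
    channel="verbal speech + nonverbal behavior",
    receiver="healthy adults (live interaction partners)",
    effect="social evaluations of the robot",
    context="laboratory, live encounter",
    evaluation="self-report and behavioral measures",
)

online_study = CommunicationDesign(
    sender="embodied social robot (shown on video)",
    message="scripted empathetic statements",
    channel="verbal speech",
    receiver="healthy adults (third-person observers)",
    effect="social evaluations of the robot",
    context="online, video recordings of scripted HRI",
    evaluation="self-report questionnaires",
)

def differing_categories(a: CommunicationDesign,
                         b: CommunicationDesign) -> list[str]:
    """List framework categories in which two study encodings diverge."""
    return [name for name in a.__dataclass_fields__
            if getattr(a, name) != getattr(b, name)]

print(differing_categories(lab_study, online_study))
# e.g., ['sender', 'message', 'channel', 'receiver', 'context', 'evaluation']
```

Even this toy comparison surfaces the point made above: the two designs share the intended effect but differ in almost every other category, which limits the comparability of their results.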

8. Limitations

As we have already mentioned, communication has many facets, and thus there are many different approaches and theories to model it. Consequently, our overview of perspectives on communication and existing theories is not exhaustive. We are aware that linear models of complex human communication in particular have been widely criticized [40]. Nevertheless, we consider it legitimate to take this view from a design perspective in order to investigate specific effects in HRI. Even if robots in the future learn dynamically which behavior leads to a desired effect on the counterpart, the possible behaviors must be defined and implemented in advance. Given the current state of the art, most behaviors of social robots will need to be defined in advance by humans, and structured knowledge of the effects of certain communicative acts on humans is helpful for exactly this purpose.

9. Conclusions

Much research has already been done on the design and effects of social robots' communication. However, deriving conclusions and design guidelines from it is currently difficult. We therefore propose a framework to help researchers and designers systematize the state of the art. We believe that such a systematic analysis of existing work will help to better understand existing research (e.g., explain contradictory findings) and, in addition, provide a toolkit for developers and designers to build robots based on their intended effects on humans (e.g., a robot that communicates in a way that is understood by elderly people). We have demonstrated how the framework can be applied to existing research (Section 6); future work should use it more extensively to compare studies. The framework thus offers the opportunity for meta-analyses of existing research and additionally charts the path for future robust research designs.

Author Contributions

Both authors contributed equally to the conceptualization and writing of this article. Conceptualization, L.K. and L.O.; formal analysis, L.K. and L.O.; investigation, L.K. and L.O.; methodology, L.K. and L.O.; project administration, L.K. and L.O.; resources, L.K. and L.O.; validation, L.K. and L.O.; visualization, L.K. and L.O.; writing—original draft, L.K. and L.O.; writing—review & editing, L.K. and L.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Watzlawick, P.; Bavelas, J.B.; Jackson, D.D. Menschliche Kommunikation; Verlag Hans Huber: Berne, Switzerland, 1969.
2. Hegel, F.; Muhl, C.; Wrede, B.; Hielscher-Fastabend, M.; Sagerer, G. Understanding Social Robots. In Proceedings of the 2009 Second International Conferences on Advances in Computer-Human Interactions, Cancun, Mexico, 1–7 February 2009; pp. 169–174.
3. Ros, R.; Espona, M. Exploration of a Robot-Based Adaptive Cognitive Stimulation System for the Elderly. In Proceedings of the HRI ’20: ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; pp. 406–408.
4. Castellano, G.; De Carolis, B.; Macchiarulo, N.; Pino, O. Detecting Emotions During Cognitive Stimulation Training with the Pepper Robot. In Springer Proceedings in Advanced Robotics; Springer Nature: Berlin/Heidelberg, Germany, 2022; Volume 23, pp. 61–75.
5. Deutsche Bahn’s Multilingual Travel Assistant—Furhat Robotics. Available online: https://furhatrobotics.com/concierge-robot/ (accessed on 29 August 2022).
6. Sandry, E. Re-Evaluating the Form and Communication of Social Robots. Int. J. Soc. Robot. 2015, 7, 335–346.
7. Mirnig, N.; Weiss, A.; Skantze, G.; Al Moubayed, S.; Gustafson, J.; Beskow, J.; Granström, B.; Tscheligi, M. Face-to-Face with A Robot: What Do We Actually Talk About? Int. J. Hum. Robot. 2013, 10, 1350011.
8. Crumpton, J.; Bethel, C.L. A Survey of Using Vocal Prosody to Convey Emotion in Robot Speech. Int. J. Soc. Robot. 2016, 8, 271–285.
9. Fischer, K.; Jung, M.; Jensen, L.C.; Aus Der Wieschen, M.V. Emotion Expression in HRI—When and Why. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Daegu, Korea, 22 March 2019; pp. 29–38.
10. Klüber, K.; Onnasch, L. Appearance Is Not Everything—Preferred Feature Combinations for Care Robots. Comput. Human Behav. 2022, 128, 107128.
11. Hoffmann, L.; Derksen, M.; Kopp, S. What a Pity, Pepper! How Warmth in Robots’ Language Impacts Reactions to Errors during a Collaborative Task. In Proceedings of the HRI ’20: Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; pp. 245–247.
12. James, J.; Watson, C.I.; MacDonald, B. Artificial Empathy in Social Robots: An Analysis of Emotions in Speech. In Proceedings of the RO-MAN 2018—27th IEEE International Symposium on Robot and Human Interactive Communication, Nanjing, China, 27–31 August 2018; pp. 632–637.
13. Klüber, K.; Onnasch, L. Affect-Enhancing Speech Characteristics for Robot Communication. In Proceedings of the 66th Annual Meeting of the Human Factors and Ergonomics Society, Atlanta, GA, USA, 10–14 October 2022; SAGE Publications: Los Angeles, CA, USA, 2022.
14. Joosse, M.; Lohse, M.; Evers, V. Sound over Matter: The Effects of Functional Noise, Robot Size and Approach Velocity in Human-Robot Encounters. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Bielefeld, Germany, 3–6 March 2014; pp. 184–185.
15. Löffler, D.; Schmidt, N.; Tscharn, R. Multimodal Expression of Artificial Emotion in Social Robots Using Color, Motion and Sound. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 334–343.
16. Riek, L.D.; Rabinowitch, T.-C.; Bremner, P.; Pipe, A.G.; Fraser, M.; Robinson, P. Cooperative Gestures: Effective Signaling for Humanoid Robots. In Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Osaka, Japan, 2–5 March 2010; IEEE: Osaka, Japan, 2010; pp. 61–68.
17. Admoni, H.; Weng, T.; Hayes, B.; Scassellati, B. Robot Nonverbal Behavior Improves Task Performance in Difficult Collaborations. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Christchurch, New Zealand, 7–10 March 2016; pp. 51–58.
18. Onnasch, L.; Kostadinova, E.; Schweidler, P. Humans Can’t Resist Robot Eyes—Reflexive Cueing with Pseudo-Social Stimuli. Front. Robot. AI 2022, 72.
19. Boucher, J.-D.; Pattacini, U.; Lelong, A.; Bailly, G.; Elisei, F.; Fagel, S.; Dominey, P.F.; Ventre-Dominey, J. I Reach Faster When I See You Look: Gaze Effects in Human–Human and Human–Robot Face-to-Face Cooperation. Front. Neurorobot. 2012, 6, 1–11.
20. Wiese, E.; Weis, P.P.; Lofaro, D.M. Embodied Social Robots Trigger Gaze Following in Real-Time HRI. In Proceedings of the 2018 15th International Conference on Ubiquitous Robots (UR), Honolulu, HI, USA, 26–30 June 2018; pp. 477–482.
21. Arora, A.; Fiorino, H.; Pellier, D.; Pesty, S. Learning Robot Speech Models to Predict Speech Acts in HRI. Paladyn 2018, 9, 295–306.
22. Liu, P.; Glas, D.F.; Kanda, T.; Ishiguro, H. Data-Driven HRI: Learning Social Behaviors by Example from Human-Human Interaction. IEEE Trans. Robot. 2016, 32, 988–1008.
23. Janssens, R.; Wolfert, P.; Demeester, T.; Belpaeme, T. “Cool Glasses, Where Did You Get Them?” Generating Visually Grounded Conversation Starters for Human-Robot Dialogue. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Sapporo Hokkaido, Japan, 7–10 March 2022; pp. 821–825.
24. Lasswell, H.D. The Structure and Function of Communication in Society. Commun. Ideas 1948, 37, 136–139.
25. Fong, T.; Nourbakhsh, I.; Dautenhahn, K. A Survey of Socially Interactive Robots. Rob. Auton. Syst. 2003, 42, 143–166.
26. Bartneck, C.; Forlizzi, J. A Design-Centred Framework for Social Human-Robot Interaction. In Proceedings of the 2004 IEEE International Workshop in Robot and Human Interactive Communication (RO-MAN 2004), Kurashiki, Japan, 20–22 September 2004; IEEE: Okayama, Japan, 2004; pp. 591–594.
27. Feine, J.; Gnewuch, U.; Morana, S.; Maedche, A. A Taxonomy of Social Cues for Conversational Agents. Int. J. Hum. Comput. Stud. 2019, 132, 138–161.
28. Shannon, C.E.; Weaver, W. The Mathematical Theory of Communication; University of Illinois Press: Champaign, IL, USA, 1949.
29. Banks, J.; De Graaf, M.M.A. Toward an Agent-Agnostic Transmission Model: Synthesizing Anthropocentric and Technocentric Paradigms in Communication. Hum.-Mach. Commun. 2020, 1, 19–36.
30. Onnasch, L.; Roesler, E. A Taxonomy to Structure and Analyze Human–Robot Interaction. Int. J. Soc. Robot. 2021, 13, 833–849.
31. Frijns, H.A.; Schürer, O.; Koeszegi, S.T. Communication Models in Human–Robot Interaction: An Asymmetric MODel of ALterity in Human–Robot Interaction (AMODAL-HRI). Int. J. Soc. Robot. 2021, 13, 1–28.
32. Nass, C.; Moon, Y. Machines and Mindlessness: Social Responses to Computers. J. Soc. Issues 2000, 56, 81–103.
33. Norman, D.A. The Design of Everyday Things: Revised and Expanded Edition; MIT Press: New York, NY, USA, 2013.
34. Bonarini, A. Communication in Human-Robot Interaction. Curr. Robot. Rep. 2020, 1, 279–285.
35. Zimmer, F.; Scheibe, K.; Stock, W.G. A Model for Information Behavior Research on Social Live Streaming Services (SLSSs); Springer International Publishing: Berlin/Heidelberg, Germany, 2018; Volume 10914, ISBN 9783319914848.
36. Hoffmann, L.; Krämer, N.C. Investigating the Effects of Physical and Virtual Embodiment in Task-Oriented and Conversational Contexts. Int. J. Hum. Comput. Stud. 2013, 71, 763–774.
37. Li, J. The Benefit of Being Physically Present: A Survey of Experimental Works Comparing Copresent Robots, Telepresent Robots and Virtual Agents. Int. J. Hum. Comput. Stud. 2015, 77, 23–27.
38. de Visser, E.J.; Peeters, M.M.M.; Jung, M.F.; Kohn, S.; Shaw, T.H.; Pak, R.; Neerincx, M.A. Towards a Theory of Longitudinal Trust Calibration in Human–Robot Teams. Int. J. Soc. Robot. 2020, 12, 459–478.
39. Lasswell, H.D.; Lerner, D.; Speier, H. Propaganda and Communication in World History; University Press of Hawaii: Honolulu, HI, USA, 1979.
40. Day, R.E. The Conduit Metaphor and the Nature and Politics of Information Studies. J. Am. Soc. Inf. Sci. Technol. 2000, 51, 805–811.
41. Patrick Rau, P.-L.; Li, D.; Li, D. A Cross-Cultural Study: Effect of Robot Appearance and Task. Int. J. Soc. Robot. 2010, 2, 175–186.
42. Babel, F.; Hock, P.; Kraus, J.; Baumann, M. Human-Robot Conflict Resolution at an Elevator—The Effect of Robot Type, Request Politeness and Modality. In Proceedings of the Companion of the 2022 ACM/IEEE International Conference on Human-Robot Interaction, Sapporo Hokkaido, Japan, 7–10 March 2022; pp. 693–697.
43. Zhong, V.J.; Mürset, N.; Jäger, J.; Schmiedel, T. Exploring Variables That Affect Robot Likeability. In Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction, Sapporo Hokkaido, Japan, 7–10 March 2022; pp. 1140–1145.
44. Kunold, L.; Bock, N.; Rosenthal-von der Pütten, A.M. Not All Robots Are Evaluated Equally: The Impact of Morphological Features on Robots’ Assessment through Capability Attributions. ACM Trans. Human-Robot Interact. 2022.
45. Phillips, E.; Zhao, X.; Ullman, D.; Malle, B.F. What Is Human-like?: Decomposing Robots’ Human-like Appearance Using the Anthropomorphic RoBOT (ABOT) Database. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 105–113.
46. Onnasch, L.; Hildebrandt, C.L. Impact of Anthropomorphic Robot Design on Trust and Attention in Industrial Human-Robot Interaction. ACM Trans. Human-Robot Interact. 2022, 11, 1–24.
47. Roesler, E.; Onnasch, L.; Majer, J.I. The Effect of Anthropomorphism and Failure Comprehensibility on Human-Robot Trust. Hum. Factors Ergon. Soc. Annu. Meet. 2021, 64, 107–111.
48. Burleson, B.R. The Nature of Interpersonal Communication: A Message-Centered Approach. In The Handbook of Communication Science; SAGE Publications Inc.: Los Angeles, CA, USA, 2010; pp. 145–166; ISBN 9781412982818.
49. Kory-Westlund, J.M.; Breazeal, C. Exploring the Effects of a Social Robot’s Speech Entrainment and Backstory on Young Children’s Emotion, Rapport, Relationship, and Learning. Front. Robot. AI 2019, 6, 54.
50. Bishop, L.; Van Maris, A.; Dogramadzi, S.; Zook, N. Social Robots: The Influence of Human and Robot Characteristics on Acceptance. Paladyn 2019, 10, 346–358.
51. Naneva, S.; Sarda Gou, M.; Webb, T.L.; Prescott, T.J. A Systematic Review of Attitudes, Anxiety, Acceptance, and Trust Towards Social Robots. Int. J. Soc. Robot. 2020, 12, 1179–1201.
52. Birmingham, C.; Perez, A.; Matarić, M. Perceptions of Cognitive and Affective Empathetic Statements by Socially Assistive Robots. In Proceedings of the HRI ’22: 2022 ACM/IEEE International Conference on Human-Robot Interaction, Sapporo Hokkaido, Japan, 7–10 March 2022; pp. 323–331.
53. Hancock, P.A.; Kessler, T.; Kaplan, A.; Brill, J.C.; Szalma, J. Evolving Trust in Robots: Specification Through Sequential and Comparative Meta-Analyses. Hum. Factors 2020, 63, 1196–1229.
54. Hancock, P.A.; Billings, D.R.; Schaefer, K.E.; Chen, J.Y.C.; De Visser, E.J.; Parasuraman, R. A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction. Hum. Factors 2011, 53, 517–527.
55. Goetz, J.; Kiesler, S.; Powers, A. Matching Robot Appearance and Behavior to Tasks to Improve Human-Robot Cooperation. In Proceedings of the IEEE International Workshop on Robot and Human Interactive Communication, Millbrae, CA, USA, 31 October–2 November 2003; IEEE: Millbrae, CA, USA, 2003; pp. 55–60.
56. Reich-Stiebert, N.; Eyssel, F. (Ir)Relevance of Gender?: On the Influence of Gender Stereotypes on Learning with a Robot. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 6–9 March 2017; pp. 166–176.
57. Roesler, E.; Naendrup-Poell, L.; Manzey, D.; Onnasch, L. Why Context Matters: The Influence of Application Domain on Preferred Degree of Anthropomorphism and Gender Attribution in Human–Robot Interaction. Int. J. Soc. Robot. 2022, 14, 1155–1166.
58. Fischer, K.; Jensen, L.C.; Kirstein, F.; Stabinger, S.; Erkent, Ö.; Shukla, D.; Piater, J. The Effects of Social Gaze in Human-Robot Collaborative Assembly. In Social Robotics; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015; Volume 9388, pp. 204–213.
59. Roesler, E.; Manzey, D.; Onnasch, L. A Meta-Analysis on the Effectiveness of Anthropomorphism in Human-Robot Interaction. Sci. Robot. 2021, 6, eabj5425.
60. Roesler, E.; Onnasch, L. Teammitglied oder Werkzeug—Der Einfluss anthropomorpher Gestaltung in der Mensch-Roboter-Interaktion. In Mensch-Roboter-Kollaboration; Springer Fachmedien Wiesbaden: Wiesbaden, Germany, 2020; pp. 163–175.
61. Fischer, K. Why Collaborative Robots Must Be Social (and Even Emotional) Actors. Techne Res. Philos. Technol. 2019, 23, 270–289.
62. Setapen, A.A.M. Creating Robotic Characters for Long-Term Interaction. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2012.
63. Short, E.; Swift-Spong, K.; Greczek, J.; Ramachandran, A.; Litoiu, A.; Grigore, E.C.; Feil-Seifer, D.; Shuster, S.; Lee, J.J.; Huang, S.; et al. How to Train Your DragonBot: Socially Assistive Robots for Teaching Children about Nutrition through Play. In Proceedings of the IEEE RO-MAN 2014—23rd IEEE International Symposium on Robot and Human Interactive Communication: Human-Robot Co-Existence: Adaptive Interfaces and Systems for Daily Life, Therapy, Assistance and Socially Engaging Interactions, Edinburgh, Scotland, 15 October 2014; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2014; pp. 924–929.
64. Brščić, D.; Kidokoro, H.; Suehiro, Y.; Kanda, T. Escaping from Children’s Abuse of Social Robots. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Portland, OR, USA, 2–5 March 2015; ACM: New York, NY, USA, 2015; pp. 59–66.
65. Yamada, S.; Kanda, T.; Tomita, K. An Escalating Model of Children’s Robot Abuse. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 9 March 2020; pp. 191–199.
66. Ku, H.; Choi, J.J.; Lee, S.; Jang, S.; Do, W. Designing Shelly, a Robot Capable of Assessing and Restraining Children’s Robot Abusing Behaviors. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 161–162.
67. Wickens, C.D. Multiple Resources and Performance Prediction. Theor. Issues Ergon. Sci. 2002, 3, 159–177.
68. Jacucci, G.; Morrison, A.; Richard, G.T.; Kleimola, J.; Peltonen, P.; Parisi, L.; Laitinen, T. Worlds of Information: Designing for Engagement at a Public Multi-Touch Display. In Proceedings of the Conference on Human Factors in Computing Systems, Atlanta, GA, USA, 10–15 April 2010; Volume 4, pp. 2267–2276.
69. Schneidermeier, T.; Burghardt, M.; Wolff, C. Design Guidelines for Coffee Vending Machines. In Proceedings of the Second International Conference on Design, User Experience, and Usability: Web, Mobile, and Product Design, Las Vegas, NV, USA, 21–26 July 2013; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8015 LNCS, pp. 432–440.
70. Ho, W.C.; Dautenhahn, K.; Lim, M.Y.; Du Casse, K. Modelling Human Memory in Robotic Companions for Personalisation and Long-Term Adaptation in HRI. In Proceedings of the 2010 Conference on Biologically Inspired Cognitive Architectures (BICA); IOS Press: Amsterdam, The Netherlands, 2010; Volume 221, pp. 64–71.
71. Baxter, P.; Belpaeme, T. Pervasive Memory: The Future of Long-Term Social HRI Lies in the Past. In Proceedings of the AISB 2014—50th Annual Convention of the AISB, London, UK, 1–4 April 2014.
72. Ahmad, M.I.; Mubin, O.; Orlando, J. Adaptive Social Robot for Sustaining Social Engagement during Long-Term Children–Robot Interaction. Int. J. Hum. Comput. Interact. 2017, 33, 943–962.
73. Kunold, L. Seeing Is Not Feeling the Touch from a Robot. In Proceedings of the 31st IEEE International Conference on Robot & Human Interactive Communication (RO-MAN ’22), Naples, Italy, 29 August–2 September 2022; pp. 1562–1569.
74. Bartneck, C. Interacting with an Embodied Emotional Character. In Proceedings of the International Conference on Designing Pleasurable Products and Interfaces, Pittsburgh, PA, USA, 23–26 June 2003; ACM: New York, NY, USA, 2003; pp. 55–60.
75. Kiesler, S.; Powers, A.; Fussell, S.R.; Torrey, C. Anthropomorphic Interactions with a Robot and Robot-like Agent. Soc. Cogn. 2008, 26, 169–181.
76. Hoffmann, L.; Bock, N.; Rosenthal-von der Pütten, A.M. The Peculiarities of Robot Embodiment (EmCorp-Scale): Development, Validation and Initial Test of the Embodiment and Corporeality of Artificial Agents Scale. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 370–378.
77. Babel, F.; Kraus, J.; Hock, P.; Asenbauer, H.; Baumann, M. Investigating the Validity of Online Robot Evaluations: Comparison of Findings from an One-Sample Online and Laboratory Study; Association for Computing Machinery: New York, NY, USA, 2021; Volume 1, ISBN 9781450382908.
78. Castro-González, Á.; Admoni, H.; Scassellati, B. Effects of Form and Motion on Judgments of Social Robots’ Animacy, Likability, Trustworthiness and Unpleasantness. Int. J. Hum. Comput. Stud. 2016, 90, 27–38.
79. Hoffmann, L.; Krämer, N.C. The Persuasive Power of Robot Touch. Behavioral and Evaluative Consequences of Non-Functional Touch from a Robot. PLoS ONE 2021, 16, e0249554.
80. Robinette, P.; Wagner, A.R.; Howard, A.M. Assessment of Robot to Human Instruction Conveyance Modalities across Virtual, Remote and Physical Robot Presence. In Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 26–31 August 2016; pp. 1044–1050.
81. Torre, I.; Goslin, J.; White, L. Trust in Artificial Voices: A “Congruency Effect” of First Impressions and Behavioural Experience. In Proceedings of the Technology, Mind, and Society Conference (TechMindSociety ’18), New York, NY, USA, 5–7 April 2018.
Figure 1. Examples of social robots. From left to right: robotic butler A.L.O., deployed in hotels and malls (Photo: Business Wire); entertainment robot Cozmo by Anki; robot seal Paro, often used in healthcare/therapeutic settings (http://www.parorobots.com/photogallery.asp, accessed on 12 August 2022); and the DragonBot by the MIT Media Lab, used in educational settings with children (Photo: Bryce Vickmark).
Figure 2. Depiction of the Mathematical Transmission Model by Shannon and Weaver. Own depiction adapted from Shannon and Weaver [28].
Figure 3. Graphical conception of Lasswell’s theory of mass communication [24].
Figure 4. Framework to study communication in HRI. The headline questions are adapted from Lasswell [24]. Note: The grey boxes at the bottom indicate that the source of communication can also be a human, and a social robot can be the receiver of communication. For this purpose, the classifying categories, for example the quality of communication, would have to be revised to fit a human source of communication; however, this is beyond the scope of the present paper.
Figure 5. Studies structured according to the framework. Categories in bold represent the variations/manipulations within each study.
Table 1. Forms of communication, adapted from Zimmer et al. [35]; * rows and columns extend the original version.

| Type of Communication | Reciprocity | Spatial Proximity | Temporal Proximity | Bodily Contact | * Channel: Verbal | * Channel: Nonverbal |
|---|---|---|---|---|---|---|
| Social/interpersonal | X | X | X | X | X | X |
| Parasocial (e.g., on TV) | | | Sometimes | | X | X |
| Computer mediated (e.g., chat, social media) | X | | X | | X | Partial |
| * Artificial (e.g., speech assistants) | X | | X | | X | Partial (reduced to, e.g., prosody) |
| * Artificial (e.g., embodied virtual agents) | X | X | X | | X | Partial (reduced to visual and acoustic) |
| * Artificial (e.g., physically embodied, co-located social robots) | X | X | X | X | X | X |
Table 2. Examples of verbal and nonverbal communicative acts, goals, and effects.

| Example | Communication Goal | Act of Communication | Effect |
|---|---|---|---|
| “Hi, my name is Pepper” | Interactional (display robot identity), Relational (bonding) | Self-disclosure (name) | Social treatment, Anthropomorphization |
| “What’s your name” | Interactional (start conversation), Relational (signal interest in other) | (Ask) Question | Response/answer, Inviting communication |
| “I like your name” | Relational (signal warmth) | Compliment | Liking of robot, Positive affect |
| “Error” | Instrumental (information) | Verbal warning | Awareness, Attention, Understanding |
| “Sorry” | Interactional (reputation management), Relational (relationship maintenance) | Verbal excuse | Trust repair, Positive affect, Cooperation |
| “Pass me the salt” | Instrumental (task fulfillment) | Request, Command | Elicit action → hand over object |
| “You are in my way” | Instrumental (task fulfillment) | Annotation/Complaint | Elicit action → move to let robot pass by |
| “Take the green box and put it into the shelf” | Instrumental (task fulfillment) | Instruction | Elicit action → move object to said position |
| *Nonverbal examples* | | | |
| Robot blinking | Instrumental (attention) | Issue, Error, Problem | Awareness, Elicit action → coping |
| Gaze at object | Instrumental (guide attention) | Affordance | Elicit action → look at object |
| Point to object | Instrumental (guide attention) | Affordance | Elicit action → look at object |
| Battery sign | Instrumental (transparency) | Affordance | Awareness, Understanding → elicit action: charge battery |
| Smiling | Relational (liking) | Affect display | Liking, Acceptance |
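Read in the design direction, Table 2 suggests a simple lookup from communication goals to candidate communicative acts. The sketch below is a hypothetical design aid that merely paraphrases rows of the table; it is not a validated or exhaustive mapping:

```python
# Hypothetical lookup: candidate communicative acts per goal type,
# paraphrasing rows of Table 2 (illustrative only, not exhaustive).
ACTS_BY_GOAL: dict[str, list[str]] = {
    "instrumental": ["verbal warning", "request/command", "instruction",
                     "gaze at object", "point to object", "battery sign"],
    "interactional": ["self-disclosure", "question", "verbal excuse"],
    "relational": ["self-disclosure", "compliment", "verbal excuse", "smiling"],
}

def candidate_acts(goal: str) -> list[str]:
    """Return communicative acts associated with a goal type."""
    return ACTS_BY_GOAL.get(goal.lower(), [])

print(candidate_acts("relational"))
# ['self-disclosure', 'compliment', 'verbal excuse', 'smiling']
```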
Table 3. Example combinations of communication form and output modality.

| Communication Form | Output Modality: Auditive | Output Modality: Visual | Output Modality: Tactile |
|---|---|---|---|
| Verbal | Speech (“I am a robot”) | Text on screen (“Push the button”) | Braille (dots on reader) |
| Nonverbal | Sounds (laughing, yawning) | Facial expression | Affective touch (handshake) |