Article

Human–Robot Interaction in Industrial Settings: Perception of Multiple Participants at a Crossroad Intersection Scenario with Different Courtesy Cues

Carla Alves, André Cardoso, Ana Colim, Estela Bicho, Ana Cristina Braga, João Cunha, Carlos Faria and Luís A. Rocha
1 Algoritmi Centre, School of Engineering, University of Minho, 4800-058 Guimaraes, Portugal
2 DTx—Digital Transformation CoLab, 4800-058 Guimaraes, Portugal
* Author to whom correspondence should be addressed.
Robotics 2022, 11(3), 59; https://doi.org/10.3390/robotics11030059
Submission received: 22 February 2022 / Revised: 26 April 2022 / Accepted: 9 May 2022 / Published: 13 May 2022
(This article belongs to the Special Issue Advances in Industrial Robotics and Intelligent Systems)

Abstract: In environments shared with humans, Autonomous Mobile Robots (AMRs) should be designed with human-aware motion-planning skills. Even when AMRs can effectively avoid humans, only a handful of studies have evaluated the human perception of mobile robots. To establish appropriate non-verbal communication, robot movement should be legible and should consider the human element. In this paper, we present a study that evaluates humans’ perception of different AMR courtesy behaviors at industrial facilities, particularly at crossing areas. To evaluate the proposed kinesic courtesy cues, we ran five tests (four courtesy cues—stop, deceleration, retreating, and retreating and moving aside—and one control condition) with participants taken two by two. We assessed three metrics: the participants’ self-reported trust in the AMR behavior, the legibility of the courtesy cues in the participants’ opinion, and a behavioral analysis of the participants under each courtesy cue. From the forward view, the retreat cue was the most legible AMR behavior, and the decelerate cue elicited the fewest signs of hesitation in the behavioral analysis. The participants’ self-reported trust showed no significant differences between the two perspectives.

1. Introduction

The implementation of collaborative robots is seen as one of the technologies enabling Industry 5.0. This new industrial paradigm prioritizes essential needs and interests by placing humans at the core of industrial production processes. It recognizes the power of industry to achieve social and environmental objectives without setting aside the role of human workers in this process [1]. Along with this new industrial paradigm, robots are no longer merely programmable machines; they are expected to become co-workers, side by side with human workers [2]. This relationship should increase production flexibility and efficiency while supporting human workers in their tasks [3]. One of the technologies already being introduced on the shop floor is the industrial Autonomous Mobile Robot (AMR) [4].
AMRs have evolved from Automated Guided Vehicles (AGVs), which are restricted to predefined paths using magnetic/electrical wires, among other sensors [4,5]. Compared to AGVs, AMRs are more flexible, collaborative, and cost-efficient [3]. These robots can move autonomously without the help of external workers [6], detecting obstacles and recalculating a new route around them [7]. The autonomy of AMRs implies continuous decision-making on how to behave according to their environment, within predefined rules and constraints [4].
Cooperation between humans and robots sharing a workspace is becoming increasingly common [8]. Human–Robot Collaboration (HRC) is the process wherein human and robot agents work together to achieve shared goals. For any level of collaboration, human safety has been a primary concern ever since robots were first created [8]. Beyond physical safety, other aspects also need to be considered when humans and robots interact, such as psychological aspects and mental stress [3]. In fact, when a robot comes close to a human, it may generate negative feelings, such as stress, mistrust, and anxiety [9]. This may be linked to human nature: for instance, instinct may lead humans to take evasive action when they perceive an approaching unfamiliar object as a threat [10]. Therefore, fostering humans’ acceptance, trust, and comfort requires taking into account the robots’ appearance, movement, and behavior [11,12]. Because most industrial AMRs have non-humanoid features, the way to promote appreciation lies in non-verbal communication linked to their movement and behavior [13]. Motion is a means of communication, not merely an instrument for reaching a goal position, and predictability is of great importance for efficient human–robot interaction and collaboration [14]. Communication is a key factor in HRC activities to achieve a common goal [15]. Friendly and comprehensible AMR movements and behaviors are key to proper communication with the human worker [16].
AMRs can emulate the social behavior of humans through kinesic courtesy cues. In human–human interactions, kinesic courtesy cues promote social affiliation (e.g., physical distance from others, postural orientation, smooth social encounters, and acceptance of others) [13,17]. In the specific case of AMRs, legible kinesic cues can inform humans that the robot is granting them the privilege of first passage at a crossroads. To be legible, AMRs’ kinesic courtesy cues need to be predictable and resemble human behavior [13]. Lichtenthäler et al. [14] showed that a good strategy for an AMR is to move straight towards its goal as far as possible and to react as smoothly as possible to a human. Kaiser et al. [13] showed that when robots present legible behaviors, they are better appreciated by humans.
The assessment of humans’ perception of robot behavior is a non-trivial problem. In industrial HRC scenarios, cognitive ergonomics deals with this issue. It is concerned with principles of interaction acceptability, minimizing the mental stress and psychological discomfort that workers sharing a workspace with robots may feel [18]. There is no single best way to assess these psychological parameters because the choice depends on the purpose of the assessment. According to Gualtieri et al. [19], a set of cognitive variables is related to HRC, namely, trust, usability, frustration, perceived enjoyment, satisfaction, and acceptance. Three categories of measures are available for these variables: (i) performance measures, (ii) subjective measures, and (iii) psychophysiological measures [20]. Performance measures are based on reaction time and mistakes. Subjective measures capture the workers’ opinions, providing information on how they assess aspects of their interaction within the workspace. Psychophysiological measures directly measure cognitive variables through, for example, heart rate variability (HRV), galvanic skin response (GSR), and eye blink rate [20]; such technologies are increasingly feasible and capable of assessing and interpreting human cognitive status [8]. However, subjective measures are used more often because they offer practical advantages, e.g., ease of implementation and non-intrusiveness. Additionally, previous research supports their capacity to provide sensitive measures of the workers’ cognitive status [21].
Hetherington et al. [22] highlighted the need to carry out in-person experiments with mobile robots in open spaces and to apply courtesy cues in scenarios that include several participants at the same time. These authors also wondered what the impact of courtesy cues would be from the perspective of participants who view the robot from behind. Kaiser et al. [13] tested the legibility of two kinesic courtesy cues common in human interaction, implemented on an autonomous mobile robot in two situations that were not run simultaneously: the participant and the robot shared the same trajectory either next to each other or moving from opposite ends. The authors also pointed out the need to explore other courtesy cues.

Objectives

The current exploratory study goes a step further, recognizing the importance of legible movements in the communication processes of AMRs. We created an experimental protocol for an in-person scenario in a real industrial setting, contributing specific conditions not previously considered. Specifically, the aim is to investigate how different kinesic courtesy cues (stop, decelerate, retreat, and retreat and move to the left) are understood by two participants with different perspectives of the robot (one with a forward view and the other with a backward view) at an industrial crossroad under the same test conditions, i.e., within one simultaneous scenario.

2. Materials and Methods

The following subsections provide the sample characterization, a description of the AMR used and its operating conditions, the hypotheses and measures that we intended to analyze, and an explanation of the experimental procedure and apparatus.

2.1. Participants

The participants collaborated voluntarily and signed an informed consent form in agreement with the Committee of Ethics for Research in Social and Humans Sciences of the University of Minho (approval number CEICSH 095/2019), in agreement with the Declaration of Helsinki.
A total of 34 participants were recruited: 13 females (38.2%) and 21 males (61.8%). Each trial required two participants simultaneously: Participant A had a backward perspective of the robot (the robot moving away from the participant), and Participant B had a forward perspective (the robot moving toward the participant). The participants’ average age was 29.8 years (standard deviation 7.5; range 21–46 years).

2.2. Material and Experimental Setup

2.2.1. MiR 200 Specifications, Navigation, Control, and Safety

In this user study, an AMR, the MiR 200, and its automatic battery charging station, the MiR Charge 24 V, were used (Figure 1). The main specifications of the MiR 200 used for conducting the experiments are: weight (without load) of 65 kg; maximum speed forwards of 1.1 m/s; maximum speed backwards of 0.3 m/s; battery running time 10 h (or 15 km continuous driving); charging time with charge station up to 3.0 h ((0–80%): 2.0 h); charging time with cable up to 4.5 h ((0–80%): 3.0 h) [23].
The MiR 200 is a nonholonomic wheeled mobile robot (WMR) with a rectangular configuration, controlling its movement speed based on wheel odometry, with six wheels in total: one omnidirectional swivel wheel in each corner and two driving wheels (differential control) in the center of the platform to ensure the stability of the mobile robot when it rotates [24,25,26,27,28]. The robot adjusts how much power is sent to each motor based on sensory input. The robot is equipped with two ISO 13849-certified SICK S300 safety laser scanners, one in the front left corner and another in the rear right corner, offering 360° visual protection around the robot (Figure 2a); two Intel RealSense™ D435 3D cameras on the front of the robot for the detection of objects vertically up to 1800 mm at a distance of 1950 mm in front of the robot (Figure 2b), with an angle of 118° in the horizontal field of view (FoV) at 180 mm height from the floor (Figure 2c); and four ultrasound sensors, two placed at the front of the robot and two at the rear [23,27,29].
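For context, the sketch below shows the standard dead-reckoning pose update that a differential-drive base of this kind performs from its wheel encoders. It is a minimal illustration of the textbook equations [28], not the MiR’s proprietary controller; the wheel_base value and function names are assumptions.

```python
import math

def odometry_update(x, y, theta, d_left, d_right, wheel_base=0.45):
    """One wheel-odometry step for a differential-drive robot.

    d_left / d_right: distances (m) covered by the two centre drive
    wheels since the last update, derived from encoder ticks.
    wheel_base: distance between the drive wheels (placeholder value).
    """
    d_center = (d_left + d_right) / 2.0         # forward displacement
    d_theta = (d_right - d_left) / wheel_base   # change in heading
    # Integrate at the midpoint heading for a better small-arc estimate.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta + math.pi) % (2.0 * math.pi) - math.pi
    return x, y, theta

print(odometry_update(0.0, 0.0, 0.0, 0.061, 0.059))
```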
The SICK safety laser scanners provide the sensorial information for the collision avoidance function. This function prevents the robot from colliding with a person or an object by stopping it before a collision happens. To that end, the safety laser scanners are programmed with two sets of protective fields, each one individually configured to contour around the robot. One set is used when the robot is driving forward, and the other set is used when the robot is driving backward. Based on the speed, the robot activates the corresponding protective field. If an obstacle is detected, whether person or object, within the active protective field, the robot enters a protective stop automatically (Figure 3) until the protective field is cleared of obstacles for at least two seconds. The protective stop is a state of the robot where a robot status light turns red, and it is not possible to move the robot or send it on missions until it is brought out of the protective stop [27].
According to [27], the protective field ranges differ when the robot is driving forward, i.e., with the field in front of the robot, and backward, with each case covering a velocity interval in which the robot may operate. Figure 4a represents the protective field ranges for five forward velocity cases: (1) −0.14 to 0.20 m/s with a field range of 0–20 mm; (2) 0.21 to 0.40 m/s with 0–120 mm; (3) 0.41 to 0.80 m/s with 0–290 mm; (4) 0.81 to 1.10 m/s with 0–430 mm; and (5) 1.11 to 2.00 m/s with 0–720 mm. Figure 4b shows the analogous representation for backward driving, with four velocity cases: (1) −0.14 to 1.80 m/s with 0–30 mm; (2) −0.20 to −0.15 m/s with 0–120 mm; (3) −0.40 to −0.21 m/s with 0–290 mm; and (4) −1.50 to −0.41 m/s with 0–430 mm.
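These case boundaries amount to a lookup from commanded velocity to the active protective field. A minimal sketch, using the forward-driving values quoted above from [27] (the function itself is our illustration, not vendor code):

```python
def forward_protective_field_mm(v):
    """Map a forward velocity v (m/s) to the MiR 200 protective field
    range (mm) using the five cases quoted from the user guide [27]."""
    cases = [
        ((-0.14, 0.20), (0, 20)),
        ((0.21, 0.40), (0, 120)),
        ((0.41, 0.80), (0, 290)),
        ((0.81, 1.10), (0, 430)),
        ((1.11, 2.00), (0, 720)),
    ]
    for (lo, hi), field in cases:
        if lo <= v <= hi:
            return field
    raise ValueError(f"velocity {v} m/s outside the specified cases")

print(forward_protective_field_mm(0.6))   # -> (0, 290)
```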
To execute the experiments, a 2D map of the test area was created using the cartographer algorithm available on the robot platform (Figure 5). The robot’s localization within this map is determined by an adaptive Monte Carlo localization (AMCL) navigation system combining wheel odometry, data from the inertial measurement unit (IMU) and encoders, and laser scanner data [27,28,29,30,31].
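For readers unfamiliar with AMCL, the sketch below shows one simplified predict–update–resample cycle of a Monte Carlo localizer. It is conceptual only: real AMCL applies the odometry delta in each particle’s own frame, uses a map-based scan likelihood, and adapts the particle count, none of which is modeled here.

```python
import numpy as np

rng = np.random.default_rng(0)

def mcl_step(particles, odom_delta, scan_likelihood):
    """particles: (N, 3) array of [x, y, theta] pose hypotheses.
    odom_delta: [dx, dy, dtheta] reported by odometry/IMU.
    scan_likelihood: callable giving p(laser scan | pose, map)."""
    # Predict: propagate every hypothesis through a noisy motion model.
    particles = particles + odom_delta + rng.normal(0.0, 0.02, particles.shape)
    # Update: weight each hypothesis by how well it explains the scan.
    weights = np.array([scan_likelihood(p) for p in particles])
    weights /= weights.sum()
    # Resample: keep hypotheses in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```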
The kinesic courtesy cues were implemented on the robot through commands programmed in the software interface supplied by the respective vendor. During the tests, the desired value for the robot’s linear speed was set to 0.6 m/s. According to Lauckner et al. [31], speeds slower than 0.6 m/s are perceived as too slow. In terms of obstacle detection, the higher the speed (forward and backward speed), the larger the protective safety range [27].
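For illustration, cue missions composed in that interface could also be triggered from a script over the robot’s REST interface. The endpoint path, payload fields, IP address, and credentials below are assumptions to be checked against the vendor’s REST API reference, not verified calls:

```python
import requests

API = "http://192.168.12.20/api/v2.0.0"             # robot IP: assumption
HEADERS = {
    "Authorization": "Basic <base64-credentials>",  # placeholder token
    "Content-Type": "application/json",
}

def queue_cue_mission(mission_guid):
    """Append a pre-built mission (e.g., one courtesy-cue sequence
    composed in the vendor interface) to the robot's mission queue."""
    r = requests.post(f"{API}/mission_queue",
                      json={"mission_id": mission_guid},
                      headers=HEADERS, timeout=5)
    r.raise_for_status()
    return r.json()
```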

2.2.2. Experimental Setup

The experiments were conducted in an open space within an industrial environment, in a crossroad-like configuration with simultaneous forward and backward scenarios (Figure 6).
A control courtesy cue, i.e., a control condition in which the robot does not stop, and four courtesy cues—stop, decelerate, retreat, and retreat and move to the left—were programmed into the MiR 200. For each courtesy cue, the MiR 200 moved towards the crossing area at a linear speed of v = 0.6 m/s and then executed specific movements to communicate to the participants that it was yielding the right of way at the intersection. The crossing area—an interaction and decision area where Participants A and B outpace the MiR 200—is located in the experimental apparatus between the end of the horizontal paths of Participant A, the MiR 200, and Participant B and the beginning of the vertical path. The four courtesy cues tested (Figure 7) were the following (an illustrative motion sketch follows the list):
(i) “stop”: The AMR stopped suddenly before the crossing area, paused for two seconds, and then resumed its trajectory to the final position.
(ii) “decelerate”: The AMR slowed from its linear speed (v = 0.6 m/s) to v = 0.2 m/s over the final 1.0 m (represented by Xd in Figure 7) and then stopped before the crossing area. It then paused for two seconds and resumed its trajectory to the final position.
(iii) “retreat”: The robot stopped suddenly before the crossing area, retreated 1.0 m (Xr in Figure 7), paused for two seconds, and resumed its trajectory to the final position.
(iv) “retreat and move to the left”: The robot stopped suddenly before the crossing area, retreated 1.0 m, and moved 0.2 m to the left (Xleft in Figure 7) relative to the central point of the crossing area. It then paused for two seconds and resumed its trajectory to the final position.
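The sketch below condenses the four profiles (plus the control condition) into one dispatcher. The robot object is a hypothetical stand-in for the vendor’s mission interface; speeds and distances come from the protocol above, with the retreat speed capped at the MiR 200’s 0.3 m/s backward limit.

```python
import time

CRUISE_V, SLOW_V, RETREAT_V = 0.6, 0.2, 0.3  # m/s (0.3 = max backward speed)
X_D, X_R, X_LEFT = 1.0, 1.0, 0.2             # m: Xd, Xr, Xleft from Figure 7

class StubRobot:
    """Stand-in for the vendor interface: it only logs each command."""
    def drive(self, v, distance=None):
        print(f"drive at {v} m/s" + (f" for {distance} m" if distance else ""))
    def shift_left(self, d):
        # A nonholonomic base realizes this as rotate-translate-rotate.
        print(f"shift left {d} m")
    def stop(self):
        print("stop")

def run_cue(robot, cue):
    robot.drive(CRUISE_V)                      # approach the crossing area
    if cue == "control":
        return                                 # control: pass without stopping
    if cue == "decelerate":
        robot.drive(SLOW_V, distance=X_D)      # slow over the last metre (Xd)
    robot.stop()                               # halt before the crossing area
    if cue in ("retreat", "retreat and move to the left"):
        robot.drive(-RETREAT_V, distance=X_R)  # back away by Xr
    if cue == "retreat and move to the left":
        robot.shift_left(X_LEFT)               # sidestep by Xleft
    time.sleep(2.0)                            # two-second yield pause
    robot.drive(CRUISE_V)                      # resume to the final position

run_cue(StubRobot(), "retreat and move to the left")
```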
Participant A and the MiR 200 shared the trajectory from the start position, and Participant B shared the trajectory with the MiR 200 from the crossing area, both passing through the intersection. When the participants reached the crossing area, they were asked to decide whether or not to overtake the robot, according to the kinesic courtesy cue of the test in progress (Figure 8).

2.3. Procedure

The participants were informed about the scope of the study and the general instructions of the experiment; specifically, that the MiR 200 works autonomously, i.e., the robot moves on a preprogrammed map and trajectory (Figure 5), and that, thanks to the sensors monitoring its surroundings, it does not collide with any participant or object [27]. Each participant completed five tests: one for the control condition and four for the other courtesy cue conditions (see Section 2.2). The total time for performing the five test conditions was 20 min. Participants were instructed to start the test after hearing a “beep” and seeing a green light. To ensure that they would encounter the robot at a specific point on the navigation map (where the robot would show the courtesy cue behavior), participants were also instructed to walk at the pace imposed by the evaluators (about 1 m/s). They could abandon the test at any moment if they felt uncomfortable. After each courtesy cue condition, the participants’ perception was assessed through an adapted version of the Human Trust in Automation (HTA) questionnaire. To reduce hysteresis phenomena, i.e., different responses to identical inputs [32,33], the kinesic courtesy cues were presented in a randomized order across participants (a sketch of such an assignment follows).
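A minimal sketch of that randomization step, assuming one independently shuffled order per participant pair with a fixed seed for reproducibility (the paper does not specify the mechanism):

```python
import random

CUES = ["control", "stop", "decelerate", "retreat",
        "retreat and move to the left"]

def cue_order(pair_id, seed=42):
    """Shuffle the five test conditions independently for each pair,
    so identical cues arrive in different orders across pairs."""
    rng = random.Random(seed + pair_id)
    order = CUES.copy()
    rng.shuffle(order)
    return order

print(cue_order(pair_id=3))
```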

2.4. Measures

2.4.1. Perceived Trust and Mistrust Assessment

To measure the participants’ perceived trust in the AMR behavior, we applied an adapted version of the HTA questionnaire [34]; compared with the original questionnaire, we replaced the word “system” with “robot”. Trust is an important factor in HRC: it determines humans’ use of autonomy, and improper trust can lead to either over-reliance or under-reliance on the robot [35]. The HTA is a validated questionnaire composed of 12 statements, each assessed on a 7-point Likert scale (from 1 = “Totally disagree” to 7 = “Totally agree”). The participants answered at the end of each interaction with the AMR presenting the different courtesy cues. The first five statements have a negative connotation, while the last seven have a positive connotation. We scored the responses to the negative statements as mistrust and the others as trust in the robot. The Kolmogorov–Smirnov test was applied to assess the condition of normality of the results, and a one-way ANOVA was applied to compare the trust and mistrust results across the courtesy cue conditions.
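A sketch of this scoring and testing pipeline, assuming the scale split described above (items 1–5 mistrust, items 6–12 trust) and SciPy for the statistics; data structures and names are illustrative:

```python
import numpy as np
from scipy import stats

def score_hta(responses):
    """responses: the 12 HTA items (1-7 Likert), in questionnaire order.
    Items 1-5 (negative wording) -> mistrust; items 6-12 -> trust."""
    r = np.asarray(responses, dtype=float)
    return r[:5].mean(), r[5:].mean()          # (mistrust, trust)

def anova_across_cues(scores_by_cue):
    """One-way ANOVA of a scale score across the five cue conditions,
    plus a Kolmogorov-Smirnov check of residual normality."""
    groups = [np.asarray(g, dtype=float) for g in scores_by_cue.values()]
    f, p = stats.f_oneway(*groups)
    resid = np.concatenate([g - g.mean() for g in groups])
    z = (resid - resid.mean()) / resid.std(ddof=1)
    ks_p = stats.kstest(z, "norm").pvalue      # normality of residuals
    return f, p, ks_p
```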

2.4.2. Legibility Assessment

To measure the legibility of the kinesic courtesy cues tested, we asked the participants to select one of eight options at the end of each trial: (i) the robot stopped; (ii) the robot decelerated and stopped; (iii) the robot stopped and retreated; (iv) the robot stopped, retreated, and then moved forward and to the left/right; (v) the robot followed its path without stopping; (vi) the robot stopped and nudged; (vii) the robot stopped and tilted to one side; (viii) none of the above. When the participant answered correctly, we coded the answer as “true”. When the participant answered incorrectly but pointed to one of the courtesy cues used in the test (i, ii, iii, iv, or v), we coded it as “false”. If the participant answered incorrectly and pointed to a behavior outside the scope of the experiment (vi, vii, or viii), we coded it as “false out of the test”. To assess the association between the kinesic courtesy cue conditions and the answers to the control question, Pearson’s chi-square test was applied. A sketch of this coding and test appears below.
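The contingency counts in the sketch are illustrative placeholders (17 participants per view), not the study data; only the coding rule and the test call follow the description above:

```python
from scipy.stats import chi2_contingency

IN_TEST = {"i", "ii", "iii", "iv", "v"}   # options matching tested cues

def classify(chosen, correct):
    """Return the three-level outcome used in the legibility analysis."""
    if chosen == correct:
        return "true"
    return "false" if chosen in IN_TEST else "false out of the test"

# Rows: five cue conditions; columns: true / false / false out of the test.
counts = [
    [11, 5, 1],   # control            (illustrative counts only)
    [9, 6, 2],    # stop
    [8, 7, 2],    # decelerate
    [11, 5, 1],   # retreat
    [2, 12, 3],   # retreat and move to the left
]
chi2, p, dof, expected = chi2_contingency(counts)
print(f"X2({dof}) = {chi2:.3f}, p = {p:.3f}")  # dof = (5-1) x (3-1) = 8
```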

2.4.3. Behavioral Analysis

Based on video recordings of each experimental trial, we assessed the hesitation with which the participants moved through the crossroad. For this purpose, two raters assessed each video individually and coded the participants’ hesitation. We assumed that a participant’s movement showed signs of hesitation if at least one of the following was observed: they slowed down, stopped, moved to the side, retreated, granted the robot the right to pass, visually checked the robot, moved first but tentatively, seemed somewhat forced by the robot to pass first, passed the bottleneck jointly with the robot, or both got stuck in the crossroad [13]. To assess the association between the robot’s kinesic courtesy cues and the observable signs of hesitation, Pearson’s chi-square test was applied.
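A sketch of the per-trial coding rule implied by this list; the consensus step between the two raters is our assumption, since the resolution of disagreements is not stated above:

```python
HESITATION_SIGNS = {
    "slowed down", "stopped", "moved to the side", "retreated",
    "granted the robot the right to pass", "visually checked the robot",
    "moved first but tentatively", "forced by the robot to pass first",
    "passed the bottleneck jointly with the robot", "got stuck in the crossroad",
}

def hesitant(observed_signs):
    """A trial counts as hesitant if any listed sign was observed."""
    return bool(HESITATION_SIGNS.intersection(observed_signs))

def consensus(rater_a_signs, rater_b_signs):
    # Assumption: code a trial as hesitant only if both raters saw a sign.
    return hesitant(rater_a_signs) and hesitant(rater_b_signs)

print(consensus({"slowed down"}, {"visually checked the robot"}))  # True
```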

3. Results

We report our findings in three subsections. First, we tested the effect of the four kinesic courtesy cues on the participants’ trust and mistrust. Second, we assessed the legibility of the kinesic courtesy cues via the participants’ self-reports. Third, we compared the participants’ hesitation behavior under each kinesic courtesy cue according to their relative position (behind/in front of the robot). We started from the assumption that an AMR without non-verbal communication skills is hard for its human counterparts to read.

3.1. Perceived Trust and Mistrust

To assess perceived trust, the HTA was applied. A one-way ANOVA was conducted, and the condition of homogeneity of variance was verified (p > 0.05). Additionally, the Kolmogorov–Smirnov test confirmed the normality of the residuals (p > 0.05). The trust and mistrust results are shown in Table 1.
No statistically significant difference was found in the mean values of trust and mistrust across the kinesic courtesy cues from either point of view. The graphs in Figure 9 illustrate these results.

3.2. Legibility Assessment

Pearson’s chi-square test of independence revealed a statistically significant association between the kinesic courtesy cue conditions and the answers to the control question for the forward point of view (X2(8) = 16.316, p = 0.038). The graph in Figure 10a shows that the cue with the best legibility for the participants was retreat (64.7%, the same percentage as the control situation). From this perspective, the cue least understood by the participants was retreat and move left (11.8%).
For the backward view, Pearson’s chi-square test did not reveal a statistically significant association between the kinesic courtesy cue conditions and the answers to the control question (X2(8) = 11.308, p = 0.186). Figure 10b illustrates this result: the cue best understood by the participants was decelerate (47.1%, also the same percentage as the control situation), while retreat and move left had the lowest percentage of correct answers (5.9%).

3.3. Behavioral Analysis

Pearson’s chi-square test revealed a statistically significant association (X2(4) = 12.143, p = 0.016) between the robot courtesy cue and observable signs of hesitation in the participants with the forward view. The graph in Figure 11a shows that the cue for which the participants presented the fewest signs of hesitation was decelerate (82.4%). By contrast, the control condition produced the highest percentage of participant hesitation (70.6%).
For the backward view, Pearson’s chi-square test did not detect a significant association (X2(4) = 5.251, p = 0.263). Figure 11b shows that decelerate was again the cue with the lowest percentage of participant hesitation (52.9%), whereas retreat and move left produced the highest percentage of hesitation behavior when encountering the AMR.
Additionally, Fisher’s exact test was conducted to evaluate the association between the participants’ point of view and their hesitation behavior. The result (p = 0.014) revealed a statistically significant association: a higher percentage of participants showed hesitant behavior when they saw the robot from behind (61.2%) than when they approached it from the front (41.2%).
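To make the test concrete, the 2×2 table below back-calculates plausible trial counts from the reported percentages (85 trials per view, i.e., 17 participants × 5 conditions); these counts are our reconstruction, not published data:

```python
from scipy.stats import fisher_exact

# Rows: backward view, forward view; columns: hesitant, not hesitant.
# 52/85 = 61.2% and 35/85 = 41.2%, matching the reported percentages.
table = [[52, 33],
         [35, 50]]
odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.4f}")
```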

4. Discussion

In this exploratory in-person experiment, we implemented four courtesy cues (stop, decelerate, retreat, and retreat and move to the left) and one control condition (no stop) on an AMR to investigate how these kinesic courtesy cues are understood by two participants with different perspectives of the robot. That is, we intended to understand how the robot’s courtesy cues influenced the participants’ behavior in a simultaneous scenario, with participants taken two by two: one with a front-facing view of the robot and the other with a back-facing view. Running the scenario simultaneously matters because the behavior of one participant can affect the behavior of the other. In environments where several people share the space with each other and with the robot, different views are very likely. Different views imply different perceptions of the robot’s behavior and, therefore, different behaviors towards the robot, which in turn affect the perception and behavior of the other participants.
We tested three different metrics in our research, namely, the participants’ self-reported trust in AMR behavior, the legibility of the courtesy cues in the participants’ opinions, and behavioral analysis of the participants related to each courtesy cue tested. These metrics were assessed via an experimental protocol that consisted of two participants interacting with an AMR at an industrial crossroad.
The results of the participants’ self-reported trust showed no significant differences from the two participants’ perspectives between the control situation and the four kinesic courtesy cues implemented on the robot. This may be related to the fact that we measured trust right after the interaction with the robot. As Hancock et al. [36] pointed out, this is an issue that needs to be addressed in HRC because the process of trust development is not clear and needs to be further studied. Kaiser et al. [13], in a study where two kinesic courtesy cues were investigated (robot stop and robot stop and move to the side), found that an AMR that presented polite behaviors was better accepted by its human counterpart in an interaction, regardless of the specific courtesy cue. Here, we measured trust because this parameter directly affects people’s acceptance of the robot [37], but we did not find the same results. The fact of having two participants interacting simultaneously with the robot may have influenced our results, leading to different conclusions. Additionally, trust can be dynamically influenced by different factors, namely, the robotic system itself, the surrounding operational environment, and the respective natures and characteristics of the human team members [38].
Regarding the legibility of the robot behavior, we only found a statistically significant association in the participants with the forward view. From this point of view, the results point out that the users better perceived the robot behavior when it presented a retreating courtesy cue, granting the human the right to pass first at the crossroad. Hetherington et al. [22] presented results that are in agreement with our results. They explored the common human–robot interaction at a doorway or bottleneck in a structured environment and found that a robot’s retreating cue was the most socially acceptable cue and, therefore, the most legible.
Related to the behavioral analysis of the participants’ signs of hesitation, we also only found a statistically significant association in the forward view. When a human interacts with a robot and presents lesser patterns of hesitation, these interactions lead to less cognitive effort to decide how to interact [13]. The results show that the decelerate courtesy cue was the one with which the participants presented a lesser percentage of hesitation. These results are in accordance with Dondrup et al. [39]. These authors showed that the robot’s deceleration within the pedestrians’ personal space resulted in less disruption to their movement.
It was expected that the lower the understanding of the robot behavior, the higher the participants’ hesitancy [40]. Our results from the front view for the retreat courtesy cue fit this pattern: more than half of the participants understood the robot behavior (64.7%), and more than half showed no signs of hesitation (70.6%). However, the other courtesy cues do not follow this pattern. For instance, the decelerate and the retreat and move left courtesy cues presented a lower percentage of legibility but also a lower percentage of hesitation. This may indicate that a robot behavior being understood by humans may not be enough for good communication of the robot’s intentions; further research should be carried out in this direction. Unlike the four courtesy cues tested, the control situation was designed for the robot not to be polite, thereby increasing the participants’ hesitancy, as the robot does not stop at the crossroad. Our results show that the control situation was understood by the majority of the participants (64.7%) and resulted in higher hesitation (70.6%), as expected. The results related to the backward view of the robot show that less than half of the participants understood the robot behavior, accompanied by greater signs of hesitation across all courtesy cues tested. This shows that further research involving different perspectives of the robot is needed to understand how AMRs should behave in order to be accepted and understood by all humans who interact with them.

Limitations and Future Work

This article addresses human perceptions of robot actions in a shared environment at an industrial crossing area. While this relevant topic provides insight into human–robot interaction and human–robot collaboration research, some limitations of our work should be noted. We can point to constraints regarding the low number of participants and the low variability in their representativeness (all participants were recruited from the academic community). Another limitation is that there was no full human–robot interaction in play: the robot’s behavior was indifferent to human presence because the activation of the behaviors was hardcoded. However, if proper human recognition were implemented, the outcome should be similar. Finally, to carry out this exploratory study in an industrial environment with in-person experiments, the parameters (such as linear speed and test conditions for the courtesy cues) were adapted from related works [22,31] that used mobile robots other than the model under study (MiR 200), which may have conditioned the results obtained.
These limitations pave the way to further research. That is, they illustrate the need to develop an algorithm for the robot’s movement to be completely autonomous and for it to show courtesy cues when it finds a person at any time during the execution of its tasks, as suggested by Kaiser et al. [13]. This exploratory study already includes a scenario for analysis of the forward and backward view configurations of a group of two simultaneous participants with the robot. For that reason, it might be important to understand which parameters intrinsic to the robot and to courtesy cues must be applied in a scenario with more than two participants. We intend for future research to be applied on a shop floor where the AMR has to share the same trajectory with more than two workers, taking into account that the type of robot under study is intended for application in dynamic environments (e.g., industrial sector, logistics, hospitals) with a significant number of people in circulation [7,23]. On the other hand, by increasing the number of participants, the results, especially concerning trust and distrust analysis in HRC, become more difficult to analyze using only standard interviews and questionnaires [41]. Therefore, the application of psychophysiological measures (e.g., eye-tracking systems) should be a relevant measurement approach for this type of variable [42]. Another question that arose during the experiments and that should be addressed in further research is how the presence of a second co-actor (Participant A) affects the legibility of movements and courtesy cues perceived by the first co-actor (Participant B) and vice versa.

Author Contributions

C.A., conceptualization, methodology, investigation, writing, review, and validation; A.C. (André Cardoso), conceptualization, methodology, investigation, writing, review, and validation; A.C. (Ana Colim), investigation support, review, validation, and supervision; E.B., investigation support, validation, supervision, and review; A.C.B., statistical analysis, validation, supervision, and review; J.C., writing review; C.F., writing review; L.A.R., investigation support. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by NORTE-06-3559-FSE-000018, integrated in the invitation NORTE-59-2018-41, aimed at the Hiring of Highly Qualified Human Resources, co-financed by the Regional Operational Programme of the North 2020, thematic area of Competitiveness and Employment, through the European Social Fund (ESF). This work was also supported by FCT–Fundação para a Ciência e Tecnologia within the R&D Units Project Scope: UIDB/00319/2020.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Committee of Ethics for Research in Social and Humans Sciences of the University of Minho (approval number CEICSH 095/2019).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to acknowledge PIEP (Innovation in Polymer Engineering) for the use of their facilities.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. European Commission. Industry 5.0—Towards a Sustainable, Human-Centric and Resilient European Industry; Publications Office of the European Union: Luxembourg, 2021.
2. Nahavandi, S. Industry 5.0—A Human-Centric Solution. Sustainability 2019, 11, 4371.
3. Berx, N.; Pintelon, L.; Decré, W. Psychosocial Impact of Collaborating with an Autonomous Mobile Robot: Results of an Exploratory Case Study. In Proceedings of the HRI’21: ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 8–11 March 2021.
4. Fragapane, G.; de Koster, R.; Sgarbossa, F.; Strandhagen, J.O. Planning and control of autonomous mobile robots for intralogistics: Literature review and research agenda. Eur. J. Oper. Res. 2021, 294, 405–426.
5. Saeidi, H.; Wang, Y. Incorporating Trust and Self-Confidence Analysis in the Guidance and Control of (Semi)Autonomous Mobile Robotic Systems. IEEE Robot. Autom. Lett. 2019, 4, 239–246.
6. Rubio, F.; Valero, F.; Llopis-Albert, C. A review of mobile robots: Concepts, methods, theoretical framework, and applications. Int. J. Adv. Robot. Syst. 2019, 16, 1–22.
7. Liaqat, A.; Hutabarat, W.; Tiwari, D.; Tinkler, L.; Harra, D.; Morgan, B.; Taylor, A.; Lu, T.; Tiwari, A. Autonomous mobile robots in manufacturing: Highway Code development, simulation, and testing. Int. J. Adv. Manuf. Technol. 2019, 104, 4617–4628.
8. Chen, Y.; Yang, C.; Song, B.; Gonzalez, N.; Gu, Y.; Hu, B. Effects of Autonomous Mobile Robots on Human Mental Workload and System Productivity in Smart Warehouses: A Preliminary Study. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2020, 64, 1691–1695.
9. Toyoshima, A.; Nishino, N.; Chugo, D.; Muramatsu, S.; Yokota, S.; Hashimoto, H. Autonomous Mobile Robot Navigation: Consideration of the Pedestrian’s Dynamic Personal Space. In Proceedings of the 2018 IEEE 27th International Symposium on Industrial Electronics (ISIE), Cairns, QLD, Australia, 13–15 June 2018; pp. 1094–1099.
10. Sunada, K.; Yamada, Y.; Hattori, T.; Okamoto, S.; Hara, S. Extrapolation simulation for estimating human avoidability in human-robot coexistence systems. In Proceedings of the 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, 9–13 September 2012; pp. 785–790.
11. Shen, Z.; Elibol, A.; Chong, N.Y. Understanding nonverbal communication cues of human personality traits in human-robot interaction. IEEE/CAA J. Autom. Sin. 2020, 7, 1465–1477.
12. Stroessner, S.J.; Benitez, J. The Social Perception of Humanoid and Non-Humanoid Robots: Effects of Gendered and Machinelike Features. Int. J. Soc. Robot. 2019, 11, 305–315.
13. Kaiser, F.G.; Glatte, K.; Lauckner, M. How to make nonhumanoid mobile robots more likable: Employing kinesic courtesy cues to promote appreciation. Appl. Ergon. 2019, 78, 70–75.
14. Lichtenthäler, C.; Kirsch, A. Towards Legible Robot Navigation—How to Increase the Intend Expressiveness of Robot Navigation Behavior. In Proceedings of the International Conference on Social Robotics—Workshop Embodied Communication of Goals and Intentions, Bristol, UK, 27–29 October 2013.
15. Gildert, N.; Millard, A.; Pomfret, A.; Timmis, J. The Need for Combining Implicit and Explicit Communication in Cooperative Robotic Systems. Front. Robot. AI 2018, 5, 65.
16. Sisbot, E.A.; Marin-Urias, L.F.; Alami, R.; Simeon, T. A Human Aware Mobile Robot Motion Planner. IEEE Trans. Robot. 2007, 23, 874–883.
17. Leroy, T.; Christophe, V.; Delelis, G.; Corbeil, M.; Nandrino, J.L. Social Affiliation as a Way to Socially Regulate Emotions: Effects of Others’ Situational and Emotional Similarities. Curr. Res. Soc. Psychol. 2010, 16.
18. Gualtieri, L.; Rauch, E.; Vidoni, R. Emerging research fields in safety and ergonomics in industrial collaborative robotics: A systematic literature review. Robot. Comput. Integr. Manuf. 2021, 67, 101998.
19. Gualtieri, L.; Fraboni, F.; De Marchi, M.; Rauch, E. Evaluation of Variables of Cognitive Ergonomics in Industrial Human-Robot Collaborative Assembly Systems. In Proceedings of the 21st Congress of the International Ergonomics Association (IEA 2021); Springer: Cham, Switzerland, 2021; pp. 266–273.
20. Fista, B.; Azis, H.A.; Aprilya, T.; Saidatul, S.; Sinaga, M.K.; Pratama, J.; Syalfinaf, F.A.; Steven; Amalia, S. Review of Cognitive Ergonomic Measurement Tools. IOP Conf. Ser. Mater. Sci. Eng. 2019, 598, 012131.
21. Rubio, S.; Diaz, E.; Martin, J.; Puente, J.M. Evaluation of Subjective Mental Workload: A Comparison of SWAT, NASA-TLX, and Workload Profile Methods. Appl. Psychol. 2004, 53, 61–86.
22. Hetherington, N.J.; Lee, R.; Haase, M.; Croft, E.A.; Van der Loos, H.F.M. Mobile Robot Yielding Cues for Human-Robot Spatial Interaction. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 3028–3033.
23. Mobile Industrial Robots A/S (MiR). MiR a Better Way. 2019. Available online: https://epl-si.com/produto/mir200/ (accessed on 24 January 2022).
24. Tzafestas, S.G. Mobile Robot Control and Navigation: A Global Overview. J. Intell. Robot. Syst. 2018, 91, 35–58.
25. Recker, T.; Heilemann, F.; Raatz, A. Handling of large and heavy objects using a single mobile manipulator in combination with a roller board. Procedia CIRP 2020, 97, 21–26.
26. Recker, T.; Zhou, B.; Stüde, M.; Wielitzka, M.; Ortmaier, T.; Raatz, A. LiDAR-Based Localization for Formation Control of Multi-Robot Systems. In Annals of Scientific Society for Assembly, Handling and Industrial Robotics 2021; Schüppstuhl, T., Tracht, K., Raatz, A., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 363–373.
27. Mobile Industrial Robots A/S (MiR). User Guide (En) MiR 200, 3rd ed.; Mobile Industrial Robots A/S: Odense, Denmark, 2020; pp. 1–173.
28. Siegwart, R.; Nourbakhsh, I.R. Introduction to Autonomous Mobile Robots; MIT Press: Cambridge, MA, USA, 2004.
29. Mobile Industrial Robots A/S. MiR Robot Safety. Available online: https://www.mobile-industrial-robots.com/insights/amr-safety/mir-robot-safety/ (accessed on 26 January 2022).
30. Wadsten, J.; Klemets, R.E. Automated Deliverance of Goods by an Automated Guided Vehicle—Case Study of the Testing and Implementation of an AGV within the Production at Volvo Group AB, Tuve Gothenburg. Bachelor’s Thesis, Mechanical Engineering, Chalmers University of Technology, Gothenburg, Sweden, 14 June 2019.
31. Lauckner, M.; Kobiela, F.; Manzey, D. ‘Hey Robot, Please Step Back!’—Exploration of a Spatial Threshold of Comfort for Human-Mechanoid Spatial Interaction in a Hallway Scenario. In Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, UK, 25–29 August 2014; pp. 780–787.
32. Farrell, P.S.E. The Hysteresis Effect. Hum. Factors J. Hum. Factors Ergon. Soc. 1999, 41, 226–240.
33. Bicho, E.; Schoner, G.; Vaz, F. Modelo Dinâmico Neuronal Para a Percepção Categorial Da Fala. Electrónica Telecomunicações 1999, 2, 617.
34. Jian, J.-Y.; Bisantz, A.; Drury, C.G. Foundations for an Empirically Determined Scale of Trust in Automated Systems. Int. J. Cogn. Ergon. 2000, 4, 53–71.
35. Sadrfaridpour, B.; Saeidi, H.; Wang, Y. An integrated framework for human-robot collaborative assembly in hybrid manufacturing cells. In Proceedings of the 2016 IEEE International Conference on Automation Science and Engineering (CASE), Fort Worth, TX, USA, 21–25 August 2016; pp. 462–467.
36. Hancock, P.A.; Billings, D.R.; Schaefer, K.E.; Chen, J.Y.C.; De Visser, E.J.; Parasuraman, R. A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction. Hum. Factors J. Hum. Factors Ergon. Soc. 2011, 53, 517–527.
37. Freedy, A.; DeVisser, E.; Weltman, G.; Coeyman, N. Measurement of trust in human-robot collaboration. In Proceedings of the 2007 International Symposium on Collaborative Technologies and Systems, Orlando, FL, USA, 25 May 2007; pp. 106–114.
38. Park, E.; Jenkins, Q.; Jiang, X. Measuring trust of human operators in new generation rescue robots. Proc. JFPS Int. Symp. Fluid Power 2008, 2008, 489–492.
39. Dondrup, C.; Lichtenthäler, C.; Hanheide, M. Hesitation signals in human-robot head-on encounters: A pilot study. In Proceedings of the HRI’14: ACM/IEEE International Conference on Human-Robot Interaction, Bielefeld, Germany, 3–6 March 2014; pp. 154–155.
40. Brooks, C.; Szafir, D. Visualization of Intended Assistance for Acceptance of Shared Control. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2021; pp. 11425–11430.
41. Evans, D.C.; Fendley, M. A multi-measure approach for connecting cognitive workload and automation. Int. J. Hum.-Comput. Stud. 2017, 97, 182–189.
42. Jenkins, Q.; Jiang, X. Measuring Trust and Application of Eye Tracking in Human Robotic Interaction. In Proceedings of the IIE Annual Conference and Expo 2010, Cancun, Mexico, 5–9 June 2010.
Figure 1. Principal dimensions of: (a) the MiR 200 robot; (b) the MiR Charge 24 V.
Figure 2. (a) Top view of the SICK laser scanners; (b) Configuration of the 3D cameras and SICK laser scanners, side view; (c) FoV of 118°.
Figure 3. MiR 200 protective field mechanism: (a) The robot drives when its path is clear; (b) the robot activates the protective stop when an obstacle is detected within its protective field.
Figure 4. Range of the robot’s active protective field that changes with the robot’s speed, represented in millimeters: (a) Forward driving direction; (b) Backward driving direction.
Figure 5. Clean and edited map corresponding to the total industrial area used for the tests: the arrows indicate the area in which the robot was allowed to circulate.
Figure 6. Industrial environment: A crossroad-like configuration with simultaneous forward and backward scenarios.
Figure 7. Scheme of the four courtesy cues tested.
Figure 8. Experimental apparatus of the interaction’s HRC kinesic cue study conditions: Xd decelerate distance, Xr retreat distance, Xleft move left distance; distances are represented in meters.
Figure 9. Profile plots for trust (a) and mistrust (b) scores by kinesic courtesy cue and by point of view.
Figure 10. Legibility of the robot courtesy kinesic cues: (a) Forward view; (b) Backward view.
Figure 11. Observable signs of hesitation in the participants’ behavior while encountering the AMR related to each kinesic courtesy cue condition: (a) Forward view; (b) Backward view.
Table 1. Results of the one-way ANOVA for the participants’ perceived trust and mistrust of the AMR from both points of view (forward and backward).

            Forward                       Backward
Trust       F(4,80) = 1.082, p = 0.371    F(4,80) = 0.486, p = 0.746
Mistrust    F(4,80) = 0.564, p = 0.689    F(4,80) = 0.966, p = 0.431