Perspective

Virtual Reality for Safe Testing and Development in Collaborative Robotics: Challenges and Perspectives

1 FCEE, Nova Lincs, University of Madeira, 9020-105 Funchal, Portugal
2 DEI, CISUC, University of Coimbra, 3030-788 Coimbra, Portugal
3 CeBER, University of Coimbra, 3030-788 Coimbra, Portugal
4 CINEICC, University of Coimbra, 3030-788 Coimbra, Portugal
5 ISR, DEEC, University of Coimbra, 3030-788 Coimbra, Portugal
6 Department of Neurosurgery, The Faculty of Medicine, University Hospital Knappschaftskrankenhaus Bochum GmbH, Ruhr-University Bochum, 44892 Bochum, Germany
* Author to whom correspondence should be addressed.
Electronics 2022, 11(11), 1726; https://doi.org/10.3390/electronics11111726
Submission received: 29 April 2022 / Revised: 19 May 2022 / Accepted: 20 May 2022 / Published: 29 May 2022

Abstract

Collaborative robots (cobots) could help humans in tasks that are mundane, dangerous or where direct human contact carries risk. Yet, collaboration between humans and robots is severely limited by concerns about the safety and comfort of human operators. In this paper, we outline the use of extended reality (XR) as a way to test and develop collaboration with robots. We focus on virtual reality (VR) for simulating collaboration scenarios and on the use of cobot digital twins. This is particularly useful in situations that are difficult or even impossible to test safely in real life, such as dangerous scenarios. We describe using XR simulations as a means to evaluate collaboration with robots without putting humans in harm's way. We show how an XR setting enables combining human behavioral data, subjective self-reports, and biosignals signifying human comfort, stress and cognitive load during collaboration. Several works demonstrate that XR can be used to train human operators and provide them with augmented reality (AR) interfaces to enhance their performance with robots. We also provide a first attempt at what could become the basis for a human–robot collaboration testing framework, specifically for designing and testing factors affecting human–robot collaboration. The use of XR has the potential to change the way we design and test cobots, and train cobot operators, in a range of applications: from industry, through healthcare, to space operations.

1. Introduction

Motor collaboration between humans is essential for activities ranging from working together on construction sites to performing complex surgeries. This is because the human ability to read the motor intentions of another human is unparalleled: a skilled technician does not need much instruction to hold up an element the other one is welding; a nurse does not need much guidance when feeding a patient with a spoon. However, situations such as the COVID-19 pandemic reveal threats to this traditional model of collaboration. The contagion risk posed by human contact had a severe socio-economic impact, imposing changes on industry across the board, from factories to hospitals and care homes [1]. While many institutions rapidly switched to remote work and communication, many others could not do the same, as human contact is required in many industries. In situations of severe risk, such as a pandemic, human activities could be at least partially replaced by robots, thereby reducing contagion risk [2,3]. However, for industries requiring, at present, close human collaboration, this is less feasible. Even though the use of cobots could minimize the risk to humans, collaboration between humans and robots is still far from matching the collaboration between humans [4,5].
Therefore, especially in areas where the interaction between humans and robots may represent a risk for the human, collaborative robots may become of vital help to their human operators. This paper reviews the current work, highlights existing issues and challenges, and proposes novel approaches to the use of virtual reality (VR) and, more generally, extended reality (XR) as a tool for the safe testing of collaborative robotics. We conducted a narrative review and therefore did not apply explicit, systematic criteria for the search and critical analysis of the literature. It was not our intention to exhaust the sources of information; however, we tried to carry out a deep search covering articles from 1997 to 2022. The selection of studies and their interpretation was performed to classify the main applications and critical factors involved in the use of XR in the broad domain of collaborative robotics, with special emphasis on the cobot, the users and the environment.

2. Human–Robot Collaboration, Safety and Acceptability

Human–robot collaboration (HRC) is a specific sub-domain of human–robot interaction (HRI), which studies a human operator and a robot working together on a common goal using physical manipulation [6]. The general idea of HRC is not new, and several companies have deployed collaborative robots capable of working on industrial lines. Still, any progress in this domain is limited by the safety and acceptability of such collaboration [5,7]. Human safety is a critical factor: as industrial robots are often heavy and/or equipped with powerful effectors, they pose a physical danger. For this reason, most industrial robots are kept at a distance or inside safety cages (Figure 1). This solution is suboptimal for robots that are supposed to help humans perform their tasks, since real cooperation assumes that both agents work simultaneously.
Table 1 shows the different levels of collaboration with robots at present. Fenced robots, the most widespread, are non-collaborative. The remaining levels allow for collaboration, although, for safety reasons, their use is usually limited to levels 2 and 3. Finally, the last two columns denote actual dynamic collaboration.
When two agents are working together, they need to establish joint attention to form a joint intention and execute joint actions [9,10,11]. Mutual understanding of each other’s actions and the acceptability of robotic actions to a human is therefore an important issue in the field of human–machine interaction (HMI) [12,13]. It is implicitly assumed that, in the robot–human dyad, the human defines the intentions the robot has to adapt to [9]. However, unlike the presently available robots, the human brain comes equipped with specialized “computational machinery” for recognizing and predicting actions. The human brain is extremely efficient in recognizing other people’s actions, for example, their action intentions or errors [14,15]. This recognition makes humans able to rapidly adapt to what the other human does, reacting accordingly. However, we do not know whether the human brain applies the same predictive processes to non-human agents as it does to humans [16]. For example, one could expect that, as collaborative robots become more human-like, the quality and efficiency of human interactions with them would steadily increase. This is not always true. In the domain of social HMI, it has been shown that if robots resemble humans too closely, they are perceived as strange and unpleasant to interact with [17]. This effect is called the “uncanny valley” and is not limited to humans: other social primates also show adverse behavior towards realistic avatars [18]. This suggests that the primate brain may have hardwired neural systems allowing for intuitive discrimination of “natural” behavior. While the “uncanny valley” has been described for social HMI [19], virtually nothing is known about its impact on collaborative motor performance. Likewise, although it was previously reported [20] that humans operating assistive robots perform better if these robots follow human-like movement patterns (e.g., the relationship between curvature and speed), it is not known whether the same applies to scenarios where humans and cobots work autonomously (such as while cooperating).
Human actions are predictable in the sense that arm/joint configurations define the degrees of freedom of movement, allowing the brain to construct models of the other person’s actions based on natural motor repertoire [21]. For observing robot actions, this is less obvious, as robotic arms do not have the default biomechanical design constraints the human arm has and can execute much more complex movements (such as 360-degree rotations). Yet, the correct prediction of the other agent’s movements is needed for adapting one’s own actions and, as such, efficient cooperation. The intuitiveness of the other agent’s actions is of vital importance in situations where human cognitive effort has to be minimal, such as when under threat, stress, fatigue or heightened cognitive load. That is why it is important to understand how different robot designs (more or less human-like in terms of appearance and motion) might impact how humans perceive them and how this perception impacts manual collaboration.
Using VR allows testing human interactions with diverse virtual models (digital twins) of real cobots, including those popular in industry. Several cobot models, such as Baxter or Kinova, already have digital twins extensively developed and implemented in different VR platforms, such as Unity 3D (Unity Technologies), including advanced motion planners and the physics of their virtual robotic limbs. The use of such digital twins likewise allows using the same robot control framework (e.g., ROS) for controlling both the virtual and the real industrial robots. Moreover, VR allows for the development of cobot models beyond existing robot designs. This allows testing solutions not limited by readily available technology, including different, even hypothetical, robot models with different appearance or action patterns, such as in the study by Weistroffer et al. [22]. While these authors report a complex relationship between robot appearance, motion patterns, human performance, self-reports and physiological signals, it is important to emphasize that they did not measure more detailed performance indicators, such as human motion patterns (speed and accuracy) or eye-gaze data. Therefore, it remains to be uncovered how robot anthropomorphism affects more subtle aspects of user performance.
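To make this concrete, the sketch below (our illustration, not code from the cited studies) shows how a single ROS-style joint-trajectory command could drive either a real cobot or its digital twin, assuming both are reachable through the same topic, e.g., via a ROS-Unity bridge; the topic and joint names are hypothetical placeholders.

```python
# Minimal sketch: the same trajectory message can drive a real arm or its
# digital twin, assuming both subscribe to the same topic (assumption: a
# ROS-Unity bridge exposes the twin under an identical interface).
import rospy
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

def send_reach_command(topic="/cobot/arm_controller/command"):
    rospy.init_node("hrc_vr_demo", anonymous=True)
    pub = rospy.Publisher(topic, JointTrajectory, queue_size=1)
    rospy.sleep(1.0)  # give the publisher time to connect

    traj = JointTrajectory()
    traj.joint_names = ["joint_1", "joint_2", "joint_3",
                        "joint_4", "joint_5", "joint_6"]  # placeholder names
    point = JointTrajectoryPoint()
    point.positions = [0.0, -0.8, 1.2, 0.0, 0.6, 0.0]   # target pose (rad)
    point.time_from_start = rospy.Duration(2.0)          # reach it in 2 s
    traj.points.append(point)

    pub.publish(traj)  # identical command for the virtual and the real arm

if __name__ == "__main__":
    send_reach_command()
```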
Figure 1 shows VR robot models of increasing anthropomorphism, similar to those used by Weistroffer et al. [22]. The bottom of Figure 1 shows an example VR collaboration scene with Baxter, in which the cobot passes a tool to the user, mimicking real interaction. Note that the robot has a face, a feature found on the real Baxter. Robot anthropomorphism, apart from possibly affecting human motor performance, can also affect higher-order cognitive aspects such as the feeling of presence (see, e.g., Dubosc et al. [23]) or the attribution of blame in the case of error (see, e.g., Furlough et al. [24]). The use of VR allows flexible manipulation of robot and scene designs to capture these cognitive aspects.

3. The Use of Virtual Reality for Testing Human–Robot Collaboration

Human–robot collaboration carries a physical risk to humans. For example, the robot arm can strike the operator or otherwise harm them. Therefore, operators can be stressed while collaborating with robots, which may increase their cognitive load or reduce their motor performance [25]. Such dangerous scenarios are difficult to test in the real world, implying significant limitations to user-experience testing of cobots.
In recent years, immersive VR environments have emerged as a feasible solution for testing cobots while maintaining human safety [22,26,27]. Oyekan et al. [28] describe how a virtual reality digital twin of a physical layout can be used to design a collaborative environment for understanding human reactions to both predictable and unpredictable robot motions. Dombrowski et al. [29] present an interactive simulation of HRC—a technique that uses real-time physics simulation to immerse the design engineer or production planner inside a responsive virtual model of the factory—to optimize and validate manufacturing processes and achieve a better understanding of the risks and complexity of assembly processes. Taken together, these studies demonstrate diverse approaches to testing different types of interaction scenarios and virtual cobots. Such virtual testing can also be conducted for dangerous scenarios without putting humans at risk. For example, VR allows constructing scenes where the user is within the reach of a robot arm, so that the user’s psychophysiological measures and movement patterns can be collected in simulated dangerous scenes without any actual risk to the user.

4. A Framework for Extended Reality in Testing Human–Robot Collaboration

In the field of software engineering and human–computer interaction (HCI), specific methodologies exist that guide the design cycles of novel solutions and products in those areas [30] (see, e.g., Sommerville [31]). In the area of HRC, those systematic approaches are scarce to non-existent. While, at present, such agile approaches exist in robot design [32,33], these assume a given robot type and rely primarily on user feedback, such as self-reports and other subjective measures of user experience. The use of such subjective measures is, however, not without problems, as we will discuss later.
An XR framework for designing and testing HRC allows for an iterative development process, arguably at a reduced cost, since different iterations can be developed and tested before real-world deployment. In tandem, the study of human comfort with the robot could become a central part of the design. A feedback loop between the development team and users can be implemented more easily with virtual, as compared to real, cobot designs, leading to more agile cycles of design and redesign. This is particularly important, as it allows for the design, implementation and testing of collaboration models at higher levels of abstraction, without having to deal with low-level motor control and perceptual issues. The VR scenarios themselves may simulate a range of scenes, from those taking place in a factory to those of an assistive robot in a care facility. Such virtual scenarios can mimic realistic environments (such as a specific factory line) or hypothetical ones (such as a space station).
Given the flexibility of VR, experiments can manipulate several variables relating to different aspects of collaboration scenes. Based on the literature reviewed here, spanning the years 1997 to 2022 and queried through major scientific article databases, we identify features that have been, or could be, implemented in such scenes, and we classify them as variables concerning the cobot, the user and the environment. We summarize these variables in Table 2.
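To illustrate how such a variable space could be organized in practice, below is a minimal sketch of an experiment-condition data structure mirroring Table 2; the structure and the example values are our own assumptions rather than an established framework or API.

```python
# Hypothetical sketch of an HRC-VR experiment condition, mirroring Table 2.
from dataclasses import dataclass, field
from itertools import product

@dataclass
class CobotFactors:
    anthropomorphism: str = "arm"   # e.g. "arm", "baxter", "humanoid"
    gaze: bool = False              # presence of cobot gaze
    speed: float = 0.5              # normalized end-effector speed

@dataclass
class EnvironmentFactors:
    scene: str = "factory"          # e.g. "factory", "care_home", "space_station"
    noise_db: float = 60.0          # auditory noise level
    lighting: str = "bright"

@dataclass
class Condition:
    cobot: CobotFactors
    environment: EnvironmentFactors
    # measured (dependent) variables are filled in during the session
    measures: dict = field(default_factory=dict)

# Build a small factorial design over two manipulated cobot variables.
conditions = [
    Condition(CobotFactors(anthropomorphism=a, gaze=g), EnvironmentFactors())
    for a, g in product(["arm", "baxter", "humanoid"], [False, True])
]
print(len(conditions), "conditions")   # 6 cells
```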
It is known that robot anthropomorphism influences the human’s emotional and social perception of the robot, for example, the willingness to sacrifice it [34,35]. This is an important factor to consider when deciding on cobot design ergonomics, as different human emotional attitudes toward the robot may influence collaboration efficiency in different situations (such as rescue operations). Onnasch and Roesler [35] provide a compelling taxonomy of different aspects of cobot design and their impact on human behavior in different interaction contexts.
The presence of cobot gaze is especially interesting, as it has been included on some cobots. For example, the company Rethink Robotics included gaze as a feature in their Baxter and Sawyer cobots to putatively increase cobot acceptability [36]. This is because eye gaze is a critical element of human social life, as it communicates intentions [28], allowing the interaction partner to act accordingly on these intentions. Eye gaze predictively guides human hand actions [37] and is attracted to object affordances [38,39]. Gaze is also crucial for reading other agents’ intentions [10]. Despite how important gaze is for collaboration between humans, to our knowledge, the question of how the presence of cobot gaze impacts human movement parameters has not yet been investigated. Thus, whether the human brain relies on gaze when perceiving the actions and intentions of non-human agents, and whether this informs human actions in the same way that other humans’ gaze does, remains to be determined.

5. VR in Testing Cognitive and Social Aspects of Collaboration

Richards [40] proposes that the best way of achieving a higher level of collaboration between a human and a robot is for the robot to mimic, to some extent or another, the behaviors of its human counterparts. To maintain interaction efficiency, we need to understand the boundaries between human and robot capabilities, beliefs, intent and control. More specifically, we need to know how designers should take cognitive and social processes (e.g., trust, acceptability and attribution of blame) into account in HRC in order to design better cobots and collaboration conditions.
Trust is one of the requisites for building successful human–robot collaboration [41]. It is the attitude that an agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability [42]. Trust also represents a calculative orientation toward risk [43]; by trusting, we assume a potential gain, while by distrusting, we are avoiding a possible loss [44]. Research on trusting robots shows that the relationship between trust and joint physical coordination is critical when human workers interact with robots in a collaborative task [45]. In HRC contexts, “affective” trust better predicts human workers’ willingness to use a robot, and both types of trust—cognitive (e.g., reliability and predictability of the robot and robot attributes) and affective (e.g., proximity and personality)—are supported by the statements of apology and competence that the robots manifest [46]. The acceptance of the technology (cobot) by humans in a collaborative workplace is a predictive factor of the success of the human–robot interaction [7]. The real-time trust measurements of Desai et al. [47] show that traditional post-run survey approaches to human–robot trust can be masked by primacy–recency bias, and that early drops in reliability negatively impact real-time trust differently than middle or late drops. According to the same authors, robot trust feedback can improve autonomy control allocation during low reliability without altering real-time trust levels. It should be noted that feedback interface designs using semantic symbols lead to more abrupt real-time trust changes than non-semantic symbols. The research of Oyekan et al. [28] suggests that greater autonomy for the robot results in greater attribution of blame in work tasks. In general, the order of amount of blame attributed was humans, robots and then environmental factors. If the scenario described the robot as non-autonomous, participants attributed almost as little blame to it as to the environmental factors; in contrast, if the scenario described the robot as autonomous, participants attributed almost as much blame to it as to the human.
Studies analyzing cognitive and social processes in technology demonstrate the importance of these topics for its correct implementation and actual usage. For that reason, it is fundamental to understand which factors could increase users’ trust, acceptability, etc., in HRC. The use of VR allows for the manipulation of robot characteristics that might affect cognitive and social processes (c.f. Weistroffer et al. [22]). Moreover, it allows combining behavioral and subjective measures of cognitive, emotional and social human factors with their physiological markers to yield a full picture of human factors in HRC [22,48,49]. Table 3 summarizes identified issues in VR experiments on HRC and proposed remedies.

6. Combined Use of User Experience Questionnaires and Objective Measures in HRC

VR simulations of HRC tasks can be very complex and may require subjects to grab objects handed to them by the robot, hand an object to the robot, or reach for a target object simultaneously/jointly with a cobot [22]. However, the validity of VR simulations for HRC relies on three intertwined concepts: immersion, presence and embodiment. Immersion is modulated by the quality of the sensory information given by VR systems and the extent to which their interaction can support users’ sensorimotor contingencies (SCs) [51]. The better the immersion of a system, the higher the precision of the presentation of sensory stimuli (such as display resolution and field of view, sound and haptic information) and the more SCs supported (such as head, hand, arm or full-body tracking). Immersion, in turn, has an impact on the experience of being there, that is, on presence. Despite the lack of a consensual definition, presence might be defined as the psychological state in which a person reacts to a virtual environment (VE) as they would in the physical world [52]. Presence is regarded as the primary process that makes VR operate. However, there is no direct link between immersion and the sensation of being present. There is, however, widespread agreement that presence is a multi-component concept [53]. The sensation of presence, according to Slater, is based on the place illusion (PI)—the illusion of being there—and the plausibility illusion (PSI)—the believability of what is going on [51]. PSI is heavily reliant on the implemented VEs.
Hence, a VR system that ensures the necessary conditions for presence [54] (adequate immersive properties, to which PI is most directly related, together with embodiment and plausible, believable scenarios) can elicit behavioral and psychophysiological responses [55,56,57] consistent with their real-world counterparts. Modern VR setups allow for relatively precise motion capture of the human hand [58], with feedback of the user’s hand enabling a more or less embodied experience. The possibility of including virtual models of anthropomorphic hands mimicking the user’s own, as well as a variety of other end effectors (including different tools), allows for testing different levels of embodiment and their impact on collaborative situations, without being limited to the user’s own body as in real-life testing. Similarly, users’ hand movements can indicate the levels of acceptability of motor cooperation with different cobot types. Natural hand velocity profiles for object-oriented movements are single-peaked [59], and the presence of multiple peaks indicates a change in plan, such as adapting to the cobot’s movement (e.g., Flash and Henis [60]). Analysis of velocity profiles is routinely used in motor neuroscience for assessing hand trajectory programming. Hand trajectories—in combination with hand speed, movement duration and precision—can be a good, objective indicator of human motor performance when collaborating with different types of cobots.
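As a simple illustration of this kind of analysis (a sketch of ours, not a pipeline from the cited studies), hand positions sampled by a VR tracker can be differentiated into a speed profile whose peaks are then counted; more than one peak would hint at a corrected movement plan. The sampling rate and threshold below are assumed values.

```python
# Sketch: count velocity peaks in a tracked hand trajectory (assumed 90 Hz sampling).
import numpy as np
from scipy.signal import find_peaks

def count_speed_peaks(positions, fs=90.0, min_speed=0.1):
    """positions: (N, 3) array of hand positions in metres from the VR tracker."""
    velocity = np.diff(positions, axis=0) * fs      # m/s per axis
    speed = np.linalg.norm(velocity, axis=1)        # scalar speed profile
    peaks, _ = find_peaks(speed, height=min_speed)  # ignore tiny fluctuations
    return len(peaks), speed

# Example: a smooth 1 s reach of 40 cm should yield a single bell-shaped peak.
t = np.linspace(0, 1, 90)
smooth_reach = np.c_[0.4 * (3 * t**2 - 2 * t**3), np.zeros_like(t), np.zeros_like(t)]
n_peaks, _ = count_speed_peaks(smooth_reach)
print(n_peaks)   # -> 1 for a single, uncorrected movement
```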
In addition, users can wear a haptic glove providing tactile sensation to increase immersion and effectiveness. Human hand actions critically depend on the presence of haptic feedback [61]. In joint actions, the forces applied to the object by each partner provide important cues about their intentions and the current state of the action, and help coordination [62]. As pointed out by Bauer et al. [9], robot touch may also serve other communication purposes important for establishing communication (such as a handshake); therefore, including it in VR scenes with cobots seems to be an important issue to be solved. While the use of haptic technologies significantly improves the embodiment of virtual scenes [58], haptics is not currently widespread due to the limited number of commercially available naturalistic haptic interfaces.
Finally, the VR setting allows the recording of biological signals. These can include skin conductance (e.g., Weistroffer et al. [22]), heart rate (Etzi et al.; Weistroffer et al. [22,50]) and muscle activity, using the respective sensors. The inclusion of these sensors allows obtaining objective metrics of user stress, independent of self-reports (e.g., Etzi et al. [50]). Physiological responses recorded online, such as skin conductance level and heart rate variability, can be used to further detect stress levels, such as activation of the fight-or-flight mechanism [63] in dangerous situations, e.g., when the virtual cobot hits the subject. This, together with participants’ self-reports, provides a more in-depth perspective on cobot acceptability than questionnaires alone. For example, a situation where participants’ positive self-reports are combined with physiological markers indicating stress would surface a more complex emotional state that could then be further disentangled [50].
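A minimal sketch of how such online physiological markers could be computed is given below; the metrics are standard (RMSSD for heart rate variability, a baseline-relative rise in skin conductance level), but the window lengths, thresholds and example values are illustrative assumptions of ours, not parameters from the cited studies.

```python
# Sketch: crude stress indicators from heart-rate and skin-conductance streams.
# Window lengths and example values are illustrative assumptions only.
import numpy as np

def rmssd(rr_intervals_ms):
    """Short-term heart-rate variability (RMSSD) from R-R intervals in ms."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

def scl_reactivity(scl_trace, baseline):
    """Relative rise of skin conductance level over a resting baseline."""
    return (float(np.mean(scl_trace)) - baseline) / baseline

# Example windows recorded while the virtual cobot moves close to the user.
rr_window = [810, 790, 805, 760, 740, 755]   # ms, shortening intervals
scl_window = np.array([6.1, 6.4, 6.9, 7.3])  # microsiemens
print(rmssd(rr_window), scl_reactivity(scl_window, baseline=5.0))
# Lower RMSSD and higher SCL reactivity are conventionally read as higher arousal.
```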
Combining different psychophysiological and behavioral signals can be exploited using machine learning tools to analyze them. This approach can help pinpoint subtle effects that user experience (UX) questionnaires would not be sensitive enough to measure. For example, it is possible to use a questionnaire to ask users about their level of stress/comfort with alternative cobot scenarios after they have completed a series of tasks. However, the results of those questionnaires would not answer more important and interesting questions, such as: When did stress kick in? When were users most stressed? Were users stressed on the same task for each scenario or did some cause more/less stress? The retrospective nature of questionnaires means that the results collected through them are too coarse-grained to accurately address such precise questions [64]. A portfolio of psychophysiological measures affords us the possibility of using more concrete measurements of the state of the human body to accompany post-fact questionnaires. This is particularly relevant in situations that could potentially involve risk and safety issues. Weistroffer et al. [22] demonstrated the feasibility of combining user questionnaires with physiological measures during human collaboration with virtual cobots of different levels of anthropomorphism and with human-like vs. non-human-like effector velocity profiles. Their research showed that, while anthropomorphic robots drew more user attention to their appearance, physiological signals did not reflect this effect. More recently, Etzi et al. [50] demonstrated that, while subjects’ physiological responses in a collaborative task did not indicate discomfort with changing robot velocity, their subjective self-reports did. Taking this integrative feedback approach and correlating psychophysiological measures with subjective questionnaires would provide a fuller and richer picture of what an ideal cobot scenario would be for a human than we would obtain from task performance data and subjective post-experience responses alone. Moreover, it helps avoid questionnaire responses being driven mainly by subjects guessing the experimental demands, i.e., the demand characteristics of the VR scenario (c.f., McCambridge et al. [65]).
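The window-by-window analysis described above could, for instance, look like the following sketch, in which a standard off-the-shelf classifier is trained on per-window features to flag stressful moments; the features, labels and data here are entirely hypothetical.

```python
# Sketch: classify per-window stress from combined features (hypothetical data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Each row = one 5-second window:
# [mean heart rate, skin conductance, hand speed, gaze-on-robot ratio]
X = rng.normal(size=(200, 4))
# Placeholder labels (1 = "stressed"), here synthesized just to run the pipeline.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # window-level cross-validation
print("window-level accuracy:", scores.mean().round(2))
```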
It is important to note that the above-mentioned studies integrating self-reports with physiological measures employed subject samples smaller than in typical psychophysiological studies using similar methods. That is, Weistroffer et al. [22] used a sample of 13 subjects, while Etzi et al. [50] had a sample size of 10. Sample sizes this small have been repeatedly discussed in the relevant literature as one of the main reasons for low statistical power and difficulty in replicating findings (e.g., Button et al. [66]). For this reason, the failure to find physiological markers of stress when self-reports indicate it might result from low statistical power. Future studies using physiological markers should therefore employ larger sample sizes, e.g., determined by a priori power analysis, to warrant the generalizability of their findings.
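For example, a simple a priori power calculation (sketched below for a paired-samples design with a medium effect size; both of these are our assumptions, not parameters from the cited studies) already points to samples well above the 10-13 participants used so far.

```python
# Sketch: a priori sample-size estimate for a within-subject comparison
# (effect size, alpha and power are illustrative assumptions).
from statsmodels.stats.power import TTestPower

analysis = TTestPower()                      # one-sample / paired t-test power
n = analysis.solve_power(effect_size=0.5,    # medium effect (Cohen's d)
                         alpha=0.05,
                         power=0.80,
                         alternative="two-sided")
print(n)   # about 33.4, i.e. at least 34 participants, far above 10-13
```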

7. Telepresence and Teleoperation Scenarios

The use of VR provides an unprecedented opportunity to test cobots in hypothetical telepresence/teleoperation scenarios. Examples of use cases of teleoperation include factories, nuclear power plants, assembly operations in space or at sea, and search and rescue operations [67,68]. Simulating remote presence based on HRI is especially useful if the operating environment is hazardous and, therefore, placing a human operator at the site is not safe.
These scenarios may require shifting the user’s point of view from a first-person perspective to a third-person perspective, or a bird’s-eye view, which is common in teleoperation. It is important to consider how this shift of perspective might affect the performance of, and/or the cognitive load on, the human operator. Several authors have pinpointed issues with embodiment in teleoperation when the operator directly controls the robot [69,70]. However, it remains to be determined how perspective and embodiment factors influence situations where the operator interacts with an otherwise autonomous robot.
Critically for teleoperation scenarios, VR allows simulating delays and noise corruption, so that the visual feedback presented to the operator is delayed or noisy in a way that emulates real-world teleoperation-related noise and streaming issues.
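A minimal sketch of how such degradation could be emulated in a simulation loop is shown below; the latency and noise magnitudes are arbitrary illustrative values, not figures from any teleoperation standard.

```python
# Sketch: emulate teleoperation latency and sensor noise on a streamed pose.
# Delay and noise magnitudes below are arbitrary illustrative values.
from collections import deque
import numpy as np

class DegradedFeedback:
    def __init__(self, delay_frames=18, noise_std=0.005, seed=0):
        self.buffer = deque(maxlen=delay_frames)   # ~200 ms at 90 fps
        self.noise_std = noise_std                 # metres of positional jitter
        self.rng = np.random.default_rng(seed)

    def step(self, true_pose):
        """Push the current robot pose, return the delayed + noisy one to render."""
        self.buffer.append(np.asarray(true_pose, dtype=float))
        delayed = self.buffer[0]                   # oldest pose still buffered
        return delayed + self.rng.normal(0.0, self.noise_std, size=delayed.shape)

feedback = DegradedFeedback()
for frame in range(5):
    shown = feedback.step([0.01 * frame, 0.0, 0.4])   # x, y, z in metres
    print(shown)
```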

8. The Use of Augmented Reality

While virtual reality is based on creating an immersive digital environment, augmented reality (AR) provides an additional overlay enhancing the real world [71]. This usually takes the form of an animated overlay on the visual scene, providing the user with additional information such as visual cues to the task, instrument parameters, etc. Such an overlay can, for example, provide the operator with cues helping to establish joint attention (e.g., Marques et al. [11]). Such use of AR for cobot technology has been demonstrated in several studies. For example, Liu and Wang [72] explored the potential of AR as a worker support system in manufacturing tasks. They designed a system for assembly training and monitoring using AR: above each assembly part and tool, 3D text was displayed, providing instructions to assemble objects in a specified sequence, together with a robot. A somewhat similar concept was provided by Hietanen et al. [73], showing that AR overlays can be used to enhance user interaction with the production system, albeit with some limitations; their work also demonstrated that currently available head-mounted displays might not be suitable for use on industrial lines. On the more cognitive side of HRC, Palmarini et al. [74] developed an AR interface that positively affected human trust in cobots, as measured by psychometric methods. Michalos et al. [75] proposed that, to improve operators’ safety and acceptance in hybrid assembly environments, a tool using the immersion capabilities of AR technology should be applied.
Although the use of AR allows for enhancing HRC through the addition of virtual interfaces, cues, etc., its efficacy first needs to be tested. Such testing of AR interfaces can be easier to perform in VR, where the virtual interfaces can be emulated in a range of scenarios, as described above, and thereby compared against several options. This, in turn, enables agile development of solutions that are not limited to a specific laboratory/experimental context.

9. Considering Operator Gender and Age in Cobot Testing

The use of VR has potential beyond the variety of simulated scenarios. The relatively flexible setup and easy-to-use equipment allow testing a variety of subjects, including those from outside the pool of current cobot operators. This flexibility provides several opportunities for assessing how personal characteristics interact with different characteristics of cobots; yet research on personal attributes moderating human collaboration with robots is lacking, which seems an important gap to be filled.
One conceivable factor is the gender of the operator. Although evidence for a substantial influence of gender on motor actions, and especially on collaborative manual behavior, is scarce, men and women differ in their upper arm and hand biomechanics, which translates into some visuomotor skills critical for collaboration, such as visuomotor coordination while using the upper arm [12,13]. As noted before, collaborative robots may have different anthropomorphic features. Yet, previous studies have shown that males were sensitive to the differences between robotic and anthropomorphic movements, while women largely ignored those differences [76,77]. For this reason, gender may affect the measured motor efficiency of collaborating with cobots. We expect gender to further impact acceptability, stress and trust in at least some collaboration scenarios.
Similar to gender, age might play an important role in cobot acceptability, due to factors such as experience with technology, visuomotor abilities, etc. Cobot acceptability seems especially important in the context of assistive robots aimed at the older population, as this group of users seems to value the physical attractiveness and social likeability of robots more than their younger counterparts [78]. Furthermore, analysis of gaze behavior has shown that, while younger people pay attention to several body parts, older adults focus significantly more on the robot face [78]. In this way, it is possible that the use of eye gaze might increase a cobot’s perceived friendliness and, likewise, its acceptance in a specific age or gender group. With the increasing presence of robots in areas that range from industrial plants to care homes, it becomes crucial to develop and fine-tune how human–robot collaboration takes place. Accommodating the personal characteristics of the individuals involved in this collaboration can be key not only to the collaboration’s effectiveness and efficiency but also to the quality of the interaction and experience between humans and robots.

10. Conclusions and Future Directions

Based on the current literature, we can delineate several opportunities that the use of XR provides in advancing cobot research, development and deployment. We believe that, in the domain of development, the use of simulations and digital twins results in more agile development cycles for cobot solutions and greater flexibility in tested cobot designs. Such development would benefit from a general framework highlighting important variables to consider in developing such simulations. In this paper, we propose what could be the backdrop for such a framework. We summarize critical variables in HRC VR experiments in Table 2 and provide a list of common issues and their proposed remedies in Table 3.
First of all, simulating diverse cobot designs, including hypothetical ones, allows for assessing the effects of cobot characteristics on operator efficiency and comfort without the restrictions posed by testing operators at the workplace with actual robots. Simulating diverse scenes and environments allows for assessing workplace and collaboration features and, foremost, allows for safely testing dangerous scenarios by leveraging immersive VR.
Operator performance can itself be tested using different measures, such as hand and eye motion tracking and physiological signals, to yield objective and controlled performance measures. These measures can be combined with more traditional data, such as user self-reports and questionnaires, to construct a full picture of actual human interactions with collaborative robots. Future work should consider larger sample sizes than those used to date, especially when measuring physiological signals.
Augmented reality has been used to enhance user performance and training in collaborating with cobots. The additional interface offered by AR can cue the user, e.g., about action sequences they are supposed to perform, but many more applications of the technology are conceivable in both training and generally improving human performance. The use of AR is very likely to increase as more cobots are deployed and, as such, research in this direction seems to have large potential.
The flexibility in designing scenes offered by VR can also be used to emulate remote operation by introducing noise and delays typical of teleoperation scenarios. This provides an opportunity to expose/train operators in situations beyond cooperating with a robot in the same physical space. To date, studies on HRC and telepresence remain scarce, and we believe this direction has the potential to be explored further.
The use of XR opens up a whole array of possibilities to safely and quickly test cobot designs and collaboration scenarios without putting humans at risk of harm. Modern XR technologies allow the integration of a wide variety of sensory modalities to create rich and immersive scenes. This way, testing cobots can be taken beyond the physical constraints of currently available cobot models and real-world settings. Furthermore, the development process can become more efficient by considering human reactions (i.e., psychological and physiological), leading to a more human-centered, holistic and efficient approach to human–robot collaboration.

Author Contributions

Conceptualization, A.P. (Artur Pilacinski), S.B.i.B., P.A.S., A.P. (Ana Pinto), J.A., P.M. and D.B.; writing—original draft preparation, A.P. (Artur Pilacinski), S.B.i.B., P.A.S., D.B.; writing—review and editing, A.P. (Artur Pilacinski), S.B.i.B., P.A.S., A.P. (Ana Pinto), C.C., D.B.; visualization, D.B., A.P. (Artur Pilacinski); supervision, A.P. (Artur Pilacinski), P.A.S., S.B.i.B.; project administration, A.P. (Artur Pilacinski), S.B.i.B.; funding acquisition, A.P. (Artur Pilacinski), S.B.i.B., J.A., D.B. All authors have read and agreed to the published version of the manuscript.

Funding

University of Coimbra: Sementes 2020 (A.P. (Artur Pilacinski)); Fundação para a Ciência e Tecnologia: PTDC/MHC-PCN/6805/2014 (A.P. (Artur Pilacinski)); NOVA-LINCS [PEest/UID/CEC/04516/2019] (S.B.i.B.); Fundação para a Ciência e Tecnologia: 2021.05646.BD (D.B.); FCT—Fundação para a Ciência e a Tecnologia UIDB/05037/2020 (A.P. (Ana Pinto)). European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 802553—“ContentMAP”) (J.A.). APC were covered by the University of Coimbra.

Acknowledgments

The authors want to thank Ioannis Iossifidis and members of Proaction and Neurorehabilitation Labs for their valuable input.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Contreras, F.; Baykal, E.; Abid, G. E-Leadership and Teleworking in Times of COVID-19 and Beyond: What We Know and Where Do We Go. Front. Psychol. 2020, 11, 590271. [Google Scholar] [CrossRef]
  2. Caselli, M.; Fracasso, A.; Traverso, S. Robots and Risk of COVID-19 Workplace Contagion: Evidence from Italy. Technol. Forecast. Soc. Chang. 2021, 173, 121097. [Google Scholar] [CrossRef]
  3. Guizzo, E.; Klett, R. How Robots Became Essential Workers in the COVID-19 Response. Available online: https://spectrum.ieee.org/how-robots-became-essential-workers-in-the-covid19-response (accessed on 13 January 2022).
  4. IFR Position Paper Demystifying Collaborative Industrial Robots; International Federation of Robotics: Frankfurt, Germany, 2018.
  5. Towers-Clark, C. Keep The Robot In The Cage—How Effective (And Safe) Are Co-Bots? Available online: https://www.forbes.com/sites/charlestowersclark/2019/09/11/keep-the-robot-in-the-cagehow-effective--safe-are-co-bots/ (accessed on 13 January 2022).
  6. Grosz, B.J. Collaborative Systems (AAAI-94 Presidential Address). AI Mag. 1996, 17, 67. [Google Scholar]
  7. Bröhl, C.; Nelles, J.; Brandl, C.; Mertens, A.; Schlick, C.M. TAM Reloaded: A Technology Acceptance Model for Human-Robot Cooperation in Production Systems. In Proceedings of the HCI International 2016 – Posters’ Extended Abstracts; Communications in Computer and Information Science, 617; Stephanidis, C., Ed.; Springer International Publishing: Cham, Switzerland, 2016; pp. 97–103. [Google Scholar]
  8. Bauer, W.; Bender, M.; Braun, M.; Rally, P.; Scholtz, O. Lightweight Robots in Manual Assembly–Best to Start Simply. In Examining Companies Initial Experiences with Lightweight Robots; Frauenhofer-Institut für Arbeitswirtschaft und Organisation IAO: Stuttgart, Germany, 2016; pp. 1–32. [Google Scholar]
  9. Bauer, A.; Wollherr, D.; Buss, M. Human–robot collaboration: A survey. Int. J. Humanoid Robot. 2008, 05, 47–66. [Google Scholar] [CrossRef]
  10. Zuberbühler, K. Gaze Following. Curr. Biol. CB 2008, 18, R453–R455. [Google Scholar] [CrossRef] [Green Version]
  11. Marques, B.; Silva, S.S.; Alves, J.; Araujo, T.; Dias, P.M.; Sousa Santos, B. A Conceptual Model and Taxonomy for Collaborative Augmented Reality. IEEE Trans. Vis. Comput. Graph. 2021, 1–21. [Google Scholar] [CrossRef]
  12. Gromeier, M.; Koester, D.; Schack, T. Gender Differences in Motor Skills of the Overarm Throw. Front. Psychol. 2017, 8, 212. [Google Scholar] [CrossRef] [Green Version]
  13. Moreno-Briseño, P.; Díaz, R.; Campos-Romo, A.; Fernandez-Ruiz, J. Sex-Related Differences in Motor Learning and Performance. Behav. Brain Funct. 2010, 6, 74. [Google Scholar] [CrossRef] [Green Version]
  14. Blakemore, S.-J.; Decety, J. From the Perception of Action to the Understanding of Intention. Nat. Rev. Neurosci. 2001, 2, 561–567. [Google Scholar] [CrossRef]
  15. Eaves, D.L.; Riach, M.; Holmes, P.S.; Wright, D.J. Motor Imagery during Action Observation: A Brief Review of Evidence, Theory and Future Research Opportunities. Front. Neurosci. 2016, 10, 514. [Google Scholar] [CrossRef] [Green Version]
  16. Martin, A.; Weisberg, J. Neural foundations for understanding social and mechanical concepts. Cogn. Neuropsychol. 2003, 20, 575–587. [Google Scholar] [CrossRef] [Green Version]
  17. MacDorman, K.F.; Green, R.D.; Ho, C.-C.; Koch, C.T. Too Real for Comfort? Uncanny Responses to Computer Generated Faces. Comput. Hum. Behav. 2009, 25, 695–710. [Google Scholar] [CrossRef] [Green Version]
  18. Steckenfinger, S.A.; Ghazanfar, A.A. Monkey Visual Behavior Falls into the Uncanny Valley. Proc. Natl. Acad. Sci. 2009, 106, 18362–18366. [Google Scholar] [CrossRef] [Green Version]
  19. Kahn, P.H.; Ishiguro, H.; Friedman, B.; Kanda, T.; Freier, N.G.; Severson, R.L.; Miller, J. What Is a Human?: Toward Psychological Benchmarks in the Field of Human–Robot Interaction. Interact. Stud. Soc. Behav. Commun. Biol. Artif. Syst. 2007, 8, 363–390. [Google Scholar] [CrossRef]
  20. Maurice, P.; Huber, M.E.; Hogan, N.; Sternad, D. Velocity-Curvature Patterns Limit Human–Robot Physical Interaction. IEEE Robot. Autom. Lett. 2018, 3, 249–256. [Google Scholar] [CrossRef] [Green Version]
  21. Spüler, M.; Niethammer, C. Error-Related Potentials during Continuous Feedback: Using EEG to Detect Errors of Different Type and Severity. Front. Hum. Neurosci. 2015, 9, 155. [Google Scholar] [CrossRef] [Green Version]
  22. Weistroffer, V.; Paljic, A.; Callebert, L.; Fuchs, P. A Methodology to Assess the Acceptability of Human-Robot Collaboration Using Virtual Reality. In Proceedings of the the 19th ACM Symposium on Virtual Reality Software and Technology, Singapore, 6–9 October 2013; ACM Press: New York, NY, USA, 2013; pp. 39–48. [Google Scholar]
  23. Dubosc, C.; Gorisse, G.; Christmann, O.; Fleury, S.; Poinsot, K.; Richir, S. Impact of Avatar Facial Anthropomorphism on Body Ownership, Attractiveness and Social Presence in Collaborative Tasks in Immersive Virtual Environments. Comput. Graph. 2021, 101, 82–92. [Google Scholar] [CrossRef]
  24. Furlough, C.; Stokes, T.; Gillan, D.J. Attributing Blame to Robots: I. The Influence of Robot Autonomy. Hum. Factors J. Hum. Factors Ergon. Soc. 2021, 63, 592–602. [Google Scholar] [CrossRef]
  25. Rabby, K.M.; Khan, M.; Karimoddini, A.; Jiang, S.X. An Effective Model for Human Cognitive Performance within a Human-Robot Collaboration Framework. In Proceedings of the 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 3872–3877. [Google Scholar] [CrossRef]
  26. Dianatfar, M.; Latokartano, J.; Lanz, M. Review on Existing VR/AR Solutions in Human–Robot Collaboration. Procedia CIRP 2021, 97, 407–411. [Google Scholar] [CrossRef]
  27. Duguleana, M.; Barbuceanu, F.G.; Mogan, G. Evaluating Human-Robot Interaction during a Manipulation Experiment Conducted in Immersive Virtual Reality. In Proceedings of the International Conference on Virtual and Mixed Reality, Orlando, FL, USA, 9–14 July 2011; pp. 164–173. [Google Scholar]
  28. Oyekan, J.O.; Hutabarat, W.; Tiwari, A.; Grech, R.; Aung, M.H.; Mariani, M.P.; López-Dávalos, L.; Ricaud, T.; Singh, S.; Dupuis, C. The Effectiveness of Virtual Environments in Developing Collaborative Strategies between Industrial Robots and Humans. Robot. Comput.-Integr. Manuf. 2019, 55, 41–54. [Google Scholar] [CrossRef]
  29. Dombrowski, U.; Stefanak, T.; Perret, J. Interactive Simulation of Human-Robot Collaboration Using a Force Feedback Device. Procedia Manuf. 2017, 11, 124–131. [Google Scholar] [CrossRef]
  30. Holtzblatt, K.; Beyer, H. Contextual Design, 2nd ed.; Morgan Kaufmann: Burlington, MA, USA, 2016; ISBN 978-0-12-801136-2. [Google Scholar]
  31. Sommerville, I. Software Engineering, 10th ed.; Pearson: Boston, MA, USA, 2016; ISBN 978-0-13-394303-0. [Google Scholar]
  32. Tonkin, M.; Vitale, J.; Herse, S.; Williams, M.A.; Judge, W.; Wang, X. Design Methodology for the UX of HRI: A Field Study of a Commercial Social Robot at an Airport; ACM Press: New York, NY, USA, 2018; ISBN 978-1-4503-4953-6. [Google Scholar]
  33. Zhong, V.J.; Schmiedel, T. A User-Centered Agile Approach to the Development of a Real-World Social Robot Application for Reception Areas. In Proceedings of the Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA, 8–11 March 2021; pp. 76–80. [Google Scholar]
  34. Onnasch, L.; Roesler, E. Anthropomorphizing Robots: The Effect of Framing in Human-Robot Collaboration. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2019, 63, 1311–1315. [Google Scholar] [CrossRef] [Green Version]
  35. Onnasch, L.; Roesler, E. A Taxonomy to Structure and Analyze Human–Robot Interaction. Int. J. Soc. Robot. 2021, 13, 833–849. [Google Scholar] [CrossRef]
  36. Kessler, S. This Industrial Robot Has Eyes Because They Make Human Workers Feel More Comfortable. Available online: https://qz.com/958335/why-do-rethink-robotics-robots-have-eyes/ (accessed on 13 January 2022).
  37. Johansson, R.S.; Westling, G.; Bäckström, A.; Flanagan, J.R. Eye–Hand Coordination in Object Manipulation. J. Neurosci. 2001, 21, 6917–6932. [Google Scholar] [CrossRef]
  38. Osiurak, F.; Rossetti, Y.; Badets, A. What Is an Affordance? 40 Years Later. Neurosci. Biobehav. Rev. 2017, 77, 403–417. [Google Scholar] [CrossRef]
  39. Pilacinski, A.; De Haan, S.; Donato, R.; Almeida, J. Tool Heads Prime Saccades. Sci. Rep. 2021, 11, 11954. [Google Scholar] [CrossRef]
  40. Richards, D. Escape from the Factory of the Robot Monsters: Agents of Change. Team Perform. Manag. Int. J. 2017, 23, 96–108. [Google Scholar] [CrossRef]
  41. Salem, M.; Lakatos, G.; Amirabdollahian, F.; Dautenhahn, K. Would You Trust a (Faulty) Robot? Effects of Error, Task Type and Personality on Human-Robot Cooperation and Trust. In Proceedings of the 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Portland, OR, USA, 2–5 March 2015; pp. 141–148. [Google Scholar] [CrossRef] [Green Version]
  42. Lee, J.D.; See, K.A. Trust in Automation: Designing for Appropriate Reliance. Hum. Factors J. Hum. Factors Ergon. Soc. 2004, 46, 50–80. [Google Scholar] [CrossRef]
  43. Bacharach, M.; Guerra, G.; Zizzo, D.J. The Self-Fulfilling Property of Trust: An Experimental Study. Theory Decis. 2007, 63, 349–388. [Google Scholar] [CrossRef]
  44. Gambetta, D. Can We Trust? In Trust: Making and Breaking Cooperative. Relations, Electronic Edition; Department of Sociology, University of Oxford: Oxford, UK, 2000; Volume 13. [Google Scholar]
  45. Wang, Y.; Lematta, G.J.; Hsiung, C.-P.; Rahm, K.A.; Chiou, E.K.; Zhang, W. Quantitative Modeling and Analysis of Reliance in Physical Human–Machine Coordination. J. Mech. Robot. 2019, 11, 060901. [Google Scholar] [CrossRef]
  46. Cameron, D.; Collins, E.; Cheung, H.; Chua, A.; Aitken, J.M.; Law, J. Don’t Worry, We’ll Get There: Developing Robot Personalities to Maintain User Interaction after Robot Error. In Conference on Biomimetic and Biohybrid Systems; Springer: Cham, Switzerland, 2016; pp. 409–412. [Google Scholar] [CrossRef]
  47. Desai, M.; Kaniarasu, P.; Medvedev, M.; Steinfeld, A.; Yanco, H. Impact of Robot Failures and Feedback on Real-Time Trust. In Proceedings of the 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Tokyo, Japan, 3–6 March 2013; pp. 251–258. [Google Scholar] [CrossRef] [Green Version]
  48. Gupta, K.; Hajika, R.; Pai, Y.S.; Duenser, A.; Lochner, M.; Billinghurst, M. In AI We Trust: Investigating the Relationship between Biosignals, Trust and Cognitive Load in VR. In Proceedings of the VRST ’19: 25th ACM Symposium on Virtual Reality Software and Technology, Parramatta, Australia, 12–15 November 2019; pp. 1–10. [Google Scholar]
  49. Gupta, K.; Hajika, R.; Pai, Y.S.; Duenser, A.; Lochner, M.; Billinghurst, M. Measuring Human Trust in a Virtual Assistant Using Physiological Sensing in Virtual Reality. In Proceedings of the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces, VR, Atlanta, GA, USA, 22–26 March 2020; pp. 756–765. [Google Scholar] [CrossRef]
  50. Etzi, R.; Huang, S.; Scurati, G.W.; Lyu, S.; Ferrise, F.; Gallace, A.; Gaggioli, A.; Chirico, A.; Carulli, M.; Bordegoni, M. Using Virtual Reality to Test Human-Robot Interaction During a Collaborative Task. In Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Anaheim, CA, USA, 18–21 August 2019; p. V001T02A080. [Google Scholar]
  51. Slater, M. Place Illusion and Plausibility Can Lead to Realistic Behaviour in Immersive Virtual Environments. Philos. Trans. R. Soc. B Biol. Sci. 2009, 364, 3549–3557. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Slater, M.; Wilbur, S. A Framework for Immersive Virtual Environments (FIVE): Speculations on the Role of Presence in Virtual Environments. Presence Teleoperators Virtual Environ. 1997, 6, 603–616. [Google Scholar] [CrossRef]
  53. Baños, R.M.; Botella, C.; Alcañiz, M.; Liaño, V.; Guerrero, B.; Rey, B. Immersion and Emotion: Their Impact on the Sense of Presence. Cyberpsychol. Behav. 2004, 7, 734–741. [Google Scholar] [CrossRef] [PubMed]
  54. Slater, M.; Lotto, B.; Arnold, M.M.; Sánchez-Vives, M.V. How We Experience Immersive Virtual Environments: The Concept of Presence and Its Measurement. Anu Psicol 2009, 40, 193–210. [Google Scholar]
  55. Slater, M.; Antley, A.; Davison, A.; Swapp, D.; Guger, C.; Barker, C.; Pistrang, N.; Sanchez-Vives, M.V. A Virtual Reprise of the Stanley Milgram Obedience Experiments. PLoS ONE 2006, 1, e39. [Google Scholar] [CrossRef] [PubMed]
  56. Slater, M.; Rovira, A.; Southern, R.; Swapp, D.; Zhang, J.J.; Campbell, C.; Levine, M. Bystander Responses to a Violent Incident in an Immersive Virtual Environment. PLoS ONE 2013, 8, e52766. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  57. Martens, M.A.; Antley, A.; Freeman, D.; Slater, M.; Harrison, P.J.; Tunbridge, E.M. It Feels Real: Physiological Responses to a Stressful Virtual Reality Environment and Its Impact on Working Memory. J. Psychopharmacol. 2019, 33, 1264–1273. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  58. Buckingham, G. Hand Tracking for Immersive Virtual Reality: Opportunities and Challenges. Front. Virtual Real. 2021. [Google Scholar] [CrossRef]
  59. Morasso, P. Spatial Control of Arm Movements. Exp. Brain Res. 1981, 42, 223–227. [Google Scholar] [CrossRef]
  60. Flash, T.; Henis, E. Arm Trajectory Modifications During Reaching Towards Visual Targets. J. Cogn. Neurosci. 1991, 3, 220–230. [Google Scholar] [CrossRef]
  61. Rao, A.K.; Gordon, A.M. Contribution of Tactile Information to Accuracy in Pointing Movements. Exp. Brain Res. 2001, 138, 438–445. [Google Scholar] [CrossRef]
  62. Kosuge, K.; Kazamura, N. Control of a Robot Handling an Object in Cooperation with a Human. In Proceedings of the 6th IEEE International Workshop on Robot and Human Communication. RO-MAN’97 SENDAI, Sendai, Japan, 29 September–1 October 1997; pp. 142–147. [Google Scholar]
  63. Bradley, M.M.; Codispoti, M.; Cuthbert, B.N.; Lang, P.J. Emotion and Motivation I: Defensive and Appetitive Reactions in Picture Processing. Emot. Wash. DC 2001, 1, 276–298. [Google Scholar] [CrossRef]
  64. Lazar, J.; Feng, J.H.; Hochheiser, H. Research Methods in Human-Computer Interaction, 2nd ed.; Morgan Kaufmann Publishers: Burlington, MA, USA, 2017; ISBN 978-0-12-809343-6. [Google Scholar]
  65. McCambridge, J.; de Bruin, M.; Witton, J. The Effects of Demand Characteristics on Research Participant Behaviours in Non-Laboratory Settings: A Systematic Review. PLoS ONE 2012, 7, e39116. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  66. Button, K.S.; Ioannidis, J.P.A.; Mokrysz, C.; Nosek, B.A.; Flint, J.; Robinson, E.S.J.; Munafò, M.R. Power Failure: Why Small Sample Size Undermines the Reliability of Neuroscience. Nat. Rev. Neurosci. 2013, 14, 365–376. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  67. Tachi, S. From 3D to VR and Further to Telexistence. In Proceedings of the 2013 23rd International Conference on Artificial Reality and Telexistence (ICAT), Tokyo, Japan, 11–13 December 2013; pp. 1–10. [Google Scholar]
  68. Kim, J.-H.; Starr, J.W.; Lattimer, B.Y. Firefighting Robot Stereo Infrared Vision and Radar Sensor Fusion for Imaging through Smoke. Fire Technol. 2015, 51, 823–845. [Google Scholar] [CrossRef]
  69. Ewerton, M.; Arenz, O.; Peters, J. Assisted Teleoperation in Changing Environments with a Mixture of Virtual Guides. Adv. Robot. 2020, 34, 1157–1170. [Google Scholar] [CrossRef]
  70. Toet, A.; Kuling, I.A.; Krom, B.N.; van Erp, J.B.F. Toward Enhanced Teleoperation Through Embodiment. Front. Robot. AI 2020, 7, 14. [Google Scholar] [CrossRef] [Green Version]
  71. Cipresso, P.; Giglioli, I.A.C.; Raya, M.A.; Riva, G. The Past, Present, and Future of Virtual and Augmented Reality Research: A Network and Cluster Analysis of the Literature. Front. Psychol. 2018, 9, 2086. [Google Scholar] [CrossRef] [Green Version]
  72. Liu, H.; Wang, L. An AR-Based Worker Support System for Human-Robot Collaboration. Procedia Manuf. 2017, 11, 22–30. [Google Scholar] [CrossRef]
  73. Hietanen, A.; Pieters, R.; Lanz, M.; Latokartano, J.; Kämäräinen, J.-K. AR-Based Interaction for Human-Robot Collaborative Manufacturing. Robot. Comput.-Integr. Manuf. 2020, 63, 101891. [Google Scholar] [CrossRef]
  74. Palmarini, R.; del Amo, I.F.; Bertolino, G.; Dini, G.; Erkoyuncu, J.A.; Roy, R.; Farnsworth, M. Designing an AR Interface to Improve Trust in Human-Robots Collaboration. Procedia CIRP 2018, 70, 350–355. [Google Scholar] [CrossRef]
  75. Michalos, G.; Karagiannis, P.; Makris, S.; Tokçalar, Ö.; Chryssolouris, G. Augmented Reality (AR) Applications for Supporting Human-Robot Interactive Cooperation. Procedia CIRP 2016, 41, 370–375. [Google Scholar] [CrossRef] [Green Version]
  76. Abel, M.; Kuz, S.; Patel, H.J.; Petruck, H.; Schlick, C.M.; Pellicano, A.; Binkofski, F.C. Gender Effects in Observation of Robotic and Humanoid Actions. Front. Psychol. 2020, 11, 797. [Google Scholar] [CrossRef] [PubMed]
  77. Nomura, T. Robots and Gender. Gend. Genome 2017, 1, 18–26. [Google Scholar] [CrossRef] [Green Version]
  78. Oh, Y.H.; Ju, D.Y. Age-Related Differences in Fixation Pattern on a Companion Robot. Sensors 2020, 20, 3807. [Google Scholar] [CrossRef]
Figure 1. Top: Example VR robot models arranged according to their anthropomorphism: (R0) a basic one-arm robot; (R1) an articulated arm; (R2) a two-arm Baxter; (R3) a humanoid robot. Bottom: an example VR collaboration scene used by the authors, developed with the Unity game engine (Unity Technologies, San Francisco, CA, USA). The scene shows a basic tool-passing task in which subject kinematic and physiological data can be recorded in response to manipulated scene features (e.g., cobot appearance, speed, etc.).
Table 1. Types of collaboration with industrial robots at present. As the level of collaboration increases (left to right), so does the requirement for intrinsic safety features vs. external sensors. Source: IFR Position Paper [4], adapted from Bauer et al. [8].
Level of collaboration (workspace arrangement at each level; the requirement for intrinsic safety features vs. external sensors increases from left to right):
1. Cell: fenced robot.
2. Coexistence: no fence, but no shared workspace.
3. Sequential Collaboration: robot and worker both active in the workspace, but their movements are sequential.
4. Cooperation: robot and worker work on the same part at the same time, both in motion.
5. Responsive Collaboration: robot responds in real time to the movement of the operator.
Table 2. Critical variables for cobot testing using virtual/extended reality are defined based on the literature reviewed here. We divide manipulated (independent) variables as those about either the cobot, the context, or the user. In each box, examples of each manipulated/measured variable are provided.
Critical Variables for HRC Experiments
Manipulated (independent) variables:
  • Cobot: anthropomorphism; presence of gaze; speed; accuracy; fluidity; proximity; size.
  • Environment: auditory noise; scene type (e.g., factory); lighting.
  • User: demographics (gender, age); cognitive load; experience with technology.
Measured (dependent) variables:
  • Subjective: acceptability/trust ratings; attributing blame; sense of presence (realism).
  • Objective: physiological responses; motor efficiency; pupillometry.
Table 3. Summary of identified issues in VR experiments on HRC and proposed remedies.
Issues in HRC Studies and Proposed Remedies for VR Experiments
  • Problem: Testing using real robots is limited to current designs only [22]. Remedy: Use VR models of hypothetical cobot designs to manipulate more variables.
  • Problem: Low statistical power/small sample sizes [22,50]. Remedy: Increase samples (use statistical power calculators to determine sample size); increase within-subject repetitions.
  • Problem: Self-reports/physiological markers alone are not sensitive enough for assessing UX [50]. Remedy: Combine self-reports with psychophysiological markers (heart rate, pupillometry, etc.); use standardized tools for measuring trust/acceptability; use time-resolved measures of stress, as provided by physiological markers.
  • Problem: Results on HRC are difficult to generalize across populations. Remedy: Test subject populations of diverse demographic characteristics (gender, age, experience with robots, etc.).
  • Problem: Limited feeling of presence. Remedy: Increase immersion by using higher fidelity of stimuli, sound and haptic information.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
