Review

A Systematic Review of Virtual Reality Interfaces for Controlling and Interacting with Robots

Murphy Wonsick and Taskin Padir
Institute for Experiential Robotics, Northeastern University, Boston, MA 02115, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(24), 9051; https://doi.org/10.3390/app10249051
Submission received: 21 October 2020 / Revised: 30 November 2020 / Accepted: 14 December 2020 / Published: 18 December 2020
(This article belongs to the Special Issue Computer Graphics and Virtual Reality)

Abstract

There is a significant amount of synergy between virtual reality (VR) and the field of robotics. However, it has only been in approximately the past five years that commercial immersive VR devices have been available to developers. This new availability has led to a rapid increase in research using VR devices in the field of robotics, especially in the development of VR interfaces for operating robots. In this paper, we present a systematic review on VR interfaces for robot operation that utilize commercially available immersive VR devices. A total of 41 papers published between 2016 and 2020 were collected for review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Papers are discussed and categorized into five categories: (1) Visualization, which focuses on displaying data or information to operators; (2) Robot Control and Planning, which focuses on connecting human input or movement to robot movement; (3) Interaction, which focuses on the development of new interaction techniques and/or identifying best interaction practices; (4) Usability, which focuses on user experiences of VR interfaces; and (5) Infrastructure, which focuses on system architectures or software to support connecting VR and robots for interface development. Additionally, we provide future directions for continued development of VR interfaces for operating robots.

1. Introduction

Even though the concept of virtual reality (VR) has been around since the 1960s [1], its adoption in the field of robotics has largely been limited to robot-assisted surgery [2,3,4] and robot-assisted rehabilitation [5,6,7]. However, there is a great deal of synergy between VR and robotics [8]. Robots can make VR experiences more immersive by providing haptic feedback to users [9,10] or physical items for user interaction [11,12]. Conversely, VR is an enabling technology in robotics, providing immersive robot teleoperation [13], aiding robot programming [14], supporting human-robot interaction and collaboration studies [15,16,17], and even training individuals on how to collaborate with robots [18].
The limited integration of VR in the field of robotics has been largely due to the lack of available and affordable commercial VR devices. Over the past decade, however, considerable advances have made VR devices with immersive visualization and 6 degree-of-freedom (DOF) tracking commercially available and relatively affordable. With these advances, the use of VR in robotics has increased, and one area that has seen particular growth is the use of VR devices for human-robot interaction and collaboration.
VR provides an opportunity to create more natural and intuitive interfaces by immersing users in a 3D environment where they can view and interact with robots in shared or remote spaces. This can allow for better situational awareness and easier interaction. Robot interfaces generally fall into two types: teleoperation, where an operator controls a robot’s end-effector, and shared control, where an operator provides high-level commands to the robot. Both interface types allow operators to control a robot from remote locations, which lets robots take on dangerous, distant, and daring jobs while removing risk from the operator but keeping their knowledge and expertise in the loop. The immersive nature of VR allows these interfaces to be applied across a wider range of applications.
In this paper, we present a systematic review of the use of virtual reality in developing interfaces for interacting with robots. This work is motivated by the need to study this emerging field at the nexus of humans, VR, and robotics. The goal of this review is to consolidate the recent literature on VR interfaces for robots and identify the current state of the art and areas for future research.

2. Methodology

To identify how virtual reality is being used in robot interfaces, we conducted a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [19]. Publications were collected from the following databases:
  • IEEE Xplore
  • ACM Digital Library
  • SAGE Publications
  • Springer Link
  • MDPI
These databases were selected based on their proceedings and journals associated with robotics and their accessibility within the Northeastern University library network.
We used the following search string to capture relevant papers:
(“virtual reality” OR “vr”) AND robot
The search string was purposefully kept generic in order to capture as many relevant papers as possible. The search was performed at the abstract level for all the listed databases except Springer Link, which did not provide the option to customize the search. We restricted the search to abstracts because, in general, relevant records contained these keywords there; this helped return only the most relevant records by eliminating those that mentioned virtual reality and/or robots only in passing. Since Springer Link did not allow a custom search, we used its built-in relevance sorting and included only the first one hundred results. This cutoff was selected by scanning record titles and identifying the likely relevant records: almost all relevant titles appeared within the first fifty results, and we doubled this number to catch additional potentially relevant works. This number was also on par with the number of records pulled from the other databases.
Results were narrowed to those published from 2016 through September 2020. We decided to look only at the past four years, since 2016 is the year when virtual reality started becoming widely available at the consumer level (https://www.washingtonpost.com/news/the-switch/wp/2016/01/15/what-to-expect-from-virtual-reality-in-2016/). Additionally, the language was restricted to English.
In total, we collected 518 records for evaluation. Figure 1 shows the flow diagram of the screening process. Records were first filtered at the title/abstract level. Records that pertained to the areas of robot-assisted surgery, robot-assisted rehabilitation, robots being used as haptic devices for VR, or VR development environments being used as robot simulators were rejected. Records whose focus was not on both VR and robotics, or that were ancillary in nature, were rejected as well. After the initial screening, we moved to a full-text screening. The purpose of this review was to specifically examine papers that utilize VR in robot interfaces; therefore, papers involving other areas of VR and robots, such as using VR to help conduct human-robot interaction studies, were removed from consideration. Additionally, records were rejected at this level for either belonging to a previously rejected category or not containing substantial results. We nevertheless aimed to include as many relevant works as possible for a complete and thorough review of the current research in VR robot interfaces. Furthermore, we consolidated related records by the same author(s) and included only the most recent record. After the full evaluation, a total of 41 records were identified for inclusion in this review. Appendix A contains a table briefly summarizing all the included papers.

3. Results and Discussion

For this review, we categorized papers into five different areas: Visualization, Robot Control and Planning, Interaction, Usability, and Infrastructure. There is natural overlap between the categories; however, each paper was assigned to the category in which its contribution was greatest. Figure 2 displays the breakdown of papers in each category.

3.1. Visualization

The Visualization category encompasses concepts mainly focused on displaying context to the user. Currently, most of this work is on identifying the best way to represent the real world in VR. Ref. [20] investigates the influence that displaying different levels of environmental information has on task performance and operator situation awareness in VR robot interfaces. Specifically, they look at two informational contexts: full information and a representative model. The full information context shows all the contextual and task-related information to the operator, including a mesh of the environment surrounding the robot, while the representative model context shows only the task-related information, without any visualization of the robot’s surrounding environment. Ultimately, they find that the time to complete the task is reduced when displaying full information compared to the representative model, while accuracy remains the same between both; however, attentional demand was significantly higher with full information. Ref. [21] saw similar results when comparing a representative model visualization of the full environment to a real-time point cloud visualization of the real environment and found that the success rate and usability ratings were higher with the point cloud visualization. However, point cloud visualization can be computationally expensive and require a large amount of bandwidth. Ref. [22] looks to solve this problem by presenting a method to efficiently process and visualize point clouds for VR applications. Ref. [23] evaluates how using virtual features, such as a 3D robot model, object target poses, or displaying distance to a target, affects operator performance in completing teleoperation pick-and-place tasks. Overall, their results show that virtual features increase the accuracy and efficiency of task performance and significantly reduce differences between expert and non-expert users. Ref. [24] compares an immersive 3D visualization to a standard 2D video-based visualization. They found that displaying real-time 3D scene information improves the ability to self-localize in the scene, maneuver around corners, avoid obstacles, assess terrain for navigability, and control the visualization view. Ref. [25] takes a slightly different approach and assesses how different viewpoints can affect success when teleoperating a construction robot. Overall, they show that an active viewpoint, where the user is able to move their head to change the visualization, improves the success of teleoperation compared to an automatic bird’s-eye viewpoint that follows the movement of the robot to keep the end-effector in view.
Virtual reality causes motion sickness in some individuals, and this can be compounded by how data from a robot are presented. Ref. [26] aims to identify the effect of linear velocity, acceleration, and angular displacement on VR motion sickness. They use their findings to develop a head-synchronized controller to reduce motion sickness when controlling a drone with VR. The developed controller reduces angular velocity by synchronizing the drone’s movement with the user’s head movement, which in turn reduces sensory conflict for the operator and therefore motion sickness as well. Similarly, ref. [27] analyzes the effects of visual and control latency in drones when using VR. Unsurprisingly, they find that an increase in latency results in worse flight performance and a higher level of motion sickness. However, they also find that the more time users spend operating the system, the more tolerant they become. Refs. [28,29] both present methods to decouple an operator’s head movement from the robot’s current view, i.e., the robot’s camera, when using VR for robot teleoperation. Traditionally, an operator’s head movement directly controls the robot’s camera. However, when the operator moves their head, there is a delay between the operator movement and the updated image being returned by the robot, caused by a combination of latency in the system and robot hardware speed. This delay can cause intersensory conflict and inflict motion sickness. Additionally, ref. [28] ran a user study and found that their decoupling method, which instantly returns an updated image upon head movement, was on average less nauseating and more visually comfortable for users. Relatedly, ref. [30] aims to reduce motion sickness by identifying the best way to display stereo-vision cameras inside a VR headset. In a 15-person user study, individuals, on average, reported feeling less motion sick when the stereo cameras were rendered on a plane inside the VR headset, rather than rendering the cameras directly to each eye, which creates a more immersive view. Ref. [31] studies how using human perception-optimized trajectory planning in mobile telepresence robots can reduce motion sickness. Their results show that minimizing the number of turns a robot takes decreases a user’s motion sickness.
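To make the idea of head-synchronized control more concrete, the sketch below shows a generic proportional yaw-tracking rule in which the drone's commanded yaw rate follows the operator's head yaw rather than being commanded independently. This is only a hedged illustration of the general principle, not the controller developed in [26]; the gain, rate limit, and function names are arbitrary assumptions.

```python
import numpy as np

def wrap_to_pi(angle):
    """Wrap an angle in radians to the interval [-pi, pi]."""
    return (angle + np.pi) % (2.0 * np.pi) - np.pi

def head_synced_yaw_rate(head_yaw, drone_yaw, gain=1.5, max_rate=0.8):
    """Proportional, rate-limited yaw-rate command that tracks the operator's head yaw.

    Tying the drone's rotation to the operator's own head motion (and capping the rate)
    keeps the rendered view consistent with what the vestibular system expects,
    which is the kind of sensory-conflict reduction discussed above.
    """
    error = wrap_to_pi(head_yaw - drone_yaw)
    return float(np.clip(gain * error, -max_rate, max_rate))

# Example: the operator's head is 20 degrees to the left of the drone's heading.
print(head_synced_yaw_rate(np.deg2rad(20.0), 0.0))
```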

3.2. Robot Control and Planning

This category covers work focused on connecting human input to robot movement to enable successful teleoperation. Ref. [32] defines three mapping models to categorize teleoperation interfaces: direct, cyber-physical, and homunculus. A simplified representation of these models is shown in Figure 3. These models differ in how information is mapped between the robot space and the user space for robot teleoperation. The direct model maps the user’s hands and eyes directly to the robot. The cyber-physical model places a mapping between the user and the robot: the user’s space is mapped to a virtual robot and environment, and the robot space is similarly mapped back to this virtual space. Finally, the homunculus model combines the two by decoupling the mapping of the direct model through a virtual space between the user and the robot, but without requiring a complete virtual robot and environment that matches the real one, as the cyber-physical model does.
Ref. [33] develops a solution that follows the direct model by imitating a user’s upper body pose to teleoperate a humanoid robot. Their method differs from other works in this field in that their system imitates the full human arm pose during teleoperation rather than just controlling the end-effector pose of the robot. In a user study, they found that their imitation teleoperation method was preferred by users and allowed them to complete the task faster and with less perceived workload compared to direct manipulation programming of the robot. Instead of directly tracking an operator’s arm pose, refs. [34,35] both use machine learning techniques to learn a model that maps user input to robot motion in VR teleoperation interfaces. This way, the only input required from the operator is the desired end-effector pose, which a handheld VR controller can supply, while still providing efficient teleoperation. Ref. [36] also uses machine learning to improve teleoperation, but their focus is on a predict-then-blend framework to aid the operator in dual-arm robot manipulation tasks. Using their learned model, they predict the manipulation target given the current robot trajectory and then estimate the desired robot motion given the current operator input. This estimate can then be used to correct the input from the operator and improve the efficiency of the teleoperation.
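In both the direct and learned-mapping approaches above, the interface ultimately has to express a pose reported in the VR tracking frame as an end-effector target in the robot's base frame. The sketch below illustrates this frame mapping under stated assumptions; it is not the implementation of any reviewed system, and the calibration transform T_robot_vr and the function names are illustrative placeholders.

```python
import numpy as np

def pose_to_matrix(position, quaternion):
    """Build a 4x4 homogeneous transform from a position and a unit quaternion (x, y, z, w)."""
    x, y, z, w = quaternion
    # Rotation matrix from a unit quaternion (standard Hamilton convention).
    R = np.array([
        [1 - 2 * (y * y + z * z),     2 * (x * y - z * w),     2 * (x * z + y * w)],
        [    2 * (x * y + z * w), 1 - 2 * (x * x + z * z),     2 * (y * z - x * w)],
        [    2 * (x * z - y * w),     2 * (y * z + x * w), 1 - 2 * (x * x + y * y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = position
    return T

# Calibration transform from the VR tracking frame to the robot base frame,
# obtained once by a calibration routine; identity here for illustration only.
T_robot_vr = np.eye(4)

def controller_to_end_effector_target(controller_pos, controller_quat):
    """Map a VR controller pose (VR frame) to an end-effector target pose (robot base frame)."""
    T_vr_controller = pose_to_matrix(controller_pos, controller_quat)
    return T_robot_vr @ T_vr_controller  # 4x4 target for an IK or Cartesian controller

# Example call with made-up controller data (identity orientation).
print(controller_to_end_effector_target([0.1, 1.2, -0.3], [0.0, 0.0, 0.0, 1.0]))
```

In practice, the calibration transform would come from a procedure such as the one in [60], and the resulting target pose would be handed to the robot's inverse kinematics or Cartesian controller.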
Ref. [37] presents a system architecture and controller that falls under the cyber-physical model by utilizing a digital twin of the robot. The idea is that there are three control loops: one for the operator input, one for the physical robot, and one that serves as the interaction between the digital twin and the actual robot. The benefit of this method is that the robot and human do not need to be co-located; instead, the robot can be controlled over long distances. Ref. [38] develops an interaction method that follows the homunculus model by decoupling the human input from the control loop of the robot and avoiding directly using the tracked operator movement in the robot controller. To accomplish this, they design a force controller with two interaction modes, one for coarse movement and one for fine movement. In the coarse movement mode, the operator is able to lock the robot movement and adjust the desired position and orientation of the robot by manipulating a virtual sphere with a coordinate frame. Once satisfied, the operator can switch to the fine movement mode, in which orientation is locked, only position is controlled, and the robot moves continuously as long as the virtual sphere is being interacted with. They compared their teleoperation interface to the one presented in [39], which directly tracks an operator’s movements and therefore requires operators to simultaneously control both the position and orientation of the robot, and found that their system had a 93.75% success rate in completing a stacking task compared to a 25% rate, which they attribute to the decoupling and force regulation in their controllers. Similar results were found in [40], where people preferred to control the movement of a robot by clicking and dragging a virtual sphere rather than having the robot directly follow the movements of the VR controller.
Improvements in virtual reality robot control and planning are also being explored for specific applications. Ref. [41] presents an architecture that estimates human intent in VR to operate a welding robot, and their results show an increase in performance with their human intent recognizer. Instead of recognizing human intent, ref. [42] develops an optimization-based planner to control a painting drone in VR; qualitative results, in which an operator controls the painting drone to trace a previously painted contoured line, show high system accuracy. Similarly, ref. [43] designs a teleoperation system for aerial manipulation that includes tactile feedback. Ref. [13] defines a control architecture that utilizes a VR headset, VR controllers, and an omni-directional treadmill to create a fully immersive teleoperation interface to operate a humanoid robot.

3.3. Interaction

Interaction papers focus on both the development of new interaction techniques for controlling robots in VR and identifying the best interaction practices. Ref. [44] develops a VR interface that incorporates object affordances to simplify teleoperation of two robotic arms equipped with dexterous robot hands. They provide grasping and manipulation assistance by allowing the operator to teleoperate the arms towards an object of interest and then presenting an affordance menu, which allows the user to select from a list of possible grasps and actions that can be performed on the object. Ref. [45] designs a visual programming system to define navigation tasks for mobile robots. Their system works by constructing a VR environment built from the output of the robot’s visual simultaneous localization and mapping (vSLAM) and then allowing users to select high-level landmarks along with task-level motion commands; from there, the system can plan a path for the robot to accomplish the desired tasks. Ref. [46] looks at how predictive components can improve operator situational awareness and workload in VR interfaces for multi-robot systems. Their results show no significant improvement when using predictive components; however, the authors acknowledge that these results could be due to a lack of operator training in understanding the predictive cues, so there is additional work to be done in this area. Ref. [47] implements a system that allows users to collaborate with a robot to conduct 3D mapping of indoor environments using VR. Their system uses VR headset pose data to estimate human intentions, which are then used to navigate a mobile robot to build a 3D map of the environment. This map is rendered inside the VR headset to provide an immersive view for the user.
There has also been some work towards identifying the best way to interact with robots in VR. Ref. [48] compares two different VR interactions, position control and trajectory control, for remotely operating a robotic manipulator. In position control, the user places a single waypoint for the robot to autonomously navigate to, with the option to stop the motion at any time. In trajectory control, the user moves the arm by pressing a button, and the robot follows the relative movements of the controller. They conduct a 12-person user study comparing their two VR interfaces and find that, when using position control, users were in general both faster and more accurate in the tasks. Ref. [49] explores developing a VR interface for humanoids by taking inspiration from VR video games to identify the best control schemes and practices for VR. They summarize a total of 14 VR games, specifically looking at what viewpoint each uses, how movement in the game is accomplished, how manipulation is done, and how information is brought up for display. Ref. [50] investigates using different controllers (VR controllers to allow for grabbing and 3D mice for driving) to operate a pair of robotic arms in a pick-and-place task and ultimately finds that the VR controllers allowed for faster operation due to faster gross movement control.

3.4. Usability

The Usability category highlights user experiences of VR interfaces. At the moment, research in this area is primarily focused on comparing traditional interfaces for robots to VR interfaces. Traditional interfaces include two types. The most common one utilizes a monitor to view data and either a keyboard and mouse or a gaming controller to interact with the robot. The second one uses direct manipulation, where users physically grab and move the robot around to “program” it to complete a task; this is also frequently called learning from demonstration. Ref. [39] compares four different interfaces for a robotic manipulator: direct manipulation, computer (keyboard, mouse, and monitor), a partial VR interface that only uses positional hand tracking and uses a monitor instead of a VR headset, and a full VR interface with positional hand tracking and a VR headset. In an 18-person user study using a robotic manipulator in a cup stacking task, they found that their full VR interface was significantly better than the keyboard and monitor interface, with a 66% improvement in task completion time, lower workload, higher usability, and a higher likability score. Additionally, 5 of their 18 users were never able to complete the task with the keyboard and monitor interface, but all of them were able to complete it with the full VR interface. However, the full VR interface was slower and had a higher workload than direct manipulation, though it did have marginally higher usability and likability scores. Furthermore, when comparing the full VR interface to the partial VR interface, they found that task completion times were not significantly faster for the full VR interface, but it had higher usability and was marginally more likable. A similar 11-person user study was done in [51], comparing their VR programming interface with a direct manipulation interface and a keyboard, mouse, and monitor interface. Their VR interface has one major difference from most others in the area, in that it uses gesture recognition to teleoperate the robot rather than VR controllers. Their results show that the direct manipulation approach on average took the shortest amount of time and caused the smallest number of collisions. However, the VR approach was considered more natural by users and still performed better on average in performance time and number of collisions than the keyboard and monitor interface. Ref. [52] evaluates the use of VR for teleoperation and telemanipulation tasks using mobile robots equipped with a robotic arm, comparing camera streams displayed on a monitor with stereo camera streams displayed in a VR headset to provide stereo vision for the operator. In their 16-person user study, they found no substantial difference in task completion times for driving and observation tasks. However, for manipulation tasks there was a 20–25% increase in completion time when using the traditional interface over the VR one. They attribute these results to the stereo vision the VR headset provides over a monitor: for manipulation tasks, the added depth perception increases people’s ability to estimate distances, but this benefit is not significant for driving or observation tasks. Ref. [53] aims to see if VR interfaces lead to improvements in workload, situational awareness, and performance for operators of multiple robots.
They conduct an 8-person user study comparing a traditional keyboard, mouse, and monitor interface to a VR interface in both an indoor and an outdoor scenario with three types of robots: aerial, ground, and manipulator. In general, they show that results are better with the VR interface than the traditional one in terms of operator performance, workload, and situational awareness. However, similar to [52], they found that performance when driving the mobile robot was better with the traditional interface. Ref. [54] looks at user preferences in using VR interfaces for teleoperating robots in combat or hostile environments. Although their work is done entirely in simulation, they found in a 10-person user study that 90% of the users preferred the immersive VR interface to a traditional, non-immersive keyboard, mouse, and monitor interface.

3.5. Infrastructure

The Infrastructure category focuses on system architectures and software that help support connecting VR and robots for interface development. The majority of papers in this review utilize ROS for the robot development and Unity for the VR development, and there is currently no standard way to interface between the two. Therefore, it is necessary to find a way to bridge the gap. Ref. [55] presents a system architecture for working with multi-robot systems using ROS and a virtual reality interface developed in Unity. Several works have also provided open-source solutions to allow for WebSocket communication between ROS and Unity [56,57]. Additionally, a group at Siemens is developing an open-source library called ROS# [58]. Ref. [59] provides an alternative to ROS# that decreases the message size and therefore in general allows for faster data transfer; however, their solution is currently not open-source. Ref. [60] presents a method capable of automatic calibration to provide the spatial relationship between a robot cell and a VR system. Their work is provided as an open-source ROS package.
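As a concrete illustration of the WebSocket bridging that several of these solutions build on [56,57], the snippet below sketches a minimal client speaking the rosbridge JSON protocol from Python. It is a hedged example rather than the API of any cited library: the server address, port, and topic names are assumptions, and a real interface would typically run equivalent logic on the Unity/VR side in C#.

```python
import json
from websocket import create_connection  # pip install websocket-client

# Assumed rosbridge server; rosbridge_suite listens on ws://<host>:9090 by default.
ws = create_connection("ws://localhost:9090")

# Subscribe to the robot's joint states so the VR scene can mirror the physical robot.
ws.send(json.dumps({"op": "subscribe",
                    "topic": "/joint_states",
                    "type": "sensor_msgs/JointState"}))

# Advertise and publish a velocity command, e.g., derived from VR controller input.
ws.send(json.dumps({"op": "advertise",
                    "topic": "/cmd_vel",
                    "type": "geometry_msgs/Twist"}))
ws.send(json.dumps({"op": "publish",
                    "topic": "/cmd_vel",
                    "msg": {"linear": {"x": 0.2, "y": 0.0, "z": 0.0},
                            "angular": {"x": 0.0, "y": 0.0, "z": 0.1}}}))

# Receive one message from the subscribed topic and print the joint names it contains.
incoming = json.loads(ws.recv())
print(incoming.get("msg", {}).get("name"))

ws.close()
```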

4. Takeaways

VR interfaces for operating robots have come a long way in the past four years. The commercial availability and affordability of VR devices for many researchers has helped tremendously in furthering the state of the art. There is also a growing VR community to help support development, which lowers the barrier to entry for new developers. This is particularly relevant in the area of system architecture and infrastructure for VR robot interfaces, as there are now several open-source solutions for connecting ROS and Unity, making it much easier to get started with VR robot interface development. Additionally, several works show the promise of VR interfaces over traditional ones, such as 2D computer interfaces. Overall, it has been shown that VR interfaces reduce task completion time, increase operator performance, and are generally preferred over traditional interfaces. These works help support the need for continued development of VR interfaces for robot operation.
Although there has been a significant amount of foundational work in VR robot interfaces, several areas still require further development. For example, there has been a significant amount of work in creating VR teleoperation interfaces, even though teleoperation may not always be a viable option and a shared-control interface may be a more appropriate solution, especially for complex systems. However, there has been limited research on shared-control VR interfaces, despite the fact that these interfaces allow the robot to act semi-autonomously, which in turn allows the user to provide input only when needed and to focus on the critical elements of a task. In addition, most VR interfaces so far have been designed for robot manipulators, aerial robots, or mobile robots. Figure 4 displays a heat map of the reviewed papers in each category and the types of robots used. At the moment, there is very limited work using bipedal humanoid robots. Humanoids, though, are general-purpose platforms that can be used in diverse environments designed for humans; they are also complex dynamic systems that can benefit immensely from the immersive 3D interaction environment that VR devices provide.
Furthermore, as VR interface development continues, it is important to continually evaluate the usability and likability of these new advancements among users. It is also important to evaluate these interfaces in real-world applications with actual potential users of the systems.

5. Future Directions

VR robot interfaces still require further development before they are ready for wide adoption. Highlighted below are some next steps in each area of VR robot interfaces to continue advancing the field.

5.1. Visualization

Improving 3D visualization of the robot’s environment inside a VR headset. At the moment, there are two main techniques for 3D visualization of the robot’s environment inside a VR headset: using pre-designed 3D virtual models that mimic the real world, or using visualization data from the robot, i.e., point-cloud data. Both techniques, however, have issues that still need to be addressed. Using a modeled environment of the world requires that the environment be both known ahead of time and static, while using real-time visualization from the robot can be computationally expensive and requires large amounts of bandwidth to stream data from the robot to the VR headset. Ref. [21] presents a method that uses model-based background segmentation of point-cloud data to reduce bandwidth, but their method requires multiple depth sensors and knowledge of background models, and does not work with mobile platforms. Therefore, there is room for additional work in point-cloud streaming and rendering inside VR headsets.
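One common, if partial, mitigation is to thin the cloud on the robot side before streaming it to the headset. The sketch below shows voxel downsampling with the Open3D library under illustrative assumptions (a synthetic cloud stands in for sensor data, and the 2 cm voxel size is arbitrary); it is not the segmentation method of [21] or the pipeline of [22].

```python
import numpy as np
import open3d as o3d  # pip install open3d

# Stand-in for a dense cloud captured by the robot's depth sensor.
cloud = o3d.geometry.PointCloud()
cloud.points = o3d.utility.Vector3dVector(np.random.rand(200_000, 3))

# Keep roughly one point per 2 cm voxel; on real sensor clouds, where points cluster
# densely on surfaces, this sharply reduces the bytes streamed to the VR headset.
sparse = cloud.voxel_down_sample(voxel_size=0.02)

# Pack the surviving points as float32 for transmission to the rendering side.
payload = np.asarray(sparse.points, dtype=np.float32).tobytes()
print(f"{len(cloud.points)} -> {len(sparse.points)} points, {len(payload)} bytes to stream")
```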
Identifying what data to present and how to present it. Current data visualization is focused on rendering images or point-cloud data from a robot. However, there is opportunity to also display other types of information that can be virtually rendered in the scene. Most of the interfaces presented in this review already render a virtual model of the robot that mimics the current state of the physical robot, but visualizing points of interest, end-effector distance to objects, and similar cues could also be useful. Ref. [23] investigates this area and overall found promising results in using additional virtual elements.

5.2. Robot Control and Motion Planning

Further development in shared-control interfaces. Currently, the state of the art focuses primarily on controllers or motion planners that allow operators to teleoperate the robot, but there is a plethora of research on shared-control planners that could be utilized. Results in [48] show that users were faster and more accurate when placing waypoints for a robot to traverse (shared control) rather than having the robot directly follow the user’s hand (teleoperation). There is opportunity for additional development in integrating these types of shared-control methods, as they could offload to the robot the tasks it is well suited for and allow users to focus on high-level operation.
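One lightweight way such methods can be layered onto an existing teleoperation interface is to arbitrate between the operator's command and an autonomous planner's suggestion. The sketch below shows a simple linear blending rule in the spirit of the predict-then-blend idea discussed in Section 3.2; it is only an assumed illustration, not the implementation from [36], and the confidence signal and command vectors are placeholders.

```python
import numpy as np

def blend_command(user_cmd, auto_cmd, confidence):
    """Linearly arbitrate between the operator's command and the planner's suggestion.

    user_cmd, auto_cmd: desired end-effector velocity twists (6-vectors).
    confidence: 0.0 (follow the operator entirely) to 1.0 (follow the planner entirely),
    e.g., how certain the system is about the predicted manipulation target.
    """
    confidence = float(np.clip(confidence, 0.0, 1.0))
    return (1.0 - confidence) * np.asarray(user_cmd) + confidence * np.asarray(auto_cmd)

# Example: the planner gently nudges the operator's motion toward the predicted goal.
user_cmd = np.array([0.10, 0.00, 0.02, 0.0, 0.0, 0.0])  # from the VR controller
auto_cmd = np.array([0.08, 0.03, 0.00, 0.0, 0.0, 0.0])  # from the autonomous planner
print(blend_command(user_cmd, auto_cmd, confidence=0.6))
```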

5.3. Interaction

Identifying best practices for interacting in VR. As discussed in Section 3.3, there has been some research into best practices for interacting in VR; however, it is still limited. There is a large body of knowledge on best practices for designing user interfaces, but its focus is on interfaces for 2D devices, such as computers or mobile devices. Some guidelines exist for VR as well, but most of these are focused on gaming. It is important to understand best interaction practices in VR, with a focus on interacting with robots, in order to create understandable and enjoyable VR robot interfaces.
Investigating different VR input devices. So far, there has been limited investigation into using other types of input devices for VR robot interfaces, such as VR gloves or omni-directional treadmills. These devices are used to create a more immersive experience in VR gaming and therefore could be useful in VR robot interfaces as well.

5.4. Usability

Diversifying user studies. User studies are typically conducted with small groups, usually 20 or fewer participants, and skew toward males in the 20–30 age bracket. This is understandable, as researchers usually recruit university students from their own departments, often in computer science and engineering. However, this group is not representative of the broader community that could make use of these interfaces. Robots are already actively deployed in several application areas, such as factories, bomb disposal, and search and rescue missions, so it is important to include users from these domains in user studies. Additionally, it is generally important to have a diverse group of individuals, as different groups bring different viewpoints, such as expert vs. non-expert or younger vs. older generations.
Conducting real-world application user studies. Presently, most user studies on the usability of VR robot interfaces use simplified, fabricated tasks that are not well grounded in real-world applications. They are also not conducted in real-world environments, and instead are often done inside a controllable lab or structured environment. However, the real world is unstructured and often unpredictable. Therefore, it is important to test these systems in real-world environments for real-world applications in order to more accurately understand their usability.

6. Conclusions

In this paper, we presented a systematic review of research published in the past four years on VR interfaces for robot operation. Papers were categorized into five different areas: Visualization, Robot Control and Planning, Interaction, Usability, and Infrastructure. We also highlighted some of the missing areas in VR robot interfaces as future directions for research. As robots become more capable and integrated into various workplaces and our daily lives, it will be crucial to have ways to interact with them. Virtual reality provides an opportunity to create natural and intuitive interfaces that allow for successful human-robot interaction for both expert and non-expert users.

Author Contributions

M.W. conducted the literature review and wrote the manuscript. T.P. supervised, reviewed, and edited the manuscript. Both authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the National Aeronautics and Space Administration under Grant No. NNX16AC48A issued through the Science and Technology Mission Directorate, and by the National Science Foundation under Award Nos. 1544895, 1928654, 1935337, and 1944453.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A

Table A1. Summary of Results.
Ref | Year | Category | Robot Type | Contribution
[54] | 2016 | Usability | (Virtual) Mobile | Identifies user preferences between a traditional computer interface and an immersive VR interface for teleoperation
[47] | 2016 | Interaction | Mobile | Develops a collaborative human-robot system to accomplish real-time mapping in VR
[45] | 2016 | Interaction | Mobile | Develops a visual programming system to define navigation tasks
[29] | 2016 | Visualization | Humanoid | Develops a method to use stereo panoramic reconstruction to reduce perceived visual latency during teleoperation
[25] | 2016 | Visualization | Manipulator | Evaluates the effects of different viewpoints on success when teleoperating a construction robot
[46] | 2017 | Interaction | (Virtual) Mobile & Aerial | Investigates the utility of predictive capabilities in VR interfaces for multi-robot teams using a traditional interface as a baseline
[51] | 2017 | Usability | Manipulator | Compares a developed VR programming interface with a direct manipulation interface and a keyboard, mouse, and monitor interface
[56] | 2017 | Infrastructure | N/A | Develops an open-source cloud-based software architecture to interface ROS with Unity
[23] | 2018 | Visualization | Dual-Arm Manipulator | Evaluates using virtual features to display task-related information to improve operator performance in completing teleoperation pick-and-place tasks
[40] | 2018 | Robot Control and Planning | Manipulator | Compares different VR interaction techniques for teleoperation
[22] | 2018 | Visualization | Manipulator | Develops a method to efficiently process and visualize point-clouds in VR
[30] | 2018 | Visualization | Mobile with Manipulator | Evaluates the best way to visualize stereo cameras inside a VR headset to minimize motion sickness
[32] | 2018 | Robot Control and Planning | Dual-Arm Manipulator | Develops a teleoperation framework that can quickly map user input to robot movement and vice versa
[27] | 2018 | Visualization | (Virtual) Aerial | Evaluates the effects of visual and control latency in drones when using VR
[59] | 2018 | Infrastructure | N/A | Develops a framework to interface ROS with Unity
[57] | 2018 | Infrastructure | N/A | Develops an open-source framework to interface ROS with Unity
[28] | 2019 | Visualization | (Virtual) Mobile | Develops an image projection method that removes discrepancies between robot and user head pose
[50] | 2019 | Interaction | Dual-Arm Manipulator | Evaluates using different controllers in teleoperation
[44] | 2019 | Interaction | Dual-Arm Manipulator | Develops a telemanipulation framework that incorporates a set of grasp affordances to simplify operation
[49] | 2019 | Interaction | Humanoid (Bipedal) | Summarizes data visualization and interaction techniques of VR video games for adoption in VR robot interfaces
[33] | 2019 | Robot Control and Planning | Humanoid (Mobile Base) | Develops a teleoperation system that imitates the user's upper body pose in real time
[53] | 2019 | Usability | Mobile with Manipulator & Aerial | Compares a traditional interface to a VR interface for multi-robot missions
[24] | 2019 | Visualization | Mobile with Manipulator | Compares an immersive VR visualization to a monitor video-based visualization for robot navigation
[21] | 2019 | Visualization | Manipulator | Compares a representative model visualization of the full environment to a real-time point cloud visualization of the real environment for teleoperation
[37] | 2019 | Robot Control and Planning | Manipulator | Develops a framework that allows robot teleoperation through the use of a digital twin
[20] | 2019 | Visualization | Manipulator | Investigates the influence that displaying different levels of environmental information has on task performance and operator situation awareness in VR robot interfaces
[42] | 2019 | Robot Control and Planning | Aerial | Develops an optimization-based planner to control a painting drone in VR
[43] | 2019 | Robot Control and Planning | Aerial with Manipulator | Develops a teleoperation system for aerial manipulation that includes tactile feedback
[34] | 2019 | Robot Control and Planning | Dual-Arm Manipulator | Develops a deep correspondence model that maps user input to robot motion for teleoperation
[36] | 2019 | Robot Control and Planning | Dual-Arm Manipulator | Develops a predict-then-blend framework to increase efficiency and reduce user workload
[60] | 2019 | Infrastructure | N/A | Develops an open-source solution that helps calibrate VR equipment (HTC Vive) inside a robot cell (hardware-agnostic, only requires ROS-Industrial and MoveIt plugin)
[55] | 2019 | Infrastructure | N/A | Defines a system architecture to work with multi-robot systems using ROS and Unity
[31] | 2020 | Visualization | Mobile | Develops and evaluates a human perception-optimized planner to reduce motion sickness
[13] | 2020 | Robot Control and Planning | Humanoid (Bipedal) | Develops a control architecture that utilizes a VR setup with an omni-directional treadmill to create a fully immersive teleoperation interface
[48] | 2020 | Interaction | Dual-Arm Manipulator | Compares two different VR control interactions, position control and trajectory control, for robot operation
[52] | 2020 | Usability | Mobile with Manipulator | Compares displaying camera streams on a monitor and displaying stereo camera streams inside a VR headset for teleoperation
[38] | 2020 | Robot Control and Planning | Manipulator | Develops two robot controllers to decouple an operator from the robot's control loop for teleoperation
[41] | 2020 | Robot Control and Planning | Manipulator | Develops a method that estimates human intent in VR to control a welding robot
[26] | 2020 | Visualization | Aerial | Develops a controller that synchronizes a drone's movement with the user's head movement to reduce motion sickness
[39] | 2020 | Usability | Dual-Arm Manipulator | Compares a VR interface to traditional interfaces for teleoperation
[35] | 2020 | Robot Control and Planning | Manipulator | Develops a motion planner using deep reinforcement learning to map the human workspace to the robot workspace for teleoperation

References

  1. Mazuryk, T.; Gervautz, M. Virtual Reality-History, Applications, Technology and Future; Vienna University of Technology: Vienna, Austria, 1996. [Google Scholar]
  2. Bric, J.D.; Lumbard, D.C.; Frelich, M.J.; Gould, J.C. Current state of virtual reality simulation in robotic surgery training: A review. Surg. Endosc. 2016, 30, 2169–2178. [Google Scholar] [CrossRef]
  3. Moglia, A.; Ferrari, V.; Morelli, L.; Ferrari, M.; Mosca, F.; Cuschieri, A. A systematic review of virtual reality simulators for robot-assisted surgery. Eur. Urol. 2016, 69, 1065–1080. [Google Scholar] [CrossRef] [PubMed]
  4. Van der Meijden, O.A.; Schijven, M.P. The value of haptic feedback in conventional and robot-assisted minimal invasive surgery and virtual reality training: A current review. Surg. Endosc. 2009, 23, 1180–1190. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Adamovich, S.V.; Fluet, G.G.; Tunik, E.; Merians, A.S. Sensorimotor training in virtual reality: A review. Neuro Rehabil. 2009, 25, 29–44. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Baur, K.; Schättin, A.; de Bruin, E.D.; Riener, R.; Duarte, J.E.; Wolf, P. Trends in robot-assisted and virtual reality-assisted neuromuscular therapy: A systematic review of health-related multiplayer games. J. Neuroeng. Rehabil. 2018, 15, 107. [Google Scholar] [CrossRef] [PubMed]
  7. Howard, M.C. A meta-analysis and systematic literature review of virtual reality rehabilitation programs. Comput. Hum. Behav. 2017, 70, 317–327. [Google Scholar] [CrossRef]
  8. Burdea, G.C. Invited review: The synergy between virtual reality and robotics. IEEE Trans. Robot. Autom. 1999, 15, 400–410. [Google Scholar] [CrossRef] [Green Version]
  9. Al-Sada, M.; Jiang, K.; Ranade, S.; Kalkattawi, M.; Nakajima, T. HapticSnakes: Multi-haptic feedback wearable robots for immersive virtual reality. Virtual Real. 2020, 24, 191–209. [Google Scholar] [CrossRef] [Green Version]
  10. Vonach, E.; Gatterer, C.; Kaufmann, H. VRRobot: Robot actuated props in an infinite virtual environment. In Proceedings of the 2017 IEEE Virtual Reality (VR), Los Angeles, CA, USA, 18–22 March 2017; pp. 74–83. [Google Scholar] [CrossRef]
  11. Zhao, Y.; Kim, L.H.; Wang, Y.; Le Goc, M.; Follmer, S. Robotic Assembly of Haptic Proxy Objects for Tangible Interaction and Virtual Reality. In Proceedings of the Interactive Surfaces and Spaces—ISS ’17, Brighton, UK, 17–20 October 2017; pp. 82–91. [Google Scholar] [CrossRef]
  12. Suzuki, R.; Hedayati, H.; Zheng, C.; Bohn, J.L.; Szafir, D.; Do, E.Y.L.; Gross, M.D.; Leithinger, D. RoomShift: Room-Scale Dynamic Haptics for VR with Furniture-Moving Swarm Robots. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI ’20, Honolulu, HI, USA, 25–30 April 2020; pp. 1–11. [Google Scholar] [CrossRef]
  13. Elobaid, M.; Hu, Y.; Romualdi, G.; Dafarra, S.; Babic, J.; Pucci, D. Telexistence and Teleoperation for Walking Humanoid Robots. In Intelligent Systems and Applications; Bi, Y., Bhatia, R., Kapoor, S., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 1106–1121. [Google Scholar]
  14. Bolano, G.; Roennau, A.; Dillmann, R.; Groz, A. Virtual Reality for Offline Programming of Robotic Applications with Online Teaching Methods. In Proceedings of the 2020 17th International Conference on Ubiquitous Robots (UR), Kyoto, Japan, 22–26 June 2020; pp. 625–630. [Google Scholar] [CrossRef]
  15. Liu, O.; Rakita, D.; Mutlu, B.; Gleicher, M. Understanding human-robot interaction in virtual reality. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28 August–1 September 2017; pp. 751–757. [Google Scholar] [CrossRef]
  16. Villani, V.; Capelli, B.; Sabattini, L. Use of Virtual Reality for the Evaluation of Human-Robot Interaction Systems in Complex Scenarios. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018; pp. 422–427. [Google Scholar] [CrossRef]
  17. Wijnen, L.; Lemaignan, S.; Bremner, P. Towards using Virtual Reality for Replicating HRI Studies. In Proceedings of the Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, HRI ’20, Cambridge, UK, 23–26 March 2020; pp. 514–516. [Google Scholar] [CrossRef]
  18. Matsas, E.; Vosniakos, G.C. Design of a virtual reality training system for human–robot collaboration in manufacturing tasks. Int. J. Interact. Des. Manuf. (IJIDeM) 2017, 11, 139–153. [Google Scholar] [CrossRef]
  19. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; Prisma Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Med. 2009, 6, e1000097. [Google Scholar] [CrossRef] [Green Version]
  20. Van de Merwe, D.B.; Van Maanen, L.; Ter Haar, F.B.; Van Dijk, R.J.E.; Hoeba, N.; der Stap, N. Human-Robot Interaction During Virtual Reality Mediated Teleoperation: How Environment Information Affects Spatial Task Performance and Operator Situation Awareness. In Virtual, Augmented and Mixed Reality, Applications and Case Studies; Chen, J.Y.C., Fragomeni, G., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 163–177. [Google Scholar]
  21. Su, Y.H.; Xu, Y.Q.; Cheng, S.L.; Ko, C.H.; Young, K.Y. Development of an Effective 3D VR-Based Manipulation System for Industrial Robot Manipulators. In Proceedings of the 2019 12th Asian Control Conference (ASCC), Kitakyushu-shi, Japan, 9–12 June 2019; pp. 1–6. [Google Scholar]
  22. Kohn, S.; Blank, A.; Puljiz, D.; Zenkel, L.; Bieber, O.; Hein, B.; Franke, J. Towards a Real-Time Environment Reconstruction for VR-Based Teleoperation Through Model Segmentation. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1–9. [Google Scholar] [CrossRef]
  23. Brizzi, F.; Peppoloni, L.; Graziano, A.; Stefano, E.D.; Avizzano, C.A.; Ruffaldi, E. Effects of Augmented Reality on the Performance of Teleoperated Industrial Assembly Tasks in a Robotic Embodiment. IEEE Trans. Hum. Mach. Syst. 2018, 48, 197–206. [Google Scholar] [CrossRef]
  24. Stotko, P.; Krumpen, S.; Schwarz, M.; Lenz, C.; Behnke, S.; Klein, R.; Weinmann, M. A VR System for Immersive Teleoperation and Live Exploration with a Mobile Robot. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 3630–3637. [Google Scholar] [CrossRef] [Green Version]
  25. Xinxing, T.; Pengfei, Z.; Hironao, Y. VR-based construction tele-robot system displayed by HMD with active viewpoint movement mode. In Proceedings of the 2016 Chinese Control and Decision Conference (CCDC), Yinchuan, China, 28–30 May 2016; pp. 6844–6850. [Google Scholar] [CrossRef]
  26. Watanabe, K.; Takahashi, M. Head-synced Drone Control for Reducing Virtual Reality Sickness. J. Intell. Robot. Syst. 2020, 97, 733–744. [Google Scholar] [CrossRef]
  27. Zhao, J.; Allison, R.S.; Vinnikov, M.; Jennings, S. The Effects of Visual and Control Latency on Piloting a Quadcopter Using a Head-Mounted Display. In Proceedings of the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Miyazaki, Japan, 7–10 October 2018; pp. 2972–2979. [Google Scholar] [CrossRef] [Green Version]
  28. Cash, H.; Prescott, T.J. Improving the Visual Comfort of Virtual Reality Telepresence for Robotics. In Social Robotics; Salichs, M.A., Ge, S.S., Barakova, E.I., Cabibihan, J.J., Wagner, A.R., Castro-González, Á., He, H., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 697–706. [Google Scholar]
  29. Theofilis, K.; Orlosky, J.; Nagai, Y.; Kiyokawa, K. Panoramic view reconstruction for stereoscopic teleoperation of a humanoid robot. In Proceedings of the 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Cancun, Mexico, 15–17 November 2016; pp. 242–248. [Google Scholar] [CrossRef]
  30. Kot, T.; Novák, P. Application of virtual reality in teleoperation of the military mobile robotic system TAROS. Int. J. Adv. Robot. Syst. 2018, 15, 1729881417751545. [Google Scholar] [CrossRef] [Green Version]
  31. Becerra, I.; Suomalainen, M.; Lozano, E.; Mimnaugh, K.J.; Murrieta-Cid, R.; LaValle, S.M. Human Perception-Optimized Planning for Comfortable VR-Based Telepresence. IEEE Robot. Autom. Lett. 2020, 5, 6489–6496. [Google Scholar] [CrossRef]
  32. Lipton, J.I.; Fay, A.J.; Rus, D. Baxter’s Homunculus: Virtual Reality Spaces for Teleoperation in Manufacturing. IEEE Robot. Autom. Lett. 2018, 3, 179–186. [Google Scholar] [CrossRef] [Green Version]
  33. Hirschmanner, M.; Tsiourti, C.; Patten, T.; Vincze, M. Virtual Reality Teleoperation of a Humanoid Robot Using Markerless Human Upper Body Pose Imitation. In Proceedings of the 2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids), Toronto, ON, Canada, 15–17 October 2019; pp. 259–265. [Google Scholar] [CrossRef]
  34. Gaurav, S.; Al-Qurashi, Z.; Barapatre, A.; Maratos, G.; Sarma, T.; Ziebart, B.D. Deep Correspondence Learning for Effective Robotic Teleoperation Using Virtual Reality. In Proceedings of the 2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids), Toronto, ON, Canada, 15–17 October 2019; pp. 477–483. [Google Scholar] [CrossRef]
  35. Kamali, K.; Bonev, I.A.; Desrosiers, C. Real-time Motion Planning for Robotic Teleoperation Using Dynamic-goal Deep Reinforcement Learning. In Proceedings of the 2020 17th Conference on Computer and Robot Vision (CRV), Ottawa, ON, Canada, 13–15 May 2020; pp. 182–189. [Google Scholar] [CrossRef]
  36. Xi, B.; Wang, S.; Ye, X.; Cai, Y.; Lu, T.; Wang, R. A robotic shared control teleoperation method based on learning from demonstrations. Int. J. Adv. Robot. Syst. 2019, 16. [Google Scholar] [CrossRef] [Green Version]
  37. Tsokalo, I.A.; Kuss, D.; Kharabet, I.; Fitzek, F.H.P.; Reisslein, M. Remote Robot Control with Human-in-the-Loop over Long Distances Using Digital Twins. In Proceedings of the 2019 IEEE Global Communications Conference (GLOBECOM), Waikoloa, HI, USA, 9–13 December 2019; pp. 1–6. [Google Scholar] [CrossRef]
  38. Sun, D.; Kiselev, A.; Liao, Q.; Stoyanov, T.; Loutfi, A. A New Mixed-Reality-Based Teleoperation System for Telepresence and Maneuverability Enhancement. IEEE Trans. Hum. Mach. Syst. 2020, 50, 55–67. [Google Scholar] [CrossRef] [Green Version]
  39. Whitney, D.; Rosen, E.; Phillips, E.; Konidaris, G.; Tellex, S. Comparing Robot Grasping Teleoperation Across Desktop and Virtual Reality with ROS Reality. In Robotics Research; Amato, N.M., Hager, G., Thomas, S., Torres-Torriti, M., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 335–350. [Google Scholar]
  40. Just, C.; Ortmaier, T.; Kahrs, L.A. A user study on robot path planning inside a Virtual Reality environment. In Proceedings of the ISR 2018 50th International Symposium on Robotics, Munich, Germany, 20–21 June 2018; pp. 1–6. [Google Scholar]
  41. Wang, Q.; Jiao, W.; Yu, R.; Johnson, M.T.; Zhang, Y. Virtual Reality Robot-Assisted Welding Based on Human Intention Recognition. IEEE Trans. Autom. Sci. Eng. 2020, 17, 799–808. [Google Scholar] [CrossRef]
  42. Vempati, A.S.; Khurana, H.; Kabelka, V.; Flueckiger, S.; Siegwart, R.; Beardsley, P. A Virtual Reality Interface for an Autonomous Spray Painting UAV. IEEE Robot. Autom. Lett. 2019, 4, 2870–2877. [Google Scholar] [CrossRef]
  43. Yashin, G.A.; Trinitatova, D.; Agishev, R.T.; Ibrahimov, R.; Tsetserukou, D. AeroVr: Virtual Reality-based Teleoperation with Tactile Feedback for Aerial Manipulation. In Proceedings of the 2019 19th International Conference on Advanced Robotics (ICAR), Belo Horizonte, Brazil, 2–6 December 2019; pp. 767–772. [Google Scholar] [CrossRef] [Green Version]
  44. Gorjup, G.; Dwivedi, A.; Elangovan, N.; Liarokapis, M. An Intuitive, Affordances Oriented Telemanipulation Framework for a Dual Robot Arm Hand System: On the Execution of Bimanual Tasks. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 3611–3616. [Google Scholar] [CrossRef]
  45. Lee, J.; Lu, Y.; Xu, Y.; Song, D. Visual programming for mobile robot navigation using high-level landmarks. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 2901–2906. [Google Scholar] [CrossRef]
  46. Roldán, J.; Peña-Tapia, E.; Martín-Barrio, A.; Olivares-Méndez, M.; Del Cerro, J.; Barrientos, A. Multi-Robot Interfaces and Operator Situational Awareness: Study of the Impact of Immersion and Prediction. Sensors 2017, 17, 1720. [Google Scholar] [CrossRef] [Green Version]
  47. Du, J.; Sheng, W.; Liu, M. Human-guided robot 3D mapping using virtual reality technology. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 4624–4629. [Google Scholar] [CrossRef]
  48. Hetrick, R.; Amerson, N.; Kim, B.; Rosen, E.; de Visser, E.J.; Phillips, E. Comparing Virtual Reality Interfaces for the Teleoperation of Robots. In Proceedings of the 2020 Systems and Information Engineering Design Symposium (SIEDS), Charlottesville, VA, USA, 24 April 2020; pp. 1–7. [Google Scholar] [CrossRef]
  49. Allspaw, J.; Heinold, L.; Yanco, H.A. Design of Virtual Reality for Humanoid Robots with Inspiration from Video Games. In Virtual, Augmented and Mixed Reality, Applications and Case Studies; Chen, J.Y.C., Fragomeni, G., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 3–18. [Google Scholar]
  50. Franzluebbers, A.; Johnson, K. Remote Robotic Arm Teleoperation through Virtual Reality. In Proceedings of the Symposium on Spatial User Interaction, SUI ’19, New Orleans, LA, USA, 19–20 October 2019; pp. 1–2. [Google Scholar] [CrossRef]
  51. Theofanidis, M.; Sayed, S.I.; Lioulemes, A.; Makedon, F. VARM. In Proceedings of the 10th International Conference on PErvasive Technologies Related to Assistive Environments, PETRA ’17, Island of Rhodes, Greece, 21–23 June 2017; pp. 215–221. [Google Scholar] [CrossRef]
  52. Maciaś, M.; Da̧browski, A.; Fraś, J.; Karczewski Michałand Puchalski, S.; Tabaka, S.; Jaroszek, P. Measuring Performance in Robotic Teleoperation Tasks with Virtual Reality Headgear. In Automation 2019; Szewczyk, R., Zieliński, C., Kaliczyńska, M., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 408–417. [Google Scholar]
  53. Roldan, J.J.; Pena-Tapia, E.; Garcia-Aunon, P.; Del Cerro, J.; Barrientos, A. Bringing Adaptive and Immersive Interfaces to Real-World Multi-Robot Scenarios: Application to Surveillance and Intervention in Infrastructures. IEEE Access 2019, 7, 86319–86335. [Google Scholar] [CrossRef]
  54. Conn, M.A.; Sharma, S. Immersive Telerobotics Using the Oculus Rift and the 5DT Ultra Data Glove. In Proceedings of the 2016 International Conference on Collaboration Technologies and Systems (CTS), Orlando, FL, USA, 31 October–4 November 2016; pp. 387–391. [Google Scholar] [CrossRef]
  55. Roldán, J.J.; Peña-Tapia, E.; Garzón-Ramos, D.; de León, J.; Garzón, M.; del Cerro, J.; Barrientos, A. Multi-robot Systems, Virtual Reality and ROS: Developing a New Generation of Operator Interfaces. In Robot Operating System (ROS): The Complete Reference (Volume 3); Koubaa, A., Ed.; Springer International Publishing: Cham, Switzerland, 2019; pp. 29–64. [Google Scholar] [CrossRef]
  56. Mizuchi, Y.; Inamura, T. Cloud-based multimodal human-robot interaction simulator utilizing ROS and unity frameworks. In Proceedings of the 2017 IEEE/SICE International Symposium on System Integration (SII), Taipei, Taiwan, 11–14 December 2017; pp. 948–955. [Google Scholar] [CrossRef]
  57. Whitney, D.; Rosen, E.; Ullman, D.; Phillips, E.; Tellex, S. ROS Reality: A Virtual Reality Framework Using Consumer-Grade Hardware for ROS-Enabled Robots. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1–9. [Google Scholar] [CrossRef]
  58. Bischoff, M. ROS#. 2019. Available online: https://github.com/siemens/ros-sharp (accessed on 17 December 2020).
  59. Babaians, E.; Tamiz, M.; Sarfi, Y.; Mogoei, A.; Mehrabi, E. ROS2Unity3D; High-Performance Plugin to Interface ROS with Unity3d engine. In Proceedings of the 2018 9th Conference on Artificial Intelligence and Robotics and 2nd Asia-Pacific International Symposium, Kish Island, Iran, 10 December 2018; pp. 59–64. [Google Scholar] [CrossRef]
  60. Astad, M.A.; Hauan Arbo, M.; Grotli, E.I.; Tommy Gravdahl, J. Vive for Robotics: Rapid Robot Cell Calibration. In Proceedings of the 2019 7th International Conference on Control, Mechatronics and Automation (ICCMA), Delft, The Netherlands, 6–8 November 2019; pp. 151–156. [Google Scholar] [CrossRef]
Figure 1. Flow diagram of screening process.
Figure 2. Division of papers in each category.
Figure 3. Simplified versions of the robotic teleoperation mapping models from [32].
Figure 4. Heat map of reviewed papers in each category with robot types.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
