Open Access Review

Augmented Reality for Robotics: A Review

Department of Robotics, Nazarbayev University, Nur-Sultan City Z05H0P9, Kazakhstan
Author to whom correspondence should be addressed.
Robotics 2020, 9(2), 21;
Received: 17 February 2020 / Revised: 12 March 2020 / Accepted: 13 March 2020 / Published: 2 April 2020


Augmented reality (AR) is used to enhance the perception of the real world by integrating virtual objects into an image sequence acquired from various camera technologies. Numerous AR applications in robotics have been developed in recent years. The aim of this paper is to provide an overview of AR research in robotics during the five-year period from 2015 to 2019. We classified these works in terms of application areas into four categories: (1) Medical robotics: Robot-Assisted Surgery (RAS), prosthetics, rehabilitation, and training systems; (2) Motion planning and control: trajectory generation, robot programming, simulation, and manipulation; (3) Human-robot interaction (HRI): teleoperation, collaborative interfaces, wearable robots, haptic interfaces, brain-computer interfaces (BCIs), and gaming; (4) Multi-agent systems: use of visual feedback to remotely control drones, robot swarms, and robots with shared workspaces. Recent developments in AR technology are discussed, followed by the challenges AR faces due to issues of camera localization, environment mapping, and registration. We explore AR applications in terms of how AR was integrated and which improvements it introduced to the corresponding fields of robotics. In addition, we summarize the major limitations of the presented applications in each category. Finally, we conclude our review with future directions of AR research in robotics. The survey covers over 100 research works published over the last five years.
Keywords: augmented reality; robotics; human–robot interaction; robot-assisted surgery; teleoperation; remote control; robot programming; robot swarms

1. Introduction

Augmented Reality (AR) has become a popular multidisciplinary research field over the last decades. It has been used in different applications to enhance visual feedback from information systems. Faster computers, advanced cameras, and novel algorithms further motivate researchers to expand the application areas of AR. Moreover, the Industry 4.0 paradigm triggered the use of AR in networks of connected physical systems and human-machine communication [1]. Meanwhile, robots are becoming ubiquitous in daily life, extending from their traditional home in industry to other domains such as rehabilitation robotics, social robotics, mobile/aerial robotics, and multi-agent robotic systems [2,3]. In robotics, AR acts as a new medium for interaction and information exchange with autonomous systems, increasing the efficiency of Human-Robot Interaction (HRI).
The most common definition of AR, by Azuma [4], states that in AR "3D virtual objects are integrated into a 3D real environment in real-time". Nearly two decades ago, Azuma considered technical limitations, namely sensing errors and registration issues, as the main challenges of AR technology [4,5]. Furthermore, the author listed potential research areas for AR, including medicine, robot motion planning, maintenance, and aircraft navigation. Two types of AR displays utilizing optical and video approaches were compared, yet both were still error-prone, preventing the development of effective AR systems. At the time, marker-based tracking of the user position was the common technique in AR [6,7]. Since then, substantial work on motion tracking (e.g., mechanical, inertial, acoustic, magnetic, optical, and radio/microwave sensing) has been conducted for navigation, object detection/recognition/manipulation, instrument tracking, and avatar animation in AR [8].
Further research on AR displays, tracking, and interaction technologies conducted during the next decade was summarized by Zhou et al. [9]. This survey analyzed 276 works from the Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR) conferences held between 1998 and 2007. According to this work, researchers' attention was focused mainly on vision-based tracking, camera localization, registration methods, and display technologies.
Several surveys on AR have been presented in recent years. Wang et al. [10] explored the use of AR in industrial assembly. Past problems in AR due to error-prone camera localization have been partially solved by vision-based mapping techniques introduced in the field of computer vision. Encompassing further developments in this active research area, a comprehensive review of the state of the art was published in [11]. Another survey, presenting an overview of pose estimation algorithms used in AR, appeared in [12]. Due to the advances in tracking and localization, a wide range of novel AR applications emerged. Mekni and Lemieux [13] presented an overview of AR applications in medicine, the military, manufacturing, gaming, visualization, robotics, marketing, tourism, education, path planning, and geospatial and civil engineering. For medicine, Nicolau et al. [14] provided a review of AR utilized in minimally invasive surgery with a particular emphasis on laparoscopic surgical oncology. Nee et al. [15] summarized AR applications for design and manufacturing in industrial environments. Research and development in mobile AR were surveyed by Chatzopoulos et al. [16].
Our literature search revealed a surge of AR in robotics. Many works on Robot-Assisted Surgery (RAS), robot teleoperation, robot programming, and HRI have been published in the last five years. Yet, there is no systematic review summarizing recent AR research in the field of robotics. In this survey, we provide a categorical review of around 100 AR applications in robotics presented in conference proceedings, journals, and patents. We divided the considered AR applications into four broad classes: (1) medical robotics, (2) robot motion planning and control, (3) HRI, and (4) swarm robotics. We discuss the contributions of AR applications within each class, together with technical limitations and future research. We hope this survey will help AR researchers and practitioners to (1) get a clear vision of the current status of AR in robotics and (2) understand the major limitations and future directions of AR research in robotics.
The remainder of the paper is organized as follows: A brief history of AR is given in Section 2. AR hardware and software, along with common AR platforms, are reviewed in Section 3. This is followed by an overview of AR-based applications developed within four areas of robotics in Section 4: medical robotics in Section 4.1, robot control and planning in Section 4.2, human–robot interaction in Section 4.3, and swarm robotics in Section 4.4. Conclusions and directions for future research are drawn in Section 5.

2. A Brief History of AR

The history of AR dates back to the invention of VR in the 1960s, when Sutherland [17] introduced the concept of the "Ultimate Display", a simulation of a synthetic environment similar to actual reality. There are three components in this concept: (1) a Head-Mounted Display (HMD) with sound and tactile feedback to create a realistic Virtual Environment (VE), (2) interaction of the user with the virtual objects in the VE as if they were in the real environment, and (3) computer hardware for the creation of the virtual environment. Sutherland [18] wrote of VR that "With appropriate programming such a display could literally be the Wonderland into which Alice walked". At the Lincoln Laboratory of the Massachusetts Institute of Technology (MIT), the author performed various experiments with the first AR/VR-capable HMDs, one of which was referred to in the literature as the "Sword of Damocles". The device was notable for its huge size: the imagery was generated by a computer rather than captured by a camera, and the overall system had to be suspended from the ceiling. The term "virtual reality" was first introduced by the computer scientist Jaron Lanier, who in 1984 founded the first company developing VR products ("VPL Research"). This company brought to market VR goggles [19], joysticks, data gloves, and the "Bird 3D" electromagnetic tracker [20], key components for the development of VR haptics.
In 1992, an AR system also referred to as the Virtual Fixture System [21] was invented at the Armstrong Laboratory located in Arlington, Texas, USA. The system presented to the user a Mixed Reality (MR) incorporating sight, sound, and touch. In another work, Rosenberg [22] described the advantages of the virtual fixture interface and overlaid sensory feedback for telerobotics. The potential of this method to increase the quality and efficiency of robot remote control was validated on a peg-in-hole task developed at the Naval Ocean Systems Center as an example of teleoperated manipulation. This study demonstrated that operator performance in a robot manipulation task can be significantly improved when the operator is provided with visual cues in which virtual objects are augmented into the user's direct view.
Later on, Milgram and Kishino [23] introduced the "virtuality continuum" concept that connects the real world with a completely virtual one (Figure 1). One end of Milgram's scale stands for the real environment and the other end represents the virtual environment, while everything in between is characterized by the newly introduced concept of MR. According to Milgram's diagram, two broad areas belong to MR: AR and Augmented Virtuality (AV). However, the medium that represents a combined version of real and computer-generated environments came to be called AR rather than MR.
Over the history of AR, one of the main issues has been the proper identification of the user's position in the 3D environment, which is necessary for the augmentation of the user's view in the AR device. Different methods were proposed to address this issue in robotics, such as simultaneous localization and mapping (SLAM), described in the work of Durrant-Whyte and Bailey [24]. With the help of SLAM, a mobile robot in an unknown environment creates a map and concurrently determines its position within this generated map.
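The joint estimation at the heart of SLAM can be illustrated with a deliberately minimal one-dimensional linear Kalman filter that tracks the robot and a single static landmark together. This is a toy sketch for intuition only, not the EKF-SLAM formulation of [24]; the motion model, relative-offset measurement, and noise variances q and r are all illustrative assumptions.

```python
import numpy as np

def kf_slam_step(x, P, u, z, q=0.1, r=0.05):
    """One predict/update cycle of a toy 1-D SLAM Kalman filter.
    State x = [robot position, landmark position]: the robot moves by u
    (motion noise variance q), then observes the relative offset
    z = landmark - robot (measurement noise variance r)."""
    # Predict: the robot moves, the landmark stays put.
    x = x + np.array([u, 0.0])
    P = P + np.diag([q, 0.0])
    # Update with the relative measurement z = [-1, 1] @ x.
    H = np.array([[-1.0, 1.0]])
    S = H @ P @ H.T + r          # innovation variance
    K = P @ H.T / S              # Kalman gain, shape (2, 1)
    x = x + K @ (z - H @ x)      # correct both robot and landmark
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

As measurements accumulate, the landmark estimate converges toward its true position while the robot estimate stays consistent, which is the essence of building a map while localizing within it.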
The term AR is used in the literature more often than the term MR. Based on the number of papers indexed by the Scopus database between 1994 and 2019, the term MR is used much less than AR and VR (see Figure 2). The figure also shows the rising popularity of the term AR from 2015 to 2019 (from around 4000 papers in 2015 to 8000 papers in 2019). We can anticipate that, in the near future, human-computer interaction (HCI), human–robot interaction (HRI), and the way humans interact with each other might substantially transform due to AR. Different aspects of human life might also experience significant changes, including engineering, medicine, education, the social and natural sciences, psychology, the arts, and the humanities.

3. AR Technology

An augmented environment can be experienced through different sets of technology, including mobile displays (tablet and smartphone screens), computer monitors, Head-Mounted Displays (HMDs), and projection systems, which later led to the development of Spatial Augmented Reality (SAR) (Figure 3). Recent technological advances increased the popularity of AR among the public. For example, in 2016, the game Pokemon Go allowed smartphone users to see an augmented environment in different parts of the world and play the game worldwide. Later, Rapp et al. [25] developed another location-based AR game, in which users could scan Quick Response (QR) codes with their smartphones and receive visuals and descriptions of viruses spread around the world. Based on the popularity of Pokemon Go, one can foresee that AR can change the human perception of gaming. Furthermore, AR allows the development of dynamic games in which users walk in real environments with their smartphones and, along with entertainment, acquire useful information, e.g., on how to better look after their health.
Recently introduced AR goggles (e.g., Microsoft Hololens and Google Glass) enable more efficient interaction between humans and autonomous systems. In medicine, visualization tools in surgical robotics were adapted to use AR during Robot-Assisted Surgery (RAS). For example, the da Vinci robotic system (Intuitive Surgical, Inc., Mountain View, CA) [26] can render a stereo laparoscopic video to the robot's console window during surgery.
Substantial improvements have been made in software platforms for the development of AR applications for different AR displays. In 2013, the Vuforia Engine [27] was introduced to the robotics community as a framework for image- or model-based tracking of objects of interest in AR. Vuforia can be integrated with external sensory devices, such as gyroscopes and accelerometers [28]. Later, in 2017, Apple introduced the ARKit library [29], which became popular for developing industrial AR applications. In the following year, Google presented ARCore [30], a powerful platform for creating AR applications for Android.
In addition to SLAM, an asynchronous localization and mapping method was developed in [31]. According to the authors, this technique can be beneficial for systems utilizing a large number of mobile robots thanks to its computational and power efficiency. Another work addressing the issues of mapping and positioning in AR and virtual reality (VR) was presented by Balachandreswaran et al. [32]. The proposed method was particularly designed for applications utilizing head-mounted displays (HMDs) and depends on depth data from at least one depth camera.
With regard to computer-vision-based techniques for 3D object pose estimation, a Deep Learning (DL) approach was presented in [33]. This method for tracking 3D real-world objects in a scene uses a deep convolutional neural network trained on synthetic images of the tracked objects. The network outputs the change in the object's pose using six variables, where the first three describe the object translation and the latter three represent the object rotation expressed in Euler angles. Integrated with DL, AR can be used for object detection, as implemented by Park et al. [34]. The researchers further developed the Deep-ChildAR system for assisting preschool children in learning about various objects. The robotic system consists of a DL framework that recognizes objects in real scenes and a projection-based AR system that forms a user-friendly interface, providing the child with a description of the chosen object.
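As a concrete illustration of this six-variable output, the predicted values can be converted into a homogeneous transform and composed with the previous pose estimate. Note that [33] does not specify the Euler convention, so the Z-Y-X composition below is an assumption for illustration.

```python
import numpy as np

def euler_to_matrix(rx, ry, rz):
    """Rotation matrix from Euler angles in radians (Z-Y-X composition)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def update_pose(pose, delta):
    """Compose a 4x4 object pose with a predicted change
    delta = [tx, ty, tz, rx, ry, rz]: three translations, three Euler angles."""
    T = np.eye(4)
    T[:3, :3] = euler_to_matrix(*delta[3:])
    T[:3, 3] = delta[:3]
    return pose @ T
```

Applying `update_pose` once per frame accumulates the per-frame changes predicted by the network into the object's current pose in the scene.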
Despite the earlier use of monitors for AR, current trends in HRI favor the more compact HMD as the major medium for AR. This shift happened due to advances in embedded systems, optics, and localization and registration algorithms. In addition, HMDs allow a more natural way of interacting with the surrounding world and robotic devices. On the other hand, tablets and smartphone screens are useful in teleoperation and simulation tasks, as these devices can be connected to separate processors for calculations. As a result, tablet- and screen-based AR suffer less from latency but are more sensitive to the calibration between devices. Smart goggles can collect data from different built-in sensors; therefore, they can perform more accurate localization of the virtual object, increasing the quality of the augmentation. Usually, HMDs are designed so that users may walk around wearing them over normal optical glasses. However, HMDs have less powerful processors than personal computers. As a consequence, HMD-based AR applications have latency problems, i.e., it takes time for the HMD to finish its computations and present the augmented image to the user's view.
In the following, we describe different models of HMDs classified into one of two broad groups: (1) Optical See-Through HMDs (OST-HMDs) [35] and (2) Video See-Through HMDs (VST-HMDs) [36]. Popular OST-HMDs utilized in AR research include the Hololens from Microsoft, the nVisor ST60 from NVIS, the Vuzix™ Star 1200XL, Google Glass, Atheer One Smart Glasses, Recon Jet Eyewear, and the HTC Vive HMD with the ZED Mini AR stereo video pass-through camera. There are also monocular, single-eye AR glasses, such as the Vuzix M100 Smart Glasses, which use optical display modules. VST-HMDs include the i-Glasses SVGA Pro, popular in the literature [37]. Recently, another class of AR glasses has appeared that utilizes small projectors, such as the Moverio BT-200 Smart Glasses, in which two projectors send the image onto transparent displays in front of each eye.
In addition to commercial HMDs, custom AR setups have also been developed by combining different devices (e.g., a video mixer (Panasonic), a Raspberry Pi camera remote video stream, and a motion capture system). Popular motion capture systems for AR include the MicronTracker and ARTTrack 3D. Furthermore, different types of cameras are utilized in AR research, such as RGB-D, binocular, stereo, depth, infrared, monocular, pinhole, wide-angle, and omnidirectional cameras. In custom setups, VR headsets (Oculus Rift and Oculus DK2) have also been adapted as AR displays [38,39,40,41].
Spatial AR setups have also gained popularity in research settings. Lee and Jang [42] describe a novel approach for object tracking developed for integration into SAR. Their method utilizes two threads: one for object tracking and one for object detection. It is robust to illumination changes and to occlusions caused by objects placed between the tracked object and the camera. The major component for the development of SAR-based applications is the camera-projector system. According to our literature search, the InFocus IN116A and Philips PicoPix 3610 2D projectors were frequently utilized. Most AR applications utilize camera-based registration methods in which markers play an important role. Popular markers in AR include QR codes, fiducial markers, infrared markers, and object markers.
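The geometric core of such marker-based registration can be sketched as follows: given the four detected corners of a planar marker, a homography is estimated via the Direct Linear Transform and then used to project virtual overlay points into the image. This is a minimal illustration with hypothetical pixel coordinates; real AR toolkits (e.g., ARToolKit or OpenCV) add corner detection, coordinate normalization, and full 6-DoF pose recovery from the camera intrinsics.

```python
import numpy as np

def find_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src via the
    Direct Linear Transform (needs >= 4 point correspondences)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (last row of V^T).
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix scale and sign

def project(H, pt):
    """Map a 2D point through the homography (homogeneous division)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Marker corners in marker coordinates, and where they were "detected"
# in the image (hypothetical pixel values for illustration).
marker = [(0, 0), (1, 0), (1, 1), (0, 1)]
detected = [(10, 20), (30, 20), (30, 40), (10, 40)]
H = find_homography(marker, detected)
center = project(H, (0.5, 0.5))  # where to draw a virtual overlay
```

Once `H` is known, any point expressed in marker coordinates can be rendered at the correct place in the camera image, which is exactly the alignment step that fails when the marker is occluded.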
Technological advances have made a huge contribution to AR in the last five years; however, further research is needed. Even powerful HMDs such as the Microsoft Hololens suffer from technical limitations, such as a small field of view and low camera resolution. Furthermore, the augmented world can only be experienced beyond a certain minimum distance from the object due to the limitations of the HMD optics. In addition, registration and tracking problems are still prevalent. For example, if a marker or tracked object is occluded, the alignment between the real world and the virtual components is no longer accurate. In many applications, e.g., robot programming or RAS, accuracy is a key performance metric for AR.

4. AR Applications in Robotics

4.1. AR in Medical Robotics

In this section, we review AR applications developed during the five-year period from 2015 to 2019 in medical robotics, covering AR-based Robot-Assisted Surgery (RAS), surgical training platforms, and rehabilitation systems. For this section, we collected 26 papers from the Scopus database by the union of two search strategies: (1) keywords in the title: ("augmented-reality-based" OR "augmented-reality" OR "augmented reality") AND ("robot-assisted" OR "robotic rehabilitation" OR "robotic surgery" OR "robotic assisted" OR "robotics prosthesis" OR "robotic prosthesis"); (2) keywords in the title and in the keywords section: ("augmented-reality-based" OR "augmented-reality" OR "augmented reality") AND ("robotic surgery" OR "robotic imaging" OR "medical robotics"). We then excluded review papers, non-English papers, and papers outside the scope of our work. Furthermore, we found additional papers via cross-referencing from the works already considered. Among the selected papers, ten works were included in the analysis presented in Table 1.
An overview of AR-based robotic surgery platforms by Diana and Marescaux [43] discusses issues related to safety and efficiency during computer-assisted surgery. In [44], AR is utilized to train medical residents for robot-assisted urethrovesical anastomosis surgery. In [45], SAR technology is embedded in a mobile robot for robot-assisted rehabilitation. This rehabilitation framework consists of a projecting module that renders a virtual image on top of the real-world object. Later, Ocampo and Tavakoli [46] developed an AR framework integrated with a haptic interface, utilizing SAR technology and a 2D projection display. The system was designed to make the overall rehabilitation process more effective and faster, and to reduce the cognitive load experienced by patients during rehabilitation.
Pessaux et al. [47] developed a visualization system enhanced with AR technology for robot-assisted liver segmentectomy. This system helps the surgeon precisely and safely recognize almost all of the important vascular structures during the operation. Liu et al. [48] presented an AR-based navigation system to assist a surgeon in performing base-of-tongue tumor resection during transoral robotic surgery. With the AR vision module, the da Vinci surgical robot can identify the location of the tumor in real-time and adjust the area for resection by following the instructions augmented into its stereoscopic view. As noted by Navab et al. [49], AR-based vision tools together with intra-operative robotic imaging can improve the efficiency of computer-assisted surgery.
An AR-compatible training system to assist urology residents in placing an inflatable penile prosthesis is described in [50]. An AR-based vision system for RAS was developed by Lin et al. [51], where a three-dimensional AR display was used in an automatic navigation platform for craniomaxillofacial surgery. An AR navigation system used in oral and maxillofacial surgery was described by Wang et al. [52]. This framework utilized a novel markerless video see-through method for registration that was capable of aligning virtual and real objects in real-time, so that the user did not need to perform the alignment manually. Later, Zhou et al. [53] developed an AR platform for assisting a surgeon during robotic mandibular plastic surgery. The framework generated navigation guidelines and prepared visual instructions for the preoperative surgical plan. With such a system, inexperienced users are more likely to make proper decisions during the operation. For instance, force feedback provided by the framework during the experiments facilitated the control of the high-speed drill by new users, preventing them from inadvertently damaging bone or nerve tissue.
Bostick et al. [54] created an AR-based control framework to send manipulation signals to a prosthetic device. The system utilized an algorithm that identified an object and sent corresponding grasping commands to the upper-limb prosthesis. Qian et al. [55] presented the AR-based framework "ARssist", which consists of a teleoperated robot assistance module and a haptic feedback interface and can generate visual instructions during minimally invasive surgery. The system renders the 3D models of the endoscope, utilized instruments, and handheld tools inside the patient's body onto the surgeon's view in real-time (Figure 4a). Furthermore, AR-assisted surgery with the help of projector-camera technology is also gaining momentum [55]. In [56], an AR system was used to render the operational area during urological robot-assisted surgery for radical prostatectomy. In [57], an overview of the literature addressing the issues of surgical AR in intra-abdominal minimally invasive surgery is presented. In [58], different robotic systems and AR/VR technologies utilized in neurosurgery are summarized. For example, in spinal neurosurgery, the authors refer to robotic assisting platforms utilized for screw placement, e.g., the SpineAssist, Renaissance, and da Vinci surgical systems. According to the authors, AR/VR-enhanced training systems and simulation platforms have a high potential to improve the training of surgeons, even though limitations of AR/VR for neurosurgery remain. Pilot studies with experienced and inexperienced surgeons using this AR assistance system in an operational scenario were described later in [59] (Figure 4b,c). Analyzing the recorded results of 25 inexperienced surgeons, the authors state that AR assistance helped to increase the efficiency of the operation, improve patient safety, enhance hand-eye coordination, and reduce the time required for tool manipulation by the surgeon. According to Hanna et al. [60], AR-integrated platforms can be used to detect and diagnose anatomic pathologies.
A recent work discussing AR and VR navigation frameworks for achieving higher accuracy and precision in RAS, specifically in oncologic liver surgery, was presented by Quero et al. [61]. This work highlighted how imaging and 3D visualization can improve the surgeon's perception of the operation area in RAS. These authors also developed an AR-based platform designed to provide an "X-ray see-through vision" of the operating area in real-time to a surgeon wearing an HMD [62]. The performance of this system was tested during minimally invasive laparoscopic surgery on a deformable phantom. It was found that the system is prone to registration errors since the operational area consisted of soft tissues.
Among robotic surgery platforms, the da Vinci surgical robot is the most widely used system in RAS. In addition, the console display of the da Vinci system can be adapted to support AR techniques during surgery, obviating the need for additional displays. The state of the art in AR-integrated Robot-Assisted Surgery (RAS) was reviewed by Qian et al. (2019b) [63]. A brief overview of AR applications developed within the five-year period in the area of medical robotics, along with the corresponding technology utilized for the creation of the AR setups, the tested robotic systems, and the observed limitations, is summarized in Table 1.
In Figure 5, we show the distribution of papers indexed by the Scopus database published between 1995 and 2019 within the area of medical robotics. The selected papers are classified into three groups according to the number of papers on MR, AR, and VR. From this figure, we infer that the popularity of AR and VR in medical robotics rose significantly from 2015 to 2019. The papers were collected by using the following keywords throughout all sections of papers in the Scopus database: (1) ALL("mixed reality" AND NOT "augmented reality" AND "medical robotics"), (2) ALL("augmented reality" AND NOT "mixed reality" AND "medical robotics"), and (3) ALL("virtual reality" AND "medical robotics").
Based on our literature survey, we foresee that AR has immense potential to trigger a paradigm shift in surgery to improve outcomes during and after operations. On the other hand, this will presumably introduce new challenges in the training of new medical residents and surgeons who would benefit from AR-enhanced visualization and navigation tools. Before AR can be further utilized in medicine and rehabilitation, the technical limitations of current systems, including latency, calibration errors, and registration errors, should also be overcome.

4.2. AR in Robot Control and Planning

AR applications within this category focus on the control and planning domains of robotics. To find relevant papers for this section, we collected 49 papers from the Scopus database by the union of two search strategies: (1) keywords in the title: ("augmented-reality-based" OR "AR-assisted" OR "augmented reality") AND ("robot navigation" OR "robot control" OR "robot localization" OR "robot visualization" OR "automated planning" OR "robotic control" OR "robot programming"); (2) keywords in the title and in the keywords section: ("augmented-reality-based" OR "AR-assisted" OR "augmented reality") AND ("robot programming"). We did not consider non-English papers, review papers, papers duplicating earlier works, or irrelevant ones. Further papers were added to the list via cross-referencing. Among the selected papers, 16 were included in the analysis presented in Table 2.
Gacem et al. [64] introduced an AR-integrated system, the "Projection-Augmented Arm (PAA)", that incorporates projection technology with a motorized robotic arm to assist a human user in locating and finding objects in a dense environment. In [65], an infrared camera was integrated into a mobile robot to enhance autonomous robot navigation via projection-based SAR technology. With the help of this framework, a mobile robot was able to follow a complicated path by receiving graphical feedback in the form of projected infrared signals. A similar AR-integrated mobile robot navigation system was described in the work of Dias et al. [66]. This system used a visualization framework consisting of non-central catadioptric cameras and an algorithm capable of determining the position of the robot from the camera data. The catadioptric optical system acquired a wide field of view using camera lenses and mirrors through central projection and a single viewpoint.
In [67], industrial robot programmers assisted by AR were asked to complete tool center point teaching, trajectory teaching, and overlap teaching tasks. The evaluation metrics used in the work were the efficiency of the performed work, the overall time spent to complete the task, and the mental load experienced by the robot programmer. The following trends were observed: (1) the mental load experienced by AR-assisted users was significantly lower than the load experienced by users performing the tasks without AR assistance, and (2) the time spent completing the task with AR support was significantly higher for non-experienced AR users. Thus, AR technology has the potential to reduce the mental load of industrial workers; however, time and training are needed for workers to get used to the AR system initially. In [38], AR was embedded in the remote control module of a teleoperated maintenance robot. In this system, a handheld manipulator wirelessly transmitted a control signal and manipulation instructions to the real maintenance robot (Figure 6a). An optical tracking system was utilized to identify the position of the handheld manipulator and to properly place the generated augmented scenes for the maintenance operator in the AR environment. In [68], AR was employed for on-the-fly control of aerial robots utilized in structural inspection. In [69], AR with haptic feedback was used during teleoperation of an unmanned aerial vehicle (UAV) equipped with a gamma-ray detector. With the help of this system, comprised of a 3-DoF haptic device, a fixed camera, a computer screen, and a nuclear radiation detector, an operator was able to remotely control a drone during its search for the source of nuclear radiation (Figure 6b). Later, Kundu et al. [70] integrated an AR interface with an omnidirectional robot for vision-based localization of a wheelchair during indoor navigation.
As shown by Zhu and Veloso [71], it is possible to interact with a smart flying robot that generates its route on-the-fly and executes the planned motion via an AR-capable interface. This interface can videotape the robot's motion and overlay the recorded video with synthetic data, allowing human operators to study the algorithms utilized by the robot during the flight.
Information from an AR simulation environment with a virtual robot was used to generate motion instructions for the real robot [72]. The presented framework was capable of simulating a pick-and-place task, in which the target object was placed into a computer numerical control (CNC) milling machine (Figure 6c). Lee et al. [73] showed that the position of the AR camera with respect to the robot link can be estimated with the help of SAR technology in cases when the position of the camera cannot be calculated using methods purely based on kinematics. In another application, robot status and navigation instructions were generated by an AR platform and sent to the human operator via an AR display [74]. Guhl et al. [75] developed an AR interface that established communication between devices and the human within an industrial environment. An AR system that utilized the Fuzzy Cognitive Map optimization algorithm for the real-time control of a mobile robot was presented in [76]. During the experiments, the robot was able to modify its intended motion following AR instructions comprised of glyphs and paths. An experimental setup containing an AR-supported vision system and haptic feedback technology, used for the remote control of a welding robot, was presented in [77].
Liu et al. [78] integrated an AR interface with the Temporal And-Or Graph (T-AOG) algorithm to provide a robot programmer with information on the robot’s hidden states (e.g., latent forces during interaction). In [79], an AR interface was used to program a seven degrees-of-freedom (DoF) manipulator via the method of augmented trajectories. The framework was able to perform trajectory generation, motion planning, parameter identification, and task execution in a simulation environment. In [80], a robot programming assistant framework, developed on the basis of the Google Tango AR computing platform, was described. This AR system was tested during the remote programming of two KUKA Lightweight Robots. During the study, the human operator was asked to modify the robot’s joint configuration, change the coordinates of the tool center point, control the gripper, and switch between different control modes. Later, Hoffmann and Daily [81] introduced an AR framework to present information obtained from a 3D simulation environment in the form of 2D instructions displayed on an AR screen. The system records data from a 2D camera and a 3D sensor in order to place virtual data on top of the 3D objects within a real-world environment. AR was integrated with tactile feedback technology in [82]. This system was designed to provide real-time assistance to a human operator during industrial robot programming and trajectory generation. The developed interface enabled more natural and efficient communication during collaborative work with an industrial robot thanks to the hand gestures used to interact with the AR interface.
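As a concrete illustration of the augmented-trajectory idea in [79], a sequence of waypoints placed by an operator in AR can be densified into a joint-space trajectory by simple linear interpolation. This is a minimal sketch under our own assumptions (the interpolation scheme and the joint values are illustrative); it is not the actual framework of [79].

```python
def interpolate_trajectory(waypoints, steps_per_segment=50):
    """Linearly interpolate joint-space waypoints into a dense trajectory.

    waypoints: list of joint-angle vectors (e.g., 7 values for a 7-DoF arm),
    such as configurations picked by an operator in an AR interface.
    """
    trajectory = []
    for start, end in zip(waypoints, waypoints[1:]):
        for k in range(steps_per_segment):
            t = k / steps_per_segment
            # Blend each joint between the two AR-picked configurations
            trajectory.append([(1 - t) * s + t * e for s, e in zip(start, end)])
    trajectory.append(list(waypoints[-1]))  # include the final waypoint exactly
    return trajectory

# Two AR-picked configurations for a 7-DoF arm (radians, illustrative only)
home = [0.0] * 7
target = [0.5, -0.3, 0.2, 1.0, 0.0, 0.4, -0.1]
traj = interpolate_trajectory([home, target], steps_per_segment=10)
print(len(traj), len(traj[0]))  # 11 7
```

In practice, such a densified trajectory would still be passed to a motion planner for collision checking before being executed on the real manipulator.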
Recent work describing the state of the art of simulation and design platforms for manufacturing was published by Mourtzis [83]. Additionally, Mourtzis and Zogopoulos [84] considered an AR-based interface to support an assembly process. The integration of AR into warehouse design in the papermaking industry was described in [85]. An AR-based framework for industrial robot programming is described in [86]. It was found that such an application can significantly ease the process of robot programming and motion planning, reducing the need for extensive training of human workers. An AR interface designed to provide assistance during robotic welding was presented by Ong et al. [87]. Another AR programming framework for welding robots was developed by Avalle et al. [88]. This tool helped a new user to complete complex welding tasks without expertise in robotics. Projection-based AR was used to assist human operators in robotic pick-and-place tasks in the work of Gong et al. [89]. Such an interface reduces the cognitive load of an operator involved in robot grasping by providing visual feedback during robot-based object manipulation in the real environment.
Integrating AR technology into industrial robot programming, optimal path generation, and the overall design of production processes has become a common technique in industry. Among the wide range of platforms developed for the creation of AR environments, the Unity game development platform is the most widely utilized framework. An AR environment allows mapping between the virtual and real robots and predicting the motion of the real robot with the help of the virtual one. Furthermore, in some applications, AR provides instructions to the human programmer of an industrial robot such that the operator can easily adapt to programming the new robotic system. In Table 2, we provide a short summary of the AR applications presented in this section, together with the technology used for the creation of the AR environments, the robotic platforms, and the observed limitations of the developed AR systems.
The numbers of papers indexed by the Scopus database published between 1995 and 2019 on robot control and planning are shown in Figure 7. These papers were classified into three groups (MR, AR, and VR). Similar to Figure 5 for medical robotics, the figure shows that the numbers of AR and VR papers on robot control and planning have been rising steadily, especially in the last five years. We also notice that the number of works in MR has not changed significantly. This might be due to the trend in the literature to combine the fields of AR and MR into one area known as AR. In our search, we used the following keywords throughout all sections of papers in the Scopus database: (1) ALL(“mixed reality” AND NOT “augmented reality” AND (“robot control” OR “robot planning”)), (2) ALL(“augmented reality” AND NOT “mixed reality” AND (“robot control” OR “robot planning”)), and (3) ALL(“virtual reality” AND (“robot control” OR “robot planning”)).
From the presented works, we infer that the integration of AR into the simulation process can lead to more efficient manufacturing with less cognitive load on the human operators. Furthermore, AR technology in the era of Industry 4.0 is going to serve as a new medium for human–robot collaboration in manufacturing. The developed applications and studies analyzed in this section reveal the promise of AR to facilitate human interaction with technology in industry. However, the training of specialists in manufacturing, assembly, and production needs to be transformed such that AR assistance increases efficiency and can be adopted seamlessly.

4.3. AR for Human-Robot Interaction

Wide-ranging applications of AR in HRI have been developed within the considered five-year period to enhance the human experience during interaction with robotic systems and wearables. The papers analyzed in this section were selected as follows. First, we collected 44 papers from the Scopus database by the union of two search strategies: (1) keywords in the title: (“augmented reality”) AND (“human–robot collaboration” OR “remote collaboration” OR “wearable robots” OR “remote control” OR “human-machine interaction” OR “human–robot interaction” OR “robotic teleoperation” OR “prosthesis” OR “brain-machine interface”); (2) keywords in the title and in the keywords section: (“augmented reality”) AND ((“augmented reality”) AND (“human–robot interaction”)). Then we removed survey papers, non-English papers, and papers not relevant to our topic. We also considered papers found by cross-referencing and ones suitable for the scope of this survey. In Table 3, 19 papers are selected for detailed analysis.
With the help of AR, it was possible to set up a remote collaborative workflow between a supervisor and a local user, as shown by Gurevich et al. [90]. In this system, an AR-based projector was placed on top of a mobile robot and enabled the transfer of information in a way that kept the user’s hands free to follow the remotely provided instructions. Another AR-based touchless interaction system allowed users to interact with the AR environment via hand/feet gestures [91]. In the work of Clemente et al. [92], an AR-based vision system was designed to deliver feedback to the sensory-motor control module of a robotic system during object manipulation. The system utilized two wearable devices, a haptic data glove and MR goggles, to facilitate user interaction during remote collaborative work with a robotic hand.
Gong et al. [93] developed an AR-based remote control system for real-time communication with “IWI”, a human-size traffic cop robot. For real-time video monitoring, a “Raspberry Pi” camera module was embedded into the police robot. This camera allowed human operators to send remote control commands to the robot based on the information transferred to AR goggles or tablet displays. Piumatti et al. [94] created a cloud-based robotic game integrated with Spatial Augmented Reality (SAR) technology. This game has different modules to set game logic, sound, graphics, artificial intelligence, and player tracking in order to develop and integrate proper AR projections into the gaming experience. Dinh et al. [95] leveraged AR for assisting a human operator working with a semi-automatic taping robot. The system used SAR and “Epson Moverio BT-200” AR goggles to generate visual instructions on the taping surface. Lin et al. [96] integrated AR technology into a robotic teleoperation system comprised of a wheelchair-mounted robotic manipulator. AR was employed to remotely reconstruct the 3D scene of the working area and generate a specific scene, which was then augmented onto the field of view of the user manipulating a virtual robotic arm. Finally, an AR-based control module converted gestures applied to the virtual object into control commands for the robotic arm.
Shchekoldin et al. [97] presented an AR framework to control a telepresence robot using an HMD integrated with an inertial measurement unit (IMU). The system utilized an adaptive algorithm that processed the IMU measurements and inferred the motion patterns (e.g., direction and magnitude of the angular velocity) due to the user’s head movement. An AR-based vision system for constructing the kinematic model of reconfigurable modular robots was developed in [98]. Each robot module was marked with an AR tag with a unique identifier. This way, the state of each link was acquired, and the distance between modules and the joint angles were computed. Mourtzis et al. [99] utilized cloud-based remote communication to establish an interaction between an expert and a remotely assisted maintenance worker within the AR environment (Figure 8a).
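The tag-based kinematic reconstruction in [98] can be illustrated with a toy computation: from the tracked poses of two module tags, the inter-module distance and relative angle follow directly. This is a minimal planar sketch; the pose format (x, y, theta) and the planar assumption are ours, not those of [98].

```python
import math

def relative_state(tag_a, tag_b):
    """Estimate the distance and relative angle between two robot modules
    from the poses of their AR tags. Each tag pose is (x, y, theta) in a
    common camera frame; a toy stand-in for the system in [98].
    """
    xa, ya, tha = tag_a
    xb, yb, thb = tag_b
    distance = math.hypot(xb - xa, yb - ya)
    # Wrap the relative orientation into (-pi, pi]
    angle = (thb - tha + math.pi) % (2 * math.pi) - math.pi
    return distance, angle

# Two tracked tags: module B is 0.5 m away and rotated 90 degrees
d, q = relative_state((0.0, 0.0, 0.0), (0.3, 0.4, math.pi / 2))
print(round(d, 2), round(q, 2))  # 0.5 1.57
```

Chaining such pairwise estimates along the module sequence yields the full kinematic chain of the reconfigurable robot.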
Recently, Makhataeva et al. [100] developed an AR-based visualization framework that aimed to increase an operator’s awareness of danger by augmenting his/her view in AR goggles with an aura around the operating robot (Figure 8c,d). The work introduced the concept of a safety aura, built on a safety metric based on distance and velocity. AR can also be applied to relay robot motion intent during human–robot communication. For example, Walker et al. [101] developed an AR-based communication framework to interact with an AscTec Hummingbird during its autonomous flight over a pre-defined trajectory. In later work, Walker et al. [102] introduced an AR interface that utilized a virtual model of the robots, such that before sending teleoperation commands to the real robot, a user could study the motion of the virtual robots. This interface was tested in a teleoperation scenario with an aerial robot designed to collect and process environmental data. In [103], AR was embedded in the control interface of a teleoperated aerial robot (Parrot Bebop quadcopter) programmed for data collection during environmental inspection. In [104], a novel teleoperation interface to remotely control robot manipulators for nuclear waste cleanup was proposed. This interface could enhance the performance of a human operator using multi-modal AR equipped with haptic feedback. ROS (Robot Operating System) was employed as the main platform for data sharing and system integration. Another example of robot teleoperation with AR-based visual feedback was presented by Brizzi et al. [39]. The authors discussed how AR can improve the sense of embodiment in HRI based on experiments with a Baxter robot teleoperated in an industrial assembly scenario.
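A distance-and-velocity safety metric of the kind underlying the safety aura of [100] can be sketched as a scalar in [0, 1] that drops as the human-robot distance shrinks and the approach speed grows. The thresholds and the equal weighting below are illustrative assumptions, not the metric published in [100].

```python
def safety_level(distance, approach_speed, d_safe=1.0, v_max=0.5):
    """Toy distance-and-velocity safety metric in the spirit of the
    safety aura of [100]; returns a value in [0, 1], where 1 is safe.
    d_safe (m) and v_max (m/s) are illustrative parameters.
    """
    # Distance term: saturates at 1 once the human is beyond d_safe
    distance_term = min(distance / d_safe, 1.0)
    # Speed term: 1 when not approaching, 0 at or above v_max
    speed_term = 1.0 - min(max(approach_speed, 0.0) / v_max, 1.0)
    return 0.5 * distance_term + 0.5 * speed_term

# Far away and stationary: fully safe
print(safety_level(2.0, 0.0))  # 1.0
# Close and approaching fast: unsafe
print(safety_level(0.2, 0.5))  # 0.1
```

Such a scalar could drive, for example, the color or opacity of the aura rendered around the robot in the operator’s AR goggles.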
In the area of wearable robotics, He et al. [105] developed an AR simulation system which provided a patient with a realistic experience of donning a myoelectric virtual prosthetic hand. The framework consisted of an AR display, a pose estimation module for aligning the virtual image of the prosthetic device with the real hand of a person, and a feedback system that allowed a user to control the virtual hand prosthesis with electromyography signals recorded from the hand. Meli et al. [106] presented an overview of robotic frameworks embedded with haptic feedback and AR. This work also highlighted the effectiveness of wearable finger haptics in AR for robot manipulation, guidance, and gaming. Wang et al. [107] integrated a BCI system with AR visual feedback in order to assist paralyzed patients in controlling a robot arm for grasping. The system was tested on five subjects. The obtained results indicated that the AR-based visual feedback system has an advantage over a standard camera-based one (control time and gripper aperture error were reduced to 5 s and 20%, respectively). Zeng et al. [108] embedded AR into a gaze-BCI system utilized in the closed-loop control of a robotic arm. During the experiments, eight subjects were asked to grasp and lift objects with a robot. The results were the following: the number of trigger commands necessary to complete the task was reduced, and the number of errors during the lifting process decreased by more than 50% in comparison to trials in which a vision system without AR feedback was utilized. Si-Mohammed et al. [109] introduced a BCI system with AR feedback to simplify the control of a mobile robot. Microsoft HoloLens MR goggles were used for visualization of the augmented environment.
In the area of HRI, Peng et al. [40] developed an AR-based fabrication framework named “Robotic Modeling Assistant” (RoMA). With this framework (Figure 8b), a user wearing AR goggles was able to control object modeling and printing performed by a ceiling-mounted robotic arm (Adept S850, Omron). Such a framework can accelerate design and 3D printing by allowing a user to interact with the printed object and make changes almost simultaneously. Urbani et al. [110] introduced an AR-based inter-device communication framework to monitor and adjust the operational parameters (e.g., status, shape, and position) of a multipurpose wrist-wearable robot. According to Williams et al. [111], the first workshop on “Virtual, Augmented, and Mixed Reality for Human-Robot Interactions” (VAM-HRI) brought together works in which AR, VR, and MR were integrated with different robotic systems. According to Sheridan [112], one of the major open problems in HRI is related to human safety during collaborative work with robots. The issue of safety during HRI was also addressed in [113]. Specifically, the authors considered the concept of a virtual barrier in order to protect human users during collaboration with robots in a manufacturing environment. In addition, AR was applied to concept-constrained learning from demonstration, as described in [114]. Using this method, a user wearing AR goggles could see the start and end positions of the robot motion and define the motion trajectories more intuitively.
AR technology for remote control and teleoperation has become popular in HRI research. In addition, the visualization capabilities of AR-based applications increase their usability in human–robot collaborative tasks. For example, AR increases efficiency during trajectory design and safety during the testing of various robots. A key technological breakthrough for HRI applications is the development of HMDs, e.g., the Microsoft HoloLens. ROS has emerged as the leading open-source platform for interfacing AR devices with robots. Around twenty AR applications in HRI, along with the technology utilized in these setups, the tested robotic systems, and the observed limitations, are summarized in Table 3.
Within the area of HRI, the numbers of papers in MR, AR, and VR indexed by Scopus between 1995 and 2019 are shown in Figure 9. The figure shows a significant rise in papers dedicated to human–robot interaction within AR and VR starting from 2016. Even though the numbers of papers in both AR and VR rise over the years, the number of papers on VR is almost double that of AR. The search for papers was conducted by using the following keywords throughout all sections of papers in the Scopus database: (1) ALL(“mixed reality” AND NOT “augmented reality” AND “human–robot interaction”), (2) ALL(“augmented reality” AND NOT “mixed reality” AND “human–robot interaction”), and (3) ALL(“virtual reality” AND “human–robot interaction”).
According to our literature analysis, we note that AR can significantly shape human perception of collaborative work and interaction with robotic systems. For example, with the help of AR, interaction can become more intuitive and natural to humans. This is especially the case when the AR environment is experienced via AR goggles worn by a human during communication, control, and study of robotic systems. Furthermore, the multitude of AR applications developed for remote control and teleoperation indicates that AR will be extensively utilized in the future, primarily due to the rise of Industry 4.0. However, limitations of AR technology such as calibration issues, problems with optics, and localization errors need to be further researched so that commercial AR applications for HRI can translate from research to the market.

4.4. AR-Based Swarm Robot Research

In this section, we consider AR applications developed for swarm robotics. We found nine papers in the Scopus database by searching for the following keywords in the title and abstract: ((“augmented reality”) AND (“robot swarm” OR “swarm robotics” OR “kilobots” OR “swarm e-pucks” OR “biohybrid systems”)). We removed papers irrelevant to the scope of this work and included additional papers through cross-referencing. From the final list of chosen papers, we selected seven for the analysis presented in Table 4.
In swarm robotics, AR can be realized through virtual sensing technology, as demonstrated by Reina et al. [115]. In their work, the authors presented an AR-based virtual sensing framework that consists of (1) a swarm of robots (15 e-pucks), each embedded with a virtual sensor, (2) a multi-camera tracking system that collects data on the real environment, and (3) a simulator that uses these data to generate augmented data and sends instructions to each robot based on the sensing range of its embedded virtual sensor (Figure 10a). With the help of virtual sensing, the motion of robot swarms can be studied more effectively in AR, whereas in the real environment the mini-robots have limited sensing. In addition, this system can help researchers control and design the motion of robot swarms within complex scenarios in an AR-based synthetic environment.
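The virtual sensing loop described above can be approximated in a few lines: an external tracker supplies global positions, and a simulator computes, for each robot, only what its virtual sensor would perceive. The proximity-sensor model below is a simplified stand-in for the actual multi-camera/e-puck implementation of [115].

```python
import math

def virtual_sensor_readings(positions, sensing_range):
    """Emulate a virtual proximity sensor for a robot swarm: from
    externally tracked (x, y) positions, compute for each robot the
    indices of neighbors within its virtual sensing range. A simplified
    sketch, not the framework of [115] itself.
    """
    readings = {}
    for i, (xi, yi) in enumerate(positions):
        neighbors = [
            j for j, (xj, yj) in enumerate(positions)
            if j != i and math.hypot(xj - xi, yj - yi) <= sensing_range
        ]
        readings[i] = neighbors
    return readings

# Three tracked robots; robot 2 is out of range of the other two
poses = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0)]
print(virtual_sensor_readings(poses, sensing_range=0.5))
# {0: [1], 1: [0], 2: []}
```

Each entry would then be transmitted only to the corresponding robot, so that the swarm behaves as if every mini-robot carried the richer sensor on board.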
AR can also be utilized to investigate the interaction between swarms of biohybrid (plant-robot) agents. In [41], AR was used to visualize the space covered by biohybrid swarms. Specifically, the system showed potential spatial changes as the plant-robot systems evolve or grow over time. As demonstrated by Omidshafiei et al. [116], AR was integrated with a robotic prototyping framework employed in research on cyber-physical systems. The AR platform assisted the user during hardware prototyping and algorithm testing by providing real-time visual feedback on the system’s hidden states.
Experiments with Kilobots in the ARGoS simulation environment were described by Pinciroli et al. [117]. The authors designed a plugin to cross-compile applications developed in the simulator for the real robot scenario (Figure 10d). Reina et al. [118] integrated AR into the control software of the Kilobot system. Within the developed platform, AR together with virtual sensing helped to extend the operational capabilities of the robots (Figure 10b,c). This made them suitable for experiments where advanced sensors and actuators were required. Later, an open-source platform was created for experiments with large numbers of Kilobots [119]. The performance of this system was studied in an experimental setup where robot swarms were organized to execute obstacle avoidance, site selection, plant watering, and pheromone-based foraging tasks.
In the recent work of Reina et al. [120], a group of Kilobots was programmed to perform actions during the filming of a short movie in which all actors were robots. The work described the software utilized for programming robot swarms during filming. Specifically, the authors wrote open-source code that transformed human-generated instructions into C code, which was later loaded onto the Kilobots to make them perform the corresponding actions in the real scene. In the work of Llenas et al. [121], the behavior of large swarms was observed to study the concept of stigmergic foraging. During the study, the ARK (Augmented Reality for Kilobots) platform was used as the main simulation framework. The study of collective foraging behavior was continued in [122], where the behaviors of around 200 Kilobots with access to sensing data from virtual sensors and actuators were studied. The ARGoS framework was used as the main experimental platform.
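Virtual pheromone fields of the kind used in these stigmergic foraging studies follow a simple deposit-and-evaporate update, maintained off-board and fed back to the robots as virtual sensor readings. The grid size and rates below are illustrative assumptions, not the values used in [121,122].

```python
def step_pheromone(grid, deposits, evaporation=0.1):
    """One update of a virtual pheromone field for stigmergic foraging:
    evaporate everywhere, then deposit at the given cells. A toy sketch;
    the evaporation rate and deposit amounts are illustrative.
    """
    # Evaporation: every cell decays by the same factor
    grid = [[(1.0 - evaporation) * v for v in row] for row in grid]
    # Deposits: robots add pheromone at their current cells
    for (r, c), amount in deposits.items():
        grid[r][c] += amount
    return grid

field = [[0.0] * 4 for _ in range(4)]
field = step_pheromone(field, {(1, 2): 1.0})  # a robot deposits at cell (1, 2)
field = step_pheromone(field, {})             # pure evaporation step
print(round(field[1][2], 2))  # 0.9
```

A foraging robot’s virtual pheromone sensor would then simply report the field value at the cell under its tracked position.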
The latest advances in swarm robotics research include simulation platforms such as ARK and ARGoS. These platforms enable the study of the motion of robot swarms integrated with virtual sensors and virtual actuators. This way, the environmental perception of mini-robots with limited sensing abilities can be enhanced by virtual sensors, allowing researchers to study their motion in more complex scenarios. The considered AR applications developed within robot swarm research, their technology, the robotic systems, and their limitations are listed in Table 4.
Within the research area of swarm robotics, the distribution of papers in MR, AR, and VR indexed by the Scopus database between 1995 and 2019 is shown in Figure 11. From the figure, we observe that the integration of MR, AR, and VR technologies into swarm robotics research is not yet widely addressed in the literature. However, in the period of 2015–2019, the number of papers published within AR and VR rose from five to around 30–35. The number of papers in MR is less than five throughout the considered time period. The papers for this figure were collected by using the following keywords in the Scopus database: (1) ALL(“mixed reality” AND NOT “augmented reality” AND (“swarm robotics” OR “robot swarms”)), (2) ALL(“augmented reality” AND NOT “mixed reality” AND (“swarm robotics” OR “robot swarms”)), and (3) ALL(“virtual reality” AND (“swarm robotics” OR “robot swarms”)).
The performed review revealed a lack of research on the integration of AR into robot swarms. However, even the limited introduction of AR to swarm robotics has shaped human perception of the simulation environment and significantly eased the design and understanding of collective behavior in robot groups. Currently, AR-based simulation platforms allow scientists to set up experiments involving thousands of mini-robots in simulation before actual real-world experiments. Even though few applications have been developed in this area of robotics so far, we foresee that further development of AR technology will significantly increase the number of works dedicated to AR in swarm robotics.

5. Conclusions

After reviewing the history of AR and current AR hardware and software, this work provided an extensive survey of AR research in robotics between 2015 and 2019. We covered AR applications in four categories: medical robotics, robot control and planning, human–robot interaction, and robot swarms. The wide range of AR use cases shows the ubiquitous nature of this technology and its potential to improve human lives and generate economic impact.
AR is a trending technology in the age of Industry 4.0, where different robotic devices in industrial processes can communicate wirelessly and humans can perceive the status of the robots and the performed operations via advanced visualization. Noting the many new AR applications for industrial robots, we envisage a paradigm shift in human–robot collaboration. Presumably, the information flow will be dominated by the robots. Visualization displays, advanced camera systems, and motion tracking, as well as novel algorithms and software packages in the field of computer vision, are making this shift faster. However, research should focus on how to prevent the mental overload of humans with excessive augmented information.
Even though advances in wearable devices enable the integration of AR in different areas of robotics, there are still issues that need to be addressed. For instance, current wearable devices have a limited field of view, poor tracking stability (especially in the presence of occlusions), and crude user interfaces for interaction with the 3D contents of the augmented environment. Despite the improvements in robotics thanks to AR, further research is needed before AR can be used in robotic systems outside of the laboratory. For the reliability and robustness of real-world applications, the complexity of the sensing elements and registration/tracking methods should be reduced. Moreover, accurate and semi-automated calibration is needed in order to integrate AR into robotic systems.
Future directions of AR research in robotics include the following: (1) novel methods for object localization and registration using artificial intelligence; specifically, during augmentation, advanced tracking and sensing systems should adjust automatically such that a virtual object can be placed precisely within an AR environment; (2) display systems with a wide field of view and high resolution; and (3) advanced AR user interfaces and simulation platforms that can precisely map simulated scenarios onto the real-world environment.

Author Contributions

Conceptualization, H.A.V.; Literature search, Z.M. and H.A.V.; Project administration, H.A.V.; Supervision, H.A.V.; Writing—original draft, Z.M.; and Writing—review and editing, Z.M. and H.A.V. All authors have read and agreed to the published version of the manuscript.


Funding

This work was supported by the grant “Methods for Safe Human Robot Interaction with VIA Robots” from the Ministry of Education and Science of the Republic of Kazakhstan and by the Nazarbayev University Faculty Development Program grant “Hardware and Software Based Methods for Safe Human-Robot Interaction with Variable Impedance Robots”.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study or in the decision to publish the results.


Abbreviations

The following abbreviations are used in this manuscript:
AR: Augmented Reality
BCI: Brain-Computer Interface
CNC: Computer Numerical Control
DL: Deep Learning
HMD: Head-Mounted Display
HRI: Human-Robot Interaction
MR: Mixed Reality
RAS: Robot-Assisted Surgery
ROS: Robot Operating System
SAR: Spatial Augmented Reality
SLAM: Simultaneous Localization and Mapping
VR: Virtual Reality


  1. Gorecky, D.; Schmitt, M.; Loskyll, M.; Zühlke, D. Human-machine-interaction in the Industry 4.0 era. In Proceedings of the 12th IEEE International Conference on Industrial Informatics (INDIN), Porto Alegre, Brazil, 27–30 July 2014; pp. 289–294. [Google Scholar]
  2. Hockstein, N.G.; Gourin, C.; Faust, R.; Terris, D.J. A history of robots: From science fiction to surgical robotics. J. Rob. Surg. 2007, 1, 113–118. [Google Scholar] [CrossRef] [PubMed]
  3. Kehoe, B.; Patil, S.; Abbeel, P.; Goldberg, K. A survey of research on cloud robotics and automation. IEEE Trans. Autom. Sci. Eng. 2015, 12, 398–409. [Google Scholar] [CrossRef]
  4. Azuma, R.T. A survey of augmented reality. Presence: Teleoperators Virtual Env. 1997, 6, 355–385. [Google Scholar] [CrossRef]
  5. Azuma, R.; Baillot, Y.; Behringer, R.; Feiner, S.; Julier, S.; MacIntyre, B. Recent advances in augmented reality. IEEE Comput. Graphics Appl. 2001, 21, 34–47. [Google Scholar] [CrossRef]
  6. Kato, H.; Billinghurst, M. Marker tracking and HMD calibration for a video-based augmented reality conferencing system. In Proceedings of the 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR’99), San Francisco, CA, USA, 20–21 October 1999; pp. 85–94. [Google Scholar]
  7. Zhang, X.; Fronz, S.; Navab, N. Visual marker detection and decoding in AR systems: A comparative study. In Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Darmstadt, Germany, 1 October 2002; pp. 97–106. [Google Scholar]
  8. Welch, G.; Foxlin, E. Motion tracking survey. IEEE Comput. Graphics Appl. 2002, 22, 24–38. [Google Scholar] [CrossRef]
  9. Zhou, F.; Duh, H.B.L.; Billinghurst, M. Trends in augmented reality tracking, interaction and display: A review of ten years of ISMAR. In Proceedings of the IEEE/ACM International Symposium on Mixed and Augmented Reality (ISMAR), Cambridge, UK, 15–18 September 2008; pp. 193–202. [Google Scholar]
  10. Wang, X.; Ong, S.K.; Nee, A.Y. A comprehensive survey of augmented reality assembly research. Adv. Manuf. 2016, 4, 1–22. [Google Scholar] [CrossRef]
  11. Fuentes-Pacheco, J.; Ruiz-Ascencio, J.; Rendón-Mancha, J.M. Visual simultaneous localization and mapping: A survey. Artif. Intell. Rev. 2015, 43, 55–81. [Google Scholar] [CrossRef]
  12. Marchand, E.; Uchiyama, H.; Spindler, F. Pose estimation for augmented reality: A hands-on survey. IEEE Trans. Visual Comput. Graphics 2015, 22, 2633–2651. [Google Scholar] [CrossRef]
  13. Mekni, M.; Lemieux, A. Augmented reality: Applications, challenges and future trends. Appl. Comput. Sci. 2014, 205–214. [Google Scholar]
  14. Nicolau, S.; Soler, L.; Mutter, D.; Marescaux, J. Augmented reality in laparoscopic surgical oncology. Surgical Oncol. 2011, 20, 189–201. [Google Scholar] [CrossRef]
  15. Nee, A.Y.; Ong, S.; Chryssolouris, G.; Mourtzis, D. Augmented reality applications in design and manufacturing. CIRP Annals 2012, 61, 657–679. [Google Scholar] [CrossRef]
  16. Chatzopoulos, D.; Bermejo, C.; Huang, Z.; Hui, P. Mobile augmented reality survey: From where we are to where we go. IEEE Access 2017, 5, 6917–6950. [Google Scholar] [CrossRef]
  17. Sutherland, I.E. The Ultimate Display. In Proceedings of the IFIP Congress; Macmillan and Co.: London, UK, 1965; pp. 506–508. [Google Scholar]
  18. Sutherland, I.E. A head-mounted three dimensional display. In Proceedings of the Fall Joint Computer Conference, New York, NY, USA, 9–11 December 1968; pp. 757–764. [Google Scholar]
  19. Teitel, M.A. The Eyephone: A head-mounted stereo display. In Stereoscopic Displays and Applications; SPIE: Bellingham, DC, USA, 1990; Volume 1256, pp. 168–171. [Google Scholar]
  20. MacKenzie, I.S. Input devices and interaction techniques for advanced computing. In Virtual Environments and Advanced Interface Design; Oxford University Press: Oxford, UK, June 1995; pp. 437–470. [Google Scholar]
  21. Rosenberg, L.B. The Use of Virtual Fixtures as Perceptual Overlays to Enhance Operator Performance in Remote Environments; Technical Report; Stanford University Center for Design Research: Stanford, CA, USA, 1992. [Google Scholar]
  22. Rosenberg, L.B. Virtual fixtures: Perceptual tools for telerobotic manipulation. In Proceedings of the IEEE Virtual Reality Annual International Symposium, Seattle, WA, USA, 18–22 September 1993; pp. 76–82. [Google Scholar]
  23. Milgram, P.; Kishino, F. A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst. 1994, 77, 1321–1329. [Google Scholar]
  24. Durrant-Whyte, H.; Bailey, T. Simultaneous Localization and Mapping: Part I. IEEE Robot. Autom. Mag. 2006, 13, 99–110. [Google Scholar] [CrossRef]
  25. Rapp, D.; Müller, J.; Bucher, K.; von Mammen, S. Pathomon: A Social Augmented Reality Serious Game. In Proceedings of the International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), Würzburg, Germany, 5–7 September 2018. [Google Scholar]
  26. Bodner, J.; Wykypiel, H.; Wetscher, G.; Schmid, T. First experiences with the da Vinci operating robot in thoracic surgery. Eur. J. Cardio-Thorac. Surg. 2004, 25, 844–851. [Google Scholar] [CrossRef]
  27. Vuforia Developer Portal. Available online: (accessed on 28 January 2020).
  28. Malỳ, I.; Sedláček, D.; Leitao, P. Augmented reality experiments with industrial robot in Industry 4.0 environment. In Proceedings of the International Conference on Industrial Informatics (INDIN), Poitiers, France, 19–21 July 2016; pp. 176–181. [Google Scholar]
  29. ARKit. Developer Documentation, Apple Inc. Available online: (accessed on 10 January 2020).
  30. ARCore. Developer Documentation, Google Inc. Available online: (accessed on 15 January 2020).
  31. Balan, A.; Flaks, J.; Hodges, S.; Isard, M.; Williams, O.; Barham, P.; Izadi, S.; Hilliges, O.; Molyneaux, D.; Kim, D.; et al. Distributed Asynchronous Localization and Mapping for Augmented Reality. U.S. Patent 8,933,931, 13 January 2015. [Google Scholar]
  32. Balachandreswaran, D.; Njenga, K.M.; Zhang, J. Augmented Reality System and Method for Positioning and Mapping. U.S. Patent App. 14/778,855, 21 July 2016. [Google Scholar]
  33. Lalonde, J.F. Deep Learning for Augmented Reality. In Proceedings of the IEEE Workshop on Information Optics (WIO), Quebec, Canada, 16–19 July 2018; pp. 1–3. [Google Scholar]
  34. Park, Y.J.; Ro, H.; Han, T.D. Deep-ChildAR bot: Educational activities and safety care augmented reality system with deep learning for preschool. In Proceedings of the ACM SIGGRAPH Posters, Los Angeles, CA, USA, 28 July–1 August 2019; p. 26. [Google Scholar]
  35. Azuma, R.; Bishop, G. Improving static and dynamic registration in an optical see-through HMD. In Proceedings of the Annual Conference on Computer Graphics and Interactive Techniques, Orlando, FL, USA, 24–29 July 1994; pp. 197–204. [Google Scholar]
  36. Takagi, A.; Yamazaki, S.; Saito, Y.; Taniguchi, N. Development of a stereo video see-through HMD for AR systems. In Proceedings of the IEEE/ACM International Symposium on Augmented Reality (ISAR), Munich, Germany, 5–6 October 2000; pp. 68–77. [Google Scholar]
  37. Behzadan, A.H.; Timm, B.W.; Kamat, V.R. General-purpose modular hardware and software framework for mobile outdoor augmented reality applications in engineering. Adv. Eng. Inform. 2008, 22, 90–105. [Google Scholar] [CrossRef]
  38. Yew, A.; Ong, S.; Nee, A. Immersive augmented reality environment for the teleoperation of maintenance robots. Procedia CIRP 2017, 61, 305–310. [Google Scholar] [CrossRef]
  39. Brizzi, F.; Peppoloni, L.; Graziano, A.; Di Stefano, E.; Avizzano, C.A.; Ruffaldi, E. Effects of augmented reality on the performance of teleoperated industrial assembly tasks in a robotic embodiment. IEEE Trans. Hum. Mach. Syst. 2017, 48, 197–206. [Google Scholar] [CrossRef]
  40. Peng, H.; Briggs, J.; Wang, C.Y.; Guo, K.; Kider, J.; Mueller, S.; Baudisch, P.; Guimbretière, F. RoMA: Interactive fabrication with augmented reality and a robotic 3D printer. In Proceedings of the ACM CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; p. 579. [Google Scholar]
  41. von Mammen, S.; Hamann, H.; Heider, M. Robot gardens: An augmented reality prototype for plant-robot biohybrid systems. In Proceedings of the ACM Conference on Virtual Reality Software and Technology, Munich, Germany, 2–4 November 2016; pp. 139–142. [Google Scholar]
  42. Lee, A.; Jang, I. Robust Multithreaded Object Tracker through Occlusions for Spatial Augmented Reality. Etri J. 2018, 40, 246–256. [Google Scholar] [CrossRef]
  43. Diana, M.; Marescaux, J. Robotic surgery. Br. J. Surg. 2015, 102, e15–e28. [Google Scholar] [CrossRef] [PubMed]
  44. Chowriappa, A.; Raza, S.J.; Fazili, A.; Field, E.; Malito, C.; Samarasekera, D.; Shi, Y.; Ahmed, K.; Wilding, G.; Kaouk, J.; et al. Augmented-reality-based skills training for robot-assisted urethrovesical anastomosis: A multi-institutional randomised controlled trial. BJU Int. 2015, 115, 336–345. [Google Scholar] [CrossRef] [PubMed]
  45. Costa, N.; Arsenio, A. Augmented reality behind the wheel-human interactive assistance by mobile robots. In Proceedings of the IEEE International Conference on Automation, Robotics and Applications (ICARA), Queenstown, New Zealand, 17–19 February 2015; pp. 63–69. [Google Scholar]
  46. Ocampo, R.; Tavakoli, M. Visual-Haptic Colocation in Robotic Rehabilitation Exercises Using a 2D Augmented-Reality Display. In Proceedings of the IEEE International Symposium on Medical Robotics (ISMR), Atlanta, GA, USA, 3–5 April 2019. [Google Scholar]
  47. Pessaux, P.; Diana, M.; Soler, L.; Piardi, T.; Mutter, D.; Marescaux, J. Towards cybernetic surgery: Robotic and augmented reality-assisted liver segmentectomy. Langenbeck’s Arch. Surg. 2015, 400, 381–385. [Google Scholar] [CrossRef] [PubMed]
  48. Liu, W.P.; Richmon, J.D.; Sorger, J.M.; Azizian, M.; Taylor, R.H. Augmented reality and cone beam CT guidance for transoral robotic surgery. J. Robot. Surg. 2015, 9, 223–233. [Google Scholar] [CrossRef] [PubMed]
  49. Navab, N.; Hennersperger, C.; Frisch, B.; Fürst, B. Personalized, relevance-based multimodal robotic imaging and augmented reality for computer assisted interventions. Med Image Anal. 2016, 33, 64–71. [Google Scholar] [CrossRef] [PubMed]
  50. Dickey, R.M.; Srikishen, N.; Lipshultz, L.I.; Spiess, P.E.; Carrion, R.E.; Hakky, T.S. Augmented reality assisted surgery: A urologic training tool. Asian J. Androl. 2016, 18, 732–734. [Google Scholar]
  51. Lin, L.; Shi, Y.; Tan, A.; Bogari, M.; Zhu, M.; Xin, Y.; Xu, H.; Zhang, Y.; Xie, L.; Chai, G. Mandibular angle split osteotomy based on a novel augmented reality navigation using specialized robot-assisted arms—A feasibility study. J. Cranio Maxillofac. Surg. 2016, 44, 215–223. [Google Scholar] [CrossRef]
  52. Wang, J.; Suenaga, H.; Yang, L.; Kobayashi, E.; Sakuma, I. Video see-through augmented reality for oral and maxillofacial surgery. Int. J. Med Robot. Comput. Assist. Surg. 2017, 13, e1754. [Google Scholar] [CrossRef]
  53. Zhou, C.; Zhu, M.; Shi, Y.; Lin, L.; Chai, G.; Zhang, Y.; Xie, L. Robot-assisted surgery for mandibular angle split osteotomy using augmented reality: Preliminary results on clinical animal experiment. Aesthetic Plast. Surg. 2017, 41, 1228–1236. [Google Scholar] [CrossRef]
  54. Bostick, J.E.; Ganci, J.J.M.; Keen, M.G.; Rakshit, S.K.; Trim, C.M. Augmented Control of Robotic Prosthesis by a Cognitive System. U.S. Patent 9,717,607, 1 August 2017. [Google Scholar]
  55. Qian, L.; Deguet, A.; Kazanzides, P. ARssist: Augmented reality on a head-mounted display for the first assistant in robotic surgery. Healthc. Technol. Lett. 2018, 5, 194–200. [Google Scholar] [CrossRef]
  56. Porpiglia, F.; Checcucci, E.; Amparore, D.; Autorino, R.; Piana, A.; Bellin, A.; Piazzolla, P.; Massa, F.; Bollito, E.; Gned, D.; et al. Augmented-reality robot-assisted radical prostatectomy using hyper-accuracy three-dimensional reconstruction (HA 3D) technology: A radiological and pathological study. BJU Int. 2019, 123, 834–845. [Google Scholar] [CrossRef]
  57. Bernhardt, S.; Nicolau, S.A.; Soler, L.; Doignon, C. The status of augmented reality in laparoscopic surgery as of 2016. Med. Image Anal. 2017, 37, 66–90. [Google Scholar] [CrossRef] [PubMed]
  58. Madhavan, K.; Kolcun, J.P.C.; Chieng, L.O.; Wang, M.Y. Augmented-reality integrated robotics in neurosurgery: Are We There Yet? Neurosurg. Focus 2017, 42, E3. [Google Scholar] [CrossRef] [PubMed]
  59. Qian, L.; Deguet, A.; Wang, Z.; Liu, Y.H.; Kazanzides, P. Augmented reality assisted instrument insertion and tool manipulation for the first assistant in robotic surgery. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada, 20–24 May 2019; pp. 5173–5179. [Google Scholar]
  60. Hanna, M.G.; Ahmed, I.; Nine, J.; Prajapati, S.; Pantanowitz, L. Augmented reality technology using Microsoft HoloLens in anatomic pathology. Arch. Pathol. Lab. Med. 2018, 142, 638–644. [Google Scholar] [CrossRef] [PubMed]
  61. Quero, G.; Lapergola, A.; Soler, L.; Shabaz, M.; Hostettler, A.; Collins, T.; Marescaux, J.; Mutter, D.; Diana, M.; Pessaux, P. Virtual and augmented reality in oncologic liver surgery. Surg. Oncol. Clin. 2019, 28, 31–44. [Google Scholar] [CrossRef] [PubMed]
  62. Qian, L.; Zhang, X.; Deguet, A.; Kazanzides, P. ARAMIS: Augmented Reality Assistance for Minimally Invasive Surgery Using a Head-Mounted Display. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Shenzhen, China, 13–17 October 2019; pp. 74–82. [Google Scholar]
  63. Qian, L.; Wu, J.Y.; DiMaio, S.P.; Navab, N.; Kazanzides, P. A Review of Augmented Reality in Robotic-Assisted Surgery. IEEE Trans. Med Robot. Bionics 2020, 2, 1–16. [Google Scholar] [CrossRef]
  64. Gacem, H.; Bailly, G.; Eagan, J.; Lecolinet, E. Finding objects faster in dense environments using a projection augmented robotic arm. In Proceedings of the IFIP Conference on Human-Computer Interaction, Bamberg, Germany, 14–18 September 2015; pp. 221–238. [Google Scholar]
  65. Kuriya, R.; Tsujimura, T.; Izumi, K. Augmented reality robot navigation using infrared marker. In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (ROMAN), Kobe, Japan, 31 August–4 September 2015; pp. 450–455. [Google Scholar]
  66. Dias, T.; Miraldo, P.; Gonçalves, N.; Lima, P.U. Augmented reality on robot navigation using non-central catadioptric cameras. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 4999–5004. [Google Scholar]
  67. Stadler, S.; Kain, K.; Giuliani, M.; Mirnig, N.; Stollnberger, G.; Tscheligi, M. Augmented reality for industrial robot programmers: Workload analysis for task-based, augmented reality-supported robot control. In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (ROMAN), New York City, NY, USA, 26–31 August 2016; pp. 179–184. [Google Scholar]
  68. Papachristos, C.; Alexis, K. Augmented reality-enhanced structural inspection using aerial robots. In Proceedings of the IEEE International Symposium on Intelligent Control (ISIC), Buenos Aires, Argentina, 19–22 September 2016; pp. 185–190. [Google Scholar]
  69. Aleotti, J.; Micconi, G.; Caselli, S.; Benassi, G.; Zambelli, N.; Bettelli, M.; Zappettini, A. Detection of nuclear sources by UAV teleoperation using a visuo-haptic augmented reality interface. Sensors 2017, 17, 2234. [Google Scholar] [CrossRef]
  70. Kundu, A.S.; Mazumder, O.; Dhar, A.; Lenka, P.K.; Bhaumik, S. Scanning camera and augmented reality based localization of omnidirectional robot for indoor application. Procedia Comput. Sci. 2017, 105, 27–33. [Google Scholar] [CrossRef]
  71. Zhu, D.; Veloso, M. Virtually adapted reality and algorithm visualization for autonomous robots. In Robot World Cup; Springer: Berlin/Heidelberg, Germany, 2016; pp. 452–464. [Google Scholar]
  72. Pai, Y.S.; Yap, H.J.; Dawal, S.Z.M.; Ramesh, S.; Phoon, S.Y. Virtual planning, control, and machining for a modular-based automated factory operation in an augmented reality environment. Sci. Rep. 2016, 6, 27380. [Google Scholar] [CrossRef]
  73. Lee, A.; Lee, J.H.; Kim, J. Data-Driven Kinematic Control for Robotic Spatial Augmented Reality System with Loose Kinematic Specifications. ETRI J. 2016, 38, 337–346. [Google Scholar] [CrossRef]
  74. Kamoi, T.; Inaba, G. Robot System Having Augmented Reality-compatible Display. U.S. Patent App. 14/937,883, 9 June 2016. [Google Scholar]
  75. Guhl, J.; Tung, S.; Kruger, J. Concept and architecture for programming industrial robots using augmented reality with mobile devices like Microsoft HoloLens. In Proceedings of the IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Limassol, Cyprus, 12–15 September 2017; pp. 1–4. [Google Scholar]
  76. Malayjerdi, E.; Yaghoobi, M.; Kardan, M. Mobile robot navigation based on Fuzzy Cognitive Map optimized with Grey Wolf Optimization Algorithm used in Augmented Reality. In Proceedings of the IEEE/RSI International Conference on Robotics and Mechatronics (ICRoM), Tehran, Iran, 25–27 October 2017; pp. 211–218. [Google Scholar]
  77. Ni, D.; Yew, A.; Ong, S.; Nee, A. Haptic and visual augmented reality interface for programming welding robots. Adv. Manuf. 2017, 5, 191–198. [Google Scholar] [CrossRef]
  78. Liu, H.; Zhang, Y.; Si, W.; Xie, X.; Zhu, Y.; Zhu, S.C. Interactive robot knowledge patching using augmented reality. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 1947–1954. [Google Scholar]
  79. Quintero, C.P.; Li, S.; Pan, M.K.; Chan, W.P.; Van der Loos, H.M.; Croft, E. Robot programming through augmented trajectories in augmented reality. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1838–1844. [Google Scholar]
  80. Gradmann, M.; Orendt, E.M.; Schmidt, E.; Schweizer, S.; Henrich, D. Augmented Reality Robot Operation Interface with Google Tango. In Proceedings of the International Symposium on Robotics (ISR), Munich, Germany, 20–21 June 2018; pp. 170–177. [Google Scholar]
  81. Hoffmann, H.; Daily, M.J. System and Method for Robot Supervisory Control with an Augmented Reality User Interface. U.S. Patent 9,880,553, 30 January 2018. [Google Scholar]
  82. Chan, W.P.; Quintero, C.P.; Pan, M.K.; Sakr, M.; Van der Loos, H.M.; Croft, E. A Multimodal System using Augmented Reality, Gestures, and Tactile Feedback for Robot Trajectory Programming and Execution. In Proceedings of the ICRA Workshop on Robotics in Virtual Reality, Brisbane, Australia, 21–25 May 2018. [Google Scholar]
  83. Mourtzis, D. Simulation in the design and operation of manufacturing systems: State of the art and new trends. Int. J. Prod. Res. 2019, 1–23. [Google Scholar] [CrossRef]
  84. Mourtzis, D.; Zogopoulos, V. Augmented reality application to support the assembly of highly customized products and to adapt to production re-scheduling. Int. J. Adv. Manuf. Technol. 2019, 1–12. [Google Scholar] [CrossRef]
  85. Mourtzis, D.; Samothrakis, V.; Zogopoulos, V.; Vlachou, E. Warehouse Design and Operation using Augmented Reality technology: A Papermaking Industry Case Study. Procedia CIRP 2019, 79, 574–579. [Google Scholar] [CrossRef]
  86. Ong, S.; Yew, A.; Thanigaivel, N.; Nee, A. Augmented reality-assisted robot programming system for industrial applications. Robot. Comput. Integr. Manuf. 2020, 61, 101820. [Google Scholar] [CrossRef]
  87. Ong, S.K.; Nee, A.Y.C.; Yew, A.W.W.; Thanigaivel, N.K. AR-assisted robot welding programming. Adv. Manuf. 2020, 8, 40–48. [Google Scholar] [CrossRef]
  88. Avalle, G.; De Pace, F.; Fornaro, C.; Manuri, F.; Sanna, A. An Augmented Reality System to Support Fault Visualization in Industrial Robotic Tasks. IEEE Access 2019, 7, 132343–132359. [Google Scholar] [CrossRef]
  89. Gong, L.; Ong, S.; Nee, A. Projection-based augmented reality interface for robot grasping tasks. In Proceedings of the International Conference on Robotics, Control and Automation, Guangzhou, China, 26–28 July 2019; pp. 100–104. [Google Scholar]
  90. Gurevich, P.; Lanir, J.; Cohen, B. Design and implementation of TeleAdvisor: A projection-based augmented reality system for remote collaboration. Comput. Support. Coop. Work (CSCW) 2015, 24, 527–562. [Google Scholar] [CrossRef]
  91. Lv, Z.; Halawani, A.; Feng, S.; Ur Rehman, S.; Li, H. Touch-less interactive augmented reality game on vision-based wearable device. Pers. Ubiquitous Comput. 2015, 19, 551–567. [Google Scholar] [CrossRef]
  92. Clemente, F.; Dosen, S.; Lonini, L.; Markovic, M.; Farina, D.; Cipriani, C. Humans can integrate augmented reality feedback in their sensorimotor control of a robotic hand. IEEE Trans. Hum. Mach. Syst. 2016, 47, 583–589. [Google Scholar] [CrossRef]
  93. Gong, L.; Gong, C.; Ma, Z.; Zhao, L.; Wang, Z.; Li, X.; Jing, X.; Yang, H.; Liu, C. Real-time human-in-the-loop remote control for a life-size traffic police robot with multiple augmented reality aided display terminals. In Proceedings of the IEEE International Conference on Advanced Robotics and Mechatronics (ICARM), Hefei & Tai’an, China, 27–31 August 2017; pp. 420–425. [Google Scholar]
  94. Piumatti, G.; Sanna, A.; Gaspardone, M.; Lamberti, F. Spatial augmented reality meets robots: Human-machine interaction in cloud-based projected gaming environments. In Proceedings of the IEEE International Conference on Consumer Electronics (ICCE), Berlin, Germany, 3–6 September 2017; pp. 176–179. [Google Scholar]
  95. Dinh, H.; Yuan, Q.; Vietcheslav, I.; Seet, G. Augmented reality interface for taping robot. In Proceedings of the IEEE International Conference on Advanced Robotics (ICAR), Hong Kong, China, 10–12 July 2017; pp. 275–280. [Google Scholar]
  96. Lin, Y.; Song, S.; Meng, M.Q.H. The implementation of augmented reality in a robotic teleoperation system. In Proceedings of the IEEE International Conference on Real-time Computing and Robotics (RCAR), Angkor Wat, Cambodia, 6–9 June 2016; pp. 134–139. [Google Scholar]
  97. Shchekoldin, A.I.; Shevyakov, A.D.; Dema, N.U.; Kolyubin, S.A. Adaptive head movements tracking algorithms for AR interface controlled telepresence robot. In Proceedings of the IEEE International Conference on Methods and Models in Automation and Robotics (MMAR), Miedzyzdroje, Poland, 28–31 August 2017; pp. 728–733. [Google Scholar]
  98. Lin, K.; Rojas, J.; Guan, Y. A vision-based scheme for kinematic model construction of re-configurable modular robots. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 2751–2757. [Google Scholar]
  99. Mourtzis, D.; Zogopoulos, V.; Vlachou, E. Augmented reality application to support remote maintenance as a service in the robotics industry. Procedia CIRP 2017, 63, 46–51. [Google Scholar] [CrossRef]
  100. Makhataeva, Z.; Zhakatayev, A.; Varol, H.A. Safety Aura Visualization for Variable Impedance Actuated Robots. In Proceedings of the IEEE/SICE International Symposium on System Integration (SII), Paris, France, 14–16 January 2019; pp. 805–810. [Google Scholar]
  101. Walker, M.; Hedayati, H.; Lee, J.; Szafir, D. Communicating robot motion intent with augmented reality. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), Chicago, IL, USA, 5–8 March 2018; pp. 316–324. [Google Scholar]
  102. Walker, M.E.; Hedayati, H.; Szafir, D. Robot Teleoperation with Augmented Reality Virtual Surrogates. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Korea, 11–14 March 2019; pp. 202–210. [Google Scholar]
  103. Hedayati, H.; Walker, M.; Szafir, D. Improving collocated robot teleoperation with augmented reality. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), Chicago, IL, USA, 5–8 March 2018; pp. 78–86. [Google Scholar]
  104. Lee, D.; Park, Y.S. Implementation of Augmented Teleoperation System Based on Robot Operating System (ROS). In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 5497–5502. [Google Scholar]
  105. He, Y.; Fukuda, O.; Ide, S.; Okumura, H.; Yamaguchi, N.; Bu, N. Simulation system for myoelectric hand prosthesis using augmented reality. In Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO), Parisian Macao, China, 5–8 December 2017; pp. 1424–1429. [Google Scholar]
  106. Meli, L.; Pacchierotti, C.; Salvietti, G.; Chinello, F.; Maisto, M.; De Luca, A.; Prattichizzo, D. Combining wearable finger haptics and Augmented Reality: User evaluation using an external camera and the Microsoft HoloLens. IEEE Robot. Autom. Lett. 2018, 3, 4297–4304. [Google Scholar] [CrossRef]
  107. Wang, Y.; Zeng, H.; Song, A.; Xu, B.; Li, H.; Zhu, L.; Wen, P.; Liu, J. Robotic arm control using hybrid brain-machine interface and augmented reality feedback. In Proceedings of the IEEE/EMBS International Conference on Neural Engineering (NER), Shanghai, China, 25–28 May 2017; pp. 411–414. [Google Scholar]
  108. Zeng, H.; Wang, Y.; Wu, C.; Song, A.; Liu, J.; Ji, P.; Xu, B.; Zhu, L.; Li, H.; Wen, P. Closed-loop hybrid gaze brain-machine interface based robotic arm control with augmented reality feedback. Front. Neurorobotics 2017, 11, 60. [Google Scholar] [CrossRef] [PubMed]
  109. Si-Mohammed, H.; Petit, J.; Jeunet, C.; Argelaguet, F.; Spindler, F.; Evain, A.; Roussel, N.; Casiez, G.; Anatole, L. Towards BCI-based Interfaces for Augmented Reality: Feasibility, Design and Evaluation. IEEE Trans. Vis. Comput. Graph. 2018, 26, 1608–1621. [Google Scholar] [CrossRef]
  110. Urbani, J.; Al-Sada, M.; Nakajima, T.; Höglund, T. Exploring Augmented Reality Interaction for Everyday Multipurpose Wearable Robots. In Proceedings of the IEEE International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA), Hakodate, Japan, 28–31 August 2018; pp. 209–216. [Google Scholar]
  111. Williams, T.; Szafir, D.; Chakraborti, T.; Ben, A.H. Virtual, augmented, and mixed reality for human–robot interaction. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), Chicago, IL, USA, 5–8 March 2018; pp. 403–404. [Google Scholar]
  112. Sheridan, T.B. Human-robot interaction: Status and challenges. Hum. Factors 2016, 58, 525–532. [Google Scholar] [CrossRef] [PubMed]
  113. Chan, W.P.; Karim, A.; Quintero, C.P.; Van der Loos, H.M.; Croft, E. Virtual barriers in augmented reality for safe human–robot collaboration in manufacturing. In Proceedings of the Robotic Co-workers 4.0, Madrid, Spain, 5 October 2018. [Google Scholar]
  114. Luebbers, M.B.; Brooks, C.; Kim, M.J.; Szafir, D.; Hayes, B. Augmented reality interface for constrained learning from demonstration. In Proceedings of the International Workshop on Virtual, Augmented, and Mixed Reality for HRI (VAM-HRI), Daegu, Korea, 11–14 March 2019. [Google Scholar]
  115. Reina, A.; Salvaro, M.; Francesca, G.; Garattoni, L.; Pinciroli, C.; Dorigo, M.; Birattari, M. Augmented reality for robots: Virtual sensing technology applied to a swarm of e-pucks. In Proceedings of the NASA/ESA Conference on Adaptive Hardware and Systems (AHS), Montreal, QC, Canada, 15–18 June 2015. [Google Scholar]
  116. Omidshafiei, S.; Agha-Mohammadi, A.A.; Chen, Y.F.; Ure, N.K.; Liu, S.Y.; Lopez, B.T.; Surati, R.; How, J.P.; Vian, J. Measurable augmented reality for prototyping cyberphysical systems: A robotics platform to aid the hardware prototyping and performance testing of algorithms. IEEE Control. Syst. Mag. 2016, 36, 65–87. [Google Scholar]
  117. Pinciroli, C.; Talamali, M.S.; Reina, A.; Marshall, J.A.; Trianni, V. Simulating Kilobots within ARGoS: Models and experimental validation. In Proceedings of the International Conference on Swarm Intelligence (ICSI), Shanghai, China, 17–22 June 2018; pp. 176–187. [Google Scholar]
  118. Reina, A.; Cope, A.J.; Nikolaidis, E.; Marshall, J.A.; Sabo, C. ARK: Augmented reality for Kilobots. IEEE Robot. Autom. Lett. 2017, 2, 1755–1761. [Google Scholar] [CrossRef]
  119. Valentini, G.; Antoun, A.; Trabattoni, M.; Wiandt, B.; Tamura, Y.; Hocquard, E.; Trianni, V.; Dorigo, M. Kilogrid: A novel experimental environment for the kilobot robot. Swarm Intell. 2018, 12, 245–266. [Google Scholar] [CrossRef]
  120. Reina, A.; Ioannou, V.; Chen, J.; Lu, L.; Kent, C.; Marshall, J.A. Robots as Actors in a Film: No War, A Robot Story. arXiv 2019, arXiv:1910.12294. [Google Scholar]
  121. Llenas, A.F.; Talamali, M.S.; Xu, X.; Marshall, J.A.; Reina, A. Quality-sensitive foraging by a robot swarm through virtual pheromone trails. In Proceedings of the International Conference on Swarm Intelligence, Shanghai, China, 17–22 June 2018; pp. 135–149. [Google Scholar]
  122. Talamali, M.S.; Bose, T.; Haire, M.; Xu, X.; Marshall, J.A.; Reina, A. Sophisticated Collective Foraging with Minimalist Agents: A Swarm Robotics Test. Swarm Intell. 2019, 6, 30–34. [Google Scholar] [CrossRef]
Figure 1. Milgram’s reality–virtuality continuum (adapted from [5,23]).
Figure 2. Historical trends of the Mixed Reality (MR), Augmented Reality (AR), and Virtual Reality (VR) keywords in the papers indexed by the Scopus database.
Figure 3. Illustrations of the three classes of AR technology.
Figure 4. AR systems in RAS: (a) Visualization of a transparent body phantom in ARssist [55], (b,c) examples of AR-based visualization of endoscopy in ARssist [59].
Figure 5. Historical trends of the MR, AR, and VR keywords in the papers indexed by the Scopus database within the field of medical robotics.
Figure 6. AR in teleoperation and robot motion planning: (a) AR-based teleoperation of maintenance robot [38], (b) AR-based visual feedback on the computer screen [69], (c) virtual planning in AR with a 3D CAD model of the robot and teach pendant [72].
Figure 7. Historical trends of the MR, AR, and VR keywords in the papers indexed by the Scopus database within the field of robot control and planning.
Figure 8. AR in human–robot collaboration: (a) AR hardware setup for remote maintenance [99], (b) RoMA setup for 3D printing [40], (c) HRI setup for the visualization of safe and danger zones around a robot, (d) safety aura visualization around the robot [100].
Figure 9. Historical trends of the MR, AR, and VR keywords in the papers indexed by the Scopus database within the field of human–robot interaction.
Figure 10. AR for robot swarms: (a) Simulated environment with virtual sensing technology in ARGoS (left), aerial view of real environment (right) [115]. Multi-projector system of MAR-CPS: (b) interaction between ground vehicles and drone and (c) detection of the vehicle by drone [116], (d) Experiment with 50 Kilobots in simulation [117].
Figure 11. Historical trends of the MR, AR, and VR keywords in the papers indexed by the Scopus database within the field of swarm robotics.
Table 1. AR applications in medicine.
| Work | Application | AR System Components | Robot | Limitations |
|---|---|---|---|---|
| [44] | RAS: training tool | Haptic-enabled AR-based training system (HoST), Robot-assisted Surgical Simulator (RoSS) | da Vinci Surgical System | Limited evaluation of cognitive load when using the simulation system; visual errors when the surgical field is placed in a different position |
| [45] | Human interactive assistance | SAR system: digital projectors, camera, and Kinect depth camera | Mobile robot | Spatial distortion due to robot movement during projection; error-prone robot localization |
| [47] | RAS | Robotic binocular camera, CT scan, video mixer (MX 70; Panasonic, Secaucus, NJ, USA), VR-RENDER® software, Virtual Surgical Planning (VSP®, IRCAD) | da Vinci™ (Intuitive Surgical, Inc., Sunnyvale, CA, USA) | Use of a fixed virtual model, limiting AR accuracy during interaction with mobile and soft tissues |
| [48] | RAS | Video augmentation of the primary stereo endoscopy, volumetric CBCT scan, Visualization Toolkit, 3D Slicer, bidirectional socket-based communication interface, 2D X-rays | da Vinci Si robot | Sensitivity to marker occlusion and distortions in orientation, lowering the accuracy of the vision-based resection tool |
| [50] | AR-assisted surgery | Google Glass® optical head-mounted display | Andrologic training tool | Technical limitations: low battery life, overheating, complexity of software integration |
| [51] | RAS: AR navigation | ARToolKit software, display system, rapid prototyping (RP) models (ProJet 660 Pro, 3DSYSTEM, USA), MicronTracker (Claron Company, Canada): optical sensors with three cameras, nVisor ST60 (NVIS Company, USA) | Robot-assisted arms | Limited precision in cases of soft tissues within the operational area |
| [52] | Oral and maxillofacial surgery | Markerless video see-through AR, video camera, optical flow tracker, cascade detector, integrator, online labeling tool, OpenGL software | — | Target registration errors; uncertainty in 3D pose estimation (minimizing the distance between camera and tracked object); increased time when tracking is performed |
| [53] | Mandibular angle split osteotomy | Imaging device (Siemens Somatom Definition Edge), Mimics CAD/CAM software for 3D virtual models | 7-DoF serial arm | Errors due to deviation between planned and actual drilling axes; errors during target registration |
| [54] | Wearable devices (prosthesis) | Camera, AR glasses, AR glasses transceiver, AR glasses camera, server, cognitive system | Robotic prosthetic device | — |
| [55] | RAS | ARssist system: HMD (Microsoft HoloLens), endoscope camera, fiducial markers, vision-based tracking algorithm | da Vinci Research Kit | Kinematic inaccuracies; marker tracking error due to camera calibration and limited intrinsic resolution; AR system latency |
| [56] | RAS | Video augmentation of the primary stereo endoscopy | da Vinci Si robot | Sensitivity to marker occlusion and distortions in orientation |
| [46] | Rehabilitation | 2D spatial AR projector (InFocus IN116A), Unity game engine | 2-DoF planar rehabilitation robot (Quanser) | Occlusion problems; error-prone calibration of the projection system |
Table 2. AR Applications in robot control and planning.
Work | Application | AR System Components | Robot | Limitations
[65] | Robot navigation | Infrared camera, projector, IR filter (Fujifilm IR-76), infrared marker, ARToolKit | Wheeled mobile robot | Positioning error of the robot along the path
[66] | Robot navigation | Non-central catadioptric camera (perspective camera, spherical mirror) | Mobile robot (Pioneer 3-DX) | Projection error of the 3D virtual object onto the 2D plane, high computational effort
[67] | Robot programming | Tablet-based AR interface: Unity, Vuforia library, smartphone | Sphero 2.0 robot ball | Expertise reversal effect (a simple interface suits expert users but hinders beginners)
[68] | Robot path planning | Stereo camera, IMU, visual-inertial SLAM framework, nadir-facing PS Eye camera system, smartphone, VR headset | Micro aerial vehicle | System complexity, errors during automatic generation of the optimized path
[70] | Robot localization | Scanning camera (360 degrees), display device, visual/infrared markers, webcam | Omni-wheel robot | Error-prone localization readings from detected markers during the scan, limited update rate
[72] | Simulation system | Camera, HMD, webcam, simulation markers | CNC machine, KUKA KR 16 KS robot | Deviation errors of different modules between simulated and real setups, positioning errors due to the user's hand movements
[75] | Remote robot programming | HMD (Microsoft HoloLens), tablet (Android, Windows), PC, marker | Universal Robots UR5, Comau NJ, KUKA KR 6 | System communication errors
[76] | Robot navigation | Webcam, marker, fuzzy cognitive map, GRAFT library | Rohan mobile robot | Error-prone registration of the camera position and marker direction, errors from the optimization process
[77] | Remote robot programming | Depth camera, AR display, haptic device, Kinect sensor, PC camera | Welding robot | Error-prone registration of the depth data, difference between virtual and actual paths
[38] | Robot teleoperation | Optical tracking system, handheld manipulator, HMD (Oculus Rift DK2), camera, fiducial marker | ABB IRB 140 robot arm | Low accuracy of optical tracking, limited performance with dynamic obstacles
[69] | UAV teleoperation | 3 DoF haptic device, ground-fixed camera, virtual camera, virtual compass, PC screen, GPS receiver, marker | Unmanned aerial vehicle | Error-prone registration of buildings in the real environment, limited field of view, camera calibration errors
[78] | Human-robot interaction | HMD (Microsoft HoloLens), Leap Motion sensor, DSLR camera, Kinect camera | Rethink Baxter robot | Errors in AR marker tracking and robot localization
[79] | Robot programming (simulation) | HMD (Microsoft HoloLens), speech/gesture inputs, MYO armband (1 DoF control input) | 7 DoF Barrett Whole-Arm Manipulator | Accuracy of robot localization degrades over time and with user movements
[80] | Robot operation | RGB camera, depth camera, wide-angle camera, tablet, marker | KUKA LBR robot | Time-consuming object detection and registration
[87] | Robot welding | HMD, motion capture system (three OptiTrack Flex 3 cameras) | Industrial robot | Small deviation between planned and actual robot paths
[89] | Robot grasping | Microsoft Kinect camera, laser projector, OptiTrack motion capture system | ABB robot | Error-prone object detection due to sensor limitations, calibration errors between sensors
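Several Table 2 entries ([72,77,87]) report deviation between the planned and executed robot path as their main limitation. A minimal way to quantify that deviation, under the simplifying assumption that both paths are sampled at matching waypoint indices (real evaluations would first resample or time-align them):

```python
import numpy as np

def path_deviation(planned, actual):
    """Mean and maximum per-waypoint Euclidean deviation between two paths.

    Both inputs are N x 3 waypoint lists with matching indices.
    """
    d = np.linalg.norm(np.asarray(planned, dtype=float)
                       - np.asarray(actual, dtype=float), axis=1)
    return d.mean(), d.max()

# Hypothetical 3-waypoint paths in metres; the actual path drifts in y.
planned = [[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.2, 0.0, 0.0]]
actual  = [[0.0, 0.0, 0.0], [0.1, 0.01, 0.0], [0.2, 0.02, 0.0]]
mean_e, max_e = path_deviation(planned, actual)
print(round(mean_e, 4), round(max_e, 4))  # -> 0.01 0.02
```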
Table 3. AR applications in HRI.
Work | Application | AR System Components | Robot | Limitations
[90] | Remote collaboration | Projector, camera, user interface (mouse-based GUI on PC), mobile base | Robotic arm | Distortions introduced by the optics as the camera moves, unstable tracking mechanism
[91] | Interaction | Projector, camera, user interface (mouse-based GUI on PC), mobile base | Robotic arm | Distortions introduced by the optics as the camera moves, unstable tracking mechanism
[92] | Prosthetic device | AR glasses (M100 Smart Glasses), data glove (CyberGlove), 4-inch mobile device screen, personal computer | Robot hand (IH2 Azzurra) | Increased time needed to grasp and complete the pick-and-lift task
[93] | Robot remote control | AR glasses (Moverio BT-200), Raspberry Pi camera remote video stream, Kinect toolkit, display terminals | Life-size Traffic Police Robot IWI | Latency in the video frames, inaccurate estimation of the depth of field, memory limitations of the system
[94] | AR-based gaming | RGB-D camera, OpenPTrack library, control module, WebSocket communication | Mobile robot (bomber) | Gaming delay due to the ROS infrastructure used in the system
[95] | Interactive interface | AR goggles (Epson Moverio BT-200), motion sensors, Kinect scanner, handheld wearable device, markers | Semi-automatic taping robotic system | Errors during user localization and calibration of the robot-object position
[96] | Teleoperation | Projector, camera, user interface (mouse-based GUI on PC), mobile base | Robotic arm | Distortions introduced by the optics as the camera moves, unstable tracking mechanism
[97] | Teleoperation | Projector, camera, user interface (mouse-based GUI on PC), mobile base | Robotic arm | Distortions introduced by the optics as the camera moves, unstable tracking mechanism
[98] | Robot programming | Kinect camera, fiducial markers, AR tag tracking | Modular manipulator | Error-prone tag placement and marker detection
[101] | Robot communication | HMD (Microsoft HoloLens), virtual drone, waypoint delegation interface, motion tracking system | AscTec Hummingbird drone | Limited localization ability, narrow field of view of the HMD, limited generalizability to more cluttered spaces
[102] | Teleoperation | Projector, camera, user interface (mouse-based GUI on PC), mobile base | Robotic arm | Distortions introduced by the optics as the camera moves, unstable tracking mechanism
[103] | Robotic teleoperation | HMD (Microsoft HoloLens), Xbox controller, video display, motion tracking cameras | Parrot Bebop quadcopter | Limited visual feedback during operation of the teleoperated robot
[104] | Robotic teleoperation | RGB-D sensor, haptic hand controller, KinectFusion, HMD, marker | Baxter robot | Limited sensing accuracy, irregularities in the 3D reconstruction of the surface
[105] | Prosthetic device | Microsoft Kinect, RGB camera, PC display, virtual prosthetic hand | Myoelectric hand prosthesis | Error-prone alignment of the virtual hand prosthesis with the user's forearm
[106] | Wearable robotics | Projector, camera, user interface (mouse-based GUI on PC), mobile base | Robotic arm | Distortions introduced by the optics as the camera moves, unstable tracking mechanism
[107] | BMI robot control | Monocular camera, desktop eye tracker (EyeX), AR interface on PC | 5 DoF desktop robotic arm (Dobot) | Camera calibration errors, gaze tracking errors
[108] | BMI for robot control | Webcam, marker-based tracking | 5 DoF robotic arm (Dobot) | Error-prone encoding of the camera position and orientation in the 3D world, calibration process
[109] | BCI system | Projector, camera, user interface (mouse-based GUI on PC), mobile base | Robotic arm | Distortions introduced by the optics as the camera moves, unstable tracking mechanism
[110] | Wearable robots | HMD, fiducial markers, Android smartphone, WebSockets | Shape-shifting wearable robot | Tracking limitations under changing lighting conditions and camera focus, latency in the wireless connection
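Latency recurs as a limitation across Table 3 ([93,94,110]). A simple timing harness for one stage of an AR pipeline is sketched below; the callable is a placeholder standing in for a real capture, tracking, or rendering step, not any system from the table.

```python
import time

def mean_stage_latency(stage, frames=5):
    """Average wall-clock latency of one pipeline stage over several frames.

    `stage` is any callable representing capture -> track -> render work.
    """
    samples = []
    for _ in range(frames):
        start = time.perf_counter()
        stage()
        samples.append(time.perf_counter() - start)
    return sum(samples) / len(samples)

# Placeholder stage: ~1 ms of simulated processing per frame.
avg = mean_stage_latency(lambda: time.sleep(0.001))
print(f"average stage latency: {avg * 1000:.2f} ms")
```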
Table 4. AR Applications in swarm robotics.
Work | Application | AR System Components | Robot | Limitations
[115] | Robot perception | Arena tracking system (4 by 4 matrix of HD cameras), 16-core server, QR codes, ARGoS simulator, virtual sensors | 15 e-puck mini mobile robots | Tracking failure when objects larger than the robots occlude the view of the ceiling cameras
[118] | Swarm robot control | Overhead controller, 4 cameras, unique markers, virtual sensors, infrared sensors | Kilobot swarms | Error-prone automatic system calibration and ID assignment technique
[117] | Motion study | ARGoS simulator, AR for Kilobots platform, virtual light sensors | Kilobot swarms | Internal noise in the motion of the Kilobots, reducing precision in real-world experiments
[119] | Virtualization environment | Overhead controller (OHC), infrared signals, virtual sensors (infrared/proximity), virtual actuators, KiloGUI application | Kilobot swarms | Communication limitations between Kilobots and the control module of the Kilogrid simulator; difficulty for Kilobots in identifying position and orientation at the arena borders
[122] | Sophisticated collective foraging | Overhead control board (OHC), RGB LED, wireless IR communication, AR for Kilobots simulator, IR-OHC, control module (computer) | Kilobots |
[41] | Biohybrid design | HMD (Oculus DK2), stereoscopic camera, gamepad, QR code | Plant-robot system | Limited spatial capabilities of the hardware setup, limited immersion performance
[116] | Prototyping cyber-physical systems | Motion capture system: 18 Vicon T-Series motion capture cameras; projection system: six ceiling-mounted Sony VPL-FHZ55 ground projectors; motion capture markers | Autonomous aerial vehicles |
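The Kilobot platforms in Table 4 ([117,118,119]) share one pattern: an overhead controller tracks each robot and sends it a virtual sensor reading computed from its pose. The toy virtual light sensor below illustrates the idea; the inverse-square falloff and 10-bit range are illustrative assumptions, not taken from any specific platform.

```python
import math

def virtual_light_reading(robot_xy, source_xy, max_value=1023):
    """Virtual light-sensor value an overhead controller could send a robot.

    Intensity falls off with the inverse square of the distance (in metres)
    to a virtual light source; output is clamped to a 10-bit sensor range.
    """
    d = math.dist(robot_xy, source_xy)
    return min(max_value, int(max_value / (1.0 + d * d)))

# Tracked robot 1 m away from a virtual source placed by the controller.
print(virtual_light_reading((0.0, 0.0), (1.0, 0.0)))  # -> 511
```

In an actual deployment the controller would broadcast such readings over the platform's infrared channel, which is exactly where the communication limits noted for [119] arise.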