Article

Supporting Human–Robot Interaction in Manufacturing with Augmented Reality and Effective Human–Computer Interaction: A Review and Framework

by Karthik Subramanian 1,*, Liya Thomas 2, Melis Sahin 3 and Ferat Sahin 1

1 Department of Electrical and Microelectronic Engineering, Rochester Institute of Technology, Rochester, NY 14623, USA
2 School of Information, Rochester Institute of Technology, Rochester, NY 14623, USA
3 Departments of Biomedical and Electrical Engineering, Case Western Reserve University, Cleveland, OH 44195, USA
* Author to whom correspondence should be addressed.
Machines 2024, 12(10), 706; https://doi.org/10.3390/machines12100706
Submission received: 28 August 2024 / Revised: 19 September 2024 / Accepted: 25 September 2024 / Published: 4 October 2024
(This article belongs to the Special Issue Recent Developments in Machine Design, Automation and Robotics)

Abstract: The integration of Augmented Reality (AR) into Human–Robot Interaction (HRI) represents a significant advancement in collaborative technologies. This paper provides a comprehensive review of AR applications within HRI, with a focus on manufacturing, emphasizing their role in enhancing collaboration, trust, and safety. By aggregating findings from numerous studies, this research highlights key challenges, including the need for improved Situational Awareness, enhanced safety, and more effective communication between humans and robots. A framework developed from the literature is presented, detailing the critical elements of AR necessary for advancing HRI. The framework outlines effective methods for continuously evaluating AR systems for HRI, and it is supported by two case studies and an ongoing research effort presented in this paper. This structured approach focuses on enhancing collaboration and safety, with a strong emphasis on integrating best practices from Human–Computer Interaction (HCI) centered around user experience and design.

1. Introduction

The field of Human–Robot Interaction (HRI) has been rapidly evolving, with recent years seeing a significant focus on integrating Augmented Reality (AR). HRI, which combines the efforts of humans and robots, offers considerable flexibility and benefits, such as reduced production time [1,2]. Although HRI is used in various industries, such as medical and automotive, it is predominantly applied in collaborative industrial settings, such as assembly lines and manufacturing [3].
Recent research has used AR to address the challenges that arise in HRI. AR has been shown to improve accuracy and efficiency, reduce cognitive workload, and minimize errors [4,5]. In HRI, AR is implemented through devices such as headsets, smartwatches, and projections, each tailored to tackle different aspects of these issues [6,7,8]. However, the rise in AR usage has introduced a new challenge related to information visualization. Current research on AR in HRI focuses more on information architecture than on the presentation of information to the user [9]. It is crucial to emphasize the user interface (UI) in AR applications, as it directly affects how users perceive, comprehend, and interact with the instructions and information provided [10]. This is especially important in high-speed, high-stress environments where humans work in close proximity to robots. Integrating Human–Computer Interaction (HCI) principles into AR interfaces can enhance both visualization and usability.
Most research on AR in robotics focuses primarily on AR in manufacturing rather than on HRI as a whole [4,11,12,13]. These papers solely address the applications and benefits of AR in aiding the work between humans and robots in manufacturing settings. Regarding HRI as a whole, prior research also includes a survey paper on the use of AR in HRI [5]. This paper is highly technical and introduces specialized terminology to categorize different AR hardware, overlays, and interaction points. However, none of the papers mentioned above address HCI, user interface design, or the importance of how visual information should be displayed. An AR survey paper briefly touches on HCI but is not focused on the field of HRI [9]. Instead, it looks at AR used for Situational Awareness across many fields of research.
HRI and HCI are closely related fields, yet they focus on different aspects of interaction. HCI primarily deals with the design and use of computer technologies, emphasizing how humans interact with digital interfaces such as software applications and websites. The goal of HCI is to improve the user experience, usability, and accessibility of these technologies. In contrast, HRI focuses on interactions between humans and robots, which involves not only digital interfaces but also physical, spatial, and social dynamics. HRI studies tend to focus on information architecture and ensuring that AR systems operate correctly, while HCI studies concentrate on information visualization and user research, prioritizing how effectively users understand the information. There is a gap in research at the intersection of HRI and HCI. Specifically, there is a lack of HRI papers that incorporate HCI principles to enhance AR visualizations, thereby optimizing efficiency, understanding, and task load. The principles of good design in AR from HCI can be used to improve the visualization of information within AR systems in HRI. This survey paper aims to provide a comprehensive literature review of key factors in HRI, AR visualizations that enhance HRI, and potential applications of HCI within AR in HRI. The paper will begin by discussing the key factors in HRI and research that has improved collaboration, trust, and safety over the years. It will then examine common challenges within HRI and prior work that uses AR to address these challenges. Finally, it will discuss the implementation of UI design practices in AR for HRI, outline good design practices based on HCI principles, and provide a framework for future work in AR with respect to HRI. The scope of this study is specifically focused on industrial applications, particularly assembly line tasks in manufacturing where humans and robots collaborate in shared workspaces.

2. Materials and Methods

In this study, four researchers collaborated to analyze and compile 164 papers to explore the integration of Augmented Reality (AR) into HRI and the incorporation of HCI principles. The updated PRISMA methodology was utilized to guide and justify the literature collection process used in this paper [14]. A total of 164 papers were gathered by three researchers from 8 databases and 11 registers. The databases included ACM Digital Library, Elsevier, IEEE Xplore, MDPI, PLOS, SAGE Journals, Springer, and Taylor & Francis. The registers used were ArXiv Preprints, Royal Society Publishing, Frontiers, Core Reader, OpenReview, Inderscience Online, Now Publishers, Nielsen Norman Group (NNG), Scribd, ISO Standards, and Thesai Publications, with some identified through Google Scholar.
The literature search was divided among three researchers, each focusing on different thematic areas:
  • The first researcher focused on AR- and HRI-related content, using the following search strings: “HRI Key Principles”, “HRI Foundations”, “AR in HRI”, “Mixed Reality in HRI”, and “Hololens in HRI”.
  • The second researcher concentrated on HCI materials, using the following search strings: “UI in HRI”, “UX in HRI”, “HCI in HRI”, “AR UI in HRI”, “HCI Foundations”, and “UX/UI Principles”.
  • The third researcher specialized in Situational Awareness, using the following search strings: “Situational Awareness in HCI”, “Situational Awareness in HRI”, “Situational Awareness for AR”, and “Hololens Situational Awareness”.
No duplicate papers were found as each researcher explored different areas of the literature. In the initial screening process, 26 papers were excluded for not meeting the inclusion criteria, as they were non-peer-reviewed, irrelevant to the research topic, or outdated (published before 1990). The remaining 138 papers were retrieved, reviewed, and assessed for eligibility by all four researchers. Papers were required to contribute to at least one of the following areas: key HRI factors, AR use in HRI, HCI principles, integration of HCI in AR for HRI, or Situational Awareness in AR-HRI contexts. Eighteen additional papers were removed after this eligibility assessment. Of these, 4 papers gathered by the second researcher were excluded due to missing HCI integration within HRI. Fourteen papers gathered by the first researcher were excluded, with 9 focusing on contexts outside the scope of manufacturing and production in HRI, and 6 being VR-centric rather than AR-centric. In total, 119 papers were included in this review, including two studies cited in the final discussion on the proposed framework. Figure 1, generated from [15], visualizes the flow of identification, screening, retrieval, and eligibility of the literature gathered for this paper. Two additional citations were used in our paper to reference PRISMA.
The structured process of gathering and categorizing papers facilitated the development of a robust framework outlining critical AR elements necessary for advancing HRI, with a focus on improving collaboration, safety, and effective communication between humans and robots. With this comprehensive review of the methodology, the following sections will explore key factors that drive successful HRI, particularly focusing on collaboration, trust, and safety.

3. Key Factors in HRI

Human–Robot Interaction (HRI) is the combination of both robots and humans working together. It combines the accuracy and speed of automated machines with the adjustability and adaptability of human labor [3,16]. HRI requires robots to be complex; they must be flexible enough to adapt to changing work environments and situations while also being autonomous [2,17]. All robots have control systems and software controlled by their human counterparts; some require full direction, such as teleoperated robots, while others operate alone but are programmed in advance to complete certain tasks [1]. Human–Robot Collaboration (HRC) refers to humans and robots working together towards a common goal. HRC is a subsection of HRI and often utilizes cognitive robots that can plan their own actions by gathering information from the environment, understanding the human counterpart’s intent, and determining what steps need to be completed in order to finish the task [18]. Such cognitive robots are advancing rapidly within HRC and require modern machine learning solutions that can build both the cognitive model and the behavioral block [19]. Typically, HRC is seen in industrial work settings including manufacturing and fabrication [20].
This paper mostly looks at HRC research in industrial manufacturing settings but also includes HRI and HRC research from other fields when relevant. Therefore, the umbrella term HRI will be used unless it is necessary to specify that the research was carried out for HRC. HRI can be beneficial for a number of reasons, including low production costs, flexible adaptation to changing production layouts and tasks, and ergonomic benefits for humans [21]. Research in HRI tends to focus on improving fluency and efficiency between humans and robots through two key factors: collaboration on the one hand and trust and safety on the other. The framework for this section is visualized in Figure 2.

3.1. Collaboration

Collaboration between humans and robots refers to how they work together. Bauer et al. categorize HRI collaboration into five types: cell, coexistence, sequential collaboration, cooperation, and responsive collaboration [18]. Each successive level involves closer proximity and a greater degree of interaction between humans and robots.
Efficient collaboration requires planning, communication and instruction, and interaction. Planning entails the allocation of tasks and deciding the pathways of the robots [18]. Communication and instruction pertain to how humans and robots deliver and send each other feedback or instructions. With proper planning and communication, successful interaction between humans and robots can be achieved. Interaction can also proceed smoothly and quickly with the integration of action planning. Action planning involves robots anticipating a human’s movement and then planning their own actions accordingly. This subsection explores current research that aims to improve the planning, communication and interaction in HRI.

3.1.1. Task Planning

When robots and humans work together, proper planning is required to ensure safe interaction and a productive environment. One aspect of planning is task allocation, or assigning appropriate tasks to both human workers and robot agents. This ensures a smooth workflow and prevents overloading any single participant. Lamon et al. [22] look at allocating tasks in manufacturing scenarios based on a robot’s physical characteristics. The experiment assesses the physical capabilities of robot agents and human workers and then allocates tasks accordingly. Figure 3, taken from the paper, explains how tasks are broken down into a series of actions. These actions are assessed by the algorithm to determine whether they should be completed by a human or robot agent and are then allocated accordingly. The solution assigns physically fatiguing tasks to robot agents, relieving human workers of them. The results showed that tasks allocated by physical characteristics are suitable for fast-reconfigurable manufacturing environments. Another way to allocate tasks is by considering trust [22]. Rahman et al. [23] consider the two-way trust between humans and robots when creating a system that allocates and reallocates subtasks based on trust levels. Real-time trust methods and computational models of trust are used to calculate trust levels in both humans and robots. Subtasks are allocated using these calculations and reallocated in real time if trust levels drop. The results show that the allocation and re-allocation of subtasks based on trust lead to improved assembly performance compared to considering human trust alone or no trust at all.
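To make the idea concrete, the sketch below illustrates trust-based subtask (re)allocation in the spirit of Rahman et al. [23]. It is a minimal illustration only: the trust-update rule, the threshold value, and the function names are assumptions, not the authors' actual computational trust model.

```python
# Minimal sketch of trust-based subtask (re)allocation, loosely inspired by
# Rahman et al. [23]. The smoothing rule and threshold are illustrative assumptions.

def update_trust(trust, performance, alpha=0.2):
    """Smooth trust toward the latest observed performance score (both in [0, 1])."""
    return (1 - alpha) * trust + alpha * performance

def allocate(subtasks, robot_trust, threshold=0.6):
    """Assign subtasks to the robot while trust stays above the threshold,
    otherwise hand them back to the human worker."""
    agent = "robot" if robot_trust >= threshold else "human"
    return {task: agent for task in subtasks}

# Example: trust drops after poor robot performance, so subtasks are reallocated.
subtasks = ["pick", "place", "fasten"]
trust = 0.80
print(allocate(subtasks, trust))              # all subtasks assigned to the robot
trust = update_trust(trust, performance=0.1)  # robot makes errors -> trust = 0.66
print(allocate(subtasks, trust))              # still above threshold, robot keeps tasks
trust = update_trust(trust, performance=0.0)  # trust drops to about 0.53
print(allocate(subtasks, trust))              # subtasks reallocated to the human
```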

3.1.2. Communication and Instruction

Instruction and communication are crucial in any industrial task, but they are especially important when robots collaborate with humans. In typical HRI settings, most communication occurs through a human operator who controls the robots in the environment and instructs all human workers. The case is different when working with teleoperated robots, which are controlled by a human operator from a different location. Oftentimes, teleoperated robots enter locations that are not suitable for humans, such as underground caves or sewage systems. These robots are fully operated from a machine, and communication can be difficult when the robot’s environment is dark and cannot be properly viewed through its camera [24]. Glassmire et al. [25] added force feedback to the teleoperation of humanoid robotic astronauts. The addition of haptic feedback was shown to improve the performance of the operator and decrease completion times.

3.1.3. Interaction

When humans and robots work together, it often involves taking turns or working on separate tasks, resulting in rigid actions and limited flow. In contrast, when humans work with other humans, they adapt to each other and begin to predict each other’s actions, creating a more seamless and efficient workflow [26]. Researchers often attempt to replicate this type of flow by making robots more autonomous, using systems that involve prediction and adaptation. Hoffman et al. [26] introduce an adaptive action selection system that allows robots to make anticipatory decisions. The robots adapt to the human workers and use predictions to automate their next steps. The researchers found that fluency in Human–Robot Interaction increased when the robots used anticipatory actions, reflected in more concurrent motion and less time between human and robot actions. Additionally, participants who interacted with robots that did not use anticipatory actions were often annoyed by the robots’ lack of predictive behavior. Lasota et al. [27] conducted a similar study in which participants worked with an adaptive robot that used human-aware motion planning. Fluency between humans and robots was evaluated along with human satisfaction and perceived safety. The results concluded that team fluency increased with the adaptive robot, leading to faster task completion times, decreased idle times, and increased concurrent motion. Additionally, participants reported feeling comfortable and satisfied working with the adaptive robot. Yao et al. [28] used a different approach to increasing robot autonomy: a Cyber-Physical Production System (CPPS) built on IEC 61499 function blocks [29]. The system was designed to allow for robot autonomy during various Human–Robot Interaction scenarios. Figure 4 shows eight categories of Human–Robot teamwork addressed by this system. The feasibility of the proposed system was tested in an assembly line case study and showed the potential to improve HRC fluency and increase the flexibility of manufacturing systems in future work.
Another way to increase autonomy in robots is through the use of digital twins. Digital twins are digital representations of physical objects. In HRI, digital twins are typically used to model robotic mechanisms and simulate changes in tasks and manufacturing scenarios before implementing them in real life [30]. This virtual visualization enables workers to devise optimal solutions and saves time, money, and effort, as not all changes need to be physically tested [31]. Liu et al. [31] create a digital-twin-based design platform using a quad-play CMCO (configuration design–motion planning–control development–optimization decoupling) model. The platform is used to design flow-type smart manufacturing systems that can be easily adjusted. A case study demonstrated that the digital-twin-based platform is both efficient and feasible. Rosen et al. [32] explored how digital twins can increase the autonomy of robots in manufacturing settings. Information gathered by sensors or generated by the system is stored within a digital twin, allowing it to understand and represent the full environment and process state. The researchers propose that this information, along with the models of the digital twin, can be used to create simulations that anticipate the consequences of actions. This capability can enable the development of action-planning autonomous robots.

3.2. Trust and Safety

Trust and safety are important aspects of HRI [20,33,34]. When working with automated machines, it is necessary to ensure the safety of humans. Safety leads to human trust, which in turn increases fluency and productivity [20]. Neglecting the importance of safety in HRI can lead to serious injuries. Collision mitigation or avoidance is a necessary and critical system in HRI, especially Human–Robot Collaboration. One way to avoid injuries is through Situational Awareness (SA), which ensures that both humans and robots are aware of each other’s locations at all times. Another key approach is Speed and Separation Monitoring (SSM), a fundamental safety strategy that dynamically adjusts robot operations based on real-time proximity and the relative velocity of humans in a shared workspace. This method is guided by standards such as ISO 10218 [35] and ISO/TS 15066:2016 [36], which outline the requirements for safe SSM. Additionally, trust in robots can be enhanced by introducing social attributes like facial expressions, which help create a sense of bonding between humans and robots. The presence of robust safety measures like SSM and SA, combined with the introduction of social attributes in robots, can significantly boost human workers’ trust and acceptance of robotic systems. When workers feel confident that their safety is prioritized, they are more likely to engage positively with robots, leading to better collaboration and higher productivity.
Haddadin et al. [37] systematically evaluated safety in HRI, looking at real-world threats, dangers, and injuries. The paper also included a summary and classification of possible human injuries when working with robots, including soft-tissue injuries and blunt trauma. Topics like trust and safety can be very broad and difficult to define, as they encompass human emotions, behavior, and psychology. Lee et al. [38] provide a detailed treatment of trust, its various aspects, and strategies for designing automation devices, such as robots, to gain trust.
A key challenge in HRI is measuring or evaluating trust and safety. Qualitative data, often collected through questionnaires, is typically used to assess perceived safety or trust. For example, Sahin et al. [33] conducted an experiment on perceived safety when working with robots, gathering participant data through questionnaires during and after trials. Similarly, Maurtua et al. [39] used qualitative measures in the form of questionnaires to measure trust in their series of HRI experiments. On the other hand, Hancock et al. [40] explored various quantitative methods for measuring perceived trust in HRI. Their survey paper analyzed numerous studies and categorized quantitative measures into three sections: human-related, robot-related, and environmental. They concluded that robot-related measurements were most commonly associated with determining trust in HRI. There is no single correct method for measurement; qualitative and quantitative measures can be combined. For instance, Soh et al. [34] used both quantitative and qualitative methods to measure perceived safety, cross-checking quantitative data with qualitative questionnaire responses to ensure consistency.

3.2.1. Situational Awareness

A significant area of research concerning safety and trust in HRI is Situational Awareness (SA). In HRI, SA involves both the human being aware of the robot’s position relative to themselves and the robot being aware of the human’s position relative to itself [41]. SA is crucial for establishing trust, safety, and effective collaboration between humans and robots. Additionally, SA directly corresponds with staying “in the loop”. Low SA can lead to an “out-of-the-loop” problem, where operators are unable to take control effectively when automation and robots fail. This lack of awareness hinders their ability to respond promptly and accurately in critical situations, compromising safety and efficiency [42]. Situational Awareness can improve collaboration between humans and robots, but the way it is displayed is crucial to its benefits. For instance, in an experiment by Unhelkar et al. [43] comparing human and robotic assistants, a red light was used to indicate when the robot was getting close. This visual cue frightened the workers and put them on edge; as a result, they preferred the human assistant over the robot due to the unsettling nature of the red light. When it comes to measuring SA, the most prominent technique is the Situational Awareness Global Assessment Technique (SAGAT). It was developed by Mica R. Endsley [44,45] in 1988 and is still used today. The SAGAT technique involves randomly pausing the machine so that the user cannot predict the robot’s whereabouts or next moves, which allows the human’s perception of the robot’s location to be compared with the robot’s actual location. This method is often paired with questions to the user about the robot’s location or action. Unhelkar et al. [43] show a series of awareness questions asked during their user testing to measure the human’s Situational Awareness. Newer techniques for measuring SA include the Situation Present Assessment Method (SPAM) and the Tactical Situational Awareness Test (TSAT) for the small-unit tactical level [46,47].
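As a concrete illustration of the freeze-probe idea behind SAGAT, the sketch below scores a participant's reported robot position against the robot's actual position at each random pause. The 0.3 m tolerance and the data layout are assumptions chosen for illustration; they are not part of the SAGAT specification.

```python
import math

# Minimal sketch of scoring SAGAT-style freeze probes: the task is paused at random
# moments, the participant reports where they believe the robot is, and the report is
# compared with the robot's actual position. The tolerance value is an assumption.

def probe_correct(perceived_xy, actual_xy, tolerance_m=0.3):
    """True if the perceived robot position is within tolerance of the actual one."""
    return math.dist(perceived_xy, actual_xy) <= tolerance_m

def sa_score(probes, tolerance_m=0.3):
    """Fraction of freeze probes answered correctly (0 to 1)."""
    correct = sum(probe_correct(p, a, tolerance_m) for p, a in probes)
    return correct / len(probes)

# Example: three probes, each a (perceived, actual) position pair in metres.
probes = [((1.0, 0.5), (1.1, 0.6)),
          ((0.2, 2.0), (0.9, 2.4)),
          ((1.5, 1.5), (1.4, 1.6))]
print(f"SA score: {sa_score(probes):.2f}")  # 0.67
```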

3.2.2. Speed and Separation Monitoring

Speed and Separation Monitoring (SSM) is a safety mechanism in Human–Robot Collaboration that continuously tracks the distance between humans and robots. When a human enters a designated safety zone, the system prompts the robot to reduce speed or halt, thereby preventing collisions and ensuring a safe working environment. The framework for SSM, as established by Marvel and Norcross, underscores the importance of calculating the minimum protective distance at which a robot needs to stop, using parameters such as human speed, robot speed, and the robot’s stopping time (TS). This is further enhanced by implementing external observer systems that continuously monitor the workspace, ensuring the robot can initiate a controlled stop if the separation distance is breached [48]. Complementing this approach, Kumar et al. [49] demonstrated the efficacy of using Time-of-Flight (ToF) sensor arrays mounted on robot links to dynamically adjust the robot’s operational speed based on real-time distance measurements and relative velocities between the human and the robot. This methodology not only improves safety but also integrates self-occlusion detection to filter out false readings, thus enhancing system reliability. Additionally, Rosenstrauch et al. [50] highlighted the use of Microsoft Kinect V2 for skeleton tracking, which continuously detects and tracks human workers within the shared workspace. This allows for dynamic adjustments in the robot’s speed, ensuring compliance with ISO/TS 15066 standards. Furthermore, Ganglbauer et al. [51] introduced a computer-aided safety assessment tool that incorporates the real-time digitization of human body parts and dynamic robot models to estimate potential contact forces, thereby providing immediate feedback on safety risks. Integrating these advanced SSM methodologies into HRI adheres to international safety standards and fosters safe and efficient working environments.
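To ground the SSM discussion, the sketch below computes a simplified protective separation distance in the style of ISO 10218 and ISO/TS 15066 [35,36,48] and derives a speed command from it. Constant human and robot speeds are assumed over the reaction and stopping intervals, and all numerical values, the warning margin, and the function names are illustrative rather than taken from the standards.

```python
# Simplified sketch of Speed and Separation Monitoring: compute a protective
# separation distance and scale the robot's speed accordingly. Constant speeds
# are assumed; all numbers and the warning margin are illustrative.

def protective_distance(v_human, v_robot, t_reaction, t_stop, s_stop,
                        c_intrusion=0.0, z_uncertainty=0.0):
    """Minimum human-robot separation (m) at which the robot must begin stopping."""
    s_h = v_human * (t_reaction + t_stop)  # distance the human may cover
    s_r = v_robot * t_reaction             # robot travel before the stop command
    return s_h + s_r + s_stop + c_intrusion + z_uncertainty

def ssm_speed_scale(separation, v_human, v_robot, t_reaction, t_stop, s_stop):
    """Return a speed scaling factor: 1.0 (full speed), 0.5 (reduced), 0.0 (stop)."""
    s_min = protective_distance(v_human, v_robot, t_reaction, t_stop, s_stop)
    if separation <= s_min:
        return 0.0
    if separation <= 1.5 * s_min:  # assumed warning margin, not from the standards
        return 0.5
    return 1.0

# Example: human approaching at 1.6 m/s, robot moving at 1.0 m/s.
s_min = protective_distance(1.6, 1.0, t_reaction=0.1, t_stop=0.4, s_stop=0.2)
print(s_min)                                          # 1.1 m
print(ssm_speed_scale(2.0, 1.6, 1.0, 0.1, 0.4, 0.2))  # 1.0 -> full speed
print(ssm_speed_scale(1.0, 1.6, 1.0, 0.1, 0.4, 0.2))  # 0.0 -> protective stop
```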

3.2.3. Social Attributions

A smaller but still important area of research within safety and trust in HRI is promoting social interactions between humans and robots. This includes exploring how humans socially work with robots, adding emotional cues and social body language to robots, and understanding mental stress when working with robots. Through a Wizard of Oz and video ethnographic study at an elementary school, Oh and Kim [52] discovered that, over time, children develop an emotional connection with robots. The character-like robot displayed a variety of facial expressions, as shown in Figure 5. There were a total of 30 emotions, including sorrow, anger, fear, delight, curiosity, and indifference. The emotional bond created with robots can lead to the long-term use of robots, thereby extending their typically short lifespans. A similar emotional connection was created between workers and a robot co-worker in a study conducted by Sauppe et al. [53]. This study concluded that the social attributes of the robot facilitated this emotional connection, which in turn made the workers feel safe when near the robot. They began perceiving the robot as they would a human co-worker. Additionally, Bruce et al. [54] explored how robots that express facial features and engage with humans socially, such as by turning towards them, are more compelling to interact with and thereby increase human trust. The mental stress of human workers is also a crucial factor to consider when aiming to ensure safety and trust around robots, because mental stress directly correlates with productivity and comfort. Lu et al. [55] documented a number of measurements that can be used to gauge the mental stress of workers when collaborating with robots. Monitoring mental stress and giving robots social attributes can together improve the interaction between humans and robots, fostering trust and creating safer workspaces.
Although understanding the essential factors that contribute to effective HRI is crucial, it is equally important to recognize the challenges that persist in this field, which the following section addresses.

4. Challenges in HRI

The field of HRI faces several key challenges that must be addressed to ensure effective and safe collaboration between humans and robots. Kumar et al. [20] identify the three main challenges in HRC as human safety, human trust in automation, and productivity. This paper identifies similar challenges, which include safety concerns, low levels of communication and collaboration, and the necessity for clear instruction and planning.
  • Safety Concerns: Improving both human and robot awareness of each other’s actions and intentions to prevent collisions and minimize risks of injuries.
  • Low Levels of Communication and Collaboration: Building and maintaining trust in robotic systems through predictable, transparent actions.
  • Necessity for Clear Instruction and Planning: Overcoming the limitations of pre-planned or rigid robot actions by enabling more dynamic, interactive, and responsive robot behaviors.
Although recent research offers innovative solutions and improvements, HRI still struggles with the need for constant feedback to enable natural interactions. Currently, most HRI scenarios involve pre-planned actions on computers or adaptive robots that lack real-time interactive capabilities.
Communication and collaboration between humans and robots also remain inadequate. Effective HRI requires seamless interaction, where both parties can understand and predict each other’s actions. However, current systems often lack the capability to provide dynamic, ongoing communication. Rather, the communication is one-sided, with only the robot predicting, while the human cannot see its predictions or upcoming actions [26,27,28]. This deficiency hinders the fluidity of teamwork and reduces overall efficiency, leading to misunderstandings and delays. Figure 6 (left) depicts the robot’s workspace setup where human interaction takes place; on the right, it illustrates how the robot from Lasota et al.’s [27] experiment predicts human actions. While the most likely and frequently used paths are predicted, this method does not account for random actions or movements by humans. Additionally, the human cannot predict the robot’s path, which could further assist in preventing collisions.
Furthermore, clear instruction and planning are crucial for optimizing HRI. Robots typically require precise instructions to perform tasks accurately. However, in dynamic and fast-paced environments, the ability to provide and update these instructions in real time is limited [56,57]. This limitation results in inefficiencies and increased cognitive workload for human operators, who must continuously monitor and adjust the robots’ actions.
Lastly, safety concerns are critical in HRI, as the close physical interaction between humans and robots in environments such as manufacturing and assembly lines introduces the risk of collision. Without real-time feedback and communication, it is challenging to ensure that both humans and robots can anticipate and respond to each other’s movements, potentially leading to hazardous situations [58,59,60,61]. Solutions such as flashing lights have been introduced to signal the whereabouts of robots, but a light is only useful while it remains in the user’s view, which limits its potential for ensuring safety and Situational Awareness [43]. Figure 7 shows an example of lights near a collaborative robot that light up to show the robot is in motion.
The next section demonstrates how Augmented Reality can enhance HRI and provide solutions to the challenges that currently exist in the field, showcasing relevant research that addresses these issues.

5. Augmented Visualizations Used to Improve HRI

The integration of Augmented Reality (AR) presents a promising solution to the challenges in HRI. AR overlays virtual elements on the real-world environment and is typically used to convey important information needed while performing a task or viewing an environment [4,11]. AR enables humans to receive immediate visual feedback about the status and intentions of robots, allowing for more intuitive and responsive interactions. AR has been integrated into a variety of fields including medical, entertainment, military, product design, and manufacturing [4]. More recently, it has been used in HRI to improve collaboration and feelings of safety between robots and humans [62,63]. Within HRI, AR is typically presented in the form of wearable devices, such as head-mounted displays (HMDs) and smartwatches, or as projections onto real-life surfaces [64]. This section will explore research demonstrating that AR can enhance planning and training, communication and instruction, and interaction within the field of HRI. Additionally, the use of AR to increase Situational Awareness and collision mitigation will be examined to understand how it improves trust and safety. Table 1 below categorizes all of the papers reviewed in this section.

5.1. Collaboration Utilizing AR

Similar to traditional HRI collaboration, collaboration in HRI using AR focuses on three main components: planning and training, communication and instruction, and interaction. The key difference is that AR is used to enhance the features and systems involved in these components, ensuring more efficient collaboration. Additionally, there are various ways in which AR can be integrated into the collaboration, expanding the interaction between humans and robots to include how AR interfaces with both. Suzuki et al. [5] provide a detailed categorization of the various approaches, interactions, and characteristics commonly found in AR applications within HRI. They categorize the three main approaches of how AR works with humans and robots as on-body, on-environment, and on-robot. This subsection will explore how AR is used in the field of HRI to improve planning and training, communication and instruction, and interaction, ultimately making collaboration more effective.

5.1.1. Planning and Training Using AR

Recent research has found that AR enriches path planning and training by offering a virtual approach rather than a physical one, making operations safer, easier, and more flexible [65]. AR can be used to understand a robot’s status and planned actions, discuss and review plans with robots, and generate optimal plans before implementing them. Fang et al. [65] utilized AR to plan the path of an industrial robot. Despite encountering accuracy errors in their case study due to the application system, the concept of AR path planning was generally considered a success. Doil et al. [66] explored the use of AR in planning manufacturing tasks by placing virtual robots and machines within a real-world workspace to map out a manufacturing layout. This approach eliminates the need to model the work environment, as a VR-based version would require, streamlining the planning process. Similarly, assembly workstation planning using AR is explored by Wang et al. [67]. The researchers developed an AR system that enables HRC assembly simulation based on current human worker mapping and motion data. This system allows operators to design optimal HRC assembly lines that mimic human assembly lines efficiently and cost-effectively. In HRI training, AR technology often provides virtual simulations of real-life interactions and situations, helping individuals better understand and become comfortable working with robots. Bischoff et al. [83] identified robot operation training as the most promising area of AR research in this field. When training with AR, human operators can safely practice complex tasks. Andersson et al. [68] developed a system that facilitates AR-based training, enabling this safe and effective learning environment. The system is divided into four main parts, interconnected via a Robot Operating System (ROS), to ensure flexibility and extensibility. The multimodal robot programming toolbox allows various programming tools to interact with a data model through parameter requests, enabling the user to select desired tools during runtime, such as Programming by Demonstration (PbD) or free-drive mode, to define robot poses or trajectories. This is presented visually in Figure 8. In contrast, Matsas et al. [84] present a similar training system that utilizes VR and is entirely virtual, without incorporating real-world elements.
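The runtime tool-selection idea described above can be sketched as a small data-model pattern. This is only a hedged illustration of the concept; the class names, the tool handlers, and the way poses are stored are assumptions and not the actual API of the system by Andersson et al. [68].

```python
# Minimal sketch of runtime selection of programming tools that write robot poses
# into a shared data model; names and handlers are illustrative assumptions.

class RobotDataModel:
    """Shared store that programming tools populate with taught waypoints."""

    def __init__(self):
        self.tools = {}
        self.waypoints = []

    def register_tool(self, name, handler):
        self.tools[name] = handler

    def request(self, tool_name, **params):
        """Route a parameter request to the tool selected at runtime."""
        self.waypoints.append(self.tools[tool_name](**params))

model = RobotDataModel()
# Free-drive: the pose is taught by hand-guiding the robot.
model.register_tool("free_drive", lambda pose: pose)
# Programming by Demonstration: keep the final pose of a recorded demonstration.
model.register_tool("pbd", lambda demo: demo[-1])

model.request("free_drive", pose=(0.40, 0.10, 0.30))
model.request("pbd", demo=[(0.40, 0.10, 0.30), (0.45, 0.05, 0.25)])
print(model.waypoints)
```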

5.1.2. Employing AR for Communication and Instruction

In HRI, AR can be used both for communication and instruction. When giving instructions, AR allows robots to provide humans with instructions proactively during work. When receiving instructions, AR can be used to view instruction plans during manufacturing and to project instructions onto objects in assembly work. In an AR manufacturing case study by Saaski et al. [57], researchers found that participants completed assembly tasks faster when instructions were displayed virtually using AR and an HMD rather than on paper. Participants identified the 3D animations of parts being assembled as the most useful AR feature. AR systems that provide instructions are not only helpful for workers but also for operators who oversee entire workplaces, including both human and robot workers. Michalos et al. [69] created an AR system that supports operators by allowing them to send AR instructions, maintain an overview of the workplace to ensure safety, and display additional information through AR visualization. The results of the study demonstrated that the system significantly enhanced working conditions and the integration of the operator. Additionally, AR systems can support robots in giving instructions to human workers. Liu et al. [85] designed an AR system that presents instructions to humans intuitively; the robot generates these instructions by recognizing human motions and determining the next best course of action.
When using HMDs to display instructions and other important information, it is vital to ensure that virtual screen coordinates, eye level, and calibrations are properly adjusted to the wearer. Janin et al. [86] provide detailed procedures for finding the correct calibration parameters for displaying visuals on HMDs. Additionally, it is important to note that although utilizing HMDs for AR instructions can lead to positive results, they may not yet be ready for real-world applications. For example, an evaluation study by Evans et al. [56] concluded that while the Microsoft HoloLens has the potential to display AR instructions in an assembly line, it still requires improvements in accuracy before it can be applied in a real factory setting. Beyond HMDs alone, Lunding et al. [70] used a hybrid approach to provide instruction and enhance communication. Instead of relying solely on HMD-based AR, they incorporated a web interface to achieve a more accurate visual representation when displaying assembly instructions; this hybrid method also establishes two-way communication between the operator and the robot. Aside from HMDs, projection mapping can also be used to display instructions and informational cues from the robot to a human worker. Kalpangam et al. [8] used vision-based object tracking to map out an environmental space, determine the location of various objects, and then utilized projection mapping to overlay important cues and instructions onto the tracked objects. Figure 9 illustrates the steps for aligning a car door, with guidance projected over the physical door. A projected circle on the floor moved in real time with the car door until it reached the correct alignment; once aligned, the circle provided feedback to let the user know that the task had been successfully completed.
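The projected alignment cue described for the car-door task can be approximated with a simple closed loop: track the part, compare it with the target pose, and switch the cue once it is within tolerance. The sketch below is a hedged illustration; the tolerance, colors, and field names are assumptions, and the actual tracking and projection pipeline of Kalpangam et al. [8] is not reproduced.

```python
import math

# Minimal sketch of a projected alignment cue: a circle follows the tracked part
# and changes state once the part is within tolerance of the target. Tolerance,
# colors, and field names are illustrative assumptions.

def alignment_cue(part_xy, target_xy, tolerance_m=0.02):
    error = math.dist(part_xy, target_xy)
    aligned = error <= tolerance_m
    return {
        "circle_center": part_xy,                  # where the projector draws the circle
        "color": "green" if aligned else "white",  # feedback that the task is complete
        "task_complete": aligned,
    }

# Example: the tracked door edge approaches the target alignment point.
for position in [(0.50, 0.20), (0.32, 0.11), (0.305, 0.101)]:
    print(alignment_cue(position, target_xy=(0.30, 0.10)))
```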
Communication in HRI is vital for making informed decisions and adapting to unknown situations. AR-based communication and location guidance between robots and humans is explored by Tabrez et al. [71]. They use AR for both prescriptive and descriptive guidance techniques. For teleoperated robots, an AR interaction system can address the challenges of maintaining awareness due to a single ego-centric view. Green et al. [24] explore how AR enables operators to view a teleoperated robot through a 3D visual representation of the work environment and anticipate the robot’s actions. This capability facilitates improved communication and collaboration between the operator and the teleoperated robot.

5.1.3. AR Promoting Interaction

Interaction in HRI often uses AR to promote fluency and productivity between humans and robots. This typically involves conveying the robots’ upcoming plans and actions, allowing users to interact with robots intuitively. De Franco et al. [72] promote natural interaction between robots and humans by creating an AR system that allows humans to view the robot’s status and future actions for collaborative tasks. Using an AR headset, the robot provided feedback to the human user, enabling them to understand the upcoming steps and be prepared. This improved their performance and was deemed helpful by 10 participants [72]. Andronas et al. [73] also promoted natural interactions using a Human–System and System–Human interaction framework. This framework allows robots and humans to communicate important information back and forth while completing their tasks. In general, the system increased operator trust and awareness and reduced human errors. Similar interaction systems can also be operated through smartwatches. Gkournelos et al. [74] promoted HRI interactions through a hybrid approach involving smartwatches, which significantly aided the operator in integrating into an assembly line. Another approach to Human–Robot Interaction using AR is to allow robots to manipulate virtual elements in a shared Augmented Reality workspace. In this setup, robots do not assist in the physical world but rather work proactively with humans by offering cues in the virtual augmented world. Qiu et al. [75] developed an experiment to test this setup and found that it worked effectively and could be useful for the future of HRC. Outside of the virtual world and elements, the AR interaction can also be projected onto a work surface to provide information about the robot’s upcoming actions and intentions. Sonawami et al. [76] used projection mapping to display visual cues, highlighting the objects that humans need to interact with. As seen in Figure 10, the robot’s shadow is projected, highlighting its status; the colored semi-circles show users which blocks are safe to pick up. The results show that projection mapping improved safety and task efficiency.

5.2. Applying AR to Improve Trust and Safety

AR can enhance trust and safety in HRI through two main approaches: prevention and correction. Prevention is achieved through Situational Awareness, where humans are actively shown the next steps and whereabouts of the robot to avoid collisions. This approach helps both robots and humans understand each other’s locations, enabling safe interactions and reducing the likelihood of being taken by surprise when they are too close to each other. Collision mitigation and avoidance includes maintenance, which refers to the upkeep of a robotic system or environment to ensure it complies with safety protocols and functions appropriately. It also includes the operator’s overall view and management of an active HRI environment, as well as the ability of robots to stop automatically when humans get too close. In this subsection, we examine research that demonstrates how AR systems can be integrated into HRI through Situational Awareness and collision mitigation to increase trust and safety for users in HRI environments.

5.2.1. Situational Awareness with AR

When it comes to AR, Situational Awareness is a widely researched topic in many different fields, including medical, military, and manufacturing [9,87]. In HRI, many researchers aim to promote safety by increasing user awareness of robots they are working with and their surroundings. These approaches may be similar to those discussed in the interaction section, but are grouped here because they are specifically designed to enhance Situational Awareness for safety and trust rather than interaction.
The first approach to improving Situational Awareness involves visualizing the intended path or motion of the robot. Both Tsamis et al. [59] and Palmarini et al. [58] use AR to show users the robot’s next actions. Their goal is to help users become more context-aware and trust the robots more. Although Tsamis et al. [59] used an HMD and Palmarini et al. [58] used a handheld tablet, both experiments resulted in positive sentiments and increased perceived trust from participants. Figure 11 shows the AR seen through an HMD. The green tube indicates the robot’s next path of motion, while the red bubble encloses the danger zone, where humans should be cautious of collisions.
The second approach is similar to the red bubble: AR is used to visualize safety zones. This allows users to see the area in which a robot operates and know when they are too close. Vogel et al. [60] displayed safety zones using multiple projectors around the room, while Choi et al. [61] used an HMD and calculated the 3D locations of both the human and the robot using digital twins. Neither of these papers evaluated their system designs; instead, they focused on design architecture and making the system functional. Hietanen et al. [77] used both an HMD and projection mapping to display safety zones, and both types of visualizations were tested in a user study. The AR in the HMD and the projection mapping both reduced task completion time and robot idle time; however, the projector showed more noticeable improvements in safety.
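A minimal version of the safety-zone logic behind these visualizations is sketched below: the zone is approximated as circles around the robot base, and the overlay state switches when a tracked human enters them. The radii, state names, and the circular approximation are assumptions for illustration and do not reproduce any of the cited systems.

```python
import math

# Minimal sketch of a safety-zone check for AR overlays: circular warning and
# danger zones around the robot base. Radii and state names are assumptions.

def zone_state(human_xy, robot_xy, warning_radius_m=1.5, danger_radius_m=0.7):
    d = math.dist(human_xy, robot_xy)
    if d <= danger_radius_m:
        return "danger"    # e.g., render the red bubble and command a protective stop
    if d <= warning_radius_m:
        return "warning"   # e.g., highlight the projected zone boundary
    return "clear"         # normal visualization

# Example: a worker walking toward a robot at the origin.
for position in [(3.0, 0.0), (1.2, 0.3), (0.4, 0.2)]:
    print(position, zone_state(position, robot_xy=(0.0, 0.0)))
```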
Lunding et al. [78] combined both approaches by displaying robot safety zones and upcoming actions using an HMD. Their system, called RoboVisAR, also allows users to easily customize the visualizations shown in the AR. The papers discussed employed one of three different methods to present visualizations of Situational Awareness. While each method was found to enhance SA, some approaches have proven more effective than others [12]. Generally, HMDs are found to improve efficiency. Furthermore, when compared to handheld displays and radio links, HMDs have been shown to offer better performance and reduce workload [6].

5.2.2. Using AR for Collision Mitigation and Avoidance

AR can make HRI conditions safer, but there are also mental and physical risks associated with the use of AR in environments with potentially dangerous robots. Bahaei and Gallina [88] created a framework called FRAAR (Framework for Risk Assessment in AR-equipped socio-technical systems) that can be used to assess the social and technical risks that can arise when AR is introduced into a robotic manufacturing system. Safe practices can also mean the maintenance and upkeep of a running system. Eschen et al. [79] utilized AR for inspection and maintenance in the aviation industry. In a broader sense, Papanastiou et al. [80] and Makris et al. [81] both developed AR systems designed for operators to maintain safe and integrated environments in human–robot manufacturing settings. These safety measures are usually not precise; instead, they are activated early to avoid human–robot collisions. This means that robots are frequently interrupted and forced to stop moving, which can hurt the flow of a workspace. To minimize interruptions and maximize workflow, Matsas et al. [82] used proactive and adaptive techniques in a virtual immersive environment to test the performance metrics of a robot’s safety enhancements. This approach aims to obtain precise measurements of how close a human can be before a robot is forced to stop. A virtual human body model is created and analyzed to determine the appropriate distances from the body that should trigger the robot’s motion-stopping feature. Figure 12 shows a diagram of this body model.
The use of digital twins in Human–Robot Collaboration has shown significant promise in improving safety and mitigating collisions. According to Feddoul et al. [89], digital twins facilitate real-time monitoring, simulation, and prediction of potential safety hazards, providing a comprehensive approach to preemptively addressing safety issues. Similarly, Maruyama et al. [90] proposed a digital-twin-based safety control framework that focuses on accurate distance calculations and real-time safety monitoring, ultimately preventing collisions by maintaining a safe distance between humans and robots. Lee et al. [91] detailed the application of digital twins in collaborative robotic systems through the implementation of safety controllers and validation techniques, focusing on their role in offline testing, real-time monitoring, and predictive analysis; their paper also provides an overview of the different types of simulation environments used for different collaborative applications. As we examine the impact of augmented visualizations on improving HRI, it becomes clear that good design principles are critical to maximizing their effectiveness, guiding the development of user-friendly AR systems.

6. Good Design Principles for AR

Although there is immense research and development of AR in HRI, there is a lack of focus on visual information, with more emphasis placed on information architecture [9]. The way information is displayed is crucial because it directly affects how users comprehend and act on it. To properly design how information is presented in AR, we must look to Human–Computer Interaction (HCI) design principles. HCI is a multidisciplinary study focused on understanding how humans use and interact with technology, and it plays a crucial role in design and research [92]. AR is a highly popular area in HCI research across various fields, including mental health, accessibility, automotive, education, and gaming. Many of these studies examine how AR interfaces can be improved for efficiency, user understanding, and enjoyment. Positive results from HCI work in AR demonstrate its potential to enhance user experience and engagement [93,94,95]. Integrating HCI principles into HRI is essential, as it can lead to more intuitive and effective interactions between humans and robots, ultimately improving user satisfaction and trust.

6.1. UI and UX Design

HCI differs from HRI particularly in terms of visual design, specifically the user interface (UI) and user experience (UX) of AR features implemented in Human–Robot Collaboration. From Norman [96] and Rogers [10], we can summarize the top principles for both UI and UX design. UI design refers to the visual elements users see and interact with, including static elements such as icons, colors, typography, and layout, as well as interactive elements like buttons, scroll bars, and animations. The goal of UI design is to create an intuitive, easily learned, and easily navigated product. The top four principles for creating user-friendly interfaces are as follows:
  • Simplicity: Designs should be simple, clutter-free, and easy to navigate. Avoid unnecessary elements and keep the interface as straightforward as possible.
  • Consistency: Maintain internal and external consistency. The design elements should be consistent throughout the system and align with universal design standards. For example, a floppy disk icon universally signifies “save”.
  • Feedback: Provide feedback through visual, auditory, or haptic means to indicate that an action has been successfully completed. This helps users understand the outcome of their interactions.
  • Visibility: Ensure that the cues are easy to see and understand. Visibility also refers to making the current state of the system visible to users. This helps users stay informed about what is happening within the system.
UX (user experience) design focuses on the flow of an app and how users interact with the system, rather than its appearance. It involves understanding users’ behaviors, pain points, and motivations. Good UX design improves engagement and usability. The top four principles of good UX design are as follows:
  • Usability: Ensure the user flow of the system is easy to navigate, learn, and use. The design should be intuitive and provide a seamless experience for users.
  • Clarity: The design should be straightforward and provide users with only essential information, avoiding unnecessary elements. Clarity ensures that users understand the interface and its functions easily.
  • Efficiency: The system should be designed for performance optimization, enabling users to accomplish their goals quickly. Efficient design minimizes the time and effort required to complete tasks.
  • Accessibility: Ensure that the system is accessible to all users, regardless of their abilities. This includes considering different devices, screen sizes, and assistive technologies to create an inclusive user experience.
When it comes to evaluating the interface and user experience of systems, including AR systems, two common subjective measures are the System Usability Scale (SUS) and the User Experience Questionnaire (UEQ) [97,98]. Both measures are well established with benchmarks and prior work involving user studies. The SUS asks participants to rate the usability of an interface using a five-point Likert scale. It includes 10 questions, each focusing on different aspects of UI usability [98]. The UEQ is a 26-question survey that uses a 7-point Likert scale. Participants rate the user experience of a system based on six attributes, including dependability and attractiveness [97].
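As an example of how these subjective measures are computed, the sketch below converts ten SUS responses into the usual 0–100 score (odd items are positively worded, even items negatively worded). The UEQ scoring, which maps its 26 items onto six scales, is not shown.

```python
# Minimal sketch of System Usability Scale (SUS) scoring: ten items rated on a
# 1-5 Likert scale, converted to a 0-100 score.

def sus_score(responses):
    """responses: list of ten Likert ratings (1-5), in questionnaire order."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for item, rating in enumerate(responses, start=1):
        # Odd items are positively worded (rating - 1); even items are
        # negatively worded (5 - rating).
        total += (rating - 1) if item % 2 == 1 else (5 - rating)
    return total * 2.5

# Example: a fairly positive usability rating.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```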

6.2. User-Centered Design

A good practice in design is always to consider the user. This is known as User-Centered Design (UCD). It refers to designing with the user in mind, catering to their needs and considering their technology level and skill set. This approach ensures that the interface is accessible and user-friendly for the intended audience. UCD involves continuous user participation and feedback throughout the design process, ensuring that the final product effectively meets user requirements and provides a positive user experience [99,100].
Additionally, HCI emphasizes designing for inclusion, accessibility, and sustainability. These principles can be considered when designing AR visual elements for HRI to ensure that all workers can properly use and interact with the technology [101]. Furthermore, designing for diversity, equality, and inclusion can ensure that unintended bias does not occur within the AR system [102]. This paper will not go into detail on the design guidelines needed for accessibility, sustainability, and inclusion, but there are numerous resources that can be referenced, including design guidelines and previous work [103,104,105,106].

6.3. Current Design Principles Used for AR in HRI

Within HRI research, there are limited papers that acknowledge or explore HCI principles such as UI/UX design. Only four papers considered UX and UI design strategies when developing the AR systems used to improve HRI. Fang et al. [107] used interface design strategies when creating an AR system to aid the seamless interaction between humans and robots. Their paper includes a section labeled “Visualization”, which discusses each visual element in the AR and the purpose of its design. The user experience was evaluated based on how well the users understood the visual elements, their cues, and the information they were displaying. Alt et al. [108] directly examine principles for UI design, such as adaptability and workflow, when creating a user interface to explain an AI-based robot program. User experience was assessed based on users’ perceptions of the interface’s usefulness, assistance, readability, and understanding. Similarly, when developing an AR head-mounted display for robot team collaboration, Chan et al. [7] designed the system with user workflow in mind. The system’s user experience was evaluated based on task load and the increased efficiency of collaboration. Green et al. [24] is the only one of these papers to offer multiple design variations. They presented three different UI designs for their AR system to increase Situational Awareness, as shown in Figure 13. The user experience of these various UI designs was evaluated based on users’ comfort and their ability to accurately read the robot’s location at any given time. The papers above all evaluate user experience using qualitative and quantitative measurements that correctly correlate with what they are designing to improve. For example, Chan et al. [7] used both the SUS and UEQ, mentioned above, to evaluate the usability and the user experience of the system design. There are also more universal methodologies for evaluating UX. Lindblom and Alenjung [109] created ANEMONE, an approach to evaluating UX in HRI. They conducted extensive research on HRI evaluation principles and UX evaluation principles, derived from HCI, to develop a combined methodology that can be implemented in any AR user study in the field of HRI. Aside from incorporating HCI principles and evaluation methods, HRI research also includes a few papers regarding design guidelines, but they tend to be limited or highly specific. Jeffri and Rambli [110] offer guidelines for interface design in AR, but they focus specifically on AR used in manual assemblies. Wewerka et al.’s [111] design guidelines are tailored for robotic process automation, and Zhao et al.’s [112] work is centered on the fabrication process. While these guidelines and tools can potentially be applied to UI design for AR in various HRI fields, they have only been tested in their respective domains. Consequently, their effectiveness in other fields remains uncertain. A better approach is to use HCI design guidelines for AR. These broader guidelines can be applied to any use of AR, including HRI. Since they are not specific to a particular industry, they offer flexibility and can be adapted to suit the specific needs and contexts of various designs, leading to more optimal outcomes [103].

7. Evaluation and Continuous Improvement

Continuous evaluation and improvement are pivotal to ensuring that AR systems in HRI remain effective, user-friendly, and safe. This section elaborates on the methods and strategies for evaluating AR systems and outlines a process for continuous improvement.

7.1. Safety Metrics

Safety metrics are critical for evaluating risk and potential hazards in HRI environments. They focus on ensuring that interactions between humans and robots are safe and free from accidents or injuries. These metrics assess how well the system maintains safe distances, prevents collisions, and handles emergency situations. By monitoring these metrics, it is possible to identify and mitigate risks, enhancing the overall safety of human operators and their trust in robotic systems. Key safety and efficiency metrics for robots operating under Speed and Separation Monitoring are defined in [113,114]. A minimal computational sketch of these metrics follows the list below.
  • Collision Count: The number of collisions or near misses between robots and humans. A lower collision rate indicates that the system effectively prevents accidents, ensuring the safety of human operators. Monitoring this metric helps quantify the direct impact of the AR-HRI system on safety. It is critical to track this to ensure that the implemented safety features are functioning as intended and to identify any areas needing improvement.
  • Safe Distance Maintenance: The percentage of time the robot maintains a predefined safe distance from the human operator. This metric shows how consistently the system keeps humans out of harm’s way. Maintaining a safe distance is crucial to prevent injuries and build trust between human operators and robots. Continuous monitoring of this metric ensures that the system adapts to dynamic environments while prioritizing human safety.
  • Emergency Stops: The frequency of emergency stops triggered by the system to prevent collisions. While emergency stops are necessary to prevent collisions, a high frequency may indicate overly conservative settings, which can disrupt workflow. Balancing safety and efficiency is key, and this metric helps in tuning the system. Tracking emergency stops helps in refining the sensitivity and response parameters of the system for optimal performance.
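For illustration, the Python sketch below shows one way these safety metrics might be computed offline from a time-stamped log of separation distances and emergency-stop events. The log fields and the collision_threshold_m and safe_distance_m values are hypothetical and would need to be matched to the actual workcell and its risk assessment.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LogEntry:
    t: float              # timestamp in seconds (hypothetical log field)
    separation_m: float   # measured human-robot separation distance in meters
    emergency_stop: bool  # True if an emergency stop was triggered at this sample

def safety_metrics(log: List[LogEntry],
                   collision_threshold_m: float = 0.05,
                   safe_distance_m: float = 0.50) -> dict:
    """Collision count, safe-distance maintenance (%), and emergency-stop count."""
    collisions = sum(1 for e in log if e.separation_m <= collision_threshold_m)
    safe_samples = sum(1 for e in log if e.separation_m >= safe_distance_m)
    estops = sum(1 for e in log if e.emergency_stop)
    return {
        "collision_count": collisions,
        "safe_distance_maintenance_pct": 100.0 * safe_samples / len(log),
        "emergency_stops": estops,
    }
```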

7.2. Efficiency Metrics

Efficiency metrics measure the productivity and effectiveness of the HRI system, focusing on how well tasks are completed. These metrics assess factors such as task completion time, idle time, and path optimization. By evaluating efficiency, it is possible to determine whether the system is not only safe but also effective in improving workflow and productivity in an HRI environment. A short sketch of how these measures can be derived from trial logs follows the list below.
  • Task Completion Time: The time taken to complete a task before and after implementing the AR-HRI system. Reducing task completion time while maintaining safety shows that the system is efficient [115]. This metric is vital for demonstrating that the system not only keeps humans safe but also enhances productivity. It provides insights into how well the AR integration and SSM are streamlining the workflow.
  • Idle Time: The amount of time the robot is idle due to safety interventions. Minimizing idle time indicates that the system is effective without unnecessarily halting operations [115]. This balance is critical for maintaining a smooth and efficient workflow. By tracking idle time, one can assess the efficiency of the SSM algorithm in differentiating between real and false positives, thereby ensuring that the robot operates as smoothly as possible.
  • Path Optimization: Changes in the robot’s path efficiency, such as distance traveled and the smoothness of movements. Efficient path planning ensures that the robot performs its tasks optimally while avoiding humans [116]. Measuring path optimization helps in understanding the overall efficiency of the system. This metric is important for identifying any unnecessary detours or delays caused by the AR-HRI system and for optimizing the robot’s navigation algorithms.
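As a minimal sketch under simplifying assumptions, the function below derives task completion time, idle time, path length, and a crude smoothness proxy from a single logged trial; the input arrays and the turn-angle smoothness measure are illustrative choices rather than the specific measures used in [115,116].

```python
import numpy as np

def efficiency_metrics(timestamps, positions, idle_flags, task_start, task_end):
    """Illustrative efficiency metrics from one logged trial (hypothetical inputs).

    timestamps : (N,) times in seconds
    positions  : (N, 3) end-effector positions in meters
    idle_flags : (N,) booleans, True while the robot is halted by a safety intervention
    """
    t = np.asarray(timestamps, dtype=float)
    p = np.asarray(positions, dtype=float)
    idle = np.asarray(idle_flags, dtype=bool)

    task_completion_time = task_end - task_start
    dt = np.diff(t)
    idle_time = float(np.sum(dt[idle[:-1]]))  # time accumulated while idle

    steps = np.diff(p, axis=0)
    path_length = float(np.sum(np.linalg.norm(steps, axis=1)))

    # Smoothness proxy: mean angle between consecutive motion segments (radians);
    # smaller values indicate fewer abrupt direction changes.
    u, v = steps[:-1], steps[1:]
    cosang = np.einsum("ij,ij->i", u, v) / (
        np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1) + 1e-9)
    smoothness = float(np.mean(np.arccos(np.clip(cosang, -1.0, 1.0))))

    return {"task_completion_time_s": task_completion_time,
            "idle_time_s": idle_time,
            "path_length_m": path_length,
            "mean_turn_angle_rad": smoothness}
```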

7.3. User Experience Metrics

User experience metrics evaluate the subjective perceptions and satisfaction of human operators interacting with robotic systems. These metrics focus on how safe, comfortable, and intuitive the interactions are for users and often include assessments of perceived safety, workload, and emotional responses. Understanding user experience is crucial for ensuring that the system is user-friendly and meets the needs and expectations of its operators, leading to higher acceptance and better overall performance. A brief scoring sketch for two of these instruments follows the list below.
  • Perceived Safety: Users’ feedback on how safe they feel working alongside the robot. Perceived safety is crucial for user acceptance of robotic systems [117]. This qualitative metric helps to understand the psychological impact of the system on human operators. Gathering user feedback through surveys and interviews helps in assessing the effectiveness of the AR visualizations and the SSM in making users feel secure.
  • NASA TLX Scores: Scores from the NASA Task Load Index, assessing perceived workload across six dimensions: mental demand, physical demand, temporal demand, performance, effort, and frustration [44]. Assessing workload using NASA TLX provides insights into how the system affects the cognitive and physical demands on human operators [118]. Lower scores indicate a more user-friendly and less stressful interaction. This metric is essential for understanding how the AR-HRI system impacts overall user workload and for identifying areas that can be improved to reduce user strain.
  • SAM Scale Scores: Measurements from the Self-Assessment Manikin (SAM) scale [119], assessing user arousal and valence. The SAM scale provides a subjective measure of users’ emotional responses [120]. These scores can be used to understand workers’ emotional states during interaction with robots while using AR. Positive emotional experiences are essential for user satisfaction and acceptance. Correlating these scores with physiological data (described in the next section) can provide a holistic view of the user’s experience and emotional state [121]. Figure 14 depicts how subjects convey their subjective state.
  • User Experience Questionnaire (UEQ): The UEQ consists of 26 questions, using a 7-point Likert scale where participants evaluate the user experience of a system across six attributes, including dependability and attractiveness [97].
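As a minimal scoring sketch (not the official instruments’ software), the snippet below computes an unweighted “raw” NASA-TLX workload score and a UEQ attribute-scale score; the ratings in the usage example are hypothetical.

```python
import numpy as np

def raw_tlx(ratings: dict) -> float:
    """Unweighted ("raw") NASA-TLX: mean of the six 0-100 subscale ratings."""
    return float(np.mean(list(ratings.values())))

def ueq_scale_score(item_responses) -> float:
    """UEQ items are answered on a 7-point scale (1-7); responses are commonly
    transformed to -3..+3 and averaged per attribute scale."""
    return float(np.mean([r - 4 for r in item_responses]))

# Hypothetical example ratings for one participant
workload = raw_tlx({"mental": 55, "physical": 20, "temporal": 40,
                    "performance": 25, "effort": 45, "frustration": 30})
attractiveness = ueq_scale_score([6, 5, 6, 7, 5, 6])
```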

7.4. Physiological Metrics

Physiological metrics involve measuring the physical and emotional responses of human operators through biometric data. These metrics can include heart rate, skin conductance, and other physiological signals that indicate stress, arousal, or emotional states. By monitoring these physiological responses, it is possible to gain insights into how the HRI system affects users’ well-being and to make adjustments that improve their comfort and performance. Savur and Sahin [122] compile methods of physiological computing used in Human–Robot Collaboration. A simplified feature-extraction sketch follows the list below.
  • ECG and Galvanic Skin Response (GSR): Electrocardiogram (ECG) and galvanic skin response (GSR) signals recorded from the human operator can be used to train models that estimate arousal and valence [117]. These models provide data on the emotional states of users, which can be correlated with subjective measures from the SAM scale to validate the user experience. Implementing such models enables real-time monitoring and analysis of user states, allowing proactive adjustments to the AR-HRI system that improve user comfort and performance.
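For illustration only, the sketch below extracts two very simple features, mean heart rate from an ECG trace and a crude tonic/phasic split of a GSR signal, using generic peak picking and smoothing. Validated pipelines such as those surveyed in [122] rely on dedicated QRS detectors, artifact rejection, and trained arousal/valence models rather than these simplifications.

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(ecg, fs):
    """Rough mean heart rate from an ECG trace via simple R-peak picking.
    ecg: 1-D signal array; fs: sampling rate in Hz."""
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),
                          height=np.mean(ecg) + 2 * np.std(ecg))
    rr = np.diff(peaks) / fs                       # R-R intervals in seconds
    return 60.0 / float(np.mean(rr)) if len(rr) else float("nan")

def gsr_features(gsr, fs, window_s=5.0):
    """Tonic level (windowed mean) and a crude phasic proxy (signal minus tonic)."""
    k = max(1, int(window_s * fs))
    tonic = np.convolve(gsr, np.ones(k) / k, mode="same")
    phasic = gsr - tonic
    return {"tonic_mean": float(np.mean(tonic)),
            "phasic_std": float(np.std(phasic))}
```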

7.5. System Performance Metrics

System performance metrics evaluate the technical effectiveness and reliability of the HRI system. They measure response time, accuracy, and the system’s ability to detect and respond to issues. High system performance ensures reliable, efficient, and safe operation. A small computational sketch for two of these measures follows the list below.
  • Response Time: The time it takes for the system to detect a potential collision and respond. Faster response times are crucial for preventing accidents. This metric demonstrates the system’s efficiency in real-time monitoring and intervention. Reducing response time is essential for ensuring that safety interventions are timely and effective, thereby minimizing the risk of accidents.
  • Positional Accuracy: The accuracy of the system in detecting the positions of both the robot and the human. High positional accuracy is essential for the effective functioning of the system. This metric ensures that the system can reliably monitor and react to the positions of humans and robots. Accurate position tracking is vital for the SSM algorithm to function correctly and for providing precise AR visualizations.
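Assuming synchronized estimated and reference position streams are available, the short sketch below computes the mean and maximum tracking error along with a simple detection-to-reaction latency; the array names and shapes are illustrative.

```python
import numpy as np

def tracking_error(estimated_xyz, reference_xyz):
    """Per-sample Euclidean error between estimated and reference positions,
    e.g., a digital twin's pose estimate versus a reference measurement.
    Both inputs are assumed to be (N, 3) arrays in meters."""
    err = np.linalg.norm(np.asarray(estimated_xyz) - np.asarray(reference_xyz), axis=1)
    return {"mean_error_m": float(np.mean(err)), "max_error_m": float(np.max(err))}

def response_time(event_detected_t, robot_reacted_t):
    """Latency (s) between hazard detection and the robot's first speed change."""
    return robot_reacted_t - event_detected_t
```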
Building upon AR in HRI, HCI design principles, and evaluation metrics, we propose a comprehensive framework for developing AR applications in HRI. This framework ensures that these systems are both effective and adaptable to the needs of human operators.

8. Framework for Developing Augmented Reality Applications in Human–Robot Collaboration

The framework for designing AR systems in HRI is structured to address the critical aspects of collaboration, trust, and safety. It is built upon foundational HCI principles and is supported by the research findings outlined in the preceding survey. The framework, visualized in Figure 15, is divided into three key components: Key Components of AR in HRI, Good Design Principles for AR, and Evaluation and Continuous Improvement.
Collaboration focuses on enhancing the synergy between humans and robots through effective planning, training, communication, and interaction. It emphasizes the need for AR systems to facilitate seamless communication and instruction, thereby improving task efficiency and reducing the likelihood of errors.
Trust and safety are paramount in HRI, and the framework highlights Situational Awareness and control as essential elements. By leveraging AR technologies, the framework aims to enhance users’ Situational Awareness, enabling them to perceive, comprehend, and project the state of their environment accurately. The integration of control mechanisms, such as Speed and Separation Monitoring (SSM) algorithms, ensures that robots operate safely around human operators, dynamically adjusting their actions to maintain safe distances and prevent collisions.
Implementing HCI Principles for Good AR Design involves creating user interfaces (UI) and user experiences (UX) that are intuitive, user-centered, and designed to meet the specific needs of human operators. This includes utilizing AR to provide real-time feedback, improve usability, and enhance the overall user experience.
Evaluation and Continuous Improvement underscore the importance of using both quantitative and qualitative measures to assess the effectiveness of AR systems. By continuously collecting and analyzing data on system performance and user feedback, the framework advocates for an iterative design process that ensures that AR systems remain effective, user-friendly, and safe.
The framework outlined in this paper serves as a foundational guide for developing Augmented Reality (AR) applications in Human–Robot Interaction (HRI). Its practical relevance is demonstrated through specific experiments and the design of AR elements that leverage different components of the framework. The following subsections describe implementations that focus on the key components of AR in HRI and on good AR design principles.
We showcase two case studies that align with the proposed framework. The first study highlights the use of AR for enhancing Situational Awareness, while the second demonstrates the use of AR-based digital twins to enable safety in HRI. Furthermore, ongoing work that incorporates good practices from HCI is described in this section.

8.1. Improving Collaboration, Safety, and Trust Using AR in HRI

Ensuring trust and safety in HRI is paramount, particularly in dynamic industrial environments. The presented framework for designing Augmented Reality (AR) systems in HRI underscores the importance of these components, as validated by recent experimental studies. The first study utilizes AR to enhance Situational Awareness, employing the Situational Awareness Global Assessment Technique (SAGAT) to measure improvements [123]. Wearing a head-mounted display, the user sees a green arrow that moves to point at the end effector of the robot. When the user is turned away from the robot, the arrow indicates where the robot is located in relation to the user. The green arrow is circled in Figure 16 below. Results indicate a significant reduction in average perception error and a rise in mean operator confidence, reinforcing the framework’s emphasis on Situational Awareness as a cornerstone of safety. The full results are available in [123] and are omitted here for brevity. The study also reported an increase in the percentage of correct responses when predicting the direction of movement of the robot’s end effector, along with a trend towards faster response times, highlighting the effectiveness of AR displays in improving Situational Awareness. Figure 17 shows the results for perception error and the percentage of correct responses, and Table 2 shows the statistical significance of these metrics [123].
The second study leverages mixed reality (MR) and digital twin technology to implement a Speed and Separation Monitoring (SSM) algorithm, which dynamically adjusts robot speed to prevent collisions [124]. This real-time control mechanism aligns perfectly with the framework’s focus on maintaining safety through effective control strategies. Figure 18 shows the digital twins for the robot and the human; the human model uses geometric primitives to approximate the human body when calculating the minimum distance from the robot. The study’s results, in Table 3, showed a maximum tracking error of 0.0743 m and a mean error of 0.0315 m, demonstrating high positional accuracy. Additionally, the SSM algorithm effectively adjusted the robot’s speed in response to changes in separation distance, ensuring a safe distance between humans and robots. An example of this can be seen in Figure 19.
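To make the control strategy concrete, the sketch below illustrates the general idea of scaling robot speed with separation distance. The simplified protective-distance expression and the linear scaling policy are illustrative assumptions in the spirit of ISO/TS 15066 and do not reproduce the exact controller or parameters used in [124].

```python
def protective_distance(v_human, v_robot, t_reaction, t_stop, braking_dist, clearance):
    """Simplified protective separation distance: distance covered by the human
    and the robot during reaction and stopping, plus braking distance and a
    clearance term (uncertainty margins omitted for brevity)."""
    return (v_human * (t_reaction + t_stop)
            + v_robot * t_reaction
            + braking_dist
            + clearance)

def ssm_speed_command(separation, v_max, s_protective, s_full_speed):
    """Linearly scale the commanded robot speed between a full stop at the
    protective distance and full speed beyond s_full_speed (illustrative policy)."""
    if separation <= s_protective:
        return 0.0
    if separation >= s_full_speed:
        return v_max
    return v_max * (separation - s_protective) / (s_full_speed - s_protective)
```

In practice, the protective distance also accounts for sensing and positional uncertainty and is recomputed continuously from the minimum human–robot distance provided by the digital twin.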
Both studies utilize quantitative measures such as tracking error, perception accuracy, and response times, alongside qualitative feedback on user confidence and system usability, providing robust empirical support for the framework. By prioritizing trust and safety, the framework ensures that AR systems are designed to enhance operational security and build user trust, making it an invaluable tool for advancing HRI in industrial applications. These findings show the practical relevance of the framework, demonstrating its ability to guide the development of AR systems that significantly improve safety and trust in HRI environments.

8.2. Collision Warnings Using Virtual Elements

When utilizing the Augmented Reality (AR) system, the wearer is immersed in a sophisticated, real-time environment that significantly enhances their Situational Awareness and safety during HRI. The wearer can expect to see a series of dynamic visual indicators, represented by spheres, which trace the path of the robot end effector’s movements. These spheres provide an intuitive visual cue, changing color based on the proximity of the wearer to the robot’s path. When the human operator is within a safe distance, the spheres remain blue (Figure 20A), indicating no immediate threat. However, if the wearer approaches a critical proximity threshold, the spheres turn red (Figure 20B), signaling a potential collision risk. This visual feedback allows the wearer to anticipate the robot’s trajectory and adjust their movements accordingly, thereby preventing accidents. The system continuously updates these indicators, ensuring that the wearer has up-to-date information about the robot’s position and path.
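The color-switching rule described above can be summarized in a few lines of Python; the warning radius and color values are illustrative assumptions, and in the deployed system the spheres are rendered and recolored in the headset’s scene rather than in a standalone script.

```python
import math

def trajectory_sphere_colors(path_points, wearer_position, warning_radius=0.6):
    """Assign a color to each sphere tracing the end-effector path: blue when the
    wearer is outside the warning radius of a path point, red when inside it."""
    colors = []
    for p in path_points:
        dist = math.dist(p, wearer_position)  # Euclidean distance (Python 3.8+)
        colors.append("red" if dist < warning_radius else "blue")
    return colors
```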
Implementing effective HCI practices should begin with user research and a thorough understanding of how the user will interact with the system. For the mini-map, this process began with understanding user preferences and identifying an appropriate system design. Prior work by Green et al. [24] suggested that an overview of the robot’s path supported Situational Awareness better than an immersive view. In the present lab setup, users have an immersive view. To provide them with an overview, or bird’s-eye view, a mini-map was implemented that shows the user, the robot, and the tables at all times. The mini-map overlay provides real-time updates on the positions of key elements within the workspace, including the base and tool of the robot, the main camera, and additional objects of interest. These elements are represented by distinct indicator objects on the mini-map, which are dynamically updated to reflect their relative positions, scaled down to a manageable size for the wearer’s view.
The mini-map serves as a crucial tool for spatial orientation, allowing the wearer to easily track the movements of both the robot and surrounding objects. Each indicator is color-coded for quick visual recognition: the base indicator is semi-transparent white, the tool indicator is cyan, the human indicator is green (Figure 21A), changing to orange and then red as the robot approaches the human operator (Figure 21B,C), and other object indicators are grey. This color-coding not only helps distinguish between different elements but also provides immediate visual cues about the current operational status and potential hazards. The locations of the objects are linked to a fiducial marker and are updated as the HoloLens 2 scans them.
As the wearer navigates the workspace, the mini-map continuously updates, showing the real-time positions of the robot and other objects relative to the base position. This dynamic update mechanism ensures that the wearer always has the latest information about the robot’s location, enhancing their ability to predict movements and avoid collisions. Furthermore, the camera indicator changes color based on the distance between the robot and the human operator, turning red within a critical threshold and thus providing an immediate warning of potential collision risks.
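A minimal sketch of the kind of mapping involved is given below: world-frame positions expressed relative to the robot base are scaled into mini-map coordinates, and an indicator color is chosen from the robot-to-human distance. The scale factor and distance thresholds are illustrative assumptions rather than the values used in the actual overlay.

```python
def to_minimap(world_xy, base_xy, scale=0.05):
    """Map a world-frame (x, y) position, expressed relative to the robot base,
    into mini-map coordinates scaled down for the wearer's view."""
    return ((world_xy[0] - base_xy[0]) * scale,
            (world_xy[1] - base_xy[1]) * scale)

def proximity_color(distance_m, warn_m=1.0, danger_m=0.5):
    """Green / orange / red indicator color based on robot-to-human distance."""
    if distance_m < danger_m:
        return "red"
    if distance_m < warn_m:
        return "orange"
    return "green"
```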
For the visual design, UI principles such as simplicity, consistency, and visibility were all considered. The design of the mini-map is kept simple, including only the essential elements: the robot, the human, and the tables. The human and robot are both represented as spheres to avoid cluttering the mini-map. The tables take the shape of actual tables rather than cubes or other 3D shapes to remain consistent with the real world. This way, users do not have to guess or remember which real-world item corresponds to each element in the AR view.
Visibility is maintained because the map moves with the wearer, so they always see it in the correct orientation. As a result, users do not have to mentally rotate the map to work out where the robot is, enhancing ease of use and safety.

9. Limitations

9.1. Scope Limitation

The scope of this study is focused on Human–Robot Collaboration in industrial settings, particularly in manufacturing settings like assembly lines. This targeted scope allows for in-depth exploration of relevant use cases and provides actionable insights applicable to these specific environments. While the current focus limits broader applicability, future work will continue to refine and expand the framework to address emerging challenges pertaining to other domains.

9.2. Technical Limitations and User Acceptance

While Augmented Reality (AR) offers significant potential for enhancing HRI, it is important to acknowledge several technical limitations and user acceptance challenges. AR systems often face constraints such as limited field of view, display resolution, and response time, which can impact their effectiveness in high-stakes environments like industrial settings. These technical issues can lead to missed cues or distractions, affecting continuous Situational Awareness and safety. Additionally, variations in AR device performance across different environments may reduce the consistency and reliability of critical safety alerts. When systems do not work as intended, they can lead to worker frustration.
User acceptance is another critical factor that influences the success of AR in HRI. Challenges include the comfort and ergonomics of AR headsets, the learning curve associated with new technologies, and potential resistance due to concerns about complexity or reliability. To begin addressing these barriers, instruments such as the NASA-TLX can be used to measure the subjective cognitive load imposed by AR and to guide iterative improvements.

9.3. Plan for Long-Term Impact Assessment of AR in HRI

To evaluate the long-term impact of AR-enhanced HRI systems on trust and safety, it is crucial to establish robust evaluation metrics and methodologies. Key performance indicators should include not only immediate outcomes such as reduced safety incidents and improved task efficiency but also long-term measures like sustained user trust, system reliability, and continued user engagement. Methodologies for assessment could involve mixed-methods approaches that combine quantitative data with qualitative insights from user feedback and testing in both simulated and real-world environments. By collecting data across all the metrics listed in Section 7, it is possible to capture trends over time and provide valuable data on the long-term impact of AR-enhanced HRI systems in manufacturing settings.

10. Conclusions

This paper has explored the significant advancements in integrating Augmented Reality (AR) into Human–Robot Interaction (HRI) with a focus on manufacturing, emphasizing the enhancement of collaboration, trust, and safety. By synthesizing findings from numerous studies, we have identified key challenges and proposed a comprehensive framework designed to address these issues effectively.
Our framework emphasizes the importance of Situational Awareness and control, leveraging AR technologies to enhance users’ ability to perceive, comprehend, and project the state of their environment. The integration of Speed and Separation Monitoring (SSM) algorithms ensures that robots operate safely around human operators, dynamically adjusting their actions to maintain safe distances and prevent collisions.
Moreover, the implementation of Human–Computer Interaction (HCI) principles for AR design ensures that user interfaces (UI) and user experiences (UX) are intuitive, user-centered, and tailored to meet the specific needs of human operators. Continuous evaluation and improvement, through both quantitative and qualitative measures, are essential to maintain the effectiveness, usability, and safety of AR systems in HRI.
In conclusion, the proposed framework not only addresses the critical aspects of collaboration, trust, and safety but also provides a structured approach for the continuous development and refinement of AR applications in HRI for manufacturing applications. This research paves the way for safer and more efficient Human–Robot collaborations, ultimately contributing to the advancement of the field.

Author Contributions

K.S. studied various AR devices, such as headsets and projections, to understand their impact on HRI, while L.T. contributed towards HCI principles to enhance the usability and effectiveness of AR interfaces in HRI. M.S. studied existing methods of evaluating Situational Awareness in HRI and contributed towards integrating Situational Awareness in the proposed framework mentioned in this paper. Under the supervision of F.S., the team synthesized their findings to provide a comprehensive review of AR applications within HRI, highlighting key challenges and proposing solutions for enhancing collaboration, trust, and safety in HRI systems. All authors have read and agreed to the published version of the manuscript.

Funding

This material is based upon work supported by the National Science Foundation under Award No. DGE-2125362. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Thrun, S. Toward a Framework for Human-Robot Interaction. Human–Computer Interact. 2004, 19, 9–24. [Google Scholar] [CrossRef] [PubMed]
  2. Scholtz, J. Theory and evaluation of human Robot Interactions. In Proceedings of the 36th Annual Hawaii International Conference on System Sciences, Big Island, HI, USA, 6–9 January 2003; p. 10. [Google Scholar] [CrossRef]
  3. Sidobre, D.; Broquère, X.; Mainprice, J.; Burattini, E.; Finzi, A.; Rossi, S.; Staffa, M. Human–Robot Interaction. In Advanced Bimanual Manipulation: Results from the DEXMART Project; Siciliano, B., Ed.; Springer Tracts in Advanced Robotics; Springer: Berlin/Heidelberg, Germany, 2012; pp. 123–172. [Google Scholar] [CrossRef]
  4. Ong, S.K.; Yuan, M.L.; Nee, A.Y.C. Augmented Reality applications in manufacturing: A survey. Int. J. Prod. Res. 2008, 46, 2707–2742. [Google Scholar] [CrossRef]
  5. Suzuki, R.; Karim, A.; Xia, T.; Hedayati, H.; Marquardt, N. Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI ’22), New Orleans, LA, USA, 29 April 2022–5 May 2022; pp. 1–33. [Google Scholar] [CrossRef]
  6. Ruiz, J.; Escalera, M.; Viguria, A.; Ollero, A. A simulation framework to validate the use of head-mounted displays and tablets for information exchange with the UAV safety pilot. In Proceedings of the 2015 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED-UAS), Cancun, Mexico, 23–25 November 2015; pp. 336–341. [Google Scholar] [CrossRef]
  7. Chan, W.P.; Hanks, G.; Sakr, M.; Zhang, H.; Zuo, T.; van der Loos, H.F.M.; Croft, E. Design and Evaluation of an Augmented Reality Head-mounted Display Interface for Human Robot Teams Collaborating in Physically Shared Manufacturing Tasks. ACM Trans. Hum.-Robot Interact. 2022, 11, 31:1–31:19. [Google Scholar] [CrossRef]
  8. Kalpagam Ganesan, R.; Rathore, Y.K.; Ross, H.M.; Ben Amor, H. Better Teaming Through Visual Cues: How Projecting Imagery in a Workspace Can Improve Human-Robot Collaboration. IEEE Robot. Autom. Mag. 2018, 25, 59–71. [Google Scholar] [CrossRef]
  9. Woodward, J.; Ruiz, J. Analytic Review of Using Augmented Reality for Situational Awareness. IEEE Trans. Vis. Comput. Graph. 2023, 29, 2166–2183. [Google Scholar] [CrossRef] [PubMed]
  10. Rogers, Y.; Sharp, H.; Preece, J. Interaction Design: Beyond Human-Computer Interaction, 6th ed.; John Wiley & Sons: Hoboken, NJ, USA, 2023; Available online: http://id-book.com (accessed on 9 September 2024).
  11. Blaga, A.; Tamas, L. Augmented Reality for Digital Manufacturing. In Proceedings of the 2018 26th Mediterranean Conference on Control and Automation (MED), Zadar, Croatia, 19–22 June 2018; pp. 173–178. [Google Scholar] [CrossRef]
  12. Caudell, T.; Mizell, D. Augmented Reality: An application of heads-up display technology to manual manufacturing processes. In Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences, Kauai, HI, USA, 7–10 January 1992; Volume 2, pp. 659–669. [Google Scholar] [CrossRef]
  13. Novak-Marcincin, J.; Barna, J.; Janak, M.; Novakova-Marcincinova, L. Augmented Reality Aided Manufacturing. Procedia Comput. Sci. 2013, 25, 23–31. [Google Scholar] [CrossRef]
  14. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Syst. Rev. 2021, 10, 89. [Google Scholar] [CrossRef]
  15. Haddaway, N.R.; Page, M.J.; Pritchard, C.C.; McGuinness, L.A. PRISMA2020: An R package and Shiny app for producing PRISMA 2020-compliant flow diagrams, with interactivity for optimised digital transparency and Open Synthesis. Campbell Syst. Rev. 2022, 18, e1230. [Google Scholar] [CrossRef]
  16. Ajoudani, A.; Zanchettin, A.M.; Ivaldi, S.; Albu-Schäffer, A.; Kosuge, K.; Khatib, O. Progress and prospects of the human–robot collaboration. Auton. Robot. 2018, 42, 957–975. [Google Scholar] [CrossRef]
  17. Baratta, A.; Cimino, A.; Gnoni, M.G.; Longo, F. Human Robot Collaboration in Industry 4.0: A literature review. Procedia Comput. Sci. 2023, 217, 1887–1895. [Google Scholar] [CrossRef]
  18. Bauer, A.; Wollherr, D.; Buss, M. Human-Robot Collaboration: A Survey. Int. J. Humanoid Robot. 2008, 5, 47–66. [Google Scholar] [CrossRef]
  19. Semeraro, F.; Griffiths, A.; Cangelosi, A. Human–robot collaboration and machine learning: A systematic review of recent research. Robot. Comput.-Integr. Manuf. 2023, 79, 102432. [Google Scholar] [CrossRef]
  20. Kumar, S.; Savur, C.; Sahin, F. Survey of Human–Robot Collaboration in Industrial Settings: Awareness, Intelligence, and Compliance. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 280–297. [Google Scholar] [CrossRef]
  21. Matheson, E.; Minto, R.; Zampieri, E.G.G.; Faccio, M.; Rosati, G. Human–Robot Collaboration in Manufacturing Applications: A Review. Robotics 2019, 8, 100. [Google Scholar] [CrossRef]
  22. Lamon, E.; De Franco, A.; Peternel, L.; Ajoudani, A. A Capability-Aware Role Allocation Approach to Industrial Assembly Tasks. IEEE Robot. Autom. Lett. 2019, 4, 3378–3385. [Google Scholar] [CrossRef]
  23. Rahman, S.M.; Wang, Y. Mutual trust-based subtask allocation for human–robot collaboration in flexible lightweight assembly in manufacturing. Mechatronics 2018, 54, 94–109. [Google Scholar] [CrossRef]
  24. Green, S.A.; Chase, J.G.; Chen, X.; Billinghurst, M. Evaluating the Augmented Reality Human–Robot Collaboration System. In Proceedings of the 2008 15th International Conference on Mechatronics and Machine Vision in Practice, Auckland, New Zealand, 2–4 December 2008. [Google Scholar]
  25. Glassmire, J.; O’Malley, M.; Bluethmann, W.; Ambrose, R. Cooperative manipulation between humans and teleoperated agents. In Proceedings of the 12th International Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2004. HAPTICS ’04. Proceedings, Chicago, IL, USA, 27–28 March 2004; pp. 114–120. [Google Scholar] [CrossRef]
  26. Hoffman, G.; Breazeal, C. Effects of anticipatory action on Human–Robot teamwork: Efficiency, fluency, and perception of team. In Proceedings of the 2007 2nd ACM/IEEE International Conference on Human-Robot Interaction (HRI), Arlington, VA, USA, 10–12 March 2007; pp. 1–8. [Google Scholar] [CrossRef]
  27. Lasota, P.A.; Shah, J.A. Analyzing the Effects of Human-Aware Motion Planning on Close-Proximity Human–Robot Collaboration. Hum. Factors J. Hum. Factors Ergon. Soc. 2015, 57, 21–33. [Google Scholar] [CrossRef]
  28. Yao, B.; Zhou, Z.; Wang, L.; Xu, W.; Yan, J.; Liu, Q. A function block based cyber-physical production system for physical human–Robot Interaction. J. Manuf. Syst. 2018, 48, 12–23. [Google Scholar] [CrossRef]
  29. IEC 61499; Standard for Distributed Automation. International Electrotechnical Commission: Geneva, Switzerland, 2012. Available online: https://iec61499.com/ (accessed on 9 September 2024).
  30. Stark, R.; Fresemann, C.; Lindow, K. Development and operation of Digital Twins for technical systems and services. CIRP Ann. 2019, 68, 129–132. [Google Scholar] [CrossRef]
  31. Liu, Q.; Leng, J.; Yan, D.; Zhang, D.; Wei, L.; Yu, A.; Zhao, R.; Zhang, H.; Chen, X. digital-twin-based designing of the configuration, motion, control, and optimization model of a flow-type smart manufacturing system. J. Manuf. Syst. 2021, 58, 52–64. [Google Scholar] [CrossRef]
  32. Rosen, R.; Von Wichert, G.; Lo, G.; Bettenhausen, K.D. About The Importance of Autonomy and Digital Twins for the Future of Manufacturing. IFAC-PapersOnLine 2015, 48, 567–572. [Google Scholar] [CrossRef]
  33. Sahin, M.; Savur, C. Evaluation of Human Perceived Safety during HRC Task using Multiple Data Collection Methods. In Proceedings of the 2022 17th Annual System of Systems Engineering Conference (SOSE), Rochester, NY, USA, 7–11 June 2022; pp. 465–470. [Google Scholar] [CrossRef]
  34. Soh, H.; Xie, Y.; Chen, M.; Hsu, D. Multi-task trust transfer for human–Robot Interaction. Int. J. Robot. Res. 2020, 39, 233–249. [Google Scholar] [CrossRef]
  35. ISO 10218-1:2011; Robots and Robotic Devices—Safety Requirements for Industrial Robots—Part 1: Robots. ISO: Geneva, Switzerland, 2011.
  36. ISO/TS 15066:2016; Robots and Robotic Devices—Collaborative Robots. ISO: Geneva, Switzerland, 2016.
  37. Haddadin, S.; Albu-Schäffer, A.; Hirzinger, G. Safe Physical Human-Robot Interaction: Measurements, Analysis and New Insights. In Robotics Research; Kaneko, M., Nakamura, Y., Eds.; Springer Tracts in Advanced Robotics; Springer: Berlin/Heidelberg, Germany, 2011; pp. 395–407. [Google Scholar] [CrossRef]
  38. Lee, J.D.; See, K.A. Trust in automation: Designing for appropriate reliance. Hum. Factors 2004, 46, 50–80. [Google Scholar] [CrossRef] [PubMed]
  39. Maurtua, I.; Ibarguren, A.; Kildal, J.; Susperregi, L.; Sierra, B. Human–robot collaboration in industrial applications: Safety, interaction and trust. Int. J. Adv. Robot. Syst. 2017, 14, 172988141771601. [Google Scholar] [CrossRef]
  40. Hancock, P.A.; Billings, D.R.; Schaefer, K.E.; Chen, J.Y.C.; De Visser, E.J.; Parasuraman, R. A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction. Hum. Factors J. Hum. Factors Ergon. Soc. 2011, 53, 517–527. [Google Scholar] [CrossRef] [PubMed]
  41. Endsley, M. Theoretical underpinnings of situation awareness: A critical review. In Situation Awareness Analysis and Measurement; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 2000; pp. 3–32. [Google Scholar]
  42. Endsley, M.; Kiris, E. The Out-of-the-Loop Performance Problem and Level of Control in Automation. Hum. Factors J. Hum. Factors Ergon. Soc. 1995, 37, 381–394. [Google Scholar] [CrossRef]
  43. Unhelkar, V.V.; Siu, H.C.; Shah, J.A. Comparative performance of human and mobile robotic assistants in collaborative fetch-and-deliver tasks. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, Bielefeld, Germany, 3–6 March 2014; pp. 82–89. [Google Scholar] [CrossRef]
  44. Endsley, M. Direct Measurement of Situation Awareness: Validity and Use of SAGAT. In Situation Awareness: Analysis and Measurement; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 2000. [Google Scholar]
  45. Endsley, M. Situation awareness global assessment technique (SAGAT). In Proceedings of the IEEE 1988 National Aerospace and Electronics Conference, Dayton, OH, USA, 23–27 May 1988; Volume 3, pp. 789–795. [Google Scholar] [CrossRef]
  46. Bew, G.; Baker, A.; Goodman, D.; Nardone, O.; Robinson, M. Measuring Situational Awareness at the small unit tactical level. In Proceedings of the 2015 Systems and Information Engineering Design Symposium, Charlottesville, VA, USA, 24 April 2015; pp. 51–56. [Google Scholar] [CrossRef]
  47. Endsley, M.R. A Systematic Review and Meta-Analysis of Direct Objective Measures of Situation Awareness: A Comparison of SAGAT and SPAM. Hum. Factors 2021, 63, 124–150. [Google Scholar] [CrossRef]
  48. Marvel, J.A.; Norcross, R. Implementing Speed and Separation Monitoring in collaborative robot workcells. Robot. Comput.-Integr. Manuf. 2017, 44, 144–155. [Google Scholar] [CrossRef]
  49. Kumar, S.; Arora, S.; Sahin, F. Speed and Separation Monitoring using On-Robot Time-of-Flight Laser-ranging Sensor Arrays. In Proceedings of the 2019 IEEE 15th International Conference on Automation Science and Engineering (CASE), Vancouver, BC, Canada, 22–26 August 2019; pp. 1684–1691. [Google Scholar] [CrossRef]
  50. Rosenstrauch, M.J.; Pannen, T.J.; Krüger, J. Human robot collaboration—using kinect v2 for ISO/TS 15066 Speed and Separation Monitoring. Procedia CIRP 2018, 76, 183–186. [Google Scholar] [CrossRef]
  51. Ganglbauer, M.; Ikeda, M.; Plasch, M.; Pichler, A. Human in the loop online estimation of robotic speed limits for safe human robot collaboration. Procedia Manuf. 2020, 51, 88–94. [Google Scholar] [CrossRef]
  52. Oh, K.; Kim, M. Social Attributes of Robotic Products: Observations of Child-Robot Interactions in a School Environment. Int. J. Design 2010, 4, 45–55. [Google Scholar]
  53. Sauppé, A.; Mutlu, B. The Social Impact of a Robot Co-Worker in Industrial Settings. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Republic of Korea, 18–23 April 2015; pp. 3613–3622. [Google Scholar] [CrossRef]
  54. Bruce, A.; Nourbakhsh, I.; Simmons, R. The role of expressiveness and attention in Human–Robot Interaction. In Proceedings of the 2002 IEEE International Conference on Robotics and Automation (Cat. No. 02CH37292), Washington, DC, USA, 11–15 May 2002; Volume 4, pp. 4138–4142. [Google Scholar] [CrossRef]
  55. Lu, L.; Xie, Z.; Wang, H.; Li, L.; Xu, X. Mental stress and safety awareness during Human–Robot collaboration—Review. Appl. Ergon. 2022, 105, 103832. [Google Scholar] [CrossRef] [PubMed]
  56. Evans, G.; Miller, J.; Pena, M.; MacAllister, A.; Winer, E. Evaluating the Microsoft HoloLens through an Augmented Reality assembly application. In Proceedings of the SPIE Defense + Security, Anaheim, CA, USA, 9–13 April 2017; p. 101970V. [Google Scholar] [CrossRef]
  57. Sääski, J.; Salonen, T.; Liinasuo, M.; Pakkanen, J.; Vanhatalo, M.; Riitahuhta, A. Augmented Reality Efficiency in Manufacturing Industry: A Case Study. In Proceedings of the DS 50: Proceedings of NordDesign 2008 Conference, Tallinn, Estonia, 21–23 August 2008. [Google Scholar]
  58. Palmarini, R.; del Amo, I.F.; Bertolino, G.; Dini, G.; Erkoyuncu, J.A.; Roy, R.; Farnsworth, M. Designing an AR interface to improve trust in Human-Robots collaboration. Procedia CIRP 2018, 70, 350–355. [Google Scholar] [CrossRef]
  59. Tsamis, G.; Chantziaras, G.; Giakoumis, D.; Kostavelis, I.; Kargakos, A.; Tsakiris, A.; Tzovaras, D. Intuitive and Safe Interaction in Multi-User Human Robot Collaboration Environments through Augmented Reality Displays. In Proceedings of the 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), Vancouver, BC, Canada, 8–12 August 2021; pp. 520–526. [Google Scholar] [CrossRef]
  60. Vogel, C.; Schulenburg, E.; Elkmann, N. Projective- AR Assistance System for shared Human-Robot Workplaces in Industrial Applications. In Proceedings of the 2020 25th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Vienna, Austria, 8–11 September 2020; Volume 1, pp. 1259–1262. [Google Scholar] [CrossRef]
  61. Choi, S.H.; Park, K.B.; Roh, D.H.; Lee, J.Y.; Mohammed, M.; Ghasemi, Y.; Jeong, H. An integrated mixed reality system for safety-aware Human–Robot collaboration using deep learning and digital twin generation. Robot. Comput.-Integr. Manuf. 2022, 73, 102258. [Google Scholar] [CrossRef]
  62. Bassyouni, Z.; Elhajj, I.H. Augmented Reality Meets Artificial Intelligence in Robotics: A Systematic Review. Front. Robot. AI 2021, 8, 724798. [Google Scholar] [CrossRef]
  63. Costa, G.D.M.; Petry, M.R.; Moreira, A.P. Augmented Reality for Human–Robot Collaboration and Cooperation in Industrial Applications: A Systematic Literature Review. Sensors 2022, 22, 2725. [Google Scholar] [CrossRef]
  64. Franze, A.P.; Caldwell, G.A.; Teixeira, M.F.L.A.; Rittenbruch, M. Employing AR/MR Mockups to Imagine Future Custom Manufacturing Practices. In Proceedings of the 34th Australian Conference on Human-Computer Interaction (OzCHI ’22), New York, NY, USA, 6 April 2023; pp. 206–215. [Google Scholar] [CrossRef]
  65. Fang, H.; Ong, S.; Nee, A. Robot Path and End-Effector Orientation Planning Using Augmented Reality. Procedia CIRP 2012, 3, 191–196. [Google Scholar] [CrossRef]
  66. Doil, F.; Schreiber, W.; Alt, T.; Patron, C. Augmented Reality for manufacturing planning. In Proceedings of the Workshop on Virtual Environments 2003 (EGVE ’03), New York, NY, USA, 22–23 May 2003; pp. 71–76. [Google Scholar] [CrossRef]
  67. Wang, Q.; Fan, X.; Luo, M.; Yin, X.; Zhu, W. Construction of Human-Robot Cooperation Assembly Simulation System Based on Augmented Reality. In Virtual, Augmented and Mixed Reality. Design and Interaction; Chen, J.Y.C., Fragomeni, G., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2020; pp. 629–642. [Google Scholar] [CrossRef]
  68. Andersson, N.; Argyrou, A.; Nägele, F.; Ubis, F.; Campos, U.E.; Zarate, M.O.d.; Wilterdink, R. AR-Enhanced Human-Robot-Interaction—Methodologies, Algorithms, Tools. Procedia CIRP 2016, 44, 193–198. [Google Scholar] [CrossRef]
  69. Michalos, G.; Karagiannis, P.; Makris, S.; Tokçalar, Ö.; Chryssolouris, G. Augmented Reality (AR) Applications for Supporting Human-robot Interactive Cooperation. Procedia CIRP 2016, 41, 370–375. [Google Scholar] [CrossRef]
  70. Lunding, R.; Hubenschmid, S.; Feuchtner, T. Proposing a Hybrid Authoring Interface for AR-Supported Human–Robot Collaboration. 2024. Available online: https://openreview.net/forum?id=2w2ynC3yrM&noteId=ritvr8VKmu (accessed on 6 July 2024).
  71. Tabrez, A.; Luebbers, M.B.; Hayes, B. Descriptive and Prescriptive Visual Guidance to Improve Shared Situational Awareness in Human–Robot Teaming. In Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems, Online, 9–13 May 2022. [Google Scholar]
  72. De Franco, A.; Lamon, E.; Balatti, P.; De Momi, E.; Ajoudani, A. An Intuitive Augmented Reality Interface for Task Scheduling, Monitoring, and Work Performance Improvement in Human-Robot Collaboration. In Proceedings of the 2019 IEEE International Work Conference on Bioinspired Intelligence (IWOBI), Budapest, Hungary, 3–5 July 2019; pp. 75–80. [Google Scholar] [CrossRef]
  73. Andronas, D.; Apostolopoulos, G.; Fourtakas, N.; Makris, S. Multi-modal interfaces for natural Human-Robot Interaction. Procedia Manuf. 2021, 54, 197–202. [Google Scholar] [CrossRef]
  74. Gkournelos, C.; Karagiannis, P.; Kousi, N.; Michalos, G.; Koukas, S.; Makris, S. Application of Wearable Devices for Supporting Operators in Human-Robot Cooperative Assembly Tasks. Procedia CIRP 2018, 76, 177–182. [Google Scholar] [CrossRef]
  75. Qiu, S.; Liu, H.; Zhang, Z.; Zhu, Y.; Zhu, S.C. Human-Robot Interaction in a Shared Augmented Reality Workspace. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 11413–11418. [Google Scholar] [CrossRef]
  76. Sonawani, S.; Amor, H.B. When and Where Are You Going? A Mixed-Reality Framework for Human Robot Collaboration. 2022. Available online: https://openreview.net/forum?id=BSrx_Q2-Akq (accessed on 21 June 2024).
  77. Hietanen, A.; Latokartano, J.; Pieters, R.; Lanz, M.; Kämäräinen, J.K. AR-based interaction for safe Human–Robot collaborative manufacturing. arXiv 2019. [Google Scholar]
  78. Lunding, R.S.; Lunding, M.S.; Feuchtner, T.; Petersen, M.G.; Grønbæk, K.; Suzuki, R. RoboVisAR: Immersive Authoring of Condition-based AR Robot Visualisations. In Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’24), New York, NY, USA, 11–15 March 2024; pp. 462–471. [Google Scholar] [CrossRef]
  79. Eschen, H.; Kötter, T.; Rodeck, R.; Harnisch, M.; Schüppstuhl, T. Augmented and Virtual Reality for Inspection and Maintenance Processes in the Aviation Industry. Procedia Manuf. 2018, 19, 156–163. [Google Scholar] [CrossRef]
  80. Papanastasiou, S.; Kousi, N.; Karagiannis, P.; Gkournelos, C.; Papavasileiou, A.; Dimoulas, K.; Baris, K.; Koukas, S.; Michalos, G.; Makris, S. Towards seamless human robot collaboration: Integrating multimodal interaction. Int. J. Adv. Manuf. Technol. 2019, 105, 3881–3897. [Google Scholar] [CrossRef]
  81. Makris, S.; Karagiannis, P.; Koukas, S.; Matthaiakis, A.S. Augmented Reality system for operator support in human–robot collaborative assembly. CIRP Ann. 2016, 65, 61–64. [Google Scholar] [CrossRef]
  82. Matsas, E.; Vosniakos, G.C.; Batras, D. Prototyping proactive and adaptive techniques for Human–Robot collaboration in manufacturing using virtual reality. Robot. Comput.-Integr. Manuf. 2018, 50, 168–180. [Google Scholar] [CrossRef]
  83. Bischoff, R.; Kazi, A. Perspectives on Augmented Reality based Human–Robot Interaction with industrial robots. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No. 04CH37566), Sendai, Japan, 28 September–2 October 2004; Volume 4, pp. 3226–3231. [Google Scholar] [CrossRef]
  84. Matsas, E.; Vosniakos, G.C. Design of a virtual reality training system for human–robot collaboration in manufacturing tasks. Int. J. Interact. Des. Manuf. (IJIDeM) 2017, 11, 139–153. [Google Scholar] [CrossRef]
  85. Liu, H.; Wang, L. An AR-based Worker Support System for Human-Robot Collaboration. Procedia Manuf. 2017, 11, 22–30. [Google Scholar] [CrossRef]
  86. Janin, A.; Mizell, D.; Caudell, T. Calibration of head-mounted displays for Augmented Reality applications. In Proceedings of the IEEE Virtual Reality Annual International Symposium, Seattle, WA, USA, 18–22 September 1993; pp. 246–255. [Google Scholar] [CrossRef]
  87. Mitaritonna, A.; Abásolo, M.J.; Montero, F. An Augmented Reality-based Software Architecture to Support Military Situational Awareness. In Proceedings of the 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE), Istanbul, Turkey, 12–13 June 2020; pp. 1–6. [Google Scholar] [CrossRef]
  88. Sheikh Bahaei, S.; Gallina, B. Assessing risk of AR and organizational changes factors in socio-technical robotic manufacturing. Robot. Comput.-Integr. Manuf. 2024, 88, 102731. [Google Scholar] [CrossRef]
  89. Feddoul, Y.; Ragot, N.; Duval, F.; Havard, V.; Baudry, D.; Assila, A. Exploring human-machine collaboration in industry: A systematic literature review of digital twin and robotics interfaced with extended reality technologies. Int. J. Adv. Manuf. Technol. 2023, 129, 1917–1932. [Google Scholar] [CrossRef]
  90. Maruyama, T.; Ueshiba, T.; Tada, M.; Toda, H.; Endo, Y.; Domae, Y.; Nakabo, Y.; Mori, T.; Suita, K. Digital Twin-Driven Human Robot Collaboration Using a Digital Human. Sensors 2021, 21, 8266. [Google Scholar] [CrossRef] [PubMed]
  91. Shaaban, M.; Carfì, A.; Mastrogiovanni, F. Digital Twins for Human-Robot Collaboration: A Future Perspective. In Intelligent Autonomous Systems 18; Lee, S.G., An, J., Chong, N.Y., Strand, M., Kim, J.H., Eds.; Lecture Notes in Networks and Systems; Springer: Cham, Switzerland, 2024; Volume 795, pp. 429–441. [Google Scholar] [CrossRef]
  92. Carroll, J.M. HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science; Elsevier: Amsterdam, The Netherlands, 2003. [Google Scholar]
  93. Nazari, A.; Alabood, L.; Feeley, K.B.; Jaswal, V.K.; Krishnamurthy, D. Personalizing an AR-based Communication System for Nonspeaking Autistic Users. In Proceedings of the 29th International Conference on Intelligent User Interfaces (IUI ’24’), New York, NY, USA, 18–21 March 2024; pp. 731–741. [Google Scholar] [CrossRef]
  94. von Sawitzky, T.; Wintersberger, P.; Riener, A.; Gabbard, J.L. Increasing trust in fully automated driving: Route indication on an Augmented Reality head-up display. In Proceedings of the 8th ACM International Symposium on Pervasive Displays (PerDis ’19), New York, NY, USA, 12–14 June 2019. [Google Scholar] [CrossRef]
  95. Chang, C.J.; Hsu, Y.L.; Tan, W.T.M.; Chang, Y.C.; Lu, P.C.; Chen, Y.; Wang, Y.H.; Chen, M.Y. Exploring Augmented Reality Interface Designs for Virtual Meetings in Real-world Walking Contexts. In Proceedings of the 2024 ACM Designing Interactive Systems Conference (DIS ’24), New York, NY, USA, 1–5 July 2024; pp. 391–408. [Google Scholar] [CrossRef]
  96. Norman, D.A. The Design of Everyday Things, Revised Edition; Basic Books: New York, NY, USA, 2013. [Google Scholar]
  97. Laugwitz, B.; Held, T.; Schrepp, M. Construction and Evaluation of a User Experience Questionnaire. In HCI and Usability for Education and Work; Holzinger, A., Ed.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2008; Volume 5298, pp. 63–76. [Google Scholar] [CrossRef]
  98. Brooke, J. SUS—A quick and dirty usability scale. In Usability Evaluation in Industry; CRC Press: Boca Raton, FL, USA, 1996. [Google Scholar]
  99. Vredenburg, K.; Mao, J.Y.; Smith, P.W.; Carey, T. A survey of user-centered design practice. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’02), New York, NY, USA, 20–25 April 2002; pp. 471–478. [Google Scholar] [CrossRef]
  100. Abras, C.; Maloney-Krichmar, D.; Preece, J. User-Centered Design. In Berkshire Encyclopedia of Human-Computer Interaction; Bainbridge, W., Ed.; Sage Publications: Thousand Oaks, CA, USA, 2004. [Google Scholar]
  101. Geerts, D.; Vatavu, R.D.; Burova, A.; Vinayagamoorthy, V.; Mott, M.; Crabb, M.; Gerling, K. Challenges in Designing Inclusive Immersive Technologies. In Proceedings of the 20th International Conference on Mobile and Ubiquitous Multimedia (MUM ’21), New York, NY, USA, 5–8 December 2021; pp. 182–185. [Google Scholar] [CrossRef]
  102. Tanevska, A.; Chandra, S.; Barbareschi, G.; Eguchi, A.; Han, Z.; Korpan, R.; Ostrowski, A.K.; Perugia, G.; Ravindranath, S.; Seaborn, K.; et al. Inclusive HRI II: Equity and Diversity in Design, Application, Methods, and Community. In Proceedings of the Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’23), New York, NY, USA, 13–16 March 2023; pp. 956–958. [Google Scholar] [CrossRef]
  103. Ejaz, A.; Syed, D.; Yasir, M.; Farhan, D. Graphic User Interface Design Principles for Designing Augmented Reality Applications. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 0100228. [Google Scholar] [CrossRef]
  104. Knowles, B.; Clear, A.K.; Mann, S.; Blevis, E.; Håkansson, M. Design Patterns, Principles, and Strategies for Sustainable HCI. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’16), New York, NY, USA, 7–12 May 2016; pp. 3581–3588. [Google Scholar] [CrossRef]
  105. Nebeling, M.; Oki, M.; Gelsomini, M.; Hayes, G.R.; Billinghurst, M.; Suzuki, K.; Graf, R. Designing Inclusive Future Augmented Realities. In Proceedings of the Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems (CHI EA ’24), New York, NY, USA, 11–16 May 2024; pp. 1–6. [Google Scholar] [CrossRef]
  106. Samaradivakara, Y.; Ushan, T.; Pathirage, A.; Sasikumar, P.; Karunanayaka, K.; Keppitiyagama, C.; Nanayakkara, S. SeEar: Tailoring Real-time AR Caption Interfaces for Deaf and Hard-of-Hearing (DHH) Students in Specialized Educational Settings. In Proceedings of the Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems (CHI EA ’24), New York, NY, USA, 11–16 May 2024; pp. 1–8. [Google Scholar] [CrossRef]
  107. Fang, H.C.; Ong, S.K.; Nee, A.Y.C. Novel AR-based interface for Human–Robot Interaction and visualization. Adv. Manuf. 2014, 2, 275–288. [Google Scholar] [CrossRef]
  108. Alt, B.; Zahn, J.; Kienle, C.; Dvorak, J.; May, M.; Katic, D.; Jäkel, R.; Kopp, T.; Beetz, M.; Lanza, G. Human-AI Interaction in Industrial Robotics: Design and Empirical Evaluation of a User Interface for Explainable AI-Based Robot Program Optimization. arXiv 2024. [Google Scholar]
  109. Lindblom, J.; Alenljung, B. The ANEMONE: Theoretical Foundations for UX Evaluation of Action and Intention Recognition in Human-Robot Interaction. Sensors 2020, 20, 4284. [Google Scholar] [CrossRef]
  110. Jeffri, N.F.S.; Rambli, D.R.A. Guidelines for the Interface Design of AR Systems for Manual Assembly. In Proceedings of the 2020 4th International Conference on Virtual and Augmented Reality Simulations (ICVARS ’20), New York, NY, USA, 14–16 February 2020; pp. 70–77. [Google Scholar] [CrossRef]
  111. Wewerka, J.; Micus, C.; Reichert, M. Seven Guidelines for Designing the User Interface in Robotic Process Automation. In Proceedings of the 2021 IEEE 25th International Enterprise Distributed Object Computing Workshop (EDOCW), Gold Coast, Australia, 25–29 October 2021; pp. 157–165. [Google Scholar] [CrossRef]
  112. Zhao, Y.; Masuda, L.; Loke, L.; Reinhardt, D. Towards a Design Toolkit for Designing AR Interface with Head-Mounted Display for Close-Proximity Human–Robot Collaboration in Fabrication. In Collaboration Technologies and Social Computing; Takada, H., Marutschke, D.M., Alvarez, C., Inoue, T., Hayashi, Y., Hernandez-Leo, D., Eds.; Springer: Cham, Switzerland, 2023; pp. 135–143. [Google Scholar] [CrossRef]
  113. Marvel, J.A. Performance Metrics of Speed and Separation Monitoring in Shared Workspaces. IEEE Trans. Autom. Sci. Eng. 2013, 10, 405–414. [Google Scholar] [CrossRef]
  114. Kumar, S.P. Dynamic Speed and Separation Monitoring with On-Robot Ranging Sensor Arrays for Human and Industrial Robot Collaboration. Ph.D. Thesis, Rochester Institute of Technology, Rochester, NY, USA, 2020. [Google Scholar]
  115. Scalera, L.; Giusti, A.; Vidoni, R.; Gasparetto, A. Enhancing fluency and productivity in Human–Robot collaboration through online scaling of dynamic safety zones. Int. J. Adv. Manuf. Technol. 2022, 121, 6783–6798. [Google Scholar] [CrossRef]
  116. Zanchettin, A.M.; Lacevic, B. Safe and minimum-time path-following problem for collaborative industrial robots. J. Manuf. Syst. 2022, 65, 686–693. [Google Scholar] [CrossRef]
  117. Savur, C. A Physiological Computing System to Improve Human–Robot Collaboration by Using Human Comfort Index. Ph.D. Thesis, Rochester Institute of Technology, Rochester, NY, USA, 2022. [Google Scholar]
  118. Chacón, A.; Ponsa, P.; Angulo, C. Cognitive Interaction Analysis in Human–Robot Collaboration Using an Assembly Task. Electronics 2021, 10, 1317. [Google Scholar] [CrossRef]
  119. Bradley, M.M.; Lang, P.J. Measuring emotion: The self-assessment manikin and the semantic differential. J. Behav. Ther. Exp. Psychiatry 1994, 25, 49–59. [Google Scholar] [CrossRef]
  120. Betella, A.; Verschure, P.F.M.J. The Affective Slider: A Digital Self-Assessment Scale for the Measurement of Human Emotions. PLoS ONE 2016, 11, e0148037. [Google Scholar] [CrossRef] [PubMed]
  121. Legler, F.; Trezl, J.; Langer, D.; Bernhagen, M.; Dettmann, A.; Bullinger, A.C. Emotional Experience in Human–Robot Collaboration: Suitability of Virtual Reality Scenarios to Study Interactions beyond Safety Restrictions. Robotics 2023, 12, 168. [Google Scholar] [CrossRef]
  122. Savur, C.; Sahin, F. Survey on Physiological Computing in Human–Robot Collaboration. Machines 2023, 11, 536. [Google Scholar] [CrossRef]
  123. Sahin, M.; Subramanian, K.; Sahin, F. Using Augmented Reality to Enhance Worker Situational Awareness in Human-Robot Interaction. In Proceedings of the 2024 IEEE Conference on Telepresence, California Institute of Technology, Pasadena, CA, USA, 16–17 November 2024. Accepted for Presentation. [Google Scholar] [CrossRef]
  124. Subramanian, K.; Arora, S.; Adamides, O.; Sahin, F. Using Mixed Reality for Safe Physical Human–Robot Interaction. In Proceedings of the 2024 IEEE Conference on Telepresence, California Institute of Technology, Pasadena, CA, USA, 16–17 November 2024. Accepted for Presentation. [Google Scholar] [CrossRef]
Figure 1. PRISMA diagram of the literature gathering and screening process, generated using [15].
Figure 2. Framework for key factors in HRI.
Figure 3. Task allocation graphic from Lamon et al. The diagram illustrates the process from high-level tasks to the designated sequence of actions. Each high-level task is broken down into a series of actions needed to accomplish it. The algorithm then assesses the suitability of each agent for performing these actions based on specific metrics. Finally, the actions are allocated to the agents to minimize the overall performance cost [22].
Figure 4. Different categories of human–robot teamwork [28]. In each case, a human is depicted by a circle labeled with an “H,” whereas a robot is illustrated by a circle containing an “R.” Double-headed arrows indicate command flows between humans and robots.
Figure 5. Facial expression range of the robot used by Oh and Kim [52].
Figure 6. (Left) A collaboration workspace between a robot and a human. (Right) The robot’s prediction model, displaying two types of robot movements: a conventional shortest-path motion (dashed arrow) and a human-aware motion (solid arrow). It also includes the predicted path of a human worker [27].
Figure 7. Operation lights on a collaborating robot, circled in green. The lights represent the separation distance: green is safe, red is unsafe, and amber is a warning.
Figure 8. Visual representation of the training solution by Andersson et al. [68].
Figure 9. Projected instructions for car door alignment. (a) The starting projection for a car door alignment task. (b) The projection moving as the car door aligns. (c) Ending projection with the “Done” visualization [8].
Figure 10. AR projections by Sonawani et al. that show a robot’s status and interaction elements [76].
Figure 11. The AR visualization seen through an HMD, showing the robot’s upcoming motions in green and the safety zone in red [59].
Figure 12. Diagram of the body model used by Matsas et al. to calculate the distance at which a robot should automatically shut down to avoid colliding with a human. Geometric primitives of different sizes are used to approximate regions of the human body, as shown in (A) through (D) [82].
Figure 13. The user’s view for three different UI screens created by Green et al. (A) Overview without planning, (B) immersive view, (C) overview with planning [24].
Figure 14. The Self-Assessment Manikin (SAM), commonly used to obtain subjective responses from human subjects [119].
Figure 15. Framework for developing AR applications in HRI.
Figure 16. AR display shown to human workers through an HMD. The green arrow indicating the robot’s location is circled in red [123].
Figure 17. Results from Study 1: (A) Average Perception Error; (B) Mean Confidence. AR denotes task performance with Augmented Reality, and NAR denotes task performance without it [123].
Figure 18. Geometric primitives used for approximating the human body for minimum distance measurements in a digital twin [124].
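Approximating the human body with simple primitives keeps the minimum distance computation cheap enough to run in real time inside the digital twin. As a hedged illustration of the idea only, and not the implementation in [124], the Python sketch below computes the minimum surface-to-surface distance between a set of spheres placed on the human and a set of spheres placed along the robot; the primitive placements and radii are hypothetical.

import numpy as np

def min_sphere_distance(human_spheres, robot_spheres):
    """Minimum surface-to-surface distance between two sets of spheres.

    Each sphere is (centre_xyz, radius). Purely illustrative; the primitives,
    radii, and placement used in [124] may differ.
    """
    best = np.inf
    for hc, hr in human_spheres:
        for rc, rr in robot_spheres:
            centre_dist = np.linalg.norm(np.asarray(hc) - np.asarray(rc))
            best = min(best, max(0.0, centre_dist - hr - rr))
    return best

# Hypothetical example: head, torso, and hand spheres vs. two robot-link spheres (metres).
human = [((0.0, 0.0, 1.6), 0.12), ((0.0, 0.0, 1.1), 0.25), ((0.4, 0.2, 1.0), 0.08)]
robot = [((0.9, 0.0, 1.0), 0.10), ((0.7, 0.1, 1.2), 0.10)]
print(f"Minimum separation: {min_sphere_distance(human, robot):.3f} m")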
Figure 19. A stacked view of a 60 s experiment segment shows the system’s performance using the Speed and Separation Monitoring algorithm. The top row indicates the speed scaling command to the robot, the middle row shows the minimum distance between human and robot, and the bottom row captures their positions every 10 s [124].
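The speed scaling command in the top row of Figure 19 is driven by the minimum human–robot distance in the middle row. The exact control law of [124] is not reproduced here; the sketch below shows one common piecewise-linear formulation of Speed and Separation Monitoring, with the stop and full-speed distances chosen as illustrative placeholders rather than values from the study.

def ssm_speed_scale(min_distance_m: float,
                    stop_distance_m: float = 0.3,
                    full_speed_distance_m: float = 1.2) -> float:
    """Return a speed scaling command in [0, 1] from the minimum human-robot
    distance. Thresholds are illustrative placeholders, not values from [124]."""
    if min_distance_m <= stop_distance_m:
        return 0.0          # inside the protective distance: stop the robot
    if min_distance_m >= full_speed_distance_m:
        return 1.0          # far away: run at the full programmed speed
    # Linearly ramp the speed between the two thresholds.
    return (min_distance_m - stop_distance_m) / (full_speed_distance_m - stop_distance_m)

# Example: distances from 1.4 m down to 0.2 m.
for d in (1.4, 1.0, 0.6, 0.3, 0.2):
    print(f"d = {d:.1f} m -> scale = {ssm_speed_scale(d):.2f}")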
Figure 20. Spheres showing the robot end effector’s movement. (A) Blue spheres when the human is at a safe distance from the robot. (B) Red spheres when the human is in close proximity to the robot.
Figure 21. The minimap design is circled in green. (A) The human indicator is green in safe proximity. (B) The human indicator turns orange when in closer but still safe proximity. (C) The human indicator turns red when in close and dangerous proximity to the robot.
Table 1. Summary of AR Applications in Human–Robot Interaction.
Reference | Category | Use of AR | Hardware
Fang et al. [65] | Collaboration | Path Plan | Display
Doil et al. [66] | Collaboration | Task Plan | HMD+
Wang et al. [67] | Collaboration | Task Plan | HMD
Andersson et al. [68] | Collaboration | Task Train | HMD
Saaski et al. [57] | Collaboration | Task Instr | HMD
Michalos et al. [69] | Collaboration | Task Instr | HMD+
Liu et al. [31] | Collaboration | Task Instr | Display
Kalpangam et al. [8] | Collaboration | Task Instr | Projection
Lunding et al. [70] | Collaboration | Communication | HMD
Tabrez et al. [71] | Collaboration | Communication | HMD
De Franco et al. [72] | Collaboration | Task Collab | HMD
Andronas et al. [73] | Collaboration | Interact. Cues | HMD
Gkournelos et al. [74] | Collaboration | Interact. Cues | Smartwatch
Qui et al. [75] | Collaboration | Interact. Cues | Web
Sonawani et al. [76] | Collaboration | Interact. Cues | Projection
Tsamis et al. [59] | Trust and Safety | Visual. Actions | HMD
Palmarini et al. [58] | Trust and Safety | Visual. Actions | Tablet
Vogel et al. [60] | Trust and Safety | Safe Zone | Projection
Choi et al. [61] | Trust and Safety | Safe Zone | HMD
Hietanen et al. [77] | Trust and Safety | Safe Zone | HMD+
Lunding et al. [78] | Trust and Safety | Safe Zone | HMD
Eschen et al. [79] | Trust and Safety | Safe Maint | HMD
Papanastasiou et al. [80] | Trust and Safety | Safe OS | HMD+
Makris et al. [81] | Trust and Safety | Safe OS | HMD
Matsas et al. [82] | Trust and Safety | Collis. Avd | Projection
Abbreviations: Path Plan—Path Planning, Task Plan—Task Planning, Task Train—Task Training, Task Instr—Task Instruction, Task Collab—Task Collaboration, Interact. Cues—Interaction Cues, Visual. Actions—Visualizing Actions, Safe Zone—Safety Zone, Safe Maint—Safety Maintenance, Safe OS—Safety Operating System, Collis. Avd—Collision Avoidance, HMD+—Head-Mounted Displays and Screens.
Table 2. Statistical significance of situational awareness metrics.
Metric | T-Statistic | p-Value
Average Perception Error | −3.5871 | 0.002106
Percent Correct | 2.4179 | 0.02643
Mean Confidence | 4.0171 | 0.008085
Mean Response Time | −1.7847 | 0.09115
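Results of this kind are typically obtained by comparing each situational awareness metric between the AR and NAR conditions. The Python sketch below is a minimal illustration only, assuming a within-subjects design analyzed with paired t-tests; the sample size and per-participant scores are hypothetical placeholders, not the study data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants = 20  # hypothetical sample size

# Hypothetical per-participant scores for two SA metrics, one value per
# participant in each condition (AR vs. NAR).
metrics = {
    "Average Perception Error": (rng.normal(5.0, 1.0, n_participants),   # AR
                                 rng.normal(6.5, 1.5, n_participants)),  # NAR
    "Mean Confidence":          (rng.normal(4.2, 0.5, n_participants),
                                 rng.normal(3.5, 0.6, n_participants)),
}

for name, (ar, nar) in metrics.items():
    t_stat, p_value = stats.ttest_rel(ar, nar)  # paired (within-subjects) t-test
    print(f"{name}: t = {t_stat:.4f}, p = {p_value:.6f}")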
Table 3. Absolute error metrics: HoloLens 2 tracking vs. OptiTrack Motion Capture System; values are in cm [124].
Error Metric | Value
Max | 0.07435
Min | 0.00607
Median | 0.03087
Mean | 0.03149
RMSE | 0.03382
Std Dev | 0.01234
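The summary statistics above can be computed directly from two time-aligned position traces. The sketch below assumes the HoloLens 2 estimate and the OptiTrack ground truth have already been resampled onto a common timebase; the arrays here are synthetic placeholders used only to make the example runnable, not the recorded data from [124].

import numpy as np

# Hypothetical, time-aligned 3D position traces (N x 3). In practice these
# would be the HoloLens 2 estimate and the OptiTrack ground truth.
rng = np.random.default_rng(1)
ground_truth = rng.uniform(-0.5, 0.5, size=(1000, 3))
tracked = ground_truth + rng.normal(0.0, 0.03, size=(1000, 3))

# Per-sample absolute (Euclidean) error between the two traces.
err = np.linalg.norm(tracked - ground_truth, axis=1)

report = {
    "Max": err.max(),
    "Min": err.min(),
    "Median": np.median(err),
    "Mean": err.mean(),
    "RMSE": np.sqrt(np.mean(err ** 2)),
    "Std Dev": err.std(),
}
for name, value in report.items():
    print(f"{name}: {value:.5f}")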
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.