Mixed Reality-Enhanced Intuitive Teleoperation with Hybrid Virtual Fixtures for Intelligent Robotic Welding

This paper presents an integrated scheme based on a mixed reality (MR) and haptic feedback approach for intuitive and immersive teleoperation of robotic welding systems. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time visual feedback from the robot working space. The proposed robotic tele-welding system features imitative motion mapping from the user’s hand movements to the welding robot motions, and it enables the spatial velocity-based control of the robot tool center point (TCP). The proposed mixed reality virtual fixture (MRVF) integration approach implements hybrid haptic constraints to guide the operator’s hand movements following the conical guidance to effectively align the welding torch for welding and constrain the welding operation within a collision-free area. Onsite welding and tele-welding experiments identify the operational differences between professional and unskilled welders and demonstrate the effectiveness of the proposed MRVF tele-welding framework for novice welders. The MRVF-integrated visual/haptic tele-welding scheme reduced the torch alignment times by 56% and 60% compared to the MRnoVF and baseline cases, with minimized cognitive workload and optimal usability. The MRVF scheme effectively stabilized welders’ hand movements and eliminated undesirable collisions while generating smooth welds.


Introduction
Welding has been used extensively in the maintenance of nuclear plants, the construction of underwater structures, and the repair of spacecraft in outer space [1]. In these hazardous situations in which human welders have no access, the judgment and intervention of the human operators are required [2]. Customized production is also an application scenario for welding, where welders often work in environments with dust, strong light, radiation, and explosion hazards [3]. Human-in-the-loop (HITL) robotic tele-welding strategies have become a feasible approach for bringing humans out of these dangerous, harmful, and unpleasant environments while performing welding operations [4,5]. Robotic tele-welding systems (RTWSs) combine the advantages of humans and robotics and coordinate the functions of all system components efficiently and safely [6,7]. RTWSs can diminish geographical limitations for scarce welding professionals and bring a remote workforce into manufacturing [8,9].
Welding training is a time-consuming and costly process. Intensive instruction and training are usually required to bring unskilled welders to an intermediate skill level [10,11]. It is important to analyze the differences between the operating skills of professional and novice welders to facilitate the professional welding level of unskilled welders and further to improve the feasibility, efficiency, and welding quality of RTWSs for novice welders during remote welding operations. The expertise and skill extraction of professional welders

Related Work
More recent research attention has focused on MR-enhanced tele-welding paradigms [22]. It was verified in [23] that there were no statistically significant differences in the total welding scores between participants in the physical welding group and the mixed reality-based welding groups. The mixed reality welding user interface gives operators the ability to perform welding at a distance while maintaining a level of manipulation [24]. An optical tracking-based telerobotic welding system was introduced in [25]. The Leap Motion sensor captures the trajectory of a virtual welding gun held by a human welder in the user space to control the remote welding robot for the welding task. However, this welding system requires a physical replica of the workpiece in the user space to project a real-time weld pool state and guide the welders to adjust their hand movements to the shape of the workpiece [26]. Wang et al. [27] developed an MR-based human-robot collaborative welding system. The collaborative tele-welding platform combines the strengths of humans and robots to perform weaving gas tungsten arc welding (GTAW) tasks. The welder can monitor the welding process through an MR display without needing to be physically present. Welding experiments indicated that collaborative tele-welding achieves better welding results than welding performed by humans or robots independently. MR-based robot-assisted remote welding platforms were developed in [28] to provide the welders with more natural and immersive human-robot interaction (HRI) [29]. However, in these systems, the users rely on visual feedback for movement control and have no haptic effects to prevent accidental collisions between the robot and the workpiece when the operator controls the robot for welding from a distance.
A visual and haptic robot programming system based on mixed reality and force feedback was developed in [30], but the system was not suitable for real-time remote welding operations and was inefficient in unstructured and dynamic welding situations. Haptic feedback provides the welders with additional scene modality and increases the sense of presence in the remote environment, thereby improving the ability to perform complex tasks [31-33]. The primary benefit of incorporating haptic effects is to enhance the tele-welding task performance and operator's perception [34]. These existing remote robotic welding systems do not take sufficient advantage of the potential performance improvements that various forms of haptic effects can bring to the user. The rapid development of MR-enhanced teleoperation has led to the integration of MR and virtual fixtures (VF) to improve task performance and user perception [35,36]. The integration of MR and VF in RTWSs can effectively address the defects and problems that exist in the above telerobotic welding systems. The immersive and interactive MR environment allows for the effective generation of virtual workpieces in the user space [37] and can be combined with VF technology to provide force feedback and guidance to users, thereby improving the accuracy of robot movements and effectively preventing accidental collisions [38,39].
The main weakness of the published studies on tele-welding is that existing remote-controlled robotic welding systems do not adequately incorporate MR technology and virtual fixtures to effectively eliminate potentially harmful collisions in the tele-welding process and grant welding robots human-level dynamics for dexterous GMAW welding tasks. No attempt has been made to reduce operational complexity to assist inexperienced welders in performing welding quickly and to address the time-consuming training and shortage of a qualified workforce [40]. In this work, an on-site welding experiment was designed to investigate the motion difference between the expert and unskilled welders, extracting the expertise and skills of professional welders to optimize the robotic tele-welding platform. An MRVF robotic tele-welding platform was developed to facilitate novice welders with better weld control by incorporating MR and VF. This tele-welding paradigm integrates imitation-based motion mapping and MR and VF functions, providing human-level operating capabilities and enabling non-skilled welders to perform remote GMAW tasks. A tele-welding experiment was carried out to verify the effect of MR-integrated visual and haptic cues on the tele-welding tasks against the typical baseline and MR tele-welding cases.

Welding Skill Extraction System Design
The objectives of this research were to (1) remove human welders from hazardous and unpleasant working environments without increasing operational complexity or sacrificing the welding quality; (2) enable the welders to conduct tele-welding in the same way it is performed onsite, minimizing the learning required by introducing a tele-welding robot in the loop; (3) analyze the operational techniques and welding expertise distinguishing professional welders from unskilled welders; (4) further assist unskilled workers with integrated visual and haptic HRI modalities via MR to improve task performance and system usability in remote-controlled tele-welding and to achieve welding results comparable to those of professional welders. These objectives address key issues in remote tele-welding.
To identify operational differences between unskilled and professional welders, hand movements of professional welders performing manual welding tasks were tracked and compared to those of unskilled welders. Figure 1 shows the hardware components of the gas metal arc welding (GMAW) motion tracking platform, including an HTC Vive tracking system, welding shelter, welding torch, and extra welding gas/wire/electricity supplies. A 6-DOF Vive tracker was mounted on the welding torch and exposed to the two surrounding Vive tracking base stations for tracking the translational and rotational motion of the welder's torch hand by generating a wireless connection between the tracked welding torch and the base stations. A metal welding shelter enables more precise motion tracking and covers the torch tip and workpieces to prevent infrared (IR) light exposure, which may interfere with the IR-sensitive tracking sensors. Auto-darkening welding helmets and welding gloves were used by all participants.

MRVF Tele-Welding System Overview
In this study, we investigated the impact of an integrated visual/haptic perception in MR on a natural, 3D motion mapping, enhanced immersive, and intuitive tele-welding process. Figure 2 shows the MR-incorporated virtual fixture (MRVF) telerobotic system consisting of four main elements: (1) the welding robot and visualization system; (2) the haptic master robot; (3) the MR workspace implementation; and (4) the robot and operator space communication implementation.

The remote robotic welding platform consisted of a UR5 industrial manipulator with six degrees of freedom (DOF), gas metal arc welding (GMAW) equipment, welding camera, and auto-darkening filter. The UR5 industrial manipulator was equipped with an arc welding torch to perform the remote welding process, as shown in Figure 3. A monocular Logitech C615 webcam (Logitech International S.A, Lausanne, Switzerland) with an auto-darkening lens was mounted on the robot to observe the welding process and provide the operator with a direct view of the workpieces. A robot operating system (ROS) middleware-supported driver for the UR5 robot ran on a computer with an Ubuntu 16.04 operating system. The Ubuntu computer was equipped with an i7-10700 CPU, 64 GB RAM, and GeForce RTX 2060 graphics to command the UR5 robot controller through TCP/IP and process the on-site welding streams. The TCP/IP-based ROS communication protocol was capable of fast control rates at 125 Hz, which is sufficient for teleoperated robotic welding tasks, where real-time control is required.
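As an illustration of the 125 Hz requirement, a fixed-period command cycle can be sketched in plain Python. This is only a minimal sketch: the actual system relies on the ROS driver's own rate control, and `read_stylus`/`send_velocity` are hypothetical placeholder callables, not functions from the paper's implementation.

```python
import time

RATE_HZ = 125          # control rate reported for the TCP/IP ROS link
PERIOD = 1.0 / RATE_HZ

def control_loop(read_stylus, send_velocity, duration_s=0.2):
    """Run a fixed-rate command loop: sample the input, send a velocity command.

    The loop schedules each cycle at an absolute deadline (next_tick) rather
    than sleeping a fixed amount, so occasional slow cycles do not accumulate
    drift in the command rate.
    """
    cycles = 0
    next_tick = time.perf_counter()
    end = next_tick + duration_s
    while next_tick < end:
        cmd = read_stylus()      # hypothetical: sample master-device state
        send_velocity(cmd)       # hypothetical: forward the command to the robot
        cycles += 1
        next_tick += PERIOD
        sleep = next_tick - time.perf_counter()
        if sleep > 0:
            time.sleep(sleep)
    return cycles
```

Deadline-based scheduling like this is one common way to keep a teleoperation command stream close to a nominal rate on a non-real-time OS.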
A PHANToM Omni haptic robot (SensAble Technologies Inc., Woburn, MA, USA) was utilized as the motion input device to remotely operate the welding robot in a manual welding manner. The MRVF system features velocity-centric motion mapping (VCMM) from the user's hand movements to the robot motions and enables spatial velocity-based control of the robot tool center point (TCP). The welder uses the stylus of the haptic robot with the same motion and manner as when performing manual welding. This approach enables intuitive and precise user control of the position and orientation of the UR5 end effector.
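The paper does not give the VCMM equations, but the core idea of a velocity-centric mapping — scaling the stylus displacement per sample into a clamped TCP velocity command — can be sketched as follows. The scale factor and velocity limit are illustrative assumptions, not the system's actual parameters.

```python
def vcmm_velocity(stylus_pos, prev_stylus_pos, dt, scale=1.0, v_max=0.05):
    """Map stylus displacement over one sample period into a TCP velocity.

    stylus_pos / prev_stylus_pos: (x, y, z) stylus positions in metres.
    dt: sample period in seconds (e.g. 1/125 s at the 125 Hz control rate).
    scale: master-to-robot workspace scaling (assumed value).
    v_max: safety clamp on the commanded speed in m/s (assumed value).
    """
    v = [scale * (p - q) / dt for p, q in zip(stylus_pos, prev_stylus_pos)]
    speed = sum(c * c for c in v) ** 0.5
    if speed > v_max:
        # preserve direction, clamp magnitude
        v = [c * v_max / speed for c in v]
    return tuple(v)
```

Commanding velocities rather than absolute poses lets the operator's hand motion drive the robot smoothly without requiring the master and slave workspaces to be registered to one another.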
The operator space for the MRVF tele-welding consisted of an HTC Vive HMD and a 27-inch monitor connected to a desktop with an i7-8700k CPU, 32 GB RAM, and a GeForce GTX 1080 graphics processor. The immersive MRVF environment was generated in Unity 3D to display an integrated 3D visualization with overlaid monoscopic image streams and corresponding haptic feedback during the welding process. The ROS bridge provided a network intermediary, enabling the exchange of messages between nodes, and it was used to establish communication between the master and slave robot sides.

MRVF Visual/Haptic Workspace
Digital twin technology was used to capture the physical UR5 robot pose during operation and allowed the welders to view the rotation status of each joint [41]. The combination of the virtual twin and onsite video streams in MR provided comprehensive real-time monitoring of the robot's operating status. It also provided assistance in accurately and efficiently amending the welding motion based on data from the robot model. The scale ratio for the virtual UR5 robot was 1:5 so the digital twin data and motions fit the user's view in the MR welding workspace.
Virtual fixtures (VFs) can be divided into guidance fixtures and prevention fixtures. The proposed MRVF uses a combination of both to guide the users to efficiently navigate to the initial welding point and to effectively prevent the torch tip from colliding with the workpiece.
During the welding process, the electrode needs to contact the molten weld pool so that the filler metal can be transferred from the electrode to the work. However, contact between the torch tip and the workpiece must be prevented to avoid damage. In the MRVF tele-welding workspace (Figure 4d), a transparent prevention VF panel remains overlaid on the virtual workpiece with a 2D display of the actual welding process to minimize collisions between the user-manipulated torch tip and the workpiece.
The welding experiments revealed that it is relatively difficult for novice users to move the torch to the exact weld starting point, and this torch alignment process is often time-consuming and increases overall task completion time. In the MR workspace, a conical guidance fixture is installed with its tip aligned to the welding start point, as shown in Figure 4d. The user simply moves the torch head to the wide end and then quickly moves the virtual torch tip to the cone tip position by following the resistance of the inner wall of the conical shape, and the actual torch is simultaneously driven to the intended welding start point.
Interaction between the haptic robot and the MR environment occurs at the haptic interface point (HIP), representing the corresponding position of the physical haptic probe of the master haptic robot [42,43]. The force exerted on the haptic stylus is calculated by simulating a spring between the proxy and the HIP. The resistance force exerted by the haptic stylus on the user's hand is proportional to the distance between the proxy point and the HIP. Figure 5 illustrates how haptic rendering and robot control are implemented using a master-controlled HIP and proxy-controlled robot (MHPR) architecture [44]. Because the proxy point never violates the constraints imposed by the virtual fixtures, the welding robot will not collide with the workpiece, even if the operator pushes against the resistance force. This architecture forms a hard prevention fixture, allowing the user to maintain the desired contact tip-to-work distance (CTWD), preventing unwanted collisions, and increasing the precision and stability of tele-welding operations.
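The proxy/HIP coupling described above can be illustrated with a minimal sketch: the proxy is clamped to the constraint (here a horizontal prevention plane, as a simplified stand-in for the VF panel), and the rendered force is the spring between proxy and HIP. The stiffness value and plane geometry are illustrative assumptions.

```python
def clamp_proxy_to_plane(hip, plane_z):
    """Hard prevention fixture: keep the proxy on or above a virtual plane.

    The robot tracks the proxy rather than the HIP, so it never crosses the
    plane even when the operator pushes the stylus past it.
    """
    x, y, z = hip
    return (x, y, max(z, plane_z))

def proxy_spring_force(proxy, hip, k=0.5):
    """Resistance force on the stylus, proportional to proxy-HIP separation.

    The proxy stays on the constraint surface while the HIP follows the
    physical stylus, so the force grows as the user pushes into the fixture.
    k: spring stiffness in N/mm (illustrative value).
    """
    return tuple(k * (p - h) for p, h in zip(proxy, hip))
```

Driving the slave robot from the proxy rather than the HIP is what makes the fixture "hard": the commanded pose is constraint-satisfying by construction, while the spring force merely informs the user's hand.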
Figure 4 (partial caption): (c) the MR module without haptic effects, including the virtual welding workpiece, overlaid RGB stream, virtual welding torch, and the scaled digital twin of the UR5; (d) the MRVF module involving hybrid guidance and prevention VFs.

Experimental Design
Two experiments were conducted with novice and professional welders. Experiment 1 (the onsite welding experiment) investigated the motion differences between the expert and unskilled welders and extracted the expertise and skills of professional welders to optimize the robot-assisted welding platform. The experimental results further served as the "ground truth" for the development of MRVF robot-assisted welding platforms that facilitate novice welders with better weld control by incorporating MR and VF. Experiment 2 (the tele-welding experiment) was carried out to verify the effect of MR-integrated visual and haptic cues on the tele-welding performance of unskilled welders. The study focused on novice participants to assess improvements in welding quality relative to professional on-site welding. Experiments were also conducted with professional welders to produce the criteria for the desired welding results.

On-Site Welding Experiment
Sixteen (16) student volunteers and four (4) technical staff members were recruited at the University of Canterbury (Christchurch, New Zealand). All participants were right-handed. The 16 students were unskilled welders who self-rated as having no prior experience, and the 4 technical staff members were very experienced welders who perform welding regularly and train undergraduates with no welding experience. Due to the relatively small number of professional welders, each professional welder was asked to weld multiple times to produce a comparable sample size.
Prior to the welding experiments, the workshop technician provided the novice subjects with the same standardized face-to-face instructions on the manual GMAW process, including the use of the welding torch, melting conditions, and the desired molten weld pool status for quality welding results. To observe workshop safety precautions, the professional welders and experimenters remained close by onsite and supervised the novice welders throughout the experiment. The novice welding results therefore represent safe, best-case results for this cohort.
The welding experiments were conducted using a single-phase GMAW welding machine. A steel workpiece plate was placed in a horizontal position on the welding table for typical flat welding. The dimensions of the plates were 150 mm × 100 mm × 10 mm. The centerline of the workpiece was set as the intended welding trajectory. Each professional welder was required to perform onsite flat welding four times for GMAW operation expertise and skill extraction. Each novice welder performed once for motion data collection and analysis. The corresponding hand movements and welding results were used to assess the absolute and relative welding performance, distinguishing the gap between experts and novices.

Tele-Welding Experiment
A 3 × 1 within-participants experiment was designed to validate whether the designed MRVF scheme facilitated novice welder control of a robotic tele-welding platform to achieve quality welding results and to assess the user experience. The null hypothesis (H0) of the repeated-measures ANOVA was that the baseline, MRnoVF, and MRVF tele-welding paradigms are equally effective in welding quality and welder experience for novices, in terms of effectiveness, intuitiveness, and learnability, using the VCMM imitation-based motion mapping approach as the basis for teleoperation.
In this work, three visualization modules in the tele-welding HRI platform, shown in Figure 4, were tested to validate the efficacy of the proposed MRVF tele-welding paradigm and, in particular, to show the differences between the 2D baseline, MR, and MRVF settings for remote tele-welding. Specifically, the three modules were as follows:
• Baseline: Perform the tele-welding operation with a non-immersive display using monoscopic streams (Figure 4a). The display screen was a standard 27-inch PC monitor. The 2D visualization was transmitted from the monoscopic camera mounted on the welding robot. The welder manipulated the master haptic robot for welding robot control without haptic effects. The non-immersive 2D display was used as the baseline condition, as it is commonly used for visual feedback in typical remote-controlled welding systems.
• MRnoVF: Conduct the tele-welding task with an immersive MR-HMD with monocular images overlaid on top of the virtual workpiece (Figure 4c). The MRnoVF scheme is a limited version of the proposed MRVF module because it does not provide the participants with haptic cues to support hand maneuvering. The haptic device was deployed to command the UR5 arm for welding but provided no force feedback to the operator.
• MRVF: MRVF incorporates combined planar prevention and conical guidance haptic cues in the immersive MR workspace (Figure 4d). The user maneuvered the haptic device within the constraints provided by the guidance and prevention VFs while welding with the remotely placed robot. The user inspected the real-time pose of the physical welding robot via the scaled virtual replica in the scene.
The participants ran through all three experimental setups distinguished by increasing levels of visual and haptic HRI modalities. First, each participant read the instructions and completed a pre-task questionnaire recording age, gender, and familiarity with welding, robotics, and MR experience. The objective of each trial was then explained. Each subject was given the same introduction that demonstrated the proposed intuitive tele-welding platform with the visual/haptic feedback modules they were going to use before testing, ensuring standardized, consistent training for all subjects. After a demonstration, the participants were given 2 min to experience the MR-enhanced telerobotic welding system to familiarize themselves with the haptic robot, mixed reality imagery, and robotic welding platform.
The MR-HMD and haptic stylus were fitted on each participant at the user site. When the subject gave verbal confirmation that the MR welding workspace appeared as intended, they started completing the teleoperated robotic welding tasks as required. Each participant completed the typical horizontal flat welding task under each experimental condition (2D baseline, MRnoVF, MRVF). The condition sequence was randomized to mitigate learning and fatigue effects. After completing each experimental task using one control-feedback condition, the participants filled out a questionnaire about the HRI module to directly compare the three conditions. The participants were given unbounded time to complete the welding tasks but were instructed to navigate the torch from a given pose to the desired welding starting pose as effectively as they could. The time participants spent aligning the torch tip influences the overall tele-welding completion time far more than the welding itself, so alignment times were measured to evaluate the improvement in participant work efficiency under each condition, as the VFs aid this process in particular. The number of accidental collisions between the torch tip and the metal was recorded as a performance metric. User effort and workload during the teleoperation experiments were evaluated with the NASA task load index (NASA-TLX) at the end of each task, assessing mental demand, physical demand, temporal demand, performance, effort, and frustration (score range of 1-100, from least to most demanding) [45,46]. User acceptance and system usability, including usefulness and ease of use, were assessed with a questionnaire based on the technology acceptance model (TAM) (score range of 0-7, from worst to best) [47,48].
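For reference, the unweighted ("raw") NASA-TLX aggregate is simply the mean of the six subscale ratings; the paper does not state whether the raw or the pairwise-weighted variant was used, so the sketch below shows only the raw form.

```python
def nasa_tlx_raw(scores):
    """Unweighted ('raw') NASA-TLX: mean of the six subscale ratings (1-100).

    scores: dict with keys mental, physical, temporal, performance,
    effort, frustration. The weighted TLX variant would additionally
    apply pairwise-comparison weights to each subscale.
    """
    keys = ("mental", "physical", "temporal", "performance",
            "effort", "frustration")
    return sum(scores[k] for k in keys) / len(keys)
```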
A one-way within-participants ANOVA with repeated measures was used to analyze the statistical differences among the means of all measurements [49]. When the ANOVA test showed a significant main effect of the experimental condition, Bonferroni-corrected pairwise comparisons indicated which mean values differed significantly [50]. The Greenhouse-Geisser correction was applied when assessing the differences in the welder reports across the baseline, MRnoVF, and MRVF modules as within-subject variables [51].
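The F statistic for this design can be computed directly from a subjects × conditions score table, as the sketch below shows for the three conditions (baseline, MRnoVF, MRVF); the Greenhouse-Geisser correction would additionally scale both degrees of freedom by the sphericity estimate ε before looking up the p-value.

```python
def rm_anova_f(data):
    """One-way repeated-measures ANOVA F statistic.

    data: list of per-subject lists, one score per condition
    (e.g. [baseline, MRnoVF, MRVF]). Returns (F, df_effect, df_error).
    The error term removes between-subject variability, which is what
    distinguishes the repeated-measures design from a between-groups ANOVA.
    """
    n = len(data)          # subjects
    k = len(data[0])       # conditions
    grand = sum(sum(row) for row in data) / (n * k)
    ss_cond = n * sum((sum(row[j] for row in data) / n - grand) ** 2
                      for j in range(k))
    ss_subj = k * sum((sum(row) / k - grand) ** 2 for row in data)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_err = ss_total - ss_cond - ss_subj          # residual (interaction) SS
    df_effect, df_error = k - 1, (n - 1) * (k - 1)
    f = (ss_cond / df_effect) / (ss_err / df_error)
    return f, df_effect, df_error
```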

Onsite Welding Results
The experiment identified the difference between the welding motion trajectories of the skilled and unskilled welders to assist unskilled welders in achieving better control of the weld in telerobotic welding operations. Figure 6 shows the welding results of the skilled and unskilled welders; the expert welds exhibited consistent uniformity, with a smooth weld surface and even thickness across the weld axis. The results of the unskilled welders were heterogeneous, abrupt, variable, and uneven in thickness and direction. Analysis of the tracked torch motion data was performed to determine the causes of these discrepancies.

Onsite Welding Results
The experiment identified the differences between the welding motion trajectories of the skilled and unskilled welders in order to help unskilled welders achieve better control of the weld in telerobotic welding operations. Figure 6 shows the welding results of the skilled and unskilled welders; the expert welds exhibited consistent uniformity, with a smooth weld surface and even thickness along the weld axis. The results of the unskilled welders were heterogeneous, abrupt, variable, and uneven in thickness and direction. The tracked torch motion data were analyzed to determine the causes of these discrepancies. Figure 7 compares the motions and velocities of the professional and unskilled welders, showing that the unskilled welders had difficulty stabilizing the torch hand movement in the X and Z directions. Figure 7b shows that both the professional and novice welders could manipulate the torch smoothly in the target direction, Y, which indicates that the unskilled welders could adjust the motion velocity in the welding direction according to the real-time weld pool status just as the professional welders did. However, significantly aggressive hand velocities were observed in the X and Z directions, reflecting additional velocity and motion due to instability. The hand motion differences were summarized by variance and RMSE, as shown in Table 1 and Figure 8, in which the overall results match those in Figure 7.
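The per-axis variance and RMSE summarized in Table 1 can be computed as follows. This is a minimal sketch, assuming the tracked torch-tip positions are sampled as an (N, 3) array of [X, Y, Z] coordinates and compared against an ideal straight-line reference pass (the array layout and reference are illustrative assumptions, not the paper's data format).

```python
import numpy as np

def motion_stability_metrics(traj, ref):
    """Per-axis variance of hand motion and RMSE against a reference path.

    traj, ref: (N, 3) arrays of torch-tip positions [X, Y, Z] in mm.
    Returns per-axis variance (spread of the motion) and RMSE (deviation
    from the intended trajectory), as used to contrast skilled and
    unskilled welders.
    """
    traj = np.asarray(traj, dtype=float)
    ref = np.asarray(ref, dtype=float)
    err = traj - ref
    return {
        "variance": traj.var(axis=0),              # per-axis motion spread
        "rmse": np.sqrt((err ** 2).mean(axis=0)),  # per-axis tracking error
    }
```

Under this metric, an unsteady hand shows up as larger RMSE in the off-axis (X, Z) components while the welding-direction (Y) component stays comparable, matching the pattern described above.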

Tele-Welding Results
Overall, all the subjects completed the tele-welding experiments under the three conditions. In the post-experiment questionnaire, the majority of participants rated the baseline case as the most difficult welding task condition. Most subjects commented that the MRVF VFs supported their suspended torch hands and reduced fatigue during the robotic welding process. Figure 9 presents a comparison between the sample welding results of the expert and novice welders for the MRVF-integrated visual/haptic scheme, which reduced undesirable deviations by the unskilled welder. The welding results show that the gap between the unskilled and professional welders was significantly reduced, and the MRVF condition was intuitive enough to enable experienced welders to quickly transfer their skills from onsite welding to remote tasks.

Figure 9. Results of the MR-integrated visual/haptic tele-welding system from a professional welder (above) and an unskilled welder (below).
Statistical analysis results comparing the MR-enhanced visual/haptic tele-welding frameworks for HRI paradigms against the baseline and MRnoVF cases are given in Tables 2 and 3. Table 2 lists the mean scores and standard deviations of all measurements and ratings for all participants under each condition. Table 3 lists the p-values and statistical significance of the three modules using one-way ANOVA. The results indicate significant differences between the three visual/haptic integration levels in tele-welding tasks.

Objective Measures
The analysis rejected the null hypothesis (H0) that the MRVF visual/haptic HRI approach for intuitive tele-welding, the MRnoVF module, and the 2D baseline module have identical effects on welder performance. In particular, the results show that the MRVF visual/haptic HRI approach significantly outperformed both the 2D baseline and MRnoVF HRI methods on the welding tasks for all pairwise comparisons. Guiding a welding robot using natural welding motion through MR with hybrid guidance/prevention VFs in the MR workspace improved remote welding performance and reduced the effort of novice, unskilled welders. As shown in Figure 10a, the torch alignment times for the welding tasks using the MRVF-integrated visual and haptic tele-welding framework were reduced by 56% and 60% compared to the MRnoVF and baseline cases, respectively, indicating that the typical 2D tele-welding module and the MRnoVF case require additional time to achieve the same capabilities as the proposed MR-integrated visual/haptic HRI module.
Statistical significance was also seen for the average number of collisions between the three HRI modules: F(1.424, 21.353) = 4.091, p < 0.05, partial η² = 0.21. The pairwise comparisons indicated that the mean number of collisions during the welding task was significantly reduced from the baseline (M = 0.50) to the MRnoVF module (M = 0.25) and the MRVF module (M = 0), as shown in Figure 10b. The statistical results demonstrate that following the cone-shaped guidance fixture provided by the MRVF can reduce the welding completion time by minimizing the time used to navigate the torch tip to the initial welding pose. In addition, the prevention VF greatly reduced the likelihood of a collision occurring.
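A cone-shaped guidance fixture of this kind can be sketched as a simple penalty force that pushes the stylus back toward the cone axis whenever the torch tip leaves the admissible region. The geometry and the stiffness gain `k_vf` below are illustrative assumptions, not the paper's actual controller or its parameters.

```python
import numpy as np

def cone_guidance_force(p, apex, axis, half_angle, k_vf=200.0):
    """Restoring force for a conical guidance virtual fixture.

    p: torch-tip position; apex: cone apex at the desired weld start pose;
    axis: direction from the apex into the cone interior; half_angle in
    radians. Returns a spring force pulling the tip back inside the cone
    (zero while the tip is inside the admissible region).
    """
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    v = np.asarray(p, float) - np.asarray(apex, float)
    d = float(np.dot(v, axis))                   # depth along the cone axis
    radial = v - d * axis                        # offset from the axis
    r = np.linalg.norm(radial)
    if d < 0.0:                                  # behind the apex: pull back
        return -k_vf * v
    allowed = np.tan(half_angle) * d             # cone radius at this depth
    if r <= allowed or r == 0.0:                 # inside the cone: free motion
        return np.zeros(3)
    return -k_vf * (r - allowed) * (radial / r)  # spring toward the surface
```

Because the admissible radius shrinks toward the apex, simply following the zero-force region funnels the torch tip to the weld starting pose, which is consistent with the reduced alignment times reported above.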

Subjective Measures
The NASA task load index (NASA-TLX) assessed the cognitive workload. On a scale of 0 to 100, with 100 being the most demanding, the participants rated their qualitative experience of mental demand, physical demand, temporal demand, performance, effort, and frustration after completing each task. Figure 11 shows that all average NASA-TLX scores were lower for the MR-integrated visual and haptic HRI module (MRVF) than for the baseline and MRnoVF cases. The MRVF visual/haptic mapping module significantly reduced the mental and physical demands and effort of the participants; in particular, the mental workload was reduced from the baseline level (M = 80.31).

Figure 11. Subjective NASA-TLX ratings of task workload across all conditions in the tele-welding tasks. The special symbols (circles and asterisks (*)) mark outliers, i.e., observations outside the range indicated by the boxes in the box plot; the numbers next to them identify the corresponding observations in the dataset.

The technology acceptance model (TAM) evaluated the system functionality, usability, and the users' acceptance and perception of the three tele-welding modules. Each scale consisted of three items measured on a seven-point scale (1 = strongly disagree; 7 = strongly agree). The MRVF visual/haptic HRI method (M = 4.19) was reported to be more acceptable than the MRnoVF module (M = 2.69) and the baseline case (M = 2.25) in terms of perceived usefulness, as shown in Figure 12. The TAM results indicate an overall significant difference between the means of the users' appeal across the three HRI modules. The participants found the MR-integrated visual/haptic tele-welding framework (MRVF) (M = 5.50) significantly easier to use than the 2D baseline (M = 2.63) and marginally easier to use than the MRnoVF module (M = 3.13). The subjective measures analysis showed that the MRVF vision/force mapping approach for tele-welding outperformed the MRnoVF and 2D baseline modules in task workload and user perception.

Figure 12. Subjective scores on system functionality, usability, and the users' acceptance and perception of the three tele-welding modules. Higher scores represent higher preferences in all cases. The MRVF design demonstrated improvements in perceived usefulness and perceived ease of use. The special symbols (circles) mark outliers, i.e., observations outside the range indicated by the boxes in the box plot; the numbers next to them identify the corresponding observations in the dataset.
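The outlier markers in Figures 11 and 12 follow the usual box-plot convention. A minimal sketch of the 1.5×IQR (Tukey fence) rule, under the assumption that the plots use this standard criterion (many statistics packages additionally mark extreme outliers beyond 3×IQR with asterisks):

```python
import numpy as np

def tukey_outliers(scores, k=1.5):
    """Indices of observations outside the Tukey fences.

    An observation is flagged if it falls below Q1 - k*IQR or above
    Q3 + k*IQR, where IQR = Q3 - Q1; k=1.5 is the conventional box-plot
    whisker rule.
    """
    scores = np.asarray(scores, dtype=float)
    q1, q3 = np.percentile(scores, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return np.flatnonzero((scores < lo) | (scores > hi))
```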

Limitations
The overall system experiment was conducted at short range; thus, the time lags were relatively small. Research in other studies has indicated that lag between user motion and robot motion causes increasing errors [52,53]. Ongoing work using Markov models and other forecasting methods can address this issue in future work, given the results in our proof-of-concept system.
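The Markov-model forecasting mentioned above is future work; to illustrate the general idea of lag compensation, the sketch below uses a constant-velocity extrapolator, a deliberately simpler stand-in for the authors' planned method. The sampling interval `dt` and lag value are hypothetical.

```python
import numpy as np

def predict_ahead(positions, dt, lag):
    """Extrapolate the latest hand position `lag` seconds into the future.

    positions: (N, 3) history of hand positions sampled every `dt` seconds.
    Uses the most recent finite-difference velocity (constant-velocity
    model); a Markov or learned forecaster would replace this estimate.
    """
    p = np.asarray(positions, dtype=float)
    if len(p) < 2:
        return p[-1]                 # not enough history to estimate velocity
    v = (p[-1] - p[-2]) / dt         # latest velocity estimate
    return p[-1] + v * lag           # linear extrapolation over the lag
```

Commanding the robot toward the predicted pose rather than the last received pose trades a small prediction error for the removal of the round-trip delay, which matters more as teleoperation range grows.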
The MRVF system used relatively low-cost and readily available components; faster or more precise hardware could provide greater accuracy, potentially reducing the improvements seen here. One purpose of this study was to use commercial off-the-shelf products to demonstrate the potential of a relatively low-cost system to achieve tele-welding. Hence, while performance can be improved with better components, doing so also raises the cost, and economic feasibility is application-dependent.
Subject numbers were limited in this study, and future work should replicate this effort with a larger cohort if feasible. However, the relatively small number of unskilled welders produced consistent results; thus, while greater numbers would more accurately quantify the gains obtainable with an MRVF approach, the consistently large differences seen in both the objective and subjective assessments indicate that the results should be replicable. This study aimed to enable inexperienced welders to perform quality remote welding tasks. Unskilled welders do not have frequent contact with a physical welding torch and are not reliant on its weight. In future work, it would be feasible to replace the handheld stylus on the haptic device with an actual welding torch or a 3D-printed torch model of the same weight to improve the professional welder's experience.

Conclusions
This research focused on immersive and intuitive human-robot interaction with visual and haptic cues, specifically the MRVF framework for tele-welding scenarios. The MRVF visual/haptic mapping framework provided the welders with an intuitive approach to controlling the movement of the complex robotic welding system in a manner similar to conventional handheld manual welding, using a single-point grounded haptic robot. The users felt they could access the physical welding scenario from the MR-based operator space, as indicated by the subjective assessments. The MRVF allowed the unskilled, novice welders to rest their suspended torch hands against the VF surface during the robotic welding process, stabilizing the torch hand movements in the X and Z directions. With the integrated visual and haptic perception, the MRVF tele-welding scheme enabled the non-professional welders to achieve remote welding results comparable to those of professional welders both remotely and onsite, reducing the dependence of remote welding on welder experience and specialized skills. The prevention haptic structure enabled in the MRVF module using VFs successfully eliminated collisions that can damage the robot and/or the workpiece. The proposed MRVF visual/haptic framework also enabled professional welders to retain a professional level of operation in the tele-welding process, indicating its intuitive ease of use. Overall, this approach improved the task performance of unskilled, novice welders, increased work efficiency, was intuitive and easy to use, and prevented unwanted collisions.