Article

Grasping Task in Teleoperation: Impact of Virtual Dashboard on Task Quality and Effectiveness †

1 Department of Excellence in Robotics and AI, Sant’Anna School of Advanced Studies, 56127 Pisa, Italy
2 Department of Engineering and Science, Universitas Mercatorum, 00186 Rome, Italy
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in Di Tecco, A.; Camardella, C.; Leonardis, D.; Loconsole, C.; Frisoli, A. Virtual Dashboard Design for Grasping Operations in Teleoperation Systems. In Proceedings of the 2024 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), St. Albans, UK, 21–23 October 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 994–999.
Robotics 2025, 14(7), 92; https://doi.org/10.3390/robotics14070092
Submission received: 28 April 2025 / Revised: 21 June 2025 / Accepted: 27 June 2025 / Published: 30 June 2025
(This article belongs to the Special Issue Extended Reality and AI Empowered Robots)

Abstract

This research study investigates the impact of a virtual dashboard on the quality of task execution in robotic teleoperation. More specifically, it examines how a virtual dashboard improves user awareness and grasp precision in a teleoperated pick-and-place task by providing users with critical information in real time. An experiment was conducted with 30 participants in a robotic teleoperated task to measure their performance in two experimental conditions: a control condition with the conventional interface alone, and an experimental condition in which the virtual dashboard provided additional information. Research findings indicate that integrating a virtual dashboard improves grasping accuracy, reduces user fatigue, and speeds up task completion, thereby improving task effectiveness and the quality of the experience.

1. Introduction

Teleoperation allows a human operator to control robotic systems remotely. It can be used in hazardous environments (e.g., disaster response, bomb defusing, and contaminated sites) [1,2], as well as in remote environments that are difficult or expensive for a trained human operator to reach physically (e.g., inspection and tele-medicine) [3,4]. Traditionally, feedback from a robot’s on-board sensors [5] has been delivered via conventional screens or, more recently, through immersive virtual reality headsets, each with its own trade-offs in terms of latency, field of view, and user comfort [3,6].
One major challenge is the cognitive load imposed on the operator [7]. The complexity and diversity of teleoperation systems were recently highlighted in the ANA Avatar XPrize, where international teams developed advanced robotic avatars to perform a variety of remote tasks. Among the finalists, the number of degrees of freedom (DoFs) varied widely, from 20 DoFs [8] to the 54 DoFs of iCub3, which even included additional DoFs for realistic facial expressions [9]. Some teams incorporated additional articulations for torso and waist mobility, with 2–3 extra DoFs [9,10,11]. These examples reflect the complexity of anthropomorphic systems capable of natural and precise interaction with their environments.
Research has turned to supportive visual interfaces to address both the increasing mechanical complexity of high-DoF avatars and the concomitant cognitive load on operators [12]. In particular, virtual dashboards have emerged as tools for augmenting traditional video feeds with real-time contextual cues—such as object proximity, optimal grasp points, and system status indicators—thereby lightening the cognitive burden and enhancing task precision.
Indeed, interpreting sensory information, often incomplete or inconsistent, and making real-time decisions can lead to fatigue and reduced performance [12]. In addition, the difference between the operator’s real-world experience and their remote environment perception through virtual reality (VR) headsets can also compromise their precision and effectiveness, especially in dexterity-demanding tasks [13]. Vision remains a critical component in these systems. All ANA Avatar XPrize finalists employed stereo cameras to enable stereoscopic 3D feedback. Some teams went further by mounting the cameras on linear actuators to dynamically adjust the stereo baseline and match the operator’s interpupillary distance, thus optimizing vergence and gaze control [9,14]. On the software side, scalable image resolution was used to adapt to varying bandwidth constraints [15], while another team implemented spherical rendering to maintain low-latency head tracking and real-time video streaming [16]. At the operator’s end, VR head-mounted displays were the most common interface, although screen-based setups were also utilized [8]. These setups were sometimes supplemented with auxiliary camera views such as a waist-level camera feed [8].
Furthermore, cybersickness, a common issue with VR headsets, can negatively impact operator performance and limit the extended use of immersive teleoperation systems [17]. Recent works have adopted alternative approaches to reducing cybersickness, including the reconstruction of the remote environment via point clouds and the interactive visualization of the three-dimensional context. Such solutions provide the operator with a coherent spatial map, often viewable on conventional screens or desktop interfaces, thereby avoiding the perceptual misalignments typical of immersive virtual reality. In parallel, decoupled control schemes (e.g., rate control or shared control) have been introduced to mitigate the conflict between the operator's intended movements and the robot's responses, thus reducing the onset of nausea and fatigue. These strategies have proven effective in high-precision contexts and during prolonged sessions, as reported in [13,17]. By providing additional information such as object proximity, orientation, and other relevant data, virtual dashboards, whose effectiveness has also been demonstrated in training [18], can reduce cognitive load and improve the operator's understanding of the remote environment [12]. Moreover, integrating contextual information and visual cues can support decision-making and enhance precision [19]. VR headsets offer a more immersive and intuitive interface, increasing the operator's sense of connection to the remote environment.
Furthermore, supervised machine learning models—deep convolutional neural networks such as YOLO and MobileNet—have become essential components for improving teleoperation systems [7,12,20,21,22]. In our implementation, object recognition and affordance detection are performed using inference from pre-trained deep learning models, which assist operators in identifying and manipulating objects with greater precision.
Indeed, object recognition and distance estimation algorithms can assist operators in identifying and manipulating objects [21]. Moreover, artificial intelligence can personalize the user interface and adapt the system to the operator's specific needs [13]. Techniques such as augmenting the visual display with additional features or incorporating audio-visual feedback within VR are worth studying to improve operator comfort, robot interaction, and, in general, the quality of task execution. Motivated by the above, this paper focuses on the further development and integration of these technologies [23] to assist the operator during teleoperation tasks. The goal is to create more intuitive, efficient, and safe teleoperation systems that allow natural and precise interaction with remote environments [24].
The remainder of this paper is structured as follows. Section 2 describes the methodology, including the teleoperation system, the virtual dashboard, and the experimental procedure. Section 3 presents the experimental results and statistical analysis of task performance. Section 4 discusses the implications of the findings, while Section 5 highlights the limitations of the current study. Finally, Section 6 draws the main conclusions and outlines directions for future work.

2. Methods

The methodology focuses on assessing the effects of a virtual dashboard on conventional performance metrics (i.e., completion time, success rate, and fatigue) and on the proposed Quality of Task (QoT) index. This study was conducted in a supervised experimental environment. Performance metrics include task time, self-reported user fatigue, and task success rate. The virtual dashboard displays contextual data such as object position, distance, and optimal grasping points to enhance decision-making during teleoperation.

2.1. Robotic Teleoperation System

In this study, the experimental setup employed a robotic teleoperation system built around a Universal Robots UR5 arm, designed to execute pick-and-place operations with an anthropomorphic robotic arm under remote control. The system is shown in Figure 1. It operated in position tracking mode, without force feedback at the arm joints. On the operator's side (leader), the vision-based Leap Motion controller by Ultraleap tracked hand movements, while the remote side included the commercially available robotic arm, a custom-designed robotic hand, and a RealSense D455 RGB-D camera. The camera stream is processed by a computer vision algorithm deployed within the Intelligent Processing System (IPS), which obtains the object's bounding box using YOLO11x [12,25,26]. In addition, a network architecture based on a lightweight version of MobileNetV3 has been used to implement the affordance detection algorithm, combined with the segmentation head presented in [22]. When selecting the affordance detection model, we considered other recent methods (LOCATE, WorldAfford, OOAL, UAD). While achieving a similar mIoU (≈0.72 vs. 0.75 for LOCATE), the adopted model counts around 5M parameters and guarantees 30 Hz inference on a CPU. Conversely, the estimated maximum inference rates are 10 Hz on a CPU for LOCATE and 40 Hz for WorldAfford, the latter only on high-end GPUs. OOAL and UAD, while flexible in weakly supervised settings, require dedicated hardware to maintain comparable performance. Hence, the adopted model [22] was selected to prioritize real-time execution, given the requirements of the teleoperation application.
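To make the perception step concrete, the following minimal sketch shows how a single RGB-D frame could be processed to obtain the bounding box of the target object and its distance from the camera, in the spirit of the pipeline described above. It is an illustrative sketch, not the released IPS code: it assumes the ultralytics YOLO API and the pyrealsense2 SDK, and the model file name (yolo11x.pt), the stream resolution, and the target class ("bottle") are placeholders.

# Minimal perception sketch (assumption: ultralytics YOLO and pyrealsense2 APIs;
# "yolo11x.pt" and the target class "bottle" are illustrative placeholders).
import numpy as np
import pyrealsense2 as rs
from ultralytics import YOLO

model = YOLO("yolo11x.pt")  # pre-trained detector, loaded once at startup

# Configure the RealSense D455 for aligned color and depth streams.
pipeline, config = rs.pipeline(), rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)

def detect_target(target_class="bottle"):
    """Return ((x1, y1, x2, y2), distance_m) for the target object, or None."""
    frames = align.process(pipeline.wait_for_frames())
    color, depth = frames.get_color_frame(), frames.get_depth_frame()
    if not color or not depth:
        return None
    image = np.asanyarray(color.get_data())

    result = model(image, verbose=False)[0]            # single-frame inference
    for box in result.boxes:
        if result.names[int(box.cls)] == target_class:
            x1, y1, x2, y2 = map(int, box.xyxy[0].tolist())
            cx, cy = (x1 + x2) // 2, (y1 + y2) // 2     # bounding-box center
            return (x1, y1, x2, y2), depth.get_distance(cx, cy)
    return None

In the full system, such a bounding box feeds the affordance segmentation step and the HUD indicators described in the next subsection (see Figure 1).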

2.2. Virtual Dashboard

Participants performed the teleoperation pick-and-place task by viewing the monitor shown in Figure 2. The monitor presented the video stream from the robot's camera [12], and participants were supported by a virtual dashboard rendered as a Head-Up Display (HUD) during the experimentation.
To develop the HUD, a simplified approach was adopted to simulate a VR headset-like interface. The dashboard was implemented in Python 3.11, chosen for its flexibility and rapid prototyping capabilities and for its integration with the IPS. The graphical interface was built with libraries such as Tkinter and OpenCV, which allowed the real-time visualization of data streams. The dashboard was designed to reduce cognitive load and improve operator accuracy; it is illustrated in Figure 3.
At the top of the screen, the elapsed time from the start of the operation is displayed (e.g., 01:26), together with a percentage indicator representing the estimated probability of a successful grasp (93% in the figure). Under the time, an Ethernet-style icon shows the state of the connection between the UR5 robot and the PC over the router's Local Area Network (LAN). A semi-circular bar with a circle at each end, labeled Distance, represents the real-time distance between the end-effector and the object as a percentage; when the circle on the right turns blue, the object has been grasped. These data allow users to monitor the progress of the activity in real time. A bounding box at the center of the interface highlights the object to be grasped (a red water bottle), providing a clear and precise visual reference. The bounding box is green, meaning the estimated probability of grasping the object exceeds 90%. The system takes into account the position and orientation of the robotic hand with respect to the object, updating the hand-object distance in real time.
At the bottom left of the screen, an interactive radar screen, or minimap in gaming terms, shows a portion of the workbench with a cone of vision from a top view centered on the hand. In this minimap, the yellow circle represents the robot hand, whereas the green circle represents the object that can be grasped. The pose of the robot hand is also displayed at the bottom right of the HUD. Button options such as Audio, Detect Object, AI Grasping Support (Support), and Warning Alerts can be activated or deactivated by the operator via the terminal PC; in the figure, three options are active and highlighted in yellow, namely Audio, Detect Object, and Support, whereas Warning Alerts is deactivated. The indicators shown on the HUD, such as elapsed time, grasp probability, hand-object distance, and status icons, were updated at 30 Hz during task execution, also taking into account the UR5 packet flow.
The virtual dashboard frame was designed with a blue-based color scheme to evoke a sense of calmness and trust in the operator. Blue is often associated with tranquility and reliability, making it an appropriate choice for reducing cognitive load during complex tasks: empirical studies in color psychology have associated blue tones with reduced physiological arousal and improved cognitive performance in demanding environments [27,28,29]. Although no formal comparison with other color schemes was conducted, the interface design was iteratively refined through informal pilot sessions with internal lab participants to ensure clarity, comfort, and non-intrusiveness. The choice was also inspired by interface conventions from popular gaming HUDs, such as Halo, Call of Duty, and Diablo, where blue is frequently used to present critical information in a perceptually stable and cognitively unobtrusive way. Overall, the selected color palette aimed to create a soothing digital environment conducive to focused decision-making under operational pressure.
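As an illustration of how such an overlay can be composed, the short sketch below draws a few of the HUD elements described above (elapsed time, grasp probability, target bounding box, and distance bar) on top of a camera frame with OpenCV. It is a simplified, hypothetical example rather than the dashboard's actual source code; the helper name draw_hud, the colors, and the layout are placeholders.

# Simplified HUD-overlay sketch with OpenCV (illustrative only; the real dashboard
# combines Tkinter and OpenCV and includes further widgets such as the minimap).
import time
import cv2
import numpy as np

START_TIME = time.time()
BLUE, GREEN = (255, 160, 0), (0, 200, 0)   # BGR colors, placeholder palette

def draw_hud(frame, bbox, grasp_prob, distance_ratio):
    """Overlay elapsed time, grasp probability, bounding box, and distance bar."""
    h, w = frame.shape[:2]

    # Elapsed time (top left) and grasp probability (top right).
    elapsed = time.strftime("%M:%S", time.gmtime(time.time() - START_TIME))
    cv2.putText(frame, elapsed, (20, 40), cv2.FONT_HERSHEY_SIMPLEX, 1.0, BLUE, 2)
    cv2.putText(frame, f"{grasp_prob:.0%}", (w - 140, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, BLUE, 2)

    # Target bounding box: green when the grasp probability exceeds 90%.
    x1, y1, x2, y2 = bbox
    cv2.rectangle(frame, (x1, y1), (x2, y2), GREEN if grasp_prob > 0.9 else BLUE, 2)

    # Distance bar (bottom): the filled portion shrinks as the hand approaches.
    bar_w = int((w - 40) * max(0.0, min(1.0, distance_ratio)))
    cv2.rectangle(frame, (20, h - 30), (20 + bar_w, h - 15), BLUE, -1)
    return frame

# Example usage with a synthetic frame and placeholder detection values.
if __name__ == "__main__":
    canvas = np.zeros((480, 640, 3), dtype=np.uint8)
    cv2.imshow("HUD", draw_hud(canvas, (250, 150, 390, 400), 0.93, 0.35))
    cv2.waitKey(0)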

2.3. Experimental Procedure

Thirty participants (2 female, 28 male; age 30.13 ± 5.20 years) from the PECRO Laboratory were engaged in this study and provided written consent to participate, in accordance with the approval of the study by the Joint Ethics Committee of the Scuola Normale Superiore and Sant'Anna School of Advanced Studies (Aut. Prot. No. 62/2024 of 16 January 2025). This sample size was considered sufficient for the planned statistical analysis.
Participants had a technical or scientific background due to their affiliation with the PECRO Laboratory, but none had prior hands-on experience with robotic teleoperation systems or advanced VR devices featuring a complex virtual dashboard. To reduce performance bias due to differing familiarity, a standardized Warm-up phase was included, allowing each participant to become familiar with the devices and the teleoperation interface before beginning the actual experimental tasks.
The experimental procedure was divided into five phases: Welcome, Learning, Warm-up, Experiment, and Conclusion, as illustrated in Figure 4. During the Welcome phase, the participant was welcomed and the experimental activity was introduced (3 min). The Learning phase followed (2 min): the participant learned that the activity consisted of moving the robot's arm and hand with the Leap Motion controller to pick and place an object (the object of interest) while viewing the workspace through a monitor display at a distance of 1.5 m. Next came the Warm-up phase, in which the participant was trained to control the robot arm and hand to pick and place the object from point A to point B on the workbench in front of the robot (3–5 min). The distance between points A and B was 35 cm, and both points were marked on the workbench during the experimentation. For simplicity, the object was always a bottle 24 cm high and 5 cm in diameter. The Experiment phase was then carried out in two sessions of 10 repetitions each, for a total of 20 repetitions. One session provided virtual dashboard assistance, whereas the other did not. The order of the two sessions was randomized across participants to reduce the risk of learning effects or order-related bias, ensuring that performance improvements were attributable to the dashboard itself rather than to task repetition or increased familiarity. The time to complete each task and the success of each pick-and-place repetition were recorded. In addition, at the end of each repetition participants answered the question: “What was your perceived level of fatigue on a scale of 1 (lowest) to 5 (highest)?”. Each session lasted on average about 10 min, including pauses and tasks. Finally, during the Conclusion phase, the participant was thanked and the next participant was invited. In total, the experiment took around 30 min per participant.

2.4. Data Collection and Performance Analysis

Data recorded during the experimental sessions are the following:
1.
Completion Time (T): Elapsed time from the onset of arm motion to the placement of the object, recorded also for failed attempts.
2.
Level of Fatigue (F): Self-reported on a 1–5 Likert scale after each repetition.
3.
Success Rate (S): Binary flag per repetition; 1 indicates a successful grasp, 0 a failure.
The time, fatigue, and success rate distributions without and with the use of the Virtual Dashboard (VD) were analyzed statistically to test the following hypotheses:
1.
$H_1$ (Efficiency): The VD reduces the average completion time: $E[T_{\text{VD}}] < E[T_{\text{NoVD}}]$.
2.
$H_2$ (Fatigue): Participants report lower fatigue when assisted: $E[F_{\text{VD}}] < E[F_{\text{NoVD}}]$.
3.
$H_3$ (Reliability): Use of the VD increases the success rate: $E[S_{\text{VD}}] > E[S_{\text{NoVD}}]$.
The Shapiro–Wilk (SW) test, with a significance level $\alpha = 0.005$, was used to verify whether each data distribution is normal; the same significance level was used as the threshold for statistical significance in all tests. The SW null hypothesis was the following:
$H_0$: the sample comes from a normal distribution.
If $H_0$ is not rejected, the paired distributions are compared with the Paired Student t-test; otherwise, the Wilcoxon Signed-Rank (WSR) test, which is more appropriate for non-normal distributions, is used. To quantify the overall effect on operator performance in teleoperated grasping, the Quality of Task (QoT) index is proposed, defined as follows:
$$QoT = \frac{S}{T \times F},$$
where $S$ is the success rate, i.e., the mean percentage of correctly completed tasks (pick-and-place in this study), $T$ is the mean completion time, and $F$ is the mean fatigue reported by the participant. This index is useful because it captures reliability, efficiency, and operator effort in a single scalar [30,31,32], allowing a holistic comparison between dashboard conditions. In detail, $QoT_{\text{VD},i}$ and $QoT_{\text{NoVD},i}$ are computed for each participant $i$, with $i \in \{1, \dots, 30\}$, to compare the two experimental sessions; $QoT_{\text{VD}}$ and $QoT_{\text{NoVD}}$ denote the corresponding sets of values. The SW test is also applied to $QoT_{\text{VD}}$ and $QoT_{\text{NoVD}}$ to check for normality. If the SW null hypothesis is not rejected, the t statistic is computed as follows:
$$t = \frac{\overline{QoT_{\text{VD}}} - \overline{QoT_{\text{NoVD}}}}{s_p \sqrt{2/K}},$$
where $\overline{QoT_{\text{VD}}}$ and $\overline{QoT_{\text{NoVD}}}$ are the means of $QoT_{\text{VD}}$ and $QoT_{\text{NoVD}}$, respectively, $K$ is the number of participants in the experimentation (i.e., 30), and $s_p$ is the pooled standard deviation of the measurements, calculated as
$$s_p = \sqrt{\frac{(n-1)\,\sigma_{\text{VD}}^2 + (m-1)\,\sigma_{\text{NoVD}}^2}{n+m-2}},$$
where $\sigma_{\text{VD}}$ and $\sigma_{\text{NoVD}}$ are the standard deviations of $QoT_{\text{VD}}$ and $QoT_{\text{NoVD}}$, respectively, and $n$ and $m$ are the sample sizes, both equal to the number of participants (30).
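A compact sketch of this analysis pipeline is given below: it computes the per-participant QoT values from the recorded metrics and applies the Shapiro–Wilk test followed by either the paired Student t-test or the Wilcoxon signed-rank test, as described above. It is a hedged illustration based on SciPy; the synthetic arrays stand in for the study's actual per-participant data.

# Sketch of the statistical analysis with SciPy (synthetic placeholder data).
import numpy as np
from scipy import stats

ALPHA = 0.005  # significance level used throughout the study

def qot(success_rate, mean_time, mean_fatigue):
    """Quality of Task index, QoT = S / (T * F), computed per participant."""
    return success_rate / (mean_time * mean_fatigue)

def compare_paired(x, y, alpha=ALPHA):
    """Shapiro-Wilk on both samples, then paired t-test or Wilcoxon signed-rank."""
    normal = stats.shapiro(x).pvalue > alpha and stats.shapiro(y).pvalue > alpha
    res = stats.ttest_rel(x, y) if normal else stats.wilcoxon(x, y)
    name = "paired t-test" if normal else "Wilcoxon signed-rank"
    return name, res.statistic, res.pvalue

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder per-participant metrics for the two sessions (30 participants).
    qot_vd = qot(rng.normal(92, 5, 30), rng.normal(25, 4, 30), rng.normal(1.7, 0.3, 30))
    qot_novd = qot(rng.normal(80, 7, 30), rng.normal(32, 4, 30), rng.normal(3.9, 0.4, 30))
    name, stat, p = compare_paired(qot_vd, qot_novd)
    print(f"{name}: statistic = {stat:.3f}, p = {p:.5f}, significant = {p < ALPHA}")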

3. Results

The experiment aimed to validate the effectiveness of the virtual dashboard in a teleoperation scenario using the Quality of Task (QoT) index. To this end, mean task execution time, operator fatigue, and task success rate were compared with and without the virtual dashboard.

3.1. Analysis of Time

The time analysis measured the mean task time, considering both failures and successes in completing the task with and without support from the virtual dashboard. Participants completed tasks in an average of 25 s with a variance of 16.59 when supported by the dashboard, compared to 32 s with a variance of 19.86 without it, suggesting a significant efficiency improvement and moderately lower variability. These results are shown in Figure 5a.

3.2. Analysis of Fatigue

Fatigue was evaluated considering the subjective ratings provided by operators in answer to the question “What was your perceived level of fatigue on a scale of 1 (lowest) to 5 (highest)?”. Operators rated their perceived fatigue at the end of each repetition, and the mean fatigue of each session was then computed and used for the analysis. The average fatigue rating decreased from 3.86 without the dashboard to 1.65 with the dashboard, as illustrated in Figure 5b. Thus, the virtual dashboard might help operators reduce mental and physical fatigue by minimizing cognitive load and optimizing task performance.

3.3. Analysis of Success Rate

The mean success rates of participants’ pick-and-place tasks were used to compare the performance of the sessions. With the virtual dashboard, the success rate increased from 80% to 92%, suggesting an improvement in grasping reliability and efficiency, as shown in Figure 5c. This improvement could be attributed to the dashboard’s real-time support, which facilitated accurate object detection and grasping.

3.4. Quality of Task (QoT)

The time, fatigue, and success rate distributions without and with the use of the Virtual Dashboard (VD) were analyzed statistically using the Shapiro–Wilk (SW) test. The SW null hypothesis ($H_0$) was not rejected for the time and fatigue variables, so these were treated as normally distributed and further tested with the Paired Student t-test. On the contrary, $H_0$ was rejected for the success rate distributions, which were therefore analyzed with the Wilcoxon Signed-Rank (WSR) test. Both the Student t-test and the WSR test rejected their respective null hypotheses, indicating that the distributions differed between conditions. Thus, hypotheses $H_1$, $H_2$, and $H_3$ (see Section 2.4), corresponding to efficiency, fatigue, and reliability, were all satisfied.
The Quality of Task (QoT) index was proposed to quantify reliability, efficiency, and operator effort in teleoperated grasping as a single metric. The QoT index was computed with VD support ($QoT_{\text{VD}}$) and without it ($QoT_{\text{NoVD}}$) for each participant. The SW test was applied, and $H_0$ was not rejected for the $QoT_{\text{VD}}$ and $QoT_{\text{NoVD}}$ variables; the Student t-test was therefore used. The t-test rejected the null hypothesis ($QoT_{\text{VD}} \approx 3.54$, $QoT_{\text{NoVD}} \approx 2.27$, $p < 0.005$), indicating that the difference in performance between the two experimental sessions was significant, as also illustrated in Figure 6. This improvement of the QoT index in the VD condition is in agreement with the significant improvements measured in all the conventional metrics (time, success rate, and level of fatigue) that contribute to the QoT index.
These results are important because they demonstrate that the HUD interface of the virtual dashboard improves grasping quality while reducing the cognitive load and the fatigue perceived by the operator. Viewing real-time information allows operators to perform more precise and efficient grasps, confirming the virtual dashboard's effectiveness in supporting the operator in robotic teleoperation tasks. Regarding the proposed QoT metric, it is important to note that it combines different types of conventional metrics in an arbitrary multiplicative and divisive relation and lacks dimensional homogeneity. Although in the conducted experiments its trend agrees with that of each conventional metric, these properties may limit its interpretability and cross-context comparability. The formulation appears useful as a scalar trade-off between reliability, speed, and operator burden, yet additional experimental setups and scenarios should be evaluated to further support its general application in teleoperation.

4. Discussion

The experimental results demonstrate the advantages of integrating a virtual dashboard in teleoperation systems. The dashboard's real-time feedback and visual guidance contributed to more accurate and efficient grasping. Specifically, the average task completion time was reduced by approximately 22%, decreasing from 32 s without the dashboard to 25 s with it. Additionally, the average self-reported fatigue score dropped from 3.86 to 1.65, indicating a 57% reduction in perceived operator effort. The task success rate also increased significantly, improving from 80% to 92%, demonstrating enhanced grasp reliability and performance with the support of the virtual dashboard. Moreover, by minimizing cognitive load, the dashboard enabled users to focus on task precision and control, leading to a more intuitive and effective manipulation experience. The time analysis revealed that the dashboard's assistance reduced task time, suggesting that the visual support accelerates decision-making and improves reaction time. In addition, the reduced fatigue levels reported by participants highlight the dashboard's potential in prolonged tasks, where cognitive and physical effort are more critical to performance. The increased success rate in pick-and-place operations and the improved QoT confirm the dashboard's value in improving operator awareness and control.
In addition to the quantitative results, participant feedback was collected during the Conclusion phase to assess the usability and intuitiveness of the virtual dashboard. Most participants highlighted the grasp probability indicator and the distance bar as particularly useful for coordinating the grasping action. Several also appreciated the clarity of the top-down radar (minimap), which improved spatial understanding of the robot's position. The overall interface layout was generally perceived as intuitive and non-intrusive. However, a few users reported confusion regarding the function of certain icons, such as the Ethernet connectivity indicator, suggesting that a brief onboarding or a legend may improve the user experience in future iterations. Overall, this study demonstrates that integrating a virtual dashboard can be a transformative approach in teleoperation, offering benefits in precision, efficiency, and user experience. Future research should explore adaptive interfaces, multimodal feedback, and more complex manipulations in working environments to further validate these findings.

5. Limitations

A limitation of this study is the gender imbalance in our sample (2 female and 28 male participants, i.e., only about 7% female). Although the sample size was statistically adequate, this demographic imbalance may have influenced the results, favoring behavioral and ergonomic characteristics typically associated with male operators. Previous studies in Human-Robot Interaction (HRI) contexts have shown that variables such as hand size, spatial reasoning strategies, and perception of cognitive load can differ between genders [33,34]. Therefore, future investigations should aim for a more balanced sample to assess whether the results related to performance, perceived effort, and dashboard usability vary significantly between different demographic groups.
Another limitation is the use of a single cylindrical object (the bottle) rather than a standardized set, such as the YCB [35], commonly used to benchmark robotic manipulation tasks. This choice ensured experimental rigor in focusing on the effectiveness of the interface, but it limits the scope of the results with respect to scenarios with objects of more heterogeneous shapes, sizes, and materials. In industrial and service applications, grasping tasks often require non-trivial path planning, obstacle avoidance, and adaptation to dynamic environments. Future studies will integrate standardized objects and more complex environments to validate the generalization of the dashboard.
The third limitation is that the current experimental setup is optimized for small objects. Extending the system to larger objects and workspaces will require (1) a synchronized RGB-D sensor network or a mobile camera mounted on the robotic arm to cover a larger area; (2) real-time calibration procedures and spatial alignment techniques to maintain accuracy over different depths and object sizes; and (3) adaptive viewpoint selection and field-of-view optimization algorithms to ensure reliability in large scenarios. Such extensions will be essential for large-scale industrial or warehouse deployments.
Finally, another limitation concerns the Leap Motion controller used for hand tracking. Although it offers high-frequency, markerless tracking with acceptable accuracy in controlled conditions [36], it constrains the allowed hand workspace, owing to the limits of the tracked volume and to occlusions of the finger poses. On the other hand, within these limits, the solution shows performance consistent with other hand tracking systems [37]. In the experimental protocol, the workspace was kept within the optimal zone and the task did not involve hand rotations, thus reducing the impact of these issues. Hybrid systems (e.g., integrating IMU and visual data) or alternative controllers could be explored in the future to increase robustness.

6. Conclusions

This research study presented the Quality of Task (QoT) index to assess the benefit of virtual dashboards in robotic teleoperation tasks. The index combines different metrics, namely mean task time, operator fatigue, and the rate of completed tasks. The QoT index was measured both when operators were supported by the virtual dashboard and when they were not, and the resulting indexes were compared. The statistical analysis with the Student t-test revealed a significant difference between the two conditions, indicating that the virtual dashboard is a powerful tool for teleoperation tasks. The dashboard offers real-time visual guidance, an intuitive interface, and spatial awareness tools that create a more immersive and efficient user experience for the operator. The research results highlight a reduced mean task time, lower mean operator fatigue, and an improved mean success rate in completing tasks. However, the study also identifies potential areas for virtual dashboard tuning and future development, considering various challenges regarding network conditions (low bandwidth, high packet traffic, and so forth), different objects to grasp, and integration into virtual reality (VR) headsets. Virtual dashboards may also be adapted based on user expertise and task complexity, thereby enhancing usability across a broader range of work environments. Multimodal feedback may also be introduced, including haptic feedback and physiological operator information. Furthermore, developing machine learning algorithms for predictive assistance, fatigue estimation, and task optimization can provide more support for operators and/or more autonomous teleoperation systems. In conclusion, this research study presented the advantages of using virtual dashboards in teleoperation tasks and defined future targets for remote manipulation using VR headsets.

Author Contributions

Conceptualization, A.D.T. and C.L.; methodology, A.D.T., D.L. and C.L.; software, A.D.T.; validation, A.D.T.; formal analysis, A.D.T.; investigation, A.D.T.; resources, A.D.T.; data curation, A.D.T.; writing—original draft preparation, A.D.T.; writing—review and editing, A.D.T. and D.L.; visualization, A.D.T.; supervision, A.D.T., D.L., C.L. and A.F.; project administration, A.D.T., D.L., C.L. and A.F.; funding acquisition, C.L. and A.F. All authors have read and agreed to the published version of the manuscript.

Funding

This project has been funded under the National Recovery and Resilience Plan (NRRP), Mission 4 Component 2 Investment 1.1—Call for tender No. 104 published on 2 February 2022, by the Italian Ministry of University and Research (MUR), funded by the European Union–NextGenerationEU—Project Title “AVATAR: Enhanced AI-enabled Avatar Robot for Remote Telepresence”—CUP J53D23000860006, D53D23001490008—Grant Assignment Decree No. 960 adopted on 30 June 2023 by the Italian MUR.

Institutional Review Board Statement

This study involved humans in its research. The Joint Bioethical Committee of Scuola Normale Superiore and Sant’Anna School of Advanced Studies approved all ethical and experimental procedures and protocols (Authorization Prot. No. 62/2024 on 16 January 2025).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented and used in this study is contained in the RobotSense25-DashGrasp Dataset available on Zenodo: https://zenodo.org/records/15272120 (accessed on 26 June 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Klamt, T.; Rodriguez, D.; Baccelliere, L.; Chen, X.; Chiaradia, D.; Cichon, T.; Gabardi, M.; Guria, P.; Holmquist, K.; Kamedula, M.; et al. Flexible disaster response of tomorrow: Final presentation and evaluation of the centauro system. IEEE Robot. Autom. Mag. 2019, 26, 59–72. [Google Scholar] [CrossRef]
  2. Xiao, C.; Woeppel, A.B.; Clepper, G.M.; Gao, S.; Xu, S.; Rueschen, J.F.; Kruse, D.; Wu, W.; Tan, H.Z.; Low, T.; et al. Tactile and chemical sensing with haptic feedback for a telepresence explosive ordnance disposal robot. IEEE Trans. Robot. 2023, 39, 3368–3381. [Google Scholar] [CrossRef]
  3. Draper, J.V. Teleoperators for Advanced Manufacturing: Applications and Human Factors Challenges. Int. J. Hum. Factors Manuf. 1995, 5, 53–85. [Google Scholar] [CrossRef]
  4. Wang, J.; Peng, C.; Zhao, Y.; Ye, R.; Hong, J.; Huang, H.; Chen, L. Application of a Robotic Tele-Echography System for COVID-19 Pneumonia. J. Ultrasound Med. 2021, 40, 385–390. [Google Scholar] [CrossRef] [PubMed]
  5. Liu, A.; Zhang, J.; Yang, Y.; Wang, T.; Cheng, Y.; Song, Y.; Wang, K. Design and Research of CFETR Peg Hole Assembly Virtual Platform. Fusion Eng. Des. 2023, 191, 113594. [Google Scholar] [CrossRef]
  6. Coelho, A.; Singh, H.; Kondak, K.; Ott, C. Whole-Body Bilateral Teleoperation of a Redundant Aerial Manipulator. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 9150–9156. [Google Scholar] [CrossRef]
  7. Shao, S.; Zhou, Q.; Liu, Z. Mental Workload Characteristics of Manipulator Teleoperators with Different Spatial Cognitive Abilities. Int. J. Adv. Robot. Syst. 2019, 16, 1729881419888042. [Google Scholar] [CrossRef]
  8. Luo, R.; Wang, C.; Keil, C.; Nguyen, D.; Mayne, H.; Alt, S.; Schwarm, E.; Mendoza, E.; Padır, T.; Whitney, J.P. Team Northeastern’s Approach to ANA XPRIZE Avatar Final Testing: A Holistic Approach to Telepresence and Lessons Learned. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 1–5 October 2023; pp. 7054–7060. [Google Scholar] [CrossRef]
  9. Dafarra, S.; Pattacini, U.; Romualdi, G.; Rapetti, L.; Grieco, R.; Darvish, K.; Milani, G.; Valli, E.; Sorrentino, I.; Viceconte, P.M.; et al. icub3 Avatar System: Enabling Remote Fully Immersive Embodiment of Humanoid Robots. Sci. Robot. 2024, 9, eadh3834. [Google Scholar] [CrossRef]
  10. Park, B.; Jung, J.; Sim, J.; Kim, S.; Ahn, J.; Lim, D.; Kim, D.; Kim, M.; Park, S.; Sung, E.; et al. Team SNU’s Avatar System for Teleoperation using Humanoid Robot: ANA Avatar XPRIZE Competition. In Proceedings of the Workshop on Towards Robot Avatars: Perspectives on the ANA Avatar XPRIZE Competition. Robotics: Science and Systems, New York, NY, USA, 27 June–1 July 2022. [Google Scholar]
  11. Park, S.; Kim, J.; Lee, H.; Jo, M.; Gong, D.; Ju, D.; Won, D.; Kim, S.; Oh, J.; Jang, H.; et al. A Whole-Body Integrated AVATAR System: Implementation of Telepresence With Intuitive Control and Immersive Feedback. IEEE Robot. Autom. Mag. 2023, 32, 60–68. [Google Scholar] [CrossRef]
  12. Di Tecco, A.; Camardella, C.; Leonardis, D.; Loconsole, C.; Frisoli, A. Virtual Dashboard Design for Grasping Operations in Teleoperation Systems. In Proceedings of the 2024 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), London, UK, 21–23 October 2024; pp. 994–999. [Google Scholar] [CrossRef]
  13. Di Tecco, A.; Foglia, P.; Prete, C.A. Video Quality Prediction: An Exploratory Study with Valence and Arousal Signals. IEEE Access 2024, 12, 36558–36576. [Google Scholar] [CrossRef]
  14. Marques, J.M.; Peng, J.C.; Naughton, P.; Zhu, Y.; Nam, J.S.; Hauser, K. Commodity Telepresence with Team AVATRINA’s Nursebot in the ANA Avatar XPRIZE Finals. In Proceedings of the 2nd Workshop Toward Robot Avatars, IEEE International Conference on Robotics and Automation (ICRA), London, UK, 2 June 2023; pp. 1–3, ISBN 979-8-3503-2366-5. [Google Scholar]
  15. Van Erp, J.B.; Sallaberry, C.; Brekelmans, C.; Dresscher, D.; Ter Haar, F.; Englebienne, G.; Van Bruggen, J.; De Greeff, J.; Pereira, L.F.S.; Toet, A.; et al. What Comes After Telepresence? Embodiment, Social Presence and Transporting One’s Functional and Social Self. In Proceedings of the 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Prague, Czech Republic, 9–12 October 2022; pp. 2067–2072. [Google Scholar] [CrossRef]
  16. Schwarz, M.; Lenz, C.; Rochow, A.; Schreiber, M.; Behnke, S. Nimbro Avatar: Interactive Immersive Telepresence with Force-Feedback Telemanipulation. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 5312–5319. [Google Scholar] [CrossRef]
  17. Darvish, K.; Penco, L.; Ramos, J.; Cisneros, R.; Pratt, J.; Yoshida, E.; Ivaldi, S.; Pucci, D. Teleoperation of Humanoid Robots: A Survey. IEEE Trans. Robot. 2023, 39, 1706–1727. [Google Scholar] [CrossRef]
  18. Semeraro, F.; Frisoli, A.; Ristagno, G.; Loconsole, C.; Marchetti, L.; Scapigliati, A.; Pellis, T.; Grieco, N.; Cerchiari, E.L. Relive: A Serious Game to Learn How to Save Lives. Resuscitation 2014, 85, e109–e110. [Google Scholar] [CrossRef] [PubMed]
  19. Chang, E.; Kim, H.T.; Yoo, B. Virtual Reality Sickness: A Review of Causes and Measurements. Int. J. Hum.-Comput. Interact. 2020, 36, 1658–1682. [Google Scholar] [CrossRef]
  20. Adawy, M.; Abualese, H.; El-Omari, N.K.T.; Alawadhi, A. Human-Robot Interaction (HRI) using Machine Learning (ML): A Survey and Taxonomy. Int. J. Adv. Soft Comput. Its Appl. 2024, 16, 183–213. [Google Scholar] [CrossRef]
  21. Pan, Y.; Chen, C.; Li, D.; Zhao, Z.; Hong, J. Augmented Reality-based Robot Teleoperation System using RGB-D Imaging and Attitude Teaching Device. Robot. Comput.-Integr. Manuf. 2021, 71, 102167. [Google Scholar] [CrossRef]
  22. Lugani, S.; Ragusa, E.; Zunino, R.; Gastaldo, P. Lightweight Neural Networks for Affordance Segmentation: Enhancement of the Decoder Module. In Proceedings of the International Conference on Applications in Electronics Pervading Industry, Environment and Society, Genoa, Italy, 28–29 September 2023; Springer Nature: Cham, Switzerland, 2023; pp. 437–443. [Google Scholar] [CrossRef]
  23. Luo, J.; He, W.; Yang, C. Combined Perception, Control, and Learning for Teleoperation: Key Technologies, Applications, and Challenges. Cogn. Comput. Syst. 2020, 2, 33–43. [Google Scholar] [CrossRef]
  24. Connette, C.; Arbeiter, G.; Meßmer, F.; Haegele, M.; Verl, A.; Notheis, S.; Mende, M.; Hein, B.; Wörn, H. Efficient Monitoring of Process Plants by Telepresence and Attention Guidance. In Proceedings of the ROBOTIK 2012: 7th German Conference on Robotics. Information Technology Society within VDE (ITG), Munich, Germany, 21–22 May 2012; pp. 1–4, ISBN 978-3-8007-3418-4. [Google Scholar]
  25. Di Tecco, A. Intelligent Processing System (IPS). Zenodo. 2025. Available online: https://zenodo.org/records/14953938 (accessed on 26 June 2025).
  26. Jocher, G.; Qiu, J.; Chaurasia, A. Ultralytics YOLOv11. GitHub. 2025. Available online: https://github.com/ultralytics/ultralytics (accessed on 26 June 2025).
  27. Mikellides, B. Color and Physiological Arousal. J. Archit. Plan. Res. 1990, 7, 13–20. [Google Scholar]
  28. Yamin, P.A.; Park, J.; Kim, H.K. In-vehicle human–machine interface guidelines for augmented reality head-up displays: A review, guideline formulation, and future research directions. Transp. Res. Part F: Traffic Psychol. Behav. 2024, 104, 266–285. [Google Scholar] [CrossRef]
  29. Skirnewskaja, J.; Wilkinson, T.D. Automotive Holographic Head-Up Displays. Adv. Mater. 2022, 34, 2110463. [Google Scholar] [CrossRef]
  30. Seo, M.; Gupta, S.; Ham, Y. Evaluation of Work Performance, Task Load, and Behavior Changes on Time-Delayed Teleoperation Tasks in Space Construction. In Proceedings of the Construction Research Congress 2024, Des Moines, IA, USA, 20–23 March 2024; pp. 89–98. [Google Scholar]
  31. Yang, E.; Dorneich, M. The Emotional, Cognitive, Physiological, and Performance Effects of Variable Time Delay in Robotic Teleoperation. Int. J. Soc. Robot. 2017, 9, 491–508. [Google Scholar] [CrossRef]
  32. Pearce, M.; Mutlu, B.; Shah, J.; Radwin, R. Optimizing Makespan and Ergonomics in Integrating Collaborative Robots Into Manufacturing Processes. IEEE Trans. Autom. Sci. Eng. 2018, 15, 1772–1784. [Google Scholar] [CrossRef]
  33. Aitsam, M.; Lacroix, D.; Goyal, G.; Bartolozzi, C.; Di Nuovo, A. Measuring Cognitive Load Through Event Camera Based Human-Pose Estimation. In Proceedings of the Human-Friendly Robotics 2024; Paolillo, A., Giusti, A., Abbate, G., Eds.; Springer: Cham, Switzerland, 2025; pp. 229–239. [Google Scholar]
  34. Winkle, K.; Lagerstedt, E.; Torre, I.; Offenwanger, A. 15 Years of (Who)man Robot Interaction: Reviewing the H in Human-Robot Interaction. ACM Trans. Hum.-Robot Interact. 2022, 12, 1–28. [Google Scholar] [CrossRef]
  35. Calli, B.; Singh, A.; Walsman, A.; Srinivasa, S.; Abbeel, P.; Dollar, A.M. The ycb object and model set: Towards common benchmarks for manipulation research. In Proceedings of the 2015 International Conference on Advanced Robotics (ICAR), Istanbul, Turkey, 27–31 July 2015; pp. 510–517. [Google Scholar]
  36. Kim, Y.; Kim, P.C.; Selle, R.; Shademan, A.; Krieger, A. Experimental evaluation of contact-less hand tracking systems for tele-operation of surgical tasks. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 3502–3509. [Google Scholar]
  37. Mizera, C.; Delrieu, T.; Weistroffer, V.; Andriot, C.; Decatoire, A.; Gazeau, J.P. Evaluation of hand-tracking systems in teleoperation and virtual dexterous manipulation. IEEE Sensors J. 2019, 20, 1642–1655. [Google Scholar] [CrossRef]
Figure 1. Camera view of the teleoperated scene without dashboard support (left) and the output of the YOLOv11x-based object detection. The green background (right) highlights the affordance region identified for grasping, derived from the bounding box of the detected object (red bottle) by using the MobileNetV3.
Figure 2. Experimental setup scenario with a person behind the robot (a) and on the side (b), while looking at the virtual dashboard on the monitor display during a robotic teleoperation pick-and-place task. In addition, the diagram in (c) illustrates the experimental scenario, showing the relative positions of the camera device, the UR5 robotic arm, the object on the table, and the operator’s viewpoint.
Figure 3. Virtual dashboard used during the task execution on full screen.
Figure 4. Experimental procedure with its phases.
Figure 5. Comparison of performance metrics with and without the virtual dashboard: (a) Mean task time, (b) Operator’s self-reported fatigue (Likert scale 1–5), and (c) Success rate of pick-and-place tasks. The asterisk indicates statistical significance at p < 0.005 .
Figure 6. Comparison of Quality of Task index without and with using the virtual dashboard. The asterisk indicates statistical significance at p < 0.005 .