Article

A User-Centered Teleoperation GUI for Automated Vehicles: Identifying and Evaluating Information Requirements for Remote Driving and Assistance

by Maria-Magdalena Wolf *, Henrik Schmidt, Michael Christl, Jana Fank and Frank Diermeyer
Institute of Automotive Technology, Technical University of Munich (TUM), Boltzmannstr. 15, DE-85748 Garching b. München, Germany
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2025, 9(8), 78; https://doi.org/10.3390/mti9080078
Submission received: 27 June 2025 / Revised: 20 July 2025 / Accepted: 21 July 2025 / Published: 31 July 2025

Abstract

Teleoperation has emerged as a promising fallback for situations beyond the capabilities of automated vehicles. Nevertheless, teleoperation still faces challenges, such as reduced situational awareness. Since situational awareness is primarily built through the remote operator’s visual perception, the graphical user interface (GUI) design is critical. In addition to the video feed, supplemental informational elements are crucial, not only for the predominantly studied remote driving but also for emerging desk-based remote assistance concepts. This work develops a GUI for different teleoperation concepts by identifying key informational elements of the teleoperation process through expert interviews (N = 9). Following this, a static and a dynamic GUI prototype were developed and evaluated in a click dummy study (N = 36). The dynamic GUI adapts the number of displayed elements to the current teleoperation phase. Results show that both GUIs achieve good system usability scale (SUS) ratings, with the dynamic GUI significantly outperforming the static version in both usability and task completion time. However, these differences might be attributable to a learning effect due to the lack of randomization. The user experience questionnaire (UEQ) score shows potential for improvement. To enhance the user experience, the GUI should be evaluated in a follow-up study that includes interaction with a real vehicle.

1. Introduction

As of 2025, driverless cars remain a rare sight, especially in Europe, even though they exist and are permitted by legislation [1]. However, if there is an automated vehicle (AV) on a public road, chances are high that a human is still in the loop. Major AV companies like Waymo [2] or Cruise [3] continue to rely on remote operators for monitoring, guidance, and driving [4]. From a car user’s perspective, attitudes toward autonomous and remote driving also vary. A McKinsey survey [5] of around 1500 car owners in China, Germany, and the United States found that consumers consider remote driving superior to autonomous driving in several aspects, including safety and adaptability to unexpected situations. Additionally, about 56% of respondents indicated they would be more likely to purchase an AV if it included remote-driving features, as this could allow for broader usability across different road types and weather conditions. Thus, besides automation technology companies, dedicated teleoperation providers like Valeo [6], DriveU [7], Einride [8], and Fernride [9] have emerged.
If human involvement is essential for autonomous driving, then so is the user interface for interacting with the vehicle. In addition to intuitive interaction, the situational awareness of the remote operator presents a particular challenge in teleoperation [10,11]. Since situational awareness relies mainly on the remote operator’s visual perception [12], particular emphasis is placed on designing the graphical user interface (GUI) to effectively convey essential information to the remote operator, enabling quick and safe decision-making and action.
Therefore, our work aims to develop a user-friendly GUI focused on the necessary informational elements that complement the video feed for safe operations, following a user-centered design process, as follows:
  • Defining teleoperation process steps for remote driving and remote assistance.
  • Identifying essential GUI elements through expert interviews.
  • Evaluating a static and dynamic variant of the developed GUI within the teleoperation process through an online study.
Typically, teleoperation GUIs present a camera feed along with additional information like the vehicle speed. Several studies have already explored optimal video presentation in teleoperation, focusing on aspects such as required video quality [13], camera perspective [14], and field of view [15]. In addition, research has examined a range of display concepts [13,16] and output devices, including head-mounted displays [17,18].
In contrast, this work focuses on the supplemental informational elements and how a holistic GUI design should look on a conventional monitor to accommodate all necessary information on a single display surface. The following summarizes the literature on evaluating teleoperation GUIs and deriving design recommendations. Table 1 provides an overview of the teleoperation concepts considered in the publications, the implementation approaches, setups (including output devices, displayed elements, and input devices), and participants. The table also distinguishes whether the evaluations focused on the overall GUI or individual display elements.
Based on expert interviews or observations, several publications propose design recommendations or requirements for teleoperation interfaces without presenting a fully developed GUI [19,20]. For instance, Graf and Hussmann [20] identified 80 general information requirements, such as object display and vehicle position, which they narrowed down to 20 essential elements considered critical for safe teleoperation. In contrast, Tener and Lanir [19] illustrated their GUI element suggestions in design examples. However, the proposed informational elements were neither integrated into a unified GUI nor empirically validated.
While the work by Lindgren and Vahlberg [21] presented a holistic GUI, their study evaluated a click dummy and derived general design guidelines, including information prioritization and the use of coding strategies, such as color, to enhance clarity. Nevertheless, their work did not explicitly examine the impact of individual GUI elements.
Further developing this line of research, Gafert et al. [22] and Bodell and Gulliksson [23] shifted the focus toward analyzing individual GUI components. Gafert et al. [22] were the first to translate the requirements defined by Graf and Hussmann [20] into 19 concrete GUI elements, which were then implemented in two display variants: one featuring the complete set of informational elements and the other a reduced display configuration. In a real driving study involving the teleoperation of a miniature vehicle, the density and relevance of displayed information were examined across the orientation and navigation phases. Bodell and Gulliksson [23] also conducted a user study to explore how camera images, maps, and sensor data can be effectively presented to enhance safety and efficiency. Other informational display elements, such as steering angle, turn signal indicators, or vehicle model information, were not included in their GUI and evaluation.
Table 1. Overview of related work regarding graphical user interfaces for teleoperation. For each source, the implementation approach (none, simulation, or real world), the setup (output device, display elements, input device), the participants (number and experience), and the evaluation focus (individual display elements vs. overall GUI) are listed, where reported.
Remote driving:
  • Tener and Lanir [19]. Implementation: none (observations/interviews). Output device: ultra-wide monitor. Display elements: video feed, side mirrors, rear view, vehicle speed, other driving information; 25 teleoperation challenges and design guidelines derived. Input device: steering wheel and pedals. Participants: 8/14; remote operators/automotive industry professionals, academic teleoperation researchers. Evaluation: overall GUI (partially).
  • Graf and Hussmann [20]. Implementation: none (interviews). Display elements: 80 collected requirements, 20 extracted for safe AV teleoperation. Participants: 18/10; automotive industry professionals.
  • Gafert et al. [22]. Implementation: real world (miniature vehicle). Output device: head-mounted display. Display elements: 36 requirements represented in 19 GUI elements: video feed, rear view, navigation map, vehicle speed, other driving information, task information, weather, … Input device: steering wheel and pedals. Participants: 16; no teleoperation experience.
  • Bodell and Gulliksson [23]. Implementation: simulation (Gazebo). Output device: 2 monitors. Display elements: 360° video feed, vehicle speed, bird’s-eye view map, planned path of automation, LiDAR data. Input device: steering wheel, gamepad controller. Participants: number not reported; no teleoperation experience.
  • Lindgren and Vahlberg [21]. Implementation: click dummy. Output device: 3 monitors. Display elements: video feed, side mirrors, rear view, bird’s-eye view, vehicle speed, other driving information, notification list, top bar with system information, time and weather; general design guidelines derived. Input device: mouse. Participants: 9; engineers with experience in remote operation, remote operators.
Remote assistance:
  • Kettwich et al. [24]. Implementation: click dummy. Output device: 6 monitors, touchscreen. Display elements: video feed, overview map, vehicle data, disturbances; bird’s-eye view, planned path of automation. Input device: mouse, touch. Participants: 13; control center professionals.
  • Schrank et al. [25]. Implementation: click dummy. Output device: 6 monitors, touchscreen. Display elements: video feed, overview map, vehicle data, disturbances; bird’s-eye view, planned path of automation. Input device: mouse, touch. Participants: 34; university or state-certified technical degree (remote operator criteria acc. to German law).
  • Tener and Lanir [26]. Implementation: click dummy. Output device: ultra-wide monitor, monitor on top, tablet. Display elements: video feed; rear view; video feed, status bar, planned path of automation, obstacle marking, notifications, vehicle status, weather, … Input device: touch. Participants: 14; experts in remote operation.
The publications presented so far all focus on remote driving. In addition to traditional remote driving workstations, startups and research initiatives are now introducing novel teleoperation workstations with alternative desk-based control and display concepts, primarily for remote assistance of the AV. Click-based remote assistance systems offer an intuitive way to issue high-level commands, with users favoring minimal involvement in the dynamic driving task unless the scenario feels unsafe [27,28]. This requires different user interface specifications [29] and research in remote assistance [30].
Implementations by Cruise [31], Motional [32,33], and Zoox [34] display an abstract representation of the surroundings alongside multiple video streams, including road layout and detected objects. Motional [32] also visualizes traffic lights on the map. However, these startups do not provide access to the entire GUIs, including supplemental informational elements, likely due to competitive reasons.
Both industrial implementations and research on remote assistance, especially in GUI design, remain limited. Kettwich et al. [24] designed a remote operator GUI with six screens and a touchscreen, based on a systematic analysis of use cases [35]. First, the GUI was evaluated as a click dummy by control center professionals, using offline videos. Building on this setup, Schrank et al. [25] conducted another study simulating scenarios in which an AV requires remote assistance. Although the click dummy achieved good usability and acceptance scores in both studies, it was not designed to determine which informational elements were necessary at specific points to achieve these results. Similarly, the results from the online user study by Tener and Lanir [26] allow for general conclusions about their simulation-based click dummy, revealing good usability scores and relatively positive user acceptance, but lacking information on the influence of individual informational elements and the appropriate timing of their display during the teleoperation process.
Consequently, research questions arise regarding the selection and timing of informational elements within a GUI and whether there are distinct differences in the information required for remote driving versus remote assistance. Given the absence of a clearly defined teleoperation process, an initial analysis is necessary to identify the informational elements required at each stage. Furthermore, this work presents and evaluates a GUI prototype that incorporates the informational elements identified as essential. This prototype serves as a basis for assessing the effectiveness of the selected elements. The detailed methodology is outlined in the following section.

2. Methodology

The development of the GUI for remote operation followed the user-centered design process [36]: (1) understand context of use, (2) specify user requirements, (3) design solutions, (4) evaluate against requirements. In the first step, the context of use was defined through interviews, which analyzed the teleoperation process for both remote driving and remote assistance. Given the limited user base, primarily within startups developing remote driving systems, teleoperation experts were consulted during system development. Based on the context of use, the experts assessed the importance of collected informational elements in a second step during the interviews to specify requirements. These requirements formed the basis for the GUI design, which was implemented as a click dummy in a third step. In the fourth step, the GUI was evaluated in an online study.
The following sections describe the approach used in the expert interview and the online study. The interviews and studies were conducted with ethical approval from the ethics committee of the Technical University of Munich (2024-79-NM-BA, 2024-82-NM-BA).

2.1. Expert Interview

To design a display concept for teleoperation, it was first necessary to understand the interaction process between a remote operator and the AV. To the best of the authors’ knowledge, no publications on the teleoperation process currently exist, so this process was developed through expert interviews. Nine international experts (median age: 33, SD = 8) were recruited, each with a minimum of one year of teleoperation experience; six had three or more years of experience. Two participants were working as CEOs responsible for business development, three were researchers, and four were working as engineers or product managers responsible for technical solutions. Seven participants had already operated a remote-controlled vehicle (excluding toys).
To guide the participants through a generic interaction process, we defined an imaginary scenario in which each participant acts as a remote operator responsible for an AV fleet capable of self-driving but still requiring human support in unsolvable situations. An AV requires support by either (A) remotely driving the vehicle or (B) providing remote assistance to the vehicle. Due to the odd number of participants, five participants were randomly assigned to (A) remote driving (PRD) and four participants to (B) remote assistance (PRA), without considering the experts’ specializations.
Task I: For the first task, participants were provided with an empty template (Figure 1) with five predefined sections, ranging from self-driving to teleoperation and back, separated by vertical blue lines. The participants’ task was to define actions executed by either the AV or the remote operator during the interaction process. Subsequently, these actions were to be placed as blue-bordered boxes within one of the dotted boxes related to either the AV or the remote operator. The predefined actions in the first and fifth sections could be modified, and additional actions could be added. Multiple actions could be placed in series or parallel, with different durations represented by the length of the boxes. If actions spanned multiple predefined sections, they could also be placed across the blue section lines. Beyond that, participants were asked to assess the importance of a standardized teleoperation process for themselves and the industry.
In the next step, we aimed to determine which informational elements should be shown on a GUI for the remote operator in each section of the teleoperation process. As part of a literature review, more than 60 GUIs in the fields of teleoperation and in-vehicle systems were previously analyzed, and their display elements were compiled into a morphological box, resulting in 57 categories of informational components.
Task II: For the second task, the experts were asked to evaluate the 57 informational elements. The participants were provided with the same template containing the predefined five sections of the teleoperation process (Figure 1) and were invited to rate the relevance of each informational element within these sections {0: irrelevant, 1: not necessary, but nice to have, 2: necessary}. The evaluation assessed only the permanent display of information; it did not cover short-term visuals such as warnings (e.g., for making the remote operator aware of low tire pressure).

2.2. Online Study

Based on the results of the expert interviews (analyzed in Section 3.2), we designed a GUI, shown in Figure 2, containing the informational elements that were classified as at least 1: not necessary, but nice to have. The GUI uses the area of the vehicle’s engine hood as a dashboard for displaying vehicle parameters. Additional informational elements are overlaid in the top border area of the screen to minimize interference with the video stream.
For remote monitoring during self-driving in the transition sections, two display alternatives with varying levels of information density were developed. The first display variant corresponds to the GUI in teleoperation mode (Figure 2), ensuring a static GUI throughout the teleoperation process. The second display variant (Figure 3) features a reduced number of elements compared to the teleoperation mode, allowing the GUI to dynamically adapt across the stages of the teleoperation process, incorporating insights from the expert interviews. When switching to teleoperation, both GUI variants change their appearance from a green to a blue tint, indicating that human input is required.
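To make the distinction between the two variants concrete, the following sketch shows one way the dynamic behavior could be encoded as data: a mapping from teleoperation phase to the set of visible elements, plus the tint switch. It is a minimal illustration; the phase labels follow the process template (s1–s5), while the element names are placeholder assumptions, not the exact identifiers of our prototype.

```python
# Minimal sketch of the dynamic GUI's phase-dependent configuration.
# Phase labels (s1-s5) follow the process template; element names are
# illustrative placeholders, not the identifiers used in our prototype.

MONITORING_PHASES = {"s1_self_driving", "s2_transition_to_teleop",
                     "s4_transition_to_self_driving", "s5_self_driving"}

FULL_ELEMENT_SET = {
    "video_feed", "vehicle_speed", "control_owner_and_mode",
    "steering_angle", "turn_signals", "network_quality", "latency",
    "map", "route", "predicted_ego_trajectory",
}

# Reduced set shown while merely monitoring the self-driving AV.
REDUCED_ELEMENT_SET = {
    "video_feed", "vehicle_speed", "control_owner_and_mode",
    "network_quality", "map",
}

def visible_elements(phase: str, dynamic_gui: bool) -> set[str]:
    """Static GUI always shows everything; the dynamic GUI reduces
    the element set outside the teleoperation phase (s3)."""
    if dynamic_gui and phase in MONITORING_PHASES:
        return REDUCED_ELEMENT_SET
    return FULL_ELEMENT_SET

def background_tint(phase: str) -> str:
    """Green while the AV drives itself, blue when human input is required."""
    return "blue" if phase == "s3_teleoperation" else "green"
```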
The online study aimed to evaluate and compare the static and dynamic GUI variants while validating the experts’ assessment of the necessity of displaying certain informational elements during teleoperation. To simulate real-world teleoperation, we developed an interactive online prototype that participants could click through, enabling a remote study without requiring peripherals like pedals or steering wheels by focusing solely on remote assistance. To this end, we took a series of photos of a scenario in which the ego vehicle was driving about 50 m through an American neighborhood. To proceed to the next interactive screen and move the vehicle forward, participants had to click on the road in the picture. Otherwise, the GUI showed an error message.
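The click validation behind this mechanic reduces to a hit test of the click coordinates against the drivable road surface. The sketch below is a minimal illustration under the assumption that the road surface of each scenario screen is stored as a polygon in pixel coordinates; all names are our own, not taken from the actual prototype.

```python
# Minimal sketch of the click-dummy's waypoint validation, assuming the
# drivable road surface of each scenario screen is stored as a polygon in
# pixel coordinates. A click on the road advances the scenario; any other
# click counts as a misclick and triggers the error message.
from typing import List, Tuple

Point = Tuple[float, float]

def point_in_polygon(x: float, y: float, polygon: List[Point]) -> bool:
    """Classic ray-casting point-in-polygon test."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

class ClickDummyScreen:
    def __init__(self, road_polygon: List[Point]):
        self.road_polygon = road_polygon
        self.misclicks = 0

    def handle_click(self, x: float, y: float) -> str:
        """Advance on a valid waypoint click; otherwise log a misclick."""
        if point_in_polygon(x, y, self.road_polygon):
            return "advance_to_next_screen"
        self.misclicks += 1
        return "show_error_message"
```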
Task I: After an introduction to teleoperation, the participants were required to interact with the static GUI while paying attention to the display elements. After starting the first interactive screen by clicking a button, the participant monitored the AV in autonomous mode until the AV requested the participant to support it. The participant guided the AV through the scenario by clicking a waypoint in the desired direction on the street surface. After solving the scenario, the participant was informed that the AV could continue driving autonomously. Hence, control was handed over to the AV. After the interaction with the prototype, the participants evaluated the static GUI using the system usability scale (SUS) [37] and the short version of the user experience questionnaire (UEQ) [38].
Task II: In the second task, participants were introduced to the dynamic GUI and had to solve the same scenario with this display variant again. We did not make any additional changes except for the reduced set of display elements during autonomous driving mode. After completing the SUS [37] and UEQ [38] again, participants were asked to identify their preferred GUI option and offer suggestions for potential improvements.
Task III: Finally, the participants were asked to complete a card-sorting task by allocating each informational element of the GUI to one of the following categories: {0: irrelevant, 1: not necessary, but nice to have, 2: necessary}. Each informational element was represented by a card containing an icon and a textual designation.
Participants: For the online study, a total of N = 36 participants (4 female, 32 male) with an average age of 26.3 years (SD = 6.0) were recruited. All participants, except for one, had a valid driver’s license. Of these, 28 participants reported having 10 or fewer years of driving experience, seven had 11 to 20 years, and one had more than 30 years. Fourteen participants drove almost every day, five a few days a week, twelve a few days a month, and four a few times a year. Thirty-one participants reported only short-distance travel (<200 km per round trip), three participants reported middle-distance travel (201 km to 500 km per round trip), and one participant reported long-distance travel (>500 km per round trip). The participants’ average affinity for technology interaction (ATI) (short version) score was M = 4.42 (SD = 1.08, α = 0.89).
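The reported α is Cronbach’s alpha for the ATI items. As a quick reference, a minimal sketch of the computation is shown below, assuming a response matrix with one row per participant and one column per item; the demo dimensions and data are placeholders, not the study data.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances)/variance(sum))."""
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances.sum() / total_variance)

# Illustrative use with random placeholder data (36 participants x 4 items):
demo = np.random.default_rng(1).integers(1, 7, size=(36, 4))
print(round(cronbach_alpha(demo), 2))
```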

3. Results

The following section summarizes the results from the expert interviews on the teleoperation process and the evaluation of informational elements, as well as the results from the online study assessing the designed GUI as a click dummy.

3.1. Teleoperation Process

Before designing the GUI, the first step was to conduct an expert interview to define the context of use. Each expert, assisted by the interviewer, specified a teleoperation process in the given template (Figure 1) by defining different actions for the AV or remote operator. Since the experts did not modify the self-driving sections s1 and s5, we focus only on the transition from self-driving to teleoperation (s2), teleoperation by the remote operator (s3), and the transition from teleoperation to self-driving (s4). Similar actions were grouped wherever possible to compare the resulting process steps. Tasks executed by the AV are plotted in orange, and those executed by the remote operator in purple. The more frequently the experts mentioned a particular action, the higher the number and the more intense the color.
For remote driving (Table 2), the main process starts with the AV driving autonomously and continuously sending situation data while the remote operator is waiting. However, one expert emphasized that teleoperation is only profitable if waiting time is utilized for other tasks, like fleet monitoring. Once the AV recognizes a problem, it sends a notification to the remote operator while executing a minimal risk maneuver. P2RD suggested that an optimal takeover request design should enable the vehicle to ask for support. After receiving the notification, the remote operator assesses the situation. Once the AV has reached a safe state, the remote operator takes over for teleoperation and chooses one of the interaction concepts offered by the AV, which the AV then has to approve. The remote operator sends driving commands, which the AV monitors and executes consecutively, whereby the AV can offer remote operator assistance or automatically intervene if necessary. In turn, the AV’s execution is monitored by the remote operator. Once the difficult situation is bypassed, the remote operator guides the AV back into a safe state and requests autonomous driving. P3RD emphasized that the vehicle must inform the remote operator when self-driving is available again. The AV then checks its confidence level for self-driving, approves the request, and transitions back to autonomous driving. Afterward, the remote operator monitors the AV until both disconnect.
For remote assistance (Table 3), the experts described the process during the transition from self-driving to teleoperation (s2) similarly to the remote driving procedure. However, the interaction is continuously supported by ADAS while the vehicle collects information for learning. Furthermore, the experts emphasized that the remote operator first connects to the AV to assess the situation once a notification from the AV is received, checked, and confirmed. If the AV reaches a safe state, the remote operator can request or accept teleoperation and take over. In the specific scenario of waypoint guidance [39], the remote operator plans a trajectory by setting a series of waypoints. P4RA proposed that this trajectory can also include a point at which the transition to autonomous driving is performed automatically. Once the remote operator confirms the final trajectory, they monitor the AV’s execution. The AV then checks whether it is back on the originally planned trajectory and, if so, transitions back to self-driving. The remote operator can intervene in this transition if something goes wrong. If the transition to autonomous driving is not performed automatically, the remote operator tries to stop the vehicle and manually hands over to the AV. As a final step, the AV disconnects from the remote operator.
In the post-task survey, the question “How important is a standardized teleoperation process for you?” was answered with 4.6 on average on a 5-point Likert scale (1: very unimportant, 5: very important). The importance of a standardized teleoperation process for the industry was assessed at 4.7. However, P1RA warned that overly detailed governmental standardization may hinder development, while P3RA noted that higher standardization improves efficiency but is challenging due to varying conditions. P2RD raised key research questions, including the optimal operator-to-vehicle ratio, minimum latency, expected market penetration of Level 5 autonomy, and the benefits of remote driving over assistance.

3.2. Informational Elements

The experts rated each collected informational element on a scale of {0, 1, 2} with respect to their assigned teleoperation concept (remote driving or remote assistance) and the respective section (s1–s5) of the teleoperation process. The average results are visualized in Table 4, with darker shades and higher values indicating greater relevance of an element.
For remote assistance, the experts consider informational elements like engine RPM, mileage, range, energy consumption and level, oil level, temperature, and the compass to be irrelevant, regardless of the teleoperation process section (s1–s5). In contrast, elements like vehicle speed, control owner and mode, and the camera sensor video stream appear to be mostly necessary at all times.
Although the remote driving heat map is comparable, it shows more high-necessity ratings (value 2.00), emphasizing the need for additional informational elements such as steering angle, turn signals (used in conjunction with hazard lights), a microphone for cabin communication, outside sound, cabin sound, and the predicted ego trajectory. However, P1RA, P2RD, and P3RD noted that the need to toggle cabin or outside sound could be eliminated by implementing a system that only forwards relevant conversations to the remote operator. Furthermore, a few ratings, such as energy level, range, used seats, vehicle model information, and local time, break the symmetry and are most important during the transition to remote driving (s2), as the remote operator may need this information to gain sufficient situational awareness.
The comparison heat map compares both evaluations for remote assistance and remote driving. The cells’ values, within [−2.0, 2.0], were calculated by subtracting the remote assistance values from the remote driving values. Hence, positive numbers (marked in green) indicate informational elements that were more important for remote driving. Vice versa, negative numbers (marked in blue) indicate elements considered more important for remote assistance. Notably, map, route, and trip information were rated as more critical for remote assistance, whereas traffic signs, lights, participant, and object highlighting and abstraction were considered more relevant for remote driving.
Overall, expert assessments of the remote assistance and remote driving concepts do not differ significantly. However, the predominantly green shading in the comparison column indicates that many informational elements were considered slightly more important for remote driving (0.22 higher, on average, compared to remote assistance; Table 4).
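For transparency, the aggregation behind Table 4 and the comparison heat map reduces to a few grouped averages. The sketch below shows one possible implementation with pandas; the records, element names, and values are illustrative placeholders, not the actual expert data.

```python
import pandas as pd

# Minimal sketch of how the Table 4 heat maps can be derived, assuming each
# expert rating is stored as one record (element, section, concept, rating)
# with rating in {0, 1, 2}. Names and values are illustrative placeholders.
ratings = pd.DataFrame(
    [
        ("vehicle_speed",  "s3", "remote_driving",    2),
        ("vehicle_speed",  "s3", "remote_assistance", 2),
        ("engine_rpm",     "s3", "remote_driving",    0),
        ("engine_rpm",     "s3", "remote_assistance", 0),
        ("steering_angle", "s3", "remote_driving",    2),
        ("steering_angle", "s3", "remote_assistance", 1),
    ],
    columns=["element", "section", "concept", "rating"],
)

# Average rating per element and section, separately for each concept.
mean_ratings = ratings.pivot_table(
    index="element", columns=["concept", "section"],
    values="rating", aggfunc="mean",
)

# Comparison heat map in [-2.0, 2.0]: remote driving minus remote assistance.
# Positive cells mark elements rated as more important for remote driving.
comparison = mean_ratings["remote_driving"] - mean_ratings["remote_assistance"]
print(comparison)
```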

3.3. Online Study

Design Derivation: Since the online study focuses on evaluating a novel GUI for remote assistance, the GUI concept was derived from the results of the expert interviews on remote assistance, particularly considering the highest ratings of the elements within the teleoperation section (s3) for the static GUI variant, and the importance across all sections for the reduced GUI (Table 4). The derived design for the click dummy (Figure 2) contains only the informational elements that were, on average, classified as at least 1: not necessary, but nice to have, i.e., with an average rating of at least 1.00.
However, traffic light highlighting and abstraction were excluded because the scenario in the online study does not include any traffic lights; the approach would be applied analogously to the traffic signs in the GUI. As the scenario does not include moving participants, no predicted traffic participant trajectory or position was highlighted. The explicit display of the further sensor video stream (e.g., LiDAR) and predicted collisions was omitted, as detected objects on the road were already highlighted with red line markings, and collisions were not possible due to the fixed scenario screens to click through. Similarly, the predicted ego position can be inferred from the predicted ego trajectory, visualized as a driving corridor. The resulting set of informational elements is indicated with an x and marked red in the column Part of GUI in Table 4. All elements omitted for the reasons mentioned above are marked with (x). The reduced GUI (Figure 3) contains only those elements that were, on average, rated at least 1: not necessary, but nice to have in the sections other than teleoperation (s1, s2, s4, s5), resulting in fewer displayed elements during the monitoring of autonomous self-driving.
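Expressed as code, the inclusion rule is a simple threshold filter followed by scenario-specific exclusions. The sketch below is a minimal illustration; the element names, ratings, and exclusion set are assumed placeholder values, not the actual Table 4 data.

```python
# Minimal sketch of the inclusion rule for the click-dummy GUI, assuming
# avg_rating maps each informational element to its mean expert rating for
# remote assistance in the teleoperation section (s3). Values are illustrative.
avg_rating = {
    "vehicle_speed": 2.00,
    "control_owner_and_mode": 1.75,
    "traffic_light_highlighting": 1.25,
    "engine_rpm": 0.25,
}

# Keep every element rated on average as at least
# "1: not necessary, but nice to have" ...
candidates = {e for e, r in avg_rating.items() if r >= 1.00}

# ... then drop elements excluded for scenario-specific reasons (marked
# "(x)" in Table 4), e.g., no traffic lights in the study scenario.
scenario_exclusions = {"traffic_light_highlighting"}
part_of_gui = candidates - scenario_exclusions
print(sorted(part_of_gui))  # ['control_owner_and_mode', 'vehicle_speed']
```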
Evaluation: The online study aimed to evaluate and compare the developed static and dynamic GUI. Since waypoints could only be placed on the road surface, it was possible to track misclicks. Misclicks occurred 10 times while using the GUIStatic and 14 times with GUIDynamic (Table 5).
Guiding the vehicle through the situation via clicks took an average of 76.7 s with GUIStatic, whereas GUIDynamic reduced this time by 16%. As the Shapiro–Wilk test showed no normal distribution for the task completion time, the Wilcoxon test was applied to assess significant differences. The results reveal that using GUIDynamic was significantly faster than using GUIStatic (z = 3.97, p = 0.00007), with an effect size of r = 0.66, indicating a strong effect according to Cohen [40].
The participants rated the usability of GUIStatic, on average, with a SUS score of 68.5, putting it within the higher “marginal” acceptability range and the adjective rating “ok” [37]. In comparison, GUIDynamic received a SUS score of 76.6, ranking as “acceptable” and shifting the adjective to “good”. Due to the lack of normal distribution, a Wilcoxon test was conducted for the SUS scores, revealing that the SUS score for GUIDynamic was significantly higher than for GUIStatic (z = 4.11, p = 0.00004), with an effect size of r = 0.69, indicating a strong effect [40].
The user experience of GUIStatic was rated lower than that of GUIDynamic in pragmatic, hedonic, and overall quality, all within the neutral evaluation range (Table 5). GUIDynamic scored highest in pragmatic quality, reaching a positive evaluation. Although the UEQ data are normally distributed, the t-test showed no significant difference in the overall UEQ score (t = 2.007, p = 0.0525), but a significant difference (t = 2.160, p = 0.0377) in pragmatic quality between GUIDynamic and GUIStatic, with a small practical effect (r = 0.36) [40]. For hedonic quality, the t-test showed no significant difference (t = 1.355, p = 0.1842).
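The statistical procedure above (normality check, then Wilcoxon signed-rank test or paired t-test, plus Cohen-style effect sizes) can be sketched in a few lines of Python. The sample below uses synthetic placeholder data, not the study measurements, and assumes SciPy; recovering |z| from the two-sided Wilcoxon p-value is one common convention for computing r = |z|/√N.

```python
import numpy as np
from scipy import stats

# Illustrative sketch of the reported analysis pipeline, assuming paired
# per-participant measurements for the static and dynamic GUI (N = 36).
# The data below are synthetic placeholders, not the study data.
rng = np.random.default_rng(seed=0)
static = rng.normal(loc=76.7, scale=15.0, size=36)   # e.g., completion times [s]
dynamic = 0.84 * static + rng.normal(0.0, 5.0, 36)   # ~16% faster on average

diff = static - dynamic
n = len(diff)

if stats.shapiro(diff).pvalue < 0.05:
    # Differences not normally distributed: Wilcoxon signed-rank test.
    # |z| is recovered from the two-sided p-value; effect size r = |z|/sqrt(N).
    res = stats.wilcoxon(static, dynamic)
    z = stats.norm.isf(res.pvalue / 2.0)
    r = z / np.sqrt(n)
    print(f"Wilcoxon: p = {res.pvalue:.5f}, |z| = {z:.2f}, r = {r:.2f}")
else:
    # Differences normally distributed: paired t-test,
    # effect size r = sqrt(t^2 / (t^2 + df)).
    t, p = stats.ttest_rel(static, dynamic)
    r = np.sqrt(t**2 / (t**2 + (n - 1)))
    print(f"t-test: t = {t:.3f}, p = {p:.4f}, r = {r:.2f}")
```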
In the post-task survey, users rated their preference between GUIStatic and GUIDynamic; 4 favored GUIStatic, 11 preferred GUIDynamic, 9 liked both, and 12 saw no difference.
Informational Elements: The results of the card sorting are visualized in Table 6. The Experts and Participants columns contain the respective average ratings of each informational element during the teleoperation section (s3) for remote assistance, while the Comparison column highlights their differences. Positive values (marked in pink) indicate elements rated higher by participants, whereas negative values (marked in turquoise) indicate elements rated higher by experts. The majority of vehicle parameters were rated significantly higher by the participants. In contrast, informational elements related to system parameters, communication, teleoperation, and routing were rated notably higher by the experts. Overall, four participants stated in their improvement suggestions that the GUIs, in general, contained too much information for the task. Three participants criticized the prototype’s limited interaction and the resulting challenges. Furthermore, three participants expressed a preference for the ability to customize the GUI to better suit their individual needs rather than relying on a default configuration.

4. Discussion and Limitations

This section discusses the outcomes and limitations of the results to lay the foundation for future research and to ensure the proper interpretation of the findings.

4.1. Teleoperation Process

The following general process steps are derived from the remote driving and remote assistance processes defined during the expert interviews (Table 2 and Table 3), assuming the remote operator is not yet in the loop. It should be noted that the experts may not have explicitly stated certain aspects, as these were presumed to be self-evident, such as the AV also transmitting situational data to the remote operator during remote assistance. A state-machine sketch of these steps follows the list.
  1. AV recognizes complex scenario or problem.
  2. AV notifies the remote operator while executing a minimal risk maneuver.
  3. Remote operator receives notification, connects to AV, and assesses situation.
  4. If the AV has reached a safe state, the remote operator takes over for teleoperation.
  5. Thereby, the AV can offer different teleoperation concepts.
  6. Remote operator chooses a teleoperation concept (remote driving or remote assistance).
  7. AV has to approve the teleoperation with the chosen interaction concept.
  8. Execution of the dynamic driving task through the selected teleoperation concept (specific individual steps).
  9. Thereby, the AV checks and approves confidence for self-driving.
  10. Handing over to self-driving upon request of the remote operator or offer from the AV, or through automated transfer (especially in remote assistance).
  11. Remote operator monitors self-driving and can intervene if something goes wrong.
  12. Remote operator and AV disconnect.
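As a compact illustration, these steps can be read as a state machine in which the open handshakes of steps (4) and (10) surface as explicit transition points. The sketch below is our own interpretive encoding; state and event names are assumed labels chosen to mirror steps (1)–(12), not part of any standard or deployed system.

```python
from enum import Enum, auto

# Illustrative state-machine reading of the general teleoperation process
# derived above. All state and event names are our own assumed labels.
class State(Enum):
    SELF_DRIVING = auto()
    MINIMAL_RISK_MANEUVER = auto()   # steps (1)-(2)
    OPERATOR_ASSESSING = auto()      # step (3)
    AWAITING_TAKEOVER = auto()       # step (4): first open handshake
    CONCEPT_SELECTION = auto()       # steps (5)-(7)
    TELEOPERATION = auto()           # step (8)
    HANDOVER_TO_AV = auto()          # steps (9)-(10): second open handshake
    MONITORED_SELF_DRIVING = auto()  # step (11)
    DISCONNECTED = auto()            # step (12)

TRANSITIONS = {
    (State.SELF_DRIVING, "problem_recognized"): State.MINIMAL_RISK_MANEUVER,
    (State.MINIMAL_RISK_MANEUVER, "operator_notified"): State.OPERATOR_ASSESSING,
    (State.OPERATOR_ASSESSING, "safe_state_reached"): State.AWAITING_TAKEOVER,
    (State.AWAITING_TAKEOVER, "takeover"): State.CONCEPT_SELECTION,
    (State.CONCEPT_SELECTION, "av_approves_concept"): State.TELEOPERATION,
    (State.TELEOPERATION, "self_driving_confidence_ok"): State.HANDOVER_TO_AV,
    (State.HANDOVER_TO_AV, "handover_complete"): State.MONITORED_SELF_DRIVING,
    (State.MONITORED_SELF_DRIVING, "operator_intervenes"): State.TELEOPERATION,
    (State.MONITORED_SELF_DRIVING, "all_clear"): State.DISCONNECTED,
}

def step(state: State, event: str) -> State:
    """Advance the process; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```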
Step (8) is further divided into substeps, with remote driving involving more low-level commands, whereas remote assistance remains at a high level due to the continued presence of autonomous features. This would allow an automatic transition back to self-driving for remote assistance. However, the handshakes in steps (4) and (10) remain unclear.
If the remote operator is already connected to the AV and is assessing the situation, they can request teleoperation and take over, according to step (4). The question remains: does the remote operator request teleoperation intervention manually, or does the AV enter teleoperation mode autonomously and await commands? Similarly, step (10) involves the second handover, transitioning back to self-driving. Further research should determine which handover option is promising regarding transition time and reliability, as well as if and how an AV could automatically switch back to autonomous driving.
A further field of research is the right timing of a handover. Typically, the vehicle is handed over to the remote operator from a minimal risk state, which is likely to be a standstill. However, a transition process that enables seamless switching between autonomous driving and teleoperation without entering a minimal risk maneuver remains largely unexplored. Likewise, the transition back to self-driving can occur either from a standstill or while in motion.
Overall, experts generally consider standardizing the teleoperation process to be important or very important. Nevertheless, standardization should not hinder development but should instead allow adaptability across different applications and conditions. Variations in interaction, such as the handover to self-driving (step (10)), present significant challenges to establishing a standardized teleoperation process.

4.2. Informational Elements

The evaluation of informational elements reduced the number of collected potential display components by more than half, minimizing the GUI and ensuring the remote operator’s attention remained focused on essential information.
The initial experts’ assessment of informational elements indicates that features such as maps, route, and trip information are considered more critical for remote assistance, aligning with existing applications in Section 1. In contrast, traffic signs, traffic lights, participant and object highlighting, and abstraction were rated as more important for remote driving, likely because the remote operator is directly responsible for executing the driving task and maintaining vehicle control. Therefore, the remote operator may need to gain a more detailed situational awareness than in remote assistance. The overall higher rated relevance of informational elements in remote driving also suggests an increased need for situational awareness in this teleoperation mode.
However, it is essential to note that the experts came from various fields of teleoperation applications. Teleoperation in passenger transport focuses on safety, comfort, and user support in complex traffic situations, while logistics emphasizes operational efficiency with less focus on user experience, often in controlled environments such as warehouses. While both domains rely on remote human intervention, the presence of passengers rather than cargo shapes distinct operational priorities. Since the experts were randomly assigned to the remote driving and remote assistance groups, it is possible that one sector was overrepresented in a group. This may explain why, in remote driving, communication elements were considered more important, likely due to the higher proportion of experts from the passenger transport sector compared to the remote assistance group, which may have included more experts from logistics applications. This effect can be mitigated by increasing the participant pool or by evenly distributing the experts based on their prior experience.
Despite the small number of experts (N = 9) and the random distribution, the relatively symmetrical color distribution in the heat map in Table 4 shows that the elements are perceived as equally necessary or unnecessary, whether for remote driving or assistance.
Comparing the experts’ ratings with the results from the participants of the online study (Table 6), the assessments for remote assistance can largely be confirmed by the larger participant sample. However, participants rated vehicle parameters as more important than experts did, whereas experts placed greater emphasis on elements related to communication, map, trip information, latency, and network quality indicators, as well as the control owner and mode display. This may be because participants in the online study did not have to interact with these elements, making the information less relevant in the click dummy scenario without an actual driving task. It also shows that the experts take a different perspective compared to laypersons due to their prior experience.
Overall, it was noted in the online study that the GUI was overloaded with information and should be further reduced. However, it is again crucial to consider that many elements were not needed in the click task without a vehicle connection. In the next step, a user study involving the execution of teleoperation using a real-world vehicle should be conducted.
In the future, it is conceivable that the informational elements could be individually arranged and customized by the remote operator, as suggested by the three participants.

4.3. Online Study

The expert interview findings were translated into a GUI prototype, implemented in a static and dynamic variant using a click dummy. However, one-third of participants reported not noticing any difference between the two versions. Still, the dynamic GUI received a significantly higher rating in terms of usability. This result may be influenced by the presentation order, as participants were always shown the static GUI first, followed by the dynamic version. Consequently, increased familiarity with the system in the second trial could have led to better evaluations.
Similarly, the faster task completion times observed with the dynamic GUI may be attributed to the fixed order of presentation and potential learning effects. Additionally, both GUI variants were tested using the same scenario, which may have amplified the impact but led to better comparability.
The number of misclicks can likely be attributed to participants’ curiosity, as they attempted to challenge the system and explore its limitations. Nevertheless, the GUI variants achieved SUS scores in the upper “marginal” to “acceptable” range, with the pragmatic quality of the user experience falling within the (almost) positive range. In contrast, the hedonic quality received lower ratings, remaining within a neutral range, possibly because the GUI was evaluated only as a prototypical click dummy without real vehicle integration. Three participants explicitly criticized the lack of proper interaction. Therefore, the results on usability and user experience cannot be attributed solely to the GUI but are also influenced by the interaction with the prototype. Future studies should explore the combined impact of interaction and display concepts on usability and system performance, considering various interaction approaches and real-world applications. Thus, studies should be conducted under real-world conditions, such as video streaming with associated transmission latencies and interaction with a real vehicle.
Generally, it is important to note that the online study involved participants without prior teleoperation experience, in order to capture their intuitive impressions of the visual interface without bias from existing teleoperation systems and to reach a broader participant pool. However, it must be acknowledged that the participants were likely to have been naïve with respect to teleoperation. For instance, feedback indicating that the GUI contained too much information may be more reflective of their lack of experience than of actual usability issues. Similarly, differences in the evaluation of display elements compared to expert assessments (Table 6) could be explained by the participants’ lack of familiarity.
Nevertheless, 30% of the participants preferred the dynamic GUI (33.3% noticed no difference and 25% liked both), which was generally rated higher regarding usability and user experience. Therefore, the display should adapt throughout the teleoperation process, providing only the necessary elements at each stage. However, the dynamic GUI and its reduced display during the automated driving monitoring phase still need to be investigated further with a real vehicle connection. In particular, the remote operator notification, including the switch to the scenario, and the subsequent handover to teleoperation require further investigation.

5. Conclusions

As the core of this work, a graphical user interface (GUI) for the teleoperation of AVs was developed following the user-centered design process. First, process steps were defined by experts, outlining tasks to be performed by the AV or the remote operator. Here, a major unresolved issue was the handshake with the remote operator and the transition back to self-driving. Based on these steps, the experts evaluated the necessity of informational elements gathered from the state of the art. Building on the results, a click dummy was created, and the necessity of the informational elements was reassessed in an online study with a larger sample. Additionally, a comparison was made between a static and a dynamic GUI, the latter showing reduced display elements during the automation monitoring phase. Although one-third of the participants reported no noticeable difference between the GUIs, the usability of the dynamic GUI was rated significantly higher. Overall, both GUIs received good system usability scale (SUS) ratings. Task completion time was also significantly faster with the dynamic GUI. Nevertheless, the lack of counterbalancing of the static and dynamic conditions limits direct comparisons, as any observed improvements in the dynamic condition may result from learning effects. However, the user experience questionnaire (UEQ) score suggests that there is still potential for improvement in terms of user experience. Based on the expanded insights, the GUI design should be further refined and integrated into existing interaction concepts. This enables iterative development, including future studies involving interaction with a real vehicle. Overall, the developed display concept could serve as a standard interface for presenting key information in teleoperation research.

Author Contributions

Conceptualization, M.-M.W.; methodology, M.-M.W.; software, H.S.; validation, M.-M.W.; formal analysis, H.S. and M.C.; investigation, H.S.; resources, M.-M.W. and H.S.; data curation, H.S. and M.-M.W.; writing—original draft preparation, M.-M.W. and H.S.; writing—review and editing, J.F. and F.D.; visualization, H.S. and M.-M.W.; supervision, M.-M.W.; project administration, M.-M.W. and F.D.; funding acquisition, F.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Federal Ministry of Economic Affairs and Climate Action of Germany (BMWK) within the project Safestream (FKZ 01ME21007B).

Institutional Review Board Statement

The studies were conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of the Technical University of Munich (2024-79-NM-BA 02.08.2024; 2024-82-NM-BA 04.09.2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in these studies are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
ATI: Affinity for Technology Interaction
AV: Automated Vehicle
GUI: Graphical User Interface
SUS: System Usability Scale
UEQ: User Experience Questionnaire

References

  1. Council of European Union. Council Regulation (EU) No 1426/2022; Council of European Union: Brussels, Belgium, 2022. [Google Scholar]
  2. Waymo LLC. Waymo - Self-Driving Cars - Autonomous Vehicles - Ride-Hail. Available online: https://waymo.com/ (accessed on 30 January 2025).
  3. Cruise Driverless Rides|Autonomous Vehicles|Self-Driving. Available online: https://www.getcruise.com/ (accessed on 30 January 2025).
  4. Jin, H. Insight: A Secret Weapon for Self-Driving Car Startups: Humans. Available online: https://www.reuters.com/business/autos-transportation/secret-weapon-self-driving-car-startups-humans-2021-08-23/ (accessed on 29 January 2025).
  5. Kelkar, A.K.; Heineke, K.; Kellner, M.; Smith, A.-S. Remote-Driving Services: The Next Disruption in Mobility Innovation? 2025. Available online: https://www.mckinsey.com/features/mckinsey-center-for-future-mobility/our-insights/remote-driving-services-the-next-disruption-in-mobility-innovation#/ (accessed on 30 January 2025).
  6. Valeo. Valeo to Showcase Major Innovations at IAA Mobility 2023. Available online: https://www.valeo.com/en/valeo-to-showcase-major-innovations-at-iaa-mobility-2023/ (accessed on 26 January 2025).
  7. Teleoperation in All Use Cases and All Levels of Autonomy. Available online: https://driveu.auto/ (accessed on 26 January 2025).
  8. Einride. Balancing Humanity and Autonomy—The Einride Remote Interface Allows Operators to Monitor a Fleet of Vehicles and Keep an Eye on Their Progress. Available online: https://www.einride.tech/what-we-do/autonomy#automate (accessed on 26 January 2025).
  9. Fernride. Human Assisted Autonomy. Available online: https://www.fernride.com/system (accessed on 12 April 2025).
  10. Mutzenich, C.; Durant, S.; Helman, S.; Dalton, P. Updating our understanding of situation awareness in relation to remote operators of autonomous vehicles. Cogn. Res. Princ. Implic. 2021, 6, 9. [Google Scholar] [CrossRef] [PubMed]
  11. Expert from HF-IRADS. Human Factors Challenges of Remote Support and Control: A Position Paper from HF-IRADS. Proceedings of HF-IRADS. 2020. Available online: https://api.semanticscholar.org/CorpusID:276605706 (accessed on 12 April 2025).
  12. Colavita, F.B. Human sensory dominance. Percept. Psychophys. 1974, 16, 409–412. [Google Scholar] [CrossRef]
  13. Georg, J.M.; Putz, E.; Diermeyer, F. Longtime Effects of Videoquality, Videocanvases and Displays on Situation Awareness during Teleoperation of Automated Vehicles. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, Canada, 11–14 October 2020; pp. 248–255. [Google Scholar] [CrossRef]
  14. Boker, A.; Lanir, J. Bird’s Eye View Effect on Situational Awareness in Remote Driving. In Proceedings of the 15th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Ingolstadt, Germany, 18–21 September 2023. AutomotiveUI ’23. [Google Scholar] [CrossRef]
  15. Voysys. Field of View Affects Sense of Speed. Available online: https://www.youtube.com/watch?v=1D3j352_jsM (accessed on 10 April 2025).
  16. Cabrall, C.; Stapel, J.; Besemer, P.; Jongbloed, K.; Knipscheer, M.; Lottman, B.; Oomkens, P.; Rutten, N. Plausibility of Human Remote Driving: Human-Centered Experiments from the Point of View of Teledrivers and Telepassengers. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2019, 63, 2018–2023. [Google Scholar] [CrossRef]
  17. Bout, M.; Brenden, A.P.; Klingegård, M.; Habibovic, A.; Böckle, M.P. A Head-Mounted Display to Support Teleoperations of Shared Automated Vehicles. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications Adjunct, Oldenburg, Germany, 24–27 September 2017; AutomotiveUI ’17. pp. 62–66. [Google Scholar] [CrossRef]
  18. Georg, J.M. Konzeption und Langzeittest der Mensch-Maschine-Schnittstelle für die Teleoperation von Automatisierten Fahrzeugen. Ph.D. Thesis, Technische Universität München, München, Germany, 2024. [Google Scholar]
  19. Tener, F.; Lanir, J. Driving from a Distance: Challenges and Guidelines for Autonomous Vehicle Teleoperation Interfaces. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 30 April–5 May 2022. CHI ’22. [Google Scholar] [CrossRef]
  20. Graf, G.; Hussmann, H. User Requirements for Remote Teleoperation-based Interfaces. In Proceedings of the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Virtual Conference, 21–22 September 2020; AutomotiveUI ’20. pp. 85–88. [Google Scholar] [CrossRef]
  21. Lindgren, I.; Larsson Vahlberg, A. Remote Operation in a Modern Context. Master’s Thesis, Chalmers University of Technology, Gothenburg, Sweden, 2023. [Google Scholar]
  22. Gafert, M.; Mirnig, A.G.; Fröhlich, P.; Kraut, V.; Anzur, Z.; Tscheligi, M. Effective remote automated vehicle operation: A mixed reality contextual comparison study. Pers. Ubiquitous Comput. 2023, 27, 2321–2338. [Google Scholar] [CrossRef]
  23. Bodell, O.; Gulliksson, E. Teleoperation of Autonomous Vehicle. Master’s Thesis, Chalmers University of Technology, Gothenburg, Sweden, 2016. [Google Scholar]
  24. Kettwich, C.; Schrank, A.; Oehl, M. Teleoperation of Highly Automated Vehicles in Public Transport: User-Centered Design of a Human-Machine Interface for Remote-Operation and Its Expert Usability Evaluation. Multimodal Technol. Interact. 2021, 5, 26. [Google Scholar] [CrossRef]
  25. Schrank, A.; Walocha, F.; Brandenburg, S.; Oehl, M. Human-centered design and evaluation of a workplace for the remote assistance of highly automated vehicles. Cogn. Technol. Work 2024, 26, 183–206. [Google Scholar] [CrossRef]
  26. Tener, F.; Lanir, J. Guiding, not driving: Design and Evaluation of a Command-Based User Interface for Teleoperation of Autonomous Vehicles. arXiv 2025, arXiv:2502.00750. [Google Scholar]
  27. Brand, T.; Baumann, M.; Schmitz, M. Bridging system limits with human–machine-cooperation. Cogn. Technol. Work 2024, 26, 341–360. [Google Scholar] [CrossRef]
  28. Brecht, D.; Gehrke, N.; Kerbl, T.; Krauss, N.; Majstorović, D.; Pfab, F.; Wolf, M.M.; Diermeyer, F. Evaluation of Teleoperation Concepts to Solve Automated Vehicle Disengagements. IEEE Open J. Intell. Transp. Syst. 2024, 5, 629–641. [Google Scholar] [CrossRef]
  29. Schrank, A.; Merat, N.; Oehl, M.; Wu, Y. Human factors considerations of remote operation supporting level 4 automation. In Lecture Notes in Mobility; Springer Nature: Cham, Switzerland, 2024; pp. 111–125. [Google Scholar]
  30. Skogsmo, I.; Andersson, J.; Jernberg, C.; Aramrattana, M. One2Many: Remote Operation of Multiple Vehicles; VTI Report 1164A; National Road and Transport Research Institute: Stockholm, Sweden, 2023; Available online: https://vti.diva-portal.org/smash/get/diva2:1757256/FULLTEXT01.pdf (accessed on 29 January 2025).
  31. Bruce, S. Under the Hood of Cruise’s R&D. 2021. Available online: https://dovetail.com/outlier/under-the-hood-of-cruises-r-and-d/ (accessed on 20 July 2025).
  32. Motional. Motional’s Remote Vehicle Assistance (RVA). Available online: https://www.youtube.com/watch?v=pyoHeEcgHFA (accessed on 19 January 2024).
  33. Motional. Smart Choices: How Robotaxis Partner with Remote Vehicle Operators to Work Through Tricky Spots Safely. Available online: https://motional.com/news/rva (accessed on 19 January 2024).
  34. Zoox. How Zoox Uses TeleGuidance to Provide Remote Assistance to its Autonomous Vehicles. Available online: https://www.youtube.com/watch?v=NKQHuutVx78 (accessed on 29 January 2025).
  35. Kettwich, C.; Dreßler, A. Requirements of Future Control Centers in Public Transport. In Proceedings of the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Virtual Conference, 21–22 September 2020; AutomotiveUI ’20. pp. 69–73. [Google Scholar] [CrossRef]
  36. DIN EN ISO 9241-210:2020-03; Ergonomie der Mensch-System-Interaktion—Teil 210: Menschzentrierte Gestaltung Interaktiver Systeme; Deutsche Fassung. Technical Report; Beuth Verlag GmbH: Berlin, Germany, 2020. Available online: https://www.dinmedia.de/de/norm/din-en-iso-9241-210/313017070 (accessed on 12 April 2025).
  37. Bangor, A.; Kortum, P.; Miller, J. Determining what individual SUS scores mean: Adding an adjective rating scale. J. Usability Stud. 2009, 4, 114–123. [Google Scholar]
  38. Hinderks, A.; Schrepp, M.; Thomaschewski, J. User Experience Questionnaire. Available online: https://www.ueq-online.org/ (accessed on 30 January 2025).
  39. Majstorović, D.; Hoffmann, S.; Pfab, F.; Schimpe, A.; Wolf, M.M.; Diermeyer, F. Survey on Teleoperation Concepts for Automated Vehicles. In Proceedings of the 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Prague, Czech Republic, 9–12 October 2022; pp. 1290–1296. [Google Scholar] [CrossRef]
  40. Cohen, J. A power primer. Psychol. Bull. 1992, 112, 155–159. [Google Scholar] [CrossRef]
Figure 1. Template of the teleoperation process for (A) remote driving and (B) remote assistance, with predefined sections and actions in blue-bordered boxes.
Figure 2. Designed GUI during teleoperation mode (waypoint guidance).
Figure 3. GUI variant with a reduced set of display elements, shown while the vehicle drives autonomously, as part of the dynamic adaptation across the teleoperation process.
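The dynamic GUI in Figure 3 adapts the number of displayed elements to the current teleoperation phase. As a minimal sketch of how such phase-dependent filtering could be implemented (all phase and element names below are illustrative placeholders, not the study's actual implementation):

```python
# Hypothetical sketch: filter GUI elements by teleoperation phase.
# Element IDs and phase names are placeholders, not from the paper.
from enum import Enum, auto

class Phase(Enum):
    SELF_DRIVING = auto()          # s1/s5: vehicle drives autonomously
    TRANSITION_TO_TELEOP = auto()  # s2
    TELEOPERATION = auto()         # s3
    TRANSITION_TO_SELF = auto()    # s4

# Full element set shown during active teleoperation; a reduced set
# while the vehicle drives itself (cf. Figure 3).
FULL_SET = {
    "vehicle_speed", "turn_signal", "hazard_lights", "park_brake",
    "network_quality", "control_owner", "camera_stream", "map",
    "predicted_ego_trajectory", "latency", "steering_angle",
}
REDUCED_SET = {
    "vehicle_speed", "turn_signal", "hazard_lights", "park_brake",
    "network_quality", "control_owner", "camera_stream", "map",
}

def visible_elements(phase: Phase) -> set[str]:
    """Return the element IDs the GUI should render in a given phase."""
    if phase is Phase.SELF_DRIVING:
        return REDUCED_SET
    return FULL_SET

print(sorted(visible_elements(Phase.SELF_DRIVING)))
```

Keeping the mapping in data rather than scattered conditionals makes it straightforward to tune which elements appear in each phase, which is the core idea behind the dynamic variant.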
Table 2. Specified teleoperation process steps for remote driving, with the number of expert mentions in parentheses.

| Actor | Transition from Self-Driving to Teleoperation | Teleoperation (Remote Driving) | Transition from Teleoperation to Self-Driving |
| --- | --- | --- | --- |
| Autonomous Vehicle | Drive autonomously (sense, plan, act) (3) | Send situation data (3) | Drive autonomously (sense, plan, act) (1) |
|  | Send situation data (3) | Wait (1) | Send situation data (2) |
|  | Realize difficult traffic scenario (1) | Follow driving commands (3) | Follow driving commands (1) |
|  | Send notification (3) | Monitor teleoperation (3) | Monitor teleoperation (1) |
|  | Execute minimal risk maneuver (4) | Offer assistance (1) | Intervene if something goes wrong (1) |
|  | Offer interaction concept (1) | Intervene if something goes wrong (1) | Check confidence for self-driving (2) |
|  | Wait (1) | Approve self-driving (1) | Approve self-driving (2) |
|  | Approve teleoperation (1) |  | Transition to self-driving (1) |
|  |  |  | Disconnect (1) |
| Remote Operator | Wait (1) | Take over for teleoperation (1) | Send driving commands (1) |
|  | Receive notification (2) | Select interaction concept (1) | Monitor execution of driving commands (2) |
|  | Assess situation (2) | Send driving commands (5) | Guide vehicle back in safe state (3) |
|  | Take over for teleoperation (2) | Monitor execution of driving commands (3) | Request self-driving (3) |
|  | Select interaction concept (1) | Guide vehicle back in safe state (2) | Monitor self-driving (1) |
|  |  |  | Disconnect (2) |
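The rows of Table 2 trace a repeating handover cycle: the vehicle detects a situation it cannot handle, the operator takes over and drives, then hands control back. As a hedged illustration (the state and event names below are our shorthand for the table's phases, not an interface defined in the paper), this cycle can be captured as a tiny state machine; the remote-assistance process in Table 3 follows the same pattern with different step names:

```python
# Illustrative sketch of the remote-driving process as a state machine.
# States correspond to the table's phases; event names are our own.
TRANSITIONS = {
    ("SELF_DRIVING", "notification_sent"): "TRANSITION_TO_TELEOP",
    ("TRANSITION_TO_TELEOP", "teleoperation_approved"): "TELEOPERATION",
    ("TELEOPERATION", "self_driving_requested"): "TRANSITION_TO_SELF",
    ("TRANSITION_TO_SELF", "self_driving_approved"): "SELF_DRIVING",
}

def step(state: str, event: str) -> str:
    """Advance the process; unknown events keep the current state."""
    return TRANSITIONS.get((state, event), state)

state = "SELF_DRIVING"
for event in ["notification_sent", "teleoperation_approved",
              "self_driving_requested", "self_driving_approved"]:
    state = step(state, event)
    print(f"{event} -> {state}")
```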
Table 3. Specified teleoperation process steps for remote assistance, with the number of expert mentions in parentheses.

| Actor | Transition from Self-Driving to Teleoperation | Teleoperation (Remote Assistance) | Transition from Teleoperation to Self-Driving |
| --- | --- | --- | --- |
| Autonomous Vehicle | Assist with ADAS (2) | Assist with ADAS (3) | Assist with ADAS (2) |
|  | Collect information for learning (1) | Collect information for learning (1) | Collect information for learning (1) |
|  | Send notification (4) | Stay in safe state (1) | Check if vehicle is back on orig. trajectory (1) |
|  | Execute minimal risk maneuver (3) | Approve teleoperation (1) | Transition to self-driving (2) |
|  | Stay in safe state (1) | Execute driving commands (2) | Disconnect (1) |
|  | Offer predefined commands (1) |  |  |
| Remote Operator | Identify task (2) | Assess situation (2) | Monitor self-driving (1) |
|  | Receive notification (2) | Plan trajectory (3) | Intervene transition if sth. goes wrong (1) |
|  | Connect to vehicle (1) | Define autom. transition to self-driving (1) | Check if vehicle can be stopped (1) |
|  | Assess situation (2) | Confirm trajectory for execution (2) | Stop the vehicle (1) |
|  | Take over for teleoperation (2) |  | Hand over to AV (1) |
|  | Request teleoperation (2) |  |  |
The steps highlighted in gray are optional and only required when automated handover is not possible, necessitating a manual transition from a stationary position.
Table 4. Heat map for remote assistance, remote driving, and their comparison, based on the experts’ evaluation of 57 informational elements identified through the literature review {0: irrelevant; 1: not necessary, but nice to have; 2: necessary}. An x marks elements that are part of the full GUI or the reduced (Red.) GUI; (x) marks elements intended for the GUI but omitted due to the scenario and click dummy limitations.
| Informational Element | GUI | Red. GUI | RA s1 | RA s2 | RA s3 | RA s4 | RA s5 | RD s1 | RD s2 | RD s3 | RD s4 | RD s5 | Δ s1 | Δ s2 | Δ s3 | Δ s4 | Δ s5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Vehicle Parameters** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Vehicle Speed | x | x | 1.75 | 1.75 | 2.00 | 1.75 | 1.75 | 1.40 | 2.00 | 2.00 | 2.00 | 1.40 | −0.35 | 0.25 | 0.00 | 0.25 | −0.35 |
| Engine RPM |  |  | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.20 | 0.60 | 0.80 | 0.60 | 0.20 | −0.05 | 0.35 | 0.55 | 0.35 | −0.05 |
| Gear | x |  | 0.25 | 0.50 | 1.00 | 0.50 | 0.25 | 0.00 | 0.80 | 1.20 | 0.80 | 0.00 | −0.25 | 0.30 | 0.20 | 0.30 | −0.25 |
| Mileage |  |  | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 |
| Range |  |  | 0.25 | 0.00 | 0.00 | 0.00 | 0.25 | 0.60 | 1.20 | 1.00 | 0.60 | 0.60 | 0.35 | 1.20 | 1.00 | 0.60 | 0.35 |
| Energy Consumption |  |  | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 |
| Energy Level (Gas/Battery) |  |  | 0.75 | 0.50 | 0.50 | 0.50 | 0.75 | 1.00 | 1.60 | 1.20 | 1.00 | 1.00 | 0.25 | 1.10 | 0.70 | 0.50 | 0.25 |
| Oil Level |  |  | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.20 | 0.00 | 0.20 | 0.00 | 0.20 | 0.20 | 0.00 | 0.20 | 0.00 | 0.20 |
| Oil Temperature |  |  | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.20 | 0.00 | 0.20 | 0.00 | 0.20 | 0.20 | 0.00 | 0.20 | 0.00 | 0.20 |
| Open Doors | x |  | 0.50 | 0.50 | 1.00 | 0.50 | 0.50 | 0.60 | 1.00 | 1.20 | 1.00 | 0.60 | 0.10 | 0.50 | 0.20 | 0.50 | 0.10 |
| Used Seats | x |  | 0.25 | 0.50 | 1.00 | 0.50 | 0.25 | 0.40 | 1.60 | 1.00 | 0.80 | 0.40 | 0.15 | 1.10 | 0.00 | 0.30 | 0.15 |
| Fastened Seat Belts |  |  | 0.25 | 0.50 | 0.50 | 0.50 | 0.25 | 0.60 | 0.80 | 0.80 | 0.40 | 0.60 | 0.35 | 0.30 | 0.30 | −0.10 | 0.35 |
| Lateral & Longitudinal Accelerations (G-Forces) | x | x | 0.75 | 1.00 | 1.00 | 1.00 | 0.75 | 0.40 | 1.00 | 1.40 | 1.00 | 0.40 | −0.35 | 0.00 | 0.40 | 0.00 | −0.35 |
| Tire Pressure |  |  | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.40 | 0.20 | 0.40 | 0.20 | 0.40 | 0.15 | −0.05 | 0.15 | −0.05 | 0.15 |
| Traction |  |  | 0.50 | 0.50 | 0.75 | 0.50 | 0.50 | 0.20 | 0.60 | 1.20 | 0.60 | 0.20 | −0.30 | 0.10 | 0.45 | 0.10 | −0.30 |
| Steering Angle (Front Wheel Angle) | x |  | 0.25 | 0.75 | 1.50 | 0.75 | 0.25 | 0.40 | 1.60 | 2.00 | 1.60 | 0.40 | 0.15 | 0.85 | 0.50 | 0.85 | 0.15 |
| Light | x |  | 0.25 | 0.75 | 1.00 | 0.75 | 0.25 | 0.40 | 1.00 | 1.20 | 1.00 | 0.40 | 0.15 | 0.25 | 0.20 | 0.25 | 0.15 |
| Turn Signal | x | x | 0.50 | 1.00 | 1.50 | 1.00 | 0.50 | 0.60 | 1.60 | 2.00 | 1.60 | 0.60 | 0.10 | 0.60 | 0.50 | 0.60 | 0.10 |
| Hazard Lights | x | x | 0.50 | 1.00 | 1.00 | 1.00 | 0.50 | 0.60 | 1.60 | 2.00 | 1.60 | 0.60 | 0.10 | 0.60 | 1.00 | 0.60 | 0.10 |
| Vehicle Model Information |  |  | 0.75 | 0.75 | 0.75 | 0.75 | 0.75 | 0.40 | 1.00 | 0.60 | 0.60 | 0.40 | −0.35 | 0.25 | −0.15 | −0.15 | −0.35 |
| Park Brake | x | x | 0.50 | 1.00 | 1.00 | 1.00 | 0.50 | 0.60 | 1.20 | 1.60 | 1.20 | 0.60 | 0.10 | 0.20 | 0.60 | 0.20 | 0.10 |
| Window Heating |  |  | 0.00 | 0.25 | 0.25 | 0.25 | 0.00 | 0.40 | 0.20 | 0.40 | 0.20 | 0.40 | 0.40 | −0.05 | 0.15 | −0.05 | 0.40 |
| Air Condition |  |  | 0.00 | 0.25 | 0.25 | 0.25 | 0.00 | 0.40 | 0.20 | 0.40 | 0.20 | 0.40 | 0.40 | −0.05 | 0.15 | −0.05 | 0.40 |
| Local Time |  |  | 0.75 | 0.75 | 0.75 | 0.75 | 0.75 | 0.60 | 1.40 | 1.00 | 1.00 | 0.60 | −0.15 | 0.65 | 0.25 | 0.25 | −0.15 |
| **System Parameters** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Overall System Status | x | x | 1.33 | 1.33 | 1.33 | 1.33 | 1.33 | 1.40 | 1.20 | 1.20 | 1.20 | 1.40 | 0.07 | −0.13 | −0.13 | −0.13 | 0.07 |
| Sensor Quality |  |  | 0.67 | 0.67 | 0.67 | 0.67 | 0.67 | 0.60 | 1.20 | 1.00 | 1.00 | 0.60 | −0.07 | 0.53 | 0.33 | 0.33 | −0.07 |
| Latency | x |  | 0.75 | 0.75 | 1.50 | 0.75 | 0.75 | 0.80 | 1.40 | 1.60 | 1.40 | 0.80 | 0.05 | 0.65 | 0.10 | 0.65 | 0.05 |
| Network Quality | x | x | 1.00 | 1.00 | 1.50 | 1.00 | 1.00 | 0.80 | 1.60 | 1.60 | 1.60 | 0.80 | −0.20 | 0.60 | 0.10 | 0.60 | −0.20 |
| **Communication** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Microphone for Cabin Communication (On/Off) |  |  | 0.50 | 0.75 | 0.75 | 0.75 | 0.50 | 1.60 | 2.00 | 2.00 | 2.00 | 1.60 | 1.10 | 1.25 | 1.25 | 1.25 | 1.10 |
| Microphone for Outside Communication (On/Off) | x | x | 0.50 | 1.50 | 1.50 | 1.50 | 0.50 | 0.80 | 1.60 | 1.60 | 1.60 | 0.80 | 0.30 | 0.10 | 0.10 | 0.10 | 0.30 |
| Outside Sound (On/Off) | x | x | 0.50 | 1.50 | 1.50 | 1.50 | 0.50 | 0.40 | 2.00 | 2.00 | 2.00 | 0.40 | −0.10 | 0.50 | 0.50 | 0.50 | −0.10 |
| Cabin Sound (On/Off) |  |  | 0.50 | 0.75 | 0.75 | 0.75 | 0.50 | 1.00 | 2.00 | 2.00 | 2.00 | 1.00 | 0.50 | 1.25 | 1.25 | 1.25 | 0.50 |
| Chat (Textual Communication with Passengers) |  |  | 0.25 | 0.50 | 0.50 | 0.50 | 0.25 | 1.20 | 0.60 | 0.60 | 0.60 | 1.20 | 0.95 | 0.10 | 0.10 | 0.10 | 0.95 |
| Inside Camera | x | x | 0.50 | 1.00 | 1.25 | 1.00 | 0.50 | 1.20 | 1.20 | 1.20 | 1.20 | 1.20 | 0.70 | 0.20 | −0.05 | 0.20 | 0.70 |
| **Teleoperation** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Realtime Evaluation of Teleoperation Input | x |  | 0.00 | 0.67 | 1.33 | 0.67 | 0.00 | 0.40 | 0.60 | 1.40 | 1.00 | 0.40 | 0.40 | −0.07 | 0.07 | 0.33 | 0.40 |
| Control Owner | x | x | 1.50 | 1.75 | 2.00 | 2.00 | 1.50 | 1.20 | 1.80 | 1.80 | 1.80 | 1.20 | −0.30 | 0.05 | −0.20 | −0.20 | −0.30 |
| Control Mode (Interaction Concept) | x | x | 0.50 | 1.00 | 2.00 | 1.00 | 0.50 | 0.40 | 1.80 | 1.40 | 1.40 | 0.40 | −0.10 | 0.80 | −0.60 | 0.40 | −0.10 |
| **Environment** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Camera Sensor Video Stream | x | x | 1.25 | 2.00 | 2.00 | 2.00 | 1.25 | 0.80 | 2.00 | 2.00 | 2.00 | 0.80 | −0.45 | 0.00 | 0.00 | 0.00 | −0.45 |
| Further Sensor Video Stream (e.g. LiDAR) | (x) |  | 0.50 | 0.75 | 1.50 | 0.75 | 0.50 | 0.40 | 1.00 | 1.00 | 1.00 | 0.40 | −0.10 | 0.25 | −0.50 | 0.25 | −0.10 |
| Obstacle Highlighting and Abstraction | x |  | 0.50 | 0.75 | 1.50 | 0.75 | 0.50 | 0.60 | 1.80 | 1.60 | 1.80 | 0.60 | 0.10 | 1.05 | 0.10 | 1.05 | 0.10 |
| Traffic Sign Highlighting and Abstraction | x |  | 0.50 | 0.75 | 1.25 | 0.75 | 0.50 | 0.60 | 1.60 | 1.40 | 1.60 | 0.60 | 0.10 | 0.85 | 0.15 | 0.85 | 0.10 |
| Traffic Participant Highlighting |  |  | 0.00 | 0.25 | 0.75 | 0.25 | 0.00 | 0.40 | 1.40 | 1.20 | 1.40 | 0.40 | 0.40 | 1.15 | 0.45 | 1.15 | 0.40 |
| Traffic Lights Highlighting and Abstraction | (x) |  | 0.25 | 0.50 | 1.00 | 0.50 | 0.25 | 0.40 | 1.60 | 1.40 | 1.60 | 0.40 | 0.15 | 1.10 | 0.40 | 1.10 | 0.15 |
| Weather Conditions | x | x | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.60 | 1.40 | 1.00 | 1.00 | 0.60 | −0.40 | 0.40 | 0.00 | 0.00 | −0.40 |
| Perspective Mode Selection | x | x | 0.50 | 1.00 | 1.25 | 1.00 | 0.50 | 0.60 | 1.60 | 1.20 | 1.40 | 0.60 | 0.10 | 0.60 | −0.05 | 0.40 | 0.10 |
| **Prediction** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Predicted Ego Trajectory | x | x | 0.50 | 1.00 | 1.75 | 1.00 | 0.50 | 0.40 | 1.20 | 2.00 | 1.60 | 0.40 | −0.10 | 0.20 | 0.25 | 0.60 | −0.10 |
| Predicted Ego Position | (x) |  | 0.25 | 0.75 | 1.50 | 0.75 | 0.25 | 0.60 | 0.80 | 1.40 | 0.80 | 0.60 | 0.35 | 0.05 | −0.10 | 0.05 | 0.35 |
| Predicted Traffic Participant Trajectory | (x) |  | 0.25 | 0.50 | 1.25 | 0.50 | 0.25 | 0.40 | 0.60 | 0.80 | 0.60 | 0.40 | 0.15 | 0.10 | −0.45 | 0.10 | 0.15 |
| Predicted Traffic Participant Position | (x) |  | 0.50 | 0.75 | 1.50 | 0.75 | 0.50 | 0.40 | 0.60 | 0.80 | 0.60 | 0.40 | −0.10 | −0.15 | −0.70 | −0.15 | −0.10 |
| Predicted Collision | (x) | (x) | 0.50 | 1.00 | 1.75 | 1.00 | 0.50 | 0.40 | 1.00 | 1.40 | 1.00 | 0.40 | −0.10 | 0.00 | −0.35 | 0.00 | −0.10 |
| **Explainable AI** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Confidence Score |  |  | 0.75 | 0.75 | 0.75 | 0.75 | 0.75 | 0.80 | 1.20 | 0.40 | 1.00 | 0.80 | 0.05 | 0.45 | −0.35 | 0.25 | 0.05 |
| Explanation |  |  | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.80 | 1.40 | 0.60 | 1.20 | 0.80 | 0.30 | 0.90 | 0.10 | 0.70 | 0.30 |
| **Routing** |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Compass |  |  | 0.25 | 0.25 | 0.25 | 0.25 | 0.25 | 0.60 | 0.80 | 0.60 | 0.60 | 0.60 | 0.35 | 0.55 | 0.35 | 0.35 | 0.35 |
| Navigation Direction (e.g. next turn) | x | x | 1.00 | 1.00 | 1.50 | 1.00 | 1.00 | 0.60 | 1.80 | 1.60 | 1.60 | 0.60 | −0.40 | 0.80 | 0.10 | 0.60 | −0.40 |
| Map | x | x | 1.25 | 1.25 | 1.75 | 1.25 | 1.25 | 1.20 | 1.20 | 0.80 | 0.80 | 1.20 | −0.05 | −0.05 | −0.95 | −0.45 | −0.05 |
| Route (Long-term path on map) | x | x | 0.75 | 1.00 | 1.50 | 1.00 | 0.75 | 1.20 | 1.00 | 0.60 | 0.60 | 1.20 | 0.45 | 0.00 | −0.90 | −0.40 | 0.45 |
| Trip Information (ETA, distance, etc.) | x | x | 1.00 | 1.00 | 1.50 | 1.00 | 1.00 | 1.00 | 1.00 | 0.60 | 0.80 | 1.00 | 0.00 | 0.00 | −0.90 | −0.20 | 0.00 |
RA: remote assistance; RD: remote driving. Comparison (Δ): remote driving rating minus remote assistance rating; positive values indicate greater importance for remote driving, negative values greater importance for remote assistance. Phases: s1: self-driving vehicle in autonomous state (before teleoperation); s2: transition from self-driving to teleoperation; s3: teleoperation by remote operator; s4: transition from teleoperation to self-driving; s5: self-driving vehicle in autonomous state (after teleoperation).
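The comparison columns can be reproduced directly from the two rating blocks: for each phase, subtract the mean remote-assistance rating from the mean remote-driving rating. A minimal sketch, using the Vehicle Speed row as input (the helper name is ours, not the paper's):

```python
# Derive the per-phase comparison values of Table 4: Δ = RD − RA.
def comparison(remote_assistance: list[float], remote_driving: list[float]) -> list[float]:
    """Signed per-phase difference (positive = more important for remote driving)."""
    return [round(rd - ra, 2) for ra, rd in zip(remote_assistance, remote_driving)]

ra = [1.75, 1.75, 2.00, 1.75, 1.75]  # Vehicle Speed, phases s1..s5, remote assistance
rd = [1.40, 2.00, 2.00, 2.00, 1.40]  # Vehicle Speed, phases s1..s5, remote driving
print(comparison(ra, rd))  # [-0.35, 0.25, 0.0, 0.25, -0.35]
```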
Table 5. Comparison results of the static and dynamic GUI.

| Measure | Static Mdn | Static M | Static SD | Dynamic Mdn | Dynamic M | Dynamic SD |
| --- | --- | --- | --- | --- | --- | --- |
| Misclicks | 10 |  |  | 14 |  |  |
| Task Completion Time (s) | 61.73 | 76.7 | 37.7 | 41.25 | 64.8 | 69.6 |
| SUS Score | 70.00 | 68.5 | 12.8 | 78.75 | 76.6 | 12.5 |
| UEQ Score | 0.56 | 0.41 | 0.90 | 0.63 | 0.67 | 0.92 |
| UEQ Pragmatic Quality | 0.88 | 0.65 | 1.08 | 1.13 | 0.94 | 1.11 |
| UEQ Hedonic Quality | 0.38 | 0.16 | 1.10 | 0.50 | 0.40 | 1.08 |
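For readers who want to rerun the static-versus-dynamic comparison on their own data: this excerpt does not state which statistical test produced the significance result, so the sketch below assumes paired within-subject SUS scores and applies a Wilcoxon signed-rank test via SciPy. The sampled scores are synthetic placeholders (matching only the M/SD of Table 5), generated purely to make the example runnable.

```python
# Illustrative only: synthetic paired SUS scores and a Wilcoxon signed-rank
# test. Neither the data nor the choice of test is taken from the paper.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
sus_static = rng.normal(68.5, 12.8, size=36)   # placeholder, Table 5 M/SD, N = 36
sus_dynamic = rng.normal(76.6, 12.5, size=36)  # placeholder, Table 5 M/SD, N = 36

stat, p = wilcoxon(sus_static, sus_dynamic)
print(f"W = {stat:.1f}, p = {p:.4f}")
```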
Table 6. Comparison of experts’ and participants’ ratings of informational elements during remote assistance, and their differences: positive values indicate higher participant ratings, negative values higher expert ratings {0: irrelevant; 1: not necessary, but nice to have; 2: necessary}.

| Informational Element | Experts | Participants | Comparison |
| --- | --- | --- | --- |
| **Vehicle Parameters** |  |  |  |
| Vehicle Speed | 2.00 | 1.97 | −0.03 |
| Gear | 1.00 | 1.42 | 0.42 |
| Open Doors | 1.00 | 1.50 | 0.50 |
| Used Seats | 1.00 | 0.64 | −0.36 |
| Lateral & Longitudinal Accelerations (G-Forces) | 1.00 | 0.44 | −0.56 |
| Steering Angle (Front Wheel Angle) | 1.50 | 1.69 | 0.19 |
| Light | 1.00 | 1.28 | 0.28 |
| Turn Signal | 1.50 | 1.67 | 0.17 |
| Hazard Lights | 1.00 | 1.39 | 0.39 |
| Park Brake | 1.00 | 1.31 | 0.31 |
| **System Parameters** |  |  |  |
| Overall System Status | 1.33 | 1.50 | 0.17 |
| Latency | 1.50 | 1.00 | −0.50 |
| Network Quality | 1.50 | 0.92 | −0.58 |
| **Communication** |  |  |  |
| Microphone for Outside Communication (On/Off) | 1.50 | 0.72 | −0.78 |
| Outside Sound (On/Off) | 1.50 | 0.64 | −0.86 |
| Inside Camera | 1.25 | 0.56 | −0.69 |
| **Teleoperation** |  |  |  |
| Realtime Evaluation of Teleoperation Input | 1.33 | 1.44 | 0.11 |
| Control Owner | 2.00 | 1.47 | −0.53 |
| Control Mode (Interaction Concept) | 2.00 | 1.28 | −0.72 |
| **Environment** |  |  |  |
| Camera Sensor Video Stream | 2.00 | 1.75 | −0.25 |
| Obstacle Highlighting and Abstraction | 1.50 | 1.53 | 0.03 |
| Traffic Sign Highlighting and Abstraction | 1.25 | 1.56 | 0.31 |
| Weather Conditions | 1.00 | 0.81 | −0.19 |
| Perspective Mode Selection | 1.25 | 1.06 | −0.19 |
| **Prediction** |  |  |  |
| Predicted Ego Trajectory | 1.75 | 1.56 | −0.19 |
| **Routing** |  |  |  |
| Navigation Direction (e.g. next turn) | 1.50 | 1.33 | −0.17 |
| Map | 1.75 | 1.14 | −0.61 |
| Route (Long-term path on map) | 1.50 | 1.42 | −0.08 |
| Trip Information (ETA, distance, etc.) | 1.50 | 0.81 | −0.69 |
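The comparison column of Table 6 is again a simple difference (participant mean minus expert mean), which makes it straightforward to rank elements by disagreement. A short sketch over a few rows of the table (the dictionary layout is ours, not the authors' code):

```python
# Reproduce the comparison column of Table 6 and rank elements by the
# magnitude of expert-participant disagreement. Ratings copied from the table.
ratings = {  # element: (expert mean, participant mean)
    "Vehicle Speed": (2.00, 1.97),
    "Open Doors": (1.00, 1.50),
    "Outside Sound (On/Off)": (1.50, 0.64),
    "Control Mode (Interaction Concept)": (2.00, 1.28),
}
diffs = {name: round(p - e, 2) for name, (e, p) in ratings.items()}
for name, d in sorted(diffs.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{name:38s} {d:+.2f}")
```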