Article

Virtual Reality Driving Simulator: Investigating the Effectiveness of Image–Arrow Aids in Improving the Performance of Trainees

1 Department of Computer Games Development, Faculty of Computing and AI, Air University Islamabad, Islamabad 44230, Pakistan
2 Visual Computing Center, King Abdullah University of Science and Technology (KAUST), Thuwal 23955, Saudi Arabia
3 Department of Computer Science & IT, University of Malakand, Chakdara 18800, Pakistan
* Author to whom correspondence should be addressed.
Future Transp. 2025, 5(4), 130; https://doi.org/10.3390/futuretransp5040130
Submission received: 13 August 2025 / Revised: 20 September 2025 / Accepted: 22 September 2025 / Published: 1 October 2025

Abstract

Virtual reality driving simulators are increasingly used for training, but they still lack effective driver-assistance features, and poorly designed user interfaces (UIs) and guidance systems degrade user performance. In this paper, we investigate image–arrow aids in a virtual reality driving simulator (VRDS) that enable trainees (new drivers) to interpret instructions and take the correct course of action while performing their driving tasks. Image–arrow aids consist of arrows, text, and images that are rendered separately during driving in the VRDS. A total of 45 participants were divided into three groups: G1 (image–arrow aids), G2 (audio and textual aids), and G3 (arrows and textual aids). The results showed that G1 (image–arrow guidance) achieved the best performance, with a mean error rate of 8.1 (SD = 1.23) and a mean completion time of 3.26 min (SD = 0.56). In comparison, G2 (audio and textual aids) had a mean error rate of 10.8 (SD = 1.31) and a completion time of 4.49 min (SD = 0.67), while G3 (arrows and textual aids) had the highest error rate (18.4, SD = 1.43) and the longest completion time (6.51 min, SD = 0.68). The evaluation revealed that G1 performed significantly better than G2 and G3 on both objective performance measures (errors + time) and subjective measures such as usability, ease of use, understanding, and assistance.

1. Introduction

The rapid advancement of virtual reality (VR) technology has opened new opportunities in driver education. In its current state, the technology provides numerous benefits, including simulated driving environments, real-time feedback, and customized, repetitive training sessions [1,2]. Prior research has demonstrated the value of VR-based driving simulators as a cost-effective and hazard-free training method [1]. In VR-based driver training, the ability of drivers to perceive their surroundings, maintain attention, and assess their vehicle’s state is vital for safe driving. For that reason, a well-designed aid system is crucial. Cognitive aids provide real-time information within the driver’s field of view, helping the driver understand the driving environment and guiding them towards optimal behaviors, which makes VR-based driving simulators more useful [2]. Different cognitive aids, such as audio, visual, or haptic aids, can therefore be used to improve driver performance. However, excessive use of cognitive aids in virtual environments (VEs) can also place higher mental pressure on learners, which negatively affects their performance [3,4,5,6].
One of the key components of a driving simulator is the driver interface, which replicates the steering wheel and pedals found in real-world control systems; more advanced setups replicate an automobile’s entire interior. Data acquisition systems gather sensor data on parameters such as speed, braking, and driver reaction times [7,8,9]. Visual display units render high-resolution road views and traffic information with the help of modern GPUs [4,10]. The sound system provides audio cues as feedback, such as engine noise and wheel screeching. Some simulators also use a motion platform to provide feedback in the form of physical forces [10].
The use of driving simulators in training and research is preferred over real road driving because it offers safe, efficient driver training, particularly as growing road networks and vehicle numbers increase accident risks [1]. However, traditional driving simulators lack immersion; virtual reality (VR)-based driving simulators overcome this limitation by offering realistic, immersive environments for assessing and improving driver behavior safely and effectively [2].
VR technology has revolutionized driver research and education with immersive, cost-effective, and hazard-free training, offering real-time feedback and customizable sessions. VR-based driving simulators (VRDSs) help drivers perceive their surroundings and assess vehicle states for safe driving. Well-designed cognitive aids, such as audio, visual, or haptic systems, enhance performance by guiding behaviors, making the simulation more effective and improving overall learning outcomes [3]. However, excessive and disorderly use of these aids can degrade performance. Moreover, the abundance of information presented in the simulator, including detailed traffic rules, road layouts, and vehicle controls, can overwhelm drivers, further exacerbating cognitive strain [1,2]. This is where our study comes in.
This study was conducted with the aim of developing and evaluating a VR-based driving simulator that implements a particular paradigm of cognitive aids, called image–arrow aids, to enhance the skills and performance of trainees (new drivers). The specific objectives included investigating the effectiveness of image–arrow aids in a virtual reality driving simulator (VRDS) in improving trainees’ performance, enabling drivers to interpret instructions on their own within a training environment, and comparing their effectiveness with that of other aids. We compared the image–arrow aids with audio + textual aids and arrow + textual aids. Our proposed image–arrow aids differ from these other aids in that they combine realistic visual scenes with short textual hints and arrows indicating, for instance, the direction in which a turn should be made. In addition, on-screen text, mostly in combination with images, keeps trainees’ attention on the execution of the driving tasks so that they can perform them effectively. The proposed image–arrow aids are designed specifically for novice drivers, who may require more contextual understanding to decide on the correct course of action.

2. Literature Review

Current VR-Based Driving Simulators

There has been much research conducted on VR-based driving simulators with the aim of improving the cognitive learning of novice drivers in driving education.
In 2019, Lia Morra et al. [3] designed a virtual reality-based autonomous driving simulator that provides visual feedback—i.e., iconic cues—telling the user what action the car is about to take and why. The experiment was evaluated through subjective questionnaires administered after driving, along with physiological data gathered during driving (i.e., galvanic skin response, GSR). The results suggested that a Head-Up Display (HUD) with more visual cues provided a less stressful driving experience, despite a slight increase in mental burden, and that users felt more trusting toward autonomous driving systems. These iconic cues consist of simple symbols (e.g., a stop sign, a left-turn arrow, dashboard-like indicators), potentially reducing visual clutter in interfaces. Sogol Masoumzadeh et al. [7] designed a virtual reality driving simulator as a serious game aimed at improving the spatial cognition deficits associated with aging and dementia. Over a two-week intervention period, 11 participants with varying levels of dementia engaged in daily VRDS sessions. The results indicated a significant improvement in spatial cognition, as assessed by a VR replica of the Morris Water test, with participants demonstrating a 44.4% increase in normalized correct trajectory findings post-intervention. Participants also progressed in game levels and spatial learning scores throughout the sessions, alongside an improvement in mood. These findings underscore the value of incorporating VR-based serious games into training programs for individuals with dementia, with implications for improving cognitive function and overall well-being in this population.
Régis Lobjois et al. [9] compared driving behavior and mental workload between driving in an affordable simulator and driving on real roads. The results showed that driving speed in the simulator closely matched real-world driving speeds. However, measures of mental workload, such as blink frequency, response times to secondary tasks, and subjective ratings, consistently indicated higher levels in the simulator. Pietra et al. [11] developed a prototype to assess how visual and haptic feedback affected eco-driving behavior using immersive VR technology. Implemented in Unity, the simulator included diverse driving scenarios and sensory feedback mechanisms delivered via an Oculus Rift headset. Overall, the system showed promise in guiding drivers toward eco-sustainable behavior, with haptic feedback being particularly effective.
In 2021, a study by Xin Zou et al. [12] focused on how user experience differs between on-road and static simulations, both with and without a VR headset. The parameters observed and measured in this study included presence, arousal, simulator sickness, and task workload. The subjective evaluation showed that users had a realistic and immersive driving experience. Ribeiro et al. [13] introduced a VR-based Advanced Driver Assistance System (ADAS) simulator called VISTA-Sim to develop and evaluate personalized driver assistance. This study specifically targeted truck drivers who need to dock at distribution centers in a timely and orderly fashion. Promising preliminary results indicated that the platform provides a means to automatically learn from a driver’s performance. An accessible, open-source virtual reality driving simulator (ADRIS), designed for individuals with sensory-motor disabilities, was also developed [14]. ADRIS provides a realistic driving experience while collecting behavioral and physiological data for training and assessing driving skills in various populations, including those with disabilities, new drivers, and individuals with attentional deficits, thereby facilitating independent living and transportation. Taeho Oh et al. [15] examined how inexperienced drivers navigate the hook-turn intersections found in Melbourne, Australia, using a virtual reality driving simulator. The simulator offered an immersive driving experience, revealing higher collision risks for human-driven cars compared to computer-driven ones. Participant feedback indicated that the simulator was realistic and useful for beginner drivers, highlighting its potential for safety research and training.
Numerous virtual driving simulators with aids have been proposed to train users in driving cars while interpreting visual and auditory cues [16,17,18,19,20], yet improvements in the design of driving aids are still required [17,18,19,20,21,22]. In this study, we investigate the effectiveness of image–arrow aids in a virtual reality driving simulator (VRDS) in improving trainees’ performance in a manner that enables drivers to interpret instructions on their own within a training environment. We compared the image–arrow aids with audio and textual aids and with arrow and textual aids.

3. Materials and Methods

3.1. Image–Arrow Aid Virtual Reality Driving Simulator (Image–Arrow Aid VRDS)

This study presents an image–arrow aid virtual reality driving simulator (image–arrow aid VRDS) for training novice drivers while minimizing cognitive load. A selected collection of cognitive aids is used during training to enhance performance with the simulator.
The image–arrow aid VRDS allows users to explore a virtual environment while following predetermined routes and obeying traffic rules and driving instructions. Through this, novice drivers learn the basic principles of real-life driving. The simulator uses multimodal aids, for example, visual aids such as arrows, auditory cues for indicators and warnings, and images tailored to different driving situations.
As people use the simulator, the cognitive aids break difficult navigation tasks into smaller steps, making them easier to interpret and to execute through the proper driving action. The image–arrow aid VRDS works toward enhancing trainees’ performance and their ability to stay focused on what matters, ensuring that they do not become lost in minute details; this allows for enhanced situational awareness and better decision-making.
This study was carried out on the MA-VRDS (image–arrow virtual reality driving simulator) prototype. This VR-based prototype was developed with the Unity game engine and incorporates cognitive aids and a realistic environment to teach the basics of driving under real-world conditions. The simulator uses tailored visual, audio, and image aids to provide guidance for each specific driving scenario, improving users’ comprehension and decision-making abilities. Figure 1 provides a visualization of how the MA-VRDS operates.

3.1.1. User Interface for Driving

The user interface in the image–arrow aid VRDS is designed to simulate the real experience of driving a vehicle. Figure 2 depicts the interface and the driver’s view, showing how these elements are integrated into an interactive and educational driving simulator, and underlines the lifelike aspect of the MA-VRDS: users are surrounded by a realistic driving environment that approximates what is perceived in a real vehicle. Users wear a Meta Quest 2 headset to view the virtual world of the image–arrow aid VRDS and control the car with a Logitech steering wheel and pedals, mimicking the movements and interactions of an actual vehicle. As users navigate the virtual world, they are presented with real-time driving data on a virtual dashboard. This includes essential information like the following:
  • Speedometer: Displays the current speed of the car, allowing users to assess and regulate their speed according to road conditions and traffic rules.
  • Gear Status Indicator: Notifies drivers of the car’s current gear, supporting maximum performance and fuel efficiency.
Furthermore, a performance UI panel offers feedback on the drivers’ actions. It indicates how many driving errors the user made, how many tasks were completed correctly, and the total driving time. This feedback can help drivers reflect on their skills, improvement, and development over the course of the simulation.

3.1.2. Virtual Environment and Map

The environment in the MA-VRDS prototype is a 3D rendition of a small part of an urban city that offers a comprehensive training ground for new drivers. This 3D-rendered virtual city features an extensive network of roads, intersections, and sidewalks, enabling users to practice various driving maneuvers like signaling, turning, and maintaining proper lane discipline while developing traffic sense. Though there is no traffic, the environment simulates real-world challenges to prepare drivers for real-life traffic situations. The top-view map of the virtual environment can be seen in Figure 3.
The map comprises some important features, such as the following:
  • Intersections: These are points where two or more roads intersect and where drivers can exercise turning, lane-changing, and obeying traffic signal rules.
  • Sidewalks: These are designated non-drivable areas that promote pedestrian awareness and help drivers stay alert.
  • Speed Bumps: Small bumps installed on the roads that train drivers to moderate their speed, producing a smoother ride (Figure 4).
  • Traffic Lights: A simulated system of traffic signals that teaches individuals to observe and respond to changes in traffic flow accordingly, further enhancing the driver’s decision-making and situational awareness, as shown in Figure 5.
  • Pedestrian Crossings: These emphasize the need for drivers to yield to pedestrians. This practice cultivates respect for pedestrian rights and enhances drivers’ awareness of their surroundings, promoting a culture of safety, as shown in Figure 6.
Through this immersive virtual environment, users engage in comprehensive training that prepares them for the complexities of real-world driving, contributing to safer and more skilled drivers on the road.

3.1.3. Image–Arrow Aids

Image–arrow aids are a collection of cognitive aids that support training by providing drivers with real-time contextual assistance in different driving situations, depending on the virtual environment they operate within. They are specifically designed to respond to specific driving situations in order to help trainees execute their driving tasks effectively. In this context, image–arrow aids make driving easier through visual features, for instance, arrows indicating the direction in which a turn should be made. In addition, on-screen text, mostly in combination with images, keeps trainees’ attention on the execution of the driving tasks so that they can perform them effectively. The careful selection of the right types of aids makes an important difference in how effectively users learn and retain information. Each aid serves a particular goal in facilitating user interaction and making the training process meaningful, enabling learners to gain the knowledge and skills needed for appropriate behavior in actual driving situations. Arrow + text aids enhance the learning experience by combining directional arrows and written details in the virtual reality driving simulator. Clear arrows indicating the direction a learner must go, such as turning left or right or stopping, are supplemented by short explanatory texts. Through this interplay between visual arrows and written instructions, learners quickly identify the necessary maneuvers and better understand their consequences in real driving contexts. The synergy between the arrows and the text enhances situational awareness and helps learners process important information faster, making them more responsive and safer in a variety of driving situations, as shown in Figure 7a. In addition, Figure 7b depicts abstract driving concepts and processes using image sequences in the VR driving simulator.
Examples of image-based guidance include tricky maneuvers such as merging onto a busy highway, completing a three-point turn, or parallel parking. Each of these maneuvers is broken down into clear, sequential steps, enabling easy visual expression of the required movement coordination and of other road users’ actions. The images present these training processes in a non-intimidating way while still conveying how the maneuvers unfold in real life. This method helps learners understand the timing and accuracy that many of these maneuvers require, creating an atmosphere of confidence while they practice these skills in a safe, controlled space. This makes for an effective learning experience using VR for training, leaving learners better equipped to handle real-life driving situations.

3.2. Participants

In this study, 45 participants (26 males and 19 females; aged 18–23) from various institutions took part in the evaluation. They were divided into three independent groups (G1, G2, and G3). A balanced Latin-square scheme was applied to determine the order in which the different cognitive aids (Image–Arrow Aids (G1), Audio–Textual Aids (G2), and Arrow–Textual Aids (G3)) were presented across groups. This approach ensured the counterbalancing of task sequences and minimized order effects, while maintaining a between-subjects design. The group composition included six females in G1, seven in G2, and six in G3. Information on participants’ driving and gaming experience was also recorded to ensure comparability across groups. Although no formal statistical tests were applied, a descriptive comparison showed that the three groups were broadly similar in their background experience. This helped minimize the risk of these factors acting as potential confounders in the study.
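The Latin-square counterbalancing described above can be sketched in a few lines. This is a minimal illustration, not the study's actual code: the function name and condition labels are ours, and we show only the basic Latin-square rotation (for an odd number of conditions, a fully balanced Williams design would additionally append the reversed rows).

```python
def latin_square_orders(conditions):
    """Return n presentation orders for n conditions using a
    Latin-square rotation (row i: i, i-1, i+1, i-2, ...).
    Each condition appears exactly once per row and per column."""
    n = len(conditions)
    orders = []
    for i in range(n):
        row, up, down = [], 0, 0
        for col in range(n):
            if col % 2 == 0:
                idx = (i + up) % n
                up += 1
            else:
                down += 1
                idx = (i - down) % n
            row.append(conditions[idx])
        orders.append(row)
    return orders

# Illustrative condition labels for the three aid types.
orders = latin_square_orders(["image-arrow", "audio-textual", "arrow-textual"])
```

Each row of the resulting square can serve as one group's task sequence, so that every condition occupies each serial position exactly once across groups.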
Each group interacted with a different prototype of the virtual reality driving simulator to evaluate the impact of various multimodal aids on driving performance, as shown in Figure 8. The user study for the virtual reality driving simulator was conducted in three different groups, each group experiencing a distinct prototype, to assess the impact of three different aids on driving performance.
  • Image–Arrow Aids (G1): These have already been introduced above in Section 3.1.3. In the image-arrow aids, the arrow indicates the direction where a turn should be made along with text on a screen, mostly in combination with images, to ensure that the trainees’ attention to the execution of such driving tasks is sufficient for them to perform the tasks effectively. Image-arrow aids enable trainees to gain knowledge and skills for appropriate behavior during VR driving simulation.
  • Audio–Textual Aids (G2): In this condition, the participants engaged with a simulator that provided auditory instructions along with textual prompts. When the trainees navigated in the VRDS, an arrow was displayed to assist them with the correct direction, and audio instructions were also used to assist them with correct actions such as using the accelerator and brake pedals to control speed, instructions about road signs, and safety reminders during driving. This combination aimed to assist them by ensuring that they could interpret both arrow and audio cues while addressing challenges like merging into traffic or obeying speed limits. The audio–textual aid VRDS is shown in Figure 9.
  • Arrow–Textual Aids (G3): The third group utilized a simulator that focused solely on arrow-based cognitive aids. In this setup, participants received live visual directions and feedback as they drove through various scenarios. The emphasis on direction allowed learners to understand their sense of location relative to their immediate environment, helping them make informed decisions regarding traffic intersections, pedestrian crossings, and other driving dynamics. This directional approach aims to enhance navigational skills in a driving context. The arrow–textual aid VRDS is shown in Figure 10.


3.3. Procedure

The procedure for testing and analyzing the effectiveness of image–arrow aids in a virtual reality-based driving simulator (VRDS) was conducted in a biomedical lab at Air University Islamabad. The participants were first introduced to the study and, after providing informed consent, were briefed on the procedure and on how the VR equipment operated, to make sure that they understood the controls and the setup.
The participants were divided into three groups based on the different types of multimodal aids they would use: Group 1 (G1) tested an image–arrow VRDS; Group 2 (G2) used auditory and textual aids; and Group 3 (G3) was exposed to a simulator with arrow and textual cues. Throughout the experiment, all participants navigated the same fixed driving route consisting of nine maneuvers to ensure consistency across trials. To minimize sequence effects, a balanced Latin square design was employed to counterbalance the order of experimental conditions across groups.
Additionally, subjective data was collected through a questionnaire administered at multiple points in the study. Participants rated their mental load, task complexity, and overall experience with the driving tasks on a five-point Likert scale. After completing the driving scenarios, resting measurements were taken, and a final questionnaire assessed the effectiveness of image–arrow aids.
At the end of the experiment, the participants were thanked for their involvement, and the study concluded with a debriefing. The data collected was then analyzed to determine the effectiveness of image–arrow aids in improving driving performance.

3.4. Task Description with Driving Routes

The experimental driving route in the VR environment was approximately 900 m in length and incorporated a variety of road segments to simulate realistic urban driving conditions. The route included a total of 9 maneuvers, consisting of 3 left turns, 3 right turns, and 3 U-turns, strategically placed to evaluate driver responses in different simulated traffic contexts. Speed zones were defined within the VR environment, with three distinct limits: 30 km/h (8.33 m/s) for school and residential areas, 60 km/h (16.67 m/s) for standard urban roads, and 80 km/h (22.22 m/s) for main arterial roads. A predefined task checklist, as shown in Table 1, was provided to each participant for their assistance.
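The three speed zones and their metre-per-second equivalents can be captured in a small configuration table. A sketch follows; the zone names are illustrative, while the limits are those stated above:

```python
# Speed limits for the three zone types on the ~900 m route (km/h).
# Zone names are illustrative; the limit values come from the route design.
SPEED_ZONES_KMH = {
    "school_residential": 30,
    "urban": 60,
    "arterial": 80,
}

def kmh_to_ms(kmh: float) -> float:
    """Convert km/h to m/s (1 km/h = 1000 m / 3600 s)."""
    return kmh / 3.6

# Rounded to two decimals these are 8.33, 16.67, and 22.22 m/s.
limits_ms = {zone: round(kmh_to_ms(v), 2) for zone, v in SPEED_ZONES_KMH.items()}
```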

4. Results

4.1. Subjective Measure of the Cognitive Aids in VRDS

Following the completion of the driving tasks, each participant filled out another questionnaire with ratings, focusing on their experiences with the respective driving conditions. The questions listed within the questionnaire regarding the cognitive aids are shown in Table 2. Each question was answered on a five-point Likert scale, ranging from strongly agree to strongly disagree.
For the first question, which related to how easily participants could interpret the simulated driving condition, 34.6% of the participants selected the “strongly agree” option and 37.8% selected “agree” for G1. In G2, 18.2% selected “strongly agree” and 23.1% selected “agree”. For G3, 15.8% of the participants strongly agreed, and 20.3% agreed. These results suggest that G1’s drive-specific cognitive aids helped participants better interpret driving conditions compared to G2 and G3.
For the second question, which addressed the ease of converting simulated traffic scenarios into real driving responses, 36.9% of the participants in G1 selected “strongly agree”, while 28.7% selected “agree”. In G2, the “strongly agree” and “agree” responses were 12.3% and 25.4%, respectively. For G3, 35.0% strongly agreed and 27.2% agreed. These results show that G1 provided effective support for translating simulation into practical actions, though G3 also showed strong perceived usefulness.
For the third question, regarding whether participants could perform driving tasks independently without instructor guidance, 35.9% in G1 strongly agreed, and 38.1% agreed. In G2, 13.1% selected “strongly agree”, and 29.8% selected “agree”. For G3, 38.4% chose “strongly agree”, while 26.8% selected “agree”. This indicates that both G1 and G3 enabled users to function autonomously, with G1 slightly ahead in guided independence. Table 3 and Figure 11 summarize the participants’ feedback across all the groups and conditions regarding cognitive aids. For the fourth question, which asked whether the condition was ideal for future driving simulations, 34.6% of the participants in G1 strongly agreed and 41.5% agreed. In G2, 21.3% strongly agreed, and 44.8% agreed. For G3, only 7.8% chose “strongly agree” while 20.4% agreed. This highlights a stronger preference for G1 and G2 over G3 in terms of suitability for future driving simulations.

4.2. Reliability and Validity Analysis of the Questionnaires

In this part of the analysis, we examined the validity and reliability of the questionnaires (see Table 2) using participants’ feedback. To assess these properties, we calculated the Average Variance Extracted (AVE), Composite Reliability (CR), and Cronbach’s alpha. Following the Fornell and Larcker criterion [23], AVE evaluates convergent validity, with a recommended value greater than 0.50. Internal consistency reliability was assessed through Cronbach’s alpha and CR, both of which should exceed 0.70 [24]. Cronbach’s alpha is particularly suitable for evaluating Likert-scale instruments [25,26,27]. Table 4 presents the AVE, CR, and Cronbach’s alpha values for each group based on participants’ feedback. As shown in Table 4, the square roots of the AVE for the questionnaires exceed the estimated correlation values. Likewise, the Cronbach’s alpha and CR values for all groups (G1, G2, and G3) are above 0.70, indicating good internal consistency and strong discriminant validity across the questionnaires.
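As a concrete illustration of the internal-consistency check, Cronbach's alpha can be computed directly from the item ratings. This is a generic sketch, not the study's analysis script; the data layout (one list of ratings per questionnaire item, participants in the same order) is our assumption:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a questionnaire.

    items: list of items, each a list with one rating per participant
    (participants in the same order in every item)."""
    k = len(items)
    sum_item_vars = sum(statistics.variance(item) for item in items)
    # Per-participant total score across all items.
    totals = [sum(ratings) for ratings in zip(*items)]
    return (k / (k - 1)) * (1 - sum_item_vars / statistics.variance(totals))
```

Values above 0.70, as reported for all three groups, are conventionally taken to indicate acceptable internal consistency.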
In addition to the AVE, CR, and Cronbach’s alpha, a correlation matrix was computed to assess the discriminant validity. The square root of the AVE values exceeded the inter-construct correlations, confirming that each construct was distinct and measured unique aspects of the questionnaire. Table 5 presents the inter-construct correlations along with the square roots of the AVE.

4.3. Performance Metrics

This part of the analysis assessed the performance of trainees while driving in the three different cognitive aid VRDSs. The data recorded in this section include the number of errors committed as well as task completion times during the driving simulations. To assess overall driving performance, a composite score combining driving errors and completion times was computed: both variables were standardized, and the resulting values were summed to produce a single performance index (errors + time composite). This ensured that accuracy and efficiency contributed equally to the analysis.
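The errors + time composite described above amounts to summing per-participant z-scores of the two measures. A minimal sketch, with variable names of our choosing:

```python
import statistics

def zscores(values):
    """Standardize a list of values to zero mean and unit (sample) variance."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]

def composite_index(errors, times):
    """Sum of standardized error counts and completion times;
    lower values indicate better overall performance."""
    return [e + t for e, t in zip(zscores(errors), zscores(times))]
```

Because both inputs are standardized before summing, neither measure dominates the index regardless of its original units or spread.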
Number of Errors during Driving Task Execution: The number of errors committed by the trainees during their driving tasks was recorded. These errors included driving in the wrong direction (entering a lane against the direction of travel at any point was counted as one error per occurrence); colliding with obstacles (contact with road boundaries, obstacles, pedestrians, or signboards was counted as one error per collision event); and exceeding the recommended speed limit (30, 60, or 80 km/h) by more than 5 km/h for longer than 3 s (counted as one error). All errors were weighted equally, with each instance counted as one error, and the total error score per participant was computed as the sum of all error instances. The p values of the mean errors were calculated for the three groups using analysis of variance (ANOVA). The ANOVA of the errors in task completion for the three groups is statistically significant (F(2, 43) = 16.23, p = 0.001 (p < 0.05), η2 = 0.44, 95% CI [0.05, 0.32]). Comparing the errors in task completion of G1 (mean, 8.1 errors; SD, 1.23) with those of G2 and G3 (p = 0.002, p < 0.05) yielded a significant difference, indicating that, in terms of errors in task completion, trainees in G1 (image–arrow VRDS) performed significantly better than those in G2 and G3. Likewise, comparing the errors of G2 (mean, 10.8 errors; SD, 1.31) with those of G3 (p = 0.004, p < 0.05) showed that G2 committed significantly fewer errors than G3. In addition, Tukey–Kramer post hoc analysis indicates that the numbers of errors differ significantly between G1 and G2 (p = 0.002), G1 and G3 (p = 0.001), and G2 and G3 (p = 0.004).
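The speeding rule above (more than 5 km/h over the limit for longer than 3 s counts as one error per stretch) can be sketched as a per-frame check. The function name and the 0.1 s sampling interval are our assumptions, not details from the study:

```python
def count_speeding_errors(speeds_kmh, limit_kmh, dt=0.1,
                          tolerance_kmh=5.0, min_duration_s=3.0):
    """Count one error per continuous stretch in which the sampled
    speed exceeds limit + tolerance for longer than min_duration_s.

    speeds_kmh: speed samples taken every dt seconds."""
    errors, over_time, counted = 0, 0.0, False
    for s in speeds_kmh:
        if s > limit_kmh + tolerance_kmh:
            over_time += dt
            if over_time > min_duration_s and not counted:
                errors += 1          # count this stretch only once
                counted = True
        else:
            over_time, counted = 0.0, False  # stretch ended; reset
    return errors
```

Under this rule, a driver who dips back under the threshold and then exceeds it again for another 3 s accrues a second error, matching the per-instance counting described above.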
The means and standard deviations of G1, G2, and G3 based on the errors are provided in Table 6 and Figure 12.
Task Completion Time: The average task completion time and its standard deviation for each group were also examined to analyze the trainees' performance in the VRDSs. The mean times of the three groups were compared using a one-way ANOVA. The ANOVA of task completion time was statistically significant, F(2, 42) = 65.34, p = 0.004 (p < 0.05), η2 = 0.76, 95% CI [0.28, 0.59]. A comparative analysis revealed significant differences in task completion times among the three groups. Pairwise comparison of the completion time of G1 (mean, 3.26 min; SD, 0.56) with those of G2 and G3 was significant (p = 0.001, p < 0.05), indicating that, in terms of task completion time, trainees in G1 (image–arrow VRDS) performed significantly better than those in G2 and G3. Similarly, the comparison of G2 (mean, 4.49 min; SD, 0.67) with G3 was significant (p = 0.003, p < 0.05), showing that G2 completed the tasks faster than G3. The Tukey–Kramer post hoc analysis confirms that the completion times differ significantly between G1 and G2 (p = 0.001), G1 and G3 (p = 0.001), and G2 and G3 (p = 0.003).
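The between-groups test used here can be sketched as a one-way ANOVA over three independent samples, with eta squared recovered from the F statistic and its degrees of freedom. The synthetic samples below are drawn from distributions matching the reported group means and SDs; they are stand-ins, not the study's recorded data.

```python
# Minimal sketch of a one-way ANOVA across three groups of 15, with
# eta squared derived from F and its degrees of freedom.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
g1 = rng.normal(3.26, 0.56, 15)  # image-arrow aids (illustrative)
g2 = rng.normal(4.49, 0.67, 15)  # audio + textual aids (illustrative)
g3 = rng.normal(6.51, 0.68, 15)  # arrows + textual aids (illustrative)

f_stat, p_value = stats.f_oneway(g1, g2, g3)

# eta^2 from F: with 3 groups of 15, df_between = 2 and df_within = 42.
df_between, df_within = 2, 42
eta_sq = (f_stat * df_between) / (f_stat * df_between + df_within)
```

A significant omnibus F only says the group means differ somewhere; pairwise post hoc tests (such as the Tukey–Kramer procedure reported above) are still needed to locate which pairs differ.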
The mean task completion times and standard deviations for each group are provided in Table 6 and graphed in Figure 13.
Composite Index (Errors + Time): A composite score was calculated by summing the standardized values of mean errors and completion times. ANOVA indicated significant group differences, F(2, 42) = 41.26, p < 0.001, η2 = 0.66, 95% CI [0.23, 0.49]. G1 (errors: 8.1 ± 1.23; time: 3.26 ± 0.56 min) achieved the lowest composite score (best performance), followed by G2 (errors: 10.8 ± 1.31; time: 4.49 ± 0.67 min) and G3 (errors: 18.4 ± 1.43; time: 6.51 ± 0.68 min).
From all the above results, we can conclude that G1 (image–arrow aids) provided the most effective environment for improving task performance in a VR driving simulator. Participants in G1 were less prone to errors and completed their tasks in less time than those in G2 and G3. In addition, the subjective evaluation showed higher satisfaction among G1 participants than among those in G2 and G3. Overall, the performance of G1 was significantly better than that of G2 and G3 in terms of performance measures (errors + time) and subjective factors such as usability, easiness, understanding, and assistance.

5. Conclusions and Future Work

Our study’s findings show that using image–arrow aids in a VRDS offers significant advantages over the other aids examined (arrow–textual and audio–textual aids). Participants in the image–arrow aid VRDS (G1) performed better on numerous parameters, including subjective factors, completion time, and errors, while carrying out tasks in the VRDS. These findings suggest that the image–arrow aid VRDS improves trainees’ efficiency and performance in a virtual reality driving environment. G1 participants also reported better feedback than G2 and G3 regarding retaining learning, interpreting aids, recognizing traffic rules, and handling driving tasks in the VRDS, underlining the importance of image–arrow aids in simplifying the learning process and providing a more user-friendly environment for trainees. G1 performed best in terms of mean task completion time (3.26 min) and mean errors during task execution (8.1 errors); G2 completed the driving task with a mean time of 4.49 min and mean errors of 10.8, while G3 required a mean time of 6.51 min with mean errors of 18.4.
The current work focuses on urban road scenarios to establish a controlled environment for comparing image–arrow aids with arrow–textual and audio–textual aids. In future work, we will expand the experimental setup to various driving conditions, such as rural roads and nighttime scenarios, to evaluate the proposed method across a wider range of environments. The current research is also limited to a short-term evaluation; we will assess the long-term effects of the proposed method in future work. We further plan to evaluate cognitive load through both subjective analysis and objective EEG measurements to directly quantify cognitive load and assess usability, and to introduce dynamic traffic elements to better simulate real-world driving complexity. Finally, although the participants completed a short familiarization session to become accustomed to the VR setup, detailed procedures for managing potential motion sickness were not included in this study. Future work will incorporate standardized motion-sickness handling protocols and objective monitoring to further improve ecological validity.

Author Contributions

Conceptualization, N.A. and M.A.A.; methodology, N.A., S.U. and H.R.; validation, D.K. and S.U.; writing—original draft preparation, N.A.; writing—review and editing, N.A., S.U. and D.K.; supervision, N.A. and H.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No formal dataset is associated with this manuscript. However, related materials such as 3D models, images, and code are available from the first author upon reasonable request.

Acknowledgments

We would like to express our sincere gratitude to the Department of Biomedical Engineering at Air University Islamabad for their valuable support and facilitation in conducting our experiments. Their assistance played a crucial role in the successful execution of our work.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
VR — Virtual reality
VE — Virtual environments
VRDS — Virtual reality-based driving simulator
VLE — Virtual learning environments
GPU — Graphics processing unit
HMD — Head-mounted display
Image–arrow aid VRDS — Image–arrow aid virtual reality driving simulator

Figure 1. Architecture model of the image–arrow aid VRDS.
Figure 2. Driver’s perspective in VR (top-left side). Interface for the driver (top-right side). Performance UI panel (bottom side).
Figure 3. Top-down view of the road map (left); intended path for the driver (right).
Figure 4. Speed bumps for maintaining speed.
Figure 5. Traffic signal system.
Figure 6. Pedestrian crossing.
Figure 7. The inside scenario of image–arrow VRDS; (a) represents an arrow with textual instructions, and (b) represents an image with textual instructions about the actual action during driving.
Figure 8. Experimental setup.
Figure 9. The inside scenario of audio–textual aid VRDS; (a) represents textual instructions, and (b) represents audio about the actual action during driving.
Figure 10. The inside scenario of arrow–textual aid VRDS; (a) represents an arrow with textual instructions, and (b) represents textual aids about the actual action during driving.
Figure 11. Subjective analysis regarding cognitive aids: (a) Question about ease of interpreting aids (Q1). (b) Question about translating aids to actions (Q2). (c) Question about performing task without instructor (Q3). (d) Question about suitability for future use (Q4).
Figure 12. No. of errors performed by trainees during driving.
Figure 13. Mean task completion time by trainees during driving in VRDS.
Table 1. Predefined task checklist during driving in VRDS.
Task No. | Task Description
T1 | Maintain speed within the posted limit for each zone, i.e., 30, 60, and 80 km/h (equivalent to 8.33, 16.67, and 22.22 m/s).
T2 | Follow all virtual traffic signals and signs.
T3 | Perform lane changes when required.
T4 | Complete all maneuvers correctly, i.e., left, right, and U-turns.
T5 | Avoid collisions with obstacles, pedestrians, sign boards, etc.
T6 | Park in the designated area at the end of the route.
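The unit conversions stated in T1 follow directly from dividing km/h by 3.6 (1000 m per km over 3600 s per h), which can be verified in a couple of lines; the helper name is ours, for illustration.

```python
# Quick check of the speed-limit conversions listed in Table 1.
def kmh_to_ms(v_kmh):
    """Convert a speed from km/h to m/s (divide by 3.6)."""
    return v_kmh / 3.6

# The three zone limits, rounded to two decimals as in Table 1.
limits_ms = [round(kmh_to_ms(v), 2) for v in (30, 60, 80)]
```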
Table 2. List of questions within the questionnaire regarding the cognitive aids.
Q. No | Question
Q1 | In the VR driving simulator, this aid was easy to understand while navigating.
Q2 | While performing driving tasks in VR, this aid was easy to translate into driving actions.
Q3 | With this aid, I was able to complete driving tasks without the need for any instructor.
Q4 | I believe this aid is the most optimal option for future VR-based driving simulators.
Table 3. Responses of participants (total sample size n = 45) regarding the cognitive aids in VRDS.
Q. No | Group | Strongly Disagree | Disagree | Neutral | Agree | Strongly Agree
Q1 | G1 | 8.4 | 17.4 | 1.8 | 37.8 | 34.6
Q1 | G2 | 11.5 | 28.0 | 19.2 | 23.1 | 18.2
Q1 | G3 | 21.5 | 34.1 | 8.3 | 20.3 | 15.8
Q2 | G1 | 5.9 | 16.9 | 11.6 | 28.7 | 36.9
Q2 | G2 | 29.9 | 27.0 | 5.4 | 25.4 | 12.3
Q2 | G3 | 0.0 | 24.9 | 12.9 | 27.2 | 35.0
Q3 | G1 | 0.0 | 19.0 | 7.0 | 38.1 | 35.9
Q3 | G2 | 26.6 | 30.5 | 0.0 | 29.8 | 13.1
Q3 | G3 | 8.5 | 17.1 | 9.2 | 26.8 | 38.4
Q4 | G1 | 0.5 | 23.4 | 0.0 | 41.5 | 34.6
Q4 | G2 | 9.2 | 20.7 | 4.0 | 44.8 | 21.3
Q4 | G3 | 18.9 | 39.0 | 13.9 | 20.4 | 7.8
Note: Percentages are calculated with respect to the number of participants in each group (n ≈ 15) and rounded to one decimal place.
Table 4. Validity and reliability of questionnaires.
Questionnaire | Average Variance Extracted | Composite Reliability | Cronbach's Alpha
Cognitive Aids | 0.644 | 0.825 | 0.732
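Cronbach's alpha, the internal-consistency measure reported in Table 4, can be computed from an item-response matrix as sketched below. The function name and the small Likert-response matrix are hypothetical illustrations, not the study's questionnaire data.

```python
# Sketch of Cronbach's alpha from a participants x items response matrix.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array-like, shape (n_participants, n_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Illustrative Likert responses: 5 participants, 4 items.
responses = [[4, 5, 4, 5],
             [3, 3, 4, 3],
             [5, 5, 5, 4],
             [2, 2, 3, 2],
             [4, 4, 4, 5]]
alpha = cronbach_alpha(responses)
```

Values around 0.7 and above, as in Table 4, are conventionally taken to indicate acceptable internal consistency.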
Table 5. Inter-construct correlations of the AVE.
Table 5. Inter-construct correlations of the AVE.
ConstructCognitive AidsG1G2G3
Cognitive Aids0.8020.420.360.39
G10.420.7840.410.34
G20.360.410.8120.37
G30.390.340.370.795
Table 6. Mean task completion times with standard deviations and mean numbers of errors with standard deviations for G1, G2, and G3.
Group | Mean No. of Errors | SD (Errors) | Mean Completion Time (min) | SD (Time)
G1 | 8.1 | 1.23 | 3.26 | 0.56
G2 | 10.8 | 1.31 | 4.49 | 0.67
G3 | 18.4 | 1.43 | 6.51 | 0.68
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
