
Advances in Virtual Reality and Vision for Driving Safety

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 30 October 2026 | Viewed by 3061

Special Issue Editor


Guest Editor
Clinical and Laboratory Applications of Research in Optometry, Department of Optics, University of Granada, 18071 Granada, Spain
Interests: driving safety; road behavior; distracted driving; binocular vision; visual impairment; aging

Special Issue Information

Dear Colleagues,

In recent years, the rapid development of virtual reality (VR), computer vision, and intelligent sensing technologies has opened new opportunities for improving driving safety. Road accidents remain a major global challenge, and innovative solutions are urgently needed to enhance driver awareness, prevent collisions, and support safer mobility. Vision is the most critical sensory function in driving, with impairments such as reduced acuity, visual field loss, and delayed visual processing significantly increasing accident risk. Advances in vision science, combined with VR-based simulation and computer vision systems, enable more accurate assessment of visual performance, early detection of risk factors, and the design of targeted interventions.

We are pleased to invite you to contribute to this Special Issue. It aims to bring together cutting-edge research and practical applications that explore how immersive technologies, vision-based monitoring, and visual function assessment can improve driver performance, assess risks, and ultimately reduce traffic accidents.

For this Special Issue, original research articles and reviews are welcome. Research areas may include (but are not limited to) the following:

  1. Virtual reality applications for driver training, rehabilitation, and simulation;
  2. Vision-based driver monitoring, fatigue detection, and behavior analysis;
  3. Visual function assessment, impairment detection, and their impact on driving safety;
  4. Intelligent road perception, advanced driver assistance systems (ADASs), and autonomous driving.

We look forward to receiving your valuable contributions.

Dr. Carolina Ortiz
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • virtual reality
  • driving safety
  • computer vision
  • visual function
  • driver monitoring
  • simulation
  • road perception
  • distracted driving
  • ADAS
  • autonomous vehicles

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

19 pages, 586 KB  
Article
Emergent Pedestrian Safety in a World-Model Driving Agent Under Adversarial Interaction Without Explicit Safety Rewards
by Stefan Zlatinov, Gorjan Nadzinski, Vesna Ojleska Latkoska, Dushko Stavrov and Mile Stankovski
Appl. Sci. 2026, 16(8), 3915; https://doi.org/10.3390/app16083915 - 17 Apr 2026
Viewed by 248
Abstract
Pedestrian interaction remains a central safety challenge for autonomous driving, particularly under non-compliant or adversarial pedestrian behavior. Existing research and evaluations predominantly test against rule-following pedestrians, leaving a gap in understanding how learning-based agents handle worst-case interactions. We introduce the Jaywalkers Library, a novel configurable benchmark in CARLA with three adversarial pedestrian archetypes (Intruder, Indecisive Crosser, and Protester). We evaluate a DreamerV3 agent trained with sparse rewards, where the only pedestrian-specific signal is a terminal collision penalty. Evaluation employs a frozen-policy protocol with explicit train–test separation. Safety behavior is decomposed into endpoint outcomes, evasion dynamics, and efficiency costs. Under nominal conditions, the agent achieves high route completion and generalizes to an unseen town, whereas under adversarial exposure, an archetype-sensitive evasion strategy emerges. The agent swerves at speed against dynamic pedestrians but decelerates against the slow-moving Protester. Collision rates reveal a counterintuitive difficulty ordering in which the Protester is the hardest, followed by the Intruder, with the Indecisive Crosser as the most survivable. These findings show that a sparse terminal penalty suffices for emergent pedestrian avoidance in a world-model agent, but that effectiveness is bounded by the world model’s ability to predict pedestrian persistence. Full article
(This article belongs to the Special Issue Advances in Virtual Reality and Vision for Driving Safety)
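The sparse reward structure described in the abstract above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the only pedestrian-specific signal is a terminal collision penalty, and the function name, penalty magnitude, and progress term are all assumptions made for the example.

```python
# Hypothetical sketch of a sparse reward with a terminal collision penalty,
# in the spirit of the training setup the abstract describes. Magnitudes
# are illustrative only.
def sparse_reward(progress_delta: float, collided: bool,
                  collision_penalty: float = -100.0) -> tuple[float, bool]:
    """Return (reward, done) for one environment step."""
    if collided:
        return collision_penalty, True   # terminal penalty; episode ends
    return progress_delta, False         # route-progress term only

# Example: a step that advances 0.5 m along the route without a collision
r, done = sparse_reward(0.5, collided=False)   # → (0.5, False)
r, done = sparse_reward(0.0, collided=True)    # → (-100.0, True)
```

The point of the design is that no shaped "keep distance from pedestrians" term is present; any avoidance behavior that appears is emergent from the world model's predictions.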

26 pages, 18451 KB  
Article
Supervisory Gaze Behaviour Under Different Automation Durations in Level 2 Driving: A First-Order Transition Analysis
by Hanna Chouchane, Jooheong Lee, Yuki Sakamura, Hiroki Nakamura, Genya Abe and Makoto Itoh
Appl. Sci. 2026, 16(3), 1401; https://doi.org/10.3390/app16031401 - 29 Jan 2026
Cited by 1 | Viewed by 447
Abstract
Level 2 driving automation requires continuous driver supervision, yet common attention metrics often capture gaze allocation rather than the structure of supervisory scanning. This study proposes a quantitative approach for describing supervisory gaze organisation using first-order Markov chain analysis of gaze transitions. Forty-three licensed drivers (N=43) completed a simulator drive with Level 2 automation for either 5 or 15 min (between-subjects), representing typical Japanese expressway intervals between service areas. Supervisory behaviour was analysed at the scenario level, without introducing secondary tasks, allowing attentional drift to emerge naturally under automation. Eye-tracking data were manually annotated frame-by-frame at 60 Hz and modelled as transition probability matrices across key Areas of Interest (AOIs): road centre, mirrors, periphery, and the human–machine interface. Compared with the 5 min condition, the 15 min condition showed fewer mirror-to-road-centre recovery transitions and slower System-Recognised Reaction Time (SRRT) at the takeover request. These patterns suggest a gradual weakening of supervisory gaze organisation rather than a simple loss of attention. The proposed framework offers a reproducible way to calibrate driver monitoring and evaluate human–machine interfaces by linking gaze transition probabilities to takeover readiness. By quantifying how supervisory behaviour reorganises under extended automation in realistic driving scenarios, this study provides a practical basis for the development of safety-relevant driver monitoring indicators in Level 2 driver assistance systems. Full article
(This article belongs to the Special Issue Advances in Virtual Reality and Vision for Driving Safety)
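The first-order Markov chain analysis described in the abstract above reduces to estimating a transition probability matrix over gaze Areas of Interest. A minimal sketch, assuming the four AOIs named in the abstract; the sample gaze sequence is invented for illustration:

```python
import numpy as np

# Estimate a first-order transition probability matrix from a sequence of
# gaze AOI labels. AOI order and the example sequence are illustrative.
AOIS = ["road_centre", "mirrors", "periphery", "hmi"]

def transition_matrix(gaze_seq, aois=AOIS):
    idx = {a: i for i, a in enumerate(aois)}
    counts = np.zeros((len(aois), len(aois)))
    for a, b in zip(gaze_seq, gaze_seq[1:]):
        counts[idx[a], idx[b]] += 1
    # Normalise each row to probabilities; rows with no outgoing
    # transitions are left as zeros.
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

seq = ["road_centre", "mirrors", "road_centre", "hmi", "road_centre"]
P = transition_matrix(seq)
# P[1, 0] is the mirrors-to-road-centre recovery probability, the kind of
# transition the abstract reports as reduced after 15 min of automation.
```

In practice each participant's 60 Hz annotated gaze stream would yield one such matrix per condition, and the matrices would then be compared between the 5 min and 15 min groups.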

14 pages, 1179 KB  
Article
Relationship Between Humphrey Automated Perimetry and Virtual Reality-Based Perimetry: A Constant dB Offset and Normative Data
by Juan E. Cedrún-Sánchez, Ricardo Bernárdez-Vilaboa, Laura Sánchez-Alamillos, Marina Medina-Galdeano, Carla Otero-Currás and F. Javier Povedano-Montero
Appl. Sci. 2026, 16(3), 1351; https://doi.org/10.3390/app16031351 - 29 Jan 2026
Viewed by 502
Abstract
Background: Automated visual field testing is fundamental in ophthalmology, but differences in stimulus scaling and luminance between devices hinder direct comparison of sensitivity values. Virtual reality (VR)-based perimetry has emerged as a portable alternative, yet its relationship with conventional perimetry requires clarification. Methods: This prospective cross-sectional study included 60 healthy participants stratified into younger (<50 years) and older (≥50 years) groups. Differential light sensitivity was assessed in the right eye using Humphrey Automated Perimetry (HFA) with the 30-2 test pattern and a VR-based perimeter (Dicopt-Pro) in randomized order. Pointwise sensitivity values were analyzed using linear regression and Bland–Altman analysis, and sensitivity profiles were examined as a function of visual field eccentricity. Results: A strong linear relationship was observed between HFA and Dicopt-Pro sensitivity values in both age groups (R ≥ 0.96). A systematic and approximately constant inter-device offset was identified, with mean differences of 15.7 ± 0.4 dB in younger subjects and 13.7 ± 0.5 dB in older subjects. Bland–Altman analysis showed consistent bias without proportional error. Dicopt-Pro sensitivity profiles demonstrated an eccentricity-dependent decline comparable to HFA while preserving age-related differences. Conclusions: VR-based perimetry using Dicopt-Pro shows sensitivity patterns closely aligned with conventional Humphrey perimetry when a systematic, age-specific inter-device offset is considered, enabling clinically meaningful interpretation of Dicopt-Pro results within an HFA-referenced framework. Full article
(This article belongs to the Special Issue Advances in Virtual Reality and Vision for Driving Safety)
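The Bland–Altman comparison underlying the abstract above can be sketched in a few lines. The data below are synthetic, constructed so that one device reads a roughly constant 15 dB lower than the other; real input would be the 76 pointwise sensitivities of an HFA 30-2 pattern and the matched VR-perimeter values.

```python
import numpy as np

# Bland-Altman analysis: mean difference (bias) and 95% limits of
# agreement between two paired measurement sets.
def bland_altman(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()                          # the constant inter-device offset
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    return bias, loa

# Synthetic example: device B reads ~15 dB lower than device A everywhere.
rng = np.random.default_rng(0)
hfa = rng.normal(30, 2, size=76)                # 76 points, as in a 30-2 grid
vr = hfa - 15 + rng.normal(0, 0.5, size=76)
bias, loa = bland_altman(hfa, vr)               # bias ≈ 15 dB
```

A bias that is constant across the measurement range, with no proportional error, is what licenses the abstract's conclusion that a single age-specific offset maps VR sensitivities into an HFA-referenced framework.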

22 pages, 1918 KB  
Article
Edge-VisionGuard: A Lightweight Signal-Processing and AI Framework for Driver State and Low-Visibility Hazard Detection
by Manuel J. C. S. Reis, Carlos Serôdio and Frederico Branco
Appl. Sci. 2026, 16(2), 1037; https://doi.org/10.3390/app16021037 - 20 Jan 2026
Viewed by 1311
Abstract
Driving safety under low-visibility or distracted conditions remains a critical challenge for intelligent transportation systems. This paper presents Edge-VisionGuard, a lightweight framework that integrates signal processing and edge artificial intelligence to enhance real-time driver monitoring and hazard detection. The system fuses multi-modal sensor data—including visual, inertial, and illumination cues—to jointly estimate driver attention and environmental visibility. A hybrid temporal–spatial feature extractor (TS-FE) is introduced, combining convolutional and B-spline reconstruction filters to improve robustness against illumination changes and sensor noise. To enable deployment on resource-constrained automotive hardware, a structured pruning and quantization pipeline is proposed. Experiments on synthetic VR-based driving scenes demonstrate that the full-precision model achieves 89.6% driver-state accuracy (F1 = 0.893) and 100% visibility accuracy, with an average inference latency of 16.5 ms. After 60% parameter reduction and short fine-tuning, the pruned model preserves 87.1% accuracy (F1 = 0.866) and <3 ms latency overhead. These results confirm that Edge-VisionGuard maintains near-baseline performance under strict computational constraints, advancing the integration of computer vision and Edge AI for next-generation safe and reliable driving assistance systems. Full article
(This article belongs to the Special Issue Advances in Virtual Reality and Vision for Driving Safety)
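The 60% parameter reduction mentioned in the abstract above can be illustrated with a toy pruning routine. Note the caveat: the paper describes a structured pruning and quantization pipeline (removing whole channels or filters), whereas this sketch performs unstructured magnitude pruning of individual weights, purely to show the sparsity mechanics; the function and variable names are assumptions.

```python
import numpy as np

# Toy unstructured magnitude pruning: zero out the smallest-magnitude
# weights until a target sparsity is reached.
def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

w = np.random.default_rng(1).normal(size=(64, 64))
pruned = magnitude_prune(w, 0.6)
achieved = (pruned == 0).mean()    # fraction of parameters removed, ≈ 0.6
```

After pruning, a short fine-tuning pass (as the abstract reports) typically recovers most of the lost accuracy, since the surviving weights can compensate for the removed ones.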
