Search Results (105)

Search Parameters:
Keywords = driver’s gaze

15 pages, 4553 KB  
Article
From Initial to Situational Automation Trust: The Interplay of Personality, Interpersonal Trust, and Trust Calibration in Young Males
by Menghan Tang, Tianjiao Lu and Xuqun You
Behav. Sci. 2026, 16(2), 176; https://doi.org/10.3390/bs16020176 - 26 Jan 2026
Abstract
To understand human–machine interactions, we adopted a framework that distinguishes between stable individual differences (enduring personality/interpersonal traits), initial trust (pre-interaction expectations), and situational trust (dynamic calibration via gaze and behavior). A driving simulator experiment was conducted with 30 male participants to investigate trust calibration across three levels: manual (Level 0), semi-automated (Level 2, requiring monitoring), and fully automated (Level 4, system handles tasks). We combined eye tracking (pupillometry/fixations) with the Eysenck Personality Questionnaire (EPQ) and Interpersonal Trust Scale (ITS). Results indicated that semi-automation yielded a higher hazard detection sensitivity (d′ = 0.81) but induced greater physiological costs (pupil diameter, ηp² = 0.445) compared to manual driving. A mediation analysis confirmed that neuroticism was associated with initial trust specifically through interpersonal trust. Critically, despite lower initial trust, young male individuals with high interpersonal trust exhibited slower reaction times in the semi-automation mode (B = 0.60, p = 0.035), revealing a “social complacency” effect where social faith paradoxically predicted lower behavioral readiness. Based on these findings, we propose that situational trust is a multi-layer calibration process involving dissociated attentional and behavioral mechanisms, suggesting that such “wary but complacent” drivers require adaptive HMI interventions. Full article
(This article belongs to the Topic Personality and Cognition in Human–AI Interaction)
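The hazard-detection sensitivity quoted above (d′ = 0.81) is the standard signal-detection-theory measure. As a minimal sketch of how such a value is computed from hit and false-alarm counts (the log-linear correction below is a common convention, an assumption rather than a detail taken from the paper):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) keeps the z-scores
    finite when a rate would otherwise be exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)
```

With equal hit and false-alarm rates d′ is 0; higher values indicate better discrimination of hazards from non-hazards.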
24 pages, 2587 KB  
Article
Discriminative Capabilities of Eye Gaze Measures for Cognitive Load Evaluation in a Driving Simulation Task
by Anastasiia Bakhchina, Karina Arutyunova, Evgenii Burashnikov, Anastasiya Filatova, Andrei Filimonov and Ivan Shishalov
J. Eye Mov. Res. 2026, 19(1), 1; https://doi.org/10.3390/jemr19010001 - 24 Dec 2025
Viewed by 359
Abstract
Driving is a cognitively demanding task engaging attentional effort and working memory resources, which increases cognitive load. The aim of this study was to evaluate the discriminative capabilities of an objective eye tracking method in comparison to a subjective self-report scale (the NASA–Task Load Index) in distinguishing cognitive load levels during driving. Participants (N = 685) performed highway and urban driving in a fixed-base driving simulator. The N-Back test was used as a secondary task to increase cognitive load. In line with previous studies, the NASA–Task Load Index was shown to be an accurate self-report tool in distinguishing conditions with higher and lower levels of cognitive load due to the additional N-Back task, with best average accuracy of 0.81 within the highway driving scenario. Eye gaze metrics worked best when differentiating between stages of highway and urban driving, with an average accuracy of 0.82. Eye gaze entropy measures were the best indicators for cognitive load dynamics, with average accuracy reaching 0.95 for gaze transition entropy in the urban vs. highway comparison. Eye gaze metrics showed significant correlations with the NASA–Task Load Index results in urban driving stages, but not in highway driving. The results demonstrate that eye gaze metrics can be used in combination with self-reports for developing algorithms of cognitive load evaluation and reliable driver state prediction in different road conditions. Full article

18 pages, 2710 KB  
Article
Eye Gaze Entropy Reflects Individual Experience in the Context of Driving
by Karina Arutyunova, Evgenii Burashnikov, Nikita Timakin, Ivan Shishalov, Andrei Filimonov and Anastasiia Bakhchina
Entropy 2026, 28(1), 8; https://doi.org/10.3390/e28010008 - 20 Dec 2025
Viewed by 616
Abstract
Eye gaze plays an essential role in the organisation of human goal-directed behaviour. Stationary gaze entropy and gaze transition entropy are two informative measures of visual scanning in different tasks. In this work, we discuss the benefits of these eye gaze entropy measures in the context of driving behaviour. In our large-scale study, participants performed driving tasks in a simulator (N = 380, 44% female, age: 20–73 years old) and in on-road urban environments (N = 241, 44% female, age: 19–74 years old). We analysed measures of eye gaze entropy in relation to driving experience and compared their dynamics between the simulator and on-road driving. The results demonstrate that, in both driving conditions, gaze transition entropy is higher, whereas stationary gaze entropy is lower, in more experienced drivers of both genders. This suggests that gaining driving experience may be accompanied by a decrease in overall gaze dispersion and an increased unpredictability of visual scanning behaviour. These results are in line with previously reported trends on experience-related dynamics of eye gaze entropy measures. We discuss our findings in the framework of the system-evolutionary theory, which explains the organisation of behaviour through the history of individual development, corresponding to the growing complexity of individual–environment interactions. Experience-related dynamics of eye gaze complexity can be a useful factor in the development of practical applications, such as driver monitoring systems and other human–machine interfaces. Full article
(This article belongs to the Special Issue Information-Theoretic Methods in Computational Neuroscience)
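Stationary gaze entropy and gaze transition entropy are typically defined over a sequence of fixated areas of interest (AOIs): the former is the Shannon entropy of the overall fixation distribution, the latter the conditional entropy of the next AOI given the current one. A minimal sketch of these standard definitions (the AOI labels in the example are invented for illustration):

```python
import math
from collections import Counter

def stationary_gaze_entropy(fixations):
    """Shannon entropy (bits) of the distribution of fixations over AOIs."""
    counts = Counter(fixations)
    n = len(fixations)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def gaze_transition_entropy(fixations):
    """Conditional entropy (bits) of the next AOI given the current AOI."""
    pair_counts = Counter(zip(fixations, fixations[1:]))
    from_counts = Counter(fixations[:-1])
    n = len(fixations) - 1
    h = 0.0
    for (src, dst), c in pair_counts.items():
        p_pair = c / n                 # joint probability p(src, dst)
        p_cond = c / from_counts[src]  # conditional probability p(dst | src)
        h -= p_pair * math.log2(p_cond)
    return h
```

A perfectly predictable scan path (e.g. strict road/mirror alternation) has zero transition entropy even though its stationary entropy is maximal, which is why the two measures can move in opposite directions with experience.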

38 pages, 4380 KB  
Article
Enhancement of ADAS with Driver-Specific Gaze Profiling Algorithm—Pilot Case Study
by Marián Gogola and Ján Ondruš
Vehicles 2025, 7(4), 145; https://doi.org/10.3390/vehicles7040145 - 28 Nov 2025
Viewed by 406
Abstract
This study investigates drivers’ visual attention strategies during naturalistic urban driving using mobile eye-tracking (Pupil Labs Neon). A sample of experienced drivers participated in a realistic traffic scenario to examine fixation behaviour under varying traffic conditions. Non-parametric analyses revealed substantial variability in fixation behaviour attributable to driver identity (H(9) = 286.06, p = 2.35 × 10^−56), stimulus relevance (H(7) = 182.64, p = 5.40 × 10^−36), and traffic density (H(4) = 76.49, p = 9.64 × 10^−16). Vehicles and pedestrians elicited significantly longer fixations than lower-salience categories, reflecting adaptive allocation of visual attention to behaviourally critical elements of the scene. Compared with the fixed-rule method, which produced inflated anomaly rates of 7.23–14.84% (mean 12.06 ± 2.71%), the DSGP algorithm yielded substantially lower and more stable rates of 1.62–3.33% (mean 2.48 ± 0.53%). The fixed-rule approach over-classified anomalies by approximately 4–6×, whereas DSGP more accurately distinguished contextually appropriate fixations from genuine attentional deviations. These findings demonstrate that fixation behaviour in driving is strongly shaped by individual traits and environmental context, and that driver-specific modelling substantially improves the reliability of attention monitoring. Therefore, the DSGP framework offers a robust, personalised alternative to fixed thresholds, evaluated at the proof-of-concept level, and represents a promising direction for enhancing driver-state assessment in future ADAS. Full article
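The contrast the abstract draws between a fixed-rule method and driver-specific profiling can be illustrated with a toy comparison. The rules below (a single global duration cutoff versus a per-driver z-score cutoff) are assumptions chosen for illustration, not the published DSGP algorithm:

```python
import statistics

def fixed_rule_anomalies(durations_ms, threshold_ms=2000.0):
    """Flag fixations longer than one global threshold (same for every driver)."""
    return [d > threshold_ms for d in durations_ms]

def driver_specific_anomalies(durations_ms, k=3.0):
    """Flag fixations more than k standard deviations above this driver's own mean."""
    mu = statistics.fmean(durations_ms)
    sd = statistics.pstdev(durations_ms)
    return [d > mu + k * sd for d in durations_ms]
```

The driver-specific rule adapts its cutoff to each driver's baseline, so a habitually long-fixating driver is not flooded with false anomalies the way a fixed global threshold would produce.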

25 pages, 3393 KB  
Article
Enhancing Driver Monitoring Systems Based on Novel Multi-Task Fusion Algorithm
by Romas Vijeikis, Ibidapo Dare Dada, Adebayo A. Abayomi-Alli and Vidas Raudonis
Sensors 2025, 25(21), 6799; https://doi.org/10.3390/s25216799 - 6 Nov 2025
Viewed by 1136
Abstract
Distracted driving continues to be a major contributor to road accidents, highlighting the growing research interest in advanced driver monitoring systems for enhanced safety. This paper seeks to improve the overall performance and effectiveness of such systems by emphasizing the importance of recognizing the driver’s activity. This paper introduces a novel methodology for assessing driver attention using multi-perspective information from videos that capture the full driver body, hands, and face and focusing on three driver tasks: distracted actions, gaze direction, and hands-on-wheel monitoring. The experimental evaluation was conducted in two phases: first, assessing driver distracted activities, gaze direction, and hands-on-wheel using a CNN-based model and videos from three cameras that were placed inside the vehicle, and second, evaluating the multi-task fusion algorithm, considering the aggregated danger score, which was introduced in this paper, as a representation of the driver’s attentiveness based on the multi-task data fusion algorithm. The proposed methodology was built and evaluated using the DMD dataset; additionally, model robustness was tested on the AUC_V2 and SAMDD driver distraction datasets. The proposed algorithm effectively combines multi-task information from different perspectives and evaluates the attention level of the driver. Full article
(This article belongs to the Special Issue Computer Vision-Based Human Activity Recognition)

18 pages, 6415 KB  
Article
Drowsiness Classification in Young Drivers Based on Facial Near-Infrared Images Using a Convolutional Neural Network: A Pilot Study
by Ayaka Nomura, Atsushi Yoshida, Takumi Torii, Kent Nagumo, Kosuke Oiwa and Akio Nozawa
Sensors 2025, 25(21), 6755; https://doi.org/10.3390/s25216755 - 4 Nov 2025
Viewed by 652
Abstract
Drowsy driving is a major cause of traffic accidents worldwide, and its early detection remains essential for road safety. Conventional driver monitoring systems (DMS) primarily rely on behavioral indicators such as eye closure, gaze, or head pose, which typically appear only after a significant decline in alertness. This study explores the potential of facial near-infrared (NIR) imaging as a hypothetical physiological indicator of drowsiness. Because NIR light penetrates more deeply into biological tissue than visible light, it may capture subtle variations in blood flow and oxygenation near superficial vessels. Based on this hypothesis, we conducted a pilot feasibility study involving young adult participants to investigate whether drowsiness levels could be estimated from single-frame NIR facial images acquired at 940 nm—a wavelength already used in commercial DMS and suitable for both physiological sensitivity and practical feasibility. A convolutional neural network (CNN) was trained to classify multiple levels of drowsiness, and Gradient-weighted Class Activation Mapping (Grad-CAM) was applied to interpret the discriminative regions. The results showed that classification based on 940 nm NIR images is feasible, achieving an optimal accuracy of approximately 90% under the binary classification scheme (Pattern A). Grad-CAM revealed that regions around the nasal dorsum contributed to this, consistent with known physiological signs of drowsiness. These findings support the feasibility of NIR-based drowsiness classification in young drivers and provide a foundation for future studies with larger and more diverse populations. Full article

17 pages, 1377 KB  
Article
Sequential Fixation Behavior in Road Marking Recognition: Implications for Design
by Takaya Maeyama, Hiroki Okada and Daisuke Sawamura
J. Eye Mov. Res. 2025, 18(5), 59; https://doi.org/10.3390/jemr18050059 - 21 Oct 2025
Viewed by 757
Abstract
This study examined how drivers’ eye fixations change before, during, and after recognizing road markings, and how these changes relate to driving speed, visual complexity, cognitive functions, and demographics. Twenty licensed drivers viewed on-board movies showing digit or character road markings while their eye movements were tracked. Fixation positions and dispersions were analyzed. Results showed that, regardless of marking type, fixations were horizontally dispersed before and after recognition but became vertically concentrated during recognition, with fixation points shifting higher (p < 0.001) and horizontal dispersion decreasing (p = 0.01). During the recognition period, fixations moved upward and narrowed horizontally toward the final third (p = 0.034), suggesting increased focus. Longer fixations were linked to slower speeds for digits (p = 0.029) and more characters for character markings (p < 0.001). No significant correlations were found with cognitive functions or demographics. These findings suggest that drivers first scan broadly, then concentrate on markings as they approach. For optimal recognition, simple or essential information should be placed centrally or lower, while detailed content should appear higher to align with natural gaze patterns. In high-speed environments, markings should prioritize clarity and brevity in central positions to ensure safe and rapid recognition. Full article

19 pages, 2534 KB  
Article
Real-Time Driver Attention Detection in Complex Driving Environments via Binocular Depth Compensation and Multi-Source Temporal Bidirectional Long Short-Term Memory Network
by Shuhui Zhou, Wei Zhang, Yulong Liu, Xiaonian Chen and Huajie Liu
Sensors 2025, 25(17), 5548; https://doi.org/10.3390/s25175548 - 5 Sep 2025
Cited by 2 | Viewed by 1484
Abstract
Driver distraction is a key factor contributing to traffic accidents. However, in existing computer vision-based methods for driver attention state recognition, monocular camera-based approaches often suffer from low accuracy, while multi-sensor data fusion techniques are compromised by poor real-time performance. To address these limitations, this paper proposes a Real-time Driver Attention State Recognition method (RT-DASR). RT-DASR comprises two core components: Binocular Vision Depth-Compensated Head Pose Estimation (BV-DHPE) and Multi-source Temporal Bidirectional Long Short-Term Memory (MSTBi-LSTM). BV-DHPE employs binocular cameras and YOLO11n (You Only Look Once) Pose to locate facial landmarks, calculating spatial distances via binocular disparity to compensate for monocular depth deficiency for accurate pose estimation. MSTBi-LSTM utilizes a lightweight Bidirectional Long Short-Term Memory (Bi-LSTM) network to fuse head pose angles, real-time vehicle speed, and gaze region semantics, bidirectionally extracting temporal features for continuous attention state discrimination. Evaluated under challenging conditions (e.g., illumination changes, occlusion), BV-DHPE achieved 44.7% reduction in head pose Mean Absolute Error (MAE) compared to monocular vision methods. RT-DASR achieved 90.4% attention recognition accuracy with 21.5 ms average latency when deployed on NVIDIA Jetson Orin. Real-world driving scenario tests confirm that the proposed method provides a high-precision, low-latency attention state recognition solution for enhancing the safety of mining vehicle drivers. RT-DASR can be integrated into advanced driver assistance systems to enable proactive accident prevention. Full article
(This article belongs to the Section Vehicular Sensing)

24 pages, 2242 KB  
Article
Attention Allocation and Gaze Behavior While Driving: A Comparison Among Young, Middle-Aged and Elderly Drivers
by Anamarija Poll, Tomaž Tollazzi and Chiara Gruden
Sustainability 2025, 17(17), 7927; https://doi.org/10.3390/su17177927 - 3 Sep 2025
Viewed by 1154
Abstract
In 2023, 95.5 million Europeans were aged over 65, falling within the definition of the “elderly population”. According to statistics, this number will rise to 129.8 million by 2050, making Europe the oldest continent in the world. One of the consequences of such growth is a sharp increase in the number of elderly drivers. Although they have more experience, which can positively impact road safety, their performance and health generally decline, limiting some of the physical and mental abilities required for safe vehicle control. The main objective of this research was to shed light on the behavior of elderly drivers by comparing three different drivers’ age groups: young, middle-aged and elderly drivers. Based on analysis of road accidents involving elderly drivers, the road safety situation for elderly drivers in Slovenia was highlighted, a questionnaire was developed to understand how elderly drivers perceive traffic, and an experiment was conducted where 30 volunteers were tested using a driving simulator and eye-tracking glasses. Objective driving and gaze behavior data were obtained, and very different performance was found among the three age groups, with elderly drivers having poorer reaction times and overlooking many elements compared to younger drivers. Full article

15 pages, 2879 KB  
Article
Study on the Eye Movement Transfer Characteristics of Drivers Under Different Road Conditions
by Zhenxiang Hao, Jianping Hu, Xiaohui Sun, Jin Ran, Yuhang Zheng, Binhe Yang and Junyao Tang
Appl. Sci. 2025, 15(15), 8559; https://doi.org/10.3390/app15158559 - 1 Aug 2025
Viewed by 1237
Abstract
Given the severe global traffic safety challenges—including threats to human lives and socioeconomic impacts—this study analyzes visual behavior to promote sustainable transportation, improve road safety, and reduce resource waste and pollution caused by accidents. Four typical road sections, namely, turning, straight ahead, uphill, and downhill, were selected, and the eye movement data of 23 drivers in different driving stages were collected by aSee Glasses eye-tracking device to analyze the visual gaze characteristics of the drivers and their transfer patterns in each road section. Using Markov chain theory, the probability of staying at each gaze point and the transfer probability distribution between gaze points were investigated. The results of the study showed that drivers’ visual behaviors in different road sections showed significant differences: drivers in the turning section had the largest percentage of fixation on the near front, with a fixation duration and frequency of 29.99% and 28.80%, respectively; the straight ahead section, on the other hand, mainly focused on the right side of the road, with 31.57% of fixation duration and 19.45% of frequency of fixation; on the uphill section, drivers’ fixation duration on the left and right roads was more balanced, with 24.36% of fixation duration on the left side of the road and 25.51% on the right side of the road; drivers on the downhill section looked more frequently at the distance ahead, with a total fixation frequency of 23.20%, while paying higher attention to the right side of the road environment, with a fixation duration of 27.09%. 
In terms of visual fixation, the fixation shift in the turning road section was mainly concentrated between the near and distant parts of the road ahead and frequently turned to the left and right sides; the straight road section mainly showed a shift between the distant parts of the road ahead and the dashboard; the uphill road section was concentrated on the shift between the near parts of the road ahead and the two sides of the road, while the downhill road section mainly occurred between the distant parts of the road ahead and the rearview mirror. Although drivers’ fixations on the front of the road were most concentrated under the four road sections, with an overall fixation stability probability exceeding 67%, there were significant differences in fixation smoothness between different road sections. Through this study, this paper not only reveals the laws of drivers’ visual behavior under different driving environments but also provides theoretical support for behavior-based traffic safety improvement strategies. Full article
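The Markov-chain analysis described above reduces to estimating, from consecutive fixations, the probability of moving from one gaze region to another (the diagonal entries give the stay, or fixation-stability, probabilities). A minimal maximum-likelihood sketch with invented region labels:

```python
from collections import Counter, defaultdict

def transition_matrix(gaze_sequence):
    """First-order Markov transition probabilities between gaze regions,
    estimated by maximum likelihood from consecutive fixation pairs."""
    pair_counts = Counter(zip(gaze_sequence, gaze_sequence[1:]))
    from_counts = Counter(gaze_sequence[:-1])
    probs = defaultdict(dict)
    for (src, dst), c in pair_counts.items():
        probs[src][dst] = c / from_counts[src]
    return dict(probs)
```

Each row of the resulting matrix sums to 1, and comparing rows across road sections is what reveals the section-specific transfer patterns the abstract reports.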

24 pages, 2268 KB  
Article
Fusion of Driving Behavior and Monitoring System in Scenarios of Driving Under the Influence: An Experimental Approach
by Jan-Philipp Göbel, Niklas Peuckmann, Thomas Kundinger and Andreas Riener
Appl. Sci. 2025, 15(10), 5302; https://doi.org/10.3390/app15105302 - 9 May 2025
Viewed by 1349
Abstract
Driving under the influence of alcohol (DUI) remains a leading cause of accidents globally, with accident risk rising exponentially with blood alcohol concentration (BAC). This study aims to distinguish between sober and intoxicated drivers using driving behavior analysis and driver monitoring systems (DMS), technologies that align with emerging EU regulations. In a driving simulator, twenty-three participants (average age: 32) completed five drives (one practice and two each while sober and intoxicated) on separate days across city, rural, and highway settings. Each 30-minute drive was analyzed using eye-tracking and driving behavior data. We applied significance testing and classification models to assess the data. Our study goes beyond the state of the art by (a) combining data from various sensors and (b) not only examining the effects of alcohol on driving behavior but also using these data to classify driver impairment. Fusing gaze and driving behavior data improved classification accuracy, with models achieving over 70% accuracy in city and rural conditions and a Long Short-Term Memory (LSTM) network reaching up to 80% on rural roads. Although the detection rate is, of course, still far too low for a productive system, the results nevertheless provide valuable insights for improving DUI detection technologies and enhancing road safety. Full article
(This article belongs to the Special Issue Human-Centered Approaches to Automated Vehicles)

29 pages, 11492 KB  
Article
Sustainable Real-Time Driver Gaze Monitoring for Enhancing Autonomous Vehicle Safety
by Jong-Bae Kim
Sustainability 2025, 17(9), 4114; https://doi.org/10.3390/su17094114 - 1 May 2025
Viewed by 1674
Abstract
Despite advances in autonomous driving technology, current systems still require drivers to remain alert at all times. These systems issue warnings regardless of whether the driver is actually gazing at the road, which can lead to driver fatigue and reduced responsiveness over time, ultimately compromising safety. This paper proposes a sustainable real-time driver gaze monitoring method to enhance the safety and reliability of autonomous vehicles. The method uses a YOLOX-based face detector to detect the driver’s face and facial features, analyzing their size, position, shape, and orientation to determine whether the driver is gazing forward. By accurately assessing the driver’s gaze direction, the method adjusts the intensity and frequency of alerts, helping to reduce unnecessary warnings and improve overall driving safety. Experimental results demonstrate that the proposed method achieves a gaze classification accuracy of 97.3% and operates robustly in real-time under diverse environmental conditions, including both day and night. These results suggest that the proposed method can be effectively integrated into Level 3 and higher autonomous driving systems, where monitoring driver attention remains critical for safe operation. Full article

24 pages, 12563 KB  
Article
Analyzing Gaze During Driving: Should Eye Tracking Be Used to Design Automotive Lighting Functions?
by Korbinian Kunst, David Hoffmann, Anıl Erkan, Karina Lazarova and Tran Quoc Khanh
J. Eye Mov. Res. 2025, 18(2), 13; https://doi.org/10.3390/jemr18020013 - 10 Apr 2025
Viewed by 1723
Abstract
In this work, an experiment was designed in which a defined route consisting of country roads, highways, and urban roads was driven by 20 subjects during the day and at night. The test vehicle was equipped with GPS and a camera, and the subject wore head-mounted eye-tracking glasses to record gaze. Gaze distributions for country roads, highways, urban roads, and specific urban roads were then calculated and compared. The day/night comparisons showed that the horizontal fixation distribution of the subjects was wider during the day than at night over the whole test distance. When the distributions were divided into urban roads, country roads, and motorways, the difference was also seen in each road environment. For the vertical distribution, no clear differences between day and night can be seen for country roads or urban roads. In the case of the highway, the vertical dispersion is significantly lower, so the gaze is more focused. On highways and urban roads there is a tendency for the gaze to be lowered. The differentiation between a residential road and a main road in the city made it clear that gaze behavior differs significantly depending on the urban area. For example, the residential road led to a broader gaze behavior, as the sides of the street were scanned much more often in order to detect potential hazards lurking between parked cars at an early stage. This paper highlights the contradictory results of eye-tracking research and shows that it is not advisable to define a holy grail of gaze distribution for all environments. Gaze is highly situational and context-dependent, and generalized gaze distributions should not be used to design lighting functions. The research highlights the importance of an adaptive light distribution that adapts to the traffic situation and the environment, always providing good visibility for the driver and allowing a natural gaze behavior. Full article

27 pages, 11491 KB  
Article
Detecting Driver Drowsiness Using Hybrid Facial Features and Ensemble Learning
by Changbiao Xu, Wenhao Huang, Jiao Liu and Lang Li
Information 2025, 16(4), 294; https://doi.org/10.3390/info16040294 - 7 Apr 2025
Cited by 1 | Viewed by 3788
Abstract
Drowsiness while driving poses a significant risk in terms of road safety, making effective drowsiness detection systems essential for the prevention of accidents. Facial signal-based detection methods have proven to be an effective approach to drowsiness detection. However, they bring challenges arising from inter-individual differences among drivers. Variations in facial structure necessitate personalized feature extraction thresholds, yet existing methods apply a uniform threshold, leading to inaccurate feature extraction. Furthermore, many current methods focus on only one or two facial regions, overlooking the possibility that drowsiness may manifest differently across different facial areas among different drivers. To address these issues, we propose a drowsiness detection method that combines an ensemble model with hybrid facial features. This approach enables the accurate extraction of features from four key facial regions—the eye region, mouth contour, head pose, and gaze direction—through adaptive threshold correction to ensure comprehensive coverage. An ensemble model, combining Random Forest, XGBoost, and Multilayer Perceptron with a soft voting criterion, is then employed to classify the drivers’ drowsiness state. Additionally, we use the SHAP method to ensure model explainability and analyze the correlations between features from various facial regions. Trained and tested on the UTA-RLDD dataset, our method achieves a video accuracy (VA) of 86.52%, outperforming similar techniques introduced in recent years. The interpretability analysis demonstrates the value of our approach, offering a valuable reference for future research and contributing significantly to road safety. Full article
(This article belongs to the Special Issue Machine Learning and Artificial Intelligence with Applications)
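The soft-voting criterion mentioned in the abstract averages the class-probability outputs of the base classifiers and selects the class with the highest mean probability. A dependency-free sketch (the probability vectors in the example stand in for Random Forest, XGBoost, and MLP outputs, which are not reproduced here):

```python
def soft_vote(prob_lists):
    """Soft-voting ensemble decision: average the per-class probability
    vectors from several classifiers and return the argmax class index."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)
```

Unlike hard (majority) voting, soft voting lets a single highly confident model outweigh two weakly confident ones, which is why it is often preferred when base models produce calibrated probabilities.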

19 pages, 5278 KB  
Article
Dynamic Response Characteristics of Drivers’ Visual Search Behavior to Road Horizontal Curve Radius: Latest Simulation Experimental Results
by Jinliang Xu, Yongji Ma, Chao Gao, Tian Xin, Houfu Yang, Wenyu Peng and Zhiyuan Wan
Sustainability 2025, 17(5), 2197; https://doi.org/10.3390/su17052197 - 3 Mar 2025
Viewed by 1581
Abstract
Road horizontal curves, which significantly influence drivers’ visual search behavior and are closely linked to traffic safety, also constitute a crucial factor in sustainable road traffic development. This paper uses simulation driving experiments to explore the dynamic response characteristics of 27 typical subject drivers’ visual search behavior regarding road horizontal curve radius. Results show that in a monotonous, open road environment, the driver’s visual search is biased towards the inside of the curve; as the radius increases, the 85th percentile value of the longitudinal visual search length gradually increases, the 85th percentile value of the horizontal search angle gradually decreases, the 85th percentile value of vehicle speed gradually increases, and the dispersion and bias of the gaze points gradually decrease. The search length, horizontal angle, and speed approach the level of straight road sections (380 m, 10° and 115 km/h, respectively). When R ≥ 1200 m, a driver’s dynamic visual search range reaches a stable distribution state that is the same as that of a straight road. A dynamic visual search range distribution model for drivers on straight and horizontal curved road sections is constructed. Based on psychological knowledge such as attention resource theory and eye–mind theory, a human factor engineering explanation was provided for drivers’ attention distribution and speed selection mechanism on road horizontal curve sections. The research results can provide theoretical references for the optimization design of road traffic, decision support to improve the driver training system, and a theoretical basis for determining the visual search characteristics of human drivers in autonomous driving technology, thereby promoting the safe and sustainable development of road traffic. Full article
(This article belongs to the Section Sustainable Transportation)
