Search Results (2,051)

Search Parameters:
Keywords = eye-tracking

26 pages, 4823 KB  
Article
Remote Tower Air Traffic Controller Multimodal Fatigue Detection
by Weijun Pan, Dajiang Song, Ruihan Liang, Zirui Yin and Boyuan Han
Sensors 2026, 26(6), 1856; https://doi.org/10.3390/s26061856 - 15 Mar 2026
Abstract
Remote tower (rTWR) operations are reshaping air traffic control but introduce significant human-factor risks, notably cognitive fatigue induced by prolonged screen-based visual surveillance. To mitigate these risks in a safety-critical domain where missed detections can be catastrophic, we propose a non-intrusive, multimodal fatigue detection framework fusing ocular and cardiac signals. A high-fidelity simulation study with 36 controllers was conducted to collect eye-tracking and electrocardiogram (ECG) data, from which a 12-dimensional feature vector—integrating gaze entropy and heart rate variability (HRV)—was extracted. Addressing the severe class imbalance and scarcity of fatigue samples in physiological data, we developed a cost-sensitive XGBoost classifier combining SMOTE oversampling with a dynamically weighted loss function. Experimental results show that the proposed framework performed well under mixed-subject evaluation and improved sensitivity to fatigue events. Although a marked performance drop was observed under LOSO evaluation, personalized calibration partially alleviated this limitation, indicating the potential of the framework for real-time fatigue monitoring in remote tower operations.
(This article belongs to the Section Physical Sensors)
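The two imbalance remedies named in the abstract, SMOTE oversampling and a cost-sensitive (weighted) loss, can be illustrated compactly. This is a minimal pure-Python sketch of the underlying ideas, not the authors' XGBoost pipeline; the function names and the k = 2 neighbourhood are illustrative choices:

```python
import random

def smote_like(minority, n_new, k=2, seed=0):
    """Generate synthetic minority samples by interpolating between a
    sample and one of its k nearest neighbours (the core SMOTE idea)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # k nearest neighbours of a by squared Euclidean distance
        neighbours = sorted(
            (p for p in minority if p is not a),
            key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)),
        )[:k]
        b = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    return synthetic

def pos_class_weight(n_negative, n_positive):
    """Cost-sensitive weight for the rare (fatigue) class, analogous to
    XGBoost's scale_pos_weight heuristic: n_negative / n_positive."""
    return n_negative / n_positive
```

In practice one would reach for imbalanced-learn's SMOTE and XGBoost's `scale_pos_weight` rather than hand-rolled versions; the sketch only shows what those knobs do.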

27 pages, 1639 KB  
Article
Cognitive Behavioral Therapy Reduces Symptom Severity and Normalizes Neurophysiological and Attentional Reactivity in Anorexia Nervosa: A Randomized Controlled Trial
by Eda Yılmazer, Metin Çınaroğlu, Selami Varol Ülker and Gökben Hızlı Sayar
Brain Sci. 2026, 16(3), 309; https://doi.org/10.3390/brainsci16030309 - 13 Mar 2026
Abstract
Background: Anorexia nervosa (AN) is a severe psychiatric disorder marked by restrictive eating, distorted body image, and high relapse rates. While cognitive-behavioral therapy (CBT) is a widely used treatment, its mechanisms of action in AN remain incompletely understood, particularly beyond self-reported symptom change. This study investigated the effects of a 12-week CBT intervention on both clinical and multimodal laboratory-based outcomes in women with restrictive-type AN. Methods: In a two-arm, pre–post randomized controlled trial (ClinicalTrials.gov: NCT07037017), 59 women with restrictive-type AN were randomized to a CBT intervention (n = 30) or no-treatment control (n = 29). A total of 50 participants (CBT: 26; control: 24) completed baseline and post-intervention assessments and were included in analyses. Outcomes included psychometric measures (eating disorder symptoms, depression, anxiety, body image-related obsessive–compulsive symptoms, and cognitive emotion regulation) and laboratory-based indices: electroencephalography (EEG), galvanic skin response (GSR), and eye-tracking during exposure to food- and body-related stimuli. Group × Time effects were analyzed using repeated-measures mixed-effects models, and statistical analyses were conducted using SPSS (Version 31; IBM Corp., Armonk, NY, USA). Results: Significant Group × Time interactions indicated greater improvements in the CBT group across all psychometric outcomes, including reduced eating disorder symptom severity (p < 0.001, ηp2 = 0.28) and increased adaptive emotion regulation. CBT participants also showed significant reductions in EEG P300 and late positive potential (LPP) amplitudes to body-related stimuli, increased frontal alpha asymmetry, decreased visual fixation on salient body and food cues, and attenuated GSR reactivity (all p < 0.05). Exploratory correlations revealed that symptom improvements were associated with reductions in neurophysiological and attentional reactivity. 
Conclusions: To our knowledge, this is the first RCT in AN to demonstrate that CBT not only improves self-reported outcomes but also modulates neurophysiological and attentional processes implicated in the maintenance of the disorder. Multimodal laboratory assessments provided mechanistic insight into treatment effects and may inform personalized intervention strategies. CBT appears to facilitate recovery through both cognitive–emotional and physiological recalibration.
(This article belongs to the Section Neuropsychiatry)

26 pages, 1234 KB  
Review
Towards Rigorous Eye-Tracking Methodology in Interdisciplinary Fields: Insights from and Recommendations for Tourism Research
by Wilson Cheong Hin Hong
J. Eye Mov. Res. 2026, 19(2), 31; https://doi.org/10.3390/jemr19020031 - 12 Mar 2026
Abstract
Eye-tracking methodology represents a young but rapidly growing approach in tourism research, offering a direct window into the cognitive processes driving tourism stakeholders’ behaviour. However, a critical gap remains between the rapid adoption of this tool and the methodological rigour required to interpret its neurophysiological data. This critical review synthesizes 23 empirical studies (2020–2025) from the destination marketing and branding domain to diagnose eye-tracking’s state-of-the-art application. Adopting the SALSA framework (Search, Appraisal, Synthesis, Analysis) augmented by PRISMA 2020 guidelines, this study systematically searched Web of Science and Scopus databases. Studies were appraised using an eight-dimensional quality rubric covering dimensions ranging from theoretical grounding and experimental design to statistical rigour. Findings revealed a “tool-first” exploratory phenomenon, in which the majority of studies relied on basic fixation metrics to infer complex psychological states such as “interest”, even though such metrics can also reflect other cognitive states. Furthermore, most reviewed studies failed to control for stimulus-level confounds (e.g., luminance, AOI size) and relied on inappropriate data-handling procedures, such as omitting data cleaning and treating count and binary data as continuous. These shortcomings, coupled with transparency deficits, undermined the validity of their conclusions. Hence, a Checklist for Eye-Tracking Rigour (CETR) and a methodological decision tree were developed to guide researchers towards confirmatory and neurobiologically grounded research. Findings also provided a framework for managers/practitioners to more accurately interpret eye-tracking studies.
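One pitfall the review flags is treating binary eye-tracking outcomes (e.g., whether a participant ever fixated an AOI) as continuous. A minimal sketch of a more appropriate summary, a binomial proportion with a Wilson score interval; the function name and the example counts are illustrative, not taken from the review:

```python
import math

def wilson_interval(hits, n, z=1.96):
    """95% Wilson score interval for a binomial proportion, a more
    defensible summary for binary AOI-hit data than mean +/- SD."""
    if n == 0:
        raise ValueError("n must be positive")
    p = hits / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half
```

For example, if 18 of 24 participants fixated the AOI at least once, `wilson_interval(18, 24)` brackets the observed proportion 0.75 with an asymmetric interval that stays inside (0, 1), which a naive mean-plus-SD summary does not guarantee.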

15 pages, 2232 KB  
Article
Search Efficiency and Visual Appeal of Pictorial-Based and Typography-Based Map
by Dorotea Kovačević and Klementina Možina
ISPRS Int. J. Geo-Inf. 2026, 15(3), 119; https://doi.org/10.3390/ijgi15030119 - 12 Mar 2026
Abstract
Visual information should be presented clearly and effectively so that it is quickly and easily understood. The same principle applies to different types of maps and plans. This study explores the relationship between a map’s design and how users interact with it when searching for specific targets. Focusing on a digital tourist city map, we employed eye-tracking technology to investigate how different cartographic designs (pictorial-based versus typography-based) influence visual search. As the need for visually appealing designs becomes an important part of the user experience, we further explored the observers’ perceptions of the maps’ visual appeal. The results show that the typography-based maps enabled a more effective visual search than the pictorial-based maps, as measured by search time, fixation count, and the number of fixations before locating the target. A greater amount of visual attention was directed towards the typography-based maps, as measured by completion time and several eye-tracking metrics during the observers’ evaluation of the maps’ visual appeal. Based on the results, this study highlights the practical implications of effective map design in enhancing users’ navigation and their visual engagement with cartographic data.
(This article belongs to the Special Issue Cartography and Geovisual Analytics)
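The search-efficiency measures used here (search time and the number of fixations before locating the target) can be read straight off a fixation log. A minimal sketch, assuming fixations have already been assigned AOI labels; the data layout is illustrative:

```python
def fixations_before_target(fixations, target_aoi):
    """Count fixations preceding the first one that lands in the target
    AOI, and return that first hit's timestamp as a search-time proxy.
    `fixations` is a chronological list of (timestamp_ms, aoi_label)."""
    for i, (t, aoi) in enumerate(fixations):
        if aoi == target_aoi:
            return i, t  # fixations before the hit, time of first hit
    return len(fixations), None  # target never fixated
```

For a log `[(120, "legend"), (300, "street"), (450, "museum")]`, searching for `"museum"` yields 2 prior fixations and a first hit at 450 ms.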

24 pages, 4833 KB  
Article
Optimizing Head-Up Display Information Presentation for Older Drivers: Visual Attention Patterns and Design Implications
by Ke Zhang, Chen Xu and Jinho Yim
Appl. Sci. 2026, 16(6), 2682; https://doi.org/10.3390/app16062682 - 11 Mar 2026
Abstract
As population aging accelerates, age-related declines in visual sensitivity and attentional control make older drivers more vulnerable to suboptimal in-vehicle interface designs. Head-up displays (HUDs) are intended to reduce gaze shifts by overlaying information within the forward field of view, yet empirical evidence remains limited on how specific HUD presentation strategies reshape older drivers’ visual attention allocation. Grounded in theories of visual attention and cognitive load, this study systematically investigates three design variables that are increasingly common in contemporary HUDs (including AR-HUDs): (1) dynamic versus static navigation cues, (2) pedestrian warning strategies under different lighting conditions, and (3) the spatial placement of high-priority information. We first conducted a formative user study to define variables and operationalizations, and then carried out three within-subject driving-simulator experiments using controlled HUD stimuli and eye tracking. Objective gaze measures (e.g., fixation count, total fixation duration, and time to first fixation) were combined with subjective preference ratings to characterize attentional capture, search efficiency, and potential attentional costs. Findings reveal a robust trade-off: continuously changing navigation cues enhance attentional capture but can also increase attentional “stickiness,” unnecessarily consuming older drivers’ limited attentional resources. In pedestrian hazard tasks, real-time overlay warnings that were spatially aligned with the hazard significantly improved visual localization under low-light conditions, outperforming early warnings and multi-stage strategies. Across tasks and layout conditions, the central HUD region showed a stable attentional advantage—placing critical information centrally elicited greater visual attention and stronger subjective preference. 
These results provide mechanistic evidence for how HUD parameters modulate older drivers’ attention and yield actionable implications for prioritization, temporal pacing of dynamic navigation cues, and a “center-first” layout strategy to guide age-friendly HUD design.
(This article belongs to the Special Issue Advances in Computer Graphics and 3D Technologies)

33 pages, 6958 KB  
Article
Short-Term Performance of Visual Attention Prompt Methods Across Driver Proficiency in a Driving Simulator
by Jinwei Liang and Makio Ishihara
Multimodal Technol. Interact. 2026, 10(3), 28; https://doi.org/10.3390/mti10030028 - 11 Mar 2026
Abstract
In complex driving environments, drivers must continuously detect and respond to critical visual information such as traffic signs and pedestrians. However, important targets may sometimes be overlooked due to high cognitive load during driving. Therefore, visual attention prompt methods have been proposed to guide drivers’ gaze toward relevant targets. A visual attention prompt method is a visual cue presented in a key area in a user’s field of view to draw his/her visual attention. This study evaluates the short-term performance of five visual attention prompt methods (Point, Arrow, Blur, Dusk, and ModAF) in a driving simulator and compares their performance between novice and proficient drivers. Eye-tracking data and multiple analyses are used to examine whether the influence of these methods could be maintained after they are disabled and to clarify drivers’ response patterns across methods in relation to their driving proficiency. The results indicate that visual attention prompt methods could induce a short-term transfer effect, as drivers still tend to fixate on target traffic signs earlier after the methods are disabled, and the elapsed-time analysis estimates that this effect lasts about 84.35 s. Overall, the Point, Arrow, and Dusk methods show relatively stronger performance with significant reductions in the elapsed time to fixate on the traffic sign. The clustering analysis further shows that drivers’ response patterns are not uniform, with two clusters for novice drivers and three clusters for proficient drivers. The results suggest that most novice drivers tend to benefit from explicit non-directional visual cues that enhance target salience, such as the Point method, whereas proficient drivers are more likely to benefit from explicit directional visual cues that provide clear directional guidance, such as the Arrow method. 
These findings suggest that visual attention prompt methods may be useful for developing driver training strategies tailored to different levels of driving proficiency, helping drivers maintain more effective visual attention allocation during driving and potentially contributing to improved driving safety.

26 pages, 3911 KB  
Article
Integrated Multimodal Perception and Predictive Motion Forecasting via Cross-Modal Adaptive Attention
by Bakhita Salman, Alexander Chavez and Muneeb Yassin
Future Transp. 2026, 6(2), 64; https://doi.org/10.3390/futuretransp6020064 - 11 Mar 2026
Abstract
Accurate environmental perception is fundamental to safe autonomous driving; however, most existing multimodal systems rely on fixed or heuristic sensor fusion strategies that cannot adapt to scene-dependent variations in sensor reliability. This paper proposes Cross-Modal Adaptive Attention (CMAA), a unified end-to-end Bird’s-Eye-View (BEV) perception framework that dynamically fuses camera, LiDAR, and RADAR information through learnable, context-aware modality gating. Unlike static fusion approaches, CMAA adaptively reweights sensor contributions based on global scene descriptors, enabling the robust integration of semantic, geometric, and motion cues without manual tuning. The proposed architecture jointly performs 3D object detection, multi-object tracking, and motion forecasting within a shared BEV representation, preserving spatial alignment across tasks and supporting efficient real-time deployment. Experiments conducted on the official nuScenes validation split demonstrate that CMAA achieves 0.528 mAP and 0.691 NDS, outperforming fixed-weight fusion baselines while maintaining a compact model size and efficient inference. Additional tracking evaluation using the official nuScenes tracking devkit reports improved tracking performance, while motion forecasting experiments show reduced trajectory displacement errors (minADE and minFDE). Ablation studies further confirm the complementary contributions of adaptive modality gating and bidirectional cross-modal refinement, and a stratified dynamic analysis reveals consistent reductions in velocity estimation error across object classes, motion regimes, and environmental conditions. These results demonstrate that adaptive multimodal fusion improves robustness, motion reasoning, and perception reliability in complex traffic environments while remaining computationally efficient for deployment in safety-critical autonomous driving systems.
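The abstract does not specify CMAA's gating equations, but the general pattern of learnable, context-aware modality gating can be sketched as softmax weights over per-modality scores followed by a weighted sum of modality features. A hedged pure-Python illustration; in the actual model the gate scores would be produced by a learned network from a global scene descriptor, not supplied directly:

```python
import math

def softmax(xs):
    """Numerically stable softmax (subtract the max before exp)."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def gated_fusion(modality_feats, gate_scores):
    """Fuse per-modality feature vectors with softmax gate weights.
    modality_feats: one feature vector per modality (camera, LiDAR, ...),
    gate_scores: one scalar relevance score per modality."""
    w = softmax(gate_scores)
    dim = len(modality_feats[0])
    return [sum(w[m] * modality_feats[m][d] for m in range(len(w)))
            for d in range(dim)]
```

With equal gate scores the fusion reduces to a plain average; as one modality's score grows (e.g., RADAR in fog), its features dominate the fused vector, which is the adaptive reweighting the abstract describes.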

31 pages, 3100 KB  
Article
A Study on the Association Between Tower Crane Operator Fatigue State and Collision Risk Under Human–Machine Interaction
by Zhijiang Wu, Yaru Zhu, Junwen Wang, Zhenzhen Chai, Jixun Fan and Guofeng Ma
Buildings 2026, 16(6), 1102; https://doi.org/10.3390/buildings16061102 - 10 Mar 2026
Abstract
To investigate the relationship between operator fatigue and collision risk under human–machine interaction (HMI) in intelligent tower crane operations, and to reveal the mitigating effects of HMI on fatigue-induced collision risks, a comprehensive data acquisition approach integrating eye-tracking signals, risk indicators, and fatigue scale assessments was proposed and validated through scenario-based experiments. First, two experimental scenarios—traditional mechanical operation and HMI operation—were established. Based on a review of existing studies, representative eye-movement metrics and fatigue scale indicators were selected. Subsequently, operator fatigue states were classified into three levels: low fatigue, moderate fatigue, and high fatigue. A total of 28 participants were recruited to complete fatigue assessments and subsequently perform tower crane lifting tasks under both experimental scenarios. Finally, collision risk under different scenarios was quantitatively evaluated using the safety distance between the crane hook and the rigger, as well as the frequency of collision alarms. The results indicate that, under traditional mechanical operation, increasing fatigue levels were associated with a significant reduction in safety distance between the crane hook and the rigger, accompanied by a marked increase in collision alarm occurrences, resulting in a relatively high overall collision risk. In contrast, under the HMI operation scenario, participants demonstrated superior operational control at equivalent fatigue levels. Specifically, under moderate fatigue, collision risk was reduced from low risk to no risk, while under high fatigue, collision risk decreased from high risk to low risk. These results indicate that, under laboratory-simulated conditions, human–machine interaction can mitigate, to a certain extent, the increasing trend of collision risk when operators perform tower crane lifting operations under fatigue. 
These findings provide a scientific basis for further optimization of intelligent tower crane operational modes and the development of enhanced safety management strategies.
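The collision-risk measures described above (hook-rigger safety distance and collision-alarm frequency) can be summarized per lifting task from synchronized position tracks. A minimal sketch; the 3 m threshold and the data layout are illustrative assumptions, not values from the paper:

```python
import math

def collision_risk_summary(hook_track, rigger_track, safe_dist=3.0):
    """Summarize collision risk for one lifting task from synchronized
    hook and rigger positions (lists of (x, y) in metres): the minimum
    hook-rigger distance reached, and how many samples fell below the
    alarm threshold."""
    dists = [math.hypot(hx - rx, hy - ry)
             for (hx, hy), (rx, ry) in zip(hook_track, rigger_track)]
    alarms = sum(d < safe_dist for d in dists)
    return min(dists), alarms
```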

21 pages, 2987 KB  
Article
Seeing Through Packaging: Eye-Tracking Evidence on How Product Visual Strategy and Unit Size Shape Visual Attention and Consumer Evaluation
by Zhiyi Guo, Zihao Cao, Yongchun Mao, Muhizam Mustafa, Yuqi Luo and Yueyue Ning
J. Eye Mov. Res. 2026, 19(2), 30; https://doi.org/10.3390/jemr19020030 - 10 Mar 2026
Abstract
Product visual strategies (PVS) on food packaging influence how consumers visually inspect products at the point of purchase. However, evidence comparing transparent windows and product images remains mixed, particularly regarding how these strategies interact with food unit size (FUS) and shape visual attention patterns. Moreover, few studies have examined these effects using objective eye-tracking measures within controlled experimental designs. This study employed a 2 × 2 between-subjects quasi-experiment to investigate the effects of PVS (transparent window and product image) and FUS (large unit and small unit) on visual attention and subsequent product-related evaluations. A total of 160 participants viewed realistic chocolate package stimuli that varied only in visual strategy and unit size. Eye movements were recorded using Tobii Pro Glasses 2. Visual attention was assessed through Time to First Fixation (TFF) and Fixation Duration (FD), while expected tastiness, expected quality, and purchase intention were measured using standardized self-report scales. The results showed that transparent-window packaging attracted visual attention more rapidly and sustained longer fixations than product-image packaging. These attention differences were accompanied by higher expected tastiness, expected quality, and purchase intention. While food unit size alone showed limited effects on eye-movement measures, a significant interaction was observed: small-unit designs elicited greater visual attention and more favorable evaluations only when the product was directly visible through a transparent window. Overall, the findings demonstrate how product visual strategies and food unit size jointly shape visual attention allocation during packaging inspection. By integrating eye-tracking measures with evaluation and behavioral intention outcomes, this study contributes to applied eye-movement research in food packaging contexts.

21 pages, 1455 KB  
Article
Temporal Optimization of Dynamic Message Signs: A Survival Analysis of Driver Comprehension Factors
by Mousa Abushattal, Fadi Alhomaidat, Rasha Al-Shamaseen, Mohammad Al-Marafi, Layan Alkodary and Ahmed Jaber
Vehicles 2026, 8(3), 50; https://doi.org/10.3390/vehicles8030050 - 8 Mar 2026
Abstract
Dynamic Message Signs (DMSs) play a critical role in conveying real-time traffic information to drivers; however, their effectiveness heavily relies on how messages are structured and displayed, particularly regarding phasing duration and content length. This study examines the influence of these two factors on driver readability, comprehension, and gaze behavior using an advanced virtual reality (VR) driving simulator. Controlled experiments simulated four DMS scenarios, combining two phasing intervals (2.5 and 4 s) with short and long message formats, adhering to Michigan Department of Transportation (MDOT) guidelines. The experiment integrated eye-tracking technology to measure fixation duration and frequency, while statistical methods, including survival analysis and LASSO regression, were employed to identify significant predictors of message readability. Results revealed that shorter messages with shorter phasing intervals led to the highest comprehension rates and reduced cognitive strain. Furthermore, individual characteristics such as gender, driving speed, and highway driving experience significantly affected how drivers engaged with DMS messages. These findings contribute to the development of more effective DMS deployment strategies and provide practical design recommendations to enhance traffic safety and information delivery on high-speed roadways.
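Survival analysis treats each driver's time-to-comprehension as a possibly censored duration (censored, for example, when the sign leaves view before the message is understood). A minimal pure-Python Kaplan-Meier estimator illustrating the idea; the study's exact modelling choices are not specified in the abstract:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve. `times` are observed durations and
    `events` flags whether the event (comprehension) occurred (1) or
    the observation was censored (0). Returns a list of
    (time, survival_probability) pairs at each event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        d = n_t = 0
        # group all observations tied at time t
        while i < len(order) and times[order[i]] == t:
            d += events[order[i]]
            n_t += 1
            i += 1
        if d:  # survival drops only at event times
            surv *= 1 - d / at_risk
            curve.append((t, surv))
        at_risk -= n_t
    return curve
```

For durations `[2, 3, 3, 5]` with event flags `[1, 1, 0, 1]`, the curve steps down at 2, 3, and 5 seconds, with the censored observation at t = 3 reducing the at-risk count without dropping the curve.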

21 pages, 5465 KB  
Article
Visual Attention to Food Bank Posters: Insights from an Exploratory Eye-Tracking Study
by Olga Grabowska-Chenczke, Anshu Rani, Ewelina Marek-Andrzejewska and Ewa Kiryluk-Dryjska
Behav. Sci. 2026, 16(3), 384; https://doi.org/10.3390/bs16030384 - 7 Mar 2026
Abstract
This exploratory eye-tracking study investigates how the emotional content of food bank advertisements influences food donor perception and visual attention. It does so by addressing a gap in the literature on eye-tracking applications in food donation contexts and social neuroscience. Visual attention represents a fundamental behavioural precursor to decision-making, yet its role in charitable communications remains underexplored. The objective of this research was to investigate how the content of food bank advertisements is associated with the way that potential food donors perceive food bank posters on a cognitive level. This study adopted a social neuroscience approach, using the methodology of eye-tracking to examine the visual attention patterns that form while viewing food bank posters. Participants (N = 96) viewed four posters varying in their emotional appeal, i.e., positive, neutral, negative and cognitive dissonance, while their eye movements were being recorded. Results revealed the robust attentional prioritisation of generic pictorial content over specific organisational logos or abstract symbols across all metrics and posters with large effect sizes (r = 0.69–0.87). It was found that pictures captured participants’ attention three to seven times faster than logos and also received two to seven times more fixations. The poster carrying a negative appeal elicited the strongest pictorial advantage, consistent with the negativity bias in attention allocation. Exploratory analysis found no significant correlation between participants’ past charitable behaviour and visual attention patterns, thus suggesting that the Picture Superiority Effect operates universally, regardless of individual past charitable behaviours. This is the first eye-tracking study examining donor-facing food bank communications in Poland, contributing to social neuroscience approaches in prosocial behaviour research. 
Findings suggest that charitable organisations should prioritise the inclusion of emotionally engaging pictures over logo prominence in their visual communications.

24 pages, 320 KB  
Review
Application of Eye Movement Analysis in Medicine: A Review Across Neurodevelopmental, Neurological, and Neurodegenerative Disorders
by Amnaduny Akhara Nurhasan and Paweł Kasprowski
Appl. Sci. 2026, 16(5), 2548; https://doi.org/10.3390/app16052548 - 6 Mar 2026
Abstract
Eye tracking has emerged as a valuable, non-invasive tool for identifying cognitive and motor abnormalities across a wide range of brain-related disorders. Recent studies have explored its utility in neurodevelopmental, neurological, and neurodegenerative conditions. This review synthesizes the findings of studies that apply eye movement analysis, including fixation patterns, saccades, scanpaths, and pupil dynamics, combined with machine learning (ML) and deep learning (DL) approaches for disease detection and classification. Particular attention is given to the design of eye-tracking tasks, feature extraction strategies, and algorithmic frameworks. Across clinical categories, models such as Support Vector Machines (SVM), random forests (RF), and Convolutional Neural Networks (CNN) have demonstrated promising diagnostic potential, with several studies reporting classification accuracies exceeding 80%, although performance varies depending on the task design, dataset characteristics, and validation methodology. These findings support the potential of eye movement-based biomarkers for early detection and clinical monitoring. Despite encouraging results, current research faces important limitations, including small sample sizes, a lack of standardization, and limited generalizability across populations. To advance clinical translation, future work should emphasize data augmentation, multimodal integration, external validation, and the use of explainable AI (XAI). Overall, eye movement analysis offers a scalable and objective pathway toward improving diagnostic precision in brain-related disorders.
(This article belongs to the Special Issue Eye Tracking Technology and Its Applications)
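One standard guard against the limited generalizability the review notes is subject-wise validation, so that no person's trials appear in both training and test sets. A minimal sketch of leave-one-subject-out (LOSO) splitting, a common choice in eye-movement ML studies; the function name is illustrative:

```python
def loso_splits(subject_ids):
    """Leave-one-subject-out cross-validation: for each subject, yield
    (held_out_subject, train_indices, test_indices) so that no subject
    contributes trials to both sides of the split. This avoids the
    optimistic bias of mixed-subject evaluation."""
    subjects = sorted(set(subject_ids))
    for held_out in subjects:
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield held_out, train, test
```

For trial-level subject labels `["a", "a", "b", "c", "c"]` this yields three folds, one per subject, with disjoint train and test index sets in each fold.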
25 pages, 16570 KB  
Article
Effective Flow Ratio: A Novel Efficiency Metric for Heterogeneous Traffic in a Signalized Urban Intersection with Aerial Computer Vision
by Abu Anas Ibn Samad, Tanvir Ahmed and Md Nazmul Huda
Big Data Cogn. Comput. 2026, 10(3), 80; https://doi.org/10.3390/bdcc10030080 - 6 Mar 2026
Abstract
Intelligent Transportation Systems (ITS) primarily rely on flow rate and occupancy to estimate traffic states. However, in heterogeneous traffic conditions characterized by weak lane discipline and diverse vehicle classes, these conventional metrics fail to capture the true operational efficiency of signalized intersections. High flow rates can mask underlying inefficiencies, while low flow rates do not necessarily indicate free-flow conditions. This paper introduces a novel computer vision-based metric, the Effective Flow Ratio (EFR), designed to quantify the actual discharge efficiency of mixed traffic. By leveraging Bird’s-Eye View (BEV) vehicle tracking using You Only Look Once version 11 (YOLOv11) and ByteTrack, EFR distinguishes between kinematic movement and effective discharge, resolving the ambiguity of “moving but not clearing” states. We analyze 21 days of continuous footage from a rooftop-mounted camera overlooking a congested intersection in Dhaka, Bangladesh, and find that EFR exhibits distinct non-linear behaviors compared to raw flow counts. Our results demonstrate that (i) flow rate and discharge efficiency are dynamically decoupled, evidenced by significant variance in EFR within identical flow bins; (ii) temporal rolling correlations reveal transient regimes where traditional signal control logic would misinterpret congestion severity; and (iii) EFR provides a more robust proxy for intersection performance than occupancy or volume alone. The proposed metric offers a granular, physics-informed input for next-generation adaptive traffic signal control in developing urban environments. Full article
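The abstract does not give the EFR formula, but the distinction it draws between vehicles that merely move and vehicles that actually clear the stop line can be sketched as a per-cycle ratio. Everything below (the formula, the `move_eps` threshold, and the trajectory format) is an assumption made purely for illustration, not the paper's definition:

```python
# Hypothetical "effective discharge vs. kinematic movement" ratio per signal
# cycle, computed from BEV vehicle tracks (e.g., YOLOv11 + ByteTrack output).

def effective_flow_ratio(tracks, stop_line_y, move_eps=1.0):
    """tracks: dict id -> list of y positions (BEV metres, increasing = forward)."""
    moved = cleared = 0
    for ys in tracks.values():
        if ys[-1] - ys[0] > move_eps:          # vehicle moved at all this cycle
            moved += 1
            if ys[0] < stop_line_y <= ys[-1]:  # vehicle actually crossed the line
                cleared += 1
    return cleared / moved if moved else 0.0

tracks = {
    1: [0.0, 4.0, 12.0],   # moved and cleared the stop line at y = 10
    2: [2.0, 5.0, 8.0],    # moved but did not clear ("moving but not clearing")
    3: [6.0, 6.0, 6.0],    # stationary
}
efr = effective_flow_ratio(tracks, stop_line_y=10.0)
```

Under this toy reading, a platoon creeping forward without discharging yields a high flow of movement but a low EFR, which is the decoupling the paper's results describe.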
(This article belongs to the Special Issue AI, Computer Vision and Human–Robot Interaction)
34 pages, 4142 KB  
Article
Subject-Independent Multimodal Interaction Modeling for Joint Emotion and Immersion Estimation in Virtual Reality
by Haibing Wang and Mujiangshan Wang
Symmetry 2026, 18(3), 451; https://doi.org/10.3390/sym18030451 - 6 Mar 2026
Abstract
Virtual Reality (VR) has emerged as a powerful medium for immersive human–computer interaction, where users’ emotional and experiential states play a pivotal role in shaping engagement and perception. However, existing affective computing approaches often model emotion recognition and immersion estimation as independent problems, overlooking their intrinsic coupling and the structured relationships underlying multimodal physiological signals. In this work, we propose a modality-aware multi-task learning framework that jointly models emotion recognition and immersion estimation from a graph-structured and symmetry-aware interaction perspective. Specifically, heterogeneous physiological and behavioral modalities, including eye-tracking, electrocardiogram (ECG), and galvanic skin response (GSR), are treated as relational components with structurally symmetric encoding and fusion mechanisms. Their cross-modality dependencies are adaptively aggregated to preserve interaction symmetry at the representation level, while weighted multi-task learning introduces controlled asymmetry at the task-optimization level, all without explicit graph neural network architectures. To support reproducible evaluation, the VREED dataset is further extended with quantitative immersion annotations derived from presence-related self-reports via weighted aggregation and factor analysis. Extensive experiments demonstrate that the proposed framework consistently outperforms recurrent, convolutional, and Transformer-based baselines. Compared with the strongest Transformer baseline, the proposed framework yields consistent relative performance gains of approximately 3–7% on emotion recognition metrics and reduces immersion estimation errors by nearly 9%. Beyond empirical improvements, this study provides a structured interpretation of multimodal affective modeling that highlights symmetry, coupling, and controlled symmetry breaking in multi-task learning, offering a principled foundation for adaptive VR systems, emotion-driven personalization, and dynamic user experience optimization. Full article
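The "controlled asymmetry via weighted multi-task learning" idea can be sketched as one shared representation feeding two task heads, with the joint loss weighting the tasks unequally. All shapes, weights, and loss choices below are invented for illustration; this is not the paper's architecture:

```python
# Minimal multi-task loss sketch: a shared encoder over fused multimodal
# features, two task heads (emotion, immersion), and an asymmetric weight
# alpha that breaks the symmetry between the two task objectives.
import numpy as np

rng = np.random.default_rng(0)
W_shared = rng.normal(size=(6, 4))                     # shared encoder, 6-dim fused input
w_emo, w_imm = rng.normal(size=4), rng.normal(size=4)  # two task heads

def joint_loss(x, y_emo, y_imm, alpha=0.7):
    """Weighted sum of an emotion loss and an immersion loss (alpha != 0.5 breaks symmetry)."""
    h = np.tanh(x @ W_shared)            # shared representation of fused modalities
    loss_emo = (h @ w_emo - y_emo) ** 2  # toy emotion objective
    loss_imm = (h @ w_imm - y_imm) ** 2  # toy immersion objective
    return alpha * loss_emo + (1 - alpha) * loss_imm

x = rng.normal(size=6)                   # stand-in for fused eye/ECG/GSR features
total = joint_loss(x, y_emo=1.0, y_imm=0.3)
```

In this framing, the shared encoder plays the role of the symmetric fusion stage, while `alpha` supplies the controlled task-level asymmetry the abstract describes.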
(This article belongs to the Section Computer)
17 pages, 1869 KB  
Article
Simultaneous Analysis of Microsaccades and Pupil Size Variations in Age-Related Cognitive Impairment Using Eye-Tracking Technology
by Seokjun Oh, Tahsin Nairuz, Sung-Jun Park and Jong-Ha Lee
J. Eye Mov. Res. 2026, 19(2), 29; https://doi.org/10.3390/jemr19020029 - 5 Mar 2026
Abstract
Age-related cognitive impairment represents a critical stage in the continuum of neurodegenerative disorders, including Alzheimer’s disease (AD), highlighting the need for objective and non-invasive physiological indicators of early neurological change. This study investigates the simultaneous analysis of microsaccadic eye movements and pupil size variations as ocular biomarkers associated with age-related cognitive impairment using eye-tracking technology. A total of 70 participants were recruited and categorized into three age groups: individuals in their 20s, 60s, and 70s. Participants in their 70s were further categorized based on MMSE-K scores into cognitively normal (≥24) and impaired (≤23) subgroups. Quantitative analyses showed a significant age-related increase in microsaccade frequency along both axes, with significantly higher microsaccade frequencies (p < 0.01) among individuals with lower cognitive scores within the same age group. Pupil size variation, including constriction and dilation rates, declined with age, while response speed remained relatively unchanged across all age groups. These findings highlight a clear association between age-related cognitive decline and involuntary ocular responses. The proposed dual-biomarker method offers a non-invasive and quantitative framework that may complement traditional cognitive screening tools. Future studies involving larger cohorts and clinically diagnosed AD populations are required to determine the diagnostic utility of these ocular biomarkers. Full article
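Constriction and dilation rates, two of the pupil measures the abstract reports as declining with age, can be read off a diameter trace as the steepest negative and positive slopes. The sampling rate and trace below are synthetic; the study's actual processing pipeline is not described here:

```python
# Illustrative sketch: constriction/dilation rates from a pupil-diameter trace.

def pupil_rates(diam_mm, fs_hz):
    """Return (constriction_rate, dilation_rate) in mm/s from diameter samples."""
    slopes = [(b - a) * fs_hz for a, b in zip(diam_mm, diam_mm[1:])]
    constriction_rate = -min(slopes)  # steepest shrink, reported as positive
    dilation_rate = max(slopes)       # steepest re-dilation
    return constriction_rate, dilation_rate

# Synthetic light-reflex trace: 4 mm baseline, constricts to 3 mm, re-dilates.
trace = [4.0, 4.0, 3.6, 3.2, 3.0, 3.0, 3.2, 3.5, 3.8, 4.0]
con, dil = pupil_rates(trace, fs_hz=30)
```

Combined with a microsaccade count from the same recording, these two numbers form the kind of dual-biomarker feature pair the study analyzes across age and MMSE-K groups.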
Show Figures

Figure 1
