Search Results (1,399)

Search Parameters:
Keywords = Gaze

15 pages, 4553 KB  
Article
From Initial to Situational Automation Trust: The Interplay of Personality, Interpersonal Trust, and Trust Calibration in Young Males
by Menghan Tang, Tianjiao Lu and Xuqun You
Behav. Sci. 2026, 16(2), 176; https://doi.org/10.3390/bs16020176 - 26 Jan 2026
Abstract
To understand human–machine interactions, we adopted a framework that distinguishes between stable individual differences (enduring personality/interpersonal traits), initial trust (pre-interaction expectations), and situational trust (dynamic calibration via gaze and behavior). A driving simulator experiment was conducted with 30 male participants to investigate trust calibration across three levels: manual (Level 0), semi-automated (Level 2, requiring monitoring), and fully automated (Level 4, system handles tasks). We combined eye tracking (pupillometry/fixations) with the Eysenck Personality Questionnaire (EPQ) and Interpersonal Trust Scale (ITS). Results indicated that semi-automation yielded a higher hazard detection sensitivity (d′ = 0.81) but induced greater physiological costs (pupil diameter, ηp² = 0.445) compared to manual driving. A mediation analysis confirmed that neuroticism was associated with initial trust specifically through interpersonal trust. Critically, despite lower initial trust, young male individuals with high interpersonal trust exhibited slower reaction times in the semi-automated mode (B = 0.60, p = 0.035), revealing a “social complacency” effect where social faith paradoxically predicted lower behavioral readiness. Based on these findings, we propose that situational trust is a multi-layer calibration process involving dissociated attentional and behavioral mechanisms, suggesting that such “wary but complacent” drivers require adaptive HMI interventions.
(This article belongs to the Topic Personality and Cognition in Human–AI Interaction)
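For readers unfamiliar with the d′ statistic quoted above: in signal detection theory it is the difference of the z-transformed hit and false-alarm rates. A minimal sketch in Python; the rates below are hypothetical, since the abstract reports only the final d′ = 0.81.

```python
# Illustrative d-prime (d') computation for hazard detection sensitivity.
# Hit/false-alarm rates are hypothetical; the study reports only d' = 0.81.
from scipy.stats import norm

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: z(H) - z(FA)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(0.70, 0.40))  # ~0.78, on the scale of the reported 0.81
```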
17 pages, 8025 KB  
Article
Quantitative Analysis of Smooth Pursuit and Saccadic Eye Movements in Multiple Sclerosis
by Pavol Skacik, Lucia Kotulova, Ema Kantorova, Egon Kurca and Stefan Sivak
Neurol. Int. 2026, 18(2), 22; https://doi.org/10.3390/neurolint18020022 - 26 Jan 2026
Abstract
Introduction: Multiple sclerosis (MS) is a chronic inflammatory and neurodegenerative disease of the central nervous system, frequently associated with visual and oculomotor disturbances. Quantitative analysis of eye movements represents a non-invasive method for assessing central nervous system dysfunction beyond conventional imaging; however, the diagnostic and predictive value of oculomotor metrics remains insufficiently defined. Objectives: The aims of this study were to compare smooth pursuit gain and reflexive saccade parameters (latency, velocity, and precision) between individuals with MS and healthy controls, and to evaluate their ability to discriminate disease status. Methods: This cross-sectional study included 46 clinically stable patients with MS (EDSS ≤ 6.5) and 46 age- and sex-matched healthy controls. Oculomotor function was assessed using videonystagmography under standardized conditions. Group differences across horizontal and vertical gaze directions were analyzed using linear mixed-effects models. Random forest models were applied to assess the discriminative performance of oculomotor parameters, with permutation-based feature importance and receiver operating characteristic (ROC) curve analysis. Results: Patients with MS showed significantly reduced smooth pursuit gain across most horizontal and vertical directions compared with controls. Saccadic latency was significantly prolonged in all tested movement directions. Saccadic velocity exhibited selective directional impairment consistent with subtle medial longitudinal fasciculus involvement, whereas saccadic precision did not differ significantly between groups. A random forest model combining pursuit and saccadic parameters demonstrated only moderate discriminative performance between MS patients and controls (AUC = 0.694), with saccadic latency contributing most strongly to classification. Conclusions: Quantitative eye-movement assessment revealed widespread oculomotor abnormalities in MS, particularly reduced smooth pursuit gain and prolonged saccadic latency. Although the overall discriminative accuracy of oculomotor parameters was limited, these findings support their potential role as complementary markers of central nervous system dysfunction. Further longitudinal and multimodal studies are required to clarify their clinical relevance and prognostic value.
(This article belongs to the Special Issue Advances in Multiple Sclerosis, Third Edition)
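The discrimination analysis named here (random forest, permutation-based feature importance, ROC AUC) can be sketched with scikit-learn. The features and labels below are synthetic stand-ins, not the study's data, so the resulting AUC is meaningless beyond demonstrating the pipeline.

```python
# Sketch of the classification pipeline described in the abstract:
# random forest on oculomotor features, permutation importance, ROC AUC.
# Feature names and data are synthetic stand-ins, not the study's dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(92, 3))      # e.g., pursuit gain, saccade latency, velocity
y = rng.integers(0, 2, size=92)   # 0 = control, 1 = MS (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
imp = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
print(auc, imp.importances_mean)  # chance-level here, since labels are random
```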

17 pages, 1688 KB  
Article
A Comparison of Centroid Tracking and Image Phase for Improved Optokinetic Nystagmus Detection
by Jason Turuwhenua, Mohammad Norouzifard, Zaw LinTun, Misty Edmonds, Rebecca Findlay, Joanna Black and Benjamin Thompson
J. Eye Mov. Res. 2026, 19(1), 12; https://doi.org/10.3390/jemr19010012 - 26 Jan 2026
Abstract
Optokinetic nystagmus (OKN) is an involuntary sawtooth eye movement that occurs in the presence of a drifting stimulus. Our experience is that low-amplitude/short-duration OKN can challenge the limits of our commercially available Pupil Neon eye-tracker, leading to false negative OKN detection results. We sought to investigate whether such instances could be remediated. We compared automated OKN detection using: (1) the gaze signal from the Pupil Neon (OKN-G), (2) centroid tracking (OKN-C), and (3) an image-phase-based “motion microscopy” technique (OKN-MMIC). The OKN-C and OKN-MMIC methods were also tested as a remediation step after a negative OKN-G result (OKN-C-STEP, OKN-MMIC-STEP). To validate the approaches, data were collected from adults (n = 22) with normal visual acuity whilst they viewed trials of an OKN induction stimulus shown at four levels of visibility. Confusion matrices and performance measures were determined for a “main” dataset that included all methods, and a “retest” set, which contained instances where centroid tracking failed. For the main set, all tested methods improved upon OKN-G by Matthews correlation coefficient (0.80–0.85 vs. 0.76), sensitivity (0.89–0.95 vs. 0.85), and accuracy (0.91–0.93 vs. 0.88); but only OKN-C yielded better specificity (0.90–0.96 vs. 0.95). For the retest set, MMIC and MMIC-STEP methods consistently improved upon the performance of OKN-G across all measures.
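The performance measures quoted above all derive from a detection confusion matrix. A sketch with hypothetical counts (not the paper's data):

```python
# How the reported performance measures follow from a detection confusion
# matrix; the counts below are hypothetical, not the paper's data.
import math

tp, fn, fp, tn = 90, 10, 5, 95   # hypothetical OKN present/absent counts

sensitivity = tp / (tp + fn)                     # 0.90
specificity = tn / (tn + fp)                     # 0.95
accuracy = (tp + tn) / (tp + fn + fp + tn)       # 0.925
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
)                                                # ~0.85
print(sensitivity, specificity, accuracy, mcc)
```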

28 pages, 3176 KB  
Article
Processing Data Visualizations with Seductive Details Using AI-Enabled Analysis of Eye Movement Saliency Maps
by Kristine Zlatkovic, Pavlo Antonenko, Do Hyong Koh and Poorya Shidfar
AI Educ. 2026, 2(1), 1; https://doi.org/10.3390/aieduc2010001 - 22 Jan 2026
Abstract
Understanding how learners process data visualizations with seductive details is essential for improving comprehension and engagement. This study examined the influence of task-relevant and task-irrelevant seductive details on attentional distribution and comprehension in the context of data story learning, using COVID-19 data visualizations as experimental materials. A gaze-based methodology was applied, using eye-movement data and saliency maps to visualize learners’ attentional patterns while processing bar graphs with varying embellishments. Results showed that task-relevant seductive details supported comprehension for learners with higher visuospatial abilities by guiding attention toward textual information, while task-irrelevant details hindered comprehension, particularly for those with lower visuospatial abilities who focused disproportionately on visual elements. Working memory capacity emerged as a significant predictor of attentional distribution. Additionally, repeated exposure to data visualizations enhanced participants’ ability to recognize visualization types, improving efficiency and reducing reliance on legends and supplementary text. Overall, this study highlights the cognitive mechanisms underlying visualization processing in data story learning and provides practical implications for education, human–computer interaction, and adaptive technology design, emphasizing the importance of tailoring visualization strategies to individual learner differences.
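A minimal sketch of how an eye-movement saliency map of the kind used here can be built: accumulate duration-weighted fixations onto an image grid and smooth with a Gaussian. The image size, fixation data, and smoothing sigma are assumptions, not the study's parameters.

```python
# Minimal sketch of building an eye-movement saliency (attention) map from
# fixation coordinates by Gaussian smoothing; all parameters are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

W, H = 800, 600
fixations = [(200, 150, 0.30), (420, 300, 0.55)]  # x, y, duration in seconds

heat = np.zeros((H, W))
for x, y, dur in fixations:
    heat[int(y), int(x)] += dur              # weight each fixation by duration
saliency = gaussian_filter(heat, sigma=40)   # sigma chosen for illustration
saliency /= saliency.max()                   # normalize to [0, 1]
```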

14 pages, 15350 KB  
Article
Inspecting the Retina: Oculomotor Patterns and Accuracy in Fundus Image Interpretation by Novice Versus Experienced Eye Care Practitioners
by Suraj Upadhyaya
J. Eye Mov. Res. 2026, 19(1), 11; https://doi.org/10.3390/jemr19010011 - 21 Jan 2026
Abstract
Visual search behavior, influenced by expertise, prior knowledge, training, and visual fatigue, is crucial in ophthalmic diagnostics. This study investigates differences in eye-tracking strategies between novice and experienced eye care practitioners during fundus image interpretation. Forty-seven participants, including 37 novices (first- to fourth-year optometry students) and 10 experienced optometrists (≥2 years of experience), viewed 20 fundus images (10 normal, 10 abnormal) while their eye movements were recorded using an EyeLink 1000 Plus gaze tracker (2000 Hz). Diagnostic and laterality accuracy were assessed, and statistical analyses were conducted using SigmaPlot 12.0. Results showed that experienced practitioners had significantly higher diagnostic accuracy (83 ± 6.3%) than novices (70 ± 12.9%, p < 0.005). Significant differences in oculomotor behavior were observed, including median latency (p < 0.001), while no significant differences were found in median peak velocity (p = 0.11) or laterality accuracy (p = 0.97). Diagnostic accuracy correlated with fixation count in novices (r = 0.54, p < 0.001), while laterality accuracy correlated with total dwelling time (r = −0.62, p < 0.005). The experienced practitioners demonstrated systematic and focused visual search patterns, whereas the novices exhibited unorganized scan paths. Enhancing training with visual feedback could improve fundus image analysis accuracy in novice clinicians.
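As background to the latency and peak-velocity measures above, a sketch of how such saccade metrics are commonly extracted from raw gaze samples at 2000 Hz. The synthetic trace and the 30 deg/s velocity threshold are illustrative assumptions, not the study's processing pipeline.

```python
# Sketch of extracting saccade latency and peak velocity from raw gaze
# samples, as a 2000 Hz tracker would provide; thresholds are assumptions.
import numpy as np

fs = 2000.0                                    # sampling rate (Hz)
gaze = np.cumsum(np.random.randn(4000)) / 50   # synthetic 1-D gaze trace (deg)

vel = np.abs(np.gradient(gaze)) * fs           # velocity in deg/s
moving = vel > 30.0                            # common saccade threshold
onset = np.argmax(moving)                      # first supra-threshold sample
latency_ms = onset / fs * 1000.0               # relative to trial start
peak_velocity = vel.max()
print(latency_ms, peak_velocity)
```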

14 pages, 1920 KB  
Article
Effects of Physical Activity Level on Microsaccade Dynamics During Optic Flow Stimulation in Adults with Type 2 Diabetes
by Milena Raffi, Alessandra Laffi, Andrea Meoni, Michela Persiani, Lucia Brodosi, Alba Nicastri, Maria Letizia Petroni and Alessandro Piras
Biomedicines 2026, 14(1), 231; https://doi.org/10.3390/biomedicines14010231 - 21 Jan 2026
Abstract
Background: Microsaccades are small fixational eye movements tightly linked to attention and oculomotor control. Although diabetes mellitus is associated with retinal and neural alterations that may impair visuomotor function, the influence of physical activity on microsaccade behaviour in individuals with type 2 diabetes mellitus (T2DM) remains unknown. This study investigated whether habitual physical activity modulates microsaccade characteristics during fixation under different optic flow stimuli. Given that optic flow engages motion processing and gaze stabilisation pathways that may be affected by diabetes-related microvascular/neural changes, it can reveal subtle visuomotor alterations during fixation. Methods: Twenty-eight adults with T2DM and no diagnosed retinopathy performed a fixation task while viewing optic flow stimuli made of moving dots. Eye movements were recorded using an EyeLink system. Physical activity behaviour was assessed at baseline and at a 6-month follow-up after a low-threshold aerobic circuit training programme. Classification as physically active (≥600 MET-min/week) or inactive (<600 MET-min/week) was based on the 6-month assessment. Microsaccade characteristics were analysed by repeated-measures ANOVA. Results: Microsaccade rate was modulated by optic flow (p = 0.044, ηp² = 0.106) and showed a significant stimulus × group × sex interaction (p = 0.005, ηp² = 0.163), indicating sex-dependent differences in how optic flow modulated microsaccade rate across physically active and inactive participants. A time × stimulus interaction effect was found in peak velocity (p = 0.03, ηp² = 0.114) and amplitude (p = 0.02, ηp² = 0.127), consistent with modest context-dependent changes over time. Conclusions: These findings suggest that physical activity modulates microsaccade generation and support the potential of microsaccade metrics as sensitive indicators of oculomotor function in diabetes.
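Microsaccade detection is commonly done with an adaptive velocity threshold in the spirit of Engbert and Kliegl (2003); the abstract does not state the exact algorithm used, so the sketch below is illustrative rather than a reproduction of the study's analysis.

```python
# Sketch of velocity-threshold microsaccade detection in the spirit of
# Engbert & Kliegl (2003); not the study's exact algorithm.
import numpy as np

def microsaccade_mask(x, y, fs=1000.0, lam=6.0):
    """Boolean mask of samples exceeding a median-based velocity ellipse."""
    vx, vy = np.gradient(x) * fs, np.gradient(y) * fs
    # Robust (median-based) velocity standard deviations, per axis
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    return (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0
```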

11 pages, 463 KB  
Article
Comparison of the Surgical Treatment for Strabismus According to Its Type: Esotropia Versus Exotropia
by Antonio Martínez-Abad, Ana Siverio-Colomina, Maria Alejandra Amesty, Rosa Díez-de-la-Uz and Mario Cantó-Cerdán
J. Clin. Med. 2026, 15(2), 795; https://doi.org/10.3390/jcm15020795 - 19 Jan 2026
Abstract
Background: The direction of deviation in strabismus may influence the predictability of the surgical procedure, but this factor remains insufficiently investigated. The aim of this study was to compare postoperative changes in ocular deviation, measured by video oculography, following surgical treatment in patients with concomitant exotropia and esotropia. Methods: A prospective longitudinal study included 49 patients with horizontal strabismus. All patients underwent an eye examination before and after surgery, with ocular deviation measured in nine gaze positions using video oculography. Preoperative and postoperative results were analyzed separately for esotropias and exotropias to assess surgical efficacy in both conditions. Results: Ocular deviation significantly improved after strabismus surgery in both esotropia and exotropia across all nine gaze positions (p < 0.05). The greatest improvement was observed in the primary position, with an efficacy rate of 75% in exotropia (mean reduction of 14.93 prism diopters) and 78% in esotropia (mean reduction of 17.50 prism diopters). Residual postoperative deviation was similar between the two types of strabismus (p > 0.05). In non-primary gaze positions, surgical efficacy was lower—particularly during complex eye movements—in both groups. Conclusions: Strabismus surgery resulted in a significant reduction in ocular deviation across all gaze positions in patients with concomitant horizontal strabismus, as objectively assessed by video oculography. Postoperative improvements were comparable between exotropia and esotropia, with the highest surgical efficacy observed in the primary gaze position. These findings support the use of objective multigaze evaluation to more comprehensively characterize postoperative alignment and to inform future assessments of surgical outcomes.
(This article belongs to the Special Issue Clinical Investigations into Diagnosing and Managing Strabismus)
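The deviations above are reported in prism diopters (PD). For reference, the standard conversion from an angle of deviation is PD = 100 * tan(theta); a one-liner, with an illustrative angle chosen to land near the reported exotropia reduction:

```python
# Standard conversion from degrees of ocular deviation to prism diopters.
import math

def deg_to_pd(theta_deg: float) -> float:
    return 100.0 * math.tan(math.radians(theta_deg))

print(deg_to_pd(8.5))   # ~14.9 PD, the scale of the exotropia reduction above
```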

34 pages, 7495 KB  
Article
Advanced Consumer Behaviour Analysis: Integrating Eye Tracking, Machine Learning, and Facial Recognition
by José Augusto Rodrigues, António Vieira de Castro and Martín Llamas-Nistal
J. Eye Mov. Res. 2026, 19(1), 9; https://doi.org/10.3390/jemr19010009 - 19 Jan 2026
Abstract
This study presents DeepVisionAnalytics, an integrated framework that combines eye tracking, OpenCV-based computer vision (CV), and machine learning (ML) to support objective analysis of consumer behaviour in visually driven tasks. Unlike conventional self-reported surveys, which are prone to cognitive bias, recall errors, and social desirability effects, the proposed approach relies on direct behavioural measurements of visual attention. The system captures gaze distribution and fixation dynamics during interaction with products or interfaces. It uses AOI-level eye tracking metrics as the sole behavioural signal to infer candidate choice under constrained experimental conditions. In parallel, OpenCV and ML perform facial analysis to estimate demographic attributes (age, gender, and ethnicity). These attributes are collected independently and linked post hoc to gaze-derived outcomes. Demographics are not used as predictive features for choice inference. Instead, they are used as contextual metadata to support stratified, segment-level interpretation. Empirical results show that gaze-based inference closely reproduces observed choice distributions in short-horizon, visually driven tasks. Demographic estimates enable meaningful post hoc segmentation without affecting the decision mechanism. Together, these results show that multimodal integration can move beyond descriptive heatmaps. The platform produces reproducible decision-support artefacts, including AOI rankings, heatmaps, and segment-level summaries, grounded in objective behavioural data. By separating the decision signal (gaze) from contextual descriptors (demographics), this work contributes a reusable end-to-end platform for marketing and UX research. It supports choice inference under constrained conditions and segment-level interpretation without demographic priors in the decision mechanism.
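The gaze-only choice inference described here reduces, in its simplest form, to aggregating dwell time per area of interest (AOI) and ranking. The AOI names and durations below are illustrative, not the framework's actual data model.

```python
# Minimal sketch of gaze-based choice inference from AOI dwell times:
# rank AOIs by total fixation duration and take the top one.
# AOI names and fixation data are illustrative.
fixations = [("product_A", 0.42), ("product_B", 0.18), ("product_A", 0.35)]

dwell = {}
for aoi, dur in fixations:
    dwell[aoi] = dwell.get(aoi, 0.0) + dur       # accumulate dwell per AOI

ranking = sorted(dwell.items(), key=lambda kv: kv[1], reverse=True)
predicted_choice = ranking[0][0]                 # "product_A"
print(ranking, predicted_choice)
```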

18 pages, 800 KB  
Article
Gaze-Speech Coordination During Narration in Autism Spectrum Disorder and First-Degree Relatives
by Jiayin Xing, Joseph C. Y. Lau, Kritika Nayar, Emily Landau, Mitra Kumareswaran, Marcia Grabowecky and Molly Losh
Brain Sci. 2026, 16(1), 107; https://doi.org/10.3390/brainsci16010107 - 19 Jan 2026
Abstract
Background/Objectives: Narrative differences in autism spectrum disorder (ASD) and subtle, parallel differences among first-degree relatives of autistic individuals suggest potential genetic liability to this critical social-communication skill. Effective social-communication relies on coordinating signals across modalities, which is often disrupted in ASD. Therefore, the current study examined the coordination of fundamental skills—gaze and speech—as a potential mechanism underlying narrative and broader pragmatic differences in autistic individuals and their first-degree relatives. Methods: Participants included 35 autistic individuals, 41 non-autistic individuals, 90 parents of autistic individuals, and 34 parents of non-autistic individuals. Participants narrated a wordless picture book presented on an eye-tracker, with gaze and speech simultaneously recorded and subsequently coded. Time series analyses quantified their temporal coordination (i.e., the temporal lead of gaze to speech) and content coordination (i.e., the amount of gaze-speech content correspondence). These metrics were then compared between autistic and non-autistic groups and between parent groups and examined in relation to narrative quality and conversational pragmatic language skills. Results: Autistic individuals showed reduced temporal coordination but increased content coordination relative to non-autistic individuals, with no significant differences found between parent groups. In both autistic individuals and the combined parent groups, increased content coordination and reduced temporal coordination were linked to reduced narrative quality and pragmatic language skills, respectively. Conclusions: Reduced temporal and increased content coordination may reflect a localized strategy of labeling items upon visualization. This pattern may indicate more limited visual, linguistic, and cognitive processing and underlie differences in higher-level social-communicative abilities in ASD. To our knowledge, this study is the first to identify multimodal skill coordination as a potential mechanism contributing to higher-level social-communicative differences in ASD and first-degree relatives, implicating mechanism-based interventions to support pragmatic language skills in ASD.
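One plausible way to quantify a "temporal lead of gaze to speech" is the lag that maximizes the cross-correlation of gaze and speech event series. This is an illustration of the idea, not necessarily the paper's exact time-series method; the two binary series are synthetic.

```python
# Illustrative gaze-speech lag estimation via cross-correlation of two
# binary event series; synthetic data, not the paper's method or corpus.
import numpy as np

gaze = np.array([0, 1, 1, 0, 0, 1, 1, 0, 0, 0])    # gaze on a referent
speech = np.array([0, 0, 0, 1, 1, 0, 0, 1, 1, 0])  # referent named in speech

xcorr = np.correlate(gaze - gaze.mean(), speech - speech.mean(), "full")
lag = np.argmax(xcorr) - (len(gaze) - 1)   # negative => gaze leads speech
print(lag)                                 # -2 here: gaze leads by 2 samples
```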

13 pages, 455 KB  
Article
Eye Gaze Detection Using a Hybrid Multimodal Deep Learning Model for Assistive Technology
by Verdzekov Emile Tatinyuy, Noumsi Woguia Auguste Vigny, Mvogo Ngono Joseph, Fono Louis Aimé and Wirba Pountianus Berinyuy
Appl. Sci. 2026, 16(2), 986; https://doi.org/10.3390/app16020986 - 19 Jan 2026
Abstract
This paper presents a novel hybrid multimodal deep learning model for robust and real-time eye gaze estimation. Accurate gaze tracking is essential for advancing human–computer interaction (HCI) and assistive technologies, but existing methods often struggle with environmental variations, require extensive calibration, and are computationally intensive. Our proposed model, GazeNet-HM, addresses these limitations by synergistically fusing features from RGB, depth, and infrared (IR) imaging modalities. This multimodal approach allows the model to leverage complementary information: RGB provides rich texture, depth offers invariance to lighting and aids pose estimation, and IR ensures robust pupil detection. Furthermore, we introduce a personalized adaptation module that dynamically fine-tunes the model to individual users with minimal calibration data. To ensure practical deployment, we employ advanced model compression techniques, enabling real-time inference on resource-constrained embedded systems. Extensive evaluations on public datasets (MPIIGaze, EYEDIAP, Gaze360) and our collected M-Gaze dataset demonstrate that GazeNet-HM achieves state-of-the-art performance, reducing the mean angular error by up to 27.1% compared to leading unimodal methods. After model compression, the system achieves a real-time inference speed of 32 FPS on an embedded Jetson Xavier NX platform. Ablation studies confirm the contribution of each modality and component, highlighting the effectiveness of our holistic design.
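The headline metric here, mean angular error, is the average angle between predicted and ground-truth 3-D gaze vectors. A minimal implementation follows; the batch variables in the usage comment are hypothetical placeholders.

```python
# Mean angular error between predicted and ground-truth gaze vectors,
# the standard metric for gaze estimation benchmarks.
import numpy as np

def angular_error_deg(pred: np.ndarray, true: np.ndarray) -> np.ndarray:
    """Per-sample angle (degrees) between rows of two (N, 3) arrays."""
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    true = true / np.linalg.norm(true, axis=1, keepdims=True)
    cos = np.clip(np.sum(pred * true, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

# mean_err = angular_error_deg(pred_batch, true_batch).mean()  # hypothetical
```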

16 pages, 255 KB  
Article
Beyond Heideggerian Gelassenheit and Lichtungen: Christian Thought in Terrence Malick’s The Thin Red Line
by Sixto J. Castro
Religions 2026, 17(1), 110; https://doi.org/10.3390/rel17010110 - 17 Jan 2026
Abstract
The Thin Red Line is a film by Terrence Malick that is usually read in a Heideggerian key, due precisely to the intellectual formation of the author, who was a professor of phenomenology and translator of Heidegger before becoming a filmmaker. However, read in the light of some of his later works, it can be seen as an oblique preamble for the manifest theism that The Tree of Life and A Hidden Life, two manifestly 21st-century religious films, unfold. In The Thin Red Line, Malick gives cinematographic form to some Heideggerian concepts in order to go beyond Heideggerian post-Christian philosophy and make the viewers adopt a mystical gaze that allows them to contemplate creation from a point of view that is neither utilitarian nor technical, but rather characterised by the perspective of Gelassenheit. A religious reading of this Heideggerian idea allows access to Heidegger’s source, which is Meister Eckhart, who is as present in Malick’s film(s) as Heideggerian philosophy itself.
(This article belongs to the Special Issue Religion and Film in the 21st Century: Perspectives and Challenges)
27 pages, 13508 KB  
Article
Investigating XR Pilot Training Through Gaze Behavior Analysis Using Sensor Technology
by Aleksandar Knežević, Branimir Krstić, Aleksandar Bukvić, Dalibor Petrović and Boško Rašuo
Aerospace 2026, 13(1), 97; https://doi.org/10.3390/aerospace13010097 - 16 Jan 2026
Abstract
This research aims to characterize extended reality flight trainers and to provide a detailed account of the sensors employed to collect data essential for qualitative task performance analysis, with a particular focus on gaze behavior within the extended reality environment. A comparative study was conducted to evaluate the effectiveness of an extended reality environment relative to traditional flight simulators. Eight flight instructor candidates, advanced pilots with comparable flight-hour experience, were divided into four groups based on airplane or helicopter type and cockpit configuration (analog or digital). In the traditional simulator, fixation numbers, dwell time percentages, revisit numbers, and revisit time percentages were recorded, while in the extended reality environment, the following metrics were analyzed: fixation numbers and durations, saccade numbers and durations, smooth pursuits and durations, and number of blinks. These eye-tracking parameters were evaluated alongside flight performance metrics across all trials. Each scenario involved a takeoff and initial climb task within the traffic pattern of a fixed-wing aircraft. Despite the diversity of pilot groups, no statistically significant differences were observed in either flight performance or gaze behavior metrics between the two environments. Moreover, differences identified between certain pilot groups within one scenario were consistently observed in another, indicating the sensitivity of the proposed evaluation procedure. The enhanced realism and validated effectiveness are therefore crucial for establishing standards that support the formal adoption of extended reality technologies in pilot training programs. Integrating this digital space significantly enhances the overall training experience and provides a higher level of simulation fidelity for next-generation cadet training.
(This article belongs to the Special Issue New Trends in Aviation Development 2024–2025)
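The traditional-simulator metrics listed above (dwell-time percentages, revisit numbers) can be computed from a fixation sequence as sketched below; the AOI names and durations are illustrative, not the study's cockpit instrument definitions.

```python
# Sketch of dwell-time percentage and revisit count per AOI from a
# fixation sequence; AOI names and durations are illustrative.
fixseq = [("PFD", 0.4), ("PFD", 0.3), ("speed", 0.2), ("PFD", 0.5)]

dwell, revisits, last = {}, {}, None
for aoi, dur in fixseq:
    dwell[aoi] = dwell.get(aoi, 0.0) + dur
    if aoi != last:                               # a new visit to this AOI
        revisits[aoi] = revisits.get(aoi, -1) + 1 # first visit not a revisit
    last = aoi

total = sum(dwell.values())
dwell_pct = {a: 100 * t / total for a, t in dwell.items()}
print(dwell_pct, revisits)   # PFD ~85.7% dwell, 1 revisit; speed 0 revisits
```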

20 pages, 4891 KB  
Article
Active Inference Modeling of Socially Shared Cognition in Virtual Reality
by Yoshiko Arima and Mahiro Okada
Sensors 2026, 26(2), 604; https://doi.org/10.3390/s26020604 - 16 Jan 2026
Abstract
This study proposes a process model for sharing ambiguous category concepts in virtual reality (VR) using an active inference framework. The model executes a dual-layer Bayesian update after observing both self and partner actions and predicts actions that minimize free energy. To incorporate agreement-seeking with others into active inference, we added disagreement in category judgments as a risk term in the free energy, weighted by gaze synchrony measured using Dynamic Time Warping (DTW), which is assumed to reflect joint attention. To validate the model, an object classification task in VR including ambiguous items was created. The experiment was conducted first under a bot avatar condition, in which ambiguous category judgments were always incorrect, and then under a human–human pair condition. This design allowed verification of the collaborative learning process by which human pairs reached agreement from the same degree of ambiguity. Analysis of experimental data from 14 participants showed that the model achieved high prediction accuracy for observed values as learning progressed. Introducing gaze synchrony weighting (γ0 = 0.5) further improved prediction accuracy, yielding optimal performance. This approach provides a new framework for modeling socially shared cognition using active inference in human–robot interaction contexts.
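Dynamic Time Warping, used above to measure gaze synchrony, has a compact textbook formulation; the sketch below is the standard O(nm) dynamic program for two 1-D series, not the authors' implementation.

```python
# Textbook dynamic-time-warping (DTW) distance between two 1-D series,
# as commonly used to quantify gaze synchrony; not the authors' code.
import numpy as np

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])       # local match cost
            D[i, j] = cost + min(D[i - 1, j],     # insertion
                                 D[i, j - 1],     # deletion
                                 D[i - 1, j - 1]) # match
    return float(D[n, m])
```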

23 pages, 67974 KB  
Article
Analyzing the “Opposite” Approach in Additions to Historic Buildings Using Visual Attention Tools: Dresden Military History Museum Case
by Nuray Özkaraca Özalp, Hicran Hanım Halaç, Mehmet Fatih Özalp and Fikret Bademci
J. Eye Mov. Res. 2026, 19(1), 7; https://doi.org/10.3390/jemr19010007 - 12 Jan 2026
Abstract
From past to present, modern additions have continued to transform historic environments. While some argue that contemporary extensions disrupt the integrity of historic buildings, others suggest that the contrast between past and present creates a meaningful architectural dialog. This debate raises a key question: in contrasting compositions, which architectural elements draw more visual attention, the historic or the modern? To address this, a visual attention-based analytical approach is adopted. In this study, eye-tracking-based visual attention analysis is used to examine how viewers perceive the relationship between historical and contemporary architectural elements. Instead of conventional laboratory-based eye-tracking, artificial intelligence-supported visual attention software developed from eye-tracking datasets is employed. Four tools—3M-VAS, EyeQuant, Attention Insight, and Expoze—were used to generate heat maps, gaze sequence maps, hotspots, focus maps, attention distribution diagrams, and saliency predictions. These visualizations enabled both a qualitative and quantitative comparison of viewer focus. The case study is the Military History Museum in Dresden, Germany, known for its widely debated contemporary addition representing an oppositional design approach. The results illustrate which architectural components are visually prioritized, offering insight into how contrasting architectural languages are cognitively perceived in historic settings.
(This article belongs to the Special Issue Eye Tracking and Visualization)
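The tools compared here all emit attention or saliency maps; a common way to compare two such maps quantitatively is the Pearson correlation coefficient (the CC metric from the saliency literature). A minimal sketch, independent of any of the named tools:

```python
# Pearson correlation (CC) between two saliency/attention maps of equal
# shape; a standard map-similarity metric, not any vendor's API.
import numpy as np

def saliency_cc(m1: np.ndarray, m2: np.ndarray) -> float:
    a = (m1 - m1.mean()) / m1.std()
    b = (m2 - m2.mean()) / m2.std()
    return float((a * b).mean())   # 1.0 = identical, 0.0 = unrelated
```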

25 pages, 4608 KB  
Article
Comparison of Multi-View and Merged-View Mining Vehicle Teleoperation Systems Through Eye-Tracking
by Alireza Kamran Pishhesari, Mahdi Shahsavar, Amin Moniri-Morad and Javad Sattarvand
Mining 2026, 6(1), 3; https://doi.org/10.3390/mining6010003 - 12 Jan 2026
Abstract
While multi-view visualization systems are widely used for mining vehicle teleoperation, they often impose high cognitive load and restrict operator attention. To explore a more efficient alternative, this study evaluated a merged-view interface that integrates multiple camera perspectives into a single coherent display. In a controlled experiment, 35 participants navigated a teleoperated robot along a 50 m lab-scale path representative of an underground mine under both multi-view and merged-view conditions. Task performance and eye-tracking data—including completion time, path adherence, and speed-limit violations—were collected for comparison. The merged-view system enabled 6% faster completion times, 21% higher path adherence, and 28% fewer speed-limit violations. Eye-tracking metrics indicated more efficient and distributed attention: blink rate decreased by 29%, fixation duration shortened by 18%, saccade amplitude increased by 11%, and normalized gaze-transition entropy rose by 14%, reflecting broader and more adaptive scanning. NASA-TLX scores further showed a 27% reduction in perceived workload. Regression-based sensitivity analysis revealed that gaze entropy was the strongest predictor of efficiency in the multi-view condition, while fixation duration dominated under merged-view visualization. For path adherence, blink rate was most influential in the multi-view setup, whereas fixation duration became key in merged-view operation. Overall, the results indicated that merged-view visualization improved visual attention distribution and reduced cognitive tunneling indicators in a controlled laboratory teleoperation task, offering early-stage, interface-level insights motivated by mining-relevant teleoperation challenges.
(This article belongs to the Special Issue Mine Automation and New Technologies, 2nd Edition)
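Gaze-transition entropy, the predictor highlighted above, is typically computed as the conditional Shannon entropy of the AOI transition matrix, normalized by its maximum. A sketch with a synthetic AOI sequence, following one common formulation; the AOI definitions are not the study's.

```python
# Sketch of normalized gaze-transition entropy over AOIs: conditional
# Shannon entropy of the fixation-transition matrix, scaled to [0, 1].
# The AOI sequence is synthetic.
import numpy as np

seq = ["road", "map", "road", "speed", "road", "map"]   # fixated AOIs
aois = sorted(set(seq))
idx = {a: i for i, a in enumerate(aois)}
n = len(aois)

counts = np.zeros((n, n))
for a, b in zip(seq, seq[1:]):
    counts[idx[a], idx[b]] += 1

pi = counts.sum(axis=1) / counts.sum()          # how often each AOI is left
row = np.maximum(counts.sum(axis=1, keepdims=True), 1)
P = counts / row                                # p(next AOI | current AOI)
H = -sum(pi[i] * P[i, j] * np.log2(P[i, j])
         for i in range(n) for j in range(n) if P[i, j] > 0)
print(H / np.log2(n))                           # normalized entropy, ~0.35
```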