Search Results (330)

Search Parameters:
Keywords = gaze-behavior

27 pages, 13508 KB  
Article
Investigating XR Pilot Training Through Gaze Behavior Analysis Using Sensor Technology
by Aleksandar Knežević, Branimir Krstić, Aleksandar Bukvić, Dalibor Petrović and Boško Rašuo
Aerospace 2026, 13(1), 97; https://doi.org/10.3390/aerospace13010097 (registering DOI) - 16 Jan 2026
Abstract
This research aims to characterize extended reality flight trainers and to provide a detailed account of the sensors employed to collect data essential for qualitative task performance analysis, with a particular focus on gaze behavior within the extended reality environment. A comparative study was conducted to evaluate the effectiveness of an extended reality environment relative to traditional flight simulators. Eight flight instructor candidates, advanced pilots with comparable flight-hour experience, were divided into four groups based on airplane or helicopter type and cockpit configuration (analog or digital). In the traditional simulator, fixation numbers, dwell time percentages, revisit numbers, and revisit time percentages were recorded, while in the extended reality environment, the following metrics were analyzed: fixation numbers and durations, saccade numbers and durations, smooth pursuits and durations, and number of blinks. These eye-tracking parameters were evaluated alongside flight performance metrics across all trials. Each scenario involved a takeoff and initial climb task within the traffic pattern of a fixed-wing aircraft. Despite the diversity of pilot groups, no statistically significant differences were observed in either flight performance or gaze behavior metrics between the two environments. Moreover, differences identified between certain pilot groups within one scenario were consistently observed in another, indicating the sensitivity of the proposed evaluation procedure. The enhanced realism and validated effectiveness are therefore crucial for establishing standards that support the formal adoption of extended reality technologies in pilot training programs. Integrating this digital space significantly enhances the overall training experience and provides a higher level of simulation fidelity for next-generation cadet training.
(This article belongs to the Special Issue New Trends in Aviation Development 2024–2025)
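
The dwell-time and fixation-count metrics this study records can be derived from a simple fixation log. A minimal sketch (the fixation list and AOI labels are hypothetical, not the authors' pipeline): dwell-time percentage per AOI is the summed fixation duration in that AOI divided by total fixation time.

```python
from collections import defaultdict

# Hypothetical fixation log: (AOI label, fixation duration in ms).
fixations = [
    ("airspeed_indicator", 220), ("horizon", 540), ("airspeed_indicator", 180),
    ("altimeter", 310), ("horizon", 600), ("altimeter", 150),
]

def aoi_metrics(fixations):
    """Return per-AOI fixation count and dwell-time percentage."""
    total_ms = sum(dur for _, dur in fixations)
    counts, dwell = defaultdict(int), defaultdict(float)
    for aoi, dur in fixations:
        counts[aoi] += 1
        dwell[aoi] += dur
    return {aoi: {"fixations": counts[aoi],
                  "dwell_pct": 100.0 * dwell[aoi] / total_ms}
            for aoi in counts}

for aoi, m in aoi_metrics(fixations).items():
    print(f"{aoi}: {m['fixations']} fixations, {m['dwell_pct']:.1f}% dwell time")
```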

12 pages, 880 KB  
Article
An Eye-Tracking Study of Pain Perception Toward Faces with Visible Differences
by Pauline Rasset, Loy Séry, Marine Granjon and Kathleen Bogart
Behav. Sci. 2026, 16(1), 98; https://doi.org/10.3390/bs16010098 - 12 Jan 2026
Viewed by 152
Abstract
This research examines the underlying processes of public stigma toward visible facial differences (VFDs) by focusing on gaze behavior. Past research showed that a VFD influences the visual processing of faces, leading to increased attention to the VFD area at the expense of internal features (i.e., eyes, nose, mouth). Since these features primarily convey affective information, this pre-registered study investigates whether this bias also affects pain perception. In an eye-tracking task, participants (N = 44) viewed faces that either did or did not display a VFD located in a peripheral area of the face, and that either did or did not express pain, while their gaze behavior was being recorded. Participants then rated perceived pain intensity for each face. Results showed that VFDs diverted attention toward peripheral features and away from internal, pain-relevant features of the face. Surprisingly, participants rated faces with VFDs as experiencing more pain, regardless of whether pain was actually expressed. This suggests that, despite gazing less at facial expressions, observers inferred pain based on task-irrelevant features, likely stereotypes related to the VFD. These findings provide insights into how people with VFDs are perceived and how their emotions are interpreted.
(This article belongs to the Special Issue Emotions and Stereotypes About People with Visible Facial Difference)

16 pages, 2139 KB  
Article
Visual Strategies of Avoidantly Attached Individuals: Attachment Avoidance and Gaze Behavior in Deceptive Interactions
by Petra Hypšová, Martin Seitl and Stanislav Popelka
J. Eye Mov. Res. 2026, 19(1), 5; https://doi.org/10.3390/jemr19010005 - 7 Jan 2026
Viewed by 212
Abstract
Gaze behavior is a critical component of social interaction, reflecting emotional recognition and social regulation. While previous research has emphasized either situational influences (e.g., deception) or stable individual differences (e.g., attachment avoidance) on gaze patterns, studies exploring how these factors interact to shape gaze behavior in interpersonal contexts remain scarce. In this vein, the aim of the present study was to experimentally determine whether the gaze direction of individuals differs, with respect to their avoidant orientation, under changing situational conditions, including truthful and deceptive communication towards a counterpart. Using a within-person experimental design and the eye-tracking methodology, 31 participants took part in both rehearsed and spontaneous truth-telling and lie-telling tasks. Consistent with expectations, higher attachment avoidance was associated with significantly fewer fixations on emotionally expressive facial regions (e.g., mouth, jaw), and non-significant but visually consistent increases in fixations on the upper face (e.g., eyes) and background. These findings indicate that stable dispositional tendencies, rather than situational demands such as deception, predominantly shape gaze allocation during interpersonal interactions. They further provide a foundation for future investigations into the dynamic interplay between personality and situational context in interactive communicative settings.
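
The core association reported here, avoidance scores against fixation counts on a lower-face region, can be sketched as a rank correlation (all values below are hypothetical, not the study's data; SciPy assumed):

```python
from scipy.stats import spearmanr

# Hypothetical per-participant values (not the study's data).
avoidance_scores = [2.1, 3.4, 4.0, 1.8, 3.9, 2.7, 4.4, 1.5]  # attachment avoidance
mouth_fixations  = [41, 28, 22, 47, 25, 35, 19, 52]          # fixations on mouth/jaw AOI

rho, p = spearmanr(avoidance_scores, mouth_fixations)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")  # negative rho: fewer lower-face fixations
```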

18 pages, 4285 KB  
Article
Eye-Tracking and Emotion-Based Evaluation of Wardrobe Front Colors and Textures in Bedroom Interiors
by Yushu Chen, Wangyu Xu and Xinyu Ma
Multimodal Technol. Interact. 2026, 10(1), 7; https://doi.org/10.3390/mti10010007 - 6 Jan 2026
Viewed by 112
Abstract
Wardrobe fronts form a major visual element in bedroom interiors, yet material selection for their colors and textures often relies on intuition rather than evidence. This study develops a data-driven framework that links gaze behavior and affective responses to occupants’ preferences for wardrobe front materials. Forty adults evaluated color and texture swatches and rendered bedroom scenes while eye-tracking data capturing attraction, retention, and exploration were collected. Pairwise choices were modeled using a Bradley–Terry approach, and visual-attention features were integrated with emotion ratings to construct an interpretable attention index for predicting preferences. Results show that neutral light colors and structured wood-like textures consistently rank highest, with scene context reducing preference differences but not altering the order. Shorter time to first fixation and longer fixation duration were the strongest predictors of desirability, demonstrating the combined influence of rapid visual capture and sustained attention. Within the tested stimulus set and viewing conditions, the proposed pipeline yields consistent preference rankings and an interpretable attention-based score that supports evidence-informed shortlisting of wardrobe-front materials. The reported relationships between gaze, affect, and choice are associative and are intended to guide design decisions within the scope of the present experimental settings.
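
A Bradley–Terry model of pairwise choices, as used here, reduces to logistic regression on signed indicator vectors, since P(i beats j) = σ(β_i − β_j). A minimal sketch (item names and choice data are hypothetical; scikit-learn assumed, with a light L2 penalty pinning down the otherwise shift-invariant strengths):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

items = ["oak_light", "walnut_dark", "white_matte", "grey_neutral"]
# Hypothetical pairwise choices: (winner index, loser index).
choices = [(0, 1), (0, 2), (3, 1), (0, 3), (2, 1), (3, 2), (0, 1), (3, 1)]

X = np.zeros((len(choices), len(items)))
for row, (win, lose) in enumerate(choices):
    X[row, win], X[row, lose] = 1.0, -1.0
y = np.ones(len(choices))  # every row encoded as "first item won"

# Flip half the rows so both classes appear; this leaves the likelihood
# unchanged because P(y=0 | -x) equals P(y=1 | x) for a logistic model.
X[::2] *= -1
y[::2] = 0

model = LogisticRegression(fit_intercept=False, C=10.0).fit(X, y)
strengths = dict(zip(items, model.coef_[0]))
print(sorted(strengths.items(), key=lambda kv: -kv[1]))  # preference ranking
```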

32 pages, 1145 KB  
Systematic Review
The Diagnostic Potential of Eye Tracking to Detect Autism Spectrum Disorder in Children: A Systematic Review
by Marcella Di Cara, Carmela De Domenico, Adriana Piccolo, Angelo Alito, Lara Costa, Angelo Quartarone and Francesca Cucinotta
Med. Sci. 2026, 14(1), 28; https://doi.org/10.3390/medsci14010028 - 6 Jan 2026
Viewed by 213
Abstract
Background: Autism spectrum disorder (ASD) is associated with distinct visual attention patterns that provide insight into underlying social-cognitive mechanisms. Methods: This systematic review (PROSPERO: CRD42023429316), conducted per PRISMA guidelines, synthesizes evidence from 14 peer-reviewed studies using eye-tracking to compare oculomotor strategies in autistic children and typically developing (TD) controls. A comprehensive literature search was conducted in PubMed, Web of Science, and Science Direct up to March 2025. Study inclusion criteria focused on ASD versus TD group comparisons in individuals under 18 years, with key metrics (fixation duration and count, spatial distribution, and saccadic parameters) systematically extracted. Risk of bias was assessed using the QUADAS-2 tool, revealing high heterogeneity in both index tests and patient selection. Results: The results indicate that autistic children exhibit reduced fixation on socially salient stimuli, atypical saccadic behavior, and more variable spatial exploration compared to controls. Conclusions: These oculomotor differences suggest altered mechanisms of social attention and information processing in ASD. Findings suggest that eye-tracking can contribute valuable information about heterogeneous gaze profiles in ASD, providing preliminary insight that may inform future studies to develop more sensitive diagnostic tools. This review highlights visual attention patterns as promising indicators of neurocognitive functioning in ASD.

13 pages, 638 KB  
Systematic Review
Application of Artificial Intelligence Tools for Social and Psychological Enhancement of Students with Autism Spectrum Disorder: A Systematic Review
by Angeliki Tsapanou, Anastasia Bouka, Angeliki Papadopoulou, Christina Vamvatsikou, Dionisia Mikrouli, Eirini Theofila, Kassandra Dionysopoulou, Konstantina Kortseli, Panagiota Lytaki, Theoni Myrto Spyridonidi and Panagiotis Plotas
Brain Sci. 2026, 16(1), 56; https://doi.org/10.3390/brainsci16010056 - 30 Dec 2025
Viewed by 315
Abstract
Background: Children with autism spectrum disorder (ASD) commonly experience persistent difficulties in social communication, emotional regulation, and social engagement. In recent years, artificial intelligence (AI)-based technologies, particularly socially assistive robots and intelligent sensing systems, have been explored as complementary tools to support psychosocial interventions in this population. Objective: This systematic review aimed to critically evaluate recent evidence on the effectiveness of AI-based interventions in improving social, emotional, and cognitive functioning in children with ASD. Methods: A systematic literature search was conducted in PubMed following PRISMA guidelines, targeting English-language studies published between 2020 and 2025. Eligible studies involved children with ASD and implemented AI-driven tools within therapeutic or educational settings. Eight studies met inclusion criteria and were analyzed using the PICO framework. Results: The reviewed interventions included humanoid and non-humanoid robots, gaze-tracking systems, and theory of mind-oriented applications. Across studies, AI-based interventions were associated with improvements in joint attention, social communication and reciprocity, emotion recognition and regulation, theory of mind, and task engagement. Outcomes were assessed using standardized behavioral measures, observational coding, parent or therapist reports, and physiological or sensor-based indices. However, the studies were characterized by small and heterogeneous samples, short intervention durations, and variability in outcome measures. Conclusions: Current evidence suggests that AI-based systems may serve as valuable adjuncts to conventional interventions for children with ASD, particularly for supporting structured social and emotional skill development. Nonetheless, methodological limitations and limited long-term data underscore the need for larger, multi-site trials with standardized protocols to better establish efficacy, generalizability, and ethical integration into clinical practice.

17 pages, 5410 KB  
Article
Comparing Eye-Tracking and Verbal Reports in L2 Reading Process Research: Three Qualitative Studies
by Chengsong Yang, Guangwei Hu, Keyu Que and Na Fan
J. Eye Mov. Res. 2026, 19(1), 2; https://doi.org/10.3390/jemr19010002 - 25 Dec 2025
Viewed by 344
Abstract
This study compares the roles of eye-tracking and verbal reports (think-alouds and retrospective verbal reports, RVRs) in L2 reading process research through three qualitative studies. Findings indicate that eye-tracking provided precise, quantitative data on visual attention and reading patterns (e.g., fixation duration, gaze plots) and choice-making during gap-filling. Based on our mapping, it was mostly effective in identifying 13 out of 47 reading processing strategies, primarily those involving skimming or scanning that had distinctive eye-movement signatures. Verbal reports, while less exact in measurement, offered direct access to cognitive processes (e.g., strategy use, reasoning) and uncovered content-specific thoughts inaccessible to eye-tracking. Both methods exhibited reactivity: eye-tracking could cause physical discomfort or altered reading behavior, whereas think-alouds could disrupt task flow or enhance reflection. This study reveals the respective strengths and limitations of eye-tracking and verbal reports in L2 reading research. It facilitates a more informed selection and application of these methodological approaches in alignment with specific research objectives, whether employed in isolation or in an integrated manner.

21 pages, 2203 KB  
Article
An Analysis of Applicability for an E-Scooter to Ride on Sidewalk Based on a VR Simulator Study
by Jihyun Kim, Dongmin Lee, Sooncheon Hwang, Juehyun Lee and Seungmin Kim
Appl. Sci. 2026, 16(1), 218; https://doi.org/10.3390/app16010218 - 24 Dec 2025
Viewed by 389
Abstract
E-scooters have rapidly become a popular option for first- and last-mile mobility, yet their integration into urban transportation systems has raised significant safety concerns. This study investigates the feasibility of permitting E-scooter riding on sidewalks under controlled conditions to minimize pedestrian conflicts. Analysis of E-scooter crashes in Daejeon, South Korea, showed that 98.09% of crashes were caused by rider negligence, with “Failure to Fulfill Safe Driving Duty” as the leading factor. To investigate the applicability of safe sidewalk usage, a VR-based simulator experiment was conducted with 41 participants across four scenarios with varying sidewalk widths and pedestrian densities, under speed limits of 10, 15, and 20 km/h. Riding behaviors—including speed stability, braking, steering, and conflict frequency—and gaze behaviors were measured. Results showed that riding at 10 km/h improved riding stability and minimized conflicts. Regression analysis identified pedestrian density as the strongest predictor of conflicts, followed by sidewalk width and riding speed. These findings suggest specific policy needs: ensuring a minimum sidewalk width of 4 m for safe shared use, restricting operation to environments with low-to-moderate pedestrian density, and implementing a 10 km/h speed limit. This study provides evidence-based recommendations for safer integration of E-scooters into pedestrian environments. Full article
(This article belongs to the Section Transportation and Future Mobility)
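
Ranking predictors of conflicts, as the regression analysis here does, is often done by standardizing the predictors so coefficient magnitudes are comparable. A minimal sketch (trial-level values are hypothetical placeholders; scikit-learn assumed):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical per-trial records: [pedestrian density (ped/m^2), sidewalk width (m), speed (km/h)]
X = np.array([[0.10, 3.0, 10], [0.30, 3.0, 15], [0.10, 4.0, 20],
              [0.40, 2.5, 20], [0.20, 4.0, 10], [0.35, 2.5, 15]])
conflicts = np.array([0, 3, 1, 6, 0, 4])  # conflict counts per trial (hypothetical)

Xz = StandardScaler().fit_transform(X)        # z-score so coefficients are comparable
coef = LinearRegression().fit(Xz, conflicts).coef_
for name, b in zip(["pedestrian density", "sidewalk width", "riding speed"], coef):
    print(f"{name}: standardized beta = {b:+.2f}")
```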

24 pages, 10048 KB  
Entry
Immersive Methods and Biometric Tools in Food Science and Consumer Behavior
by Abdul Hannan Zulkarnain and Attila Gere
Encyclopedia 2026, 6(1), 2; https://doi.org/10.3390/encyclopedia6010002 - 22 Dec 2025
Viewed by 338
Definition
Immersive methods and biometric tools provide a rigorous, context-rich way to study how people perceive and choose food. Immersive methods use extended reality, including virtual, augmented, mixed, and augmented virtual environments, to recreate settings such as homes, shops, and restaurants. They increase participants’ sense of presence and the ecological validity (realism of conditions) of experiments, while still tightly controlling sensory and social cues like lighting, sound, and surroundings. Biometric tools record objective signals linked to attention, emotion, and cognitive load via sensors such as eye-tracking, galvanic skin response (GSR), heart rate (and variability), facial electromyography, electroencephalography, and functional near-infrared spectroscopy. Researchers align stimuli presentation, gaze, and physiology on a common temporal reference and link these data to outcomes like liking, choice, or willingness-to-buy. This approach reveals implicit responses that self-reports may miss, clarifies how changes in context shift perception, and improves predictive power. It enables faster, lower-risk product and packaging development, better-informed labeling and retail design, and more targeted nutrition and health communication. Good practices emphasize careful system calibration, adequate statistical power, participant comfort and safety, robust data protection, and transparent analysis. In food science and consumer behavior, combining immersive environments with biometrics yields valid, reproducible evidence about what captures attention, creates value, and drives food choice.
(This article belongs to the Collection Food and Food Culture)
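
Aligning stimulus, gaze, and physiology "on a common temporal reference", as described above, is commonly implemented as nearest-timestamp joins onto one timeline. A minimal sketch with pandas (stream contents and column names are hypothetical, not a specific lab's pipeline):

```python
import pandas as pd

# Hypothetical streams with timestamps in seconds from a shared clock.
gaze = pd.DataFrame({"t": [0.00, 0.02, 0.04, 0.06], "x_px": [512, 520, 530, 526]})
hr   = pd.DataFrame({"t": [0.01, 0.05], "heart_rate": [72.0, 73.5]})
stim = pd.DataFrame({"t": [0.00], "stimulus": ["package_A"]})

# Sort by time (required by merge_asof), then join each signal onto the
# gaze timeline using the nearest sample within a tolerance.
aligned = pd.merge_asof(gaze.sort_values("t"), hr.sort_values("t"),
                        on="t", direction="nearest", tolerance=0.05)
aligned = pd.merge_asof(aligned, stim.sort_values("t"),
                        on="t", direction="backward")  # stimulus active since onset
print(aligned)
```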

16 pages, 1906 KB  
Article
Visual Attention and User Preference Analysis of Children’s Beds in Interior Environments Using Eye-Tracking Technology
by Yunxi Nie, Jinjing Wang and Yushu Chen
Buildings 2026, 16(1), 44; https://doi.org/10.3390/buildings16010044 - 22 Dec 2025
Viewed by 304
Abstract
Visual attention plays a critical role in users’ cognitive evaluation of safety and functionality in interior furniture, particularly for children’s beds, which are inherently safety-sensitive products. This study adopts an integrated approach combining eye-tracking experiments and questionnaire surveys to examine users’ visual cognition and preference patterns toward children’s solid wood beds under controlled viewing conditions, focusing on material attributes, bed typologies, and key structural components. The results indicate that natural solid wood materials with clear textures and warm tones attract higher visual attention, while storage-integrated bed designs significantly enhance exploratory gaze behavior. At the component level, safety-related elements such as guardrails and headboards consistently receive the earliest visual attention, highlighting their cognitive priority in safety assessment and spatial perception. Overall, the findings support a dual-path visual cognition mechanism in which bottom-up visual salience interacts with top-down concerns related to safety and usability. This study provides evidence-based insights for material selection and structural emphasis in children’s furniture design within interior environments. The applicability of the conclusions is primarily limited to adult observers under controlled visual conditions.
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
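
"Earliest visual attention" on a component is typically operationalized as time to first fixation (TTFF), often alongside total fixation duration. A minimal sketch from a fixation log (data and AOI labels are hypothetical):

```python
# Hypothetical fixation log for one trial: (onset s, duration s, AOI label).
fixations = [(0.15, 0.22, "background"), (0.42, 0.31, "guardrail"),
             (0.80, 0.18, "headboard"), (1.05, 0.40, "guardrail")]

def ttff_and_dwell(fixations, aoi):
    """Time to first fixation on `aoi` (stimulus onset at t=0) and summed dwell."""
    hits = [(onset, dur) for onset, dur, label in fixations if label == aoi]
    if not hits:
        return None, 0.0
    return hits[0][0], sum(dur for _, dur in hits)

ttff, dwell = ttff_and_dwell(fixations, "guardrail")
print(f"TTFF = {ttff:.2f} s, total fixation duration = {dwell:.2f} s")
```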

20 pages, 2679 KB  
Article
Physiological and Behavioral Response Differences Between Video-Mediated and In-Person Interaction
by Christoph Tremmel, Nathan T. M. Huneke, Daniel Hobson, Christopher Tacca and m.c. schraefel
Sensors 2026, 26(1), 34; https://doi.org/10.3390/s26010034 - 20 Dec 2025
Viewed by 548
Abstract
This study investigates how virtual communication differs from in-person interaction across physiological and behavioral domains, with the goal of informing future interface design. Using a naturalistic setup, we recorded multimodal biosignals, including eye tracking, head and hand movement, heart rate, respiratory rate, and EEG during both in-person and video-based dialogues. Our results show that virtual communication significantly reduces movement and gaze dynamics, particularly in horizontal eye movements and lateral head motion, reflecting both sender- and receiver-side constraints. These physical limitations likely stem from the need to remain within the camera frame and the restricted access to nonverbal cues. Pupil dilation was significantly greater during in-person conversations, consistent with increased arousal during natural communication. Heart rate and EEG trends similarly suggested heightened engagement in face-to-face settings, though interpretation of EEG was limited by movement artifacts. Together, the findings highlight how virtual platforms alter embodied interaction, underscoring the need to address both mobility and visual access in future communication technologies to better support co-presence.
(This article belongs to the Special Issue Measurement Sensors and Applications)
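
A within-subject contrast like the pupil-dilation result here can be sketched as a paired t-test on per-participant condition means (values below are hypothetical, not the study's data; SciPy assumed):

```python
from scipy.stats import ttest_rel

# Hypothetical mean pupil diameter (mm) per participant in each condition.
in_person = [3.9, 4.1, 3.7, 4.3, 4.0, 3.8]
video     = [3.6, 3.9, 3.5, 4.0, 3.8, 3.6]

t, p = ttest_rel(in_person, video)
print(f"paired t = {t:.2f}, p = {p:.4f}")  # larger pupils in person would give t > 0
```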

20 pages, 2845 KB  
Article
From Gaze to Music: AI-Powered Personalized Audiovisual Experiences for Children’s Aesthetic Education
by Jiahui Liu, Jing Liu and Hong Yan
Behav. Sci. 2025, 15(12), 1684; https://doi.org/10.3390/bs15121684 - 4 Dec 2025
Viewed by 440
Abstract
The cultivation of aesthetic appreciation through engagement with exemplary artworks constitutes a fundamental pillar in fostering children’s cognitive and emotional development, while simultaneously facilitating multidimensional learning experiences across diverse perceptual domains. However, children in early stages of cognitive development frequently encounter substantial challenges when attempting to comprehend and internalize complex visual narratives and abstract artistic concepts inherent in sophisticated artworks. This study presents an innovative methodological framework designed to enhance children’s artwork comprehension capabilities by systematically leveraging the theoretical foundations of audio-visual cross-modal integration. Through investigation of cross-modal correspondences between visual and auditory perceptual systems, we developed a methodology that extracts and interprets musical elements based on gaze behavior patterns (derived from prior pilot studies) observed when viewing artworks. Utilizing state-of-the-art deep learning techniques, specifically Recurrent Neural Networks (RNNs), these extracted visual–musical correspondences are subsequently transformed into cohesive, aesthetically pleasing musical compositions that maintain semantic and emotional congruence with the observed visual content. The efficacy and practical applicability of our proposed method were validated through empirical evaluation involving 96 children (analyzed through objective behavioral assessments using eye-tracking technology), complemented by qualitative evaluations from 16 parents and 5 experienced preschool educators. The results demonstrate statistically significant improvements in children’s sustained engagement (fixation duration: 58.82 ± 7.38 s vs. 41.29 ± 6.92 s, p < 0.001, Cohen’s d ≈ 1.29) and attentional focus (AOI gaze frequency increased 73%, p < 0.001), along with favorable subjective evaluations from parents (mean ratings 4.56–4.81/5), when visual experiences are augmented by AI-generated, personalized audio-visual support, potentially scaffolding deeper processing and informing future developments in aesthetic education.
(This article belongs to the Section Cognition)
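
Cohen's d, the effect size reported here, is in its simplest pooled-SD form the mean difference divided by the pooled standard deviation. A minimal sketch with hypothetical summary statistics (the study's reported value likely uses a repeated-measures variant, so this plain formula is illustrative only):

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Pooled-SD Cohen's d for two independent groups."""
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# Hypothetical summary statistics (not the study's repeated-measures data).
print(f"d = {cohens_d(60.0, 8.0, 30, 50.0, 7.0, 30):.2f}")  # ~1.33
```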

24 pages, 1857 KB  
Article
Ensemble Modeling of Multiple Physical Indicators to Dynamically Phenotype Autism Spectrum Disorder
by Marie Amale Huynh, Aaron Kline, Saimourya Surabhi, Kaitlyn Dunlap, Onur Cezmi Mutlu, Mohammadmahdi Honarmand, Parnian Azizian, Peter Washington and Dennis P. Wall
Algorithms 2025, 18(12), 764; https://doi.org/10.3390/a18120764 - 2 Dec 2025
Viewed by 379
Abstract
Early detection of Autism Spectrum Disorder (ASD), a neurodevelopmental condition characterized by social communication challenges, is essential for timely intervention. Naturalistic home videos collected via mobile applications offer scalable opportunities for digital diagnostics. We leveraged GuessWhat, a mobile game designed to engage parents and children, which has generated over 3000 structured videos from 382 children. From this collection, we curated a final analytic sample of 688 feature-rich videos centered on a single dyad, enabling more consistent modeling. We developed a two-step pipeline: (1) filtering to isolate high-quality videos, and (2) feature engineering to extract interpretable behavioral signals. Unimodal LSTM-based models trained on eye gaze, head position, and facial expression achieved test AUCs of 86% (95% CI: 0.79–0.92), 78% (95% CI: 0.69–0.86), and 67% (95% CI: 0.55–0.78), respectively. Late-stage fusion of unimodal outputs significantly improved predictive performance, yielding a test AUC of 90% (95% CI: 0.84–0.95). Our findings demonstrate the complementary value of distinct behavioral channels and support the feasibility of using mobile-captured videos for detecting clinically relevant signals. While further work is needed to improve generalizability and inclusivity, this study highlights the promise of real-time, scalable autism phenotyping for early interventions.
(This article belongs to the Special Issue Algorithms for Computer Aided Diagnosis: 2nd Edition)
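
Late-stage fusion of unimodal outputs, as used here, amounts to combining per-modality predicted probabilities before scoring. A minimal sketch with hypothetical scores (scikit-learn assumed for the AUC; the paper's actual fusion scheme may differ):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-video probabilities from three unimodal models, plus true labels.
p_gaze = np.array([0.91, 0.30, 0.75, 0.22, 0.68, 0.15])
p_head = np.array([0.70, 0.45, 0.66, 0.35, 0.59, 0.28])
p_face = np.array([0.55, 0.50, 0.48, 0.40, 0.62, 0.33])
y_true = np.array([1, 0, 1, 0, 1, 0])

# Simple unweighted average; weights could instead be tuned on a validation split.
p_fused = (p_gaze + p_head + p_face) / 3
print(f"fused AUC = {roc_auc_score(y_true, p_fused):.2f}")
```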

23 pages, 4065 KB  
Article
Robust Camera-Based Eye-Tracking Method Allowing Head Movements and Its Application in User Experience Research
by He Zhang and Lu Yin
J. Eye Mov. Res. 2025, 18(6), 71; https://doi.org/10.3390/jemr18060071 - 1 Dec 2025
Viewed by 651
Abstract
Eye-tracking for user experience analysis has traditionally relied on dedicated hardware, which is often costly and imposes restrictive operating conditions. As an alternative, solutions utilizing ordinary webcams have attracted significant interest due to their affordability and ease of use. However, a major limitation persists in these vision-based methods: sensitivity to head movements. Therefore, users are often required to maintain a rigid head position, leading to discomfort and potentially skewed results. To address this challenge, this paper proposes a robust eye-tracking methodology designed to accommodate head motion. Our core technique involves mapping the displacement of the pupil center from a dynamically updated reference point to estimate the gaze point. When head movement is detected, the system recalculates the head-pointing coordinate using estimated head pose and user-to-screen distance. This new head position and the corresponding pupil center are then established as the fresh benchmark for subsequent gaze point estimation, creating a continuous and adaptive correction loop. We conducted accuracy tests with 22 participants. The results demonstrate that our method surpasses the performance of many current methods, achieving mean gaze errors of 1.13 and 1.37 degrees in two testing modes. Further validation in a smooth pursuit task confirmed its efficacy in dynamic scenarios. Finally, we applied the method in a real-world gaming context, successfully extracting fixation counts and gaze heatmaps to analyze visual behavior and UX across different game modes, thereby verifying its practical utility. Full article
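
Gaze error "in degrees", as reported here, converts the on-screen distance between estimated and true gaze points into visual angle given the viewing distance. A minimal sketch using the standard visual-angle formula (the numbers are hypothetical, not the paper's measurements):

```python
import math

def gaze_error_deg(err_cm, viewing_distance_cm):
    """On-screen error (cm) -> visual angle (degrees): theta = 2*atan(s / 2d)."""
    return math.degrees(2 * math.atan(err_cm / (2 * viewing_distance_cm)))

# Hypothetical: 1.2 cm mean on-screen error at 60 cm from the display.
print(f"{gaze_error_deg(1.2, 60):.2f} degrees")  # ~1.15 degrees
```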

16 pages, 1846 KB  
Article
Integrating Eye-Tracking and Artificial Intelligence for Quantitative Assessment of Visuocognitive Performance in Sports and Education
by Francisco Javier Povedano-Montero, Ricardo Bernardez-Vilaboa, José Ramon Trillo, Rut González-Jiménez, Carla Otero-Currás, Gema Martínez-Florentín and Juan E. Cedrún-Sánchez
Photonics 2025, 12(12), 1167; https://doi.org/10.3390/photonics12121167 - 27 Nov 2025
Viewed by 533
Abstract
Background: Eye-tracking technology enables the objective quantification of oculomotor behavior, providing key insights into visuocognitive performance. This study presents a comparative analysis of visual attention patterns between rhythmic gymnasts and school-aged students using an optical eye-tracking system combined with machine learning algorithms. Methods: Eye movement data were recorded during controlled visual tasks using the DIVE system (sampling rate: 120 Hz). Spatiotemporal metrics—including fixation duration, saccadic amplitude, and gaze entropy—were extracted and used as input features for supervised models: Support Vector Machine (SVM), k-Nearest Neighbors (k-NN), Decision Tree (CART), Random Forest, XGBoost, and a one-dimensional Convolutional Neural Network (1D-CNN). Data were divided according to a hold-out scheme (70/30) and evaluated using accuracy, F1-macro score, and Receiver Operating Characteristic (ROC) curves. Results: XGBoost achieved the best performance (accuracy = 94.6%; F1-macro = 0.945), followed by Random Forest (accuracy = 94.0%; F1-macro = 0.937). The neural network showed intermediate performance (accuracy = 89.3%; F1-macro = 0.888), whereas SVM and k-NN exhibited lower values. Gymnasts demonstrated more stable and goal-directed gaze patterns than students, reflecting greater efficiency in visuomotor control. Conclusions: Integrating eye-tracking with artificial intelligence provides a robust framework for the quantitative assessment of visuocognitive performance. Ensemble algorithms demonstrated high discriminative power, while neural networks require further optimization. This approach shows promising applications in sports science, cognitive diagnostics, and the development of adaptive human–machine interfaces.
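
A minimal sketch of the 70/30 hold-out evaluation described here, using scikit-learn's Random Forest as a stand-in for the full model suite (the feature matrix and labels below are synthetic placeholders, not the study's data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, roc_auc_score

rng = np.random.default_rng(0)
# Synthetic features standing in for fixation duration, saccadic amplitude, gaze entropy.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
prob = clf.predict_proba(X_te)[:, 1]
print(f"F1-macro = {f1_score(y_te, pred, average='macro'):.3f}, "
      f"AUC = {roc_auc_score(y_te, prob):.3f}")
```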
