- Sequential Fixation Behavior in Road Marking Recognition: Implications for Design
- Reading Assessment and Eye Movement Analysis in Bilateral Central Scotoma Due to Age-Related Macular Degeneration
- Microsaccade Activity During Visuospatial Working Memory in Early-Stage Parkinson’s Disease
- Diagnosing Colour Vision Deficiencies Using Eye Movements (Without Dedicated Eye-Tracking Hardware)
Journal Description
Journal of Eye Movement Research (JEMR) is an international, peer-reviewed, open access journal covering all aspects of oculomotor functioning, including the methodology of eye recording, neurophysiological and cognitive models, attention, and reading, as well as applications in neurology, ergonomics, media research, and other areas. It is published bimonthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, SCIE (Web of Science), PubMed, PMC, and other databases.
- Journal Rank: JCR - Q1 (Ophthalmology) / CiteScore - Q2 (Ophthalmology)
- Rapid Publication: manuscripts are peer-reviewed, with a first decision provided to authors approximately 39.9 days after submission; acceptance to publication takes 5.8 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
Impact Factor: 2.8 (2024); 5-Year Impact Factor: 2.8 (2024)
Latest Articles
Investigating the Effect of Presentation Mode on Cognitive Load in English–Chinese Distance Simultaneous Interpreting: An Eye-Tracking Study
J. Eye Mov. Res. 2025, 18(6), 73; https://doi.org/10.3390/jemr18060073 (registering DOI) - 1 Dec 2025
Abstract
Distance simultaneous interpreting (DSI) is a typical example of technology-mediated interpreting, bridging participants (i.e., interpreters, audience, and speakers) in various events and conferences. This study explores how presentation mode affects cognitive load in DSI, utilizing eye-tracking sensor technology. A controlled experiment was conducted involving 36 participants, comprising 19 professional interpreters and 17 student interpreters, to assess the effects of presentation mode on their cognitive load during English-to-Chinese DSI. A Tobii Pro X3-120 screen-based eye tracker was used to collect eye-tracking data as the participants sequentially performed a DSI task involving four distinct presentation modes: the Speaker, Slides, Split, and Corner modes. The findings, derived from the integration of eye-tracking data and interpreting performance scores, indicate that both presentation mode and experience level significantly influence interpreters’ cognitive load. Notably, student interpreters demonstrated longer fixation durations in the Slides mode, indicating a reliance on visual aids for DSI. These results have implications for language learning, suggesting that the integration of visual supports can aid in the acquisition and performance of interpreting skills, particularly for less experienced interpreters. This study contributes to our understanding of the interplay between technology, cognitive load, and language learning in the context of DSI.
Open Access Article
Initial and Sustained Attentional Bias Toward Emotional Faces in Patients with Major Depressive Disorder
by Hanliang Wei, Tak Kwan Lam, Weijian Liu, Waxun Su, Zheng Wang, Qiandong Wang, Xiao Lin and Peng Li
J. Eye Mov. Res. 2025, 18(6), 72; https://doi.org/10.3390/jemr18060072 (registering DOI) - 1 Dec 2025
Abstract
Major depressive disorder (MDD) represents a prevalent mental health condition characterized by prominent attentional biases, particularly toward negative stimuli. While extensive research has established the significance of negative attentional bias in depression, critical gaps remain in understanding the temporal dynamics and valence-specificity of these biases. This study employed eye-tracking technology to systematically examine the attentional processing of emotional faces (happy, fearful, sad) in MDD patients (n = 61) versus healthy controls (HC, n = 47), assessing both the initial orientation (initial gaze preference) and sustained attention (first dwell time). Key findings revealed the following: (1) while both groups showed an initial vigilance toward threatening faces (fearful/sad), only MDD patients displayed an additional attentional capture by happy faces; (2) a significant emotion main effect (F (2, 216) = 10.19, p < 0.001) indicated a stronger initial orientation to fearful versus happy faces, with Bayesian analyses (BF < 0.3) confirming the absence of group differences; and (3) no group disparities emerged in sustained attentional maintenance (all ps > 0.05). These results challenge conventional negativity-focused models by demonstrating valence-specific early-stage abnormalities in MDD, suggesting that depressive attentional dysfunction may be most pronounced during initial automatic processing rather than later strategic stages. The findings advance the theoretical understanding of attentional bias in depression while highlighting the need for stage-specific intervention approaches.
Open Access Article
Robust Camera-Based Eye-Tracking Method Allowing Head Movements and Its Application in User Experience Research
by He Zhang and Lu Yin
J. Eye Mov. Res. 2025, 18(6), 71; https://doi.org/10.3390/jemr18060071 (registering DOI) - 1 Dec 2025
Abstract
Eye-tracking for user experience analysis has traditionally relied on dedicated hardware, which is often costly and imposes restrictive operating conditions. As an alternative, solutions utilizing ordinary webcams have attracted significant interest due to their affordability and ease of use. However, a major limitation persists in these vision-based methods: sensitivity to head movements. Therefore, users are often required to maintain a rigid head position, leading to discomfort and potentially skewed results. To address this challenge, this paper proposes a robust eye-tracking methodology designed to accommodate head motion. Our core technique involves mapping the displacement of the pupil center from a dynamically updated reference point to estimate the gaze point. When head movement is detected, the system recalculates the head-pointing coordinate using estimated head pose and user-to-screen distance. This new head position and the corresponding pupil center are then established as the fresh benchmark for subsequent gaze point estimation, creating a continuous and adaptive correction loop. We conducted accuracy tests with 22 participants. The results demonstrate that our method surpasses the performance of many current methods, achieving mean gaze errors of 1.13 and 1.37 degrees in two testing modes. Further validation in a smooth pursuit task confirmed its efficacy in dynamic scenarios. Finally, we applied the method in a real-world gaming context, successfully extracting fixation counts and gaze heatmaps to analyze visual behavior and UX across different game modes, thereby verifying its practical utility.
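The adaptive correction loop described in this abstract can be illustrated in a few lines. The sketch below is hypothetical (the class, the gain constants, and the coordinates are my own illustration, not the authors' implementation): gaze is estimated from the pupil center's displacement relative to a stored reference, and when head movement is detected, the reference is re-anchored to the newly computed head-pointing coordinate and the current pupil center.

```python
# Hypothetical sketch of the adaptive reference-point idea (not the paper's code).
GAIN_X, GAIN_Y = 40.0, 30.0  # assumed screen-pixels per pixel of pupil displacement

class AdaptiveGazeEstimator:
    def __init__(self, ref_gaze, ref_pupil):
        self.ref_gaze = ref_gaze    # screen point assumed fixated at reference time
        self.ref_pupil = ref_pupil  # pupil center (camera pixels) at that moment

    def estimate(self, pupil):
        """Map pupil displacement from the reference onto a screen coordinate."""
        dx = pupil[0] - self.ref_pupil[0]
        dy = pupil[1] - self.ref_pupil[1]
        return (self.ref_gaze[0] + GAIN_X * dx, self.ref_gaze[1] + GAIN_Y * dy)

    def rebase(self, head_point, pupil):
        """On detected head movement, adopt the head-pointing coordinate and the
        current pupil center as the fresh benchmark -- the correction loop."""
        self.ref_gaze = head_point
        self.ref_pupil = pupil
```

In a real system, `head_point` would come from head-pose estimation and the user-to-screen distance, and the gains from a calibration procedure.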
Open Access Article
Measuring Mental Effort in Real Time Using Pupillometry
by Gavindya Jayawardena, Yasith Jayawardana and Jacek Gwizdka
J. Eye Mov. Res. 2025, 18(6), 70; https://doi.org/10.3390/jemr18060070 - 24 Nov 2025
Abstract
Mental effort, a critical factor influencing task performance, is often difficult to measure accurately and efficiently. Pupil diameter has emerged as a reliable, real-time indicator of mental effort. This study introduces RIPA2, an enhanced pupillometric index for real-time mental effort assessment. Building on the original RIPA method, RIPA2 incorporates refined Savitzky–Golay filter parameters to better isolate pupil diameter fluctuations within biologically relevant frequency bands linked to cognitive load. We validated RIPA2 across two distinct tasks: a structured N-back memory task and a naturalistic information search task involving fact-checking and decision-making scenarios. Our findings show that RIPA2 reliably tracks variations in mental effort, demonstrating improved sensitivity and consistency over the original RIPA and strong alignment with the established offline measures of pupil-based cognitive load indices, such as LHIPA. Notably, RIPA2 captured increased mental effort at higher N-back levels and successfully distinguished greater effort during decision-making tasks compared to fact-checking tasks, highlighting its applicability to real-world cognitive demands. These findings suggest that RIPA2 provides a robust, continuous, and low-latency method for assessing mental effort. It holds strong potential for broader use in educational settings, medical environments, workplaces, and adaptive user interfaces, facilitating objective monitoring of mental effort beyond laboratory conditions.
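To make the Savitzky–Golay idea concrete: the filter fits a low-order polynomial in a sliding window, which smooths the pupil trace while preserving its slow trend, so the residual isolates the fast fluctuations associated with cognitive load. The sketch below is not the authors' RIPA2 (their filter parameters and index definition differ); it uses the classic fixed 5-point quadratic kernel and a crude zero-crossing count of the residual as a stand-in effort proxy.

```python
# Illustrative only: fixed 5-point quadratic Savitzky-Golay weights (well-known
# closed-form coefficients), not the refined parameters used by RIPA2.
SG5_QUAD = [-3 / 35, 12 / 35, 17 / 35, 12 / 35, -3 / 35]

def savgol5(signal):
    """Smooth a pupil-diameter trace with the 5-point quadratic S-G kernel.
    Edge samples are left unchanged for simplicity."""
    half = 2
    smoothed = list(signal)
    for i in range(half, len(signal) - half):
        smoothed[i] = sum(w * signal[i + k - half] for k, w in enumerate(SG5_QUAD))
    return smoothed

def fluctuation_index(signal, window=60):
    """Per-window rate of sign changes in the residual (raw minus smoothed):
    a crude proxy for high-frequency pupil activity linked to effort."""
    smoothed = savgol5(signal)
    residual = [r - s for r, s in zip(signal, smoothed)]
    indices = []
    for start in range(0, len(residual) - window + 1, window):
        chunk = residual[start:start + window]
        crossings = sum(1 for a, b in zip(chunk, chunk[1:]) if a * b < 0)
        indices.append(crossings / window)
    return indices
```

A quadratic Savitzky–Golay filter reproduces linear trends exactly, which is why the residual reflects only the faster fluctuations of interest.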
Open Access Article
Visual Attention to Food Content on Social Media: An Eye-Tracking Study Among Young Adults
by Aura Lydia Riswanto, Seieun Kim, Youngsam Ha and Hak-Seon Kim
J. Eye Mov. Res. 2025, 18(6), 69; https://doi.org/10.3390/jemr18060069 - 20 Nov 2025
Abstract
Social media has become a dominant channel for food marketing, particularly targeting youth through visually engaging and socially embedded content. This study investigates how young adults visually engage with food advertisements on social media and how specific visual and contextual features influence purchase intention. Using eye-tracking technology and survey analysis, data were collected from 35 participants aged 18 to 25. Participants viewed simulated Instagram posts incorporating elements such as food imagery, branding, influencer presence, and social cues. Visual attention was recorded using Tobii Pro Spectrum, and behavioral responses were assessed via post-surveys. A 2 × 2 design varying influencer presence and food type showed that both features significantly increased visual attention. Marketing cues and branding also attracted substantial visual attention. Linear regression revealed that core/non-core content and influencer features were among the strongest predictors of consumer response. The findings underscore the persuasive power of human and social features in digital food advertising. These insights have implications for commercial marketing practices and for understanding how visual and social elements influence youth engagement with food content on digital platforms.
Open Access Article
Gaze Characteristics Using a Three-Dimensional Heads-Up Display During Cataract Surgery
by Puranjay Gupta, Emily Kao, Neil Sheth, Reem Alahmadi and Michael J. Heiferman
J. Eye Mov. Res. 2025, 18(6), 68; https://doi.org/10.3390/jemr18060068 - 17 Nov 2025
Abstract
Purpose: An observational study investigated differences in gaze behaviors across varying expertise levels using a 3D heads-up display (HUD) integrated with eye-tracking. Methods: Twenty-five ophthalmologists (PGY2–4, fellows, attendings; n = 5 per group) performed cataract surgery on a SimulEYE model using the NGENUITY HUD. Results: Surgical proficiency increased with experience, with attendings achieving the highest scores (54.4 ± 0.89). Compared with attendings, PGY2s had longer fixation durations (p = 0.042), longer saccades (p < 0.0001), and fewer fixations on the HUD (p < 0.0001). Capsulorhexis diameter relative to capsule size increased with expertise, with fellows and attendings achieving significantly larger diameters than PGY2s (p < 0.0001). Experts maintained smaller tear angles, initiated tears closer to the main wound, and produced more circular morphologies. They rapidly alternated gaze between instruments and surrounding tissue, whereas novices (PGY2–4) fixated primarily on the instrument tip. Conclusions: Experts employ a feed-forward visual sampling strategy, allowing perception of instruments and surrounding tissue, minimizing inadvertent damage. Furthermore, attending surgeons maintain smaller tear angles and initiate tears proximally to forceps insertion, which may contribute to more controlled tears. Future integration of eye-tracking technology into surgical training could enhance visual-motor strategies in novices.
Open Access Article
BEACH-Gaze: Supporting Descriptive and Predictive Gaze Analytics in the Era of Artificial Intelligence and Advanced Data Science
by Bo Fu, Kayla Chu, Angelo Ryan Soriano, Peter Gatsby, Nicolas Guardado Guardado, Ashley Jones and Matthew Halderman
J. Eye Mov. Res. 2025, 18(6), 67; https://doi.org/10.3390/jemr18060067 - 12 Nov 2025
Abstract
Recent breakthroughs in machine learning, artificial intelligence, and the emergence of large datasets have made the integration of eye tracking increasingly feasible not only in computing but also in many other disciplines to accelerate innovation and scientific discovery. These transformative changes often depend on intelligently analyzing and interpreting gaze data, which demand a substantial technical background. Overcoming these technical barriers has remained an obstacle to the broader adoption of eye tracking technologies in certain communities. In an effort to increase accessibility that potentially empowers a broader community of researchers and practitioners to leverage eye tracking, this paper presents an open-source software platform: Beach Environment for the Analytics of Human Gaze (BEACH-Gaze), designed to offer comprehensive descriptive and predictive analytical support. Firstly, BEACH-Gaze provides sequential gaze analytics through window segmentation in its data processing and analysis pipeline, which can be used to achieve simulations of real-time gaze-based systems. Secondly, it integrates a range of established machine learning models, allowing researchers from diverse disciplines to generate gaze-enabled predictions without advanced technical expertise. The overall goal is to simplify technical details and to aid the broader community interested in eye tracking research and applications in data interpretation, and to leverage knowledge gained from eye gaze in the development of machine intelligence. As such, we further demonstrate three use cases that apply descriptive and predictive gaze analytics to support individuals with autism spectrum disorder during technology-assisted exercises, to dynamically tailor visual cues for an individual user via physiologically adaptive visualizations, and to predict pilots’ performance in flight maneuvers to enhance aviation safety.
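The window-segmentation pipeline the abstract describes can be sketched simply: slice a fixation stream into fixed-width time windows and emit descriptive features per window, the kind of table a downstream classifier could consume. This is an illustrative toy in the spirit of BEACH-Gaze, not its actual API; the function name, the tuple format, and the feature choices are my own assumptions.

```python
# Hypothetical sketch of window-segmented gaze analytics (not BEACH-Gaze's API).
def window_features(fixations, window_ms=1000):
    """fixations: list of (onset_ms, duration_ms) tuples, sorted by onset.
    Returns one feature dict per window: fixation count and mean duration."""
    if not fixations:
        return []
    end = fixations[-1][0] + fixations[-1][1]
    features = []
    start = 0
    while start < end:
        in_win = [d for t, d in fixations if start <= t < start + window_ms]
        features.append({
            "window_start_ms": start,
            "fixation_count": len(in_win),
            "mean_duration_ms": sum(in_win) / len(in_win) if in_win else 0.0,
        })
        start += window_ms
    return features
```

Feeding such per-window feature rows to an off-the-shelf classifier is what enables the "simulated real-time" gaze-based predictions the abstract mentions.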
Open Access Article
Recovery of the Pupillary Response After Light Adaptation Is Slowed in Patients with Age-Related Macular Degeneration
by Javier Barranco Garcia, Thomas Ferrazzini, Ana Coito, Dominik Brügger and Mathias Abegg
J. Eye Mov. Res. 2025, 18(6), 66; https://doi.org/10.3390/jemr18060066 - 10 Nov 2025
Abstract
Purpose: This study evaluates a novel, non-invasive method using a virtual reality (VR) headset with integrated eye trackers to assess retinal function by measuring the recovery of the pupillary response after light adaptation in patients with age-related macular degeneration (AMD). Methods: In this pilot study, fourteen patients with clinically confirmed AMD and 14 age-matched healthy controls were exposed to alternating bright and dark stimuli using a VR headset. The dark stimulus duration increased incrementally by 100 milliseconds per trial, repeated over 50 cycles. The pupillary response to the re-onset of brightness was recorded. Data were analyzed using a linear mixed-effects model to compare recovery patterns between groups and a convolutional neural network to evaluate diagnostic accuracy. Results: The pupillary response amplitude increased with longer dark stimuli, i.e., the longer the eye was exposed to darkness, the larger the subsequent pupillary amplitude. This pupillary recovery was significantly slowed by age and by the presence of macular degeneration. The test's diagnostic accuracy for AMD was approximately 92%, with a sensitivity of 90% and a specificity of 70%. Conclusions: This proof-of-concept study demonstrates that consumer-grade VR headsets with integrated eye tracking can detect retinal dysfunction associated with AMD. The method offers a fast, accessible, and potentially scalable approach for retinal disease screening and monitoring. Further optimization and validation in larger cohorts are needed to confirm its clinical utility.
(This article belongs to the Special Issue New Horizons and Recent Advances in Eye-Tracking Technology)
Open Access Article
Eye-Tracking Data in the Exploration of Students’ Engagement with Representations in Mathematics: Areas of Interest (AOIs) as Methodological and Conceptual Challenges
by Mahboubeh Nedaei, Roger Säljö, Shaista Kanwal and Simon Goodchild
J. Eye Mov. Res. 2025, 18(6), 65; https://doi.org/10.3390/jemr18060065 (registering DOI) - 5 Nov 2025
Abstract
In mathematics, and in learning mathematics, representations (texts, formulae, and figures) play a vital role. Eye-tracking is a promising approach for studying how representations are attended to in the context of mathematics learning. The focus of the research reported here is on the methodological and conceptual challenges that arise when analysing students’ engagement with different kinds of representations using such data. The study critically examines some of these issues through a case study of three engineering students engaging with an instructional document introducing double integrals. The study shows that not only do the characteristics of different types of representations affect students’ engagement with areas of interest (AOIs), but methodological decisions, such as how AOIs are defined, are also consequential for interpretations of that engagement. Both technical parameters and the inherent nature of the representations themselves must therefore be considered when defining AOIs and analysing students’ engagement with representations. The findings offer practical considerations for designing and analysing eye-tracking studies when students’ engagement with different representations is in focus.
Open Access Article
An Exploratory Eye-Tracking Study of Breast-Cancer Screening Ads: A Visual Analytics Framework and Descriptive Atlas
by Ioanna Yfantidou, Stefanos Balaskas and Dimitra Skandali
J. Eye Mov. Res. 2025, 18(6), 64; https://doi.org/10.3390/jemr18060064 - 4 Nov 2025
Abstract
Successful health promotion involves messages that are quickly captured and held long enough for eligibility, credibility, and calls to action to be encoded. This research develops an exploratory eye-tracking atlas of breast cancer screening ads viewed by midlife women, together with a replicable pipeline that distinguishes early capture from longer-term processing. Areas of Interest are divided into design-relevant categories and mapped with two complementary measures: first hit and time to first fixation for entry, and a tie-aware pairwise dominance model for dwell that produces rankings and an “early-vs.-sticky” quadrant visualization. Across creatives, pictorial and symbolic features were more likely to capture the first glance when they were perceptually dominant, while layouts containing centralized headlines or institutional cues deflected entry to the message and source. Prolonged attention was consistently focused on blocks of text, locations, and authorship badges over ornamental pictures, marking the functional difference between capture and processing. Subgroup differences indicated audience-sensitive shifts: older viewers and family households shifted earlier toward source cues, more educated audiences toward copy and locations, and younger or single viewers toward symbols and images. Internal diagnostics confirmed that the pairwise matrices were consistent with standard dwell summaries, validating the comparative approach. The atlas converts these patterns into design-ready heuristics: defend pieces that are both early and sticky, push sticky-but-late pieces toward probable entry channels, de-clutter early-but-not-sticky pieces to convert capture into processing, and rethink pieces that are neither. In practice, these diagnostics can be incorporated into procurement, pretesting, and briefs by agencies, educators, and campaign managers to enhance actionability without sacrificing audience segmentation. As an exploratory investigation, this study invites replication with larger and more diverse samples, generalization to dynamic media, and associations with downstream measures such as recall and uptake of services.
Open Access Article
Effects of Multimodal AR-HUD Navigation Prompt Mode and Timing on Driving Behavior
by Qi Zhu, Ziqi Liu, Youlan Li and Jung Euitay
J. Eye Mov. Res. 2025, 18(6), 63; https://doi.org/10.3390/jemr18060063 - 4 Nov 2025
Abstract
Current research on multimodal AR-HUD navigation systems primarily focuses on the presentation forms of auditory and visual information, yet the effects of synchrony between auditory and visual prompts as well as prompt timing on driving behavior and attention mechanisms remain insufficiently explored. This study employed a 2 (prompt mode: synchronous vs. asynchronous) × 3 (prompt timing: −2000 m, −1000 m, −500 m) within-subject experimental design to assess the impact of multimodal prompt synchrony and prompt distance on drivers’ reaction time, sustained attention, and eye movement behaviors, including average fixation duration and fixation count. Behavioral data demonstrated that both prompt mode and prompt timing significantly influenced drivers’ response performance (indexed by reaction time) and attention stability, with synchronous prompts at −1000 m yielding optimal performance. Eye-tracking results further revealed that synchronous prompts significantly enhanced fixation stability and reduced visual load, indicating more efficient information integration. Therefore, prompt mode and prompt timing significantly affect drivers’ perceptual processing and operational performance. Delivering synchronous auditory and visual prompts at −1000 m achieves an optimal balance between information timeliness and multimodal integration. This study recommends the following: (1) maintaining temporal consistency in multimodal prompts to facilitate perceptual integration and (2) controlling prompt distance within an intermediate range (−1000 m) to optimize the perception–action window, thereby improving the safety and efficiency of AR-HUD navigation systems.
Open Access Article
The Influence of Social Media-like Cues on Visual Attention—An Eye-Tracking Study with Food Products
by Maria Mamalikou, Konstantinos Gkatzionis and Malamatenia Panagiotou
J. Eye Mov. Res. 2025, 18(6), 62; https://doi.org/10.3390/jemr18060062 - 4 Nov 2025
Abstract
Social media has developed into a leading advertising platform, with Instagram likes serving as visual cues that may influence consumer perception and behavior. The present study investigated the effect of Instagram likes on visual attention, memory, and food evaluations, focusing on traditional Greek food posts and using eye-tracking technology. The study assessed whether a higher number of likes increased attention to the food area, enhanced memory recall of food names, and influenced subjective ratings (liking, perceived tastiness, and intention to taste). The results demonstrated no significant differences in overall viewing time, memory performance, or evaluation ratings between high-like and low-like conditions. Although not statistically significant, descriptive trends suggested that posts with a higher number of likes tended to be evaluated more positively, and the likes AOI showed a trend toward attracting more visual attention. The observed trends point to a possible subtle role of likes in users’ engagement with food posts, influencing how they process and evaluate such content. These findings add to the discussion about the effect of social media likes on information processing when individuals observe food pictures on social media.
Open Access Article
AI Images vs. Real Photographs: Investigating Visual Recognition and Perception
by Veslava Osińska, Weronika Kortas, Adam Szalach and Marc Welter
J. Eye Mov. Res. 2025, 18(6), 61; https://doi.org/10.3390/jemr18060061 - 3 Nov 2025
Abstract
Recently, the photorealism of generated images has improved noticeably due to the development of AI algorithms. These are high-resolution images of human faces and bodies, cats and dogs, vehicles, and other categories of objects that the untrained eye cannot distinguish from authentic photographs. The study assessed how people perceive 12 pictures generated by AI vs. 12 real photographs. Six main categories of stimuli were selected: architecture, art, faces, cars, landscapes, and pets. The visual perception of the selected images was studied by means of eye tracking, gaze patterns, and timing characteristics, with consideration of the respondent groups’ gender and knowledge of AI graphics. After the experiment, the study participants analysed the pictures again in order to describe the reasons for their choices. The results show that AI images of pets and real photographs of architecture were the easiest to identify. The largest differences in visual perception are between men and women, as well as between those experienced in digital graphics (including AI images) and the rest. Based on the analysis, several recommendations are suggested for AI developers and end-users.
Open Access Article
The Influence of Text Genre on Eye Movement Patterns During Reading
by Maksim Markevich and Anastasiia Streltsova
J. Eye Mov. Res. 2025, 18(6), 60; https://doi.org/10.3390/jemr18060060 - 3 Nov 2025
Abstract
Successful reading comprehension depends on many factors, including text genre. Eye-tracking studies indicate that genre shapes eye movement patterns at a local level. Although the reading of expository and narrative texts by adolescents has been described in the literature, the reading of poetry by adolescents remains understudied. In this study, we used scanpath analysis to examine how genre and comprehension level influence global eye movement strategies in adolescents (N = 44). Thus, the novelty of this study lies in the use of scanpath analysis to measure global eye movement strategies employed by adolescents while reading narrative, expository, and poetic texts. Two distinct reading patterns emerged: a forward reading pattern (linear progression) and a regressive reading pattern (frequent lookbacks). Readers tended to use regressive patterns more often with expository and poetic texts, while forward patterns were more common with a narrative text. Comprehension level also played a significant role, with readers with a higher level of comprehension relying more on regressive patterns for expository and poetic texts. The results of this experiment suggest that scanpaths effectively capture genre-driven differences in reading strategies, underscoring how genre expectations may shape visual processing during reading.
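The forward-vs.-regressive distinction can be quantified very simply from a sequence of fixated word positions. The toy classifier below is my own illustration, not the paper's scanpath analysis: it counts the fraction of saccades that move backward in the text (regressions) and applies an arbitrary threshold.

```python
# Illustrative toy classifier (not the paper's method).
def regression_rate(word_indices):
    """word_indices: sequence of fixated word positions in reading order.
    Returns the fraction of saccades that are lookbacks (regressions)."""
    moves = list(zip(word_indices, word_indices[1:]))
    if not moves:
        return 0.0
    regressions = sum(1 for a, b in moves if b < a)
    return regressions / len(moves)

def reading_pattern(word_indices, threshold=0.25):
    """Label a scanpath 'regressive' when more than `threshold` of saccades
    move backward; the threshold value here is an arbitrary illustration."""
    return "regressive" if regression_rate(word_indices) > threshold else "forward"
```

Actual scanpath analysis compares whole fixation sequences (e.g., via sequence similarity), but a regression-rate summary captures the same forward/lookback contrast the abstract describes.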
Full article

Figure 1
Open Access Article
Sequential Fixation Behavior in Road Marking Recognition: Implications for Design
by
Takaya Maeyama, Hiroki Okada and Daisuke Sawamura
J. Eye Mov. Res. 2025, 18(5), 59; https://doi.org/10.3390/jemr18050059 - 21 Oct 2025
Abstract
This study examined how drivers’ eye fixations change before, during, and after recognizing road markings, and how these changes relate to driving speed, visual complexity, cognitive functions, and demographics. Twenty licensed drivers viewed on-board movies showing digit or character road markings while their eye movements were tracked. Fixation positions and dispersions were analyzed. Results showed that, regardless of marking type, fixations were horizontally dispersed before and after recognition but became vertically concentrated during recognition, with fixation points shifting higher (p < 0.001) and horizontal dispersion decreasing (p = 0.01). During the recognition period, fixations moved upward and narrowed horizontally toward the final third (p = 0.034), suggesting increased focus. Longer fixations were linked to slower speeds for digit markings (p = 0.029) and to more characters for character markings (p < 0.001). No significant correlations were found with cognitive functions or demographics. These findings suggest that drivers first scan broadly, then concentrate on markings as they approach. For optimal recognition, simple or essential information should be placed centrally or lower, while detailed content should appear higher to align with natural gaze patterns. In high-speed environments, markings should prioritize clarity and brevity in central positions to ensure safe and rapid recognition.
Full article

Figure 1
Open Access Article
Oculomotor Behavior of L2 Readers with Typologically Distant L1 Background: The “Big Three” Effects of Word Length, Frequency, and Predictability
by
Marina Norkina, Daria Chernova, Svetlana Alexeeva and Maria Harchevnik
J. Eye Mov. Res. 2025, 18(5), 58; https://doi.org/10.3390/jemr18050058 - 18 Oct 2025
Abstract
Oculomotor reading behavior is influenced by both universal factors, like the “big three” of word length, frequency, and contextual predictability, and language-specific factors, such as script and grammar. The aim of this study was to examine the influence of the “big three” factors on L2 reading, focusing on a typologically distant L1/L2 pair with dramatic differences in script and grammar. A total of 41 native Chinese-speaking learners of Russian (levels A2–B2) and 40 native Russian speakers read a corpus of 90 Russian sentences for comprehension. Their eye movements were recorded with an EyeLink 1000+. We analyzed both early (gaze duration and skipping rate) and late (regression rate and rereading time) eye movement measures. As expected, the “big three” effects influenced oculomotor behavior in both L1 and L2 readers, being more pronounced in L2, but substantial differences were also revealed. Word frequency in L1 reading primarily influenced early processing stages, whereas in L2 reading it remained significant in later stages as well. Predictability had an immediate effect on skipping rates in L1 reading, while L2 readers exhibited it only in late measures. Word length was the only factor that interacted with L2 language exposure, demonstrating adjustment to the alphabetic script and polymorphemic word structure. Our findings provide new insights into the processing challenges of L2 readers with typologically distant L1 backgrounds.
Full article

Figure 1
Open Access Article
Visual Strategies for Guiding Gaze Sequences and Attention in Yi Symbols: Eye-Tracking Insights
by
Bo Yuan and Sakol Teeravarunyou
J. Eye Mov. Res. 2025, 18(5), 57; https://doi.org/10.3390/jemr18050057 - 16 Oct 2025
Abstract
This study investigated the effectiveness of visual strategies in guiding gaze behavior and attention on Yi graphic symbols using eye tracking. Four strategies (color brightness, layering, line guidance, and size variation) were tested with 34 Thai participants unfamiliar with Yi symbol meanings. Gaze sequence analysis, using Levenshtein distance and similarity ratio, showed that bright colors, layered arrangements, and connected lines enhanced alignment with intended gaze sequences, while size variation had minimal effect. Bright red symbols and lines captured faster initial fixations (shorter Time to First Fixation, TTFF) on key Areas of Interest (AOIs), unlike layering and size. Lines reduced dwell time at sequence starts, promoting efficient progression, while larger symbols sustained longer attention, though inconsistently. Color and layering showed no consistent dwell time effects. These findings inform Yi graphic symbol design for effective cross-cultural visual communication.
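The gaze sequence analysis described above compares an observed fixation sequence over AOIs with the designer's intended sequence using Levenshtein (string edit) distance, normalized into a similarity ratio. A minimal sketch of that comparison is below; the AOI labels ("A"–"D") and example sequences are hypothetical illustrations, not data from the study.

```python
# Sketch of scanpath comparison via Levenshtein distance and similarity
# ratio: each fixated AOI is encoded as one character, so two gaze
# sequences become strings that can be compared by edit distance.

def levenshtein(a: str, b: str) -> int:
    """Edit distance between two AOI-label sequences (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity_ratio(a: str, b: str) -> float:
    """1 minus normalized edit distance: 1.0 means identical sequences."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

# Intended design sequence vs. an observed fixation sequence over AOIs A-D:
intended = "ABCD"
observed = "ABDC"   # viewer visited the last two AOIs in reversed order
print(similarity_ratio(intended, observed))  # prints 0.5
```

A higher ratio indicates that a visual strategy (e.g., connected lines) steered viewers closer to the intended order; note that plain Levenshtein distance charges a transposition as two substitutions, which is why the swapped pair above halves the score.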
Full article

Graphical abstract
Open Access Article
DyslexiaNet: Examining the Viability and Efficacy of Eye Movement-Based Deep Learning for Dyslexia Detection
by
Ramis İleri, Çiğdem Gülüzar Altıntop, Fatma Latifoğlu and Esra Demirci
J. Eye Mov. Res. 2025, 18(5), 56; https://doi.org/10.3390/jemr18050056 - 15 Oct 2025
Abstract
Dyslexia is a neurodevelopmental disorder that impairs reading, affecting 5–17.5% of children and representing the most common learning disability. Individuals with dyslexia experience decoding, reading fluency, and comprehension difficulties, hindering vocabulary development and learning. Early and accurate identification is essential for targeted interventions. Traditional diagnostic methods rely on behavioral assessments and neuropsychological tests, which can be time-consuming and subjective. Recent studies suggest that physiological signals, such as electrooculography (EOG), can provide objective insights into reading-related cognitive and visual processes. Despite this potential, there is limited research on how typeface and font characteristics influence reading performance in dyslexic children using EOG measurements. To address this gap, we investigated the most suitable typefaces for Turkish-speaking children with dyslexia by analyzing EOG signals recorded during reading tasks. We developed a novel deep learning framework, DyslexiaNet, using scalogram images from horizontal and vertical EOG channels, and compared it with AlexNet, MobileNet, and ResNet. Reading performance indicators, including reading time, blink rate, regression rate, and EOG signal energy, were evaluated across multiple typefaces and font sizes. Results showed that typeface significantly affects reading efficiency in dyslexic children. The BonvenoCF font was associated with shorter reading times, fewer regressions, and lower cognitive load. DyslexiaNet achieved the highest classification accuracy (99.96% for horizontal channels) while requiring lower computational load than the other networks. These findings demonstrate that EOG-based physiological measurements combined with deep learning offer a non-invasive, objective approach for dyslexia detection and personalized typeface selection. This method can provide practical guidance for designing educational materials and support clinicians in early diagnosis and individualized intervention strategies for children with dyslexia.
Full article

Figure 1
Open Access Article
Head and Eye Movements During Pedestrian Crossing in Patients with Visual Impairment: A Virtual Reality Eye Tracking Study
by
Mark Mervic, Ema Grašič, Polona Jaki Mekjavić, Nataša Vidovič Valentinčič and Ana Fakin
J. Eye Mov. Res. 2025, 18(5), 55; https://doi.org/10.3390/jemr18050055 - 15 Oct 2025
Abstract
Real-world navigation depends on coordinated head–eye behaviour that standard tests of visual function miss. We investigated how visual impairment affects traffic navigation, whether behaviour differs by visual impairment type, and whether this functional grouping explains performance better than WHO categorisation. Using a virtual reality (VR) headset with integrated head and eye tracking, we evaluated detection of moving cars and of safe road-crossing opportunities in 40 patients with central, peripheral, or combined visual impairment and 19 controls. Only two patients, with a combination of very low visual acuity and severely constricted visual fields, failed both visual tasks. Overall, patients identified safe-crossing intervals 1.3–1.5 s later than controls (p ≤ 0.01). Head–eye movement profiles diverged by visual impairment type: patients with central impairment showed shorter, more frequent saccades (p < 0.05); patients with peripheral impairment showed exploratory behaviour similar to controls; and patients with combined impairment showed fewer microsaccades (p < 0.05), reduced total macrosaccade amplitude (p < 0.05), and fewer head turns (p < 0.05). Classification by impairment type explained behaviour better than WHO categorisation. These findings challenge acuity/field-based classifications and support integrating functional metrics into risk stratification and targeted rehabilitation, with VR providing a safe, scalable assessment tool.
Full article

Graphical abstract
Open Access Feature Paper Article
Test–Retest Reliability of a Computerized Hand–Eye Coordination Task
by
Antonio Ríder-Vázquez, Estanislao Gutiérrez-Sánchez, Clara Martinez-Perez and María Carmen Sánchez-González
J. Eye Mov. Res. 2025, 18(5), 54; https://doi.org/10.3390/jemr18050054 - 14 Oct 2025
Abstract
Background: Hand–eye coordination is essential for daily functioning and sports performance, but standardized digital protocols for its reliable assessment are limited. This study aimed to evaluate the intra-examiner repeatability and inter-examiner reproducibility of a computerized protocol (COI-SV®) for assessing hand–eye coordination in healthy adults, as well as the influence of age and sex. Methods: Seventy-eight adults completed four sessions of a computerized visual–motor task requiring rapid and accurate responses to randomly presented targets. Accuracy and response times were analyzed using repeated-measures and reliability analyses. Results: Accuracy showed a small session effect and minor examiner differences on the first day, whereas response times were consistent across sessions. Men generally responded faster than women, and response times increased slightly with age. Overall, reliability indices indicated moderate-to-good repeatability and reproducibility for both accuracy and response time measures. Conclusions: The COI-SV® protocol provides a robust, objective, and reproducible measurement of hand–eye coordination, supporting its use in clinical, sports, and research settings.
Full article

Figure 1
Special Issues
Special Issue in JEMR: New Horizons and Recent Advances in Eye-Tracking Technology
Guest Editor: Lee Friedman. Deadline: 20 December 2025

Special Issue in JEMR: Eye Tracking and Visualization
Guest Editor: Michael Burch. Deadline: 30 April 2026

Special Issue in JEMR: Reading Across the Adult Lifespan: Perspectives from Eye Movement Research
Guest Editor: Victoria A. McGowan. Deadline: 20 May 2026

Special Issue in JEMR: Digital Advances in Binocular Vision and Eye Movement Assessment
Guest Editors: Clara Martinez-Perez, Jacobo Garcia-Queiruga. Deadline: 20 June 2026