- Predicting Cartographic Symbol Location with Eye-Tracking Data and Machine Learning Approach
- Influence of Time Pressure on Successive Visual Searches
- eyeNotate: Interactive Annotation of Mobile Eye Tracking Data Based on Few-Shot Image Classification
- Improving Reading and Eye Movement Control in Readers with Oculomotor and Visuo-Attentional Deficits
Journal Description
Journal of Eye Movement Research
(JEMR) is an international, peer-reviewed, open access journal covering all aspects of oculomotor functioning, including eye-recording methodology, neurophysiological and cognitive models, attention, and reading, as well as applications in neurology, ergonomics, media research, and other areas. It is published bimonthly online by MDPI (from Volume 18, Issue 1, 2025).
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, SCIE (Web of Science), PubMed, PMC, and other databases.
- Journal Rank: JCR - Q1 (Ophthalmology) / CiteScore - Q2 (Ophthalmology)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 39.9 days after submission; acceptance to publication takes 5.8 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
Impact Factor: 2.8 (2024); 5-Year Impact Factor: 2.8 (2024)
Latest Articles
Sequential Fixation Behavior in Road Marking Recognition: Implications for Design
J. Eye Mov. Res. 2025, 18(5), 59; https://doi.org/10.3390/jemr18050059 - 21 Oct 2025
Abstract
This study examined how drivers’ eye fixations change before, during, and after recognizing road markings, and how these changes relate to driving speed, visual complexity, cognitive functions, and demographics. Twenty licensed drivers viewed on-board movies showing digit or character road markings while their eye movements were tracked. Fixation positions and dispersions were analyzed. Results showed that, regardless of marking type, fixations were horizontally dispersed before and after recognition but became vertically concentrated during recognition, with fixation points shifting higher (p < 0.001) and horizontal dispersion decreasing (p = 0.01). During the recognition period, fixations moved upward and narrowed horizontally toward the final third (p = 0.034), suggesting increased focus. Longer fixations were linked to slower speeds for digits (p = 0.029) and more characters for character markings (p < 0.001). No significant correlations were found with cognitive functions or demographics. These findings suggest that drivers first scan broadly, then concentrate on markings as they approach. For optimal recognition, simple or essential information should be placed centrally or lower, while detailed content should appear higher to align with natural gaze patterns. In high-speed environments, markings should prioritize clarity and brevity in central positions to ensure safe and rapid recognition.
Open Access Article
Oculomotor Behavior of L2 Readers with Typologically Distant L1 Background: The “Big Three” Effects of Word Length, Frequency, and Predictability
by Marina Norkina, Daria Chernova, Svetlana Alexeeva and Maria Harchevnik
J. Eye Mov. Res. 2025, 18(5), 58; https://doi.org/10.3390/jemr18050058 - 18 Oct 2025
Abstract
Oculomotor reading behavior is influenced by both universal factors, like the “big three” of word length, frequency, and contextual predictability, and language-specific factors, such as script and grammar. The aim of this study was to examine the influence of the “big three” factors on L2 reading, focusing on a typologically distant L1/L2 pair with dramatic differences in script and grammar. A total of 41 native Chinese-speaking learners of Russian (levels A2–B2) and 40 native Russian speakers read a corpus of 90 Russian sentences for comprehension. Their eye movements were recorded with an EyeLink 1000+. We analyzed both early (gaze duration and skipping rate) and late (regression rate and rereading time) eye movement measures. As expected, the “big three” effects influenced oculomotor behavior in both L1 and L2 readers, being more pronounced for L2, but substantial differences were also revealed. Word frequency in L1 reading primarily influenced early processing stages, whereas in L2 reading it remained significant in later stages as well. Predictability had an immediate effect on skipping rates in L1 reading, while L2 readers only exhibited it in late measures. Word length was the only factor that interacted with L2 language exposure, demonstrating adjustment to the alphabetic script and polymorphemic word structure. Our findings provide new insights into the processing challenges of L2 readers with typologically distant L1 backgrounds.

Figure 1
Open Access Article
Visual Strategies for Guiding Gaze Sequences and Attention in Yi Symbols: Eye-Tracking Insights
by Bo Yuan and Sakol Teeravarunyou
J. Eye Mov. Res. 2025, 18(5), 57; https://doi.org/10.3390/jemr18050057 - 16 Oct 2025
Abstract
This study investigated the effectiveness of visual strategies in guiding gaze behavior and attention on Yi graphic symbols using eye-tracking. Four strategies (color brightness, layering, line guidance, and size variation) were tested with 34 Thai participants unfamiliar with Yi symbol meanings. Gaze sequence analysis, using Levenshtein distance and similarity ratio, showed that bright colors, layered arrangements, and connected lines enhanced alignment with intended gaze sequences, while size variation had minimal effect. Bright red symbols and lines captured faster initial fixations (Time to First Fixation, TTFF) on key Areas of Interest (AOIs), unlike layering and size. Lines reduced dwell time at sequence starts, promoting efficient progression, while larger symbols sustained longer attention, though inconsistently. Color and layering showed no consistent dwell time effects. These findings inform Yi graphic symbol design for effective cross-cultural visual communication.
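The gaze-sequence comparison mentioned in this abstract can be sketched in a few lines. This is a minimal illustration, not the authors' code: it assumes each gaze sequence is encoded as a string of AOI labels, and defines the similarity ratio as 1 minus the edit distance divided by the longer sequence's length (other normalizations exist).

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two AOI-label sequences (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity_ratio(a: str, b: str) -> float:
    """1.0 = identical scan order, 0.0 = maximally different."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

# Intended AOI order "ABCD" vs. an observed sequence with the last two AOIs swapped
print(similarity_ratio("ABCD", "ABDC"))  # 0.5
```

A higher ratio for a strategy condition would indicate that observed scanpaths followed the designer's intended AOI order more closely.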

Graphical abstract
Open Access Article
DyslexiaNet: Examining the Viability and Efficacy of Eye Movement-Based Deep Learning for Dyslexia Detection
by Ramis İleri, Çiğdem Gülüzar Altıntop, Fatma Latifoğlu and Esra Demirci
J. Eye Mov. Res. 2025, 18(5), 56; https://doi.org/10.3390/jemr18050056 - 15 Oct 2025
Abstract
Dyslexia is a neurodevelopmental disorder that impairs reading, affecting 5–17.5% of children and representing the most common learning disability. Individuals with dyslexia experience decoding, reading fluency, and comprehension difficulties, hindering vocabulary development and learning. Early and accurate identification is essential for targeted interventions. Traditional diagnostic methods rely on behavioral assessments and neuropsychological tests, which can be time-consuming and subjective. Recent studies suggest that physiological signals, such as electrooculography (EOG), can provide objective insights into reading-related cognitive and visual processes. Despite this potential, there is limited research on how typeface and font characteristics influence reading performance in dyslexic children using EOG measurements. To address this gap, we investigated the most suitable typefaces for Turkish-speaking children with dyslexia by analyzing EOG signals recorded during reading tasks. We developed a novel deep learning framework, DyslexiaNet, using scalogram images from horizontal and vertical EOG channels, and compared it with AlexNet, MobileNet, and ResNet. Reading performance indicators, including reading time, blink rate, regression rate, and EOG signal energy, were evaluated across multiple typefaces and font sizes. Results showed that typeface significantly affects reading efficiency in dyslexic children. The BonvenoCF font was associated with shorter reading times, fewer regressions, and lower cognitive load. DyslexiaNet achieved the highest classification accuracy (99.96% for horizontal channels) while requiring lower computational load than other networks. These findings demonstrate that EOG-based physiological measurements combined with deep learning offer a non-invasive, objective approach for dyslexia detection and personalized typeface selection. 
This method can provide practical guidance for designing educational materials and support clinicians in early diagnosis and individualized intervention strategies for children with dyslexia.

Figure 1
Open Access Article
Head and Eye Movements During Pedestrian Crossing in Patients with Visual Impairment: A Virtual Reality Eye Tracking Study
by Mark Mervic, Ema Grašič, Polona Jaki Mekjavić, Nataša Vidovič Valentinčič and Ana Fakin
J. Eye Mov. Res. 2025, 18(5), 55; https://doi.org/10.3390/jemr18050055 - 15 Oct 2025
Abstract
Real-world navigation depends on coordinated head–eye behaviour that standard tests of visual function miss. We investigated how visual impairment affects traffic navigation, whether behaviour differs by visual impairment type, and whether this functional grouping better explains performance than WHO categorisation. Using a virtual reality (VR) headset with integrated head and eye tracking, we evaluated detection of moving cars and safe road-crossing opportunities in 40 patients with central, peripheral, or combined visual impairment and 19 controls. Only two patients with a combination of very low visual acuity and severely constricted visual fields failed both visual tasks. Overall, patients identified safe-crossing intervals 1.3–1.5 s later than controls (p ≤ 0.01). Head-eye movement profiles diverged by visual impairment: patients with central impairment showed shorter, more frequent saccades (p < 0.05); patients with peripheral impairment showed exploratory behaviour similar to controls; while patients with combined impairment executed fewer microsaccades (p < 0.05), reduced total macrosaccade amplitude (p < 0.05), and fewer head turns (p < 0.05). Classification by impairment type explained behaviour better than WHO categorisation. These findings challenge acuity/field-based classifications and support integrating functional metrics into risk stratification and targeted rehabilitation, with VR providing a safe, scalable assessment tool.

Graphical abstract
Open Access Feature Paper Article
Test–Retest Reliability of a Computerized Hand–Eye Coordination Task
by Antonio Ríder-Vázquez, Estanislao Gutiérrez-Sánchez, Clara Martinez-Perez and María Carmen Sánchez-González
J. Eye Mov. Res. 2025, 18(5), 54; https://doi.org/10.3390/jemr18050054 - 14 Oct 2025
Abstract
Background: Hand–eye coordination is essential for daily functioning and sports performance, but standardized digital protocols for its reliable assessment are limited. This study aimed to evaluate the intra-examiner repeatability and inter-examiner reproducibility of a computerized protocol (COI-SV®) for assessing hand–eye coordination in healthy adults, as well as the influence of age and sex. Methods: Seventy-eight adults completed four sessions of a computerized visual–motor task requiring rapid and accurate responses to randomly presented targets. Accuracy and response times were analyzed using repeated-measures and reliability analyses. Results: Accuracy showed a small session effect and minor examiner differences on the first day, whereas response times were consistent across sessions. Men generally responded faster than women, and response times increased slightly with age. Overall, reliability indices indicated moderate-to-good repeatability and reproducibility for both accuracy and response time measures. Conclusions: The COI-SV® protocol provides a robust, objective, and reproducible measurement of hand–eye coordination, supporting its use in clinical, sports, and research settings.

Figure 1
Open Access Article
Recognition and Misclassification Patterns of Basic Emotional Facial Expressions: An Eye-Tracking Study in Young Healthy Adults
by Neşe Alkan
J. Eye Mov. Res. 2025, 18(5), 53; https://doi.org/10.3390/jemr18050053 - 11 Oct 2025
Abstract
Accurate recognition of basic facial emotions is well documented, yet the mechanisms of misclassification and their relation to gaze allocation remain under-reported. The present study utilized a within-subjects eye-tracking design to examine both accurate and inaccurate recognition of five basic emotions (anger, disgust, fear, happiness, and sadness) in healthy young adults. Fifty participants (twenty-four women) completed a forced-choice categorization task with 10 stimuli (female/male poser × emotion). A remote eye tracker (60 Hz) recorded fixations mapped to eyes, nose, and mouth areas of interest (AOIs). The analyses combined accuracy and decision-time statistics with heatmap comparisons of misclassified versus accurate trials within the same image. Overall accuracy was 87.8% (439/500). Misclassification patterns depended on the target emotion, but not on participant gender. Fear male was most often misclassified (typically as disgust), and sadness female was frequently labeled as fear or disgust; disgust was the most incorrectly attributed response. For accurate trials, decision time showed main effects of emotion (p < 0.001) and participant gender (p = 0.033): happiness was categorized fastest and anger slowest, and women responded faster overall, with particularly fast response times for sadness. The AOI results revealed strong main effects and an AOI × emotion interaction (p < 0.001): eyes received the most fixations, but fear drew relatively more mouth sampling and sadness more nose sampling. Crucially, heatmaps showed an upper-face bias (eye AOI) in inaccurate trials, whereas accurate trials retained eye sampling and added nose and mouth AOI coverage, which aligned with diagnostic cues. These findings indicate that the scanpath strategy, in addition to information availability, underpins success and failure in basic-emotion recognition, with implications for theory, targeted training, and affective technologies.

Figure 1
Open Access Article
The Effect of Visual Attention Dispersion on Cognitive Response Time
by Yejin Lee and Kwangtae Jung
J. Eye Mov. Res. 2025, 18(5), 52; https://doi.org/10.3390/jemr18050052 - 10 Oct 2025
Abstract
In safety-critical systems like nuclear power plants, the rapid and accurate perception of visual interface information is vital. This study investigates the relationship between visual attention dispersion measured via heatmap entropy (as a specific measure of gaze entropy) and response time during information search tasks. Sixteen participants viewed a prototype of an accident response support system and answered questions at three difficulty levels while their eye movements were tracked using Tobii Pro Glasses 2. Results showed a significant positive correlation (r = 0.595, p < 0.01) between heatmap entropy and response time, indicating that more dispersed attention leads to longer task completion times. This pattern held consistently across all difficulty levels. These findings suggest that heatmap entropy is a useful metric for evaluating user attention strategies and can inform interface usability assessments in high-stakes environments.
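Heatmap entropy of the kind used here is commonly computed as the Shannon entropy of a normalized fixation heatmap. A minimal sketch follows, with the grid size and normalization chosen for illustration rather than taken from the study's exact procedure:

```python
import numpy as np

def heatmap_entropy(x, y, bins=8):
    """Shannon entropy (bits) of fixation counts binned into a bins x bins grid.

    Higher values mean gaze was spread over more of the display;
    0 means every fixation fell in a single cell.
    """
    counts, _, _ = np.histogram2d(x, y, bins=bins)
    p = counts.ravel() / counts.sum()
    p = p[p > 0]  # 0 * log(0) is treated as 0
    return float(-(p * np.log2(p)).sum())

# Two fixations in opposite corners of a 2 x 2 grid -> 1 bit
print(heatmap_entropy([0.1, 0.9], [0.1, 0.9], bins=2))  # 1.0
```

Under this definition, the positive correlation reported above means that trials with higher-entropy (more dispersed) heatmaps tended to take longer to complete.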

Graphical abstract
Open Access Article
Diagnosing Colour Vision Deficiencies Using Eye Movements (Without Dedicated Eye-Tracking Hardware)
by Aryaman Taore, Gabriel Lobo, Philip R. K. Turnbull and Steven C. Dakin
J. Eye Mov. Res. 2025, 18(5), 51; https://doi.org/10.3390/jemr18050051 - 2 Oct 2025
Abstract
Purpose: To investigate the efficacy of a novel test for diagnosing colour vision deficiencies using reflexive eye movements measured using an unmodified tablet. Methods: This study followed a cross-sectional design, where thirty-three participants aged between 17 and 65 years were recruited. The participant group comprised 23 controls, 8 deuteranopes, and 2 protanopes. An anomaloscope was employed to determine the colour vision status of these participants. The study methodology involved using an Apple iPad Pro’s built-in eye-tracking capabilities to record eye movements in response to coloured patterns drifting on the screen. Through an automated analysis of these movements, the researchers estimated individuals’ red–green equiluminant point and their equivalent luminance contrast. Results: Estimates of the red–green equiluminant point and the equivalent luminance contrast were used to classify participants’ colour vision status with a sensitivity rate of 90.0% and a specificity rate of 91.3%. Conclusions: The novel colour vision test administered using an unmodified tablet was found to be effective in diagnosing colour vision deficiencies and has the potential to be a practical and cost-effective alternative to traditional methods. Translational Relevance: The test’s objectivity, its straightforward implementation on a standard tablet, and its minimal requirement for patient cooperation all contribute to the wider accessibility of colour vision diagnosis. This is particularly advantageous for demographics like children, who might be challenging to engage but for whom early detection is of paramount importance.
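For reference, the sensitivity and specificity rates quoted above follow directly from confusion counts. The counts in this sketch are illustrative values consistent with the cohort sizes and rates reported here (10 colour-deficient participants, 23 controls); they are not taken from the paper:

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of truly colour-deficient participants flagged by the test."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of normal-vision controls correctly passed by the test."""
    return tn / (tn + fp)

# e.g. 9 of 10 deficient observers detected, 21 of 23 controls cleared
print(round(sensitivity(9, 1), 3))   # 0.9
print(round(specificity(21, 2), 3))  # 0.913
```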

Figure 1
Open Access Article
Visual Attention to Economic Information in Simulated Ophthalmic Deficits: A Remote Eye-Tracking Study
by Cansu Yuksel Elgin and Ceyhun Elgin
J. Eye Mov. Res. 2025, 18(5), 50; https://doi.org/10.3390/jemr18050050 - 2 Oct 2025
Abstract
This study investigated how simulated ophthalmic visual field deficits affect visual attention and economic information processing. Using webcam-based eye tracking, 227 participants with normal vision recruited through Amazon Mechanical Turk were assigned to control, central vision loss, peripheral vision loss, or scattered vision loss simulation conditions. Participants viewed economic stimuli of varying complexity while eye movements, cognitive load, and comprehension were measured. All deficit conditions showed altered oculomotor behaviors. Central vision loss produced the most severe impairments: 43.6% increased fixation durations, 68% longer scanpaths, and comprehension accuracy of 61.2% versus 87.3% for controls. Visual deficits interacted with information complexity, showing accelerated impairment for complex stimuli. Mediation analysis revealed 47% of comprehension deficits were mediated through altered attention patterns. Cognitive load was significantly elevated, with central vision loss participants reporting 84% higher mental demand than controls. These findings demonstrate that visual field deficits fundamentally alter economic information processing through both direct perceptual limitations and compensatory attention strategies. Results demonstrate the feasibility of webcam-based eye tracking for studying simulated visual deficits and suggest that different types of simulated visual deficits may require distinct information presentation strategies.
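A "proportion mediated" figure like the 47% reported above typically comes from a product-of-coefficients mediation decomposition: the indirect effect (deficit → attention → comprehension) divided by the total effect. The sketch below reproduces the idea on synthetic data; the coefficients, variable names, and sample are illustrative assumptions, not the study's dataset:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4000
deficit = rng.integers(0, 2, n).astype(float)        # X: simulated deficit condition (0/1)
attention = 1.0 * deficit + rng.normal(0, 1, n)      # M: attention-pattern disruption
comprehension = -0.8 * attention - 0.9 * deficit + rng.normal(0, 1, n)  # Y

def coef(resp, *predictors):
    """OLS coefficient of the first predictor, controlling for the rest."""
    X = np.column_stack([np.ones(len(resp)), *predictors])
    return np.linalg.lstsq(X, resp, rcond=None)[0][1]

a = coef(attention, deficit)                   # X -> M path
b = coef(comprehension, attention, deficit)    # M -> Y path, controlling for X
total = coef(comprehension, deficit)           # total X -> Y effect
prop_mediated = (a * b) / total
print(round(prop_mediated, 2))  # close to the true value 0.8 / 1.7 ~ 0.47
```

In practice the indirect effect and its uncertainty would be estimated with bootstrapped confidence intervals rather than a single point estimate.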

Graphical abstract
Open Access Article
Guiding the Gaze: How Bionic Reading Influences Eye Movements
by T. R. Beelders
J. Eye Mov. Res. 2025, 18(5), 49; https://doi.org/10.3390/jemr18050049 - 1 Oct 2025
Abstract
In recent years, Bionic reading has been introduced as a means to combat superficial reading and low comprehension rates. This paper investigates eye movements between participants who read a passage in standard font and an additional Bionic font passage. It was found that Bionic font does not significantly change eye movements when reading. Fixation durations, number of fixations and reading speeds were not significantly different between the two formats. Furthermore, fixations were spread throughout the word and not only on leading characters, even when using Bionic font; hence, participants were not able to “auto-complete” the words. Additionally, Bionic font did not facilitate easier processing of low-frequency or unfamiliar words. Overall, it would appear that Bionic font, in the short term, does not affect reading. Further investigation is needed to determine whether a long-term intervention with Bionic font is more meaningful than standard interventions.
(This article belongs to the Special Issue Eye Movements in Reading and Related Difficulties)

Graphical abstract
Open Access Article
Tracking the Impact of Age and Dimensional Shifts on Situation Model Updating During Narrative Text Comprehension
by César Campos-Rojas and Romualdo Ibáñez-Orellana
J. Eye Mov. Res. 2025, 18(5), 48; https://doi.org/10.3390/jemr18050048 - 26 Sep 2025
Abstract
Studies on the relationship between age and situation model updating during narrative text reading have mainly used response or reading times. This study enhances previous measures (working memory, recognition probes, and comprehension) by incorporating eye-tracking techniques to compare situation model updating between young and older Chilean adults. The study included 82 participants (40 older adults and 42 young adults) who read two narrative texts under three conditions (no shift, spatial shift, and character shift) using a between-subject (age) and within-subject (dimensional change) design. The results show that, while differences in working memory capacity were observed between the groups, these differences did not impact situation model comprehension. Younger adults performed better in recognition tests regardless of updating conditions. Eye-tracking data showed increased fixation times for dimensional shifts and longer reading times in older adults, with no interaction between age and dimensional shifts.

Figure 1
Open Access Review
A Comprehensive Framework for Eye Tracking: Methods, Tools, Applications, and Cross-Platform Evaluation
by Govind Ram Chhimpa, Ajay Kumar, Sunita Garhwal, Dhiraj Kumar, Niyaz Ahmad Wani, Mudasir Ahmad Wani and Kashish Ara Shakil
J. Eye Mov. Res. 2025, 18(5), 47; https://doi.org/10.3390/jemr18050047 - 23 Sep 2025
Abstract
Eye tracking, a fundamental process in gaze analysis, involves measuring the point of gaze or eye motion. It is crucial in numerous applications, including human–computer interaction (HCI), education, health care, and virtual reality. This study delves into eye-tracking concepts, terminology, performance parameters, applications, and techniques, focusing on modern and efficient approaches such as video-oculography (VOG)-based systems, deep learning models for gaze estimation, wearable and cost-effective devices, and integration with virtual/augmented reality and assistive technologies. These contemporary methods, prevalent for over two decades, significantly contribute to developing cutting-edge eye-tracking applications. The findings underscore the significance of diverse eye-tracking techniques in advancing eye-tracking applications. They leverage machine learning to glean insights from existing data, enhance decision-making, and minimize the need for manual calibration during tracking. Furthermore, the study explores and recommends strategies to address limitations/challenges inherent in specific eye-tracking methods and applications. Finally, the study outlines future directions for leveraging eye tracking across various developed applications, highlighting its potential to continue evolving and enriching user experiences.

Figure 1
Open Access Article
Microsaccade Activity During Visuospatial Working Memory in Early-Stage Parkinson’s Disease
by Katherine Farber, Linjing Jiang, Mario Michiels, Ignacio Obeso and Hoi-Chung Leung
J. Eye Mov. Res. 2025, 18(5), 46; https://doi.org/10.3390/jemr18050046 - 22 Sep 2025
Abstract
Fixational saccadic eye movements (microsaccades) have been associated with cognitive processes, especially in tasks requiring spatial attention and memory. Alterations in oculomotor and cognitive control are commonly observed in Parkinson’s disease (PD), though it is unclear to what extent microsaccade activity is affected. We acquired eye movement data from sixteen participants with early-stage PD and thirteen older healthy controls to examine the effects of dopamine modulation on microsaccade activity during the delay period of a spatial working memory task. Some microsaccade characteristics, such as amplitude and duration, were moderately larger when PD participants were “on” their dopaminergic medication than in healthy controls or than when they were “off” medication, while PD participants exhibited microsaccades with a linear amplitude–velocity relationship comparable to controls. Both groups showed similar microsaccade rate patterns across task events, with most participants showing a horizontal bias in microsaccade direction during the delay period regardless of the remembered target location. Overall, our data suggest minimal involvement of microsaccades during visuospatial working memory maintenance under conditions without explicit attentional cues in both subject groups. However, moderate effects of PD-related dopamine deficiency were observed for microsaccade size during working memory maintenance.

Figure 1
Open Access Article
Active Gaze Guidance and Pupil Dilation Effects Through Subject Engagement in Ophthalmic Imaging
by David Harings, Niklas Bauer, Damian Mendroch, Uwe Oberheide and Holger Lubatschowski
J. Eye Mov. Res. 2025, 18(5), 45; https://doi.org/10.3390/jemr18050045 - 19 Sep 2025
Abstract
Modern ophthalmic imaging methods such as optical coherence tomography (OCT) typically require expensive scanner components to direct the light beam across the retina while the patient’s gaze remains fixed. This proof-of-concept experiment investigates whether the patient’s natural eye movements can replace mechanical scanning by guiding the gaze along predefined patterns. An infrared fundus camera setup was used with nine healthy adults (aged 20–57) who completed tasks comparing passive viewing of moving patterns to actively tracing them by drawing using a touchpad interface. The active task involved participant-controlled target movement with real-time color feedback for accurate pattern tracing. Results showed that active tracing significantly increased pupil diameter by an average of 17.8% (range 8.9–43.6%; p < 0.001) and reduced blink frequency compared to passive viewing. More complex patterns led to greater pupil dilation, confirming the link between cognitive load and physiological response. These findings demonstrate that patient-driven gaze guidance can stabilize gaze, reduce blinking, and naturally dilate the pupil. These conditions might enhance the quality of scannerless OCT or other imaging techniques benefiting from guided gaze and larger pupils. There could be benefits for children and people with compliance issues, although further research is needed to consider cognitive load.
(This article belongs to the Special Issue Eye Tracking and Visualization)
Open Access Article
Processing Written Language in Video Games: An Eye-Tracking Study on Subtitled Instructions
by
Haiting Lan, Sixin Liao, Jan-Louis Kruger and Michael J. Richardson
J. Eye Mov. Res. 2025, 18(5), 44; https://doi.org/10.3390/jemr18050044 - 17 Sep 2025
Abstract
Written language is a common component among the multimodal representations that help players construct meanings and guide actions in video games. However, how players process texts in video games remains underexplored. To address this, the current exploratory eye-tracking study examines how players processed subtitled instructions and resultant game performance. Sixty-four participants were recruited to play a video game set in a foggy desert, where they were guided by subtitled instructions to locate, corral, and contain robot agents (targets). These instructions were presented in three modalities: visual-only (subtitled instructions only), auditory-only (spoken instructions only), and visual–auditory (both subtitled and spoken instructions). The instructions were addressed to participants (as relevant subtitles) or their AI teammates (as irrelevant subtitles). Subtitle-level results of eye movements showed that participants primarily focused on the relevant subtitles, as evidenced by more fixations and higher dwell time percentages. Moreover, the word-level results indicate that participants showed lower skipping rates, more fixations, and higher dwell time percentages on words loaded with immediate action-related information, especially in the absence of audio. No significant differences were found in player performance across conditions. The findings of this study contribute to a better understanding of subtitle processing in video games and, more broadly, text processing in multimedia contexts. Implications for future research on digital literacy and computer-mediated text processing are discussed.
Open Access Article
Entropy as a Lens: Exploring Visual Behavior Patterns in Architects
by
Renate Delucchi Danhier, Barbara Mertins, Holger Mertins and Gerold Schneider
J. Eye Mov. Res. 2025, 18(5), 43; https://doi.org/10.3390/jemr18050043 - 16 Sep 2025
Abstract
This study examines how architectural expertise shapes visual perception, extending the “Seeing for Speaking” hypothesis into a non-linguistic domain. Specifically, it investigates whether architectural training influences unconscious visual processing of architectural content. Using eye-tracking, 48 architects and 48 laypeople freely viewed 15 still images of built, mixed, and natural environments. Visual behavior was analyzed using Shannon’s entropy scores based on dwell times within 16 × 16 grids during the first six seconds of viewing. Results revealed distinct visual attention patterns between groups. Architects showed lower entropy, indicating more focused and systematic gaze behavior, and their attention was consistently drawn to built structures. In contrast, laypeople exhibited more variable and less organized scanning patterns, with greater individual differences. Moreover, architects demonstrated higher intra-group similarity in their gaze behavior, suggesting a shared attentional schema shaped by professional training. These findings highlight that domain-specific expertise deeply influences perceptual processing, resulting in systematic and efficient attention allocation. Entropy-based metrics proved effective in capturing these differences, offering a robust tool for quantifying expert vs. non-expert visual strategies in architectural cognition. The visual patterns exhibited by architects are interpreted to reflect a “Grammar of Space”, i.e., a structured way of visually parsing spatial elements.
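The entropy metric described in this abstract (Shannon entropy of dwell times over a 16 × 16 grid) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function name, the normalization of dwell times into probabilities, and the choice of bits (log base 2) are assumptions.

```python
import math

def dwell_entropy(dwell_times):
    """Shannon entropy (in bits) of a gaze dwell-time distribution.

    dwell_times: flat list of per-cell dwell durations, e.g. 16*16 = 256
    cells. Lower entropy indicates more focused, systematic gaze (as
    reported for architects); higher entropy indicates dispersed scanning.
    """
    total = sum(dwell_times)
    if total == 0:
        return 0.0
    # Normalize dwell times to a probability distribution over cells,
    # skipping empty cells (p * log2(p) -> 0 as p -> 0).
    probs = [t / total for t in dwell_times if t > 0]
    return sum(-p * math.log2(p) for p in probs)

# All gaze in one cell: minimal entropy.
focused = [6.0] + [0.0] * 255
# Gaze spread evenly over all 256 cells: maximal entropy, log2(256) = 8 bits.
dispersed = [6.0 / 256] * 256

print(dwell_entropy(focused))    # 0.0
print(dwell_entropy(dispersed))  # 8.0
```

Comparing a participant's entropy against these two extremes gives a simple scale for how concentrated the first seconds of viewing were.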
Open Access Article
How Visual Style Shapes Tourism Advertising Effectiveness: Eye-Tracking Insights into Traditional and Modern Chinese Ink Paintings
by
Fulong Liu, Xiheng Shao, Zhengwei Tao, Nurul Hanim Md Romainoor and Mohammad Khizal Mohamed Saat
J. Eye Mov. Res. 2025, 18(5), 42; https://doi.org/10.3390/jemr18050042 - 12 Sep 2025
Abstract
This study investigates how traditional versus modern Chinese ink painting styles in tourism advertisements affect viewers’ visual attention, aesthetic evaluations, and tourism intentions. Using eye-tracking experiments combined with surveys and interviews, the researchers conducted a mixed-design experiment with 80 Chinese college students. Results indicate that traditional ink-style advertisements attracted longer total fixation durations, higher aesthetic evaluations, and stronger cultural resonance in natural landscape contexts, while modern ink-style advertisements captured initial attention more quickly and performed better aesthetically in urban settings. Qualitative analyses further revealed cultural familiarity and aesthetic resonance underpinning preferences for the traditional style, whereas the modern style mainly attracted attention through novelty and creativity. These findings extend Cultural Schema Theory and the aesthetic processing model within advertising research, and suggest a practical strategy for tourism advertising: matching visual style to destination type and audience characteristics to enhance promotional effectiveness.
Open Access Article
Eye Movement Impairment in Women Undergoing Chemotherapy
by
Milena Edite Casé de Oliveira, José Marcos Nascimento de Sousa, Gerlane Da Silva Vieira Torres, Ruanna Priscila Silva de Brito, Nathalia dos Santos Negreiros, Bianca da Nóbrega Tomaz Trombetta, Kedma Anne Lima Gomes Alexandrino, Waleska Fernanda Souto Nóbrega, Letícia Lorena Soares Silva Polimeni, Catarina Cavalcanti Braga, Cristiane Maria Silva de Souza Lima, Thiago P. Fernandes and Natanael Antonio dos Santos
J. Eye Mov. Res. 2025, 18(5), 41; https://doi.org/10.3390/jemr18050041 - 11 Sep 2025
Abstract
The assessment of visual attention is important in visual and cognitive neuroscience, providing objective measures for researchers and clinicians. This study investigated the effects of chemotherapy on eye movements in women with breast cancer. Twelve women with breast cancer and twelve healthy controls aged between 33 and 59 years completed a visual search task, identifying an Arabic number among 79 alphabetic letters. Test duration, fixation duration, total fixation duration, and total visit duration were recorded. Compared to healthy controls, women with breast cancer exhibited significantly longer mean fixation duration [t = 4.54, p < 0.001]; mean total fixation duration [t = 2.41, p < 0.02]; mean total visit duration [t = 2.05, p < 0.05]; and total test time [t = 2.32, p < 0.03]. Additionally, positive correlations were observed between the number of chemotherapy cycles and the eye-tracking parameters. These results suggest the possibility of slower information processing in women experiencing acute effects of chemotherapy. However, further studies are needed to clarify this relationship.
Open Access Article
Interpretable Quantification of Scene-Induced Driver Visual Load: Linking Eye-Tracking Behavior to Road Scene Features via SHAP Analysis
by
Jie Ni, Yifu Shao, Yiwen Guo and Yongqi Gu
J. Eye Mov. Res. 2025, 18(5), 40; https://doi.org/10.3390/jemr18050040 - 9 Sep 2025
Abstract
Road traffic accidents remain a major global public health concern, where complex urban driving environments significantly elevate drivers’ visual load and accident risks. Unlike existing research that adopts a macro perspective by considering multiple factors such as the driver, vehicle, and road, this study focuses on the driver’s visual load, a key safety factor, and its direct source: the driver’s visual environment. We developed an interpretable framework combining computer vision and machine learning to quantify how road scene features influence oculomotor behavior and scene-induced visual load, establishing a complete and interpretable link between scene features, eye movement behavior, and visual load. Using the DR(eye)VE dataset, visual attention demand is established through occlusion experiments and confirmed to correlate with eye-tracking metrics. K-means clustering is applied to classify visual load levels based on discriminative oculomotor features, while semantic segmentation extracts quantifiable road scene features such as the Green Visibility Index, Sky Visibility Index, and Street Canyon Enclosure. Among multiple machine learning models (Random Forest, AdaBoost, XGBoost, and SVM), XGBoost demonstrates optimal performance in visual load detection. SHAP analysis reveals critical thresholds: the probability of high visual load increases when pole density exceeds 0.08%, signage surpasses 0.55%, or buildings account for more than 14%; while blink duration/rate decrease when street enclosure exceeds 38% or road congestion goes beyond 25%, indicating elevated visual load. The proposed framework provides actionable insights for urban design and driver assistance systems, advancing traffic safety through data-driven optimization of road environments.
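The SHAP thresholds reported in this abstract can be restated as a simple screening rule. This toy function only illustrates the reported cut-offs; it is not the paper's XGBoost model, and the function name and the percent-valued inputs (share of segmented scene pixels) are assumptions.

```python
def high_visual_load_risk(pole_density, signage, buildings):
    """Illustrative screening rule from the reported SHAP thresholds.

    All inputs are scene-feature proportions in percent. Returns True when
    any feature crosses its threshold, signalling an elevated probability
    of high driver visual load in that scene.
    """
    return (
        pole_density > 0.08   # pole density above 0.08% of scene pixels
        or signage > 0.55     # signage above 0.55%
        or buildings > 14.0   # buildings above 14%
    )

print(high_visual_load_risk(0.05, 0.30, 10.0))  # False: all below thresholds
print(high_visual_load_risk(0.05, 0.30, 20.0))  # True: buildings exceed 14%
```

In practice such thresholds would come out of a SHAP dependence analysis of the trained model rather than being hard-coded, but the rule form makes the reported findings easy to check against new scenes.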
Special Issues
Special Issue in JEMR: Eye Tracking and Visualization
Guest Editor: Michael Burch. Deadline: 20 November 2025
Special Issue in JEMR: New Horizons and Recent Advances in Eye-Tracking Technology
Guest Editor: Lee Friedman. Deadline: 20 December 2025
Special Issue in JEMR: Eye Movements in Reading and Related Difficulties
Guest Editors: Argyro Fella, Timothy C. Papadopoulos, Kevin B. Paterson, Daniela Zambarbieri. Deadline: 30 June 2026