Journal Description
Journal of Eye Movement Research
(JEMR) is an international, peer-reviewed, open access journal covering all aspects of oculomotor functioning, including the methodology of eye movement recording, neurophysiological and cognitive models, attention, and reading, as well as applications in neurology, ergonomics, media research, and other areas. It is published bimonthly online by MDPI (from Volume 18, Issue 1, 2025).
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, SCIE (Web of Science), PubMed, PMC, and other databases.
- Journal Rank: JCR - Q1 (Ophthalmology) / CiteScore - Q2 (Ophthalmology)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 39.9 days after submission; acceptance to publication takes 5.8 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: APC discount vouchers, optional signed peer review, and reviewer names published annually in the journal.
Impact Factor: 2.8 (2024); 5-Year Impact Factor: 2.8 (2024)
Latest Articles
Influence of Time Pressure on Successive Visual Searches
J. Eye Mov. Res. 2025, 18(4), 31; https://doi.org/10.3390/jemr18040031 - 17 Jul 2025
Abstract
In the current eye-tracking experiment, we explored the effects of time pressure on visual search performance and oculomotor behavior. Participants performed two consecutive time-pressured searches for a T-shaped target among L-shaped distractors in two separate displays of fifteen items, with the option to self-interrupt the first search (Search 1) to proceed to the second (Search 2). Our results showed that participants maintained high search accuracy during Search 1 across all conditions, but performance noticeably declined during Search 2 with increasing time pressure. Time pressure also led to decreased numbers of fixations and faster response times overall. When both targets were acquired, fixation durations were longer in Search 2 than in Search 1, while saccade amplitudes were shorter in Search 2. Our findings suggest that time pressure leads to the first target being prioritized when targets possess equal value, emphasizing the challenges of optimizing performance in time-sensitive tasks.
Full article
Open Access Article
Eye Movement Patterns as Indicators of Text Complexity in Arabic: A Comparative Analysis of Classical and Modern Standard Arabic
by Hend Al-Khalifa
J. Eye Mov. Res. 2025, 18(4), 30; https://doi.org/10.3390/jemr18040030 - 16 Jul 2025
Abstract
This study investigates eye movement patterns as indicators of text complexity in Arabic, focusing on the comparative analysis of Classical Arabic (CA) and Modern Standard Arabic (MSA) text. Using the AraEyebility corpus, which contains eye-tracking data from readers of both CA and MSA text, we examined differences in fixation patterns, regression rates, and overall reading behavior between these two forms of Arabic. Our analyses revealed significant differences in eye movement metrics between CA and MSA text, with CA text consistently eliciting more fixations, longer fixation durations, and more frequent revisits. Multivariate analysis confirmed that language type has a significant combined effect on eye movement patterns. Additionally, we identified different relationships between text features and eye movements for CA versus MSA text, with sentence-level features emerging as significant predictors across both language types. Notably, we observed an interaction between language type and readability level, with readers showing less sensitivity to readability variations in CA text compared to MSA text. These findings contribute to our understanding of how historical language evolution affects reading behavior and have practical implications for Arabic language education, publishing, and assessment. The study demonstrates the value of eye movement analysis for understanding text complexity in Arabic and highlights the importance of considering language-specific features when studying reading processes.
Full article

Open Access Article
Through the Eyes of the Viewer: The Cognitive Load of LLM-Generated vs. Professional Arabic Subtitles
by Hussein Abu-Rayyash and Isabel Lacruz
J. Eye Mov. Res. 2025, 18(4), 29; https://doi.org/10.3390/jemr18040029 - 14 Jul 2025
Abstract
As streaming platforms adopt artificial intelligence (AI)-powered subtitle systems to satisfy global demand for instant localization, the cognitive impact of these automated translations on viewers remains largely unexplored. This study used a web-based eye-tracking protocol to compare the cognitive load that GPT-4o-generated Arabic subtitles impose with that of professional human translations among 82 native Arabic speakers who viewed a 10 min episode (“Syria”) from the BBC comedy drama series State of the Union. Participants were randomly assigned to view the same episode with either professionally produced Arabic subtitles (Amazon Prime’s human translations) or machine-generated GPT-4o Arabic subtitles. In a between-subjects design, with English proficiency entered as a moderator, we collected fixation count, mean fixation duration, gaze distribution, and attention concentration (K-coefficient) as indices of cognitive processing. GPT-4o subtitles raised cognitive load on every metric; viewers produced 48% more fixations in the subtitle area, recorded 56% longer fixation durations, and spent 81.5% more time reading the automated subtitles than the professional subtitles. The subtitle area K-coefficient tripled (0.10 to 0.30), a shift from ambient scanning to focal processing. Viewers with advanced English proficiency showed the largest disruptions, which indicates that higher linguistic competence increases sensitivity to subtle translation shortcomings. These results challenge claims that large language models (LLMs) lighten viewer burden; despite fluent surface quality, GPT-4o subtitles demand far more cognitive resources than expert human subtitles and therefore reinforce the need for human oversight in audiovisual translation (AVT) and media accessibility.
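The attention-concentration measure used in this abstract, the K-coefficient, contrasts each fixation's duration with the amplitude of the saccade that follows it. A minimal sketch, assuming the standard definition (z-scores taken over the whole recording; positive K = focal processing, negative K = ambient scanning) — not the authors' actual analysis code:

```python
import numpy as np

def k_series(durations_ms, next_saccade_amplitudes):
    """Per-fixation K values: K_i = z(d_i) - z(a_{i+1}), where d_i is the
    duration of fixation i and a_{i+1} the amplitude of the saccade that
    follows it. Long fixations followed by short saccades give K > 0
    (focal processing); short fixations with long saccades give K < 0
    (ambient scanning)."""
    d = np.asarray(durations_ms, dtype=float)
    a = np.asarray(next_saccade_amplitudes, dtype=float)
    n = min(len(d), len(a))  # the last fixation has no following saccade
    d, a = d[:n], a[:n]
    zd = (d - d.mean()) / d.std()
    za = (a - a.mean()) / a.std()
    return zd - za
```

An area-of-interest K (such as the subtitle-area values reported above) would then be the mean of K_i over the fixations that land inside that AOI.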
Full article

Open Access Article
GMM-HMM-Based Eye Movement Classification for Efficient and Intuitive Dynamic Human–Computer Interaction Systems
by Jiacheng Xie, Rongfeng Chen, Ziming Liu, Jiahao Zhou, Juan Hou and Zengxiang Zhou
J. Eye Mov. Res. 2025, 18(4), 28; https://doi.org/10.3390/jemr18040028 - 9 Jul 2025
Abstract
Human–computer interaction (HCI) plays a crucial role across various fields, with eye-tracking technology emerging as a key enabler for intuitive and dynamic control in assistive systems like Assistive Robotic Arms (ARAs). By precisely tracking eye movements, this technology allows for more natural user interaction. However, current systems primarily rely on the single gaze-dependent interaction method, which leads to the “Midas Touch” problem. This highlights the need for real-time eye movement classification in dynamic interactions to ensure accurate and efficient control. This paper proposes a novel Gaussian Mixture Model–Hidden Markov Model (GMM-HMM) classification algorithm aimed at overcoming the limitations of traditional methods in dynamic human–robot interactions. By incorporating sum of squared error (SSE)-based feature extraction and hierarchical training, the proposed algorithm achieves a classification accuracy of 94.39%, significantly outperforming existing approaches. Furthermore, it is integrated with a robotic arm system, enabling gaze trajectory-based dynamic path planning, which reduces the average path planning time to 2.97 milliseconds. The experimental results demonstrate the effectiveness of this approach, offering an efficient and intuitive solution for human–robot interaction in dynamic environments. This work provides a robust framework for future assistive robotic systems, improving interaction intuitiveness and efficiency in complex real-world scenarios.
Full article

Open Access Article
eyeNotate: Interactive Annotation of Mobile Eye Tracking Data Based on Few-Shot Image Classification
by Michael Barz, Omair Shahzad Bhatti, Hasan Md Tusfiqur Alam, Duy Minh Ho Nguyen, Kristin Altmeyer, Sarah Malone and Daniel Sonntag
J. Eye Mov. Res. 2025, 18(4), 27; https://doi.org/10.3390/jemr18040027 - 7 Jul 2025
Abstract
Mobile eye tracking is an important tool in psychology and human-centered interaction design for understanding how people process visual scenes and user interfaces. However, analyzing recordings from head-mounted eye trackers, which typically include an egocentric video of the scene and a gaze signal, is a time-consuming and largely manual process. To address this challenge, we develop eyeNotate, a web-based annotation tool that enables semi-automatic data annotation and learns to improve from corrective user feedback. Users can manually map fixation events to areas of interest (AOIs) in a video-editing-style interface (baseline version). Further, our tool can generate fixation-to-AOI mapping suggestions based on a few-shot image classification model (IML-support version). We conduct an expert study with trained annotators (n = 3) to compare the baseline and IML-support versions. We measure the perceived usability, annotations’ validity and reliability, and efficiency during a data annotation task. We asked our participants to re-annotate data from a single individual using an existing dataset (n = 48). Further, we conducted a semi-structured interview to understand how participants used the provided IML features and assessed our design decisions. In a post hoc experiment, we investigate the performance of three image classification models in annotating data of the remaining 47 individuals.
Full article

Open Access Article
Efficiency Analysis of Disruptive Color in Military Camouflage Patterns Based on Eye Movement Data
by Xin Yang, Su Yan, Bentian Hao, Weidong Xu and Haibao Yu
J. Eye Mov. Res. 2025, 18(4), 26; https://doi.org/10.3390/jemr18040026 - 2 Jul 2025
Abstract
Disruptive color on animals’ bodies can reduce the risk of being caught. This study explores the camouflaging effect of disruptive color when applied to military targets. Disruptive and non-disruptive color patterns were placed on the target surface to form simulation materials. Then, the simulation target was set in woodland-, grassland-, and desert-type background images. The detectability of the target in the background was obtained by collecting eye movement indicators after the observer observed the background targets. The influence of background type (local and global), camouflage pattern type, and target viewing angle on the disruptive-color camouflage pattern was investigated. This study aims to design eye movement observation experiments to statistically analyze the indicators of first discovery time, discovery frequency, and first-scan amplitude in the target area. The experimental results show that the first discovery time of mixed disruptive-color targets in a forest background was significantly higher than that of non-mixed disruptive-color targets (t = 2.54, p = 0.039), and the click frequency was reduced by 15% (p < 0.05), indicating that mixed disruptive color has better camouflage effectiveness in complex backgrounds. In addition, the camouflage effect of mixed disruptive colors on large-scale targets (viewing angle ≥ 30°) is significantly improved (F = 10.113, p = 0.01), providing theoretical support for close-range reconnaissance camouflage design.
Full article

Open Access Article
Improving Reading and Eye Movement Control in Readers with Oculomotor and Visuo-Attentional Deficits
by Stéphanie Ducrot, Bernard Lété, Marie Vernet, Delphine Massendari and Jérémy Danna
J. Eye Mov. Res. 2025, 18(4), 25; https://doi.org/10.3390/jemr18040025 - 23 Jun 2025
Abstract
The initial saccade of experienced readers tends to land halfway between the beginning and the middle of words, at a position originally referred to as the preferred viewing location (PVL). This study investigated whether a simple physical manipulation—namely, increasing the saliency (brightness or color) of the letter located at the PVL—can positively influence saccadic targeting strategies and optimize reading performance. An eye-movement experiment was conducted with 25 adults and 24 second graders performing a lexical decision task. Results showed that this manipulation had no effect on initial landing positions (ILPs) in proficient readers, who already landed most frequently at the PVL, suggesting that PVL saliency is irrelevant once automatized saccade targeting routines are established. In contrast, the manipulation shifted the peak of the landing site distribution toward the PVL for a cluster of readers with immature saccadic strategies (with low reading-level scores and ILPs close to the beginning of words), but only in the brightness condition, and had a more compelling effect in a cluster with oculomotor instability (with flattened and diffuse landing position curves along with oculomotor and visuo-attentional deficits). These findings suggest that guiding the eyes toward the PVL may offer a novel way to improve reading efficiency, particularly for individuals with oculomotor and visuo-attentional difficulties.
Full article

Open Access Article
Is the Prosodic Structure of Texts Reflected in Silent Reading? An Eye-Tracking Corpus Analysis
by Marijan Palmović and Kristina Cergol
J. Eye Mov. Res. 2025, 18(3), 24; https://doi.org/10.3390/jemr18030024 - 18 Jun 2025
Abstract
The aim of this study was to test the Implicit Prosody Hypothesis using a reading corpus, i.e., a text without experimental manipulation labelled with eye-tracking parameters. For this purpose, a bilingual Croatian–English reading corpus was analysed. In prosodic terms, Croatian and English are at the opposite ends of the spectrum: English is considered a stress-timed language, while Croatian is a syllable-timed language. This difference served as a kind of experimental control in this study on natural reading. The results show that readers’ eyes lingered more on stressed syllables than on the arrangement of stressed and unstressed syllables for both languages. This is especially pronounced for English, a language with greater differences in the duration of stressed and unstressed syllables. This study provides indirect evidence in favour of the Implicit Prosody Hypothesis, i.e., the idea that readers are guided by their inner voice with its suprasegmental features when reading silently. The differences between the languages can be traced back to the typological differences in stress in English and Croatian.
Full article

Open Access Review
Trends and Transformations: A Bibliometric Analysis of Eye-Tracking Research in Educational Technology
by Liqi Lai, Baohua Su and Linwei She
J. Eye Mov. Res. 2025, 18(3), 23; https://doi.org/10.3390/jemr18030023 - 16 Jun 2025
Abstract
This study employs bibliometric analysis to provide a comprehensive review of eye-tracking research in the field of educational technology. The study analyzed 374 relevant papers published in 19 high-quality journals from the Web of Science core collection between 2001 and 1 June 2024. The findings reveal research trends, hot topics, and future directions in this field. The findings indicate an upward trend in the application of eye-tracking technology in educational technology, with a significant increase noted after 2014. China, the United States, Germany, and the Netherlands dominate research in this area, contributing to a substantial amount of high-quality research output. Keyword co-occurrence analysis reveals that terms such as “attention,” “cognitive load,” “information,” and “comprehension” are currently hot topics of research. Burst keyword analysis further reveals the evolution of research trends. These trends have shifted from an initial focus on information processing and application studies to a growing emphasis on learner understanding and behavior analysis, ultimately concentrating on learning outcomes and the exploration of emerging technology applications. This study not only provides researchers in the field of educational technology with a comprehensive understanding of the current state of eye-tracking research but also points to future research directions, particularly in optimizing instructional design, enhancing learning outcomes, and exploring the applications of emerging educational technologies using eye-tracking technology.
Full article

Open Access Article
Learners’ Perception of Scientific Text Layouts Design Using Eye-Tracking
by Elizabeth Wianto, Hapnes Toba and Maya Malinda
J. Eye Mov. Res. 2025, 18(3), 22; https://doi.org/10.3390/jemr18030022 - 13 Jun 2025
Cited by 1
Abstract
Lifelong learning, particularly in adult education, has gained considerable attention due to rapid lifestyle changes, including pandemic-induced lockdowns. This research targets adult learners returning to higher education after gap years, emphasizing their preference for technology with clear, practical benefits. However, many still need help operating digital media. This research aims to identify best practices for sustainably providing digital scientific materials to students by examining respondents’ tendencies in viewing journal article pages and scientific posters, with a focus on layout designs that include both textual and schematic elements. The research questions focus on (1) identifying the characteristics of Areas of Interest (AoI) that effectively attract learners’ attention and (2) determining the preferred characteristics for each learner group. Around 110 respondents were selected during the experiments using web tracking technology. Utilizing this web-based eye-tracking tool, we propose eight activities to detect learners’ perceptions of text-based learning object materials. The fact that first language significantly shapes learners’ attention was confirmed by time-leap analysis and AoI distances showing they focus more on familiar elements. While adult learners exhibit deeper engagement with scientific content and sustained concentration during reading, their unique preferences toward digital learning materials result in varied focus patterns, particularly in initial interest and time spent on tasks. Thus, it is recommended that lecturers deliver digital content for adult learners in a textual format or by placing the important parts of posters in the center.
Full article

Open Access Article
Attention and Outcomes Across Learning Conditions in L2 Vocabulary Acquisition: Evidence from Eye-Tracking
by Yiyang Yang and Hulin Ren
J. Eye Mov. Res. 2025, 18(3), 21; https://doi.org/10.3390/jemr18030021 - 13 Jun 2025
Abstract
The role of attention has been shown to be essential in second language (L2) learning. However, the impact of different learning conditions on attention and learning outcomes remains underexplored, particularly through the application of eye-tracking technology. This study aims to evaluate the effect of intentional learning conditions (i.e., data-driven learning) on vocabulary learning and attentional allocation. Twenty-six intermediate English L2 learners participated in the study to learn the usage of four artificial attributive adjectives in noun phrases (NPs). Learning outcomes were analysed to assess the types of knowledge developed, shedding light on the role of attention and the conscious processing of word usage. Eye-tracking data, collected using an EyeLink 1000 Plus, revealed gaze patterns and the allocation of attentional resources when participants applied the learned usage of adjectives. The results indicate that fixation stability and regression movements differ significantly under intentional learning conditions. Post-test results also indicate a shift in attention from the target adjectives to the associated nouns. These findings underscore the critical role of attention and highlight the influence of learning conditions on L2 vocabulary learning, providing practical implications and empirical validation for L2 educators and researchers aiming to enhance vocabulary instruction through intentional learning strategies.
Full article

Open Access Article
Binocular and Fellow Eye Acuity Deficits in Amblyopia: Impact of Fixation Instability and Sensory Factors
by Yulia Haraguchi, Gokce Busra Cakir, Aasef Shaikh and Fatema Ghasia
J. Eye Mov. Res. 2025, 18(3), 20; https://doi.org/10.3390/jemr18030020 - 3 Jun 2025
Abstract
Amblyopia, a neurodevelopmental disorder, is commonly assessed through amblyopic eye visual acuity (VA) deficits, but recent studies also highlight abnormalities in the fellow eye. This study quantified binocular and fellow/dominant eye VA in individuals with amblyopia and strabismus without amblyopia and examined factors influencing these measures, including fixation eye movement (FEM) abnormalities. Identifying which subsets of patients—such as those with nystagmus, concurrent strabismus, or greater fixation instability—exhibit more pronounced deficits in binocular visual acuity and binocular summation can enhance clinical decision-making by enabling tailored interventions and aiding patient counseling. Sixty-eight amblyopic, seventeen strabismic without amblyopia, and twenty-four control subjects were assessed using an adaptive psychophysical staircase procedure and high-resolution video-oculography to evaluate FEMs and fixation instability (FI). Binocular and fellow eye VA were significantly lower in amblyopia, regardless of type or nystagmus presence, whereas binocular and dominant eye VA in strabismus without amblyopia did not differ from the controls. Despite reduced binocular acuity, amblyopic and strabismic subjects exhibited binocular summation, with binocular VA exceeding fellow/dominant eye VA. Reduced binocular VA correlated with greater fellow eye VA deficits, diminished binocular summation, and increased FI in the amblyopic eye. Fellow eye VA deficits were linked to greater amblyopic eye VA deficits, an increased degree of anisometropia, higher FI, and stronger nystagmus correlation. These findings suggest amblyopia affects both visual sensory and motor systems, impacting binocular function and fixation stability, with potential consequences for everyday visuomotor tasks like reading.
Full article

Open Access Article
Research on Flight Training Optimization with Instrument Failure Based on Eye Movement Data
by Jiwen Tai, Yu Qian, Zhili Song, Xiuyi Li, Ziang Qu and Chengzhi Yang
J. Eye Mov. Res. 2025, 18(3), 19; https://doi.org/10.3390/jemr18030019 - 23 May 2025
Abstract
To improve the quality of flight training in instrument failure scenarios, eye movement data were collected from flight instructors during climbing, descending, and turning flights when the primary attitude direction indicator failed. The performance data of the best-performing instructors were selected to produce eye movement tutorials. These tutorials were used to conduct eye movement training for the experimental group of flight trainees, while the control group received traditional training. The performance and eye movement data of the two groups of flight trainees were then compared and analyzed. The results showed that flight trainees who received eye movement training performed better when facing instrument failure. Specifically, the deviations in the rate of descent, heading during the descent, airspeed during the turn, and slope during the turn were significantly different from those of the control group. Compared to the control group, the experimental group had a significantly lower fixation frequency on the failed instrument during the turn. Additionally, the average glance duration on the failed instrument during the climb and turn was significantly reduced. The study demonstrated the effectiveness of eye movement training in improving the quality of flight training in instrument failure scenarios.
Full article

Open Access Article
Eye-Tracking Algorithm for Early Glaucoma Detection: Analysis of Saccadic Eye Movements in Primary Open-Angle Glaucoma
by Cansu Yuksel Elgin
J. Eye Mov. Res. 2025, 18(3), 18; https://doi.org/10.3390/jemr18030018 - 19 May 2025
Abstract
Glaucoma remains a leading cause of irreversible blindness worldwide, with early detection crucial for preventing vision loss. This study developed and validated a novel eye-tracking algorithm to detect oculomotor abnormalities in primary open-angle glaucoma (POAG). We conducted a case–control study (March–June 2021), recruiting 16 patients with moderate POAG, 16 with preperimetric POAG, and 16 age-matched controls. The participants underwent a comprehensive ophthalmic examination and eye movement recording using a high-resolution infrared tracker during two tasks: saccades to static targets and saccades to moving targets. The patients with POAG exhibited a significantly increased saccadic latency and reduced accuracy compared to the controls, with more pronounced differences in the moving target task. Notably, preperimetric POAG patients showed significant abnormalities despite having normal visual fields based on standard perimetry. Our machine learning algorithm incorporating multiple saccadic parameters achieved an excellent discriminative ability between glaucomatous and healthy subjects (AUC = 0.92), with particularly strong performance for moderate POAG (AUC = 0.97) and good performance for preperimetric POAG (AUC = 0.87). These findings suggest that eye movement analysis may serve as a sensitive biomarker for early glaucomatous damage, potentially enabling earlier intervention and improved visual outcomes.
Full article

Open Access Article
On the Validity and Benefit of Manual and Automated Drift Correction in Reading Tasks
by Naser Al Madi
J. Eye Mov. Res. 2025, 18(3), 17; https://doi.org/10.3390/jemr18030017 - 9 May 2025
Abstract
Drift represents a common distortion that affects the position of fixations in eye tracking data. While manual correction is considered very accurate, it is also subjective and time-consuming. On the other hand, automated correction is fast and objective but considered less accurate. An objective comparison of the accuracy of manual and automated correction has not been conducted before, and the extent of subjectivity in manual correction has not been fully quantified. In this paper, we compare the accuracy of manual and automated correction of eye tracking data in reading tasks through a novel approach that relies on synthetic data with known ground truth. Moreover, we quantify the subjectivity in manual human correction with real eye tracking data. Our results show that expert human correction is significantly more accurate than automated algorithms, yet novice human correctors are on par with the best automated algorithms. In addition, we found that human correctors show excellent agreement in their correction, challenging the notion that manual correction is “highly subjective”. Our findings provide unique insights, quantifying the benefits of manual and automated correction.
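To illustrate what automated drift correction does in a reading task, the simplest family of algorithms keeps each fixation's horizontal position and snaps its vertical position to the nearest line of text. A minimal sketch of this naive approach (a hypothetical helper for illustration, not one of the paper's evaluated algorithms):

```python
import numpy as np

def attach_to_lines(fix_xy, line_ys):
    """Naive drift correction: keep each fixation's x coordinate, but
    snap its y coordinate to the nearest text-line baseline."""
    fix_xy = np.asarray(fix_xy, dtype=float)
    line_ys = np.asarray(line_ys, dtype=float)
    corrected = fix_xy.copy()
    for i, (_, y) in enumerate(fix_xy):
        # index of the baseline closest to this fixation's y position
        corrected[i, 1] = line_ys[np.argmin(np.abs(line_ys - y))]
    return corrected
```

More sophisticated correctors also exploit reading order and regressions when assigning fixations to lines, which is where the accuracy differences between algorithms measured here arise.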
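Automated drift correction in reading tasks typically works by reassigning each fixation to the nearest known text line. A minimal sketch of that idea, assuming fixations as (x, y) tuples and a list of line y-coordinates (the function name and data layout are assumptions, not the paper's implementation):

```python
def snap_fixations_to_lines(fixations, line_ys):
    # Naive drift correction: keep each fixation's x, but replace its y
    # with the y-coordinate of the nearest text line.
    return [(x, min(line_ys, key=lambda ly: abs(ly - y))) for x, y in fixations]
```

Real algorithms weigh additional cues (reading order, saccade direction), which is where accuracy differences between methods arise.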
Open Access Article
Age-Related Differences in Visual Attention to Heritage Tourism: An Eye-Tracking Study
by Linlin Yuan, Zihao Cao, Yongchun Mao, Mohd Hafizal Mohd Isa and Muhammad Hafeez Abdul Nasir
J. Eye Mov. Res. 2025, 18(3), 16; https://doi.org/10.3390/jemr18030016 - 8 May 2025
Cited by 1
Abstract
With the rising significance of visual marketing, differences in how tourists from various age groups visually engage with tourism promotional materials remain insufficiently studied. This study recruited 48 participants and used a quasi-experimental design combined with eye-tracking technology to examine visual attention, scan path patterns, and their relationship to reading performance among different age groups. Independent t-tests, correlation analyses, and Lag Sequential Analysis were conducted to compare the differences between the two groups. Results indicated that older participants had significantly higher fixation counts and longer fixation durations in text regions than younger participants, as well as higher perceived novelty scores. A positive correlation emerged between text fixation duration and perceived novelty. Additionally, older participants showed greater interaction between text and images, while younger participants exhibited a more linear reading pattern. This study offers empirical insights to optimize tourism promotional materials, highlighting the need for age-specific communication strategies.
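The Lag Sequential Analysis mentioned above starts from the frequency of lag-1 transitions between areas of interest (AOIs), e.g. how often gaze moves from text to image. A minimal sketch of that counting step (AOI labels and the function name are illustrative, not from the study):

```python
from collections import Counter

def transition_counts(aoi_sequence):
    # Count lag-1 transitions between consecutive AOI visits;
    # these frequencies feed the sequential analysis.
    return Counter(zip(aoi_sequence, aoi_sequence[1:]))
```

For example, a text-image-text scan path yields one text-to-image and one image-to-text transition.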
Open Access Article
Natural or Human Landscape Beauty? Quantifying Aesthetic Experience at Longji Terraces Through Eye-Tracking
by Ting Zhang, Yue Jiang, Donghong Liu, Shijie Zeng and Pengjin Sheng
J. Eye Mov. Res. 2025, 18(3), 15; https://doi.org/10.3390/jemr18030015 - 7 May 2025
Abstract
This study investigated tourists’ visual perception, aesthetic experience, and behavioral intentions across four types of landscapes. A total of 353 questionnaires were distributed on-site, and the SOR model was used to examine the visual stimuli and aesthetic responses perceived by tourists, followed by laboratory eye-tracking to observe tourists’ points of attention on the Longji Terraced Fields landscape. Key findings reveal that tourists’ place of residence and revisiting status affect their visual attention, with the most attention given to the intersections of landscape elements. Furthermore, although landscape visual stimuli do not significantly affect the intention response, eye movement parameters are positively correlated with aesthetic experience. The study contributes to understanding tourist aesthetic perception in terraced rice field landscapes and provides a Chinese case for the aesthetic appreciation of terrace landscapes.
Open Access Article
Binocular Advantage in Established Eye–Hand Coordination Tests in Young and Healthy Adults
by Michael Mendes Wefelnberg, Felix Bargstedt, Marcel Lippert and Freerk T. Baumann
J. Eye Mov. Res. 2025, 18(3), 14; https://doi.org/10.3390/jemr18030014 - 7 May 2025
Abstract
Background: Eye–hand coordination (EHC) plays a critical role in daily activities and is affected by monocular vision impairment. This study evaluates existing EHC tests to detect performance decline under monocular conditions, supports the assessment and monitoring of vision rehabilitation, and quantifies the binocular advantage of each test. Methods: A total of 70 healthy sports students (aged 19–30 years) participated in four EHC tests: the Purdue Pegboard Test (PPT), Finger–Nose Test (FNT), Alternate Hand Wall Toss Test (AHWTT), and Loop-Wire Test (LWT). Each participant completed the tests under both binocular and monocular conditions in a randomized order, with assessments conducted by two independent raters. Performance differences, binocular advantage, effect sizes, and interrater reliability were analyzed. Results: Data from 66 participants were included in the final analysis. Significant performance differences between binocular and monocular conditions were observed for the LWT (p < 0.001), AHWTT (p < 0.001), and PPT (p < 0.05), with a clear binocular advantage and large effect sizes (SMD range: 0.583–1.660) for the AHWTT and LWT. Female participants performed better in fine motor tasks, while males demonstrated superior performance in gross motor tasks. Binocular performance averages aligned with published reference values. Conclusions: The findings support the inclusion of the LWT and AHWTT in clinical protocols to assess and assist individuals with monocular vision impairment, particularly following sudden uniocular vision loss. Future research should extend these findings to different age groups and clinically relevant populations.
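The effect sizes above are reported as standardized mean differences (SMD range: 0.583–1.660). A pooled-standard-deviation Cohen's d can be sketched as follows (illustrative only, not the authors' analysis code):

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    # Standardized mean difference: difference of group means divided by
    # the pooled sample standard deviation.
    na, nb = len(group_a), len(group_b)
    var_a, var_b = stdev(group_a) ** 2, stdev(group_b) ** 2
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd
```

By a common rule of thumb, d around 0.8 or above counts as a large effect, which is the range reported for the AHWTT and LWT.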
Open Access Article
Analyzing Gaze During Driving: Should Eye Tracking Be Used to Design Automotive Lighting Functions?
by Korbinian Kunst, David Hoffmann, Anıl Erkan, Karina Lazarova and Tran Quoc Khanh
J. Eye Mov. Res. 2025, 18(2), 13; https://doi.org/10.3390/jemr18020013 - 10 Apr 2025
Abstract
In this work, an experiment was designed in which a defined route consisting of country roads, highways, and urban roads was driven by 20 subjects during the day and at night. The test vehicle was equipped with GPS and a camera, and each subject wore head-mounted eye-tracking glasses to record gaze. Gaze distributions for country roads, highways, urban roads, and specific urban roads were then calculated and compared. The day/night comparisons showed that the horizontal fixation distribution of the subjects was wider during the day than at night over the whole test distance. When the distributions were divided into urban roads, country roads, and highways, the difference was also seen in each road environment. For the vertical distribution, no clear differences between day and night were seen for country roads or urban roads. On the highway, the vertical dispersion was significantly lower, so the gaze was more focused. On highways and urban roads there was a tendency for the gaze to be lowered. The differentiation between a residential road and a main road in the city made it clear that gaze behavior differs significantly depending on the urban area. For example, the residential road led to broader gaze behavior, as the sides of the street were scanned much more often in order to detect potential hazards lurking between parked cars at an early stage. This paper highlights the contradictory results of eye-tracking research and shows that it is not advisable to define a “holy grail” gaze distribution for all environments. Gaze is highly situational and context-dependent, and generalized gaze distributions should not be used to design lighting functions. The research highlights the importance of an adaptive light distribution that adapts to the traffic situation and the environment, always providing good visibility for the driver and allowing natural gaze behavior.
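The horizontal dispersion comparisons above reduce to a spread statistic over gaze x-coordinates, computed per condition (day vs. night) and per road type. A minimal sketch (the function name and the use of population standard deviation are assumptions, not from the paper):

```python
from statistics import pstdev

def horizontal_dispersion(gaze_x_deg):
    # Spread of horizontal gaze positions (e.g. in degrees);
    # a wider spread indicates broader scanning of the scene.
    return pstdev(gaze_x_deg)
```

Comparing this statistic across conditions is what shows, for instance, a wider horizontal distribution during the day than at night.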
Open Access Article
Influence of Visual Coding Based on Attraction Effect on Human–Computer Interface
by Linlin Wang, Yujie Liu, Xinyi Tang, Chengqi Xue and Haiyan Wang
J. Eye Mov. Res. 2025, 18(2), 12; https://doi.org/10.3390/jemr18020012 - 8 Apr 2025
Abstract
Decision-making is often influenced by contextual information on the human–computer interface (HCI), with the attraction effect being a common situational effect in digital nudging. To address the role of visual cognition and coding in the HCI based on the attraction effect, this research takes online websites as experimental scenarios and demonstrates how the coding modes and attributes influence the attraction effect. The results show that similarity-based attributes enhance the attraction effect, whereas difference-based attributes do not modulate its intensity, suggesting that the influence of the relationship driven by coding modes is weaker than that of coding attributes. Additionally, variations in the strength of the attraction effect are observed across different coding modes under the coding attribute of similarity, with color coding having the strongest effect, followed by size, and labels showing the weakest effect. This research analyzes the stimulating conditions of the attraction effect and provides new insights for exploring the relationship between cognition and visual characterization through the attraction effect at the HCI. Furthermore, our findings can help apply the attraction effect more effectively and assist users in making more reasonable decisions.
Special Issues
Special Issue in JEMR: Eye Tracking and Visualization
Guest Editor: Michael Burch. Deadline: 20 November 2025
Special Issue in JEMR: New Horizons and Recent Advances in Eye-Tracking Technology
Guest Editor: Lee Friedman. Deadline: 20 December 2025
Special Issue in JEMR: Eye Movements in Reading and Related Difficulties
Guest Editors: Argyro Fella, Timothy C. Papadopoulos, Kevin B. Paterson, Daniela Zambarbieri. Deadline: 30 June 2026