J. Eye Mov. Res., Volume 18, Issue 6 (December 2025) – 18 articles

Cover Story: BEACH-Gaze is an open-source, GUI-based platform that transforms eye-tracking research by making advanced gaze analytics accessible to all. It integrates descriptive and predictive analytics with machine learning, enabling real-time simulations and sequential gaze segmentation without requiring advanced technical expertise. In the era of AI and data science, BEACH-Gaze turns raw gaze into actionable insights that enhance decision-making, accelerate scientific discovery, and help democratize intelligent gaze analytics across disciplines and real-world application domains.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
17 pages, 2284 KB  
Article
Stimulus Center Bias Persists Irrespective of Its Position on the Display
by Rotem Mairon and Ohad Ben-Shahar
J. Eye Mov. Res. 2025, 18(6), 77; https://doi.org/10.3390/jemr18060077 - 16 Dec 2025
Abstract
Since the earliest studies on human eye movements, it has been repeatedly demonstrated that observers fixate the center of visual stimuli more than their periphery, regardless of visual content. Subsequent research suggested only a small effect of typical biases in experimental setups, such as the observer's position relative to the screen or the relative location of the cue marker. While comparative studies of the screen center vs. the stimulus center revealed that both conspire in the process, much of the prior art is still confounded by experimental details that leave the origins of the center bias debatable. We thus propose methodological novelties to rigorously test the effect of the stimulus center, isolated from other factors. In particular, eye movements were tracked in a free-viewing experiment in which stimuli were presented at a wide range of horizontal displacements from a counterbalanced cue marker in a wide visual field. Stimuli spanned diverse natural scene images to allow inherent biases to surface in the pooled data. Various analyses of the first few fixations show a robust bias toward the center of the stimulus, independent of its position on the display, but affected by its distance to the cue marker. Center bias is thus a tangible phenomenon related to the stimulus.
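A minimal sketch of the core comparison the abstract describes, i.e., measuring each fixation's distance to the stimulus center versus the display center; the column names and display size are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
import pandas as pd

def center_bias_distances(fix: pd.DataFrame,
                          display_w: float = 1920.0,
                          display_h: float = 1080.0) -> pd.DataFrame:
    """Add per-fixation distances to the stimulus center and the display center."""
    out = fix.copy()
    # fx, fy: fixation coordinates; stim_cx, stim_cy: stimulus center (hypothetical columns)
    out["d_stimulus"] = np.hypot(out["fx"] - out["stim_cx"], out["fy"] - out["stim_cy"])
    out["d_display"] = np.hypot(out["fx"] - display_w / 2, out["fy"] - display_h / 2)
    return out
```

Pooled over trials, a stimulus-anchored bias would show up as d_stimulus staying small even when the stimulus is displaced far from the screen center.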

10 pages, 1421 KB  
Article
The Role of Spontaneous Eye Blinks in Temporal Perception: An Eye Tracking Study
by Domenica Abad-Malo, Omar Alvarado-Cando and Hakan Karsilar
J. Eye Mov. Res. 2025, 18(6), 76; https://doi.org/10.3390/jemr18060076 - 16 Dec 2025
Abstract
Our interaction with the world depends on our ability to process temporal information, which is a key component of human cognition that directly impacts decision-making, planning, and prediction of events. Visual information plays a crucial role in shaping our subjective perception of time, and even brief interruptions, such as those caused by eye blinks, can disrupt the continuity of our perception and alter how we estimate durations. The purpose of this study is to investigate the relationship between spontaneous eye blinks and time perception using a temporal bisection task. In particular, we focus on how blinks preceding stimulus presentation impact the perceived duration of that stimulus. The results of fitting a generalized linear mixed-effects model revealed that blinking can indeed influence duration estimation. Specifically, the presence of a single blink before stimulus presentation had a significant effect on subjective time perception; participants were more likely to categorize a duration as short compared to when they did not blink. In contrast, two or more blinks before stimulus presentation did not have a significant effect compared to not blinking. This study further elucidates the complex interaction between the momentary suppression of visual input and the perception of time.
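The abstract's analysis (binary short/long responses with repeated measures per participant) maps onto a binomial mixed model; below is a sketch using statsmodels with hypothetical column names, noting that the authors' actual model specification may differ:

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

def fit_blink_model(trials: pd.DataFrame):
    # long_resp: 1 = "long" judgment, 0 = "short"; blinks: 0, 1, or 2+ pre-stimulus blinks
    model = BinomialBayesMixedGLM.from_formula(
        "long_resp ~ duration + C(blinks)",
        {"participant": "0 + C(participant)"},  # random intercept per participant
        trials,
    )
    return model.fit_vb()  # variational Bayes estimate

# result = fit_blink_model(trials); print(result.summary())
```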

40 pages, 1880 KB  
Article
Eyes on Prevention: An Eye-Tracking Analysis of Visual Attention Patterns in Breast Cancer Screening Ads
by Stefanos Balaskas, Ioanna Yfantidou and Dimitra Skandali
J. Eye Mov. Res. 2025, 18(6), 75; https://doi.org/10.3390/jemr18060075 - 13 Dec 2025
Abstract
Strong communication is central to the translation of breast cancer screening availability into uptake. This experiment tests the role of design features of screening advertisements in directing visual attention in screening-eligible women (≥40 years). To this end, a within-subjects eye-tracking experiment (N = 30) was conducted in which women viewed six static public service advertisements. Predefined Areas of Interest (AOIs: Text, Image/Visual, Symbol, Logo, Website/CTA, and Source/Authority) were annotated, and three standard measures were calculated: Time to First Fixation (TTFF), Fixation Count (FC), and Fixation Duration (FD). Analyses combined descriptive summaries with subgroup analyses using nonparametric methods and generalized linear mixed models (GLMMs) employing participant-level random intercepts. Within each category of stimuli, detected differences were small in magnitude yet trended toward fewer revisits for the FC measure; TTFF and FD showed no significant differences across categories. Viewing data from the perspective of AOIs highlighted pronounced individual differences. Narratives/efficacy text and dense icon/text callouts prolonged processing times, although institutional logos and abstract/anatomical symbols generally received brief treatment except when coupled with action-oriented communication triggers. TTFF also pointed toward individual AOIs in line with a Scan-Then-Read strategy, in which smaller labels/sources/CTAs are inspected before larger headlines/statistical text. Practically, screening messages should co-locate access and credibility information in early-attention areas and employ brief, fluent efficacy text to hold gaze. The study adds PSA-specific eye-tracking evidence for breast cancer screening and provides immediately testable design recommendations for programs in Greece and the EU.
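The three measures named here are straightforward to derive from a long-format fixation table; a sketch with illustrative column names (trial, aoi, onset_ms, dur_ms), not the study's actual pipeline:

```python
import pandas as pd

def aoi_metrics(fix: pd.DataFrame) -> pd.DataFrame:
    """Per trial and AOI: Time to First Fixation, Fixation Count, total Fixation Duration."""
    g = fix.groupby(["trial", "aoi"])
    return pd.DataFrame({
        "TTFF_ms": g["onset_ms"].min(),  # first fixation onset, relative to stimulus onset
        "FC": g.size(),                  # number of fixations
        "FD_ms": g["dur_ms"].sum(),      # summed fixation duration
    }).reset_index()
```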

21 pages, 2975 KB  
Article
Where Vision Meets Memory: An Eye-Tracking Study of In-App Ads in Mobile Sports Games with Mixed Visual-Quantitative Analytics
by Ümit Can Büyükakgül, Arif Yüce and Hakan Katırcı
J. Eye Mov. Res. 2025, 18(6), 74; https://doi.org/10.3390/jemr18060074 - 10 Dec 2025
Abstract
Mobile games have become one of the fastest-growing segments of the digital economy, and in-app advertisements represent a major source of revenue while shaping consumer attention and memory processes. This study examined the relationship between visual attention and brand recall of in-app advertisements in a mobile sports game using mobile eye-tracking technology. A total of 79 participants (47 male, 32 female; mean age = 25.8) actively played a mobile sports game for ten minutes while their eye movements were recorded with Tobii Pro Glasses 2. Areas of interest (AOIs) were defined for embedded advertisements, and fixation-related measures were analyzed. Brand recall was assessed through unaided, verbal-aided, and visual-aided measures, followed by demographic comparisons based on gender, mobile sports game experience, and interest in tennis. Results from Generalized Linear Mixed Models (GLMMs) revealed that brand placement was the strongest predictor of recall (p < 0.001), overriding raw fixation duration. Specifically, brands integrated into task-relevant zones (e.g., the central net area) achieved significantly higher recall odds compared to peripheral ads, regardless of marginal variations in dwell time. While eye movement metrics varied by gender and interest, the multivariate model confirmed that in active gameplay, task integration drives memory encoding more effectively than passive visual salience. These findings suggest that active gameplay imposes unique cognitive demands, altering how attention and memory interact. The study contributes both theoretically, by extending advertising research into ecologically valid gaming contexts, and practically, by informing strategies for optimizing mobile in-app advertising.
(This article belongs to the Special Issue Eye Tracking and Visualization)

20 pages, 591 KB  
Article
Investigating the Effect of Presentation Mode on Cognitive Load in English–Chinese Distance Simultaneous Interpreting: An Eye-Tracking Study
by Xuelian (Rachel) Zhu
J. Eye Mov. Res. 2025, 18(6), 73; https://doi.org/10.3390/jemr18060073 - 1 Dec 2025
Abstract
Distance simultaneous interpreting (DSI) is a typical example of technology-mediated interpreting, bridging participants (i.e., interpreters, audience, and speakers) in various events and conferences. This study explores how presentation mode affects cognitive load in DSI, utilizing eye-tracking sensor technology. A controlled experiment was conducted involving 36 participants, comprising 19 professional interpreters and 17 student interpreters, to assess the effects of presentation mode on their cognitive load during English-to-Chinese DSI. A Tobii Pro X3-120 screen-based eye tracker was used to collect eye-tracking data as the participants sequentially performed a DSI task involving four distinct presentation modes: the Speaker, Slides, Split, and Corner modes. The findings, derived from the integration of eye-tracking data and interpreting performance scores, indicate that both presentation mode and experience level significantly influence interpreters' cognitive load. Notably, student interpreters demonstrated longer fixation durations in the Slides mode, indicating a reliance on visual aids for DSI. These results have implications for language learning, suggesting that the integration of visual supports can aid in the acquisition and performance of interpreting skills, particularly for less experienced interpreters. This study contributes to our understanding of the interplay between technology, cognitive load, and language learning in the context of DSI.

17 pages, 1441 KB  
Article
Initial and Sustained Attentional Bias Toward Emotional Faces in Patients with Major Depressive Disorder
by Hanliang Wei, Tak Kwan Lam, Weijian Liu, Waxun Su, Zheng Wang, Qiandong Wang, Xiao Lin and Peng Li
J. Eye Mov. Res. 2025, 18(6), 72; https://doi.org/10.3390/jemr18060072 - 1 Dec 2025
Abstract
Major depressive disorder (MDD) represents a prevalent mental health condition characterized by prominent attentional biases, particularly toward negative stimuli. While extensive research has established the significance of negative attentional bias in depression, critical gaps remain in understanding the temporal dynamics and valence-specificity of these biases. This study employed eye-tracking technology to systematically examine the attentional processing of emotional faces (happy, fearful, sad) in MDD patients (n = 61) versus healthy controls (HC, n = 47), assessing both initial orientation (initial gaze preference) and sustained attention (first dwell time). Key findings revealed the following: (1) while both groups showed an initial vigilance toward threatening faces (fearful/sad), only MDD patients displayed an additional attentional capture by happy faces; (2) a significant main effect of emotion (F(2, 216) = 10.19, p < 0.001) indicated a stronger initial orientation to fearful versus happy faces, with Bayesian analyses (BF < 0.3) confirming the absence of group differences; and (3) no group disparities emerged in sustained attentional maintenance (all ps > 0.05). These results challenge conventional negativity-focused models by demonstrating valence-specific early-stage abnormalities in MDD, suggesting that depressive attentional dysfunction may be most pronounced during initial automatic processing rather than later strategic stages. The findings advance the theoretical understanding of attentional bias in depression while highlighting the need for stage-specific intervention approaches.

23 pages, 4065 KB  
Article
Robust Camera-Based Eye-Tracking Method Allowing Head Movements and Its Application in User Experience Research
by He Zhang and Lu Yin
J. Eye Mov. Res. 2025, 18(6), 71; https://doi.org/10.3390/jemr18060071 - 1 Dec 2025
Abstract
Eye-tracking for user experience analysis has traditionally relied on dedicated hardware, which is often costly and imposes restrictive operating conditions. As an alternative, solutions utilizing ordinary webcams have attracted significant interest due to their affordability and ease of use. However, a major limitation persists in these vision-based methods: sensitivity to head movements. Users are therefore often required to maintain a rigid head position, leading to discomfort and potentially skewed results. To address this challenge, this paper proposes a robust eye-tracking methodology designed to accommodate head motion. Our core technique involves mapping the displacement of the pupil center from a dynamically updated reference point to estimate the gaze point. When head movement is detected, the system recalculates the head-pointing coordinate using the estimated head pose and user-to-screen distance. This new head position and the corresponding pupil center are then established as the fresh benchmark for subsequent gaze point estimation, creating a continuous and adaptive correction loop. We conducted accuracy tests with 22 participants. The results demonstrate that our method surpasses the performance of many current methods, achieving mean gaze errors of 1.13 and 1.37 degrees in two testing modes. Further validation in a smooth pursuit task confirmed its efficacy in dynamic scenarios. Finally, we applied the method in a real-world gaming context, successfully extracting fixation counts and gaze heatmaps to analyze visual behavior and user experience across different game modes, thereby verifying its practical utility.
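The correction loop described in the abstract can be pictured roughly as follows; this is an illustrative sketch only, with assumed thresholds, gains, and inputs rather than the authors' implementation:

```python
import numpy as np

class AdaptiveGazeEstimator:
    def __init__(self, gain_x: float = 8.0, gain_y: float = 8.0,
                 head_move_thresh_deg: float = 2.0):
        self.gain = np.array([gain_x, gain_y])   # pixels per unit of pupil shift (assumed)
        self.thresh = head_move_thresh_deg       # head-movement trigger (assumed)
        self.ref_pupil = None                    # pupil center at the last benchmark
        self.ref_point = None                    # screen point the head was facing then
        self.ref_pose = None                     # head pose (yaw, pitch) at the benchmark

    def update(self, pupil: np.ndarray, head_pose: np.ndarray,
               head_point: np.ndarray) -> np.ndarray:
        """Return an estimated gaze point; re-anchor the reference on head movement."""
        if self.ref_pupil is None or \
           np.any(np.abs(head_pose - self.ref_pose) > self.thresh):
            # Head moved: take the recomputed head-pointing coordinate and the
            # current pupil center as the fresh benchmark.
            self.ref_pupil, self.ref_point = pupil.copy(), head_point.copy()
            self.ref_pose = head_pose.copy()
        # Gaze = benchmark point plus scaled pupil-center displacement.
        return self.ref_point + self.gain * (pupil - self.ref_pupil)
```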

25 pages, 2059 KB  
Article
Measuring Mental Effort in Real Time Using Pupillometry
by Gavindya Jayawardena, Yasith Jayawardana and Jacek Gwizdka
J. Eye Mov. Res. 2025, 18(6), 70; https://doi.org/10.3390/jemr18060070 - 24 Nov 2025
Abstract
Mental effort, a critical factor influencing task performance, is often difficult to measure accurately and efficiently. Pupil diameter has emerged as a reliable, real-time indicator of mental effort. This study introduces RIPA2, an enhanced pupillometric index for real-time mental effort assessment. Building on the original RIPA method, RIPA2 incorporates refined Savitzky–Golay filter parameters to better isolate pupil diameter fluctuations within biologically relevant frequency bands linked to cognitive load. We validated RIPA2 across two distinct tasks: a structured N-back memory task and a naturalistic information search task involving fact-checking and decision-making scenarios. Our findings show that RIPA2 reliably tracks variations in mental effort, demonstrating improved sensitivity and consistency over the original RIPA and strong alignment with established offline pupil-based cognitive load indices such as LHIPA. Notably, RIPA2 captured increased mental effort at higher N-back levels and successfully distinguished greater effort during decision-making tasks compared to fact-checking tasks, highlighting its applicability to real-world cognitive demands. These findings suggest that RIPA2 provides a robust, continuous, and low-latency method for assessing mental effort. It holds strong potential for broader use in educational settings, medical environments, workplaces, and adaptive user interfaces, facilitating objective monitoring of mental effort beyond laboratory conditions.
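The filtering step at the heart of such an index looks roughly like this; the window length and polynomial order are illustrative placeholders, not the refined RIPA2 parameters:

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_pupil(diameter: np.ndarray, fs: float = 120.0,
                 window_s: float = 0.5, polyorder: int = 2) -> np.ndarray:
    """Smooth a pupil-diameter trace sampled at fs Hz with a Savitzky-Golay filter."""
    window = int(window_s * fs)
    if window % 2 == 0:          # savgol_filter requires an odd window length
        window += 1
    return savgol_filter(diameter, window_length=window, polyorder=polyorder)
```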

14 pages, 1118 KB  
Article
Visual Attention to Food Content on Social Media: An Eye-Tracking Study Among Young Adults
by Aura Lydia Riswanto, Seieun Kim, Youngsam Ha and Hak-Seon Kim
J. Eye Mov. Res. 2025, 18(6), 69; https://doi.org/10.3390/jemr18060069 - 20 Nov 2025
Abstract
Social media has become a dominant channel for food marketing, particularly targeting youth through visually engaging and socially embedded content. This study investigates how young adults visually engage with food advertisements on social media and how specific visual and contextual features influence purchase intention. Data were collected from 35 participants aged 18 to 25 using eye-tracking technology and survey analysis. Participants viewed simulated Instagram posts incorporating elements such as food imagery, branding, influencer presence, and social cues. Visual attention was recorded using a Tobii Pro Spectrum, and behavioral responses were assessed via post-surveys. A 2 × 2 design varying influencer presence and food type showed that both features significantly increased visual attention. Marketing cues and branding also attracted substantial visual attention. Linear regression revealed that core/non-core content and influencer features were among the strongest predictors of consumer response. The findings underscore the persuasive power of human and social features in digital food advertising. These insights have implications for commercial marketing practices and for understanding how visual and social elements influence youth engagement with food content on digital platforms.

14 pages, 1532 KB  
Article
Gaze Characteristics Using a Three-Dimensional Heads-Up Display During Cataract Surgery
by Puranjay Gupta, Emily Kao, Neil Sheth, Reem Alahmadi and Michael J. Heiferman
J. Eye Mov. Res. 2025, 18(6), 68; https://doi.org/10.3390/jemr18060068 - 17 Nov 2025
Abstract
Purpose: An observational study investigated differences in gaze behaviors across expertise levels using a 3D heads-up display (HUD) integrated with eye tracking. Methods: 25 ophthalmologists (PGY2–4, fellows, and attendings; n = 5 per group) performed cataract surgery on a SimulEYE model using the NGENUITY HUD. Results: Surgical proficiency increased with experience, with attendings achieving the highest scores (54.4 ± 0.89). Compared with attendings, PGY2s had longer fixation durations (p = 0.042), longer saccades (p < 0.0001), and fewer fixations on the HUD (p < 0.0001). Capsulorhexis diameter relative to capsule size increased with expertise, with fellows and attendings achieving significantly larger diameters than PGY2s (p < 0.0001). Experts maintained smaller tear angles, initiated tears closer to the main wound, and produced more circular morphologies. They rapidly alternated gaze between instruments and surrounding tissue, whereas novices (PGY2–4) fixated primarily on the instrument tip. Conclusions: Experts employ a feed-forward visual sampling strategy, allowing perception of both instruments and surrounding tissue while minimizing inadvertent damage. Furthermore, attending surgeons maintain smaller tear angles and initiate tears proximally to forceps insertion, which may contribute to more controlled tears. Future integration of eye-tracking technology into surgical training could enhance visual-motor strategies in novices.

22 pages, 2262 KB  
Article
BEACH-Gaze: Supporting Descriptive and Predictive Gaze Analytics in the Era of Artificial Intelligence and Advanced Data Science
by Bo Fu, Kayla Chu, Angelo Ryan Soriano, Peter Gatsby, Nicolas Guardado Guardado, Ashley Jones and Matthew Halderman
J. Eye Mov. Res. 2025, 18(6), 67; https://doi.org/10.3390/jemr18060067 - 12 Nov 2025
Cited by 1
Abstract
Recent breakthroughs in machine learning, artificial intelligence, and the emergence of large datasets have made the integration of eye tracking increasingly feasible not only in computing but also in many other disciplines, accelerating innovation and scientific discovery. These transformative changes often depend on intelligently analyzing and interpreting gaze data, which demands a substantial technical background. These technical barriers have remained an obstacle to the broader adoption of eye-tracking technologies in some communities. In an effort to increase accessibility and empower a broader community of researchers and practitioners to leverage eye tracking, this paper presents an open-source software platform: Beach Environment for the Analytics of Human Gaze (BEACH-Gaze), designed to offer comprehensive descriptive and predictive analytical support. Firstly, BEACH-Gaze provides sequential gaze analytics through window segmentation in its data processing and analysis pipeline, which can be used to simulate real-time gaze-based systems. Secondly, it integrates a range of established machine learning models, allowing researchers from diverse disciplines to generate gaze-enabled predictions without advanced technical expertise. The overall goal is to abstract away technical details, to aid the broader community interested in eye-tracking research and applications with data interpretation, and to leverage knowledge gained from eye gaze in the development of machine intelligence. As such, we further demonstrate three use cases that apply descriptive and predictive gaze analytics: supporting individuals with autism spectrum disorder during technology-assisted exercises, dynamically tailoring visual cues for an individual user via physiologically adaptive visualizations, and predicting pilots' performance in flight maneuvers to enhance aviation safety.
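As a rough illustration of window-segmented gaze analytics of the kind described (and of how a real-time system can be simulated offline), consider computing descriptive features over successive time windows; the window size and feature choices are arbitrary assumptions, not BEACH-Gaze's defaults:

```python
import pandas as pd

def windowed_features(gaze: pd.DataFrame, window_ms: int = 5000) -> pd.DataFrame:
    """Fixation count and mean fixation duration per successive time window."""
    g = gaze.copy()
    g["window"] = g["onset_ms"] // window_ms   # assign each fixation to a window
    return g.groupby("window").agg(
        fix_count=("dur_ms", "size"),
        mean_fix_dur_ms=("dur_ms", "mean"),
    ).reset_index()
```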

10 pages, 566 KB  
Article
Recovery of the Pupillary Response After Light Adaptation Is Slowed in Patients with Age-Related Macular Degeneration
by Javier Barranco Garcia, Thomas Ferrazzini, Ana Coito, Dominik Brügger and Mathias Abegg
J. Eye Mov. Res. 2025, 18(6), 66; https://doi.org/10.3390/jemr18060066 - 10 Nov 2025
Abstract
Purpose: This study evaluates a novel, non-invasive method using a virtual reality (VR) headset with integrated eye trackers to assess retinal function by measuring the recovery of the pupillary response after light adaptation in patients with age-related macular degeneration (AMD). Methods: In this pilot study, 14 patients with clinically confirmed AMD and 14 age-matched healthy controls were exposed to alternating bright and dark stimuli using a VR headset. The dark stimulus duration increased incrementally by 100 milliseconds per trial, repeated over 50 cycles. The pupillary response to the re-onset of brightness was recorded. Data were analyzed using a linear mixed-effects model to compare recovery patterns between groups and a convolutional neural network to evaluate diagnostic accuracy. Results: The pupillary response amplitude increased with longer dark stimuli; that is, the longer the eye was exposed to darkness, the larger the subsequent pupillary amplitude. This pupillary recovery was significantly slowed by age and by the presence of macular degeneration. Diagnostic accuracy for AMD was approximately 92%, with a sensitivity of 90% and a specificity of 70%. Conclusions: This proof-of-concept study demonstrates that consumer-grade VR headsets with integrated eye tracking can detect retinal dysfunction associated with AMD. The method offers a fast, accessible, and potentially scalable approach for retinal disease screening and monitoring. Further optimization and validation in larger cohorts are needed to confirm its clinical utility.
(This article belongs to the Special Issue New Horizons and Recent Advances in Eye-Tracking Technology)
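The group comparison maps onto a standard linear mixed model; a sketch with statsmodels and hypothetical column names (the authors' exact model terms are not specified here):

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_recovery_model(df: pd.DataFrame):
    # amplitude: pupillary response amplitude; dark_duration_ms: dark-stimulus duration
    model = smf.mixedlm("amplitude ~ dark_duration_ms * group + age",
                        df, groups=df["subject"])  # random intercept per subject
    return model.fit()

# result = fit_recovery_model(df); print(result.summary())
```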

19 pages, 1440 KB  
Article
Eye-Tracking Data in the Exploration of Students’ Engagement with Representations in Mathematics: Areas of Interest (AOIs) as Methodological and Conceptual Challenges
by Mahboubeh Nedaei, Roger Säljö, Shaista Kanwal and Simon Goodchild
J. Eye Mov. Res. 2025, 18(6), 65; https://doi.org/10.3390/jemr18060065 - 5 Nov 2025
Abstract
In mathematics, and in learning mathematics, representations (texts, formulae, and figures) play a vital role. Eye-tracking is a promising approach for studying how representations are attended to in the context of mathematics learning. The focus of the research reported here is on the methodological and conceptual challenges that arise when analysing students' engagement with different kinds of representations using such data. The study critically examines some of these issues through a case study of three engineering students engaging with an instructional document introducing double integrals. This study reports that not only do the characteristics of different types of representations affect students' engagement with areas of interest (AOIs), but methodological decisions, such as how AOIs are defined, are also consequential for interpretations of that engagement. This shows that both technical parameters and the inherent nature of the representations themselves must be considered when defining AOIs and analysing students' engagement with representations. The findings offer practical considerations for designing and analysing eye-tracking studies when students' engagement with different representations is in focus.

37 pages, 3305 KB  
Article
An Exploratory Eye-Tracking Study of Breast-Cancer Screening Ads: A Visual Analytics Framework and Descriptive Atlas
by Ioanna Yfantidou, Stefanos Balaskas and Dimitra Skandali
J. Eye Mov. Res. 2025, 18(6), 64; https://doi.org/10.3390/jemr18060064 - 4 Nov 2025
Abstract
Successful health promotion depends on messages that capture attention quickly and hold it long enough for eligibility, credibility, and calls to action to be encoded. This research develops an exploratory eye-tracking atlas of breast cancer screening ads viewed by midlife women and a replicable pipeline that distinguishes early capture from long-term processing. Areas of Interest are divided into design-influential categories and analyzed with two complementary measures: first hit and time to first fixation for entry, and a tie-aware pairwise dominance model for dwell that produces rankings and an "early-vs.-sticky" quadrant visualization. Across creatives, pictorial and symbolic features were more likely to capture the first glance when they were perceptually dominant, while layouts containing centralized headlines or institutional cues deflected entry to the message and source. Prolonged attention was consistently focused on blocks of text, locations, and authorship badges over ornamental pictures, demarcating the functional difference between capture and processing. Subgroup differences indicated audience-sensitive shifts: older viewers and family households shifted earlier toward source cues, more educated audiences toward copy and locations, and younger or single viewers toward symbols and images. Internal diagnostics confirmed that the pairwise matrices were consistent with standard dwell summaries, supporting the comparative approach. The atlas converts the patterns into design-ready heuristics: defend pieces that are both early and sticky, promote sticky but late pieces by pushing them toward probable entry channels, de-clutter early but not sticky pieces to convert capture into processing, and re-think pieces that are neither. In practice, the diagnostics can be incorporated into procurement, pretesting, and briefs by agencies, educators, and campaign managers to enhance actionability without sacrificing audience segmentation. As an exploratory investigation, this study invites replication with larger and more diverse samples, generalization to dynamic media, and association with downstream measures such as recall and uptake of services.

22 pages, 4962 KB  
Article
Effects of Multimodal AR-HUD Navigation Prompt Mode and Timing on Driving Behavior
by Qi Zhu, Ziqi Liu, Youlan Li and Jung Euitay
J. Eye Mov. Res. 2025, 18(6), 63; https://doi.org/10.3390/jemr18060063 - 4 Nov 2025
Cited by 1
Abstract
Current research on multimodal AR-HUD navigation systems primarily focuses on the presentation forms of auditory and visual information, yet the effects of synchrony between auditory and visual prompts as well as prompt timing on driving behavior and attention mechanisms remain insufficiently explored. This study employed a 2 (prompt mode: synchronous vs. asynchronous) × 3 (prompt timing: −2000 m, −1000 m, −500 m) within-subject experimental design to assess the impact of multimodal prompt synchrony and prompt distance on drivers' reaction time, sustained attention, and eye movement behaviors, including average fixation duration and fixation count. Behavioral data demonstrated that both prompt mode and prompt timing significantly influenced drivers' response performance (indexed by reaction time) and attention stability, with synchronous prompts at −1000 m yielding optimal performance. Eye-tracking results further revealed that synchronous prompts significantly enhanced fixation stability and reduced visual load, indicating more efficient information integration. Therefore, prompt mode and prompt timing significantly affect drivers' perceptual processing and operational performance. Delivering synchronous auditory and visual prompts at −1000 m achieves an optimal balance between information timeliness and multimodal integration. This study recommends the following: (1) maintaining temporal consistency in multimodal prompts to facilitate perceptual integration and (2) controlling prompt distance within an intermediate range (−1000 m) to optimize the perception–action window, thereby improving the safety and efficiency of AR-HUD navigation systems.
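A 2 × 3 within-subject design of this kind is commonly analyzed with a repeated-measures ANOVA; below is a sketch with hypothetical column names, not the authors' analysis script:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

def rt_anova(df: pd.DataFrame):
    # df: one row per participant x mode x timing cell (e.g., mean RT per cell)
    res = AnovaRM(df, depvar="reaction_time_ms", subject="participant",
                  within=["prompt_mode", "prompt_timing"]).fit()
    return res.anova_table  # F statistics for both factors and their interaction
```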

18 pages, 570 KB  
Article
The Influence of Social Media-like Cues on Visual Attention—An Eye-Tracking Study with Food Products
by Maria Mamalikou, Konstantinos Gkatzionis and Malamatenia Panagiotou
J. Eye Mov. Res. 2025, 18(6), 62; https://doi.org/10.3390/jemr18060062 - 4 Nov 2025
Abstract
Social media has developed into a leading advertising platform, with Instagram likes serving as visual cues that may influence consumer perception and behavior. The present study investigated the effect of Instagram likes on visual attention, memory, and food evaluations, focusing on traditional Greek food posts and using eye-tracking technology. The study assessed whether a higher number of likes increased attention to the food area, enhanced memory recall of food names, and influenced subjective ratings (liking, perceived tastiness, and intention to taste). The results demonstrated no significant differences in overall viewing time, memory performance, or evaluation ratings between high-like and low-like conditions. Although not statistically significant, descriptive trends suggested that posts with a higher number of likes tended to be evaluated more positively, and the likes AOI showed a trend toward attracting more visual attention. The observed trends point to a possible subtle role of likes in users' engagement with food posts, influencing how they process and evaluate such content. These findings add to the discussion about the effect of social media likes on information processing when individuals observe food pictures on social media.

20 pages, 2586 KB  
Article
AI Images vs. Real Photographs: Investigating Visual Recognition and Perception
by Veslava Osińska, Weronika Kortas, Adam Szalach and Marc Welter
J. Eye Mov. Res. 2025, 18(6), 61; https://doi.org/10.3390/jemr18060061 - 3 Nov 2025
Abstract
Recently, the photorealism of generated images has improved noticeably due to the development of AI algorithms. These are high-resolution images of human faces and bodies, cats and dogs, vehicles, and other categories of objects that the untrained eye cannot distinguish from authentic photographs. The study assessed how people perceive 12 pictures generated by AI vs. 12 real photographs. Six main categories of stimuli were selected: architecture, art, faces, cars, landscapes, and pets. The visual perception of the selected images was studied by means of eye tracking, gaze patterns, and time characteristics, compared with respect to the respondent groups' gender and knowledge of AI graphics. After the experiment, the study participants analysed the pictures again in order to describe the reasons for their choice. The results show that AI images of pets and real photographs of architecture were the easiest to identify. The largest differences in visual perception are between men and women, as well as between those experienced in digital graphics (including AI images) and the rest. Based on the analysis, several recommendations are suggested for AI developers and end-users.

22 pages, 1836 KB  
Article
The Influence of Text Genre on Eye Movement Patterns During Reading
by Maksim Markevich and Anastasiia Streltsova
J. Eye Mov. Res. 2025, 18(6), 60; https://doi.org/10.3390/jemr18060060 - 3 Nov 2025
Abstract
Successful reading comprehension depends on many factors, including text genre. Eye-tracking studies indicate that genre shapes eye movement patterns at a local level. Although the reading of expository and narrative texts by adolescents has been described in the literature, the reading of poetry by adolescents remains understudied. In this study, we used scanpath analysis to examine how genre and comprehension level influence global eye movement strategies in adolescents (N = 44). The novelty of this study thus lies in the use of scanpath analysis to measure global eye movement strategies employed by adolescents while reading narrative, expository, and poetic texts. Two distinct reading patterns emerged: a forward reading pattern (linear progression) and a regressive reading pattern (frequent lookbacks). Readers tended to use regressive patterns more often with expository and poetic texts, while forward patterns were more common with the narrative text. Comprehension level also played a significant role, with readers with a higher level of comprehension relying more on regressive patterns for expository and poetic texts. The results of this experiment suggest that scanpaths effectively capture genre-driven differences in reading strategies, underscoring how genre expectations may shape visual processing during reading.
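One simple way to operationalize the forward/regressive distinction is the share of saccades that land on an earlier word; the column names and the 0.25 cutoff below are assumptions for illustration, not the study's classification rule:

```python
import pandas as pd

def regression_rate(fixations: pd.DataFrame) -> float:
    """Fraction of saccades landing on an earlier word than the previous fixation."""
    order = fixations.sort_values("onset_ms")["word_index"].to_numpy()
    if len(order) < 2:
        return 0.0
    steps = order[1:] - order[:-1]     # word-index change per saccade
    return float((steps < 0).mean())   # negative steps are regressions

def classify_scanpath(fixations: pd.DataFrame, cutoff: float = 0.25) -> str:
    return "regressive" if regression_rate(fixations) > cutoff else "forward"
```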
