Search Results (381)

Search Parameters:
Keywords = Fixational eye movement

15 pages, 2879 KiB  
Article
Study on the Eye Movement Transfer Characteristics of Drivers Under Different Road Conditions
by Zhenxiang Hao, Jianping Hu, Xiaohui Sun, Jin Ran, Yuhang Zheng, Binhe Yang and Junyao Tang
Appl. Sci. 2025, 15(15), 8559; https://doi.org/10.3390/app15158559 - 1 Aug 2025
Abstract
Given the severe global traffic safety challenges—including threats to human lives and socioeconomic impacts—this study analyzes drivers' visual behavior to promote sustainable transportation, improve road safety, and reduce the resource waste and pollution caused by accidents. Four typical road sections, namely turning, straight, uphill, and downhill, were selected, and eye movement data from 23 drivers at different driving stages were collected with an aSee Glasses eye tracker to analyze drivers' visual gaze characteristics and gaze transfer patterns on each section. Using Markov chain theory, the probability of dwelling at each gaze point and the transfer probability distribution between gaze points were investigated. The results showed significant differences in visual behavior across road sections: on the turning section, drivers fixated most on the near road ahead, with fixation duration and frequency shares of 29.99% and 28.80%, respectively; on the straight section, attention centered on the right side of the road, with 31.57% of fixation duration and 19.45% of fixation frequency; on the uphill section, fixation duration was more evenly balanced between the left (24.36%) and right (25.51%) sides of the road; and on the downhill section, drivers looked more often at the distant road ahead, with a total fixation frequency of 23.20%, while also attending closely to the right-side road environment, with a fixation duration of 27.09%. Regarding gaze transfers, shifts on the turning section were concentrated between the near and distant road ahead, with frequent transfers to the left and right sides; on the straight section, shifts occurred mainly between the distant road ahead and the dashboard; on the uphill section, between the near road ahead and the two sides of the road; and on the downhill section, between the distant road ahead and the rearview mirror. Although fixations on the road ahead were the most concentrated on all four sections, with an overall fixation stability probability exceeding 67%, fixation stability differed significantly between sections. The study thus reveals patterns of driver visual behavior across driving environments and provides theoretical support for behavior-based traffic safety improvement strategies. Full article
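
The Markov-chain treatment of gaze points described above boils down to estimating stay and transfer probabilities from a sequence of fixated regions. A minimal Python sketch, with hypothetical AOI labels and toy data rather than the study's own, might look like this:

```python
# Minimal sketch (not the authors' code): estimating gaze-point stay and
# transfer probabilities from a labelled fixation sequence, treated as a
# first-order Markov chain. AOI names below are hypothetical.
from collections import defaultdict

def transition_matrix(fixation_sequence):
    """Return P[a][b] = probability that a fixation on AOI `a` is followed
    by a fixation on AOI `b` (a -> a on the diagonal is the stay probability)."""
    counts = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(fixation_sequence, fixation_sequence[1:]):
        counts[current][nxt] += 1
    probs = {}
    for a, row in counts.items():
        total = sum(row.values())
        probs[a] = {b: n / total for b, n in row.items()}
    return probs

# Example with hypothetical AOI labels for one driver on a turning section.
sequence = ["near_road", "near_road", "left_side", "near_road",
            "far_road", "near_road", "right_side", "near_road"]
P = transition_matrix(sequence)
print(P["near_road"])  # stay probability and transfer distribution from 'near_road'
```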

18 pages, 1047 KiB  
Article
Eye Movement Patterns as Indicators of Text Complexity in Arabic: A Comparative Analysis of Classical and Modern Standard Arabic
by Hend Al-Khalifa
J. Eye Mov. Res. 2025, 18(4), 30; https://doi.org/10.3390/jemr18040030 - 16 Jul 2025
Viewed by 258
Abstract
This study investigates eye movement patterns as indicators of text complexity in Arabic, focusing on the comparative analysis of Classical Arabic (CA) and Modern Standard Arabic (MSA) text. Using the AraEyebility corpus, which contains eye-tracking data from readers of both CA and MSA text, we examined differences in fixation patterns, regression rates, and overall reading behavior between these two forms of Arabic. Our analyses revealed significant differences in eye movement metrics between CA and MSA text, with CA text consistently eliciting more fixations, longer fixation durations, and more frequent revisits. Multivariate analysis confirmed that language type has a significant combined effect on eye movement patterns. Additionally, we identified different relationships between text features and eye movements for CA versus MSA text, with sentence-level features emerging as significant predictors across both language types. Notably, we observed an interaction between language type and readability level, with readers showing less sensitivity to readability variations in CA text compared to MSA text. These findings contribute to our understanding of how historical language evolution affects reading behavior and have practical implications for Arabic language education, publishing, and assessment. The study demonstrates the value of eye movement analysis for understanding text complexity in Arabic and highlights the importance of considering language-specific features when studying reading processes. Full article

17 pages, 2942 KiB  
Article
Visual Perception and Fixation Patterns in an Individual with Ventral Simultanagnosia, Integrative Agnosia and Bilateral Visual Field Loss
by Isla Williams, Andrea Phillipou, Elsdon Storey, Peter Brotchie and Larry Abel
Neurol. Int. 2025, 17(7), 105; https://doi.org/10.3390/neurolint17070105 - 10 Jul 2025
Viewed by 226
Abstract
Background/Objectives: As high-acuity vision is limited to a very small visual angle, examination of a scene requires multiple fixations. Simultanagnosia, a disorder wherein elements of a scene can be perceived correctly but cannot be integrated into a coherent whole, has been parsed into dorsal and ventral forms. In ventral simultanagnosia, limited visual integration is possible. This case study was the first to record gaze during the presentation of a series of visual stimuli, which required the processing of local and global elements. We hypothesised that gaze patterns would differ with successful processing and that feature integration could be disrupted by distractors. Methods: The patient received a neuropsychological assessment and underwent CT and MRI. Eye movements were recorded during the following tasks: (1) famous face identification, (2) facial emotion recognition, (3) identification of Ishihara colour plates, and (4) identification of both local and global letters in Navon composite letters, presented either alone or surrounded by filled black circles, which we hypothesised would impair global processing by disrupting fixation. Results: The patient identified no famous faces but scanned them qualitatively normally. The only emotion to be consistently recognised was happiness, whose scanpath differed from the other emotions. She identified none of the Ishihara plates, although her colour vision was normal on the FM-15, even mapping an unseen digit with fixations and tracing it with her finger. For plain Navon figures, she correctly identified 20/20 local and global letters; for the “dotted” figures, she was correct 19/20 times for local letters and 0/20 for global letters (chi-squared: NS for local, p < 0.0001 for global), with similar fixation of salient elements for both. Conclusions: Contrary to our hypothesis, gaze behaviour was largely independent of the ability to process global stimuli, showing for the first time that normal acquisition of visual information did not ensure its integration into a percept. The core defect lay in processing, not acquisition. In the novel Navon task, adding distractors abolished feature integration without affecting the fixation of the salient elements, confirming for the first time that distractors could disrupt the processing, not the acquisition, of visual information in this disorder. Full article
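
The reported Navon result rests on 2 × 2 comparisons of correct and incorrect counts. As a rough illustration only (the authors' exact test setup is not given beyond "chi-squared"), the reported counts can be reproduced with scipy:

```python
# Illustrative reconstruction of the reported comparison (not the authors'
# analysis script): chi-squared tests on correct/incorrect counts for plain
# vs. dotted Navon figures, separately for local and global letters.
from scipy.stats import chi2_contingency

# rows: plain figures, dotted figures; columns: correct, incorrect
local_counts  = [[20, 0], [19, 1]]   # 20/20 vs. 19/20 local letters
global_counts = [[20, 0], [0, 20]]   # 20/20 vs. 0/20 global letters

for name, table in [("local", local_counts), ("global", global_counts)]:
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"{name}: chi2 = {chi2:.2f}, p = {p:.2g}")
# Expected pattern: local comparison non-significant, global comparison p < 0.0001.
```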

15 pages, 559 KiB  
Article
Exploring Fixation Times During Emotional Decoding in Intimate Partner Violence Perpetrators: An Eye-Tracking Pilot Study
by Carolina Sarrate-Costa, Marisol Lila, Luis Moya-Albiol and Ángel Romero-Martínez
Brain Sci. 2025, 15(7), 732; https://doi.org/10.3390/brainsci15070732 - 8 Jul 2025
Viewed by 283
Abstract
Background/Objectives: Deficits in emotion recognition abilities have been described as risk factors for intimate partner violence (IPV) perpetration. However, much of this research is based on self-reports or instruments that present limited psychometric properties. While current scientific literature supports the use of eye tracking to assess cognitive and emotional processes, including emotional decoding abilities, there is a gap in the scientific literature when it comes to measuring these processes in IPV perpetrators using eye tracking in an emotional decoding task. Hence, the aim of this study was to examine the association between fixation times via eye tracking and emotional decoding abilities in IPV perpetrators, controlling for potential confounding variables. Methods: To this end, an emotion recognition task was created using an eye tracker in a group of 52 IPV perpetrators. This task consisted of 20 images with people expressing different emotions. For each picture, the facial region was selected as an area of interest (AOI). The fixation times were added to obtain a total gaze fixation time score. Additionally, an ad hoc emotional decoding multiple-choice test about each picture was developed. These instruments were complemented with other self-reports previously designed to measure emotion decoding abilities. Results: The results showed that the longer the total fixation times on the AOI, the better the emotional decoding abilities in IPV perpetrators. Specifically, fixation times explained 20% of the variance in emotional decoding test scores. Additionally, our ad hoc emotional decoding test was significantly correlated with previously designed emotion recognition tools and showed similar reliability to the eyes test. Conclusions: Overall, this pilot study highlights the importance of including eye movement signals to explore attentional processes involved in emotion recognition abilities in IPV perpetrators. This would allow us to adequately specify the therapeutic needs of IPV perpetrators to improve current interventions. Full article

24 pages, 1038 KiB  
Article
Eye Movements of French Dyslexic Adults While Reading Texts: Evidence of Word Length, Lexical Frequency, Consistency and Grammatical Category
by Aikaterini Premeti, Frédéric Isel and Maria Pia Bucci
Brain Sci. 2025, 15(7), 693; https://doi.org/10.3390/brainsci15070693 - 27 Jun 2025
Viewed by 423
Abstract
Background/Objectives: Dyslexia, a learning disability affecting reading, has been extensively studied using eye movements. This study aimed to examine in the same design the effects of different psycholinguistic variables, i.e., grammatical category, lexical frequency, word length and orthographic consistency on eye movement patterns during reading in adults. Methods: We compared the eye movements of forty university students, twenty with and twenty without dyslexia while they read aloud a meaningful and a meaningless text in order to examine whether semantic context could enhance their reading strategy. Results: Dyslexic participants made more reading errors and had longer reading time particularly with the meaningless text, suggesting an increased reliance on the semantic context to enhance their reading strategy. They also made more progressive and regressive fixations while reading the two texts. Similar results were found when examining grammatical categories. These findings suggest a reduced visuo-attentional span and reliance on a serial decoding approach during reading, likely based on grapheme-to-phoneme conversion. Furthermore, in the whole text analysis, there was no difference in fixation duration between the groups. However, when examining word length, only the control group exhibited a distinction between longer and shorter words. No significant group differences emerged for word frequency. Importantly, multiple regression analyses revealed that orthographic consistency predicted fixation durations only in the control group, suggesting that dyslexic readers were less sensitive to phonological regularities—possibly due to underlying phonological deficits. Conclusions: These findings suggest the involvement of both phonological and visuo-attentional deficits in dyslexia. Combined remediation strategies may enhance dyslexic individuals’ performance in phonological and visuo-attentional tasks. Full article
(This article belongs to the Section Developmental Neuroscience)

21 pages, 1445 KiB  
Article
Attention and Outcomes Across Learning Conditions in L2 Vocabulary Acquisition: Evidence from Eye-Tracking
by Yiyang Yang and Hulin Ren
J. Eye Mov. Res. 2025, 18(3), 21; https://doi.org/10.3390/jemr18030021 - 13 Jun 2025
Viewed by 449
Abstract
The role of attention has been shown to be essential in second language (L2) learning. However, the impact of different learning conditions on attention and learning outcomes remains underexplored, particularly through the application of eye-tracking technology. This study aims to evaluate the effect of intentional learning conditions (i.e., data-driven learning) on vocabulary learning and attentional allocation. Twenty-six intermediate English L2 learners participated in the study to learn the usage of four artificial attributive adjectives in noun phrases (NPs). Learning outcomes were analysed to assess the types of knowledge developed, shedding light on the role of attention and the conscious processing of word usage. Eye-tracking data, collected with an EyeLink 1000 Plus, were used to examine gaze patterns and the allocation of attentional resources when applying the learned usage of adjectives. The results indicate that fixation stability and regression movements significantly differ under intentional learning conditions. Post-test results also indicate a shift in attention from the target adjectives to the associated nouns. These findings underscore the critical role of attention and highlight the influence of learning conditions on L2 vocabulary learning, providing practical implications and empirical validation for L2 educators and researchers aiming to enhance vocabulary instruction through intentional learning strategies. Full article

20 pages, 7297 KiB  
Article
Predicting Landing Position Deviation in Low-Visibility and Windy Environment Using Pilots’ Eye Movement Features
by Xiuyi Li, Yue Zhou, Weiwei Zhao, Chuanyun Fu, Zhuocheng Huang, Nianqian Li and Haibo Xu
Aerospace 2025, 12(6), 523; https://doi.org/10.3390/aerospace12060523 - 10 Jun 2025
Viewed by 396
Abstract
Eye movement features of pilots are critical for aircraft landing, especially in low-visibility and windy conditions. This study conducts simulated flight experiments concerning aircraft approach and landing under three low-visibility and windy conditions, including no-wind, crosswind, and tailwind. This research collects 30 participants’ eye movement data after descending from the instrument approach to the visual approach and measures the landing position deviation. Then, a random forest method is used to rank eye movement features and sequentially construct feature sets by feature importance. Two machine learning models (SVR and RF) and four deep learning models (GRU, LSTM, CNN-GRU, and CNN-LSTM) are trained with these feature sets to predict the landing position deviation. The results show that the cumulative fixation duration on the heading indicator, altimeter, air-speed indicator, and external scenery is vital for landing position deviation under no-wind conditions. The attention allocation required by approaches under crosswind and tailwind conditions is more complex. According to the MAE metric, CNN-LSTM has the best prediction performance and stability under no-wind conditions, while CNN-GRU is better for crosswind and tailwind cases. RF also performs well as per the RMSE metric, as it is suitable for predicting landing position errors of outliers. Full article
(This article belongs to the Topic AI-Enhanced Techniques for Air Traffic Management)
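
The feature-ranking step described above is essentially a random-forest importance ordering used to build nested feature sets. A hedged sketch with placeholder data and assumed feature names (not the study's dataset) is given below:

```python
# Illustrative sketch only (assumed feature names, placeholder values, not the
# study's data): rank eye movement features with a random forest and build
# nested feature sets by descending importance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["fix_dur_heading", "fix_dur_altimeter", "fix_dur_airspeed",
                 "fix_dur_external", "saccade_rate", "pupil_diameter"]
X = rng.normal(size=(30, len(feature_names)))   # 30 participants (placeholder)
y = rng.normal(size=30)                          # landing position deviation (placeholder)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
order = np.argsort(rf.feature_importances_)[::-1]

# Feature set k contains the k most important features; each set would then be
# used to train SVR/RF/GRU/LSTM-style models to predict landing deviation.
feature_sets = [[feature_names[i] for i in order[:k]]
                for k in range(1, len(feature_names) + 1)]
print(feature_sets[2])  # the three top-ranked features
```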

21 pages, 4184 KiB  
Article
Binocular and Fellow Eye Acuity Deficits in Amblyopia: Impact of Fixation Instability and Sensory Factors
by Yulia Haraguchi, Gokce Busra Cakir, Aasef Shaikh and Fatema Ghasia
J. Eye Mov. Res. 2025, 18(3), 20; https://doi.org/10.3390/jemr18030020 - 3 Jun 2025
Viewed by 493
Abstract
Amblyopia, a neurodevelopmental disorder, is commonly assessed through amblyopic eye visual acuity (VA) deficits, but recent studies also highlight abnormalities in the fellow eye. This study quantified binocular and fellow/dominant eye VA in individuals with amblyopia and strabismus without amblyopia and examined factors influencing these measures, including fixation eye movement (FEM) abnormalities. Identifying which subsets of patients—such as those with nystagmus, concurrent strabismus, or greater fixation instability—exhibit more pronounced deficits in binocular visual acuity and binocular summation can enhance clinical decision-making by enabling tailored interventions and aiding patient counseling. Sixty-eight amblyopic, seventeen strabismic without amblyopia, and twenty-four control subjects were assessed using an adaptive psychophysical staircase procedure and high-resolution video-oculography to evaluate FEMs and fixation instability (FI). Binocular and fellow eye VA were significantly lower in amblyopia, regardless of type or nystagmus presence, whereas binocular and dominant eye VA in strabismus without amblyopia did not differ from the controls. Despite reduced binocular acuity, amblyopic and strabismic subjects exhibited binocular summation, with binocular VA exceeding fellow/dominant eye VA. Reduced binocular VA correlated with greater fellow eye VA deficits, diminished binocular summation, and increased FI in the amblyopic eye. Fellow eye VA deficits were linked to greater amblyopic eye VA deficits, an increased degree of anisometropia, higher FI, and stronger nystagmus correlation. These findings suggest amblyopia affects both visual sensory and motor systems, impacting binocular function and fixation stability, with potential consequences for everyday visuomotor tasks like reading. Full article

21 pages, 1696 KiB  
Article
Cognitive Insights into Museum Engagement: A Mobile Eye-Tracking Study on Visual Attention Distribution and Learning Experience
by Wenjia Shi, Kenta Ono and Liang Li
Electronics 2025, 14(11), 2208; https://doi.org/10.3390/electronics14112208 - 29 May 2025
Viewed by 837
Abstract
Recent advancements in Mobile Eye-Tracking (MET) technology have enabled the detailed examination of visitors’ embodied visual behaviors as they navigate exhibition spaces. This study employs MET to investigate visual attention patterns in an archeological museum, with a particular focus on identifying “hotspots” of attention. Through a multi-phase research design, we explore the relationship between visitor gaze behavior and museum learning experiences in a real-world setting. Using three key eye movement metrics—Time to First Fixation (TFF), Average Fixation Duration (AFD), and Total Fixation Duration (TFD), we analyze the distribution of visual attention across predefined Areas of Interest (AOIs). Time to First Fixation varied substantially by element, occurring most rapidly for artifacts and most slowly for labels, while video screens showed the shortest mean latency but greatest inter-individual variability, reflecting sequential exploration and heterogeneous strategies toward dynamic versus static media. Total Fixation Duration was highest for video screens and picture panels, intermediate yet variable for artifacts and text panels, and lowest for labels, indicating that dynamic and pictorial content most effectively sustain attention. Finally, Average Fixation Duration peaked on artifacts and labels, suggesting in-depth processing of descriptive elements, and it was shortest on video screens, consistent with rapid, distributed fixations in response to dynamic media. The results provide novel insights into the spatial and contextual factors that influence visitor engagement and knowledge acquisition in museum environments. Based on these findings, we discuss strategic implications for museum research and propose practical recommendations for optimizing exhibition design to enhance visitor experience and learning outcomes. Full article
(This article belongs to the Special Issue New Advances in Human-Robot Interaction)
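
The three metrics used here (TFF, AFD, TFD) can be computed per AOI from a chronologically ordered fixation list. The following is a minimal sketch assuming a simple (onset, duration, AOI) record format, not the study's actual pipeline:

```python
# Minimal sketch under an assumed record format (not the study's code):
# Time to First Fixation (TFF), Average Fixation Duration (AFD), and
# Total Fixation Duration (TFD) per Area of Interest (AOI).
def aoi_metrics(fixations):
    """fixations: list of (onset_ms, duration_ms, aoi) tuples in viewing order."""
    metrics = {}
    for onset, dur, aoi in fixations:
        m = metrics.setdefault(aoi, {"TFF": onset, "TFD": 0.0, "count": 0})
        m["TFF"] = min(m["TFF"], onset)   # first time this AOI was fixated
        m["TFD"] += dur                   # total time spent fixating this AOI
        m["count"] += 1
    for m in metrics.values():
        m["AFD"] = m["TFD"] / m["count"]  # mean duration of a single fixation
    return metrics

# Hypothetical visit to one exhibit: (onset ms, duration ms, AOI label)
sample = [(0, 300, "artifact"), (350, 180, "label"), (560, 420, "artifact"),
          (1020, 650, "video_screen")]
print(aoi_metrics(sample)["artifact"])
```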

15 pages, 1207 KiB  
Article
Performance Analysis of Eye Movement Event Detection Neural Network Models with Different Feature Combinations
by Birtukan Adamu Birawo and Pawel Kasprowski
Appl. Sci. 2025, 15(11), 6087; https://doi.org/10.3390/app15116087 - 28 May 2025
Viewed by 478
Abstract
Event detection is the most important element of eye movement analysis. Deep learning approaches have recently demonstrated superior performance across various fields, so researchers have also used them to identify eye movement events. In this study, a combination of two-dimensional convolutional neural networks (2D-CNN) and long short-term memory (LSTM) layers is proposed to simultaneously classify input data into fixations, saccades, post-saccadic oscillations (PSOs), and smooth pursuits (SPs). The first step involves calculating features (i.e., velocity, acceleration, jerk, and direction) from positional points. Various combinations of these features have been used as input to the networks. The performance of the proposed method was evaluated across all feature combinations and compared to state-of-the-art feature sets. Combining velocity and direction with acceleration and/or jerk demonstrated significant performance improvement compared to other feature combinations. The results show that the proposed method, using a combination of velocity and direction with acceleration and/or jerk, improves PSO identification performance, which has been difficult to distinguish from short saccades, fixations, and SPs using classic algorithms. Finally, heuristic event measures were applied, and performance was compared across different feature combinations. The results indicate that the model combining velocity, acceleration, jerk, and direction achieved the highest accuracy and most closely matched the ground truth. It correctly classified 82% of fixations, 90% of saccades, and 88% of smooth pursuits. However, the PSO detection rate was only 73%, highlighting the need for further research. Full article
(This article belongs to the Special Issue Latest Research on Eye Tracking Applications)
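
The feature-extraction step the paper describes, computing velocity, acceleration, jerk, and direction from positional samples, can be sketched as follows (assumed sampling setup and synthetic data, not the authors' implementation):

```python
# Rough sketch (assumed sampling setup, not the paper's implementation):
# derive velocity, acceleration, jerk, and direction features from raw gaze
# positions before feeding them to a 2D-CNN + LSTM event classifier.
import numpy as np

def kinematic_features(x, y, sampling_rate_hz):
    """x, y: gaze position arrays in degrees; returns one feature row per sample."""
    dt = 1.0 / sampling_rate_hz
    dx, dy = np.gradient(x, dt), np.gradient(y, dt)
    velocity = np.hypot(dx, dy)                  # deg/s
    acceleration = np.gradient(velocity, dt)     # deg/s^2
    jerk = np.gradient(acceleration, dt)         # deg/s^3
    direction = np.arctan2(dy, dx)               # movement direction, radians
    return np.stack([velocity, acceleration, jerk, direction], axis=1)

# Example: 1 s of synthetic 250 Hz gaze data.
t = np.linspace(0, 1, 250)
features = kinematic_features(np.sin(t), np.cos(t), 250)
print(features.shape)  # (250, 4) -> one feature vector per sample
```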

20 pages, 6633 KiB  
Article
Research on Flight Training Optimization with Instrument Failure Based on Eye Movement Data
by Jiwen Tai, Yu Qian, Zhili Song, Xiuyi Li, Ziang Qu and Chengzhi Yang
J. Eye Mov. Res. 2025, 18(3), 19; https://doi.org/10.3390/jemr18030019 - 23 May 2025
Viewed by 442
Abstract
To improve the quality of flight training in instrument failure scenarios, eye movement data were collected from flight instructors during climbing, descending, and turning flights when the primary attitude direction indicator failed. The performance data of the excellent instructors was selected to produce eye movement tutorials. These tutorials were used to conduct eye movement training for the experimental group of flight trainees. In contrast, the control group received traditional training. The performance and eye movement data of the two groups of flight trainees were then compared and analyzed. The results showed that flight trainees who received eye movement training performed better when facing instrument failure. Specifically, the deviations in the rate of descent, heading during the descent, airspeed during the turn, and slope during the turn were significantly different from those of the control group. Compared to the control group, the experimental group had a significantly lower fixation frequency on the failed instrument during the turn. Additionally, the average glance duration on the failed instrument during the climb and turn was significantly reduced. The study demonstrated the effectiveness of eye movement training in improving the quality of flight training in instrument failure scenarios. Full article

18 pages, 8881 KiB  
Article
Implementation of Eye-Tracking Technology in the Domestic Tourism Marketing Complex
by Olena Sushchenko, Kateryna Kasenkova, Nataliia Pohuda and Mariana Petrova
Tour. Hosp. 2025, 6(2), 94; https://doi.org/10.3390/tourhosp6020094 - 22 May 2025
Viewed by 952
Abstract
This study explores the potential of using eye-tracking technology as a marketing tool to enhance domestic tourism. By examining the visual preferences of users, this research aims to improve the informational resources and visual components of advertising campaigns for tourism destinations. An experiment was conducted to determine which of three image categories—architecture (Ia), nature (In), and people (Ip)—captures more user attention. Participants’ eye movements were tracked to collect data on fixation time, first glance, and the order of image exploration. The findings indicate that images of people (Ip) attract more attention than images of architecture or nature, irrespective of pose, angle, or clothing. Within the Ip category, dynamic images of people in authentic clothing (Ip3–Ip5) held viewers’ attention longer, averaging 3.3 s compared to 1.3 s for static portrait photos (Ip6–Ip8). This study concludes that eye-tracking technology can effectively identify visual elements that interest potential tourists, facilitating the creation of compelling advertising content. This approach can support the development of a cohesive and engaging visual identity for tourism destinations, thereby enhancing marketing strategies and promoting sustainable tourism. Full article
(This article belongs to the Special Issue Smart Destinations: The State of the Art)

19 pages, 5903 KiB  
Article
Examining the Visual Search Behaviour of Experts When Screening for the Presence of Diabetic Retinopathy in Fundus Images
by Timothy I. Murphy, James A. Armitage, Larry A. Abel, Peter van Wijngaarden and Amanda G. Douglass
J. Clin. Med. 2025, 14(9), 3046; https://doi.org/10.3390/jcm14093046 - 28 Apr 2025
Viewed by 595
Abstract
Objectives: This study investigated the visual search behaviour of optometrists and fellowship-trained ophthalmologists when screening for diabetic retinopathy in retinal photographs. Methods: Participants assessed and graded retinal photographs on a computer screen while a Gazepoint GP3 HD eye tracker recorded their eye movements. Areas of interest were derived from the raw data using Hidden Markov modelling. Fixation strings were extracted by matching raw fixation data to areas of interest and resolving ambiguities with graph search algorithms. Fixation strings were clustered using Affinity Propagation to determine search behaviours characteristic of the correct and incorrect response groups. Results: A total of 23 participants (15 optometrists and 8 ophthalmologists) completed the grading task, with each assessing 20 images. Visual search behaviour differed between correct and incorrect responses, with data suggesting correct responses followed a visual search strategy incorporating the optic disc, macula, superior arcade, and inferior arcade as areas of interest. Data from incorrect responses suggest search behaviour driven by saliency or a search pattern unrelated to anatomical landmarks. Referable diabetic retinopathy was correctly identified in 86% of cases. Grader accuracy was 64.8% with good inter-grader agreement (α = 0.818). Conclusions: Our study suggests that a structured visual search strategy is correlated with higher accuracy when assessing retinal photographs for diabetic retinopathy. Referable diabetic retinopathy is detected at high rates; however, there is disagreement between clinicians when determining a precise severity grade. Full article
(This article belongs to the Special Issue Diabetic Retinopathy: Current Concepts and Future Directions)
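
Clustering fixation strings with Affinity Propagation requires a pairwise similarity measure; one common choice is negative edit distance between AOI-label strings. The sketch below is a conceptual illustration with toy strings and assumed AOI codes, not the study's data or code:

```python
# Conceptual sketch only (toy strings, assumed AOI codes): cluster fixation
# strings with Affinity Propagation using negative edit distance as the
# similarity. Letters stand for AOIs, e.g. D = optic disc, M = macula,
# S = superior arcade, I = inferior arcade.
import numpy as np
from sklearn.cluster import AffinityPropagation

def edit_distance(a, b):
    """Classic Levenshtein distance between two fixation strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

strings = ["DMSI", "DMSSI", "DMIS", "MMDD", "MDMD"]
similarity = -np.array([[edit_distance(a, b) for b in strings] for a in strings])
labels = AffinityPropagation(affinity="precomputed", random_state=0).fit_predict(similarity)
print(dict(zip(strings, labels)))  # cluster label per fixation string
```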

22 pages, 1588 KiB  
Article
An Eye-Tracking Study on Text Comprehension While Listening to Music: Preliminary Results
by Georgia Andreou and Maria Gkantaki
Appl. Sci. 2025, 15(7), 3939; https://doi.org/10.3390/app15073939 - 3 Apr 2025
Viewed by 2111
Abstract
The aim of the present study was to examine the effect of background music on text comprehension using eye-tracking technology. Ten Greek undergraduate students read four texts under four reading conditions: preferred music, non-preferred music, café noise, and silence. Eye movements were tracked to assess visual patterns, while reading performance and attitudes were also evaluated. The results showed that fixation measures remained stable across conditions, suggesting that early visual processing is not significantly influenced by auditory distractions. However, reading performance significantly declined under non-preferred music, highlighting its disruptive impact on cognitive processing. Participants also reported greater difficulty and fatigue in this condition, consistent with an increased cognitive load. In contrast, preferred music and silence were associated with enhanced understanding, confidence, and immersion, while café noise had a moderate but manageable effect on reading outcomes. These findings underscore the importance of tailoring reading environments to individual preferences in order to optimize reading performance and engagement. Future research should focus on the effects of different musical attributes, such as tempo and genre, and use more complex reading tasks, in order to better understand how auditory stimuli interact with cognitive load and visual processing. Full article
(This article belongs to the Special Issue Latest Research on Eye Tracking Applications)

20 pages, 1075 KiB  
Review
Eye Tracking in Parkinson’s Disease: A Review of Oculomotor Markers and Clinical Applications
by Pierluigi Diotaiuti, Giulio Marotta, Francesco Di Siena, Salvatore Vitiello, Francesco Di Prinzio, Angelo Rodio, Tommaso Di Libero, Lavinia Falese and Stefania Mancone
Brain Sci. 2025, 15(4), 362; https://doi.org/10.3390/brainsci15040362 - 31 Mar 2025
Cited by 1 | Viewed by 1942
Abstract
(1) Background. Eye movement abnormalities are increasingly recognized as early biomarkers of Parkinson’s disease (PD), reflecting both motor and cognitive dysfunction. Advances in eye-tracking technology provide objective, quantifiable measures of saccadic impairments, fixation instability, smooth pursuit deficits, and pupillary changes. These advances offer new opportunities for early diagnosis, disease monitoring, and neurorehabilitation. (2) Objective. This narrative review explores the relationship between oculomotor dysfunction and PD pathophysiology, highlighting the potential applications of eye tracking in clinical and research settings. (3) Methods. A comprehensive literature review was conducted, focusing on peer-reviewed studies examining eye movement dysfunction in PD. Relevant publications were identified through PubMed, Scopus, and Web of Science, using key terms, such as “eye movements in Parkinson’s disease”, “saccadic control and neurodegeneration”, “fixation instability in PD”, and “eye-tracking for cognitive assessment”. Studies integrating machine learning (ML) models and VR-based interventions were also included. (4) Results. Patients with PD exhibit distinct saccadic abnormalities, including hypometric saccades, prolonged saccadic latency, and increased anti-saccade errors. These impairments correlate with executive dysfunction and disease progression. Fixation instability and altered pupillary responses further support the role of oculomotor metrics as non-invasive biomarkers. Emerging AI-driven eye-tracking models show promise for automated PD diagnosis and progression tracking. (5) Conclusions. Eye tracking provides a reliable, cost-effective tool for early PD detection, cognitive assessment, and rehabilitation. Future research should focus on standardizing clinical protocols, validating predictive AI models, and integrating eye tracking into multimodal treatment strategies. Full article
(This article belongs to the Section Neurodegenerative Diseases)
