Search Results (5)

Search Parameters:
Keywords = post-saccadic oscillations

15 pages, 1207 KB  
Article
Performance Analysis of Eye Movement Event Detection Neural Network Models with Different Feature Combinations
by Birtukan Adamu Birawo and Pawel Kasprowski
Appl. Sci. 2025, 15(11), 6087; https://doi.org/10.3390/app15116087 - 28 May 2025
Viewed by 1412
Abstract
Event detection is the most important element of eye movement analysis. Deep learning approaches have recently demonstrated superior performance across various fields, so researchers have also used them to identify eye movement events. In this study, a combination of two-dimensional convolutional neural network (2D-CNN) and long short-term memory (LSTM) layers is proposed to simultaneously classify input data into fixations, saccades, post-saccadic oscillations (PSOs), and smooth pursuits (SPs). The first step involves calculating features (i.e., velocity, acceleration, jerk, and direction) from positional points. Various combinations of these features were used as input to the networks. The performance of the proposed method was evaluated across all feature combinations and compared to state-of-the-art feature sets. Combining velocity and direction with acceleration and/or jerk demonstrated significant performance improvement over other feature combinations. The results show that the proposed method, using velocity and direction combined with acceleration and/or jerk, improves the identification of PSOs, which have been difficult to distinguish from short saccades, fixations, and SPs using classic algorithms. Finally, heuristic event measures were applied, and performance was compared across the different feature combinations. The results indicate that the model combining velocity, acceleration, jerk, and direction achieved the highest accuracy and most closely matched the ground truth. It correctly classified 82% of fixations, 90% of saccades, and 88% of smooth pursuits. However, the PSO detection rate was only 73%, highlighting the need for further research.
(This article belongs to the Special Issue Latest Research on Eye Tracking Applications)
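The feature-extraction step described in the abstract (velocity, acceleration, jerk, and direction computed from positional points) can be sketched with finite differences. The helper below is a hypothetical illustration, not the authors' implementation; it assumes gaze coordinates in degrees sampled at a fixed rate, and takes scalar derivatives of speed for acceleration and jerk (a vector formulation would be equally plausible):

```python
import math

def diff(seq, dt):
    """First derivative of a sample sequence by finite differences."""
    return [(b - a) / dt for a, b in zip(seq, seq[1:])]

def gaze_features(xs, ys, fs):
    """Per-sample speed, acceleration, jerk, and movement direction
    (radians) from gaze positions sampled at fs Hz.
    Hypothetical sketch, not the paper's code."""
    dt = 1.0 / fs
    vx, vy = diff(xs, dt), diff(ys, dt)
    speed = [math.hypot(a, b) for a, b in zip(vx, vy)]
    accel = diff(speed, dt)   # scalar acceleration (derivative of speed)
    jerk = diff(accel, dt)    # scalar jerk (derivative of acceleration)
    direction = [math.atan2(b, a) for a, b in zip(vx, vy)]
    return speed, accel, jerk, direction
```

Each successive derivative shortens the sequence by one sample, so in practice the feature vectors would need to be aligned (e.g., by padding) before being fed to a network.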

20 pages, 4100 KB  
Protocol
Automated Analysis Pipeline for Extracting Saccade, Pupil, and Blink Parameters Using Video-Based Eye Tracking
by Brian C. Coe, Jeff Huang, Donald C. Brien, Brian J. White, Rachel Yep and Douglas P. Munoz
Vision 2024, 8(1), 14; https://doi.org/10.3390/vision8010014 - 18 Mar 2024
Cited by 11 | Viewed by 4906
Abstract
The tremendous increase in the use of video-based eye tracking has made it possible to collect eye tracking data from thousands of participants. The traditional procedures for the manual detection and classification of saccades and for trial categorization (e.g., correct vs. incorrect) are not viable for the large datasets being collected. Additionally, video-based eye trackers allow for the analysis of pupil responses and blink behaviors. Here, we present a detailed description of our pipeline for collecting, storing, and cleaning data, as well as for organizing participant codes; these steps are fairly lab-specific but are nonetheless important precursors to establishing standardized pipelines. More importantly, we also include descriptions of the automated detection and classification of saccades, blinks, “blincades” (blinks occurring during saccades), and boomerang saccades (two nearly simultaneous saccades in opposite directions that speed-based algorithms fail to split). This processing is almost entirely task-agnostic and can be used on a wide variety of data. We additionally describe novel findings regarding post-saccadic oscillations and provide a method to achieve more accurate estimates for saccade end points. Lastly, we describe the automated behavior classification for the interleaved pro/anti-saccade task (IPAST), a task that probes voluntary and inhibitory control. This pipeline was evaluated using data collected from 592 human participants between 5 and 93 years of age, making it robust enough to handle large clinical patient datasets. In summary, this pipeline has been optimized to consistently handle large datasets obtained from diverse study cohorts (i.e., developmental, aging, clinical) and collected across multiple laboratory sites.
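The speed-based classification whose failure modes this pipeline addresses (e.g., boomerang saccades) can be illustrated with a minimal velocity-threshold detector. Function names and the 30 deg/s threshold below are illustrative assumptions, not the pipeline's actual code:

```python
def detect_saccades(speed, threshold=30.0):
    """Label each speed sample (deg/s) as saccade (True) when it exceeds
    a velocity threshold -- the classic speed-based rule.
    Simplified sketch; the paper's pipeline is considerably richer."""
    return [s > threshold for s in speed]

def saccade_intervals(labels):
    """Collapse boolean per-sample labels into (start, end) index pairs."""
    intervals, start = [], None
    for i, lab in enumerate(labels):
        if lab and start is None:
            start = i
        elif not lab and start is not None:
            intervals.append((start, i - 1))
            start = None
    if start is not None:
        intervals.append((start, len(labels) - 1))
    return intervals
```

A boomerang saccade never drops below threshold between its two opposing movements, so a detector like this merges them into a single interval; that is the failure the pipeline's dedicated handling corrects.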

18 pages, 3291 KB  
Review
Review and Evaluation of Eye Movement Event Detection Algorithms
by Birtukan Birawo and Pawel Kasprowski
Sensors 2022, 22(22), 8810; https://doi.org/10.3390/s22228810 - 15 Nov 2022
Cited by 42 | Viewed by 10239
Abstract
Eye tracking is a technology aimed at understanding the direction of the human gaze. Event detection is the process of detecting and classifying eye movements, which are divided into several types. Nowadays, event detection is almost exclusively done by applying a detection algorithm to the raw recorded eye-tracking data. However, due to the lack of a standard procedure for performing evaluations, evaluating and comparing detection algorithms on eye-tracking signals is very challenging. In this paper, we used data from the SMI HiSpeed 1250 high-speed eye tracker and compared event detection performance. The evaluation focused on the classification of fixations, saccades, and post-saccadic oscillations. Sample-by-sample comparisons were used to compare the algorithms and to measure inter-agreement between algorithms and human coders. The impact of varying threshold values on threshold-based algorithms was examined, and the optimum threshold values were determined. This evaluation differed from previous evaluations by using the same dataset to evaluate both the event detection algorithms and the human coders. We evaluated and compared threshold-based, machine-learning-based, and deep-learning event detection algorithms. The evaluation results show that all methods perform well for fixation and saccade detection; however, there are substantial differences in classification results. Generally, CNN (Convolutional Neural Network) and RF (Random Forest) algorithms outperform threshold-based methods.
(This article belongs to the Special Issue Eye Tracking Sensors Data Analysis with Deep Learning Methods)
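For the sample-by-sample inter-agreement comparison this review describes, a chance-corrected agreement measure such as Cohen's kappa is one common choice; the helper below is a generic sketch of that metric, not the paper's implementation, and assumes two equal-length per-sample label sequences:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected sample-by-sample agreement between two event
    labelings (e.g., algorithm vs. human coder). Generic sketch."""
    n = len(a)
    # Observed agreement: fraction of samples labeled identically.
    po = sum(x == y for x, y in zip(a, b)) / n
    # Expected chance agreement from the two marginal label distributions.
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (po - pe) / (1 - pe)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which makes it better suited than raw accuracy for event classes with very unbalanced sample counts, such as fixations versus post-saccadic oscillations.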

29 pages, 3273 KB  
Article
Characterising Eye Movement Events with an Unsupervised Hidden Markov Model
by Malte Lüken, Šimon Kucharský and Ingmar Visser
J. Eye Mov. Res. 2022, 15(1), 1-29; https://doi.org/10.16910/jemr.15.1.4 - 28 Jun 2022
Cited by 5 | Viewed by 527
Abstract
Eye-tracking allows researchers to infer cognitive processes from eye movements that are classified into distinct events. Parsing the events is typically done by algorithms. Here we aim to develop an unsupervised, generative model that can be fitted to eye-movement data using maximum likelihood estimation. This approach allows hypothesis testing about fitted models in addition to serving as a classification method. We developed gazeHMM, an algorithm that uses a hidden Markov model as a generative model, has few critical parameters to be set by users, and does not require human-coded data as input. The algorithm classifies gaze data into fixations, saccades, and, optionally, post-saccadic oscillations and smooth pursuits. We evaluated gazeHMM’s performance in a simulation study, showing that it successfully recovered hidden Markov model parameters and hidden states. Parameters were less well recovered when we included a smooth pursuit state and/or added even small amounts of noise to simulated data. We applied generative models with different numbers of events to benchmark data. Comparing them indicated that hidden Markov models with more events than expected had most likely generated the data. We also applied the full algorithm to benchmark data and assessed its similarity to human coding and other algorithms. For static stimuli, gazeHMM showed high similarity and outperformed other algorithms in this regard. For dynamic stimuli, gazeHMM tended to switch rapidly between fixations and smooth pursuits but still displayed higher similarity than most other algorithms. Concluding that gazeHMM can be used in practice, we recommend parsing smooth pursuits only for exploratory purposes. Future hidden Markov model algorithms could use covariates to better capture eye movement processes and explicitly model event durations to classify smooth pursuits more accurately.
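The classification step of a hidden-Markov-model parser such as gazeHMM amounts to decoding the most likely hidden event sequence given the observations. The textbook Viterbi routine below, with invented two-state fixation/saccade parameters over discretized speed observations, illustrates the idea without reproducing gazeHMM itself:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state path for an observation sequence.
    Generic textbook Viterbi decoding, not gazeHMM's own code."""
    # Work in log-probabilities to avoid underflow on long recordings.
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]])
          for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max(
                (V[-2][p] + math.log(trans_p[p][s]) + math.log(emit_p[s][o]), p)
                for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Invented illustrative parameters: sticky fixations, brief saccades.
states = ('fix', 'sac')
start_p = {'fix': 0.9, 'sac': 0.1}
trans_p = {'fix': {'fix': 0.95, 'sac': 0.05},
           'sac': {'fix': 0.5, 'sac': 0.5}}
emit_p = {'fix': {'slow': 0.99, 'fast': 0.01},
          'sac': {'slow': 0.2, 'fast': 0.8}}
decoded = viterbi(['slow', 'slow', 'fast', 'slow'],
                  states, start_p, trans_p, emit_p)
```

In a generative model like gazeHMM the emission distributions are continuous (e.g., over sample speed and direction) and are fitted by maximum likelihood rather than fixed by hand, but the decoding principle is the same.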

28 pages, 24267 KB  
Article
Study of an Extensive Set of Eye Movement Features: Extraction Methods and Statistical Analysis
by Ioannis Rigas, Lee Friedman and Oleg Komogortsev
J. Eye Mov. Res. 2018, 11(1), 1-28; https://doi.org/10.16910/jemr.11.1.3 - 20 Mar 2018
Cited by 40 | Viewed by 549
Abstract
This work presents a study of an extensive set of 101 categories of eye movement features from three types of eye movement events: fixations, saccades, and post-saccadic oscillations. We present a unified framework of methods for the extraction of features that describe the temporal, positional, and dynamic characteristics of eye movements. We perform statistical analysis of feature values by employing eye movement data from a normative population of 298 subjects, recorded during a text reading task. We present overall measures for the central tendency and variability of feature values, and we quantify the test-retest reliability of features using either the Intraclass Correlation Coefficient (for normally distributed and normalized features) or Kendall’s coefficient of concordance (for non-normally distributed features). Finally, for the normally distributed and normalized features we additionally perform factor analysis and provide interpretations of the resulting factors. The presented methods and analysis can provide a valuable tool for researchers in various fields that explore eye movements, such as behavioral studies, attention and cognition research, medical research, biometric recognition, and human-computer interaction.
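The test-retest reliability measure named in this abstract, the Intraclass Correlation Coefficient, can be computed from a one-way ANOVA decomposition of between- and within-subject variance. The sketch below follows the textbook ICC(1,1) formula and is not the authors' code; `scores` is assumed to hold one list of k repeated measurements per subject:

```python
from statistics import mean

def icc_1_1(scores):
    """One-way random-effects intraclass correlation, ICC(1,1),
    for test-retest reliability. Textbook formula sketch."""
    n, k = len(scores), len(scores[0])
    grand = mean(v for row in scores for v in row)
    row_means = [mean(row) for row in scores]
    # Between-subject and within-subject mean squares.
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((v - m) ** 2
              for row, m in zip(scores, row_means)
              for v in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

ICC approaches 1 when repeated measurements of the same subject agree far better than measurements across subjects, which is the sense in which a feature is "reliable" for purposes such as biometric recognition.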
