Journal of Eye Movement Research has been published by MDPI since Volume 18, Issue 1 (2025). Earlier articles were published in Open Access under a CC-BY (or CC-BY-NC-ND) licence by the previous publisher and are hosted by MDPI on mdpi.com as a courtesy, upon agreement with Bern Open Publishing (BOP).

J. Eye Mov. Res., Volume 8, Issue 1 (December 2015) – 5 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
20 pages, 2457 KiB  
Article
Accuracy and Precision of Fixation Locations Recorded with the Low-Cost Eye Tribe Tracker in Different Experimental Set-Ups
by Kristien Ooms, Lien Dupont, Lieselot Lapon and Stanislav Popelka
J. Eye Mov. Res. 2015, 8(1), 1-20; https://doi.org/10.16910/jemr.8.1.5 - 11 Dec 2015
Cited by 95 | Viewed by 95
Abstract
This article compares the accuracy and precision of the low-cost Eye Tribe tracker with those of a well-established comparable eye tracker, the SMI RED 250. Participants were instructed to fixate on predefined point locations on a screen. Accuracy is measured as the distance between the recorded fixation locations and the actual target location; precision is represented by the standard deviation of these measurements. The temporal precision (sampling rate) of both eye-tracking devices is evaluated as well. The results illustrate that a correct set-up and a careful selection of the software used to record and process the data are of utmost importance for obtaining acceptable results with the low-cost device. With careful choices at each of these steps, however, the quality (accuracy and precision) of the recorded data can be considered comparable.
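
A minimal sketch of the accuracy and precision measures as the abstract defines them (mean distance from recorded fixations to the target, and the standard deviation of those distances), assuming fixation locations are available as (x, y) coordinates; the function name and data layout are illustrative, not taken from the paper.

    import numpy as np

    def accuracy_and_precision(fixations, target):
        """Accuracy: mean Euclidean distance from recorded fixations to the target.
        Precision: standard deviation of those distance measurements.
        `fixations` is an (N, 2) array of recorded fixation locations,
        `target` is the (x, y) location participants were asked to fixate."""
        fixations = np.asarray(fixations, dtype=float)
        offsets = np.linalg.norm(fixations - np.asarray(target, dtype=float), axis=1)
        accuracy = offsets.mean()          # systematic offset from the target
        precision = offsets.std(ddof=1)    # spread of the recorded locations
        return accuracy, precision

    # Hypothetical samples around a target at (512, 384) px
    acc, prec = accuracy_and_precision([(515, 380), (510, 388), (518, 383)], (512, 384))
    print(f"accuracy = {acc:.1f} px, precision = {prec:.1f} px")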

10 pages, 6264 KiB  
Article
Does Color Influence Eye Movements While Exploring Videos?
by Shahrbanoo Hamel, Dominique Houzet, Denis Pellerin and Nathalie Guyader
J. Eye Mov. Res. 2015, 8(1), 1-10; https://doi.org/10.16910/jemr.8.1.4 - 23 Apr 2015
Cited by 1 | Viewed by 60
Abstract
Although visual attention studies consider color one of the most important features guiding visual attention, few studies have investigated how color influences eye movements while viewing natural scenes without any particular task. To better understand the visual features that drive attention, the aim of this paper was to quantify the influence of color on eye movements when viewing dynamic natural scenes. The influence of color was investigated by comparing the eye positions of several observers, eye-tracked while viewing video stimuli in two conditions: color and grayscale. The comparison used the dispersion between the eye positions of the observers, the number of attractive regions obtained with a clustering method applied to the eye positions, and the agreement between eye positions and the predictions of a saliency model. The mean amplitude of saccades and the mean duration of fixations were compared as well. Globally, only a slight influence of color on eye movements was measured; the number of attractive regions was slightly higher for color stimuli than for grayscale stimuli. However, a luminance-based saliency model predicts the eye positions for color stimuli as efficiently as for grayscale stimuli.
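
A minimal sketch of one possible operationalization of the inter-observer dispersion mentioned in the abstract: the mean pairwise distance between observers' gaze positions on a given video frame. The data layout and the choice of mean pairwise distance are assumptions made here for illustration, not necessarily the paper's exact measure.

    import numpy as np
    from itertools import combinations

    def frame_dispersion(gaze_points):
        """Mean pairwise Euclidean distance between observers' gaze points
        on a single video frame. `gaze_points` is an (N_observers, 2) array."""
        pts = np.asarray(gaze_points, dtype=float)
        dists = [np.linalg.norm(a - b) for a, b in combinations(pts, 2)]
        return float(np.mean(dists)) if dists else 0.0

    # Hypothetical frame: observers cluster tightly in one condition, spread in the other
    clustered = [(400, 300), (410, 305), (395, 310)]
    spread = [(400, 300), (450, 340), (360, 280)]
    print(frame_dispersion(clustered), frame_dispersion(spread))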

8 pages, 782 KiB  
Article
Eye-Tracking Study of Reading Speed from LCD Displays: Influence of Type Style and Type Size
by Gregor Franken, Anja Podlesek and Klementina Možina
J. Eye Mov. Res. 2015, 8(1), 1-8; https://doi.org/10.16910/jemr.8.1.3 - 30 Mar 2015
Cited by 15 | Viewed by 92
Abstract
Increasing amounts of text are read from various types of screens. The shape and size of a typeface determine the legibility of a text. The aim of this study was to investigate the legibility of different typefaces displayed on LCD screens. Two typefaces designed for screen rendering (Georgia and Verdana) were analyzed with eye-tracking technology in 8 different sizes. Regardless of the font size, texts set in Verdana were read faster. For both typefaces, reading speed increased with font size. The number of fixations increased with character size, while the fixation time became shorter.
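
For illustration, the measures named in the abstract (reading speed, number of fixations, fixation time) could be computed per trial roughly as follows; the function and its inputs are hypothetical and not taken from the study.

    def reading_metrics(n_words, reading_time_s, fixation_durations_ms):
        """Per-trial measures: reading speed in words per minute,
        number of fixations, and mean fixation duration in ms."""
        return {
            "reading_speed_wpm": n_words / (reading_time_s / 60.0),
            "n_fixations": len(fixation_durations_ms),
            "mean_fixation_ms": sum(fixation_durations_ms) / len(fixation_durations_ms),
        }

    # Hypothetical trial: 120 words read in 40 s, four fixations recorded
    print(reading_metrics(120, 40.0, [210, 185, 240, 200]))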

11 pages, 858 KiB  
Article
SMOOVS: Towards Calibration-Free Text Entry by Gaze Using Smooth Pursuit Movements
by Otto Hans-Martin Lutz, Antje Christine Venjakob and Stefan Ruff
J. Eye Mov. Res. 2015, 8(1), 1-11; https://doi.org/10.16910/jemr.8.1.2 - 19 Mar 2015
Cited by 39 | Viewed by 59
Abstract
Gaze-based text spellers have proved useful for people with severe motor diseases, but they lack acceptance in general human-computer interaction. To be usable on public displays, gaze spellers need to be robust and provide an intuitive interaction concept. Traditional dwell- and blink-based systems, however, require accurate calibration, which conflicts with fast and intuitive interaction. We developed the first gaze speller that explicitly utilizes smooth pursuit eye movements and their particular characteristics. The speller achieves sufficient accuracy with a one-point calibration and does not require extensive training. Its interface consists of character elements that move apart from each other in two stages. Because each element follows a unique track, gaze following that track can be detected by an algorithm that does not rely on exact gaze coordinates and compensates for latency-based artefacts. In a user study, 24 participants tested four speed levels of the moving elements to determine an optimal interaction speed. At 300 px/s, users showed the highest overall performance of 3.34 WPM (without training). Subjective ratings support the finding that this pace is superior.
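
One common way to implement the track-following idea described in the abstract is to correlate the gaze trajectory with each element's trajectory over a short window; correlation is insensitive to constant offsets, which is why exact calibration is not needed. The sketch below illustrates that general idea under stated assumptions; the function name, window handling, and scoring are invented here, and the paper's latency compensation is omitted.

    import numpy as np

    def match_gaze_to_elements(gaze_xy, element_tracks):
        """Pick the element whose movement best matches the gaze trace over a short
        window by correlating the two trajectories per axis. `gaze_xy` is a (T, 2)
        array; `element_tracks` maps an element label to its (T, 2) on-screen track."""
        gaze = np.asarray(gaze_xy, dtype=float)
        scores = {}
        for label, track in element_tracks.items():
            track = np.asarray(track, dtype=float)
            # NaN (constant axis) is treated as zero correlation
            rx = np.nan_to_num(np.corrcoef(gaze[:, 0], track[:, 0])[0, 1])
            ry = np.nan_to_num(np.corrcoef(gaze[:, 1], track[:, 1])[0, 1])
            scores[label] = (rx + ry) / 2.0
        return max(scores, key=scores.get), scores

    # Hypothetical window: gaze follows element "A" with a constant offset and noise
    t = np.linspace(0, 1, 30)
    tracks = {"A": np.c_[100 + 300 * t, 100 + 200 * t],
              "B": np.c_[500 - 300 * t, 100 + 200 * t]}
    gaze = np.c_[140 + 300 * t + np.random.randn(30) * 5,
                 90 + 200 * t + np.random.randn(30) * 5]
    print(match_gaze_to_elements(gaze, tracks)[0])  # expected: "A"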

17 pages, 1585 KiB  
Article
Dummy Eye Measurements of Microsaccades: Testing the Influence of System Noise and Head Movements on Microsaccade Detection in a Popular Video-Based Eye Tracker
by Frouke Hermens
J. Eye Mov. Res. 2015, 8(1), 1-17; https://doi.org/10.16910/jemr.8.1.1 - 20 Feb 2015
Cited by 12 | Viewed by 63
Abstract
Whereas early studies of microsaccades predominantly relied on custom-built eye trackers and manual tagging of microsaccades, more recent work tends to use video-based eye tracking and automated algorithms for microsaccade detection. While data from these newer studies suggest that microsaccades can be reliably detected with video-based systems, this has not been systematically evaluated. I here present a method and data examining microsaccade detection in an often-used video-based system (the Eyelink II) and a commonly used detection algorithm (Engbert & Kliegl, 2003; Engbert & Mergenthaler, 2006). Recordings from human participants were compared with recordings from a pair of dummy eyes mounted on a pair of glasses that was either worn by a human participant (i.e., with head motion) or placed on a dummy head (no head motion). Three experiments were conducted. The first experiment suggests that when measurements use the pupil-detection mode, microsaccade detections in the absence of eye movements are sparse as long as there are no head movements, but frequent when the head moves (despite the use of a chin rest). The second experiment demonstrates that with measurements that combine corneal reflection and pupil detection, false microsaccade detections can be largely avoided as long as a binocular criterion is used. The third experiment examines whether past results may have been affected by incorrect detections due to small head movements; it shows that despite the many detections caused by head movements, the typical modulation of microsaccade rate after stimulus onset is found only when recording from the participants' eyes.
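
A compact sketch of the kind of velocity-threshold detection cited in the abstract (Engbert & Kliegl, 2003), assuming monocular gaze traces in degrees and typical parameter choices (a 5-sample velocity window, an elliptic threshold of lambda times a median-based velocity SD, and a minimum duration). This is a simplified illustration, not the exact implementation evaluated in the paper.

    import numpy as np

    def detect_microsaccades(x, y, sample_rate, lam=6.0, min_samples=3):
        """Velocity-threshold microsaccade detection in the spirit of
        Engbert & Kliegl (2003). `x`, `y` are 1-D gaze position arrays for one eye."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        dt = 1.0 / sample_rate
        # 5-sample moving-window velocity estimate
        vx = (x[4:] + x[3:-1] - x[1:-3] - x[:-4]) / (6 * dt)
        vy = (y[4:] + y[3:-1] - y[1:-3] - y[:-4]) / (6 * dt)
        # Median-based velocity standard deviation per axis (floored to avoid /0)
        sx = np.sqrt(max(np.median(vx**2) - np.median(vx)**2, 1e-12))
        sy = np.sqrt(max(np.median(vy**2) - np.median(vy)**2, 1e-12))
        # Samples exceeding the elliptic threshold
        above = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0
        # Group consecutive above-threshold samples into events of sufficient length
        events, start = [], None
        for i, flag in enumerate(above):
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                if i - start >= min_samples:
                    events.append((start + 2, i + 1))  # shifted back to sample indices
                start = None
        if start is not None and len(above) - start >= min_samples:
            events.append((start + 2, len(above) + 1))
        return events

    def binocular(events_left, events_right):
        """Keep only events that overlap in time in both eyes (binocular criterion)."""
        return [(max(l0, r0), min(l1, r1))
                for l0, l1 in events_left for r0, r1 in events_right
                if l0 <= r1 and r0 <= l1]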
