 
 
Journal of Eye Movement Research has been published by MDPI since Volume 18, Issue 1 (2025). Earlier articles were published in Open Access under a CC-BY (or CC-BY-NC-ND) licence and are hosted by MDPI on mdpi.com as a courtesy, by agreement with Bern Open Publishing (BOP).

J. Eye Mov. Res., Volume 3, Issue 2 (February 2009) – 5 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
13 pages, 1159 KiB  
Article
A Statistical Mixture Method to Reveal Bottom-Up and Top-Down Factors Guiding the Eye-Movements
by Thomas Couronné, Anne Guérin-Dugué, Michel Dubois, Pauline Faye and Christian Marendaz
J. Eye Mov. Res. 2009, 3(2), 1-13; https://doi.org/10.16910/jemr.3.2.5 - 11 Feb 2010
Cited by 4 | Viewed by 51
Abstract
When people gaze at real scenes, their visual attention is driven both by bottom-up processes arising from the signal properties of the scene and by top-down effects such as the task, affective state, prior knowledge, or semantic context. The context of this study is the assessment of manufactured objects (here, a car cab interior). Within this dedicated context, this work describes a set of methods, adaptable to more general settings, for analyzing eye movements during visual scene evaluation. We define a statistical model that explains the eye fixations measured experimentally by eye-tracking even when the signal-to-noise ratio is poor or raw data are sparse. One novelty of the approach is the use of complementary experimental data obtained with the "Bubbles" paradigm. The proposed model is an additive mixture of several a priori spatial density distributions of the factors guiding visual attention. The "Bubbles" paradigm is adapted here to reveal the semantic density distribution, which represents the cumulative effect of the top-down factors. The contribution of each factor is then compared across products and tasks, in order to highlight the properties of visual attention and cognitive activity in each situation.
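The core idea of an additive mixture of spatial densities can be sketched as follows. This is a minimal illustration, not the authors' estimator: the function name, the uniform initialization, and the EM-style multiplicative update are all assumptions made here for the example, which simply fits nonnegative mixture weights so that a weighted sum of candidate factor maps approximates an observed fixation density map.

```python
import numpy as np

def fit_mixture_weights(fixation_map, factor_maps, n_iter=500):
    """Fit nonnegative weights w_k (summing to 1) so that
    sum_k w_k * factor_maps[k] approximates the observed fixation
    density. Uses an EM-style multiplicative update for illustration."""
    # Normalize each factor map into a spatial probability density.
    F = np.stack([m.ravel() / m.sum() for m in factor_maps])  # shape (K, P)
    p = fixation_map.ravel() / fixation_map.sum()             # observed density
    w = np.full(len(F), 1.0 / len(F))                         # uniform start
    for _ in range(n_iter):
        mix = w @ F                                           # current mixture density
        # Reweight each factor by how much of the observed density it explains.
        w = w * (F @ (p / np.maximum(mix, 1e-12)))
        w /= w.sum()
    return w
```

With two partially overlapping factor maps and a fixation map built as their 70/30 mixture, the recovered weights converge to roughly (0.7, 0.3).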

12 pages, 473 KiB  
Article
The Behavioural and Neurophysiological Modulation of Microsaccades in Monkeys
by Donald C. Brien, Brian D. Corneil, Jillian H. Fecteau, Andrew H. Bell and Douglas P. Munoz
J. Eye Mov. Res. 2009, 3(2), 1-12; https://doi.org/10.16910/jemr.3.2.4 - 22 Dec 2009
Cited by 25 | Viewed by 61
Abstract
Systematic modulations of microsaccades have been observed in humans during covert orienting. We show here that monkeys are a suitable model for studying the neurophysiology governing these modulations of microsaccades. Using various cue-target saccade tasks, we observed the effects of visual and auditory cues on microsaccades in monkeys. As in human studies, visual cues produced an early bias toward cue-congruent microsaccades followed by a later bias toward cue-incongruent microsaccades. Auditory cues produced a cue-incongruent bias for left cues only. In a separate experiment, we observed that brainstem omnipause neurons, which gate all saccades, also paused during microsaccade generation. Thus, we provide evidence that at least part of the same neurocircuitry governs both large saccades and microsaccades.

23 pages, 1348 KiB  
Article
Time Course and Hazard Function: A Distributional Analysis of Fixation Duration in Reading
by Gary Feng
J. Eye Mov. Res. 2009, 3(2), 1-23; https://doi.org/10.16910/jemr.3.2.3 - 22 Dec 2009
Cited by 5 | Viewed by 55
Abstract
Reading processes affect not only the mean of fixation duration but also its distribution function. This paper introduces a set of hypotheses that link the timing and strength of a reading process to the hazard function of the fixation duration distribution. Analyses based on large corpora of reading eye movements show a surprisingly robust hazard function across languages, ages, individual differences, and a number of processing variables. The data suggest that eye movements are generated stochastically according to a stereotyped time course that is independent of reading variables. High-level reading processes, however, modulate eye movement programming by increasing or decreasing the momentary saccade rate within a narrow time window. Implications for theories and analyses of reading eye movements are discussed.
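The hazard function at the center of this analysis has a simple discrete-time estimate: among the fixations still ongoing at the start of a time bin, the fraction that terminate within that bin. The sketch below is a generic illustration of that estimator, not the paper's analysis pipeline; the function name and bin settings are assumptions made for the example.

```python
import numpy as np

def empirical_hazard(durations, bin_ms=25, max_ms=600):
    """Discrete-time empirical hazard of fixation durations:
    h[i] = (# fixations ending in bin i) / (# fixations still ongoing
    at the start of bin i). Returns bin start times and hazard values."""
    edges = np.arange(0, max_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(durations, bins=edges)
    # At risk in bin i = all fixations lasting at least edges[i] ms.
    at_risk = counts[::-1].cumsum()[::-1] + np.sum(durations >= max_ms)
    with np.errstate(divide="ignore", invalid="ignore"):
        hazard = np.where(at_risk > 0, counts / at_risk, 0.0)
    return edges[:-1], hazard
```

As a sanity check, exponentially distributed durations have a flat hazard: with mean 200 ms and 25 ms bins, every bin's hazard should sit near 1 − exp(−25/200) ≈ 0.118, whereas real fixation durations show the rise-and-plateau shape the paper analyzes.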

12 pages, 1770 KiB  
Article
Mixed Responses: Why Readers Spend Less Time at Unfavorable Landing Positions
by Gary Feng
J. Eye Mov. Res. 2009, 3(2), 1-12; https://doi.org/10.16910/jemr.3.2.2 - 12 Dec 2009
Cited by 2 | Viewed by 58
14 pages, 1402 KiB  
Article
Speed and Accuracy of Gaze Gestures
by Henna Heikkilä and Kari-Jouko Räihä
J. Eye Mov. Res. 2009, 3(2), 1-14; https://doi.org/10.16910/jemr.3.2.1 - 17 Nov 2009
Cited by 11 | Viewed by 72
Abstract
We conducted an experiment in which participants carried out six gaze gesture tasks. The gaze paths were analyzed to determine the speed and accuracy of the gaze gestures. The gestures took more time than we anticipated, and only the very fastest participants approached the expected speed. There was little difference in performance times between small and large gaze gestures, because participants reached significantly faster speeds when making large gestures than small ones. Curved shapes were found difficult to follow and time-consuming when followed properly. In general, the accuracy in following shapes was sometimes very poor. We believe that to improve the speed and accuracy of gaze gestures, richer feedback must be provided than in our experiment.
