Journal of Eye Movement Research is published by MDPI from Volume 18 Issue 1 (2025). Previous articles were published by another publisher in Open Access under a CC-BY (or CC-BY-NC-ND) licence, and they are hosted by MDPI on mdpi.com as a courtesy and upon agreement with Bern Open Publishing (BOP).

J. Eye Mov. Res., Volume 3, Issue 1 (August 2009) – 5 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
12 pages, 451 KiB  
Article
On-Line Syntactic and Semantic Influences in Reading Revisited
by Joel Pynte, Boris New and Alan Kennedy
J. Eye Mov. Res. 2009, 3(1), 1-12; https://doi.org/10.16910/jemr.3.1.5 - 17 Aug 2009
Cited by 3 | Viewed by 61
Abstract
This study is a follow-up to Pynte, New and Kennedy (2008), Journal of Eye Movement Research, 2(1):4, 1–11. A new series of multiple regression analyses was conducted on the French part of the Dundee corpus, using a new set of syntactic and semantic predictors. In line with our prior study, quite different patterns of results were obtained for function and content words. We conclude that syntactic processing operations during reading mainly concern function words and are carried out ahead of semantic processing.

19 pages, 990 KiB  
Article
Eye Movements and Attention in Visual Feature Search with Graded Target-Distractor-Similarity
by Carolin Wienrich, Uta Heße and Gisela Müller-Plath
J. Eye Mov. Res. 2009, 3(1), 1-19; https://doi.org/10.16910/jemr.3.1.4 - 17 Jul 2009
Cited by 8 | Viewed by 80
Abstract
We conducted a visual feature search experiment in which we varied the target-distractor similarity in four steps, the number of items (4, 6, and 8), and the presence of the target. In addition to classical search parameters like error rate and reaction time (RT), we analyzed saccade amplitudes, fixation durations, and the portion of reinspections (recurred fixation on an item with at least one different item fixated in between) and refixations (recurred fixation on an item without a different item fixated in between) per trial. When target-distractor similarity was increased, more errors and longer RTs were observed, accompanied by shorter saccade amplitudes, longer fixation durations, and more reinspections/refixations. An increasing set size resulted in longer saccade amplitudes and shorter fixation durations. Finally, in target-absent trials we observed more reinspections than refixations, whereas in target-present trials refixations were more frequent than reinspections. The results on saccade amplitude and fixation duration support saliency-based search theories that assume an attentional focus variable in size according to task demands and a variable attentional dwell time. Reinspections and refixations seem to be a sign of incomplete perceptual processing of items rather than of memory failure.
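The reinspection/refixation distinction drawn in this abstract can be made concrete with a small counting routine. The following Python sketch is not taken from the paper; it is a minimal illustration of the two definitions, assuming fixations have already been assigned to item identifiers.

# Illustrative only: count refixations and reinspections from a sequence of
# fixated item IDs, following the definitions quoted in the abstract.
# Refixation: recurred fixation on an item with no other item fixated in between.
# Reinspection: recurred fixation on an item with at least one other item in between.
def count_revisits(fixated_items):
    refixations = 0
    reinspections = 0
    seen = set()
    for i, item in enumerate(fixated_items):
        if i > 0 and item == fixated_items[i - 1]:
            refixations += 1      # same item as the immediately preceding fixation
        elif item in seen:
            reinspections += 1    # item fixated earlier, with other items in between
        seen.add(item)
    return refixations, reinspections

# Example trial: one refixation on item B and one reinspection of item A.
print(count_revisits(["A", "B", "B", "C", "A", "D"]))  # -> (1, 1)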

7 pages, 825 KiB  
Article
Microsaccades Under Monocular Viewing Conditions
by Wilhelm Bernhard Kloke, Wolfgang Jaschinski and Stephanie Jainta
J. Eye Mov. Res. 2009, 3(1), 1-7; https://doi.org/10.16910/jemr.3.1.2 - 3 Jul 2009
Cited by 3 | Viewed by 66
Abstract
Among the eye movements during fixation, the function of small saccades occurring quite commonly at fixation is still unclear. It has been reported that a substantial number of these microsaccades seem to occur in only one of the eyes. The aim of the present study is to investigate microsaccades in monocular stimulation conditions. Although this is an artificial test condition that does not occur in natural vision, this monocular presentation paradigm allows for a critical test of a presumptive monocular mechanism of saccade generation. Results in these conditions can be compared with the normal binocular stimulation mode. We checked the statistical properties of microsaccades under monocular stimulation conditions and found no indication of specific interactions for monocularly detected small saccades, which might be present if they were based on a monocular physiological activation mechanism.

10 pages, 380 KiB  
Article
Gaze Interaction Enhances Problem Solving: Effects of Dwell-Time Based, Gaze-Augmented, and Mouse Interaction on Problem-Solving Strategies and User Experience
by Roman Bednarik, Tersia Gowases and Markku Tukiainen
J. Eye Mov. Res. 2009, 3(1), 1-10; https://doi.org/10.16910/jemr.3.1.3 - 9 Jun 2009
Cited by 28 | Viewed by 104
Abstract
It is still unknown whether the very application of gaze for interaction has effects on the cognitive strategies users employ and how these effects materialize. We conducted a between-subjects experiment in which thirty-six participants interacted with a computerized problem-solving game using one of three interaction modalities: dwell-time, gaze-augmented interaction, and the conventional mouse. We observed how using each of the modalities affected performance, problem-solving strategies, and user experience. Users with gaze-augmented interaction outperformed the other groups on several problem-solving measures, committed fewer errors, were more immersed, and had a better user experience. The results give insight into the cognitive processes during interaction using gaze and have implications for the design of eye-tracking interfaces.

13 pages, 348 KiB  
Article
Quick Models for Saccade Amplitude Prediction
by Oleg V. Komogortsev, Young Sam Ryu and Do Hyong Koh
J. Eye Mov. Res. 2009, 3(1), 1-13; https://doi.org/10.16910/jemr.3.1.1 - 3 Jun 2009
Cited by 5 | Viewed by 81
Abstract
This paper presents a new saccade amplitude prediction model. The model is based on a Kalman filter and regression analysis. The aim of the model is to predict a saccade’s amplitude extremely quickly, i.e., within two eye position samples at the onset of a saccade. Specifically, the paper explores saccade amplitude prediction considering one or two samples at the onset of a saccade. The models’ prediction performance was tested with 35 subjects. The amplitude accuracy results yielded approximately 5.26° prediction error, while the error for direction prediction was 5.3% for the one-sample model and 1.5% for the two-sample model. The practical use of the proposed model lies in the area of real-time gaze-contingent compression and extreme eye-gaze-aware interaction applications. The paper provides a theoretical evaluation of the benefits of saccade amplitude prediction for gaze-contingent multimedia compression, estimating a 21% improvement in compression for short network delays.
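As a rough illustration of the two-stage idea described in this abstract (a Kalman filter over the first eye position samples, followed by a regression mapping to amplitude), here is a minimal Python sketch. It is not the authors' model: the constant-velocity state model, the noise settings, the sampling rate, and the linear amplitude regression (predict_amplitude with coefficients a and b) are all assumptions chosen purely for illustration; a real system would fit the regression to recorded saccades.

# Illustrative sketch only, not the model from the paper.
# A 1-D constant-velocity Kalman filter estimates eye velocity from one or two
# noisy position samples at saccade onset; a hypothetical linear regression then
# maps the velocity estimate to a predicted saccade amplitude.
import numpy as np

DT = 1.0 / 1000.0                            # assumed 1000 Hz sampling interval
F = np.array([[1.0, DT], [0.0, 1.0]])        # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])                   # we measure position only
Q = np.array([[1e-4, 0.0], [0.0, 1e-2]])     # assumed process noise
R = np.array([[0.05]])                       # assumed measurement noise (deg^2)

def kalman_step(x, P, z):
    # One predict/update cycle for a single position measurement z (degrees).
    x = F @ x
    P = F @ P @ F.T + Q
    y = z - H @ x                            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

def predict_amplitude(onset_samples, a=0.02, b=0.5):
    # Hypothetical regression: amplitude = a * |estimated onset velocity| + b.
    x = np.array([onset_samples[0], 0.0])    # initial state: first sample, zero velocity
    P = np.eye(2)
    for z in onset_samples:
        x, P = kalman_step(x, P, z)
    return a * abs(x[1]) + b

# Example: predict amplitude from the first two position samples of a saccade.
print(round(predict_amplitude([10.0, 10.4]), 2))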
