Journal of Eye Movement Research is published by MDPI from Volume 18 Issue 1 (2025). Previous articles were published by another publisher in Open Access under a CC-BY (or CC-BY-NC-ND) licence, and they are hosted by MDPI on mdpi.com as a courtesy and upon agreement with Bern Open Publishing (BOP).

J. Eye Mov. Res., Volume 6, Issue 2 (August 2013) – 5 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF as the official format. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
11 pages, 1079 KiB  
Article
Dynamic Programming for Re-Mapping Noisy Fixations in Translation Tasks
by Michael Carl
J. Eye Mov. Res. 2013, 6(2), 1-11; https://doi.org/10.16910/jemr.6.2.5 - 5 Aug 2013
Cited by 8 | Viewed by 73
Abstract
Eyetrackers which allow for free head movements are in many cases imprecise to the extent that reading patterns become heavily distorted. The poor usability and interpretability of these gaze patterns is corroborated by a “naïve” fixation-to-symbol mapping, which often wrongly maps the possibly drifted center of the observed fixation onto the symbol directly below it. In this paper I extend this naïve fixation-to-symbol mapping by introducing background knowledge about the translation task. In a first step, the sequence of fixation-to-symbol mappings is extended into a lattice of several possible fixated symbols, including those on the line above and below the naïve fixation mapping. In a second step, a dynamic programming algorithm applies a number of heuristics to find the best path through the lattice, based on the probable distance in characters, in words and in pixels between successive fixations and the symbol locations, so as to smooth the gazing path according to the background gazing model. A qualitative and quantitative evaluation shows that the algorithm increases the accuracy of the re-mapped symbol sequence.
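The lattice-and-dynamic-programming step described in this abstract lends itself to a brief illustration. The sketch below is a minimal, assumption-laden outline of the general technique, not the paper's implementation: the candidate sets, the distance function and the transition cost are placeholders.

```python
# Minimal sketch of re-mapping noisy fixations with a Viterbi-style pass over
# a lattice of candidate symbols. dist() and trans_cost() are assumed helpers.

def remap_fixations(fixations, candidates, dist, trans_cost):
    """fixations  : observed fixation centres, e.g. (x, y) pixel tuples.
    candidates : candidates[i] lists the possible symbols for fixation i
                 (the naive symbol plus symbols on the lines above/below).
    dist(f, s) : pixel distance between fixation f and symbol s.
    trans_cost(a, b): penalty for moving from symbol a to symbol b, e.g. a
                 weighted character/word distance along the text.
    Returns the lowest-cost symbol sequence through the lattice."""
    n = len(fixations)
    # best[i][j] = (accumulated cost, index of best predecessor) for candidates[i][j]
    best = [[(dist(fixations[0], s), None) for s in candidates[0]]]
    for i in range(1, n):
        column = []
        for sym in candidates[i]:
            local = dist(fixations[i], sym)
            cost, back = min(
                (best[i - 1][k][0] + trans_cost(prev, sym) + local, k)
                for k, prev in enumerate(candidates[i - 1])
            )
            column.append((cost, back))
        best.append(column)
    # Backtrack from the cheapest final state to recover the smoothed path.
    j = min(range(len(best[-1])), key=lambda k: best[-1][k][0])
    path = []
    for i in range(n - 1, -1, -1):
        path.append(candidates[i][j])
        if i > 0:
            j = best[i][j][1]
    return list(reversed(path))
```

A pass of this kind stays linear in the number of fixations and quadratic in the (small) number of candidates per fixation, so it scales comfortably to whole reading sessions.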
17 pages, 448 KiB  
Article
Implicit Prosody and Contextual Bias in Silent Reading
by Kate McCurdy, Gerrit Kentner and Shravan Vasishth
J. Eye Mov. Res. 2013, 6(2), 1-17; https://doi.org/10.16910/jemr.6.2.4 - 19 Jul 2013
Cited by 8 | Viewed by 64
Abstract
Eye-movement research on implicit prosody has found effects of lexical stress on syntactic ambiguity resolution, suggesting that metrical well-formedness constraints interact with syntactic category assignment. Building on these findings, the present eyetracking study investigates whether contextual bias can modulate the effects of metrical structure on syntactic ambiguity resolution in silent reading. Contextual bias and potential stress-clash in the ambiguous region were crossed in a 2 × 2 design. Participants read biased context sentences followed by temporarily ambiguous test sentences. In the three-word ambiguous region, main effects of lexical stress were dominant, while early effects of context were absent. Potential stress clash yielded a significant increase in first-pass regressions and re-reading probability across the three words. In the disambiguating region, the disambiguating word itself showed increased processing difficulty (lower skipping and increased re-reading probability) when the disambiguation engendered a stress clash configuration, while the word immediately following showed main effects of context in those same measures. Taken together, effects of lexical stress upon eye movements were swift and pervasive across first-pass and second-pass measures, while effects of context were relatively delayed. These results indicate a strong role for implicit meter in guiding parsing, one that appears insensitive to higher-level constraints. Our findings are problematic for two classes of models, the two-stage garden-path model and the constraint-based competition-integration model, but can be explained by a variation on the two-stage model, the unrestricted race model.
17 pages, 1401 KiB  
Article
Tracking Visual Scanning Techniques in Training Simulation for Helicopter Landing
by Maxi Robinski and Michael Stein
J. Eye Mov. Res. 2013, 6(2), 1-17; https://doi.org/10.16910/jemr.6.2.3 - 19 Jul 2013
Cited by 21 | Viewed by 78
Abstract
Research has yielded no consistent findings about how scanning techniques differ between experienced and inexperienced helicopter pilots depending on mission demands. To explore this question, 33 military pilots performed two different landing maneuvers in a flight simulator. The data included scanning data (eye tracking) as well as performance, workload and a self-assessment of scanning techniques (interviews). Fifty-four percent of scanning-related differences between pilots resulted from the factor combination of expertise and mission demands. A comparison of eye-tracking and interview data revealed that pilots were not always clearly aware of their actual scanning techniques. Eye tracking as a feedback tool for pilots offers a new opportunity to substantiate their training as well as research interests within the German Armed Forces.
32 pages, 5887 KiB  
Article
Parsing Eye Movement Analysis of Scanpaths of Naïve Viewers of Art: How Do We Differentiate Art from Non-Art Pictures?
by Wolfgang H. Zangemeister and Claudio Privitera
J. Eye Mov. Res. 2013, 6(2), 1-32; https://doi.org/10.16910/jemr.6.2.2 - 14 May 2013
Cited by 5 | Viewed by 57
Abstract
Relating to G. Buswell’s early work, we posed the questions: How do art-naïve people look at pairs of artful pictures and similarly looking snapshots? Does the analysis of their eye movement recordings reveal a difference in their perception? Parsing eye scanpaths using string editing, similarity coefficients can be sorted out and represented for the two measures ‘Sp’ (Similarities of position) and ‘Ss’ (Similarities of sequences). 25 picture pairs were shown 5 times, with no specific viewing task, to 7 ‘art-naïve’ subjects, so as to avoid confounding the results through specific art knowledge. A significant difference between scanpaths of artful pictures compared to snapshots was not found in our subjects’ repeated viewing sessions. Auto-similarity (same subject viewing the same picture) and cross-similarity (different subjects viewing the same picture) significantly demonstrated this result, for sequences of eye fixations (Ss) as well as their positions (Sp): in the case of global (different subjects and different pairs) sequential similarity Ss, we found that about 84 percent of the picture pairs were viewed with very low similarity, in quasi-random mode within the range of random values. Only in 4 out of 25 artful-picture/snapshot pairs was a high similarity found. A specific restricted set of representative regions in the internal cognitive model of the picture is essential for the brain to perceive and eventually recognize the picture: this representative set is quite similar for different subjects and different picture pairs, independently of their art versus non-art features, which were in most cases not recognized by our subjects. Furthermore, our study shows that the distinction of art versus non-art has vanished, causing confusion about the ratio of signal and noise in the communication between artists and viewers of art.
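As a hedged illustration of the string-editing approach to scanpath similarity mentioned in this abstract, the sketch below computes a sequence-based coefficient (Ss-like) and a position-based coefficient (Sp-like) over region-label strings. The exact coefficients and normalisation used in the paper may differ; the region labels and the Jaccard-style overlap are assumptions.

```python
# Toy scanpath similarity via string editing; not the authors' exact measures.

def edit_distance(a, b):
    """Classic Levenshtein distance between two region-label strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def sequence_similarity(a, b):
    """Ss-style coefficient: 1 minus the normalised edit distance."""
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

def position_similarity(a, b):
    """Sp-style coefficient: overlap of visited regions, ignoring order."""
    if not a and not b:
        return 1.0
    return len(set(a) & set(b)) / len(set(a) | set(b))

# Example: two viewings of the same picture, fixated regions labelled A-F.
print(sequence_similarity("ABCDEF", "ABCFED"))  # order partly preserved
print(position_similarity("ABCDEF", "ABCFED"))  # identical region coverage
```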
15 pages, 580 KiB  
Article
Eye Movements in Gaze Interaction
by Emilie Møllenbach, John Paulin Hansen and Martin Lillholm
J. Eye Mov. Res. 2013, 6(2), 1-15; https://doi.org/10.16910/jemr.6.2.1 - 9 May 2013
Cited by 50 | Viewed by 106
Abstract
Gaze, as a sole input modality, must support complex navigation and selection tasks. Gaze interaction combines specific eye movements and graphic display objects (GDOs). This paper suggests a unifying taxonomy of gaze interaction principles. The taxonomy deals with three types of eye movements: fixations, saccades and smooth pursuits, and three types of GDOs: static, dynamic, or absent. This taxonomy is qualified through related research and is the first main contribution of this paper. The second part of the paper offers an experimental exploration of single stroke gaze gestures (SSGG). The main findings suggest (1) that different lengths of SSGG can be used for interaction, (2) that GDOs are not necessary for successful completion, and (3) that SSGG are comparable to dwell time selection.
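To make the 3 × 3 structure of the taxonomy concrete, here is a toy enumeration of the design space; the techniques placed in the cells (dwell-time selection, SSGG) are illustrative assumptions drawn from the abstract, not an exhaustive mapping from the paper.

```python
# Toy rendering of the eye-movement x GDO taxonomy described above.
from itertools import product

EYE_MOVEMENTS = ("fixation", "saccade", "smooth pursuit")
GDO_TYPES = ("static GDO", "dynamic GDO", "no GDO")

# Each cell pairs one eye-movement type with one GDO type; interaction
# techniques occupy the cells.
taxonomy = {cell: [] for cell in product(EYE_MOVEMENTS, GDO_TYPES)}
taxonomy[("fixation", "static GDO")].append("dwell-time selection")
taxonomy[("saccade", "no GDO")].append("single stroke gaze gesture (SSGG)")

for (movement, gdo), techniques in taxonomy.items():
    if techniques:
        print(f"{movement} + {gdo}: {', '.join(techniques)}")
```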