The Journal of Eye Movement Research is published by MDPI from Volume 18, Issue 1 (2025) onwards. Earlier articles were published in Open Access under a CC-BY (or CC-BY-NC-ND) licence by Bern Open Publishing (BOP) and are hosted by MDPI on mdpi.com as a courtesy and upon agreement with BOP.

J. Eye Mov. Res., Volume 9, Issue 3 (February 2016) – 6 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
13 pages, 1079 KiB  
Article
Eye Movement Patterns in Solving Science Ordering Problems
by Hui Tang, Elizabeth Day, Lisa Kendhammer, James N. Moore, Scott A. Brown and Norbert J. Pienta
J. Eye Mov. Res. 2016, 9(3), 1-13; https://doi.org/10.16910/jemr.9.3.6 - 16 May 2016
Cited by 13 | Viewed by 100
Abstract
Dynamic biological processes, such as intracellular signaling pathways, are commonly taught using static representations of individual steps in the pathway. As a result, students often memorize these steps for examination purposes, but fail to appreciate the cascade nature of the pathway. In this study, we compared eye movement patterns for students who correctly ordered the components of an important pathway responsible for vasoconstriction against those who did not. Similarly, we compared the patterns of students who learned the material using three-dimensional animations previously associated with improved student understanding of this pathway against those who learned the material using static images extracted from those animations. For two of the three ordering problems, students with higher scores had shorter total fixation duration when ordering the components and spent less time (fixating) in the planning and solving phases of the problem-solving process. This finding was supported by the scanpath patterns, which demonstrated that students who correctly solved the problems used more efficient problem-solving strategies.
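The comparison described above amounts to summing fixation durations per problem-solving phase and contrasting group means. A minimal sketch of that bookkeeping, using made-up field names and values rather than the authors' data or code:

```python
# Illustrative sketch (not the authors' analysis): compare total fixation
# duration per problem-solving phase between students who ordered the
# components correctly and those who did not. All values are invented.
from collections import defaultdict
from statistics import mean

# Each fixation: (participant_id, correct_solution, phase, duration_ms)
fixations = [
    ("p01", True,  "planning", 220), ("p01", True,  "solving", 180),
    ("p02", False, "planning", 340), ("p02", False, "solving", 410),
]

# Sum fixation durations per participant and phase.
totals = defaultdict(float)
group = {}
for pid, correct, phase, dur in fixations:
    totals[(pid, phase)] += dur
    group[pid] = correct

# Average total fixation duration per phase, split by solution correctness.
for phase in ("planning", "solving"):
    for correct in (True, False):
        vals = [t for (pid, ph), t in totals.items()
                if ph == phase and group[pid] == correct]
        if vals:
            label = "correct" if correct else "incorrect"
            print(f"{phase:9s} {label:9s} mean total fixation: {mean(vals):.0f} ms")
```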

5 pages, 379 KiB  
Article
The Interference Effect of Concurrent Working Memory Task on Visual Inhibitory Control
by Brad M. Hong, Susan Weiner, Michael Clancy and Minjoon Kouh
J. Eye Mov. Res. 2016, 9(3), 1-5; https://doi.org/10.16910/jemr.9.3.5 - 27 Apr 2016
Viewed by 52
Abstract
We examined the interference between inhibitory control of a saccadic eye movement and a working memory task. This study was motivated by the observation that people are susceptible to cognitive errors when they are preoccupied. Subjects were instructed to make an anti-saccade, or to look in the opposite direction of a visual stimulus, thereby exercising inhibitory control over the reflexive eye movement towards a salient object. At the same time, the subjects were instructed to memorize a random sequence of digits that were read out to them, thereby engaging their working memory. We measured the success of an eye movement by rapidly switching between images and asking the subjects what they saw. We found that these concurrent cognitive tasks significantly degraded anti-saccade performance.

11 pages, 831 KiB  
Article
Examining the Validity of the Total Dwell Time of Eye Fixations to Identify Landmarks in a Building
by Pepijn Viaene, Pieter Vansteenkiste, Matthieu Lenoir, Alain De Wulf and Philippe De Maeyer
J. Eye Mov. Res. 2016, 9(3), 1-11; https://doi.org/10.16910/jemr.9.3.4 - 25 Apr 2016
Cited by 17 | Viewed by 83
Abstract
It is uncertain to what extent the duration of eye fixations reflects the use of landmarks during navigation. Therefore, a study was conducted in which eye tracking data and route descriptions were collected from 23 participants who were highly familiar with the indoor test environment. Based on the total fixation time on different landmark categories, two measures were calculated, namely the calculated landmark category use and the probable landmark category use. Based on the ratio between these measures, an object was considered to be a landmark or not. The results were then compared with the objects referred to in written route instructions. It can be concluded that this method provides promising results for identifying landmarks: the identification criterion strongly reflected the landmarks that came forward in the written route instructions. However, the identification of structural landmarks remains a problem.
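One way to read the dwell-time ratio criterion described above is as a comparison between each category's observed share of total dwell time and an expected share. The following is only an illustrative sketch under that assumption; the paper's exact definitions of calculated and probable landmark category use, and its decision threshold, are not reproduced here:

```python
# Illustrative sketch only: one possible reading of the dwell-time ratio
# criterion. Category names, counts, and the cut-off are assumptions.

# Total fixation (dwell) time per object category, in seconds.
dwell_time = {"doors": 42.0, "posters": 18.5, "stairs": 27.0, "plants": 3.5}

# "Probable" use here: the share expected if attention were spread in
# proportion to how often each category occurs along the route.
category_counts = {"doors": 30, "posters": 10, "stairs": 5, "plants": 12}

total_dwell = sum(dwell_time.values())
total_count = sum(category_counts.values())

for cat in dwell_time:
    calculated_use = dwell_time[cat] / total_dwell     # observed share of dwell time
    probable_use = category_counts[cat] / total_count  # expected share from occurrence
    ratio = calculated_use / probable_use
    is_landmark = ratio > 1.0                          # assumed cut-off
    print(f"{cat:8s} ratio = {ratio:4.2f} -> {'landmark' if is_landmark else 'not a landmark'}")
```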

17 pages, 1333 KiB  
Article
Idiosyncratic Feature-Based Gaze Mapping
by Pieter Blignaut
J. Eye Mov. Res. 2016, 9(3), 1-17; https://doi.org/10.16910/jemr.9.3.2 - 21 Apr 2016
Cited by 15 | Viewed by 71
Abstract
It is argued that the polynomial expressions that are normally used for remote, video-based, low-cost eye tracking systems are not always ideal to accommodate individual differences in eye cleft, position of the eye in the socket, corneal bulge, astigmatism, etc. A procedure to identify the set of polynomial expressions that provides the best possible accuracy for a specific individual is proposed. It is also proposed that regression coefficients be recalculated in real time, based on a subset of calibration points in the region of the current gaze, and that a real-time correction be applied, based on the offsets from calibration targets that are close to the estimated point of regard. It was found that if no correction is applied, the choice of polynomial is critically important to achieve an accuracy that is just acceptable. Previously identified polynomial sets were confirmed to provide good results in the absence of any correction procedure. By applying real-time correction, the accuracy of any given polynomial improves while the choice of polynomial becomes less critical. Identification of the best polynomial set and correction technique per participant led to an average error of 0.32° (SD = 0.10°) over 134 participant recordings. The proposed improvements could lead to low-cost systems that are accurate and fast enough for reading research or other studies where high accuracy is expected at frame rates in excess of 200 Hz.
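The two ideas in the abstract, fitting a per-participant polynomial mapping and nudging estimates with offsets from nearby calibration targets, can be illustrated with a small least-squares sketch. This is not the author's implementation; the polynomial terms, the synthetic data, and the k-nearest-target correction are assumptions for illustration:

```python
# Minimal sketch, not the proposed method itself: (1) fit a polynomial mapping
# from eye features to screen coordinates, (2) correct estimates using offsets
# observed at nearby calibration targets.
import numpy as np

def design_matrix(x, y):
    # One candidate second-order polynomial set; other sets could be swapped
    # in and compared per participant.
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit(eye_xy, target_xy):
    A = design_matrix(eye_xy[:, 0], eye_xy[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, target_xy, rcond=None)  # shape (6, 2)
    return coeffs

def estimate(coeffs, eye_xy):
    return design_matrix(eye_xy[:, 0], eye_xy[:, 1]) @ coeffs

def corrected_estimate(coeffs, eye_xy, cal_targets, cal_offsets, k=3):
    """Shift each raw estimate by the mean residual of the k nearest
    calibration targets (a simple stand-in for the proposed correction)."""
    raw = estimate(coeffs, eye_xy)
    out = raw.copy()
    for i, p in enumerate(raw):
        d = np.linalg.norm(cal_targets - p, axis=1)
        nearest = np.argsort(d)[:k]
        out[i] = p - cal_offsets[nearest].mean(axis=0)
    return out

# Synthetic calibration data: eye features (e.g. pupil-glint vectors) and
# known target positions on screen.
rng = np.random.default_rng(0)
eye = rng.uniform(-1, 1, (25, 2))
targets = 500 * eye + 10 * rng.normal(size=(25, 2)) + 960
coeffs = fit(eye, targets)
offsets = estimate(coeffs, eye) - targets  # residual at each calibration target
print(corrected_estimate(coeffs, eye[:3], targets, offsets))
```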

16 pages, 838 KiB  
Article
Comparing the Difficulty of Tasks Using Eye Tracking Combined with Subjective and Behavioural Criteria
by Magdalena Andrzejewska and Anna Stolińska
J. Eye Mov. Res. 2016, 9(3), 1-16; https://doi.org/10.16910/jemr.9.3.3 - 16 Apr 2016
Cited by 35 | Viewed by 101
Abstract
In this article, we examined whether the eye movements of school-age students differ as they solve tasks of different difficulty levels in science and natural science subjects (computer science, mathematics, physics, biology). Categories of task difficulty were established on the basis of two types of criteria: subjective (an evaluation made by the subjects) and behavioural (connected to the correctness of their solutions). The relationships of these criteria with the visual activity parameters, which were considered to be indicators of mental effort, were identified. Analysis of the data revealed discrepancies between the subjective and behavioural categorizations of task difficulty. A significant and strong correlation was found between task difficulty level, determined by the percentage of correct answers, and the fixation parameters, although no such relationship with the blink parameters was found. The eye movement parameters considered to be mental effort indicators did not correlate with students' opinions about task difficulty. On the basis of these findings, the average fixation duration can be taken as an index of the difficulty level of the task being solved.
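The reported link between the behavioural difficulty criterion and the fixation parameters is, in essence, a correlation between per-task averages. A toy sketch of that computation with invented numbers (not the study's data or analysis script):

```python
# Illustrative sketch: relate per-task average fixation duration to the
# behavioural difficulty criterion (percentage of correct answers).
# Task names and values are made up.
from statistics import mean

# task id -> (average fixation duration in ms, % correct answers)
tasks = {
    "math_1": (230, 85), "math_2": (310, 42),
    "phys_1": (250, 74), "bio_1":  (205, 91), "cs_1":   (290, 55),
}

fix_dur = [v[0] for v in tasks.values()]
correct = [v[1] for v in tasks.values()]

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A strongly negative value would mirror the reported pattern: harder tasks
# (fewer correct answers) go with longer average fixations.
print(f"r = {pearson(fix_dur, correct):.2f}")
```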

8 pages, 534 KiB  
Article
ELAN Analysis Companion (EAC): A Software Tool for Time-Course Analysis of ELAN-Annotated Data
by Richard Andersson and Olof Sandgren
J. Eye Mov. Res. 2016, 9(3), 1-8; https://doi.org/10.16910/jemr.9.3.1 - 25 Feb 2016
Cited by 6 | Viewed by 106
Abstract
ELAN is widely used, free (in both senses) annotation software for behavioral or other events that unfold over time. We report on and release a stand-alone program that expands on ELAN's capabilities in two ways: (1) it allows the researcher to plot and export time-course analysis data directly from ELAN's native annotation files, allowing for hassle-free data extraction in the time domain, e.g., for visual-world paradigm studies; and (2) it allows the researcher to weight ELAN's built-in annotator reliability rating based on the duration of the coded events. This software is released under an open license.
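Weighting reliability by event duration, as in point (2), can be illustrated with simple interval bookkeeping. The sketch below is a generic example of duration-weighted agreement between two coders, not EAC's algorithm or ELAN's file format:

```python
# Generic illustration of duration-weighted agreement between two annotators.
# Annotations are (start_ms, end_ms, label) tuples, e.g. exported from ELAN
# tiers; the example data are invented.
coder_a = [(0, 1000, "gaze_object"), (1000, 2500, "gaze_face"), (2500, 4000, "off")]
coder_b = [(0, 1200, "gaze_object"), (1200, 2500, "gaze_face"), (2500, 4000, "off")]

def label_at(annotations, t):
    """Return the label covering time t, or None."""
    for start, end, label in annotations:
        if start <= t < end:
            return label
    return None

# Walk the timeline in small steps and weight agreement by time covered,
# so long events count more than brief ones (unlike per-event agreement).
step = 10  # ms
end_time = max(end for _, end, _ in coder_a + coder_b)
agree = total = 0
for t in range(0, end_time, step):
    a, b = label_at(coder_a, t), label_at(coder_b, t)
    if a is not None or b is not None:
        total += 1
        agree += (a == b)

print(f"duration-weighted agreement: {agree / total:.2%}")
```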
