Next Issue
Volume 10, December
Previous Issue
Volume 10, October
 
 
Journal of Eye Movement Research is published by MDPI from Volume 18 Issue 1 (2025). Previous articles were published by another publisher in Open Access under a CC-BY (or CC-BY-NC-ND) licence, and they are hosted by MDPI on mdpi.com as a courtesy and upon agreement with Bern Open Publishing (BOP).

J. Eye Mov. Res., Volume 10, Issue 5 (October 2017) – 11 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF as the official format. To view a paper in PDF format, click the "PDF Full-text" link and open the file with the free Adobe Reader.
4 pages, 191 KiB  
Article
Eye Tracking and Visualization: Introduction to the Special Thematic Issue of the Journal of Eye Movement Research
by Michael Burch, Lewis L. Chuang, Andrew Duchowski, Daniel Weiskopf and Rudolf Groner
J. Eye Mov. Res. 2017, 10(5), 1-4; https://doi.org/10.16910/jemr.10.5.1 - 25 May 2018
Cited by 4 | Viewed by 47
Abstract
There is a growing interest in eye tracking technologies applied to support traditional visualization techniques such as diagrams, charts, maps, or plots, whether static, animated, or interactive. More complex data analyses are required to derive knowledge and meaning from the data. Eye tracking systems serve that purpose in combination with biological and computer vision, cognition, perception, visualization, human-computer interaction, as well as usability and user experience research. The 10 articles collected in this thematic special issue provide interesting examples of how sophisticated methods of data analysis and representation enable researchers to discover and describe fundamental spatio-temporal regularities in the data. The human visual system, supported by appropriate visualization tools, enables the human operator to solve complex tasks, such as understanding and interpreting three-dimensional medical images, controlling air traffic with radar displays, supporting instrument flight tasks, or interacting with virtual realities. The development and application of new visualization techniques is of major importance for future technological progress.
11 pages, 987 KiB  
Article
Using Simultaneous Scanpath Visualization to Investigate the Relationship Between Accuracy and Eye Movement During Medical Image Interpretation
by Alan Davies, Simon Harper, Markel Vigo and Caroline Jay
J. Eye Mov. Res. 2017, 10(5), 1-11; https://doi.org/10.16910/jemr.10.5.11 - 24 Feb 2018
Cited by 4 | Viewed by 51
Abstract
In this paper, we explore how a number of novel methods for visualizing and analyzing differences in eye-tracking data, including scanpath length, Levenshtein distance, and visual transition frequency, can help to elucidate the methods clinicians use for interpreting 12-lead electrocardiograms (ECGs). Visualizing the differences between multiple participants' scanpaths simultaneously allowed us to answer questions such as: do clinicians fixate randomly on the ECG, or do they apply a systematic approach? Is there a relationship between interpretation accuracy and visual behavior? Results indicate that practitioners have very different visual search strategies. Clinicians who incorrectly interpret the image show greater scanpath variability than those who interpret it correctly, indicating that differences between practitioners in terms of accuracy are reflected in their eye-movement behavior. The variation across practitioners is likely the result of differences in training, clinical role, and expertise.
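The Levenshtein distance mentioned above treats each scanpath as a string of area-of-interest (AOI) labels and counts the insertions, deletions, and substitutions needed to turn one sequence into the other. A minimal sketch — the AOI labels and example sequences are hypothetical, not taken from the study:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two AOI-label sequences (scanpaths)."""
    prev = list(range(len(b) + 1))  # distances for the empty prefix of a
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1  # substitution cost
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost))  # match/substitution
        prev = curr
    return prev[-1]

# Two hypothetical scanpaths over AOIs labelled A..F.
print(levenshtein("ABACDF", "ABCDEF"))  # -> 2
```

Lower distances between participants would indicate shared systematic strategies; the paper's own encoding of ECG regions into AOIs may differ from this single-letter scheme.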

17 pages, 14743 KiB  
Article
Visual Multi-Metric Grouping of Eye-Tracking Data
by Ayush Kumar, Rudolf Netzel, Michael Burch, Daniel Weiskopf and Klaus Mueller
J. Eye Mov. Res. 2017, 10(5), 1-17; https://doi.org/10.16910/jemr.10.5.10 - 14 Feb 2018
Cited by 13 | Viewed by 53
Abstract
We present an algorithmic and visual grouping of participants and eye-tracking metrics derived from recorded eye-tracking data. Our method utilizes two well-established visualization concepts. First, parallel coordinates are used to provide an overview of the metrics used, their interactions, and their similarities, which helps select suitable metrics that describe characteristics of the eye-tracking data. Furthermore, parallel-coordinates plots enable an analyst to test the effect of combining a subset of metrics into a newly derived eye-tracking metric. Second, a similarity matrix visualization is used to visually represent the affine combination of metrics, utilizing an algorithmic grouping of subjects that leads to distinct visual groups of similar behavior. To keep the diagrams of the matrix visualization simple and understandable, we visually encode our eye-tracking data into the cells of a similarity matrix of participants. The algorithmic grouping is performed with a clustering based on the affine combination of metrics, which is also the basis for computing the similarity values of the similarity matrix. To illustrate the usefulness of our visualization, we applied it to an eye-tracking data set involving the reading behavior of metro maps by up to 40 participants. Finally, we discuss limitations and scalability issues of the approach, focusing on visual and perceptual issues.
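An affine combination of normalized metrics can be sketched as follows; the metric names, values, and weights below are illustrative assumptions, not data from the paper. Because the weights sum to 1, the derived metric stays in the normalized range, and absolute differences of the derived values give a simple participant-by-participant similarity matrix:

```python
import numpy as np

# Hypothetical per-participant metrics (rows: participants; columns:
# e.g. mean fixation duration (ms), saccade amplitude (deg), fixation count).
metrics = np.array([
    [220.0, 3.1, 140.0],
    [210.0, 3.0, 150.0],
    [480.0, 1.2, 300.0],
])

# Normalize each metric to [0, 1] so differing units do not dominate.
lo, hi = metrics.min(axis=0), metrics.max(axis=0)
norm = (metrics - lo) / (hi - lo)

# Affine combination: weights sum to 1, yielding one derived metric.
w = np.array([0.5, 0.3, 0.2])
derived = norm @ w

# Pairwise similarity: 1 minus the absolute difference of derived values.
sim = 1.0 - np.abs(derived[:, None] - derived[None, :])
```

In the paper, such similarity values also drive a clustering of subjects; here, the matrix alone already separates the two similar participants from the third.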

16 pages, 984 KiB  
Article
Uncertainty Visualization of Gaze Estimation to Support Operator-Controlled Calibration
by Almoctar Hassoumi, Vsevolod Peysakhovich and Christophe Hurter
J. Eye Mov. Res. 2017, 10(5), 1-16; https://doi.org/10.16910/jemr.10.5.6 - 14 Feb 2018
Viewed by 36
Abstract
In this paper, we investigate how visualization assets can support the qualitative evaluation of gaze estimation uncertainty. Although eye tracking data are commonly available, little has been done to visually investigate the uncertainty of recorded gaze information. This paper tries to fill this gap by using innovative uncertainty computation and visualization. Given a gaze processing pipeline, we estimate the location of the gaze position in the world camera. To do so, we developed our own gaze data processing, which gives us access to every stage of the data transformation and thus to the uncertainty computation. To validate our gaze estimation pipeline, we designed an experiment with 12 participants and showed that the correction methods we proposed reduced the Mean Angular Error by about 1.32 cm, aggregating the results of all 12 participants. The Mean Angular Error is 0.25° (SD = 0.15°) after correction of the estimated gaze. Next, to support the qualitative assessment of these data, we provide a map that encodes the actual uncertainty from the user's point of view.
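Angular error, the accuracy measure reported above, is the angle between the estimated gaze direction and the true target direction. A minimal sketch — the 3D direction vectors are placeholders, and the paper's pipeline works in the world-camera frame rather than on raw vectors:

```python
import math

def angular_error_deg(gaze, target):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(g * t for g, t in zip(gaze, target))
    ng = math.sqrt(sum(g * g for g in gaze))
    nt = math.sqrt(sum(t * t for t in target))
    cos = max(-1.0, min(1.0, dot / (ng * nt)))  # clamp for float safety
    return math.degrees(math.acos(cos))

# Hypothetical estimated gaze vs. target direction.
print(angular_error_deg((0.02, 0.01, 1.0), (0.0, 0.0, 1.0)))
```

Averaging this quantity over samples and participants yields a Mean Angular Error of the kind reported in the abstract.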

14 pages, 1720 KiB  
Article
Scanpath Visualization and Comparison Using Visual Aggregation Techniques
by Vsevolod Peysakhovich and Christophe Hurter
J. Eye Mov. Res. 2017, 10(5), 1-14; https://doi.org/10.16910/jemr.10.5.9 - 8 Jan 2018
Cited by 21 | Viewed by 89
Abstract
We demonstrate the use of different visual aggregation techniques to obtain non-cluttered visual representations of scanpaths. First, fixation points are clustered using the mean-shift algorithm. Second, saccades are aggregated using the Attribute-Driven Edge Bundling (ADEB) algorithm, which handles saccade direction, onset timestamp, magnitude, or a combination thereof as the edge compatibility criterion. Flow direction maps, computed during bundling, can be visualized separately (vertical or horizontal components) or as a single image using the Oriented Line Integral Convolution (OLIC) algorithm. Furthermore, the cosine similarity between two flow direction maps provides a similarity map for comparing two scanpaths. Last, we provide examples of basic patterns, a visual search task, and art perception. Used together, these techniques provide valuable insights into scanpath exploration and informative illustrations of the eye movement data.
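The per-pixel cosine similarity between two flow direction maps can be computed directly from their horizontal and vertical components. A sketch with NumPy — the array names are assumptions, and the ADEB/OLIC steps that would produce the maps are not reproduced here:

```python
import numpy as np

def flow_similarity_map(u1, v1, u2, v2, eps=1e-9):
    """Per-pixel cosine similarity between two flow direction maps.

    u*, v* are 2D arrays holding the horizontal and vertical flow
    components of each map; the result lies in [-1, 1] per pixel,
    with 1 for aligned flow and -1 for opposing flow.
    """
    dot = u1 * u2 + v1 * v2
    n1 = np.sqrt(u1 ** 2 + v1 ** 2)
    n2 = np.sqrt(u2 ** 2 + v2 ** 2)
    return dot / (n1 * n2 + eps)  # eps avoids division by zero
```

Thresholding or color-coding this map highlights where two scanpaths flow in the same or in opposite directions.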

19 pages, 1903 KiB  
Article
A Skeleton-Based Approach to Analyzing Oculomotor Behavior When Viewing Animated Characters
by Thibaut Le Naour and Jean-Pierre Bresciani
J. Eye Mov. Res. 2017, 10(5), 1-19; https://doi.org/10.16910/jemr.10.5.7 - 18 Dec 2017
Cited by 3 | Viewed by 31
Abstract
Knowing what people look at and understanding how they analyze the dynamic gestures of their peers is an exciting challenge. In this context, we propose a new approach to quantifying and visualizing the oculomotor behavior of viewers watching the movements of animated characters in dynamic sequences. Using this approach, we were able to illustrate, on a 'heat mesh', the gaze distribution of one or several viewers, i.e., the time spent on each part of the body, and to visualize viewers' timelines, which are linked to the heat mesh. Our approach notably provides an 'intuitive' overview combining the spatial and temporal characteristics of the gaze pattern, thereby constituting an efficient tool for quickly comparing the oculomotor behaviors of different viewers. The functionalities of our system are illustrated through two use case experiments with 2D and 3D animated media sources, respectively.
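The 'heat mesh' idea — accumulating viewing time on each part of the body — can be approximated by assigning every fixation to its nearest skeleton joint. The joint names, coordinates, and fixations below are hypothetical, and the paper's actual mapping onto a mesh is more refined than this nearest-joint sketch:

```python
def heat_per_joint(fixations, joints):
    """Accumulate fixation time (ms) on the nearest skeleton joint.

    fixations: iterable of (x, y, duration_ms) tuples.
    joints: mapping of joint name -> (x, y) screen position.
    """
    heat = {name: 0.0 for name in joints}
    for x, y, dur in fixations:
        nearest = min(
            joints,
            key=lambda n: (joints[n][0] - x) ** 2 + (joints[n][1] - y) ** 2,
        )
        heat[nearest] += dur
    return heat

# Hypothetical 2D skeleton and three fixations.
joints = {"head": (100, 50), "hand": (40, 200), "foot": (110, 380)}
fixations = [(98, 55, 300), (45, 190, 250), (102, 48, 150)]
```

Normalizing the accumulated times and mapping them to a color scale per body part yields a heat-mesh-style overview.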

15 pages, 810 KiB  
Article
Eye Movement Planning on Single-Sensor-Single-Indicator Displays is Vulnerable to User Anxiety and Cognitive Load
by Jonathan Allsop, Rob Gray, Heinrich H. Bülthoff and Lewis Chuang
J. Eye Mov. Res. 2017, 10(5), 1-15; https://doi.org/10.16910/jemr.10.5.8 - 13 Dec 2017
Cited by 15 | Viewed by 53
Abstract
In this study, we demonstrate the effects of anxiety and cognitive load on eye movement planning in an instrument flight task adhering to a single-sensor-single-indicator data visualisation design philosophy. The task was performed in neutral and anxiety conditions while a low- or high-load auditory n-back task was performed concurrently. Cognitive load led to a reduction in the number of transitions between instruments and impaired task performance. Changes in self-reported anxiety between the neutral and anxiety conditions correlated positively with changes in the randomness of eye movements between instruments, but only when cognitive load was high. Taken together, the results suggest that both cognitive load and anxiety impact gaze behavior, and that these effects should be explored when designing data visualization displays.
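The 'randomness of eye movements between instruments' can be quantified as the Shannon entropy of the first-order transition distribution between instrument AOIs — a common gaze-transition-entropy measure. A sketch assuming single-letter AOI labels; the paper's exact entropy formulation may differ:

```python
import math
from collections import Counter

def transition_entropy(aoi_sequence):
    """Shannon entropy (bits) of first-order transitions between AOIs.

    Higher values indicate more random scanning between instruments;
    0 means the next instrument is fully predictable from the current one.
    """
    pairs = list(zip(aoi_sequence, aoi_sequence[1:]))
    counts = Counter(pairs)
    total = len(pairs)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

Comparing this value across conditions (neutral vs. anxiety, low vs. high load) mirrors the kind of analysis the abstract describes.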

14 pages, 753 KiB  
Article
Visual Analytics of Gaze Data with Standard Multimedia Players
by Julius Schöning, Christopher Gundler, Gunther Heidemann, Peter König and Ulf Krumnack
J. Eye Mov. Res. 2017, 10(5), 1-14; https://doi.org/10.16910/jemr.10.5.4 - 20 Nov 2017
Cited by 4 | Viewed by 55
Abstract
With the increasing number of studies in which participants' eye movements are tracked while they watch videos, the volume of gaze data records is growing tremendously. Unfortunately, in most cases, such data are collected in separate files in custom-made or proprietary data formats. These data are difficult to access even for experts and effectively inaccessible to non-experts, and expensive or custom-made software is normally necessary for their analysis. We address this problem by using existing multimedia container formats for distributing and archiving eye-tracking and gaze data bundled with the stimuli data. We define an exchange format that can be interpreted by standard multimedia players and can be streamed via the Internet. We convert several gaze data sets into our format, demonstrating the feasibility of our approach and allowing these data to be visualized with standard multimedia players. We also introduce two VLC player add-ons that allow for further visual analytics. We discuss the benefits of keeping gaze data in a multimedia container and explain possible visual analytics approaches based on our implementations, converted datasets, and first user interviews.
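One way to make gaze data readable by a standard player is to carry it in a track the player already understands. The sketch below writes gaze samples as WebVTT subtitle cues that a player such as VLC can overlay on the stimulus video; this only illustrates the bundling idea — the paper defines its own exchange format, which is not reproduced here:

```python
def gaze_to_webvtt(samples):
    """Render gaze samples as a WebVTT subtitle track.

    samples: list of (start_s, end_s, x, y) tuples giving the time span
    and screen position of each gaze sample or fixation.
    """
    def ts(t):
        # WebVTT timestamp: HH:MM:SS.mmm
        h, rem = divmod(t, 3600)
        m, s = divmod(rem, 60)
        return f"{int(h):02d}:{int(m):02d}:{s:06.3f}"

    lines = ["WEBVTT", ""]
    for i, (start, end, x, y) in enumerate(samples, 1):
        lines += [str(i), f"{ts(start)} --> {ts(end)}", f"gaze: ({x}, {y})", ""]
    return "\n".join(lines)
```

Saving this next to (or muxed into) the stimulus video lets any subtitle-capable player display the gaze stream without dedicated analysis software.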

12 pages, 8214 KiB  
Article
Visualizing the Reading Activity of People Learning to Read
by Oleg Špakov, Harri Siirtola, Howell Istance and Kari-Jouko Räihä
J. Eye Mov. Res. 2017, 10(5), 1-12; https://doi.org/10.16910/jemr.10.5.5 - 15 Nov 2017
Cited by 15 | Viewed by 68
Abstract
Several popular visualizations of gaze data, such as scanpaths and heatmaps, can be used independently of the viewing task. For a specific task, such as reading, more informative visualizations can be created. We have developed several such techniques, some dynamic and some static, to communicate the reading activity of children to primary school teachers. The goal of the visualizations was to make reading skills apparent to a teacher with no background in the theory of eye movements or eye tracking technology. Evaluations of the techniques indicate that, as intended, they serve different purposes and were appreciated differently by the teachers. Dynamic visualizations help to give teachers a good understanding of how individual students read. Static visualizations help in getting a simple overview of how the children read as a group and of their active vocabulary.

14 pages, 7000 KiB  
Article
Gaze Self-Similarity Plot—A New Visualization Technique
by Pawel Kasprowski and Katarzyna Harezlak
J. Eye Mov. Res. 2017, 10(5), 1-14; https://doi.org/10.16910/jemr.10.5.3 - 16 Oct 2017
Cited by 7 | Viewed by 36
Abstract
Eye tracking has become a valuable way of extending knowledge of human behavior based on visual patterns. One of the most important elements of such an analysis is the presentation of the obtained results, which proves to be a challenging task. Traditional visualization techniques such as scanpaths or heat maps may reveal interesting information; nonetheless, many useful features remain invisible, especially when the temporal characteristics of eye movements are taken into account. This paper introduces a technique called the gaze self-similarity plot (GSSP) that may be applied to visualize both spatial and temporal eye movement features on a single two-dimensional plot. The technique is an extension of the idea of recurrence plots, commonly used in time series analysis. The paper presents the basic concepts of the proposed approach (two types of GSSP), complemented with examples of what kind of information may be disclosed, and finally shows areas of possible application of the GSSP.
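A recurrence plot in this sense marks cell (i, j) when fixations i and j are close to each other. A minimal spatial variant with NumPy — the distance threshold and coordinates are illustrative, and the paper defines two GSSP types, including a temporal one not shown here:

```python
import numpy as np

def gaze_self_similarity(fixations, threshold=50.0):
    """Binary self-similarity (recurrence) matrix of a fixation sequence.

    fixations: (n, 2) array-like of fixation coordinates in pixels.
    Cell (i, j) is 1 when fixations i and j lie within `threshold` px,
    so revisits to the same region appear as off-diagonal structure.
    """
    fix = np.asarray(fixations, dtype=float)
    diff = fix[:, None, :] - fix[None, :, :]   # pairwise coordinate deltas
    dist = np.linalg.norm(diff, axis=-1)        # pairwise Euclidean distances
    return (dist <= threshold).astype(int)
```

Rendering this matrix as an image gives the two-dimensional plot described in the abstract: the diagonal is always filled, and blocks off the diagonal reveal when the gaze returns to previously visited regions.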

12 pages, 8077 KiB  
Article
A Quality-Centered Analysis of Eye Tracking Data in Foveated Rendering
by Thorsten Roth, Martin Weier, André Hinkenjann, Yongmin Li and Philipp Slusallek
J. Eye Mov. Res. 2017, 10(5), 1-12; https://doi.org/10.16910/jemr.10.5.2 - 28 Sep 2017
Cited by 13 | Viewed by 74
Abstract
This work presents the analysis of data recorded by an eye tracking device in the course of evaluating a foveated rendering approach for head-mounted displays (HMDs). Foveated rendering methods adapt the image synthesis process to the user's gaze and exploit the limitations of the human visual system to increase rendering performance. Foveated rendering has especially great potential when certain requirements have to be fulfilled, such as low-latency rendering to cope with high display refresh rates. This is crucial for virtual reality (VR), as a high level of immersion, which can only be achieved with high rendering performance and also helps to reduce nausea, is an important factor in this field. We put things in context by first providing basic information about our rendering system, followed by a description of the user study and the collected data. These data stem from fixation tasks that subjects had to perform while being shown fly-through sequences of virtual scenes on an HMD. The fixation tasks consisted of combinations of various scenes and fixation modes. Besides static fixation targets, moving targets on randomized paths as well as a free-focus mode were tested. Using these data, we estimate the precision of the eye tracker used and analyze the participants' accuracy in focusing on the displayed fixation targets, also taking eccentricity-dependent effects into account. Comparing this information with the quality ratings users gave for the displayed sequences then reveals an interesting connection between fixation modes, fixation accuracy, and quality ratings.
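Eye-tracker precision is commonly estimated as the root-mean-square of the angular distances between consecutive gaze samples during a steady fixation (RMS-S2S). A minimal sketch; whether the study used this estimator or another is not stated in the abstract:

```python
import math

def rms_s2s_precision(angles_deg):
    """RMS sample-to-sample precision from consecutive gaze angles.

    angles_deg: angular distances (degrees) between consecutive gaze
    samples recorded while the eye fixates a static target; smaller
    RMS values mean a more precise tracker.
    """
    return math.sqrt(sum(a * a for a in angles_deg) / len(angles_deg))

# Hypothetical inter-sample angular distances during one fixation.
print(rms_s2s_precision([0.04, 0.05, 0.03, 0.06]))
```

Computed per participant and fixation target, such values can then be related to the quality ratings, as the abstract describes.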
