Journal of Eye Movement Research is published by MDPI from Volume 18 Issue 1 (2025). Previous articles were published by another publisher in Open Access under a CC-BY (or CC-BY-NC-ND) licence, and they are hosted by MDPI on mdpi.com as a courtesy and upon agreement with Bern Open Publishing (BOP).

J. Eye Mov. Res., Volume 11, Issue 6 (November 2018) – 6 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
11 pages, 764 KiB  
Article
Automating Areas of Interest Analysis in Mobile Eye Tracking Experiments Based on Machine Learning
by Julian Wolf, Stephan Hess, David Bachmann, Quentin Lohmeyer and Mirko Meboldt
J. Eye Mov. Res. 2018, 11(6), 1-11; https://doi.org/10.16910/jemr.11.6.6 - 10 Dec 2018
Cited by 34 | Viewed by 97
Abstract
For an in-depth, AOI-based analysis of mobile eye tracking data, a preceding gaze assignment step is inevitable. Current solutions such as manual gaze mapping or marker-based approaches are tedious and not suitable for applications manipulating tangible objects. This makes mobile eye tracking studies with several hours of recording difficult to analyse quantitatively. We introduce a new machine learning-based algorithm, computational Gaze-Object Mapping (cGOM), that automatically maps gaze data onto the respective AOIs. cGOM extends state-of-the-art object detection and segmentation by Mask R-CNN with a gaze mapping feature. The new algorithm's performance is validated against a manual fixation-by-fixation mapping, which is considered the ground truth, in terms of true positive rate (TPR), true negative rate (TNR) and efficiency. Using only 72 training images with 264 labelled object representations, cGOM is able to reach a TPR of approx. 80% and a TNR of 85% compared to the manual mapping. The break-even point is reached at 2 h of eye tracking recording for the total procedure, or 1 h when considering human working time only. Together with the real-time capability of the mapping process after completed training, even hours of eye tracking recording can be evaluated efficiently. (Code and video examples have been made available at: https://gitlab.ethz.ch/pdz/cgom.git) Full article
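As an illustration of the gaze-to-object mapping idea summarised in the abstract, the minimal Python sketch below assigns a fixation to the AOI whose segmentation mask (e.g., produced by Mask R-CNN) contains the fixation coordinates. The function name, mask format and toy data are assumptions for illustration only and do not reflect the actual cGOM implementation available in the linked repository.

```python
# Hypothetical sketch: assign a fixation to the object whose segmentation
# mask contains its image coordinates. Not the actual cGOM API.
import numpy as np

def map_fixation_to_aoi(fixation_xy, masks, labels):
    """Return the label of the first mask containing the fixation, else None.

    fixation_xy -- (x, y) pixel coordinates of the fixation in the scene frame
    masks       -- list of boolean arrays of shape (H, W), one per detected object
    labels      -- list of class names, parallel to `masks`
    """
    x, y = int(round(fixation_xy[0])), int(round(fixation_xy[1]))
    for mask, label in zip(masks, labels):
        h, w = mask.shape
        if 0 <= y < h and 0 <= x < w and mask[y, x]:
            return label
    return None  # fixation falls on background / unlabelled area

# Toy example: one 100x100 mask covering the image centre.
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True
print(map_fixation_to_aoi((50, 50), [mask], ["cup"]))  # -> "cup"
print(map_fixation_to_aoi((5, 5), [mask], ["cup"]))    # -> None
```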
17 pages, 68671 KiB  
Article
Representative Scanpath Identification for Group Viewing Pattern Analysis
by Aoqi Li and Zhenzhong Chen
J. Eye Mov. Res. 2018, 11(6), 1-17; https://doi.org/10.16910/jemr.11.6.5 - 22 Nov 2018
Cited by 5 | Viewed by 39
Abstract
Scanpaths are composed of fixations and saccades. Viewing trends reflected by scanpaths play an important role in scientific studies like saccadic model evaluation and real-life applications like artistic design. Several scanpath synthesis methods have been proposed to obtain a scanpath that is representative of the group viewing trend, but most of them either target a specific category of viewing materials like webpages or leave out useful information like gaze duration. Our previous work defined the representative scanpath as the barycenter of a group of scanpaths, which shows the averaged shape of multiple scanpaths. In this paper, we extend our previous framework to take gaze duration into account, obtaining representative scanpaths that describe not only attention distribution and shift but also attention span. The extended framework consists of three steps: eye-gaze data preprocessing, scanpath aggregation, and gaze duration analysis. Experiments demonstrate that the framework can well serve the purpose of mining viewing patterns and that "barycenter"-based representative scanpaths can better characterize the pattern. Full article
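As a much-simplified illustration of the "barycenter" idea mentioned in the abstract: once scanpaths have been aligned fixation-by-fixation (the hard part, handled by the aggregation step of the framework), a representative scanpath can be taken as the per-fixation average of positions and gaze durations. The sketch below uses invented data and is not the authors' actual algorithm.

```python
# Simplified "barycenter" of aligned scanpaths: per-fixation mean of
# position and gaze duration. Data values are invented for the example.
import numpy as np

# Each scanpath: array of (x, y, duration_ms) rows, one per aligned fixation.
scanpaths = [
    np.array([[100, 120, 200], [300, 140, 250], [310, 400, 180]]),
    np.array([[110, 130, 220], [290, 150, 300], [330, 390, 160]]),
    np.array([[ 95, 115, 240], [305, 135, 260], [320, 410, 200]]),
]

barycenter = np.mean(np.stack(scanpaths), axis=0)
for i, (x, y, dur) in enumerate(barycenter, start=1):
    print(f"fixation {i}: ({x:.1f}, {y:.1f}), mean duration {dur:.0f} ms")
```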
11 pages, 1269 KiB  
Article
Eye Movement Parameters for Performance Evaluation in Projection-Based Stereoscopic Display
by Chiuhsiang Joe Lin, Yogi Tri Prasetyo and Retno Widyaningrum
J. Eye Mov. Res. 2018, 11(6), 1-11; https://doi.org/10.16910/jemr.11.6.3 - 20 Nov 2018
Cited by 34 | Viewed by 35
Abstract
The current study applied Structural Equation Modeling (SEM) to analyze the effects of index of difficulty (ID) and parallax on eye gaze movement time (EMT), fixation duration (FD), time to first fixation (TFF), number of fixations (NF), and eye gaze accuracy (AC) simultaneously. EMT, FD, TFF, NF, and AC were measured in the projection-based stereoscopic display by utilizing a Tobii eye tracker system. Ten participants were recruited to perform a multi-directional tapping task using a within-subject design with three different levels of parallax and six different levels of ID. SEM showed that ID had significant direct effects on EMT, NF, and FD, as well as a significant indirect effect on NF. However, ID was not found to be a strong predictor of AC. SEM also showed that parallax had significant direct effects on EMT, NF, FD, TFF, and AC. Apart from the direct effects, parallax also had significant indirect effects on NF and AC. Regarding the interrelationship among the dependent variables, there were significant indirect effects of FD and TFF on AC. Our results indicate that higher AC was achieved with lower parallax (at the screen), longer EMT, higher NF, longer FD, and longer TFF. Full article
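The study manipulates the index of difficulty (ID) of a multi-directional tapping task. As a reference point only, the sketch below computes Fitts' index of difficulty in its Shannon formulation, ID = log2(D/W + 1), where D is target distance and W is target width; the abstract does not state which formulation the authors used, so this is an assumption for illustration.

```python
# Shannon formulation of Fitts' index of difficulty (assumed formulation).
import math

def index_of_difficulty(distance, width):
    """Index of difficulty in bits for a pointing task."""
    return math.log2(distance / width + 1)

print(round(index_of_difficulty(256, 32), 2))  # 3.17 bits
print(round(index_of_difficulty(512, 16), 2))  # 5.04 bits
```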
27 pages, 1045 KiB  
Article
Attention and Information Acquisition: Comparison of Mouse-Click with Eye-Movement Attention Tracking
by Steffen Egner, Stefanie Reimann, Rainer Hoeger and Wolfgang H. Zangemeister
J. Eye Mov. Res. 2018, 11(6), 1-27; https://doi.org/10.16910/jemr.11.6.4 - 16 Nov 2018
Cited by 17 | Viewed by 71
Abstract
Attention is crucial as a fundamental prerequisite for perception. The measurement of attention in viewing and recognizing the images that surround us constitutes an important part of eye movement research, particularly in advertising-effectiveness research. Recording eye and gaze (i.e., eye and head) movements is considered the standard procedure for measuring attention. However, alternative measurement methods have been developed in recent years, one of which is mouse-click attention tracking (mcAT), an online procedure that measures gaze motion via a mouse-click (i.e., a hand and finger positioning maneuver) on a computer screen. Here we compared the validity of mcAT with eye movement attention tracking (emAT). We recorded data in a between-subjects design via emAT and mcAT and analyzed and compared the data of 20 subjects for correlations. The test stimuli consisted of 64 images that were assigned to eight categories. Our main results demonstrated a highly significant correlation (p < 0.001) between mcAT and emAT data. We also found significant differences in correlations between different image categories. For simply structured pictures of humans or animals in particular, mcAT provided highly valid and more consistent results compared to emAT. We concluded that mcAT is a suitable method for measuring the attention we give to the images that surround us, such as photographs, graphics, art or digital and print advertisements. Full article
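One illustrative way to quantify agreement between mouse-click (mcAT) and eye-movement (emAT) attention data is to build a smoothed 2D attention map per modality and correlate the flattened maps, as sketched below. This is only a sketch of the general idea with invented data, not the authors' actual analysis pipeline.

```python
# Sketch: correlate smoothed attention maps built from two sets of samples.
import numpy as np
from scipy.ndimage import gaussian_filter

def attention_map(points_xy, shape=(60, 80), sigma=3.0):
    """Accumulate (x, y) samples into a grid and smooth with a Gaussian."""
    grid = np.zeros(shape)
    for x, y in points_xy:
        grid[int(y), int(x)] += 1
    return gaussian_filter(grid, sigma)

rng = np.random.default_rng(0)
clicks = rng.uniform([0, 0], [79, 59], size=(50, 2))     # invented mcAT samples
gaze = np.clip(clicks + rng.normal(0, 2.0, size=clicks.shape),
               [0, 0], [79, 59])                          # invented emAT samples

r = np.corrcoef(attention_map(clicks).ravel(), attention_map(gaze).ravel())[0, 1]
print(f"map correlation r = {r:.2f}")
```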
13 pages, 643 KiB  
Article
MAGiC: A Multimodal Framework for Analysing Gaze in Dyadic Communication
by Ülkü Arslan Aydın, Sinan Kalkan and Cengiz Acartürk
J. Eye Mov. Res. 2018, 11(6), 1-13; https://doi.org/10.16910/jemr.11.6.2 - 12 Nov 2018
Cited by 3 | Viewed by 38
Abstract
The analysis of dynamic scenes has been a challenging domain in eye tracking research. This study presents a framework, named MAGiC, for analyzing gaze contact and gaze aversion in face-to-face communication. MAGiC provides an environment that is able to detect and track the conversation partner’s face automatically, overlay gaze data on top of the face video, and incorporate speech by means of speech-act annotation. Specifically, MAGiC integrates eye tracking data for gaze, audio data for speech segmentation, and video data for face tracking. MAGiC is an open source framework and its usage is demonstrated via publicly available video content and wiki pages. We explored the capabilities of MAGiC through a pilot study and showed that it facilitates the analysis of dynamic gaze data by reducing the annotation effort and the time spent for manual analysis of video data. Full article
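As a minimal sketch of the per-frame gaze-contact versus gaze-aversion decision that a framework like MAGiC supports: a gaze sample can be counted as "contact" when it falls inside the tracked face bounding box of the conversation partner. The box format, names and thresholds below are assumptions for illustration, not MAGiC's API.

```python
# Hypothetical per-frame gaze contact/aversion classification.
from dataclasses import dataclass

@dataclass
class FaceBox:
    x: float  # left edge, pixels
    y: float  # top edge, pixels
    w: float  # width
    h: float  # height

def classify_gaze(gaze_x: float, gaze_y: float, face: FaceBox) -> str:
    inside = (face.x <= gaze_x <= face.x + face.w and
              face.y <= gaze_y <= face.y + face.h)
    return "contact" if inside else "aversion"

frame_face = FaceBox(x=400, y=150, w=180, h=220)
print(classify_gaze(480, 260, frame_face))  # contact
print(classify_gaze(100, 500, frame_face))  # aversion
```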
14 pages, 505 KiB  
Article
Eye-Hand Coordination Patterns of Intermediate and Novice Surgeons in a Simulation-Based Endoscopic Surgery Training Environment
by Damla Topalli and Nergiz Ercil Cagiltay
J. Eye Mov. Res. 2018, 11(6), 1-14; https://doi.org/10.16910/jemr.11.6.1 - 8 Nov 2018
Cited by 7 | Viewed by 46
Abstract
Endoscopic surgery procedures require specific skills, such as eye-hand coordination, to be developed. Current education programs face problems in providing appropriate skill improvement and assessment methods in this field. This study aims to propose objective metrics for hand-movement skills and assess eye-hand coordination. An experimental study is conducted with 15 surgical residents to test the newly proposed measures. Two computer-based, two-handed endoscopic surgery practice scenarios are developed in a simulation environment to gather the participants' eye-gaze data with the help of an eye tracker as well as the related hand movement data through haptic interfaces. Additionally, participants' eye-hand coordination skills are analyzed. The results indicate higher correlations in the intermediates' eye-hand movements compared to the novices. An increase in intermediates' visual concentration leads to smoother hand movements. Similarly, the novices' hand movements are shown to remain at a standstill. After the first round of practice, all participants' eye-hand coordination skills are improved on the specific task targeted in this study. According to these results, it can be concluded that the proposed metrics can potentially provide additional insights about trainees' eye-hand coordination skills and help instructional system designers to better address training requirements. Full article
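As a hypothetical illustration of how eye-hand coordination can be expressed as a single number, the sketch below correlates frame-by-frame speeds of the gaze point and the haptic tool tip. The paper's actual metrics are not reproduced here, and the trajectories below are invented.

```python
# Illustrative eye-hand coordination score: correlation of gaze and hand speeds.
import numpy as np

def speeds(positions):
    """Frame-to-frame Euclidean speed of an (N, 2) trajectory."""
    return np.linalg.norm(np.diff(positions, axis=0), axis=1)

rng = np.random.default_rng(1)
gaze = np.cumsum(rng.normal(0, 1.0, size=(200, 2)), axis=0)  # invented gaze path
hand = gaze + rng.normal(0, 0.5, size=gaze.shape)            # loosely coupled hand path

r = np.corrcoef(speeds(gaze), speeds(hand))[0, 1]
print(f"eye-hand speed correlation r = {r:.2f}")
```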