Journal of Eye Movement Research has been published by MDPI since Volume 18, Issue 1 (2025). Earlier articles were published in Open Access under a CC-BY (or CC-BY-NC-ND) licence by the previous publisher; they are hosted by MDPI on mdpi.com as a courtesy and by agreement with Bern Open Publishing (BOP).

J. Eye Mov. Res., Volume 14, Issue 2 (March 2021) – 6 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Articles are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
31 pages, 883 KiB  
Article
Eye Movements During Dynamic Scene Viewing Are Affected by Visual Attention Skills and Events of the Scene: Evidence from First-Person Shooter Gameplay Videos
by Suvi K. Holm, Tuomo Häikiö, Konstantin Olli and Johanna K. Kaakinen
J. Eye Mov. Res. 2021, 14(2), 1-31; https://doi.org/10.16910/jemr.14.2.3 - 21 Oct 2021
Abstract
The role of individual differences during dynamic scene viewing was explored. Participants (N = 38) watched a gameplay video of a first-person shooter (FPS) videogame while their eye movements were recorded. In addition, the participants’ skills in three visual attention tasks (attentional blink, visual search, and multiple object tracking) were assessed. The results showed that individual differences in the visual attention tasks were associated with the eye movement patterns observed during viewing of the gameplay video. The differences were noted in four eye movement measures: number of fixations, fixation durations, saccade amplitudes, and fixation distances from the center of the screen. The individual differences were evident during specific events of the video as well as across the video as a whole. The results highlight that an unedited, fast-paced, and cluttered dynamic scene can bring out individual differences in dynamic scene viewing.
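As an illustrative aside (not taken from the article), the sketch below shows how the four eye movement measures named in the abstract could be computed from a fixation list; the data format, pixel units, and screen-center position are assumptions made for the example.

```python
# Hedged sketch: computing fixation count, mean fixation duration, saccade
# amplitudes, and fixation distance from screen center from hypothetical data.
import math

fixations = [(640, 512, 210), (700, 480, 260), (905, 330, 180)]  # (x_px, y_px, duration_ms), made up
center = (960, 540)  # assumed screen center in pixels

num_fixations = len(fixations)
mean_fix_duration = sum(d for _, _, d in fixations) / num_fixations

# Saccade amplitude approximated as the distance between consecutive fixations.
saccade_amplitudes = [
    math.dist(fixations[i][:2], fixations[i + 1][:2])
    for i in range(num_fixations - 1)
]

# Distance of each fixation from the center of the screen.
center_distances = [math.dist((x, y), center) for x, y, _ in fixations]

print(num_fixations, mean_fix_duration, saccade_amplitudes, center_distances)
```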
16 pages, 1310 KiB  
Article
Silent Versus Reading Out Loud Modes: An Eye-Tracking Study
by Ioannis Smyrnakis, Vassilios Andreadakis, Andriani Rina, Nadia Boufachrentin and Ioannis M. Aslanides
J. Eye Mov. Res. 2021, 14(2), 1-16; https://doi.org/10.16910/jemr.14.2.1 - 21 Oct 2021
Abstract
The main purpose of this study is to compare the silent and loud reading ability of typical and dyslexic readers, using eye-tracking technology to monitor the reading process. The participants (156 students of normal intelligence) were first divided into three groups based on their school grade, and each subgroup was then further separated into typical readers and students diagnosed with dyslexia. The students read the same text twice, once silently and once out loud. Various eye-tracking parameters were calculated for both types of reading. In general, the performance of the typical students was better for both modes of reading, regardless of age. In the older age groups, typical readers performed better at silent reading. The dyslexic readers in all age groups performed better at reading out loud; however, this was less prominent in secondary and upper secondary dyslexics, reflecting a slow shift towards the silent reading mode as they age. Our results confirm that the eye-tracking parameters of dyslexics improve with age in both silent and loud reading, and that their reading preference shifts slowly towards silent reading. Typical readers before 4th grade do not show a clear reading-mode preference; after that age, they develop a clear preference for silent reading.
8 pages, 341 KiB  
Article
Reading Eye Movement Performance on iPad vs Print Using a Visagraph
by Alicia Feis, Amanda Lallensack, Elizabeth Pallante, Melanie Nielsen, Nicole Demarco and Balamurali Vasudevan
J. Eye Mov. Res. 2021, 14(2), 1-8; https://doi.org/10.16910/jemr.14.2.6 - 14 Sep 2021
Abstract
This study investigated reading comprehension, reading speed, and the quality of eye movements while reading on an iPad, as compared to printed text. Thirty-one visually normal subjects were enrolled. Two of the Visagraph standardized passages were read, one on the iPad and one in print. Eye movement characteristics and comprehension were evaluated. Mean (SD) fixation duration was significantly longer on the iPad, at 270 ms (40), than with the printed text, at 260 ms (40) (p = 0.04). Subjects’ mean reading rates were significantly lower on the iPad, at 294 words per minute (wpm), than with the printed text, at 318 wpm (p = 0.03). Mean (SD) overall reading duration was significantly longer on the iPad, at 31 s (9.3), than with the printed text, at 28 s (8.0) (p = 0.02). Overall, reading performance is lower on an iPad than with printed text in visually normal individuals. These findings might be more consequential for children and slower adult readers when they read on iPads.
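As an illustrative aside, a within-subject comparison like the one reported here can be run as a paired test. The abstract does not name the statistical test that was used, so the paired t-test and the per-subject reading rates in this sketch are assumptions for the example.

```python
# Hedged sketch: paired comparison of reading rates (iPad vs. print) per subject.
from scipy import stats

ipad_wpm = [300, 285, 310, 290, 295]    # hypothetical per-subject reading rates on iPad
print_wpm = [320, 310, 330, 305, 318]   # hypothetical per-subject reading rates in print

t_stat, p_value = stats.ttest_rel(ipad_wpm, print_wpm)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```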
13 pages, 8618 KiB  
Article
Detecting Task Difficulty of Learners in Colonoscopy: Evidence from Eye-Tracking
by Liu Xin, Zheng Bin, Duan Xiaoqin, He Wenjing, Li Yuandong, Zhao Jinyu, Zhao Chen and Wang Lin
J. Eye Mov. Res. 2021, 14(2), 1-13; https://doi.org/10.16910/jemr.14.2.5 - 13 Jul 2021
Abstract
Eye-tracking can help decode the intricate control mechanisms in human performance. In healthcare, physicians-in-training require extensive practice to improve their skills. When a trainee encounters difficulty in practice, they need feedback from experts to improve their performance. Personal feedback is time-consuming and subject to bias. In this study, we tracked the eye movements of trainees during their colonoscopic performance in simulation. We examined changes in eye movement behavior during moments of navigation loss (MNLs), a signature of task difficulty during colonoscopy, and tested whether deep learning algorithms can detect MNLs from eye-tracking data. Human gaze and pupil characteristics were learned and verified by deep convolutional generative adversarial networks (DCGANs); the generated data were fed to Long Short-Term Memory (LSTM) networks with three different data feeding strategies to classify MNLs within the entire colonoscopic procedure. Outputs from deep learning were compared to the expert’s judgment of the MNLs based on colonoscopic videos. The best classification outcome was achieved when human eye data were combined with 1000 synthesized eye data samples, yielding optimal accuracy (91.80%), sensitivity (90.91%), and specificity (94.12%). This study builds an important foundation for our work on developing an education system for training healthcare skills using simulation.
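As an illustrative aside (not the study's code), the sketch below shows how the reported accuracy, sensitivity, and specificity can be derived from a binary confusion matrix of MNL classifications; the labels and predictions are made up.

```python
# Hedged sketch: accuracy, sensitivity, and specificity from hypothetical MNL labels.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # 1 = moment of navigation loss (MNL), made up
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1])  # hypothetical classifier output

tp = np.sum((y_true == 1) & (y_pred == 1))
tn = np.sum((y_true == 0) & (y_pred == 0))
fp = np.sum((y_true == 0) & (y_pred == 1))
fn = np.sum((y_true == 1) & (y_pred == 0))

accuracy = (tp + tn) / len(y_true)
sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
print(accuracy, sensitivity, specificity)
```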
12 pages, 3010 KiB  
Article
Optimizing the Usage of Pupillary-Based Indicators for Cognitive Workload
by Benedict C. O. F. Fehringer
J. Eye Mov. Res. 2021, 14(2), 1-12; https://doi.org/10.16910/jemr.14.2.4 - 11 Jun 2021
Abstract
The Index of Cognitive Activity (ICA) and its open-source alternative, the Index of Pupillary Activity (IPA), are pupillary-based indicators of cognitive workload that are independent of light changes. Both indicators were investigated with regard to the influences of cognitive demand, fatigue, and inter-individual differences. In addition, the variability of pupil changes between the two eyes (difference values) was compared with the usually calculated pupillary changes averaged over both eyes (mean values). Fifty-five participants performed a spatial thinking test with six distinct difficulty levels, the R-Cube-Vis Test, as well as a simple fixation task before and after it. The distributions of the ICA and IPA were comparable. The ICA/IPA values were lower during the simple fixation tasks than during the cognitively demanding R-Cube-Vis Test. A fatigue effect was found only for the mean ICA values. The effects of both indicators between difficulty levels of the test were larger when inter-individual differences were controlled using z-standardization. The difference values seemed to control for fatigue and appeared to differentiate better between more demanding cognitive tasks than the mean values. The derived recommendations for the ICA/IPA values help to gain more insight into individual performance and behavior in, e.g., training and testing scenarios.
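As an illustrative aside, the z-standardization mentioned in the abstract amounts to rescaling each participant's indicator values by that participant's own mean and standard deviation; the data layout and values in the sketch below are assumptions.

```python
# Hedged sketch: within-participant z-standardization of ICA/IPA values.
import numpy as np

# Hypothetical indicator values: rows = participants, columns = difficulty levels.
values = np.array([
    [0.21, 0.25, 0.30, 0.33, 0.37, 0.41],
    [0.10, 0.12, 0.15, 0.19, 0.22, 0.24],
])

# Standardize each participant's values against their own mean and SD,
# so that level-to-level differences are comparable across participants.
z_values = (values - values.mean(axis=1, keepdims=True)) / values.std(axis=1, keepdims=True)
print(z_values)
```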
11 pages, 2539 KiB  
Article
Developing Expert Gaze Pattern in Laparoscopic Surgery Requires More than Behavioral Training
by Sicong Liu, Rachel Donaldson, Ashwin Subramaniam, Hannah Palmer, Cosette D. Champion, Morgan L. Cox and L. Gregory Appelbaum
J. Eye Mov. Res. 2021, 14(2), 1-11; https://doi.org/10.16910/jemr.14.2.2 - 10 Mar 2021
Abstract
Expertise in laparoscopic surgery is realized through both manual dexterity and efficient eye movement patterns, creating opportunities to use gaze information in the educational process. To better understand how expert gaze behaviors are acquired through deliberate practice of technical skills, three surgeons were assessed, and five novices were trained and assessed, in a 5-visit protocol on the Fundamentals of Laparoscopic Surgery peg transfer task. The task was adjusted to have a fixed action sequence to allow recording of dwell durations on pre-defined areas of interest (AOIs). Trained novices reached more than 98% (M = 98.62%, SD = 1.06%) of their behavioral learning plateaus, leading to behavioral performance equivalent to that of the surgeons. Despite this equivalence in behavioral performance, surgeons continued to show significantly shorter dwell durations at visual targets of current actions and longer dwell durations at future steps in the action sequence than trained novices (ps ≤ .03, Cohen’s ds > 2). This study demonstrates that, while novices can train to match surgeons on behavioral performance, their gaze patterns remain less efficient than those of surgeons, motivating surgical training programs to incorporate eye-tracking technology in their design and evaluation.
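As an illustrative aside (not the authors' pipeline), dwell duration per area of interest can be obtained by summing fixation durations within each AOI; the AOI labels and durations below are made up.

```python
# Hedged sketch: total dwell duration per pre-defined AOI from hypothetical fixations.
from collections import defaultdict

# Each fixation: (aoi_label, duration_ms); labels and values are invented for the example.
fixations = [("current_target", 220), ("next_step", 140),
             ("current_target", 180), ("tool_tip", 90), ("next_step", 200)]

dwell_ms = defaultdict(int)
for aoi, duration in fixations:
    dwell_ms[aoi] += duration

print(dict(dwell_ms))  # e.g. {'current_target': 400, 'next_step': 340, 'tool_tip': 90}
```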