Journal of Eye Movement Research is published by MDPI from Volume 18 Issue 1 (2025). Previous articles were published by another publisher in Open Access under a CC-BY (or CC-BY-NC-ND) licence, and they are hosted by MDPI on mdpi.com as a courtesy and upon agreement with Bern Open Publishing (BOP).

J. Eye Mov. Res., Volume 14, Issue 4 (August 2021) – 6 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; the PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
25 pages, 3209 KiB  
Article
Review on Eye-Hand Span in Sight-Reading of Music
by Joris Perra, Bénédicte Poulin-Charronnat, Thierry Baccino and Véronique Drai-Zerbib
J. Eye Mov. Res. 2021, 14(4), 1-25; https://doi.org/10.16910/jemr.14.4.4 - 11 Nov 2021
Cited by 8 | Viewed by 80
Abstract
In a sight-reading task, the position of the eyes on the score is generally further ahead than the note being produced on the instrument. This anticipation allows musicians to identify the upcoming notes and possible difficulties and to plan their gestures accordingly. The eye-hand span (EHS) corresponds to this offset between the eye and the hand and measures the distance or latency between an eye fixation on the score and the production of the corresponding note on the instrument. While the EHS is usually quite short, its size can vary with multiple factors: it increases with the musician's level of expertise, diminishes as a function of the complexity of the score, and can vary depending on the context in which the piece is played. By summarizing the main factors that affect EHS and the methodologies used in this field of study, the present review of the literature highlights that a) to ensure effective sight-reading, the EHS must be adaptable and optimized in size (neither too long nor too short), with the best sight-readers exhibiting a high level of perceptual flexibility in adapting their span to the complexity of the score; b) it is important to interpret EHS in the light of the specificities of the score, given that it varies considerably both within and between scores; and c) the flexibility of EHS can be a good indicator of the perceptual and cognitive capacities of musicians, showing that a musician's gaze can be drawn early to a complexity in a still-distant part of the score. These various points are discussed in the light of the literature on music-reading expertise. Promising avenues of research using eye tracking are proposed in order to further our knowledge of the construction of an expertise that requires multisensory integration.

16 pages, 1606 KiB  
Article
Visual Scanpath Training to Emotional Faces Following Severe Traumatic Brain Injury: A Single Case Design
by Suzane Vassallo and Jacinta Douglas
J. Eye Mov. Res. 2021, 14(4), 1-16; https://doi.org/10.16910/jemr.14.4.6 - 21 Oct 2021
Cited by 3 | Viewed by 42
Abstract
The visual scanpath to emotional facial expressions was recorded in BR, a 35-year-old male with chronic severe traumatic brain injury (TBI), both before and after he underwent intervention. The novel intervention paradigm combined visual scanpath training with verbal feedback and was implemented over a 3-month period using a single case design (AB) with one follow-up session. At baseline, BR's scanpath was restricted, characterised by gaze allocation primarily to salient facial features on the right side of the face stimulus. Following intervention, his visual scanpath became more lateralised, although he continued to demonstrate an attentional bias to the right side of the face stimulus. This study is the first to demonstrate change in both the pattern and the position of the visual scanpath to emotional faces following intervention in a person with chronic severe TBI. In addition, these findings extend our previous work, suggesting that modification of the visual scanpath through targeted facial feature training can support improved facial recognition performance in a person with severe TBI.

30 pages, 1197 KiB  
Article
The Association of Eye Movements and Performance Accuracy in a Novel Sight-Reading Task
by Lucas Lörch
J. Eye Mov. Res. 2021, 14(4), 1-30; https://doi.org/10.16910/jemr.14.4.5 - 21 Oct 2021
Cited by 4 | Viewed by 52
Abstract
The present study investigated how eye movements were associated with performance accuracy during sight-reading. Participants performed a complex span task in which sequences of single quarter note symbols, which either did or did not enable chunking, were presented for subsequent serial recall. In between the presentation of each note, participants sight-read a notated melody on an electric piano at a tempo of 70 bpm. All melodies were unique but contained four types of note pairs: eighth-eighth, eighth-quarter, quarter-eighth, and quarter-quarter. Analyses revealed that reading with fewer fixations was associated with more accurate note onsets. Fewer fixations might be advantageous for sight-reading because fewer saccades have to be planned and less information has to be integrated. Moreover, the quarter-quarter note pair was read with a larger number of fixations, and the eighth-quarter note pair was read with a longer gaze duration. This suggests that when rhythm is processed, additional beats might trigger re-fixations and unconventional rhythmical patterns might trigger longer gazes. Neither recall accuracy nor chunking processes were found to explain additional variance in the eye movement data.

8 pages, 483 KiB  
Article
Can Longer Gaze Duration Determine Risky Investment Decisions? An Interactive Perspective
by Yiheng Wang and Yanping Liu
J. Eye Mov. Res. 2021, 14(4), 1-8; https://doi.org/10.16910/jemr.14.4.3 - 21 Sep 2021
Cited by 2 | Viewed by 50
Abstract
Can longer gaze duration determine risky investment decisions? Recent studies have tested how gaze influences people's decisions and the boundary conditions of the gaze effect. The current experiment used an adaptive gaze-contingent manipulation with an added self-determined option to test whether longer gaze duration can determine risky investment decisions. The results showed that both the expected value of each option and the gaze duration influenced people's decisions. This result is consistent with the attentional drift-diffusion model (aDDM) proposed by Krajbich et al. (2010), which suggests that gaze can influence the choice process by amplifying the value of the fixated option. Gaze duration would therefore influence the decision when people do not have a clear preference. The results also showed that the similarity between options and the computational difficulty influenced the gaze effect. This finding is inconsistent with prior research that used option similarity to represent difficulty, suggesting that similarity between options and computational difficulty involve different underlying mechanisms of decision difficulty.

21 pages, 1447 KiB  
Article
Metacognitive Monitoring and Metacognitive Strategies of Gifted and Average Children on Dealing with Deductive Reasoning Task
by Ondřej Straka, Šárka Portešová, Daniela Halámková and Michal Jabůrek
J. Eye Mov. Res. 2021, 14(4), 1-21; https://doi.org/10.16910/jemr.14.4.1 - 14 Sep 2021
Cited by 3 | Viewed by 82
Abstract
In this paper, we inquire into possible differences between children with exceptionally high intellectual abilities and their average peers as regards metacognitive monitoring and related metacognitive strategies. The question of whether gifted children surpass their typically developing peers not only in intellectual abilities but also in their level of metacognitive skills has not been convincingly answered so far. We sought to examine indicators of metacognitive behavior by means of eye-tracking technology and to compare these findings with the participants' subjective confidence ratings. Eye-movement data of gifted and average students attending the final grades of primary school (4th and 5th grades) were recorded while they dealt with a deductive reasoning task, and four metrics assumed to reflect metacognitive skills were analyzed: overall trial duration, mean fixation duration, number of regressions, and normalized gaze transition entropy. No significant differences between gifted and average children were found in normalized gaze transition entropy, in mean fixation duration, or (after controlling for trial duration) in number of regressions. The two groups differed in the time devoted to solving the task. They also differed significantly in the association between time devoted to the task and subjective confidence rating: only the gifted children tended to devote more time when they felt less confident. Several implications of these findings are discussed.

15 pages, 1075 KiB  
Article
I2DNet—Design and Real-Time Evaluation of Appearance-Based Gaze Estimation System
by L R D Murthy, Siddhi Brahmbhatt, Somnath Arjun and Pradipta Biswas
J. Eye Mov. Res. 2021, 14(4), 1-15; https://doi.org/10.16910/jemr.14.4.2 - 31 Aug 2021
Cited by 10 | Viewed by 78
Abstract
The gaze estimation problem can be addressed using either model-based or appearance-based approaches. Model-based approaches rely on features extracted from eye images to fit a 3D eyeball model and obtain a gaze point estimate, while appearance-based methods attempt to map captured eye images directly to a gaze point without any handcrafted features. Recently, the availability of large datasets and novel deep learning techniques has enabled appearance-based methods to achieve superior accuracy to model-based approaches. However, many appearance-based gaze estimation systems perform well in within-dataset validation but fail to provide the same degree of accuracy in cross-dataset evaluation. Hence, it is still unclear how well the current state-of-the-art approaches perform in real time in an interactive setting on unseen users. This paper proposes I2DNet, a novel architecture aimed at improving subject-independent gaze estimation accuracy, which achieved state-of-the-art mean angular errors of 4.3 and 8.4 degrees on the MPIIGaze and RT-Gene datasets, respectively. We evaluated the proposed system as a real-time gaze-controlled interface for a 9-block pointing and selection task and compared it with Webgazer.js and OpenFace 2.0. In a user study with 16 participants, our proposed system yielded statistically significant reductions in selection time and in the number of missed selections compared to the other two systems.
