Journal of Eye Movement Research is published by MDPI from Volume 18 Issue 1 (2025). Previous articles were published by another publisher in Open Access under a CC-BY (or CC-BY-NC-ND) licence, and they are hosted by MDPI on mdpi.com as a courtesy and upon agreement with Bern Open Publishing (BOP).

J. Eye Mov. Res., Volume 15, Issue 3 (June 2022) – 11 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
6 pages, 254 KiB  
Article
Introduction to the Special Thematic Issue “Virtual Reality and Eye Tracking”
by Béatrice S. Hasler and Rudolf Groner
J. Eye Mov. Res. 2022, 15(3), 1-6; https://doi.org/10.16910/jemr.15.3.1 - 2 Jun 2024
Viewed by 71
Abstract
Technological advancements have made it possible to integrate eye tracking in virtual reality (VR) and augmented reality (AR), and many new VR/AR headsets already include eye tracking as a standard feature [...] Full article
16 pages, 2600 KiB  
Article
Cognitive Load Estimation in VR Flight Simulator
by P Archana Hebbar, Sanjana Vinod, Aumkar Kishore Shah, Abhay A Pashilkar and Pradipta Biswas
J. Eye Mov. Res. 2022, 15(3), 1-16; https://doi.org/10.16910/jemr.15.3.11 - 5 Jul 2023
Cited by 12 | Viewed by 151
Abstract
This paper discusses the design and development of a low-cost virtual reality (VR) based flight simulator with a cognitive load estimation feature using ocular and EEG signals. The focus is on exploring methods to evaluate the pilot’s interactions with the aircraft by quantifying the pilot’s perceived cognitive load under different task scenarios. A realistic target-tracking task and battlefield context are designed in VR. A head-mounted eye gaze tracker and an EEG headset are used to acquire pupil diameter, gaze fixation, gaze direction, and EEG theta, alpha, and beta band power data in real time. We developed an AI agent model in VR and created scenarios of interactions with the piloted aircraft. To estimate the pilot’s cognitive load, we used low-frequency pupil diameter variations, fixation rate, gaze distribution pattern, an EEG signal-based task load index, and an EEG task engagement index. We compared the physiological measures of workload with standard inceptor-control-based workload metrics. Results of the piloted simulation study indicate that the metrics discussed in the paper are strongly associated with the pilot’s perceived task difficulty. Full article
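For readers unfamiliar with the EEG indices mentioned above, the sketch below shows one commonly used formulation: the task engagement index as beta / (alpha + theta) band power (after Pope et al., 1995) and a theta/alpha ratio as a task load proxy. These are literature-standard formulas assumed for illustration, not necessarily the exact definitions used in the paper.

```python
import numpy as np

def band_power(psd, freqs, lo, hi):
    """Integrate power spectral density over a frequency band."""
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

def eeg_indices(signal, fs):
    """Engagement = beta / (alpha + theta) (Pope et al., 1995);
    theta / alpha serves as a common task load proxy."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    theta = band_power(psd, freqs, 4, 8)
    alpha = band_power(psd, freqs, 8, 13)
    beta = band_power(psd, freqs, 13, 30)
    return beta / (alpha + theta), theta / alpha

# Example: 10 s of synthetic "EEG" with a dominant 10 Hz (alpha) rhythm
fs = 256
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
engagement, load = eeg_indices(eeg, fs)
print(f"engagement index: {engagement:.3f}, task load index: {load:.3f}")
```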
19 pages, 40227 KiB  
Article
Let’s get it Started: Eye Tracking in VR with the Pupil Labs Eye Tracking Add-On for the HTC Vive
by Judith Josupeit
J. Eye Mov. Res. 2022, 15(3), 1-19; https://doi.org/10.16910/jemr.15.3.10 - 19 Jun 2023
Cited by 1 | Viewed by 129
Abstract
Combining eye tracking and virtual reality (VR) is a promising approach to tackling various applied research questions. As this approach is relatively new, routines are not yet established and the first steps can be full of potential pitfalls. The present paper gives a practical example to lower the barriers to getting started. More specifically, I focus on an affordable add-on technology, the Pupil Labs eye tracking add-on for the HTC Vive. Because all relevant source code for the add-on is available on GitHub, a high degree of freedom in preprocessing, visualizing, and analyzing eye tracking data in VR can be achieved. At the same time, some extra preparatory steps for the setup of hardware and software are necessary. Therefore, specifics of eye tracking in VR, from unboxing, software integration, and procedures to analyzing the data and maintaining the hardware, will be addressed. The Pupil Labs eye tracking add-on for the HTC Vive represents a highly transparent approach compared to existing alternatives. Characteristics of eye tracking in VR, in contrast to other head-mounted and remote eye trackers applied in the physical world, will be discussed. In conclusion, the paper contributes to the idea of open science in two ways: first, by making the necessary routines transparent and therefore reproducible; second, by stressing the benefits of using open source software. Full article
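As a flavour of the kind of preprocessing such a workflow involves, here is a minimal Python sketch that loads a Pupil Player CSV export and discards low-confidence samples. The file name, column names, and the 0.6 confidence threshold follow common Pupil Labs export conventions but are assumptions to verify against your own recording.

```python
import pandas as pd

# File and column names follow common Pupil Player exports
# (pupil_positions.csv with 'confidence', 'eye_id', 'diameter');
# verify them against your own recording.
CONFIDENCE_THRESHOLD = 0.6  # samples below this are usually discarded

df = pd.read_csv("pupil_positions.csv")
n_total = len(df)

# Drop low-confidence samples (blinks, failed pupil detection)
df = df[df["confidence"] >= CONFIDENCE_THRESHOLD]
print(f"kept {len(df)}/{n_total} samples")

# Mean pupil diameter per eye as a quick sanity check
print(df.groupby("eye_id")["diameter"].mean())
```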
14 pages, 1548 KiB  
Article
An Investigation of Feed-Forward and Feedback Eye Movement Training in Immersive Virtual Reality
by David J. Harris, Mark R. Wilson, Martin I. Jones, Toby de Burgh, Daisy Mundy, Tom Arthur, Mayowa Olonilua and Samuel J. Vine
J. Eye Mov. Res. 2022, 15(3), 1-14; https://doi.org/10.16910/jemr.15.3.7 - 12 Jun 2023
Cited by 2 | Viewed by 70
Abstract
The control of eye gaze is critical to the execution of many skills. The observation that task experts in many domains exhibit more efficient control of eye gaze than novices has led to the development of gaze training interventions that teach these behaviours. We aimed to extend this literature by i) examining the relative benefits of feed-forward (observing an expert’s eye movements) versus feedback (observing your own eye movements) training, and ii) automating this training within virtual reality. Serving personnel from the British Army and Royal Navy were randomised to either feed-forward or feedback training within a virtual reality simulation of a room search and clearance task. Eye movement metrics – including visual search, saccade direction, and entropy – were recorded to quantify the efficiency of visual search behaviours. Feed-forward and feedback eye movement training produced distinct learning benefits, but both accelerated the development of efficient gaze behaviours. However, we found no evidence that these more efficient search behaviours transferred to better decision making in the room clearance task. Our results suggest that integrating eye movement training principles within virtual reality training simulations may be effective, but further work is needed to understand the learning mechanisms. Full article
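Gaze entropy, one of the metrics listed above, is often computed as the normalized Shannon entropy of the fixation distribution over areas of interest, with lower entropy indicating more structured search. A minimal sketch of that common formulation (not necessarily the paper’s exact definition):

```python
import numpy as np

def stationary_gaze_entropy(aoi_labels):
    """Normalized Shannon entropy of the fixation distribution over
    areas of interest (0 = all fixations on one AOI, 1 = uniform)."""
    _, counts = np.unique(aoi_labels, return_counts=True)
    p = counts / counts.sum()
    h = -np.sum(p * np.log2(p))
    return h / np.log2(len(p)) if len(p) > 1 else 0.0

# Example: a fixation sequence labelled by AOI
print(stationary_gaze_entropy(["door", "window", "door", "corner", "door"]))
```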
17 pages, 19642 KiB  
Article
MAP3D: An Explorative Approach for Automatic Mapping of Real-World Eye-Tracking Data on a Virtual 3D Model
by Isabell Stein, Helen Jossberger and Hans Gruber
J. Eye Mov. Res. 2022, 15(3), 1-17; https://doi.org/10.16910/jemr.15.3.8 - 31 May 2023
Cited by 3 | Viewed by 54
Abstract
Mobile eye tracking helps to investigate real-world settings in which participants can move freely. This enhances a study’s ecological validity but poses challenges for the analysis. Often, the 3D stimulus is reduced to a 2D image (reference view) and the fixations are manually mapped to this 2D image. This leads to a loss of information about the three-dimensionality of the stimulus. Using several reference images from different perspectives poses new problems, in particular concerning the mapping of fixations in the transition areas between two reference views. A newly developed approach (MAP3D) is presented that enables generating a 3D model and automatically mapping fixations to this virtual 3D model of the stimulus. This avoids problems with the reduction to a 2D reference image and with transitions between images. The x, y, and z coordinates of the fixations are available as a point cloud and as .csv output. First exploratory application and evaluation tests are promising: MAP3D offers innovative ways of post-hoc mapping of fixation data onto 3D stimuli with open-source software and thus provides cost-efficient new avenues for research. Full article
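The point cloud plus .csv output described above can be reproduced in miniature with the standard library and NumPy. The coordinates and column schema below are illustrative assumptions, not the actual MAP3D output format:

```python
import csv
import numpy as np

# Hypothetical fixations mapped onto the 3D model: x, y, z in model
# units plus fixation duration; the real MAP3D schema may differ.
fixations = [
    (0.12, 1.45, 0.98, 230),
    (0.15, 1.40, 1.02, 410),
    (0.80, 0.95, 1.10, 180),
]

with open("fixations_3d.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["x", "y", "z", "duration_ms"])
    writer.writerows(fixations)

# Reload as an N x 3 point cloud for downstream analysis
cloud = np.loadtxt("fixations_3d.csv", delimiter=",", skiprows=1)
print(cloud[:, :3])
```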
20 pages, 3528 KiB  
Article
A Systematic Performance Comparison of Two Smooth Pursuit Detection Algorithms in Virtual Reality Depending on Target Number, Distance, and Movement Patterns
by Sarah-Christin Freytag, Roland Zechner and Michelle Kamps
J. Eye Mov. Res. 2022, 15(3), 1-20; https://doi.org/10.16910/jemr.15.3.9 - 29 May 2023
Cited by 1 | Viewed by 48
Abstract
We compared the performance of two smooth-pursuit-based object selection algorithms in Virtual Reality (VR). To assess the best algorithm for a range of configurations, we systematically varied the number of targets to choose from, their distance, and their movement pattern (linear and circular). Performance was operationalized as the ratio of hits, misses, and non-detections. Averaged over all distances, the correlation-based algorithm performed better for circular movement patterns than for linear ones (F(1,11) = 24.27, p < .001, η² = .29). This was not found for the difference-based algorithm (F(1,11) = 0.98, p = .344, η² = .01). Both algorithms performed better at close distances than at larger ones (F(1,11) = 190.77, p < .001, η² = .75, correlation-based, and F(1,11) = 148.20, p < .001, η² = .42, difference-based). An interaction effect of distance × movement emerged. When the number of targets was varied systematically, these results were replicated with a slightly smaller effect. Full article
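Correlation-based smooth pursuit selection typically correlates the gaze trajectory with each candidate target’s trajectory over a sliding window and selects the best-matching target above a threshold (as in the well-known Pursuits technique). A minimal sketch of that idea, with window length and threshold chosen for illustration rather than taken from the paper:

```python
import numpy as np

def pursuit_correlation(gaze_xy, target_xy):
    """Mean Pearson correlation of gaze and target trajectories
    over the x and y axes."""
    rx = np.corrcoef(gaze_xy[:, 0], target_xy[:, 0])[0, 1]
    ry = np.corrcoef(gaze_xy[:, 1], target_xy[:, 1])[0, 1]
    return (rx + ry) / 2

def select_target(gaze_xy, targets_xy, threshold=0.8):
    """Index of the target whose trajectory best matches gaze, or
    None if no correlation exceeds the threshold (non-detection)."""
    scores = [pursuit_correlation(gaze_xy, t) for t in targets_xy]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None

# Example: gaze follows target 1 on a circular path (90-sample window)
t = np.linspace(0, 2 * np.pi, 90)
target0 = np.column_stack([t, 0.5 * t])                  # linear mover
target1 = np.column_stack([np.cos(t), np.sin(t)])        # circular mover
gaze = target1 + 0.05 * np.random.randn(*target1.shape)  # noisy pursuit
print(select_target(gaze, [target0, target1]))           # -> 1
```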
16 pages, 2606 KiB  
Article
Flipping the World Upside Down: Using Eye Tracking in Virtual Reality to Study Visual Search in Inverted Scenes
by Julia Beitner, Jason Helbing, Dejan Draschkow, Erwan J. David and Melissa L.-H. Võ
J. Eye Mov. Res. 2022, 15(3), 1-16; https://doi.org/10.16910/jemr.15.3.5 - 31 Mar 2023
Cited by 3 | Viewed by 72
Abstract
Image inversion is a powerful tool for investigating cognitive mechanisms of visual perception. However, studies have mainly used inversion in paradigms presented on two-dimensional computer screens. It remains open whether the disruptive effects of inversion also hold true in more naturalistic scenarios. In our study, we used scene inversion in virtual reality in combination with eye tracking to investigate the mechanisms of repeated visual search through three-dimensional immersive indoor scenes. Scene inversion affected all gaze and head measures except fixation durations and saccade amplitudes. Surprisingly, our behavioral results did not entirely follow our hypotheses: while search efficiency dropped significantly in inverted scenes, participants did not utilize more memory as measured by search time slopes. This indicates that despite the disruption, participants did not try to compensate for the increased difficulty by using more memory. Our study highlights the importance of investigating classical experimental paradigms in more naturalistic scenarios to advance research on daily human behavior. Full article
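Search time slopes, the memory measure referred to above, are simply the fitted slope of search time across repeated searches for the same target; a steeper negative slope indicates stronger reliance on memory. A toy illustration with made-up numbers:

```python
import numpy as np

# Search times (s) for the same target over repeated searches;
# values are illustrative, not the study's data.
repetition = np.arange(1, 6)
search_time = np.array([4.2, 3.1, 2.5, 2.2, 2.0])

# Slope of search time over repetitions indexes memory use
slope, intercept = np.polyfit(repetition, search_time, deg=1)
print(f"search time slope: {slope:.2f} s per repetition")
```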
11 pages, 3407 KiB  
Article
The Pupil Near Response Is Short Lasting and Intact in Virtual Reality Head Mounted Displays
by Hidde Pielage, Adriana A. Zekveld, Sjors van de Ven, Sophia E. Kramer and Marnix Naber
J. Eye Mov. Res. 2022, 15(3), 1-11; https://doi.org/10.16910/jemr.15.3.6 - 27 Mar 2023
Cited by 5 | Viewed by 66
Abstract
The pupil of the eye constricts when focus moves from an object farther away to an object closer by. This is called the pupil near response, which typically occurs together with accommodation and vergence responses. When immersed in virtual reality mediated through a head-mounted display, this triad is disrupted by the vergence-accommodation conflict. However, it is not yet clear whether the disruption also affects the pupil near response. Two experiments were performed to assess this. The first experiment had participants follow a target that first appeared at a far position and then moved either to a near position (far-to-near; FN) or to another far position (far-to-far; FF). The second experiment had participants follow a target that jumped between five positions, which was repeated at several distances. Experiment 1 showed a greater pupil constriction amplitude for FN trials compared to FF trials, suggesting that the pupil near response is intact in head-mounted display mediated virtual reality. Experiment 2 did not find that average pupil dilation differed when fixating targets at different distances, suggesting that the pupil near response is transient and does not result in sustained pupil size changes. Full article
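Pupil constriction amplitude is commonly quantified as the baseline-corrected peak constriction following the fixation shift. The sketch below uses assumed window lengths and a synthetic trace; the paper’s actual analysis settings may differ:

```python
import numpy as np

def constriction_amplitude(pupil, fs, baseline_s=0.5):
    """Baseline-corrected peak constriction after a fixation shift.

    `pupil` is a 1D trace aligned so the target jump occurs right
    after the baseline window; windows here are assumptions.
    """
    n_base = int(baseline_s * fs)
    baseline = pupil[:n_base].mean()
    return baseline - pupil[n_base:].min()  # positive = constriction

# Example: synthetic trace with a transient constriction
fs = 120
trace = np.concatenate([np.full(60, 5.0),             # baseline
                        5.0 - 0.8 * np.hanning(120),  # transient dip
                        np.full(60, 5.0)])            # recovery
print(constriction_amplitude(trace, fs))              # ~0.8
```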
16 pages, 2526 KiB  
Article
Assessment of Cognitive Biases in Augmented Reality: Beyond Eye Tracking
by Piotr Słowiński, Ben Grindley, Helen Muncie, David J Harris, Samuel J Vine and Mark R Wilson
J. Eye Mov. Res. 2022, 15(3), 1-16; https://doi.org/10.16910/jemr.15.3.4 - 30 Dec 2022
Cited by 3 | Viewed by 73
Abstract
We study an individual’s propensity for rational thinking, i.e., the avoidance of cognitive biases (unconscious errors generated by our mental simplification methods), using a novel augmented reality (AR) platform. Specifically, we developed an odd-one-out (OOO) game-like task in AR designed to try to induce and assess confirmatory biases. Forty students completed the AR task in the laboratory, and the short form of the comprehensive assessment of rational thinking (CART) online via the Qualtrics platform. We demonstrate that behavioural markers (based on eye, hand and head movements) can be associated, via linear regression, with the short CART score: more rational thinkers have slower head and hand movements and faster gaze movements in the second, more ambiguous round of the OOO task. Furthermore, short CART scores can be associated with the change in behaviour between the two rounds of the OOO task (one less and one more ambiguous): the hand-eye-head coordination patterns of the more rational thinkers are more consistent across the two rounds. Overall, we demonstrate the benefits of augmenting eye-tracking recordings with additional data modalities when trying to understand complicated behaviours. Full article
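The association analysis described above amounts to an ordinary linear regression of the short CART score on behavioural markers. A sketch on synthetic data; the marker set and effect signs are illustrative, chosen only to mirror the reported direction of effects:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative only: rows are participants, columns are hypothetical
# markers (mean head speed, hand speed, gaze speed in the ambiguous
# round); y stands in for the short CART score.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = (-0.4 * X[:, 0] - 0.3 * X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(scale=0.5, size=40))

model = LinearRegression().fit(X, y)
print("R^2:", model.score(X, y))
# Sign pattern mirrors the reported association:
# slower head/hand, faster gaze -> higher CART
print("coefficients:", model.coef_)
```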
10 pages, 2044 KiB  
Article
Maintaining Fixation by Children in a Virtual Reality Version of Pupil Perimetry
by Brendan L. Portengen, Marnix Naber, Demi Jansen, Carlijn van den Boomen, Saskia M. Imhof and Giorgio L. Porro
J. Eye Mov. Res. 2022, 15(3), 1-10; https://doi.org/10.16910/jemr.15.3.2 - 19 Sep 2022
Cited by 7 | Viewed by 74
Abstract
The assessment of the visual field in young children continues to be a challenge. Children often do not sit still, fail to fixate stimuli for longer durations, and have limited verbal capacity to report visibility. Therefore, we introduced a head-mounted VR display with gaze-contingent flicker pupil perimetry (VRgcFPP). We presented large flickering patches at different eccentricities and angles in the periphery to evoke pupillary oscillations, and three fixation stimulus conditions to determine best practices for optimal fixation and pupil response quality. A total of twenty children (3-11 y) passively fixated a dot, counted the repeated appearance of an animated character (counting task), and watched an animated movie in separate trials of 80 s each (20 patch locations, 4 s per location). The results showed that gaze precision and accuracy did not differ significantly across the fixation conditions, but pupil amplitudes were strongest for the dot and counting tasks. The VR set-up appears to be an ideal apparatus for children, allowing free range of movement, an engaging visual task, and reliable eye measurements. We recommend the use of the fixation counting task for pupil perimetry because children enjoyed it the most and it elicited the strongest pupil responses. Full article
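In flicker pupil perimetry, the pupillary response to a patch is typically quantified as the amplitude of pupil oscillation at the stimulus flicker frequency, which can be read off the FFT of the pupil trace. A minimal sketch; the sampling rate and 2 Hz flicker rate are assumptions, not the study’s settings:

```python
import numpy as np

def flicker_amplitude(pupil, fs, flicker_hz):
    """Amplitude of pupil oscillation at the stimulus flicker
    frequency, estimated from the FFT of a mean-centered trace."""
    trace = pupil - pupil.mean()
    spectrum = np.abs(np.fft.rfft(trace)) * 2 / len(trace)
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - flicker_hz))]

# Example: 4 s trace at 90 Hz with a 2 Hz flicker response
# (flicker rate here is an assumption, not the paper's setting)
fs, f = 90, 2.0
t = np.arange(0, 4, 1 / fs)
pupil = 5 + 0.3 * np.sin(2 * np.pi * f * t) + 0.05 * np.random.randn(t.size)
print(flicker_amplitude(pupil, fs, f))  # ~0.3
```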
18 pages, 2737 KiB  
Article
Eye Tracking in Virtual Reality: Vive Pro Eye Spatial Accuracy, Precision, and Calibration Reliability
by Immo Schuetz and Katja Fiehler
J. Eye Mov. Res. 2022, 15(3), 1-18; https://doi.org/10.16910/jemr.15.3.3 - 7 Sep 2022
Cited by 48 | Viewed by 131
Abstract
A growing number of virtual reality devices now include eye tracking technology, which can facilitate oculomotor and cognitive research in VR and enable use cases like foveated rendering. These applications require different levels of tracking performance, often measured as spatial accuracy and precision. While manufacturers report data quality estimates for their devices, these typically represent ideal performance and may not reflect real-world data quality. Additionally, it is unclear how accuracy and precision change across sessions within the same participant or between devices, and how performance is influenced by vision correction. Here, we measured the spatial accuracy and precision of the Vive Pro Eye built-in eye tracker across a range of 30 visual degrees horizontally and vertically. Participants completed ten measurement sessions over multiple days, allowing us to evaluate calibration reliability. Accuracy and precision were highest for central gaze and decreased with greater eccentricity on both axes. Calibration was successful in all participants, including those wearing contacts or glasses, but glasses yielded significantly lower performance. We further found differences in accuracy (but not precision) between two Vive Pro Eye headsets, and estimated participants’ inter-pupillary distance. Our metrics suggest high calibration reliability and can serve as a baseline for expected eye tracking performance in VR experiments. Full article
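Spatial accuracy is conventionally defined as the mean angular offset between measured gaze directions and the fixated target, and precision as the root mean square of sample-to-sample angular distances. A minimal sketch of both metrics on 3D gaze direction vectors, using these conventional definitions rather than the paper’s exact pipeline:

```python
import numpy as np

def angle_deg(v1, v2):
    """Angle in degrees between 3D direction vectors (row-wise)."""
    v1 = v1 / np.linalg.norm(v1, axis=-1, keepdims=True)
    v2 = v2 / np.linalg.norm(v2, axis=-1, keepdims=True)
    cos = np.clip(np.sum(v1 * v2, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def accuracy_deg(gaze_dirs, target_dir):
    """Spatial accuracy: mean angular offset from the fixated target."""
    return angle_deg(gaze_dirs, target_dir).mean()

def precision_rms_deg(gaze_dirs):
    """Spatial precision: RMS of sample-to-sample angular distances."""
    d = angle_deg(gaze_dirs[:-1], gaze_dirs[1:])
    return np.sqrt(np.mean(d ** 2))

# Example: noisy gaze samples around a target straight ahead
rng = np.random.default_rng(1)
target = np.array([0.0, 0.0, 1.0])
gaze = target + rng.normal(scale=0.01, size=(500, 3))
print(accuracy_deg(gaze, target), precision_rms_deg(gaze))
```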