Eye Tracking and Visualization

Special Issue Editor

Dr. Michael Burch
Guest Editor
Center for Data Analytics, Visualization, and Simulation, University of Applied Sciences, 7000 Chur, Switzerland
Interests: eye tracking; information visualization; visual analytics; data science; software engineering

Special Issue Information

Dear Colleagues,

Applying eye tracking technology to domain-specific research questions generates vast amounts of spatio-temporal data. Algorithmic analysis of the recorded data can help identify patterns and anomalies; visualizations, however, exploit human perceptual abilities to detect visual patterns and thus offer a powerful, often faster complement to purely algorithmic solutions. This Special Issue focuses on concepts, approaches, and techniques that use interactive visualizations to analyze eye movement data, or that apply gaze-assisted interaction to improve how users interact with visualizations and visual user interfaces. More complex visual analytics tools, combining algorithms, visualizations, and human–computer interaction to facilitate pattern finding and support decision making, are also welcome. Finally, evaluations that use eye tracking to assess visualizations and visual analytics tools, whether static or dynamic, and whether integrated into small-, medium-, or large-scale displays in the real world or in virtual, augmented, or immersive reality, are within the scope of this Special Issue.

Dr. Michael Burch
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Eye Movement Research is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • eye tracking
  • visualization
  • visual analytics
  • user evaluation
  • human–computer interaction
  • pattern identification
  • visual perception

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

14 pages, 3698 KB  
Article
Active Gaze Guidance and Pupil Dilation Effects Through Subject Engagement in Ophthalmic Imaging
by David Harings, Niklas Bauer, Damian Mendroch, Uwe Oberheide and Holger Lubatschowski
J. Eye Mov. Res. 2025, 18(5), 45; https://doi.org/10.3390/jemr18050045 - 19 Sep 2025
Abstract
Modern ophthalmic imaging methods such as optical coherence tomography (OCT) typically require expensive scanner components to direct the light beam across the retina while the patient’s gaze remains fixed. This proof-of-concept experiment investigates whether the patient’s natural eye movements can replace mechanical scanning by guiding the gaze along predefined patterns. An infrared fundus camera setup was used with nine healthy adults (aged 20–57) who completed tasks comparing passive viewing of moving patterns to actively tracing them by drawing using a touchpad interface. The active task involved participant-controlled target movement with real-time color feedback for accurate pattern tracing. Results showed that active tracing significantly increased pupil diameter by an average of 17.8% (range 8.9–43.6%; p < 0.001) and reduced blink frequency compared to passive viewing. More complex patterns led to greater pupil dilation, confirming the link between cognitive load and physiological response. These findings demonstrate that patient-driven gaze guidance can stabilize gaze, reduce blinking, and naturally dilate the pupil. These conditions might enhance the quality of scannerless OCT or other imaging techniques benefiting from guided gaze and larger pupils. There could be benefits for children and people with compliance issues, although further research is needed to consider cognitive load.
(This article belongs to the Special Issue Eye Tracking and Visualization)

20 pages, 44464 KB  
Article
Spatial Guidance Overrides Dynamic Saliency in VR: An Eye-Tracking Study on Gestalt Grouping Mechanisms and Visual Attention Patterns
by Qiaoling Zou, Wanyu Zheng, Xinyan Jiang and Dongning Li
J. Eye Mov. Res. 2025, 18(5), 37; https://doi.org/10.3390/jemr18050037 - 25 Aug 2025
Abstract
(1) Background: Virtual Reality (VR) films challenge traditional visual cognition by offering novel perceptual experiences. This study investigates the applicability of Gestalt grouping principles in dynamic VR scenes, the influence of VR environments on grouping efficiency, and the relationship between viewer experience and grouping effects. (2) Methods: Eye-tracking experiments were conducted with 42 participants using the HTC Vive Pro Eye and Tobii Pro Lab. Participants watched a non-narrative VR film with fixed camera positions to eliminate narrative and auditory confounds. Eye-tracking metrics were analyzed using SPSS version 29.0.1, and data were visualized through heat maps and gaze trajectory plots. (3) Results: Viewers tended to focus on spatial nodes and continuous structures. Initial fixations were anchored near the body but shifted rapidly thereafter. Heat maps revealed a consistent concentration of fixations on the dock area. (4) Conclusions: VR reshapes visual organization, where proximity, continuity, and closure outweigh traditional saliency. Dynamic elements draw attention only when linked to user goals. Designers should prioritize spatial logic, using functional nodes as cognitive anchors and continuous paths as embodied guides. Future work should test these mechanisms in narrative VR and explore neural correlates via fNIRS or EEG.
(This article belongs to the Special Issue Eye Tracking and Visualization)
