A Data-Driven Approach for Comparing Gaze Allocation Across Conditions
Abstract
1. Introduction
2. Materials and Methods
2.1. Design
2.2. Participants
2.3. Stimuli
2.4. Setup and Eye-Tracking Recording
2.5. Procedure
2.6. Analysis
2.6.1. ROI Analysis
2.6.2. DNN-Based Approach
Training and Validation Data
DNN
Statistical Inference
3. Results
3.1. ROI Analysis
3.2. DNN-Based Analysis
3.3. Classification Images
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References

| Chance Accuracy | 0.0025 Quantile Accuracy | 0.9975 Quantile Accuracy |
|---|---|---|
| 0.4484 | 0.5490 | 0.6180 |
| 0.5042 | 0.4030 | 0.4800 |
| 0.5499 | 0.4060 | 0.4740 |
| 0.4075 | 0.5640 | 0.6390 |
| 0.4903 | 0.5210 | 0.6060 |
| 0.5501 | 0.4480 | 0.5450 |
| 0.4483 | 0.4800 | 0.5540 |
| 0.5215 | 0.4720 | 0.5440 |
| 0.4538 | 0.5790 | 0.6410 |
| 0.5383 | 0.4270 | 0.4990 |
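
This excerpt does not show how the quantile columns above were computed, but together with the Statistical Inference heading in the outline they are consistent with an empirical null distribution of classifier accuracy: the 0.0025 and 0.9975 quantiles bound a two-tailed interval at α = 0.005 (e.g., 0.05 Bonferroni-corrected across the ten rows), so an observed accuracy outside that interval would be unlikely under chance. The sketch below is a minimal illustration under that assumption, not the authors' code; `chance_accuracy_quantiles`, the label-permutation scheme, and the simulated data are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def chance_accuracy_quantiles(labels, predictions, n_perm=10_000,
                              q=(0.0025, 0.9975)):
    """Empirical quantiles of classification accuracy under the null.

    Shuffles the condition labels n_perm times to break any real
    label/prediction association, recomputes accuracy each time, and
    returns the requested quantiles of the resulting null distribution.
    """
    labels = np.asarray(labels)
    predictions = np.asarray(predictions)
    null_acc = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = rng.permutation(labels)  # permute labels only
        null_acc[i] = np.mean(shuffled == predictions)
    return np.quantile(null_acc, q)

# Hypothetical usage with simulated binary condition labels/predictions
labels = rng.integers(0, 2, size=1000)
preds = rng.integers(0, 2, size=1000)
lo, hi = chance_accuracy_quantiles(labels, preds)
print(f"accuracy outside [{lo:.4f}, {hi:.4f}] is unlikely under chance")
```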
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Prosser, J.; Metzger, A.; Toscani, M. A Data-Driven Approach for Comparing Gaze Allocation Across Conditions. J. Eye Mov. Res. 2026, 19, 33. https://doi.org/10.3390/jemr19020033