Flipping the World Upside Down: Using Eye Tracking in Virtual Reality to Study Visual Search in Inverted Scenes
Abstract
Introduction
Methods
Participants
Apparatus
Stimuli
Design
Procedure
Analysis
Results
Behavioral effects
Eye movement measures of search efficiency
Gaze and head measures
Discussion
Ethics and Conflict of Interest
Acknowledgements
Author Contributions
References
- Anderson, B. A., and D. S. Lee. 2023. Visual search as effortful work. Journal of Experimental Psychology: General. Advance online publication. [Google Scholar] [CrossRef]
- Anderson, N. C., W. F. Bischof, T. Foulsham, and A. Kingstone. 2020. Turning the (virtual) world around: Patterns in saccade direction vary with picture orientation and shape in virtual reality. Journal of Vision 20, 8: 21. [Google Scholar] [CrossRef] [PubMed]
- Baayen, R. H., D. J. Davidson, and D. M. Bates. 2008. Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language 59, 4: 390–412. [Google Scholar] [CrossRef]
- Barr, D. J., R. Levy, C. Scheepers, and H. J. Tily. 2013. Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language 68, 3: 255–278. [Google Scholar] [CrossRef]
- Bates, D., M. Mächler, B. Bolker, and S. Walker. 2015. Fitting linear mixed-effects models using lme4. Journal of Statistical Software 67, 1. [Google Scholar] [CrossRef]
- Beighley, S., and H. Intraub. 2016. Does inversion affect boundary extension for briefly-presented views? Visual Cognition 24, 3: 252–259. [Google Scholar] [CrossRef]
- Beitner, J., J. Helbing, D. Draschkow, and M. L.-H. Võ. 2021. Get your guidance going: Investigating the activation of spatial priors for efficient search in virtual reality. Brain Sciences 11, 1: 44. [Google Scholar] [CrossRef]
- Boettcher, S. E. P., D. Draschkow, E. Dienhart, and M. L.-H. Võ. 2018. Anchoring visual search in scenes: Assessing the role of anchor objects on eye movements during visual search. Journal of Vision 18, 13: 11. [Google Scholar] [CrossRef]
- Brockmole, J. R., and J. M. Henderson. 2006. Using real-world scenes as contextual cues for search. Visual Cognition 13, 1: 99–108. [Google Scholar] [CrossRef]
- Chemero, A. 2009. Radical embodied cognitive science. MIT Press. [Google Scholar]
- David, E. J., J. Beitner, and M. L.-H. Võ. 2020. Effects of transient loss of vision on head and eye movements during visual search in a virtual environment. Brain Sciences 10, 11: 841. [Google Scholar] [CrossRef]
- David, E. J., J. Beitner, and M. L.-H. Võ. 2021. The importance of peripheral vision when searching 3D real-world scenes: A gaze-contingent study in virtual reality. Journal of Vision 21, 7: 3. [Google Scholar] [CrossRef]
- David, E. J., P. Lebranchu, M. Perreira Da Silva, and P. Le Callet. 2020. A new toolbox to process gaze and head motion in virtual reality. In Preparation. [Google Scholar]
- David, E. J., P. Lebranchu, M. Perreira Da Silva, and P. Le Callet. 2022. What are the visuo-motor tendencies of omnidirectional scene free-viewing in virtual reality? Journal of Vision 22, 4: 12. [Google Scholar] [CrossRef] [PubMed]
- Draschkow, D. 2022. Remote virtual reality as a tool for increasing external validity. Nature Reviews Psychology 1, 8: 433–434. [Google Scholar] [CrossRef]
- Draschkow, D., M. Kallmayer, and A. C. Nobre. 2021. When natural behavior engages working memory. Current Biology 31, 4: 869–874. [Google Scholar] [CrossRef] [PubMed]
- Draschkow, D., A. C. Nobre, and F. van Ede. 2022. Multiple spatial frames for immersive working memory. Nature Human Behaviour 6, 4: 536–544. [Google Scholar] [CrossRef]
- Draschkow, D., and M. L.-H. Võ. 2017. Scene grammar shapes the way we interact with objects, strengthens memories, and speeds search. Scientific Reports 7: 16471. [Google Scholar] [CrossRef]
- Droll, J. A., and M. M. Hayhoe. 2007. Trade-offs between gaze and working memory use. Journal of Experimental Psychology: Human Perception and Performance 33, 6: 1352–1365. [Google Scholar] [CrossRef] [PubMed]
- Einhäuser, W., F. Schumann, S. Bardins, K. Bartl, G. Böning, E. Schneider, and P. König. 2007. Human eye-head co-ordination in natural exploration. Network: Computation in Neural Systems 18, 3: 267–297. [Google Scholar] [CrossRef]
- Foulsham, T., A. Kingstone, and G. Underwood. 2008. Turning the world around: Patterns in saccade direction vary with picture orientation. Vision Research 48, 17: 1777–1790. [Google Scholar] [CrossRef]
- Freedman, E. G. 2008. Coordination of the eyes and head during visual orienting. Experimental Brain Research 190, 4: 369–387. [Google Scholar] [CrossRef]
- Gutiérrez, J., E. J. David, A. Coutrot, M. P. Da Silva, and P. Le Callet. 2018. Introducing UN Salient360! Benchmark: A platform for evaluating visual attention models for 360° contents. 2018 Tenth International Conference on Quality of Multimedia Experience (QoMEX), 1–3. [Google Scholar] [CrossRef]
- Gutiérrez, J., E. J. David, Y. Rai, and P. Le Callet. 2018. Toolbox and dataset for the development of saliency and scanpath models for omnidirectional/360° still images. Signal Processing: Image Communication 69: 35–42. [Google Scholar] [CrossRef]
- Hayes, T. R., and J. M. Henderson. 2022. Scene inversion reveals distinct patterns of attention to semantically interpreted and uninterpreted features. Cognition 229: 105231. [Google Scholar] [CrossRef]
- Helbing, J., D. Draschkow, and M. L.-H. Võ. 2020. Search superiority: Goal-directed attentional allocation creates more reliable incidental identity and location memory than explicit encoding in naturalistic virtual environments. Cognition 196: 104147. [Google Scholar] [CrossRef]
- Helbing, J., D. Draschkow, and M. L.-H. Võ. 2022. Auxiliary scene-context information provided by anchor objects guides attention and locomotion in natural search behavior. Psychological Science 33, 9: 1463–1476. [Google Scholar] [CrossRef] [PubMed]
- Hoffman, L., and M. J. Rovine. 2007. Multilevel models for the experimental psychologist: Foundations and illustrative examples. Behavior Research Methods 39, 1: 101–117. [Google Scholar] [CrossRef]
- Hollingworth, A., and B. Bahle. 2019. Eye tracking in visual search experiments. In Spatial learning and attention guidance. Edited by S. Pollmann. Humana Press: Vol. 151, pp. 23–35. [Google Scholar] [CrossRef]
- Hope, R. M. 2013. Rmisc: Ryan Miscellaneous (1.5) [R package]. https://cran.r-project.org/package=Rmisc.
- Horowitz, T. S., and J. M. Wolfe. 1998. Visual search has no memory. Nature 394, 6693: 575–577. [Google Scholar] [CrossRef] [PubMed]
- Johnsdorf, M., J. Kisker, T. Gruber, and B. Schöne. 2023. Comparing encoding mechanisms in realistic virtual reality and conventional 2D laboratory settings: Event-related potentials in a repetition suppression paradigm. Frontiers in Psychology 14: 1051938. [Google Scholar] [CrossRef]
- Kass, R. E., and A. E. Raftery. 1995. Bayes factors. Journal of the American Statistical Association 90, 430: 773–795. [Google Scholar] [CrossRef]
- Kelley, T. A., M. M. Chun, and K.-P. Chua. 2003. Effects of scene inversion on change detection of targets matched for visual salience. Journal of Vision 3, 1: 1. [Google Scholar] [CrossRef]
- Kisker, J., T. Gruber, and B. Schöne. 2021. Experiences in virtual reality entail different processes of retrieval as opposed to conventional laboratory settings: A study on human memory. Current Psychology 40, 7: 3190–3197. [Google Scholar] [CrossRef]
- Kliegl, R., P. Wei, M. Dambacher, M. Yan, and X. Zhou. 2011. Experimental effects and individual differences in linear mixed models: Estimating the relationship between spatial, object, and attraction effects in visual attention. Frontiers in Psychology 1: 238. [Google Scholar] [CrossRef]
- Koehler, K., and M. P. Eckstein. 2015. Scene inversion slows the rejection of false positives through saccade exploration during search. Proceedings of the 37th Annual Meeting of the Cognitive Science Society, 1141–1146. https://cogsci.mindmodeling.org/2015/papers/0202/paper0202.pdf.
- Kuznetsova, A., P. B. Brockhoff, and R. H. B. Christensen. 2017. lmerTest package: Tests in linear mixed effects models. Journal of Statistical Software 82, 13. [Google Scholar] [CrossRef]
- Lauer, T., V. Willenbockel, L. Maffongelli, and M. L.-H. Võ. 2020. The influence of scene and object orientation on the scene consistency effect. Behavioural Brain Research 394: 112812. [Google Scholar] [CrossRef]
- Li, C.-L., M. P. Aivar, D. M. Kit, M. H. Tong, and M. M. Hayhoe. 2016. Memory and visual search in naturalistic 2D and 3D environments. Journal of Vision 16, 8: 9. [Google Scholar] [CrossRef]
- Malcolm, G. L., and J. M. Henderson. 2009. The effects of target template specificity on visual search in real-world scenes: Evidence from eye movements. Journal of Vision 9, 11: 8. [Google Scholar] [CrossRef]
- Melnik, A., F. Schüler, C. A. Rothkopf, and P. König. 2018. The world as an external memory: The price of saccades in a sensorimotor task. Frontiers in Behavioral Neuroscience 12: 253. [Google Scholar] [CrossRef]
- Meng, M., and M. C. Potter. 2008. Detecting and remembering pictures with and without visual noise. Journal of Vision 8, 9: 7. [Google Scholar] [CrossRef]
- Morey, R. D., J. N. Rouder, T. Jamil, S. Urbanek, K. Forner, and A. Ly. 2022. Package “BayesFactor” (0.9.12-4.2). https://CRAN.R-project.org/package=BayesFactor.
- Nakagawa, S., and H. Schielzeth. 2013. A general and simple method for obtaining R² from generalized linear mixed-effects models. Methods in Ecology and Evolution 4, 2: 133–142. [Google Scholar] [CrossRef]
- Neider, M. B., and G. J. Zelinsky. 2008. Exploring set size effects in scenes: Identifying the objects of search. Visual Cognition 16, 1: 1–10. [Google Scholar] [CrossRef]
- Nyström, M., and K. Holmqvist. 2010. An adaptive algorithm for fixation, saccade, and glissade detection in eyetracking data. Behavior Research Methods 42, 1: 188–204. [Google Scholar] [CrossRef]
- O’Regan, J. K. 1992. Solving the “real” mysteries of visual perception: The world as an outside memory. Canadian Journal of Psychology/Revue Canadienne de Psychologie 46, 3: 461–488. [Google Scholar] [CrossRef]
- Pan, J. S., N. Bingham, and G. P. Bingham. 2013. Embodied memory: Effective and stable perception by combining optic flow and image structure. Journal of Experimental Psychology: Human Perception and Performance 39, 6: 1638–1651. [Google Scholar] [CrossRef]
- Parsons, T. D. 2015. Virtual reality for enhanced ecological validity and experimental control in the clinical, affective and social neurosciences. Frontiers in Human Neuroscience 9: 660. [Google Scholar] [CrossRef]
- R Core Team. 2022. R: A language and environment for statistical computing (4.2.2). [Programming language R]. R Foundation for Statistical Computing. https://www.R-project.org/.
- Rock, I. 1974. The perception of disoriented figures. Scientific American 230, 1: 78–85. [Google Scholar] [CrossRef] [PubMed]
- RStudio Team. 2022. RStudio: Integrated development environment for R (2022.7.1.554). [Computer software]. Posit Software. https://posit.co/.
- Rubo, M., N. Messerli, and S. Munsch. 2021. The human source memory system struggles to distinguish virtual reality and reality. Computers in Human Behavior Reports 4: 100111. [Google Scholar] [CrossRef]
- Schöne, B., R. S. Sylvester, E. L. Radtke, and T. Gruber. 2021. Sustained inattentional blindness in virtual reality and under conventional laboratory conditions. Virtual Reality 25, 1: 209–216. [Google Scholar] [CrossRef]
- Schöne, B., M. Wessels, and T. Gruber. 2017. Experiences in virtual reality: A window to autobiographical memory. Current Psychology 38: 715–719. [Google Scholar] [CrossRef]
- Shore, D. I., and R. M. Klein. 2000. The effects of scene inversion on change blindness. The Journal of General Psychology 127, 1: 27–43. [Google Scholar] [CrossRef]
- Stahl, J. S. 1999. Amplitude of human head movements associated with horizontal saccades. Experimental Brain Research 126, 1: 41–54. [Google Scholar] [CrossRef]
- Tarr, M. J., and W. H. Warren. 2002. Virtual reality in behavioral neuroscience and beyond. Nature Neuroscience 5, 11, Suppl.: 1089–1092. [Google Scholar] [CrossRef]
- Taubert, J., D. Apthorp, D. Aagten-Murphy, and D. Alais. 2011. The role of holistic processing in face perception: Evidence from the face inversion effect. Vision Research 51, 11: 1273–1278. [Google Scholar] [CrossRef]
- Valentine, T. 1988. Upside-down faces: A review of the effect of inversion upon face recognition. British Journal of Psychology 79, 4: 471–491. [Google Scholar] [CrossRef]
- Van der Stigchel, S. 2020. An embodied account of visual working memory. Visual Cognition 28, 5–8: 414–419. [Google Scholar] [CrossRef]
- Võ, M. L.-H. 2021. The meaning and structure of scenes. Vision Research 181: 10–20. [Google Scholar] [CrossRef] [PubMed]
- Võ, M. L.-H., S. E. P. Boettcher, and D. Draschkow. 2019. Reading scenes: How scene grammar guides attention and aids perception in real-world environments. Current Opinion in Psychology 29: 205–210. [Google Scholar] [CrossRef]
- Võ, M. L.-H., and J. M. Wolfe. 2012. When does repeated search in scenes involve memory? Looking at versus looking for objects in scenes. Journal of Experimental Psychology: Human Perception and Performance 38, 1: 23–41. [Google Scholar] [CrossRef] [PubMed]
- Võ, M. L.-H., and J. M. Wolfe. 2013. The interplay of episodic and semantic memory in guiding repeated search in scenes. Cognition 126, 2: 198–212. [Google Scholar] [CrossRef] [PubMed]
- von Noorden, G. K., and E. C. Campos. 2002. Binocular vision and ocular motility: Theory and management of strabismus, 6th ed. Mosby. [Google Scholar]
- Walther, D. B., E. Caddigan, L. Fei-Fei, and D. M. Beck. 2009. Natural scene categories revealed in distributed patterns of activity in the human brain. Journal of Neuroscience 29, 34: 10573–10581. [Google Scholar] [CrossRef]
- Wickham, H. 2016. ggplot2: Elegant graphics for data analysis, 2nd ed. [R Package]. Springer. [Google Scholar] [CrossRef]
- Wilkinson, G. N., and C. E. Rogers. 1973. Symbolic description of factorial models for analysis of variance. Applied Statistics 22, 3: 392–399. [Google Scholar] [CrossRef]
- Wolfe, J. M., G. A. Alvarez, R. Rosenholtz, Y. I. Kuzmova, and A. M. Sherman. 2011. Visual search for arbitrary objects in real scenes. Attention, Perception, & Psychophysics 73, 6: 1650–1671. [Google Scholar] [CrossRef]
- Yin, R. K. 1969. Looking at upside-down faces. Journal of Experimental Psychology 81, 1: 141–145. [Google Scholar] [CrossRef]
- Zhang, H., and J. S. Pan. 2022. Visual search as an embodied process: The effects of perspective change and external reference on search performance. Journal of Vision 22, 10: 13. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. This article is licensed under a Creative Commons Attribution 4.0 International License.
Share and Cite
Beitner, J.; Helbing, J.; Draschkow, D.; David, E.J.; Võ, M.L.-H. Flipping the World Upside Down: Using Eye Tracking in Virtual Reality to Study Visual Search in Inverted Scenes. J. Eye Mov. Res. 2022, 15, 1-16. https://doi.org/10.16910/jemr.15.3.5