The Changing Landscape: High-Level Influences on Eye Movement Guidance in Scenes
Abstract
1. Introduction
Why Use Eye Movements to Study Scenes?
- Eye movements are natural. Because of the structure of the eye, people naturally move their eyes to point the retinal location of highest acuity (i.e., the fovea, ~2° of visual angle) at whatever they wish to “look” at (see Kowler [25] for a more detailed description of the visual field). To compensate for this limited area of high acuity, people rotate the eyes to focus light from different physical locations onto the fovea. Importantly, in contrast to cognitive tasks that require the experimenter to train the participant to respond correctly, researchers do not need to teach participants how to move their eyes. In fact, it takes effort and monitoring by the researcher if the goal is to have participants not move their eyes. Most people are unaware that the eyes move several times per second (~3 eye movements per second [26]). Although not completely implicit, the relative ease of eye movements and their relative “invisibility” make them an ideal tool for non-invasively observing behavior. In addition, measuring eye movements has become easier over the last two decades as the cost of eye trackers has fallen and their ease of use has increased. Overall, eye movements provide a low-cost way to unobtrusively observe natural behavior.
- Eye movements are fast. Eye movements and fixations operate on a time scale that gives researchers greater precision in their measurements. Saccadic eye movements generally take less than 50 ms (frequently much less) to rotate the eyes from pointing at one part of the visual world to pointing at another. Once the eyes have rotated to the new location, they pause, or fixate, for a brief period (e.g., 100–400 ms). Because visual processing is suppressed while the eye is in motion (saccadic suppression [26]), the moments of cognitive interest are those when the eye is relatively still, such as during a fixation, and visual information is being acquired. Borrowing from reading research, scene processing has been measured using different fixation measures based on duration, number/count, and location. However, aggregate fixation measures that capture processing across different temporal windows have proven especially useful. For instance, gaze duration (the sum of the fixation durations on a region of interest from the first fixation in the region until the eyes leave that region) can indicate the time needed to initially process and recognize an object (see the gaze-duration sketch after this list). Subsequent fixations (second gaze duration or total time) would indicate that additional information gathering was needed or that a checking/confirming process was necessary.
- Eye movements operate across a spatial dimension. Unlike temporal measures that are inherently unidimensional (e.g., reaction time), measuring where the eyes are directed allows researchers to determine which areas of a stimulus the participant is currently prioritizing. The spatially distributed nature of eye movements gives researchers a direct measure of the prioritization of information available to the observer, lets them examine commonalities in prioritization across individuals, and supports other informative spatially aggregate measures. For instance, the proportion of the image fixated and the scan path length can each indicate the extent of exploratory vs. focused behavior. Some tasks encourage greater exploration of the scene (e.g., memorization), whereas others constrain that exploration (e.g., visual search). Further, the scan path provides a direct measure of the efficiency of the eye movements, as it can be used to form a ratio of the distance traveled to reach a critical region to the shortest possible distance (see the scan path sketch after this list). Thus, with a full 360° of possible directions for the next fixation, the spatial dimension allows for a rich set of measures that reflect different types of processing.
- Eye movements operate across a temporal dimension. Because the eyes have to move from one location to the next in a serial manner, eye movement data also provide a temporal record of processing in addition to the spatial record. This record allows researchers to identify the order in which scene features are processed, potentially indicating their relative importance to the task (see the visit-order sketch after this list). In addition, fixations typically last only a few hundred milliseconds, far shorter than the time many complex tasks (e.g., search) take to complete. The serial fixation record can therefore be examined to determine, at a fine-grained time scale, the processing occurring at each point in the trial rather than only on a global scale (i.e., reaction time).
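To make the duration-based measures above concrete, here is a minimal Python sketch of how gaze duration and total time might be computed from a fixation record. It assumes fixations come as (x, y, duration) triples and that regions of interest are rectangles; the Fixation type and the function names are illustrative, not drawn from any particular eye-tracking package.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Fixation:
    x: float         # horizontal gaze position (pixels)
    y: float         # vertical gaze position (pixels)
    duration: float  # fixation duration (ms)

Region = Tuple[float, float, float, float]  # (left, top, right, bottom)

def in_region(fix: Fixation, region: Region) -> bool:
    """True if the fixation lands inside the rectangular region of interest."""
    left, top, right, bottom = region
    return left <= fix.x <= right and top <= fix.y <= bottom

def gaze_duration(fixations: List[Fixation], region: Region) -> float:
    """Sum of fixation durations from the first fixation in the region
    until the eyes first leave it (first-pass gaze duration)."""
    total, entered = 0.0, False
    for fix in fixations:
        if in_region(fix, region):
            entered = True
            total += fix.duration
        elif entered:
            break  # first exit from the region ends the first pass
    return total

def total_time(fixations: List[Fixation], region: Region) -> float:
    """Sum of all fixation durations in the region, across every pass."""
    return sum(f.duration for f in fixations if in_region(f, region))
```

Comparing gaze_duration against total_time for the same region then separates initial object processing from later re-inspection or checking.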
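The spatial measures can be sketched the same way. Continuing with the hypothetical Fixation records above, the functions below compute scan path length and the efficiency ratio described in the spatial-dimension bullet; scanpath_ratio is a deliberate simplification that treats the first fixation as the start of search.

```python
import math

def scanpath_length(fixations: List[Fixation]) -> float:
    """Total Euclidean distance traveled across successive fixations."""
    return sum(math.dist((a.x, a.y), (b.x, b.y))
               for a, b in zip(fixations, fixations[1:]))

def scanpath_ratio(fixations: List[Fixation], target: Tuple[float, float]) -> float:
    """Distance traveled to reach the target divided by the straight-line
    distance from the first fixation; values near 1.0 indicate an efficient path."""
    shortest = math.dist((fixations[0].x, fixations[0].y), target)
    return scanpath_length(fixations) / shortest if shortest > 0 else float("inf")
```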
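Finally, the temporal record can be summarized as the order in which labeled regions are first fixated, again reusing the illustrative Fixation, Region, and in_region helpers from the first sketch.

```python
from typing import Dict

def visit_order(fixations: List[Fixation], regions: Dict[str, Region]) -> List[str]:
    """Region labels in the order each region is first fixated,
    giving a coarse serial record of what was prioritized when."""
    order: List[str] = []
    for fix in fixations:
        for label, rect in regions.items():
            if in_region(fix, rect) and label not in order:
                order.append(label)
    return order
```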
2. Attention and Eye Movements
3. Where You Look
3.1. Influence of Stimulus Properties
3.2. Meaning or Object as the Unit of Selection
3.3. Semantic Integrity within the Larger Scene Context
3.4. Influences of Spatial Associations
4. Effects of Task
5. Influence of Scene Representations in Memory
5.1. What Is Remembered of a Scene from a Fixation?
5.2. How Does Memory of the Scene Influence Current Fixations?
6. Dynamic Scenes and Eye Movements
7. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Castelhano, M.S.; Henderson, J.M. The influence of color on the perception of scene gist. J. Exp. Psychol. Hum. Percept. Perform. 2008, 34, 660–675.
- Henderson, J.M. Human gaze control during real-world scene perception. Trends Cogn. Sci. 2003, 7, 498–504.
- Henderson, J.M.; Hollingworth, A. High-level scene perception. Annu. Rev. Psychol. 1999, 50, 243–271.
- Oliva, A. Gist of the scene. In Neurobiology of Attention; Itti, L., Rees, G., Tsotsos, J.K., Eds.; Academic Press: Cambridge, MA, USA, 2005; pp. 251–256.
- Rayner, K.; Pollatsek, A. Eye movements and scene perception. Can. J. Psychol. 1992, 46, 342.
- Torralba, A.; Oliva, A.; Castelhano, M.S.; Henderson, J.M. Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search. Psychol. Rev. 2006, 113, 766–786.
- Oliva, A.; Torralba, A. The role of context in object recognition. Trends Cogn. Sci. 2007, 11, 520–527.
- Pereira, E.J.; Castelhano, M.S. Peripheral guidance in scenes: The interaction of scene context and object content. J. Exp. Psychol. Hum. Percept. Perform. 2014, 40, 2056–2072.
- Võ, M.L.-H.; Boettcher, S.E.; Draschkow, D. Reading scenes: How scene grammar guides attention and aids perception in real-world environments. Curr. Opin. Psychol. 2019, 29, 205–210.
- Intraub, H. Rethinking scene perception: A multisource model. Psychol. Learn. Motiv. 2010, 52, 231–264.
- Greene, M.R.; Oliva, A. The briefest of glances: The time course of natural scene understanding. Psychol. Sci. 2009, 20, 464–472.
- Greene, M.R.; Oliva, A. Recognition of natural scenes from global properties: Seeing the forest without representing the trees. Cogn. Psychol. 2009, 58, 137–176.
- Oliva, A.; Torralba, A. Modeling the shape of the scene: A holistic representation of the spatial envelope. Int. J. Comput. Vis. 2001, 42, 145–175.
- Castelhano, M.S.; Heaven, C. Scene context influences without scene gist: Eye movements guided by spatial associations in visual search. Psychon. Bull. Rev. 2011, 18, 890–896.
- Malcolm, G.L.; Henderson, J.M. Combining top-down processes to guide eye movements during real-world scene search. J. Vis. 2010, 10, 4.
- Josephs, E.L.; Konkle, T. Perceptual dissociations among views of objects, scenes, and reachable spaces. J. Exp. Psychol. Hum. Percept. Perform. 2019, 45, 715–728.
- Castelhano, M.S.; Fernandes, S. The Foreground Bias: Initial scene representations dominated by foreground information. J. Vis. 2018, 18, 1240.
- Castelhano, M.S.; Witherspoon, R.L. How you use it matters: Object function guides attention during visual search in scenes. Psychol. Sci. 2016, 27, 606–621.
- Greene, M.R.; Baldassano, C.; Esteva, A.; Beck, D.M.; Fei-Fei, L. Visual scenes are categorized by function. J. Exp. Psychol. Gen. 2016, 145, 82–94.
- Buswell, G. How People Look at Pictures: A Study of the Psychology and Perception in Art; Univ. Chicago Press: Chicago, IL, USA, 1935.
- Yarbus, A.L. Eye Movements and Vision; Springer: Boston, MA, USA, 1967.
- Deubel, H.; Schneider, W.X. Saccade target selection and object recognition: Evidence for a common attentional mechanism. Vis. Res. 1996, 36, 1827–1837.
- Hoffman, J.E.; Subramaniam, B. The role of visual attention in saccadic eye movements. Percept. Psychophys. 1995, 57, 787–795.
- Rayner, K.; McConkie, G.W.; Ehrlich, S. Eye movements and integrating information across fixations. J. Exp. Psychol. Hum. Percept. Perform. 1978, 4, 529–544.
- Kowler, E. Eye movements: The past 25 years. Vis. Res. 2011, 51, 1457–1483.
- Rayner, K. Eye movements and attention in reading, scene perception, and visual search. Q. J. Exp. Psychol. 2009, 62, 1457–1506.
- Casarotti, M.; Lisi, M.; Umiltà, C.; Zorzi, M. Paying attention through eye movements: A computational investigation of the premotor theory of spatial attention. J. Cogn. Neurosci. 2012, 24, 1519–1531.
- Cavanagh, P.; Hunt, A.R.; Afraz, A.; Rolfs, M. Visual stability based on remapping of attention pointers. Trends Cogn. Sci. 2010, 14, 147–153.
- Rolfs, M.; Jonikaitis, D.; Deubel, H.; Cavanagh, P. Predictive remapping of attention across eye movements. Nat. Neurosci. 2011, 14, 252–256.
- Zhao, M.; Gersch, T.M.; Schnitzer, B.S.; Dosher, B.A.; Kowler, E. Eye movements and attention: The role of pre-saccadic shifts of attention in perception, memory and the control of saccades. Vis. Res. 2012, 74, 40–60.
- Golomb, J.D.; Marino, A.C.; Chun, M.M.; Mazer, J.A. Attention doesn’t slide: Spatiotopic updating after eye movements instantiates a new, discrete attentional locus. Atten. Percept. Psychophys. 2011, 73, 7–14.
- Golomb, J.D.; Nguyen-Phuc, A.Y.; Mazer, J.A.; McCarthy, G.; Chun, M.M. Attentional facilitation throughout human visual cortex lingers in retinotopic coordinates after eye movements. J. Neurosci. 2010, 30, 10493–10506.
- Golomb, J.D.; Pulido, V.Z.; Albrecht, A.R.; Chun, M.M.; Mazer, J.A. Robustness of the retinotopic attentional trace after eye movements. J. Vis. 2010, 10, 1–12.
- Rizzolatti, G.; Riggio, L.; Dascola, I.; Umiltá, C. Reorienting attention across the horizontal and vertical meridians: Evidence in favor of a premotor theory of attention. Neuropsychologia 1987, 25, 31–40.
- Bindemann, M. Scene and screen center bias early eye movements in scene viewing. Vis. Res. 2010, 50, 2577–2587.
- Tatler, B.W. The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions. J. Vis. 2007, 7, 4.
- Tatler, B.W.; Vincent, B.T. The prominence of behavioural biases in eye guidance. Vis. Cogn. 2009, 17, 1029–1054.
- Tseng, P.H.; Carmi, R.; Cameron, I.G.M.; Munoz, D.P.; Itti, L. Quantifying center bias of observers in free viewing of dynamic natural scenes. J. Vis. 2009, 9, 4.
- Rothkegel, L.O.M.; Trukenbrod, H.A.; Schütt, H.H.; Wichmann, F.A.; Engbert, R. Temporal evolution of the central fixation bias in scene viewing. J. Vis. 2017, 17, 3.
- Itti, L.; Koch, C. A saliency-based search mechanism for overt and covert shifts of visual attention. Vis. Res. 2000, 40, 1489–1506.
- Itti, L.; Koch, C.; Niebur, E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1254–1259.
- Mackworth, N.H.; Morandi, A.J. The gaze selects informative details within pictures. Percept. Psychophys. 1967, 2, 547–552.
- Borji, A.; Sihite, D.N.; Itti, L. Quantitative analysis of human-model agreement in visual saliency modeling: A comparative study. IEEE Trans. Image Process. 2013, 22, 55–69.
- Erdem, E.; Erdem, A. Visual saliency estimation by nonlinearly integrating features using region covariances. J. Vis. 2013, 13, 11.
- Frey, H.-P.; König, P.; Einhäuser, W. The role of first- and second-order stimulus features for human overt attention. Percept. Psychophys. 2007, 69, 153–161.
- Nuthmann, A.; Malcolm, G.L. Eye guidance during real-world scene search: The role color plays in central and peripheral vision. J. Vis. 2016, 16, 3.
- Bruce, N.D.B.; Wloka, C.; Frosst, N.; Rahman, S. On computational modeling of visual saliency: Examining what’s right, and what’s left. Vis. Res. 2015, 116, 95–112.
- Henderson, J.M.; Brockmole, J.R.; Castelhano, M.S.; Mack, M. Visual saliency does not account for eye movements during visual search in real-world scenes. In Eye Movements: A Window on Mind and Brain; van Gompel, R., Fischer, M., Murray, W.S., Hill, R.L., Eds.; Elsevier: Oxford, UK, 2007; pp. 537–562.
- Tatler, B.; Hayhoe, M.; Land, M.; Ballard, D. Eye guidance in natural vision: Reinterpreting salience. J. Vis. 2011, 11, 5.
- Castelhano, M.S.; Mack, M.L.; Henderson, J.M. Viewing task influences eye movement control during active scene perception. J. Vis. 2009, 9, 6.
- Awh, E.; Belopolsky, A.V.; Theeuwes, J. Top-down versus bottom-up attentional control: A failed theoretical dichotomy. Trends Cogn. Sci. 2012, 16, 437–443.
- Bindemann, M.; Scheepers, C.; Burton, A.M. Viewpoint and center of gravity affect eye movements to human faces. J. Vis. 2009, 9, 7.
- Henderson, J.M.; Malcolm, G.L.; Schandl, C. Searching in the dark: Cognitive relevance drives attention in real-world scenes. Psychon. Bull. Rev. 2009, 16, 850–856.
- Nuthmann, A.; Henderson, J.M. Object-based attentional selection in scene viewing. J. Vis. 2010, 10, 20.
- Stoll, J.; Thrun, M.; Nuthmann, A.; Einhäuser, W. Overt attention in natural scenes: Objects dominate features. Vis. Res. 2015, 107, 36–48.
- Foulsham, T.; Kingstone, A. Fixation-dependent memory for natural scenes: An experimental test of scanpath theory. J. Exp. Psychol. Gen. 2013, 142, 41–56.
- Driver, J.; Davis, G.; Russell, C.; Turatto, M.; Freeman, E. Segmentation, attention and phenomenal visual objects. Cognition 2001, 80, 61–95.
- Henderson, J.; Chanceaux, M.; Smith, T.J. The influence of clutter on real-world scene search: Evidence from search efficiency and eye movements. J. Vis. 2009, 9, 32.
- Walther, D.; Koch, C. Modeling attention to salient proto-objects. Neural Netw. 2006, 19, 1395–1407.
- Wischnewski, M.; Belardinelli, A.; Schneider, W.X.; Steil, J.J. Where to look next? Combining static and dynamic proto-objects in a TVA-based model of visual attention. Cognit. Comput. 2010, 2, 326–343.
- Zelinsky, G.J.; Yu, C.-P. Clutter perception is invariant to image size. Vis. Res. 2015, 116, 142–151.
- Grill-Spector, K.; Kanwisher, N. Visual recognition. Psychol. Sci. 2005, 16, 152–160.
- Henderson, J.M.; Hayes, T.R. Meaning-based guidance of attention in scenes as revealed by meaning maps. Nat. Hum. Behav. 2017, 1, 743–747.
- Peacock, C.E.; Hayes, T.R.; Henderson, J.M. Meaning guides attention during scene viewing, even when it is irrelevant. Atten. Percept. Psychophys. 2019, 81, 20–34.
- Loftus, G.R.; Mackworth, N.H. Cognitive determinants of fixation location during picture viewing. J. Exp. Psychol. Hum. Percept. Perform. 1978, 4, 565–572.
- De Graef, P.; Christiaens, D.; D’Ydewalle, G. Perceptual effects of scene context on object identification. Psychol. Res. 1990, 52, 317–329.
- Henderson, J.M.; Weeks, P.A.; Hollingworth, A. The effects of semantic consistency on eye movements during complex scene viewing. J. Exp. Psychol. Hum. Percept. Perform. 1999, 25, 210–228.
- Becker, M.W.; Pashler, H.; Lubin, J. Object-intrinsic oddities draw early saccades. J. Exp. Psychol. Hum. Percept. Perform. 2007, 33, 20–30.
- Underwood, G.; Humphreys, L.; Cross, E. Congruency, saliency and gist in the inspection of objects in natural scenes. In Eye Movements: A Window on Mind and Brain; van Gompel, R., Fischer, M., Murray, W.S., Hill, R.L., Eds.; Elsevier: Oxford, UK, 2007; pp. 567–579.
- Castelhano, M.S.; Heaven, C. The relative contribution of scene context and target features to visual search in scenes. Atten. Percept. Psychophys. 2010, 72, 1283–1297.
- Võ, M.L.-H.; Henderson, J.M. Object-scene inconsistencies do not capture gaze: Evidence from the flash-preview moving-window paradigm. Atten. Percept. Psychophys. 2011, 73, 1742–1753.
- Võ, M.L.-H.; Henderson, J.M. Does gravity matter? Effects of semantic and syntactic inconsistencies on the allocation of attention during scene perception. J. Vis. 2009, 9, 24.
- LaPointe, M.R.P.; Milliken, B. Semantically incongruent objects attract eye gaze when viewing scenes for change. Vis. Cogn. 2016, 24, 63–77.
- De Groot, F.; Huettig, F.; Olivers, C.N.L. When meaning matters: The temporal dynamics of semantic influences on visual attention. J. Exp. Psychol. Hum. Percept. Perform. 2016, 42, 180–196.
- Spotorno, S.; Tatler, B.W.; Faure, S. Semantic consistency versus perceptual salience in visual scenes: Findings from change detection. Acta Psychol. (Amst) 2013, 142, 168–176.
- Võ, M.L.-H.; Wolfe, J.M. Differential electrophysiological signatures of semantic and syntactic scene processing. Psychol. Sci. 2013, 24, 1816–1823.
- Friedman, A. Framing pictures: The role of knowledge in automatized encoding and memory for gist. J. Exp. Psychol. Gen. 1979, 108, 316–355.
- Võ, M.L.-H.; Wolfe, J.M. The interplay of episodic and semantic memory in guiding repeated search in scenes. Cognition 2013, 126, 198–212.
- LaPointe, M.R.P.; Lupiáñez, J.; Milliken, B. Context congruency effects in change detection: Opposing effects on detection and identification. Vis. Cogn. 2013, 21, 99–122.
- Castelhano, M.S.; Henderson, J.M. Initial scene representations facilitate eye movement guidance in visual search. J. Exp. Psychol. Hum. Percept. Perform. 2007, 33, 753–763.
- Neider, M.B.; Zelinsky, G.J. Scene context guides eye movements during visual search. Vis. Res. 2006, 46, 614–621.
- Võ, M.L.-H.; Henderson, J.M. The time course of initial scene processing for eye movement guidance in natural scene search. J. Vis. 2010, 10, 14.
- Litchfield, D.; Donovan, T. Worth a quick look? Initial scene previews can guide eye movements as a function of domain-specific expertise but can also have unforeseen costs. J. Exp. Psychol. Hum. Percept. Perform. 2016, 42, 982–994.
- Hwang, A.D.; Wang, H.-C.; Pomplun, M. Semantic guidance of eye movements in real-world scenes. Vis. Res. 2011, 51, 1192–1205.
- Malcolm, G.L.; Henderson, J.M. The effects of target template specificity on visual search in real-world scenes: Evidence from eye movements. J. Vis. 2009, 9, 8.
- Mack, S.C.; Eckstein, M.P. Object co-occurrence serves as a contextual cue to guide and facilitate visual search in a natural viewing environment. J. Vis. 2011, 11, 9.
- Russell, B.C.; Torralba, A.; Murphy, K.P.; Freeman, W.T. LabelMe: A database and web-based tool for image annotation. Int. J. Comput. Vis. 2008, 77, 157–173.
- Nuthmann, A. How do the regions of the visual field contribute to object search in real-world scenes? Evidence from eye movements. J. Exp. Psychol. Hum. Percept. Perform. 2014, 40, 342–360.
- Nuthmann, A. Fixation durations in scene viewing: Modeling the effects of local image features, oculomotor parameters, and task. Psychon. Bull. Rev. 2017, 24, 370–392.
- Walther, D.B.; Chai, B.; Caddigan, E.; Beck, D.M.; Fei-Fei, L. Simple line drawings suffice for functional MRI decoding of natural scene categories. Proc. Natl. Acad. Sci. USA 2011, 108, 9661–9666.
- O’Connell, T.P.; Walther, D.B. Dissociation of salience-driven and content-driven spatial attention to scene category with predictive decoding of gaze patterns. J. Vis. 2015, 15, 20.
- Hollingworth, A. Two forms of scene memory guide visual search: Memory for scene context and memory for the binding of target object to scene location. Vis. Cogn. 2009, 17, 273–291.
- Wolfe, J.M.; Võ, M.L.-H.; Evans, K.K.; Greene, M.R. Visual search in scenes involves selective and nonselective pathways. Trends Cogn. Sci. 2011, 15, 77–84.
- Biederman, I.; Glass, A.L.; Stacy, E.W. Searching for objects in real-world scenes. J. Exp. Psychol. 1973, 97, 22–27.
- Sanocki, T.; Epstein, W. Priming spatial layout of scenes. Psychol. Sci. 1997, 8, 374–378.
- Hillstrom, A.P.; Segabinazi, J.D.; Godwin, H.J.; Liversedge, S.P.; Benson, V. Cat and mouse search: The influence of scene and object analysis on eye movements when targets change locations during search. Philos. Trans. R. Soc. Lond. B Biol. Sci. 2017, 372, 20160106.
- Castelhano, M.S.; Pereira, E.J. Searching through the clutter: Using surface guidance framework to explore set size effects in scenes. 2019, manuscript under review.
- Pereira, E.J.; Castelhano, M.S. Attentional capture is contingent on scene region: Using surface guidance framework to explore attentional mechanisms during search. Psychon. Bull. Rev. 2019, 1–9.
- Malcolm, G.L.; Shomstein, S. Object-based attention in real-world scenes. J. Exp. Psychol. Gen. 2015, 144, 257–263.
- Vatterott, D.B.; Vecera, S.P. The attentional window configures to object and surface boundaries. Vis. Cogn. 2015, 23, 561–576.
- Bonner, M.F.; Epstein, R.A. Coding of navigational affordances in the human visual system. Proc. Natl. Acad. Sci. USA 2017, 114, 4793–4798.
- Bonner, M.F.; Epstein, R.A. Computational mechanisms underlying cortical responses to the affordance properties of visual scenes. PLoS Comput. Biol. 2018, 14, e1006111.
- Man, L.L.Y.; Castelhano, M.S. Across the planes: Differing impacts of foreground and background information on visual search in scenes. J. Vis. 2018, 18, 384.
- DeAngelus, M.; Pelz, J.B. Top-down control of eye movements: Yarbus revisited. Vis. Cogn. 2009, 17, 790–811.
- Pannasch, S.; Schulz, J.; Velichkovsky, B.M. On the control of visual fixation durations in free viewing of complex images. Atten. Percept. Psychophys. 2011, 73, 1120–1132.
- Pannasch, S.; Velichkovsky, B.M. Distractor effect and saccade amplitudes: Further evidence on different modes of processing in free exploration of visual images. Vis. Cogn. 2009, 17, 1109–1131.
- Henderson, J.M.; Shinkareva, S.V.; Wang, J.; Luke, S.G.; Olejarczyk, J. Predicting cognitive state from eye movements. PLoS ONE 2013, 8, e64937.
- Franke, I.S.; Pannasch, S.; Helmert, J.R.; Rieger, R.; Groh, R.; Velichkovsky, B.M. Towards attention-centered interfaces. ACM Trans. Multimed. Comput. Commun. Appl. 2008, 4, 1–13.
- Choe, K.W.; Kardan, O.; Kotabe, H.P.; Henderson, J.M.; Berman, M.G. To search or to like: Mapping fixations to differentiate two forms of incidental scene memory. J. Vis. 2017, 17, 8.
- Antes, J.R. The time course of picture viewing. J. Exp. Psychol. 1974, 103, 62–70.
- Castelhano, M.S.; Henderson, J.M. Incidental visual memory for objects in scenes. Vis. Cogn. 2005, 12, 1017–1040.
- Hollingworth, A. Scene and position specificity in visual memory for objects. J. Exp. Psychol. Learn. Mem. Cogn. 2006, 32, 58.
- Rensink, R.A. The dynamic representation of scenes. Vis. Cogn. 2000, 7, 17–42.
- Rensink, R.A.; O’Regan, J.K.; Clark, J.J. To see or not to see: The need for attention to perceive changes in scenes. Psychol. Sci. 1997, 8, 368–373.
- Konkle, T.; Brady, T.F.; Alvarez, G.A.; Oliva, A. Scene memory is more detailed than you think: The role of categories in visual long-term memory. Psychol. Sci. 2010, 21, 1551–1556.
- Kardan, O.; Berman, M.G.; Yourganov, G.; Schmidt, J.; Henderson, J.M. Classifying mental states from eye movements during scene viewing. J. Exp. Psychol. Hum. Percept. Perform. 2015, 41, 1502–1514.
- Mills, M.; Hollingworth, A.; Van der Stigchel, S.; Hoffman, L.; Dodd, M.D. Examining the influence of task set on eye movements and fixations. J. Vis. 2011, 11, 17.
- Subramanian, R.; Shankar, D.; Sebe, N.; Melcher, D. Emotion modulates eye movement patterns and subsequent memory for the gist and details of movie scenes. J. Vis. 2014, 14, 31.
- Võ, M.L.-H.; Wolfe, J.M. When does repeated search in scenes involve memory? Looking at versus looking for objects in scenes. J. Exp. Psychol. Hum. Percept. Perform. 2012, 38, 23–41.
- Olejarczyk, J.H.; Luke, S.G.; Henderson, J.M. Incidental memory for parts of scenes from eye movements. Vis. Cogn. 2014, 22, 975–995.
- Draschkow, D.; Võ, M.L.-H. Of “what” and “where” in a natural search task: Active object handling supports object location memory beyond the object’s identity. Atten. Percept. Psychophys. 2016, 78, 1574–1584.
- Võ, M.L.-H.; Wolfe, J.M. The role of memory for visual search in scenes. Ann. N. Y. Acad. Sci. 2015, 1339, 72–81.
- Williams, C.C. Incidental and intentional visual memory: What memories are and are not affected by encoding tasks? Vis. Cogn. 2010, 18, 1348–1367.
- Chun, M.M.; Jiang, Y. Contextual cueing: Implicit learning and memory of visual context guides spatial attention. Cogn. Psychol. 1998, 36, 28–71.
- Castelhano, M.S.; Fernandes, S.; Theriault, J. Examining the hierarchical nature of scene representations in memory. J. Exp. Psychol. Learn. Mem. Cogn. 2018.
- Brockmole, J.R.; Castelhano, M.S.; Henderson, J.M. Contextual cueing in naturalistic scenes: Global and local contexts. J. Exp. Psychol. Learn. Mem. Cogn. 2006, 32, 699–706.
- Brockmole, J.R.; Henderson, J.M. Using real-world scenes as contextual cues for search. Vis. Cogn. 2006, 13, 99–108.
- Brockmole, J.R.; Võ, M.L.-H. Semantic memory for contextual regularities within and across scene categories: Evidence from eye movements. Atten. Percept. Psychophys. 2010, 72, 1803–1813.
- Josephs, E.L.; Draschkow, D.; Wolfe, J.M.; Võ, M.L.-H. Gist in time: Scene semantics and structure enhance recall of searched objects. Acta Psychol. (Amst) 2016, 169, 100–108.
- Hollingworth, A. Task specificity and the influence of memory on visual search: Comment on Võ and Wolfe (2012). J. Exp. Psychol. Hum. Percept. Perform. 2012, 38, 1596–1603.
- Võ, M.L.-H.; Zwickel, J.; Schneider, W.X. Has someone moved my plate? The immediate and persistent effects of object location changes on gaze allocation during natural scene viewing. Atten. Percept. Psychophys. 2010, 72, 1251–1255.
- Hollingworth, A.; Williams, C.C.; Henderson, J.M. To see and remember: Visually specific information is retained in memory from previously attended objects in natural scenes. Psychon. Bull. Rev. 2001, 8, 761–768.
- Henderson, J.M.; Williams, C.C.; Castelhano, M.S.; Falk, R.J. Eye movements and picture processing during recognition. Percept. Psychophys. 2003, 65, 725–734.
- Noton, D.; Stark, L. Scanpaths in saccadic eye movements while viewing and recognizing patterns. Vis. Res. 1971, 11, 929–942.
- Johansson, R.; Johansson, M. Look here, eye movements play a functional role in memory retrieval. Psychol. Sci. 2014, 25, 236–242.
- Laeng, B.; Bloem, I.M.; D’Ascenzo, S.; Tommasi, L. Scrutinizing visual images: The role of gaze in mental imagery and memory. Cognition 2014, 131, 263–283.
- Wynn, J.S.; Bone, M.B.; Dragan, M.C.; Hoffman, K.L.; Buchsbaum, B.R.; Ryan, J.D. Selective scanpath repetition during memory-guided visual search. Vis. Cogn. 2016, 24, 15–37.
- Bochynska, A.; Laeng, B. Tracking down the path of memory: Eye scanpaths facilitate retrieval of visuospatial information. Cogn. Process. 2015, 16, 159–163.
- Henderson, J.M.; Williams, C.C.; Falk, R.J. Eye movements are functional during face learning. Mem. Cognit. 2005, 33, 98–106.
- James, W. The Principles of Psychology; Henry Holt and Company: New York, NY, USA, 1890.
- Hayhoe, M. Vision using routines: A functional account of vision. Vis. Cogn. 2000, 7, 43–64.
- Hayhoe, M.; Ballard, D. Eye movements in natural behavior. Trends Cogn. Sci. 2005, 9, 188–194.
- Land, M.F.; McLeod, P. From eye movements to actions: How batsmen hit the ball. Nat. Neurosci. 2000, 3, 1340–1345.
- Schneider, E.; Villgrattner, T.; Vockeroth, J.; Bartl, K.; Kohlbecher, S.; Bardins, S.; Ulbrich, H.; Brandt, T. EyeSeeCam: An eye movement-driven head camera for the examination of natural visual exploration. Ann. N. Y. Acad. Sci. 2009, 1164, 461–467.
- Dorr, M.; Martinetz, T.; Gegenfurtner, K.R.; Barth, E. Variability of eye movements when viewing dynamic natural scenes. J. Vis. 2010, 10, 28.
- Mital, P.K.; Smith, T.J.; Hill, R.L.; Henderson, J.M. Clustering of gaze during dynamic scene viewing is predicted by motion. Cognit. Comput. 2011, 3, 5–24.
- ’t Hart, B.M.; Vockeroth, J.; Schumann, F.; Bartl, K.; Schneider, E.; König, P.; Einhäuser, W. Gaze allocation in natural stimuli: Comparing free exploration to head-fixed viewing conditions. Vis. Cogn. 2009, 17, 1132–1158.
- Hinde, S.J.; Smith, T.J.; Gilchrist, I.D. In search of oculomotor capture during film viewing: Implications for the balance of top-down and bottom-up control in the saccadic system. Vis. Res. 2017, 134, 7–17.
- Goldstein, R.B.; Woods, R.L.; Peli, E. Where people look when watching movies: Do all viewers look at the same place? Comput. Biol. Med. 2007, 37, 957–964.
- Loschky, L.C.; Larson, A.M.; Magliano, J.P.; Smith, T.J. What would Jaws do? The tyranny of film and the relationship between gaze and higher-level narrative film comprehension. PLoS ONE 2015, 10, e0142474.
- Smith, T.J.; Mital, P.K. Attentional synchrony and the influence of viewing task on gaze behavior in static and dynamic scenes. J. Vis. 2013, 13, 16.
- Foulsham, T.; Kingstone, A. Are fixations in static natural scenes a useful predictor of attention in the real world? Can. J. Exp. Psychol. 2017, 71, 172–181.
- Potter, M.C. Short-term conceptual memory for pictures. J. Exp. Psychol. Hum. Learn. Mem. 1976, 2, 509–522.
- Titchener, E.B. The postulates of a structural psychology. Philos. Rev. 1898, 7, 449.
- Wertheimer, M. Untersuchungen zur Lehre von der Gestalt. II. Psychol. Res. 1923, 4, 301–350.