Visual Orienting and Conscious Perception

A special issue of Vision (ISSN 2411-5150).

Deadline for manuscript submissions: closed (30 June 2019) | Viewed by 32714

Special Issue Editor


Dr. Anthony James Lambert
Guest Editor
School of Psychology and Centre for Brain Research, University of Auckland, Private Bag 92019, Auckland 1142, New Zealand
Interests: visual orienting; conscious perception; imagery; anauralia; aphantasia

Special Issue Information

Dear Colleagues,

We invite submissions for a Special Issue on the topic of “Visual Orienting and Conscious Perception”.

In keeping with the journal's overall aim of stimulating useful interchange between researchers working primarily on basic theoretical issues and those working on more applied aspects of vision science, we invite papers on a range of topics related to the theme of “Visual Orienting and Conscious Perception”. Both review papers and original empirical reports are welcome, and papers that seek to bridge the gap between theoretical and applied aspects of visual orienting and conscious perception will be especially welcome. A range of potential topics for this Special Issue is provided below; it should be viewed as a suggestive list rather than an exhaustive catalogue. Individuals unsure about whether a proposed submission would be appropriate are invited to contact the Special Issue Editor, Tony Lambert ([email protected]).

  • Endogenous and exogenous orienting
  • Overt and covert visual orienting
  • Conscious (e.g., vision for perception) and non-conscious (e.g., vision for action) aspects of visual function
  • Eye movements and conscious awareness
  • Looking without seeing—e.g., inattentional blindness, change blindness, repetition blindness
  • Visual orienting and attentional failures inside and outside the laboratory
  • Vision for action and eye movement control
  • Clinical disorders that impair visual orienting and/or conscious visual perception

Dr. Anthony James Lambert
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Vision is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) is waived for well-prepared manuscripts submitted to this issue. Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (9 papers)


Research


13 pages, 534 KiB  
Article
Accident Vulnerability and Vision for Action: A Pilot Investigation
by Anthony J. Lambert, Tanvi Sharma and Nathan Ryckman
Vision 2020, 4(2), 26; https://doi.org/10.3390/vision4020026 - 13 May 2020
Cited by 3 | Viewed by 2672
Abstract
Many accidents, such as those involving collisions or trips, appear to involve failures of vision, but the association between accident risk and vision as conventionally assessed is weak or absent. We addressed this conundrum by embracing the distinction, inspired by neuroscientific research, between vision for perception and vision for action. A dual-process perspective predicts that accident vulnerability will be associated more strongly with vision for action than with vision for perception. In this preliminary investigation, older and younger adults with relatively high and relatively low self-reported accident vulnerability (Accident Proneness Questionnaire) completed three behavioural assessments targeting vision for perception (Freiburg Visual Acuity Test); vision for action (Vision for Action Test—VAT); and the ability to perform physical actions involving balance, walking and standing (Short Physical Performance Battery). Accident vulnerability was not associated with visual acuity or with performance of physical actions, but was associated with VAT performance. VAT assesses the ability to link visual input with a specific action—launching a saccadic eye movement as rapidly as possible in response to shapes presented in peripheral vision. The predictive relationship between VAT performance and accident vulnerability was independent of age, visual acuity and physical performance scores. Applied implications of these findings are considered.
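The VAT measure described above is, at its core, saccadic latency: the time from the appearance of a peripheral shape to the launch of the eye movement. The abstract does not spell out a detection algorithm, so the following is only a minimal sketch of the common velocity-threshold heuristic for locating saccade onset in eye-tracker samples; the function name, the 30 deg/s threshold, and the data layout are illustrative assumptions, not details from the study.

```python
import numpy as np

def saccade_onset_ms(x_deg, y_deg, t_ms, velocity_threshold=30.0):
    """Return the time (ms) of the first sample whose gaze speed exceeds
    velocity_threshold (deg/s) -- a common heuristic for saccade onset.
    x_deg, y_deg: gaze position arrays in degrees; t_ms: sample times in ms."""
    dt_s = np.diff(t_ms) / 1000.0                              # sample intervals (s)
    speed = np.hypot(np.diff(x_deg), np.diff(y_deg)) / dt_s    # angular speed (deg/s)
    above = np.flatnonzero(speed > velocity_threshold)
    return t_ms[above[0] + 1] if above.size else None

# Usage: with t_ms = 0 at target onset, the returned value is the
# saccadic latency for that trial (None if no saccade was detected).
```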

13 pages, 1684 KiB  
Article
Reading and Misleading: Changes in Head and Eye Movements Reveal Attentional Orienting in a Social Context
by Tom Foulsham, Monika Gejdosova and Laura Caunt
Vision 2019, 3(3), 43; https://doi.org/10.3390/vision3030043 - 27 Aug 2019
Cited by 3 | Viewed by 2652
Abstract
Social attention describes how observers orient to social information and exhibit behaviors such as gaze following. These behaviors are examples of how attentional orienting may differ in the presence of other people, although they have typically been studied without actual social presence. In the present study, we ask whether orienting, as measured by head and eye movements, changes when participants are trying to mislead or hide their attention from a bystander. In two experiments, observers performed a preference task while being video-recorded, and subsequent participants were asked to guess the observer's response from a video of the head and upper body. In a second condition, observers were told to try to mislead the “guesser”. The results showed that participants' preference responses could be guessed from videos of the head and, critically, that participants spontaneously changed their orienting behavior in order to mislead, by reducing the rate at which they made large head movements. Masking the eyes with sunglasses suggested that head movements were most important in our setup. This indicates that head and eye movements can be used flexibly according to the socio-communicative context.

16 pages, 1123 KiB  
Article
How Does Spatial Attention Influence the Probability and Fidelity of Colour Perception?
by Austin J. Hurst, Michael A. Lawrence and Raymond M. Klein
Vision 2019, 3(2), 31; https://doi.org/10.3390/vision3020031 - 17 Jun 2019
Cited by 1 | Viewed by 3463
Abstract
Existing research has found that spatial attention alters how various stimulus properties are perceived (e.g., luminance, saturation), but few studies have explored whether it improves the accuracy of perception. To address this question, we performed two experiments using modified Posner cueing tasks, wherein participants made speeded detection responses to peripheral colour targets and then indicated their perceived colours on a colour wheel. In Experiment 1 (E1), cues were central and endogenous (i.e., prompted voluntary attention) and the interval between cues and targets (stimulus onset asynchrony, or SOA) was always 800 ms. In Experiment 2 (E2), cues were peripheral and exogenous (i.e., captured attention involuntarily) and the SOA varied between short (100 ms) and long (800 ms). A Bayesian mixed-model analysis was used to isolate the effects of attention on the probability and the fidelity of colour encoding. Both endogenous and short-SOA exogenous spatial cueing improved the probability of encoding the colour of targets. Improved fidelity of encoding was observed in the endogenous but not in the exogenous cueing paradigm. With exogenous cues, inhibition of return (IOR) was observed in both RT and probability at the long SOA. Overall, our findings reinforce the utility of continuous response variables in attention research.
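The probability/fidelity decomposition used in this study is commonly formalised as a two-component mixture model of colour-wheel report errors (e.g., Zhang & Luck, 2008): with probability p the response is drawn from a von Mises distribution centred on the target colour, whose concentration κ indexes fidelity; otherwise the response is a uniform guess. The authors fitted a Bayesian mixed-model version; the sketch below is only a simpler maximum-likelihood illustration of the same idea, with invented function and variable names.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def fit_colour_mixture(errors_rad):
    """Maximum-likelihood fit of a mixture of a von Mises (encoded trials)
    and a uniform (guesses) to colour-wheel response errors in radians."""
    def neg_log_lik(params):
        p_mem, log_kappa = params
        p_mem = np.clip(p_mem, 1e-6, 1 - 1e-6)          # keep weight in (0, 1)
        dens = (p_mem * vonmises.pdf(errors_rad, np.exp(log_kappa))
                + (1.0 - p_mem) / (2.0 * np.pi))        # uniform on the circle
        return -np.log(dens).sum()
    fit = minimize(neg_log_lik, x0=[0.8, np.log(5.0)], method="Nelder-Mead")
    return np.clip(fit.x[0], 0.0, 1.0), np.exp(fit.x[1])  # (p_encoded, kappa)

# Attention effects are then read off as differences in p_encoded
# (probability of encoding) and kappa (fidelity) between cued and
# uncued trials.
```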

16 pages, 1035 KiB  
Article
Location-Specific Orientation Set Is Independent of the Horizontal Benefit With or Without Object Boundaries
by Zhe Chen, Ailsa Humphries and Kyle R. Cave
Vision 2019, 3(2), 30; https://doi.org/10.3390/vision3020030 - 14 Jun 2019
Cited by 3 | Viewed by 3107
Abstract
Chen and Cave (2019) showed that facilitation in visual comparison tasks that had previously been attributed to object-based attention could more directly be explained as facilitation in comparing two shapes that are configured horizontally rather than vertically. They also cued the orientation of the upcoming stimulus configuration without cuing its location and found an asymmetry: the orientation cue only enhanced performance for vertical configurations. The current study replicates the horizontal benefit in visual comparison and again demonstrates that it is independent of surrounding object boundaries. In these experiments, the cue is informative about the location of the target configuration as well as its orientation, and it enhances performance for both horizontal and vertical configurations; there is no asymmetry. Either a long or a short cue can enhance performance when it is valid. Thus, Chen and Cave's cuing asymmetry seems to reflect unusual aspects of an attentional set for orientation that must be established without knowing the upcoming stimulus location. Taken together, these studies show that a location-specific cue enhances comparison independently of the horizontal advantage, while a location-nonspecific cue produces a different type of attentional set that does not enhance comparison in horizontal configurations.

19 pages, 4351 KiB  
Article
Contextually-Based Social Attention Diverges across Covert and Overt Measures
by Effie J. Pereira, Elina Birmingham and Jelena Ristic
Vision 2019, 3(2), 29; https://doi.org/10.3390/vision3020029 - 10 Jun 2019
Cited by 6 | Viewed by 4105
Abstract
Humans spontaneously attend to social cues such as faces and eyes. However, recent data show that this behavior is significantly weakened when visual content, such as luminance and the configuration of internal features, as well as visual context, such as background and facial expression, are controlled. Here, we investigated attentional biasing elicited by information presented within appropriate background contexts. Using a dot-probe task, we presented participants with a face–house cue pair, with a person sitting in a room and a house positioned within a picture hanging on a wall. A response target occurred at the previous location of the eyes, the mouth, the top of the house, or the bottom of the house. Experiment 1 measured covert attention by assessing manual responses while participants maintained central fixation. Experiment 2 measured overt attention by assessing eye movements using an eye tracker. The data from both experiments indicated no evidence of spontaneous attentional biasing towards faces or facial features in manual responses; however, an infrequent though reliable overt bias towards the eyes of faces emerged. Together, these findings suggest that contextually based social information does not determine spontaneous social attentional biasing in manual measures, although it may act to facilitate oculomotor behavior.

13 pages, 1279 KiB  
Article
Attention Combines Similarly in Covert and Overt Conditions
by Christopher D. Blair and Jelena Ristic
Vision 2019, 3(2), 16; https://doi.org/10.3390/vision3020016 - 25 Apr 2019
Cited by 8 | Viewed by 4153
Abstract
Attention is classically classified according to mode of engagement, into voluntary and reflexive, and type of operation, into covert and overt. The first distinguishes whether attention is elicited intentionally or by unexpected events; the second, whether attention is directed with or without eye movements. Recently, this taxonomy has been expanded to include automated orienting, engaged by overlearned symbols, and combined attention, engaged by a combination of several modes of function. However, so far, combined effects have been demonstrated in covert conditions only, and thus here we examined whether attentional modes combine in overt responses as well. To do so, we elicited automated, voluntary, and combined orienting in covert conditions, i.e., when participants responded manually while maintaining central fixation, and overt conditions, i.e., when they responded by looking. The data indicated typical effects for automated and voluntary conditions in both covert and overt measures, with the magnitude of the combined effect larger than the magnitude of each mode alone, as well as their additive sum. No differences in the combined effects emerged across covert and overt conditions. As such, these results show that attentional systems combine similarly in covert and overt responses and highlight attention's dynamic flexibility in facilitating human behavior.
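The critical comparison in this design is arithmetic: the combined cueing effect is set against each single-mode effect and against their additive sum. As a worked illustration only (the response times below are invented, not values from the paper):

```python
# Illustrative cueing-effect arithmetic (hypothetical mean RTs in ms).
rt = {  # keyed by (orienting mode, cue validity)
    ("automated", "invalid"): 420, ("automated", "valid"): 405,
    ("voluntary", "invalid"): 430, ("voluntary", "valid"): 410,
    ("combined",  "invalid"): 445, ("combined",  "valid"): 400,
}

def cueing_effect(mode):
    """Cueing effect = invalid RT minus valid RT for the given mode."""
    return rt[(mode, "invalid")] - rt[(mode, "valid")]

auto, vol, comb = (cueing_effect(m) for m in ("automated", "voluntary", "combined"))
print(auto, vol, comb)        # -> 15 20 45
# The pattern reported in the abstract: the combined effect exceeds each
# mode alone and their additive sum (45 > 15 + 20), i.e., superadditivity.
print(comb > auto + vol)      # -> True
```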

40 pages, 6782 KiB  
Article
Cancelling Flash Illusory Line Motion by Cancelling the Attentional Gradient and a Consideration of Consciousness
by Katie McGuire, Amanda Pinny and Jeff P. Hamm
Vision 2019, 3(1), 3; https://doi.org/10.3390/vision3010003 - 10 Jan 2019
Cited by 2 | Viewed by 3053
Abstract
Illusory line motion (ILM) refers to the perception of motion in a line that is, in fact, presented in full at one time. One form of this illusion (flashILM) occurs when the line is presented between two objects following a brief luminance change in one of them; flashILM is thought to result from exogenous attention being captured by the flash. Exogenous attention fades with increasing delays, which predicts that flashILM should show a similar temporal pattern. Exogenous attention appears to follow flashILM to become more or less equally distributed along the line. The current study examined flashILM in order to test these predictions derived from the attentional explanation, and the results were consistent with them. The discussion concludes with an exploratory analysis concerning states of consciousness and decision making, and suggests a possible role for attention.

12 pages, 952 KiB  
Article
Evidence Transcranial Direct Current Stimulation Can Improve Saccadic Eye Movement Control in Older Adults
by Po Ling Chen, Andreas Stenling and Liana Machado
Vision 2018, 2(4), 42; https://doi.org/10.3390/vision2040042 - 03 Dec 2018
Cited by 3 | Viewed by 4714
Abstract
Objectives: Ageing is associated with declines in voluntary eye movement control, which negatively impact the performance of daily activities. Therapies treating saccadic eye movement control deficits are currently lacking. To address the need for an effective therapy for age-related deficits in saccadic eye movement control, the current study investigated whether saccadic behaviour in older adults can be improved by anodal transcranial direct current stimulation (tDCS) over the dorsolateral prefrontal cortex, using a montage that has proven effective at improving nonoculomotor control functions. Method: The tDCS protocol entailed a 5 cm × 7 cm anodal electrode and an encephalic cathodal reference electrode positioned over the contralateral supraorbital area. In two experiments, healthy older men completed one active (1.5 mA current for 10 min) and one sham stimulation session, with the session order counterbalanced across participants and eye movement testing following stimulation. In the first experiment, participants rested during the tDCS (offline), whereas in the follow-up experiment, participants performed antisaccades during the tDCS (online). Results: Analyses revealed improvements in saccadic performance following active anodal tDCS relative to sham stimulation in the online experiment, but not in the offline experiment, presumably because activation of the relevant networks during tDCS promoted more targeted effects. Discussion: These outcomes converge with findings pertaining to nonoculomotor cognitive functions and provide evidence that tDCS can improve saccadic eye movement control in older adults.

Other


9 pages, 630 KiB  
Perspective
Illuminating the Neural Circuits Underlying Orienting of Attention
by Michael I. Posner and Cristopher M. Niell
Vision 2019, 3(1), 4; https://doi.org/10.3390/vision3010004 - 24 Jan 2019
Cited by 3 | Viewed by 4248
Abstract
Human neuroimaging has revealed brain networks involving frontal and parietal cortical areas, as well as subcortical areas including the superior colliculus and pulvinar, that are involved in orienting to sensory stimuli. Because accumulating evidence points to similarities in both overt and covert orienting between humans and other animals, we propose that it is now feasible, using animal models, to move beyond these large-scale networks to address the local networks and cell types that mediate orienting of attention. In this opinion piece, we discuss optogenetic and related methods for testing the pathways involved, and obstacles to carrying out such tests in rodent and monkey populations.
