
Table of Contents

Vision, Volume 3, Issue 2 (June 2019)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Open Access Review
Eye Movements in Medical Image Perception: A Selective Review of Past, Present and Future
Received: 1 April 2019 / Revised: 9 June 2019 / Accepted: 18 June 2019 / Published: 20 June 2019
Abstract
The eye movements of experts reading medical images have been studied for many years. Unlike topics such as face perception, medical image perception research needs to cope with substantial, qualitative changes in the stimuli under study due to dramatic advances in medical imaging technology. For example, little is known about how radiologists search through 3D volumes of image data because such volumes simply did not exist when earlier eye tracking studies were performed. Moreover, improvements in the affordability and portability of modern eye trackers make other, new studies practical. Here, we review some uses of eye movements in the study of medical image perception with an emphasis on newer work. We ask how basic research on scene perception relates to studies of medical ‘scenes’ and we discuss how tracking experts’ eyes may provide useful insights for medical education and screening efficiency.
(This article belongs to the Special Issue Eye Movements and Visual Cognition)

Open Access Article
How Does Spatial Attention Influence the Probability and Fidelity of Colour Perception?
Received: 23 January 2019 / Revised: 3 June 2019 / Accepted: 12 June 2019 / Published: 17 June 2019
Abstract
Existing research has found that spatial attention alters how various stimulus properties are perceived (e.g., luminance, saturation), but few studies have explored whether it improves the accuracy of perception. To address this question, we performed two experiments using modified Posner cueing tasks, wherein participants made speeded detection responses to peripheral colour targets and then indicated their perceived colours on a colour wheel. In E1, cues were central and endogenous (i.e., prompted voluntary attention) and the interval between cues and targets (stimulus onset asynchrony, or SOA) was always 800 ms. In E2, cues were peripheral and exogenous (i.e., captured attention involuntarily) and the SOA varied between short (100 ms) and long (800 ms). A Bayesian mixed-model analysis was used to isolate the effects of attention on the probability and the fidelity of colour encoding. Both endogenous and short-SOA exogenous spatial cueing improved the probability of encoding the colour of targets. Improved fidelity of encoding was observed in the endogenous but not in the exogenous cueing paradigm. With exogenous cues, inhibition of return (IOR) was observed in both RT and probability at the long SOA. Overall, our findings reinforce the utility of continuous response variables in attention research.
(This article belongs to the Special Issue Visual Orienting and Conscious Perception)
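The probability/fidelity distinction in this abstract can be made concrete with the standard mixture model of colour-wheel report errors: a uniform component captures guesses, and a von Mises component captures encoded colours. The sketch below is a minimal maximum-likelihood illustration of that idea in Python, not the authors' Bayesian mixed-model analysis; the function name, starting values, and simulated data are all assumptions for demonstration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def fit_mixture(errors):
    """Fit a uniform ('guess') + von Mises ('memory') mixture to colour-wheel
    report errors in radians. Returns (p_encode, kappa): the probability that
    the target colour was encoded, and the fidelity of the encoded colour."""
    def neg_log_lik(params):
        p, kappa = params
        like = p * vonmises.pdf(errors, kappa) + (1 - p) / (2 * np.pi)
        return -np.sum(np.log(like))
    res = minimize(neg_log_lik, x0=[0.8, 5.0],
                   bounds=[(1e-3, 1.0), (1e-2, 100.0)])
    return res.x

# Simulated 'cued' trials: 90% of targets encoded with high fidelity, 10% guesses.
rng = np.random.default_rng(1)
encoded = vonmises.rvs(8.0, size=300, random_state=rng)
guesses = rng.uniform(-np.pi, np.pi, size=300)
errors = np.where(rng.random(300) < 0.9, encoded, guesses)
p_encode, kappa = fit_mixture(errors)
```

In this framing, a cue that raises p_encode increases the probability of encoding, while a cue that raises kappa (the concentration of the von Mises component) increases fidelity.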

Open Access Article
Location-Specific Orientation Set Is Independent of the Horizontal Benefit with or Without Object Boundaries
Received: 19 March 2019 / Revised: 2 June 2019 / Accepted: 12 June 2019 / Published: 14 June 2019
Abstract
Chen and Cave (2019) showed that facilitation in visual comparison tasks that had previously been attributed to object-based attention could more directly be explained as facilitation in comparing two shapes that are configured horizontally rather than vertically. They also cued the orientation of the upcoming stimulus configuration without cuing its location and found an asymmetry: the orientation cue only enhanced performance for vertical configurations. The current study replicates the horizontal benefit in visual comparison and again demonstrates that it is independent of surrounding object boundaries. In these experiments, the cue is informative about the location of the target configuration as well as its orientation, and it enhances performance for both horizontal and vertical configurations; there is no asymmetry. Either a long or a short cue can enhance performance when it is valid. Thus, Chen and Cave’s cuing asymmetry seems to reflect unusual aspects of an attentional set for orientation that must be established without knowing the upcoming stimulus location. Taken together, these studies show that a location-specific cue enhances comparison independently of the horizontal advantage, while a location-nonspecific cue produces a different type of attentional set that does not enhance comparison in horizontal configurations.
(This article belongs to the Special Issue Visual Orienting and Conscious Perception)

Open Access Article
Contextually-Based Social Attention Diverges across Covert and Overt Measures
Received: 30 January 2019 / Revised: 27 May 2019 / Accepted: 30 May 2019 / Published: 10 June 2019
Abstract
Humans spontaneously attend to social cues like faces and eyes. However, recent data show that this behavior is significantly weakened when visual content, such as luminance and configuration of internal features, as well as visual context, such as background and facial expression, are controlled. Here, we investigated attentional biasing elicited in response to information presented within appropriate background contexts. Using a dot-probe task, participants were presented with a face–house cue pair, with a person sitting in a room and a house positioned within a picture hanging on a wall. A response target occurred at the previous location of the eyes, mouth, top of the house, or bottom of the house. Experiment 1 measured covert attention by assessing manual responses while participants maintained central fixation. Experiment 2 measured overt attention by assessing eye movements using an eye tracker. The data from both experiments indicated no evidence of spontaneous attentional biasing towards faces or facial features in manual responses; however, an infrequent, though reliable, overt bias towards the eyes of faces emerged. Together, these findings suggest that contextually-based social information does not determine spontaneous social attentional biasing in manual measures, although it may act to facilitate oculomotor behavior.
(This article belongs to the Special Issue Visual Orienting and Conscious Perception)

Open Access Article
Object Properties Influence Visual Guidance of Motor Actions
Received: 27 February 2019 / Revised: 29 May 2019 / Accepted: 4 June 2019 / Published: 10 June 2019
Abstract
The dynamic nature of the real world poses challenges for predicting where best to allocate gaze during object interactions. The same object may require different visual guidance depending on its current or upcoming state. Here, we explore how object properties (the material and shape of objects) and object state (whether it is full of liquid, or to be set down in a crowded location) influence visual supervision while setting objects down, an element of object interaction that has been relatively neglected in the literature. In a liquid pouring task, we asked participants to move empty glasses to a filling station; to leave them empty, half fill them, or completely fill them with water; and then move them again to a tray. During the first putdown (when the glasses were all empty), visual guidance was determined only by the type of glass being set down—with more unwieldy champagne flutes being more likely to be guided than other types of glasses. However, when the glasses were then filled, glass type no longer mattered, with the material and fill level predicting whether the glasses were set down with visual supervision: full, glass containers were more likely to be guided than empty, plastic ones. The key finding from this research is that the visual system responds flexibly to dynamic changes in object properties, likely based on predictions of the risk associated with setting down the object unsupervised by vision. The factors that govern these mechanisms can vary within the same object as it changes state.
(This article belongs to the Special Issue Visual Control of Action)

Open Access Article
Distinguishing Hemodynamics from Function in the Human LGN Using a Temporal Response Model
Received: 25 June 2018 / Revised: 3 May 2019 / Accepted: 4 June 2019 / Published: 7 June 2019
Abstract
We developed a temporal population receptive field model to differentiate the neural and hemodynamic response functions (HRF) in the human lateral geniculate nucleus (LGN). The HRF in the human LGN is dominated by the richly vascularized hilum, a structure that serves as a point of entry for blood vessels entering the LGN and supplying the substrates of central vision. The location of the hilum along the ventral surface of the LGN and the resulting gradient in the amplitude of the HRF across the extent of the LGN have made it difficult to segment the human LGN into its more interesting magnocellular and parvocellular regions, which represent two distinct visual processing streams. Here, we show that an intrinsic clustering of the LGN responses to a variety of visual inputs reveals the hilum, and further, that this clustering is dominated by the amplitude of the HRF. We introduced a temporal population receptive field model that includes separate sustained and transient temporal impulse response functions that vary on a much shorter timescale than the HRF. When we account for the HRF amplitude, we demonstrate that this temporal response model is able to functionally segregate the residual responses according to their temporal properties.
(This article belongs to the Special Issue Visual Perception and Its Neural Mechanisms)
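The separation of fast neural dynamics from the slow, voxel-varying HRF described in this abstract can be sketched generically. The Python fragment below is a toy illustration only, with assumed impulse-response shapes and parameters that are not taken from the paper: sustained and transient neural responses on a sub-second timescale are convolved with a canonical HRF whose amplitude is a free, voxel-specific parameter.

```python
import numpy as np
from scipy.stats import gamma

dt = 0.05
t = np.arange(0, 24, dt)

# Fast neural impulse responses (assumed shapes, not the paper's parameters)
tau = 0.1
sustained = (t / tau) * np.exp(-t / tau)       # low-pass: follows stimulus duration
transient = np.gradient(sustained, dt)          # band-pass: emphasises onsets/offsets

# Slow canonical two-gamma HRF; its amplitude is the voxel-wise factor
# (largest near the hilum) that must be removed before comparing voxels.
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)

stimulus = ((t >= 2) & (t < 6)).astype(float)   # a 4 s visual input

def predict_bold(beta_sus, beta_trn, hrf_amp):
    """Weighted sustained + rectified transient response convolved with an
    HRF whose amplitude is a free, voxel-specific parameter."""
    neural = (beta_sus * np.convolve(stimulus, sustained)[:t.size]
              + beta_trn * np.abs(np.convolve(stimulus, transient)[:t.size])) * dt
    return hrf_amp * np.convolve(neural, hrf)[:t.size] * dt

bold_transient_voxel = predict_bold(0.2, 1.0, hrf_amp=2.0)  # magnocellular-like
bold_sustained_voxel = predict_bold(1.0, 0.2, hrf_amp=0.5)  # parvocellular-like
```

The design mirrors the abstract's logic: once hrf_amp absorbs the hilum-driven amplitude gradient, differences between voxels are carried by the sustained and transient weights.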

Open Access Perspective
Ocular Equivocation: The Rivalry Between Wheatstone and Brewster
Received: 14 May 2019 / Revised: 31 May 2019 / Accepted: 3 June 2019 / Published: 6 June 2019
Abstract
Ocular equivocation was the term given by Brewster in 1844 to binocular contour rivalry seen with Wheatstone’s stereoscope. The rivalries between Wheatstone and Brewster were personal as well as perceptual. In the 1830s, both Wheatstone and Brewster came to stereoscopic vision armed with their individual histories of research on vision. Brewster was an authority on physical optics and had devised the kaleidoscope; Wheatstone extended his research on audition to render acoustic patterns visible with his kaleidophone or phonic kaleidoscope. Both had written on subjective visual phenomena, a topic upon which they first clashed at the inaugural meeting of the British Association for the Advancement of Science in 1832 (the year Wheatstone made the first stereoscopes). Wheatstone published his account of the mirror stereoscope in 1838; Brewster’s initial reception of it was glowing but he later questioned Wheatstone’s priority. They both described investigations of binocular contour rivalry but their interpretations diverged. As was the case for stereoscopic vision, Wheatstone argued for central processing whereas Brewster’s analysis was peripheral and based on visible direction. Brewster’s lenticular stereoscope and binocular camera were described in 1849. They later clashed over Brewster’s claim that the Chimenti drawings were made for a 16th-century stereoscope. The rivalry between Wheatstone and Brewster is illustrated with anaglyphs that can be viewed with red/cyan glasses and in Universal Freeview format; they include rivalling ‘perceptual portraits’ as well as examples of the stimuli used to study ocular equivocation.
(This article belongs to the Special Issue Selected Papers from the Scottish Vision Group Meeting 2019)

Open Access Review
Recent Advances of Computerized Graphical Methods for the Detection and Progress Assessment of Visual Distortion Caused by Macular Disorders
Received: 17 April 2019 / Accepted: 27 April 2019 / Published: 5 June 2019
Abstract
Recent advances in computerized graphical methods have received significant attention for the detection and home monitoring of various visual distortions caused by macular disorders such as macular edema, central serous chorioretinopathy, and age-related macular degeneration. After a brief review of macular disorders and their conventional diagnostic methods, this paper reviews such graphical interface methods, including the computerized Amsler Grid, the Preferential Hyperacuity Perimeter, and the Three-dimensional Computer-automated Threshold Amsler Grid. Thereafter, the challenges these computerized methods face in the accurate and rapid detection of macular disorders are discussed. Early detection and progress assessment can significantly enhance the clinical procedures required for the diagnosis and treatment of macular disorders.

Open Access Review
Using Eye Movements to Understand how Security Screeners Search for Threats in X-Ray Baggage
Received: 28 February 2019 / Revised: 17 May 2019 / Accepted: 1 June 2019 / Published: 4 June 2019
Abstract
There has been an increasing drive to understand failures in searches for weapons and explosives in X-ray baggage screening. Tracking eye movements during search has produced new insights into the guidance of attention and the identification of targets once they are fixated. Here, we review the eye-movement literature that has emerged on this front over the last fifteen years, including a discussion of the problems that real-world searchers face when trying to detect targets that could do serious harm to people and infrastructure.
(This article belongs to the Special Issue Eye Movements and Visual Cognition)

Open Access Review
The Changing Role of Phonology in Reading Development
Received: 22 February 2019 / Revised: 24 May 2019 / Accepted: 28 May 2019 / Published: 30 May 2019
Abstract
Processing of both a word’s orthography (its printed form) and phonology (its associated speech sounds) is critical for lexical identification during reading, in both beginning and skilled readers. Theories of learning to read typically posit a developmental change, from early readers’ reliance on phonology to more skilled readers’ development of direct orthographic-semantic links. Specifically, in becoming a skilled reader, the extent to which an individual processes phonology during lexical identification is thought to decrease. Recent data from eye movement research suggest, however, that the developmental change in phonological processing is somewhat more nuanced than this. Such studies show that phonology influences lexical identification in beginning and skilled readers in both typically and atypically developing populations. These data indicate, therefore, that the developmental change might better be characterised as a transition from overt decoding to abstract, covert recoding. We do not stop processing phonology as we become more skilled at reading; rather, the nature of that processing changes.
(This article belongs to the Special Issue Eye Movements and Visual Cognition)

Open Access Review
What Can Eye Movements Tell Us about Subtle Cognitive Processing Differences in Autism?
Received: 27 March 2019 / Revised: 15 May 2019 / Accepted: 18 May 2019 / Published: 24 May 2019
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental condition principally characterised by impairments in social interaction and communication, and by repetitive behaviours and interests. This article reviews the eye movement studies designed to investigate the underlying sampling or processing differences that might account for the principal characteristics of autism. Following a brief summary of a previous review chapter by one of the authors of the current paper, a detailed review of eye movement studies investigating various aspects of processing in autism over the last decade is presented. The literature is organised into sections covering different cognitive components, including language and social communication and interaction studies. The aim of the review is to show how eye movement studies provide a very useful on-line processing measure, allowing us to account for observed differences in behavioural data (accuracy and reaction times). The subtle processing differences that eye movement data reveal in both language and social processing have the potential to impact the everyday communication domain in autism.
(This article belongs to the Special Issue Eye Movements and Visual Cognition)

Open Access Review
Eye Movements Actively Reinstate Spatiotemporal Mnemonic Content
Received: 28 February 2019 / Revised: 9 May 2019 / Accepted: 10 May 2019 / Published: 18 May 2019
Abstract
Eye movements support memory encoding by binding distinct elements of the visual world into coherent representations. However, the role of eye movements in memory retrieval is less clear. We propose that eye movements play a functional role in retrieval by reinstating the encoding context. By overtly shifting attention in a manner that broadly recapitulates the spatial locations and temporal order of encoded content, eye movements facilitate access to, and reactivation of, associated details. Such mnemonic gaze reinstatement may be obligatorily recruited when task demands exceed cognitive resources, as is often observed in older adults. We review research linking gaze reinstatement to retrieval, describe the neural integration between the oculomotor and memory systems, and discuss implications for models of oculomotor control, memory, and aging.
(This article belongs to the Special Issue Eye Movements and Visual Cognition)

Open Access Article
The Limitations of Reward Effects on Saccade Latencies: An Exploration of Task-Specificity and Strength
Received: 17 March 2019 / Revised: 7 May 2019 / Accepted: 9 May 2019 / Published: 11 May 2019
Abstract
Saccadic eye movements are simple, visually guided actions. Operant conditioning of specific saccade directions can reduce the latency of eye movements in the conditioned direction. However, it is not clear to what extent this learning transfers from the conditioned task to novel tasks. The purpose of this study was to investigate whether the effects of operant conditioning of prosaccades to specific spatial locations would transfer to more complex oculomotor behaviours, specifically, prosaccades made in the presence of a distractor (Experiment 1) and antisaccades (Experiment 2). In part 1 of each experiment, participants were rewarded for making a saccade to one hemifield. In both experiments, reward produced a significant facilitation of saccadic latency for prosaccades directed to the rewarded hemifield. In part 2, rewards were withdrawn, and participants made prosaccades to targets accompanied by a contralateral distractor (Experiment 1) or made antisaccades (Experiment 2). There were no hemifield-specific effects of reward on saccade latency for either the remote distractor effect or antisaccades, although reward was associated with an overall slowing of saccade latency in Experiment 1. These data indicate that operant conditioning of saccadic eye movements does not transfer to similar but untrained tasks. We conclude that rewarding specific spatial locations is unlikely to induce long-term, systemic changes to the human oculomotor system.
(This article belongs to the Special Issue Visual Control of Action)

Open Access Review
Meaning and Attentional Guidance in Scenes: A Review of the Meaning Map Approach
Received: 27 February 2019 / Revised: 24 April 2019 / Accepted: 7 May 2019 / Published: 10 May 2019
Abstract
Perception of a complex visual scene requires that important regions be prioritized and attentionally selected for processing. What is the basis for this selection? Although much research has focused on image salience as an important factor guiding attention, relatively little work has focused on semantic salience. To address this imbalance, we have recently developed a new method for measuring, representing, and evaluating the role of meaning in scenes. In this method, the spatial distribution of semantic features in a scene is represented as a meaning map. Meaning maps are generated from crowd-sourced responses given by naïve subjects who rate the meaningfulness of a large number of scene patches drawn from each scene. Meaning maps are coded in the same format as traditional image saliency maps, and therefore both types of maps can be directly evaluated against each other and against maps of the spatial distribution of attention derived from viewers’ eye fixations. In this review we describe our work focusing on comparing the influences of meaning and image salience on attentional guidance in real-world scenes across a variety of viewing tasks that we have investigated, including memorization, aesthetic judgment, scene description, and saliency search and judgment. Overall, we have found that both meaning and salience predict the spatial distribution of attention in a scene, but that when the correlation between meaning and salience is statistically controlled, only meaning uniquely accounts for variance in attention.
(This article belongs to the Special Issue Eye Movements and Visual Cognition)
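The key analytic move described here, statistically controlling the correlation between meaning and salience, amounts to comparing zero-order and partial correlations between maps. The Python sketch below illustrates this with synthetic maps; the arrays and their correlation structure are invented for demonstration and are not the authors' data or exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-ins for one scene's maps, all on a common 60x80 grid.
meaning_map = rng.random((60, 80))
saliency_map = 0.6 * meaning_map + 0.4 * rng.random((60, 80))   # correlated with meaning
attention_map = 0.7 * meaning_map + 0.3 * rng.random((60, 80))  # driven by meaning here

def partial_r(x, y, z):
    """Correlation of x and y after linearly regressing z out of both."""
    resid = lambda a: a - np.polyval(np.polyfit(z, a, 1), z)
    return np.corrcoef(resid(x), resid(y))[0, 1]

m, s, a = (arr.ravel() for arr in (meaning_map, saliency_map, attention_map))
print(np.corrcoef(m, a)[0, 1])  # zero-order: meaning predicts attention
print(np.corrcoef(s, a)[0, 1])  # zero-order: salience also predicts attention
print(partial_r(m, a, s))       # meaning still predicts attention with salience controlled
print(partial_r(s, a, m))       # salience contributes little once meaning is controlled
```

The pattern the review reports corresponds to the last two lines: a substantial partial correlation for meaning but not for salience once their shared variance is removed.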

Open Access Commentary
Bela Julesz in Depth
Received: 23 March 2019 / Revised: 3 May 2019 / Accepted: 5 May 2019 / Published: 8 May 2019
Abstract
A brief tribute to Bela Julesz (1928–2003) is made in words and images. In addition to a conventional stereophotographic portrait, his major contributions to vision research are commemorated by two ‘perceptual portraits’, which try to capture the spirit of his main accomplishments in stereopsis and the perception of texture.

Open Access Review
Associations and Dissociations between Oculomotor Readiness and Covert Attention
Received: 19 February 2019 / Revised: 23 April 2019 / Accepted: 25 April 2019 / Published: 7 May 2019
Abstract
The idea that covert mental processes such as spatial attention are fundamentally dependent on systems that control overt movements of the eyes has had a profound influence on theoretical models of spatial attention. However, theories such as Klein’s Oculomotor Readiness Hypothesis (OMRH) and Rizzolatti’s Premotor Theory have not gone unchallenged. We previously argued that although OMRH/Premotor theory is inadequate to explain pre-saccadic attention and endogenous covert orienting, it may still be tenable as a theory of exogenous covert orienting. In this article we briefly reiterate the key lines of argument for and against OMRH/Premotor theory, then evaluate the Oculomotor Readiness account of Exogenous Orienting (OREO) with respect to more recent empirical data. These studies broadly confirm the importance of oculomotor preparation for covert, exogenous attention. We explain this relationship in terms of reciprocal links between parietal ‘priority maps’ and the midbrain oculomotor centres that translate priority-related activation into potential saccade endpoints. We conclude that OMRH/Premotor theory is false for covert, endogenous orienting but remains tenable as an explanation for covert, exogenous orienting.
(This article belongs to the Special Issue Eye Movements and Visual Cognition)

Open Access Article
Attention Combines Similarly in Covert and Overt Conditions
Received: 4 February 2019 / Revised: 16 April 2019 / Accepted: 23 April 2019 / Published: 25 April 2019
Abstract
Attention is classically classified according to mode of engagement into voluntary and reflexive, and according to type of operation into covert and overt. The first distinguishes whether attention is elicited intentionally or by unexpected events; the second, whether attention is directed with or without eye movements. Recently, this taxonomy has been expanded to include automated orienting engaged by overlearned symbols and combined attention engaged by a combination of several modes of function. However, so far, combined effects have been demonstrated in covert conditions only, and thus here we examined whether attentional modes combine in overt responses as well. To do so, we elicited automated, voluntary, and combined orienting in covert cases, i.e., when participants responded manually while maintaining central fixation, and overt cases, i.e., when they responded by looking. The data indicated typical effects for automated and voluntary conditions in both covert and overt data, with the magnitude of the combined effect larger than the magnitude of each mode alone as well as their additive sum. No differences in the combined effects emerged across covert and overt conditions. As such, these results show that attentional systems combine similarly in covert and overt responses and highlight attention’s dynamic flexibility in facilitating human behavior.
(This article belongs to the Special Issue Visual Orienting and Conscious Perception)

Open Access Article
Hands Ahead in Mind and Motion: Active Inference in Peripersonal Hand Space
Received: 28 February 2019 / Revised: 5 April 2019 / Accepted: 16 April 2019 / Published: 18 April 2019
Abstract
According to theories of anticipatory behavior control, actions are initiated by predicting their sensory outcomes. From the perspective of event-predictive cognition and active inference, predictive processes activate currently desired events and event boundaries, as well as the expected sensorimotor mappings necessary to realize them, depending on the predicted uncertainties involved, before actual motor control unfolds. Accordingly, we asked whether peripersonal hand space is remapped in an uncertainty-anticipating manner while grasping and placing bottles in a virtual reality (VR) setup. To investigate this, we combined the crossmodal congruency paradigm with virtual object interactions in two experiments. As expected, an anticipatory crossmodal congruency effect (aCCE) at the future finger position on the bottle was detected. Moreover, a manipulation of the visuo-motor mapping of the participants’ virtual hand while approaching the bottle selectively reduced the aCCE at movement onset. Our results support theories of event-predictive, anticipatory behavior control and active inference, showing that expected uncertainties in movement control indeed influence anticipatory stimulus processing.
(This article belongs to the Special Issue Visual Control of Action)

Open Access Article
Dynamic Cancellation of Perceived Rotation from the Venetian Blind Effect
Received: 2 March 2019 / Revised: 28 March 2019 / Accepted: 30 March 2019 / Published: 3 April 2019
Abstract
Geometric differences between the images seen by each eye enable the perception of depth. Depth is also produced in the absence of geometric disparities by binocular disparities in either average luminance or contrast, which is known as the Venetian blind effect. The temporal dynamics of the Venetian blind effect are much slower (1.3 Hz) than those for geometric binocular disparities (4–5 Hz). Sine-wave modulations of luminance and contrast disparity, however, can be discriminated from square-wave modulations at 1 Hz, which suggests a non-linearity. To measure this non-linearity, a luminance or contrast disparity modulation was presented at a particular frequency and paired with a geometric disparity modulation that cancelled the perceived rotation induced by the luminance or contrast modulation. Phase offsets between the luminance or contrast modulation and the geometric modulation varied in 50 ms increments from −200 to 200 ms. When the phases were aligned, observers perceived little or no rotation. When they were not aligned, a perceived rotation was induced by the contrast or luminance disparity and then cancelled by the geometric disparity, causing the perception of a slight jump. The Generalized Difference Model, which is linear in time, predicted a minimal probability in cases where the luminance or contrast disparities occurred before the geometric disparities, owing to the slower dynamics of the Venetian blind effect. The Gated Generalized Difference Model, which is non-linear in time, predicted a minimal probability at an offset of 0 ms. Results followed the Gated model, which further suggests a non-linearity in time for the Venetian blind effect.
(This article belongs to the Special Issue Visual Motion Processing)

Open Access Article
The A-Effect and Global Motion
Received: 4 February 2019 / Revised: 20 March 2019 / Accepted: 22 March 2019 / Published: 28 March 2019
Abstract
When the head is tilted, an objectively vertical line viewed in isolation is typically perceived as tilted. We explored whether this shift also occurs when viewing global motion displays perceived as either object-motion or self-motion. Observers stood and lay left side down while viewing (1) a static line, (2) a random-dot display of 2-D (planar) motion or (3) a random-dot display of 3-D (volumetric) global motion. On each trial, the line orientation or motion direction was tilted from the gravitational vertical and observers indicated whether the tilt was clockwise or counter-clockwise from the perceived vertical. Psychometric functions were fit to the data and shifts in the point of subjective verticality (PSV) were measured. When the whole body was tilted, the perceived tilt of both a static line and the direction of optic flow were biased in the direction of the body tilt, demonstrating the so-called A-effect. However, we found significantly larger shifts for the static line than for volumetric global motion, as well as larger shifts for volumetric displays than for planar displays. The A-effect was larger when the motion was experienced as self-motion than when it was experienced as object-motion. Discrimination thresholds were also more precise in the self-motion than in the object-motion conditions. The different magnitudes of the A-effect for the line and motion conditions—and for object- and self-motion—may be due to differences in how idiotropic (body) and vestibular signals are combined, particularly in the case of vection, which occurs despite visual-vestibular conflict.
(This article belongs to the Special Issue Multisensory Modulation of Vision)
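Fitting psychometric functions to extract the PSV, as described above, typically means fitting a cumulative Gaussian to the proportion of 'clockwise' judgments as a function of tilt: the 50% point gives the PSV shift and the slope gives the discrimination threshold. The Python sketch below, with invented data, illustrates the general method only, not the authors' exact fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical data: line tilts (deg from gravitational vertical) and the
# proportion of "clockwise" judgments at each tilt, body tilted left-side down.
tilts = np.array([-12, -8, -4, 0, 4, 8, 12], dtype=float)
p_cw = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.95, 0.99])

def psychometric(x, psv, sigma):
    """Cumulative Gaussian: P("clockwise") as a function of stimulus tilt."""
    return norm.cdf(x, loc=psv, scale=sigma)

(psv, sigma), _ = curve_fit(psychometric, tilts, p_cw, p0=[0.0, 4.0])
# psv: point of subjective verticality; a shift toward the body tilt is the
# A-effect. sigma indexes discrimination precision (smaller = more precise).
```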

Open Access Review
A Review of Motion and Orientation Processing in Migraine
Received: 29 January 2019 / Accepted: 7 March 2019 / Published: 27 March 2019
Abstract
Visual tests can be used as noninvasive tools to test models of the pathophysiology underlying neurological conditions such as migraine. They may also be used to track changes in performance that vary with the migraine cycle, or to track the efficacy of prophylactic treatments. This article reviews the literature on performance differences on two visual tasks, global motion discrimination and orientation discrimination, which, of the many visual tasks that have been used to compare migraine and control groups, have yielded the most consistent patterns of group differences. The implications for understanding the underlying pathophysiology of migraine are discussed, but the main focus is on bringing together disparate areas of research and suggesting those that can reveal practical uses of visual tests to treat and manage migraine.
(This article belongs to the Special Issue Visual Motion Processing)

Vision EISSN 2411-5150, published by MDPI AG, Basel, Switzerland