Visual Perception and Its Neural Mechanisms

A special issue of Vision (ISSN 2411-5150).

Deadline for manuscript submissions: closed (15 October 2018) | Viewed by 41024

Special Issue Editors


Guest Editor: Dr. Kendrick Kay
Assistant Professor, Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA
Interests: visual neuroscience; neuroimaging methods; computational modeling; attention; object recognition; statistics

Guest Editor: Dr. Bas Rokers
Associate Professor, Department of Psychology, University of Wisconsin – Madison, Madison, WI 53706, USA
Interests: visual perception; motion and depth processing; perceptual disorders; virtual reality; sensory integration; binocular vision

Special Issue Information

Dear Colleagues,

For this Special Issue on “Visual Perception and Its Neural Mechanisms”, we invite a mixture of original and review articles that provide insight into the neural computations performed by visual cortex. We especially encourage articles that address issues pertaining to (i) advances in techniques for neural measurement or (ii) neural data analysis and modeling approaches that elucidate perceptual mechanisms.

Suggested topics include, but are not limited to:

  1. Advanced methods improving the resolution and quality of neural measurements
  2. Computational modeling frameworks
  3. Efforts to clarify the link between brain responses and behavior (e.g., perceptual judgments, awareness, uncertainty, reaction times)
  4. Efforts to bridge spatial and temporal scales of measurement (e.g., single units vs. population measures)
  5. Novel insights into the organization of, and within, visual areas in the brain
  6. Neural biomarkers for perceptual disorders

If you are considering a review article, please send us a brief proposal before a full submission.

Dr. Kendrick Kay
Dr. Bas Rokers
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Vision is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • perception
  • neural mechanism
  • computational modeling
  • fMRI
  • EEG/MEG
  • ECoG
  • optical imaging
  • electrophysiology

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (7 papers)

Research

18 pages, 3651 KiB  
Article
Distinguishing Hemodynamics from Function in the Human LGN Using a Temporal Response Model
by Kevin DeSimone and Keith A. Schneider
Vision 2019, 3(2), 27; https://doi.org/10.3390/vision3020027 - 7 Jun 2019
Cited by 4 | Viewed by 4216
Abstract
We developed a temporal population receptive field model to differentiate the neural and hemodynamic response functions (HRF) in the human lateral geniculate nucleus (LGN). The HRF in the human LGN is dominated by the richly vascularized hilum, a structure that serves as the point of entry for the blood vessels that supply the substrates of central vision. The location of the hilum along the ventral surface of the LGN, and the resulting gradient in the amplitude of the HRF across the extent of the LGN, have made it difficult to segment the human LGN into its more interesting magnocellular and parvocellular regions, which represent two distinct visual processing streams. Here, we show that an intrinsic clustering of the LGN responses to a variety of visual inputs reveals the hilum, and further, that this clustering is dominated by the amplitude of the HRF. We introduced a temporal population receptive field model that includes separate sustained and transient temporal impulse response functions that vary on a much shorter timescale than the HRF. When we account for the HRF amplitude, we demonstrate that this temporal response model is able to functionally segregate the residual responses according to their temporal properties.
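
As a rough illustration of the modeling idea summarized above, the following sketch (Python/NumPy, written for this listing rather than taken from the paper) predicts a voxel's BOLD time course as a weighted sum of sustained and transient neural channels convolved with an HRF whose amplitude is a free per-voxel parameter. The filter shapes, time constants, and function names are illustrative assumptions, not the authors' published implementation.

import numpy as np

def gamma_filter(t, tau):
    # Simple gamma-shaped impulse response; used here both for the
    # sustained neural channel and (with a longer tau) for the HRF.
    h = (t / tau) * np.exp(-t / tau)
    return h / h.sum()

def biphasic_filter(t, tau):
    # Transient channel approximated as the temporal derivative of the
    # sustained filter (biphasic, roughly zero-mean).
    h = np.gradient(gamma_filter(t, tau))
    return h / np.abs(h).sum()

def predict_bold(stimulus, dt, w_sustained, w_transient, hrf_amplitude,
                 tau_neural=0.05, tau_hrf=5.0):
    # stimulus: 1-D binary time course sampled every dt seconds.
    # w_sustained, w_transient: temporal pRF channel weights.
    # hrf_amplitude: per-voxel HRF scale factor (dominated by vascular
    # effects such as proximity to the hilum).
    t_neural = np.arange(0, 1.0, dt)    # neural filters on a ~100 ms scale
    t_hrf = np.arange(0, 30.0, dt)      # HRF unfolds over tens of seconds
    sustained = np.convolve(stimulus, gamma_filter(t_neural, tau_neural))[:len(stimulus)]
    transient = np.abs(np.convolve(stimulus, biphasic_filter(t_neural, tau_neural)))[:len(stimulus)]
    neural = w_sustained * sustained + w_transient * transient
    return hrf_amplitude * np.convolve(neural, gamma_filter(t_hrf, tau_hrf))[:len(stimulus)]

Fitting hrf_amplitude separately from the channel weights is, in this sketch, what allows a vascular amplitude gradient to be factored out before comparing voxels on their temporal properties.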

18 pages, 5709 KiB  
Article
Reliability and Generalizability of Similarity-Based Fusion of MEG and fMRI Data in Human Ventral and Dorsal Visual Streams
by Yalda Mohsenzadeh, Caitlin Mullin, Benjamin Lahner, Radoslaw Martin Cichy and Aude Oliva
Vision 2019, 3(1), 8; https://doi.org/10.3390/vision3010008 - 10 Feb 2019
Cited by 15 | Viewed by 7550
Abstract
To build a representation of what we see, the human brain recruits regions throughout the visual cortex in a cascading sequence. Recently, an approach was proposed to evaluate the dynamics of visual perception at high spatiotemporal resolution at the scale of the whole brain. This method combined functional magnetic resonance imaging (fMRI) data with magnetoencephalography (MEG) data using representational similarity analysis and revealed a hierarchical progression from primary visual cortex through the dorsal and ventral streams. To assess the replicability of this method, we here present the results of a visual recognition neuroimaging fusion experiment and compare them within and across experimental settings. We evaluated the reliability of this method by assessing the consistency of the results under similar test conditions, showing high agreement within participants. We then generalized these results to a separate group of individuals and different visual input by comparing them to the fMRI-MEG fusion data of Cichy et al. (2016), revealing a highly similar temporal progression recruiting both the dorsal and ventral streams. Together, these results are a testament to the reproducibility of the fMRI-MEG fusion approach and allow these spatiotemporal dynamics to be interpreted in a broader context.
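
The following sketch illustrates the general logic of similarity-based MEG-fMRI fusion described above: build a representational dissimilarity matrix (RDM) for each MEG time point and for an fMRI region of interest, then correlate them across condition pairs to trace when each region's representational geometry emerges in time. The array shapes, the correlation-distance RDM, and the use of Spearman correlation are assumptions made for illustration, not the authors' exact pipeline.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    # Condition-by-condition dissimilarity (1 - Pearson correlation),
    # returned as a condensed vector over condition pairs.
    return pdist(patterns, metric="correlation")

def fusion_timecourse(meg, fmri_roi):
    # meg: array (timepoints, conditions, sensors)
    # fmri_roi: array (conditions, voxels)
    # Returns one fusion value per MEG time point.
    roi_rdm = rdm(fmri_roi)
    return np.array([spearmanr(rdm(meg[t]), roi_rdm)[0]
                     for t in range(meg.shape[0])])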

17 pages, 2182 KiB  
Article
Temporal Limits of Visual Motion Processing: Psychophysics and Neurophysiology
by Bart G. Borghuis, Duje Tadin, Martin J.M. Lankheet, Joseph S. Lappin and Wim A. van de Grind
Vision 2019, 3(1), 5; https://doi.org/10.3390/vision3010005 - 26 Jan 2019
Cited by 16 | Viewed by 6084
Abstract
Under optimal conditions, just 3–6 ms of visual stimulation suffices for humans to see motion. Motion perception on this timescale implies that the visual system under these conditions reliably encodes, transmits, and processes neural signals with near-millisecond precision. Motivated by in vitro evidence for high temporal precision of motion signals in the primate retina, we investigated how neuronal and perceptual limits of motion encoding relate. Specifically, we examined the correspondence between the timescale at which cat retinal ganglion cells in vivo represent motion information and temporal thresholds for human motion discrimination. The timescale for motion encoding by ganglion cells ranged from 4.6 to 91 ms, and depended non-linearly on temporal frequency, but not on contrast. Human psychophysics revealed that the minimal stimulus durations required for perceiving motion direction were similarly brief, 5.6–65 ms, and similarly depended on temporal frequency but, above ~10% contrast, not on contrast. Notably, physiological and psychophysical measurements corresponded closely throughout (r = 0.99), despite more than a 20-fold variation in both human thresholds and optimal timescales for motion encoding in the retina. The match in absolute values of the neurophysiological and psychophysical data may be taken to indicate that, from the lateral geniculate nucleus (LGN) through to the level of perception, little temporal precision is lost. However, we also show that integrating responses from multiple neurons can improve temporal resolution, and this potential trade-off between spatial and temporal resolution would allow for loss of temporal resolution after the LGN. While the extent of neuronal integration cannot be determined from either our human psychophysical or neurophysiological experiments, and its contribution to the measured temporal resolution is unknown, our results demonstrate a striking similarity in stimulus dependence between the temporal fidelity established in the retina and the temporal limits of human motion discrimination.
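
For readers unfamiliar with how such temporal thresholds are typically obtained, the hypothetical example below fits a Weibull psychometric function to direction-discrimination accuracy as a function of stimulus duration and inverts it at a 75%-correct criterion. The data values, criterion, and fitting routine are illustrative assumptions and do not reproduce the study's analysis.

import numpy as np
from scipy.optimize import curve_fit

def weibull(duration_ms, alpha, beta):
    # Two-alternative forced-choice Weibull: performance rises from
    # chance (0.5) toward 1.0 as duration increases.
    return 0.5 + 0.5 * (1.0 - np.exp(-(duration_ms / alpha) ** beta))

# Hypothetical proportion-correct data at each stimulus duration (ms).
durations = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
p_correct = np.array([0.52, 0.61, 0.78, 0.93, 0.98, 0.99])

(alpha, beta), _ = curve_fit(weibull, durations, p_correct, p0=[10.0, 2.0])

# Minimal duration threshold: invert the Weibull at 75% correct.
threshold_ms = alpha * (-np.log(1.0 - (0.75 - 0.5) / 0.5)) ** (1.0 / beta)
print(f"Estimated duration threshold: {threshold_ms:.1f} ms")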

15 pages, 3063 KiB  
Article
Long-Range Interocular Suppression in Adults with Strabismic Amblyopia: A Pilot fMRI Study
by Benjamin Thompson, Goro Maehara, Erin Goddard, Reza Farivar, Behzad Mansouri and Robert F. Hess
Vision 2019, 3(1), 2; https://doi.org/10.3390/vision3010002 - 8 Jan 2019
Cited by 11 | Viewed by 4301
Abstract
Interocular suppression plays an important role in the visual deficits experienced by individuals with amblyopia. Most neurophysiological and functional MRI studies of suppression in amblyopia have used dichoptic stimuli that overlap within the visual field. However, suppression of the amblyopic eye also occurs when the dichoptic stimuli do not overlap, a phenomenon we refer to as long-range suppression. We used functional MRI to test the hypothesis that long-range suppression reduces neural activity in V1, V2 and V3 in adults with amblyopia, indicative of an early, active inhibition mechanism. Five adults with amblyopia and five controls viewed monocular and dichoptic quadrant stimuli during fMRI. Three of the five participants with amblyopia experienced complete perceptual suppression of the quadrants presented to their amblyopic eye under dichoptic viewing. The blood oxygen level dependent (BOLD) responses within retinotopic regions corresponding to amblyopic and fellow eye stimuli were analyzed for response magnitude, time to peak, effective connectivity and stimulus classification. Dichoptic viewing slightly reduced the BOLD response magnitude in amblyopic eye retinotopic regions in V1 and reduced the time to peak response; however, the same effects were also present in the non-dominant eye of controls. Effective connectivity was unaffected by suppression, and the results of a classification analysis did not differ significantly between the control and amblyopia groups. Overall, we did not observe a neural signature of long-range amblyopic eye suppression in V1, V2 or V3 using functional MRI in this initial study. This type of suppression may involve higher-level processing areas within the brain.

17 pages, 3423 KiB  
Article
Differentiation of Types of Visual Agnosia Using EEG
by Sarah M. Haigh, Amanda K. Robinson, Pulkit Grover and Marlene Behrmann
Vision 2018, 2(4), 44; https://doi.org/10.3390/vision2040044 - 18 Dec 2018
Cited by 3 | Viewed by 7583
Abstract
Visual recognition deficits are the hallmark symptom of visual agnosia, a neuropsychological disorder typically associated with damage to the visual system. Most research into visual agnosia focuses on characterizing the deficits through detailed behavioral testing, and structural and functional brain scans are used to determine the spatial extent of any cortical damage. Although the hierarchical nature of the visual system leads to clear predictions about the temporal dynamics of cortical deficits, there has been little research using neuroimaging methods with high temporal resolution to characterize the temporal profile of agnosia deficits. Here, we employed high-density electroencephalography (EEG) to investigate alterations in the temporal dynamics of the visual system in two individuals with visual agnosia. In the context of a steady-state visual evoked potential (SSVEP) paradigm, individuals viewed pattern-reversing checkerboards of differing spatial frequency, and we assessed the responses of the visual system in the frequency and temporal domains. JW, a patient with early visual cortex damage, showed an impaired SSVEP response relative to a control group and to the second patient (SM), who had right temporal lobe damage. JW also showed lower decoding accuracy for early visual responses (around 100 ms). SM, whose lesion is more anterior in the visual system, showed good decoding accuracy initially but low decoding accuracy after 500 ms. Overall, EEG and multivariate decoding methods can yield important insights into the temporal dynamics of visual responses in individuals with visual agnosia.
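
As a simplified illustration of the frequency-domain analysis mentioned above, the snippet below reads off the amplitude of a single EEG channel at the pattern-reversal frequency and its harmonics from the Fourier spectrum. The function and its parameters are an assumed sketch of the general SSVEP approach, not the authors' code, and the multivariate decoding step is omitted.

import numpy as np

def ssvep_amplitudes(eeg, sfreq, stim_freq, n_harmonics=2):
    # eeg: 1-D EEG time series from one channel.
    # sfreq: sampling rate in Hz; stim_freq: pattern-reversal frequency in Hz.
    spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / sfreq)
    return np.array([spectrum[np.argmin(np.abs(freqs - k * stim_freq))]
                     for k in range(1, n_harmonics + 1)])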

18 pages, 1030 KiB  
Article
Apparent Motion Perception in the Praying Mantis: Psychophysics and Modelling
by Ghaith Tarawneh, Lisa Jones, Vivek Nityananda, Ronny Rosner, Claire Rind and Jenny C. A. Read
Vision 2018, 2(3), 32; https://doi.org/10.3390/vision2030032 - 10 Aug 2018
Viewed by 5147
Abstract
Apparent motion is the perception of motion created by rapidly presenting still frames in which objects are displaced in space. Observers can reliably discriminate the direction of apparent motion when the inter-frame object displacement is below a certain limit, Dmax. Earlier studies of motion perception in humans found that Dmax is lower-bounded at around 15 arcmin, and thereafter scales with the size of the spatial elements in the images. Here, we run corresponding experiments in the praying mantis Sphodromantis lineola to investigate how Dmax scales with element size. We use random moving chequerboard patterns of varying element and displacement step sizes to elicit the optomotor response, a postural stabilization mechanism that causes mantids to lean in the direction of large-field motion. Subsequently, we calculate Dmax as the displacement step size corresponding to a 50% probability of detecting an optomotor response in the same direction as the stimulus. Our main findings are that the mantis Dmax scales roughly as the square root of element size and that, in contrast to humans, it is not lower-bounded. We present two models to explain these observations: a simple high-level model based on motion energy in the Fourier domain and a more detailed one based on the Reichardt detector. The models present complementary intuitive and physiologically realistic accounts of how Dmax scales with element size in insects. We conclude that insect motion perception is limited by only a single stage of spatial filtering, reflecting the optics of the compound eye. In contrast, human motion perception reflects a second stage of spatial filtering, at coarser scales than imposed by human optics, likely corresponding to the magnocellular pathway. After this spatial filtering, mantis and human motion perception and Dmax are qualitatively very similar.
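
To make the second model class concrete, the toy example below implements an opponent Reichardt (correlation-type) detector: each point in a space-time luminance pattern is correlated with a delayed copy of its spatial neighbour, and the two mirror-symmetric half-detectors are subtracted. The spatial offset, the first-order low-pass delay filter, and its time constant are illustrative assumptions, not the parameters fitted in the paper.

import numpy as np

def lowpass(signal, dt, tau):
    # First-order low-pass filter along the time axis (axis 0),
    # acting as the detector's delay line.
    out = np.zeros_like(signal, dtype=float)
    for i in range(1, len(signal)):
        out[i] = out[i - 1] + (dt / tau) * (signal[i] - out[i - 1])
    return out

def reichardt_direction(stimulus, dt=0.001, offset=1, tau=0.04):
    # stimulus: 2-D luminance array (time x space).
    # Returns a time course; positive values signal motion toward
    # increasing spatial position, negative values the opposite.
    left = stimulus[:, :-offset]        # input at position x
    right = stimulus[:, offset:]        # input at position x + offset
    rightward = lowpass(left, dt, tau) * right
    leftward = lowpass(right, dt, tau) * left
    return (rightward - leftward).mean(axis=1)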

Other

10 pages, 853 KiB  
Brief Report
Assessing Lateral Interaction in the Synesthetic Visual Brain
by Diana Jimena Arias, Anthony Hosein and Dave Saint-Amour
Vision 2019, 3(1), 7; https://doi.org/10.3390/vision3010007 - 8 Feb 2019
Cited by 1 | Viewed by 3690
Abstract
In grapheme-color synesthesia, letters and numbers evoke abnormal colored perceptions. Although the underlying mechanisms are not known, it is largely thought that the synesthetic brain is characterized by atypical connectivity throughout various brain regions, including the visual areas. To study the putative impact of synesthesia on the visual brain, we assessed lateral interactions (i.e., local functional connectivity between neighboring neurons in the visual cortex) by recording steady-state visual evoked potentials (ssVEPs) over the occipital region in grapheme-color synesthetes (n = 6) and controls (n = 21) using the windmill/dartboard paradigm. Discrete Fourier Transform analysis was conducted to extract the fundamental frequency and second harmonic components of the ssVEP responses to contrast-reversing stimuli presented at 4.27 Hz. Lateral interactions were assessed using two amplitude-based indices: short-range and long-range lateral interactions. Results indicated that synesthetes had significantly weaker signal coherence of the fundamental frequency component compared to controls, but no group differences were observed on the lateral interaction indices. However, a significant correlation was found between long-range lateral interactions and the type of synesthetic experience (projector versus associator). We conclude that the occipital activity related to lateral interactions in synesthetes does not substantially differ from that observed in controls. Further investigation is needed to understand the impact of synesthesia on visual processing, specifically in relation to the subjective experiences of synesthetes.
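
As a hedged sketch of the kind of trial-level frequency-domain measure referred to above, the snippet below computes the phase-locked ("coherent") response at the stimulation frequency from single-trial Fourier coefficients. The particular coherence definition used (squared magnitude of the mean coefficient divided by the mean squared magnitude) is one common convention and is an assumption, not necessarily the exact index used in the paper.

import numpy as np

def fourier_coefficient(trial, sfreq, freq):
    # Complex Fourier coefficient of one trial at the frequency of interest.
    coeffs = np.fft.rfft(trial) / len(trial)
    freqs = np.fft.rfftfreq(len(trial), d=1.0 / sfreq)
    return coeffs[np.argmin(np.abs(freqs - freq))]

def signal_coherence(trials, sfreq, freq=4.27):
    # trials: array (n_trials, n_samples). Returns a value between 0
    # (random phase across trials) and 1 (perfectly phase-locked).
    c = np.array([fourier_coefficient(t, sfreq, freq) for t in trials])
    return np.abs(c.mean()) ** 2 / np.mean(np.abs(c) ** 2)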
