Search Results (4)

Search Parameters:
Keywords = retinotopy

14 pages, 2217 KB  
Article
Fast Optical Signals for Real-Time Retinotopy and Brain Computer Interface
by David Perpetuini, Mehmet Günal, Nicole Chiou, Sanmi Koyejo, Kyle Mathewson, Kathy A. Low, Monica Fabiani, Gabriele Gratton and Antonio Maria Chiarelli
Bioengineering 2023, 10(5), 553; https://doi.org/10.3390/bioengineering10050553 - 5 May 2023
Cited by 5 | Viewed by 3077
Abstract
A brain–computer interface (BCI) allows users to control external devices through brain activity. Portable neuroimaging techniques, such as near-infrared (NIR) imaging, are suitable for this goal. NIR imaging has been used to measure rapid changes in brain optical properties associated with neuronal activation, namely fast optical signals (FOS), with good spatiotemporal resolution. However, FOS have a low signal-to-noise ratio, which limits their BCI application. Here, FOS were acquired from the visual cortex with a frequency-domain optical system during visual stimulation with a rotating checkerboard wedge flickering at 5 Hz. We used measures of photon count (Direct Current, DC light intensity) and time of flight (phase) at two NIR wavelengths (690 nm and 830 nm), combined with a machine learning approach, for fast estimation of visual-field quadrant stimulation. The input features of a cross-validated support vector machine classifier were computed as the average modulus of the wavelet coherence between each channel and the average response among all channels in 512 ms time windows. Above-chance performance was obtained when differentiating visual stimulation quadrants (left vs. right or top vs. bottom), with the best classification accuracy of ~63% (information transfer rate of ~6 bits/min) achieved when classifying the superior and inferior stimulation quadrants using DC at 830 nm. This method is the first attempt to provide generalizable retinotopy classification relying on FOS, paving the way for the use of FOS in real-time BCI.
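The coherence-feature-plus-SVM pipeline described in this abstract can be sketched roughly as follows. This is an illustrative stand-in, not the authors' code: it uses `scipy.signal.coherence` (magnitude-squared coherence) in place of the wavelet-coherence modulus, a hypothetical 125 Hz sampling rate (chosen so a 512 ms window is 64 samples), and synthetic data rather than actual FOS recordings.

```python
import numpy as np
from scipy.signal import coherence
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 125.0                                     # hypothetical sampling rate
n_trials, n_channels, n_samples = 40, 8, 64    # 64 samples = 512 ms at 125 Hz

def coherence_features(trials, fs):
    """For each trial, average the coherence between each channel and the
    channel-mean reference (a stand-in for the wavelet-coherence modulus
    between each channel and the average response among all channels)."""
    feats = []
    for trial in trials:                       # trial: (n_channels, n_samples)
        ref = trial.mean(axis=0)               # average response among channels
        row = []
        for ch in trial:
            _, cxy = coherence(ch, ref, fs=fs, nperseg=16)
            row.append(cxy.mean())
        feats.append(row)
    return np.asarray(feats)

# Synthetic two-class data, standing in for two stimulation quadrants:
# class 1 trials share a 5 Hz component, boosting their inter-channel coherence.
X_raw = rng.standard_normal((n_trials, n_channels, n_samples))
y = np.repeat([0, 1], n_trials // 2)
X_raw[y == 1] += 0.5 * np.sin(2 * np.pi * 5 * np.arange(n_samples) / fs)

X = coherence_features(X_raw, fs)
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

In the actual study the classes would be stimulation quadrants (e.g. superior vs. inferior) and the features would be computed separately per wavelength and signal type (DC vs. phase).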

14 pages, 2416 KB  
Case Report
Normal Retinotopy in Primary Visual Cortex in a Congenital Complete Unilateral Lesion of Lateral Geniculate Nucleus in Human: A Case Study
by Akshatha Bhat, Jan W. Kurzawski, Giovanni Anobile, Francesca Tinelli, Laura Biagi and Maria Concetta Morrone
Int. J. Mol. Sci. 2022, 23(3), 1055; https://doi.org/10.3390/ijms23031055 - 19 Jan 2022
Cited by 3 | Viewed by 3523
Abstract
Impairment of the geniculostriate pathway results in scotomas in the corresponding part of the visual field. Here, we present the case of patient IB, who has left-eye microphthalmia and lesions in most of the left geniculostriate pathway, including the Lateral Geniculate Nucleus (LGN). Despite the severe lesions, the patient has only a very narrow scotoma in the peripheral part of the lower-right hemifield (beyond 15° of eccentricity) and complete visual-field representation in the primary visual cortex. Population receptive field (pRF) mapping of the patient's visual field reveals orderly eccentricity maps together with contralateral activation in both hemispheres. With diffusion tractography, we revealed connections between the superior colliculus (SC) and cortical structures in the hemisphere affected by the lesions, which could mediate the retinotopic reorganization at the cortical level. Our results indicate an astonishing flexibility of the developing retinotopic maps, in which the contralateral thalamus receives fibers from both the nasal and temporal retinae.
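The pRF mapping mentioned above rests on a simple forward model (in the style of Dumoulin and Wandell): each voxel's response to a moving stimulus aperture is predicted by the overlap between the aperture and a 2D Gaussian receptive field, and the Gaussian's center and size are fit to the data. A minimal sketch of the prediction step, with a toy sweeping-bar stimulus and entirely hypothetical parameter values:

```python
import numpy as np

def prf_prediction(stim, x0, y0, sigma, grid):
    """Predicted response time course of a 2D Gaussian pRF centered at
    (x0, y0) with size sigma, given a binarized stimulus-aperture movie."""
    X, Y = grid                                  # visual-field coordinates (deg)
    rf = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))
    # Overlap of the receptive field with the aperture at each time point.
    return stim.reshape(stim.shape[0], -1) @ rf.ravel()

# Toy example: a vertical bar sweeping left to right across a 16-deg field.
n, t = 32, 16
coords = np.linspace(-8, 8, n)                   # hypothetical field of view
grid = np.meshgrid(coords, coords)
stim = np.zeros((t, n, n))
for i in range(t):
    stim[i, :, i * 2 : i * 2 + 4] = 1.0          # 4-pixel-wide bar at step i

pred = prf_prediction(stim, x0=2.0, y0=0.0, sigma=1.5, grid=grid)
print(pred.argmax())                             # sweep step best covering the pRF
```

In practice the prediction is also convolved with a hemodynamic response function and the parameters (x0, y0, sigma) are estimated per voxel by minimizing the error against the measured BOLD time course; the fitted centers yield the eccentricity and polar-angle maps reported in the study.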

16 pages, 2705 KB  
Review
Information Integration and Information Storage in Retinotopic and Non-Retinotopic Sensory Memory
by Haluk Öğmen and Michael H. Herzog
Vision 2021, 5(4), 61; https://doi.org/10.3390/vision5040061 - 13 Dec 2021
Viewed by 3715
Abstract
The first stage of the Atkinson–Shiffrin model of human memory is a sensory memory (SM). The visual component of the SM was shown to operate within a retinotopic reference frame. However, a retinotopic SM (rSM) is unable to account for vision under natural viewing conditions because, for example, motion information needs to be analyzed across space and time. For this reason, the SM store of the Atkinson–Shiffrin model has been extended to include a non-retinotopic component (nrSM). In this paper, we analyze findings from two experimental paradigms and show drastically different properties of rSM and nrSM. We show that nrSM involves complex processes, such as motion-based reference frames and Gestalt grouping, which establish object identities across space and time. We also describe a quantitative model for nrSM and show drastic differences between the spatiotemporal properties of rSM and nrSM. Since the reference frame of the latter is non-retinotopic and motion-stream based, we suggest that the spatiotemporal properties of nrSM are in accordance with those of the motion system. Overall, these findings indicate that, unlike the traditional rSM, which is a relatively passive store, nrSM exhibits sophisticated processing properties to manage the complexities of ecological perception.
(This article belongs to the Special Issue Sensory and Working Memory: Stimulus Encoding, Storage, and Retrieval)

22 pages, 3722 KB  
Article
The Conformal Camera in Modeling Active Binocular Vision
by Jacek Turski
Symmetry 2016, 8(9), 88; https://doi.org/10.3390/sym8090088 - 31 Aug 2016
Cited by 4 | Viewed by 6197
Abstract
Primate vision is an active process that constructs a stable internal representation of the 3D world from 2D sensory inputs that are inherently unstable due to incessant eye movements. We present here a mathematical framework for processing visual information in a biologically mediated active-vision stereo system with asymmetric conformal cameras. This model utilizes the geometric analysis on the Riemann sphere developed in the group-theoretic framework of the conformal camera, thus far applicable only to modeling monocular vision. The asymmetric conformal camera model constructed here includes the fovea's asymmetric displacement on the retina and the eye's natural crystalline-lens tilt and decentration, as observed in ophthalmological diagnostics. We extend the group-theoretic framework underlying the conformal camera to the stereo system with asymmetric conformal cameras. Our numerical simulations show that the theoretical horopter curves in this stereo system are conics that closely approximate the empirical longitudinal horopters of the primate visual system.
(This article belongs to the Special Issue Symmetry in Vision)
