Search Results (5)

Search Parameters:
Keywords = trans-saccadic

24 pages, 6324 KB  
Article
A Bio-Inspired Visual Perception Transformer for Cross-Domain Semantic Segmentation of High-Resolution Remote Sensing Images
by Xinyao Wang, Haitao Wang, Yuqian Jing, Xianming Yang and Jianbo Chu
Remote Sens. 2024, 16(9), 1514; https://doi.org/10.3390/rs16091514 - 25 Apr 2024
Cited by 5 | Viewed by 2502
Abstract
Pixel-level classification of very-high-resolution images is a crucial yet challenging task in remote sensing. While transformers have demonstrated effectiveness in capturing dependencies, their tendency to partition images into patches may restrict their applicability to highly detailed remote sensing images. To extract latent contextual semantic information from high-resolution remote sensing images, we propose a gaze–saccade transformer (GSV-Trans) with visual perceptual attention. GSV-Trans incorporates a visual perceptual attention (VPA) mechanism that dynamically allocates computational resources based on the semantic complexity of the image. The VPA mechanism includes both gaze attention and eye movement attention, enabling the model to focus on the most critical parts of the image and acquire competitive semantic information. Additionally, to capture contextual semantic information across different levels in the image, we design an inter-layer short-term visual memory module with bidirectional affinity propagation to guide attention allocation. Furthermore, we introduce a dual-branch pseudo-label module (DBPL) that imposes pixel-level and category-level semantic constraints on both the gaze and saccade branches. DBPL encourages the model to extract domain-invariant features and align semantic information across different domains in the feature space. Extensive experiments on multiple pixel-level classification benchmarks confirm the effectiveness and superiority of our method over the state of the art.
(This article belongs to the Special Issue Deep Learning and Computer Vision in Remote Sensing-III)
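
The abstract above describes GSV-Trans only at a high level, so here is a minimal sketch, assuming PyTorch, of the general idea of two parallel attention branches: a global "gaze" branch and a locally windowed "saccade" branch, fused by a learned gate. The class name, the window mask, and the gating scheme are illustrative assumptions, not the authors' published GSV-Trans code.

```python
# Illustrative sketch only: GSV-Trans itself is not reproduced here. The class
# name, the local-window mask, and the gating scheme are assumptions meant to
# convey the idea of parallel global ("gaze") and local ("saccade") attention.
import torch
import torch.nn as nn


class GazeSaccadeBlock(nn.Module):
    def __init__(self, dim: int, heads: int = 8, window: int = 7):
        super().__init__()
        self.gaze = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.saccade = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.window = window

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) -- flattened image patches
        g, _ = self.gaze(x, x, x)  # global context ("gaze")
        n = x.shape[1]
        idx = torch.arange(n, device=x.device)
        # boolean mask: True = blocked, so attention stays within +/- window tokens
        mask = (idx[None, :] - idx[:, None]).abs() > self.window
        s, _ = self.saccade(x, x, x, attn_mask=mask)  # local context ("saccade")
        a = self.gate(torch.cat([g, s], dim=-1))  # per-token mixing weights in (0, 1)
        return a * g + (1 - a) * s


tokens = torch.randn(2, 64, 256)  # 2 images, 64 patches, 256-dim embeddings
print(GazeSaccadeBlock(256)(tokens).shape)  # torch.Size([2, 64, 256])
```

The gate here simply mixes the two branch outputs per token; the paper's full VPA mechanism, inter-layer short-term visual memory, and pseudo-label module are not reproduced.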

3 pages, 328 KB  
Concept Paper
Using the Blind Spot to Investigate Trans-Saccadic Perception
by Julie Royo, Fabrice Arcizet, Patrick Cavanagh and Pierre Pouget
Vision 2021, 5(3), 39; https://doi.org/10.3390/vision5030039 - 26 Aug 2021
Viewed by 3417
Abstract
We introduce a blind spot method to create image changes contingent on eye movements. One challenge of eye movement research is triggering display changes contingent on gaze. The eye-tracking system must capture the image of the eye, discover and track the pupil and corneal reflections to estimate the gaze position, and then transfer this data to the computer that updates the display. All of these steps introduce delays that are often difficult to predict. To avoid these issues, we describe a simple blind spot method to generate gaze-contingent display manipulations without any eye-tracking system or display controls.
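
As background to the method described above: the blind spot of each eye sits roughly 15 degrees temporal to fixation and slightly below the horizontal meridian, so a monocularly presented stimulus placed there is invisible during fixation and becomes visible only once an eye movement carries it out of that region. The short sketch below, in Python, only checks that geometry; the coordinates, the 2.7-degree radius, and the function names are textbook approximations and assumptions, not the authors' calibration or code.

```python
# Hedged sketch of blind-spot geometry only; values are textbook approximations
# (centre ~15 deg temporal, ~1.5 deg below the horizontal meridian, ~5 deg
# across) and vary between observers. Not the paper's experimental code.
import math


def screen_to_deg(x_cm: float, y_cm: float, distance_cm: float) -> tuple:
    """Convert a screen offset from fixation (cm) to visual angle (deg)."""
    return (math.degrees(math.atan2(x_cm, distance_cm)),
            math.degrees(math.atan2(y_cm, distance_cm)))


def in_blind_spot(x_deg: float, y_deg: float, eye: str = "right",
                  centre: tuple = (15.0, -1.5), radius: float = 2.7) -> bool:
    """True if a point (in deg, relative to fixation) falls inside the
    approximate blind spot of the given eye."""
    cx = centre[0] if eye == "right" else -centre[0]
    return math.hypot(x_deg - cx, y_deg - centre[1]) <= radius


# At 57 cm viewing distance, 1 cm on screen is roughly 1 deg of visual angle.
x_deg, y_deg = screen_to_deg(15.0, -1.5, 57.0)
print(in_blind_spot(x_deg, y_deg, eye="right"))  # True
```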

11 pages, 1795 KB  
Article
Visual Perception of Facial Emotional Expressions during Saccades
by Vladimir A. Barabanschikov and Ivan Y. Zherdev
Behav. Sci. 2019, 9(12), 131; https://doi.org/10.3390/bs9120131 - 27 Nov 2019
Cited by 8 | Viewed by 4374
Abstract
The regularities of visual perception of complex, ecologically valid objects during extremely brief exposures are studied. Images of a person experiencing basic emotions were displayed for as little as 14 ms during a saccade spanning 10 degrees of visual angle. The observers’ main task was to recognize the depicted emotion; a secondary task was to point at the perceived location of the photograph on the screen. The probability of correctly recognizing the emotion was shown to be above chance (0.62) and to depend on the type of emotion. Mislocalization of the stimuli and their compression in the direction of the saccade were also observed. According to the acquired data, complex, ecologically valid objects are perceived differently during saccades than isolated dots, lines, or gratings. The rhythmic structure of oculomotor activity (fixation–saccade–fixation) does not disrupt the continuity of visual processing. The perceptual genesis of facial expressions takes place not only during gaze fixation but also at the peak speed of rapid eye movements, both at the center of and in close proximity to the area of highest visual acuity.
(This article belongs to the Special Issue XVI European Congress of Psychology)

12 pages, 395 KB  
Article
It’s All About the Transient: Intra-Saccadic Onset Stimuli Do Not Capture Attention
by Sebastiaan Mathôt and Jan Theeuwes
J. Eye Mov. Res. 2012, 5(2), 1-12; https://doi.org/10.16910/jemr.5.2.4 - 2 May 2012
Cited by 1 | Viewed by 250
Abstract
An abrupt onset stimulus was presented while the participants’ eyes were in motion. Because of saccadic suppression, participants did not perceive the visual transient that normally accompanies the sudden appearance of a stimulus. In contrast to the typical finding that the presentation of an abrupt onset captures attention and interferes with the participants’ responses, we found that an intra-saccadic abrupt onset does not capture attention: It has no effect beyond that of increasing the set-size of the search array by one item. This finding favours the local transient account of attentional capture over the novel object hypothesis.

13 pages, 219 KB  
Article
Transsaccadic Scene Memory Revisited: A 'Theory of Visual Attention (TVA)' Based Approach to Recognition Memory and Confidence for Objects in Naturalistic Scenes.
by Melissa L.-H. Võ, Werner X. Schneider and Ellen Matthias
J. Eye Mov. Res. 2008, 2(2), 1-13; https://doi.org/10.16910/jemr.2.2.7 - 16 Dec 2008
Cited by 3 | Viewed by 238
Abstract
The study presented here introduces a new approach to the investigation of transsaccadic memory for objects in naturalistic scenes. Participants were tested with a whole-report task from which, based on the theory of visual attention (TVA), processing efficiency parameters were derived, namely visual short-term memory (VSTM) storage capacity and visual processing speed. By combining these processing efficiency parameters with transsaccadic memory data from a previous study, we were able to take a closer look at the contribution of VSTM storage capacity and processing speed to the establishment of visual long-term memory representations during scene viewing. Results indicate that VSTM storage capacity in particular plays a major role in the generation of transsaccadic visual representations of naturalistic scenes.
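
For readers unfamiliar with the TVA parameters mentioned above: in TVA-based whole-report modelling, items race with exponential processing rates, and only those finishing within the effective exposure, up to a VSTM capacity of K items, are encoded. The simulation below is a hedged illustration of that logic using common TVA notation (C, K, t0); it is not the study's fitting procedure, and all parameter values are invented for illustration.

```python
# Illustrative sketch of the TVA whole-report logic (exponential race with a
# VSTM capacity limit). Parameter names follow common TVA notation, but the
# simulation and all values are assumptions, not the study's analysis code.
import numpy as np


def simulate_whole_report(C: float = 30.0, K: float = 3.5, t0: float = 0.015,
                          exposure: float = 0.1, n_items: int = 6,
                          n_trials: int = 10000, seed: int = 0) -> float:
    """Mean number of items encoded per trial under a simple TVA race model."""
    rng = np.random.default_rng(seed)
    v = C / n_items  # equal attentional weights -> each item gets rate C / n
    # each item's finishing time: perceptual delay t0 plus an exponential race
    finish = t0 + rng.exponential(1.0 / v, size=(n_trials, n_items))
    encoded = (finish <= exposure).sum(axis=1)
    # fractional capacity K: floor(K) slots, one extra with prob K - floor(K)
    slots = int(np.floor(K)) + (rng.random(n_trials) < (K - np.floor(K)))
    return float(np.minimum(encoded, slots).mean())


for tau in (0.02, 0.05, 0.1, 0.2):
    print(f"{tau * 1000:.0f} ms exposure -> "
          f"{simulate_whole_report(exposure=tau):.2f} items")
```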