In this brief methodological report, we propose a simple method to perform intra-saccadic display manipulations (such as the double-step) without any eye-tracking system or display timing controls. The method addresses the timing challenges arising from the many steps required to update a display contingent on gaze. First, online access to gaze position data is delayed. This end-to-end sample delay includes not only the time taken for a physical event to be registered, processed, and made available online by the eye-tracking system (e.g., capturing an image of the eye, fitting the pupil and corneal reflection, and estimating gaze position), but also the time needed to retrieve the data via Ethernet, USB, or analog ports. Second, because a reliable, and thus often more conservative, criterion is needed to decide whether a saccade has been initiated, the onset of the saccade detected online usually lags behind the onset of the saccade detected offline. Henceforth, this delay will be referred to as the saccade detection latency. Third, once a saccade has been detected in the online data, the stimulus has to be drawn to the graphics card's back-buffer and the flip with the front-buffer has to be synchronized with the display's vertical retrace [1]. This detect-to-flip latency is determined by the refresh rate of the monitor and depends on when within the refresh cycle the detection occurs. Fourth, there is the flip-to-display latency, that is, the time from the execution of the flip until the physical stimulus presentation on the screen. Whereas the transfer of the entire video signal takes up to one frame duration, the display's own response time can further increase the flip-to-display latency and introduce temporal jitter. Because both these gaze-contingent display latencies and saccade profiles (e.g., durations) are subject to considerable variance, the risk increases that the change occurs after, rather than during, the saccade. Failure to acknowledge or control these latencies can therefore lead to erroneous results and unwarranted conclusions.
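To make this argument concrete, the sketch below sums the four components into a rough worst-case latency budget. The function name and all default values are illustrative assumptions, not measured figures or part of the method:

```python
# Minimal sketch: summing the latency components described above to estimate
# the worst-case end-to-end delay of a gaze-contingent display change.
# All numbers are illustrative placeholders, not measurements.

def worst_case_latency_ms(sample_delay=3.0,        # eye-tracker end-to-end sample delay
                          detection_latency=10.0,  # online vs. offline saccade-onset lag
                          refresh_rate_hz=120.0,   # monitor refresh rate
                          display_response=5.0):   # display's own response time
    frame = 1000.0 / refresh_rate_hz
    # Detect-to-flip: in the worst case the saccade is detected just after a
    # retrace, so the flip waits for almost a full frame.
    detect_to_flip = frame
    # Flip-to-display: transferring the video signal takes up to one frame,
    # plus the display's reaction time.
    flip_to_display = frame + display_response
    return sample_delay + detection_latency + detect_to_flip + flip_to_display

if __name__ == "__main__":
    total = worst_case_latency_ms()
    print(f"Worst-case change latency: {total:.1f} ms")
    # A saccade typically lasts roughly 20-80 ms, so a total latency of this
    # magnitude can easily push the display change past saccade offset.
```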
We can avoid these complexities by exploiting our natural monocular blind spot to introduce gaze-contingent display changes. This new paradigm permits the study of trans-saccadic manipulations with few or no timing constraints. To do so, we simply use the natural blind spot observed in monocular vision as a self-generated saccade detector that produces an accurately timed intra-saccadic stimulus change. The blind spot corresponds to the optic disk on the retina, where blood vessels and ganglion-cell axons converge to form the optic nerve, which leads away from the eyeball to the brain. For this reason, the optic disk contains no photoreceptors (rods or cones), and thus no visual events can be registered within the blind spot [2,3,4]. The blind spot is located about 12–15° temporally and 1.5° below the horizontal meridian and is roughly 7.5° high and 5.5° wide, with some inter-individual variability. Although a stimulus on the display is not perceived while it falls within the blind spot, it becomes detectable as soon as the blind spot moves off its location during the eye movement, eliminating the need to detect the saccade online and update the display.
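As a concrete illustration, the sketch below computes where a to-be-hidden target would be drawn so that it falls inside the blind spot during central fixation. The blind-spot coordinates, viewing distance, and pixel density are assumed example values; in practice, each participant's blind spot should be mapped individually before the experiment:

```python
# Minimal sketch: computing where to draw a target so that it falls inside the
# blind spot during central fixation (right-eye monocular viewing assumed, so
# the blind spot lies in the right, temporal visual field). The blind-spot
# coordinates, viewing distance, and pixel density are example values only.

import math

# Assumed blind-spot centre relative to fixation, in degrees of visual angle.
BS_HORIZONTAL_DEG = 14.0   # temporal eccentricity (to the right of fixation)
BS_VERTICAL_DEG = -1.5     # below the horizontal meridian (y increases upward)

def deg_to_px(deg, viewing_distance_cm=57.0, px_per_cm=38.0):
    """Convert a visual angle in degrees to pixels for a given setup."""
    size_cm = 2.0 * viewing_distance_cm * math.tan(math.radians(deg) / 2.0)
    return size_cm * px_per_cm

def hidden_target_position_px(fixation_px):
    """Screen position (in pixels) of a target centred on the blind spot.
    Assumes a coordinate system with y increasing upward; flip the sign of
    the vertical offset if y increases downward."""
    fx, fy = fixation_px
    return (fx + deg_to_px(BS_HORIZONTAL_DEG),
            fy + deg_to_px(BS_VERTICAL_DEG))

if __name__ == "__main__":
    x, y = hidden_target_position_px((960, 540))   # fixation at screen centre
    print(f"Hidden target at ({x:.0f}, {y:.0f}) px")
```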
As an example, we describe a version of the double-step task using this paradigm (see Figure 1). Participants initially fixate a central point and then move their gaze to a visible target. Shortly after the first saccade begins, the previously hidden second target becomes visible because the blind spot has moved off its location. After landing on the first target, participants then make a saccade to this newly visible point. This use of the blind spot offers the opportunity to explore diverse trans-saccadic manipulations in a simple setup, such as saccadic adaptation (using a flashed target outside the blind spot), saccadic inhibition (with masked stimuli within the blind spot), and visual remapping (with the remapped object, or part of it, in the blind spot). Furthermore, although the blind spot encodes no visual information, one never perceives an odd dark or blank area there. Instead, one sees a complete scene of the world even when viewing monocularly [5].
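A minimal sketch of one trial of this blind-spot double-step task is given below, written here with PsychoPy as an assumed stimulus library (any toolbox would do). Positions, sizes, and durations are placeholders, and the second target is simply placed at the participant's previously mapped blind-spot location; note that no eye tracker and no gaze-contingent update are required, since both targets are physically present for the whole trial:

```python
# Minimal sketch of one blind-spot double-step trial (PsychoPy assumed).
# Right-eye monocular viewing is assumed, and the second target is placed at
# an illustrative blind-spot position that should be mapped per participant.

from psychopy import visual, core, event

win = visual.Window(size=(1920, 1080), fullscr=True, units='deg',
                    monitor='testMonitor', color='grey')

fixation = visual.Circle(win, radius=0.15, pos=(0, 0), fillColor='black')
target1 = visual.Circle(win, radius=0.25, pos=(8, 0), fillColor='black')
# Hypothetical blind-spot centre (~14 deg temporal, 1.5 deg below fixation).
target2 = visual.Circle(win, radius=0.25, pos=(14, -1.5), fillColor='black')

# 1) Fixation period.
fixation.draw()
win.flip()
core.wait(1.0)

# 2) Both targets appear. Target 2 sits inside the blind spot, so it remains
#    invisible until the first saccade (to target 1) moves the blind spot
#    off its location -- no online saccade detection is needed.
target1.draw()
target2.draw()
win.flip()
core.wait(2.0)   # time window for the two saccades; eye recording optional

# End of trial: clear the screen and wait for a key press before closing.
win.flip()
event.waitKeys()
win.close()
core.quit()
```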