Article

Does Vergence Affect Perceived Size?

Centre for Applied Vision Research, University of London, Northampton Square, Clerkenwell, London EC1V 0HB, UK
Vision 2021, 5(3), 33; https://doi.org/10.3390/vision5030033
Submission received: 4 October 2020 / Revised: 14 May 2021 / Accepted: 24 May 2021 / Published: 22 June 2021
(This article belongs to the Special Issue Size Constancy for Perception and Action)

Abstract

Since Kepler (1604) and Descartes (1637), it has been suggested that ‘vergence’ (the angular rotation of the eyes) plays a key role in size constancy. However, this has never been tested divorced from confounding cues such as changes in the retinal image. In our experiment, participants viewed a target which grew or shrank in size over 5 s. At the same time, the fixation distance specified by vergence was reduced from 50 to 25 cm. The question was whether this change in vergence would affect participants’ judgements of whether the target grew or shrank in size. We found no evidence of any effect, and therefore no evidence that eye movements affect perceived size. If this is correct, then our finding has three implications. First, perceived size is much more reliant on cognitive influences than previously thought. This is consistent with the argument that visual scale is purely cognitive in nature (Linton, 2017; 2018). Second, it leads us to question whether the vergence modulation of V1 contributes to size constancy. Third, given the interaction between vergence, proprioception, and the retinal image in the Taylor illusion, it leads us to ask whether this cognitive approach could also be applied to multisensory integration.

1. Introduction

As objects move forwards or backwards in space, the image they cast on the retina varies drastically in size. And yet, objects do not appear to change dramatically in size when they move closer or further away. This suggests that there is a neural mechanism (‘size constancy’) that compensates for the drastic changes in the retinal image caused by changes in distance (for a review, see [1]).
We can distinguish between two kinds of visual cues for size constancy: 1. pictorial cues, which are present in the static monocular retinal image (such as familiar size and perspective) and account for the impression of size constancy in pictures, and 2. triangulation cues, which typically rely on introducing multiple viewpoints either simultaneously (vergence and binocular disparities) or consecutively (motion parallax).
1. Pictorial Cues: The neural correlates of size constancy are better understood for pictorial cues since 2D pictures are more easily presented to participants in fMRI scanners (e.g., [2]). However, pictorial cues are neither necessary nor sufficient for size constancy. First, pictorial cues are unnecessary because, as we shall discuss below, observers in the Taylor illusion appear to experience something close to full size constancy from vergence alone. Second, pictorial cues are insufficient because, as [3] observe, size constancy in pictures is merely a fraction (10–45%) of the size constancy experienced in the real world ([2,4]; Ref. [5] estimates 10–30%; see also [6] for a recent attempt to disambiguate static monocular and binocular size constancy, although recognising vergence may have influenced their monocular results).
To spell out this claim, consider the kind of stimulus used in [2] (Figure 1). First, in Figure 1, we judge the further ball to be many times larger in physical size than the nearer ball. Second, we also judge the further ball to take up more of the picture than the nearer ball even though they have the same angular size in the picture. This second, apparent increase in angular size is the phenomenon that size constancy is concerned with. In [2], the further ball was judged to be 17% larger in angular size than the nearer ball.
When reviewing the literature, we have to keep this distinction between perceived physical size and perceived angular size in mind. For instance, [3] contrast the partial (10–45%) size constancy in pictures with the full (100%) size constancy experienced in the real world according to Emmert’s Law [7]. However, Emmert’s Law is a claim about perceived physical size, not perceived angular size. Indeed, the essence of Emmert’s Law is the claim that perceived physical size varies proportionately with perceived physical distance when the perceived angular size is fixed. By contrast, an angular size interpretation of Emmert’s Law would falsely imply that objects do not reduce in perceived angular size with distance, as if we view the world in orthographic projection.
2. Triangulation Cues: Keeping the distinction between physical size and angular size in mind, vergence (the angular rotation of the eyes) is thought to make an important contribution to the perceived angular size of objects, known as vergence size constancy. In terms of triangulation cues to size constancy (vergence, accommodation, binocular disparity, motion parallax, defocus blur), the emphasis has been on vergence.
Motion parallax is neither necessary for size constancy (full size constancy is observed by participants in an fMRI scanner in [3]) nor is there strong evidence that motion parallax contributes to size constancy ([8] only marginally qualify “the common notion that size constancy emerges as a result of retinal and vergence processing alone”).
Binocular disparity is typically regarded as providing relative depth, not absolute size and distance. Still, considering size constancy only requires relative changes in distance to be matched by relative changes in apparent size (distance1 ÷ distance2 = size1 ÷ size2), a relative depth cue could suffice. However, the problem is that binocular disparity does not even provide relative depth information until it has been scaled by absolute distance information, which is typically assumed to come from vergence. As [9] observe, a 1° change in retinal disparity could equally reflect a change in distance from 20 to 21 cm (5% increase in distance) or 2 to 4 m (100% increase in distance).
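To make this ambiguity concrete, the small-angle geometry can be checked with a few lines of Python. This is an illustrative sketch only; the 6.3 cm interocular separation is our assumption, not a value taken from [9].
import math
def relative_disparity_deg(d_near_cm, d_far_cm, ipd_cm=6.3):
    # Relative disparity (deg) between two points straight ahead of the observer,
    # taken as the difference between the two vergence angles (small-angle approximation).
    return math.degrees(ipd_cm / d_near_cm - ipd_cm / d_far_cm)
# Very different changes in absolute distance can produce roughly the same relative disparity:
print(relative_disparity_deg(20, 21))    # ~0.9 deg for a 20 -> 21 cm step (5% increase in distance)
print(relative_disparity_deg(200, 400))  # ~0.9 deg for a 2 -> 4 m step (100% increase in distance)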
There is also a deeper conceptual point. Although in [10,11] I argue that visual processing is divorced from absolute size and distance, orthodox discussions typically articulate size constancy in terms of the visual system using absolute distance to determine perceived size. However, as we already mentioned, binocular disparity is typically thought of as being a merely relative depth cue outside very limited circumstances (objects taking up at least 20° of the visual field; [12]). Instead, vergence is typically cited as being one of our most important absolute distance cues at near distances ([13,14]; although I review and challenge this consensus in [15]).
Kepler in 1604 [16] and Descartes in 1637 [17] were the first to suggest that the visual system uses vergence to scale the size of the retinal image. Evidence for vergence size constancy has come from four specific contexts where it has been found that changing the vergence angle affects the perceived size of objects (so-called ‘vergence micropsia’):
1. Wallpaper Illusion: Before the invention of the stereoscope by Wheatstone in 1838 [18], the earliest evidence of vergence micropsia was the ‘wallpaper illusion’, the observation that if you cross your eyes whilst looking at a recurring wallpaper pattern, the wallpaper pattern appears smaller and closer (see Smith in 1738 [19]; Priestley in 1772 [20]; Goethe in 1810 [21], Meyer in 1842 and 1852 [22,23]; Brewster in 1844 [24]; Locke in 1849 [25], discussed more recently by [26,27,28]; see [29] for a review).
2. Stereoscopic Viewing: The invention of the stereoscope by Wheatstone in 1838 [18] (which presents separate images to each eye) enabled the eyes to be rotated independently of the retinal image. Wheatstone observed that if eye rotation was increased, the perceived image appeared to shrink, even though the images shown to each eye remained fixed ([30]; see also Helmholtz in 1866 [31], p. 313; as well as [32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50]).
3. Telestereoscopic Viewing: Building on Wheatstone’s stereoscope, Helmholtz invented the telestereoscope in 1857 [51,52], and observed that if we use mirrors to artificially increase the distance between the two eyes, the world appears miniaturised. In his Treatise on Physiological Optics, he observed that “it will seem as if the observer were not looking at the natural landscape itself, but a very exquisite and exact model of it, reduced in scale” ([31], p. 312). This effect has been attributed to vergence by Helmholtz ([51,52]; [31], p. 310) and Rogers ([53,54]), since the eyes need to rotate more to fixate on the same physical distance (cf. my alternative account in [11], discussed below), and has been extensively studied in military research (where helicopter pilots often view the world through cameras with increased interpupillary separation, see [55,56,57,58,59,60]).
4. Taylor Illusion: Vergence is also thought to be central to the integration of hand motion and the retinal image in the Taylor illusion. If you make an after-image of your hand with a bright flash, and then in complete darkness move your hand closer to your face, the after-image of your hand appears to shrink even though it is fixed in size on the retina [61]. The best current explanation for the Taylor illusion is that it is due ([61,62,63]) or almost entirely due ([64]) to the increase in vergence as the eyes track the physical hand moving in darkness (see also [65,66,67,68,69]; and for vergence scaling of after-images see [70,71,72,73]). Importantly, when [64] moved the participant’s hand and vergence in opposite directions, they found that (a) the after-image size changed in the direction of vergence, not the hand movement, and (b) the magnitude of the size change when vergence and the hand were in conflict was almost as large as when both the hand and vergence were moving in the same direction.
Surveying the literature on vergence micropsia, two things are striking. First, to our knowledge, there has never been a report of a failure of vergence micropsia within peripersonal space (near distances corresponding to arm’s reach). Even on the rare occasions when a change in vergence fails to provide an impression of motion in depth (for instance, when motion in depth is vetoed by a stimulus that takes up the whole visual field) as in [46], the authors still report “apparent size changes as about threefold when convergence changed from about 0 deg to 25 deg”, observing: “Changes in size and depth produced by ocular vergence changes are well known”.
Second, the after-image literature appears to suggest that vergence provides something close to perfect size constancy for distances between 25 and 50 cm. This conclusion follows for two reasons. First, size constancy appears close to perfect for 25–50 cm when vergence is the only distance cue. Apparent size doubled for the representative subject in [64] (incongruent condition) from 3.3 cm at 25 cm (suggested by the y = −0.61x + 3.3 line of best fit) to 6.3 cm at 50 cm (average of size estimates after a >3° vergence eye movement) (my analysis of their Figure 5 using WebPlotDigitizer 4.2; [74]). Second, the same conclusion follows from combining the fact that (a) the Taylor illusion provides near perfect size constancy in this distance range [64,65,69] with the fact that (b) the Taylor illusion can be attributed almost entirely to vergence [64].
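As a rough back-of-envelope check, these two figures can be compared with the full Emmert’s Law prediction, under which halving the fixation distance should halve the reported size (this sketch simply re-uses the numbers quoted above):
size_at_25cm = 3.3   # cm, implied by the y = -0.61x + 3.3 line of best fit quoted above
size_at_50cm = 6.3   # cm, average size estimate after the >3 degree vergence change
observed_ratio = size_at_50cm / size_at_25cm     # ~1.91
predicted_ratio = 50.0 / 25.0                    # 2.0 under full size constancy
print(observed_ratio / predicted_ratio)          # ~0.95, i.e., close to perfect size constancy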
Vergence size constancy is therefore regarded as a fundamental aspect of visual perception. However, we believe that vergence size constancy should be re-evaluated for two reasons:
First, our recent work suggests that vergence is an ineffective absolute distance cue. We find participants are unable to use vergence to judge absolute distance once confounding cues have been controlled for [15], and we are reluctant to embrace the possibility (raised by [75,76]) that vergence might still be an effective size constancy cue, even if it proves to be an ineffective absolute distance cue.
Second, one surprising fact is that, to the best of our knowledge, vergence size constancy has never been tested divorced from confounding cues (changes in the retinal image, such as diplopia or retinal slip, or changes in hand position) which inform the observer about changes in distance. The reason for this is easy to appreciate. Vergence can only be driven in one of two ways. Either participants track the retinal slip of a visual object moving in depth (such as an LED [62,64]) in which case participants are informed about the change in distance by binocular disparity (the apparent retinal slip of the stimulus as it moves in depth), or participants track their own hand moving in depth (as in the Taylor illusion), but this gives them proprioceptive information about the changing distance instead. The purpose of our experiment was therefore to test vergence size constancy in a context where it is known to be effective (vergence changes over 5 s from 25 to 50 cm [64]), but in a way that controls for subjective knowledge about changing distance.

2. Materials and Methods

Participants viewed two targets on a display fixed 160 cm away through two metal occluders that ensured that the left eye only saw the right target and the right eye only saw the left target. We were therefore able to manipulate the vergence distance specified by the targets (indicated by the arrow in Figure 2) by changing the separation between the targets on the fixed display.
Before the experiment began, participants rotated the metal plates to ensure that they only saw one target in each eye, and were asked to report whether the targets ever appeared double. Our apparatus therefore relied upon (a) manipulating the vergence demand (the vergence specified distance of the stimulus), coupled with (b) a subjective criterion (diplopia; whether the target went double) to ensure the vergence response was effective. No participant reported that they experienced diplopia during the experiment, although (as we discuss below) 1 participant was excluded from the outset because they could not fuse the target, and 2 participants because they experienced the target as blurry.
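To illustrate how the separation of the two targets on a fixed display controls the vergence-specified distance, the following sketch gives the crossed-fusion geometry for a display at 160 cm. It is an idealised, symmetric simplification of our apparatus that ignores the occluders, and the 6.3 cm interocular separation is an assumed average rather than a per-participant measurement:
def crossed_target_separation_cm(vergence_distance_cm, display_distance_cm=160.0, ipd_cm=6.3):
    # Horizontal separation between the two on-screen targets such that the
    # left-eye/right-target and right-eye/left-target lines of sight cross at
    # the requested vergence distance (similar triangles).
    d, D, I = vergence_distance_cm, display_distance_cm, ipd_cm
    return I * (D - d) / d
print(crossed_target_separation_cm(50))  # ~13.9 cm separation for a 50 cm vergence demand
print(crossed_target_separation_cm(25))  # ~34.0 cm separation for a 25 cm vergence demand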
Quantifying the effect of vergence on the perceived size of the target posed a complex technical challenge which we resolved in five ways:
First, we built an objective estimate of size change into the target itself. The vergence size constancy literature typically relies on viewing an after-image (which has a fixed retinal size) and then asking participants to subjectively evaluate the change in perceived size after a vergence change by (1) asking participants to match their visual experience to (a) a visible chart [64,65,70] or (b) a memorised chart [69], or (2) by asking participants to make a conscious size judgement of (a) the physical size of the after-image [62], or (b) the after-image’s % size change [66]. We wanted to eradicate this highly cognitive and subjective evaluative process with a more objective criterion of the target size change.
We quantified the effect of vergence on perceived size by increasing or decreasing the physical size of the target during the vergence change, and testing at what change in physical size participants were at chance in determining whether the target got larger or smaller. The target (illustrated in Figure 3) consisted of two horizontal bars connected by a vertical bar. It was 3° in angular size (width and height) at the start of the trial. On each trial, the vergence specified distance of the target reduced from 50 to 25 cm over 5 s. At the same time, the physical size of the target on the display increased or decreased by between −20% and +20%, and participants had to make a forced choice using a button press whether the target got bigger or smaller (“did the target get bigger or smaller?”).
To summarise, over the 5 s of each trial, we made two simultaneous changes to the targets. First, we changed the physical separation of the targets on the display to change the vergence specified distance of the target (Figure 2). Second, at the same time, we increased or decreased the physical size of the targets on the display by between −20% and +20% (Figure 3).
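The structure of a single trial can be sketched as two simultaneous ramps. The linear time course and 60 Hz frame rate in this sketch are illustrative assumptions; only the endpoints (50 to 25 cm over 5 s, and a size change between −20% and +20%) are taken from the description above:
import numpy as np
def trial_schedule(size_change_pct, duration_s=5.0, frame_rate=60):
    # Per-frame vergence-specified distance (cm) and physical scale factor for one trial.
    t = np.linspace(0.0, 1.0, int(duration_s * frame_rate))
    vergence_distance_cm = 50.0 - 25.0 * t            # ramps from 50 cm down to 25 cm
    scale = 1.0 + (size_change_pct / 100.0) * t       # ramps to the requested size change
    return vergence_distance_cm, scale
distances, scales = trial_schedule(size_change_pct=+10)   # a trial where the target grows by 10%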
Second, we wanted the target to be presented in darkness to exclude any residual visual cues. An interesting finding from piloting was that the usual technique (having participants view CRTs through neutral density filters) was not effective in eradicating residual luminance from the display (it degraded the target before residual luminance was completely eradicated). Instead, we achieved this in four ways. First, we used an OLED display (LG OLED55C7V), which, unlike normal displays, does not produce residual luminance for black pixels. Second, subjects wore a mask to block out any residual light, which had red eye filters through which the red stimuli were viewed (blocking out 100% green and ~90% blue light). Third, subjects viewed the stimuli through a narrow (17°) viewing window of 48 cm × 18 cm at a distance of 60 cm. Fourth, the whole apparatus was covered by blackout fabric, and before the experiment began subjects pulled a hood of blackout fabric over their heads and the external lights were turned off. A photograph and cross-section plans of the apparatus are provided in Figure 4.
Third, in order to drive vergence without providing subjective distance information, we used a visual stimulus that (unlike an LED) provided ‘sub-threshold’ binocular disparities: binocular disparities that are visible to the participant’s visual system (in order to drive vergence), but subjectively invisible to the participant themselves. This we achieved with a 3° target moving in depth from 50 to 25 cm over 5 s. We expected that using a target rather than an LED would have this effect for two reasons. First, being slightly larger on the retina, it was likely to improve vergence tracking of the target. Second, any remaining retinal slip would be less discriminable against a slightly larger target. Although this motion in depth is gradual (equivalent to an average speed of 5 cm/s, corresponding to an average vergence angle change of 1.4°/s), this is consistent with the changes in vergence in [64] (25 to 50 cm over 5 s), where close to perfect vergence size constancy was previously reported.
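The 1.4°/s figure follows directly from the vergence geometry; a minimal check (again assuming a 6.3 cm interocular separation) is:
import math
def vergence_angle_deg(distance_cm, ipd_cm=6.3):
    # Vergence angle (deg) for symmetric fixation at a given distance.
    return 2.0 * math.degrees(math.atan(ipd_cm / (2.0 * distance_cm)))
total_change_deg = vergence_angle_deg(25) - vergence_angle_deg(50)   # ~7.2 deg
print(total_change_deg / 5.0)                                        # ~1.4 deg/s over the 5 s trial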
Fourth, in order to present a constant retinal image with eye rotation, we rendered the targets to maintain a constant radius from, and orientation to, the eye. This was achieved in OpenGL by ‘projecting’ the target onto the display, so that the correct retinal image was achieved when the participants viewed the target (Figure 5) (camera set in OpenGL to the nodal point of the eye, and an asymmetric frustum was used so that the far clipping plane matched the distance and dimensions of the display). A bite bar was used to ensure that the nodal point of the eye remained fixed during the experiment (Figure 4), and the 6 mm difference between the nodal point and the centre of rotation of the eye was intentionally ignored (cf. [77,78]).
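The core of this rendering step is an off-axis perspective projection from each eye’s nodal point onto the display plane. The following toy function shows the idea for a single vertex; it is a simplified stand-in for the OpenGL asymmetric frustum, with the nodal point assumed to sit 3.15 cm from the midline:
def project_point_to_display(point_xyz_cm, eye_x_cm, display_z_cm=160.0):
    # Perspective projection of a 3D point (head-centred cm coordinates, eye at
    # (eye_x_cm, 0, 0), display plane at z = display_z_cm) from the eye's nodal point.
    # Drawing every vertex of the target this way reproduces the intended retinal
    # image on a display that is much further away than the simulated target.
    px, py, pz = point_xyz_cm
    s = display_z_cm / pz                       # similar-triangles scale factor
    return (eye_x_cm + (px - eye_x_cm) * s, py * s)
# A vertex of a target simulated 25 cm away, 1 cm right of the midline, rendered for the left eye:
print(project_point_to_display((1.0, 0.0, 25.0), eye_x_cm=-3.15))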
Fifth, another challenge of this display is that it requires the eyes to focus (or ‘accommodate’) at the distance of the display (160 cm), whilst vergence (the angular rotation of the eyes) is at 25–50 cm. This decoupling of vergence and accommodation does not happen in normal viewing conditions, and too much vergence–accommodation conflict can lead to the target going blurry or double [79]. To solve this problem we had an optometrist fit each participant with contact lenses (based on the participant’s valid UK prescription) so that the optical distance of the display was 33 cm even though its physical distance was 160 cm. This ensured a maximum of +/− 1 dioptres of vergence–accommodation conflict, well within the zone of ‘clear single binocular vision’ [79]. Some of the most dramatic reports of vergence micropsia have been in the presence of large vergence–accommodation conflicts (e.g., 6.5 dioptres in [46]), so the presence of +/− 1 dioptre should not be objectionable.
We used a four-parameter maximum likelihood model (Quest+ [80,81]) to estimate when participants were at chance. Participants completed 200 trials (10 sets of 20 trials), and on each trial Quest+ tested the size change that would be most informative. In piloting, we found that the author (an experienced psychophysical observer) could not detect size changes over 5 s that were smaller than 1.5%. So, if vergence changes the perceived size of the target by less than 1.5%, vergence size constancy can be dismissed as smaller than the smallest effect size of interest under an inferiority test [82]. In our actual experiment this was revised down to 1.43%, the detection threshold of our most sensitive observer.
Assuming vergence does not bias size judgements, how would the number of participants affect our ability to determine this fact? We can simulate the experiment 10,000 times (bias = 0, detection threshold = 5%, lapse rate = 2%), and fit a hierarchical Bayesian model (discussed below) to the data (Figure 6). We found that with 5 or more observers we should be able to rule out any size constancy effects greater than 1.5%.
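A stripped-down version of one simulated observer is sketched below. The four-parameter logistic form and the mapping from ‘detection threshold’ to the logistic slope are assumptions made for illustration, and stimuli are drawn uniformly here rather than adaptively by Quest+:
import numpy as np
rng = np.random.default_rng(1)
def p_bigger(size_change_pct, bias=0.0, threshold=5.0, lapse=0.02):
    # Four-parameter logistic psychometric function: probability of responding "bigger".
    core = 1.0 / (1.0 + np.exp(-(size_change_pct - bias) / (threshold / 2.0)))
    return lapse + (1.0 - 2.0 * lapse) * core
def simulate_observer(n_trials=200):
    # One simulated observer with no vergence-induced bias (bias = 0).
    size_changes = rng.uniform(-20, 20, n_trials)          # Quest+ would choose these adaptively
    responses = rng.random(n_trials) < p_bigger(size_changes)
    return size_changes, responses
sizes, said_bigger = simulate_observer()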
A total of 11 observers (8 female, 3 male; age range 20–34, average age 24.5) participated in the experiment: the author and 10 participants recruited using an online advertisement (13 were originally recruited, but 1 was excluded because they could not fuse the target, and 2 were excluded because their vision was blurry with the contact lenses). All participants were screened to ensure their accommodation was within normal bounds for their age (tested with a RAF near-point rule), vergence within normal bounds (18D or above on a Clement Clarke prism bar), and stereoacuity within normal bounds (60 arc secs or less on a TNO stereo-test). The author’s participation was required to (a) confirm Quest+ mirrored the pilot data, and (b) provide a criterion for the minimum effect size. All other subjects were naïve as to the purpose of the experiment, and were paid £15/h for 3 h. This study was approved by the School of Health Sciences Research Ethics Committee at City, University of London in accordance with the Declaration of Helsinki. The code for running the experiment is openly available: https://osf.io/5nwaz/ (accessed on 30 September 2020), running on Matlab 2019a (MathWorks) with PsychToolBox 3.0.15 [83].

3. Results

Let us consider what we would expect to find according to (a) the null hypothesis (vergence has no effect on perceived size) and (b) the alternative hypothesis (vergence has an effect on perceived size). If vergence has no effect on perceived size, then participants should be at chance at determining whether the target got bigger or smaller when we do not introduce a size change (bias = 0). By contrast, if participants experience something close to full size constancy, then we would have to increase the size of the target by 100% in order to cancel out the reduction in perceived size caused by vergence micropsia (which would equate to a 50% reduction in size, because halving the distance leads to a doubling of the retinal image size, assuming the small angle approximation).
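One way to make this prediction explicit is as a toy model in which, under full vergence size constancy, perceived angular size is the retinal size scaled by the current vergence distance relative to the start of the trial (this formulation is ours, for illustration only):
def perceived_size(physical_scale, vergence_distance_cm, full_constancy=True, start_distance_cm=50.0):
    # Toy model: retinal size tracks the physical scale of the target (it is rendered
    # at a fixed angular size), and full constancy scales it by the vergence distance.
    gain = (vergence_distance_cm / start_distance_cm) if full_constancy else 1.0
    return physical_scale * gain
print(perceived_size(1.0, 25.0))   # 0.5: an unchanged target should appear to halve in size
print(perceived_size(2.0, 25.0))   # 1.0: a +100% physical change would be needed to null the effect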
These two hypotheses, and various intermediate degrees of size constancy between these two extremes, are plotted in Figure 7A. What the hierarchical Bayesian model of our results in Figure 7A shows is that the bias we found was ≈ 0, consistent with there being no vergence size constancy.
To explain our conclusion, the individual results are plotted in Figure 7C. Each blue dot represents a physical size change that was tested by the Quest+ maximum likelihood model, and the darkness of the dot indicates the number of times it was tested. It is important to keep in mind that with Quest+ no one data point should be viewed in isolation. The underlying assumption is that Quest+ is fitting a psychometric function to the data. On each trial, Quest+ tests the physical size change that would be most informative to the four parameters of the psychometric function being estimated (slope, bias, floor, and ceiling). If Quest+ tests one physical size change a few times and finds performance at 100%, and tests an almost identical size change and finds performance at 0%, the interpretation is that the actual response is somewhere between these two extremes. Put simply, from a Bayesian modelling perspective, there is nothing to suggest that these changes between neighbouring data points reflect wild variation in participants’ responses.
We fit each of the individual sets of data with a four-parameter logistic Bayesian psychometric function which estimates the slope, bias, floor, and ceiling of the psychometric function (indicated with a black line), using the Palamedes Toolbox 1.10.1 [84,85] with CmdStan 2.22.0, using the toolbox’s standard priors (bias and slope: normal (0,100), upper and lower lapse rates: beta (1,10)), based on 15,000 posterior estimates (100 posterior estimates illustrated in red in Figure 7C). Figure 7C shows individual biases range from −2.2% to +1.2%, but cluster around 0, as we would expect if vergence is having no effect on participants’ size judgements. Put simply, participants are perceiving the larger stimuli as larger and the smaller stimuli as smaller, despite the changes in vergence.
To estimate the population level psychometric function illustrated in Figure 7A, we used the Palamedes Toolbox 1.10.1 [84,85] with CmdStan 2.22.0 to fit a four-parameter logistic hierarchical Bayesian psychometric function, which fits the data with a multilevel model that takes into account the variability of each subject. We used the toolbox’s standard multilevel priors (which are documented in [85]) and, based on 15,000 posterior estimates (100 posterior estimates illustrated in red in Figure 7C), found a population level bias of −0.219% (95% CI: −1.82% to 1.39%) and a population level slope of −0.732 (95% CI: −1.07 to 0.378).
The estimate that particularly interests us is the population bias, so in Figure 7B we provide a probability density function of the 15,000 posterior estimates of the bias. We find no statistically significant bias, and therefore no statistically significant effect of vergence on perceived size. Indeed, the non-significant bias of −0.2% is in the wrong direction for size constancy.
To go beyond the negative claim that we found no statistically significant effect (null hypothesis not rejected) to the positive claim that there is no effect of vergence on perceived size (null hypothesis accepted), we can make two further arguments.
First, from a Bayesian perspective, we can perform a JZS Bayes factor [86]. The estimated Bayes factor that we found was 3.99 (±0.03%), which suggests that the data are four times more likely under the null hypothesis (bias = 0) than under the alternative (bias ≠ 0).
Second, from a frequentist perspective, we can perform an inferiority test that tests whether, if there is a vergence size constancy effect, it is at least as large as the smallest effect size of interest [82]. Recall that we defined our smallest effect size of interest as the detection threshold of our most sensitive observer (1.43%). Put simply, any vergence size constancy effect that is smaller than a 1.43% size change will not be detected by any of our observers. Since we have a directional hypothesis (vergence micropsia should reduce, rather than increase, the apparent size of the target), we specifically tested whether there is a bias > 1.43%. We therefore performed an inferiority test by taking the 90% confidence interval of the population bias in Figure 7B in the predicted direction, which is 0.96%. Since this is smaller than 1.43% (our smallest effect size of interest), from a frequentist perspective, we can conclude that any vergence size constancy effect is effectively equivalent to zero [82].
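For readers who want the mechanics, the inferiority check reduces to comparing one bound of the posterior with the smallest effect size of interest. The snippet below runs the test on a stand-in normal posterior (the real analysis used the 15,000 posterior samples from the hierarchical model); treating a positive bias as the direction predicted by micropsia follows the text above:
import numpy as np
def inferiority_test(posterior_bias_pct, sesoi_pct=1.43):
    # One-sided equivalence check: the effect is treated as negligible if the 90% CI
    # bound in the predicted direction (95th percentile) is below the smallest
    # effect size of interest.
    bound = np.percentile(posterior_bias_pct, 95)
    return bound < sesoi_pct, bound
stand_in_posterior = np.random.default_rng(0).normal(-0.219, 0.8, 15000)   # illustrative only
print(inferiority_test(stand_in_posterior))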

4. Discussion

According to the literature, “it is well known that vergence is a reliable source of depth information for size constancy” [64]. However, we find no evidence that vergence makes any contribution to perceived size.
To our knowledge, ours is the first study to report a failure of vergence size constancy at near distances. However, ours is also the first study that controls for confounding perceptual cues (changes in the retinal image) whilst also controlling for confounding cognitive cues (keeping subjects naïve about changes in absolute distance).
A number of alternative interpretations of our results have been put to us. In Section 4.1, Section 4.2, Section 4.3, Section 4.4, Section 4.5 and Section 4.6 we explore these alternative explanations. Whilst we cannot definitively rule out these alternative explanations, we will go on to explain why we do not believe that they give the most plausible interpretation of our results.

4.1. Eye Tracking

We specifically chose not to employ eye tracking in our experiment for four reasons. First, [87] find that readily available research eye trackers “are not accurate enough to be used to determine vergence, distance to the binocular fixation point and fixation disparity”, with errors of up to 2.5°. Second, eye tracking was impractical given our use of parallax barriers, making a clear view of both eyes (for the eye tracker) and the calibration targets (for the observer) impossible. Third, given that the participants were fitted with contact lenses to set their accommodation at 33 cm whilst the display was at 160 cm, ordinary techniques for eye tracking calibration (having participants converge on points on the display 160 cm away) could not be employed. Fourth, given our thesis that the vergence size constancy literature reflects cognitive influences rather than truly perceptual effects, we were understandably reluctant to introduce any procedures that would inform participants about the mechanisms underpinning the apparatus, or that eye movements were important for the task being performed.
One suggestion is that the absence of eye tracking leaves open the possibility that participants were not converging during the experiment. Whilst this possibility cannot be excluded, we did employ a subjective criterion for binocular fusion, namely whether the participant reported the target going double. Ref. [88] find that for a 3° target (which ours was, on average), the average maximum disparity participants fail to detect is 2.5° or less. So our subjective criterion of binocular fusion should provide comparable accuracy to eye tracking, where the 2D gaze literature [89,90,91,92] and the 3D gaze literature [87] report errors of a similar magnitude (up to ≈2.5°).
Admittedly, there have been attempts to improve eye tracking accuracy. First, since eye tracking errors are caused by changes in luminance affecting pupil size, one approach is to keep luminance fixed. However, it is not clear that controlling luminance is the right approach for our experiment. If we controlled luminance in our experiment, as the target got larger it would have to get darker, and as the target got smaller it would have to get brighter. Second, Ref. [90] found that even if luminance is not controlled, the error can be reduced (down from 2.5° to 0.5°) by calibrating the eye tracker at different luminances. However, this still fails to control for non-luminance effects on pupil size, including changes in pupil size with vergence (the ‘near triad’ [93]), and cognitive processes ([94]), although these are liable to be smaller than the errors induced by changes in luminance.
Another concern with our experiment is that participants might have been doing the task with one eye closed, although they were told to keep both eyes open. However, both this suggestion, and the suggestion that participants failed to effectively converge, have to be understood in the context of our two other experiments (reported in [15]) where a similar paradigm was used to demonstrate that vergence was ineffective as an absolute distance cue. For these concerns to be really driving our results in this paper and [15] we would have to conclude that 35 participants over 3 experiments collectively failed to report pervasive diplopia, or all conducted the experiment with one eye closed, and there is no reason to believe that this is the case.

4.2. Vergence–Accommodation Conflict

Another concern is that vergence and accommodation were decoupled, and this might have affected the vergence response and/or placed the distance estimates from vergence and accommodation in conflict. However, as we have already discussed, some of the most impressive reports of vergence affecting the perceived size of objects occur in the context of pervasive vergence–accommodation conflict (e.g., the 6.5 dioptres of vergence–accommodation conflict in [46]). In this context, +/− 1 dioptres of vergence–accommodation conflict, which is well within the zone of ‘clear single binocular vision’ [79], should be entirely permissible. Indeed, it is worth considering that three out of the four contexts that I outline in the Introduction in which vergence micropsia has been historically observed (the wallpaper illusion, stereoscopic viewing, telestereoscopic viewing) all rely on inducing much larger vergence/accommodation conflicts than my experiment.

4.3. Vergence vs. Looming

My experiment explicitly introduces a physical size change component into the stimulus. Because the stimulus is viewed in darkness and in isolation, the stimulus appears to move towards the observer when it grows in size, and recede from the observer when it reduces in size. This is well documented for objects changing in size when viewed in darkness and in isolation, even without a change in vergence [95]. Another suggestion that has been put to us is that our results only pertain to stimuli like this, that appear to loom towards us, either because there is something special about looming, or because the visual system in general is task dependent.
This argument amounts to the suggestion that changes in retinal image size are so powerful that they eradicate a vergence size constancy signal that would otherwise be effective. However, we then have to ask what these other contexts might be. The claim of vergence size constancy is that as an object moves towards us in depth in the real world, the visual system cancels out, at least to some extent, its increasing retinal size. However, an object moving towards us at near distances in the real world is almost always accompanied by a change in its retinal image size. So in real world viewing, the vergence size constancy mechanism will almost always be accompanied by a looming cue.
After-images are an exception to this rule. However, in any case, my experiment tests something very close to the after-image literature [61,62,64,66,68,69]. Although Quest+ never tests zero physical size changes, as Figure 7C demonstrates it did test small angular size changes (1–2% over 5 s), which are well below threshold for most observers. Here, much like with an after-image, vergence is changing whilst the retinal image size remains virtually fixed. Yet, the points that test these small, non-discriminable, size changes demonstrate that participants are unable to determine whether the target is increasing or reducing in size.

4.4. Gradual Changes in Vergence

Another concern relates to the gradual nature of the vergence changes. Perhaps vergence does not affect perceived size when it is varied gradually (from 50 to 25 cm over 5 s), but it is an effective cue to perceived size when varied more rapidly. Again, we cannot rule out this possibility, but would make four observations.
First, we invite readers to try it for themselves. Hold your hand out at 50 cm and move it towards your face at 25 cm whilst counting out “one thousand, two thousand, three thousand, four thousand, five thousand”. Whilst the motion in depth is gradual, it is very far from being imperceptible, and we would be concerned if a supposedly important perceptual mechanism were unable to process this very apparent motion in depth.
Second, we chose this vergence change because it is exactly the kind of vergence change that had previously been found to produce almost perfect size constancy in [64]. Whilst it is open to someone to therefore interpret my results as just a critique of [64], we think this would be a mistake. Since [64] found close to perfect size constancy for gradual changes, if vergence size constancy is ineffective for such gradual changes, then a new (and I would argue cognitive) account must still be posited to explain the close to perfect size constancy in [64]. However, if this new account can explain the close to perfect size constancy reported in [64], there is no reason why it could not equally explain the close to perfect size constancy reported in other experiments with more rapid eye movements.
Third, you might object that such gradual changes in distance are unusual in the real world. That may be true for an object moving towards us at 5 cm/s from 50 to 25 cm. However, consider the case where we are sitting at a desk looking at a screen. We are constantly shifting our position in a chair as we look at a screen, moving forwards and backwards towards the screen at a rate of 5 cm/s or even less. So what initially looks like an artificial scenario is actually highly relevant to our everyday viewing.
Fourth, notice that this interpretation of our results makes a very specific prediction. Either (a) vergence is not an important cue to size constancy in full cue conditions (a conclusion which I assume those advancing this interpretation would want to resist) or (b) if I gradually move an object in full cue conditions from 50 to 25 cm, observers should misjudge its size because the gradual change in vergence has removed an important size cue.
Note that this final point is the size version of the argument that I made in [15]. There I showed that if vergence was manipulated gradually, participants were unable to point to the distance of a target. To maintain that vergence is nonetheless an important absolute distance cue in full cue conditions, one would have to maintain that if I gradually changed the distance of an object in full cue conditions, participants would be severely compromised in their ability to point to the distance of the object. If they are not, then it is hard to maintain that vergence is both ineffective as a distance cue when gradually manipulated, and yet an important absolute distance cue in full cue conditions. As I note in [15], “this account risks replacing the ineffectiveness of vergence under my account with the redundancy of vergence under their account.”

4.5. Limited Changes in Vergence

Another concern is that the eye movements in this experiment are just 25 cm (from 50 cm to 25 cm) in depth, and perhaps vergence size constancy is more effective for larger eye movements. However, this concern has to be understood in light of the way in which the vergence angle falls off with distance.
As Figure 8 demonstrates, assuming 25 cm is as close as we look in everyday viewing, then a vergence change from 25 to 50 cm corresponds to around half the full vergence range from 25 cm to infinity. If vergence size constancy is not effective in this context, then it cannot be effective in other contexts (e.g., looking from 50 cm to 2 m) where the change in distance is much greater, but the change in the vergence angle is much less.
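The ‘half the full vergence range’ claim in Figure 8 can be verified with the same vergence geometry as before (interocular separation again assumed to be 6.3 cm):
import math
def vergence_angle_deg(distance_cm, ipd_cm=6.3):
    # Vergence angle (deg) for symmetric fixation at a given distance.
    return 2.0 * math.degrees(math.atan(ipd_cm / (2.0 * distance_cm)))
print(vergence_angle_deg(25) - vergence_angle_deg(50))   # ~7.2 deg used going from 25 cm to 50 cm
print(vergence_angle_deg(50))                            # ~7.2 deg left between 50 cm and infinity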

4.6. Convergent vs. Divergent Eye Movements

Since we only tested convergent eye movements (eye movements from far to near), rather than divergent eye movements (eye movements from near to far), conceptually there is the possibility that vergence size constancy still works in one direction (near to far) but not the other (far to near). However, first, there is no evidence that this is in fact the case. Second, any visual system that increases perceived size when the eyes move from near to far, but does not reduce perceived size when the eyes move from far to near, is going to lead to an ever-ratcheting increase in the perceived size of objects as the eyes are moved forwards and backwards in depth. Readers can confirm for themselves that this is not the case.

4.7. Vergence Micropsia

If vergence does not affect perceived size, how do I explain the four contexts where vergence micropsia has been reported in the literature, namely 1. the wallpaper illusion (where we cross our eyes when looking at a wallpaper pattern), 2. stereoscopic viewing (where we vary vergence whilst looking at two identical images in a stereoscope), 3. telestereoscopic viewing (where we effectively increase the interpupillary distance using mirrors), and 4. the Taylor illusion (where an after-image of the hand appears to shrink when we move it towards us)?
There are two explanations of the wallpaper illusion/stereoscopic viewing. First, the same confounding cues I sought to exclude in this experiment (retinal slip and subjective knowledge about our own eye movements) could be used to cognitively infer a reduced size even if this is not actually perceived. Second, and more obviously, the wallpaper illusion and mirror stereoscopes do not control for changes in angular size when we look at a fronto-parallel surface obliquely. This is illustrated by Figure 5 above (and was controlled for in our experiment using OpenGL). In [11], I apply this thought to explain vergence micropsia.
If you cross-fuse the two coins in Figure 9, the central fused coin appears smaller than when we look at the coins normally. But why does the coin shrink in size if vergence micropsia does not exist? To answer this, we have to understand how the retinal image changes when we cross-fuse.
As Figure 10 illustrates, when we look at the coins normally the right coin projects a larger retinal image to the right eye than the left eye. Equally, the left coin projects a larger retinal image to the left eye than the right eye. Now when we cross-fuse, the fused coin is made up of the smaller retinal image of the left coin in the right eye and the smaller retinal image of the right coin in the left eye, explaining why the fused coin is perceived as reduced in scale.
There is an easy way to check if this is the correct explanation. This account predicts that the flanking monocular coins either side of the fused coin will be made up of the large retinal image of the left coin in the left eye and the large retinal image of the right coin in the right eye, so the central fused coin should look smaller than the two monocular flankers. At close viewing distances this difference in size between the fused and flanking coins is immediately apparent.
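The geometry behind this prediction can also be sketched numerically. The viewing distance, coin size, and coin positions below are illustrative assumptions, and the calculation treats each coin’s angular size as simply inversely proportional to the eye-to-coin distance (ignoring foreshortening):
import math
def coin_angular_size_deg(eye_x_cm, coin_x_cm, coin_diameter_cm=2.2, viewing_distance_cm=30.0):
    # Angular size of a coin on a fronto-parallel page, seen from one eye
    # (eye and coin positions are lateral offsets from the midline).
    d = math.hypot(coin_x_cm - eye_x_cm, viewing_distance_cm)
    return math.degrees(2.0 * math.atan(coin_diameter_cm / (2.0 * d)))
# Right-hand coin (+3 cm) as seen by the right eye (+3.15 cm) versus the left eye (-3.15 cm):
print(coin_angular_size_deg(+3.15, +3.0))   # larger retinal image in the nearer (right) eye
print(coin_angular_size_deg(-3.15, +3.0))   # smaller retinal image in the further (left) eye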

4.8. Cognitive Explanation of Vergence Size Constancy

Let us now turn to the Taylor illusion as a useful context in which to explore an alternative, purely cognitive, explanation of vergence size constancy. The Taylor illusion (where an after-image of the hand appears to shrink or grow with physical hand movements) is an important paradigm for recent discussions of multisensory integration [67,96,97]. The best current explanation for the Taylor illusion is that it is due [61,62,63] or almost entirely due [64] to changes in vergence as the eyes track the hand moving in darkness. However, in light of our results, we suggest that this explanation is no longer sustainable, since vergence had no effect on the perceived size of the target once subjective knowledge about the fixation distance had been controlled for.
Nor does this imply that the Taylor illusion is primarily due to proprioceptive information from hand motion directly influencing visual perception [66,69], since [64] demonstrates that when vergence and hand motion are in conflict, the Taylor illusion follows vergence, and the effect is only marginally reduced in size.
Instead, what both accounts are missing is the participant’s subjective knowledge about their own changing hand and gaze positions. We would suggest that this explains why [64] found that vergence affects perceived size when their participants knew about their changing gaze position (from their hand movements or from the motion in depth of an LED), but why we did not when our participants were ignorant of this fact. There are two ways in which conscious knowledge about our changing hand or gaze position could influence size constancy.
First, our subjective knowledge could influence our visual experience (so-called ‘cognitive penetration’ of perception by cognition). However, we are sceptical of invoking ‘cognitive penetration’ to explain an effect that could also be explained as a purely cognitive bias (for further sceptical discussions of ‘cognitive penetration’, see [98,99,100]).
Second, under our alternative cognitive bias account, the integration of the retinal image and our changing gaze position could be purely cognitive, rather than perceptual. Our visual experience of the after-image’s angular size remains constant, but because we know that our hand is moving towards our face, our hand movements cognitively bias our interpretation of our constant visual experience, and we interpret the constant angular size of the after-image as a reduction in physical size.
Put another way, the after-image literature cannot place itself outside the looming literature since the absence of a change in retinal image size is still a looming cue. The fact that after-images do not change in retinal size, whilst the participant knows that they change in distance (for instance, due to retinal slip from diplopia, or from motion of their own hand in the Taylor illusion) is a cue that the object is shrinking in size, because there is an absence of the attendant increase in retinal size that one would expect. This is an entirely consistent looming-based explanation of the vergence size constancy after-image literature that does not invoke the notion of vergence size constancy.
One reviewer has asked why subjective knowledge should determine perceived size in this context but not e.g., in the Ames Room where we know that the two people we are seeing are the same size. This is an open question, but note two disanalogies between these two scenarios. First, the Ames Room is about perceived physical size (judging the wrong physical size given the different angular sizes of the people), whilst the Taylor illusion is primarily about perceived angular size (the impression of the hand ‘shrinking’ or ‘growing’). Second, the Taylor illusion occurs over time, and without the opportunity for direct comparison between the target at t1 and t2, implying an important role for working memory, and therefore additional scope for cognitive influences.

4.9. Multisensory Integration

This purely cognitive interpretation of the Taylor illusion has wide-reaching implications for multisensory integration, specifically the integration of vision and hand movements. The Taylor illusion is taken as evidence of multisensory integration at the level of perception. Specifically, that vision “relies on multimodal signals” [64,101] and that “visual consciousness is shaped by the body” [67,96,102]. However, if the integration of proprioception and the retinal image is purely cognitive in the context of vergence (the major driver of the Taylor illusion in [64]), there is no reason why the integration of proprioception and the retinal image in the context of integrating vision and hand movements (the minor driver of the Taylor illusion in [64]) could not equally be accounted for in purely cognitive terms.
This cognitive approach also suggests a non-perceptual explanation for variants of the Taylor illusion that appear to demonstrate the integration of vision with the rubber-hand illusion [67] and tool use [97].
I also advance cognitive interpretations of the integration of vision and proprioception in the contexts of (a) vision and touch in slant estimation [103,104] in [10], pp. 37–38 and pp. 65–66, and (b) vision and vestibular cues to self-motion [105,106] in [11]. Combined with (c) the cognitive interpretation of the Taylor illusion in this paper, this leads to the cognitive theory of multisensory integration I develop in [107].

4.10. Vergence Modulation of V1

Ever since [108] found that the large majority of neurons in the monkey primary visual cortex (V1) were modulated by vergence, it has been suggested that processing of the vergence signal in V1 plays an important role in size constancy. Further evidence for the vergence modulation of V1 is found by [109,110,111,112,113,114,115]. Ref. [110] found a smaller proportion of neurons in V1 were affected, and to a less dramatic extent, than in [108,112]. One reason could be the differences in vergence distances tested, but another possibility raised by [110] is that since vergence was not tracked in [108,112], there is no guarantee that poor fixation did not cause retinal disparities. This concern might also apply to other studies.
Related discussions include the role of vergence in the parietal cortex [116,117,118,119], and potentially also LGN [120] (although this was speculative, and never followed up), as well as early neural network models [121,122] that [108] complements.
More recently, Ref. [123] have looked at the time course of size constancy, and found that vergence and the retinal image are not integrated during (a) initial processing in V1 (~50 ms), but instead during (b) recurrent processing within V1, and/or (c) re-entrant projections from higher-order visual areas (e.g., [116,117]), both of which are consistent with the ~150 ms timeframe. This is consistent with the suggestion in [108] that whilst vergence responsive neurons encode vergence distance, further computations are required to scale the retinal image, so vergence responsive neurons “constitute an intermediate step in the computation of true depth, as suggested by neural network models [121].”
However, this whole line of research, from [108] to the present, is premised on the claim that “psychophysical data suggest an important role for vergence” [108]. But this is exactly what our results in this paper and in [15] question. Taken together, our results in this paper and in [15] suggest that there is no link between vergence and distance perception ([15]) or size perception (this paper), and therefore no link between the vergence modulation in V1 (or anywhere else) and distance or size perception.
How, then, are we to account for the vergence modulation of V1 that has been reported [108,112]? There are three possibilities:
First, one objection to our account is that we have not excluded every possible task in which vergence might be implicated in size and distance perception. One possibility that is often suggested is that distance is extracted from vergence in order to scale binocular disparities for our perception of 3D shape [50], and that errors in this mechanism are responsible for distortions of visual shape [124]. This will be the focus of future experimental work, although we would be surprised if distance from vergence was used to scale binocular disparities whilst failing to scale size and distance themselves.
Second, as we have argued throughout this paper, we should be careful about attributing effects to vergence that could equally be explained by changes in the retinal image. Ref. [110] raises the prospect of vergence insufficiency leading to changes in retinal disparity. Another possibility is that the results in [108,112] simply reflect changes in vertical disparity with distance, since [108,112] change the distance of the display, not vergence per se. So the vergence modulations in V1 might simply be artifacts.
Third, according to our alternative account, visual scale is entirely dependent upon top-down cognitive processing [11]. However, as we argue in [107], this is entirely consistent with eye movement processing in V1, if this processing simply reflects subjective knowledge about our changing gaze position. There is increasing evidence that V1 is implicated in purely cognitive (non-visual) processing, e.g., the location of rewards amongst visually identical targets [125]. So the vergence modulations of V1, if they do exist, could equally reflect the participant’s own purely cognitive knowledge about their changing gaze position, even though this has no effect on their visual experience.

4.11. Telestereoscopic Viewing

Of the four instances of vergence size constancy we discussed in the Introduction, alternative explanations of (a) the wallpaper illusion and (b) stereoscopic viewing have been provided in Section 4.7, whilst an alternative explanation of (c) the Taylor illusion is provided in Section 4.8. That only leaves (d) telestereoscopic viewing to be explained. This is when the interpupillary distance of the eyes is increased using mirrors, leading to the impression that the world has been ‘miniaturised’. Helmholtz explained this miniaturisation in terms of the increase in vergence required to fixate on an object ([51,52], Ref. [31] p. 310; Figure 11).
However, our results in this paper challenge this account.
An alternative explanation for this effect, and potentially the micropsia effects in the wallpaper illusion and stereoscopic viewing (although not the Taylor illusion), is changes in vertical disparities. Rogers [53,54] appeals to vertical disparities (as well as vergence) to explain telestereoscopic viewing, but we are sceptical of this explanation.
The initial promise of vertical disparities was that you could take any three points in the scene (and potentially just two), and determine the geometry and viewing distance of the scene [126,127,128]. However, there is no evidence the visual system can actually achieve this with such a limited number of points, and the emphasis soon shifted (in [129]) to the gradient of vertical disparities across a fronto-parallel surface. Ref. [130] explains the contemporary vertical disparity literature in the following terms: “For any continuous surface like the chessboard, there is a horizontal gradient of the ratio of the vertical sizes in the two eyes and this varies with viewing distance.”
However, there are four concerns with this account:
First, telestereoscopic viewing should not affect our perception of size and distance under the classic account [126], since in the process of estimating egocentric distance this account also estimates the interpupillary distance and vergence angle. Put simply, whether our eyes are far apart and rotated a lot, or close together and rotated a little, so long as they are fixated on the same point in physical space, the approach in [126] should give exactly the same egocentric distance, and therefore exactly the same perceived scale.
Second, vertical disparities have proved an ineffective absolute distance cue for surfaces that take up as much as 25° × 30° of the visual field [131,132]. Ref. [12] argue that this is because vertical disparities are maximised in the periphery. However, the results for even a 60° surface are questionable. Ref. [133] investigates how vertical disparities could improve the distance perception of a cylinder in front of a 60° background. First, subjects performed no better, and potentially performed worse, with a 60° background present than without it. Second, whilst increasing the vertical disparities of the background did change the apparent distance of the cylinder, this was largely because participants changed their previous judgements about the cylinder’s distance when the background’s vertical disparities were undistorted. Had participants made consistent judgements about the same stimulus across conditions the effect would largely disappear. If there is no benefit of having a 60° background, and no consistency in distance judgements across conditions when it is present, it becomes hard to maintain that vertical disparities provide an effective absolute distance cue.
Similarly, it is hard to claim that [12] shows more than an ordinal effect of vertical disparities on distance judgements even for a 70° surface, since “different observers differed in the way they used numbers to indicate absolute distances”, and so distances underwent a normalisation and scaling process.
Third, even taking these 60° [133] and 70° [12] results at face value, what do they imply? They limit the application of vertical disparities to (1) regularly textured flat surfaces that are (2) no further away than 1–2 m (see the sharp fall-off of vertical disparities with distance for a 60° × 60° surface in [133], Figure 1c), and that (3) take up at least 30° (and arguably 60–70°) of the visual field. Since such surfaces are almost never encountered in natural viewing conditions, this cannot be how we experience visual scale. Theoretically, the results in [12,133] could support a broader principle, but as [12] note: “Whether the visual system extracts the vertical disparities of individual points or the gradient of VSRs over a surface is an empirical question for which there is no clear answer at present.”
Fourth, even if a fronto-parallel surface is not required, the points still need to take up 60–70° of the visual field. This cannot explain how telestereoscopic viewing can change the perceived scale of a photograph that takes up much less than 60–70° of the visual field when viewed. See, for example, Figure 12 and Figure 13. Viewed through red-blue glasses with one eye open, they look like normal photos. However, as soon as you view them through red-blue glasses with both eyes open, you get the immediate impression that we are “not looking at the natural landscape itself, but a very exquisite and exact model of it, reduced in scale” (Helmholtz in [31], p. 312). Whatever explains the miniaturisation effect of telestereoscopic photographs likely explains the miniaturisation effect of telestereoscopic viewing in the real world, and yet large angular sizes are not required for telestereoscopic photographs to be effective.
Instead of vergence or vertical disparities, in [11] I advance a third, distinct explanation for the change in scale experienced in telestereoscopic viewing, one based on horizontal disparities. The argument begins from the premise that, apart from vergence and vertical disparities, this is the only remaining parameter that is manipulated in telestereoscopic viewing. Horizontal disparities are typically treated merely as an affine depth cue (not even a relative depth cue) until they are scaled by an absolute distance cue, usually taken to be vergence [50,124]. The reason for this is that binocular disparities fall off with 1/distance², and so in order to extract relative depth (which is a relationship in terms of 1/distance, not 1/distance²) we need to know the viewing distance. So, on this account, distance from vergence provides us with perceived stereo depth [124]: ‘Distance → Stereo Depth’. My argument in [11] seeks to invert this relationship (known as ‘depth constancy’), to instead go from ‘Stereo Depth → Distance’, in three stages:
First, on this account, there is no scaling of binocular disparity using vergence, and therefore no ‘depth constancy’. Whilst [124] found some evidence of ‘depth constancy’, this was not tested in the controlled blackout conditions of [15] or this paper. Under this alternative account, perceived depth simply reflects binocular disparity that falls off with 1/distance². This will be the subject of future empirical investigation.
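As a minimal numerical illustration of this unscaled fall-off (my own sketch in Python, not a calculation from [11], [15], or [124]; the interocular separation, depth interval, and small-angle approximation are assumptions of the sketch), the relative disparity of a fixed 5 cm depth interval shrinks roughly fourfold each time the viewing distance doubles:

import numpy as np

# Relative horizontal disparity between two points separated in depth by
# `delta`, fixated at distance `d`, approximated as the difference in vergence
# angles (small-angle approximation: vergence ~ i / d, so the difference is
# approximately i * delta / d**2).
def relative_disparity(d, delta, i=0.065):
    return i / d - i / (d + delta)

delta = 0.05                                  # a fixed 5 cm depth interval
for d in (0.25, 0.5, 1.0, 2.0):               # viewing distances (metres)
    arcmin = np.degrees(relative_disparity(d, delta)) * 60
    print(f"viewing distance {d:4.2f} m: disparity of the interval ≈ {arcmin:6.1f} arcmin")

# The conventional 'depth constancy' reading divides this disparity by its
# 1/d**2 fall-off (using a distance estimate, e.g. from vergence) to recover
# delta; the account sketched here instead takes the unscaled disparity at face
# value, so a large value is read as evidence that d itself is small.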
Second, under this account, we are unconsciously very sensitive to the way in which perceived depth falls off with distance. This is what I describe in [11] as “a cognitive association between (a) vivid stereo depth and (b) closer distances (reflecting our experience of an environment where disparity falls off with distance²)”. Put simply, we only ever experience the kind of vivid stereo depth seen in Figure 12 and Figure 13 when we view a scene up close, and so we interpret the scene as miniature [134].
Whilst vivid stereo depth equates to near distances, flat stereo depth could equate to ‘far away’ or ‘up close and flat’, so assumptions about scene geometry will often be necessary. Our interpretation of the relationship between stereo depth and distance is informed by natural scene statistics [134], enabling us to make appropriate assumptions.
The relationship between stereo depth and distance would be more informative if we could compare the perceived geometry of the scene (depth from disparity) with the actual geometry of the scene. So, in Figure 12 and Figure 13, we might additionally infer that the accentuated stereo depth is only consistent with the geometry of the scene if the scene is being viewed up close; i.e., if the scene is miniature [134]. If we do make this comparison, it may be that we recognise the scene as being of a certain familiar kind. Or it may be that we rely on pictorial cues, which are comparatively distance invariant (albeit subject to perspective distortions / foreshortening [135]), and which are unaffected by telestereoscopic viewing (which increases binocular disparity without reducing viewing distance).
I would expect the same miniaturisation effect to be present if the scenes in Figure 12 and Figure 13 were depicted using a random-dot stereogram. This does not necessarily rule out familiarity or pictorial cues. In [10] we argue that the cues that inform us about scene geometry in a pictorial sense are processed after stereo depth has been perceptually resolved. For instance, you can embed a 2D line drawing of a cube within a random-dot stereogram. The line drawing only emerges after the stereo depth has been perceptually resolved. However, you can still recognise it as a 2D line drawing of a cube. So, you are interpreting the shape of the cube using pictorial cues, such as perspective, much like any 2D line drawing. However, it would be a mistake to conclude that you are therefore relying on monocular cues, because no monocular cues exist in a random-dot stereogram. Similarly, you can imagine viewing a face in a random-dot stereogram. The face might be accentuated or reduced in depth, but you would still be able to recognise it as a face, and therefore interpret it as being accentuated or reduced in depth, even though there are no monocular cues.
Third, I claim that the relationship between accentuated stereo-depth (i.e., accentuated 3D shape) and visual scale is merely a post-perceptual cognitive association. What does this mean? So far as our visual experience is concerned, all that is being manipulated with telestereoscopic viewing is our perception of 3D shape. An illustration of this is provided by Figure 12 and Figure 13 when viewed with red-blue glasses. Close one eye and you do not experience the apparent miniaturisation in scale that you do with binocular viewing. Yet, what changes between monocular and binocular viewing? Fixate on an object in the image and close one eye. First, the angular size of the object does not appear to change in any dramatic sense, so we cannot attribute the dramatic change in perceived scale to a change in perceived angular size. Second, the object does not suddenly appear to change its distance from us, and move towards or away from us, so we cannot attribute the dramatic change in perceived scale to a change in the perceived distance of the object. There is therefore no sense that our perception of (a) size or (b) distance changes dramatically as we fixate on an object in the scene and open and close one eye. Instead, the defining change in our visual experience is a change in the perceived 3D shape of the scene.
Note that it might be tempting to write off telestereoscopic viewing as a contrived illusion. However, that would be a mistake. What telestereoscopic viewing demonstrates is that binocular cues dominate all other cues to absolute size and distance. In telestereoscopic viewing all other cues to absolute distance in the real world are maintained, and yet our impression of visual scale varies dramatically with the manipulation of the interpupillary distance using mirrors. Instead of interpreting our interpupillary distance as having changed, we attribute this change to the scene itself, and see it as miniaturised. So, whatever explains why we see the world as the wrong size when binocular cues are wrong (in telestereoscopic viewing) also explains why we see the world as the right size when binocular cues are correct (in ordinary binocular viewing).
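One way to see why the change might be attributed to the scene rather than to the eyes is that, so far as binocular geometry is concerned, the two interpretations are indistinguishable: enlarging the interpupillary distance with mirrors produces, point for point, the same vergence demands and disparities as shrinking the whole scene and viewing it with the normal interpupillary distance. The sketch below (my own illustration in Python; the interpupillary distance, mirror gain, and test point are assumed values) checks this equivalence for the vergence angle at an arbitrary point; the same similar-triangles argument extends to the full pattern of horizontal and vertical disparities:

import numpy as np

def vergence_angle(point, ipd):
    """Vergence angle (radians) when both eyes fixate the 3-D point (x, y, z),
    with the eyes at (-ipd/2, 0, 0) and (+ipd/2, 0, 0)."""
    left = np.asarray(point, dtype=float) - np.array([-ipd / 2, 0.0, 0.0])
    right = np.asarray(point, dtype=float) - np.array([ipd / 2, 0.0, 0.0])
    cos_a = left @ right / (np.linalg.norm(left) * np.linalg.norm(right))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

ipd, gain = 0.065, 3.0                     # normal IPD (m); assumed telestereoscope gain
point = np.array([0.1, 0.05, 1.5])         # an arbitrary scene point (m)

angle_tele = vergence_angle(point, gain * ipd)   # enlarged effective IPD
angle_mini = vergence_angle(point / gain, ipd)   # the same scene scaled down instead
print(f"telestereoscopic: {angle_tele:.6f} rad, miniature scene: {angle_mini:.6f} rad")

The two printed angles are identical to machine precision, which is simply the scale invariance of visual angles; the question the account in [11] raises is why the visual system resolves this ambiguity in favour of a miniaturised scene.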
If this is the correct interpretation of telestereoscopic viewing, then our results suggest that visual scale is much more reliant on cognitive influences than previously thought. First, because our results challenge the suggestion that size and distance are triangulated using vergence. Second, because our alternative explanation of binocular scale perception in telestereoscopic viewing is entirely cognitive in nature. Our results are consistent with our argument in [10,11] that visual scale is based solely on higher-level cognitive processes, where we extend [136]’s and [137]’s observations about familiar size to argue that visual scale itself is a purely cognitive process.

4.12. Development of Size Constancy

One objection to our account is that infants 1.5–2 months of age can apparently respond to the physical, and not just the retinal, size of objects [138,139,140]. According to this objection, this speaks against a cognitive explanation. However, we have good reason to be cautious of the claims in this literature. First, Ref. [138] found no benefit of binocular vision on infant size constancy, grounding his account in motion parallax instead. This should immediately alert us that this literature is missing something important. Second, even in the case of motion parallax, Ref. [138] admitted that infants could simply be responding to changes in the stimulus that accompany head movements, rather than to perceived size. Third, a counterpoint to this literature is that those with their sight restored after early blindness often experience gross errors in their distance judgements [141,142]. Fourth, whilst it is merely anecdotal, Helmholtz’s recollection of his own gross failures of size constancy as a child ([31], p. 283) inspired his whole account of inferential size and distance perception.

5. Conclusions

Vergence is thought to provide an essential signal for size constancy. We tested vergence size constancy for the first time without confounding cues, and found no evidence that eye movements make any contribution to perceived size. We explore a number of alternative explanations for these results. Whilst we cannot definitively exclude these alternatives, we conclude that the most plausible interpretation of our results is that vergence does not contribute to size perception, complementing our previous results [15] suggesting that vergence does not contribute to distance perception. If this is the correct interpretation, this work has three important implications. First, it suggests that our impression of visual scale is much more reliant on cognitive processing than previously thought. Second, it leads us to question whether the vergence modulation of neurons in V1 reported in the literature really contributes to our impression of visual scale. Third, it leads us to question whether the reported multisensory integration of the retinal image with proprioceptive cues from the hand is better understood in terms of observers having subjective knowledge about their hand and gaze positions.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by City, University of London’s Optometry Division’s Proportionate Review Ethics Committee (ETH1920-0015, approved 6 August 2019).

Informed Consent Statement

Fully informed and written consent was obtained from all subjects involved in the study.

Data Availability Statement

Code for running the experiment, and all the data and analysis scripts, are accessible in an open access repository: https://osf.io/5nwaz/ (accessed on 30 September 2020).

Acknowledgments

This research was conducted under the supervision of Christopher Tyler and Simon Grant. We thank Salma Ahmad, Mark Mayhew, and Deanna Taylor for fitting the contact lenses, and Jugjeet Bansal, Priya Mehta, and the staff of the CitySight clinic for their assistance. We thank Joshua Solomon, Matteo Lisi, Byki Huntjens, Michael Morgan, Chris Hull, and Pete Jones for their advice. We thank audiences at the Applied Vision Association (2019), the British Machine Vision Association’s ‘3D worlds from 2D images in humans and machines’ meeting (2020): https://youtu.be/6P3EYCEn52A (accessed on 30 September 2020) and the Vision Sciences Society (2020), poster: https://osf.io/tb3un/ (accessed on 30 September 2020), and presentation: http://youtu.be/VhpYjPj5Q80 (accessed on 30 September 2020) where this work was presented, as well as audiences at the Association for the Scientific Study of Consciousness (2019) where the argument about telestereoscopic viewing was presented: https://osf.io/9ry3s (accessed on 30 September 2020). We thank Scott Murray for allowing us to reproduce Figure 1, and Sascha Becher for allowing us to reproduce Figure 12 and Figure 13. We thank the three referees and the two editors for their very valuable comments on the paper.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Sperandio, I.; Chouinard, P.A. The Mechanisms of Size Constancy. Multisens. Res. 2015, 28, 253–283. [Google Scholar] [CrossRef] [PubMed]
  2. Murray, S.O.; Boyaci, H.; Kersten, D. The representation of perceived angular size in human primary visual cortex. Nat. Neurosci. 2006, 9, 429–434. [Google Scholar] [CrossRef] [PubMed]
  3. Sperandio, I.; Chouinard, P.A.; Goodale, M.A. Retinotopic activity in V1 reflects the perceived and not the retinal size of an afterimage. Nat. Neurosci. 2012, 15, 540–542. [Google Scholar] [CrossRef] [PubMed]
  4. Leibowitz, H.; Brislin, R.; Perlmutter, L.; Hennessy, R. Ponzo Perspective Illusion as a Manifestation of Space Perception. Science 1969, 166, 1174–1176. [Google Scholar] [CrossRef]
  5. Goodale, M.A. How Big is that Banana? Differences in Size Constancy for Perception and Action. 2020. Available online: https://www.youtube.com/watch?v=EbFkPk7Q3Z8 (accessed on 30 September 2020).
  6. Millard, A.S.; Sperandio, I.; Chouinard, P.A. The contribution of stereopsis in Emmert’s law. Exp. Brain Res. 2020, 238, 1061–1072. [Google Scholar] [CrossRef]
  7. Emmert, E. Größenverhältnisse der Nachbilder. Klinische Monatsblätter Für Augenheilkunde Und Für Augenärztliche Fortbildung 1881, 19, 443–450. [Google Scholar]
  8. Combe, E.; Wexler, M. Observer Movement and Size Constancy. Psychol. Sci. 2010, 21, 667–675. [Google Scholar] [CrossRef] [Green Version]
  9. Brenner, E.; van Damme, W.J. Judging distance from ocular convergence. Vis. Res. 1998, 38, 493–498. [Google Scholar] [CrossRef] [Green Version]
  10. Linton, P. The Perception and Cognition of Visual Space; Palgrave: London, UK, 2017. [Google Scholar]
  11. Linton, P. Do We See Scale? bioRxiv 2018. [Google Scholar] [CrossRef] [Green Version]
  12. Rogers, B.J.; Bradshaw, M.F. Disparity Scaling and the Perception of Frontoparallel Surfaces. Perception 1995, 24, 155–179. [Google Scholar] [CrossRef]
  13. Mon-Williams, M.; Tresilian, J.R. Some Recent Studies on the Extraretinal Contribution to Distance Perception. Perception 1999, 28, 167–181. [Google Scholar] [CrossRef]
  14. Viguier, A.; Clément, G.; Trotter, Y. Distance perception within near visual space. Perception 2001, 30, 115–124. [Google Scholar] [CrossRef] [Green Version]
  15. Linton, P. Does vision extract absolute distance from vergence? Atten. Percept. Psychophys. 2020, 82, 3176–3195. [Google Scholar] [CrossRef]
  16. Kepler, J. Paralipomena to Witelo. In Optics: Paralipomena to Witelo and Optical Part of Astronomy; Donahue, W.H., Ed.; Green Lion Press: Santa Fe, NM, USA, 2000. [Google Scholar]
  17. Descartes, R. Dioptrique (Optics). In The Philosophical Writings of Descartes; Cottingham, J., Stoothoff, R., Murdoch, D., Eds.; Cambridge University Press: Cambridge, UK, 1985; Volume 1. [Google Scholar]
  18. Wheatstone, C. XVIII Contributions to the physiology of vision—Part the first. On some remarkable, and hitherto unobserved, phenomena of binocular vision. Philos. Trans. R. Soc. Lond. 1838, 128, 371–394. [Google Scholar] [CrossRef]
  19. Smith, R. A Compleat System of Opticks in Four Books, Viz. A Popular, a Mathematical, a Mechanical, and a Philosophical Treatise. To Which Are Added Remarks Upon the Whole; Robert, S., Ed.; Cornelius Crownfield: Cambridge, UK, 1738. [Google Scholar]
  20. Priestley, J. The History and Present State of Discoveries Relating to Vision, Light, and Colours; J. Johnson: London, UK, 1772. [Google Scholar]
  21. Von Goethe, J.W. Zur Farbenlehre; Cotta’schen Buchhandlung: Tübingen, Germany, 1810. [Google Scholar]
  22. Meyer, H. Ueber einige Täuschungen in der Entfernung u. Grösse der Gesichtsobjecte. Arch. Physiol. Heilkd. 1842, 316. [Google Scholar]
  23. Meyer, H. Ueber die Schätzung der Grösse und der Entfernung der Gesichtsobjecte aus der Convergenz der Augenaxen. Ann. Phys. 1852, 161, 198–207. [Google Scholar] [CrossRef] [Green Version]
  24. Brewster, D. On the knowledge of distance given by binocular vision. Trans. R. Soc. Edinb. 1844, 15, 663–674. [Google Scholar] [CrossRef] [Green Version]
  25. Locke, J. XXV. On single and double vision produced by viewing objects with both eyes; and on an optical illusion with regard to the distance of objects. Lond. Edinb. Dublin Philos. Mag. J. Sci. 1849, 34, 195–201. [Google Scholar] [CrossRef]
  26. Kohly, R.P.; Ono, H. Fixating on the wallpaper illusion: A commentary on “The role of vergence in the perception of distance: A fair test of Bishop Berkeley’s claim” by Logvinenko et al. Spat. Vis. 2001, 15, 377–386. [Google Scholar] [CrossRef]
  27. Lie, I. Convergence as a cue to perceived size and distance. Scand. J. Psychol. 1965, 6, 109–116. [Google Scholar] [CrossRef]
  28. Ono, H.; Mitson, L.; Seabrook, K. Change in convergence and retinal disparities as an explanation for the wallpaper phenomenon. J. Exp. Psychol. 1971, 91, 1–10. [Google Scholar] [CrossRef] [PubMed]
  29. Howard, I.P. Perceiving in Depth, Volume 3: Other Mechanisms of Depth Perception; Oxford University Press: Oxford, UK, 2012. [Google Scholar]
  30. Wheatstone, C.I. The Bakerian Lecture.—Contributions to the physiology of vision.—Part the second. On some remarkable, and hitherto unobserved, phenomena of binocular vision (continued). Philos. Trans. R. Soc. Lond. 1852, 142, 1–17. [Google Scholar] [CrossRef]
  31. Helmholtz, H. Handbuch der Physiologischen Optik, Vol. III; L. Voss: Leipzig, Germany, 1866. [Google Scholar]
  32. Adams, O.S. Stereogram decentration and stereo-base as factors influencing the apparent size of stereoscopic pictures. Am. J. Psychol. 1955, 68, 54–68. [Google Scholar] [CrossRef] [PubMed]
  33. Biersdorf, W.R.; Ohwaki, S.; Kozil, D.J. The Effect of Instructions and Oculomotor Adjustments on Apparent Size. Am. J. Psychol. 1963, 76, 1–17. [Google Scholar] [CrossRef]
  34. Enright, J.T. The eye, the brain and the size of the moon: Toward a unified oculomotor hypothesis for the moon illusion. In The Moon Illusion; Hershenson, M., Ed.; Erlbaum Associates: Mahwah, NJ, USA, 1989; pp. 59–121. [Google Scholar]
  35. Frank, H. Ueber den Einfluss inadäquater Konvergenz und Akkommodation auf die Sehgröße. Psychol. Forsch. 1930, 13, 135–144. [Google Scholar] [CrossRef]
  36. Gogel, W.C. The Effect of Convergence on Perceived Size and Distance. J. Psychol. 1962, 53, 475–489. [Google Scholar] [CrossRef]
  37. Heinemann, E.G.; Tulving, E.; Nachmias, J. The effect of oculomotor adjustments on apparent size. Am. J. Psychol. 1959, 72, 32–45. [Google Scholar] [CrossRef]
  38. Hermans, T.G. Visual size constancy as a function of convergence. J. Exp. Psychol. 1937, 21, 145–161. [Google Scholar] [CrossRef]
  39. Hermans, T.G. The relationship of convergence and elevation changes to judgments of size. J. Exp. Psychol. 1954, 48, 204–208. [Google Scholar] [CrossRef]
  40. Judd, C.H. Some facts of binocular vision. Psychol. Rev. 1897, 4, 374–389. [Google Scholar] [CrossRef] [Green Version]
  41. Komoda, M.K.; Ono, H. Oculomotor adjustments and size-distance perception. Percept. Psychophys. 1974, 15, 353–360. [Google Scholar] [CrossRef] [Green Version]
  42. Leibowitz, H.; Moore, D. Role of changes in accommodation and convergence in the perception of size. J. Opt. Soc. Am. 1966, 56, 1120–1129. [Google Scholar] [CrossRef]
  43. Leibowitz, H.W.; Shiina, K.; Hennessy, R.T. Oculomotor adjustments and size constancy. Percept. Psychophys. 1972, 12, 497–500. [Google Scholar] [CrossRef] [Green Version]
  44. Locke, N.M. Some Factors in Size-Constancy. Am. J. Psychol. 1938, 51, 514–520. [Google Scholar] [CrossRef]
  45. McCready, D.W. Size-distance perception and accommodation-convergence micropsia—A critique. Vis. Res. 1965, 5, 189–206. [Google Scholar] [CrossRef]
  46. Regan, D.; Erkelens, C.J.; Collewijn, H. Necessary conditions for the perception of motion in depth. Investig. Ophthalmol. Visual Sci. 1986, 27, 584–597. [Google Scholar]
  47. Von Holst, E. Aktive Leistungen der menschlichen Gesichtswahrnehmung. Stud. General. 1957, 10, 231–243. [Google Scholar]
  48. Von Holst, E. Ist der Einfluss der Akkommodation auf die gesehene Dinggröße ein “reflektorischer” Vorgang? Naturwissenschaften 1955, 42, 445–446. [Google Scholar] [CrossRef]
  49. Von Holst, E. Die Beteiligung von Konvergenz und Akkommodation an der wahrgenommenen Größenkonstanz. Naturwissenschaften 1955, 42, 444–445. [Google Scholar] [CrossRef]
  50. Wallach, H.; Zuckerman, C. The constancy of stereoscopic depth. Am. J. Psychol. 1963, 76, 404–412. [Google Scholar] [CrossRef]
  51. Helmholtz, H. Das Telestereoskop. Annalen Physik 1857, 178, 167–175. [Google Scholar] [CrossRef]
  52. Helmholtz, P.H. II. On the telestereoscope. Lond. Edinb. Dublin Philos. Mag. J. Sci. 1858, 15, 19–24. [Google Scholar] [CrossRef]
  53. Rogers, B. Are stereoscopic cues ignored in telestereoscopic viewing? J. Vis. 2009, 9, 288. [Google Scholar] [CrossRef]
  54. Rogers, B.J. Information, illusion, and constancy in telestereoscopic viewing. In Vision in 3D Environments; Harris, L.R., Jenkin, M.R.M., Eds.; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
  55. Newman, D.G.; Ostler, D. Hyperstereopsis Associated with Helmet-Mounted Sighting and Display Systems for Helicopter Pilots. Aerosp. Med. Assoc. 2009. [Google Scholar] [CrossRef]
  56. Priot, A.E.; Neveu, P.; Philippe, M.; Roumes, C. Adaptation to alterations of apparent distance in stereoscopic displays: From lab to hyperstereopsis. In Proceedings of the 2012 International Conference on 3D Imaging (IC3D), Liege, Belgium, 3–5 December 2012; pp. 1–7. [Google Scholar] [CrossRef]
  57. Priot, A.-E.; Laboissière, R.; Plantier, J.; Prablanc, C.; Roumes, C. Partitioning the components of visuomotor adaptation to prism-altered distance. Neuropsychologia 2011, 49, 498–506. [Google Scholar] [CrossRef] [Green Version]
  58. Priot, A.-E.; Laboissière, R.; Sillan, O.; Roumes, C.; Prablanc, C. Adaptation of egocentric distance perception under telestereoscopic viewing within reaching space. Exp. Brain Res. 2010, 202, 825–836. [Google Scholar] [CrossRef] [Green Version]
  59. Priot, A.-E.; Vacher, A.; Vienne, C.; Neveu, P.; Roumes, C. The initial effects of hyperstereopsis on visual perception in helicopter pilots flying with see-through helmet-mounted displays. Displays 2018, 51, 1–8. [Google Scholar] [CrossRef]
  60. Stuart, G.; Jennings, S.; Kalich, M.; Rash, C.; Harding, T.; Craig, G. Flight performance using a hyperstereo helmet-mounted display: Adaptation to hyperstereopsis. Proc. SPIE Int. Soc. Opt. Eng. 2009, 7326. [Google Scholar] [CrossRef]
  61. Taylor, F.V. Change in size of the afterimage induced in total darkness. J. Exp. Psychol. 1941, 29, 75–80. [Google Scholar] [CrossRef]
  62. Mon-Williams, M.; Tresilian, J.R.; Plooy, A.; Wann, J.P.; Broerse, J. Looking at the task in hand: Vergence eye movements and perceived size. Exp. Brain Res. 1997, 117, 501–506. [Google Scholar] [CrossRef] [PubMed]
  63. Morrison, J.D.; Whiteside, T.C. Binocular cues in the perception of distance of a point source of light. Perception 1984, 13, 555–566. [Google Scholar] [CrossRef] [PubMed]
  64. Sperandio, I.; Kaderali, S.; Chouinard, P.A.; Frey, J.; Goodale, M.A. Perceived size change induced by nonvisual signals in darkness: The relative contribution of vergence and proprioception. J. Neurosci. Off. J. Soc. Neurosci. 2013, 33, 16915–16923. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  65. Bross, M. Emmert’s law in the dark: Active and passive proprioceptive effects on positive visual afterimages. Perception 2000, 29, 1385–1391. [Google Scholar] [CrossRef] [PubMed]
  66. Carey, D.P.; Allan, K. A motor signal and “visual” size perception. Exp. Brain Res. 1996, 110, 482–486. [Google Scholar] [CrossRef]
  67. Faivre, N.; Dönz, J.; Scandola, M.; Dhanis, H.; Ruiz, J.B.; Bernasconi, F.; Salomon, R.; Blanke, O. Self-Grounded Vision: Hand Ownership Modulates Visual Location through Cortical β and γ Oscillations. J. Neurosci. 2017, 37, 11–22. [Google Scholar] [CrossRef] [Green Version]
  68. Gregory, R.L.; Wallace, J.G.; Campbell, F.W. Changes in the size and shape of visual after-images observed in complete darkness during changes of position in space. Q. J. Exp. Psychol. 1959, 11, 54–55. [Google Scholar] [CrossRef]
  69. Ramsay, A.I.G.; Carey, D.P.; Jackson, S.R. Visual-proprioceptive mismatch and the Taylor illusion. Exp. Brain Res. 2007, 176, 173–181. [Google Scholar] [CrossRef] [PubMed]
  70. Lou, L. Apparent Afterimage Size, Emmert’s Law, and Oculomotor Adjustment. Perception 2007, 36, 1214–1228. [Google Scholar] [CrossRef] [PubMed]
  71. Suzuki, K. Effects of oculomotor cues on the apparent size of afterimages. Jpn. Psychol. Res. 1986, 28, 168–175. [Google Scholar] [CrossRef] [Green Version]
  72. Urist, M.J. Afterimages and ocular muscle proprioception. AMA Arch. Ophthalmol. 1959, 61, 230–232. [Google Scholar] [CrossRef] [PubMed]
  73. Zenkin, G.M.; Petrov, A.P. Transformation of the Visual Afterimage Under Subject’s Eye and Body Movements and the Visual Field Constancy Mechanisms. Perception 2015, 44, 973–985. [Google Scholar] [CrossRef] [PubMed]
  74. Marin, F.; Rohatgi, A.; Charlot, S. WebPlotDigitizer, a polyvalent and free software to extract spectra from old astronomical publications: Application to ultraviolet spectropolarimetry. arXiv 2017, arXiv:1708.02025. [Google Scholar]
  75. Ono, H.; Comerford, J. Stereoscopic depth constancy. In Stability and Constancy in Visual Perception: Mechanisms and Process; Epstein, W., Ed.; Wiley: Hoboken, NJ, USA, 1977. [Google Scholar]
  76. Bishop, P.O. Vertical Disparity, Egocentric Distance and Stereoscopic Depth Constancy: A New Interpretation. Proc. R. Soc. Lond. Ser. B Biol. Sci. 1989, 237, 445–469. [Google Scholar]
  77. Konrad, R.; Angelopoulos, A.; Wetzstein, G. Gaze-Contingent Ocular Parallax Rendering for Virtual Reality. arXiv 2019, arXiv:1906.09740. [Google Scholar]
  78. Linton, P. Would Gaze-Contingent Rendering Improve Depth Perception in Virtual and Augmented Reality? arXiv 2019, arXiv:1905.10366v1. [Google Scholar]
  79. Hoffman, D.M.; Girshick, A.R.; Akeley, K.; Banks, M.S. Vergence–accommodation conflicts hinder visual performance and cause visual fatigue. J. Vis. 2008, 8, 33. [Google Scholar] [CrossRef]
  80. Brainard, D.H. mQUESTPlus: A Matlab Implementation of QUEST+. Available online: https://github.com/brainardlab/mQUESTPlus (accessed on 30 September 2020).
  81. Watson, A.B. QUEST+: A general multidimensional Bayesian adaptive psychometric method. J. Vis. 2017, 17, 10. [Google Scholar] [CrossRef] [Green Version]
  82. Lakens, D.; Scheel, A.M.; Isager, P.M. Equivalence Testing for Psychological Research: A Tutorial. Adv. Methods Pract. Psychol. Sci. 2018, 1, 259–269. [Google Scholar] [CrossRef] [Green Version]
  83. Kleiner, M.; Brainard, D.; Pelli, D. What’s new in Psychtoolbox-3? Perception 2007, 36 (ECVP Abstract Supplement). [Google Scholar]
  84. Prins, N.; Kingdom, F.A.A. Applying the Model-Comparison Approach to Test Specific Research Hypotheses in Psychophysical Research Using the Palamedes Toolbox. Front. Psychol. 2018, 9, 1250. [Google Scholar] [CrossRef] [Green Version]
  85. Prins, N.; Kingdom, F.A.A. Available online: http://www.palamedestoolbox.org/hierarchicalbayesian.html (accessed on 30 September 2020).
  86. Rouder, J.N.; Speckman, P.L.; Sun, D.; Morey, R.D.; Iverson, G. Bayesian t tests for accepting and rejecting the null hypothesis. Psychon. Bull. Rev. 2009, 16, 225–237. [Google Scholar] [CrossRef]
  87. Hooge, I.T.C.; Hessels, R.S.; Nyström, M. Do pupil-based binocular video eye trackers reliably measure vergence? Vis. Res. 2019, 156, 1–9. [Google Scholar] [CrossRef]
  88. Siegel, H.; Duncan, C.P. Retinal Disparity and Diplopia vs. Luminance and Size of Target. Am. J. Psychol. 1960, 73, 280–284. [Google Scholar] [CrossRef]
  89. Choe, K.W.; Blake, R.; Lee, S.-H. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation. Vis. Res. 2016, 118, 48–59. [Google Scholar] [CrossRef]
  90. Drewes, J.; Zhu, W.; Hu, Y.; Hu, X. Smaller Is Better: Drift in Gaze Measurements due to Pupil Dynamics. PLoS ONE 2014, 9, e111197. [Google Scholar] [CrossRef] [Green Version]
  91. Wildenmann, U.; Schaeffel, F. Variations of pupil centration and their effects on video eye tracking. Ophthalmic Physiol. Opt. 2013, 33, 634–641. [Google Scholar] [CrossRef]
  92. Wyatt, H.J. The human pupil and the use of video-based eyetrackers. Vis. Res. 2010, 50, 1982–1988. [Google Scholar] [CrossRef] [Green Version]
  93. Balaban, C.D.; Kiderman, A.; Szczupak, M.; Ashmore, R.C.; Hoffer, M.E. Patterns of Pupillary Activity During Binocular Disparity Resolution. Front. Neurol. 2018, 9. [Google Scholar] [CrossRef]
  94. Naber, M.; Nakayama, K. Pupil responses to high-level image content. J. Vis. 2013, 13, 7. [Google Scholar] [CrossRef]
  95. González, E.G.; Allison, R.S.; Ono, H.; Vinnikov, M. Cue conflict between disparity change and looming in the perception of motion in depth. Vis. Res. 2010, 50, 136–143. [Google Scholar] [CrossRef] [Green Version]
  96. Faivre, N.; Arzi, A.; Lunghi, C.; Salomon, R. Consciousness is more than meets the eye: A call for a multisensory study of subjective experience. Neurosci. Conscious. 2017. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  97. Grove, C.; Cardinali, L.; Sperandio, I. Does Tool-Use Modulate the Perceived Size of an Afterimage During the Taylor Illusion? Perception 2019, 48, 181. [Google Scholar]
  98. Firestone, C.; Scholl, B.J. Cognition does not affect perception: Evaluating the evidence for “top-down” effects. Behav. Brain Sci. 2016, 39, e229. [Google Scholar] [CrossRef] [PubMed]
  99. Fodor, J.A. The Modularity of Mind: An Essay on Faculty Psychology; MIT Press: Cambridge, MA, USA, 1983. [Google Scholar]
  100. Pylyshyn, Z. Is vision continuous with cognition?: The case for cognitive impenetrability of visual perception. Behav. Brain Sci. 1999, 22, 341–365. [Google Scholar] [CrossRef]
  101. Chen, J.; Sperandio, I.; Goodale, M.A. Proprioceptive Distance Cues Restore Perfect Size Constancy in Grasping, but Not Perception, When Vision Is Limited. Curr. Biol. 2018, 28, 927–932.e4. [Google Scholar] [CrossRef] [Green Version]
  102. Faivre, N.; Salomon, R.; Blanke, O. Visual consciousness and bodily self-consciousness. Curr. Opin. Neurol. 2015, 28, 23–28. [Google Scholar] [CrossRef] [Green Version]
  103. Gepshtein, S.; Burge, J.; Ernst, M.O.; Banks, M.S. The combination of vision and touch depends on spatial proximity. J. Vis. 2005, 5, 7. [Google Scholar] [CrossRef] [Green Version]
  104. Hillis, J.M.; Ernst, M.O.; Banks, M.S.; Landy, M.S. Combining Sensory Information: Mandatory Fusion Within, but Not Between, Senses. Science 2002, 298, 1627–1630. [Google Scholar] [CrossRef] [Green Version]
  105. Ash, A.; Palmisano, S.; Kim, J. Vection in Depth during Consistent and Inconsistent Multisensory Stimulation. Perception 2011, 40, 155–174. [Google Scholar] [CrossRef] [Green Version]
  106. Fischer, M.H.; Kornmüller, A.E. Optokinetic-Induced Motion Perception and Optokinetic Nystagmus. J. Psychol. Neurol. 1930, 41, 273–308. [Google Scholar]
  107. Linton, P. V1 as an Egocentric Cognitive Map. PsyArXiv 2021. [Google Scholar] [CrossRef]
  108. Trotter, Y.; Celebrini, S.; Stricanne, B.; Thorpe, S.; Imbert, M. Modulation of neural stereoscopic processing in primate area V1 by the viewing distance. Science 1992, 257, 1279–1281. [Google Scholar] [CrossRef]
  109. Cottereau, B.; Durand, J.-B.; Vayssière, N.; Trotter, Y. The influence of eye vergence on retinotopic organization in human early visual cortex. Percept. ECVP Abstr. 2014, 81. [Google Scholar] [CrossRef] [Green Version]
  110. Cumming, B.G.; Parker, A.J. Binocular Neurons in V1 of Awake Monkeys Are Selective for Absolute, Not Relative, Disparity. J. Neurosci. 1999, 19, 5602–5618. [Google Scholar] [CrossRef] [Green Version]
  111. Dobbins, A.C.; Jeo, R.M.; Fiser, J.; Allman, J.M. Distance Modulation of Neural Activity in the Visual Cortex. Science 1998, 281, 552–555. [Google Scholar] [CrossRef] [Green Version]
  112. Trotter, Y.; Celebrini, S.; Stricanne, B.; Thorpe, S.; Imbert, M. Neural processing of stereopsis as a function of viewing distance in primate visual cortical area V1. J. Neurophysiol. 1996, 76, 2872–2885. [Google Scholar] [CrossRef]
  113. Trotter, Y.; Celebrini, S. Gaze direction controls response gain in primary visual-cortex neurons. Nature 1999, 398, 239–242. [Google Scholar] [CrossRef]
  114. Trotter, Y.; Celebrini, S.; Durand, J.B. Evidence for implication of primate area V1 in neural 3-D spatial localization processing. J. Physiol. Paris 2004, 98, 125–134. [Google Scholar] [CrossRef]
  115. Trotter, Y.; Stricanne, B.; Celebrini, S.; Thorpe, S.; Imbert, M. Neural Processing of Stereopsis as a Function of Viewing Distance; Oxford University Press: Oxford, UK, 1993. [Google Scholar]
  116. Gnadt, J.W.; Mays, L.E. Depth tuning in area LIP by disparity and accommodative cues. Soc. Neurosci. Abs. 1991, 17, 1113. [Google Scholar]
  117. Gnadt, J.W.; Mays, L.E. Neurons in monkey parietal area LIP are tuned for eye-movement parameters in three-dimensional space. J. Neurophysiol. 1995, 73, 280–297. [Google Scholar] [CrossRef]
  118. Quinlan, D.J.; Culham, J.C. FMRI reveals a preference for near viewing in the human parieto-occipital cortex. NeuroImage 2007, 36, 167–187. [Google Scholar] [CrossRef]
  119. Culham, J.; Gallivan, J.; Cavina-Pratesi, C.; Quinlan, D. FMRI investigations of reaching and ego space in human superior parieto-occipital cortex. In Embodiment, Ego-Space and Action; Klatzky, R.L., MacWhinney, B., Behrman, M., Eds.; Psychology Press: New York, NY, USA, 2008; pp. 247–274. [Google Scholar]
  120. Richards, W. Spatial remapping in the primate visual system. Kybernetik 1968, 4, 146–156. [Google Scholar] [CrossRef]
  121. Lehky, S.; Pouget, A.; Sejnowski, T. Neural Models of Binocular Depth Perception. Cold Spring Harb. Symp. Quant. Biol. 1990, 55, 765–777. [Google Scholar] [CrossRef] [Green Version]
  122. Pouget, A.; Sejnowski, T.J. A neural model of the cortical representation of egocentric distance. Cereb. Cortex 1994, 4, 314–329. [Google Scholar] [CrossRef] [Green Version]
  123. Chen, J.; Sperandio, I.; Henry, M.J.; Goodale, M.A. Changing the Real Viewing Distance Reveals the Temporal Evolution of Size Constancy in Visual Cortex. Curr. Biol. 2019, 29, 2237–2243.e4. [Google Scholar] [CrossRef]
  124. Johnston, E.B. Systematic distortions of shape from stereopsis. Vis. Res. 1991, 31, 1351–1360. [Google Scholar] [CrossRef]
  125. Saleem, A.B.; Diamanti, E.M.; Fournier, J.; Harris, K.D.; Carandini, M. Coherent encoding of subjective spatial position in visual cortex and hippocampus. Nature 2018, 562, 124–127. [Google Scholar] [CrossRef]
  126. Longuet-Higgins, H.C. The Role of the Vertical Dimension in Stereoscopic Vision. Perception 1982, 11, 377–386. [Google Scholar] [CrossRef]
  127. Mayhew, J. The Interpretation of Stereo-Disparity Information: The Computation of Surface Orientation and Depth. Perception 1982, 11, 387–407. [Google Scholar] [CrossRef]
  128. Mayhew, J.E.W.; Longuet-Higgins, H.C. A computational model of binocular depth perception. Nature 1982, 297, 376–378. [Google Scholar] [CrossRef]
  129. Gillam, B.; Lawergren, B. The induced effect, vertical disparity, and stereoscopic theory. Percept. Psychophys. 1983, 34, 121–130. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  130. Rogers, B. Perception: A Very Short Introduction; Oxford University Press: Oxford, UK, 2017. [Google Scholar]
  131. Cumming, B.G.; Johnston, E.B.; Parker, A.J. Vertical Disparities and Perception of Three-Dimensional Shape. Nat. Lond. 1991, 349, 411–413. [Google Scholar] [CrossRef] [PubMed]
  132. Sobel, E.C.; Collett, T.S. Does vertical disparity scale the perception of stereoscopic depth? Proc. R. Soc. Lond. Ser. B Biol. Sci. 1991, 244, 87–90. [Google Scholar] [CrossRef]
  133. Vienne, C.; Plantier, J.; Neveu, P.; Priot, A.-E. The Role of Vertical Disparity in Distance and Depth Perception as Revealed by Different Stereo-Camera Configurations. I-Perception 2016, 7. [Google Scholar] [CrossRef]
  134. Linton, P. Does Human Vision Triangulate Absolute Distance. British Machine Vision Association: 3D Worlds from 2D Images in Humans and Machines. 2020. Available online: https://www.youtube.com/watch?v=6P3EYCEn52A (accessed on 30 September 2020).
  135. Tehrani, M.A.; Majumder, A.; Gopi, M. Correcting perceived perspective distortions using object specific planar transformations. In Proceedings of the IEEE International Conference on Computational Photography (ICCP), Evanston, IL, USA, 13–15 May 2016. [Google Scholar] [CrossRef]
  136. Gogel, W.C. The Sensing of Retinal Size. Vis. Res. 1969, 9, 1079–1094. [Google Scholar] [CrossRef]
  137. Predebon, J. The role of instructions and familiar size in absolute judgments of size and distance. Percept. Psychophys. 1992, 51, 344–354. [Google Scholar] [CrossRef] [Green Version]
  138. Bower, T.G.R. Stimulus variables determining space perception in infants. Science 1965, 149, 88–89. [Google Scholar] [CrossRef]
  139. Granrud, C.E. Size constancy in newborn human infants [ARVO Abstract]. Investig. Ophthalmol. Visual Sci. 1987, 28, 5. [Google Scholar]
  140. Slater, A.; Mattock, A.; Brown, E. Size constancy at birth: Newborn infants’ responses to retinal and real size. J. Exp. Child. Psychol. 1990, 49, 314–322. [Google Scholar] [CrossRef]
  141. Gregory, R.L.; Wallace, J.G. Recovery from early blindness. Exp. Psychol. Soc. Monogr. 1963, 2, 65–129. [Google Scholar]
  142. Šikl, R.; Šimeček, M.; Porubanová-Norquist, M.; Bezdíček, O.; Kremláček, J.; Stodůlka, P.; Fine, I.; Ostrovsky, Y. Vision after 53 years of blindness. I-Perception 2013, 4, 498–507. [Google Scholar] [CrossRef]
Figure 1. Demonstration of pictorial size constancy in the stimulus used by [2] (© Scott Murray).
Figure 2. Illustration of how the vergence specified distance was manipulated in the experiment using a fixed display. By increasing the separation between the targets on the display (top vs. bottom image) we were able to reduce the vergence specified distance (indicated by arrow).
Figure 3. Summary of the two simultaneous manipulations of the targets during each trial: (1) the physical separation of the targets on the display, which manipulated the vergence specified distance of the target, and (2) the physical size of the targets on the display. Participants were asked to judge whether the target got bigger or smaller.
Figure 4. Photograph and cross-section plans of the apparatus.
Figure 5. OpenGL rendering of the target achieves the correct retinal image for a target with a constant radius and orientation to the eye, whilst presenting the target on a fronto-parallel display.
Figure 6. Simulated experiment. We simulated the experiment 10,000 times in Quest+ (bias = 0, detection threshold = 5%, lapse rate = 2%) to model how increasing the number of participants would improve the accuracy of our hierarchical Bayesian estimate of the true bias (true bias = 0). We determined that we needed n ≥ 5 to rule out an effect greater than our smallest effect size of interest (vergence size constancy > 1.5%).
Figure 7. Results. (A) Hierarchical Bayesian model of the population psychometric function in black (based on 15,000 posterior estimates, 100 representative posterior estimates in red). Also shown are predictions for various degrees of vergence size constancy effect sizes (in grey). (B) Probability density function of 15,000 posterior estimates of the population bias, with a non-significant bias of −0.2%. (C) Individual subject results fitted with Bayesian psychometric functions in black (based on 15,000 posterior estimates, 100 representative posterior estimates in red). Blue dots indicate the physical size changes tested by Quest+ (with the darkness of each dot indicating the number of times it was tested). Individual biases cluster around zero (from −2.2% to 1.2%). For each participant, alpha (a) is the bias of the logistic function, and beta (b) is the slope.
Figure 8. Fall-off of vergence angle with distance from [15]. Just as with disparity, the vergence angle falls off as a function of 1/distance².
Figure 9. Arrange two coins so their centres are the interpupillary distance apart (approximately 6 cm). Cross fuse the two coins so the right eye looks at the left coin and the left eye looks at the right coin. The central fused coin appears to shrink in size. This is vergence micropsia.
Figure 10. Differences in retinal projections for the left and right coins in the right eye.
Figure 11. Telestereoscopic viewing. Mirrors alter the path of light to the eyes. In order to view the same point in space, the eyes now have to rotate more (as indicated by the solid line) than they ordinarily would (as indicated by the dotted line).
Figure 12. ‘Wernigerode Boulevard’ (2011) by Sascha Becher. © Sascha Becher. Used with permission from: https://www.flickr.com/photos/stereotron/6597314627/in/album-72157612377392630/ (accessed on 30 September 2020). For more of Sascha Becher’s telestereoscopic images please see: https://www.flickr.com/photos/stereotron/ (accessed on 30 September 2020).
Figure 13. ‘Trans-Canada Highway @ Neyes National Park’ (2014) by Sascha Becher. © Sascha Becher. Used with permission from: https://www.flickr.com/photos/stereotron/30208366315 (accessed on 30 September 2020). For more of Sascha Becher’s telestereoscopic images please see: https://www.flickr.com/photos/stereotron/ (accessed on 30 September 2020).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
