Size Constancy for Perception and Action

A special issue of Vision (ISSN 2411-5150).

Deadline for manuscript submissions: closed (31 December 2022) | Viewed by 38643

Special Issue Editors


Guest Editor
Department of Psychology, University of Western Ontario, London, ON N6A 5C2, Canada
Interests: perception and action; neuropsychology; fMRI; grasping; blindsight; visual form agnosia; optic ataxia

Guest Editor
Department of Psychology, University of British Columbia, 2136 West Mall, Vancouver, BC V6T 1Z4, Canada
Interests: visual perception; neuropsychology; sensorimotor control of prehension

Special Issue Information

Dear Colleagues,

Recently, there has been a resurgence of interest in size constancy, the ability to perceive the real-world size of an object despite dramatic changes in the size of its retinal image. The brain presumably achieves size constancy by integrating information about the size and shape of the object's image projected on the retina with information about its viewing distance. Such computations are critical not only to our stable perception of objects but also to the visual control of actions directed at those objects. The nature of those computations, however, remains unclear, and the relationship between the mechanisms mediating size constancy for perception and those mediating size constancy for action is a matter of some debate. In this Special Issue, we invite both empirical and theoretical contributions that address these questions and provide insights into the size-constancy mechanisms underlying our stable perception of the world and the visual control of effective object-directed actions.
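For concreteness, the integration described above is often summarized by the classical size-distance invariance relation: an object's linear size can be recovered by scaling its retinal (angular) size by an estimate of viewing distance. The short Python sketch below illustrates that relation only; the function names and example values are ours and are not drawn from any contribution in this issue.

import math

def retinal_angle_deg(physical_size_cm, distance_cm):
    # Visual angle subtended by an object of a given physical size at a given distance.
    return math.degrees(2 * math.atan(physical_size_cm / (2 * distance_cm)))

def recovered_size_cm(angle_deg, estimated_distance_cm):
    # Size-distance invariance: scale the angular size by the estimated viewing distance.
    return 2 * estimated_distance_cm * math.tan(math.radians(angle_deg) / 2)

# A 10 cm object at 50 cm and at 100 cm produces very different retinal images ...
near_angle = retinal_angle_deg(10, 50)    # about 11.4 degrees
far_angle = retinal_angle_deg(10, 100)    # about 5.7 degrees

# ... yet, given an accurate distance estimate, the recovered size is constant.
print(recovered_size_cm(near_angle, 50))    # ~10 cm
print(recovered_size_cm(far_angle, 100))    # ~10 cm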

Prof. Dr. Melvyn A. Goodale
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Vision is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)


Research

23 pages, 4861 KiB  
Article
The Effects of Adding Pictorial Depth Cues to the Poggendorff Illusion
by Gizem Y. Yildiz, Bailey G. Evans and Philippe A. Chouinard
Vision 2022, 6(3), 44; https://doi.org/10.3390/vision6030044 - 18 Jul 2022
Cited by 1 | Viewed by 2826
Abstract
We tested if the misapplication of perceptual constancy mechanisms might explain the perceived misalignment of the oblique lines in the Poggendorff illusion. Specifically, whether these mechanisms might treat the rectangle in the middle portion of the Poggendorff stimulus as an occluder in front of one long line appearing on either side, causing an apparent decrease in the rectangle’s width and an apparent increase in the misalignment of the oblique lines. The study aimed to examine these possibilities by examining the effects of adding pictorial depth cues. In experiments 1 and 2, we presented a central rectangle composed of either large or small bricks to determine if this manipulation would change the perceived alignment of the oblique lines and the perceived width of the central rectangle, respectively. The experiments demonstrated no changes that would support a misapplication of perceptual constancy in driving the illusion, despite some evidence of perceptual size rescaling of the central rectangle. In experiment 3, we presented Poggendorff stimuli in front and at the back of a corridor background rich in texture and linear perspective depth cues to determine if adding these cues would affect the Poggendorff illusion. The central rectangle was physically large and small when presented in front and at the back of the corridor, respectively. The strength of the Poggendorff illusion varied as a function of the physical size of the central rectangle, and, contrary to our predictions, the addition of pictorial depth cues in both the central rectangle and the background decreased rather than increased the strength of the illusion. The implications of these results with regards to different theories are discussed. It could be the case that the illusion depends on both low-level and cognitive mechanisms and that deleterious effects occur on the former when the latter ascribes more certainty to the oblique lines being the same line receding into the distance. Full article
(This article belongs to the Special Issue Size Constancy for Perception and Action)

15 pages, 2237 KiB  
Article
View Normalization of Object Size in the Right Parietal Cortex
by Sylvia Hoba, Gereon R. Fink, Hang Zeng and Ralph Weidner
Vision 2022, 6(3), 41; https://doi.org/10.3390/vision6030041 - 1 Jul 2022
Cited by 4 | Viewed by 2250
Abstract
Prior knowledge alters perception already on early levels of processing. For instance, judging the display size of an object is affected by its familiar size. Using functional magnetic resonance imaging, we investigated the neural processes involved in resolving ambiguities between familiar object size and physical object size in 33 healthy human subjects. The familiar size was either small or large, and the object was displayed as either small or large. Thus, the size of the displayed object was either congruent or incongruent with its internally stored canonical size representation. Subjects were asked to indicate where the stimuli appeared on the screen as quickly and accurately as possible, thereby ensuring that differential activations cannot be ascribed to explicit object size judgments. Incongruent (relative to congruent) object displays were associated with enhanced activation of the right intraparietal sulcus (IPS). These data are consistent with but extend previous patient studies, which found the right parietal cortex involved in matching visual objects presented atypically to prototypical object representations, suggesting that the right IPS supports view normalization of objects. In a second experiment, using a parametric design, a region-of-interest analysis supported this notion by showing that increases in size mismatch between the displayed size of an object and its familiar viewing size were associated with an increased right IPS activation. We conclude that the right IPS performs view normalization of mismatched information about the internally stored prototypical size and the current viewing size of an object. Full article
(This article belongs to the Special Issue Size Constancy for Perception and Action)

16 pages, 1842 KiB  
Article
Size Constancy Mechanisms: Empirical Evidence from Touch
by Luigi Tamè, Suzuki Limbu, Rebecca Harlow, Mita Parikh and Matthew R. Longo
Vision 2022, 6(3), 40; https://doi.org/10.3390/vision6030040 - 1 Jul 2022
Cited by 2 | Viewed by 2969
Abstract
Several studies have shown the presence of large anisotropies for tactile distance perception across several parts of the body. The tactile distance between two touches on the dorsum of the hand is perceived as larger when they are oriented mediolaterally (across the hand) than proximodistally (along the hand). This effect can be partially explained by the characteristics of primary somatosensory cortex representations. However, this phenomenon is significantly attenuated relative to differences in acuity and cortical magnification, suggesting a process of tactile size constancy. It is unknown whether the same kind of compensation also takes place when estimating the size of a continuous object. Here, we investigate whether the tactile anisotropy that typically emerges when participants have to estimate the distance between two touches is also present when a continuous object touches the skin and participants have to estimate its size. In separate blocks, participants judged which of two tactile distances or objects on the dorsum of their hand felt larger. One stimulation (first or second) was aligned with the proximodistal axis (along the hand) and the other with the mediolateral axis (across the hand). Results showed a clear anisotropy for distances between two distinct points, with across distances consistently perceived as larger than along distances, as in previous studies. Critically, however, this bias was significantly reduced or absent for judgments of the length of continuous objects. These results suggest that a tactile size constancy process is more effective when the tactile size of an object has to be approximated compared to when the distance between two touches has to be determined. The possible mechanism subserving these results is described and discussed. We suggest that a lateral inhibition mechanism, when an object touches the skin, provides information through the distribution of the inhibitory subfields of the RF about the shape of the tactile RF itself. Such a process allows an effective tactile size compensatory mechanism where a good match between the physical and perceptual dimensions of the object is achieved. Full article
(This article belongs to the Special Issue Size Constancy for Perception and Action)

17 pages, 529 KiB  
Article
The Riemannian Geometry Theory of Visually-Guided Movement Accounts for Afterimage Illusions and Size Constancy
by Peter D. Neilson, Megan D. Neilson and Robin T. Bye
Vision 2022, 6(2), 37; https://doi.org/10.3390/vision6020037 - 20 Jun 2022
Cited by 1 | Viewed by 2447
Abstract
This discussion paper supplements our two theoretical contributions previously published in this journal on the geometric nature of visual space. We first show here how our Riemannian formulation explains the recent experimental finding (published in this special issue on size constancy) that, contrary to conclusions from past work, vergence does not affect perceived size. We then turn to afterimage experiments connected to that work. Beginning with the Taylor illusion, we explore how our proposed Riemannian visual–somatosensory–hippocampal association memory network accounts in the following way for perceptions that occur when afterimages are viewed in conjunction with body movement. The Riemannian metric incorporated in the association memory network accurately emulates the warping of 3D visual space that is intrinsically introduced by the eye. The network thus accurately anticipates the change in size of retinal images of objects with a change in Euclidean distance between the egocentre and the object. An object will only be perceived to change in size when there is a difference between the actual size of its image on the retina and the anticipated size of that image provided by the network. This provides a central mechanism for size constancy. If the retinal image is the afterimage of a body part, typically a hand, and that hand moves relative to the egocentre, the afterimage remains constant but the proprioceptive signals change to give the new hand position. When the network gives the anticipated size of the hand at its new position this no longer matches the fixed afterimage, hence a size-change illusion occurs. Full article
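The mismatch logic in this account can be made concrete with a small numeric sketch. This is our own simplification for illustration, not the authors' Riemannian model: the anticipated angular size of the hand shrinks as proprioception reports that the hand has moved farther from the egocentre, while the afterimage's angular size cannot change, so interpreting the unchanged image at the new distance implies a larger hand (the Taylor illusion).

import math

def angular_size_deg(linear_size_cm, distance_cm):
    # Angle subtended at the eye by an object of given linear size and distance.
    return math.degrees(2 * math.atan(linear_size_cm / (2 * distance_cm)))

hand_width_cm = 8.0
afterimage_deg = angular_size_deg(hand_width_cm, 30.0)  # afterimage formed with the hand at 30 cm

# The hand moves to 60 cm; proprioception updates the anticipated image size,
# but the afterimage on the retina cannot change.
anticipated_deg = angular_size_deg(hand_width_cm, 60.0)  # roughly half the afterimage's angle

# Interpreting the unchanged afterimage at the new distance implies a larger hand:
apparent_width_cm = 2 * 60.0 * math.tan(math.radians(afterimage_deg) / 2)
print(round(apparent_width_cm, 1))  # ~16.0 cm, i.e. the afterimage appears to grow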
(This article belongs to the Special Issue Size Constancy for Perception and Action)

21 pages, 4286 KiB  
Article
Can People Infer Distance in a 2D Scene Using the Visual Size and Position of an Object?
by John Jong-Jin Kim and Laurence R. Harris
Vision 2022, 6(2), 25; https://doi.org/10.3390/vision6020025 - 4 May 2022
Cited by 3 | Viewed by 6134
Abstract
Depth information is limited in a 2D scene and for people to perceive the distance of an object, they need to rely on pictorial cues such as perspective, size constancy and elevation in the scene. In this study, we tested whether people could use an object’s size and its position in a 2D image to determine its distance. In a series of online experiments, participants viewed a target representing their smartphone rendered within a 2D scene. They either positioned it in the scene at the distance they thought was correct based on its size or adjusted the target to the correct size based on its position in the scene. In all experiments, the adjusted target size and positions were not consistent with their initially presented positions and sizes and were made larger and moved further away on average. Familiar objects influenced adjusted position from size but not adjusted size from position. These results suggest that in a 2D scene, (1) people cannot use an object’s visual size and position relative to the horizon to infer distance reliably and (2) familiar objects in the scene affect perceived size and distance differently. The differences found demonstrate that size and distance perception processes may be independent. Full article
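For reference, the two cues manipulated in this study are each, in principle, sufficient to specify distance for an object resting on the ground plane: a familiar physical size combined with angular size, or angular elevation below the horizon combined with eye height. The sketch below shows these two textbook relations with example values of our own choosing (including the assumed eye height); it is not the authors' analysis.

import math

EYE_HEIGHT_M = 1.6  # assumed eye height above the ground plane

def distance_from_size(known_size_m, angular_size_deg):
    # Size cue: a familiar physical size plus its angular size specifies distance.
    return known_size_m / (2 * math.tan(math.radians(angular_size_deg) / 2))

def distance_from_elevation(angle_below_horizon_deg):
    # Elevation cue: for an object on the ground, the angle below the horizon specifies distance.
    return EYE_HEIGHT_M / math.tan(math.radians(angle_below_horizon_deg))

# A 15 cm object subtending about 1.7 degrees should lie at roughly 5 m ...
print(round(distance_from_size(0.15, 1.7), 1))
# ... and an object seen about 17.7 degrees below the horizon lies at roughly the same distance.
print(round(distance_from_elevation(17.7), 1))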
(This article belongs to the Special Issue Size Constancy for Perception and Action)

15 pages, 1892 KiB  
Article
Binocular Viewing Facilitates Size Constancy for Grasping and Manual Estimation
by Ewa Niechwiej-Szwedo, Michael Cao and Michael Barnett-Cowan
Vision 2022, 6(2), 23; https://doi.org/10.3390/vision6020023 - 20 Apr 2022
Cited by 4 | Viewed by 2694
Abstract
A prerequisite for efficient prehension is the ability to estimate an object’s distance and size. While most studies demonstrate that binocular viewing is associated with a more efficient grasp programming and execution compared to monocular viewing, the factors contributing to this advantage are not fully understood. Here, we examined how binocular vision facilitates grasp scaling using two tasks: prehension and manual size estimation. Participants (n = 30) were asked to either reach and grasp an object or to provide an estimate of an object’s size using their thumb and index finger. The objects were cylinders with a diameter of 0.5, 1.0, or 1.5 cm placed at three distances along the midline (40, 42, or 44 cm). Results from a linear regression analysis relating grip aperture to object size revealed that grip scaling during monocular viewing was reduced similarly for both grasping and estimation tasks. Additional analysis revealed that participants adopted a larger safety margin for grasping during monocular compared to binocular viewing, suggesting that monocular depth cues do not provide sufficient information about an object’s properties, which consequently leads to a less efficient grasp execution. Full article
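The grip-scaling measure referred to here is the slope of a linear regression of grip aperture on object size, with a shallower slope indicating weaker scaling and the mean difference between aperture and object size serving as a simple index of the safety margin. The sketch below illustrates that kind of analysis with invented aperture values for one hypothetical participant; it is not the authors' data or code.

import numpy as np

object_sizes_cm = np.array([0.5, 1.0, 1.5, 0.5, 1.0, 1.5])

# Hypothetical peak grip apertures (cm) for one participant in each viewing condition.
binocular_apertures = np.array([2.6, 3.1, 3.7, 2.5, 3.2, 3.6])
monocular_apertures = np.array([3.2, 3.5, 3.9, 3.3, 3.4, 4.0])

def grip_scaling(sizes, apertures):
    # Slope of aperture vs. size indexes grip scaling; mean (aperture - size) indexes the safety margin.
    slope, intercept = np.polyfit(sizes, apertures, 1)
    safety_margin = np.mean(apertures - sizes)
    return slope, safety_margin

print(grip_scaling(object_sizes_cm, binocular_apertures))  # steeper slope, smaller margin
print(grip_scaling(object_sizes_cm, monocular_apertures))  # shallower slope, larger margin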
(This article belongs to the Special Issue Size Constancy for Perception and Action)

13 pages, 3880 KiB  
Article
Familiarity with an Object’s Size Influences the Perceived Size of Its Image
by Jeroen B. J. Smeets, Pauline E. Weijs and Eli Brenner
Vision 2022, 6(1), 14; https://doi.org/10.3390/vision6010014 - 24 Feb 2022
Viewed by 3283
Abstract
It is known that judgments about objects’ distances are influenced by familiar size: a soccer ball looks farther away than a tennis ball if their images are equally large on the retina. We here investigate whether familiar size also influences judgments about the size of images of objects that are presented side-by-side on a computer screen. Sixty-three participants indicated which of two images appeared larger on the screen in a 2-alternative forced-choice discrimination task. The objects were either two different types of balls, two different types of coins, or a ball and a grey disk. We found that the type of ball biased the comparison between their image sizes: the size of the image of the soccer ball was over-estimated by about 5% (assimilation). The bias in the comparison between the two balls was equal to the sum of the biases in the comparisons with the grey disk. The bias for the coins was smaller and in the opposite direction (contrast). The average precision of the size comparison was 3.5%, irrespective of the type of object. We conclude that knowing a depicted object’s real size can influence the perceived size of its image, but the perceived size is not always attracted towards the familiar size. Full article
(This article belongs to the Special Issue Size Constancy for Perception and Action)

13 pages, 1469 KiB  
Article
Developmental Trajectories of Size Constancy as Implicitly Examined by Simple Reaction Times
by Irene Sperandio
Vision 2021, 5(4), 50; https://doi.org/10.3390/vision5040050 - 19 Oct 2021
Cited by 3 | Viewed by 3196
Abstract
It is still unclear whether size constancy is an innate ability or whether it develops with age. As many developmental studies are limited to the child’s comprehension of the task instructions, here, an implicit measure of perceived size, namely, simple manual reaction time (RT), was opted for based on the assumption that perceptually bigger objects generate faster detection times. We examined size constancy in children (from 5 to 14 years of age) and adults using a simple RT approach. Participants were presented with pictures of tennis balls on a screen that was physically moved to two viewing distances. Visual stimuli were adjusted in physical size in order to subtend the same visual angle across distances, determining two conditions: a small-near tennis ball vs. a big-far tennis ball. Thanks to size constancy, the two tennis balls were perceived as different even though they were of equal size on the retina. Stimuli were also matched in terms of luminance. Participants were asked to react as fast as possible to the onset of the stimuli. The results show that the RTs reflected the perceived rather than the retinal size of the stimuli across the different age groups, such that participants responded faster to stimuli that were perceived as bigger than those perceived as smaller. Hence, these findings are consistent with the idea that size constancy is already present in early childhood, at least from the age of five, and does not require extensive visual learning. Full article
(This article belongs to the Special Issue Size Constancy for Perception and Action)

27 pages, 8040 KiB  
Article
Does Vergence Affect Perceived Size?
by Paul Linton
Vision 2021, 5(3), 33; https://doi.org/10.3390/vision5030033 - 22 Jun 2021
Cited by 8 | Viewed by 4737
Abstract
Since Kepler (1604) and Descartes (1637), it has been suggested that ‘vergence’ (the angular rotation of the eyes) plays a key role in size constancy. However, this has never been tested divorced from confounding cues such as changes in the retinal image. In our experiment, participants viewed a target which grew or shrank in size over 5 s. At the same time, the fixation distance specified by vergence was reduced from 50 to 25 cm. The question was whether this change in vergence affected the participants’ judgements of whether the target grew or shrank in size? We found no evidence of any effect, and therefore no evidence that eye movements affect perceived size. If this is correct, then our finding has three implications. First, perceived size is much more reliant on cognitive influences than previously thought. This is consistent with the argument that visual scale is purely cognitive in nature (Linton, 2017; 2018). Second, it leads us to question whether the vergence modulation of V1 contributes to size constancy. Third, given the interaction between vergence, proprioception, and the retinal image in the Taylor illusion, it leads us to ask whether this cognitive approach could also be applied to multisensory integration. Full article
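As background to the manipulation described here, the vergence angle is fixed by the separation between the eyes and the fixation distance, so reducing the fixation distance from 50 to 25 cm roughly doubles the vergence demand. The sketch below shows that geometric relation, assuming a typical interocular distance of 6.3 cm (our assumption, not a value taken from the paper).

import math

INTEROCULAR_DISTANCE_CM = 6.3  # assumed typical adult value

def vergence_angle_deg(fixation_distance_cm):
    # Angle between the two lines of sight when both eyes fixate a point at the given distance.
    return math.degrees(2 * math.atan(INTEROCULAR_DISTANCE_CM / (2 * fixation_distance_cm)))

print(round(vergence_angle_deg(50), 2))  # ~7.2 degrees at the farther fixation distance
print(round(vergence_angle_deg(25), 2))  # ~14.4 degrees after the fixation distance is halved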
(This article belongs to the Special Issue Size Constancy for Perception and Action)

59 pages, 4183 KiB  
Article
A Riemannian Geometry Theory of Synergy Selection for Visually-Guided Movement
by Peter D. Neilson, Megan D. Neilson and Robin T. Bye
Vision 2021, 5(2), 26; https://doi.org/10.3390/vision5020026 - 25 May 2021
Cited by 3 | Viewed by 5497
Abstract
Bringing together a Riemannian geometry account of visual space with a complementary account of human movement synergies we present a neurally-feasible computational formulation of visuomotor task performance. This cohesive geometric theory addresses inherent nonlinear complications underlying the match between a visual goal and an optimal action to achieve that goal: (i) the warped geometry of visual space causes the position, size, outline, curvature, velocity and acceleration of images to change with changes in the place and orientation of the head, (ii) the relationship between head place and body posture is ill-defined, and (iii) mass-inertia loads on muscles vary with body configuration and affect the planning of minimum-effort movement. We describe a partitioned visuospatial memory consisting of the warped posture-and-place-encoded images of the environment, including images of visible body parts. We depict synergies as low-dimensional submanifolds embedded in the warped posture-and-place manifold of the body. A task-appropriate synergy corresponds to a submanifold containing those postures and places that match the posture-and-place-encoded visual images that encompass the required visual goal. We set out a reinforcement learning process that tunes an error-reducing association memory network to minimize any mismatch, thereby coupling visual goals with compatible movement synergies. A simulation of a two-degrees-of-freedom arm illustrates that, despite warping of both visual space and posture space, there exists a smooth one-to-one and onto invertible mapping between vision and proprioception. Full article
(This article belongs to the Special Issue Size Constancy for Perception and Action)