Abstract
Mirror symmetry is an important and common feature of the visual world, which has attracted the interest of scientists, artists, and philosophers for centuries. The human visual system is very sensitive to mirror symmetry; symmetry is detected quickly and accurately and influences perception even when not relevant to the task at hand. Neuroimaging studies have identified mirror symmetry-specific haemodynamic and electrophysiological responses in extra-striate regions of the visual cortex, and these findings closely align with behavioural psychophysical findings in terms of the magnitude and sensitivity of the response. However, as we go on to discuss, the location of these responses is at odds with where psychophysical models based on early visual filters predict they should arise. In attempts to capture and explain mirror symmetry perception, various models have been developed and refined as our understanding of the factors influencing mirror symmetry perception has grown. The current review provides a contemporary overview of the psychophysical and neuroimaging understanding of mirror symmetry perception in human vision. We then consider how new findings align with predominant spatial filtering models of mirror symmetry perception to identify key factors that need to be accounted for in current and future iterations.
1. Introduction
The existence of symmetries has long fascinated scientists, artists and philosophers. We see an appreciation for natural and artificial visual symmetries across species, including bees [1,2,3], pigeons [4] and chickens [5]. We can also observe different forms of symmetry across all physical and philosophical domains, including in animals and plants [6], in chemistry [7], in physics and mathematics [8,9], and in language, literature and music [10,11,12,13,14] (See Figure 1).
Figure 1.
Examples of natural and artificial mirror symmetries in the visual world include (A) animals, (B) art, such as the classic works by Cimabue [13], and (C) architecture. From Cimabue [13] and the online database Unsplash [14] (https://unsplash.com/, accessed on 15 August 2022).
In humans, symmetry is recognised visually and preferentially viewed as early as 4 months of age [15] and is omnipresent across cultures [16], including in artworks across time and place [12]. Some researchers have also argued that the preference for symmetry in humans, as it is in animals, occurs because symmetry is a marker of healthy development and, therefore, mate attractiveness [17,18,19,20]. Others have suggested that symmetry may also play an important role in scene perception, particularly when identifying objects in a visual scene [21] and in figure/ground segmentation [22,23]. More recently, the importance of symmetry as an attractive and useful feature from a human factors perspective has been noted in terms of design, marketing, architecture and artificial vision [24,25,26]. These examples serve to highlight how symmetry can provide a means of structure and organisation across a range of seemingly unrelated disciplines and thus is an important organising principle of our world more generally [12].
1.1. Focus and Structure of the Current Review
The aim of this review is to provide an overview of historical and contemporary research findings on the processing of visual mirror symmetry in human observers. This is intended to build on and extend previous reviews, the most recent of which was conducted by Treder [27]. We begin by outlining the rationale behind focusing on visual mirror symmetry, specifically by exploring its special status in visual perception and how this differs from other types of symmetries. We review psychophysical studies of mirror symmetry in terms of both global forms at the pattern level and local features at the element level. As part of this overview, we explore a set of recent studies exploring the temporal features of symmetry processing, including the temporal integration of symmetry information. We then turn our attention to the neuroimaging literature, including haemodynamic and electrophysiology signals specific to and modulated by symmetry. Finally, we examine the current prevailing models of visual symmetry processing—the so-called spatial filter models. We examine in detail three influential models by Dakin and Watt [28], Dakin and Hess [29], and Rainville and Kingdom [30] and consider the strengths of these models, particularly their biological plausibility and alignment with established features of processing in early stages of the visual system (e.g., spatial filters). However, although initially successful, there are many recently established findings in symmetry perception that spatial filter models cannot account for in their current form. This includes the beneficial effect of higher-order structure and the visual system’s ability to identify symmetry in the context of spatial and temporal manipulations (e.g., temporal integration, opposite contrast polarity feature pairs, and local orientation variations). 
Furthermore, these models predict that symmetry perception should arise as early as the primary visual cortex (V1), which is not consistent with neuroimaging studies suggesting that symmetry information activates V4 and other extra-striate regions. With this in mind, this review highlights the need for updated, revised models of symmetry processing mechanisms with the breadth and flexibility to account for contemporary psychophysical and neuroimaging findings. For conciseness, “symmetry” in this review will refer specifically to mirror symmetry.
1.2. Special Status of Mirror Symmetry
While precise definitions of symmetry vary across disciplines, applications and philosophical viewpoints, when an object, living entity, pattern or concept is described as symmetric, its key features are repeated in some way. This review focuses on visual symmetry and, more specifically, mirror symmetry (also referred to as bilateral or reflection symmetry). In vision, an object or pattern is considered to be mirror-symmetric when it can be divided into two identical but reflected (i.e., mirrored) halves along at least one central axis [27]. For example, the butterfly in Figure 2A has two wings on either side of the body, which acts as the axis of reflection. Further, the wings’ shape and patterns are identical but opposite; they form mirror images of each other about a diagonal midline (in this example image). In other words, the image properties I at a distance +D orthogonal to the symmetry axis are identical to the properties at −D, also orthogonal to the axis:

I(axis + D) = I(axis − D)  (1)
Figure 2.
Real-world examples of the different types of symmetries include (A) reflection or mirror symmetry, (B) translation or repetition symmetry, and (C) rotational symmetry. Available on https://unsplash.com/ [14], accessed on 15 August 2021.
In perfect symmetry, all components of the image would satisfy Equation (1). However, it is important to note that this is not necessary for symmetry to be detected, and most of the studies reviewed in this article focus on minimum detection thresholds (e.g., how much symmetry information is required to be reliably detected or discriminated). When an image only partially satisfies Equation (1), symmetry is still detectable. For example, if we were to apply Equation (1) to Figure 1B above, the symmetry response would be imperfect, as the symmetry information in the image is imperfect; however, a symmetry signal would still be carried by those image elements that do match. As long as this signal is present, other kinds of information [31] should not affect the minimal detection thresholds.
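To make the idea of partial symmetry concrete, the sketch below quantifies the proportion of dots in a pattern that satisfy Equation (1) about a vertical axis. This is an illustrative toy only; the function name, tolerance, and coordinate conventions are ours and do not correspond to any model in the literature.

```python
import numpy as np

def symmetry_signal(points, tol=1e-6):
    """Proportion of dots whose mirror image about the vertical axis
    x = 0 coincides with another dot, i.e. dots that satisfy
    Equation (1) to within `tol`. Illustrative sketch only."""
    mirrored = points * np.array([-1.0, 1.0])   # reflect x -> -x
    matched = 0
    for m in mirrored:
        # a dot contributes signal if some dot lies at its mirror position
        if np.min(np.linalg.norm(points - m, axis=1)) <= tol:
            matched += 1
    return matched / len(points)

rng = np.random.default_rng(0)
# one half of a pattern, kept clear of the axis, plus its reflection
half = rng.uniform([0.05, -1.0], [1.0, 1.0], size=(50, 2))
pattern = np.vstack([half, half * [-1.0, 1.0]])
```

A perfectly symmetric pattern built this way yields a signal of 1.0, whereas a fully random pattern yields a value near zero, mirroring the signal-to-noise logic of the threshold studies discussed below.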
One of the earliest and most consistent findings in symmetry research is the special status of mirror symmetry over repetition and rotation [25]. The human observer’s sensitivity to vertical mirror symmetry was noted in the writings of Mach [32] in the late 19th century. Mach observed that reflection was more readily perceived than other symmetry types and that there appeared to be a difference in sensitivity depending on the orientation of the axis of symmetry, commenting that:
“vertical [mirror] symmetry pleases us, whilst horizontal symmetry is indifferent and is only noticed by the experienced eye”(p. 33).
Mach [32] also goes on to postulate that symmetry perception is innate or inbuilt in the visual system, and given its prevalence across the visual environment,
“the sense of symmetry, although primarily acquired by means of the eyes, cannot be wholly limited to the visual organs. It must also be deeply rooted in the other parts of the organism by ages of practice…”(p. 35).
Mach’s [32] writing has inspired a rich and detailed history of research interest into how, when, where and why mirror symmetry is processed in the visual system.
1.3. Other Types of Symmetry
Other symmetries include repetition symmetry (Figure 2B), where identical sub-components are repeated across space at regular intervals, such as in the tiled friezes in Figure 2B; for these, Equation (1) would need to be generalised over multiple translated axes to apply adequately. A third type of symmetry is rotational symmetry, where an object can be rotated around a central point without change in its appearance, as can be seen in flowers (Figure 2C); here, Equation (1) is not an appropriate description. While multiple symmetries can be present in a given object, one will be the most useful and, therefore, predominant in processing (for example, most rotational symmetries can have multiple reflectional symmetry axes, and reflection symmetries with more than one axis have a degree of rotational symmetry). Corballis and Roldan [33] showed a detection advantage for mirror symmetry over repetition: while observers could accurately identify repetition symmetries, the process was substantially more laborious and time-intensive. Bruce and Morgan [34] similarly found that observers were better able to detect asymmetries in mirror symmetry than in repetition, a finding later confirmed by Jenkins [35]. Figures with reflected contours, and therefore mirror symmetry, are also more readily perceived than those with repeated contours [36,37]. Similar outcomes are found when rotational symmetries are compared to mirror symmetry. Regardless of the degree of rotation (e.g., 90° versus 180°) or the type of pattern (e.g., dots, line segments or solid polygons), a substantial performance benefit was obtained for mirror symmetry [38,39]. In contrast, little difference is found when directly comparing repetition and rotational symmetries [40,41,42]. For the remainder of the review, we will focus on mirror symmetry, which is the symmetry signal the human visual system is most sensitive to.
2. Mirror Symmetry in Psychophysics: Local Detail and Global Form
Mirror symmetry is one of the predominant structural features in biological organisms and is used to guide our interpretation of the visual world [43]. Symmetry was included in the original Gestalt perceptual organisation principles discussed by Wertheimer [44] and later Koffka [23], alongside proximity, similarity, common fate, parallelism, good continuation and closure [45]. It has been argued that symmetry is a useful grouping cue because it simplifies complex visual input [43,46]. Barlow and Reeves [47] and Apthorp and Bell [48] both noted that the detection of a small amount of symmetry in an image can convey more information about the overall pattern than any irregular feature and thus is a very useful and economical cue for reducing the amount of information processing required in a scene [43]. Symmetrical image components are unlikely to occur by chance, and therefore, symmetry adds redundancy that is likely to reinforce the image interpretation. Moreover, if the observer is looking for a known target, then regions of symmetry may be sufficient for recognition. Consistent with this view, small amounts of symmetry information can lead to generally random patterns being erroneously interpreted as more substantially symmetric, and the location of symmetric regions within a larger object can also influence the overall perception. The tendency to overgeneralise a symmetric interpretation can be leveraged for camouflage [49]. Symmetry is, therefore, dependent on the interplay of local (element level) and global (pattern level) information. This section presents an overview of the key spatial features implicated in symmetry processing and how the overall perception of symmetry is changed when these features are manipulated. We discuss these findings, beginning with global pattern features and progressing to more local information at the element and element pair level.
2.1. Axis Orientation
Mirror symmetry is defined by the presence of an explicit or implied axis around which reflected elements are arranged [50]. Eye tracking studies, where axis location and orientation were consistent, have shown that observers tend to initially fixate on the centre of the pattern or object, where the object symmetry axis is located, and use this location as the initial basis for decisions of whether a pattern is symmetrical or not [51,52]. If the axis location is less predictable, symmetry salience is reduced, although this effect is attenuated when the location of the axis is cued prior to viewing the pattern [53,54].
All else being equal, symmetric patterns with a vertical axis are the most readily perceived, both in terms of detection speed and the magnitude of signal required to discriminate symmetry from noise [39,55,56,57]. Studies report an anisotropic relationship between axis orientation and symmetry detection; patterns with vertical axes are most quickly and accurately detected, followed by patterns with horizontal axes and then cardinal obliques [27,33]. Wenderoth [58] argued that symmetry axes were treated in a similar manner to contours, where a vertical advantage and reduced contrast sensitivity to oblique contours are found (oblique effect; Orban, Vandenbussche & Vogels, [59]), consistent with the suggestion that orientation and symmetry perception share some common orientation-tuned mechanisms [60].
While there is not a clearly established reason for mirror symmetry patterns with vertical axes being detected more readily than other orientations, many potential explanations have been suggested. The earliest of these was the idea that vertical symmetry was most salient because it aligned with the hemispheric representation of the visual field in the brain, leading to the mental rotation hypothesis [32,33,56,61]. Corballis and Roldan [33] suggested the advantage of vertical symmetry stemmed from its alignment with the vertical meridian dividing the visual half fields between the two hemispheres of the brain (e.g., two mirror symmetric hemispheres with a corpus callosum running through the central axis), consistent with earlier speculations by Mach [32]. The idea of the involvement of the corpus callosum arose from the observation that callosal projecting neurones processing symmetrically placed locations across the vertical meridian were common along the border of V1/V2 cortical maps [62,63]. The callosal hypothesis proposed the detection cost (measured in terms of response times) when processing non-vertical axis patterns arose because they needed to be mentally rotated to determine if a symmetry axis was present. As all patterns were treated as though they had a vertical axis, the process took longer with larger rotation angles. Julesz [64] suggested that symmetry perception was achieved via a point-by-point matching process, whereby each hemisphere processed one side of the axis. The corpus callosum facilitated this matching by connecting symmetrically positioned locations [65,66]. While the specifics of the callosal hypothesis have not been supported, it is argued in the more contemporary literature that a symmetry-sensitive visual system is evolutionarily adaptive [67]. From this perspective, humans may have developed sensitivity to the prevalence of natural and artificial vertical symmetries.
Studies of face processing and sexual attraction find symmetric faces, which have a vertical symmetry axis, correlate strongly with positive judgements of attractiveness [68] and perceptions of beauty [69]. A near-vertical symmetric axis is also common in head-on views of the animal kingdom.
2.2. Number of Symmetry Axes
While symmetry is defined by the presence of an axis, symmetric patterns are not restricted to a single axis; multiple symmetry axes of different orientations may be present in a single object [33]. Many studies of symmetry perception have shown a near-monotonic increase in response accuracy and a decrease in response time as the number of symmetry axes increases. Observers tend to be faster and more accurate at identifying the presence of symmetric patterns with four axes than with a single vertical symmetry axis alone, despite the established vertical advantage for single symmetries [70,71,72]. It has been suggested that additional symmetry axes increase the amount of symmetry information within a single pattern. Patterns with multiple axes are also more resistant to disruption from positional skewing or non-orthogonal viewpoints [50,70,73], potentially due to the additional information in the rotational structure, which has been previously identified as rapidly detectable [74,75,76,77,78]. In summary, patterns with vertical symmetry axes are consistently more readily detectable than other orientations. However, increasing the number of symmetry axes improves symmetry processing in an additive fashion, regardless of the specifics of the task.
2.3. Element Position
Symmetry is defined by element position—an individual dot or pair of dots is generally meaningless in isolation within a random field of dots. However, when a number of such pairs are grouped around a central point, it can create an impression of a global structure. Different arrangements of such pairs produce different structures, including Glass patterns [78] and, of course, mirror symmetry [73,79]. For mirror symmetry, paired dots cluster along the axis such that both are equidistant from and orthogonal to the axis [79], as in Equation (1). If a dot does not have a symmetrically positioned partner across the axis, it will not contribute to the symmetry signal in a pattern and is instead a “noise” element that disrupts the underlying symmetric structure. Having too high a proportion of these noise dots can obscure spatial structure conveyed by the symmetric pairs. Varying the ratio of signal (symmetrically positioned) to noise (non-symmetrically positioned dots) is a common way of investigating symmetry perception, as it allows for the calculation and comparison of symmetry detection thresholds without changing the overall number of dots in the pattern. Patterns to which observers are less sensitive require higher signal-to-noise ratios to be detected, both in terms of identifying symmetric patterns and differentiating symmetric patterns from noise patterns.
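The signal-to-noise manipulation described above can be sketched as a small pattern generator: a fixed number of dots is split between mirror pairs (signal) and random positions (noise). This is a minimal illustration under our own assumptions about coordinate ranges and axis placement, not a reconstruction of any specific study's stimuli.

```python
import numpy as np

def make_pattern(n_dots=100, signal=0.5, rng=None):
    """Dot pattern with a vertical symmetry axis at x = 0.
    `signal` is the proportion of dots placed as mirror pairs;
    the remainder are randomly positioned 'noise' dots. The total
    dot count is held fixed, so only the signal-to-noise ratio
    varies across conditions. Illustrative sketch only."""
    rng = rng or np.random.default_rng()
    n_pairs = int(round(n_dots * signal)) // 2
    half = rng.uniform([0.0, -1.0], [1.0, 1.0], size=(n_pairs, 2))
    pairs = np.vstack([half, half * [-1.0, 1.0]])   # mirrored partners
    noise = rng.uniform([-1.0, -1.0], [1.0, 1.0],
                        size=(n_dots - 2 * n_pairs, 2))
    return np.vstack([pairs, noise])

pattern = make_pattern(n_dots=100, signal=0.4, rng=np.random.default_rng(1))
```

Because the total number of dots is constant, thresholds measured by varying `signal` isolate sensitivity to symmetric structure rather than to overall dot density.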
2.3.1. Integration Region and Pattern Outline
While symmetry is generally considered a global pattern feature, the region immediately around the axis is the most salient cue to the presence of symmetry [80,81]. The size of this critical region varies depending on the information density along the pattern axis [29,30]. Patterns with greater local information clustered along the axis tend to be more readily detected compared to patterns where random noise is introduced over this region (see the section on element orientation; Rainville and Kingdom [30]). Furthermore, disrupting or obscuring the symmetry axis by embedding strips of random noise in the location of the axis has been shown to reduce the salience of symmetry in remaining regions, even when all other cues are retained [53].
Beyond the symmetry axis, Wenderoth [58] found that the pattern outline or boundary is another important cue for mirror symmetry. He found that symmetric regions needed to have higher proportions of symmetrical signals to be detected when embedded in a random noise surround, even though the symmetry axis was retained and all other cues to symmetry were present. Symmetry thus appears to be processed holistically; when the symmetry axis is readily identified, and the outline of a pattern is symmetric, it is assumed that the pattern is globally symmetric even if other subregions (e.g., between the axis and boundary) are imperfectly symmetric [43]. Thus, we are most sensitive to disruptions along the central axis or outer boundary of the pattern while being relatively less sensitive to the information contained throughout the rest of the pattern [58] when the target pattern is an unknown field of dots.
2.3.2. Skewed Symmetry
In a real-world setting, symmetric objects or patterns are often viewed from non-frontoparallel viewpoints (i.e., “side on”), where there is displacement of the relative positions of elements across the axis, known as “skew”. Symmetry detection mechanisms are very sensitive to disruptions in element position. If element positions within a given pair are jittered, such that the virtual lines joining the paired elements are no longer orthogonal to the axis, symmetry becomes significantly more difficult to detect [40]. While symmetry detection is still possible under such conditions, it presents an important challenge to the visual system since I(axis + D) is no longer equal to I(axis − D) [62]. In skewed symmetry patterns [34], paired elements differ not only in their distance from the axis but also in their relative position parallel to the axis. This also occurs when a perspective slant is introduced to symmetric polygons [82], and it has been noted that skewed symmetry is equivalent to viewing symmetry in depth or from a non-frontoparallel viewpoint [82,83]. As the degree of skew is increased, symmetry becomes systematically harder to detect until it cannot be discriminated from a random pattern [82].
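The geometric effect of skew on a mirror pair can be sketched as a simple shear parallel to the axis. This is an illustrative toy; the parameterisation of skew by a shear angle is ours and is only one of several ways skew has been implemented in the literature.

```python
import numpy as np

def skew(points, angle_deg):
    """Shear a dot pattern parallel to a vertical symmetry axis at
    x = 0: each point is displaced along y in proportion to its signed
    distance from the axis, so the virtual lines joining mirror pairs
    are no longer orthogonal to the axis. Illustrative sketch only."""
    out = np.asarray(points, dtype=float).copy()
    out[:, 1] += np.tan(np.radians(angle_deg)) * out[:, 0]  # y' = y + x*tan(a)
    return out

pair = np.array([[-0.5, 0.3], [0.5, 0.3]])   # a mirror pair before skewing
skewed = skew(pair, 20.0)
# after skewing, the partners no longer share a y coordinate, so
# I(axis + D) = I(axis - D) no longer holds for the pattern
```

At 0° of skew the pattern is unchanged; as the angle grows, the positional correspondence degrades continuously, matching the graded loss of detectability described above.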
In an attempt to explain the processing of symmetry under conditions of skewing, Wagemans et al. [41] proposed that patterns facilitating virtual four-cornered shapes (“correlational quadrangles”) can preserve the percept of symmetry under imperfect viewing conditions. These correlational quadrangles are formed through the grouping of “pairs of pairs” (i.e., the interaction between two symmetric pairs (orthogonal to the axis) in the direction parallel to the axis). These virtual shapes introduce an intermediate structure between individual local element pairs and the global symmetric pattern and preserve axis cues. When such higher-order cues are present, the global symmetry of the pattern becomes more readily identifiable than for patterns where pairs are processed more independently. Wagemans et al. [41] argue that this is because correlational quadrangles facilitate the propagation of a reference frame that strengthens the relationship between local and global pattern features. In essence, this enhances the appearance of a symmetric pattern as one independent object rather than a collection of individual, independent elements.
2.4. Shared Local Element Features
When elements are symmetrically positioned (as in Equation (1)), it is generally beneficial to symmetry perception if they share key features, such as color [84,85], luminance polarity [86,87,88,89,90] or element orientation [91,92,93,94,95]. Even when the more global pattern features discussed above are optimised, manipulating these local features can significantly impact symmetry salience. The factors that have this effect are likely to indicate critical properties of the detection processes. The following sections provide an account of what has been learnt by manipulating element properties.
2.4.1. Luminance Polarity
The direction of luminance variation drives different pathways in early vision; positive polarity dots (brighter than background) are processed via ON-channels, while negative polarity dots (darker than background) are processed via OFF-channels, and the two channels are independent in early vision [96,97]. Evidence from global motion and global form research, where grouped or paired dots can act as signal or noise, shows that the proportion of signal dots required to distinguish a partially structured pattern from a random pattern is significantly higher when local dot pairs contain both luminance increment and decrement information compared to a single matched polarity [98,99,100]. Dots of opposing luminance polarity within the same symmetric pair fail to satisfy Equation (1) and should not readily convey symmetry information if they are processed by separate channels, even if they are perfectly spatially symmetric.
An early investigation by Zhang and Gerbino [90] found that in symmetric patterns, both light and dark pairs contribute equally when discriminating symmetry from equivalent noise patterns. Both patterns had lower detection thresholds compared to patterns with opposing intrapair polarity, regardless of whether negative correspondences were randomly distributed across the axis (Figure 3D) or segregated on either side of the axis (one side all negative, the other all positive; Figure 3C). This was interpreted as supporting a contrast-sensitive point-by-point matching process. Wenderoth [89] conducted a similar experiment which largely supported Zhang and Gerbino’s [90] initial findings: detection performance for the all-polarity matched (Figure 3A) and pair-matched polarity (elements match within a pair but pairs may be brighter or darker; Figure 3B) conditions was significantly better than for halves-unmatched (positive polarity one side of the axis and negative polarity on the other; Figure 3C) and pairs-unmatched (elements of opposing polarities within pairs; Figure 3D).
Figure 3.
Examples of the four polarity conditions used are (A) all-matched-polarity, (B) matched-pairs polarity, (C) unmatched halves polarity, and (D) unmatched pairs polarity. All examples have 100% positional symmetry.
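The four polarity conditions of Figure 3 can be summarised as rules for assigning a polarity (+1 bright, −1 dark) to the two members of each mirror pair. The sketch below is our own illustrative encoding of those rules; the condition labels and function name are ours.

```python
import numpy as np

def assign_polarities(n_pairs, condition, rng=None):
    """Return (left, right) luminance polarities for the mirror pairs
    of a symmetric dot pattern, following the four conditions
    illustrated in Figure 3. Illustrative sketch only."""
    rng = rng or np.random.default_rng()
    if condition == "all_matched":         # (A) every dot the same polarity
        left = np.ones(n_pairs)
        right = left.copy()
    elif condition == "matched_pairs":     # (B) matched within each pair,
        left = rng.choice([-1.0, 1.0], size=n_pairs)  # mixed across pairs
        right = left.copy()
    elif condition == "unmatched_halves":  # (C) one side bright, one dark
        left = np.ones(n_pairs)
        right = -left
    elif condition == "unmatched_pairs":   # (D) opposite within each pair
        left = rng.choice([-1.0, 1.0], size=n_pairs)
        right = -left
    else:
        raise ValueError(condition)
    return left, right
```

Conditions (A) and (B) preserve the within-pair match that Equation (1) requires of a polarity-sensitive mechanism, while (C) and (D) break it, which is the contrast driving the threshold differences reported by Zhang and Gerbino [90] and Wenderoth [89].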
The literature is consistent in noting that variation in element appearance interrupts the formation of pairs and, therefore, has a significant and detrimental impact on symmetry perception. There are suggestions that symmetry in these situations is only detectable via polarity-insensitive point-matching mechanisms [88], which differs fundamentally from the mechanism by which symmetry is more typically perceived. Mancini, Sally, and Gurnsey [88] go as far as to say that the symmetry of unmatched luminance polarity patterns can only be detected via attentional mechanisms reliant on sequential searching and matching of individual symmetrically positioned elements. Of course, an alternative is a second-order contrast (but not polarity sensitive) perceptual mechanism [101,102,103], which is consistent with symmetry perception under conditions that preclude point-by-point matching strategies—such as very brief viewing times or temporal integration designs [86]. Although sensitivity to pattern symmetry is much reduced in such instances (see panels of Figure 3), detection is still possible, suggesting either a second-order, contrast polarity independent mechanism is deployed [86,101,102] or, alternatively, attention-based point-matching processes to recover symmetry information [88].
2.4.2. Element Orientation
While the role of global pattern orientation (axis orientation) has attracted a lot of research interest, variations in orientation content defining a local element have been comparatively understudied (see Figure 4 for an illustration of local versus global orientation in symmetry). Like luminance polarity, different orientations are also processed by discrete neural populations in early vision [104,105,106]. Koeppl and Morgan [92], using oriented line segments, found that element position was more important for symmetry salience than absolute element orientation. This was interpreted as evidence for a “coarse-scale interpretation” [92] where element orientation does not play a significant role. Similar conclusions have been drawn by follow-up studies, which found that local orientation variation did not substantially change symmetry detection thresholds [93,95]. Sharman and Gheorghiu [95] suggest that local element orientation does not impact symmetry detection, regardless of how congruent it is with positional information.
Figure 4.
Four examples of patterns with 100% positional symmetry but different element orientations [91]. Comparing the paired images vertically ((A) & (B), and (C) & (D)) shows the same orientation content with different global pattern orientations. Comparing the paired images horizontally ((A) & (C), and (B) & (D)) shows different orientation content with the same global pattern orientation. Comparison of images diagonally ((A) & (D), and (B) & (C)) shows different global pattern orientations with the same relative orientation content (parallel or orthogonal to the axis).
However, Saarinen and Levi [94] found that when elements in a symmetrically positioned pair have orthogonal orientations to each other (e.g., one horizontal and one vertical element; Figure 5G), there is a significant increase in symmetry detection thresholds. Similarly, Locher and Wagemans [93] report a small but significant superiority for patterns composed of one or two orientations (when orientation within pairs is kept consistent; Figure 5A,B,D, highlighted with a bolded border) compared to patterns where the orientation of the two elements in a pair can vary randomly across the pattern (Figure 5C,F,G; no border). They attribute this to “local randomness” disrupting positional information and reducing the impression of global structure throughout the pattern [93]. Sharman and Gheorghiu [95] and Koeppl and Morgan [92] do not disrupt the mirroring of local orientation within a pair; element orientation is always varied in the same manner for both elements of a pair and thus does not introduce the conflicting “local randomness” between position and orientation information that is critical here. Further, one of Sharman and Gheorghiu’s [95] key conditions used positional and orientation symmetric Gabor patterns within a randomly oriented noise field. This would conceivably have a facilitatory “pop out” effect, allowing for the detection of the proportion of common orientation elements, in addition to the proportion of symmetry signal, making it harder to determine the impact on symmetry perception alone. Together, this set of results suggests that symmetry detection mechanisms do not fully discard local orientation. Rather, like luminance polarity, the detection thresholds vary in a manner that implicates multiple orientation-sensitive mechanisms operating in parallel.
Figure 5.
Examples of stimuli from the different orientation conditions [91]. Mirrored conditions are highlighted by a solid outline (outline not shown in the experimental display) and include (A) vertical, (B) horizontal, (D) mixed-matching and (E) mirrored oblique. Matched conditions are shown in the top row, including (A) vertical, (B) horizontal, (C) matched oblique and (D) mixed-matching. Unmatched conditions are shown in the bottom row: (E) mirrored oblique, (F) 45° unmatched and (G) 90° unmatched. All examples have 75% positional symmetry and vertical symmetry axes.
Bellagarda et al. [91] methodically compared symmetry perception in patterns composed of oriented Gabor elements, where orientation was varied within and/or between element pairs. Consistent with findings in Bellagarda et al. [86] regarding luminance polarity, observers were less sensitive to underlying symmetry when symmetry could not be identified based on matching first-order features (i.e., orientation) over the axis. However, the pattern of results in Bellagarda et al. [91] was more flexible and more complex than would be expected if symmetry detection was simply attributable to differences between first- (orientation) and second-order (contrast envelope) processing alone. If this were the case, patterns where element orientation was consistent over the axis could be detected in a manner consistent with first-order features. When elements are not matched, then second-order mechanisms could extract the symmetrically placed contrast envelope, removing the conflicting orientation information. However, this explanation cannot account for the systematic variability in performance across orientation conditions. Conditions where element orientation and position are mirrored should show the same detection performance, while conditions where element orientation is not mirrored should result in less efficient detection. In symmetry perception, mirroring information across the axis is more important than matching (i.e., translating), as noted by Rainville and Kingdom [30]. Previous studies have argued that element orientations need to be matched across the axis [94]. These studies have typically considered only horizontal and vertical elements, arguing that when two different element orientations are combined in a pair, symmetric information is disrupted.
However, Rainville and Kingdom [30] instead note that paired vertical and horizontal elements are special cases of geometry in that they are both matched and mirrored in orientation (i.e., a horizontal remains horizontal when it is reflected and when it is translated). They suggest that mirroring is key in symmetry perception, and therefore, mirrored obliques, which individually differ in orientation by 90° but are mirror images of each other, should facilitate performance as effectively as horizontal or vertical pairs.
Furthermore, this also suggests that matched oblique elements should disrupt symmetry perception: although they are matched in orientation, they do not retain the proposed critical mirroring of orientation information. The relationship defined in Equation (1) holds for mirrored elements regardless of orientation, but for matched, unmirrored elements it no longer holds, and mirror symmetry is therefore disrupted. Bellagarda et al. [91] showed that the mirrored oblique condition, where elements are reflected, is detected much more efficiently than matched oblique elements that are not mirrored in orientation. The latter requires significantly higher proportions of signal pairs for detection than conditions in which orientation is both matched and mirrored (e.g., horizontal or vertical elements, where Equation (1) holds). Mirroring of position and orientation is therefore most critical to symmetry perception, and this is made clear by using a complete orientation set that allows direct comparison of matched, mirrored unmatched and unmirrored element combinations. In sum, Bellagarda et al. [91] support a key role for element orientation in symmetry perception and suggest that a simple first-order versus second-order distinction, without additional stimulus specificities (such as comparison of mirror reflection in orientation), is insufficient to account for the variation in symmetry processing when element orientation is varied.
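The matched/mirrored distinction for element orientation can be made concrete with a small sketch. Note that Equation (1) is not reproduced in this excerpt; the mapping used below, where reflection about a vertical axis sends an orientation θ to (180° − θ) mod 180°, is one standard formalisation and is an assumption here, not the review's own equation.

```python
# Hypothetical sketch: reflection of element orientation about a vertical
# symmetry axis, assuming the mapping theta -> (180 - theta) mod 180.

def reflect_orientation(theta_deg: float) -> float:
    """Orientation (0-180 deg range) of an element reflected about a vertical axis."""
    return (180.0 - theta_deg) % 180.0

def is_mirrored(theta_left: float, theta_right: float) -> bool:
    """True if the right element is the mirror image of the left element."""
    return abs(reflect_orientation(theta_left) - theta_right) % 180.0 < 1e-9

def is_matched(theta_left: float, theta_right: float) -> bool:
    """True if the two elements share the same orientation (a translation match)."""
    return abs(theta_left - theta_right) % 180.0 < 1e-9

# Horizontal (0 deg) and vertical (90 deg) pairs are both matched AND mirrored,
# which is why they cannot dissociate the two relations:
assert is_matched(0, 0) and is_mirrored(0, 0)
assert is_matched(90, 90) and is_mirrored(90, 90)

# Oblique pairs dissociate them:
assert is_mirrored(45, 135) and not is_matched(45, 135)  # mirrored obliques
assert is_matched(45, 45) and not is_mirrored(45, 45)    # matched obliques
```

This makes explicit why only a complete orientation set, including obliques, can separate matching from mirroring.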
2.4.3. Higher-Order Structure
The bootstrapping, or correlational quadrangle, model proposed by Wagemans, van Gool, Swinnen and van Horebeek [84] describes an intermediate level of symmetry information between the matching of individual element pairs and the global symmetric pattern. Wagemans et al. [41] defined this higher-order structure as “pairs of pairs”; that is, groups of four symmetrically positioned elements forming virtual four-cornered shapes (the correlational quadrangle; Figure 6C). They hypothesised that the presence of correlational quadrangles in an array strengthens the underlying symmetry signal by facilitating axis identification and, through additional reference frames, making the pattern more resistant to perturbation. While early investigations showed that correlational quadrangles lower symmetry detection thresholds, nuanced investigation of the model has been limited by the available stimuli. Dot patterns, like those used by Wagemans et al. [41], do not allow interactions between constituent elements to be controlled beyond element position. Any element can conceivably interact with any other element in the array, so there is a very large number of potential projected lines between elements that could be drawn to form potential correlational quadrangles.
Figure 6.
Examples of symmetric stimuli with differing higher- and lower-order structures include (A) a solid polygon with only higher-order structure, (B) dot patterns with lower-order structure, and (C) patterns from the current study, where corner elements allow manipulation of both higher- and lower-order structure. Virtual lines dictated by element position are shown with broken lines; potential projected lines are shown with solid lines. Lower-order structure is defined as pairwise virtual lines spanning the symmetry axis, shown by broken lines between paired elements. Higher-order structure is highlighted by solid lines on the same side of the axis. In the (A) solid polygon stimuli, higher-order structure may be necessary for symmetry perception because, while some lower-order symmetry information is conveyed by the corresponding points on the lines and corners, there is a paucity of discrete virtual or projected lines. Symmetry perception based on lower-order processing alone (e.g., spatial filtering) will therefore be possible but is likely to include more noise. It is also difficult to manipulate higher- or lower-order structure in isolation; often, these shapes are 100% symmetric or 100% asymmetric. In (B), symmetry is defined by virtual lines between dot elements at equivalent positions over the axis. While there may be some incidental groupings on the same side of the axis, any one dot is equally likely to co-align with any other dot and could produce countless spurious projected lines, as shown by the arrows in (B), meaning that this incidental higher-order structure is not a useful cue. In (C), both higher- and lower-order structure are explicitly manipulated by varying the coalignment of angled elements.
Virtual lines within element pairs (lower-order structure, dotted lines) and projected lines between element pairs (higher-order structure, solid lines) are defined to form intermediate structures between individual local dot pairings and the global symmetric pattern [107].
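The combinatorial problem with unconstrained dot patterns can be illustrated with a toy enumeration: if any two symmetric element pairs can in principle be joined into a "pair of pairs", the number of candidate quadrangles grows quadratically with the number of pairs. The coordinates below are hypothetical, and this is an illustration of the counting argument, not the authors' analysis.

```python
from itertools import combinations

def candidate_quadrangles(pairs):
    """Enumerate every combination of two symmetric pairs as a candidate quadrangle."""
    return list(combinations(pairs, 2))

# Four symmetric dot pairs about a vertical axis at x = 0 (hypothetical coordinates):
pairs = [((-3, 1), (3, 1)), ((-1, 2), (1, 2)), ((-2, 4), (2, 4)), ((-4, 5), (4, 5))]

quads = candidate_quadrangles(pairs)
assert len(quads) == 6  # n * (n - 1) / 2 candidates for n = 4 pairs
```

With dozens of dots, this unconstrained growth is what makes it difficult to say which quadrangles, if any, the visual system actually exploits; corner elements constrain the candidate set by making the virtual lines explicit.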
Bellagarda et al. [107] employed a third element type to minimise this uncertainty, as shown in Figure 7. These corner elements, first introduced by Persike and Meinhardt [86,87], are formed by the intersection of two Gabor elements. Each corner has two dominant orientation components and a variable internal angle. Each element therefore produces definable virtual lines, and the coalignment of these lines can be explicitly manipulated to promote or inhibit the formation of correlational quadrangles. As hypothesised, observers were more sensitive to reflected than unreflected corners, and they were also more sensitive to patterns containing higher-order structure.
Figure 7.
Examples of corner elements with different internal angles. Each one is made of two halves of Gabor elements joined along the line bisecting the vertex [107].
Corners do not provide additional orientation information. Instead, perception seems to depend on a mechanism akin to good continuation, which limits the elements that could be considered part of the embedded rectangles to those providing alignment. Correlational quadrangles may be useful for symmetry perception because they offer a way of minimising false matches, increasing the configural information in the array without introducing additional noise. The results of Bellagarda et al. [107] show that, to form these quadrangles, elements also need to form virtual lines between element pairs that are reflected over the axis but are not necessarily horizontal. This suggests that an adequate account needs to include matching both between elements and between element pairs.
2.5. Summary
- Symmetric patterns with vertical axes are perceived more efficiently and with lower signals than patterns with other axis orientations. The exception to this is when patterns contain more than one axis; additional pattern axes improve symmetry detection, with the best performance consistently identified for patterns with four axes of symmetry.
- The integration region around the symmetry axis is critical for symmetry detection, and the pattern outline also plays an important role. This suggests that while symmetry depends on the precise local positioning of individual elements, it is processed holistically as a global pattern.
- Distortion of individual element position, referred to as skewed symmetry, disrupts symmetry perception.
- Luminance polarity manipulations implicate second-order processing in symmetry perception since symmetry is still detectable when element polarity is different across the axis. Similarly, element orientation also impacts symmetry detection when orientations are not reflected within symmetrically positioned pairs. Symmetry remains detectable in these cases (unmatched polarity and unmirrored orientation) but with higher detection thresholds.
- Higher-order structure, formed by the relationship between pairs of paired elements, provides additional configural information, strengthening the symmetry percept of the overall pattern by minimising false matches and the impact of skew.
3. Mirror Symmetry in Time
Our visual world is complex and constantly changing. This presents a fundamental challenge for the human visual system, requiring it to be fast and flexible enough to accommodate frequent unexpected changes in input but stable and accurate enough for reliable identification of objects. As a symmetric object moves through space, its appearance changes, but other aspects, such as symmetry, remain relatively constant and thus can facilitate recognition. This perceptual constancy is fundamental for extracting consistent shape information in a dynamic 3D world [108] so that object recognition remains consistent across changing viewpoints, orientations and viewing conditions. Here, we build on the spatial research reviewed above to consider dynamic symmetry processing, that is, how local information is integrated across time into a global percept of symmetry.
The role of time and temporal factors in symmetry perception can be considered in a number of ways. For example, we know that symmetry is detected within a few tens of milliseconds when matched elements are presented simultaneously [50,64,109]. One early and important investigation into the role of time in symmetry perception was conducted by Hogben, Julesz and Ross [110], who asked over what time period the visual system can integrate symmetric information within the dot pairs that form a symmetric pattern. Using a point-plotting oscilloscopic system, two streams of very short lifetime dots (reduced to 1% brightness within 10 ms) were presented such that they formed symmetric pairs (e.g., one stream formed the left side of the symmetry axis, the other the right side). One stream of dots was temporally offset relative to the other, so that while elements were always symmetrically positioned across the axis when present, a period of time separated the onset of each element in the pair. Hogben, Julesz and Ross [110] found that if this temporal delay between elements in a pair did not exceed 50–90 ms, the symmetry in the array could be accurately detected. If this delay was exceeded, the pattern was indistinguishable from random dot patterns with no temporal delay. This result showed that just as the visual system can combine elements across space into a global percept of form, so too can it integrate this information across time. Hogben, Julesz and Ross [110] showed that with longer delays, the elements of the pair fail to stimulate the underlying processes within a critical time window. A similar result was obtained for translational symmetry by Jenkins [111]: the percept of symmetry was retained up to a certain delay (approximately 60 ms, consistent with reflection symmetry), after which the patterns became indistinguishable from random noise.
3.1. Temporal Integration of Mirror Symmetry
The above studies highlight the role of temporal integration: the combination of discrete elements or components, separated in time, into coherent pairs that can contribute to symmetry perception [92,93]. Note that this is not the same as examining the temporal integration period over which pairs may be accumulated to produce a symmetry percept. Hogben et al. [110] hypothesised that visible persistence is what permits this within-pair temporal integration to occur. Visible persistence describes a phenomenon whereby a briefly presented visual stimulus endures as a signal for visual processing beyond its physical offset [94,95]. In Hogben et al. [110], the temporal separation between the two elements in each symmetric pair was varied between 0 and 140 ms across conditions, so the elements were often not physically present within the same 10 ms time window, yet symmetry could still be perceived. This implies that positional information about the first dot persists in the visual system after the end of its physical lifetime, long enough to coincide with the physical onset and processing of the second element. If the delay between onsets exceeds the duration of the visible persistence mechanism assumed to underlie the temporal integration of symmetry, information from the two elements cannot be combined, and symmetry information is lost. Later research (e.g., Bellagarda et al. [86], discussed below) found a comparable 60 ms window over which symmetry can be perceived.
Inspired by Hogben et al.’s [110] visible persistence hypothesis, Niimi, Watanabe, and Yokosawa [112] also investigated the duration over which two static asymmetric dot patterns could be integrated into one globally symmetric pattern, but in their case, all the elements in one half were presented at exactly the same time. Two asymmetric patterns were sequentially presented for 13 ms in two separate intervals. While neither interval contained symmetry information, if the two patterns were successfully integrated, a globally symmetric pattern would emerge. Temporal delay, defined as the stimulus onset asynchrony (SOA) between the presentation of the two pattern intervals, varied from 0 ms SOA (i.e., simultaneous presentation) up to 427 ms. Participants had to indicate whether the combined pattern was symmetric or random (a yes/no task). Niimi, Watanabe and Yokosawa [112] found that for short delays (i.e., 13 ms and 27 ms), performance did not differ from conditions with no delay. As delay duration increased further, accuracy at identifying the symmetric patterns decreased until symmetry could not be differentiated from random. Like Hogben et al. [110], Niimi et al. [112] concluded that symmetry perception mechanisms were tolerant of some temporal jitter facilitated by visible persistence. When this delay tolerance of around 60 ms was exceeded, however, temporal integration no longer occurred, and hence symmetry was no longer perceptible.
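The within-pair integration rule running through these studies amounts to a simple criterion: two elements can contribute to a symmetric pair only if their onset asynchrony falls inside the integration window. The sketch below uses 60 ms as a round figure (reported estimates range from roughly 50 to 90 ms); the function and constant names are illustrative, not from any of the cited studies.

```python
# Toy sketch of the within-pair temporal integration criterion: a pair supports
# symmetry only if the onset asynchrony between its two elements falls inside
# the integration window (~60 ms assumed here as a round figure).

INTEGRATION_WINDOW_MS = 60.0  # assumed round figure, not a measured constant

def pair_integrates(onset_left_ms: float, onset_right_ms: float,
                    window_ms: float = INTEGRATION_WINDOW_MS) -> bool:
    """True if the two element onsets can be combined into one symmetric pair."""
    return abs(onset_left_ms - onset_right_ms) <= window_ms

# Delays in Hogben et al. spanned 0-140 ms:
assert pair_integrates(0, 0)        # simultaneous: symmetry visible
assert pair_integrates(0, 50)       # within the window: still visible
assert not pair_integrates(0, 140)  # window exceeded: indistinguishable from noise
```

The same criterion applies whether the asynchrony is between individual elements (Hogben et al.) or between whole pattern halves (Niimi et al.; Sharman et al.).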
A series of more recent studies by Sharman and Gheorghiu [113,114,115] focused specifically on the temporal integration of mirror symmetry and some of the features that may influence this process. Their initial study focused on symmetric motion in dynamic patterns, but their results showed that the number of elements and the duration of their lifetime had a bigger impact on symmetry salience than symmetric motion did. Sharman and Gheorghiu [114] interpreted this as further evidence for the temporal integration of mirror symmetry and suggested their findings may be explained by visible persistence and the accrual of novel symmetry information across time, much like Niimi, Watanabe and Yokosawa [112]. In a follow-up study, Sharman et al. [115] rapidly alternated either two novel symmetric patterns (where all elements were presented simultaneously) or pattern halves (the left and right sides of the axis). The latter was similar to Niimi et al. [112], where elements were randomly distributed across two temporally separated intervals. Both Sharman et al. [115] and Niimi et al. [112] mapped temporal integration in mirror symmetry using segmented patterns such that symmetry could not be appreciated without integration across the two intervals. Rather than referring to SOA per se, Sharman, Gregersen and Gheorghiu [115] instead varied the alternation frequency of patterns/pattern halves (higher frequency equates to a shorter delay duration). Consistent with the studies discussed above, Sharman et al. [115] replicated the 50–70 ms temporal integration window for visual symmetry; at longer delays, symmetry either was not perceived, or there was no advantage for dynamic (rapidly changing or alternating) displays over static displays (where all elements are presented simultaneously).
Bellagarda et al. [86] replicated the 60 ms upper limit for temporal integration of mirror symmetry discussed earlier [110,112,113,115] and showed this window is consistent even for patterns varying in luminance-polarity. Importantly, however, Bellagarda and colleagues also showed that the processes occurring within this window vary in a manner consistent with differences between first- and second-order perceptual processes [89,90,101]. This finding was inconsistent with the alternative attention-based hypotheses [88] as the limited lifetime of the elements precludes attention-based point matching; the second element’s lifetime is shorter than the time required to deploy attention to its location. This highlights the importance of considering temporal and spatial features of perception to better delineate underlying mechanisms. Static patterns necessarily fail to provide insight into the temporal requirements permitting temporal integration of elements within a pair, and therefore, it is unclear how performance might be limited by this processing stage. Here, the intersection of spatial and temporal features unveiled important features required for candidate symmetry mechanisms and allowed us to rule out explicit attention-based point-matching explanations.
Bellagarda et al. [91] employed the same temporal integration paradigm as Bellagarda et al. [86], which permitted consideration of both sensitivity and the time course of element matching in symmetry processing, for patterns with different local orientation information (e.g., Gabor elements in varying orientation combinations). When the orientation of paired elements is mirrored across the axis such that Equation (1) applies, performance shows the same first-order characteristics seen in the matched-polarity research considered earlier, i.e., higher sensitivity but rapid decay in performance with longer temporal delays. When elements are symmetrically placed but not mirrored over the axis, such that Equation (1) does not apply to the orientation content of the pattern even though the orientations are identical, there is a significant performance cost, reflected in lower sensitivity and longer persistence durations. Consistent with the luminance-polarity findings, when symmetry cannot be identified by matching first-order features over the axis, observers are less sensitive to the underlying symmetry, and the impact of lengthening delay is reduced. The results of Bellagarda et al. [107] argue against a straightforward linear covariation of sensitivity and persistence within the visual system: in Bellagarda et al. [107], persistence estimates did not change significantly even though threshold values varied significantly between conditions. This argues against interpreting those results as reflecting a need for increased processing time to compensate for lower sensitivity to the signal. Instead, sensitivity and persistence contribute independently to symmetry detection; change in one parameter does not necessitate change in the other. This discussion assumes the two paired elements are first detected independently and that both must be detected within a time window of approximately 60 ms.
Combinations of elements may be grouped to produce higher-order pattern information, which may assist detection if reflected across a symmetry axis.
3.2. Summary
- Symmetric information is readily perceived as long as the delay between elements within a pair does not exceed 60 ms. Beyond 60 ms, symmetry cannot be identified.
- Visible persistence is thought to underlie the temporal integration of symmetry, as it permits element locations to be retained over time. Locations can be integrated when they fall within the same temporal window.
- While the 60 ms upper limit to symmetry perception appears fixed, sensitivity thresholds and persistence estimates vary significantly depending on pattern features such as polarity, element orientation and the presence of higher-order structure, suggesting temporal symmetry mechanisms are sensitive to these features.
4. Processing Mirror Symmetry in the Brain
The neural mechanisms underlying the perception of mirror symmetry have been investigated with neuroimaging methods. Both haemodynamic and electrophysiological studies have identified consistent, automatic and symmetry-specific responses generated in the extra-striate visual cortices [116,117,118]. More recent research by Van Meel, Baeck, Gillebert, Wagemans and Op de Beeck [119] using multi-voxel pattern analyses (MVPA) and functional connectivity analyses has traced the processing of symmetry through the ventral visual stream in the human cortex. They identified increasingly holistic processing of global symmetry as they progressed to later visual object areas in the pathway (such as the Lateral Occipital Cortex (LOC) and other extra-striate regions), accompanied by greater discriminability of random versus symmetric patterns. This was interpreted as evidence for figural processing of symmetry (i.e., as an object).
4.1. Haemodynamic Signatures
Although few in number, functional magnetic resonance imaging (fMRI) studies in humans and primates have consistently identified an increase in blood flow to extra-striate visual areas in response to symmetric signals in patterns [120,121,122,123]. An example is Sasaki et al. [122], whose fMRI data show symmetry-specific activation in extra-striate regions. No study has found symmetry-specific haemodynamic responses in early retinotopic areas (e.g., V1) [119,123]. Rather, the largest haemodynamic responses are identified around V3A, V4d/v, V7 and the lateral occipital cortex (LOC) [124,125,126]. Regions in the extra-striate cortex, particularly the LOC, are implicated in object and scene processing [127,128] rather than being specialised for lower-level features such as luminance, spatial frequency or orientation variation in an image [129]. Transcranial magnetic stimulation (TMS) studies, which disrupt cortical processing in targeted areas, have also supported the role of the LOC in symmetry processing: TMS applied over the left LOC in particular impaired participants' ability to discriminate symmetric from asymmetric patterns [124,125,126]. TMS over the fusiform and occipital face areas (FFA and OFA), which both show some response to symmetry information, also disrupted performance [130].
4.2. Electrophysiological Signatures
Findings from electroencephalographic (EEG) investigations are similar to fMRI studies in showing a symmetry-specific extra-striate response but have permitted more detailed investigation of the influence of stimulus features. Jacobsen and Höfel [69] found a significant, sustained negativity in the EEG waveform over occipital regions, including V1 and the LOC, in response to symmetric stimuli, now called the Sustained Posterior Negativity (SPN), and this has been demonstrated in a number of investigations [131]. Localisation studies indicate that the SPN is generated in extra-striate regions [67]. The SPN is a difference wave, reflecting a significantly greater negative amplitude for symmetric compared to asymmetric images [67]. The SPN occurs relatively late, after the first negative-going component (N1); it is identifiable approximately 250 ms after stimulus onset, peaking at roughly 300 ms [132]; see the example in Figure 1.3B of [67]. The SPN is reliably generated in response to symmetry regardless of the task participants are completing [133,134] and is also generated in response to a range of stimulus types, including dot patterns, abstract polygons, line elements and real objects such as flowers [131,135,136].
SPN magnitude varies with stimulus properties in a manner consistent with psychophysical findings [137]. Mirror symmetry consistently produces the largest SPN response compared to translation and rotation symmetries, and it is also the most readily detected in psychophysical studies [131]. There is a larger response to vertical-axis patterns than to horizontal-axis patterns [138] and a greater SPN amplitude for patterns with multiple symmetry axes [139]. Perspective slant, like that investigated psychophysically by Bertamini, Tyson-Carr and Makin [88], reduces the SPN, as do patterns where symmetry is manipulated to be perceived as background rather than figure [140]. Makin, Rampone and Bertamini [141] and Wright, Mitchell, Dering and Gheorghiu [142] have shown that symmetric patterns with unmatched luminance polarities (cf. Wenderoth [89] and Zhang and Gerbino [90]) generate an SPN response regardless of task or symmetry type.
4.3. How Is Temporal Integration of Mirror Symmetry Represented in the Brain?
A collection of recent studies from the Bertamini Lab [143,144,145,146] has considered how temporal integration of mirror symmetry may be represented by the SPN. As alluded to by its name, the SPN is a sustained response that continues after stimulus offset [143]. Consistent with Niimi et al.’s [112,147] dynamic stimulus advantage, investigations into SPN priming have shown that the SPN is enhanced and extended by the rapid sequential presentation of novel symmetric patterns with a common axis of symmetry [148]. This is consistent with temporal integration of consistent symmetry axis information even across changing patterns, as suggested by Niimi et al. [112,147] and Sharman and Gheorghiu [95], and also suggests that temporal integration can be assessed using the SPN.
Rampone et al. [144,145] used dynamically occluded polygons where no explicit symmetries were present on the screen at any point. Only one-half of the shape was present at any one time, meaning the presence of symmetry in the shape was only identifiable following the temporal integration of the two halves. They showed that an SPN was reliably generated approximately 300 ms after the second half of the occluded polygon was presented, but only if the two sides were symmetric. As the SPN is a symmetry-specific response, it can only be generated if temporal integration is successful. If the occluded polygon was asymmetric, no identifiable SPN was produced. Follow-up investigations [144] found that the SPN generated following temporal integration is modulated by stimulus features in the same manner as for static symmetry patterns where all component elements are always presented simultaneously without the necessity of temporal integration [67]. To our knowledge, this study constitutes the first evidence of temporal integration of visual mirror symmetry from an EEG or neuroimaging perspective. More recently, Wang, Cao and Xue [149] have shown that a symmetry-specific SPN is generated in response to the temporal integration of symmetric faces but has a distinct time course when compared to the face-selective N170 response.
4.4. Functional near Infrared Spectroscopy (fNIRS)
Bellagarda et al. [150] employed functional near-infrared spectroscopy (fNIRS) to study neural responses to the temporal integration of visual mirror symmetry. fNIRS uses the optical properties of reflected near-infrared light to measure relative changes in cortical oxygenated and deoxygenated haemoglobin in response to visual stimulation [151,152,153,154]. This is similar to the blood-oxygen-level-dependent (BOLD) response measured in fMRI studies [151].
Bellagarda et al. [150] intended to replicate and extend the EEG data of Rampone et al. [144,145] by providing insight into how temporal integration of mirror symmetry may be reflected in haemodynamic responses. The fNIRS analysis showed a symmetry-specific haemodynamic response over the extra-striate regions of the visual cortex, identifiable when there was no temporal delay or only a brief one (0 ms or 50 ms) within element pairs. The magnitude of this response (i.e., the size of the total change in haemoglobin relative to baseline) decreased with increasing delay; longer temporal delays produced a smaller and less localised response. When delays exceeded 60 ms (delays up to 100 ms were tested), and the patterns were therefore perceptually indistinguishable from random patterns, no symmetry-dependent changes in oxygenated or deoxygenated haemoglobin were identified. As in fMRI studies, no symmetry-specific response was found in V1 and surrounding areas. However, the symmetry response identified using fNIRS was substantially more medial than that shown in previous EEG and fMRI studies, which predominantly implicate the lateral occipital cortex [125,126,127,155].
Similar to the argument proposed by Rampone et al. [144,145], if temporal integration is indeed more medially localised than the perception of static symmetries, this may be due to the recruitment of other regions involved in processing particular stimuli or stimulus features such as faces, objects or motion [156,157]. Further, there is a paucity of data regarding how the haemodynamic response to symmetry might change for patterns with different stimulus or element features. Bellagarda et al.'s findings accord with Rampone et al.'s [144,145] suggestion that symmetry processing may not be localised solely to the right LOC but rather varies depending on task demands and the associated involvement of other mechanisms or cortical regions. For example, symmetry in faces may more strongly recruit the OFA, while the temporal processing component of Bellagarda et al.'s and Rampone et al.'s [144,145] tasks may recruit more medially located subregions (for example, involvement of parietal regions implicated in more general, non-symmetry-focused temporal integration studies [158] is one possibility).
4.5. Summary
- Symmetry-specific haemodynamic (fMRI and fNIRS) and electrophysiological (EEG) responses are consistently identified across studies. TMS has also been found to disrupt symmetry processing.
- Neuroimaging studies consistently find that symmetrical stimuli drive responses in extra-striate regions of the visual cortex, particularly around the lateral occipital cortex (LOC). The primary visual cortex (V1) does not generate a symmetry-specific response. Symmetry processing therefore occurs primarily in areas implicated in object perception rather than in early parts of the visual-cortical pathway.
- Recent imaging studies find that symmetry-specific responses around the LOC are generated in response to dynamic, temporally offset stimuli. Dynamic stimuli, requiring temporal integration and motion processing, appear to recruit different brain regions compared to static patterns.
5. Models of Mirror Symmetry: The Spatial Filtering Perspective
Over the years, a number of models of symmetry perception have been proposed in an attempt to capture the diverse range of psychophysical findings. Some models are limited by the types of stimuli they are based on [159], others by limitations in the way in which they could be tested [43], or in how they could feasibly be implemented within the constraints of known neural mechanisms [160,161] (see Treder [27] for a discussion of several of these models). Others persisted in the general zeitgeist for some time before being replaced or updated as understanding of the brain and visual system advanced. For example, the callosal hypothesis proposed that vertical axes were more readily detected because they align with the vertical meridian dividing the right and left visual fields, with the corpus callosum representing the axis, as it is coincidentally a vertical axis of symmetry for the brain [33,55,61,64].
The early stages of the visual system, such as the dorsal Lateral Geniculate Nucleus (LGNd) and V1, can be considered as a set of spatial filters [162,163,164,165]. Simple cells in V1 are selective for a range of features in an image (e.g., luminance polarity, orientation and spatial frequency). When we have a bank of such cells varying in sensitivity to one particular dimension while being equivalent in all others (e.g., a large response to a specific polarity but similar responses to motion, orientation and spatial frequency), that signal dimension can be de-multiplexed to extract information on the particular stimulus property. Only this information is then passed on to higher levels by the bank of cells, thus “filtering” the image for that specific property. These individual pieces of information are then re-assembled into a meaningful interpretation of the image by regions later in the visual hierarchy [166,167].
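The "de-multiplexing" idea described above can be illustrated with a toy bank of units that vary along a single dimension (here, preferred orientation) while being otherwise equivalent. The tuning shape, bandwidth and winner-take-all readout below are hypothetical choices for illustration, not measured V1 properties.

```python
import math

# Illustrative sketch: extracting one stimulus property (orientation) from a
# bank of filters that differ only in their preference along that dimension.
# Tuning parameters are assumed, not physiological values.

PREFERRED = [0, 30, 60, 90, 120, 150]  # preferred orientations of the bank (deg)
BANDWIDTH = 30.0                       # assumed tuning width (deg)

def unit_response(preferred_deg: float, stimulus_deg: float) -> float:
    """Bell-shaped response of one unit, periodic over 180 deg of orientation."""
    d = (stimulus_deg - preferred_deg + 90.0) % 180.0 - 90.0
    return math.exp(-(d / BANDWIDTH) ** 2)

def decode_orientation(stimulus_deg: float) -> float:
    """Read the stimulus orientation out of the bank as the most active unit's preference."""
    return max(PREFERRED, key=lambda p: unit_response(p, stimulus_deg))

assert decode_orientation(60) == 60
assert decode_orientation(92) == 90  # nearest preferred orientation wins
```

Only the decoded orientation, not the full image, is passed on, which is the sense in which the bank "filters" the image for that property.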
5.1. Component Process Model
The component process model introduced by Jenkins [79] was developed as a direct rebuttal to the callosal hypothesis. One of the aims of this model was to provide a more parsimonious explanation, informed by Hubel and Wiesel’s [105] receptive field measurements of V1 cells and our understanding of spatial filtering driven by psychophysical responses to stimulus features [38]. Jenkins [79] defines symmetry as the:
“two dimensional distribution of uniformly oriented point pair elements of non-uniform size and with collinear midpoints” (p. 433)
Jenkins proposes a sequential three-step model based on this definition. The first step involves the detection of symmetrically positioned elements falling along the same virtual line, orthogonal to the symmetry axis (e.g., horizontal virtual lines for a vertical axis); this is the orientational uniformity referred to in the definition above. Following the detection of orientational uniformity, the system needs to fuse these point pairs into a salient feature and then detect symmetry present in this feature. In essence, this model turns individual locally symmetric components into a singular striated structure with their midpoints representing the central symmetric axis. While it does not offer a mechanistic operationalisation of the three processes, the central tenets of the component process model have endured as the basis for subsequent models.
5.2. Spatial Filter Models
Like Jenkins’ [79] component process model, spatial filter models are loosely based on the concept of detection of orientational uniformity across the axis, measuring coalignment along a central axis and fusion of these into a single recognisable feature. Dakin and Watt [28] proposed a spatial filter model of symmetry perception, which, in essence, combines Jenkins’ [79] component process model with an initial spatial filtering stage.
5.2.1. The Dakin & Watt Model
Early iterations of this model by Dakin and Watt [28] were intended to produce a biologically plausible and more general-purpose mechanism by which to operationalise Jenkins’ [79] component processes. It has been broadly demonstrated that spatial receptive fields with various preferred stimulus orientations exist in the early cortical visual system, and these filters are sensitive to a range of image features, including (but not limited to) spatial frequency, luminance and orientation [162,165]. When these filters are relatively large and have a preference for low spatial frequencies, fine details of the image are removed, and only larger spatial scale variations in luminance are retained. Structured or systematic variations in luminance are unlikely to occur by chance and are instead interpreted as signifying something salient or meaningful in the environment (e.g., an object). Dakin and Watt [28] observed that when an image with a vertical axis of symmetry is filtered with elongated horizontal Difference-of-Gaussian (DoG) filters and the output is half-wave rectified, a number of horizontal regions of activity (blobs) of varying sizes are produced. These filters are thought to represent elongated receptive fields preferring orientations orthogonal to the axis. If symmetry is present in an image, these blobs cluster with their mid-points along the symmetry axis. The image is, therefore, converted into a group of parallel-aligned stripes. The greater the degree of symmetry present in the image, the larger or more numerous the blobs are, and the greater the likelihood that their centres will fall along the symmetry axis. These filtered blobs, therefore, provide a way to quantitatively measure both the amount of symmetry in a given image and also locate the symmetry axis [28].
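The filter-rectify-locate logic of this account can be illustrated with a minimal one-dimensional sketch. Everything here (the DoG parameters, the toy dot stimulus, and the use of a response-weighted centroid per row with the median taken across rows) is an illustrative simplification, not the published implementation:

```python
import numpy as np

def dog_kernel(width=15, sigma_c=1.5, sigma_s=3.0):
    """1-D Difference-of-Gaussians profile (illustrative parameters).
    Applied along a row, it stands in for an elongated horizontal filter."""
    x = np.arange(width) - width // 2
    gauss = lambda s: np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
    return gauss(sigma_c) - gauss(sigma_s)

def symmetry_axis_estimate(image):
    """Filter each row, half-wave rectify, and take the response-weighted
    centroid of each row's 'blob'; for a mirror-symmetric pattern the
    centroids cluster on the axis, so their median locates it."""
    kernel = dog_kernel()
    centroids = []
    for row in image:
        response = np.maximum(np.convolve(row, kernel, mode="same"), 0.0)
        if response.sum() > 1e-9:
            centroids.append((np.arange(len(response)) * response).sum()
                             / response.sum())
    return float(np.median(centroids))

# toy stimulus: 20 dot pairs mirrored about column 32 of a 65-wide image
rng = np.random.default_rng(0)
img = np.zeros((20, 65))
for i in range(20):
    d = rng.integers(5, 24)            # pair distance from the axis
    img[i, 32 - d] = img[i, 32 + d] = 1.0

print(symmetry_axis_estimate(img))     # ≈ 32.0, the true axis
```

Because each rectified row response is mirror symmetric about the axis, its centroid falls on the axis; in an asymmetric pattern the centroids scatter instead.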
Dakin and Watt’s [28] model suggests spatial filtering provides a domain-general way of operationalising Jenkins’ [79] component processes of detecting orientational uniformity, co-alignment and symmetry detection. Unlike previous models, spatial filtering does not require a complex, symmetry-specific mechanism and instead uses an established, general-purpose processing mechanism in the visual system [28,29]. Dakin and Watt’s [28] model can account for a range of findings from psychophysical studies, including resistance to positional jitter of the dot elements forming a pair [47], the importance of the integration region around the symmetry axis [47,79,83] and sensitivity to variations in luminance polarity across the axis [86,88,89,90].
5.2.2. The Dakin & Hess Model
One aspect of stimulus dependence that is not well accounted for by Dakin and Watt’s [28] model is the influence of element orientation. Orientation in symmetry can be considered in three ways: orientation of the symmetry axis [33], orientation of local symmetric elements [92,93,94] and orientation of the spatial filters processing the pattern [29,30,168]. The original spatial filter model by Dakin and Watt [28] emphasises orthogonal spatial filters, specifically horizontally oriented filters processing vertically symmetric patterns. Symmetric pairs are defined as two elements at equivalent spatial positions on either side of the axis, which means that filters necessarily have to run orthogonally across the axis (either as one larger filter or two paired filters with spatial separation, as suggested by Rainville & Kingdom [30] below) to capture both elements.
More recent models have challenged this assumption and have considered the role of filter orientation. In particular, Osorio [168] posited that vertical filters parallel to the symmetry axis were more useful for signalling symmetry in an array than horizontal filters orthogonal to the symmetry axis. Osorio’s [168] model uses dense isotropic noise patterns and Gabor filters in quadrature phase running parallel to the symmetry axis and emphasises the symmetry information closest to the symmetry axis. Dakin and Hess [29], however, observe that neither Osorio [168] nor Dakin and Watt [28] could account for the converging psychophysical evidence of a key role for a narrowly-tuned population of orientation channels in symmetry perception, as they only include broadband filters of a single orientation. In addition, they only chose filters that contained an implicit reflection in the two half fields.
To better understand the role of filter orientation in symmetry perception, more nuanced filtering models incorporating multiple orientations have been proposed. Directly inspired by the conflicting hypotheses proposed by Dakin and Watt [28] and Osorio [168], Dakin and Hess [29] developed a variation of the original spatial filtering model [28] and directly compared vertical and horizontal filters. DoG filtering has different outputs depending on the orientation of the filter, which Dakin and Hess [29] hypothesised would affect the strength of the symmetry signal produced. Comparing vertical and horizontal filters, Dakin and Hess [29] showed a clear advantage for horizontally filtered patterns in stimuli with a vertical symmetry axis but also found this was due to orthogonality against the pattern axis. That is, when the pattern had a horizontal axis, vertical filters produced a greater symmetry signal. As part of this study, Dakin and Hess [29] explored two modifications of the original Dakin and Watt [28] filtering model, which had used linear horizontal filtering and thresholding. Dakin and Hess [29] compared a quasi-linear model, comprising filtering and a feature-alignment measure only, with a version that employed an additional early half-wave rectification prior to the filtering process (the non-linear model of Dakin and Hess [29]). Where the early rectification version includes a rectification both before and after filtering, the quasi-linear model has no rectification prior to filtering. They found that the early rectification model with filters oriented orthogonal to the axis could adequately account for human performance for noise patterns that were either isotropically filtered or filtered orthogonal to the axis. When filtering occurs parallel to the axis of symmetry, the model fails to reach human performance levels regardless of the optimisation of the image window or the spatial frequencies used.
While the early rectification model consistently produced the strongest symmetry signal, neither model produced a strong and consistent fit across the range of existing psychophysical data [29]. An illustration of the Dakin and Hess [29] model for two symmetric stimuli with different orientation content is shown in Figure 8 below.
Figure 8.
Example of the Dakin and Hess model [29]. The original image is composed of 64 symmetrically positioned Gabors. (Ai–Avii) When the filter is horizontal (orthogonal to the axis), symmetry is successfully detected and blob magnitude peaks at the location of the central axis. (Bi–Bvii) However, when a vertical filter is used, there is no single identifiable peak, and the model, therefore, fails to detect the symmetry in the image. Red circles on the x-axis of the final panel (vii) show the model’s best estimate of axis location. Refer to Appendix A for a more detailed discussion of each panel and how the model is implemented.
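The distinction between the quasi-linear and early-rectification (non-linear) pipelines can be sketched in one dimension. In this sketch, |x| stands in for summed half-wave rectified on- and off-channels, and the filter parameters and mirror-correlation score are illustrative assumptions rather than the published model:

```python
import numpy as np

def dog(width=15, sigma_c=1.5, sigma_s=3.0):
    """1-D Difference-of-Gaussians profile (illustrative parameters)."""
    x = np.arange(width) - width // 2
    gauss = lambda s: np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
    return gauss(sigma_c) - gauss(sigma_s)

def response(row, early_rectify):
    """Quasi-linear: filter, then rectify. Non-linear: rectify first as
    well; |x| stands in for summed half-wave on/off channels."""
    signal = np.abs(row) if early_rectify else row
    return np.maximum(np.convolve(signal, dog(), mode="same"), 0.0)

def mirror_score(r, axis):
    """Correlation of the response with its own reflection about the axis."""
    return float(np.corrcoef(r, r[2 * axis - np.arange(len(r))])[0, 1])

row = np.zeros(65)
row[32 - 12], row[32 + 12] = 1.0, -1.0      # opposite-polarity mirror pair

print(mirror_score(response(row, early_rectify=False), 32))  # degraded
print(mirror_score(response(row, early_rectify=True), 32))   # ≈ 1.0
```

With matched polarities both pipelines behave alike; the early non-linearity matters only once polarity information has to be discarded.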
5.2.3. The Rainville & Kingdom Model
The final filtering model of interest here is the oriented spatial filter model proposed by Rainville and Kingdom [30], shown in Figure 9. Dakin and Hess [29] argue that only filters oriented orthogonal and/or parallel to the symmetry axis can convey symmetry information. In contrast, Rainville and Kingdom [30] assert that there are special cases whereby symmetry may be better represented by combinations of mirror-symmetric oblique filters (i.e., differing by ±45° from vertical such that component elements in a symmetric pair differ by 90°). They argue that symmetry is coded by a combination of mirror-oriented filters occurring prior to mechanisms coding for parallel or orthogonal positioning with respect to the axis. Rather than using a single DoG filter with one dominant orientation (vertical or horizontal) and locating the symmetry axis at the peak of the response, Rainville and Kingdom [30] employed pairs of symmetrically oriented filters in anti-phase. A cross-correlation is then conducted across the entire image, with the axis signified by a “dip” in the size of the response of the filter pair. When positioned over the axis and appropriately separated in space, the two opposing filters are equally but oppositely stimulated such that the response sums to zero.
Figure 9.
Example of the Rainville and Kingdom [30] model. The same original image with 64 symmetrically positioned horizontal Gabors is filtered by pairs of oriented filters. As the filters are in opposite phase, their responses cancel at the symmetry axis, which is thus identifiable by the substantial negative change in the model outputs. (Ai–Av) Horizontally oriented filters produce the largest response. For (Bi–Bv) vertical and (Ci–Cv) mirrored oblique filters, the output is smaller and more sensitive to noise.
Importantly, the Rainville and Kingdom [30] model replicates the advantage of horizontal (orthogonal to the axis) filters and a relative deficit of symmetry information when filters are vertical (parallel to the axis). However, they also show that two filters of opposing but mirror symmetric oblique orientations produce a strong response to symmetry signals. While the magnitude of this response is less than with filters of the same orientation oriented orthogonal to the symmetry axis (i.e., horizontal), the pattern of performance across different local orientations (horizontal, vertical and mirrored oblique) aligns well with human performance [30].
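The cancellation principle behind the anti-phase pair can be sketched as follows; the filter profile, the fixed separation, and the one-dimensional sliding search are schematic assumptions standing in for the model's full cross-correlation across the image:

```python
import numpy as np

def mirror_pair_response(row, axis, sep, k):
    """Left filter k and its anti-phase mirror -k[::-1], centred sep
    pixels either side of a candidate axis position. Over the true axis
    of a mirror-symmetric pattern the two responses cancel exactly."""
    w = len(k) // 2
    left = row[axis - sep - w : axis - sep + w + 1]
    right = row[axis + sep - w : axis + sep + w + 1]
    return float(k @ left + (-k[::-1]) @ right)

# symmetric row: a random left half mirrored onto the right of column 32
rng = np.random.default_rng(1)
half = rng.standard_normal(32)
row = np.concatenate([half, [0.0], half[::-1]])    # axis at index 32

k = np.array([-1.0, -2.0, 0.0, 2.0, 1.0])          # illustrative odd filter
resp = [abs(mirror_pair_response(row, a, 8, k)) for a in range(16, 49)]
print(int(np.argmin(resp)) + 16)                   # → 32: dip at the true axis
```

The cancellation holds for any filter shape, because the right-hand patch of a symmetric pattern is exactly the mirror of the left-hand patch; off the axis, the two responses are generically unrelated and their sum stays non-zero.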
5.3. Assumptions of Spatial Filter Models
It is important to note that considering mirror symmetry perception from a spatial filtering perspective necessitates a few key assumptions (many of which are acknowledged by Dakin and Watt [28] and Dakin and Hess [29]). The precise implications of these assumptions for patterns with varying local features are explored below. However, while spatial filter models are biologically plausible and can account for a wide range of key features of symmetry perception in the literature, a number of other characteristics are not well encapsulated by the published models and imply that later stages of processing are also involved.
Assumption 1: Symmetry processing occurs in V1. The involvement of spatial filters as the primary mechanisms for symmetry perception implies that this may occur early in the visual hierarchy (e.g., V1), where neurons are assumed to behave like spatial filters [28,162]. By necessity, therefore, symmetry perception must be retinotopic and has been modelled as a predominantly first-order process driven by changes in luminance across the image. The Dakin models, in particular [28,29,80], emphasise the importance of short-range filters with a low spatial frequency preference, meaning that the symmetry pairs closest to the axis may be most salient, and also necessitating the loss of high-frequency information during the formation of response blobs. Information that cannot be conveyed solely by symmetrically positioned pairs of the same luminance polarity is thus lost and needs to be accounted for via other mechanisms (e.g., second-order processing or attention-based matching).
Assumption 2: Global, but not local, orientation information must be accounted for. Spatial filtering models are predicated on low-level processing to detect symmetry information. Filter models have typically been tested on dense, isotropic noise patterns with little intrinsic orientation structure [29,30] and have not assessed the impact of local element orientation information on symmetry detection. This is largely based on the fact that Koeppl and Morgan [92] found that rotating line elements about their centre point, thus changing their orientation relative to the symmetry axis, had little impact on the salience of the symmetry.
Filter models instead emphasise the introduction of orientation content by way of the response of oriented filters [30]. However, as shown in Bellagarda et al. [91], local element orientation information has a clear role in symmetry perception, which directly contradicts Koeppl and Morgan’s [92] findings. This likely occurs because Koeppl and Morgan [92] only included conditions where element orientation was always mirrored within a pair, satisfying the requirements of Equation (1) in terms of both position and orientation information. Bellagarda et al. [91] combined unmirrored orientations within a pair, which means that Equation (1) is satisfied in terms of position only, therefore requiring additional second-order processing to remove conflicting orientation information. Given this more recent finding, spatial filtering models need to be revised in order to account for the role of both local and global orientation in symmetry perception.
Assumption 3: Attention-based strategies are required for unmatched polarities. Spatial filter models also assume that variations in luminance information within element pairs cannot be detected via pre-attentive mechanisms, which are largely reliant on first-order detection mechanisms alone [28]. Thus, these models are not easily reconciled with data from recent experiments using symmetric pairs containing elements of opposing polarities, where symmetry perception is possible, albeit with elevated thresholds (e.g., [86]). Detection via spatial filtering relies on paired elements sharing the same luminance polarity. In the Dakin and Watt [28] model, both elements must have the same luminance polarity to appropriately stimulate the same elongated horizontal filter. Dakin and Hess [29] attempt to address this constraint by including an additional rectification step in their early-rectification (non-linear) model to eliminate this polarity sensitivity, meaning that the model should work equally well for opposite-polarity patterns. In Rainville and Kingdom’s [30] conceptualisation, opposite-polarity elements can be detected by the individual filters on either side of the axis, but they would produce responses with the same sign and thus not generate the decrease in response required to signal the presence of symmetry. This is inconsistent with the findings of Bellagarda et al. [86] and others, which consistently show that symmetry is still detectable in opposite-polarity patterns when attention-based point matching is precluded. This finding cannot be accounted for by the models of either Dakin and Hess [29] or Rainville and Kingdom [30].
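The polarity limitation of an anti-phase pair scheme of the kind Rainville and Kingdom describe can be demonstrated directly; the filter values and separation below are arbitrary illustrative choices:

```python
import numpy as np

def pair_response(row, axis, sep, k):
    """Anti-phase mirror filter pair straddling a candidate axis
    (schematic; the filter and separation are illustrative only)."""
    w = len(k) // 2
    left = row[axis - sep - w : axis - sep + w + 1]
    right = row[axis + sep - w : axis + sep + w + 1]
    return float(k @ left + (-k[::-1]) @ right)

k = np.array([1.0, 2.0, 3.0, -2.0, -1.0])          # arbitrary small filter
same = np.zeros(65)
same[32 - 8] = same[32 + 8] = 1.0                  # matched-polarity pair
opp = np.zeros(65)
opp[32 - 8], opp[32 + 8] = 1.0, -1.0               # opposite-polarity pair

print(pair_response(same, 32, 8, k))  # 0.0: responses cancel, axis signalled
print(pair_response(opp, 32, 8, k))   # 6.0: same-sign responses add, no dip
```

Flipping the polarity of one element flips the sign of one filter's response, so the pair's output no longer cancels at the axis and the "dip" that signals symmetry disappears.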
Assumption 4: Only the axis is important for symmetry perception. Many spatial filtering models assert that higher-order structures, element orientation and unmatched luminance polarities are only detectable via attention-based point-matching and not perceptual mechanisms; filter models thus cannot account for symmetry perception when these features are present. Therefore, the detection mechanism must incorporate second-order processing to identify symmetry based on contrast rather than luminance variations. Second-order processing would extract the contrast envelope via rectification so that symmetry can still be identified based on the consistent positioning of contrast envelopes across the axis. While this is similar to the rectification model proposed by Dakin and Watt [28], they employed one large filter that spanned the axis, and work using varied orientation combinations shows that different detectors are needed for each element.
However, second-order processing does not account for the beneficial presence of higher-order structure (in the form of correlational quadrangles [107]), which can strengthen the symmetry signal. So, we also need to consider configural information or sub-structural features of the array. This might be achieved through the implementation of Gestalt perceptual processes like good continuation or proximity/grouping of individual substructures, such as correlational quadrangles. Detecting higher-order structures could benefit symmetry perception as it reduces the incidence of accidental symmetric pairings in an array through the introduction of an explicit internal structure, as alluded to by Wagemans et al. [41].
Grouping processes also provide additional reference frames for the symmetry axis and reduce incorrect rejection of imperfect symmetries when viewed from alternative viewpoints [21]. It is also likely that other configural features contribute to this process, such as the outer boundary of the figure (Wenderoth [58]) or, in some stimuli, features like global motion [95]. Neuroimaging results support this notion—symmetry perception appears to be primarily associated with higher object-processing regions of the cortex [67,122,124,149], which may emphasise the integration of configural features into a single coherent global representation, rather than a collection of individual elements with particular lower-level features.
Assumption 5: Symmetry perception is spatial but not temporal. Although sporadic, there has been interest in the temporal characteristics of mirror symmetry perception, particularly temporal integration. Investigation of symmetry perception in dynamic patterns is an important next step in understanding the role of symmetry in vision, given the visual system’s ability to recognise symmetry in fragmented or continuously changing real-world scenes [123]. Established models of symmetry perception, however, do not yet accommodate dynamic symmetries and make no specific predictions about the temporal features of symmetry perception.
Visual mechanisms also differ in sensitivity to stimulus features and time-course. For example, the second-order mechanisms implicated in processing symmetries with unmatched polarities have been shown to be less sensitive and more sluggish than their first-order counterparts in some contexts [169], but this temporal difference is missed with static images [89,90]. While the impact of luminance polarity, element orientation and axis orientation are well understood in static symmetries, they have not been considered in a dynamic context. Mapping how different stimulus features alter the period over which symmetry information can be integrated would make a useful contribution to understanding how symmetry is processed in the brain and point to how temporal integration of mirror symmetry might augment existing spatial filter models.
Previous studies have tended to use stimulus presentations where all elements in an array are presented simultaneously for a fixed duration. This cannot address whether symmetric information must be presented concurrently in order for symmetry to be detected. Yet, several studies have now shown the visual system actually allows for some tolerance to temporal delay. Temporal integration is a more general and less critical feature of symmetry perception but underscores the need for elements to fall within the same temporal window to avoid aberrant detection of false symmetries (e.g., two different objects that happen to co-occur in a short period). From a biological perspective, this feature of the system is quite useful, as it also allows for appreciation of change in the environment over time (e.g., recognition of a symmetric object as it moves through and is partially occluded or camouflaged by its environment).
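A temporal-window constraint of this kind could be sketched as a gating rule on candidate pairs. The 60 ms default below reflects the persistence estimates discussed later in this review, but both the value and the pairing rule are purely illustrative:

```python
import numpy as np

def temporally_valid_pairs(onsets_ms, positions, axis, window_ms=60.0):
    """Accept a mirror-positioned pair only if its two onsets fall within
    a shared temporal window; the 60 ms default follows the persistence
    estimates discussed in the text but is an illustrative parameter."""
    valid = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            mirrored = np.isclose(positions[i] + positions[j], 2 * axis)
            coincident = abs(onsets_ms[i] - onsets_ms[j]) <= window_ms
            if mirrored and coincident:
                valid.append((i, j))
    return valid

# one synchronous mirror pair and one mirror pair offset by 100 ms
positions = [20, 44, 10, 54]
onsets = [0.0, 40.0, 0.0, 100.0]
print(temporally_valid_pairs(onsets, positions, axis=32))  # → [(0, 1)]
```

Under such a rule, positionally mirrored elements that are too far apart in time are rejected, which captures the intuition that temporal gating prevents two unrelated objects from being fused into a false symmetry.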
5.4. Summary
- Spatial filtering models are the predominant theories that explain the mechanisms behind symmetry processing. Key models include Dakin and Watt [28], Dakin and Hess [29], and Rainville and Kingdom [30].
- These models make five key assumptions that are contradicted by recent psychophysical and imaging findings: (1) symmetry processing occurs in the primary visual cortex; (2) global, but not local, symmetry information is key to symmetry perception; (3) attention-based mechanisms are required to detect symmetry when element features, such as luminance and orientation, are not reflected over the axis; (4) only the axis and surrounding integration region are important for symmetry perception to occur; and (5) symmetry processing is a spatial, but not temporal, process.
- The discord across the symmetry literature suggests that new models that incorporate recent findings within a biologically plausible framework are needed.
6. Beyond Spatial Filters: Considerations for Future Models and Mechanisms?
If existing models do not have sufficient explanatory power in the context of the current literature, what is the next gap in our understanding of how and where symmetry perception occurs in the visual system? In essence, the review of the current literature presented here emphasises the need for a more robust, flexible and potentially multi-level model of symmetry perception that is ultimately less reliant on the processing of low-level stimulus features.
One possibility is a more complex spatial filtering-based model, and many of these have been suggested over the years in various forms [68,170,171,172,173]. This class of model would likely require a complex combination of multiple filter banks of different sizes, orientations and distributions in order to cover relationships across varying symmetric axis angles, with the necessary constraint that the filter pairs capture reflection across the axis. Filters would also have to vary broadly in terms of preference for orientation, phase, luminance polarity and so on, with many potential pairings between filters across space. This scope and complexity make such a model difficult to implement concisely and realistically without further specification. It also risks proposing a complex, computationally demanding visual mechanism that is not readily operationalised in a biologically plausible manner, as per criticism of recent models such as the Voronoi tessellation model [160] or the holographic weight of evidence model [161]. Given this, there is a risk that models incorporating higher-level image properties would be hard to constrain, given our current understanding of the visual system. This is a similar issue to that faced by the early models of Corballis and Roldan [33], Herbert and Humphrey [61], Julesz and Chiang [174] and others, which can account for a very broad range of findings but without clear parameterisation or implementation in substantiated visual mechanisms. However, we do know that this kind of complex interaction does occur in complex object identification: lower-order V1 information is passed through to more selective regions of the extrastriate visual cortex along the ventral stream. It may be that the extraction of symmetry information is one intermediate step along this pathway, the point at which symmetry becomes a meaningful and useful visual feature. However, this remains to be seen in future studies.
Future research endeavours need to incorporate non-spatial features such as temporal delay. Spatial filtering operates at early stages of the visual system [80,103] and must be an initial processing stage, yet neuroimaging and neurostimulation research consistently finds that symmetry perception is not associated with an increase in V1 activity, as initially hypothesised based on prevailing interpretations of psychophysical findings [121,122,132]. Others have argued that symmetry processing occurs in higher extra-striate regions, such as the lateral occipital cortex (LOC), where inputs from multiple earlier visual areas are combined to facilitate object processing [127,128,129]. Rather than being dependent on a single early mechanism, some argue that the percept of symmetry evolves from the integration of a range of different image features in object-sensitive regions [127,128,129]. The speed or ease of symmetry perception is changed by different orientations, polarities, and so on, but it is not eliminated when these features are not congruent with positional symmetry information. Given this, we need to account for these element-level effects in attempts to model symmetry mechanisms.
We can reject models which are based on single, large-oriented receptive fields like that proposed by Dakin and Hess [29], but a variation on the matched pairs of smaller receptive fields like those used in the Rainville and Kingdom [30] model may still be an appropriate method for the early stages involved in detecting visual mirror symmetry. These appear to be clustered primarily along the horizontal and vertical axes of symmetry in the previously identified integration regions [59,63] but need to be compared across multiple separations and orientations. It may be the pairing of two local receptive fields selected from particularly useful orientation combinations (for example, the intrinsically mirrored conditions that had the lowest detection thresholds in Bellagarda et al. [91], such as the horizontal and mirrored obliques) that makes the strongest contribution to detecting the existing positional symmetry in the pattern. The Rainville and Kingdom [30] model does not incorporate a second-order system focused on detecting the contrast envelopes, which would be needed for more complex patterns. The inclusion of this secondary mechanism would account for the lower sensitivity and longer persistences identified in the unmirrored conditions (such as the 90° unmatched, 45° unmatched or matched obliques). These first- and second-order systems work in parallel in the visual system, and we may rely on the one providing the strongest signal for a given stimulus input.
Moving beyond positional symmetry at an element level, Bellagarda et al. [107] also show that the outputs of early receptive fields can be combined in a manner consistent with good continuation to form larger objects, which facilitates symmetry perception through the reduction of spurious noise. While this is not included in earlier filtering models, this would align with Wagemans, van Gool, Swinnen and van Horebeek’s [41] bootstrap model. It is also consistent with suggestions that symmetry may be a useful cue for detecting and discriminating individual objects in the environment and appears to be processed in object-focused regions of the brain, such as the LOC and surrounds.
Finally, Bellagarda et al. [86,91,107] have also shown that the mechanisms facilitating the matching of symmetrically positioned elements have an extended time course. This allows for temporal delays between left and right elements to be accommodated, and symmetry information is not lost as objects move through a changing environment. While this time course varies for stimuli predominantly detected via the first- or second-order systems, studies have consistently shown that this is limited to approximately 60 ms, even if persistence estimates for second-order conditions appear to exceed this. It may be that this 60 ms limit is imposed later in the processing pathway.
Summary
- Early models of symmetry processing are limited in terms of the number and types of parameters they are able to incorporate and are challenging to realistically implement within known characteristics of the visual system.
- Models using large, coarse filtering mechanisms can be rejected. It is likely that symmetry processing involves a combination of first- and second-order mechanisms operating in parallel.
- Neuroimaging studies suggest V1 is not preferentially activated by mirror symmetry in patterns, as predicted by spatial filtering models. Object-processing areas, such as the LOC, are consistently implicated in symmetry processing tasks.
- Future models or mechanisms will need to incorporate banks of filters with varied spatial and temporal sensitivities, distributions and sizes.
7. Conclusions
Symmetry is a fundamental visual feature that helps simplify complex visual scenes by introducing redundancy. For symmetry to serve this function effectively in the real world, it must remain detectable and useful under various viewing conditions, such as changes in luminance, viewing angle, or motion. If symmetry were not detectable in such circumstances, it would lose its biological significance. Thus, the perception of symmetry must be flexible and dynamic, extending beyond basic static visual patterns.
The research studies reviewed here consistently show symmetry perception does not rely solely on basic features such as luminance polarity or orientation. While early processes are sensitive to these features, symmetry can still be detected when these features are discordant, although with reduced sensitivity. This suggests that second-order processing plays an important role and supports symmetry detection based on consistent contrast positioning, even when luminance or orientation varies.
However, second-order processing cannot account for the influence of higher-order image structures, such as correlational quadrangles, which enhance symmetry perception. These features help to group elements into a coherent structure, reducing accidental pairings and enhancing symmetry detection, especially when viewed from different perspectives. Neuroimaging studies support the idea that symmetry perception is rooted in higher-level object-processing regions of the brain, which integrate configural features into a global representation. This suggests that symmetry perception cannot be considered in terms of simple feature matching; it requires the integration of various perceptual cues into a unified whole.
We can reject models that are based on single large oriented receptive fields like those proposed by Dakin and Hess [29]. Matched pairs of smaller receptive fields, like those used in Rainville and Kingdom [30], may still be an appropriate method for detecting mirror symmetry based on luminance signals. The Rainville and Kingdom [30] model does not incorporate a second-order system for detecting the position of contrast envelopes, which would be needed for more complex patterns.
First- and second-order systems work in parallel in the visual system, and we may rely on whichever provides the strongest signal for a given stimulus input. Moving beyond positional symmetry at an element level, we also show that the outputs of early receptive fields can be combined in a manner consistent with good continuation to form larger objects, which facilitates symmetry perception through the reduction of spurious noise. While this is not included in earlier filtering models, this would align with Wagemans, van Gool, Swinnen and van Horebeek’s [41] bootstrap model. It is also consistent with suggestions that symmetry may be a useful cue for identifying individual objects and appears to be processed in object-focused regions of the brain, such as the LOC and surrounds. Finally, mechanisms facilitating the matching of symmetrically positioned elements appear to have an extended time course. This allows for temporal delays between elements to be accommodated, and symmetry information is not lost as objects move through a changing environment.
Overall, we have shown that symmetry detection is remarkably robust to variations in low-level visual features (luminance polarity and orientation), to higher-order configural features (correlational quadrangles) and to the introduction of temporal delay. As such, we emphasise the need for future modelling efforts to consider potential underlying mechanisms that can account for such features and are not restricted to first- and/or second-order processing in isolation. Rather, models need to emphasise positional symmetry information and include a method of accounting for configural features beyond orthogonal pairings, while also accounting for the temporal dependency of the symmetry signal. This may be achieved with one overarching model, or we may need to consider the necessity of several intersecting models at different levels of the visual hierarchy that depend on task demands or stimulus features. One way to explore the necessary features of these models further is to focus on variability in brain regions implicated in symmetry perception for different types of intermediate stimuli, particularly correlational quadrangles. Further, mapping of the cortical symmetry response could be better achieved by mapping sensitivity to different forms of transformation and considering a change in response location (e.g., do some types of symmetries support object detection and are therefore more extrastriate, while others facilitate object invariance and may therefore lie closer to V1?), or by a suite of studies whereby the stimulus is held constant but the participant's task instructions vary (does this change which regions are implicated, and when?). While it is unclear at this stage how such a model may be implemented, it will likely be possible to draw inspiration from higher-order object processing, emphasising the integration of discrete pieces of visual information into a meaningful, coherent whole.
Author Contributions
Conceptualization, C.A.B. and D.R.B.; methodology, C.A.B. and D.R.B.; writing—original draft preparation, C.A.B.; writing—review and editing, C.A.B., D.R.B., J.E.D., P.V.M. and J.B.; visualization, C.A.B., D.R.B. and J.E.D.; supervision, D.R.B., J.E.D. and J.B. All authors have read and agreed to the published version of the manuscript.
Funding
This research was supported by Australian Research Council Grants DP160104211 and DP190103474 to D.R.B.
Data Availability Statement
No new data were generated for this review.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A. Implementation of Dakin & Hess’ Oriented Spatial Filtering Model
Like Dakin and Watt’s [28] original conceptualisation, Dakin and Hess’ [29] model is based on filtering, coalignment and axis identification; vertically symmetric images are convolved with elongated Gabor filters with a low centre spatial frequency. This produces “blobs” of varying sizes corresponding to signal element pairs, which are hypothesised to have mid-points that cluster on the axis of symmetry. The greater the degree of alignment along the axis and the larger the number of aligned blobs, the more readily symmetry should be perceived. Dakin and Hess [29] initially proposed two versions of this model: a quasi-linear model that was most similar to Dakin and Watt’s [28] original model, and a second, non-linear model containing an additional half-wave rectification prior to filtering. Given the similarity in output from the two models, we focus on the quasi-linear model, as this was their final version and is most similar to both Dakin and Watt’s [28] original model and Rainville and Kingdom’s [30] model. Both horizontal and vertical filters are used, oriented at 90° and 0°, respectively, relative to the vertical axis.
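The filtering stage described above can be sketched in code. The following is an illustrative reconstruction, not the authors' implementation; the filter size, wavelength and bandwidth constant are hypothetical choices made for demonstration, with only the 0.5 aspect ratio (filters twice as long as wide) taken from the model description.

```python
import numpy as np

def gabor_filter(size, wavelength, orientation, aspect_ratio=0.5, sigma=None):
    """Elongated Gabor filter; aspect_ratio=0.5 makes it twice as long as wide."""
    if sigma is None:
        sigma = 0.56 * wavelength  # assumed bandwidth constant, for illustration
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates so the carrier is modulated along `orientation`
    # (radians; 0 gives a vertically elongated filter, pi/2 a horizontal one).
    xr = x * np.cos(orientation) + y * np.sin(orientation)
    yr = -x * np.sin(orientation) + y * np.cos(orientation)
    envelope = np.exp(-(xr ** 2 + (aspect_ratio * yr) ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def convolve_same(image, kernel):
    """Naive 'same'-size 2D convolution (avoids a SciPy dependency)."""
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    flipped = kernel[::-1, ::-1]
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out
```

Convolving a symmetric dot image with such a filter yields the “blob” response map that the subsequent thresholding stage operates on.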
The Dakin and Hess [29] model is shown in Figure 8 in our review for illustrative purposes. Each symmetric image (Figure 8(Ai,Bi)) was convolved with a single Gabor filter oriented either horizontally (Figure 8(Aii)) or vertically (Figure 8(Bii)). The spatial aspect ratio of the Gabor filter was set to 0.5, meaning that filters were twice as long as they were wide. Half width at half height was 26 pixels, and Gabors had a spatial frequency of 45 cycles per image. Figure 8(Aiii,Biii) shows the filter response for the image, where the brightness of a region increases in proportion to the filter response. This response is then thresholded to determine which regions differ significantly from the average background, shown in Figure 8(Aiv,Biv) as red (signal) and blue (noise). Filter responses of a similar magnitude of difference from the background tend to cluster together into “blobs”. The number and size of the blobs present are used as a measure of signal strength in the filter response to the pattern. To quantify this, we fit ellipses around these signal blobs using the MATLAB Image Processing Toolbox function regionprops. Larger ellipses indicate either large blobs or a cluster of smaller blobs in a similar region of space. This is shown in Figure 8(Av,Bv), where green ellipses indicate a signal higher than the background and red ellipses a signal lower than the background. Figure 8(Avi,Bvi) shows these ellipses overlaid on the filtered version of the original image, demonstrating good alignment of the ellipses with the signal information in the original image. Starting at the left edge of the image, we sum the magnitude of all ellipses overlapping or abutting each column along the x-axis.
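The thresholding and blob-extraction stage (handled by regionprops in the description above) can be approximated without the Image Processing Toolbox by thresholding the response at a fixed number of standard deviations from its mean and labelling connected regions. A minimal sketch, in which the threshold criterion and blob summary (size, centroid column, sign) are assumptions for illustration:

```python
import numpy as np

def find_blobs(response, n_sd=1.0):
    """Label 4-connected regions where |response - mean| exceeds n_sd standard
    deviations; returns a list of blobs as (size, centroid_column, sign)."""
    mu, sd = response.mean(), response.std()
    signal = np.abs(response - mu) > n_sd * sd
    labels = np.zeros(response.shape, dtype=int)
    blobs = []
    for i in range(response.shape[0]):
        for j in range(response.shape[1]):
            if signal[i, j] and labels[i, j] == 0:
                # Flood-fill one connected component of supra-threshold pixels.
                stack, pixels = [(i, j)], []
                labels[i, j] = len(blobs) + 1
                while stack:
                    r, c = stack.pop()
                    pixels.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < response.shape[0]
                                and 0 <= cc < response.shape[1]
                                and signal[rr, cc] and labels[rr, cc] == 0):
                            labels[rr, cc] = len(blobs) + 1
                            stack.append((rr, cc))
                cx = sum(c for _, c in pixels) / len(pixels)
                sign = 1 if response[pixels[0][0], pixels[0][1]] > mu else -1
                blobs.append((len(pixels), cx, sign))
    return blobs
```

Fitting an ellipse to each labelled region, as regionprops does, would add an orientation and axis lengths to each blob; the size and centroid are sufficient to illustrate the signal-strength measure.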
The entire magnitude of any ellipse contiguous with a given column is included in the sum for that column, regardless of how much of the ellipse falls within it (i.e., even if only the edge of an ellipse falls within a column, the ellipse’s entire magnitude is included in that column’s sum). A graph of the ellipse magnitude for each column is shown in Figure 8(Avii,Bvii). If the model has successfully identified symmetry in an image, the peak of this graph, where the most blob information falls, should coincide with the location of the symmetry axis (approximately 192 pixels from the left edge in our images). An example of a successful response is shown in Figure 8(Avii), where there is a large, systematic distribution of symmetry information across columns, peaking over the location of the symmetry axis. The red circle on the x-axis indicates the model’s best estimate of the axis location. Figure 8(Bvii), however, shows a comparatively unsuccessful model response; the magnitude of the blobs is small and noisy, and there is a substantial dip over the symmetry axis, suggesting an absence of identifiable symmetry information over this region. Indeed, two equally weighted, but weakly supported, location estimates are indicated (red circles). This is not surprising, as the filter and element orientations are opposed, and this filter-element orientation pairing is therefore sub-optimal for detecting symmetry in the image.
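The axis-identification step can be sketched as follows: for each image column, sum the full magnitude of every ellipse whose horizontal extent overlaps or abuts that column, then take the peak of the resulting profile as the axis estimate. Here each blob is simplified to a horizontal extent plus a scalar magnitude; this is an illustrative sketch, not the original code.

```python
def estimate_axis(blobs, width):
    """blobs: list of (x_min, x_max, magnitude) tuples. Any blob touching a
    column contributes its full magnitude to that column's sum, as in the
    model; returns the per-column sums and the peak column(s)."""
    column_sum = [0.0] * width
    for x_min, x_max, magnitude in blobs:
        for col in range(max(0, x_min), min(width, x_max + 1)):
            column_sum[col] += magnitude
    peak = max(column_sum)
    # Ties produce multiple, equally supported axis estimates, mirroring the
    # two red circles in the unsuccessful example described above.
    return column_sum, [c for c, v in enumerate(column_sum) if v == peak]
```

When the blobs cluster around a common column, the profile peaks over the symmetry axis; when the response is weak and scattered, the profile is flat or dipped and the estimate is unreliable.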
References
- Giurfa, M.; Eichmann, B.; Menzel, R. Symmetry perception in an insect. Nature 1996, 382, 459–461. [Google Scholar] [CrossRef] [PubMed]
- Lehrer, M. Shape perception in the honeybee: Symmetry as a global framework. Int. J. Plant Sci. 1999, 160, S51–S56. [Google Scholar] [CrossRef] [PubMed]
- Rodríguez, I.; Gumbert, A.; Ibarra, N.H.d.; Kunze, J.; Giurfa, M. Symmetry is in the eye of the ‘beeholder’: Innate preference for bilateral symmetry in flower-naïve bumblebees. Naturwissenschaften 2004, 91, 374–377. [Google Scholar] [CrossRef]
- Delius, J.D.; Nowak, B. Visual symmetry recognition by pigeons. Psychol. Res. 1982, 44, 199–212. [Google Scholar] [CrossRef]
- Mascalzoni, E.; Osorio, D.; Regolin, L.; Vallortigara, G. Symmetry perception by poultry chicks and its implications for three-dimensional object recognition. Proc. R. Soc. B Biol. Sci. 2011, 279, 841–846. [Google Scholar] [CrossRef]
- Zagorska-Marek, B. Mirror Symmetry of Life. In Current Topics in Chirality–From Chemistry to Biology; IntechOpen: London, UK, 2021. [Google Scholar]
- Heilbronner, E.; Dunitz, J.D. Reflections on Symmetry: In Chemistry–and Elsewhere; John Wiley & Sons: Zurich, Switzerland, 1993. [Google Scholar]
- Gross, D.J. The role of symmetry in fundamental physics. Proc. Natl. Acad. Sci. USA 1996, 93, 14256–14259. [Google Scholar] [CrossRef] [PubMed]
- Field, M.; Golubitsky, M. Symmetry in Chaos; A Search for Pattern in Mathematics, Art, and Nature, 2nd ed.; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2009. [Google Scholar]
- Hafri, A.; Gleitman, L.R.; Landau, B.; Trueswell, J.C. Where word and world meet: Language and vision share an abstract representation of symmetry. J. Exp. Psychol. Gen. 2022, 152, 509. [Google Scholar] [CrossRef]
- Pavlovic, B.; Trinajstic, N. On Symmetry and Asymmetry in Literature; Pergamon Press: Oxford, UK, 1986; Volume 10. [Google Scholar]
- McManus, I.C. Symmetry and asymmetry in aesthetics and the arts. Eur. Rev. 2005, 13, 157–180. [Google Scholar] [CrossRef]
- Cimabue. Santa Trinita Maestà; Uffizi Gallery: Florence, Italy, 1290. [Google Scholar]
- From Magdalena Kula Manchee (2020) and Patrick Tomasso (2018), Unsplash. Available online: https://unsplash.com/ (accessed on 15 August 2022).
- Bornstein, M.H.; Krinsky, S.J. Perception of symmetry in infancy: The salience of vertical symmetry and the perception of pattern wholes. J. Exp. Child Psychol. 1985, 39, 1–19. [Google Scholar] [CrossRef]
- Bode, C.; Helmy, M.; Bertamini, M. A cross-cultural comparison for preference for symmetry: Comparing British and Egyptians non-experts. Psihologija 2017, 50, 383–402. [Google Scholar] [CrossRef]
- Brown, J.R.; van der Zwan, R.; Brooks, A. Eye of the Beholder: Symmetry Perception in Social Judgments Based on Whole Body Displays. i-Perception 2012, 3, 398–409. [Google Scholar] [CrossRef]
- Rhodes, G.; Proffitt, F.; Grady, J.M.; Sumich, A. Facial symmetry and the perception of beauty. Psychon. Bull. Rev. 1998, 5, 659–669. [Google Scholar] [CrossRef]
- Wade, J.T. The relationships between symmetry and attractiveness and mating relevant decisions and behaviour: A review. Symmetry 2010, 2, 1081–1098. [Google Scholar] [CrossRef]
- Perrett, D.I.; Burt, D.M.; Penton-Voak, I.S.; Lee, K.J.; Rowland, D.A.; Edwards, R. Symmetry and Human Facial Attractiveness. Evol. Hum. Behav. 1999, 20, 295–307. [Google Scholar] [CrossRef]
- Pashler, H. Coordinate Frame for Symmetry Detection and Object Recognition. J. Exp. Psychol. Hum. Percept. Perform. 1990, 16, 150–163. [Google Scholar] [CrossRef]
- Bahnsen, P. Eine Untersuchung über Symmetrie und Asymmetrie bei visuellen Wahrnehmungen [A study of symmetry and asymmetry in visual perception]. Z. Für Psychol. 1928, 108, 129–154. [Google Scholar]
- Koffka, K. Principles of Gestalt psychology; Harcourt, Brace: New York, NY, USA, 1935. [Google Scholar]
- Gierl, H. Symmetry and Likeability: Prior Research and Transfer to the Field of Advertising. Mark. ZFP 2021, 43, 3–34. [Google Scholar] [CrossRef]
- Loy, G.; Eklundh, J.-O. Detecting symmetry and symmetric constellations of features. In Proceedings of the European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
- Salingaros, N.A. Symmetry gives meaning to architecture. Symmetry Cult. Sci. 2020, 31, 231–260. [Google Scholar] [CrossRef]
- Treder, M. Behind the Looking-Glass: A Review on Human Symmetry Perception. Symmetry 2010, 2, 1510–1543. [Google Scholar] [CrossRef]
- Dakin, S.C.; Watt, R.J. Detection of bilateral symmetry using spatial filters. Spat. Vis. 1994, 8, 393–413. [Google Scholar] [CrossRef] [PubMed]
- Dakin, S.C.; Hess, R. The spatial mechanisms mediating symmetry perception. Vis. Res. 1997, 37, 2915–2930. [Google Scholar] [CrossRef] [PubMed]
- Rainville, S.J.; Kingdom, F.A. The functional role of oriented spatial filters in the perception of mirror symmetry—Psychophysics and modeling. Vis. Res. 2000, 40, 2621–2644. [Google Scholar] [CrossRef]
- Arnheim, R. Art and Visual Perception: A Psychology of the Creative Eye (New Version, Expanded and Rev. ed.); University of California Press: Berkeley, CA, USA, 1974. [Google Scholar]
- Mach, E. Beiträge zur Analyse der Empfindungen; Gustav Fischer: Jena, Germany, 1886. [Google Scholar]
- Corballis, M.C.; Roldan, C.E. Detection of symmetry as a function of angular orientation. J. Exp. Psychol. Hum. Percept. Perform. 1975, 1, 221–230. [Google Scholar] [CrossRef]
- Bruce, V.G.; Morgan, M.J. Violations of symmetry and repetition in visual patterns. Perception 1975, 4, 239–249. [Google Scholar] [CrossRef]
- Jenkins, B. Spatial limits to the detection of transpositional symmetry in dynamic dot textures. J. Exp. Psychol. Hum. Percept. Perform. 1983, 9, 258–269. [Google Scholar] [CrossRef]
- Baylis, G.C.; Driver, J. Perception of symmetry and repetition within and across visual shapes: Part-descriptions and object-based attention. Vis. Cogn. 2001, 8, 163–196. [Google Scholar] [CrossRef]
- Bertamini, M. Sensitivity to reflection and translation is modulated by objectness. Perception 2010, 39, 27–40. [Google Scholar] [CrossRef]
- Royer, F.L. Detection of Symmetry. J. Exp. Psychol. Hum. Percept. Perform. 1981, 7, 1186–1210. [Google Scholar] [CrossRef] [PubMed]
- Palmer, S.E.; Hemenway, K. Orientation and symmetry: Effects of multiple, rotational, and near symmetries. J. Exp. Psychol. Hum. Percept. Perform. 1978, 4, 691–702. [Google Scholar] [CrossRef]
- Wagemans, J. Skewed Symmetry: A Nonaccidental Property Used to Perceive Visual Forms. J. Exp. Psychol. Hum. Percept. Perform. 1993, 19, 364–380. [Google Scholar] [CrossRef] [PubMed]
- Wagemans, J.; van Gool, L.; Swinnen, V.; van Horebeek, J. Higher order structure in regularity detection. Vis. Res. 1993, 33, 1067–1088. [Google Scholar] [CrossRef]
- Kahn, J.I.; Foster, D.H. Visual comparison of rotated and reflected random-dot patterns as a function of their positional symmetry and separation in the field. Q. J. Exp. Psychol. 1981, 33A, 155–166. [Google Scholar] [CrossRef]
- Freyd, J.; Tversky, B. Force of Symmetry in Form Perception. Am. J. Psychol. 1984, 97, 109–126. [Google Scholar] [CrossRef] [PubMed]
- Wertheimer, M. Laws of organization in perceptual forms. Psychol. Forschung 1923, 4, 301–350. [Google Scholar] [CrossRef]
- Wagemans, J.; Elder, J.H.; Kubovy, M.; Palmer, S.E.; Peterson, M.A.; Singh, M.; von der Heydt, R. A century of Gestalt psychology in visual perception: I. Perceptual grouping and figure-ground organization. Psychol. Bull. 2012, 138, 1172–1217. [Google Scholar] [CrossRef]
- Attneave, F. Some informational aspects of visual perception. Psychol. Rev. 1954, 61, 183. [Google Scholar] [CrossRef] [PubMed]
- Barlow, H.B.; Reeves, B.C. The versatility and absolute efficiency of detecting mirror symmetry in random dot displays. Vis. Res. 1979, 19, 783–793. [Google Scholar] [CrossRef]
- Apthorp, D.; Bell, J. Symmetry is less than meets the eye. Curr. Biol. 2015, 25, R267–R268. [Google Scholar] [CrossRef]
- Wainwright, J.B.; Scott-Samuel, N.E.; Cuthill, I.C. Overcoming the detectability costs of symmetrical coloration. Proc. R. Soc. B 2020, 287, 20192664. [Google Scholar] [CrossRef]
- Wagemans, J. Detection of Visual Symmetries. Spat. Vis. 1995, 9, 9–32. [Google Scholar] [CrossRef] [PubMed]
- Kootstra, G.; Nederveen, A.; Boer, B. Paying Attention to Symmetry. In Proceedings of the British Machine Vision Conference (BMVC2008), Leeds, UK, 1–4 September 2008. [Google Scholar]
- Meso, A.I.; Montagnini, A.; Bell, J.; Masson, G.S. Looking for symmetry: Fixational eye movements are biased by image mirror symmetry. J. Neurophysiol. 2016, 116, 1250–1260. [Google Scholar] [CrossRef]
- Chen, C.C.; Tyler, C.W. Symmetry: Modeling the effects of masking noise, axial cueing and salience. PLoS ONE 2010, 5, e9840. [Google Scholar] [CrossRef]
- Gurnsey, R.; Herbert, A.M.; Kenemy, J. Bilateral symmetry embedded in noise is detected accurately only at fixation. Vis. Res. 1998, 38, 3795–3803. [Google Scholar] [CrossRef]
- Corballis, M.C.; Roldan, C.E. On the perception of symmetrical and repeated patterns. Percept. Psychophys. 1974, 16, 136–142. [Google Scholar] [CrossRef]
- Fisher, C.B.; Bornstein, M.H. Identification of symmetry: Effects of stimulus orientation and head position. Percept. Psychophys. 1982, 32, 443–448. [Google Scholar] [CrossRef]
- Graham, N.V. Beyond multiple pattern analyzers modeled as linear filters (as classical V1 simple cells): Useful additions of the last 25 years. Vis. Res 2011, 51, 1397–1430. [Google Scholar] [CrossRef]
- Wenderoth, P. The role of pattern outline in bilateral symmetry detection with briefly flashed dot patterns. Spat. Vis. 1995, 9, 57–77. [Google Scholar] [CrossRef] [PubMed]
- Orban, G.A.; Vandenbussche, E.; Vogels, R. Human orientation discrimination tested with long stimuli. Vis. Res. 1984, 24, 121–128. [Google Scholar] [CrossRef]
- Wagemans, J. Parallel visual processes in symmetry perception: Normality and pathology. Doc. Ophthalmol. 1999, 95, 359–370. [Google Scholar] [CrossRef] [PubMed]
- Herbert, A.M.; Humphrey, G.K. Bilateral symmetry detection: Testing a callosal hypothesis. Perception 1996, 25, 463–480. [Google Scholar] [CrossRef]
- Innocenti, G.M.; Schmidt, K.; Milleret, C.; Fabri, M.; et al. The functional characterization of callosal connections. Prog. Neurobiol. 2022, 208, 102186. [Google Scholar] [CrossRef]
- Van Essen, D.C.; Maunsell, J.H.R. The connections of the middle temporal visual area (MT) and their relationship to a cortical hierarchy in the macaque monkey. J. Neurosci. 1983, 3, 2563–2586. [Google Scholar] [CrossRef] [PubMed]
- Julesz, B. Foundations of Cyclopean Perception; University of Chicago Press: Chicago, IL, USA, 1971. [Google Scholar]
- Braitenberg, V. Vehicles: Experiments in Synthetic Psychology; MIT Press: Cambridge, MA, USA, 1984. [Google Scholar]
- Braitenberg, V. Information Processing in the Cortex: Experiments and Theory; Springer: Berlin/Heidelberg, Germany, 1990. [Google Scholar]
- Bertamini, M.; Makin, A.D.J. Brain Activity in Response to Visual Symmetry. Symmetry 2014, 6, 975–996. [Google Scholar] [CrossRef]
- Cohen, E.H.; Zaidi, Q. Symmetry in context: Salience of mirror symmetry in natural patterns. J. Vis. 2013, 13, 22. [Google Scholar] [CrossRef] [PubMed]
- Jacobsen, T.; Höfel, L. Descriptive and evaluative judgment processes: Behavioral and electrophysiological indices of processing symmetry and aesthetics. Cogn. Affect. Behav. Neurosci. 2003, 3, 289–299. [Google Scholar] [CrossRef] [PubMed]
- Wagemans, J.; Van Gool, L.; D’ydewalle, G. Detection of symmetry in tachistoscopically presented dot patterns: Effects of multiple axes and skewing. Percept. Psychophys. 1991, 50, 413–427. [Google Scholar] [CrossRef]
- Wenderoth, P. The effects on bilateral-symmetry detection of multiple symmetry, near symmetry, and axis orientation. Perception 1997, 26, 891–904. [Google Scholar] [CrossRef]
- Joung, W.; Latimer, C. Tilt aftereffects generated by symmetrical dot patterns with two or four axes of symmetry. Spat. Vis. 2003, 16, 155–182. [Google Scholar] [CrossRef]
- Wagemans, J. The role of perceptual organization in visual perception. Percept. Psychophys. 1993, 54, 589–602. [Google Scholar] [CrossRef]
- Dickinson, J.E.; Cribb, S.J.; Riddell, H.; Badcock, D.R. Tolerance for local and global differences in the integration of shape information. Vis. Res. 2015, 15, 21. [Google Scholar] [CrossRef]
- Green, R.J.; Dickinson, J.E.; Badcock, D.R. The effect of spatiotemporal displacement on the integration of shape information. J. Vis. 2018, 18, 4. [Google Scholar] [CrossRef]
- Tan, K.W.S.; Dickinson, J.E.; Badcock, D.R. Detecting shape change: Characterizing the interaction between texture-defined and contour-defined borders. J. Vis. 2013, 13, 12. [Google Scholar] [CrossRef]
- Tan, K.W.S.; Dickinson, J.E.; Badcock, D.R. Discrete annular regions of texture contribute independently to the analysis of shape from texture. J. Vis. 2016, 16, 10. [Google Scholar] [CrossRef] [PubMed]
- Glass, L. Moire Effect from Random Dots. Nature 1969, 223, 578–580. [Google Scholar] [CrossRef] [PubMed]
- Jenkins, B. Component processes in the perception of bilaterally symmetric dot textures. Percept. Psychophys. 1983, 34, 433–440. [Google Scholar] [CrossRef]
- Dakin, S.C.; Herbert, A.M. The spatial region of integration for visual symmetry detection. Proc. R. Soc. Lond. 1998, 265, 659–664. [Google Scholar] [CrossRef]
- Kurki, I. Stimulus information supporting bilateral symmetry perception. Vis. Res 2019, 161, 18–24. [Google Scholar] [CrossRef]
- Bertamini, M.; Tyson-Carr, J.; Makin, A.D.J. Perspective Slant Makes Symmetry Harder to Detect and Less Aesthetically Appealing. Symmetry 2022, 14, 475. [Google Scholar] [CrossRef]
- Sawada, T.; Pizlo, Z. Detection of skewed symmetry. J. Vis. 2008, 8, 14. [Google Scholar] [CrossRef] [PubMed]
- Morales, D.; Pashler, H. No role for colour in symmetry perception. Nature 1999, 399, 115–116. [Google Scholar] [CrossRef]
- Gheorghiu, E.; Kingdom, F.A.; Remkes, A.; Li, H.C.O.; Rainville, S. The role of color and attention-to-color in mirror-symmetry perception. Sci. Rep. 2016, 6, 29287. [Google Scholar] [CrossRef]
- Bellagarda, C.A.; Dickinson, J.E.; Bell, J.; Badcock, D.R. The temporal integration windows for visual mirror symmetry. Vis. Res. 2021, 188, 184–192. [Google Scholar] [CrossRef]
- Cunningham, D.W.; Baker, C.L.; Peirce, J.W. Perception of shape and surface in visual images: Interaction of form and shading. J. Vis. 2017, 17, 4. [Google Scholar] [CrossRef]
- Mancini, M.; Sally, S.; Gurnsey, R. Detection of symmetry and anti-symmetry. Vision Res 2005, 45, 2145–2160. [Google Scholar] [CrossRef]
- Wenderoth, P. The effects of the contrast polarity of dot-pair partners on the detection of bilateral symmetry. Perception 1996, 25, 757–771. [Google Scholar] [CrossRef]
- Zhang, L.; Gerbino, W. Symmetry in opposite contrast dot patterns. Spat. Vis. 1992, 21, 95. [Google Scholar]
- Bellagarda, C.A.; Dickinson, J.E.; Bell, J.; Badcock, D.R. Selectivity for local orientation information in visual mirror symmetry perception. Vis. Res. 2023, 207, 108207. [Google Scholar] [CrossRef] [PubMed]
- Koeppl, U.; Morgan, M. Local orientation versus local position as determinants of perceived symmetry. Perception 1993, 22, 111. [Google Scholar]
- Locher, P.J.; Wagemans, J. Effects of element types and spatial grouping on symmetry detection. Perception 1993, 22, 565–587. [Google Scholar] [CrossRef]
- Saarinen, J.; Levi, D.M. Perception of mirror symmetry reveals long-range interactions between orientation-selective cortical filters. NeuroReport 2000, 11, 2133–2138. [Google Scholar] [CrossRef] [PubMed]
- Sharman, R.J.; Gheorghiu, E. Orientation of pattern elements does not influence mirror symmetry perception. J. Vis. 2019, 19, 151c. [Google Scholar] [CrossRef]
- Schiller, P.H.; Sandell, J.H.; Maunsell, J.H.R. Functions of the ON and OFF channels of the visual system. Nature 1986, 322, 824–825. [Google Scholar] [CrossRef]
- Schiller, P.H. The ON and OFF channels of the visual system. Trends Neurosci. 1992, 15, 86–92. [Google Scholar] [CrossRef] [PubMed]
- Badcock, D.R.; Clifford, C.W.; Khuu, S.K. Interactions between luminance and contrast signals in global form detection. Vis. Res 2005, 45, 881–889. [Google Scholar] [CrossRef]
- Edwards, M. Common-fate motion processing: Interaction of the On and Off pathways. Vis. Res 2009, 49, 429–438. [Google Scholar] [CrossRef]
- Edwards, M.; Badcock, D.R. Global motion perception: Interaction of the ON and OFF pathways. Vis. Res. 1994, 34, 2849–2858. [Google Scholar] [CrossRef]
- Tyler, C.W.; Hardage, L. Mirror Symmetry Detection: Predominance of Second-Order Pattern Processing Throughout the Visual Field. In Human Symmetry Perception and Its Computational Analysis; Tyler, C.W., Ed.; VSP: Leiden, The Netherlands, 1996; pp. 157–171. [Google Scholar]
- Brooks, A.; van der Zwan, R. The role of ON- and OFF-channel processing in the detection of bilateral symmetry. Perception 2002, 31, 1061–1072. [Google Scholar] [CrossRef]
- van der Zwan, R.; Leo, E.; Joung, W.; Latimer, C.; Wenderoth, P. Evidence that both area V1 and extrastriate visual cortex contribute to symmetry perception. Curr. Biol. 1998, 8, 889–892. [Google Scholar] [CrossRef] [PubMed]
- Ellemberg, D.; Allen, H.A.; Hess, R.F. Second-order spatial frequency and orientation channels in human vision. Vis. Res. 2006, 46, 2798–2803. [Google Scholar] [CrossRef]
- Hubel, D.H.; Wiesel, T.N. Receptive fields and functional architecture of the monkey striate cortex. J. Physiol. 1968, 195, 215–243. [Google Scholar] [CrossRef]
- Reynaud, A.; Hess, R.F. Properties of spatial channels underlying the detection of orientation-modulations. Exp. Brain Res. 2012, 220, 135–145. [Google Scholar] [CrossRef]
- Bellagarda, C.A.; Dickinson, J.E.; Bell, J.; Badcock, D.R. Contribution of higher-order structure to perception of mirror symmetry: Role of shapes and corners. J. Vis. 2023, 23, 4. [Google Scholar] [CrossRef]
- Pizlo, Z.; de Barros, J.A. The Concept of Symmetry and the Theory of Perception. Front. Comput. Neurosci. 2021, 15, 681162. [Google Scholar] [CrossRef]
- Carmody, D.P.; Nodine, C.F. Global Detection of Symmetry. Percept. Mot. Ski. 1977, 45, 1267–1273. [Google Scholar] [CrossRef]
- Hogben, J.H.; Julesz, B.; Ross, J. Short Term Memory for Symmetry. Vis. Res. 1976, 16, 861–866. [Google Scholar] [CrossRef]
- Jenkins, B. Temporal limits to the detection of correlation in transpositionally symmetric dot textures. Percept. Psychophys. 1983, 33, 79–84. [Google Scholar] [CrossRef] [PubMed]
- Niimi, R.; Watanabe, K.; Yokosawa, K. The role of visible persistence for perception of visual bilateral symmetry. Jpn. Psychol. Res. 2005, 47, 262–270. [Google Scholar] [CrossRef]
- Sharman, R.J.; Gheorghiu, E. Spatiotemporal and luminance contrast properties of symmetry perception. Symmetry 2018, 10, 220. [Google Scholar] [CrossRef]
- Sharman, R.J.; Gheorghiu, E. The role of motion and number of element locations in mirror symmetry perception. Sci. Rep. 2017, 7, 45679. [Google Scholar] [CrossRef] [PubMed]
- Sharman, R.J.; Gregersen, S.; Gheorghiu, E. Temporal dynamics of mirror-symmetry perception. J. Vis. 2018, 18, 10. [Google Scholar] [CrossRef]
- Beh, H.C.; Latimer, C.R. Symmetry Detection and Orientation Perception: Electrocortical Responses to Stimuli with Real and Implicit Axes of Orientation. Aust. J. Psychol. 1997, 49, 128–133. [Google Scholar] [CrossRef]
- Bertamini, M.; Silvanto, J.; Norcia, A.M.; Makin, A.D.J.; Wagemans, J. The Neural Basis of Visual Symmetry and Its Role in Mid and High Level Visual Processing. Ann. N. Y. Acad. Sci. 2018, 1426, 111–126. [Google Scholar] [CrossRef] [PubMed]
- Cattaneo, Z. The neural basis of mirror symmetry detection: A review. J. Cogn. Psychol. 2016, 29, 259–268. [Google Scholar] [CrossRef]
- Van Meel, C.; Baeck, A.; Gillebert, C.R.; Wagemans, J.; Op de Beeck, H.P. The representation of symmetry in multi-voxel response patterns and functional connectivity throughout the ventral visual stream. Neuroimage 2019, 191, 216–224. [Google Scholar] [CrossRef]
- Audurier, P.; Héjja-Brichard, Y.; De Castro, V.; Kohler, P.J.; Norcia, A.M.; Durand, J.-B.; Cottereau, B.R. Symmetry processing in the macaque visual cortex. Cereb. Cortex 2021. [Google Scholar] [CrossRef]
- Beck, D.M.; Pinsk, M.A.; Kastner, S. Symmetry perception in humans and macaques. Trends Cogn. Sci. 2005, 9, 405–406. [Google Scholar] [CrossRef]
- Sasaki, Y.; Vanduffel, W.; Knutsen, T.; Tyler, C.; Tootell, R. Symmetry activates extrastriate visual cortex in human and nonhuman primates. Proc. Natl. Acad. Sci. USA 2005, 102, 3159–3163. [Google Scholar] [CrossRef]
- Tyler, C.W.; Baseler, H.A.; Kontsevich, L.L.; Likova, L.T.; Wade, A.R.; Wandell, B.A. Predominantly extra-retinotopic cortical response to pattern symmetry. Neuroimage 2005, 24, 306–314. [Google Scholar] [CrossRef] [PubMed]
- Cattaneo, Z.; Mattavelli, G.; Papagno, C.; Herbert, A.; Silvanto, J. The role of the human extrastriate visual cortex in mirror symmetry discrimination: A TMS-adaptation study. Brain Cogn. 2011, 77, 120–127. [Google Scholar] [CrossRef]
- Bona, S.; Cattaneo, Z.; Silvanto, J. The causal role of the occipital face area (OFA) and lateral occipital (LO) cortex in symmetry perception. J. Neurosci. 2015, 35, 731–738. [Google Scholar] [CrossRef]
- Bona, S.; Herbert, A.; Toneatto, C.; Silvanto, J.; Cattaneo, Z. The causal role of the lateral occipital complex in visual mirror symmetry detection and grouping: An fMRI-guided TMS study. Cortex 2014, 51, 46–55. [Google Scholar] [CrossRef]
- Grill-Spector, K.; Kourtzi, Z.; Kanwisher, N. The lateral occipital complex and its role in object recognition. Vis. Res. 2001, 41, 1409–1422. [Google Scholar] [CrossRef] [PubMed]
- Grill-Spector, K. The neural basis of object perception. Curr. Opin. Neurobiol. 2003, 13, 159–166. [Google Scholar] [CrossRef] [PubMed]
- Grill-Spector, K.; Malach, R. The human visual cortex. Annu. Rev. Neurosci. 2004, 27, 649–677. [Google Scholar] [CrossRef]
- Cattaneo, Z.; Bona, S.; Silvanto, J. Different neural representations for detection of symmetry in dot-patterns and in faces: A state-dependent TMS study. Neuropsychologia 2020, 138, 107333. [Google Scholar] [CrossRef]
- Makin, A. Sustained Posterior Negativity Catalogue, 2011–2024. Available online: https://www.bertamini.org/lab/SPNcatalogue.html (accessed on 15 August 2022).
- Makin, A.D.J.; Rampone, G.; Pecchinenda, A.; Bertamini, M. Electrophysiological responses to visuospatial regularity. Psychophysiology 2013, 50, 1045–1055. [Google Scholar] [CrossRef] [PubMed]
- Hofel, L.; Jacobsen, T. Electrophysiological indices of processing aesthetics: Spontaneous or intentional processes? Int. J. Psychophysiol. 2007, 65, 20–31. [Google Scholar] [CrossRef]
- Kohler, P.J.; Cottereau, B.R.; Norcia, A.M. Dynamics of perceptual decisions about symmetry in visual cortex. NeuroImage 2018, 167, 316–330. [Google Scholar] [CrossRef]
- Makin, A.D.J.; Rampone, G.; Wright, A.; Martinovic, J.; Bertamini, M. Visual symmetry in objects and gaps. J. Vis. 2014, 14, 12. [Google Scholar] [CrossRef]
- Makin, A.D.J.; Rampone, G.; Karakashevska, E.; Bertamini, M. The extrastriate symmetry response can be elicited by flowers and landscapes as well as abstract shapes. J. Vis. 2020, 20, 11. [Google Scholar] [CrossRef]
- Palumbo, L.; Bertamini, M.; Makin, A. Scaling of the extrastriate neural response to symmetry. Vis. Res 2015, 117, 1–8. [Google Scholar] [CrossRef]
- Cattaneo, Z.; Bona, S.; Silvanto, J. Not all visual symmetry is equal: Partially distinct neural bases for vertical and horizontal symmetry. Neuropsychologia 2017, 104, 126–132. [Google Scholar] [CrossRef]
- Wang, M.; Wu, F.; van Tonder, G.; Wu, Q.; Feng, Y.; Yu, Y.; Yang, J.; Takahashi, S.; Ejima, Y.; Wu, J. Electrophysiological response to visual symmetry: Effects of the number of symmetry axes. Neurosci. Lett. 2022, 770, 136393. [Google Scholar] [CrossRef]
- Karakashevska, E.; Rampone, G.; Tyson-Carr, J.; Makin, A.D.J.; Bertamini, M. Neural responses to reflection symmetry for shapes defined by binocular disparity, and for shapes perceived as regions of background. Neuropsychologia 2021, 163, 108064. [Google Scholar] [CrossRef]
- Makin, A.D.J.; Rampone, G.; Bertamini, M. Symmetric patterns with different luminance polarity (anti-symmetry) generate an automatic response in extrastriate cortex. Eur. J. Neurosci. 2019, 51, 922–936. [Google Scholar] [CrossRef]
- Wright, D.; Mitchell, C.; Dering, B.R.; Gheorghiu, E. Luminance-polarity distribution across the symmetry axis affects the electrophysiological response to symmetry. Neuroimage 2018, 173, 484–497. [Google Scholar] [CrossRef]
- Bertamini, M.; Rampone, G.; Oulton, J.; Tatlidil, S.; Makin, A.D.J. Sustained response to symmetry in extrastriate areas after stimulus offset: An EEG study. Sci. Rep. 2019, 9, 4401. [Google Scholar] [CrossRef]
- Rampone, G.; Makin, A.D.J.; Tyson-Carr, J.; Bertamini, M. Spinning objects and partial occlusion: Smart neural responses to symmetry. Vis. Res. 2021, 188, 1–9. [Google Scholar] [CrossRef]
- Rampone, G.; Makin, A.D.J.; Tatlidil, S.; Bertamini, M. Representation of symmetry in the extrastriate visual cortex from temporal integration of parts: An EEG/ERP study. NeuroImage 2019, 193, 214–230. [Google Scholar] [CrossRef] [PubMed]
- Rampone, G.; Tatlidil, S.; Adel, F.; Bertamini, M.; Makin, A. A neurophysiological response to symmetry is formed through the integration of partial transient information over parieto-occipital regions. In Proceedings of the European Conference on Visual Perception, Berlin, Germany, 27–31 August 2017. [Google Scholar]
- Niimi, R.; Watanabe, K.; Yokosawa, K. The dynamic-stimulus advantage of visual symmetry perception. Psychol. Res. 2008, 72, 567–579. [Google Scholar] [CrossRef] [PubMed]
- Makin, A.D.J.; Tyson-Carr, J.; Derpsch, Y.; Rampone, G.; Bertamini, M. Electrophysiological priming effects demonstrate independence and overlap of visual regularity representations in the extrastriate cortex. PLoS ONE 2021, 16, e0254361. [Google Scholar] [CrossRef] [PubMed]
- Wang, J.; Cao, R.; Xue, S. Temporal dynamics and task-dependent neural mechanisms in facial symmetry processing. Exp. Brain Res. 2025, 243, 106. [Google Scholar] [CrossRef] [PubMed]
- Bellagarda, C.A.; Dickinson, J.E.; Bell, J.; Badcock, D.R. Haemodynamic Signatures of Temporal Integration of Visual Mirror Symmetry. Symmetry 2022, 14, 901. [Google Scholar] [CrossRef]
- Ferrari, M.; Quaresima, V. A brief review on the history of human functional near-infrared spectroscopy (fNIRS) development and fields of application. Neuroimage 2012, 63, 921–935. [Google Scholar] [CrossRef] [PubMed]
- León-Carrión, J.; León-Domínguez, U. Functional Near-Infrared Spectroscopy (fNIRS): Principles and Neuroscientific Applications. In Neuroimaging–Methods; Bright, P., Ed.; InTech: Houston, TX, USA, 2012. [Google Scholar]
- Pinti, P.; Tachtsidis, I.; Hamilton, A.; Hirsch, J.; Aichelburg, C.; Gilbert, S.; Burgess, P.W. The present and future use of functional near-infrared spectroscopy (fNIRS) for cognitive neuroscience. Ann. N. Y. Acad. Sci. 2018, 1464, 5–29. [Google Scholar] [CrossRef] [PubMed]
- Quaresima, V.; Ferrari, M. A Mini-Review on Functional Near-Infrared Spectroscopy (fNIRS): Where Do We Stand, and Where Should We Go? Photonics 2019, 6, 87. [Google Scholar] [CrossRef]
- Cattaneo, Z.; Bona, S.; Ciricugno, A.; Silvanto, J. The chronometry of symmetry detection in the lateral occipital (LO) cortex. Neuropsychologia 2022, 167, 108160. [Google Scholar] [CrossRef]
- Stigliani, A.; Jeska, B.; Grill-Spector, K. Encoding model of temporal processing in human visual cortex. Proc. Natl. Acad. Sci. USA 2017, 114, E11047–E11056. [Google Scholar] [CrossRef]
- Stigliani, A.; Jeska, B.; Grill-Spector, K. Differential sustained and transient temporal processing across visual streams. PLoS Comput. Biol. 2019, 15, e1007011. [Google Scholar] [CrossRef]
- Huk, A.C.; Shadlen, M.N. Neural Activity in Macaque Parietal Cortex Reflects Temporal Integration of Visual Motion Signals during Perceptual Decision Making. J. Neurosci. 2005, 25, 10420–10436. [Google Scholar] [CrossRef]
- Bertamini, M.; Friedenberg, J.D.; Kubovy, M. Detection of symmetry and perceptual organization: The way a lock-and-key process works. Acta Psychol. 1997, 95, 119–140. [Google Scholar] [CrossRef]
- Dry, M.J. Using relational structure to detect symmetry: A Voronoi tessellation based model of symmetry perception. Acta Psychol. 2008, 128, 75–90. [Google Scholar] [CrossRef]
- van der Helm, P.A.; Leeuwenberg, E.L.J. Goodness of Visual Regularities: A Nontransformational Approach. Psychol. Rev. 1996, 103, 429–456. [Google Scholar] [CrossRef]
- Blakemore, C.; Campbell, F.W. On the existence of neurones in the human visual system selectively sensitive to the orientation and size of retinal images. J. Physiol. 1969, 203, 237–260. [Google Scholar] [CrossRef]
- Campbell, F.W.; Robson, J.G. Application of Fourier analysis to the visibility of gratings. J. Physiol. 1968, 197, 551–566. [Google Scholar] [CrossRef]
- Watt, R.J.; Morgan, M. Spatial filters and the localisation of luminance changes in human vision. Vis. Res. 1984, 24, 1387–1397. [Google Scholar] [CrossRef]
- Braddick, O.; Campbell, F.W.; Atkinson, J. Channels in Vision: Basic Aspects. In Handbook of Sensory Physiology; Held, R., Leibowitz, H.W., Teuber, H.L., Eds.; Springer: Berlin/Heidelberg, Germany, 1978; Volume 8. [Google Scholar]
- Yamins, D.L.K.; Hong, H.; Cadieu, C.F.; Solomon, E.A.; Seibert, D.; DiCarlo, J.J. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proc. Natl. Acad. Sci. USA 2014, 111, 8619–8624. [Google Scholar] [CrossRef] [PubMed]
- Wilson, H.R.; Wilkinson, F. From orientations to objects: Configural processing in the ventral stream. J. Vis. 2015, 15, 4. [Google Scholar] [CrossRef]
- Osorio, D. Symmetry detection by categorization of spatial phase, a model. Proc. R. Soc. Lond. B 1996, 263, 105–110. [Google Scholar]
- Schofield, A.J.; Georgeson, M.A. The temporal properties of first- and second-order vision. Vis. Res. 2000, 40, 2475–2487. [Google Scholar] [CrossRef] [PubMed]
- Poirier, F.J.; Wilson, H.R. A biologically plausible model of human shape symmetry perception. J. Vis. 2010, 10, 9. [Google Scholar] [CrossRef]
- Scognamillo, R.; Rhodes, G.; Morrone, C.; Burr, D. A feature-based model of symmetry detection. Proc. Biol. Sci. 2003, 270, 1727–1733. [Google Scholar] [CrossRef] [PubMed]
- Zabrodsky, H.; Algom, D. Continuous symmetry: A model for human figural perception. Spat. Vis. 1994, 8, 455–467. [Google Scholar] [CrossRef]
- Zhu, T. Neural processes in symmetry perception: A parallel spatio-temporal model. Biol. Cybern. 2014, 108, 121–131. [Google Scholar] [CrossRef] [PubMed]
- Julesz, B.; Chiang, J.-J. Symmetry perception and spatial-frequency channels. Perception 1979, 8, 711–718. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.