# Symmetry as an Intrinsically Dynamic Feature


## Abstract


## 1. Introduction

Given **S** a system, **p** a variable and **T** a transform, **S** is symmetric in **p** iff **S(p)** = **S(T(p))**.
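This definition can be checked numerically. The sketch below is our own illustration (the `is_symmetric` helper and the toy system are not from the paper): it tests S(p) = S(T(p)) for a simple observable S and a mirror transform T.

```python
import numpy as np

def is_symmetric(system, p, transform, atol=1e-9):
    """Check the definition S(p) == S(T(p)) numerically."""
    return np.allclose(system(p), system(transform(p)), atol=atol)

image = np.array([[1, 2, 3],
                  [4, 5, 6]])
mirror = lambda x: x[:, ::-1]    # T: horizontal reflection
total = lambda x: x.sum()        # S: a system observing the variable

print(is_symmetric(total, image, mirror))              # True: the sum is mirror-invariant
print(is_symmetric(lambda x: x[0, 0], image, mirror))  # False: the corner pixel is not
```

The same system can thus be symmetric or not depending on which observable S is chosen, which is exactly why the paper treats symmetry as a property of the triple (S, p, T).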

## 2. Symmetry Constancy

Constancy is captured by the variation σ_T (or 1 − σ_T) of T over θ, as shown by the three examples in Figure 5.

#### 2.1. Dynamics Through Erosion

The superscript n denotes the n-fold iterative application of an operator; S is the Symmetry Transform, E is the classical erosion operator, and θ is the slope of the axis.

∂IOT outputs a high-pass filtered version of S(X). The edges are now local maxima of a high-pass filtered version of X. As maximum finding does not commute with linear operations, S(edges) will not coincide with max(∂IOT). Experiments have been carried out systematically on synthetic and real images, addressing the dynamics of the shape parameters, as shown in Figure 7 and Figure 8, respectively (see [8,9] for further details).

- When going from a regular pattern to a modified version of it, any perturbation leaves a signature. Moreover, experiments on the ratio of pattern size to defect size, or on the number of defects, allow the precision to be assessed.
- Curve variations follow, at least qualitatively, the tendencies of symmetry variations with the angle: constancy, monotonicity, smoothness, etc.
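These erosion dynamics can be illustrated with a minimal NumPy sketch (not the authors' code; reading "elongation" as the eigenvalue ratio of the second-order central moments is our assumption, and the 3×3 cross structuring element is a simplification):

```python
import numpy as np

def erode(mask):
    """Erosion by a 3x3 cross: keep pixels whose four neighbours are all set."""
    m = np.pad(mask, 1)
    return mask & m[:-2, 1:-1] & m[2:, 1:-1] & m[1:-1, :-2] & m[1:-1, 2:]

def elongation(mask):
    """Eigenvalue ratio of the second-order central moments of a blob."""
    ys, xs = np.nonzero(mask)
    cov = np.cov(np.vstack([xs, ys]).astype(float))
    ev = np.sort(np.linalg.eigvalsh(cov))
    return ev[1] / max(ev[0], 1e-12)

def erosion_signature(mask, n_iter):
    """Track the shape parameter while iterating the erosion operator E."""
    sig = []
    for _ in range(n_iter):
        mask = erode(mask)
        if mask.sum() < 3:            # stop before the pattern vanishes
            break
        sig.append(elongation(mask))
    return sig

# A 10x20 rectangle gets relatively more elongated under iterated erosion,
# since each iteration peels one pixel off every side.
rect = np.zeros((14, 24), dtype=bool)
rect[2:12, 2:22] = True
print(erosion_signature(rect, 4))     # strictly increasing values
```

The monotone increase for a plain rectangle is one instance of the "monotony" tendency mentioned above; a perturbed pattern would superimpose its own signature on this curve.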

#### 2.2. Dynamics Through Multi-Resolution

The pyramid is defined by two functions: one for topology, **F**, which determines the neighborhood at layer n that generates the pixel (father) at layer n+1 above (see Figure 10a for two examples), and one for intensity, **V**, which computes the father's value from the sons' values, using filters such as max, min, and, or, average, median, or Laplacian. Figure 10b compares the respective results of erosion (using a disk kernel) and sub-sampling; Figure 10c displays a pyramid of similar patterns. Here again, the first idea is to check the parts where symmetry maintains itself over layers.

- direct: building the pyramid and running the symmetry operator S layer by layer;
- indirect: running S at the bottom (image) and then building the pyramid from these values;
- hierarchical: recursively running S to build the pyramid.
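The direct scheme, for instance, can be sketched as follows (a toy illustration of ours: `axial_symmetry` is a stand-in mirror-correlation score, not the paper's Symmetry Transform S, and the 2×2 averaging pyramid is one simple choice of F and V):

```python
import numpy as np

def downsample(img):
    """V = average over the sons; F = non-overlapping 2x2 father/son blocks."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def axial_symmetry(img):
    """Stand-in for S: normalised correlation of the image with its mirror."""
    m = img[:, ::-1]
    den = np.sqrt((img ** 2).sum() * (m ** 2).sum())
    return float((img * m).sum() / den) if den else 1.0

def direct_scheme(img, levels):
    """Direct scheme: build the pyramid and run S layer by layer."""
    scores = []
    for _ in range(levels):
        scores.append(axial_symmetry(img))
        img = downsample(img)
    return scores

# A mirror-symmetric image keeps the maximal score at every layer.
base = np.arange(64, dtype=float).reshape(8, 8)
sym = base + base[:, ::-1]
print(direct_scheme(sym, 3))   # [1.0, 1.0, 1.0] up to rounding
```

The indirect and hierarchical variants differ only in whether S is run once at the bottom or recursively folded into the pyramid construction itself.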

The ratio m_n/M_n is around one half for usual mask sizes between 5 and 15. If we consider the 11×11 version, the external two-pixel-wide belt weighs 90%. That allows balancing the region orientation of the multi-resolution process.

The complexity of the hierarchical scheme is O(N²), and that of the hierarchy with overlap is O(N²K²/3), to be compared with O(N²K²) for the initial Symmetry Transform.

## 3. Capturing Symmetry

- a model is made explicit to support optimality claims;
- this model puts forward:
  - the invariance to the transform, again relying on explicit comparison between the pattern and a transformed version of it;
  - the distance, which could evolve into approximate comparison for similarity;
- pattern inclusion introduces set operations (such as Minkowski's), thereby associating logic and geometry.

#### 3.1. Optimal Symmetry Detection

The kernel of P is the maximal included symmetric pattern in P. Let us underline that the transposition to the pyramidal scheme is straightforward, considering the corresponding symmetric collection of blocks. In this case, taking into account the discussion outlined in the previous section, precision is preferred to robustness here, as it stems from the measure itself. The sketch of such an algorithm is not difficult to describe (see Figure 18).

In an L_2 frame, the symmetric version of a function f with respect to an abscissa x is obtained by averaging f with its reflection about x, f(2x − t).

Comparing the x_G and x* expressions shows that picking the centre of mass as the locus of symmetry axes amounts to assimilating any pattern to its paraboloid approximation. Designing algorithms still requires specifying which parts of the image are to be correlated with their transformed version. That leads again to multi-resolution, possibly combined with erosion. The technique is to tile the image using blocks of a given shape, size and periodicity, indicative of the size of the targeted symmetric patterns. Note that inside blocks, or on isolated patterns, a process of sub-axis alignment from multiple bands makes sense too. See Figure 21 for an illustration, and the following figures for more detail.
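The tiling-plus-correlation step can be sketched like this (our own minimal version, not the paper's algorithm; the fixed block size and the restriction to a vertical axis are simplifying assumptions):

```python
import numpy as np

def mirror_score(block):
    """Normalised correlation of a block with its horizontally mirrored self."""
    m = block[:, ::-1]
    b, mm = block - block.mean(), m - m.mean()
    den = np.sqrt((b ** 2).sum() * (mm ** 2).sum())
    return float((b * mm).sum() / den) if den else 1.0   # flat block: trivially symmetric

def tile_scores(img, size):
    """Tile the image and score each tile for symmetry about a vertical axis."""
    h, w = img.shape
    return {(i, j): mirror_score(img[i:i + size, j:j + size])
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)}

img = np.array([[1., 2, 2, 1,   1, 2, 3, 4]] * 4)   # symmetric tile | ramp tile
scores = tile_scores(img, 4)
print(scores[(0, 0)], scores[(0, 4)])   # 1.0 for the symmetric tile, -1.0 for the ramp
```

High-scoring tiles indicate where symmetric patterns of roughly the tile size sit; varying the tile size plays the same role as descending the pyramid.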

#### 3.2. Symmetry Measures

With **A** the object and **K** its kernel, the first measure to be considered can be: **A** − **K** = **D** = {D_i}, a set of n_c components, as already illustrated in Figure 19, where n_c ranges between 3 and 8 depending on the kernel precision, and by the first kernel examples in the results shown in Figure 23. The distribution of the D_i's is likely uneven. Consequently, three correcting factors λ_i were tried.

In case of symmetry the measures are very close, ε being small. Indeed, let P_x be a given symmetric version of P when the invariant (axis or centre) is at x. The inner and outer kernels are then, respectively, the intersection P ∩ P_x and the union P ∪ P_x.
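Under a set reading of these definitions (an assumption on our part: inner kernel as intersection with the reflected pattern, outer kernel as union, and the symmetry degree λ as their area ratio), a sketch:

```python
import numpy as np

def reflect(mask, axis_col):
    """Reflect a binary mask about the vertical axis at column axis_col."""
    out = np.zeros_like(mask)
    cols = np.arange(mask.shape[1])
    r = 2 * axis_col - cols              # column index after reflection
    ok = (r >= 0) & (r < mask.shape[1])
    out[:, r[ok]] = mask[:, cols[ok]]
    return out

def kernels(mask, axis_col):
    """Inner kernel = pattern ∩ reflected pattern; outer kernel = their union."""
    refl = reflect(mask, axis_col)
    return mask & refl, mask | refl

def symmetry_measure(mask, axis_col):
    """lambda = |inner| / |outer|: equals 1 for a perfectly symmetric pattern."""
    inner, outer = kernels(mask, axis_col)
    return inner.sum() / max(outer.sum(), 1)

sym = np.zeros((5, 7), dtype=bool); sym[1:4, 1:6] = True   # centred on column 3
off = np.zeros((5, 7), dtype=bool); off[1:4, 0:3] = True   # entirely left of it
print(symmetry_measure(sym, 3), symmetry_measure(off, 3))  # 1.0 0.0
```

Sweeping `axis_col` (and the angle, in a rotated version) and keeping the maximum of λ gives one concrete way to read Table 2 below.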

#### 3.3. Results

- sensitivity of the symmetry detection to the centre position;
- validity of λ, the measure (degree) of symmetry, in comparing patterns to their kernel through the elongation η, and then the kernel evolutions under IOT;
- quality of the correlation kernel with respect to the kernel;
- validity of the symmetry axis from correlation with respect to the best axis over shifts,

using λ_G and λ_C as in Figure 23.

**Figure 24.** First row: symmetry indexes vs. the direction for image (c) 1. Second row: as above for image (c) 2.

## 4. Symmetry Detection and Face Expressions

#### 4.1. An Approach to Face Expression Recognition Based on Broken Symmetry Detection

- The human face is by nature mostly symmetrical, more so in the so-called Neutral expression (see FACS [19]).
- Any expression different from Neutral is obtained by stretching a different subset of face muscles (the so-called action units [20]).
- Such stretching is rarely completely symmetrical; consequently, the more marked the changes in expression are, the more breakage of symmetry is introduced in different parts of the face.
- Collecting and measuring those differences in symmetry from different portions of the face allows us to compile a typical signature for each expression. These signatures are then vectorized and fed to a classifier.
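A minimal sketch of such a signature (our own simplification, not the authors' implementation: per-stripe L1 asymmetry between the two half-faces, whereas the real method uses the symmetry operators developed in the previous sections):

```python
import numpy as np

def stripe_signature(face, n_stripes):
    """Asymmetry signature: per-stripe mean |left half - mirrored right half|."""
    h, w = face.shape
    half = w // 2
    diff = np.abs(face[:, :half] - face[:, w - half:][:, ::-1])
    bounds = np.linspace(0, h, n_stripes + 1).astype(int)
    return np.array([diff[a:b].mean() for a, b in zip(bounds, bounds[1:])])

neutral = np.ones((12, 10))          # perfectly symmetric "face"
smile = neutral.copy()
smile[9:, :3] += 4.0                 # break symmetry near the mouth only
print(stripe_signature(neutral, 3))  # [0. 0. 0.]
print(stripe_signature(smile, 3))    # only the last stripe is non-zero
```

The resulting vectors, one per face, are what would then be fed to any standard classifier.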

**Figure 26.** Face Expressions in FACS. A sample of face expressions from the JAFFE database [21]: (a) neutral, (b) sadness, (c) disgust, (d) happiness, (e) fear, (f) anger, (g) surprise. Expressions are obtained by self-attribution: the subject is asked to produce a specific expression, then a snapshot is taken.

**Figure 27.** Are you happy to see me? Some instances of an expression are not easily reconciled with their self-attribution. This problem cuts across geographical, gender and social groups. (a) Self-attributed as happiness, easily misclassified; (b) not self-attributed as happiness, often misclassified as happiness; (c) one is self-attributed as happiness, the other is not: can you spot which is which?

**Figure 28.** Broken Symmetries in expressions. This is a detail of the eyes of the same subject in five different expressions: (a) neutral, (b) anger, (c) disgust, (d) fear, (e) happiness. Symmetry is clearly broken in different ways across these expressions.

#### 4.2. Method

#### 4.2.1. Dataset

#### 4.2.2. Procedure

A covering, CF, of a face image F is a set {F_1, F_2, …, F_h, …, F_L} of sub-images of F (F_h ⊆ F) such that ∪F_h ≡ F. The ordered covering, OCF, of F is a covering, CF, the elements of which have been ordered according to a given exploration rule of F: OCF = (F_i1, F_i2, …, F_ih, …, F_iL).
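An ordered covering by overlapping horizontal stripes, for example, can be built as follows (our own sketch; the stripe height and overlap fraction are the free parameters explored later in Figure 33):

```python
def ordered_covering(height, stripe, overlap):
    """OCF of an image of given height: top-to-bottom horizontal stripes of
    `stripe` rows each, consecutive stripes sharing a fraction `overlap`."""
    step = max(1, int(stripe * (1 - overlap)))
    slices, top = [], 0
    while top + stripe < height:
        slices.append((top, top + stripe))
        top += step
    slices.append((max(0, height - stripe), height))  # clamp the last stripe to the bottom
    return slices

cov = ordered_covering(128, 32, 0.5)
print(cov)  # [(0, 32), (16, 48), (32, 64), (48, 80), (64, 96), (80, 112), (96, 128)]
```

The union of the stripes covers every row (the covering property), and the top-to-bottom order is the exploration rule.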

1. the parameters (e.g., axis position and angle) for which max_included and min_including are obtained are the same, and
2. the measure given above is invariant for translation, rotation, and scaling.
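Property 2 can be spot-checked for translations (our own sketch, restricted to vertical axes; realising max_included as the intersection with the reflected object and min_including as the union is our reading of Figure 32):

```python
import numpy as np

def reflected(mask, axis_col):
    """Reflect a binary object about the vertical axis at column axis_col."""
    out = np.zeros_like(mask)
    cols = np.arange(mask.shape[1])
    r = 2 * axis_col - cols
    ok = (r >= 0) & (r < mask.shape[1])
    out[:, r[ok]] = mask[:, cols[ok]]
    return out

def asymmetry_area(mask, axis_col):
    """|min_including - max_included| = |union| - |intersection|."""
    refl = reflected(mask, axis_col)
    return int((mask | refl).sum() - (mask & refl).sum())

obj = np.zeros((8, 16), dtype=bool)
obj[2:6, 3:6] = True
obj[3, 6] = True                      # a bump that breaks the symmetry
shifted = np.roll(obj, 4, axis=1)     # translate the object (and its axis) by 4
print(asymmetry_area(obj, 4), asymmetry_area(shifted, 8))   # equal values
```

Translating the object and its axis together leaves the difference region unchanged, which is the invariance the measure relies on.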

**Figure 32.** Max_included and min_including: (a) original object; (b) min_including; (c) max_included; (d) min_including − max_included.

#### 4.3. Experimental Results and Discussion

## 5. Conclusions

## References

1. Kanizsa, G. The role of regularity in perceptual organization. In Studies in Perception: Festschrift for Fabio Metelli; Martello-Giunti Editore: Firenze, Italy, 1975.
2. Palmer, S.E. The Role of Symmetry in Shape Perception. Acta Psychol. **1985**, 59, 67–90.
3. Zavidovique, B.; Di Gesù, V. The S-kernel: A measure of symmetry of objects. Pattern Recogn. Lett. **2007**, 40, 839–852.
4. Di Gesù, V.; Zavidovique, B. Iterative symmetry detection: Shrinking vs. decimating patterns. Integr. Comput. Aided Eng. **2005**, 12, 319–332.
5. Palmer, S. Vision Science: Photons to Phenomenology; Bradford Books/MIT Press: Cambridge, MA, USA, 1999; pp. 4–15.
6. Di Gesù, V.; Valenti, C. Symmetry operators in computer vision. Vistas Astron. **1996**, 40, 461–468.
7. Di Gesù, V.; Zavidovique, B. A note on the iterative object symmetry transform. Pattern Recogn. Lett. **2004**, 25, 1533–1545.
8. Di Gesù, V.; Lo Bosco, G.; Zavidovique, B. Classification Based on Iterative Object Symmetry Transform. In ICIAP '03: Proceedings of the 12th International Conference on Image Analysis and Processing; IEEE Computer Society: Washington, DC, USA, 2003.
9. Zavidovique, B.; Di Gesù, V. Pyramid symmetry transforms: From local to global symmetry. Image Vision Comput. **2007**, 25, 220–229.
10. Oliva, D.; Samengo, I.; Leutgeb, S.; Mizumori, S. A Subjective Distance Between Stimuli: Quantifying the Metric Structure of Representations. Neural Comput. **2005**, 17, 969–990.
11. Tversky, A. Features of similarity. Psychol. Rev. **1977**, 84, 327–352.
12. Di Gesù, V.; Zavidovique, B. S-Kernel: A New Symmetry Measure. In Pattern Recognition and Machine Intelligence; Pal, S., Ed.; Springer-Verlag: Berlin/Heidelberg, Germany, 2005.
13. Moghaddam, B.; Pentland, A.P. Face recognition using view-based and modular eigenspaces. In Proc. SPIE; SPIE: San Diego, CA, USA, 1994; Volume 2277.
14. Zhao, W.; Chellappa, R.; Phillips, P.J.; Rosenfeld, A. Face recognition: A literature survey. ACM Comput. Surv. **2003**, 35, 399–458.
15. Shakhnarovich, G.; Moghaddam, B. Face Recognition in Subspaces. In Handbook of Face Recognition; Li, S.Z., Jain, A.K., Eds.; Springer-Verlag: Secaucus, NJ, USA, 2004; Volume I, Chapter 7; pp. 141–168.
16. Jain, A.K.; Ross, A.; Prabhakar, S. An introduction to biometric recognition. IEEE Trans. Circ. Syst. Video Technol. **2004**, 14, 4–20.
17. Pelachaud, C.; Poggi, I. Multimodal communication between synthetic agents. In Proceedings of the Working Conference on Advanced Visual Interfaces; ACM: L'Aquila, Italy, 1998.
18. Shan, C.; Gong, S.; McOwan, P.W. A Comprehensive Empirical Study on Linear Subspace Methods for Facial Expression Analysis. In CVPRW '06: Proceedings of the 2006 Conference on Computer Vision and Pattern Recognition Workshop; IEEE Computer Society: Washington, DC, USA, 2006.
19. Ekman, P.; Friesen, W. Facial Action Coding System: A Technique for the Measurement of Facial Movement; Consulting Psychologists Press: Palo Alto, CA, USA, 1978.
20. Essa, I.A.; Pentland, A.P. Coding, Analysis, Interpretation, and Recognition of Facial Expressions. IEEE Trans. Pattern Anal. Mach. Intell. **1997**, 19, 757–763.
21. Lyons, M.; Akamatsu, S.; Kamachi, M.; Gyoba, J. Coding Facial Expressions with Gabor Wavelets. In IEEE International Conference on Automatic Face and Gesture Recognition; IEEE Computer Society: Los Alamitos, CA, USA, 1998.

**Figure 2.** Symmetry requires bumps to become holes (left), dark parts to be seen first (middle), and pop-out of texture for vertical elements and of colour for diagonal elements (right).

**Figure 3.** Noise or bending does not disturb global symmetry; neither do projections for local symmetry.

**Figure 6.** Variations of the IOT with the number of iterations and the angle, for an image and its binary version.

**Figure 7.** Elongation (red) and circle (green) variations with the number of erosions for the upper-left patterns (circle, perturbed circle, rectangle, perturbed rectangle, star and perturbed star). Third row: the first derivatives in the rectangle cases, then the variations and their derivative for the random shape.

**Figure 8.** Elongation (left) and circle (right) variations with the number of erosions for the corresponding images. Fourth row: circle of the cell images (blue, green, red, in that order), their first derivatives, then the same for elongations.

**Figure 9.** Elongation in polar coordinates superposed on the corresponding pictures after erosion. The max and min symmetry axes are: (a) perfectly stable in the case of perfect symmetry; (b) switching in the case of a tilted (asymmetric) head.

**Figure 10.** Defining the pyramidal warping: (a) examples of father/sons topology; (b) comparison of three evolutions of a same image under erosion (top) and multi-resolution (bottom); (c) symmetry constancy over pyramid layers.

**Figure 12.** (a,b) Phase problems vs. symmetry evolution; (c) global symmetry of local asymmetries.

**Figure 13.** Comparing pyramidal symmetry detection schemes on test images: (a) direct; (b) hierarchy; (c) hierarchy with overlap.

**Figure 14.** Points of interest from local symmetry of human faces: (a) the best evolution; (b) comparing schemes over layers.

**Figure 16.** Comparing symmetry evolutions via the maximum elongation under erosion and multi-resolution, in the ambiguous case of a texture with multiple symmetries.

**Figure 17.** Results of face extraction from a crowd by symmetry constancy, along resolution decrease and iterated erosion respectively.

**Figure 18.** Graphic representation of a plausible kernel-finding procedure: union of couples of maximal bands on either side of the chosen axis.

**Figure 19.** The axis of the maximally symmetric related pattern may not be the maximal symmetry axis, depending on the detector.

**Figure 21.** Tiling a pattern with blocks to be correlated, or with bands whose axes are to be pieced together (medial-axis style).

**Figure 23.** Kernels and maximal symmetry of patterns (a) 1, (b) 1 and (b) 2 in Figure 22.

**Figure 29.** Covering of a face. The usefulness of overlapping is self-evident: eyes are captured in slice 2 and mouth in slice 6, while eyebrows are captured in slice 1 and cheeks in slice 5.

**Figure 30.** Cleaning algorithm. The effect of the preprocessor used on JAFFE: (a) original image; (b) preprocessed image.

**Figure 33.** MeanRR for different stripe widths and overlaps. The graph shows the meanRR value for stripe widths of 8, 16, 32, 64 and 128, versus overlaps of 10%, 20%, 30%, 40% and 50%.

**Table 2.** Results for the eight patterns shown in Figure 22: symmetry measures with the corresponding maximum-symmetry angles.

| Image | λ_G | α_G | λ_C | α_C | OST | α_OST |
|---|---|---|---|---|---|---|
| 1a | 0.76 | 135.00° | 0.76 | 11.25° | 0.86 | 112.50° |
| 2a | 0.74 | 90.00° | 0.79 | 33.75° | 0.93 | 101.00° |
| 3a | 0.82 | 157.50° | 0.76 | 22.50° | 0.87 | 56.25° |
| 4a | 0.76 | 0.00° | 0.80 | 0.00° | 0.80 | 0.00° |
| 1b | 0.80 | 90.00° | 0.80 | 90.00° | 0.72 | 90.00° |
| 2b | 0.70 | 90.00° | 0.89 | 90.00° | 0.92 | 45.00° |
| 1c | 0.99 | 90.00° | 0.99 | 135.00° | 0.90 | 90.00° |
| 2c | 0.99 | 0.00° | 0.99 | 90.00° | 0.96 | 0.00° |

| Image | ρ | α |
|---|---|---|
| 1a | 0.67 | 101.25° |
| 2a | 0.67 | 112.50° |
| 3a | 0.58 | 112.50° |
| 4a | 0.55 | 157.50° |
| 1b | 0.80 | 90.00° |
| 2b | 0.94 | 90.00° |
| 1c | 0.99 | 90.00° |
| 2c | 0.98 | 0.00° |

|  | Neutral | Anger | Disgust | Fear | Happiness |
|---|---|---|---|---|---|
| Neutral | 0.88 | 0.00 | 0.02 | 0.05 | 0.05 |
| Anger | 0.07 | 0.61 | 0.12 | 0.13 | 0.07 |
| Disgust | 0.02 | 0.00 | 0.93 | 0.00 | 0.05 |
| Fear | 0.07 | 0.07 | 0.05 | 0.76 | 0.05 |
| Happiness | 0.15 | 0.07 | 0.12 | 0.05 | 0.61 |

© 2010 by the authors; licensee Molecular Diversity Preservation International, Basel, Switzerland. This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

Di Gesù, V.; Tabacchi, M.E.; Zavidovique, B. Symmetry as an Intrinsically Dynamic Feature. *Symmetry* **2010**, *2*, 554–581. https://doi.org/10.3390/sym2020554