# Using Convolutional Neural Network Filters to Measure Left-Right Mirror Symmetry in Images


## Abstract


## 1. Introduction

## 2. Materials and Methods

### 2.1. Measuring Continuous Symmetry Using Filter Responses from Convolutional Neural Networks

### 2.2. Image Dataset

### 2.3. Rating Experiment

## 3. Results

## 4. Discussion

## 5. Conclusions

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## References

- Bronshtein, I.; Semendyayev, K.; Musiol, G.; Mühlig, H. Handbook of Mathematics; Springer: Berlin/Heidelberg, Germany, 2015.
- Wagemans, J. Detection of visual symmetries. Spat. Vis. **1995**, 9, 9–32.
- Grammer, K.; Thornhill, R. Human (Homo sapiens) facial attractiveness and sexual selection: The role of symmetry and averageness. J. Comp. Psychol. **1994**, 108, 233.
- Møller, A.P.; Swaddle, J.P. Asymmetry, Developmental Stability and Evolution; Oxford University Press: Oxford, UK, 1997.
- Tinio, P.; Smith, J. The Cambridge Handbook of the Psychology of Aesthetics and the Arts; Cambridge Handbooks in Psychology; Cambridge University Press: Cambridge, UK, 2014.
- Zaidel, D.W.; Aarde, S.M.; Baig, K. Appearance of symmetry, beauty, and health in human faces. Brain Cogn. **2005**, 57, 261–263.
- Jacobsen, T.; Höfel, L. Aesthetic judgments of novel graphic patterns: Analyses of individual judgments. Percept. Motor Skills **2002**, 95, 755–766.
- Liu, Y.; Hel-Or, H.; Kaplan, C.S.; Van Gool, L. Computational symmetry in computer vision and computer graphics. Found. Trends Comput. Graph. Vis. **2010**, 5, 1–195.
- Chen, P.-C.; Hays, J.; Lee, S.; Park, M.; Liu, Y. A Quantitative Evaluation of Symmetry Detection Algorithms; Technical Report CMU-RI-TR-07-36; Carnegie Mellon University: Pittsburgh, PA, USA, 2007.
- Liu, J.; Slota, G.; Zheng, G.; Wu, Z.; Park, M.; Lee, S.; Rauschert, I.; Liu, Y. Symmetry detection from real-world images competition 2013: Summary and results. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Portland, OR, USA, 25–27 June 2013; pp. 200–205.
- Zabrodsky, H.; Peleg, S.; Avnir, D. Symmetry as a continuous feature. IEEE Trans. Pattern Anal. Mach. Intell. **1995**, 17, 1154–1166.
- Den Heijer, E. Evolving symmetric and balanced art. In Computational Intelligence; Springer: Berlin, Germany, 2015; pp. 33–47.
- Shaker, F.; Monadjemi, A. A new symmetry measure based on Gabor filters. In Proceedings of the 2015 23rd Iranian Conference on Electrical Engineering, Tehran, Iran, 10–14 May 2015; pp. 705–710.
- Wurtz, R.; Kandel, E. Central visual pathway. In Principles of Neural Science, 4th ed.; Kandel, E., Schwartz, J., Jessell, T., Eds.; McGraw-Hill: New York, NY, USA, 2000; pp. 523–547.
- Lennie, P. Color vision. In Principles of Neural Science, 4th ed.; Kandel, E., Schwartz, J., Jessell, T., Eds.; McGraw-Hill: New York, NY, USA, 2000; pp. 572–589.
- Yosinski, J.; Clune, J.; Nguyen, A.; Fuchs, T.; Lipson, H. Understanding neural networks through deep visualization. In Proceedings of the Deep Learning Workshop, International Conference on Machine Learning (ICML), Lille, France, 10–11 July 2015.
- LeCun, Y.; Bengio, Y. Convolutional networks for images, speech, and time series. In The Handbook of Brain Theory and Neural Networks; MIT Press: Cambridge, MA, USA, 1995; pp. 255–258.
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE **1998**, 86, 2278–2324.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2012; pp. 1097–1105.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv **2014**.
- Karpathy, A.; Fei-Fei, L. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3128–3137.
- Gatys, L.A.; Ecker, A.S.; Bethge, M. Texture synthesis and the controlled generation of natural stimuli using convolutional neural networks. arXiv **2015**.
- Abdel-Hamid, O.; Mohamed, A.R.; Jiang, H.; Deng, L.; Penn, G.; Yu, D. Convolutional neural networks for speech recognition. IEEE/ACM Trans. Audio Speech Lang. Process. **2014**, 22, 1533–1545.
- Jia, Y.; Shelhamer, E.; Donahue, J.; Karayev, S.; Long, J.; Girshick, R.; Guadarrama, S.; Darrell, T. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, Orlando, FL, USA, 3–7 November 2014; pp. 675–678.
- Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? arXiv **2014**.
- D’Agostino, R.B. An omnibus test of normality for moderate and large size samples. Biometrika **1971**, 58, 341–348.
- Marĉelja, S. Mathematical description of the responses of simple cortical cells. J. Opt. Soc. Am. **1980**, 70, 1297–1300.
- Zeiler, M.; Fergus, R. Visualizing and understanding convolutional networks. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 818–833.
- Mahendran, A.; Vedaldi, A. Understanding deep image representations by inverting them. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 5188–5196.

**Figure 1.** Filters of the first convolutional layer (conv1) of the Convolutional Neural Network (CNN) architecture used in our experiment (CaffeNet; [24]). The filters detect oriented luminance edges of different spatial frequencies. Color is detected in the form of oriented color-opponent edges and color blobs.

**Figure 2.** Representative covers and their respective calculated left/right symmetry values, which were obtained with first-layer filters at patch level 17. The images are of high symmetry (**a**); intermediate symmetry (**b**); and low symmetry (**c**), respectively. Due to copyright issues, we cannot reproduce the covers used in our study here. Copyright: (**a**) author A.B.; (**b**) Graham James Worthington, CC BY-SA 4.0; and (**c**) Musiclive55, CC BY-SA 4.0.
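The symmetry values in Figure 2 come from comparing an image's filter responses with those of its left/right mirror image, pooled over a grid of patches. The sketch below is a minimal numpy-only illustration of that idea, not the authors' CaffeNet pipeline: the single Sobel-like kernel, the per-patch max-pooling, and the min/max similarity ratio are assumptions made for the example.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Plain 'valid'-mode 2D convolution in pure numpy."""
    kh, kw = kernel.shape
    flipped = kernel[::-1, ::-1]  # convolution flips the kernel
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * flipped)
    return out

def lr_symmetry(image, filters, patch_level):
    """Left/right symmetry of a grayscale image, in [0, 1].

    Filter responses of the image are compared patch-wise with the
    responses of its mirror image; identical responses yield 1.
    """
    mirrored = image[:, ::-1]
    similarities = []
    for f in filters:
        r_img = np.abs(conv2d_valid(image, f))
        r_mir = np.abs(conv2d_valid(mirrored, f))
        hs = r_img.shape[0] // patch_level
        ws = r_img.shape[1] // patch_level
        for pi in range(patch_level):
            for pj in range(patch_level):
                a = r_img[pi * hs:(pi + 1) * hs, pj * ws:(pj + 1) * ws].max()
                b = r_mir[pi * hs:(pi + 1) * hs, pj * ws:(pj + 1) * ws].max()
                # ratio of smaller to larger pooled response (1 = identical)
                similarities.append(min(a, b) / (max(a, b) + 1e-12))
    return float(np.mean(similarities))
```

A perfectly mirror-symmetric image equals its own mirror, so every patch comparison yields 1 and the measure saturates; asymmetric images score lower.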

**Figure 4.** (**a**) Spearman’s rank coefficients for the correlation between the subjective ratings and the calculated values of left/right symmetry, plotted as a function of the number of subimages for different layers of the CaffeNet model. The model parameters were systematically varied; the patch level squared corresponds to the number of subimages. The RMSE values of (**b**) a linear fit; (**c**) a quadratic fit; and (**d**) a cubic fit show similar trends for all configurations. The quadratic and cubic polynomials yielded lower errors than the linear fit, which indicates that the relation between our measure and the subjective ratings is not linear.
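The evaluation summarized in Figure 4 rests on two standard computations: Spearman's rank correlation between ratings and calculated symmetry values, and the RMSE of least-squares polynomial fits of increasing degree. A minimal numpy sketch of both (assuming tie-free data; tied ratings would require average ranks):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.

    Ties are ignored for simplicity; real rating data would need
    average (fractional) ranks.
    """
    def ranks(v):
        return np.argsort(np.argsort(v)).astype(float)
    return float(np.corrcoef(ranks(x), ranks(y))[0, 1])

def poly_rmse(x, y, degree):
    """Root-mean-square error of a least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    residuals = np.polyval(coeffs, x) - y
    return float(np.sqrt(np.mean(residuals ** 2)))
```

On data with a quadratic relation, the degree-2 and degree-3 fits give markedly lower RMSE than the linear fit, which is the pattern reported in Figure 4b–d.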

**Figure 5.** Scatter plots of rated symmetry values versus calculated symmetry values for two different configurations of the model: (**a**) layer 1 with 17 patches squared, correlation of 0.80; (**b**) layer 2 with 11 patches squared, correlation of 0.85. Each dot represents one cover image. Metal music covers are shown in black, pop music covers in cyan, and classical music covers in magenta. The blue curve represents the best quadratic fit, as determined from the plots shown in Figure 4c.

**Figure 6.** Standard deviation of the ratings of 20 participants for 300 CD album cover images, plotted as a function of the median rating for the covers. Each dot represents one cover image. Metal music covers are shown in black, pop music covers in cyan, and classical music covers in magenta.

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Brachmann, A.; Redies, C.
Using Convolutional Neural Network Filters to Measure Left-Right Mirror Symmetry in Images. *Symmetry* **2016**, *8*, 144.
https://doi.org/10.3390/sym8120144
