Beauty in the Eyes of Machine: A Novel Intelligent Signal Processing-Based Approach to Explain the Brain Cognition and Perception of Beauty Using Uncertainty-Based Machine Voting
Abstract
1. Introduction
- We extract objective general patterns of facial beauty attributes learned subjectively by deep convolutional neural networks (CNNs).
- This is the first study in which correlations between beauty and subjectively rated facial attributes are analyzed with a quantitative approach. We derive general patterns of statistically significant attractiveness attributes.
- We propose a novel framework and algorithm to train CNNs, visualize and extract the learnings of the machine about beauty, and automatically explain every machine decision in the entire dataset.
- We validate existing psychological, biological, and neurobiological studies of beauty and discover new patterns.
- We propose a novel method to prove the relationship between the golden ratio and facial beauty.
2. Related Work
3. Methods
3.1. Data Preparation
- Predictions are unsatisfactory and meaningless.
- Even when the majority rates or votes a face as beautiful, the raters still disagree on the reasons behind their votes, which aggravates the problem and renders predictions meaningless.
- Because the model is trained on averaged scores and/or votes, its learning is ambiguous and its predictions are meaningless; it is therefore impossible to interpret neurons to explain beauty, as there is no basis on which to justify a beauty decision.
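A toy numerical illustration of this ambiguity (hypothetical ratings, not data from this study): two faces can share the same mean score while raters agree on one and are split on the other, so a model trained on the mean alone cannot recover the reason behind the votes.

```python
import numpy as np

# Hypothetical 1-5 beauty ratings from ten raters for two faces.
face_a = np.array([3, 3, 3, 3, 3, 3, 3, 3, 3, 3])  # unanimous "average"
face_b = np.array([1, 5, 1, 5, 1, 5, 1, 5, 1, 5])  # fully polarized raters

for name, ratings in (("A", face_a), ("B", face_b)):
    print(name, ratings.mean(), ratings.std())  # same mean 3.0; std 0.0 vs. 2.0
# Training on the mean gives identical targets for A and B, discarding the
# rater disagreement that this section argues makes such predictions meaningless.
```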
3.2. Ensemble Learning
Algorithm 1: Machine-weighted voting algorithm
Input: dataset D; K base learning algorithms
Output: machine vote
Initialization
1: Split D into … ; for the i-th split do … end
2: Concatenate m
3: Apply weighted class voting (4)
End
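For illustration, a minimal Python sketch of the weighted class voting step; Eq. (4) is not reproduced above, so the normalized weighted-average rule and the example weights here are assumptions rather than the paper's exact formula.

```python
import numpy as np

def weighted_class_vote(probs, weights):
    """Combine per-voter class probabilities using per-voter weights.

    probs:   shape (K, C) -- K voters (CNNs), C classes.
    weights: shape (K,)   -- non-negative voter weights.
    Returns the index of the winning class.
    """
    probs = np.asarray(probs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()   # normalize so weights sum to 1
    combined = weights @ probs          # weighted average, shape (C,)
    return int(np.argmax(combined))

# Three voters scoring {not beautiful, beautiful}; weights are illustrative.
probs = [[0.40, 0.60],   # e.g., VGG16
         [0.55, 0.45],   # e.g., SENet50
         [0.30, 0.70]]   # e.g., ResNet50
print(weighted_class_vote(probs, [0.79, 0.85, 0.87]))  # -> 1 ("beautiful")
```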
3.3. Optimal Voting Weights
3.3.1. Best Combination
3.3.2. Priori Recognition Performance Statistics
3.3.3. Model Calibration
Calibration Evaluation
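The tables below report expected calibration error (ECE) and maximum calibration error (MCE), apparently before (-B) and after (-A) calibration. For reference, a minimal sketch of the standard binned estimators of both quantities (the equal-width 10-bin scheme is an assumption):

```python
import numpy as np

def ece_mce(confidences, correct, n_bins=10):
    """Binned expected (ECE) and maximum (MCE) calibration error.

    confidences: predicted confidence of the chosen class, in [0, 1].
    correct:     1 if the prediction was right, 0 otherwise.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce = 0.0, 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
        ece += in_bin.mean() * gap   # bin gap weighted by bin population
        mce = max(mce, gap)          # worst-case bin gap
    return ece, mce
```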
Uncertainty Evaluation
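Uncertainty is summarized in the tables by the prediction interval coverage probability (PICP) and the mean prediction interval width (MPIW). A minimal sketch of their standard definitions; how the intervals themselves are produced by the ensemble is left abstract here.

```python
import numpy as np

def picp_mpiw(lower, upper, target):
    """PICP: fraction of targets inside [lower, upper]; MPIW: mean width."""
    lower, upper, target = (np.asarray(a, dtype=float)
                            for a in (lower, upper, target))
    inside = (target >= lower) & (target <= upper)
    return inside.mean(), (upper - lower).mean()
```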
3.4. Proposed Framework to Explain CNNs
3.5. Proposed Approach to Explain Facial Beauty
Beauty Feature Voting
Algorithm 2: Most voted feature (MVF) algorithm
Input: … ; landmarks detector
Initialization
…
4: Get class using voters
If activation-based method do … ;
else if segmentation-based method do …
8: Get { … }
12: Apply weighted feature voting and get the most voted feature (MVF) (4)
End
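To make the final voting step of Algorithm 2 concrete, here is a hypothetical sketch of weighted feature voting over the twelve facial regions listed later in the feature table; turning each voter's explanation heatmap into per-region scores (e.g., summing activation mass inside each landmark-defined region) is assumed to happen upstream.

```python
import numpy as np

# Facial regions, following the feature/area table in this article.
FEATURES = ["right eye", "left eye", "right eyebrow", "left eyebrow",
            "nose", "center of lips", "right of lips", "left of lips",
            "chin", "around hairline", "right cheek", "left cheek"]

def most_voted_feature(region_scores, voter_weights):
    """Weighted feature voting across voters.

    region_scores: shape (K, F) -- each voter's explanation mass per region.
    voter_weights: shape (K,)   -- e.g., calibration/uncertainty-based weights.
    Returns the name of the most voted feature (MVF).
    """
    region_scores = np.asarray(region_scores, dtype=float)
    voter_weights = np.asarray(voter_weights, dtype=float)
    votes = voter_weights @ region_scores   # aggregate vote per region, (F,)
    return FEATURES[int(np.argmax(votes))]
```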
3.6. Golden Ratio
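A minimal sketch of how a pair of facial distances can be compared against the golden ratio φ = (1 + √5)/2 ≈ 1.618; the relative-deviation measure below is an illustrative assumption, and the distance pairs (D3 vs. D5, and so on) follow the feature table later in this article.

```python
import math

PHI = (1 + math.sqrt(5)) / 2   # golden ratio, about 1.618

def golden_deviation(d_long, d_short):
    """Relative deviation of d_long/d_short from the golden ratio.

    0 means a perfect golden proportion; larger values mean the
    facial distance pair is further from phi.
    """
    return abs(d_long / d_short - PHI) / PHI

# Hypothetical pixel distances: eyes-midpoint-to-nose-tip vs. nose-tip-to-chin.
print(f"{golden_deviation(120.0, 75.0):.3f}")   # ratio 1.60 -> ~0.011
```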
3.7. Evaluation Metrics
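The attribute-correlation table below reports the point-biserial correlation r_pb together with a t-test. For reference, a minimal sketch of the textbook estimator (scipy.stats.pointbiserialr gives an equivalent correlation); the inputs pairing a binary beauty vote with a continuous region measurement are hypothetical.

```python
import numpy as np

def point_biserial(binary, values):
    """Point-biserial correlation r_pb and its t statistic (df = n - 2).

    binary: 0/1 group labels (e.g., voted beautiful or not).
    values: continuous measurements (e.g., region area).
    """
    binary = np.asarray(binary, dtype=bool)
    values = np.asarray(values, dtype=float)
    n, n1, n0 = len(values), binary.sum(), (~binary).sum()
    m1, m0 = values[binary].mean(), values[~binary].mean()
    s = values.std()                                  # population std (ddof=0)
    rpb = (m1 - m0) / s * np.sqrt(n1 * n0 / n**2)
    t = rpb * np.sqrt((n - 2) / (1 - rpb**2))
    return rpb, t

rpb, t = point_biserial([1, 1, 0, 1, 0, 0],
                        [220.0, 210.0, 150.0, 230.0, 160.0, 140.0])
print(f"r_pb={rpb:.2f}, t={t:.2f}")
```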
4. Empirical Results
4.1. Learning Beauty
4.2. Most Dependent Features
4.3. Golden Beauty
5. Discussion
6. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Adamson, P.A.; Doud Galli, S.K. Modern Concepts of Beauty. Plast. Surg. Nurs. 2009, 29, 5–9. [Google Scholar] [CrossRef] [PubMed]
- Liu, S.; Fan, Y.-Y.; Samal, A.; Guo, Z. Advances in Computational Facial Attractiveness Methods. Multimed. Tools Appl. 2016, 75, 16633–16663. [Google Scholar] [CrossRef]
- Liu, X.; Li, T.; Peng, H.; Ouyang, I.C.; Kim, T.; Wang, R. Understanding Beauty via Deep Facial Features. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019. [Google Scholar]
- Ortlieb, S.A.; Kügel, W.A.; Carbon, C.-C. Fechner (1866): The Aesthetic Association Principle—A Commented Translation. i-Perception 2020, 11. [Google Scholar] [CrossRef] [PubMed]
- Pelowski, M.; Markey, P.S.; Forster, M.; Gerger, G.; Leder, H. Move Me, Astonish Me… Delight My Eyes and Brain: The Vienna Integrated Model of Top-down and Bottom-up Processes in Art Perception (VIMAP) and Corresponding Affective, Evaluative, and Neurophysiological Correlates. Phys. Life Rev. 2017, 21, 80–125. [Google Scholar] [CrossRef]
- Leder, H.; Nadal, M. Ten Years of a Model of Aesthetic Appreciation and Aesthetic Judgments: The Aesthetic Episode—Developments and Challenges in Empirical Aesthetics. Br. J. Psychol. 2014, 105, 443–464. [Google Scholar] [CrossRef]
- Baker, S.B.; Patel, P.K.; Weinzweig, J. Aesthetic Surgery of the Facial Skeleton; Elsevier: London, UK, 2021. [Google Scholar]
- Little, A.C.; Jones, B.C.; DeBruine, L.M. Facial Attractiveness: Evolutionary Based Research. Philos. Trans. R. Soc. B Biol. Sci. 2011, 366, 1638–1659. [Google Scholar] [CrossRef] [Green Version]
- Little, A.C.; Jones, B.C. Attraction Independent of Detection Suggests Special Mechanisms for Symmetry Preferences in Human Face Perception. Proc. Biol. Sci. 2006, 273, 3093–3099. [Google Scholar] [CrossRef] [Green Version]
- Buggio, L.; Vercellini, P.; Somigliana, E.; Viganò, P.; Frattaruolo, M.P.; Fedele, L. “You Are so Beautiful”: Behind Women’s Attractiveness towards the Biology of Reproduction: A Narrative Review. Gynaecol. Endocrinol. 2012, 28, 753–757. [Google Scholar] [CrossRef]
- Zeki, S. Notes towards a (Neurobiological) Definition of Beauty. Gestalt Theory 2019, 41, 107–112. [Google Scholar] [CrossRef] [Green Version]
- Yang, T.; Formuli, A.; Paolini, M.; Zeki, S. The Neural Determinants of Beauty. bioRxiv 2021, 4999. [Google Scholar] [CrossRef]
- Vegter, F.; Hage, J.J. Clinical Anthropometry and Canons of the Face in Historical Perspective. Plast. Reconstr. Surg. 2000, 106, 1090–1096. [Google Scholar] [CrossRef] [PubMed]
- Bashour, M. History and Current Concepts in the Analysis of Facial Attractiveness. Plast. Reconstr. Surg. 2006, 118, 741–756. [Google Scholar] [CrossRef] [PubMed]
- Marquardt, S.R. Stephen R. Marquardt on the Golden Decagon and Human Facial Beauty. Interview by Dr. Gottlieb. J. Clin. Orthod. 2002, 36, 339–347. [Google Scholar]
- Iosa, M.; Morone, G.; Paolucci, S. Phi in Physiology, Psychology and Biomechanics: The Golden Ratio between Myth and Science. Biosystems 2018, 165, 31–39. [Google Scholar] [CrossRef] [PubMed]
- Petekkaya, E.; Ulusoy, M.; Bagheri, H.; Şanlı, Ş.; Ceylan, M.S.; Dokur, M.; Karadağ, M. Evaluation of the Golden Ratio in Nasal Conchae for Surgical Anatomy. Ear Nose Throat J. 2021, 100, NP57–NP61. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Bragatto, F.P.; Chicarelli, M.; Kasuya, A.V.; Takeshita, W.M.; Iwaki-Filho, L.; Iwaki, L.C. Golden Proportion Analysis of Dental–Skeletal Patterns of Class II and III Patients Pre and Post Orthodontic-Orthognathic Treatment. J. Contemp. Dent. Pract. 2016, 17, 728–733. [Google Scholar] [CrossRef]
- Kawakami, S.; Tsukada, S.; Hayashi, H.; Takada, Y.; Koubayashi, S. Golden Proportion for Maxillofacial Surgery in Orientals. Ann. Plast. Surg. 1989, 23, 95. [Google Scholar] [CrossRef]
- Stein, R.; Holds, J.B.; Wulc, A.E.; Swift, A.; Hartstein, M.E. Phi, Fat, and the Mathematics of a Beautiful Midface. Ophthal. Plast. Reconstr. Surg. 2018, 34, 491–496. [Google Scholar] [CrossRef]
- Jefferson, Y. Facial Beauty—Establishing a Universal Standard. Int. J. Orthod. Milwaukee 2004, 15, 9–22. [Google Scholar]
- Holland, E. Marquardt’s Phi Mask: Pitfalls of Relying on Fashion Models and the Golden Ratio to Describe a Beautiful Face. Aesthetic Plast. Surg. 2008, 32, 200–208. [Google Scholar] [CrossRef]
- Krauss, P.; Maier, A. Will We Ever Have Conscious Machines? Front. Comput. Neurosci. 2020, 14. [Google Scholar] [CrossRef] [PubMed]
- Kuzovkin, I.; Vicente, R.; Petton, M.; Lachaux, J.-P.; Baciu, M.; Kahane, P.; Rheims, S.; Vidal, J.R.; Aru, J. Activations of Deep Convolutional Neural Networks Are Aligned with Gamma Band Activity of Human Visual Cortex. Commun. Biol. 2018, 1, 107. [Google Scholar] [CrossRef] [PubMed]
- Bougourzi, F.; Dornaika, F.; Taleb-Ahmed, A. Deep Learning Based Face Beauty Prediction via Dynamic Robust Losses and Ensemble Regression. Knowl.-Based Syst. 2022, 242, 108246. [Google Scholar] [CrossRef]
- Savage, N. How AI and Neuroscience Drive Each Other Forwards. Nature 2019, 571, S15–S17. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Sano, T. Visualization of Facial Attractiveness Factors Using Gradient-weighted Class Activation Mapping to Understand the Connection between Facial Features and Perception of Attractiveness. Int. J. Affect. Eng. 2022, 21, 111–116. [Google Scholar] [CrossRef]
- Zhang, L.; Zhang, D.; Sun, M.-M.; Chen, F.-M. Facial Beauty Analysis Based on Geometric Feature: Toward Attractiveness Assessment Application. Expert Syst. Appl. 2017, 82, 252–265. [Google Scholar] [CrossRef]
- Gunes, H.; Piccardi, M. Assessing Facial Beauty through Proportion Analysis by Image Processing and Supervised Learning. Int. J. Hum. Comput. Stud. 2006, 64, 1184–1199. [Google Scholar] [CrossRef] [Green Version]
- Chen, F.; Zhang, D. A Benchmark for Geometric Facial Beauty Study. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2010; pp. 21–32. [Google Scholar]
- Fan, J.; Chau, K.P.; Wan, X.; Zhai, L.; Lau, E. Prediction of Facial Attractiveness from Facial Proportions. Pattern Recognit. 2012, 45, 2326–2334. [Google Scholar] [CrossRef]
- Xu, J.; Jin, L.; Liang, L.; Feng, Z.; Xie, D. A New Humanlike Facial Attractiveness Predictor with Cascaded Fine-Tuning Deep Learning Model. arXiv 2015, arXiv:1511.02465. [Google Scholar]
- Zhang, D.; Chen, F.; Xu, Y. Computer Models for Facial Beauty Analysis; Springer International Publishing: Cham, Switzerland, 2016; pp. 143–163. [Google Scholar] [CrossRef]
- Liang, L.; Lin, L.; Jin, L.; Xie, D.; Li, M. SCUT-FBP5500: A Diverse Benchmark Dataset for Multi-Paradigm Facial Beauty Prediction. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 1598–1603. [Google Scholar]
- Lebedeva, I.; Guo, Y.; Ying, F. Transfer Learning Adaptive Facial Attractiveness Assessment. J. Phys. Conf. Ser. 2021, 1922, 012004. [Google Scholar] [CrossRef]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. Int. J. Comput. Vis. 2020, 128, 336–359. [Google Scholar] [CrossRef] [Green Version]
- Feng, G.; Lei, J. The Effect of Odor Valence on Facial Attractiveness Judgment: A Preliminary Experiment. Brain Sci. 2022, 12, 665. [Google Scholar] [CrossRef] [PubMed]
- He, D.; Workman, C.I.; He, X.; Chatterjee, A. What Is Good Is Beautiful (and What Isn’t, Isn’t): How Moral Character Affects Perceived Facial Attractiveness. Psychol. Aesthet. Creat. Arts 2022. [Google Scholar] [CrossRef]
- Shahhosseini, M.; Hu, G.; Pham, H. Optimizing Ensemble Weights and Hyperparameters of Machine Learning Models for Regression Problems. Mach. Learn. Appl. 2022, 7, 100251. [Google Scholar] [CrossRef]
- Sun, J.; Li, H. Listed Companies’ Financial Distress Prediction Based on Weighted Majority Voting Combination of Multiple Classifiers. Expert Syst. Appl. 2008, 35, 818–827. [Google Scholar] [CrossRef]
- Huang, G.; Li, Y.; Pleiss, G.; Liu, Z.; Hopcroft, J.E.; Weinberger, K.Q. Snapshot Ensembles: Train 1, Get M for Free. arXiv 2017, arXiv:1704.00109. [Google Scholar]
- Perrone, M.P.; Cooper, L.N. When Networks Disagree: Ensemble Methods for Hybrid Neural Networks; U.S. Army Research Office: Research Triangle Park, NC, USA, 1992. [Google Scholar]
- Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
- Liang, G.; Zhang, Y.; Wang, X.; Jacobs, N. Improved Trainable Calibration Method for Neural Networks on Medical Imaging Classification. arXiv 2020, arXiv:2009.04057. [Google Scholar]
- Küppers, F.; Kronenberger, J.; Schneider, J.; Haselhoff, A. Bayesian Confidence Calibration for Epistemic Uncertainty Modelling. In Proceedings of the 2021 IEEE Intelligent Vehicles Symposium (IV), Nagoya, Japan, 11–17 July 2021; pp. 466–472. [Google Scholar] [CrossRef]
- Guo, C.; Pleiss, G.; Sun, Y.; Weinberger, K.Q. On Calibration of Modern Neural Networks. In Proceedings of the 34th International Conference on Machine Learning, PMLR, Sydney, Australia, 6–11 August 2017; Volume 70, pp. 1321–1330. [Google Scholar] [CrossRef]
- Kauffmann, J.; Müller, K.-R.; Montavon, G. Towards Explaining Anomalies: A Deep Taylor Decomposition of One-Class Models. Pattern Recognit. 2020, 101, 107198. [Google Scholar] [CrossRef]
- Dyrba, M.; Pallath, A.H.; Marzban, E.N. Comparison of CNN Visualization Methods to Aid Model Interpretability for Detecting Alzheimer’s Disease. In Informatik Aktuell; Springer Fachmedien Wiesbaden: Wiesbaden, Germany, 2020; pp. 307–312. [Google Scholar]
- Bach, S.; Binder, A.; Montavon, G.; Klauschen, F.; Müller, K.-R.; Samek, W. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. PLoS ONE 2015, 10, e0130140. [Google Scholar] [CrossRef] [Green Version]
- Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144. [Google Scholar] [CrossRef]
- Zhao, X.; Huang, W.; Huang, X.; Robu, V.; Flynn, D. BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations. In Proceedings of the Uncertainty in Artificial Intelligence, PMLR, Virtual Event, 7–8 April 2022; pp. 887–896. [Google Scholar]
- Fel, T.; Cadène, R.; Chalvidal, M.; Cord, M.; Vigouroux, D.; Serre, T. Look at the Variance! Efficient Black-Box Explanations with Sobol-Based Sensitivity Analysis. Adv. Neural Inf. Process Syst. 2021, 34. [Google Scholar] [CrossRef]
- Suzuki, S.; Abe, K. Topological Structural Analysis of Digitized Binary Images by Border Following. Comput. Vis. Graph. Image Process. 1985, 30, 32–46. [Google Scholar] [CrossRef]
- Weisstein, E.W. Golden Ratio. Available online: https://mathworld.wolfram.com/GoldenRatio.html (accessed on 29 October 2020).
- Hossin, M.; Sulaiman, M.N. A Review on Evaluation Metrics for Data Classification Evaluations. Int. J. Data Min. Knowl. Manag. Process 2015, 5. [Google Scholar] [CrossRef]
- Linacre, J.M.; Rasch, G. The expected value of a point-biserial (or similar) correlation. Rasch Meas. Trans. 2008, 22, 1154. [Google Scholar]
- Chicco, D.; Jurman, G. The Advantages of the Matthews Correlation Coefficient (MCC) over F1 Score and Accuracy in Binary Classification Evaluation. BMC Genom. 2020, 21, 6. [Google Scholar] [CrossRef] [Green Version]
- Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar] [CrossRef]
- Hu, J.; Shen, L.; Sun, G. Squeeze-And-Excitation Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar] [CrossRef]
- Parkhi, O.M.; Vedaldi, A.; Zisserman, A. Deep Face Recognition. In Proceedings of the British Machine Vision Conference (BMVC), Swansea, UK, 7–11 September 2015. [Google Scholar] [CrossRef] [Green Version]
- Zhang, P.; Chen, Y.; Zhu, Y.; Wang, H. Eye Region as a Predictor of Holistic Facial Aesthetic Judgment: An Event-Related Potential Study. Soc. Behav. Pers. 2021, 49. [Google Scholar] [CrossRef]
- Lan, M.; Peng, M.; Zhao, X.; Li, H.; Yang, J. Neural Processing of the Physical Attractiveness Stereotype: Ugliness Is Bad vs. Beauty Is Good. Neuropsychologia 2021, 155, 107824. [Google Scholar] [CrossRef]
Research | Data Size | Participants | Facial Data Property | Data Type | Explaining Beauty Decisions | Learned Features Visualization |
---|---|---|---|---|---|---|
(Gunes et al., 2006) [29] | 215 | 46 | Diverse | Averaged rating score | No | No
(Chen et al., 2010) [30] | 23,412 | Unknown | Asian | Unknown | No | No
(Fan et al., 2012) [31] | 432 | 30 | Computer generated | Averaged rating score | No | No
(Xu et al., 2015) [32] | 500 | 75 | Asian | Averaged rating score | No | CNN filters
(Zhang et al., 2016) [33] | 799 | 25 | Diverse | Averaged rating score | No | No
(Zhang et al., 2017) [28] | 9415 | Unknown | Asian/computer generated | Averaged rating score | No | No
(Liang et al., 2018) [34] | 5500 | 60 | Asian/Caucasian | Averaged rating score | No | No
(Liu et al., 2019) [3] | No data proposed | 0 | Diverse/computer generated | Voting/averaged rating score | No | No
(Lebedeva et al., 2021) [35] | No data proposed | 0 | Asian/Caucasian | Averaged rating score | No | No
(Sano, 2022) [27] | No data proposed | 0 | Asian/Caucasian | Averaged rating score | No | Averaged Grad-CAM
(Bougourzi et al., 2022) [25] | No data proposed | 0 | Asian/Caucasian | Averaged rating score | No | No
Proposed | 50,000 | 10 | Diverse | Subjective | Yes | Proposed MVF Algorithm |
Feature | Area |
---|---|
Feature 1 | Right eye |
Feature 2 | Left eye |
Feature 3 | Right eyebrow |
Feature 4 | Left eyebrow |
Feature 5 | Nose |
Feature 6 | Center of lips |
Feature 7 | Right of lips |
Feature 8 | Left of lips |
Feature 9 | Chin |
Feature 10 | Around hairline |
Feature 11 | Right cheek |
Feature 12 | Left cheek |
Feature | Distance | Description |
---|---|---|
Feature 1 | D3 vs. D5 | Midpoint between eyes to nose tip vs. nose tip to chin |
Feature 2 | D7 vs. D19 | Left eye length vs. distance to eyes midpoint |
Feature 3 | D8 vs. D18 | Right eye length vs. distance to eyes midpoint
Feature 4 | D4 vs. D6 | Nose tip to lips center vs. lips center to chin |
Feature 5 | D3 vs. D4 | Nose length vs. nose width |
Feature 6 | D1 vs. D2 | Length of the face vs. width of the face |
Feature 7 | D11 vs. D17 | Hairline to right pupil vs. pupil to lips center |
Feature 8 | D12 vs. D16 | Hairline to left pupil vs. pupil to lips center |
Feature 9 | D5 vs. D6 | Nose tip to chin vs. lips to chin |
Feature 10 | D5 vs. D14 | Nose tip to chin vs. right pupil to nose tip |
Feature 11 | D5 vs. D15 | Nose tip to chin vs. left pupil to nose tip |
Feature 12 | D4 vs. D20 | Nose width vs. nose tip to lips |
Feature 13 | D21 vs. D11 | Outside eyes vs. hairline to the right pupil |
Feature 14 | D21 vs. D12 | Outside eyes vs. hairline to the left pupil |
Feature 15 | D4 vs. D21 | Nose width vs. lips length |
Feature 16 | D13 vs. D1 | Forehead vs. face length |
Feature 17 | D10 vs. D8 | Right pupil to eyebrow vs. right eye length |
Feature 18 | D23 vs. D7 | Left pupil to eyebrow vs. left eye length |
Feature 19 | D9 vs. D8 | Right eyebrow length vs. right eye length
Feature 20 | D22 vs. D7 | Left eyebrow length vs. left eye length
Feature 21 | D26 vs. D8 | Right eye height vs. width |
Feature 22 | D25 vs. D7 | Left eye height vs. width |
Feature 23 | D24 vs. D21 | Height of lips vs. width |
Voters | ECE-B | ECE-A | MCE-B | MCE-A | PICP | MPIW
---|---|---|---|---|---|---|
VGG16 | 35.99 (34.81–37.02) | 12.66 (11.45–13.68) | 41.05 (40.26–41.14) | 26.93 (26.1–27.71) | 81.29 (80.31–81.93) | 13.57 (9.76–17.91) |
SENet50 | 44.40 (43.56–45.08) | 3.93 (2.92–4.93) | 51.35 (49.79–52.73) | 30.66 (29.67–32.5) | 81.23 (79.37–83.76) | 15.85 (12.87–17.66) |
ResNet50 | 41.49 (39.74–42.85) | 8.61 (6.79–10.38) | 55.21 (54.96–56.48) | 17.43 (16.46–18.49) | 81.39 (80.69–82.86) | 17.06 (14.55–19.99) |
Voters | Precision | Recall | F1 | MCC | ACC |
---|---|---|---|---|---|
VGG16 | 79.00 (77.41–80.01) | 77.40 (76.58–78.48) | 78.58 (78.02–79.1) | 60.88 (59.31–62.43) | 79.48 (77.42–81.73) |
SENet50 | 83.01 (81.53–84.46) | 87.14 (85.6–87.49) | 85.47 (83.1–89.44) | 71.67 (70.28–76.26) | 84.51 (82.74–85.37) |
ResNet50 | 85.94 (84.96–87.09) | 87.20 (84.16–88.64) | 86.31 (85.67–87.82) | 74.11 (72.3–75.89) | 87.34 (86.46–88.22) |
Strategy | ECE-B | ECE-A | MCE-B | MCE-A | PICP | MPIW |
---|---|---|---|---|---|---|
Majority Voting | 34.04 (32.82–35.36) | 4.98 (3.52–6.25) | 49.57 (48.01–51.58) | 14.22 (11.78–17.51) | 52.55 (51.79–53.25) | 23.47 (22.02–25.39) |
Best Combination | 31.22 (30.53–32.02) | 4.57 (4–5.15) | 52.39 (50–54.17) | 7.89 (6.2–9.21) | 48.76 (45.92–50.69) | 26.12 (25.53–26.64) |
Priori Recognition Performance | 31.98 (31.16–33.67) | 4.84 (4.25–5.89) | 41.02 (39.12–43.84) | 9.80 (8.57–11.43) | 48.58 (46.28–50.91) | 23.10 (22.41–23.64) |
ECE (ours) | 32.87 (30.65–35.23) | 2.83 (1.38–2.95) | 47.25 (44.15–50.41) | 8.42 (5.6–9.74) | 45.46 (44.22–46.49) | 24.52 (23.29–25.48) |
MCE (ours) | 26.18 (25.33–26.9) | 2.37 (1.72–2.73) | 51.60 (48.98–53.34) | 5.80 (4.17–7.22) | 48.56 (47.31–50.04) | 25.19 (23.93–26.33) |
PICP (ours) | 34.37 (33.95–35.24) | 4.52 (3.11–6.43) | 49.51 (48.68–50.24) | 6.79 (5.69–8.37) | 45.90 (44.72–46.59) | 24.39 (23.63–25.34) |
Strategy | Precision | Recall | ACC | F1 | MCC | AUC |
---|---|---|---|---|---|---|
Majority Voting | 88.73 (88.21–90.02) | 89.97 (89.15–91.21) | 89.81 (87.01–91.52) | 90.26 (89.29–90.82) | 78.09 (77.86–78.52) | 90.01 (88.43–92.69) |
Best Combination | 90.00 (89.96–90.03) | 88.35 (87.78–88.63) | 89.42 (89.09–90.09) | 89.15 (88.63–89.44) | 78.25 (77.02–79.32) | 89.95 (89.71–90.35) |
Priori Recognition Performance | 90.48 (88.33–91.33) | 88.75 (87.63–89.66) | 89.03 (87.3–91.28) | 89.43 (89.22–89.59) | 78.69 (78.29–79.13) | 89.93 (89.8–90.09) |
ECE (ours) | 91.40 (90.18–92.63) | 90.74 (89.56–91.67) | 91.02 (90.94–91.1) | 91.00 (90.06–91.83) | 82.38 (81.17–83.5) | 91.93 (91.26–93.07) |
MCE (ours) | 90.21 (89.36–90.88) | 90.94 (90.75–91.18) | 90.96 (90.52–91.75) | 90.57 (90.29–90.81) | 81.01 (80.57–81.49) | 91.57 (91.09–93.25) |
PICP (ours) | 93.89 (93.2–94.71) | 92.16 (90.09–92.42) | 92.25 (91.8–92.71) | 92.98 (92.3–93.75) | 85.56 (85.26–86.16) | 93.66 (93.43–94.01) |
Feature | Variance | Standard Deviation | rpb | t-Test | p-Value | Correlation |
---|---|---|---|---|---|---|
Right eye area | 13,232.09 | 115.03 | 0.87 | 7.37 | <0.001 | Positive |
Left eye area | 13,024.69 | 114.13 | −0.93 | 10.60 | <0.001 | Negative |
Right eyebrow area | 623.15 | 24.96 | 0.58 | 3.06 | 0.007 | Positive |
Nose | 245.13 | 15.66 | −0.53 | 2.63 | 0.02 | Negative |
Right cheek area | 627.25 | 25.04 | 0.06 | 0.27 | 0.79 | Positive |
Right corner of lips | 181.16 | 13.46 | 0.11 | 0.48 | 0.64 | Positive |
 | Right Eye Average Representation | Golden Ratio | Left Eye Average Representation | Golden Ratio |
---|---|---|---|---|
Beauty | (average eye image) | 19.60% | (average eye image) | 19.60%
Different Beauty | (average eye image) | 10.95% | (average eye image) | 7.31%
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Aldhahi, W.; Albusair, T.; Sull, S. Beauty in the Eyes of Machine: A Novel Intelligent Signal Processing-Based Approach to Explain the Brain Cognition and Perception of Beauty Using Uncertainty-Based Machine Voting. Electronics 2023, 12, 48. https://doi.org/10.3390/electronics12010048