Your Face Mirrors Your Deepest Beliefs—Predicting Personality and Morals through Facial Emotion Recognition

MIT Center for Collective Intelligence, Cambridge, MA 02142, USA
Department of Engineering, University of Perugia, 06123 Perugia, Italy
Galaxyadvisors AG, 5000 Aarau, Switzerland
Department of Data Science, Lucerne University of Applied Sciences and Arts, 6002 Lucerne, Switzerland
Department of Information Systems, University of Cologne, 50923 Cologne, Germany
Author to whom correspondence should be addressed.
Future Internet 2022, 14(1), 5;
Received: 30 November 2021 / Revised: 20 December 2021 / Accepted: 22 December 2021 / Published: 22 December 2021
(This article belongs to the Collection Machine Learning Approaches for User Identity)


Abstract

Can we really “read the mind in the eyes”? Moreover, can AI assist us in this task? This paper answers these two questions by introducing a machine learning system that predicts the personality characteristics of individuals on the basis of their face. It does so by tracking the emotional response of the individual’s face through facial emotion recognition (FER) while they watch a series of 15 short videos of different genres. To calibrate the system, we invited 85 people to watch the videos while their emotional responses were analyzed through their facial expressions. At the same time, these individuals also took four well-validated surveys of personality characteristics and moral values: the revised NEO FFI personality inventory, the Haidt moral foundations test, the Schwartz personal value system, and the domain-specific risk-taking scale (DOSPERT). We found that the personality characteristics and moral values of an individual can be predicted from the emotional response to the videos shown in their face, with an accuracy of up to 86% using gradient-boosted trees. We also found that different personality characteristics are better predicted by different videos; in other words, no single video provides accurate predictions for all personality characteristics, but rather it is the response to the mix of different videos that allows for accurate prediction.

1. Introduction

A face is like the outside of a house, and most faces, like most houses, give us an idea of what we can expect to find inside.
~ Loretta Young
The face is the mirror of the mind, and eyes without speaking confess the secrets of the heart.
~ St. Jerome
Proverbs like the ones above allude to the fact that our faces have the potential to give away our deepest emotions. However, just like the facade of a house might be misleading about what is inside, the mind behind the face might hide its true feelings. Emotionally competent people claim to be able to guess what another person is thinking just by watching that person’s face. However, humans are not particularly good at reading emotions in others’ faces. For instance, the test “reading the mind in the eyes” [1], which shows only the eyes of a face, is frequently answered with an accuracy of less than fifty percent. Psychologist Lisa Feldman Barrett claims that, when we are not primed, we are actually little better than chance at reading others’ emotions [2]. Humans are also notoriously bad at identifying personality characteristics in others [3]. While early systems for reading emotions from the face extracted features from different parts of the face and compared them directly, for instance on the basis of the facial action coding system (FACS) [4], facial emotion recognition has made huge progress over the last 10 years thanks to advances in AI and deep learning [5,6,7]. In this paper, we used the latest advances in this field to automatically predict personality characteristics, calibrated using four well-established frameworks assessing different facets of personality: Neo-FFI [8], moral foundations [9], Schwartz moral values [10], and attitudes towards risk-taking [11].
The remainder of this paper is organized as follows. First, we set the stage by explaining how the emotional response to an external event can reveal the moral values of an individual. We also motivate how facial expressions might indicate the personality characteristics of a person. We then introduce our system, which tracks emotions through facial emotion recognition while the viewer watches a sequence of 15 short, emotionally triggering video snippets. We then present our results, demonstrating through correlations, regression, and machine learning that the emotional response in the face of the viewer, captured through facial emotion recognition, can indeed predict the personality and moral values of the viewer. We conclude the paper with a discussion of the results, limitations, and future work.

2. Background

2.1. Emotional Response Shows Individual Value System

On the basis of their moral values, humans experience or show different emotions in response to an external stimulus. Emotional reactions triggered through moral values are called “moral affect” [12]. Moral affect, such as shame, guilt, and embarrassment, is linked to moral behavior, leading to prohibitions against behavior that is likely to have negative consequences for the well-being of others [13]. For instance, depending on their personal value system, individuals might have shown different emotional reactions when President Trump announced the construction of a wall to keep out asylum seekers from Mexico [14]. Both philosophers [14] and psychologists [2] have investigated this link between morals and emotions. In order to experience that something is wrong, one needs to have a feeling of disapproval towards it [14]. To measure this feeling of disapproval, technologies such as tracking hormone levels in blood or saliva have been used so far. For instance, it has been shown that the hormone levels in the saliva of homosexual and heterosexual men differ radically when they are shown pictures of two men kissing [15]. The researchers showed homosexual and heterosexual men in Utah pictures of same-sex public displays of affection, plus disgusting images, such as a bucket of maggots. They exploited the link between disgust and prejudice; disgust has been shown to be capable of eliciting responses from the sympathetic nervous system, one of the body’s major stress systems [16]. Salivary alpha-amylase is considered a biomarker of the sympathetic nervous system that is especially responsive to inductions of disgust. The researchers found that the difference in salivary alpha-amylase explained the degree of sexual prejudice against homosexuality among their test subjects, similar to their disgust about a bucket of maggots. In other words, the emotional response, measured through salivary alpha-amylase, indicated the subjects’ moral values.
Instead of measuring negative (and positive) emotions through saliva, in our research we measured them through facial emotion recognition, maintaining that a similar link exists between emotional response and moral values.

2.2. Reading Personality Attributes from Facial Characteristics

Studying the relationship between facial and personality characteristics has a long history going back to antiquity. The book “Physiognomics”, discussing the relationship between facial appearance and character, was written around 300 BC in Aristotle’s name, but is today attributed to a different author by most researchers. The Swiss poet, writer, philosopher, physiognomist, and theologian Johann Caspar Lavater published his magnum opus on physiognomy, “Physiognomische Fragmente zur Beförderung der Menschenkenntnis und Menschenliebe” (Physiognomic fragments to promote knowledge of human nature and human love) [17], between 1775 and 1778. It cataloged leaders and ordinary men (there were very few pictures of women) of his time by their facial shape, or what he called their “lines of countenance”. Lavater even invented an apparatus for taking facial silhouettes to quickly capture the characteristics of a face, and thus the personality of the person.
Later, the statistician Francis Galton tried to define physiognomic characteristics of health, beauty, and criminality by creating composites through overlaying pictures of archetypical faces [18]. The Italian criminologist and scientist Cesare Lombroso continued this work by defining facial measures of degeneracy and insanity, including facial angles, “abnormalities” in bone structure, and volumes of brain fluid [19]. For the better part of the 20th century, scientists dismissed physiognomics as a “pseudoscience”. This changed towards the end of the 20th century. While early physiognomists from Aristotle to Lombroso tried to develop manually assembled frameworks, AI and deep learning have given a huge boost to this emerging field. Recently, physiognomics has been experiencing renewed interest among researchers, particularly in studies comparing the facial width-to-height ratio with personality. The theory of the facial width-to-height ratio (fWHR) posits that men with a higher ratio, that is, with broader, rounder faces, are more aggressive, while men with thinner faces are more trustworthy [20,21,22,23].
Recognizing these features automatically through facial emotion recognition has come a long way since the early days of the facial action coding system, thanks to recent advances in AI and deep learning. A large amount of research has addressed the issue of recognizing personality characteristics from facial attributes. For instance, the ChaLearn “Looking at People First Impression Challenge” released a dataset of 10,000 15 s videos with faces (, accessed on 21 December 2021) [24], asking participants in the challenge to identify the FFI personality characteristics [8] of the person in the video, as well as their age, ethnicity, and gender [25]. The problem with this dataset is that the personality attributes were added by Amazon Mechanical Turk workers, which can lead to a biased ground truth, as it is based on guesswork by humans (the Turkers). As mentioned in the introduction, other researchers have shown that the accuracy of human labelers in recognizing emotions is only marginally better than guesswork, at slightly below 50 percent [2]. Nevertheless, the winners of the ChaLearn challenge achieved an impressive accuracy of over 91% in predicting the FFI personality characteristics on this pre-labeled dataset [26]. However, it would be better to have true ground truth for the personality characteristics of the subjects in the videos. In another project using Facebook likes, where ground truth was available, the researchers showed that the computer was actually better at recognizing personality characteristics than work colleagues, who reached only 27% accuracy, while the computer achieved 56% accuracy [3]; spouses were the most accurate at 58%. The personality characteristics had been collected from 86,220 users through a personality survey on Facebook and were predicted from Facebook likes using regression.
Earlier work has used facial expression of the viewer to measure the quality of a video [27,28,29]. We extend this work to not only measure the degree of enjoyment of the viewer, but the personality characteristics and moral values of the viewer—motivated by the insight that facial expressions will mirror moral values—combining face emotion recognition with ground truth obtained directly from surveys taken by the individual.

3. Methodology—Recording Emotions While Watching Videos

Our approach extends existing systems by measuring not only video quality, but also the moral values and personality of the viewers. It uses real ground truth on personality characteristics and moral values for prediction: the people whose faces are recorded while they watch a sequence of 15 emotionally touching video segments are also asked to fill out a series of personality tests.

3.1. Measuring Facial Emotions

Our system consists of a website (, accessed on 21 December 2021) where the participant watches a sequence of 15 videos (Figure 1).
Table 1 lists the 15 movie snippets, with a total length of 9 min 22 s, that are shown to users on the website. The emotions on their faces are recorded after they have given informed consent that their anonymized emotions will be recorded; no video of the face is stored.
The 15 video snippets show controversial scenes with the aim of generating a wide range of emotions in respondents [30]. We use the face-api.js tool (, accessed on 21 December 2021), which employs a convolutional neural network with a ResNet-34 architecture [31], to recognize the user’s facial emotions in each frame (up to 30 times per second) of the user’s webcam feed. The tracked emotions are joy, sadness, anger, fear, surprise, and disgust [32]. In addition, a seventh emotion, “neutral”, was added, which is reported when none of the six Ekman emotions can be recognized and greatly increases machine learning accuracy.
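The per-frame emotion scores described above can be collapsed into per-participant features by simple aggregation. The sketch below is our own illustration, not the authors’ code: the feature layout (mean intensity of each of the 7 emotions per video, 15 videos, giving a 105-dimensional vector) and the function names are assumptions.

```python
import numpy as np

# The seven emotion channels reported per frame by face-api.js.
EMOTIONS = ["happy", "sad", "angry", "fearful", "surprised", "disgusted", "neutral"]

def video_features(frames: np.ndarray) -> np.ndarray:
    """Collapse per-frame emotion probabilities of shape (n_frames, 7)
    into one mean-intensity value per emotion for a single video."""
    return frames.mean(axis=0)

def subject_features(per_video_frames: list) -> np.ndarray:
    """Concatenate the 7 mean-emotion features of all 15 videos into a
    single 105-dimensional feature vector for one participant."""
    return np.concatenate([video_features(f) for f in per_video_frames])
```

Other aggregations (e.g., per-video maxima or variances) would fit the same scheme; the mean is the simplest choice.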

3.2. Measuring Personality and Morals of the Viewers

Our dependent variables are collected through four well-validated personality and moral values assessments. On the same website where the videos are shown, the user is asked to fill out four online surveys: the revised NEO FFI personality inventory, the Haidt moral foundations test, the Schwartz personal value system, and the domain-specific risk-taking scale (DOSPERT). The OCEAN (Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism) personality characteristics are measured with the Neo-FFI survey [8]. Risk preference is measured with the Domain-Specific Risk-Taking (DOSPERT) survey [11], which assesses the disposition to take risks in five specific domains of life (ethical, financial, recreational, health, and social). It measures both the willingness to take risks and the individual perception of an activity as risky. Moral foundational values are measured with the Haidt moral foundations survey [9], which assesses the moral values of the respondent in five categories (care, fairness, loyalty, authority, and sanctity). In addition, the two Schwartz dimensions of Conservation and Transcendence are also assessed through a survey [10,33]. The Schwartz values have been validated in many countries around the world [34].

4. Results—Emotional Responses Predict Values

We found that all four dimensions of personality, i.e., FFI characteristics, DOSPERT risk-taking, moral foundations, and Conservation and Transcendence (Schwartz values), can be predicted on the basis of the emotions shown while watching the 15 different video segments. Table 2 shows the descriptive statistics of our dependent variables for all four dimensions, listing the individual traits we mapped through psychometric tests.
In Appendix A, we show the Pearson’s correlation coefficients of individual traits with the different emotions experienced while watching the videos. Commenting on each single association and its significance, or investigating the possible reasons behind these associations, is beyond the scope of this research. Rather, we wanted to show the possibility of predicting individual traits, based on the differential emotional response of individuals exposed to the same set of stimuli, using emotions automatically recognized through artificial intelligence.
The preliminary result of the correlations, namely a suggested association between individual differences and people’s emotional responses, is confirmed by the regression models presented in Table 3, Table 4, Table 5 and Table 6. For each set of dependent variables, they show the best model, i.e., the combination of predictors that explains the largest proportion of variance. We found no evidence of collinearity problems (evaluated by calculating variance inflation factors). These regressions illustrate the predictability of personality characteristics and morals from facial expressions of emotion using conventional statistical methods.
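The variance-inflation-factor check mentioned above needs nothing beyond ordinary least squares: the VIF of predictor j is 1/(1 − R²_j), where R²_j comes from regressing predictor j on all the others. A minimal sketch (our own illustration, not the authors’ code; statsmodels’ `variance_inflation_factor` is the common library route):

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance inflation factor of each column of X (n_samples, n_features):
    VIF_j = 1 / (1 - R_j^2), with R_j^2 from regressing column j on all
    remaining columns (plus an intercept) via ordinary least squares."""
    n, k = X.shape
    factors = []
    for j in range(k):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        factors.append(np.inf if r2 >= 1.0 else 1.0 / (1.0 - r2))
    return np.array(factors)
```

A common rule of thumb flags predictors with VIF above 5 or 10 as collinear; independent predictors sit close to 1.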
In general, we found that models for some traits, such as conservation, transcendence, and ethical and financial risk-taking likelihood, had promising adjusted R2 values. In terms of emotions, fear seemed more relevant for predicting the DOSPERT scores, whereas happiness seemed more associated with the Big Five personality traits. Remaining neutral in front of a video can also play a role in determining an individual’s personality characteristics. Recall that the facial emotion recognition system returns this value when it cannot assign any other emotion with sufficiently high confidence, corresponding to the individual sitting in front of the computer with an unmoving face. We also see that different videos triggered a variety of emotional responses, which were useful for predicting different traits. All the relationships explored in this study could be further investigated in future research in order to better analyze their meaning from a psychological perspective.
Figure 2 summarizes the findings from the regression models, providing evidence of the importance of each video and emotion for the prediction of individual traits.
For example, we can observe that videos 14, 9, and 2 triggered the most useful emotional responses. Among the emotions, fear and happiness were the ones most used to make predictions, with fear being particularly relevant for the DOSPERT traits.

Predicting Personality and Morals Using Machine Learning

While correlations and regressions showed promising results, we wanted to complement our analysis by exploring non-linear relationships and making predictions on a test sample (a subset of observations) not used for model training. In particular, we binned the continuous scores of our dependent variables into three classes representing high, medium, and low values. Subsequently, we used a gradient boosting approach, namely XGBoost [35], to make predictions. We trained our models using 10-fold cross-validation and the SMOTE technique [36] to treat unbalanced classes. ADASYN [37] was used as an alternative to SMOTE in the cases where it led to improved forecasts. In Table 7, we present the results of these forecasting exercises, made on the 10% of observations that were held out for testing prediction accuracy.
As the table shows, we obtained good prediction results, both in terms of average accuracy and Cohen’s kappa. Only for the perceived health risk trait of the DOSPERT scale did we obtain an accuracy score below 70% (60% average accuracy and a kappa value of 0.38). This confirms our original hypothesis that facial emotion recognition can be used to predict personality and other individual traits.
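Cohen’s kappa complements raw accuracy by correcting for the agreement expected by chance, which matters with three classes of unequal size. A minimal reference implementation (ours, for illustration; scikit-learn’s `cohen_kappa_score` is the standard library choice):

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement between predicted and true class
    labels, corrected for the agreement expected by chance alone."""
    n = len(y_true)
    p_obs = sum(t == p for t, p in zip(y_true, y_pred)) / n
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    # Chance agreement: sum over classes of the product of marginal frequencies.
    p_exp = sum(true_counts[c] * pred_counts.get(c, 0) for c in true_counts) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)
```

A kappa of 0 means chance-level prediction and 1 means perfect agreement, so the reported 0.38 for the perceived health trait indicates only modest skill despite 60% accuracy.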
Similarly to the regression models, different features were more important for predicting different personality and individual traits. In order to evaluate the contribution of each feature to the model predictions, we used Shapley additive explanations (SHAP) [38,39]. In the following (Figure 3, Figure 4, Figure 5 and Figure 6), we provide some examples, while the remaining charts are shown in Appendix A.
As the Shapley charts illustrate, again the emotions happiness and fear were found to have the strongest predictive power. However, we cannot make any claim about what emotional response to which movie predicts what personality characteristics. This is not the point of this paper. The point is that “your emotional response predicts your personality characteristics and moral values”. Identification of the most emotionally provocative movies is most likely dependent on the individual personality and values of the viewer, which is also related to local cultures and values. It would therefore be another research project to precisely identify a minimal set of short movies that consistently provoke the most expressive emotions that are the most indicative of an individual’s personality and morals.

5. Limitations, Future Work, and Conclusions

In this work, we show that AI can be used for the task of facial emotion recognition, producing features that can in turn predict people’s personality and moral values.
Ours is an exploratory analysis with regard to the associations found between different individual traits and the emotions produced in response to a set of audiovisual stimuli. These relationships could be further investigated in future research in order to better understand their meaning from a psychological perspective. Future research should consider more control variables, which we could not collect in our experiment (due to privacy arrangements), such as the age, gender, and ethnicity of experiment participants. Similarly, a different set of videos could be taken into account, also looking for the optimal set of stimuli that produces emotional responses more strongly associated with specific individual differences.
Our research has both practical and theoretical implications. On the theoretical side, it further confirms the insight that moral affect, i.e., emotions in response to positive and negative experiences, is at the center of our ethical values. On the practical side, our approach offers a novel and more honest way to measure personality characteristics, attitudes to risk, and moral values. As discussed above, while humans tend to misjudge the personality and moral values of others and themselves, AI provides an honest virtual mirror assisting in this task. In conclusion, this study has shown that while humans are frequently incapable of looking behind the facade of the face and “reading the mind in the eyes”, artificial intelligence can lend a helping hand to people who have difficulty with this task.

Author Contributions

Conceptualization, P.A.G.; methodology, P.A.G., A.F.C. and E.A.; software, C.C., M.F.K., L.R. and T.S.; formal analysis, P.A.G., A.F.C. and E.A.; investigation, C.C., M.F.K., L.R. and T.S.; data curation, C.C., M.F.K., L.R. and T.S.; writing—original draft preparation, P.A.G. and A.F.C.; writing—review and editing, P.A.G. and A.F.C. All authors have read and agreed to the published version of the manuscript.


Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board of MIT (protocol code 170181783) on 16 February 2017.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available because they contain information that could compromise the privacy of research participants.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1 shows the Pearson’s correlation coefficients of individual traits with the different emotions experienced while watching the videos. Each emotion is indicated together with the number of the video it is referring to. It is interesting to notice how significant associations emerge for every individual trait—with some emotions being particularly relevant for some traits, such as fear (revealed while watching videos 4–13 and 15) for the DOSPERT scale.
Table A1. Pearson’s correlation coefficients of the individual traits with the emotions (angry, disgusted, surprised, fearful, sad, happy) recognized while watching each of the 15 videos. Columns are grouped into Big Five, DOSPERT Scale, Schwartz Values, and Haidt Moral Values; asterisks mark statistically significant coefficients. [Table body omitted: the cell values are run together in the extracted text and cannot be reliably reconstructed, and the extraction is truncated partway through the “Happy” rows.]
Happy 3−0.187−0.0990.0050.095−0.187−0.092−0.060−0.005−0.088−0.040−0.0450.072−0.152−0.006−0.021−0.272 *−0.1130.2050.052−0.143−0.1820.021
Happy 4−0.168−0.0530.1150.024−0.105−0.032−0.1520.056−0.099−0.017−0.0250.0390.012−0.1640.013−0.1950.1790.2030.0810.005−0.0760.033
Happy 5−0.107−0.2110.111−0.009−0.121−0.018−0.2210.069−0.060−0.024−0.040−0.014−0.086−0.124−0.194−0.320 **0.1010.1870.068−0.025−0.104−0.059
Happy 6−0.100−0.0120.0560.0300.037−0.018−0.1160.0590.1500.0180.0560.0240.107−0.1420.080−0.270 *−0.0250.0810.0710.046−0.152−0.175
Happy 70.0040.1210.0800.0610.015−0.092−0.126−0.027−0.010−0.0370.066−0.0050.143−0.0530.014−0.185−0.0980.0980.0770.139−0.027−0.026
Happy 80.0350.1210.1550.272 *0.111−0.1960.021−0.0920.078−0.257 *0.273 *0.0030.1340.0170.116−0.163−0.290 *0.2060.0360.141−0.136−0.079
Happy 9−0.123−0.1120.1130.1680.028−0.099−0.0290.055−0.096−0.1200.1410.0130.0350.0030.025−0.327 **−0.247 *0.1470.0050.058−0.235−0.071
Happy 10−0.194−0.093−0.0040.1340.044−0.065−0.0270.015−0.103−0.0920.108−0.0070.0280.0060.002−0.348 **−0.278 *0.061−0.1030.010−0.319 **−0.177
Happy 11−0.136−0.0890.0150.0410.1250.096−0.0420.105−0.1240.126−0.103−0.0470.1080.019−0.144−0.270 *−0.183−0.016−0.194−0.079−0.115−0.105
Happy 12−0.004−0.0490.0630.1520.0190.0120.0520.079−0.179−0.0650.0780.0340.0520.044−0.074−0.099−0.1700.194−0.062−0.039−0.0800.003
Happy 13−0.1640.009−0.0800.0880.044−0.096−0.138−0.0560.092−0.1270.1270.1590.216−0.2280.163−0.2010.0200.1410.0810.124−0.207−0.228
Happy 14−0.081−0.165−0.0170.023−0.1800.034−0.1270.001−0.0910.0000.0340.0070.037−0.298 *0.106−0.1960.0830.1600.074−0.008−0.220−0.136
Happy 15−0.113−0.0610.283 *0.1410.038−0.108−0.121−0.109−0.084−0.094−0.0090.085−0.022−0.001−0.078−0.268 *−0.2050.1750.007−0.002−0.223−0.042
Neutral 10.1970.239 *−0.085−0.1250.227 *−0.0550.0670.076−0.1020.015−0.2080.1630.1290.034−0.192−0.0260.056−0.396 **−0.201−0.0050.048−0.067
Neutral 20.1590.147−0.040−0.1760.164−0.038−0.021−0.013−0.053−0.020−0.158−0.0800.073−0.066−0.1330.1070.017−0.483 **−0.268 *−0.0190.129−0.086
Neutral 30.1800.069−0.047−0.1650.1650.087−0.011−0.0110.0440.042−0.0530.0020.054−0.006−0.0780.1790.062−0.270 *−0.1430.0240.057−0.132
Neutral 40.070−0.022−0.210−0.1260.0530.0210.0370.0360.057−0.031−0.1270.033−0.0950.126−0.1400.138−0.046−0.335 **−0.208−0.159−0.146−0.295 *
Neutral 50.0700.163−0.236 *−0.1030.036−0.0750.1050.127−0.132−0.097−0.0940.0080.0100.126−0.0400.143−0.001−0.315 **−0.246 *−0.045−0.032−0.063
Neutral 60.075−0.031−0.103−0.041−0.015−0.005−0.0340.164−0.153−0.053−0.118−0.062−0.1290.134−0.251 *0.127−0.078−0.301 *−0.401 **−0.146−0.095−0.192
Neutral 70.038−0.110−0.130−0.091−0.031−0.055−0.0540.029−0.146−0.115−0.205−0.021−0.1540.006−0.2120.057−0.043−0.237 *−0.331 **−0.289 *−0.146−0.224
Neutral 8−0.012−0.130−0.228 *−0.228 *−0.0840.028−0.1090.078−0.1860.023−0.304 *−0.009−0.119−0.017−0.2310.0380.148−0.287 *−0.234−0.272 *−0.133−0.180
Neutral 90.1590.076−0.255 *−0.172−0.068−0.003−0.012−0.0390.0030.009−0.1880.072−0.0660.017−0.1700.1930.169−0.237 *−0.108−0.185−0.022−0.163
Neutral 100.1710.078−0.099−0.138−0.067−0.009−0.0200.0150.014−0.038−0.1690.100−0.0440.052−0.1800.1500.181−0.180−0.064−0.1660.041−0.084
Neutral 110.0530.045−0.112−0.166−0.077−0.017−0.1040.140−0.153−0.081−0.2260.125−0.1860.119−0.1880.1290.118−0.263 *−0.257 *−0.188−0.103−0.122
Neutral 120.0730.094−0.096−0.1430.082−0.077−0.0800.111−0.139−0.103−0.2120.024−0.0970.069−0.2350.0460.071−0.304 *−0.336 **−0.212−0.107−0.169
Neutral 130.1060.112−0.042−0.0080.014−0.146−0.0680.058−0.213−0.121−0.1920.046−0.1710.115−0.261 *0.007−0.094−0.291 *−0.339 **−0.260 *−0.158−0.180
Neutral 140.0300.106−0.157−0.0860.058−0.129−0.0200.082−0.137−0.052−0.1840.069−0.0770.151−0.2010.0230.057−0.273 *−0.297 *−0.199−0.123−0.162
Neutral 150.1450.120−0.264 *−0.116−0.047−0.0470.0360.083−0.108−0.016−0.1260.139−0.0960.076−0.1650.1130.079−0.256 *−0.213−0.175−0.067−0.106
* p < 0.05; ** p < 0.01. N = neuroticism; C = conscientiousness; A = agreeableness; O = openness to experience; E = extraversion; ETH_L = ethical likelihood; ETH_P = ethical perceived; FIN_L = financial likelihood; FIN_P = financial perceived; HEA_L = health likelihood; HEA_P = health perceived; SOC_L = social likelihood; SOC_P = social perceived; REC_L = recreational likelihood; REC_P = recreational perceived; CON = conservation; TRA = transcendence; HAR = harm/care; FAIR = fairness/reciprocity; ING_LOY = in-group loyalty; AUTH = authority/respect; PUR = purity/sanctity.
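A table of this shape can be reproduced, in outline, with a short script. The following is a minimal sketch, assuming the per-video emotion intensities and the trait scores are available as pandas DataFrames; the column names and the toy data below are hypothetical, not the study's actual data.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def star(p):
    # Significance markers matching the table: * p < 0.05, ** p < 0.01
    return "**" if p < 0.01 else "*" if p < 0.05 else ""

def correlation_table(emotions: pd.DataFrame, traits: pd.DataFrame) -> pd.DataFrame:
    """Pearson correlations between per-video emotion intensities and trait scores."""
    out = {}
    for t in traits.columns:
        col = {}
        for e in emotions.columns:
            r, p = pearsonr(emotions[e], traits[t])
            col[e] = f"{r:.3f}{star(p)}"
        out[t] = col
    return pd.DataFrame(out)

# Toy example with random data (hypothetical column names, 85 participants)
rng = np.random.default_rng(0)
emotions = pd.DataFrame(rng.random((85, 2)), columns=["Fearful 7", "Sad 2"])
traits = pd.DataFrame(rng.random((85, 2)), columns=["N", "PUR"])
table = correlation_table(emotions, traits)
```

Run over all 15 videos, the seven FER emotions, and the 22 trait measures, the same loop yields a table of the shape shown above.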
In the following, we provide additional charts showing the SHAP values of the features used for the machine learning predictions (Figures A1–A18), omitting those already presented in the Results section.
Figure A1. Feature importance for predicting transcendence.
Figure A2. Feature importance for predicting fairness/reciprocity.
Figure A3. Feature importance for predicting harm/care.
Figure A4. Feature importance for predicting in-group loyalty.
Figure A5. Feature importance for predicting purity/sanctity.
Figure A6. Feature importance for predicting agreeableness.
Figure A7. Feature importance for predicting extraversion.
Figure A8. Feature importance for predicting neuroticism.
Figure A9. Feature importance for predicting openness.
Figure A10. Feature importance for predicting ethical likelihood.
Figure A11. Feature importance for predicting ethical perceived.
Figure A12. Feature importance for predicting financial likelihood.
Figure A13. Feature importance for predicting financial perceived.
Figure A14. Feature importance for predicting health perceived.
Figure A15. Feature importance for predicting recreational likelihood.
Figure A16. Feature importance for predicting recreational perceived.
Figure A17. Feature importance for predicting social likelihood.
Figure A18. Feature importance for predicting social perceived.


  1. Baron-Cohen, S.; Wheelwright, S.; Hill, J.; Raste, Y.; Plumb, I. The “reading the mind in the eyes” test revised version: A study with normal adults, and adults with Asperger syndrome or high-functioning autism. J. Child Psychol. Psychiatry 2001, 42, 241–251. [Google Scholar] [CrossRef]
  2. Barrett, L.F. How Emotions Are Made: The Secret Life of the Brain; Mariner Books: Boston, MA, USA, 2017. [Google Scholar]
  3. Youyou, W.; Kosinski, M.; Stillwell, D. Computer-based personality judgments are more accurate than those made by humans. Proc. Natl. Acad. Sci. USA 2015, 112, 1036–1040. [Google Scholar] [CrossRef][Green Version]
  4. Hjortsjö, C.-H. Man’s Face and Mimic Language; Studentlitteratur: Lund, Sweden, 1969. [Google Scholar]
  5. Biel, J.-I.; Teijeiro-Mosquera, L.; Gatica-Perez, D. FaceTube. In Proceedings of the 14th ACM International Conference on Multimodal Interaction—ICMI ’12, Santa Monica, CA, USA, 22–26 October 2012; ACM Press: New York, NY, USA, 2012; p. 53. [Google Scholar]
  6. Ko, B. A brief review of facial emotion recognition based on visual information. Sensors 2018, 18, 401. [Google Scholar] [CrossRef]
  7. Rößler, J.; Sun, J.; Gloor, P. Reducing videoconferencing fatigue through facial emotion recognition. Future Internet 2021, 13, 126. [Google Scholar] [CrossRef]
  8. Costa, P.T.; McCrae, R.R. The revised NEO personality inventory (NEO-PI-R). In The SAGE Handbook of Personality Theory and Assessment: Volume 2—Personality Measurement and Testing; SAGE Publications: London, UK, 2008; pp. 179–198. [Google Scholar]
  9. Graham, J.; Haidt, J.; Koleva, S.; Motyl, M.; Iyer, R.; Wojcik, S.P.; Ditto, P.H. Moral foundations theory. Adv. Exp. Soc. Psychol. 2013, 47, 55–130. [Google Scholar] [CrossRef]
  10. Schwartz, S.H.; Bilsky, W. Toward a universal psychological structure of human values. J. Pers. Soc. Psychol. 1987, 53, 550–562. [Google Scholar] [CrossRef]
  11. Blais, A.-R.; Weber, E.U. A domain-specific risk-taking (DOSPERT) scale for adult populations. Judgm. Decis. Mak. 2006, 1, 33–47. [Google Scholar] [CrossRef]
  12. Tangney, J.P. Moral affect: The good, the bad, and the ugly. J. Pers. Soc. Psychol. 1991, 61, 598–607. [Google Scholar] [CrossRef] [PubMed]
  13. Tangney, J.P.; Stuewig, J.; Mashek, D.J. Moral emotions and moral behavior. Annu. Rev. Psychol. 2007, 58, 345–372. [Google Scholar] [CrossRef] [PubMed][Green Version]
  14. Prinz, J. The emotional basis of moral judgments. Philos. Explor. 2006, 9, 29–43. [Google Scholar] [CrossRef]
  15. O’Handley, B.M.; Blair, K.L.; Hoskin, R.A. What do two men kissing and a bucket of maggots have in common? Heterosexual men’s indistinguishable salivary α-amylase responses to photos of two men kissing and disgusting images. Psychol. Sex. 2017, 8, 173–188. [Google Scholar] [CrossRef]
  16. Taylor, K. Disgust is a factor in extreme prejudice. Br. J. Soc. Psychol. 2007, 46, 597–617. [Google Scholar] [CrossRef] [PubMed][Green Version]
  17. Lavater, J.C. Physiognomische Fragmente, zur Beförderung der Menschenkenntniß und Menschenliebe; Weidmann and Reich: Leipzig, Germany, 1775. [Google Scholar]
  18. Galton, F. Composite portraits, made by combining those of many different persons into a single resultant figure. J. Anthropol. Inst. G. B. Irel. 1879, 8, 132–144. [Google Scholar] [CrossRef][Green Version]
  19. Lombroso Ferrero, G. Criminal Man, According to the Classification of Cesare Lombroso; G P Putnam’s Sons: New York, NY, USA, 1911. [Google Scholar]
  20. Alrajih, S.; Ward, J. Increased facial width-to-height ratio and perceived dominance in the faces of the UK’s leading business leaders. Br. J. Psychol. 2014, 105, 153–161. [Google Scholar] [CrossRef] [PubMed]
  21. Haselhuhn, M.P.; Ormiston, M.E.; Wong, E.M. Men’s facial width-to-height ratio predicts aggression: A meta-analysis. PLoS ONE 2015, 10, e0122637. [Google Scholar] [CrossRef] [PubMed]
  22. Loehr, J.; O’Hara, R.B. Facial morphology predicts male fitness and rank but not survival in Second World War Finnish soldiers. Biol. Lett. 2013, 9, 20130049. [Google Scholar] [CrossRef] [PubMed][Green Version]
  23. Yang, Y.; Tang, C.; Qu, X.; Wang, C.; Denson, T.F. Group facial width-to-height ratio predicts intergroup negotiation outcomes. Front. Psychol. 2018, 9, 214. [Google Scholar] [CrossRef] [PubMed][Green Version]
  24. Escalera, S.; Baró, X.; Gonzàlez, J.; Bautista, M.A.; Madadi, M.; Reyes, M.; Ponce-López, V.; Escalante, H.J.; Shotton, J.; Guyon, I. ChaLearn looking at people challenge 2014: Dataset and results. In Computer Vision-ECCV 2014 Workshops; Agapito, L., Bronstein, M., Rother, C., Eds.; Springer: Cham, Switzerland, 2015; pp. 459–473. [Google Scholar]
  25. Ponce-López, V.; Chen, B.; Oliu, M.; Corneanu, C.; Clapés, A.; Guyon, I.; Baró, X.; Escalante, H.J.; Escalera, S. ChaLearn LAP 2016: First round challenge on first impressions-dataset and results. In Computer Vision-ECCV 2016 Workshops; Hua, G., Jégou, H., Eds.; Springer: Cham, Switzerland, 2016; pp. 400–418. [Google Scholar]
  26. Wei, X.-S.; Zhang, C.-L.; Zhang, H.; Wu, J. Deep bimodal regression of apparent personality traits from short video sequences. IEEE Trans. Affect. Comput. 2018, 9, 303–315. [Google Scholar] [CrossRef]
  27. Porcu, S.; Floris, A.; Voigt-Antons, J.-N.; Atzori, L.; Moller, S. Estimation of the quality of experience during video streaming from facial expression and gaze direction. IEEE Trans. Netw. Serv. Manag. 2020, 17, 2702–2716. [Google Scholar] [CrossRef]
  28. Amour, L.; Boulabiar, M.I.; Souihi, S.; Mellouk, A. An improved QoE estimation method based on QoS and affective computing. In Proceedings of the 2018 International Symposium on Programming and Systems (ISPS), Algiers, Algeria, 24–26 April 2018; IEEE: Miami, FL, USA, 2018; pp. 1–6. [Google Scholar]
  29. Bhattacharya, A.; Wu, W.; Yang, Z. Quality of experience evaluation of voice communication: An affect-based approach. Hum.-Centric Comput. Inf. Sci. 2012, 2, 7. [Google Scholar] [CrossRef][Green Version]
  30. Ekman, P. Facial expression and emotion. Am. Psychol. 1993, 48, 384–392. [Google Scholar] [CrossRef]
  31. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE: Miami, FL, USA, 2016; pp. 770–778. [Google Scholar]
  32. Ekman, P.; Friesen, W.V. Constants across cultures in the face and emotion. J. Pers. Soc. Psychol. 1971, 17, 124–129. [Google Scholar] [CrossRef] [PubMed][Green Version]
  33. Schwartz, S.H. Universals in the content and structure of values: Theoretical advances and empirical tests in 20 countries. Adv. Exp. Soc. Psychol. 1992, 25, 1–65. [Google Scholar] [CrossRef]
  34. Davidov, E.; Schmidt, P.; Schwartz, S.H. Bringing values back in: The adequacy of the European social survey to measure values in 20 countries. Public Opin. Q. 2008, 72, 420–445. [Google Scholar] [CrossRef][Green Version]
  35. Chen, T.; Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; ACM: New York, NY, USA, 2016; pp. 785–794. [Google Scholar]
  36. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357. [Google Scholar] [CrossRef]
  37. He, H.; Bai, Y.; Garcia, E.A.; Li, S. ADASYN: Adaptive synthetic sampling approach for imbalanced learning. In Proceedings of the 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–8 June 2008; IEEE: Miami, FL, USA, 2008; pp. 1322–1328. [Google Scholar]
  38. Lundberg, S.M.; Erion, G.G.; Lee, S.I. Consistent Feature Attribution for Tree Ensembles. 2019. Available online: (accessed on 21 December 2021).
  39. Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. In Proceedings of the 31st Conference on Neural Information Processing System, Long Beach, CA, USA, 4–9 December 2017; pp. 1–10. [Google Scholar]
Figure 1. Setup of our system with video website and four online surveys.
Figure 2. Alluvial diagrams illustrating the significant relationships between videos, emotions, and personality (top), and between emotions and individual traits (bottom).
Figure 3. Feature importance for predicting conservation.
Figure 4. Feature importance for predicting authority/respect.
Figure 5. Feature importance for predicting conscientiousness.
Figure 6. Feature importance for predicting health likelihood.
Table 1. List of 15 movie snippets.
Video Number | Short Description
1 | puppies—cute puppies running
2 | avocado—a toddler holding an avocado
3 | condom ad—child throwing a tantrum in a supermarket
4 | runner—competitive runners supporting a girl from another team over the finish line
5 | maggot—a guy eating a maggot
6 | soldier—soldiers at battle
7 | Trump—Donald Trump talking about the Mexican mass migration
8 | mountain bike—mountain biker on a daring ride down a rock bridge
9 | roof bike—guy biking on top of a skyscraper
10 | roof run—guy balancing and almost falling on top of a skyscraper
11 | racoon—man beating a racoon to death
12 | abandoned—social worker feeding a starved abandoned black toddler
13 | waste—residents collecting electronic waste in the slums of Accra
14 | dog—sad dog on the gravestone of his master, missing him
15 | monster—man discovering an invisible monster through the picture on his instant camera
Table 2. Descriptive statistics of individual traits.
Trait | Mean | SD | Min | Max
Openness to experience | 0.61 | 0.06 | 0.48 | 0.78
In-group loyalty | 16.54 | 4.30 | 6 | 25
ETH_L = ethical likelihood; ETH_P = ethical perceived; FIN_L = financial likelihood; FIN_P = financial perceived; HEA_L = health likelihood; HEA_P = health perceived; SOC_L = social likelihood; SOC_P = social perceived; REC_L = recreational likelihood; REC_P = recreational perceived.
Table 3. Regression models for the Big Five personality traits.
Predictor/Dependent | Neuroticism | Extraversion | Openness to Experience | Agreeableness | Conscientiousness
Angry 2 | 4.922 ***
Angry 7 | 0.665 *
Disgusted 4 | 0.297 *
Disgusted 11 | 0.547 **
Happy 1 | −0.067 **
Happy 8 | 0.070 * | 0.149 ***
Happy 9 | −0.104 **
Happy 13 | −0.398 *
Happy 15 | 0.226 **
Neutral 1 | 0.038 *
Neutral 10 | 0.062 *
Surprised 7 | −2.735 **
Surprised 9 | 0.499 **
Surprised 11 | 0.352 *
Surprised 14 | −3.049 **
Constant | 0.518 *** | 0.659 *** | 0.582 *** | 0.591 *** | 0.700 ***
Adjusted R2 | 0.184 | 0.263 | 0.257 | 0.131 | 0.198
* p < 0.05; ** p < 0.01; *** p < 0.001.
Table 4. Regression models for the DOSPERT scale values.
Predictor/Dependent | ETH_L | ETH_P | FIN_L | FIN_P | HEA_L | HEA_P | SOC_L | SOC_P | REC_L | REC_P
Angry 1 | −70.349 ***
Angry 4 | 22.442 **
Angry 5 | 35.700 ***
Disgusted 2 | −59.387 *
Disgusted 15 | 62.642 **
Fearful 1 | −146.951 *
Fearful 3 | −119.941 * | −103.950 *
Fearful 4 | 125.975 ***
Fearful 7 | 71.613 *** | 67.482 ** | 101.615 ***
Fearful 8 | −150.002 *
Fearful 9 | 20.145 *** | 32.366 *** | 13.496 * | 11.481 *
Fearful 10 | 105.252 ***
Fearful 13 | 180.138 **
Happy 8 | 1.301 **
Happy 12 | 12.194 *
Happy 14 | −5.893 **
Neutral 5 | 1.032 *
Neutral 6 | −1.107 **
Sad 2 | 3.487 *
Sad 5 | −3.118 ***
Sad 15 | −1.014 *
Surprised 5 | −19.917 **
Surprised 6 | −16.358 **
Constant | 2.308 *** | 4.022 *** | 3.721 *** | 4.696 *** | 3.333 *** | 4.682 *** | 5.615 *** | 2.613 *** | 4.211 *** | 4.692 ***
Adjusted R2 | 0.370 | 0.318 | 0.389 | 0.190 | 0.188 | 0.276 | 0.245 | 0.090 | 0.177 | 0.273
* p < 0.05; ** p < 0.01; *** p < 0.001. ETH_L = ethical likelihood; ETH_P = ethical perceived; FIN_L = financial likelihood; FIN_P = financial perceived; HEA_L = health likelihood; HEA_P = health perceived; SOC_L = social likelihood; SOC_P = social perceived; REC_L = recreational likelihood; REC_P = recreational perceived.
Table 5. Regression models for conservation and transcendence.
Predictor/Dependent | Conservation | Transcendence
Happy 4 | 1.078 **
Happy 5 | −0.780 *
Happy 8 | −1.527 ***
Happy 10 | −0.829 **
Fearful 14 | −13.963 * | −15.090 **
Surprised 2 | 26.934 *
Surprised 14 | 44.428 ***
Constant | 0.952 *** | −1.232 ***
Adjusted R2 | 0.341 | 0.280
* p < 0.05; ** p < 0.01; *** p < 0.001.
Table 6. Regression models for the Haidt moral values.
Predictor/Dependent | Harm/Care | Fairness/Reciprocity | In-Group Loyalty | Authority/Respect | Purity/Sanctity
Angry 4 | 71.456 *
Happy 3 | −4.152 * | −3.442 *
Happy 10 | −5.078 **
Neutral 2 | −7.783 ***
Neutral 6 | −7.897 ***
Neutral 7 | −5.329 **
Neutral 10 | 3.391 *
Sad 2 | 16.123 **
Surprised 14 | 143.987 * | 179.091 **
Constant | 28.435 *** | 25.585 *** | 21.432 *** | 13.294 *** | 10.433 ***
Adjusted R2 | 0.261 | 0.193 | 0.115 | 0.197 | 0.200
* p < 0.05; ** p < 0.01; *** p < 0.001.
Table 7. Accuracy of XGBoost models.
Variable | Average Accuracy | Cohen’s Kappa
In-group loyalty | 80.0% | 0.66
Openness to experience | 72.2% | 0.58
Ethical likelihood | 78.6% | 0.65
Ethical perceived | 78.6% | 0.68
Financial likelihood | 84.6% | 0.77
Financial perceived | 78.6% | 0.68
Health likelihood | 84.6% | 0.75
Health perceived | 60.0% | 0.38
Recreational likelihood | 71.4% | 0.59
Recreational perceived | 86.7% | 0.80
Social likelihood | 76.9% | 0.63
Social perceived | 71.4% | 0.57
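The two columns of Table 7 correspond to standard scikit-learn metrics. A minimal sketch, using made-up labels rather than the study's predictions:

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical true and predicted class labels for one held-out fold
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]

acc = accuracy_score(y_true, y_pred)       # fraction of correct predictions
kappa = cohen_kappa_score(y_true, y_pred)  # agreement corrected for chance

print(f"accuracy = {acc:.1%}, kappa = {kappa:.2f}")  # prints: accuracy = 80.0%, kappa = 0.60
```

A kappa value well above zero alongside each accuracy indicates that the accuracies are not merely an artifact of class imbalance.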
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Gloor, P.A.; Fronzetti Colladon, A.; Altuntas, E.; Cetinkaya, C.; Kaiser, M.F.; Ripperger, L.; Schaefer, T. Your Face Mirrors Your Deepest Beliefs—Predicting Personality and Morals through Facial Emotion Recognition. Future Internet 2022, 14, 5.
