Article

Facial Affect Recognition in Depression Using Human Avatars

by Marta Monferrer 1, Arturo S. García 2,3, Jorge J. Ricarte 4, María J. Montes 1, Patricia Fernández-Sotos 1,5 and Antonio Fernández-Caballero 2,3,5,*

1 Servicio de Salud Mental, Complejo Hospitalario Universitario de Albacete, Servicio de Salud de Castilla-La Mancha, 02004 Albacete, Spain
2 Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 02071 Albacete, Spain
3 Neurocognition and Emotion Unit, Instituto de Investigación en Informática, 02071 Albacete, Spain
4 Departamento de Psicología, Universidad de Castilla-La Mancha, 02071 Albacete, Spain
5 CIBERSAM-ISCIII (Biomedical Research Networking Center in Mental Health, Instituto de Salud Carlos III), 28016 Madrid, Spain
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(3), 1609; https://doi.org/10.3390/app13031609
Submission received: 6 December 2022 / Revised: 13 January 2023 / Accepted: 26 January 2023 / Published: 27 January 2023
(This article belongs to the Special Issue Virtual Reality Technology and Applications)

Abstract

This research assesses facial emotion recognition in depressed patients using a novel collection of dynamic virtual faces (DVFs). The sample comprised 54 stable depressed patients and 54 healthy controls. The experiment entailed a non-immersive virtual reality task of recognizing emotions with DVFs representing the six basic emotions. Depressed patients exhibited a deficit in facial affect recognition in comparison to healthy controls. The average recognition score was 88.19% for the healthy controls and 75.17% for the depression group. Gender and educational level showed no influence on the recognition rates of depressed patients. As for age, the worst results were found in the oldest patients: the average recognition rate was 84.19% for the younger group, 78.63% for the middle-aged group, and 61.97% for the older group, with average reaction times of 4.00 s, 4.07 s, and 6.04 s, respectively.

1. Introduction

Depressive disorders have become a common worldwide cause of disability [1], affecting more than 300 million individuals; a prevalence of about 20 percent at least once in a lifetime has been reported in the general population [2]. Recent studies have reported on the mental health of populations hit hard by the coronavirus disease 2019 (COVID-19) crisis across the world, mainly in terms of depression, anxiety, and stress [3,4]. A recently conducted meta-analysis of 14 studies, with a total sample size of 44,531 people, found a prevalence of 33.7% for depression, making evident the vulnerability of the population to this mental disease [5].
This study deals with emotional processing, which is described as the skills of identifying, facilitating, regulating, understanding, and managing emotions [6,7]. Emotional processing is further divided into three domains: emotional understanding and emotional management, both classified as higher-level perceptual processes, and emotional facial recognition, which is categorized at a lower level [8]. Emotional facial recognition is described as the identification and categorization of emotional states through facial expressions and/or non-facial cues, such as voice [9]. This may be considered the initial and most elementary stage of the entire process involved in social cognition.
In clinical practice, findings of deficits in emotion recognition in patients with depression are considered of great relevance, since they provide not just descriptive data but also a promising pathway to rehabilitation for clinicians. In this line, a recent systematic review by our research team of interventions aimed at improving psychosocial functioning in patients with major depressive disorder (MDD) showed positive results for emotional processing and attributional style training in both behavioral and neural studies [10]. Meta-analytic and review studies in this field [11,12,13] conclude that interpretation biases mediate performance on emotion recognition in depressed patients; they also highlight the heterogeneity of the methodology (samples, emotion recognition tasks, and statistical analyses) as an important factor that makes it difficult to obtain valid and reliable conclusions.
The increased focus on social cognition and, specifically, on facial emotion recognition is driving researchers towards the development of diverse assessment frameworks. In recent years, virtual reality (VR) has gained prominence through the recreation of controllable situations in near-real environments [14,15,16,17]. This has allowed for new computerized interventions and assessment tools targeting mental disorders that are more accessible and have higher ecological validity. VR has therefore become an attractive tool for clinical practice [18,19,20,21,22,23,24]. Interestingly, the most recent meta-analytic review on facial emotion recognition in MDD [11] did not describe any studies conducted with the help of VR. To our knowledge, this is the first study to compare facial emotion recognition between people with MDD and healthy populations using VR technology.
The contribution of VR to research on social cognition has mainly been the creation of dynamic avatars that represent several emotional states, allowing real-time social interaction with the participant and the evaluation of affective processing [25]. Avatar faces are typically constructed based on the standardized Facial Action Coding System (FACS) [26], which makes it possible to build normative data for cataloging facial movements by encoding muscle contractions as units of measure called action units (AUs) [27].
Our research team, comprising multidisciplinary experts, is in the process of developing a novel intervention aimed at enhancing facial emotion recognition, referred to as “AFRONTA: Affect Recognition Through Avatars”. Prior to implementing the therapy, various methodological steps have been undertaken. The initial step entailed the representation of the six basic emotions through a new set of dynamic virtual faces (DVFs), which were constructed based on the action units of the Facial Action Coding System (FACS) [28]. Subsequently, these DVFs were evaluated by a sample of 204 healthy individuals, yielding the conclusion that the DVFs accurately recreated human facial expressions of emotions [29]. The current study analyzes facial emotion recognition scores obtained with DVFs in a sample of 54 stable patients diagnosed with MDD and compares these scores with those of healthy controls from the same demographic area, with the purpose of discovering specific deficits in order to develop better targeted interventions. To this end, the following was hypothesized:
  • Hypothesis 1 (H1). Individuals diagnosed with MDD will demonstrate a diminished ability to recognize emotions, as well as longer reaction times compared to healthy controls.
  • Hypothesis 2 (H2). Both the MDD and control groups will display greater precision in recognizing more dynamic DVFs compared to less dynamic ones, resulting in a higher number of successful identifications.
  • Hypothesis 3 (H3). Both groups will exhibit greater accuracy in recognizing DVFs presented in a frontal view in comparison to those presented in profile views, resulting in a higher number of successful identifications.
  • Hypothesis 4 (H4). For the depression group, differences in age will be observed, with younger participants performing better. No differences will be found in terms of gender or educational level.

2. Materials and Methods

2.1. Design of Dynamic Virtual Humans

The design of the present study was founded on the Facial Action Coding System (FACS) [30]. This system was selected for its widespread utilization and its suitability, as demonstrated through a psychometric evaluation of spontaneous expressions [31]. In addition, it provides more comprehensive information than other systems in terms of facial adjustments. Specifically, FACS categorizes facial movements on the basis of action units (AUs) and outlines methods for perceiving and scoring them. Each AU represents a group of muscles that work together to produce a change in facial appearance. The AUs are grouped according to the location of the facial muscles involved, which are separated into upper and lower facial muscles. The upper facial muscles control the eyebrows, the forehead, the eye cover fold, and the upper and lower eyelids. The lower face comprises the muscles surrounding the mouth and lips and is divided into different classifications based on the directions of movement of the muscles. Additional AUs exist for the muscles that move the neck and for the direction of gaze.
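To make the AU grouping concrete, the sketch below organizes the six basic emotions as AU sets in code. The specific combinations shown are the commonly cited EMFACS-style prototypes, included here for illustration only; the exact sets used to build the DVFs are those described in [28].

```python
# A minimal sketch, not the authors' implementation: organizing FACS action
# units (AUs) per basic emotion. The AU sets below are the commonly cited
# EMFACS-style prototypes, an assumption for illustration.
BASIC_EMOTION_AUS: dict[str, list[int]] = {
    "happiness": [6, 12],                  # cheek raiser, lip corner puller
    "sadness":   [1, 4, 15],               # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  [1, 2, 5, 26],            # brow raisers, upper lid raiser, jaw drop
    "fear":      [1, 2, 4, 5, 7, 20, 26],  # brow/lid action plus lip stretcher and jaw drop
    "anger":     [4, 5, 7, 23],            # brow lowerer, lid raiser/tightener, lip tightener
    "disgust":   [9, 15, 16],              # nose wrinkler, lip corner/lower lip depressors
}

def upper_face(aus: list[int]) -> list[int]:
    """Split off the upper-face AUs (brows, forehead, eyelids: roughly AU 1-7)."""
    return [au for au in aus if au <= 7]
```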
In addition, our approach combines several well-known software packages used in game development to refine the design of the DVFs. The workflow is shown in Figure 1 and described next.
The process started with the selection and customization of two predefined characters from Adobe Fuse CC, a software tool primarily intended for game developers. The characters obtained through this tool feature a wide range of visual characteristics commonly found in contemporary high-end video games and are fully customizable from facial features to the length of the limbs, torso, and clothing. These characters were then exported to Mixamo, an online platform that enables automated rigging of 3D humanoid models. Rigging, a technique utilized in skeletal animation, is the process of creating a digital bone structure for a 3D model, which allows the model to be manipulated as a puppet for animation. Once rigged, the 3D characters were imported into 3D Studio Max to generate the Action Units (AUs) starting from a neutral facial expression. The blend shapes technique, also known as morph animation targets, was employed to alter the mesh and store the vertex positions for each AU.
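The blend shapes step can be illustrated with a short sketch: each AU stores the displaced vertex positions, and an animation frame interpolates between the neutral mesh and those targets. The arrays and the AU chosen below are toy assumptions, not the authors' mesh data.

```python
import numpy as np

# A minimal sketch of morph-target (blend shape) animation: every AU stores
# the vertex positions of the fully activated expression, and a frame adds
# each AU's displacement, scaled by its weight, to the neutral mesh.
def blend(neutral: np.ndarray, targets: dict[int, np.ndarray],
          weights: dict[int, float]) -> np.ndarray:
    """neutral, targets[au]: (V, 3) vertex arrays; weights[au] in [0, 1]."""
    mesh = neutral.copy()
    for au, w in weights.items():
        mesh += w * (targets[au] - neutral)  # accumulate AU displacements
    return mesh

# Ramp AU12 (lip corner puller) from 0% to 100% across a 2 s presentation.
neutral = np.zeros((4, 3))                 # toy 4-vertex "mesh"
targets = {12: np.ones((4, 3))}            # stored positions for full AU12
frames = [blend(neutral, targets, {12: t / 2.0}) for t in (0.0, 1.0, 2.0)]
```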
Once all AUs had been incorporated into the virtual human models, these were exported to Unity 3D, the real-time engine employed to reproduce the animations. The facial expressions were further augmented with wrinkles, achieved by generating a surface shader that uses normal map textures. Normal maps simulate nuances in the surface of an object by altering the vertex normal, which impacts the calculation of light on the surface. This surface shader described the entire visual appearance of the virtual human’s face, utilizing textures for aspects such as skin color, normal mapping, reflections (specularity), and ambient occlusion. The standard lighting model was employed, and shadows were enabled for all light types. A total of seven different normal maps were utilized per virtual human: one for each facial emotion, in addition to the neutral expression.
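Why a normal map produces wrinkles without changing geometry can be seen from the Lambertian diffuse term: the per-pixel normal is perturbed, which changes how much light the pixel receives. The snippet below is a generic illustration of that calculation, not the actual Unity surface shader.

```python
import numpy as np

# A minimal sketch of normal mapping: the diffuse (Lambertian) term depends
# on the angle between the surface normal and the light direction, so a
# normal perturbed by a wrinkle texture shades differently even though the
# underlying mesh is unchanged. All values here are illustrative only.
def lambert(normal: np.ndarray, light_dir: np.ndarray, albedo: float) -> float:
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * max(float(n @ l), 0.0)   # clamp light from behind

light = np.array([0.0, 0.5, 1.0])
smooth_skin = lambert(np.array([0.0, 0.0, 1.0]), light, albedo=0.8)   # ~0.72
wrinkle = lambert(np.array([0.6, -0.3, 0.75]), light, albedo=0.8)     # ~0.43
print(smooth_skin, wrinkle)  # the perturbed normal darkens the pixel
```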
Finally, most of the AU-related parameters were fine-tuned through manual adjustments. The virtual humans were initially designed from scratch by two engineers, who presented a preliminary version. Then, another engineer and the psychiatrists discussed the similarity of the virtual humans’ emotions to those of actual humans. The final version of the avatars was achieved through a process of iterative refinement.

2.2. Participants

The 6-month recruitment of patients with MDD and healthy controls (June to November 2021) was performed at the Mental Health Service of the Albacete University Hospital Complex, which serves about 300,000 inhabitants. All procedures that contributed to this work complied with the ethical standards of the relevant national and institutional committees on human experimentation and with the 1975 Declaration of Helsinki, as revised in 2008.
Table 1 lists the sociodemographic details of the patients and the healthy controls. The sample size was established at 108 participants: 54 stable patients with an MDD diagnosis and 54 healthy controls from the same demographic area. A patient was deemed stable when on antidepressant treatment with no hospital admission, no changes in treatment, and no significant psychopathological changes during at least the three months preceding inclusion. The sample size was determined by the number of stable patients available for inclusion during the study period. The participants were divided into three age groups (20–39, 40–59, and 60–79 years) and three educational levels (basic, medium, and high), as in a previous work [29]. As Table 1 shows, the gender and educational level distributions of the two cohorts were identical, since each patient was matched to a control with the same characteristics; as a result, the age distributions of the two samples were also very similar.
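A minimal sketch of this one-to-one matching scheme is given below, with hypothetical data structures; the actual recruitment was a clinical procedure, not a script.

```python
from dataclasses import dataclass

# A minimal sketch, with hypothetical data structures, of the one-to-one
# matching described above: each patient is paired with a control of the
# same gender and educational level, so those distributions are identical
# by construction and the age distributions end up very similar.
@dataclass
class Person:
    pid: int
    gender: str     # "female" or "male"
    education: str  # "basic", "medium", or "high"

def match_controls(patients: list[Person], pool: list[Person]) -> dict[int, Person]:
    available = list(pool)
    pairs: dict[int, Person] = {}
    for patient in patients:
        for control in available:
            if (control.gender, control.education) == (patient.gender, patient.education):
                pairs[patient.pid] = control   # pair patient with this control
                available.remove(control)      # each control is used only once
                break
    return pairs
```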
The inclusion criteria for the MDD group were (a) diagnosis of depressive disorder as assessed by the Structured Clinical Interview for DSM-5 (SCID) [32], (b) clinical stabilization at least 3 months prior to SCID, (c) outpatient status, (d) age between 20 and 79 years, and (e) fluent use and understanding of Spanish. Exclusion criteria were (a) meeting the diagnostic criteria for another DSM-5 axis I major mental disorder, except nicotine dependence, (b) intellectual disability, and (c) medical pathology that could interfere with facial affect recognition.
Inclusion criteria for healthy controls were (d) and (e) as outlined for the depressed group. Exclusion criteria were (b) and (c) as described for the depressed group, as well as any personal history of mental disease.

2.3. Data Collection

The research team devised a data collection notebook that included sociodemographic and clinical data. Each patient’s inclusion and exclusion criteria were assessed by the referring psychiatrist during a baseline visit. The sociodemographic data collected included the patient’s age, gender, race, marital status, educational level, employment status, and profession. The clinical data comprised personal somatic, toxic, and psychiatric history; current treatment, its duration, and dose; and relevant family history. The healthy controls were recruited in the same sociocultural area of residence and primarily from similar cultural and social status groups. The sociodemographic data collected were consistent with those of the depression group, and the clinical data included personal somatic, toxic, and psychiatric history, as well as pertinent familial antecedents.
Data collection was executed in a single, 30-minute individual session. Prior to the experiment, all participants provided informed consent after this study had been thoroughly explained to them. Upon confirmation that a participant satisfied the inclusion criteria of this study, the facial stimuli were administered. The collected data were safeguarded in dissociated databases, and anonymity was maintained.

2.4. Experimental Procedure

The experiment was conducted in a session lasting approximately 40 to 50 min, which included the administration of the sociodemographic and clinical data scales. A brief tutorial first introduced the participants to the task to be performed. A series of 52 DVFs were then presented to each participant on a 27-inch computer display. Each DVF began from the neutral expression, moved to one basic emotion (happiness, sadness, anger, fear, disgust, or surprise) or stayed neutral, and then returned to the neutral expression. The duration of each presentation was 2 s. Participants were then asked to identify the emotion depicted by choosing one of seven alternatives displayed underneath the DVF.
Of these 52 faces, 50% displayed less dynamism (only the facial features most characteristic of each emotion moved), and 50% displayed more dynamism (movement also included the neck and shoulders). Both types of DVF were shown from three different viewpoints: 50% frontal, 25% right lateral, and 25% left lateral. In addition, other physical appearance characteristics were taken into consideration, resulting in the inclusion of two white avatars in their 30s with distinct eye colors, skin tones, and hair; two black avatars in their 30s; and two older avatars. Of the 52 DVFs presented, 8 depicted black avatars and 8 depicted older avatars. A comprehensive description of the DVFs can be found in the previous study that validated them in 204 healthy individuals [29].
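For illustration, a trial list with these proportions could be assembled as in the sketch below; the emotion-to-trial assignment and avatar identities are simplified assumptions here, with the real stimulus set described in [29].

```python
import random

# A minimal sketch (simplified assumptions, not the authors' stimulus code)
# of assembling the 52-trial list described above: 50% low-dynamism and 50%
# high-dynamism DVFs, with viewpoints split 50% frontal, 25% right lateral,
# and 25% left lateral, each presented for 2 s.
EMOTIONS = ["neutral", "happiness", "sadness", "anger", "fear", "disgust", "surprise"]

def build_trials(n: int = 52, seed: int = 0) -> list[dict]:
    rng = random.Random(seed)
    dynamism = ["low"] * (n // 2) + ["high"] * (n // 2)
    viewpoints = ["frontal"] * (n // 2) + ["right"] * (n // 4) + ["left"] * (n // 4)
    rng.shuffle(dynamism)
    rng.shuffle(viewpoints)
    return [{"emotion": rng.choice(EMOTIONS), "dynamism": d,
             "viewpoint": v, "duration_s": 2.0}
            for d, v in zip(dynamism, viewpoints)]

trials = build_trials()
assert len(trials) == 52
```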
Figure 2 shows three DVF samples. The DVF on the left is a black woman with a neutral expression shown in the frontal view. The avatar in the center is an example of a surprised white man in the left lateral view. The last avatar shows a sad older woman in the right lateral view.

2.5. Statistical Analysis

The statistical analyses were conducted using IBM SPSS Statistics (version 24) and Microsoft Excel. Quantitative variables were described by their mean and standard deviation, and qualitative variables were represented by percentages. The distributions of hits and reaction times did not conform to a normal distribution; thus, non-parametric tests were used for hypothesis testing, with statistical significance defined as p < 0.05.
Comparisons among groups were performed via the Mann–Whitney U test when only two groups were compared and the Kruskal–Wallis test for three or more groups. When the comparison concerned differences in performance within the same group of participants (e.g., DVFs with lower vs. higher dynamism), the Wilcoxon signed-rank test and the Friedman test were employed. Correlations between variables were examined using Spearman’s rank correlation coefficient.
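For readers reproducing this pipeline outside SPSS, the same non-parametric tests are available in scipy.stats. The sketch below runs them on simulated hit rates, since the study's data are available only on request; group sizes follow Table 1.

```python
import numpy as np
from scipy import stats

# A minimal sketch of the non-parametric tests listed above, applied to
# simulated (not real) per-participant hit rates.
rng = np.random.default_rng(0)
mdd  = rng.uniform(0.5, 0.9, 54)   # hypothetical hit rates, MDD group
ctrl = rng.uniform(0.7, 1.0, 54)   # hypothetical hit rates, controls

u_stat, p_u = stats.mannwhitneyu(mdd, ctrl)                 # two independent groups
low, high = rng.uniform(0.5, 0.9, 54), rng.uniform(0.55, 0.95, 54)
w_stat, p_w = stats.wilcoxon(low, high)                     # paired: low vs. high dynamism
front, right, left = (rng.uniform(0.5, 1.0, 54) for _ in range(3))
f_stat, p_f = stats.friedmanchisquare(front, right, left)   # three related conditions
h_stat, p_kw = stats.kruskal(mdd[:9], mdd[9:36], mdd[36:])  # three age groups
age = rng.integers(20, 80, 54)
rho, p_rho = stats.spearmanr(age, mdd)                      # age vs. score correlation
```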

3. Results

3.1. Comparison of Recognition Scores and Reaction Times between Depression and Healthy Groups in Emotion Recognition (H1)

Regarding recognition scores, differences were found between the two groups. The depression group presented a lower rate of emotional recognition than the healthy group, as shown by the Mann–Whitney U test (U = 555.0, p < 0.001). The recognition scores for the healthy controls and patients are summarized in Table 2.
The mean score for the healthy control group was 88.19%, whereas it was 75.17% for the depression group. The most significant disparities were observed in fear, sadness, and disgust, with scores of 48.4%, 63.0%, and 66.9% for the depression group, respectively, and 77.3%, 85.4%, and 85.0% for the healthy control group. In all cases, the scores were higher for the control group. Furthermore, the Mann–Whitney U test was used to compare the reaction times between the clinical and control groups (see Table 3). The depression group showed longer reaction times in emotional recognition compared to healthy controls (U = 207.0, p < 0.001).

3.2. Influence of Dynamism of the DVFs on Emotion Recognition (H2)

Regarding the effect of dynamism on the ability to recognize emotions, different results were found for patients and healthy controls. For the healthy controls, the Wilcoxon signed-rank test showed that the disparities in emotion recognition rates were statistically significant, with greater rates for the most dynamic DVFs (Z = −3.392, p = 0.001). Nevertheless, the patient group did not exhibit greater accuracy in recognizing the most dynamic virtual faces as compared to the less dynamic ones (Z = 1.114, p = 0.265).

3.3. Influence of the Presentation Angle of the DVFs on Emotion Recognition (H3)

Emotion recognition was found to differ depending on the presentation angle of the DVF. According to the Wilcoxon signed-rank test, faces presented from the front were recognized with greater precision than those presented in profile, and these differences were statistically significant for both the MDD group (Z = 2.343, p = 0.019) and the healthy group (Z = 2.221, p = 0.026).

3.4. Influence of Sociodemographic Data on Emotion Recognition for the Depression Group (H4)

3.4.1. Influence of Age

The influence of age on the depression group was studied from two different perspectives: the first analysis treated age as a categorical variable, and the second treated it as a continuous variable.
In the first analysis, age was converted into a categorical variable by grouping the sample into three age categories: young (n = 9), middle-aged (n = 27), and elderly (n = 18). The mean scores for each group were 84.19%, 78.63%, and 61.97%, and the mean reaction times to the DVFs presented were 4.00 s, 4.07 s, and 6.04 s, respectively. The influence of the age group on emotion recognition was determined by the Kruskal–Wallis test, which showed significant differences among the three age groups for recognition scores (χ²(2) = 15.487, p < 0.001) and reaction times (χ²(2) = 14.091, p = 0.001). Regarding recognition scores, significant differences were found between the elderly and young groups (p = 0.002) and between the elderly and middle-aged groups (p = 0.004), with the elderly group obtaining the lowest recognition scores in both cases. The comparison of reaction times showed similar results, with the longest reaction times for the elderly group and significant differences in relation to the young (p = 0.013) and middle-aged (p = 0.002) groups.
The second analysis, of the influence of age as a continuous variable on emotion recognition, was carried out via Spearman’s rank correlation coefficient. This analysis showed an inverse correlation between age and recognition scores (r = −0.628, p < 0.001) and a direct correlation between age and reaction times (r = 0.501, p < 0.001). Therefore, the older the participant, the harder it was to identify emotions and the longer the reaction time.

3.4.2. Influence of Gender

The depression group was composed of 34 women and 20 men. The rate of emotion identification was 84.66% for women and 72.88% for men. As for the reaction times, the mean was 5.00 s for females and 4.23 s for males. The Mann–Whitney U test was used to analyze the influence of gender on emotion recognition and found no significant differences in recognition scores (U = 311.5, p = 0.609) or reaction times (U = 253.0, p = 0.119). This finding suggests that depressed women and men do not differ in recognizing emotions and have similar response times.

3.4.3. Influence of Educational Level

In order to study the influence of educational level on emotion recognition, the depression group was divided into three subgroups: basic (n = 17), medium (n = 21), and high (n = 16) education. The mean score was 69.00% for the basic education group, 73.72% for the medium group, and 79.69% for the high group. Regarding the reaction times, the mean was 5.42 s for the basic group, 4.48 s for the medium group, and 4.27 s for the high group. Despite these differences, the Kruskal–Wallis test revealed no significant effect of educational level on emotional recognition, either for recognition scores (χ²(2) = 3.274, p = 0.195) or for reaction times (χ²(2) = 1.248, p = 0.536).

4. Discussion

The current study focused on the assessment of facial emotion recognition in 54 stable patients with diagnosed major depression as compared to 54 healthy controls matched for age, gender, and educational level. A previously validated tool, using non-immersive virtual reality with DVFs, was administered to conduct the assessment. The following is a discussion of the hypotheses put forward on the basis of the findings obtained.

4.1. Comparison of Recognition Scores and Reaction Times for the Depression and Healthy Groups in Emotion Recognition (H1)

In accordance with the predictions of previous studies [11,33], individuals diagnosed with MDD exhibited a diminished capacity for recognizing emotions in virtual emotional expressions, as well as prolonged reaction times when compared to healthy controls across all emotions assessed. The disparities between the two groups were particularly pronounced for certain negative emotions, namely fear, sadness, and disgust, which were also the emotions least recognized within the MDD group (with recognition rates of 48.4%, 63.0%, and 66.9%, respectively). On the other hand, the most recognized emotions were neutral (90.3%), surprise (89.6%), and joy (84.5%).
Despite the growing interest in recent years in facial recognition of emotions with DVFs, a recent review on the recognition of emotions through VR revealed little research on the subject, a lack of validity of the current tools, and difficulty in finding studies that analyze all basic emotions [34]. Comparing our results with similar previous studies is therefore complicated, which is why we contrast them with results obtained with static stimuli. Viewed this way, the deterioration of facial emotion recognition in depression is a solid finding, consistent with previous studies [33,35,36].
Although the causes of these differences are difficult to establish, some authors have suggested that this deficit may be underpinned by the motor slowness and indecision that appear as symptoms of MDD [32], and by a relationship between emotion recognition and alexithymia, the idea being that difficulties in recognizing and categorizing emotions may be mediated by a prior deficit in identifying and naming even one’s own emotions [37,38].

4.2. Influence of Dynamism of the DVFs on Emotion Recognition (H2)

Regarding the expectation of a greater number of hits with more dynamic DVFs, this hypothesis was only confirmed for the healthy group, as the depression group did not benefit from higher levels of dynamism. Accordingly, a recent study described greater accuracy in recognizing emotions from static stimuli than from dynamic ones in a group of depressed older adults compared with controls [39]. These findings may be explained by the distinct neural networks involved in processing static and dynamic stimuli [40,41] and by the presentation time of the stimuli, since longer presentation times have been found to reduce emotion recognition accuracy [11].

4.3. Influence of the Presentation Angle of the DVFs on Emotion Recognition (H3)

The hypothesis that a higher number of hits would be achieved with the DVFs presented in the frontal view, as opposed to the profile views, was supported by the data for both groups. To the best of our knowledge, this is the first study to present virtual characters in frontal and profile views to individuals diagnosed with depression. A study conducted by our research team on healthy controls also found superior rates of emotional recognition when the avatars were presented in the frontal view as compared to the profile view [28].

4.4. Influence of Sociodemographic Data on Emotion Recognition for the Depression Group (Recognition Scores and Reaction Times) (H4)

Regarding the depression group, gender was not correlated with successful emotional recognition, as expected and as found in previous studies with healthy and depressed samples [42,43]. The same was true of educational level: the absence of correlation confirmed our expectation and previous findings in healthy people [28], and the literature on this issue in depressed samples is scarce.
However, as expected, age proved to be an influential variable for emotion recognition scores and reaction times, with an inverse correlation between age and recognition scores and a direct correlation between age and reaction times. Some authors have found similar results, suggesting as mediators the natural impairments of aging, mainly the decline in processing speed and fluid intelligence, alexithymia, and depression [44,45]. However, this emotional deterioration might also be related to a pre-clinical general cognitive impairment, since depressive disorders increase the risk for dementia [46,47], which in turn entails greater deterioration in emotion recognition [48,49]. Moreover, it is important to point out the lower computer literacy of the elderly and their difficulties in using the computer mouse when choosing answers, which might lead them to select incorrect responses and increase reaction times.

5. Conclusions

As a main conclusion, consistent with previous findings in the literature, we found a deficit in facial emotion recognition in patients diagnosed with MDD compared to matched healthy controls. Depressed patients recognized positive emotions better than negative ones, with lower than expected recognition of sadness, which questions the negative bias of depression. However, considerable variability among previous studies in this field has been noted, and it has been suggested that the negative bias may be chiefly an attentional issue, while an attenuation of the positive bias may also be at play.
In light of the results, dynamism seems to improve emotional recognition only in healthy populations, not in patients with MDD, while the frontal view of the avatars seems to help emotional recognition in both groups as compared to the lateral views. In line with previous studies, differences in emotional recognition were found in favor of younger adults, with no differences in gender or educational level. Another finding of this study is the absence of correlation between depression severity and emotion recognition in our depressed sample, for which we suggest a possible influence of the endogeneity of depression and of pharmacological effects.
This study is among the few to have assessed the recognition of the six basic emotions through DVFs in patients with MDD, which hinders the comparison of the obtained results with prior research. Bearing in mind the ongoing interest in VR, further research is required to improve and validate emotion recognition tools based on DVFs, not only for assessment but also for the implementation of rehabilitation interventions in clinical practice.

Author Contributions

Conceptualization, A.F.-C. and P.F.-S.; methodology, M.M., J.J.R. and M.J.M.; validation, A.S.G.; writing—original draft preparation, M.M., J.J.R. and P.F.-S.; writing—review and editing, A.F.-C.; funding acquisition, A.F.-C. All authors have read and agreed to the published version of the manuscript.

Funding

Grants PID2020-115220RB-C21 and EQC2019-006063-P funded by MCIN/AEI/10.13039/501100011033 and by “ERDF A way to make Europe”. This work was also partially funded by CIBERSAM of the Instituto de Salud Carlos III (ISCIII) and co-funded by “ERDF A way to make Europe”.

Institutional Review Board Statement

This study was approved by the Clinical Research Ethics Committee of the Albacete University Hospital Complex on 26 April 2022 under code number 2020/12/141.

Informed Consent Statement

All participants provided their written informed consent to participate in this study.

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. World Health Organization. Depression. 2021. Available online: https://www.who.int/news-room/fact-sheets/detail/depression (accessed on 30 March 2022).
  2. Ferrari, A.J.; Charlson, F.J.; Norman, R.E.; Patten, S.B.; Freedman, G.; Murray, C.J.; Vos, T.; Whiteford, H.A. Burden of Depressive Disorders by Country, Sex, Age, and Year: Findings from the Global Burden of Disease Study 2010. PLoS Med. 2013, 10, e1001547.
  3. Lakhan, R.; Agrawal, A.; Sharma, M. Prevalence of depression, anxiety, and stress during COVID-19 pandemic. J. Neurosci. Rural Pract. 2020, 11, 519–525.
  4. Li, S.; Wang, Y.; Xue, J.; Zhao, N.; Zhu, T. The Impact of COVID-19 Epidemic Declaration on Psychological Consequences: A Study on Active Weibo Users. Int. J. Environ. Res. Public Health 2020, 17, 2032.
  5. Salari, N.; Hosseinian-Far, A.; Jalali, R.; Vaisi-Raygani, A.; Rasoulpoor, S.; Mohammadi, M.; Rasoulpoor, S.; Khaledi-Paveh, B. Prevalence of stress, anxiety, depression among the general population during the COVID-19 pandemic: A systematic review and meta-analysis. Glob. Health 2020, 16, 57.
  6. Lozano-Monasor, E.; López, M.T.; Fernández-Caballero, A.; Vigo-Bustos, F. Facial Expression Recognition from Webcam Based on Active Shape Models and Support Vector Machines. In Proceedings of the Ambient Assisted Living and Daily Activities; Pecchia, L., Chen, L.L., Nugent, C., Bravo, J., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 147–154.
  7. Mayer, J.D.; Salovey, P.; Caruso, D.R.; Sitarenios, G. Emotional intelligence as a standard intelligence. Emotion 2001, 1, 232–242.
  8. Fernández-Sotos, P.; Torio, I.; Fernández-Caballero, A.; Navarro, E.; González, P.; Dompablo, M.; Rodriguez-Jimenez, R. Social cognition remediation interventions: A systematic mapping review. PLoS ONE 2019, 14, e0218720.
  9. Pinkham, A.E.; Penn, D.L.; Green, M.F.; Buck, B.; Healey, K.; Harvey, P.D. The Social Cognition Psychometric Evaluation Study: Results of the Expert Survey and RAND Panel. Schizophr. Bull. 2013, 40, 813–823.
  10. Monferrer, M.; Ricarte, J.J.; Montes, M.J.; Fernández-Caballero, A.; Fernández-Sotos, P. Psychosocial remediation in depressive disorders: A systematic review. J. Affect. Disord. 2021, 290, 40–51.
  11. Krause, F.C.; Linardatos, E.; Fresco, D.M.; Moore, M.T. Facial emotion recognition in major depressive disorder: A meta-analytic review. J. Affect. Disord. 2021, 293, 320–328.
  12. Everaert, J.; Podina, I.R.; Koster, E.H. A comprehensive meta-analysis of interpretation biases in depression. Clin. Psychol. Rev. 2017, 58, 33–48.
  13. Bourke, C.; Douglas, K.; Porter, R. Processing of Facial Emotion Expression in Major Depression: A Review. Aust. N. Z. J. Psychiatry 2010, 44, 681–696.
  14. Zhou, H.; Fujimoto, Y.; Kanbara, M.; Kato, H. Virtual Reality as a Reflection Technique for Public Speaking Training. Appl. Sci. 2021, 11, 3988.
  15. Heyse, J.; Torres Vega, M.; De Jonge, T.; De Backere, F.; De Turck, F. A Personalised Emotion-Based Model for Relaxation in Virtual Reality. Appl. Sci. 2020, 10, 6124.
  16. Kamińska, D.; Sapiński, T.; Wiak, S.; Tikk, T.; Haamer, R.E.; Avots, E.; Helmi, A.; Ozcinar, C.; Anbarjafari, G. Virtual Reality and Its Applications in Education: Survey. Information 2019, 10, 318.
  17. Maples-Keller, J.L.; Bunnell, B.E.; Kim, S.J.; Rothbaum, B.O. The use of virtual reality technology in the treatment of anxiety and other psychiatric disorders. Harv. Rev. Psychiatry 2017, 25, 103–113.
  18. Pallavicini, F.; Orena, E.; Achille, F.; Cassa, M.; Vuolato, C.; Stefanini, S.; Caragnano, C.; Pepe, A.; Veronese, G.; Ranieri, P.; et al. Psychoeducation on Stress and Anxiety Using Virtual Reality: A Mixed-Methods Study. Appl. Sci. 2022, 12, 9110.
  19. Bolinski, F.; Etzelmüller, A.; De Witte, N.A.; van Beurden, C.; Debard, G.; Bonroy, B.; Cuijpers, P.; Riper, H.; Kleiboer, A. Physiological and self-reported arousal in virtual reality versus face-to-face emotional activation and cognitive restructuring in university students: A crossover experimental study using wearable monitoring. Behav. Res. Ther. 2021, 142, 103877.
  20. Morina, N.; Kampmann, I.; Emmelkamp, P.; Barbui, C.; Hoppen, T.H. Meta-analysis of virtual reality exposure therapy for social anxiety disorder. Psychol. Med. 2021, 1–3.
  21. Zainal, N.H.; Chan, W.W.; Saxena, A.P.; Taylor, C.B.; Newman, M.G. Pilot randomized trial of self-guided virtual reality exposure therapy for social anxiety disorder. Behav. Res. Ther. 2021, 147, 103984.
  22. Fernández-Sotos, P.; Fernández-Caballero, A.; Rodriguez-Jimenez, R. Virtual reality for psychosocial remediation in schizophrenia: A systematic review. Eur. J. Psychiatry 2020, 34, 1–10.
  23. Horigome, T.; Kurokawa, S.; Sawada, K.; Kudo, S.; Shiga, K.; Mimura, M.; Kishimoto, T. Virtual reality exposure therapy for social anxiety disorder: A systematic review and meta-analysis. Psychol. Med. 2020, 50, 2487–2497.
  24. Ioannou, A.; Papastavrou, E.; Avraamides, M.N.; Charalambous, A. Virtual Reality and Symptoms Management of Anxiety, Depression, Fatigue, and Pain: A Systematic Review. SAGE Open Nurs. 2020, 6, 2377960820936163.
  25. Dyck, M.; Winbeck, M.; Leiberg, S.; Chen, Y.; Gur, R.C.; Mathiak, K. Recognition Profile of Emotions in Natural and Virtual Faces. PLoS ONE 2008, 3, e3628.
  26. Ekman, P.; Friesen, W.V. Facial Action Coding System: A Technique for the Measurement of Facial Movement; Consulting Psychologists Press: Mountain View, CA, USA, 1978.
  27. Höfling, T.T.A.; Alpers, G.W.; Büdenbender, B.; Föhl, U.; Gerdes, A.B.M. What’s in a face: Automatic facial coding of untrained study participants compared to standardized inventories. PLoS ONE 2022, 17, e0263863.
  28. García, A.S.; Fernández-Sotos, P.; Vicente-Querol, M.A.; Lahera, G.; Rodriguez-Jimenez, R.; Fernández-Caballero, A. Design of reliable virtual human facial expressions and validation by healthy people. Integr. Comput. Aided Eng. 2020, 27, 287–299.
  29. Fernández-Sotos, P.; García, A.S.; Vicente-Querol, M.A.; Lahera, G.; Rodriguez-Jimenez, R.; Fernández-Caballero, A. Validation of dynamic virtual faces for facial affect recognition. PLoS ONE 2021, 16, e0246001.
  30. Ekman, P.; Friesen, W.V. Unmasking the Face: A Guide to Recognizing Emotions from Facial Clues; Ishk: San Jose, CA, USA, 2003.
  31. Sayette, M.A.; Cohn, J.F.; Wertz, J.M.; Perrott, M.A.; Parrott, D.J. A psychometric evaluation of the facial action coding system for assessing spontaneous expression. J. Nonverbal Behav. 2001, 25, 167–185.
  32. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, 5th ed.; DSM-5; American Psychiatric Association: Arlington, VA, USA, 2013.
  33. Dalili, M.N.; Penton-Voak, I.S.; Harmer, C.J.; Munafò, M.R. Meta-analysis of emotion recognition deficits in major depressive disorder. Psychol. Med. 2015, 45, 1135–1144.
  34. Marín-Morales, J.; Llinares, C.; Guixeres, J.; Alcañiz, M. Emotion Recognition in Immersive Virtual Reality: From Statistics to Affective Computing. Sensors 2020, 20, 5163.
  35. Alders, G.L.; Davis, A.D.; MacQueen, G.; Strother, S.C.; Hassel, S.; Zamyadi, M.; Sharma, G.B.; Arnott, S.R.; Downar, J.; Harris, J.K.; et al. Reduced accuracy accompanied by reduced neural activity during the performance of an emotional conflict task by unmedicated patients with major depression: A CAN-BIND fMRI study. J. Affect. Disord. 2019, 257, 765–773.
  36. Kohler, C.G.; Hoffman, L.J.; Eastman, L.B.; Healey, K.; Moberg, P.J. Facial emotion perception in depression and bipolar disorder: A quantitative review. Psychiatry Res. 2011, 188, 303–309.
  37. Suslow, T.; Günther, V.; Hensch, T.; Kersting, A.; Bodenschatz, C.M. Alexithymia Is Associated With Deficits in Visual Search for Emotional Faces in Clinical Depression. Front. Psychiatry 2021, 12, 668019.
  38. Senior, C.; Hassel, S.; Waheed, A.; Ridout, N. Naming emotions in motion: Alexithymic traits impact the perception of implied motion in facial displays of affect. Emotion 2020, 20, 311–316.
  39. de Lima Bomfim, A.J.; dos Santos Ribeiro, R.A.; Chagas, M.H.N. Recognition of dynamic and static facial expressions of emotion among older adults with major depression. Trends Psychiatry Psychother. 2019, 41, 159–166.
  40. Guo, W.; Yang, H.; Liu, Z.; Xu, Y.; Hu, B. Deep Neural Networks for Depression Recognition Based on 2D and 3D Facial Expressions Under Emotional Stimulus Tasks. Front. Neurosci. 2021, 15, 609760.
  41. Calvo, M.G.; Avero, P.; Fernández-Martín, A.; Recio, G. Recognition thresholds for static and dynamic emotional faces. Emotion 2016, 16, 1186–1200.
  42. Thompson, A.E.; Voyer, D. Sex differences in the ability to recognise non-verbal displays of emotion: A meta-analysis. Cogn. Emot. 2014, 28, 1164–1195.
  43. Wright, S.L.; Langenecker, S.A.; Deldin, P.J.; Rapport, L.J.; Nielson, K.A.; Kade, A.M.; Own, L.S.; Akil, H.; Young, E.A.; Zubieta, J.K. Gender-specific disruptions in emotion processing in younger adults with depression. Depress. Anxiety 2009, 26, 182–189.
  44. Ochi, R.; Midorikawa, A. Decline in Emotional Face Recognition Among Elderly People May Reflect Mild Cognitive Impairment. Front. Psychol. 2021, 12, 664367.
  45. Murphy, J.; Millgate, E.; Geary, H.; Catmur, C.; Bird, G. No effect of age on emotion recognition after accounting for cognitive factors and depression. Q. J. Exp. Psychol. 2019, 72, 2690–2704.
  46. Brzezińska, A.; Bourke, J.; Rivera-Hernández, R.; Tsolaki, M.; Woźniak, J.; Kaźmierski, J. Depression in Dementia or Dementia in Depression? Systematic Review of Studies and Hypotheses. Curr. Alzheimer Res. 2020, 17, 16–28.
  47. Santabárbara, J.; Villagrasa, B.; Gracia-García, P. Does depression increase the risk of dementia? Updated meta-analysis of prospective studies. Actas Españolas Psiquiatr. 2020, 48, 169–180.
  48. Arshad, F.; Paplikar, A.; Mekala, S.; Varghese, F.; Purushothaman, V.V.; Kumar, D.J.; Shingavi, L.; Vengalil, S.; Ramakrishnan, S.; Yadav, R.; et al. Social Cognition Deficits Are Pervasive across Both Classical and Overlap Frontotemporal Dementia Syndromes. Dement. Geriatr. Cogn. Disord. Extra 2020, 10, 115–126.
  49. Torres Mendonça De Melo Fádel, B.; Santos De Carvalho, R.L.; Belfort Almeida Dos Santos, T.T.; Dourado, M.C.N. Facial expression recognition in Alzheimer’s disease: A systematic review. J. Clin. Exp. Neuropsychol. 2019, 41, 192–203.
Figure 1. Workflow of the dynamic virtual humans’ design.
Figure 2. Some DVF samples.
Table 1. Sociodemographic data.

|                         | MDD Group     | Healthy Group |
|-------------------------|---------------|---------------|
| Sample [n]              | 54            | 54            |
| Gender [female:male]    | 34:20         | 34:20         |
| Age [mean (SD)]         | 53.20 (13.63) | 50.54 (13.72) |
| Age [n]                 |               |               |
|   Young (20–39)         | 9             | 10            |
|   Middle-aged (40–59)   | 27            | 27            |
|   Elderly (60–79)       | 18            | 17            |
| Education level [n]     |               |               |
|   Basic                 | 17            | 17            |
|   Medium                | 21            | 21            |
|   High                  | 16            | 16            |
Table 2. Emotion recognition scores for each emotion depicted for the depression group and healthy controls.

| MDD Group | Neutral | Surprise | Fear  | Anger | Disgust | Joy   | Sadness |
|-----------|---------|----------|-------|-------|---------|-------|---------|
| Neutral   | 90.3%   | 2.8%     | 0.9%  | 1.4%  | 0.5%    | 0.5%  | 3.7%    |
| Surprise  | 2.3%    | 89.6%    | 4.4%  | 1.4%  | 0.2%    | 0.9%  | 1.2%    |
| Fear      | 1.2%    | 41.7%    | 48.4% | 2.8%  | 1.9%    | 0.2%  | 3.9%    |
| Anger     | 2.1%    | 5.1%     | 2.5%  | 83.6% | 5.3%    | 0.0%  | 1.4%    |
| Disgust   | 1.2%    | 7.2%     | 4.2%  | 19.9% | 66.9%   | 0.2%  | 0.5%    |
| Joy       | 6.0%    | 4.2%     | 0.9%  | 1.4%  | 2.3%    | 84.5% | 0.7%    |
| Sadness   | 7.9%    | 7.6%     | 8.1%  | 8.1%  | 4.4%    | 0.9%  | 63.0%   |

| Healthy Group | Neutral | Surprise | Fear  | Anger | Disgust | Joy   | Sadness |
|---------------|---------|----------|-------|-------|---------|-------|---------|
| Neutral       | 94.0%   | 0.5%     | 0.5%  | 0.0%  | 0.9%    | 0.0%  | 4.2%    |
| Surprise      | 0.9%    | 90.3%    | 8.3%  | 0.0%  | 0.0%    | 0.0%  | 0.5%    |
| Fear          | 0.9%    | 12.7%    | 77.3% | 0.2%  | 0.7%    | 0.0%  | 8.1%    |
| Anger         | 0.7%    | 1.2%     | 1.9%  | 92.4% | 3.0%    | 0.0%  | 0.9%    |
| Disgust       | 0.2%    | 0.5%     | 0.9%  | 13.2% | 85.0%   | 0.0%  | 0.2%    |
| Joy           | 4.9%    | 0.7%     | 0.2%  | 0.7%  | 0.5%    | 93.1% | 0.0%    |
| Sadness       | 3.0%    | 3.5%     | 5.6%  | 0.9%  | 1.6%    | 0.0%  | 85.4%   |

Rows: displayed emotions. Columns: recognized emotions.
Table 3. Average reaction time and standard deviation (in seconds) per emotion for the depression and healthy groups.

|               | Neutral     | Surprise    | Fear        | Anger       | Disgust     | Joy         | Sadness     |
|---------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|
| MDD group     | 6.25 (3.38) | 3.93 (1.61) | 4.55 (1.63) | 4.59 (2.87) | 4.55 (1.56) | 4.47 (2.97) | 5.41 (2.75) |
| Healthy group | 2.85 (1.24) | 2.73 (1.16) | 2.53 (1.02) | 2.58 (1.14) | 2.46 (1.04) | 2.25 (0.84) | 2.16 (0.77) |

