Article

Design of an Emotional Facial Recognition Task in a 3D Environment

by Gemma Quirantes-Gutierrez 1, Ángeles F. Estévez 2, Gabriel Artés Ordoño 3 and Ginesa López-Crespo 1,*
1 Departamento de Psicología y Sociología, Facultad de Ciencias Sociales y Humanas, Universidad de Zaragoza, 44003 Teruel, Spain
2 Departamento de Psicología, Facultad de Psicología, Universidad de Almería, 04120 Almería, Spain
3 Departamento de Educación, Facultad de Ciencias de la Educación, Universidad de Almería, 04120 Almería, Spain
* Author to whom correspondence should be addressed.
Computers 2025, 14(4), 153; https://doi.org/10.3390/computers14040153
Submission received: 14 March 2025 / Revised: 1 April 2025 / Accepted: 9 April 2025 / Published: 18 April 2025
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))

Abstract:
The recognition of emotional facial expressions is a key skill for social adaptation. Previous studies have shown that clinical and subclinical populations, such as those diagnosed with schizophrenia or autism spectrum disorder, have a significant deficit in the recognition of emotional facial expressions. These studies suggest that this may be the cause of their social dysfunction. Given the importance of this type of recognition in social functioning, the present study aims to design a tool to measure the recognition of emotional facial expressions using Unreal Engine 4 software to develop computer graphics in a 3D environment. Additionally, we tested it in a small pilot study with a sample of 37 university students, aged between 18 and 40, to compare the results with a more classical emotional facial recognition task. We also administered the SEES Scale and a set of custom-formulated questions to both groups to assess potential differences in activation levels between the two modalities (3D environment vs. classical format). The results of this initial pilot study suggest that students who completed the task in the classical format exhibited a greater lack of activation compared to those who completed the task in the 3D environment. Regarding the recognition of emotional facial expressions, both tasks were similar in two of the seven emotions evaluated. We believe that this study represents the beginning of a new line of research that could have important clinical implications.

1. Introduction

Emotion can be defined as any psychological phenomenon caused by events that are positive or negative for the individual [1]. These events trigger a set of physiological reactions, changes in the normal course of thought, and behavioural responses. Therefore, an emotion is a well-structured, dynamic, and multi-component phenomenon activated by a specific event. It is a reaction to an external or internal antecedent that causes changes at the level of the nervous and endocrine systems, the musculoskeletal system, and the facial configuration. To confirm that we are dealing with an emotion, James suggested that the last three must occur simultaneously, as they are considered essential components [2].
Concerning emotions, in 1979, Ekman and Oster emphasized the crucial role that recognition of emotional facial expressions plays in our daily lives. In fact, according to these authors, facial expressions of emotion are the most critical component of nonverbal language, contributing significantly to the communication of our mental and emotional states, as well as those of individuals around us [3]. Given the pivotal role played by accurate recognition of emotional facial expressions, the importance of developing assessment tools for individuals who face difficulties with it becomes clear. Therefore, research in this area is essential to deepen our current understanding of this skill.
The assessment tools employed to measure such processes are mostly laboratory tests following a classical experimental methodology. According to González-Quevedo et al., this approach is deemed inappropriate and irrelevant as it fails to reflect how expressions are perceived in natural contexts [4]. Furthermore, it presents numerous disadvantages in terms of ecological validity. In addition to these issues, there are also challenges related to individual motivation during the execution of the classical laboratory tests. This is often due to the tasks being presented in a highly repetitive and systematic fashion over prolonged periods. Thus, the constant state of attention and activation can lead to fatigue, which, in turn, can have an impact on participants’ cognitive functions and consequently their task performance [5]. It is noteworthy that the sense of fatigue involves the participant’s ability to construct a mental framework resulting from the integration of multiple factors such as performance expectations (a prediction based on acquired memories about the muscular strength or power that one possesses and could demonstrate in a specific circumstance), the level of activation and arousal, motivation, and mood. These elements imply that, even at the same level of objective fatigue, individuals may experience diverse sensations [6].
At the same time, however, basic experimentation is crucial for the formulation of hypotheses and theories, as stated also by González-Quevedo et al. [4]. Similarly, Smith [7] asserts that laboratory-based experimentation allows for a certain degree of control over highly complex phenomena. Researchers can manage exposure to independent variables, and randomize and control the range in which these independent variables vary, significantly increasing internal validity. Therefore, it is necessary to develop assessment tools that are realistic and both sensitive and motivating for the population concerned. These tools should function similarly regardless of the individual’s cultural environment, but with the same degree of rigour and experimental control as a classical laboratory task [5,7].
The use of Immersive Virtual Environments (IVEs) can help alleviate the issue of low psychological realism observed in experimental contexts by enabling the presentation of various stimuli and environments through a computer in a more realistic manner [8,9]. This methodology has grown in recent years as a tool that allows us to assess behaviours, obtaining increasingly precise measurements [10].
In line with this idea, we have seen an expansion of the use of video games beyond mere entertainment to include educational, therapeutic, or research purposes [11]. Technological development has a potent effect on phenomena such as attention, memory, and aesthetics. Tasks and applications in game format enhance the user experience and facilitate the task, as they are strongly associated with entertainment and enjoyment, and can even stimulate thinking and affectivity [1]. However, these formats need to be diverse and varied enough to avoid player boredom [12].
In this regard, it is known that the use of computerized games improves performance in various cognitive tasks while enhancing the affective and motivational states of users, as shown in a systematic review [13]. Thus, contemporary research shows a growing trend towards using video game or application formats for the measurement of psychological aspects of laboratory tasks, as opposed to traditional laboratory tasks. For instance, in a study conducted by Tong et al. [14], a Go/No-Go task was implemented in a 3D environment or a Serious game to assess response inhibition. This study successfully demonstrated that the 3D environment task measures response inhibition capabilities in a manner very similar to the traditional format of the test. After completing the task, participants were asked to evaluate its utility, and it received a rating of 8.13 out of 10. Furthermore, 91% of participants indicated that they would recommend engaging in the task. The authors concluded by highlighting the potential benefits of analyzing the combined effect of a serious game with other traditional methods of intervention in psychology to assess its utility and effectiveness [14]. Another study used the Unreal Engine game development engine to measure cognitive functions that play a pivotal role in the diagnosis and treatment of mental disorders such as schizophrenia—that is, spatial learning, mental flexibility, and working memory. Using a test suitable for clinical research, the authors created a virtual counterpart of the spatial animal task known as the Morris water maze (MWM). Analysis of data from 30 schizophrenia patients revealed cognitive deficits in the newly devised virtual task when compared to a control group of healthy volunteers, suggesting its potential utility as a diagnostic or therapeutic tool for cognitive function [15].
Regarding the use of video games for evaluation and treatment purposes in the field of social cognition, previous studies also suggest their potential. For example, for evaluation reasons, the study by Banos [16] introduced EmoAnim, a serious game designed to screen emotion recognition abilities in children, showing promising results in distinguishing between typically developing children and those with autism. The game successfully highlighted key differences in emotional understanding, demonstrating its effectiveness in identifying areas where children with autism might struggle, thus validating its potential as both an assessment and intervention tool for social cognition [16]. Another study aimed to investigate the effects of computerized interventions on emotional understanding in children with autism spectrum disorder (ASD), highlighting that, following the application of the programme, the studied abilities improved, and these improvements were attributed to the use of the computerized intervention. Additionally, statistically significant differences were observed between the two groups in the locomotor, personal–social, language, performance, and practical reasoning subscales [17]. Similarly, a computerized pilot training programme was designed for children with ASD exhibiting low scores in social perception. The intervention included a combination of computer-based social perception training exercises and a three-week trial using Remote Microphone Hearing Systems (RMHS) to provide an enhanced auditory experience. The results showed significant improvement in the children’s ability to perceive social cues, suggesting that this integrated approach can effectively enhance social perception in children with ASD, particularly in environments with auditory challenges [18].

Objectives and Hypotheses

Given that tasks in a 3D serious game format appear to be useful at both applied and research levels, the main goal of the current study is to design a test in a serious game or 3D environment format to assess the recognition of emotional facial expressions. To achieve this, we based it on tasks traditionally used to study this ability, such as the Matching to Sample (MTS) task.
As a second objective, we aim to conduct a small pilot study to verify whether this task measures the recognition of emotional facial expressions similarly to a traditional laboratory task, such as the one used by González-Rodríguez et al. [19]. We anticipate that this new video game format task will create a more engaging context for participants, preventing fatigue and being more enjoyable than a conventional task. We also expect it to measure emotional facial expression recognition in a similar manner to the traditional task [19]. Therefore, we anticipate that performance in the 3D video game task will yield very similar results to performing the same task in a traditional format (Hypothesis 1). Additionally, we expect participants to express higher levels of psychological well-being, lower psychological distress or lack of activation, and a reduction in perceived fatigue compared to the conventional task (Hypothesis 2).

2. Method

2.1. Participants

The pilot study sample comprised 37 young adult university students from a Spanish university, aged between 18 and 40 (females: N = 21, M = 21.2, SD = 4.55; males: N = 16, M = 25.8, SD = 7.49). Of these, 19 participants (10 males and 9 females) were randomly assigned to the experimental group, performing the task in a 3D environment. The control group, which performed the task in a traditional format, consisted of 6 males and 12 females. Participants were selected through incidental sampling (contacting psychology students who received course credits for their participation) and snowball sampling (offering an extra credit for each external participant they invited). All participants had normal or corrected-to-normal vision. Informed consent was obtained from all participants before the study started, and the research followed the ethical guidelines outlined in the current bioethics regulations, including the Code of Good Research Practices of the University of Almería. This study was conducted in full compliance with the Declaration of Helsinki and received ethical approval from the Bioethics Committee of the University of Almería (Ref: UALBIO2020/039).

2.2. Stimulus and Materials

2.2.1. Stimulus

The stimuli employed in both the classical task and the 3D task consisted of facial images obtained from the Radboud Faces Database [20]. The models were selected based on inter-judge criteria, with two researchers participating in the selection process. Faces were chosen based on presenting a similar expression for each emotion. Specifically, this led to using the six basic universal emotions proposed by Ekman [21]: happiness, sadness, surprise, disgust, anger, and fear, in addition to a neutral expression, each represented by 10 different actors (five women and five men of Caucasian descent). This resulted in a total of 70 images from the original image set [21].
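As an illustration, the full 70-image stimulus set (seven expressions, each portrayed by ten actors) can be enumerated as a simple cross-product. The identifiers below are placeholders for illustration only, not the actual Radboud Faces Database file names:

```python
from itertools import product

# Six basic emotions (Ekman) plus a neutral expression, each portrayed by
# 10 actors (five women, five men). Actor identifiers are hypothetical.
EMOTIONS = ["happiness", "sadness", "surprise", "disgust", "anger", "fear", "neutral"]
ACTORS = [f"actor_{i:02d}" for i in range(1, 11)]

# One stimulus record per (actor, expression) combination: 10 x 7 = 70 images.
STIMULI = [{"actor": a, "emotion": e} for a, e in product(ACTORS, EMOTIONS)]
```

Enumerating the set this way makes the 70-trial structure of both tasks explicit: each trial draws one record from this list.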

2.2.2. SEES Scale (McAuley and Courneya, 1994)

This scale is a multidimensional measure that assesses subjective global psychological responses to the stimulating properties of physical exercise. It consists of 12 items (4 per factor) covering three factors: variations in psychological well-being, psychological distress or lack of activation, and perceived fatigue after intense physical exercise. The first two represent the positive pole (psychological well-being) and the negative pole (psychological distress or lack of activation), while the third comprises subjective indicators of the sensation of fatigue.
The original psychometric data provided by the scale’s authors indicate that it has acceptable internal consistency and external validity. Subsequently, De Gracia et al. [22] replicated these results by conducting a principal component analysis with varimax rotation to verify the factorial structure of the scale. In both groups, the items were grouped into the expected three factors, with a total variance of 89.16% for the experimental group and 75.15% for the control group. The results of the factorial analysis and internal reliability of the SEES were then presented, confirming that all three factors showed a high reliability coefficient [22].
Lastly, it is important to note that this scale does not measure emotions directly; rather, it provides measures of responses that can lead to specific emotional states. According to Mousseau [23], fatigue can be used to describe the feeling of tiredness after exertion of various kinds, be it intellectual, work-related, or related to sport. The construct of fatigue also extends to that generated by mental effort. Although it has been validated and used mainly in the field of sport, it includes constructs that can be extrapolated to those arising from intellectual exertion, such as decreased activation, reduced well-being, and increased fatigue. For this reason, we have chosen to use it in the present study.
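The three-factor, four-items-per-factor structure described above can be sketched as a simple scoring function. The item-to-factor mapping below is purely illustrative; the real scoring key is given by McAuley and Courneya (1994):

```python
# Hypothetical item-to-factor mapping for the 12-item SEES (illustrative only;
# consult the original scale for the actual key). Items are 0-indexed.
FACTORS = {
    "well_being": [0, 1, 2, 3],   # psychological well-being
    "distress": [4, 5, 6, 7],     # psychological distress / lack of activation
    "fatigue": [8, 9, 10, 11],    # perceived fatigue
}

def score_sees(responses):
    """Sum the four items belonging to each of the three SEES factors."""
    assert len(responses) == 12   # 12 items, 4 per factor
    return {factor: sum(responses[i] for i in items)
            for factor, items in FACTORS.items()}
```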

2.2.3. Ad Hoc Questionnaire

In addition to the previous scale, an online questionnaire was designed using Google Forms (Google Forms, n.d.). It included the following five questions to explore all aspects considered of interest for the research, such as enjoyment, availability, experience of this type of task, and familiarity with the format:
A. Did you enjoy the task? (Rate from 1 to 7, where 1 is not at all and 7 is totally).
B. Would you be willing to do more tasks of this type? (Yes/No).
C. Did you find it difficult to perform this task? (Yes/No). If you encountered any issues, please specify:
D. Have you participated in a similar experiment before? (Yes/No). If the answer is yes, do you prefer to participate in this type of experiment online or in a physical laboratory setting? (Online/Laboratory).
E. How much time do you spend weekly playing video games? (None / 0–5 h / 5–10 h / 10–20 h / more than 20 h).

2.2.4. Reinforcers

In addition to receiving credits for their participation, participants had the opportunity to win a prize, a EUR 20 Amazon voucher, which was raffled off at the end of this study.

2.3. Procedure

2.3.1. Three-Dimensional Task Development (Objective 1)

This phase involved the programming of a video game, which was subsequently tested in a pilot experiment (Objective 2). As mentioned earlier, this task is based on a classic laboratory task used in a previous research study [19]. To develop it, we used a game development engine called Unreal Engine 4 [24]. UE4 is one of the most powerful development engines available for creating interactive environments, and since 2015, it has been free to use for creating games compatible with Windows, macOS, Linux, HTML5, Xbox One, PS4, Android, iOS, and VR [25].
UE4 allows choosing the type of project to create, loading templates, programming parameters, and adding any external content (documents, images, audio clips, videos, etc.). Regarding programming languages, it has two types: C++ and a technology called Blueprint Visual Scripting. The latter is a system that allows creators to work more synthetically. Blueprints are like assets within the UE4 editor that can be organized into nodes. This allows for the creation of a variety of codes by interconnecting different elements [26]. UE4 also provides a Getting Started Guide on its official website, along with various tutorials and complete game programming guides under ‘Unreal Engine training & simulation’. All these resources made it possible to acquire the necessary knowledge to programme this video game.
To explain the components of this game, we will use the Content Browser viewer found in any UE4 project on the left side of the screen, which serves as a guide to navigate through the different elements to be configured for game creation. The game consists mainly of the following elements: Blueprints, Excel, Photos, Textures, and Level. With all this in mind, we developed a matching task in which the participant had to match one face with another showing the same emotion. In each trial, the participant saw a virtual room containing a photo of an actor expressing a specific emotion (see Figure 1). They next moved to another room where they encountered seven photos of 7 different actors displaying the 7 emotions included in the task (happiness, sadness, anger, disgust, surprise, fear, and neutral). Only one of them, the one showing the actor in the previous room, was correct. The participant had to navigate through the space and enter the door corresponding to the correct emotion. They had a maximum of 45 s to respond. This process was repeated until a total of 70 trials or rooms had been completed. At the end of the task, the participant received a CSV document that recorded, for each correctly differentiated and numbered level: (i) the time spent in the room or level; (ii) the accuracy rates; (iii) the correct emotion for that level; and (iv) the emotion they had chosen. This last piece of data provides important information in trials where an error occurred, allowing us to understand which emotions were being confused with each other.
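The per-trial log described above can be sketched as follows. The actual task was built in Unreal Engine 4, so this Python fragment is only an illustration of the CSV structure; the column names are assumptions, not the real output format:

```python
import csv

# Assumed column names for the per-room log: level number, time spent in the
# room, whether the response was correct, the target emotion, and the emotion
# the participant actually chose (useful for building confusion patterns).
FIELDS = ["level", "time_in_room_s", "correct", "target_emotion", "chosen_emotion"]

def write_log(path, trials):
    """Write one CSV row per completed room/level of the 3D task."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(trials)
```

Recording the chosen emotion alongside the target emotion is what allows error trials to be analyzed as confusions between specific emotion pairs.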

2.3.2. Pilot Study (Objective 2)

To conduct the pilot study, we designed a classical facial emotion recognition task based on the one used by González-Rodríguez et al. [19], utilizing the PsychoPy 2020.1 software [19,27]. In this case, only a matching block was created, omitting the labelling block, using the same images as in the 3D environment task. Specifically, the task consisted of a block of 70 trials that began with a fixation point for 500 ms (see Figure 2). After that, a blank screen appeared for 250 ms, followed by the presentation of the sample stimulus (a face displaying an emotional expression) for 1000 ms. Next, seven faces of different people were presented, with only one showing the same emotional facial expression as the sample stimulus. Participants had to click on the image they considered correct using the mouse. The stimuli remained on the screen until a response had been made or 10 s had elapsed (whichever occurred first). After a blank screen of 250 ms, the next trial began. At the end, we obtained a CSV document that recorded each response: reaction times, accuracy rates, and the emotion mistakenly chosen in case of error.
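The trial timeline above can be summarized schematically. The actual experiment ran in PsychoPy, so this is a timeline sketch under the timings stated in the text, not the real implementation:

```python
# One trial of the classic matching task (durations in seconds, from the text).
TRIAL_EVENTS = [
    ("fixation", 0.500),   # fixation point, 500 ms
    ("blank", 0.250),      # blank screen, 250 ms
    ("sample", 1.000),     # sample face, 1000 ms
    ("choice", 10.000),    # seven choice faces; response or 10 s timeout
    ("blank", 0.250),      # blank before the next trial, 250 ms
]

def score_trial(chosen, target, rt, timeout=10.0):
    """A trial counts as correct only if the matching face was clicked
    before the timeout elapsed."""
    return chosen == target and rt <= timeout
```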
Since this study took place during the COVID-19 pandemic, it could not be conducted in person in a laboratory setting. As a result, both the tasks and questionnaires were adapted to an online format. The procedure began by contacting psychology students through the virtual classroom of their subjects, offering them the opportunity to participate in exchange for course credits and entry into a raffle for a EUR 20 Amazon voucher. Once the participants were contacted, they were randomly assigned to each group. Subsequently, they received an email with links to the tasks and questionnaires, along with detailed instructions on how to complete them. The task was designed to be downloaded to a personal computer, while the questionnaires were to be completed online.

3. Results

3.1. Experimental Tasks

First of all, to rule out the possibility that gender-related effects might bias the results, a gender-by-emotion ANOVA was conducted on the data from both tasks (classic and 3D). As shown in Figure 3, no gender differences were observed for any emotion [F(6, 35) = 0.82, p = 0.55]. Therefore, the data from men and women were pooled for the subsequent analyses (see Figure 3).
Figure 3. Estimated marginal means for the Emotion × Sex interaction.
To compare the performance of the groups that participated in the classic task and the 3D Environment task (see Table 1 for descriptive statistics), the percentage of correct responses obtained by each participant for each emotion was analyzed using a mixed ANOVA with Emotion (Happiness, Sadness, Anger, Disgust, Surprise, Fear, and Neutral) as the within-participants factor and Type of task (Classic Task and 3D Environment Task) as the between-participants factor.
To check the assumption of homogeneity of variances in performance, Levene's test was applied. The assumption was not met for the emotions Happiness [F(1, 35) = 7.47, p = 0.01], Disgust [F(1, 35) = 8.26, p < 0.05], and Surprise [F(1, 35) = 9.83, p = 0.03]; that is, the variances of these data groups differ. It was met (p > 0.05) for Sadness, Anger, Fear, and Neutral. For this reason, Welch's ANOVA was used for the former. Since the sphericity assumption was not met [Mauchly's W = 0.24, p < 0.05], the Greenhouse–Geisser correction was applied. A general effect of emotional recognition between tasks was found [Wilks' Λ = 0.310, F(11, 135) = 6.000, p < 0.05, ηp² = 0.77], indicating superior performance in the 3D task, modulated by an Emotion × Task interaction [Type III sum of squares = 2.337, F = 0.008, p = 0.93]. The results of Welch's ANOVA for the emotions that violated the homogeneity of variance assumption were Happiness [F(1, 34.42) = 372.991, p < 0.05], Disgust [F(1, 25.24) = 35.752, p < 0.01], and Surprise [F(1, 22.37) = 1.038, p = 0.32]. For the emotions that did not violate the assumption, the results were Anger [F(1, 33.64) = 4.72, p = 0.04], Sadness [F(1, 23.85) = 69.972, p < 0.01], Fear [F(1, 19.01) = 25.13, p < 0.05], and Neutral [F(1, 31.87) = 2.14, p = 0.15] (see Figure 4).
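For reference, Welch's heteroscedasticity-robust one-way ANOVA used here for the emotions violating the homogeneity assumption can be computed from group sizes, means, and variances alone. A minimal pure-Python sketch (returning the F statistic and its degrees of freedom; the p-value would additionally require an F distribution, e.g. scipy.stats.f):

```python
from statistics import mean, variance

def welch_anova(*groups):
    """Welch's one-way ANOVA for k independent groups with unequal variances.

    Returns (F, df1, df2). Weights each group by n_i / s_i^2 so that
    higher-variance groups contribute less to the weighted grand mean.
    """
    k = len(groups)
    ns = [len(g) for g in groups]
    means = [mean(g) for g in groups]
    w = [n / variance(g) for n, g in zip(ns, groups)]   # weights n_i / s_i^2
    W = sum(w)
    grand = sum(wi * m for wi, m in zip(w, means)) / W  # weighted grand mean
    num = sum(wi * (m - grand) ** 2 for wi, m in zip(w, means)) / (k - 1)
    # Correction term reflecting how unbalanced the weights are across groups
    A = sum((1 - wi / W) ** 2 / (n - 1) for wi, n in zip(w, ns))
    denom = 1 + 2 * (k - 2) * A / (k ** 2 - 1)
    df2 = (k ** 2 - 1) / (3 * A)
    return num / denom, k - 1, df2
```

With two groups, as in the task comparison here, Welch's ANOVA is equivalent to the squared Welch t-test.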
Post hoc comparisons were conducted using the Games–Howell correction. In the classic task, the emotion that was best recognized was Disgust, being significantly different (p < 0.05) from four emotions (Happiness, Anger, Surprise, and Fear). This was followed by Fear, which significantly differed from Sadness, Disgust, and Neutral (p < 0.05), and Neutral, which differed from Fear, Anger, and Happiness. Significant differences (p < 0.05) were found in Happiness and Anger compared to Disgust and Neutral. Finally, in the Surprise emotion, we only found statistically significant differences (p < 0.05) with the Disgust emotion, and in Sadness, we only found statistically significant differences with Fear.
In the 3D Environment task, the emotion that achieved the best level of recognition was Sadness, with statistically significant differences (p < 0.05) compared to Happiness, Anger, Disgust, Surprise, Fear, and Neutral. This was followed by Fear, which showed statistically significant differences (p < 0.05) compared to Happiness, Disgust, and Surprise. In third place, we found Happiness, Anger, and Surprise, all of which differed significantly (p < 0.05) from Sadness and Fear. Finally, Neutral and Anger significantly differed (p < 0.05) from Sadness. The remaining differences were not statistically significant (p > 0.05).

3.2. SEES Scale (McAuley and Courneya, 1994)

The results of the tests for normality and homogeneity of variance showed that these assumptions were not met only in the case of Lack of Activation, so the data for this factor were analyzed using a Mann–Whitney U non-parametric test for two independent samples. For the other two factors, Psychological Well-being and Fatigue, independent-samples t-tests were used. There were no statistically significant differences in Psychological Well-being between the scores of participants who performed the Classic Task [M = 17.28, SE = 4.54] and those who did the 3D Environment Task [M = 19.47, SE = 4.79] [t(35) = −1.517, p = 0.16], nor were there statistically significant differences in Fatigue between the Classic Task [M = 10.47, SE = 4.78] and the 3D Environment Task [M = 13.72, SE = 5.12] [t(35) = 1.98, p = 0.06]. However, statistically significant differences were observed when comparing both groups on the Lack of Activation factor [Z = 2.302, p = 0.02], with a very large effect size (d = 2.59). Specifically, the group that performed the task in the classic format scored higher (M = 23.17) than the group that performed it in the 3D environment format (M = 15.05).
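The non-parametric comparison used for the Lack of Activation scores reduces to computing the Mann–Whitney U statistic from the pooled ranks of the two samples. A pure-Python sketch (average ranks for ties; significance would then be assessed against the U distribution or its normal approximation, as scipy.stats.mannwhitneyu does):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples.

    Pools both samples, assigns average ranks to ties, and returns the
    smaller of the two U values.
    """
    combined = sorted((v, i) for i, v in enumerate(list(x) + list(y)))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1                      # extend over a run of tied values
        avg = (i + j) / 2 + 1           # average rank for the tie (1-based)
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg
        i = j + 1
    n1 = len(x)
    r1 = sum(ranks[:n1])                # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2
    return min(u1, len(x) * len(y) - u1)
```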

3.3. Ad Hoc Questionnaire Results

Regarding the ad hoc questionnaire, no statistically significant differences were found in any of the questions (See Table 2).

4. Discussion

The aims of this study were, firstly, to design a task in a 3D environment format to assess the recognition of emotional facial expressions and, secondly, to conduct a pilot study to compare the performance and subjective experience (psychological well-being, lack of activation and psychological distress) in this task with that obtained in a similar task developed in a classic format. After achieving the first objective and finalizing the task design, we conducted a pilot study. The results indicated that, although there were overall differences in performance between the two tasks in favour of the 3D task, these differences became even more evident when analyzing each emotion individually. In the case of Sadness, the scores were notably lower in the 3D task compared to the classic task, which is consistent with the literature. Previous studies suggest that negative emotions, such as sadness, tend to be more difficult to recognize than the positive ones [28,29,30,31]. This difficulty may be explained by the tendency of people to focus more on positive stimuli, leading to reduced sensitivity to negative emotions [32]. Moreover, cognitive biases may further distort emotional perception, causing a stronger emphasis on positive emotions while diminishing attention to negative ones [33]. However, it is important to note that not all negative emotions behave similarly; for example, fear is often processed more rapidly due to its evolutionary significance in threat detection [31,32].
This pattern of negative emotion recognition suggests that this new task better captures the phenomenon by getting closer to the person’s real capacity in terms of emotional facial recognition. The results of this study seem to indicate that our first hypothesis is fulfilled; the 3D task could be used, like classical matching tasks, to explore and evaluate the recognition of emotional facial expressions. These results, together with other previous studies [2,14,17], could suggest that tasks in 3D or Serious game environments are as effective as conventional tasks.
On the other hand, it is worth noting that of the three factors measured by the SEES scale, there were statistically significant differences in participants’ Lack of Activation in favour of the 3D Environment task. This improvement in activation occurs even though participants who completed the 3D Environment task had to perform a series of preliminary tasks that could have resulted in a lack of activation, tasks that participants who completed the classic task did not have to perform (e.g., the entire process related to downloading and installing the 3D task). Even considering the need to expand this pilot study, the data obtained suggest that our second hypothesis is partially fulfilled. This hypothesis referred to the idea that the 3D Environment task might be more attractive and dynamic, leading participants to manifest higher levels of psychological well-being, lower psychological distress or lack of activation, and a decrease in perceived fatigue compared to the conventional task. As we have observed, this hypothesis has been fulfilled regarding the lack of activation [5,6,34].
Regarding the questions formulated to explore other aspects of the task, we observed that participants who completed the task in the 3D environment liked it on average 1.03 points more out of 7 than those who performed the task in the classic format. All the participants who completed the task in a 3D environment would be willing to do a similar task again, while 88.88% of those who completed the task in classic format expressed the same intention. Furthermore, the task was considered easy by 100% of those who completed it in its classic version, while 94.73% of those who completed it in the 3D Environment format found it easy. Additionally, responses to open-ended questions revealed that participants who completed the task in the 3D environment encountered difficulties in downloading and executing the task file. This suggests that addressing this issue, for example, by conducting the task in a laboratory setting where participants do not have to learn how to execute the file, or by programming an online version like various software packages for psychology experiment generation, could be beneficial.
Although these results could mean that the 3D Environment task designed in this study can be used, like more traditional matching tasks, to explore and assess the recognition of emotional facial expressions, further studies are needed before drawing firmer conclusions. Several reasons lead us to be cautious. Firstly, although no gender differences were found, the variability observed when analyzing each emotion separately could stem from the small sample size and from the uncontrolled proportion of men and women in each group, a factor that can influence the recognition of emotional facial expressions. This gender imbalance is largely due to the fact that most participants were psychology students, a field with a higher proportion of female students. Furthermore, this study was conducted during the COVID-19 pandemic, which significantly limited access to participants and made it difficult to achieve a more balanced sample. Additionally, because of pandemic restrictions, the experiments were administered online, which likely influenced the results: environmental variables that can be minimized in a laboratory setting (e.g., the presence of distractors) could not be controlled. In fact, when conducted online, the 3D Environment task proved more demanding than the classic task, as it required prior installation on participants’ computers, which led to issues on more than one occasion. Therefore, it would be advisable to repeat this study with a larger sample and, preferably, in a laboratory setting.

5. Conclusions

In this study, we compared performance on a classic emotional facial expression recognition task with that on a more innovative task set in a 3D environment. The results indicate a higher percentage of accuracy in recognizing Happiness, Sadness, Anger, Disgust, and Fear in the 3D task. The results also show lower levels of perceived lack of activation in the 3D task than in the conventional task. Although this is a pilot study that will need to be replicated and expanded, we believe these results are promising and can serve as a foundation for assessing emotion recognition in clinical and subclinical populations in which this process is impaired (e.g., individuals diagnosed with ASD or schizophrenia, or with high levels of broad autism phenotype (BAP) traits or schizotypy). Additionally, the task could be adapted for training and improving this process by implementing procedures such as the Differential Outcomes Procedure (DOP). Therefore, this study represents the beginning of a new line of research that may have important repercussions in clinical settings, starting with the generation of a useful instrument for the evaluation and training of the recognition of emotional facial expressions.

Author Contributions

Conceptualization, Á.F.E.; methodology, G.Q.-G. and G.A.O.; software, G.Q.-G. and G.A.O.; validation, G.Q.-G.; formal analysis, G.Q.-G.; investigation, G.Q.-G.; resources, G.L.-C.; writing—original draft preparation, G.Q.-G.; writing—review and editing, Á.F.E. and G.L.-C.; supervision, Á.F.E. and G.L.-C.; project administration, Á.F.E. and G.L.-C.; funding acquisition, G.Q.-G. and G.L.-C. All authors have read and agreed to the published version of the manuscript.

Funding

G.Q.-G. was awarded a Research Initiation Grant for Official Master’s students under the Research and Transfer Plan of the University of Almería (2021/2022). The Government of Aragón (Grant Number S62_23D) has covered the Article Processing Charges (APC) as well as the English revision of the manuscript.

Data Availability Statement

Data will be made available on request.

Acknowledgments

We would like to thank Antonio González-Rodríguez for his support in supervising the work during the initial stages of the research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Galati, D. Prospettive Sulle Emozioni e Teorie del Soggetto; Bollati Boringhieri: Torino, Italy, 2002. [Google Scholar]
  2. Matarazzo, O.; Zammuner, V.L. La Regolazione Delle Emozioni, 2nd ed.; Il Mulino: Bologna, Italy, 2015. [Google Scholar]
  3. Ekman, P.; Oster, H. Facial expressions of emotion. Annu. Rev. Psychol. 1979, 30, 527–554. [Google Scholar] [CrossRef]
  4. González-Quevedo, P.; Alfonso, C. Fundamentación Crítica-Desde Zubiri-de la Aproximación Ecológica de J.J. Gibson a la Psicología de la Percepción. Doctoral Thesis, Universidad Complutense de Madrid, Madrid, Spain, 2019. [Google Scholar]
  5. Abd-Elfattah, H.M.; Abdelazeim, F.H.; Elshennawy, S. Physical and cognitive consequences of fatigue: A review. J. Adv. Res. 2015, 351–358. [Google Scholar] [CrossRef]
  6. López-Chicharro, J.; Fernández-Vaquero, A. Fisiología del Ejercicio, 3rd ed.; Médica Panamericana: Madrid, Spain, 2006. [Google Scholar]
  7. Smith, J.W. Immersive virtual environment technology to supplement environmental perception, preference and behavior research: A review with applications. Int. J. Environ. Res. Public Health 2015, 12, 11486–11505. [Google Scholar] [CrossRef] [PubMed]
  8. Wang, L.; Chen, J.L.; Wong, A.M.K.; Liang, K.C.; Tseng, K.C. Game-Based Virtual Reality System for Upper Limb Rehabilitation After Stroke in a Clinical Environment: Systematic Review and Meta-Analysis. Games Health J. 2022, 11, 277–297. [Google Scholar] [CrossRef] [PubMed]
  9. Peng, Y.; Wang, Y.; Zhang, L.; Zhang, Y.; Sha, L.; Dong, J.; He, Y. Virtual reality exergames for improving physical function, cognition and depression among older nursing home residents: A systematic review and meta-analysis. Geriatr. Nurs. 2024, 57, 31–44. [Google Scholar] [CrossRef]
  10. Perra, A.; Riccardo, C.L.; De Lorenzo, V.; De Marco, E.; Di Natale, L.; Kurotschka, P.K.; Preti, A.; Carta, M.G. Fully Immersive Virtual Reality-Based Cognitive Remediation for Adults with Psychosocial Disabilities: A Systematic Scoping Review of Methods Intervention Gaps and Meta-Analysis of Published Effectiveness Studies. Int. J. Environ. Res. Public Health 2023, 20, 1527. [Google Scholar] [CrossRef] [PubMed]
  11. Antonova, A. Validating a model of smart service system, supporting teachers to create educational maze video games. In Proceedings of the 46th MIPRO ICT and Electronics Convention (MIPRO), Opatija, Croatia, 25 May 2023; pp. 693–698. [Google Scholar] [CrossRef]
  12. Pagulayan, R.J.; Busch, R.M.; Medina, K.L.; Bartok, J.A.; Krikorian, R. Developmental normative data for the Hopkins Verbal Learning Test–Revised and the Brief Visuospatial Memory Test–Revised in an elderly sample. Clin. Neuropsychol. 2006, 20, 390–398. [Google Scholar]
  13. Connolly, J.F.; Byrne, J.M.; Dywan, J. Event-related brain potentials and the study of emotional processing in children and adolescents. Dev. Neuropsychol. 2012, 37, 501–520. [Google Scholar]
  14. Tong, T.; Chignell, M.; De Guzman, C.A. Using a serious game to measure executive functioning: Response inhibition ability. Appl. Neuropsychol. Adult 2019, 12, 673–684. [Google Scholar] [CrossRef]
  15. Fajnerová, I.; Fajnerová, M.; Rodriguez, L.; Konrádová, L.; Mikoláš, P.; Dvorská, K.; Brom, C. Spatial memory in a virtual arena: Human virtual analogue of the Morris water maze. In Proceedings of the 2013 International Conference on Virtual Rehabilitation (ICVR), Philadelphia, PA, USA, 26–29 August 2013; pp. 186–187. [Google Scholar] [CrossRef]
  16. Banos, O.; Comas-Gonzalez, Z.; Medina, J.; Polo, A.; Gil, D.; Peral, J.; Amador, S.; Villalonga, C. Sensing technologies and machine learning methods for emotion recognition in autism: Systematic review. Int. J. Med. Inform. 2024, 187, 105469. [Google Scholar] [CrossRef]
  17. Petrovska, I.; Trajkovski, V. Effects of a computer-based intervention on emotion understanding in children with autism spectrum conditions. J. Autism Dev. Disord. 2019, 49, 4244–4255. [Google Scholar] [CrossRef]
  18. Leung, J.H.; Purdy, S.C.; Corballis, P.M. Improving Emotion Perception in Children with Autism Spectrum Disorder with Computer-Based Training and Hearing Amplification. Brain Sci. 2021, 11, 469. [Google Scholar] [CrossRef] [PubMed]
  19. González-Rodríguez, A.; Godoy-Giménez, M.; Cañadas, F.; Estévez, A.F. Differential outcomes, schizotypy, and improvement of the recognition of emotional facial expression: A preliminary study. Psicológica 2020, 41, 162–182. [Google Scholar] [CrossRef]
  20. Langner, O.; Dotsch, R.; Bijlstra, G.; Wigboldus, D.H.J.; Hawk, S.T.; van Knippenberg, A. Presentation and validation of the Radboud Faces Database. Cogn. Emot. 2010, 24, 1377–1388. [Google Scholar] [CrossRef]
  21. Ekman, P. Universal facial expressions of emotion. Calif. Ment. Health Res. Dig. 1970, 8, 151–158. [Google Scholar]
  22. De Gracia, M.; Marcó, M. Adaptación y validación factorial de la Subjetive Exercise Experiences Scale (SEES) [Adaptation and Factorial Validation of the ’Subjective Exercise Experiences Scale (SEES)]. Rev. Psicol. Deporte 1997, 6, 60–68. [Google Scholar]
  23. Mousseau, M.B. The Onset of Cognitive Fatigue on Simulated Sport Performance. Master’s Thesis, University of Florida, Gainesville, FL, USA, 2004. [Google Scholar]
  24. Unreal Engine (n.d.) Training & Simulation. Available online: https://www.unrealengine.com/en-US/industry/training-simulation (accessed on 13 March 2025).
  25. Egea Canales, J.M.; Lozano Ortega, M.A. Desarrollo de un Videojuego Con Unreal Engine 4. Bachelor’s Thesis (unpublished), Universidad de Alicante, Alicante, Spain, 2015. [Google Scholar]
  26. Marchante, D.J.; Peralta, D.J. Manual Para la Creación de Videojuegos Mediante el Motor Unreal Development Kit. Bachelor’s Thesis (unpublished), Universidad Carlos III de Madrid, Getafe, Spain, 2011. [Google Scholar]
  27. Peirce, J.W.; Gray, J.R.; Simpson, S.; MacAskill, M.R.; Höchenberger, R.; Sogo, H.; Kastman, E.; Lindeløv, J. PsychoPy2: Experiments in behavior made easy. Behav. Res. Methods 2019, 51, 195–203. [Google Scholar] [CrossRef]
  28. Kauschke, C.; Bahn, D.; Vesker, M.; Schwarzer, G. The role of emotional valence for the processing of facial and verbal stimuli—Positivity or negativity bias? Front. Psychol. 2019, 10, 1654. [Google Scholar] [CrossRef] [PubMed]
  29. Leppänen, J.M.; Hietanen, J.K. Emotional facial expressions are processed early in the human visual system: Evidence from ERPs. Neurosci. Lett. 2004, 369, 132–137. [Google Scholar]
  30. Nummenmaa, L.; Calvo, M.G. Dissociation between recognition and detection advantage for facial expressions: A meta-analysis. Emotion 2015, 15, 243–256. [Google Scholar] [CrossRef]
  31. Vaish, A.; Grossmann, T.; Woodward, A. Not all emotions are created equal: The negativity bias in social-emotional development. Psychol. Bull. 2008, 134, 383–403. [Google Scholar] [CrossRef] [PubMed]
  32. LeDoux, J.E. The Emotional Brain: The Mysterious Underpinnings of Emotional Life; Simon & Schuster: New York, NY, USA, 1998. [Google Scholar]
  33. Barrett, L.F. Solving the emotion paradox: Categorization and the experience of emotion. Pers. Soc. Psychol. Rev. 2006, 10, 20–46. [Google Scholar] [CrossRef] [PubMed]
  34. Cárdenas, D.; Conde-González, J.; Perales, J.C. La fatiga como estado motivacional subjetivo. Rev. Andal. Med. Deporte 2017, 10, 31–41. [Google Scholar] [CrossRef]
Figure 1. Three-dimensional task.
Figure 2. Classic task.
Figure 4. Estimated marginal means for the emotion × task interaction.
Table 1. Descriptive statistics (mean ± SD).

Emotion      Mean Classic Task    Mean 3D Task
Happiness    76.11 (±16.85)       96.32 (±11.64)
Sadness      89.44 (±15.51)       61.58 (±17.21)
Anger        78.89 (±12.31)       81.58 (±11.50)
Fear         98.33 (±5.14)        94.21 (±6.00)
Surprise     81.67 (±14.65)       95.79 (±6.07)
Disgust      71.67 (±22.2)        76.32 (±18.60)
Neutral      90.56 (±13.04)       86.84 (±12.93)
Table 2. Ad hoc questionnaire analyses.

Aspect                                3D Environment         Classic Format        Test Statistic
A. Task Liking (Average Score)        5.75 (σ = 1.25) of 7   4.72 (σ = 1.40) of 7  U(1) = 161, p = 0.278
B. Willingness to Do More Tasks       Higher willingness     Lower willingness     χ2(1) = 0.478, p = 0.489
C. Perceived Difficulty               All found it easy      Most found it easy    χ2(1) = 0.005, p = 0.942
D. Preferred Experimental Setting     Laboratory preferred   Laboratory preferred  χ2(2) = 1.77, p = 0.412

E. Weekly Video Game Playing Time     3D Environment         Classic Format        Total
0 h                                   47.36%                 50%                   48.6%
0–5 h                                 26.31%                 16.66%                21.6%
5–10 h                                10.52%                 11.11%                10.81%
10–20 h                               5.26%                  11.11%                8.1%
More than 20 h                        10.52%                 11.11%                10.8%
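As a point of reference for readers wishing to reproduce analyses of this kind, the group comparisons reported in Table 2 (a Mann-Whitney U test for the ordinal liking ratings; chi-square tests for the categorical responses) can be sketched with SciPy. The response vectors and counts below are illustrative placeholders, not the study’s raw data.

```python
# Sketch of the Table 2 group comparisons using SciPy.
# All data below are hypothetical placeholders for illustration only.
import numpy as np
from scipy import stats

# A. Task liking (1-7 ratings): ordinal data, so a Mann-Whitney U test is used.
liking_3d = [7, 6, 5, 7, 6, 5, 4, 6, 7, 5]        # hypothetical ratings, 3D group
liking_classic = [5, 4, 6, 3, 5, 4, 5, 6, 4, 5]   # hypothetical ratings, classic group
u_stat, p_u = stats.mannwhitneyu(liking_3d, liking_classic, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_u:.3f}")

# B. Willingness to repeat (yes/no counts per group): chi-square on a 2x2 table.
# Counts chosen to mirror the reported proportions, not the actual raw data.
contingency = np.array([[19, 0],    # 3D group: yes, no
                        [16, 2]])   # classic group: yes, no
chi2, p_chi, dof, _ = stats.chi2_contingency(contingency)
print(f"chi2({dof}) = {chi2:.3f}, p = {p_chi:.3f}")
```

With small cell counts such as these, SciPy applies Yates’ continuity correction to the 2 × 2 chi-square by default; an exact test (e.g., Fisher’s) would be a reasonable alternative at this sample size.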

