Article

Speaking Through an Avatar: Emotional Expressiveness, Individual Differences, User Experience and Performance

1 Instituto Universitario de Automática e Informática Industrial, Universitat Politècnica de València, 46022 Valencia, Spain
2 Departamento de Psicología y Sociología, Facultad de Ciencias Sociales y Humanas, Universidad de Zaragoza, 44003 Teruel, Spain
3 Departamento de Psicología, Facultad de Psicología, Universidad de Oviedo, 33003 Oviedo, Spain
4 Instituto de Neurociencias del Principado de Asturias (INEUROPA), 33003 Oviedo, Spain
5 Instituto de Investigación Sanitaria del Principado de Asturias (ISPA), 33011 Oviedo, Spain
6 IIS Aragón, 50009 Zaragoza, Spain
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(22), 12082; https://doi.org/10.3390/app152212082
Submission received: 17 October 2025 / Revised: 5 November 2025 / Accepted: 12 November 2025 / Published: 13 November 2025

Abstract

Emotionally expressive avatars are often used to increase engagement in virtual environments, but their effects on users’ emotional outcomes and experience during evaluative tasks are not well established. This study examined whether differences in avatar emotional expressiveness are associated with affective responses and user experience during a socially evaluative speech task in virtual reality (VR), and how individual characteristics and emotional variables relate to performance and user experience. Sixty-three university students were randomly assigned to deliver a five-minute self-presentation, simulating a job interview, in front of a virtual mirror while embodied in either a high-expressive or low-expressive avatar. In the present study, the manipulation of avatar expressiveness was implemented using Meta Quest 2 and Meta Quest Pro headsets, differing mainly in facial-tracking capability. Participants completed a structured three-phase protocol: pre-avatar embodiment (baseline questionnaires), avatar embodiment (speech task), and post-avatar embodiment (post-task measures). Emotional state and trait variables, speech fluency and engagement during the task, and user experience variables were assessed. No significant effects of avatar expressiveness were found on emotional or experiential variables. Correlation analyses revealed a positive association between extraversion and avatar embodiment. These findings contribute to our understanding of the factors that are associated with user experience and behaviour in avatar-based VR environments and suggest that individual traits, such as extraversion, should be considered when designing VR applications for training, education, and therapeutic purposes.

1. Introduction

1.1. Background

Virtual reality (VR) has become an increasingly common tool for training [1], education [2], and therapy [3]. Within these environments, avatars—virtual representations of the self—are playing an ever more prominent role. They enable embodiment, facilitate social interaction, and allow the manipulation of features that are not possible in real life [4]. Consequently, there is a growing interest in understanding how avatar characteristics affect users’ experiences and behaviours.

1.2. Related Work

Avatars can communicate emotional states through their appearance and movement, impacting users’ self-perception and emotional involvement [5]. For instance, avatars that display expansive or closed postures can subtly influence how users feel during embodiment. While posture alone may not directly alter affect, users’ positive mood can decline over time, and experiences of embodiment, such as body ownership and agency, are closely linked to emotional outcomes [6]. Extending this line of evidence, an experiment using a virtual mirror and interaction with a character showed that facial resemblance between users and their avatars enhanced the sense of embodiment. However, this effect did not translate into higher presence [7].
The sense of presence in VR is heightened when users interact with expressive avatars. Avatar personalisation has been shown to shape emotional responses and increase involvement [8], while body involvement in VR gameplay enhances positive emotions and reduces anxiety, supporting the link between embodiment, presence, and self-efficacy [9].
Emotional expressiveness of avatars has been proposed to increase engagement and social realism in VR. Research has demonstrated that the intensity of avatars’ facial expressions is particularly important for how realistic they are perceived to be, and that consistent expressive cues enhance users’ ability to recognise emotions and increase perceived realism. When expressions appear mismatched or lack intensity, realism decreases and can even give rise to perceptions of inauthenticity. These findings suggest that expressive avatars, especially those resembling the user, can support emotional recognition and strengthen the effectiveness of VR training settings for challenging situations [10]. Similarly, varying the magnitude of facial expressiveness in avatars has been shown to influence social judgements, such as perceived extraversion and persuasiveness, although excessive exaggeration can undermine realism. These effects were observed in a collaborative problem-solving task, where participants discussed survival decisions with an avatar animated in real time from a confederate’s facial motion [11]. Recent findings further highlight that real-time facial cues, such as eye and mouth movements, significantly enhance social presence and co-presence in collaborative VR tasks. These effects were observed in paired explanation games, where participants alternated between verbally describing and graphically illustrating terms for their partner, providing a naturalistic context in which facial expressions supported attention, gaze behaviour, and the feeling of being together [12]. Comparisons between tracking-based facial animation and lip-synchronisation confirm that accurate tracking better conveys emotions and enhances the perception of social presence, particularly in the case of subtle expressions or expressions with negative valence, which are otherwise more difficult to reproduce and recognise in virtual agents [13]. In the study by Visconti et al. [13], participants observed realistic full-body avatars enacting scripted scenes of basic emotions in a virtual theatre, and judged the experience on aspects such as realism, naturalness, and emotional conveyance. Results consistently favoured the tracking condition, highlighting the importance of faithful facial cues for sustaining social presence.
The use of self-representing avatars has become a common feature in VR paradigms that recreate demanding social communication tasks, offering a valuable framework to study how individual differences influence emotional and behavioural responses. For example, embodying avatars that differ from one’s real appearance has been shown to reduce aspects of social anxiety, particularly physiological responses, suggesting that alterations in virtual self-representation may support VR-based interventions [14]. Individuals with higher trait social anxiety were also more likely to prefer, and to benefit from, dissimilar avatars, underscoring the role of individual differences in regulating anxiety during virtual public speaking tasks. More recent research has shown that self-similar avatars can heighten embodiment and social presence, yet may also increase self-awareness and reduce immersion [15]. In this study, participants engaged in a cooperative social VR game where avatar appearance, voice, and name were manipulated to create varying levels of self-similarity. High self-similarity, particularly through voice and facial appearance, enhanced body ownership and the perception of avatars as authentic social partners, but also raised public self-awareness and social anxiety, mediating a decline in immersion. Likewise, research on VR audiences has highlighted the role of nonverbal behaviours, such as posture, head movement, and facial expressions, in shaping how speakers perceive valence and arousal, with direct consequences for presence and emotional impact in training settings [16]. In this experiment, participants delivered speeches to virtual audiences whose nonverbal feedback was systematically varied, demonstrating that audience behaviours influenced speakers’ emotional states and perceptions of presence. 
Taken together, these findings illustrate that avatar design directly affects embodiment, presence, and emotional experience, and emphasise the need to investigate the psychological factors related to these effects in communication tasks.
Personality traits such as neuroticism and extraversion have been consistently associated with communication-related outcomes, including public speaking anxiety and the anticipation of negative evaluation. Evidence from real-world contexts suggests that higher neuroticism is linked to stronger anxiety responses and disruptions in speech fluency [17], whereas extraversion has been associated with greater sociability, expressiveness, and more adaptive conversational dynamics [18]. However, only a limited number of studies have addressed these traits in communication tasks within VR environments. For example, one study employed a self–avatar together with a virtual audience in a public speaking training system, showing that individual differences in personality and immersive tendencies influenced user experience and performance. In this context, higher neuroticism predicted greater anxiety and lower perceived system quality (i.e., usefulness and ease of use), whereas extraversion was associated with more positive evaluations of the training experience [19]. In contrast, another study examined presentations to a virtual audience without self-representing avatars, where these traits were analysed mainly in relation to anxiety levels and physiological reactivity. The findings indicated that higher neuroticism was associated with stronger physiological stress responses, whereas extraversion showed weaker or inconsistent associations [20]. However, the extent to which neuroticism and extraversion are associated with users’ experience and embodiment in avatar-based communication remains largely unexplored.

1.3. Present Study

The present study aimed to examine whether differences in avatar emotional expressiveness are related to users’ emotional responses (i.e., anxiety and affect), task performance, user experience, and avatar embodiment during a socially evaluative speech task. In this task, participants were embodied in a self-representing avatar and delivered their speech while viewing themselves in a virtual mirror. A further exploratory aim was to examine the associations between personality traits, specifically neuroticism and extraversion, and these outcome measures. The study sought to provide an understanding of how avatar design may influence users’ emotional and experiential responses in VR environments, particularly in simulated performance tasks that require communication under evaluative conditions. Beyond practical implications for applied domains—such as training, education, and therapeutic interventions—it also provides insight into how individual psychological traits relate to human-avatar interaction.
The study examined two research questions:
  • Does avatar emotional expressiveness in VR influence users’ emotional responses, user experience, embodiment, and performance during a speech task?
  • Are neuroticism and extraversion associated with these outcomes in avatar-based communication tasks?
The following hypotheses were proposed:
H1. 
Based on previous findings that expressive avatars have been associated with higher involvement and presence [8,10], participants embodied in a highly expressive avatar were expected to show greater changes in emotional states, along with greater emotional involvement, presence, and embodiment than those in a low-expressive condition.
H2. 
Consistent with evidence linking avatar expressiveness with differences in communicative outcomes [11,12], participants in the high-expressive avatar condition were expected to show differences in speech performance (i.e., fewer researcher prompts required to continue speaking).
H3. 
Based on research showing that neuroticism is strongly linked to heightened communication anxiety and disrupted fluency [17,21], higher neuroticism was expected to be associated with poorer speech fluency and engagement during the task (i.e., more researcher prompts required).
H4. 
In line with studies linking extraversion to sociability, expressiveness, and positive conversational dynamics [18], higher extraversion was expected to be positively associated with presence, embodiment, and speech fluency.
The article is organised as follows: Section 2 provides details of the materials and methods used, including information on the characteristics of the participants, the design of the VR environment, how the expressiveness of the avatars was manipulated, the experimental speech task and the psychological and behavioural measures that were employed to evaluate emotional states, user experience and performance. Section 3 presents the statistical analyses and main results, including descriptive data, group comparisons and correlations between personality traits and experiential variables. Section 4 discusses these findings in relation to previous work, highlighting the factors that may explain the observed pattern of results, both methodological and contextual, and outlining directions for future research. Section 5 summarises the main conclusions and implications of the study for designing VR applications for use in training, education and therapy.

2. Materials and Methods

2.1. Participants

The sample consisted of 63 undergraduate students (30 males) aged between 19 and 27 years. This age range was selected because young adults are one of the most active and experienced user groups of VR technologies, typically being familiar with digital interactive systems [4]. This age group is also consistent with previous studies examining avatar embodiment, presence and emotional expressiveness in VR, enabling direct comparisons to be made with earlier findings [8,9,14].
Participants were randomly assigned to two groups: a high-expressive avatar group (HE; n = 30, 50% male) and a low-expressive avatar group (LE; n = 33, 45.5% male). The groups were equivalent based on demographic and experiential variables, as no significant differences emerged between them in age (p = 0.911; Mann–Whitney U test), sex distribution (p = 0.718), or prior video game experience (p = 0.335; χ² tests). Likewise, no significant group differences were found in computer knowledge (p = 0.556) or VR-headset experience (p = 0.303; Fisher’s exact tests). See Section 2.5 for details on the statistical procedures applied.
The characteristics of the participants and their prior technological experience are presented in Table 1.

2.2. Virtual Reality Task

This section describes the design and characteristics of the VR environment and the avatar, as well as the specific speech task that the avatar performs within this environment.

2.2.1. Virtual Reality Environment

The VR environment was designed and implemented using the default immersive desktop environments provided by the Meta Quest 2 and Quest Pro headsets (Meta Platforms, Inc., Menlo Park, CA, USA). In both cases, the standard VR environment of the headset was selected. No visual modifications were applied to the environment itself, beyond the customization of each participant’s avatar to resemble their real appearance, ensuring consistent conditions across all participants. At the time of the study, several default immersive environments on the Meta Quest platform included a built-in virtual mirror, including Cascadia and Desert Terrace. The Cascadia environment was selected for the present experiment because it provided a neutral and well-lit setting with both interior and exterior views, suitable for self-presentation tasks. Figure 1 shows two representative views of this environment, illustrating the spatial layout and mirror position used during the task. This approach guaranteed that participants engaged with the task in a controlled and comparable setting regardless of the headset used. The integrated virtual mirror functionality of the system was employed to allow participants to view their avatars in real time during the task, supporting the sense of embodiment and familiarity with the virtual representation.

2.2.2. Avatar Characteristics and Emotional Expressiveness

The avatars used in the speech task were modelled using the default avatar customisation applications available within the Quest 2 and Quest Pro platforms. Each avatar was tailored to resemble the participant completing the task, thereby enhancing identification and promoting the sense of presence within the VR environment. This individual adaptation ensured that participants perceived their avatar as a closer representation of themselves, a factor considered relevant for the experimental aims of the study. Examples of the resemblance achieved between participants and their avatars are illustrated in Figure 2.
Avatars were not randomised by visual appearance, as each avatar was designed to resemble the participant it represented. Participants were balanced by gender and age and shared the same ethnic background, minimising variability in facial features and appearance across conditions.
The two experimental conditions—high-expressive and low-expressive avatar groups—were directly associated with the headset used, given the different technological affordances of Meta Quest Pro and Meta Quest 2. Examples of both conditions are available as Supplementary Video Materials (in Spanish). These conditions were defined as follows:
  • High-expressive avatar condition (Meta Quest Pro). In this condition, avatars were capable of reproducing nuanced facial movements, including eyebrow, eye, and mouth gestures, which were dynamically tracked and mirrored from the user in real time through the facial tracking sensors of the Meta Quest Pro headset. Although both Meta Quest Pro and Meta Quest 2 headsets supported basic head, hand, and finger tracking, the addition of facial tracking in the Meta Quest Pro endowed the avatar with a substantially higher degree of emotional expressiveness. As a result, the avatar was able to convey subtle non-verbal cues during the speech task, simulating a more natural and socially engaging interaction.
  • Low-expressive avatar condition (Meta Quest 2). In this condition, avatars were restricted to basic motor expressiveness, limited to head, hand, and finger movements. As the Meta Quest 2 lacks integrated facial tracking capabilities, the avatar maintained a neutral facial expression throughout the task. This configuration provided a sharp contrast with the high-expressive condition, enabling the study to explore the potential role of facial gesticulation and emotional cues in relation to participants’ self-perception and communicative behaviour.
Both experimental conditions were implemented using different headset models, each offering the technological features necessary for its respective condition (Meta Quest Pro for the high-expressive condition and Meta Quest 2 for the low-expressive condition). Although the headsets differed in facial-tracking capability, they were otherwise comparable in key hardware specifications. Both headsets offer comparable display resolutions (Quest 2: 1832 × 1920; Quest Pro: 1800 × 1920) and similar fields of view (Meta Quest 2: ≈97° H × 93° V; Meta Quest Pro: ≈106° H × 96° V). The Meta Quest Pro is heavier (722 g vs. 503 g) but features a more ergonomic design, and the task duration was under ten minutes, minimizing any comfort differences. Another difference concerns the passthrough mode: the Quest Pro provides colour passthrough, whereas the Quest 2 offers a grayscale view. However, this feature was not used in the present study, as participants remained fully immersed in the virtual environment throughout the task. No audio stimuli were used, and the same virtual environment (Cascadia) was employed in both conditions. Controllers were used only to display the participant’s hands in the virtual environment; no manual input or interactive responses were required during the task. Overall, the two headsets were functionally equivalent for this experiment, with facial-tracking capability being the only feature that differed in a way relevant to the expressive manipulation.

2.2.3. Speech Task

Each participant completed a speech task simulating a job interview in front of a mirror. A trained researcher monitored the task in real time.
Before the VR-headset was put on, the researcher instructed the participant to imagine being in an interview for their ideal job position and informed them that they would deliver a five-minute self-presentation in front of a mirror. To support preparation, the researcher provided a written prompt with guiding questions related to personal background, professional profile, and relevant experience, which the participant reviewed briefly before continuing. The researcher then fitted the VR-headset, reiterated the task instructions, encouraged the participant to draw on the prompt during the speech, and listened attentively to their delivery.
The five-minute task was then initiated and continuously monitored by the researcher. The occurrence of any pauses requiring intervention was recorded. Standardised verbal prompts (e.g., “Keep talking about yourself…”) were delivered when necessary to encourage continuation.

2.3. Procedure

Each participant completed the study in a single individual session comprising three phases: pre-avatar embodiment, avatar embodiment, and post-avatar embodiment (Figure 3 provides an overview of the experimental procedure).
In the pre-avatar embodiment phase, participants completed a series of online questionnaires to assess the following measures, administered in the order listed: sociodemographic data, technology experience, trait anxiety, state anxiety, positive and negative affect, and personality traits—specifically neuroticism and extraversion. A detailed description of the measures and questionnaires is provided in Section 2.4.
During the avatar embodiment phase, participants performed the virtual speech task while embodied in an avatar with either high or low emotional expressiveness. Participants were assigned to either the HE or the LE avatar group using a pseudo-randomised procedure. Group allocation was based on the order of enrollment, with assignments alternating to ensure balanced distribution of age and gender.
Immediately afterward, in the post-avatar embodiment phase, participants completed a second set of online questionnaires assessing state anxiety, affect, and various aspects of the virtual experience. These included self-efficacy during the speech task, emotional involvement, sense of presence, and embodiment (see Section 2.4 for further details).

2.4. Questionnaires and Measures

2.4.1. Sociodemographic Information and Technology Experience

Sociodemographic data included age, gender, and technology-related experience. Technology-related experience was assessed using a 3-item Likert-type questionnaire evaluating participants’ self-reported experience with computing, video games, and VR-headsets. Responses ranged from 0 (no experience) to 3 (high experience).

2.4.2. Emotional State Variables

State anxiety, measured with the State subscale of the State–Trait Anxiety Inventory [22,23], and positive (PA) and negative affect (NA), examined using the Spanish version [24] of the Positive and Negative Affect Schedule [25], were measured both before (pre) and after (post) the avatar embodiment phase to capture changes in participants’ emotional states across the experimental procedure. In this study, these validated self-report scales showed the following Cronbach’s α values: STAI-S pre/post (α = 0.912/0.893), PANAS-PA pre/post (α = 0.850/0.898), and PANAS-NA pre/post (α = 0.872/0.778).

2.4.3. Trait Anxiety

To control for the possible influence of trait anxiety levels on emotional state variables, trait anxiety was assessed during the pre-avatar embodiment phase using the validated trait subscale of the State-Trait Anxiety Inventory [22,23]. The internal consistency in the current sample was high (α = 0.903).

2.4.4. Personality Traits

The personality traits of neuroticism (N) and extraversion (E) were assessed using the corresponding subscales of the NEO Five-Factor Inventory [26,27]. Internal consistency in the current sample was high for both subscales: NEO-N (α = 0.868) and NEO-E (α = 0.810).

2.4.5. Behavioural Performance During Speech Task

The number of researcher interventions required to prompt continued speaking was measured as an indicator of speech fluency and engagement (SFE) during the 5 min speech task. As there is no fixed maximum score, lower values reflect greater fluency and engagement.

2.4.6. User Experience and Embodiment in the VR Task

Following the speech task, participants completed a set of self-report measures:
  • Self-efficacy (SEf) during the speech task was assessed with the item: “How effective did you feel in persuading others of your suitability for the job position?” rated on a scale from 0 to 10.
  • Emotional involvement (EInv) was measured using two items (“I felt emotionally involved” and “I was able to express my emotions easily”), each rated on a 5-point Likert scale ranging from 0 (strongly disagree) to 4 (strongly agree), with a maximum total score of 8.
  • Sense of presence (SPre) within the VR environment was assessed using six items adapted to Spanish from the Slater-Usoh-Steed (SUS) Presence Questionnaire [28], rated on a 7-point Likert scale, with a maximum total score of 42. The internal consistency in the current sample was satisfactory (α = 0.778).
  • Avatar embodiment (AEm) was evaluated using eight items rated on a 7-point Likert scale (1 = strongly disagree, 7 = strongly agree), with a maximum total score of 56. The items were as follows: “I felt comfortable with this avatar”, “The avatar’s facial expressions seemed natural to me”, “The avatar’s gestures seemed natural to me”, “The avatar’s facial expressions appeared appropriate”, “The avatar’s gestures appeared appropriate”, “This avatar resembles, or could resemble, a real person”, “I enjoyed interacting with this avatar”, and “While looking at the mirror, I felt as if I were actually standing in front of it”. Participants rated the extent to which they agreed with each statement. Internal consistency in the current sample was good (α = 0.825).
  • Perceived avatar resemblance (ARes) was assessed with a single item: “To what extent do you think the avatar resembled you?”, rated on a scale from 0 (not at all) to 100 (completely).

2.5. Study Design and Statistical Analyses

Given the moderate sample size of the study (n = 63), the Shapiro–Wilk test was used to assess the normality of each variable. Non-parametric tests were applied when variables did not meet the assumption of normality.
Age differences between the HE and LE groups were assessed using the Mann–Whitney U test. Categorical variables (sex and prior video-game experience) were analysed with Pearson’s χ² test. Given that more than 50% of the expected cell counts were below five for prior computer knowledge and VR-headset experience, Fisher’s exact test was employed for those variables to ensure robustness.
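These group-equivalence checks can be illustrated in Python with SciPy (the study itself used IBM SPSS; all data below are hypothetical placeholders, not the study’s data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical ages for the two groups (HE: n = 30, LE: n = 33)
age_he = rng.integers(19, 28, size=30)
age_le = rng.integers(19, 28, size=33)

# Shapiro-Wilk normality screen; a small p-value argues for non-parametric tests
w_stat, p_norm = stats.shapiro(age_he)

# Mann-Whitney U test for the age comparison between groups
u_stat, p_age = stats.mannwhitneyu(age_he, age_le, alternative="two-sided")

# Fisher's exact test for a sparse 2x2 table (e.g., VR-headset experience yes/no)
table = np.array([[5, 25],   # hypothetical HE counts
                  [8, 25]])  # hypothetical LE counts
odds_ratio, p_vr = stats.fisher_exact(table)

print(p_norm, p_age, p_vr)
```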
Scores on the STAI-S, PANAS-PA and PANAS-NA were analysed using a two-way mixed-design repeated-measures ANOVA, with Group (HE vs. LE) as the between-subjects factor and Time (pre- vs. post-speech task) as the within-subjects factor. Scores on the STAI-T were included as a covariate to account for differences in participants’ baseline emotional states. Mauchly’s test of sphericity yielded non-significant results, confirming that the sphericity assumption was met.
Mann–Whitney U tests were conducted to compare SFE, SEf, EInv and ARes scores between groups. Independent-samples t-tests, meanwhile, were used to assess differences in SPre and AEm.
Spearman’s rank-order correlations were conducted on the entire sample to examine the associations between trait-level psychological measures (STAI-T, NEO-N and NEO-E) and state-level emotional changes (STAI-S, PANAS-PA and PANAS-NA; calculated as post minus pre) with the outcome measures SFE, SPre and AEm.
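As a minimal sketch, a trait–outcome association of this kind can be computed with Spearman’s rank correlation (synthetic scores generated for illustration only; no resemblance to the study’s data is implied):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical NEO-E (extraversion) scores and AEm (avatar embodiment) totals
neo_e = rng.integers(12, 49, size=63).astype(float)
aem = np.clip(0.3 * neo_e + rng.normal(0, 5, size=63), 8, 56)

# Spearman's rank-order correlation, as used for the trait-outcome analyses
rs, p = stats.spearmanr(neo_e, aem)
print(round(rs, 3), round(p, 4))
```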
Tests were two-tailed and significance was set at p < 0.05. Effect sizes are reported as r for non-parametric tests, Cohen’s d for t-tests, and partial η² (η²p) for ANOVAs. Outliers were identified using the ±2.5 SD criterion from the variable means and excluded from the analyses. Specifically, one outlier was identified in STAI-S and PANAS (pre and post), one in PANAS-PA-pre, three in ∆PANAS-NA, and one in ∆STAI-S. Furthermore, two outliers were found in SEf, and one in each of EInv, SPre, N, and E. All analyses were performed in IBM SPSS Statistics 29 (IBM Corp., Armonk, NY, USA).
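The ±2.5 SD exclusion criterion is straightforward to express in code; a minimal sketch (function name and example data are ours, not from the study):

```python
import numpy as np

def exclude_outliers(values, k=2.5):
    """Drop observations farther than k sample standard deviations from the mean."""
    x = np.asarray(values, dtype=float)
    mu, sd = x.mean(), x.std(ddof=1)   # sample mean and SD
    return x[np.abs(x - mu) <= k * sd]

# Example: one extreme score is removed, typical scores are retained
scores = list(range(10, 30)) + [200]
clean = exclude_outliers(scores)
print(len(scores), len(clean))  # 21 20
```

This assumes a single pass of the rule; iteratively re-applying the criterion after each exclusion would be a different (and stricter) procedure.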
A sensitivity power analysis was conducted using G*Power (version 3.1.9.7) to estimate the minimum detectable effect sizes with 80% power (α = 0.05) for the study sample (N = 63). The results indicated that the study had sufficient power to detect between-group effects of approximately d = 0.72, and repeated-measures (within–between interaction) effects of approximately f = 0.18 (η²p ≈ 0.03). Thus, the study was adequately powered for the main ANOVA design but underpowered to detect small between-group effects.
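The between-group sensitivity figure can be approximated analytically. The sketch below uses a normal approximation to the two-sample t-test; G*Power’s exact noncentral-t computation gives the slightly larger d = 0.72:

```python
from scipy.stats import norm

def min_detectable_d(n1, n2, alpha=0.05, power=0.80):
    """Smallest Cohen's d detectable at the given power (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-tailed critical value
    z_power = norm.ppf(power)
    return (z_alpha + z_power) * (1.0 / n1 + 1.0 / n2) ** 0.5

d = min_detectable_d(30, 33)   # group sizes in the present study
print(round(d, 2))  # 0.71, close to the d = 0.72 reported via G*Power
```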

3. Results

3.1. Emotional States in Response to Avatars with High and Low Expressiveness

3.1.1. State Anxiety

Controlling for STAI-T scores, a mixed-design repeated-measures ANOVA revealed no significant main effect of Time [F(1, 58) = 1.159, p = 0.286, η²p = 0.020], Group [F(1, 58) = 0.077, p = 0.782, η²p = 0.001], or Time × Group interaction [F(1, 58) = 2.796, p = 0.100, η²p = 0.046]. Descriptive statistics are presented in Table 2.

3.1.2. Positive Affect

Controlling for STAI-T scores, a mixed-design repeated-measures ANOVA revealed no significant main effect of Time [F(1, 59) = 0.575, p = 0.451, η²p = 0.010], Group [F(1, 59) = 1.201, p = 0.278, η²p = 0.020], or Time × Group interaction [F(1, 59) = 0.030, p = 0.863, η²p = 0.001]. See Table 2 for descriptive statistics.

3.1.3. Negative Affect

Controlling for STAI-T scores, a mixed-design repeated-measures ANOVA revealed no significant main effect of Time [F(1, 58) = 0.008, p = 0.929, η²p < 0.001], Group [F(1, 58) = 0.072, p = 0.789, η²p = 0.001], or Time × Group interaction [F(1, 58) = 2.133, p = 0.150, η²p = 0.035]. Descriptive statistics are provided in Table 2.

3.2. Speech Fluency and Engagement During the Speech Task

The Mann–Whitney U test revealed no significant differences in SFE performance between groups (U = 488.000, Z = −0.098, p = 0.922). See Table 2 for descriptive statistics.

3.3. Subjective Experience and Embodiment in the Virtual Reality Task

The Mann–Whitney U test showed no significant group differences in SEf (U = 438.500, Z = −0.374, p = 0.708), EInv (U = 416.500, Z = −0.892, p = 0.372), or ARes scores (U = 461.000, Z = −0.463, p = 0.643). Similarly, independent-samples t-tests revealed no significant differences between groups in SPre [t(60) = 0.210, p = 0.835] or AEm scores [t(61) = 1.121, p = 0.267]. See Table 2 for descriptive statistics, and Figure 4 for a visual summary of group means (+SEM) for emotional, user experience, and embodiment variables across conditions.

3.4. Associations of Personality Traits and Emotional Variables with Speech Fluency and Engagement, Sense of Presence, and Avatar Embodiment During the Speech Task

Spearman correlations revealed a significant positive association between NEO-E and AEm scores (rs = 0.269, p = 0.035), corresponding to a small-to-moderate effect size. However, no other significant associations were observed for the remaining variables (all ps ≥ 0.104; see Table 3 for further details).
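The trait–outcome associations were estimated with Spearman rank correlations. A minimal sketch on synthetic scores (variable names are illustrative, and the positive association is built in for demonstration):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
neo_e = rng.normal(30, 7, 63)          # simulated extraversion scores, N = 63
aem = neo_e + rng.normal(0, 7, 63)     # embodiment with a built-in positive link

rs, p = spearmanr(neo_e, aem)
# Rough benchmarks: |rs| near 0.1 small, near 0.3 moderate, near 0.5 large
```

Because Spearman's rs operates on ranks, it is robust to the skew and outliers common in questionnaire scores, which is why it was preferred over Pearson correlations here.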

4. Discussion

This study examined whether differences in avatar emotional expressiveness were associated with users’ emotional states (i.e., anxiety and affect), task performance (i.e., SFE scores), and subjective experience in a speech task (i.e., SEf, EInv, SPre, AEm, and ARes scores), and whether N and E, as personality traits, were associated with these outcomes. Contrary to expectations, no effects of avatar expressiveness were found on emotional states, performance, or experiential variables. The only significant association emerged between E and AEm scores, with extraverted participants reporting higher embodiment during the task, although the effect size of this relationship was small. Table 4 summarises the hypotheses of the study (H1–H4) and their respective outcomes.
Regarding the first set of hypotheses (H1–H2), our results do not support prior findings that expressive avatars enhance emotional states, emotional involvement, presence, embodiment, and performance [8,10,11,12]. Several factors may account for this discrepancy. Our solitary mirror-based task lacked an interactive audience, which may have limited the influence of avatar expressiveness. In fact, research in socially rich VR settings has consistently shown that expressiveness becomes impactful when users interact with others. For example, ref. [29] found that, in a collaborative charades task where nonverbal cues were essential, highly expressive avatars improved social presence, attraction, and performance. Similarly, ref. [30] reported that even subtle smile enhancements strengthened positive affect, encouraged the use of positive language, and increased social presence in real-time dyadic interactions, highlighting that such effects emerge in reciprocal social contexts. Research also highlights the importance of timing and contingency in social feedback, showing that when these elements are lacking, the benefits of expressiveness can be undermined. In support of this, ref. [31] found that non-contingent agents reduced rapport and self-rated performance, particularly among socially anxious participants. There is also evidence that the mere presence of a virtual audience can alter both performers’ expressive behaviours and observers’ evaluations. Ref. [32] reported that musicians’ gestures and acoustic features converged differently when an audience was present, and that authenticity and emotional intensity ratings were highest under congruent social conditions. Extending this line, ref. [33] showed that increasing the number of co-located avatars enhances presence, co-presence, and perceived interaction possibilities.
Together, these findings converge in indicating that avatar expressiveness exerts its strongest effects in socially interactive and contingent contexts, whereas in solitary, low-demand settings such as the present study, its impact on emotional and experiential outcomes is likely to be attenuated.
Beyond the absence of social interaction, the relatively low emotional intensity of the mirror-based task may also have contributed to the null results. Although the situation simulated an evaluative context, it lacked the emotional arousal typically triggered by real-time interpersonal feedback. In addition, individual differences in immersive tendencies and perceptual sensitivity to virtual cues could have influenced responsiveness to expressive avatars [19]. Participants with lower presence or emotional absorption in VR may have perceived the environment as less realistic, reducing the salience of facial expressiveness cues and their potential impact on emotional or experiential outcomes.
Other factors may also explain the absence of significant effects. The expressiveness manipulation may have been too subtle to yield measurable differences in users’ emotional states or experiential outcomes. Evidence indicates that when virtual agents (i.e., virtual patients) display only weak or low-intensity expressiveness, effects on affective empathy, similarity, and affective bonding may not emerge, with measurable impacts often confined to limited aspects such as perspective-taking [34]. This suggests that stronger manipulations are required to elicit emotional changes and produce consistent experiential effects. Moreover, the sample size of this study may have limited the statistical power to detect small effects. While the study had adequate power to detect small-to-moderate effects in the main repeated-measures ANOVA design (f = 0.18, η2p ≈ 0.03), the sample size was insufficient to detect small between-group effects (d < 0.72), which may have contributed to the lack of significant differences across conditions. Previous research suggests that avatar manipulations often yield subtle or modest changes in affective or behavioural outcomes, with some effects emerging only for specific expressions such as smiling [35]. Meta-analytic evidence further indicates that the Proteus effect—behavioural conformity to avatar characteristics—operates with small-to-approaching-medium effect sizes [36]. These findings imply that detecting such effects reliably requires larger samples and that studies with limited statistical power may overlook effects of small magnitude.
With respect to personality traits, the third hypothesis (H3) was not supported. Scores on the NEO-N were not significantly associated with SFE, indicating that higher levels of N did not predict lower speech fluency or increased reliance on researcher prompts. This contrasts with prior research showing that higher levels of N are associated with greater anticipatory anxiety and poorer performance in evaluative speaking situations [17], as well as with increased negative affect and reduced fluency in communicative tasks [21]. A likely explanation for this discrepancy lies in the nature of the present task. Unlike settings involving an interactive or critical audience, where evaluative threat is salient, our mirror-based self-presentation may not have elicited sufficient social pressure for an association with N to emerge as statistically significant. Evidence from interactive VR research supports this interpretation, showing that personality traits, public speaking anxiety, and immersive tendencies shape user experience and perceptions of training quality when participants present to responsive virtual audiences [19], and that virtual audiences reliably elicit public speaking anxiety, with individual differences moderating these responses [20]. Taken together, these findings suggest that the relationship between N and performance outcomes is strongly context-dependent, emerging most clearly in tasks that involve evaluative audiences and heightened social threat, but not in solitary, low-demand conditions such as those employed in the present study.
Regarding the fourth hypothesis (H4), it received only partial support. Scores on the NEO-E were positively associated with AEm, indicating that participants who were more extraverted reported a stronger sense of embodiment in the avatar. Research on personality and embodiment in VR has produced mixed results. Some evidence indicates that locus of control, rather than general personality traits, is more strongly related to embodiment components [37], whereas other findings suggest that E and related traits can enhance embodiment and body mindfulness, particularly with human-like avatars [38]. Considering this mixed evidence, our finding of an association between E and embodiment supports the view that personality can modulate virtual body experiences, even if such associations are not consistently observed across studies. However, E was not significantly related to SFE or SPre, suggesting that higher E did not translate into greater speech fluency or sense of presence during the task. This partial pattern of associations contrasts with previous findings linking E more broadly to sociability, expressiveness, and positive conversational dynamics [18]. A likely explanation is that the present task lacked interactive elements within the VR environment. While previous research [18] involved spontaneous social dialogue between partners, the current mirror-based self-presentation required participants to speak in isolation, without reciprocal verbal or non-verbal feedback. Although the researcher listened and occasionally intervened with prompts, this interaction occurred outside the virtual setting, limiting the opportunity for extraverted participants to engage socially and possibly explaining why associations with E were confined to embodiment rather than extending to other experiential variables. 
Taken together, these contrasts suggest that the relationship between extraversion and user experience in VR may depend strongly on the degree of interactivity and social contingency within the task.
In sum, the results for H3 and H4 indicate that, in this mirror-based self-presentation task, the relationships between personality traits, performance, and experience were weak or context-dependent, supporting the view that such associations emerge more clearly under conditions of evaluative threat.
Overall, these results suggest that the impact of avatar expressiveness may depend on complementary social cues within the environment (e.g., audience reactions or dynamic feedback). The association between E and embodiment further suggests that personality traits can play a significant role in determining how users appropriate and engage with their avatars. This pattern supports the view that individual differences may mediate or moderate the effects of avatar design, with richer and more interactive environments likely required to reveal their broader impact on communication outcomes.
One limitation of the present study is that the expressive manipulation was implemented using two different headset models (Meta Quest 2 and Meta Quest Pro). Although the two devices are comparable in display resolution, field of view, and ergonomics, and were used under identical environmental and task conditions, only the Quest Pro includes facial-tracking sensors. This difference defines the expressive manipulation but also implies that future research should replicate the comparison within a single headset platform to fully rule out any potential device-related effects.
Another limitation concerns the absence of a formal manipulation check of participants’ perception of avatar expressiveness. Although embodiment and resemblance were assessed, the perceived emotional expressiveness of the avatars was not directly measured. This omission may limit the interpretation of certain null effects, as it prevents confirming whether participants clearly perceived the intended difference between the expressive and non-expressive conditions. Nevertheless, the contrast between the two conditions was technically and visually evident, as illustrated in the Supplementary Videos, which show both conditions and allow readers to appreciate the difference in expressiveness between avatars. Future studies should nonetheless include specific self-report items or objective motion-tracking data to quantify perceived expressiveness.
Future research should examine avatar expressiveness in richer social contexts, such as tasks involving virtual audiences that provide feedback. In addition, moving beyond correlational approaches, experimental designs are needed to test how personality traits interact with avatar features, allowing stronger conclusions about their moderating role and providing a clearer understanding of how VR environments can be tailored for training, education, or therapeutic purposes. Larger and more diverse samples, including different age groups, together with longitudinal designs, are needed to clarify the durability and generalisability of the effects of avatar expressiveness and to test how these effects may vary according to personality traits.

5. Conclusions

The present study found no evidence that avatar emotional expressiveness was associated with emotional responses (no Time × Group interaction for STAI-S, PANAS-PA, or PANAS-NA; all ps ≥ 0.10), nor were there any group differences in performance (p = 0.92), or subjective experience variables (all ps ≥ 0.27) during the virtual speech task. However, extraversion was positively associated with users’ sense of embodiment (rs = 0.27, p = 0.03), suggesting that individual differences can shape how people experience their avatars. These findings highlight the need to account for both contextual demands and user characteristics when developing VR applications. Further research using richer social scenarios and larger samples is required to clarify the extent to which avatar features and personality are related to users’ emotional states, experience and performance in evaluative VR environments for verbal communication purposes.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app152212082/s1, S1_Data: dataset including all variables analysed in the study; Video S1: example of the high-expressive avatar condition (in Spanish); Video S2: example of the low-expressive avatar condition (in Spanish).

Author Contributions

Conceptualisation, M.M., M.M.-L. and M.-C.J.; data curation and formal analysis, D.P. and S.G.-A.; project administration, funding acquisition and supervision, M.M.-L. and M.-C.J.; resources, D.P., M.M., M.M.-L. and M.-C.J.; investigation, software and validation, D.P. and M.-C.J.; visualisation, D.P., S.G.-A. and M.-C.J.; methodology, writing—original draft preparation, review and editing, D.P., S.G.-A., M.M., M.M.-L. and M.-C.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by MICIU/AEI/10.13039/501100011033 and by ERDF/EU, under Grant PID2023-148731OB-I00 (XREspAva project); and the Gobierno de Aragón under Grant S31_23R for ICST Group.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Research Ethics Committee of the Aragon Community (protocol code: PI22/083; approval date: 9 March 2022) for studies involving humans.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data supporting the findings of this study are available in the Supplementary Materials (S1_Data). Further inquiries can be directed to the corresponding authors.

Acknowledgments

The authors thank the study volunteers.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AEm: Avatar embodiment
ARes: Avatar resemblance
E: Extraversion
EInv: Emotional involvement
HE: High-expressive
LE: Low-expressive
M: Mean
N: Neuroticism
NA: Negative affect
NEO-E: Extraversion subscale of the NEO Five-Factor Inventory
NEO-N: Neuroticism subscale of the NEO Five-Factor Inventory
PA: Positive affect
PANAS: Positive and Negative Affect Schedule
∆PANAS-NA: Changes in negative affect scores (post minus pre)
∆PANAS-PA: Changes in positive affect scores (post minus pre)
SD: Standard deviation
SEf: Self-efficacy
SEM: Standard error of the mean
SFE: Speech fluency and engagement
SPre: Sense of presence
STAI-S: State anxiety scale of the State–Trait Anxiety Inventory
∆STAI-S: Changes in state anxiety scores (post minus pre)
STAI-T: Trait anxiety scale of the State–Trait Anxiety Inventory
VR: Virtual reality

References

  1. Stefan, H.; Mortimer, M.; Horan, B. Evaluating the effectiveness of virtual reality for safety-relevant training: A systematic review. Virtual Real. 2023, 27, 2839–2869. [Google Scholar] [CrossRef]
  2. Yu, L.; Kizilkaya, B.; Qi, L.; Ge, Y.; Ansari, S.; Popoola, O. Beyond the classroom: A systematic review of revolutionizing education with immersive virtual reality. In Proceedings of the 2024 IEEE Conference on Telepresence 2024, Pasadena, CA, USA, 16–17 November 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 67–72. [Google Scholar] [CrossRef]
  3. Wiebe, A.; Kannen, K.; Selaskowski, B.; Mehren, A.; Thöne, A.-K.; Pramme, L.; Braun, N. Virtual reality in the diagnostic and therapy for mental disorders: A systematic review. Clin. Psychol. Rev. 2022, 98, 102213. [Google Scholar] [CrossRef]
  4. Gasch, C.; Javanmardi, A.; Khan, A.; Garcia-Palacios, A.; Pagani, A. Exploring avatar utilization in workplace and educational environments: A study on user acceptance, preferences, and technostress. Appl. Sci. 2025, 15, 3290. [Google Scholar] [CrossRef]
  5. Hariyady, H.; Ibrahim, A.A.; Teo, J.; Suharso, W.; Barlaman, M.B.F.; Bitaqwa, M.A.; Ahmad, A.; Yassin, F.M.; Salimun, C.; Weng, N.G. Virtual reality and emotional responses: A comprehensive literature review on theories, frameworks, and research gaps. ITM Web Conf. 2024, 63, 01022. [Google Scholar] [CrossRef]
  6. Pandita, S.; Yee, J.; Won, A.S. Affective embodiment: Embodying emotions through postural representation in VR. In Proceedings of the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops 2020, Atlanta, GA, USA, 22–26 March 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 616–617. [Google Scholar] [CrossRef]
  7. Suk, H.; Laine, T.H. Influence of avatar facial appearance on users’ perceived embodiment and presence in immersive virtual reality. Electronics 2023, 12, 583. [Google Scholar] [CrossRef]
  8. Radiah, R.; Roth, D.; Alt, F.; Abdelrahman, Y. The influence of avatar personalization on emotions in VR. Multimodal Technol. Interact. 2023, 7, 38. [Google Scholar] [CrossRef]
  9. Pallavicini, F.; Pepe, A. Virtual reality games and the role of body involvement in enhancing positive emotions and decreasing anxiety: Within-subjects pilot study. JMIR Serious Games 2020, 8, e15635. [Google Scholar] [CrossRef]
  10. Suma, T.; Sonia, B.; Baffour, K.A.; Oyekoya, O. The effects of avatar voice and facial expression intensity on emotional recognition and user perception. In Proceedings of the SIGGRAPH Asia 2023 Technical Communications 2023, Sydney, Australia, 12–15 December 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 1–4. [Google Scholar] [CrossRef]
  11. Hyde, J.; Carter, E.J.; Kiesler, S.; Hodgins, J.K. Using an interactive animated avatar’s facial expressiveness to increase persuasiveness and socialness. In Proceedings of the CHI 2015 Conference on Human Factors in Computing Systems 2015, Seoul, Republic of Korea, 18–23 April 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 1719–1728. [Google Scholar] [CrossRef]
  12. Kimmel, S.; Jung, F.; Matviienko, A.; Heuten, W.; Boll, S. Let’s face it: Influence of facial expressions on social presence in collaborative virtual reality. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems 2023, Hamburg, Germany, 23–28 April 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 1–16. [Google Scholar] [CrossRef]
  13. Visconti, A.; Calandra, D.; Lamberti, F. Comparing technologies for conveying emotions through realistic avatars in virtual reality-based metaverse experiences. Comput. Animat. Virtual Worlds 2023, 34, e2188. [Google Scholar] [CrossRef]
  14. Aymerich-Franch, L.; Kizilcec, R.F.; Bailenson, J.N. The relationship between virtual self similarity and social anxiety. Front. Hum. Neurosci. 2014, 8, 944. [Google Scholar] [CrossRef] [PubMed]
  15. Kim, H.; Park, J.D.; Lee, I. “To be or not to be me?” Exploration of self-similar effects of avatars on social virtual reality experiences. IEEE Trans. Vis. Comput. Graph. 2023, 29, 4794–4804. [Google Scholar] [CrossRef] [PubMed]
  16. Etienne, E.; Leclercq, A.-L.; Remacle, A.; Dessart, L.; Schyns, M. Perception of avatars’ non-verbal behaviors in virtual reality. Psychol. Mark. 2023, 40, 2464–2481. [Google Scholar] [CrossRef]
  17. MacIntyre, P.D.; Thivierge, K.A. The effects of speaker personality on anticipated reactions to public speaking. Commun. Res. Rep. 1995, 12, 125–133. [Google Scholar] [CrossRef]
  18. Arellano-Véliz, N.A.; Castillo, R.D.; Jeronimus, B.F.; Kunnen, E.S.; Cox, R.F.A. Beyond words: Speech coordination linked to personality and appraisals. J. Nonverbal Behav. 2025, 49, 85–123. [Google Scholar] [CrossRef]
  19. Chollet, M.; Ghate, P.; Neubauer, C.; Scherer, S. Influence of individual differences when training public speaking with virtual audiences. In Proceedings of the 18th International Conference on Intelligent Virtual Agents (IVA ’18) 2018, Sydney, Australia, 5–8 November 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 1–7. [Google Scholar] [CrossRef]
  20. Yadav, M.; Sakib, N.; Nirjhar, E.H.; Feng, K.; Behzadan, A.H.; Chaspari, T. Exploring individual differences of public speaking anxiety in real-life and virtual presentations. IEEE Trans. Affect. Comput. 2022, 13, 1168–1182. [Google Scholar] [CrossRef]
  21. Palombo, F.; Del Gado, F.; Rugolo, F.; Lasaponara, S.; Busan, P.; Tomaiuoli, D.; Conversi, D. The role of anticipation and neuroticism in developmental stuttering. Front. Psychol. 2025, 16, 1576681. [Google Scholar] [CrossRef] [PubMed]
  22. Spielberger, C.D.; Gorsuch, R.L.; Lushene, R.E. STAI Manual for the State–Trait Anxiety Inventory; Consulting Psychologists Press: Palo Alto, CA, USA, 1970. [Google Scholar]
  23. Guillén-Riquelme, A.; Buela-Casal, G. Psychometric revision and differential item functioning in the State–Trait Anxiety Inventory (STAI). Psicothema 2011, 23, 510–515. [Google Scholar] [PubMed]
  24. Sandín, B.; Chorot, P.; Lostao, L.; Joiner, T.E.; Santed, M.A.; Valiente, R.M. The PANAS scales of positive and negative affect: Factor analytic validation and cross-cultural convergence. Psicothema 1999, 11, 37–51. [Google Scholar]
  25. Watson, D.; Clark, L.A.; Tellegen, A. Development and validation of brief measures of positive and negative affect: The PANAS scales. J. Pers. Soc. Psychol. 1988, 54, 1063–1070. [Google Scholar] [CrossRef]
  26. Costa, P.T.; McCrae, R.R. Normal personality assessment in clinical practice: The NEO Personality Inventory. Psychol. Assess. 1992, 4, 5–13. [Google Scholar] [CrossRef]
  27. Manga, D.; Ramos, F.; Morán, C. The Spanish norms of the NEO Five-Factor Inventory: New data and analyses for its improvement. Int. J. Psychol. Psychol. Ther. 2004, 4, 639–648. [Google Scholar]
  28. Slater, M.; Usoh, M.; Steed, A. Depth of presence in virtual environments. Presence Teleoper. Virtual Environ. 1994, 3, 130–144. [Google Scholar] [CrossRef]
  29. Wu, Y.; Wang, Y.; Jung, S.; Hoermann, S.; Lindeman, R.W. Using a fully expressive avatar to collaborate in virtual reality: Evaluation of task performance, presence, and attraction. Front. Virtual Real. 2021, 2, 641296. [Google Scholar] [CrossRef]
  30. Oh, S.Y.; Bailenson, J.; Krämer, N.; Li, B. Let the avatar brighten your smile: Effects of enhancing facial expressions in virtual environments. PLoS ONE 2016, 11, e0161794. [Google Scholar] [CrossRef]
  31. Kang, S.; Gratch, J.; Wang, N.; Watt, J.H. Does the contingency of agents’ nonverbal feedback affect users’ social anxiety? In Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2008) 2008, Estoril, Portugal, 12–16 May 2008; International Foundation for Autonomous Agents and Multiagent Systems: Richland, SC, USA, 2008; pp. 120–127. [Google Scholar]
  32. Schaerlaeken, S.; Grandjean, D.; Glowinski, D. Playing for a virtual audience: The impact of a social factor on gestures, sounds and expressive intents. Appl. Sci. 2017, 7, 1321. [Google Scholar] [CrossRef]
  33. Latoschik, M.E.; Kern, F.; Stauffert, J.-P.; Bartl, A.; Botsch, M.; Lugrin, J.-L. Not alone here?! scalability and user experience of embodied ambient crowds in distributed social virtual reality. IEEE Trans. Vis. Comput. Graph. 2019, 25, 2134–2144. [Google Scholar] [CrossRef]
  34. Milcent, A.S.; Kadri, A.; Richir, S. Using facial expressiveness of a virtual agent to induce empathy in users. Int. J. Hum.-Comput. Interact. 2022, 38, 240–252. [Google Scholar] [CrossRef]
  35. Mottelson, A.; Hornbæk, K. Emotional avatars: The interplay between affect and ownership of a virtual body. arXiv 2020, arXiv:2001.05780. [Google Scholar] [CrossRef]
  36. Ratan, R.; Beyea, D.; Li, B.J.; Graciano, L. Avatar characteristics induce users’ behavioral conformity with small-to-medium effect sizes: A meta-analysis of the Proteus effect. Media Psychol. 2020, 23, 651–675. [Google Scholar] [CrossRef]
  37. Dewez, D.; Fribourg, R.; Argelaguet, F.; Hoyet, L.; Mestre, D.; Slater, M.; Lécuyer, A. Influence of personality traits and body awareness on the sense of embodiment in virtual reality. In Proceedings of the 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2019) 2019, Beijing, China, 14–18 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 123–134. [Google Scholar] [CrossRef]
  38. Higgins, D.; Mcdonnell, R.; Normand, J.-M.; Fribourg, R. Priming and personality effects on the sense of embodiment for human and non-human avatars in virtual reality. In Proceedings of the ICAT-EGVE 2024—International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments 2024, Tsukuba, Japan, 1–3 December 2024; Eurographics Association: Goslar, Germany, 2024; pp. 1–11. [Google Scholar] [CrossRef]
Figure 1. Views of the Cascadia virtual environment used during the speech task. Participants viewed themselves in the virtual mirror while delivering their self-presentation.
Figure 2. Examples of participant–avatar resemblance: original photographs (A,C) and corresponding avatars (B,D) for a female (A,B) and male (C,D) participant.
Figure 3. Overview of the experimental procedure across the three phases of the study: pre-avatar embodiment, avatar embodiment, and post-avatar embodiment.
Figure 4. Mean values (+SEM) of (A) emotional state variables (PANAS-PA, PANAS-NA, STAI-S; measured pre- and post-task) and (B) performance, user experience, and embodiment variables (SFE, SEf, EInv, ARes, SPre, AEm) for the high-expressive (HE) and low-expressive (LE) avatar groups.
Table 1. Sample characteristics and technology experience.
Variables | Total (n = 63) | HE 1 (n = 30) | LE 2 (n = 33)
Age, years (M 3 ± SD 4) | 21.17 (1.91) | 21.17 (1.98) | 21.18 (1.88)
Sex, n
 Male | 30 | 15 | 15
 Female | 33 | 15 | 18
Technology experience, n
 Computer knowledge
  None | 9 | 5 | 4
  Low | 37 | 16 | 21
  Moderate | 15 | 7 | 8
  High | 2 | 2 | 0
 Video game experience
  None | 12 | 3 | 9
  Low | 20 | 10 | 10
  Moderate | 17 | 10 | 7
  High | 14 | 7 | 7
 VR-headset experience
  None | 25 | 14 | 11
  Low | 36 | 15 | 21
  Moderate | 1 | 0 | 1
  High | 1 | 1 | 0
1 High-expressive avatar group; 2 Low-expressive avatar group; 3 Mean; 4 Standard deviation.
Table 2. Descriptive statistics (mean and standard deviation) for emotional variables, user experience and embodiment in the VR task.
Variables | HE 1 (n = 30) | LE 2 (n = 33)
STAI-T 3 | 21.53 (9.50) | 26.67 (10.05)
STAI-S 4
 Pre | 17.03 (7.50) | 22.09 (10.19)
 Post | 16.20 (7.96) | 17.18 (8.57)
PANAS-PA 5
 Pre | 23.13 (6.88) | 22.39 (6.42)
 Post | 24.07 (6.99) | 23.87 (7.54)
PANAS-NA 6
 Pre | 7.43 (6.46) | 10.67 (7.37)
 Post | 5.33 (4.82) | 5.60 (5.12)
SFE 7 | 3.36 (1.92) | 3.39 (1.80)
SEf 8 | 6.37 (2.09) | 6.15 (2.52)
EInv 9 | 4.90 (1.75) | 5.36 (1.41)
ARes 10 | 51.97 (25.13) | 54.57 (25.36)
SPre 11 | 26.37 (7.31) | 26.15 (7.77)
AEm 12 | 29.23 (7.14) | 31.39 (8.06)
1 High-expressive avatar group; 2 Low-expressive avatar group; 3 Trait anxiety; 4 State anxiety; 5 Positive affect; 6 Negative affect; 7 Speech fluency and engagement during the speech task (no fixed maximum, lower scores = higher fluency); 8 Self-efficacy (maximum score = 10); 9 Emotional involvement (maximum score = 8); 10 Avatar resemblance (maximum score = 100); 11 Sense of presence (maximum score = 42); 12 Avatar embodiment (maximum score = 56).
Table 3. Spearman correlations between personality traits and emotional variables with speech fluency and engagement, sense of presence, and avatar embodiment during the speech task.
Variables | SFE 7 | SPre 8 | AEm 9
NEO-N 1 | rs = −0.002 (p = 0.98) | rs = −0.083 (p = 0.52) | rs = −0.086 (p = 0.51)
NEO-E 2 | rs = −0.208 (p = 0.10) | rs = 0.018 (p = 0.89) | rs = 0.269 * (p = 0.03)
STAI-T 3 | rs = −0.022 (p = 0.87) | rs = −0.093 (p = 0.47) | rs = −0.165 (p = 0.20)
∆STAI-S 4 | rs = −0.058 (p = 0.65) | rs = −0.101 (p = 0.44) | rs = −0.207 (p = 0.11)
∆PANAS-PA 5 | rs = 0.185 (p = 0.16) | rs = 0.121 (p = 0.35) | rs = −0.040 (p = 0.76)
∆PANAS-NA 6 | rs = −0.148 (p = 0.25) | rs = 0.026 (p = 0.85) | rs = −0.008 (p = 0.95)
1 Neuroticism; 2 Extraversion; 3 Trait anxiety; 4 Changes in state anxiety (post minus pre); 5 Changes in positive affect (post minus pre); 6 Changes in negative affect (post minus pre); 7 Speech fluency and engagement; 8 Sense of presence; 9 Avatar embodiment. * p < 0.05.
Table 4. Summary of hypotheses and outcomes.
Hypothesis | Description | Findings
H1 | Participants embodied in a highly expressive avatar were expected to exhibit greater emotional changes, involvement, presence, and embodiment than those embodied in a low-expressive avatar. | Not supported—no group differences in emotional or experiential variables.
H2 | Participants in the high-expressive condition were expected to show better speech performance. | Not supported—no differences in speech fluency or engagement.
H3 | Higher neuroticism was expected to be associated with greater anxiety and lower speech fluency. | Not supported—neuroticism was not significantly associated with performance or experience.
H4 | Higher extraversion was expected to be associated with greater presence, embodiment, and speech fluency. | Partially supported—extraversion correlated positively only with embodiment.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Ponce, D.; Garces-Arilla, S.; Mendez, M.; Mendez-Lopez, M.; Juan, M.-C. Speaking Through an Avatar: Emotional Expressiveness, Individual Differences, User Experience and Performance. Appl. Sci. 2025, 15, 12082. https://doi.org/10.3390/app152212082


