Article

Stress Situations and Speech Fluency: A Pilot Study of Oral Presentations in Immersive Virtual Reality Environments

1 Department of Health Rehabilitation Sciences, University of Bío-Bío, Chillán 3780000, Chile
2 Department of Computer Science and Information Technology, University of Bío-Bío, Chillán 3780000, Chile
3 Department of Visual Communication, University of Bío-Bío, Chillán 3780000, Chile
* Author to whom correspondence should be addressed.
Behav. Sci. 2025, 15(12), 1652; https://doi.org/10.3390/bs15121652
Submission received: 7 October 2025 / Revised: 25 November 2025 / Accepted: 27 November 2025 / Published: 1 December 2025

Abstract

This pilot study investigates the relationship between stress situations and speech fluency in virtual reality environments. It aims to analyze how different stress scenarios, classified into low-, medium-, and high-stress environments, can affect speech rate, increase syllable/word repetitions, and lead to hesitations in university students. Previous research has established connections between stress situations and speech fluency, highlighting that stress can negatively influence behavior, cognitive processes, and communicative performance across various contexts, including oral presentations. An experiment was conducted with 30 participants randomly divided into three groups. Each group was exposed to different virtual stress environments (low/medium/high) during simulated oral presentations. A virtual reality platform was created to establish controlled environments and monitor the participants’ fluency in real time. An Analysis of Variance (ANOVA) test revealed that participants in the low-stress virtual environment performed better, achieving higher word and syllable production. In contrast, the high-stress virtual environment demonstrated an increase in disfluencies and hesitations. The results emphasize the impact of stress situations on oral communication and advocate for the use of virtual reality technology as a means of preparing individuals for challenging speaking scenarios. This approach has the potential to enhance speech fluency through targeted practice in stress-inducing environments, thereby alleviating anxiety and improving overall communicative efficacy.

1. Introduction

Speech fluency is defined as the ability to produce continuous, clear, coherent, and uninterrupted speech, making it a fundamental skill for oral communication (Sandoval et al., 2022). This ability is an indicator not only of linguistic competence, but also of the speaker’s executive, cognitive and emotional skills (Guitar, 2019). Oral presentations, in turn, are essential tools for personal and professional development, where fluency plays a key role. Delivering a good oral presentation facilitates effective communication and positively influences perceptions of an individual’s performance and confidence in academic or work environments (Dunbar et al., 2006).
Research on speech fluency has shown that environmental, biological, cognitive and emotional factors of the individual can influence this ability. For example, one of the factors having an impact on fluency is stress (Yan, 2019). In this context, virtual reality (VR) offers an immersive and interactive platform, allowing the simulation of stress situations in oral presentation scenarios (e.g., a work meeting) using a methodology that cannot be replicated by traditional tools (Olszewski et al., 2016; Lara et al., 2019). VR creates realistic and controlled environments that allow researchers to manipulate key variables such as audience size, feedback, and presentation settings, a capability unmatched by traditional methodologies. Studies indicate that exposure to VR can provoke physiological stress responses similar to those encountered in actual public speaking scenarios, enabling a deeper understanding of how individuals manage anxiety in these contexts (Schröder & Mühlberger, 2023; Martens et al., 2019). Furthermore, VR technology can incorporate biometric feedback systems for real-time monitoring of physiological responses, offering a comprehensive evaluation of stress reactions during simulated presentations. Additionally, VR environments can replicate various social dynamics reflective of real-life interactions, allowing participants to practice and address their stress in a safe and immersive context (Kroczek & Mühlberger, 2023).
According to Jackson et al. (2016), the stress experienced by speakers in different situations could significantly affect their oral performance. For example, high-pressure VR scenarios (e.g., an end-of-year job speech) coupled with specific stressful situations (e.g., a phone ringing during a presentation) would further increase the individual’s stress, which could hypothetically be reflected in reduced fluency. In addition, stress activates physiological responses in the body that could hinder the articulation and cohesion of ideas, thus exacerbating speech production blockages (Peeters, 2019).

1.1. Stress Situations and Speech Fluency

Stress is defined as a state of physical or mental exhaustion resulting from a negative interaction between the person and his or her environment. This phenomenon occurs when environmental demands exceed the individual’s capacity to respond, creating an imbalance that can trigger adverse physiological and emotional responses (Cohen et al., 2016). As a result, stress situations could affect behavior, thoughts and communicative performance in various activities, such as oral presentations. Moreover, distracting elements in the environment, such as unexpected noises or events, can exacerbate the effects of stress and interfere with verbal memory and word production (Arrivillaga et al., 2020). As a result, speakers focus on their stress rather than on the content they want to communicate, leading to errors such as hesitations and repetitions (Serna, 2023).
The effect of stress situations on different speech parameters, such as speech rate, syllable repetition, word repetition and hesitations, has been documented in several investigations (Gómez, 2018; Kappen et al., 2022). For example, stress situations could significantly alter the rate of speech production, which is often measured in terms of syllables per minute or words per minute (de Oliveira & Furquim, 2008; Guitar, 2019). In stressful situations, speakers tend to show a slower rate of speech due to increased cognitive load, tension and discomfort. Pisanski et al. (2016) suggest that physiological responses to stress may modulate speech characteristics, including rate. Thus, stress and distracting situations create a negative cycle that degrades a person’s fluency and communicative efficiency.
In addition, stress situations could increase the repetition of syllables and words during speech, leading to the presence of disfluencies. During periods of high stress, people repeat more syllables or words as a coping mechanism, demonstrating a “struggle” to maintain speech continuity without interruptions (Buchanan et al., 2014). In addition, the cognitive demands associated with delivering a high-stress oral presentation could affect memory retrieval, making it difficult to access the lexical system (Rojas et al., 2022) and leading to increased syllable or word repetitions (Yan, 2019).
Hesitations—long pauses or the use of filler words such as ‘eh’ or ‘well’—would also be common in speech under stress situations. Research has found that adolescent speakers show increased markers of hesitation when faced with stress in speech tasks (Gokgoz-Kurt, 2023). These interruptions affect not only fluency but also listener perception, as they may convey insecurity or lack of knowledge (Gokgoz-Kurt, 2023). Baird et al. (2019) mention that speech produced in stress-induced scenarios often shows less fluency and an increase in interruptions due to physiological arousal, which complicates speech articulation (Contreras et al., 2020), thus highlighting the importance of designing challenging virtual environments to train users in high-stress situations.

1.2. Virtual Reality and Stress Scenarios

The concept of “virtual reality” was introduced in 1965 by Ivan Sutherland, who published a seminal article that laid the foundation for the term (Wohlgenannt et al., 2020). Over time, the equipment associated with VR systems has gradually evolved, leading to their active adoption in a wide range of fields. In the clinical context, their use has been explored to efficiently educate, assess and train individuals (Kim et al., 2021). This immersive capability allows participants to immerse themselves in situations that are difficult to replicate in real life, fostering interactions that increase both immersion and user engagement (Lara et al., 2019).
Mel Slater’s contributions on how immersive VR can induce stress and anxiety during oral presentations, specifically public speaking, particularly through the induction of presence—the sensation of truly “being there” in a virtual setting—have been foundational. Slater et al. (2009) defined the concepts of place illusion and plausibility illusion, illustrating why individuals experience genuine stress when presenting to virtual audiences. His earlier work (Slater et al., 2006) involved measuring real stress responses by employing physiological sensors—such as heart rate, skin conductance, and brain activity—to demonstrate that VR social scenarios, like addressing a virtual crowd, provoke authentic anxiety responses analogous to those faced in real-life situations. Further enhancing the scientific rigor of VR, Slater et al. (2010) established methods to objectively quantify presence, moving beyond reliance on subjective questionnaires. This validates the notion that even simplistic virtual audiences can elicit notable public speaking anxiety, particularly when participants receive negative feedback.
The design of the virtual stage also plays a crucial role in how stress manifests. For instance, spaces that simulate large audiences and unexpected situations, such as interruptions from ringing mobile phones, people coughing, etc., could increase participants’ sense of stress, having a negative impact on speech fluency, increasing blocking, hesitation and extension (Glémarec et al., 2021). According to Davydov et al. (2005), at a physiological level, perceived stress levels are associated with an increase in hormones such as epinephrine, which would have direct effects on the cognitive and motor functions necessary for speech production (Kim et al., 2021).
VR-based intervention programs have been shown to be effective in speech therapy and stress management in public presentation situations (Brock, 2023; Huang et al., 2020; Notaro et al., 2021; Tentu et al., 2024), allowing participants to experience and learn to manage their emotional reactivity in a controlled environment (Díaz-Pereira et al., 2025; Thrasher, 2022). This approach not only provides a safe space for users to practice and improve their communication skills, but also allows them to face the same circumstances they would in real life (Tan et al., 2025). Immersion in a virtual environment facilitates learning and practice by providing more authentic experiences than traditional forms of training, such as classroom practice or the use of more conventional methods (Kryston et al., 2021; Peng et al., 2018).
In addition, the ability to evaluate the participant’s speech more objectively is one of the key advantages of VR. The incorporation of biofeedback techniques allows not only fluency but also other physiological responses to stress to be monitored (Kudo et al., 2014). This combination improves the quality of learning, as it allows tailoring interventions to different levels of stress situations and provides a framework for implementing personalized strategies that optimize communicative competence (García et al., 2012). All in all, VR is a valuable resource for exploring stressful situations and training fluency in them. It enables participants to confront challenging scenarios in a secure and managed setting, enhancing persuasion and charisma while alleviating anxiety (Valls-Ratés et al., 2023). This fosters an environment conducive to learning and personal development.

1.3. The Present Study

This article addresses an important area at the intersection of stress situations and speech fluency in VR environments. Speech fluency is a fundamental component of effective communication, so it is important to understand how stress situations may affect this ability during oral presentations. Recent publications have shown that interruptions or unusual situations during oral presentations (e.g., external noise or conversations) can increase the physiological stress experienced by speakers (Battambang University Research Team, 2025; Li et al., 2025; Starr, 2025; Sörqvist et al., 2024). This study aims to analyze variations in speech rate, extensions, syllable repetition, and hesitation frequency experienced by participants under different levels of stress situations (low, medium and high) in a controlled virtual environment. Considering the innovative nature of the technological proposal applied to speech and behavioral sciences, particularly concerning stress situations, a controlled pilot study will be conducted. This pilot study will serve as a foundational assessment, designed to evaluate the effectiveness of the VR environments created for simulating stress in oral presentations. By using advanced VR technology, we can obtain an authentic representation of the real-world stressors that individuals face when performing public speaking tasks (Brundage et al., 2006; Díaz-Pereira et al., 2025; Satake et al., 2023). By focusing on how VR can simulate stressful situations in a controlled manner, it is anticipated that the results of this research will not only enrich knowledge about the challenges inherent to communication and, in particular, public speaking, but will also provide an initial empirical basis for the development of practices aimed at improving expository skills in diverse populations.

2. Materials and Methods

2.1. Participants

A sample of 30 third-year Speech and Language Therapy students from the sponsoring university, who received a bonus of one credit for their elective courses, was recruited through voluntary self-selection. Participants were aged between 19 and 21 years and included both males and females. Specific eligibility criteria were defined: inclusion criteria required participants to be university students whose visual or hearing impairments, if present, could be corrected with glasses and/or hearing aids. Participants who self-reported speech or voice disorders, cardiovascular disease, balance disorders, or diagnoses such as depression or neurological disorders were excluded from the study. Participants who reported high levels of academic stress (according to the survey used) were also excluded. To ensure the integrity of the findings and maintain a tightly controlled study group, the pilot study systematically selected participants without existing conditions. By restricting participation to those without heightened academic stress or other significant impairments, this pilot study allows for a more focused examination of stress effects on speech fluency in VR environments. Furthermore, to ensure the participants’ understanding of the nature of the study and its implications, as well as their voluntary participation, an informed consent form approved by the scientific ethics committee of the sponsoring university was read and signed by the subjects (code cecubb2024/1).
The 30 participants who made up the sample were randomly divided into three groups of ten, so that each group was exposed to different levels of stress (low, medium, high). The participants had to give an individual oral presentation (on a specific topic) in a VR scenario with different stress situations. For the oral presentation, the participants had a time limit of 120 s. Participants in group 1 were exposed to a low level of stress (scenario 1), as they only faced two stressful situations during their oral presentation. The situations were presented in the 20th and 40th second of the presentation. In contrast, participants in Group 2 were exposed to a medium level of stress, as they were faced with five stressful situations presented in the 10th, 30th, 50th, 70th and 90th second of the presentation (scenario 2). Finally, group 3 was exposed to a high level of stress. In this scenario they were presented with eight stressful situations, which appeared at seconds 5, 20, 35, 50, 65, 80, 95 and 110 (scenario 3).
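The onset times described above follow simple arithmetic progressions within the 120 s presentation window. As an illustration (the function and scenario labels are ours, not part of the authors' platform), the three schedules can be generated as follows:

```python
def stress_schedule(first_s: int, interval_s: int, count: int) -> list[int]:
    """Onset times (in seconds) of the stress situations during the 120 s talk."""
    return [first_s + i * interval_s for i in range(count)]

# Scenario timings as described in the text (labels are illustrative):
SCENARIOS = {
    "low":    stress_schedule(20, 20, 2),  # seconds 20 and 40
    "medium": stress_schedule(10, 20, 5),  # seconds 10, 30, 50, 70, 90
    "high":   stress_schedule(5, 15, 8),   # seconds 5, 20, ..., 110
}
```

Note that every onset falls strictly inside the 120 s time limit, so each stress situation occurs while the participant is still presenting.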
The stress scenarios were defined in levels from lower to higher, as there is evidence that repeated exposure to stressful situations generates more stress due to the accumulation of fatigue that compromises the individual’s physiological resources. This phenomenon is explained by the overloading of regulatory systems and the decrease in resilience, creating a vicious cycle in the stress response (McEwen, 2004; McEwen & Gianaros, 2010). Therefore, and in line with some studies that claim that interruptions during oral presentations increase the speaker’s stress (Battambang University Research Team, 2025; Li et al., 2025; Starr, 2025; Sörqvist et al., 2024), the more stressful situations the participants were presented with, the more stress they could accumulate. For more details on the stress situations and scenarios created, see Table 1.

2.2. Materials and Design

2.2.1. Platform Generation—Web and Virtual Reality Components

Prior to the experimental phase, a remote platform was created to store data, generate the desired VR scenarios and record the results, among other functions. The first component was the web application, which comprised two subcomponents: the backend and the frontend, interacting closely through REST-type services, thus favoring flexibility, scalability and better system performance. The backend subcomponent consisted of the database and the code implementing the logic of the web application, using MySQL 8.0 for its high performance and scalability, particularly suitable for smaller projects. The web application was implemented using the Django framework (version 4.2) together with the Python 3.10 programming language, ensuring a modular and robust architecture structured according to the Model-View-Template (MVT) pattern, which facilitated the integration of future functionalities based on artificial intelligence. The frontend was responsible for data visualization in the web browser, presenting visual interfaces to the user using technologies such as HTML, CSS and JavaScript.
The second component of the platform was the VR application, which had to provide an experience that simulated oral presentations in a highly immersive environment, allowing students to practice their speeches in realistic conditions, with different levels of distraction and pressure. Key features included the simulation of a virtual audience that reacted in different ways to the speaker’s speech, and the modulation of external stress-generating elements such as ambient noise or audience interruptions. This component allowed us to configure various aspects of the exhibition scenario, such as audience size, lighting and distractors, creating a controlled environment that could be adapted to different situations. From the web interface, the exhibitor visualized a three-dimensional environment, while the collected data, including audio recordings, were transmitted to the web platform for further analysis. In this way, the combination of these technologies not only provided an immersive experience, but also facilitated the assessment of students’ fluency in a realistic pressure context. The implementation of this component was carried out using the Unity VR platform, using the C# programming language and optimized for use in the Oculus Quest 2 viewer (to understand the interaction between the components, see Figure 1).

2.2.2. Generation of the Topic for Oral Presentation

The experiment required all participants to deliver an impromptu oral presentation on the same topic. Subjects were informed about the topic two days prior to presenting. To this end, a current, general-interest text was produced, providing all participants with a baseline of knowledge on the subject.
The structure of the text was as follows: (1) A short survey was carried out among the students in order to define the specific topic to be dealt with. The topic with the highest frequency of occurrence (use of social networks) was chosen. (2) Relevant information on the topic was collected through grey literature (magazines, newspapers or websites of interest). (3) Two language teachers, independent of this research, using the collected material, prepared the final text. (4) The text underwent a thorough review and editing process by the research team to rectify any grammatical and content-related errors. This diligent scrutiny ensured that the information presented is both accurate and original. For access to the revised text, refer to Supplementary Material S1.

2.3. Procedure

The procedure consisted of three stages. In the first stage, the lead researcher met with all second- and third-year students at the sponsoring university to invite them to participate in the research. The general objectives were explained, and a schedule was drawn up for the registered participants. Sessions were set to be individual and confidential. The day prior to the evaluation, subjects received the previously prepared information text, “Influence of social networks in university life”. Additionally, participants were advised to wear comfortable clothing so they could fully experience the VR session.
The second stage consisted of a personal interview with each participant. This took place in a booth in the Language, Speech and Cognition Laboratory of the sponsoring university and lasted 15 min per participant. The interview included: (1) taking a medical history to establish eligibility; (2) a perceptual assessment for speech or voice disorders; (3) administration of the Academic Stress Guideline (see Supplementary Material S2); (4) an explanation of the procedures to be performed and a request for confidentiality; (5) reading and signing of the informed consent form. Participants who met the eligibility criteria and signed the consent form proceeded to stage 3 of the study.

Experimental Procedure

The experimental procedure (Stage 3) was carried out after the interview and lasted approximately 10 min per participant. The evaluation took place in the VR laboratory of the sponsoring institution. First, the participant’s ID was registered in the web platform and a scenario was randomly assigned (low/medium/high stress). The VR headset was then fitted to ensure the participant’s comfort. They were shown the default settings of the software without being introduced to any visual scenario, and were also instructed on the use of commands, buttons and scrolling in the virtual environment. To initiate the VR experience, all participants were immersed in a Japanese garden-like environment (see Figure 2), without any external stressful situation. They were instructed to simply observe their surroundings and enjoy the music.
Once the Japanese garden environment was closed, the participants were instructed to prepare an oral presentation on the topic previously analyzed (the influence of social networks on university life), in which they were asked to give a lecture on the content presented in the text and to express their opinions on the topic. As mentioned above, the three scenarios corresponded to the auditorium of the sponsoring university (the scenarios differed only in the number of stress situations presented), so it was a virtual setting familiar to the participants, which could generate a high level of stress for them (see Figure 3). At the beginning, the specific instruction was: “OK, if you’re ready, we can go. You have time, so take it easy.” At this point, the stage was opened and the participant could proceed. In case of prolonged silence during the presentation, the evaluator could give feedback only once, with one of the following prompts: “what else can you say”, “give more details”, “what is your opinion about it”, “relax, tell me more, there is still time”. The oral intervention was recorded using the built-in microphone of the Meta Quest 2 Oculus headset (Reality Labs, Menlo Park, CA, USA) and the VR software.
Once the two-minute presentation was over and the stage was closed, participants were asked how they felt and what they thought of the activity in general. The headset and equipment were removed. Following the completion of their presentations, students were required to complete a structured satisfaction survey comprising five dichotomous (‘yes/no’) questions designed to capture their overall experience (to view the survey and its results, please check Supplementary Material S3). The purpose of this survey was to assess various dimensions of their engagement in the VR environment, including enjoyment, ease of navigation, perceived realism, quality of guidance provided, and willingness to recommend the platform to peers.

2.4. Data Analysis

The analysis of speech fluency in this study adheres to the criteria defined by Guitar (2019). Specifically, the first 200 words were extracted from speech samples and several fluency parameters were measured: total words, total syllables, words per minute (Tot-words/min), syllables per minute (Tot-syll/min), extensions, hesitations and syllable repetitions. This analysis was conducted using a systematic procedure aimed at ensuring accuracy and repeatability. First, a thorough auditory review was conducted of each participant’s oral presentation recordings. Second, the first 200 words spoken by each participant were transcribed meticulously. Third, the total number of words and syllables were counted. Then, incidences of extensions, hesitations, and syllable repetitions were systematically recorded. Finally, the processing rate was calculated in terms of words per minute, and the articulatory rate in terms of syllables per minute. The data were stored using Microsoft Excel, version 2024.
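The two rate measures reduce to simple divisions over the presentation duration. A minimal sketch (function name and example figures are ours, for illustration only):

```python
def speech_rates(total_words: int, total_syllables: int,
                 duration_s: float) -> tuple[float, float]:
    """Return (words per minute, syllables per minute) for one speech sample.

    Words per minute is the processing rate; syllables per minute is the
    articulatory rate, following the measures named in the text.
    """
    minutes = duration_s / 60.0
    return total_words / minutes, total_syllables / minutes

# Illustrative example: 200 words and 244 syllables over the 120 s presentation
wpm, spm = speech_rates(200, 244, 120.0)  # -> (100.0, 122.0)
```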
Descriptive statistics were computed for each of the study variables. A nonparametric inferential analysis was then performed using an ANOVA-type test in Python 3.12 (2024). The ANOVA test is a statistical tool commonly used to compare means across three or more groups; however, its use rests on several assumptions, the most critical being normality of the data and homogeneity of variances. When working with small samples (as in the present pilot study) and there is evidence that the data do not meet the normality assumption, a nonparametric alternative is warranted, as it depends less on assumptions about the distribution of the data. Although ANOVA can be relatively robust to violations of normality, its statistical power may suffer under such violations (Kikvidze & Moya-Laraño, 2008).
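The paper does not name the specific nonparametric procedure used. A common rank-based analogue of one-way ANOVA is the Kruskal-Wallis test; the following is a minimal sketch of its H statistic (omitting the tie-variance correction), not a reproduction of the authors' analysis code:

```python
from itertools import chain

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic: a rank-based, nonparametric
    analogue of one-way ANOVA (tie-variance correction omitted)."""
    pooled = sorted(chain.from_iterable(groups))
    # Assign each distinct value the mean of the 1-based ranks it occupies.
    rank_of = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2.0  # mean of ranks i+1 .. j
        i = j
    n = len(pooled)
    # H = 12 / (n(n+1)) * sum(R_g^2 / n_g) - 3(n+1)
    h = sum(sum(rank_of[x] for x in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n * (n + 1)) * h - 3.0 * (n + 1)
```

With three groups of ten, as here, H is compared against a chi-squared distribution with 2 degrees of freedom; in practice a library routine such as SciPy's `scipy.stats.kruskal` would be used rather than a hand-rolled version.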
Upon identifying significant overall differences through ANOVA, a follow-up procedure involving post hoc tests is essential for clarifying specific group differences. Bonferroni correction is a post hoc test adjustment used in statistical analysis to counteract the problem of multiple comparisons. When performing multiple hypothesis tests simultaneously (e.g., comparing several groups in ANOVA), the chance of obtaining a false positive (Type I error) increases. The Bonferroni correction controls the family-wise error rate (FWER) by dividing the desired overall alpha level (usually 0.05) by the number of comparisons being made.
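The adjustment described above is a single division; as a small illustration (the function name is ours):

```python
def bonferroni_threshold(alpha: float, n_comparisons: int) -> float:
    """Per-comparison significance level under the Bonferroni correction,
    controlling the family-wise error rate at `alpha`."""
    return alpha / n_comparisons

# Three pairwise comparisons between the scenarios (1 vs 2, 1 vs 3, 2 vs 3):
per_test_alpha = bonferroni_threshold(0.05, 3)  # ~0.0167
```

Each pairwise p-value is then declared significant only if it falls below this stricter per-comparison threshold.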

3. Results

The tables below show the mean, standard deviation, mode, minimum, maximum and significant values for each fluency variable in each of the scenarios. The sample consisted of 30 people with an average age of 21 years (24 females, 6 males). In general, the tables show variations between the scenarios, resulting in noticeable changes in word and syllable production, as well as in disfluencies, extensions and hesitations. These results not only illustrate the influence of stress situations on oral presentations, but also highlight important implications for the design of virtual presentation environments that support communicative fluency.
Scenario 1, which included 2 stress situations, showed an adequate level of performance in terms of fluency. Although the number of extensions and hesitations was moderate, the overall production of words and syllables suggested a reasonable level of confidence among the participants. Exposure to only 2 stress situations seems to favor comfort, as most participants presented a minimal number of syllable repetitions and hesitations (see Table 2).
Table 3 shows the results of scenario 2 (5 stress situations). Similarly to scenario 1, adequate oral production is observed, but the increased number of extensions and hesitations suggests greater nervousness or stress in response to the different situations presented. This could indicate that although the participants are able to generate large amounts of speech, the context causes disfluencies that affect the fluency of their speech.
Scenario 3, on the other hand, which consisted of 8 stressful situations, showed a more limited oral production performance compared to the previous ones, which could be associated with a higher level of stress or difficulties in presenting in a less familiar or more tiring environment. The lower rates of verbal production, together with the high number of syllable repetitions, suggest that although the participants managed to maintain the discourse, they faced significant challenges, which were reflected in greater extensions and hesitations (see Table 4).
Comparing the results of the three scenarios, trends in verbal production and disfluencies were observed. Scenario 1 stood out with the highest fluency and production, being the most favorable. Scenario 2 also had high production, but with more disfluencies, which could be related to greater nervousness. Finally, Scenario 3 showed that the participants were challenged in terms of fluency, with more hesitations and extensions. This analysis highlights how the conditions of each scenario influence verbal performance (and presumably communication strategies), and suggests that a controlled environment may facilitate natural expression, while a less familiar and more tiring environment increases disfluencies.
The comparisons between groups are shown below. These were determined using a nonparametric ANOVA statistical test, followed by a Bonferroni post hoc test to correct for multiple comparisons and identify significant differences between scenarios based on the calculated variables (see Table 5).
Analysis of the results showed that Scenario 1 had a higher average word (198.4) and syllable (244.0) production than the other two scenarios, suggesting a more favorable environment for fluency. Differences in the rate of words per minute were also significant, with an average of 134.2 in Scenario 1 compared to 74.2 in Scenario 3, where the lowest performance was recorded (p < 0.05). In addition, extensions and hesitations were more pronounced in the third scenario (25 and 1.2), suggesting that participants may have experienced higher levels of stress, leading to greater difficulties in fluency.
On the other hand, the data from Scenario 2 showed a comparable average number of words (199.3) and syllables (238.9), but with the highest incidence of hesitations (2.4), suggesting that although participants maintained a high level of verbal production, contextual pressures may have caused significant disfluencies. Further comparison of the mean scores across the three scenarios revealed that Scenario 3 was characterized by a higher number of extensions (25.0) and lower word and syllable production, explaining the variation in overall scores (p < 0.05). These results illustrate the direct influence of the presentation context on participants' verbal fluency and justify the need to analyze and adapt exposure environments in order to optimize communicative expression in academic and professional situations.
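As a minimal illustration of how the per-minute rates reported in Tables 2–4 can be derived, the sketch below computes words and syllables per minute from raw counts and presentation duration. The function name and the example values are hypothetical; the study's platform derives the counts from recorded speech:

```python
# Minimal sketch: per-minute fluency rates from raw counts and duration.
def fluency_rates(total_words: int, total_syllables: int, duration_s: float) -> dict:
    """Return words-per-minute and syllables-per-minute, rounded to 1 decimal."""
    minutes = duration_s / 60.0
    return {
        "words_per_min": round(total_words / minutes, 1),
        "syll_per_min": round(total_syllables / minutes, 1),
    }

# Example: a 90-second presentation with 198 words and 244 syllables
# (values chosen to echo the order of magnitude in Table 2).
rates = fluency_rates(198, 244, 90.0)
print(rates)  # {'words_per_min': 132.0, 'syll_per_min': 162.7}
```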

4. Discussion

The interaction between stress situations and speech fluency during oral presentations in VR environments is an important area of research in fields such as psychology, linguistics, and education. It has been documented that stress-induced anxiety can affect an individual's communicative performance, making their speech less coherent and less fluent (Grieve et al., 2021). Findings from previous research highlight the negative effect that stress situations can have on speech fluency during oral presentations, emphasizing that stress interacts with multiple variables such as speech content and the context of the presentation environment (Battambang University Research Team, 2025; Li et al., 2025; Starr, 2025; Sörqvist et al., 2024). In particular, these studies show that different levels of stress significantly affect parameters of verbal performance such as total words produced and fluency rates (Dwyer & Davidson, 2021). For instance, Grieve et al. (2021) found that students who experienced high levels of oral presentation anxiety tended to avoid public speaking situations, which impacted their communicative development. The data obtained here indicate that participants performed best in Scenario 1, where stress was minimal, reaching an average production of 198.4 words. These results are consistent with previous research suggesting that lower levels of stress correlate with improved speech quality and fluency, underscoring the importance of a relaxed environment for effective speech.
On the other hand, Scenario 3 demonstrated the challenges associated with increased levels of stress situations, reflected in a significantly lower average total number of words produced (178.8) and a worrying increase in extensions (25.0). This increase in stress-related disfluency and nervousness is consistent with the findings of studies conducted by Kappen et al. (2022), who noted that intense stress can lead to a decrease in both speech rate and fluency due to cognitive overload. Other studies have suggested that stress situations could lead to an internal struggle between the individual’s desire to communicate effectively and the emotional pressure they feel, affecting their ability to articulate clearly (Davis et al., 2020). Interestingly, additional research supports the claim that stress can cause a cognitive load that limits the speaker’s ability to access their lexicon and articulate their thoughts (Díaz-Pereira et al., 2025; Satake et al., 2023).
In this regard, it has been documented that high levels of stress can lead to increased anxiety and nervousness, resulting in a reduced rate of speech and more hesitations. In the current study, participants who experienced elevated levels of stress (Scenario 3) showed lower average word production, accompanied by a marked increase in syllable extensions and hesitations. For example, an average of 25.0 extensions reflects how emotional load can hinder fluent speech, as stress can act as a significant distractor. Research suggests that in situations of high anxiety, individuals tend to divert their attention to their own fears and worries, which hinders lexical access and produces hesitations in verbal production (Chard & van Zalk, 2022; Peeters, 2019). Thus, stress detection and management become critical to facilitating more fluent and effective speech in oral presentations, especially in contexts where VR adds an additional layer of complexity and pressure.
An unusual finding is worth noting: given the stressful conditions of the oral presentations, the mean numbers of extensions in Scenarios 1 and 3 (21 and 25, respectively) can be considered expected. Surprisingly, however, Scenario 2 yielded a significantly lower number of extensions (mean = 9.2) than Scenario 1. While a definitive explanation is not possible, we hypothesize that this difference stems from the specific composition of Group 2, which included several confident speakers who were less susceptible to stress and thus produced fewer extensions. Supporting this view, the highest individual score in Group 2 was 28 extensions, compared with 36 in Group 1 and 47 in Group 3.
The use of VR as a platform for this study provides a critical framework to analyze the dynamic interaction between stress situations and speech. VR provides a uniquely immersive experience that can replicate high-stress situations, allowing for a deeper understanding of how individuals react under stress. In this regard, research has shown that VR can increase the perception of realism in simulated situations, which is relevant as more realistic environments could exacerbate the stress response and negatively affect communication. For example, Peeters (2019) reports that simulating stressful scenarios in VR could induce emotional and physiological responses comparable to those observed in real-life situations. Similarly, Lim et al. (2023) concluded that VR environments designed to induce stress could affect communicative effectiveness and create an increased sense of fatigue in users. This phenomenon has been observed in several studies documenting an increase in hesitations and extensions of speech, confirming that the associated emotional load can affect fluency (Lim et al., 2023).
The results of Scenario 2 support this notion, as participants showed a marked increase in hesitations (2.4). This increase in disfluency is supported by literature suggesting that highly stressful simulated environments activate physiological responses that may negatively impact speech fluency. Previous research has emphasized that stress activates the fight or flight response, which may lead to increased physiological arousal and thus affect articulation and speech coordination (Peeters, 2019).
On the other hand, the use of VR in oral fluency training offers innovative potential to mitigate the negative effects of stress. VR allows users to practice in controlled environments that simulate real-life situations in which oral discourse is required. This immersion helps to familiarize participants with the dynamics of the situation and can desensitize them to the stressful situations associated with public speaking. According to previous studies, VR has been shown to be effective in reducing stress and improving speaker confidence, resulting in greater fluency during speech (Lim et al., 2023). This is consistent with research exploring how practice in virtual environments could reduce anxiety and improve verbal performance, contributing not only to the development of communicative skills but also to the emotional well-being of users (Díaz-Pereira et al., 2025; Kryston et al., 2021; Rizzo et al., 2010; Satake et al., 2023).
Familiarity with the technological environment is an additional factor that could influence participants' experience of stressful situations. When confronted with new and potentially complex technology, users may experience uncertainty, which could increase their stress during oral presentations. Some studies suggest that preparation and familiarity with the tools and environment help to reduce stress and improve communicative performance (Amanvermez et al., 2023; Rama et al., 2023).
The research highlights the potential of VR as a platform for intervention in communication skills under stress situations. Given the positive correlation between stress management techniques and improved performance (Amanvermez et al., 2023; Rama et al., 2023), incorporating elements of familiarization with VR technology could significantly improve participants’ preparedness in stressful situations. Therefore, it is imperative that future research explores how varying levels of familiarity with VR technology and the implementation of specific stress management techniques affect speech fluency outcomes. Programs that use VR to simulate oral presentations and allow for repeated practice could be effective in reducing stress and improving speaker confidence, resulting in more fluent and effective speech. Education and speech therapy could also benefit greatly from this approach. As documented in previous studies, exposure to simulated situations would allow participants to experience the process of public speaking in a safe and controlled environment (Jiménez, 2018; As & Setiawan, 2021).

Limitations and Future Perspectives

Despite the significant contributions of this pilot study, it is important to acknowledge its limitations. The small sample size of 30 participants restricts the generalizability of the findings, as the insights gathered may not be fully representative of the broader population. Although pilot studies offer the advantage of allowing researchers to explore innovative technological applications in speech and behavioral sciences, they also come with limitations concerning the projection of results. Consequently, while the data provide preliminary insights into the relationship between stress and speech fluency in VR environments, the results should be interpreted cautiously, as broader conclusions may not readily apply without further investigation in more extensive studies.
Furthermore, the absence of physiological measures of stress was a limitation; including them would have allowed a better correlation of the study variables. Future studies should address this issue to determine the extent to which stress increases with the number of interruptions during oral speech.
To fully realize the potential implications of using VR for managing speech fluency amid stress, it is crucial to scale this study to a larger population. Future research should aim to include a more diverse demographic by increasing the sample size and ensuring varied participant backgrounds to enhance the external validity of the results. By conducting larger-scale studies, researchers can better assess the consistency and robustness of the effects noted in this pilot, as well as explore variations in outcomes as a function of factors such as age, gender, and previous experience with public speaking or VR technology. Ultimately, scaling up the study will provide a more comprehensive understanding of how to leverage VR to assist individuals in refining their communicative competencies under stress, thus contributing valuable insights to the fields of education and speech therapy.

5. Conclusions

In conclusion, this study provides important findings regarding the relationship between stress situations and speech fluency during oral presentations in VR environments. The results indicate that fluency in verbal production can be affected by the level of stress induced in different situations. Specifically, participants performed well in the scenario with the fewest stress stimuli, as evidenced by high production of words and syllables and a low incidence of disfluencies. This aligns with previous research highlighting that lower levels of stress correlate with improved communicative performance.
The proliferation in the number of extensions and hesitations in scenarios with higher stress levels suggests that environmental conditions could destabilize speakers’ abilities to communicate effectively. These findings emphasize the necessity of considering stress management in communication practices, particularly in presentation contexts that demand a high level of verbal performance. However, it is important to acknowledge the limitations of the study, primarily the small sample size and the inability to directly measure stress levels, which could restrict the generalizability of the results.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/bs15121652/s1, Supplementary Material S1 (Text generated for oral speech production); Supplementary Material S2 (Academic Stress Guideline) and Supplementary Material S3 (VR satisfaction survey).

Author Contributions

Conceptualization: Y.S., C.R. and B.F. Methodology: Y.S., L.G., S.Q.C. and G.L. Validation: Y.S., L.G. and S.Q.C. Investigation: Y.S., C.R. and B.F. Data curation: Y.S. and C.R. Writing—original draft: Y.S., C.R., B.F., L.G. and S.Q.C. Writing—review & editing: Y.S., C.R., B.F., L.G., S.Q.C. and G.L. Supervision: C.R. Funding acquisition: Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

ANID FONDECYT INICIACION, grant number: 11230984; Communication & Cognition Investigation Group, Universidad del Bío-Bío, grant number: GI2309435; DICREA Regular Research Project, Universidad del Bío-Bío, grant number: RE2534906; Innovation Initiation Project (Design and conceptual validation (TRL 2) of an immersive VR environment to optimize the communication skills of first responders to emergencies in Chile), Universidad del Bío-Bío, grant number: I+D45.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by Bío-Bío University Bioethical Committee (protocol number cecubb2024/1; 20 December 2024), ensuring protection of participants’ rights and privacy throughout the process. Additionally, measures were implemented to provide psychological support for participants during and after assessments, addressing any emotional discomfort arising from participation.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Amanvermez, Y., Leblebici, M. A., & Guzel, F. (2023). Testing the effects of multimodal stress reduction techniques on young adults’ anxiety levels. International Journal of Behavioral Medicine, 30, 55–67. [Google Scholar]
  2. Arrivillaga, C., Rey, L., & Extremera, N. (2020). Adolescents’ problematic internet and smartphone use is related to suicide ideation: Does emotional intelligence make a difference? Computers in Human Behavior, 110, 106372. [Google Scholar] [CrossRef]
  3. As, M., & Setiawan, S. (2021). Stuttering disorder therapy using Aristotle’s rhetoric method in The King’s Speech movie. IJET, 10, 55–69. [Google Scholar]
  4. Baird, A., Robinson, K., & Boge, C. (2019). Using speech to predict sequentially measured cortisol levels during a Trier social stress test. Frontiers in Psychology, 10, 1–9. [Google Scholar]
  5. Battambang University Research Team. (2025). Exploring EFL students’ challenges in oral presentations at National University of Battambang. International Journal of Progressive Development and Learning, 2(1), 1–15. [Google Scholar]
  6. Brock, K. (2023). A preliminary investigation into the effectiveness of a virtual reality scenario for adolescents who stutter [Master’s thesis, Oklahoma State University]. [Google Scholar]
  7. Brundage, S., Graap, K., Gibbons, K., Ferrer, M., & Brooks, J. (2006). Frequency of stuttering during challenging and supportive virtual reality job interviews. Journal of Fluency Disorders, 31, 325–339. [Google Scholar] [CrossRef]
  8. Buchanan, T. W., Laures-Gore, J. S., & Duff, M. C. (2014). Acute stress reduces speech fluency. Biological Psychology, 97, 60–66. [Google Scholar] [CrossRef]
  9. Chard, I., & van Zalk, N. (2022). Virtual reality exposure therapy for treating social anxiety: A scoping review of treatment designs and adaptation to stuttering. Frontiers in Digital Health, 4, 831038. [Google Scholar] [CrossRef]
  10. Cohen, S., Gianaros, P. J., & Manuck, S. B. (2016). A stage model of stress and disease. Perspectives on Psychological Science, 11(4), 456–463. [Google Scholar] [CrossRef]
  11. Contreras, D., Quezada, C., & Castillo, J. (2020). Typical speech disfluencies in fluent children from Santiago de Chile. Revista Chilena de Fonoaudiología, 19, 1. [Google Scholar]
  12. Davis, A., Linvill, D., Hodges, L., Da Costa, A., & Lee, A. (2020). Virtual reality versus face-to-face practice: A study into situational apprehension and performance. Communication Education, 69(1), 70–84. [Google Scholar] [CrossRef]
  13. Davydov, D. M., Shapiro, D., Goldstein, I. B., & Chicz-DeMet, A. M. (2005). Moods in everyday situations: Effects of menstrual cycle, work, and stress hormones. Journal of Psychosomatic Research, 58, 343–349. [Google Scholar] [CrossRef]
  14. de Oliveira, V., & Furquim, C. (2008). Perfil evolutivo da fluência da fala de falantes do português brasileiro. Pró-Fono Revista de Atualização Científica, 20, 7–12. [Google Scholar] [CrossRef]
  15. Díaz-Pereira, M. D., Casal-de-la-Fuente, L., Delgado-Parada, J., Ricoy, M. C., Haamer, R. E., Kamińska, D., Merecz-Kot, D., Zwoliński, G., & Cuiñas, Í. (2025). Virtual reality scenarios to reduce stress of assessment in university students: Gender perspective guidelines. Interactive Learning Environments, 33(1), 244–259. [Google Scholar] [CrossRef]
  16. Dunbar, N., Brooks, C., & Kubicka, T. (2006). Oral communication skills in higher education: Using a performance-based evaluation rubric to assess communication skills. Innovative Higher Education, 31, 115–128. [Google Scholar] [CrossRef]
  17. Dwyer, K., & Davidson, M. (2021). Take a public speaking course and conquer the fear. Journal of Education and Educational Development, 8, 1. [Google Scholar] [CrossRef]
  18. García, E., Rodríguez, C., Martín, R., Jiménez, J., Hernández, S., & Díaz, A. (2012). Verbal fluency test: Normative data and evolutionary development in primary school students. European Journal of Education and Psychology, 5, 53. [Google Scholar] [CrossRef]
  19. Glémarec, Y., Lugrin, J., Bosser, A., Collins, A., Buche, C., & Latoschik, M. (2021). Indifferent or enthusiastic? Virtual audiences animation and perception in virtual reality. Frontiers in Virtual Reality, 2, 666232. [Google Scholar] [CrossRef]
  20. Gokgoz-Kurt, B. (2023). Fluency, comprehensibility, and accentedness in L2 speech: Examining the role of visual and acoustic information in listener judgments. Australian Journal of Applied Linguistics, 6, 40–54. [Google Scholar] [CrossRef]
  21. Gómez, M. (2018). What do learners do while planning a task? The effects of different timings in an oral task. Estudios de Lingüística Aplicada, 67, 1–20. [Google Scholar]
  22. Grieve, R., Woodley, J., Hunt, S., & McKay, A. (2021). Student fears of oral presentations and public speaking in higher education: A qualitative survey. Journal of Further and Higher Education, 45, 1281–1293. [Google Scholar] [CrossRef]
  23. Guitar, B. (2019). Stuttering: An integrated approach to its nature and treatment (5th ed.). Lippincott Williams & Wilkins. [Google Scholar]
  24. Huang, C., Luo, Y., Yang, S., Lu, C., & Chen, A. (2020). Influence of students’ learning style, sense of presence, and cognitive load on learning outcomes in an immersive virtual reality learning environment. Journal of Educational Computing Research, 58, 596–615. [Google Scholar] [CrossRef]
  25. Jackson, E., Tiede, M., Beal, D., & Whalen, D. (2016). The impact of social-cognitive stress on speech variability, determinism, and stability in adults who do and do not stutter. Journal of Speech, Language, and Hearing Research, 59, 1295–1314. [Google Scholar] [CrossRef] [PubMed]
  26. Jiménez, J. (2018). Simulation as a didactic strategy in diagnostic imaging training. CTI-S, 3, 20–24. [Google Scholar]
  27. Kappen, M., Van Der Donckt, J., Vanhollebeke, G., Allaert, J., Degraeve, V., Madhu, N., Van Hoecke, S., & Vanderhasselt, M. A. (2022). Acoustic speech features in social comparison: How stress impacts the way you sound. Scientific Reports, 12, 22022. [Google Scholar] [CrossRef]
  28. Kikvidze, Z., & Moya-Laraño, J. (2008). Unexpected failures of recommended tests in basic statistical analyses of ecological data. Web Ecology, 8(1), 67–73. [Google Scholar] [CrossRef]
  29. Kim, H., Kim, D. J., Kim, S., Chung, W. H., Park, K. A., Kim, J. D., Kim, D., Kim, M. J., Kim, K., & Jeon, H. J. (2021). Effect of virtual reality on stress reduction and change of physiological parameters including heart rate variability in people with high stress: An open randomized crossover trial. Frontiers in Psychology, 12, 614539. [Google Scholar] [CrossRef]
  30. Kroczek, L., & Mühlberger, A. (2023). Public speaking training in front of a supportive audience in Virtual Reality improves performance in real-life. Scientific Reports, 13(1), 13968. [Google Scholar] [CrossRef]
  31. Kryston, K., Goble, H., & Eden, A. (2021). Incorporating virtual reality training in an introductory public speaking course. Journal of Communication Pedagogy, 4, 133–151. [Google Scholar] [CrossRef]
  32. Kudo, N., Shinohara, H., & Kodama, H. (2014). Heart rate variability biofeedback intervention for reduction of psychological stress during the early postpartum period. Applied Psychophysiology and Biofeedback, 39, 115–121. [Google Scholar] [CrossRef]
  33. Lara, G., Santana, A., Lira, A., & Negrón, A. (2019). The development of hardware for virtual reality. RISTI, 31, 106–117. [Google Scholar] [CrossRef]
  34. Li, Y., Zhang, X., & Wang, Z. (2025). Does background sound impact cognitive performance and relaxation states in enclosed office? Building and Environment, 248, 111098. [Google Scholar]
  35. Lim, M. H., Aryadoust, V., & Esposito, G. (2023). A meta-analysis of the effect of virtual reality on reducing public speaking anxiety. Current Psychology, 42(15), 12912–12928. [Google Scholar] [CrossRef]
  36. Martens, M., Antley, A., Freeman, D., Slater, M., Harrison, P., & Tunbridge, E. (2019). It feels real: Physiological responses to a stressful virtual reality environment and its impact on working memory. Journal of Psychopharmacology, 33(10), 1264–1273. [Google Scholar] [CrossRef]
  37. McEwen, B. (2004). Protection and damage from acute and chronic stress: Allostasis and allostatic overload and relevance to the pathophysiology of psychiatric disorders. Annals of the New York Academy of Sciences, 1032, 1–7. [Google Scholar] [CrossRef]
  38. McEwen, B., & Gianaros, P. (2010). Central role of the brain in stress and adaptation: Links to socioeconomic status, health, and disease. Annals of the New York Academy of Sciences, 1186, 190–222. [Google Scholar] [CrossRef]
  39. Notaro, A., Capraro, F., Pesavento, M., Milani, S., & Busà, M. G. (2021). Effectiveness of VR immersive applications for public speaking enhancement. In IS&T International Symposium on Electronic Imaging (p. 224). Society for Imaging Science and Technology. [Google Scholar]
  40. Olszewski, K., Lim, J., Saito, S., & Li, H. (2016). High-fidelity facial and speech animation for VR HMDs. ACM Transactions on Graphics, 35, 1–14. [Google Scholar] [CrossRef]
  41. Peeters, D. (2019). Virtual reality: A game-changing method for the language sciences. Psychonomic Bulletin & Review, 26, 894–900. [Google Scholar] [CrossRef]
  42. Peng, W., Wu, P., Wang, J., Chi, H., & Wang, X. (2018). A critical review of the use of virtual reality in construction engineering education and training. International Journal of Environmental Research and Public Health, 15, 1204. [Google Scholar] [CrossRef]
  43. Pisanski, K., Nowak, J., & Sorokowski, P. (2016). Individual differences in cortisol stress response predict increases in voice pitch during exam stress. Physiology & Behavior, 163, 234–238. [Google Scholar] [CrossRef]
  44. Rama, A., Rehakova, L., Vidal, M., Latorre, C., Ayllón, E., & Vieiro, P. (2023). How to evaluate and intervene on dysphemia? An analysis of perceptions of speech therapy specialists. Journal of Speech Therapy Research, 13. [Google Scholar]
  45. Rizzo, A. S., Difede, J., Rothbaum, B. O., Reger, G., Spitalnick, J., Cukor, J., & Mclay, R. (2010). Development and early evaluation of the Virtual Iraq/Afghanistan exposure therapy system for combat-related PTSD. Annals of the New York Academy of Sciences, 1208, 202–210. [Google Scholar] [CrossRef]
  46. Rojas, C., Riffo, B., & Guerra, E. (2022). Visual word recognition among oldest old people: The effect of age and cognitive load. Frontiers in Aging Neuroscience, 14, 1007048. [Google Scholar] [CrossRef] [PubMed]
  47. Sandoval, Y., García, V., & Sanhueza, M. (2022). Perception of people with stuttering regarding their treatment experiences based on the CALMS multidimensional model. Revista Chilena de Fonoaudiología, 21, 1–9. [Google Scholar]
  48. Satake, Y., Yamamoto, S., & Obari, H. (2023). Effects of English-speaking lessons in virtual reality on EFL learners’ confidence and anxiety. In Frontiers in Technology-Mediated Language Learning (pp. 26–40). Routledge. [Google Scholar]
  49. Schröder, B., & Mühlberger, A. (2023). Measuring attentional bias in smokers during and after psychosocial stress induction with a Trier Social Stress Test in virtual reality via eye tracking. Frontiers in Psychology, 14, 1129422. [Google Scholar] [CrossRef] [PubMed]
  50. Serna, M. (2023). Afrontamiento al estrés ante exámenes en estudiantes de una universidad pública y una privada. Revista de la Asociación Dental Mexicana, 80, 101–103. [Google Scholar] [CrossRef]
  51. Slater, M., Guger, C., Edlinger, G., Leeb, R., Pfurtscheller, G., Antley, A., & Friedman, D. (2006). Analysis of physiological responses to a social situation in an immersive virtual environment. Presence, 15(5), 553–569. [Google Scholar] [CrossRef]
  52. Slater, M., Lotto, B., Arnold, M., & Sanchez-Vives, M. (2009). How we experience immersive virtual environments: The concept of presence and its measurement. Anuario de Psicología, 40(2), 193–210. [Google Scholar]
  53. Slater, M., Spanlang, B., & Corominas, D. (2010). Simulating virtual environments within virtual environments as the basis for a psychophysics of presence. ACM Transactions on Graphics, 29(4), 1–9. [Google Scholar] [CrossRef]
  54. Sörqvist, P., Haga, A., Langeborg, L., & Dahlström, Ö. (2024). The effects of irrelevant speech on physiological stress, cognitive performance, and subjective experience. International Journal of Psychophysiology, 193, 112259. [Google Scholar]
  55. Starr, R. (2025). The psychology of interruptions: Power, anxiety, and disregard in everyday talk. Available online: https://profrjstarr.com/the-psychology-of-us/the-psychology-of-interruptions-power-anxiety-and-disregard-in-everyday-talk (accessed on 25 October 2025).
  56. Tan, Y., Chang, V., Ang, W., & Lau, Y. (2025). Virtual reality exposure therapy for social anxiety disorders: A meta-analysis and meta-regression of randomized controlled trials. Anxiety, Stress, & Coping, 38(2), 141–160. [Google Scholar]
  57. Tentu, S., Cecil, J., & Tetnowski, J. (2024, August 7–9). The potential of virtual reality digital twins to serve as therapy approaches for stuttering. 2024 IEEE 12th International Conference on Serious Games and Applications for Health (SeGAH) (Vol. 24, pp. 1–9), Funchal, Portugal. [Google Scholar]
  58. Thrasher, T. (2022). The impact of virtual reality on L2 French learners’ language anxiety and oral comprehensibility. CALICO Journal, 39(2), 219–238. [Google Scholar] [CrossRef]
  59. Valls-Ratés, Ï., Niebuhr, O., & Prieto, P. (2023). Encouraging participant embodiment during VR-assisted public speaking training improves persuasiveness and charisma and reduces anxiety in secondary school students. Frontiers in Virtual Reality, 4, 1074062. [Google Scholar] [CrossRef]
  60. Wohlgenannt, I., Simons, A., & Stieglitz, S. (2020). Virtual reality. Business & Information Systems Engineering, 62, 455–461. [Google Scholar] [CrossRef]
  61. Yan, H. (2019). Unpacking the relationship between formulaic sequences and speech fluency on elicited imitation tasks: Proficiency level, sentence length, and fluency dimensions. TESOL Quarterly, 53, 399–426. [Google Scholar] [CrossRef]
Figure 1. Interaction between components of the developed system.
Figure 2. Initial relaxing VR environment (Japanese garden with soothing music).
Figure 3. VR environment of oral presentation (Auditorium of the sponsoring university).
Table 1. Stress situations presented in VR for each scenario.
* Stress Situation                              Scenario 1   Scenario 2   Scenario 3
A person enters the auditorium by surprise
Loud cell phone ringing
Person leaves the auditorium unexpectedly
Person raises hand and asks a question
Loud fire siren noise
Person coughs loudly
Two people converse and interrupt
Person points to the exhibitor
* Each stress situation was presented consecutively; there was no overlapping between them.
Table 2. Speech fluency variables with 2 stress situations (Scenario 1).
Variable               Average   St. Dev.   Mode   Minimum   Maximum
Age                    20.3      0.78       20     19        21
Total Words            198.4     29.36      198    156       270
Total Syllables        244.0     42.46      254    180       365
* Tot-words × min      134.2     25.68      187    106       187
** Tot-sil × min       135.8     31.29      120    102       183
Extensions             21.0      5.39       21     11        36
Hesitations            0.5       0.52       0      0         2
Syllable repetitions   0.1       0.32       0      0         1
* Total words per minute; ** Total syllables per minute.
Table 3. Speech fluency variables with 5 stress situations (Scenario 2).
Variable               Average   St. Dev.   Mode   Minimum   Maximum
Age                    20.5      0.75       20     19        22
Total Words            199.3     34.58      165    139       277
Total Syllables        238.9     66.73      211    199       532
* Tot-words × min      103.2     25.88      89     59        187
** Tot-sil × min       145.9     50.95      123    67        243
Extensions             9.2       2.95       11     8         28
Hesitations            2.4       1.10       1      0         4
Syllable repetitions   0.3       0.47       0      0         1
* Total words per minute; ** Total syllables per minute.
Table 4. Speech fluency variables with 8 stress situations (Scenario 3).
Variable               Average   St. Dev.   Mode   Minimum   Maximum
Age                    22.5      3.67       21     19        34
Total Words            178.8     25.14      172    119       215
Total Syllables        292.7     82.27      323    225       416
* Tot-words × min      74.2      15.22      63     52        93
** Tot-sil × min       126.6     24.75      116    86        183
Extensions             25.0      9.24       41     20        47
Hesitations            1.2       0.64       1      0         5
Syllable repetitions   0.4       0.52       0      0         4
* Total words per minute; ** Total syllables per minute.
Table 5. Comparison between VR scenarios. ANOVA between groups (Bonferroni’s post hoc test corrected).
Variable              Scenario   Average   St. Dev.   Min.   Max.   p-Value (1 vs. 2)   p-Value (1 vs. 3)   p-Value (2 vs. 3)
Total words           1          198.4     29.36      156    270    0.845               <0.001 ***          0.037 ***
                      2          199.3     34.58      139    277
                      3          178.8     25.14      119    215
Total syllables       1          244.0     42.46      180    365    0.845               0.015 ***           <0.001 ***
                      2          238.9     66.73      199    532
                      3          292.7     82.27      225    416
* Tot-words × min     1          134.2     25.68      106    187    0.845               <0.001 ***          0.045 ***
                      2          103.2     25.88      59     187
                      3          74.2      15.22      52     93
** Tot-sil × min      1          135.8     31.29      102    183    0.845               0.012 ***           0.034 ***
                      2          145.9     50.95      67     243
                      3          126.6     24.75      86     183
Extensions            1          21.0      5.39       11     36     0.845               0.091               0.002 ***
                      2          9.2       2.95       8      28
                      3          25.0      9.24       20     47
Hesitations           1          0.5       0.52       0      2      0.845               0.012 ***           0.022 ***
                      2          2.4       1.10       0      4
                      3          1.2       0.64       0      5
Syllable repetition   1          0.1       0.32       0      1      0.845               0.134               0.546
                      2          0.3       0.47       0      1
                      3          0.4       0.52       0      4
* Total words per minute; ** Total syllables per minute; *** p < 0.05.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Sandoval, Y.; Farías, B.; Gajardo, L.; Quezada Cáceres, S.; Lagos, G.; Rojas, C. Stress Situations and Speech Fluency: A Pilot Study of Oral Presentations in Immersive Virtual Reality Environments. Behav. Sci. 2025, 15, 1652. https://doi.org/10.3390/bs15121652

