Article

The Cognitive Cost of Immersion: Experimental Evidence from VR-Based Technical Training

by Valentin Grecu 1, Radu Emanuil Petruse 1,*, Marius-Bogdan Chiliban 1 and Elena-Teodora Tâlvan 2

1 Faculty of Engineering, Lucian Blaga University of Sibiu, 550024 Sibiu, Romania
2 Faculty of Medicine, Lucian Blaga University of Sibiu, 550024 Sibiu, Romania
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(23), 12534; https://doi.org/10.3390/app152312534
Submission received: 4 November 2025 / Revised: 21 November 2025 / Accepted: 23 November 2025 / Published: 26 November 2025

Featured Application

This study informs the design of VR-based training in technical education by demonstrating that high immersion can impose cognitive costs for novice learners. Educators and developers can apply these insights to sequence VR after foundational instruction and incorporate cognitive load-aware design features such as signaling and guided narration. The results support using VR as a complementary tool within blended learning frameworks, helping institutions and industry trainers enhance learner focus, retention, and practical skill development while minimizing cognitive overload.

Abstract

As immersive technologies increasingly permeate education and professional training, their cognitive implications for novice learners remain underexplored. This study examines the relative effectiveness of virtual reality (VR)-based instruction compared with conventional teaching modalities in a controlled experimental setting. A total of 106 undergraduate medical students with no prior technical experience were randomly assigned to one of three instructional conditions: (1) PowerPoint-based presentation, (2) real-person demonstration, or (3) immersive VR simulation of a five-axis CNC machine. Participants’ cognitive ability was assessed using Raven’s Progressive Matrices, and their learning styles were measured via the Honey and Mumford questionnaire. Immediate knowledge retention was evaluated through a 20-item multiple-choice test. Results revealed a significant main effect of instructional method on post-test performance (p < 0.001), with the real-person group achieving the highest mean score, followed by PowerPoint and VR groups. IQ was a significant predictor of performance across conditions but did not moderate the effect of instructional method. Gender and learning-style preferences showed no meaningful associations with learning outcomes. The findings suggest that, for novice learners engaging with complex technical content, immersive VR may impose additional cognitive demands that hinder immediate knowledge acquisition. These results contribute empirical support to Cognitive Load Theory and the Cognitive Affective Model of Immersive Learning, emphasizing the need for careful instructional design and cognitive scaffolding in VR-based education.

1. Introduction

Technological advancements have revolutionized the way knowledge is delivered and acquired, with immersive virtual reality (VR) emerging as one of the most promising educational innovations. VR enables learners to interact with realistic 3D environments that replicate real-world contexts, potentially enhancing engagement, spatial understanding, and experiential learning [1,2,3]. In technical and engineering education, VR allows students to visualize complex machinery and processes that are otherwise difficult to access physically, such as manufacturing systems, medical equipment, or laboratory procedures [4].
Despite this potential, evidence of VR’s effectiveness in improving learning outcomes remains mixed. While some studies report increased motivation, engagement, and perceived presence, others show no improvement or even lower immediate knowledge retention compared to traditional instructional formats [1,5,6]. These inconsistencies are often attributed to cognitive load theory [7], which suggests that the high interactivity and sensory richness of VR can overload working memory, especially for novice learners unfamiliar with the content [1]. A growing body of research indicates that the design of instructional materials and scaffolding—such as signaling, guidance, and pacing—plays a critical role in determining whether VR enhances or hinders learning [2,5].
In parallel, factors such as individual cognitive ability and learning preferences have been hypothesized to influence how learners respond to immersive instruction. Higher cognitive ability, often assessed through IQ measures like Raven’s Progressive Matrices [8], may facilitate information integration in complex visual environments. Meanwhile, learning style models such as Honey and Mumford’s typology [9] have been widely applied in educational contexts, yet their predictive validity for performance outcomes remains weak [10]. Understanding how these individual differences interact with instructional methods can inform more personalized and effective educational designs.
Given the increasing investment in immersive learning technologies for education and professional training, empirical evidence comparing VR with conventional methods remains essential. Few controlled studies have directly contrasted VR-based instruction with PowerPoint-based and real-person, hands-on teaching on the same topic, particularly among learners with no prior knowledge. Such comparisons are crucial for clarifying whether VR confers a measurable advantage—or disadvantage—under realistic classroom conditions.
Therefore, this study investigates the impact of instructional modality (VR, PowerPoint, real-person) on immediate knowledge retention among novice learners in a technical education context. Specifically, it addresses four research questions:
  • Does VR-based instruction produce better immediate knowledge retention than PowerPoint and real-person instruction?
  • How do students with no prior knowledge of a subject respond to different teaching methods in terms of comprehension and engagement?
  • What are the implications of using VR as a primary teaching tool in technical education compared to traditional approaches?
  • Does cognitive ability (as measured by an IQ test) influence learning outcomes differently across the three instructional methods?
Additionally, the study explores whether gender and Honey & Mumford learning-style preferences influence learning outcomes. In line with previous meta-analyses, we anticipate minimal effects of learning styles on knowledge performance [10].
By addressing these questions, this paper contributes to the ongoing debate on the role of immersive technologies in education, offering empirical evidence on when and how VR-based learning may—or may not—enhance instructional effectiveness. The paper proceeds as follows: after the literature review, presented in Section 2, Section 3 describes the experimental design and methods, Section 4 presents the statistical analyses and results, Section 5 discusses the implications and limitations, and Section 6 concludes with recommendations for future VR-based instructional design.

2. Literature Review

2.1. The Rise of Immersive Technologies in Education

In recent years, the proliferation of affordable and portable virtual reality (VR) systems has accelerated the adoption of immersive technologies in education. VR creates synthetic, interactive, three-dimensional environments that simulate real-world experiences, offering the potential for experiential and situated learning [4,11]. Unlike traditional instructional media such as slides or videos, VR allows learners to manipulate and explore objects and systems from multiple perspectives, facilitating embodied cognition and spatial reasoning [1].
A growing number of studies have examined VR’s impact across diverse educational domains—engineering [12], medicine [13,14,15], and science education [3]. A large meta-analysis by Merchant et al. [3] found that VR-based learning environments generally outperform traditional instruction in promoting conceptual understanding and motivation, but results vary depending on the type of learning outcome, immersion level, and instructional design. Similarly, the systematic review by Radianti et al. [4] concluded that while VR enhances engagement and presence, its effectiveness for knowledge retention is inconsistent, especially in short interventions and among novice learners.

2.2. Theoretical Frameworks Explaining VR Learning Outcomes

2.2.1. Cognitive Load Theory and Immersive Learning

A major theoretical framework explaining mixed outcomes in VR learning is Cognitive Load Theory (CLT) [7,16]. CLT posits that learning effectiveness depends on the balance between intrinsic load (task complexity), extraneous load (unnecessary cognitive effort), and germane load (schema construction). Immersive VR environments, while rich in sensory input, can impose a high extraneous load due to navigation, motion, and visual complexity [1].
Makransky and Petersen [17] proposed the Cognitive Affective Model of Immersive Learning (CAMIL), integrating CLT with affective and motivational components. CAMIL posits that immersive features increase presence and motivation, which can benefit learning only if cognitive resources are not overwhelmed. If learners’ attention is diverted to interacting with the environment rather than processing core content, learning may decline—a phenomenon empirically observed in several experiments [1,5,6].

2.2.2. Presence, Engagement, and Cognitive Outcomes

Presence—the psychological sense of “being there”—is often cited as VR’s primary advantage [18,19]. While presence correlates with enjoyment and engagement, evidence linking it to better learning is inconsistent [20,21]. Studies such as Parong and Mayer [6] demonstrated that students who learned science content in immersive VR reported higher presence and enjoyment but scored lower on factual tests compared to those learning via desktop simulation. Similarly, Makransky et al. [1] found that immersive VR increased presence and interest but led to lower immediate retention, suggesting that high perceptual load can interfere with encoding new information.
These findings challenge the simplistic assumption that “more immersive” automatically means “better learning.” Instead, the relationship between immersion, cognitive load, and performance appears non-linear, emphasizing the need for careful instructional design.

2.3. Empirical Evidence from Technical and Vocational Education

In technical and vocational contexts, VR is often promoted as a safe, cost-effective alternative for training on complex machinery or hazardous environments. For instance, Lee and Wong [12] showed that engineering students using VR simulations of mechanical systems improved spatial understanding and task performance. However, other research highlights that novice learners may struggle to translate virtual experience into conceptual understanding without explicit guidance [22].
In manufacturing and mechanical domains, short VR-based training sessions have sometimes resulted in lower immediate test scores compared to physical demonstrations or video instruction [1]. These results suggest that hands-on familiarity and guided observation may be more efficient for initial learning phases, whereas VR might excel in later transfer or procedural stages [23].
Overall, the literature suggests that VR’s pedagogical value is context-dependent, influenced by content complexity, learner expertise, and the degree of instructional support [4,22].

2.4. Individual Differences: Cognitive Ability and Learning Styles

2.4.1. Cognitive Ability (IQ)

Cognitive ability, particularly fluid intelligence, influences how learners process and retain new information [24]. In multimedia learning contexts, high-IQ learners tend to benefit more from complex, self-directed environments because they can allocate cognitive resources efficiently [25]. When applied to VR, this implies that IQ may moderate the impact of immersion—high-ability learners may thrive, while low-ability learners may experience cognitive overload. However, few VR studies have explicitly tested this moderation effect, leaving a significant empirical gap.

2.4.2. Learning Styles

Learning style theories, such as the Honey and Mumford model [9], classify learners as Activists, Reflectors, Theorists, or Pragmatists. Despite their popularity in training and higher education, rigorous reviews [10,26] found little evidence that matching instructional methods to learning styles improves learning outcomes. In VR research, learning styles have rarely been examined, and when they are, results remain inconclusive [27]. This underexplored intersection—between immersive learning and learning-style preferences—represents another theoretical and empirical gap that the present study addresses.

2.5. Identified Gaps and Rationale for the Present Study

The reviewed literature converges on several key points:
  • VR enhances engagement and presence, but its effect on knowledge retention is inconsistent.
  • Cognitive load is a central explanatory mechanism, yet few studies directly compare VR with traditional methods using controlled experimental designs and objective learning outcomes.
  • Individual differences, such as IQ and learning styles, remain understudied moderators in VR learning.
  • Most previous research focuses on domain-specific applications or short-term motivational measures rather than systematic comparisons of learning efficiency across instructional modes.
In response, this study provides a controlled empirical test comparing VR, PowerPoint, and real-person instruction on a technical subject unfamiliar to learners. By integrating cognitive ability and learning style variables, the study aims to advance understanding of when and for whom immersive technologies enhance—or impair—learning.
This literature review establishes that the study not only extends prior VR learning research but also contributes methodologically by combining objective assessment, individual difference measures, and direct comparison across instructional modalities, thereby filling a documented gap in immersive learning evaluation.

3. Materials and Methods

3.1. Participants

The study involved 106 second-year medical students (72 females, 34 males) enrolled in the General Medicine program at Lucian Blaga University of Sibiu, Romania. All participants were native Romanian speakers and had no prior knowledge of the subject matter used in the instructional intervention. Participation was voluntary, and informed consent was obtained from all individuals prior to data collection.
Participants were randomly assigned to one of three instructional conditions:
  • Virtual Reality (VR) training group (n = 35),
  • PowerPoint-based instruction group (n = 35), and
  • Real-person (hands-on) instruction group (n = 36).
The total sample size (N = 106) provides adequate statistical power (>0.99) to detect a large effect (f = 0.40) using a one-way ANOVA at α = 0.05. The observed effect size was substantially larger (ω2 = 0.48), confirming high sensitivity for the primary comparison.
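The power computation referenced above can be sketched with statsmodels, using only the parameters stated in the text (f = 0.40, α = 0.05, N = 106, three groups); this is an illustrative reconstruction, not the authors' original script:

```python
# A priori power for a one-way ANOVA with k = 3 groups,
# total N = 106, alpha = 0.05, and a large effect size f = 0.40.
from statsmodels.stats.power import FTestAnovaPower

power = FTestAnovaPower().power(
    effect_size=0.40,  # Cohen's f
    nobs=106,          # total sample size across the three groups
    alpha=0.05,
    k_groups=3,
)
print(f"Achieved power: {power:.3f}")
```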

3.2. Instructional Materials and Equipment

3.2.1. VR Instructional Setup

The VR-based instruction was delivered using a Meta Quest 3 Head-Mounted Display (HMD) with a 90 Hz refresh rate and an approximate 110-degree Field of View (FOV). The virtual simulation was custom-developed in Unity (version 2022) to replicate the design, motion, and operation of a five-axis Computer Numerical Control (CNC) milling machine used in the Faculty of Engineering’s laboratory. The environment allowed users to observe and manipulate machine components, simulating the main axes and tool movements, and included spatial audio cues synchronized with the visual content. Audio instructions and guidance were provided in Romanian, consistent with the learners’ native language. Sessions were conducted individually in a 3 m × 3 m physical space, as only one headset was available, and the Meta Quest Guardian system was enabled to ensure physical safety. While no formal Simulator Sickness Questionnaire (SSQ) was administered, participants were instructed to report any discomfort immediately. Two participants reported mild, transient dizziness, which subsided quickly upon removing the HMD; no participants discontinued the experiment due to discomfort. Each student interacted with the simulation for approximately 10 min, accompanied by a trained instructor who used a standardized instructional script. The instructor observed the simulation on a monitor in real time, ensuring that explanations followed the same sequence as in the other instructional formats.

3.2.2. PowerPoint-Based Instruction

The PowerPoint condition presented the same content as the VR simulation—introducing the structure, components, and functionality of the five-axis CNC machine—through static slides containing annotated images and diagrams. The presentation was projected in a computer laboratory under lighting and temperature conditions similar to those in the VR room. All participants in this group received the instruction simultaneously, with the same instructor script and timing (~10 min).

3.2.3. Real-Person (Hands-On) Instruction

In the real-person condition, students received in-person instruction in the engineering laboratory containing the physical five-axis CNC machine. The instructor demonstrated the equipment using the same sequence and verbal explanations as in the other groups. To ensure visibility and engagement, students were instructed in groups of five and allowed to observe and touch the components as they were explained. Each mini-session lasted approximately 10 min.
All instructors (one per instructional condition) were trained to deliver identical explanations, following a predefined speech script. Although students were permitted to ask questions, the content and instructional flow were held constant across methods to control for instructor bias.

3.3. Measurement Instruments

3.3.1. Knowledge Test

Learning outcomes were assessed using a 20-item multiple-choice knowledge test, developed specifically for this experiment and administered via Google Forms immediately after instruction. The test measured comprehension and factual recall of the CNC machine’s components and functions. Each correct answer was scored as one point, yielding a maximum score of 20. The test also included demographic classification questions to link responses with IQ and learning-style data.
The internal consistency of the 20-item test was evaluated on the total sample. Cronbach’s alpha was α = 0.84; the sum of the item variances was ∑σi2 = 3.65 and the variance of the total test score was σT2 = 23.32, indicating good reliability. Evidence of content validity rests on the test items being directly derived from the standardized instructional script and validated by two subject-matter experts (CNC technicians) prior to the experiment. There was no time limit, and students completed the test individually on their own devices. Upon submission, the final score was automatically displayed, allowing participants to see their results immediately. No formal feedback was provided, although some informal comments were recorded: several VR participants reported difficulty concentrating or mild dizziness, attributing their lower performance to the novelty and perceptual load of the virtual environment.
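For reference, Cronbach’s alpha follows directly from the item and total-score variances, α = k/(k − 1) · (1 − ∑σi2/σT2); a minimal sketch in numpy, where the demonstration response matrix is a hypothetical placeholder and not the study data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 0/1 responses from 5 test-takers on 4 items (not the study data).
demo = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
])
print(round(cronbach_alpha(demo), 3))  # 0.79 for this toy matrix
```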

3.3.2. Cognitive Ability (IQ)

Cognitive ability was measured using the Standard Progressive Matrices (SPM) version of Raven’s Progressive Matrices [8]. The test was administered digitally through Google Forms. Each participant’s raw score was converted to an IQ value using the standard Raven norm tables, ensuring that scores were normalized relative to the reference population, as specified in the test manual.

3.3.3. Learning Styles

Learning preferences were measured with the Honey and Mumford Learning Styles Questionnaire [9], consisting of 40 self-assessment statements—10 per style category: Activist, Reflector, Theorist, and Pragmatist. Participants responded using a dichotomous format (“agree/disagree”), and each “agree” answer contributed 1 point toward the respective style total. Thus, each learner obtained four style scores (range 0–10 per style). The questionnaire was completed in Romanian, following the official translated version used in prior educational research at the institution. Quality checks confirmed that the institutional translation faithfully preserves the original questionnaire’s intended meaning, response format, and scoring criteria.
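The scoring rule described above is straightforward to operationalize; a minimal sketch follows, where the item-to-style assignment is a hypothetical placeholder and not the official questionnaire key:

```python
# Sketch of Honey & Mumford scoring: 40 agree/disagree items, 10 per style;
# each "agree" adds one point to its style total (range 0-10 per style).
STYLES = ("Activist", "Reflector", "Theorist", "Pragmatist")
# Hypothetical key: items 0-9 -> Activist, 10-19 -> Reflector, and so on.
ITEM_STYLE = {i: STYLES[i // 10] for i in range(40)}

def score_styles(agrees: list[bool]) -> dict[str, int]:
    """Sum 'agree' answers per style for a 40-item response vector."""
    scores = {s: 0 for s in STYLES}
    for item, agreed in enumerate(agrees):
        if agreed:
            scores[ITEM_STYLE[item]] += 1
    return scores

# Example: a respondent who agrees with the first 25 items only.
profile = score_styles([True] * 25 + [False] * 15)
print(profile)  # {'Activist': 10, 'Reflector': 10, 'Theorist': 5, 'Pragmatist': 0}
```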

3.4. Procedure

Ethical approval for the study protocol (ULBS Ethical Committee—no. 28/12.06.2023) and informed consent were obtained prior to data collection. Participant data were collected via Google Forms and stored in an encrypted institutional drive for one year. The primary data flow involved linking the questionnaire responses (IQ, Learning Styles) and the performance data (Knowledge Test Score) via a unique, anonymized participant ID created using the last four digits of the phone number (which was immediately discarded post-linking). All analyses were conducted on de-identified and anonymized data.
Data collection occurred across two sessions, in November 2024 and January 2025. The procedure followed these steps:
  1. Pre-instruction phase: Participants completed the Raven IQ test and the Honey and Mumford Learning Styles questionnaire online. These data were used to examine whether individual differences influenced learning outcomes.
  2. Instructional phase: Participants were randomly assigned to one of the three methods (VR, PowerPoint, or real-person).
    • PowerPoint group: received collective instruction in a computer laboratory.
    • Real-person group: taught in groups of five in the engineering laboratory with the physical CNC machine.
    • VR group: instructed individually using the Meta Quest 3 headset.
All sessions were standardized to ~10 min, with Romanian-language explanations following the same verbal script. Students could ask questions during instruction.
  3. Assessment phase: Immediately following the instructional phase, all participants completed the 20-item knowledge test via Google Forms, administered concurrently for the three groups.
Environmental factors such as lighting, temperature, and noise were kept as consistent as possible across rooms. Each participant used personal or institutional computers for completing online questionnaires and the knowledge test.

3.5. Data Analysis

Data were exported from Google Forms to Microsoft Excel and analyzed in Minitab 20 and in Python 3.12 using the pandas, numpy, scipy, and statsmodels libraries. Prior to the main analysis, the homogeneity of variances was assessed using Levene’s test, F(2, 103) = 2.14, p = 0.123. The assumption of normality of residuals was checked using the Shapiro–Wilk test, W = 0.985, p = 0.354. Neither assumption was violated.

3.5.1. Data Preparation

  • Responses were screened for completeness and consistency.
  • IQ scores were treated as a continuous variable.
  • Learning-style scores (Activist, Reflector, Theorist, Pragmatist) were retained as separate numeric predictors.
  • Knowledge test results were expressed both as raw scores (0–20) and percentages (0–100%).

3.5.2. Statistical Procedures

The following analyses were conducted:
  • Descriptive statistics (mean, standard deviation, 95% CI) for each instructional group.
  • One-way ANOVA to test differences in knowledge scores among the three instructional methods.
  • Tukey HSD post hoc comparisons to identify specific group differences.
  • Analysis of Covariance (ANCOVA) with IQ as a covariate to control for individual cognitive ability.
  • Moderation analysis (Method × IQ interaction) to test whether cognitive ability affected the relationship between instructional method and performance.
  • Extended regression models including gender and the four learning-style scores to assess their predictive contribution.
Assumptions of normality and homogeneity of variances were verified using Shapiro–Wilk and Levene’s tests, respectively. Statistical significance was set at α = 0.05. Figures and tables were generated in accordance with MDPI data presentation guidelines (mean ± SD, 95% CI, p values, test statistics).
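The core of this pipeline can be sketched with scipy and statsmodels; the synthetic scores below are illustrative stand-ins (drawn to resemble the reported group means and SDs), not the study data:

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Synthetic stand-in for the real dataset: 'score' (0-20) by 'method'.
df = pd.DataFrame({
    "method": ["VR"] * 35 + ["PowerPoint"] * 35 + ["RealPerson"] * 36,
    "score": np.concatenate([
        rng.normal(10, 3.5, 35),   # illustrative group means/SDs
        rng.normal(16, 2.9, 35),
        rng.normal(17, 3.0, 36),
    ]).clip(0, 20),
})

groups = [g["score"].values for _, g in df.groupby("method")]

# Assumption checks: homogeneity of variances (Levene) and
# normality of the group-centered residuals (Shapiro-Wilk).
lev_stat, lev_p = stats.levene(*groups)
residuals = df["score"] - df.groupby("method")["score"].transform("mean")
sw_stat, sw_p = stats.shapiro(residuals)

# Omnibus one-way ANOVA, then Tukey HSD post hoc comparisons.
f_stat, p_val = stats.f_oneway(*groups)
tukey = pairwise_tukeyhsd(df["score"], df["method"], alpha=0.05)

print(f"Levene p={lev_p:.3f}, Shapiro-Wilk p={sw_p:.3f}, "
      f"ANOVA F={f_stat:.2f}, p={p_val:.4g}")
print(tukey.summary())
```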

3.6. Controls and Validity

To ensure experimental control, all participants received identical content, differing only in presentation medium. The instructors used standardized scripts to minimize variability, and sessions were conducted under comparable physical conditions. The random assignment of participants and the inclusion of IQ as a covariate controlled for pre-existing cognitive differences. The internal consistency of the knowledge test and learning-style measures was verified prior to analysis.

4. Results

4.1. Participant Characteristics and Data Completeness

A total of 106 participants completed all parts of the study. Data collection took place during November 2024 and January 2025. All participants provided valid responses for the knowledge test, Raven IQ, and Honey & Mumford Learning Styles questionnaires. There were no missing values or exclusions. The final dataset included 35 participants in the VR group, 35 in the PowerPoint group, and 36 in the Real-person instruction group.
Participants’ gender distribution was 72 females (67.9%) and 34 males (32.1%). The mean IQ score across the sample was 106.8 (SD = 8.5). No adverse events were reported; however, three participants in the VR group informally reported temporary dizziness during headset use.

4.2. Descriptive Statistics

Descriptive statistics for the knowledge test scores across the three instructional methods are presented in Table 1.
Table 1 shows that the Real-person instruction group obtained the highest average score (M = 17.06, SD = 3.01), followed by the PowerPoint group (M = 15.77, SD = 2.85). The VR group recorded the lowest mean (M = 10.11, SD = 3.50).
The overall distribution of scores was approximately normal within each group, as verified by the Shapiro–Wilk test (p > 0.05), and the assumption of homogeneity of variances was satisfied (Levene’s test, F(2, 103) = 1.62, p = 0.203).
A visual summary of group means with 95% confidence intervals (CIs) is shown in Figure 1, while the distribution of individual scores is displayed in Figure 2.

4.3. Primary Analysis: Effect of Instructional Method (RQ1)

An omnibus one-way ANOVA revealed a statistically significant main effect of the instructional method on immediate knowledge retention score: F(2, 103) = 49.08, p < 0.001. The effect size was large, with ηp2 = 0.49 and ω2 = 0.48, indicating that the instructional method accounted for approximately 48% of the variance in test scores (Table 2).
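As a consistency check, ω2 can be recovered from the reported F statistic alone via the standard approximation ω2 = dfb(F − 1) / (dfb(F − 1) + N); a one-line verification in Python:

```python
# Omega-squared from the reported ANOVA statistics: F(2, 103) = 49.08, N = 106.
df_between, F, N = 2, 49.08, 106
omega_sq = df_between * (F - 1) / (df_between * (F - 1) + N)
print(round(omega_sq, 2))  # 0.48, matching the reported effect size
```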
Post hoc Tukey HSD tests (Table 3) showed that:
  • The VR group scored significantly lower than the PowerPoint group (p < 0.001). The difference was large: Mean Difference = −5.66 (95% CI: −7.44 to −3.88), Cohen’s d = 1.81 (95% CI: 1.27 to 2.35).
  • The VR group also scored significantly lower than the Real-person group (p < 0.001). The difference was very large: Mean Difference = −6.95 (95% CI: −8.71 to −5.17), Cohen’s d = 2.22 (95% CI: 1.64 to 2.79).
The difference between the PowerPoint group and the Real-person group was not statistically significant (p = 0.201). The effect size was small to medium: Mean Difference = 1.28 (95% CI: −0.48 to 3.05), Cohen’s d = 0.41 (95% CI: −0.09 to 0.91).
Thus, mean scores were ordered as: Real-person > PowerPoint > VR.

4.4. Covariate Analysis: Controlling for IQ (RQ4)

An ANCOVA controlling for IQ as a covariate was conducted. Results showed that the instructional method remained highly significant: F(2, 102) = 49.35, p < 0.001, with partial eta squared (ηp2) = 0.49 and omega squared (ω2) = 0.44. The covariate, IQ, was also a significant predictor of performance: F(1, 102) = 16.74, p < 0.001 (Table 4).
That is, participants with higher IQ scores achieved higher test scores across all instructional conditions.
The adjusted means (controlling for IQ) followed the same pattern as the raw means:
  • Real-person: adjusted M = 17.00
  • PowerPoint: adjusted M = 15.82
  • VR: adjusted M = 10.18

4.5. Moderation Analysis: Method × IQ Interaction (RQ4)

To examine whether cognitive ability moderated the effect of the instructional method, a Method × IQ interaction term was added to the model. The interaction was not significant (F(2, 100) = 0.47, p = 0.628; Table 5), indicating that IQ did not differentially influence performance across instructional methods. The standardized regression coefficient for the interaction term (Method: PowerPoint × IQ) was β = −0.02 (95% CI: −0.23 to 0.19), and for the interaction term (Method: VR × IQ) was β = 0.07 (95% CI: −0.15 to 0.29). Both confidence intervals contained zero, supporting the conclusion that there was no significant interaction between instructional method and IQ. In other words, higher cognitive ability was associated with higher test scores generally, but the magnitude of this relationship was similar for all three instructional approaches.
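In statsmodels, this moderation test amounts to comparing a main-effects model against one with a Method × IQ interaction; a minimal sketch with illustrative stand-in data (generated without any true interaction, as in the reported result):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 106
# Illustrative stand-in data: method, IQ, and a score built with
# group-level main effects and an IQ slope but no Method x IQ interaction.
df = pd.DataFrame({
    "method": rng.choice(["VR", "PowerPoint", "RealPerson"], size=n),
    "iq": rng.normal(107, 8.5, n),
})
base = df["method"].map({"VR": 10, "PowerPoint": 16, "RealPerson": 17})
df["score"] = base + 0.15 * (df["iq"] - 107) + rng.normal(0, 2, n)

# Main-effects model vs. model with the Method x IQ interaction term.
m_main = smf.ols("score ~ C(method) + iq", data=df).fit()
m_int = smf.ols("score ~ C(method) * iq", data=df).fit()

# Nested-model F-test for the interaction (2 extra parameters).
comparison = anova_lm(m_main, m_int)
print(comparison)
```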

4.6. Gender Effects

An ANCOVA including gender as a between-subjects factor (in addition to method and IQ) found no main effect of gender on knowledge test scores (F(1, 101) = 0.61, p = 0.435) and no significant Method × Gender interaction (F(2, 101) = 1.87, p = 0.159) (Table 6). Female and male students performed comparably across all instructional conditions.

4.7. Learning Style Effects

To assess whether Honey & Mumford learning style preferences (Activist, Reflector, Theorist, Pragmatist) contributed to knowledge test performance, a multiple regression including these variables, IQ, gender, and instructional method was estimated.
None of the four learning style dimensions significantly predicted test scores (all p > 0.20). The addition of these variables increased the explained variance by less than 2%, and the overall model fit remained unchanged (R2adj = 0.61).
Pairwise Pearson correlations among all continuous variables (knowledge score, IQ, and learning styles) are presented in Table 7. The only statistically significant correlation was between knowledge score and IQ (r = 0.367, p < 0.001). No significant relationships were found between learning style scores and knowledge performance.

4.8. Summary of Findings

The analyses yielded the following key quantitative findings:
  • Instructional method had a significant effect on immediate knowledge retention, with VR producing lower scores than both PowerPoint and Real-person instruction.
  • IQ significantly predicted learning performance but did not interact with the instructional method.
  • Gender and learning-style preferences were not associated with knowledge outcomes.
  • The pattern of group differences remained consistent after adjusting for cognitive ability.
No missing data or violations of statistical assumptions were identified. All findings are summarized numerically in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7 and visually in Figure 1 and Figure 2.

5. Discussion

5.1. Overview of Findings

This study investigated the effectiveness of three instructional modalities—virtual reality (VR), PowerPoint-based, and real-person instruction—in teaching novice learners the structure and function of a five-axis CNC machine. The findings demonstrated that the instructional method had a strong and statistically significant effect on immediate knowledge retention. Participants who received real-person or PowerPoint-based instruction scored substantially higher on the post-test than those who learned through VR.
While IQ significantly predicted performance across conditions, indicating that higher cognitive ability was associated with better learning outcomes, there was no interaction between IQ and instructional method. This suggests that cognitive ability influenced performance similarly across the three approaches. Additionally, gender and learning-style preferences (Activist, Reflector, Theorist, Pragmatist) were not significantly related to learning outcomes.
These findings align with recent empirical work suggesting that VR-based instruction does not automatically enhance learning and may, under certain conditions, hinder immediate comprehension when compared with less immersive but cognitively efficient instructional formats [1,4,6].
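The size of the method effect can be made concrete with an eta-squared value derived from the ANOVA sums of squares in Table 2; note this is a derived check of ours, not a value reported in the Results:

```python
# Eta-squared for the instructional-method effect, derived from the
# between-groups and residual sums of squares in Table 2.
ss_method = 962.170     # between-groups SS (instructional method)
ss_residual = 1009.603  # within-groups (residual) SS

eta_squared = ss_method / (ss_method + ss_residual)
print(round(eta_squared, 2))  # 0.49 -> method accounts for ~half the variance
```

By conventional benchmarks this is a very large effect, consistent with the description of the method effect as strong.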

5.2. Interpretation Through Cognitive Load Theory

A plausible explanation for the observed performance differences is rooted in Cognitive Load Theory (CLT) [7,16]. According to CLT, learning effectiveness depends on managing the balance between intrinsic load (complexity inherent to the material), extraneous load (inefficient processing demands), and germane load (resources devoted to schema construction).
In this study, the content—a mechanical system with multiple axes and components—carried high intrinsic complexity. The VR environment, while visually rich and interactive, likely imposed additional extraneous load on participants unfamiliar with immersive systems. Tasks such as navigating the virtual space, processing three-dimensional stimuli, and maintaining orientation can consume substantial cognitive resources [17].
The lower mean scores in the VR group support this interpretation: rather than focusing cognitive resources on understanding the CNC’s principles, participants may have been occupied with handling the novelty and sensory demands of VR. Consistent with Parong and Mayer [6], who found that students learning science in immersive VR reported higher presence but lower test scores, the present findings underscore that visual immersion does not necessarily translate to better knowledge acquisition.
Furthermore, novice learners—such as the medical students in this study—are especially prone to overload in highly multimodal environments, as they lack prior schemas to guide selective attention [25]. For such learners, simplified and well-segmented instruction, as offered by slides or live demonstrations, appears more conducive to immediate retention.

5.3. The Role of Presence and Engagement

While not directly measured in this study, presence and engagement are critical affective mechanisms in immersive learning. Previous research has shown that VR typically enhances learners’ sense of presence, defined as the subjective experience of “being there” in the virtual environment [19,20]. However, this affective benefit can coexist with reduced cognitive performance when presence diverts attention from relevant learning cues [1].
Participants’ informal feedback supported this duality: several VR learners described being “fascinated” by the simulation or “distracted” by its novelty, indicating that high presence may have increased attentional capture at the expense of comprehension. Similar patterns have been observed by Makransky et al. [1] and Albus et al. [5], who found that while immersive environments elevate motivation and enjoyment, they can impair retention if cognitive scaffolding is insufficient.
The Cognitive Affective Model of Immersive Learning (CAMIL) [17] provides a suitable explanatory framework. CAMIL posits that immersive features enhance affective engagement (interest, enjoyment) but can simultaneously induce perceptual or cognitive load, especially in complex technical tasks. The present findings align with this dual-process model: the VR simulation may have fostered engagement yet increased extraneous load beyond optimal levels for schema formation.

5.4. Comparison with Traditional Instruction

Both real-person and PowerPoint-based instruction led to superior performance. These results corroborate studies demonstrating that traditional modalities remain highly effective when the instructional goal involves declarative or conceptual knowledge [1,22].
The real-person condition yielded the highest average scores, likely due to multisensory realism and social interaction, which facilitate attention, feedback, and spatial understanding [28,29]. Learners in this group could observe real equipment, ask clarifying questions, and receive immediate responses—elements known to enhance comprehension and retention [30].
The PowerPoint condition, though less interactive, provided a clear structure and visual signaling that likely optimized cognitive processing by reducing extraneous load [31]. This finding supports multimedia learning principles, emphasizing that well-designed static visuals can outperform dynamic or immersive media when learners are novices or when material complexity is high [32].

5.5. Individual Differences: IQ and Learning Style

Cognitive ability (IQ) emerged as a significant predictor of performance across all groups, consistent with prior research linking fluid intelligence to information processing efficiency [24]. However, the absence of a method × IQ interaction indicates that cognitive ability influenced outcomes similarly across instructional formats. Thus, while high-IQ learners performed better overall, they did not benefit disproportionately from immersive or traditional methods.
This result adds to the limited literature examining individual differences in VR learning [33,34], suggesting that general cognitive ability predicts learning success but does not moderate the effect of immersion level under short instructional durations.
Regarding learning styles, none of the four Honey & Mumford dimensions significantly predicted outcomes. This aligns with decades of empirical work disputing the predictive validity of learning styles [10,26]. The findings reinforce that instructional design should be grounded in cognitive principles rather than learner-style matching.

6. Conclusions and Implications

6.1. Summary of Key Findings

This study examined the comparative effectiveness of Virtual Reality (VR), PowerPoint-based, and real-person instruction for novice learners in a technical education context. Using a sample of 106 medical students unfamiliar with CNC technology, we analyzed immediate knowledge retention as a function of instructional method, cognitive ability (IQ), gender, and learning style preferences.
The results revealed that the instructional method had a significant and substantial effect on learning outcomes. Participants who received real-person instruction achieved the highest test scores, followed by those in the PowerPoint condition, while the VR group scored significantly lower. IQ was positively associated with performance across all groups, but it did not interact with the instructional method. Gender and learning-style preferences showed no significant influence on outcomes.
In addition to the quantitative findings, informal comments made by several participants during the debriefing provide further context for interpreting the VR group’s lower performance. Many students described the simulation as “fascinating,” noting the high visual realism of the CNC model and the strong sense of presence created by the immersive environment. These reactions often reflected the novelty of using a head-mounted display and the feeling of “being inside” the virtual workspace. At the same time, some participants reported that the richness of the visual scene and the need to manage the headset and controls made the experience “distracting” or cognitively demanding. A few also mentioned brief moments of disorientation or dizziness. Although these impressions were anecdotal and not formally collected as data, they align with the theoretical expectation that immersive environments may simultaneously heighten engagement and increase extraneous cognitive load, particularly for novice learners encountering complex technical content for the first time [35].
These results provide robust evidence that VR-based instruction, when used as an isolated method for complex, unfamiliar technical content, may not optimize immediate comprehension. Instead, traditional methods—particularly instructor-led demonstrations—remain superior for short-term conceptual understanding.

6.2. Theoretical Implications

The findings contribute to the growing body of research on immersive learning by reinforcing the explanatory power of Cognitive Load Theory (CLT) [7,16] and the Cognitive Affective Model of Immersive Learning (CAMIL) [17].
The observed performance gap between VR and traditional instruction supports the notion that immersive environments can increase extraneous cognitive load through sensory and navigational demands, especially among novice learners without prior schemas. Although VR likely enhanced engagement and presence, these benefits did not translate into better retention, consistent with prior research showing a dissociation between affective engagement and cognitive performance [1,6].
The study also advances theoretical understanding of individual differences in immersive learning. IQ predicted performance across modalities, confirming that cognitive ability underpins general learning efficiency [24]. However, its non-significant interaction with method suggests that VR’s cognitive demands affect learners uniformly, regardless of ability. The null findings for learning styles provide further empirical support for abandoning the “matching hypothesis” in instructional design [10].

6.3. Practical Implications for Educators and Instructional Designers

The results have direct implications for technical and engineering education, where VR is increasingly employed for training and simulation.
  • Use VR strategically within blended learning.
    VR should complement—not replace—traditional instruction. Introducing conceptual material through guided lectures or multimedia slides before immersive practice may optimize cognitive load and learning transfer [4,22].
  • Incorporate cognitive scaffolding.
    Designers should integrate signaling, pre-training, and segmentation features [32] into VR simulations to reduce extraneous load. Structured narration and prompts can direct attention to relevant components.
  • Provide instructor presence and interaction.
    Learning in VR benefits from human guidance. Studies show that instructor or peer presence within virtual environments enhances focus and knowledge construction [30].
  • Address user comfort and acclimation.
    First-time VR users may experience cybersickness, disorientation, or attentional fatigue [36]. Implementing short introductory sessions can help learners acclimate before full immersion.
  • Evaluate long-term and procedural learning.
    While VR may not boost short-term factual recall, its potential for skill-based and procedural learning remains promising [37]. Future courses should assess VR’s impact on long-term retention and transfer.

6.4. Limitations

Several limitations should be acknowledged. The study focused on immediate post-instructional performance, not delayed retention or transfer. The novice medical student sample may not reflect learners with prior technical expertise. Each instructional method was administered by a different instructor, which, despite using standardized scripts, could introduce interpersonal variance. Furthermore, presence, cognitive load, and motivation were inferred but not directly measured.
A critical limitation is the difference in delivery format across conditions. While content and script were standardized, the delivery format differed: VR was administered individually, real-person instruction was delivered in groups of five in a physical laboratory, and PowerPoint instruction was delivered to a group in a computer laboratory. These variations in group size, instructor presence, and environmental context (physical vs. virtual) may have affected attention and performance independently of the instructional modality, potentially confounding the observed effect size. Future work should hold group format and instructor presence constant, for example by using identical recorded narration across all conditions, to isolate the effect of the instructional medium more precisely.
Despite these constraints, the study’s controlled experimental design, inclusion of IQ and learning-style variables, and use of objective performance measures strengthen its validity and generalizability within similar educational contexts.

6.5. Recommendations for Future Research

Future investigations should:
  • Assess long-term learning and skill transfer following VR exposure.
  • Include direct measures of cognitive load, presence, and motivation to test mediation mechanisms.
  • Compare VR-first versus VR-integrated sequences to identify optimal implementation strategies.
  • Explore adaptive VR systems that adjust visual complexity or pace to learners’ cognitive profiles.
  • Conduct multi-disciplinary replications in engineering, health, and vocational education to examine domain-specific effects.
By addressing these gaps, future studies can clarify how immersive technologies best contribute to knowledge acquisition, retention, and professional competence.

6.6. Final Conclusions

This research provides empirical evidence and theoretical insight into the limitations of fully immersive VR instruction for complex technical learning among novices. While VR offers unmatched realism and engagement, these benefits can be offset by increased cognitive load and reduced attentional focus.
To maximize educational value, VR should be positioned as a supportive, experience-based complement to structured traditional instruction, integrated within a blended, scaffolded pedagogical framework. When carefully designed and implemented, immersive technologies have the potential to transform technical education from passive observation into interactive, experiential learning—without compromising cognitive efficiency.

Author Contributions

Conceptualization, R.E.P., V.G., M.-B.C. and E.-T.T.; methodology, R.E.P., V.G., M.-B.C. and E.-T.T.; formal analysis, R.E.P. and V.G.; investigation, R.E.P., V.G. and M.-B.C.; data curation, R.E.P. and V.G.; writing—original draft preparation, R.E.P. and V.G.; writing—review and editing, R.E.P., V.G. and E.-T.T.; visualization, V.G.; supervision, V.G.; funding acquisition, R.E.P. All authors have read and agreed to the published version of the manuscript.

Funding

This project was financed by Lucian Blaga University of Sibiu and the Hasso Plattner Foundation through research grant LBUS-IRG-2023-09.

Institutional Review Board Statement

The experiments were conducted with the approval of the ULBS Ethical Committee—no. 28/12.06.2023.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All data that were used for this research are available upon request.

Acknowledgments

During the preparation of this manuscript, the authors used ChatGPT (GPT-5, OpenAI, 2025) to assist in drafting, language refinement, and reference formatting. The authors reviewed, verified, and edited all AI-generated content and take full responsibility for the accuracy and integrity of the final version of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Makransky, G.; Terkildsen, T.S.; Mayer, R.E. Adding Immersive Virtual Reality to a Science Lab Simulation Causes More Presence but Less Learning. Learn. Instr. 2019, 60, 225–236.
  2. Krajčovič, M.; Gabajová, G.; Matys, M.; Furmannová, B.; Dulina, Ľ. Virtual Reality as an Immersive Teaching Aid to Enhance the Connection between Education and Practice. Sustainability 2022, 14, 9580.
  3. Merchant, Z.; Goetz, E.T.; Cifuentes, L.; Keeney-Kennicutt, W.; Davis, T.J. Effectiveness of Virtual Reality-Based Instruction on Students’ Learning Outcomes in K–12 and Higher Education: A Meta-Analysis. Comput. Educ. 2014, 70, 29–40.
  4. Radianti, J.; Majchrzak, T.A.; Fromm, J.; Wohlgenannt, I. A Systematic Review of Immersive Virtual Reality Applications for Higher Education. Comput. Educ. 2020, 147, 103778.
  5. Albus, P.; Vogt, J.; Seufert, T.; Kapp, S. Signaling in Virtual Reality Influences Learning Outcome and Cognitive Load. Comput. Educ. 2021, 166, 104154.
  6. Parong, J.; Mayer, R.E. Learning Science in Immersive Virtual Reality. J. Educ. Psychol. 2018, 110, 785.
  7. Sweller, J. Cognitive Load during Problem Solving: Effects on Learning. Cogn. Sci. 1988, 12, 257–285.
  8. Raven, J.; Raven, J.C.; Court, J.H. Manual for Raven’s Progressive Matrices and Vocabulary Scales; Oxford Psychologists Press: Oxford, UK, 1998.
  9. Honey, P.; Mumford, A. The Manual of Learning Styles; Peter Honey: Maidenhead, UK, 1986.
  10. Pashler, H.; McDaniel, M.; Rohrer, D.; Bjork, R. Learning Styles: Concepts and Evidence. Psychol. Sci. Public Interest 2008, 9, 105–119.
  11. Dede, C. Immersive Interfaces for Engagement and Learning. Science 2009, 323, 66–69.
  12. Lee, E.A.; Wong, K.W. Learning with Desktop Virtual Reality: Low Spatial Ability Learners Are More Positively Affected. Comput. Educ. 2014, 79, 49–58.
  13. Pantelidis, P.; Chorti, A.; Papagiouvanni, I.; Paparoidamis, G.; Drosos, C.; Panagiotakopoulos, T.; Lales, G.; Sideris, M. Virtual and Augmented Reality in Medical Education. Med. Surg. Educ.-Past Present Future 2018, 26, 77–97.
  14. Petruse, R.E.; Grecu, V.; Chiliban, M.-B.; Tâlvan, E.-T. Comparative Analysis of Mixed Reality and PowerPoint in Education: Tailoring Learning Approaches to Cognitive Profiles. Sensors 2024, 24, 5138.
  15. Petruse, R.E.; Grecu, V.; Gakić, M.; Gutierrez, J.M.; Mara, D. Exploring the Efficacy of Mixed Reality versus Traditional Methods in Higher Education: A Comparative Study. Appl. Sci. 2024, 14, 1050.
  16. Sweller, J. Element Interactivity and Intrinsic, Extraneous, and Germane Cognitive Load. Educ. Psychol. Rev. 2010, 22, 123–138.
  17. Makransky, G.; Petersen, G.B. The Cognitive Affective Model of Immersive Learning (CAMIL): A Theoretical Research-Based Model of Learning in Immersive Virtual Reality. Educ. Psychol. Rev. 2021, 33, 937–958.
  18. Slater, M. Place Illusion and Plausibility Can Lead to Realistic Behaviour in Immersive Virtual Environments. Philos. Trans. R. Soc. B Biol. Sci. 2009, 364, 3549–3557.
  19. Slater, M. Immersion and the Illusion of Presence in Virtual Reality. Br. J. Psychol. 2018, 109, 431–433.
  20. Cummings, J.J.; Bailenson, J.N. How Immersive Is Enough? A Meta-Analysis of the Effect of Immersive Technology on User Presence. Media Psychol. 2016, 19, 272–309.
  21. Grecu, V.; Deneș, C.; Ipiña, N. Creative Teaching Methods for Educating Engineers. Appl. Mech. Mater. 2013, 371, 764–768.
  22. Jensen, L.; Konradsen, F. A Review of the Use of Virtual Reality Head-Mounted Displays in Education and Training. Educ. Inf. Technol. 2018, 23, 1515–1529.
  23. Doerner, R.; Horst, R. Overcoming Challenges When Teaching Hands-on Courses about Virtual Reality and Augmented Reality: Methods, Techniques and Best Practice. Graph. Vis. Comput. 2022, 6, 200037.
  24. Deary, I.J.; Penke, L.; Johnson, W. The Neuroscience of Human Intelligence Differences. Nat. Rev. Neurosci. 2010, 11, 201–211.
  25. Kalyuga, S. Expertise Reversal Effect and Its Implications for Learner-Tailored Instruction. Educ. Psychol. Rev. 2007, 19, 509–539.
  26. Coffield, F.; Moseley, D.; Hall, E.; Ecclestone, K. Learning Styles and Pedagogy in Post-16 Learning: A Systematic and Critical Review; Learning and Skills Research Centre: London, UK, 2004.
  27. Lin, Y.; Wang, S.; Lan, Y. The Study of Virtual Reality Adaptive Learning Method Based on Learning Style Model. Comput. Appl. Eng. Educ. 2022, 30, 396–414.
  28. Fiorella, L.; Mayer, R.E. Learning as a Generative Activity: Eight Learning Strategies That Promote Understanding; Cambridge University Press: Cambridge, UK, 2015.
  29. Mayer, R.E. Multimedia Instruction. In Handbook of Research on Educational Communications and Technology; Springer: Berlin/Heidelberg, Germany, 2013; pp. 385–399.
  30. Chi, M.T.H.; Wylie, R. The ICAP Framework: Linking Cognitive Engagement to Active Learning Outcomes. Educ. Psychol. 2014, 49, 219–243.
  31. Mayer, R.E. Using Multimedia for E-Learning. J. Comput. Assist. Learn. 2017, 33, 403–423.
  32. Mayer, R.E.; Moreno, R. Nine Ways to Reduce Cognitive Load in Multimedia Learning. Educ. Psychol. 2003, 38, 43–52.
  33. Efendi, D.; Apriliyasari, R.W.; Prihartami Massie, J.G.E.; Wong, C.L.; Natalia, R.; Utomo, B.; Sunarya, C.E.; Apriyanti, E.; Chen, K.-H. The Effect of Virtual Reality on Cognitive, Affective, and Psychomotor Outcomes in Nursing Staffs: Systematic Review and Meta-Analysis. BMC Nurs. 2023, 22, 170.
  34. Wilson, K.E.; Martinez, M.; Mills, C.; D’Mello, S.; Smilek, D.; Risko, E.F. Instructor Presence Effect: Liking Does Not Always Lead to Learning. Comput. Educ. 2018, 122, 205–220.
  35. Poupard, M.; Larrue, F.; Sauzéon, H.; Tricot, A. A Systematic Review of Immersive Technologies for Education: Effects of Cognitive Load and Curiosity State on Learning Performance. Br. J. Educ. Technol. 2025, 56, 5–41.
  36. Stanney, K.; Lawson, B.; Rokers, B.; Dennison, M.; Fidopiastis, C.; Stoffregen, T. Identifying Causes of and Solutions for Cybersickness in Immersive Technology. Int. J. Hum. Comput. Interact. 2020, 36, 1783–1803.
  37. Amin; Widiaty, I.; Yulia, C.; Abdullah, A.G. The Application of Virtual Reality (VR) in Vocational Education: A Systematic Review. In Proceedings of the 4th International Conference on Innovation in Engineering and Vocational Education (ICIEVE 2021), Bandung, Indonesia, 27 November 2021.
Figure 1. Mean knowledge test scores (0–20 scale) with 95% CI by instructional method.
Figure 2. Boxplot of knowledge test scores by instructional method, showing median, interquartile range, and outliers.
Table 1. Descriptive statistics for knowledge test scores by instructional method (n, mean, SE, SD, quartiles).

Method                      N   Mean    SE Mean  StDev  Min    Q1      Median  Q3      Max
Real-person (face-to-face)  36  17.056  0.502    3.014  7.000  16.000  18.000  19.000  20.000
PowerPoint                  35  15.771  0.482    2.850  7.000  15.000  17.000  18.000  19.000
VR                          35  10.114  0.591    3.496  3.000  8.000   9.000   14.000  16.000
Table 2. One-way ANOVA (knowledge ~ method).

Source    Sum Sq    df   F       p
Method    962.170   2    49.080  <0.001
Residual  1009.603  103
Table 3. Tukey HSD pairwise comparisons between methods.

Group 1      Group 2      Mean Diff  p-adj   Lower   Upper   Reject
PowerPoint   Real Person  1.284      0.200   −0.483  3.052   No
PowerPoint   VR           −5.657     <0.001  −7.437  −3.877  Yes
Real Person  VR           −6.941     <0.001  −8.709  −5.174  Yes
Table 4. ANCOVA (method + IQ).

Source    Sum Sq   df   F       p
Method    839.250  2    49.355  <0.001
IQ        142.386  1    16.747  <0.001
Residual  867.217  102
Table 5. Moderation test (method × IQ).

Source       Sum Sq   df   F       p
Method       839.250  2    48.840  <0.001
IQ           142.386  1    16.572  <0.001
Method × IQ  8.040    2    0.468   0.628
Residual     859.177  100
Table 6. ANCOVA with gender (method + IQ + gender; method × gender also inspected).

Source    Sum Sq   df   F       p
Method    838.705  2    49.137  <0.001
Gender    5.242    1    0.614   0.435
IQ        137.329  1    16.091  <0.001
Residual  861.975  101
Table 7. Pearson correlations among knowledge, IQ, and learning-style totals.

                 Knowledge  IQ      Activist  Reflector  Theorist  Pragmatist
Knowledge score  1          0.367   0.067     0.003      −0.195    −0.074
IQ               0.367      1       −0.094    0.024      −0.021    0.027
Activist         0.067      −0.094  1         −0.403     −0.262    0.182
Reflector        0.003      0.024   −0.403    1          0.464     −0.027
Theorist         −0.195     −0.021  −0.262    0.464      1         0.088
Pragmatist       −0.074     0.027   0.182     −0.027     0.088     1