1. Introduction
Immersive technologies hold transformative potential for education by enabling learners to experience knowledge rather than merely learn about it. Through digitally simulated environments that combine multisensory input, interactivity, and spatial presence, they allow students to engage in situated, exploratory, and emotionally resonant learning experiences. Yet researching their educational impact—particularly in K-12 settings—remains methodologically challenging. Young learners’ cognitive, linguistic, and attentional capacities differ substantially from those of adults. Consequently, there is a growing need for research that examines immersive technologies not only as tools of engagement but as pedagogical environments that shape learning processes and perceptions in specific classroom contexts.
In this context, recent work highlights that immersive learning effectiveness depends on more than hardware fidelity. In particular, the Cognitive Affective Model of Immersive Learning (CAMIL) explains how technological features influence learning through learners’ psychological and cognitive processes (
Makransky & Petersen, 2021). Complementing this account, later studies emphasize that learning effectiveness depends on how teachers design and guide activities and on students’ readiness to engage with the designed tasks (
W. Lin et al., 2025;
Porat et al., 2023;
Schwartz et al., 2024). Thus, immersive environments function as pedagogical ecosystems in which technological, instructional, and learner factors dynamically interact.
The present study examines these interactions across three levels of immersion—Desktop Virtual Reality (DVR), Immersive Rooms (IR), and fully immersive Virtual Reality (VR). Integrating teacher interviews, classroom observations, and student questionnaires, it investigates how technological affordances, instructional practices, and learner characteristics shape engagement, perceived learning, and self-assessment in authentic classrooms. In this study, technological affordances are the action possibilities that a technology makes available to teachers and learners in a given classroom context (
Gibson, 2014), shaped by the interaction of system features and task design; challenges refer to recurring constraints or demands that emerge during implementation and may hinder instruction or learning. By comparing these modalities, the study seeks to clarify when and how immersive technologies enhance learning, and to inform more context-sensitive extensions of existing models such as CAMIL.
2. Literature Review
2.1. Immersive Technologies in Education
Immersive learning technologies vary widely in the degree of sensory and interactive engagement they afford to learners. At the lower end of this spectrum, Desktop Virtual Reality (DVR) presents three-dimensional scenes through a conventional computer interface, offering limited immersion but meaningful opportunities for learner control and exploration. Despite its relatively low level of immersion, DVR affords interactive engagement that can enhance perceived learning, immediate knowledge acquisition, and attention, while supporting self-efficacy and technology acceptance in educational contexts (
Di Natale et al., 2020;
Luo & Du, 2022;
Schwartz & Blau, 2025). Immersive Rooms (IRs), situated at an intermediate level of immersion, transform physical classrooms into shared digital spaces through wall- and floor-projected environments. Building on the legacy of CAVE (Cave Automatic Virtual Environment) systems, these rooms emphasize collaborative learning affordances rather than individual sensory isolation. Earlier CAVE-based research has demonstrated how such spatially immersive environments promote social interaction, group problem-solving, and engagement (
De Back et al., 2020), providing a foundation for understanding the pedagogical affordances of today’s classroom-scale immersive rooms (
De Back et al., 2023;
Wang et al., 2024). Finally, fully immersive Virtual Reality (VR) headsets provide the most enveloping experience, immersing learners’ visual and auditory channels entirely in the virtual environment. Empirical evidence and meta-analyses demonstrate that VR enhances the sense of presence and can positively influence perceived learning outcomes (
Coban et al., 2022;
Austermann et al., 2025;
W. Lin et al., 2025;
Liu et al., 2025). However, these affordances are accompanied by challenges, including heightened cognitive load, usability barriers, and physical discomfort (
Breves & Stein, 2023;
Thorp et al., 2024).
2.2. The Cognitive Affective Model of Immersive Learning (CAMIL)
The Cognitive Affective Model of Immersive Learning (CAMIL) (
Makransky & Petersen, 2021) is a theory-based framework that explains how features of immersive technologies shape learning through a set of linked psychological and cognitive mechanisms. In the model, three technological factors—
immersion,
control factors, and
representational fidelity—are posited as antecedents of two core psychological affordances,
presence (a sense of “being there”) and
agency (a sense of controlling one’s actions). Presence and agency then influence six affective and cognitive processes that mediate learning:
interest,
motivation,
self-efficacy, and
embodiment are theorized as pathways that can support learning, whereas
cognitive load and
self-regulation capture potential constraints and boundary conditions. These processes are theorized to relate to multiple learning outcomes, including
factual,
conceptual, and
procedural knowledge, as well as
transfer of learning. CAMIL is distinctive in that it conceptualizes both the affordances and the potential constraints of immersive learning—acknowledging that the same technological features that encourage engagement and presence can also introduce cognitive and self-regulatory challenges. Applying CAMIL to the classroom use of DVR, IR, and VR thus enables a systematic, interpretive mapping of how affordances translate into learning benefits or challenges across varying immersive contexts.
In language-learning contexts, previous studies have proposed extending CAMIL to include social affordances:
Chun et al. (
2022) contend that IVR models under-specify linguacultural processes and therefore call for integrating social interaction, co-presence, and feedback into the framework. Although
Chun et al. (
2022) propose a modified CAMIL tailored to linguacultural settings, it was not appropriate for this study’s broader K-12 focus and has not been adopted in subsequent CAMIL-based work (
Kablitz, 2025;
Zhi & Wu, 2023).
Zhi and Wu (
2023) operationalize CAMIL for XR second-language research, emphasizing social presence and the prior under-attention to learner agency. Together, these studies highlight that meaningful learning in immersive environments depends on designs that actively cultivate both social connection and learner agency, as these factors underpin engagement, collaboration, and perceived learning outcomes (
McGivney, 2025;
Van der Meer et al., 2023).
2.3. Students’ Perceived Learning in Immersive Contexts
While the CAMIL framework explains how technological affordances influence psychological mechanisms and actual knowledge-based learning outcomes, it does not explicitly include subjective outcomes such as perceived learning within its conceptualization of learning effects. This omission is noteworthy, as earlier studies by Makransky and colleagues (e.g.,
Makransky & Lilleholt, 2018;
Makransky et al., 2019) treated perceived learning as a meaningful self-reported indicator of learning effectiveness. Perceived learning reflects learners’ retrospective cognitive and emotional reflections on what and how they have learned, encompassing both the sense of understanding gained and the accompanying feelings that emerge during the process (
Caspi & Blau, 2011). Measuring perceived learning is important because such reflections influence motivation, self-efficacy, and willingness to re-engage in future learning experiences. When students feel that learning has occurred, they are more likely to persist, explore, and sustain interest (
Zhuofan et al., 2024)—processes especially relevant in immersive environments that rely on affective and motivational engagement (
Makransky & Lilleholt, 2018). Recent studies further show that perceived learning provides a complementary perspective to performance-based outcomes by capturing students’ subjective experiences of understanding, meaning-making, and emotional involvement in immersive contexts (
Navarro et al., 2024;
Yu et al., 2025). Consistent with the research literature, in this study, perceived learning is treated as a subjective, metacognitive outcome in its own right rather than as a direct substitute for objective learning effectiveness. Accordingly, focusing on perceived learning allows this study to examine how immersive affordances translate into learners’ subjective sense of learning—a dimension overlooked in CAMIL’s original formulation.
Immersive learning research thus points to a complex interplay between technology and pedagogy. While models such as CAMIL offer a valuable foundation for explaining cognitive and affective mechanisms, they do not yet account for the social, pedagogical, and perceptual dimensions emerging in classroom practice. These gaps highlight the need to examine how varying levels of immersive learning technologies shape teaching processes and students’ perceived learning in authentic educational settings.
3. Research Aims and Objectives
The present study examines learning across three levels of immersive technology—Desktop Virtual Reality (DVR), Immersive Rooms (IR), and fully immersive Virtual Reality (VR)—from both instructional and learner perspectives. At the instructional level, the study explores how technological and pedagogical features create student learning opportunities and challenges in the classroom, and how these features correspond to the affordances and constraints identified in the CAMIL framework. At the learner level, it focuses on students’ perceived experiences and outcomes that arise within these designed conditions. Through this dual lens, the study aims to illuminate not only how students learn with immersive technologies but also how varying degrees of immersion and agency shape their engagement, perceived learning, and self-assessment in authentic educational environments.
The following questions guided the study:
RQ1: What learning affordances and challenges arise when teaching with DVR, IR, and VR, and how do they align with the CAMIL categories?
RQ2: How do immersive technologies (DVR/IR/VR) differ in their effects on students’ perceived immersion, cognitive learning, and socio-emotional learning?
RQ3: Do technology type, immersion duration, and perceived student agency moderate the relationships between students’ engagement (perceived immersion, cognitive, and socio-emotional learning) and achievement self-assessment?
4. Methodology
To obtain both a broad overview and a nuanced understanding of instructional and learner perspectives, this study employed a mixed-methods design integrating quantitative and qualitative approaches. The quantitative component examined patterns across a large sample of student participants, while the qualitative component (interviews and observations) provided an in-depth exploration of classroom processes and teachers’ experiences, supporting a CAMIL-informed interpretive mapping of how teachers perceive, explain, and navigate immersive-technology affordances and challenges in practice.
4.1. Research Context
The study was conducted in 21 schools across Israel, representing diverse geographical regions throughout the country. Each school had integrated immersive learning technologies into its regular curricular instruction and implemented one of three technologies: DVR, IR, or VR. Detailed representative AI-generated visualizations of these environments—constructed to maintain student privacy in accordance with ethical permits—are provided in
Appendix A. The sample included schools from three educational levels—elementary (
n = 7; 2 DVR, 3 IR, 2 VR), middle (
n = 11; 3 DVR, 3 IR, 5 VR), and high school (
n = 3; 1 DVR, 1 IR, 1 VR).
Across all schools, immersive lessons were drawn from a wide range of disciplinary areas, including science, technology, social studies, foreign languages, and social-emotional learning, reflecting the versatility of immersive tools in addressing both content knowledge and broader learning skills.
4.2. Participants
The study participants included 31 teachers, ICT school coordinators, and instructional training designers, all of whom were directly involved in integrating virtual immersive technologies into classroom learning. Participants were recruited either through designated educational technology groups on social media or through two laboratories of the Ministry of Education’s Research & Development unit focused on integrating these technologies.
Table 1 summarizes the teacher participants by immersive technology, their demographics, and experience level with the immersive technology.
All student participants were enrolled in the same schools described above and took part in classes taught by one of the participating teachers who integrated immersive technologies into their instruction. The student sample represented learners across elementary, middle, and high school levels, corresponding to the specific immersive technologies implemented in each setting. Because the participating schools differed in the availability and curricular integration of the technologies, VR and IR activities were conducted mainly in the upper grades, whereas DVR applications were primarily implemented in elementary and middle school lessons. As a result, no DVR lessons were available for the oldest age group (14–16 years). Regarding the student sample size, an a priori power analysis conducted with G*Power 3.1 (
Faul et al., 2009) indicated that the total sample (N = 252) exceeded the minimum required to detect medium-sized effects (α = 0.05, power = 0.80) in the planned multivariate analyses, ensuring adequate statistical power.
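The adequacy of the sample can be checked with standard power routines. A minimal sketch using statsmodels, assuming a one-way comparison across the three technology groups with a conventional medium effect (Cohen’s f = 0.25), rather than the exact G*Power configuration, is:

```python
# Sketch: a priori power check for a three-group design
# (assumes a medium effect, Cohen's f = 0.25; not the exact
# multivariate configuration used in G*Power).
from statsmodels.stats.power import FTestAnovaPower

required_n = FTestAnovaPower().solve_power(
    effect_size=0.25,  # Cohen's f, conventional "medium"
    alpha=0.05,
    power=0.80,
    k_groups=3,        # DVR, IR, VR
)
print(round(required_n))  # total N required; the study's N = 252 exceeds it
```

Under these assumptions the required total sample is well below the 252 students actually surveyed, consistent with the reported power analysis.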
Table 2 presents the distribution of students by immersive technology, gender, and age group.
4.3. Research Tools and Measures
This study employed three instruments: semi-structured interviews, non-participant classroom observations, and student questionnaires, all aimed at exploring how immersive technologies are integrated and experienced in classroom learning.
Semi-Structured Interview: The semi-structured interview protocol was designed to elicit teachers’ perspectives on their instructional roles within immersive virtual environments according to the research questions. To establish face and content validity, the initial protocol was reviewed by a panel of three independent experts in the field of educational technology and qualitative research. These experts evaluated the items for clarity, pedagogical relevance, and their ability to elicit the desired qualitative data; the protocol was subsequently refined based on this expert feedback. The interview protocol questions addressed several focal areas. To explore pedagogical motivations for adopting immersive technologies, teachers were asked questions such as “
What led you to integrate virtual reality into your teaching?” and “
What learning goals did you hope to achieve through this medium?”. To examine instructional strategies and roles, participants were asked, for example, “
Please describe a lesson in which you integrated virtual reality,” and “
What was your instructional role during the immersive activity?” Finally, to explore student learning processes and challenges, the interview included questions such as “
What occurred before and after the immersive experience, and why?” and “
What were the main challenges or affordances you experienced when using immersive virtual environments for learning?” These questions collectively aimed to capture teachers’ underlying pedagogical reasoning, design choices, and reflections on teaching and learning with immersive technologies (see
Appendix B for the complete interview protocol).
Classroom Observations: To capture how immersive technologies were enacted in practice, classroom observations were used to complement the interviews. A structured observation protocol captured the sequence and organization of lessons, with attention to the affordances that emerged during the activity, such as student engagement, collaboration, and interaction with digital content. As with the interview protocol, this observation instrument was validated for face and content validity by three independent experts in learning technologies and qualitative research to ensure the observational categories accurately represented the pedagogical constructs under study. The protocol also noted contextual features, including the level of technological immersion, classroom configuration, and the connection between the immersive activity and curricular goals. Observations recorded indicators of learning opportunities—for example, moments of exploration, embodiment, or peer exchange—and how these related to students’ overall experience (see
Appendix C for the observation protocol).
Two additional variables were derived from the classroom observations conducted immediately before the lessons in which students later completed the questionnaires, allowing alignment between observed instructional conditions and students’ reported perceptions. The first variable, Activity Agency Level, captured the extent to which the lesson design afforded learners autonomy, active control, and decision-making within the immersive activity (
Schwartz et al., 2023). Agency level was coded based on the predominant instructional pattern across the activity, using three observable indicators: (a) degree of learner control over interaction with the environment (e.g., navigation/manipulation vs. viewing only), (b) scope of meaningful choices available to students (e.g., selecting paths/targets/strategies vs. following fixed steps), and (c) student initiation and ownership of task progression (e.g., pacing and self-directed actions vs. teacher-paced sequence).
Each observed lesson was assigned one level using the following criteria:
1 = Low agency: Students’ role was primarily receptive/observational, with minimal or no control over the immersive environment. Interaction was limited to watching/listening or responding to teacher prompts without influencing the activity flow (e.g., viewing a video/scene in the immersive space; teacher controls the system and pacing).
2 = Medium agency: Students engaged in structured participation with constrained autonomy. They carried out teacher-defined steps and could make limited decisions (e.g., answering in groups, completing guided tasks, or navigating within fixed options), but the activity sequence and pacing remained largely teacher-directed.
3 = High agency: Students had substantial control and decision-making, such that they could independently explore, manipulate, or create within the immersive environment. Students initiated actions, made meaningful choices that shaped the activity process or outcomes, and progressed with limited step-by-step teacher direction (e.g., independent exploration, inquiry tasks, or student-directed creation within the environment). Descriptive statistics indicated a mean = 2.26 (SD = 0.87), median = 3, and mode = 3.
The second variable, Immersion Duration, was defined as the proportion of each activity conducted within the immersive environment and was measured using the structured observation logs, following prior approaches to quantifying exposure to immersive experiences (
Villena-Taranilla et al., 2022). The variable was coded as 1 = 0–40% of the lesson (22.2% of students; low exposure), 2 = 41–70% (23.8%; medium exposure), and 3 = 71–100% (54.0%; high exposure). Descriptive statistics showed a mean = 2.32 (SD = 0.83), median = 3, and mode = 3.
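The three-level exposure coding can be reproduced mechanically from the observed share of lesson time spent in the immersive environment. A minimal sketch, in which the bin edges are our reading of the reported cut-points, is:

```python
import numpy as np

def code_exposure(proportion):
    """Map the observed share of lesson time in the immersive
    environment to the three-level exposure code used here:
    1 = 0-40%, 2 = 41-70%, 3 = 71-100%.
    Bin edges are chosen to split at the 40/41 and 70/71 boundaries."""
    return int(np.digitize(proportion, [0.405, 0.705])) + 1

# Example proportions spanning the three exposure levels
codes = [code_exposure(p) for p in (0.30, 0.55, 0.90)]
print(codes)  # [1, 2, 3]
```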
Immersive, Cognitive, and Socio-Emotional Perceived Learning Questionnaire: Students’ experiences with immersive learning were explored through a questionnaire designed to capture both their sense of immersion and their perceived learning outcomes. The instrument combined items adapted from
Selzer and Castro (
2023) regarding perceived immersiveness together with the Perceived Learning Questionnaire (
Blau & Caspi, 2010), a validated measure of cognitive and socio-emotional learning. The perceived learning questionnaire, originally designed for higher education students, was adapted for younger participants to ensure age-appropriate comprehension and relevance. The adaptation process was conducted by two learning technology experts, who simplified language, clarified item phrasing, and ensured conceptual equivalence with the original constructs. The adapted version was subsequently piloted with a small sample of 5 students to evaluate clarity, comprehensibility, and appropriateness of the items. Minor adjustments were made based on the pilot feedback prior to data collection. Students completed the perceived learning items using a six-point Likert-type scale (1 = Not at all to 6 = Very much) (see
Appendix D for the full questionnaire). In addition, the questionnaire included a single self-assessment item asking students to estimate how well they would succeed on a test about the lesson topic. This item used a separate 0–10 numeric rating scale (0 = Not at all/very low success, 10 = Very well/very high success), a format selected to capture a performance-oriented self-estimation with greater response granularity and to keep it conceptually distinct from the Likert-scaled perceived learning constructs.
4.4. Research Procedure
Data collection took place over two academic years across the 21 participating schools. Following institutional ethical approval and authorization from the Chief Scientist of the Ministry of Education, participants were recruited through designated educational-technology groups on social media and through two research and development laboratories within the Ministry of Education that focus on the integration of DVR, IR, and VR technologies. All data were collected in accordance with the ethical guidelines of the Ministry of Education, including required school-level and participant consent procedures.
The research unfolded in three sequential phases. Phase 1 involved conducting semi-structured interviews with 31 teachers and instructional designers to understand how immersive technologies were integrated into lesson planning and what pedagogical affordances they perceived. Phase 2 consisted of 42 non-participant classroom observations of immersive lessons, documented using the structured observation protocol. Each observed lesson was accompanied by field notes capturing environmental setup, student interaction, and instructional flow. Phase 3 included administering the 252 student questionnaires immediately after immersive technology lessons. All data were anonymized, and pseudonyms were assigned to schools and participants.
4.5. Data Analysis and Measurement Model Validation
The teacher interviews were recorded, transcribed, and systematically analyzed using a bottom-up, inductive approach to identify aspects related to students’ learning. The unit of analysis was each teacher statement referring to the student-related affordances or challenges of instruction in immersive technology environments. Each identified affordance or challenge was subsequently mapped to a corresponding category within the CAMIL model; when no suitable category existed, a new one was created. Coding across categories was not mutually exclusive, as a single statement could be assigned to multiple categories and could also be coded simultaneously as an affordance and a challenge. For example, the statement “The lesson always takes place in work-stations … each student can progress at their own pace, and this setting encourages autonomous learning” (VRT6) was coded under Agency, as students control their pace and actions, and under Self-Regulation, as they monitor and manage their learning independently. In total, 459 statements were coded as affordances and 237 as challenges. The final set of categories identified for affordances and challenges was identical, enabling direct comparison across the two dimensions. The present analysis focuses exclusively on affordances and challenges related to student learning, although additional themes—such as challenges relating to teachers’ lesson preparation time and other instructional factors—also emerged. To ensure inter-rater reliability, a second rater with expertise in learning technologies and qualitative research methods independently coded 25% of the interview data. Cohen’s Kappa coefficients were calculated for each technology for both affordances and challenges, indicating a substantial level of agreement between coders (κ = 0.73–0.87).
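Inter-rater agreement of this kind can be computed directly from the two raters’ parallel codings of the same statements. A minimal sketch with scikit-learn, using toy labels rather than the study data, is:

```python
# Sketch: Cohen's Kappa for two raters coding the same statements
# (toy labels for illustration only, not the study data).
from sklearn.metrics import cohen_kappa_score

rater1 = ["affordance", "affordance", "challenge", "challenge",
          "affordance", "challenge", "affordance", "affordance"]
rater2 = ["affordance", "affordance", "challenge", "affordance",
          "affordance", "challenge", "affordance", "challenge"]

# Kappa corrects raw agreement for agreement expected by chance
kappa = cohen_kappa_score(rater1, rater2)
print(round(kappa, 3))
```

Values between roughly 0.61 and 0.80 are conventionally read as substantial agreement, which is how the reported range of 0.73 to 0.87 is interpreted.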
The classroom observations generated detailed field notes capturing the flow of each lesson’s activities. These descriptions in the field notes were segmented into 161 cohesive learning activities, each representing a distinct instructional phase or event. Of these, 91 activities directly involved the use of immersive technologies and were analyzed for affordances and challenges. The analysis of these activities followed the same procedure as the interview data: a bottom-up, inductive approach was used to identify all affordances and challenges related to student learning, and each was subsequently mapped to the relevant CAMIL category, with new categories created when necessary. Because the duration of individual activities varied substantially, each identified affordance or challenge was weighted by the length of time (in minutes) during which it occurred. The unit of analysis was therefore the total minutes in which the affordance or challenge was observed. To establish inter-rater reliability, a second rater independently coded the observation data. Cohen’s Kappa coefficients ranged from 0.65 to 0.75, indicating a substantial level of agreement between raters.
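The time-weighting described above amounts to summing, per category, the minutes of activity during which a code was observed. A minimal sketch, with hypothetical category names and durations, is:

```python
from collections import defaultdict

# (category, minutes_observed) pairs from hypothetical observation logs
coded_segments = [
    ("Presence", 12), ("Agency", 8), ("Presence", 5),
    ("Cognitive Load", 7), ("Agency", 15),
]

# Weight each coded affordance/challenge by its observed duration
weighted = defaultdict(int)
for category, minutes in coded_segments:
    weighted[category] += minutes

print(dict(weighted))  # {'Presence': 17, 'Agency': 23, 'Cognitive Load': 7}
```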
To validate the constructs of the student questionnaire, a confirmatory factor analysis (CFA) was conducted in AMOS. The four-factor model included Immersion (Questions 1–3), Cognitive learning (Questions 4–10), Emotional learning (Questions 11–13), and Social learning (Questions 14–16). Question 12 was excluded due to consistently low, non-significant loadings. All constructs were modeled as reflective latent variables, with covariances allowed. The model was tested first on the full dataset (N = 252) and then separately across the three immersive technology groups (DVR, IR, and VR). CFA evaluates whether the hypothesized factor structure provides an adequate representation of the observed item responses; model evaluation focused on (a) global fit indices and (b) standardized factor loadings to evaluate the strength of relationships between items and their corresponding latent constructs, interpreted using commonly recommended guidelines (e.g.,
Brown, 2015).
Model fit was evaluated using
χ2,
χ2/df, CFI, TLI, RMSEA, and SRMR and interpreted against the following commonly used thresholds:
χ2/df < 3, CFI/TLI ≥ 0.90, RMSEA ≤ 0.08, and SRMR ≤ 0.08 (
Hu & Bentler, 1999). Because such cutoffs are heuristic rather than absolute, especially in multifactor models, incremental fit indices were considered in relation to model complexity and the overall pattern of fit indices (
Brown, 2015;
Kline, 2023;
Marsh et al., 2004). Although
χ2 was significant (
p < 0.001), the
χ2/df ratio indicated good fit (1.77), RMSEA was 0.056 (90% CI [0.047, 0.064]), and SRMR was 0.08. While the CFI (0.893) and TLI (0.866) were marginally below 0.90, they are considered acceptable in light of the model’s complexity and the broader fit profile. Overall, the model demonstrated adequate fit. Factor loadings are presented in
Supplementary Table S1.
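The threshold-based evaluation above can be expressed as a simple mechanical check. A minimal sketch applying the Hu and Bentler style heuristics to the reported indices (the helper function is ours, not part of any fitting software) is:

```python
def check_fit(chi2_df, cfi, tli, rmsea, srmr):
    """Check fit indices against commonly used heuristics:
    chi2/df < 3, CFI/TLI >= 0.90, RMSEA <= 0.08, SRMR <= 0.08."""
    return {
        "chi2/df": chi2_df < 3,
        "CFI": cfi >= 0.90,
        "TLI": tli >= 0.90,
        "RMSEA": rmsea <= 0.08,
        "SRMR": srmr <= 0.08,
    }

# Reported values for the four-factor model
fit = check_fit(chi2_df=1.77, cfi=0.893, tli=0.866, rmsea=0.056, srmr=0.08)
print(fit)  # CFI and TLI fall marginally below the 0.90 heuristic
```

As the output shows, χ2/df, RMSEA, and SRMR meet the heuristics while CFI and TLI fall just short, which is why the overall fit profile rather than any single cutoff guides the verdict of adequate fit.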
Across groups, most items demonstrated moderate to strong loadings (>0.50), with Cognitive and Social learning items loading particularly strongly. Question 3 (Immersion in VR) and Question 16 (Social across groups) were weaker, but both were retained for theoretical coverage of the constructs.
Regarding the self-assessment single item, the distribution was slightly negatively skewed (skewness = −1.21), with most students reporting high self-assessed knowledge (M = 7.25, SD = 2.55, Median = 8.00).
Descriptive statistics were computed for each perceived learning construct using composite scale scores. Across the full sample (N = 252), Emotional learning (M = 4.72, SD = 1.17), Social learning (M = 4.60, SD = 1.24), Cognitive learning (M = 4.24, SD = 1.11), and Immersion (M = 3.70, SD = 1.13) reflected moderate to high student perceptions. Internal consistency reliability was evaluated using Cronbach’s alpha, and coefficients were acceptable for all constructs: Immersion (α = 0.78), Cognitive learning (α = 0.86), Emotional learning (α = 0.73), and Social learning (α = 0.81). Full descriptive statistics by group are presented in
Supplementary Table S2.
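Cronbach’s alpha for a composite scale can be computed directly from a respondents-by-items matrix. A minimal numpy sketch, using toy Likert responses rather than the study data, is:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy 6-point Likert responses (5 respondents x 3 items)
toy = [[4, 5, 4], [3, 3, 2], [5, 6, 5], [2, 3, 3], [4, 4, 5]]
print(round(cronbach_alpha(toy), 2))  # 0.93
```

Values of 0.70 and above are conventionally treated as acceptable internal consistency, the benchmark applied to the four constructs here.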
Normality was evaluated using Shapiro–Wilk tests and complemented by skewness and kurtosis indices as descriptive indicators of distributional deviation (
Orcan, 2020;
Razali & Wah, 2011). Given the sensitivity of normality tests in moderate-to-large samples, these indices were used to assess the practical magnitude of deviations from normality. Shapiro–Wilk tests were significant for most variables, suggesting deviations from a normal distribution. However, skewness and kurtosis values for all constructs were within the ±2 threshold (
George & Mallery, 2010), indicating that deviations were mild. With relatively large and balanced group sizes (95 in DVR, 81 in IR, 76 in VR), ANOVA/MANOVA procedures are considered robust to moderate violations of normality (
Blanca et al., 2017;
Glass et al., 1972;
Schmider et al., 2010), supporting the use of parametric analyses.
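These distributional checks follow standard routines. A minimal scipy sketch, run on simulated scale scores rather than the study data, is:

```python
import numpy as np
from scipy.stats import shapiro, skew, kurtosis

# Simulated composite scale scores (not the study data)
rng = np.random.default_rng(42)
scores = rng.normal(loc=4.2, scale=1.1, size=252)

w_stat, p_value = shapiro(scores)   # formal normality test
sk = skew(scores)                   # asymmetry of the distribution
ku = kurtosis(scores)               # excess kurtosis; 0 for a normal

# Mild deviation: |skewness| and |kurtosis| within the +/-2 heuristic
print(abs(sk) < 2 and abs(ku) < 2)  # True
```

With samples of this size the Shapiro–Wilk test flags even trivial departures, which is why the skewness and kurtosis magnitudes are used to judge whether deviations are practically consequential.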
5. Findings
The findings are presented in accordance with the three research questions, combining qualitative and quantitative analyses to provide a comprehensive understanding of learning with immersive technologies. The first section reports on the affordances and challenges of teaching with DVR, IR, and VR as identified through interviews and classroom observations. The second section examines students’ perceived learning outcomes across immersion, cognitive, and socio-emotional dimensions. The final section explores how technology type, immersive duration, and activity agency level moderate the relationships between students’ engagement and self-assessment.
5.1. Student Learning Affordances and Challenges in Immersive Technologies
To address the first research question, student-learning affordance and challenge categories were identified inductively from both teacher interviews and classroom observations and then mapped to CAMIL categories where applicable.
Table 3 provides an overview of the resulting category set and includes illustrative statements or activities selected to represent each category. Categories aligned with CAMIL are labeled (CAMIL). Statements are tagged by data source (I = interview; O = observation) and by case/lesson identifier. “N/A” indicates that no excerpt was coded for that category in the dataset.
As shown in
Table 3, several categories—
Agency, Presence, Motivation, Technology Skills, Peer Collaboration—emerged as both affordances and challenges, illustrating that the same learning dimensions could either facilitate or hinder students’ experiences depending on the context of implementation. As noted in the examples above,
Agency and
Presence were not solely products of the technological affordances but rather emerged from the interaction between lesson design and technology—for instance, when most of the class remained outside the immersive room because of limited physical space or because the teacher intentionally designed the lesson that way.
Self-Efficacy, Self-Regulation, and Embodiment appeared only as affordances; notably, their absence did not manifest as explicit challenges. In contrast,
Cognitive Load and Language Factors were identified exclusively as challenges, which is consistent with their inherent association with learning strain and comprehension barriers. Most categories observed in both interviews and classroom data corresponded with the core components of the CAMIL model; however, four additional categories extended beyond the model’s original scope.
Language Factors emerged primarily in interviews, reflecting the difficulties faced by non-native English speakers and culturally diverse student groups.
Technology Skills were often necessary for navigating multiple devices and complex software environments—an aspect not addressed in CAMIL.
Peer Collaboration also proved critical, particularly in immersive room settings designed for group learning but constrained by technical or spatial limitations.
Physiological Factors such as eye strain, nausea, and fatigue were reported in VR contexts, occasionally leading students to disengage from the activity. These additional dimensions suggest that while CAMIL effectively captures key psychological mechanisms of immersive learning, it may not fully encompass the practical, social, and physical factors that shape students’ experiences in classroom-based immersive environments.
The findings in
Table 4 are derived from the interview dataset. Counts indicate the number of coded interview statements assigned to each category within each technology (DVR, IR, VR), reported separately for affordances and challenges. Percentages represent within-technology proportions (i.e., the share of statements in that technology and valence coded to the category). Because statements were assigned to a single technology while the coding was non-exclusive—meaning that some statements received multiple codes—category totals within a technology may exceed 100%. The table also reports chi-square tests comparing category distributions across technologies; where significant, post hoc pairwise comparisons are reported immediately after the corresponding
χ² statistic.
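The omnibus chi-square comparisons reported in Table 4 can be sketched in a few lines; the counts below are hypothetical (not the study’s data), and `scipy` is assumed to be available:

```python
# Illustrative 2 x 3 chi-square test of one category's distribution
# across technologies (hypothetical counts, not the study's data).
from scipy.stats import chi2_contingency

# Rows: statements coded / not coded to the category;
# columns: DVR, IR, VR.
table = [[30, 12, 14],
         [70, 88, 86]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```

A significant result at this omnibus step would then motivate post hoc pairwise comparisons between technology pairs, as reported in the table.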
As shown in
Table 4, teachers’ interview statements indicated several technology-related differences in student learning affordances and challenges. For affordances, DVR was significantly higher than IR and VR in Self-Regulation, and it was also significantly higher than IR in Student Technology Skills and Peer Collaboration. In contrast, Presence affordances were significantly higher in IR and VR than in DVR.
On the challenge side, Agency challenges were significantly higher in IR and VR than in DVR, and Presence challenges were significantly higher in VR than in DVR. Teachers also reported Motivation challenges significantly more often in DVR than in IR and VR, and Peer Collaboration challenges were significantly higher in DVR and VR than in IR. Language Barrier and Student Technology Skills challenges showed marginal overall differences across technologies. Other between-technology differences in
Table 4 were not statistically significant. Overall, the teachers’ accounts portray a nuanced balance: immersive technologies powerfully enhance students’ motivation and presence but can also undermine agency and sustained focus, particularly in the more immersive environments.
To complement the self-reported teacher perceptions collected in interviews, the classroom observations provide an external lens on how learning affordances and challenges were enacted in practice.
Table 5 summarizes the observation-based distribution of student learning affordances and challenges across technologies using observed minutes as the unit of analysis. For each technology, the table reports the total number of coded minutes assigned to each category (
n) and the corresponding within-technology percentages (i.e., minutes in that category divided by the total coded minutes for that technology). Because the coding was non-exclusive (i.e., observation segments could be assigned to more than one category), category totals within a technology may sum to more than 100%. The table also reports chi-square tests comparing category distributions across technologies; where significant or marginal, post hoc pairwise comparisons are reported immediately after the corresponding
χ² statistic.
Consistent with the interview findings, the classroom observations corroborated and further detailed how learning affordances and challenges unfolded across the immersive technologies. As shown in
Table 5, the observation data revealed clear modality-related patterns in how learning affordances and challenges unfolded during immersive lessons. Across affordances, DVR showed significantly more observed minutes of Agency, Self-Regulation, Self-Efficacy, Student Technology Skills, and Peer Collaboration than both IR and VR, indicating that DVR lessons were characterized by more sustained autonomous task engagement and technology-navigation behaviors. In contrast, Presence affordances were significantly higher in VR than in IR and DVR, reflecting stronger immersion-related behavioral indicators in the more immersive settings. Motivation affordances also differed significantly across technologies, with VR and DVR exceeding IR; additionally, VR was significantly higher than DVR.
On the challenge side, Agency challenges were significantly higher in IR than in VR, and higher in VR than in DVR, suggesting that constraints on student agency were most evident in IR and least evident in DVR. Presence challenges were significantly higher in IR than in both VR and DVR, while Motivation challenges were significantly higher in VR and DVR than in IR, and higher in VR than in DVR. Peer Collaboration challenges were significantly higher in DVR than VR, and higher in VR than IR. Finally, Cognitive Load challenges and Physical and psychosocial challenges were observed only in specific modalities (Cognitive Load in IR; Physical and psychosocial in VR). All other between-technology differences in
Table 5 were not statistically significant.
5.2. Effects of Technology on Students’ Learning Perceptions
Following the qualitative analysis of affordances and challenges, the next stage of the study addressed the quantitative research questions examining students’ perceptions of learning outcomes across the three immersive technologies.
Before examining the quantitative research questions, preliminary analyses tested whether background variables influenced student perceptions. A two-way multivariate analysis of covariance (MANCOVA) examined the effects of Technology (DVR, IR, VR) and student age on the four perceived learning outcomes, with teacher experience as a covariate. Although the covariates were included to adjust for background variability, the present paper reports only the main effects of technology, which were the focus of the research question. Significant multivariate effects were followed by univariate ANCOVAs and Bonferroni-corrected pairwise comparisons to identify group differences. Results revealed a significant multivariate effect for Technology, Wilks’ Λ = 0.820, F(8, 480) = 6.26,
p < 0.001, ηp² = 0.094;
Table 6 presents the follow-up univariate ANCOVAs for the main effect of technology on all four outcomes.
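As a methodological sketch, this multivariate step can be approximated with `statsmodels` on simulated data; all variable names (`tech`, `age`, `teacher_exp`, the four outcome columns) are placeholders rather than the study’s dataset, and the covariate handling is simplified:

```python
# MANOVA-style multivariate test of a technology factor with covariates,
# fitted on simulated placeholder data (not the study's dataset).
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 252  # 95 DVR + 81 IR + 76 VR, matching the reported group sizes
df = pd.DataFrame({
    "tech": rng.choice(["DVR", "IR", "VR"], n),
    "age": rng.integers(10, 16, n).astype(float),
    "teacher_exp": rng.integers(1, 21, n).astype(float),
})
# Four perceived-learning outcomes (random here, for illustration only).
for col in ["immersion", "cognitive", "emotional", "social"]:
    df[col] = rng.normal(3.5, 0.8, n)

mv = MANOVA.from_formula(
    "immersion + cognitive + emotional + social ~ C(tech) + age + teacher_exp",
    data=df,
)
res = mv.mv_test()
wilks = res.results["C(tech)"]["stat"].loc["Wilks' lambda", "Value"]
print(f"Wilks' lambda for Technology: {wilks:.3f}")
```

With random outcomes the statistic sits near 1; in the study’s data the reported value was Λ = 0.820, and significant multivariate effects were followed by univariate ANCOVAs.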
As seen in
Table 6, Technology significantly affected all four outcomes, with the largest effect sizes for Immersion (ηp² = 0.103) and Cognitive learning (ηp² = 0.085).
Table 7 presents estimated marginal means for learning perception outcomes across technology types. Bonferroni-corrected pairwise comparisons identify significant differences between conditions.
As shown in
Table 7, technology type was associated with significant differences in perceived learning outcomes. For immersion, both DVR and VR were significantly higher than IR. For cognitive learning, DVR was significantly higher than both IR and VR, and VR was significantly higher than IR. For emotional learning, DVR was significantly higher than IR, while VR did not differ significantly from either DVR or IR. For social learning, both DVR and VR were significantly higher than IR. No other pairwise differences were significant. Taken together, these findings suggest that DVR was the most consistently advantageous condition across the perceived learning dimensions.
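The Bonferroni correction behind these pairwise comparisons simply multiplies each raw p-value by the number of comparisons (three pairs per outcome), capping at 1. A minimal sketch with hypothetical p-values:

```python
# Bonferroni adjustment for three pairwise comparisons per outcome
# (DVR-IR, DVR-VR, IR-VR); the raw p-values are hypothetical.
from statsmodels.stats.multitest import multipletests

raw_p = [0.004, 0.020, 0.300]
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")
print(list(adj_p))    # each raw p multiplied by 3, capped at 1.0
print(list(reject))   # which comparisons survive the correction
```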
Figure 1 provides a visual summary of the adjusted mean differences across technologies for each perceived learning outcome reported in
Table 7.
5.3. Moderation of Engagement Effects on Self-Assessment by Technology, Immersion Duration, and Activity Agency Level
To examine whether the relationship between students’ engagement (immersion, cognitive, emotional, and social) and their self-assessment depends on the type of technology used, a series of moderated regression models was estimated using PROCESS (Model 2). Technology (1 = DVR, 2 = IR, 3 = VR) served as the primary moderator, while either Immersion Duration or Activity Agency Level was entered as a secondary moderator.
Across all models, the main effects of engagement variables on self-assessment were positive. Cognitive and Emotional engagement showed consistent significant effects, while perceived Immersion and Social engagement were positive, but their significance varied across models. Moderation was examined by adding interaction terms to test whether the association between each engagement predictor and self-assessment differed as a function of the moderators; significant interaction effects indicate that the relationship varies across levels of the moderator, such as technology type, immersion duration, or student agency (
Aiken & West, 1991;
Hayes, 2018). This approach enables the identification of conditional effects within complex learning environments.
Table 8 reports, for each engagement predictor and model, the main effect (B, p), the overall explained variance (R²), and the incremental variance explained by each interaction (ΔR²): the Technology × predictor interaction and the secondary moderator × predictor interaction. The final column, labeled “Both,” reports the joint contribution of these two interactions considered together.
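The ΔR² logic can be illustrated with an ordinary-least-squares analogue of PROCESS Model 2, fitted on simulated data (all variable names are placeholders, not the study’s variables): fit the model without the predictor × moderator interactions, fit it with them, and take the difference in R².

```python
# OLS analogue of the moderation analysis (PROCESS Model 2 style):
# one engagement predictor, two moderators, and their interactions.
# Data and variable names are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 252  # matches the total sample (95 + 81 + 76)
df = pd.DataFrame({
    "engagement": rng.normal(0, 1, n),
    "tech": rng.choice(["DVR", "IR", "VR"], n),
    "duration": rng.normal(0, 1, n),
})
# Outcome built with a genuine engagement x duration interaction.
df["self_assessment"] = (
    0.8 * df["engagement"] + 0.3 * df["duration"]
    + 0.4 * df["engagement"] * df["duration"]
    + rng.normal(0, 1, n)
)

base = smf.ols("self_assessment ~ engagement + C(tech) + duration", df).fit()
full = smf.ols(
    "self_assessment ~ engagement * C(tech) + engagement * duration", df
).fit()
delta_r2 = full.rsquared - base.rsquared  # incremental variance explained
print(f"delta R^2 from interactions = {delta_r2:.3f}")
```

A significant ΔR² at this step indicates that the engagement–self-assessment slope varies across levels of the moderators, which is then probed with conditional effects.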
As shown in
Table 8, moderation effects differed systematically across engagement domains and moderator combinations. To aid interpretation, results are summarized below by engagement predictor, with conditional effects at the mean level of Immersion Duration reported for each technology condition.
Perceived Immersion → Self-assessment: When Immersion Duration was included as the second moderator (
Table 8, Model 1), the positive effect of immersion on self-assessment was strongest in the DVR condition (
B = 1.429,
p < 0.001) and diminished as Immersion Duration increased, becoming nonsignificant in IR (
B = 0.169,
p = 0.547) and VR (
B = 0.248,
p = 0.425) at mean levels of Immersion Duration. When Activity Agency Level was included as the second moderator (
Table 8, Model 2), however, neither Technology nor Activity Agency Level significantly altered the immersion–self-assessment slope. Thus, immersion effects were more sensitive to time spent in immersive activities than to activity agency.
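The conditional (simple) slopes quoted for each technology follow the standard form b1 + b3·W (Aiken & West, 1991): the predictor’s coefficient plus the interaction coefficient times the moderator value. A minimal illustration with hypothetical coefficients:

```python
# Conditional (simple) slope of a predictor X on an outcome Y at a
# given moderator value w: slope(w) = b_x + b_xw * w.
# The coefficients below are hypothetical, for illustration only.
def conditional_slope(b_x: float, b_xw: float, w: float) -> float:
    """Slope of X on Y when the moderator equals w."""
    return b_x + b_xw * w

# With a standardized moderator, w = 0 is the mean and w = +/-1 is
# one SD above/below it.
for w in (-1.0, 0.0, 1.0):
    print(f"w = {w:+.1f}: slope = {conditional_slope(0.9, -0.35, w):.2f}")
```

This is why a positive main effect can be significant in one condition yet nonsignificant at higher moderator levels, as in the immersion results above.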
Cognitive engagement → Self-assessment: The cognitive predictor showed a consistent positive effect in DVR (
B = 1.865,
p < 0.001), while effects in IR (
B = 0.620,
p = 0.025) and VR (
B = 0.829,
p = 0.005) were weaker but remained statistically significant (
Table 8, Model 3). Activity Agency Level did not substantially change these effects (
Table 8, Model 4).
Emotional engagement → Self-assessment: Emotional engagement strongly predicted self-assessment in DVR (
B = 1.960,
p < 0.001). In IR (
B = 0.598,
p = 0.015) and VR (
B = 0.630,
p = 0.017), the effects remained significant but were reduced in magnitude, especially when Immersion Duration was high (
Table 8, Model 5). A small Technology × Activity Agency Level interaction was detected, but Activity Agency Level itself did not meaningfully change the overall pattern (
Table 8, Model 6).
Social engagement → Self-assessment: Social engagement displayed the largest moderation effect. In DVR, social engagement was a strong positive predictor (
B = 1.723,
p < 0.001), whereas in IR (
B = 0.112,
p = 0.646) and VR (
B = −0.018,
p = 0.943) the effect was nonsignificant at mean or higher levels of Immersion Duration (
Table 8, Model 7). By contrast, when Activity Agency Level was used as the secondary moderator, neither Technology nor Activity Agency Level contributed consistent interaction effects (
Table 8, Model 8).
Across models, the ΔR² values in
Table 8 indicate that including the interaction terms significantly improved the explained variance, particularly for Social engagement with Immersion Duration (ΔR² = 0.074, F = 7.79, p < 0.001). Overall, DVR consistently produced the strongest effects across all engagement types (all
B’s > 1.42, all
p’s < 0.001), while IR and VR yielded weaker effects. Most notably, the magnitude of Social engagement effects varied dramatically by technology: strongly predictive in DVR but negligible and nonsignificant in both IR and VR conditions, indicating that the relationship between social engagement and self-assessment is strongly technology-dependent.
6. Discussion
The findings addressing the first research question regarding learning affordances and challenges across DVR, IR, and VR revealed that they align broadly with the Cognitive Affective Model of Immersive Learning (CAMIL;
Makransky & Petersen, 2021), while also extending its scope to account for classroom-based implementation.
CAMIL conceptualizes learning outcomes in immersive environments as driven by two primary psychological mechanisms—presence and agency—mediated by self-regulation, motivation, self-efficacy, cognitive load, and embodiment. The present findings support this framework, with agency and presence emerging as the prominent affordances across all three immersive technologies, though expressed differently in each. Statements depicting students as “more active, learning more, and making greater progress” (I-IRT2) and reports that immersive rooms “fully surrounded” (I-IRT1) learners, eliciting a sense of being physically present, reflect these mechanisms. However, these same categories also manifested as challenges, depending on contextual conditions. Presence, for instance, could be heightened when students were fully immersed in the environment but diminished when spatial or technical constraints excluded some learners from the interactive area. Similarly, agency was supported when task design emphasized self-paced exploration yet was constrained when sensory overload or teacher-directed pacing limited autonomy. These patterns echo previous findings—as
Makransky and Petersen (
2021) emphasize, “the CAMIL takes the theoretical perspective that media interacts with method” (p. 941)—indicating that immersive technologies’ psychological affordances are not automatically realized but depend on instructional mediation and contextual fit (
Makransky et al., 2019;
Porat et al., 2023;
Radianti et al., 2020;
Schwartz & Blau, 2025).
Within this CAMIL-based logic, motivation and peer collaboration also appeared as both affordances and challenges across modalities (
Table 4 and
Table 5). Notably, teacher interviews indicated comparatively frequent Motivation challenges in DVR, alongside a similar pattern for peer collaboration in the observations. Because DVR was the least immersive modality, and the one that most readily supported peer interaction, its lessons in this study were often implemented as highly student-centered, constructivist activities. This likely placed greater responsibility on learners to sustain effort, self-regulate, and maintain motivation (
Vosniadou et al., 2024;
Wijnia et al., 2024).
6.1. Data Source Sensitivity of Psychological Constructs
Across data sources, some CAMIL psychological factor categories appeared more salient in teachers’ interviews than in the observation-based coding. For example, teachers occasionally described Cognitive Load as a challenge in DVR and VR and Self-Efficacy as a challenge in DVR (
Table 4), yet these categories were not coded as challenges in the corresponding observation data (
Table 5). Such divergence is common in mixed-methods triangulation because interviews can foreground internal experiences and cumulative impressions, whereas observation protocols rely on overt, time-coded indicators that may not capture subjective states directly, particularly in perceptually rich environments where cognitive load can be difficult to infer from behavior alone (
Krieglstein et al., 2023;
Skulmowski, 2023). Similarly, embodiment, a learner-based largely subjective sensation, typically requires dedicated assessment measures (
Crone & Kallen, 2024) rather than being inferred from teacher reports or classroom behavior, which explains why it practically did not emerge as either an affordance or a challenge in this dataset. In the same vein, Self-Efficacy and Self-Regulation appeared mainly as affordances, as regulation and confidence difficulties were probably mitigated in real time or expressed indirectly through categories such as Motivation or Agency.
6.2. Variation by Modality: Balancing Immersion and Control
Differences across the three technologies show a recurring trade-off between sensory immersion and cognitive control. DVR environments were associated with high levels of agency, self-regulation, and collaboration, suggesting that moderate immersion allows teachers and students to maintain a productive balance between engagement and control (
Schwartz & Blau, 2025). In contrast, IR and VR settings were associated with stronger presence and motivation but also heightened challenges, including physical discomfort and diminished task regulation. This pattern suggests an immersion–control continuum, in which increasing sensory fidelity intensifies experiential engagement while simultaneously demanding greater self-regulatory resources and pedagogical orchestration (
Makransky et al., 2019;
Parong & Mayer, 2018).
An additional dimension relates to teacher comfort and instructional design. Teachers often seemed more confident and flexible when designing lessons for less interactive modalities such as DVR, where familiar interfaces and predictable lesson structures reduced perceived instructional risk. This tendency may indirectly influence student affordances: when teachers design more coherent or better-scaffolded activities in lower-immersion environments, agency and learning structure are more effectively realized (
Schwartz & Blau, 2025). Conversely, in high-immersion environments, limited teacher experience and the complexity of managing spatial and technical variables may restrict the effective use of immersive affordances, reflecting evidence that technological self-efficacy and design confidence condition how teachers translate technological affordances into meaningful learning (
X. P. Lin et al., 2024;
Santilli et al., 2025;
Schwartz et al., 2023).
6.3. Extending CAMIL: The Role of Prior Student Characteristics and Knowledge
While most identified categories correspond to CAMIL’s psychological constructs, three additional domains—language factors, student technology skills, and physiological factors—represent student-level characteristics that condition how learners experience and respond to immersive environments. Prior student knowledge and skills influence their self-efficacy and learning in innovative environments, which in turn affect smooth navigation and sustained engagement (
Han et al., 2023;
X. P. Lin et al., 2024;
Mousavi et al., 2023). Physiological factors, arising from complex interactions among student dispositions, software and hardware design, and instructional method, contribute to fatigue or motion discomfort that constrains persistence in VR activities and, consequently, limits students’ ability to learn effectively (
Souchet et al., 2023;
Stauffert et al., 2020). These findings suggest extending CAMIL by incorporating learner-specific characteristics into theoretical accounts of immersive learning, as they shape how technological affordances are realized in practice.
6.4. Social and Pedagogical Dimensions Beyond CAMIL
The realization of these social affordances appears to depend on a reciprocal relationship between instructional intent and environmental constraints. While teacher choices regarding grouping are decisive in mediating social features (
X. P. Lin et al., 2024), this study suggests that environmental reliability often dictates those very choices. The finding that social engagement significantly predicted self-assessment only in the DVR condition (
Table 8) likely reflects a strategic response to the “social friction” found in IR and VR settings. When faced with technical instability or spatial limitations (
Table 3), teachers may proactively limit complex social interactions to maintain classroom control, effectively “designing out” the technology’s collaborative potential. In contrast, the naturalness and reliability of the DVR environment allowed teachers the instructional “capacity” to facilitate meaningful peer exchange (
Kock, 2004) without the risk of technical disruption. This reinforces the need to include social co-presence in the revised CAMIL, as it acknowledges that social engagement is not just a psychological outcome but a pedagogical choice conditioned by the spatial and technical readiness of the classroom (
Chun et al., 2022).
6.5. Students’ Perceived Learning Outcomes Across Immersive Technologies
As previously discussed, perceived learning constitutes an important psychological outcome in immersive environments, reflecting students’ motivation, persistence, and self-evaluated progress. The results for the second research question, regarding students’ perceived learning across the immersive technologies, indicate significant differences across technologies, with DVR producing the highest ratings on all four dimensions—immersive perception and cognitive, social, and emotional learning—while IR consistently scored lowest and VR ranked between the two. These findings indicate that higher immersion alone does not enhance learners’ sense of learning; rather, as proposed in CAMIL, meaningful learning emerges from the interaction between immersive affordances and learners’ sense of agency.
Cognitive perceptions were most positive in DVR lessons, possibly stemming from reduced sensory load and familiar interfaces supporting comprehension and reflection (
Jensen & Konradsen, 2018;
Parong & Mayer, 2018).
Emotional learning perceptions—reflecting interest and enjoyment—were likewise stronger in DVR, suggesting that these perceptions are based on a sense of agency, competence and control rather than presence and immersion intensity (
X. P. Lin et al., 2024). The
social dimension showed a similar pattern: students reported greater interaction and collaboration in DVR, likely because synchronous engagement and teacher facilitation were easier to sustain in a familiar, low-latency environment that supported more natural communication (
Kock, 2004;
Schwartz et al., 2024).
In contrast, IR showed consistently lower ratings across all perceived learning dimensions (
Table 7), suggesting a disconnect between the technology’s intended purpose, enabling a shared immersive learning experience, and its practical classroom enactment. Notably, perceived immersion ratings were significantly higher in both DVR and VR than in IR, underscoring that perceived immersion and presence do not stem solely from a system’s technical affordances or nominal immersion level, but also from how immersive conditions are enacted through instructional orchestration and participation structures (
Radianti et al., 2020). This is particularly striking given that IR is positioned conceptually as a mid-immersive modality in the continuum between DVR and VR. Although IR is theoretically optimized for collective immersion, the high frequency of agency challenges identified in the observations (80.8%;
Table 5) suggests that physical and spatial constraints often hindered student engagement and, consequently, students’ perceived sense of immersion. As some activity descriptions indicated, because the interactive area was physically limited, many students remained on the periphery as passive observers (O-IR8). Consequently, the sense of presence in IR appeared fragmented: although students were physically co-located in the room, they were sometimes psychologically excluded from the digital interaction and learning. This pattern suggests that without careful spatial orchestration, the physical limitations of an immersive room can inadvertently suppress the social and cognitive affordances typically associated with higher-fidelity media (
Radianti et al., 2020;
Makransky & Petersen, 2021;
X. P. Lin et al., 2024).
6.6. Moderation of Engagement–Self-Assessment Relationships
The third research question examined how technology type, immersion duration, and activity agency level moderated the relationships between different forms of student engagement and their achievement self-assessment. Across all models, engagement—particularly cognitive and emotional engagement—positively predicted students’ self-assessed learning. Yet, the strength of these associations varied systematically with both immersion duration and the technology through which learning occurred, highlighting that engagement’s impact on perceived achievement is contingent on contextual affordances rather than uniform across modalities.
A consistent pattern emerged in which the DVR condition produced the strongest and most reliable relationships between engagement and self-assessment, while effects in IR and VR were weaker or nonsignificant, echoing the second research question’s finding of higher perceived engagement in DVR environments overall.
The role of immersion duration emerged as a significant moderator of the engagement–self-assessment relationship (
Table 8). While longer exposure strengthened the positive association between engagement and self-assessment in the DVR condition, this relationship weakened in IR and VR settings. This divergent impact suggests a temporal boundary condition for high-fidelity immersion: in lower-immersion environments, extended duration provides the necessary window for students to move beyond initial interface navigation toward deeper curricular engagement. Conversely, in the more sensory-intensive VR and IR settings, the weakening of the engagement–assessment link over time points to a process of cognitive or physiological saturation (
Souchet et al., 2023). As exposure increases, the persistent sensory demands and potential physical discomfort associated with these media (
Table 3) may eventually decouple students’ emotional engagement from their ability to reflect on and evaluate their own progress (
Stauffert et al., 2020). These findings underscore that immersion is not inherently beneficial in sustained doses; its pedagogical value is constrained by a ‘saturation threshold’ beyond which high sensory presence may transition from a learning affordance into a barrier to meaningful self-reflection (
Stauffert et al., 2020;
Souchet et al., 2023;
Radianti et al., 2020).
Surprisingly, the activity agency level did not significantly moderate engagement–self-assessment relationships, even though agency itself was central in earlier analyses. This may indicate that the agency–learning link operates indirectly through engagement rather than as a higher-order moderator. Alternatively, when technological or spatial constraints dominate (as in IR and VR), perceived autonomy may lose its impact because system control and instructional pacing are externally defined (
Zhi & Wu, 2023).
Students’ self-assessment of their learning depends more on whether they can handle the cognitive demands and on how the immersive activity is structured over time than on how immersive or interactive the technology itself is. The combination of moderate immersion, shorter exposure, and structured opportunities for interaction—as exemplified by DVR—appears to optimize the alignment between engagement and perceived achievement, once again underscoring the importance of instructional design.
7. Conclusions
Across the three research questions, the findings demonstrate that what “works” in immersive learning depends as much on instructional design and learner characteristics (e.g., language proficiency, technological skills, physiological readiness) as on the technological properties of DVR, IR, or VR themselves. These insights emerge from the teachers’ perspectives and lesson analyses as well as from students’ perceptions and engagement patterns. This study provided a rare opportunity to observe immersive technologies in real classroom conditions and triangulate participant self-report and behavior data, offering a holistic view of the pedagogical integration of immersive technologies. While
Makransky and Petersen (
2021) explicitly note that
media interacts with method, this relationship was not evident in the visual structure of the CAMIL framework.
In light of the current findings, a revised version of CAMIL presented in
Figure 2 incorporates a number of refinements to better account for the social, pedagogical, and learner-related conditions revealed in this study:
The technological factors are expanded to include
multi-user capacity and introduce a corresponding psychological affordance of
co-presence, complementing the existing constructs of presence and agency. This addition acknowledges that learning in immersive environments often depends on opportunities for interaction and shared meaning-making, not merely on individual sensory immersion, as reflected in the collaboration affordances and challenges in the findings (
Table 3,
Table 4 and
Table 5).
Instructional method and
learner characteristics moderators connect technological factors with the IVR psychological affordances. Additionally, the arrows connecting the technological factors and IVR affordances are dashed to signify that the outcome is not fixed but depends on the interaction with the instructional method and learner characteristics. This visual and conceptual clarification aligns the model with CAMIL’s own textual claim that media and method interact and foregrounds pedagogy as the mechanism through which affordances become effective. This interaction is evident throughout the findings of all three research questions and is demonstrated, for example, in the result that DVR students reported the highest levels of perceived immersion (
Table 6) even though the technology itself was the least immersive of all modalities. Learner characteristics were likewise reflected in the student-related challenges identified in both the interviews and classroom observations—particularly differences in technology skills and language barriers (
Table 4 and
Table 5)—which at times hindered learning outcomes.
Finally, the learning outcomes layer is extended to include
perceived learning alongside knowledge-based outcomes. In this study, perceived learning in immersive lessons is reflected through four complementary constructs—perceived immersion and perceived cognitive, emotional, and social learning (
Table 6,
Table 7 and
Table 8). These perceptions of the learning process represent a crucial psychological indicator that links engagement, motivation, and future learning intention.
Figure 2 presents an updated visual summary of the study framework. The elements newly added or revised in the current manuscript are highlighted in the figure to make the modifications explicit.
Together, these refinements—Sociality, Method–Media Interaction, Learner Characteristics, and Perceived Learning—preserve CAMIL’s core structure while rendering it more responsive to the social, pedagogical, and experiential realities of immersive classroom learning.
8. Limitations and Future Directions
Several limitations should be considered when interpreting the findings.
First, as a field-based classroom study, the participant groups were heterogeneous across settings (
Table 1). While this heterogeneity enhances ecological validity and supports the relevance of the findings to authentic educational contexts, it may have influenced observed patterns and limits the extent to which specific effects can be attributed to particular contextual factors. Future research could complement field-based data with controlled experimental designs that isolate specific affordances (e.g., interactivity, co-presence) to clarify causal mechanisms.
Second, the sample size within each technology condition—particularly for immersive rooms and fully immersive VR—was smaller than that of the DVR group, reflecting the uneven adoption of these technologies in schools. Future studies should expand the dataset across diverse school contexts and age groups to validate the generalizability of these findings.
Third, the study relied on self-report measures of engagement, perceived learning, and self-assessment, which capture students’ subjective experience but may be influenced by social desirability or recall bias. Integrating behavioral analytics, physiological indicators, or performance-based assessments could strengthen triangulation and provide a richer picture of learning processes in immersive environments.
Fourth, while this study identified learner characteristics and instructional design as key moderators, these constructs were examined at a general level. Future research should explore specific pedagogical variables—such as scaffolding type, collaboration structure, or reflection prompts—and their interaction with learner competencies, like digital literacy or language proficiency.
Finally, the study focused on short-term learning experiences within single lessons. Longitudinal research tracking how repeated exposure to immersive technologies influences self-regulation, perceived learning, and academic achievement over time would provide essential insight into the sustainability of these effects.
Collectively, addressing these limitations will enable future work to refine the revised CAMIL model, empirically test its new components—social co-presence, method–media interaction, and learner characteristics—and better inform the pedagogical design of immersive learning environments in real educational settings.
Supplementary Materials
The following supporting information can be downloaded at:
https://www.mdpi.com/article/10.3390/educsci16020190/s1, Table S1: Standardized factor loadings per item, construct, and group (DVR, IR, VR); Table S2: Descriptive statistics for immersion, cognitive, emotional, social learning, and self-assessment by immersive technology group.
Author Contributions
Conceptualization, E.S. and I.B.; Methodology, E.S. and I.B.; Validation, E.S. and I.B.; Formal analysis, E.S.; Investigation, E.S.; Data curation, E.S.; Writing—original draft, E.S.; Writing—review and editing, I.B.; Visualization, E.S.; Supervision, I.B.; Funding acquisition, E.S. All authors have read and agreed to the published version of the manuscript.
Funding
This research was partially supported by the Research Center for Innovation in Learning Technologies at the Open University of Israel. The authors gratefully acknowledge this support.
Institutional Review Board Statement
The study was approved by the Institutional Ethics Committee Board of the Open University of Israel (protocol code 3460, 11 December 2022; protocol code 3603, 1 October 2024) and by the Chief Scientist’s Office of the Israeli Ministry of Education (protocol code 12901, 1 December 2022; protocol code 14155, 8 September 2024).
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement
The datasets generated and/or analyzed during this study are available (in Hebrew) from the corresponding author upon reasonable request. Due to ethical and privacy considerations, the data are not publicly available.
Acknowledgments
We thank Shlomit Hadad and Liron Levy-Nadav for their valuable contributions to this research.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| DVR | Desktop Virtual Reality |
| IR | Immersive Rooms |
| VR | Virtual Reality |
Appendix A
Due to ethical restrictions regarding classroom photography, the following images are AI-generated representative visualizations (generated by ChatGPT v5.2) based on the researcher’s field observations (e.g., O-IR8). They are intended to illustrate the spatial configurations and interaction zones discussed in the study while ensuring the total anonymity of the participating students and schools.
Figure A1.
Desktop Virtual Reality (DVR) Learning Environment (generated by ChatGPT v5.2).
Description: Figure A1 provides a representative visualization of the DVR modality setup used in the participating schools. In this configuration, students access a 3D virtual environment via standard desktop computers or laptops. Interaction is mediated through a traditional keyboard and mouse interface, with the digital content displayed on a high-resolution 2D monitor. This setup allows students to work in a conventional classroom layout, utilizing existing school hardware and stable network connections to navigate virtual spaces. An example of DVR platforms used in schools is Minecraft Education.
Figure A2.
Immersive Room (IR) Learning Environment (generated by ChatGPT v5.2).
Description: Figure A2 illustrates the typical spatial arrangement for the IR technology used in the study. In this setting, digital content is projected onto three walls and the floor of a dedicated classroom space to create a shared 270-degree environment. While the technology is designed for collective immersion, the physical space is limited to a maximum of 12 students at a time. As depicted, students require no extra gear (such as headsets or glasses) to participate. This configuration frequently results in a small group of students engaging directly with the touch-based interface, while others remain on the periphery of the room as passive observers. IR systems commonly include proprietary software for selecting and authoring content; one example is TotallyInteractive.
Figure A3.
Virtual Reality (VR) Learning Environment.
Description: Figure A3 illustrates the typical spatial arrangement for the VR modality. Students are each equipped with an individual Head-Mounted Display (HMD) and a pair of wireless handheld controllers, enabling them to navigate and interact with the 360-degree digital environment. To ensure safety and physical stability while immersed, students are sometimes seated on swivel chairs, which allow them to rotate 360 degrees to explore the virtual space. While this setup provides high individual immersion, the use of headsets creates sensory isolation, which limits spontaneous peer-to-peer conversation and direct visual contact with the teacher. Examples of VR programs used in classroom settings include Engage, Cook-Out, and VirtualSpeech.
Appendix B. Interview Protocol
General questions
Tell me about yourself and your teaching experience.
Briefly describe the technological conditions in the classrooms where you integrate virtual reality technology.
What are the subject matters of the classes in which you integrate virtual reality technology?
Teacher training
Have you participated in training for integrating virtual reality technology into learning? [If yes] Please describe.
What technological skills were emphasized in the training? What was required of the participants? Did you feel it suited your needs?
What pedagogical skills were emphasized in the training? What was required of the participants? Did you feel it suited your needs?
What reference was there to the content that you would like to teach?
In what way did the training address developing skills for searching pre-prepared databases of virtual reality content?
In what way did the training relate to the construction or adaptation of virtual reality technology to your field of teaching?
What learning artifacts in the training were you required to prepare? Describe two of the learning artifacts.
Learning and teaching—lesson planning
What are the considerations for integrating immersive virtual reality into your classes? What are the advantages and disadvantages?
How are virtual reality technologies integrated into your lessons? How often?
Describe the lesson planning process when integrating virtual reality technology.
Describe the process of selecting or creating content for classes that integrate virtual reality technology. What are the considerations for choosing this particular content?
Learning and teaching with VR—lesson process
Describe a lesson in which you integrated virtual reality.
What was the purpose of the lesson?
What happened in class before the immersive experience activity?
What was your instructional role during the lesson?
Describe the students’ activity during the lesson.
Describe the interaction or collaboration between students while experiencing the immersive virtual world environment.
Describe what happened after the experience.
Describe, based on your impression, the students’ experience of the activity.
Learning Assessment
What are the ways in which you assess the student learning process while utilizing metaverses and virtual worlds?
Are students required to produce a learning product? If so—describe it.
According to what criteria do you determine the degree of success of the students?
What are the ways in which you evaluate the students’ learning experience in classes using virtual reality?
What learning outcomes and/or achievements are required by the school or Ministry of Education when integrating virtual reality in the learning?
Summary—general impression of the experience
Describe the challenges and affordances you experienced when using immersive virtual reality environments in learning.
Anything else you’d like to share?
Appendix C. Observation Protocol
General details about the lesson
Teacher’s instructional role
Class time
Knowledge Domain
Location
Technology used by teacher and students
Number of students in the class
Student Age
The content of the lesson
Purpose of the lesson
Lesson
Technology: The process of organizing the class with the equipment
Characterization of activity in immersive virtual reality environment
Cooperative (Multi Player)
Immersive
The degree of cooperation and communication between the students
Time
Description of the virtual world’s activity
What does the teacher do during the virtual activity?
Connection of the virtual reality activity to the lesson/unit goals.
The lesson outcome
General impression of student experience, challenges and affordances in the lesson.
Appendix D. Immersive, Cognitive Socio-Emotional Questionnaire
Questions 1–3: Immersive Aspects; 4–10: Cognitive Aspects; 11–13: Emotional Aspects; 14–16: Social Aspects
Questionnaire: Learning with Virtual Worlds
For each question, circle the answer that fits what you think.
1 = Not at all 2 3 4 5 6 = Very much
______________________________________________________________________________
Did you feel as if you were really inside another, virtual world during the lesson?
Did the experiences in the virtual world feel real to you?
When you were in the virtual world, were your reactions like they would be in real life?
Did you learn in this lesson?
Did you learn something new in this lesson?
Did the lesson help you remember what you learned?
Can you use what you learned in this lesson in other situations?
Did the lesson help you understand the topic better?
Did the lesson help you understand things you didn’t understand before?
Did the lesson make you think about the topic in a different way?
Was the lesson interesting?
Did the lesson feel easy for you?
Did you manage to stay focused during the lesson?
Did you enjoy working with others during the lesson?
Did you enjoy working with your classmates during the lesson?
Did you enjoy talking with the teacher during the lesson?
How sure are you that you would succeed in a test about what you learned in the lesson? (0–10): ________
What did you like most about the lesson? ____
What would you like to make better in the lesson? ______
References
- Aiken, L. S., & West, S. G. (1991). Multiple regression: Testing and interpreting interactions. Sage. [Google Scholar]
- Austermann, C., Blanckenburg, F., Blanckenburg, K., & Utesch, T. (2025). Exploring the impact of virtual reality on presence: Findings from a classroom experiment. Frontiers in Education, 10, 1560626. [Google Scholar] [CrossRef]
- Blanca, M. J., Alarcón, R., Arnau, J., Bono, R., & Bendayan, R. (2017). Non-normal data: Is ANOVA still a valid option? Psicothema, 29(4), 552–557. [Google Scholar] [CrossRef] [PubMed]
- Blau, I., & Caspi, A. (2010). Studying invisibly: Media naturalness and learning. In N. Kock (Ed.), Evolutionary psychology and information systems research. Integrated series in information systems (Vol. 24). Springer. [Google Scholar] [CrossRef]
- Breves, P., & Stein, J. P. (2023). Cognitive load in immersive media settings: The role of spatial presence and cybersickness. Virtual Reality, 27(2), 1077–1089. [Google Scholar] [CrossRef]
- Brown, T. A. (2015). Confirmatory factor analysis for applied research (2nd ed.). The Guilford Press. [Google Scholar]
- Caspi, A., & Blau, I. (2008). Social presence in online discussion groups: Testing three conceptions and their relations to perceived learning. Social Psychology of Education, 11(3), 323–346. [Google Scholar] [CrossRef]
- Caspi, A., & Blau, I. (2011). Collaboration and psychological ownership: How does the tension between the two influence perceived learning? Social Psychology of Education, 14(2), 283–298. [Google Scholar] [CrossRef]
- Chun, D. M., Karimi, H., & Sañosa, D. J. (2022). Traveling by headset: Immersive VR for language learning. CALICO Journal, 39(2), 129–149. [Google Scholar] [CrossRef]
- Coban, M., Bolat, Y. I., & Goksu, I. (2022). The potential of immersive virtual reality to enhance learning: A meta-analysis. Educational Research Review, 36, 100452. [Google Scholar] [CrossRef]
- Crone, C. L., & Kallen, R. W. (2024). Measuring virtual embodiment: A psychometric investigation of a standardised questionnaire for the psychological sciences. Computers in Human Behavior Reports, 14, 100422. [Google Scholar] [CrossRef]
- De Back, T. T., Tinga, A. M., & Louwerse, M. M. (2023). Learning in immersed collaborative virtual environments: Design and implementation. Interactive Learning Environments, 31(8), 5364–5382. [Google Scholar] [CrossRef]
- De Back, T. T., Tinga, A. M., Nguyen, P., & Louwerse, M. M. (2020). Benefits of immersive collaborative learning in CAVE-based virtual reality. International Journal of Educational Technology in Higher Education, 17(1), 51. [Google Scholar] [CrossRef]
- Di Natale, A. F., Repetto, C., Riva, G., & Villani, D. (2020). Immersive virtual reality in K-12 and higher education: A 10-year systematic review of empirical research. British Journal of Educational Technology, 51(6), 2006–2033. [Google Scholar] [CrossRef]
- Faul, F., Erdfelder, E., Buchner, A., & Lang, A. G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160. [Google Scholar] [CrossRef] [PubMed]
- George, D., & Mallery, P. (2010). SPSS for Windows step by step: A simple guide and reference, 17.0 update (10th ed.). Pearson. [Google Scholar]
- Gibson, J. J. (2014). The theory of affordances (1979). In J. J. Gieseking, W. Mangold, C. Katz, S. M. Low, & S. Saegert (Eds.), The people, place, and space reader (pp. 56–60). Routledge. [Google Scholar] [CrossRef]
- Glass, G. V., Peckham, P. D., & Sanders, J. R. (1972). Consequences of failure to meet assumptions underlying the fixed effects analyses of variance and covariance. Review of Educational Research, 42(3), 237–288. [Google Scholar] [CrossRef]
- Han, J., Liu, G., & Zheng, Q. (2023). Prior knowledge as a moderator between signaling and learning performance in immersive virtual reality laboratories. Frontiers in Psychology, 14, 1118174. [Google Scholar] [CrossRef] [PubMed]
- Hayes, A. F. (2018). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (2nd ed.). The Guilford Press. [Google Scholar]
- Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55. [Google Scholar] [CrossRef]
- Jensen, L., & Konradsen, F. (2018). A review of the use of virtual reality head-mounted displays in education and training. Education and Information Technologies, 23(4), 1515–1529. [Google Scholar] [CrossRef]
- Kablitz, D. (2025). Bridging theory and practice with immersive virtual reality: A study on transfer facilitation in VET. Education Sciences, 15(8), 959. [Google Scholar] [CrossRef]
- Kline, R. B. (2023). Principles and practice of structural equation modeling. Guilford Publications. [Google Scholar]
- Kock, N. (2004). The psychobiological model: Towards a new theory of computer-mediated communication based on Darwinian evolution. Organization Science, 15(3), 327–348. [Google Scholar] [CrossRef]
- Krieglstein, F., Beege, M., Rey, G. D., Sanchez-Stockhammer, C., & Schneider, S. (2023). Development and validation of a theory-based questionnaire to measure different types of cognitive load. Educational Psychology Review, 35, 9. [Google Scholar] [CrossRef]
- Lin, W., Chen, L., Xiong, W., Ran, K., & Fan, A. (2025). Measuring the sense of presence and learning efficacy in immersive virtual assembly training. International Journal of Mechanical Engineering Education. [Google Scholar] [CrossRef]
- Lin, X. P., Li, B. B., Yao, Z. N., Yang, Z., & Zhang, M. (2024). The impact of virtual reality on student engagement in the classroom—A critical review of the literature. Frontiers in Psychology, 15, 1360574. [Google Scholar] [CrossRef] [PubMed]
- Liu, C., Meng, S., Zheng, W., & Zhou, Z. (2025). Research on the impact of immersive virtual reality classroom on student experience and concentration. Virtual Reality, 29(2), 82. [Google Scholar] [CrossRef]
- Luo, Y., & Du, H. (2022). Learning with desktop virtual reality: Changes and interrelationship of self-efficacy, goal orientation, technology acceptance and learning behavior. Smart Learning Environments, 9(1), 22. [Google Scholar] [CrossRef]
- Makransky, G., Borre-Gude, S., & Mayer, R. E. (2019). Motivational and cognitive benefits of training in immersive virtual reality based on multiple assessments. Journal of Computer Assisted Learning, 35(6), 691–707. [Google Scholar] [CrossRef]
- Makransky, G., & Lilleholt, L. (2018). A structural equation modeling investigation of the emotional value of immersive virtual reality in education. Educational Technology Research and Development, 66(5), 1141–1164. [Google Scholar] [CrossRef]
- Makransky, G., & Petersen, G. B. (2021). The cognitive affective model of immersive learning (CAMIL): A theoretical research-based model of learning in immersive virtual reality. Educational Psychology Review, 33(3), 937–958. [Google Scholar] [CrossRef]
- Marsh, H. W., Hau, K. T., & Wen, Z. (2004). In search of golden rules: Comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler’s (1999) findings. Structural Equation Modeling: A Multidisciplinary Journal, 11(3), 320–341. [Google Scholar] [CrossRef]
- McGivney, E. (2025). Interactivity and identity impact learners’ sense of agency in virtual reality field trips. British Journal of Educational Technology, 56(1), 410–434. [Google Scholar] [CrossRef]
- Mousavi, S. A., Powell, W., Louwerse, M. M., & Hendrickson, A. T. (2023). Behavior and self-efficacy modulate learning in virtual reality simulations for training: A structural equation modeling approach. Frontiers in Virtual Reality, 4, 1250823. [Google Scholar] [CrossRef]
- Navarro, R., Vega, V., Bayona, H., Bernal, V., & Garcia, A. (2024). The relationship between perceived learning, academic performance and academic engagement in virtual education for university students. Journal of Education and e-Learning Research, 11(1), 174–180. [Google Scholar] [CrossRef]
- Orcan, F. (2020). Parametric or non-parametric: Skewness to test normality for mean comparison. International Journal of Assessment Tools in Education, 7(2), 255–265. [Google Scholar] [CrossRef]
- Parong, J., & Mayer, R. E. (2018). Learning science in immersive virtual reality. Journal of Educational Psychology, 110(6), 785–797. [Google Scholar] [CrossRef]
- Porat, E., Shamir-Inbal, T., & Blau, I. (2023). Teaching prototypes and pedagogical strategies in integrating Open Sim-based virtual worlds in K-12: Insights from perspectives and practices of teachers and students. Journal of Computer Assisted Learning, 39(4), 1141–1153. [Google Scholar] [CrossRef]
- Radianti, J., Majchrzak, T. A., Fromm, J., & Wohlgenannt, I. (2020). A systematic review of immersive virtual reality applications for higher education: Design elements, lessons learned, and research agenda. Computers & Education, 147, 103778. [Google Scholar] [CrossRef]
- Razali, N. M., & Wah, Y. B. (2011). Power comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors and Anderson-Darling tests. Journal of Statistical Modeling and Analytics, 2(1), 21–33. [Google Scholar]
- Santilli, T., Ceccacci, S., Mengoni, M., & Giaconi, C. (2025). Virtual vs. traditional learning in higher education: A systematic review of comparative studies. Computers & Education, 227, 105214. [Google Scholar] [CrossRef]
- Schmider, E., Ziegler, M., Danay, E., Beyer, L., & Bühner, M. (2010). Is it really robust? Reinvestigating the robustness of ANOVA against violations of the normal distribution assumption. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 6(4), 147–151. [Google Scholar] [CrossRef]
- Schwartz, E., Abdu, R., & Blau, I. (2024). Cultivating learning through immersive technologies in Israeli formal education classrooms. In D. Olenik-Shemesh, I. Blau, N. Geri, A. Caspi, Y. Sidi, Y. Eshet-Alkalai, & Y. Kalman (Eds.), Proceedings of the 19th Chais conference for the study of innovation and learning technologies: Learning in the digital era. The Open University of Israel. Available online: https://www.openu.ac.il/Lists/MediaServer_Documents/innovation/chais/2024/a2_4.pdf (accessed on 20 January 2026).
- Schwartz, E., & Blau, I. (2025). Teaching in virtual reality: Examining teacher roles across varying levels of immersion. In T. Bastiaens (Ed.), Proceedings of EdMedia + innovate learning (pp. 1146–1151). Association for the Advancement of Computing in Education (AACE). Available online: https://www.learntechlib.org/primary/p/226252/ (accessed on 20 January 2026).
- Schwartz, E., Shamir-Inbal, T., & Blau, I. (2023). Teacher prototypes in technology-enhanced instruction in elementary school second language acquisition: Comparing routine and emergency learning in different cultures. Computers and Education Open, 5, 100155. [Google Scholar] [CrossRef]
- Selzer, M. N., & Castro, S. M. (2023). A methodology for generating virtual reality immersion metrics based on system variables. Journal of Computer Science & Technology, 23. [Google Scholar] [CrossRef]
- Skulmowski, A. (2023). Guidelines for choosing cognitive load measures in perceptually rich environments. Mind, Brain, and Education, 17(1), 20–28. [Google Scholar] [CrossRef]
- Souchet, A. D., Lourdeaux, D., Pagani, A., & Rebenitsch, L. (2023). A narrative review of immersive virtual reality’s ergonomics and risks at the workplace: Cybersickness, visual fatigue, muscular fatigue, acute stress, and mental overload. Virtual Reality, 27(1), 19–50. [Google Scholar] [CrossRef]
- Stauffert, J. P., Niebling, F., & Latoschik, M. E. (2020). Latency and cybersickness: Impact, causes, and measures. A review. Frontiers in Virtual Reality, 1, 582204. [Google Scholar] [CrossRef]
- Thorp, S. O., Rimol, L. M., Lervik, S., Evensmoen, H. R., & Grassini, S. (2024). Comparative analysis of spatial ability in immersive and non-immersive virtual reality: The role of sense of presence, simulation sickness and cognitive load. Frontiers in Virtual Reality, 5, 1343872. [Google Scholar] [CrossRef]
- Van der Meer, N., van der Werf, V., Brinkman, W. P., & Specht, M. (2023). Virtual reality and collaborative learning: A systematic literature review. Frontiers in Virtual Reality, 4, 1159905. [Google Scholar] [CrossRef]
- Villena-Taranilla, R. V., Tirado-Olivares, S., Gutiérrez, R. C., & González-Calero, J. A. (2022). Effects of virtual reality on learning outcomes in K-6 education: A meta-analysis. Educational Research Review, 35, 100434. [Google Scholar] [CrossRef]
- Vosniadou, S., Bodner, E., Stephenson, H., Jeffries, D., Lawson, M. J., Darmawan, I. N., Kang, S., Graham, L., & Dignath, C. (2024). The promotion of self-regulated learning in the classroom: A theoretical framework and an observation study. Metacognition Learning, 19, 381–419. [Google Scholar] [CrossRef]
- Wang, X., Chou, M., Lai, X., Tang, J., Chen, J., Kong, W. K., Chi, H. L., & Yam, M. C. (2024). Examining the effects of an immersive learning environment in tertiary AEC education: CAVE-VR system for students’ perception and technology acceptance. Journal of Civil Engineering Education, 150(2), 05023012. [Google Scholar] [CrossRef]
- Wijnia, L., Noordzij, G., Arends, L. R., Rikers, R. M. J. P., & Loyens, S. M. M. (2024). The effects of problem-based, project-based, and case-based learning on students’ motivation: A meta-analysis. Educational Psychology Review, 36, 29. [Google Scholar] [CrossRef]
- Yu, N., Shi, W., Dong, W., & Kang, R. (2025). The impact of virtual reality immersion on learning outcomes: A comparative study of declarative and procedural knowledge acquisition. Behavioral Sciences, 15(10), 1322. [Google Scholar] [CrossRef]
- Zhi, Y., & Wu, L. (2023). Extended reality in language learning: A cognitive affective model of immersive learning perspective. Frontiers in Psychology, 14, 1109025. [Google Scholar] [CrossRef]
- Zhuofan, H., Hidayat, R., & Ayub, A. F. M. (2024). The mediating effect of engagement in the relationship between self-efficacy and perceived learning in the online mathematics environment among Chinese students. Discover Sustainability, 5(1), 469. [Google Scholar] [CrossRef]