Peer-Review Record

A Detailed Review of the Design and Evaluation of XR Applications in STEM Education and Training

Electronics 2025, 14(19), 3818; https://doi.org/10.3390/electronics14193818
by Magesh Chandramouli 1, Aleeha Zafar 1,* and Ashayla Williams 1,2
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 19 August 2025 / Revised: 13 September 2025 / Accepted: 23 September 2025 / Published: 26 September 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The manuscript is a review paper. It is interesting, useful, and substantively correct. The authors relied on about 50 publications listed in the references section. This is a well-structured and insightful paper. I recommend the submitted work for publication after a few minor corrections, following the comments given below.

1) I suggest that, in the introductory background section, the authors briefly define STEM education and expand the acronym.
2) In Table 1, it would be beneficial to indicate which studies identified key interactive features, along with their associated design benefits and challenges. Similarly, Table 2 should include references to works that discuss agency-enhancing strategies.
3) The authors state that 50 articles were selected; however, the reference list at the end contains fewer. Several sources are dated before 2010, and a few are from 2025, resulting in a total of approximately 43 items, despite the search being limited to studies published between 2010 and 2024.
4) The authors state that they used a platform such as ResearchGate. However, this cannot be regarded as a bibliographic database; rather, it is a service that can serve as a supporting tool in review papers. Specifically, ResearchGate is a social networking platform for scientists and researchers, not a database for systematic literature searches.
5) I suggest improving the structure of the diagram (Fig. 2) that illustrates the selection workflow of XR studies in STEM education.
6) Some of the images presented in the paper are not connected to the text of the article.
7) To improve clarity and conciseness, I suggest defining each abbreviation only at its first occurrence in the text.
8) I recommend verifying the correct source citation for Figure 9, as it appears that reference [55] might be the correct one rather than [54].
9) Finally, I suggest reconsidering the selection and ordering of keywords. For example, the important term “dual-phase learning model” is missing, and it would be preferable to use “extended reality” rather than the abbreviation XR.

Author Response

We sincerely thank the reviewer for the constructive feedback and careful evaluation of our manuscript. We have carefully revised the paper to address each of the comments. Below is our detailed, point-by-point response, with the reviewer’s comments reproduced in italics for clarity.

Comment 1: I suggest that, in the introductory background section, the authors briefly define STEM education and expand the acronym.

Response 1: We agree with this comment. In the Introduction (Section 1.1, p. 2, para. 1, lines 1–2), we revised the text to explicitly expand the acronym and define STEM as “Science, Technology, Engineering, and Mathematics”. This ensures clarity for readers unfamiliar with the abbreviation.

Comment 2: In Table 1, it would be beneficial to indicate which studies identified key interactive features, along with their associated design benefits and challenges. Similarly, Table 2 should include references to works that discuss agency-enhancing strategies.

Response 2: Thank you for this suggestion. We have updated both Table 1 (p. 12) and Table 2 (p. 14) to include in-text references for each affordance and agency feature. For example, in Table 1, “Gesture-Based Interaction” now reads “Increased realism, engagement, and fine motor mapping [7,19]”. In Table 2, “Adaptive Difficulty Levels” now cites [17,28] to indicate supporting studies. This improves traceability and grounding of each entry in the reviewed literature.

Comment 3: The authors state that 50 articles were selected; however, the reference list at the end contains fewer. Several sources are dated before 2010, and a few are from 2025, resulting in a total of approximately 43 items, despite the search being limited to studies published between 2010 and 2024.

Response 3: We have carefully re-audited the reference list. After removing duplicates and ensuring accuracy, the total number of included studies is now 50 peer-reviewed works published between 2010 and 2024. Pre-2010 theoretical works (e.g., Milgram & Kishino, 1994; Slater & Wilbur, 1997) were retained because they are seminal to XR and presence research, but these are clearly acknowledged as foundational rather than empirical studies. We also removed extraneous or duplicate entries to ensure consistency with the PRISMA workflow (Figure 2).

Comment 4: The authors state that they used a platform such as ResearchGate. However, this cannot be regarded as a bibliographic database…

Response 4: We acknowledge this important clarification. The Methods section (Section 3, p. 9, para. 2, lines 5–8) has been revised to specify that ResearchGate was only used as a supplementary tool for accessing full texts, not as a bibliographic database. The primary databases were IEEE Xplore, SpringerLink, PubMed, ScienceDirect, and Google Scholar.

Comment 5: I suggest improving the structure of the diagram (Fig. 2) that illustrates the selection workflow of XR studies in STEM education.

Response 5: We agree with this suggestion. Figure 2 (p. 10) has been completely redesigned with cleaner formatting, consistent fonts, and evenly spaced arrows to improve clarity. The revised diagram is easier to read and aligns with standard PRISMA formatting conventions.

Comment 6: Some of the images presented in the paper are not connected to the text of the article.

Response 6: We revised the manuscript to ensure that every figure is explicitly linked to the text. Linking sentences were added to Sections 4.2, 6.4, 6.5, and 7.3. These linking phrases clarify the purpose of each image.

Comment 7: To improve clarity and conciseness, I suggest defining each abbreviation only at its first occurrence in the text.

Response 7: We agree. The manuscript was carefully reviewed to ensure that abbreviations such as XR, AR, MR, VR, HCI and CAMIL are defined only at their first occurrence. Redundant definitions throughout the text were removed for conciseness.

Comment 8: I recommend verifying the correct source citation for Figure 9, as it appears that reference [55] might be the correct one rather than [54].

Response 8: Thank you for noticing this. We have corrected the source attribution for Figure 9 (p. 19): upon verification, the correct source is reference [57] (Chandramouli et al., 2020), and the citation has been updated from [54] to [57] accordingly.

Comment 9: Finally, I suggest reconsidering the selection and ordering of keywords. For example, the important term “dual-phase learning model” is missing, and it would be preferable to use “extended reality” rather than the abbreviation XR.

Response 9: We agree. The Keywords section (p. 2) has been revised to:
Extended Reality, Augmented Reality, Virtual Reality, Mixed Reality, Dual-Phase Learning Model, Presence, Affordances, Agency, STEM Education, Human–Computer Interaction, Cognitive Load, Constructivist Learning, Immersive Learning.

This improves keyword relevance and indexing accuracy.

Reviewer 2 Report

Comments and Suggestions for Authors

The paper presents a comprehensive review of the use of Extended Reality (XR), including Augmented Reality (AR), Mixed Reality (MR), and Virtual Reality (VR), in STEM education. Drawing on about 50 peer-reviewed studies published between 2010 and 2024, the authors synthesize key themes such as presence, affordances, and agency in XR learning environments. The review is grounded in constructivist learning theory, cognitive load theory, and the L1–L2 learning model, which together highlight the distinction between interface learning and domain-specific knowledge acquisition. Beyond identifying the strengths and limitations of current XR applications, the paper proposes a design framework built on five pillars: presence, affordances, agency, cognitive load reduction, and inclusivity. In doing so, it offers both theoretical insights and practical guidelines for educators, researchers, and developers seeking to optimize the design and implementation of XR technologies in education.

Although the paper is well-organized and contributes valuable insights, several areas could be improved to enhance its scholarly impact:

  • Strengthen quantitative synthesis: Future iterations could include effect size comparisons across modalities (AR vs. VR vs. MR).
  • Expand L1–L2 application: Provide empirical or case-based analysis of how this framework functions in real XR classroom settings.
  • Deepen inclusivity discussion: Include more concrete design guidelines for accessibility (screen readers, cognitive scaffolds, low-cost XR).
  • Address implementation challenges: More focus on institutional adoption barriers (cost, training, infrastructure) would improve real-world applicability.
  • Consider comparative studies: Future reviews could systematically contrast XR with traditional or hybrid teaching methods.

The recommendation for the authors is to address these questions within the present study where possible; where that is not feasible, please indicate how each of the points above can be resolved in future research.

Comments for author File: Comments.pdf

Author Response

We sincerely thank Reviewer 2 for their thoughtful and constructive feedback, which has helped us improve the clarity, scope, and scholarly depth of the manuscript. Below, we address each of the points raised, outlining the revisions we made and, where appropriate, explaining how future work may address the reviewer’s suggestions.

Comment 1: Strengthen quantitative synthesis.
Reviewer: Future iterations could include effect size comparisons across modalities (AR vs. VR vs. MR).
Response: We acknowledge this valuable suggestion. As our study was designed as a qualitative synthesis rather than a meta-analysis, effect size comparisons were outside the scope of this review. However, we have revised the “Future Research Directions” section (Section 5.4) to explicitly recommend quantitative meta-analyses across XR modalities as an important next step for advancing the field.

Comment 2: Expand L1–L2 application.
Reviewer: Provide empirical or case-based analysis of how this framework functions in real XR classroom settings.
Response: We appreciate this feedback. While our paper is primarily a conceptual review, we have expanded Section 5.1 to emphasize the practical value of the L1–L2 framework in XR classroom contexts. Additionally, in Section 5.4 we highlight the need for empirical validation of this framework through classroom-based case studies in future work.

Comment 3: Deepen inclusivity discussion.
Reviewer: Include more concrete design guidelines for accessibility (screen readers, cognitive scaffolds, low-cost XR).
Response: We agree and have strengthened Section 6.5 (Accessibility & Inclusivity) by adding concrete design guidelines (e.g., compatibility with screen readers, alternative input devices, simplified UIs for neurodiverse learners, and mobile-based XR for affordability). We have also added supporting references to ground these recommendations in existing literature.

Comment 4: Address implementation challenges.
Reviewer: More focus on institutional adoption barriers (cost, training, infrastructure) would improve real-world applicability.
Response: We have expanded Section 5.3 (Limitations in Current XR Research) to address institutional barriers such as high hardware costs, faculty training gaps, and infrastructure requirements. We also include literature-based recommendations for centralized XR labs, shared device programs, and professional development initiatives, supported by new references.

Comment 5: Consider comparative studies.
Reviewer: Future reviews could systematically contrast XR with traditional or hybrid teaching methods.
Response: We agree and have added discussion in Section 5.4 (Future Research Directions) highlighting the importance of systematic comparative studies between XR-enhanced, traditional, and hybrid learning methods. This addition clarifies how future research can assess XR’s unique value in STEM education.

Reviewer 3 Report

Comments and Suggestions for Authors

This paper is particularly valuable as a comprehensive review of XR applications in STEM education. It examines the use of XR technologies (AR, MR, VR, dVR) in STEM learning, organizing findings around design principles and cognitive load. By synthesizing 50 articles and proposing a framework that integrates Presence, Affordance, Agency, and the L1–L2 model, it provides useful insights. However, the paper seems to lack clear differentiation from existing reviews, as well as quantitative analysis and explicit theoretical contributions. The following comments are offered:

  1. Numerous reviews on the educational applications of XR already exist. The paper does not sufficiently emphasize how it differs from these prior works (e.g., through a clear integration of the L1–L2 model or a comprehensive HCI perspective).

  2. Although the L1–L2 model is referenced, its use seems limited to a restatement of existing theory. The paper’s unique theoretical contribution is unclear. It should be made explicit whether this is merely a “synthesis” or a true “extension.”

  3. The repeated claim that “a large L1 burden prevents progression to L2” is not supported by empirical data, and the discussion remains limited to an explanatory diagram. The argumentation becomes redundant and lacks theoretical advancement.

  4. The frameworks of presence, affordance, and agency have been discussed in VR/AR education for many years. The paper mainly reiterates these established perspectives. Greater emphasis should instead be placed on emerging directions, such as AI integration, neuroadaptive XR, or the standardization of evaluation metrics.

Author Response

Comment 1: The paper does not sufficiently emphasize how it differs from prior XR education reviews.
Response: We have revised Section 1.2 Gaps and Challenges to explicitly highlight the novelty of our work. Unlike earlier reviews that primarily catalog XR outcomes, our study synthesizes results through the combined lens of the dual-phase L1–L2 learning model and human–computer interaction (HCI) principles. This dual framework provides a unique way to understand how interface usability and cognitive demands shape progression from surface-level interaction (L1) to domain learning (L2).

Comment 2: The L1–L2 framework use seems limited to restating theory, with unclear contribution.
Response: We expanded Section 5.1 Theoretical Implications to clarify that our contribution is not merely a restatement of the L1–L2 framework, but its systematic integration across 50 peer-reviewed studies. This synthesis identifies consistent patterns where high L1 demands impede domain learning, thus extending the framework into a comprehensive diagnostic tool for XR education research.

Comment 3: This claim is repeated without strong evidence.
Response: We streamlined Section 2.4 L1–L2 Learning in XR Systems to reduce repetition and reinforced the argument in Section 5.1 with empirical examples. For instance, AR scaffolding studies showed reduced cognitive load when tasks were sequenced [37], and VR engineering simulations reported better transfer when interface complexity was minimized [20,27]. This grounds the L1–L2 discussion in specific evidence.

Comment 4: These concepts are not new, and their treatment is repetitive.
Response: We acknowledged their established status and reframed their role in Section 5.1 to emphasize their convergence with emerging directions. We added a bridging paragraph noting that presence, affordances, and agency remain central but are evolving in relevance when linked with AI-driven adaptive feedback, neuroadaptive XR, and standardized evaluation metrics [Liu et al., 2023; Kivikangas et al., 2021].

Comment 5: Greater emphasis is needed on future trends such as AI, neuroadaptive XR, and evaluation standardization.
Response: We expanded Section 5.4 Future Research Directions with a new paragraph focusing on these directions. We discuss the potential of AI-augmented XR environments, neuroadaptive interfaces informed by physiological sensing, and the need for standardized evaluation frameworks. These additions align the paper with cutting-edge research trajectories and highlight areas for future investigation.

Round 2

Reviewer 3 Report

Comments and Suggestions for Authors

Thank you for submitting the revised manuscript. I think the manuscript has been well revised.
