Modeling Learning Outcomes in Virtual Reality Through Cognitive Factors: A Case Study on Underwater Engineering
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The paper is interesting, but several points need revision:
Ethics / IRB
The study includes user testing, so ethical approval is required. Please provide the relevant ethics/IRB approval document as supplementary material.
Description of the VR software & demo
Add a subsection that introduces the VR software: state whether it was developed in Unreal Engine, Unity, or another platform, and describe its core functions and architecture. Include a software-function overview figure. Additionally, please submit a short demo video (≈30 seconds) as supplementary material to help readers better understand the system and tasks.
Sample size and statistical power
Sample size (n = 26) is small for a multi-indicator SEM. Although PLS-SEM is more robust with small samples, this raises concerns about estimate stability, overfitting, and generalizability. Please include an a priori or post-hoc power/sensitivity analysis and report stability checks (e.g., bootstrap details, leave-one-out or jackknife analyses).
Data availability
The current statement “No new data was created” is incorrect. Please provide the anonymized raw data (CSV) and the analysis scripts (or a link to a public repository such as GitHub or OSF) as supplementary material. If the data cannot be publicly shared, explain the reason and describe how qualified researchers may obtain access.
Language, formatting and minor fixes
Please proofread the manuscript for language and formatting issues (typos, inconsistent acronyms, figure/table captions). For example, correct “Disscusions” → “Discussions” and ensure all figures and tables include clear labels and units.
Author Response
Dear reviewer,
Thank you for taking the time to evaluate the manuscript; your feedback is much appreciated. All of your observations have been addressed in the new version, which we believe is now substantially improved.
- Comment 1: The study includes user testing, so ethical approval is required. Please provide the relevant ethics/IRB approval document as supplementary material.
- Response: The relevant documents have been provided as supplementary materials to our revised submission.
- Comment 2.1: Add a subsection that introduces the VR software: state whether it was developed in Unreal Engine, Unity, or another platform, and describe its core functions and architecture. Include a software-function overview figure.
- Response: To address this point, we have created a new sub-section named "3.7. Technical setup and architecture" in which we introduce details related to the software application, its core functions and a simplified high-level solution architecture. More specifically, you can find this information expanded between L424 - L469.
- Comment 2.2: Additionally, please submit a short demo video (≈30 seconds) as supplementary material to help readers better understand the system and tasks.
- Response: We appreciate the suggestion and agree that a video makes the resubmission more complete. Accordingly, we have uploaded to the supplementary material section a 90-second video that showcases the functions and purpose of our VR software application.
- Comment 3: Sample size (n = 26) is small for a multi-indicator SEM. Although PLS-SEM is more robust with small samples, this raises concerns about estimate stability, overfitting, and generalizability. Please include an a priori or post-hoc power/sensitivity analysis and report stability checks (e.g., bootstrap details, leave-one-out or jackknife analyses).
- Response:
- Given the small sample size (n = 26), we performed a post-hoc statistical power analysis for the main structural paths using the observed effect sizes (f² = 0.329 for Learning Styles → VR; f² = 0.977 for VR → Performance) (Table 4).
- Calculations in G*Power indicated that, for α = 0.05 and power = 0.80, the minimum required sample sizes would be 24 and 10, respectively, confirming adequacy for detecting the observed effects. A sensitivity analysis showed that, with n = 26, the model could detect effect sizes as small as f² = 0.21 at 80% power for a single predictor.
- To assess the robustness of parameter estimates, we complemented the 5,000-sample bootstrapping procedure with a jackknife resampling procedure, systematically omitting one case at a time.
- Path coefficients and their significance remained stable across all jackknife subsamples, with variations in β values < ±0.04 and no change in significance levels. This suggests the results are not unduly influenced by any single participant, supporting the stability of the findings.
- These points are now reflected in an enhanced version of Section 3.6. Statistical methods. The section now includes both a better overview of the statistical indicators used (L385 - L409) and the analyses described above (L411 - L423).
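For readers who wish to reproduce such checks, the post-hoc power computation and the leave-one-out jackknife described above can be sketched as follows. This is a minimal illustration only: it uses the noncentrality convention λ = f²·N employed by G*Power's linear multiple regression model and a synthetic dataset, not the study data or the actual PLS-SEM paths.

```python
import numpy as np
from scipy.stats import f as f_dist, ncf

def regression_power(f2, n, n_pred=1, alpha=0.05):
    """Post-hoc power for testing n_pred predictors in a linear model,
    using the noncentral-F approach (lambda = f^2 * N, as in G*Power)."""
    df1, df2 = n_pred, n - n_pred - 1
    crit = f_dist.ppf(1 - alpha, df1, df2)   # critical F under H0
    lam = f2 * n                             # noncentrality parameter
    return 1 - ncf.cdf(crit, df1, df2, lam)

def min_n(f2, target=0.80, n_pred=1, alpha=0.05):
    """Smallest sample size reaching the target power for effect size f^2."""
    n = n_pred + 2
    while regression_power(f2, n, n_pred, alpha) < target:
        n += 1
    return n

def jackknife_slopes(x, y):
    """Jackknife stability check: refit a simple path (slope) leaving one
    case out at a time, then inspect the spread of the coefficients."""
    n = len(x)
    idx = np.arange(n)
    return np.array([
        np.polyfit(x[idx != i], y[idx != i], 1)[0] for i in range(n)
    ])
```

For example, `min_n(0.977)` yields a minimum sample size in the vicinity of the G*Power figure of 10 reported above, and the spread of `jackknife_slopes` across leave-one-out subsamples is the quantity bounded by ±0.04 in our stability check.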
- Comment 4: The current statement “No new data was created” is incorrect. Please provide the anonymized raw data (CSV) and the analysis scripts (or a link to a public repository such as GitHub or OSF) as supplementary material. If the data cannot be publicly shared, explain the reason and describe how qualified researchers may obtain access.
- Response: The anonymized data (in .csv format) has also been added as supplementary material in the most recent submission.
- Comment 5: Please proofread the manuscript for language and formatting issues (typos, inconsistent acronyms, figure/table captions). For example, correct “Disscusions” → “Discussions” and ensure all figures and tables include clear labels and units.
- Response: The new manuscript version has been carefully proofread, and all of these minor issues have been resolved.
Thank you for your support and guidance!
Best Regards,
Authors
Reviewer 2 Report
Comments and Suggestions for Authors
The article entitled “MODELLING THE RELATIONSHIP BETWEEN LEARNING STYLES AND PERFORMANCE IN VIRTUAL REALITY: A CASE STUDY ON UNDERWATER ENGINEERING” for the “Electronics” journal is a very promising article, suitable for publication. The article deals with a cutting-edge topic and is of great interest to the journal's readers. One of the key points of the article is that it refers to a real educational field and not to a simple laboratory study, pedagogical proposal, or teaching scenario. Their research proposal is to investigate how immersion and flow, in relation to learning styles, influence learning outcomes within the Submarine Simulator, a VR-based educational tool for underwater engineering. The authors, as a research team, enhanced VR-based instructional design for the field of underwater engineering. Their study proposes aggregating existing and validated models, such as Kolb’s framework, to develop new models tailored specifically for VR learning environments. Their research aims to highlight the interplay of these variables in a learning process focused on acquiring knowledge in the STEM field, specifically hydrodynamics, through designing and operating a simulated submarine model in VR. The authors have a solid pedagogical foundation (Kolb's experiential learning theory), a specific structural model (STEM), cutting-edge pedagogical content (virtual reality), and, finally, a complete, reliable pedagogical environment (26 students from MINES Paris - PSL).
Although the manuscript is of great interest and highly relevant to the journal’s scientific fields and stands as an excellent example of educators engaging in research and actively shaping the emerging educational landscapes, several sections require substantial revision to improve clarity, coherence, and overall quality. Specific elements of the formal structure and presentation need careful attention. We have identified key areas where targeted improvements could significantly enhance both the academic value and overall impact of the publication.
- Usually, Electronics (ISSN 2079-9292) as a journal formats the titles of works in title case, like “Modelling the Relationship Between Learning Styles and Performance in Virtual Reality: A Case Study on Underwater Engineering”. Unless there is a specific reason, the authors should follow the journal's approach.
- All abbreviations should be removed from the abstract. As an introductory piece, a summary introduces the reader to the text, and special care should be taken to arouse their interest. The abbreviations belong in the main text.
- Although at the end of the introduction, the authors had formulated the main questions of their research, after clarifying whether it is research or a study, the text is poor. The scientific questions need to be reframed from the interdisciplinary context of the research and documented in greater detail. This is a matter for the introduction; the development concerns another part of the argument. Here we need more details.
- Immediately afterwards, they should clearly, rigorously, and analytically state what the parts of the article are and how they relate to the specific scientific questions.
- It is extremely important for the texture and importance of the article that the authors familiarize the readers with the interdisciplinary approach of the research. From where to where do you delimit the scope of their research? What are the structural scientific areas in which they work, research, and ultimately treat? Who does the article concern, and what closed scientific audience? It would be good if these were submitted by the authors before anyone proceeds to read the article. In “Theoretical Background”, before subchapters, or in a new special subchapter, add the interdisciplinary approach of your research and explain exactly which scientific fields it is based on.
- Please avoid excessive paragraphing. Limiting each page or section to approximately three well-structured paragraphs is generally sufficient for clarity and effective presentation.
- Some references are together, for example, [1-4], line 31, [7-8], line 41, [9 – 17], line 77, [9, 10, 18], line 81, [8, 19, 21] in line 92, etc.! What is the point of having the two or three references together? We do not use two references simultaneously without clearly explaining what we are referring to. References in an article act as tools for documenting evidence. Do not allow confusion and allow the reader to interpret it “freely”. Rewrite and explain!!! Why do the authors cite every reference anywhere in the article? What is the meaning of each reference? Explain why you use every other work in your documentation. Correct similar behavior everywhere in the paper! Explain why you cite any reference!
- If the paper needs a “state of the art”, organize it; otherwise, there is no point in citing references that tire the reader and hinder the flow of reading, without it being clear what exactly is going on with the references and what purpose they serve.
- “Section 2. Learning styles in VR”: a) Avoid submission in titles and labels, b) When you have included an abbreviation at the beginning of the article in the main text, there is no need to mention it again later on. Just create a table with abbreviations at the end of the article, and you're good to go. c) Please organize a flow chart or a concept map for the learning style material and avoid excessive paragraphing. The second paragraph contains only one sentence, and the third contains two. This is not acceptable in a scientific article!
- To my knowledge, and my pedagogical taste, “Kolb’s experiential learning theory” was not developed comprehensively, it was not linked to cutting-edge technologies, and, most importantly, it was not consolidated based on the article's subject matter. In my opinion, the entire background and the connections and implications of the theoretical framework should be presented, rather than a juxtaposition of principles. Please document the connection with Kolb’s experiential learning theory fully.
- “Section 3. VR as a Learning Environment: Flow and Personalization”: Beyond the excessive paragraphing and the extremely limited scope, it is unclear why this chapter is separate from the previous one, given that it concerns a pedagogical theme.
- Research methodology, lines 48 and 4.1. Research aim: Please write a short introduction before the subchapter.
- It must be acknowledged by the authors that the sample of 26 students does not constitute robust material for any approach beyond qualitative sampling and model organization. The number of 26 students does not support any statistical generalization, although it is an excellent case model and, as such, should be evaluated and recommended for future research.
- We believe that references to Kolb's experiential learning theory in Section 4.4 repeat Section 2. Either remove the previous section or remove the repetition from here.
- “…assigning scores that capture objective performance in each phase”: explain what the phases are! Please draw a flow chart!
- “Figure 1”. a) Please correct the question: “How does the VR application map with Kolb’s learning cycle?”, and b) what is the meaning of presenting it here? It is too late. Is this a “Research protocol” or a “Learning style”? Why has all this become too complicated? Is there any reason for edge technologies to become too mystical?
- Table 1. Research protocol based on Kolb's (!!!) learning cycle: very important, but it must be earlier.
- Is “Methodological correlation” (line 342) something different from “Research protocol” or “Learning style”? In what?
- Explain how the content of the photographs was provided. Which tool was used, and describe exactly what is depicted in it?
- Figures 2 and 3 have not been placed under the journal's guidelines, and more importantly, they have not been indexed. Explain what is depicted in them.
- How does the material in Table 2. Instructions given to all participants and the activities shown in it relate to the learning objectives? What exactly happens? How do students learn? What do students learn? Explain in detail.
- Figure 9. Research design for VR experiential learning is an important piece of evidence for the organization of the course. Please analyze and explain your pedagogical proposal in more detail. The presentation flow does not help in the overall representation and explanation of the course structure.
- Covariance Analysis of Latent Constructs: I am very sorry, but I cannot follow the statistical analysis. Let us assume that we accept the validity of such a small sample. Please explain the organization and methodology of your statistical analysis. The way the figures are presented is not communicative. The whole section looks like a student exercise. As Section 5 progresses, we move further away from the scientific basis of the article, at least as stated at the beginning.
- How was “Figure 10. Path Analysis on the relation between Learning styles and Learning outcomes” produced? How were the percentages calculated, and what design tool was used to produce the diagram? What is the significance of the figures? What did we measure here? Using what metric basis? How was the calibration performed?
- “Hassan et al., 2020; Rutrecht et al., 2021”: line 508. Why did you change the reporting method? Is there a reason for this?
- What does Latent Variable Covariances mean to you? How did it come about? How did you measure it, and how did you calculate it? With which tool?
- What is the significance of Table 4? Quality indicators? What do they show, and how were the numbers derived as benchmarks? How was the quality measurement formulated? Was any weighting applied? When?
- Table 5. Construct Reliability and Validity: Please give a full explanation.
- Average Variance Extracted: Please give a full explanation.
- …“adequate discriminant validity. [39]”: What is the meaning of the dot before the reference? What is adequate discriminant validity?
- What is the Fornell-Larcker Criterion? What did it show, and how were the numbers derived as benchmarks? How was the measurement formulated? Was any weighting applied? When?
- What are path coefficient calculations?
- What are Collinearity Statistics for your research? How did you use it?
- …“immersive experience itself is the more potent predictor”, I think it is the first time that we discuss predictors. What is the meaning of a predictor?
- Where and how were “emotional satisfaction and sustained focus” measured?
- “It is possible that a well-designed VR environment acts as a cognitive 'levelling field,' providing multiple, simultaneous pathways for understanding (kinesthetic, visual, and analytical) that cater to a wide range of learners”: I am very sorry, but we cannot draw scientific conclusions with the possibilities (“It is possible that”)! With this manner, the conclusion is unfounded.
- “Our results indicate that VR can effectively support experiential learning, confirming the literature data [19, 20]”: your results are something different from the references that you mention. Please confirm!
- “The results from another study”…: which study and why is it mentioned in Discussions? But can the results of one study be used to argue the validity of another? Please explain.
- Where are the answers to the first research questions? (line 61)
Final remarks
In conclusion, we find that the article “MODELLING THE RELATIONSHIP BETWEEN LEARNING STYLES AND PERFORMANCE IN VIRTUAL REALITY: A CASE STUDY ON UNDERWATER ENGINEERING” presents an innovative and promising pedagogical hypothesis. However, substantial work is still required to ensure the text fully and convincingly substantiates this hypothesis. The subject matter is both timely and highly relevant, resonating with current discussions on the connection between education and virtual reality and teacher professional development. The authors show commendable engagement with real-life classroom contexts, which significantly strengthens the empirical dimension of the study. That said, the theoretical framework would benefit from greater elaboration, with clearer and more concrete reference points. At times, key terms, especially in statistical measurements, are introduced without sufficient definition, statistical explanation, or contextual grounding, which may hinder comprehension for readers even familiar with the topic. The logical flow of the arguments could also be improved, with smoother transitions between sections. The conclusions, in their current form, are rather brief and would be more convincing if explicitly and directly tied to the data presented. Minor linguistic and syntactic inconsistencies also impact readability and should be addressed. Two truly remarkable problems that struck us are the following:
1) While the authors talk about STEM methodology in the introduction to their text, there is no mention of it anywhere in the pedagogical presentation or educational basis. The authors need to think very carefully about what exactly they want to do with this specific term. If they want a STEM course, they need to organize it and prove that it is a course with STEM specifications. If it is not, they should remove it from their initial design.
2) The second serious mistake concerns the pedagogical framework they set themselves. While they talk constantly about Kolb's experiential learning theory, it ultimately appears that they do not utilize it in a way that would allow this research to be applied in practice. They repeat the same things at different points in the development without ultimately going into depth. Furthermore, the use of statistical tables and diagrams could be made more effective through clearer explanatory "bridges" that link them directly to the main findings. Also, it is not understandable what the meanings of Appendix 1 and Appendix 2 are.
Overall, the manuscript holds significant potential. With careful revision and strengthening of its theoretical, structural, and pedagogical components, it could become a valuable contribution to academic discourse. We continue to believe it is a strong article with important insights into the field of education.
Comments for author File: Comments.pdf
Author Response
Dear reviewer,
Thank you for taking the time to evaluate the manuscript; your feedback is much appreciated. All of your observations have been addressed in the new version, which we believe is now substantially improved.
- Comment 1: Usually, Electronics (ISSN 2079-9292) as a journal formats the titles of works in title case, like “Modelling the Relationship Between Learning Styles and Performance in Virtual Reality: A Case Study on Underwater Engineering”. Unless there is a specific reason, the authors should follow the journal's approach.
- Response: We appreciate the close attention to detail. The title has been slightly rephrased and formatted correctly in the new manuscript version.
- Comment 2: All abbreviations should be removed from the abstract. As an introductory piece, a summary introduces the reader to the text, and special care should be taken to arouse their interest. The abbreviations belong in the main text.
- Response: All abbreviations have been removed from the abstract (L10 - L25) and moved to the Introduction section and/or subsequent chapters. The name of the university must remain as written.
- Comment 3: Although at the end of the introduction, the authors had formulated the main questions of their research, after clarifying whether it is research or a study, the text is poor. The scientific questions need to be reframed from the interdisciplinary context of the research and documented in greater detail. This is a matter for the introduction; the development concerns another part of the argument. Here we need more details.
- Response: The research questions have been slightly reframed (L105 - L111) to better capture the interdisciplinary approach used throughout our research paper.
- Comment 4: Immediately afterwards, they should clearly, rigorously, and analytically state what the parts of the article are and how they relate to the specific scientific questions.
- Response: The bridge towards the next subsection has been defined more clearly (L112 - L119), offering the reader a good overview of the subsequent sections.
- Comment 5a: It is extremely important for the texture and importance of the article that the authors familiarize the readers with the interdisciplinary approach of the research. From where to where do you delimit the scope of their research?
- Response:
- Adopting an interdisciplinary viewpoint, this research investigates the interconnections among learning styles, flow state, and educational achievements in teaching core hydrodynamics principles using VR.
- This study focuses exclusively on fundamental hydrodynamics concepts, Kolb's learning style inventory, established flow metrics, and assessments of post-training achievements. The use of VR applications in other engineering fields and with non-academic populations was out of scope.
- These points are important and have been included in the new version of the manuscript between L85 and L90.
- Comment 5b: What are the structural scientific areas in which they work, research, and ultimately treat?
- Response:
- The current research operates at the intersection of several structural scientific areas, integrating both fundamental and applied domains.
- Within engineering sciences, it draws primarily on hydrodynamics to define the technical content and simulation parameters.
- The computer science dimension encompasses virtual reality development, simulation technologies, and Human–Computer Interaction (HCI), enabling the creation of immersive and pedagogically relevant environments.
- From the perspective of educational sciences, the study applies principles of educational technology and learning sciences, with a particular emphasis on cognitive load, learning styles, and educational psychology in STEM contexts.
- The measurement and evaluation sciences aspect employs psychometric techniques to assess flow states, along with applied statistical methods to analyze learning outcomes.
- Finally, in the interdisciplinary realm, principles from cognitive engineering and human factors inform the optimization of the user experience, ensuring that the VR application promotes both technical proficiency and cognitive effectiveness.
- Comment 5c: Who does the article concern, and what closed scientific audience?
- Response:
- This article is relevant to researchers, educators, and practitioners involved in the integration of immersive technologies into STEM higher education, particularly those seeking to enhance learning efficiency and engagement through simulation-based training.
- It also targets curriculum designers and policymakers interested in integrating VR into technical training. The specialized scientific audience includes experts in naval engineering pedagogy, hydrodynamics, educational technology research, human-computer interaction, and engineering.
- These mentions have also been included in the new version of the manuscript, as we believe they hold value for all readers (L136 - L142).
- Comment 5d: It would be good if these were submitted by the authors before anyone proceeds to read the article. In “Theoretical Background”, before subchapters, or in a new special subchapter, add the interdisciplinary approach of your research and explain exactly which scientific fields it is based on.
- Response: We agree that this would be valuable for the readers, thank you. As a result, the interdisciplinary aspects have been addressed in a newly added subchapter named "1.1. Interdisciplinary approach in Virtual Reality applications".
- Comment 6: Please avoid excessive paragraphing. Limiting each page or section to approximately three well-structured paragraphs is generally sufficient for clarity and effective presentation.
- Response: This suggestion has been integrated into the text by limiting the theoretical approach to Kolb's Experiential Learning Theory and Learning Styles. A section highlighting this has been created in the new version of the manuscript (Section 2. Kolb’s experiential learning theory in designing Virtual Reality environments for engineering education).
- Comment 7: Some references are together, for example, [1-4], line 31, [7-8], line 41, [9 – 17], line 77, [9, 10, 18], line 81, [8, 19, 21] in line 92, etc.! What is the point of having the two or three references together? We do not use two references simultaneously without clearly explaining what we are referring to. References in an article act as tools for documenting evidence. Do not allow confusion and allow the reader to interpret it “freely”. Rewrite and explain!!! Why do the authors cite every reference anywhere in the article? What is the meaning of each reference? Explain why you use every other work in your documentation. Correct similar behavior everywhere in the paper! Explain why you cite any reference!
- Response: All references have been corrected throughout the article, and each reference is now approached more analytically in the new manuscript version. In a few isolated cases, two or more references still appear together; in these cases, they strengthen the expressed idea by underlining that the cited authors converge on the same conclusions in their research.
- Comment 8: If the paper needs a “state of the art”, organize it; otherwise, there is no point in citing references that tire the reader and hinder the flow of reading, without it being clear what exactly is going on with the references and what purpose they serve.
- Response:
- We appreciate your kind suggestion. As a result of your comment, Section 2 has been added to the manuscript as a chapter that encapsulates all relevant information in a concise and clear manner.
- The section now focuses on Kolb’s Experiential Learning Theory, emphasizing the relationship between experiential learning principles and the design of VR applications.
- This relationship is illustrated in both Figure 2 and Table 1 (pages 5 & 6), which consolidate the theoretical approach presented in Section 2, providing a solid foundation for the rest of the paper. Table 1 has been enhanced by clearly delineating the tasks and learning objectives for each part of the learning cycle.
- Comment 9a: “Section 2. Learning styles in VR”: a) Avoid submission in titles and labels,
- Response: The issue has been fixed throughout the research paper.
- Comment 9b: b) When you have included an abbreviation at the beginning of the article in the main text, there is no need to mention it again later on. Just create a table with abbreviations at the end of the article, and you're good to go.
- Response: This aspect has been corrected in the new manuscript. Abbreviations are introduced once and then used throughout the paper.
- Comment 9c: c) Please organize a flow chart or a concept map for the learning style material and avoid excessive paragraphing. The second paragraph contains only one sentence, and the third contains two. This is not acceptable in a scientific article!
- Response: A comprehensive flow chart has been added to Section 2 of our newly revised manuscript. It describes the relationship between the activities performed by students in our VR software application and Kolb's learning cycle (Figure 2).
- Comment 10: To my knowledge, and my pedagogical taste, “Kolb’s experiential learning theory” was not developed comprehensively, it was not linked to cutting-edge technologies, and, most importantly, it was not consolidated based on the article's subject matter. In my opinion, the entire background and the connections and implications of the theoretical framework should be presented, rather than a juxtaposition of principles. Please document the connection with Kolb’s experiential learning theory fully.
- Response: In response to this comment, we have introduced and structured a new chapter (Section 2), which focuses specifically on Kolb's experiential learning theory. Appendix 1 and Appendix 2 further support this section by offering additional insights into the theoretical aspects, as well as the statistical distribution of results for the Kolb Experiential Learning Profile (KELP) applied to our students. The definition of KELP itself is outlined in Section 3.4. Methods.
- Comment 11: “Section 3. VR as a Learning Environment: Flow and Personalization”: Beyond the excessive paragraphing and the extremely limited scope, it is unclear why this chapter is separate from the previous one, given that it concerns a pedagogical theme.
- Response: The section has been deleted completely. The topic is now addressed in a more concise manner in Section 1. Introduction.
- Comment 12: Research methodology, lines 48 and 4.1. Research aim: Please write a short introduction before the subchapter.
- Response: Two introductory paragraphs have been added before the newly numbered subchapter 3.1. Research aim; more specifically, between L205 and L212.
- Comment 13: It must be acknowledged by the authors that the sample of 26 students does not constitute robust material for any approach beyond qualitative sampling and model organization. The number of 26 students does not support any statistical generalization, although it is an excellent case model and, as such, should be evaluated and recommended for future research.
- Response: A clearer acknowledgement related to this has been added in Section 6.1. Limitations. More specifically, between L833 and L842.
- Comment 14: We believe that references to Kolb's experiential learning theory in Section 4.4 repeat Section 2. Either remove the previous section or remove the repetition from here.
- Response: Your observation is correct. We have removed all repetitive information related to Kolb's theory. Additionally, we have restructured everything better into just one section - Section 2. Kolb’s experiential learning theory in designing Virtual Reality environments for engineering education.
- Comment 15: “…assigning scores that capture objective performance in each phase”: explain what the phases are! Please draw a flow chart!
- Response:
- The methodological framework was designed as a three-phased experiment, in which the independent variable consisted of the original software and the students’ individual characteristics, while the dependent variables were represented by the outcomes obtained across the three phases, each phase corresponding to one of the software’s testing stages.
- A flow chart has been added in accordance with your suggestion, to support readers in better visualizing this methodological framework (Figure 3).
- Comment 16: “Figure 1”. a) Please correct the question: “How does the VR application map with Kolb’s learning cycle?”, and b) what is the meaning of presenting it here? It is too late. Is this a “Research protocol” or a “Learning style”? Why has all this become too complicated? Is there any reason for edge technologies to become too mystical?
- Response: Your observation is correct. Consequently, we have introduced a new section (Section 2) that describes the theoretical framework for learning styles and their connection with software in VR.
- Comment 17: Table 1. Research protocol based on Kolb's (!!!) learning cycle: very important, but it must be earlier.
- Response: This table has been moved much earlier in the paper, as this is indeed critical for the completeness of the newly created Section 2 named "Kolb’s experiential learning theory in designing Virtual Reality environments for engineering education."
- Comment 18: Is “Methodological correlation” (line 342) something different from “Research protocol” or “Learning style”? In what?
- Response: The information has been corrected: Table 1 is now better structured and is introduced in Section 2, much earlier than before.
- Comment 19: Explain how the content of the photographs was provided. Which tool was used, and describe exactly what is depicted in it?
- Response: All figures illustrating the application throughout this article were captured directly from the original Submarine Simulator VR software. As students progressed through the experimental phases, their points of view were continuously recorded, with relevant frames extracted from these recordings. Corresponding notes have been added, more specifically: L434 – L436 and L479 – L480.
- Comment 20: Figures 2 and 3 have not been placed under the journal's guidelines, and more importantly, they have not been indexed. Explain what is depicted in them.
- Response:
- Both figures have now been formatted better and the caption has been slightly clarified (named Figure 5 and Figure 6 in the new version).
- It is also worth mentioning that, in order for readers to become more familiar with the contents of the article, we have created a 90-second video presentation (please check the supplementary materials).
- Comment 21: How does the material in Table 2. Instructions given to all participants and the activities shown in them relate to the learning objectives. What exactly happens? How do students learn? What do students learn? Explain in detail.
- Response:
- The process is very interactive. The facilitator reads out loud each of the instructions in Table 2. The participant then follows the instruction. The participants may receive additional help from the facilitator until the instruction is completed correctly (assessed on a binary done/not done basis).
- Students learn how to use the application through guided experiential learning, systematically progressing through Kolb's cycle under the facilitator's direction, with an emphasis on hands-on practice and learning by doing.
- These mentions have also been added in the text (L497 - L503) to support the readers in better understanding the process.
- Comment 22: Figure 9. Research design for VR experiential learning is an important piece of evidence for the organization of the course. Please analyze and explain your pedagogical proposal in more detail. The presentation flow does not help in the overall representation and explanation of the course structure.
- Response:
- In the respective figure (now numbered 12), we aimed to present the research design, illustrating when and how the tests were administered. We believe it is relevant in the given context to support the reader in understanding the overall methodological approach behind the paper.
- The pedagogical proposal is now illustrated in Table 1, in which we introduce the learning objectives.
- Comment 23: Covariance Analysis of Latent Constructs: I am very sorry, but I can't follow the statistical analysis. Let's assume that we accept the validity of such a small sample. Please explain the organization and methodology of your statistical analysis. The way the figures are presented is not communicative. The hall section looks like a student exercise. As Section 5 progresses, we move further away from the scientific basis of the article, at least as stated at the beginning.
- Response:
- We appreciate your feedback and understand that some of the statistical terminology used in Section 4 may require further clarification, especially for readers less familiar with Partial Least Squares Structural Equation Modeling (PLS-SEM).
- As a result, we have enhanced the subsection named "3.6. Statistical methods" to include a simple and accessible step-by-step explanation of the statistical indicators that have been used, their terminology and optimal thresholds (L385 - L409).
- Lastly, it is worth mentioning that the employed Partial Least Squares Structural Equation Modeling (PLS-SEM), performed using the software SmartPLS 3.0 (Ringle et al., 2015), is an important factor in supporting the validity of our sample.
- PLS-SEM is a variance-based method suitable for smaller sample sizes and complex models with multiple indicators. It allows for the simultaneous estimation of measurement models (relationships between latent variables and their observed indicators) and structural models (relationships between latent variables).
- This point has been better expressed in the respective section named "3.6. Statistical methods" (L370 - L423)
- Comment 24: How was “Figure 10. Path Analysis on the relation between Learning styles and Learning outcomes” produced? How were the percentages calculated, and what design tool was used to produce the diagram? What is the significance of the figures? What did we measure here? Using what metric basis? How was the calibration performed?
- Response:
- This figure (Figure 13 in the new manuscript) contains the Path Analysis. This analysis presents the standardized path coefficients between latent variables (Learning Styles → VR Experience → Performance). These coefficients represent the strength and direction of the relationships, scaled between -1 and +1.
- Path coefficients were obtained using SmartPLS, through an iterative algorithm that maximizes explained variance (R²) in the dependent variables. The percentages shown in the diagram correspond to R² values, representing the proportion of variance in each dependent latent variable explained by its predictors (e.g. R² = 0.494 means 49.4% of Performance variance is explained). Figure 13 and Tables 3 to 9 were generated directly by SmartPLS, which also produces this path diagram based on the statistical output.
- The raw data collected from all our participants and used in SmartPLS for our analysis has been included in the supplementary material.
- To increase clarity, we have also redesigned the diagram (Figure 13).
- Comment 25: “Hassan et al., 2020; Rutrecht et al., 2021”: line 508. Why did you change the reporting method? Is there a reason for this?
- Response: This was indeed a mistake; thank you for your attention to detail. We have corrected the citation system in this paragraph. The authors are now mentioned in the references.
- Comment 26: What does Latent Variable Covariances mean to you? How did it come about? How did you measure it, and how did you calculate it? With which tool?
- Response:
- The latent variables, in the context of this study, represent unobserved constructs derived from empirical data collected via validated questionnaires (as detailed in Section 3.4. Methods).
- Covariance, in this analytical framework, quantifies the extent to which two latent variables exhibit joint variability, indicating the strength and direction of their linear association.
- Positive covariance suggests that the variables tend to increase or decrease in tandem, while negative covariance implies an inverse relationship, thereby providing insights into underlying interdependencies among constructs.
- The tool used is SmartPLS, a specialized software package for partial least squares structural equation modeling (PLS-SEM). It derives covariance estimates from the fitted model by computing the covariances among the composite scores of the latent variables, facilitating robust evaluation of relationships in non-normal data distributions and complex models.
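- To make this concrete, the sketch below shows how a covariance between two latent-variable composite scores is computed. The scores and variable names here are hypothetical illustrations, not the study's actual data; SmartPLS performs the equivalent computation on its weighted composite scores.

```python
import numpy as np

# Hypothetical composite (latent-variable) scores for five participants;
# in SmartPLS these would be weighted combinations of questionnaire items.
immersion = np.array([3.2, 4.1, 2.8, 4.5, 3.9])
performance = np.array([55.0, 70.0, 50.0, 80.0, 68.0])

# Sample covariance: a positive value means the two constructs
# tend to increase or decrease in tandem.
cov = np.cov(immersion, performance, ddof=1)[0, 1]
```

- A positive `cov` here would indicate that participants with higher immersion scores also tend to score higher on performance, which is the kind of joint variability the latent variable covariance table reports.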
- Comment 27: What is the significance of Table 4? Quality indicators? What do they show, and how were the numbers derived as benchmarks? How was the quality measurement formulated? Was any weighting applied? When?
- Response:
- Table 4 presents key quality indicators for assessing the structural model's predictive relevance and the impact of predictors within our Partial Least Squares Structural Equation Modeling (PLS-SEM) framework. These indicators are essential for evaluating the model's explanatory power and the practical significance of relationships, ensuring robustness beyond mere statistical significance. We elaborate on the main indicators below:
- R² (Coefficient of Determination): This measures the proportion of variance in a dependent latent variable that is explained by its predictor(s). For instance, an R² value of 0.50 indicates that 50% of the variability in the outcome is accounted for by the model's inputs. In PLS-SEM, R² values are interpreted contextually: thresholds of 0.75, 0.50, and 0.25 are often considered substantial, moderate, and weak, respectively, though these are guidelines rather than strict cutoffs.
- f² (Cohen’s Effect Size): This quantifies the relative contribution of a specific predictor to the R² of the dependent variable, calculated as f² = (R²_included - R²_excluded) / (1 - R²_included), where "included" and "excluded" refer to models with and without the predictor. It highlights the strength of individual effects, with benchmark values of 0.02, 0.15, and 0.35 indicating small, medium, and large effects, respectively.
- The benchmarks for both R² and f² are derived from established methodological literature, primarily Cohen (1988) for effect size conventions and Hair et al. (2022) for PLS-SEM-specific applications. These sources provide empirically grounded thresholds based on extensive reviews of social and behavioral science studies, promoting consistency in model evaluation across research.
- No subjective weighting was applied to the indicators at any stage. All data was treated equally in the reflective measurement models and path estimations to maintain objectivity and avoid bias. This equal treatment aligns with default PLS-SEM practices.
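- The f² formula above reduces to a few lines of code. The R² values in this sketch are hypothetical and only illustrate the arithmetic, not the paper's reported results.

```python
def effect_size_f2(r2_included: float, r2_excluded: float) -> float:
    """Cohen's f-squared: (R2_included - R2_excluded) / (1 - R2_included)."""
    return (r2_included - r2_excluded) / (1 - r2_included)

# Hypothetical scenario: dropping a predictor lowers R-squared
# from 0.494 to 0.40.
f2 = effect_size_f2(0.494, 0.40)
# Benchmarks (Cohen, 1988): 0.02 small, 0.15 medium, 0.35 large.
```

- With these illustrative inputs the predictor would have a medium effect, since the resulting f² falls between the 0.15 and 0.35 benchmarks.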
- Comment 28: Table 5. Construct Reliability and Validity: Please give a full explanation.
- Response:
- The table named "Construct Reliability and Validity" presents essential psychometric metrics for evaluating the measurement model's constructs, including Cronbach's Alpha, rho_A, Composite Reliability, and Average Variance Extracted (AVE). We give a full explanation for each below.
- Cronbach's Alpha and rho_A (Dijkstra-Henseler's rho) measure the scales' internal consistency, with values exceeding 0.70 indicating good reliability and suggesting that items within each construct are cohesively measuring the same latent variable.
- Composite Reliability offers a complementary assessment, often preferred in partial least squares structural equation modeling (PLS-SEM) for its robustness against item weighting differences, where scores above 0.70 affirm dependable construct measurement.
- Finally, AVE quantifies convergent validity by showing the proportion of variance captured by the construct relative to measurement error, with thresholds above 0.50 confirming that the indicators adequately converge.
- Collectively, these metrics support the robustness and psychometric adequacy of the measurement model.
- Comment 29: Average Variance Extracted: Please give a full explanation.
- Response:
- We will expand a bit on our previous answer.
- Average Variance Extracted (AVE) is an important tool in statistical analysis, especially when checking how well survey questions or items measure an underlying idea or concept.
- AVE is calculated by taking the average of the squared values from how strongly each item connects to the concept (known as factor loadings).
- In simple terms, AVE shows how much of the variation in the answers comes from the actual concept being studied, rather than from random errors or unrelated influences.
- A good AVE score is above 0.50, meaning the concept explains at least half of the variation in its items, which confirms that the questions are on target and working well together. If the AVE is too low, it might mean some questions aren't fitting right or the concept is more complex than estimated, so researchers may need to tweak their setup and approach.
- Ultimately, AVE works alongside the previously-mentioned reliability checks to make sure the study's measurements are both trustworthy and truly capturing what they're meant to.
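- The AVE calculation described above is simply the mean of the squared loadings, as in this minimal sketch. The loadings are hypothetical values chosen for illustration.

```python
def average_variance_extracted(loadings):
    """AVE: the mean of the squared standardized factor loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical standardized loadings of three items on one construct.
ave = average_variance_extracted([0.82, 0.75, 0.79])
# An AVE above 0.50 indicates adequate convergent validity.
```

- In this illustration the construct explains roughly 62% of the variance in its items, comfortably above the 0.50 threshold.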
- Comment 30: …“adequate discriminant validity. [39]”: What is the meaning of the dot before the reference?
- Response: The issue has been corrected. The dot should have been used to close the sentence. As a result, it has been moved after the citation.
- Comment 31: What is adequate discriminant validity? What is the Fornell-Larcker Criterion? What did show, and how were the numbers derived as benchmarks? How was the measurement formulated? Was any weighting applied? When?
- Response:
- Adequate discriminant validity, in simple terms, means checking that each key idea or concept (called a construct) in our study is sufficiently different from the others. In other words, we ensure there is not so much overlap that the constructs become redundant or unclear.
- The Fornell-Larcker Criterion is a standard way to test this in statistical models like structural equation modeling (especially PLS-SEM).
- The criterion checks if the square root of the average variance extracted (AVE) for each concept is bigger than its strongest link (correlation) to any other concept, showing that the concept connects more strongly to its own survey questions or items than to those from other concepts.
- In our study, as shown in Table 6, the results passed the Fornell-Larcker Criterion, since the square root of AVE for every concept was higher than its correlations with the others, which confirms solid discriminant validity and shows that the constructs in our model are clearly distinct.
- Lastly, it is worth mentioning that in the context of the Fornell-Larcker criterion within partial least squares structural equation modeling (PLS-SEM), weighting is indeed applied as part of the overall model estimation process, where indicators are combined into constructs using a weighted linear combination.
- The Fornell-Larcker criterion itself doesn't involve additional weighting; it simply compares the already-existing values without further adjustments.
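- The criterion can be sketched as a direct comparison of each construct's square root of AVE against its strongest inter-construct correlation. The construct names, AVE values, and correlations below are hypothetical and do not reproduce the entries of Table 6.

```python
import math

def passes_fornell_larcker(ave_by_construct, correlations):
    """Each construct's sqrt(AVE) must exceed its strongest absolute
    correlation with any other construct."""
    for name, ave in ave_by_construct.items():
        sqrt_ave = math.sqrt(ave)
        max_corr = max(abs(r) for pair, r in correlations.items() if name in pair)
        if sqrt_ave <= max_corr:
            return False
    return True

# Hypothetical AVEs and inter-construct correlations.
aves = {"LearningStyle": 0.62, "VRExperience": 0.58, "Performance": 0.66}
corrs = {("LearningStyle", "VRExperience"): 0.41,
         ("LearningStyle", "Performance"): 0.35,
         ("VRExperience", "Performance"): 0.55}
ok = passes_fornell_larcker(aves, corrs)
```

- With these illustrative numbers every square root of AVE (about 0.76 to 0.81) exceeds the largest correlation (0.55), so the check passes, mirroring the outcome reported for Table 6.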
- Comment 32: What are path coefficient calculations?
- Response:
- To give some context, path coefficients are used in structural equation modeling (SEM), particularly PLS-SEM (explained in response to an earlier comment).
- In theory, path coefficients are standardized regression coefficients that quantify the direct effects between latent variables in the structural model, indicating both the magnitude (strength of association, ranging from -1 to +1) and direction (positive for proportional relationships, negative for inverse ones) of the hypothesized causal links.
- These coefficients are estimated via iterative algorithms on composite scores, with statistical significance assessed through bootstrapping to generate t-values, p-values, and confidence intervals, making them ideal for exploratory research with non-normal data or smaller samples, as mentioned before.
- In practical applications, path coefficients help with:
- Validating theories by revealing interdependencies (e.g. how immersion influences performance)
- Supporting refinement of theoretical models by identifying non-significant paths (non-correlated metrics)
- Providing actionable insights for optimizing the learning environment in future iterations through metrics like explained variance (R²). As mentioned before, R² is a statistical measure in regression analysis that shows the proportion of total variation in the outcome that can be explained by the predictor (input) variables in the model. The higher the explained variance, the stronger the fit.
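- For completeness, the explained-variance metric R² mentioned above can be computed from observed outcomes and model predictions. The scores and predictions below are hypothetical, purely to show the arithmetic.

```python
def r_squared(y_true, y_pred):
    """Proportion of outcome variance explained by the predictions:
    1 - (residual sum of squares / total sum of squares)."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical performance scores and model predictions.
r2 = r_squared([50, 60, 70, 80], [52, 58, 71, 79])
```

- An R² near 1 means the predictions track the observed outcomes closely; a reported value such as R² = 0.494 would correspond to 49.4% of the outcome variance being explained.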
- Comment 33: What are Collinearity Statistics for your research? How did you use it?
- Response:
- Collinearity statistics help check whether the predictor variables are too closely related. Strong collinearity can interfere with the results by inflating standard errors, distorting the path coefficients, and making significance tests less trustworthy.
- We mainly used the Variance Inflation Factor (VIF) as a key measure, computed by running a regression of each variable against the others in its group. VIF shows how much collinearity boosts the uncertainty in the estimates.
- Once this is computed, we look at cutoffs like VIF under 3.3 to determine if the results are valid (based on standard research guidelines).
- As shown in Table 7, all VIF values for the indicators in our research were below 3.3, meaning no major collinearity problems and confirming that the model's results are solid and reliable. This check was a key early step to avoid biases.
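- A compact way to compute VIF values, equivalent to the per-predictor regressions described above, uses the diagonal of the inverse predictor correlation matrix. The data below is simulated for illustration, not drawn from our participants.

```python
import numpy as np

def vif(X):
    """VIF for each column of X: the diagonal of the inverse correlation
    matrix, equal to 1 / (1 - R^2) from regressing each predictor on
    the remaining ones."""
    corr = np.corrcoef(X, rowvar=False)
    return np.diag(np.linalg.inv(corr))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))  # three nearly independent simulated predictors
vifs = vif(X)
# Values near 1 and below the 3.3 cutoff indicate no collinearity concern.
```

- For nearly independent predictors all VIFs stay close to 1; values above the 3.3 cutoff would flag indicators that share too much variance with the others.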
- Comment 34: …“immersive experience itself is the more potent predictor”, I think it is the first time that we discuss predictors. What is the meaning of a predictor?
- Response:
- In the context of educational research and the learning process, a predictor variable is essentially a measurable factor that can forecast or anticipate specific outcomes.
- For instance, in our study, predictors help identify elements that might influence engagement or performance before they fully manifest. This concept of immersivity acting as a predictor is detailed in Section 3.4. Methods, where we describe the Immersive Tendencies Questionnaire (ITQ) as a reliable indicator for predicting flow states and learning efficacy in VR-based training.
- By incorporating the ITQ, our methodology highlights how personal immersive tendencies can proactively signal potential academic success, allowing for tailored instructional designs that enhance student outcomes.
- Comment 35: Where and how were “emotional satisfaction and sustained focus” measured?
- Response:
- Emotional satisfaction represents a dimension of the Basic Needs in Games (BANG) scale. It is measured by one of the applied questionnaires, which captures students' emotional responses related to autonomy, competence, and relatedness in gamified environments (outlined in Section 3.4. Methods, L323 – L348).
- Sustained focus, aligned with the flow state construct involving concentrated attention and intrinsic motivation during immersive activities, has been assessed via the Flow Short Scale (FSS) (outlined in Section 3.4: Methods).
- As discussed before, both metrics were derived from validated instruments, with responses analyzed using Likert-scale scoring and structural equation modeling in SmartPLS to quantify their mediating roles in educational outcomes.
- Comment 36: “It is possible that a well-designed VR environment acts as a cognitive 'levelling field,' providing multiple, simultaneous pathways for understanding (kinesthetic, visual, and analytical) that cater to a wide range of learners”: I am very sorry, but we cannot draw scientific conclusions with the possibilities (“It is possible that”)! With this manner, the conclusion is unfounded.
- Response: The idea has been reformulated to avoid confusion. Our intention was to reiterate our previous findings and to emphasize the research results from the previous section, in alignment with Section 5. Discussions.
- Comment 37: Our results indicate that VR can effectively support experiential learning, confirming the literature data [19, 20]. Your results are something different from the references that you mention. Please confirm!
- Response:
- The references are now numbered as 22 and 23 due to the previously-mentioned restructuring initiatives.
- We maintain that our results are aligned with the conclusions of these two articles. Both provide strong evidence supporting VR's role in experiential learning, consistent with our findings; more precisely:
- The first article emphasizes how extended reality simulations, including VR, create immersive environments that facilitate knowledge transformation through sensory-rich experiences. It argues that VR enables learners to access otherwise inaccessible scenarios (like virtual field trips or medical simulations), leading to enhanced reflection, deeper retention, and skill development. For example, it proposes a structured learning design (introduction, XR immersion, and debriefing) that optimizes these outcomes, confirming VR's effectiveness in promoting active engagement and learning cycles.
- The second article focuses on virtual computer labs as experiential tools, showing that when structured around Kolb's cycle, they significantly improve student competency, quiz performance (up to 9.6% gains), and interest levels. It highlights benefits like reduced frustration through collaboration and hands-on practice, underscoring how virtual setups foster both technical proficiency and cross-curricular skills, much like broader VR applications.
- Comment 38: “The results from another study”…: which study and why is it mentioned in Discussions? But can the results of one study be used to argue the validity of another? Please explain.
- Response: This point has been clarified and rephrased in the revised manuscript (L771–779). Additionally, the cited study is referenced only after the idea is introduced, and we do not rely on its findings to validate our own results. Instead, our mention highlights the alignment between our outcomes and those reported in the respective study.
- Comment 39: Where are the answers to the first research questions? (line 61)
- Response: The first research question has been reformulated, as discussed above. Its answer has been described between L628 and L634.
We would like to thank you again for your support and guidance. Your in-depth feedback has been extremely valuable in enhancing the strength of our paper.
Best Regards,
Authors
Round 2
Reviewer 1 Report
Comments and Suggestions for AuthorsThe authors have provided a correct response to my questions. I recommend accepting the paper.
Reviewer 2 Report
Comments and Suggestions for AuthorsFirst of all, we would like to congratulate the authors for the genuine spirit of cooperation that they demonstrated through our prompts and through their corrections. Honestly, we are talking about almost a new article. I compare the new one with the previous one, and I am thrilled with the progress, diligence, and passion of the authors. They frankly admitted their mistakes, and as authentic educators and researchers, they immediately understood the depth and seriousness of our suggestions. Sincere congratulations! Do not hesitate, although you have done so to a large extent, to transfer the rich material of your explanations intact to the article. It is valuable documentation material, even for the simplest things that seem obvious to you; they may be crucial for some uninitiated people. Congratulations also for the (new? to us it seems new!) statistical presentation. It covered 100% of our suggestions. We do not need to say anything about what was accepted: this is our role as reviewers, and we are all called upon to respond to this with honesty, based on our scientific and pedagogical principles, and in a spirit of cooperation, always following the standards of research, publications, the editorial team, and the journal. Congratulations, and we wish you the best in the future based on what you have gained from this publication.
Postscript 1: Delete empty lines! (120, 384, 410, 440, 454, and everywhere, even between subsections.)
Postscript 2: "Figure 12. Research design for VR experiential learning": enlarge the letters and the image. It is important that the course calendar is visible.