Systematic Review

Using Virtual Reality Simulators to Enhance Laparoscopic Cholecystectomy Skills Learning

1 Department of Health & Rehabilitation Sciences, University of Nebraska Medical Center, Omaha, NE 68198, USA
2 Center for Modeling, Simulation & Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
3 Department of Mechanical & Materials Engineering, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
4 Department of Surgery, Rutgers Robert Wood Johnson Medical School, Long Branch, NJ 07740, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(15), 8424; https://doi.org/10.3390/app15158424
Submission received: 17 June 2025 / Revised: 18 July 2025 / Accepted: 23 July 2025 / Published: 29 July 2025

Abstract

(1) Background: Medical training is changing, especially for surgeons. Virtual reality simulation is an effective way to train surgeons safely, and studies show that surgeons who train with simulation demonstrate improved technical skills in fundamental surgical procedures. The purpose of this study was to determine the overall impact of virtual reality training on laparoscopic cholecystectomy performance and to explore whether specific training protocols or the addition of feedback confer any advantages for future surgeons. (2) Methods: MEDLINE (PubMed), Embase (Ovid SP), Web of Science, Google Scholar, and Scopus were searched for literature related to virtual reality training, immersive simulation, laparoscopic surgical skills training, and medical education. Study quality was assessed using the Cochrane Risk of Bias Tool and the NIH Quality Assessment Tool. (3) Results: A total of 55 full-text articles were reviewed. Meta-analysis showed that virtual reality training is an effective method for learning laparoscopic cholecystectomy surgical skills. (4) Conclusions: Performance, measured by objective structured assessments and time to task completion, improved with virtual reality training compared with no additional training. Positive effects of simulation training were evident in global rating scores and operative time. Continuous feedback on movement parameters during laparoscopic cholecystectomy skills training impacts skills acquisition and long-term retention.

1. Introduction

Medical educational strategies have continuously evolved, especially in surgical education [1]. The advent of laparoscopic cholecystectomy and the subsequent surge in minimally invasive surgery (MIS) have challenged the adequacy of traditional surgical training and competency evaluation [2]. After 1989, as MIS gained popularity, it became apparent that surgeons, particularly those with limited experience, faced a significantly elevated risk of complications during laparoscopic procedures, especially laparoscopic cholecystectomy [3]. The shift towards more effective medical training has been driven by a combination of factors, including stricter duty-hour regulations, a focus on cost-effective care, and a heightened emphasis on patient safety. As a result, training has expanded beyond the traditional confines of the operating room and wards. The most effective training environment and learning strategies have been the subject of ongoing discussion [4,5]. This has motivated the exploration of alternative training methods, such as simulation tools, ranging from simple table-top models to complex virtual reality (VR) simulators and porcine cadavers [6,7,8,9]. Such alternative training tools can provide the repetitions necessary to gain proficiency in surgical techniques in a risk-free environment, without placing patient health in jeopardy [6,10,11,12].
Simulation training has proven to be an effective method for developing surgical skills [12,13,14,15,16], especially for laparoscopic cholecystectomy [11]. Simulation allows surgeons to practice techniques without risking patient safety by providing a safe and immersive learning environment. Multiple studies have shown that surgeons trained using simulation exhibit improved technical proficiency in the operating room [13,14,15,16]. VR simulation is thus an effective tool in surgical education for enhancing the performance of surgical trainees in laparoscopic surgery. When employing alternative training methods for teaching and learning, it is important to establish their benefits as well as their limitations [6].
Previous reviews showed the superiority of simulation over no intervention (609 studies) and non-simulation instruction (92 studies) [16,17,18]. Additionally, these reviews have compared different simulation interventions (289 studies) to identify evidence-based best practices [19]. While simulation offers a promising training modality for novice surgeons, simulators and laboratories alone are insufficient to establish a comprehensive training program. This review sought to explore the current trends in laparoscopic surgical skills training, focusing on the increasing adoption of VR systems as an educational tool.
This meta-analysis aimed to comprehensively evaluate the effectiveness of virtual reality training (VRT) for laparoscopic cholecystectomy skills acquisition. We sought to compare VRT against a range of control interventions, including no additional training, conventional training methods, and VRT enhanced with further feedback. By synthesizing the available evidence from relevant clinical trials, we aimed to determine the overall impact of VRT on laparoscopic surgical performance and to explore whether specific VRT protocols or the addition of feedback confer any advantages for future surgeons.

2. Materials and Methods

2.1. Design and Search Strategy

A systematic review and a meta-analysis were performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement, including the methods of publication search, eligibility, data collection, extraction, analysis, and reporting [20]. Following the PRISMA guidelines, MEDLINE (PubMed), Embase (Ovid SP), Web of Science, Google Scholar, and Scopus databases were used. Patients and participants were not involved in this study.
A literature search of the above databases was performed in December 2024 using the keywords “virtual reality” [All Fields], “simulation” [All Fields], “laparoscopic cholecystectomy” [All Fields], “medical education” [All Fields], “minimally invasive surgery” [All Fields], and “surgical training” [All Fields], restricted to English-language studies [All Fields] published in 2000–2024 [Duration]. We conducted the literature search filtering results for studies involving human subjects and training. From the 55 identified papers, two independent reviewers screened abstracts to determine eligibility. Studies were excluded if they incorporated augmented reality or lacked haptic devices in their simulators; this resulted in the exclusion of 10 papers. Studies were further excluded if no manual skills tests or other objective measurements were recorded, which led to the removal of another 13 papers. A PRISMA flow diagram (Figure 1) visually represents this process. The remaining papers were reviewed for relevancy, initially by title and abstract, and subsequently by full text with appropriate inclusion and exclusion criteria.

2.2. Quality Assessment

Two reviewers independently screened the titles and abstracts of all potentially eligible articles. The Cochrane Risk of Bias Tool was employed to objectively assess the methodological quality of the randomized controlled trials. Where the risk of bias was unclear, it was conservatively categorized as ‘high’ for reporting purposes. The NIH Quality Assessment Tool for Before–After Studies with No Control Group and the NIH Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies [21] were applied to objectively determine the quality of each eligible study. For articles that could not be excluded based on title/abstract, we obtained and reviewed the full text, again independently and in duplicate. We resolved all disagreements by consensus.

2.3. Definitions and Data Categorization

VRT involves computer-generated environments designed to practice surgical skills. While various VRT systems were used in the included studies, the majority focused on fundamental skills like bimanual coordination (e.g., peg transfer task) and the procedural steps of laparoscopic cholecystectomy. Given the similarity in the core principles of these systems, differences between models and versions were considered to have a minimal impact on the overall results.
The studies compared VRT with traditional non-VR training or no additional training (NAT). It is important to note that while the study participants were not assigned additional training, they likely continued their standard surgical training in the operating room concurrently. While the specific impact of this variability is unknown, it is assumed to contribute to the observed differences in training and trainee experience across different centers.
Operating performance measures were collected as the primary outcome of interest. Operating performance can be quantitatively measured using global rating scores (GRSs) and error scores [22]. Two widely used assessment tools are the Global Operative Assessment of Laparoscopic Skills (GOALS) and the Objective Structured Assessment of Technical Skill (OSATS) [23]. OSATS scores, including the modified OSATS score, evaluate a range of technical skills, such as tissue handling, efficiency, instrument dexterity, knowledge of instruments, procedural flow, preoperative planning, and specific procedure knowledge. GOALS, a validated tool specifically designed for laparoscopic surgery, assesses both direct and delayed observation of surgical performance. Direct observation includes the evaluation of autonomy, a critical aspect of surgical competence [24].
Additional quantifiable performance metrics, such as the time required to complete specific tasks, were included in the analysis. However, not all studies reported exact start and finish times; some provided the duration of specific intraoperative phases. For consistency, the term “time to completion” was used to refer to the total assessment time. To conduct the meta-analysis, we utilized total OSATS and GOALS scores, individual GOALS domains for delayed observation, and total task completion time. The results are presented in tables and forest plots in Section 3.3.

2.4. Data Extraction

Two independent reviewers conducted the literature search and screened studies for eligibility. The same two reviewers independently extracted data from the included studies, including first author, year of publication, country of study, study design, inclusion and exclusion criteria, OSATS, GRS, and GOALS used, and time to task completion. Additional details extracted included participant demographics, training history, VR system used, training tasks, performance assessment methods, and duration. Mean (SD) values for continuous post-intervention data were extracted, with figures used for data not directly provided. Any discrepancies in eligibility or data extraction were resolved through consensus with a third reviewer.

2.5. Statistical Analysis

For the meta-analysis, total OSATS, total GOALS, individual GOALS domains for observation, and time to task completion were used. These outcomes were treated as continuous data and post-processed before pooling. To remove the dependence of time-to-completion data on the complexity of the tasks used in the studies, time to completion was standardized by expressing intervention procedure time as a fraction of the control intervention time. Scores in each study were standardized to a 1–10 scale before comparison. For papers lacking a reported mean and SD, we assumed that the study data followed a normal distribution, with the mean equal to the median and SD = IQR/1.35. The data were tabulated in Microsoft Excel and processed with MetaXL. The effect size is the standardized mean difference, pooled under the inverse variance heterogeneity model [25]. Heterogeneity was assessed using Cochran’s Q and the I2 statistic. Pooled results are reported as weighted mean differences (WMDs) with 95% confidence intervals. Forest plots were generated, in which each point represents an individual study’s standardized mean difference (MD) with the standard error (SE) of the MD.
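As a rough illustration of the pooling described above, the following Python sketch computes an inverse-variance pooled mean difference with Cochran’s Q and I2, and the SD = IQR/1.35 approximation for studies reporting only medians. The per-study summaries are invented for demonstration and are not drawn from the included trials; note also that the full IVhet model of Doi et al. [25] further adjusts the pooled variance for heterogeneity, which is omitted in this simplified sketch.

```python
import math

# Illustrative per-study summaries (invented, NOT from the included trials):
# (treatment mean, SD, n, control mean, SD, n), e.g. minutes to task completion.
studies = [
    (6.1, 1.2, 20, 8.0, 1.5, 20),
    (5.4, 1.0, 15, 7.2, 1.8, 15),
    (6.8, 1.4, 25, 7.9, 1.6, 25),
]

def sd_from_iqr(iqr):
    """Approximate SD from the IQR under a normality assumption (SD = IQR / 1.35)."""
    return iqr / 1.35

def mean_diff(mt, sdt, nt, mc, sdc, nc):
    """Mean difference (treatment - control) and its standard error."""
    md = mt - mc
    se = math.sqrt(sdt ** 2 / nt + sdc ** 2 / nc)
    return md, se

def pool(studies):
    """Inverse-variance pooled mean difference, 95% CI, Cochran's Q, and I^2."""
    effects = [mean_diff(*s) for s in studies]
    weights = [1.0 / se ** 2 for _, se in effects]          # inverse-variance weights
    w_sum = sum(weights)
    pooled = sum(w * md for w, (md, _) in zip(weights, effects)) / w_sum
    q = sum(w * (md - pooled) ** 2 for w, (md, _) in zip(weights, effects))
    df = len(studies) - 1
    i2 = max(0.0, 100.0 * (q - df) / q) if q > 0 else 0.0   # % of variance from heterogeneity
    se_pooled = math.sqrt(1.0 / w_sum)
    ci95 = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci95, q, i2
```

With these invented inputs, `pool(studies)` yields a negative pooled mean difference (treatment faster than control), mirroring the direction of the time-to-completion result reported in Section 3.3.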

3. Results

3.1. Study Selection

The initial literature search produced 55 papers. After exclusion of duplicates, the two raters reviewed the abstracts of the 52 remaining papers to determine eligibility based on predefined criteria for relevancy. Studies were excluded if they did not use laparoscopic cholecystectomy-based VR training systems, did not involve human subjects, or did not study laparoscopic cholecystectomy-related tasks. This resulted in the exclusion of 3 review/technical papers, as well as 13 papers not directly related to the scope of this review. The two reviewers then further reviewed and categorized the 23 eligible papers based on the type of methodology and analysis, for subsequent bias analysis. The papers were analyzed for potential bias in study design using the appropriate tools (the Cochrane Collaboration’s tool (CCT), the NIH observational tool, and the NIH pre/post tool) [21,26].

3.2. Risk of Bias

The reviewers extracted the data from the eligible papers and summarized them in Table 1 and the Supplementary Materials. Extracted data included bibliographic information, VR modality, teaching methods, and study methods. For the 15 articles reporting randomized controlled trials (RCTs), the Cochrane risk-of-bias tool for randomized trials (RoB 2) was utilized for qualitative analysis. For non-randomized studies, the NIH Quality Assessment Tool for Before–After (Pre-Post) Studies with No Control Group and the NIH Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies were applied [21]. The results of the risk-of-bias analysis are presented in Figure 2.

3.3. Meta-Analysis

A total of 10 RCT experiments were included in the meta-analysis comparing participants’ performance in the VRT group with that in the control training group (no additional training). For experiments comparing benchmark VRT with other interventions, the benchmark VRT group was treated as the control group. Studies reporting only median values were excluded unless means and SDs could be reconstructed using the method described in Section 2.5.
The meta-analysis for time to completion favored VRT, as shown in Table 2. The weighted mean difference in time to completion for VRT compared with traditional training was −0.24, with a 95% confidence interval (CI) of −0.32 to −0.16. The results indicate that the intervention groups completed the tasks 24% faster than the control groups: participants with VRT outperformed participants with no additional training or traditional training. The results also showed that VRT with a deliberately designed curriculum can significantly improve skill level.
We also explored the difference in performance improvement between medical students and residents (Table 3). Surprisingly, the residents subgroup achieved a 25% time reduction, while the medical students subgroup achieved 20%. This indicates that commercially available VR simulators benefit both novices and experienced participants. The realism and complexity of VR training tasks allow for significant improvement in skill level, even for experienced participants.
The meta-analysis for the performance scores generally favored VRT. As shown in Table 4, the weighted mean difference in scores for VRT compared with traditional training was 2.92, with a 95% confidence interval (CI) of 0.96 to 4.88, on our standardized scale of 1–10. On average, the intervention groups scored 29.2% higher than the control groups. Similar to the time-to-completion results, we also observed that participants receiving VRT with a deliberately designed curriculum outperformed participants receiving basic VRT. The I2 value of 86% indicates substantial heterogeneity. Yiasemidou et al. [33] appeared to be an outlier: that study emphasized the better accessibility and availability of simple “take-home” box trainers over VR simulators and concluded that “take-home” box trainers are an effective alternative to VR simulators. However, its experimental design and conditions do not align with the other studies in the forest plot, so we explored the effect of excluding it. The WMD increased to 3.41, with a 95% confidence interval (CI) of 2.39 to 4.43, and the heterogeneity was significantly lower (I2 = 53%) (see Table 5).
We further explored the sensitivity of the evaluation methods, as well as the difference in score improvement between the students and residents subgroups (Table 6). All evaluation methods successfully demonstrated performance improvement after VRT. OSATS appeared to be more descriptive and consistent, with a high WMD and lower heterogeneity (I2 = 9%). Across experience levels, the students subgroup consistently achieved greater improvement in scores (4.19 vs. 3.16), with heterogeneity of 9% compared to 83% in the residents subgroup. This seems to indicate that the greater time-to-completion improvement in the residents subgroup was mostly due to better baseline proficiency, whereas the students subgroup benefited more from VRT in terms of skills (scores).
A comparison of outcomes of different VR simulators is presented in Table 6 and Table 7. The differences in the outcomes of VR simulators are reasonably small. LAP Mentor and LAP Sim are the most widely used simulation systems and seem to provide consistent and slightly better outcomes.

4. Discussion

We found a positive effect of simulation training on global rating scores, as well as on operative time, a finding aligned with the literature [27,30,31,32,35,36,37]. Sommer et al. [29] revealed that during structured VR simulator training, all trainees improved their surgical skills irrespective of previous visual–spatial ability (VSA), and that an increase in VSA resulted in improvements in surgical performance and training progress. The aim of this study was to determine the overall impact of VRT on laparoscopic surgical performance and to explore whether specific VRT protocols or the addition of feedback confer any advantages for future surgeons. Continuous feedback on movement parameters during laparoscopic cholecystectomy skills training impacts the acquisition and long-term retention of those skills by surgical novices [47].
Comparing the time to task completion and performance assessment between students and residents, we noticed that both types of trainees improved significantly after VRT. It is surprising to find comparable improvement between students and residents: given their extended learning experience in surgery, we anticipated that residents would outperform students using simulation training, yet the review found similar improvements in both groups. It is possible that learning laparoscopic cholecystectomy skills using simulation is novel to both types of trainees, so prior learning experience might not affect performance outcomes across studies. These results encourage the medical training community to use simulation for acquiring laparoscopic cholecystectomy skills regardless of prior learning experience.
When comparing the performance outcomes from different VRT systems in the meta-analysis, LAP Mentor, LAP Sim, and MIST-VR were the three most commonly used systems among the selected studies. All have positive training effects and help trainees achieve higher performance scores compared with controls, indicating comparable outcomes across different VRT systems [28,30,31,34,35]. Regardless of which training system a simulation center chooses, a similar positive training effect can likely be expected for trainees learning laparoscopic cholecystectomy skills.
VRT could offer different benefits for medical students and residents. VRT programs can be tailored to their respective learning stages and professional needs. For medical students, VR provides a safe and engaging space to acquire foundational knowledge and basic skills, offering exposure to diverse clinical scenarios, enhancing anatomical understanding, and reducing pre-clinical anxiety. This early exposure helps develop initial clinical reasoning skills [6,13,14]. In contrast, residents can utilize VR to refine advanced procedural skills, practice managing complex and critical cases, improve teamwork and communication, achieve mastery through repetition, and receive objective performance feedback, ultimately preparing them for independent practice within their specialties [1,7,8,11,28,31,34,35,36]. While both groups benefit from VR’s immersive and risk-free nature, the emphasis shifts from foundational learning for students to advanced skill development and performance optimization for residents.
Previous studies found that practicing on a computer simulation can effectively predict a surgeon’s surgical skills [45]. This suggests that computer simulations can be a valuable tool for training surgeons, as they can provide a safe and controlled environment to practice and improve their skills before operating on real patients [45]. The current meta-analysis supports the benefit of using VRT to improve laparoscopic cholecystectomy skills performance.
Multimodal training programs that combine different teaching methods (such as lectures, simulation, and hands-on practice) are effective for physicians in training. Surgical educators want to know whether such training works well for trainees with different backgrounds and learning styles; studying this question can inform the design of future training programs for surgeons [48].
There are limitations to our study. Only VR-specific papers were included in the analysis (n = 23), although we contend that this reflects the literature. Because of the diversity in recorded outcomes and the stringent criteria for entry into data synthesis, our meta-analysis is based on a small number of studies. Furthermore, we did not include studies from other surgical fields, which may limit the comparability of the results. Subgroup analyses focused only on trainee type and training system and did not consider additional factors such as training duration, proficiency goals, or prior trainee experience. Few systematic reviews have examined the impact of surgical simulation training on patient outcomes. Zendejas et al. reported “small-moderate patient benefits” from medical simulation training, but their analysis included a broader range of medical specialties and a wider definition of patient outcomes, including procedural success [47].
Although this study specifically examines the field of simulation, we believe that several of its findings have broader applicability. VRT may enhance performance and efficiency in laparoscopic cholecystectomy for junior surgeons, potentially serving as a valuable addition to surgical training programs. VRT simulators offer a valuable addition to surgical education, but their high cost limits widespread adoption, particularly for resource-constrained institutions. Therefore, developing portable and inexpensive VR systems is essential to democratize access to this beneficial technology. Such systems could be readily deployed in diverse settings, from smaller hospitals and training centers to individual trainees’ homes, easing the financial burden on institutions and learners. Moreover, portability enables convenient and flexible training, allowing trainees to hone essential skills anytime, anywhere. By lowering the cost barrier, VRT can reach a broader range of surgical trainees, potentially improving surgical proficiency and patient outcomes. Developing cost-effective solutions could also stimulate innovation and competition within the VRT market, driving further advancements in technology and accessibility.

5. Conclusions

Based on the findings, it can be concluded that VRT offers a significant and robust positive impact on laparoscopic surgical performance, particularly in the context of laparoscopic cholecystectomy skills. This positive effect is evidenced by improved global rating scores, which are comprehensive assessments of a surgeon’s overall performance, and reduced operative times, aligning with existing literature. Notably, VRT proves beneficial for both medical students and residents, demonstrating comparable improvements despite differences in their prior surgical experience. This suggests that the novelty of simulation-based laparoscopic skills acquisition may negate the influence of prior learning experience, thereby encouraging the widespread adoption of VRT in medical training regardless of a trainee’s current stage.
Furthermore, the meta-analysis indicates that various VRT systems (e.g., LAP Mentor, LAP Sim, MIST-VR) yield comparable positive training outcomes, implying that the specific VRT platform may be less critical than the implementation of structured simulation training itself. The inherent flexibility of VR allows for tailored training programs that cater to the distinct needs of medical students (focusing on foundational knowledge and basic skills) and residents (emphasizing advanced procedural refinement, complex case management, and objective feedback), while providing an immersive, controlled, and risk-free environment for both groups. Ultimately, VRT serves as a valuable and effective tool for predicting and enhancing surgical skills, providing a safe and controlled avenue for future surgeons to practice and improve their proficiency before real-patient encounters, thereby supporting the continuous evolution of surgical education.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app15158424/s1, Table S1: PRISMA 2020 Checklist.

Author Contributions

Conceptualization, I.S., H.L. and K.-C.S.; original draft preparation, I.S. and H.L.; review and editing, I.S., H.L., C.N. and K.-C.S.; visualization, H.L.; supervision, C.N. and K.-C.S.; project administration, Y.L., I.S. and K.-C.S.; funding acquisition, D.O., C.N. and K.-C.S. All authors have read and agreed to the published version of the manuscript.

Funding

Research reported in this paper was supported by the National Institutes of Health under award number R56EB030053. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Data Availability Statement

The data analyzed in this study are available in the published articles cited. Data generated by this analysis are available in the article and Supplementary Materials. Data extracted from figures and calculations can be shared upon reasonable request to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Harrysson, I.; Hull, L.; Sevdalis, N.; Darzi, A.; Aggarwal, R. Development of a knowledge, skills, and attitudes framework for training in laparoscopic cholecystectomy. Am. J. Surg. 2014, 207, 790–796. [Google Scholar] [CrossRef] [PubMed]
  2. Deziel, D.J.; Millikan, K.W.; Economou, S.G.; Doolas, A.; Ko, S.T.; Airan, M.C. Complications of laparoscopic cholecystectomy: A national survey of 4292 hospitals and an analysis of 77,604 cases. Am. J. Surg. 1993, 165, 9–14. [Google Scholar] [CrossRef] [PubMed]
  3. Moore, M.J.; Bennett, C.L. The learning curve for laparoscopic cholecystectomy. The Southern Surgeons Club. Am. J. Surg. 1995, 170, 55–59. [Google Scholar] [PubMed]
  4. Borman, K.; Fuhrman, G. Association Program Directors in Surgery. “Resident Duty Hours: Enhancing Sleep, Supervision, and Safety”: Response of the Association of Program Directors in Surgery to the December 2008 Report of the Institute of Medicine. Surgery 2009, 146, 420–427. [Google Scholar] [CrossRef]
  5. Britt, L.D.; Sachdeva, A.K.; Healy, G.B.; Whalen, T.V.; Blair, P.G.; Members of ACS Task Force on Resident Duty Hours. Resident duty hours in surgery for ensuring patient safety, providing optimum resident education and training, and promoting resident well-being: A response from the American College of Surgeons to the Report of the Institute of Medicine, “Resident Duty Hours: Enhancing Sleep, Supervision, and Safety”. Surgery 2009, 146, 398–409. [Google Scholar] [CrossRef]
  6. Zendejas, B.; Cook, D.A.; Bingener, J.; Huebner, M.; Dunn, W.F.; Sarr, M.G.; Farley, D.R. Simulation-based mastery learning improves patient outcomes in laparoscopic inguinal hernia repair: A randomized controlled trial. Ann. Surg. 2011, 254, 509–511. [Google Scholar] [CrossRef]
  7. Shore, E.M.; Grantcharov, T.P.; Husslein, H.; Shirreff, L.; Dedy, N.J.; McDermott, C.D.; Lefebvre, G.G. Validating a standardized laparoscopy curriculum for gynecology residents: A randomized controlled trial. Am. J. Obstet. Gynecol. 2016, 215, 204.e1–204.e11. [Google Scholar] [CrossRef]
  8. Patel, N.R.; Makai, G.E.; Sloan, N.L.; Della Badia, C.R. Traditional versus simulation resident surgical laparoscopic salpingectomy training: A randomized controlled trial. J. Minim. Invasive Gynecol. 2016, 23, 372–377. [Google Scholar] [CrossRef]
  9. Nilsson, C.; Sorensen, J.L.; Konge, L.; Westen, M.; Stadeager, M.; Ottesen, B.; Bjerrum, F. Simulation-based camera navigation training in laparoscopy: A randomized trial. Surg. Endosc. 2017, 31, 2131–2139.
  10. Shakur, S.F.; Luciano, C.J.; Kania, P.; Roitberg, B.Z.; Banerjee, P.P.; Slavin, K.V.; Sorenson, J.; Charbel, F.T.; Alaraj, A. Usefulness of a virtual reality percutaneous trigeminal rhizotomy simulator in neurosurgical training. Neurosurgery 2015, 11, 420–425.
  11. Ahlberg, G.; Enochsson, L.; Gallagher, A.G.; Hedman, L.; Hogman, C.; McClusky, D.A., 3rd; Ramel, S.; Smith, C.D.; Arvidsson, D. Proficiency-based virtual reality training significantly reduces the error rate for residents during their first 10 laparoscopic cholecystectomies. Am. J. Surg. 2007, 193, 797–804.
  12. Peltan, I.D.; Shiga, T.; Gordon, J.A.; Currier, P.F. Simulation improves procedural protocol adherence during central venous catheter placement: A randomized controlled trial. Simul. Healthc. 2015, 10, 270–276.
  13. Aggarwal, R.; Ward, J.; Balasundaram, I.; Sains, P.; Athanasiou, T.; Darzi, A. Proving the effectiveness of virtual reality simulation for training in laparoscopic surgery. Ann. Surg. 2007, 246, 771–779.
  14. Grantcharov, T.; Kristiansen, V.; Bendix, J.; Bardram, L.; Rosenberg, J.; Funch-Jensen, P. Randomized clinical trial of virtual reality simulation for laparoscopic skills training. Br. J. Surg. 2004, 91, 146–150.
  15. Sedlack, R.; Kolars, J. Computer simulator training enhances the competency of gastroenterology fellows at colonoscopy: Results of a pilot study. Am. J. Gastroenterol. 2004, 99, 33–37.
  16. Portelli, M.; Bianco, S.F.; Bezzina, T.; Abela, J.E. Virtual reality training compared with apprenticeship training in laparoscopic surgery: A meta-analysis. Ann. R. Coll. Surg. Engl. 2020, 102, 672–684.
  17. Cook, D.A.; Hatala, R.; Brydges, R.; Zendejas, B.; Szostek, J.H.; Wang, A.T.; Erwin, P.J.; Hamstra, S.J. Technology-enhanced simulation for health professions education: A systematic review and meta-analysis. JAMA 2011, 306, 978–988.
  18. Cook, D.A.; Brydges, R.; Hamstra, S.; Zendejas, B.; Szostek, J.H.; Wang, A.T.; Erwin, P.J.; Hatala, R. Comparative effectiveness of technology-enhanced simulation vs. other instructional methods: A systematic review and meta-analysis. Simul. Healthc. 2012, 7, 308–320.
  19. Cook, D.A.; Hamstra, S.; Brydges, R.; Zendejas, B.; Szostek, J.H.; Wang, A.T.; Erwin, P.J.; Hatala, R. Comparative effectiveness of instructional design features in simulation-based education: Systematic review and meta-analysis. Med. Teach. 2013, 35, e867–e898.
  20. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. BMJ 2009, 339, b2535.
  21. Study Quality Assessment Tools. Available online: https://www.nhlbi.nih.gov/health-topics/study-quality-assessment-tools (accessed on 12 December 2024).
  22. Watanabe, Y.; Bilgic, E.; Lebedeva, E.; McKendy, K.M.; Feldman, L.S.; Fried, G.M.; Vassiliou, M.C. A systematic review of performance assessment tools for laparoscopic cholecystectomy. Surg. Endosc. 2016, 30, 832–844.
  23. Martin, J.A.; Regehr, G.; Reznick, R.; MacRae, H.; Murnaghan, J.; Hutchison, C.; Brown, M. Objective structured assessment of technical skill (OSATS) for surgical residents. Br. J. Surg. 1997, 84, 273–278.
  24. Vassiliou, M.C.; Feldman, L.S.; Andrew, C.G.; Bergman, S.; Leffondré, K.; Stanbridge, D.; Fried, G.M. A global assessment tool for evaluation of intraoperative laparoscopic skills. Am. J. Surg. 2005, 190, 107–113.
  25. Doi, S.A.; Barendregt, J.J.; Khan, S.; Thalib, L.; Williams, G.M. Advances in the meta-analysis of heterogeneous clinical trials I: The inverse variance heterogeneity model. Contemp. Clin. Trials 2015, 45, 130–138.
  26. Higgins, J.P.T.; Altman, D.G.; Gøtzsche, P.C.; Jüni, P.; Moher, D.; Oxman, A.D.; Savovic, J.; Schulz, K.F.; Weeks, L.; Sterne, J.A.; et al. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ 2011, 343, d5928.
  27. Felinska, E.A.; Fuchs, T.E.; Kogkas, A.; Chen, Z.W.; Otto, B.; Kowalewski, K.F.; Petersen, J.; Müller-Stich, B.P.; Mylonas, G.; Nickel, F. Telestration with augmented reality improves surgical performance through gaze guidance. Surg. Endosc. 2023, 37, 3557–3566.
  28. Kojima, Y.; Wong, H.J.; Kuchta, K.; Denham, W.; Haggerty, S.; Linn, J.; Ujiki, M. Resident performance in simulation module is associated with operating room performance for laparoscopic cholecystectomy. Surg. Endosc. 2022, 36, 9273–9280.
  29. Sommer, G.M.; Broschewitz, J.; Huppert, S.; Sommer, C.G.; Jahn, N.; Jansen-Winkeln, B.; Gockel, I.; Hau, H.M. The role of virtual reality simulation in surgical training in the light of the COVID-19 pandemic: Visual spatial ability as a predictor for improved surgical performance: A randomized trial. Medicine 2021, 100, e27844.
  30. Khoo, H.C.; Chik, I.; Azman, A.; Zuhdi, Z.; Harunarashid, H.; Jarmin, R. Virtual reality laparoscopic simulator: Training tool for surgical trainee in Malaysia. Formosan J. Surg. 2021, 54, 11–18.
  31. Kowalewski, K.F.; Garrow, C.R.; Proctor, T.; Preukschas, A.A.; Friedrich, M.; Müller, P.C.; Kenngott, H.G.; Fischer, L.; Müller-Stich, B.P.; Nickel, F. LapTrain: Multi-modality training curriculum for laparoscopic cholecystectomy—Results of a randomized controlled trial. Surg. Endosc. 2018, 32, 3830–3838.
  32. Buescher, J.F.; Mehdorn, A.-S.; Neumann, P.A.; Becker, F.; Eichelmann, A.K.; Pankratius, U.; Bahde, R.; Foell, D.; Senninger, N.; Rijcken, E. Effect of continuous motion parameter feedback on laparoscopic simulation training: A prospective randomized controlled trial on skill acquisition and retention. J. Surg. Educ. 2018, 75, 516–526.
  33. Yiasemidou, M.; de Siqueira, J.; Tomlinson, J.; Glassman, D.; Stock, S.; Gough, M. “Take-home” box trainers are an effective alternative to virtual reality simulators. J. Surg. Res. 2017, 213, 69–74.
  34. Hashimoto, D.A.; Sirimanna, P.; Gomez, E.D.; Beyer-Berjot, L.; Ericsson, K.A.; Williams, N.N.; Darzi, A.; Aggarwal, R. Deliberate practice enhances quality of laparoscopic surgical performance in a randomized controlled trial: From arrested development to expert performance. Surg. Endosc. 2015, 29, 3154–3162.
  35. Palter, V.N.; Grantcharov, T.P. Individualized deliberate practice on a virtual reality simulator improves technical performance of surgical novices in the operating room: A randomized controlled trial. Ann. Surg. 2014, 259, 443–448.
  36. von Websky, M.W.; Raptis, D.A.; Vitz, M.; Rosenthal, R.; Clavien, P.A.; Hahnloser, D. Access to a simulator is not enough: The benefits of virtual reality training based on peer-group-derived benchmarks—A randomized controlled trial. World J. Surg. 2013, 37, 2534–2541.
  37. Palter, V.N.; Orzech, N.; Reznick, R.K.; Grantcharov, T.P. Validation of a structured training and assessment curriculum for technical skill acquisition in minimally invasive surgery: A randomized controlled trial. Ann. Surg. 2013, 257, 224–230.
  38. Maschuw, K.; Schlosser, K.; Kupietz, E.; Slater, E.P.; Weyers, P.; Hassan, I. Do soft skills predict surgical performance? A single-center randomized controlled trial evaluating predictors of skill acquisition in virtual reality laparoscopy. World J. Surg. 2011, 35, 480–486.
  39. Loukas, C.; Nikiteas, N.; Kanakis, M.; Georgiou, E. Deconstructing laparoscopic competence in a virtual reality simulation environment. Surgery 2011, 149, 750–760.
  40. Sroka, G.; Feldman, L.S.; Vassiliou, M.C.; Kaneva, P.A.; Fayez, R.; Fried, G.M. Fundamentals of Laparoscopic Surgery simulator training to proficiency improves laparoscopic performance in the operating room: A randomized controlled trial. Am. J. Surg. 2010, 199, 115–120.
  41. Brown, D.C.; Miskovic, D.; Tang, B.; Hanna, G.B. Impact of established skills in open surgery on the proficiency gain process for laparoscopic surgery. Surg. Endosc. Interv. Tech. 2010, 24, 1420–1426.
  42. Lucas, S.; Tuncel, A.; Bensalah, K.; Zeltser, I.; Jenkins, A.; Pearle, M.; Cadeddu, J. Virtual reality training improves simulated laparoscopic surgery performance in laparoscopy naïve medical students. J. Endourol. 2008, 22, 1047–1051.
  43. Hogle, N.J.; Widmann, W.D.; Ude, A.O.; Hardy, M.A.; Fowler, D.L. Does training novices to criteria and does rapid acquisition of skills on laparoscopic simulators have predictive validity or are we just playing video games? J. Surg. Educ. 2008, 65, 431–435.
  44. Schijven, M.; Klaassen, R.; Jakimowicz, J.; Terpstra, O.T. The Intercollegiate Basic Surgical Skills Course: Laparoscopic skill assessment using the Xitact LS500 laparoscopy simulator. Surg. Endosc. 2003, 17, 1978–1984.
  45. Seymour, N.E.; Gallagher, A.G.; Roman, S.A.; O’Brien, M.K.; Bansal, V.K.; Andersen, D.K.; Satava, R.M. Virtual reality training improves operating room performance: Results of a randomized, double-blinded study. Ann. Surg. 2002, 236, 458–464.
  46. Grantcharov, T.P.; Rosenberg, J.; Pahle, E.; Funch-Jensen, P. Virtual reality computer simulation: An objective method for the evaluation of laparoscopic surgical skills. Surg. Endosc. 2001, 15, 242–244.
  47. Zendejas, B.; Brydges, R.; Wang, A.T.; Cook, D.A. Patient outcomes in simulation-based medical education: A systematic review. J. Gen. Intern. Med. 2013, 28, 1078–1089.
  48. Johnston, M.J.; Paige, J.T.; Aggarwal, R.; Stefanidis, D.; Tsuda, S.; Khajuria, A.; Arora, S.; Association for Surgical Education Simulation Committee. An overview of research priorities in surgical simulation: What the literature shows has been achieved during the 21st century and what remains. Am. J. Surg. 2016, 21, 214–225.
Figure 1. PRISMA diagram.
Figure 2. Risk of bias: (a) individual studies, (b) summary of all the studies.
Table 1. Summary of included studies.
| First Author (Year), Country | Study Design | Participant Grade (Total) | Intervention(s):Control | VR System | Comparator | Assessment | Outcomes | Was VRT Favored? |
|---|---|---|---|---|---|---|---|---|
| Felinska (2023) [27], Germany, UK | MC, Crossover Trial, RCT | Medical students (40) | 20:20 | iSurgeon | NAT | 8-task (ex vivo porcine LC) | OSATS, time, errors, NASA-TLX, blink rate | Yes |
| Kojima (2022) [28] | SC, Pre-Post | Residents (21) | 21: no control | LAPMentor | SP vs. OR LCMP | Hands-on LC Case I | GOALS, OSATS, LC-SIM, STAT | No |
| Sommer (2021) [29], Germany | SC, 2-RCT | Medical students (60) | 30:30 | LapX (SOR), LapSim (CD) | NAT | Tube figure test | Time, instrument movement, number of missed clips | No |
| Khoo (2021) [30], Malaysia | SC, Pre-Post | Surgical residents (9) | 9: no control | LAPMentor, VRLS | Baseline | Full LC on LAPMentor | Time, LAPMentor metrics, confidence level | Yes |
| Kowalewski (2018) [31], Germany | SC, 2-RCT | Surgical residents (64) | 33:31 | LAPMentor | NAT | VR LC, porcine LC | GOALS | Yes |
| Buescher (2017) [32], Germany | SC, 2-RCT | Medical students (36) | 18:18 | LAPMentor II, LapX-Hybrid simulator | Box training | VR cholecystectomy | Time, path length, number of movements, velocity, idle time | Yes |
| Yiasemidou (2017) [33], UK | SC, 2-RCT | Surgical trainees (16) | 9:7 | LAPMentor | “Take-home” box training | Two VR cholecystectomies | GOALS, time, path length, number of movements | No |
| Hashimoto (2015) [34], USA | SC, 2-RCT | Surgical residents (14) | 7:7 | LAPMentor, LapSim | Deliberate practice | Basic Tasks 5 and 6 (pre and post) | GRS, OSATS | No |
| Palter (2014) [35], Canada | SC, 2-RCT | Surgical trainees (16) | 8:8 | LapSim | Conventional feedback | LC in OR under supervision | OSATS, procedure-specific evaluation tool | Yes |
| Harrysson (2014) [1], UK | SC | Surgical residents (5) | 5: no control | LAPMentor | Classroom | LC | Survey | Yes |
| von Websky (2013) [36] | SC, 2-RCT | Surgical residents (68) | 32:34 (dropouts = 2) | LAPMentor | Peer-group-derived benchmarks | LC on the simulator | PEM, SEM | Yes |
| Palter (2013) [37], Canada | SC, 2-RCT | Surgical trainees (20) | 10:10 | LapSim | Conventionally trained group | 5 sequential LC in OR | OSATS | Yes |
| Maschuw (2011) [38], France | SC, 2-RCT, Pre-Post | Surgical residents (50) | 25:25 | LapSim | NAT | Camera navigation in LC, SPQ | VR performance parameters | No |
| Loukas (2011) [39], Greece | SC, Pre-Post: Experienced | Surgical residents (20), experts (8) | 20: no control | LapVR (surgical VR simulator) | LapVR | Essential tasks, procedural task, surgical procedure (LC) | Time, total path length, dexterity, velocity, technical skill (error) | Yes |
| Sroka (2010) [40] | SC, 2-RCT | Surgical trainees (19) | 9:8 (excluded = 2, GOALS > 15) | MISTELS | Regular residency training | Elective LC | GOALS | Yes |
| Brown (2010) [41], UK | MC, Pre-Post-Retention | Medical students (11), surgical residents (14) | 25: no control | LAPMentor | LAPMentor | 10 LC | OSATS, GRS, survey, PM, knowledge, confidence level, corrective instruction | Yes |
| Lucas (2008) [42], USA | SC, 2-RCT, Pre-Post | Medical students (32) | 16:16 | LAPMentor | NAT | VR LC | OSATS | Yes |
| Hogle (2008) [43], USA | SC, 2-RCT | Surgical residents (21) | 10:11 | LapSim | NAT | LC in a pig | GOALS | No |
| Ahlberg (2006) [11], Sweden | MC, 2-RCT | Surgical residents (13) | NR | LapSim | NAT | 1st, 5th, and 10th LC | Time, path length, defined surgical errors | Yes |
| Grantcharov (2004) [14], Denmark | MC, 2-RCT | Surgical residents (20) | 10:10 | MIST-VR | NAT | 2 LC | Time, error scores, economy-of-movement scores | Yes |
| Schijven (2003) [44], Netherlands | MC, 2-RCT, Pre-Post | Surgical residents (25), interns (25) | 25:25 | Xitact LS500 | No BSSC | Standardized LC, clip-and-cut | Time, sum score, QCM | No |
| Seymour (2002) [45], UK | SC, 2-RCT | Surgical residents (16) | 8:8 | MIST-VR | Non-VR trained | LC with an attending surgeon | Time, operative errors | Yes |
| Grantcharov (2001) [46], Denmark | SC (pig LC: MIST-VR) | Surgical residents (14) | 14: no control | MIST-VR | NAT | LC on pig | Time, error score | Yes |
Single Center (SC); Multi Center (MC); No Additional Training (NAT); Laparoscopic Cholecystectomy (LC); Randomized Controlled Trial (RCT); Surgical Training and Assessment Tool (STAT); Global Rating Scores (GRS); Objective Structured Assessment of Technical Skill (OSATS); NASA Task Load Index (NASA-TLX); Global Operative Assessment of Laparoscopic Skills (GOALS); operating room (OR); performance parameters (PM); no report (NR); standardized psychological questionnaires (SPQ); LC-specific simulation assessment form (LC-SIM); LapMentor VR laparoscopic simulator (LAPMentor); LapSim VR laparoscopic simulator (LapSim); LapVR (VR simulator, software build 965; Immersion Medical, San Jose, CA, USA); McGill Inanimate System for Training and Evaluation of Laparoscopic Skills (MISTELS); Simendo® Virtual Reality Laparoscopy Simulator (VRLS); Minimally Invasive Surgical Trainer–Virtual Reality (MIST-VR); Structured Training and Assessment Curriculum (STAC); Xitact LS500 laparoscopy simulator (Xitact LS500); Intercollegiate Basic Surgical Skills Course (BSSC); Questionnaire of Current Motivation (QCM). Primary endpoint metrics (PEM); Secondary endpoint metrics (SEM); scope orientation 30° right-handed (SOR); LapX-Hybrid/VR (Epona Medical B.V.); cholecystectomy dissection (CD); Simulation Performance (SP); LC Module Performance (LCMP).
Table 2. Forest plot for time to task completion. Virtual Reality (VR); No Additional Training (NAT); Traditional Training (TT); Virtual Reality with feedback (VR+).
| Study | Intervention Mean | S.D. | N | Control Mean | S.D. | N | Mean Difference, IVhet 95% CI | Weight (%) |
|---|---|---|---|---|---|---|---|---|
| VR vs. NAT subgroup |  |  |  |  |  |  |  |  |
| Felinska et al. [27] | 0.80 | 0.49 | 20 | 1.00 | 0.55 | 20 | −0.20 [−0.53, 0.12] | 5.51 |
| Kowalewski et al. [31] | 0.73 | 0.31 | 33 | 1.00 | 0.40 | 31 | −0.27 [−0.45, −0.10] | 18.48 |
| Seymour et al. [45] | 0.70 | 0.22 | 8 | 1.00 | 0.27 | 8 | −0.30 [−0.60, −0.15] | 6.50 |
| Subtotal |  |  | 61 |  |  | 59 | −0.27 [−0.40, −0.13] | 30.49 |
| Heterogeneity: Q = 0.20, p = 0.90, I² = 0% |  |  |  |  |  |  |  |  |
| VR vs. TT subgroup |  |  |  |  |  |  |  |  |
| Palter et al. (2013) [37] | 0.80 | 0.21 | 10 | 1.00 | 0.23 | 10 | −0.20 [−0.39, 0.00] | 15.09 |
| VR+ vs. VR subgroup |  |  |  |  |  |  |  |  |
| Hashimoto et al. [34] | 0.95 | 0.56 | 7 | 1.00 | 0.55 | 7 | −0.05 [−0.63, 0.53] | 1.73 |
| von Websky et al. [36] | 0.76 | 0.34 | 107 | 1.00 | 0.38 | 81 | −0.24 [−0.35, −0.14] | 52.69 |
| Subtotal |  |  | 114 |  |  | 88 | −0.24 [−0.32, −0.13] | 54.42 |
| Heterogeneity: Q = 0.41, p = 0.52, I² = 0% |  |  |  |  |  |  |  |  |
| Overall |  |  | 185 |  |  | 157 | −0.24 [−0.32, −0.16] | 100.00 |
| Heterogeneity: Q = 0.94, p = 0.97, I² = 0% |  |  |  |  |  |  |  |  |

Negative mean differences (shorter time) favor VRT.
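The pooled estimates in Table 2 follow from standard inverse-variance weighting; the IVhet model of Doi et al. [25] keeps these fixed-effect weights and only adjusts the confidence interval for heterogeneity, so with I² = 0% the two coincide. A minimal sketch of the computation (function and variable names are illustrative, not from the original analysis), using the VR vs. NAT subgroup rows as input:

```python
import math

def pooled_md(studies):
    """Inverse-variance pooled mean difference.

    Each study is (m1, sd1, n1, m2, sd2, n2) for intervention and control.
    Returns the pooled MD, its 95% CI, and the normalized study weights.
    """
    weights, weighted_mds = [], []
    for m1, sd1, n1, m2, sd2, n2 in studies:
        var = sd1 ** 2 / n1 + sd2 ** 2 / n2   # variance of the mean difference
        w = 1.0 / var                          # fixed-effect weight
        weights.append(w)
        weighted_mds.append(w * (m1 - m2))
    total = sum(weights)
    md = sum(weighted_mds) / total
    se = math.sqrt(1.0 / total)
    ci = (md - 1.96 * se, md + 1.96 * se)
    return md, ci, [w / total for w in weights]

# VR vs. NAT subgroup of Table 2 (Felinska, Kowalewski, Seymour)
subgroup = [
    (0.80, 0.49, 20, 1.00, 0.55, 20),
    (0.73, 0.31, 33, 1.00, 0.40, 31),
    (0.70, 0.22, 8, 1.00, 0.27, 8),
]
md, ci, weights = pooled_md(subgroup)   # md comes out close to the printed subtotal of −0.27
```

Individual study CIs in the published table may differ slightly from this sketch because of rounding and the IVhet variance adjustment.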
Table 3. Forest plot for the time to completion of task: students vs. residents.
| Study | Intervention Mean | S.D. | N | Control Mean | S.D. | N | Mean Difference, IVhet 95% CI | Weight (%) |
|---|---|---|---|---|---|---|---|---|
| Students subgroup |  |  |  |  |  |  |  |  |
| Felinska et al. [27] | 0.80 | 0.49 | 20 | 1.00 | 0.55 | 20 | −0.20 [−0.53, 0.12] | 5.51 |
| Palter et al. (2013) [37] | 0.80 | 0.21 | 10 | 1.00 | 0.23 | 10 | −0.20 [−0.39, 0.00] | 15.09 |
| Subtotal |  |  | 30 |  |  | 30 | −0.20 [−0.37, −0.03] | 20.60 |
| Heterogeneity: Q = 0.00, p = 0.97, I² = 0% |  |  |  |  |  |  |  |  |
| Residents subgroup |  |  |  |  |  |  |  |  |
| Kowalewski et al. [31] | 0.73 | 0.31 | 33 | 1.00 | 0.40 | 31 | −0.27 [−0.45, −0.10] | 18.48 |
| Hashimoto et al. [34] | 0.95 | 0.56 | 7 | 1.00 | 0.55 | 7 | −0.05 [−0.63, 0.53] | 1.73 |
| von Websky et al. [36] | 0.76 | 0.34 | 107 | 1.00 | 0.38 | 81 | −0.24 [−0.35, −0.14] | 52.69 |
| Seymour et al. [45] | 0.70 | 0.22 | 8 | 1.00 | 0.27 | 8 | −0.30 [−0.60, −0.15] | 6.50 |
| Subtotal |  |  | 155 |  |  | 127 | −0.25 [−0.34, −0.16] | 79.40 |
| Heterogeneity: Q = 0.65, p = 0.89, I² = 0% |  |  |  |  |  |  |  |  |
| Overall |  |  | 185 |  |  | 157 | −0.24 [−0.32, −0.16] | 100.00 |
| Heterogeneity: Q = 0.94, p = 0.97, I² = 0% |  |  |  |  |  |  |  |  |

Negative mean differences (shorter time) favor VRT.
Table 4. Forest plot for performance assessments.
| Study | Intervention Mean | S.D. | N | Control Mean | S.D. | N | Mean Difference, IVhet 95% CI | Weight (%) |
|---|---|---|---|---|---|---|---|---|
| VR vs. NAT subgroup |  |  |  |  |  |  |  |  |
| Felinska et al. [27] | 8.13 | 1.88 | 20 | 5.13 | 4.13 | 20 | 3.00 [1.01, 4.99] | 7.91 |
| Kowalewski et al. [31] | 5.76 | 4.24 | 33 | 4.00 | 3.00 | 31 | 1.76 [−0.03, 3.55] | 9.71 |
| Palter et al. (2014) [35] | 8.17 | 1.83 | 8 | 4.71 | 3.71 | 8 | 3.47 [0.60, 6.33] | 3.80 |
| Lucas et al. [42] | 7.27 | 2.73 | 16 | 3.35 | 2.35 | 16 | 3.91 [2.14, 5.68] | 9.97 |
| Seymour et al. [45] | 7.97 | 0.84 | 8 | 5.50 | 4.50 | 8 | 2.47 [−0.70, 5.64] | 3.10 |
| Subtotal |  |  | 85 |  |  | 83 | 2.92 [1.97, 3.87] | 34.49 |
| Heterogeneity: Q = 3.05, p = 0.55, I² = 0% |  |  |  |  |  |  |  |  |
| VR vs. TT subgroup |  |  |  |  |  |  |  |  |
| Yiasemidou et al. [33] | 3.92 | 2.92 | 7 | 8.78 | 1.22 | 9 | −4.86 [−7.17, −2.56] | 5.87 |
| Palter et al. (2013) [37] | 8.96 | 1.04 | 10 | 3.77 | 2.77 | 10 | 5.19 [3.36, 7.03] | 9.28 |
| Sroka et al. [40] | 7.78 | 2.22 | 9 | 3.57 | 2.57 | 8 | 4.21 [1.91, 6.51] | 5.90 |
| Subtotal |  |  | 26 |  |  | 27 | 2.11 [−4.16, 8.38] | 21.06 |
| Heterogeneity: Q = 49.28, p = 0.00, I² = 96% |  |  |  |  |  |  |  |  |
| VR+ vs. VR subgroup |  |  |  |  |  |  |  |  |
| Hashimoto et al. [34] | 8.33 | 1.67 | 7 | 2.67 | 1.67 | 7 | 5.65 [3.90, 7.41] | 10.13 |
| von Websky et al. [36] | 7.31 | 2.69 | 107 | 4.71 | 3.70 | 81 | 2.61 [1.66, 3.57] | 34.32 |
| Subtotal |  |  | 114 |  |  | 88 | 3.30 [0.00, 6.61] | 44.45 |
| Heterogeneity: Q = 8.90, p = 0.00, I² = 89% |  |  |  |  |  |  |  |  |
| Overall |  |  | 225 |  |  | 198 | 2.92 [0.96, 4.88] | 100.00 |
| Heterogeneity: Q = 63.74, p = 0.00, I² = 86% |  |  |  |  |  |  |  |  |

Positive mean differences (higher scores) favor VRT.
Table 5. Forest plot for performance assessments: students vs. residents.
| Study | Intervention Mean | S.D. | N | Control Mean | S.D. | N | Mean Difference, IVhet 95% CI | Weight (%) |
|---|---|---|---|---|---|---|---|---|
| Students subgroup |  |  |  |  |  |  |  |  |
| Felinska et al. [27] | 8.13 | 1.88 | 20 | 5.13 | 4.13 | 20 | 3.00 [1.01, 4.99] | 8.40 |
| Palter et al. (2014) [35] | 8.17 | 1.83 | 8 | 4.71 | 3.71 | 8 | 3.47 [0.60, 6.33] | 4.04 |
| Palter et al. (2013) [37] | 8.96 | 1.04 | 10 | 3.77 | 2.77 | 10 | 5.19 [3.36, 7.03] | 9.86 |
| Sroka et al. [40] | 7.78 | 2.22 | 9 | 3.57 | 2.57 | 8 | 4.21 [1.91, 6.51] | 6.27 |
| Lucas et al. [42] | 7.27 | 2.73 | 16 | 3.35 | 2.35 | 16 | 3.91 [2.14, 5.68] | 10.60 |
| Subtotal |  |  | 63 |  |  | 62 | 4.04 [3.12, 4.96] | 39.16 |
| Heterogeneity: Q = 2.77, p = 0.60, I² = 0% |  |  |  |  |  |  |  |  |
| Residents subgroup |  |  |  |  |  |  |  |  |
| Kowalewski et al. [31] | 5.76 | 4.24 | 33 | 4.00 | 3.00 | 31 | 1.76 [−0.03, 3.55] | 10.32 |
| Hashimoto et al. [34] | 8.33 | 1.67 | 7 | 2.67 | 1.67 | 7 | 5.65 [3.90, 7.41] | 10.76 |
| von Websky et al. [36] | 7.31 | 2.69 | 107 | 4.71 | 3.70 | 81 | 2.61 [1.66, 3.57] | 36.46 |
| Seymour et al. [45] | 7.97 | 0.84 | 8 | 5.50 | 4.50 | 8 | 2.47 [−0.70, 5.64] | 3.29 |
| Subtotal |  |  | 155 |  |  | 127 | 3.00 [1.03, 4.97] | 60.84 |
| Heterogeneity: Q = 11.36, p = 0.01, I² = 74% |  |  |  |  |  |  |  |  |
| Overall |  |  | 218 |  |  | 189 | 3.41 [2.39, 4.43] | 100.00 |
| Heterogeneity: Q = 17.13, p = 0.03, I² = 53% |  |  |  |  |  |  |  |  |

Positive mean differences (higher scores) favor VRT.
Table 6. Forest plot for comparing different assessment methods.
| Study | Intervention Mean | S.D. | N | Control Mean | S.D. | N | Mean Difference, IVhet 95% CI | Weight (%) |
|---|---|---|---|---|---|---|---|---|
| OSATS subgroup |  |  |  |  |  |  |  |  |
| Felinska et al. [27] | 8.13 | 1.88 | 20 | 5.13 | 4.13 | 20 | 3.00 [1.01, 4.99] | 8.40 |
| Hashimoto et al. [34] | 8.33 | 1.67 | 7 | 2.67 | 1.67 | 7 | 5.65 [3.90, 7.41] | 10.76 |
| Palter et al. (2014) [35] | 8.17 | 1.83 | 8 | 4.71 | 3.71 | 8 | 3.47 [0.60, 6.33] | 4.04 |
| Sroka et al. [40] | 7.78 | 2.22 | 9 | 3.57 | 2.57 | 8 | 4.21 [1.91, 6.51] | 6.27 |
| Lucas et al. [42] | 7.27 | 2.73 | 16 | 3.35 | 2.35 | 16 | 3.91 [2.14, 5.68] | 10.60 |
| Subtotal |  |  | 60 |  |  | 59 | 4.19 [3.23, 5.15] | 40.07 |
| Heterogeneity: Q = 4.39, p = 0.36, I² = 9% |  |  |  |  |  |  |  |  |
| GOALS subgroup |  |  |  |  |  |  |  |  |
| Kowalewski et al. [31] | 5.76 | 4.24 | 33 | 4.00 | 3.00 | 31 | 1.76 [−0.03, 3.55] | 10.32 |
| Path length subgroup |  |  |  |  |  |  |  |  |
| von Websky et al. [36] | 7.31 | 2.69 | 107 | 4.71 | 3.70 | 81 | 2.61 [1.66, 3.57] | 36.46 |
| Palter et al. (2013) [37] | 8.96 | 1.04 | 10 | 3.77 | 2.77 | 10 | 5.19 [3.36, 7.03] | 9.86 |
| Subtotal |  |  | 117 |  |  | 91 | 3.16 [0.37, 5.95] | 46.32 |
| Heterogeneity: Q = 5.99, p = 0.01, I² = 83% |  |  |  |  |  |  |  |  |
| Operative error subgroup |  |  |  |  |  |  |  |  |
| Seymour et al. [45] | 7.97 | 0.84 | 8 | 5.50 | 4.50 | 8 | 2.47 [−0.70, 5.64] | 3.29 |
| Overall |  |  | 218 |  |  | 189 | 3.41 [2.39, 4.43] | 100.00 |
| Heterogeneity: Q = 17.13, p = 0.03, I² = 53% |  |  |  |  |  |  |  |  |

Positive mean differences favor the intervention.
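The Q and I² values printed under each subgroup are Cochran's heterogeneity statistic and the Higgins I² derived from it. A short sketch, assuming each study's standard error is recovered from its 95% CI half-width (input values are rounded from the OSATS subgroup of Table 6, so the result may differ slightly from the printed Q = 4.39; names are illustrative):

```python
def cochran_q_i2(estimates, ses):
    """Cochran's Q and Higgins I^2 (%) from per-study estimates and standard errors."""
    w = [1.0 / se ** 2 for se in ses]                       # fixed-effect weights
    pooled = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0   # I^2 = (Q - df) / Q
    return pooled, q, i2

# OSATS subgroup of Table 6: (mean difference, 95% CI lower, 95% CI upper)
rows = [(3.00, 1.01, 4.99), (5.65, 3.90, 7.41), (3.47, 0.60, 6.33),
        (4.21, 1.91, 6.51), (3.91, 2.14, 5.68)]
ses = [(upper - lower) / (2 * 1.96) for _, lower, upper in rows]  # SE from CI half-width
pooled, q, i2 = cochran_q_i2([md for md, _, _ in rows], ses)
```

Run against these rows, the pooled estimate reproduces the printed subtotal of 4.19, and Q and I² land close to the printed 4.39 and 9%.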
Table 7. Forest plot for the scores: VRT systems.
| Study | Intervention Mean | S.D. | N | Control Mean | S.D. | N | Mean Difference, IVhet 95% CI | Weight (%) |
|---|---|---|---|---|---|---|---|---|
| iSurgeon subgroup |  |  |  |  |  |  |  |  |
| Felinska et al. [27] | 8.13 | 1.88 | 20 | 5.13 | 4.13 | 20 | 3.00 [1.01, 4.99] | 9.41 |
| LAPMentor subgroup |  |  |  |  |  |  |  |  |
| Kowalewski et al. [31] | 5.76 | 4.24 | 33 | 4.00 | 3.00 | 31 | 1.76 [−0.03, 3.55] | 11.57 |
| von Websky et al. [36] | 7.31 | 2.69 | 107 | 4.71 | 3.70 | 81 | 2.61 [1.66, 3.57] | 40.86 |
| Lucas et al. [42] | 7.27 | 2.73 | 16 | 3.35 | 2.35 | 16 | 3.91 [2.14, 5.68] | 11.88 |
| Subtotal |  |  | 156 |  |  | 128 | 2.70 [1.68, 3.72] | 64.30 |
| Heterogeneity: Q = 2.90, p = 0.23, I² = 31% |  |  |  |  |  |  |  |  |
| LapSim subgroup |  |  |  |  |  |  |  |  |
| Palter et al. (2014) [35] | 8.17 | 1.83 | 8 | 4.71 | 3.71 | 8 | 3.47 [0.60, 6.33] | 4.52 |
| Palter et al. (2013) [37] | 8.96 | 1.04 | 10 | 3.77 | 2.77 | 10 | 5.19 [3.36, 7.03] | 11.05 |
| Subtotal |  |  | 18 |  |  | 18 | 4.69 [3.15, 6.24] | 15.57 |
| Heterogeneity: Q = 0.99, p = 0.32, I² = 0% |  |  |  |  |  |  |  |  |
| MIST-VR subgroup |  |  |  |  |  |  |  |  |
| Sroka et al. [40] | 7.78 | 2.22 | 9 | 3.57 | 2.57 | 8 | 4.21 [1.91, 6.51] | 7.03 |
| Seymour et al. [45] | 7.97 | 0.84 | 8 | 5.50 | 4.50 | 8 | 2.47 [−0.70, 5.64] | 3.69 |
| Subtotal |  |  | 17 |  |  | 16 | 3.61 [1.75, 5.47] | 10.72 |
| Heterogeneity: Q = 0.76, p = 0.38, I² = 0% |  |  |  |  |  |  |  |  |
| Overall |  |  | 211 |  |  | 182 | 3.14 [2.30, 3.97] | 100.00 |
| Heterogeneity: Q = 10.08, p = 0.18, I² = 31% |  |  |  |  |  |  |  |  |

Positive mean differences (higher scores) favor VRT.

Share and Cite

MDPI and ACS Style

Suh, I.; Li, H.; Li, Y.; Nelson, C.; Oleynikov, D.; Siu, K.-C. Using Virtual Reality Simulators to Enhance Laparoscopic Cholecystectomy Skills Learning. Appl. Sci. 2025, 15, 8424. https://doi.org/10.3390/app15158424
