Review

Surgeon Training in the Era of Computer-Enhanced Simulation Robotics and Emerging Technologies: A Narrative Review

1 Department of General Surgery, Joondalup Health Campus, Perth, WA 6027, Australia
2 Master of Minimally Invasive Surgery Program, Faculty of Health and Medical Science, University of Adelaide, Adelaide, SA 5005, Australia
3 School of Medicine, University of Otago Christchurch, Christchurch 4710, New Zealand
4 Department of General Surgery, Sir Charles Gairdner Hospital, Nedlands, WA 6009, Australia
5 Department of Surgery, St George Hospital, Sydney, NSW 2217, Australia
6 Department of General Surgery, John Hunter Hospital, Newcastle, NSW 2305, Australia
7 Department of Surgery, The Queen Elizabeth Hospital, Adelaide, SA 5011, Australia
8 Faculty of Medicine, Department of Surgery, University of New South Wales, Sydney, NSW 2033, Australia
* Author to whom correspondence should be addressed.
Surg. Tech. Dev. 2025, 14(3), 21; https://doi.org/10.3390/std14030021
Submission received: 20 January 2025 / Revised: 12 April 2025 / Accepted: 23 June 2025 / Published: 27 June 2025

Abstract

Background: Teaching methodology has recently undergone significant evolution from traditional apprenticeship models as surgical practice adapts to ever-increasing rates of technological advancement. Big data, artificial intelligence, and machine learning are on the precipice of revolutionising all aspects of surgical practice, with far-reaching implications. Robotic platforms will increase in autonomy as machine learning becomes more sophisticated, and training requirements must evolve so that they no longer slow innovation. Materials and Methods: A search of published studies discussing surgeon training and computer-enhanced simulation, robotics, and emerging technologies was performed in January 2024 using MEDLINE, PubMed, EMBASE, Scopus, Cochrane CENTRAL, CINAHL, and Web of Science. Online resources associated with proprietary technologies related to the subject matter were also utilised. Results: Following a review of 3209 articles, 91 relevant publications on aspects of robotics-based computer-enhanced simulation, technologies, and education were included. These ranged from RCTs and cohort studies to meta-analyses and systematic reviews. The content of eight medical technology-based websites was also analysed to ensure the most up-to-date information was included. Discussion: Surgeons should aim to be at the forefront of this revolution for the ultimate benefit of patients. Surgical exposure will no longer depend on incidental clinical experience. Rather, surgeons and trainees will have access to a complete database of simulated minimally invasive procedures, and procedural simulation certification will likely become a prerequisite for progression to live operating in order to maintain rigorous patient safety standards. This review provides a comprehensive outline of the current and future status of surgical training in the robotic and digital era.

1. Introduction

Surgical practice has undergone a dramatic evolution in the past 30 years. Rapid advances in computing and imaging technology have enabled the rise of minimally invasive surgery (MIS) as an alternative to traditional open procedures. MIS approaches have subsequently become the gold standard across many surgical disciplines owing to their perioperative benefits, improved cosmesis, and faster functional recovery. Initially, laparoscopy with “straight stick” instruments surged in popularity through the 1990s and subsequently became the most common approach to MIS worldwide. Robotic surgery presents an alternative MIS approach that promises to eclipse laparoscopy and become the new standard of care in the near future. Purported benefits include visual enhancement with 3D stereoscopic vision and up to 10× magnification, tremor reduction, greatly enhanced dexterity through articulated instruments offering seven degrees of freedom, improved ergonomics, and a resultant decrease in both mental and physical operator fatigue.
The Da Vinci platform from Intuitive Surgical has dominated the robotics market to date. International adoption has been slow but steady, owing primarily to the associated costs when compared to laparoscopy. However, as patents expire on many robotic technologies, numerous competitors are now entering the robotic marketplace. This promises to drive down prices and improve availability, pushing robotic surgery to the forefront of global MIS. It is therefore critical to examine how best to learn and teach in this exciting new surgical era.

2. Materials and Methods

We searched for publications on the MEDLINE (EBSCO), PubMed, EMBASE, Scopus, Cochrane Central Register of Controlled Trials (CENTRAL), CINAHL (EBSCO), and Web of Science databases using the keywords “robotic surgery”, “surgical education” AND “simulation”, “virtual reality”, “3D imaging”, “augmented reality”, “telesurgery”, “artificial intelligence”, “3D printing”, and “dual-console training” OR “telementoring”. The search was limited to articles published from 2002 to 2023 and was performed in January 2024.

3. Results

Given the broad nature of the topic, 3209 potential articles were assessed and narrowed down to the 91 publications that best addressed the main facets of this review. As the field is rapidly evolving and driven largely by medical technology companies, the content of eight medical technology-based websites was also analysed and added to this review to ensure the most up-to-date information was included.

4. Discussion

4.1. Curriculum Development—The Apprentice Model

Traditionally, open surgical training represented an apprenticeship model. Trainees observed and assisted their surgical mentors in performing a procedure. Through mimicry and direct supervision, trainees gradually acquired operative skill and increasing independence. Competence was subjectively determined by surgical mentors throughout training, with little in the way of an objective, procedure-specific assessment of technical proficiency. However, modern surgical training poses significant challenges to this model. Restricted working hours, increased subspecialisation, and a rapid expansion in technology and procedures dictate that this model is no longer able to guarantee training adequacy. Furthermore, as established surgeons adopt new technologies, it is imperative that gold-standard outcomes are maintained through thorough training. These issues are compounded by a lack of standardisation in hospital accreditation processes and by pressures from patients, peers, media, medical technology companies, and healthcare administrators. Thus, surgical training models must adapt in order to maintain standards of care. Zorn and colleagues highlighted the extent of the problem in 2009, noting that an estimated 85% of radical prostatectomies performed in the US in the preceding year were robot-assisted despite the lack of any formalised accreditation or training process [1].
Currently, Intuitive Surgical provides a recommended framework for robotic accreditation, including online learning, in-service training, bedside assistance, and primary operating [2]. Trainees subsequently provide a letter from their supervisor supporting their robotic competence, at which point Intuitive will issue a certificate of system training. The completion of Da Vinci surgical simulator skills sessions is also recommended, though not required. However, one must note that this certificate is only intended to show competence in the use of the robot itself. The ability to perform a range of procedures requires far greater surgical experience with an in-depth knowledge of anatomy, tissue handling, and pathophysiology.
In the past decade there has been a deluge of studies reporting on the robotic learning curve and its potential utility in determining competence. A recent systematic review by Soomro et al. showed that the majority of the literature was of poor quality, with a large variety of outcome measures and methodologies rendering it difficult to draw any meaningful conclusions for the purposes of implementing safe training in robotic MIS [3]. The challenge lies in determining at what point an individual has reached sufficient proficiency to practice independently without undue harm to patients, and it seems that case numbers alone are insufficient to determine this.
Therefore, considerable effort has been placed into the development of structured training programmes for the safe implementation of robotic surgery. The Society of American Gastrointestinal and Endoscopic Surgeons (SAGES) and the Minimally Invasive Robotic Association (MIRA) first published a consensus statement regarding guidelines for training and credentialing in 2008 [4]. These guidelines emphasised the importance of a combination of didactic teaching, live case observation, and hands-on experience, both simulated and in vivo, though specific indicators of proficiency were not discussed. Lee and colleagues expanded on this concept, publishing best-practice guidelines for robotic training and credentialing in 2011 [5]. They divided the training process into preclinical and clinical phases. Preclinical training involved didactic teaching and online learning modules followed by the acquisition of basic robotic skills through dry labs and simulation. Trainees subsequently graduated to the clinical phase of teaching, which progressed through case observation, live cases and expert discussion, bedside assisting, and finally operating as console surgeon with a procedural breakdown into measurable steps of increasing complexity. Critically, the authors recognised the inadequacy of using case numbers to determine competence. They therefore recommended an objective, outcomes-based assessment of proficiency prior to credentialing. These recommendations form the backbone of modern robotic training curricula.
This has been further refined over time. In 2014, 14 multinational surgical societies developed the “Fundamentals of Robotic Surgery” (FRS) curriculum [6]. This programme is web-based and has the formidable aspiration of generalisability to any robotics platform and any surgical discipline. It includes 25 perioperative outcome measures and is divided into three sections: cognitive skills, psychomotor skills, and team training and communication. Assessment is competency-based rather than time-based, with trainees required to reach benchmark “pass” values to complete the course. A multicentre RCT showed that significant improvements in task completion time and error rate were made following FRS training. These improvements matched those of a control group of surgeons who had completed their own local institution-specific training, confirming that the FRS is at least comparable with other common forms of training [7].
Alternative validated training curricula include the robotic training network (RTN) and the Fundamental Skills of Robotic Surgery (FSRS). However, these have their own limitations, including limited international availability and, for the FSRS, the requirement for a specific RoSS surgical simulator [8,9].
While online curricula are more accessible and offer greater flexibility to participants, a perceived benefit of on-site training is the ability of supervisors to give advice on correcting technique and thus improve efficiency. However, this supposition has been challenged by a study comparing expert preceptorship with an educational video on skills acquisition [10]. Both groups showed significant improvements with training and no significant differences between groups. Thus, although direct comparisons between training curricula have not been performed, any differences between them are unlikely to be of clinical significance. The most important factor is the completion of one form of validated, proficiency-based skills curriculum prior to embarking on further robotics training. The ‘best’ programme for any given surgeon is likely that to which they have ready access.
The robotic section of the European Association of Urology (EAU), known as ERUS, ultimately published the first standardised 12-week robotic training curriculum in 2015 [11]. The curriculum included an initial e-learning module on the principles of robotics followed by operative observation and assisting, simulation-based training incorporating VR simulation, dry and wet lab activities, and supervised, modular training with a progression through increasingly complex steps as proficiency increased. All participants showed significant improvement in dVSS simulator performance throughout the course of the training programme. Face, content, and construct validity were all confirmed. However, only 80% of participants were deemed competent to independently perform robot-assisted radical prostatectomy (RARP) by their mentors at the conclusion of the programme. Independent assessors similarly scored 80% as safe and competent. Further, expert mentors felt that only 30% were capable of safely and independently completing a complex case. This once again highlights the inherent variability in the learning curve and the importance of using objective performance-based outcomes over case numbers to determine competency.
The following year, the Clinical Robotic Surgery Association (CRSA) published specific recommendations on structured training in colorectal surgery [12]. These guidelines focused on a stepwise objective assessment of competency. Basic training is divided into sequential stages and each must be passed objectively before being allowed to progress to live operating. Each operation is broken down into steps and trainees are rated as independent, requiring prompting, or unable to perform for each step. Procedures are taught in order of increasing complexity until the trainee is ultimately deemed independent and credentialed.
Numerous surgical societies have subsequently followed suit and published their own guidelines for the safe introduction of robotic surgery. However, the recommendations of the CRSA arguably represent the greatest paradigm shift toward objective, procedure-specific, proficiency-based accreditation to date.

4.2. Novel Training Modalities

Having now defined the aims of robotic training, one must examine the most effective methods for achieving proficiency while minimising risk of harm to patients. Fortunately, robotic surgery is uniquely suited to innovative new training models.

4.2.1. Virtual Reality

While VR is not a new concept, having first arisen in the 1980s, imaging technology and processing power have only recently advanced sufficiently to allow virtual reality to enter the mainstream. Driven by video game development, there are now numerous open surgical simulators based on the Oculus and HTC platforms. However, much of the skill in open surgery centres around tissue handling and manipulation that is well beyond the capabilities of current platforms, severely limiting their utility in open surgical training. Similarly, in performing laparoscopic surgery, there remains a significant component of haptic feedback and off-screen movement that influences surgical technique and is difficult to simulate. By comparison, modern robotic surgery lacks haptic feedback. While this is often described as a limitation of current robotic surgery, for the purposes of simulation and training it becomes a strength: technique is entirely guided by on-screen visual cues, and simulation therefore has the potential to reproduce the robotic operative environment most accurately.
First-generation VR simulators include the Da Vinci Skills Simulator (dVSS), Mimic Da Vinci trainer (dVT), FlexVR, the ProMIS simulator, the Simsurgery Educational Platform (SEP), and the Robotic Surgical Simulator (RoSS) [13]. These platforms focus primarily on basic skill acquisition through skills drills, though a limited degree of operative simulation is offered. Second-generation platforms include the RobotiX Mentor (RM), the RoSS II/II lite, and SimNow by Da Vinci [13,14,15]. These next-generation platforms greatly expand the utility of simulation. Photorealism is vastly improved and it is now possible to perform complete operative procedures in the simulated environment, either in guided fashion or free-hand with in-depth assessment scores on completion. RoSS also offers an alternative unique module known as Hands-on Surgical Training (HoST). Here, surgical videos from real robotic procedures are displayed on screen. Through the haptic feedback of the controls, the trainee is able to experience the exact hand movements of the operating surgeon in synchrony with the surgical footage [14].
In order to be of value, simulation must lead to a demonstrable improvement in surgical skills. This may be described across several validity domains (Table 1).
The efficacy of virtual reality simulation in robotic skills training is well established. Each of the aforementioned simulators has demonstrated face, content, and construct validity in randomised controlled trials across individuals with varying levels of surgical ability [13,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32].
In considering the superiority of one simulator over another, few head-to-head comparisons have been performed. Hertz and colleagues recently compared face and content validity between dVT, dVSS, and RM. The dVSS was found to be superior to dVT, while no other significant differences were identified between platforms [33]. This potentially reflects the ageing nature of dVT, in use since 2007, and the use of the master console with dVSS, improving simulation realism. A further comparison performed by Tanaka et al. assessed face, content, and construct validity between dVT, dVSS, and RoSS [34]. dVSS and dVT both significantly outperformed RoSS in face and content validity. Additionally, dVSS and dVT both showed good construct validity, while RoSS was unable to distinguish between novice and expert users. Further support for the dVSS simulator was published by Ahmad and colleagues, who showed that fellows trained on the dVT scored significantly lower on both pre- and post-test assessments, had a lower average curriculum score, and spent more time completing each assessment than those trained on the dVSS.
These results appear to translate to meaningful skill acquisition. Hoogenes et al. compared the performance of junior and senior trainees in performing vesicourethral anastomoses following the completion of an identical simulator-training curriculum on dVT or dVSS [35]. Junior trainees showed significantly better performance following dVSS training compared with dVT training, while senior trainees who had greater previous robotic experience showed no difference in performance between programmes. Therefore, it seems that dVSS may represent a superior training tool when compared to dVT, particularly for robotic novices. Unfortunately, there are no data comparing the next-generation simulators described above with which to make a similarly informed decision.
Of note, the majority of the aforementioned studies focus on face, content, and construct validity in performing basic skills such as object manipulation and knot tying. While such tasks are attractive for research purposes due to their simplistic nature and ease of outcome comparison between groups, they do not necessarily reflect the transfer of skills to the operating room (concurrent and predictive validity), which is of critical importance in determining the value of simulation in training. A recent meta-analysis by Schmidt et al. has examined the role of simulator skills transfer to live operating, identifying eight studies for review [36]. They concluded that VR skills acquisition is transferrable to the OR and that simulator performance on dVT and dVSS demonstrates concurrent validity, although their findings were limited by the small number and heterogeneous nature of included studies.
Furthermore, logic dictates that realistic operative simulation is likely to dramatically increase the utility of virtual reality platforms. This technology is already available and will only continue to advance in coming years as processing power and imaging technology allow for true photorealistic re-creation. To date, this has only been assessed in one randomised trial by Raison and colleagues [37]. Novice participants received no training (n = 9), basic simulation training (n = 13), or procedural simulation training (n = 13). Subsequently, each participant performed robotic radical prostatectomy on a cadaver and was assessed by blinded expert reviewers. The completion of either training model resulted in significantly higher scores than no training, demonstrating concurrent validity. Furthermore, procedural training resulted in significantly higher scores than basic training, demonstrating superiority as a training tool.

4.2.2. Animal/Cadaver Models

Of course, there are alternatives to virtual reality simulation that must also be considered in determining the most appropriate teaching tool for robotic surgery. Animal and cadaveric models are well recognised in surgical training [38,39]. These models represent the most realistic form of surgical simulation in terms of anatomical authenticity and tissue-handling properties. This is particularly true of living tissue handling in animal models, though at the expense of anatomical variance. To bridge this divide, cadaveric models may be enhanced with the re-establishment of simulated perfusion, as outlined in a systematic review by Bellier et al. [40]. However, despite their longstanding usage and validated nature, direct comparisons between animal, cadaveric, and VR simulation in MIS are severely limited. Just two papers comparing laparoscopic VR with cadaveric simulation have been published [41,42]. Both focused strongly on subjective participant satisfaction and reported a preference for cadaveric training, although VR simulation was more accepted for basic task training with junior trainees and was considered less complex than cadaveric dissection in procedural assessment.
Specific to robotics, the literature is similarly limited. Bertolo and colleagues conducted a single robotic training session on fresh-frozen human cadavers for surgical residents with limited robotic experience [43]. They found a high degree of satisfaction amongst participants, who showed subjective and objective skill improvement following the session. Furthermore, participants rated the activity as superior to both VR and porcine training sessions.
Thus, very limited evidence may suggest an advantage to cadaveric simulation, particularly for more complex procedural tasks. However, as noted above, VR technology has advanced dramatically in the past decade and the relevance of the above results when compared to more modern VR simulators is unknown.

4.2.3. Three-Dimensional Printing

Another novel alternative has been the construction of artificial models through 3D printing and polymer moulding. Several studies have recently assessed the value of 3D printed models in robotic training. These have demonstrated the face, content, and construct validity of such models [44,45,46].
As with VR, modelling techniques have greatly advanced in recent years. A significant focus has been placed on the realism of tissue reproduction, paving the way for valuable, high-fidelity surgical simulation. Models can now conduct diathermy in a realistic manner and can ‘bleed’ due to artificial perfusion with solutions of similar viscosity to blood. In addition to creating realistic models for training purposes, patient-specific models have been developed from CT reconstructions, allowing for a three-dimensional tumour assessment to assist in operative planning and even to rehearse procedures. This was first demonstrated by von Rundstedt et al., who showed that operative times and tumour characteristics were remarkably similar when comparing 3D-printed complex renal tumours to the in vivo specimens [47].
Further work was performed by Ghazi and colleagues in establishing the validity of such high-fidelity organ models. Their models allow for an ultra-realistic simulation of entire surgical procedures within a replica abdomen or pelvis [48]. Reviewers consistently rated these models as superior to porcine or cadaveric models and perfusion was considered to be a particularly important element. Experts significantly outperformed novices in performing RALPN over a variety of validated scoring systems, confirming the construct validity.
Each of these papers serves as an impressive demonstration of how far 3D-printed simulation has progressed and its great potential for teaching and operative planning. However, animal, cadaveric, and 3D models are not without other practical limitations, and VR simulators demonstrate several benefits here. Training in animals and cadavers is costly, availability is limited, and there are many ethical considerations. A complete Da Vinci robot must be available in a wet laboratory environment, and instruments must also be made available at considerable cost. As a result, access to training with these models is greatly restricted and primarily limited to dedicated teaching institutions. In the case of 3D-printed models, the requirement for a complete robot and training instruments remains a limitation, though instrument sterility and infection control issues are eliminated. Models can only be used once, they consume considerable time and resources in preparation, and they generate significant waste. The more complex and realistic the model, the greater the effort required in manufacturing. In fact, Witthaus et al. noted that each model took a skilled biomedical engineer approximately 5.5 h to construct. Therefore, commercialisation is likely required in order to develop mainstream utility. Unfortunately, this often comes at considerable additional cost, particularly in the case of medical technology. By comparison, although initial purchase costs are high, VR simulators may subsequently be made available 24 h a day, 7 days a week. They can be reused an unlimited number of times with virtually no operational costs. The main limitation is the fidelity of procedural simulation. As VR comes ever closer to achieving photorealism, it is highly likely to establish itself as the predominant training modality.
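The cost trade-off described above can be made concrete with a back-of-envelope comparison. All figures below are hypothetical illustrations (purchase price, session count, materials cost, and hourly rate are assumptions); only the approximately 5.5 h build time per model comes from the literature discussed in this section:

```python
# Illustrative cost-per-session comparison: a reusable VR simulator
# amortised over many sessions versus a single-use 3D-printed model
# with per-model materials and engineering labour. All numbers are
# hypothetical and for illustration only.

def vr_cost_per_session(purchase_cost, sessions):
    """Amortised cost of one VR training session."""
    return purchase_cost / sessions

def printed_model_cost(materials, engineer_hours, hourly_rate):
    """Total cost of one single-use 3D-printed model."""
    return materials + engineer_hours * hourly_rate

# Hypothetical figures: a $100,000 simulator used 2,000 times versus a
# model needing $150 in materials and ~5.5 h of engineering at $60/h.
print(vr_cost_per_session(100_000, 2_000))   # → 50.0 per session
print(printed_model_cost(150, 5.5, 60))      # → 480.0 per model
```

Under these assumptions the single-use model costs roughly an order of magnitude more per session, which is why the text argues that commercialisation (or VR) is needed for mainstream utility.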

4.2.4. Dual-Console Training

Following graduation from simulation, trainees must transition to in vivo console operation. During the early transition phase, the presence of an on-site preceptor is considered critical. This individual should be an experienced robotic surgeon, able to offer guidance and supervision to maximise patient safety. With the initial Da Vinci platform, this was a cumbersome process, as described by Crawford and Dwyer [49]. In addition to the provision of verbal advice, the preceptor would often act as bedside assistant, providing the ability to assist or point laparoscopically. However, the preceptor was commonly required to “break scrub” and temporarily act as console surgeon, a time-consuming and frustrating process. Fortunately, progressive technological advancement has since significantly enhanced the means of interaction between supervisor and mentee.
Hanly and colleagues first described the concept of dual-console platforms in 2006, linking two surgeon consoles together via a special-purpose connection [50]. This enabled collaborative teaching through the fluid exchange of instrument control between the trainer and trainee, in addition to providing a form of haptic feedback, allowing both surgeons to simultaneously feel the movement of the instruments. Thus, the trainee could be guided in the performance of precise tasks such as intracorporeal suturing. This teaching model was much more akin to open teaching techniques than those employed during laparoscopy and was subjectively perceived as highly advantageous. It offered vast improvements in the ability to teach robotic surgery efficiently while reducing some of the associated anxiety involved in teaching MIS. Several studies have subsequently reported the safety and training benefits of the dual-console model [51,52,53].

4.2.5. Augmented Reality

The second major advance in robotic teaching was the introduction of telestration [49]. This likely represented the first practical application of augmented reality (AR) in robotic surgery. Telestration enabled the preceptor to direct the trainee by marking the laparoscopic image on a touch-screen display. The markings were then reproduced on the surgeon console display. Alternatively, in the dual-console model, the preceptor was given the ability to control a virtual pointer on-screen in real time for the same purpose, with the added benefit of three-dimensional guidance. Therefore, the trainee could be provided with visual guidance without needing to leave the console, greatly improving communication and efficiency.
Jarc et al. further advanced telestration techniques with the introduction of “ghost tools” [54,55]. Ghost tools offered 3D telestration abilities to proctors through use of a 3D pointer, 3D hands with the ability to point or simulate grasping, and 3D instruments that could be manipulated in similar fashion to actual operating instruments. Both proctors and trainees demonstrated a preference for 3D hands and 3D instruments over traditional 2D telestration, finding them to be more effective demonstration tools despite an increase in complexity of use for the proctors. Further objective research demonstrated that proctors made good use of the enhanced manipulation abilities on offer [55].
Though not specific to teaching, there are several other current and future augmented reality technologies that offer benefit to trainee and mentor alike. First amongst these is image enhancement through the use of immunofluorescence. The technology, known as “Firefly”, involves the administration of indocyanine green (ICG) followed by filtering the endoscope image for near-infrared light wavelengths. The technique can be used to highlight underlying critical vasculature, biliary structures, and ureters, to delineate hepatic tumours, and to assess tissue perfusion during anastomosis. Since its introduction in 2011, its use has become commonplace in many surgical procedures [56].
Recently, Activ Surgical have released an augmented reality endoscope attachment that allows for a real-time assessment of tissue perfusion without the need to inject dye [57,58]. Though this is currently designed for laparoscopic surgery, there is little doubt that robotic offerings will be quick to follow, and further refinement may see this become the new standard of care.
Three-dimensional reconstructions of staging imaging can prove particularly helpful in providing a surgeon or trainee with a greater understanding of the patient’s anatomy preoperatively, enhancing surgical planning and surgical safety. Currently, the TilePro function of the Da Vinci platform allows the operator to display and manipulate this imaging intraoperatively alongside the endoscopic display to guide dissection. Intraoperative ultrasound may also be displayed on TilePro, allowing sonographically detectable lesions in solid organs to be marked out, permitting maximal preservation of critical neurovasculature while reducing the risk of an involved resection margin [58]. The next phase of augmented reality involves overlaying this information onto the surgical field in real time for an enhanced identification of critical anatomy and improved efficiency. Proof of concept has already been successfully demonstrated in urologic and hepatic surgeries, with recent work suggesting improved accuracy in resection margins for hepatic tumours compared to the current gold standard of intraoperative ultrasound [59,60,61,62,63,64]. The next major hurdle lies in adjusting for real-time tissue deformation and manipulation intraoperatively.
In the near future, augmented reality will offer even greater value through the use of artificial intelligence (AI) and machine learning. Machine learning may be supervised, whereby a human inputs labelled data into a programme to teach it to differentiate between structures, or unsupervised, whereby unlabelled data is fed into the algorithm, which must identify patterns or abnormalities itself. In reinforcement learning, the AI is set a task and gains further data points based on its successes or failures [65]. Essentially, these methods allow AI programmes to ‘learn’ to analyse data and identify the desired abnormality with increasing accuracy. In minimally invasive surgery, this technology can be applied to display an intraoperative on-screen visual representation of areas of safe dissection and “no go” zones containing underlying critical anatomy in order to improve patient safety. Such algorithms have already been applied with high levels of efficacy to laparoscopic cholecystectomy, providing on-screen guidance around the safe dissection of Calot’s triangle and the avoidance of portal structures [66,67]. While the application of AI to surgical training is still in its infancy, machine learning holds the ability to advance at a rate far outstripping human learning, and it will no doubt become an extremely powerful surgical tool.
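The distinction between the supervised and unsupervised paradigms described above can be illustrated with a minimal sketch. This is illustrative only: the two-feature “pixels”, labels, and classifiers are invented for the example, whereas real surgical-vision systems train deep neural networks on large volumes of annotated operative video.

```python
# Supervised vs. unsupervised learning on hypothetical 2-feature "pixels"
# (e.g., colour and texture scores). All data here are synthetic.

def nearest_centroid_fit(points, labels):
    """Supervised: learn one centroid per labelled class."""
    centroids = {}
    for lab in set(labels):
        cluster = [p for p, l in zip(points, labels) if l == lab]
        centroids[lab] = tuple(sum(c) / len(cluster) for c in zip(*cluster))
    return centroids

def nearest_centroid_predict(centroids, point):
    """Assign a new point to the closest learned class centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist2(centroids[lab], point))

def kmeans_1d(values, k=2, iters=20):
    """Unsupervised: group unlabelled values into k clusters (k=2 here)."""
    centres = [min(values), max(values)]  # simple initialisation for k=2
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda i: abs(v - centres[i]))].append(v)
        centres = [sum(g) / len(g) if g else centres[i]
                   for i, g in enumerate(groups)]
    return sorted(centres)

# Supervised example: labelled training data ("vessel" vs. "safe" tissue).
train = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
labels = ["vessel", "vessel", "safe", "safe"]
model = nearest_centroid_fit(train, labels)
print(nearest_centroid_predict(model, (0.85, 0.75)))  # classified as "vessel"

# Unsupervised example: two groups are found without any labels.
print(kmeans_1d([0.1, 0.15, 0.2, 0.8, 0.85, 0.9]))
```

The supervised model can only recognise the classes it was shown; the unsupervised one discovers structure in the data but cannot name it, which is why clinical systems combine expert labelling with large-scale data collection.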

4.2.6. Telementoring

Since the advent of robotic surgery, there has been great interest in the potential of telesurgery to revolutionise healthcare. The leader–follower (master–slave) design of the Da Vinci platform is well suited to telesurgery, whereby the operating surgeon controls the robotic instruments from a remote location. Potential applications include the delivery of healthcare to poorly serviced areas or adverse environments (e.g., warzones), the provision of highly subspecialised services from a central “institute of excellence” without requiring the patient or surgeon to travel, and, in the age of COVID-19, the reduction of surgeon exposure to transmissible diseases [68]. The first telerobotic cholecystectomy was performed in 2001 [69]. However, adoption and practical application have been limited by concerns over network stability, latency times, medicolegal issues, cybersecurity threats, and establishment costs [68]. Optimal latencies are considered to be below 200–300 ms, while latencies greater than 700–1500 ms make surgical performance challenging and likely unsafe [70,71,72,73].
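The latency thresholds quoted above lend themselves to a simple triage check that a telesurgery platform might run before permitting remote operating. The sketch below is illustrative only; the band boundaries are taken from the ranges cited, which vary between studies, and the function name is hypothetical.

```python
def latency_band(round_trip_ms):
    """Classify a measured round-trip latency against thresholds reported
    in the telesurgery literature (boundaries approximate and study-dependent)."""
    if round_trip_ms <= 300:
        return "optimal"    # at or below the ~200-300 ms optimum
    if round_trip_ms < 700:
        return "degraded"   # workable, but performance noticeably impaired
    return "unsafe"         # >700-1500 ms: performance challenging, likely unsafe

print(latency_band(150))  # optimal
print(latency_band(450))  # degraded
print(latency_band(900))  # unsafe
```

In practice, a platform would sample latency continuously rather than once, since network jitter can move a link between bands mid-procedure.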
Fortunately, these limitations are less restrictive in the case of telementoring, and this technology has been utilised to good effect in robotic surgical training. A systematic review by Bilgic et al. has confirmed the safety and efficacy of telementoring in surgery [74]. The authors examined papers comparing on-site mentoring with telementoring and found eleven studies, comprising 453 cases, suitable for inclusion. No differences in perioperative complication rates were encountered in any study. A total of 90% of studies reported comparable operating times between groups, with one study showing a longer operating time attributable to telementoring. Technical difficulties were encountered in 3% of telementored cases. Subjective analysis of trainee satisfaction revealed no difference between on-site mentoring and telementoring, while objective improvements in operating times across the learning curve were also comparable between groups.
Subsequent publications by Papalois et al. and Artsen et al. have lent further support to telementoring. Papalois and colleagues developed a surgical curriculum delivered in mixed reality through use of Microsoft’s HoloLens [75]. The curriculum focused on surgical decision-making, operative anatomy, and expert “tips and tricks”. A total of 93% of students and 100% of tutors felt that virtual mentorship would be of use in future surgical training, while 73% agreed or strongly agreed that their understanding of anatomy and decision-making rationale was improved by the module. Artsen et al. compared a series of teleproctored robotic gynaecologic cases with historical controls performed with in-person proctoring. They found high satisfaction rates amongst surgeons and no change in perioperative complication rates [76].
Several new commercial platforms have recently become available that significantly improve on existing telementoring software. These include Orpheus Medical (recently acquired by Intuitive Surgical, Sunnyvale, CA, USA), Proximie, and Reacts (recently acquired by Philips, Amsterdam, The Netherlands) [77,78,79]. Each of these platforms allows for real-time collaboration and consultation for the purpose of live telementoring and utilises augmented reality overlays for advanced telestration and annotation.
Telementoring therefore offers great potential. As the technology matures, it may be utilised at any ability level, from residents receiving didactic teaching and supervised simulation training to a specialist surgeon consulting peers regarding a particular intraoperative quandary. This will aid in the standardisation of gold-standard surgical techniques globally by vastly increasing access to world leaders in any given subspecialty. This is particularly important in an age where new procedures and technologies are in constant development. In order to maintain currency, there is a strong requirement for the establishment of global surgical collaborative networks.

4.2.7. Surgical Videos

In the past two decades, online data-transfer rates have increased dramatically, greatly improving the ability to share high-definition surgical footage globally. This footage offers a valuable learning tool, particularly in the case of robotic surgery. As discussed previously, due to the haptic feedback limitations of current robotic platforms, the surgeon is guided entirely by on-screen visual cues. Observers therefore have access to the same sensory inputs as the operating surgeon, which may be capitalised upon for learning purposes.
Video-based learning has become an extremely common method of surgical learning. In a recent survey of residents and surgeons, Mota et al. found that 98.6% of respondents had made use of videos in preparation for surgery [80]. Furthermore, 57% noted that surgical video was their preferred method of surgical preparation. This was particularly true of younger, less experienced respondents.
Video material may be used at different stages of teaching. Demonstration videos can be effective in teaching simulation skills for robotic surgery, as shown by Shim et al. [10]. Educational video was shown to be as effective as on-site mentoring in mastering robotic vesicourethral anastomosis, while both learning techniques were superior to self-directed learning. Video-based learning has also shown superiority when compared to hands-on practical training in the case of laparoscopic cholecystectomy. Pape-Koehler and colleagues performed a randomised controlled trial comparing a multimedia, video-based learning module with practical training or no training [81]. Participants who underwent multimedia-based training showed a significant improvement in surgical performance compared to both their practically trained and untrained colleagues.
In the near future, surgical video may be further enhanced through immersive footage that incorporates haptic feedback. Pandya and colleagues developed a novel recording system that synchronised robotic arm and surgeon–console interactions with operative footage [82]. The material could then be replayed to an observer at the surgeon console, allowing them to feel the movements of the operating surgeon’s hands to gain a deeper appreciation of correct operative technique and shorten the learning curve associated with complex procedures. This concept has already been applied to simulation training in the form of the HoST platform on the RoSS virtual simulator described earlier [14].
Huynh and colleagues offer a word of caution in the application of surgical videos [83]. The authors reviewed the most viewed 50 YouTube videos relating to MIS inguinal hernia repair and rated their surgical performance in comparison to the “9 commandments” of safe technique as defined by Daes and Felix [84]. Only 16% of videos demonstrated all nine commandments, with significant differences between laparoscopic and robotic approaches. Furthermore, 46% of videos were considered to display unsafe techniques through dangerous mesh fixation, risks to critical structures, or inappropriate tissue handling.
To address the variable quality of online material, an international multidisciplinary consensus group have recently published guidelines on the appropriate reporting of educational videos [85]. These guidelines comprise 36 recommendations, including requirements for a video introduction and information about the authors, case presentation and staging imaging, robotic setup, procedural demonstration with or without stepwise teaching and telestration of relevant anatomy, a review of postoperative outcomes, and confirmation of high-quality footage. This standardised approach to reporting aims to improve the general quality of online material as a teaching resource. Knowledge of these guidelines will also help a prospective student to select appropriate videos from which to learn.
Several comprehensive clinical media platforms are now available that facilitate data sharing in the surgical community. Examples include Orpheus, Proximie and Touch Surgery Enterprise. These platforms simplify the process of video storage, editing, and retrieval, while offering rapid, secure, de-identified sharing of material between colleagues. Furthermore, they each possess AI algorithms for the automatic segmentation and labelling of surgical footage [77,78,86]. This further improves the utility of videos through rapid access to relevant sections of a procedure.

4.2.8. Efficacy of Novel Training Modalities Compared to Traditional Modalities

There are no trials comparing the efficacy of training on surgical robots using cadaveric or more traditional models versus virtual reality programmes. The use of cadavers for surgical training is now being employed to teach open cholecystectomy, as laparoscopic cholecystectomy has greatly reduced the training opportunity [87]. The use of cadavers, as mentioned, can be an important teaching tool for robotic technique, but it depends on availability and affordability. The effectiveness of virtual reality programmes is dependent upon how closely the simulation approximates reality. The future likely lies in the melding of biotissue models and 3D-printed organs within a simulation scenario to mimic reality. Given the rapid implementation of novel digital teaching technologies, this gap in the literature remains an important one to bridge, in order to ascertain whether they are in fact more advantageous than traditional modalities and, if so, which facets of the learning curve they accelerate and augment [88,89].

4.3. Technical Assessment and Tracked Metrics

As previously described, most modern robotic surgery curricula focus on an objective assessment of proficiency prior to graduation. This is critical to patient safety. Technical skills assessments from operative video have been directly associated with perioperative complications, morbidity, and mortality [90,91]. Assessments in robotic surgery publications have most commonly been delivered through validated scoring systems by mentors or independent reviewers. Common examples include GEARS, R-OSATS, and PACE scores [9,92,93]. However, this process is time intensive and relies on considerable goodwill on the part of assessors. Therefore, several potential alternatives have been explored. Crowdsourcing is one such example. This technique utilises the ready availability of large numbers of individuals in an online forum. Non-surgical crowd workers are briefly trained in video evaluation through an online module and then score performances using a rating scale such as GEARS. Crowdsourcing results in much more rapid responses than the use of expert reviewers at minimal cost, and studies have shown good correlation between scores by expert reviewers and crowd workers, even in the assessment of complex procedures such as prostatectomy [94,95,96,97].
Automated performance metrics (APMs) represent an increasingly valuable means of assessment. APMs are a set of data points relating to various aspects of an operation that are routinely collected throughout a robotic procedure. They are readily available with minimal effort or cost. For the purposes of training, each surgical simulation platform automatically assesses user data and provides performance scores following the completion of a task, as compared to expert benchmarks. In order to be of value, these metrics must be shown to hold practical utility. Several studies on the validity of simulation have shown that automated metrics correlate highly with expert GEARS scores in simulation exercises. Importantly, tracked metrics were also shown to correlate well with subsequent intraoperative performance assessment [98].
Chen et al. have also demonstrated the ability of intraoperative APMs to differentiate between expert and novice robotic surgeons in the performance of a vesicourethral anastomosis during RARP [99]. By combining the APM data with clinicopathologic characteristics in a deep learning algorithm, they were subsequently able to accurately predict postoperative continence rates [100]. This supports the value of APMs and deep learning algorithms in assessing surgical quality. Multiple subsequent studies have shown deep learning algorithms to be highly accurate in predicting surgical skill level when compared with structured assessments by expert reviewers [101].
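As a schematic of how APMs can feed a skill classifier, the following trains a simple logistic model on two synthetic metrics (normalised instrument path length and completion time). The features, data, and model are invented for illustration and bear no relation to the cited studies, which used far richer APM sets and deep networks.

```python
import math

# Toy logistic classifier over two automated performance metrics (APMs).
# Shorter, more economical movement is typical of experts; data are synthetic.

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit weights by stochastic gradient descent on the log-loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(expert)
            err = p - yi                      # gradient of log-loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return "expert" if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else "novice"

# (path_length, completion_time), both normalised; label 1 = expert, 0 = novice.
X = [(0.2, 0.3), (0.3, 0.2), (0.8, 0.9), (0.9, 0.8)]
y = [1, 1, 0, 0]
w, b = train_logistic(X, y)
print(predict(w, b, (0.25, 0.25)))  # expert-like metrics
print(predict(w, b, (0.85, 0.85)))  # novice-like metrics
```

The appeal of this approach is that the input metrics are collected automatically during every robotic case, so assessment scales without reviewer effort; the open question is validating such classifiers against patient outcomes rather than expert opinion alone.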
Currently, this remains a novel technology, and how best to incorporate it into proficiency assessment has yet to be clearly defined. Nevertheless, APMs and deep learning algorithms are an area of burgeoning research. Given the rapid availability of results, cost efficacy, and accuracy, it is highly likely that these will ultimately become the primary method of proficiency-based assessment for surgical trainees and will have a significant role in hospital accreditation and credentialing processes.

4.4. Artificial Intelligence, Machine Learning and Big Data

Several major device manufacturers have identified the potential of tracked metrics and machine learning. Previously discussed media platforms such as Orpheus, Proximie, and Touch Surgery currently offer their users procedural analysis based on APMs. In their current states, the information produced is useful for benchmarking and identifying areas for individual improvement. However, their true potential lies in future applications. The routine recording and upload of cases will rapidly lead to vast troves of data, known as “big data”. In combination with electronic medical records, these can be analysed by deep learning algorithms such as convolutional neural networks to identify associations between surgical technique and outcomes that cannot be appreciated by traditional research methods or simple observation. As a result, trainees and surgeons will be provided with a constant source of feedback on subtle areas for self-improvement, to the benefit of their patients.
Optimal patient management will also become far more standardised globally, based on best evidence from hundreds of thousands of cases, overcoming many of the current limitations of surgical research. While big data is already utilised in the production of treatment algorithms such as CeDAR or NELA, these use only a handful of data points to guide clinical decision-making. Deep learning algorithms will be able to advise on the most appropriate treatment for a patient based on innumerable individualised data points in a far more personalised manner and will have a profound effect on the way we practise surgery.
An additional significant benefit arises in cost and time savings. Traditional research methods such as randomised controlled trials frequently take years to produce clinically relevant conclusions at great cost and often with limited generalisability. The field of surgery is rapidly expanding, with procedures undergoing constant evolution. Traditional clinical trial models can no longer keep up with this rate of advancement. By comparison, machine learning possesses the ability to produce clinically relevant outcome data at a rate that keeps pace with surgical advancement, guiding clinical practice in a current and meaningful manner.
The storage of big data does raise ethical and medicolegal issues around the potential identification of the underperforming surgeon. It is therefore of critical importance that the information produced is utilised for self-improvement purposes alone, rather than in any disciplinary manner. Deep learning algorithms are entirely dependent on the purity of input data to produce accurate results. The fear that participation may result in negative repercussions could otherwise lead to data contamination through inaccurate data input or data omission. This would severely compromise the reliability of results and potentially even lead to patient harm. Used as a self-improvement tool, the data afford all surgeons the opportunity to improve their own practice, providing widespread benefits to patients.

4.5. Barriers to Implementing New Technologies and Future Horizons

The barriers to both the acceptance and delivery of training modalities in robotic surgery mirror the issues encountered when laparoscopic techniques were introduced to the surgical community over 25 years ago. The availability of skilled trainers, while an issue in the early years of this century, is no longer a problem. Access to a training platform can be an issue depending on the number of robots in an institution. The use of benchtop trainers is a cheap and effective way of developing operator skills, but advancing to simulation of surgical procedures using cadavers is expensive, where cadavers are available at all. As mentioned, 3D models are able to simulate operative scenarios acceptably and will improve as substrate development increasingly mimics human tissue characteristics [88]. Until that time, dissecting 3D models will remain a basic skill-learning exercise. Animal models face ethical issues, are banned in some countries, and require a robot dedicated to training that is not used for human surgery. The optimum way to train is by use of a second console assisting an experienced surgeon. Whether recording the console movements of an expert for a trainee to experience will be effective in shortening the trainee’s learning curve is an intriguing idea. Perhaps in the future an AI-enhanced surgical robot will teach the trainee as they perform the procedure, as the “expert” will be in the machine.
The future roadmap for the adoption of advanced technologies in surgical skills training is poised to transform medical education and practice. Virtual reality (VR) and augmented reality (AR) are expected to see widespread integration within the next 5–10 years, offering immersive, risk-free environments for trainees to practise complex procedures [102]. Similarly, 3D imaging and 3D printing are anticipated to become standard tools for preoperative planning and the creation of patient-specific anatomical models, enhancing personalisation and precision training [103]. Dual-console training systems, already in use, will become more prevalent within the next 5–10 years, enabling collaborative learning and skill transfer between surgeons. Collectively, these technologies are revolutionising, and will continue to revolutionise, surgical education by providing scalable, personalised, and accessible training solutions, ultimately improving patient outcomes.

5. Conclusions

This review has provided a comprehensive outline of the current and future status of surgical training in the robotic and digital era. Teaching methodology has recently undergone significant evolution from traditional apprenticeship models as we look to adapt to ever-increasing rates of technological advancement. Big data, AI, and machine learning are on the precipice of revolutionising all aspects of surgical practice with far-reaching implications.
The future procedural surgical training model will likely commence with recorded didactic teaching and demonstrations by expert surgeons. International collaboration and deep learning will provide a better appreciation of the gold-standard approach to be taught. Trainees will complete basic simulation training followed by procedural simulation with constant AI guidance around safe planes, structures to avoid, and an overall detailed grading of the quality of the procedure upon conclusion. Procedures will be repeated as many times as necessary until proficiency is achieved. In-person tutors will not be required, saving significantly on costs and resources. Additionally, innovators will be able to trial new approaches in the simulated surgical environment without risk to patients. Surgical exposure will no longer be “pot luck”, with experience dependent on the procedures coming in the door. Rather, surgeons and trainees will have access to a complete database of simulated minimally invasive procedures, and procedural simulation certification will likely become a requisite for graduation to live operating in order to maintain rigorous patient safety standards. This will be important not just for the new surgical trainee, but also for the established surgeon adopting new techniques or technologies. The profession is now advancing at such a rate that constant re-training will be required throughout each of our careers.
Previously, adopting new techniques into practice has been hampered by the considerable associated learning curve. Advances in AI will facilitate this in future. Learning curves will be shortened through the increased utilisation of AI in performing guided surgical procedures. Robotic platforms will increase in autonomy as machine learning rapidly becomes more sophisticated, and therefore training requirements will no longer slow innovation.
Surgeons should aim to be at the forefront of this revolution for the ultimate benefit of our patients. In many countries, public access to robotic simulators and operating consoles remains limited, creating a training bottleneck. This must be overcome through collaboration between surgical training bodies and device manufacturers. Governance measures should be implemented for the safe introduction of this exciting technology.

Author Contributions

Conceptualisation, S.K., B.J., M.G., P.J.H. and M.T.; Methodology, S.K., B.J. and M.G.; Software, S.K. and M.G.; Validation, S.K., B.J. and M.G.; Formal Analysis, S.K. and M.G.; Investigation, S.K. and B.J.; Resources, S.K. and M.G.; Data Curation, S.K. and B.J.; Writing original draft, S.K., B.J. and M.G.; Writing review and editing, M.G., P.J.H. and M.T.; Visualization, S.K. and M.G.; Supervision, P.J.H. and M.T.; Project administration, M.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zorn, K.C.; Gautam, G.; Shalhav, A.L.; Clayman, R.V.; Ahlering, T.E.; Albala, D.M.; Lee, D.I.; Sundaram, C.P.; Matin, S.F.; Castle, E.P.; et al. Training, credentialing, proctoring and medicolegal risks of robotic urological surgery: Recommendations of the society of urologic robotic surgeons. J. Urol. 2009, 182, 1126–1132. [Google Scholar] [CrossRef] [PubMed]
  2. Da Vinci Residency/Fellowship Training. 2020. Available online: https://www.davincisurgerycommunity.com (accessed on 18 January 2022).
  3. Soomro, N.A.; Hashimoto, D.A.; Porteous, A.J.; Ridley, C.J.A.; Marsh, W.J.; Ditto, R.; Roy, S. Systematic review of learning curves in robot-assisted surgery. BJS Open 2020, 4, 27–44. [Google Scholar] [CrossRef] [PubMed]
  4. Herron, D.M.; Marohn, M.; SAGES-MIRA Robotic Surgery Consensus Group. A consensus document on robotic surgery. Surg. Endosc. 2008, 22, 313–325. [Google Scholar] [CrossRef] [PubMed]
  5. Lee, J.Y.; Mucksavage, P.; Sundaram, C.P.; McDougall, E.M. Best practices for robotic surgery training and credentialing. J. Urol. 2011, 185, 1191–1197. [Google Scholar] [CrossRef]
  6. Smith, R.; Patel, V.; Satava, R. Fundamentals of robotic surgery: A course of basic robotic surgery skills based upon a 14-society consensus template of outcomes measures and curriculum development. Int. J. Med. Robot. Comput. Assist. Surg. 2014, 10, 379–384. [Google Scholar] [CrossRef]
  7. Satava, R.M.; Stefanidis, D.; Levy, J.S.; Smith, R.; Martin, J.R.; Monfared, S.; Timsina, L.R.; Darzi, A.W.; Moglia, A.; Brand, T.C. Proving the effectiveness of the Fundamentals of Robotic Surgery (FRS) skills curriculum: A single-blinded, multispecialty, multi-institutional randomised control trial. Ann. Surg. 2020, 272, 384–392. [Google Scholar] [CrossRef]
  8. Stegemann, A.P.; Ahmed, K.; Syed, J.R.; Rehman, S.; Ghani, K.; Autorino, R.; Sharif, M.; Rao, A.; Shi, Y.; Wilding, G.E. Fundamental skills of robotic surgery: A multi-institutional randomized controlled trial for validation of a simulation-based curriculum. Urology 2013, 81, 767–774. [Google Scholar] [CrossRef]
  9. Siddiqui, N.Y.; Galloway, M.L.; Geller, E.J.; Green, I.C.; Hur, H.C.; Langston, K.; Pitter, M.C.; Tarr, M.E.; Martino, M.A. Validity and reliability of the robotic objective structured assessment of technical skills. Obstet. Gynecol. 2014, 123, 1193–1199. [Google Scholar] [CrossRef]
  10. Shim, J.S.; Kim, J.Y.; Pyun, J.H.; Cho, S.; Oh, M.M.; Kang, S.H.; Lee, J.G.; Kim, J.J.; Cheon, J.; Kang, S.G. Comparison of effective teaching methods to achieve skill acquisition using a robotic virtual reality simulator: Expert proctoring versus an educational video versus independent training. Medicine 2018, 97, e12569. [Google Scholar] [CrossRef]
  11. Volpe, A.; Ahmed, K.; Dasgupta, P.; Ficarra, V.; Novara, G.; Van Der Poel, H.; Mottrie, A. Pilot validation study of the European Association of Urology robotic training curriculum. Eur. Urol. 2015, 68, 292–299. [Google Scholar] [CrossRef]
  12. Petz, W.; Spinoglio, G.; Choi, G.S.; Parvaiz, A.; Santiago, C.; Marecik, S.; Giulianotti, P.C.; Bianchi, P.P. Structured training and competence assessment in colorectal robotic surgery. Results of a consensus experts round table. Int. J. Med. Robot. Comput. Assist. Surg. 2016, 12, 634–641. [Google Scholar] [CrossRef] [PubMed]
  13. MacCraith, E.; Forde, J.C.; Davis, N.F. Robotic simulation training for urological trainees: A comprehensive review on cost, merits and challenges. J. Robot. Surg. 2019, 13, 371–377. [Google Scholar] [CrossRef] [PubMed]
  14. RoSS II; Simulated Surgical Systems LLC: San Jose, CA, USA, 2020.
  15. Da Vinci SimNow. Da Vinci SimNow, 2021. SimNow|Da Vinci|Intuitive. Available online: https://www.intuitive.com/en-us/products-and-services/da-vinci/digital/simnow (accessed on 19 January 2022).
  16. Hung, A.J.; Zehnder, P.; Patil, M.B.; Cai, J.; Ng, C.K.; Aron, M.; Gill, I.S.; Desai, M.M. Face, content and construct validity of a novel robotic surgery simulator. J. Urol. 2011, 186, 1019–1024. [Google Scholar] [CrossRef] [PubMed]
  17. Kelly, D.C.; Margules, A.C.; Kundavaram, C.R.; Narins, H.; Gomella, L.G.; Trabulsi, E.J.; Lallas, C.D. Face, content and construct validity of the da Vinci Skills Simulator. J. Urol. 2012, 79, 1068–1072. [Google Scholar] [CrossRef]
  18. Liss, M.A.; Abdelshehid, C.; Quach, S.; Lusch, A.; Graversen, J.; Landman, J.; McDougall, E.M. Validation, correlation and comparison of the da Vinci Trainer and the da Vinci surgical skills simulator using the Mimic software for urologic robotic surgical education. J. Endourol. 2012, 26, 1629–1634. [Google Scholar] [CrossRef]
  19. Lendvay, T.S.; Casale, P.; Sweet, R.; Peters, C. Initial validation of a virtual-reality robotic simulator. J. Robot. Surg. 2008, 2, 145–149. [Google Scholar] [CrossRef]
  20. Kenney, P.A.; Wszolek, M.F.; Gould, J.J.; Libertino, J.A.; Moinzadeh, A. Face, content and construct validity of dV-trainer: A novel virtual reality simulator for robotic surgery. J. Urol. 2009, 73, 1288–1292. [Google Scholar] [CrossRef]
  21. Sethi, A.S.; Peine, W.J.; Mohammadi, Y.; Sundaram, C.P. Validation of a novel virtual reality simulator. J. Endourol. 2009, 23, 503–508. [Google Scholar] [CrossRef]
  22. Perrenot, C.; Perez, M.; Tran, N.; Jehl, J.P.; Felblinger, J.; Bresler, L.; Hubert, J. The virtual reality simulator dV-Trainer is a valid assessment tool for robotic surgical skills. J. Surg. Endosc. 2012, 26, 2587–2593. [Google Scholar] [CrossRef]
  23. Korets, R.; Mues, A.C.; Graversen, J.A.; Gupta, M.; Benson, M.C.; Cooper, K.L.; Landman, J.; Badani, K.K. Validating the use of the Mimic dV-Trainer for robotic surgery skill acquisition among urology residents. J. Urol. 2011, 78, 1326–1330. [Google Scholar] [CrossRef]
  24. Lee, J.Y.; Mucksavage, P.; Kerbl, D.C.; Huynh, V.B.; Etafy, M.; McDougall, E.M. Validation study of a virtual reality robotic simulator—Role as an assessment tool? J. Urol. 2012, 187, 998–1002. [Google Scholar] [CrossRef] [PubMed]
  25. Schreuder, H.W.R.; Persson, J.E.U.; Wolswijk, R.G.H. Validation of a novel virtual reality simulator for robotic surgery. Sci. World J. 2014, 2014, 507076. [Google Scholar] [CrossRef] [PubMed]
  26. Seixas-Mikelus, S.A.; Kesavadas, T.; Srimathveeravalli, G.; Chandrasekhar, R.; Wilding, G.E.; Guru, K.A. Face validation of a novel robotic surgical simulator. J. Urol. 2010, 76, 357–360. [Google Scholar] [CrossRef] [PubMed]
  27. Chowriappa, A.J.; Shi, Y.; Raza, S.J.; Ahmed, K.; Stegemann, A.; Wilding, G.; Kaouk, J.; Peabody, J.O.; Menon, M.; Hassett, J.M. Development and validation of a composite scoring system for robot-assisted surgical training—The Robotic Skills Assessment Score. J. Surg. Res. 2013, 185, 561–569. [Google Scholar] [CrossRef]
  28. Seixas-Mikelus, S.A.; Stegemann, A.P.; Kesavadas, T.; Srimathveeravalli, G.; Sathyaseelan, G.; Chandrasekhar, R.; Wilding, G.E.; Peabody, J.O.; Guru, K.A. Content validation of a novel robotic surgical simulator. BJU Int. 2011, 107, 1130–1135. [Google Scholar] [CrossRef]
  29. Hung, A.J.; Patil, M.B.; Zehnder, P.; Cai, J.; Ng, C.K.; Aron, M.; Gill, I.S.; Desai, M.M. Concurrent and predictive validation of a novel robotic surgery simulator: A prospective randomized study. J. Urol. 2012, 187, 630–637. [Google Scholar] [CrossRef]
  30. Colaco, M.; Balica, A.; Su, D.; Barone, J. Initial experiences with RoSS surgical simulator in residency training: A validity and model analysis. J. Robot. Surg. 2012, 7, 71–75. [Google Scholar] [CrossRef]
  31. Finnegan, K.T.; Meraney, A.M.; Staff, I.; Shichman, S.J. da Vinci skills simulator construct validation study: Correlation of prior robotic experience with overall score and time score simulator performance. J. Urol. 2012, 80, 330–335. [Google Scholar] [CrossRef]
  32. Lerner, M.A.; Ayalew, M.; Peine, W.J.; Sundaram, C.P. Does training on a virtual reality robotic simulator improve performance on the da Vinci surgical system? J. Endourol. 2010, 24, 467–472. [Google Scholar] [CrossRef]
  33. Hertz, A.M.; George, E.I.; Vaccaro, C.M.; Brand, T.C. Head-to-head comparison of three virtual-reality robotic surgery simulators. JSLS J. Soc. Laparoendosc. Surg. 2018, 22, e2017.00081. [Google Scholar] [CrossRef]
  34. Tanaka, A.; Graddy, C.; Simpson, K.; Perez, M.; Truong, M.; Smith, R. Robotic surgery simulation validity and usability comparative analysis. Surg. Endosc. 2016, 30, 3720–3729. [Google Scholar] [CrossRef] [PubMed]
  35. Hoogenes, J.; Wong, N.; Al-Harbi, B.; Kim, K.S.; Vij, S.; Bolognone, E.; Quantz, M.; Guo, Y.; Shayegan, B.; Matsumoto, E.D. A randomized comparison of two robotic virtual reality simulators and evaluation of trainees’ skills transfer to a simulated robotic urethrovesical anastomosis task. Urology 2018, 111, 110–115. [Google Scholar] [CrossRef] [PubMed]
  36. Schmidt, M.W.; Köppinger, K.F.; Fan, C.; Kowalewski, K.F.; Schmidt, L.P.; Vey, J.; Proctor, T.; Probst, P.; Bintintan, V.V.; Müller-Stich, B.P. Virtual reality simulation in robot-assisted surgery: Meta-analysis of skill transfer and predictability of skill. BJS Open 2021, 5, zraa066. [Google Scholar] [CrossRef] [PubMed]
  37. Raison, N.; Harrison, P.; Abe, T.; Aydin, A.; Ahmed, K.; Dasgupta, P. Procedural virtual reality simulation training for robotic surgery: A randomised controlled trial. Surg. Endosc. 2021, 35, 6897–6902. [Google Scholar] [CrossRef]
  38. Costello, D.M.; Huntington, I.; Burke, G.; Farrugia, B.; O’Connor, A.J.; Costello, A.J.; Thomas, B.C.; Dundee, P.; Ghazi, A.; Corcoran, N. A review of simulation training and new 3D computer-generated synthetic organs for robotic surgery education. J. Robot. Surg. 2022, 16, 749–763. [Google Scholar] [CrossRef]
  39. James, H.K.; Chapman, A.W.; Pattison, G.T.R.; Griffin, D.R.; Fisher, J.D. Systematic review of the current status of cadaveric simulation for surgical training. J. Br. Surg. 2019, 106, 1726–1734. [Google Scholar] [CrossRef]
  40. Bellier, A.; Chanet, A.; Belingheri, P.; Chaffanjon, P. Techniques of cadaver perfusion for surgical training: A systematic review. Surg. Radiol. Anat. 2018, 40, 439–448. [Google Scholar] [CrossRef]
  41. Sharma, M.; Horgan, A. Comparison of fresh-frozen cadaver and high-fidelity virtual reality simulator as methods of laparoscopic training. World J. Surg. 2012, 36, 1732–1737. [Google Scholar] [CrossRef]
  42. Leblanc, F.; Champagne, B.J.; Augestad, K.M.; Neary, P.C.; Senagore, A.J.; Ellis, C.N.; Delaney, C.P.; Group, C.S.T. A comparison of human cadaver and augmented reality simulator models for straight laparoscopic colorectal skill acquisition training. J. Am. Coll. Surg. 2010, 211, 250–255. [Google Scholar] [CrossRef]
  43. Bertolo, R.; Garisto, J.; Dagenais, J.; Sagalovich, D.; Kaouk, J.H. Single session of robotic human cadaver training: The immediate impact on urology residents in a teaching hospital. J. Laparoendosc. Adv. Surg. Tech. A 2018, 28, 1157–1162. [Google Scholar] [CrossRef]
  44. Shee, K.; Koo, K.; Wu, X.; Ghali, F.M.; Halter, R.J.; Hyams, E.S. A novel ex vivo trainer for robotic vesicourethral anastomosis. J. Robot. Surg. 2020, 14, 21–27. [Google Scholar] [CrossRef] [PubMed]
  45. Johnson, B.A.; Timberlake, M.; Steinberg, R.L.; Kosemund, M.; Mueller, B.; Gahan, J.C. Design and validation of a low cost, high fidelity model for urethrovesical anastomosis in radical prostatectomy. J. Endourol. 2019, 33, 331–336. [Google Scholar] [CrossRef] [PubMed]
  46. Monda, S.M.; Weese, J.R.; Anderson, B.G.; Vetter, J.M.; Venkatesh, R.; Du, K.; Andriole, G.L.; Figenshau, R.S. Development and validity of a silicone renal tumor model for robotic partial nephrectomy training. Urology 2018, 114, 114–120. [Google Scholar] [CrossRef] [PubMed]
  47. von Rundstedt, F.C.; Scovell, J.M.; Agrawal, S.; Zaneveld, J.; Link, R.E. Utility of patient-specific silicone renal models for planning and rehearsal of complex tumour resections prior to robot-assisted laparoscopic partial nephrectomy. BJU Int. 2017, 119, 598–604. [Google Scholar] [CrossRef]
  48. Ghazi, A.; Melnyk, R.; Hung, A.J.; Collins, J.; Ertefaie, A.; Saba, P.; Gurung, P.; Frye, T.; Rashid, H.; Wu, G.; et al. Multi-institutional validation of a perfused robot-assisted partial nephrectomy procedural simulation platform utilizing clinically relevant objective metrics of simulators (CROMS). BJU Int. 2021, 127, 645–653. [Google Scholar] [CrossRef]
  49. Crawford, D.L.; Dwyer, A.M. Evolution and literature review of robotic general surgery resident training 2002–2018. Updates Surg. 2018, 70, 363–368. [Google Scholar] [CrossRef]
  50. Hanly, E.J.; Miller, B.E.; Kumar, R.; Hasser, C.J.; Coste-Maniere, E.; Talamini, M.A.; Aurora, A.A.; Schenkman, N.S.; Marohn, M.R. Mentoring console improves collaboration and teaching in surgical robotics. J. Laparoendosc. Adv. Surg. Tech. A 2006, 16, 445–451. [Google Scholar] [CrossRef]
  51. Breen, M.T. Expanded robotic training and education of residents and faculty surgeons using dual console robotic platforms utilizing aviation safety trans cockpit responsibility gradient comparisons. J. Minim. Invasive Gynecol. 2014, 21, S5. [Google Scholar] [CrossRef]
  52. Smith, A.L.; Scott, E.M.; Krivak, T.C.; Olawaiye, A.B.; Chu, T.; Richard, S.D. Dual-console robotic surgery: A new teaching paradigm. J. Robot. Surg. 2013, 7, 113–118. [Google Scholar] [CrossRef]
  53. Morgan, M.S.; Shakir, N.A.; Garcia-Gil, M.; Ozayar, A.; Gahan, J.C.; Friedlander, J.I.; Roehrborn, C.G.; Cadeddu, J.A. Single-versus dual-console robot assisted radical prostatectomy: Impact on intraoperative and postoperative outcomes in a teaching institution. World J. Urol. 2015, 33, 781–786. [Google Scholar] [CrossRef]
  54. Jarc, A.M.; Shah, S.H.; Adebar, T.; Hwang, E.; Aron, M.; Gill, I.S.; Hung, A.J. Beyond 2D telestration: An evaluation of novel proctoring tools for robot-assisted minimally invasive surgery. J. Robot. Surg. 2016, 10, 103–109. [Google Scholar] [CrossRef] [PubMed]
  55. Jarc, A.M.; Stanley, A.A.; Clifford, T.; Gill, I.S.; Hung, A.J. Proctors exploit three-dimensional ghost tools during clinical-like training scenarios: A preliminary study. World J. Urol. 2017, 35, 957–965. [Google Scholar] [CrossRef] [PubMed]
  56. Da Vinci Vision. Da Vinci Vision, 2021. Available online: https://www.intuitive.com/en-us/products-and-services/da-vinci/vision (accessed on 20 January 2022).
  57. ActivSight by Activ Surgical. Available online: https://www.activsurgical.com/ (accessed on 21 January 2022).
  58. Gandaglia, G.; Schatteman, P.; De Naeyer, G.; D’Hondt, F.; Mottrie, A. Novel technologies in urologic surgery: A rapidly changing scenario. Curr. Urol. Rep. 2016, 17, 19. [Google Scholar] [CrossRef] [PubMed]
  59. Abdallah, M.; Espinel, Y.; Calvet, L.; Pereira, B.; Le Roy, B.; Bartoli, A.; Buc, E. Augmented reality in laparoscopic liver resection evaluated on an ex-vivo animal model with pseudo-tumours. Surg. Endosc. 2022, 36, 833–843. [Google Scholar] [CrossRef]
  60. Soler, L.; Nicolau, S.; Pessaux, P.; Mutter, D.; Marescaux, J. Real-time 3D image reconstruction guidance in liver resection surgery. Hepatobiliary Surg. Nutr. 2014, 3, 73–81. [Google Scholar]
  61. Le Roy, B.; Ozgur, E.; Koo, B.; Buc, E.; Bartoli, A. Augmented reality guidance in laparoscopic hepatectomy with deformable semi-automatic computed tomography alignment. J. Visc. Surg. 2019, 156, 261–262. [Google Scholar] [CrossRef]
  62. Bertrand, L.R.; Abdallah, M.; Espinel, Y.; Calvet, L.; Pereira, B.; Ozgur, E.; Pezet, D.; Buc, E.; Bartoli, A. A case series study of augmented reality in laparoscopic liver resection with a deformable preoperative model. Surg. Endosc. 2020, 34, 5642–5648. [Google Scholar] [CrossRef]
  63. Porpiglia, F.; Checcucci, E.; Amparore, D.; Autorino, R.; Piana, A.; Bellin, A.; Piazzolla, P.; Massa, F.; Bollito, E.; Gned, D.; et al. Augmented-reality robot-assisted radical prostatectomy using hyper-accuracy three-dimensional reconstruction (HA3D) technology: A radiological and pathological study. BJU Int. 2019, 123, 834–845. [Google Scholar] [CrossRef]
  64. Hughes-Hallett, A.; Mayer, E.K.; Marcus, H.J.; Cundy, T.P.; Pratt, P.J.; Darzi, A.W.; Vale, J.A. Augmented reality partial nephrectomy: Examining the current status and future perspectives. Urology 2014, 83, 266–273. [Google Scholar] [CrossRef]
  65. Hashimoto, D.A.; Rosman, G.; Rus, D.; Meireles, O.R. Artificial intelligence in surgery: Promises and perils. Ann. Surg. 2018, 268, 70–76. [Google Scholar] [CrossRef]
  66. Tokuyasu, T.; Iwashita, Y.; Matsunobu, Y.; Kamiyama, T.; Ishikake, M.; Sakaguchi, S.; Ebe, K.; Tada, K.; Endo, Y.; Etoh, T. Development of an artificial intelligence system using deep learning to indicate anatomical landmarks during laparoscopic cholecystectomy. Surg. Endosc. 2021, 35, 1651–1658. [Google Scholar] [CrossRef] [PubMed]
  67. Madani, A.; Namazi, B.; Altieri, M.S.; Hashimoto, D.A.; Rivera, A.M.; Pucher, P.H.; Navarrete-Welton, A.; Sankaranarayanan, G.; Brunt, L.M.; Okrainec, A.; et al. Artificial intelligence for intraoperative guidance: Using semantic segmentation to identify surgical anatomy during laparoscopic cholecystectomy. Ann. Surg. 2022, 276, 363–369. [Google Scholar] [CrossRef] [PubMed]
  68. Mohan, A.; Wara, U.U.; Shaikh, M.T.A.; Rahman, R.M.; Zaidi, Z.A.; Shaikh, M.T.A. Telesurgery and robotics: An improved and efficient era. Cureus 2021, 13, e14124. [Google Scholar] [CrossRef] [PubMed]
  69. Marescaux, J.; Leroy, J.; Rubino, F.; Smith, M.; Vix, M.; Simone, M.; Mutter, D. Transcontinental robot-assisted remote telesurgery: Feasibility and potential applications. Ann. Surg. 2002, 235, 487–492. [Google Scholar] [CrossRef]
  70. Perez, M.; Xu, S.; Chauhan, S.; Tanaka, A.; Simpson, K.; Abdul-Muhsin, H.; Smith, R. Impact of delay on telesurgical performance: Study on the robotic simulator dV-Trainer. Int. J. Comput. Assist. Radiol. Surg. 2016, 11, 581–587. [Google Scholar] [CrossRef]
  71. Xu, S.; Perez, M.; Yang, K.; Perrenot, C.; Felblinger, J.; Hubert, J. Determination of the latency effects on surgical performance and the acceptable latency levels in telesurgery using the dV-Trainer simulator. Surg. Endosc. 2014, 28, 2569–2576. [Google Scholar] [CrossRef]
  72. Korte, C.; Sudhakaran Nair, S.; Nistor, V.; Low, T.P.; Doarn, C.R.; Schaffner, G. Determining the threshold of time-delay for teleoperation accuracy and efficiency in relation to telesurgery. Telemed. J. e-Health 2014, 20, 1078–1086. [Google Scholar] [CrossRef]
  73. Sterbis, J.R.; Hanly, E.J.; Herman, B.C.; Marohn, M.R.; Broderick, T.J.; Shih, S.P.; Harnett, B.; Doarn, C.; Schenkman, N.S. Transcontinental telesurgical nephrectomy using the da Vinci robot in a porcine model. Urology 2008, 71, 971–973. [Google Scholar] [CrossRef]
  74. Bilgic, E.; Turkdogan, S.; Watanabe, Y.; Madani, A.; Landry, T.; Lavigne, D.; Feldman, L.S.; Vassiliou, M.C. Effectiveness of telementoring in surgery compared with on-site mentoring: A systematic review. Surg. Innov. 2017, 24, 379–385. [Google Scholar] [CrossRef]
  75. Papalois, Z.A.; Aydın, A.; Khan, A.; Mazaris, E.; Rathnasamy Muthusamy, A.S.; Dor, F.J.; Dasgupta, P.; Ahmed, K. HoloMentor: A mixed reality surgical anatomy curriculum for robot-assisted radical prostatectomy. Eur. Surg. Res. 2022, 63, 40–45. [Google Scholar] [CrossRef]
  76. Artsen, A.M.; Burkett, L.S.; Duvvuri, U.; Bonidie, M. Surgeon satisfaction and outcomes of tele-proctoring for robotic gynecologic surgery. J. Robot. Surg. 2022, 16, 563–568. [Google Scholar] [CrossRef] [PubMed]
  77. Orpheus, an Intuitive Company. Orpheus, an Intuitive Company, 2021. Available online: https://www.surgicalroboticstechnology.com/news/orpheus-medical-acquired-by-intuitive/ (accessed on 28 January 2022).
  78. Proximie. Proximie, 2021. Available online: https://www.proximie.com/ (accessed on 28 January 2022).
  79. Reacts. Reacts, 2021. Available online: https://kpmg.com/xx/en/our-insights/esg/reaction-magazine-39.html (accessed on 28 January 2022).
  80. Mota, P.; Carvalho, N.; Carvalho-Dias, E.; Costa, M.J.; Correia-Pinto, J.; Lima, E. Video-based surgical learning: Improving trainee education and preparation for surgery. J. Surg. Educ. 2018, 75, 828–835. [Google Scholar] [CrossRef] [PubMed]
  81. Pape-Koehler, C.; Immenroth, M.; Sauerland, S.; Lefering, R.; Lindlohr, C.; Toaspern, J.; Heiss, M. Multimedia-based training on internet platforms improves surgical performance: A randomized controlled trial. Surg. Endosc. 2013, 27, 1737–1747. [Google Scholar] [CrossRef] [PubMed]
  82. Pandya, A.; Eslamian, S.; Ying, H.; Nokleby, M.; Reisner, L.A. A robotic recording and playback platform for training surgeons and learning autonomous behaviours using the da Vinci surgical system. Robotics 2019, 8, 9. [Google Scholar] [CrossRef]
  83. Huynh, D.; Fadaee, N.; Gök, H.; Wright, A.; Towfigh, S. Thou shalt not trust online videos for inguinal hernia repair techniques. Surg. Endosc. 2021, 35, 5724–5728. [Google Scholar] [CrossRef]
  84. Daes, J.; Felix, E. Critical view of the myopectineal orifice. Ann. Surg. 2017, 266, e1–e2. [Google Scholar] [CrossRef]
  85. Celentano, V.; Smart, N.; McGrath, J.; Cahill, R.A.; Spinelli, A.; Challacombe, B.; Belyansky, I.; Hasegawa, H.; Munikrishnan, V.; Pellino, G. How to report educational videos in robotic surgery: An international multidisciplinary consensus statement. Updates Surg. 2021, 73, 815–821. [Google Scholar] [CrossRef]
  86. Touch Surgery Enterprise. Touch Surgery Enterprise, 2021. Available online: https://news.medtronic.com/Touch-Surgery-Enterprise-Media-Kit/ (accessed on 28 January 2022).
  87. Killoran, C.B.; de Costa, A. Can open cholecystectomy be taught by cadaveric simulation? ANZ J. Surg. 2024, 94, 1051–1055. [Google Scholar] [CrossRef]
  88. Campi, R.; Pecoraro, A.; Vignolini, G.; Spatafora, P.; Sebastianelli, A.; Sessa, F.; Li Marzi, V.; Territo, A.; Decaestecker, K.; Breda, A.; et al. The First Entirely 3D-Printed Training Model for Robot-assisted Kidney Transplantation: The RAKT Box. Eur. Urol. Open Sci. 2023, 53, 98–105. [Google Scholar] [CrossRef]
  89. Hays, S.B.; Kuchta, K.; Rojas, A.E.; Mehdi, S.A.; Schwarz, J.L.; Talamonti, M.S.; Hogg, M.E. Residency robotic biotissue curriculum: The next frontier in robotic surgical training. HPB 2025, 27, 688–695. [Google Scholar] [CrossRef]
  90. Stulberg, J.J.; Huang, R.; Kreutzer, L.; Ban, K.; Champagne, B.J.; Steele, S.R.; Johnson, J.K.; Holl, J.L.; Greenberg, C.C.; Bilimoria, K.Y. Association between surgeon technical skills and patient outcomes. JAMA Surg. 2020, 155, 960–968. [Google Scholar] [CrossRef] [PubMed]
  91. Prebay, Z.J.; Peabody, J.O.; Miller, D.C.; Ghani, K.R. Video review for measuring and improving skill in urological surgery. Nat. Rev. Urol. 2019, 16, 261–267. [Google Scholar] [CrossRef] [PubMed]
  92. Hussein, A.A.; Ghani, K.R.; Peabody, J.; Sarle, R.; Abaza, R.; Eun, D.; Hu, J.; Fumo, M.; Lane, B.; Montgomery, J.S.; et al. Development and validation of an objective scoring tool for robot-assisted radical prostatectomy: Prostatectomy Assessment and Competency Evaluation. J. Urol. 2017, 197, 1237–1244. [Google Scholar] [CrossRef] [PubMed]
  93. Goh, A.C.; Goldfarb, D.W.; Sander, J.C.; Miles, B.J.; Dunkin, B.J. Global Evaluative Assessment of Robotic Skills: Validation of a clinical assessment tool to measure robotic surgical skills. J. Urol. 2012, 187, 247–252. [Google Scholar] [CrossRef]
  94. White, L.W.; Kowalewski, T.M.; Dockter, R.L.; Comstock, B.; Hannaford, B.; Lendvay, T.S. Crowd-sourced assessment of technical skill: A valid method for discriminating basic robotic surgical skills. J. Endourol. 2015, 29, 1295–1301. [Google Scholar] [CrossRef]
  95. Chen, C.; White, L.; Kowalewski, T.; Aggarwal, R.; Lintott, C.; Comstock, B.; Kuksenok, K.; Aragon, C.; Holst, D.; Lendvay, T. Crowd-Sourced Assessment of Technical Skills: A novel method to evaluate surgical performance. J. Surg. Res. 2014, 187, 65–71. [Google Scholar] [CrossRef]
  96. Polin, M.R.; Siddiqui, N.Y.; Comstock, B.A.; Hesham, H.; Brown, C.; Lendvay, T.S.; Martino, M.A. Crowdsourcing: A valid alternative to expert evaluation of robotic surgery skills. Am. J. Obstet. Gynecol. 2016, 215, e641–e644. [Google Scholar] [CrossRef]
  97. Ghani, K.R.; Miller, D.C.; Linsell, S.; Brachulis, A.; Lane, B.; Sarle, R.; Dalela, D.; Menon, M.; Comstock, B.; Lendvay, T.S.; et al. Measuring to improve: Peer and crowd-sourced assessments of technical skill with robot-assisted radical prostatectomy. Eur. Urol. 2016, 69, 547–550. [Google Scholar] [CrossRef]
  98. Aghazadeh, M.A.; Mercado, M.A.; Pan, M.M.; Miles, B.J.; Goh, A.C. Performance of robotic simulated skills tasks is positively associated with clinical robotic surgical performance. BJU Int. 2016, 118, 475–481. [Google Scholar] [CrossRef]
  99. Chen, J.; Oh, P.J.; Cheng, N.; Shah, A.; Montez, J.; Jarc, A.; Guo, L.; Gill, I.S.; Hung, A.J. Use of automated performance metrics to measure surgeon performance during robotic vesicourethral anastomosis and methodical development of a training tutorial. J. Urol. 2018, 200, 895–902. [Google Scholar] [CrossRef]
  100. Hung, A.J.; Chen, J.; Ghodoussipour, S.; Oh, P.J.; Liu, Z.; Nguyen, J.; Purushotham, S.; Gill, I.S.; Liu, Y. A deep-learning model using automated performance metrics and clinical features to predict urinary continence recovery after robot-assisted radical prostatectomy. BJU Int. 2019, 124, 487–495. [Google Scholar] [CrossRef] [PubMed]
  101. Lee, D.; Yu, H.W.; Kwon, H.; Kong, H.J.; Lee, K.E.; Kim, H.C. Evaluation of Surgical Skills during Robotic Surgery by Deep Learning-Based Multiple Surgical Instrument Tracking in Training and Actual Operations. J. Clin. Med. 2020, 9, 1964. [Google Scholar] [CrossRef] [PubMed]
  102. Pottle, J. Virtual reality and the transformation of medical education. Future Healthc. J. 2019, 6, 181–185. [Google Scholar] [CrossRef] [PubMed]
  103. Mitsouras, D.; Liacouras, P.C.; Wake, N.; Rybicki, F.J. RadioGraphics Update: Medical 3D Printing for the Radiologist. Radiographics 2020, 40, E21–E23. [Google Scholar] [CrossRef]
Table 1. Definitions of validity as related to simulation.
Face — A subjective assessment of how well the simulator replicates the real world.
Content — A subjective assessment of whether the simulation exercise provides an accurate assessment of the intended content.
Construct — An objective assessment of the ability of the simulator to differentiate a novice from an expert.
Concurrent — An objective assessment of how well simulator results correlate with current operative performance.
Predictive — An objective assessment of how well simulator results can predict future operative performance.

Share and Cite

MDPI and ACS Style

Keelan, S.; Guirgis, M.; Julien, B.; Hewett, P.J.; Talbot, M. Surgeon Training in the Era of Computer-Enhanced Simulation Robotics and Emerging Technologies: A Narrative Review. Surg. Tech. Dev. 2025, 14, 21. https://doi.org/10.3390/std14030021
