Article

Exploring the Role of Artificial Intelligence in Enhancing Surgical Education During Consultant Ward Rounds

1 Faculty of Medicine and Surgery, Central Clinical School, Monash University, Melbourne, VIC 3004, Australia
2 Department of Plastic and Reconstructive Surgery, Peninsula Health, Frankston, VIC 3199, Australia
3 Harvard Medical School, Harvard University, Shattuck St, Boston, MA 02115, USA
4 Department of Plastic and Reconstructive Surgery, University of Siena, 53100 Siena, Italy
* Author to whom correspondence should be addressed.
Surgeries 2025, 6(4), 83; https://doi.org/10.3390/surgeries6040083
Submission received: 22 August 2025 / Revised: 28 September 2025 / Accepted: 29 September 2025 / Published: 30 September 2025

Abstract

Background/Objectives: Surgical ward rounds are central to trainee education but are often associated with stress, cognitive overload, and inconsistent learning. Advances in artificial intelligence (AI), particularly large language models (LLMs), offer new ways to support trainees by simulating ward-round questioning, enhancing preparedness, and reducing anxiety. This study explores the role of generative AI in surgical ward-round education. Methods: Hypothetical plastic and reconstructive surgery ward-round scenarios were developed, including flexor tenosynovitis, DIEP flap monitoring, acute burns, and abscess management. Using de-identified vignettes, AI platforms (ChatGPT-4.5 and Gemini 2.0) generated consultant-level questions and structured responses. Outputs were assessed qualitatively for relevance, educational value, and alignment with surgical competencies. Results: ChatGPT-4.5 showed a strong ability to anticipate consultant-style questions and deliver concise, accurate answers across multiple surgical domains. ChatGPT-4.5 consistently outperformed Gemini 2.0 across all domains, with higher expert Likert ratings for accuracy, clarity, and educational value. It was particularly effective in pre-ward round preparation, enabling simulated questioning that mirrored consultant expectations. AI also aided post-round consolidation by providing tailored summaries and revision materials. Limitations included occasional inaccuracies, risk of over-reliance, and privacy considerations. Conclusions: Generative AI, particularly ChatGPT-4.5, shows promise as a supplementary tool in surgical ward-round education. While both models demonstrated utility, ChatGPT-4.5 was superior in replicating consultant-level questioning and providing structured responses. Pilot programs with ethical oversight are needed to evaluate their impact on trainee confidence, performance, and outcomes. Although plastic surgery cases were used for proof of concept, the findings are relevant to surgical education across subspecialties.

1. Introduction

Ward rounds are a cornerstone of surgical education, providing structured opportunities for bedside learning, development of clinical reasoning, and consultant-to-trainee teaching. They offer a unique forum for observing patient care, synthesising clinical information, and engaging in diagnostic and management discussions. However, their dynamic and high-pressure nature can present significant challenges for junior doctors, particularly when confronted with rapid-fire questioning by senior surgeons. Although such questioning is often intended as a teaching strategy, the associated stress and cognitive overload can impair knowledge retention, diminish confidence, and negatively impact morale [1,2].
Recent advances in artificial intelligence (AI), particularly generative AI large language models (LLMs), have introduced new opportunities to enhance medical education by facilitating rapid access to structured, evidence-based information. In surgical contexts, AI has been proposed as an “educational safety net”, offering trainees the ability to access context-relevant clinical knowledge discreetly during ward rounds, thereby reducing the anxiety associated with on-the-spot questioning [3]. Beyond immediate clinical support, AI can also facilitate post-round consolidation by generating tailored reading material, practice questions, and management algorithms based on ward round discussions [4,5,6].
Emerging studies demonstrate that AI systems can simulate complex clinical questioning and provide accurate, consultant-level responses. For example, ChatGPT has demonstrated strong agreement with expert surgeons in intraoperative decision-making during deep inferior epigastric perforator (DIEP) flap breast reconstruction [7] and in the management of Dupuytren’s disease [8]. These findings support the integration of AI as a reliable adjunct for improving trainee performance, exam preparation, and knowledge translation into clinical practice. Furthermore, the iterative learning capacity of LLMs allows them to continuously update with new evidence, potentially improving the accuracy and relevance of responses over time [8,9,10,11].
To explore these opportunities, we designed hypothetical surgical ward round scenarios in plastic and reconstructive surgery, including flexor tenosynovitis, postoperative DIEP flap monitoring, acute burns, and soft-tissue infections. Using de-identified case vignettes, AI platforms (ChatGPT-4.5 and Gemini 2.0) were prompted to generate consultant-style questions and structured, evidence-based answers. These simulations enabled the evaluation of AI’s capacity to replicate the intensity of ward round questioning, while offering an environment that facilitates pre-round preparation, knowledge reinforcement, and reduced trainee stress.
Despite these promising applications, caution is warranted. Risks include over-reliance on AI at the expense of critical thinking, potential dissemination of inaccuracies or “hallucinations,” and breaches of patient privacy if data are not adequately de-identified [12,13,14]. Furthermore, the integration of AI into surgical training must align with established competencies such as Judgement and Clinical Decision-Making, ensuring that ultimate responsibility for diagnosis and management remains with the surgeon [12,13,14]. Cultural acceptance is equally critical, requiring consultants to embrace AI as a complementary educational tool rather than a shortcut.
This study, therefore, examines the potential role of generative AI, particularly ChatGPT-4.5, in surgical ward round education. By simulating consultant-level questioning in realistic clinical scenarios, we aim to assess whether AI can augment traditional apprenticeship models, improve preparedness, and support reflective, self-directed learning.

2. Materials and Methods

2.1. Study Design

This study was designed as an exploratory, proof-of-concept evaluation of generative artificial intelligence in surgical education. The objective was to assess the capacity of large language models to replicate consultant-style questioning during ward rounds and to provide structured, evidence-based responses suitable for trainee learning and preparation.

2.2. Scenario Development

A series of hypothetical clinical scenarios was created to reflect common inpatient and perioperative cases encountered during consultant ward rounds. Here, “common” refers to cases that are educationally representative of ward-round teaching rather than to conditions that are globally frequent. Plastic and reconstructive surgery was chosen as the proof-of-concept specialty because the evaluators were expert plastic surgeons with over 40 years of combined clinical and educational experience.
Scenarios were first drafted by two senior residents with ward-round experience, then refined and validated in two iterative sessions by consultant plastic surgeons. Validation focused on clinical plausibility, ward-round relevance, and decision-making challenges. The specialty was also a pragmatic choice for pilot testing, as its cases often involve clear decision-making pathways (e.g., flap monitoring, infection management, burn care).
Although plastic surgery cases were used, the ward-round structure and consultant–trainee questioning simulated here are consistent across all surgical subspecialties. The findings are therefore broadly generalisable to surgical education. Clinical contexts included:
  • Flexor tenosynovitis following a rose thorn injury
  • Postoperative monitoring of bilateral deep inferior epigastric perforator (DIEP) flaps
  • Acute flame burns to the lower limbs
  • Right forearm abscess in an intravenous drug user
Each scenario was structured to include relevant clinical history, current issues, and expected decision-making challenges (Supplementary File S1).

2.3. AI Prompting Procedure

The scenarios were converted into de-identified clinical vignettes to ensure no identifiable patient information was included. Each vignette was entered into ChatGPT-4.5 (OpenAI, San Francisco, CA, USA) and Gemini 2.0 (Google DeepMind, London, UK), which generated consultant-level questions and structured responses. Full sets of outputs from both models were produced for every scenario and reviewed independently by experts. Supplementary File S1 presents illustrative examples (the first two from ChatGPT-4.5 and the last two from Gemini 2.0), but all outputs were scored across the full domains of relevance, accuracy, clarity, and educational value. Prompts were standardised to simulate ward-round questioning, with instructions for the model to:
  • Generate consultant-level questions that a senior surgeon might ask during ward rounds.
  • Provide structured, evidence-based answers aligned with surgical teaching principles and recognised competencies of the Royal Australasian College of Surgeons (RACS).
While ChatGPT-4.5 was the primary focus of this study, Gemini 2.0 was included for direct comparison.
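For illustration, the following minimal sketch shows how a standardised ward-round prompt of this kind could be issued programmatically via the OpenAI Python client. The study itself does not describe an automated pipeline, and the model identifier, the build_prompt wording, and the example vignette below are assumptions for demonstration only, not the exact materials used.

    # Illustrative sketch only: the model identifier and prompt wording are assumptions,
    # not the exact procedure or materials used in this study.
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def build_prompt(vignette: str) -> str:
        # Standardised instruction mirroring Section 2.3: consultant-level questions plus
        # structured, evidence-based answers aligned with RACS competencies.
        return (
            "You are a consultant plastic surgeon leading a ward round.\n"
            "For the de-identified vignette below:\n"
            "1. Generate the questions a senior surgeon might ask a trainee at the bedside.\n"
            "2. Provide structured, evidence-based answers aligned with RACS competencies.\n\n"
            f"Vignette:\n{vignette}"
        )

    def simulate_ward_round(vignette: str, model: str = "gpt-4o") -> str:
        # "gpt-4o" is a placeholder model identifier; substitute the model under evaluation.
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": build_prompt(vignette)}],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        vignette = (
            "45-year-old gardener, day 1 after a rose thorn injury to the index finger, "
            "now with fusiform swelling, flexed posture, and pain on passive extension."
        )
        print(simulate_ward_round(vignette))

A comparable call could be made to any other model under evaluation; keeping the prompt text identical across platforms is what makes the outputs directly comparable.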

2.4. Assessment of Outputs

The AI-generated questions and responses were independently evaluated by two experienced plastic surgeons, each with over 30 years of clinical and educational experience. Using a structured Likert framework (1 = poor to 5 = excellent), the surgeons assessed the outputs for:
  • Relevance—appropriateness of generated questions for ward-round teaching.
  • Accuracy—alignment of answers with established surgical principles and guidelines.
  • Educational Value—ability of responses to support trainee preparedness and post-round consolidation.
Scores were averaged, and qualitative feedback was documented. The expert ratings are presented in Table 1; representative examples of AI-generated material are provided in Supplementary File S1. Across all tested scenarios, ChatGPT-4.5 consistently achieved higher mean scores than Gemini 2.0, particularly in clarity, depth of explanation, and usefulness for trainee learning. The experts assessed the complete sets of outputs from each model, and the scores in Table 1 reflect their comparative evaluation of ChatGPT-4.5 versus Gemini 2.0 across the same scenarios.
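As a minimal sketch of the scoring arithmetic described above (assuming one rating per domain per output, averaged across the five Table 1 domains), the per-model mean scores could be computed as follows; the variable names are illustrative, and the example values reproduce Scenario 1 from Table 1.

    # Minimal sketch of the Likert aggregation described in Section 2.4.
    # Domain names follow Table 1; the example ratings reproduce Scenario 1.
    from statistics import mean

    DOMAINS = ["accuracy", "clinical_relevance", "depth", "clarity", "usefulness"]

    def mean_score(ratings: dict[str, int]) -> float:
        """Average the five domain ratings (1 = poor to 5 = excellent) for one output."""
        return round(mean(ratings[d] for d in DOMAINS), 1)

    scenario_1 = {
        "ChatGPT-4.5": {"accuracy": 5, "clinical_relevance": 5, "depth": 4, "clarity": 5, "usefulness": 5},
        "Gemini 2.0": {"accuracy": 4, "clinical_relevance": 4, "depth": 3, "clarity": 3, "usefulness": 3},
    }

    for model, ratings in scenario_1.items():
        print(f"{model}: mean score {mean_score(ratings)}/5")  # 4.8 and 3.4, matching Table 1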

2.5. Ethical Considerations

As this study utilised hypothetical, de-identified clinical scenarios without involvement of real patients or clinical data, formal ethics approval was not required. The study was conducted in accordance with the principles of the Declaration of Helsinki.

3. Results

Table 1. Expert Ratings of AI-Generated Ward Round Questions & Answers.

Scenario | Model | Accuracy | Clinical Relevance | Depth of Explanation | Clarity & Structure | Usefulness for Trainee Learning | Mean Score (/5)
1. Flexor Tenosynovitis (rose thorn injury) | ChatGPT-4.5 | 5 | 5 | 4 | 5 | 5 | 4.8
1. Flexor Tenosynovitis (rose thorn injury) | Gemini 2.0 | 4 | 4 | 3 | 3 | 3 | 3.4
2. DIEP Flap Post-Op Monitoring | ChatGPT-4.5 | 5 | 5 | 5 | 5 | 4 | 4.8
2. DIEP Flap Post-Op Monitoring | Gemini 2.0 | 4 | 4 | 3 | 3 | 3 | 3.4
3. Acute Burns (12% TBSA lower limbs) | ChatGPT-4.5 | 5 | 5 | 5 | 4 | 5 | 4.8
3. Acute Burns (12% TBSA lower limbs) | Gemini 2.0 | 4 | 4 | 3 | 3 | 3 | 3.4
4. IVDU Forearm Abscess | ChatGPT-4.5 | 5 | 5 | 4 | 5 | 5 | 4.8
4. IVDU Forearm Abscess | Gemini 2.0 | 4 | 4 | 3 | 3 | 3 | 3.4
Across all scenarios, ChatGPT-4.5 achieved higher mean scores (4.8/5) compared with Gemini 2.0 (3.4/5). The greatest differences were observed in clarity, depth of explanation, and usefulness for trainee learning. These findings highlight the greater consistency and educational value of ChatGPT outputs.

4. Discussion

This exploratory proof-of-concept study examined the role of generative artificial intelligence in enhancing surgical education during consultant ward rounds, with a direct comparison between ChatGPT-4.5 and Gemini 2.0. Across identical surgical scenarios, ChatGPT-4.5 consistently achieved higher ratings in accuracy, relevance, clarity, and educational value compared with Gemini 2.0. These results suggest that ChatGPT-4.5 is currently the more reliable model for replicating consultant-style questioning and supporting structured learning for trainees during ward rounds [15].
Ward rounds remain central to surgical training, providing opportunities for bedside teaching, clinical reasoning, and direct interaction between consultants and trainees [1,15]. However, they are also associated with significant stress, cognitive overload, and the potential for impaired learning among junior doctors when exposed to rapid-fire questioning [16]. Traditional strategies, such as encouraging trainees to “look up questions later,” may not always be practical, particularly in the time-pressured clinical environment. These challenges have prompted growing interest in the use of AI to create structured, learner-centred support systems that reduce stress while maintaining educational value [17].
Our results showed that ChatGPT-4.5 produced highly accurate and clinically relevant responses across various scenarios, achieving near-perfect Likert scores for accuracy, clarity, and educational value. In comparison, Gemini 2.0 generated shorter and less comprehensive outputs, leading to lower ratings. These findings align with previous studies where ChatGPT demonstrated strong agreement with expert plastic surgeons in intraoperative decision-making during DIEP flap reconstruction [7] and in managing Dupuytren’s disease [8]. Overall, these results support the use of ChatGPT as a reliable educational tool capable of mimicking consultant-style questioning and providing structured, evidence-based responses [18].
The ability of ChatGPT to anticipate consultant-level questions, such as the principles of flexor sheath infection management or the early recognition of flap compromise, illustrates its alignment with established surgical teaching frameworks [19,20]. This implies that ChatGPT can turn ward round questioning from a daunting task into a well-organised micro-curriculum, supporting the Royal Australasian College of Surgeons’ competencies in Scholarship and Teaching, as well as Judgement and Clinical Decision-Making [6].
The potential applications of AI in ward-based surgical education are multifaceted. First, AI can assist trainees in pre-round preparation, allowing them to simulate likely consultant questions based on de-identified patient lists. Such rehearsal may increase confidence and reduce anxiety by providing structured exposure to high-yield clinical content before encountering consultants [4]. Second, AI can function as an on-round adjunct, enabling just-in-time access to concise, evidence-based information. While this must be used cautiously to avoid dependency, discreet use through secure devices may support real-time learning, particularly in rare or complex cases. Finally, AI can facilitate post-round consolidation by automatically generating reading lists, revision questions, and algorithms tailored to topics raised during ward rounds [5]. Importantly, AI cannot replicate essential elements of ward rounds such as physical examination, bedside demonstrations, or nuanced interpersonal teaching moments. Its role is therefore limited to preparatory rehearsal and post-round consolidation, and it cannot replace direct clinical exposure at the patient’s bedside.
Beyond technical accuracy, an essential limitation of AI systems is their inability to replicate human empathy, compassion, and the reassuring presence of a consultant at the bedside. Ward rounds are not only exercises in clinical reasoning but also opportunities to build trust, reduce patient and family anxiety, and demonstrate respect. These interpersonal and emotional dimensions are fundamental to surgical education and patient care and cannot be simulated by AI. For this reason, generative AI must be regarded strictly as an adjunct to traditional teaching rather than a substitute for the humanistic aspects of clinical practice.
Beyond ward rounds, the role of AI extends into outpatient clinics, on-call shifts, and examination preparation. LLMs have demonstrated the capacity to generate targeted viva questions and multiple-choice questions aligned with fellowship curricula [8]. By simulating high-stakes questioning and providing instant feedback, AI has the potential to accelerate revision cycles and improve exam readiness. In the outpatient setting, trainees may use AI for rapid refreshers on diagnostic algorithms or counselling frameworks, ensuring patient interactions remain evidence-based and consultant-supervised.
Despite these promising applications, several limitations must be acknowledged. Foremost is the risk of over-reliance on AI, which may undermine the development of critical thinking, diagnostic reasoning, and pattern recognition skills essential to surgical practice. Junior doctors must be encouraged to verify AI outputs and integrate them into independent reasoning rather than substituting them for clinical judgement.
Second, LLMs remain susceptible to hallucinations and inaccuracies, as they are trained on large, heterogeneous datasets that are not exclusively clinical and may include outdated or non-peer-reviewed information [10]. This underscores the need for rigorous appraisal of outputs against current evidence-based guidelines and consultant oversight. Importantly, in our study, ChatGPT outputs were consistently more accurate and structured than Gemini’s, reinforcing the necessity of model selection in clinical education.
Third, privacy and governance concerns are critical. Although our study employed hypothetical, de-identified scenarios, translation into clinical practice would require strict compliance with the Privacy Act 1988 (Cth) and institutional ethics approval. Prior literature highlights that even anonymised data may, under certain conditions, be re-identified [10]. As such, clinical deployment must ensure robust safeguards, including approved devices, de-identification protocols, role-based access, and audit trails [11].
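As a hedged illustration of what such safeguards might involve in software (assuming a simple rule-based redaction layer and a bare-bones audit log; neither is part of this study, and neither would substitute for validated de-identification tooling or institutional governance), a vignette could be screened before it reaches an external model:

    # Illustrative redaction pass; the patterns and field names are assumptions and would
    # not replace a validated de-identification pipeline or institutional approval.
    import re

    PATTERNS = {
        "MRN": re.compile(r"\b(MRN|UR)[:\s]*\d{6,}\b", re.IGNORECASE),
        "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    }

    def redact(vignette: str) -> str:
        """Replace obvious direct identifiers with placeholder tokens before prompting."""
        for label, pattern in PATTERNS.items():
            vignette = pattern.sub(f"[{label} REMOVED]", vignette)
        return vignette

    def audit_log(user_role: str, vignette: str) -> None:
        # Role-based access and audit trails, as suggested above: record which role
        # submitted a vignette and its length, rather than storing the raw text.
        print(f"AUDIT: role={user_role}, chars={len(vignette)}")

    text = "UR 1234567, reviewed 03/10/2025: right forearm abscess, for incision and drainage."
    clean = redact(text)
    audit_log("registrar", clean)
    print(clean)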
Finally, this was a pilot exploration using hypothetical cases, which limits generalisability. Real-world implementation must consider workflow integration, consultant attitudes, and cultural adaptation within surgical departments. Acceptance by senior surgeons will be pivotal; rather than viewing AI as a shortcut, consultants must embrace it as a complementary teaching tool that fosters meaningful discussion around reasoning and evidence.
This proof-of-concept study provides a framework for future investigations into the application of AI in surgical education. Controlled pilot programmes should assess the impact of AI-augmented ward rounds on trainee confidence, stress, knowledge retention, and clinical performance, using both quantitative tools (e.g., State-Trait Anxiety Inventory, structured MCQ or viva examinations) and qualitative methods (focus groups and interviews). Comparative trials of different LLM platforms, ideally with retrieval-augmented generation, may help determine which models are best suited for educational purposes. Multicentre collaboration across training hospitals would enhance generalisability and support best practice guidelines. A key limitation of this study is the use of hypothetical rather than live patient scenarios. AI cannot simulate clinical examination, demonstrations, and bedside decision-making; therefore, it can only complement preparation and reflection, rather than replace in-person ward rounds. Looking ahead, integration with real patient data raises critical issues of privacy, data security, and medico-legal accountability. Deployment must therefore include robust safeguards, such as encryption, audit trails, role-based access, and compliance with relevant privacy legislation, alongside clear frameworks for consultant oversight and accountability.
Future research should also compare AI-generated ward-round questions and answers with consultant-derived material on identical scenarios, independently assessed, to benchmark accuracy and educational value. Validation should then extend to specialties such as cardiovascular, pulmonary, gastrointestinal, trauma, and oncology, where different diagnostic and therapeutic pressures exist. Testing AI across such contexts would provide stronger evidence of its generalisability in medical education.

5. Conclusions

This proof-of-concept study demonstrates the potential of generative artificial intelligence, particularly ChatGPT-4.5, to enhance surgical education during ward rounds. ChatGPT consistently outperformed Gemini 2.0 in accuracy, clarity, and educational value, converting high-pressure questioning into structured, learner-centred opportunities that support preparedness and knowledge retention. However, AI cannot replicate essential human elements such as empathy, reassurance, and bedside teaching, and must remain an adjunct rather than a substitute for traditional ward-round education. With robust governance, ethical oversight, and validation across specialties, AI could complement apprenticeship models and help train surgeons who are both clinically proficient and digitally fluent.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/surgeries6040083/s1, File S1: Hypothetical ward-based scenarios.

Author Contributions

Conceptualization, I.S. and W.M.R.; methodology, I.S., O.S. and Y.X.; validation, W.M.R. and R.C.; formal analysis, I.S. and O.S.; investigation, I.S. and Y.X.; resources, W.M.R.; data curation, I.S. and O.S.; writing—original draft preparation, I.S.; writing—review and editing, O.S., Y.X., S.B., R.C. and W.M.R.; visualization, I.S. and S.B.; supervision, W.M.R. and R.C.; project administration, I.S. and W.M.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data supporting the findings of this study are contained within the article and its Supplementary Materials.

Acknowledgments

The authors acknowledge the use of large language models (ChatGPT-4.5 and Gemini 2.0) for study simulations and manuscript drafting support. All AI-generated outputs were critically reviewed, edited, and validated by the authors to ensure accuracy, contextual relevance, and academic integrity.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

AI: Artificial Intelligence
LLM: Large Language Model
RACS: Royal Australasian College of Surgeons
DIEP: Deep Inferior Epigastric Perforator
IVDU: Intravenous Drug Use
TBSA: Total Body Surface Area
MCQ: Multiple Choice Question
NPWT: Negative Pressure Wound Therapy

References

  1. Greenberg, C.C.; Regenbogen, S.E.; Studdert, D.M.; Lipsitz, S.R.; Rogers, S.O.; Zinner, M.J.; Gawande, A.A. Patterns of communication breakdowns resulting in injury to surgical patients. J. Am. Coll. Surg. 2007, 204, 533–540.
  2. Tam, A.; Bateman, S.; Buckingham, G.; Wilson, M.; Melendez-Torres, G.J.; Vine, S.; Clark, J. The effects of stress on surgical performance: A systematic review. Surg. Endosc. 2025, 39, 77–98.
  3. Hashimoto, D.A.; Rosman, G.; Rus, D.; Meireles, O.R.M. Artificial intelligence in surgery: Promises and perils. Ann. Surg. 2018, 268, 70–76.
  4. Xie, Y.; Seth, I.; Hunter-Smith, D.J.; Seifman, M.A.; Rozen, W.M. Response to: Investigating the impact of innovative AI chatbot on post-pandemic medical education and clinical assistance: A comprehensive analysis. ANZ J. Surg. 2024, 94, 493.
  5. Rao, L.; Yang, E.; Dissanayake, S.; Cuomo, R.; Seth, I.; Rozen, W.M. The use of generative artificial intelligence in surgical education: A narrative review. Plast. Aesthet. Res. 2024, 11, 57.
  6. Williams, B.; Olagunju, O.; Richardson, S.; Jameson, G. How Are Inpatient Psychiatric Ward Rounds Understood in Research Literature? A Scoping Review. BJPsych Open 2024, 10, S69–S70.
  7. Atkinson, C.J.; Seth, I.; Xie, Y.; Ross, R.J.; Hunter-Smith, D.J.; Rozen, W.M.; Cuomo, R. Artificial intelligence language model performance for rapid intraoperative queries in plastic surgery: ChatGPT and the deep inferior epigastric perforator flap. J. Clin. Med. 2024, 13, 900.
  8. Seth, I.; Marcaccini, G.; Lim, K.; Castrechini, M.; Cuomo, R.; Ng, S.K.-H.; Ross, R.J.; Rozen, W.M. Management of Dupuytren’s Disease: A Multi-Centric Comparative Analysis Between Experienced Hand Surgeons Versus Artificial Intelligence. Diagnostics 2025, 15, 587.
  9. Paranjape, K.; Schinkel, M.; Panday, R.N.; Car, J.; Nanayakkara, P. Introducing artificial intelligence training in medical education. JMIR Med. Educ. 2019, 5, e16048.
  10. Wang, L.; Chen, X.; Deng, X.; Wen, H.; You, M.; Liu, W.; Li, Q.; Li, J. Prompt engineering in consistency and reliability with the evidence-based guideline for LLMs. npj Digit. Med. 2024, 7, 41.
  11. Hartman, V.; Zhang, X.; Poddar, R.; McCarty, M.; Fortenko, A.; Sholle, E.; Sharma, R.; Campion, T., Jr.; Steel, P.A.D. Developing and Evaluating Large Language Model–Generated Emergency Medicine Handoff Notes. JAMA Netw. Open 2024, 7, e2448723.
  12. Williams, C.Y.K.; Subramanian, C.R.; Ali, S.S.; Apolinario, M.; Askin, E.; Barish, P.; Cheng, M.; Deardorff, W.J.; Donthi, N.; Ganeshan, S.; et al. Physician- and Large Language Model–Generated Hospital Discharge Summaries. JAMA Intern. Med. 2025, 185, 818–825.
  13. Dehkordi, M.K.H.; Perl, Y.; Deek, F.P.; He, Z.; Keloth, V.K.; Liu, H.; Elhanan, G.; Einstein, A.J. Improving Large Language Models’ Summarization Accuracy by Adding Highlights to Discharge Notes: Comparative Evaluation. JMIR Med. Inform. 2025, 13, e66476.
  14. Li, Y.; Li, F.; Hong, N.; Li, M.; Roberts, K.; Cui, L.; Tao, C.; Xu, H. A comparative study of recent large language models on generating hospital discharge summaries for lung cancer patients. J. Biomed. Inform. 2025, 168, 104867.
  15. Shemtob, L.; Nouri, A.; Harvey-Sullivan, A.; Qiu, C.S.; Martin, J.; Martin, M.; Noden, S.; Rob, T.; Neves, A.L.; Majeed, A.; et al. Comparing artificial intelligence- vs clinician-authored summaries of simulated primary care electronic health records. JAMIA Open 2025, 8, ooaf082.
  16. Merriman, C.; Freeth, D. Conducting a good ward round: How do leaders do it? J. Eval. Clin. Pract. 2022, 28, 411–420.
  17. Luxton, D.D. Recommendations for the ethical use and design of artificial intelligent care providers. Artif. Intell. Med. 2014, 62, 1–10.
  18. Loftus, T.J.; Tighe, P.J.; Filiberto, A.C.; Efron, P.A.; Brakenridge, S.C.; Mohr, A.M.; Rashidi, P.; Upchurch, G.R.; Bihorac, A. Artificial intelligence and surgical decision-making. JAMA Surg. 2020, 155, 148–158.
  19. Topol, E.J. High-performance medicine: The convergence of human and artificial intelligence. Nat. Med. 2019, 25, 44–56.
  20. Seth, I.; Lim, B.; Cevik, J.; Sofiadellis, F.; Ross, R.J.; Cuomo, R.; Rozen, W.M. Utilizing GPT-4 and generative artificial intelligence platforms for surgical education: An experimental study on skin ulcers. Eur. J. Plast. Surg. 2024, 47, 19.