Open Access Article
Leave as Fast as You Can: Using Generative AI to Automate and Accelerate Hospital Discharge Reports
by
Alex Trejo Omeñaca, Esteve Llargués Rocabruna, Jonny Sloan, Michelle Catta-Preta, Jan Ferrer i Picó, Julio Cesar Alfaro Alvarez, Toni Alonso Solis, Eloy Lloveras Gil, Xavier Serrano Vinaixa, Daniela Velasquez Villegas, Ramon Romeu Garcia, Carles Rubies Feijoo, Josep Maria Monguet i Fierro and Beatriu Bayes Genis
Abstract
Clinical documentation, particularly the hospital discharge report (HDR), is essential for ensuring continuity of care, yet its preparation is time-consuming and places a considerable clinical and administrative burden on healthcare professionals. Recent advancements in Generative Artificial Intelligence (GenAI) and the use of prompt engineering with large language models (LLMs) offer opportunities to automate parts of this process, improving efficiency and documentation quality while reducing administrative workload. This study aims to design a digital system based on LLMs capable of automatically generating HDRs from clinical course notes and emergency care reports. The system was developed through iterative cycles, integrating various instruction flows and evaluating five different LLMs combined with prompt engineering strategies and agent-based architectures. Throughout development, more than 60 discharge reports were generated and assessed, leading to continuous system refinement. In the production phase, 40 pneumology discharge reports were produced, receiving positive feedback from physicians with an average score of 2.9 out of 4, indicating that the system is useful and that only minor edits were needed in most cases. The ongoing expansion of the system to additional services and its integration within the hospital's electronic system highlight the potential of LLMs, when combined with effective prompt engineering and agent-based architectures, to generate high-quality medical content and provide meaningful support to healthcare professionals.

Hospital discharge reports (HDRs) are pivotal for continuity of care but consume substantial clinician time. Generative AI systems based on large language models (LLMs) could streamline this process, provided they deliver accurate, multilingual, and workflow-compatible outputs. We pursued a three-stage, design-science approach. Proof of concept: five state-of-the-art LLMs were benchmarked with multi-agent prompting to produce sample HDRs and define the optimal agent structure. Prototype: 60 HDRs spanning six specialties were generated and compared with the clinician-written originals using ROUGE, with average scores compatible with those of specialized news-summarization models for Spanish and Catalan (lower scores). A qualitative audit of 27 HDR pairs showed recurrent divergences in medication doses (56%) and social context (52%). Pilot deployment: the AI-HDR service was embedded in the hospital's electronic health record, where 47 HDRs were auto-generated in real-world settings and reviewed by attending physicians. Missing information and factual errors were flagged in 53% and 47% of drafts, respectively, although the physicians' written assessments indicated that these errors were of minor importance. An LLM-driven, agent-orchestrated pipeline can draft real-world HDRs safely, cutting administrative overhead while achieving clinician-acceptable quality, albeit with errors that require human supervision. Future work should refine specialty-specific prompts to curb omissions, add temporal consistency checks to prevent the propagation of outdated data, and validate time savings and clinical impact in multi-center trials.
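The abstract describes an agent-orchestrated, prompt-engineered drafting flow without disclosing its implementation. The sketch below illustrates, in Python, one way such a pipeline could be wired together; the agent roles (extractor, drafter, reviewer), the prompts, and the call_llm helper are illustrative assumptions, not the authors' actual system or models.

```python
# Minimal, hypothetical sketch of an agent-orchestrated HDR drafting pipeline.
# Agent roles, prompts, and call_llm() are assumptions for illustration only;
# the paper does not disclose its actual models, prompts, or orchestration code.

def call_llm(system_prompt: str, user_content: str) -> str:
    """Placeholder for a call to any chat-style LLM API (provider not assumed here)."""
    raise NotImplementedError("Wire this to an LLM provider of choice.")

def draft_discharge_report(course_notes: str, emergency_report: str) -> str:
    # Agent 1: extract structured clinical facts from the source documents.
    facts = call_llm(
        "You are a clinical information extractor. List diagnoses, procedures, "
        "medications with doses, and relevant social context as bullet points.",
        f"Clinical course notes:\n{course_notes}\n\nEmergency care report:\n{emergency_report}",
    )

    # Agent 2: draft the discharge report using only the extracted facts.
    draft = call_llm(
        "You are a physician writing a hospital discharge report. Use only the "
        "facts provided and follow the standard HDR section structure.",
        facts,
    )

    # Agent 3: review the draft against the facts and flag omissions or
    # inconsistencies (e.g., medication doses, outdated data). The output is
    # still handed to the attending physician for mandatory human review.
    review = call_llm(
        "You are a clinical reviewer. Compare the draft with the source facts "
        "and list any missing or contradictory items.",
        f"Facts:\n{facts}\n\nDraft HDR:\n{draft}",
    )
    return draft if "no issues" in review.lower() else f"{draft}\n\n[Reviewer notes]\n{review}"
```

The three-agent split mirrors the extract-draft-review pattern the abstract implies, but the actual number of agents and their instructions in the study may differ.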
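The prototype stage compares generated HDRs with clinician-written originals using ROUGE. As a hedged illustration, the snippet below computes ROUGE-1, ROUGE-2, and ROUGE-L F-scores with the open-source rouge-score package; the ROUGE variants and preprocessing used in the study are not specified, and the package's built-in stemmer targets English, so stemming is left off for Spanish and Catalan text.

```python
# Illustrative ROUGE comparison between a generated HDR draft and the
# clinician-written original, using the open-source `rouge-score` package.
# The ROUGE variants and preprocessing are assumptions, not the study's setup.
from rouge_score import rouge_scorer

def compare_hdr(generated: str, reference: str) -> dict:
    # Stemming disabled: the package's Porter stemmer is English-only, while
    # the reports in the study are written in Spanish and Catalan.
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=False)
    scores = scorer.score(reference, generated)  # score(target, prediction)
    return {name: s.fmeasure for name, s in scores.items()}

if __name__ == "__main__":
    draft = "El paciente ingresó por neumonía y recibió antibioterapia intravenosa."
    original = "Paciente ingresado por neumonía, tratado con antibióticos intravenosos."
    print(compare_hdr(draft, original))
```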