Review

The Role of ChatGPT and AI Chatbots in Optimizing Antibiotic Therapy: A Comprehensive Narrative Review

by Ninel Iacobus Antonie 1,2, Gina Gheorghe 1,2,*, Vlad Alexandru Ionescu 1,2, Loredana-Crista Tiucă 1,2 and Camelia Cristina Diaconu 1,2,3

1 Faculty of Medicine, University of Medicine and Pharmacy Carol Davila Bucharest, 050474 Bucharest, Romania
2 Internal Medicine Department, Clinical Emergency Hospital of Bucharest, 105402 Bucharest, Romania
3 Academy of Romanian Scientists, 050045 Bucharest, Romania
* Author to whom correspondence should be addressed.
Antibiotics 2025, 14(1), 60; https://doi.org/10.3390/antibiotics14010060
Submission received: 14 December 2024 / Revised: 3 January 2025 / Accepted: 7 January 2025 / Published: 9 January 2025

Abstract

Background/Objectives: Antimicrobial resistance represents a growing global health crisis, demanding innovative approaches to improve antibiotic stewardship. Artificial intelligence (AI) chatbots based on large language models have shown potential as tools to support clinicians, especially non-specialists, in optimizing antibiotic therapy. This review aims to synthesize current evidence on the capabilities, limitations, and future directions for AI chatbots in enhancing antibiotic selection and patient outcomes. Methods: A narrative review was conducted by analyzing studies published in the last five years across databases such as PubMed, SCOPUS, Web of Science, and Google Scholar. The review focused on research discussing AI-based chatbots, antibiotic stewardship, and clinical decision support systems. Studies were evaluated for methodological soundness and significance, and the findings were synthesized narratively. Results: Current evidence highlights the ability of AI chatbots to assist in guideline-based antibiotic recommendations, improve medical education, and enhance clinical decision-making. Promising results include satisfactory accuracy in preliminary diagnostic and prescriptive tasks. However, challenges such as inconsistent handling of clinical nuances, susceptibility to unsafe advice, algorithmic biases, data privacy concerns, and limited clinical validation underscore the importance of human oversight and refinement. Conclusions: AI chatbots have the potential to complement antibiotic stewardship efforts by promoting appropriate antibiotic use and improving patient outcomes. Realizing this potential will require rigorous clinical trials, interdisciplinary collaboration, regulatory clarity, and tailored algorithmic improvements to ensure their safe and effective integration into clinical practice.

1. Introduction

Artificial intelligence (AI) systems based on large language models (LLMs) [1], such as OpenAI’s ChatGPT [2], Google’s Gemini [3], and Anthropic’s Claude [4], have become highly recognizable and popular thanks to their user-friendly interfaces and their natural, conversational interactions in the form of chatbots [5]. Their continuous advancements are rapidly transforming them into personal assistants, conveniently accessible through smartphones or other devices such as wearables, via either text or voice communication [6,7,8]. The adoption and usage of chatbots in the healthcare industry are expected to increase, particularly due to their potential to improve medical research processes, enhance access to medical information, and provide personalized support [9,10,11,12]. Moreover, the decreasing supply of healthcare professionals is likely to accelerate this trend, as chatbots can help maximize the clinical efficiency of the remaining workforce by facilitating rapid access to information [13,14,15].
One of the most promising applications of LLMs is their ability to contribute to clinical decision support systems by recommending evidence-based antibiotic regimens. When implemented as chatbots in real-world scenarios, these systems could assist clinicians, reducing the rate of inappropriate antibiotic use [16,17,18,19,20] (Figure 1). Consequently, ongoing and future research may further substantiate their role in addressing the critical goal of combating antimicrobial resistance (AMR), a challenge that stands as one of the most pressing threats to global health. Current projections estimate that by 2050, even with the development of new antibiotics, annual deaths due to AMR could reach 10 million [21]. This underscores the urgency of developing innovative strategies to combat this growing threat.
Adapting to the evolving landscape of infectious diseases requires innovation. While humans develop new interventions, pathogens like Escherichia coli, Staphylococcus aureus, and Klebsiella pneumoniae exemplify the microbial adaptability that drives resistance mechanisms, perpetuating challenges for healthcare systems [22]. This dynamic illustrates why emerging, promising technological advancements deserve serious consideration. Encouraging research is already being published on this subject, showing that chatbots can structure medical notes and offer treatment suggestions while remaining easy to use and convenient [23,24,25].
In many clinical settings, patients presenting with acute infections are often managed by non-specialist physicians due to a shortage of infectious disease specialists. This situation frequently results in suboptimal antibiotic choices that do not fully adhere to established guidelines, potentially exacerbating antimicrobial resistance [26,27,28]. AI-driven chatbots could bridge the gap between complex clinical guidelines and everyday practice by providing non-specialist clinicians with timely, evidence-based treatment recommendations. These practical challenges in antibiotic stewardship have motivated this research into the role of AI chatbots in optimizing antibiotic therapy.
Thus, we pose a pivotal question: Can AI-driven chatbots play a meaningful role in optimizing antibiotic therapy? In this comprehensive review, we will explore this question by examining current research and practical applications to assess the potential impact of these technologies on antibiotic stewardship and clinical practice.

2. Materials and Methods

In conducting this comprehensive narrative review on the role of AI-based chatbots in optimizing antibiotic therapy, a systematic search strategy was employed to identify relevant literature across multiple scientific databases.
The following databases were utilized due to their extensive coverage of biomedical and technological research: PubMed, SCOPUS, Web of Science, and Google Scholar.
  • Search Strategy
The literature search was carried out using a combination of keywords and Boolean operators to capture a broad range of studies related to AI chatbots and antibiotic therapy. The search queries were tailored to each database’s specific requirements and incorporated keywords related to:
  • AI Technologies: “chatbot*”, “conversational agent*”, “artificial intelligence”, “AI”, “ChatGPT”, “LLMs”, and names of specific AI systems (e.g., “Bard AI”, “Claude AI”).
  • Antibiotic Therapy: “antibiotic therapy”, “antibiotic prescribing”, “antimicrobial stewardship”, “antibiotic stewardship”, “antibiotherapy”.
  • Clinical Context: “error reduction”, “medication errors”, “prescribing errors”, “adherence”, “clinical decision support”, “decision-making”, “accessibility”, “education”, “patient education”, “health education”, “resource-limited settings”, “developing countries”.
Boolean operators such as “AND” and “OR” were used to combine these keywords effectively, allowing for a comprehensive search that included all relevant literature.
  • Search Formulas used:
1. SCOPUS: TITLE-ABS-KEY(chatbot* OR “conversational agent*” OR “artificial intelligence” OR AI OR ChatGPT) AND TITLE-ABS-KEY(“antibiotic therapy” OR “antibiotic prescribing” OR “antimicrobial stewardship” OR “antibiotic stewardship”) AND TITLE-ABS-KEY(“error reduction” OR “medication errors” OR “prescribing errors” OR adherence OR “clinical decision support” OR “decision-making” OR accessibility OR education OR “patient education” OR “health education” OR “resource-limited settings” OR “developing countries”).
2. Web of Science: TS = (chatbot* OR “conversational agent*” OR “artificial intelligence” OR ChatGPT OR LLMs OR “Bard AI” OR “Claude AI” OR “Siri” OR “Alexa” OR “Google Assistant” OR “Microsoft Copilot” OR “Anthropic Claude” OR “IBM Watson” OR “Jasper AI” OR “Perplexity AI” OR “Replika”) AND TS = (“antibiotic therapy” OR “antibiotic prescribing” OR “antimicrobial stewardship” OR “antibiotic stewardship” OR “antibiotherapy”).
3. PubMed: (chatbot* OR “conversational agent*” OR “artificial intelligence” OR ChatGPT OR LLMs OR “Bard AI” OR “Claude AI” OR “Siri” OR “Alexa” OR “Google Assistant” OR “Microsoft Copilot” OR “Anthropic Claude” OR “IBM Watson” OR “Jasper AI” OR “Perplexity AI” OR “Replika”) AND (“antibiotic therapy” OR “antibiotic prescribing” OR “antimicrobial stewardship” OR “antibiotic stewardship” OR “antibiotherapy”).
4. Google Scholar: (“chatbot*” OR “conversational agent*” OR “artificial intelligence” OR “AI” OR “ChatGPT” OR “LLMs” OR “Bard AI” OR “Claude AI”) AND (“antibiotic therapy” OR “antibiotic prescribing” OR “antimicrobial stewardship” OR “antibiotic stewardship” OR “antibiotherapy”) AND (“error reduction” OR “medication errors” OR “prescribing errors” OR “clinical decision support” OR “patient education”).
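The pattern behind the four formulas above (OR within a keyword group, AND between groups, quoting for multi-word phrases) can be sketched programmatically. The helper below is an illustrative assumption of ours, not part of the review's methodology; actual database syntax (field tags such as TITLE-ABS-KEY or TS, and wildcard handling) still varies by platform and must be added per database.

```python
# Illustrative sketch: assembling a PubMed-style Boolean query string
# from keyword groups like those listed above. The or_group helper and
# the keyword lists are examples; real database syntax varies.

def or_group(terms):
    """Join terms with OR, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

ai_terms = ["chatbot*", "conversational agent*",
            "artificial intelligence", "ChatGPT", "LLMs"]
therapy_terms = ["antibiotic therapy", "antibiotic prescribing",
                 "antimicrobial stewardship", "antibiotic stewardship",
                 "antibiotherapy"]

# AND between groups narrows the search; OR within a group broadens it.
query = " AND ".join([or_group(ai_terms), or_group(therapy_terms)])
print(query)
```

Running this produces a query string in the same shape as formula 3 (PubMed) above, which can then be adapted with database-specific field tags.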
  • Inclusion and Exclusion Criteria
The selection of studies for inclusion in this review was guided by specific inclusion and exclusion criteria to ensure relevance and quality. We included articles published in English that were reviews, clinical studies, or original research articles focusing on artificial intelligence, chatbots, antibiotic therapy, antimicrobial stewardship, and related clinical decision support systems.
While no specific time restriction was set to capture both foundational and recent studies, emphasis was placed on literature published within the last five years to ensure relevance to current technologies.
Exclusion criteria comprised non-English publications, articles without accessible full texts, studies not directly related to the application of AI chatbots in antibiotic therapy or antimicrobial stewardship, and opinion pieces, editorials, and conference abstracts without accompanying full papers.
To enhance the comprehensiveness of this review, backward and forward citation tracking was employed; reference lists of included articles were examined to identify additional relevant studies; and citation databases were used to find newer articles citing the included studies.
  • Rationale for Design and Data Synthesis
This review employs a narrative approach instead of a systematic review due to the heterogeneity of available literature and the emerging nature of AI chatbot technologies in antibiotic therapy. The existing research spans diverse disciplines, including computer science, clinical medicine, and public health, utilizing varied methodologies, study designs, and outcome measures, which complicates standardization under systematic review protocols. Additionally, many studies are descriptive or proof-of-concept, lacking the structured outcomes required for meta-analytical synthesis. A narrative synthesis was therefore chosen as the most appropriate method to integrate findings, identify trends, address challenges, and suggest future directions. Although a formal quality appraisal was not performed, the included studies were evaluated for methodological soundness and relevance to the research question, with key information extracted on study design, sample size, AI chatbot characteristics, clinical applications, outcomes, benefits, challenges, and limitations. The narrative synthesis approach enabled a comprehensive overview of the current evidence, balancing both established knowledge and emerging developments in this rapidly evolving field.

3. Current Trends in Antimicrobial Resistance: Recent Data and the Need for Innovative Solutions

Recent analyses underscore the devastating global impact of antimicrobial resistance (AMR). Significant disparities in AMR-related mortality across regions highlight the need for tailored, region-specific strategies. Globally, the AMR-attributable death rate was approximately 14.5 per 100,000 in 2021, with projections suggesting an increase to 20.4 by 2050. When comparing world regions, Central Europe, Eastern Europe, and Central Asia reported an AMR-attributable death rate of around 15.3 per 100,000 in 2021, with an anticipated rise to 20.8 by 2050. In contrast, South Asia exhibited a notably higher rate of 18.1 per 100,000 in 2021, with projections indicating a significant increase to 28.8 by 2050 [21] (Figure 2).
When considering specific pathogens, in the European region, a systematic review focusing on drug-resistant bloodstream infections (BSIs) identified alarmingly high mortality odds ratios for pathogens like carbapenem-resistant Klebsiella pneumoniae and vancomycin-resistant enterococci, underscoring the need for targeted, pathogen-specific interventions [29]. Meanwhile, the European Centre for Disease Prevention and Control (ECDC) reports a mixed picture: MRSA rates are declining, yet carbapenem-resistant Klebsiella pneumoniae are on the rise and surpassing reduction targets set for 2030 [30]. Data from the World Health Organization’s (WHO) Global Antimicrobial Resistance and Use Surveillance System (GLASS) add more complexity, revealing critical gaps in testing and infrastructure—especially in low- and middle-income countries—making it clear that what works in one region may not apply in another [31].
The COVID-19 pandemic has further complicated this landscape. Some studies found no overall surge in Gram-positive resistance, yet subtle upticks in Gram-negative resistance emerged in places lacking robust prevention measures [32]. In intensive care units (ICUs), the challenge is even greater: timely, broad-spectrum antibiotics are often necessary to save lives, but without careful de-escalation and pharmacokinetic-pharmacodynamic optimization, such strategies risk fueling AMR [33]. Pediatric care faces its own hurdles. Many community hospitals lack pediatric-specific data and expertise, forcing them to rely on evidence-based principles and stewardship frameworks that can reduce adverse events like Clostridioides difficile infections and improve safety [34,35].
Geographical disparity plays a pivotal role. In Asia, antimicrobial stewardship (AMS) programs must adapt to resource constraints, limited microbiological data, and varying levels of staff awareness to effectively curb AMR [36]. Elsewhere, low- and middle-income settings grapple with self-medication, poor infrastructure, and rampant suboptimal prescribing, yet success stories exist. By starting with modest goals—like cutting carbapenem use or developing locally relevant guidelines—and building toward more complex interventions, even under-resourced hospitals can make progress [36,37]. The outpatient realm requires equally careful tactics: strategies like the “Five Ds” (right diagnosis, drug, dose, duration, and de-escalation) in managing urinary tract infections can trim down unnecessary prescriptions [38]. Better diagnostics, like reflex urine cultures or modified reporting, help distinguish symptomatic infections from asymptomatic bacteriuria, guiding more appropriate antibiotic use.
This intricate web of AMR challenges extends to critical conditions such as multidrug-resistant sepsis. Here, the stakes are life-and-death, and standard approaches struggle against organisms like carbapenem-resistant Enterobacteriaceae [39]. Advanced diagnostics, coupled with real-time surveillance and stewardship, become indispensable. Improving patient outcomes, reducing healthcare costs, and trimming hospital stays hinge on early, targeted therapy that anticipates resistance patterns. Traditional methods alone cannot keep pace with these evolving threats. Instead, what emerges is a call for innovative tools, including artificial intelligence and personalized treatments, as well as novel preventive measures like vaccines and monoclonal antibodies [29].
In the end, these findings [29,30,31,32,33,34,35,36,37,38,39] converge on a single truth: AMR is not just a clinical or microbiological issue; it is a multifaceted global health emergency that demands adaptive, data-driven, and context-specific solutions. Established stewardship principles remain fundamental, but accelerating trends and mounting complexity mean we must also embrace novel diagnostics, interdisciplinary collaboration, and advanced decision-support tools. Artificial intelligence and other cutting-edge interventions hold the promise of integrating diverse data streams—from local resistance patterns to patient history—into coherent, actionable recommendations. Achieving this synergy is challenging, yet essential. This is the path forward if we aim to outrun the evolution of drug-resistant pathogens and restore a semblance of control over the use of our most precious therapeutic resources.
These emerging trends in antimicrobial resistance, coupled with the heterogeneous success of current interventions, highlight a pressing need for advanced, adaptable solutions. It is within this context that AI-driven chatbots—capable of rapidly integrating diverse data streams and providing near-real-time recommendations—may offer a strategic advantage. In the sections that follow, we will examine how these systems function, where they excel, and what obstacles must be overcome to realize their full potential in optimizing antibiotic therapy.

4. AI-Based Chatbots: From Design Principles to Practical Applications

4.1. What Are AI-Based Chatbots?

AI-based chatbots, such as ChatGPT, Gemini, and Claude, are complex systems designed to simulate human dialogue. Building on foundational research in conversational artificial intelligence dating back to 1966, these systems demonstrate significant potential as continuous advancements enhance their capabilities and broaden their applications [40,41,42]. These chatbots operate on the principle of processing input information and generating corresponding output [43]. To function effectively, they rely on large language models, such as GPT (Generative Pre-trained Transformer), which use deep learning algorithms trained on immense datasets containing billions of words [44,45,46]. This training enables them to predict relevant information and generate contextually appropriate responses based on the input they receive. Specifically designed for conversational tasks, these systems excel at answering queries and assisting with a wide range of activities [47,48].

4.2. How Do These Models Work?

By leveraging the transformer architecture introduced by Vaswani et al., AI systems utilize self-attention mechanisms to weigh the importance of each word in a sequence. This allows the model to capture contextual relationships effectively, enabling it to understand linguistic nuances and dependencies over long text spans [49]. In practical terms, this means that while chatbots can parse lengthy clinical notes, their lack of genuine clinical reasoning may cause them to miss subtle safety cues, reinforcing the importance of complementary human expertise.
Before becoming operational, transformer-based AI systems undergo a critical step known as pre-training. During this phase, the model is exposed to massive amounts of data, allowing it to learn grammar, semantics, and general world knowledge. Following pre-training, these systems undergo fine-tuning, which adapts them for specific applications, such as chatbots (conversational agents). This phase often employs reinforcement learning with human feedback (RLHF) to align the system’s responses with user expectations and ensure higher-quality outputs [5,50].
These AI systems process inputs as tokenized text sequences, where tokens represent fragments of data such as words. These tokens are embedded into high-dimensional vectors, which are then processed iteratively by the transformer’s algorithm. Through multiple layers of self-attention and feed-forward networks, the model generates contextually coherent responses one token at a time, building outputs until the sequence is complete [49,51].
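The embedding-and-attention pipeline described above can be illustrated with a minimal, single-head scaled dot-product attention computation. This is a toy sketch with random numbers standing in for learned weights, not the code of any production model:

```python
import numpy as np

# Toy example: 4 "tokens" embedded as 8-dimensional vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))           # token embeddings (seq_len x d_model)

# Projection matrices are random here; in a real model they are learned
# during pre-training.
Wq = rng.normal(size=(8, 8))
Wk = rng.normal(size=(8, 8))
Wv = rng.normal(size=(8, 8))
Q, K, V = X @ Wq, X @ Wk, X @ Wv      # queries, keys, values

# Scaled dot-product attention: each token weighs every other token.
scores = Q @ K.T / np.sqrt(K.shape[-1])
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
output = weights @ V                  # context-aware token representations
```

Each row of `weights` is a probability distribution over the sequence, which is how a token's representation comes to depend on distant context; stacking many such layers (plus feed-forward networks) yields the long-range dependency modeling discussed above.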

4.3. Capabilities and Limitations of AI-Based Chatbots

Transformer-based AI models, such as GPT, are particularly adept at generating coherent and contextually appropriate text. They excel in tasks like summarizing information, translating languages, and answering questions. These capabilities are made possible by their ability to model long-range dependencies and capture nuanced linguistic structures effectively [5,51,52,53,54].
However, despite their impressive performance, these models have notable limitations. They lack intrinsic understanding or reasoning, functioning as statistical systems that generate predictions based on patterns in their training data. As a result, any flaws, biases, or limitations in the training data are inherently carried over into the model’s outputs. These systems cannot recognize or correct errors in the information they produce [43,50].
A major issue with transformer-based AI systems is their susceptibility to hallucinations—outputs that lack a basis in the input data or reality. These “hallucinations” occur when the model generates plausible-sounding but factually incorrect or fictional information. Such outputs can undermine trust in the system and pose challenges, particularly in high-stakes applications like medicine or legal analysis [55,56,57].
Understanding these underlying computational principles is not merely technical background; it reveals why chatbots can generate relevant, context-aware suggestions yet still struggle with complex clinical reasoning. For example, while self-attention mechanisms enable the model to parse long clinical notes and flag potential antibiotic choices, the absence of true clinical understanding explains why the system may overlook critical safety cues or fail to adjust therapy as patient conditions evolve.

4.4. Practical Applications of AI-Based Chatbots in Healthcare

Beyond their theoretical application in optimizing antimicrobial therapy, large language models implemented as chatbots demonstrate broader potential, which is discussed briefly in this section. Despite being in the early stages of development, tools like ChatGPT are already being explored for practical uses, including supporting clinical decision-making, improving medical education, and minimizing errors.
AI-based decision support systems capable of processing dynamic text data in real-time offer significant potential. By evaluating incoming information as it is received, these systems ensure decision-makers have access to the most current and relevant data. Additionally, their ability to integrate diverse dynamic text sources enables a comprehensive analysis of information from multiple channels. Leveraging advanced algorithms, these systems can effectively interpret data to support well-informed decision-making [58,59,60,61].
In a systematic review by Frangoudes et al. (2021), chatbots functioning as virtual patients were shown to play a valuable role in medical education. These systems provide real-time feedback during interactions, allowing medical students to improve their clinical reasoning and communication skills. The review highlights that virtual patient chatbots have been used to simulate realistic patient encounters, enabling students to practice history-taking, diagnostic reasoning, and empathy in a controlled environment. Notably, studies included in the review emphasize the benefits of automatic feedback modules, which help students refine their skills through repeated scenarios and diverse case simulations. However, the review also notes the limitations of chatbot systems, including their reliance on predefined question-answer patterns and challenges in generating highly naturalistic dialogue, which may hinder deeper learning experiences for advanced users [61].
When comparing the performance of AI chatbots (ChatGPT-4o and Claude-3) against Family Medicine residents, findings published by Huang et al. (2024) suggest that, although AI chatbots can process vast amounts of medical information and provide consistent responses, their current capabilities in reducing diagnostic errors are limited. The prevalence of logical errors highlights the need for further refinement in their reasoning algorithms. Therefore, while AI chatbots hold potential as supplementary tools in medical education and practice, they should not replace human judgment, especially in complex cases involving diagnostic uncertainty [62].
In their narrative review, Abavisani et al. examine the potential of AI-driven chatbots in addressing antibiotic resistance, highlighting their capacity to enhance clinical workflows [16]. These AI systems integrate seamlessly with electronic health records, offering patient-specific antibiotic recommendations and supporting antimicrobial stewardship by reducing inappropriate prescriptions. Furthermore, they promote evidence-based decision-making and improve adherence to clinical guidelines. Despite these advancements, significant gaps remain. The review lacks detailed discussion on the cost-effectiveness of chatbots in resource-limited settings, cultural barriers to adoption, and their integration within multidisciplinary care teams. While emphasizing the need to address biased or incomplete training data and the inability to replicate human clinicians’ nuanced reasoning, the review provides limited insight into pilot programs or real-world implementation strategies.
Building on these findings, this review seeks to expand the discussion by incorporating underexplored areas such as algorithmic refinements tailored to diverse healthcare settings and novel models designed to mitigate resistance patterns in localized scenarios. Additionally, the potential of chatbots in facilitating collaborative care, particularly in team-based settings, and their use in remote areas with limited access to specialists is explored. By addressing these gaps and contextualizing AI advancements within specific case studies, this comprehensive review contributes to the advancement of our current understanding of the practical application of AI chatbots in antibiotic therapy.

5. The Use of AI-Based Chatbots in Antibiotic Therapy

While AI-based chatbots have shown promise in various aspects of healthcare, their application in antibiotic therapy remains relatively underexplored. A comprehensive search of relevant databases yielded only four experimental studies directly evaluating the use of chatbots in this domain [17,18,19,20] (Table 1). This paucity of research highlights a significant gap in the literature and underscores the urgent need to investigate how AI-driven chatbots can optimize antibiotic use. In the following section, we will delve into these studies, examining their findings and implications for clinical practice and future research.
Maillard et al. (2024) evaluate the role of ChatGPT-4 as a decision-support tool in managing bloodstream infections (BSIs), comparing its recommendations with those of infectious disease consultants [20]. Conducted in a tertiary care hospital, this prospective study analyzed 44 cases of BSIs, focusing on diagnostic accuracy, treatment planning, and follow-up care. While ChatGPT-4 provided satisfactory diagnostic workups in 80% of cases, its antibiotic therapy recommendations were often suboptimal, with harmful suggestions in up to 16% of cases. The study highlights the chatbot’s potential in generating structured management plans but underscores significant safety concerns, particularly for severe infections. Key limitations include a single-center design, reliance on standardized prompts, and challenges with ambiguous clinical data. Despite these limitations, the findings demonstrate the need for further refinement and integration of chatbots into clinical workflows, emphasizing their role as supplementary tools under expert supervision. Future research should focus on multicenter validation, improved dataset diversity, and hybrid models combining AI outputs with expert oversight [20].
The study by De Vito et al. (2024) evaluates the theoretical knowledge and prescriptive accuracy of ChatGPT-4 in managing bacterial infections compared to infectious disease residents and specialists [19]. The researchers assessed 72 questions across four domains: true/false queries, open-ended questions, and clinical cases with antibiograms related to endocarditis, bloodstream infections, pneumonia, and intra-abdominal infections. The questions, designed with varying difficulty levels, were reviewed by blinded experts for accuracy, completeness, and clinical relevance. Key findings revealed that ChatGPT-4 and its trained version performed comparably to human participants in theoretical questions, with correct answers in approximately 70% of cases. For open-ended questions, ChatGPT-4 demonstrated higher accuracy and completeness than residents and specialists, particularly when using the trained model. However, in clinical case management, ChatGPT-4 struggled with interpreting antibiograms and often recommended outdated treatments, such as colistin over newer options. Both ChatGPT-4 versions exhibited a tendency to overtreat and recommend unnecessarily long treatment durations. The study highlights the potential of AI tools like ChatGPT in enhancing medical education and providing preliminary diagnostic insights. However, limitations such as reliance on hypothetical data, single-center design, and lack of nuanced clinical reasoning underscore the necessity of human oversight. Future research should focus on refining AI algorithms for real-world applications, including expanding training datasets and validating these tools in multicenter trials. This study underscores the complementary role of AI in healthcare, particularly for education and support, while reaffirming the irreplaceable value of expert clinical judgment [19].
In a study conducted by Sarink et al. (2023), the capabilities and limitations of ChatGPT version 3.5 were examined, more specifically in providing antimicrobial recommendations for real-world infection scenarios [18]. The researchers evaluated ChatGPT’s responses based on criteria such as appropriateness, safety, consistency, and adherence to antimicrobial stewardship principles. The findings revealed that ChatGPT demonstrated an ability to understand and summarize clinical scenarios effectively when provided with explicit details. The model generated coherent, grammatically sound responses that often included disclaimers recommending consultation with a specialist. However, critical limitations were identified. ChatGPT frequently failed to distinguish between important and unimportant clinical factors, showed inconsistency when re-asked similar questions, and exhibited “failure modes” that led to the repeated provision of unsafe advice. Notably, the model often overlooked clinical safety cues and nuanced considerations such as the duration of therapy and the implications of source control. The authors highlighted that while ChatGPT possesses sufficient training data to generate plausible recommendations, its lack of situational awareness and inferential reasoning poses significant barriers to safe clinical implementation. These deficits underscore the model’s unreliability in complex medical decision-making, as it frequently misinterpreted scenarios of increasing complexity. To address these issues, the study proposed a qualitative assessment framework aimed at guiding future safety evaluations of AI systems across medical specialties. The strengths of the study include its focus on practical clinical scenarios and the systematic analysis of ChatGPT’s performance. 
Limitations involve the focus on non-chronic cases, variability in the information provided to ChatGPT, and differences in clinician-written clinical scenarios, which could influence reproducibility despite a high inter-reader reliability rate. The findings align with broader concerns regarding AI’s readiness for unsupervised application in medicine, contrasting with studies that show greater AI reliability in structured tasks like medical examination questions. The authors emphasize that future research should focus on refining AI systems to improve situational awareness and integrating interdisciplinary expertise to ensure safe and effective use in clinical practice. Given the rapid evolution of generative AI, understanding its implications for patient care is of urgent importance [18].
A study by Howard et al. (2023) evaluated ChatGPT’s performance in providing antimicrobial advice using eight hypothetical infection scenario-based questions [17]. The responses were assessed across key parameters, including appropriateness, consistency, safety, and adherence to antimicrobial stewardship principles, culminating in the development of an LLM medical safety assessment framework. The authors highlighted ChatGPT’s ability to recognize natural language, generate coherent and accurately summarized responses, and offer management options accompanied by disclaimers outlining the limitations of its recommendations. However, significant limitations were identified, such as the inability to differentiate between important and less relevant clinical factors and frequent omissions of critical considerations in increasingly complex scenarios. While the regimens proposed by ChatGPT were generally appropriate for the diagnoses and demonstrated correct antimicrobial spectrum selection, therapy duration recommendations varied inconsistently. ChatGPT often assumed that antimicrobial choice was the primary issue, potentially reflecting biases in the presented queries. Additionally, its handling of contraindications was inconsistent, and it sometimes entered “failure modes,” providing unsafe advice despite repeated corrections. Most responses included disclaimers advising consultation with infection specialists, but the lack of situational awareness, inference capabilities, and consistency were identified as barriers to clinical implementation. The study underscores the potential of ChatGPT as a decision-support tool while emphasizing the need for further refinement, particularly in managing complex clinical scenarios, and suggests that interdisciplinary expertise will be essential for its safe integration into medical practice [17].
These studies indicate that while chatbots can offer structured, evidence-based recommendations, human expertise remains indispensable. Infectious disease specialists, pharmacists, and general clinicians must remain vigilant, reviewing and interpreting chatbot outputs, particularly in complex or uncertain scenarios. The integration of AI should thus be seen as a collaborative partnership rather than a replacement—human judgment ensures that data-driven suggestions align with patient-specific nuances and uphold the highest standards of care.

6. Benefits of Chatbots in Optimizing Antibiotic Therapy

The integration of AI-driven systems, such as ChatGPT, into antimicrobial stewardship programs demonstrates potential in addressing key challenges in antibiotic prescribing. Studies by Maillard et al. (2024) and De Vito et al. (2024) emphasize ChatGPT’s ability to provide structured recommendations and preliminary diagnostic insights, showcasing its potential to reduce errors in prescribing through real-time decision support and guideline adherence [19,20]. However, significant limitations, including inconsistent handling of clinical nuances and unsafe recommendations in complex cases, highlight the need for human oversight and algorithmic refinement. Research by Sarink et al. (2023) further identifies ChatGPT’s struggles with situational awareness and inferential reasoning, which are critical for distinguishing between clinically relevant factors, ensuring adherence to antibiotic stewardship program principles, and tailoring recommendations to specific scenarios [18]. While Howard et al. (2023) illustrate ChatGPT’s competency in generating coherent responses and addressing antimicrobial spectrum selection, its inconsistency in therapy duration recommendations and handling of contraindications underlines the challenges of achieving reliable clinical implementation [17]. Collectively, these studies underscore the utility of AI in supporting rapid decision-making, especially in emergencies, and its promise in resource-limited settings by providing accessible diagnostic support. Additionally, its ability to enhance education for both clinicians and patients on responsible antibiotic use demonstrates its value as a complementary tool in antimicrobial stewardship efforts, albeit with the necessity of expert supervision to ensure safety and efficacy.

7. Beyond Chatbots: Other AI Applications in Optimizing Antibiotic Therapy

Chatbots have only recently gained prominence within an extensive and rapidly growing body of research on AI applications for optimizing antibiotic therapy. We therefore briefly outline other AI applications to highlight the breadth of ongoing work in this field.
Machine learning algorithms are being integrated with clinical microbiology to streamline diagnostic workflows. For instance, AI-based systems incorporating mass spectrometry data and machine learning can expedite the detection of multidrug-resistant pathogens, significantly reducing turnaround times for diagnostic decisions while improving treatment outcomes [63,64,65]. These tools enhance antimicrobial stewardship by predicting resistance patterns and optimizing antibiotic selection, especially in critical care settings where timely decisions are paramount [66,67].
In pediatric medicine, AI models are particularly useful for combating antimicrobial resistance. They support tailored antimicrobial stewardship strategies, identifying resistance trends and guiding appropriate therapy [34,68,69]. Predictive analytics driven by AI also allow for individualized treatment recommendations, factoring in patient-specific data like comorbidities, demographics, and infection history. This precision facilitates better empiric antibiotic choices, minimizing overprescription and the selection of broad-spectrum antibiotics [64,67].
Furthermore, AI-driven platforms are integrating real-time data from electronic health records to support empirical therapy decisions, leveraging local epidemiological trends to refine antimicrobial stewardship practices. These platforms contribute to early detection and intervention, which are crucial in managing infections in resource-limited settings [66,67,70]. However, barriers remain, including the need for extensive validation, regulatory frameworks, and equitable access to these technologies to ensure their effectiveness across diverse clinical contexts [63,68]. This comprehensive view of AI applications underscores its transformative potential while calling for multidisciplinary collaboration to address implementation challenges.
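To make the use of local epidemiological trends concrete, the following is a minimal sketch (the function names and data layout are illustrative assumptions, not any cited platform) of how an empirical-therapy aid might rank antibiotic options from a local antibiogram feed, suggesting only drugs whose local susceptibility meets a stewardship threshold:

```python
from collections import defaultdict

def susceptibility_rates(isolates):
    """Compute per-drug susceptibility rates.

    isolates: iterable of (drug_name, susceptible: bool) test results
    drawn from the local antibiogram.
    """
    tested = defaultdict(int)
    susceptible = defaultdict(int)
    for drug, is_susceptible in isolates:
        tested[drug] += 1
        susceptible[drug] += int(is_susceptible)
    return {drug: susceptible[drug] / tested[drug] for drug in tested}

def suggest_empiric(isolates, threshold=0.9):
    """Return drugs meeting the local susceptibility threshold, best-first.

    An empty list signals that no option is reliable enough locally,
    i.e., escalate to a specialist rather than guess.
    """
    rates = susceptibility_rates(isolates)
    candidates = [(rate, drug) for drug, rate in rates.items() if rate >= threshold]
    return [drug for rate, drug in sorted(candidates, reverse=True)]
```

Even a sketch this small illustrates the design point made above: recommendations shift automatically as local resistance data evolve, while the threshold encodes a stewardship policy that remains under human control.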

8. Challenges and Limitations

Algorithmic bias, data confidentiality concerns, and insufficient clinical validation are critical barriers to the effective integration of AI-based chatbots like ChatGPT in optimizing antibiotic therapy. Algorithmic bias arises from non-representative training datasets that often fail to account for diverse patient populations or prioritize high-resource settings, resulting in recommendations poorly suited for underrepresented groups or low-resource environments. Even with unbiased data, structural limitations in algorithms can amplify disparities, leading to skewed outputs that undermine trust among clinicians and patients. These challenges are exacerbated by the opacity of many AI systems, which operate as “black boxes”, making it difficult to trace and rectify biases. Mitigating these issues requires robust validation methods, diverse datasets, and interdisciplinary oversight to ensure equitable AI implementation [71,72,73].
Equally important, patient data confidentiality presents another significant concern, as compliance with regulations like the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA) is paramount [74,75,76]. AI systems risk privacy breaches through improper anonymization or storage of sensitive information for fine-tuning, eroding user trust. Addressing these challenges requires transparent data handling, adherence to international privacy standards, and explainable AI frameworks to safeguard sensitive information (Figure 3) [77].
The integration of AI-based chatbots like ChatGPT in healthcare, particularly in optimizing antibiotic therapy, faces several legal, technical, and institutional challenges that demand attention. Responsibility for errors arising from AI use remains contentious; under the EU Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR), manufacturers are primarily accountable for the safety and intended use of AI-based medical devices. However, healthcare providers deploying these tools share liability, especially when AI recommendations are applied beyond approved operational boundaries, highlighting the need for legal clarity on shared accountability between developers and clinicians. Regulatory frameworks, including the MDR, IVDR, GDPR in Europe, and the FDA’s lifecycle approach in the U.S., underscore the necessity for rigorous testing, certification, and post-market surveillance of AI systems, ensuring their safe and effective deployment. These regulations also demand transparency and adherence to pre-approved algorithm change protocols, particularly for adaptive AI systems that evolve during real-world use [78,79,80,81,82].
Compounding these challenges is the lack of extensive clinical validation and large-scale studies to confirm the efficacy and safety of chatbots in real-world settings. Many existing studies rely on hypothetical data, failing to capture clinical complexities, which hinders the establishment of benchmarks for evaluating AI performance. Without comprehensive validation, reliance on these systems risks perpetuating errors, particularly in high-stakes contexts such as antibiotic stewardship. Addressing these gaps demands rigorous multicenter trials and interdisciplinary collaboration to ensure these tools transition from experimental technologies to reliable components of healthcare delivery [29,35].
Encouragingly, pilot programs in some institutions are testing frameworks for unbiased dataset curation, including the use of representative patient cohorts and continuous model retraining against diverse clinical data. Similarly, privacy-preserving techniques, such as federated learning and differential privacy, allow models to improve without centralizing sensitive patient information [59,83,84,85,86,87]. Early collaborations between AI developers, ethicists, and legal experts are beginning to produce guidance documents and standardized operating procedures, helping clarify liability issues and ensure compliance with regulatory frameworks like the MDR, IVDR, and FDA’s lifecycle approach. Such practical steps illustrate that while the challenges are substantial, they are not insurmountable.
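The differential-privacy technique mentioned above can be sketched in a few lines: before an aggregate statistic (for example, a local count of resistant isolates) leaves an institution, calibrated Laplace noise is added so that no individual patient's contribution can be inferred. The function names and the epsilon value below are illustrative assumptions, not part of any cited system:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy.

    Adding or removing one patient changes the count by at most
    `sensitivity`, so Laplace noise with scale sensitivity/epsilon
    masks any single individual's contribution.
    """
    return true_count + laplace_noise(sensitivity / epsilon)

# Example: a hospital reports roughly how many isolates were resistant
# this month without exposing the exact patient-level figure.
noisy_count = private_count(true_count=42, epsilon=1.0)
```

In a federated setting, each institution would release only such noised aggregates (or model updates) to the central model, which is what allows improvement without centralizing sensitive records.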

Proposed Strategies to Address AI Chatbot Limitations

Building on our earlier discussion of key limitations, we suggest a series of targeted measures—ranging from diverse training datasets and human oversight in high-stakes cases to refined prompt-engineering and privacy-preserving techniques—that together can enhance chatbot reliability, reduce algorithmic bias, prevent unsafe recommendations, minimize hallucinations, and safeguard patient data.
  • Algorithmic Bias. Training datasets that underrepresent certain patient populations can produce skewed recommendations and exacerbate health disparities. To reduce bias, AI chatbots need the following:
    Dataset Expansion: Collaborations across diverse institutions to include various demographics and clinical contexts.
    Regular Testing: Frequent evaluations with representative patient cohorts.
    Feedback Loops: Clinicians and pharmacists flag questionable outputs, prompting updates to training processes.
  • Unsafe Advice/Missed Clinical Nuances. Chatbots can overlook key patient factors or propose outdated therapies, underscoring the need for human oversight. Suggested fixes are as follows:
    Safety Checks: Automated alerts for allergies, interactions, or guideline mismatches.
    Specialist Review: Infectious disease experts or pharmacists approve final suggestions, especially in high-stakes scenarios.
    Contextual Prompts: Structured reminders for comorbidities, patient age, and recent antibiotic history.
  • Hallucinations and Misinformation. When chatbots confidently provide incorrect information, major clinical risks arise. Mitigation approaches include the following:
    Model Refinement: Carefully crafted prompts or limiting response scope.
    Step-by-Step Reasoning: Documenting the model’s reasoning to spot errors.
    Validation Layers: Cross-checking outputs against trusted sources (antibiograms, guidelines).
  • Data Privacy and Confidentiality. Compliance with regulations like GDPR and HIPAA is essential. Protective methods include the following:
    Federated Learning: Training models locally at each institution without centralizing sensitive data.
    Differential Privacy: Introducing controlled “noise” to prevent re-identification.
    Secure Enclaves: Using encrypted, access-controlled environments for AI model tuning.
By adopting these solutions—ranging from diverse data and specialist oversight to refined prompt-engineering and secure data handling—AI chatbots can become safer, more accurate, and more trusted aids in clinical practice.
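The "Safety Checks" item above can be sketched as a thin validation layer that intercepts a chatbot's suggested regimen and flags allergy conflicts or age-related cautions before anything reaches the prescriber. The drug-class map, thresholds, and function names here are simplified illustrations, not a clinical knowledge base:

```python
from dataclasses import dataclass, field

# Illustrative drug-to-class map; a real system would query a curated
# allergy/interaction knowledge base, not a hard-coded dictionary.
DRUG_CLASSES = {
    "amoxicillin": "penicillin",
    "piperacillin-tazobactam": "penicillin",
    "ceftriaxone": "cephalosporin",
    "ciprofloxacin": "fluoroquinolone",
}

@dataclass
class Patient:
    age: int
    allergies: set = field(default_factory=set)  # drug classes

def safety_check(patient: Patient, suggested_drug: str) -> list:
    """Return human-readable warnings; an empty list means no flag raised.

    This layer does NOT approve a regimen: per the review, a specialist
    or pharmacist still reviews every high-stakes recommendation.
    """
    warnings = []
    drug_class = DRUG_CLASSES.get(suggested_drug.lower())
    if drug_class is None:
        warnings.append(f"'{suggested_drug}' not in formulary map; manual review required")
    elif drug_class in patient.allergies:
        warnings.append(f"ALLERGY: {suggested_drug} is a {drug_class}; patient allergic")
    if drug_class == "fluoroquinolone" and patient.age < 18:
        warnings.append("Fluoroquinolones generally avoided in pediatric patients")
    return warnings

# A chatbot suggestion is surfaced to the clinician together with any flags.
flags = safety_check(Patient(age=12, allergies={"penicillin"}), "ciprofloxacin")
```

The design choice worth noting is that the check only raises warnings for human review; it never silently substitutes a drug, which keeps the specialist-oversight principle above intact.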

9. Conclusions

This comprehensive narrative review reveals that AI-based chatbots, particularly ChatGPT, hold considerable promise in optimizing antibiotic therapy. By assisting with evidence-based antibiotic selection, offering structured treatment recommendations, supporting preliminary diagnostics, and providing educational guidance, these systems can enhance adherence to clinical guidelines and strengthen antimicrobial stewardship. When appropriately integrated, chatbots may help non-specialist clinicians make more informed decisions and improve patient outcomes, ultimately contributing to global efforts against antimicrobial resistance.
Despite these encouraging prospects, several challenges must be addressed before these tools can be fully realized in clinical practice. The tendency of chatbots to mishandle complex clinical nuances or produce unsafe recommendations in difficult cases underscores the need for continuous human oversight. Concerns regarding algorithmic bias, privacy, and legal accountability also demand careful consideration. Ensuring compliance with regulations like GDPR and HIPAA, establishing transparent development practices, and clarifying liability for AI-driven guidance are all essential steps in fostering trust and safeguarding patient safety.
Even so, emerging data emphasize the constructive role AI can play. For example, models like ChatGPT-4 have achieved a satisfactory level of accuracy in diagnosing bloodstream infections and recommending empirical antibiotic therapies, approximately 64% in one study [20]. Another study highlighted ChatGPT’s ability to provide nuanced responses to theoretical and open-ended clinical queries, at times outperforming infectious disease residents on open-ended questions, though it struggled with antibiogram interpretation and with adapting to newer treatment guidelines [19]. While AI tools like ChatGPT can offer valuable insights, they still occasionally fail to manage complex clinical scenarios without expert supervision [17]. Nonetheless, these initial successes underscore AI’s potential to enhance guideline adherence, streamline decision-making, and possibly shorten the time to effective treatment.
As the technology advances, ongoing refinement of algorithms, robust clinical validation, interdisciplinary collaboration, and vigilant regulatory oversight will be crucial. Such efforts can help AI-driven chatbots evolve into reliable adjunct tools that complement human expertise and strengthen the global response to AMR, transforming what might seem like a high-tech promise into a tangible, enduring asset in healthcare.

10. Future Perspectives

To fully realize the potential of AI-based chatbots in optimizing antibiotic therapy, several critical areas demand sustained attention. First and foremost, refining the underlying algorithms and validating them clinically are essential steps. This includes improving the chatbot’s situational awareness so that it can better interpret complex clinical scenarios. In parallel, robust multicenter clinical trials are needed to confirm both efficacy and safety, focusing on prescribing error rates, infection resolution times, and the progression of antimicrobial resistance. Such studies should compare chatbot-generated recommendations with those of infectious disease specialists, taking into account factors such as time to correct antibiotic selection, cost-effectiveness, and adherence to official guidelines. By clarifying these outcomes, AI solutions can move from mere pilot projects to well-established tools in patient care.
Equally important is addressing the ethical and legal landscape. Strong data privacy and security measures, aligned with regulations like GDPR and HIPAA, must be embedded from the outset. Clinicians and patients alike need transparent, explainable AI systems they can trust, and clear guidance on accountability for AI-assisted clinical decisions is urgently needed to establish confidence and reliability in these tools.
On a practical level, interdisciplinary collaboration should become the norm. Clinicians, AI developers, ethicists, and policymakers must work together to develop and refine these technologies. Education and training for healthcare professionals will be key to helping them leverage AI chatbots as supportive instruments rather than stand-alone replacements for human judgment.
Integrating chatbots into existing clinical workflows will also require thoughtful design. Seamless incorporation into electronic health records, coupled with user-friendly interfaces, can help clinicians across varying levels of technological proficiency engage with the tool effectively. Meanwhile, a global perspective is crucial. Tailoring AI solutions to resource-limited settings, enabling offline functionality, providing language localization, and ensuring diverse training datasets can help reduce health disparities and extend the benefits of AI chatbots to underserved populations.
Looking ahead, AI chatbots have the potential to significantly enhance antibiotic selection and clinical outcomes by integrating real-time guideline updates, personalized local-resistance data, and advanced decision-support tools. For instance, in pediatric settings—where age-specific dosing and resistance patterns differ from adults—chatbots could deliver tailored, guideline-based recommendations aimed at minimizing broad-spectrum antibiotic use and reducing adverse events. Similarly, in critical care units, AI models capable of rapidly analyzing patient comorbidities, microbiological data, and pharmacokinetic-pharmacodynamic factors could help prevent overtreatment and shorten the time to effective therapy. To achieve these gains, however, future refinements must address the current limitations uncovered in recent studies: AI chatbots will require ongoing access to robust, high-quality datasets, continuous oversight by specialists, and transparent mechanisms to adapt recommendations when evidence evolves. By bridging these technical and practical gaps, AI chatbots stand to become indispensable adjuncts to clinical teams, offering timely and context-aware insights that can optimize antibiotic use and ultimately improve patient outcomes.
Regulatory compliance and standardization form the final cornerstone of progress. Close collaboration with bodies like the FDA and EMA is needed to establish clear standards for the approval, monitoring, and ongoing assessment of AI-based medical devices. Continuous post-market surveillance will allow for rapid identification and resolution of any emerging issues, thus promoting a stable and credible environment for these innovations. Consortia formed by industry, academia, and regulatory agencies might further unify testing protocols, ensuring transparency and adaptability throughout a model’s lifecycle. Meanwhile, structured educational programs—workshops, online courses, interactive simulations—will equip clinicians to interpret AI-driven recommendations critically, fostering trust and thoughtful use.
Over time, iterative improvements in data governance, such as using federated learning to preserve privacy, and cautious pilot implementations in controlled clinical environments will guide the steady, informed introduction of AI chatbots into routine practice. By following these paths, the field can move beyond mere proof-of-concept, ultimately codifying best practices, influencing policy, and supporting global alliances that transform what might seem like a high-tech promise into a tangible, enduring reality in healthcare.

Author Contributions

Conceptualization, N.I.A. and G.G.; methodology, N.I.A.; software, N.I.A.; validation, L.-C.T., V.A.I. and C.C.D.; formal analysis, C.C.D.; investigation, N.I.A. and G.G.; resources, N.I.A.; data curation, V.A.I. and L.-C.T.; writing—original draft preparation, N.I.A.; writing—review and editing, G.G. and C.C.D.; visualization, G.G.; supervision, C.C.D.; project administration, N.I.A.; funding acquisition, G.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study, as it is a narrative review of previously published research and does not involve new data collection from human subjects.

Informed Consent Statement

Not applicable. This study is a narrative review and does not involve research participants or newly collected data from human subjects.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jiang, F.; Jiang, Y.; Zhi, H.; Dong, Y.; Li, H.; Ma, S.; Wang, Y.; Dong, Q.; Shen, H.; Wang, Y. Artificial Intelligence in Healthcare: Past, Present and Future. Stroke Vasc. Neurol. 2017, 2, 230–243. [Google Scholar] [CrossRef] [PubMed]
  2. ChatGPT. Available online: https://openai.com/chatgpt/overview/ (accessed on 21 November 2024).
  3. Gemini—Chat to Inspire Your Ideas. Available online: https://gemini.google.com (accessed on 21 November 2024).
  4. Meet Claude. Available online: https://www.anthropic.com/claude (accessed on 21 November 2024).
  5. Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language Models Are Few-Shot Learners. Adv. Neural Inf. Process. Syst. 2020, 33, 1877–1901. [Google Scholar]
  6. Voice Mode FAQ|OpenAI Help Center. Available online: https://help.openai.com/en/articles/8400625-voice-mode-faq (accessed on 21 November 2024).
  7. Zhang, Y.; Sun, S.; Galley, M.; Chen, Y.-C.; Brockett, C.; Gao, X.; Gao, J.; Liu, J.; Dolan, B. DialoGPT: Large-Scale Generative Pre-Training for Conversational Response Generation. arXiv 2020, arXiv:1911.00536. [Google Scholar]
  8. Hoy, M.B. Alexa, Siri, Cortana, and More: An Introduction to Voice Assistants. Med. Ref. Serv. Q. 2018, 37, 81–88. [Google Scholar] [CrossRef] [PubMed]
  9. Ruksakulpiwat, S.; Kumar, A.; Ajibade, A. Using ChatGPT in Medical Research: Current Status and Future Directions. J. Multidiscip. Healthc. 2023, 16, 1513–1520. [Google Scholar] [CrossRef]
  10. Omarov, B.; Narynov, S.; Zhumanov, Z. Artificial Intelligence-Enabled Chatbots in Mental Health: A Systematic Review. Comput. Mater. Contin. 2022, 74, 5105–5122. [Google Scholar] [CrossRef]
  11. Grassini, E.; Buzzi, M.; Leporini, B.; Vozna, A. A Systematic Review of Chatbots in Inclusive Healthcare: Insights from the Last 5 Years. Univers. Access Inf. Soc. 2024, 1–9. [Google Scholar] [CrossRef]
  12. Casu, M.; Triscari, S.; Battiato, S.; Guarnera, L.; Caponnetto, P. AI Chatbots for Mental Health: A Scoping Review of Effectiveness, Feasibility, and Applications. Appl. Sci. 2024, 14, 5889. [Google Scholar] [CrossRef]
  13. Laumer, S.; Maier, C.; Gubler, F. Chatbot acceptance in healthcare: Explaining user adoption of conversational agents for disease diagnosis. In Proceedings of the 27th European Conference on Information Systems (ECIS), Stockholm, Sweden, 8–14 June 2019. Research Papers. [Google Scholar]
  14. Webster, P. Virtual Health Care in the Era of COVID-19. Lancet 2020, 395, 1180–1181. [Google Scholar] [CrossRef]
  15. Zeng, F.; Liang, X.; Chen, Z. New Roles for Clinicians in the Age of Artificial Intelligence. BIO Integr. 2024, 1, 113–117. [Google Scholar] [CrossRef]
  16. Abavisani, M.; Khoshrou, A.; Karbas Foroushan, S.; Sahebkar, A. Chatting with Artificial Intelligence to Combat Antibiotic Resistance: Opportunities and Challenges. Curr. Res. Biotechnol. 2024, 7, 100197. [Google Scholar] [CrossRef]
  17. Howard, A.; Hope, W.; Gerada, A. ChatGPT and Antimicrobial Advice: The End of the Consulting Infection Doctor? Lancet Infect. Dis. 2023, 23, 405–406. [Google Scholar] [CrossRef] [PubMed]
  18. Sarink, M.J.; Bakker, I.L.; Anas, A.A.; Yusuf, E. A Study on the Performance of ChatGPT in Infectious Diseases Clinical Consultation. Clin. Microbiol. Infect. 2023, 29, 1088–1089. [Google Scholar] [CrossRef] [PubMed]
  19. De Vito, A.; Geremia, N.; Marino, A.; Bavaro, D.F.; Caruana, G.; Meschiari, M.; Colpani, A.; Mazzitelli, M.; Scaglione, V.; Venanzi Rullo, E.; et al. Assessing ChatGPT’s Theoretical Knowledge and Prescriptive Accuracy in Bacterial Infections: A Comparative Study with Infectious Diseases Residents and Specialists. Infection 2024, 1–9. [Google Scholar] [CrossRef] [PubMed]
  20. Maillard, A.; Micheli, G.; Lefevre, L.; Guyonnet, C.; Poyart, C.; Canouï, E.; Belan, M.; Charlier, C. Can Chatbot Artificial Intelligence Replace Infectious Diseases Physicians in the Management of Bloodstream Infections? A Prospective Cohort Study. Clin. Infect. Dis. 2024, 78, 825–832. [Google Scholar] [CrossRef]
  21. Naghavi, M.; Vollset, S.E.; Ikuta, K.S.; Swetschinski, L.R.; Gray, A.P.; Wool, E.E.; Aguilar, G.R.; Mestrovic, T.; Smith, G.; Han, C.; et al. Global Burden of Bacterial Antimicrobial Resistance 1990–2021: A Systematic Analysis with Forecasts to 2050. Lancet 2024, 404, 1199–1226. [Google Scholar] [CrossRef]
  22. Murray, C.J.L.; Ikuta, K.S.; Sharara, F.; Swetschinski, L.; Robles Aguilar, G.; Gray, A.; Han, C.; Bisignano, C.; Rao, P.; Wool, E.; et al. Global Burden of Bacterial Antimicrobial Resistance in 2019: A Systematic Analysis. Lancet 2022, 399, 629–655. [Google Scholar] [CrossRef]
  23. Cascella, M.; Montomoli, J.; Bellini, V.; Bignami, E. Evaluating the Feasibility of ChatGPT in Healthcare: An Analysis of Multiple Clinical and Research Scenarios. J. Med. Syst. 2023, 47, 33. [Google Scholar] [CrossRef]
  24. Lee, P.; Bubeck, S.; Petro, J. Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine. N. Engl. J. Med. 2023, 388, 1233–1239. [Google Scholar] [CrossRef]
  25. Shah, N.H.; Entwistle, D.; Pfeffer, M.A. Creation and Adoption of Large Language Models in Medicine. JAMA 2023, 330, 866–869. [Google Scholar] [CrossRef]
  26. Wojcik, G.; Ring, N.; McCulloch, C.; Willis, D.S.; Williams, B.; Kydonaki, K. Understanding the Complexities of Antibiotic Prescribing Behaviour in Acute Hospitals: A Systematic Review and Meta-Ethnography. Arch. Public Health 2021, 79, 134. [Google Scholar] [CrossRef] [PubMed]
  27. Harun, M.G.D.; Sumon, S.A.; Hasan, I.; Akther, F.M.; Islam, M.S.; Anwar, M.M.U. Barriers, Facilitators, Perceptions and Impact of Interventions in Implementing Antimicrobial Stewardship Programs in Hospitals of Low-Middle and Middle Countries: A Scoping Review. Antimicrob. Resist. Infect. Control 2024, 13, 8. [Google Scholar] [CrossRef] [PubMed]
  28. Serban, G.P.R. Consumul de Antibiotice, Rezistența Microbiană și Infecții Asociate Asistenței Medicale în România—2018 [Antibiotic Consumption, Antimicrobial Resistance and Healthcare-Associated Infections in Romania—2018]. Available online: https://insp.gov.ro/download/CNSCBT/docman-files/Analiza%20date%20supraveghere/infectii_asociate_asistentei_medicale/Consumul-de-antibiotice-rezistenta-microbiana-si-infectiile-asociate-asistentei-medicale-Romania-2018.pdf (accessed on 30 November 2024).
  29. Hassoun-Kheir, N.; Guedes, M.; Ngo Nsoga, M.-T.; Argante, L.; Arieti, F.; Gladstone, B.P.; Kingston, R.; Naylor, N.R.; Pezzani, M.D.; Pouwels, K.B.; et al. A Systematic Review on the Excess Health Risk of Antibiotic-Resistant Bloodstream Infections for Six Key Pathogens in Europe. Clin. Microbiol. Infect. 2024, 30, S14–S25. [Google Scholar] [CrossRef] [PubMed]
  30. European Centre for Disease Prevention and Control. Antimicrobial Resistance in the EU/EEA (EARS-Net). In Annual Epidemiological Report for 2023; ECDC: Solna, Sweden, 2023. [Google Scholar]
  31. Ajulo, S. Global Antimicrobial Resistance and Use Surveillance System (GLASS) Report 2022, 1st ed.; World Health Organization: Geneva, Switzerland, 2022; ISBN 978-92-4-006270-2. [Google Scholar]
  32. Langford, B.J.; Soucy, J.-P.R.; Leung, V.; So, M.; Kwan, A.T.H.; Portnoff, J.S.; Bertagnolio, S.; Raybardhan, S.; MacFadden, D.R.; Daneman, N. Antibiotic Resistance Associated with the COVID-19 Pandemic: A Systematic Review and Meta-Analysis. Clin. Microbiol. Infect. 2023, 29, 302–309. [Google Scholar] [CrossRef]
  33. Luyt, C.-E.; Bréchot, N.; Trouillet, J.-L.; Chastre, J. Antibiotic Stewardship in the Intensive Care Unit. Crit. Care 2014, 18, 480. [Google Scholar] [CrossRef]
  34. Hyun, D.Y.; Hersh, A.L.; Namtu, K.; Palazzi, D.L.; Maples, H.D.; Newland, J.G.; Saiman, L. Antimicrobial Stewardship in Pediatrics: How Every Pediatrician Can Be a Steward. JAMA Pediatr. 2013, 167, 859–866. [Google Scholar] [CrossRef]
  35. Donà, D.; Barbieri, E.; Daverio, M.; Lundin, R.; Giaquinto, C.; Zaoutis, T.; Sharland, M. Implementation and Impact of Pediatric Antimicrobial Stewardship Programs: A Systematic Scoping Review. Antimicrob. Resist. Infect. Control 2020, 9, 3. [Google Scholar] [CrossRef]
  36. Apisarnthanarak, A.; Kwa, A.L.-H.; Chiu, C.-H.; Kumar, S.; Thu, L.T.A.; Tan, B.H.; Zong, Z.; Chuang, Y.C.; Karuniawati, A.; Tayzon, M.F.; et al. Antimicrobial Stewardship for Acute-Care Hospitals: An Asian Perspective. Infect. Control Hosp. Epidemiol. 2018, 39, 1237–1245. [Google Scholar] [CrossRef]
  37. Godman, B.; Egwuenu, A.; Haque, M.; Malande, O.O.; Schellack, N.; Kumar, S.; Saleem, Z.; Sneddon, J.; Hoxha, I.; Islam, S.; et al. Strategies to Improve Antimicrobial Utilization with a Special Focus on Developing Countries. Life 2021, 11, 528. [Google Scholar] [CrossRef]
  38. The Five Ds of Outpatient Antibiotic Stewardship for Urinary Tract Infections. Available online: https://journals.asm.org/doi/epdf/10.1128/cmr.00003-20?src=getftr&utm_source=scopus&getft_integrator=scopus (accessed on 9 December 2024).
  39. Kumar, N.R.; Balraj, T.A.; Kempegowda, S.N.; Prashant, A. Multidrug-Resistant Sepsis: A Critical Healthcare Challenge. Antibiotics 2024, 13, 46. [Google Scholar] [CrossRef]
  40. Shum, H.; He, X.; Li, D. From Eliza to XiaoIce: Challenges and Opportunities with Social Chatbots. Front. Inf. Technol. Electron. Eng. 2018, 19, 10–26. [Google Scholar] [CrossRef]
  41. OpenAI; Achiam, J.; Adler, S.; Agarwal, S.; Ahmad, L.; Akkaya, I.; Aleman, F.L.; Almeida, D.; Altenschmidt, J.; Altman, S.; et al. GPT-4 Technical Report. arXiv 2024, arXiv:2303.08774. [Google Scholar]
  42. Weizenbaum, J. ELIZA—A Computer Program for the Study of Natural Language Communication between Man and Machine. Commun. ACM 1966, 9, 36–45. [Google Scholar] [CrossRef]
  43. Bhattacharya, P.; Prasad, V.K.; Verma, A.; Gupta, D.; Sapsomboon, A.; Viriyasitavat, W.; Dhiman, G. Demystifying ChatGPT: An In-Depth Survey of OpenAI’s Robust Large Language Models. Arch. Comput. Methods Eng. 2024, 31, 4557–4600. [Google Scholar] [CrossRef]
  44. Bansal, G.; Chamola, V.; Hussain, A.; Guizani, M.; Niyato, D. Transforming Conversations with AI—A Comprehensive Study of ChatGPT. Cogn. Comput. 2024, 16, 2487–2510. [Google Scholar] [CrossRef]
  45. Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; Liu, P.J. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. J. Mach. Learn. Res. 2020, 21, 1–67. [Google Scholar]
  46. Gao, L.; Biderman, S.; Black, S.; Golding, L.; Hoppe, T.; Foster, C.; Phang, J.; He, H.; Thite, A.; Nabeshima, N.; et al. The Pile: An 800GB Dataset of Diverse Text for Language Modeling. arXiv 2020, arXiv:2101.00027. [Google Scholar]
  47. Kumar, V.; Srivastava, P.; Dwivedi, A.; Budhiraja, I.; Ghosh, D.; Goyal, V.; Arora, R. Large-Language-Models (LLM)-Based AI Chatbots: Architecture, In-Depth Analysis and Their Performance Evaluation. In Proceedings of the Recent Trends in Image Processing and Pattern Recognition, Bidar, India, 16–17 December 2016; Santosh, K., Makkar, A., Conway, M., Singh, A.K., Vacavant, A., Abou el Kalam, A., Bouguelia, M.-R., Hegadi, R., Eds.; Springer Nature: Cham, Switzerland, 2024; pp. 237–249. [Google Scholar]
  48. Leiter, C.; Zhang, R.; Chen, Y.; Belouadi, J.; Larionov, D.; Fresen, V.; Eger, S. ChatGPT: A Meta-Analysis after 2.5 Months. Mach. Learn. Appl. 2024, 16, 100540. [Google Scholar] [CrossRef]
  49. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  50. Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.L.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. Training Language Models to Follow Instructions with Human Feedback. In Proceedings of the 36th Conference on Neural Information Processing Systems, New Orleans, LA, USA, 28 November–9 December 2022. [Google Scholar]
  51. Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I. Language Models Are Unsupervised Multitask Learners. OpenAI Blog 2019, 1, 9. [Google Scholar]
  52. Tenney, I.; Das, D.; Pavlick, E. BERT Rediscovers the Classical NLP Pipeline. arXiv 2019, arXiv:1905.05950. [Google Scholar]
  53. Dai, Z.; Yang, Z.; Yang, Y.; Carbonell, J.; Le, Q.V.; Salakhutdinov, R. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. arXiv 2019, arXiv:1901.02860. [Google Scholar]
  54. Clark, K.; Khandelwal, U.; Levy, O.; Manning, C.D. What Does BERT Look At? An Analysis of BERT’s Attention. arXiv 2019, arXiv:1906.04341. [Google Scholar]
  55. Ji, Z.; Lee, N.; Frieske, R.; Yu, T.; Su, D.; Xu, Y.; Ishii, E.; Bang, Y.J.; Madotto, A.; Fung, P. Survey of Hallucination in Natural Language Generation. ACM Comput. Surv. 2023, 55, 1–38. [Google Scholar] [CrossRef]
  56. Maynez, J.; Narayan, S.; Bohnet, B.; McDonald, R. On Faithfulness and Factuality in Abstractive Summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020; Jurafsky, D., Chai, J., Schluter, N., Tetreault, J., Eds.; Association for Computational Linguistics, 2020; pp. 1906–1919. [Google Scholar]
  57. Ahmad, M.A.; Yaramis, I.; Roy, T.D. Creating Trustworthy LLMs: Dealing with Hallucinations in Healthcare AI. arXiv 2023, arXiv:2311.01463. [Google Scholar]
  58. Islam, A.; Chang, K. Real-Time AI-Based Informational Decision-Making Support System Utilizing Dynamic Text Sources. Appl. Sci. 2021, 11, 6237. [Google Scholar] [CrossRef]
  59. Esteva, A.; Robicquet, A.; Ramsundar, B.; Kuleshov, V.; DePristo, M.; Chou, K.; Cui, C.; Corrado, G.; Thrun, S.; Dean, J. A Guide to Deep Learning in Healthcare. Nat. Med. 2019, 25, 24–29. [Google Scholar] [CrossRef]
  60. Berner, E.S.; La Lande, T.J. Overview of Clinical Decision Support Systems. In Clinical Decision Support Systems; Springer: New York, NY, USA, 2007; pp. 3–22. ISBN 978-0-387-38319-4. [Google Scholar]
  61. Frangoudes, F.; Hadjiaros, M.; Schiza, E.C.; Matsangidou, M.; Tsivitanidou, O.; Neokleous, K. An Overview of the Use of Chatbots in Medical and Healthcare Education. In Proceedings of the Learning and Collaboration Technologies: Games and Virtual Environments for Learning, Virtual, 24–29 July 2021; Zaphiris, P., Ioannou, A., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 170–184. [Google Scholar]
  62. Huang, R.S.; Benour, A.; Kemppainen, J.; Leung, F.-H. The Future of AI Clinicians: Assessing the Modern Standard of Chatbots and Their Approach to Diagnostic Uncertainty. BMC Med. Educ. 2024, 24, 1133. [Google Scholar] [CrossRef]
  63. Pinto-de-Sá, R.; Sousa-Pinto, B.; Costa-de-Oliveira, S. Brave New World of Artificial Intelligence: Its Use in Antimicrobial Stewardship—A Systematic Review. Antibiotics 2024, 13, 307. [Google Scholar] [CrossRef]
  64. Chang, A.; Chen, J.H. BSAC Vanguard Series: Artificial Intelligence and Antibiotic Stewardship. J. Antimicrob. Chemother. 2022, 77, 1216–1217. [Google Scholar] [CrossRef]
  65. Peiffer-Smadja, N.; Dellière, S.; Rodriguez, C.; Birgand, G.; Lescure, F.-X.; Fourati, S.; Ruppé, E. Machine Learning in the Clinical Microbiology Laboratory: Has the Time Come for Routine Practice? Clin. Microbiol. Infect. 2020, 26, 1300–1309. [Google Scholar] [CrossRef]
  66. Vandenberg, O.; Durand, G.; Hallin, M.; Diefenbach, A.; Gant, V.; Murray, P.; Kozlakidis, Z.; van Belkum, A. Consolidation of Clinical Microbiology Laboratories and Introduction of Transformative Technologies. Clin. Microbiol. Rev. 2020, 33. [Google Scholar] [CrossRef]
  67. Feretzakis, G.; Loupelis, E.; Sakagianni, A.; Kalles, D.; Martsoukou, M.; Lada, M.; Skarmoutsou, N.; Christopoulos, C.; Valakis, K.; Velentza, A.; et al. Using Machine Learning Techniques to Aid Empirical Antibiotic Therapy Decisions in the Intensive Care Unit of a General Hospital in Greece. Antibiotics 2020, 9, 50. [Google Scholar] [CrossRef] [PubMed]
  68. Fanelli, U.; Pappalardo, M.; Chinè, V.; Gismondi, P.; Neglia, C.; Argentiero, A.; Calderaro, A.; Prati, A.; Esposito, S. Role of Artificial Intelligence in Fighting Antimicrobial Resistance in Pediatrics. Antibiotics 2020, 9, 767. [Google Scholar] [CrossRef] [PubMed]
  69. Oonsivilai, M.; Mo, Y.; Luangasanatip, N.; Lubell, Y.; Miliya, T.; Tan, P.; Loeuk, L.; Turner, P.; Cooper, B.S. Using Machine Learning to Guide Targeted and Locally Tailored Empiric Antibiotic Prescribing in a Children’s Hospital in Cambodia. Wellcome Open Res. 2018, 3, 131. [Google Scholar] [CrossRef] [PubMed]
  70. Coelho, J.R.; Carriço, J.A.; Knight, D.; Martínez, J.-L.; Morrissey, I.; Oggioni, M.R.; Freitas, A.T. The Use of Machine Learning Methodologies to Analyse Antibiotic and Biocide Susceptibility in Staphylococcus Aureus. PLoS ONE 2013, 8, e55582. [Google Scholar] [CrossRef] [PubMed]
  71. Wang, C.; Liu, S.; Yang, H.; Guo, J.; Wu, Y.; Liu, J. Ethical Considerations of Using ChatGPT in Health Care. J. Med. Internet Res. 2023, 25, e48009. [Google Scholar] [CrossRef]
  72. Topol, E.J. High-Performance Medicine: The Convergence of Human and Artificial Intelligence. Nat. Med. 2019, 25, 44–56. [Google Scholar] [CrossRef]
  73. Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; Galstyan, A. A Survey on Bias and Fairness in Machine Learning. ACM Comput. Surv. 2021, 54, 1–35. [Google Scholar] [CrossRef]
  74. HIPAA vs. GDPR Compliance: What’s the Difference? Available online: https://www.onetrust.com/blog/hipaa-vs-gdpr-compliance/ (accessed on 2 December 2024).
  75. Office for Civil Rights (OCR). Health Information Privacy. Available online: https://www.hhs.gov/hipaa/index.html (accessed on 9 December 2024).
  76. General Data Protection Regulation (GDPR)—Legal Text. Available online: https://gdpr-info.eu/ (accessed on 9 December 2024).
  77. Zhou, J.; Müller, H.; Holzinger, A.; Chen, F. Ethical ChatGPT: Concerns, Challenges, and Commandments. Electronics 2024, 13, 3417. [Google Scholar] [CrossRef]
  78. Center for Devices and Radiological Health. Artificial Intelligence and Machine Learning in Software as a Medical Device. FDA. 2024. Available online: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device (accessed on 3 December 2024).
  79. European Commission; Joint Research Centre. AI Watch: Defining Artificial Intelligence: Towards an Operational Definition and Taxonomy of Artificial Intelligence; Publications Office: Luxembourg, 2020. [Google Scholar]
  80. Regulation—2017/746—EN—Medical Device Regulation—EUR-Lex. Available online: https://eur-lex.europa.eu/eli/reg/2017/746/oj (accessed on 9 December 2024).
  81. European Union. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on Medical Devices, Amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and Repealing Council Directives 90/385/EEC and 93/42/EEC (Text with EEA Relevance). Off. J. Eur. Union 2017, 117, 1–175. [Google Scholar]
  82. Gerke, S.; Minssen, T.; Cohen, G. Chapter 12—Ethical and Legal Challenges of Artificial Intelligence-Driven Healthcare. In Artificial Intelligence in Healthcare; Bohr, A., Memarzadeh, K., Eds.; Academic Press: Cambridge, MA, USA, 2020; pp. 295–336. ISBN 978-0-12-818438-7. [Google Scholar]
  83. Alam, K.; Kumar, A.; Samiullah, F.N.U. Prospectives and Drawbacks of ChatGPT in Healthcare and Clinical Medicine. In AI and Ethics; Springer: Berlin/Heidelberg, Germany, 2024; pp. 1–7. [Google Scholar] [CrossRef]
  84. Sujan, M.; Smith-Frazer, C.; Malamateniou, C.; Connor, J.; Gardner, A.; Unsworth, H.; Husain, H. Validation Framework for the Use of AI in Healthcare: Overview of the New British Standard BS30440. BMJ Health Care Inform. 2023, 30, e100749. [Google Scholar] [CrossRef]
  85. Sheller, M.J.; Edwards, B.; Reina, G.A.; Martin, J.; Pati, S.; Kotrotsou, A.; Milchenko, M.; Xu, W.; Marcus, D.; Colen, R.R.; et al. Federated Learning in Medicine: Facilitating Multi-Institutional Collaborations without Sharing Patient Data. Sci. Rep. 2020, 10, 12598. [Google Scholar] [CrossRef] [PubMed]
  86. Rieke, N.; Hancox, J.; Li, W.; Milletarì, F.; Roth, H.R.; Albarqouni, S.; Bakas, S.; Galtier, M.N.; Landman, B.A.; Maier-Hein, K.; et al. The Future of Digital Health with Federated Learning. NPJ Digit. Med. 2020, 3, 119. [Google Scholar] [CrossRef] [PubMed]
  87. Chen, I.Y.; Joshi, S.; Ghassemi, M. Treating Health Disparities with Artificial Intelligence. Nat. Med. 2020, 26, 16–17. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Conceptual Framework of AI Chatbot Integration into Antibiotic Therapy Decision-Making. This schematic illustrates how AI-driven chatbots, integrated with clinical data sources and supported by human expertise, can guide antibiotic prescribing. Starting from the patient’s presentation of acute infection, non-specialist clinicians access the chatbot to receive evidence-based recommendations derived from patient records, local resistance patterns, and established guidelines. Infectious disease specialists and pharmacists review the chatbot’s suggestions, providing necessary oversight to refine treatment plans and ensure adherence to stewardship principles. Feedback loops, including outcome monitoring and expert input, continuously inform algorithm refinement. In this way, AI chatbots serve as adjunct decision-support tools, working in tandem with human judgment to optimize antibiotic therapy and improve patient outcomes.
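The adjunct decision-support loop described in the Figure 1 caption can be sketched in code. This is a minimal illustrative sketch, not a real clinical system: the function names, the 20% resistance threshold, and the guideline lookup are all hypothetical assumptions chosen to show the chatbot-recommendation, specialist-review, and outcome-feedback stages.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An antibiotic suggestion produced by the chatbot (illustrative)."""
    antibiotic: str
    rationale: str
    approved: bool = False

def chatbot_recommend(presentation: str, local_resistance: dict, guidelines: dict) -> Recommendation:
    """Stand-in for the LLM step: combine the presentation, local resistance
    patterns, and guidelines into an evidence-based suggestion."""
    first_line = guidelines.get(presentation, "consult specialist")
    # Hypothetical rule: escalate when local resistance to the first-line
    # agent exceeds 20% (threshold is an assumption, not a guideline value).
    if local_resistance.get(first_line, 0.0) > 0.2:
        return Recommendation("broad-spectrum alternative",
                              f"local resistance to {first_line} above 20%")
    return Recommendation(first_line, "guideline first-line, low local resistance")

def specialist_review(rec: Recommendation) -> Recommendation:
    """Human-oversight step: an ID specialist or pharmacist approves or revises."""
    rec.approved = True  # in practice a human decision, never automatic
    return rec

# Feedback loop: outcomes are logged and later inform algorithm refinement.
outcome_log: list[tuple[Recommendation, str]] = []

rec = chatbot_recommend(
    "community-acquired pneumonia",
    local_resistance={"amoxicillin": 0.05},
    guidelines={"community-acquired pneumonia": "amoxicillin"},
)
rec = specialist_review(rec)
outcome_log.append((rec, "patient improved"))
```

The key design point mirrored from the figure is that the chatbot's output is never terminal: it passes through a human review step, and logged outcomes close the loop back to model refinement.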
Figure 2. Comparison of AMR-Attributable Death Rates per 100,000 in 2021 and 2050 by region. This figure compares AMR-attributable death rates per 100,000 population across global regions for the years 2021 (yellow) and 2050 (orange), highlighting significant regional disparities and projected trends. The data were adapted from [21].
Figure 3. Privacy and Compliance Measures in Federated Learning for AI Models. The data flow diagram illustrates how patient data is securely processed and integrated into an AI model using federated learning, ensuring compliance with privacy regulations like GDPR and HIPAA. It highlights each step in the data pipeline, ensuring transparency and accountability in data handling.
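The federated-learning setup in Figure 3 can be illustrated with a toy federated-averaging loop: each hospital fits a model on its own records and shares only model weights with the server, so raw patient data never leave the institution. Everything below is a deliberately simplified assumption — a one-parameter least-squares model, a fixed learning rate, and invented per-site data — intended only to show the local-update/aggregate pattern.

```python
def local_update(weights, local_data, lr=0.1):
    """One gradient step of least-squares y ~ w*x on site-local data.
    Only the resulting weight, never the data, is returned to the server."""
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in local_data) / len(local_data)
    return [w - lr * grad]

def federated_average(site_weights, site_sizes):
    """Server-side aggregation: average updates weighted by local dataset size."""
    total = sum(site_sizes)
    return [sum(w[0] * n for w, n in zip(site_weights, site_sizes)) / total]

# Two hospitals with private datasets (invented values); the true relation
# at site A is y = 2x and at site B roughly y = 2.1x.
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(1.0, 2.1), (3.0, 6.3)]

global_w = [0.0]
for _ in range(50):
    updates = [local_update(global_w, site_a), local_update(global_w, site_b)]
    global_w = federated_average(updates, [len(site_a), len(site_b)])
# global_w converges to a value between the two sites' local optima (~2.07)
```

A production system would add the compliance layers the figure emphasizes (consent tracking, audit logging, secure aggregation), but the privacy property rests on this same structure: gradients or weights cross institutional boundaries, patient records do not.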
Table 1. Key Findings from Studies on AI Chatbots in Antibiotic Therapy. This table presents an overview of selected studies evaluating AI chatbots in managing infectious diseases. It includes details on study design, infection types, primary outcomes, key limitations, and implications for clinical practice.
| Study Reference (Author, Year) | Study Design and Setting | Infection Type | Primary Outcomes | Key Limitations | Implications |
|---|---|---|---|---|---|
| Maillard et al., 2023 [20] | Prospective cohort study, tertiary hospital | Bloodstream infections (BSIs) | 64% adequate empirical therapies; 36% optimal definitive therapies | Inadequate source control in some cases; long treatment durations | Useful as a supplementary tool; requires oversight |
| De Vito et al., 2024 [19] | Comparative study, single center | Various bacterial infections (BSIs, pneumonia, etc.) | 70% accuracy on theoretical questions; limitations in recognizing resistance mechanisms | Preference for older antibiotics; limited guideline alignment | Promising in education; unsuitable for complex decisions |
| Sarink et al., 2023 [18] | Retrospective analysis, tertiary hospital | Positive blood cultures | Mean accuracy 2.8/5, highest for blood cultures | Ambiguous recommendations; occasional factual inaccuracies | Cannot replace clinicians; serves as a diagnostic aid |
| Howard et al., 2023 [17] | Qualitative exploratory research, single center | General antimicrobial advice | Recognized contraindications inconsistently; proposed harmful recommendations | Failures in situational awareness; inconsistent inference | Needs human supervision; risk of dangerous advice |
