Exploring the Role of ChatGPT in Oncology: Providing Information and Support for Cancer Patients

Introduction: Oncological patients face numerous challenges throughout their cancer journey while navigating complex medical information. The advent of AI-based conversational models like ChatGPT (OpenAI, San Francisco) represents an innovation in oncological patient management. Methods: We conducted a comprehensive review of the literature on the use of ChatGPT in providing tailored information and support to patients with various types of cancer, including head and neck, liver, prostate, breast, lung, pancreas, colon, and cervical cancer. Results and Discussion: Our findings indicate that, in most instances, ChatGPT responses were accurate, dependable, and aligned with the expertise of oncology professionals, especially for certain subtypes of cancers like head and neck and prostate cancers. Furthermore, the system demonstrated a remarkable ability to comprehend patients’ emotional responses and offer proactive solutions and advice. Nevertheless, these models have also shown notable limitations and cannot serve as a substitute for the role of a physician under any circumstances. Conclusions: Conversational models like ChatGPT can significantly enhance the overall well-being and empowerment of oncological patients. Both patients and healthcare providers must become well-versed in the advantages and limitations of these emerging technologies.


Introduction
Oncological patients face numerous challenges throughout their cancer journey, ranging from emotional distress and treatment-related side effects to navigating complex medical information.
Gone are the days when patients depended only on their doctors for medical advice. With a simple internet search, patients may educate themselves on symptoms, diseases, and treatment options, becoming more informed and proactive in decisions regarding their health [1].
The Internet has profoundly transformed how patients navigate medical information, reshaping the dynamics of patient empowerment and doctor-patient communication [2]. However, while access to medical knowledge can be useful under certain circumstances, not all online sources are reliable, and patients may encounter incorrect or misleading information, resulting in confusion or erroneous self-diagnosis [3]. For this reason, misinformation and harmful information about cancer remain a significant concern in the online communication environment [4].
Furthermore, the ever-increasing health budget limits and heightened workloads among healthcare professionals have exacerbated the decline in doctor-patient relationships, adversely affecting healthcare accessibility and prognosis [5].
Providing cancer patients with additional tools for a better understanding of their diagnosis and treatment options, as well as adequate emotional support, is critical to ensuring informed decision-making and good outcomes.
In this scenario, the advent of large language models (LLMs) like ChatGPT (OpenAI, San Francisco) and others may represent a cutting-edge innovation in oncological patient management to meet their individualized needs and concerns [6].
The use of artificial intelligence in healthcare is not new, having already demonstrated surprising results in the high-performance analysis of biomedical data through machine learning and deep learning models [7]. However, despite the great prospects, some issues related to reliability, privacy, and patient confidentiality still need to be addressed when integrating these tools into healthcare routines [6,8-10]. This narrative review explores the potential advantages, limitations, and challenges associated with conversational models in supporting cancer patients. Our discussion includes aspects such as the accessibility of the models and the reliability of the information provided, as well as their role in patient empowerment and informed decision-making. We focus on the widely recognized large language model ChatGPT (developed by OpenAI, San Francisco) because consistent literature is available on this topic [6].

Large Language Models
Large language models (LLMs) are sophisticated artificial intelligence systems designed to generate human-like text. They are trained on vast amounts of data and can understand and produce natural language across various tasks, such as translation, summarization, and conversation. Users provide a list of keywords or inquiries, and LLMs generate content about those topics. The user interface generally follows a conversational structure, which cycles between user questions or inputs and system responses or outputs. This design considers previous interactions to emulate human speech effectively [6].
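The alternating input/output cycle described above can be illustrated with a minimal Python sketch. This is not any vendor's actual interface: `send_to_llm` is a hypothetical stand-in for an arbitrary LLM back end, shown only to make explicit how each turn conditions on the full prior history.

```python
# Minimal sketch of a conversational LLM interface: each turn appends to a
# shared history so the model can condition its reply on prior exchanges.
# send_to_llm is a hypothetical callable standing in for any real LLM API.

def make_conversation():
    """Return a fresh, empty conversation history."""
    return []

def add_turn(history, user_input, send_to_llm):
    """Record the user's input, query the model with the FULL history,
    and record the model's reply so later turns can reference it."""
    history.append({"role": "user", "content": user_input})
    reply = send_to_llm(history)  # the model sees all previous turns
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    # Toy stand-in "model" that only reports how much context it received.
    fake_llm = lambda history: f"reply based on {len(history)} prior messages"
    convo = make_conversation()
    print(add_turn(convo, "What is a PSA test?", fake_llm))
    print(add_turn(convo, "When should I repeat it?", fake_llm))
```

The key design point is that the history list, not any single prompt, is what gets sent to the model, which is why a follow-up question like "When should I repeat it?" can be resolved against the earlier turn.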
In November 2022, OpenAI launched ChatGPT, a chatbot built on its GPT series of models (e.g., GPT-3.5 and GPT-4), which generate human-like text for use in chatbot conversations using natural language processing (NLP) technology. Other notable LLMs include Google's PaLM and its chatbot Bard (later renamed Gemini), Meta's LLaMA family of open-source models, and Anthropic's Claude models.
LLMs are AI-driven, deep neural network-based models with a remarkable ability to achieve general-purpose human language generation and understanding [6][7][8][9]. LLMs acquire these skills by learning statistical relationships from text documents in computationally intensive self-supervised and semi-supervised training processes. Generative pre-trained transformer (GPT) language models are built on a transformer architecture, which enables them to process large amounts of text data while producing coherent text outputs by learning the relationships between input and output sequences [10].
The GPT language models have been trained on large datasets of text sourced from websites, books, and online publications. After receiving human feedback and corrections, ChatGPT was trained to produce more logical and contextually relevant answers [11]; this procedure is known as reinforcement learning from human feedback or reinforcement learning from human preference (RLHF/RLHP). Users can type any prompt, and ChatGPT will answer based on the patterns learned from its training data.
Previous research demonstrated that it could produce high-quality and coherent text outputs, react to user questions with unexpectedly intelligent-sounding messages, and perform exceptionally well in question-answer tasks [12]. In the medical area, GPT-4, OpenAI's more advanced successor to the model behind ChatGPT, recently surpassed the passing score on all steps of the US medical licensing exam [13].

Methods
An extensive literature search was performed on PubMed to find relevant publications on the current role and future potential of ChatGPT in cancer patients. We used the following search string: "(cancer OR oncology OR oncological) AND (patients) AND ChatGPT". Furthermore, we carefully examined the references of the included articles to identify further studies worthy of mention.
Our results are presented through a narrative summary and organized as follows: potential benefits, applications in different types of cancer, limitations, and challenges.
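As an illustrative sketch (not part of the original methodology), the search above can be reproduced programmatically through NCBI's public E-utilities interface. The helper below only assembles the `esearch` request URL for the reported search string; any HTTP client can then fetch it to retrieve matching PMIDs.

```python
# Sketch: building a PubMed E-utilities (esearch) URL for the review's
# search string. Assumes only the public NCBI E-utilities endpoint.
from urllib.parse import urlencode

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
SEARCH_STRING = "(cancer OR oncology OR oncological) AND (patients) AND ChatGPT"

def build_pubmed_query(term, retmax=100):
    """Return an esearch URL that queries PubMed for `term`,
    requesting up to `retmax` PMIDs in JSON format."""
    params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    return f"{BASE}?{urlencode(params)}"

url = build_pubmed_query(SEARCH_STRING)
# `url` can now be fetched (e.g., with urllib.request) to obtain PMIDs.
```

Results will differ from the review's, since PubMed's index grows over time; a dated search and fixed `retmax` are the minimum needed for reproducibility.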

Results and Discussion
The potential advantages and limitations of ChatGPT and similar LLMs are presented in Table 1. LLMs represent a potential game changer for individuals with limited access to healthcare resources, particularly in low-income countries: although widespread access to medical information has helped raise the average level of health literacy and well-being expectations, health services remain very unequal and insufficient in many parts of the world [14,15].
As of the time this article was written, the basic version of ChatGPT was free of charge for the public. Given that financial difficulties have been linked to poor health outcomes [16], large language models can help limit the effects of socioeconomic inequalities in cancer treatment by giving everyone fast access to reliable medical information regardless of their location or socioeconomic background [17][18][19][20].
ChatGPT can support underprivileged communities in many ways. First, it communicates in multiple languages, breaking down language barriers that often hinder access to healthcare information. It can remotely deliver essential health information in areas with limited access to healthcare facilities and guide essential self-care practices, including managing chronic conditions and first aid measures. In developing countries, it can help individuals understand healthcare processes, such as insurance enrollment, appointment scheduling, and medication management, improving overall access to care.
Safeguards put in place by the developers to prevent misuse are designed to stop ChatGPT from generating offensive or harmful responses. ChatGPT can, therefore, offer a non-judgmental platform for seeking information on sensitive topics as well, such as sexual health, mental health, and substance abuse, reducing the stigma and cultural barriers that often deter people from seeking help.

Information Provision and Informed Decision-Making
ChatGPT has been trained on a large amount of data, including medical literature [7]. Even if, as discussed further below, the input (training) data constitute a limiting factor for deep learning models' accuracy and trustworthiness, these tools represent a valuable supplement for medical information retrieval and clinical decision-making, both for patients and healthcare practitioners. ChatGPT's conversational style results in more comprehensible responses than primary official sources like guidelines and scientific articles, especially for individuals without expertise in the medical field. Additionally, it streamlines the information search process by presenting only relevant content tailored to the user's query, thus enhancing efficiency and saving time.
LLMs are already active in different areas of clinical practice and can generate differential diagnosis lists for typical clinical scenarios with good diagnostic accuracy [12]. In oncology, by integrating this knowledge into coherent responses, ChatGPT can answer questions related to different types of cancer, including treatment options, potential side effects, and beneficial lifestyle modifications. These models can support patients seeking information about additional examinations, diagnosis, treatment plans, and prognosis, enabling them to make more informed decisions. In breast cancer imaging, for example, ChatGPT performs reasonably well in recommending the next imaging steps for patients requiring breast cancer screening or assessment of breast pain [18]. Whether ChatGPT and other LLMs are adequate guides for patients, who are non-experts in medicine, in navigating the correct diagnostic path remains contentious.
Additionally, ChatGPT can assist in clarifying medical terminology and lexicon, ensuring that patients better comprehend the information provided in medical documents and radiological reports [19].

Emotional Support and Patient Empowerment
A cancer diagnosis can often lead to emotional distress, anxiety, and depression for patients and their caregivers [20,21]. Furthermore, because of time constraints, clinician-patient communication may frequently be neglected, with dramatic consequences for the clinical history and life of cancer patients.
There are good reasons to think that ChatGPT could help bridge this gap.
Research into ChatGPT's ability to provide responses attuned to human emotions such as kindness and empathy has produced impressive results [22]. It may give the impression that generative AI can demonstrate an understanding of human emotions, generating responses and assistance suitable for those who use it. In a recent study by Elyoseph et al. [23], ChatGPT outperformed humans in assessing emotional awareness. It demonstrated the ability to improve intrapersonal and interpersonal understanding, increasing patients' awareness of their own and their family members' emotions. This may provide patients with comfort and help them feel less "alone" [24].
Using natural language processing capabilities, ChatGPT can engage in compassionate conversations, acknowledging patients' emotions and providing emotional support [25].It can also suggest coping strategies, stress management techniques, and even provide referrals to mental health professionals, when necessary.
It helps to build a framework for each individual question presented by patients and caregivers, thus increasing provider efficiency and allowing patients to become more aware of their care. As a result, by providing patients with an additional source of information, this paradigm has the potential to boost patient participation and compliance, promoting patient-centered treatment and effective shared decision-making [11].

Supportive Care
Beyond medical treatment, oncological patients often require support in other aspects of their lives, such as managing relationships and making lifestyle changes to preserve their health.
By offering suggestions for healthy lifestyle modifications, including exercise routines and dietary recommendations, ChatGPT can empower patients to take an active role in their overall well-being [17].
This model has showcased significant potential in aiding home care for orthopaedic patients, suggesting that this tool can play a pivotal role in improving public health policy by providing consistent and trustworthy guidance, especially in settings where access to health services is limited [26].

For Healthcare Practitioners and Medical Students
Tools like ChatGPT could be helpful not only for patients but also for healthcare practitioners [27]. Conversational models can generate user-friendly explanations of medical jargon, treatment alternatives, and potential adverse effects, thereby improving patient literacy and decision-making. Of course, ChatGPT is better suited to text activities like generating summaries, treatment plans, and follow-up recommendations, which doctors may subsequently check. Furthermore, it can facilitate contact with patients from varied linguistic backgrounds by offering real-time language translation services during consultations.
Another potentially beneficial use of tools like ChatGPT is training medical students and residents by simulating patient scenarios, answering medical queries, and providing learning resources [28,29].
Advanced AI-based models could save time for oncologists by handling routine administrative tasks like scheduling appointments, sending reminders, and managing documentation. They can help with patient case documentation by creating summaries of consultations, treatment plans, and follow-up recommendations, streamlining the process of keeping complete and accurate patient records. ChatGPT can generate clinic letters with good overall correctness and humanness ratings and a reading level roughly similar to current real-world human-generated letters, and it has been effectively used to create patient clinic letters [30].
However, considerable caution should be exercised for specific tasks such as retrieving information about the latest research, treatment guidelines, clinical trials related to particular types of cancer, drug interactions, side effects, and dosage information for various cancer medications. Recent studies have shown a lack of consistency when the model is asked to provide a threshold for decision-making or to distinguish which guidelines to follow in a specific setting [11].

Appraisal of Literature on Different Types of Cancer
So far, the literature evaluating the use of ChatGPT in clinical practice is still limited [8], and only a few studies have evaluated its potential for education and advice along the clinical path of oncology [11].

Head and Neck
Kuşcu et al. explored the accuracy and reliability of ChatGPT's responses to questions related to head and neck cancer [31]. A dataset of questions was selected from commonly asked queries from reputable institutions and societies, including the American Head & Neck Society (AHNS), the National Cancer Institute, and the Medline Plus Medical Encyclopedia. These questions underwent an extensive screening process by three authors to determine their suitability for inclusion in the study, focusing primarily on patient-oriented questions to evaluate the effectiveness of the AI model in providing useful information for patients. The study revealed that the majority of ChatGPT responses were accurate, with 86.4% receiving a "complete/correct" rating on the rating scale. Significantly, none of the responses were rated "completely inaccurate/irrelevant". Furthermore, the model showed high reproducibility across all topics and performed consistently, without significant differences between them.
The authors also underlined a substantial limitation of ChatGPT: the version's knowledge cutoff extended only to September 2021, potentially impacting response precision due to the exclusion of data from the previous two years. Moreover, the reliability of ChatGPT is determined by the quality of its training data, and the model's undisclosed sources raise questions about whether the training was based on the most reputable and accurate medical literature. Finally, the latest version of ChatGPT, which demonstrated better performance than the publicly available version, is accessible only through paid membership, potentially restricting public access to more accurate knowledge [31].
A critical opinion regarding the current potential of ChatGPT in answering patient questions comes from the study by Wei et al., who compared the performance of ChatGPT and the Google search engine in addressing common questions related to head and neck cancers [32]. A collection of 49 questions about head and neck cancers was chosen from a series of "People Also Ask" (PAA) question prompts using SearchResponse.io. The study found that, on average, Google sources outperformed ChatGPT responses. Both sources were assessed to be of similar readability difficulty, most likely at the college level. While ChatGPT responses were comparable in complexity to those from Google, they were rated as lower quality due to a drop in reliability and accuracy when answering questions.
According to Wei's assessment, particularly for questions about head and neck cancer, Google sources emerged as the primary option for patient educational resources [32].

Prostate Cancer
Zhu et al. developed a questionnaire aligning with patient education guidelines and their clinical expertise, covering screening, prevention, treatment options, and postoperative complications related to prostate cancer [17].
The questions covered a spectrum from basic to advanced knowledge about prostate cancer. Their investigation involved five large language models, including ChatGPT (Free and Plus versions), YouChat, NeevaAI, Perplexity, and Chatsonic. Assessments revealed that the LLMs excelled in addressing most questions. For instance, they effectively clarified the significance of different PSA levels and emphasized that PSA alone is not a conclusive diagnostic test and that further examinations are recommended. The LLMs also demonstrated effectiveness in detailed comparisons of treatment options, presenting pros and cons and offering informative references to aid patients in making well-informed decisions. Most importantly, in most cases, the models consistently emphasized consulting a doctor.
The accuracy of responses from most LLMs exceeded 90%, with exceptions noted for NeevaAI and Chatsonic. Basic information questions with definite answers generally achieved high accuracy, but accuracy dipped for queries tied to specific scenarios or requiring summarization and analysis. ChatGPT exhibited the highest accuracy rate among the LLMs assessed, with the free version slightly outperforming the paid version.
Zhu et al. also raised the question of whether Internet-connected LLMs would surpass ChatGPT. Notably, AI models relying on search engines, like NeevaAI, often presented literature content without effective summarization and explanation, resulting in poor readability. This observation suggests that model training may be more crucial than real-time Internet connectivity [17].

Hepatocarcinoma
Individuals with cirrhosis and hepatocellular carcinoma (HCC), as well as their caregivers, often have unmet needs and insufficient knowledge regarding the management and prevention of complications associated with the disease. It should not be overlooked that a portion of these patients have troubled personal histories and lack a sufficient socioeconomic support network. Previous research has demonstrated inadequate health literacy among cirrhosis and HCC patients and the favorable impact of focused education [33].
An interesting experience comes from the work of Yeo et al., who evaluated ChatGPT's performance in answering the most frequently asked questions regarding the management and care of patients with cirrhosis and HCC.Conversational model responses were independently scored by two transplant hepatologists and a third reviewer [11].
The study by Yeo et al. found that ChatGPT provided comprehensive or correct but inadequate answers about cirrhosis in approximately three-quarters of the responses analyzed, with better results in categories such as "basic knowledge", "treatment", "lifestyle", and "other". No answer related to cirrhosis was classified as completely incorrect. Regarding HCC, the model excelled in providing detailed information on the knowledge base and potential side effects of various HCC treatments, as well as scientific evidence for lifestyle modifications. However, there were areas where the model did not respond correctly or provided outdated information, especially in diagnosis, where most information was classified as a mix of correct and incorrect or outdated data. For example, while ChatGPT correctly emphasized using abdominal ultrasound as a primary screening tool, it neglected to mention MRI and computed tomography scans for HCC surveillance in patients with ascites. However, ChatGPT accurately identified cirrhosis as an indication for HCC surveillance [11].
Overall, the results were deemed satisfactory, even though only 47.3% of cirrhosis answers and 41.1% of HCC answers were classified as comprehensive, and the system had significant shortcomings in delivering answers about oncological diagnosis. Furthermore, the system could not establish treatment selection criteria or treatment duration, most likely because it lacks access to clinical information on local procedures and recommendations. This confirms the potential significance of ChatGPT and related models in providing universal access to basic medical knowledge, while simultaneously emphasizing the importance of medical consultation during the most essential stages of the diagnostic process.
Yeo et al. also evaluated ChatGPT's responses to questions about coping with psychological stress following an HCC diagnosis.The model acknowledged the patient's probable emotional response to the diagnosis and provided clear and actionable starting points for individuals newly diagnosed with HCC.It offered motivational responses, encouraging proactive steps in managing the diagnosis and treatment strategies [11].

Breast Cancer
Over the last two decades, scientific research and public interest in the two most serious problems linked with breast implants have increased. Significant progress has been made in understanding the rare T-cell lymphoma associated with textured implants.
Liu et al. investigated the suitability of ChatGPT for educating patients on breast implant-associated anaplastic large cell lymphoma (BIA-ALCL) and breast implant illness (BII). They compared the quality of responses and references offered by ChatGPT with those of the Google Bard service. The data demonstrated that ChatGPT outperformed Google Bard in providing high-quality responses to frequently asked queries about BIA-ALCL and BII [34].

Lung Cancer
Rahsepar et al. studied the accuracy of responses provided by ChatGPT-3.5, Google Bard, Bing, and the Google search engine to non-expert questions about lung cancer prevention, screening, and vocabulary in radiology reports [35]. Out of 120 questions, ChatGPT-3.5 answered 70.8% correctly and 17.5% incorrectly. Google Bard did not respond to 23 queries; of the 97 questions it did answer, 62 were correct, 11 had some errors, and 24 were incorrect. Out of 120 questions, Bing gave 61.7% correct, 10.8% mostly correct, and 27.5% incorrect answers. The Google search engine answered all 120 questions, with 55% correct, 22.5% mostly correct, and 22.5% incorrect.
The authors concluded that ChatGPT-3.5 was more likely to give correct or partially correct responses than Google Bard.

Colon Cancer
Regarding colon cancer, ChatGPT was asked 38 questions on prevention, diagnosis, and management twice, and three experts rated the appropriateness of its answers. Twenty-seven ChatGPT answers were rated as "appropriate" by all three experts; overall, at least two of the three experts rated the answers appropriate for 86.8% of questions [36]. Moreover, the ChatGPT responses were largely concordant with the recommendations of the American Society of Colon and Rectal Surgeons.

Pancreatic Cancer
Another study investigated the responses provided by ChatGPT to 30 questions about pancreatic cancer and the pre-surgical, surgical, and post-surgical phases [37]. The response quality was then assessed by 20 surgical oncology experts and rated as 'poor', 'fair', 'good', 'very good', or 'excellent'. Most responses (24/30, 80%) were most frequently graded 'very good' or 'excellent'; in total, 60% of the experts considered ChatGPT a reliable information source, and only 10% thought that the answers provided by ChatGPT could not be compared to those of skilled surgeons. Additionally, 90% of the experts believed that ChatGPT will become the go-to source for online patient information, either completely replacing traditional search engines or at least co-existing with them.

Cervical Cancer
In a study by Hermann et al., when ChatGPT was challenged with questions concerning cervical cancer prevention, management, survivorship, and quality of life, its answers were rated as correct and comprehensive for only 34 of 64 (53.1%) questions, with the worst performance in the treatment category [38].

Radiotherapy
Although the authors did not use ChatGPT, the study by Chow et al. provides an instructive example of the efficiency of comparable conversational models.Their research focused on developing an AI-driven instructional chatbot for interactive learning in radiotherapy, using the IBM Watson Assistant platform [39].
The major purpose of the chatbot was to make it easier to communicate radiotherapy knowledge to people with varied levels of comprehension. The chatbot was created to be user-friendly and to deliver simple explanations in response to user questions regarding radiation. According to the evaluation, most physicians rated the RT Bot's material positively, with 95% of users believing the information to be sufficiently complete.

Limitations and Perspectives
Healthcare professionals must be aware of the limitations of LLMs to ensure responsible and safe use.
Although ChatGPT is free and can benefit underprivileged communities who have difficulty accessing healthcare institutions, it is important to address constraints that remain in many parts of the world, such as limited Internet connection and low digital literacy.
Conversational models can be essential tools for physicians in providing general information and context, but they should not be relied on for medical advice. ChatGPT does not offer references (or, if it does, they are not necessarily correct) [40]. Furthermore, it is limited to information available up to its knowledge cutoff date. It does not have real-time updates, so it might not be aware of the latest medical breakthroughs, treatments, guidelines, or changes to regulations and laws. Since the model is trained on a diverse range of Internet texts, which may include biased or outdated information, it may produce biased responses or recommendations that do not reflect the most current and evidence-based medical practices; different sources may even reach different conclusions. Relying on such outputs overlooks the current limitations in data accuracy, the evolving nature of medical knowledge, and the need for expert oversight.
From the patient's perspective, one of the potentially most harmful outcomes of the inappropriate use of ChatGPT is its ability to provide confidently stated yet incorrect answers [41], and it may be susceptible to what are termed "hallucinations", wherein information is fabricated rather than grounded in facts [42]. On the open web, the average user often finds it easier to discern reliable sources, such as those affiliated with reputable healthcare institutions or scientific organizations. Conversely, identifying erroneous information presented by ChatGPT can pose greater challenges due to its formal and plausible language, coupled with the inability to trace its sources. Future research could investigate how models like ChatGPT may inadvertently deceive not only individuals lacking medical training but also doctors who are not experts in the field.
General-purpose LLMs might not guarantee the accuracy and precision required for medical inquiries, which could lead to incorrect advice or information [43]. ChatGPT does not have access to individuals' personal health information. LLMs cannot consider an individual's complete medical history, conduct physical examinations, or order diagnostic tests, which are essential for providing accurate and personalized medical advice [9,44]. Any attempt to provide personalized medical advice would, therefore, be speculative and could lead to inaccurate or potentially harmful recommendations.
Even though the quick availability of information helps to reduce anxiety, using LLM conversations without expert evaluation increases the risk of inaccuracy. For example, underestimating a patient's condition could negatively influence patient care, as erroneous results reporting or treatment guideline interpretation can affect patients' morbidity and mortality. Patients may develop a sense of comfort and trust in ChatGPT over time, contributing to enhanced emotional well-being. However, this sense of comfort should not lead to an underestimation of the clinical state, causing the patient to make poor decisions. There is a real risk of oversimplifying complex medical situations, leading patients to believe the tool is a substitute for competent medical advice. Such a perception could undermine the crucial doctor-patient relationship founded on trust, expertise, and personalized care.
Therefore, while ChatGPT can support patient education, healthcare providers need to guide patients in using this tool as a complement to, rather than a substitute for, medical consultation. LLMs should be used carefully under the supervision of a qualified professional, such as an oncologist or psycho-oncologist, to prevent the patient from forming incorrect beliefs about their illness.
Finally, providing medical advice involves legal and ethical considerations, and relying on a language model like ChatGPT may not comply with medical regulations, standards, or the patient's cultural background.
In conclusion, AI in healthcare must be strictly regulated and overseen to reduce these risks [6,30]. Further research is needed to compare the performance of different AI systems and evaluate the usefulness of AI-generated responses for cancer patients in real-world clinical settings. Seeking advice from experienced healthcare professionals who can assess individual clinical histories, conduct physical examinations, and interpret diagnostic testing remains critical for accurate and safe medical care. Furthermore, it is vital to determine the quality and composition style of input delivered to chatbots across different settings, languages, and resource capacities. Implementing such a significant technological advancement necessitates caution and proactive risk management to ensure patient safety and quality of care.

Conclusions
The emergence of AI-driven conversational technology, exemplified by ChatGPT, has created new opportunities to support cancer patients throughout their journey. LLMs can significantly improve patients' well-being and empowerment by offering accurate information, guidance in treatment decisions, and emotional support. Evidence shows that these models can satisfactorily answer many questions about the symptoms, pathophysiology, treatment options, and prognosis of various types of cancer. However, these models have limitations, the main concern being their potential to present inaccurate or unreliable information in a plausible manner, especially when dealing with complex medical conditions or nuanced treatment options. Additionally, ChatGPT may not interpret context accurately or understand the subtle nuances of patient questions, leading to responses that are not fully applicable or helpful. Provided its limitations are recognized, integrating ChatGPT into the healthcare ecosystem promises to deliver personalized, accessible, and empathetic support to cancer patients.

Table 1 .
Advantages and limitations of ChatGPT and similar LLMs for patients and doctors.