Ethical Implications of Chatbot Utilization in Nephrology
Abstract
1. Introduction to the Ethics of Utilization of Chatbots in Medicine and Specifically Nephrology
1.1. Background and Justification for Investigating the Ethical Use of Chatbots in Medicine
1.2. Status of Chatbot Use in Medicine and Nephrology
- ChatGPT, developed by OpenAI, is built on the GPT (Generative Pre-trained Transformer) family of language models, known for producing human-like text. Powered by models such as GPT-4, ChatGPT excels at generating high-quality content, translating between languages, and delivering in-depth responses to queries;
- Bard AI, developed by Google AI, draws on sophisticated language models such as the Pathways Language Model 2 (PaLM 2) and its predecessor, the Language Model for Dialogue Applications (LaMDA). This allows Bard AI to interpret a wide range of prompts, including those requiring logical, commonsense, and mathematical reasoning;
- Bing Chat, developed by Microsoft, offers general information and insights across diverse subjects, including medicine. Bing Chat harnesses state-of-the-art natural language processing and AI-driven learning to emulate human conversational behavior;
- Claude AI, developed by Anthropic, is built on a proprietary training approach termed Constitutional AI. Designed through this technique around the principles of usefulness, safety, and integrity, Claude excels at interpreting intricate queries and delivering accurate, detailed answers.
1.3. Overview of Nephrology and the Role of Chatbots in the Care of Kidney Diseases
2. Ethical Considerations in the Utilization of Chatbots
2.1. Ethical Considerations in the Utilization of Chatbots in Medicine
2.1.1. Privacy and Data Security
2.1.2. Patient Autonomy and Informed Consent
2.1.3. Equity and Access to Care
2.2. Ethical Implications of Chatbot Utilization in Nephrology
2.2.1. Clinical Decision-Making and Patient Safety
- Accuracy and Reliability of Chatbot Diagnoses and Recommendations
- Evidence Level and Presentation to Patients
- Ethical Imperative of Transparent Communication: While accuracy is essential, the manner in which information is conveyed to patients is equally critical. It is an ethical imperative to present medical information in a way that is both transparent and easily comprehensible. Plain language, free of medical jargon, can empower patients to make informed decisions about their care. This is especially crucial in nephrology, where treatment decisions can significantly affect quality of life. In essence, an equilibrium must be maintained: chatbots must offer evidence-based medical guidance while keeping the patient engaged and well informed in their healthcare decisions (a minimal readability check is sketched after this list).
- Liability and Responsibility in Case of Errors or Misdiagnoses
- Balancing Chatbot Recommendations with Healthcare Professionals’ Expertise
- Oversight of Digital Assistants in Nephrology
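The plain-language requirement discussed above can be checked mechanically. Below is a minimal Python sketch that scores a draft chatbot reply with the Flesch Reading Ease formula; the syllable counter is a naive vowel-group heuristic, and the threshold of 60 (roughly plain English) is an illustrative assumption rather than a clinical standard.

```python
import re

def count_syllables(word: str) -> int:
    """Naive syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

# Hypothetical chatbot draft reply about chronic kidney disease.
draft = ("Your kidneys are not filtering blood as well as they should. "
         "We can slow this down with blood pressure control and diet changes.")

score = flesch_reading_ease(draft)
print(f"Reading ease: {score:.1f}")
if score < 60:  # assumed plain-language threshold for patient-facing text
    print("Consider simplifying this reply before showing it to a patient.")
```

A check like this can run automatically on every patient-facing reply, flagging jargon-heavy answers for simplification before they reach the patient.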
2.2.2. Patient–Provider Relationship and Communication
- Impact of Chatbot Utilization on Doctor–Patient Interactions
- Maintaining Empathy and Trust in the Virtual Healthcare Setting
- Ensuring Effective Communication between Chatbots and Patients
2.3. Ethical Implications of Handling Data and Bias in Algorithms
2.3.1. Ensuring Data Privacy, Security, and Consent for Chatbot-Generated Data
2.3.2. Detecting and Resolving Biases in Chatbot Diagnoses and Treatments
2.3.3. Ensuring Transparency and Explainability of Chatbot Algorithms
2.4. Clinical Trials and Ethical Concerns in Chatbot Utilization
3. Ethical Frameworks and Guidelines for the Use of Chatbots in Nephrology
3.1. Established Ethical Frameworks and Guidelines in the Field of Healthcare
3.1.1. The Application of Medical Ethics Principles to Chatbot Usage
- Autonomy remains a central consideration when chatbots are involved, as it upholds patients’ right to make informed decisions about their healthcare. Patients should have access to full information about the chatbot’s role and limitations so that they can autonomously decide on their healthcare preferences [65,66];
- Beneficence and non-maleficence focus on promoting well-being while avoiding harm. Healthcare professionals need to weigh the potential benefits and risks of utilizing chatbots. Integrating chatbots into workflows with reliability, accuracy, and appropriate safeguards is essential to uphold these principles and ensure patient safety [66];
- Justice, which emphasizes equitable access to healthcare services, is a vital principle when incorporating chatbots into care. All patients, regardless of status, geographic location, or other potential barriers, must have equal access to chatbot services [35]. Addressing disparities in access and minimizing biases contributes to fairness in the utilization of chatbots [66];
- By applying these medical ethics principles to the use of chatbots in nephrology, healthcare professionals can navigate this evolving landscape and make informed decisions that prioritize patient well-being while respecting patient autonomy [67].
3.1.2. Ethical Guidelines from Professional Medical Associations
3.2. Developing Ethical Guidelines for Chatbot Utilization in Nephrology
3.2.1. Stakeholder Engagement and Multidisciplinary Collaboration
3.2.2. Ethical Design Principles for Nephrology Chatbots
- Transparency and Explainability: Chatbot algorithms should be designed to provide explanations of their functionality and decision-making processes, and their limitations should be clearly communicated. Patients and healthcare professionals should be able to understand and trust the reasoning behind the recommendations and diagnoses provided by chatbots [72];
- Informed Consent: Obtaining consent should be a fundamental part of the interaction between chatbots and patients, allowing patients to make informed decisions about their healthcare [71];
- Accountability and Oversight: Accountability and oversight mechanisms should be in place for the use of chatbots in nephrology. Regulatory frameworks and mechanisms for monitoring and evaluating the performance, accuracy, and safety of chatbots should be established to ensure compliance with standards [67,71]. A minimal sketch of a response envelope reflecting these principles follows this list.
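As a concrete illustration of the transparency and accountability principles above, the following Python sketch wraps each chatbot answer in an envelope carrying its rationale, cited evidence, and stated limitations, and writes an audit record for later oversight. The field names, the example content, and the `audit_log.jsonl` destination are illustrative assumptions, not part of any published guideline.

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class ChatbotResponse:
    """Envelope pairing an answer with the transparency metadata patients and clinicians need."""
    answer: str
    rationale: str  # plain-language explanation of how the answer was reached
    evidence: list = field(default_factory=list)  # citations or guideline identifiers
    limitations: str = "Not a substitute for advice from your nephrology care team."
    timestamp: float = field(default_factory=time.time)

def log_for_oversight(resp: ChatbotResponse, path: str = "audit_log.jsonl") -> None:
    """Append every response to an audit trail so performance can be monitored and reviewed."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(resp)) + "\n")

# Hypothetical interaction: the values below are invented for illustration.
resp = ChatbotResponse(
    answer="A potassium level of 5.8 mmol/L is above the usual range; contact your care team.",
    rationale="Flagged because the reported value exceeds the commonly used 5.0 mmol/L upper limit.",
    evidence=["hypothetical guideline reference: potassium management in CKD"],
)
log_for_oversight(resp)
print(resp.rationale)
```

Keeping rationale and limitations attached to every answer makes the audit trail self-describing, which simplifies the monitoring and evaluation mechanisms described above.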
3.2.3. Evaluating the Ethical Impact of Chatbot Utilization in Nephrology
- Patient Outcomes: Assessing the impact of chatbot utilization on patient outcomes, including health outcomes, patient satisfaction, and quality of care, and evaluating whether chatbots improve access to care, enhance patient empowerment, and contribute to positive health outcomes [71];
- Healthcare Professional–Patient Relationship: Examining how chatbot usage affects the relationship between healthcare professionals and patients, including changes in communication dynamics, trust levels, and patient satisfaction, and assessing whether chatbots facilitate communication and maintain empathy in virtual healthcare settings [73];
- Equity and Accessibility: Evaluating the impact of chatbot utilization on the equity and accessibility of nephrological care, and determining whether chatbots help reduce healthcare disparities, improve access to care for underserved populations, and address barriers in healthcare delivery [73];
- Ethical and Legal Compliance: Assessing whether the use of chatbots aligns with existing ethical frameworks, guidelines, and legal requirements in nephrology and healthcare as a whole; patient autonomy, privacy, data security, and confidentiality must be respected when utilizing chatbots [63]. A minimal sketch of one quantitative evaluation step follows this list.
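One way to make the patient-outcome criterion above operational is a simple before/after comparison. The sketch below runs a paired t-test on hypothetical patient satisfaction scores collected before and after a chatbot rollout; the data, the 0.05 significance level, and the use of SciPy are all illustrative assumptions.

```python
from scipy import stats

# Hypothetical satisfaction scores (0-10) from the same ten patients,
# surveyed before and after a nephrology chatbot was introduced.
before = [6, 7, 5, 8, 6, 7, 6, 5, 7, 6]
after  = [7, 8, 6, 8, 7, 8, 7, 6, 8, 7]

# Paired t-test: each patient serves as their own control.
t_stat, p_value = stats.ttest_rel(after, before)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:  # assumed significance threshold
    print("Satisfaction changed significantly after chatbot introduction.")
else:
    print("No significant change detected; collect more data before concluding.")
```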
3.3. Implementing Ethical Guidelines and Ensuring Compliance
3.3.1. Providing Healthcare Professionals with Training on Chatbot Integration and Ethical Use
3.3.2. Monitoring Chatbot Performance and Ensuring Ethical Compliance
3.3.3. Continuous Evaluation and Adaptation of Ethical Guidelines
4. Ethical Challenges of Chatbot Integration in Nephrology Research and Practice
5. Future Studies on Ethical Considerations in Chatbot Utilization in Nephrology
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Smestad, T.L. Personality Matters! Improving the User Experience of Chatbot Interfaces-Personality Provides a Stable Pattern to Guide the Design and Behaviour of Conversational Agents. Master’s Thesis, NTNU (Norwegian University of Science and Technology), Trondheim, Norway, 2018.
- Harrer, S. Attention is not all you need: The complicated case of ethically using large language models in healthcare and medicine. EBioMedicine 2023, 90, 104512.
- Adamopoulou, E.; Moussiades, L. An Overview of Chatbot Technology. Artif. Intell. Appl. Innov. 2020, 584, 373–383.
- Altinok, D. An ontology-based dialogue management system for banking and finance dialogue systems. arXiv 2018, arXiv:1804.04838.
- Lee, P.; Bubeck, S.; Petro, J. Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine. N. Engl. J. Med. 2023, 388, 1233–1239.
- Sojasingarayar, A. Seq2seq AI chatbot with attention mechanism. arXiv 2020, arXiv:2006.02767.
- Kung, T.H.; Cheatham, M.; Medenilla, A.; Sillos, C.; De Leon, L.; Elepaño, C.; Madriaga, M.; Aggabao, R.; Diaz-Candido, G.; Maningo, J. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLoS Digit. Health 2023, 2, e0000198.
- Doshi, J. Chatbot User Interface for Customer Relationship Management using NLP models. In Proceedings of the 2021 International Conference on Artificial Intelligence and Machine Vision (AIMV), Gandhinagar, India, 24–26 September 2021; pp. 1–4.
- Ker, J.; Wang, L.; Rao, J.; Lim, T. Deep learning applications in medical image analysis. IEEE Access 2017, 6, 9375–9389.
- Han, K.; Cao, P.; Wang, Y.; Xie, F.; Ma, J.; Yu, M.; Wang, J.; Xu, Y.; Zhang, Y.; Wan, J. A review of approaches for predicting drug–drug interactions based on machine learning. Front. Pharmacol. 2022, 12, 814858.
- Beaulieu-Jones, B.K.; Yuan, W.; Brat, G.A.; Beam, A.L.; Weber, G.; Ruffin, M.; Kohane, I.S. Machine learning for patient risk stratification: Standing on, or looking over, the shoulders of clinicians? NPJ Digit. Med. 2021, 4, 62.
- Sahni, N.; Stein, G.; Zemmel, R.; Cutler, D.M. The Potential Impact of Artificial Intelligence on Healthcare Spending; National Bureau of Economic Research: Cambridge, MA, USA, 2023.
- Cutler, D.M. What Artificial Intelligence Means for Health Care. JAMA Health Forum 2023, 4, e232652.
- Haug, C.J.; Drazen, J.M. Artificial Intelligence and Machine Learning in Clinical Medicine, 2023. N. Engl. J. Med. 2023, 388, 1201–1208.
- Sallam, M. ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns. Healthcare 2023, 11, 887.
- Mello, M.M.; Guha, N. ChatGPT and Physicians’ Malpractice Risk. JAMA Health Forum 2023, 4, e231938.
- Teixeira da Silva, J.A. Can ChatGPT rescue or assist with language barriers in healthcare communication? Patient Educ. Couns. 2023, 115, 107940.
- Ali, O.; Abdelbaki, W.; Shrestha, A.; Elbasi, E.; Alryalat, M.A.A.; Dwivedi, Y.K. A systematic literature review of artificial intelligence in the healthcare sector: Benefits, challenges, methodologies, and functionalities. J. Innov. Knowl. 2023, 8, 100333.
- Ellahham, S.; Ellahham, N.; Simsekler, M.C.E. Application of artificial intelligence in the health care safety context: Opportunities and challenges. Am. J. Med. Qual. 2020, 35, 341–348.
- Haupt, C.E.; Marks, M. AI-Generated Medical Advice—GPT and Beyond. JAMA 2023, 329, 1349–1350.
- Thongprayoon, C.; Kaewput, W.; Kovvuru, K.; Hansrivijit, P.; Kanduri, S.R.; Bathini, T.; Chewcharat, A.; Leeaphorn, N.; Gonzalez-Suarez, M.L.; Cheungpasitporn, W. Promises of Big Data and Artificial Intelligence in Nephrology and Transplantation. J. Clin. Med. 2020, 9, 1107.
- Cheungpasitporn, W.; Kashani, K. Electronic Data Systems and Acute Kidney Injury. Contrib. Nephrol. 2016, 187, 73–83.
- Furtado, E.S.; Oliveira, F.; Pinheiro, V. Conversational Assistants and their Applications in Health and Nephrology. In Innovations in Nephrology: Breakthrough Technologies in Kidney Disease Care; Bezerra da Silva Junior, G., Nangaku, M., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 283–303.
- Thongprayoon, C.; Miao, J.; Jadlowiec, C.C.; Mao, S.A.; Mao, M.A.; Vaitla, P.; Leeaphorn, N.; Kaewput, W.; Pattharanitima, P.; Tangpanithandee, S.; et al. Differences between Very Highly Sensitized Kidney Transplant Recipients as Identified by Machine Learning Consensus Clustering. Medicina 2023, 59, 977.
- De Panfilis, L.; Peruselli, C.; Tanzi, S.; Botrugno, C. AI-based clinical decision-making systems in palliative medicine: Ethical challenges. BMJ Support Palliat Care 2023, 13, 183–189.
- Niel, O.; Bastard, P. Artificial Intelligence in Nephrology: Core Concepts, Clinical Applications, and Perspectives. Am. J. Kidney Dis. 2019, 74, 803–810.
- Thongprayoon, C.; Hansrivijit, P.; Bathini, T.; Vallabhajosyula, S.; Mekraksakit, P.; Kaewput, W.; Cheungpasitporn, W. Predicting Acute Kidney Injury after Cardiac Surgery by Machine Learning Approaches. J. Clin. Med. 2020, 9, 1767.
- Krisanapan, P.; Tangpanithandee, S.; Thongprayoon, C.; Pattharanitima, P.; Cheungpasitporn, W. Revolutionizing Chronic Kidney Disease Management with Machine Learning and Artificial Intelligence. J. Clin. Med. 2023, 12, 3018.
- Thongprayoon, C.; Vaitla, P.; Jadlowiec, C.C.; Leeaphorn, N.; Mao, S.A.; Mao, M.A.; Pattharanitima, P.; Bruminhent, J.; Khoury, N.J.; Garovic, V.D.; et al. Use of Machine Learning Consensus Clustering to Identify Distinct Subtypes of Black Kidney Transplant Recipients and Associated Outcomes. JAMA Surg. 2022, 157, e221286.
- Federspiel, F.; Mitchell, R.; Asokan, A.; Umana, C.; McCoy, D. Threats by artificial intelligence to human health and human existence. BMJ Glob. Health 2023, 8, e010435.
- Marks, M.; Haupt, C.E. AI Chatbots, Health Privacy, and Challenges to HIPAA Compliance. JAMA 2023, 330, 309–310.
- Hasal, M.; Nowaková, J.; Ahmed Saghair, K.; Abdulla, H.; Snášel, V.; Ogiela, L. Chatbots: Security, privacy, data protection, and social aspects. Concurr. Comput. Pract. Exp. 2021, 33, e6426.
- Gillon, R. Defending the four principles approach as a good basis for good medical practice and therefore for good medical ethics. J. Med. Ethics 2015, 41, 111–116.
- Karabacak, M.; Margetis, K. Embracing Large Language Models for Medical Applications: Opportunities and Challenges. Cureus 2023, 15, e39305.
- Beil, M.; Proft, I.; van Heerden, D.; Sviri, S.; van Heerden, P.V. Ethical considerations about artificial intelligence for prognostication in intensive care. Intensive Care Med. Exp. 2019, 7, 70.
- Parviainen, J.; Rantala, J. Chatbot breakthrough in the 2020s? An ethical reflection on the trend of automated consultations in health care. Med. Health Care Philos. 2022, 25, 61–71.
- Topol, E.J. High-performance medicine: The convergence of human and artificial intelligence. Nat. Med. 2019, 25, 44–56.
- Sahni, N.R.; Carrus, B. Artificial Intelligence in U.S. Health Care Delivery. N. Engl. J. Med. 2023, 389, 348–358.
- Price, W.N., II; Gerke, S.; Cohen, I.G. Potential Liability for Physicians Using Artificial Intelligence. JAMA 2019, 322, 1765–1766.
- Mello, M.M. Of swords and shields: The role of clinical practice guidelines in medical malpractice litigation. Univ. Pa. Law Rev. 2001, 149, 645–710.
- Hyams, A.L.; Brandenburg, J.A.; Lipsitz, S.R.; Shapiro, D.W.; Brennan, T.A. Practice guidelines and malpractice litigation: A two-way street. Ann. Intern. Med. 1995, 122, 450–455.
- Paper, C.C. Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare; The Mitre Corporation: McLean, VA, USA, 2022.
- Li, Y.; Liang, S.; Zhu, B.; Liu, X.; Li, J.; Chen, D.; Qin, J.; Bressington, D. Feasibility and effectiveness of artificial intelligence-driven conversational agents in healthcare interventions: A systematic review of randomized controlled trials. Int. J. Nurs. Stud. 2023, 143, 104494.
- Davenport, T.; Kalakota, R. The potential for artificial intelligence in healthcare. Future Healthc. J. 2019, 6, 94–98.
- Cascella, M.; Montomoli, J.; Bellini, V.; Bignami, E. Evaluating the Feasibility of ChatGPT in Healthcare: An Analysis of Multiple Clinical and Research Scenarios. J. Med. Syst. 2023, 47, 33.
- Vázquez, A.; López Zorrilla, A.; Olaso, J.M.; Torres, M.I. Dialogue Management and Language Generation for a Robust Conversational Virtual Coach: Validation and User Study. Sensors 2023, 23, 1423.
- Chaix, B.; Bibault, J.-E.; Pienkowski, A.; Delamon, G.; Guillemassé, A.; Nectoux, P.; Brouard, B. When chatbots meet patients: One-year prospective study of conversations between patients with breast cancer and a chatbot. JMIR Cancer 2019, 5, e12856.
- Biro, J.; Linder, C.; Neyens, D. The Effects of a Health Care Chatbot’s Complexity and Persona on User Trust, Perceived Usability, and Effectiveness: Mixed Methods Study. JMIR Hum. Factors 2023, 10, e41017.
- Chua, I.S.; Ritchie, C.S.; Bates, D.W. Enhancing serious illness communication using artificial intelligence. NPJ Digit. Med. 2022, 5, 14.
- He, Y.; Yang, L.; Qian, C.; Li, T.; Su, Z.; Zhang, Q.; Hou, X. Conversational Agent Interventions for Mental Health Problems: Systematic Review and Meta-analysis of Randomized Controlled Trials. J. Med. Internet Res. 2023, 25, e43862.
- Sujan, M.; Furniss, D.; Grundy, K.; Grundy, H.; Nelson, D.; Elliott, M.; White, S.; Habli, I.; Reynolds, N. Human factors challenges for the safe use of artificial intelligence in patient care. BMJ Health Care Inform. 2019, 26, e100081.
- Muscat, D.M.; Lambert, K.; Shepherd, H.; McCaffery, K.J.; Zwi, S.; Liu, N.; Sud, K.; Saunders, J.; O’Lone, E.; Kim, J.; et al. Supporting patients to be involved in decisions about their health and care: Development of a best practice health literacy App for Australian adults living with Chronic Kidney Disease. Health Promot. J. Aust. 2021, 32 (Suppl. S1), 115–127.
- Lisetti, C.; Amini, R.; Yasavur, U.; Rishe, N. I Can Help You Change! An Empathic Virtual Agent Delivers Behavior Change Health Interventions. ACM Trans. Manag. Inf. Syst. 2013, 4, 19.
- Xygkou, A.; Siriaraya, P.; Covaci, A.; Prigerson, H.G.; Neimeyer, R.; Ang, C.S.; She, W.-J. The “Conversation” about Loss: Understanding How Chatbot Technology was Used in Supporting People in Grief. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–29 April 2023; p. 646.
- Yang, M.; Tu, W.; Qu, Q.; Zhao, Z.; Chen, X.; Zhu, J. Personalized response generation by dual-learning based domain adaptation. Neural Netw. 2018, 103, 72–82.
- Panch, T.; Pearson-Stuttard, J.; Greaves, F.; Atun, R. Artificial intelligence: Opportunities and risks for public health. Lancet Digit. Health 2019, 1, e13–e14.
- Vu, E.; Steinmann, N.; Schröder, C.; Förster, R.; Aebersold, D.M.; Eychmüller, S.; Cihoric, N.; Hertler, C.; Windisch, P.; Zwahlen, D.R. Applications of Machine Learning in Palliative Care: A Systematic Review. Cancers 2023, 15, 1596.
- Thongprayoon, C.; Cheungpasitporn, W.; Srivali, N.; Harrison, A.M.; Gunderson, T.M.; Kittanamongkolchai, W.; Greason, K.L.; Kashani, K.B. AKI after Transcatheter or Surgical Aortic Valve Replacement. J. Am. Soc. Nephrol. 2016, 27, 1854–1860.
- Thongprayoon, C.; Lertjitbanjong, P.; Hansrivijit, P.; Crisafio, A.; Mao, M.A.; Watthanasuntorn, K.; Aeddula, N.R.; Bathini, T.; Kaewput, W.; Cheungpasitporn, W. Acute Kidney Injury in Patients Undergoing Cardiac Transplantation: A Meta-Analysis. Medicines 2019, 6, 108.
- Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. In Ethics, Governance, and Policies in Artificial Intelligence; Floridi, L., Ed.; Springer International Publishing: Cham, Switzerland, 2021; pp. 19–39.
- May, R.; Denecke, K. Security, privacy, and healthcare-related conversational agents: A scoping review. Inform. Health Soc. Care 2022, 47, 194–210.
- Kanter, G.P.; Packel, E.A. Health Care Privacy Risks of AI Chatbots. JAMA 2023, 330, 311–312.
- Said, G.; Azamat, K.; Ravshan, S.; Bokhadir, A. Adapting Legal Systems to the Development of Artificial Intelligence: Solving the Global Problem of AI in Judicial Processes. Int. J. Cyber Law 2023, 1, 4.
- Gillon, R. Medical ethics: Four principles plus attention to scope. BMJ 1994, 309, 184.
- Jones, A.H. Narrative in medical ethics. BMJ 1999, 318, 253–256.
- Beauchamp, T.L.; Childress, J.F. Principles of biomedical ethics. Med. Clin. N. Am. 1994, 80, 225–243.
- Martin, D.E.; Harris, D.C.H.; Jha, V.; Segantini, L.; Demme, R.A.; Le, T.H.; McCann, L.; Sands, J.M.; Vong, G.; Wolpe, P.R.; et al. Ethical challenges in nephrology: A call for action. Nat. Rev. Nephrol. 2020, 16, 603–613.
- Siegler, M.; Pellegrino, E.D.; Singer, P.A. Clinical medical ethics. J. Clin. Ethics 1990, 1, 5–9.
- Char, D.S.; Shah, N.H.; Magnus, D. Implementing Machine Learning in Health Care—Addressing Ethical Challenges. N. Engl. J. Med. 2018, 378, 981–983.
- Ho, C.W.-L.; Caals, K. A Call for an Ethics and Governance Action Plan to Harness the Power of Artificial Intelligence and Digitalization in Nephrology. Semin. Nephrol. 2021, 41, 282–293.
- Denecke, K.; Abd-Alrazaq, A.; Househ, M. Artificial intelligence for chatbots in mental health: Opportunities and challenges. In Multiple Perspectives on Artificial Intelligence in Healthcare: Opportunities and Challenges; Springer: Cham, Switzerland, 2021; pp. 115–128.
- Murtarelli, G.; Gregory, A.; Romenti, S. A conversation-based perspective for shaping ethical human–machine interactions: The particular challenge of chatbots. J. Bus. Res. 2021, 129, 927–935.
- Boucher, E.M.; Harake, N.R.; Ward, H.E.; Stoeckl, S.E.; Vargas, J.; Minkel, J.; Parks, A.C.; Zilca, R. Artificially intelligent chatbots in digital mental health interventions: A review. Expert Rev. Med. Devices 2021, 18, 37–49.
- Nature Editorial. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature 2023, 613, 612.
- Gould, K.A. Updated Recommendations from the World Association of Medical Editors: Chatbots, Generative AI, and Scholarly Manuscripts. Dimens. Crit. Care Nurs. 2023, 42, 308.
- Floridi, L. Soft ethics and the governance of the digital. Philos. Technol. 2018, 31, 1–8.
- Miao, J.; Thongprayoon, C.; Cheungpasitporn, W. Assessing the Accuracy of ChatGPT on Core Questions in Glomerular Disease. Kidney Int. Rep. 2023, 8, 1657–1659.
- Suppadungsuk, S.; Thongprayoon, C.; Krisanapan, P.; Tangpanithandee, S.; Garcia Valencia, O.; Miao, J.; Mekrasakit, P.; Kashani, K.; Cheungpasitporn, W. Examining the Validity of ChatGPT in Identifying Relevant Nephrology Literature: Findings and Implications. J. Clin. Med. 2023, 12, 5550.
Section | Concerns | Recommendations for Improvement
---|---|---
1.1 Background and Rationale for Studying the Ethics of Chatbot Utilization in Medicine | Lack of awareness about the ethical implications of chatbot utilization in medicine | Increase awareness through education and training |
1.2 Overview of Nephrology and the Role of Chatbots in Nephrological Care | Limited understanding of the potential benefits and challenges of chatbot utilization in nephrology | Conduct research to establish evidence-based guidelines for chatbot utilization in nephrology |
2.1 Ethical Considerations in the Utilization of Chatbots in Medicine | Privacy and security risks associated with chatbot-generated data | Develop robust data protection measures and ensure compliance with privacy regulations |
2.1.1 Privacy and Data Security | Potential breaches of patient privacy and data security | Implement encryption protocols and strict access controls for chatbot-generated data (see the encryption sketch after this table)
2.1.2 Patient Autonomy and Informed Consent | Potential infringement on patient autonomy and decision-making | Implement transparent consent processes and ensure patients have control over their healthcare |
2.1.3 Equity and Access to Care | Unequal access to chatbot services and healthcare resources | Develop strategies to ensure equitable distribution and accessibility of chatbot services |
2.2 Ethical Considerations in the Utilization of Chatbots in Nephrology: 2.2.1 Clinical Decision-Making and Patient Safety | Inaccurate diagnoses and recommendations | Enhance chatbot accuracy and reliability through rigorous testing and validation
(a) Accuracy and Reliability of Chatbot Diagnoses and Recommendations | Lack of standardized diagnostic algorithms and protocols | Establish standardized guidelines for chatbot diagnoses based on best practices in nephrology |
(b) Liability and Responsibility in Case of Errors or Misdiagnoses | Legal and ethical implications in case of errors or misdiagnoses | Establish clear protocols for error reporting and accountability |
(c) Balancing Chatbot Recommendations with Healthcare Professionals’ Expertise | Overreliance on chatbot recommendations | Promote collaborative decision-making between chatbots and healthcare professionals |
2.2.2 Patient–Provider Relationship and Communication | Decreased interpersonal connection and empathy in the virtual healthcare setting | Implement training programs to enhance empathy and patient-centered communication |
(a) Impact of Chatbot Utilization on Doctor–Patient Interactions | Disruption of traditional doctor–patient dynamics | Educate healthcare professionals on incorporating chatbots into patient interactions effectively
(b) Maintaining Empathy and Trust in the Virtual Healthcare Setting | Erosion of trust due to reliance on technology | Foster trust by emphasizing the role of chatbots as tools to augment, not replace, healthcare professionals |
(c) Ensuring Effective Communication between Chatbots and Patients | Difficulties in effective communication between chatbots and patients | Improve natural language processing capabilities and user interface design |
2.3 Ethical Implications for Data Handling and Algorithm Bias | Unauthorized access and misuse of chatbot-generated data | Enhance data privacy measures and consent processes to protect patient information |
2.3.1 Data Privacy, Security, and Consent in Chatbot-Generated Data | Inadequate consent procedures for data collection and usage | Implement explicit consent mechanisms and educate patients about data-handling practices |
2.3.2 Identifying and Addressing Algorithmic Bias in Chatbot Diagnoses and Treatments | Bias in chatbot diagnoses and treatment plans | Regularly assess and mitigate algorithmic biases |
2.3.3 Ensuring Transparency and Explainability of Chatbot Algorithms | Lack of transparency in chatbot decision-making processes | Incorporate explainable AI techniques and provide clear explanations for chatbot recommendations |
3.1 Existing Ethical Frameworks and Guidelines in Healthcare | Inadequate adaptation of existing ethical frameworks to chatbot utilization | Modify existing frameworks to address the unique ethical considerations of chatbot utilization in nephrology |
3.1.1 Principles of Medical Ethics and their Relevance to Chatbot Utilization | Limited understanding of how traditional medical ethics apply to chatbot utilization | Interpret and adapt medical ethics principles to the context of chatbot utilization |
3.1.2 Ethical Guidelines from Professional Medical Associations | Insufficient guidelines addressing chatbot utilization in nephrology | Develop specialized ethical guidelines in collaboration with professional medical associations |
3.2 Developing Ethical Guidelines for Chatbot Utilization in Nephrology | Lack of diverse stakeholder perspectives in guideline development | Engage stakeholders from various disciplines and patient groups |
3.2.1 Stakeholder Involvement and Multidisciplinary Collaboration | Inadequate consideration of ethical implications during chatbot design and development | Incorporate ethical design principles from the inception of chatbot development |
3.2.2 Ethical Design Principles for Nephrology Chatbots | Design flaws leading to ethical concerns | Regularly assess the ethical impact of chatbot utilization and make necessary improvements |
3.2.3 Evaluating the Ethical Impact of Chatbot Utilization in Nephrology | Lack of adherence to ethical guidelines and policies | Establish mechanisms for continuous evaluation and enforcement of ethical guidelines |
3.3 Implementing Ethical Guidelines and Ensuring Compliance | Insufficient training on ethical use and integration of chatbots | Develop comprehensive training programs on chatbot integration and ethical considerations |
3.3.1 Training Healthcare Professionals on Chatbot Integration and Ethical Use | Inadequate monitoring of chatbot performance and ethical compliance | Implement robust monitoring and auditing mechanisms to ensure ethical compliance |
3.3.2 Monitoring and Auditing Chatbot Performance and Ethical Compliance | Lack of regular evaluation and updating of ethical guidelines | Establish mechanisms for continuous evaluation and updating of ethical guidelines |
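The encryption recommendation in the table above (the 2.1.1 Privacy and Data Security row) can be prototyped in a few lines. This sketch uses the widely available `cryptography` package's Fernet recipe to encrypt a chatbot transcript at rest; the transcript is hypothetical, and in production the key would live in a managed key store rather than a local variable.

```python
from cryptography.fernet import Fernet

# In production, fetch this key from a managed key store; generating it inline
# is only for illustration.
key = Fernet.generate_key()
fernet = Fernet(key)

# Hypothetical chatbot transcript to be stored.
transcript = b"Patient asked about dialysis scheduling; chatbot advised contacting the clinic."

token = fernet.encrypt(transcript)   # what gets written to storage
restored = fernet.decrypt(token)     # requires the key, which enforces access control

assert restored == transcript
print("Stored ciphertext length:", len(token))
```

Because only holders of the key can decrypt stored conversations, pairing this with strict key access controls realizes both halves of the table's recommendation.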
Topic | Ethical Dilemma | Challenges | Potential Ethical Resolutions
---|---|---|---
Privacy Concerns in Patient Data Handling | Balancing patient privacy and data collection | Ensuring data security and confidentiality | Implement strict data protection measures
 | | Establishing informed consent protocols | Clearly communicate data usage and obtain consent
 | | Protecting patient identities | Anonymize or pseudonymize patient data
Bias and Discrimination in Decision-making | Unintentional bias in chatbot responses | Identifying and addressing bias in algorithms | Regularly review and update algorithms to reduce bias
 | | Ensuring fairness and equity in recommendations | Implement diversity and inclusion in training data
 | | Mitigating potential discrimination | Perform regular audits for discriminatory patterns (see the bias-audit sketch after this table)
Inadequate Handling of Emergency Situations | Insufficient response to urgent medical needs | Ensuring appropriate escalation and triage | Implement clear guidelines for emergency situations
 | | Reducing the risk of harm in critical situations | Provide prominent disclaimers for emergency scenarios
 | | Promptly connecting users to human professionals | Enable seamless transfer to human healthcare providers
Informed Consent and Transparency | Lack of clarity in chatbot’s capabilities | Providing accurate information on limitations | Clearly communicate the chatbot’s capabilities
 | | Establishing realistic expectations | Offer transparent disclaimers about potential errors
 | | Ensuring users are fully informed | Provide accessible information about chatbot usage
Psychological Impact and Emotional Support | Inadequate empathy and emotional support | Recognizing the need for emotional sensitivity | Train chatbots to provide empathetic responses
 | | Addressing mental health and emotional needs | Offer appropriate resources for mental health support
 | | Preventing harm or exacerbation of conditions | Provide clear disclaimers and encourage professional help
Legal and Regulatory Compliance | Violation of healthcare regulations and laws | Complying with privacy and security regulations | Adhere to relevant laws such as HIPAA and GDPR
 | | Meeting ethical guidelines and standards | Follow ethical codes specific to the healthcare profession
 | | Avoiding unauthorized practice of medicine | Clearly define the chatbot’s role and limitations
Cultural Sensitivity and Language Barriers | Insensitivity to cultural nuances and diversity | Incorporating cultural sensitivity into chatbot design | Train chatbots to recognize and respect cultural norms
 | | Overcoming language barriers for accurate care | Provide multilingual support and translation services
 | | Ensuring inclusive care for diverse populations | Regularly update training data to include diverse cases
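The "perform regular audits for discriminatory patterns" resolution in the table above can be approximated with a demographic-parity check. The sketch below compares, per patient group, how often a chatbot recommends specialist referral; the records, the group labels, and the 0.1 disparity threshold are illustrative assumptions rather than validated audit criteria.

```python
from collections import defaultdict

# Hypothetical audit log entries: (patient_group, chatbot_recommended_referral)
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, referred in records:
    totals[group] += 1
    positives[group] += referred  # True counts as 1

# Per-group rate of positive (referral) recommendations.
rates = {g: positives[g] / totals[g] for g in totals}
print("Referral rates by group:", rates)

# Flag the audit if any two groups differ by more than the assumed 0.1 threshold.
if max(rates.values()) - min(rates.values()) > 0.1:
    print("Disparity exceeds threshold: review the algorithm for potential bias.")
```

A disparity flag is a trigger for human review, not proof of bias; groups may legitimately differ in clinical need, so flagged results should be examined with that context in mind.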