Advancing Toward P6 Medicine: Recommendations for Integrating Artificial Intelligence in Internal Medicine
Abstract
1. Introduction
1.1. Generative AI in Internal Medicine
1.2. EFIM Position on AI in Internal Medicine
- Provide practical and immediate recommendations for internal medicine specialists: Offer advice on how to integrate AI into the clinical workflow within a department or clinic and in collaboration with other medical disciplines. AI can also be employed to support daily clinical activities and reduce administrative workload.
- Assure that human expertise and values are not overlooked and that physician well-being is supported: Core clinical skills and medical professional competencies remain the foundation on which digital literacy and AI literacy should be built.
- Reduce uncertainty regarding ethical, legal, and data privacy issues: Internists require clear frameworks to manage risks such as algorithmic bias, liability, compliance with regulations like the General Data Protection Regulation (GDPR), patient consent, and transparency.
2. Methodology
2.1. Literature Review
2.2. Delphi Method
3. Research Outcomes
3.1. Consensus Recommendations on Generative AI in Internal Medicine
- Research and Drug Discovery: GAI can generate synthetic patient data for clinical trial simulations and accelerate drug development, for instance by identifying candidate therapeutic compounds or simulating their effects.
- Personalized Medicine: GAI can generate synthetic data to train AI models for more accurate personalized medicine applications, leading to tailored treatment plans.
- Population Health Management: GAI has the potential to model disease outbreaks and develop more effective public health interventions. By simulating different scenarios, internists and public health officials can proactively design strategies to minimize the impact of infectious diseases and chronic conditions in populations.
- Improving Clinical Documentation: GAI provides new methods for introducing information into clinical records, such as voice recognition or predictive writing, which can significantly improve the documentation process and reduce the associated workload and professional burnout [7].
- Clinical Decision Support and Patient Education: GAI can be used for clinical decision support systems in the form of chatbots or for medical education, such as generating images, text, or synthetic patient data for history taking in virtual patients. Pilot studies have evaluated the quality and accuracy of ChatGPT-generated answers to common patient questions [8].
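The synthetic-data idea above can be sketched in a few lines. This is a deliberately minimal, hypothetical illustration: real synthetic-data pipelines fit generative models (e.g., GANs or LLMs) to de-identified source data, whereas here plausible vitals are simply sampled from fixed distributions; all field names and ranges are illustrative assumptions, not clinical values.

```python
import random

def generate_synthetic_patients(n, seed=0):
    """Generate n synthetic patient records by sampling plausible vitals.

    Purely illustrative: field names and distributions are assumptions,
    and no real patient data is involved.
    """
    rng = random.Random(seed)  # fixed seed makes the cohort reproducible
    records = []
    for i in range(n):
        records.append({
            "id": f"SYN-{i:04d}",                 # synthetic identifier, no real PHI
            "age": rng.randint(18, 95),
            "systolic_bp": round(rng.gauss(128, 18)),
            "hba1c": round(rng.gauss(6.1, 1.2), 1),
        })
    return records

cohort = generate_synthetic_patients(100)
```

Such a cohort could, for example, be used to smoke-test a trial-simulation or model-training pipeline before any de-identified real data is requested.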
3.2. Recommended Integration of AI into Clinical Processes
- AI-powered Scribes: These tools automate documentation tasks, such as compiling clinical information registries or generating clinical notes, potentially reducing administrative burden and freeing up attention and time for the internal medicine specialist to focus on patient interaction [45].
- Clinical Decision Support Systems (CDSS): AI-powered CDSS can provide real-time guidance to internists at the point of care, improving diagnostic accuracy and treatment efficiency by analyzing vast datasets and offering evidence-based recommendations [46].
- Digital Therapies (DTx): Examples include DTx for preventing hospital visits, improving patient self-management, and providing remote therapy. DTx may increase patient participation and empowerment and facilitate continuous monitoring, feedback, and behavioral support in medical treatments. Physicians can act as guides for patients, interpreting data, personalizing treatments, and coordinating the effective integration of digital treatments into health pathways, especially for patients with lower digital skills [24,47]. From an economic perspective, DTx could create cost-saving opportunities by increasing treatment adherence, reducing acute exacerbations of chronic conditions, offering scalable interventions, and improving healthcare service efficiency [47].
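To make the AI-scribe workflow above concrete, the following is a minimal, hypothetical sketch of one step: routing sentences from a visit transcript into SOAP note sections. Production scribes combine speech-to-text with an LLM; this keyword router only illustrates the shape of the workflow, and the keyword lists are assumptions for the example.

```python
# Illustrative section routing for a drafted SOAP note; keywords are
# assumptions for this sketch, not a validated clinical vocabulary.
SECTION_KEYWORDS = {
    "Subjective": ["reports", "complains", "denies", "feels"],
    "Objective": ["bp", "temperature", "exam", "lab"],
    "Assessment": ["likely", "consistent with", "diagnosis"],
    "Plan": ["start", "follow up", "order", "refer"],
}

def draft_soap_note(transcript_sentences):
    """Assign each transcript sentence to the first matching SOAP section."""
    note = {section: [] for section in SECTION_KEYWORDS}
    for sentence in transcript_sentences:
        lowered = sentence.lower()
        for section, keywords in SECTION_KEYWORDS.items():
            if any(k in lowered for k in keywords):
                note[section].append(sentence)
                break  # a sentence lands in at most one section
    return note

note = draft_soap_note([
    "Patient reports three days of cough.",
    "Exam shows clear lungs; BP 118/76.",
    "Likely viral bronchitis.",
    "Start supportive care and follow up in one week.",
])
```

The design point the sketch captures is that a scribe drafts and the clinician reviews: the output is a structured draft for the internist to edit and sign, never a finished record.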
3.3. Legal, Regulatory, and Ethical Considerations
- Transparency: AI models should be designed to allow clinicians to understand decision-making processes.
- Fairness: AI must operate without bias and should support equal access to care.
- Accountability: Clear responsibility must be assigned for AI development and implementation.
- Increasing volumes of medical data pose a significant administrative challenge.
- Organizing information is essential for digitized clinical workflows, such as clinical decision support systems.
- Comprehensive and timely collection of clinical data makes it possible to design AI-driven decision support tools that integrate into internal medicine practice.
- Ethical and responsible data collection is essential to ensure patient privacy and consent and to maintain patient safety and trust [54].
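The points above about comprehensive data collection can be illustrated with a toy rule check: even the simplest decision-support rule needs consistently coded, complete fields. This is a hypothetical sketch; the field names and thresholds are assumptions for the example and are not clinical guidance.

```python
# Illustrative out-of-range check over structured vitals. Bounds are
# placeholder values for the sketch, not clinical reference ranges.
THRESHOLDS = {
    "systolic_bp": (90, 180),      # mmHg, (low, high)
    "heart_rate": (50, 120),       # beats per minute
    "temperature_c": (35.0, 38.5), # degrees Celsius
}

def flag_vitals(record):
    """Return (field, value, reason) flags for missing or out-of-range vitals."""
    flags = []
    for field, (low, high) in THRESHOLDS.items():
        value = record.get(field)
        if value is None:
            # Incomplete data is itself a flag: the rule cannot run without it.
            flags.append((field, None, "missing"))
        elif value < low:
            flags.append((field, value, "below range"))
        elif value > high:
            flags.append((field, value, "above range"))
    return flags

flags = flag_vitals({"systolic_bp": 192, "heart_rate": 88})
```

Note how a missing field surfaces explicitly rather than being silently skipped: the sketch's point is that gaps in data collection degrade decision support before any AI model is even involved.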
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Lichstein, P.R. The Medical Interview. In Clinical Methods: The History, Physical, and Laboratory Examinations, 3rd ed.; Walker, H.K., Hall, W.D., Hurst, J.W., Eds.; Butterworths: Boston, MA, USA, 1990. Available online: http://www.ncbi.nlm.nih.gov/books/NBK349/ (accessed on 8 February 2025).
- Pietrantonio, F.; Orlandini, F.; Moriconi, L.; La Regina, M. Acute Complex Care Model: An organizational approach for the medical care of hospitalized acute complex patients. Eur. J. Intern. Med. 2015, 26, 759–765.
- Jeyaraman, M.; Balaji, S.; Jeyaraman, N.; Yadav, S. Unraveling the Ethical Enigma: Artificial Intelligence in Healthcare. Cureus 2023, 15, e43262.
- Dave, T.; Athaluri, S.A.; Singh, S. ChatGPT in medicine: An overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front. Artif. Intell. 2023, 6, 1169595.
- Pietrantonio, F.; Florczak, M.; Kuhn, S.; Kärberg, K.; Leung, T.; Said Criado, I.; Sikorski, S.; Ruggeri, M.; Signorini, A.; Rosiello, F.; et al. Applications to augment patient care for Internal Medicine specialists: A position paper from the EFIM working group on telemedicine, innovative technologies & digital health. Front. Public Health 2024, 12, 1370555.
- Gupta, N.S.; Kumar, P. Perspective of artificial intelligence in healthcare data management: A journey towards precision medicine. Comput. Biol. Med. 2023, 162, 107051.
- Aranyossy, M.; Halmosi, P. Healthcare 4.0 value creation—The interconnectedness of hybrid value propositions. Technol. Forecast. Soc. Change 2024, 208, 123718.
- Coiera, E.; Liu, S. Evidence synthesis, digital scribes, and translational challenges for artificial intelligence in healthcare. Cell Rep. Med. 2022, 3, 100860.
- Winkel, A.F.; Telzak, B.; Shaw, J.; Hollond, C.; Magro, J.; Nicholson, J.; Quinn, G. The Role of Gender in Careers in Medicine: A Systematic Review and Thematic Synthesis of Qualitative Literature. J. Gen. Intern. Med. 2021, 36, 2392–2399.
- Rangarajan, D.; Rangarajan, A.; Reddy, C.K.K.; Doss, S. Exploring the Next-Gen Transformations in Healthcare Through the Impact of AI and IoT. In Advances in Medical Technologies and Clinical Practice; IGI Global: Hershey, PA, USA, 2024; pp. 73–98. Available online: https://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/979-8-3693-8990-4.ch004 (accessed on 8 July 2025).
- Xie, Y.; Zhai, Y.; Lu, G. Evolution of artificial intelligence in healthcare: A 30-year bibliometric study. Front. Med. 2025, 11, 1505692.
- Singh, B. Artificial Intelligence (AI) in the Digital Healthcare and Medical Industry: Projecting Vision for Governance and Regulations. In Advances in Healthcare Information Systems and Administration; IGI Global: Hershey, PA, USA, 2024; pp. 161–188. Available online: https://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/979-8-3693-6294-5.ch007 (accessed on 8 July 2025).
- Bhavanam, B.R. The role of AI in transforming healthcare: A technical analysis. World J. Adv. Eng. Technol. Sci. 2025, 15, 803–811.
- Kotowicz, M.; Bieniak-Pentchev, M.; Koczkodaj, M. Artificial Intelligence in medicine: Potential and Application Possibilities—Comprehensive literature review. Qual. Sport 2025, 41, 60222.
- Pietrantonio, F.; Vinci, A.; Maurici, M.; Ciarambino, T.; Galli, B.; Signorini, A.; La Fazia, V.M.; Rosselli, F.; Fortunato, L.; Iodice, R.; et al. Intra- and Extra-Hospitalization Monitoring of Vital Signs—Two Sides of the Same Coin: Perspectives from LIMS and Greenline-HT Study Operators. Sensors 2023, 23, 5408.
- De Freitas, J.; Nave, G.; Puntoni, S. Ideation with Generative AI—In Consumer Research and Beyond. J. Consum. Res. 2025, 52, 18–31.
- Gude, V. Factors Influencing ChatGPT Adoption for Product Research and Information Retrieval. J. Comput. Inf. Syst. 2025, 65, 222–231.
- Siragusa, L.; Angelico, R.; Angrisani, M.; Zampogna, B.; Materazzo, M.; Sorge, R.; Giordano, L.; Meniconi, R.; Coppola, A.; SPIGC Survey Collaborative Group; et al. How future surgery will benefit from SARS-CoV-2-related measures: A SPIGC survey conveying the perspective of Italian surgeons. Updat. Surg. 2023, 75, 1711–1727.
- Eriksen, A.V.; Möller, S.; Ryg, J. Use of GPT-4 to Diagnose Complex Clinical Cases. NEJM AI 2024, 1, AIp2300031.
- Goh, E.; Gallo, R.; Hom, J.; Strong, E.; Weng, Y.; Kerman, H.; Cool, J.A.; Kanjee, Z.; Parsons, A.S.; Ahuja, N.; et al. Large Language Model Influence on Diagnostic Reasoning: A Randomized Clinical Trial. JAMA Netw. Open 2024, 7, e2440969.
- Wei, Q.; Yao, Z.; Cui, Y.; Wei, B.; Jin, Z.; Xu, X. Evaluation of ChatGPT-generated medical responses: A systematic review and meta-analysis. J. Biomed. Inform. 2024, 151, 104620.
- Hu, D.; Guo, Y.; Zhou, Y.; Flores, L.; Zheng, K. A systematic review of early evidence on generative AI for drafting responses to patient messages. npj Health Syst. 2025, 2, 27.
- Hager, P.; Jungmann, F.; Holland, R.; Bhagat, K.; Hubrecht, I.; Knauer, M.; Vielhauer, J.; Makowski, M.; Braren, R.; Kaissis, G.; et al. Evaluation and mitigation of the limitations of large language models in clinical decision-making. Nat. Med. 2024, 30, 2613–2622.
- Fassbender, A.; Donde, S.; Silva, M.; Friganovic, A.; Stievano, A.; Costa, E.; Winders, T.; Van Vugt, J. Adoption of Digital Therapeutics in Europe. Ther. Clin. Risk Manag. 2024, 20, 939–954.
- Bhuyan, S.S.; Sateesh, V.; Mukul, N.; Galvankar, A.; Mahmood, A.; Nauman, M.; Rai, A.; Bordoloi, K.; Basu, U.; Samuel, J. Generative Artificial Intelligence Use in Healthcare: Opportunities for Clinical Excellence and Administrative Efficiency. J. Med. Syst. 2025, 49, 10.
- Singh, A. Automated Patient History Summarization Using Generative AI. SSRN. 2025. Available online: https://www.ssrn.com/abstract=5184805 (accessed on 15 September 2025).
- Khosravi, B.; Purkayastha, S.; Erickson, B.J.; Trivedi, H.M.; Gichoya, J.W. Exploring the potential of generative artificial intelligence in medical image synthesis: Opportunities, challenges, and future directions. Lancet Digit. Health 2025, 7, 100890.
- Pietrantonio, F.; Piasini, L.; Spandonaro, F. Internal Medicine and emergency admissions: From a national Hospital Discharge Records (SDO) study to a regional analysis. Ital. J. Med. 2016, 10, 157–167.
- Ferrari-Light, D.; Merritt, R.E.; D’Souza, D.; Ferguson, M.K.; Harrison, S.; Madariaga, M.L.; Lee, B.E.; Moffatt-Bruce, S.D.; Kneuertz, P.J. Evaluating ChatGPT as a patient resource for frequently asked questions about lung cancer surgery—A pilot study. J. Thorac. Cardiovasc. Surg. 2025, 169, 1174–1180.e18.
- Yi, Y.; Kim, K.-J. The feasibility of using generative artificial intelligence for history taking in virtual patients. BMC Res. Notes 2025, 18, 80.
- Kister, K.; Laskowski, J.; Makarewicz, A.; Tarkowski, J. Application of artificial intelligence tools in diagnosis and treatment of mental disorders. Curr. Probl. Psychiatry 2023, 24, 1–18.
- Reddy, S. Generative AI in healthcare: An implementation science informed translational path on application, integration and governance. Implement. Sci. 2024, 19, 27.
- Stanceski, K.; Zhong, S.; Zhang, X.; Khadra, S.; Tracy, M.; Koria, L.; Lo, S.; Naganathan, V.; Kim, J.; Dunn, A.G.; et al. The quality and safety of using generative AI to produce patient-centred discharge instructions. npj Digit. Med. 2024, 7, 329.
- Omar, M.; Soffer, S.; Agbareia, R.; Bragazzi, N.L.; Apakama, D.U.; Horowitz, C.R.; Charney, A.W.; Freeman, R.; Kummer, B.; Glicksberg, B.S.; et al. Sociodemographic biases in medical decision making by large language models. Nat. Med. 2025, 31, 1873–1881.
- Chen, M.; Mohd Said, N.; Mohd Rais, N.C.; Ho, F.; Ling, N.; Chun, M.; Ng, Y.S.; Eng, W.N.; Yao, Y.; Korc-Grodzicki, B.; et al. Remaining Agile in the COVID-19 pandemic healthcare landscape—How we adopted a hybrid telemedicine Geriatric Oncology care model in an academic tertiary cancer center. J. Geriatr. Oncol. 2022, 13, 856–861.
- Lekadir, K.; Frangi, A.F.; Porras, A.R.; Glocker, B.; Cintas, C.; Langlotz, C.P.; Weicken, E.; Asselbergs, F.W.; Prior, F.; Collins, G.S.; et al. FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare. BMJ 2025, 388, e081554.
- Li, H.; Moon, J.T.; Purkayastha, S.; Celi, L.A.; Trivedi, H.; Gichoya, J.W. Ethics of large language models in medicine and medical research. Lancet Digit. Health 2023, 5, e333–e335.
- Zhang, K.; Meng, X.; Yan, X.; Ji, J.; Liu, J.; Xu, H.; Zhang, H.; Liu, D.; Wang, J.; Wang, X.; et al. Revolutionizing Health Care: The Transformative Impact of Large Language Models in Medicine. J. Med. Internet Res. 2025, 27, e59069.
- Drago, C.; Gatto, A.; Ruggeri, M. Telemedicine as technoinnovation to tackle COVID-19: A bibliometric analysis. Technovation 2023, 120, 102417.
- Aria, M.; Cuccurullo, C. bibliometrix: An R-tool for comprehensive science mapping analysis. J. Informetr. 2017, 11, 959–975.
- Pietrantonio, F.; Rosiello, F.; Alessi, E.; Pascucci, M.; Rainone, M.; Cipriano, E.; Di Berardino, A.; Vinci, A.; Ruggeri, M.; Ricci, S. Burden of COVID-19 on Italian Internal Medicine Wards: Delphi, SWOT, and Performance Analysis after Two Pandemic Waves in the Local Health Authority “Roma 6” Hospital Structures. Int. J. Environ. Res. Public Health 2021, 18, 5999.
- Savic, L.C.; Smith, A.F. How to conduct a Delphi consensus process. Anaesthesia 2023, 78, 247–250.
- Niederberger, M.; Spranger, J. Delphi Technique in Health Sciences: A Map. Front. Public Health 2020, 8, 457.
- Ruggeri, M.; Drago, C.; Rosiello, F.; Orlando, V.; Santori, C. Economic Evaluation of Treatments for Migraine: An Assessment of the Generalizability Following a Systematic Review. PharmacoEconomics 2020, 38, 473–484.
- Topol, E.J. High-performance medicine: The convergence of human and artificial intelligence. Nat. Med. 2019, 25, 44–56.
- Briganti, G.; Le Moine, O. Artificial Intelligence in Medicine: Today and Tomorrow. Front. Med. 2020, 7, 27.
- Kendziorra, J.; Seerig, K.H.; Winkler, T.J.; Gewald, H. From awareness to integration: A qualitative interview study on the impact of digital therapeutics on physicians’ practices in Germany. BMC Health Serv. Res. 2025, 25, 568.
- Moghbeli, F.; Langarizadeh, M.; Ali, A. Application of Ethics for Providing Telemedicine Services and Information Technology. Med. Arch. 2017, 71, 351.
- Mort, M.; Roberts, C.; Pols, J.; Domenech, M.; Moser, I.; the EFORTT investigators. Ethical implications of home telecare for older people: A framework derived from a multisited participative study. Health Expect. 2015, 18, 438–449.
- Wang, C.; Liu, S.; Yang, H.; Guo, J.; Wu, Y.; Liu, J. Ethical Considerations of Using ChatGPT in Health Care. J. Med. Internet Res. 2023, 25, e48009.
- Software as a Medical Device: Possible Framework for Risk Categorization and Corresponding Considerations. IMDRF. Available online: https://www.imdrf.org/documents/software-medical-device-possible-framework-risk-categorization-and-corresponding-considerations (accessed on 22 September 2025).
- World Health Organization. Telemedicine: Opportunities and Developments in Member States: Report on the Second Global Survey on eHealth. 2009. Available online: https://www.afro.who.int/publications/telemedicine-opportunities-and-developments-member-state (accessed on 25 April 2025).
- Rosiello, F.; Vinci, A.; Vitali, M.; Monti, M.; Ricci, L.; D’oca, E.; Damato, F.M.; Costanzo, V.; Ferrari, G.; Ruggeri, M.; et al. Is Rome ready to react to chemical, biological, radiological, nuclear, and explosive attacks? A tabletop simulation. Ital. J. Med. 2024, 18, 2.
- Hamet, P.; Tremblay, J. Artificial intelligence in medicine. Metabolism 2017, 69, S36–S40.
- Wachter, S. Normative challenges of identification in the Internet of Things: Privacy, profiling, discrimination, and the GDPR. Comput. Law Secur. Rev. 2018, 34, 436–449.
- Tolentino, R.; Baradaran, A.; Gore, G.; Pluye, P.; Abbasgholizadeh-Rahimi, S. Curriculum Frameworks and Educational Programs in AI for Medical Students, Residents, and Practicing Physicians: Scoping Review. JMIR Med. Educ. 2024, 10, e54793.
- Subbaraman, R.; De Mondesert, L.; Musiimenta, A.; Pai, M.; Mayer, K.H.; Thomas, B.E.; Haberer, J. Digital adherence technologies for the management of tuberculosis therapy: Mapping the landscape and research priorities. BMJ Glob. Health 2018, 3, e001018.
SWOT Analysis of AI in Internal Medicine

Strengths
- S1. Extensive data analysis: AI can recognize patterns in extensive clinical datasets and improve diagnostic accuracy.
- S2. Early diagnosis: By detecting subtle signs in medical images (X-rays, CT scans, MRI), AI can identify early signs of disease.
- S3. Development of new drugs: AI accelerates drug development, cutting traditional research time and costs.
- S4. Personalized care: AI facilitates personalized treatment plans based on genetic and individual profiles.
- S5. Improved accessibility: AI improves access to medical services in remote areas.
- S6. Lower healthcare costs: AI can decrease hospital admissions and specialist visits.
- S7. Increased patient engagement: Digital therapeutics (DTx) encourage an active role in managing personal health.
- S8. Continuous access to care: AI applications enable patients to receive support 24/7.
- S9. Real-time data monitoring: DTx enable personalized treatment adjustments.
- S10. Improved treatment tracking: AI allows physicians to monitor patient adherence and progress more accurately.

Weaknesses
- W1. Data privacy and security: Protecting patient information and securing digital platforms is critical.
- W2. Limited accessibility: Not all patients have access to, or can effectively use, digital tools.
- W3. Regulation: A clear and shared legal framework for AI is still evolving.
- W4. Data dependency: AI efficacy relies on the quantity and quality of training data.
- W5. Biased data: If AI is trained on biased data, the outcomes can be inaccurate and discriminatory.
- W6. High costs: Developing and implementing AI-based solutions requires significant investment.
- W7. Regulatory fragmentation in the EU: Substantial differences in regulatory interpretation exist across EU Member States.
- W8. Lack of standardized reimbursement for DTx: Most countries have no defined reimbursement process for DTx, which may prevent adoption.
- W9. Insufficient funds for DTx: Adequate financial support for DTx is lacking.

Opportunities
- O1. Decreased healthcare burden: DTx may prevent hospital visits, improve patient self-management, and provide remote therapy.
- O2. Advancements in precision medicine: Precision medicine can evolve rapidly as AI fosters highly personalized treatments.
- O3. Innovative care models: New care models can be introduced, such as telemedicine and preventive medicine.
- O4. Improved human–machine cooperation: AI can function as a valuable assistant to healthcare professionals.
- O5. Research and development: AI accelerates the discovery of new therapies and the use of Digital Twin technology for simulations.

Threats
- T1. Regulatory uncertainty for software changes: Discrepancies remain regarding software updates and how they affect market authorization.
- T2. Security and privacy risks: Robust security and privacy measures to protect patient data are still to be developed.
- T3. Effectiveness comparison: The effectiveness of digital therapies compared with traditional drugs is still unclear.
- T4. Ethical considerations: The application of AI in healthcare raises complex ethical issues, such as liability in case of errors and potential dehumanization of patient care.
- T5. Resistance to change: The adoption of AI may face opposition from healthcare professionals and patients.
- T6. Technological dependence: Excessive reliance on AI-based technologies can expose healthcare systems to service disruption risks.
Official Statement by the European Federation of Internal Medicine on the Implementation of Artificial Intelligence

The European Federation of Internal Medicine (EFIM) acknowledges the rapid progress of AI and its potential to disrupt medical practice. These recommendations are designed to guide internists in using AI responsibly and effectively to improve patient care.

Section 1: General Recommendations for Using AI in Internal Medicine
1. Relevance of Core Clinical Skills: AI should augment core competencies rather than substitute for them. Core competencies in internal medicine include accurate information gathering through patient interviews, physical examinations, and medical history review.
2. Data Processing and Conversion: Internal medicine specialists can advocate for intelligent, supportive information systems. Increasing volumes of health data pose a significant administrative challenge. AI may be able to organize information efficiently, which is essential for digitizing clinical workflows [52]. Data architectures, AI, and health information systems capable of transforming data and information bidirectionally would best support clinical and shared decision-making.
3. Streamlining Workflow and Reducing Burnout: AI applications serve to augment clinical practice, for example, by optimizing workflow and automating routine or repetitive tasks. Internal medicine specialists can then focus more on patients, and healthcare organizations can reduce burnout related to administrative burdens.
4. Interdisciplinary Cooperation: Internal medicine specialists work within a collaborative team, including but not limited to healthcare professionals, data scientists, AI application developers, and patients. Interdisciplinary teamwork ensures appropriate participation of key individuals in optimal AI applications in internal medicine.
5. Data Privacy and Protection: Education on security protocols and regular security audits facilitate regulatory compliance, which is mandatory. Standard data privacy and security regulations apply to ensure secure AI implementation and minimize exposures [53]. Ethical and responsible data collection is essential to ensure patient privacy and consent and to maintain patient safety and trust [54]. Internal medicine specialists or their healthcare organizations must ensure compliance throughout the process of AI development, use, and evaluation.
6. Ethical Concerns and Human Expertise: Human-in-the-loop approaches for AI applications should involve internal medicine specialists. Human expertise and supervision can help identify biases [55] and provide professional interpretation of AI outputs.
7. Liability and AI: Liability for errors resulting from AI use remains uncertain. Clarifying liability is imperative across all medical specialties, and internal medicine specialists can engage in advocacy to ensure appropriate representation of this cognitive specialty during policymaking.
8. Digital Skills and Continuous Learning: Internal medicine specialists would benefit from continuous learning and development of digital skills, including AI literacy, as technologies evolve, even if the extent and content of such a skillset is still a matter of debate [56]. As AI applications are constantly evolving, internists are also encouraged to stay informed on the latest advancements in AI in internal medicine [57].

Section 2: Specific Recommendations for Using Generative AI (GAI) in Internal Medicine
1. Data Augmentation and Personalization: Generative AI (GAI) can enhance patient data collection to support personalized medicine strategies. These data could feed Big Data models for more precise diagnoses and tailored treatment plans. Additionally, GAI can create anonymized synthetic datasets for research, accelerating drug discovery and other healthcare innovations.
2. Population Health Management: GAI has the potential to model disease outbreaks and develop more effective public health interventions. By simulating different scenarios, internists and public health officials can proactively design strategies to minimize the impact of infectious diseases and chronic conditions in populations.
3. Transparency and Patient Consent: Healthcare professionals must communicate clearly with patients about the role of Large Language Models (LLMs) in the clinical setting. Internists should explain both the potential benefits and risks associated with GAI applications and obtain informed consent from patients before implementing GAI-based tools in their care.
4. Specialization and Expertise: As GAI applications in healthcare evolve, it is advisable for some internists to specialize in areas where GAI can have a significant impact, such as precision medicine or chronic disease management.

EFIM's Call to Action
AI applications in internal medicine offer significant opportunities to improve both the efficiency and quality of care and to reduce administrative burdens. EFIM recommends that specialist physicians in internal medicine, along with their organizations, professional societies, and communities, take a leading role in shaping the education, design, implementation, evaluation, and regulation of AI in clinical settings.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Said-Criado, I.; Pietrantonio, F.; Montagna, M.; Rosiello, F.; Missikoff, O.; Drago, C.; Leung, T.I.; Vinci, A.; Signorini, A.; Gómez-Huelgas, R., on behalf of the EFIM Telemedicine, Innovative Technologies and Digital Health Working Group. Advancing Toward P6 Medicine: Recommendations for Integrating Artificial Intelligence in Internal Medicine. Clin. Pract. 2025, 15, 200. https://doi.org/10.3390/clinpract15110200