Article

Responsible AI for Personalized Patient Education and Engagement Across Medical Conditions: Leveraging Multi-Agent LLMs, Ambient Technology, and NotebookLM—A Case Study in Diabetes Education and Limb Preservation

1 Silverberry Group, Inc., Vancouver, BC V3H 5L1, Canada
2 College of Nursing, University of Arizona, Tucson, AZ 85721, USA
3 John L. McClellan Memorial Veteran’s Hospital, Little Rock, AR 72205, USA
4 Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA 19104, USA
5 College of Osteopathic Medicine, California Health Sciences University, Clovis, CA 93611, USA
6 Weiss School of Natural Sciences, Rice University, Houston, TX 77005, USA
7 Keck School of Medicine of the University of Southern California, Los Angeles, CA 90033, USA
* Author to whom correspondence should be addressed.
Current Address: Betty Irene Moore School of Nursing, University of California Davis, Sacramento, CA 95817, USA.
J. Am. Podiatr. Med. Assoc. 2026, 116(3), 30; https://doi.org/10.3390/japma116030030
Submission received: 8 February 2025 / Revised: 28 October 2025 / Accepted: 22 November 2025 / Published: 8 May 2026

Abstract

Background: Effective communication with patients is vital for improving health outcomes in chronic disease management. In this study, we investigated WoundScribeAI’s Scribe AI, also known as Ambient Technology, and its patient education and engagement app, Pingoo.AI. The system employed a multi-agent AI model that leveraged Large Language Models (LLMs) and NotebookLM to enhance patient communication in clinical settings. Methods: The system comprised specialized agents that transcribed healthcare provider–patient conversations through ambient dictation. This transcription generated medical notes that followed the Subjective, Objective, Assessment, and Plan (SOAP) format—a structured document used by healthcare providers to record and communicate information about patient encounters. Simultaneously, comprehensive visit summaries were also created. In the next step, these visit summaries were used to produce conversational and educational content by leveraging NotebookLM, an AI model introduced by Google that can generate podcast-style conversations from provided information. Integrating these agents allows clinicians to deliver engaging, empathetic, and actionable information to patients. Medical experts conducted a two-phase evaluation of the system’s performance based on multiple criteria, with a particular focus on diabetes education and diabetic foot care. The first phase used pre-recorded training videos, while the second phase involved simulated consultations by clinicians using the system. To validate the AI-generated educational content, we used several established frameworks in health communication that closely align with our enhancement goals. Results: The results showed that the AI model generated accurate clinical documentation and met the criteria for accurate SOAP Notes, visit summaries, and engaging educational content for patients.
Given that hallucination is a significant concern related to large language models, especially in critical fields like healthcare, we meticulously analyzed the generated outputs to identify any signs of hallucinated information. All three outputs (SOAP notes, visit summaries, and educational content) successfully passed the validation criteria, including accuracy, completeness, comprehensiveness, absence of potential harm, and no hallucination. Additionally, the Conversational Education content was confirmed against established patient education frameworks and met criteria such as the use of metaphors, empathetic tone, and appropriate language, providing additional detail to help manage the condition. Conclusions: By providing specific instructions and prompts to NotebookLM to transform visit summaries into educational conversations, we significantly enhanced the comprehensiveness and engagement of the content for patients. In contrast to a traditional summary of the clinical visit, the podcast-style conversation enriched the content with background information, encouraging language, an empathetic tone, and helpful metaphors. Our analysis confirmed that the system did not exhibit any hallucinations, highlighting the effectiveness of our approach in mitigating this risk. These findings support the use of multi-agent AI models, combined with ambient dictation and tools like NotebookLM, to improve patient communication that surpasses traditional paper-based brochures, which are often impersonal, minimal, and do not always adhere to recommended factors for health literacy.

1. Introduction

Effective patient education and engagement are crucial for managing chronic conditions like diabetes. Alarmingly, a person loses a limb due to a diabetic foot ulcer every 20 seconds [1,2]. However, up to 85% of lower limb amputations can be prevented with early intervention and appropriate patient education [3,4]. Despite this potential for prevention, current approaches to patient education face significant challenges. Educational materials are often generic and not tailored to meet the individual needs and preferences of patients [5]. Furthermore, these materials are frequently complex and difficult to understand, failing to address language barriers, which limits their accessibility and effectiveness for diverse patient populations [6].
Integrating patient education into the overall care plan is essential; however, it is often treated as a separate component of healthcare, which diminishes its effectiveness [7]. Current educational methods frequently lack engagement, relying heavily on static content instead of incorporating interactive elements that could better capture patient interest and promote involvement [8]. Additionally, the production of educational materials can be costly, making it less feasible to reach those who need them most [9]. There is often a lack of rigorous evaluation of patient education programs and materials, complicating the measurement of their true impact and effectiveness [10].
Innovative technologies, such as AI-powered ambient dictation services and multi-agent AI models using large language models (LLMs), present new opportunities to enhance patient education and engagement [11]. Previous studies have shown that using a scribe system to generate Subjective, Objective, Assessment, and Plan (SOAP) notes can save clinicians approximately 3.2 min per encounter, leading to an 11% to 17% reduction in time spent in the clinic [12,13]. However, the impact of AI tools on patient education and engagement has not yet been thoroughly examined.
This study explored a multi-agent AI system designed to transcribe real-time physician–patient conversations and create SOAP notes for clinicians while also producing personalized visit summaries and engaging educational content for patients. The system consisted of several specialized agents:
  • Transcription Agent: Used OpenAI’s Whisper model to transcribe physician–patient conversations.
  • SOAP Note Generation Agent: Employed few-shot learning techniques to accurately generate SOAP Notes from the transcriptions.
  • Visit Summary Agent: Created comprehensive visit summaries for patients.
  • Conversational Education Agent (NotebookLM): Processed the visit summaries to produce engaging, podcast-style educational content by following specific instructions and prompts [14].
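The linear workflow of the four agents can be sketched as follows. This is a minimal, runnable illustration of the data flow only: the agent internals are stubbed (the Whisper transcription and LLM calls are replaced with placeholder returns), and all function names and sample text are hypothetical rather than the study's actual implementation.

```python
# Hypothetical sketch of the four-agent linear workflow described above.
# Each agent is a stub standing in for a real Whisper or LLM call.

def transcription_agent(audio_path: str) -> str:
    # In the study this step used OpenAI's Whisper model; stubbed here.
    return "Patient reports heel pain that worsens in the morning."

def soap_note_agent(transcript: str) -> dict:
    # Few-shot prompting would map the transcript into SOAP sections.
    return {"Subjective": transcript, "Objective": "", "Assessment": "", "Plan": ""}

def visit_summary_agent(soap_note: dict) -> str:
    # Rewrites the clinical note as a lay-language summary for the patient.
    return f"During your visit you described: {soap_note['Subjective']}"

def conversational_education_agent(summary: str) -> str:
    # NotebookLM-style step: turn the summary into podcast-style dialogue.
    return f"Host A: {summary}\nHost B: Let's talk about what that means for you."

def run_pipeline(audio_path: str) -> dict:
    # The agents run in sequence; each consumes the previous agent's output.
    transcript = transcription_agent(audio_path)
    soap = soap_note_agent(transcript)
    summary = visit_summary_agent(soap)
    education = conversational_education_agent(summary)
    return {"soap": soap, "summary": summary, "education": education}
```

Because the workflow is strictly linear, a transcription error can propagate downstream, which is why the study evaluated transcription accuracy separately before assessing the notes and summaries.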
Figure 1 illustrates how various processes and technologies are used to integrate patient education and engagement into the care delivery plan. By equipping clinicians with tools to create accessible, empathetic, and engaging patient information—including podcast-style conversations—the system aims to empower patients with actionable insights regarding their health and necessary next steps.
The study focused on validating the system’s effectiveness in enhancing patient education and engagement, particularly in the context of diabetes education and diabetic foot care, while ensuring accuracy and safety. We evaluated the system’s ability to provide reliable documentation and educational content, ultimately promoting better health literacy and adherence to medical advice.

2. Materials and Methods

2.1. Study Design

The study was conducted in two phases. The first phase used pre-recorded training videos, while the second phase involved simulated consultations by clinicians using ambient technology.
Phase 1: The objective of this phase was to assess the performance of a multi-agent AI system using pre-recorded training videos for medical students. We collected 30 audio recordings from YouTube that showcased various conversations between physicians and patients. These recordings were analyzed using three different systems: Google Cloud’s “Video” model, Google Cloud’s “Medical Conversation” model, and Whisper by OpenAI. The 30 audio recordings covered a range of medical specialties.

2.2. Multi-Agent Model Implementation

The four agents mentioned above were implemented using prompting techniques and few-shot learning. The transcripts were evaluated based on accuracy, general word recognition, and medical term recognition. Clinicians assessed SOAP notes and patient summaries using a 5-point scale on various criteria.
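The few-shot setup for the SOAP note agent can be illustrated as a prompt-assembly step: worked transcript-to-note pairs are prepended before the new transcript. The example pair, the function name, and the prompt wording below are illustrative assumptions, not the study's actual prompts.

```python
# Hypothetical few-shot prompt assembly for the SOAP note agent.
# The worked example and instruction text are illustrative only.

FEW_SHOT_EXAMPLES = [
    {
        "transcript": "Doctor: How's the heel? Patient: Still sore when I wake up.",
        "soap": ("Subjective: Persistent morning heel pain.\n"
                 "Objective: Not documented.\n"
                 "Assessment: Consistent with plantar fasciitis.\n"
                 "Plan: Stretching exercises; follow up in 4 weeks."),
    },
]

def build_soap_prompt(transcript: str) -> str:
    """Prepend worked transcript-to-SOAP examples, then the new transcript."""
    parts = ["Convert the clinic transcript into a SOAP note."]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Transcript:\n{ex['transcript']}\nSOAP note:\n{ex['soap']}")
    # The model continues from the trailing "SOAP note:" cue.
    parts.append(f"Transcript:\n{transcript}\nSOAP note:")
    return "\n\n".join(parts)
```

The resulting string would be sent to the LLM, which completes the final SOAP note by pattern-matching against the demonstrated examples.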
Phase 2: The objective of this phase was to evaluate the system’s performance in real-world scenarios by simulating actual physician–patient conversations. This phase focused on diabetes education and diabetic foot care. Clinicians conducted 30 simulated conversations that mirrored real patient interactions, using the same multi-agent model (four agents) to evaluate the results. The simulated physician–patient encounters included a variety of outpatient health conditions commonly seen by podiatric physicians. The diverse sample population represented a wide range of conditions and comorbidities typically encountered in clinical settings. These conditions included infected ulcerations and onychomycosis. Many of these patients also presented with additional comorbidities such as diabetes, neuropathy, peripheral vascular disease, and kidney disease.
The content generated by the system was evaluated by clinicians against multiple criteria for SOAP notes and patient visit summaries, as detailed in Table 1 and Table 2.

2.3. Data Collection

Transcriptions and SOAP notes were created by specialized agents and reviewed by clinicians to ensure accuracy and relevance. We adhered to the NIH’s recommended SOAP note structure for the subcategories within each section [15]. Enhanced patient visit summaries, which included initial overviews along with additional conversational and educational content generated by NotebookLM, were also rigorously evaluated for their utility and clarity. To further refine these tools, we systematically collected both qualitative and quantitative feedback from clinicians to gain valuable insights into the usability and effectiveness of these AI-driven documentation and summary enhancements in supporting patient care.

2.4. Evaluation Metrics

To evaluate the transcription systems, we measured the number of transcriptions that met accuracy criteria and analyzed the failure breakdown for the Whisper model. For the evaluation of SOAP notes and Patient Visit Summaries, we applied a 5-point rating scale: 1 (Poor), 2 (Fair), 3 (Good), 4 (Very Good), and 5 (Excellent). Descriptive statistics were used to summarize the ratings, and tables and graphs were created to visualize the data.
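The descriptive statistics reported in Tables 5 and 6 amount to converting per-encounter ratings into a percentage breakdown per rating level. A minimal sketch of that computation, with illustrative scores chosen to reproduce one row of Table 5:

```python
from collections import Counter

# The study's 5-point rating scale.
RATINGS = {1: "Poor", 2: "Fair", 3: "Good", 4: "Very Good", 5: "Excellent"}

def rating_breakdown(scores: list[int]) -> dict[str, float]:
    # Percentage of evaluations at each rating level, rounded to one decimal.
    counts = Counter(scores)
    n = len(scores)
    return {label: round(100 * counts.get(value, 0) / n, 1)
            for value, label in RATINGS.items()}

# Illustrative scores for one criterion across 30 simulated encounters:
# 26 Excellent and 4 Very Good, matching the Accuracy row of Table 5.
print(rating_breakdown([5] * 26 + [4] * 4))
```

With 30 encounters, each rating step changes a percentage by about 3.3 points, which is why values such as 3.3%, 13.3%, and 86.7% recur throughout the tables.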

3. Results and Discussion

As the first step, we compared the performance of three transcription systems, summarized in Table 3.
The Whisper model significantly outperformed the other systems; its failure breakdown is shown in Table 4. Transcription accuracy was crucial, as the SOAP note and the Visit Summary for the patient were created from the visit transcript.
Speaker differentiation did not impact the downstream tasks, as the system effectively extracted the required information.
The results for SOAP note creation performance are reported in Table 5. As illustrated, the system performed well, with ratings predominantly of Excellent or Very Good.
Similar results were obtained when evaluating the Patient Visit Summary, which is summarized in Table 6.
3.1. Visit Summary vs. Conversational Education
Standard visit summaries provide necessary information but often have a neutral tone and lack personalization, empathy, and engaging language, making them appear to be ‘dry.’ NotebookLM effectively transformed these summaries into conversational educational content that is rich in metaphors, encouraging language, and personalized messages, as shown in Table 7.
To validate the AI-generated educational content, we referenced several established frameworks in health communication and patient education that aligned closely with our enhancement goals. The parameters of Tone and Language, as well as the Level of Detail and Explanation, are central to the Plain Language Guidelines [16]. These guidelines advocate for clear and concise language tailored to the patient’s literacy level, avoiding medical jargon to enhance comprehension. The Use of Metaphors and Analogies aligns with recommendations from the Health Literacy Universal Precautions Toolkit [17], which suggests using familiar concepts to explain complex medical information. Empathy and Emotional Support are emphasized in the Patient-Centered Communication Framework [18], highlighting the importance of acknowledging patients’ emotions and building rapport. Motivational interviewing techniques support engagement and interaction parameters [19], which encourage active patient participation and open dialogue to foster collaborative care. Practical Advice and Suggestions, along with Patient Empowerment and Education, are integral components of the Patient Education Materials Assessment Tool (PEMAT) [20]. This tool ensures that educational materials provide actionable steps and promote patient autonomy. Lastly, the Inclusion of Additional Resources is recommended in the Health Literacy Universal Precautions Toolkit [17], advocating for supplementary materials that reinforce patient understanding.
By aligning the parameters with established frameworks, we ensured that the AI-generated content adhered to best practices in patient education and communication. These frameworks served as guiding references for the development and evaluation of the system. This approach enabled us to systematically address each parameter, resulting in content that is accurate, engaging, empathetic, and empowering for patients, thereby enhancing their understanding and adherence to treatment plans.
Table 8 summarizes the differences between the visit summary and the conversational education produced by NotebookLM, based on the prompts and instructions across eight categories.
The absence of ‘Fair’ or ‘Poor’ ratings in clinician evaluations highlighted the system’s high performance and reliability. The four agents operated efficiently within a linear workflow, successfully producing an accurate and reliable Visit Summary and engaging educational dialogues for each visit. Reviewers, including medical students and a podiatrist, rated these summaries highly, indicating their potential usefulness in clinical settings.
Importantly, the system did not exhibit hallucinations, which is a significant concern associated with content generated by LLMs. Reviewers found that the content produced by the Conversational Education Agent (NotebookLM) was accurate, devoid of potential harm, and enriched by empathetic language and helpful metaphors. Although minor transcription errors did occur, these did not negatively affect the creation of SOAP notes or the generation of patient visit summaries. The system’s effective information extraction helped to mitigate the impact of these errors. Table 7 presents examples of conversational education content compared to the content in the visit summary across eight factors.

4. Conclusions

This study demonstrates that using LLMs and a multi-agent AI model with ambient dictation—specifically employing NotebookLM with targeted prompts and instructions—can significantly enhance the accuracy of clinical documentation and the quality of educational content. The system effectively produced SOAP notes and patient visit summaries that were accurate, comprehensive, and free from harmful recommendations. Its ability to extract information from physician conversations and dictations, without issues related to speaker differentiation or hallucination, highlights its robustness.
To validate the AI-generated educational content, we used several established frameworks in health communication and patient education that align closely with our enhancement goals. These findings support the potential adoption of such technologies in clinical practice, particularly for managing chronic conditions like diabetes and diabetic foot care. Comparisons between visit summaries generated from doctor-patient conversations and Conversational Education showed improvement across eight factors, contributing to more effective communication with patients.
These findings provide strong evidence in favor of using multi-agent AI models, paired with ambient dictation technologies and advanced tools such as NotebookLM, to significantly enhance communication with patients. This innovative approach offers numerous advantages over conventional paper-based brochures, which often lack a personal touch and do not always meet the essential criteria for effective health literacy. By integrating these technologies, healthcare providers can ensure that patients receive personalized and accessible information that better supports their understanding and engagement in their healthcare.

5. Limitations

This study did not include actual patient interactions or collect feedback from patients; therefore, we could not directly measure the effectiveness of the educational content on patient engagement and adherence. Additionally, the use of simulated conversations may introduce biases related to the populations that clinicians typically encounter in their practice, which may not be fully eliminated.

6. Future Work

To fully assess the system’s practical utility, future studies should involve real-world clinical implementations that include patient interactions and collect feedback on how enhanced educational content affects patient engagement. Subsequent research could focus on measuring improvements in patient adherence to treatment plans resulting from personalized visit summaries and conversational educational materials. Additionally, conducting studies on a larger scale and with a diverse patient population would help mitigate the potential biases identified in the limitations and strengthen the generalizability of the findings.

Author Contributions

Conceptualization, S.M.; methodology, S.M., A.R. and J.F.; validation, S.-F.W., A.R., J.F., J.R., S.A., M.H., and D.G.A.; formal analysis, S.-F.W., A.R., J.F. and J.R.; investigation, S.M., A.R. and J.F.; resources, S.M.; data curation, S.A. and M.H.; writing—original draft preparation, S.M.; writing—review and editing, S.-F.W. and D.G.A.; visualization, S.M.; supervision, D.G.A.; project administration, S.A.; funding acquisition, S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was sponsored by Silverberry Group, Inc. Shayan Mashatian is an officer and shareholder, Shereen Aziz is an employee, and Ilia Alenabi is a research student at Silverberry Group. This study is partially supported by the National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases Award Number 1R01124789-01A1 and by the National Science Foundation (NSF) Center to Stream Healthcare in Place (#C2SHiP), CNS Award Number 2052578.

Institutional Review Board Statement

Not applicable. Ethical review and approval were not required because the study did not involve human subjects or interactions with actual patients.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data underlying this study cannot be shared publicly due to confidentiality restrictions.

Acknowledgments

The authors would like to thank Ilia Alneabi for his contributions to programming the system and his productive discussions in prompt engineering, as well as Mery Ritter for her assistance with research documentation.

Conflicts of Interest

Shayan Mashatian is an officer and shareholder at Silverberry Group, Inc. Shereen Aziz was an employee of Silverberry at the time of her participation in this research.

Abbreviations

SOAP: Subjective, Objective, Assessment, and Plan; AI: Artificial Intelligence; LLM: Large Language Model.

References

  1. Armstrong, D.G.; Boulton, A.J.; Bus, S.A. Diabetic foot ulcers and their recurrence. N. Engl. J. Med. 2017, 376, 2367–2375. [Google Scholar] [CrossRef] [PubMed]
  2. Armstrong, D.G.; Tan, T.; Boulton, A.J.M.; Bus, S.A. Diabetic Foot Ulcers: A Review. JAMA 2023, 330, 62–75. [Google Scholar] [CrossRef] [PubMed]
  3. Dwamena, F.; Holmes-Rovner, M.; Gaulden, C.M.; Jorgenson, S.; Sadigh, G.; Sikorskii, A.; Lewin, S.; Smith, R.C.; Coffey, J.; Olomu, A.; et al. Interventions to promote patient-centered care during clinical consultations. Cochrane Database Syst. Rev. 2012, 12, CD003267. [Google Scholar] [CrossRef] [PubMed]
  4. Armstrong, D.G.; Lavery, L.A.; Harkless, L.B. Who is at risk for diabetic foot ulceration? Clin. Podiatr. Med. Surg. 1998, 15, 11–19. [Google Scholar] [CrossRef] [PubMed]
  5. Schillinger, D.; Bindman, A.; Wang, F.; Stewart, A.; Piette, J. Functional health literacy and the quality of physician–patient communication among diabetes patients. Patient Educ. Couns. 2004, 52, 315–323. [Google Scholar] [CrossRef] [PubMed]
  6. Richard, C.; Lussier, M.T. Measuring patient and physician participation in exchanges on medications: Dialogue ratio, preponderance of initiative, and dialogical roles. Patient Educ. Couns. 2007, 65, 329–341. [Google Scholar] [CrossRef] [PubMed]
  7. Berkman, N.D.; Sheridan, S.L.; Donahue, K.E.; Halpern, D.J.; Crotty, K. Low health literacy and health outcomes: An updated systematic review. Ann. Intern. Med. 2011, 155, 97–107. [Google Scholar] [CrossRef] [PubMed]
  8. Badarudeen, S.; Sabharwal, S. Assessing readability of patient education materials: Current role in orthopaedics. Clin. Orthop. Relat. Res. 2010, 468, 2572–2580. [Google Scholar] [CrossRef] [PubMed]
  9. Kandula, N.R.; Malli, T.; Zei, C.P.; Larsen, E.; Baker, D.W. Literacy and retention of information after a multimedia diabetes education program and teach-back. J. Health Commun. 2011, 16, 89–102. [Google Scholar] [CrossRef] [PubMed]
  10. Davenport, T.; Kalakota, R. The potential for artificial intelligence in healthcare. Future Healthc. J. 2019, 6, 94–98. [Google Scholar] [CrossRef] [PubMed]
  11. Sutton, R.T.; Pincock, D.; Baumgart, D.C.; Sadowski, D.C.; Fedorak, R.N.; Kroeker, K.I. An overview of clinical decision support systems: Benefits, risks, and strategies for success. NPJ Digit. Med. 2020, 3, 17. [Google Scholar] [CrossRef] [PubMed]
  12. Hasan, S.; Krijnen, P.; van den Akker-van Marle, M.E.; Schipper, I.B.; Bartlema, K.A. Medical scribe in a trauma surgery outpatient clinic; shorter, cheaper consultations and satisfied doctors. Ned. Tijdschr. Geneeskd. 2018, 162, D2614. [Google Scholar] [PubMed]
  13. Elton, A.C.; Schutte, D.; Ondrey, G.; Ondrey, F.G. Medical scribes improve documentation consistency and efficiency in an otolaryngology clinic. Am. J. Otolaryngol. 2022, 43, 103510. [Google Scholar] [CrossRef] [PubMed]
  14. NotebookLM. Available online: https://support.google.com/notebooklm/answer/14273541?hl=en (accessed on 10 October 2024).
  15. Podder, V.; Lew, V.; Ghassemzadeh, S. SOAP Notes. Available online: https://www.ncbi.nlm.nih.gov/books/NBK482263/ (accessed on 15 June 2024).
  16. Plain Language Action and Information Network (PLAIN). Federal Plain Language Guidelines. Available online: https://www.plainlanguage.gov/guidelines/ (accessed on 30 October 2024).
  17. DeWalt, D.A.; Callahan, L.F.; Hawk, V.H.; Broucksou, K.A.; Hink, A. Health Literacy Universal Precautions Toolkit; AHRQ Publication No. 10-0046-EF; Agency for Healthcare Research and Quality: Rockville, MD, USA, 2010.
  18. Epstein, R.M.; Street, R.L., Jr. Patient-Centered Communication in Cancer Care: Promoting Healing and Reducing Suffering; NIH Publication No. 07-6225; National Cancer Institute: Bethesda, MD, USA, 2007.
  19. Miller, W.R.; Rollnick, S. Motivational Interviewing: Helping People Change, 3rd ed.; Guilford Press: New York, NY, USA, 2013. [Google Scholar]
  20. Shoemaker, S.J.; Wolf, M.S.; Brach, C. The Patient Education Materials Assessment Tool (PEMAT) and User’s Guide; AHRQ Publication No. 14-0002-EF; Agency for Healthcare Research and Quality: Rockville, MD, USA, 2013.
Figure 1. Integrating Patient Education into the Care Delivery Plan.
Table 1. Validation Criteria for SOAP Note.
Criteria | Definition
Accuracy | The captured information was correct.
Completeness | No critical information was omitted.
Organization of data | The information adhered to the SOAP note structure.
Comprehensiveness | The wording and explanations were appropriate and accurate.
Relevance | The content complied with standard medical documentation.
No Hallucination | The system accurately captured and transcribed the conversations. Furthermore, it did not include any information that was not present in the recorded conversations when generating visit summaries or educational materials.
Table 2. Validation Criteria for Patient Visit Summary.
Criteria | Definition
Accuracy | The captured information was correct.
Completeness | No critical information was missing.
Organization of data | The Visit Summary and follow-up actions were properly extracted and organized.
Comprehensiveness | The information presented was written for a layperson with an eighth-grade literacy level and did not include medical jargon.
No Possible Harm | No recommendations were made to the patient that could potentially cause harm.
No Hallucination | The system did not introduce any information that was not present in the original conversation.
Table 3. Transcription Systems Performance Comparison.
Transcription System | Pass | Fail | % Pass | % Fail
Google Cloud’s “Video” model | 0 | 30 | 0.0% | 100.0%
Google Cloud’s “Medical Conversation” | 3 | 27 | 10.0% | 90.0%
Whisper by OpenAI | 23 | 7 | 76.7% | 23.3%
Table 4. Whisper Model Failure Breakdown.
Failure Category | Number of Fails | Percentage
General Word Recognition | 1 | 14.29%
Medical Terms Recognition | 0 | 0.0%
Other Errors | 6 | 85.71%
Table 5. SOAP Note Creation Performance.
Evaluation Criterion | Excellent % | Very Good % | Good % | Fair % | Poor %
Accuracy | 86.7 | 13.3 | 0 | 0 | 0
Completeness | 83.3 | 13.3 | 3.3 | 0 | 0
Organization | 100 | 0 | 0 | 0 | 0
Comprehensiveness | 90 | 10 | 0 | 0 | 0
Relevance | 96.7 | 3.3 | 0 | 0 | 0
No Hallucination | 100 | 0 | 0 | 0 | 0
Table 6. Patient Summary Evaluation Metrics.
Evaluation Criterion | Excellent % | Very Good % | Good % | Fair % | Poor %
Accuracy | 90 | 10 | 0 | 0 | 0
Completeness | 98.4 | 3.3 | 3.3 | 0 | 0
Organization | 96.7 | 0 | 3.3 | 0 | 0
Comprehensiveness | 100 | 0 | 0 | 0 | 0
No possible harm | 100 | 0 | 0 | 0 | 0
No Hallucination | 100 | 0 | 0 | 0 | 0
Table 7. Visit Summary vs. Conversational Education Examples.
Tone and Language (conversational, casual tone)
Visit Summary: “During your recent visit, you explained that you have persistent pain in your left heel that worsens in the morning and after resting.”
Conversational Education: “So you’ve been dealing with some stubborn pain in your left heel, huh? And you just saw the doctor about it, right?”

Level of Detail and Explanation (explaining medications, treatments, and their purpose in a relatable manner)
Visit Summary: “The healthcare provider prescribed Naproxen, stretching exercises, supportive shoes, and over-the-counter inserts.”
Conversational Education: “Naproxen is like ibuprofen’s stronger cousin... The goal is to calm things down so you can move more comfortably.”

Use of Metaphors and Analogies (explaining brace purpose)
Visit Summary: “Your doctor fitted you with an Arizona brace to help manage the pain while walking.”
Conversational Education: “Arizona brace... Think of it like giving your ankle a much-needed timeout.”

Empathy and Emotional Support (reassurance, positive framing)
Visit Summary: “X-rays of your feet were normal, showing no fractures or dislocations.”
Conversational Education: “Your x-rays showed no bone issues. So that’s a definite win right off the bat.”

Engagement and Interaction (demystifying the procedure and making it sound manageable)
Visit Summary: “To treat this, surgical removal of the cyst is suggested, which would be done in the healthcare provider’s office.”
Conversational Education: “Your doctor recommended surgical removal... It’s a straightforward procedure—you will be in and out the same day.”

Practical Advice and Suggestions (lifestyle recommendations)
Visit Summary: No explicit mention of lifestyle changes or additional preventive care.
Conversational Education: “Proactive steps include wearing shoes with enough space for your toes... engaging activities like swimming or cycling to reduce pressure.”

Patient Empowerment and Education (explanation of connections)
Visit Summary: “Recommended regular at-home foot checks.”
Conversational Education: “Daily foot checks... You’re basically like your own foot detective now, looking for clues.”

Inclusion of Additional Resources
Visit Summary: No reference to additional support resources.
Conversational Education: “Reach out through the Pingoo app... It’s like having a pocket-sized health expert available 24–7.”
Table 8. Traditional Visit Summary vs. Conversational Education Comparison.
Tone and Language
Traditional Visit Summary: Clinical, formal, and objective. It uses medical terminology without much explanation, delivering information concisely and directly.
Conversational Education: Informal, friendly, and empathetic. It uses everyday language, metaphors, and analogies to clarify medical terms, and the tone is supportive and encouraging.

Level of Detail and Explanation
Traditional Visit Summary: Provides essential clinical information without elaboration and assumes the reader possesses some medical background.
Conversational Education: Expands on clinical information by offering detailed explanations of medical conditions in layman’s terms. It breaks down complex concepts into understandable segments.

Use of Metaphors and Analogies
Traditional Visit Summary: Lacks metaphors or analogies, relying solely on factual statements.
Conversational Education: Employs metaphors to simplify medical concepts for better understanding.

Empathy and Emotional Support
Traditional Visit Summary: Does not address emotional aspects or explicitly acknowledge the patient’s concerns.
Conversational Education: Actively acknowledges and validates the patient’s worries, offering reassurance and emphasizing understanding and support.

Engagement and Interaction
Traditional Visit Summary: One-way communication, where information is presented without inviting interaction.
Conversational Education: Simulates a dialogue by posing rhetorical questions and encouraging listener reflection, creating a sense of participation.

Practical Advice and Suggestions
Traditional Visit Summary: Provides medical instructions without additional suggestions.
Conversational Education: Offers practical tips for managing the condition (e.g., activity ideas, home adjustments) and encourages creativity to keep the patient engaged during recovery.

Patient Empowerment and Education
Traditional Visit Summary: Focuses on medical directives without emphasizing patient education.
Conversational Education: Aims to educate patients about their treatment, the reasoning behind it, and the importance of follow-through.

Inclusion of Additional Resources
Traditional Visit Summary: Does not mention any resources beyond the follow-up appointment.
Conversational Education: Introduces an educational app as a tool for ongoing support and information, enhancing access to resources.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Mashatian, S.; Wung, S.-F.; Ritter, A.; Fishman, J.; Robbins, J.; Aziz, S.; Huo, M.; Armstrong, D.G. Responsible AI for Personalized Patient Education and Engagement Across Medical Conditions: Leveraging Multi-Agent LLMs, Ambient Technology, and NotebookLM—A Case Study in Diabetes Education and Limb Preservation. J. Am. Podiatr. Med. Assoc. 2026, 116, 30. https://doi.org/10.3390/japma116030030
