Search Results (25)

Search Parameters:
Keywords = PEMAT

16 pages, 787 KB  
Article
Informed and Empowered: A Pre–Post Evaluation of a Whiteboard Video for Sexual Health Education in Female Adolescents and Young Adults with Cancer
by Natalie Pitch, Anjali Sachdeva, Jennifer Catsburg, Mackenzie Noyes, Sheila Gandhi, Rebecca Côté, Chana Korenblum, Jonathan Avery and Abha A. Gupta
Curr. Oncol. 2025, 32(12), 681; https://doi.org/10.3390/curroncol32120681 - 1 Dec 2025
Viewed by 862
Abstract
Adolescents and young adults (AYA) assigned female at birth with cancer face significant sexual health challenges, yet accessible, age-appropriate educational tools remain limited. This study evaluated a 13 min whiteboard video designed to improve sexual health knowledge. Female AYA patients aged 15–39 years across Canada completed pre- and post-video surveys assessing knowledge, attitudes, and satisfaction. The video’s understandability and actionability were measured using the Patient Education Materials Assessment Tool for Audiovisual Materials (PEMAT-A/V), and readability was assessed using six standard metrics. Quantitative analyses included paired t-tests and regression modeling; qualitative responses were thematically coded. Ninety participants completed the study. Knowledge scores increased by 19.5% (95% CI, 14–24%; p < 0.001, Cohen’s d = 0.89) following the video. Greater gains were observed among participants with a high school education or less (p = 0.040), while younger participants tended to show larger improvements. The video received average PEMAT-A/V scores of 96% for understandability and 94% for actionability. Most participants (89%) found it helpful for learning about sexual health and would recommend the video to peers, though suggested improvements included shorter length, enhanced visuals, and more age-specific content. Nearly half reported never discussing sexual health with providers. These findings support the feasibility of whiteboard video as an effective, scalable tool to address sexual health in oncology care. Full article
(This article belongs to the Section Psychosocial Oncology)
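
The analysis above hinges on a paired pre/post comparison with an effect size. Below is a minimal sketch of that kind of calculation using SciPy; the `pre` and `post` arrays are invented placeholder scores, not the study's data, and Cohen's d is computed on the difference scores (the paired-samples d_z variant).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(60, 12, size=90)            # hypothetical pre-video knowledge scores (%)
post = pre + rng.normal(19.5, 10, size=90)   # hypothetical post-video scores

t_stat, p_value = stats.ttest_rel(post, pre)  # paired t-test
diff = post - pre
cohens_dz = diff.mean() / diff.std(ddof=1)    # effect size on difference scores (d_z)

print(f"t = {t_stat:.2f}, p = {p_value:.3g}, d_z = {cohens_dz:.2f}")
```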

17 pages, 1910 KB  
Article
Large Language Models vs. Professional Resources for Post-Treatment Quality-of-Life Questions in Head and Neck Cancer: A Cross-Sectional Comparison
by Ali Alabdalhussein, Mohammed Hasan Al-Khafaji, Shazaan Nadeem, Maham Basharat, Hasan Aldallal, Mohammed Elkrim S. Mohammed, Sahar Alghnaimawi, Ali Al Yousif, Juman Baban, Soroor Hamad, Ibrahim Saleem, Sarah Mozan and Manish Mair
Curr. Oncol. 2025, 32(12), 668; https://doi.org/10.3390/curroncol32120668 - 28 Nov 2025
Viewed by 523
Abstract
Background: Recently, patients have been using large language models (LLMs) such as ChatGPT, Gemini, and Claude to address their concerns. However, it remains unclear whether the readability, understandability, actionability, and empathy of their responses meet standard guidelines. In this study, we aim to address these concerns and compare the outcomes of the LLMs to those of professional resources. Methods: We conducted a comparative cross-sectional study by following the relevant items of the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist for cross-sectional studies and using 14 patient-style questions. These questions were collected from the professional platforms to represent each domain. We derived the 14 domains from validated quality-of-life instruments (EORTC QLQ-H&N35, UW-QOL, and FACT-H&N). Fourteen responses were obtained from three LLMs (ChatGPT-4o, Gemini 2.5 Pro, and Claude Sonnet 4) and two professional sources (Macmillan Cancer Support and CURE Today). All responses were evaluated using the Patient Education Materials Assessment Tool (PEMAT), the DISCERN instrument, and the Empathic Communication Coding System (ECCS). Readability was assessed using the Flesch Reading Ease and Flesch-Kincaid Grade Level metrics. Statistical analysis included one-way ANOVA and Tukey’s HSD test for group comparisons. Results: No differences were found in quality (DISCERN), understandability and actionability (PEMAT), or empathy (ECCS) between LLMs and professional resources. However, professional resources outperformed the LLMs in readability. Conclusions: In our study, we found that LLMs (ChatGPT, Gemini, Claude) can produce patient information that is comparable to professional resources in terms of quality, understandability, actionability, and empathy. However, readability remains a key limitation, as LLM-generated responses often require simplification to align with recommended health-literacy standards. Full article
(This article belongs to the Section Head and Neck Oncology)
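
The group comparison described above (one-way ANOVA followed by Tukey's HSD across three LLMs and two professional sources) can be sketched as follows; the scores are hypothetical DISCERN-style ratings rather than the study's data, and `scipy.stats.tukey_hsd` requires a reasonably recent SciPy.

```python
from scipy import stats

# Hypothetical DISCERN-style ratings per source (not the study's data)
chatgpt    = [55, 60, 58, 62, 57]
gemini     = [54, 59, 61, 56, 58]
claude     = [53, 57, 60, 55, 59]
macmillan  = [56, 61, 58, 60, 57]
cure_today = [52, 58, 55, 59, 56]

groups = [chatgpt, gemini, claude, macmillan, cure_today]
f_stat, p_value = stats.f_oneway(*groups)   # omnibus one-way ANOVA
tukey = stats.tukey_hsd(*groups)            # pairwise comparisons (recent SciPy)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")
print(tukey)
```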

14 pages, 584 KB  
Article
Artificial Intelligence Chatbots and Temporomandibular Disorders: A Comparative Content Analysis over One Year
by Serena Incerti Parenti, Alessandro Maglioni, Elia Evangelisti, Antonio Luigi Tiberio Gracco, Giovanni Badiali, Giulio Alessandri-Bonetti and Maria Lavinia Bartolucci
Appl. Sci. 2025, 15(23), 12441; https://doi.org/10.3390/app152312441 - 24 Nov 2025
Viewed by 468
Abstract
As the use of artificial intelligence (AI) chatbots for medical queries expands, their reliability may vary as models evolve. We longitudinally assessed the quality, reliability, and readability of information on temporomandibular disorders (TMD) generated by three widely used chatbots (ChatGPT, Gemini, and Microsoft Copilot). Ten TMD questions were submitted to each chatbot at two timepoints (T1: February 2024; T2: February 2025). Two blinded evaluators independently assessed all answers using validated tools: the Global Quality Score (GQS), PEMAT, DISCERN, CLEAR, Flesch Reading Ease (FRE), and Flesch–Kincaid Grade Level (FKGL). Analyses followed METRICS guidance. Comparisons between models and across timepoints were conducted using non-parametric tests. At T1, Copilot scored significantly lower in GQS, CLEAR appropriateness, and relevance (p < 0.01), while ChatGPT provided less evidence-based content than its counterparts (p < 0.001). Reliability was poor across models (mean DISCERN score: 34.73 ± 9.49), and the texts were difficult to read (mean FRE: 34.64; FKGL: 14.13). At T2, performance improved across chatbots, particularly for Copilot, yet actionability remained limited and citations were inconsistent. This year-long longitudinal analysis shows an overall improvement in chatbot performance, although concerns regarding information reliability persist. These findings underscore the importance of human oversight of AI-mediated patient information, reaffirming that clinicians should remain the primary source of patient education. Full article
(This article belongs to the Special Issue Artificial Intelligence for Dentistry and Oral Sciences)
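
For reference, the two readability metrics reported above (FRE and FKGL) are simple formulas over sentence, word, and syllable counts. The sketch below implements them directly; the syllable counter is a crude vowel-group heuristic, so its output only approximates that of validated tools.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # words per sentence
    spw = syllables / len(words)   # syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw   # Flesch Reading Ease
    fkgl = 0.39 * wps + 11.8 * spw - 15.59     # Flesch-Kincaid Grade Level
    return fre, fkgl

sample = ("Temporomandibular disorders affect the jaw joint and chewing muscles. "
          "Treatment usually starts with conservative care.")
fre, fkgl = readability(sample)
print(f"FRE = {fre:.1f}, FKGL = {fkgl:.1f}")
```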

9 pages, 1094 KB  
Article
The Clinical Integration of ChatGPT Through an Augmented Patient Encounter in a Real-World Urological Cohort: A Feasibility Study
by Shane Qin, Emre Alpay, Bodie Chislett, Joseph Ischia, Luke Gibson, Damien Bolton and Dixon T. S. Woon
Soc. Int. Urol. J. 2025, 6(5), 59; https://doi.org/10.3390/siuj6050059 - 20 Oct 2025
Viewed by 489
Abstract
Background/Objectives: To evaluate the viability of using ChatGPT in a real clinical environment for patient education during informed consent for flexible cystoscopy, assessing its practicality, patient perceptions, and clinician evaluations within a urological cohort. Methods: A prospective feasibility study was conducted at a single institution involving patients with haematuria who attended an in-person clinic review with access to ChatGPT-4o mini. Using predetermined prompts regarding haematuria, we evaluated the accuracy, consistency, and suitability of the ChatGPT information. Responses were appraised for errors, omission of key information, and suitability for patient education. The functionality, usability, and quality of ChatGPT for patient education were assessed by three urologists using the Patient Education Materials Assessment Tool (PEMAT) and DISCERN tools. Readability was assessed using the Flesch–Kincaid tests. Further clinician questionnaires evaluated ChatGPT’s accuracy, reproducibility, and integration potential. Results: Ten patients were recruited, but one patient was excluded because he refused to use ChatGPT due to language barriers. All patients found ChatGPT to be useful, but most believed it could not entirely replace the doctor, especially for obtaining informed consent. There were no significant errors. The mean PEMAT score for understandability was 77.8%, and actionability was 63.8%. The mean DISCERN score was 57.7, corresponding to a ‘good’ quality score. The Flesch Reading Ease score was 30.2, with the writing level comparable to US grade level 13. Conclusions: ChatGPT offers valuable support for patient education, delivering accurate and comprehensive information. However, challenges with readability, contextual understanding, and actionability highlight the need for development and careful integration. Generative artificial intelligence (AI) should augment, not replace, clinician–patient interactions, emphasising ethical considerations and patient trust. This study provides a basis for further exploration of AI’s role in healthcare. Full article
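
A PEMAT percentage score, as reported above, is the share of applicable items rated "Agree", computed separately for understandability and actionability. A minimal sketch with hypothetical item ratings:

```python
from typing import Optional

def pemat_score(item_ratings: list[Optional[int]]) -> float:
    """item_ratings: 1 = Agree, 0 = Disagree, None = not applicable."""
    applicable = [r for r in item_ratings if r is not None]
    if not applicable:
        raise ValueError("No applicable items")
    return 100 * sum(applicable) / len(applicable)

# Hypothetical ratings for one material (not the study's data)
understandability_items = [1, 1, 1, 0, 1, 1, None, 1, 1, 1, 0, 1, 1]
actionability_items = [1, 1, 0, None, 1, 0, 1]

print(f"Understandability: {pemat_score(understandability_items):.1f}%")
print(f"Actionability: {pemat_score(actionability_items):.1f}%")
```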

11 pages, 731 KB  
Systematic Review
Is YouTube™ a Reliable Source of Information for the Current Use of HIPEC in the Treatment of Ovarian Cancer?
by Francesco Mezzapesa, Elisabetta Pia Bilancia, Margarita Afonina, Stella Di Costanzo, Elena Masina, Pierandrea De Iaco and Anna Myriam Perrone
Cancers 2025, 17(19), 3222; https://doi.org/10.3390/cancers17193222 - 2 Oct 2025
Viewed by 779
Abstract
Introduction: YouTube™ is a widely accessible platform with unfiltered medical information. This study aimed to evaluate the educational value and reliability of YouTube™ videos on Hyperthermic Intraperitoneal Chemotherapy (HIPEC) for advanced epithelial ovarian cancer treatment. Methods: YouTube™ videos were searched using the keywords “ovarian cancer”, “debulking surgery”, “hyperthermic”, and “HIPEC”. Patient Education Materials Assessment Tool for Audiovisual Content (PEMAT A/V) score, DISCERN, Misinformation Scale, and the Global Quality Scale (GQS) were employed to assess the clarity, quality, and reliability of the information presented. Results: Of the 150 YouTube™ videos screened, 71 were suitable for analysis and categorized by target audience (general public vs. healthcare workers). Most (57, 80.2%) were uploaded after the “Ov-HIPEC” trial (18 January 2018), with a trend toward more videos for healthcare workers (p = 0.07). Videos for the general public were shorter (p < 0.001) but received more views (p = 0.06) and likes (p = 0.09), though they were of lower quality. The DISCERN score averaged 50 (IQR: 35–60), with public-targeted videos being less informative (p < 0.001), a trend mirrored by the Misinformation Scale (p < 0.001) and GQS (p < 0.001). The PEMAT A/V scores showed 80% Understandability (IQR: 62–90) and 33% Actionability (IQR: 25–100), with no significant difference between groups (p = 0.15, p = 0.4). Conclusions: While YouTube™ provides useful information for healthcare professionals, it cannot be considered a reliable source for patients seeking information on HIPEC for ovarian cancer. Many videos contribute to misinformation by not properly explaining treatment indications, timing, adverse effects, multimodal approaches, or clinical trial findings. Full article
(This article belongs to the Section Cancer Informatics and Big Data)
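
The medians, IQRs, and two-group comparisons reported above can be reproduced in outline with NumPy and SciPy; the sketch below uses invented scores for public-targeted versus professional-targeted videos, not the study's data.

```python
import numpy as np
from scipy import stats

# Invented DISCERN-style scores (not the study's data)
public       = np.array([35, 40, 38, 45, 33, 50, 42, 36])
professional = np.array([55, 60, 52, 58, 63, 57, 61, 54])

def median_iqr(x: np.ndarray) -> str:
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return f"{med:.0f} (IQR {q1:.0f}-{q3:.0f})"

u_stat, p_value = stats.mannwhitneyu(public, professional, alternative="two-sided")
print("Public-targeted:      ", median_iqr(public))
print("Professional-targeted:", median_iqr(professional))
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_value:.4f}")
```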

11 pages, 1344 KB  
Article
Enhancing Patient Education with AI: A Readability Analysis of AI-Generated Versus American Academy of Ophthalmology Online Patient Education Materials
by Allison Y. Kufta and Ali R. Djalilian
J. Clin. Med. 2025, 14(19), 6968; https://doi.org/10.3390/jcm14196968 - 1 Oct 2025
Cited by 1 | Viewed by 980
Abstract
Background/Objectives: Patient education materials (PEMs) in ophthalmology often exceed recommended readability levels, limiting accessibility for many patients. While organizations like the American Academy of Ophthalmology (AAO) provide relatively easy-to-read resources, topics remain limited, and other associations’ PEMs are too complex. AI chatbots could help clinicians create more comprehensive, accessible PEMs to improve patient understanding. This study aims to compare the readability of PEMs written by the AAO with those generated by large language models (LLMs), including ChatGPT-4o, Microsoft Copilot, and Meta-Llama-3.1-70B-Instruct. Methods: LLMs were prompted to generate PEMs for 15 common diagnoses relating to the cornea and anterior chamber; a follow-up readability-optimized (FRO) prompt was then used to reword the content at a 6th-grade reading level. The readability of these materials was evaluated using nine different readability analysis Python libraries and compared to existing PEMs found on the AAO website. Results: For all 15 topics, ChatGPT, Copilot, and Llama successfully generated PEMs, though all exceeded the recommended 6th-grade reading level. While the initially prompted ChatGPT, Copilot, and Llama outputs had grade levels of 10.8, 12.2, and 13.2, respectively, FRO prompting significantly improved readability to 8.3 for ChatGPT, 11.2 for Copilot, and 9.3 for Llama (p < 0.001). While readability improved, AI-generated PEMs were, on average, not statistically easier to read than AAO PEMs, which averaged an 8.0 Flesch–Kincaid Grade Level. Conclusions: Properly prompted AI chatbots can generate PEMs with improved readability, nearing the level of AAO materials. However, most outputs remain above the recommended 6th-grade reading level. A subjective analysis of a representative subtopic showed that, compared to AAO materials, there was less nuance, especially in areas of clinical uncertainty. By creating a blueprint that can be utilized in human–AI hybrid workflows, AI chatbots show promise as tools for ophthalmologists to increase the availability of accessible PEMs in ophthalmology. Future work should include a detailed qualitative review by ophthalmologists using a validated tool (like DISCERN or PEMAT) to score accuracy, bias, and completeness alongside readability. Full article
(This article belongs to the Section Ophthalmology)
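
The abstract above mentions scoring readability with Python libraries but does not name them; one commonly used option is the open-source textstat package, shown here purely as an illustration on a made-up snippet of patient text.

```python
import textstat

# Made-up snippet of patient education text (illustrative only)
pem = ("Dry eye happens when your eyes do not make enough tears. "
       "Your doctor may suggest eye drops and warm compresses.")

metrics = {
    "Flesch Reading Ease": textstat.flesch_reading_ease(pem),
    "Flesch-Kincaid Grade": textstat.flesch_kincaid_grade(pem),
    "Gunning Fog": textstat.gunning_fog(pem),
    "SMOG": textstat.smog_index(pem),
    "Coleman-Liau": textstat.coleman_liau_index(pem),
    "Automated Readability Index": textstat.automated_readability_index(pem),
    "Dale-Chall": textstat.dale_chall_readability_score(pem),
}
for name, value in metrics.items():
    print(f"{name}: {value:.1f}")
```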

13 pages, 2559 KB  
Article
Artificial Intelligence Versus Professional Standards: A Cross-Sectional Comparative Study of GPT, Gemini, and ENT UK in Delivering Patient Information on ENT Conditions
by Ali Alabdalhussein, Nehal Singhania, Shazaan Nadeem, Mohammed Talib, Derar Al-Domaidat, Ibrahim Jimoh, Waleed Khan and Manish Mair
Diseases 2025, 13(9), 286; https://doi.org/10.3390/diseases13090286 - 1 Sep 2025
Cited by 1 | Viewed by 925
Abstract
Objective: Patient information materials are sensitive and, if poorly written, can cause misunderstanding. This study evaluated and compared the readability, actionability, and quality of patient education materials on laryngology topics generated by ChatGPT, Google Gemini, and ENT UK. Methods: We obtained patient information from ENT UK and generated equivalent content with ChatGPT-4-turbo and Google Gemini 2.5 Pro for six laryngology conditions. We assessed readability (Flesch–Kincaid Grade Level, FKGL; Flesch Reading Ease, FRE), quality (DISCERN), and patient engagement (PEMAT-P for understandability and actionability). Statistical comparisons involved using ANOVA, Tukey’s HSD, and Kruskal–Wallis tests. Results: ENT UK showed the highest readability (FRE: 64.6 ± 8.4) and lowest grade level (FKGL: 7.4 ± 1.5), significantly better than that of ChatGPT (FRE: 38.8 ± 10.5, FKGL: 11.0 ± 1.5) and Gemini (FRE: 38.3 ± 8.5, FKGL: 11.9 ± 1.2) (all p < 0.001). DISCERN scores did not differ significantly (ENT UK: 21.3 ± 7.5, GPT: 24.7 ± 9.1, Gemini: 29.5 ± 4.6; p > 0.05). PEMAT-P understandability results were similar (ENT UK: 72.7 ± 8.3%, GPT: 79.1 ± 5.8%, Gemini: 78.5 ± 13.1%), except for lower GPT scores on vocal cord paralysis (p < 0.05). Actionability was also comparable (ENT UK: 46.7 ± 16.3%, GPT: 41.1 ± 24.0%, Gemini: 36.7 ± 19.7%). Conclusion: GPT and Gemini produce patient information of comparable quality and engagement to ENT UK but require higher reading levels and fall short of recommended literacy standards. Full article

12 pages, 950 KB  
Article
Evaluating the Reliability and Quality of Sarcoidosis-Related Information Provided by AI Chatbots
by Nur Aleyna Yetkin, Burcu Baran, Bilal Rabahoğlu, Nuri Tutar and İnci Gülmez
Healthcare 2025, 13(11), 1344; https://doi.org/10.3390/healthcare13111344 - 5 Jun 2025
Cited by 1 | Viewed by 1040
Abstract
Background and Objectives: Artificial intelligence (AI) chatbots are increasingly employed for the dissemination of health information; however, apprehensions regarding their accuracy and reliability remain. The intricacy of sarcoidosis may lead to misinformation and omissions that affect patient comprehension. This study assessed the usability of AI-generated information on sarcoidosis by evaluating the quality, reliability, readability, understandability, and actionability of chatbot responses to patient-centered queries. Methods: This cross-sectional evaluation included 11 AI chatbots comprising both general-purpose and retrieval-augmented tools. Four sarcoidosis-related queries derived from Google Trends were submitted to each chatbot under standardized conditions. Responses were independently evaluated by four blinded pulmonology experts using DISCERN, the Patient Education Materials Assessment Tool—Printable (PEMAT-P), and Flesch–Kincaid readability metrics. A Web Resource Rating (WRR) score was also calculated. Inter-rater reliability was assessed using intraclass correlation coefficients (ICCs). Results: Retrieval-augmented models such as ChatGPT-4o Deep Research, Perplexity Research, and Grok3 Deep Search outperformed general-purpose chatbots across the DISCERN, PEMAT-P, and WRR metrics. However, these high-performing models also produced text at significantly higher reading levels (Flesch–Kincaid Grade Level > 16), reducing accessibility. Actionability scores were consistently lower than understandability scores across all models. The ICCs exceeded 0.80 for all evaluation domains, indicating excellent inter-rater reliability. Conclusions: Although some AI chatbots can generate accurate and well-structured responses to sarcoidosis-related questions, their limited readability and low actionability present barriers for effective patient education. Optimization strategies, such as prompt refinement, health literacy adaptation, and domain-specific model development, are required to improve the utility of AI chatbots in complex disease communication. Full article
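
Inter-rater reliability of the kind reported above (ICCs across four blinded raters) can be estimated with the pingouin package, as sketched below on hypothetical long-format ratings rather than the study's data.

```python
import pandas as pd
import pingouin as pg

# Hypothetical DISCERN totals from 4 raters for 5 chatbot responses
scores = {
    "rater1": [60, 52, 45, 70, 38],
    "rater2": [62, 50, 47, 68, 40],
    "rater3": [58, 54, 44, 72, 37],
    "rater4": [61, 51, 46, 69, 39],
}
records = [
    {"response": f"resp{i}", "rater": rater, "score": value}
    for rater, values in scores.items()
    for i, value in enumerate(values)
]
df = pd.DataFrame(records)

icc = pg.intraclass_corr(data=df, targets="response", raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```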

8 pages, 235 KB  
Article
Is YouTube a Reliable Source of Information for Sacral Neuromodulation in Lower Urinary Tract Dysfunction?
by Sarah Lorger, Victor Yu and Sithum Munasinghe
Soc. Int. Urol. J. 2025, 6(2), 27; https://doi.org/10.3390/siuj6020027 - 17 Apr 2025
Viewed by 1045
Abstract
Background/Objectives: YouTube is an open-access video streaming platform with minimal regulation, which has led to a vast library of unregulated medical videos. This study assesses the quality of information, understandability, and actionability of videos on YouTube pertaining to sacral neuromodulation (SNM). Methods: The first 50 videos on YouTube after searching “sacral neuromodulation for bladder dysfunction” were reviewed. Thirty-eight of these videos met the inclusion criteria. These videos were reviewed by two Urology Registrars and scored using two standardised tools: the DISCERN tool assesses quality of information, and the Patient Education Materials Assessment Tool for Audiovisual Material (PEMAT-A/V) assesses understandability and actionability. Results: Forty-two percent of videos were deemed to be poor or very poor, with 58% being fair, good or excellent according to the DISCERN standardised tool. For PEMAT-A/V, the average score for understandability was 74% (43–100%) and actionability was 38% (0–100%). Video duration differed significantly across DISCERN groups (p = 0.02), as did PEMAT-A/V understandability scores (p ≤ 0.05). Conclusions: Forty-two percent of videos on SNM are of poor or very poor quality. The actionability score for consumers to seek out further information is also low at 38%. This raises concerns about the quality of information that is widely available on YouTube and how consumers will use this information when making decisions about their health. Full article
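
The quality bands mentioned above (very poor through excellent) are typically derived from the DISCERN total score (16 items, each scored 1–5). The cut-offs in the sketch below are one convention used in the literature, not thresholds reported by this study.

```python
def discern_band(total: int) -> str:
    # Assumed cut-offs (one convention used in the literature, not from this study)
    if not 16 <= total <= 80:
        raise ValueError("DISCERN total must be between 16 and 80")
    if total >= 63:
        return "excellent"
    if total >= 51:
        return "good"
    if total >= 39:
        return "fair"
    if total >= 27:
        return "poor"
    return "very poor"

for total in (22, 35, 45, 58, 70):
    print(total, discern_band(total))
```
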
8 pages, 1881 KB  
Article
Responses of Artificial Intelligence Chatbots to Testosterone Replacement Therapy: Patients Beware!
by Herleen Pabla, Alyssa Lange, Nagalakshmi Nadiminty and Puneet Sindhwani
Soc. Int. Urol. J. 2025, 6(1), 13; https://doi.org/10.3390/siuj6010013 - 12 Feb 2025
Cited by 1 | Viewed by 1924
Abstract
Background/Objectives: Using chatbots to seek healthcare information is becoming more popular. Misinformation and gaps in knowledge exist regarding the risks and benefits of testosterone replacement therapy (TRT). We aimed to assess and compare the quality and readability of responses generated by four AI chatbots. Methods: ChatGPT, Google Bard, Bing Chat, and Perplexity AI were asked the same eleven questions regarding TRT. The responses were evaluated by four reviewers using the DISCERN and Patient Education Materials Assessment Tool (PEMAT) questionnaires. Readability was assessed using the Readability Scoring System v2.0 to calculate the Flesch Reading Ease Score (FRES) and the Flesch–Kincaid Grade Level (FKGL). Kruskal–Wallis statistics were completed using GraphPad Prism V10.1.0. Results: Google Bard received the highest DISCERN (56.5) and PEMAT scores (96% understandability and 74% actionability), demonstrating the highest quality. The readability scores ranged from eleventh-grade level to college level, with Perplexity outperforming the other chatbots. Significant differences were found in understandability between Bing and Google Bard, DISCERN scores between Bing and Google Bard, FRES between ChatGPT and Perplexity, and FKGL scoring between ChatGPT and Perplexity AI. Conclusions: ChatGPT and Google Bard were the top performers based on their quality, understandability, and actionability. Despite Perplexity scoring higher in readability, the generated text still maintained an eleventh-grade complexity. Perplexity stood out for its extensive use of citations; however, it offered repetitive answers despite the diversity of questions posed to it. Google Bard demonstrated a high level of detail in its answers, offering additional value through visual aids. These AI chatbots may improve as the technology advances. Until then, patients and providers should be aware of the strengths and shortcomings of each. Full article
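
The abstract reports Kruskal–Wallis comparisons run in GraphPad Prism; the same omnibus test can be sketched in Python with SciPy, as below, using invented ratings for the four chatbots.

```python
from scipy import stats

# Invented placeholder ratings per chatbot (not the study's data)
chatgpt     = [54, 56, 52, 58]
google_bard = [57, 59, 55, 56]
bing_chat   = [45, 48, 44, 47]
perplexity  = [50, 52, 49, 51]

h_stat, p_value = stats.kruskal(chatgpt, google_bard, bing_chat, perplexity)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.3f}")
```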

13 pages, 1125 KB  
Article
The Deadly Details: How Clear and Complete Are Publicly Available Sources of Human Rabies Information?
by Natalie Patane, Owen Eades, Jennifer Morris, Olivia Mac, Kirsten McCaffery and Sarah L. McGuinness
Trop. Med. Infect. Dis. 2025, 10(1), 16; https://doi.org/10.3390/tropicalmed10010016 - 7 Jan 2025
Cited by 3 | Viewed by 3250
Abstract
Human rabies is preventable but almost always fatal once symptoms appear, causing 59,000 deaths globally each year. Limited awareness and inconsistent access to post-exposure prophylaxis hinder prevention efforts. To identify gaps and opportunities for improvement in online rabies information, we assessed the readability, understandability, actionability, and completeness of online public rabies resources from government and health agencies in Australia and similar countries. We identified materials via Google and public health agency websites, assessing readability using the Simple Measure of Gobbledygook (SMOG) index and understandability and actionability with the Patient Education Materials Assessment Tool for Print materials (PEMAT-P). Completeness was assessed using a framework focused on general and vaccine-specific rabies information. An analysis of 22 resources found a median readability of grade 13 (range: 10–15), with a mean understandability of 66% and mean actionability of 60%, both below recommended thresholds. Mean completeness was 79% for general rabies information and 36% for vaccine-specific information. Visual aids were under-utilised, and critical vaccine-specific information was often lacking. These findings highlight significant barriers in rabies information for the public, with most resources requiring a high literacy level and lacking adequate understandability and actionability. Improving readability, adding visual aids, and enhancing vaccine-related content could improve accessibility and support wider prevention efforts. Full article
(This article belongs to the Special Issue Rabies Epidemiology, Control and Prevention Studies)
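
The SMOG index used above estimates a reading grade from the density of polysyllabic words. A small sketch of McLaughlin's formula follows; the syllable counter is a rough heuristic, so treat its output as approximate.

```python
import math
import re

def syllables(word: str) -> int:
    # Rough vowel-group heuristic
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_index(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    polysyllables = sum(1 for w in re.findall(r"[A-Za-z']+", text) if syllables(w) >= 3)
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

sample = ("Rabies is almost always fatal once symptoms appear. "
          "Seek post-exposure prophylaxis immediately after any animal bite.")
print(f"SMOG grade: {smog_index(sample):.1f}")
```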

14 pages, 1329 KB  
Article
Enhancing Patient Comprehension of Glomerular Disease Treatments Using ChatGPT
by Yasir H. Abdelgadir, Charat Thongprayoon, Iasmina M. Craici, Wisit Cheungpasitporn and Jing Miao
Healthcare 2025, 13(1), 57; https://doi.org/10.3390/healthcare13010057 - 31 Dec 2024
Cited by 2 | Viewed by 2292
Abstract
Background/Objectives: It is often challenging for patients to understand treatment options, their mechanisms of action, and the potential side effects of each treatment option for glomerular disorders. This study explored the ability of ChatGPT to simplify these treatment options to enhance patient understanding. Methods: GPT-4 was queried on sixty-seven glomerular disorders using two distinct queries for a general explanation and an explanation adjusted for an 8th grade level or lower. Accuracy was rated on a scale of 1 (incorrect) to 5 (correct and comprehensive). Readability was measured using the average of the Flesch–Kincaid Grade (FKG) and SMOG indices, along with the Flesch Reading Ease (FRE) score. The understandability score (%) was determined using the Patient Education Materials Assessment Tool for Printable Materials (PEMAT-P). Results: GPT-4’s general explanations had an average readability level of 12.85 ± 0.93, corresponding to the upper end of high school. When tailored for patients at or below an 8th-grade level, the readability improved to a middle school level of 8.44 ± 0.72. The FRE and PEMAT-P scores also reflected improved readability and understandability, increasing from 25.73 ± 6.98 to 60.75 ± 4.56 and from 60.7% to 76.8% (p < 0.0001 for both), respectively. The accuracy of GPT-4’s tailored explanations was significantly lower compared to the general explanations (3.99 ± 0.39 versus 4.56 ± 0.66, p < 0.0001). Conclusions: ChatGPT shows significant potential for enhancing the readability and understandability of glomerular disorder therapies for patients, but at a cost of reduced comprehensiveness. Further research is needed to refine the performance, evaluate the real-world impact, and ensure the ethical use of ChatGPT in healthcare settings. Full article

11 pages, 1233 KB  
Article
Quality of Dietetic Patient Education Materials for Diabetes and Gastrointestinal Disorders: Where Can We Do Better?
by Kelly Lambert, Olivia Hodgson and Claudia Goodman
Dietetics 2024, 3(3), 346-356; https://doi.org/10.3390/dietetics3030026 - 6 Sep 2024
Cited by 1 | Viewed by 2144
Abstract
(1) Background: Patient education materials are frequently used by dietitians to support counselling and reinforce key concepts. No studies have examined the quality of dietetic patient education materials for diabetes and common gastrointestinal conditions. (2) Methods: Materials relating to the dietary management of diabetes and gastrointestinal conditions (IBD, IBS, lactose intolerance, coeliac disease and low-FODMAP diets) were evaluated by three dietitian raters. Readability was assessed, and materials with a reading grade level ≤ 7 were considered readable. The PEMAT was used to assess understandability and actionability. Clarity was determined using the CDC Clear Communication Index (CDCCCI). (3) Results: Overall readability scores were satisfactory, with a median grade level of 6 (IQR: 5–8). Readability scores did not differ between material types (p = 0.09). The health literacy demand of materials was suboptimal, with a mean understandability score of 65.9 ± 15.1% and an actionability score of 49.6 ± 20.8%. Both scores fell below the benchmark of ≥70%. These did not differ between material types (p = 0.06 and p = 0.15, respectively). Clarity scores were below the benchmark of ≥90% (mean score 64.2 ± 14.8%). Only 6.6% of materials achieved a score of ≥90%. (4) Conclusions: Improvements to the health literacy demand and clarity of dietetic patient education materials are required. Areas for future improvement have been identified. Full article
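
The abstract cites explicit benchmarks (reading grade ≤ 7, understandability and actionability ≥ 70%, clarity ≥ 90%). A trivial sketch of checking a material against those thresholds, using the study's reported means as example inputs:

```python
def meets_benchmarks(grade: float, understandability: float,
                     actionability: float, clarity: float) -> dict[str, bool]:
    return {
        "readability (grade <= 7)": grade <= 7,
        "understandability (>= 70%)": understandability >= 70,
        "actionability (>= 70%)": actionability >= 70,
        "clarity (>= 90%)": clarity >= 90,
    }

# Example inputs taken from the means reported in the abstract above
checks = meets_benchmarks(grade=6, understandability=65.9,
                          actionability=49.6, clarity=64.2)
for criterion, ok in checks.items():
    print(f"{criterion}: {'pass' if ok else 'fail'}")
```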

10 pages, 1600 KB  
Article
Online Patient Education in Obstructive Sleep Apnea: ChatGPT versus Google Search
by Serena Incerti Parenti, Maria Lavinia Bartolucci, Elena Biondi, Alessandro Maglioni, Giulia Corazza, Antonio Gracco and Giulio Alessandri-Bonetti
Healthcare 2024, 12(17), 1781; https://doi.org/10.3390/healthcare12171781 - 5 Sep 2024
Cited by 13 | Viewed by 2876
Abstract
The widespread implementation of artificial intelligence technologies provides an appealing alternative to traditional search engines for online patient healthcare education. This study assessed ChatGPT-3.5’s capabilities as a source of obstructive sleep apnea (OSA) information, using Google Search as a comparison. Ten frequently searched questions related to OSA were entered into Google Search and ChatGPT-3.5. The responses were assessed by two independent researchers using the Global Quality Score (GQS), Patient Education Materials Assessment Tool (PEMAT), DISCERN instrument, CLEAR tool, and readability scores (Flesch Reading Ease and Flesch–Kincaid Grade Level). ChatGPT-3.5 significantly outperformed Google Search in terms of GQS (5.00 vs. 2.50, p < 0.0001), DISCERN reliability (35.00 vs. 29.50, p = 0.001), and quality (11.50 vs. 7.00, p = 0.02). The CLEAR tool scores indicated that ChatGPT-3.5 provided excellent content (25.00 vs. 15.50, p < 0.001). PEMAT scores showed higher understandability (60–91% vs. 44–80%) and actionability for ChatGPT-3.5 (0–40% vs. 0%). Readability analysis revealed that Google Search responses were easier to read (FRE: 56.05 vs. 22.00; FKGL: 9.00 vs. 14.00, p < 0.0001). ChatGPT-3.5 delivers higher quality and more comprehensive OSA information compared to Google Search, although its responses are less readable. This suggests that while ChatGPT-3.5 can be a valuable tool for patient education, efforts to improve readability are necessary to ensure accessibility and utility for all patients. Healthcare providers should be aware of the strengths and weaknesses of various healthcare information resources and emphasize the importance of critically evaluating online health information, advising patients on its reliability and relevance. Full article

17 pages, 5626 KB  
Article
Assessing the Quality of YouTube’s Incontinence Information after Cancer Surgery: An Innovative Graphical Analysis
by Alvaro Manuel Rodriguez-Rodriguez, Marta De la Fuente-Costa, Mario Escalera-de la Riva, Fernando Domínguez-Navarro, Borja Perez-Dominguez, Gustavo Paseiro-Ares, Jose Casaña-Granell and María Blanco-Diaz
Healthcare 2024, 12(2), 243; https://doi.org/10.3390/healthcare12020243 - 18 Jan 2024
Cited by 9 | Viewed by 2414
Abstract
Background: Prostate and colorectal cancers rank among the most common cancers, and incontinence is a significant postsurgical issue affecting the physical and psychological well-being of cancer survivors. Social media, particularly YouTube, has emerged as a vital source of health information. While YouTube offers valuable content, users must exercise caution due to potential misinformation. Objective: This study aims to assess the quality of publicly available YouTube videos related to incontinence after pelvic cancer surgery. Methods: A search on YouTube related to “Incontinence after cancer surgery” was performed, and 108 videos were analyzed. Multiple quality assessment tools (DISCERN, GQS, JAMA, PEMAT, and MQ-VET) and statistical analyses (descriptive statistics and intercorrelation tests) were used to evaluate the characteristics, popularity, educational value, quality, and reliability of these videos, relying on novel graphical representation techniques such as Sankey and Chord diagrams. Results: Strong positive correlations were found among the quality rating scales, indicating agreement between them. The graphical analysis reinforced the reliability and validity of the quality assessments. Conclusions: This study found strong correlations among the five quality scales, suggesting their effectiveness in assessing health information quality. The evaluation of YouTube videos consistently revealed “high” quality content. Considering the source is essential when assessing quality; healthcare and academic institutions are reliable sources. Caution is advised with ad-containing videos. Future research should focus on policy improvements and tools to aid patients in finding high-quality health content. Full article
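
Sankey diagrams of the kind described above can be drawn with Plotly; the sketch below maps hypothetical video sources to quality bands, with invented counts rather than the study's data.

```python
import plotly.graph_objects as go

# Invented counts and categories (illustrative only)
labels = ["Healthcare institution", "Academic", "Other uploader",
          "High quality", "Moderate quality", "Low quality"]
fig = go.Figure(go.Sankey(
    node=dict(label=labels, pad=20, thickness=15),
    link=dict(
        source=[0, 0, 1, 1, 2, 2],   # indices into `labels`
        target=[3, 4, 3, 4, 4, 5],
        value=[25, 10, 15, 8, 20, 30],
    ),
))
fig.update_layout(title_text="Video sources vs. assessed quality (illustrative)")
fig.show()
```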