Psychiatry International
  • Brief Report
  • Open Access

3 November 2025

Can AI Models like ChatGPT and Gemini Dispel Myths About Children’s and Adolescents’ Mental Health? A Comparative Brief Report

1 Faculty of Health Sciences, University of Beira Interior, 6200-506 Covilhã, Portugal
2 Family Health Unit Beira Ria, 3830-596 Gafanha da Nazaré, Portugal
3 RISE-Health, Department of Community Medicine, Information and Health Decision Sciences, Faculty of Medicine, University of Porto, 4200-450 Porto, Portugal

Abstract

Background: Dispelling myths is crucial for policy and health communication because misinformation can directly influence public behavior, undermine trust in institutions, and lead to harmful outcomes. This study aims to assess the effectiveness of, and differences between, OpenAI’s ChatGPT and Google Gemini in dispelling myths about children’s and adolescents’ mental health. Methods: Using seven myths about mental health from the UNICEF & WHO Teacher’s Guide, ChatGPT-4o and Gemini were asked to “classify each sentence as a myth or a fact”. Each LLM’s responses were analyzed for word count, understandability, readability, and accuracy. Results: Both ChatGPT and Gemini correctly identified all seven statements as myths. ChatGPT’s responses averaged 60 ± 11 words and Gemini’s 60 ± 29 words, a statistically non-significant difference between the LLMs. The Flesch–Kincaid Grade Level averaged 11.7 ± 2.2 for ChatGPT and 10.2 ± 1.3 for Gemini, also a statistically non-significant difference. In terms of readability, both ChatGPT’s and Gemini’s answers were considered difficult to read, with all scores exceeding the 7th-grade level. The findings should nonetheless be interpreted with caution due to the limited dataset. Conclusions: The study adds valuable insights into the strengths of ChatGPT and Gemini as helpful resources for people seeking medical information about children’s and adolescents’ mental health, although the content may not be easily accessible to those below a college reading level.

1. Introduction

According to the United Nations Children’s Fund (UNICEF) report The State of Children in the European Union 2024 and its accompanying Policy Brief on child and adolescent mental health, more than 11 million children and adolescents under the age of 19, or roughly 13 percent, are estimated to experience mental health issues [1]. The prevalence of these conditions increases with age, starting at around 2 percent in children younger than 5 and reaching around 19 percent in adolescents aged 15 to 19 [1]. As summarized by Shim et al. [2], in the USA, 20 percent of children are diagnosed with a mental health condition each year, and by the age of 18, 40 percent will have met the criteria for such conditions. Despite this high prevalence, treatment rates remain low [2].
Over the years, the number of people seeking health information online has increased significantly, with many parents turning to the internet to research their child’s symptoms and guide healthcare decisions [3]. This trend is also evident in the search for mental health information, which has become progressively more common among both adolescents and parents [4].
The World Health Organization (WHO) and UNICEF [5] state that reducing myths about mental health helps clear up confusion and creates a safe space where people of all ages feel comfortable seeking help when needed. To support this, they have developed a Teacher’s Guide for social and emotional learning, freely available online, designed for professionals working with adolescents aged 10–14 in educational settings, including teachers, school counselors, and mental health professionals [5].
Recently, prominent artificial intelligence (AI) large language models (LLMs), two well-known examples being OpenAI’s ChatGPT and Google Gemini, have gained significant attention for their ability to generate human-like responses in conversations [6]. ChatGPT is recognized for its creativity and depth, while Gemini focuses on accuracy and brevity [7]. A recently published study demonstrated ChatGPT’s ability to generate educational materials for patients on common public health issues [8]. Another study highlighted its usefulness in supporting parents of children with pediatric oncological diseases [9], and a third demonstrated its accuracy in delivering information about autism to parents [10]. Nonetheless, notable challenges persist. LLMs can sometimes produce misleading or factually incorrect information, particularly when interpreting complex medical concepts [11,12]. Moreover, AI systems often replicate existing social biases because they are trained on data that underrepresents marginalized groups such as women, racialized communities, and Indigenous peoples, leading to unequal care and outcomes [13]. Many AI models also overlook social determinants of health that are crucial to equity. Accessibility concerns further arise, as AI technologies can be expensive and difficult to implement in rural or low-resource settings, while cultural and linguistic barriers may also limit equitable access [13]. As LLMs gain popularity among patients seeking health information online, understanding their strengths and constraints is crucial. This study is a preliminary analysis aimed at assessing the effectiveness of, and differences between, OpenAI’s ChatGPT and Google Gemini in dispelling myths about children’s and adolescents’ mental health.

2. Methods

The seven myths about mental health were drawn from the Teacher’s Guide, developed as part of the Helping Adolescents Thrive package by the WHO Departments of Mental Health and Substance Use and of Maternal, Newborn, Child and Adolescent Health and Aging, together with the UNICEF Maternal Newborn Adolescent Health Unit [5]:
  • If a person has a mental health condition, it means the person has low intelligence.
  • You only need to take care of your mental health if you have a mental health condition.
  • Poor mental health is not a big issue for teenagers. They just have mood swings caused by hormonal fluctuations and act out due to a desire for attention.
  • Nothing can be done to protect people from developing mental health conditions.
  • A mental health condition is a sign of weakness; if the person were stronger, they would not have this condition.
  • Adolescents who get good grades and have a lot of friends will not have mental health conditions because they have nothing to be depressed about.
  • Bad parenting causes mental conditions in adolescents.
ChatGPT (version GPT-4o, accessed 15 September 2024) and Gemini (app version, accessed 16 September 2024) were asked to “classify each sentence as a myth or a fact”. No adjustment to a predetermined reading level was prompted.
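The models were queried through their public chat interfaces. Purely for illustration, a comparable request could be issued programmatically; the sketch below uses the OpenAI Python client with the study’s prompt, where the API model name “gpt-4o” and the client setup are assumptions rather than part of the original protocol:

```python
# Illustrative sketch only: the study used the ChatGPT and Gemini chat
# interfaces directly. This shows an equivalent request via the OpenAI
# Python client (pip install openai); the model name "gpt-4o" and the
# client setup are assumptions, not part of the original protocol.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# The seven statements from the WHO/UNICEF Teacher's Guide [5].
myths = [
    "If a person has a mental health condition, it means the person has low intelligence.",
    "You only need to take care of your mental health if you have a mental health condition.",
    "Poor mental health is not a big issue for teenagers. They just have mood swings caused by hormonal fluctuations and act out due to a desire for attention.",
    "Nothing can be done to protect people from developing mental health conditions.",
    "A mental health condition is a sign of weakness; if the person were stronger, they would not have this condition.",
    "Adolescents who get good grades and have a lot of friends will not have mental health conditions because they have nothing to be depressed about.",
    "Bad parenting causes mental conditions in adolescents.",
]

prompt = "Classify each sentence as a myth or a fact:\n" + "\n".join(
    f"{i + 1}. {s}" for i, s in enumerate(myths)
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```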
A descriptive analysis evaluated each LLM’s responses for word count, understandability using the Flesch–Kincaid Grade Level, and readability using the Flesch Reading Ease Score (translated into an educational level). Although some readability formulas are more specialized, researchers and healthcare professionals commonly use the Flesch tests to evaluate whether health information is written at a reading level appropriate for patients [14]. For this analysis, the 7th grade was set as the threshold at which the average adult should find LLM responses simple to read.
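For reference, both Flesch measures depend only on average sentence length and average syllables per word: Grade Level = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59, and Reading Ease = 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words). A minimal sketch of these calculations follows; the naive vowel-group syllable counter is an approximation and will not exactly reproduce scores from dedicated readability software:

```python
# Minimal sketch of the two Flesch measures used in this study.
import re

def count_syllables(word: str) -> int:
    # Approximate syllables as runs of consecutive vowels (minimum 1).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_metrics(text: str) -> tuple[float, float]:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences  # average words per sentence
    spw = syllables / len(words)  # average syllables per word
    grade = 0.39 * wps + 11.8 * spw - 15.59    # Flesch-Kincaid Grade Level
    ease = 206.835 - 1.015 * wps - 84.6 * spw  # Flesch Reading Ease (0-100)
    return grade, ease

grade, ease = flesch_metrics(
    "Mental illness can affect anyone. It is not a sign of weakness."
)
print(f"Grade level: {grade:.1f}, Reading ease: {ease:.1f}")
```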
The accuracy of each LLM (measured dichotomously as Yes/No) in dispelling the seven myths was assessed based on whether their responses included the following arguments (or similar in content), adapted from the WHO and UNICEF [5] Teacher’s Guide:
  • Mental illness can affect anyone.
  • Everyone can benefit from taking proactive steps to improve their mental health.
  • Mental health issues in teenagers are real.
  • Various factors can protect individuals from developing mental health conditions.
  • Mental health conditions are not a sign of weakness or lack of willpower.
  • Depression can affect anyone, regardless of socioeconomic status or how good their life may seem externally.
  • A range of factors can influence the mental health of adolescents, their caregivers, and the relationship between them.
These arguments correspond to the seven myths previously stated and were evaluated by a single reviewer—the author of the present manuscript.
To assess differences in word count and understandability between OpenAI’s ChatGPT and Google Gemini responses, the Wilcoxon signed-rank test was used. Data analyses were performed using SPSS Statistics (Version 21). The significance level was set at p < 0.05.
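The analysis itself was run in SPSS; purely as an illustrative sketch, an equivalent paired comparison in Python could look like the following (the word counts shown are hypothetical placeholders, not the study’s raw data):

```python
# Illustrative equivalent of the SPSS analysis using SciPy's paired
# Wilcoxon signed-rank test. The values below are hypothetical
# placeholders, NOT the study's raw data.
from scipy.stats import wilcoxon

chatgpt_word_counts = [52, 71, 58, 49, 66, 63, 61]  # hypothetical
gemini_word_counts = [38, 95, 44, 30, 82, 77, 54]   # hypothetical

stat, p_value = wilcoxon(chatgpt_word_counts, gemini_word_counts)
print(f"W = {stat:.1f}, p = {p_value:.3f}")  # significant if p < 0.05
```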
This study did not involve the analysis of human research participants or sensitive data. Therefore, clearance from the Institutional Ethics Committee was not required, in accordance with the prevailing guidelines. LLMs were accessed under free-use conditions.

3. Results

Both ChatGPT and Gemini correctly identified all seven statements about children’s and adolescents’ mental health as myths (false statements) (Table 1). ChatGPT’s responses averaged 60 ± 11 words, while Gemini’s averaged 60 ± 29 words, a statistically non-significant difference between the LLMs. The Flesch–Kincaid Grade Level averaged 11.7 ± 2.2 for ChatGPT and 10.2 ± 1.3 for Gemini, also a statistically non-significant difference. In terms of readability, both ChatGPT’s and Gemini’s answers were considered difficult to read, with all scores exceeding the 7th-grade level.
Table 1. Mental health knowledge misconceptions and responses from ChatGPT and Gemini.

4. Discussion

Overall, the results suggest that both ChatGPT and Gemini successfully dispelled all seven myths about children’s and adolescents’ mental health, providing accurate and similar information on common mental health misconceptions. The ability of LLMs to correct health-related myths has also been validated in other areas, such as cancer [15], Alzheimer’s disease [16], and diabetes [17]. The results of the present study align with those of Mondal, Panigrahi, Mishra, Behera and Mondal [8], who also found that the LLM’s answers were at an ideal difficulty level for college students.
The finding that all readability scores exceeded the 7th-grade level has critical health literacy implications, as both parents and adolescents may struggle to comprehend materials written at a high school or college reading level. Individuals with inadequate health literacy face significant challenges in accessing and effectively using healthcare services [18]. They may struggle to register for insurance or understand medical forms and instructions, often leaving appointments without fully comprehending their care. Misunderstanding prescription directions can lead to improper medication use, and limited comprehension contributes to poorer management of chronic conditions, resulting in higher hospitalization rates [18]. In the present study, no adjustment to a predetermined reading level was prompted, as most individuals would lack the insight to request such changes, making the present results more representative. Nevertheless, another study by Spallek et al. [19], which focused on answering users’ direct queries about mental health and substance use education, noted that without engineered prompting the outputs’ reading level was poor, highlighting that LLMs’ responses still need human oversight.
Several limitations should be noted. The present study is limited by the lack of information on how target populations, such as parents, adolescents, and educators, would evaluate the responses of ChatGPT and Gemini, as well as by the absence of citations in these LLMs’ responses to support evidence-based information. These limitations have been noted in other studies as well (e.g., Huang, Chen, Huang, Cai, Lin, Wu, Zhuang and Jiang [17]). The sample size was also limited (reflecting the small number of myths tested), which reduced statistical power; consequently, the lack of statistically significant differences between models may reflect insufficient data rather than a true absence of effect. The study was conducted exclusively in English, leaving unclear how the findings translate to multilingual or low-resource settings, where misinformation and health literacy challenges may differ substantially. Nevertheless, the strength of this work lies in the relevance of the issue addressed and in the use of a WHO- and UNICEF-developed tool as a reference point. Future research should address these gaps by incorporating larger and more diverse sets of myths and by conducting qualitative studies to assess perceived comprehension and user trust. Cross-linguistic research would also provide valuable insight into the performance and accessibility of LLMs across diverse linguistic and cultural contexts.
Despite its limitations, this study adds valuable insights into the strengths of ChatGPT and Gemini as helpful resources for people seeking medical information about children’s and adolescents’ mental health, although the content may be less accessible to those below a college reading level. From a practical perspective, educators and clinicians should treat LLM-generated content as a complementary resource rather than an authoritative source. These models can assist in countering misinformation, but their responses should be verified against trusted, evidence-based references. Furthermore, it should not be overlooked that excessive workloads and an overload of responsibilities are themselves significant risk factors for adolescents’ mental well-being, highlighting the need for a balanced approach that combines accurate information, supportive environments, and responsible use of AI tools.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

All data relevant to the study are reported in the article; further information or clarification is available from the author upon request.

Use of GenAI in Writing

ChatGPT-4o and Gemini were used as described in the methods section.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. UNICEF. Policy Brief 2: Child and Adolescent Mental Health. Available online: https://www.unicef.org/eu/media/2576/file/Child%20and%20adolescent%20mental%20health%20policy%20brief.pdf (accessed on 9 August 2025).
  2. Shim, R.; Szilagyi, M.; Perrin, J.M. Epidemic Rates of Child and Adolescent Mental Health Disorders Require an Urgent Response. Pediatrics 2022, 149, e2022056611.
  3. Kubb, C.; Foran, H.M. Online Health Information Seeking by Parents for Their Children: Systematic Review and Agenda for Further Research. J. Med. Internet Res. 2020, 22, e19985.
  4. Zhao, X.; Coxe, S.J.; Timmons, A.C.; Frazier, S.L. Mental Health Information Seeking Online: A Google Trends Analysis of ADHD. Adm. Policy Ment. Health 2022, 49, 357–373.
  5. World Health Organization & United Nations Children’s Fund (UNICEF). Teacher’s Guide to the Magnificent Mei and Friends Comic Series; World Health Organization: Geneva, Switzerland, 2021.
  6. Orru, G.; Piarulli, A.; Conversano, C.; Gemignani, A. Human-like problem-solving abilities in large language models using ChatGPT. Front. Artif. Intell. 2023, 6, 1199350.
  7. Rane, N.; Choudhary, S.; Rane, J. Gemini versus ChatGPT: Applications, performance, architecture, capabilities, and implementation. J. Appl. Artif. Intell. 2024, 5, 69–93.
  8. Mondal, H.; Panigrahi, M.; Mishra, B.; Behera, J.K.; Mondal, S. A pilot study on the capability of artificial intelligence in preparation of patients’ educational materials for Indian public health issues. J. Fam. Med. Prim. Care 2023, 12, 1659–1662.
  9. Prazeres, F. ChatGPT as a Way to Enhance Parents’ Communication in Cases of Oncological Pediatric Diseases. Turk. J. Haematol. 2023, 40, 275–277.
  10. McFayden, T.C.; Bristol, S.; Putnam, O.; Harrop, C. ChatGPT: Artificial Intelligence as a Potential Tool for Parents Seeking Information About Autism. Cyberpsychol. Behav. Soc. Netw. 2024, 27, 135–148.
  11. Clay, T.J.; Da Custodia Steel, Z.J.; Jacobs, C. Human-Computer Interaction: A Literature Review of Artificial Intelligence and Communication in Healthcare. Cureus 2024, 16, e73763.
  12. Nasra, M.; Jaffri, R.; Pavlin-Premrl, D.; Kok, H.K.; Khabaza, A.; Barras, C.; Slater, L.A.; Yazdabadi, A.; Moore, J.; Russell, J.; et al. Can artificial intelligence improve patient educational material readability? A systematic review and narrative synthesis. Intern. Med. J. 2025, 55, 20–34.
  13. Gurevich, E.; El Hassan, B.; El Morr, C. Equity within AI systems: What can health leaders expect? Healthc. Manag. Forum 2023, 36, 119–124.
  14. Jindal, P.; MacDermid, J.C. Assessing reading levels of health information: Uses and limitations of flesch formula. Educ. Health 2017, 30, 84–88.
  15. Johnson, S.B.; King, A.J.; Warner, E.L.; Aneja, S.; Kann, B.H.; Bylund, C.L. Using ChatGPT to evaluate cancer myths and misconceptions: Artificial intelligence and cancer information. JNCI Cancer Spectr. 2023, 7, pkad015.
  16. Huang, S.S.; Song, Q.; Beiting, K.J.; Duggan, M.C.; Hines, K.; Murff, H.; Leung, V.; Powers, J.; Harvey, T.S.; Malin, B.; et al. Fact Check: Assessing the Response of ChatGPT to Alzheimer’s Disease Myths. J. Am. Med. Dir. Assoc. 2024, 25, 105178.
  17. Huang, C.; Chen, L.; Huang, H.; Cai, Q.; Lin, R.; Wu, X.; Zhuang, Y.; Jiang, Z. Evaluate the accuracy of ChatGPT’s responses to diabetes questions and misconceptions. J. Transl. Med. 2023, 21, 502.
  18. Safeer, R.S.; Keenan, J. Health literacy: The gap between physicians and patients. Am. Fam. Physician 2005, 72, 463–468.
  19. Spallek, S.; Birrell, L.; Kershaw, S.; Devine, E.K.; Thornton, L. Can we use ChatGPT for Mental Health and Substance Use Education? Examining Its Quality and Potential Harms. JMIR Med. Educ. 2023, 9, e51243.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
