Digital Mental Health Through an Intersectional Lens: A Narrative Review
Abstract
1. Introduction
2. Methods
3. Narrative Review
3.1. The Role of AI in Mental Health Care
3.2. Bias in Digital Mental Health Diagnosis and Treatment
4. An Intersectional Framework
4.1. Intersectionality
4.2. Racial/Ethnic Disparities
4.3. Lesbian, Gay, Bisexual, Transgender, Queer, and/or Questioning (LGBTQ+)
4.4. Neurodivergence
5. Culturally Responsive and Participatory Design for Inclusive AI
6. Discussion
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Balfour, M.E.; Carson, C.A.; Williamson, R. Alternatives to the Emergency Department. Psychiatr. Serv. 2017, 68, 306. [Google Scholar] [CrossRef]
- Hogan, M.F.; Goldman, M.L. New Opportunities to Improve Mental Health Crisis Systems. Psychiatr. Serv. 2021, 72, 169–173. [Google Scholar] [CrossRef]
- Ma, Z.; Mei, Y.; Long, Y.; Su, Z.; Gajos, K.Z. Evaluating the experience of LGBTQ+ people using large language model based chatbots for mental health support. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024; Association for Computing Machinery: New York, NY, USA, 2024. [Google Scholar] [CrossRef]
- Patel, V.; Saxena, S.; Lund, C.; Thornicroft, G.; Baingana, F.; Bolton, P.; Chisholm, D.; Collins, P.Y.; Cooper, J.L.; Eaton, J.; et al. The Lancet Commission on global mental health and sustainable development. Lancet 2018, 392, 1553–1598. [Google Scholar] [CrossRef] [PubMed]
- Ettman, C.K.; Galea, S. The Potential Influence of AI on Population Mental Health. JMIR Ment. Health 2023, 10, e49936. [Google Scholar] [CrossRef] [PubMed]
- Alhuwaydi, A.M. Exploring the Role of Artificial Intelligence in Mental Healthcare: Current Trends and Future Directions—A Narrative Review for a Comprehensive Insight. Risk Manag. Healthc. Policy 2024, 17, 1339–1348. [Google Scholar] [CrossRef]
- Mentis, A.A.; Lee, D.; Roussos, P. Applications of artificial intelligence-machine learning for detection of stress: A critical overview. Mol. Psychiatry 2024, 29, 1882–1894. [Google Scholar] [CrossRef]
- Graham, S.; Depp, C.; Lee, E.E.; Nebeker, C.; Tu, X.; Kim, H.C.; Jeste, D.V. Artificial Intelligence for Mental Health and Mental Illnesses: An Overview. Curr. Psychiatry Rep. 2019, 21, 116. [Google Scholar] [CrossRef]
- Tornero-Costa, R.; Martinez-Millana, A.; Azzopardi-Muscat, N.; Lazeri, L.; Traver, V.; Novillo-Ortiz, D. Methodological and Quality Flaws in the Use of Artificial Intelligence in Mental Health Research: Systematic Review. JMIR Ment. Health 2023, 10, e42045. [Google Scholar] [CrossRef]
- Jin, K.W.; Li, Q.; Xie, Y.; Xiao, G. Artificial intelligence in mental healthcare: An overview and future perspectives. Br. J. Radiol. 2023, 96, 20230213. [Google Scholar] [CrossRef]
- Minerva, F.; Giubilini, A. Is AI the Future of Mental Healthcare? Topoi 2023, 42, 809–817. [Google Scholar] [CrossRef] [PubMed]
- Carlson, C.G. Virtual and Augmented Simulations in Mental Health. Curr. Psychiatry Rep. 2023, 25, 365–371. [Google Scholar] [CrossRef] [PubMed]
- Singh, O.P. Chatbots in psychiatry: Can treatment gap be lessened for psychiatric disorders in India. Indian J. Psychiatry 2019, 61, 225. [Google Scholar] [CrossRef] [PubMed]
- Stade, E.C.; Stirman, S.W.; Ungar, L.H.; Boland, C.L.; Schwartz, H.A.; Yaden, D.B.; Sedoc, J.; DeRubeis, R.J.; Willer, R.; Eichstaedt, J.C. Large language models could change the future of behavioral healthcare: A proposal for responsible development and evaluation. npj Ment. Health Res. 2024, 3, 12. [Google Scholar] [CrossRef]
- Kingsmith, A.T. How Chatbots Deepen the Mental Health Crisis. Mad in America. Available online: https://www.madinamerica.com/2025/10/how-chatbots-deepen-the-mental-health-crisis/ (accessed on 20 November 2025).
- Rousmaniere, T.; Zhang, Y.; Li, X.; Shah, S. Large language models as mental health resources: Patterns of use in the United States. Pract. Innov. 2025. [Google Scholar] [CrossRef]
- Greene, A.S.; Shen, X.; Noble, S.; Horien, C.; Hahn, C.A.; Arora, J.; Tokoglu, F.; Spann, M.N.; Carrión, C.I.; Barron, D.S.; et al. Brain-phenotype models fail for individuals who defy sample stereotypes. Nature 2022, 609, 109–118. [Google Scholar] [CrossRef]
- Lechner, T.; Ben-David, S.; Agarwal, S.; Ananthakrishnan, N. Impossibility results for fair representations. arXiv 2021, arXiv:2107.03483. [Google Scholar] [CrossRef]
- Fields, C.T.; Black, C.; Thind, J.K.; Jegede, O.; Aksen, D.; Rosenblatt, M.; Assari, S.; Bellamy, C.; Anderson, E.; Holmes, A.; et al. Governance for anti-racist AI in healthcare: Integrating racism-related stress in psychiatric algorithms for Black Americans. Front. Digit. Health 2025, 7, 1492736. [Google Scholar] [CrossRef]
- Williams, D.R.; Rucker, T.D. Understanding and addressing racial disparities in health care. Health Care Financ. Rev. 2000, 21, 75–90. [Google Scholar]
- Raza, M.M.; Venkatesh, K.P.; Kvedar, J.C. Promoting racial equity in digital health: Applying a cross-disciplinary equity framework. npj Digit. Med. 2023, 6, 3. [Google Scholar] [CrossRef] [PubMed]
- Lee, E.E.; Torous, J.; De Choudhury, M.; Depp, C.A.; Graham, S.A.; Kim, H.C.; Paulus, M.P.; Krystal, J.H.; Jeste, D.V. Artificial Intelligence for Mental Health Care: Clinical Applications, Barriers, Facilitators, and Artificial Wisdom. Biol. Psychiatry Cogn. Neurosci. Neuroimaging 2021, 6, 856–864. [Google Scholar] [CrossRef]
- Wang, Y.; Liu, J.; Shen, Y.; Wang, C.; Jin, Q.; Wang, F.; Zhang, Y. Unveiling and mitigating bias in mental health analysis with large language models. arXiv 2024, arXiv:2406.12033. [Google Scholar] [CrossRef]
- Bouguettaya, A.; Stuart, E.M.; Aboujaoude, E. Racial bias in AI-mediated psychiatric diagnosis and treatment: A qualitative comparison of four large language models. npj Digit. Med. 2025, 8, 332. [Google Scholar] [CrossRef]
- De Choudhury, M.; Pendse, S.R.; Kumar, N. Benefits and harms of large language models in digital mental health. arXiv 2023, arXiv:2311.14693. [Google Scholar] [CrossRef]
- Oexle, N.; Corrigan, P.W. Understanding Mental Illness Stigma Toward Persons with Multiple Stigmatized Conditions: Implications of Intersectionality Theory. Psychiatr. Serv. 2018, 69, 587–589. [Google Scholar] [CrossRef] [PubMed]
- Crenshaw, K. Demarginalizing the intersection of race and sex: A Black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. Univ. Chic. Leg. Forum 1989, 1989, 139–167. [Google Scholar]
- APA. Stress in America™ 2023: A Nation Grappling with Psychological Impacts of Collective Trauma. American Psychological Association (APA). 2023. Available online: https://www.apa.org/news/press/releases/2023/11/psychological-impacts-collective-trauma (accessed on 7 October 2025).
- Kessing, L.V.; Ziersen, S.C.; Caspi, A.; Moffitt, T.E.; Andersen, P.K. Lifetime Incidence of Treated Mental Health Disorders and Psychotropic Drug Prescriptions and Associated Socioeconomic Functioning. JAMA Psychiatry 2023, 80, 1000–1008. [Google Scholar] [CrossRef]
- Ormel, J.; Hollon, S.D.; Kessler, R.C.; Cuijpers, P.; Monroe, S.M. More treatment but no less depression: The treatment-prevalence paradox. Clin. Psychol. Rev. 2022, 91, 102111. [Google Scholar] [CrossRef]
- Ophir, Y.; Tikochinski, R.; Elyoseph, Z.; Efrati, Y.; Rosenberg, H. Balancing promise and concern in AI therapy: A critical perspective on early evidence from the MIT-OpenAI RCT. Front. Med. 2025, 12, 1612838. [Google Scholar] [CrossRef]
- Inkster, B.; Sarda, S.; Subramanian, V. An Empathy-Driven, Conversational Artificial Intelligence Agent (Wysa) for Digital Mental Well-Being: Real-World Data Evaluation Mixed-Methods Study. JMIR Mhealth Uhealth 2018, 6, e12106. [Google Scholar] [CrossRef] [PubMed]
- Li, H.; Zhang, R.; Lee, Y.C.; Kraut, R.E.; Mohr, D.C. Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. npj Digit. Med. 2023, 6, 236. [Google Scholar] [CrossRef]
- Snowden, L.R. Bias in mental health assessment and intervention: Theory and evidence. Am. J. Public Health 2003, 93, 239–243. [Google Scholar] [CrossRef]
- Alegría, M.; NeMoyer, A.; Falgàs Bagué, I.; Wang, Y.; Alvarez, K. Social determinants of mental health: Where we are and where we need to go. Curr. Psychiatry Rep. 2018, 20, 95. [Google Scholar] [CrossRef]
- Cary, M.P.; Zink, A., Jr.; Wei, S.; Olson, A.; Yan, M.; Senior, R.; Bessias, S.; Gadhoumi, K.; Jean-Pierre, G.; Wang, D.; et al. Mitigating Racial and Ethnic Bias and Advancing Health Equity in Clinical Algorithms: A Scoping Review. Health Aff. 2023, 42, 1359–1368. [Google Scholar] [CrossRef]
- Chen, F.; Wang, L.; Hong, J.; Jiang, J.; Zhou, L. Unmasking bias in artificial intelligence: A systematic review of bias detection and mitigation strategies in electronic health record–based models. J. Am. Med. Inform. Assoc. 2024, 31, 1172–1183. [Google Scholar] [CrossRef]
- Barnett, P.; Mackay, E.; Matthews, H.; Gate, R.; Greenwood, H.; Ariyo, K.; Bhui, K.; Halvorsrud, K.; Pilling, S.; Smith, S. Ethnic variations in compulsory detention under the Mental Health Act: A systematic review and meta-analysis of international data. Lancet Psychiatry 2019, 6, 305–317. [Google Scholar] [CrossRef]
- Knight, S.; Jarvis, G.E.; Ryder, A.G.; Lashley, M.; Rousseau, C. ‘It just feels like an invasion’: Black first-episode psychosis patients’ experiences with coercive intervention and its influence on help-seeking behaviours. J. Black Psychol. 2022, 49, 200–235. [Google Scholar] [CrossRef]
- Faber, S.C.; Khanna Roy, A.; Michaels, T.I.; Williams, M.T. The weaponization of medicine: Early psychosis in the Black community and the need for racially informed mental healthcare. Front. Psychiatry 2023, 14, 1098292. [Google Scholar] [CrossRef]
- Rai, S.; Stade, E.C.; Giorgi, S.; Francisco, A.; Ungar, L.H.; Curtis, B.; Guntuku, S.C. Key language markers of depression on social media depend on race. Proc. Natl. Acad. Sci. USA 2024, 121, e2319837121. [Google Scholar] [CrossRef]
- Reuters. AI Fails to Detect Depression Signs in Social Media Posts by Black Americans. Reuters Health News. 2024. Available online: https://www.reuters.com/business/healthcare-pharmaceuticals/ai-fails-detect-depression-signs-social-media-posts-by-black-americans-study-2024-03-28/ (accessed on 1 November 2025).
- Moudden, I.E.; Bittner, M.C.; Karpov, M.V.; Osunmakinde, I.O.; Acheamponmaa, A.; Nevels, B.J.; Mbaye, M.T.; Fields, T.L.; Jordan, K.; Bahoura, M. Predicting mental health disparities using machine learning for African Americans in Southeastern Virginia. Sci. Rep. 2025, 15, 5900. [Google Scholar] [CrossRef]
- Himmelstein, G.; Bates, D.; Zhou, L. Examination of stigmatizing language in the electronic health record. JAMA Netw. Open 2022, 5, e2144967. [Google Scholar] [CrossRef]
- Obermeyer, Z.; Powers, B.; Vogeli, C.; Mullainathan, S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019, 366, 447–453. [Google Scholar] [CrossRef]
- Straw, I.; Callison-Burch, C. Artificial intelligence in mental health and the biases of language-based models. PLoS ONE 2020, 15, e0240376. [Google Scholar] [CrossRef]
- Omiye, J.A.; Lester, J.C.; Spichak, S.; Rotemberg, V.; Daneshjou, R. Large language models propagate race-based medicine. NPJ Digit. Med. 2023, 6, 195. [Google Scholar] [CrossRef]
- Yang, M.; El-Attar, A.A.; Chaspari, T. Deconstructing demographic bias in speech-based machine learning models for digital health. Front. Digit. Health 2024, 6, 1351637. [Google Scholar] [CrossRef]
- Aggarwal, N.K.; Pieh, M.C.; Dixon, L.; Guarnaccia, P.; Alegría, M.; Lewis-Fernández, R. Clinician descriptions of communication strategies to improve treatment engagement by racial/ethnic minorities in mental health services: A systematic review. Patient Educ. Couns. 2016, 99, 198–209. [Google Scholar] [CrossRef]
- Mongelli, F.; Georgakopoulos, P.; Pato, M.T. Challenges and opportunities to meet the mental health needs of underserved and disenfranchised populations in the United States. Focus 2020, 18, 16–24. [Google Scholar] [CrossRef]
- Unertl, K.M.; Schaefbauer, C.L.; Campbell, T.R.; Senteio, C.; Siek, K.A.; Bakken, S.; Veinot, T.C. Integrating community-based participatory research and informatics approaches to improve the engagement and health of underserved populations. J. Am. Med. Inform. Assoc. 2016, 23, 60–73. [Google Scholar] [CrossRef]
- Tawiah, N.; Monestime, J.P. Promoting Equity in AI-Driven Mental Health Care for Marginalized Populations. Proc. AAAI Symp. Ser. 2024, 4, 323–327. [Google Scholar] [CrossRef]
- Zou, J.; Schiebinger, L. AI can be sexist and racist—it’s time to make it fair. Nature 2018, 559, 324–326. [Google Scholar] [CrossRef]
- World Health Organization. Ethics and Governance of Artificial Intelligence for Health; World Health Organization: Geneva, Switzerland, 2021; Available online: https://www.who.int/publications/i/item/9789240029200 (accessed on 23 November 2025).
- Peterson Health Technology Institute AI Taskforce. Adoption of Artificial Intelligence in Healthcare Delivery Systems: Early Applications and Impacts; Peterson Health Technology Institute AI Taskforce: New York, NY, USA, 2025; Available online: https://phti.org/ai-adoption-early-applications-impacts/ (accessed on 23 November 2025).
- Blease, C.; Rodman, A. Generative Artificial Intelligence in Mental Healthcare: An Ethical Evaluation. Curr. Treat. Options Psychiatry 2024, 12, 5. [Google Scholar] [CrossRef]
- Coiera, E.; Liu, S. Evidence synthesis, digital scribes, and translational challenges for artificial intelligence in healthcare. Cell Rep. Med. 2022, 3, 100860. [Google Scholar] [CrossRef]
- Cross, S.; Bell, I.; Nicholas, J.; Valentine, L.; Mangelsdorf, S.; Baker, S.; Titov, N.; Alvarez-Jimenez, M. Use of AI in Mental Health Care: Community and Mental Health Professionals Survey. JMIR Ment. Health 2024, 11, e60589. [Google Scholar] [CrossRef]
- Grabb, D.; Lamparth, M.; Vasan, N. Risks from Language Models for Automated Mental Healthcare: Ethics and Structure for Implementation. arXiv 2024, arXiv:2406.11852. [Google Scholar]
- McCradden, M.; Hui, K.; Buchman, D.Z. Evidence, ethics and the promise of artificial intelligence in psychiatry. J. Med. Ethics 2023, 49, 573–579. [Google Scholar] [CrossRef]
- Warrier, U.; Warrier, A.; Khandelwal, K. Ethical considerations in the use of artificial intelligence in mental health. Egypt. J. Neurol. Psychiatry Neurosurg. 2023, 59, 139. [Google Scholar] [CrossRef]
- Abdulai, A.F. Is Generative AI Increasing the Risk for Technology-Mediated Trauma Among Vulnerable Populations? Nurs. Inq. 2025, 32, e12686. [Google Scholar] [CrossRef]
- Ahmed, S. “You end up doing the document rather than doing the doing”: Diversity, race equality and the politics of documentation. Ethn. Racial Stud. 2007, 30, 590–609. [Google Scholar] [CrossRef]
- Londono Tobon, A.; Flores, J.M.; Taylor, J.H.; Johnson, I.; Landeros-Weisenberger, A.; Aboiralor, O.; Avila-Quintero, V.J.; Bloch, M.H. Racial Implicit Associations in Psychiatric Diagnosis, Treatment, and Compliance Expectations. Acad. Psychiatry 2021, 45, 23–33. [Google Scholar] [CrossRef]
- Jacquemard, T.; Doherty, C.P.; Fitzsimons, M.B. Examination and diagnosis of electronic patient records and their associated ethics: A scoping literature review. BMC Med. Ethics 2020, 21, 76. [Google Scholar] [CrossRef]
- Morreim, E. Errors in the EMR: Under-recognized hazard for AI in healthcare. Houst. J. Health Law Policy 2025, 24, 127–165. [Google Scholar]
- Nash, E.; Perlson, J.E.; McCann, R.; Noy, G.; Lawrence, R.; Alves-Bradford, J.-M.; Akinade, T.; Perez, D.; Arbuckle, M.R. Mitigating racism and implicit bias in psychiatric notes: A quality improvement project addressing how race and ethnicity are documented. Acad. Psychiatry 2024, 48, 211–212. [Google Scholar] [CrossRef]
- Holman, D.; Salway, S.; Bell, A.; Beach, B.; Adebajo, A.; Ali, N.; Butt, J. Can intersectionality help with understanding and tackling health inequalities? Perspectives of professional stakeholders. Health Res. Policy Sys. 2021, 19, 97. [Google Scholar] [CrossRef]
- Collins, P.H. Intersectionality’s definitional dilemmas. Annu. Rev. Sociol. 2015, 41, 1–20. [Google Scholar] [CrossRef]
- Funer, F. Admitting the heterogeneity of social inequalities: Intersectionality as a (self-)critical framework and tool within mental health care. Philos. Ethics Humanit. Med. PEHM 2023, 18, 21. [Google Scholar] [CrossRef]
- Daly, C.; Ji, E. AI and Mental Health—An Intersectional Analysis. Health Action Research Group. Available online: https://www.healthactionresearch.org.uk/selected-blogs/the-intersectionality-of-ai-an/ (accessed on 10 September 2025).
- Kunstman, J.W.; Ogungbadero, T.; Deska, J.C.; Bernstein, M.J.; Smith, A.R.; Hugenberg, K. Race-based biases in psychological distress and treatment judgments. PLoS ONE 2023, 18, e0293078. [Google Scholar] [CrossRef]
- Aleem, M.; Imama, Z.; Naseem, M. Towards Culturally Adaptive Large Language Models in Mental Health: Using ChatGPT as a Case Study. In Proceedings of the 27th ACM SIGCHI Conference on Computer-Supported Cooperative Work & Social Computing, San José, Costa Rica, 9–13 November 2024; pp. 240–247. [Google Scholar] [CrossRef]
- Noe-Bustamante, L.; Hugo Lopez, M.; Krogstad, J. US Hispanic Population Surpassed 60 Million in 2019, but Growth Has Slowed; Pew Research Center: Washington, DC, USA, 2020; Available online: https://www.pewresearch.org/short-reads/2020/07/07/u-s-hispanic-population-surpassed-60-million-in-2019-but-growth-has-slowed/ (accessed on 22 November 2025).
- Pro, G.; Brown, C.; Rojo, M.; Patel, J.; Flax, C.; Haynes, T. Downward National Trends in Mental Health Treatment Offered in Spanish: State Differences by Proportion of Hispanic Residents. Psychiatr. Serv. 2022, 73, 1232–1238. [Google Scholar] [CrossRef]
- Breslau, J.; Cefalu, M.; Wong, E.C.; Burnam, M.A.; Hunter, G.P.; Florez, K.R.; Collins, R.L. Racial/ethnic differences in perception of need for mental health treatment in a US national sample. Soc. Psychiatry Psychiatr. Epidemiol. 2017, 52, 929–937. [Google Scholar] [CrossRef]
- O’Keefe, V.M.; Cwik, M.F.; Haroz, E.E.; Barlow, A. Increasing culturally responsive care and mental health equity with indigenous community mental health workers. Psychol. Serv. 2021, 18, 84–92. [Google Scholar] [CrossRef]
- Dinesh, D.N.; Rao, M.N.; Sinha, C. Language adaptations of mental health interventions: User interaction comparisons with an AI-enabled conversational agent (Wysa) in English and Spanish. Digit. Health 2024, 10, 20552076241255616. [Google Scholar] [CrossRef]
- Ospina-Pinillos, L.; Davenport, T.; Mendoza Diaz, A.; Navarro-Mancilla, A.; Scott, E.M.; Hickie, I.B. Using Participatory Design Methodologies to Co-Design and Culturally Adapt the Spanish Version of the Mental Health eClinic: Qualitative Study. J. Med. Internet Res. 2019, 21, e14127. [Google Scholar] [CrossRef]
- Fitzpatrick, K.K.; Darcy, A.; Vierhile, M. Delivering Cognitive Behavior Therapy to Young Adults with Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial. JMIR Ment. Health 2017, 4, e19. [Google Scholar] [CrossRef]
- Ma, Z.; Mei, Y.; Su, Z. Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support. In Proceedings of the AMIA Annual Symposium, San Francisco, CA, USA, 9–13 November 2024; Volume 2023, pp. 1105–1114. [Google Scholar]
- Henkel, T.; Linn, A.J.; van der Goot, M.J. Understanding the Intention to Use Mental Health Chatbots Among LGBTQIA+ Individuals: Testing and Extending the UTAUT. In Chatbot Research and Design; Følstad, A., Ed.; Conversations 2022; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2023; Volume 13815. [Google Scholar] [CrossRef]
- Meyer, I.H. Prejudice, social stress, and mental health in lesbian, gay, and bisexual populations: Conceptual issues and research evidence. Psychol. Bull. 2003, 129, 674–697. [Google Scholar] [CrossRef]
- Valentine, S.E.; Shipherd, J.C. A systematic review of social stress and mental health among transgender and gender non-conforming people in the United States. Clin. Psychol. Rev. 2018, 66, 24–38. [Google Scholar] [CrossRef]
- Bragazzi, N.L.; Crapanzano, A.; Converti, M.; Zerbetto, R.; Khamisy-Farah, R. The Impact of Generative Conversational Artificial Intelligence on the Lesbian, Gay, Bisexual, Transgender, and Queer Community: Scoping Review. J. Med. Internet Res. 2023, 25, e52091. [Google Scholar] [CrossRef]
- Veale, J.F.; Peter, T.; Travers, R.; Saewyc, E.M. Enacted Stigma, Mental Health, and Protective Factors Among Transgender Youth in Canada. Transgender Health 2017, 2, 207–216. [Google Scholar] [CrossRef]
- Felkner, V.; Chang, H.C.H.; Jang, E.; May, J. WinoQueer: A community-in-the-loop benchmark for anti-LGBTQ+ bias in large language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, Toronto, ON, Canada, 9–14 July 2023; Volume 1, pp. 9126–9140. [Google Scholar] [CrossRef]
- Gower, A.L.; Rider, G.N.; Del Río-González, A.M.; Erickson, P.J.; Thomas, D.; Russell, S.T.; Watson, R.J.; Eisenberg, M.E. Application of an intersectional lens to bias-based bullying among LGBTQ+ youth of color in the United States. Stigma Health 2023, 8, 363–371. [Google Scholar] [CrossRef]
- Legault, M.; Catala, A.; Poirier, P. Breaking the stigma around autism: Moving away from neuronormativity using epistemic justice and 4E cognition. Synthese 2024, 204, 84. [Google Scholar] [CrossRef]
- Cleary, M.; West, S.; Kornhaber, R.; Hungerford, C. Autism, Discrimination and Masking: Disrupting a Recipe for Trauma. Issues Ment. Health Nurs. 2023, 44, 799–808. [Google Scholar] [CrossRef]
- National Research Council; Committee on Educational Interventions for Children with Autism. Educating Children with Autism; Lord, C., McGee, J.P., Eds.; National Academies Press: Washington, DC, USA, 2001. [Google Scholar]
- Volkmar, F.R.; Lord, C.; Bailey, A.; Schultz, R.T.; Klin, A. Autism and pervasive developmental disorders. J. Child Psychol. Psychiatry Allied Discip. 2004, 45, 135–170. [Google Scholar] [CrossRef]
- Helt, M.; Kelley, E.; Kinsbourne, M.; Pandey, J.; Boorstein, H.; Herbert, M.; Fein, D. Can children with autism recover? If so, how? Neuropsychol. Rev. 2008, 18, 339–366. [Google Scholar] [CrossRef]
- Rogers, S.J.; Lewis, H. An effective day treatment model for young children with pervasive developmental disorders. J. Am. Acad. Child Adolesc. Psychiatry 1989, 28, 207–214. [Google Scholar] [CrossRef]
- Reichow, B.; Wolery, M. Comprehensive synthesis of early intensive behavioral interventions for young children with autism based on the UCLA young autism project model. J. Autism Dev. Disord. 2009, 39, 23–41. [Google Scholar] [CrossRef]
- Burke, M.M.; Taylor, J.L. To better meet the needs of autistic people, we need to rethink how we measure services. Autism Int. J. Res. Pract. 2023, 27, 873–875. [Google Scholar] [CrossRef]
- Rizvi, N.; Wu, W.; Bolds, M.; Mondal, R.; Begel, A.; Munyaka, I. Are robots ready to deliver autism inclusion? A critical review. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’24), Honolulu, HI, USA, 11–16 May 2024; Association for Computing Machinery: New York, NY, USA, 2024. [Google Scholar] [CrossRef]
- Williams, R. I, Misfit: Empty Fortresses, Social Robots, and Peculiar Relations in Autism Research. Techné Res. Philos. Technol. 2021, 25, 451–478. [Google Scholar] [CrossRef]
- Brandsen, S.; Chandrasekhar, T.; Franz, L.; Grapel, J.; Dawson, G.; Carlson, D. Prevalence of bias against neurodivergence-related terms in artificial intelligence language models. Autism Res. Off. J. Int. Soc. Autism Res. 2024, 17, 234–248. [Google Scholar] [CrossRef]
- Iannone, A.; Giansanti, D. Breaking Barriers-The Intersection of AI and Assistive Technology in Autism Care: A Narrative Review. J. Pers. Med. 2023, 14, 41. [Google Scholar] [CrossRef]
- Froio, N. Who Fills the Gaps When BIPOC Mental Health Needs Are Overlooked? Teen Vogue. 2021. Available online: https://prismreports.org/2021/08/04/who-fills-the-gaps-when-bipoc-mental-health-needs-are-overlooked/ (accessed on 22 November 2025).
- Koutsouleris, N.; Hauser, T.U.; Skvortsova, V.; De Choudhury, M. From promise to practice: Towards the realisation of AI-informed mental health care. Lancet. Digit. Health 2022, 4, e829–e840. [Google Scholar] [CrossRef]
- Donia, J.; Shaw, J.A. Co-design and ethical artificial intelligence for health: An agenda for critical research and practice. Big Data Soc. 2021, 8, 20539517211065248. [Google Scholar] [CrossRef]
- Goyal, S.; Rastogi, E.; Rajagopal, S.P.; Yuan, D.; Zhao, F.; Chintagunta, J.; Naik, G.; Ward, J. HealAI: A healthcare LLM for effective medical documentation. In Proceedings of the 17th ACM International Conference on Web Search and Data Mining, Mérida, Mexico, 4–8 March 2024; pp. 1167–1168. [Google Scholar] [CrossRef]
- Lawrence, H.R.; Schneider, R.A.; Rubin, S.B.; Matarić, M.J.; McDuff, D.J.; Jones Bell, M. The Opportunities and Risks of Large Language Models in Mental Health. JMIR Ment. Health 2024, 11, e59479. [Google Scholar] [CrossRef]
- Rice, B.T.; Rasmus, S.; Onders, R.; Thomas, T.; Day, G.; Wood, J.; Britton, C.; Hernandez-Boussard, T.; Hiratsuka, V. Community-engaged artificial intelligence: An upstream, participatory design, development, testing, validation, use and monitoring framework for artificial intelligence and machine learning models in the Alaska Tribal Health System. Front. Artif. Intell. 2025, 8, 1568886. [Google Scholar] [CrossRef]
- Biro, J.M.; Handley, J.L.; Mickler, J.; Reddy, S.; Kottamasu, V.; Ratwani, R.M.; Cobb, N.K. The value of simulation testing for the evaluation of ambient digital scribes: A case report. J. Am. Med. Inform. Assoc. 2025, 32, 928–931. [Google Scholar] [CrossRef]
- Heinz, M.V.; Bhattacharya, S.; Trudeau, B.; Quist, R.; Song, S.H.; Lee, C.M.; Jacobson, N.C. Testing domain knowledge and risk of bias of a large-scale general artificial intelligence model in mental health. Digit. Health 2023, 9, 20552076231170499. [Google Scholar] [CrossRef]
- Seo, J.; Choi, D.; Kim, T.; Cha, W.C.; Kim, M.; Yoo, H.; Oh, N.; Yi, Y.; Lee, K.H.; Choi, E. Evaluation Framework of Large Language Models in Medical Documentation: Development and Usability Study. J. Med. Internet Res. 2024, 26, e58329. [Google Scholar] [CrossRef]
- Altschuler, S.; Huntington, I.; Antoniak, M.; Klein, L.F. Clinician as editor: Notes in the era of AI scribes. Lancet 2024, 404, 2154–2155. [Google Scholar] [CrossRef]
- Eng, K.; Johnston, K.; Cerda, I.; Kadakia, K.; Mosier-Mills, A.; Vanka, A. A Patient-Centered Documentation Skills Curriculum for Preclerkship Medical Students in an Open Notes Era. Mededportal 2024, 20, 11392. [Google Scholar] [CrossRef]
- Lam, B.D.; Bourgeois, F.; Dong, Z.J.; Bell, S.K. Speaking up about patient-perceived serious visit note errors: Patient and family experiences and recommendations. J. Am. Med. Inform. Assoc. 2021, 28, 685–694. [Google Scholar] [CrossRef]
- Lear, R.; Freise, L.; Kybert, M.; Darzi, A.; Neves, A.L.; Mayer, E.K. Patients’ Willingness and Ability to Identify and Respond to Errors in Their Personal Health Records: Mixed Methods Analysis of Cross-Sectional Survey Data. J. Med. Internet Res. 2022, 24, e37226. [Google Scholar] [CrossRef]
- Freise, L.; Neves, A.L.; Flott, K.; Harrison, P.; Kelly, J.; Darzi, A.; Mayer, E.K. Assessment of Patients’ Ability to Review Electronic Health Record Information to Identify Potential Errors: Cross-sectional Web-Based Survey. JMIR Form. Res. 2021, 5, e19074. [Google Scholar] [CrossRef]
- Ciston, S. Intersectional AI Is Essential: Polyvocal, Multimodal, Experimental Methods to Save AI. J. Sci. Technol. Arts 2019, 11, 3–8. [Google Scholar] [CrossRef]
| AI Mental Health App | Population | Features | Findings |
|---|---|---|---|
| Wysa-Spanish | Hispanic | Mental wellness AI chatbot in Spanish, using cognitive behavioral therapy (CBT) and dialectical behavior therapy (DBT). | Users of Wysa-Spanish logged more sessions and disclosed distress at a higher volume. The findings underscore the importance of linguistic and cultural adaptation in the development of digital mental health apps (DMHAs) [78]. |
| Headspace Health | BIPOC | A guided meditation and support app for BIPOC users experiencing racism. Offers CBT modalities such as mindfulness and reframing exercises. | Many mental health apps depend on BIPOC facilitators to deliver lived-experience narratives rather than building platforms designed to address these inequities [101]. |
| WinoQueer | LGBTQ+ | Community-in-the-loop benchmark for anti-LGBTQ+ bias in large language models. | A benchmark study assessed anti-LGBTQ+ bias in large language models (LLMs); models showed high WinoQueer bias scores, indicating that homophobia and transphobia remain pervasive problems in LLMs [87]. |
| Hazel | Autism | Hazel gives neurodivergent users access to a series of assessments; AI interprets the results and provides personalized strategies. | Hazel uses AI to assess an individual’s behaviors and responses and to determine which tools are most effective for that person, suggesting AI could support personalized solutions that prioritize the specific needs of neurodivergent individuals [100]. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Yesha, R.; Orezzoli, M.C.E.; Sims, K.; Landau, A.Y. Digital Mental Health Through an Intersectional Lens: A Narrative Review. Healthcare 2026, 14, 211. https://doi.org/10.3390/healthcare14020211

