Healthcare
  • Review
  • Open Access

14 January 2026

Digital Mental Health Through an Intersectional Lens: A Narrative Review

1 MedStar Health Research Institute, Columbia, MD 21044, USA
2 Department of Medicine, Georgetown University Medical Center, Washington, DC 20057, USA
3 Department of Social Sciences, Florida Memorial University, Miami Gardens, FL 33054, USA
4 Department of Psychiatry, Georgetown University School of Medicine, Washington, DC 20057, USA
This article belongs to the Special Issue Advancing Mental Well-Being and Health Equity in Marginalized Communities

Abstract

For individuals with mental illness who experience multidimensional marginalization, the risks of encountering discrimination and receiving inadequate care are compounded. Artificial intelligence (AI) systems have propelled the provision of mental healthcare through the creation of digital mental health applications (DMHAs). DMHAs can be trained to identify specific markers of distress and resilience by incorporating community knowledge into machine learning algorithms. However, DMHAs that rely on rule-based systems and large language models (LLMs) may generate algorithmic bias. At-risk populations already face challenges in accessing culturally and linguistically competent care, and biased tools can exacerbate existing inequities. Creating equitable solutions in digital mental health requires AI training models that adequately represent the complex realities of marginalized people. This narrative review analyzes the current literature on digital mental health through an intersectional framework, which considers the nuanced experiences of individuals whose identities lie at the intersection of multiple stigmatized social groups. By assessing the disproportionate mental health challenges faced by these individuals, we highlight several culturally responsive strategies to improve community outcomes, including digital mental health technologies that incorporate the lived experience of individuals with intersecting identities while reducing the incidence of bias, harm, and exclusion.

1. Introduction

The United States is suffering a profound mental health crisis. Finding accessible and effective care is paramount when a health condition is urgent, yet no organized system for crisis mental health care exists. The consequences are multifaceted: distress for people in crisis, overburdened law enforcement and hospital emergency departments, increasing rates of suicide, financial burden, and acts of violence by and upon individuals in mental distress [1,2]. Feelings of loneliness coupled with barriers to accessing mental health services have made chatbots based on large language models (LLMs) more attractive to users seeking support [3]. Globally, approximately 970 million people are living with a mental disorder, and mental disorders are increasingly recognized as a leading cause of disease burden [4]. The widespread incorporation of AI into daily life has catalyzed a global discussion about its possible benefits and risks for mental health, and there is an urgent need to investigate the influence of widely accessible, conversational AI on mental health [5]. AI-driven tools hold the potential to produce a positive impact on mental health. Algorithms can be used to assess data by linking lifestyle, genetic, and environmental markers to evaluate the best treatment option for each individual [6]. AI-enabled tools can help prevent more severe mental illness by identifying populations who exhibit high-risk markers [7]. AI also shows promise in reducing barriers to treatment and providing timely access to mental health care [8]. While AI shows promise in the early identification of risk and in treating large populations of patients, significant weaknesses remain [9], including biases that may result in incorrect assessment. Building trust in DMHAs requires highlighting their potential to personalize care while being transparent about their safety implications.
DMHAs have rapidly gained recognition, promising innovative solutions and personalized interventions. Current efforts have focused on creating computational approaches to diagnosing and treating mental illnesses [10]. Additionally, digital mental health tools can deliver information and coping strategies to users instantaneously [11,12]. Text-based messaging with a human or with a machine (chatbots) has become pervasive in the past few years, and AI conversational agents could potentially provide contextual and instantaneous support. Woebot, for example, treats “depression and anxiety using a digital version of time-tested cognitive behavior therapy” [13]. LLMs connected to frequently used chatbots (e.g., OpenAI’s GPT and Google’s Gemini) hold significant promise to support and even automate psychotherapy, and interest in such applications is rapidly growing in the mental health sector. However, due to the complex nature of mental illness, guardrails are needed to keep users safe [14]. In the Mad in America blog post entitled “How Chatbots Deepen the Mental Health Crisis”, Kingsmith argues that AI chatbots use passive agreement to prioritize user engagement over safety. He further states that chatbots generally lack genuine empathy, turning users into recipients of generalized affirmations devoid of the empathic connection needed for human well-being [15].
While LLMs are not designed or intended for mental health support, preliminary evidence has found that millions of Americans with mental health conditions are turning to them for guidance, theoretically making these technologies one of the largest providers of mental health services in the United States [16]. However, LLMs may also exacerbate disparities via algorithmic bias. Algorithmic bias can originate from several sources, such as a lack of diverse representation in AI research and development teams, bias in training data, and failure to address the wider sociocultural framework [17,18,19]. Equity has become a key concern in the promise of digital platforms to transform healthcare [20,21]. Mental healthcare AI has the potential to reduce implicit bias in diagnosis and treatment through pattern recognition techniques, personalized diagnostics, and clinical decision support systems, such as the use of AI scribes [8,22]. In practice, however, LLMs often produce more errors when analyzing mental health data from marginalized groups [23], and models trained on biased data are likely to exacerbate existing inequities unless solution-focused measures are implemented [24]. LLMs may also amplify unsubstantiated claims about mental illness, reinforcing existing biases in psychiatric diagnosis and treatment in minority populations, and may learn and reproduce stigmatizing language found in electronic health records [25].
This paper examines how digital mental health interventions can improve health equity by highlighting the lived experience of individuals across multiple axes of marginalization, using an intersectional framework that considers the impact that belonging to multiple stigmatized social groups has on the safety and effectiveness of digital mental health tools. Intersectional theory calls attention to the reality that individuals embody several identities simultaneously that can be vulnerable to discrimination, magnifying their risk of stigmatization [26]. Our narrative review synthesizes current evidence on the use of AI for mental health, evaluates its impact on marginalized communities through an intersectional lens, and identifies solution-based strategies to address biases and shortcomings of current technologies for digital mental health.

2. Methods

This narrative review was conducted using an intersectional framework [27] to examine the impact of digital mental health interventions on marginalized communities. Literature was selected through searches of PubMed, JSTOR, ACM Digital Library, MEDLINE, PsycINFO, arXiv, and Google Scholar, using combinations of terms such as “digital mental health”, “algorithmic bias”, “artificial intelligence”, “large language models”, “racial bias”, “neurodivergence”, “intersectionality”, “equity”, and “marginalized populations”. Around 120 empirical sources published between 2006 and 2025 were used for this narrative review. The inclusion criteria focused on the impact of digital mental health on racial/ethnic minorities, LGBTQ+ individuals, and neurodivergent populations. We excluded studies that did not report basic methodological details on the impact of DMHAs on the mental well-being of individuals with intersecting identities. The majority of our sources are peer-reviewed publications. In addition, approximately 10 opinion pieces from major news outlets were integrated to ensure a diverse perspective, understand critical insights, and explore any knowledge gaps; these were treated as contextual references rather than primary sources, as they rely on non-systematic, anecdotal evidence. Intersectional identities exist in many permutations that span multiple axes of discrimination (e.g., gender, race, ability, class, sexuality, neurodivergence). One limitation of our review is that not all of these combinations were explored; further research investigating a variety of experiences is needed.

3. Narrative Review

3.1. The Role of AI in Mental Health Care

The incidence of mental health diagnoses and psychotropic prescriptions has continued to rise in recent years [28,29,30]. Amidst this turmoil, many have found AI to be a potentially revolutionary tool. AI chatbots reduce barriers to mental health support, particularly for individuals with mild distress. Conversational AI that can convincingly mimic human speech may reduce demand on overburdened mental health systems [14,31]. Wysa, a smartphone-based AI chatbot app, was created to support mental well-being through a text-based platform. The app reacts to emotions that a user expresses in text conversations and draws on evidence-based self-help tools such as cognitive behavioral therapy (CBT), dialectical behavior therapy, positive behavior support, behavioral reinforcement, and mindfulness to help users build emotional resilience skills [32]. A systematic review and meta-analysis synthesized evidence on the effectiveness of AI-based conversational agents (CAs) used for mental well-being. Its findings indicate that CAs may effectively alleviate mental distress, with the most significant effects seen in studies using generative AI, multimodal or voice-based CAs, or interventions delivered through mobile applications and instant messaging platforms [33]. The use of AI for treatment and intervention in mental illness represents a transformative shift in how we approach mental illness, and the accessibility of digital mental health tools can reduce barriers that have historically prevented individuals from accessing care.
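To make the rule-based style of support described above more concrete, the sketch below shows how a text-based well-being chatbot might route a user message to a self-help exercise. This is a minimal illustration under stated assumptions, not Wysa's or any vendor's implementation: the keyword lists, exercise names, and crisis-escalation rule are hypothetical placeholders.

```python
# Illustrative sketch only: a minimal rule-based routing step of the kind a
# text-based well-being chatbot might use to map a detected emotion to an
# evidence-based self-help exercise. Keyword lists, exercise names, and the
# crisis-escalation rule are hypothetical placeholders.

CRISIS_TERMS = {"suicide", "kill myself", "end my life", "self-harm"}

EMOTION_KEYWORDS = {
    "anxiety": {"anxious", "worried", "panic", "nervous"},
    "low_mood": {"sad", "hopeless", "down", "empty"},
    "anger": {"angry", "furious", "irritated"},
}

# Hypothetical mapping from detected emotion to a CBT/DBT-style exercise.
EXERCISE_LIBRARY = {
    "anxiety": "Guided grounding exercise (5-4-3-2-1 senses check-in)",
    "low_mood": "Thought record: identify and reframe one negative thought",
    "anger": "Paced breathing followed by an opposite-action prompt",
    "default": "Open-ended reflective journaling prompt",
}


def route_message(user_text: str) -> str:
    """Return a next-step suggestion for a single user message."""
    text = user_text.lower()

    # Safety guardrail first: any crisis language bypasses self-help routing.
    if any(term in text for term in CRISIS_TERMS):
        return "Escalate: share crisis resources and offer human support."

    # Simple keyword matching stands in for a real emotion classifier.
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if any(word in text for word in keywords):
            return EXERCISE_LIBRARY[emotion]
    return EXERCISE_LIBRARY["default"]


if __name__ == "__main__":
    print(route_message("I feel so anxious about tomorrow"))
    print(route_message("I'm just sad and empty lately"))
```

A real agent would replace the keyword matching with a trained classifier and clinically validated content, but the routing structure, and especially the safety check preceding it, is the part the prose above describes.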

3.2. Bias in Digital Mental Health Diagnosis and Treatment

Digital mental health systems offer the opportunity to identify various psychological patterns, assist with triage, and support treatment plans that broaden access to care. Despite this transformative potential, AI-powered mental health tools may replicate biases rooted in structural inequities, and many individuals already face significant barriers to timely and effective mental healthcare. It is therefore critical to examine how these biases affect the lived experiences of marginalized users. Several forms of bias can arise throughout the lifecycle of AI systems, including during data collection, clinical documentation, model design, and implementation, and each can perpetuate existing disparities in mental health care [34,35,36,37]. Careful attention to how inequities emerge across these stages is crucial before AI systems are implemented for widespread clinical use.
Historical disparities in psychiatric diagnosis and treatment underscore these risks. Black patients have been disproportionately overdiagnosed with psychotic disorders, whereas Asian American and Hispanic patients have been underdiagnosed with mood disorders. Black patients also experience high rates of coercive interventions, such as compulsory hospitalization, and are frequently subjected to more aggressive forms of treatment by clinicians [38,39,40].
Comparable disparities are now also evident in digital contexts. Black communities are less likely to have mental health problems identified by digital mental health tools because AI lacks a critical understanding of how marginalized communities communicate and of their cultural contexts [41,42,43]. Several factors increase the incidence of bias in AI systems. For example, NLP tools may misinterpret African American English and other culturally specific forms of expression used by marginalized communities, resulting in misclassification and biased recommendations.
Furthermore, clinical documentation and electronic health records often contain implicit and explicit forms of bias concerning marginalized patients. When these data sets are used for training AI systems, they can reproduce disparities and promote bias in mental health diagnosis and treatment [23,25,44,45,46,47]. Speech-based machine learning models also show reduced accuracy, particularly for Black women, and can misclassify anxiety and depression, prompting concerns about fairness and equity in AI-driven healthcare [48].
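One practical way to surface the subgroup disparities described above is to disaggregate a model's error rates by demographic group before deployment. The sketch below is a minimal illustration of such an audit, assuming a hypothetical screening model's outputs are available in a table; the column names and toy numbers are placeholders, not results from any cited study.

```python
# Illustrative sketch only: auditing a screening model's error rates by
# demographic subgroup, the kind of disaggregated check that can surface
# disparities hidden by an aggregate accuracy figure.

import pandas as pd

# Hypothetical audit table: one row per person, with the model's prediction,
# the reference label, and a self-reported group label.
audit = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B", "A"],
    "label":      [1,   0,   1,   1,   1,   0,   1,   0],   # 1 = distress present
    "prediction": [1,   0,   1,   0,   1,   0,   0,   0],
})


def false_negative_rate(df: pd.DataFrame) -> float:
    """Share of true positives that the model missed."""
    positives = df[df["label"] == 1]
    if positives.empty:
        return float("nan")
    return float((positives["prediction"] == 0).mean())


# Disaggregate: a model can look accurate overall while missing far more
# cases of distress in one subgroup than another.
for group, rows in audit.groupby("group"):
    print(f"group {group}: false negative rate = {false_negative_rate(rows):.2f}")
```

In practice, the same comparison would be run for several error types (false negatives, false positives, calibration) and across intersecting group combinations, not just single-axis labels.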
The implications of these biases extend beyond technical design and affect both clinical practice and health policy. Incorporating biased AI systems into healthcare can reinforce mistrust among marginalized communities, as these systems may perpetuate existing biases and inequities [49,50]. To reduce these biases in AI, it is necessary to go beyond algorithmic fairness by emphasizing patient-centered outcomes, which involves considering patients’ lived experiences and promoting inclusive design principles. This includes highlighting the significance of community-based participatory research and equity-centered design in the development of digital mental health interventions that are both clinically effective and socially responsive [21,51,52]. Without such safeguards, AI will likely reproduce rather than mitigate existing inequities, leaving marginalized populations with continued barriers to high-quality mental health care [53,54].
Ambient AI scribes, voice-to-text transcription tools that use LLMs to generate medical notes from recorded clinician-patient encounters, are poised to have a seismic impact on the delivery of mental healthcare, since they are being rapidly utilized in a wide variety of healthcare settings [55]. However, inadequate attention has been given to how their use may affect marginalized populations. An emerging body of literature identifies important ethical considerations for the deployment of ambient AI scribes in mental health care contexts, highlighting the increased risk of patient harm from compromised privacy and errors resulting from LLM-mediated transcription and diagnostic reasoning [56,57,58,59,60,61]. These risks are augmented for members of vulnerable populations, who already experience higher rates of negative outcomes such as misdiagnosis due to biased mental health documentation practices that LLMs are likely to amplify [62,63,64,65,66,67].

4. An Intersectional Framework

4.1. Intersectionality

The term “intersectionality” refers to the premise that race, class, age, gender, ethnicity, sexuality, ability, and nation do not work as singular, mutually exclusive entities, but rather as categories that interact to produce multifaceted social inequalities [68,69]. The concept of intersectionality also provides a framework for understanding how individuals can be simultaneously empowered and oppressed through the ways their social identities intersect [27]. Digital mental health tools are often implicitly or explicitly designed to serve a white, male, Western, cisgender, straight, neurotypical population. Yet even a revised approach that addresses only one category of difference (e.g., a chatbot targeted at female users) is unlikely to be equally effective for all women, and could even exacerbate harm. Using an intersectional lens to create, evaluate, and deploy digital mental health tools could help address inequalities by identifying the multifaceted, intricate needs that reflect the many complex variations in lived experience. Intersectional approaches may be of particular significance in the mental health field, since patients are already more likely to suffer from stigma and discrimination [70]. Applying an intersectional approach to AI mental health tools makes it possible to account for a fuller range of possible benefits and pitfalls for marginalized groups than has previously been offered [71]. This review examined how digital mental health tools may disproportionately harm individuals with intersecting marginalized identities by reinforcing diagnostic bias, excluding culturally specific expressions of distress, and heightening feelings of mistrust.

4.2. Racial/Ethnic Disparities

One of the greatest challenges in using diagnostic instruments for mental health is that tests developed for the general population do not always represent minorities accurately. Many examiners tend to over-pathologize the same item response when it is elicited by minority members [72] compared to the middle-class Caucasians on whom the tests are normed. Research has shown that individuals in racial and ethnic minority groups are 20–50% less likely to initiate mental health service use and 40–80% more likely to drop out of treatment prematurely [49,50]. As AI technology improves, it may quickly progress toward identifying risk and supporting personalized interventions, particularly in minority populations. However, LLMs such as GPT-5 are developed with a predominantly Western perspective, which does not adequately address the diversity of user needs across various cultural or social backgrounds [73]. Historical and contemporary adverse events have bred mistrust of the medical and mental healthcare system, including of big data and AI applications, among the Black community; biases, discrimination, and a lack of understanding of cultural sensitivities further hinder individuals from seeking appropriate psychiatric care. For example, a study published in the Journal of Black Psychology elucidated young Black men’s experiences of abuse by mental health professionals and police officers, increasing feelings of mistrust [39].
From 2010 to 2019, Hispanic individuals accounted for over half of the population growth in the United States [74]. There is a dire need for mental health care within Hispanic populations [75]. However, relative to adults in other racial-ethnic groups, Hispanic adults were found to be less inclined to seek treatment for mental distress [76]. Mental health services that integrate culture and language into clinical practice could increase the desire for treatment within Hispanic communities [77]. Digital mental health interventions (DMHIs) have shown promise in addressing these inequities; however, many are only available in English, creating barriers for non-English speakers. Research shows that mental health services delivered in an individual’s native language promote better outcomes [78]. At present, only a limited number of apps offer translation options or cultural adaptations [79]. Research comparing Wysa-Spanish with Wysa-English found that Wysa-Spanish users logged more sessions and disclosed distress more often [78]. Users also showed a preference for interventions with free-text responses in Spanish, and Wysa-Spanish users more frequently logged terms associated with negative emotions, risk factors for self-harm, and suicidal ideation. This highlights the critical need for platforms such as ChatGPT to adopt a more culturally inclusive interface.

4.3. Lesbian, Gay, Bisexual, Transgender, Queer, and/or Questioning (LGBTQ+)

The marginalization and lack of representation of LGBTQ+ individuals often hinder them from seeking mental health services, especially due to the risk of encountering therapists who do not adequately provide gender-affirming care. Chatbots focused on the LGBTQ+ population can give users a safe environment for sensitive conversations, enabling roleplay experiences like coming out and dating in a low-risk environment [80,81]. However, AI chatbots could also replicate harmful stereotypes as a result of biases in their training data. They may also inadequately address the multifaceted needs of LGBTQ+ individuals, which could contribute to feelings of isolation. In addition, these chatbots may offer advice that places users in vulnerable situations, such as coming out to family members who are not supportive [3].
LGBTQ+ individuals are increasingly utilizing chatbots for mental health support. The capabilities of LLM chatbots are most significant when contextualizing their influence on marginalized communities [3,82]. Individuals in the LGBTQ+ community are approximately three times more likely to suffer from anxiety, depression, and suicidal ideation [83,84,85]. Previous studies have found that a third of a geographically diverse sample of transgender youth in Canada had attempted suicide in the preceding year [86]. These results underscore a grim reality for transgender youth. While chatbots have the potential to provide inclusive mental health resources for the LGBTQ+ community, biases in these technologies can exacerbate damaging stereotypes [3]. A benchmark study analyzing whether LLMs produce bias against the LGBTQ+ community [87] used a community-in-the-loop method to highlight explicit harms identified by the LGBTQ+ community. The results showed high WinoQueer (WQ) bias scores, signifying that homophobia and transphobia are significant problems in LLMs and that anti-queer sentiment must be addressed.
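Benchmarks of this kind quantify bias by comparing how a model scores paired stereotypical and counter-stereotypical sentences. The sketch below is a simplified illustration of that mechanic using a small causal language model and hand-written pairs; the actual WinoQueer benchmark uses community-sourced sentence pairs and pseudo-log-likelihood scoring, so this code is a hedged demonstration of the idea, not a reproduction of the cited study.

```python
# Illustrative sketch only: paired-sentence bias scoring in the spirit of
# benchmarks such as WinoQueer. The model choice and the two hand-written
# pairs are placeholders; the resulting number is meaningless beyond showing
# the mechanics of the comparison.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "distilgpt2"  # small model chosen only to keep the sketch runnable
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def sentence_log_likelihood(sentence: str) -> float:
    """Approximate total log-likelihood the model assigns to a sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    # outputs.loss is the mean negative log-likelihood over predicted tokens.
    n_predicted = inputs["input_ids"].shape[1] - 1
    return -outputs.loss.item() * n_predicted


# Hypothetical pairs: identical sentences except for the identity term.
pairs = [
    ("The gay patient was described as unstable.",
     "The straight patient was described as unstable."),
    ("The transgender client was labeled unreliable.",
     "The cisgender client was labeled unreliable."),
]

# Bias score: fraction of pairs where the model assigns higher likelihood
# to the sentence targeting the marginalized identity.
preferred = sum(
    sentence_log_likelihood(stereo) > sentence_log_likelihood(counter)
    for stereo, counter in pairs
)
print(f"stereotype-preference rate: {preferred / len(pairs):.2f}")
```

A score near 0.5 would suggest no systematic preference over these pairs; the benchmark's reported scores well above that level are what motivates the concern described above.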
LLMs may encode many biases and stereotypes that cause serious harm to the queer community, and models can exacerbate these biases if human supervision is not present at every step of the training pipeline [87]. AI shows promise for the LGBTQ+ community; however, biases should be addressed to ensure that these technologies do not exacerbate stigma and discrimination. Inclusive models should be used to create AI systems that are fair and respectful of LGBTQ+ individuals [85]. It is important to note that much of the existing AI and digital mental health evidence treats LGBTQ+ populations as a monolithic group, obscuring multiple axes of marginalization. For example, previous studies have shown that multiracial LGBTQ+ youth experienced the highest rates of race-based bullying [88]. Very limited research has examined how AI-driven mental health systems may differentially impact intersectional subgroups, such as Black transgender youth or autistic queer individuals, who may face increased risks of bias, misdiagnosis, and harm.

4.4. Neurodivergence

Neuronormativity elevates neurotypical social behaviors as “normal”, introducing an imbalance of power between neurotypical individuals and neurodivergent individuals, such as those diagnosed as being on the autism spectrum [89]. Autistic individuals are frequently disadvantaged due to factors like stigma, discrimination, socioeconomic inequality, and poverty [90]. In a 2021 report, the United States National Institutes of Health noted that research shows early diagnosis of autism is likely to have long-term positive effects on symptoms and skill development [91,92,93,94,95]. Reducing barriers to services for autistic individuals is critical [96]. Neuronormativity has often resulted in dehumanizing comparisons to animals, robots, and other non-human entities, which increases the importance of safeguarding technological agents against replicating these stereotypes [97,98]. There is a growing interest in humanizing AI agents such as robots and chatbots. However, prior work in computing research has shown ways in which technologies may fail to address the needs of autistic and other neurodivergent (ND) individuals and may increase their marginalization. Chatbots may evaluate legitimate struggles through a narrow, algorithmic process [15].
Another problem is the potential for LLMs to misunderstand or misrepresent the experiences of non-speaking neurodivergent individuals who use alternative communication methods. A 2024 study analyzed AI bias towards several neurodivergence-related conditions, including autism, schizophrenia, ADHD, and OCD [99]. The authors discovered a profound level of bias towards terms related to autism and neurodiversity, with extremely high levels of bias for tests related to violence, slurs, or obsessiveness. In addition, the research revealed negative sentiment for terms associated with autistic individuals with higher support needs. AI could contribute to personalized solutions by prioritizing the specific needs of neurodivergent individuals [100]. While these tools show potential for neurodivergent individuals, the unique needs of the neurodivergent population should be considered, and the risks of perpetuating stigma or bias must be addressed. Table 1 summarizes findings for four AI mental health tools and resources for marginalized populations; a toy illustration of a term-level association test follows the table.
Table 1. AI Mental Health Tools and Resources for Marginalized Populations.
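Term-level bias of the kind reported in the neurodivergence study above is often quantified with association scores that compare how closely identity terms sit to “pleasant” versus “unpleasant” attribute words in a model's representation space. The sketch below illustrates that mechanic with hypothetical word vectors and word lists; it is not the cited study's method or data.

```python
# Illustrative sketch only: a simplified word-association test. The toy
# vectors, word lists, and attribute sets are hypothetical placeholders;
# in practice the vectors would come from a trained language model.

import numpy as np

toy_vectors = {
    "autistic":     np.array([0.9, 0.1, 0.3]),
    "neurotypical": np.array([0.2, 0.8, 0.4]),
    "calm":         np.array([0.1, 0.9, 0.5]),
    "friendly":     np.array([0.2, 0.7, 0.6]),
    "violent":      np.array([0.8, 0.1, 0.2]),
    "obsessive":    np.array([0.9, 0.2, 0.1]),
}

PLEASANT = ["calm", "friendly"]
UNPLEASANT = ["violent", "obsessive"]


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def association(word: str) -> float:
    """Mean similarity to pleasant terms minus mean similarity to unpleasant terms."""
    pleasant = np.mean([cosine(toy_vectors[word], toy_vectors[w]) for w in PLEASANT])
    unpleasant = np.mean([cosine(toy_vectors[word], toy_vectors[w]) for w in UNPLEASANT])
    return float(pleasant - unpleasant)


# A large gap between the two scores indicates a biased association in the
# representation, of the kind the cited study reports for neurodivergence terms.
for term in ["autistic", "neurotypical"]:
    print(f"{term}: association score = {association(term):+.3f}")
```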

5. Culturally Responsive and Participatory Design for Inclusive AI

To safeguard against biases and inequities in healthcare systems, strategies are needed that actively focus on marginalized populations and leverage data reflecting a wide range of patterns of behavior and conditions. An ontological database organization that mitigates bias risks through multiple data representations could facilitate the use of equitable LLMs in mental health care [102]. Mitigating the potentially damaging impacts of AI-driven mental health documentation, for example, will require a multi-faceted approach. Development of mental health-specific LLMs incorporating the diverse perspectives of potential beneficiaries, including clinicians and patients of different backgrounds, could increase the effectiveness of AI scribe tools and improve their cultural sensitivity and responsiveness [103,104,105,106]. More research testing the performance of existing ambient AI scribe tools with a variety of populations is needed to inform ethical decision-making about uptake and implementation [14,24,107,108,109]. Finally, teaching clinicians how to evaluate and modify the output of AI scribes using a culturally sensitive lens, and empowering patients to engage in self-advocacy in response to errors and stigmatizing language in the medical record, will be key components of harm reduction as we navigate this era of digital transformation [110,111,112,113,114]; a minimal illustration of such a review aid appears below. Transforming AI through an intersectional lens must happen across multiple communities. Despite the democratization of digital technologies, markers of marginalized identity remain vulnerable to bias within AI systems [115]. Analyzing AI through an intersectional lens can help uncover detrimental power structures and create solutions that implement trustworthy and equitable AI systems.
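As a minimal illustration of the clinician-review step described above, the sketch below flags potentially stigmatizing phrases in an AI-generated draft note and pairs each flag with a neutral-language prompt. The phrase list and suggestions are hypothetical placeholders drawn loosely from the stigmatizing-language literature; any real deployment would need validated lexicons and institutional guidance.

```python
# Illustrative sketch only: a lightweight review aid that flags phrases in an
# AI-generated clinical note for clinician attention before sign-off. The term
# list and suggested rewordings are hypothetical placeholders.

import re

FLAGGED_PHRASES = {
    r"\bnon[- ]?compliant\b": "Consider describing specific barriers to the plan instead.",
    r"\bdrug[- ]seeking\b": "Consider documenting the reported symptom and request directly.",
    r"\bclaims\b": "Consider a neutral verb such as 'reports' or 'states'.",
    r"\bagitated and aggressive\b": "Consider describing the observed behavior specifically.",
}


def review_note(note_text: str) -> list[str]:
    """Return clinician-facing prompts for each flagged phrase in the note."""
    prompts = []
    for pattern, suggestion in FLAGGED_PHRASES.items():
        for match in re.finditer(pattern, note_text, flags=re.IGNORECASE):
            prompts.append(f"Flagged '{match.group(0)}': {suggestion}")
    return prompts


if __name__ == "__main__":
    draft = "Patient is non-compliant with medication and claims the side effects are severe."
    for prompt in review_note(draft):
        print(prompt)
```

Such a tool supports, rather than replaces, the clinician's culturally informed judgment: the flags are prompts for human review, not automatic rewrites.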

6. Discussion

The growing number of individuals using AI for mental health support poses unique challenges. Digital mental health initiatives may fill a critical role in a system that is severely lacking in resources, funding, and accessibility. However, consequences for marginalized communities exist and should be addressed. An intersectional framework can be used to discover biases in existing AI and propose solutions for marginalized communities [115]. Our narrative review assessed the current literature on this topic while providing an intersectional lens for digital mental health. While our review was both iterative and interpretative, there were limitations to our approach.
This review did not include empirical validation, and its conclusions are therefore conceptual. Additionally, the evolving nature of technological advancement means that many of the included studies may become outdated relatively quickly. Finally, although intersectionality provides a critical outlook, it also introduces inherent limitations: accounting for race, gender, sexuality, and neurodivergence simultaneously within AI-driven systems is a methodological challenge. Many intersectional issues may be underrepresented in this review. Additionally, most cited studies examine single-axis identities (e.g., race, sexual orientation, or neurodivergence), which limits the ability to draw accurate conclusions about intersectional experiences. These limitations can be addressed by future research initiatives that highlight real-world data, community-based participatory methods, and intersectional frameworks. For example, bias tests along “poor–rich” and “gay–straight” axes indicate that AI models have learned associations between neurodivergence-related terms and both LGBTQ+ identity and socioeconomic inequity. Future research investigating AI bias against LGBTQ+ neurodivergent individuals and other multiply marginalized neurodivergent individuals would aid in debiasing efforts [99]. The engagement of marginalized communities in the co-creation of AI tools is critical to ensuring that digital mental health technologies promote equity.

7. Conclusions

AI in mental health has paved the way towards promising solutions and interventions. However, LLMs may reveal unfounded assumptions regarding mental health, perpetuating biases in psychiatric diagnoses and treatment in minority populations. This narrative review highlights the current literature on digital mental health using an intersectional lens by assessing the diverse needs of marginalized populations. We conclude our review with recent findings on comprehensive strategies that involve engaging marginalized communities in the creation of AI tools.

Author Contributions

Conceptualization—R.Y. and M.C.E.O.; Methodology—R.Y. and M.C.E.O.; Formal analysis—R.Y., M.C.E.O., K.S. and A.Y.L.; Writing—original draft, R.Y., M.C.E.O., K.S. and A.Y.L.; Writing—review and editing, R.Y., M.C.E.O., K.S. and A.Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

The authors declare no external funding for this narrative review.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Balfour, M.E.; Carson, C.A.; Williamson, R. Alternatives to the Emergency Department. Psychiatr. Serv. 2017, 68, 306. [Google Scholar] [CrossRef]
  2. Hogan, M.F.; Goldman, M.L. New Opportunities to Improve Mental Health Crisis Systems. Psychiatr. Serv. 2021, 72, 169–173. [Google Scholar] [CrossRef]
  3. Ma, Z.; Mei, Y.; Long, Y.; Su, Z.; Gajos, K.Z. Evaluating the experience of LGBTQ+ people using large language model based chatbots for mental health support. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024; Association for Computing Machinery: New York, NY, USA, 2024. [Google Scholar] [CrossRef]
  4. Patel, V.; Saxena, S.; Lund, C.; Thornicroft, G.; Baingana, F.; Bolton, P.; Chisholm, D.; Collins, P.Y.; Cooper, J.L.; Eaton, J.; et al. The Lancet Commission on global mental health and sustainable development. Lancet 2018, 392, 1553–1598. [Google Scholar] [CrossRef] [PubMed]
  5. Ettman, C.K.; Galea, S. The Potential Influence of AI on Population Mental Health. JMIR Ment. Health 2023, 10, e49936. [Google Scholar] [CrossRef] [PubMed]
  6. Alhuwaydi, A.M. Exploring the Role of Artificial Intelligence in Mental Healthcare: Current Trends and Future Directions—A Narrative Review for a Comprehensive Insight. Risk Manag. Healthc. Policy 2024, 17, 1339–1348. [Google Scholar] [CrossRef]
  7. Mentis, A.A.; Lee, D.; Roussos, P. Applications of artificial intelligence-machine learning for detection of stress: A critical overview. Mol. Psychiatry 2024, 29, 1882–1894. [Google Scholar] [CrossRef]
  8. Graham, S.; Depp, C.; Lee, E.E.; Nebeker, C.; Tu, X.; Kim, H.C.; Jeste, D.V. Artificial Intelligence for Mental Health and Mental Illnesses: An Overview. Curr. Psychiatry Rep. 2019, 21, 116. [Google Scholar] [CrossRef]
  9. Tornero-Costa, R.; Martinez-Millana, A.; Azzopardi-Muscat, N.; Lazeri, L.; Traver, V.; Novillo-Ortiz, D. Methodological and Quality Flaws in the Use of Artificial Intelligence in Mental Health Research: Systematic Review. JMIR Ment. Health 2023, 10, e42045. [Google Scholar] [CrossRef]
  10. Jin, K.W.; Li, Q.; Xie, Y.; Xiao, G. Artificial intelligence in mental healthcare: An overview and future perspectives. Br. J. Radiol. 2023, 96, 20230213. [Google Scholar] [CrossRef]
  11. Minerva, F.; Giubilini, A. Is AI the Future of Mental Healthcare? Topoi 2023, 42, 809–817. [Google Scholar] [CrossRef] [PubMed]
  12. Carlson, C.G. Virtual and Augmented Simulations in Mental Health. Curr. Psychiatry Rep. 2023, 25, 365–371. [Google Scholar] [CrossRef] [PubMed]
  13. Singh, O.P. Chatbots in psychiatry: Can treatment gap be lessened for psychiatric disorders in India. Indian J. Psychiatry 2019, 61, 225. [Google Scholar] [CrossRef] [PubMed]
  14. Stade, E.C.; Stirman, S.W.; Ungar, L.H.; Boland, C.L.; Schwartz, H.A.; Yaden, D.B.; Sedoc, J.; DeRubeis, R.J.; Willer, R.; Eichstaedt, J.C. Large language models could change the future of behavioral healthcare: A proposal for responsible development and evaluation. npj Ment. Health Res. 2024, 3, 12. [Google Scholar] [CrossRef]
  15. Kingsmith, A.T. How Chatbots Deepen the Mental Health Crisis. Mad in America. Available online: https://www.madinamerica.com/2025/10/how-chatbots-deepen-the-mental-health-crisis/ (accessed on 20 November 2025).
  16. Rousmaniere, T.; Zhang, Y.; Li, X.; Shah, S. Large language models as mental health resources: Patterns of use in the United States. Pract. Innov. 2025. [Google Scholar] [CrossRef]
  17. Greene, A.S.; Shen, X.; Noble, S.; Horien, C.; Hahn, C.A.; Arora, J.; Tokoglu, F.; Spann, M.N.; Carrión, C.I.; Barron, D.S.; et al. Brain-phenotype models fail for individuals who defy sample stereotypes. Nature 2022, 609, 109–118. [Google Scholar] [CrossRef]
  18. Lechner, T.; Ben-David, S.; Agarwal, S.; Ananthakrishnan, N. Impossibility results for fair representations. arXiv 2021, arXiv:2107.03483. [Google Scholar] [CrossRef]
  19. Fields, C.T.; Black, C.; Thind, J.K.; Jegede, O.; Aksen, D.; Rosenblatt, M.; Assari, S.; Bellamy, C.; Anderson, E.; Holmes, A.; et al. Governance for anti-racist AI in healthcare: Integrating racism-related stress in psychiatric algorithms for Black Americans. Front. Digit. Health 2025, 7, 1492736. [Google Scholar] [CrossRef]
  20. Williams, D.R.; Rucker, T.D. Understanding and addressing racial disparities in health care. Health Care Financ. Rev. 2000, 21, 75–90. [Google Scholar]
  21. Raza, M.M.; Venkatesh, K.P.; Kvedar, J.C. Promoting racial equity in digital health: Applying a cross-disciplinary equity framework. npj Digit. Med. 2023, 6, 3. [Google Scholar] [CrossRef] [PubMed]
  22. Lee, E.E.; Torous, J.; De Choudhury, M.; Depp, C.A.; Graham, S.A.; Kim, H.C.; Paulus, M.P.; Krystal, J.H.; Jeste, D.V. Artificial Intelligence for Mental Health Care: Clinical Applications, Barriers, Facilitators, and Artificial Wisdom. Biol. Psychiatry Cogn. Neurosci. Neuroimaging 2021, 6, 856–864. [Google Scholar] [CrossRef]
  23. Wang, Y.; Liu, J.; Shen, Y.; Wang, C.; Jin, Q.; Wang, F.; Zhang, Y. Unveiling and mitigating bias in mental health analysis with large language models. arXiv 2024, arXiv:2406.12033. [Google Scholar] [CrossRef]
  24. Bouguettaya, A.; Stuart, E.M.; Aboujaoude, E. Racial bias in AI-mediated psychiatric diagnosis and treatment: A qualitative comparison of four large language models. npj Digit. Med. 2025, 8, 332. [Google Scholar] [CrossRef]
  25. De Choudhury, M.; Pendse, S.R.; Kumar, N. Benefits and harms of large language models in digital mental health. arXiv 2023, arXiv:2311.14693. [Google Scholar] [CrossRef]
  26. Oexle, N.; Corrigan, P.W. Understanding Mental Illness Stigma Toward Persons with Multiple Stigmatized Conditions: Implications of Intersectionality Theory. Psychiatr. Serv. 2018, 69, 587–589. [Google Scholar] [CrossRef] [PubMed]
  27. Crenshaw, K. Demarginalizing the intersection of race and sex: A Black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. Univ. Chic. Leg. Forum 1989, 1989, 139–167. [Google Scholar]
  28. APA. Stress in America™ 2023: A Nation Grappling with Psychological Impacts of Collective Trauma. American Psychological Association (APA). 2023. Available online: https://www.apa.org/news/press/releases/2023/11/psychological-impacts-collective-trauma (accessed on 7 October 2025).
  29. Kessing, L.V.; Ziersen, S.C.; Caspi, A.; Moffitt, T.E.; Andersen, P.K. Lifetime Incidence of Treated Mental Health Disorders and Psychotropic Drug Prescriptions and Associated Socioeconomic Functioning. JAMA Psychiatry 2023, 80, 1000–1008. [Google Scholar] [CrossRef]
  30. Ormel, J.; Hollon, S.D.; Kessler, R.C.; Cuijpers, P.; Monroe, S.M. More treatment but no less depression: The treatment-prevalence paradox. Clin. Psychol. Rev. 2022, 91, 102111. [Google Scholar] [CrossRef]
  31. Ophir, Y.; Tikochinski, R.; Elyoseph, Z.; Efrati, Y.; Rosenberg, H. Balancing promise and concern in AI therapy: A critical perspective on early evidence from the MIT-OpenAI RCT. Front. Med. 2025, 12, 1612838. [Google Scholar] [CrossRef]
  32. Inkster, B.; Sarda, S.; Subramanian, V. An Empathy-Driven, Conversational Artificial Intelligence Agent (Wysa) for Digital Mental Well-Being: Real-World Data Evaluation Mixed-Methods Study. JMIR Mhealth Uhealth 2018, 6, e12106. [Google Scholar] [CrossRef] [PubMed]
  33. Li, H.; Zhang, R.; Lee, Y.C.; Kraut, R.E.; Mohr, D.C. Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. npj Digit. Med. 2023, 6, 236. [Google Scholar] [CrossRef]
  34. Snowden, L.R. Bias in mental health assessment and intervention: Theory and evidence. Am. J. Public Health 2003, 93, 239–243. [Google Scholar] [CrossRef]
  35. Alegría, M.; NeMoyer, A.; Falgàs Bagué, I.; Wang, Y.; Alvarez, K. Social determinants of mental health: Where we are and where we need to go. Curr. Psychiatry Rep. 2018, 20, 95. [Google Scholar] [CrossRef]
  36. Cary, M.P.; Zink, A., Jr.; Wei, S.; Olson, A.; Yan, M.; Senior, R.; Bessias, S.; Gadhoumi, K.; Jean-Pierre, G.; Wang, D.; et al. Mitigating Racial and Ethnic Bias and Advancing Health Equity in Clinical Algorithms: A Scoping Review. Health Aff. 2023, 42, 1359–1368. [Google Scholar] [CrossRef]
  37. Chen, F.; Wang, L.; Hong, J.; Jiang, J.; Zhou, L. Unmasking bias in artificial intelligence: A systematic review of bias detection and mitigation strategies in electronic health record–based models. J. Am. Med. Inform. Assoc. 2024, 31, 1172–1183. [Google Scholar] [CrossRef]
  38. Barnett, P.; Mackay, E.; Matthews, H.; Gate, R.; Greenwood, H.; Ariyo, K.; Bhui, K.; Halvorsrud, K.; Pilling, S.; Smith, S. Ethnic variations in compulsory detention under the Mental Health Act: A systematic review and meta-analysis of international data. Lancet Psychiatry 2019, 6, 305–317. [Google Scholar] [CrossRef]
  39. Knight, S.; Jarvis, G.E.; Ryder, A.G.; Lashley, M.; Rousseau, C. ‘It just feels like an invasion’: Black first-episode psychosis patients’ experiences with coercive intervention and its influence on help-seeking behaviours. J. Black Psychol. 2022, 49, 200–235. [Google Scholar] [CrossRef]
  40. Faber, S.C.; Khanna Roy, A.; Michaels, T.I.; Williams, M.T. The weaponization of medicine: Early psychosis in the Black community and the need for racially informed mental healthcare. Front. Psychiatry 2023, 14, 1098292. [Google Scholar] [CrossRef]
  41. Rai, S.; Stade, E.C.; Giorgi, S.; Francisco, A.; Ungar, L.H.; Curtis, B.; Guntuku, S.C. Key language markers of depression on social media depend on race. Proc. Natl. Acad. Sci. USA 2024, 121, e2319837121. [Google Scholar] [CrossRef]
  42. Reuters. AI Fails to Detect Depression Signs in Social Media Posts by Black Americans. Reuters Health News. 2024. Available online: https://www.reuters.com/business/healthcare-pharmaceuticals/ai-fails-detect-depression-signs-social-media-posts-by-black-americans-study-2024-03-28/ (accessed on 1 November 2025).
  43. Moudden, I.E.; Bittner, M.C.; Karpov, M.V.; Osunmakinde, I.O.; Acheamponmaa, A.; Nevels, B.J.; Mbaye, M.T.; Fields, T.L.; Jordan, K.; Bahoura, M. Predicting mental health disparities using machine learning for African Americans in Southeastern Virginia. Sci. Rep. 2025, 15, 5900. [Google Scholar] [CrossRef]
  44. Himmelstein, G.; Bates, D.; Zhou, L. Examination of stigmatizing language in the electronic health record. JAMA Netw. Open 2022, 5, e2144967. [Google Scholar] [CrossRef]
  45. Obermeyer, Z.; Powers, B.; Vogeli, C.; Mullainathan, S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019, 366, 447–453. [Google Scholar] [CrossRef]
  46. Straw, I.; Callison-Burch, C. Artificial intelligence in mental health and the biases of language-based models. PLoS ONE 2020, 15, e0240376. [Google Scholar] [CrossRef]
  47. Omiye, J.A.; Lester, J.C.; Spichak, S.; Rotemberg, V.; Daneshjou, R. Large language models propagate race-based medicine. NPJ Digit. Med. 2023, 6, 195. [Google Scholar] [CrossRef]
  48. Yang, M.; El-Attar, A.A.; Chaspari, T. Deconstructing demographic bias in speech-based machine learning models for digital health. Front. Digit. Health 2024, 6, 1351637. [Google Scholar] [CrossRef]
  49. Aggarwal, N.K.; Pieh, M.C.; Dixon, L.; Guarnaccia, P.; Alegría, M.; Lewis-Fernández, R. Clinician descriptions of communication strategies to improve treatment engagement by racial/ethnic minorities in mental health services: A systematic review. Patient Educ. Couns. 2016, 99, 198–209. [Google Scholar] [CrossRef]
  50. Mongelli, F.; Georgakopoulos, P.; Pato, M.T. Challenges and opportunities to meet the mental health needs of underserved and disenfranchised populations in the United States. Focus 2020, 18, 16–24. [Google Scholar] [CrossRef]
  51. Unertl, K.M.; Schaefbauer, C.L.; Campbell, T.R.; Senteio, C.; Siek, K.A.; Bakken, S.; Veinot, T.C. Integrating community-based participatory research and informatics approaches to improve the engagement and health of underserved populations. J. Am. Med. Inform. Assoc. 2016, 23, 60–73. [Google Scholar] [CrossRef]
  52. Tawiah, N.; Monestime, J.P. Promoting Equity in AI-Driven Mental Health Care for Marginalized Populations. Proc. AAAI Symp. Ser. 2024, 4, 323–327. [Google Scholar] [CrossRef]
  53. Zou, J.; Schiebinger, L. AI can be sexist and racist—it’s time to make it fair. Nature 2018, 559, 324–326. [Google Scholar] [CrossRef]
  54. World Health Organization. Ethics and Governance of Artificial Intelligence for Health; World Health Organization: Geneva, Switzerland, 2021; Available online: https://www.who.int/publications/i/item/9789240029200 (accessed on 23 November 2025).
  55. Peterson Health Technology Institute AI Taskforce. Adoption of Artificial Intelligence in Healthcare Delivery Systems: Early Applications and Impacts; Peterson Health Technology Institute AI Taskforce: New York, NY, USA, 2025; Available online: https://phti.org/ai-adoption-early-applications-impacts/ (accessed on 23 November 2025).
  56. Blease, C.; Rodman, A. Generative Artificial Intelligence in Mental Healthcare: An Ethical Evaluation. Curr. Treat. Options Psychiatry 2024, 12, 5. [Google Scholar] [CrossRef]
  57. Coiera, E.; Liu, S. Evidence synthesis, digital scribes, and translational challenges for artificial intelligence in healthcare. Cell Reports Med. 2022, 3, 100860. [Google Scholar] [CrossRef]
  58. Cross, S.; Bell, I.; Nicholas, J.; Valentine, L.; Mangelsdorf, S.; Baker, S.; Titov, N.; Alvarez-Jimenez, M. Use of AI in Mental Health Care: Community and Mental Health Professionals Survey. JMIR Ment. Health 2024, 11, e60589. [Google Scholar] [CrossRef]
  59. Grabb, D.; Lamparth, M.; Vasan, N. Risks from Language Models for Automated Mental Healthcare: Ethics and Structure for Implementation. arXiv 2024, arXiv:2406.11852. [Google Scholar]
  60. McCradden, M.; Hui, K.; Buchman, D.Z. Evidence, ethics and the promise of artificial intelligence in psychiatry. J. Med. Ethics 2023, 49, 573–579. [Google Scholar] [CrossRef]
  61. Warrier, U.; Warrier, A.; Khandelwal, K. Ethical considerations in the use of artificial intelligence in mental health. Egypt. J. Neurol. Psychiatry Neurosurg. 2023, 59, 139. [Google Scholar] [CrossRef]
  62. Abdulai, A.F. Is Generative AI Increasing the Risk for Technology-Mediated Trauma Among Vulnerable Populations? Nurs. Inq. 2025, 32, e12686. [Google Scholar] [CrossRef]
  63. Ahmed, S. “You end up doing the document rather than doing the doing”: Diversity, race equality and the politics of documentation. Ethn. Racial Stud. 2007, 30, 590–609. [Google Scholar] [CrossRef]
  64. Londono Tobon, A.; Flores, J.M.; Taylor, J.H.; Johnson, I.; Landeros-Weisenberger, A.; Aboiralor, O.; Avila-Quintero, V.J.; Bloch, M.H. Racial Implicit Associations in Psychiatric Diagnosis, Treatment, and Compliance Expectations. Acad. Psychiatry 2021, 45, 23–33. [Google Scholar] [CrossRef]
  65. Jacquemard, T.; Doherty, C.P.; Fitzsimons, M.B. Examination and diagnosis of electronic patient records and their associated ethics: A scoping literature review. BMC Med. Ethics 2020, 21, 76. [Google Scholar] [CrossRef]
  66. Morreim, E. Errors in the EMR: Under-recognized hazard for AI in healthcare. Houst. J. Health Law Policy 2025, 24, 127–165. [Google Scholar]
  67. Nash, E.; Perlson, J.E.; McCann, R.; Noy, G.; Lawrence, R.; Alves-Bradford, J.-M.; Akinade, T.; Perez, D.; Arbuckle, M.R. Mitigating racism and implicit bias in psychiatric notes: A quality improvement project addressing how race and ethnicity are documented. Acad. Psychiatry 2024, 48, 211–212. [Google Scholar] [CrossRef]
  68. Holman, D.; Salway, S.; Bell, A.; Beach, B.; Adebajo, A.; Ali, N.; Butt, J. Can intersectionality help with understanding and tackling health inequalities? Perspectives of professional stakeholders. Health Res. Policy Sys. 2021, 19, 97. [Google Scholar] [CrossRef]
  69. Collins, P.H. Intersectionality’s definitional dilemmas. Annu. Rev. Sociol. 2015, 41, 1–20. [Google Scholar] [CrossRef]
  70. Funer, F. Admitting the heterogeneity of social inequalities: Intersectionality as a (self-)critical framework and tool within mental health care. Philos. Ethics Humanit. Med. PEHM 2023, 18, 21. [Google Scholar] [CrossRef]
  71. Daly, C.; Ji, E. AI and Mental Health—An Intersectional Analysis. Health Action Research Group. Available online: https://www.healthactionresearch.org.uk/selected-blogs/the-intersectionality-of-ai-an/ (accessed on 10 September 2025).
  72. Kunstman, J.W.; Ogungbadero, T.; Deska, J.C.; Bernstein, M.J.; Smith, A.R.; Hugenberg, K. Race-based biases in psychological distress and treatment judgments. PLoS ONE 2023, 18, e0293078. [Google Scholar] [CrossRef]
  73. Aleem, M.; Imama, Z.; Naseem, M. Towards Culturally Adaptive Large Language Models in Mental Health: Using ChatGPT as a Case Study. In Proceedings of the 27th ACM SIGCHI Conference on Computer-Supported Cooperative Work & Social Computing, San José, Costa Rica, 9–13 November 2024; pp. 240–247. [Google Scholar] [CrossRef]
  74. Noe-Bustamante, L.; Hugo Lopez, M.; Krogstad, J. US Hispanic Population Surpassed 60 Million in 2019, but Growth Has Slowed; Pew Research Center: Washington, DC, USA, 2020; Available online: https://www.pewresearch.org/short-reads/2020/07/07/u-s-hispanic-population-surpassed-60-million-in-2019-but-growth-has-slowed/ (accessed on 22 November 2025).
  75. Pro, G.; Brown, C.; Rojo, M.; Patel, J.; Flax, C.; Haynes, T. Downward National Trends in Mental Health Treatment Offered in Spanish: State Differences by Proportion of Hispanic Residents. Psychiatr. Serv. 2022, 73, 1232–1238. [Google Scholar] [CrossRef]
  76. Breslau, J.; Cefalu, M.; Wong, E.C.; Burnam, M.A.; Hunter, G.P.; Florez, K.R.; Collins, R.L. Racial/ethnic differences in perception of need for mental health treatment in a US national sample. Soc. Psychiatry Psychiatr. Epidemiol. 2017, 52, 929–937. [Google Scholar] [CrossRef]
  77. O’Keefe, V.M.; Cwik, M.F.; Haroz, E.E.; Barlow, A. Increasing culturally responsive care and mental health equity with indigenous community mental health workers. Psychol. Serv. 2021, 18, 84–92. [Google Scholar] [CrossRef]
  78. Dinesh, D.N.; Rao, M.N.; Sinha, C. Language adaptations of mental health interventions: User interaction comparisons with an AI-enabled conversational agent (Wysa) in English and Spanish. Digit. Health 2024, 10, 20552076241255616. [Google Scholar] [CrossRef]
  79. Ospina-Pinillos, L.; Davenport, T.; Mendoza Diaz, A.; Navarro-Mancilla, A.; Scott, E.M.; Hickie, I.B. Using Participatory Design Methodologies to Co-Design and Culturally Adapt the Spanish Version of the Mental Health eClinic: Qualitative Study. J. Med. Internet Res. 2019, 21, e14127. [Google Scholar] [CrossRef]
  80. Fitzpatrick, K.K.; Darcy, A.; Vierhile, M. Delivering Cognitive Behavior Therapy to Young Adults with Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial. JMIR Ment. Health 2017, 4, e19. [Google Scholar] [CrossRef]
  81. Ma, Z.; Mei, Y.; Su, Z. Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support. In Proceedings of the AMIA Annual Symposium Proceedings, San Francisco, CA, USA, 9–13 November 2024; Volume 2023, pp. 1105–1114. [Google Scholar]
  82. Henkel, T.; Linn, A.J.; van der Goot, M.J. Understanding the Intention to Use Mental Health Chatbots Among LGBTQIA+ Individuals: Testing and Extending the UTAUT. In Chatbot Research and Design; Følstad, A., Ed.; Conversations 2022; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2023; Volume 13815. [Google Scholar] [CrossRef]
  83. Meyer, I.H. Prejudice, social stress, and mental health in lesbian, gay, and bisexual populations: Conceptual issues and research evidence. Psychol. Bull. 2003, 129, 674–697. [Google Scholar] [CrossRef]
  84. Valentine, S.E.; Shipherd, J.C. A systematic review of social stress and mental health among transgender and gender non-conforming people in the United States. Clin. Psychol. Rev. 2018, 66, 24–38. [Google Scholar] [CrossRef]
  85. Bragazzi, N.L.; Crapanzano, A.; Converti, M.; Zerbetto, R.; Khamisy-Farah, R. The Impact of Generative Conversational Artificial Intelligence on the Lesbian, Gay, Bisexual, Transgender, and Queer Community: Scoping Review. J. Med. Internet Res. 2023, 25, e52091. [Google Scholar] [CrossRef]
  86. Veale, J.F.; Peter, T.; Travers, R.; Saewyc, E.M. Enacted Stigma, Mental Health, and Protective Factors Among Transgender Youth in Canada. Transgender Health 2017, 2, 207–216. [Google Scholar] [CrossRef]
  87. Felkner, V.; Chang, H.C.H.; Jang, E.; May, J. Winoqueer: A community-in-the-loop benchmark for anti-lgbtq+ bias in large language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, Toronto, ON, Canada, 9–14 July 2023; Volume 1, pp. 9126–9140. [Google Scholar] [CrossRef]
  88. Gower, A.L.; Rider, G.N.; Del Río-González, A.M.; Erickson, P.J.; Thomas, D.; Russell, S.T.; Watson, R.J.; Eisenberg, M.E. Application of an intersectional lens to bias-based bullying among LGBTQ+ youth of color in the United States. Stigma Health 2023, 8, 363–371. [Google Scholar] [CrossRef]
  89. Legault, M.; Catala, A.; Poirier, P. Breaking the stigma around autism: Moving away from neuronormativity using epistemic justice and 4E cognition. Synthese 2024, 204, 84. [Google Scholar] [CrossRef]
  90. Cleary, M.; West, S.; Kornhaber, R.; Hungerford, C. Autism, Discrimination and Masking: Disrupting a Recipe for Trauma. Issues Ment. Health Nurs. 2023, 44, 799–808. [Google Scholar] [CrossRef]
  91. National Research Council; Committee on Educational Interventions for Children with Autism. Educating Children with Autism; Lord, C., McGee, J.P., Eds.; National Academies Press: Washington, DC, USA, 2001. [Google Scholar]
  92. Volkmar, F.R.; Lord, C.; Bailey, A.; Schultz, R.T.; Klin, A. Autism and pervasive developmental disorders. J. Child Psychol. Psychiatry Allied Discip. 2004, 45, 135–170. [Google Scholar] [CrossRef]
  93. Helt, M.; Kelley, E.; Kinsbourne, M.; Pandey, J.; Boorstein, H.; Herbert, M.; Fein, D. Can children with autism recover? If so, how? Neuropsychol. Rev. 2008, 18, 339–366. [Google Scholar] [CrossRef]
  94. Rogers, S.J.; Lewis, H. An effective day treatment model for young children with pervasive developmental disorders. J. Am. Acad. Child Adolesc. Psychiatry 1989, 28, 207–214. [Google Scholar] [CrossRef]
  95. Reichow, B.; Wolery, M. Comprehensive synthesis of early intensive behavioral interventions for young children with autism based on the UCLA young autism project model. J. Autism Dev. Disord. 2009, 39, 23–41. [Google Scholar] [CrossRef]
  96. Burke, M.M.; Taylor, J.L. To better meet the needs of autistic people, we need to rethink how we measure services. Autism Int. J. Res. Pract. 2023, 27, 873–875. [Google Scholar] [CrossRef]
  97. Rizvi, N.; Wu, W.; Bolds, M.; Mondal, R.; Begel, A.; Munyaka, I. Are robots ready to deliver autism inclusion? a critical review. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’ 24), ACM, Honolulu, HI, USA, 11–16 May 2024. [Google Scholar] [CrossRef]
  98. Williams, R. I, Misfit: Empty Fortresses, Social Robots, and Peculiar Relations in Autism Research. Techné Res. Philos. Technol. 2021, 25, 451–478. [Google Scholar] [CrossRef]
  99. Brandsen, S.; Chandrasekhar, T.; Franz, L.; Grapel, J.; Dawson, G.; Carlson, D. Prevalence of bias against neurodivergence-related terms in artificial intelligence language models. Autism Res. Off. J. Int. Soc. Autism Res. 2024, 17, 234–248. [Google Scholar] [CrossRef]
  100. Iannone, A.; Giansanti, D. Breaking Barriers-The Intersection of AI and Assistive Technology in Autism Care: A Narrative Review. J. Pers. Med. 2023, 14, 41. [Google Scholar] [CrossRef]
  101. Froio, N. Who Fills the Gaps When BIPOC Mental Health Needs Are Overlooked? Teen Vogue. 2021. Available online: https://prismreports.org/2021/08/04/who-fills-the-gaps-when-bipoc-mental-health-needs-are-overlooked/ (accessed on 22 November 2025).
  102. Koutsouleris, N.; Hauser, T.U.; Skvortsova, V.; De Choudhury, M. From promise to practice: Towards the realisation of AI-informed mental health care. Lancet. Digit. Health 2022, 4, e829–e840. [Google Scholar] [CrossRef]
  103. Donia, J.; Shaw, J.A. Co-design and ethical artificial intelligence for health: An agenda for critical research and practice. Big Data Soc. 2021, 8, 20539517211065248. [Google Scholar] [CrossRef]
  104. Goyal, S.; Rastogi, E.; Rajagopal, S.P.; Yuan, D.; Zhao, F.; Chintagunta, J.; Naik, G.; Ward, J. HealAI: A healthcare LLM for effective medical documentation. In Proceedings of the 17th ACM International Conference on Web Search and Data Mining, Mérida, Mexico, 4–8 March 2024; pp. 1167–1168. [Google Scholar] [CrossRef]
  105. Lawrence, H.R.; Schneider, R.A.; Rubin, S.B.; Matarić, M.J.; McDuff, D.J.; Jones Bell, M. The Opportunities and Risks of Large Language Models in Mental Health. JMIR Ment. Health 2024, 11, e59479. [Google Scholar] [CrossRef]
  106. Rice, B.T.; Rasmus, S.; Onders, R.; Thomas, T.; Day, G.; Wood, J.; Britton, C.; Hernandez-Boussard, T.; Hiratsuka, V. Community-engaged artificial intelligence: An upstream, participatory design, development, testing, validation, use and monitoring framework for artificial intelligence and machine learning models in the Alaska Tribal Health System. Front. Artif. Intell. 2025, 8, 1568886. [Google Scholar] [CrossRef]
  107. Biro, J.M.; Handley, J.L.; Mickler, J.; Reddy, S.; Kottamasu, V.; Ratwani, R.M.; Cobb, N.K. The value of simulation testing for the evaluation of ambient digital scribes: A case report. J. Am. Med. Inform. Assoc. 2025, 32, 928–931. [Google Scholar] [CrossRef]
  108. Heinz, M.V.; Bhattacharya, S.; Trudeau, B.; Quist, R.; Song, S.H.; Lee, C.M.; Jacobson, N.C. Testing domain knowledge and risk of bias of a large-scale general artificial intelligence model in mental health. Digit. Health 2023, 9, 20552076231170499. [Google Scholar] [CrossRef]
  109. Seo, J.; Choi, D.; Kim, T.; Cha, W.C.; Kim, M.; Yoo, H.; Oh, N.; Yi, Y.; Lee, K.H.; Choi, E. Evaluation Framework of Large Language Models in Medical Documentation: Development and Usability Study. J. Med. Internet Res. 2024, 26, e58329. [Google Scholar] [CrossRef]
  110. Altschuler, S.; Huntington, I.; Antoniak, M.; Klein, L.F. Clinician as editor: Notes in the era of AI scribes. Lancet 2024, 404, 2154–2155. [Google Scholar] [CrossRef]
  111. Eng, K.; Johnston, K.; Cerda, I.; Kadakia, K.; Mosier-Mills, A.; Vanka, A. A Patient-Centered Documentation Skills Curriculum for Preclerkship Medical Students in an Open Notes Era. Mededportal 2024, 20, 11392. [Google Scholar] [CrossRef]
  112. Lam, B.D.; Bourgeois, F.; Dong, Z.J.; Bell, S.K. Speaking up about patient-perceived serious visit note errors: Patient and family experiences and recommendations. J. Am. Med. Inform. Assoc. 2021, 28, 685–694. [Google Scholar] [CrossRef]
  113. Lear, R.; Freise, L.; Kybert, M.; Darzi, A.; Neves, A.L.; Mayer, E.K. Patients’ Willingness and Ability to Identify and Respond to Errors in Their Personal Health Records: Mixed Methods Analysis of Cross-Sectional Survey Data. J. Med. Internet Res. 2022, 24, e37226. [Google Scholar] [CrossRef]
  114. Freise, L.; Neves, A.L.; Flott, K.; Harrison, P.; Kelly, J.; Darzi, A.; Mayer, E.K. Assessment of Patients’ Ability to Review Electronic Health Record Information to Identify Potential Errors: Cross-sectional Web-Based Survey. JMIR Form. Res. 2021, 5, e19074. [Google Scholar] [CrossRef]
  115. Ciston, S. Intersectional AI Is Essential: Polyvocal, Multimodal, Experimental Methods to Save AI. J. Sci. Technol. Arts 2019, 11, 3–8. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
