Artificial intelligence (AI)-powered mental health chatbots have rapidly emerged as a scalable means of psychological support, offering novel solutions through natural language processing (NLP), mobile accessibility, and generative AI. This systematic literature review (SLR), conducted according to PRISMA 2020 guidelines, collates evidence from 25 peer-reviewed studies published between 2020 and 2025 and examines therapeutic techniques, cultural adaptation, technical design, system evaluation, and ethics. Studies were retrieved from seven academic databases, screened against explicit inclusion criteria, and thematically analyzed. Cognitive behavioral therapy (CBT) was the most common therapeutic model, featured in 15 systems and frequently combined with journaling, mindfulness, and behavioral activation; emotion-based approaches followed, featured in seven systems. Innovative techniques such as GPT-based emotional processing, multimodal interaction (e.g., AR/VR), and LSTM-SVM classification models (reported accuracies above 94%) demonstrated greater conversational flexibility but lacked long-term clinical validation. Cultural adaptability varied: effective localization was observed in systems such as XiaoE, okBot, and Luda Lee, whereas Western-oriented systems showed limited contextual adaptability. Accessibility and inclusivity remain major challenges, particularly in low-resource settings, where gaps in digital literacy, multilingual support, and infrastructure persist. Ethical aspects, including data privacy, explainability, and crisis-response planning, were under-evidenced in most deployments. This review differs from previous ones in its focus on cultural adaptability, ethics, and hybrid integration with public health systems, and it proposes a comprehensive approach for deploying AI mental health chatbots safely, effectively, and inclusively. Central to this review is symmetry, emphasized as a fundamental principle incorporated into frameworks for cultural adaptation, decision-making processes, and therapeutic structures. In particular, symmetry ensures equitable cultural responsiveness, balanced user–chatbot interaction, and ethically aligned AI systems, all of which enhance the efficacy and dependability of mental health services. Recognizing these benefits, the review further underscores the need for more rigorous academic research into the development, deployment, and evaluation of mental health chatbots and apps, particularly regarding cultural sensitivity, ethical accountability, and long-term clinical outcomes.
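To make the LSTM-SVM pattern cited above concrete, the minimal sketch below shows one plausible arrangement, assuming the common hybrid design in which an LSTM encodes user utterances into fixed-length feature vectors and an SVM performs the final classification. All class names, hyperparameters, and data here are illustrative assumptions rather than details drawn from any reviewed system, and the toy example makes no claim to reproduce the reported 94% accuracy.

```python
# Hypothetical LSTM-SVM hybrid: an LSTM encodes token sequences into
# fixed-length features, and an SVM classifies the resulting vectors.
# All names, sizes, and data below are illustrative assumptions.
import torch
import torch.nn as nn
from sklearn.svm import SVC

class LSTMEncoder(nn.Module):
    """Encode a batch of token-ID sequences into fixed-length feature vectors."""
    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):
        embedded = self.embed(token_ids)      # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(embedded)     # h_n: (1, batch, hidden_dim)
        return h_n.squeeze(0)                 # (batch, hidden_dim)

torch.manual_seed(0)
# Toy data: 32 utterances of 20 tokens each, with binary labels
# (e.g., distress vs. neutral) standing in for a real annotated corpus.
token_ids = torch.randint(0, 5000, (32, 20))
labels = torch.randint(0, 2, (32,)).numpy()

encoder = LSTMEncoder()
with torch.no_grad():                         # feature extraction only; no training here
    features = encoder(token_ids).numpy()

# SVM classifier fitted on the LSTM-derived features.
clf = SVC(kernel="rbf")
clf.fit(features, labels)
print("Train accuracy:", clf.score(features, labels))
```

In practice, such a pipeline would train the LSTM (or fine-tune pretrained embeddings) before freezing it as a feature extractor; the split of representation learning and margin-based classification is the defining trait of the LSTM-SVM hybrids the review refers to.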