Abstract
Digital mental health resources have expanded rapidly in the wake of the COVID-19 pandemic, offering new opportunities to improve access to mental healthcare through technologies such as AI chatbots, mobile apps, and online platforms. Despite this growth, significant challenges persist, including low user retention, limited digital literacy, unclear privacy regulations, and insufficient evidence of clinical effectiveness and safety. AI chatbots, which act as virtual therapists or companions, can provide counseling and personalized support but raise concerns about user dependence, emotional outcomes, privacy, ethical risks, and bias. User experiences are mixed: while some users report enhanced social health and reduced loneliness, others question these tools' safety, crisis-response capability, and overall reliability, particularly in unregulated settings. Vulnerable and underserved populations may face heightened risks, highlighting the need to engage individuals with lived experience in defining safe and supportive interactions. This review critically examines the empirical and grey literature on AI chatbot use in mental healthcare, evaluating benefits and limitations in terms of access, user engagement, risk management, and clinical integration. Key findings indicate that AI chatbots can complement traditional care and bridge service gaps; however, current evidence is constrained by short-term studies and a lack of diverse, long-term outcome data. The review underscores the importance of transparent operation, ethical governance, and hybrid care models that combine technological and human oversight. Recommendations include stakeholder-driven deployment approaches, rigorous evaluation standards, and ongoing real-world validation to ensure equitable, safe, and effective use of AI chatbots in mental healthcare.