AI Chatbots in Digital Mental Health
Abstract
1. Introduction
- The development of AI chatbots has been claimed to herald a new era, promising significant advances in how technology is incorporated into people’s lives and interactions. Is this likely to be the case, and if so, where will the impacts be most pervasive and effective?
- Can a balance be struck in the deployment of these technologies so that potential harms are minimized while potential benefits are maximized and shared?
- A growing body of evidence shows that the design and implementation of many AI applications and their underlying algorithms incorporate bias and prejudice. How can this be countered and corrected?
2. Methods
- Inclusion criteria:
- Studies that have been published in peer-reviewed journals, media articles or conference proceedings.
- Studies that have been published in the English language.
- Studies that have been published between 2010 and 2023.
- Studies that have investigated the use of AI chatbots, generative artificial intelligence or conversational agents in digital mental health or mental health care.
- Studies that have reported on the effectiveness of AI chatbots, generative artificial intelligence or conversational agents in digital mental health or mental health care.
- Exclusion criteria:
- Studies that are not published in peer-reviewed journals, media articles or conference proceedings.
- Studies that are not published in the English language.
- Studies that are published before 2010 or after 2023.
- Studies that do not investigate the use of AI chatbots, generative artificial intelligence or conversational agents in digital mental health or mental health care.
- Studies that do not report on the effectiveness of AI chatbots, generative artificial intelligence or conversational agents in digital mental health or mental health care. (These criteria are illustrated as a simple screening sketch after this list.)
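As an illustration of how the eligibility criteria above translate into a screening step, the following minimal sketch applies them to hypothetical study records. The field names, values and records are assumptions made for the example; they are not the review's actual screening tooling.

```python
# Minimal sketch: apply the stated inclusion/exclusion criteria as a filter
# over hypothetical study records. All field names and records are
# illustrative placeholders, not the review's real screening pipeline.

ELIGIBLE_VENUES = {"peer-reviewed journal", "conference proceedings", "media article"}
ELIGIBLE_TOPICS = {"ai chatbot", "generative ai", "conversational agent"}

def is_eligible(record: dict) -> bool:
    """Return True only if a record meets every inclusion criterion."""
    return (
        record.get("venue_type") in ELIGIBLE_VENUES
        and record.get("language") == "English"
        and 2010 <= record.get("year", 0) <= 2023
        and bool(ELIGIBLE_TOPICS & set(record.get("topics", [])))
        and record.get("reports_effectiveness", False)
    )

records = [
    {"venue_type": "peer-reviewed journal", "language": "English", "year": 2021,
     "topics": ["ai chatbot"], "reports_effectiveness": True},
    {"venue_type": "preprint", "language": "English", "year": 2022,
     "topics": ["conversational agent"], "reports_effectiveness": True},
]
included = [r for r in records if is_eligible(r)]
print(f"Included {len(included)} of {len(records)} records")
```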
3. Results
3.1. The Impact of AI Chatbots on Technology Integration
- Conduct qualitative studies using AI chatbots to demonstrate how they assist with accessibility, engagement and effectiveness through (1) identifying user needs, (2) understanding barriers to their use, (3) evaluating user experience and AI chatbot impact and (4) integrating human–AI approaches to overcome problem areas.
- Contribute to the empirical evidence with longitudinal studies and RCTs to establish which mental health conditions and populations AI chatbots may be recommended for.
- Determine whether attrition can be predicted in practice, identifying individuals at high risk of dropping out by applying advanced machine learning models (e.g., deep neural networks) to rich feature sets (e.g., baseline user characteristics, self-reported user context and AI chatbot feedback, passively detected user behaviour and users’ clinical functioning), as illustrated in the sketch following this list.
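To make the attrition-prediction recommendation concrete, the sketch below trains a small neural-network classifier on synthetic tabular features and flags users whose predicted dropout risk exceeds a threshold. The features, labels, model choice (scikit-learn's MLPClassifier standing in for the deep neural networks mentioned above) and the 0.7 threshold are assumptions for illustration only; the review does not prescribe this specific pipeline or library.

```python
# Minimal sketch of dropout (attrition) risk prediction from tabular user
# features. Synthetic data and feature names are illustrative only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: baseline symptom score, messages per week,
# days since last session, self-reported mood, chatbot feedback rating.
X = rng.normal(size=(n, 5))
# Synthetic label: higher dropout risk with low engagement (feature 1)
# and long gaps since the last session (feature 2).
logits = -1.2 * X[:, 1] + 0.9 * X[:, 2] + rng.normal(scale=0.5, size=n)
y = (logits > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0))
model.fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]   # estimated probability of dropping out
print("AUC:", round(roc_auc_score(y_test, risk), 3))
print("High-risk users flagged:", int((risk > 0.7).sum()))
```

In practice, such a model would need privacy-preserving data handling, clinical validation and fairness auditing before it informed any outreach to users.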
3.2. The Balance between the Benefits and Harms of AI Chatbots
- Invest in research to evaluate the efficacy and potential harms of AI applications, and develop mechanisms to monitor and audit AI systems for unusual or suspicious activity.
- Implement rigorous safety measures, robust regulations and collaborative standards to ensure the responsible use of AI technologies.
- Validate an HAI model combining AI chatbots with human experts in research, practice and policy to optimise mental health care assistance (a minimal escalation-and-audit sketch follows this list).
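One way to picture the HAI model and the monitoring recommendation together is a thin wrapper around a chatbot that audit-logs every exchange and hands risky conversations to a human expert. The sketch below is a minimal illustration under assumed interfaces (`chatbot_reply`, `notify_clinician`) and an illustrative keyword screen; it is not a validated safety protocol or any specific product's API.

```python
# Minimal sketch of a human-AI (HAI) escalation wrapper: every exchange is
# audit-logged, and messages that trip a simple risk screen are routed to a
# human expert instead of being answered automatically. Risk terms,
# thresholds and interfaces are hypothetical placeholders.
import datetime
import json

RISK_TERMS = {"suicide", "kill myself", "self-harm", "overdose"}   # illustrative only

def screen_for_risk(message: str) -> bool:
    text = message.lower()
    return any(term in text for term in RISK_TERMS)

def audit_log(event: dict, path: str = "hai_audit.log") -> None:
    event["timestamp"] = datetime.datetime.now(datetime.timezone.utc).isoformat()
    with open(path, "a") as f:
        f.write(json.dumps(event) + "\n")

def handle_message(user_id: str, message: str, chatbot_reply, notify_clinician) -> str:
    """Route a user message: escalate to a human if risk is detected."""
    if screen_for_risk(message):
        notify_clinician(user_id, message)              # human expert takes over
        audit_log({"user": user_id, "action": "escalated"})
        return "I'm connecting you with a person who can help right now."
    reply = chatbot_reply(message)                      # AI handles routine support
    audit_log({"user": user_id, "action": "ai_reply"})
    return reply

# Example wiring with stand-in callables:
print(handle_message("u1", "I feel a bit flat today",
                     chatbot_reply=lambda m: "Thanks for sharing. What's been on your mind?",
                     notify_clinician=lambda u, m: None))
```

Keyword matching is shown only because it is easy to inspect; real deployments would pair validated risk-detection models with clear escalation pathways and human oversight.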
3.3. The Mitigation of Bias and Prejudice in AI Applications
- Vulnerable people need better-informed guidance on how to self-manage their mental health with the assistance of AI chatbots and on how to connect with resources and treatments.
- Social media mental health and crisis resource panels may be enhanced by linking to AI chatbots that provide vetted digital mental health and crisis services, or referrals where necessary.
- HAI mental health strategies incorporating SVMC may be explored to cautiously navigate towards safer, more responsible social media with humane, fair and explainable system recommendations (a hypothetical re-ranking sketch follows this list).
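As a rough illustration of what humane, fair and explainable recommendations linked to vetted resources could look like, the sketch below post-filters a hypothetical feed: flagged items are demoted with an attached explanation rather than silently removed, and a pinned card points at-risk users to vetted crisis services. The item fields, scores and resources are invented for the example, and the flagging step stands in for whatever vetting or moderation signal is actually available.

```python
# Minimal sketch of a "humane, fair and explainable" feed post-filter:
# flagged items are demoted (not silently removed) and carry an explanation,
# and a vetted crisis-resource card is pinned for at-risk users.
# All labels, scores and resources below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    engagement_score: float      # what an engagement-driven recommender ranks by
    harm_flag: bool = False      # e.g., set by content moderation or safety vetting
    explanation: str = ""

CRISIS_CARD = Item("Need support now? Vetted crisis services and helplines", 0.0)

def humane_rerank(feed: list[Item], user_at_risk: bool) -> list[Item]:
    """Demote flagged items with an explanation; pin vetted help for at-risk users."""
    for item in feed:
        if item.harm_flag:
            item.engagement_score *= 0.1
            item.explanation = "Shown lower: flagged as potentially distressing."
    ranked = sorted(feed, key=lambda i: i.engagement_score, reverse=True)
    return ([CRISIS_CARD] + ranked) if user_at_risk else ranked

feed = [Item("Viral challenge clip", 0.9, harm_flag=True),
        Item("Sleep hygiene explainer", 0.6)]
for item in humane_rerank(feed, user_at_risk=True):
    print(f"{item.title}  {item.explanation}")
```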
4. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
ACM | Association for Computing Machinery
AI | artificial intelligence
CBT | cognitive behavioural therapy
DMHIs | digital mental health interventions
EU | European Union
GDP | gross domestic product
GPT | Generative Pre-trained Transformer
HAI | human–artificial intelligence
HCI | human–computer interaction
IEEE | Institute of Electrical and Electronics Engineers
ML | machine learning
NLP | natural language processing
RCT | randomized controlled trial
UK | United Kingdom
US | United States
WHO | World Health Organization
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).