Article

Communication Errors in Human–Chatbot Interactions: A Case Study of ChatGPT Arabic Mental Health Support Inquiries

by Ghuzayyil Mohammed Al-Otaibi *, Hind M. Alotaibi * and Sami Sulaiman Alsalmi
Department of English, College of Language Sciences, King Saud University, P.O. Box 2460, Riyadh 11451, Saudi Arabia
* Authors to whom correspondence should be addressed.
Behav. Sci. 2025, 15(8), 1119; https://doi.org/10.3390/bs15081119
Submission received: 19 May 2025 / Revised: 8 July 2025 / Accepted: 16 July 2025 / Published: 18 August 2025
(This article belongs to the Special Issue Digital Interventions for Addiction and Mental Health)

Abstract

Large language models (LLMs) are now widely used across diverse settings. Yet, given the complexity of these large-scale artificial intelligence (AI) systems, how to leverage their capabilities effectively remains underexplored. In this study, we examined the types of communication errors that occur in Arabic-language interactions between humans and ChatGPT-3.5. A corpus of six Arabic consultations was collected from an online mental health support forum. For each consultation, the researchers submitted the user's Arabic queries to ChatGPT-3.5 and analyzed the system's responses. The study identified 102 communication errors, most of which were grammatical errors and repetitions. Other errors involved contradictions, ambiguous language, ignored questions, and a lack of sociality. By examining the patterns and types of communication errors in ChatGPT's responses, the study is expected to provide insights into the challenges and limitations of current conversational AI systems, particularly in sensitive domains such as mental health support.
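For readers interested in the querying step, the sketch below shows one way forum consultations could be submitted to the model and the resulting responses collected for manual error annotation. The abstract states only that queries were provided to ChatGPT-3.5, so the OpenAI API call, the model name "gpt-3.5-turbo", and the sample Arabic queries here are illustrative assumptions, not the authors' actual procedure.

# Illustrative sketch only: assumes programmatic access via the OpenAI Python
# client (v1.x) and a hypothetical list of forum queries; the study itself does
# not specify how queries were entered into ChatGPT-3.5.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical Arabic mental health queries standing in for forum consultations.
queries = [
    "أشعر بالقلق المستمر ولا أستطيع النوم، ماذا أفعل؟",
    "كيف أتعامل مع نوبات الحزن المتكررة؟",
]

# Error categories named in the abstract; counts are assigned manually by the
# analyst after reading each response.
ERROR_TYPES = ["grammatical", "repetition", "contradiction",
               "ambiguity", "ignored_question", "lack_of_sociality"]

responses = []
for q in queries:
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": q}],
    )
    responses.append(completion.choices[0].message.content)

# Manual annotation step: the analyst tags each response with observed errors,
# e.g. error_counts.update(["grammatical", "repetition"]) after reviewing responses[0].
error_counts = Counter()

The design point is simply that response collection can be automated while the error classification itself remains a human judgment, which matches the case-study character of the paper.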
Keywords: mental health support; communication errors; ChatGPT; Arabic; artificial intelligence; case study

