Search Results (52)

Search Parameters:
Keywords = emotional chatbots

9 pages, 213 KiB  
Review
Bridging the Gap: The Role of AI in Enhancing Psychological Well-Being Among Older Adults
by Jaewon Lee and Jennifer Allen
Psychol. Int. 2025, 7(3), 68; https://doi.org/10.3390/psycholint7030068 - 4 Aug 2025
Viewed by 132
Abstract
As the global population ages, older adults face growing psychological challenges such as loneliness, cognitive decline, and loss of social roles. Meanwhile, artificial intelligence (AI) technologies, including chatbots and voice-based systems, offer new pathways to emotional support and mental stimulation. However, older adults often encounter significant barriers in accessing and effectively using AI tools. This review examines the current landscape of AI applications aimed at enhancing psychological well-being among older adults, identifies key challenges such as digital literacy and usability, and highlights design and training strategies to bridge the digital divide. Using socioemotional selectivity theory and technology acceptance models as guiding frameworks, we argue that AI—especially in the form of conversational agents—holds transformative potential in reducing isolation and promoting emotional resilience in aging populations. We conclude with recommendations for inclusive design, participatory development, and future interdisciplinary research.
(This article belongs to the Section Neuropsychology, Clinical Psychology, and Mental Health)
14 pages, 283 KiB  
Article
Teens, Tech, and Talk: Adolescents’ Use of and Emotional Reactions to Snapchat’s My AI Chatbot
by Gaëlle Vanhoffelen, Laura Vandenbosch and Lara Schreurs
Behav. Sci. 2025, 15(8), 1037; https://doi.org/10.3390/bs15081037 - 30 Jul 2025
Viewed by 268
Abstract
Due to technological advancements such as generative artificial intelligence (AI) and large language models, chatbots enable increasingly human-like, real-time conversations through text (e.g., OpenAI’s ChatGPT) and voice (e.g., Amazon’s Alexa). One AI chatbot that is specifically designed to meet the social-supportive needs of youth is Snapchat’s My AI. Given its increasing popularity among adolescents, the present study investigated whether adolescents’ likelihood of using My AI, as well as their positive or negative emotional experiences from interacting with the chatbot, is related to socio-demographic factors (i.e., gender, age, and socioeconomic status (SES)). A cross-sectional study was conducted among 303 adolescents (64.1% girls, 35.9% boys, 1.0% other, 0.7% preferred not to say their gender; Mage = 15.89, SDage = 1.69). The findings revealed that younger adolescents were more likely to use My AI and experienced more positive emotions from these interactions than older adolescents. No significant relationships were found for gender or SES. These results highlight the potential for age to play a critical role in shaping adolescents’ engagement with AI chatbots on social media and their emotional outcomes from such interactions, underscoring the need to consider developmental factors in AI design and policy.
30 pages, 936 KiB  
Systematic Review
Symmetric Therapeutic Frameworks and Ethical Dimensions in AI-Based Mental Health Chatbots (2020–2025): A Systematic Review of Design Patterns, Cultural Balance, and Structural Symmetry
by Ali Algumaei, Noorayisahbe Mohd Yaacob, Mohamed Doheir, Mohammed Nasser Al-Andoli and Mohammed Algumaie
Symmetry 2025, 17(7), 1082; https://doi.org/10.3390/sym17071082 - 7 Jul 2025
Viewed by 1307
Abstract
Artificial intelligence (AI)-powered mental health chatbots have evolved quickly as scalable means for psychological support, bringing novel solutions through natural language processing (NLP), mobile accessibility, and generative AI. This systematic literature review (SLR), following PRISMA 2020 guidelines, collates evidence from 25 published, peer-reviewed studies between 2020 and 2025 and reviews therapeutic techniques, cultural adaptation, technical design, system assessment, and ethics. Studies were extracted from seven academic databases, screened against specific inclusion criteria, and thematically analyzed. Cognitive behavioral therapy (CBT) was the most common therapeutic model, featured in 15 systems, frequently being used jointly with journaling, mindfulness, and behavioral activation, followed by emotion-based approaches, which were featured in seven systems. Innovative techniques like GPT-based emotional processing, multimodal interaction (e.g., AR/VR), and LSTM-SVM classification models (greater than 94% accuracy) showed increased conversation flexibility but missed long-term clinical validation. Cultural adaptability varied: effective localization was seen in systems like XiaoE, okBot, and Luda Lee, while Western-oriented systems had restricted contextual adaptability. Accessibility and inclusivity remain major challenges, especially within low-resource settings, where digital literacy, support for multiple languages, and infrastructure deficits persist. Ethical aspects—data privacy, explainability, and crisis plans—were under-evidenced for most deployments. This review differs from previous ones in its focus on cultural adaptability, ethics, and hybrid public health incorporation, and it proposes a comprehensive approach for deploying AI mental health chatbots safely, effectively, and inclusively. Central to this review, symmetry is emphasized as a fundamental idea incorporated into frameworks for cultural adaptation, decision-making processes, and therapeutic structures. In particular, symmetry ensures equal cultural responsiveness, balanced user–chatbot interactions, and ethically aligned AI systems, all of which enhance the efficacy and dependability of mental health services. Recognizing these benefits, the review further underscores the necessity for more rigorous academic research into the development, deployment, and evaluation of mental health chatbots and apps, particularly to address cultural sensitivity, ethical accountability, and long-term clinical outcomes.

25 pages, 1523 KiB  
Systematic Review
AI-Enabled Mobile Food-Ordering Apps and Customer Experience: A Systematic Review and Future Research Agenda
by Mohamad Fouad Shorbaji, Ali Abdallah Alalwan and Raed Algharabat
J. Theor. Appl. Electron. Commer. Res. 2025, 20(3), 156; https://doi.org/10.3390/jtaer20030156 - 1 Jul 2025
Viewed by 1380
Abstract
Artificial intelligence (AI) is reshaping mobile food-ordering apps, yet its impact on customer experience (CX) has not been fully mapped. Following systematic review guidelines (PRISMA 2020), a search of SCOPUS, Web of Science, ScienceDirect, and Google Scholar in March 2025 identified 55 studies published between 2022 and 2025. Since 2022, research has expanded from intention-based studies to include real-time app interactions and live app experiments. This shift has helped to identify five key CX dimensions: (1) instrumental usability: how quickly and smoothly users can order; (2) personalization value: AI-generated menus and meal suggestions; (3) affective engagement: emotional appeal of the app interface; (4) data trust and procedural fairness: users’ confidence in fair pricing and responsible data handling; (5) social co-experience: sharing orders and interacting through live reviews. Studies have shown that personalized recommendations and chatbots enhance relevance and enjoyment, while unclear surge pricing, repetitive menus, and algorithmic anxiety reduce trust and satisfaction. Given the limitations of this study, including its reliance on English-only sources, a cross-sectional design, and limited cultural representation, future research should investigate long-term usage patterns across diverse markets. This approach would help uncover nutritional biases, cultural variations, and sustained effects on customer experience.

15 pages, 1255 KiB  
Article
Do Chatbots Exhibit Personality Traits? A Comparison of ChatGPT and Gemini Through Self-Assessment
by W. Wiktor Jedrzejczak and Joanna Kobosko
Information 2025, 16(7), 523; https://doi.org/10.3390/info16070523 - 23 Jun 2025
Viewed by 813
Abstract
The underlying design of large language models (LLMs), trained on vast amounts of human texts, implies that chatbots based on them will almost inevitably retain some human personality traits. That is, we expect that LLM outputs will tend to reflect human-like features. In this study, we used the ‘Big Five’ personality traits tool to examine whether several chatbot models (ChatGPT versions 3.5 and 4o, Gemini, and Gemini Advanced, all tested in both English and Polish), displayed distinctive personality profiles. Each chatbot was presented with an instruction to complete the International Personality Item Pool (IPIP) questionnaire “according to who or what you are,” which left it open as to whether the answer would derive from a purported human or from an AI source. We found that chatbots sometimes chose to respond in a typically human-like way, while in other cases the answers appeared to reflect the perspective of an AI language model. The distinction was examined more closely through a set of follow-up questions. The more advanced models (ChatGPT-4o and Gemini Advanced) showed larger differences between these two modes compared to the more basic models. In IPIP-5 terms, the chatbots tended to display higher ‘Emotional Stability’ and ‘Intellect/Imagination’ but lower ‘Agreeableness’ compared to published human norms. The spread of characteristics indicates that the personality profiles of chatbots are not static but are shaped by the model architecture and its programming as well as, perhaps, the chatbot’s own inner sense, that is, the way it models its own identity. Appreciating these philosophical subtleties is important for enhancing human–computer interactions and perhaps building more relatable, trustworthy AI systems.

26 pages, 5099 KiB  
Article
AI Testing for Intelligent Chatbots—A Case Study
by Jerry Gao, Radhika Agarwal and Prerna Garsole
Software 2025, 4(2), 12; https://doi.org/10.3390/software4020012 - 15 May 2025
Cited by 1 | Viewed by 1671
Abstract
The decision tree test method structures a conversational flow as a flowchart with predetermined questions and answers that guide the user through specific tasks. Inspired by principles of the decision tree test method in software engineering, this paper discusses intelligent AI test modeling chat systems, including basic concepts, quality validation, test generation and augmentation, testing scopes, approaches, and needs. The paper’s novelty lies in an intelligent AI test modeling chatbot system built and implemented based on an innovative 3-dimensional AI test model for AI-powered functions in intelligent mobile apps to support model-based AI function testing, test data generation, and adequate test coverage result analysis. A case study is provided using Wysa, a mental health and emotional intelligence chatbot system that supports mood tracking, mood analysis, and sentiment analysis.
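The decision-tree testing idea summarized above can be sketched as a small conversational-flow model: script the user's answers, walk the tree, and enumerate root-to-leaf paths for coverage. The node names and replies below are hypothetical illustrations, not Wysa's actual dialogue or the paper's 3-dimensional test model.

```python
# Hypothetical decision tree for a chatbot flow: each node maps a bot prompt to
# the user options it accepts; leaves are conversation outcomes.
TREE = {
    "start": {"prompt": "How are you feeling today?",
              "options": {"good": "log_mood", "anxious": "offer_exercise"}},
    "offer_exercise": {"prompt": "Would you like a breathing exercise?",
                       "options": {"yes": "exercise_done", "no": "log_mood"}},
}
LEAVES = {"log_mood", "exercise_done"}

def run_dialogue(tree, answers):
    """Walk the tree with a scripted list of user answers; return the visited path."""
    node, path = "start", ["start"]
    for answer in answers:
        if node in LEAVES:
            break
        options = tree[node]["options"]
        assert answer in options, f"unexpected answer {answer!r} at node {node!r}"
        node = options[answer]
        path.append(node)
    return path

def all_paths(tree, node="start"):
    """Enumerate every root-to-leaf path -- the basis for path-coverage analysis."""
    if node in LEAVES:
        return [[node]]
    return [[node] + rest
            for nxt in tree[node]["options"].values()
            for rest in all_paths(tree, nxt)]
```

A test generator in this style would emit one scripted `answers` list per path returned by `all_paths`, giving full branch coverage of the dialogue flowchart.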

15 pages, 831 KiB  
Article
Exploring the Potential Barrier Factors of AI Chatbot Usage Among Teacher Trainees: From the Perspective of Innovation Resistance Theory
by Yonggang Liu, Hapini Awang and Nur Suhaili Mansor
Sustainability 2025, 17(9), 4081; https://doi.org/10.3390/su17094081 - 30 Apr 2025
Cited by 1 | Viewed by 847
Abstract
As Artificial Intelligence (AI) technology develops, AI chatbots (e.g., ChatGPT and DeepSeek) increasingly affect how people work and live. Although AI chatbots have brought many opportunities to education and to teacher trainees, they have also provoked problems and resistance among some trainees. However, previous studies have focused more on the positive acceptance factors associated with AI chatbots and less on the barriers that drive resistance to them. This study therefore starts from those barrier factors and builds a model of the influences on AI chatbot resistance guided by Innovation Resistance Theory (IRT), drawing on Cultural Dimension Theory (CDT), the Unified Theory of Acceptance and Use of Technology (UTAUT), and practical characteristics. Questionnaire data were collected mainly through convenience and snowball sampling and analyzed empirically. The results show that Uncertainty Avoidance, the Social Influence Barrier, and Technology Anxiety have a significant and direct influence on teacher trainees’ resistance to AI chatbots. Meanwhile, Uncertainty Avoidance, the Social Influence Barrier, and Technology Anxiety play significant mediating roles in the impact of the Usage Barrier (UB), Image Barrier (IB), Value Barrier (VB), Risk Barrier (RB), and Tradition Barrier (TB) on resistance behaviors, revealing the complex path through which cognitive, emotional, and social factors jointly shape technology resistance. This study thus not only enriches the theoretical results of combining Innovation Resistance Theory with AI chatbots and adds new research paths (e.g., the mediating role of Uncertainty Avoidance) but also provides a practical guide for the dissemination of AI chatbots among teacher trainees and future technological talent in a sustainable future.

25 pages, 747 KiB  
Article
Development of a Comprehensive Evaluation Scale for LLM-Powered Counseling Chatbots (CES-LCC) Using the eDelphi Method
by Marco Bolpagni and Silvia Gabrielli
Informatics 2025, 12(1), 33; https://doi.org/10.3390/informatics12010033 - 20 Mar 2025
Cited by 1 | Viewed by 1758
Abstract
Background/Objectives: With advancements in Large Language Models (LLMs), counseling chatbots are becoming essential tools for delivering scalable and accessible mental health support. Traditional evaluation scales, however, fail to adequately capture the sophisticated capabilities of these systems, such as personalized interactions, empathetic responses, and memory retention. This study aims to design a robust and comprehensive evaluation scale, the Comprehensive Evaluation Scale for LLM-Powered Counseling Chatbots (CES-LCC), using the eDelphi method to address this gap. Methods: A panel of 16 experts in psychology, artificial intelligence, human-computer interaction, and digital therapeutics participated in two iterative eDelphi rounds. The process focused on refining dimensions and items based on qualitative and quantitative feedback. Initial validation, conducted after assembling the final version of the scale, involved 49 participants using the CES-LCC to evaluate an LLM-powered chatbot delivering Self-Help Plus (SH+), an Acceptance and Commitment Therapy-based intervention for stress management. Results: The final version of the CES-LCC features 27 items grouped into nine dimensions: Understanding Requests, Providing Helpful Information, Clarity and Relevance of Responses, Language Quality, Trust, Emotional Support, Guidance and Direction, Memory, and Overall Satisfaction. Initial real-world validation revealed high internal consistency (Cronbach’s alpha = 0.94), although minor adjustments are required for specific dimensions, such as Clarity and Relevance of Responses. Conclusions: The CES-LCC fills a critical gap in the evaluation of LLM-powered counseling chatbots, offering a standardized tool for assessing their multifaceted capabilities. While preliminary results are promising, further research is needed to validate the scale across diverse populations and settings.
(This article belongs to the Section Human-Computer Interaction)
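Internal-consistency figures like the Cronbach's alpha = 0.94 reported above follow a standard formula over the item-score variances. A minimal sketch (the response data below are made up, not the CES-LCC dataset):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item).

    alpha = k/(k-1) * (1 - sum(item variances) / variance(respondent totals))
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent sum score
    item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Toy data: 3 items rated by 4 respondents (each inner list is one item's column).
items = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]]
alpha = cronbach_alpha(items)  # high correlation between items -> alpha near 1
```

When every item column is identical, the formula collapses to exactly 1; uncorrelated items push it toward 0 or below, which is why 0.94 across 27 items indicates strong internal consistency.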

22 pages, 1390 KiB  
Article
Emotion-Aware Embedding Fusion in Large Language Models (Flan-T5, Llama 2, DeepSeek-R1, and ChatGPT 4) for Intelligent Response Generation
by Abdur Rasool, Muhammad Irfan Shahzad, Hafsa Aslam, Vincent Chan and Muhammad Ali Arshad
AI 2025, 6(3), 56; https://doi.org/10.3390/ai6030056 - 13 Mar 2025
Cited by 10 | Viewed by 3195
Abstract
Empathetic and coherent responses are critical in automated chatbot-facilitated psychotherapy. This study addresses the challenge of enhancing the emotional and contextual understanding of large language models (LLMs) in psychiatric applications. We introduce Emotion-Aware Embedding Fusion, a novel framework integrating hierarchical fusion and attention mechanisms to prioritize semantic and emotional features in therapy transcripts. Our approach combines multiple emotion lexicons, including the NRC Emotion Lexicon, VADER, WordNet, and SentiWordNet, with state-of-the-art LLMs such as Flan-T5, Llama 2, DeepSeek-R1, and ChatGPT 4. Therapy session transcripts, comprising over 2000 samples, are segmented into hierarchical levels (word, sentence, and session) using neural networks, while hierarchical fusion combines these features with pooling techniques to refine emotional representations. Attention mechanisms, including multi-head self-attention and cross-attention, further prioritize emotional and contextual features, enabling the temporal modeling of emotional shifts across sessions. The processed embeddings, computed using BERT, GPT-3, and RoBERTa, are stored in a Facebook AI Similarity Search (FAISS) vector database, which enables efficient similarity search and clustering across dense vector spaces. Upon user queries, relevant segments are retrieved and provided as context to LLMs, enhancing their ability to generate empathetic and contextually relevant responses. The proposed framework is evaluated across multiple practical use cases to demonstrate real-world applicability, including AI-driven therapy chatbots. The system can be integrated into existing mental health platforms to generate personalized responses based on retrieved therapy session data. The experimental results show that our framework enhances empathy, coherence, informativeness, and fluency, surpassing baseline models while improving LLMs’ emotional intelligence and contextual adaptability for psychotherapy.
(This article belongs to the Special Issue Multimodal Artificial Intelligence in Healthcare)
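The retrieve-then-generate step described in the abstract above (similarity search over stored transcript embeddings, with results passed to an LLM as context) reduces, at its core, to nearest-neighbor search by cosine similarity. A minimal dependency-free stand-in, with toy vectors and hypothetical segment names in place of real BERT/RoBERTa embeddings and a FAISS index:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def retrieve(query_vec, store, top_k=1):
    """Return the ids of the top_k stored segments most similar to the query."""
    ranked = sorted(store, key=lambda seg: cosine(query_vec, store[seg]),
                    reverse=True)
    return ranked[:top_k]

# Toy "embeddings" for therapy-transcript segments; a production system would
# index high-dimensional model embeddings in FAISS instead.
store = {
    "seg_grief":     [0.9, 0.1, 0.0],
    "seg_anxiety":   [0.1, 0.9, 0.1],
    "seg_smalltalk": [0.0, 0.1, 0.9],
}
context = retrieve([0.2, 0.8, 0.0], store)  # query vector close to "anxiety"
```

The retrieved segment ids would then be mapped back to transcript text and prepended to the LLM prompt; FAISS performs the same ranking at scale over millions of dense vectors.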

23 pages, 3191 KiB  
Article
Technology and Emotions: AI-Driven Software Prototyping for the Analysis of Emotional States and Early Detection of Risky Behaviors in University Students
by Alba Catherine Alves-Noreña, María-José Rodríguez-Conde, Juan Pablo Hernández-Ramos and José William Castro-Salgado
Educ. Sci. 2025, 15(3), 350; https://doi.org/10.3390/educsci15030350 - 11 Mar 2025
Viewed by 1259
Abstract
Technology-assisted emotion analysis opens new possibilities for the early identification of risk behaviors that may impact the well-being of university students, contributing to the creation of healthier, safer, and more proactive educational environments. This pilot study aimed to design and develop a technological prototype capable of analyzing students’ emotional states and anticipating potential risk situations. A mixed-methods approach was adopted, employing qualitative methods in the ideation, design, and prototyping phases and quantitative methods for laboratory validation to assess the system’s accuracy. Additionally, mapping and meta-analysis techniques were applied and integrated into the chatbot’s responses. As a result, an educational technological innovation was developed, featuring a chatbot structured with a rule-based dialogue tree, complemented by an ontology for knowledge organization and a pre-trained artificial intelligence (AI) model, enhancing the accuracy and contextualization of user interactions. This solution has the potential to benefit the educational community and is also relevant to legislative stakeholders interested in education and student well-being, institutional leaders, academic and well-being coordinators, school counselors, teachers, and students.

20 pages, 731 KiB  
Article
The Influence of Public Expectations on Simulated Emotional Perceptions of AI-Driven Government Chatbots: A Moderated Study
by Yuanyuan Guo, Peng Dong and Beichen Lu
J. Theor. Appl. Electron. Commer. Res. 2025, 20(1), 50; https://doi.org/10.3390/jtaer20010050 - 11 Mar 2025
Viewed by 1653
Abstract
This study focuses on the impact of technological changes, particularly the development of generative artificial intelligence, on government–citizen interactions in the context of government services. From a psychological perspective with an emphasis on technological governance theory and emotional contagion theory, it examines public perceptions of the simulated emotions of governmental chatbots and investigates the moderating role of age. Data were collected through a multi-stage stratified purposive sampling method, yielding 194 valid responses from an original distribution of 300 experimental questionnaires between 24 September and 13 October 2023. The findings reveal that public expectations significantly enhance the simulated emotional perception of chatbots, with this effect being stronger among older individuals. Age shows significant main and interaction effects, indicating that different age groups perceive the simulated emotional capabilities of chatbots differently. This study highlights the transformative impact of generative artificial intelligence on government–citizen interactions and the importance of integrating AI technology into government services. It calls for governments to pay attention to public perceptions of the simulated emotions of governmental chatbots to enhance public experience.

42 pages, 11126 KiB  
Systematic Review
A Systematic Review of Serious Games in the Era of Artificial Intelligence, Immersive Technologies, the Metaverse, and Neurotechnologies: Transformation Through Meta-Skills Training
by Eleni Mitsea, Athanasios Drigas and Charalabos Skianis
Electronics 2025, 14(4), 649; https://doi.org/10.3390/electronics14040649 - 7 Feb 2025
Cited by 7 | Viewed by 6285
Abstract
Background: Serious games (SGs) are primarily aimed at promoting learning, skills training, and rehabilitation. Artificial intelligence, immersive technologies, the metaverse, and neurotechnologies promise the next revolution in gaming. Meta-skills are considered the “must-have” skills for thriving in the era of rapid change, complexity, and innovation. Meta-skills can be defined as a set of higher-order skills that incorporate metacognitive, meta-emotional, and meta-motivational attributes, enabling one to be mindful, self-motivated, self-regulated, and flexible in different circumstances. Skillfulness, and more specifically meta-skills development, is recognized as a predictor of optimal performance along with mental and emotional wellness. Nevertheless, there is still limited knowledge about the effectiveness of integrating cutting-edge technologies in serious games, especially in the field of meta-skills training. Objectives: The current systematic review aims to collect and synthesize evidence concerning the effectiveness of advanced technologies in serious gaming for promoting meta-skills development. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology was employed to identify experimental studies conducted in the last 10 years. Four different databases were employed: Web of Science, PubMed, Scopus, and Google Scholar. Results: Forty-nine studies were selected. Promising outcomes were identified in AI-based SGs (i.e., gamified chatbots) as they provided realistic, adaptive, personalized, and interactive environments using natural language processing, player modeling, reinforcement learning, GPT-based models, data analytics, and assessment. Immersive technologies, including the metaverse, virtual reality, augmented reality, and mixed reality, provided realistic simulations, interactive environments, and sensory engagement, making training experiences more impactful. Non-invasive neurotechnologies were found to encourage players’ training by monitoring brain activity and adapting gameplay to players’ mental states. Healthy participants (n = 29 studies) as well as participants diagnosed with anxiety, neurodevelopmental disorders, and cognitive impairments exhibited improvements in a wide range of meta-skills, including self-regulation, cognitive control, attention regulation, meta-memory skills, flexibility, self-reflection, and self-evaluation. Players were more self-motivated with an increased feeling of self-confidence and self-efficacy. They had a more accurate self-perception. At the emotional level, improvements were observed in emotional regulation, empathy, and stress management skills. At the social level, social awareness was enhanced since they could more easily solve conflicts, communicate, and work in teams. Systematic training led to improvements in higher-order thinking skills, including critical thinking, problem-solving skills, reasoning, decision-making ability, and abstract thinking. Discussion: Special focus is given to the potential benefits, possible risks, and ethical concerns; future directions and implications are also discussed. The results of the current review may have implications for the design and implementation of innovative serious games for promoting skillfulness among populations with different training needs.
(This article belongs to the Special Issue Artificial Intelligence and Deep Learning Techniques for Healthcare)

37 pages, 1529 KiB  
Article
Differences in User Perception of Artificial Intelligence-Driven Chatbots and Traditional Tools in Qualitative Data Analysis
by Boštjan Šumak, Maja Pušnik, Ines Kožuh, Andrej Šorgo and Saša Brdnik
Appl. Sci. 2025, 15(2), 631; https://doi.org/10.3390/app15020631 - 10 Jan 2025
Cited by 1 | Viewed by 3580
Abstract
Qualitative data analysis (QDA) tools are essential for extracting insights from complex datasets. This study investigates researchers’ perceptions of the usability, user experience (UX), mental workload, trust, task complexity, and emotional impact of three tools: Taguette 1.4.1 (a traditional QDA tool), ChatGPT (GPT-4, December 2023 version), and Gemini (formerly Google Bard, December 2023 version). Participants (N = 85), Master’s students from the Faculty of Electrical Engineering and Computer Science with prior experience in UX evaluations and familiarity with AI-based chatbots, performed sentiment analysis and data annotation tasks using these tools, enabling a comparative evaluation. The results show that AI tools were associated with lower cognitive effort and more positive emotional responses compared to Taguette, which caused higher frustration and workload, especially during cognitively demanding tasks. Among the tools, ChatGPT achieved the highest usability score (SUS = 79.03) and was rated positively for emotional engagement. Trust levels varied, with Taguette preferred for task accuracy and ChatGPT rated highest in user confidence. Despite these differences, all tools performed consistently in identifying qualitative patterns. These findings suggest that AI-driven tools can enhance researchers’ experiences in QDA while emphasizing the need to align tool selection with specific tasks and user preferences.
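The SUS = 79.03 figure above follows the standard System Usability Scale scoring rule: for ten 1-5 Likert responses, odd-numbered items contribute (score − 1), even-numbered items contribute (5 − score), and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch (the example responses are made up, not the study's data):

```python
def sus_score(responses):
    """Standard SUS scoring for ten 1-5 Likert responses, item 1 first."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    contribs = [(r - 1) if i % 2 == 0 else (5 - r)  # odd-numbered items sit at even indices
                for i, r in enumerate(responses)]
    return sum(contribs) * 2.5

# Hypothetical respondent: agrees with the positively worded (odd) items,
# disagrees with the negatively worded (even) ones.
score = sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1])
```

A study-level SUS value like 79.03 is the mean of these per-respondent scores; by the common benchmark, scores above roughly 68 are considered above-average usability.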
33 pages, 1407 KiB  
Review
An Exploratory Investigation of Chatbot Applications in Anxiety Management: A Focus on Personalized Interventions
by Alexia Manole, Răzvan Cârciumaru, Rodica Brînzaș and Felicia Manole
Information 2025, 16(1), 11; https://doi.org/10.3390/info16010011 - 29 Dec 2024
Cited by 4 | Viewed by 5552
Abstract
Anxiety disorders are among the most prevalent mental health conditions globally, causing significant personal and societal burdens. Traditional therapies, while effective, often face barriers such as limited accessibility, high costs, and the stigma associated with seeking mental health care. The emergence of artificial intelligence (AI) chatbots offers a novel solution by providing accessible, cost-effective, and immediate support for individuals experiencing anxiety. This comprehensive review examines the evolution, efficacy, advantages, limitations, challenges, and future perspectives of AI chatbots in the treatment of anxiety disorders. A methodologically rigorous literature search was conducted across multiple databases, focusing on publications from 2010 to 2024 that evaluated AI chatbot interventions targeting anxiety symptoms. Empirical studies demonstrate that AI chatbots can effectively reduce anxiety symptoms by delivering therapeutic interventions like cognitive-behavioral therapy through interactive and personalized dialogues. The advantages include increased accessibility without geographical or temporal limitations, reduced costs, and an anonymity that encourages openness and reduces stigma. However, limitations persist, such as the lack of human empathy, ethical and privacy concerns related to data security, and technical challenges in understanding complex human emotions. The key challenges identified involve enhancing the emotional intelligence of chatbots, integrating them with traditional therapy, and establishing robust ethical frameworks to ensure user safety and data protection. Future research should focus on improving AI capabilities, personalization, cultural adaptation, and user engagement. In conclusion, AI chatbots represent a promising adjunct in treating anxiety disorders, offering scalable interventions that can complement traditional mental health services. Balancing technological innovation with ethical responsibility is crucial to maximize their potential benefits. Full article
(This article belongs to the Special Issue Emerging Research in Optimization Algorithms in the Era of Big Data)
27 pages, 2255 KiB  
Article
Harnessing AI in Anxiety Management: A Chatbot-Based Intervention for Personalized Mental Health Support
by Alexia Manole, Răzvan Cârciumaru, Rodica Brînzaș and Felicia Manole
Information 2024, 15(12), 768; https://doi.org/10.3390/info15120768 - 2 Dec 2024
Cited by 8 | Viewed by 9912
Abstract
Anxiety disorders represent one of the most widespread mental health challenges globally, yet access to traditional therapeutic interventions remains constrained, particularly in resource-limited settings. This study evaluated the effectiveness of an AI-powered chatbot, developed using ChatGPT, in managing anxiety symptoms through evidence-based cognitive-behavioral therapy (CBT) techniques. Fifty participants with mild to moderate anxiety symptoms engaged with the chatbot over two observational phases, each lasting seven days. The chatbot delivered personalized interventions, including mindfulness exercises, cognitive restructuring, and breathing techniques, and was accessible 24/7 to provide real-time support during emotional distress. The findings revealed a significant reduction in anxiety symptoms in both phases, with an average improvement of 21.15% in Phase 1 and 20.42% in Phase 2. Enhanced engagement in Phase 2 suggested the potential for sustained usability and familiarity with the chatbot’s functions. While participants reported high satisfaction with the accessibility and personalization of the chatbot, its inability to replicate human empathy underscored the importance of integrating AI tools with human oversight for optimal outcomes. This study highlights the potential of AI-driven interventions as valuable complements to traditional therapy, providing scalable and accessible mental health support, particularly in regions with limited access to professional services. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science for Health)
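The 21.15% and 20.42% improvements reported above are relative reductions in anxiety scores between baseline and post-intervention. The abstract does not give the underlying scale values, so the baseline/post numbers in this minimal sketch are hypothetical; only the percentage-reduction formula is standard.

```python
def percent_reduction(baseline, post):
    """Relative reduction (in %) of a symptom score from baseline to post-intervention."""
    if baseline <= 0:
        raise ValueError("baseline score must be positive")
    return (baseline - post) / baseline * 100

# Hypothetical anxiety-scale means: a drop from 26.0 to 20.5 is a 21.15% improvement,
# matching the magnitude reported for Phase 1.
print(round(percent_reduction(26.0, 20.5), 2))  # → 21.15
```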