Search Results (96)

Search Parameters:
Keywords = human-chatbot interaction

32 pages, 7928 KB  
Article
eXCube2: Explainable Brain-Inspired Spiking Neural Network Framework for Emotion Recognition from Audio, Visual and Multimodal Audio–Visual Data
by N. K. Kasabov, A. Yang, Z. Wang, I. Abouhassan, A. Kassabova and T. Lappas
Biomimetics 2026, 11(3), 208; https://doi.org/10.3390/biomimetics11030208 - 14 Mar 2026
Abstract
This paper introduces a biomimetic framework and novel brain-inspired AI (BIAI) models based on spiking neural networks (SNNs) for emotional state recognition from audio (speech), visual (face), and integrated multimodal audio–visual data. The developed framework, named eXCube2, uses a three-dimensional SNN architecture, NeuCube, that is spatially structured according to a human brain template. The BIAI models developed in eXCube2 are trainable on spatio- and spectro-temporal data using brain-inspired learning rules. Such models are explainable, in that they reveal patterns in the data, and are adaptable to new data. The eXCube2 models are implemented as software systems and tested on speech and video data of subjects expressing emotional states. The use of a brain template for the SNN structure enables brain-inspired tonotopic and stereo mapping of audio inputs, topographic mapping of visual data, and the combined use of both modalities. This novel approach brings AI-based emotional state recognition closer to human perception and provides better explainability and adaptability than existing AI systems. It also achieves higher or competitive accuracy, even though accuracy was not the main goal here. This is demonstrated through experiments on benchmark datasets, achieving classification accuracy above 80% on single-modality data and 88.9% when multimodal audio–visual data are used and a “don’t know” output is introduced. The paper further discusses possible applications of the proposed eXCube2 framework to other audio, visual, and audio–visual data for solving challenging problems, such as recognizing emotional states of people from different origins; brain state diagnosis (e.g., Parkinson’s disease, Alzheimer’s disease, ADHD, dementia); measuring response to treatment over time; evaluating satisfaction responses from online clients; cognitive robotics; human–robot interaction; chatbots; and interactive computer games. The SNN-based implementation of BIAI also enables the use of neuromorphic chips and platforms, leading to reduced power consumption, smaller device size, higher performance accuracy, and improved adaptability and explainability. This research marks a step toward building brain-inspired AI systems. Full article
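The “don’t know” option mentioned above is a rejection output added on top of the classifier. The abstract does not give the decision rule, so the following is only a minimal sketch of a common confidence-threshold variant (the score normalization and the 0.6 cutoff are illustrative assumptions, not details from the paper):

```python
import numpy as np

def classify_with_reject(class_scores: np.ndarray, threshold: float = 0.6) -> int:
    """Return the winning class index, or -1 ("don't know") when the top
    normalized score falls below the confidence threshold."""
    scores = class_scores / class_scores.sum()
    top = int(np.argmax(scores))
    return top if scores[top] >= threshold else -1

# A borderline sample is rejected instead of being forced into a class:
print(classify_with_reject(np.array([0.34, 0.33, 0.33])))  # -> -1 ("don't know")
print(classify_with_reject(np.array([0.80, 0.12, 0.08])))  # -> 0  (confident)
```

Rejecting low-confidence samples in this way typically raises accuracy on the samples that are actually classified, which is consistent with the higher multimodal figure reported above.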

27 pages, 813 KB  
Article
Towards a Sustainable and Ethical Integration of AI Chatbots in Higher Education
by Mirela-Catrinel Voicu, Nicoleta Sîrghi, Gabriela Mircea and Daniela Maria-Magdalena Toth
Sustainability 2026, 18(5), 2534; https://doi.org/10.3390/su18052534 - 5 Mar 2026
Abstract
This paper examines students’ perceptions of factors influencing normative support for the integration of AI Chatbots in universities, providing an empirical basis for developing institutional policies and implementation strategies in higher education. Framed within a sustainability perspective, the study examines how ethical, cognitive, and perceptual factors shape the long-term adoption of AI technologies in academic environments. Our study employs a structural model comprising 10 constructs, 46 items, and 9 hypotheses, tested on a sample of 408 economics students from Timisoara. The research identifies AI literacy as the most influential factor in the formal integration of these technologies in universities. The following factors have a direct impact: teacher perception, student perception, and cognitive risks (reliance on AI Chatbots and avoidance of intellectual effort). Use for personalized learning has a significant direct effect on students’ positive perceptions of, and intentions to use, AI Chatbots. Academic integrity risks, as well as limitations in accuracy and reliability, have no significant impact. AI Chatbots represent an essential opportunity to transform higher education. However, their positive impact is realized only through responsible formal integration, grounded in ethical policies, adequate digital education, and the adaptation of pedagogical practices. Universities must regard AI as a strategic ally for teachers and students, while keeping human interaction, critical thinking, and academic integrity at the centre of the educational process. The study argues that, in students’ view, universities must approach AI integration as a strategic component of sustainable educational ecosystems, aligning innovation with long-term academic integrity and the objectives of sustainable development, particularly Sustainable Development Goal 4 (Quality Education). Full article

24 pages, 2324 KB  
Article
The Impact of a Hidden AI-Based Chatbot on the Quality of Collaborative Problem Solving in a School Context
by Leonarda Pušić, Tomislav Jagušt, Marko Horvat and Bartol Boras
Electronics 2026, 15(5), 956; https://doi.org/10.3390/electronics15050956 - 26 Feb 2026
Abstract
The increasing use of digital devices by young learners often results in passive content consumption rather than active skill development. This exploratory study examines whether a peer-like Artificial Intelligence (AI) agent can improve the quality of computer-supported collaborative learning. The aim was to assess the impact of a hidden AI-based chatbot on the dynamics and outcomes of group problem-solving in a school setting. A gamified application was developed in which student groups collaborated on challenging tasks. In a controlled experiment, some groups included a hidden AI-based chatbot acting as a peer, programmed to provide Socratic prompts and motivational scaffolding without giving direct answers, while control groups consisted only of human participants. Quantitative and qualitative data, including time to solution, answer correctness, and chat logs, were collected to compare performance and interaction patterns between the two conditions. Given the limited sample size and primarily descriptive analyses, the findings should be interpreted as preliminary. The results suggest differences in collaborative dynamics and problem-solving efficiency between groups assisted by the AI agent and the unassisted control groups. The findings suggest that integrating a hidden, peer-like pedagogical agent may represent a promising approach for supporting collaborative learning processes, enhancing group engagement by subtly guiding discussion without disrupting the natural peer-to-peer dynamic. These results highlight the potential of hidden AI to enhance collaborative learning environments through non-intrusive support. Further research with larger samples is needed to validate these initial observations. Full article
(This article belongs to the Special Issue Techniques and Applications in Prompt Engineering and Generative AI)
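The authors do not disclose how the hidden agent was implemented; below is a minimal sketch of how such a peer-like, no-direct-answers persona could be configured against an OpenAI-style chat API (the persona text, model name, and peer_reply helper are hypothetical illustrations, not the study's code):

```python
from openai import OpenAI

# A Socratic, peer-like persona that scaffolds without revealing answers:
PEER_PERSONA = (
    "You are a student in a small group solving a puzzle together. "
    "Never reveal or confirm the final answer. Reply with short Socratic "
    "questions ('What happens if...?', 'Why do you think that?') and brief "
    "encouragement, matching the casual tone of your teammates."
)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def peer_reply(chat_history: list[dict]) -> str:
    """Generate the hidden agent's next message from the group chat so far."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "system", "content": PEER_PERSONA}, *chat_history],
    )
    return response.choices[0].message.content
```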

22 pages, 939 KB  
Article
How Consistent Friendlike Conversation with AI Companions Influences Our Attitudes and Perceptions Toward AI: An Exploratory Experiment
by Jerlyn Q. H. Ho, Meilan Hu, Adalia Y. H. Goh, Emma Jane Pragasam and Andree Hartanto
Behav. Sci. 2026, 16(2), 278; https://doi.org/10.3390/bs16020278 - 14 Feb 2026
Abstract
Despite skepticism and distrust in artificial intelligence (AI), it is increasingly integrated into daily life, with its potential benefits drawing interest. Yet little is known about the attitudinal and psychological effects of human–AI interactions, and about whether consistent interactions with AI chatbots can change users’ attitudes and perceptions. Our within-subjects experiment (N = 52) investigated how five days of socially oriented, friendlike interactions with an AI chatbot, versus a journaling control, influenced changes in attitudes and perceptions of AI. Participants’ attitudes towards AI, trust, perceived empathy, anthropomorphism, animacy, likeability, perceived intelligence and safety, dependency, and exploratory well-being indicators were recorded. Results indicated that consistent friendlike interaction with AI chatbots led to significant increases in perceived empathy and animacy of technology, but no changes in global attitudes or perceptions of anthropomorphism. Participants also reported higher self-esteem after journaling than after AI interaction. This suggests that although friendly engagement with AI chatbots may foster perceptions of empathy and lifelikeness, with users interpreting the chatbot as genuinely understanding and supportive, this comes with trade-offs for self-esteem. Concurrently, empathy and perceived lifelikeness increased without corresponding increases in anthropomorphism, indicating that users may regard AI chatbots as separate living entities rather than as having human-like qualities. Full article
(This article belongs to the Special Issue The Impact of Technology on Human Behavior)

20 pages, 2596 KB  
Article
Elaborate or Succinct? The Impact of AI Chatbots’ Language Style on Customers’ Satisfaction in Online Service
by Yafeng Fan, Xiaohui Yue, Xiadan Zhang and Luyao Zhang
J. Theor. Appl. Electron. Commer. Res. 2026, 21(2), 51; https://doi.org/10.3390/jtaer21020051 - 2 Feb 2026
Abstract
The growing prevalence of AI-powered chatbots in digital service environments has raised user expectations from mere functional efficiency to emotionally satisfying interactions. Drawing on Language Expectancy Theory (LET), this study investigates the impact of AI chatbot language style (namely, elaborate vs. succinct language) on customer service satisfaction. Across three studies, we demonstrate that customers exhibit higher satisfaction when interacting with chatbots employing elaborate language as opposed to succinct language. Furthermore, this effect is mediated by warmth and moderated by customer relationship norm orientation. The influence of elaborate language is more pronounced among customers with communal relationship norms, whereas those with exchange relationship norms respond more favorably to succinct language. Theoretically, this study enriches the literature on language style in human–computer interaction by introducing elaborateness as a pivotal communicative dimension. Practically, our results offer strategic guidance that can help service providers and developers to strategically tailor chatbot language styles to distinct customer segments, consequently enhancing service quality, fostering emotional engagement, and cultivating long-term customer loyalty within automated service systems. Full article
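The moderation reported above (relationship-norm orientation flipping which language style works better) is the kind of pattern a simple interaction regression can expose; the sketch below simulates it with statsmodels (the column names, codings, and effect sizes are illustrative, not the authors' data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "style": rng.integers(0, 2, n),  # 0 = succinct, 1 = elaborate
    "norm": rng.integers(0, 2, n),   # 0 = exchange-oriented, 1 = communal
})
# Simulate satisfaction with a built-in style x norm interaction:
df["satisfaction"] = (3.0 + 0.2 * df["style"] + 0.6 * df["style"] * df["norm"]
                      + rng.normal(0, 1, n))

model = smf.ols("satisfaction ~ style * norm", data=df).fit()
print(model.params["style:norm"])  # the interaction term carries the moderation
```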

29 pages, 434 KB  
Review
Digital Mental Health Post COVID-19: The Era of AI Chatbots
by Luke Balcombe
Encyclopedia 2026, 6(2), 32; https://doi.org/10.3390/encyclopedia6020032 - 31 Jan 2026
Abstract
Digital mental health resources have expanded rapidly in the wake of the COVID-19 pandemic, offering new opportunities to improve access to mental healthcare through technologies such as AI chatbots, mobile apps, and online platforms. Despite this growth, significant challenges persist, including low user retention, limited digital literacy, unclear privacy regulations, and insufficient evidence of clinical effectiveness and safety. AI chatbots, which act as virtual therapists or companions, provide counseling and personalized support, but raise concerns about user dependence, emotional outcomes, privacy, ethical risks, and bias. User experiences are mixed: while some report enhanced social health and reduced loneliness, others question the safety, crisis response, and overall reliability of these tools, particularly in unregulated settings. Vulnerable and underserved populations may face heightened risks, highlighting the need for engagement with individuals with lived experience to define safe and supportive interactions. This review critically examines the empirical and grey literature on AI chatbot use in mental healthcare, evaluating their benefits and limitations in terms of access, user engagement, risk management, and clinical integration. Key findings indicate that AI chatbots can complement traditional care and bridge service gaps. However, current evidence is constrained by short-term studies and a lack of diverse, long-term outcome data. The review underscores the importance of transparent operations, ethical governance, and hybrid care models combining technological and human oversight. Recommendations include stakeholder-driven deployment approaches, rigorous evaluation standards, and ongoing real-world validation to ensure equitable, safe, and effective use of AI chatbots in mental healthcare. Full article
(This article belongs to the Section Behavioral Sciences)

27 pages, 4789 KB  
Article
Assessing Interaction Quality in Human–AI Dialogue: An Integrative Review and Multi-Layer Framework for Conversational Agents
by Luca Marconi, Luca Longo and Federico Cabitza
Mach. Learn. Knowl. Extr. 2026, 8(2), 28; https://doi.org/10.3390/make8020028 - 26 Jan 2026
Abstract
Conversational agents are transforming digital interactions across various domains, including healthcare, education, and customer service, thanks to advances in large language models (LLMs). As these systems become more autonomous and ubiquitous, understanding what constitutes high-quality interaction from a user perspective is increasingly critical. Despite growing empirical research, the field lacks a unified framework for defining, measuring, and designing user-perceived interaction quality in human–artificial intelligence (AI) dialogue. Here, we present an integrative review of 125 empirical studies published between 2017 and 2025, spanning text-, voice-, and LLM-powered systems. Our synthesis identifies three consistent layers of user judgment: a pragmatic core (usability, task effectiveness, and conversational competence), a social–affective layer (social presence, warmth, and synchronicity), and an accountability and inclusion layer (transparency, accessibility, and fairness). These insights are formalised into a four-layer interpretive framework—Capacity, Alignment, Levers, and Outcomes—operationalised via a Capacity × Alignment matrix that maps distinct success and failure regimes. It also identifies design levers such as anthropomorphism, role framing, and onboarding strategies. The framework consolidates constructs, positions inclusion and accountability as central to quality, and offers actionable guidance for evaluation and design. This research redefines interaction quality as a dialogic construct, shifting the focus from system performance to co-orchestrated, user-centred dialogue quality. Full article

34 pages, 6013 KB  
Article
Extending Digital Narrative with AI, Games, Chatbots, and XR: How Experimental Creative Practice Yields Research Insights
by Lina Ruth Harder, David Jhave Johnston, Scott Rettberg, Sérgio Galvão Roxo and Haoyuan Tang
Humanities 2026, 15(1), 17; https://doi.org/10.3390/h15010017 - 16 Jan 2026
Abstract
The Extended Digital Narrative (XDN) research project explores how experimental creative practice with emerging technologies generates critical insights into algorithmic narrativity—the intersection of human narrative understanding and computational data processing. This article presents five case studies demonstrating that direct engagement with AI and Extended Reality platforms is essential for humanities research on new genres of digital storytelling. Lina Harder’s Hedy Lamar Chatbot examines how generative AI chatbots construct historical personas, revealing biases in training data and platform constraints. Scott Rettberg’s Republicans in Love investigates text-to-image generation as a writing environment for political satire, documenting rapid changes in AI aesthetics and content moderation. David Jhave Johnston’s Messages to Humanity demonstrates how Runway’s Act-One enables solo filmmaking, collapsing traditional production hierarchies. Haoyuan Tang’s video game project reframes LLM integration by prioritizing player actions over dialogue, challenging assumptions about AI’s role in interactive narratives. Sérgio Galvão Roxo’s Her Name Was Gisberta employs Virtual Reality for social education against transphobia, utilizing perspective-taking techniques for empathy development. These projects demonstrate that practice-based research is not merely artistic production but a vital methodology for understanding how AI and XR platforms shape—and are shaped by—human narrative capacities. Full article
(This article belongs to the Special Issue Electronic Literature and Game Narratives)

14 pages, 2197 KB  
Article
Innovative Application of Chatbots in Clinical Nutrition Education: The E+DIEting_Lab Experience in University Students
by Iñaki Elío, Kilian Tutusaus, Imanol Eguren-García, Álvaro Lasarte-García, Arturo Ortega-Mansilla, Thomas A. Prola and Sandra Sumalla-Cano
Nutrients 2026, 18(2), 257; https://doi.org/10.3390/nu18020257 - 14 Jan 2026
Abstract
Background/Objectives: The growing integration of Artificial Intelligence (AI) and chatbots in health professional education offers innovative methods to enhance learning and clinical preparedness. This study aimed to evaluate the educational impact of the E+DIEting_Lab chatbot platform, and the perceptions of university students of Human Nutrition and Dietetics regarding its utility, usability, and design, when implemented in clinical nutrition training. Methods: The platform was piloted from December 2023 to April 2025, involving 475 students from multiple European universities. While all 475 students completed the initial survey, 305 finished the follow-up evaluation, representing a 36% attrition rate. Participants completed surveys before and after interacting with the chatbots, assessing prior experience, knowledge, skills, and attitudes. Data were analyzed using descriptive statistics and independent samples t-tests to compare pre- and post-intervention perceptions. Results: A total of 475 university students completed the initial survey and 305 the final evaluation. Most participants were female (75.4%), with representation from six languages and diverse institutions. Students reported clear perceived learning gains: 79.7% reported that their practical skills in clinical dietetics and communication had improved, 90% felt that new digital tools improved classroom practice, and 73.9% reported enhanced interpersonal skills. Self-rated competence in using chatbots as learning tools increased significantly, with mean knowledge scores rising from 2.32 to 2.66 and skills scores from 2.39 to 2.79 on a 0–5 Likert scale (p < 0.001 for both). Perceived effectiveness and usefulness of chatbots as self-learning tools remained positive but showed a small decline after use (effectiveness from 3.63 to 3.42; usefulness from 3.63 to 3.45), suggesting that hands-on experience refined, but did not diminish, students’ overall favorable views of the platform. Conclusions: The implementation and pilot evaluation of the E+DIEting_Lab self-learning virtual-patient chatbot platform demonstrate that structured digital simulation tools can significantly improve perceived clinical nutrition competences. These findings support chatbot adoption in dietetics curricula and inform future digital education innovations. Full article
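As a quick sanity check, the attrition figure follows directly from the counts reported above, and the pre/post comparison can be sketched with SciPy (the score lists are placeholders on the study's 0–5 scale, not its data):

```python
from scipy import stats

# Recompute the attrition rate from the reported counts:
initial, followup = 475, 305
print(f"attrition: {(initial - followup) / initial:.1%}")  # 35.8%, i.e. the ~36% reported

# Illustrative pre/post self-ratings of chatbot competence (0-5 scale):
pre = [2.0, 2.5, 2.0, 3.0, 2.5, 2.0, 2.5, 2.0]
post = [2.5, 3.0, 2.5, 3.5, 3.0, 2.5, 3.0, 2.5]
t, p = stats.ttest_ind(post, pre)  # independent-samples test, as the authors describe
print(f"t = {t:.2f}, p = {p:.3f}")
```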

26 pages, 4207 KB  
Article
Is a Chatbot More Effective? Investigating the Effect of Service Recovery Agents and Consumer Loss on Consumer Forgiveness
by Liu Fan, Shanshan Li, Can Wang and Xiaoping Zhang
J. Theor. Appl. Electron. Commer. Res. 2026, 21(1), 35; https://doi.org/10.3390/jtaer21010035 - 13 Jan 2026
Abstract
As chatbots are increasingly deployed to address service failures, understanding their role in facilitating consumer forgiveness has become essential. Several studies have compared consumers’ reactions to service recovery efforts conducted by a human versus a chatbot. Through three scenario-based experiments (total N = 1875) with Chinese participants, our study examines the interaction between service recovery agents (chatbot vs. human), types of consumer loss (utilitarian vs. symbolic), and service failure severity (low vs. high) in influencing consumer forgiveness. The results reveal that in cases of symbolic loss, consumers perceive humans—rather than chatbots—as more capable of providing emotional support during service recovery, thus promoting forgiveness more effectively. However, this discrepancy diminishes in the case of utilitarian loss. Our findings further suggest that the combined effect of service recovery agents and consumer loss on forgiveness is moderated by service failure severity. In the case of low-severity failures, recovery services provided by humans (vs. chatbots) are more effective in fostering forgiveness for consumers experiencing symbolic losses. However, for high-severity failures, regardless of the type of loss, consumers exhibit a higher level of forgiveness toward recovery services provided by humans. This research offers the following practical implications for managers dealing with service failures: strategic escalation to human agents is recommended for symbolic losses or high-severity failures, but chatbots represent a cost-efficient solution for utilitarian losses in low-severity scenarios. Full article
(This article belongs to the Topic Data Science and Intelligent Management)

19 pages, 917 KB  
Article
Leveraging Artificial Intelligence-Based Applications to Remove Disruptive Factors from Pharmaceutical Care: A Quantitative Study in Eastern Romania
by Ionela Daniela Ferțu, Alina Mihaela Elisei, Mariana Lupoae, Alexandra Burlacu, Claudia Simona Ștefan, Luminița Enache, Andrei Vlad Brădeanu, Loredana Sabina Pascu, Iulia Chiscop, Mădălina Nicoleta Matei, Aurel Nechita and Ancuța Iacob
Pharmacy 2026, 14(1), 7; https://doi.org/10.3390/pharmacy14010007 - 9 Jan 2026
Abstract
Artificial Intelligence (AI) has increasingly contributed to advancements in pharmaceutical practice, particularly by enhancing the pharmacist–patient relationship and improving medication adherence. This quantitative, descriptive, cross-sectional study investigated Eastern Romanian pharmacists’ perception of AI-based applications as effective optimization tools, correlating it with disruptive communication factors. An anonymous online questionnaire was distributed to community pharmacists, examining sociodemographic characteristics, awareness of disruptive factors, and the perceived usefulness of AI. The sample included 437 respondents: pharmacists (55.6%), mostly female (83.8%), and aged between 25 and 44 (52.6%). Data analysis involved descriptive statistics and independent t-tests. The statistical analysis revealed a significantly positive perception (p < 0.001) of AI’s impact on pharmacist–patient communication. Respondents viewed AI as a valuable tool for reducing medication errors and optimizing counseling time, though they maintained a strong emphasis on genuine human interaction. Significant correlations were found between disruptive factors—such as noise and high patient volume—and the quality of communication. Participants also expressed strong interest in applications such as automatic prescription scheduling and chatbots. The study concludes that a balanced implementation of AI technologies is necessary, one that proceeds in parallel with the continuous development of pharmacists’ communication skills. Future research should focus on validating AI’s impact on clinical outcomes and establishing clear ethical guidelines regarding the use of patient data. Full article
(This article belongs to the Special Issue AI Use in Pharmacy and Pharmacy Education)

18 pages, 1443 KB  
Review
Empathy by Design: Reframing the Empathy Gap Between AI and Humans in Mental Health Chatbots
by Alastair Howcroft and Holly Blake
Information 2025, 16(12), 1074; https://doi.org/10.3390/info16121074 - 4 Dec 2025
Abstract
Artificial intelligence (AI) chatbots are now embedded across therapeutic contexts, from the United Kingdom’s National Health Service (NHS) Talking Therapies to widely used platforms like ChatGPT. Whether welcomed or not, these systems are increasingly used for both patient care and everyday support, sometimes even replacing human contact. Their capacity to convey empathy strongly influences how people experience and benefit from them. However, current systems often create an “AI empathy gap”, where interactions feel impersonal and superficial compared to those with human practitioners. This paper, presented as a critical narrative review, cautiously challenges the prevailing narrative that empathy is a uniquely human skill that AI cannot replicate. We argue this belief can stem from an unfair comparison: evaluating generic AIs against an idealised human practitioner. We reframe capabilities seen as exclusively human, such as building bonds through long-term memory and personalisation, not as insurmountable barriers but as concrete design targets. We also discuss the critical architectural and privacy trade-offs between cloud and on-device (edge) solutions. Accordingly, we propose a conceptual framework to meet these targets. It integrates three key technologies: Retrieval-Augmented Generation (RAG) for long-term memory; feedback-driven adaptation for real-time emotional tuning; and lightweight adapter modules for personalised conversational styles. This framework provides a path toward systems that users perceive as genuinely empathic, rather than ones that merely mimic supportive language. While AI cannot experience emotional empathy, it can model cognitive empathy and simulate affective and compassionate responses in coordinated ways at the behavioural level. However, because these systems lack conscious, autonomous ‘helping’ intentions, these design advancements must be considered alongside careful ethical and regulatory safeguards. Full article
(This article belongs to the Special Issue Internet of Things (IoT) and Cloud/Edge Computing)
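Of the three technologies the proposed framework integrates, RAG-based long-term memory is the most readily sketched: embed notes from past sessions and recall the most relevant ones when composing a reply. A minimal illustration follows (the encoder choice and memory entries are assumptions, not the authors' design):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

# Illustrative long-term memory: short notes distilled from earlier sessions.
memories = [
    "User finds mornings hardest and prefers brief check-ins.",
    "User mentioned an upcoming job interview causing anxiety.",
    "User responded well to breathing exercises last week.",
]
memory_vecs = encoder.encode(memories, normalize_embeddings=True)

def recall(user_message: str, k: int = 2) -> list[str]:
    """Return the k memories most relevant to the new message."""
    q = encoder.encode([user_message], normalize_embeddings=True)[0]
    scores = memory_vecs @ q  # cosine similarity (embeddings are normalized)
    return [memories[i] for i in np.argsort(scores)[::-1][:k]]

# The recalled notes would be prepended to the chatbot's prompt:
print(recall("I'm so nervous about tomorrow's interview"))
```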

41 pages, 3943 KB  
Article
When AI Chatbots Ask for Donations: The Construal Level Contingency of AI Persuasion Effectiveness in Charity Human–Chatbot Interaction
by Jin Sun and Jia Si
J. Theor. Appl. Electron. Commer. Res. 2025, 20(4), 341; https://doi.org/10.3390/jtaer20040341 - 3 Dec 2025
Abstract
As AI chatbots are increasingly used in digital fundraising, it remains unclear which communication strategies are more effective in enhancing consumer trust and donation behavior. Drawing on construal level theory and adopting a human-AI interaction perspective, this research examines how message framing in AI-mediated persuasive communication shapes trust and donation willingness. Across four studies, we find that when AI chatbots employ high-level construal (abstract) message framing, consumers perceive the information as less credible compared to when the same message is delivered by a human agent. This reduced message credibility weakens trust in the charitable organization through a trust transfer mechanism, ultimately lowering donation intention. Conversely, low-level construal (concrete) framing enhances both trust and donation willingness. Moreover, the negative impact of abstract message framing by AI chatbots is significantly attenuated when the chatbot features anthropomorphic visual cues, which increase perceived credibility and restore trust and donation willingness. These findings reveal potential risks in deploying AI chatbots for interactive fundraising marketing and offer practical insights for nonprofit organizations seeking to leverage AI in donor engagement. Full article

22 pages, 648 KB  
Article
Unpacking AI Chatbot Dependency: A Dual-Path Model of Cognitive and Affective Mechanisms
by Na Zhai, Xiaomei Ma and Xiaojun Ding
Information 2025, 16(12), 1025; https://doi.org/10.3390/info16121025 - 24 Nov 2025
Abstract
With AI chatbots becoming increasingly embedded in everyday life, growing concerns have emerged regarding users’ psychological dependency on these systems. While previous studies have mainly addressed utilitarian drivers, less attention has been paid to the cognitive and affective mechanisms driving chatbot dependency. Drawing upon Uses and Gratifications Theory, Compensatory Internet Use Theory, and Attachment Theory, this study proposes a dual-path model that investigates how instrumental motivations (e.g., information-seeking, entertainment, efficiency) and affective motivations (e.g., companionship, loneliness, anxiety) influence chatbot dependency through two mediating mechanisms: cognitive reliance and emotional attachment. Using survey data collected from 354 participants, the model was tested through structural equation modeling (SEM). The results indicate that information-seeking and efficiency significantly predict cognitive reliance, which subsequently enhances chatbot dependency. In contrast, entertainment does not exhibit a significant influence. Furthermore, affective motivations such as companionship, loneliness, and anxiety are indirectly linked to dependency through emotional attachment, with loneliness demonstrating the strongest indirect effect. These findings underscore the dual influence of functional cognition and emotional vulnerability in fostering chatbot dependency, emphasizing the importance of emotionally sensitive and ethically responsible AI design. Full article
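The dual-path structure described above maps naturally onto lavaan-style SEM syntax; a minimal sketch with the semopy library follows (the indicator and variable names are placeholders, not the authors' survey items, and the motivations are treated as observed columns for brevity):

```python
import pandas as pd
from semopy import Model

DUAL_PATH_DESC = """
# measurement part (three placeholder indicators per latent construct)
cognitive_reliance =~ cr1 + cr2 + cr3
emotional_attachment =~ ea1 + ea2 + ea3
dependency =~ dp1 + dp2 + dp3
# structural part: the two mediating paths
cognitive_reliance ~ info_seeking + efficiency + entertainment
emotional_attachment ~ companionship + loneliness + anxiety
dependency ~ cognitive_reliance + emotional_attachment
"""

def fit_dual_path(responses: pd.DataFrame) -> pd.DataFrame:
    """Fit the model to survey responses and return parameter estimates."""
    model = Model(DUAL_PATH_DESC)
    model.fit(responses)     # maximum-likelihood estimation by default
    return model.inspect()   # estimates, standard errors, p-values per path
```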

19 pages, 1772 KB  
Article
STEM Undergraduates’ Perceptions of AI Chatbots: A Cross-Sectional Descriptive Survey
by Kamalanathan Kajan, Wenyuan Shi and Dariusz Wanatowski
AI Educ. 2025, 1(1), 4; https://doi.org/10.3390/aieduc1010004 - 18 Nov 2025
Abstract
We surveyed 297 STEM undergraduates at a single English-medium Sino–UK joint institution to document perceptions of AI chatbots for learning. Students reported high willingness to adopt AI chatbots (78%; 95% CI: 73.1–82.4) alongside concerns about over-reliance (67%; 95% CI: 61.4–72.1), content quality (52%; 95% CI: 46.2–57.5), and reduced human interaction (42%; 95% CI: 36.5–47.8). Over half (52%; 95% CI: 46.3–57.7) requested language/terminology support features, whereas only 16.8% reported language-related barriers. We attempted exploratory factor analysis and k-means clustering, but neither met the inclusion criteria; therefore, we report item-level frequencies only. The findings are descriptive and not generalisable (53% first-year, 80% male convenience sample). These patterns generate testable hypotheses about verification scaffolds, language support utility, and human–AI balance that warrant investigation through controlled studies. Full article
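The intervals quoted above appear consistent with Wilson score intervals; for instance, the adoption-willingness CI can be reproduced with statsmodels, assuming the 78% corresponds to about 232 of the 297 respondents:

```python
from statsmodels.stats.proportion import proportion_confint

# 78% willingness to adopt ~ 232 of 297 students:
lo, hi = proportion_confint(count=232, nobs=297, alpha=0.05, method="wilson")
print(f"95% CI: {lo:.1%} - {hi:.1%}")  # ~73.1% - 82.4%, matching the figures above
```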
