Search Results (9)

Search Parameters:
Keywords = multi-turn chatbot

25 pages, 2538 KB  
Article
Fic2Bot: A Scalable Framework for Persona-Driven Chatbot Generation from Fiction
by Sua Kang, Chaelim Lee, Subin Jung and Minsu Lee
Electronics 2025, 14(19), 3859; https://doi.org/10.3390/electronics14193859 - 29 Sep 2025
Viewed by 526
Abstract
This paper presents Fic2Bot, an end-to-end framework that automatically transforms raw novel text into in-character chatbots by combining scene-level retrieval with persona profiling. Unlike conventional RAG-based systems that emphasize factual accuracy but neglect stylistic coherence, Fic2Bot ensures both factual grounding and consistent persona expression without any manual intervention. The framework integrates (1) Major Entity Identification (MEI) for robust coreference resolution, (2) scene-structured retrieval for precise contextual grounding, and (3) stylistic and sentiment profiling to capture linguistic and emotional traits of each character. Experiments conducted on novels from diverse genres show that Fic2Bot achieves robust entity resolution, more relevant retrieval, highly accurate speaker attribution, and stronger persona consistency in multi-turn dialogues. These results highlight Fic2Bot as a scalable and domain-agnostic framework for persona-driven chatbot generation, with potential applications in interactive roleplaying, language and literary studies, and entertainment. Full article
(This article belongs to the Special Issue Feature Papers in Artificial Intelligence)
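As a rough illustration of the kind of pipeline the abstract describes, the sketch below retrieves scene-level chunks for a query and assembles a persona-conditioned prompt. It is not the authors' implementation: the TF-IDF retriever, the persona fields, and all names are illustrative stand-ins for Fic2Bot's scene-structured retrieval and persona profiling.

```python
# Minimal sketch of persona-conditioned, scene-level retrieval for an
# in-character chatbot. TF-IDF stands in for the paper's retriever;
# all names and example data are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Scene-level chunks extracted from the novel (one string per scene).
scenes = [
    "Elizabeth laughed at Darcy's pride during the ball at Netherfield.",
    "Darcy wrote a long letter explaining his conduct toward Wickham.",
    "Elizabeth visited Pemberley and saw Darcy in a gentler light.",
]

# A persona profile distilled offline (style and sentiment traits).
persona = {
    "name": "Elizabeth Bennet",
    "style": "witty, ironic, formal Regency-era English",
    "sentiment": "guarded warmth, quick to tease",
}

vectorizer = TfidfVectorizer().fit(scenes)
scene_vectors = vectorizer.transform(scenes)

def build_prompt(user_query: str, top_k: int = 2) -> str:
    """Retrieve the most relevant scenes and assemble a persona prompt."""
    query_vec = vectorizer.transform([user_query])
    scores = cosine_similarity(query_vec, scene_vectors)[0]
    top_scenes = [scenes[i] for i in scores.argsort()[::-1][:top_k]]
    context = "\n".join(f"- {s}" for s in top_scenes)
    return (
        f"You are {persona['name']}. Speak in a {persona['style']} voice "
        f"with {persona['sentiment']}.\n"
        f"Ground your reply in these scenes:\n{context}\n"
        f"User: {user_query}\n{persona['name']}:"
    )

print(build_prompt("What do you think of Mr. Darcy?"))
```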

42 pages, 1748 KB  
Article
Memory-Augmented Large Language Model for Enhanced Chatbot Services in University Learning Management Systems
by Jaeseung Lee and Jehyeok Rew
Appl. Sci. 2025, 15(17), 9775; https://doi.org/10.3390/app15179775 - 5 Sep 2025
Viewed by 1577
Abstract
A learning management system (LMS) plays a crucial role in supporting students’ educational activities by providing centralized platforms for course delivery, communication, and student support. Recently, many universities have integrated chatbots into their LMS to assist students with various inquiries and tasks. However, existing chatbots often necessitate human intervention to manually respond to complex queries, resulting in limited scalability and efficiency. In this paper, we present a memory-augmented large language model (LLM) framework that enhances the reasoning and contextual continuity of LMS-based chatbots. The proposed framework first embeds user queries and retrieves semantically relevant entries from various LMS resources, including instructional documents and academic frequently asked questions. Retrieved entries are then filtered through a two-stage confidence filtering process that combines similarity thresholds and LLM-based semantic validation. Validated information, along with the user query, is passed to the LLM for response generation. To maintain coherence in multi-turn interactions, the chatbot incorporates short-term, long-term, and temporal event memories, which track conversational flow and personalize responses based on user-specific information, such as recent activity history and individual preferences. To evaluate response quality, we employed a multi-layered evaluation strategy combining BERTScore-based quantitative measurement, an LLM-as-a-Judge approach for automated semantic assessment, and a user study under multi-turn scenarios. The evaluation results consistently confirm that the proposed framework improves the consistency, clarity, and usefulness of the responses. These findings highlight the potential of memory-augmented LLMs for scalable and intelligent learning support within university environments. Full article
(This article belongs to the Special Issue Applications of Digital Technology and AI in Educational Settings)
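A minimal sketch of the two-stage confidence filtering idea described above: a similarity threshold followed by LLM-based semantic validation. The embedder, the validator stub, the threshold value, and the FAQ entries are all assumptions for illustration, not components of the paper's system.

```python
# Sketch of a two-stage confidence filter: (1) keep retrieved entries above a
# cosine-similarity threshold, (2) ask an LLM to validate that each survivor
# actually answers the query. Both stages below are toy stand-ins.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy bag-of-words embedding; a real system would use a sentence encoder."""
    v = np.zeros(256)
    for tok in text.lower().split():
        v[hash(tok) % 256] += 1.0
    return v / (np.linalg.norm(v) or 1.0)

def llm_validates(query: str, entry: str) -> bool:
    """Placeholder for LLM-based semantic validation (stage 2)."""
    return any(tok in entry.lower() for tok in query.lower().split())

def filter_entries(query: str, entries: list[str], sim_threshold: float = 0.3) -> list[str]:
    q = embed(query)
    survivors = []
    for entry in entries:
        similarity = float(np.dot(q, embed(entry)))      # stage 1: similarity gate
        if similarity >= sim_threshold and llm_validates(query, entry):  # stage 2
            survivors.append(entry)
    return survivors

faq = [
    "Assignments are submitted through the LMS before 23:59 on the due date.",
    "The library is open from 9 a.m. to 10 p.m. on weekdays.",
]
print(filter_entries("When are assignments submitted through the LMS portal?", faq))
```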

17 pages, 717 KB  
Article
A Personalized Multi-Turn Generation-Based Chatbot with Various-Persona-Distribution Data
by Shihao Zhu, Tinghuai Ma, Huan Rong and Najla Al-Nabhan
Appl. Sci. 2023, 13(5), 3122; https://doi.org/10.3390/app13053122 - 28 Feb 2023
Cited by 4 | Viewed by 5604
Abstract
Existing persona-based dialogue generation models focus on the semantic consistency between personas and responses. However, various influential factors can cause persona inconsistency, such as the speaking style in the context. Existing models handle speaking styles inflexibly on datasets with varying persona distributions, resulting in persona style inconsistency. In this work, we propose a dialogue generation model with a persona selection classifier to address this complex inconsistency problem. The model generates responses in two steps: original response generation and response rewriting. For training, we employ two auxiliary tasks: (1) a persona selection task to fuse the adapted persona into the original responses; (2) consistency inference to remove inconsistent persona information from the final responses. In our model, the adapted personas are predicted by an NLI-based classifier. We evaluate our model on persona dialogue datasets with different persona distributions, i.e., the persona-dense PersonaChat dataset and the persona-sparse PersonalDialog dataset. The experimental results show that our model outperforms strong baseline models in response quality, persona consistency, and persona distribution consistency. Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Applications)
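The NLI-based persona selection step could look roughly like the sketch below, which picks the persona sentence scoring highest against the dialogue context. The scorer here is a word-overlap stand-in for a real pretrained NLI classifier, and the threshold and examples are invented for illustration.

```python
# Sketch of NLI-based persona selection: choose the persona sentence whose
# entailment score against the dialogue context is highest, then feed it to
# the response (re)writer. The scorer is a stand-in for a pretrained NLI model.
def nli_entailment_score(premise: str, hypothesis: str) -> float:
    """Stand-in scorer: fraction of hypothesis words supported by the premise.
    A real system would return the entailment probability of an NLI model."""
    premise_words = set(premise.lower().split())
    hyp_words = hypothesis.lower().split()
    return sum(w in premise_words for w in hyp_words) / max(len(hyp_words), 1)

def select_persona(context: str, personas: list[str], threshold: float = 0.2):
    """Return the best-matching persona sentence, or None if nothing fits."""
    scored = [(nli_entailment_score(context, p), p) for p in personas]
    best_score, best_persona = max(scored)
    return best_persona if best_score >= threshold else None

context = "I spent all weekend hiking in the mountains with my dog."
personas = [
    "i love outdoor activities and my golden retriever",
    "i work as an accountant in a big city office",
]
print(select_persona(context, personas))
```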

19 pages, 4892 KB  
Article
Development of an Empathy-Centric Counseling Chatbot System Capable of Sentimental Dialogue Analysis
by Amy J. C. Trappey, Aislyn P. C. Lin, Kevin Y. K. Hsu, Charles V. Trappey and Kevin L. K. Tu
Processes 2022, 10(5), 930; https://doi.org/10.3390/pr10050930 - 8 May 2022
Cited by 32 | Viewed by 8962
Abstract
College students encounter various types of stress at school due to schoolwork, personal relationships, health issues, and future career concerns. Some students are vulnerable to failure and are inexperienced with, or fearful of, dealing with setbacks. When these negative emotions gradually accumulate without resolution, they can cause long-term negative effects on students’ physical and mental health. Potential health problems include depression, anxiety, and eating disorders. Universities commonly offer counseling services; however, demand often exceeds counseling capacity due to the limited number of counsellors/psychologists. Thus, students may not receive immediate counseling or treatment. If students are not treated, the repercussions may lead to severe abnormal behavior and even suicide. In this study, combining an immersive virtual reality (VR) technique with a psychological knowledge base, we developed a VR empathy-centric counseling chatbot (VRECC) that can support troubled students when counsellors cannot provide immediate help. Through multi-turn (verbal or text) conversations, the system demonstrates empathy and gives therapist-like responses to users. During the study, more than 120 students were required to complete a questionnaire, and 34 subjects with an above-median stress level were randomly drawn for the VRECC experiment. We observed decreasing average stress-level and psychological-sensitivity scores among subjects after the experiment. Although the system did not yield improvement in life-impact scores (e.g., behavioral and physical impacts), the significant reductions in stress level and psychological sensitivity give us a positive outlook for continuing to integrate VR, AI-based sentiment-aware natural language processing, and counseling chatbots in further VRECC research aimed at helping students improve their psychological well-being and quality of life at school. Full article
(This article belongs to the Special Issue Recent Advances in Machine Learning and Applications)

16 pages, 591 KB  
Article
An Empirical Study on Deep Neural Network Models for Chinese Dialogue Generation
by Zhe Li, Mieradilijiang Maimaiti, Jiabao Sheng, Zunwang Ke, Wushour Silamu, Qinyong Wang and Xiuhong Li
Symmetry 2020, 12(11), 1756; https://doi.org/10.3390/sym12111756 - 23 Oct 2020
Cited by 4 | Viewed by 3026
Abstract
The task of dialogue generation has attracted increasing attention due to its diverse downstream applications, such as question-answering systems and chatbots. Recently, deep neural network (DNN)-based dialogue generation models have achieved superior performance over conventional models that rely on statistical machine learning methods. However, although an enormous number of state-of-the-art DNN-based models have been proposed, a detailed empirical comparative analysis of them on open Chinese corpora is still lacking. As a result, researchers and engineers may find it hard to form an intuitive understanding of the current research progress. To address this gap, we conducted an empirical study of state-of-the-art DNN-based dialogue generation models on various Chinese corpora. Specifically, extensive experiments were performed on several well-known single-turn and multi-turn dialogue corpora, including KdConv, Weibo, and Douban, to evaluate a wide range of dialogue generation models based on the symmetrical architectures of Seq2Seq, RNNSearch, the Transformer, generative adversarial networks, and reinforcement learning, respectively. Moreover, we paid special attention to the effect of prevalent pre-trained models on the quality of dialogue generation. Performance was evaluated with four widely used metrics in this area: BLEU, pseudo, distinct, and ROUGE. Finally, we report a case study showing example responses generated by each of these models. Full article
(This article belongs to the Section Computer)
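Of the metrics listed, distinct-n is simple enough to show concretely: it is commonly computed as the ratio of unique n-grams to total n-grams across generated responses. The snippet below implements that generic definition, which may differ in detail from the variant used in the study.

```python
# distinct-n as commonly defined: unique n-grams divided by total n-grams
# over a set of generated responses.
def distinct_n(responses: list[str], n: int) -> float:
    total, unique = 0, set()
    for response in responses:
        tokens = response.split()
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total else 0.0

responses = ["i do not know", "i do not think so", "that sounds great"]
print(round(distinct_n(responses, 1), 3), round(distinct_n(responses, 2), 3))
```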

11 pages, 648 KB  
Article
Memory-Based Deep Neural Attention (mDNA) for Cognitive Multi-Turn Response Retrieval in Task-Oriented Chatbots
by Jenhui Chen, Obinna Agbodike and Lei Wang
Appl. Sci. 2020, 10(17), 5819; https://doi.org/10.3390/app10175819 - 22 Aug 2020
Cited by 11 | Viewed by 4394
Abstract
One of the important criteria used in judging the performance of a chatbot is its ability to provide meaningful and informative responses that correspond with the context of a user’s utterance. Nowadays, the number of enterprises adopting and relying on task-oriented chatbots for profit is increasing. Dialogue errors and inappropriate responses to user queries by chatbots can have huge cost implications. To achieve high performance, recent AI chatbot models increasingly adopt the Transformer’s positional encoding and attention-based architecture. While the Transformer performs optimally in sequential generative chatbot models, recent studies have pointed out the occurrence of logical inconsistency and fuzzy error problems when the Transformer technique is adopted in retrieval-based chatbot models. Our investigation finds that these errors are caused by information loss. Therefore, in this paper, we address this problem by augmenting the Transformer-based retrieval chatbot architecture with a memory-based deep neural attention (mDNA) model, using an approach similar to late data fusion. The mDNA is a simple encoder-decoder neural architecture that comprises a bidirectional long short-term memory (Bi-LSTM), an attention mechanism, and a memory for information retention in the encoder. In our experiments, we trained the model extensively on a large Ubuntu dialogue corpus, and recall evaluation scores show that the mDNA augmentation approach slightly outperforms selected state-of-the-art retrieval chatbot models. Full article
(This article belongs to the Special Issue Machine Learning and Natural Language Processing)
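The building blocks the abstract names (a Bi-LSTM encoder, an attention mechanism, and a memory for information retention) can be sketched as follows in PyTorch. Dimensions, the memory read, and the fusion layer are illustrative guesses, not the mDNA architecture itself.

```python
# Minimal PyTorch sketch of a Bi-LSTM encoder with additive attention and a
# small learned memory the attended context is fused with. Illustrative only.
import torch
import torch.nn as nn

class BiLSTMAttentionMemoryEncoder(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=64, memory_slots=8):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attn_score = nn.Linear(2 * hidden_dim, 1)
        # Learned memory slots meant to retain information across the encoder.
        self.memory = nn.Parameter(torch.randn(memory_slots, 2 * hidden_dim))
        self.fuse = nn.Linear(4 * hidden_dim, 2 * hidden_dim)

    def forward(self, token_ids):                                      # (batch, seq_len)
        h, _ = self.bilstm(self.embedding(token_ids))                  # (batch, seq_len, 2H)
        weights = torch.softmax(self.attn_score(h), dim=1)             # (batch, seq_len, 1)
        context = (weights * h).sum(dim=1)                             # (batch, 2H)
        # Attend over memory slots with the context vector as the query.
        mem_weights = torch.softmax(context @ self.memory.T, dim=-1)   # (batch, slots)
        mem_read = mem_weights @ self.memory                           # (batch, 2H)
        return torch.tanh(self.fuse(torch.cat([context, mem_read], dim=-1)))

encoder = BiLSTMAttentionMemoryEncoder()
utterance = torch.randint(0, 1000, (2, 12))   # two utterances of 12 token ids
print(encoder(utterance).shape)               # torch.Size([2, 128])
```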

11 pages, 643 KB  
Article
Knowledge-Grounded Chatbot Based on Dual Wasserstein Generative Adversarial Networks with Effective Attention Mechanisms
by Sihyung Kim, Oh-Woog Kwon and Harksoo Kim
Appl. Sci. 2020, 10(9), 3335; https://doi.org/10.3390/app10093335 - 11 May 2020
Cited by 22 | Viewed by 4673
Abstract
A conversation is based on internal knowledge that the participants already have or external knowledge that they have gained during the conversation. A chatbot that communicates with humans by using its internal and external knowledge is called a knowledge-grounded chatbot. Although previous studies on knowledge-grounded chatbots have achieved reasonable performance, they may still generate unsuitable responses that are not associated with the given knowledge. To address this problem, we propose a knowledge-grounded chatbot model that effectively reflects the dialogue context and the given knowledge by using well-designed attention mechanisms. The proposed model uses three kinds of attention: query-context attention, query-knowledge attention, and context-knowledge attention. In our experiments with the Wizard-of-Wikipedia dataset, the proposed model showed better performance than the state-of-the-art model on a variety of measures. Full article
(This article belongs to the Special Issue Machine Learning and Natural Language Processing)
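A generic way to realize the three attentions named above is ordinary scaled dot-product attention applied to each pairing; the sketch below does exactly that. How the actual model parameterizes the attentions and combines their outputs is not given in the abstract, so the final concatenation is purely illustrative.

```python
# Scaled dot-product attention applied in the three pairings the abstract
# names: query-context, query-knowledge, and context-knowledge.
import numpy as np

def attend(queries: np.ndarray, keys: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention; the keys double as the values here."""
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ keys

d = 16
rng = np.random.default_rng(0)
query = rng.standard_normal((1, d))       # encoding of the current user query
context = rng.standard_normal((5, d))     # encodings of previous dialogue turns
knowledge = rng.standard_normal((7, d))   # encodings of knowledge sentences

query_context = attend(query, context)          # what the query needs from the dialogue
query_knowledge = attend(query, knowledge)      # what the query needs from the knowledge
context_knowledge = attend(context, knowledge)  # how each turn relates to the knowledge

fused = np.concatenate(
    [query_context, query_knowledge, context_knowledge.mean(axis=0, keepdims=True)], axis=-1
)
print(fused.shape)  # (1, 48)
```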

16 pages, 1415 KB  
Article
Human Annotated Dialogues Dataset for Natural Conversational Agents
by Erinc Merdivan, Deepika Singh, Sten Hanke, Johannes Kropf, Andreas Holzinger and Matthieu Geist
Appl. Sci. 2020, 10(3), 762; https://doi.org/10.3390/app10030762 - 21 Jan 2020
Cited by 24 | Viewed by 36924
Abstract
Conversational agents are gaining huge popularity in industrial applications such as digital assistants, chatbots, and particularly systems for natural language understanding (NLU). However, a major drawback is the lack of a common metric for evaluating replies against human judgement for conversational agents. In this paper, we develop a benchmark dataset with human annotations and diverse replies that can be used to develop such a metric for conversational agents. The paper introduces a high-quality human-annotated movie dialogue dataset, HUMOD, developed from the Cornell movie dialogues dataset. The new dataset comprises 28,500 human responses from 9500 multi-turn dialogue history-reply pairs. Human responses include: (i) ratings of the dialogue reply in relation to the dialogue history; and (ii) unique dialogue replies for each dialogue history from the users. Such unique dialogue replies enable researchers to evaluate their models against six unique human responses for each given history. A detailed analysis of how the dialogues are structured, and of human perception of dialogue scores in comparison with existing models, is also presented. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
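One plausible way to represent a record in a dataset of this shape (multi-turn history, candidate reply, relevance ratings, and alternative human replies) is sketched below; the field names are hypothetical and do not reflect the released HUMOD schema.

```python
# Sketch of one HUMOD-style record: a multi-turn history, the candidate reply,
# human relevance ratings, and alternative human-written replies.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class DialogueRecord:
    history: list[str]                       # previous turns, oldest first
    reply: str                               # candidate reply being judged
    relevance_ratings: list[int]             # human ratings of reply vs. history
    human_replies: list[str] = field(default_factory=list)  # alternative replies

    def mean_relevance(self) -> float:
        return mean(self.relevance_ratings)

record = DialogueRecord(
    history=["Where have you been?", "I told you, working late."],
    reply="You always say that.",
    relevance_ratings=[4, 5, 4],
    human_replies=["Working late again?", "I don't believe you."],
)
print(record.mean_relevance())   # 4.333...
```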

8 pages, 331 KB  
Article
Multi-Turn Chatbot Based on Query-Context Attentions and Dual Wasserstein Generative Adversarial Networks
by Jintae Kim, Shinhyeok Oh, Oh-Woog Kwon and Harksoo Kim
Appl. Sci. 2019, 9(18), 3908; https://doi.org/10.3390/app9183908 - 18 Sep 2019
Cited by 12 | Viewed by 6611
Abstract
To generate proper responses to user queries, multi-turn chatbot models should selectively consider dialogue histories. However, previous chatbot models have simply concatenated or averaged vector representations of all previous utterances without considering contextual importance. To mitigate this problem, we propose a multi-turn chatbot model in which previous utterances participate in response generation using different weights. The proposed model calculates the contextual importance of previous utterances by using an attention mechanism. In addition, we propose a training method that uses two types of Wasserstein generative adversarial networks to improve the quality of responses. In experiments with the DailyDialog dataset, the proposed model outperformed the previous state-of-the-art models based on various performance measures. Full article
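The core idea, replacing a uniform average of previous utterances with attention weights computed against the current query, can be sketched in a few lines; the encodings below are random placeholders for a real encoder's output, and the scoring function is generic dot-product attention rather than the paper's exact formulation.

```python
# Numpy sketch: weight previous-utterance vectors by their attention score
# against the current query instead of simply averaging them.
import numpy as np

rng = np.random.default_rng(42)
d = 32
history = rng.standard_normal((6, d))   # encodings of six previous utterances
query = rng.standard_normal(d)          # encoding of the current user query

def attention_pool(query_vec: np.ndarray, utterances: np.ndarray) -> np.ndarray:
    """Softmax-weighted sum of utterance vectors, scored against the query."""
    scores = utterances @ query_vec / np.sqrt(utterances.shape[-1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ utterances

uniform_average = history.mean(axis=0)              # what earlier models did
attended_context = attention_pool(query, history)   # contextual-importance weighting
print(uniform_average.shape, attended_context.shape)  # (32,) (32,)
```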
