Open Access Article

Human Annotated Dialogues Dataset for Natural Conversational Agents

1 AIT Austrian Institute of Technology, 2700 Wiener Neustadt, Austria
2 CentraleSupélec, Université de Lorraine, CNRS, LORIA, F-57000 Metz, France
3 Holzinger Group, HCI-KDD, Institute for Medical Informatics/Statistics, Medical University Graz, 8036 Graz, Austria
4 FH Joanneum Gesellschaft mbH, 8020 Graz, Austria
5 Université de Lorraine, CNRS, LIEC, F-57000 Metz, France
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2020, 10(3), 762; https://doi.org/10.3390/app10030762
Received: 16 December 2019 / Revised: 8 January 2020 / Accepted: 16 January 2020 / Published: 21 January 2020
(This article belongs to the Section Computing and Artificial Intelligence)
Conversational agents are gaining huge popularity in industrial applications such as digital assistants, chatbots, and particularly systems for natural language understanding (NLU). However, a major drawback is the lack of a common metric for evaluating a conversational agent's replies against human judgement. In this paper, we develop a benchmark dataset with human annotations and diverse replies that can be used to develop such a metric for conversational agents. The paper introduces HUMOD, a high-quality human-annotated movie dialogue dataset derived from the Cornell movie dialogues dataset. The new dataset comprises 28,500 human responses collected for 9500 multi-turn dialogue history-reply pairs. The human responses include: (i) ratings of each dialogue reply for relevance to the dialogue history; and (ii) unique dialogue replies contributed by users for each dialogue history. These unique replies enable researchers to evaluate their models against six distinct human responses for each given history. We also present a detailed analysis of how the dialogues are structured, and compare human perception of dialogue scores with existing models.
Keywords: conversational agents; dialogue systems; chatbots
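To illustrate how the six unique human replies per dialogue history could be used in evaluation, here is a minimal sketch of multi-reference scoring. The record layout and field names (`history`, `human_replies`) are illustrative assumptions, not the actual HUMOD schema, and the unigram-F1 metric is a stand-in for whatever metric a researcher develops against the dataset:

```python
# Hedged sketch: scoring a model reply against several human reference
# replies, in the spirit of HUMOD's six unique replies per dialogue history.
# Field names and the metric are assumptions for illustration only.

def unigram_f1(candidate, reference):
    """Token-overlap F1 between two strings (simple whitespace tokenization)."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    overlap = sum(min(cand.count(t), ref.count(t)) for t in set(cand))
    if overlap == 0:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def multi_reference_score(candidate, references):
    """Score a candidate against every human reference and keep the best match."""
    return max(unigram_f1(candidate, r) for r in references)

# One hypothetical record: a dialogue history plus six unique human replies.
record = {
    "history": ["Where are you going?", "To the station."],
    "human_replies": [
        "Do you want a ride?",
        "I can drop you off.",
        "Which station?",
        "Take the bus, it's faster.",
        "It's a long walk from here.",
        "Say hi to your sister for me.",
    ],
}

score = multi_reference_score("Which station is it?", record["human_replies"])
```

Taking the maximum over the references reflects the intuition that a reply is good if it resembles any one of the diverse human replies, not their average.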
MDPI and ACS Style

Merdivan, E.; Singh, D.; Hanke, S.; Kropf, J.; Holzinger, A.; Geist, M. Human Annotated Dialogues Dataset for Natural Conversational Agents. Appl. Sci. 2020, 10, 762.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
