Systematic Review

Designing a Chatbot for Contemporary Education: A Systematic Literature Review

by Dimitrios Ramandanis and Stelios Xinogalos *
Department of Applied Informatics, University of Macedonia, GR-54636 Thessaloniki, Greece
* Author to whom correspondence should be addressed.
Information 2023, 14(9), 503; https://doi.org/10.3390/info14090503
Submission received: 20 July 2023 / Revised: 2 September 2023 / Accepted: 5 September 2023 / Published: 13 September 2023
(This article belongs to the Section Information Applications)

Abstract

A chatbot is a technological tool that can simulate a discussion between a human and a software application. This technology has been developing rapidly over recent years, and its usage is expanding in many sectors, especially in education. For this reason, a systematic literature review was conducted using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework to analyze the developments and evolution of this technology in the educational sector during the last 5 years. More precisely, the development methods, practices and guidelines for building a conversational tutor are examined. The results of this study summarize the gathered knowledge in order to provide useful information both to educators that would like to develop a conversational assistant for their course and to developers that would like to build chatbot systems in the educational domain.

1. Introduction

A chatbot, or conversational agent (CA), is software that is programmed to simulate human interaction via voice or text messages and, sometimes, both. This technology has been used in many fields, including educational environments [1], in which the research interest has been increasing steadily, especially in the last five years [2,3,4,5,6].
More specifically, conversational bots can be quite advantageous for educators when used in a classroom environment. First of all, they could take on teachers’ responsibilities concerning tasks other than teaching, such as answering students’ Frequently Asked Questions (FAQs) and handling academic material [2,3]. Additionally, another possible function of CAs is that of an educator’s assessment tool, used to gauge how well students understand the objectives of a course and whether any progress is being made [3]. Even though a conversational assistant might be a valuable asset for a teacher, the process of developing one could be difficult, especially for an educator lacking prior programming knowledge. Researchers have previously implied that the current frameworks ought to be improved so that teachers’ use of this technology may be facilitated [3]. Moreover, tools and guidelines about the methods of creating a chatbot are necessary in order for it to be easier for a professor to develop one on their own [7]. To that end, this paper intends to systematically review the literature on the applications of educational chatbots and propose frameworks, steps, tools and recommendations for designing educational chatbots, with the aim of providing guidelines and directions for both experienced and inexperienced educators.
To fulfill this goal, the related articles from the last five years (2018–2023) are analyzed so as to synthesize a comprehensive review of the research in the field. More precisely, this analysis aims to investigate the following research question:
What are the steps for designing an educational chatbot for contemporary education?
The rest of the article is structured as follows. In Section 2, the related work on this subject is analyzed, followed by a presentation of the methodology of the study in Section 3. The results are presented in Section 4 and discussed in Section 5. The limitations of the study are presented in Section 6, and the final conclusions and plans for future research are presented in Section 7.

2. Related Work

The subject of educational chatbots (ECAs) presents increasing research interest, with many different studies and reviews about various aspects of this subject published. In this work, six systematic literature reviews (SLRs) were found, each with a unique point of view, summarized in Table 1.
In particular, Hwang and Chang [4] conducted a study to explore trends in the field of ECAs. More specifically, they researched the educational domains where ECAs are used and the teaching methods that are utilized to teach students. Furthermore, they analyzed the 29 articles reviewed in order to present the design and the methods used for data analysis in the corresponding studies. Finally, they recorded the nationalities of the first author of each paper, the journals that these papers were published in and the productivity rate of each author based on a specific scale.
Kuhail et al. [6] analyzed 36 papers to explore the various fields relevant to ECAs. Firstly, they researched the educational domains where ECAs are used and the kind of platforms that are usually used to integrate them. Additionally, they explored the possible roles that a tutoring chatbot can take on and the communication approaches they can adopt when communicating with a student. Finally, they presented the important design principles for an ECA and the challenges and problems that exist when ECAs are used in a classroom environment, while also providing supporting arguments for using an ECA for teaching students.
In a different research study, Okonkwo and Ade-Ibijola [3] presented the ways that ECAs are used in the educational sector. Moreover, they analyzed the benefits of using ECAs and the challenges of utilizing them in the teaching process. Last but not least, they mention future educational fields where the use of ECAs could be beneficial.
Pérez et al. [2] used the PRISMA framework to study the field of ECAs. Firstly, they describe the main categories of ECAs and the teaching environments they are used in. Furthermore, the impact on students’ learning efficiency and the enhancement of teaching when using chatbots is analyzed. In addition, the kinds of technologies that are used to develop an ECA are recorded, and the impact of each one on students’ learning efficiency is mentioned. Finally, they explore the circumstances under which an ECA could support students in the way a human educator would and describe the existing ways to assess the quality of an ECA.
Smutny and Schreiberova [7] conducted a review to study ECAs that operated using Facebook (now Meta) Messenger. In particular, they analyzed the functions of 47 chatbots in order to assess the quality of different aspects of their teaching capabilities and figure out their effectiveness as teaching assistants.
Last but not least, Wollny et al. [5] conducted an SLR to explore certain topics about ECAs. Specifically, their educational targets and roles are analyzed. In addition, the utilization scenarios where an ECA serves as a mentor to students are assessed, and the adaptability of ECAs to students’ needs is examined. Finally, they analyze the educational sectors in which ECAs have been used so far.
Even though many aspects of ECAs have been studied, no study has shifted its focus to the production of a scaffolding plan in order to guide educators through the development process of an ECA. With the aim of covering this gap, this paper tries to offer a general overview of the steps required for developing an ECA in order to support educators, with or without experience in this technology, to develop their own ECAs.

3. Methods

This paper aims to gather knowledge from the research that has been conducted during the last five years in the field of ECAs in order to contribute to the advancement of the field by providing detailed information on the current state and plans for future research. Systematic reviews of the literature are a fundamental tool for presenting such information, since they use a predefined and valid procedure and thus increase the scientific value and integrity of the information given. The PRISMA framework is a valid and reliable way to conduct a systematic literature review [8], and thus it was adopted in our work. According to PRISMA, the collected documents have to be gathered using specific criteria and a specific procedure, and then the results are assessed by discussing their contribution and their limitations.
When it comes to this study, a search was conducted with a view to finding articles that would provide the information needed for the review. The main database used for this task was Scopus. Articles were included based on specific criteria, which are further analyzed in the next section.
Last but not least, this work was accomplished in a number of steps. Firstly, a general search was conducted in Scopus in order to find literature reviews related to the subject and research question, and these were utilized for forming a comprehensive set of keywords for the final search of the literature. After the final search query had been shaped, it was used to find the relevant documents in Scopus. The final list of documents was formed based on inclusion and exclusion criteria. The details of this procedure are explained in the remainder of this section.

3.1. Eligibility Criteria

This review intends to carry out an inclusive analysis of the recent studies in order to answer one question. To that end, the screening process of the sources was conducted using inclusion and exclusion criteria. More precisely, the main inclusion criterion was that an article was relevant to the general topic and could answer the research question of the review. Initially, this was assessed by checking the title and the abstract of each article and excluding whatever was considered irrelevant to the subject. Then the second phase of the evaluation of the articles was carried out based on the eligibility criteria presented in Table 2. To be included, a document had to address a chatbot application used for teaching a subject to students and had to contain information about designing, integrating or evaluating it. On the other hand, there were two main reasons for a paper to be excluded: Firstly, the chatbot application was designed for the training of specific target groups rather than students or learners at an educational institution. Secondly, articles that focused too heavily on the results for the learners and did not describe the role of the CA and how it contributed to those results were excluded.
The literature on the subject has been increasing remarkably, especially over the last five years [2,6]. For this reason, it was decided that the sources gathered should be published between 2018 and 2023. In addition, the documents retrieved were only articles from academic journals, since articles are usually targeted at an issue more precisely than books or conference papers. Furthermore, only articles written in the English language were chosen for this review. Finally, since only Scopus was used for retrieving the documents, there were only two duplicate papers, which were reviews found during the first phase of the two-phase searching strategy, while some papers were excluded due to the non-availability of their full text.

3.2. Information Sources

Aiming to synthesize a comprehensive review of knowledge, this study was solely based on articles from scientific journals. Scopus was selected as the main database of the search for the following reasons: it is a well-respected and trustworthy tool utilized in research, including systematic literature reviews, and it incorporates articles from a variety of other respected publishers, such as Elsevier, Emerald, IEEE, MDPI, Sage and Springer (in alphabetical order) and many others, as can be seen in the reference list of this paper. The first document search was carried out on 7 March 2023 in order to find relevant reviews, which would provide the keywords needed for shaping the search query. The final search was conducted on 9 March 2023, resulting in the final list of documents.

3.3. Search Strategy

As described earlier, the search strategy of this review was executed in two phases. The first phase aimed to define the keywords that would subsequently be utilized in the search query. For this purpose, the general search query “Chatbots in education” was used for finding relevant reviews written in English. This search resulted in four reviews [2,3,4,5]. Using backward searching, another review was found [7], as well as one paper [1]. The keywords used in the aforementioned papers were recorded, and duplicates were removed. Next, the keywords were organized into two categories, one containing education-related words and the other chatbot-related words. More specifically, the education-related words found were: “education”, “educational”, “learning”, “learner”, “student”, “teaching”, “school”, “university” and “pedagogical”. On the other hand, the words referring to CAs were: “chatbot”, “conversational agents”, “conversational tutors”, “bots”, “agents” and “dialogue-systems”. From this aggregate of keywords, some were excluded since they are subsumed by more precise terms; for example, “agents” is covered by “conversational agents”. In addition, this study aims to provide information based on the last five years and using scientific articles written in the English language. So, the final search query used in Scopus was:
“TITLE-ABS-KEY ((“education*” OR “learning” OR “student” OR “teaching” OR “school” OR “university” OR “pedagogical”) AND (“chatbot” OR “conversational agents” OR “bots” OR “dialogue-systems” OR “conversational tutor”)) AND PUBYEAR > 2017 AND (LIMIT-TO (DOCTYPE, “ar”)) AND (LIMIT-TO (LANGUAGE, “English”))”. This search led to a list of 1597 articles, which were then screened following a selection process.
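The assembly of such a query from the two keyword groups can be sketched as follows. This is a minimal illustration only: the keyword lists are taken from the text above, while the `build_query` helper is a hypothetical construct and not part of any Scopus API.

```python
# Sketch: assembling a Scopus advanced-search query from two keyword groups.
# The keyword lists come from the two-phase keyword collection described above;
# build_query is an illustrative helper, not an official Scopus function.

education_terms = ["education*", "learning", "student", "teaching",
                   "school", "university", "pedagogical"]
chatbot_terms = ["chatbot", "conversational agents", "bots",
                 "dialogue-systems", "conversational tutor"]

def build_query(group_a, group_b, year_after=2017):
    # Each group is OR-ed internally; the two groups are AND-ed together.
    or_a = " OR ".join(f'"{t}"' for t in group_a)
    or_b = " OR ".join(f'"{t}"' for t in group_b)
    return (f"TITLE-ABS-KEY (({or_a}) AND ({or_b})) "
            f"AND PUBYEAR > {year_after} "
            'AND (LIMIT-TO (DOCTYPE, "ar")) '
            'AND (LIMIT-TO (LANGUAGE, "English"))')

query = build_query(education_terms, chatbot_terms)
print(query)
```

Keeping the groups in lists makes it straightforward to rerun the search after adding or removing a keyword, which matches the iterative way the query was shaped in the first phase.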

3.4. Selection Process and Data Collection Process

For the purpose of writing this review, the 1597 articles identified by applying the aforementioned search query in Scopus were screened. The full list of articles was initially extracted from Scopus and saved as a .csv file. In this file, all the information needed for the examination of each article was recorded. More specifically, its authors, title, abstract, digital object identifier (DOI), date of publication and keywords were recorded in the file. Then a two-step screening process began. In the first step, the relevance of each article was examined based on its title and abstract. If the title was pertinent, then the abstract was studied in order to determine whether the article should be included or not. The second step was a full-text scan: the relevance of each document to the research was determined by analyzing its full text. In this step, the number of papers for which the full text was not available was also recorded. It should be mentioned that the initial reviews found during the first search and the papers retrieved through backward searching were later added to this file and were assessed using the same method. The whole process is presented in Figure 1.
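A minimal way of recording such two-step screening decisions against the exported .csv is sketched below. This is an illustration only, not the authors' actual tooling; the column names, keyword filter and status labels are all assumptions.

```python
# Sketch: tracking two-step screening decisions for an exported article list.
# Column names ("title", "abstract"), keywords and status labels are assumed.
import csv
import io

# A tiny stand-in for the Scopus export file.
export = io.StringIO(
    "title,abstract\n"
    "Chatbots for CS1 teaching,We design an educational chatbot for novices\n"
    "Industrial robot arms,We study welding robots in factories\n"
)

def screen_title_abstract(row, keywords=("chatbot", "conversational agent")):
    """Step 1: keep a record only if the title or abstract mentions a keyword."""
    text = (row["title"] + " " + row["abstract"]).lower()
    return any(k in text for k in keywords)

records = []
for row in csv.DictReader(export):
    row["step1"] = "include" if screen_title_abstract(row) else "exclude"
    # Step 2 (full-text screening) is a manual judgment filled in later.
    row["step2"] = "pending" if row["step1"] == "include" else "n/a"
    records.append(row)
```

Storing both decisions as extra columns keeps the audit trail required by PRISMA in the same file as the bibliographic data.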

3.5. Data Items

The articles selected for this review were not limited to one domain. On the contrary, in order to be as inclusive as possible, they were screened while ignoring their subject area. This resulted in 1597 articles from different subject fields such as Computer Science, Engineering, Social Sciences and others, as shown in Figure 2. The only limitation was that the year they were written should be after 2017, which was necessary since the review focuses on the latest developments of the field.

4. Results

Developing an ECA is a complicated process, and many factors can affect the final result. In this section, 73 papers are analyzed in order to examine the current guidelines, frameworks and necessary steps for implementing an educational assistant or tutor. The distribution of the papers over time is presented in Figure 3, showing a clear increase in interest in the topic.

4.1. Educational Grade Levels

In this section, an analysis of the educational grade level that the collected papers refer to is presented. The studies reviewed fall into three categories: the pre-university grade level, also known as “K-12” (kindergarten through grade 12) education; higher education; and the unspecified category. The “unspecified” category includes papers where the students that used the ECA in the learning process were referred to generally as “learners”, “participants” or “students” without revealing their educational grade level. The same category includes papers that did not contain an experimental part, were focused on technical suggestions and results and used technical methods to evaluate the ECA. Lastly, the studies that provided general suggestions applicable to every grade level were also characterized as “unspecified”. In total, 14 articles referred to an application of ECAs in K-12 education, 43 papers analyzed applications of ECAs in higher education and 16 documents were classified as “unspecified”. These details are summarized in Table 3.

4.2. What Are the Steps for Designing an Educational Chatbot for Contemporary Education?

In this subsection, various procedural approaches are examined in order to take a detailed view of the procedure of constructing an ECA for supporting the teaching and learning processes based on the steps that different researchers followed.
Kohnke [73] started by analyzing what the user needs by tracking down the most commonly searched topics for the subject and communicating with educators and students to shape the material that is going to be used by the ECA and adjust its function. In the following steps, he formed the final material, trained the ECA and tested it to reveal its weak spots. Afterwards, he handed it to the teachers, and they used it in their teaching. The final step of this process was the evaluation of the ECA. Rooein et al. [56] followed a similar approach, with the development and installation of the ECA system being the first steps, followed by the definition of learning items and the customization of the education material and finishing with the usage of the agent in the learning process and its evaluation in the end.
Chien and Yao [31] executed a three-phase strategy that consisted of the following: shaping the learning material the ECA would utilize; training and testing the ECA; using the ECA in the learning process and evaluating it. Similarly, Han et al. [38] also executed a development plan with three phases. Initially, in the “Analysis phase”, they defined the teaching goals and the educational material by searching for information relevant to the course literacy and collecting information to define the students’ needs and the kind of the teaching subjects the ECA should use. In the “Design phase”, they developed the ECA and its functions, and in the final phase, they evaluated it.
Chen et al. [30] and Neo et al. [53] used a similar development process. Chen et al. [30] initially gathered students’ needs and expectations about the function of the ECA and used this data to design the ECA and shape the educational content that it would use. They also defined a communication channel with the educator, so when the ECA could not respond, it would direct students to reach a teacher. Finally, they developed the bot, motivated the students to use it and evaluated it. The second group of researchers executed the same process, with the difference being that they tried to enrich the teaching content by using different forms, such as visual aids and oral descriptions, to provide it.
Topal et al. [21] used a slightly different method in order to align the function of the ECA to the students’ curriculum. In the beginning, they conducted a literature search for similar applications and defined the educational subject and the tools that were going to be used to develop the ECA. In the next step, they studied the curriculum to modify the goals and the function of the ECA accordingly. Then the educational content was enriched in order to provide visual aids apart from text messages. Domain experts formed the final aggregate of the questions and responses the chatbot would utilize. In the following stage, the agent was carefully trained in order to improve its ability to respond correctly to rephrased queries, and the communication channels where the ECA would be accessible from were defined. The students were informed by the teachers on how to use the ECA, and after the application phase, there was an evaluation phase. Using a similar approach, Yang et al. [22] followed almost the same process with a slight difference in the formulation of the teaching material and the learning tasks, which were based on the curriculum and research for finding a suitable chatbot to use.
Vazquez-Cano et al. [60] applied a similar and slightly simpler process. To begin with, they also conducted research to find similar applications that would be suitable for their task. Then they defined the tools that would be used to develop the ECA and collected students’ questions that were going to be used as questions to be answered or as quiz questions to evaluate students’ progress. The final steps were the design, implementation and evaluation of the chatbot. Chien et al. [10] applied a similar tactic, but during the evaluation stage, the educator provided students with comments and feedback about their performance. Furthermore, Mateos-Sanchez et al. [75] applied a method similar to the above with slight differences. In the beginning, they tried to define the users’ needs and technical necessities by doing research and conducting interviews. Following that, they searched for platforms that were appropriate for the learning experience they intended to provide and could also be accessible through a variety of communication channels. After that, they made a prototype for testing, tested it and used the feedback to improve it. They then adjusted the chatbot, used it in the learning process, reevaluated it and used the data to further improve it. Bailey et al. [24] also focused on explaining to the students how to use the conversational tutor and utilizing the evaluation feedback to improve it. Likewise, a similar strategy was applied that shifted the focus on the involvement of all the necessary stakeholders in order to develop an educational tool that would be suitable for the students and also useful for the educators [36]. Finally, Abdelghani et al. [9] executed a similar strategy with a difference at the beginning, where they also searched for similar applications in order to develop the ECA.
Sarosa et al. [19] implemented a six-stage approach. In the “Planning stage”, the collection and shaping of the questions the ECA would use were conducted. During the “Analysis stage”, they collected users’ needs relevant to the system. In the “Design stage”, the design principles of the ECA were shaped based on the users’ needs, and the function of the ECA was adapted to the function of the selected communications channel that the ECA would be accessible from. Then a testing phase followed to evaluate the response ability of the chatbot. Last but not least, the two last phases were the implementation of the system and the evaluation.
Haristiani and Rifai [41] proposed a procedure framework called “ADDIE” (Analysis Design Development Implementation Evaluation). At first, they analyzed the students’ needs to define the learning goals, techniques and material that are going to be used (“Analysis stage”). In the “Design Stage”, the development tools were defined and an application flowchart was produced based on the data collected in the first stage. In the “Development stage”, the system of the application was developed and domain experts were invited to test all aspects of the ECA. The final stages were the ECA’s implementation in the learning process and its evaluation.
Ong et al. [54] focused more on the accessibility of the chatbot for the students. In particular, after defining the learning process and material, they defined the communication channels between the ECA and the student by selecting communication channels that the students would be familiar with. Then they explained to the students how to use the agent, and lastly, they evaluated it. Moreover, Yin et al. [65], Li et al. [48] and Yildiz Durak [64] followed the same steps, with the deviation that the first utilized external material to enrich the learning content by providing videos or images, and the rest kept learning analytics that were going to be used to further improve the system in the future.
Schmitt et al. [79] shifted their attention to the design requirements and principles of the system. They conducted a search of the literature to collect the usual design requirements of an ECA and also conducted student interviews to collect the user requirements. Then they shaped the final design principles that were going to be used, defined the data that the ECA was going to use in order to respond, launched the application and evaluated it. Similarly, Wambsganss et al. [62] followed the same process with the subtle change that the design principles were adapted to the principles of the university apart from the users’ needs.
Memon et al. [76] proposed a framework in order to develop a multi-agent-based educational tool. They started by collecting data for the training of the agents. Then they constructed sets of pairs of questions and answers that the ECA would use and made the database for the system. The following step was the definition of the communication method between the chatbot and the training of the ECAs, followed by the testing and evaluation phases.
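To illustrate how such a database of question–answer pairs can drive an agent's responses, the following toy sketch matches a student query to the closest stored question by simple token overlap. The pairs, the matcher and the fallback message are illustrative assumptions; the systems reviewed here generally rely on trained intent classifiers rather than this naive matching.

```python
# Sketch: a toy lookup over a small question-answer database.
# The pairs and the overlap-based matcher are illustrative assumptions only.

qa_pairs = {
    "when is the assignment due": "The assignment is due on Friday at 23:59.",
    "where can i find the lecture slides": "Slides are posted on the course page.",
    "how is the final grade computed": "60% exam, 40% assignments.",
}

def tokens(text):
    return set(text.lower().split())

def answer(query, pairs=qa_pairs, threshold=2):
    # Pick the stored question sharing the most tokens with the query.
    best_q = max(pairs, key=lambda q: len(tokens(q) & tokens(query)))
    if len(tokens(best_q) & tokens(query)) < threshold:
        # Fallback, in the spirit of directing unanswered queries to a teacher.
        return "Sorry, I don't know. Please ask your teacher."
    return pairs[best_q]

print(answer("When is the assignment due?"))
```

The fallback branch corresponds to the communication channel with the educator mentioned earlier: when no stored question is a confident match, the agent defers rather than guessing.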
Chuang et al. [12] developed a motivational ARCS (Attention Relevance Confidence Satisfaction) model to construct their application. At first, they searched for material compatible with the technology they used and tried to create an easy-to-make UI (User Interface). Then they tried to match the teaching process with the current state (COVID-19 quarantine) and also adapt the function of the chatbot to the teaching process. When it comes to the “Confidence” aspect of the model, they tried to make the learning activities trigger students’ interest and provide them with an overview of their progress. Finally, the evaluation stage was conducted.
Han et al. [40] applied a method that started with a literature search to find a suitable methodology in order to plan their own. Then they collected users’ frequent questions, used them to shape the educational material, developed the ECA, provided guidance to the students on how to use it and evaluated the application. On a similar path, Sáiz-Manzanares et al. [57] developed a similar but simpler procedure. They began by researching the literature to collect relevant information and used it to shape the content implemented in the chatbot. Guidance and directions were given to the students on how to use it, and the final phase was the evaluation of the proposed educational tool. Lastly, Kharis et al. [72] and Mathew et al. [18] utilized a slightly different approach. Both research groups added a testing stage, and the first research team defined the educational purpose of the chatbot in the first stage.
Janati et al. [69] built an educational conversational assistant that utilized “speech-to-text” transformation techniques. They started by searching the literature for relevant information and formed the database of the system by utilizing e-learning material such as videos. They applied “speech-to-text” transformation techniques so that the agent could transform the educational material into text that could be processed more easily. Finally, they developed, applied and evaluated the chatbot.
Chiu et al. [11] preferred to start by examining ready-to-use ECAs in order to find a suitable one to use in the relevant learning process. After that, they made an educational plan for the teaching process, provided directions to the students on how to use the selected chatbot and evaluated it. Similarly, Belda-Medina and Calvo-Ferrer [26] started by designing the teaching material and selecting the appropriate ECAs to be used in a learning process. In the last step, they evaluated the usage of the ECA.
Schmulian and Coetzee [58] focused on the utilization of a specific learning method. More precisely, they examined many learning theories and chose the one that was most suitable for their approach. In the next step, they defined the tools that were going to be used to develop the ECA and structured the material for the database of the agent. These steps were followed by the development and implementation of the chatbot. The last steps were the provision of guidance to the students on how to use the ECA and its evaluation. Additionally, Bagramova et al. [25] also started by searching the scientific literature, with the aim of finding theories relevant to the development of an ECA. The results were used to shape the methodology and the selection of the development tools. Next, they shaped the educational matter and tasks, implemented the ECA and evaluated its usage in the learning process.
Another approach was proposed by Essel et al. [34] and Jasin et al. [46], where the first step was to define the tools that were going to be used for the proposed ECA, followed by a stage of gathering students’ questions to construct the database. The last three steps were the definition of the communication channels, the development and implementation of the ECA and its evaluation. Li et al. [49] began the process in a similar way by searching for suitable tools and frameworks that would be used to build the agent. After that, they shaped the learning matter and structured the database of the system. Finally, they developed the chatbot and evaluated it.
Neo [52] followed a process that focused on the educational material, which was provided by the chatbot. To begin with, she defined the educational content the agent would utilize and redesigned it so that it could be provided in various forms apart from text messages. Furthermore, she designed a conversational flow as a prototype for the interaction between the student and the system before implementing it. The last step was the evaluation of the system.
Hew et al. [42] developed a system that consisted of two agents. The main stages of the process were the definition of the processing mechanisms for the first ECA and the learning material for the second, along with the implementation, testing and evaluation phases. The only difference was that for the second bot, communication strategies were examined in order to find a suitable one.
Hsu and Huang [44] constructed an ECA prototype that could be used in education and were focused on the technical aspects of the system. They started by collecting material that would be used for the response generation and processed it to shape pairs of answers and questions. The next steps were the development and implementation of the chatbot, followed by an evaluation of the prototype in comparison with another conversational agent.
Chaiprasurt et al. [29] used a two-phase process. “Phase 1” was relevant to the development of the ECA and consisted of the definition of students’ personas and the purpose of the ECA, the construction of the processing mechanisms and conversational standards of the system and the creation of the database. “Trial run”, which was the second phase, was about the usage of the ECA in the classroom and the evaluation of its efficiency.
González-Castro et al. [38] describe the procedure they applied to develop an ECA to support a Massive Open Online Course (MOOC). Firstly, they collected students’ possible questions and evaluated the complexity and bias rate of these questions. Next, they evaluated the students’ ability to answer questions so as to define, based on the learning competence rate and the previous evaluation, which set of questions would be utilized by the conversational bot. Finally, they informed the instructors about the areas in which students needed support and modified the ECA so that it could provide the part of the educational material (videos) best adapted to their needs. This was achieved by identifying the educational subject that a video covers, breaking the video into smaller parts based on the different educational concepts where possible, and modifying the ECA and its architecture so that it provides the most suitable part of a video.
In order to summarize the aforementioned procedures and track the main phases of developing an ECA, this information is summarized in Figure 4 and in Section 4.2.1, Section 4.2.2, Section 4.2.3, Section 4.2.4, Section 4.2.5, Section 4.2.6, Section 4.2.7, Section 4.2.8, Section 4.2.9, Section 4.2.10 and Section 4.2.11. The phases are presented in the order in which they are usually implemented in the procedures described in the reviewed studies and synthesized in this SLR. Most of the phases included in the development cycle of an ECA are fairly typical of software development in general, but the details pertaining to each phase in terms of ECAs are analyzed in the sections that follow. In addition, it should be clarified that references are noted based on what the researchers carried out and recorded as a separate stage. For example, every educational agent goes through a training phase, but references here are made only to the studies that focused on this part and provided details on it. Furthermore, the design, implementation, application in the learning process and evaluation are common phases for every study and do not have a variety of steps, so in the “References” column, instead of repeating all the references, the phrase “Common to all” is used. The same term is used when a step of a phase is included in every study. Last but not least, while it is not clearly stated in most studies, the development cycle is heavily based on the sequential (waterfall) software development model, since the ultimate goal is to develop a fully functional ECA in a limited period of time. However, based on the evaluation results, the process might have to be repeated after utilizing the ECA in the educational process, whether for a few months, a semester or even an academic year, in order to improve its teaching capabilities.
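The waterfall-style pass through the phases, repeated when the evaluation suggests improvements, can be illustrated with a short Python sketch. The phase names follow Sections 4.2.1 to 4.2.11; the loop condition and iteration limit are illustrative assumptions, not something prescribed by the reviewed studies.

```python
# Sketch of the synthesized phase ordering: one sequential (waterfall-style)
# pass through the phases, optionally repeated until evaluation is satisfied.
# The evaluation callback and iteration cap are illustrative assumptions.

PHASES = [
    "research", "analysis", "definition", "shaping and formulation",
    "adaptation and modifications", "design", "implementation and training",
    "testing", "guidance and motivation provision",
    "application in the learning process", "evaluation",
]

def run_development_cycle(evaluate, max_iterations=3):
    """Run the phases in order, repeating the cycle until evaluation passes."""
    history = []
    for iteration in range(1, max_iterations + 1):
        history.extend(PHASES)          # one full sequential pass
        if evaluate(iteration):         # evaluation closes each cycle
            return iteration, history
    return max_iterations, history

# e.g. an ECA judged satisfactory after its second deployment
iterations, log = run_development_cycle(lambda i: i >= 2)
```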
The kinds of development processes are summarized in Table 4 for the studies that made a reference to the development process of their model.

4.2.1. Research

Research is usually the first stage of the development procedure of an ECA. As shown in Table 5, it usually concerns collecting information in order to choose development tools and platforms and to study previously used methods and theories when planning a new development process.

4.2.2. Analysis

During the analysis phase, technical and user requirements are collected for use in later stages of the development process. The requirements examined in this phase are presented in Table 6.

4.2.3. Definition

Definition is the stage in which decisive choices are made that organize and shape the final form of the subsequent phases. According to Table 7, these decisions can be technical, such as defining the development tools, or related to the teaching and learning processes, such as defining the educational purpose of the agent.
The educational purpose of an ECA also defines its usage role, and there are several possibilities. Firstly, it can be used as a helper to evaluate the teaching process of a course [62]. Another approach is that of the conversational tutor, which functions as a student with low self-efficacy and asks the students for help [6,28]. More specifically, Černý [28] proposed the approach of an agent functioning as an interlocutor that had forgotten everything and needed the students’ help to remember things, with the purpose of developing the students’ searching skills. If the student could not answer a question, they would be given a hint. In addition, González-González et al. [68] developed an ECA integrated into a gamified learning environment with the aim of guiding students through the learning process and keeping their focus on learning.
The most usual role an ECA can have is that of the self-evaluator or learning partner that helps the students to learn the teaching material (please see the references in Table 8). Similarly, Chiu et al. [11] identified three usage roles for an ECA. The first was the use of an ECA as a tool that collaborates with students in the learning process to increase their self-efficacy; the second was the learning agent as a tool for the teacher to support the learning progress of the students outside the classroom; and the third was a simple application that would provide a simple learning interaction. Furthermore, Chuang et al. [12] implemented an ECA that helped students study biology concepts using AR. Lippert et al. [74] suggested a system consisting of two or more agents that would be utilized to develop students’ collaboration skills. More specifically, they suggested that this approach could also be applied to “virtual learning environments” (VLEs), where the agents should act as a guide that keeps students focused on the learning process. In another application, Nguyen et al. [77] proposed the implementation of a conversational tutor used to teach the procedure of solving a problem. To begin with, it categorized the problems and used the comments of students that had already solved them in order to produce hints and propose the steps of the solving process. In addition, at each step, it would provide the relevant knowledge to the student. If the student could not proceed, it would provide a solved example, and if the student was still unable to solve the problem, the agent would explain the solution step by step. Lastly, Suárez et al. [59] designed an educational tutor that would function as a virtual patient in order to develop medical students’ diagnostic skills.
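The escalating support described for the problem-solving tutor of Nguyen et al. [77], from hint to worked example to a step-by-step solution, can be sketched as follows. The data structure and the triggering logic here are assumptions made for illustration, not their actual implementation.

```python
# Illustrative sketch of escalating tutoring support: each failed attempt
# moves the student to the next level (hint -> worked example -> steps).
# The example content and class design are illustrative assumptions.

class ProblemTutor:
    def __init__(self, hint, worked_example, solution_steps):
        self.levels = (
            [f"Hint: {hint}", f"Worked example: {worked_example}"]
            + [f"Step {i}: {s}" for i, s in enumerate(solution_steps, 1)]
        )
        self.attempts = 0

    def help_student(self):
        """Return the next level of support after each failed attempt."""
        message = self.levels[min(self.attempts, len(self.levels) - 1)]
        self.attempts += 1
        return message

tutor = ProblemTutor(
    hint="Isolate x on one side.",
    worked_example="2y + 4 = 10  =>  y = 3",
    solution_steps=["Subtract 4 from both sides.", "Divide both sides by 2."],
)
```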
It should also be clarified that the ECAs in the other studies in this category also had specific roles; only the ones considered most interesting, based on the personal interpretation of the authors, are presented here.
The last three approaches concern an ECA that can teach a lesson through storytelling [24], an ECA that solely answers the questions of the students [11,19,37,46,53] and an ECA as a tool that only provides educational material to the students [40,65]. More precisely, González-Castro et al. [38] implemented a chatbot to support the provision of the educational material requested by the students of a MOOC, while Bailey et al. [24] designed a story-telling conversational tutor that would assist the students in learning a foreign language.
The various usage roles recorded in the literature are summarized in Table 8 and Figure 5. These usage roles define the functionality that an ECA will support and should be taken into consideration when designing and implementing it in the following steps.

4.2.4. Shaping and Formulation

“Shaping and Formulation” refers to adapting the educational material that will be provided to the students through the ECA in order to respond to students’ learning needs (Table 9).

4.2.5. Adaptation and Modifications

The “adaptation and modifications” phase refers to changes in the planning of the predicted functions of the chatbot so that they are aligned with specific prototypes, principles and rules. In other words, it consolidates the basic principles and rules that will shape the design of the conversational tutor. The relevant suggested steps are presented in Table 10.

4.2.6. Design Principles and Approaches

The design and the implementation are two parts of the development process of a chatbot that are usually executed one after the other. They apply the previously collected data, methodologies, user needs and technical requirements, combined with a specific hardware architecture, to construct the functional part of the ECA for the final application. In this part, propositions and suggestions for the design phase are examined so as to ascertain the design principles used.
The first group of design directions aims to customize the ECA system so that it is adapted to the students’ needs. In other words, these guidelines aim to satisfy students’ emotional and learning needs and their expectations of the system, to adapt the vocabulary used to their linguistic capabilities and to utilize their curriculum to improve its function.
To begin with, Schmitt et al. [79] indicated the importance of the functional adaptation of an ECA to students’ emotional needs. Furthermore, the function of such an ECA should be aligned with a student’s learning needs [6,19,23,33,68], their curriculum [21,22] and their expectations [23,30]. In addition, the vocabulary of the ECA should be modified according to the student’s expression style. Finally, the chatbot should be an educational tool suitable for every student.
The second group of design propositions refers to the accessibility of the system. Here, accessibility covers the adaptations that make the chatbot available for usage at any time, the ways in which it should be accessed and the modifications that make it compatible with the selected communication channels. More precisely, Sarosa et al. [19] modified their ECA’s function to be aligned with the communication channel used. Furthermore, it was suggested that an educational bot ought to be available through as many communication channels as possible [17,26]. Lastly, an ECA ought to be available for interaction at any time and from anywhere, provided that possible technical requirements, such as an internet connection, are met [23,68].
Furthermore, an important design aspect of conversational tutors is their conversational traits. These directions refer to the ways that a teaching agent should be designed in order to respond better to students’ queries, improve its conversational ability, help students utilize it better and provide a more natural dialogue.
More specifically, an ECA should be able to understand the variations of input phrases with the same meaning [19,55,59] in text and oral form. When it comes to user interaction, the ECA should address students using their username [46,52] and utilize previous conversations to provide a personalized communication [17,26,33,37,79]. Moreover, it should encourage students to persevere [12,17,27,29,45,68,74] and be able to handle small talk for casual topics [6,23,46,62], extend the discussion to collect information when it cannot figure out the users’ intentions [17,28] and provide alternative discussion flows so the user can define the conversational flow when it cannot understand them [31]. In addition, it should inform the users of its operational limitations [40] and redirect them to an educator’s communication channel when it cannot answer a question [46].
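Two of the conversational traits noted above, recognizing input variations with the same meaning and addressing students by username while reusing earlier conversation context, can be sketched in a few lines of Python. The phrase lists, intents and replies below are invented for illustration.

```python
# Sketch of simple intent matching over phrase variations, plus a personalized
# greeting based on the student's username and conversation history.
# All intents, phrasings and replies are illustrative assumptions.

INTENTS = {
    "ask_deadline": {"when is the deadline", "deadline date", "due date"},
    "greeting": {"hi", "hello", "hey"},
}

def detect_intent(message):
    text = message.lower().strip("?!. ")
    for intent, phrasings in INTENTS.items():
        if text in phrasings:
            return intent
    return "unknown"

def reply(username, message, history):
    intent = detect_intent(message)
    history.append(intent)
    if intent == "greeting":
        # Returning students get a personalized follow-up based on history
        if history.count("greeting") > 1:
            return f"Welcome back, {username}!"
        return f"Hello, {username}!"
    if intent == "unknown":
        return f"Sorry {username}, could you rephrase that?"
    return f"{username}, the assignment is due on Friday."
```

A deployed ECA would rely on a trained language model rather than literal phrase sets, but the pattern of mapping many surface forms to one intent and threading user context through replies is the same.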
When it comes to the generated responses of the ECA, they should be grammatically correct [28], quick [20,46] and easy to understand [17]. Additionally, they should be enriched with human-like conversational characteristics, such as emoticons [27,28,29,46,62] and humor, which should be used wisely [26].
Finally, when it comes to the user interface of the discussion, the usage of button-formed discussion topic suggestions and buttons to start or restart the conversation is suggested [17].
Another important aspect of the design is the way that the conversational tutor delivers the educational material to the students. To start with, the ECA should try to rephrase and explain the educational material in various ways, like an educator does in the classroom [27,52,74], while utilizing various media such as images and videos [12,17,26,29,52,53,64,65,68] and oral narration [52]. In further detail, an educational chatbot could try to predict the different subjects that students did not comprehend in order to provide suitable educational material [46] and offer relevant external learning sources to help students learn on their own when it cannot respond to a question [33]. Similarly, the ECA could show the educational material divided into small segments with specific content adapted to the student’s needs and pace [28,38,52] and provide navigation capabilities between them with buttons [52]. Lastly, an ECA should provide self-evaluation material, such as quizzes and tests [9,10,11,12,13,17,21,22,28,29,30,33,45,52,55,56,58,59,64,65,74,75,77,79], adapted to the students’ needs [13].
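The idea of delivering material in small segments with button-based navigation, as suggested in [28,38,52], can be illustrated with a minimal sketch. The segment contents and button labels are illustrative assumptions.

```python
# Sketch: educational material split into small segments, with "previous"
# and "next" buttons to navigate between them (clamped at the ends).
# Segment texts and button names are illustrative assumptions.

class SegmentedLesson:
    BUTTONS = ("previous", "next")

    def __init__(self, segments):
        self.segments = segments
        self.index = 0

    def show(self):
        return self.segments[self.index]

    def press(self, button):
        """Move between segments, staying within the first and last one."""
        if button == "next":
            self.index = min(self.index + 1, len(self.segments) - 1)
        elif button == "previous":
            self.index = max(self.index - 1, 0)
        return self.show()

lesson = SegmentedLesson([
    "Segment 1: What a chatbot is.",
    "Segment 2: How intents are matched.",
    "Segment 3: Quiz yourself!",
])
```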
Other researchers indicated that an educational dialogue bot should provide usage instructions through its functional environment [58,79]. In particular, the chatbot of Schmitt et al. [79] included a button that would provide instructions for use if the students pressed it, while Schmulian and Coetzee [58] designed their ECA to provide information about its function and usage whenever the students needed it.
A different group of suggested design principles are relevant to the learning experience that the students receive by interacting with the educational conversational assistant. More precisely, it is suggested that the learning chatbot should be able to provide a personalized learning experience [68] and also evaluate students and offer to them feedback about their learning performance in order to increase their satisfaction and support them in improving their learning efficiency [12,13,20,27,45]. Furthermore, the integration of external technologies such as Augmented Reality (AR) [12] and the usage of gamification elements [68] are suggested in order to provide a better user experience.
Another group of suggested design guidelines refer to the way the students can handle and interact with the questions that the ECA generates. In further detail, the addition of buttons was suggested so the students could avoid questions that they could not answer, such as a “don’t know” button [20] or a “skip button” [28]. In addition, Černý [28] suggested that an educational bot should provide hints to help students answer the evaluation questions relevant to the educational material and reshow the wrongly answered questions with button-formed answers so the students can answer them easier. Finally, according to Cai et al. [27] an ECA should allow students to trace back to previously answered exercises and definitions in order to revise them.
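The question-handling ideas above, a skip or “don’t know” button [20,28], a hint after a wrong answer [28] and the reshowing of wrongly answered questions with button-formed choices [28], can be combined in one small sketch. The question content and the exact flow are illustrative assumptions.

```python
# Sketch of question handling: skip button, hint on a wrong answer, and
# re-queuing wrongly answered questions as multiple choice.
# Question data and messages are illustrative assumptions.

class QuizSession:
    def __init__(self, questions):
        # each question: (prompt, correct_answer, hint, choices)
        self.queue = list(questions)
        self.retry = []

    def ask(self):
        return self.queue[0][0] if self.queue else None

    def answer(self, response):
        prompt, correct, hint, choices = self.queue.pop(0)
        if response == "skip":                 # the skip/"don't know" button
            return "Skipped. Moving on."
        if response == correct:
            return "Correct!"
        # wrong answer: give the hint and reshow later with button choices
        self.retry.append((f"{prompt} Options: {', '.join(choices)}",
                           correct, hint, choices))
        return f"Not quite. Hint: {hint}"

    def reshow_wrong(self):
        self.queue.extend(self.retry)
        self.retry = []

quiz = QuizSession([
    ("What does ECA stand for?", "educational conversational agent",
     "It is an agent you converse with.",
     ["educational conversational agent", "extended chat app"]),
])
```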
A very important and impactful aspect of the design of an ECA is the regulations and rules that it should obey in order to be able to function, which are based on two specific entities. The first entity is the user privacy, and thus the function of the ECA should be aligned with the current ethics policies and rules for the protection of the data of the user [17,26,28,46,62,68]. The second entity is the educational institution that the ECA is operating in. This means that the function of the conversational tutor should be aligned with the operational and educational policies of the educational organization in which it is going to be used [62,79].
When it comes to students’ notifications, the ECA should notify students about important events relevant to their courses [29]. For example, it could provide reminders about final exam dates, the deadline for handing in an assignment or even provide news when a lecture is cancelled or rescheduled [29].
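The reminder behavior described in [29] can be sketched as a simple check over upcoming course events. The dates, event names and the three-day notification window below are illustrative assumptions.

```python
# Sketch: notify students about course events falling within the next few
# days. Events, dates and the window size are illustrative assumptions.

from datetime import date

def due_reminders(events, today, window_days=3):
    """Return messages for events happening within the next `window_days`."""
    messages = []
    for name, event_date in events:
        days_left = (event_date - today).days
        if 0 <= days_left <= window_days:
            messages.append(f"Reminder: '{name}' is in {days_left} day(s).")
    return messages

today = date(2023, 9, 10)
events = [
    ("Assignment 2 deadline", date(2023, 9, 12)),
    ("Final exam", date(2023, 10, 1)),
]
notes = due_reminders(events, today)
```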
The learning activities provided by an educational chatbot are a vital medium to capture students’ interest and help them learn in a pleasant way. To achieve this, the conversational tutor should provide challenging and interesting learning activities for the students [6,12,23]. In addition, the design of the learning activities could utilize an element of competition [10,68] or collaboration [68] in order to increase the students’ interest in them.
In order to convert the educational chatbot into a tool that will support teachers in the teaching process, Sarosa et al. [19] indicated that the function of an ECA could be adapted to the teaching material defined by the educator so as to increase its teaching efficiency. Additionally, the function of a teaching chatbot could be aligned with the teaching style of the educator while also giving them the possibility to set learning goals for their students [68].
Another set of design directions are about designing the tutoring approach of a conversational tutor. Specifically, it is suggested to align its function to specific learning theories [46,58] and specific learning purposes [9]. Moreover, in order to achieve a more efficient tutoring experience, it has been suggested to align the function of the chatbot with a specific motivational framework [12,29], design the ECA to function as a student that can learn from the actual students [6] and utilize predefined learning paths to provide a personalized conversational flow that will be better adapted to a student’s learning needs [56]. Furthermore, the dialogue tutor will be more effective if it helps students to utilize their previous knowledge and skills, with the purpose of learning the new educational material [6]. Finally, it has been suggested that, like a human educator, a conversational tutor should utilize learning motives such as a scoreboard with grades in order to increase students’ engagement and willingness to participate in the learning process [29].
Last but not least, there are two design suggestions regarding the processes and database of an ECA. The first indicates that when designing a conversational tutor, every possible stakeholder should be involved in the process, including students, educators and designers, in order to achieve better final results [36]. The second concerned searching the database of the conversational tutor in order to provide better responses: it was suggested that an ECA should be able to search an aggregate of documents effectively and provide a brief response, and that this aggregate should be expanded as much as possible [33].
The design suggestions of the previous research are summarized in Table 11 and Figure 6. These propositions are presented so as to be taken into consideration when designing a conversational tutor; they do not all have to be applied in order to implement an educational chatbot.

4.2.7. Implementation and Training of the ECA

When it comes to the implementation of an ECA, there are many technical details to be defined, such as the programming languages, frameworks, APIs and language models underlying modern conversational agents. Since the main purpose of this work is to present a scaffolded plan as a guide through the development process of an ECA, the implementation details are considered to be out of the scope of this paper. As a matter of fact, addressing this issue would require a different search query and more technically oriented literature.
After, or as part of, the implementation of an ECA, there is usually a training phase in which the agent uses previously collected data in order to respond to its users more effectively. This is an essential step for building an efficient ECA. Here, the training methods analyzed in the reviewed literature are presented; the fact that a paper did not mention the training phase of its ECA does not mean that one was not included. An overview of the observed training methods is presented in Table 12 and Table 13. As mentioned for the implementation phase, devising a clear picture of the training phase would require more technically oriented studies.
As can be seen in Table 13, the training of a chatbot can be based on specific questions that were collected from current students or students of previous years, or it can be shaped by domain experts and educators. Furthermore, the training can also be conducted on data provided by external sources, such as corpora like Wikipedia [48] or datasets offered by the platform or tool used to implement the chatbot [40]. Last but not least, ECAs can be trained using machine learning techniques with customized models [27] or models obtained from the utilized development platform [63]. The training method or methods used are up to the developer, and the more the ECA is trained, the better the quality of its generated responses will be.
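The point that more training data widens a retrieval-style agent’s response coverage can be illustrated with a minimal sketch. The word-overlap score and threshold below are illustrative stand-ins; real systems would use NLU platforms or trained machine learning models, as the reviewed studies do.

```python
# Sketch: a trainable retrieval bot. Each training pair extends what the bot
# can answer; scoring is a plain word-overlap ratio (an illustrative stand-in
# for a real NLU model), with a confidence threshold for answering at all.

class TrainableBot:
    def __init__(self):
        self.pairs = []

    def train(self, question, answer):
        self.pairs.append((set(question.lower().split()), answer))

    def respond(self, query, threshold=0.7):
        words = set(query.lower().split())
        best = max(
            ((len(words & q) / max(len(q), 1), a) for q, a in self.pairs),
            default=(0.0, None),
        )
        score, answer = best
        return answer if score >= threshold else None

bot = TrainableBot()
bot.train("what is recursion", "A function calling itself.")
```

Before a topic is covered by training data, `respond` returns `None`, which is where a real ECA would redirect the student to an educator.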

4.2.8. Testing of the ECA

Testing is usually carried out before the application is released and concerns evaluating whether the chatbot application can successfully fulfill its intended role. The testing results are usually used to improve the chatbot prototype so that the next version will be more suitable for usage in the learning process. Testing can be conducted by evaluating the performance of the system from a functional perspective or by letting a small sample of users use and evaluate it. It should be clarified that only the testing methods of studies that mention details about this phase are recorded; the fact that a document did not mention a testing phase does not mean that one was not included. A summary of the examined testing methods and procedural steps is presented in Table 14 and Table 15.
As shown in Table 14, the testing methods vary. A chatbot can be tested on performance metrics such as the speed and accuracy of the system’s generated responses. In addition, the ECA can be tested through interaction with possible end users or with developers and educators. The testing phase is an important part of the procedure: it might not define the whole process, but it provides crucial information about the function of the chatbot and can help reduce functional flaws before it is used in the learning process.
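Functional testing on performance metrics of the kind listed in Table 14 can be sketched as a small harness measuring mean response time and answer accuracy over a test set. The chatbot stub and the test cases below are illustrative assumptions.

```python
# Sketch: a functional test harness that reports answer accuracy and mean
# response time over a set of (question, expected_answer) cases.
# The bot stub and test cases are illustrative assumptions.

import time

def run_tests(bot, test_cases):
    """test_cases: list of (question, expected_answer)."""
    correct, elapsed = 0, 0.0
    for question, expected in test_cases:
        start = time.perf_counter()
        actual = bot(question)
        elapsed += time.perf_counter() - start
        correct += (actual == expected)
    return {
        "accuracy": correct / len(test_cases),
        "mean_response_seconds": elapsed / len(test_cases),
    }

# A trivial stand-in for the chatbot under test
answers = {"what is an eca": "an educational conversational agent"}
report = run_tests(lambda q: answers.get(q.lower()), [
    ("What is an ECA", "an educational conversational agent"),
    ("What is a MOOC", "a massive open online course"),
])
```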

4.2.9. Guidance and Motivation Provision for the Adoption of the ECA by the Students

This phase takes place right before students start using the chatbot to assist them in their learning process. Guidelines and instructions are given to the students on how to use the chatbot and its functions (Table 16).

4.2.10. Application in the Learning Process

This stage is common to all the examined studies and refers to the usage of the ECA in the learning process to assist students. Teachers organize the teaching process and integrate the chatbot as an educational and assistance tool [56]. This procedure is monitored, and data are collected for the evaluation phase.

4.2.11. Evaluation of an ECA

The evaluation phase is the last phase of the development cycle of an ECA. In this stage, the results of the agent in the learning process are evaluated based on the students’ performance. In addition, the efficiency of the function of the system can be evaluated using different measures. The results of the evaluation are used to restart the development cycle of the chatbot in order to produce a new and improved version. The evaluation means, objectives and metrics that were used in the examined research are presented in Table 17, Table 18, Table 19 and Table 20.
When it comes to the evaluation instruments of a conversational tutor, the most commonly used ones are questionnaires; interviews with students, educators or domain experts; students’ learning and interaction analytics; and the technical performance metrics of the system. In addition, the impact of the agent can be evaluated based on the learning progress of the students.
The evaluation of an ECA is conducted in order to assess the impact of the usage of this technology and collect data that will be used to improve it. Except for the evaluation of the learners, which is usual for every application of this technology in a classroom environment, other methods or stakeholders can also participate in the evaluation of the final version of a conversational tutor.
The metrics used for the evaluation of a conversational tutor also vary. They can be separated based on their purpose and the aspect of their evaluation. In other words, there are many metrics relevant to the user experience, learning motivation and satisfaction that students receive when they use the ECA. In addition, there are also some metrics that are related to the interaction between the student and the agent in order to evaluate how effectively the chatbot can discuss and deliver the educational knowledge. Another set of observed metrics is related to the support and scaffolding of the educational process by the bot. Finally, there are some metrics that evaluate the technical suitability and performance of the system.
Following these evaluation parameters, some researchers made suggestions for the evaluation of a conversational tutor. More precisely, Mendez et al. [51] indicated the importance of having a design plan for studying the efficiency of an ECA and defining specific criteria. Another suggestion is to conduct the evaluation against the initial implementation purpose and assess whether it is fulfilled [55]. Wan Hamzah et al. [63] developed a model to better exploit learning analytics and thus evaluate their chatbot more efficiently. This five-part model starts by gathering the evaluation data, continues with the storage, cleaning and filtering of the data and ends with the analysis and the “action plan”, which is a plan for utilizing the collected data to improve the conversational tutor. Finally, design advice was given by Wambsganss et al. [62] in order to make the evaluation process more pleasant for the evaluators. Specifically, they suggested the use of a progress bar in the evaluation form so that the students can view their progress, along with a button giving the students the ability to omit a question they might not want to answer. These suggestions are summarized in Table 21.
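The five-part learning-analytics model attributed to Wan Hamzah et al. [63] (gather, store, clean and filter, analyze, action plan) can be rendered as a small data pipeline. This is an illustrative rendering only: the data shape, the filter rule and the action threshold are assumptions, not their model’s implementation.

```python
# Illustrative pipeline for the five-part model: gather -> store ->
# clean/filter -> analyze -> action plan. Data shapes and the threshold
# for flagging topics are assumptions made for illustration.

def gather():
    # e.g. interaction logs: (student_id, topic, answered_correctly)
    return [("s1", "loops", True), ("s2", "loops", False),
            ("s1", "recursion", False), ("", "loops", True)]

def store(records, database):
    database.extend(records)
    return database

def clean(database):
    return [r for r in database if r[0]]   # drop records with no student id

def analyze(records):
    """Per-topic success rate."""
    stats = {}
    for _, topic, ok in records:
        total, good = stats.get(topic, (0, 0))
        stats[topic] = (total + 1, good + ok)
    return {t: good / total for t, (total, good) in stats.items()}

def action_plan(rates, threshold=0.5):
    return [f"Improve the ECA's material on '{t}'"
            for t, rate in rates.items() if rate < threshold]

plan = action_plan(analyze(clean(store(gather(), []))))
```

The resulting plan flags only the topics whose success rate falls below the threshold, closing the loop back into the development cycle.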
It is important to clarify that this evaluation information aims to provide insight into how an ECA can be evaluated. Its usage is not mandatory, and these are, of course, not necessarily the only existing evaluation means, methods and metrics. In addition, as stated for other stages too, these suggestions are based on the papers that made clear references to the evaluation procedures they used, which does not imply that the other studies did not include evaluation tasks.

5. Discussion

This SLR analyzed the steps necessary to develop an ECA for the educational process by examining different chatbot applications in contemporary education. More specifically, the gathered knowledge originated mostly from ECA applications in tertiary education and, to a lesser extent, from the other educational levels. This information is summarized in order to present directions and methods regarding the development procedure of an ECA. The target here is to provide directions and guidelines for educators that might not have the experience or knowledge but would like to develop a conversational tutor.
The observed and proposed process starts with a research phase in which information relevant to the tools and methods that are going to be used is collected, accompanied by an analysis of both the technological and the students’ requirements. The next step is the definition, in which everything that is going to be utilized to develop the ECA is finalized and details relevant to the conduct of the learning process using the ECA, such as the learning objectives, are defined. After that, the shaping of the educational material usually follows, in order to fulfill the learning needs identified during the analysis. The next phase is the adaptation and modification of the ECA, in which its function is aligned with the predefined prototypes, principles and rules specified in the definition phase. This is followed by the design, implementation and training of the ECA, where all the previous tools, methods and rules are utilized to create the software of the chatbot. Before using the ECA in the classroom, it is also essential to test it in order to diagnose possible malfunctions or weaknesses. After obtaining the testing results and modifying the ECA accordingly, it is crucial to inform the students about the way the conversational assistant is going to be used and how it can help them, to provide them with a motive to use it, and then to utilize the conversational tutor in the teaching process. Last but not least comes the evaluation of the chatbot, a crucial step for reviewing the results of the application. After the evaluation, the results are used to restart the development cycle of the chatbot in order to improve the ECA for future applications.
It should be clarified that the process described in this paper is indicative: the recorded steps and suggestions should be followed when designing an ECA, but not strictly, since every researcher is free to modify each step by selecting the methods that they prefer. The proposed development procedures and plans do not conflict with each other; rather, they describe different perspectives of the process of making an ECA. It is considered that the more of these suggestions and steps are accomplished, the better the final ECA will be. However, it should not be ignored that such a goal demands time and resources, such as human effort, and, consequently, this procedure should be customized to the circumstances of each interested educator. Finally, this information is intended as a scaffolded plan to help an interested individual create an ECA, not as a mandatory series of steps that must be followed for the result to be a success.
Last but not least, it should be said that all the above results are subjected to certain limitations, which are described in detail in the next subsection.

6. Limitations

This SLR was conducted under certain limitations that should be taken into consideration when examining this study. To begin with, the only database examined was Scopus, which could lead to a possible loss of studies. Furthermore, the search results included only journal articles written in the English language. Additionally, this study examines papers produced between 2018 and 2023, excluding all previous work, because the target of the review was to analyze recent developments in the field. It should also be mentioned that 37 articles were not included because their text was not available, and thus the information they could offer is also lost. Finally, a possible limitation is the usage of a general search query in the database’s search engine instead of a customized query utilizing every possible special term of the examined question, which could have led to more targeted results. This was done because it was considered better to use a general, comprehensive query that would yield a superset of the targeted documents, thus securing the inclusion of the documents that should be included.

7. Conclusions and Further Research

This SLR examined the latest developments in the development cycle of chatbots in the educational sector, focusing on the steps that are necessary to complete this procedure. Previous research has indicated the need for guidelines so that educators can develop ECAs on their own. In this context, this review gathers and summarizes the available knowledge from this research field so that an educator has a sense of direction when starting to develop such a tool. More specifically, a 12-step process was constructed to present the main steps of the development process of an ECA and to offer guidance through it. Further research could utilize the presented information to actually create an educational assistant and to evaluate the proposed development cycle and its underlying steps in practice.

Author Contributions

Conceptualization, methodology, validation, formal analysis, data curation, writing—original draft preparation, writing—review and editing, visualization, D.R. and S.X.; supervision, S.X.; project administration, S.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All data are included in the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Adamopoulou, E.; Moussiades, L. Chatbots: History, technology, and applications. Mach. Learn. Appl. 2020, 2, 100006. [Google Scholar] [CrossRef]
  2. Pérez, J.Q.; Daradoumis, T.; Puig, J.M.M. Rediscovering the use of chatbots in education: A systematic literature review. Comput. Appl. Eng. Educ. 2020, 28, 1549–1565. [Google Scholar] [CrossRef]
  3. Okonkwo, C.W.; Ade-Ibijola, A. Chatbots applications in education: A systematic review. Comput. Educ. Artif. Intell. 2021, 2, 100033. [Google Scholar] [CrossRef]
  4. Hwang, G.-J.; Chang, C.-Y. A review of opportunities and challenges of chatbots in education. Interact. Learn. Environ. 2021, 1–14. [Google Scholar] [CrossRef]
  5. Wollny, S.; Schneider, J.; Di Mitri, D.; Weidlich, J.; Rittberger, M.; Drachsler, H. Are We There Yet?—A Systematic Literature Review on Chatbots in Education. Front. Artif. Intell. 2021, 4, 654924. [Google Scholar]
  6. Kuhail, M.A.; Alturki, N.; Alramlawi, S.; Alhejori, K. Interacting with educational chatbots: A systematic review. Educ. Inf. Technol. 2023, 28, 973–1018. [Google Scholar] [CrossRef]
  7. Smutny, P.; Schreiberova, P. Chatbots for learning: A review of educational chatbots for the Facebook Messenger. Comput. Educ. 2020, 151, 103862. [Google Scholar] [CrossRef]
  8. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. Ann. Intern. Med. 2009, 151, 264–269. [Google Scholar] [CrossRef]
  9. Abdelghani, R.; Oudeyer, P.-Y.; Law, E.; de Vulpillières, C.; Sauzéon, H. Conversational agents for fostering curiosity-driven learning in children. Int. J. Hum. Comput. Stud. 2022, 167, 102887. [Google Scholar] [CrossRef]
  10. Chien, Y.-Y.; Wu, T.-Y.; Lai, C.-C.; Huang, Y.-M. Investigation of the Influence of Artificial Intelligence Markup Language-Based LINE ChatBot in Contextual English Learning. Front. Psychol. 2022, 13, 785752. [Google Scholar] [CrossRef]
  11. Chiu, T.K.F.; Moorhouse, B.L.; Chai, C.S.; Ismailov, M. Teacher support and student motivation to learn with Artificial Intelligence (AI) based chatbot. Interact. Learn. Environ. 2023, 1–17. [Google Scholar] [CrossRef]
  12. Chuang, C.-H.; Lo, J.-H.; Wu, Y.-K. Integrating Chatbot and Augmented Reality Technology into Biology Learning during COVID-19. Electronics 2023, 12, 222. [Google Scholar] [CrossRef]
  13. Ericsson, E.; Sofkova Hashemi, S.; Lundin, J. Fun and frustrating: Students’ perspectives on practising speaking English with virtual humans. Cogent Educ. 2023, 10, 2170088. [Google Scholar] [CrossRef]
  14. Haristiani, N.; Dewanty, V.L.; Rifai, M.M. Autonomous Learning Through Chatbot-based Application Utilization to Enhance Basic Japanese Competence of Vocational High School Students. J. Tech. Educ. Train. 2022, 14, 143–155. [Google Scholar] [CrossRef]
  15. Kabiljagić, M.; Wachtler, J.; Ebner, M.; Ebner, M. Math Trainer as a Chatbot Via System (Push) Messages for Android. Int. J. Interact. Mob. Technol. 2022, 16, 75–87. [Google Scholar] [CrossRef]
  16. Katchapakirin, K.; Anutariya, C.; Supnithi, T. ScratchThAI: A conversation-based learning support framework for computational thinking development. Educ. Inf. Technol. 2022, 27, 8533–8560. [Google Scholar] [CrossRef]
  17. Mageira, K.; Pittou, D.; Papasalouros, A.; Kotis, K.; Zangogianni, P.; Daradoumis, A. Educational AI Chatbots for Content and Language Integrated Learning. Appl. Sci. 2022, 12, 3239. [Google Scholar] [CrossRef]
  18. Mathew, A.N.; Rohini, V.; Paulose, J. NLP-based personal learning assistant for school education. Int. J. Electr. Comput. Eng. 2021, 11, 4522–4530. [Google Scholar] [CrossRef]
  19. Sarosa, M.; Wijaya, M.H.; Tolle, H.; Rakhmania, A.E. Implementation of Chatbot in Online Classes using Google Classroom. Int. J. Comput. 2022, 21, 42–51. [Google Scholar] [CrossRef]
  20. Tärning, B.; Silvervarg, A. “I didn’t understand, i’m really not very smart”—How design of a digital tutee’s self-efficacy affects conversation and student behavior in a digital math game. Educ. Sci. 2019, 9, 197. [Google Scholar] [CrossRef]
  21. Deveci Topal, A.; Dilek Eren, C.; Kolburan Geçer, A. Chatbot application in a 5th grade science course. Educ. Inf. Technol. 2021, 26, 6241–6265. [Google Scholar] [CrossRef] [PubMed]
  22. Yang, H.; Kim, H.; Lee, J.H.; Shin, D. Implementation of an AI chatbot as an English conversation partner in EFL speaking classes. ReCALL 2022, 34, 327–343. [Google Scholar] [CrossRef]
  23. Al-Sharafi, M.A.; Al-Emran, M.; Iranmanesh, M.; Al-Qaysi, N.; Iahad, N.A.; Arpaci, I. Understanding the impact of knowledge management factors on the sustainable use of AI-based chatbots for educational purposes using a hybrid SEM-ANN approach. Interact. Learn. Environ. 2022, 1–20. [Google Scholar] [CrossRef]
  24. Bailey, D.; Southam, A.; Costley, J. Digital storytelling with chatbots: Mapping L2 participation and perception patterns. Interact. Technol. Smart Educ. 2020, 18, 85–103. [Google Scholar] [CrossRef]
  25. Bagramova, N.V.; Kudryavtseva, N.F.; Panteleeva, L.V.; Tyutyunnik, S.I.; Markova, I.V. Using chat bots when teaching a foreign language as an important condition for improving the quality of foreign language training of future specialists in the field of informatization of education. Perspekt. Nauk. I Obraz. 2022, 58, 617–633. [Google Scholar]
  26. Belda-Medina, J.; Calvo-Ferrer, J.R. Using Chatbots as AI Conversational Partners in Language Learning. Appl. Sci. 2022, 12, 8427. [Google Scholar] [CrossRef]
  27. Cai, W.; Grossman, J.; Lin, Z.J.; Sheng, H.; Wei, J.T.-Z.; Williams, J.J.; Goel, S. Bandit algorithms to personalize educational chatbots. Mach. Learn. 2021, 110, 2389–2418. [Google Scholar] [CrossRef]
  28. Černý, M. Educational Psychology Aspects of Learning with Chatbots without Artificial Intelligence: Suggestions for Designers. Eur. J. Investig. Health Psychol. Educ. 2023, 13, 284–305. [Google Scholar] [CrossRef]
  29. Chaiprasurt, C.; Amornchewin, R.; Kunpitak, P. Using motivation to improve learning achievement with a chatbot in blended learning. World J. Educ. Technol. Curr. Issues 2022, 14, 1133–1151. [Google Scholar] [CrossRef]
  30. Chen, Y.; Jensen, S.; Albert, L.J.; Gupta, S.; Lee, T. Artificial Intelligence (AI) Student Assistants in the Classroom: Designing Chatbots to Support Student Success. Inf. Syst. Front. 2023, 25, 161–182. [Google Scholar] [CrossRef]
  31. Chien, Y.-Y.; Yao, C.-K. Development of an ai userbot for engineering design education using an intent and flow combined framework. Appl. Sci. 2020, 10, 7970. [Google Scholar] [CrossRef]
  32. Colace, F.; De Santo, M.; Lombardi, M.; Pascale, F.; Pietrosanto, A.; Lemma, S. Chatbot for e-learning: A case of study. Int. J. Mech. Eng. Robot. Res. 2018, 7, 528–533. [Google Scholar] [CrossRef]
  33. Coronado, M.; Iglesias, C.A.; Carrera, Á.; Mardomingo, A. A cognitive assistant for learning java featuring social dialogue. Int. J. Hum. Comput. Stud. 2018, 117, 55–67. [Google Scholar] [CrossRef]
  34. Essel, H.B.; Vlachopoulos, D.; Tachie-Menson, A.; Johnson, E.E.; Baah, P.K. The impact of a virtual teaching assistant (chatbot) on students’ learning in Ghanaian higher education. Int. J. Educ. Technol. High. Educ. 2022, 19, 28. [Google Scholar] [CrossRef]
  35. Fryer, L.K.; Nakao, K.; Thompson, A. Chatbot learning partners: Connecting learning experiences, interest and competence. Comput. Hum. Behav. 2019, 93, 279–289. [Google Scholar] [CrossRef]
  36. Durall Gazulla, E.; Martins, L.; Fernández-Ferrer, M. Designing learning technology collaboratively: Analysis of a chatbot co-design. Educ. Inf. Technol. 2023, 28, 109–134. [Google Scholar] [CrossRef]
  37. González, L.A.; Neyem, A.; Contreras-McKay, I.; Molina, D. Improving learning experiences in software engineering capstone courses using artificial intelligence virtual assistants. Comput. Appl. Eng. Educ. 2022, 30, 1370–1389. [Google Scholar] [CrossRef]
  38. González-Castro, N.; Muñoz-Merino, P.J.; Alario-Hoyos, C.; Kloos, C.D. Adaptive learning module for a conversational agent to support MOOC learners. Australas. J. Educ. Technol. 2021, 37, 24–44. [Google Scholar] [CrossRef]
  39. Han, J.-W.; Park, J.; Lee, H. Analysis of the effect of an artificial intelligence chatbot educational program on non-face-to-face classes: A quasi-experimental study. BMC Med. Educ. 2022, 22, 830. [Google Scholar] [CrossRef]
  40. Han, S.; Liu, M.; Pan, Z.; Cai, Y.; Shao, P. Making FAQ Chatbots More Inclusive: An Examination of Non-Native English Users’ Interactions with New Technology in Massive Open Online Courses. Int. J. Artif. Intell. Educ. 2022, 33, 752–780. [Google Scholar] [CrossRef]
  41. Haristiani, N.; Rifai, M.M. Chatbot-based application development and implementation as an autonomous language learning medium. Indones. J. Sci. Technol. 2021, 6, 561–576. [Google Scholar] [CrossRef]
  42. Hew, K.F.; Huang, W.; Du, J.; Jia, C. Using chatbots to support student goal setting and social presence in fully online activities: Learner engagement and perceptions. J. Comput. High. Educ. 2022, 35, 40–68. [Google Scholar] [CrossRef] [PubMed]
  43. Hsu, M.-H.; Chen, P.-S.; Yu, C.-S. Proposing a task-oriented chatbot system for EFL learners speaking practice. Interact. Learn. Environ. 2021, 1–12. [Google Scholar] [CrossRef]
  44. Hsu, H.-H.; Huang, N.-F. Xiao-Shih: A Self-Enriched Question Answering Bot with Machine Learning on Chinese-Based MOOCs. IEEE Trans. Learn. Technol. 2022, 15, 223–237. [Google Scholar] [CrossRef]
  45. Huang, W.; Hew, K.F.; Gonda, D.E. Designing and evaluating three chatbot-enhanced activities for a flipped graduate course. Int. J. Mech. Eng. Robot. Res. 2019, 8, 813–818. [Google Scholar] [CrossRef]
  46. Jasin, J.; Ng, H.T.; Atmosukarto, I.; Iyer, P.; Osman, F.; Wong, P.Y.K.; Pua, C.Y.; Cheow, W.S. The implementation of chatbot-mediated immediacy for synchronous communication in an online chemistry course. Educ. Inf. Technol. 2023, 28, 10665–10690. [Google Scholar] [CrossRef]
  47. Lee, Y.-F.; Hwang, G.-J.; Chen, P.-Y. Impacts of an AI-based chabot on college students’ after-class review, academic performance, self-efficacy, learning attitude, and motivation. Educ. Technol. Res. Dev. 2022, 70, 1843–1865. [Google Scholar] [CrossRef]
  48. Li, K.-C.; Chang, M.; Wu, K.-H. Developing a task-based dialogue system for english language learning. Educ. Sci. 2020, 10, 306. [Google Scholar] [CrossRef]
  49. Li, Y.S.; Lam, C.S.N.; See, C. Using a Machine Learning Architecture to Create an AI-Powered Chatbot for Anatomy Education. Med. Sci. Educ. 2021, 31, 1729–1730. [Google Scholar] [CrossRef]
  50. Liu, Q.; Huang, J.; Wu, L.; Zhu, K.; Ba, S. CBET: Design and evaluation of a domain-specific chatbot for mobile learning. Univers. Access Inf. Soc. 2020, 19, 655–673. [Google Scholar] [CrossRef]
  51. Mendez, S.L.; Johanson, K.; Conley, V.M.; Gosha, K.; Mack, N.; Haynes, C.; Gerhardt, R. Chatbots: A tool to supplement the future faculty mentoring of doctoral engineering students. Int. J. Dr. Stud. 2020, 15, 373–392. [Google Scholar] [CrossRef] [PubMed]
  52. Neo, M. The Merlin Project: Malaysian Students’ Acceptance of an AI Chatbot in Their Learning Process. Turk. Online J. Distance Educ. 2022, 23, 31–48. [Google Scholar] [CrossRef]
  53. Neo, M.; Lee, C.P.; Tan, H.Y.-J.; Neo, T.K.; Tan, Y.X.; Mahendru, N.; Ismat, Z. Enhancing Students’ Online Learning Experiences with Artificial Intelligence (AI): The MERLIN Project. Int. J. Technol. 2022, 13, 1023–1034. [Google Scholar] [CrossRef]
  54. Ong, J.S.H.; Mohan, P.R.; Han, J.Y.; Chew, J.Y.; Fung, F.M. Coding a Telegram Quiz Bot to Aid Learners in Environmental Chemistry. J. Chem. Educ. 2021, 98, 2699–2703. [Google Scholar] [CrossRef]
  55. Rodríguez, J.A.; Santana, M.G.; Perera, M.V.A.; Pulido, J.R. Embodied conversational agents: Artificial intelligence for autonomous learning. Pixel-Bit Rev. De Medios Y Educ. 2021, 62, 107–144. [Google Scholar]
  56. Rooein, D.; Bianchini, D.; Leotta, F.; Mecella, M.; Paolini, P.; Pernici, B. aCHAT-WF: Generating conversational agents for teaching business process models. Softw. Syst. Model. 2022, 21, 891–914. [Google Scholar] [CrossRef]
  57. Sáiz-Manzanares, M.C.; Marticorena-Sánchez, R.; Martín-Antón, L.J.; González Díez, I.; Almeida, L. Perceived satisfaction of university students with the use of chatbots as a tool for self-regulated learning. Heliyon 2023, 9, e12843. [Google Scholar] [CrossRef]
  58. Schmulian, A.; Coetzee, S.A. The development of Messenger bots for teaching and learning and accounting students’ experience of the use thereof. Br. J. Educ. Technol. 2019, 50, 2751–2777. [Google Scholar] [CrossRef]
  59. Suárez, A.; Adanero, A.; Díaz-Flores García, V.; Freire, Y.; Algar, J. Using a Virtual Patient via an Artificial Intelligence Chatbot to Develop Dental Students’ Diagnostic Skills. Int. J. Environ. Res. Public Health 2022, 19, 8735. [Google Scholar] [CrossRef]
  60. Vázquez-Cano, E.; Mengual-Andrés, S.; López-Meneses, E. Chatbot to improve learning punctuation in Spanish and to enhance open and flexible learning environments. Int. J. Educ. Technol. High. Educ. 2021, 18, 33. [Google Scholar] [CrossRef]
  61. Villegas-Ch, W.; Arias-Navarrete, A.; Palacios-Pacheco, X. Proposal of an Architecture for the Integration of a Chatbot with Artificial Intelligence in a Smart Campus for the Improvement of Learning. Sustainability 2020, 12, 1500. [Google Scholar] [CrossRef]
  62. Wambsganss, T.; Zierau, N.; Söllner, M.; Käser, T.; Koedinger, K.R.; Leimeister, J.M. Designing Conversational Evaluation Tools. Proc. ACM Hum.-Comput. Interact. 2022, 6, 1–27. [Google Scholar] [CrossRef]
  63. Wan Hamzah, W.M.A.F.; Ismail, I.; Yusof, M.K.; Saany, S.I.M.; Yacob, A. Using Learning Analytics to Explore Responses from Student Conversations with Chatbot for Education. Int. J. Eng. Pedagog. 2021, 11, 70–84. [Google Scholar] [CrossRef]
  64. Yildiz Durak, H. Conversational agent-based guidance: Examining the effect of chatbot usage frequency and satisfaction on visual design self-efficacy, engagement, satisfaction, and learner autonomy. Educ. Inf. Technol. 2023, 28, 471–488. [Google Scholar] [CrossRef]
  65. Yin, J.; Goh, T.-T.; Yang, B.; Xiaobin, Y. Conversation Technology with Micro-Learning: The Impact of Chatbot-Based Learning on Students’ Learning Motivation and Performance. J. Educ. Comput. Res. 2021, 59, 154–177. [Google Scholar] [CrossRef]
  66. Bakouan, M.; Kamagate, B.H.; Kone, T.; Oumtanaga, S.; Babri, M. A chatbot for automatic processing of learner concerns in an online learning platform. Int. J. Adv. Comput. Sci. Appl. 2018, 9, 168–176. [Google Scholar] [CrossRef]
  67. Briel, A. Toward an eclectic and malleable multiagent educational assistant. Comput. Appl. Eng. Educ. 2022, 30, 163–173. [Google Scholar] [CrossRef]
  68. González-González, C.S.; Muñoz-Cruz, V.; Toledo-Delgado, P.A.; Nacimiento-García, E. Personalized Gamification for Learning: A Reactive Chatbot Architecture Proposal. Sensors 2023, 23, 545. [Google Scholar] [CrossRef]
  69. Janati, S.E.; Maach, A.; Ghanami, D.E. Adaptive e-learning AI-powered chatbot based on multimedia indexing. Int. J. Adv. Comput. Sci. Appl. 2020, 11, 299–308. [Google Scholar] [CrossRef]
  70. Jimenez Flores, V.J.; Jimenez Flores, O.J.; Jimenez Flores, J.C.; Jimenez Castilla, J.U. Performance comparison of natural language understanding engines in the educational domain. Int. J. Adv. Comput. Sci. Appl. 2020, 11, 753–757. [Google Scholar]
  71. Karra, R.; Lasfar, A. Impact of Data Quality on Question Answering System Performances. Intell. Autom. Soft Comput. 2023, 35, 335–349. [Google Scholar] [CrossRef]
  72. Kharis, M.; Schön, S.; Hidayat, E.; Ardiansyah, R.; Ebner, M. Mobile Gramabot: Development of a Chatbot App for Interactive German Grammar Learning. Int. J. Emerg. Technol. Learn. 2022, 17, 52–63. [Google Scholar] [CrossRef]
  73. Kohnke, L. A Pedagogical Chatbot: A Supplemental Language Learning Tool. RELC J. 2022, 1–11. [Google Scholar] [CrossRef]
  74. Lippert, A.; Shubeck, K.; Morgan, B.; Hampton, A.; Graesser, A. Multiple Agent Designs in Conversational Intelligent Tutoring Systems. Technol. Knowl. Learn. 2020, 25, 443–463. [Google Scholar] [CrossRef]
  75. Mateos-Sanchez, M.; Melo, A.C.; Blanco, L.S.; García, A.M.F. Chatbot, as Educational and Inclusive Tool for People with Intellectual Disabilities. Sustainability 2022, 14, 1520. [Google Scholar] [CrossRef]
  76. Memon, Z.; Aghian, H.; Sarfraz, M.S.; Hussain Jalbani, A.; Oskouei, R.J.; Jalbani, K.B.; Hussain Jalbani, G. Framework for Educational Domain-Based Multichatbot Communication System. Sci. Program. 2021, 2021, 5518309. [Google Scholar] [CrossRef]
  77. Nguyen, H.D.; Tran, D.A.; Do, H.P.; Pham, V.T. Design an Intelligent System to automatically Tutor the Method for Solving Problems. Int. J. Integr. Eng. 2020, 12, 211–223. [Google Scholar] [CrossRef]
  78. Pashev, G.; Gaftandzhieva, S. Facebook Integrated Chatbot for Bulgarian Language Aiding Learning Content Delivery. TEM J. 2021, 10, 1011–1015. [Google Scholar] [CrossRef]
  79. Schmitt, A.; Wambsganss, T.; Leimeister, J.M. Conversational Agents for Information Retrieval in the Education Domain: A User-Centered Design Investigation. Proc. ACM Hum.-Comput. Interact. 2022, 6, 1–22. [Google Scholar] [CrossRef]
Figure 1. PRISMA framework flowchart.
Figure 2. Subject fields of the collected documents.
Figure 3. Distribution of the 73 analyzed documents over time.
Figure 4. The development cycle of an educational agent.
Figure 5. Usage roles of the examined chatbots.
Figure 6. Design suggestions of the examined chatbots.
Table 1. Summary of the found systematic reviews.
Reference | Type | Year | Primary Studies / Data Sources | Method | Subject
[4] | SLR | 2021 | 29/1 | Coding scheme based on Chang & Hwang (2019) and Hsu et al. (2012) | Learning domains of ECAs; learning strategies used by ECAs; research design of studies in the field; analysis methods utilized in relevant studies; nationalities of authors and journals publishing relevant studies; productive authors in the field
[6] | SLR | 2023 | 36/3 | Guidelines based on Keele et al. (2007) | Fields in which ECAs are used; platforms on which the ECAs operate; roles that ECAs play when interacting with students; interaction styles supported by the ECAs; principles used to guide the design of ECAs; empirical evidence supporting the capability of using ECAs as teaching assistants for students; challenges of applying and using ECAs in the classroom
[3] | SLR | 2021 | 53/6 | Methods based on Kitchenham et al. (2007), Wohlin et al. (2012) and Aznoli & Navimipour (2017) | The most recent research status or profile of ECA applications in the education domain; the primary benefits of ECA applications in education; the challenges faced in implementing an ECA system in education; the potential future areas of education that could benefit from the use of ECAs
[2] | SLR | 2020 | 80/8 | PRISMA framework | The different types of chatbots currently in use in education and/or educational environments; the way ECAs affect student learning or service improvement; the technology ECAs use and the learning result obtained from each; the cases in which a chatbot helps learning under conditions similar to those of a human tutor; the possibility of evaluating the quality of chatbots and the techniques that exist for that
[7] | SLR | 2020 | 47 CAs/1 | Undefined | Qualitative assessment of ECAs that operate on Meta Messenger
[5] | SLR | 2021 | 74/4 | PRISMA framework | The objectives for implementing chatbots in education; the pedagogical roles of chatbots; the application scenarios that have been used to mentor students; the extent to which chatbots are adaptable to students' personal needs; the domains in which chatbots have been applied so far
Table 2. Inclusion and exclusion criteria.
Inclusion criteria:
IC1: The examined chatbot application was used in teaching a subject to students, and the article contains information about designing, integrating or evaluating it, as well as the tools or environments used to achieve that.
IC2: The publication year of the article is between 2018 and 2023.
IC3: The document type is a journal article.
IC4: The retrieved article is written in English.
Exclusion criteria:
EC1: The chatbot application was designed for the training of specific target groups other than students or learners of an educational institution.
EC2: Articles that focus too much on the results for the learners and do not describe the role of the CA and how it contributes to those results.
Table 3. Educational grade levels in which ECAs are used.
Educational Grade Level | References
K-12 education (14) | [9,10,11,12,13,14,15,16,17,18,19,20,21,22]
Tertiary education (43) | [23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65]
Unspecified (16) | [6,7,66,67,68,69,70,71,72,73,74,75,76,77,78,79]
Table 4. Categories of the collected development processes.
Category | References
Linear (28) | [12,18,19,21,22,29,30,34,36,38,39,42,46,52,57,58,59,60,62,64,65,68,71,72,73,75,78,79]
Iterative (3) | [9,41,63]
Table 5. Synopsis of information collected in the research phase.
Kind of Information | References
A suitable methodology to develop an ECA. | [40]
Communication methods and strategies that are going to be used to form the function of the educational tutor. | [42]
Design requirements to develop an ECA. | [79]
Design principles to develop an ECA. | [79]
Empirical information by examining similar applications. | [9,36,57,60]
Learning methods that are going to be used to form the function of the educational tutor. | [58]
Mechanism prototypes that will be used to develop the educational agent. | [69]
Ready-to-use chatbots for the learning process. | [10,11,26]
Suitable environments and tools to develop an ECA. | [12,21,49,60,75]
Technical requirements to develop an ECA. | [44,75]
Theories relevant to the development of an ECA. | [25]
Table 6. Synopsis of the requirements gathered in the analysis phase.
Requirements | References
Users' needs and expectations. | [19,30,36,39,41,62,73,75,79]
Technical requirements. | [75]
Collection of students' common questions and most searched topics to be used in the educational material. | [18,19,34,38,39,46,60,73]
Table 7. Synopsis of the objects to be defined during the definition phase.
Objects to Be Defined | References
An application flow to show the function of the ECA. | [41]
Communication channels the ECA is going to be accessible from. | Common to all
Education plan. | [11]
Learning material and items the chatbot is going to use. | Common to all
Learning methods and techniques that will be used to develop the agent. | [25,41]
Learning objectives, tasks and goals. | [39,41]
Student personas and conversational standards of the student–agent interaction. | [29]
Teaching and learning processes. | [12,54,64,65]
The conversational flow between the student and the chatbot. | [52,76]
The design principles of the chatbot. | [19,62,79]
The processing mechanisms of the chatbot that are going to be used. | [29,42]
The purpose of the educational CA. | [9,29,53,72]
Tools and environments, or the ready solutions of already built CAs, that are going to be used. | Common to all
Table 8. Usage roles of the examined chatbots.
Usage Roles of ECAs | References
Course evaluator | [62]
Learner taught by the student | [6,28]
Learning guide in a gamification environment | [68]
Self-evaluator (learning partner/educational tutor) | [9,10,11,12,13,17,20,21,22,29,30,33,45,52,55,57,58,59,64,65,74,75,77,79]
Storytelling conversational assistant | [24]
Student assistant (provider of supportive educational material) | [11,19,37,38,46,53]
Question solver | [33,40]
Table 9. Synopsis of the suggested steps for the shaping and formulation phase.
Proposed Steps | References
Evaluating students' questions to measure their complexity and bias rate. | [38]
Shaping the final learning material that is going to be used by the educational tutor. | Common to all
Table 10. Synopsis of the suggested steps for the adaptations and modifications phase.
Proposed Steps | References
Evaluating students' competence in answering questions to modify the learning content accordingly. | [38]
Enriching the educational material with various forms of content apart from text messages. | [21,52,53,65]
Studying the curriculum of the educational institute and adapting the design principles of the ECA to it. | [21,22,62]
Studying the curriculum of the educational institute and modifying the function of the ECA accordingly. | [21,22]
Adapting the function of the ECA to the teaching process. | [12]
Obtaining domain experts' opinions to modify the function and the learning material used by the ECA. | [21,41]
Table 11. Design suggestions of the examined chatbots.
Table 11. Design suggestions of the examined chatbots.
Design DirectionDesign SuggestionReferences
Adaption of the ECA’s function to students’ needsAdaptation of the ECA’s function to the emotional needs of the students[79]
Alignment of the ECA’s function with students’ learning needs[6,19,23,33,68]
Adjustment of the ECA’s function to the curriculum of the students[21,22]
Modification of the ECA’s function to match user expectations[23,30]
Definition of the ECA’s vocabulary and expression style to be suitable with the students’ linguistic capabilities[28]
Construction of the ECA in order to be an inclusive educational tool (suitable for every learner)[75]
AccessibilityAlignment of the ECA’s function with the selected communication channels[19]
The ECA should be accessible from various communication channels[17,26]
Guaranteed chatbot availability regardless of the external conditions[23,68]
Conversational traitsEquipment of the ECA with many variations for phrases with the same meaning[19,55,59]
Acceptance of oral messages as input[17,75]
The ECA should address students by their name to provide personalized conversations[46,52]
Avoidance of spelling errors[28]
Capability to discuss wide range for discussion topics including casual, non-educational subjects[6,23,46,62]
The ECA should collect more information when the user cannot be understood by discussing with them to identify their intent[17,28]
The ECA should let the user define the conversational flow when the CA cannot respond[31]
Provision of messages about the limitation of the ECA when it cannot respond[40]
Provision of motivational comments and rewarding messages to students[12,17,27,29,45,68,74]
The ECA should produce quick responses[20,46]
Redirection of students to an educator’s communication channel when the ECA cannot respond[46]
Wise usage of humor in the user interaction[26]
Usage of easy-to-understand language in the response[17]
Utilization of human-like conversational characteristics such as emoticons, avatar representations and greeting messages[27,28,29,46,62]
Utilization of previous students’ responses to improve its conversational ability and provide personalized communication[17,26,33,36,79]
Usage of button-formed topic suggestion for quicker interaction[17]
Usage of “start” and “restart” interaction buttons [17]
Design process general suggestions Engage every possible stakeholder of the development process to gain better results[36]
Make the database of the system expendable[33]
Handling of the educational materialExplanation of the educational material from various perspectives[27,52,74]
The ECA should predict different subjects the students did not comprehend and provide relevant supportive material[46]
Suggestion of external learning sources to students when it cannot respond[33]
Proposition of learning topics similar to the current to help students learn on their own[33]
The ECA should provide educational material in small segments with specific content[28,38,52]
Provision of educational material in various forms apart from text message[12,17,26,29,52,53,64,65,68]
Navigation buttons between the segments of the presented educational material[52]
Oral narration to accompany the offered learning material[52]
Handling of quizzes, tests or self-evaluation material[9,10,11,12,13,17,21,22,28,29,30,33,45,52,55,56,58,59,64,65,74,75,77,79]
Recommendation of suitable practice exercises to the students[13]
Instruction provisionThe ECA should provide usage instructions through the functional environment of the ECA[58,79]
Provided learning experienceIntegration of other technologies such as AR to provide better user experience[12]
The ECA should provide feedback to students[12,13,20,27,45]
The ECA should provide personalized learning experience[68]
Use of gamification elements[68]
Question handling to and control by the studentsAddition of buttons so students can handle the questions they cannot answer[20,28]
The ECA should allow students to trace back to previous exercises and definitions [27]
Provision of hints to students when they cannot answer a question[28]
Use of button-formed reappearance of wrong questions so as to be easier to answer[28]
Regulations for the function of the systemAlignment of the ECA’s function with the ethics policies and rules for the protection of the user data[17,26,28,46,62,68]
Adjustment of the ECA’s function to the policies of the educational institution [62,79]
Students’ notificationsThe ECA should provide updates to students for important course events such as deadlines[29]
Traits of the provided learning activitiesProvision of challenging and interesting student learning activities[6,12,23]
The ECA should provide collaborative learning activities[68]
Utilization of competitive learning activities[10,68]
Teacher support: Alignment of the ECA’s function with the form of the teaching material [19]
Adjustment of the ECA’s function to the teaching style of the educator[68]
The ECA should provide goal-setting possibilities to the teachers[68]
Tutoring approach: Alignment of the ECA’s function with specific learning theories [46,58]
Adjustment of the ECA’s function to a specific motivational framework [12,29]
Alignment of the ECA’s function with the learning purpose [9]
Design of the ECA as a student that learns from the students[6]
The ECA should utilize predefined learning paths[56]
Usage of students’ previous knowledge and skills to help them learn new information[6]
Utilization of learning incentives, such as students’ grades, to increase students’ willingness to engage [29]
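Several of the suggestions above (small content segments, navigation buttons between segments, tracing back to previous material) can be illustrated with a minimal data model. The sketch below is our own illustration, assuming a button-driven delivery flow; the names `Segment`, `Lesson`, `next` and `back` are hypothetical, not code from any reviewed study:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Segment:
    """A small piece of educational material with specific content."""
    text: str
    media_url: Optional[str] = None  # optional non-text form (image, video, audio)

@dataclass
class Lesson:
    segments: List[Segment]
    position: int = 0

    def current(self) -> Segment:
        return self.segments[self.position]

    def next(self) -> Segment:
        """Handler for a 'next' navigation button between segments."""
        if self.position < len(self.segments) - 1:
            self.position += 1
        return self.current()

    def back(self) -> Segment:
        """Handler for a 'back' button, letting students trace back to earlier material."""
        if self.position > 0:
            self.position -= 1
        return self.current()

lesson = Lesson([Segment("A chatbot simulates conversation."),
                 Segment("Intents map user inputs to responses.")])
print(lesson.next().text)  # advances to the second segment
print(lesson.back().text)  # traces back to the first segment
```

The same structure could be extended with a hint field per segment or a pointer to external sources for the fallback suggestions listed above.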
Table 12. Synopsis of the suggested steps for the training phase.
Proposed Steps | References
Using previously collected or preconstructed material to train the chatbot [31,48]
Table 13. Overview of the training methods for an educational chatbot.
Training Method | References
Datasets of the development platforms[70,75]
Existing corpora of data[48]
Educational material (predefined sets of questions previously answered by students or formed by domain experts or educators) [18,21,31,34,44,49,59,76]
Machine learning techniques[27,63]
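The "educational material" training method listed above can be sketched as a simple retrieval step over educator-authored question/answer pairs. This is a minimal illustration under our own assumptions (bag-of-words cosine matching, a hypothetical `qa_pairs` set, and a hypothetical similarity threshold), not the approach of any particular reviewed study:

```python
import math
import re
from collections import Counter

def tokens(text: str) -> Counter:
    """Lowercased bag-of-words representation of a message."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical educator-authored training pairs.
qa_pairs = {
    "what is a chatbot": "A chatbot is software that simulates conversation.",
    "when is the assignment deadline": "The assignment is due on Friday.",
}

def answer(question: str, threshold: float = 0.3) -> str:
    best_score, best_answer = max(
        (cosine(tokens(question), tokens(q)), a) for q, a in qa_pairs.items()
    )
    # When no trained question matches well, fall back to suggesting an external source,
    # as recommended above for cases where the ECA cannot respond.
    return best_answer if best_score >= threshold else "I am not sure; try the course wiki."

print(answer("what's a chatbot?"))  # matches the first trained question
```

In practice, the datasets of development platforms or machine learning techniques listed above would replace this hand-rolled matcher, but the training input keeps the same question/answer shape.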
Table 14. Synopsis of the suggested steps for the testing phase.
Proposed Steps | References
Trying a pilot application with a few students, teachers or domain experts to evaluate the initial functioning of the ECA [9,18,19,31,36,41,42,72,73,75]
Applying modifications based on the testing results.
Table 15. Overview of the testing methods for an educational chatbot.
Testing Method | References
Domain expert or educator testing[41,55]
Student testing[9,24,31,36,42,73,75]
Testing using performance metrics[19,76]
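Testing with performance metrics, as listed above, can be as simple as scoring the ECA's intent predictions against expected intents from a pilot test set. The sketch below is our own illustration; the utterances, intent labels and function names are hypothetical:

```python
# Hypothetical pilot results: (student utterance, expected intent, intent predicted by the ECA)
pilot_results = [
    ("when is the exam", "schedule", "schedule"),
    ("explain recursion", "explain_topic", "explain_topic"),
    ("show my grade", "grades", "schedule"),
    ("give me a hint", "hint", "hint"),
]

def accuracy(results):
    """Fraction of pilot utterances whose predicted intent matched the expected one."""
    correct = sum(1 for _, expected, predicted in results if expected == predicted)
    return correct / len(results)

def per_intent_recall(results):
    """Recall per expected intent; low values flag intents that need more training data."""
    totals, hits = {}, {}
    for _, expected, predicted in results:
        totals[expected] = totals.get(expected, 0) + 1
        hits[expected] = hits.get(expected, 0) + (expected == predicted)
    return {intent: hits[intent] / totals[intent] for intent in totals}

print(accuracy(pilot_results))                      # 0.75
print(per_intent_recall(pilot_results)["grades"])   # 0.0
```

Results like these feed directly into the "applying modifications based on the testing results" step: here, the `grades` intent would be retrained before a wider rollout.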
Table 16. Synopsis of the suggested steps for the guidance and motivation provision phase.
Proposed Steps | References
Providing guidance to the students on how to use the chatbot. [10,11,22,24,40,54,57,58]
Motivating students to use the agent.[12,24,30]
Table 17. Synopsis of the suggested steps for the evaluation phase.
Proposed Steps | References
Evaluating the chatbot based on system analytics and user evaluations (common to all)
Restarting the procedure from the design stage and using the evaluation data to improve the agent.
Table 18. Summary of the evaluation instruments for an educational chatbot.
Evaluation InstrumentsReferences
Interviews[13,14,16,21,29,34,42,45,46,51,73]
Learning and interaction analytics[9,13,15,16,17,20,24,25,32,33,39,42,43,48,58,60]
Questionnaires[10,11,12,13,14,15,17,19,22,23,26,28,29,30,31,32,33,35,36,39,40,41,42,43,45,46,47,48,49,50,51,52,53,54,55,57,58,59,62,64,65,67,72]
Student performance[16,27,29,34,43,61,67]
Technical performance metrics[18,44,60,62,69,70,76]
Table 19. Synopsis of the evaluation methods for an educational chatbot.
Evaluation Method | References
Comparison of the current ECA with other ECAs [44,79]
Functional assessments by domain experts [39,49]
Student usage evaluation (common to all)
Usability evaluation[19,32,42,49,53]
Workshops for users or stakeholders[31]
Table 20. Overview of the evaluation metrics for an educational chatbot.
Evaluation Category | Evaluation Metric | References
Interaction metrics: Acceptance of chat messages [20]
Acceptance or indifference to the agent’s feedback[7,20]
Average duration of the interaction between the user and the ECA[24,42]
Capability of answering user questions and providing a pleasant interaction[7]
Capability of conducting a personalized conversation[7]
Capability of providing a human-like interaction and keeping a conversation going[7]
Capability of understanding and responding to the user[7,9,19,32,53,62]
Capability of understanding different uses of the language[7]
Periodicity of chat messages[20]
Quality and accuracy of responses[18,76]
The average number of words in the messages written by the students[24]
The number of propositions that were utilized by the learners[33]
The total duration of students’ time spent interacting with the ECA [9,13,19]
The total number of buttons that were pressed by the learners[24]
The total number of propositions the ECA offered to the learners [33]
The total number of users that utilized the ECA[33]
The total number of words that were produced by the ECA[24]
The total number of words written by the students[24]
The number of user inputs that were formed using natural language and were understood by the chatbot[33]
Total number of interactions between the student and the ECA[24,33]
Total number of messages between the student and the ECA[42]
Support and scaffolding of the educational process: Capability of supporting the achievement of students’ learning goals and tasks [7,13,42]
Fulfillment of the initial purpose of the agent[55]
Rate of irrelevant (in an educational context) questions asked by the students[9]
Students’ rate of correct answers (to questions posed by the chatbot)[20]
The quality of the educational material suggestions made by the ECA[7,9,19,32,62]
Technical suitability and performance: Compatibility with other software [19]
Maintenance needs[19]
User experience: Students’ self-efficacy and learning autonomy [64,79]
Students’ workload[9]
Usability[19,32,42,49,53]
User motivation[9,53]
User satisfaction[33,51,58,64]
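Many of the interaction metrics above (total messages, average words per student message, total interaction duration) can be computed directly from a timestamped chat log. The following sketch is our own illustration on a hypothetical log format, not the analytics pipeline of any reviewed study:

```python
from datetime import datetime

# Hypothetical chat log: (timestamp, sender, message)
log = [
    (datetime(2023, 9, 1, 10, 0, 0), "student", "what is an intent"),
    (datetime(2023, 9, 1, 10, 0, 5), "eca", "An intent is the goal behind a user message."),
    (datetime(2023, 9, 1, 10, 1, 30), "student", "give me an example please"),
    (datetime(2023, 9, 1, 10, 1, 33), "eca", "Asking for a deadline is a 'schedule' intent."),
]

student_msgs = [msg for _, sender, msg in log if sender == "student"]

# Total number of messages between the student and the ECA.
total_messages = len(log)
# Average number of words in the messages written by the student.
avg_words_per_student_msg = sum(len(m.split()) for m in student_msgs) / len(student_msgs)
# Total duration of the student's interaction with the ECA, in seconds.
interaction_duration = (log[-1][0] - log[0][0]).total_seconds()

print(total_messages)             # 4
print(avg_words_per_student_msg)  # 4.5
print(interaction_duration)       # 93.0
```

Aggregating the same quantities across users would also yield the "total number of users that utilized the ECA" and the total words produced by each side, as listed in the table above.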
Table 21. Suggestions for the evaluation of an ECA.
Suggestion | References
Evaluation based on the initial purpose of the ECA[55]
Utilization of a specific evaluation plan [51,63]
Usage of a progress bar and “skip” buttons in the evaluation form [62]