Article

AIMT Agent: An Artificial Intelligence-Based Academic Support System

Department of Computer Science, Democritus University of Thrace (DUTH), 65404 Kavala, Greece
* Author to whom correspondence should be addressed.
Information 2025, 16(4), 275; https://doi.org/10.3390/info16040275
Submission received: 13 February 2025 / Revised: 27 March 2025 / Accepted: 28 March 2025 / Published: 29 March 2025
(This article belongs to the Special Issue Artificial Intelligence and Games Science in Education)

Abstract

The development and use of conversational agents in education has become widespread in recent years because of their ability to facilitate student interaction with the learning material, improve engagement and provide academic support, while at the same time reducing the teachers’ workload. This is especially important in the case of distance and asynchronous education, where the availability of academic support must be ensured at any time. This paper reports on the implementation and evaluation of a conversational agent called AIMT Agent, developed to support postgraduate students during their studies. The agent is based on Rasa, an open-source framework that provides the tools for building natural language understanding into the agent, and is fully integrated with the Moodle Learning Management System (LMS). The agent was assessed by 24 postgraduate students through a questionnaire based on the Technology Acceptance Model (TAM), in terms of perceived usefulness and perceived ease of use. The results show that users assessed the conversational agent favorably in both respects. This confirms the validity of the approach and motivates refinements and further development.

1. Introduction

Conversational agents (chatbots) are software applications whose purpose is to simulate conversation between a human user and the software agent, either through text or through voice. Their objective is to allow users to converse with the agent in natural language through a user interface. Early conversational agents were based on traditional text processing and rule-based techniques. These systems had very limited capabilities in understanding and generating natural language, which hindered the natural flow of conversations; as a result, their use was not widespread. The advent of Artificial Intelligence (AI), and more specifically of machine learning in conjunction with more advanced natural language processing methodologies, has revolutionized the implementation of conversational agents, which have become more effective in text understanding and generation [1]. This has resulted in renewed interest in the application of conversational agents in several fields. For example, in business, conversational agents have been used for sales or customer support [2,3], and in healthcare they have been used for providing health information or for patient assistance [4,5]. The introduction of Large Language Models (LLMs) for natural language processing and their application in cutting-edge chatbots like ChatGPT have augmented the capabilities of such systems, allowing for even more human-like conversations [6].
Another area where conversational agents have been applied is education [7,8]. In this case, agents can reduce the teachers’ workload by providing assistance to students without the teacher necessarily being available. In addition, such agents provide easy access and, therefore, enhance student engagement [9], allowing students to receive instant responses to their questions. This is especially important in the case of asynchronous distance education, where students carry out their educational activities in their own time. Furthermore, a meta-analysis presented in [10] showed that the use of chatbots can significantly affect learning outcomes. Conversational agents have been applied in various areas of the educational process, including interaction with the course material [11], assessment [12], and assistance with administrative issues [13]. Most research studies have focused on the development of chatbots designed to enhance students’ academic performance. The study presented in [14] reports on chatbots utilized in English language teaching and learning. In [15], the authors implemented an AI-driven chatbot aimed at aiding students with an introductory programming course. The chatbot used a database of common questions and answers, complemented by the teacher’s input. Similarly, in [16], a chatbot was employed to aid with the teaching of the Logo programming language. The authors found that chatbots can facilitate learning by allowing students to complete tasks at their own speed and with less pressure. The COVID-19 pandemic, during which guidance from teachers was limited, presented an opportunity to test the use of conversational agents in education [17]. The EDUBOT chatbot proposed in that work resides on a platform separate from the one hosting the educational resources. An LLM-driven chatbot integrated into the Moodle Learning Management System (LMS) is presented in [18]. The chatbot can retrieve information from lecture notes, slides and supplemental material of a specific course in order to answer student questions during self-study. The user experience of another LLM-based chatbot, used for teaching physics, is presented in [19]. A purpose-made website was implemented to provide the UI of the chatbot, with a GPT-3.5 engine providing the interaction with context-specific knowledge. The authors report positive student experiences in terms of ease of use and efficiency.
The aforementioned works are examples of chatbots that directly aid students with their academic subjects (e.g., programming or language), i.e., they deal with the learning process directly. However, fewer works have focused on aiding students with administrative issues. The adoption of such service-oriented chatbots in the educational sector appears to be affected mainly by social influence and perceived ease of use rather than perceived usefulness [20]. One example of a service-oriented chatbot is the FIT-EBot intelligent assistant presented in [13], which is designed to provide assistance with questions regarding courses, exams, important dates, etc. It uses the Facebook Messenger platform as a front end for communication and retrieves information from the university’s academic database. An advantage of this assistant is its integration with university services for the purposes of extracting knowledge. However, this knowledge is generic in terms of academic and administrative services, and it is not personalized to any one user. The academic advising system MyAdvisor in [21] is designed to help students with prescriptive academic inquiries. The chatbot is based on advising scenarios compiled by expert advisors and guides students in tasks such as qualifying for an internship or selecting a course path. The evaluation of the system reveals that students responded positively to its use, even though issues with the communication style and limitations in the number of available tasks remain the subject of ongoing work. Another example is the chatbot introduced in [22], which was designed to answer course-related queries. In this research, the agent utilizes a knowledge base constructed by assembling thousands of query-based conversations from different websites. This implementation also does not provide personalized services.
Despite issues such as privacy and personalization that may be impacting the adoption of this technology [23,24], it is clear that conversational agents are steadily becoming an increasingly popular option in educational settings. The literature overview presented above reveals that these systems generally lack personalization and that very few are fully integrated into existing and widely available learning platforms, which are commonly used in various courses, especially in distance education. The present paper attempts to address these shortcomings by introducing an AI-based conversational agent named AIMT Agent, which was developed specifically for the MSc in Immersive Technologies—Innovation in Education, Training and Game Design (IMT), a postgraduate program at the Department of Computer Science of the Democritus University of Thrace. The motivation behind the development of this agent is two-fold: firstly, to introduce an automated system for providing information regarding the MSc program to prospective students, and secondly, to employ an always-online assistant to support current students in their postgraduate studies. For the latter purpose, the assistant uses the resources in each student’s enrolled courses to provide guidance regarding their coursework and other educational activities. The proposed conversational agent therefore offers a novel approach by (a) providing personalized assistance to each student and (b) being fully integrated into the chosen learning management system. The paper also reports on the results of the agent’s evaluation by its users. The agent was evaluated according to the Technology Acceptance Model (TAM) originally proposed by [25], a commonly used tool for evaluating software in terms of usefulness and ease of use. At this stage of development, the objective of the evaluation is to validate the proposed integration of the LMS with the conversational agent, to ensure that the chosen user interface is appropriate, and to ascertain whether users respond positively to the provided functionality.
The paper is structured as follows: In Section 2, the functional specifications of the conversational agent are described, and the details of the evaluation process are given. In Section 3, the results of the evaluation are presented. In Section 4, the results are discussed, and finally, the Conclusions section offers recommendations for future work based on observations during the evaluation phase and user feedback.

2. Materials and Methods

2.1. The Conversational Agent

At the core of the AIMT conversational agent is the Rasa framework, an open-source AI-based framework that provides a set of primitives for natural language understanding, on top of which conversational agents can be developed. Rasa was selected because of its availability at no cost, its flexibility, its ability to be installed and run as a service on a local computer, and its extensive community support. In contrast to an approach based on Large Language Models (LLMs), the use of Rasa for natural language processing offers more customization options and more seamless integration with the existing learning platform (Moodle), and it can be served from a machine with minimal system requirements. For example, while LLMs exhibit exceptional language processing capabilities, Rasa provides the low-level functionality to access databases and process strings, so that responses can be produced from the retrieved data.
The first step in implementing the conversational agent was the creation of a knowledge base (containing the training data), i.e., the queries that the agent understands and their corresponding responses, as well as components that regulate the flow of a conversation according to each query. More specifically, the knowledge base consists of the following main components:
  • Intents: These are the questions that the user may ask. Based on common questions asked by prospective and current students, an initial list of queries in English was compiled. For each question, common formulations were given as examples and entered into the knowledge base. The more examples provided for a given query, the higher the probability that a user question is understood by the agent and matched to the correct intent.
  • Answers: These are the predefined answers that the agent returns when a known question is recognized. The answers were based on information provided on the MSc program’s website.
  • Rules: These define the correspondence between the questions and their predefined answers (a minimal sketch of all three components is given after this list).
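As an illustration, the sketch below shows how these three components are expressed in Rasa’s YAML training-data format (in an actual Rasa project they reside in separate files, such as nlu.yml, domain.yml and rules.yml). The intent name, example formulations and response text are hypothetical stand-ins, not the actual contents of the AIMT knowledge base.

```yaml
# Hypothetical knowledge base entries in Rasa's YAML format (sketch only).
nlu:
  - intent: ask_msc_duration
    examples: |
      - What is the duration of the MSc program?
      - How long does the MSc take?
      - How many semesters is the program?

responses:
  utter_msc_duration:
    - text: "The MSc program has a duration of three semesters."

rules:
  - rule: Answer a question about the program duration
    steps:
      - intent: ask_msc_duration
      - action: utter_msc_duration
```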
At the time of a query, Rasa will analyze the query based on a pre-trained language model and determine its similarity to the queries present in the knowledge base. If the similarity to the example queries falls below a certain level, the agent will trigger a default response, asking the user to rephrase. However, if the similarity is above the aforementioned threshold, Rasa will trigger a response according to which example query has the highest similarity to the user query. Figure 1 illustrates the Rasa service architecture for receiving queries and returning responses.
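For context, Rasa exposes a standard REST channel through which a front end can submit a user query and receive the triggered responses; whether the AIMT plugin uses this exact channel is not detailed here, but the sketch below illustrates the request/response exchange in this architecture. The host, port and sender identifier are placeholders.

```python
# Submitting a query to Rasa's standard REST channel (illustrative sketch;
# the host/port and sender id are placeholders, not the actual deployment's).
import requests

reply = requests.post(
    "http://rasa-host:5005/webhooks/rest/webhook",
    json={"sender": "moodle-user-42", "message": "When is the thesis deadline?"},
    timeout=10,
)
for message in reply.json():
    # Each element of the returned list is one agent utterance.
    print(message.get("text"))
```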
In general, there are two types of responses. The first type is predefined responses within the knowledge base that correspond to certain queries and are returned to the user immediately. This type generally relates to queries that prospective students make, such as ‘what is the duration of the MSc program’. In this case, the responses are fixed and require no further processing. The second type is responses generated by actions. Actions are functions that are triggered by specified queries and that, after some specific processing, generate the returned response as a string. Actions are also part of the knowledge base and are defined in a separate file as Python functions, and their correspondence to particular questions (the conversation flow) must also be defined. This kind of processing is needed whenever the response must be personalized. For example, when a query requires information on a particular student, access to the Moodle database and processing of the retrieved data are needed, so the implementation of an action is necessary. For this reason, actions are usually employed for queries made by current students. Actions are, therefore, the focus of the Rasa service implementation in this study and are explained in more detail in the next section.
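As an illustration of this mechanism, the snippet below sketches a custom action in the form expected by the Rasa SDK. The action name, the assumption that the Moodle plugin passes the user’s id as the sender id, and the fetch_deadlines() helper (stubbed here, and sketched against the Moodle database in Section 2.2) are illustrative, not the authors’ exact implementation.

```python
# Sketch of a Rasa custom action (rasa_sdk); names and helpers are
# hypothetical, not the actual AIMT implementation.
from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


def fetch_deadlines(user_id: Text) -> List[Any]:
    """Stub for the Moodle database lookup sketched in Section 2.2."""
    return []


class ActionPendingAssignments(Action):
    def name(self) -> Text:
        # The conversation-flow definition in the knowledge base refers
        # to this name.
        return "action_pending_assignments"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        # Assumes the Moodle plugin passes the user's id as the sender id.
        user_id = tracker.sender_id
        deadlines = fetch_deadlines(user_id)
        if deadlines:
            lines = [f"{name}: due {due}" for name, due in deadlines]
            dispatcher.utter_message(text="\n".join(lines))
        else:
            dispatcher.utter_message(text="You have no assignments pending.")
        return []
```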
In addition, various Rasa configuration options need to be set in order to define the system’s behavior. Such options include the pre-trained language model, the selected method for query string processing and tokenization, the sensitivity in matching queries to examples, and the training parameters. Once the knowledge base has been compiled and the Rasa system has been configured, a training process produces a model that includes all the information necessary for the agent to interact.
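These options live in Rasa’s config.yml file. A representative sketch is shown below; the specific pipeline components, the number of epochs and the fallback threshold are illustrative defaults, not necessarily the values used for the AIMT Agent.

```yaml
# Illustrative Rasa config.yml (not the actual AIMT configuration).
language: en
pipeline:
  - name: WhitespaceTokenizer      # query string tokenization
  - name: CountVectorsFeaturizer   # featurization of the tokens
  - name: DIETClassifier           # intent classification model
    epochs: 100                    # training parameter
  - name: FallbackClassifier       # triggers the "please rephrase" response
    threshold: 0.6                 # sensitivity of query matching
policies:
  - name: RulePolicy               # maps matched intents to responses
```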
The Rasa framework, together with the trained model, was installed on a dedicated desktop computer with a static IP address. The agent is executed as a background service, always available to accept incoming queries.

2.2. Integration with Moodle LMS

Once the Rasa framework had been installed and configured and its ability to respond to queries established, a user interface was implemented through which the students could communicate with the system. The MSc in Immersive Technologies—Innovation in Education, Training and Game Design is a fully online postgraduate program, with the main learning activities taking place in the Moodle Learning Management System (LMS). For this reason, the user interface was implemented as a Moodle block plugin so that students can easily access the agent during their study. The developed Moodle plugin, named AIMT Agent, was installed on the Moodle platform used by the MSc program. Figure 2 shows the interface of the AIMT Agent as an added block in the right-hand side Moodle drawer.
The block interface is divided into two main areas. The first area is the message box where the conversational agent prints its messages to the user, and the second area is the AIMT Agent avatar with its animations. Users type their queries into the textbox at the bottom of the block, but they can also enter their queries by voice by pressing the microphone icon. The agent then replies by displaying the answer in the conversation area. The agent’s reply can also be delivered as speech if the speaker icon located in the top right corner of the block is enabled.
Queries made in this user interface are communicated directly to the Rasa server described in the previous section. The server then performs its query matching operations and returns the appropriate response, which is displayed in the block. As mentioned earlier, for current students, Rasa responses are generally the result of actions being executed. In the case of the AIMT Agent, actions are scripts that connect to the Moodle database, execute an explicit MySQL query that includes the user id of the current user, retrieve the desired information, and format the resulting response string before it is displayed in the user interface. To access the remote Moodle database from the workstation where the Rasa service is installed, a direct connection is established with the mysql-connector-python library. Figure 3 illustrates this process.
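A sketch of such an action’s database step is given below. The table names assume Moodle’s default mdl_ prefix, and the host, credentials and exact SQL are placeholders; the actual queries used by the agent are not reproduced here.

```python
# Sketch of an action's database step: fetch a user's upcoming assignment
# deadlines from Moodle via mysql-connector-python. Table names assume
# Moodle's default 'mdl_' prefix; host, credentials and SQL are placeholders.
import mysql.connector


def fetch_deadlines(user_id: int) -> list:
    conn = mysql.connector.connect(
        host="moodle.example.edu",
        user="rasa_readonly",
        password="********",
        database="moodle",
    )
    try:
        cur = conn.cursor()
        cur.execute(
            """
            SELECT a.name, FROM_UNIXTIME(a.duedate)
            FROM mdl_assign a
            JOIN mdl_enrol e ON e.courseid = a.course
            JOIN mdl_user_enrolments ue ON ue.enrolid = e.id
            WHERE ue.userid = %s AND a.duedate > UNIX_TIMESTAMP()
            ORDER BY a.duedate
            """,
            (user_id,),
        )
        return cur.fetchall()
    finally:
        conn.close()
```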
The information retrieved in this way is specific to the query and to the current user, such as coursework deadlines, useful course material, and enrolled courses. These are the personalized responses mentioned earlier. Table 1 lists some of the main actions that current students can take advantage of, together with corresponding example queries. It must be noted that the queries shown in the table are only example formulations of the question posed for each function. In the knowledge base, multiple formulations of the same question are given for each function, so that regardless of how the user phrases the question, a response is returned by matching the user query with one of the stored examples.
Table 2 below shows example usages of the AIMT agent. There are three main outcomes following a user query: (a) the agent answers the question correctly (questions 1 and 2), (b) the agent misunderstands the question and gives an incorrect response (question 3), and (c) the agent gives a standard response stating that it cannot understand the question (question 4). In the first case, the agent correctly matches the user’s question with one of the examples in the knowledge base and returns the corresponding response. In the second case, the user query is matched with one of the examples of an unrelated question, and the agent returns the corresponding response, which is not what the user intended. This can be rectified by providing more examples for the relevant question so that matching with the correct question happens with more confidence. In question 4, the query could not be matched with a high enough degree of confidence to any of the examples in the knowledge base, which led to the agent giving a standard response that it could not understand the question. There are two reasons why this final outcome occurs: either the agent does not have this question stored in the knowledge base and, therefore, there are no relevant examples for matching, or the question is formulated in a completely different way than the examples of an existing question. In the former case, the question with its relevant examples can be added to the knowledge base; in the latter case, the administrator can add further example phrasings to the existing question.
When the conversational agent’s block is first loaded, the agent greets the user and displays reminders for any of the user’s assignment deadlines, as well as information on courses such as the independent study and the thesis.
Apart from providing the user interface for student queries and the interface between the Rasa service and Moodle, the Moodle AIMT Agent plugin also serves two additional functions. The first function is to log queries and flag those that the agent has not been able to answer. An administrator checks the log periodically and adds to or amends the knowledge base so as to enrich the answers the agent can provide. It must be noted that all queries are recorded anonymously. The only information recorded is the user query, a timestamp and the system’s response, but not the username or computer address. Users who were invited to use the AIMT agent at this stage were informed that their queries were recorded anonymously for research purposes. Furthermore, the questions that could not be answered by the system (as in question 4 in Table 2) were specially flagged to notify the administrator to investigate. The second function of the Moodle plugin is to monitor student attendance. The plugin queries the Moodle database for each user’s last login, and if a student has not accessed their courses after a predetermined number of days, it sends relevant notifications to the teacher and the student via email.
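The plugin itself is a Moodle (PHP) component, but the attendance check it performs reduces to a simple query on Moodle’s user table, whose lastaccess column stores the last-login Unix timestamp. Purely for illustration, an equivalent check is sketched in Python below; the inactivity threshold and the column filters are assumptions.

```python
# Illustration of the attendance check described above, expressed in
# Python (the actual plugin is a Moodle/PHP component). The threshold
# and the filters on 'deleted'/'suspended' are assumptions.
import time

INACTIVITY_DAYS = 7  # hypothetical threshold; configurable in practice


def find_inactive_students(cursor) -> list:
    """Return (id, email) pairs of users inactive beyond the threshold."""
    cutoff = int(time.time()) - INACTIVITY_DAYS * 86400
    cursor.execute(
        "SELECT id, email FROM mdl_user "
        "WHERE lastaccess < %s AND deleted = 0 AND suspended = 0",
        (cutoff,),
    )
    return cursor.fetchall()
```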

2.3. Evaluation

The evaluation of the proposed AIMT Agent was based on the Technology Acceptance Model (TAM) [25], an evaluation framework commonly used for information systems. According to this framework, the user-perceived ease of use and usefulness of a technology can determine its potential for adoption. Compared to other evaluation frameworks, such as the System Usability Scale (SUS) or the Usability Metric for User Experience (UMUX), which focus on the usability aspect, the TAM framework specifically addresses the usefulness aspect of the software. This is a valuable element for this study, given that the AIMT agent was developed to assist students with their workload in an academic setting. It is, therefore, important to consider whether, beyond the usability perspective, the tool is beneficial to the end-users.
The questionnaire used in this study consists of three sections. The first section enquires about demographic information, such as age, occupation, education level, and country of residence. Then, there are two sections with questions on the perceived usefulness and the perceived ease of use, respectively. All questions are rated on a five-point Likert scale from “strongly disagree” to “strongly agree”. Table 3 shows the questions of the second section of the questionnaire, which deals with the perceived usefulness of the conversational agent.
Table 4 lists the questions of the third section of the questionnaire, relating to the perceived ease of use of AIMT agent.

3. Results

The evaluation was conducted during the winter semester of the MSc program. At the beginning of the semester, students were informed of the AIMT Agent and its functions and were encouraged to use it throughout the semester. At the end of the semester, the students were asked to complete the aforementioned questionnaire, which was distributed to them electronically. The questionnaires were completed anonymously. In total, 24 students participated in the evaluation. The results of the questionnaire per section are presented next.

3.1. Demographics

The first section of the questionnaire contained questions regarding the demographic information of the participants in the study, such as age, occupation and location. Figure 4 illustrates the responses.
It must be noted that because of the nature of the MSc program, all participants are at a postgraduate level in terms of education and possess adequate IT skills. For this reason, most of the participants are professionals over 35 years old. In terms of geographical distribution, most students attending this particular semester were located in Europe.

3.2. Perceived Usefulness

In the next section of the questionnaire, users responded to questions regarding the usefulness of the AIMT agent. Figure 5 shows the responses to each of the questions in this section. Note that the x-axis scale ranges from 1 to 5, representing the responses ranging from “strongly disagree” to “strongly agree”, respectively. The y-axis represents the number of responses.
These results show that in most cases, the conversational agent was found to be useful in supporting the users’ studies. Most notably, the conversational agent is shown to save students time when searching for information within the Moodle environment. This time can then be invested in the actual academic tasks, thus improving the quality of the work. The answers also indicate that the use of the agent helps keep students alert to their academic tasks, such as assignments. This can be beneficial in improving student focus and in providing better workload management. However, there was only a moderate response in terms of how indispensable the AIMT agent is in carrying out the studies.

3.3. Perceived Ease of Use

Finally, in the last section of the questionnaire, users evaluated how easy they found the conversational agent to use. Figure 6 shows the responses for each of the fourteen questions in this section of the questionnaire.
The results presented in Figure 6 show strongly positive attitudes towards the ease of use of the system. More specifically, users found the conversational agent straightforward to use, without needing assistance from a system administrator or the user manual. The responses also show that students can become familiar with the interface with minimal effort and that the agent does not easily lead users into errors. Even when an error is encountered, it is easy for users to recover and continue their work. Furthermore, users find that the agent consistently behaves in a predictable way and is helpful in guiding the students’ tasks. These observations regarding the smooth learning curve and the robustness of the system validate the UI design choices that were made.
For reference, Table 5 lists the mean and the standard deviation for each of the questions regarding the perceived usefulness and the perceived ease of use of the questionnaire.

4. Discussion and Conclusions

The paper presented the design, implementation and evaluation of an AI-powered conversational agent developed for supporting the needs of an MSc program. The conversational agent has two main functions: (a) to provide information to prospective students regarding the postgraduate program and (b) to offer personalized support to current students in coursework-related tasks. The agent is fully integrated into the asynchronous education platform used in the MSc program, namely Moodle, and utilizes Moodle’s infrastructure to provide personalized information to students. A preliminary evaluation was conducted in order to determine the proposed conversational agent’s acceptance by the students.
In general, even though the sample of this first evaluation round was limited, the results presented in the previous section clearly indicate that the AIMT agent has good adoption potential, since users found the conversational agent very easy to use and consider it useful in their studies. Closer examination reveals that the responses to the questions in the usefulness section of the questionnaire are more varied than those in the ease of use section, which indicates that users feel more strongly about the ease of use. This can be explained by the fact that the assistant mostly aids in managing the studies (e.g., reminders on assignments or help in locating information) but is not yet a comprehensive assistant that also provides support in academic matters (for example, in explaining concepts in the curriculum). This results in the ease of use being more pronounced than the usefulness. Furthermore, an examination of the system logs was conducted in order to review the questions posed by the students and inspect the success rate of the responses. It was observed that, while the agent was successful in responding correctly to the majority of the questions, there were some instances where the phrasing caused the agent to fail to respond. This was due either to unknown questions, to unknown phrasings of known questions, or to incorrect English usage.
During the evaluation period, the system logs recorded a total of 196 user queries. Assuming that the agent was used only by the 24 participants, this corresponds to approximately 8.2 queries per student, although this is only an approximation, since there is no guarantee that the participants were the only users. Of the 196 queries, 32 were treated as unknown questions, meaning that the knowledge base did not contain a corresponding answer. As a result, the system responded with the standard message notifying the user that the question was not understood. However, as seen in the demographics section above, the students were predominantly non-native English speakers, so some of these failures can be attributed to incorrect grammar, which caused low confidence when matching the queries with the existing question examples in the knowledge base.
The responses that the agent provides can be divided into five categories. These are listed in Table 6 below, together with their frequency during the evaluation period.
As far as the conversational agent’s response time is concerned, this is affected by the type of question received by the system. The questions fall into two categories: (a) questions that can be answered immediately using a predefined answer and (b) questions that require an action to be activated, so that the answer is produced after some data processing. Questions of the first category are answered on average within 0.07 s. For the latter category, the response time is longer, on average 0.92 s, since it involves the execution of the action’s Python script, which includes the connection to the database, the MySQL query execution and the generation of the response string. These values do not include network latencies. In both cases, the response time is small and imperceptible to the user, meaning that delays do not affect the user experience.
The evaluation results presented in this paper show that the AIMT Agent is a promising support tool. However, there are several key areas where it can be further improved in the future. Firstly, the knowledge base can be significantly improved by providing more examples of existing queries so that they are recognized more reliably. As mentioned previously, in some cases, questions posed by a user were not answered correctly or not answered at all, even though the database contained the corresponding answer. This happened because the formulation of the question by the user did not adequately match the existing question examples in the knowledge base. Providing more example formulations and alternative phrasings of the existing examples will mitigate such occurrences. Secondly, new queries can be introduced to provide new services to prospective and current students. For example, access to data in forums and announcements can provide timelier course-related information to students. Also, by monitoring the system log for queries that could not be answered or were answered incorrectly, the knowledge base can be improved and expanded. In addition, the evaluation results presented here are only the first round of user feedback, and the agent is currently in the process of being refined and expanded. Finally, a longer-term goal is the integration with the course material (notes, presentations, quizzes, wikis, etc.) that is uploaded to the Moodle platform by the teachers of the postgraduate program. At the moment, a limitation of the agent is that it only supports students in managing their studies and does not provide assistance with the actual course material. It is the authors’ intention to expand the conversational agent so that it can interact with students regarding course material-related questions. This would produce a more comprehensive solution for LMS-based learning, particularly suitable for distance and asynchronous education, where self-study is central.

Author Contributions

Conceptualization, C.L. and A.T.; methodology, C.L. and A.T.; software, C.L.; validation, C.L. and A.T.; resources, C.L. and A.T.; data curation, C.L.; writing—original draft preparation, C.L.; writing—review and editing, C.L. and A.T.; supervision, A.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank Vassileios Tsoukalas of the Department of Computer Science for his technical support throughout this research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Skrebeca, J.; Kalniete, P.; Goldbergs, J.; Pitkevica, L.; Tihomirova, D.; Romanovs, A. Modern Development Trends of Chatbots Using Artificial Intelligence (AI). In Proceedings of the 2021 62nd International Scientific Conference on Information Technology and Management Science of Riga Technical University (ITMS), Riga, Latvia, 14–15 October 2021; pp. 1–6.
  2. Bavaresco, R.; Silveira, D.; Reis, E.; Barbosa, J.; Righi, R.; Costa, C.; Antunes, R.; Gomes, M.; Gatti, C.; Vanzin, M.; et al. Conversational Agents in Business: A Systematic Literature Review and Future Research Directions. Comput. Sci. Rev. 2020, 36, 100239.
  3. Nicolescu, L.; Tudorache, M.T. Human-Computer Interaction in Customer Service: The Experience with AI Chatbots—A Systematic Literature Review. Electronics 2022, 11, 1579.
  4. Laranjo, L.; Dunn, A.G.; Tong, H.L.; Kocaballi, A.B.; Chen, J.; Bashir, R.; Surian, D.; Gallego, B.; Magrabi, F.; Lau, A.Y.S.; et al. Conversational Agents in Healthcare: A Systematic Review. J. Am. Med. Inform. Assoc. 2018, 25, 1248–1258.
  5. Bin Sawad, A.; Narayan, B.; Alnefaie, A.; Maqbool, A.; Mckie, I.; Smith, J.; Yuksel, B.; Puthal, D.; Prasad, M.; Kocaballi, A.B. A Systematic Review on Healthcare Artificial Intelligent Conversational Agents for Chronic Conditions. Sensors 2022, 22, 2625.
  6. Adiguzel, T.; Kaya, M.H.; Cansu, F.K. Revolutionizing Education with AI: Exploring the Transformative Potential of ChatGPT. Contemp. Educ. Technol. 2023, 15, ep429.
  7. Ramandanis, D.; Xinogalos, S. Designing a Chatbot for Contemporary Education: A Systematic Literature Review. Information 2023, 14, 503.
  8. Hwang, G.-J.; Chang, C.-Y. A Review of Opportunities and Challenges of Chatbots in Education. Interact. Learn. Environ. 2023, 31, 4099–4112.
  9. Okonkwo, C.W.; Ade-Ibijola, A. Chatbots Applications in Education: A Systematic Review. Comput. Educ. Artif. Intell. 2021, 2, 100033.
  10. Wu, R.; Yu, Z. Do AI Chatbots Improve Students Learning Outcomes? Evidence from a Meta-analysis. Br. J. Educ. Technol. 2024, 55, 10–33.
  11. Pérez, J.Q.; Daradoumis, T.; Puig, J.M.M. Rediscovering the Use of Chatbots in Education: A Systematic Literature Review. Comput. Appl. Eng. Educ. 2020, 28, 1549–1565.
  12. Jeon, J. Chatbot-Assisted Dynamic Assessment (CA-DA) for L2 Vocabulary Learning and Diagnosis. Comput. Assist. Lang. Learn. 2023, 36, 1338–1364.
  13. Hien, H.T.; Cuong, P.-N.; Nam, L.N.H.; Nhung, H.L.T.K.; Thang, L.D. Intelligent Assistants in Higher-Education Environments. In Proceedings of the Ninth International Symposium on Information and Communication Technology—SoICT 2018, New York, NY, USA, 6 December 2018; pp. 69–76.
  14. Kim, N.-Y.; Cha, Y.; Kim, H.-S. Future English Learning: Chatbots and Artificial Intelligence. Multimed. Assist. Lang. Learn. 2019, 22, 32–53.
  15. Verleger, M.; Pembridge, J. A Pilot Study Integrating an AI-Driven Chatbot in an Introductory Programming Course. In Proceedings of the 2018 IEEE Frontiers in Education Conference (FIE), San Jose, CA, USA, 3–6 October 2018; pp. 1–4.
  16. Ait Baha, T.; El Hajji, M.; Es-Saady, Y.; Fadili, H. The Impact of Educational Chatbot on Student Learning Experience. Educ. Inf. Technol. 2024, 29, 10153–10176.
  17. Sophia, J.J.; Jacob, T.P. EDUBOT-A Chatbot For Education in Covid-19 Pandemic and VQAbot Comparison. In Proceedings of the 2021 Second International Conference on Electronics and Sustainable Communication Systems (ICESC), Coimbatore, India, 4 August 2021; pp. 1707–1714.
  18. Neumann, A.T.; Yin, Y.; Sowe, S.; Decker, S.; Jarke, M. An LLM-Driven Chatbot in Higher Education for Databases and Information Systems. IEEE Trans. Educ. 2024, 68, 103–116.
  19. Lieb, A.; Goel, T. Student Interaction with NewtBot: An LLM-as-Tutor Chatbot for Secondary Physics Education. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11 May 2024; pp. 1–8.
  20. Bilquise, G.; Ibrahim, S.; Salhieh, S.M. Investigating Student Acceptance of an Academic Advising Chatbot in Higher Education Institutions. Educ. Inf. Technol. 2024, 29, 6357–6382.
  21. Kuhail, M.A.; Al Katheeri, H.; Negreiros, J.; Seffah, A.; Alfandi, O. Engaging Students With a Chatbot-Based Academic Advising System. Int. J. Hum. Comput. Interact. 2023, 39, 2115–2141.
  22. Sinha, S.; Basak, S.; Dey, Y.; Mondal, A. An Educational Chatbot for Answering Queries. In Advances in Intelligent Systems and Computing; Springer: New York, NY, USA, 2020; Volume 937, pp. 55–60.
  23. Hwang, G.-J.; Xie, H.; Wah, B.W.; Gašević, D. Vision, Challenges, Roles and Research Issues of Artificial Intelligence in Education. Comput. Educ. Artif. Intell. 2020, 1, 100001.
  24. Shumanov, M.; Johnson, L. Making Conversations with Chatbots More Personalized. Comput. Human Behav. 2021, 117, 106627.
  25. Davis, F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Q. 1989, 13, 319–340.
Figure 1. Rasa service architecture.
Figure 2. The AIMT agent interface.
Figure 3. Execution pipeline for a Rasa action.
Figure 4. Demographics.
Figure 5. Evaluation results for the perceived usefulness.
Figure 6. Evaluation results for the perceived ease of use.
Table 1. Functions of the AIMT Agent for registered users and sample queries.

Function | Example Query
Ask for the contact information of a teacher | What is Dr Tsinakos’ email?
Ask for pending assignments | When are my assignments due?
Ask for the objectives of the MSc program | What is the goal of this MSc?
Ask for instructions on accessing journals | How can I access journals?
Retrieve information regarding the thesis | How is the thesis completed?
Retrieve submission date of the thesis | When is the thesis deadline?
Retrieve information regarding the independent study module | What is independent study?
Retrieve submission date of the independent study module | When is the deadline for the independent study?
Ask for thesis topics | What are the thesis topics?
Ask for independent study topics | What are the independent study topics?
Table 2. Examples of AIMT usage.

Question | Sample User Query | System Response | Result
1 | When is my thesis due? | Thesis is due on 29 September 2024 03:00 AM | Correct response
2 | Can i proceed to a phd? | PhD programs are available after coordinating with the Program Director first | Successful response
3 | In the course IMT8 are two assignments obligatory? | You have no assignments pending | Misunderstanding the question
4 | In IMT8 do I have 2 compulsory assignments? | Sorry, I didn’t quite understand that. Could you please rephrase? | Unknown question
Table 3. Questions on the perceived usefulness.
Questions
1. My studies would be difficult to perform without AIMT Agent.
2. Using AIMT Agent gives me greater control over my studies.
3. Using AIMT Agent improves my study performance.
4. AIMT Agent addresses my study-related needs.
5. Using AIMT Agent saves me time.
6. AIMT Agent enables me to accomplish tasks more quickly.
7. AIMT Agent supports critical aspects of my studies.
8. Using AIMT Agent allows me to accomplish more work than would otherwise be possible.
9. Using AIMT Agent reduces the time I spend on unproductive activities.
10. Using AIMT Agent enhances my effectiveness during my studies.
11. Using AIMT Agent improves the quality of the work I do.
12. Using AIMT Agent increases my productivity.
13. Using AIMT Agent makes it easier to carry out my studies.
14. Overall, I find AIMT Agent useful in my studies.
Table 4. Questions on the perceived ease of use.
Questions
1. I often become confused when I use AIMT Agent.
2. I make errors frequently when using AIMT Agent.
3. Interacting with AIMT Agent is often frustrating.
4. I need to consult the user manual often when using AIMT Agent.
5. Interacting with AIMT Agent requires a lot of my mental effort.
6. I find it easy to recover from errors encountered while using AIMT Agent.
7. AIMT Agent is rigid and inflexible to interact with.
8. I find it easy to get AIMT Agent to do what I want it to do.
9. AIMT Agent often behaves in unexpected ways.
10. I find it cumbersome to use AIMT Agent.
11. My interaction with AIMT Agent is easy for me to understand.
12. It is easy for me to remember how to perform tasks using AIMT Agent.
13. AIMT Agent provides helpful guidance in performing tasks.
14. Overall, I find AIMT Agent easy to use.
Table 5. Descriptive statistics for the questionnaire responses.

Question | Perceived Usefulness: Mean (SD) | Perceived Ease of Use: Mean (SD)
1 | 2.17 (1.09) | 2.54 (1.22)
2 | 2.21 (1.28) | 3.50 (1.32)
3 | 2.25 (1.36) | 3.29 (1.20)
4 | 1.92 (1.10) | 3.38 (1.13)
5 | 1.96 (1.20) | 3.54 (1.18)
6 | 3.25 (1.22) | 3.33 (1.37)
7 | 2.63 (1.44) | 3.04 (1.23)
8 | 3.58 (1.02) | 3.21 (1.28)
9 | 2.25 (1.26) | 3.17 (1.40)
10 | 2.04 (1.04) | 3.46 (1.22)
11 | 4.08 (0.72) | 3.04 (1.27)
12 | 3.92 (0.83) | 3.29 (1.27)
13 | 3.58 (1.18) | 3.54 (1.18)
14 | 4.13 (0.85) | 3.67 (1.01)
Table 6. Types of responses and their frequency during the evaluation of the agent.

Category | Description | No. of Responses
1 | Correct answer | 131 (66.8%)
2 | Incorrect answer due to insufficient examples | 19 (9.7%)
3 | Incorrect answer due to malformed question | 14 (7.1%)
4 | No answer due to insufficient data in the knowledge base | 22 (11.2%)
5 | No answer due to malformed question | 10 (5.1%)
Total | | 196