Recommending Learning Objects with Arguments and Explanations

Abstract: The massive presence of online learning resources leads many students to have more information available than they can consume efficiently. Therefore, students do not always find learning material adapted to their needs and preferences. In this paper, we present a Conversational Educational Recommender System (C-ERS), which helps students in the process of finding the most appropriate learning resources considering their learning objectives and profile. The recommendation process is based on an argumentation-based approach that selects the learning objects for which a greater number of arguments can be generated to justify their suitability. Our system includes a simple and intuitive communication interface with the user that provides an explanation for any recommendation. This allows the user to interact with the system and accept or reject the recommendations, providing reasons for such behavior. In this way, the user is able to inspect the system's operation and understand the recommendations, while the system is able to elicit the actual preferences of the user. The system has been tested online with a real group of undergraduate students at the Universidad Nacional de Colombia, showing promising results.


Introduction
Nowadays, the increasing number of learning resources available online has encouraged the massive development of Technology Enhanced Learning systems (TEL) [1]. TEL systems make education accessible to anyone, regardless of where they live. Using this technology, many universities now offer Massive Online Open Courses (MOOCs) and courses in a flipped classroom format [2]. This new teaching methodology is based on the use of TEL systems, which offer different types of learning objects (LO, e.g., videos, tutorials) that students can consume online in a non-classroom setting, so that classroom sessions can be devoted to resolving exercises and doubts, group work, and other activities that require the direct presence of the teacher. However, this massive availability of LOs brings an overload problem, since students must be able to select the most suitable LOs for their learning objectives from a large number of available LOs.
This opens new challenges for the research community on recommender systems, where Educational Recommender Systems (ERS) are now a trending topic. By using an ERS, students can efficiently get those LOs that are most appropriate to their educational level, learning objectives, and learning style. The student's learning style determines which LOs are more adequate for particular types of students. For instance, the widely applied VARK model (http://vark-learn.com/) identifies four distinct learning styles (i.e., Visual, Auditory, Reading, and Kinesthetic) and uses these dimensions to determine the appropriateness of specific types of online learning resources. For example, visual learners rely more on diagrams, charts, or graphs, while kinesthetic students learn best from simulations, videos, and movies of 'real' things. Therefore, it is important to consider students' learning styles when developing online learning systems [3] and, consequently, educational recommender systems.
In [4], an ERS was presented that helps undergraduate students to find the LOs that are most suitable for them, taking into account their profile (level of education, preferred language, topic and format, learning style, and other personal information), learning practices (LOs that they have already used), and similarity with other students. The system is a hybrid recommendation engine that combines computational argumentation with several other recommendation techniques (collaborative, content-based, and knowledge-based). This technique, which selects those LOs whose recommendation can be better supported by arguments, proved to be the most effective for this application. A detailed description of the technique and the different recommender algorithms can be found in [5].
The system offers a straightforward way of generating textual explanations from arguments. These arguments capture common inferences of human reasoning. Thus, while arguments are formal representations for the justification of each recommendation (as elements of an argumentation theory), explanations are textual representations of these arguments (as elements of the user interface). These explanations are used to convince students to accept the recommendations of the system and make use of the LO recommended.
However, online education is a tricky domain where learners' preferences can be highly time-variant. In addition, some students may have preferences that contradict the recommendations of learning theories (for example, because of their usual study practices). These acquired study habits are difficult to modify and students can quickly leave the ERS if they do not find it useful and effective for their learning objectives. Learners can even dynamically change their learning preferences or exhibit ad-hoc ones for specific topics or LOs.
Conversational recommender systems are particularly effective for preference elicitation through question-asking and recommendation critiquing [6,7]. Therefore, in this paper, a new conversational version of our system, which we call C-ERS (Conversational Educational Recommender System), is presented. This system can interact with users, allowing them to express their opinions on the recommendations, to specify their current preferences, and to correct the system's wrong assumptions. Its combination of argumentation theory, explanation generation, and conversational recommendation provides C-ERS with powerful tools to generate effective recommendations and engage students with the system. To evaluate the system, a live-user trial was performed with a real group of undergraduate students of the Universidad Nacional de Colombia, obtaining promising results.

Related Work
It has been acknowledged that traditional recommender approaches suffer from the cost of retraining the model, are built to optimize offline performance, which does not necessarily match online user behavior, and fail to capture preference drifts [7]. These challenges have boosted research towards the development of continuously learning systems, such as conversational recommender systems.
Conversational recommender systems are usually used to quickly establish preferences when little is known about the user receiving the recommendations. This is especially helpful to address the cold-start recommendation problem. The problem of eliciting user preferences can be addressed through different methods, such as interview-based strategies, asking the user to rate some items, active learning, interactivity, or utilizing feedback in an online setting [7][8][9]. Interactive recommenders give the user a more active role so that the recommender system can improve its recommendations online. There are different works that use this approach, such as critique-based [10], constraint-based [11], or dialogical recommenders [12].
Our system differs from critiquing-based recommenders in that users do not critique items explicitly, but rather the reasoning that the system has followed to propose those items. In addition, our system is less affected by the cold-start problem, since recommendations are based on learning theories and argumentation inferences that also apply to new users once they have logged into the system and their learning profile is established. Furthermore, all these approaches include some kind of interactivity with the user to try to learn more about him/her. However, this work not only tries to learn and modify what the system knows about the user during the conversation, but also tries to persuade the user towards the best choice using arguments and explanations.
In [13], the authors present a survey comparing 24 interactive recommender systems from the point of view of visualization, which is an effective approach to facilitate human perception and understanding of what is being recommended to the user. The authors group the 24 analyzed recommenders into clusters, each focusing on a feature of the recommender (transparency, justification, controllability, etc.). The feature most closely related to this work is the ability to generate justifications. Justifications are usually used to explain the recommendations by comparing item tags with user preferences; two examples are Tagsplanation [14] and MovieExplain [15]. In [16,17], there are more examples of how to use justifications in recommender systems. The way recommendations are presented to the user influences the explanation attributes (or metrics) that are used. These attributes are: transparency (explaining how the system works), scrutability (allowing users to tell the system it is wrong), trust (increasing the user's confidence in the system), effectiveness (helping users make good decisions), efficiency (helping users make decisions faster), satisfaction (helping the user enjoy the use of the system), and persuasiveness (convincing users to choose the recommended item). This work focuses on the persuasiveness of the system, which aims to convince users by means of the arguments generated during the recommendation process.
The literature on persuasion systems reports that users find suggestions from sources that have proven to be reliable more credible [18]. Thus, the more credible a recommender system is, the more likely its recommendations are to be accepted. In [19], the authors analyze how the credibility of the system can be enhanced by a proper selection of its features. There are different approaches to follow, but explanations (i.e., arguments) are an efficient way to increase system credibility, since they can clarify the system's recommendation to users (the good intentions and efforts it makes to provide the user with the best recommendation) [20]. As presented above, some attributes or metrics help the user to trust recommendations more. That is, users better accept recommendations if they understand how they were generated by inspecting how the system works (transparency), or if they can understand and evaluate why recommendations may be suitable for them (education). Furthermore, it has been shown that even if the recommended items are already known, users still prefer the system to provide explanations [21]. Explanations may also prevent bad choices. As explained in [17], the possibility of generating a certain type (style) of explanation depends largely on the algorithm being used and its ability to produce the information needed to generate these explanations. The styles of explanations that C-ERS can generate are directly conditioned by the type of knowledge that the rules of the system handle (collaborative, content-based, and knowledge-based).
Research in ERS has been very active in recent years [22][23][24][25][26]. Providing recommendation techniques with insights and explanations for learners and educators has become an important research line [1]. Argumentation theory and its tools have shown many successes in educational domains, especially regarding the training of critical thinking for law students [27]. Recommender systems combined with computational argumentation techniques have reported successful applications to suggest music [28], news [29], movies [30], or content-based web search [31]. Among the proposed systems, the closest similarities exist between our approach and those proposed in [32], a recommender system that uses defeasible logic programming to recommend movies, and [33], a product recommender system based on Toulmin's argumentation theory. In the former, the authors resolve possible attacks between arguments by defining a pre-established criterion of preference between rules. In contrast, our system uses a probabilistic method to calculate the probability of one argument prevailing over another, which makes the system more adaptable. In the latter, the system allows for generating arguments to support the recommendation of specific items (cameras) and their features. However, Toulmin's model only allows for generating claims in favor of or against a specific conclusion (the recommendation of the item), while our system allows for generating different explanation styles from content-based, knowledge-based, and collaborative information sources, which allow for critiquing the explanation itself and not only the recommendation.
Although the impact of explanations on the performance of recommender systems has been proven [19], little research has focused on the domain of educational recommender systems. Designing recommender systems for TEL environments presents several challenges [34]. For instance, each student learns following a different learning process, with a variety of learning methods and tools. Our work contributes to these areas by proposing a conversational ERS provided with an argumentation framework that enables the system to engage in a conversation with students to help them understand the underlying logic of the system, to elicit drifts in their preferences, and to provide effective recommendations.

Conversational Educational Recommender System (C-ERS)
In this section, the recommendation and conversational functionalities of the argumentation-based C-ERS are presented. The proposed system uses expert knowledge encoded in defeasible rules to match user preferences with available LOs and infer suitable recommendations. Then, the system interacts with the user to validate its assumptions by providing explanations. We refer the reader to [5] for specific details on the recommendation algorithms and an evaluation of the performance of different recommendation techniques in this domain.

Recommendation Process
C-ERS follows a hybrid recommendation technique that uses LO metadata and information extracted from the students' profiles to compute the recommendations. Concretely, to represent LOs, the proposed system follows the IEEE-LOM standard (1484.12.1-2002-IEEE Standard for Learning Object Metadata: https://standards.ieee.org/findstds/standard/1484.12.1-2002.html). The profiles provide personalized information about the students: their interactivity level, the language they prefer, some preferences about the format, their learning style (which can be auditory, kinaesthetic, reader, or visual), and their usage history (see Section 4). The learning styles have been obtained using the Spanish adaptation of the VARK questionnaire, available at [35].
This technique combines three types of recommender systems: one based on the content of the items being recommended, one based on comparing users' actions (collaborative), and one based on knowledge of the domain [36]. The technique models this expert knowledge in a logic program. The proposal of this work is to use a formalism based on defeasible argumentation (founded on logic programming, similar to DeLP [37]) to implement the logic of the recommendation system and build arguments to endorse the recommendation of selected Learning Objects. Thus, our C-ERS offers users the Learning Objects that are best supported by the greatest number of arguments.
A defeasible logic program P = (Π, ∆) is modeled by strict knowledge (Π) and defeasible knowledge (∆) about the domain of the application. In C-ERS, Π represents a set of facts. These facts are strict inference rules with an empty body. In the example logic program of Table 1, lines 1 to 7 represent facts. The fact user_type(alice, visual) represents a student named 'alice' with a visual learning style. This learning style means that the student has a preference for learning objects with images or graphs such as png or jpeg, slides such as ppt, etc.
Furthermore, the ∆ set contains defeasible rules of the form P ← Q1 ∧ ... ∧ Qk. Such rules encode the defeasible inference that the literals Q1 ∧ ... ∧ Qk can supply arguments to support the belief in P. Several types of defeasible rules constitute the logic behind each of the recommendations of the system. A summary of these rules and their associated explanations (explanations are discussed in Section 3.2) is shown in Table 2 (for readability reasons, the full set of rules is not provided). For instance, in Table 1, lines 8 to 13 represent content-based rules that make recommendations for the student based on his/her profile. An example of a rule that shows how the recommendation system chooses the learning object LO1 as the most suitable for alice is appropriate_interactivity(alice, LO1) ← user_type(alice, visual) ∧ interactivity_type(LO1, low). This is due to the fact that alice has a visual learning style and LO1 has been labeled as a learning object with low interactivity, which fits a learning style of type visual.
In addition, line 14 represents a collaborative rule that takes information from the students' profiles to calculate a similarity degree among them and recommends an LO that was suitable for a similar student, paul.
Finally, line 15 represents a knowledge-based rule that uses information about another LO, which the user has already evaluated previously, to recommend a similar LO: LO2. Note that, although the fact vote(alice, LO) ≥ 4 does not appear in the example logic program, the proposed defeasible argumentation formalism does not assume negation as failure. We follow the approach of extended logic programming for non-monotonic reasoning, as proposed in [38], by allowing, in addition to negation as failure, a second, explicit form of negation, written as ∼. Therefore, the non-existence of a fact does not prevent the triggering of those rules that include it and, hence, the knowledge-based rule of line 15 can be used to generate an argument that supports the recommendation of LO2. Afterwards, rules can be defeated by the inclusion of new knowledge in the system.
The program that represents the logic of the C-ERS can be queried to resolve whether an argument that supports a specific recommendation can be derived. Thus, when a recommendation of LOs for a certain student is submitted to C-ERS, it attempts to derive every recommend(user, LO) defeasible rule by backward chaining facts and defeasible rules, applying a mechanism similar to the SLD (Selective Linear Definite) derivation of standard logic programming. Next, we present how arguments are defined in this framework, since C-ERS has the ability to produce an argument that supports the literal derived from each defeasible rule: Definition 1 (Argument). An argument A for h, denoted ⟨h, A⟩, is a minimal non-contradictory set of facts and defeasible rules that can be chained to derive the literal (or conclusion) h.
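To make this derivation process concrete, the following sketch (illustrative Python with hypothetical data structures, not the actual C-ERS implementation; explicit negation and unresolved literals are omitted for brevity) backward-chains facts and defeasible rules to collect the support set of an argument for the example rule above:

```python
# Minimal sketch of argument construction by backward chaining.
# The facts and the rule mirror the paper's running example (alice, LO1);
# the tuple-based representation is an illustrative assumption.

FACTS = {
    ("user_type", "alice", "visual"),
    ("interactivity_type", "LO1", "low"),
}

# Defeasible rules as (head, body) pairs: head <- body.
RULES = [
    (("appropriate_interactivity", "alice", "LO1"),
     (("user_type", "alice", "visual"),
      ("interactivity_type", "LO1", "low"))),
]

def build_argument(goal):
    """Return the set of facts/rule-heads chained to derive `goal`, or None."""
    if goal in FACTS:
        return {goal}
    for head, body in RULES:
        if head == goal:
            support = {head}
            for literal in body:
                sub = build_argument(literal)
                if sub is None:
                    break  # a body literal cannot be derived; try next rule
                support |= sub
            else:
                return support
    return None  # goal cannot be derived from the program

arg = build_argument(("appropriate_interactivity", "alice", "LO1"))
print(arg is not None)  # prints True: the argument is derivable
```

Following Definition 1, the returned set plays the role of A in ⟨h, A⟩ for the conclusion h = appropriate_interactivity(alice, LO1).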
For instance, several arguments (e.g., Arg1, Arg2, and Arg3) can be derived from the program represented in Table 1. In our argumentation formalism, an argument can receive an attack from other arguments that rebut it (for instance, by proposing the opposite conclusion) or undercut it (i.e., by attacking clauses of its body). Following the example above, Arg1 and Arg2 attack Arg3, while Arg3 attacks Arg1 and Arg2.
Definition 2 (Attack). An argument ⟨q, B⟩ attacks argument ⟨h, A⟩ if ∼h can be derived from B, or if q implies that one of the clauses of A no longer holds (i.e., there is another argument ⟨h1, A1⟩ used to derive ⟨h, A⟩ such that Π ∪ {h1, q} is contradictory).
Attacks between arguments are addressed through the use of a measure of probability that calculates the likelihood of an argument succeeding based on the aggregated probability of the facts and clauses in the body of the rules employed to build the argument. Therefore, C-ERS uses a simplified probabilistic argumentation framework that maps values of probability to the arguments and then aggregates such values to calculate a suitability score for classifying and recommending LOs [39].

Definition 3 (Argumentation Framework). An argumentation framework is defined in C-ERS as the tuple ⟨Arg, P_Arg, D⟩, where Arg is a set of arguments, D ⊆ Arg × Arg is a defeat relation, and P_Arg : Arg → [0, 1] represents the probability that an argument holds.
To calculate the probability of an argument Arg = ⟨h, A⟩: a fact has a probability of 1, and the probability of a defeasible rule is computed as the mean of the probabilities of the literals Q1, ..., Qk that form its body. The suitability value of a recommendation is then calculated as the product of the probabilities of all its supporting arguments.

Definition 4 (Defeat). An argument ⟨q, B⟩ defeats another argument ⟨h, A⟩ in C-ERS if B attacks A and P_B > P_A.
For instance, Arg1 and Arg2 would have probability 1 and Arg3 probability 0.5 (since we have no information on the vote that alice assigned to LO in the past and the fact vote(alice, LO) ≥ 4 cannot be resolved). Therefore, following the above defeat relation, Arg1 and Arg2 would defeat Arg3 and LO1 would be recommended to alice. Otherwise, if different arguments supporting the recommendation of different LOs still held at the end of the recommendation process, the LO with the greatest suitability value would be recommended.
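The probability and defeat computations can be sketched as follows (illustrative Python; the literal names come from the running example, and the zero weight assigned to unresolved literals is an assumption chosen so that the example values, 1 and 0.5, are reproduced):

```python
# Sketch of Definitions 3-4: argument probabilities and the defeat relation.
# Assumption: a literal that cannot be resolved contributes probability 0,
# so a rule with one known and one unresolved literal gets (1 + 0) / 2 = 0.5.

def literal_probability(literal, known_facts):
    return 1.0 if literal in known_facts else 0.0  # a fact has probability 1

def rule_probability(body, known_facts):
    # Mean of the probabilities of the literals forming the rule's body.
    return sum(literal_probability(l, known_facts) for l in body) / len(body)

def defeats(p_attacker, p_attacked):
    # <q, B> defeats <h, A> if B attacks A and P_B > P_A (Definition 4).
    return p_attacker > p_attacked

known = {"user_type(alice, visual)", "interactivity_type(LO1, low)",
         "similar_lo(LO, LO2)"}

# Arg1 supports LO1 via alice's visual style and LO1's low interactivity.
p_arg1 = rule_probability(["user_type(alice, visual)",
                           "interactivity_type(LO1, low)"], known)
# Arg3 supports LO2, but vote(alice, LO) >= 4 cannot be resolved.
p_arg3 = rule_probability(["similar_lo(LO, LO2)",
                           "vote(alice, LO) >= 4"], known)

print(p_arg1, p_arg3, defeats(p_arg1, p_arg3))  # 1.0 0.5 True
```

In a full implementation the suitability of each LO would then be the product of the probabilities of all its surviving supporting arguments, as described above.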

Conversational Process
As pointed out in Section 1, an extra benefit of our argumentation-based recommendation technique is the straightforward way of generating explanations from arguments. These explanations are justification texts that can be offered to the user to persuade them to try certain LOs and to explain why the system has proposed a specific LO. With each explanation, the user can interact with the system by selecting one of three possible pre-defined responses (one to accept the explanation and hence the recommendation, one to ask for more justifications, and another to reject the recommendation). Thus, the current version of C-ERS implements a constrained mode of user interaction that does not provide a natural language interface. Table 2 shows an example of the explanations that the system is able to generate. Note that the system does not provide explanations for all arguments that it is able to generate from rules, but only for those rules that represent 'leaves' in the rule tree. These represent the deeper clauses that more general 'recommendation' rules aggregate to support recommendations. This simple design decision allows us to reduce the complexity of the dialogue and to improve the system's expressiveness.
In addition, the natural order in which backward chaining combines rules and facts to derive arguments that support recommendations allows us to establish a conversational process between the C-ERS and the user. Through this process, the system is able to elicit the actual preferences of the user and allows him/her to correct the system's wrong assumptions. Concretely, priority orderings have been established for rules and their associated explanations (as a result of the possible combinations among the different types of rules). Figure 1 illustrates the sequence of explanations that the system shows, depending on the rules that have been triggered to generate arguments for the recommendations and the decisions made by the user in each step of the dialogue.
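The dialogue behavior described in this section can be summarized as a simple loop (a minimal Python sketch with assumed option names; the real C-ERS interaction runs through a web interface with pre-defined responses, not a callback API):

```python
# Sketch of the constrained interaction loop: the system offers the
# priority-ordered explanations one by one until the user accepts or
# rejects the recommended LO, or no arguments remain.

def conversational_loop(explanations, get_user_choice):
    """`explanations` is the priority-ordered list of explanation texts
    generated from 'leaf' rules; `get_user_choice(text)` returns one of
    'accept', 'more', or 'reject' (assumed option names)."""
    for text in explanations:
        choice = get_user_choice(text)
        if choice == "accept":
            return "accepted"
        if choice == "reject":
            return "rejected"
        # choice == "more": fall through to the next available explanation
    return "rejected"  # no arguments left to justify the recommendation

# Example: the user asks for more reasons once, then accepts.
answers = iter(["more", "accept"])
result = conversational_loop(
    ["LO1 suits your visual learning style.",
     "A similar student rated LO1 highly."],
    lambda _text: next(answers))
print(result)  # accepted
```

Rejections that carry a reason (e.g., a changed preference) would additionally update the student's profile and restart the recommendation process, as described in Section 4.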

Evaluation
To evaluate C-ERS, an online evaluation was performed to demonstrate the ability of the system:
1. To provide effective recommendations that suit the students' profile and learning objectives (effectiveness).
2. To persuade students to try specific LOs (persuasiveness).
3. To elicit the actual preferences of the user by allowing him/her to correct the system's assumptions (scrutability).

Methodology
For the evaluation tests of our C-ERS, a prototype was implemented that uses 75 LOs from FROAC (Federation of Learning Objects Repositories of Colombia). These LOs include learning materials from computer science topics such as databases, systems, algorithms, and programming. The LOs are also provided with metadata that represents information about the title, description, keywords, type, language, format, and interactivity level. All learning objects were in Spanish and 20% of them also had an English version.
The tests were performed by a group of 50 bachelor degree students in Computer Science of the Universidad Nacional de Colombia. The students signed an informed consent to participate. During the registration, the system requested from these students information about their topics of interest (interest keywords) and level of education (years as a bachelor degree student). They also completed a test to determine their learning style. Thus, the system was able to register the following features for each student:
• Personal data: ID, name, surname, sex, date of birth, nationality, city of residence, address, language, phone, and mail.
• Student's educational preferences:
  - Interactivity level: high human-computer interaction LOs (preferred by nine students), medium human-computer interaction LOs (preferred by 38 students), or LOs that focus on the presentation of content (preferred by three students).
  - Preferred language: all students' mother tongue was Spanish, and thus all preferred LOs in Spanish.
  - Preferred format: jpeg was selected by nine students, mp4 by eight students, pdf by 31 students, and 'other formats' by two students.
• Learning style: to model the learning style of each student, the VARK (http://vark-learn.com/) model was followed. The model classified 25 students as visual, 6 as auditory, 12 as reader, and 7 as kinesthetic.
• History of uses: for each LO rated by the student, the system stores its ID, the rating assigned, and the date of use.
After the registration, the evaluation tests were performed as follows. Each student participated in 10 recommendation processes, where the system recommended LOs that match his/her profile for a specific search (a list of specific keywords). As pointed out in Section 3.2, the system could follow four different argumentation strategies, which result in different orderings to show arguments (see Figure 1 for details on which arguments succeed others). In the recommendation processes, the system was set to follow these priority orderings. In each recommendation process, the system computed the top five recommendations (those with the largest number of generated arguments) and provided the student with the best of the list (the one with the most arguments generated). Then, the student had to provide an initial rating for the LO recommended.
Furthermore, for 5 out of these 10 recommendation processes (the even-numbered ones, i.e., 2, 4, 6, 8, and 10), the system showed an explanation for the LO recommended. When no explanation was provided (odd-numbered recommendation processes), the system's interface allowed the student to 'Accept' the recommendation, 'Reject' it, or 'Ask for a justification'. When an explanation was provided, the student could 'Accept' the LO, 'Reject' the LO, or else 'Reject' the explanation (and hence, the underlying argument).
When the student accepted or rejected the recommendation at the first step of the recommendation process, he/she was able to run the next recommendation process of the evaluation tests. When the student rejected the argument (and its underlying explanation), different situations could arise. On the one hand, the explanation could be rejected just because it was not sufficiently convincing for the student and he/she asked for further explanations (e.g., 'Not sure. Show me more reasons' or 'Not sure. I do not care that the LO is not updated' in Figure 1). In this case, the next available argument (if any) was used to generate a new explanation in a second step of the recommendation process. If the student rejected the explanation again, a new one was provided in a new step of the recommendation process, and so on, until he/she accepted or rejected the LO or there were no more available arguments. In that case, the recommendation was simply rejected.
On the other hand, the explanation could be rejected due to the content of the explanation itself, which may include errors or outdated information in the system's assumptions about the student's preferences (e.g., 'Reject. I prefer LOs of the type [RESOURCE_TYPE]' or 'Reject. I prefer LOs in [SPECIFIC_LANGUAGE]'). This entails a change in the profile of the student, and the system had to recompute the recommendations taking this new information into account (which led to the next recommendation process, if the student had not already completed the 10 recommendation processes).
Finally, at the end of each recommendation process (after all possible steps), the student was urged to revise his/her initial rating and change it if desired. With these evaluation tests, a database of 472 ratings was obtained from the total number of 500 recommendation processes (50 students × 10 recommendation processes each). A schema of this process is shown in Figure 2. Figure 2. Schema followed by the described experiment.

Results
The experiments evaluated the effectiveness of C-ERS in providing recommendations that suit the students' profile and learning objectives. The advantages of the argumentation-based recommendation over other approaches in this domain, such as content-based, collaborative filtering, and knowledge-based, were demonstrated in our previous work [4]. Therefore, we do not intend to evaluate the whole learning process of students, as proposed, for instance, in Kirkpatrick's model [40], which evaluates the learning process from reaction, to learning, to behavior, and to organisational performance. Here, we focus on Kirkpatrick's level 1, evaluating the reaction of the students to the learning objects proposed by the system. As shown in Figure 3, for all recommendation processes, we computed the average of the original ratings that students provided to the recommended learning objects in two separate groups: the average rating of the learning objects provided in recommendation processes where explanations were shown ('With Argument' in the figure), and the average rating of those provided in recommendation processes where the system did not show any explanation (just the object, 'Without Argument' in the figure). In addition, the 25th and 75th percentiles are marked by the boundaries of the boxes, and the average ratings by dots. Moreover, Figure 4 shows exactly how often each rating was chosen. As we can see in the figures, students provided higher ratings to those LOs that came with explanations, which demonstrates the advantages of using explanations to offer effective recommendations. Furthermore, the number of highly rated objects (ratings of 3 to 4) increased when they came with explanations.
We consider as a baseline the case in which students do not receive explanations attached to the recommendation (without argument), since the hypothesis to be demonstrated is that the ratings assigned to 'justified' recommendations are higher; in other words, that students are more satisfied when they understand the reason why the system provides them with a specific recommendation. On the data shown in Figure 3, the tests of statistical significance show a p-value of 0.0046 and a Hedges' g of 1.27 (an effect-size measure suitable for samples with different variances: 1.21 without arguments; 1.64 with arguments), which demonstrates the significance of the results obtained.
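For reference, Hedges' g (the effect-size measure reported above) can be computed as follows; the two samples here are synthetic and illustrative only, not the actual ratings collected in the experiment:

```python
# Hedges' g: standardized mean difference with a small-sample bias correction.
# The rating samples below are synthetic, for illustration only.
from statistics import mean, variance
from math import sqrt

def hedges_g(sample_a, sample_b):
    na, nb = len(sample_a), len(sample_b)
    # Pooled standard deviation from the two (possibly different) variances.
    s_pooled = sqrt(((na - 1) * variance(sample_a) +
                     (nb - 1) * variance(sample_b)) / (na + nb - 2))
    g = (mean(sample_a) - mean(sample_b)) / s_pooled
    # Small-sample bias correction factor J ≈ 1 - 3 / (4N - 9).
    return g * (1 - 3 / (4 * (na + nb) - 9))

with_args = [4, 4, 3, 4, 3, 4, 4, 3]       # ratings with explanations
without_args = [3, 2, 3, 2, 3, 3, 2, 2]    # ratings without explanations
print(round(hedges_g(with_args, without_args), 2))  # 2.02 (a large effect)
```

A positive g indicates that the 'with argument' group has the higher mean, matching the direction of the hypothesis above.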
Figures 5 and 6 show, respectively, the average ratings provided by the students for each priority ordering used to show the explanations, and how often each rating was chosen for each priority ordering. As depicted in the figures, all priority orderings seem to have a similar performance, with a slight increase in the average rating and in the quantity of LOs that got better ratings for P3 (showing first the arguments that suit the usage history of the users). To evaluate the persuasive power of C-ERS, the initial rating provided by the student to each LO and the final one, after the 'conversation' between the system and the student, were compared. Figure 7 shows the average percentage of decreased, improved, and unchanged ratings throughout the conversational recommendation process (from the 1st iteration to the end). As illustrated in the figure, a large percentage of ratings were improved by the exchange of explanations between the C-ERS and the student, which demonstrates the persuasive power of explanations to improve the opinion of students about the LOs recommended. In addition, Figure 8 shows the average percentage of students that accepted the recommendation in the 1st iteration (without and with explanations), and at the end of the dialogue with the system. As illustrated in the figure, most students preferred to receive explanations before accepting the LO recommended, which again demonstrates the effectiveness of explanations in influencing the students' will to accept recommendations. To evaluate the scrutability of C-ERS, the average percentage of students that rejected explanations because they changed any of their initially declared preferences was analyzed. Results shown in Figure 9 demonstrate that a significant percentage of students decided to change their preferences at some step of the recommendation process (especially those regarding the interactivity type required to use LOs).
On average, over the total number of recommendation processes in which they participated, 38% of students decided to change some preference. Our C-ERS is able to capture these changes and allows students to indicate that the system has made a wrong inference about their preferences, which is crucial in this educational recommendation domain in order to provide recommendations that actually suit their learning objectives.

Figure 9. Average % of students that changed their declared preferences.

Moreover, we also measured the system's performance as the average number of explanations it has to provide until the user accepts the recommendation. We obtained an average of 2.2 explanations needed to convince students to accept LOs, which is low enough to ensure that the average time required to complete each conversational dialogue does not bore students and lead them to abandon the recommendation process.

Conclusions
An argumentation-based Conversational Educational Recommender System (C-ERS) has been presented in this paper. The proposed system helps students to find the most suitable learning resources considering different aspects such as learning objectives, educational level, and learning style. Accordingly, the main contribution of the proposed system is that it allows users to interact with the system and accept or reject the recommendations, providing reasons for such behavior. These reasons are used by the system to convince students to accept its recommendations and make use of the recommended learning objects.
The proposed system has been tested and evaluated with a real group of undergraduate students at the Universidad Nacional de Colombia. In the experiments, each student participated in 10 recommendation processes, where the system recommended learning objects matching the student's profile for a specific search and showed an explanation for each recommended learning object. Results have shown the ability of the system to provide effective recommendations that suit the students' profiles and learning objectives, to try to persuade students to use certain learning objects, and to elicit their actual preferences by allowing students to correct the system's wrong assumptions.
Currently, the system constitutes a proof of concept tested with a set of 50 computer science students, which could entail some bias in the students' profiles. As additional work, we will extend the evaluation tests to a larger number of students with more heterogeneous profiles. Moreover, we want to study to what extent the user model acquired from the preferences elicited during the conversation should be updated and stored permanently, or should instead be considered an ad-hoc profile for the current recommendation process only.
Finally, as future work, we plan to enhance the interaction mode of the system with a new natural language interface, able to conduct dialogues in natural language with the students. In addition, explanations in our C-ERS are textual versions of arguments. Thus, we are using just one format of explanation, and we acknowledge that this type is not the best for all learners (clearly, it is more suitable for "readers") [41]. In future work, we will therefore translate our arguments into different explanation formats.