Article

Artificial Intelligence Visual Metaphors in E-Learning Interfaces for Learning Analytics

Valentina Franzoni, Alfredo Milani, Paolo Mengoni and Fabrizio Piccinato
1 Department of Mathematics and Computer Science, University of Perugia, 6100 Perugia, Italy
2 Department of Journalism, Hong Kong Baptist University, Hong Kong, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2020, 10(20), 7195; https://doi.org/10.3390/app10207195
Submission received: 28 August 2020 / Revised: 8 October 2020 / Accepted: 9 October 2020 / Published: 15 October 2020
(This article belongs to the Special Issue Advances in Artificial Intelligence Learning Technologies)

Abstract
This work proposes an innovative visual tool for real-time, continuous learner analytics. Its purpose is to improve the design, functionality, and usability of learning management systems for monitoring user activity, allowing educators to make informed decisions on e-learning design, a task usually limited to dashboard graphs, tables, and low-usability user logs. The standard visualisation is currently scarce and often inadequate to inform educators about the design quality of their learning objects and students’ engagement with them. The same low usability can be found in learning analytics tools, which mostly focus on post-course analysis and demand specific skills, e.g., for statistical analysis and database queries, to be used effectively. We propose a tool for student analytics embedded in a Learning Management System, based on the innovative visual metaphor of interface morphing. In remote learning, artificial intelligence provides the immediate feedback that is crucial in a face-to-face setting, highlighting students’ engagement with each single learning object. A visual metaphor is the representation of a person, group, learning object, or concept through a visual image that suggests a particular association or point of similarity. The basic idea is that elements of the application interface, e.g., learning objects’ icons and student avatars, can be modified in colour and dimension to reflect key performance indicators of learner activity. The goal is to provide high-affordance information on student engagement and usage of learning objects, where aggregation functions on subsets of users allow a dynamic evaluation of cohorts at different granularities. The proposed visual metaphors (i.e., thermometer bar, dimensional morphing, and tag cloud morphing) have been implemented and tested within academic-level courses. Experimental results have been evaluated with a comparative analysis of user logs and a subjective usability survey, showing that the tool obtains quantitative, measurable effectiveness and the qualitative appreciation of educators. Among the metaphors, the highest success is obtained by Dimensional morphing and Tag cloud transformation.

1. Introduction

The diffusion of remote learning in schools and universities, boosted by the recent COVID-19 pandemic, offers new challenges to educators and web designers. The e-learning scenario requires new methods to develop and deliver content [1], as well as new strategies to collect and analyse students’ direct and indirect feedback. Students’ feedback is valuable for the evaluation and analysis of student engagement, and for teachers’ evaluation of learning objects and resources [2] based on their usage within a precise learning design (LD) process. The importance of designing the learning process lies in a user-centred representation of the pedagogical intention. Regrettably, current strategies do not take into account the patterns of students’ engagement, which are paramount for the learning process.
To analyse students’ engagement, we can take advantage of the opportunities that virtual learning environment (VLE) systems offer over the traditional teaching and learning scenario. Using VLEs, we can monitor how students download, read, or watch online learning resources, which is considerably more complicated in the traditional approach.
In the face-to-face scenario of a traditional frontal lecture, teachers can gauge students’ engagement by looking at their faces, and monitor their activities through the evaluation survey often submitted at the end of the course, together with assessments and progression scores [3]. The traditional face-to-face setting, in which educators glean immediate feedback from verbal and facial cues and can adjust their teaching strategies immediately in response, is a crucial advantage of in-person teaching. This immediate feedback formed a significant rationale for our study, the goal of which is to bring such a decisive capability to the e-learning world. Among the relevant kinds of feedback of the in-person environment that we can add to VLEs, we can certainly consider students’ engagement with each single learning object. In VLEs, all interactions and learning activities are recorded automatically in the platform logs, resulting in digital footprints for a large number of users [3]. This continuous data gathering can provide relevant information for instructors to analyse students’ engagement in real-time. Using such a system, teachers can take immediate action to align the course enactment with the underlying LD, adapting and modifying the course structure or content, e.g., providing additional learning material to the audience, and promoting learning activities and students’ interaction with the teacher or within study groups.
In general, educators use tools to support and inform the LD process, so that it does not rely only on the experience of the designer and the best practices of their colleagues. Appropriate analytics tools can support constant monitoring of the impact of the learning material, help to revise and find flaws in the course learning objects, and test the attainment of pedagogical objectives. Analytics should be based on pre-existing aggregated data on students’ engagement, progression, and achievement [3].
Most learning management systems (LMSs) provide instructors with some form of information to track students’ accesses and online activities through system logs and basic visualisations. Usually, such information is provided in the form of numbers, tables and, less often, graphs, but most of the log information is not visible at all in the user interface. The monitoring interfaces often look as if directed at the system manager rather than the teacher, for whom they are scarcely usable. From the instructor’s point of view, logs provide a large and mostly useless amount of raw data, with only a few graphical outputs in the form of bar charts and simple filters. Functionalities for exporting data in different formats are usually available; however, only instructors with advanced notions of database query languages can use them fruitfully, while they remain difficult for those without extensive previous Information Technology (IT) knowledge. As the population of instructors using e-learning systems grows, especially in the current surge of remote learning due to the COVID-19 pandemic (using, e.g., e-learning, blended learning, and mixed lectures offered in-person and online at the same time to ensure social distancing), the number of educators with little or no IT background suddenly involved in e-learning is huge. Thus, there is a growing and urgent need for tools for the continuous monitoring of learners that fulfil the requirements of immediacy, effectiveness, and usability. Such tools should be:
  • integrated in the standard LMS interfaces;
  • usable by every instructor, regardless of their knowledge of data analysis, data mining, or statistics;
  • informative and immediate, to be used continuously as part of the instructor’s everyday activity, similarly to in-class feedback.
Learning analytics (LA) can help instructors to understand the realisation of their pedagogical intent by analysing the actions and behaviours of students, alone or in learning communities [4]. Instructors should take particular care in analysing students’ actions, because access to activities and material does not automatically lead to knowledge acquisition [5].
All the tools and techniques that LA provides are useful to evaluate the effectiveness of the pedagogical process in the online context and to provide in-depth insight into the development of courses in the LD environment. Most current LA tools remain scarcely usable and impractical [2], as data are presented in ways that make interpretation difficult for non-experts [3,5], e.g., numbers, lists, and tables, which require skills and time to read and analyse. Thus, the current trend is to move the analytics information to a separate learning dashboard, leading to a separation between daily instructor activities and the data analysis phase [6]. Instructors have to learn a different interface and use a non-integrated tool to inform their LD process [7,8] (see [9] for a deeper analysis of alternative solutions using logs). Another drawback of many LA approaches is that they only allow post-course analysis, as they are developed to be used after the course conclusion (e.g., to compare the same course across different academic years [10]), without real-time visualisation of the results. Furthermore, such visualisation often requires batch executions on the gathered data, only feasible with appropriate hardware. The current tools, lacking real-time features, fail to provide “as in-class” feedback to the instructor.
In the e-learning and blended learning environments typical of most LMSs, most interactions with learners take place offline. Interaction is the core of most activities, e.g., discussion forums, self-assessment tests, wikis, flipped lessons, and cooperative learning, while the necessary consumption of course material (e.g., slides, videos, documents) does not require deep interaction. Interaction through an online platform is unquestionably poor in feedback, in contrast to face-to-face interaction, where instructors can receive direct signals and in-class feedback from students [3].
What is lacking in the current LA scenario, and what we aim to provide, is a tool that enables educators to monitor course development continuously and in real-time, and helps them check whether students correctly receive the LD they pursue [11]. Our goal is to bring to the e-learning environment the valuable features of the face-to-face approach that can be implemented there (e.g., immediate teacher self-evaluation, the ability to adapt strategies quickly), while improving the readability and comprehensibility of the e-learning monitoring system, to improve overall teaching and learning. This strategy allows instructors to understand how students interact with the course elements provided in the LMS and to receive in-class-like feedback from the data analysis. Instructors can then make data-informed decisions about course effectiveness and the impact of learning objects within an in-place, easy-to-use tool that does not require in-depth IT knowledge. We aim to help educators make decisions, connected to student-related factors and based on the timing of access to resources, to improve the learning designs that embody their teaching experience. Such a goal is reached by providing artificially intelligent (AI) reactive tools [12], able to transform the visualisation of the course content dynamically (e.g., the dimension and colour of learning objects’ icons) [13]. An AI-based approach can perform tasks commonly associated with intelligent beings and with intellectual processes characteristic of humans, such as identifying patterns, discovering meanings, generalising, or learning from past experience. Despite continued progress in computer processing speed and memory capacity, there are still no programs that can match human flexibility over broader domains or in tasks requiring much everyday knowledge. However, AI can make tools automated, responsive, adaptive, and flexible, and can improve usability features such as visibility, readability, and comprehensibility [14,15]. Students’ engagement is thus represented by learning object usage. The same visual metaphors can be applied to student avatars (e.g., the dimension and colour of the avatar of a student or group of students) to view the engagement of a single student or group with high affordance. In our implementation, we considered responsiveness for mobile devices an essential requirement [16], in order to grant usability and accessibility. More than ever, the COVID-19 pandemic has highlighted the necessity of granting accessibility from smartphones, particularly in all the cases where the remote learning of several students in the same house (e.g., siblings, students sharing a house, teachers who are also parents of students) would otherwise require an expensive and bulky number of personal computers.
In Section 2, the current state of the art of visual interfaces for learner monitoring in LMSs and their main drawbacks are discussed, motivating our approach and the use of the innovative visual interface morphing technique [17,18]. Section 3 defines visual interface morphing and its quantitative parameters, and Section 4 details the three proposed monitor metaphors. Section 5 presents the implementation of the metaphors as a Moodle theme and introduces the experimental settings and evaluation criteria. Finally, we discuss the experimental results, conclude, and outline future lines of research.

2. Visual Interfaces for Learner Monitoring

The ability to monitor students’ activity gives online educators an overview of the quality of their teaching process. The reception of the content provided to students can be analysed from various points of view. Students’ engagement is one of the most relevant indicators for educators, where the usage time and frequency of a learning object can represent how much single students, groups of students, or entire classes are effectively making use of it. For example, if a learning object is opened by students and promptly closed, and this happens in many cases, the object is probably corrupted, too difficult, or affected by some other content or format issue that the educator should fix. Typical quantitative factors to be included in the analysis are the freshness of the activity, the number of accesses per single user or group of users, and the relationship between the time of the registered activity (e.g., consultation of teaching materials) and in-class events, e.g., lectures, assignments, and projects. These analysis tasks are usually accomplished in current LMSs through the visualisation of log reports in textual, tabular, or graphical formats.

2.1. Report Log Analysis

Direct consultation of activity logs is usually included in the comprehensive learning management system interface. This kind of tool offers the user the ability to show, sort, and filter the information contained in the general logs created when learners access the learning objects. The selection parameters provided to filter and aggregate the information are usually very limited. Instructors need additional features, like user role aggregation, time window selection, and other parameters, to fine-tune their analysis; when such advanced options are present, the LMS engine provides only a minimal selection of choices. The presentation of the resulting information usually consists of simple textual or tabular visualisations, challenging to read and to analyse in-place.

2.2. Information Visualisation

Visual elements are added to the learning system’s control panel to support the analysis of data gathered from the user activity logs. Data are presented in the form of key performance indicators in learning dashboards [19]. Learning dashboards are “...a single display that aggregates different indicators about learner(s), learning process(es) or learning context(s) into one or multiple visualisations” [20]. In a typical administrative dashboard, data obtained from the LMS logs can be analysed using different data mining and statistical techniques that can summarise various aspects of the courses. Filtering, based on different data dimensions, can be applied to reduce and aggregate the information. The resulting data are then visualised in a graphical, synthetic form using charts. Pie charts, histograms, plots, or radar graphs display the key performance indicators to the instructor [21,22].

2.3. Drawbacks

The main drawback of the currently available tools based on log report analysis [9] is that skilled instructors/administrators are required to understand and use the key performance indicators to assess the effectiveness of the learning process. In order to use typical analytical functions, such as predicting student outcomes [23], evaluating the risk of students dropping out of their study track [24], and determining course utilisation levels and quality for informing the LD process, the instructor usually needs specific statistics and data mining knowledge to properly interpret textual data and dashboard charts.
Historically, the first log report functions to appear in LMSs were a porting of the typical monitoring functions reserved for the system administrator. Such features were designed for people with an IT background. Over the years, few efforts have been dedicated to improving the usability of log reporting functions for the non-IT user. The chart in Figure 1 shows a typical log monitoring activity report, available in a standard LMS for administrators but not designed for users without IT skills.
From the educator’s point of view, all of these tools are often difficult to use or require an effort to learn new concepts and methodologies. Currently, many systems are data-rich but information-poor [25]. Even for the simplest log reporting functions, such as those provided in the popular LMS Moodle, users should be able to manage the SQL language and generate appropriate queries to obtain meaningful information. While log reports can be exported in many different data formats, e.g., XML (eXtensible Markup Language), MySQL (where SQL stands for Structured Query Language), CSV (Comma Separated Values), TSV (Tab Separated Values), and JSON (JavaScript Object Notation), the monitoring tools are far from affordable for an instructor with a general background. An approach based on AI allows a flexible and easy-to-read report visualisation with real-time updates. Our design provides a tool that allows instructors to perform continuous monitoring and analytics of learner activities, exploiting the richness of activity logs in a high-affordance visualisation. Such a dynamic AI-based interface reaches the goals of integrating informativeness, immediacy, and usability (e.g., ease and pleasantness of use) for instructors lacking specialised IT and statistics skills.

3. Metaphors for Course Analysis

We propose the definition, design, and experimentation of a visual code that can render the main quantitative parameters of users’ activity in an intuitive and immediate metaphor. We embed the visual code into the LMS interface, thus integrating the course monitoring phase into the everyday course management process. In other words, while performing the usual activities of content delivery and interaction, the instructor has an in-place glance at the course usage, in terms of student engagement, community interaction, and use of each learning object. The graphical view can be further fine-tuned or filtered to focus on specific parameters or visualisations using a simple and usable parameter selection. Figure 2 shows the architecture of the system modules and their relation to teacher and student users. Big and small arrows show the flow of data transformation and the users’ interaction with the system. The students (upper human figures) and the educator (lower human figure) interact with the learning objects, dynamically changing their visual metaphor.

3.1. Visual Interface Morphing

We introduce a class of visual interfaces whose characteristic is to act on a learning management system’s interface, modifying the appearance of the learning objects using visual metaphors. A visual metaphor is the visual representation of an object, e.g., a person, group, learning object, or concept, through a visual image that suggests a particular association between the object and its visualisation. A metaphor uses one or more visual features to compare, highlight and, in general, improve the visibility and understandability of the object’s properties in an intuitive way.
The general idea is to modify elements of the interface, e.g., dimension and colour of icons and text, to express some crucial dimension of the data, and to show them through an embedded real-time visualisation. This form of visualisation combines the completeness of information with the efficacy of representation, significantly improving the usability of the learning analytics and monitoring features of the e-learning platform.
The quantitative dimensions considered in this work are the following:
  • Quantity of usage of learning objects (views, number of updates, number of submissions, number of quizzes repetitions);
  • Temporal distribution of resource use;
  • Temporal proximity to a time reference (the current date/time is usually used as the reference).
Data can be further filtered/aggregated by the following dimensions (a minimal filtering sketch follows the list):
  • Users, from a single user to groups (e.g., study groups) or classes of users (e.g., students from a particular country, within an age range, following a particular topic);
  • Time interval;
  • Action (e.g., read, write);
  • Learning object or module instance.
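To make the data model concrete, the following minimal Python sketch shows how log records could be represented and filtered along these dimensions. The record fields and the filter_logs helper are illustrative assumptions, not MonitorView’s actual code.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional, Set

@dataclass
class LogRec:
    """One activity event: user u performed action a on object o at time t.
    The field layout is a hypothetical reading of the platform logs."""
    t: datetime   # access time stamp
    o: str        # learning object / module instance identifier
    a: str        # action, e.g., "read", "write", "submit"
    u: str        # user identifier, used for group/cohort filters

def filter_logs(logs: List[LogRec],
                users: Optional[Set[str]] = None,
                t1: Optional[datetime] = None,
                t2: Optional[datetime] = None,
                actions: Optional[Set[str]] = None,
                obj: Optional[str] = None) -> List[LogRec]:
    """Apply the optional filters listed above; None means 'no filter'."""
    return [rec for rec in logs
            if (users is None or rec.u in users)
            and (t1 is None or rec.t >= t1)
            and (t2 is None or rec.t <= t2)
            and (actions is None or rec.a in actions)
            and (obj is None or rec.o == obj)]
```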

3.2. Learning Objects Quantitative Analysis

To measure the popularity of a learning object (e.g., slides, documents, links) or a module instance (e.g., quiz, assignment, wiki) and the user engagement, our system computes the usage of each learning object and module, tracking the users with their profile information, when needed for specific statistics and filters, under the privacy policy of the platform. The distinction between learning objects and modules is instrumental to a possible different implementation of resources and activities. We consider the frequency of access for each element present in the interface, which represents its usage; for instance, in the case of the analysis of the engagement in a forum discussion, the number of accesses to read or write posts. Similarly, for assignments, the number of submissions is considered as a quantitative value, while, if the assignment is a quiz, single or multiple attempts are also a relevant piece of information. Such quantities measure how much the users access the specific module instance and, therefore, how popular the analysed element is among the users.
Given such elements (e.g., learning objects and module instances) in the course, our system will automatically extract the related data from the log system.
Definition (Log record). Formally, a relevant activity event $a$ for a learning object $o$ is encoded by a log record $\mathit{logrec}(t, o, a)$, where $t$ is the access time stamp, i.e., $a$ is the event of accessing the learning object or module instance $o$ at time $t$.
Definition (Learning Object Usage). Given a set $A_o$ of meaningful activities for a module instance or learning object $o$ and a time interval $T = [t_1, t_2]$, where $t_1$ is the starting time and $t_2$ is the ending time, the quantitative value of usage of $o$ in $T$ is defined by Equation (1):

$$N(T, A_o) = \left|\{\, \mathit{logrec}(t, o, a) \mid a \in A_o,\ t_1 \le t \le t_2 \,\}\right| \qquad (1)$$
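As a worked example, Equation (1) translates directly into a count over the matching log records; the tuple-based record layout below is an assumption made for illustration.

```python
from datetime import datetime

def usage(logs, o, A_o, t1, t2):
    """N(T, A_o): number of meaningful activities a in A_o recorded on
    object o within T = [t1, t2]; logs hold (t, o, a) tuples."""
    return sum(1 for (t, obj, a) in logs
               if obj == o and a in A_o and t1 <= t <= t2)

# Example: two reads and one submission on "assignment-2" in October.
logs = [(datetime(2020, 10, 1, 9), "assignment-2", "read"),
        (datetime(2020, 10, 2, 15), "assignment-2", "read"),
        (datetime(2020, 10, 5, 11), "assignment-2", "submit")]
print(usage(logs, "assignment-2", {"read", "submit"},
            datetime(2020, 10, 1), datetime(2020, 10, 31)))  # -> 3
```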

3.3. Temporal Proximity

The temporal information, needed to track improvement (or worsening, e.g., in case of procrastination) of students’ learning, is an expression of the chronological access history to the elements within a temporal interval of analysis. The reactivity of the AI approach makes it possible to render this temporal process. Our system requires two time-related values associated with the learning object or module instance: the date of access, extracted from the logs, and a reference date provided by the teacher to set the temporal landscape.
Temporal proximity is rendered visually with a metaphor using hot/cold colours, i.e., heat-maps, where a hot topic refers to resources accessed more frequently and recently, shown in warm colours starting from red (recently accessed) and gradually cooling down in time towards orange and yellow, i.e., in the red range of the colour spectrum. On the contrary, objects accessed less recently (and thus less frequently) are rendered in cold colours, i.e., in the blue/magenta range of the colour spectrum of light. The resulting colour is proportional to the time difference between the object access date and the reference date set by the teacher (i.e., the analyser). The system shows the peculiar status of objects that did not undergo relevant access by users using an ad hoc colour, thus providing the teacher with the meaningful information that the quality of the element needs double-checking, e.g., the content and presentation of a learning object, or the publicity of an evaluation module. The distinctive colour chosen to represent null access to an object in a time interval of interest is white or black, automatically chosen depending on the contrast with the actual background colour of the page, to guarantee usability and accessibility on the visual side.
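A minimal sketch of the hot/cold mapping, assuming a continuous linear blue-to-red RGB interpolation (the implementation in Section 5 uses a discrete scale, Figure 4); the null-access handling follows the contrast rule described above.

```python
def proximity_colour(p, dark_background=False):
    """Map temporal proximity p in [0, 1] (1 = most recent access) onto a
    cold-to-hot RGB colour; p is None for objects with no relevant access."""
    if p is None:
        # Ad hoc colour for untouched objects: white on a dark background,
        # black on a light one, to guarantee contrast.
        return (255, 255, 255) if dark_background else (0, 0, 0)
    p = max(0.0, min(1.0, p))
    # Linear blend from cold blue (0, 0, 255) to hot red (255, 0, 0).
    return (round(255 * p), 0, round(255 * (1 - p)))
```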
While the intuition behind the hot/cold metaphor is very consistent, the actual definition allows many degrees of freedom, as expressed by the following definition.
Definition (Temporal Proximity). Given a reference interval $T = [t_1, t_2]$ and the set of significant activity logs $\mathit{logrec}(t, o, a)$ with $a \in A_o$ for a learning object $o$, let Equation (2) collect them:

$$L(T, A_o) = \{\, \mathit{logrec}(t, o, a) \mid a \in A_o,\ t_1 \le t \le t_2 \,\} \qquad (2)$$

The temporal proximity of activities over $o$, $P(T, A_o)$, is defined in Equation (3):

$$P(T, A_o) = \frac{\mathit{TimeExtract}(L(T, A_o)) - t_1}{t_2 - t_1} \qquad (3)$$

where $\mathit{TimeExtract}(\cdot)$ is a procedure extracting an internal reference time from the set of logs of the object $o$. The value of $P(T, A_o)$ is then used as the translation displacement on the colour temperature scale. In the current implementation, $\mathit{TimeExtract}(L(T, A_o)) = \mathit{MAX}_{Time}(L(T, A_o))$, i.e., the system extracts the most recent access to the object. This choice is consistent with the proposed intuition, corresponding to a “the last takes it all” strategy. In general, other functions can be considered for $\mathit{TimeExtract}(\cdot)$, as discussed in the Conclusions, which can introduce an inertial resistance to colour change. A meaningful option is $\mathit{TimeExtract} = \mathit{AVG}_{Time}$, i.e., determining the colour temperature for time proximity by the average of the activity dates. An alternative is to take the $\mathit{MAX}$ or the $\mathit{AVG}$ of the most recent quartile, e.g., the most recent 25% of accesses, or of another fixed fraction depending on the number of users accessing the resource.
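The following sketch instantiates Equation (3) with the two TimeExtract strategies discussed above (MAX and AVG); function and parameter names are our own, for illustration only.

```python
from datetime import datetime
from statistics import mean

def temporal_proximity(access_times, t1, t2, extract="max"):
    """P(T, A_o): normalised position in [t1, t2] of the reference time
    extracted from an object's access times; returns None for no access."""
    if not access_times:
        return None
    if extract == "max":    # "the last takes it all" (current implementation)
        ref = max(access_times)
    elif extract == "avg":  # inertial variant: average of the activity dates
        ref = datetime.fromtimestamp(mean(t.timestamp() for t in access_times))
    else:
        raise ValueError(f"unknown strategy: {extract}")
    return (ref - t1) / (t2 - t1)  # timedelta / timedelta -> float in [0, 1]
```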

3.4. Aggregation Functions

The information acquired from logs can be aggregated under different strategies and at various granularity levels, to support different use cases depending on the instructor’s requirements and goals.
User Group Aggregation. The relevant dimensions of the learning objects (e.g., usage and proximity temperature) can be computed over a group of users of varying size, from the activities of a single user to group projects, for cohort dynamics analysis.
Time Interval Aggregation. The teacher can set the time range $T = [t_1, t_2]$ to examine. The starting date $t_1$ and end date $t_2$ can cover the full duration of a course, e.g., to detect general usage trends, or focus on a specific part, to obtain a more detailed view around a key instructional event (e.g., the deadline of an assignment submission, an assessment due date, the days before the exams). In general, teachers can select a time landscape of activities on which to focus the graphical view (e.g., last week, last month) and extract data for temporal analytics.
Learning Objects Aggregation. The system analysis includes possible aggregations of the actions performed by students on the learning objects or module instances (e.g., read, write, visualise, submit, post). This kind of aggregation enables educators to further focus their analysis on specific learning or evaluation elements or classes, e.g., activities versus resources.
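As an illustration of user group aggregation, the sketch below counts accesses per group within a time window; the (t, o, a, u) record layout and the group mapping are assumptions, not the system’s actual data model.

```python
from collections import defaultdict

def usage_by_group(logs, group_of, t1, t2):
    """Aggregate access counts per user group in T = [t1, t2];
    `group_of` maps a user id to a group name (e.g., a study group)."""
    counts = defaultdict(int)
    for (t, o, a, u) in logs:
        if t1 <= t <= t2 and u in group_of:
            counts[group_of[u]] += 1
    return dict(counts)
```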

4. Monitor Metaphors

After the investigation of requirements and the feature design of the visualisation, we designed different metaphors for the monitor system to represent real-time embedded quantitative information on the data gathered from the LMS usage logs. Among the several possible design solutions, we propose three metaphors as the best solutions for the goals of a comprehensive e-learning management system:
  • Thermometer bar;
  • Dimensional morphing;
  • Tag cloud morphing.
The design is technically easy to adapt to different goals if needed, as required by an AI-based flexible approach.

4.1. Thermometer Bar

This metaphor is straightforward to use because it does not require any modification to the page appearance. The information is given by a thermometer icon next to each learning object or module instance.
The metaphor renders the resource popularity through the width of the coloured section of the thermometer, and numerically by the percentage shown next to the Thermometer bar. The thermometer is filled linearly with the absolute number of users’ accesses. The percentage represents the access frequency, calculated over the period of interest $T = [t_1, t_2]$ by a normalising function.
The hotness of the topic is expressed by the colour of the thermometer bar: as explained in Section 3.3, cold colours mean that the last access to the learning object lies far in the past with respect to the reference date $t_1$, and hot colours mean that the accesses are recent, i.e., closer to $t_2$.
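A possible reading of the Thermometer bar parameters in code, assuming the fill is the access count normalised over the most accessed object in $T$ and the colour reuses the temporal proximity of Section 3.3; the names and the binary hot/cold threshold (standing in for the full gradient) are illustrative.

```python
def thermometer_params(n_accesses, n_max, proximity):
    """Fill fraction, percentage label, and colour class for one bar;
    proximity is P(T, A_o) in [0, 1], or None for untouched objects."""
    fill = 0.0 if n_max == 0 else n_accesses / n_max
    label = f"{round(100 * fill)}%"
    if proximity is None:
        colour = "none"   # ad hoc colour for objects with no access
    elif proximity >= 0.5:
        colour = "hot"    # recent accesses: red/orange range
    else:
        colour = "cold"   # older accesses: blue/magenta range
    return fill, label, colour
```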

4.2. Dimensional Morphing

In dimensional morphing, the appearance of the LMS course content is transformed in the size (and colour) of the objects’ icons and labels. While retaining the ease of use of the familiar interface structure, this change in dimensions conveys to the instructor the relevance that each activity and resource has over time. In the same way, the avatar of each user can be transformed in dimension, to show which students are more engaged in the course. The size reflects the popularity $N(T, A_o)$ of each learning object: the dimensions of the icons and labels are rendered proportionally to the numerical value of access. This metaphor requires some design constraints to grant both graphical and spatial consistency: our solution is to set a maximum and minimum size for the dynamic elements. These constraints ensure the readability of small icons, required in terms of usability and accessibility, while avoiding the potentially unlimited growth of already big elements. The size of icons and labels $S(T, A_o)$ is determined according to Equation (4):

$$S(T, A_o) = D_{min} + \frac{N(T, A_o) - N_{T,min}}{N_{T,max} - N_{T,min}}\,(D_{max} - D_{min}) \qquad (4)$$

where $D_{min}$ ($D_{max}$) is the minimum (maximum) allowable size for the element and $N_{T,min}$ ($N_{T,max}$) is the minimum (maximum) absolute number of accesses to any learning object in the interval $T$.
As explained in Section 3.3 and Section 4.1, the colour of the labels is changed according to the proximity parameter $P(T, A_o)$, i.e., the distance from the reference date, using discrete or continuous functions.
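Equation (4), with the readability clamping made explicit; the pixel bounds are example values, not MonitorView’s actual settings.

```python
def element_size(n, n_min, n_max, d_min=16.0, d_max=64.0):
    """S(T, A_o): icon/label size linearly interpolated between d_min and
    d_max according to the object's access count n within [n_min, n_max]."""
    if n_max == n_min:
        return d_min  # degenerate case: every object was accessed equally
    s = d_min + (n - n_min) / (n_max - n_min) * (d_max - d_min)
    return min(d_max, max(d_min, s))  # enforce readability bounds
```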

4.3. Tag Cloud Morphing

A tag cloud is a popular method to represent and visualise groups of concepts or elements, expressing the relevance of every single component by its dimension. Tags (i.e., terms) are positioned and resized according to their importance, expressed by a numerical variable or a rank, e.g., the word count in a text. Tag clouds have been investigated in various usability studies, showing that this graphical data presentation engages the user’s attention by intuitively highlighting the most relevant elements, and that different layouts can lead to different results in terms of word recall and user focus [26].
This information representation, applied to an LMS course, can be considered a type of page morphing. The label structure of learning objects and module interfaces is transformed in colour, size, and position according to their usage popularity. The original course structure may sometimes be hard to recognise when this metaphor is used; therefore, the cloud rendering incorporates an inertial attraction parameter that anchors each object to its original position. The positioning of a learning object’s label is central for resources with frequent access; the size is proportional to the absolute access count; the hot/cold colour of the labels is informative about the temporal distribution of accesses.
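One simple way to realise the inertial attraction is a linear blend between the tag’s free cloud-layout position and its original position in the course page; this interpolation is an illustrative choice, not necessarily the paper’s algorithm.

```python
def anchored_position(original_xy, cloud_xy, inertia):
    """Blend a tag's cloud-layout position with its original page position;
    inertia in [0, 1]: 1 preserves the course structure, 0 is a free cloud."""
    (ox, oy), (cx, cy) = original_xy, cloud_xy
    return (inertia * ox + (1 - inertia) * cx,
            inertia * oy + (1 - inertia) * cy)
```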

5. Experimental Analysis and Results

We experimented with and evaluated the implementation of the proposed monitor metaphors in a Moodle theme, named MonitorView, which will be published later under an open license. The theme format allowed us to create a customised visualisation of the elements without modifying the functionality of the LMS, in a modular, easy-to-install way.
MonitorView is an extension of the Standard theme with the addition of the following functionalities to the course visualisation page:
  • Activation/deactivation of the monitoring features;
  • Selection panel for the monitor parameters;
  • Thermometer bar visualisation mode;
  • Dimensional morphing visualisation mode;
  • Tag cloud morphing visualisation mode.
When the monitor features are turned on, an additional section is displayed in the standard interface, above the current course learning objects (see Figure 3). The MonitorView settings form enables the user to select and adjust the monitor parameters (e.g., the time frame). The main settings are the Metaphor and Mode selections. In the dynamic visualisation, the metaphor is morphed in real-time (i.e., taking the object creation date as the start date and the current timestamp as the end date); it is updated only at page load, to avoid flickering and other usability issues of a continuous refresh. The static mode allows the educator to select a start and end date and visualise the morphed result. Using the two dropdown boxes, the user can choose to represent the information using the Thermometer bar (“Thermometer bar”) or Dimensional morphing (“Morphing”) metaphors.
By picking a date and time, the user can select the time window to analyse, fixing the reference date for the temporal proximity analysis.
The last parameter is the selection of the users’ aggregation. By default, the parameter is set to show all users, i.e., all Moodle user roles. The selection can be fine-tuned by choosing a specific user role, a single user, or a group of users.
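The selection panel’s state can be summarised by a small settings record such as the hypothetical sketch below; field names and defaults are ours, not MonitorView’s actual option keys.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, Set

@dataclass
class MonitorSettings:
    """Hypothetical container for the MonitorView selection panel."""
    enabled: bool = False
    metaphor: str = "morphing"        # "thermometer" | "morphing" | "tagcloud"
    mode: str = "dynamic"             # "dynamic" (updated at page load) or "static"
    start: Optional[datetime] = None  # t1; None = object creation date
    end: Optional[datetime] = None    # t2; None = current timestamp
    role: str = "all"                 # all Moodle user roles by default
    users: Optional[Set[str]] = None  # None = all users; else ids of a group
```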
The result of this parameter selection is the in-place morphing of the course page. The colours representing the temporal proximity of the learning objects to the reference date, as introduced in Section 3.3, are graded on the discrete scale shown in Figure 4. The least recently accessed objects are rendered in dark blue and hot topics in pure red. Dark grey is used to highlight the learning objects with no accesses.
The experiments have been held at the University of Perugia, embedding the MonitorView theme in the academic e-learning platform UniStudium, developed as a component of a more complex AI-based system aimed at modelling user behaviour in e-learning systems to detect and analyse user reactions to interfaces and semantic aspects [27,28,29]. The experiments involved 12 courses of different bachelor’s degrees, in blended learning, with their 12 leading instructors. The courses included from 90 to 120 students each (mostly in the 18–25 age range). All the instructors involved in the experimental phase had already used the Moodle platform for at least one year, to avoid a bias introduced by the teachers’ learning curve. After collecting the learning objects’ access information in anonymised form, the access logs of instructors and students have been analysed to investigate the potential of the three metaphors. The collected data span a time frame of three different exam terms. Example visualisations are shown in Figures 6–9, taken from the “Intelligent and Mobile Technology” course of the Computer Science bachelor’s degree program.

5.1. Evaluation Criteria

The experiments have been assessed quantitatively by a comparative analysis of the instructors’ access logs. Considering the same instructor and the same course, the baseline for comparison is the usage of the log reports referring to objects (i.e., learning objects and module instances) before the introduction of the visual monitoring metaphors. The log information is thus compared before and after the metaphor introduction, i.e., with and without the metaphors. In this way, since the original method of monitoring data analytics was used both before and alongside the three modifications, the original method can be regarded as a type of control for the study. The instructors used all the tools at least once a week for an initial period of two weeks, after which they were free to choose and switch among any of the three metaphors (i.e., Thermometer bar, Dimensional morphing, Tag cloud) or the traditional log reports. The objective data are sufficient for an adequate evaluation of the tool; nevertheless, we decided to ask educators some questions to better understand any issues with the visual metaphors, aiming at future enhancements. At the end of the experimental phase, the instructors were asked to fill in a questionnaire for a qualitative evaluation of their experience (included in Appendix A). The screened parameters included the general usability of the interfaces, their usefulness for course management, the impact on the teaching strategy, and the comparison with traditional methods of user monitoring. For each morphing metaphor, we proposed questions about the perceived usability of the visualisation (e.g., easy to activate, easy to use, usable as a monitoring tool) and about the frequency of use (e.g., how often it was used and in which phase of the course it was used most, whether during or after the lectures). Finally, an overall evaluation of the tool was requested. Users answered using a five-item Likert scale [30] (i.e., 1 to 5, where 1 is low and 5 is high). The final evaluation is thus composed of both subjective data from the teachers’ questionnaire and objective data from user logs. To ensure the quality and accuracy of the answers, some questions are redundant and are used as checks to identify contradictions, such as a positive assessment of usability and usefulness together with a negative overall assessment. Consistency has then been verified against the usage logs of the metaphor declared as preferred. The results show that the educators’ responses were consistent.
Figure 3 shows an example of usage, with the aggregation parameter selection panel. In this real case study, the time window spans one month. The End date parameter was particularly important for this course: the second assignment was given to the students one week before the submission deadline, and the temporal proximity reference was set one month after the end date to obtain a better colour distribution on the topic labels. In this use case, the instructor was looking at the usage status of the learning objects to identify the resource material that students were reviewing to complete the assignment.
The standard course view without any morphing is shown in Figure 5.
Figure 6 shows the visualisation of the course page according to the Dimensional morphing metaphor, with the time interval and aggregation parameters shown in Figure 3. The icons and labels are stretched and coloured considering the learning object popularity and the temporal distribution of accesses. In Figure 7, the same data are shown in Bar mode, following the Thermometer bar metaphor. A Tag cloud is shown in Figure 8 and Figure 9, where the information and page layout are morphed to obtain two different styles of Tag cloud: in the first, the labels of the learning objects are adapted in Wordle style [31]; in the second, an algorithm mixes the labels in horizontal and vertical orientations. In addition to these two styles, the instructor can choose to arrange all the tags horizontally.

5.2. Experimental Results

Table 1 shows the instructor logs monitored during the experimental phase and compared to the previous year. Only five of the twelve instructors had used the Moodle log report utility in the previous year. From the quantitative assessment based on instructor logs, reported in Table 1, it is apparent that all three proposed metaphors outperformed the log report utility. The latter was still used by only a single instructor, while four out of the five teachers who previously used the log reports definitively moved to one of the three new metaphors. None of the seven users new to the log monitoring task adopted the log reports. Among the metaphors, Dimensional morphing strongly prevailed by the end of the courses.
The successful usability performance of Dimensional morphing can be explained by its approach, based on the transformation of the existing interface: the learning curve is very short for the instructor, who previously designed the visual organisation of the standard course. Interacting with and viewing the platform interface in everyday work has the effect of showing, at the same time, the monitoring information overlaid on the interface. Using the monitoring interface thus takes place in a transparent, user-friendly, and natural modality, as resulted from the teachers’ feedback. A somewhat unexpected result is that the Tag cloud metaphor performed better than the Thermometer bar. Although the Thermometer bar view mode is well organised and easy to read, it did not have a strong visual appeal and impact on teachers. Interviews with the users showed that, even though the tags are randomly distributed, the relevant elements immediately drew the attention of the Tag cloud observer. On the contrary, with the Thermometer bar a slightly harder effort is often required to compare the lengths of the thermometer sliders by eye.
Table 2 shows a summary of the most significant results from the qualitative evaluation questionnaires, where the reported values are averaged. The qualitative assessment ranges from 1 to 5 (5 being better than 1), and users could express non-mutually-exclusive evaluations. The “best two metaphors” entries report the total number of votes obtained, where each user could express two votes.
As an overall result, the instructors agree, with only one notable exception, that the MonitorView tool gave them the ability to obtain valuable indirect feedback on the learning objects provided to the students during the course, allowing them to adjust the lesson approach in a timely manner. The learning design process is often viewed as a monolithic approach that requires long-term planning of resources and activities. While this baseline remains true, the use of the proposed tool profoundly influenced the instructors’ ability to continuously refine the strategy implicit in the learning project they are pursuing and implementing, basing decisions on visual information about student engagement. The qualitative assessment of impact, quality, and contribution to course management confirms the quantitative evaluation: Dimensional morphing prevails over the other metaphors, while the Thermometer bar and Tag cloud views are comparable in terms of use, with the latter slightly ahead.
It is worth noticing that some strategies for using the interface provided by the MonitorView tool emerged from the experimental evaluation, as reported in the instructors’ interviews. The most frequent use cases of the MonitorView tool have been:
  • At the end of the course, to detect the least used learning objects, in order to improve their quality and better introduce them to the students.
  • During the lessons and after the end of the course, to detect the most used elements, because this can signal the complexity of the taught concepts for the students.
  • During the course, to discover the material most used close in time to assignments/exams or other in-class events, when it is perceived as more useful by students.
Alternative solutions using logs, based on Moodle plugins (e.g., Gizmo, SmartKlass), web servers (e.g., Intelliboard), and client applications (e.g., Excel) [7,8,9], are not directly comparable with our work because they have not been tested in the same environment, and statistical data are not available in the same context. However, it is possible to discuss some differences. External solutions for visualising data in real-time, even where the extraction process requires little effort, separate the phases of use and analysis. Our proposal provides an integrated page that includes both course data and feedback at the same time (i.e., the modified course interface can still be used and is not a mere visualisation of a learning analysis graph). The use of the course with immediate feedback on the visualisation of the resources of each learning object is the most innovative added value of this work, providing the same feedback as the face-to-face setting. No other alternative solution for real-time monitoring of student engagement provides an integrated approach.

5.3. Limitations

As is noticeable from the images of MonitorView in Figure 6 and Figure 7, the visual impact differs among the visualisation metaphors. Data gathered in the experiments show that, although every student accessed the first assignment’s material and the project for the second assignment, not everyone accessed the other course resources. The interpretation and impact of this information on course learning design and strategy can vary depending on the context in which the instructor operates. For instance, a straightforward interpretation is that material that has not been accessed is not attractive, or is too complicated or too easy to understand, and thus useless if not revised. An alternative interpretation of the data takes into account the characteristics of the courses, i.e., the blended teaching mode, where students can share the course material through other channels, such as Facebook pages or exchanged handouts.
Another noticeable event is that students who accessed the system reviewed part of the older lessons, leaving some of the recent course learning objects almost untouched. The latter is particularly relevant for the learning object named PDDL Language slides of the Computer Science course. The educator should investigate this event, because the slides were included in the second assignment project, but the students did not access the material. The access to older material can be related to a general course review made by some of the students; this event can be analysed in depth by looking at single-student behaviour instead of the aggregated data.
The feedback provided by the instructors also suggests that there is room for great improvement of the Tag cloud metaphor by retaining some bounds to the original structure, either by introducing a visual connection to the original elements or by visually animating the morphing of the interface, i.e., the transformation from the standard interface to the Tag cloud.

6. Conclusions and Future Work

An innovative general method based on visual metaphors of learning object dimensional morphing in LMSs has been introduced and applied to the task of continuous learner monitoring, obtaining an informative and usable interface that can support the instructor throughout the learning design and verification process. Our approach can be easily integrated into existing LMSs that provide learner activity logs and appearance customisation. The visual metaphors have been implemented as Moodle themes and tested on academic courses. Quantitative and qualitative evaluations show that the proposed approach outperforms traditional log reporting for the real-time monitoring of students’ engagement, using cohort dynamics and temporal analytics. Among the proposed metaphors, Dimensional morphing prevails in both the quantitative and the qualitative assessments by instructors. The main advantage of the proposed solution is its usability for educators, providing intuitive visual clues instead of requiring specialised knowledge of statistical analysis and data views. Another relevant feature of our approach is that educators do not need to become acquainted with a new interface, as it uses the same visual elements, icons, and menu structure found in the existing e-learning platform. Future work includes further tuning of the inertial factor mentioned in Section 4.3, to decide whether or not to change the term position and colour in the Tag cloud and Thermometer bar. Methods using graphic animation can be considered to show the evolution of the popularity and usage of resources over time. Other visual metaphors can be formulated and tested, e.g., using a layered spatial-depth third dimension aimed at showing, in the same visual context, the behaviour of different groups of users on learning objects. Further analysis of a larger number of courses from different disciplines may also be useful to better understand the lower preference for the Thermometer bar. A relevant development would be the application of the visual morphing and thermal map metaphors to other web-based monitoring applications, such as portals and websites, where interface morphing is a powerful feedback tool for the site designer or manager, conveying information about users’ engagement in terms of time spent on a page, clicks, and interest in different sections.

Author Contributions

Conceptualization, A.M. and V.F.; methodology, V.F. and A.M.; software, F.P.; validation, V.F., F.P. and A.M.; formal analysis, V.F., A.M. and P.M.; investigation, V.F., F.P. and A.M.; resources, V.F., F.P. and A.M.; data curation, F.P. and P.M.; writing—original draft preparation, V.F. and P.M.; writing—review and editing, V.F. and A.M.; visualization, V.F., A.M. and P.M.; supervision, A.M. and V.F.; project administration, A.M.; funding acquisition, A.M. and V.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been partially funded by the Italian P.R.I.N. e-learning project PHRAME—Phraseological Complexity Measures.

Conflicts of Interest

The authors declare the following conflict of interest: two of the authors (AM and VF) are guest editors of the special issue. The review of this article has been conducted independently by a third party editor of the journal editorial committee and anonymous reviewers.

Appendix A. Questionnaire on Learning Objects Monitoring Tools

Figure A1. Questionnaire page 1 of 5.
Figure A2. Questionnaire page 2 of 5.
Figure A3. Questionnaire page 3 of 5.
Figure A4. Questionnaire page 4 of 5.
Figure A5. Questionnaire page 5 of 5.

References

  1. Franzoni, V.; Tasso, S.; Pallottelli, S.; Perri, D. Sharing Linkable Learning Objects with the Use of Metadata and a Taxonomy Assistant for Categorization. In International Conference on Computational Science and Its Applications; Springer: Cham, Switzerland, 2019; pp. 336–348. [Google Scholar]
  2. Persico, D.; Pozzi, F. Informing learning design with learning analytics to improve teacher inquiry. Br. J. Educ. Technol. 2015, 46, 230–248. [Google Scholar] [CrossRef]
  3. Lockyer, L.; Heathcote, E.; Dawson, S. Informing Pedagogical Action. Am. Behav. Sci. 2013, 57, 1439–1459. [Google Scholar] [CrossRef]
  4. Mengoni, P.; Milani, A.; Poggioni, V.; Li, Y. Community elicitation from co-occurrence of activities. Future Gener. Comput. Syst. 2020, 110, 904–917. [Google Scholar] [CrossRef] [Green Version]
  5. Corrin, L.; Kennedy, G.; de Barba, P.G.; Lockyer, L.; Gasevic, D.; Williams, D.; Dawson, S.; Mulder, R.; Copeland, S.; Bakharia, A. Completing the Loop: Returning Meaningful Learning Analytic Data to Teachers; Office for Learning and Teaching: Sydney, Australia, 2016. [Google Scholar]
  6. Baruque, C.; Amaral, M.; Barcellos, A.; Da Silva Freitas, J.; Longo, C. Analysing users’ access logs in Moodle to improve e-learning. In Proceedings of the 2007 Euro American conference on Telematics and information systems, New York, NY, USA, 12 May 2007. [Google Scholar] [CrossRef]
  7. Konstantinidis, A.; Grafton, C. Using Excel Macros to Analyse Moodle Logs; IEEE Press: Piscataway, NJ, USA, 2013. [Google Scholar]
  8. Dobashi, K. Automatic data integration from Moodle course logs to pivot tables for time series cross section analysis. Procedia Comput. Sci. 2017, 112, 1835–1844. [Google Scholar] [CrossRef]
  9. Aldowah, H.; Al-Samarraie, H.; Fauzy, W. Educational data mining and learning analytics for 21st century higher education: A review and synthesis. Telemat. Inform. 2019, 37, 13–49. [Google Scholar] [CrossRef]
  10. Ruiperez-Valiente, J.A.; Munoz-Merino, P.J.; Kloos, C.D.; Niemann, K.; Scheffel, M.; Wolpers, M. Analyzing the Impact of Using Optional Activities in Self-Regulated Learning. IEEE Trans. Learn. Technol. 2016, 9, 231–243. [Google Scholar] [CrossRef]
  11. Karlgren, K.; Lakkala, M.; Toom, A.; Ilomäki, L.; Lahti-Nuuttila, P.; Muukkonen, H. Assessing the learning of knowledge work competence in higher education–cross-cultural translation and adaptation of the Collaborative Knowledge Practices Questionnaire. Res. Pap. Educ. 2020, 35, 8–22. [Google Scholar] [CrossRef]
  12. Franzoni, V.; Milani, A.; Nardi, D.; Vallverdú, J. Emotional machines: The next revolution. Web Intell. 2019, 17, 1–7. [Google Scholar] [CrossRef] [Green Version]
  13. Stergiou, M.; El Raheb, K.; Ioannidis, Y. Imagery and metaphors: From movement practices to digital and immersive environments. In Proceedings of the 6th International Conference on Movement and Computing, Tempe, AZ, USA, 10–12 October 2019. [Google Scholar]
  14. Franzoni, V.; Gervasi, O. Guidelines for Web Usability and Accessibility on the Nintendo Wii. In Transactions on Computational Science VI; Gavrilova, M.L., Tan, C.J.K., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 19–40. [Google Scholar] [CrossRef]
  15. Artificial Intelligence. Available online: https://www.britannica.com/technology/artificial-intelligence (accessed on 12 May 2020).
  16. Mödritscher, F.; Neumann, G.; Brauer, C. Comparing LMS usage behavior of mobile and web users. In Proceedings of the 2012 IEEE 12th International Conference on Advanced Learning Technologies, Rome, Italy, 4–6 July 2012; pp. 650–651. [Google Scholar] [CrossRef]
  17. Franzoni, V.; Mengoni, P.; Milani, A. Dimensional morphing interface for dynamic learning evaluation. In Proceedings of the 2018 22nd International Conference Information Visualisation (IV), Fisciano, Italy, 10–13 July 2018; pp. 332–337. [Google Scholar] [CrossRef]
  18. Azimullah, Z.; An, Y.; Denny, P. Evaluating an interactive tool for teaching design patterns. In Proceedings of the Twenty-Second Australasian Computing Education Conference, New York, NY, USA, 4–6 February 2020; pp. 167–176. [Google Scholar] [CrossRef]
  19. Verbert, K.; Govaerts, S.; Duval, E.; Santos, J.L.; Van Assche, F.; Parra, G.; Klerkx, J. Learning dashboards: An overview and future research opportunities. Pers. Ubiquitous Comput. 2013, 18, 1499–1514. [Google Scholar] [CrossRef] [Green Version]
  20. Schwendimann, B.A.; Rodriguez-Triana, M.J.; Vozniuk, A.; Prieto, L.P.; Boroujeni, M.S.; Holzer, A.; Gillet, D.; Dillenbourg, P. Perceiving Learning at a Glance: A Systematic Literature Review of Learning Dashboard Research. IEEE Trans. Learn. Technol. 2017, 10, 30–41. [Google Scholar] [CrossRef]
  21. Kolvoord, R.A. Visual Insights: A Practical Guide to Making Sense of Data. Online Inf. Rev. 2014, 38, 994–995. [Google Scholar] [CrossRef]
  22. Verbert, K.; Duval, E.; Klerkx, J.; Govaerts, S.; Santos, J.L. Learning Analytics Dashboard Applications. Am. Behav. Sci. 2013, 57, 1500–1509. [Google Scholar] [CrossRef] [Green Version]
  23. Arnold, K.E.; Pistilli, M.D. Course Signals at Purdue: Using Learning Analytics to Increase Student Success. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge, New York, NY, USA, 29 April–2 May 2012; pp. 267–270. [Google Scholar] [CrossRef]
  24. Fei, M.; Yeung, D.Y. Temporal Models for Predicting Student Dropout in Massive Open Online Courses. In Proceedings of the 2015 IEEE International Conference on Data Mining Workshop (ICDMW), Atlantic City, NJ, USA, 14–17 November 2015; pp. 256–263. [Google Scholar] [CrossRef]
  25. Chatti, M.A.; Dyckhoff, A.L.; Schroeder, U.; Thüs, H. A reference model for learning analytics. Int. J. Technol. Enhanc. Learn. 2012, 4, 318. [Google Scholar] [CrossRef]
  26. Siemens, G.; Baker, R.S.J.D. Learning analytics and educational data mining. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge—LAK ’12; ACM Press: New York, NY, USA, 2012; p. 252. [Google Scholar] [CrossRef]
  27. Pallottelli, S. Multi-path traces in semantic graphs for latent knowledge elicitation. In Proceedings of the 2015 11th International Conference on Natural Computation (ICNC), Zhangjiajie, China, 15–17 August 2015; Volume 1, pp. 281–288. [Google Scholar] [CrossRef]
  28. Milani, A. Semantic context extraction from collaborative network. In Proceedings of the 2015 IEEE 19th International Conference on Computer Supported Cooperative Work in Design (CSCWD), Calabria, Italy, 6–8 May 2015; pp. 131–136. [Google Scholar] [CrossRef]
  29. Franzoni, V.; Mencacci, M.; Mengoni, P.; Milani, A. Semantic heuristic search in collaborative networks: Measures and contexts. In Proceedings of the 2014 IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT), Warsaw, Poland, 11–14 August 2014; pp. 141–148. [Google Scholar] [CrossRef]
  30. Bartholomeu, D.; da Silva, M.C.R.; Montiel, J.M. Improving the Likert Scale of the Children’s Social Skills Test by Means of Rasch Model. Psychology 2016, 7, 820–828. [Google Scholar] [CrossRef] [Green Version]
  31. Viegas, F.B.; Wattenberg, M.; Feinberg, J. Participatory Visualization with Wordle. IEEE Trans. Vis. Comput. Graph. 2009, 15, 1137–1144. [Google Scholar] [CrossRef]
Figure 1. Traditional log activity report chart from a Moodle LMS.
Figure 2. AI-based metaphors for student analytics architecture.
Figure 3. MonitorView parameter selection panel.
Figure 4. Colour gradient.
Figure 5. Standard course view with no morphing.
Figure 6. Course statistics in Dimensional morphing mode.
Figure 7. Course statistics in Thermometer bar mode.
Figure 8. Tag cloud mode in Wordle style.
Figure 9. Tag cloud mode in Horizontal and Vertical style.
Table 1. Quantitative assessment.

| Metaphor | Log Reports | Thermometer Bar | Dimensional Morphing | Tag Cloud |
|---|---|---|---|---|
| #Total Access | 264 | 426 | 743 | 540 |
| #Access Final Month | 23 | 127 | 243 | 112 |
| % Tot. Access except first two weeks | 5% | 23% | 48% | 24% |
| #Users using log reports in previous year | 5 | N/A | N/A | N/A |
| #Access to log reports in previous year | 632 | N/A | N/A | N/A |
Table 2. Qualitative assessment.

| Metaphor | Log Reports | Thermometer Bar | Dimensional Morphing | Tag Cloud |
|---|---|---|---|---|
| Usability [1–5] | 2.4 | 5 | 5 | 4.3 |
| Impact on course mng [1–5] | 1.4 | 3.5 | 4 | 3.6 |
| Help to improve quality [1–5] | 1.3 | 3.6 | 4 | 3.7 |
| Help to save time [1–5] | 1.4 | 3.4 | 3.2 | 3.2 |
| Best two metaphors [1–10] | 1 | 6 | 10 | 7 |

