A Privacy-Oriented Local Web Learning Analytics JavaScript Library with a Configurable Schema to Analyze Any Edtech Log: Moodle's Case Study

Educational institutions are transferring analytics computing to the cloud to reduce costs. Any data transfer and storage outside institutions involves serious privacy concerns, such as student identity exposure, the rise of untrusted and unnecessary third-party actors, data misuse, and data leakage. Institutions that adopt a "local first" approach instead of a "cloud computing first" approach can minimize these problems. This work aims to foster the use of local analytics computing by offering adequate tools that did not previously exist. The results are useful for any educational role, including researchers, to conduct data analysis locally. The novel results are twofold: an open-source JavaScript library to locally analyze any educational log schema from any LMS, and a front-end to analyze Moodle logs as a proof of concept of the library with different educational metrics and indicator visualizations. A user experience evaluation based on Nielsen heuristics is conducted to reduce possible data literacy barriers for users. The visualizations are validated by surveying teachers with Likert-scale and open-ended questions; teachers consider them to be of interest, although more data sources could be added to improve the indicators. The work reinforces that local educational data analysis is feasible, opens up new ways of analyzing data without transferring it to third parties, and generates debate around the adoption of the "local technologies first" approach.


Introduction
In this work, educational data is considered to be academic or behavioral raw data, or aggregated or transformed data points, generated by the interactions of students in a teaching-learning process. Students' interactions within an LMS, such as browsing the contents of a course, communicating with teachers or other students via chat or forum, delivering assessment assignments, or even the grades themselves, generate data that can be used in analytical approaches. However, the generation of educational data is not limited to the use of an LMS. Any student use of educational technology (Edtech), such as academic management platforms, learning tools, or even social media applications in teaching-learning processes, generates data that is, therefore, also considered educational data.

Context in Big Data and Small Data
Considering the above, the educational data generated by an educational institution's classroom can be considered Small Data, which is even preferable for analysis over Big Data [1]. The most important element in educational analysis is context, which is precisely what Small Data preserves. The Big Data movement requires constant and varied data sources to achieve high accuracy levels. However, decontextualized datasets generate the following problems:

• Educational data is fragmented. The more third-party educational tools are used, the more dispersed the students' data becomes and the less control there is over privacy.
• Data protection is low. Data is accessed freely by third parties and even shared, with students identified.
• There is no control over the data. Anyone who has access to the data can modify it at will and share it with third parties. Consequently, no one can ensure that the data is stored in its original form or free from unauthorized access.
Data privacy and security are connected to ethical aspects [12,13]. Educational technology evolves at a rapid pace and increasingly integrates data analysis processes. We are familiar with some of the analytical practices, issues, and ethical implications arising from the use of educational data [14]. However, we cannot anticipate what future dangers may emerge, nor can we trust legislation, which is always one step behind emerging technology. That is why we must take the utmost care when processing educational data, precisely to protect it from the dangers already known and those to come.

Local Instead of Outside
In spite of the above problems, educational data analysis provides an environment for positive opportunities. Educational data analytics helps improve processes along three fundamental axes: academic management, teaching, and learning [15,16]. Data-driven academic management requires reliable analytical results on course performance, which are extracted from teacher and student satisfaction. Personalization, individualization, and learning adaptation are teaching tasks that are enhanced by data-driven decision-making. Self-regulation of learning is an educational goal that students can achieve by making data-driven decisions based on their interactions with learning environments. Besides, data collection works as a real-time process and concerns the digital tools used by the student. These educational tools can be in cloud computing or within the institution itself. However, the data and its analysis do not need to be stored or executed in cloud computing when this can be done locally [17].
Current research in data fragility analyzes solutions in cloud computing or the blockchain, either centralized or distributed databases, but always outside the control of educational institutions [18,19]. Outsourced solutions in cloud computing may present some benefits that bring antifragility to educational institutions, as stated by Gong et al. [20] when describing cloud computing as "service-oriented, loose coupling, strong fault-tolerant, business model and eas[y to] use". Antifragility allows rapid adaptation to change, or even to pandemics such as COVID-19, where education in general adopted an emergency virtual learning methodology ipso facto [21][22][23]. When data and platforms are available in the cloud, it seems that in terms of sustainability the most suitable scenario is as follows: economic advantages, cost reduction, few resources used, and fault tolerance [20]. Working inside the educational institution from the outside through virtual private networks seems to be an underestimated and underrated solution. Gaps in the current data privacy debate and in the scientific community regarding research into local technologies as a solution motivate the authors of this manuscript to work in this direction.

Ethical Principles to Foster Local-First Technology
This project falls within the LEDA principles [1]: (i) legality; (ii) transparency, information, and expiration; (iii) data control; (iv) anonymous transactions; (v) interoperability; (vi) responsibility in the code; and (vii) local first. The work is mainly based on the concept of "local first", which refers to the distance between devices: privacy concerns decrease as the device that processes educational data gets closer to the educational role. Therefore, the main motivation of this work is to offer alternatives to Learning Analytics Dashboards based on cloud computing. These alternatives are oriented to local execution to reduce the problems related to data privacy and security.
Client-side analytics operations open up a wide range of possibilities, such as new relationships between IT support and teachers, data exchange via portable and encrypted memory, new procedural considerations regarding data, or adaptation to legislation such as the GDPR [24]. Adopting local-first technology goes beyond using local technology, as it involves rethinking processes, management, and internal actions in educational institutions. At the same time, both the application of educational data mining techniques and the interpretation of visualizations require corresponding levels of data literacy [25] according to the analytical-visual complexity of the process. To help users acquire the knowledge and skills to perform analysis and understand the results, ad hoc tools should provide functionalities matched to the data literacy skills that teachers have acquired and comply with some minimum usability tests [26]. This situation requires tools that can solve ad hoc analytical needs, according to the educational context and the data literacy level of academics, and at the same time protect student data by running locally and meeting the following requirements (EDPP):

• Avoid transferring data to remote servers outside the control of institutions;
• Distribute analysis to local educational role devices to reduce centralized computing complexity;
• Enable analysis of data from any log of educational tools;
• Enable ad hoc visualizations to be generated from logs.

Research Gaps and Our Contribution
Considering the EDPP requirements, a literature search was performed to find tools that run and analyze data locally without an Internet connection, are developed with web technologies, and are open source. These should also run on any device, analyze any data schema, and provide educational metrics, indicators, and visualizations. No tools were found that meet all the requirements, nor any research that proposes local-first technology as a solution to data fragility issues. Some tools are offered as a proprietary service; others are open source but require running in the cloud or in the LMS as a plugin [27], with a whole structure of services that in one way or another do not meet the requirements.
In this article, our main research question concerns the possibility of developing a solution that satisfies the above requirements. As our main contribution, we develop and share, as open source, an analytics and visualization tool that meets all the stated requirements using HTML, CSS, and JavaScript web technology. Besides, throughout the sections of the article, we present the process of development, validation, and adaptation for teaching purposes and open publication.
The structure of the paper consists of five sections. Section 1 is the introduction. In Section 2, we present the methodology and fundamentals followed to develop the analytical tool. In Section 3, we present the results of the development, the survey, and the updates made to the tool in response to the survey results. Finally, we present the discussion and conclusions in Sections 4 and 5. The different sections show how our work makes available to (i) educators an easy-to-use Moodle log analysis tool, to (ii) developers an open-source tool for the analysis of any log format to integrate into their developments or expand their functionalities, and to (iii) researchers a versatile tool to support the visualization of their research results.

Materials and Methods
The development of the tool consists of the execution of different phases in a mixed approach for its final open distribution:

1. Design and implementation of the JavaScript library.
2. Design and implementation of a front-end dashboard as a pilot test of the library.
3. Usability improvement via an evaluation of Nielsen heuristics.
4. Teacher survey to validate the interest and usefulness of the tool, the metrics and indicators used, and the visualizations presented.
The execution of the four phases has made it possible to provide a tool of teaching interest [28], while leaving room for improvement and evolution. In the conclusions, we present different lines of research derived from this first iteration.

JavaScript Learning Analytics Library
The first aim of the project is to develop a configurable library to analyze any log format. To achieve an EDPP-compliant project, the tool should be able to:

• Run in local environments, such as personal devices, e.g., computers or smartphones;
• Run with technologies available in personal devices;
• Visualize data and insights from any learning tool's log of students' data and interactions.
To accomplish these requirements, we considered web technologies adequate for the solution, because they are provided by default in every computer and smartphone with an installed web browser. The resulting library is called JavaScript Learning Analytics (JSLA) and is a compendium of interdependent JavaScript classes that:

• Load a log into an array.
• Transform the log array items to the desired format.
• Analyze and compute data to represent it.
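As a minimal sketch of these three steps running entirely in the browser or in any JavaScript environment (all names and the record layout here are illustrative, not the library's actual API):

```javascript
// 1. Load: a log array, as it would arrive after reading the exported file.
const log = [
  { t: 1609459200, who: "s1" },
  { t: 1609462800, who: "s2" }
];

// 2. Transform: map each raw item to the desired format (Unix seconds -> Date).
const items = log.map((e) => ({ time: new Date(e.t * 1000), user: e.who }));

// 3. Analyze: compute a value to represent, e.g., the number of distinct users.
const uniqueUsers = new Set(items.map((i) => i.user)).size;
```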
To accomplish the above, JSLA defines two main components:

• Schema: a JavaScript function that the JSLA library uses to transform each field of each record inside the logs. Developers implement this function considering the original log format and the transformations required to visualize the analysis results. Hence, the schema defines the transformations to be applied to the raw data.
• Widget: a JSON object that the JSLA library uses to create visualizations. Each widget can specify CSS styles, JavaScript chart libraries, and JavaScript code that the web browser executes to analyze data and create visualizations. Hence, a widget can represent any result computed by JSLA.
With these two components, any developer can create a front-end that easily transforms data from logs and creates either the predefined visualizations in the JSLA library or new ones.
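As an illustration, a hypothetical schema function and widget for a Moodle-style log might look as follows. The input field names (`timecreated`, `userid`, `component`, `action`) match Moodle's standard log export, but the output structure and the widget fields are our assumptions rather than the library's documented API:

```javascript
// Hypothetical schema: transforms one raw Moodle log record into the
// normalized fields the widgets will consume.
function moodleSchema(record) {
  return {
    time: new Date(record.timecreated * 1000), // Moodle stores Unix seconds
    user: record.userid,
    component: record.component,               // e.g., "mod_assign"
    action: record.action                      // e.g., "viewed"
  };
}

// Hypothetical widget: a KPI whose value is computed from the transformed rows.
const totalInteractionsWidget = {
  id: "kpi-total-interactions",
  type: "kpi",
  title: "Total interactions",
  compute: (rows) => rows.length // code the browser runs to produce the value
};
```

A front-end would apply the schema to every record and hand the result to each widget's code, all on the user's own device.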

JavaScript Moodle Learning Analytics Front End
The second aim of the project can be separated into two main ideas:

• Develop a schema and different widgets based on the information collected and extracted from the Moodle LMS.
• Develop a front-end application to generate a dashboard with built-in drill-down capability and predefined visualizations.
As a result of this project, we developed the front-end application known as JavaScript Moodle Learning Analytics (JSMLA). This open-source project can be found on GitHub [28] and is developed using HTML, CSS, and vanilla JavaScript, based on the JSLA library. It is a tool to easily track students' performance, engagement, and interactions with the available resources and activities in any Moodle course in which they are enrolled, providing meaningful analysis of the information from Moodle.
This project is based on the concept of "local first" [1]. Thus, data is not analyzed in the cloud but rather in a local environment, as the data has to be downloaded from the online LMS and stored as a JSON file. Supporting other file formats, e.g., CSV or Experience API (xAPI) [2], is a future line of work. The JSMLA front-end application loads the downloaded JSON using the JSLA library, which processes the data according to the provided schema. Finally, the widgets are processed and the visualizations are presented as a reporting dashboard.
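The load step above can be sketched as follows. `parseLog` is kept as a plain function so it runs in any JavaScript environment; the browser-only `FileReader` wiring (and the `renderDashboard` entry point, a hypothetical name) is shown as a comment. The assumption that the exported log is a JSON array of records is ours:

```javascript
// Parse a downloaded log entirely client-side: nothing leaves the device.
function parseLog(jsonText) {
  const data = JSON.parse(jsonText);
  if (!Array.isArray(data)) {
    throw new Error("Expected the exported log to be a JSON array of records");
  }
  return data;
}

// Browser-only wiring (assumes an <input type="file" id="log-input"> element):
// document.querySelector("#log-input").addEventListener("change", (ev) => {
//   const reader = new FileReader();
//   reader.onload = () => renderDashboard(parseLog(reader.result));
//   reader.readAsText(ev.target.files[0]);
// });
```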
The widgets offer an approach to the different behavior patterns that students acquire in a specific subject, allowing an exploration of the required data to personalize methodologies and understand its meaning, differentiating JSMLA from other tools such as Learnglish [29] or other analytical tools.

Widgets
As the first iteration of JSMLA, we developed a descriptive Learning Analytics Dashboard (LAD). The dashboard contains widgets, which are categorized so the user can easily understand the given information. The widgets were developed ad hoc based on the authors' needs in their teaching practice. We consider this first iteration the starting point of the research, to validate its interest and usefulness for the educational community. The validation methodology is presented below.
We developed four types of widgets:

• Key Performance Indicators (KPI);
• Line chart;
• Pie chart;
• Table.

The selection of the widget type depends on the information we would like to display, to let the user get the maximum benefit from the dashboard.
A default JSMLA KPI widget, as shown in Figure 1, is mainly a single numerical figure that reflects the inner value of the represented metric (or metrics). The metrics shown in this widget can be either absolute or relative [3]. The latter refers to a percentage obtained from the given information to provide a more specific analysis, including ratio computation when there is a comparison between different values of the same metric. In JSMLA, we represent the total of each metric as a numerical figure.
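The two metric flavors can be sketched with two small helpers; the names and the one-decimal rounding are illustrative choices, not JSMLA's actual implementation:

```javascript
// Absolute KPI: a plain count over the transformed log rows.
function kpiAbsolute(rows) {
  return rows.length; // e.g., total interactions
}

// Relative KPI: the share of a part over a total, as a percentage
// rounded to one decimal place; guards against an empty total.
function kpiRelative(part, total) {
  return total === 0 ? 0 : Math.round((part / total) * 1000) / 10;
}
```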

A default JSMLA line chart widget, shown in Figure 2, is a graphical representation where several data points are connected to show the evolution of information over time. Timelines are used to organize the information in a chronological sequence. These plots give a better understanding of growth, change, recurring events, cause and effect, and main events of historical, social, and scientific significance [30]. The overview of the information over a certain period allows teachers to generate valuable insights. For instance, the evolution of student interaction with the course materials can be a key indicator. In other cases, line graphs include a threshold, often indicated with horizontal lines. This feature is available in the widget included in JSMLA, which uses the CanvasJS library to generate charts.

A default JSMLA pie chart widget, as shown in Figure 3, provides a visual representation of nominal or ordinal data, percentage or proportional data, and any categorized data with relative sizes [31]. Pie charts are used to specify how the information is divided among the different indicators and, therefore, provide a deeper dive into the distribution of information and the clarity of the metrics.

A default JSMLA table, as shown in Figure 4, can contain a wide range of information. Unlike the KPI widget, the table may contain alphanumeric values. Tables show top metrics, as they are sorted by columns. Each column represents a categorization or attribute of the designated row. Therefore, a table may contain multiple rows with different values for each attribute. Structured tables offer better user comprehension, clarity, and consistency. The teacher may choose how the information is ordered and grouped.
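The table behavior just described (top metrics, ordered by a chosen column) can be sketched with a small helper; the function name and signature are illustrative, not JSMLA's actual code:

```javascript
// Return the top rows of a table widget, sorted descending by a numeric
// column, without mutating the caller's array.
function topRows(rows, column, limit = 10) {
  return [...rows].sort((a, b) => b[column] - a[column]).slice(0, limit);
}
```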

Basic Functionalities

To further understand and visualize JSMLA, we provide a walkthrough of its basic functionalities. When the front-end application is launched, the homepage shows a button that allows the user to upload the JSON file containing the student logs. For a first-time user, this screen may be confusing, so in this iteration we added a tooltip that opens a video explaining how the tool works (see Figure 5). Once the data is loaded into the tool, the widgets explained in the previous section are shown (see Figure 6), categorized in such a way that the user may find them useful when analyzing the students' participation and engagement in the course.

The next widget is a line-based plot that represents the evolution of all students' interactions with three course elements through time, represented by colors: Total interactions, Tasks, and Files (see Figure 8). By hovering the mouse over the plot, the data of each element is shown, as well as the date being consulted. This widget is useful for comparing overall analytics over time, analyzing which months have seen the most and the least interaction with the different types of resources.
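The aggregation behind such time-evolution plots can be sketched as follows, assuming the rows have already been transformed by a schema into objects with a `time` field. `interactionsPerDay` is an illustrative name, and the output shape mimics the `{x, y}` dataPoints format used by chart libraries such as CanvasJS:

```javascript
// Bucket transformed log rows by calendar day (UTC) and count interactions,
// producing chronologically sorted points ready for a line chart.
function interactionsPerDay(rows) {
  const counts = {};
  for (const row of rows) {
    const day = row.time.toISOString().slice(0, 10); // "YYYY-MM-DD"
    counts[day] = (counts[day] ?? 0) + 1;
  }
  return Object.entries(counts)
    .sort(([a], [b]) => a.localeCompare(b))
    .map(([day, y]) => ({ x: new Date(day), y }));
}
```

Filtering the rows first (e.g., by `component`) yields the separate colored series for Tasks and Files.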
The next widget represents evolution of interactions of all students with all elements in a week, represented with the number of interactions per hour in a table (see Figure 9). This widget is made to be able to determine in which days and hours the students are more likely to work and access the subject's Moodle. Every table has a heat map to help visually identify higher and lower values.  The next widget is a line-based plot that represents the evolution of all students' interactions (see Figure 8) with three course's elements through time, represented by colors: Total interactions, Tasks and Files. By hovering the mouse over, you get the data of each different element, as well as the date being consulted. This widget is useful for comparing overall analytics over time, analyzing which month has seen the most and least interaction with different types of resources.
The next widget represents evolution of interactions of all students with all elements in a week, represented with the number of interactions per hour in a table (see Figure 9). This widget is made to be able to determine in which days and hours the students are more likely to work and access the subject's Moodle. Every table has a heat map to help visually identify higher and lower values. The next widget is a line-based plot that represents the evolution of all students' interactions (see Figure 8) with three course's elements through time, represented by colors: Total interactions, Tasks and Files. By hovering the mouse over, you get the data of each different element, as well as the date being consulted. This widget is useful for comparing overall analytics over time, analyzing which month has seen the most and least interaction with different types of resources.
The next widget represents evolution of interactions of all students with all elements in a week, represented with the number of interactions per hour in a table (see Figure 9). This widget is made to be able to determine in which days and hours the students are more likely to work and access the subject's Moodle. Every table has a heat map to help visually identify higher and lower values.  The second section, Students, contains widgets that give more specific information about all the students in the uploaded log and their individual participation statistics (see Figure 10). The first widget is a line-based plot that represents all students' last access to the course Moodle. The x-axis represents days and y-axis represents the number of students whose last access is that day. By hovering the mouse over, the information of the given day and the list of students whose last connection corresponds to that day is acquired. This widget is useful to analyze students' participation within the course by knowing when they connected last, and who, and therefore being able to determine if students are following the course or if they are not, and also detecting students at risk of dropping out.  The second section, Students, contains widgets that give more specific information about all the students in the uploaded log and their individual participation statistics (see Figure 10). The first widget is a line-based plot that represents all students' last access to the course Moodle. The x-axis represents days and y-axis represents the number of students whose last access is that day. By hovering the mouse over, the information of the given day and the list of students whose last connection corresponds to that day is acquired. 
This widget is useful to analyze students' participation within the course by knowing when they connected last, and who, and therefore being able to determine if students are following the course or if they are not, and also detecting students at risk of dropping out. The second section, Students, contains widgets that give more specific information about all the students in the uploaded log and their individual participation statistics (see Figure 10). The first widget is a line-based plot that represents all students' last access to the course Moodle. The x-axis represents days and y-axis represents the number of students whose last access is that day. By hovering the mouse over, the information of the given day and the list of students whose last connection corresponds to that day is acquired. This widget is useful to analyze students' participation within the course by knowing when they connected last, and who, and therefore being able to determine if students are following the course or if they are not, and also detecting students at risk of dropping out.
The next widget is a table that represents the number of times a member of the course has interacted with each course resource (see Figure 11). It can be used to analyze more specific data about each student and its relationship with the course elements, showing the number of times every student has interacted with every course element.
The following widget shows the percentage of students' participation with all course elements and the total number (see Figure 12). This widget is used to easily detect the students who participate more and those who participate less in the course.
The following widget adds information about the last connection of each member of the course Moodle. The x-axis represents days and the y-axis the number of students whose last access is that day. By hovering the mouse over it, the information of the given day and the list of students whose last connection corresponds to that day is acquired. This widget is useful to analyze students' participation within the course by knowing when they connected last, and who, and therefore being able to determine whether students are following the course or not, as well as detecting students at risk of dropping out.
The next widget presents this information in a table format, showing the exact date of the last connection of each member of the course and how long ago it was made (see Figure 13). This widget helps identify which students have not participated for the longest time and therefore, for example, prevent drop out.
The third section, Resources, contains widgets that provide specific information about each resource of the course and the number of interactions on them. The first widget is a table that represents the last time each resource was accessed by anyone, and how long ago it was (see Figure 14).
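Both last-access tables (the per-student table of Figure 13 and the per-resource table of Figure 14) rest on the same computation: keeping the most recent timestamp per key and deriving how long ago it was. A minimal sketch, assuming hypothetical `user` and `time` fields in the parsed log:

```javascript
// Compute each student's last connection and its age in days,
// as in the Figure 13 table. Field names are illustrative.
function lastConnections(rows, now = Date.now()) {
  const last = {};
  for (const { user, time } of rows) {
    const t = new Date(time).getTime();
    if (!(user in last) || t > last[user]) last[user] = t;
  }
  return Object.entries(last).map(([user, t]) => ({
    user,
    lastAccess: new Date(t).toISOString(),
    daysAgo: Math.floor((now - t) / 86400000), // ms per day
  }));
}

const sample = [
  { user: "alice", time: "2021-05-01T10:00:00Z" },
  { user: "alice", time: "2021-05-03T09:00:00Z" },
  { user: "bob", time: "2021-04-20T08:00:00Z" },
];
const table = lastConnections(sample, Date.parse("2021-05-10T00:00:00Z"));
// alice's last access is 2021-05-03 (6 days ago); bob has been absent longer
```

Grouping by resource instead of by user yields the Figure 14 variant of the same table.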
The heatmap coloring helps to visually identify which course elements students are interacting with less at the time, and which ones more.
Figure 14. JSMLA's list of last access to a resource by any student.
The next four widgets are tables that show percentages of interaction with different resources, and the number of times. The first table shows the percentage of interaction with all kinds of resources (see Figure 15). The second table represents the percentage of interaction with the components of the course (see Figure 16).
The third table represents the percentage of interaction with the events of the course (see Figure 17).
The fourth table represents the percentage of interaction with the context of the course (see Figure 18).
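All four percentage tables can share one computation: group log rows by a configurable key and express each group's count as a share of the total. A sketch, where the key names (`component`, etc.) are assumptions rather than JSMLA's actual schema fields:

```javascript
// Build a percentage table by grouping rows on a configurable key
// (resource, component, event, or context — hypothetical field names).
function percentageTable(rows, key) {
  const counts = {};
  for (const row of rows) {
    counts[row[key]] = (counts[row[key]] || 0) + 1;
  }
  const total = rows.length;
  return Object.entries(counts).map(([value, n]) => ({
    [key]: value,
    count: n,
    percent: +(100 * n / total).toFixed(1), // one decimal place
  }));
}

const log = [
  { component: "Forum" }, { component: "Forum" },
  { component: "Quiz" }, { component: "File" },
];
const byComponent = percentageTable(log, "component");
// Forum: 50%, Quiz: 25%, File: 25%
```

Passing a different key produces each of the four tables from the same function, which is in line with the library's configurable-schema design.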
The last widget is a pie plot that represents the number of different kinds of elements the course has (see Figure 19).
Aside from these sections, we also have a general filter at the top of the application to show data only for a specific student and/or a specific resource (tasks, files, pages, etc.). This provides the user easy access to specific data and not only a general visualization. To further improve the data filter, we added a calendar icon which opens a modal to modify the date range (see Figure 20).
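A sketch of such a filter, with illustrative field names (`user`, `resource`, `time` are assumptions, not the library's documented options):

```javascript
// Narrow log rows to one student and/or resource within a date range.
// All criteria are optional; omitted ones do not filter.
function filterRows(rows, { user, resource, from, to } = {}) {
  return rows.filter((r) => {
    const t = new Date(r.time).getTime();
    if (user && r.user !== user) return false;
    if (resource && r.resource !== resource) return false;
    if (from && t < new Date(from).getTime()) return false;
    if (to && t > new Date(to).getTime()) return false;
    return true;
  });
}

const log = [
  { user: "alice", resource: "Forum", time: "2021-03-01" },
  { user: "bob", resource: "Forum", time: "2021-03-05" },
  { user: "alice", resource: "Quiz", time: "2021-04-02" },
];
const march = filterRows(log, { user: "alice", from: "2021-03-01", to: "2021-03-31" });
// → one row: alice's Forum access on 2021-03-01
```

Every widget can then be fed the filtered subset, so student, resource, and date-range filters compose without touching the widgets themselves.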
The educational data shown on the dashboard may also be handled in different ways. We provide three options on the menu situated on the left side of the application: Default report, Load and Download.
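Because everything runs locally, Download and Load can be a simple serialization round trip in the browser, with no server involved. The report shape below is hypothetical; only the local-first flow is the point:

```javascript
// "Download": serialize the current report so the user can save it locally;
// "Load": parse a previously saved report back. Report shape is illustrative.
function serializeReport(report) {
  return JSON.stringify(report, null, 2);
}

function parseReport(text) {
  return JSON.parse(text);
}

// In the browser, the serialized text would be offered as a file, e.g. via
//   URL.createObjectURL(new Blob([text], { type: "application/json" }))
// and "Load" would read a user-selected file back with FileReader.

const report = { course: "Demo", widgets: ["totals", "timeline"] };
const roundTrip = parseReport(serializeReport(report));
// roundTrip deep-equals the original report
```

The round trip keeps the whole analysis artifact on the user's machine, consistent with the "local first" approach the paper argues for.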
The tool is very simple and easy to use. Its basic functionalities are intuitive and not that complicated from a user's perspective.

Heuristic Analysis
Any widget has to provide fast and efficient readability of information, such as that displayed in rows and columns. The structure and flexibility can adapt to any source of data. However, the amount of information displayed in a single widget also matters. Too much information may make it harder to grasp the facts and the meaningful data. Hence, widgets also require deeper revision and polishing so as not to overload them with meaningless data. For example, table widgets must limit the number of rows and columns [32].
After the creation of the first version of JSMLA, we performed a revision to evaluate improvements to the site, to find and solve its user experience weaknesses, as well as to reinforce its strong points and make it more intuitive, clear, and accessible to the average user. To analyze the usability of the JSMLA front-end website, we decided to create a heuristic report [33], which consists of testing a series of usability hypotheses against business objectives to obtain conclusions and proposals for improvement. This analysis is useful to understand the current state of the product by analyzing where it is deployed and what needs to be developed to accomplish objectives without involving final users.

Survey
A survey was conducted in Spanish with 65 professors from different education levels: primary and secondary education, higher education, basic and intermediate vocational training, and university education, which includes both Bachelor and Master cycles. The population for this study was reached by announcing the survey on social media [34,35] and through mail chains by the authors. Considering that each professor is an active participant in the educational sector, we did not set any minimum requirement, as this study aims to inspect the perception of teachers toward JSMLA's current features and explore new possibilities directly from its main target.
The survey consisted of a thorough validation of the indicators shown in the dashboard to determine whether the information is useful for their student evaluation [36]. We presented a set of questions using a Likert scale from 1 to 5 as the answer. The lowest value, 1, means they strongly disagree, and the highest value, 5, means they strongly agree. The survey also required teachers to introduce a brief description of why the indicator is of use to them (summarized in a very short question).
We do not consider this study experimental, as it has no control group; yet it provides a group with enough participants to be considered a representative sample able to provide significant findings. Despite this, it must be noted that the survey was conducted in Spanish and its population comes largely from similar and nearby regions.
An expected outcome of the survey is feedback on each indicator's visualization and kind of data, as well as the suggestion of new parameters to observe, so that JSMLA may acquire new lines of development from the study results, expanding the metrics the system may represent in the future and providing meaning for other tools that exploit said data, such as Learnglish or other similar tools [29,37].

Structure
The survey is divided into four different sections (see Figure 21). Each section refers to a specific group of indicators that are closely related to one another, explains current JSMLA features, and asks the population about their perceived usefulness on the Likert scale, with an open short question to justify the given mark and explain whether they would use said feature.

The first section, course totals, refers to the widgets that indicate the total number of accesses (i) to the course, (ii) to each type of resource, and (iii) to each activity type. In this section, the indicators allow the user to understand the activity of the course and the types of resources and activities that are highly relevant to the students.
The second section, a timeline of accesses to the course, refers to the graphs which indicate the total number of (i) accesses to the course, (ii) last accesses, and (iii) accesses to each resource and activity over time. This set of indicators allows the user to understand the organization of students. The user may be able to see who is actively participating in the course and constantly carrying out their tasks daily. These indicators also exhibit the most used resources. The user will be able to understand the behavior of each student and act upon the information to support students who are in danger of abandoning the course.
The third section, called student participation, contains widgets that show (i) the percentage of accesses to the course, (ii) the total number of accesses to the subject webpage, and (iii) to each resource and activity. These indicators help the user understand each student's information individually, offering an opportunity to pinpoint those who are making an effort and those waiting until the last moment to work on the proposed projects and activities. According to the information gained from these indicators, teachers will know when to send notifications to some students to increase their active participation in the course.
The fourth and last section, student interactions with resources, offers indicators for the student interactions with the resources. This includes (i) the percentage of accesses, (ii) the total number of accesses, and (iii) the date and time of last access to the resources. The resources can be any activity, task, or Moodle component such as files, forums, registers, etc., as well as user events. A user event can be a submitted task, a viewed activity, a created group, etc. All of these give the user a better understanding of the student's behavior concerning each available resource in Moodle and in the course itself. The main objective is to detect which are the most used resources and which the least used. The teacher will also see the most common actions performed and who participates more in terms of resources.
These four sections only gather insights concerning the current widgets in the dashboard. After collecting their insights on the indicators, they were asked whether they still lack indicators. If so, they were asked to comment on it with a brief description.
With the current widgets in the JSMLA dashboard, they were also asked for possible improvements and changes to enhance the user experience. This will boost the utility of the dashboard for various contexts and build on the collected information to develop possible future indicators that are useful on a significant level.

Results
We present the results of the heuristic analysis and survey. These results were used to improve user experience in the use of the predefined JSMLA dashboard and to create new widgets regarding the needs of surveyed teachers.

Heuristic Analysis Results
The hypotheses under which JSMLA was evaluated are Jakob Nielsen's 10 Heuristic Usability Principles [26]. The website was evaluated under the criteria of each of the 10 principles (the heuristics):
1. Visibility of system status: the system always informs users about internal processes.
2. Match between system and the real world: the system speaks a language that the user easily understands.
3. User control and freedom: the system permits undo and redo of every function.
4. Consistency and standards: the same meanings are represented equally, maintaining consistency over icons.
5. Error prevention: preventing errors from occurring in the first place.
6. Recognition rather than recall: make actions visible to minimize the user's use of memory.
7. Flexibility and efficiency of use: allow users to tailor frequent actions.
8. Aesthetic and minimalist design: no extra information is given.
9. Help users recognize errors: express error messages in plain language.
10. Help and documentation: provide help and documentation.
These principles are the basis for any designer when evaluating usability, and therefore the most common ones to employ in the heuristic analysis.
The use case consists of a written description of what actions users perform on the page, i.e., what steps they would take to perform certain tasks. Writing this can be very useful to know how a certain customer will behave on the site, so it is important to define a situation that fits the normal use of the site: define well the personality of the hypothetical user, his environment, his motivations, why he uses the site, what external factor influences him, what is his real objective, etc.
The use case used for this analysis consisted of a teacher who teaches several subjects and uses the platform to analyze the real activity of his students with the material he shares.
Regarding accessibility, the page was also studied from an inclusive design point of view to verify that it respects the current regulations, specifically the "Royal Decree 1112/2018 on the accessibility of websites and applications for mobile devices of the public sector" [38].
The result of this analysis consists of a list of web problems, the hypotheses, grouped according to their severity. The analysis extracted 20 findings classified according to their importance: priority, significance, and relevancy.
As can be seen in Figure 22, the main usability errors raised by the first version of the site are based on the eighth principle, which deals with aesthetics. The poor layout of the content and the design of the graphs and tables generated confusion in the interpretation of the information.
Priority assumptions are also detailed concerning page consistency and patterns, corresponding to the fourth principle, since a consistent design pattern is not maintained for all sections. This hinders the ease of use and clarity of design.
Regarding the first principle, which deals with "visibility of system activity", there are hypotheses of lack of communication that may confuse the user because he/she does not understand what the page is doing at a precise moment.
Concerning the third principle, which corresponds to "the user's freedom to move around the web page and undo actions", the user had to reload the page to continue using it, resulting in a major failure of usability and fluidity of use.
The tenth principle deals with "the help and explanation that the page offers to the user". Two significant findings are found wherein the user may not understand the content, thus limiting the clear understanding of the information.
It can be observed that for the second and fifth principles only one significant finding each is found. These deal with the adequacy of the system to human language and with error prevention, respectively.
The results of the heuristic analysis allowed for several significant improvement changes to the first version of JSMLA, visible in Figure 23. Since the clear interpretation of data is fundamental to JSMLA, all findings related to the aesthetics of tables and content layout are resolved, e.g.:

• Remove elements that do not contain any information or nonfunctioning buttons;
• Enable resizing of tables and graphs;
• Correctly align the content of tables.
Figure 23. Table alignment.
JSMLA has a colorimetric representation to highlight the values higher than the rest in each table. This color code is used to facilitate the search and understanding of the data visually and quickly. Initially, this code contained multiple different colors for different tables; in the analysis, it was observed that the information it intends to provide is not clear and confuses the interpretation of the data. The system is simplified from several colors to a single color with different levels of brightness (see Figure 24).
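A single-hue scale of this kind can be sketched by mapping a cell's value to the lightness of one fixed hue. The hue (210, a blue) and the lightness range below are illustrative choices, not JSMLA's actual palette:

```javascript
// Single-hue heat coloring: higher values map to darker cells of one hue.
// Hue 210 and the 90%–40% lightness range are arbitrary illustration values.
function cellColor(value, max, hue = 210) {
  const lightness = 90 - 50 * (max ? value / max : 0); // 90% … 40%
  return `hsl(${hue}, 70%, ${Math.round(lightness)}%)`;
}

const max = 20;
// cellColor(0, max)  → "hsl(210, 70%, 90%)" (near white)
// cellColor(20, max) → "hsl(210, 70%, 40%)" (darkest)
```

Using one hue with varying lightness preserves the "higher means darker" reading the heuristic analysis called for, without the ambiguity of a multi-color code.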
JSMLA is an open-source tool whose main users are teachers of different subjects and teaching areas, so it must be clear, useful, and easy to use for any user. To facilitate the understanding of the information that the page shows to the user, a tooltip is added to each widget (see Figure 25), which consists of a floating box with a brief explanation.
Another oversight of the first version lies in the user's freedom in using the page, along with the lack of visibility of the system status. The version tested does not react when an erroneous file is loaded; it simply stops the progress bar, giving the user a false sense that the tool is continuing to load the data. This error forces the user to close the page and reopen it to continue using it, which also happens when the user wants to load a different file than the current one.
Both impediments are solved by adding the functionality of a button that allows uploading other files at any time, as shown in Figure 26. To strengthen error detection, filters are also added that only allow uploading of files with proper file extension and not corrupted files.
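A sketch of such an upload guard follows; the accepted extensions and error messages are assumptions for illustration, not JSMLA's actual behavior:

```javascript
// Reject unsupported extensions and unparseable content before loading,
// instead of silently stalling the progress bar. Extensions are illustrative.
function validateUpload(name, text) {
  if (!/\.(csv|json)$/i.test(name)) {
    return { ok: false, error: "unsupported file extension" };
  }
  if (name.toLowerCase().endsWith(".json")) {
    try { JSON.parse(text); } catch {
      return { ok: false, error: "corrupted JSON file" };
    }
  }
  return { ok: true };
}

// validateUpload("log.txt", "...")     → rejected: extension
// validateUpload("log.json", "{oops") → rejected: corrupted
// validateUpload("log.json", "[]")    → accepted
```

Surfacing the returned error to the user also addresses the first heuristic (visibility of system status), since a failed load is reported instead of leaving the page frozen.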
Through this Nielsen analysis, multiple findings of errors are detected in both the usability and accessibility of the web page, which serve to make it more efficient and accessible.
An important unresolved hypothesis is to improve the filter by creating subsets of categories to list and reduce the dimension, in addition to incorporating a text field that allows searching with autocomplete suggestions. This improvement is considered a relevant hypothesis, but it was finally decided to leave it as a future line of work, because the priority is to reinforce clarity when interpreting data and colors, ease of use and understanding of its functions, and a simple design that facilitates the understanding of the information.

Teacher Survey
The questionnaire's population is based on teaching staff from different types of educational institutions located in different countries.
As shown in Figure 27, the majority of the population is located in Spain (91.8%), followed by Colombia (4.8%) and Mexico (3.2%). There is also participation of staff located in the UK, Venezuela, and Portugal. Likewise, the majority of respondents are based in university (75.4%), high school (7.7%), and secondary school (6.2%); to a lesser extent, there are educational personnel from primary school (4.6%), high school (1.5%), middle school (1.5%), and five others (3.1%).
As shown in Figure 28, among the institutions that have participated in the study, there are some well-known university institutions, such as La Salle (Universitat Ramon Llull), UA, UCA, and UPM.
Considering the section "course totals" of the survey, as shown in Figure 29, most of the population is concentrated in the high end of the scale offered, both from a regional and an educational stage perspective. From the perspective of Certificate of Higher Education, Other, and a segment of University, its usefulness is not as relevant.
Scores shown in Figure 30 are justified by the following comments:
• It allows to determine which content is more attractive to the learner and thus make adjustments, among others.
• It gives an idea, but the exact reason for the access is not known.
• It allows to know the utilization of the resources and to follow up the activity of the students. The use of the resources will allow the (re)design of these in a cycle of continuous improvement of the design of the subject.
Considering the section "last access and timeline", as shown in Figure 13, most of the population answered amidst the high end of the scale offered, both from regional and educational perspectives. In this case, the population distribution for teachers across countries is similar to that for teachers across educational stages.
In the following, we provide a set of responses to understand the distribution shown in Figure 31:
• It is a piece of useful information to contrast the students' perceptions of work.
• For continuous assessment perhaps, but if they have entered earlier or later, it is not so interesting.
• It allows the detection of periods of more and less activity. It would be interesting to provide the ability to compare between subjects to detect moments of less activity and to be able to plan deliveries in those moments.
Considering the student participation section, as shown in Figure 32, most of the population is concentrated in the higher values of the scale offered, whether considered from a regional perspective or the perspective of the educational stage.
The following set of comments offers feedback on the results shown in Figure 32:
• It can be interesting, although the result depends on the Campus activities, which often have static information. Thus, a very organized student who downloads a PDF may not download it again but may review it more than another student who accesses it multiple times.
• It would facilitate the attention to diversity, to dedicate more support to those who need it most, among many other things.
• We can know whether a specific student is using the platform or not. Their degree of interaction with the subject.
Considering the section resource interactions, as shown in Figure 15, most of the population is clustered in the upper range of the scale offered, both from the regional and the educational stage perspective. A set of answers to understand the position of the distribution shown in Figure 33 is provided below:
• I find this information very useful to be able to identify which resources are 'preferred' by students. This information can help a teacher to improve the quality of a course by focusing on generating resources that students 'like'.
A set of answers to understand the position of the distribution shown in Figure 33 is provided below: • I find this information very useful to be able to identify which resources are 'preferred' by students. This information can help a teacher to improve the quality of a course by focusing on generating resources that students 'like'. • Information to revamp the course or update. • Depends on the resource and how easy it is to show that the resource is being used. Lastly, teachers were asked which elements would change considering JSMLA widgets' functionality and content. As shown in Figure 16, University and Other educational stages teachers are the most demanding. The survey also gathered information regarding the elements that users would like to add or modify to the tool. Below there is an excerpt of the most commented concepts: • Time indicator (graph): determine how much time a student spends in the subject from the interactions.

•
Indicator of the relationship between the number of interactions of a student and the Lastly, teachers were asked which elements would change considering JSMLA widgets' functionality and content. As shown in Figure 16, University and Other educational stages teachers are the most demanding.
The survey also gathered information regarding the elements that users would like to add or modify to the tool. Below there is an excerpt of the most commented concepts: • Time indicator (graph): determine how much time a student spends in the subject from the interactions.

•
Indicator of the relationship between the number of interactions of a student and the grade obtained (dedication vs. grade obtained).

•
Indicator that includes academic results. • Include more graphs instead of using so many tables, text, or numerical data. • Include the possibility of downloading a file with information from the tool in Excel or HTML format.

•
Indicator with the possibility for a teacher to establish his/her metrics (for example, an indicator based on choosing other indicators with a weight each or similar).

Discussion
Considering the indicators offered by the JSMLA dashboard, teachers consider them to be of interest, since the majority have positively evaluated each section. Nevertheless, there have been comments regarding the information shown, suggesting the need to adapt the content to some of the teachers' needs. The university sector shows a larger need to adapt the tool to meet its needs, especially in the last section of the survey (see Figures 15 and 16). For instance, there is a particular group of teachers who are eager to incorporate (i) the students' motivations for using the resources, and (ii) the possibility to compare subjects, that is to say, to load more than one JSON log file at a time. However, some of the needs cannot be met because the logs do not include the required data or the functionalities involved are too basic, such as the association of grades with students or the creation of new indicators by the teaching staff. It should be noted that the study population carries a certain bias, as most respondents come from certain geographic regions, but we considered it acceptable because the required base profile is defined by their current job: teachers. Hence, the resulting JSMLA dashboard offers interesting visualizations from Moodle's logs. However, the survey results expose that Moodle's log data is limited, and JSMLA can be improved by generating new kinds of visualizations from other data sources, such as academic records, that meet teachers' needs.
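The subject comparison requested by teachers amounts to loading several JSON logs and tagging each entry with its course of origin, so that any widget can group by course. The following sketch illustrates the idea; the `name`/`entries` field names are assumptions for illustration, not JSMLA's actual API.

```javascript
// Hypothetical sketch: combine several course logs into one dataset,
// tagging every entry with its course name so widgets can compare
// subjects side by side. Field names are illustrative assumptions.
function mergeCourseLogs(courses) {
  return courses.flatMap(({ name, entries }) =>
    entries.map((entry) => ({ ...entry, course: name }))
  );
}

// Example with two minimal logs:
const merged = mergeCourseLogs([
  { name: "Algebra", entries: [{ event: "viewed", user: "s1" }] },
  { name: "History", entries: [{ event: "submitted", user: "s2" }] },
]);
```

A widget can then filter `merged` by the `course` field to render one series per subject.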
In the dashboard, information is offered through different types of visualizations (KPIs, line charts, pie charts, and tables). Notwithstanding the different forms of data visualization, teachers find the information too similar. This may be due to the data in the Moodle log used to create the visualizations, which reinforces the need to consider multiple sources of data.
Table widgets offer a plain representation of data. When a large group of students and/or resources is analyzed, this representation grows larger and confuses end-users. From the survey results, it is possible to state that some widgets were complex and hard to process. In particular, sections such as interactions and student accesses are more likely to need a visual aid to extract helpful information, such as an interpretation or a plot that represents data through aids other than tables, such as colors and interaction. In some cases, it seems that the data literacy of some surveyed teachers is not sufficient to understand the data presented. This opens another future research line: evaluating the data literacy of educators.
Due to the results of this survey, several actions can be taken that may also be of interest to other dashboard developers and researchers:
• Replace tables with graphs when possible, so that it is more intuitive to extract the information shown.
• Incorporate new metrics that relate student involvement to the information displayed, such as information regarding the time students spend on resources or reviewing the tool.
• Add a flowchart that allows users to interpret the students' paths through the resources.
• Offer more export formats for the generated data.
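The time-spent metric requested above can only be approximated from an interaction log, since Moodle records events rather than durations. A common heuristic is to treat consecutive events closer than a configurable gap as one continuous session; the sketch below (not part of JSMLA, and with an illustrative 30-minute threshold) shows the idea.

```javascript
// Illustrative sketch: estimate the minutes a student spends in a
// course from event timestamps (in milliseconds). Consecutive events
// closer than `gapMinutes` are counted as continuous activity; larger
// gaps are treated as breaks. The threshold is an assumption.
function estimateMinutesSpent(timestamps, gapMinutes = 30) {
  const sorted = [...timestamps].sort((a, b) => a - b);
  let minutes = 0;
  for (let i = 1; i < sorted.length; i++) {
    const gap = (sorted[i] - sorted[i - 1]) / 60000; // ms -> minutes
    if (gap <= gapMinutes) minutes += gap;
  }
  return minutes;
}

// Three events: two 10 minutes apart, then one two hours later.
const t0 = Date.parse("2021-05-01T10:00:00Z");
const spent = estimateMinutesSpent([t0, t0 + 10 * 60000, t0 + 130 * 60000]);
```

In this example only the 10-minute gap is counted; the two-hour gap is discarded as a break, which is exactly why such an indicator must be presented as an estimate rather than a measurement.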
Regarding JSMLA, many widgets have changed their representation towards a more visual definition, such as pie plots and scatter plots; to be precise, the widgets student participation, interactions with components, interactions with events, interactions with URL, interactions with context, last access and students, and resource-students access chart.
When there is a reduced number of rows, as happens with widgets like interactions with components, events, or URLs, the representation is translated to pie plots, as these widgets deal with a small group of elements. The same applies to student participation and interactions with context, as these plots provide a similar structure to identify the largest groups and allow end-users to identify a group by hovering over it. When there is a large number of rows and/or students, a scatter plot is provided instead: with an axis per student, it can represent the number of times each element has been interacted with, and it can be easily filtered. When it comes to representing the last access for each user, the format has changed to provide an unbiased plot that represents data without any interpolation. New widgets have been developed, such as students' time dedication to the course and detailed resources. These changes are openly available in the GitHub repository [28].
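The selection rule described above (pie plot for a small group of elements, scatter plot for a large one) can be sketched as a single function. The threshold below is illustrative, not the exact value used by JSMLA.

```javascript
// Sketch of the widget-representation rule: few rows map naturally to
// a pie plot, large row counts to a filterable scatter plot. The
// `pieLimit` threshold is an assumption for illustration.
function chartTypeFor(rowCount, pieLimit = 12) {
  if (rowCount === 0) return "empty";
  return rowCount <= pieLimit ? "pie" : "scatter";
}
```

Keeping this decision in one place makes it easy for a front-end to apply the same rule consistently across all widgets.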

Findings across Previous Studies and the Current One
As mentioned in previous sections, JSMLA was studied through a heuristic revision as part of a user experience focus.
Whilst the previous findings [33] focused on providing a standardized tool to analyze Moodle's logs, through the study provided in this paper we expose new concepts related to the following elements:
• Amount of time related to different resources and types of activities, including a flowchart to interpret the students' paths. This is a new concept that was not explored previously, as such a feature was outside the scope of the previous study.
• Replace tables with graphs when possible, so that it is more intuitive to extract the information shown. As this study considers the population's ability to understand data, this result came from introducing the users' voice into the content they see. In the heuristic analysis, the principles followed were not based on data visualization or on the standards applied to the educational sector.

Conclusions
The data generated by an educational institution can be considered Small Data, but not Big Data. Even though the data is small, the analytical techniques that can be used are the same as for analyzing large amounts of data. Besides, these techniques are usually executed in Cloud Computing on third-party servers. This means that educational data is transferred outside of academic institutions and is often beyond their control. This situation is replicated when third-party educational technology is integrated into educational processes. Consequently, the educational context presents a serious problem of sensitivity and fragility in terms of confidentiality, privacy, and identity of the students' data. Therefore, the more educational tools used or the more data transfers to third-party servers, the more dispersed the student data will be and the less control over privacy there will be.
Notwithstanding the above, these issues are accompanied by positive opportunities. The analysis of educational data helps to improve the learning context, teaching activity, and academic management. However, the benefits are not reason enough to put aside the problems related to data. These analytical processes must comply with the law to ensure adequate levels of privacy and security.
Teachers can use local technology to reduce data transfer and third-party actors. In this sense, as the data generated in a class can be considered Small Data, the development of a local analysis tool makes sense. Considering the novel theoretical framework that sets local technology as the first option to analytical technology adoption [17], we present an analytical and visualization tool developed with web technologies, executable locally from any device that has a web browser installed, and without requiring an internet connection.
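The core of such a local tool is parsing an exported log entirely in the browser, so the file never leaves the teacher's device. The sketch below shows a minimal, naive CSV parser for a Moodle-style records report; the column names and the simple comma split are assumptions (real exports vary between Moodle versions and need quote-aware CSV handling), and a configurable schema such as JSMLA's exists precisely to absorb those differences.

```javascript
// Minimal sketch of fully local parsing: no network request is made.
// The schema below is an illustrative assumption about the export's
// column order, to be adapted per Moodle version.
const SCHEMA = ["time", "user", "context", "component", "event"];

function parseLogCsv(text) {
  return text
    .trim()
    .split("\n")
    .slice(1) // drop the header row
    .map((line) => {
      const cells = line.split(","); // naive split; real CSV needs quote handling
      return Object.fromEntries(SCHEMA.map((key, i) => [key, cells[i]]));
    });
}

// In a page, an <input type="file"> plus FileReader feeds this parser
// without any data transfer:
//   reader.onload = () => render(parseLogCsv(reader.result));
//   reader.readAsText(fileInput.files[0]);
```

Because the whole pipeline runs in the browser, the approach works offline and keeps the student data under the institution's control.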
The development of the tool follows a scientific methodology based on iterative software development in four phases. The first phase consists of the development of the base library. The second phase, considered a pilot to support the research, consists of the creation of a schema and visualizations to analyze data from the Moodle records report; the visualizations are based on the needs of teachers at the authors' institution. The third phase consists of a Nielsen heuristic analysis. The final, fourth phase consists of improving the visualizations based on a survey of teachers from different educational stages. Given the indicators and visualizations provided by the tool, teachers consider them to be of interest. However, there have been comments suggesting the need to add other datasets and views beyond an interaction log. Additionally, the resulting code is published as open source [28].
The heuristic analysis made it possible to improve the usability of the tool, and the teacher survey allowed us to discover other needs and points of view. The results of both actions improved the tool by generating new widgets and adapting existing ones to teachers' perceptions. The developed tool initially shows analytical-descriptive results that lay the foundations for future developments and/or adaptations of educational indicators.
In summary, our work provides educators with an easy-to-use Moodle educational data analysis and visualization tool, developers with an open-source JavaScript library to build front ends, and researchers with a versatile tool with a configurable data schema to support their research.
New lines of research are opened to address new ways of displaying relevant information for teachers, to create new metrics and indicators from multiple data sources, and to know the level of data literacy that can help improve interaction with the tool.