Education for Sustainable Energy: Comparison of Different Types of E-Learning Activities

This paper reports a comparison of results obtained by using different e-learning strategies for teaching a biogas topic in two courses of the chemical engineering degree at the University of Granada. In particular, four different asynchronous e-learning activities were carefully chosen: (1) noninteractive videos and audio files; (2) reading papers and discussion; (3) a virtual tour of recommended websites of entities/associations/organizations working in the biogas sector; (4) PowerPoint slides and class notes. Students evaluated their satisfaction level (assessment) and teachers gave scores for evaluation exams (scores). We discuss the results from a quantitative point of view to suggest recommendations for improving e-learning implementations in education for sustainable energy. For the dependent variables, score reached and satisfaction assessment, we find that the differences between means for students in the two academic years are not significant. In addition, there are no significant differences between means depending on the type of course. Significant differences do appear in scores and satisfaction assessment between the different activities. Finally, we analyze in depth the relationship between score and satisfaction assessment. The results show a positive correlation between the assessment of the e-learning activities and the score level reached by students.


Introduction
Our society bases its functioning on the use of energy. Energy is needed for everything: to illuminate houses and streets, to heat and cool interiors, to transport goods and people, to produce and prepare food, to manufacture almost everything people use, and so on. Until just two centuries ago, humans obtained the energy they used from the strength of animals, from fire produced by burning wood, and from the force of water and wind. At the end of the eighteenth century, with the invention of the steam engine and the great industrial and technological revolution that came with it, energy consumption surged, making new sources such as coal necessary. Since then, the need for energy has increased progressively to the point where, at present, the degree of development of a country or region is measured by its energy consumption. Access to energy is universally recognized as key to economic development and to the realization of human and social wellbeing. Most of the energy humans consume today comes from fossil fuels such as oil, coal or natural gas. These are natural but nonrenewable resources, which, when demanded at too high a rate, risk becoming scarce or even exhausted, with all the problems that would entail. The massive use of fossil fuels, in addition to causing problems and social inequality due to their growing scarcity, is also causing environmental problems such as pollution, changes in biodiversity, and global warming, which may cause serious difficulties in the not too distant future. That is why many international institutions and movements promote a transition towards sustainable energy sources.

Table 1. Distribution of the students who participated and evaluated the e-learning activities by course and academic year.

Description of the E-Learning Activities
The main purpose of all the e-learning activities in the courses was to promote study by the students in their own time and at their own place. With regard to the educational goal of the e-learning activities, it was "receiving knowledge", or acquisition according to Laurillard's work on learning types [22]. That means learning specific terms, principles, practices, etc., without requiring students to produce anything themselves. This goal was met mainly by watching educational videos, listening to sound files, and reading texts and multimedia presentations, among others [23].
Next, we provide a brief description of the e-learning activities chosen in this work.

Activity One: Noninteractive Videos and Radio Podcasts
A total of ten videos and three radio podcasts were provided to the students. The materials included educational videos from different Spanish universities, four technical videos from international companies, and three radio interviews featuring senior professors from different Spanish universities. In this e-learning activity, the students are physically passive but mentally active: they are "absorbing" knowledge.

Activity Two: Readings

For this activity, students had to read a total of five review papers and, optionally, summarize them in a brief report with the relevant details. Writing the summary report helps students learn by requiring them to remember and understand the information.

Activity Four: PowerPoint Slides and Class Notes

For this activity, the students had to analyze, revise and study a series of slides and class notes prepared and provided by the teacher.

Table 2 summarizes the distribution of the e-learning activities among students of the different courses and academic years. Consistent with this type of study, there was no attempt by the researchers to assign certain participants to specific e-learning activities; they were randomly assigned by the teachers of the course where the study was carried out. In both courses, one or two weeks before the e-learning experience, students answered some opening exploration questions to provide information about their previous knowledge of the biogas sector. Results did not show significant dissimilarity among the groups. In addition, the teacher was the same for both courses and academic years.

The Module Content
The module developed in the experiment was analysis of biogas technology for energy production and climate change mitigation. It is dedicated to increasing understanding of biogas as an energy option and includes contents about: (i) biomass resources for biogas production and feedstock characterization and pretreatment, (ii) fundamental engineering of biogas plants and (iii) biogas applications including its cleaning and upgrading to biomethane.
It is important to highlight that the content of the module was exactly the same for all e-learning activities.

Postexperiment Questionnaire
At the end of the e-learning activity, and again some time later to check whether the knowledge had been properly assimilated, participants took a written exam consisting of objective questions about the module content. It is important to mention that, for this study, the numerical score obtained by each student related only to the topic developed through the e-learning activities.
The questions were the same for all students. Also, after the e-learning activity, participants were required to fill out a questionnaire to assess their perceived satisfaction and to give feedback on the system and their learning experience. In both questionnaires, the potential test scores ranged from 0 to 10 and the time allowed for the tests was the same for all e-learning groups. With regard to the satisfaction survey, the following items were also evaluated using a Likert scale (disagree, neutral, agree). These items are adapted from the work of Ginns and Ellis [12].

• Item 1: Guidelines for the e-learning activity, needed to understand the purpose and contents of the unit, are well provided and integrated in the course.
• Item 2: The materials provided for the development of the unit are good and made the topic interesting to students.
• Item 3: The e-learning activity helps students to learn better than in a face-to-face situation.
• Item 4: The workload for the e-learning activity is convenient.
• Item 5: Students have enough time to do/work on the e-learning activity.

Data Analysis
In the analysis of the data we use the following statistical tools:

• Box plots: The box plot is a standardized way of displaying the distribution of data, based on the "minimum", first quartile (Q1), median, third quartile (Q3), and "maximum". It gives information on the variability or dispersion of the data.

• Analysis of variance (ANOVA): ANOVA is a powerful tool to determine the influence that an independent variable (in our case: year, course and activity) has on a dependent variable (in our case: score and assessment) and to test the differences between two or more means.
• Odds ratio: This is a measure of the association between two binary variables. We calculated it using binary logistic regression, commonly used to model the probability of a certain class or event. In our case, we estimate the probability of a high score instead of a low one, depending on whether the satisfaction assessment is low or high.

Table 3 shows the mean and standard deviation values for each variable, score (exam mark) and assessment (valuation of the e-learning experience), for each category and factor group. One conclusion from Table 3 is that teachers need to focus not only on knowledge but also on their students' perceptions of the e-learning activities, since the valuation of the e-learning activities ranged between 5.29 and 7.53 (not very good values). As for the number of students enrolled across all courses and academic years, a total of 17 students were assigned to each e-learning activity (28 from academic year 2018/2019 and course 1, and 40 from academic year 2019/2020 and course 2). Although there were more students in academic year 2019/2020 and course 2, the number of students distributed among the different types of learning activities is similar.
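As a sketch of how the ANOVA comparisons reported below work, the one-way F statistic can be computed in a few lines of pure Python. The group lists are hypothetical score values, not the study's data:

```python
def one_way_anova_f(groups):
    """Return the one-way ANOVA F statistic and its degrees of freedom.

    groups: list of lists, one list of dependent-variable values
    (e.g., exam scores) per level of the independent variable.
    """
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total number of observations
    grand_mean = sum(x for g in groups for x in g) / n

    # Between-group sum of squares: how far group means sit from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their own group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical scores for two e-learning activities.
f, df1, df2 = one_way_anova_f([[1, 2, 3], [4, 5, 6]])
print(f, df1, df2)  # 13.5 1 4
```

A large F (relative to the F distribution with these degrees of freedom) indicates that the between-group differences are unlikely to be due to chance alone.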

Data Description and Analysis of Differences between the Groups
As for the distribution of teachers across the groups, the same teacher taught the face-to-face classes and designed and reviewed the e-learning activities.
The academic performance (score) of the students is one of the most important dimensions in the learning process. For Ruiz et al. [24], academic performance is the result of countless factors, ranging from personal aspects, through those related to the family and social environment in which the student develops, to those that depend on the institution and those that depend on the teachers. As this study was performed in only one institution (University of Granada), with the same teacher, we attribute differences to the academic year, the type of course and the type of e-learning activity. However, some important limitations of the work must be noted. Firstly, we did not compare scores and assessment for the e-learning activities against a face-to-face situation. Secondly, we did not measure students' personal skills (e.g., reading skills) and other factors related to different learning processes that could be influencing scores and assessment. These factors are in practice included as random error, and quantitative models work with this assumption, but we have no evidence that they are not significant for the dependent variables.

Differences between the Academic Years
Figure 1 shows box plots to give a graphical image of the concentration of the data. On the left, we can see that the score distributions for the two academic years are very similar. On the right, the satisfaction assessments in these two academic years are also similar, with a slight difference between dispersions. Analysis of variance (ANOVA) was used to determine whether the differences between academic years are significant (Table 4). A comparison of means showed that the observed difference was not significant for any of the variables. In addition, to test the differences between the academic years, we performed Levene's test (Table 5). Results show that the variances for the two academic years do not differ significantly from each other.

Differences between the Courses

As for the course followed by the students, in the box plots (Figure 2) we observe no difference in terms of the scores. There seems to be a greater coincidence of results in terms of the assessment of course 1 than of course 2, in which there is a little more dispersion.
A comparison of means using ANOVA (Table 6) and Levene's test (Table 7) indicates that the differences are not significant, either in terms of the average score and assessment or in terms of the variability of both. In conclusion, scores and assessment are comparable between the two courses.
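Levene's test used in Tables 5 and 7 compares group variances by running, in effect, a one-way ANOVA on the absolute deviations of each observation from its group mean. A minimal pure-Python sketch of the mean-centered variant, with made-up values rather than the study's data:

```python
def levene_w(groups):
    """Levene's test statistic (mean-centered variant): the one-way ANOVA
    F statistic computed on absolute deviations from each group's mean.
    A large W suggests the group variances differ."""
    # Transform each observation into its absolute deviation from the group mean.
    z = []
    for g in groups:
        m = sum(g) / len(g)
        z.append([abs(x - m) for x in g])

    # Standard one-way ANOVA on the transformed data.
    k = len(z)
    n = sum(len(g) for g in z)
    grand = sum(v for g in z for v in g) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in z)
    ss_within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in z)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# One spread-out group vs. one constant group: the variances clearly differ.
print(round(levene_w([[1, 2, 3, 4], [2, 2, 2, 2]]), 6))  # 12.0
```

Some implementations center on the median instead of the mean (the Brown-Forsythe variant), which is more robust to non-normal data.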

Differences between the Types of E-Learning Activity
As for the score and valuation recorded for each activity, Figure 3 shows a decrease in both the score and the assessment of activity 2 with respect to activity 1, with activity 3 having the greatest variability. By performing an analysis of variance and Levene's test (Tables 8 and 9), we find significant differences in both the mean scores and the mean assessments of the different e-learning activities, and we find significant differences between the variances of the scores for two different activities.

Table 8. Results of the ANOVA analysis for testing the significance of the type of e-learning activity.

As for the comparisons between pairs of activities, we performed the Bonferroni test; Table 10 reports the results. For test scores, activities 1 and 4 were similar to each other, and activities 2 and 3 as well, with significant differences between these two groups of activities. This means that test scores were similar when students used videos and podcasts and when they used PowerPoint slides and class notes, but they are statistically different from those observed when readings and websites were used. This fact may be due to the difference between active and passive learning. We can see the first and fourth activities as more passive learning methodologies, and the second and third as more active learning methodologies.
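The Bonferroni procedure behind these pairwise comparisons simply multiplies each raw p-value by the number of comparisons (equivalently, divides the significance level), controlling the family-wise error rate. The raw p-values below are illustrative, not those of the study:

```python
from itertools import combinations

def bonferroni(raw_pvalues, alpha=0.05):
    """Bonferroni correction: multiply each raw p-value by the number of
    comparisons (capped at 1.0) and flag those still below alpha."""
    m = len(raw_pvalues)
    adjusted = [min(1.0, p * m) for p in raw_pvalues]
    significant = [p < alpha for p in adjusted]
    return adjusted, significant

# Four activities give 4 * 3 / 2 = 6 pairwise comparisons.
pairs = list(combinations([1, 2, 3, 4], 2))
raw = [0.004, 0.030, 0.200, 0.001, 0.008, 0.700]  # hypothetical raw p-values
adj, sig = bonferroni(raw)
for (a, b), p, s in zip(pairs, adj, sig):
    print(f"activities {a} vs {b}: adjusted p = {p:.3f}, significant = {s}")
```

The correction is conservative: with six comparisons, a pair must reach a raw p-value below 0.05/6 ≈ 0.0083 to remain significant.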

For student assessments of the activities, activity 1 has significantly different ratings from the rest; activities 2 and 3 were rated similarly, and activities 3 and 4 also show no significant difference between their means. The first activity presents the highest assessment ratings, and this difference is statistically significant. Although there are no differences between activities 1 and 4 regarding the final scores obtained, students express a preference for, or higher satisfaction with, passive learning through videos and podcasts (activity 1) compared to the other learning activities oriented to the "acquisition" type of learning.
The fourth activity presents the highest mean score but does not have the highest mean assessment. This activity maintains a classic dynamic, where the material is provided to the students and they do not have to face any methodology other than the usual one. The main difference from a classical face-to-face methodology is that the work is asynchronous and autonomous. Comparing the first and fourth activities, we observe that students prefer working with videos and podcasts to studying with PowerPoint slides and class notes, but there seem to be no differences in learning if we look at the scores.
Since the second and third activities do not show significant differences in either scores or assessment, these two activities behave very similarly. There seem to be no important differences between reading papers and discussing them and visiting websites to find information and answer questions about the topic. In these two activities, students need extra effort to extract relevant information from the sources. Both the score and assessment means are lower than those for the first and fourth activities. Table 11 focuses on the evaluation of the e-learning activities. For each one, students state whether they agree, disagree or are neutral with respect to five items of interest.

Table 11. Observed answers (D = Disagree, N = Neutral, A = Agree) for items focusing on the quality of the e-learning activities. Data in %, except the global score (assessment), given as an absolute number out of a maximum of 10.

Students were generally positive about the value of the e-learning activity. They suggest that the provided materials were adequate for the unit of study on biogas.
We can see that the first activity has the highest percentages of agreement in all items. Items 3 and 4 have a very high percentage of agreement in the first activity (94.1%). It indicates students are very satisfied with the workload and feel they learn better using videos and podcasts than with a face-to-face situation.
The highest neutral percentage is for the second activity, reaching 35.3% of neutral responses for item 3. Although students do not agree, neither do they disagree that this activity helps them to learn better than the face-to-face situation. Students think that reading papers (at least the ones they read) and discussing them makes learning neither better nor worse.
The highest percentage of disagreement is for activity 3 and the third item. Activity 3 consists of visiting websites and answering some questions, and item 3 states that the activity helps students to learn better. Looking at the values in Table 11, it appears students feel that visiting websites does not help them to learn better.
In Figure 4 we show the percentages of agreement for all activities and all items in a radial chart. The percentage of agreement responses is above 50% for all items and all activities. There is a very high percentage of agree responses for items 3 and 4 in the first activity and, for this activity, overall agreement is therefore high as well. The assessment mean for this activity is 7.53, as we can see in Table 3. On the other hand, the lowest agreement percentages are for activity 2, whose assessment mean is also the lowest: 5.29 (Table 3). This suggests that agreement on the items correlates positively with the satisfaction assessment. Andersson [10] remarked that some important factors can be considered major challenges for e-learning: (1) support and guidance for students (which can be connected to the type and design of the e-learning activity and may explain the differences found between them); (2) flexibility: e-learning can be performed "anytime, anywhere"; (3) access: access to the technology and the quality of the connectivity are factors that affect success or failure in e-learning activities [25]; (4) academic confidence: this refers to students' previous academic experience and qualifications [26]; (5) personal attitudes to e-learning.


Exploring the Relationship between Scores and Assessment
It can be observed that the score and assessment variables have a positive correlation: high scores relate to high values in the students' opinion. This can be analyzed by studying the correlation: the value of Spearman's rho coefficient was 0.528, a significant value, which indicates a positive relationship of moderate intensity (possible values range between −1 and 1). A Spearman coefficient near the extremes (−1 or 1) indicates a stronger correlation, and a value near 0 indicates a weaker connection.
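Spearman's rho is simply the Pearson correlation computed on rank-transformed data (using mid-ranks for ties). A self-contained sketch, with invented score/assessment pairs rather than the study's data:

```python
def ranks(values):
    """Assign 1-based mid-ranks (tied values get the average of their ranks)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mid = (i + j) / 2 + 1  # average of the 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = mid
        i = j + 1
    return r

def spearman_rho(x, y):
    """Pearson correlation of the rank-transformed data."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Perfectly monotone hypothetical (score, assessment) pairs -> rho = 1.
print(round(spearman_rho([5.0, 6.5, 7.0, 9.0], [5.3, 6.0, 7.5, 8.1]), 6))  # 1.0
```

Because it works on ranks, rho captures any monotone relationship, not only a linear one, which makes it suitable for bounded 0-10 scales like these.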
To analyze in depth the relationship between scores and assessment, we split the observed values into two groups: scores and assessments below 7 are considered "low", and those of 7 and above are considered "high". By doing this, the variables are reduced to ordinal ones and the degree of dependence they present can be better observed. Table 12 provides a double-entry contingency table that reflects this information.

Table 12. Multivariate frequency distribution of the scores and assessment classified by levels (low/high).

From the data in Table 12, Pearson's chi-squared test of independence was performed; the results are summarized in Table 13. This test assesses whether two variables are independent of each other. The result of this contrast is that the two (ordinally treated) variables are not independent, but related to each other.

Table 14 shows some dependency measures. The gamma value of 0.742, a symmetrical measure of association suitable for ordinal variables, is significant. It indicates a moderately strong positive association between the two variables (scores and assessment). Its directional version, Somers' D (D = 0.428), indicates that the relationship, while consistent, was not entirely so: although to a lesser extent, there was a percentage of cases in which low values of one variable are not related to low values of the other, and high values of one are not related to high values of the other (the so-called tied cases).
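The chi-squared test of independence applied to Table 12 compares observed cell counts with those expected under independence. The 2 × 2 counts below are invented for illustration, not the study's frequencies:

```python
def chi_squared(table):
    """Pearson's chi-squared statistic for an r x c contingency table.

    The expected count for cell (i, j) under independence is
    row_total[i] * col_total[j] / grand_total.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Rows: score low/high; columns: assessment low/high (hypothetical counts).
print(chi_squared([[30, 10], [10, 30]]))  # 20.0
```

For a 2 × 2 table the statistic has (2 − 1)(2 − 1) = 1 degree of freedom, so a value of 20.0 would be far beyond the 0.05 critical value of about 3.84.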
From the value of lambda, an asymmetrical measure of association suitable for use with nominal variables, the relation between the variables is not significant (sig. = 0.148). This indicates that knowledge of one of the variables does not significantly reduce uncertainty about the other and is therefore not helpful in predicting the value of the other, either in general (symmetrically) or in either direction (predicting score knowing assessment, or vice versa).
Finally, Goodman and Kruskal's tau shows the strength of association when both variables are measured on an ordinal scale. The value of 0.171 was significant, but although association is not absent, a value this close to zero indicates, like the previous measure, that prediction errors are not substantially reduced by using the information of one variable to predict the other. In that sense, the relationship between the variables is not useful for prediction.
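Goodman-Kruskal's gamma reported in Table 14 is computed from concordant and discordant pairs in the ordered contingency table; for a 2 × 2 table [[a, b], [c, d]] it reduces to Yule's Q, (ad − bc)/(ad + bc). A sketch with hypothetical counts:

```python
def gamma(table):
    """Goodman-Kruskal gamma for an ordered r x c contingency table:
    (concordant - discordant) / (concordant + discordant) pairs,
    where a pair is concordant if both variables move in the same direction."""
    r = len(table)
    c = len(table[0])
    concordant = discordant = 0
    for i in range(r):
        for j in range(c):
            for i2 in range(i + 1, r):       # second observation in a lower row
                for j2 in range(c):
                    if j2 > j:
                        concordant += table[i][j] * table[i2][j2]
                    elif j2 < j:
                        discordant += table[i][j] * table[i2][j2]
    return (concordant - discordant) / (concordant + discordant)

# Rows: score low/high; columns: assessment low/high (hypothetical counts).
print(gamma([[30, 10], [10, 30]]))  # 0.8
```

Unlike lambda and tau, gamma ignores tied pairs entirely, which is why it tends to report a stronger association than its companions on the same table.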
The last analytical method used to explore the correlation between score and assessment was binary logistic regression and the value of the odds ratio (OR), with the variables redefined as categorical (low/high). In this case, the OR value is 6.75; that is, the odds of obtaining a high score instead of a low one are 6.75 times higher for students who express a high appreciation of the course than for those who express a low rating. Previous research works [27-29] also found positive correlations between valuation of the e-learning environment and learning outcomes.
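For a single binary predictor, the odds ratio estimated by binary logistic regression coincides with the cross-product ratio of the 2 × 2 table. The counts below are hypothetical and do not reproduce the reported OR of 6.75:

```python
import math

def odds_ratio(table):
    """Cross-product odds ratio for a 2x2 table [[a, b], [c, d]], where
    rows are the predictor levels (high/low assessment) and columns the
    outcome (high/low score). For one binary predictor, this equals
    exp(beta) for the predictor's coefficient in a logistic regression."""
    (a, b), (c, d) = table
    return (a * d) / (b * c)

# Hypothetical counts: 27 of 36 high-assessment students scored high,
# while only 10 of 40 low-assessment students did.
or_value = odds_ratio([[27, 9], [10, 30]])
print(round(or_value, 2))            # 9.0
print(round(math.log(or_value), 4))  # the corresponding logistic coefficient
```

The log of the odds ratio is the slope the logistic model would fit, which is why ORs greater than 1 correspond to positive coefficients.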

Conclusions
Exploratory research such as the e-learning activities described in this study is necessary to understand how to better use online activities to complement face-to-face learning. From the results of the analysis of variance, there were significant differences in both the average scores and the average assessments of the different e-learning activities.
We found better results, in both scores and satisfaction assessments, when students performed the most "passive" e-learning activities (watching videos and listening to podcasts).
The quantitative study shows a positive and strong correlation between assessment and scores: students with a high satisfaction assessment are much more likely to reach a high exam score.
We think it is necessary to study a wider range of e-learning activities for sustainable energy courses in order to find a balance between better performance and a high student satisfaction level.
However, real progress in designing new learning styles based on quality learning materials can be evaluated only as a long-term effect.