2. Materials and Methods
2.1. Objectives
This research evaluates the t-MOOC designed to train non-university teachers in the acquisition of DTC according to the DigCompEdu competence framework (O1). The research is part of a larger project called “Design, Production and Evaluation of t-MOOC for Acquisition by Teachers of Digital Teaching Competences” (DIPROMOOC), one of whose objectives is to create and evaluate a training environment, built under the t-MOOC architecture, for training non-university teachers in the acquisition of DTC. It also aims to determine whether the experts’ assessments are influenced by the highest degree they hold and by their workplace.
2.2. Context
The evaluation of a t-MOOC produced for the development of Digital Teaching Competence under the DigCompEdu Framework is presented. Moodle is the platform (LMS) that hosts the t-MOOC; it supports enrollment and massive use of any online training, including the t-MOOC (task-based) format. After authenticating and accessing the environment, teachers are presented with two introductory animations: one explaining how to work within the environment, and another presenting the DigCompEdu model and the competences that comprise it in a general way. After watching the two introductory video clips, the teacher reaches the competence areas. It is important to note that, in each competence area, the teacher completes a self-assessment that indicates their competence level: initial, intermediate or advanced. This allows them to follow a training itinerary adapted to their needs.
Each competence starts with an introductory video describing it. After viewing the video, the teacher works through the contents of the t-MOOC and finishes by carrying out the different tasks. Specifically, between four and six activities are offered per competence and level, of which the teacher must select two.
The tasks are presented through a guide that incorporates different aspects, such as their identification, recommendations for their completion, a checklist for the teacher to verify the quality of the deliverable, and an evaluation rubric used by the t-MOOC tutors.
It should be noted that the proposed tasks are of different types: creation of concept maps, participation in forums, blog construction, creation of personal learning environments (PLEs) with certain tools, organization of activities for students and colleagues, and creation of learning communities.
Regarding the resources used in the learning modules, they were the following: didactic animations, Polimedia recordings [31], video clips, infographics, web links and complementary documents (PDF).
Different types of forums have also been used: a forum for general questions about the operation of the t-MOOC, a forum for questions in each competence area, and specific forums for the activities.
In short, the t-MOOC has:
66 learning modules (3 for each DigCompEdu competency: initial, intermediate and advanced);
230 tasks distributed in the learning modules;
1 animation with the instructions for navigation and use of the t-MOOC;
1 general animation (DigCompEdu);
6 animations, one specific to each DigCompEdu competence area;
22 animations, one specific to each DigCompEdu competence;
16 animations integrated into the different learning modules;
24 infographics integrated into the different learning modules;
11 Polimedia recordings integrated into the different learning modules.
Finally, a wide range of programs was used to produce the t-MOOC. Specifically: eXeLearning for the learning modules, VYOND for the didactic animations, Genially for the infographics, Photoshop for the graphic design, Adobe Premiere for video editing and Audacity for audio equalization.
2.3. Expert Judgment
For its evaluation, the expert judgment technique is used, as it is often the best available resource for carrying out technology assessment [32]. As [33] points out, it “basically consists in asking a series of people to request a judgment about an object, an instrument, a teaching material, or their opinion regarding a specific aspect”.
This strategy is becoming popular in educational research and evaluation and has been used to address different educational problems, from questionnaire validation to the evaluation of technological resources [34,35,36,37,38]. In parallel, this strategy is associated with Delphi studies [39].
When it comes to its application, a number of problems arise, chief among them the selection of experts and the verification of their expertise (selection criteria). To address these problems, one of the strategies used for the selection of experts is the so-called Expert Competence Coefficient (CCE) [40,41,42,43,44]. Recently, [45] carried out a review of research in which the CCE has been used for the selection of experts.
In this study, two mechanisms are established for the identification of experts. First, the selection is made on the basis of fulfilling two or more of the following criteria:
Teaching university courses in “Educational Technology”, “New Technologies Applied to Education”, “Information and Communication Technologies Applied to Education” or similar subjects.
Having experience in the field of ICT teacher training.
Having published articles on e-learning, virtual training, b-learning or MOOCs in the last five years.
Belonging to different Spanish and/or Latin American universities.
Another problem associated with expert judgment is the number of experts required. Proposals range between 15 and 20 [46,47,48]. As noted, the number is determined by different factors: having experts with diverse perspectives on the subject analyzed, minimizing the loss of subjects if several rounds are planned, the volume of work that can be analyzed, the ease of access to information, and the speed with which preliminary results must be provided [40]. In this case, since there is no problem working with a large database and only one round is conducted, the decision is made to work with as many experts as possible.
2.4. Procedure
According to the criteria initially established, 364 invitation emails were sent. Of these, after the two weeks during which the questionnaire remained open, 241 responses were received.
To refine the selection of the final experts, the CCE [39,40,41,42,43] is applied. This index is obtained from the experts’ self-perception of their level of knowledge of the subject analyzed.
To obtain it, the formula K = ½(Kc + Ka) is used, where Kc is the “knowledge coefficient”, obtained from the score given directly by the expert in response to the following question:
“Mark the box that corresponds to the degree of knowledge you have about topics such as the following: teacher training in ICT, digital skills, digital literacy ... Rate yourself on a scale of 0 to 10 (considering 0 as having absolutely no knowledge and 10 as full knowledge of the state of the art)”.
Ka is the “argumentation coefficient”, obtained by adding the values of the options marked by the expert in the table accompanying the following question:
“Assess the degree of influence that each of the sources presented below has had on your knowledge and criteria on the subject of teacher training in ICT, digital skills, digital literacy …”. The indicators and associated values of Ka are presented in Table 1.
The values used to determine the expert’s competence level are:
0.8 ≤ K ≤ 1.0: high competence coefficient;
0.5 ≤ K < 0.8: medium competence coefficient;
K < 0.5: low competence coefficient.
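As an illustration of the computation, the following minimal sketch shows how K could be obtained and an expert classified. It assumes, as is usual in CCE studies, that the 0–10 self-rating is rescaled to the 0–1 interval for Kc and that Ka is the sum of the source values marked in Table 1; the function name and the numeric values are hypothetical.

```python
def expert_competence_coefficient(self_rating, source_values):
    """K = 1/2 (Kc + Ka), following the formula given above.

    self_rating:   the 0-10 self-assessed knowledge score (Kc is assumed
                   to be this rating rescaled to the 0-1 interval).
    source_values: the influence values marked by the expert in the table
                   of sources (Table 1); Ka is their sum.
    """
    kc = self_rating / 10.0
    ka = sum(source_values)
    k = 0.5 * (kc + ka)
    if k >= 0.8:
        level = "high"
    elif k >= 0.5:
        level = "medium"
    else:
        level = "low"
    return k, level

# Hypothetical expert: self-rating 9/10, Table 1 values summing to 0.9
k, level = expert_competence_coefficient(9, [0.3, 0.5, 0.05, 0.05])
print(f"K = {k:.2f} ({level})")  # K = 0.90 (high): meets the 0.9 cut-off used below
```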
Of the 241 experts who initially answered the questionnaire, and in order to further refine the selection process, only those with a K value of 0.9 or higher are retained. This yields 191 experts, representing 79.25% of the total responses obtained.
Of the experts, 56% are women and 44% are men. Most are between 50 and 54 years old (38.2%) or between 40 and 49 years old (29.3%). All of them work as university professors in Spanish (62%) or Latin American (38%) universities, where most teaching is conducted in Spanish.
It should also be noted that the vast majority of the experts who completed the questionnaire are curious about new applications, programs and digital resources: 31.8% agreed and 52.1% strongly agreed with this statement, together more than 80% of the distribution.
Regarding social network use, more than half of the respondents (52.3%) are regular users of more than three social networks.
Finally, the vast majority of the identified experts (89.6%) indicate that they have experience in teaching, publishing and research on ICT, digital literacy and the digital competence of teachers.
2.5. Instrument
The instrument contains two large sections: the first collects information on the expert’s characteristics (degree, professional activity, place of work...) and incorporates the questions needed to compute the CCE; in the second, the expert is asked to evaluate the t-MOOC. For this purpose, an adaptation of the questionnaire developed by [49], previously used to evaluate the design of other technologies, is employed. At the end, an open question is included to obtain specific proposals for modification and improvement.
The questionnaire is administered via the internet using the Google Forms tool: https://cutt.ly/PzZsfCV (accessed on 3 November 2021). It should also be noted that the questionnaire incorporates a video clip explaining the operation of the t-MOOC.
Data collection took place between November and December 2020.
The instrument uses a Likert-type scale with six response options: 1. MN = Very negative/Strongly disagree/Very difficult; 2. N = Negative/Disagree/Difficult; 3. R− = Somewhat negative/Moderately disagree/Moderately difficult; 4. R+ = Somewhat positive/Moderately agree/Moderately easy; 5. P = Positive/Agree/Easy; and 6. MP = Very positive/Strongly agree/Very easy. The dimensions analyzed are: technical and aesthetic aspects, ease of use, diversity of resources and activities, and quality of content.
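As a minimal sketch of how the responses could be coded for analysis, the mapping below reproduces the six response options described above; the variable name is illustrative, and this is not the processing code actually used in the study.

```python
# Six-point Likert coding used by the instrument (labels as described above)
LIKERT_SCALE = {
    1: "MN: Very negative / Strongly disagree / Very difficult",
    2: "N:  Negative / Disagree / Difficult",
    3: "R-: Somewhat negative / Moderately disagree / Moderately difficult",
    4: "R+: Somewhat positive / Moderately agree / Moderately easy",
    5: "P:  Positive / Agree / Easy",
    6: "MP: Very positive / Strongly agree / Very easy",
}
```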
3. Results
Initially, the mean values and standard deviations obtained for the four large dimensions that constitute the information collection instrument are presented, together with the global assessment of the t-MOOC (Table 2).
The average scores achieved indicate that, both for each dimension and globally, the experts valued the t-MOOC very positively. At the same time, the low standard deviation values show the agreement among the answers offered by the experts.
Next, the scores achieved for the items that make up each dimension are presented. Table 3 shows the mean values and standard deviations obtained for the dimension “technical and aesthetic aspects”.
Regarding the technical and aesthetic aspects, the expert evaluations point to two fundamental conclusions: on the one hand, the operation of the different elements is correct and adequate; on the other, from the aesthetic point of view the material produced is valued positively and can be considered attractive. In none of the questions is a score lower than 5.20 observed, which indicates a high assessment within the scale offered.
The next dimension refers to the “ease of use” of the environment created. Table 4 shows the mean values and standard deviations achieved for each item.
The first thing to note is the high scores achieved across the items, although this dimension contains the item with the lowest average rating in the questionnaire: “Using the produced t-MOOC was fun”, the only item that fell below an average score of 5. In contrast, the environment was valued very positively on the items referring to ease of use (“How would you rate the ease of use and handling of the t-MOOC that we have presented to you?” and “How would you rate the ease of understanding of the technical operation of the t-MOOC that we have presented to you?”). Close to these, the item “How would you assess the accessibility/usability of the t-MOOC that we have presented to you?” received a score of 5.25.
Regarding the dimension “diversity of resources and activities”, the average scores achieved are presented in Table 5.
Again, in all cases the average scores exceed 5. Standing out in this dimension are the items “The materials, readings, animations, videos ... offered in the t-MOOC are clear and adequate” and “There are different modalities and types of activities: reinforcement, support, extension … presented in the t-MOOC”. The items “The diversity of resources used in the t-MOOC facilitates the understanding of the contents” and “The activities offered in the t-MOOC are attractive and innovative” obtained the same score.
The last dimension of the questionnaire refers to the “quality of the contents”. Table 6 presents the means achieved.
It should be noted that this dimension contains the three items with the highest ratings of the entire questionnaire, all scoring 5.41 or higher. It is also where the standard deviations are lowest, indicating the consistency of the ratings offered by the experts who evaluated the t-MOOC.
By way of synthesis, among the ten items with the highest scores are the following:
The contents presented in the t-MOOC are adapted to the competences to be developed.
The contents of the t-MOOC, as well as its structure are clear and adequate.
The contents of the t-MOOC are easy to understand.
In general, the technical performance of the t-MOOC is rated as high.
This study also seeks to determine whether the expert’s highest degree influences the assessment made. Specifically, the following hypotheses are formulated:
Hypothesis 0 (H0, null hypothesis). There are no statistically significant differences in the experts’ evaluations of the t-MOOC based on their degree.
Hypothesis 1 (H1, alternative hypothesis). There are statistically significant differences in the experts’ evaluations of the t-MOOC based on their degree.
For this, the non-parametric Kruskal–Wallis test is applied, which makes it possible to determine whether there are statistically significant differences between N independent samples [50]. The results are presented in Table 7, broken down by the large dimensions that make up the questionnaire, as well as by the global score achieved.
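As a methodological sketch, this test can be reproduced with standard statistical software. The following minimal example uses SciPy’s kruskal function on hypothetical score vectors grouped by degree; the group labels and values are illustrative, not the study data.

```python
from scipy.stats import kruskal

# Hypothetical global t-MOOC scores grouped by the experts' highest degree
scores_bachelor  = [5.1, 5.3, 4.9, 5.5, 5.2]
scores_master    = [5.4, 5.2, 5.6, 5.0, 5.3]
scores_doctorate = [5.5, 5.3, 5.7, 5.2, 5.6]

# Kruskal-Wallis H test across the N independent samples
h_stat, p_value = kruskal(scores_bachelor, scores_master, scores_doctorate)
print(f"H = {h_stat:.3f}, p = {p_value:.3f}")
# H0 is rejected when p <= 0.05 (the significance level used in the study)
```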
The results only allow rejecting H0 for the dimension “quality of the contents”, at a significance level of p ≤ 0.05. For the remaining dimensions and the global score, it can be concluded that there are no significant differences in the assessments made by the experts based on their degree.
In parallel, the aim is to determine whether there are statistically significant differences between the scores of experts who do or do not work in a training-related company. For this, the following hypotheses are formulated:
H0 (null hypothesis): there are no statistically significant differences in the evaluations made of the t-MOOC by the experts based on whether or not they work in a training-related company.
H1 (alternative hypothesis): there are statistically significant differences in the evaluations made of the t-MOOC by the experts based on whether or not they work in a training-related company.
For this, the Mann–Whitney U test is applied, which makes it possible to determine whether there are statistically significant differences between two independent samples [50]. The results are presented in Table 8.
As can be seen, there are statistically significant differences (p ≤ 0.05) between the scores given by the experts who work in a training-related company and those who do not. To determine which group gives the higher scores, a mean rank analysis is carried out. According to the values obtained, the experts who work in a training-related company rate all evaluated aspects of the t-MOOC more positively: technical and aesthetic aspects, ease of use, diversity of resources and activities, quality of the contents and the t-MOOC in general.
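A comparable sketch for this two-group comparison uses SciPy’s mannwhitneyu together with a mean-rank computation of the kind described above; the score vectors are again hypothetical, not the study data.

```python
import numpy as np
from scipy.stats import mannwhitneyu, rankdata

# Hypothetical global t-MOOC scores, split by whether the expert works
# in a training-related company
company    = np.array([5.6, 5.8, 5.5, 5.7, 5.9])
no_company = np.array([5.2, 5.4, 5.1, 5.3, 5.5])

# Two-sided Mann-Whitney U test between the two independent samples
u_stat, p_value = mannwhitneyu(company, no_company, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")

# Mean ranks, as in the mean rank analysis described above
ranks = rankdata(np.concatenate([company, no_company]))
print("mean rank (company):   ", ranks[:len(company)].mean())
print("mean rank (no company):", ranks[len(company):].mean())
```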
4. Discussion and Conclusions
The research findings point in different directions: some refer to the procedure followed for the evaluation and the selection of the evaluators; others concern the validation of the designed and produced t-MOOC.
Regarding the first, the work presented corroborates the soundness of the process followed for the selection of the experts, which consisted of two phases: a prior selection based on the experts’ biographical and curricular data, and a second based on the Expert Competence Coefficient. The first establishes a general initial selection, carried out by the researchers themselves. The second, more focused on the object to be evaluated, incorporates the experts’ self-evaluation of their competence for the task.
The Expert Competence Coefficient [33,38,45] proves useful in discriminating the expertise of those in charge of evaluating the products developed within research projects. However, regardless of the performance of this instrument, an initial filter such as the one applied in the present investigation is still necessary; that is, a prior selection by the research team, refined by the self-assessment of the experts consulted.
The effectiveness of the procedure is also supported by the relevance of the experts’ evaluations of the t-MOOC, which made it possible to considerably improve some of its aspects. In this sense, the final version of the t-MOOC includes a less linear structure, in which the competence areas are clearly differentiated by means of multimedia elements. Likewise, many of the tasks that, in the experts’ opinion, needed improvement have been modified. Finally, the presentation of the content has been improved by including more hyperlinks and complementary material.
The results also support a way of designing the t-MOOC characterized by the use of different resources for presenting information (videos, animations, infographics, links to websites...) and by the completion of activities or tasks in each module by the student following the MOOC in order to progress to the next levels. This form of design points to the need to devise specific designs for the materials used in online training, rather than a mere digital translation of printed resources [50,51,52,53], and to incorporate tasks to be carried out by the students [54,55,56].
In addition, the results indicate the effectiveness of the way the tasks were presented to the student, which included the objectives pursued, a rubric to guide the student on the quality of the expected production, and a recommended sequence for carrying out the tasks.
Finally, it is concluded that this tool makes it possible to train non-university teachers in digital skills within the DigCompEdu Framework. Although the study has limitations, which may relate to the number of judges who participated and the diversity of experience they present, the results show that the t-MOOC facilitates the approach to the non-university teacher training plan. Subsequently, the pilot experience will allow institutions to refine the guidelines for establishing teacher training plans in DTC. The above suggests different future lines of research, such as replicating the study over two or three rounds; this would require fewer experts but a prior commitment from them to participate in the research over a longer period of time. Another possible line of research is to carry out the study in other contexts, such as the university.