Article

Design and Validation of t-MOOC for the Development of the Digital Competence of Non-University Teachers

by Julio Cabero-Almenara, Raquel Barragán-Sánchez *, Antonio Palacios-Rodríguez and Lorena Martín-Párraga
Department of Didactic and Educational Organization, University of Seville, 41013 Seville, Spain
* Author to whom correspondence should be addressed.
Technologies 2021, 9(4), 84; https://doi.org/10.3390/technologies9040084
Submission received: 5 October 2021 / Revised: 1 November 2021 / Accepted: 4 November 2021 / Published: 8 November 2021

Abstract:
MOOCs are configured as one of the technologies that have been gaining ground in the educational field as a new approach to virtual education. In recent years, their presence in educational institutions has increased, and the volume of research and publications around these technological developments keeps growing. In this sense, this research focuses on the design and validation of the structure, content and tasks of a t-MOOC for the development of the Digital Competence of non-university teachers based on the DigCompEdu Framework of the European Union. For this, a Delphi-type validation design is established using an expert coefficient, with the participation of 191 experts. The results demonstrate the validity of the training proposal, as well as the uniformity of the experts' criteria. On this basis, the application and benefits of t-MOOCs as tools for competence development are discussed.

1. Introduction

1.1. Digital Competence

Currently, we are immersed in a new technological era as a result of the accelerated changes that today's society is undergoing, in which information opens up new paths driven by advances in the so-called information and communication technologies (ICT).
In this versatile and complex context, new ways of relating and communicating with others emerge and, with them, new trends and frameworks that guide the development of societies that are increasingly demanding and competitive [1].
This social impact generates new collaborative and communicative spaces that bring improvements not only to society at large but also to the educational field [2]. Even so, immersion in this technological trend does not ensure equal opportunities of access and use; it can cause social inequalities [3] and, as indicated in [4], generate a digital skills gap.
In response to these realities, in 2013 the European Commission emphasized the need to "rethink education" as the way to achieve quality education in current contexts of transformation, taking advantage of and effectively integrating ICT in teaching.

1.2. Digital Teaching Competence

The presence of information and communication technologies (ICT) in university education means that new terms are being coined, such as digital teaching competence (DTC). This term refers to the training of teachers for the use of ICT in dimensions broader than their mere instrumental handling [5,6,7].
As pointed out in [8], DTC is multidimensional: it implies that the person is able to mobilize the abilities and skills that allow searching for, critically selecting, obtaining and processing relevant information using ICT.
For the acquisition of DTC, different competence frameworks have been proposed at the institutional level, such as the "International Society for Technology in Education" standards for teachers [9], the UNESCO ICT competence framework for teachers and those of the Ministries of Education of Colombia and Chile, all of which have been analyzed in different works [10,11,12,13,14]. In our context, the framework proposed by the European Union, the "European Framework for the Digital Competence of Educators" (DigCompEdu) [5,15], is gaining significance. This framework is structured around six areas of competence: professional engagement, digital resources, digital pedagogy, evaluation and feedback, empowering students and facilitating students' digital competence.

1.3. t-MOOC as Training Technology

The strong presence of ICT in the current education system continues to grow. It brings with it different types of technologies and requires competencies that go beyond the full mastery of teaching content and methodologies. This inescapable and incessant need to train teachers digitally calls for new training actions.
Conditioned by these demands, MOOC technology emerges as a new approach to distance education. MOOCs arise to meet learning needs in an open, participatory and distributed way [16]. Some research has shown that they have strong potential as educational tools in general and as resources for learning, and that they are very useful for the lifelong training of people [17,18,19,20]. As suggested in [21], research on MOOCs has focused mainly on four lines: (a) the potential and challenges of MOOCs for universities; (b) MOOC platforms; (c) students and content in MOOCs; and (d) the quality of MOOCs and instructional design problems.
Among the different types of MOOCs is the model selected for this study, the so-called t-MOOC. t-MOOCs are built around tasks of different types that students must perform; students must complete a minimum number of them to continue advancing in the course and to demonstrate that they master the skills developed in said t-MOOC [22,23,24,25]. In parallel, the authors of [16] affirm that this type of MOOC is based on instructivism and constructivism.
One of the advantages of this type of MOOC is that the student must participate actively in the educational process. On the other hand, as suggested in [26], one of the success variables of MOOCs is evaluation, which becomes critical in this MOOC format, since the subject must pass it to progress in the training action. Finally, as different authors point out, MOOCs are an excellent strategy for training teachers in digital skills [27,28,29,30].

2. Materials and Methods

2.1. Objectives

This research pursues the evaluation of the t-MOOC designed for the training of non-university teachers in the acquisition of DTC according to the DigCompEdu competence framework (O1). This research is part of a larger project called "Design, Production and Evaluation of t-MOOC for Acquisition by Teachers of Digital Teaching Competences" (DIPROMOOC). One of its objectives is to create and evaluate a training environment under the t-MOOC architecture for the training of non-university teachers in the acquisition of DTC. At the same time, it also aims to find out whether the assessments made by the experts are determined by the highest degree they hold, as well as by their workplace.

2.2. Context

The evaluation of a t-MOOC produced for the development of Digital Teaching Competence under the DigCompEdu Framework is presented. Moodle is the LMS platform that hosts the t-MOOC; it allows the enrollment and massive use of any online training, including the t-MOOC (task-based) format. After authentication and access to the environment, teachers are presented with two introductory animations: one explaining how they should work within the environment, and another presenting, in a general way, the DigCompEdu model and the different competencies that comprise it. After watching the two introductory video clips, the teacher reaches the competence areas. It is important to note that, in each competence area, the teacher completes an assessment that indicates their competence level: initial, intermediate or advanced. This allows them to follow a training itinerary adapted to their needs.
Each competence starts with an introductory video describing it. After viewing it, the teacher works through the contents of the t-MOOC and finishes by performing the different tasks. Specifically, teachers are offered between four and six activities per competence and level, of which they must select two.
The tasks are presented through a guide that incorporates different aspects, such as their identification, recommendations for their completion, a checklist for the teacher to verify the quality of the delivery and an evaluation rubric that is used by the t-MOOC tutors.
It should be noted that the proposed tasks are of different types: creation of concept maps, participation in forums, blog construction, creation of personal learning environments (PLEs) with certain tools, organization of activities for students and colleagues, and creation of learning communities.
Regarding the resources used in the learning modules, the following have been employed: didactic animations, Polimedia recordings [31], video clips, infographics, web links and complementary documents (PDF).
Different types of forums have also been used: forums for general doubts about the operation of the t-MOOC, forums for doubts in each competence area and specific forums for the activities.
In short, the t-MOOC has:
  • 66 learning modules (3 for each DigCompEdu competency: initial, intermediate and advanced);
  • 230 tasks distributed in the learning modules;
  • 1 animation with the instructions for navigation and use of the t-MOOC;
  • 1 general animation (DigCompEdu);
  • 6 specific animations for each DigCompEdu competence area;
  • 22 animations specific to each DigCompEdu competence;
  • 16 animations integrated into the different learning modules;
  • 24 infographics integrated into the different learning modules;
  • 11 Polimedia recordings integrated into the different learning modules.
Finally, a great diversity of programs is used for the production of the t-MOOC. Specifically: ExeLearning for the learning modules, VYOND for the didactic animations, Genially for the infographics, Photoshop for the graphic design, Adobe Premiere for the video editing and Audacity for the audio equalization.

2.3. Expert Judgment

For the evaluation, the expert judgment technique is used, as it is often the best available resource for carrying out technology assessment [32]. As [33] point out, "it basically consists in asking a series of people to provide a judgment about an object, an instrument, a teaching material, or their opinion regarding a specific aspect".
This strategy is becoming popular in educational research and evaluation and has been used to address different educational problems, from questionnaire validation to the evaluation of technological resources [34,35,36,37,38]. In parallel, this strategy is associated with Delphi studies [39].
When it comes to its application, a number of problems arise, among them the selection of experts and the assessment of their expertise in different aspects (selection criteria). To address these problems, one of the strategies used for the selection of experts is the so-called Expert Competence Coefficient (CCE) [40,41,42,43,44]. Recently, [45] carried out a review of research where the CCE has been used for the selection of experts.
In this study, two mechanisms are established for the identification of experts. First, the selection is made taking into account the fulfillment of two or more of the following criteria:
  • Teaching at universities in subjects such as "Educational Technology", "New Technologies Applied to Education", "Information and Communication Technologies Applied to Education" or similar.
  • Having experience in the field of ICT teacher training.
  • Having published an article on e-learning, virtual training, b-learning or MOOCs in the last five years.
  • Being from different Spanish and/or Latin American universities.
Another problem associated with expert judgment concerns the number of experts required. Proposals range between 15 and 20 [46,47,48]. As noted, their number is determined by different aspects: having experts with different perspectives on the subject analyzed, minimizing the loss of subjects if several rounds are considered, the volume of work that can be analyzed, the ease of access to information and the speed with which preliminary results must be provided [40]. In this case, since there are no problems working with a large database and only one round is carried out, the decision is made to work with as many experts as possible.

2.4. Procedure

According to the criteria initially taken into account, 364 invitation emails are sent. Of these, after the two weeks during which the questionnaire remains open, 241 responses are received.
To fine-tune the selection of the final experts, the CCE [39,40,41,42,43] is applied. This index is obtained from the experts' self-perception of their level of knowledge of the subject analyzed.
To obtain it, the following formula is used: K = ½ (Kc + Ka), where Kc is the "knowledge coefficient", obtained from the score offered directly by the expert in the following question:
"Mark in the box that corresponds to the degree of knowledge you have about topics such as the following: teacher training in ICT, digital skills, digital literacy ... Rate yourself on a scale of 0 to 10 (considering 0 as having absolutely no knowledge and 10 full knowledge of the state of the art)".
Ka is the argumentation coefficient, which is reached by adding the options specified by the expert in the table that completes the following question:
“Assess the degree of influence that each of the sources that we present below has had on your knowledge and criteria on the subject of teacher training in ICT, digital skills, digital literacy …”. The indicators and related values of Ka are presented in Table 1.
The values used to determine the position of the expert are the following (a computational sketch is given after the list):
  • 0.8 ≤ K ≤ 1.0: high competence coefficient
  • 0.5 ≤ K < 0.8: medium competence coefficient
  • K < 0.5: low competence coefficient
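To make the computation concrete, here is a minimal sketch in Python (not the authors' code) of how K = ½ (Kc + Ka) can be obtained and an expert classified; the function name and the example values are ours for illustration:

```python
def competence_coefficient(self_rating, ka_sources):
    """Compute the Expert Competence Coefficient K = 1/2 (Kc + Ka).

    self_rating: the expert's 0-10 self-assessed knowledge (the Kc question).
    ka_sources: the six values the expert marks in Table 1.
    """
    kc = self_rating / 10.0   # knowledge coefficient, rescaled to [0, 1]
    ka = sum(ka_sources)      # argumentation coefficient
    k = 0.5 * (kc + ka)
    if k >= 0.8:
        level = "high"
    elif k >= 0.5:
        level = "medium"
    else:
        level = "low"
    return k, level


# An expert who rates themselves 9/10 and marks the HIGH column of Table 1:
# Ka = 0.30 + 0.50 + 0.05 + 0.05 + 0.05 + 0.05 = 1.00, so K = 0.95 ("high").
print(competence_coefficient(9, [0.30, 0.50, 0.05, 0.05, 0.05, 0.05]))
```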
The number of experts who initially answer the questionnaire is 241. To refine the selection process, only experts with a K value of 0.9 or higher are retained. This identifies 191 experts, representing 79.25% of the total responses obtained.
Of the experts, 56% are women and 44% are men. Most experts are between 50 and 54 years old (38.2%) or between 40 and 49 years old (29.3%). All of them work as university professors in Spanish (62%) or Latin American (38%) universities, where most of the teaching is done in Spanish.
It should also be noted that the vast majority of the teachers who complete the questionnaire are curious about new applications, programs and digital resources: 31.8% agree with this statement and 52.1% strongly agree, together more than 80% of the distribution.
Regarding the social networks of which the teachers are users, more than half of the respondents are regular users of more than three social networks (52.3%).
Finally, the vast majority of the identified experts (89.6%) indicate that they have experience in teaching, publishing and research on ICT issues and on teachers' digital literacy and competence.

2.5. Instrument

The instrument contains two large sections: the first collects information on some characteristics of the expert (degree, professional activity, workplace...) and incorporates the questions needed to compute the CCE; in the second, the expert is asked to evaluate the t-MOOC. For this, an adaptation of the questionnaire developed by [49], previously used for the evaluation of the design of other technologies, is carried out. At the end, an open question is included to obtain specific proposals for modification and improvement.
The questionnaire is administered via the internet using the Google Forms tool: https://cutt.ly/PzZsfCV (accessed on 3 November 2021). It should also be noted that the questionnaire incorporates a video clip in which the operation of the t-MOOC is explained.
Information collection took place between the months of November–December 2020.
The instrument uses a Likert-type scale with six response options: 1. MN = Very negative/Strongly disagree/Very difficult; 2. N = Negative/Disagree/Difficult; 3. R− = Regular negative/Moderately disagree/Moderately difficult; 4. R+ = Fairly positive/Moderately agree/Moderately easy; 5. P = Positive/Agree/Easy; and 6. MP = Very positive/Strongly agree/Very easy. The dimensions analyzed are technical aspects, ease of use, diversity of resources and activities, and quality of content.

3. Results

Initially, the mean values and the standard deviations reached in the four large dimensions that constitute the information collection instrument are presented, in addition to the global assessment of the t-MOOC (Table 2).
The average scores achieved allow us to indicate that, both in each dimension and globally, the experts have valued the t-MOOC very positively. At the same time, the low standard deviations show the homogeneity of the answers offered by the experts.
Next, the scores achieved in the different items that make up each of the dimensions are offered. Table 3 shows the mean values and the standard deviations obtained in the dimension “technical and aesthetic aspects”.
Regarding the technical and aesthetic aspects, the expert evaluations point to two fundamental conclusions: on the one hand, the operation of the different elements is correct and adequate; on the other, from the aesthetic point of view, the material produced is valued positively and can be considered attractive. No question scores lower than 5.20, which indicates a high assessment within the scale offered.
The next dimension refers to the “ease of use” of the user-created environment. Table 4 shows the mean values and the standard deviations achieved in each of the items.
The first thing to note is the high scores achieved in each of the items, although this dimension contains the item with the lowest average rating of the whole questionnaire, "Using the produced t-MOOC was fun", the only item that falls below a score of 5. In contrast, the environment is valued very positively, with higher scores in the items referring to ease of use ("How would you rate the ease of use and management of the t-MOOC that we have presented to you?" and "How would you rate the ease of understanding of the technical operation of the t-MOOC that we have presented to you?"). Close to these, the item "How would you rate the accessibility/usability of the t-MOOC that we have presented to you?" obtains 5.25.
Regarding the assessment made of the dimension “diversity of resources and activities”, the average scores achieved are presented in Table 5.
Again, in all cases the average scores exceed 5. The highest-rated items in this dimension are "The materials, readings, animations, videos ... offered in the t-MOOC are clear and adequate" and "There are different modalities and types of activities: reinforcement, support, extension ... presented in the t-MOOC". They are followed, with identical scores, by items such as "The diversity of resources used in the t-MOOC facilitates the understanding of the contents" and "The activities offered in the t-MOOC are attractive and innovative".
The last dimension of the questionnaire refers to the “quality of the contents”. In Table 6, the means achieved are presented.
It should be noted that this dimension contains the three items that achieve the highest ratings of the entire questionnaire, all scoring 5.41 or higher. At the same time, it is also where the standard deviations are lowest, suggesting the uniformity of the ratings offered by the experts.
As a synthesis, the highest-scoring items are the following:
  • The contents presented in the t-MOOC are adapted to the competences to be developed.
  • The contents of the t-MOOC, as well as its structure are clear and adequate.
  • The contents of the t-MOOC are easy to understand.
  • In general, the technical performance of the t-MOOC is rated as high.
This study also seeks to know whether the expert's highest qualification influences the assessment made. Specifically, the following hypotheses are formulated:
Hypothesis 0 (H0). There are no statistically significant differences in the evaluations made of the t-MOOC by the experts based on their degree.
Hypothesis 1 (H1). There are statistically significant differences in the evaluations made of the t-MOOC by the experts based on their degree.
For this, the non-parametric Kruskal-Wallis statistic is applied, which makes it possible to know whether there are statistically significant differences between N independent samples [50]. The results are presented in Table 7 for the large dimensions that make up the questionnaire, as well as for the global score.
The results do not allow rejecting H0 in any of the dimensions at a significance level of p ≤ 0.05. It can be concluded that there are no significant differences in the assessments made by the experts based on their degree.
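As an illustration of the procedure, the following hedged sketch applies SciPy's kruskal function to hypothetical score arrays; the group names and values are invented for the example and are not the study's data:

```python
from scipy.stats import kruskal

# Hypothetical global t-MOOC ratings grouped by the experts' highest degree.
bachelor = [5.2, 5.0, 5.4, 4.9, 5.3]
master = [5.3, 5.5, 5.1, 5.6, 5.2]
doctorate = [5.4, 5.2, 5.7, 5.3, 5.5]

# Kruskal-Wallis H test across the N independent samples.
h_stat, p_value = kruskal(bachelor, master, doctorate)
print(f"H = {h_stat:.3f}, p = {p_value:.3f}")
# H0 is rejected only when p <= 0.05; since every p-value in Table 7
# exceeds 0.05, H0 (no differences between degree groups) is retained.
```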
In parallel, the aim is to know whether there are statistically significant differences between the scores of the experts depending on whether or not they work in a training-related company. For this, the following hypotheses are formulated:
  • H0 (Null hypothesis): there are no statistically significant differences in the evaluations made of the t-MOOC by the experts based on whether or not they work in a training-related company.
  • H1 (Alternative hypothesis): there are statistically significant differences in the evaluations made of the t-MOOC by the experts depending on whether or not they work in a training-related company.
For this, the Mann–Whitney U statistic is applied, which allows us to know if there are statistically significant differences between two independent samples [50]. The results are presented in Table 8.
As can be seen, it can be affirmed at the 99% confidence level (p ≤ 0.01) that there are statistically significant differences between the scores given by the experts who work in a training-related company and those who do not. To find out which group gives the higher scores, an average rank analysis is carried out. According to the values obtained, the experts who work in a training-related company value all the evaluated aspects of the t-MOOC more positively: technical and aesthetic aspects, ease of use, diversity of resources and activities, quality of the contents and the t-MOOC in general.
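The comparison and the average-rank follow-up can be sketched in the same hedged way; the arrays below are hypothetical stand-ins for the two groups of experts (SciPy's mannwhitneyu and rankdata are standard calls, but this is not the authors' script):

```python
from scipy.stats import mannwhitneyu, rankdata

# Hypothetical ratings for the two groups of experts.
in_company = [5.6, 5.8, 5.5, 5.7, 5.9]    # work in a training-related company
not_company = [5.1, 5.3, 5.0, 5.2, 5.4]   # do not

# Mann-Whitney U test between the two independent samples.
u_stat, p_value = mannwhitneyu(in_company, not_company, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")

# Average-rank analysis: rank all scores jointly, then compare group means;
# the group with the higher mean rank rated the t-MOOC more positively.
ranks = rankdata(in_company + not_company)
print("mean rank (company):", ranks[:len(in_company)].mean())
print("mean rank (others):", ranks[len(in_company):].mean())
```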

4. Discussion and Conclusions

The research findings point in different directions: some refer to the procedure followed for the evaluation and for the selection of the evaluators; others concern the validation of the designed and produced t-MOOC.
Regarding the first, the work presented corroborates the soundness of the process followed for the selection of the experts, which consisted of two phases: a prior selection based on biographical and curricular data of the experts, and a second one based on the Expert Competence Coefficient. The first establishes a general initial selection, carried out by the researchers themselves; the second, more focused on the object to be evaluated, contemplates the experts' self-evaluation of their own competence for the task.
The Expert Competence Coefficient [33,38,45] proves useful for discriminating the expertise of those in charge of evaluating the products developed within investigations. However, regardless of this test, it is necessary to apply an initial filter, such as the one used in the present investigation; that is, a prior selection by the research team, refined by the self-assessment of the experts consulted.
The effectiveness of the procedure is also supported by the significance of the experts' evaluations of the t-MOOC, which made it possible to considerably improve some of its aspects. The final version of the t-MOOC includes a less linear structure, where the areas of competence are clearly differentiated by means of multimedia elements. Likewise, many of the tasks that, in the experts' opinion, needed improvement have been modified. Finally, the presentation of the contents has been improved by including more hyperlinks and complementary material.
The results also support a way of designing the t-MOOC characterized by the use of different resources for the presentation of information (videos, animations, infographics, links to websites...) and by the performance of activities or tasks in each module by the student who follows the MOOC in order to move on to the next levels. This form of design points to the need to devise specific designs for the materials used in online training, rather than a mere digital translation of printed resources [50,51,52,53], and to incorporate tasks to be carried out by the students [54,55,56].
In addition, the results indicate the effectiveness of the presentation of the tasks, which included the objectives pursued, a rubric to guide the student on the quality of the expected production and a recommended sequence for carrying out the tasks.
Finally, it can be concluded that this tool allows non-university teachers to be trained in digital competences within the DigCompEdu Framework. Although the study has limitations related to the number of judges who participated and the diversity of their experience, the results show that the t-MOOC facilitates the approach to the non-university teacher training plan. Subsequently, the pilot experience will allow institutions to improve and refine the guidelines for establishing DTC training plans for teachers. This suggests different future lines of research, such as replicating the study over two or three rounds, which would require fewer experts but a prior commitment from them to participate in the research over a longer period of time. Another possible line is to carry out the study in other contexts, such as the university.

Author Contributions

J.C.-A. designed the research and the structure of the paper. R.B.-S. and A.P.-R. validated the analysis, wrote the original draft, reviewed the draft and edited the article. L.M.-P. conducted data curation and visualization. All authors contributed equally to writing and reviewing this article. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Ministry of Science and Innovation of Spain grant number RTI2018-097214-B-C31.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available at https://cutt.ly/TTWM9aT (accessed on 3 October 2021).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Artacho, E.G.; Martínez, T.S.; Torres, J.M.T.; García, A.M. Competencia digital docente en educación de adultos: Un estudio en un contexto español. Píxel-Bit Rev. Medios Educ. 2021, 62, 209–234. [Google Scholar] [CrossRef]
  2. Corujo-Vélez, C.; Barragán-Sánchez, R.; Hervás-Gómez, C.; Palacios-Rodríguez, A. Teaching Innovation in the Development of Professional Practices: Use of the Collaborative Blog. Educ. Sci. 2021, 11, 390. [Google Scholar] [CrossRef]
  3. Williamson, B.; Eynon, R.; Potter, J. Pandemic policies, pedagogies and practices: Digital technologies and distance education during the coronavirus emergency. Taylor Fr. Online 2020, 45, 107–114. [Google Scholar] [CrossRef]
  4. Fuller, A. The Promises of ‘Going Digital’ for All: Where Does Technology Meet Technology and Social Policy? A Statistical Analysis of European Union’s Digital Skills Inclusion Policies and Digital Skills Inequality. Ph.D. Thesis, University of Oxford, Oxford, UK, 2020. [Google Scholar]
  5. Cabero-Almenara, J.; Palacios-Rodríguez, A. Marco Europeo de Competencia Digital Docente «DigCompEdu». Traducción y adaptación del cuestionario «DigCompEdu Check-In». EDMETIC 2020, 9, 213–234. [Google Scholar] [CrossRef] [Green Version]
  6. Barragán-Sánchez, R.; Corujo-Vélez, M.C.; Palacios-Rodríguez, A.; Román-Graván, P. Teaching Digital Competence and Eco-Responsible Use of Technologies: Development and Validation of a Scale. Sustainability 2020, 12, 7721. [Google Scholar] [CrossRef]
  7. Romero-Tena, R.; Llorente-Cejudo, C.; Puig-Gutiérrez, M.; Barragán-Sánchez, R. The Pandemic and Changes in the Self-Perception of Teacher Digital Competences of Infant Grade Students: A Cross Sectional Study. Int. J. Environ. Res. Public Health 2021, 18, 4756. [Google Scholar] [CrossRef] [PubMed]
  8. Flores, C.; Roig, R. Diseño y validación de una escala de autoevaluación de competencias digitales para estudiantes de pedagogía. Píxel-Bit Rev. Medios Educ. 2016, 12, 209–224. [Google Scholar] [CrossRef]
  9. ISTE. NETS-T for Teachers: National Educational Technology Standards for Teachers, 2nd ed.; ISTE: Washington, DC, USA, 2007. [Google Scholar]
  10. Gómez-Trigueros, I.M.; Yáñez de Aldecoa, C. The Digital Gender Gap in Teacher Education: The TPACK Framework for the 21st Century. Eur. J. Investig. Health Psychol. Educ. 2021, 11, 1333–1349. [Google Scholar] [CrossRef]
  11. Lázaro, J.L.; Usart-Rodríguez, M.; Gisbert, M. Assessing Teacher Digital Competence: The Construction of an Instrument for Measuring the Knowledge of Pre-Service Teachers. J. New Approaches Educ. Res. 2019, 8, 73–78. [Google Scholar] [CrossRef]
  12. Padilla, A.; Gámiz, V.; Romero, M.A. Validación del contenido de un guion de entrevista sobre la competencia digital docente en Educación Superior. Rev. Lbérica Sist. Tecnol. Inf. 2019, 32, 1–16. [Google Scholar] [CrossRef] [Green Version]
  13. Rodríguez-García, A.; Trujillo, J.M.; Sánchez, J. Impacto de la productividad científica sobre competencia digital de los futuros docentes: Aproximación bibliométrica en Scopus y Web of Science. Rev. Complut. Educ. 2019, 30, 623–646. [Google Scholar] [CrossRef] [Green Version]
  14. Silva, J.; Morales, M.J.; Lázaro, J.L.; Gisbert, M. La competencia digital docente en formación inicial: Estudio a partir de los casos de Chile y Uruguay. Arch. Analíticos Políticas Educ. 2019, 27, 26. [Google Scholar] [CrossRef] [Green Version]
  15. Comisión Europea. European Framework for the Digital Competence of Educators (DigCompEdu). 2017. Available online: https://ec.europa.eu/jrc/en/digcompedu (accessed on 7 September 2021).
  16. Khalid, A.; Lundqvist, K.; Yates, A. Recommender systems for MOOCs: A systematic literature survey. Int. Rev. Res. Open Distrib. Learn. 2020, 21, 255–291. [Google Scholar] [CrossRef]
  17. Bao, W. COVID-19 and online teaching in higher education: A case study of Peking University. Hum. Behav. Emerg. Technol. 2020, 2, 113–115. [Google Scholar] [CrossRef] [Green Version]
  18. Castaño, C.; Garay, U.; Maiz, I. Factores de éxito académico en la integración de los MOOC en el aula universitaria. Rev. Española Pedagog. 2017, 266, 65–82. Available online: https://bit.ly/3icPqoq (accessed on 7 September 2021). [CrossRef]
  19. Benet, A.; García, I.; Sanahuja, A.; Nieto, R. Nuevos horizontes formativos: Una experiencia del MOOC como recurso en la formación continua. Apertura 2018, 10, 88–103. [Google Scholar] [CrossRef] [Green Version]
  20. Palacios, F.; Huertas, C.; Gómez, M.E. MOOCs: Origins, Concept and Didactic Applications: A Systematic Review of the Literature (2012–2019). Technol. Knowl. Learn. 2020, 25, 853–879. [Google Scholar] [CrossRef]
  21. Zawacki-Richer, O.; Bozkurt, A.; Alturki, U.; Aldraiweesh, A. What Research Says About MOOCs–An Explorative Content Analysis. Int. Rev. Res. Open Distrib. Learn. 2018, 19, 242–259. [Google Scholar] [CrossRef]
  22. Herranen, J.K.; Aksela, M.K.; Kaul, M.; Lehto, S. Teachers’ Expectations and Perceptions of the Relevance of Professional Development MOOCs. Educ. Sci. 2021, 11, 240. [Google Scholar] [CrossRef]
  23. Yang, Q.; Lee, Y.-C. The Critical Factors of Student Performance in MOOCs for Sustainable Education: A Case of Chinese Universities. Sustainability 2021, 13, 8089. [Google Scholar] [CrossRef]
  24. Osuna-Acedo, S.; Marta-Lazo, C.; Frau-Meig, D. De sMOOC a tMOOC, el aprendizaje hacia la transferencia profesional: El proyecto europeo ECO. Comunicar 2018, 55, 105–114. [Google Scholar] [CrossRef] [Green Version]
  25. Jung, Y.; Lee, J. Learning Engagement and Persistence in Massive Open Online Courses (MOOCS). Comput. Educ. 2018, 122, 9–22. [Google Scholar] [CrossRef]
  26. Albelbisi, N.; Yusop, F.; Mohd, U. Mapping the Factors Influencing Success of Massive Open Online Courses (MOOC) in Higher Education. EURASIA J. Math. Sci. Technol. Educ. 2018, 14, 2995–3012. [Google Scholar] [CrossRef]
  27. Feklistova, L.; Lepp, M.; Luik, P. Learners’ Performance in a MOOC on Programming. Educ. Sci. 2021, 11, 521. [Google Scholar] [CrossRef]
  28. Fernández, E.; Ordóñez, E.; Morales, B.; López, J. La Competencia Digital en la Docencia Universitaria; Octaedro: Barcelona, Spain, 2019. [Google Scholar]
  29. Gordillo, A.; López-Pernas, S.; Barra, E. Effectiveness of MOOCs for teachers in safe ICT use training. Comunicar 2019, 61, 103–112. [Google Scholar] [CrossRef]
  30. Ju, Y.; So, H.; Hee, N. Examination of relationships among students’ self-determination, technology acceptance, satisfaction, and continuance intention to use K-MOOCs. Comput. Educ. 2018, 122, 260–272. [Google Scholar] [CrossRef]
  31. Cabero-Almenara, J. La Incorporación de las Producciones Polimedias a la Formación Universitaria; SAV de la Universidad de Sevilla: Sevilla, Spain, 2018. [Google Scholar]
  32. Flostrand, A.; Pitt, L.; Bridson, S. The Delphi technique in forecasting: A 42-year literature review. Technol. Forecast. Soc. Chang. 2020, 150, 119773. [Google Scholar] [CrossRef]
  33. Cabero-Almenara, J.; Llorente, M.C. La aplicación del juicio de experto como técnica de evaluación de las tecnologías de la información (TIC). Eduweb. Rev. Tecnol. Inf. Comun. Educ. 2013, 7, 11–22. [Google Scholar]
  34. Juste, R.P. La educación de calidad: Una responsabilidad compartida. Particip. Educ. 2006, 1, 27–34. Available online: https://bit.ly/3m0TU2o (accessed on 7 September 2021).
  35. Robles, P.; Rojas, M.D.C. La validación por juicio de expertos: Dos investigaciones cualitativas en Lingüística aplicada. Rev. Nebrija Lingüística Apl. 2015, 18, 124–139. [Google Scholar] [CrossRef]
  36. Galicia, L.; Balderrama, J.; Edel, R. Validez de contenido por juicio de expertos: Propuesta de una herramienta virtual. Apertura 2017, 9, 42–53. [Google Scholar] [CrossRef] [Green Version]
  37. Peraza, M.; Armenta, A.; Hernández, S. Juicio de expertos para la validación de un proyecto formativo. Rev. Electrónica Desafíos Educ. 2019, 5, 24–34. [Google Scholar]
  38. Cabero-Almenara, J.; Barroso-Osuna, J.; Rodríguez-Gallego, M.; Palacios-Rodríguez, A. La Competencia Digital Docente. El caso de las universidades andaluzas. Aula Abierta 2020, 49, 363–372. [Google Scholar] [CrossRef]
  39. López-Gómez, E. El método Delphi en la investigación actual en educación: Una revisión teórica y metodológica. Educación XX1 2018, 21, 17–40. [Google Scholar] [CrossRef] [Green Version]
  40. Veen, D.; Stoel, D.; Schalken, N.; Mulder, K.; Van de Schoot, R. Using the Data Agreement Criterion to Rank Experts’ Beliefs. Entropy 2018, 20, 592. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  41. Lamm, K.; Powell, A.; Lombardini, L. Identifying Critical Issues in the Horticulture Industry: A Delphi Analysis during the COVID-19 Pandemic. Horticulturae 2021, 7, 416. [Google Scholar] [CrossRef]
  42. Cabero-Almenara, J.; Gutiérrez-Castillo, J.-J.; Palacios-Rodríguez, A.; Barroso-Osuna, J. Development of the Teacher Digital Competence Validation of DigCompEdu Check-In Questionnaire in the University Context of Andalusia (Spain). Sustainability 2020, 12, 6094. [Google Scholar] [CrossRef]
  43. Martínez Sariol, E.; Travieso Ramos, N.; Sagaró del Campo, N.M.; Urbina Laza, O.; Martínez Ramírez, I. Identificación de las competencias específicas de los profesionales de enfermería en la atención al neonato en estado grave. Medisan 2018, 22, 184–194. Available online: https://bit.ly/3ocDSVU (accessed on 1 September 2021).
  44. Vega-Gea, E.; Calmaestra, J.; Ortega-Ruiz, R. Percepción docente del uso de TIC en la Educación Inclusiva. Pixel-Bit. Rev. De Med. Y Educ. 2021, 62, 235–268. [Google Scholar] [CrossRef]
  45. Cruz, M.; Martínez, M. Origen y desarrollo de un índice de competencia experta: El coeficiente k. Rev. Latinoam. Metodol. Investig. Soc.–ReLMIS 2020, 19, 40–56. [Google Scholar]
  46. Malla, F.; Zabala, I. La Previsión del Futuro en la Empresa (III): El Método Delphi; Centro de Estudios Empresariales: Granada, Spain, 1978. [Google Scholar]
  47. Landeta, J. El Método Delphi: Una Técnica de Previsión del Futuro; Ariel: Barcelona, Spain, 2002. [Google Scholar]
  48. Witkin, B.R.; Altschuld, J.W. Planning and Conducting Needs Assessment: A Practical Guide; Sage: Thousand Oaks, CA, USA, 1995. [Google Scholar]
  49. Martínez-Martínez, A.; Olmos-Gómez, M.D.C.; Tomé-Fernández, M.; Olmedo-Moreno, E.M. Analysis of Psychometric Properties and Validation of the Personal Learning Environments Questionnaire (PLE) and Social Integration of Unaccompanied Foreign Minors (MENA). Sustainability 2019, 11, 2903. [Google Scholar] [CrossRef] [Green Version]
  50. Bisquerra, R.; Alzina, R.B. Metodología de la Investigación Educativa; Editorial La Muralla: Madrid, Spain, 2004. [Google Scholar]
  51. Sahasrabudhe, V.; Kanungo, S. Propriate media choice for e-learning effectiveness: Role of learning domain and learning style. Comput. Educ. 2014, 76, 237–249. [Google Scholar] [CrossRef]
  52. Ljbojevic, M.; Vaskovic, V.; Stankovic, S.; Vaskovic, J. El uso del vídeo complementario en la enseñanza multimedia como herramienta didáctica para incrementar la eficiencia del aprendizaje y la calidad de experiencia. Rev. Mex. Bachill. Distancia 2015, 13, 134–153. [Google Scholar] [CrossRef]
  53. Salim, P.; Luo, T. Factors contributing to student retention in online learning and recommended strategies for improvement: A systematic literature review. J. Inf. Technol. Educ. Res. 2019, 18, 19–57. [Google Scholar] [CrossRef] [Green Version]
  54. Silva, J. Un modelo pedagógico virtual centrado en las E-actividades. RED. Rev. Educ. Distancia 2017, 53, 1–20. Available online: http://www.um.es/ead/red/silva.pdf (accessed on 22 August 2021). [CrossRef]
  55. Burcin, N.; Gemikonakli, O.; Duman, I.; Kirksekiz, A.; Kiyici, M. Evaluating students experiences using a virtual learning environment: Satisfaction and preferences. Educ. Tech. Res. Dev. 2020, 68, 437–462. [Google Scholar] [CrossRef]
  56. Cabero-Almenara, J.; Palacios-Rodríguez, A. La evaluación de la educación virtual: Las e-actividades. RIED. Revista Iberoam. Educ. Distancia 2021, 24, 169–188. [Google Scholar] [CrossRef]
Table 1. Indicators and related values of Ka.

Source of argumentation | Low | Medium | High
Theoretical analysis carried out by you | 0.10 | 0.20 | 0.30
Your experience gained from your practical activity | 0.20 | 0.40 | 0.50
Study of works on the subject by Spanish authors | 0.05 | 0.05 | 0.05
Study of works on the subject by foreign authors | 0.05 | 0.05 | 0.05
Your own knowledge about the status of the problem abroad | 0.05 | 0.05 | 0.05
Your intuition on the topic addressed | 0.05 | 0.05 | 0.05
Table 2. Average evaluation and standard deviation given by the experts for each dimension and overall.

Dimension | M | SD
Technical aspects | 5.30 | 0.70
Ease of use | 5.25 | 0.65
Diversity of resources and activities | 5.25 | 0.72
Content quality | 5.36 | 0.79
Total | 5.31 | 0.72
Table 3. Average evaluation and standard deviation given by the experts in the items of the dimension "technical and aesthetic aspects".

Technical and Aesthetic Aspects | M | SD
The operation of the t-MOOC that we have presented to you is: | 5.35 | 0.80
In general, you consider the aesthetics of the t-MOOC produced: | 5.20 | 0.89
In general, you would qualify the technical performance of the t-MOOC produced as: | 5.31 | 0.75
In general, how would you rate the presentation of the information on the screen? | 5.21 | 0.77
Table 4. Expert assessment of the items of the dimension "ease of use".

Ease of Use | M | SD
How would you rate the ease of use and management of the t-MOOC that we have presented to you? | 5.33 | 0.76
How would you rate the ease of understanding of the technical operation of the t-MOOC that we have presented to you? | 5.31 | 0.81
From your point of view, how would you rate the general design of the t-MOOC that we have developed? | 5.16 | 0.85
From your point of view, how would you rate the accessibility/usability of the t-MOOC that we have presented to you? | 5.25 | 0.83
From your point of view, how would you assess the flexibility of use of the t-MOOC that we have presented to you? | 5.21 | 0.81
Using the produced t-MOOC was fun for you. | 4.71 | 1.15
Table 5. Expert assessment regarding the dimension "diversity of resources and activities".

Diversity of Resources and Activities | M | SD
The diversity of resources used in the t-MOOC facilitates the understanding of the contents. | 5.15 | 0.90
The materials, readings, animations, videos ... offered in the t-MOOC are clear and adequate. | 5.21 | 0.89
The structure and materials of the t-MOOC are motivating for the study. | 5.01 | 0.95
The activities offered in the t-MOOC are attractive and innovative. | 5.15 | 0.96
There are different modalities and types of activities: reinforcement, support, extension ... presented in the t-MOOC. | 5.24 | 0.78
Table 6. Expert assessment regarding the dimension "quality of content".

Content Quality | M | SD
The contents of the t-MOOC as well as its structure are clear and adequate. | 5.41 | 0.82
The contents presented in the t-MOOC are adapted to the competences to be developed. | 5.46 | 0.74
The contents of the t-MOOC are easy to understand. | 5.42 | 0.71
Table 7. Kruskal-Wallis H test results.

Dimension | Kruskal-Wallis H | Sig.
Technical and aesthetic aspects | 3.078 | 0.215
Ease of use | 3.121 | 0.222
Diversity of resources and activities | 1.302 | 0.545
The quality of the content | 6.736 | 0.142
GENERAL | 4.047 | 0.123
Table 8. Mann–Whitney U test results.

Dimension | Mann–Whitney U | Sig.
Technical and aesthetic aspects | 1832 | 0.005
Ease of use | 1841 | 0.003
Diversity of resources and activities | 1602 | 0.000
The quality of the content | 1712 | 0.002
GENERAL | 1550 | 0.001
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

