Search Results (7)

Search Parameters:
Authors = Carlos Delgado Kloos

19 pages, 5365 KiB  
Article
A Study of Student and Teacher Challenges in Smart Synchronous Hybrid Learning Environments
by Adrián Carruana Martín, Carlos Alario-Hoyos and Carlos Delgado Kloos
Sustainability 2023, 15(15), 11694; https://doi.org/10.3390/su151511694 - 28 Jul 2023
Cited by 13 | Viewed by 2816
Abstract
The COVID-19 pandemic has led to the growth of hybrid and online learning environments and to the trend of introducing more technology into the classroom. One such change could be the use of smart synchronous hybrid learning environments (SSHLEs): settings with both onsite and online students at the same time, in which technology plays a key role in sensing, analyzing, and reacting throughout the teaching and learning process. These changing environments and the incorporation of new technologies can place a greater workload on participants and reduce teacher agency. In light of this, this paper analyzes workload and teacher agency across various SSHLEs. The NASA-TLX instrument was used to measure workload in several scenarios, and questionnaires and interviews were used to measure teacher agency. The results indicated that teachers' workload tended to be high (between 60 and 70 points out of 100 on the NASA-TLX scale), especially when they lacked experience with synchronous hybrid learning environments, while students' workload tended to be average (between 50 and 60) in the SSHLEs analyzed. Teacher agency, meanwhile, did not appear to be altered but showed potential for improvement.
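
The overall NASA-TLX workload figures quoted above (scores on a 0-100 scale) are commonly obtained by weighting six subscale ratings with 15 pairwise comparisons. The sketch below only illustrates that standard computation with hypothetical numbers; it is not the authors' instrument, weighting, or data.

```python
# A minimal sketch of a weighted NASA-TLX score: six subscale ratings (0-100)
# combined with weights derived from 15 pairwise comparisons of the subscales.
# All numbers below are hypothetical, for illustration only.

SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def nasa_tlx_weighted(ratings: dict, weights: dict) -> float:
    """Weighted NASA-TLX score: sum(rating * weight) / 15, on a 0-100 scale."""
    assert sum(weights.values()) == 15, "weights must come from 15 pairwise comparisons"
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15

ratings = {"mental": 80, "physical": 30, "temporal": 70,
           "performance": 55, "effort": 65, "frustration": 40}
weights = {"mental": 5, "physical": 0, "temporal": 4,
           "performance": 2, "effort": 3, "frustration": 1}

print(f"Overall workload: {nasa_tlx_weighted(ratings, weights):.1f} / 100")
```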

33 pages, 2026 KiB  
Article
A Competency Framework for Teaching and Learning Innovation Centers for the 21st Century: Anticipating the Post-COVID-19 Age
by Mar Pérez-Sanagustín, Iouri Kotorov, António Teixeira, Fernanda Mansilla, Julien Broisin, Carlos Alario-Hoyos, Óscar Jerez, Maria do Carmo Teixeira Pinto, Boni García, Carlos Delgado Kloos, Miguel Morales, Mario Solarte, Luis Magdiel Oliva-Córdova and Astrid Helena Gonzalez Lopez
Electronics 2022, 11(3), 413; https://doi.org/10.3390/electronics11030413 - 29 Jan 2022
Cited by 19 | Viewed by 12673
Abstract
During the COVID-19 pandemic, most Higher Education Institutions (HEIs) across the globe moved towards “emergency online education”, experiencing a metamorphosis that advanced their capacities and competencies as never before. Teaching and Learning Centers (TLCs), the internal units that promote sustainable transformations, can play a key role in making this metamorphosis last. Existing models for TLCs have defined the competencies that they could help develop, focusing on teachers', students', and managers' development, but have overlooked aspects such as leadership, organizational processes, and infrastructure. This paper evaluates the PROF-XXI framework, which offers a holistic perspective on the competencies that TLCs should develop to support deep and sustainable transformations of HEIs. The framework was evaluated with 83 participants from four Latin American institutions and used to analyze the transformation of their teaching and learning practices during the pandemic lockdown. The analysis shows that the PROF-XXI framework was useful for identifying the teaching and learning competencies addressed by the institutions, their deficiencies, and their strategic changes. Specifically, this study shows that most institutions already had training plans for teachers before this period, mainly in the competencies of digital technologies and pedagogical quality, but that other initiatives, including student support actions, were created to reinforce them.

18 pages, 833 KiB  
Article
Evaluation of an Algorithm for Automatic Grading of Forum Messages in MOOC Discussion Forums
by Raquel L. Pérez-Nicolás, Carlos Alario-Hoyos, Iria Estévez-Ayres, Pedro Manuel Moreno-Marcos, Pedro J. Muñoz-Merino and Carlos Delgado Kloos
Sustainability 2021, 13(16), 9364; https://doi.org/10.3390/su13169364 - 20 Aug 2021
Cited by 10 | Viewed by 3050
Abstract
Discussion forums are a valuable source of information in educational platforms such as Massive Open Online Courses (MOOCs), as users can exchange opinions or even help other students asynchronously, contributing to the sustainability of MOOCs even with low instructor interaction. Forum messages are therefore a promising source of insights into students' performance in a course. This article presents an automatic grading approach that can be used to assess learners through their interactions in the forum. The approach combines three dimensions: (1) the quality of the content of the interactions, (2) the impact of the interactions, and (3) the user's activity in the forum. The evaluation compares the assessment by experts with the automatic assessment, obtaining a high accuracy of 0.8068 and a Normalized Root Mean Square Error (NRMSE) of 0.1799, which outperforms previously existing approaches. Future work could improve the automatic grading by training the indicators of the approach for specific MOOCs or by combining it with text mining techniques.
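
The abstract reports accuracy and NRMSE against expert grades but does not spell out how the three dimensions are combined. The sketch below shows one plausible weighted combination and the standard range-normalized RMSE; the weights, function names, and data are assumptions for illustration, not the algorithm evaluated in the paper.

```python
import numpy as np

def forum_grade(quality, impact, activity, weights=(0.5, 0.3, 0.2)):
    """Combine the three dimensions (each assumed normalized to [0, 1])
    into a grade on a 0-10 scale. The weights are hypothetical."""
    w_q, w_i, w_a = weights
    return 10 * (w_q * quality + w_i * impact + w_a * activity)

def nrmse(expert, automatic):
    """Root Mean Square Error normalized by the range of the expert grades."""
    expert, automatic = np.asarray(expert, float), np.asarray(automatic, float)
    rmse = np.sqrt(np.mean((expert - automatic) ** 2))
    return rmse / (expert.max() - expert.min())

# Hypothetical expert vs. automatic grades for a handful of learners.
expert = [8.0, 5.5, 9.0, 3.0, 7.0]
automatic = [forum_grade(0.8, 0.7, 0.9), forum_grade(0.5, 0.6, 0.4),
             forum_grade(0.9, 0.9, 0.8), forum_grade(0.3, 0.2, 0.4),
             forum_grade(0.7, 0.6, 0.7)]
print(f"NRMSE: {nrmse(expert, automatic):.4f}")
```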

21 pages, 27959 KiB  
Article
An Algorithm and a Tool for the Automatic Grading of MOOC Learners from Their Contributions in the Discussion Forum
by Sergio García-Molina, Carlos Alario-Hoyos, Pedro Manuel Moreno-Marcos, Pedro J. Muñoz-Merino, Iria Estévez-Ayres and Carlos Delgado Kloos
Appl. Sci. 2021, 11(1), 95; https://doi.org/10.3390/app11010095 - 24 Dec 2020
Cited by 9 | Viewed by 3249
Abstract
MOOCs (massive open online courses) have a built-in forum where learners can share experiences as well as ask questions and get answers. Nevertheless, the work of learners in the MOOC forum is usually not taken into account when calculating their grade in the course, due to the difficulty of automating that calculation in a context with a very large number of learners. In some situations, discussion forums might even be the only available evidence to grade learners; in others, forum interactions could complement traditional summative assessment activities when calculating the grade. This paper proposes an algorithm that automatically calculates learners' grades in the MOOC forum, considering both the quantity and the relevance of their contributions. In addition, the algorithm has been implemented within a web application that provides instructors with a visual and a numerical representation of the grade for each learner. An exploratory analysis is carried out to assess the algorithm and the tool with a MOOC on programming, obtaining a moderate positive correlation between the forum grades provided by the algorithm and the grades obtained through the summative assessment activities. Nevertheless, the complementary analysis conducted indicates that this correlation may not be enough to use the forum grades as predictors of the grades obtained through summative assessment activities.
(This article belongs to the Special Issue Advanced Technologies in Lifelong Learning)
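
The reported relationship between forum grades and summative grades is a standard correlation check. The snippet below is only a minimal illustration of such a check on made-up numbers; it is not the paper's tool, algorithm, or dataset.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two grade vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical forum grades (from an algorithm) and summative-assessment grades.
forum_grades = [4.0, 6.0, 5.0, 8.0, 7.0, 3.0, 9.0, 6.0]
summative_grades = [5.0, 5.0, 7.0, 7.0, 6.0, 4.0, 8.0, 8.0]

# Prints the correlation coefficient for the made-up data above.
print(f"Pearson r = {pearson_r(forum_grades, summative_grades):.2f}")
```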

24 pages, 3570 KiB  
Article
Re-Defining, Analyzing and Predicting Persistence Using Student Events in Online Learning
by Pedro Manuel Moreno-Marcos, Pedro J. Muñoz-Merino, Carlos Alario-Hoyos and Carlos Delgado Kloos
Appl. Sci. 2020, 10(5), 1722; https://doi.org/10.3390/app10051722 - 3 Mar 2020
Cited by 12 | Viewed by 4895
Abstract
In education, several studies have tried to track student persistence (i.e., students' ability to keep working on assigned tasks) using different definitions and self-reported data. However, self-reported metrics may be limited, and online courses now allow collecting many low-level events to analyze student behaviors from logs using learning analytics. These analyses can be used to provide personalized and adaptive feedback in Smart Learning Environments. Along this line, this work proposes the analysis and measurement of two types of persistence based on students' interactions in online courses: (1) local persistence (based on the attempts used to solve an exercise when the student answers it incorrectly), and (2) global persistence (based on overall course activity and completion). Results show that there are different student profiles based on local persistence, although medium local persistence stands out. Moreover, local persistence is highly affected by course context and can vary throughout the course. Furthermore, local persistence does not necessarily relate to global persistence or to engagement with videos, although it is related to students' average grade. Finally, predictive analysis shows that local persistence is not a strong predictor of global persistence and performance, although it can add some value to the predictive models.
(This article belongs to the Special Issue Smart Learning)
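
As a rough illustration of how such event-based persistence measures can be operationalized from platform logs, the sketch below computes a simplified local persistence (extra attempts made after a first incorrect answer on an exercise) and global persistence (share of course activities completed). The event fields and definitions are assumptions, not the paper's exact operationalization.

```python
from collections import defaultdict

# Hypothetical event log: (student, exercise, attempt_number, correct)
events = [
    ("s1", "ex1", 1, False), ("s1", "ex1", 2, False), ("s1", "ex1", 3, True),
    ("s1", "ex2", 1, True),
    ("s2", "ex1", 1, False),            # gives up after one failed attempt
    ("s2", "ex2", 1, False), ("s2", "ex2", 2, True),
]

def local_persistence(events):
    """Average number of extra attempts a student makes on exercises
    that were answered incorrectly on the first try (simplified)."""
    attempts = defaultdict(int)          # (student, exercise) -> total attempts
    failed_first = set()                 # exercises failed on the first attempt
    for student, exercise, attempt, correct in events:
        attempts[(student, exercise)] = max(attempts[(student, exercise)], attempt)
        if attempt == 1 and not correct:
            failed_first.add((student, exercise))
    per_student = defaultdict(list)
    for student, exercise in failed_first:
        per_student[student].append(attempts[(student, exercise)] - 1)
    return {s: sum(v) / len(v) for s, v in per_student.items()}

def global_persistence(completed, total_activities):
    """Fraction of course activities each student completed."""
    return {s: done / total_activities for s, done in completed.items()}

print(local_persistence(events))                    # e.g. {'s1': 2.0, 's2': 0.5}
print(global_persistence({"s1": 18, "s2": 7}, 20))  # {'s1': 0.9, 's2': 0.35}
```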

10 pages, 234 KiB  
Proceeding Paper
Smart Education: A Review and Future Research Directions
by Adrián Carruana Martín, Carlos Alario-Hoyos and Carlos Delgado Kloos
Proceedings 2019, 31(1), 57; https://doi.org/10.3390/proceedings2019031057 - 21 Nov 2019
Cited by 20 | Viewed by 7299
Abstract
Research and development often move forward based on buzzwords. New terms are coined to summarize new developments, often with several interpretations and without a formal definition. The term Smart Education has been coined to represent a step forward in technology-enhanced education, but what is behind it? Does it represent something essentially different from the educational technologies used before? In this paper, we conduct a systematic literature review to understand how this term is used, what the technologies behind it are, and what promises are made. We conclude that although the term is fuzzy, several developments available today can indeed make educational technologies much better adapted to the learner and therefore support learning in a smarter way.

19 pages, 553 KiB  
Article
Generalizing Predictive Models of Admission Test Success Based on Online Interactions
by Pedro Manuel Moreno-Marcos, Tinne De Laet, Pedro J. Muñoz-Merino, Carolien Van Soom, Tom Broos, Katrien Verbert and Carlos Delgado Kloos
Sustainability 2019, 11(18), 4940; https://doi.org/10.3390/su11184940 - 10 Sep 2019
Cited by 20 | Viewed by 3134
Abstract
To start medical or dentistry studies in Flanders, prospective students need to pass a central admission test. A blended program with four Small Private Online Courses (SPOCs) was designed to support those students. The logs from the platform provide an opportunity to delve into the learners' interactions and to develop predictive models to forecast success in the test. Moreover, the use of different courses makes it possible to analyze how models generalize across courses. This article has the following objectives: (1) to develop and analyze predictive models to forecast who will pass the admission test, (2) to discover which variables have more effect on success in different courses, (3) to analyze to what extent models can be generalized to other courses and subsequent cohorts, and (4) to discuss the conditions needed to achieve generalizability. The results show that the average grade in SPOC exercises using only first attempts is the best predictor and that it is possible to transfer predictive models with enough reliability when some context-related conditions are met. The best performance is achieved when transferring within the same cohort to other SPOCs in a similar context. The performance is still acceptable in a consecutive edition of a course. These findings support the sustainability of predictive models.
(This article belongs to the Special Issue Sustainability of Learning Analytics)
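
As an illustration of the kind of transfer experiment described (training on one SPOC and testing on another, with the average first-attempt exercise grade as the main predictor), here is a minimal scikit-learn sketch; the feature set, model choice, and synthetic data are assumptions, not the authors' pipeline or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def synthetic_spoc(n=200, shift=0.0):
    """Hypothetical SPOC data: average first-attempt grade (0-1) and pass/fail label."""
    first_attempt_avg = rng.uniform(0, 1, n)
    p_pass = 1 / (1 + np.exp(-(6 * first_attempt_avg - 3 + shift)))
    passed = rng.binomial(1, p_pass)
    return first_attempt_avg.reshape(-1, 1), passed

# Train on one course, then evaluate on another course in a slightly different context.
X_train, y_train = synthetic_spoc(shift=0.0)
X_other, y_other = synthetic_spoc(shift=0.3)

model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_other, model.predict_proba(X_other)[:, 1])
print(f"Transfer AUC on the other course: {auc:.2f}")
```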
