Special Issue "Application of Technologies in E-learning Assessment"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 March 2023 | Viewed by 17784

Special Issue Editor

Prof. Dr. Paz Prendes-Espinosa
Guest Editor
Department of Didactic and School Organization, University of Murcia, 30100 Murcia, Spain
Interests: e-learning; ICT in education; learning analytics; intelligent technologies

Special Issue Information

Dear Colleagues,

E-learning assessment is one of the most important issues to consider in the field of e-learning experiences and models. On the one hand, we can observe the evolution of technologies and how they open up different possibilities for e-assessment; advanced technologies are an opportunity to change education at all levels. On the other hand, it is also necessary to reflect on the educational models and strategies that support our experiences. Thus, technological knowledge must be linked to educational knowledge; both will help us to understand current strategies of e-learning assessment and those to come. Educational research shows us the possibilities that digital tools offer for designing and carrying out e-assessment in different ways, and this Special Issue gives us the opportunity to look inside this relevant field.

The topic of e-assessment includes relevant research lines such as e-assessment techniques, advanced technologies, digital competences, models and strategies, practical experiences, digital tools and environments for collecting user data, big data, and learning analytics.

Prof. Dr. Paz Prendes-Espinosa
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2300 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • e-learning
  • e-assessment
  • intelligent technologies
  • advanced technologies
  • ICT
  • innovative assessment strategies
  • artificial intelligence
  • big data
  • learning analytics
  • computational thinking

Published Papers (10 papers)


Research

Article
Integrating Assessment in a CSCL Macro-Script Authoring Platform
Appl. Sci. 2023, 13(3), 1537; https://doi.org/10.3390/app13031537 - 24 Jan 2023
Viewed by 334
Abstract
Collaborative learning entails the involvement and cooperation of a group of people with the purpose of learning. Collaborative learning scripts aim to orchestrate the complex interaction among group members, while Computer Supported Collaborative Learning scripts (CSCL scripts) constitute the research field in which IT techniques are involved in managing the aspects of such an interaction. This article presents assessment-related aspects of an existing CSCL script authoring and deployment platform called COSTLyP. Assessment is nowadays considered a vital constituent of CSCL scripts, since it may affect some of their necessary components and mechanisms. The outcome of the implementation of an assessment plan may determine the next step in a collaboration activity, or what actions should be undertaken to bridge the gap between the expected results and the achieved level of knowledge or expertise. At the same time, assessment can also verify the regulation level required within each group; consequently, these scripts should be flexibly designed in order to adapt their evolution to the real needs of the participants.
(This article belongs to the Special Issue Application of Technologies in E-learning Assessment)

Article
Digital Technologies for the Assessment of Oral English Skills
Appl. Sci. 2022, 12(22), 11635; https://doi.org/10.3390/app122211635 - 16 Nov 2022
Viewed by 480
Abstract
The assessment of oral skills in the process of learning a foreign language is very important for promoting the improvement of these competences. Furthermore, digital tools can make this task easier for teachers and promote their collaboration through professional online communities. In this article, we present the AROSE platform, a new digital tool developed to help secondary education teachers assess their students' oral skills in English as a foreign language. To evaluate this platform, we promoted an international online community of secondary education teachers in a project funded by the Erasmus+ programme. The research followed a design-based research methodology, with three iterations of the platform design process. We used a mixed method combining focus groups (of both teachers and experts), observation forms, and questionnaires for teachers. The main results demonstrate teachers' need to integrate technology to improve the evaluation process. The AROSE platform successfully caters to this need, as demonstrated by the significant data on the use of and satisfaction with the platform among its users.
(This article belongs to the Special Issue Application of Technologies in E-learning Assessment)

Article
Design of an IoT-Based Remote Learning System for Medical Skill Training in the Age of COVID-19: Focusing on CPR Skill Training
Appl. Sci. 2022, 12(17), 8840; https://doi.org/10.3390/app12178840 - 02 Sep 2022
Viewed by 611
Abstract
Medical skill education has been scaled down due to the COVID-19 pandemic. In particular, the decrease in CPR skill training has caused the quality of medical services to deteriorate. While new online education methods have emerged, few studies exist on online teaching and its effects. Since the online teaching of medical skills presents several challenges for instructors, it has not been as effective as face-to-face training. This study designed a new remote system focused on medical skill education. The proposed video-based application prototype uses an IoT device to measure CPR performance metrics and provides real-time data to users. It was tested using the Kano model on a small group of subjects, and the effects of skill training were analyzed quantitatively and qualitatively. A comparative analysis of the remote and face-to-face groups revealed similar average values for appropriate compression depth; in the other categories, the remote group fared poorer than the face-to-face group. Considering the high scores given to system usability in the USE survey, remote education shows promise as an alternative to face-to-face education. The significance of this study lies in being the first to develop and test a remote education system for medical skill training in the age of COVID-19.
(This article belongs to the Special Issue Application of Technologies in E-learning Assessment)

Article
Online and In-Class Evaluation of a Music Theory E-Learning Platform
Appl. Sci. 2022, 12(14), 7296; https://doi.org/10.3390/app12147296 - 20 Jul 2022
Viewed by 540
Abstract
This paper presents a new version and a three-month evaluation of the Troubadour platform, an open-source music theory ear training platform. Through interviews with teachers, we gathered the most-needed features that would aid their use of the platform. In the new version, we implemented different types of interaction, including class management, recurring homework, and challenges. Previous research has shown a significant improvement in students' performance while using the platform; however, the short time span of the previous experiments did not show whether these results could be attributed to novelty bias. To evaluate the efficacy of the platform beyond novelty bias, we performed a three-month evaluation of the students' interaction through questionnaires and platform-collected data on their engagement. During the experiment, the students attended school through online courses in the first part of the evaluation and in class in the second part. In this paper, we investigate the students' engagement during the three-month period, explore the influence of the platform's use in class versus in the online learning process, analyze the students' self-reports on their practice habits, and compare them with the collected data. The results showed high student engagement during the lockdown period, while the in-class period showed a decrease in the platform's use, revealing the students' need for such a platform as a complementary learning channel in remote learning.
(This article belongs to the Special Issue Application of Technologies in E-learning Assessment)

Article
How to Improve the Digital Competence for E-Learning?
Appl. Sci. 2022, 12(13), 6582; https://doi.org/10.3390/app12136582 - 29 Jun 2022
Viewed by 917
Abstract
Digital competence for learning (the skills, knowledge, and attitudes needed for learning with digital devices) is a factor that affects the effectiveness of both the traditional and the e-learning process. More specifically, technical competence is considered one of the four pillars of successful e-learning. Several studies show that, in practice, digital technology has not always been used successfully, even in countries with very high digital readiness. It is therefore important to assess the different dimensions of digital competence for learning and analyse the interrelations between these dimensions in order to make suggestions for advancement. In our study, we administered a test to students from primary and lower secondary schools in Estonia to assess their digital competence for learning and used Structural Equation Modelling to understand how attitudes predict the digital skills and knowledge that can be acquired in individual and social settings. The findings confirm that only behavioural intention to use digital devices predicts the development of digital skills and knowledge. Moreover, some knowledge and skills acquired in individual settings predict the development of knowledge and skills acquired in social settings. The study provides researchers and practitioners with suggestions for improving the structure and quality of e-learning.
(This article belongs to the Special Issue Application of Technologies in E-learning Assessment)

Article
Moving to e-Service Learning in Higher Education
Appl. Sci. 2022, 12(11), 5462; https://doi.org/10.3390/app12115462 - 27 May 2022
Cited by 2 | Viewed by 901
Abstract
Service Learning is a methodology in which students achieve academic and transversal competences related to the curriculum of a subject while performing a service for the benefit of the community. With the COVID-19 pandemic, it was necessary to reorganize the Service Learning activities developed in recent years so that they would not lose their pedagogical value and community service. This scenario was an opportunity to kick-start an e-Service Learning experience. To that end, this work shows how different Information and Communication Technology tools are integrated into an online platform to develop both activities and assessment following an e-Service Learning methodology. Since the experience was carried out with two collaborating entities serving people with autism and in two schools of the University of A Coruña, the tools are available not only to professors and students, but also to the entities. Our experience includes the assessment of both competences and service satisfaction using different resources for virtual collaborative work. The main contribution of our work is that we have greatly simplified our previous on-site project, as well as the monitoring of students' progress, the work of both professors and students, and the analysis of results, providing a virtual service that responds to user needs.
(This article belongs to the Special Issue Application of Technologies in E-learning Assessment)

Article
Content Curation in E-Learning: A Case of Study with Spanish Engineering Students
Appl. Sci. 2022, 12(6), 3188; https://doi.org/10.3390/app12063188 - 21 Mar 2022
Cited by 1 | Viewed by 1337
Abstract
Over the last decade, e-learning and the use of digital tools have received a great boost in higher education. This paper presents a content curation methodology to assess the acquisition of specific content and soft skills during the attainment of a Degree in Industrial Electronic Engineering at the University of Jaén. In this teaching-learning experience, 101 engineering students were involved in activities with digital tools related to content curation, organised into four proposed steps: search, select, sense making, and share. As evaluation tools, a rubric and a questionnaire on the digital tools were proposed. Moreover, a curation index was defined in order to assess the degree of achievement of the content curation. The academic results after using the rubric were better than in previous years. The average content curation index obtained was 53.53. Of the four evaluated steps, search and sense making had the lowest scores, and these steps should therefore be further developed in the future. In addition, the Kaiser-Meyer-Olkin test and Pearson's correlation were used to analyze the results of the questionnaires. It was concluded that the experience had a great impact on the skills related to collaborative work, digital information management, and lifelong learning, which are transversal skills at the university level. Thus, the results highlight the great educational potential of content curation.
(This article belongs to the Special Issue Application of Technologies in E-learning Assessment)
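The abstract above reports an average curation index but does not reproduce its definition. As a purely hypothetical sketch, an index of this kind could be computed by averaging per-step rubric scores over the four steps (search, select, sense making, share) and rescaling to a 0-100 range; the step names, scale, and equal weighting here are assumptions for illustration, not the article's actual formula.

```python
# Hypothetical sketch of a content curation index: the mean of the rubric
# scores for the four curation steps, rescaled to 0-100. The real index
# definition in the article is not reproduced here; weights and the
# maximum rubric score (4.0) are assumed.

STEPS = ("search", "select", "sense_making", "share")

def curation_index(rubric_scores: dict, max_score: float = 4.0) -> float:
    """Average the per-step rubric scores and rescale to 0-100."""
    mean = sum(rubric_scores[s] for s in STEPS) / len(STEPS)
    return 100.0 * mean / max_score

# Example: weaker scores on search and sense making, as the study reports
scores = {"search": 1.8, "select": 2.6, "sense_making": 1.9, "share": 2.5}
print(round(curation_index(scores), 2))
```

With equal weights, improving the two lowest-scoring steps raises the index fastest, which matches the article's recommendation to develop search and sense making further.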

Article
E-Assessment in E-Learning Degrees: Comparison vs. Face-to-Face Assessment through Perceived Stress and Academic Performance in a Longitudinal Study
Appl. Sci. 2021, 11(16), 7664; https://doi.org/10.3390/app11167664 - 20 Aug 2021
Cited by 5 | Viewed by 2592
Abstract
The COVID-19 pandemic has become both a challenge and an opportunity to implement certain changes in the world of education. One of the most important differences has been online evaluation, which had until now been marginal in most prestigious universities. This study compared the academic achievement of the last cohort that underwent classroom assessment and the first group that was graded for an official degree using synchronous online evaluation. We also measured the self-assessment of students in this second group in order to understand how it affected their perception of the process, using three different indicators: stress, difficulty, and fairness. Nine hundred and nineteen students participated in the study. The results indicate that online assessment produced grades that were 10% higher while maintaining the same degree of validity and reliability. In addition, stress and difficulty levels were in line with the on-site experience, as was the perception that the results were fair. The results allow us to conclude that online evaluation, when proctored, provides the same guarantees as in-person exams, with the added bonus of certain advantages which strongly support its continued use, especially in degrees with many students who may come from many different locations.
(This article belongs to the Special Issue Application of Technologies in E-learning Assessment)
Article
Monitoring of Student Learning in Learning Management Systems: An Application of Educational Data Mining Techniques
Appl. Sci. 2021, 11(6), 2677; https://doi.org/10.3390/app11062677 - 17 Mar 2021
Cited by 15 | Viewed by 2958
Abstract
In this study, we used a module for monitoring and detecting students at risk of dropping out. We worked with a sample of 49 third-year students in a Health Science degree during a lockdown caused by COVID-19. Three follow-ups were carried out over a semester (initial, intermediate, and final) with the UBUMonitor tool. This tool is a desktop application executed on the client, implemented in Java, with a graphical interface developed in JavaFX. The application connects to the selected Moodle server through the web services and the REST API provided by the server. UBUMonitor includes, among others, modules for log visualisation, drop-out risk, and clustering. The visualisation techniques of boxplots and heat maps and the cluster analysis module (k-means++, fuzzy k-means, and density-based spatial clustering of applications with noise (DBSCAN)) were used to monitor the students. A teaching methodology based on project-based learning (PBL), self-regulated learning (SRL), and continuous assessment was also used. The results indicate that the use of this methodology, together with early detection and personalised intervention in the initial follow-up of students, achieved a drop-out rate of less than 7% and an overall level of student satisfaction with the teaching and learning process of 4.56 out of 5.
(This article belongs to the Special Issue Application of Technologies in E-learning Assessment)
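UBUMonitor itself is a Java/JavaFX desktop application, so the following is only an illustrative Python/scikit-learn sketch of the clustering families the abstract mentions (k-means with k-means++ seeding, and DBSCAN, whose noise label can flag at-risk students). The feature names and values are invented for the example and are not taken from the study.

```python
# Illustrative sketch (not UBUMonitor code): clustering students by
# activity features extracted from Moodle logs, using the algorithm
# families named in the abstract. All feature values are made up.
import numpy as np
from sklearn.cluster import DBSCAN, KMeans
from sklearn.preprocessing import StandardScaler

# Rows: students; columns: hypothetical features such as
# [log events per week, forum posts, assignments submitted].
X = np.array([
    [120, 14, 6], [115, 12, 6], [130, 15, 6],  # highly active students
    [40, 2, 3],  [35, 1, 2],  [45, 3, 3],      # moderately active
    [5, 0, 0],                                 # possible drop-out risk
], dtype=float)

# Standardise so no single feature dominates the distance metric
X_std = StandardScaler().fit_transform(X)

# k-means with k-means++ seeding, one of the algorithms in the cluster module
km = KMeans(n_clusters=2, init="k-means++", n_init=10, random_state=0)
km_labels = km.fit_predict(X_std)

# DBSCAN labels low-density points as -1 (noise), a natural flag for
# students whose activity pattern matches no group
db_labels = DBSCAN(eps=1.0, min_samples=2).fit_predict(X_std)
print(km_labels, db_labels)
```

In a monitoring workflow of this kind, the DBSCAN noise label (or membership of a low-activity k-means cluster) would trigger the personalised intervention described in the abstract.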

Review

Review
Artificial Intelligence for Student Assessment: A Systematic Review
Appl. Sci. 2021, 11(12), 5467; https://doi.org/10.3390/app11125467 - 12 Jun 2021
Cited by 18 | Viewed by 4788
Abstract
Artificial Intelligence (AI) is being implemented in more and more fields, including education. The main uses of AI in education are related to tutoring and assessment. This paper analyzes the use of AI for student assessment based on a systematic review. For this purpose, a search was carried out in two databases: Scopus and Web of Science. A total of 454 papers were found and, after analyzing them according to the PRISMA Statement, 22 papers were selected. It is clear from the studies analyzed that, in most of them, the pedagogy underlying the educational action is not reflected. Formative evaluation appears to be the main use of AI, and another of its main functionalities in assessment is the automatic grading of students. Several studies analyze the differences between the use of AI and its non-use. We discuss the results and conclude that teacher training and further research are needed to understand the possibilities of AI in educational assessment, mainly at educational levels other than higher education. Moreover, it is necessary to increase the amount of research that focuses on educational aspects rather than on technical development around AI.
(This article belongs to the Special Issue Application of Technologies in E-learning Assessment)
