Automated assessment systems are increasingly used in higher education programming courses, since the manual assessment of programming assignments is very time-consuming. Although a substantial amount of research has been conducted on systems for the automatic assessment of programming assignments, most studies focus on technical characteristics, and further research is needed to examine the effect of using these systems on students' perceptions and performance. This paper examines the effect of using an instructor-centered tool for automatically assessing programming assignments on students' perceptions and performance in a web development course at a higher education institution. Three data sources were used: a survey to collect data on students' perceptions, and the grades of both the students' assignment submissions and a practical programming exam to analyze their performance. The results show that incorporating the automated assessment tool into the course benefited the students: it increased their motivation, improved the quality of their work, and enhanced their practical programming skills. Nonetheless, a significant percentage of students found the feedback generated by the tool hard to understand and of little use, and considered the generated grades unfair.
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.