Article

Assessment Criteria Engagement and Peer-Feedback Quality in Higher Education: Implementing an Engagement Strategy in a Teacher Training Class

by Elena Cano García 1 and Maite Fernández-Ferrer 2,*
1 Faculty of Education, University of Barcelona, 08035 Barcelona, Spain
2 Faculty of Psychology and Educational Sciences, Universitat Oberta de Catalunya, 08022 Barcelona, Spain
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(9), 1151; https://doi.org/10.3390/educsci15091151
Submission received: 24 March 2025 / Revised: 16 July 2025 / Accepted: 4 August 2025 / Published: 4 September 2025

Abstract

Evaluative judgement is a necessary ability for future primary teachers, not only to reflect on their own teaching practice but also because they will need to help their students develop this ability. Higher feedback quality implies a better ability to critically assess a peer’s assignment; thus, it can be seen as an indicator of evaluative judgement. Engagement with assessment criteria could improve evaluative judgement and, thus, feedback quality. This paper aims to examine the effects of assessment criteria engagement strategies on pre-service teachers’ peer-feedback quality and the development of their evaluative judgement. Two groups of university students were compared: in one group, the teacher students were only informed about the criteria; in the other, a specific task was planned to encourage students’ engagement with those criteria. The results, based on the analysis of the feedback content, suggest that engagement with assessment criteria positively contributes to higher peer-feedback quality and, thus, to the development of evaluative judgement. Initial teacher education should therefore provide opportunities for students to engage with assessment criteria.

1. Introduction

This paper presents the results of a comparative study on the impact of engagement with assessment criteria on the quality of feedback given to peers and, thus, on the development of teacher students’ evaluative judgement. Evaluative judgement is “the capability to make decisions about the quality of work of self and others” (Tai et al., 2018, p. 472) and depends on assessment literacy, the ability to understand grading and to apply this understanding in making academic judgements of one’s work and performance (Winstone et al., 2017). Feedback is understood as the process students undertake in order to make sense of the information received from different sources to improve a task and the learning process. Peer feedback is a strategy that allows students to apply assessment criteria, assess a peer’s work, and share their comments with the peer (Carless & Boud, 2019). Previous research has shown the importance of students’ active role in assessment and the link between engaging with assessment criteria and a higher evaluative judgement, which leads to higher-quality peer feedback and better self-assessment of one’s own productions, fostering, in consequence, self-regulated learning (SRL). Different studies have, however, reported contradictory findings on the impact of giving peer feedback, showing that peer-feedback quality varies widely. Simply applying assessment criteria to a peer’s product may not be enough to foster the development of evaluative judgement in the desired ways in all students. Different levels of feedback literacy, which is “an understanding of what feedback is and how it can be managed effectively; capacities and dispositions to make productive use of feedback; and appreciation of the roles of teachers and themselves in these processes” (Carless & Boud, 2019, p. 1315), may intervene. Research is yet to identify additional tasks or strategies that could level out initial differences in feedback literacy, foster the development of evaluative judgement, and be easily integrated into teaching practice.
This study aims to contribute to closing this gap by examining whether assessment criteria engagement strategies positively impact pre-service teachers’ peer-feedback quality. Specifically, the quality of the peer feedback given in two different groups is compared, with feedback quality understood as an indicator of evaluative judgement and assessment literacy: in the first group, the assessment criteria were simply shared before their application in a peer-feedback activity; in the second, an additional task was used to foster students’ active engagement with the assessment criteria before they applied them in the peer-feedback process.
The research question is whether the task to foster students’ engagement with assessment criteria prior to their application has a significant impact on the quality of the peer feedback. The main objective is to study the effectiveness of this rather simple and easy-to-apply measure to inform future teaching practice with clear practical recommendations.
In the following, previous research on the main concepts of the study and their interrelation is reviewed, and the methodological underpinning is introduced. After the presentation of the findings, these are discussed against previous research, potential limitations are considered, and practical recommendations are made explicit.

2. Assessment Criteria, Peer-Feedback Quality, and Evaluative Judgement

2.1. Development of Teacher Students’ Evaluative Judgement as a Way to Foster Assessment Literacy

A formative assessment or assessment for learning implies clarifying, sharing, and understanding learning intentions and criteria for success (Wiliam, 2011). As Yan and Pastore (2022) claim, rather than focusing on what has been attained by students, formative assessment helps to identify learning gaps, scaffold new learning, anticipate future teaching steps, and promote self-regulation of student learning (Andrade & Heritage, 2017). There is, therefore, general agreement that learning how to assess should become a central part of higher education curricula (Panadero et al., 2019). The development of evaluative judgement is, thus, a central ability, in particular for teacher students, but as an element of the competence of learning how to learn, it is relevant in all disciplines. Butler and Winne (1995) stress the importance of formative feedback to foster self-regulated learning (SRL). Nevertheless, for students to benefit from feedback and formative assessment, they need to develop their assessment literacy. Assessment literacy has been defined as the process students undertake to understand grading and to apply this understanding to make academic judgements of one’s own work and performance (Winstone et al., 2017).
Teacher assessment literacy refers to “teacher capabilities to plan and implement quality assessment tasks, to interpret evidence and outcomes appropriate to the assessment purpose and type, and to engage students themselves as active participants in the assessment of their own learning” (Looney et al., 2017, p. 443). Despite the importance of teachers’ assessment literacy, Pastore and Andrade (2019) suggest that initial teacher training does not provide enough opportunities to develop this capacity, and they propose a three-dimensional model: assessment-literate teachers have conceptual knowledge (knowledge about models and methods), praxeological knowledge (assessment in practice), and socio-emotional knowledge (assessment as a social practice). In this context, being assessment literate implies being able to make judgements (see Figure 1). Tai et al. (2018) have referred to this ability as evaluative judgement, which is “the capability to make decisions about the quality of work of self and others” (p. 472). It integrates two components: (a) understanding what constitutes quality and (b) applying this comprehension to assess another task. The quality of peer-feedback utterances can, thus, be used as an indicator of a student’s evaluative judgement. Low peer-feedback quality can indicate low evaluative judgement and low assessment literacy. Higher peer-feedback quality after an intervention should, in the same way, indicate that the intervention positively affects assessment literacy and evaluative judgement.
Carless and Boud (2019) argue that one central condition to foster evaluative judgement is to train students on assessment and feedback processes. Tai et al. (2018) and Winstone et al. (2017) state that engaging students with assessment criteria is a requirement for evaluative judgement. In other words, simply being informed about assessment criteria and receiving feedback is not enough to become proficient in giving high-quality feedback. For student teachers to develop their assessment literacy, they will need to have an active role during the whole feedback process.

2.2. Peer Feedback as a Strategy to Foster Teacher Students’ Evaluative Judgement

Feedback has been identified as a central strategy to foster students’ evaluative judgement and SRL because it allows students to understand what constitutes quality and apply this understanding to their learning task. Feedback is understood here as the process students undertake in order to make sense of the information received from different sources to improve a task and the learning process (Carless & Boud, 2019).
There is research evidence that shows what conditions can make feedback effective: (a) students’ active role; (b) a clear assessment design; (c) sustained feedback practices; (d) previous training on how to provide and use feedback; (e) making clear what is expected from the student; and (f) monitoring the feedback process (Dawson et al., 2019; O’Donovan et al., 2019; Poulos & Mahony, 2008).
Peer feedback can foster students’ active role in learning and assessment, as students produce feedback rather than only receiving it. Peer feedback is a strategy that allows students to apply the assessment criteria, to assess a peer’s work, and to self-assess their own production (Carless & Boud, 2019). Additionally, providing feedback involves higher-order thinking skills, such as critical thinking, identifying problems, and suggesting solutions (Nicol et al., 2014).
Regarding the assessment design, Panadero et al. (2016, p. 9) offer some steps to foster SRL through peer feedback: (1) clarifying the purpose of peer assessment; (2) involving students in clarifying assessment criteria; (3) matching participants in a way that fosters productive peer assessment; (4) defining the peer assessment format and mode of interaction; (5) providing quality peer-assessment training; (6) providing tools for peer assessment; (7) specifying peer-assessment activities and timescale; and (8) monitoring the peer-assessment process and coaching students.
The available literature shows the value of sustaining feedback practices over time, looking for longer-term experiences. Carless (2019) argues that single loops (which include a single instance of feedback provision and reception) only involve straightforward short-term actions, whereas double-loop feedback enables more complex and long-term adjustments to learning strategies.
Previous training has been identified as a key condition for feedback quality: it helps students use the criteria, provide constructive comments, and know how to use feedback. Sluijsmans et al. (2002) found that students who received training were more likely to use the criteria and to give more constructive comments, as well as to score higher on structure and to use fewer naive words.
This makes clear that the quality criteria are necessary in the planning phase of SRL, in which the learning goals and strategies are set (Butler & Winne, 1995). Having students participate in the design of the assessment criteria, or engage with those criteria, is therefore a key process (Wiliam, 2011). In seeking different ways of thinking about assessment criteria, it is worth reviewing the problems with transparency, which have been extensively explored over the last fifteen years (O’Donovan et al., 2019). Feedback literacy involves understanding the criteria (Bearman, 2018). Textual explanations such as rubrics may clarify expectations, reduce anxiety, and improve self-efficacy and self-regulation (Panadero et al., 2016). In fact, the process of designing these assessment tasks and developing rubrics is beneficial not only for students but also for teachers. It has been found that, by carrying out this process, teachers better understand student learning because they must think about their learning intentions and success criteria (Ayalon & Wilkie, 2020).
Monitoring and scaffolding the feedback process is important to help students improve their peer feedback and evaluative judgement. However, this scaffolding should be gradually withdrawn, and students should be able to progressively provide better feedback with less support from the teacher (Panadero et al., 2016; Tai et al., 2018).

2.3. Dimensions of Quality Peer Feedback

Feedback is a key strategy to foster students’ evaluative judgement (Tai et al., 2018). Additionally, the feedback provided can show students’ evaluative judgement level since a higher evaluative judgement should imply a higher feedback quality. Although many studies on feedback-provision quality have been completed, teachers and higher education students do not have a shared understanding of what characterises quality feedback (Dawson et al., 2019; O’Donovan et al., 2019; Orsmond et al., 2013).
Feedback quality has been studied considering different agents and dimensions. From the agents’ perspective, O’Donovan et al. (2019) found that good feedback for students is that which justifies a mark and explains how students could gain better marks in future assignments. Dawson et al. (2019) state that both students and teachers believe that quality feedback is characterised by being useful, detailed, specific, and personalised, but students also consider that information needs to be constructive and motivating.
Feedback quality can also be defined considering several dimensions. Regarding the aim of feedback, Chi (1996) classified feedback into corrective (including reinforcing comments), didactic explanations, and suggestive feedback, the last of which could have a bigger impact on students’ learning.
Another dimension to consider is the feedback’s focus. Feedback can be focused on the task, on the process, on SRL, and on the person (Hattie & Timperley, 2007). Task-level feedback denotes how well tasks are performed; process-level feedback focuses on how to perform tasks; feedback at the self-regulation level focuses on learners’ self-monitoring of their actions; and personal feedback at the self-level evaluates the learner and frequently involves praise. Hattie and Timperley (2007) conclude that feedback at the self-regulation level and feedback at the process level are generally most effective in raising achievement; feedback at the self-level is the least effective, and the main limitation of feedback at the task level is the difficulty for students to generalise messages to other tasks (Carless, 2019, p. 706). SRL-focused feedback is the most beneficial type because it makes students aware of the kinds of strategies that might help them improve their learning, which are transferable across the degree and into professional practice (Valtonen et al., 2021). Nevertheless, students engaging in peer-feedback processes tend to provide feedback focused on the task (Huisman et al., 2019).
Feedback can also be analysed depending on the form in which it is presented. Feedback can be communicated in written form using affirmative or negative sentences or by asking questions. Written feedback can be addressed to peers or to the teacher (who will also be a reader) or have an impersonal tone. Addressing peers directly, as in a dialogue with them, may reach them more effectively and lead them to make more changes (O’Donovan et al., 2019). Asking students to formulate questions for their peers (rather than only affirmative sentences) could be an interesting way to stimulate feedback as an internal and self-regulated process (Nicol, 2019). Assessment literacy is, however, seen as most crucial for SRL and the inner dialogue that the feedback process represents (Winstone et al., 2017). The tone of feedback (which, following Gielen et al., 2010, can be positive, negative, or mixed) has emotional effects because it can affect students’ self-esteem and, ultimately, the self-regulation of learning processes (Nicol & Macfarlane-Dick, 2006).
Finally, Panadero and Alqassab (2019) stated that the quality of peer feedback could be affected by anonymity.
Considering all of these dimensions, as well as the perspective of different agents, for the purpose of this study, feedback will be regarded as quality feedback when it is adjusted to the assessment criteria; it is didactic and suggestive; it is focused both on the task and the learning process; it has a constructive tone; it is addressed to the peer; and it is mainly explanatory and argumentative.

2.4. Assessment Criteria Engagement Strategies’ Role in Evaluative Judgement Development

Research on different strategies to foster students’ engagement with assessment criteria is still rather scarce. Assessment criteria and students’ engagement with them stand out among the elements that make feedback effective (Sadler, 2009). Criteria engagement is a strategy that allows students to represent quality and apply it in the peer-feedback process.
Students’ engagement with assessment criteria can, thus, have an impact on students’ academic performance and the improvement of SRL (Poulos & Mahony, 2008). On a more basic level, students’ engagement with assessment criteria is important to ensure that students understand them (Sadler, 2009). If students do not understand the assessment criteria, they cannot correctly judge a product or offer quality feedback. In short, representing and engaging with the criteria must ultimately serve to build a greater understanding and to enable decision making based on the feedback. Carless and Boud (2019) indicate that making judgements involves the implicit or explicit application of criteria. They point out that students often find assessment criteria too dense and abstract and that they need to understand assessment standards through exemplars or specific training activities.
As Panadero and Lipnevich (2022) stated, students often receive external feedback intended to improve their learning, including information clarifying the criteria for success. However, feedback does not always have a significant effect on performance improvement (feed-up). This could be due to the feedback quality, when it is not detailed enough or usable (Panadero & Lipnevich, 2022), to the lack of students’ active involvement (Kruiper et al., 2022), or to a lack of feedback literacy, for example, little appreciation of the received feedback or difficulties understanding or applying it. Increasing student involvement would require specific activities that promote students’ participation (Baughan, 2020). For the feed-up to be achieved, it may be helpful to plan and develop some strategies for engagement with assessment criteria, in particular if these can foster the development of feedback literacy.

3. Materials and Methods

3.1. Aim

This study aims to contribute to the field by examining whether assessment criteria engagement strategies positively impact pre-service teachers’ peer-feedback quality. It is assumed that a higher feedback quality implies a better ability to critically assess a peer’s assignment and, thus, a higher evaluative judgement. Feedback quality is, thus, used as an indicator of evaluative judgement and assessment literacy. Improved feedback quality in comparison to a control group after an intervention indicates that the intervention fosters evaluative judgement and assessment literacy. Moreover, this study seeks to relate the analysed peer-feedback quality with the quality of the final task and the obtained grade.

3.2. Design and Procedure

The research was designed and carried out following the principles of the institutional code of good practices in research and the ethical committee. In line with Responsible Research and Innovation (RRI) principles, participating students signed an informed consent form.
In the study, two different groups of students took part in a peer-feedback experience, one in the academic year 2017–2018 and one in 2018–2019. The students were all enrolled in the same degree (teacher education), and the peer-feedback experience was developed within the same subject, study year, and term, regarding the same task, with the same assessment criteria, and taught by the same lecturer. Both groups received training on peer feedback and formative assessment at the beginning of the subject and completed the same peer-feedback process. The task was the same for both groups, asking the small groups to analyse an educational innovation. Students in Group 2 (2018–2019) were given an additional task to engage with the assessment criteria before the peer-feedback process (Table 1). For this task, each student group had to discuss the given assessment criteria and define what would represent quality for each criterion and how it should be addressed. The students in both groups received lecturer feedback on the alignment of the peer feedback that they had provided with the assessment criteria.
Though the data collection took place several years ago and in a specific context, the insights of this study remain relevant within and beyond that concrete context. An inherent limitation of the quasi-experimental design is that it cannot completely rule out differences between the two groups. The insights are, nevertheless, relevant: as the only element of change between the two groups was the intervention, a strategy to foster engagement with the assessment criteria, differences between the groups can be attributed to this intervention. The original contribution of the study lies, thus, in the insights regarding the impact that can be achieved with this intervention.

3.3. Participants

The participants of this study are students of initial teacher training enrolled in a 6-ECTS (European Credit Transfer System, which means 150 h of student workload) compulsory second-year first-semester subject at the University of Barcelona during the academic years of 2017–2018 (Group 1) and 2018–2019 (Group 2). Participation was voluntary and limited to the enrolled students who opted for the continuous assessment of the course. All of these students decided to participate in the study (see Table 2).

3.4. Peer-Feedback Process

The subject in which the experience took place is titled Educational System and School Organisation. Participants in both academic years received training on peer feedback and formative assessment, which consisted of discussing the characteristics of formative assessment and of quality feedback. After this, the teacher presented the task and the assessment criteria for the task and for the feedback, which were the same in both academic years.
The task, in which peer feedback was applied, consisted of analysing an educational innovation within the school organisation area in groups of four. During the elaboration of the task, there were two peer-feedback loops. Each peer-feedback loop consisted of the following: (1) handing in an initial version of the task; (2) assessing a peer group’s assignment and providing qualitative comments for each assessment criterion; (3) receiving feedback from a peer; and (4) revising and developing a new version. The assessment criteria of the task were different in the first and second loops, focusing more on the task in the first and more on the process in the second, but they were the same for all groups.
The task and the peer-feedback process followed were, thus, the same in both academic years. The difference between both experiences is found in the prior work on assessment criteria engagement, which was only developed in Group 2 (2018–2019).
Group 1 was only informed about the assessment criteria. Students in Group 2 additionally completed a specific task in small groups. This task was planned to encourage students’ engagement with the assessment criteria and consisted of students’ discussions of each assessment criterion with their team and defining what would represent quality for that criterion and what they should do to address the criterion in their assignment. The agreed output had to be delivered.
For the peer-feedback process, the students were randomly paired, ensuring that the members of a pair did not work in the same small group. The pairs were not reciprocal but were stable over time. Despite the assignment being group work, peer feedback was individual. Each student had the role of giving and receiving feedback. Students had to provide qualitative feedback for each assessment criterion separately. After receiving the peer’s feedback, the four members of each small group had to meet to share the feedback received and adjust the task and the process accordingly. The delivered peer feedback was collected and analysed by the teacher in terms of its alignment with the assessment criteria. Before beginning the second loop, students had to summarise the peer feedback they had received in the first loop and note down what they should do to integrate that feedback, what modifications were finally made, and why.
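The pairing constraint just described is a small combinatorial procedure. The sketch below is a minimal illustration in Python, with invented student names and a hypothetical helper; the study does not report how the pairing was actually implemented:

```python
# Illustrative sketch only; the study does not describe its pairing tool.
# Each student is assigned one peer to review. Pairs are non-reciprocal,
# and nobody reviews themselves or a member of their own small group.
import random

def pair_reviewers(students, group_of, seed=None):
    """Return {reviewer: reviewee} with no self-, same-group, or reciprocal pairs."""
    rng = random.Random(seed)
    reviewees = students[:]
    while True:  # rejection sampling: reshuffle until all constraints hold
        rng.shuffle(reviewees)
        assignment = dict(zip(students, reviewees))
        if all(s != r                              # no self-review
               and group_of[s] != group_of[r]      # no same-group review
               and assignment[r] != s              # no reciprocal pairs
               for s, r in assignment.items()):
            return assignment

# Hypothetical example: four students in two small groups.
students = ["Ana", "Ben", "Cai", "Dia"]
group_of = {"Ana": 1, "Ben": 1, "Cai": 2, "Dia": 2}
print(pair_reviewers(students, group_of))
# e.g., {'Ana': 'Cai', 'Ben': 'Dia', 'Cai': 'Ben', 'Dia': 'Ana'}
```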

3.5. Peer-Feedback Quality Analysis

A feedback-coding protocol was created to analyse the type of feedback that pre-service teachers offered. Chi’s (1996) classification was used to analyse the purpose of the feedback: corrective, didactic, and suggestive. Hattie and Timperley’s (2007) classification determined the focus of the feedback: task, process, or person. Based on Yang and Carless’s (2013) work on dialogic feedback, it was proposed that feedback could be formulated as a statement, as a question, or as a reformulation of the assessment criteria. The tone of the feedback was defined based on Gielen et al.’s (2010) work: positive, negative, and mixed.
This initial feedback-coding protocol was validated in terms of content. To this end, some examples of feedback were selected to study the extent to which the initial guide informed about the characteristics of the feedback. After several validation loops, it was decided that two categories had to be added: the direction of feedback and the type of content. The final template used in the peer-feedback quality analysis is displayed in Table 3.
After the design and validation of the feedback-coding protocol, some feedback was selected, and three different members of the team analysed it to ensure that there was agreement in their classifications and that the categories were clear. The second step was to refine the categories and their definitions. This process was performed three times. In the end, the inter-rater reliability was 87%, and it was decided that the research assistant could analyse all feedback.
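As an aside for readers wanting to reproduce this step: an agreement figure of this kind is commonly computed as simple percent agreement, i.e., the share of instances to which two raters assigned the same code. The following is a minimal sketch with invented codings; the paper does not specify which agreement measure was used:

```python
# Minimal percent-agreement sketch; the coded data below are invented.
def percent_agreement(codes_a, codes_b):
    """Share (%) of feedback instances assigned the same code by both raters."""
    assert len(codes_a) == len(codes_b), "raters must code the same instances"
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100 * matches / len(codes_a)

# Hypothetical example: two raters coding the purpose of five instances.
rater_1 = ["corrective", "didactic", "suggestive", "corrective", "didactic"]
rater_2 = ["corrective", "didactic", "corrective", "corrective", "didactic"]
print(f"{percent_agreement(rater_1, rater_2):.0f}% agreement")  # 80% agreement
```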
A total of 304 instances of feedback were analysed for loop 1 and 313 for loop 2 in the academic year 2017–2018. For the academic year 2018–2019, a total of 305 instances of feedback were analysed in loop 1 and 312 for loop 2.
Once all feedback from each criterion and phase had been analysed, the relative frequency of each feedback code was calculated for each category (e.g., purpose), as well as across all categories for the total feedback. This resulted in feedback marks summarising the overall quality of the peer feedback.
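A minimal sketch of this frequency calculation, assuming one code per dimension for each feedback instance, follows; the instances and helper function are illustrative, with dimension and code names taken from Table 3:

```python
# Illustrative sketch of the relative-frequency step described above.
from collections import Counter

coded_feedback = [  # one dict per feedback instance; invented examples
    {"purpose": "corrective (reinforcing)", "focus": "task", "tone": "positive"},
    {"purpose": "didactic", "focus": "process", "tone": "positive"},
    {"purpose": "corrective (reinforcing)", "focus": "task", "tone": "mixed"},
]

def relative_frequencies(instances, dimension):
    """Relative frequency (%) of each code within one dimension, e.g., purpose."""
    counts = Counter(item[dimension] for item in instances)
    total = sum(counts.values())
    return {code: round(100 * n / total, 2) for code, n in counts.items()}

print(relative_frequencies(coded_feedback, "purpose"))
# {'corrective (reinforcing)': 66.67, 'didactic': 33.33}
```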
Finally, the assignment marks and the overall subject marks were collected to explore whether there was a relationship between improving students’ evaluative judgement and students’ performance.

4. Results

4.1. Group 1 (2017–2018): Peer Feedback Without Assessment Criteria Transparency Strategies

The analysis of students’ peer feedback (see Table 4) shows that the most common type of feedback is that which has a corrective purpose (66.36% loop 1; 70.4% loop 2), is mainly focused on the task (80.85% loop 1; 73.57% loop 2), and has, as its content, a description of the task (68.16% loop 1; 71.89% loop 2).
Nevertheless, students use corrective feedback for different purposes: (1) to indicate that something is correct or well done (40.70% loop 1; 47.62% loop 2); (2) to highlight mistakes (11.50% loop 1; 6.63% loop 2); and (3) to stress that some of the requested information is missing (14.16% loop 1; 16.15% loop 2). Indeed, students do not tend to provide explanations (didactic feedback) to help their peers understand why something is well done or not (13.42% loop 1; 13.95% loop 2) or suggestions (20.21% loop 1; 15.65% loop 2) for improvement.
Teacher students write feedback as a statement (92.35% loop 1; 91.12% loop 2); questions, which would directly invite the peer to reflect on their work, are rare (1.83% loop 1; 1.18% loop 2). Some students tend to just rephrase the assessment criterion (5.81% loop 1; 7.69% loop 2).
Students tend to write feedback constructively. Nevertheless, the most common tone is one that mixes constructive and punitive tones (47.87% loop 1; 40.26% loop 2).
Students barely address the feedback to their peers (31.82% loop 1; 24.84% loop 2). Feedback tends to be written in an impersonal way (35.39% loop 1; 39.81% loop 2) or addressed to the teacher (32.79% loop 1; 35.35% loop 2). Consequently, the content of feedback is mainly descriptive (68.16% loop 1; 71.89% loop 2), since students just describe what can be found in their peer’s assignment, without assessing the quality or relevance of the work.
Nevertheless, in loop 2, students address more issues related to the process (15.95% loop 1; 20% loop 2). This could be the result of the type of assessment criteria: the criteria in loop 1 tend to have a stronger focus on the task, while in loop 2 they are focused on the process. Therefore, the assessment criteria seem to guide the focus of the peer feedback. Combining assessment criteria relating to the task and to the process could, hence, help students to see beyond the task and reflect more on the process in their feedback.
Overall, the similarity of the type of feedback between both loops is high. This may be due to the rather short length of the experience (one term) and could suggest that most teacher students will not necessarily improve their feedback quality simply through giving and receiving peer feedback. The little variation in peer-feedback quality between loop 1 and loop 2 seems to indicate that the mere experience of giving and receiving feedback does not necessarily improve the ability to give feedback and, thus, evaluative judgement. To increase feedback quality, it is, hence, advisable to use additional strategies (training, tasks to foster engagement with assessment criteria, etc.) to develop assessment and feedback literacy.
The relationship between the quality of the peer feedback provided by a student and the same student’s marks in the assignment and the subject (see Table 5) was analysed using the feedback marks that summarise feedback quality based on the template for the peer-feedback quality analysis. A significant and moderate correlation was found between the feedback mark and the final task (r = 0.421, p = 0.32, R² = 0.177), suggesting a relationship between feedback quality and students’ performance.
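For illustration, correlations of this kind can be reproduced with standard statistical tools. The sketch below uses Python with SciPy and invented placeholder marks; the study does not report which software was used:

```python
# Sketch of the correlation between feedback marks and assignment marks.
# The mark vectors are invented placeholders, not the study's data.
from scipy.stats import pearsonr

feedback_marks = [5.0, 6.5, 7.0, 4.5, 8.0, 6.0]  # per-student feedback quality
task_marks = [6.0, 7.5, 7.0, 6.5, 8.5, 7.0]      # per-student assignment marks

r, p = pearsonr(feedback_marks, task_marks)      # Pearson r and its p-value
print(f"r = {r:.3f}, p = {p:.3f}, R² = {r**2:.3f}")
```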

4.2. Group 2 (2018–2019): Peer Feedback with Assessment Criteria Engagement Strategies

The analysis of the feedback in the second group shows that the most common feedback is phrased as a statement (92.21% loop 1; 96.19% loop 2), has a corrective purpose (57.51% loop 1; 50.79% loop 2), and focuses on the task (80.11% loop 1; 56.78% loop 2). This information is written in a positive tone (87.17% loop 1; 93.27% loop 2) and describes the task (69.18% loop 1; 60.87% loop 2), either in an impersonal way (40.66% loop 1; 29.20% loop 2) or addressed to the teacher (39.67% loop 1; 42.54% loop 2) (see Table 6).
Corrective feedback (57.51% loop 1; 50.79% loop 2) tends to stress the positive aspects of the task and the process (46.49% loop 1; 41.70% loop 2). Additionally, some instances of didactic (20.93% loop 1; 27.13% loop 2) and suggestive feedback (21.57% loop 1; 22.08% loop 2) are found.
The main focus of feedback tends to be the task (80.11% loop 1; 56.78% loop 2). Again, this is likely encouraged by the assessment criteria setting this focus in particular in loop 1 and might also be the consequence of the type of assignment, as the processes followed by peers may not always be transparent in a written assignment. An important change from loop 1 to loop 2 is, though, the significant increase of feedback focused on the process in loop 2 (39.04%) in comparison to the first loop (16.71%).
Students hardly ever addressed the feedback to their peers (19.67% loop 1; 28.27% loop 2), writing it instead either to the lecturer or in an impersonal tone. The participants may see peer feedback as a demand from the teacher rather than appreciating the value of this process; this point is taken up in the discussion.
This could explain why most feedback tends to be descriptive (69.18% loop 1; 60.87% loop 2) and less often explanatory (29.91% loop 1; 38.74% loop 2). What is striking is that students almost never offer argumentative feedback (0.91% loop 1; 0.40% loop 2).
The analysis of the type of feedback provided shows that, although feedback tends to be mostly corrective (50.79% in loop 2), there is an increase in didactic feedback from loop 1 (20.93%) to loop 2 (27.13%). In the same line, there is an increase in explanatory feedback from loop 1 (29.91%) to loop 2 (38.74%).
The relationship between the feedback mark, the final task mark, and the final mark of the subject was explored (see Table 7). Strong and significant correlations were found between the feedback mean and the final task (r = 0.890, p < 0.001, R² = 0.792), as well as between the feedback mean and the final mark of the subject (r = 0.778, p < 0.001, R² = 0.605).

4.3. Comparison of Groups 1 and 2

The analysis of feedback from both groups shows that the most common type of feedback is the one that states some information about the correction of the task. This information tends to be written in a positive tone, and it is either impersonal or addressed to the teacher.
Nevertheless, some differences are found between both groups. Although correction is the most common purpose, teacher students in Group 2 (2018–2019) provide more didactic feedback (13.95% Group 1 vs. 27.13% Group 2). Additionally, students in Group 2 (2018–2019) provide more feedback focused on the process and with a greater explanatory nature in loop 2 (29.91% loop 1; 38.74% loop 2). All of these findings could be the result of assessment criteria engagement strategies.
Students in Group 2 (2018–2019) also tend to provide more constructive feedback than those in Group 1 (2017–2018) (55.27% Group 1 vs. 93.27% Group 2 in loop 2). Additionally, the share of feedback that rephrases the assessment criteria is lower in the second loop of Group 2 (2018–2019) (7.69% Group 1 vs. 2.54% Group 2), but students still do not tend to address feedback to their peers or to provide argumentative feedback.
As for the feedback marks, the results indicate that the overall quality of feedback is higher in the second group (M = 7.39) than in the first one (M = 5.97) (see Table 8). The strategies for assessment criteria engagement could have had a positive impact on the feedback quality. Moreover, the overall feedback mean correlates with the final mark of the task in both groups, although the correlations are stronger in Group 2 (2018–2019). Additionally, the feedback mean in Group 2 (2018–2019) also correlates with the final mark of the subject. Consequently, feedback quality seems to be related to students’ overall performance.

5. Discussion and Conclusions

Evaluative judgement is a necessary ability for future primary teachers, not only to reflect on their teaching practice and keep learning throughout their teaching career but also because they will need to help their students develop this ability. This study aimed to examine the effect of assessment criteria engagement strategies on pre-service teachers’ feedback quality. To this end, the peer feedback (following Panadero et al., 2016) of two different groups has been analysed qualitatively and compared.
From the results of this study, four main conclusions can be drawn. Firstly, assessment criteria engagement strategies seem to positively contribute to feedback quality. Students in Group 2 (2018–2019) provided more orientations and explanations to help their peers adjust to the task and their learning process. Hattie and Timperley (2007) already stressed that process and self-regulatory feedback were the types of feedback that had a stronger impact on students’ learning. It could be that having a greater understanding of what is expected helps students to adjust their feedback. Consequently, they can adjust their comments and assessment. Previous studies have already documented the importance of assessment criteria transparency (Winstone et al., 2017, p. 29). Therefore, students should understand assessment standards through exemplars or specific training activities. Careful attention should be given to the understanding of the criteria that will be applied, as Sluijsmans et al. (2002) stated.
Secondly, in line with previous studies, independently of their group, the most common type of feedback provided by teacher students is descriptive and focused on the task (Huisman et al., 2019). Teacher students provide peer feedback intending to correct and describe what is in the task, with a positive tone. Therefore, it seems that assessment is still understood as a corrective process rather than as a source for learning (Wiliam, 2011). This would be linked to previous criticisms of criteria-based assessment, in the sense that rubrics lead to less detailed feedback from both students and teachers (Arts et al., 2016; Harris et al., 2015). In contrast, research such as that of Dirkx et al. (2021) found that the information provided in addition to the rubric contains more feedforward and process-orientated comments, which fits very well with the intended purpose of rubrics in general (i.e., to focus on the process). This is an important element to consider, as Nordrum et al. (2013) already showed that comments provided in addition to the rubric are used differently from comments within the text: feedback in the text is mainly used for clarifications, corrections, and questions, while comments provided alongside the rubric are used primarily for statements, arguments, and suggestions. This helps to interpret the results of this research, given that the deliberate choice of a particular mode of feedback can help adjust its focus, level, and function towards the intended goal (Dirkx et al., 2021). Finally, along the same line, Zhang and Zheng (2018), cited by Henderson et al. (2021), also argue that less structured feedback could be more helpful and encourage students to engage in their own deeper thinking, which would make the most sense for self-regulation of learning.
In this study, the process focus increased in both groups from loop 1 to loop 2. This could be explained by the stronger process focus of the assessment criteria given for loop 2. In Group 2, this increase was even stronger and could be a result of the additional engagement with the assessment criteria, which better supported students in adapting to the focus set in the criteria. This may also explain the clearer improvement between loops 1 and 2 in Group 2 (2018–2019). The importance of the assessment criteria for the focus of the peer feedback highlights the need to combine assessment criteria relating to the task and to the process, supporting students in seeing beyond the task and reflecting more on the process in their feedback.
Another conclusion is that pre-service teachers generally address feedback to the teacher, even though they were orientated to direct it to their peers. This could be the result of the type of practices developed but also a reflection of their self-confidence in peer-assessing an assignment. This supports Bader et al.’s (2019) research, which found that some students are distrustful of their peers’ knowledge. According to them:
“The most common objection related to the quality of peer feedback was the perceived lack of constructive criticism… (it) was often described as too positive, not outlining areas in need of improvement and thus not offering any direction for the process of revision” (pp. 1022–1023).
In this sense, the students’ tendency to address their peer feedback either to the lecturer or to use an impersonal tone, the mostly descriptive content and dominance of statement phrasing, offering little opportunity for further reflection, may indicate that they do not value peer feedback as a source for learning processes but rather as a demand by the lecturer. This may indicate a lack of feedback literacy, pointing to the need for further activities to increase, first of all, the recognition of peer feedback as a useful source.
A third conclusion is that feedback quality tends to improve in the second feedback loop. That is, feedback tends to be more focused on the process, and the comments come together with an explanation, as well as suggestions to improve future versions. There are two potential reasons for this finding: (a) sustaining a peer-feedback experience allows students to improve their feedback quality and develop their evaluative judgement (Tai et al., 2018); (b) the comments offered by the teacher about the feedback provided in the first loop helped students to give better feedback (Panadero et al., 2016). Therefore, it seems that it is not enough to provide some initial training on how to offer feedback; some scaffolding and guidance during the whole process is needed. Carless and Boud (2019) defined feedback literacy as “an understanding of what feedback is and how it can be managed effectively; capacities and dispositions to make productive use of feedback; and appreciation of the roles of teachers and themselves in these processes” (p. 1316). Consequently, if the aim is to foster assessment literacy, peer-feedback practices need to be carefully planned not only within a course but throughout the whole degree so that students become more autonomous.
Finally, the fourth main conclusion is that there seems to be a relationship between feedback quality and students’ learning, as previous studies have noted (Nicol et al., 2014). It could be that this relationship is the result of the weight feedback has on the assignment and subject marks. It can also be that having a greater representation of the task and its aims can enable students to provide better feedback and perform higher. Nevertheless, the data from this study do not allow for establishing causal relationships between feedback quality and students’ performance. Future studies should address this issue due to its relevance.
This study faces several limitations, mainly regarding aspects and contexts that were not considered here, which should be addressed in future research. Firstly, the impact of different assessment criteria engagement strategies on students’ feedback needs to be examined, as Panadero et al. (2016) suggest. Secondly, it would be interesting to know why students do not address feedback to their peers and what strategies can contribute to changing this situation. It should also be studied how argumentative feedback could be increased. Students tend to describe what their peers have done, but they hardly ever assess the quality of their work, taking into consideration the corpus of knowledge of the field of study. Therefore, it would be interesting to know what practices would increase students’ understanding of the content of the field of knowledge, as Dawson et al. (2019) and Henderson et al. (2021) suggest, and use it as the foundation for their feedback.
Overall, the findings from this study have some implications for future practices. Firstly, new evidence of the impact of students’ engagement with assessment criteria on the development of teacher students’ evaluative judgement has been presented. Therefore, the design of peer-feedback processes should include some work on assessment criteria. Additionally, the peer-feedback experience designed based on previous evidence is an example of how peer-feedback processes can be applied, and, therefore, it could be replicated and adjusted in other contexts depending on students’ experience and expertise in feedback processes. Moreover, students’ feedback quality is evidence of students’ evaluative judgement. Thus, it has to be analysed carefully in order to know what support and scaffolding students will need. Finally, the peer-feedback quality analysis designed in this research could be used to this end. All in all, this study can contribute to better understanding how evaluative judgement can be fostered through peer feedback as a way to develop pre-service teachers’ assessment literacy.

Author Contributions

Both authors have contributed equally to the development of the research and this manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

The APC charges of this publication have been covered by the Universitat Oberta de Catalunya through the research group GREDU. The data collection was performed in the framework of the “Formation of evaluative judgment as a basic element of the competence of learning to learn: the impact of evaluation criteria” project (REDICE18-1940, Universitat de Barcelona) and supported by the Departament de Recerca i Universitats de la Generalitat de Catalunya.

Institutional Review Board Statement

At the time when this research was conducted, formal ethical approval by an Institutional Review Board was not required. Nevertheless, the study complied fully with the institutional code of good research practices, and all procedures followed the principles of Responsible Research and Innovation (RRI).

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Data Availability Statement

Data are contained within the article.

Acknowledgments

We would like to thank all the participating students in the research and Laura Pons for her support in the data collection.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Andrade, H. L., & Heritage, M. (2017). Using formative assessment to enhance learning, achievement, and academic self-regulation. Routledge. [Google Scholar] [CrossRef]
  2. Arts, J. G., Jaspers, M., & Joosten-ten Brinke, D. (2016). A case study on written comments as a form of feedback in teacher education: So much to gain. European Journal of Teacher Education, 39(2), 159–173. [Google Scholar] [CrossRef]
  3. Ayalon, M., & Wilkie, K. (2020). Developing assessment literacy through approximations of practice: Exploring secondary mathematics pre-service teachers developing criteria for a rich quadratics task. Teaching and Teacher Education, 80, 103011. [Google Scholar] [CrossRef]
  4. Bader, M., Burner, T., Iversen, S. H., & Varga, Z. (2019). Student perspectives on formative feedback as part of writing portfolios. Assessment & Evaluation in Higher Education, 44(7), 1017–1028. [Google Scholar] [CrossRef]
  5. Baughan, P. (2020). On your marks: Learner-focused feedback practices and feedback literacy. Advance HE. [Google Scholar]
  6. Bearman, M. (2018). Prefigurement, identities and agency: The disciplinary nature of evaluative judgement. In D. Boud, R. Ajjawi, P. Dawson, & J. Tai (Eds.), Developing evaluative judgement in higher education: Assessment for knowing and producing quality work (pp. 147–155). Routledge. [Google Scholar]
  7. Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65, 245–281. [Google Scholar] [CrossRef]
  8. Cano, E., Jardí, A., Lluch, L., & Martins, L. (2024). Improvement in the quality of feedback as an indication of the development of evaluative judgement. Assessment and Evaluation in Higher Education, 49(6), 824–837. [Google Scholar] [CrossRef]
  9. Carless, D. (2019). Feedback loops and the longer-term: Towards feedback spirals. Assessment & Evaluation in Higher Education, 44(5), 705–714. [Google Scholar] [CrossRef]
  10. Carless, D., & Boud, D. (2019). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315–1325. [Google Scholar] [CrossRef]
  11. Chi, M. T. H. (1996). Constructing self-explanations and scaffolded explanations in tutoring. Applied Cognitive Psychology, 10(7), 33–49. [Google Scholar] [CrossRef]
  12. Dawson, P., Henderson, M., Mahoney, P., Phillips, M., Ryan, T., Boud, D., & Molloy, E. (2019). What makes for effective feedback: Staff and student perspective. Assessment & Evaluation in Higher Education, 44(1), 25–36. [Google Scholar] [CrossRef]
  13. Dirkx, K., Joosten-ten Brinke, D., Arts, J., & van Diggelen, M. (2021). In-text and rubric-referenced feedback: Differences in focus, level, and function. Active Learning in Higher Education, 22(3), 189–201. [Google Scholar] [CrossRef]
  14. Gielen, S., Dochy, F., Onghena, P., & Struyven, K. (2010). Improving the effectiveness of peer feedback for learning. Learning and Instruction, 20(4), 304–315. [Google Scholar] [CrossRef]
  15. Harris, L. R., Brown, G. T. L., & Harnett, J. A. (2015). Analysis of New Zealand primary and secondary student peer- and self-assessment comments: Applying Hattie and Timperley’s feedback model. Assessment in Education: Principles, Policy, and Practice, 22(2), 265–281. [Google Scholar] [CrossRef]
  16. Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. [Google Scholar] [CrossRef]
  17. Henderson, M., Ryan, T., Boud, D., Dawson, P., Phillips, M., Molloy, E., & Mahoney, P. (2021). The usefulness of feedback. Active Learning in Higher Education, 22(3), 229–243. [Google Scholar] [CrossRef]
  18. Huisman, B., Saab, N., Van den Broek, P., & Van Driel, J. (2019). The impact of formative peer-feedback on higher education students’ academic writing. A meta-analysis. Assessment & Evaluation in Higher Education, 44(6), 863–880. [Google Scholar] [CrossRef]
  19. Kruiper, S. M. A., Leenknecht, M. J. M., & Slof, B. (2022). Using scaffolding strategies to improve formative assessment practice in higher education. Assessment & Evaluation in Higher Education, 47(3), 458–476. [Google Scholar] [CrossRef]
  20. Looney, A., Cumming, J., van Der Kleij, F., & Harris, K. (2017). Reconceptualising the role of teachers as assessors: Teacher assessment identity. Assessment in Education: Principles, Policy & Practice, 25(5), 442–467. [Google Scholar] [CrossRef]
  21. Nicol, D. (2019). Reconceptualising feedback as an internal not an external process. Italian Journal of Educational Research, Special Issue (May), 71–83. [Google Scholar]
  22. Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. [Google Scholar] [CrossRef]
  23. Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher education: A peer review perspective. Assessment & Evaluation in Higher Education, 39(1), 102–122. [Google Scholar] [CrossRef]
  24. Nordrum, L., Evans, K., & Gustafsson, M. (2013). Comparing student learning experiences of in-text commentary and rubric-articulated feedback: Strategies for formative assessment. Assessment & Evaluation in Higher Education, 38(8), 919–940. [Google Scholar] [CrossRef]
  25. O’Donovan, B. M., den Outer, B., Price, M., & Lloyd, A. (2019). What makes good feedback good? Studies in Higher Education, 46, 318–329. [Google Scholar] [CrossRef]
  26. Orsmond, P., Maw, S. J., Park, J. R., Gomez, S., & Crook, A. C. (2013). Moving feedback forward: Theory to practice. Assessment & Evaluation in Higher Education, 38(2), 240–252. [Google Scholar] [CrossRef]
  27. Panadero, E., & Alqassab, M. (2019). An empirical review of anonymity effects in peer assessment, peer feedback, peer review, peer evaluation and peer grading. Assessment & Evaluation in Higher Education, 44(2), 1253–1278. [Google Scholar] [CrossRef]
  28. Panadero, E., Broadbent, J., Boud, D., & Lodge, J. M. (2019). Using formative assessment to influence self- and co-regulated learning: The role of evaluative judgement. European Journal of Psychology of Education, 34, 535–557. [Google Scholar] [CrossRef]
  29. Panadero, E., Jonsson, A., & Strijbos, J. W. (2016). Scaffolding self-regulated learning through self-assessment and peer assessment: Guidelines for classroom implementation. In D. Laveault, & L. Allal (Eds.), Assessment for learning: Meeting the challenge of implementation (pp. 311–326). Springer International Publishing. [Google Scholar]
  30. Panadero, E., & Lipnevich, A. A. (2022). A review of feedback models and typologies: Towards an integrative model of feedback elements. Educational Research Review, 35, 100416. [Google Scholar] [CrossRef]
  31. Pastore, S., & Andrade, H. L. (2019). Teacher assessment literacy: A three-dimensional model. Teaching and Teacher Education, 84, 128–138. [Google Scholar] [CrossRef]
  32. Poulos, A., & Mahony, M. J. (2008). Effectiveness of feedback: The students’ perspective. Assessment & Evaluation in Higher Education, 33(2), 143–154. [Google Scholar] [CrossRef]
  33. Sadler, D. R. (2009). Indeterminacy in the use of preset criteria for assessment and grading. Assessment & Evaluation in Higher Education, 34(2), 159–179. [Google Scholar] [CrossRef]
  34. Sluijsmans, D. M. A., Brand-Gruwel, S., & van Merriënboer, J. J. G. (2002). Peer Assessment Training in Teacher Education: Effects on performance and perceptions. Assessment & Evaluation in Higher Education, 27(5), 443–454. [Google Scholar] [CrossRef]
  35. Tai, J., Ajjawi, R., Boud, D., Dawson, P., & Panadero, E. (2018). Developing evaluative judgement: Enabling students to make decisions about the quality of work. Higher Education, 76, 467–481. [Google Scholar] [CrossRef]
  36. Valtonen, T., Hoang, N., Sointu, E., Näykki, P., Virtanen, A., Pöysä-Tarhonen, J., Häkkinen, P., Järvelä, S., Mäkitalo, K., & Kukkonen, J. (2021). How pre-service teachers perceive their 21st-century skills and dispositions: A longitudinal perspective. Computers in Human Behavior, 116, 106643. [Google Scholar] [CrossRef]
  37. Wiliam, D. (2011). Embedded formative assessment. Solution Tree Press. [Google Scholar]
  38. Winstone, N. E., Nash, R. A., Parker, M., & Rowntree, J. (2017). Supporting learners’ agentic engagement with feedback: A systematic review and a taxonomy of recipience processes. Educational Psychologist, 52(1), 17–37. [Google Scholar] [CrossRef]
  39. Yan, Z., & Pastore, S. (2022). Assessing teachers’ strategies in formative assessment: The teacher formative assessment practice scale. Journal of Psychoeducational Assessment, 40(5), 592–604. [Google Scholar] [CrossRef]
  40. Yang, M., & Carless, D. (2013). The feedback triangle and the enhancement of dialogic feedback processes. Teaching in Higher Education, 18(3), 285–297. [Google Scholar] [CrossRef]
  41. Zhang, L., & Zheng, Y. (2018). Feedback as an assessment for learning tool: How useful can it be? Assessment & Evaluation in Higher Education, 43(7), 1120–1132. [Google Scholar] [CrossRef]
Figure 1. Relationship between assessment literacy and evaluative judgement. Source: based on the Pastore and Andrade (2019) teacher assessment literacy model.
Table 1. Study design elements applied to Group 1 and Group 2.

Study Design Element | Group 1 | Group 2
Training on peer feedback and formative assessment | + | +
Assessment criteria shared | + | +
Additional task to engage with the assessment criteria |  | +
Peer-feedback process with 2 loops | + | +
Table 2. Number of participants in Group 1 and Group 2.

Academic Year | Students Enrolled | Students Participating
2017–2018 (Group 1) | 62 | 53 (85.5%)
2018–2019 (Group 2) | 60 | 58 (96.7%)
Table 3. Final template for the peer-feedback quality analysis.

Dimension | Characteristic | Meaning
Purpose (1) | Corrective (reinforcing) | Reports on aspects that are good or favourably evaluated, reinforcing the strengths of the task.
Purpose (1) | Corrective (mistakes) | Reports an error in the task requirement.
Purpose (1) | Corrective (incomplete) | Reports something lost or missing in the task requirement.
Purpose (1) | Didactic | Explains or justifies why something is right or needs improvement.
Purpose (1) | Suggestive | Gives some cues to move forward.
Focus (2) | Task | Suggests changes or additions to the content and/or form of the product.
Focus (2) | Process | Suggests changes or additions to the product development phases.
Focus (2) | Person | Refers the comments to the person and suggests changes in behaviour.
Phrasing (3) | Statement | Affirmative statements are formulated.
Phrasing (3) | Question | Some questions are formulated.
Phrasing (3) | Reformulation of the assessment criteria | The feedback is a copy and paste of the assessment criteria.
Tone (4) | Positive | Kind and constructive tone.
Tone (4) | Negative | Hard, rough text.
Tone (4) | Mixed | Combination of both.
Direction | Peer | The written text is addressed to the peer.
Direction | Teacher | The written text is addressed to the teacher.
Direction | Impersonal | Impersonal writing is used.
Content | Descriptive | Describes a process or task without judgements.
Content | Explanatory | Objectively reports on the qualities of the work.
Content | Argumentative | Tries to persuade the peer to do something to correct, revise, or improve the task.

(1) Adapted from Chi (1996). (2) Adapted from Hattie and Timperley (2007). (3) Adapted from Yang and Carless (2013). (4) Adapted from Gielen et al. (2010). Part of this classification has been used in Cano et al. (2024).
Table 4. Analysis of the type of feedback provided in Group 1 (2017–2018).

Dimension | Type of Feedback (1) | Loop 1: Rel. Freq. per Category (%) | Loop 1: Overall Rel. Freq. (%) | Loop 2: Rel. Freq. per Category (%) | Loop 2: Overall Rel. Freq. (%)
Purpose | Corrective (reinforcing) | 40.70 | 11.31 | 47.62 | 11.63
Purpose | Corrective (mistakes) | 11.50 | 3.19 | 6.63 | 1.62
Purpose | Corrective (incomplete) | 14.16 | 3.93 | 16.15 | 3.95
Purpose | Didactic | 13.42 | 3.73 | 13.95 | 3.41
Purpose | Suggestive | 20.21 | 5.61 | 15.65 | 3.82
Focus | Task | 80.85 | 12.46 | 73.57 | 12.84
Focus | Process | 15.95 | 2.46 | 20.00 | 3.49
Focus | Person | 3.19 | 0.49 | 6.43 | 1.12
Phrasing | Statement | 92.35 | 12.38 | 91.12 | 12.80
Phrasing | Question | 1.83 | 0.25 | 1.18 | 0.17
Phrasing | Rephrasing assessment criteria | 5.81 | 0.78 | 7.69 | 1.08
Tone | Positive | 45.25 | 5.66 | 55.27 | 7.19
Tone | Negative | 6.88 | 0.86 | 4.47 | 0.58
Tone | Mixed | 47.87 | 5.98 | 40.26 | 5.23
Direction | Peer | 31.82 | 4.02 | 24.84 | 3.24
Direction | Teacher | 32.79 | 4.14 | 35.35 | 4.61
Direction | Impersonal | 35.39 | 4.47 | 39.81 | 5.19
Content | Descriptive | 68.16 | 12.46 | 71.89 | 12.96
Content | Explanatory | 20.40 | 3.73 | 18.43 | 3.32
Content | Argumentative | 11.43 | 2.09 | 9.68 | 1.74

(1) See Table 3 on type of feedback.
Table 5. Marks for Group 1 (2017–2018).

Means | Feedback Marks | Assignment Marks | Final Mark of the Subject
2017–2018 | 5.97 | 7.72 | 7.29
Table 6. Type of feedback provided in Group 2 (2018–2019).

Dimension | Type of Feedback | Loop 1: Rel. Freq. per Category (%) | Loop 1: Overall Rel. Freq. (%) | Loop 2: Rel. Freq. per Category (%) | Loop 2: Overall Rel. Freq. (%)
Purpose | Corrective (reinforcing) | 46.49 | 12.34 | 41.70 | 11.03
Purpose | Corrective (mistakes) | 4.95 | 1.31 | 3.61 | 0.95
Purpose | Corrective (incomplete) | 6.07 | 1.61 | 5.48 | 1.45
Purpose | Didactic | 20.93 | 5.56 | 27.13 | 7.18
Purpose | Suggestive | 21.57 | 5.73 | 22.08 | 5.83
Focus | Task | 80.11 | 12.80 | 56.78 | 10.38
Focus | Process | 16.71 | 2.67 | 39.04 | 7.14
Focus | Person | 3.18 | 0.51 | 4.18 | 0.76
Phrasing | Statement | 92.21 | 12.04 | 96.19 | 11.56
Phrasing | Question | 1.30 | 0.17 | 1.27 | 0.15
Phrasing | Rephrasing the assessment criteria | 6.49 | 0.84 | 2.54 | 0.31
Tone | Positive | 87.17 | 11.23 | 93.27 | 11.11
Tone | Negative | 1.97 | 0.25 | 1.28 | 0.15
Tone | Mixed | 10.86 | 1.40 | 5.45 | 0.65
Direction | Peer | 19.67 | 2.54 | 28.25 | 3.40
Direction | Teacher | 39.67 | 5.13 | 42.54 | 5.11
Direction | Impersonal | 40.66 | 5.26 | 29.20 | 3.51
Content | Descriptive | 69.18 | 12.34 | 60.87 | 11.76
Content | Explanatory | 29.91 | 5.56 | 38.74 | 7.48
Content | Argumentative | 0.91 | 0.17 | 0.40 | 0.08
Table 7. Marks for Group 2 (2018–2019).

Means | Feedback Marks | Assignment Marks | Final Mark of the Subject
2018–2019 | 7.39 | 7.41 | 6.59
Table 8. Marks for Group 1 (2017–2018) and Group 2 (2018–2019).

Means | Feedback Marks | Assignment Marks | Final Mark of the Subject
2017–2018 | 5.97 | 7.72 | 7.29
2018–2019 | 7.39 | 7.41 | 6.59
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
