Article

University Students’ Perceptions of Peer Assessment in Oral Presentations

by Diego Gudiño, María-Jesús Fernández-Sánchez, María-Teresa Becerra-Traver and Susana Sánchez-Herrera *
Faculty of Education and Psychology, University of Extremadura, 06006 Badajoz, Spain
* Author to whom correspondence should be addressed.
Educ. Sci. 2024, 14(3), 221; https://doi.org/10.3390/educsci14030221
Submission received: 6 January 2024 / Revised: 14 February 2024 / Accepted: 20 February 2024 / Published: 22 February 2024

Abstract

Peer assessment has been shown to be useful in a variety of educational contexts, but there is a scarcity of research on how prior experience affects university students’ perceptions of this form of assessment. This study evaluates whether experience with peer assessment of oral presentations influences the perceptions and self-efficacy of university students as assessors. In the study, 58 university students completed a comprehensive questionnaire before and after assessing the oral presentations of their peers. The results indicate that, prior to the assessment, the students reported having limited experience but considered this practice beneficial to their learning. Afterwards, they showed a higher degree of agreement regarding their confidence in the ability of their peers to assess both superficial aspects and the content of the presentations. In addition, the experience helped them to feel that their ability to assess their peers was not inferior to that of their classmates. It may therefore be concluded that practice and training in peer assessment improve students’ perception of this form of assessment, although a single session is insufficient; consistent and extended training is crucial to achieve a substantial impact.

1. Introduction

Peer assessment is a strategy that requires students to assess the work of their peers by providing feedback based on pre-established criteria [1]. This is a relatively widespread practice in higher education classrooms, especially those where the teaching staff face a heavy assessment workload [2]. The fact that there are more students than teachers in the classroom means that peer assessment contributes towards an increase in the frequency of feedback [3]. This reduces teacher workload [4] while optimizing resources, allowing teachers to focus on the students most in need of their assistance [5].
One of the aspects that makes peer assessment useful is the resulting process of peer-to-peer communication regarding performance and assessment criteria [6]. Other authors have pointed out that elaborated feedback involving discussion and negotiation leads to a higher level of learning (e.g., [7,8]).

1.1. Peer Assessment: Experiences and Support Measures to Improve Its Effectiveness

Many studies have described the benefits of peer assessment. Firstly, it improves academic performance at different educational levels and in different subject areas [5]. Secondly, it provides students with skills that allow them to apply standards, criteria and accountability in their own assessment process [9].
Peer assessment has proven to be effective in the specific case of oral presentations. Murillo-Zamorano and Montanero [10] conducted a study involving a total of 32 university students who were divided into two experimental conditions: peer assessment of oral presentations with rubric and traditional teacher assessment. The results showed that peer assessment with rubric improved oral presentation scores by 10%, while teacher assessment only improved them by 5%. However, the improvements were not sustained in the long term, evidencing the need for more prolonged training. According to Dunbar, Brooks and Kubicka-Miller [11], the inadequacy of a brief instructional process is due to the difficulty students have transferring improvements in presentation skills to new content. To facilitate this transition of skills, it is necessary to incorporate support into the instructional intervention. These supports should include elements such as self-assessment and the opportunity for dialogue with peers and teachers about the assessment criteria and the presentation process itself. This approach encourages further reflection, allowing students to make connections with previous experiences. It also allows identification of possible difficulties that might arise in their future practice. Active engagement in dialogue with peers and teachers provides valuable strategies to address these difficulties [12].
This study did not analyze the students’ perception of peer assessment, an aspect that was considered by Dickson et al. [13]. These latter authors also identified numerous benefits of peer assessment for the oral presentation skills of university students, including improved grades, greater understanding of the assessment criteria, greater confidence and reduced anxiety compared to those who did not apply this mode of assessment.
Exposure to different ideas and styles of solving the same task, together with interaction with the assessment criteria, promotes critical thinking in both the assessor and the assessed, as shown in online education contexts with secondary school students [14]. Peer assessment also enriches the assessing student’s metacognitive skills and ability to identify problems [1]. Although the role of the assessor is the more demanding one, effectively interpreting and implementing suggestions for improvement has also been shown to cause difficulties for the assessed [15]. More specifically, a study exploring students’ responses to assessment via an online platform revealed that those who perceived the feedback as beneficial were more willing to acknowledge their mistakes and make changes, whereas those who did not find the feedback useful adopted a more defensive attitude. Certain limitations should be taken into account when interpreting these results, as the records analyzed did not provide detailed information about the students, such as their previous experience with peer assessment, or about the original task being assessed. In any case, the correct application of suggestions activates advanced cognitive processes [16], even for university students, necessitating specific training.
However, students have difficulty providing high-quality feedback and tend to remain at a superficial level [17,18]. One difficulty concerns the uptake of suggestions for improvement, which depends both on the quality of a comment and on the frequency with which it is repeated; low-quality comments often overlap with high-quality ones, which explains gaps in the interaction [18]. Another complication relates to the skills of the student being assessed [17]: students with lower writing skills improved their texts after a co-assessment with a rubric, while those with more advanced skills showed no improvement. This is consistent with the constructivist view that students who interact with more skilled peers have the opportunity to improve their own knowledge and skills.
To enhance feedback quality, various measures can be implemented, including establishing clear assessment criteria [19], promoting peer assessment among peers with similar skill levels [20] and anonymizing the peer assessment process [21]. This latter measure allows for more critical comments to be made [22], improves engagement with the process and avoids the influence of social relationships [23,24].
Another useful measure is the use of peer assessment support tools such as online systems, assessment rubrics, checklists, scripts with open-ended questions and rating scales [10,25,26]. These types of instruments facilitate objective and consistent assessment, helping to mitigate the variability that arises when different assessors carry out assessments of the same aspect [27]. In addition, looking over these tools prior to the execution of an activity is beneficial in order to better understand the expectations and requirements of the task [4,28].
However, for peer assessment to be effective, it is necessary to provide adequate training in the use of these tools and to ensure due understanding of the assessment criteria [29]. Lack of experience also influences the quality of assessments [30], and students need help in understanding what counts as good performance in order to provide effective feedback [31]. Furthermore, exploring students’ attitudes toward peer assessment is a relevant next step, as it enriches our understanding of this practice; this is addressed in the following section.

1.2. Student Perceptions of Peer Assessment

Students’ attitudes towards peer assessment tend to be positive both before and after their participation in this type of experience, which stimulates its performance and learning [32]. However, students’ perceptions appear to be linked to their level of commitment to the assessment process; the more accountable they are, the better the assessment and the more favorable their perception of the task [33].
Djelil et al. [34] performed a study involving 48 university students. Although the sample was relatively small, they found that after an adaptation period, during which students initially preferred assessment by teachers or by known peers, participants gradually developed more positive attitudes towards peer assessment.
This suggests that prior experience with this assessment mode contributes towards more favorable student attitudes [35] and positive evolutions of their perceptions as they are exposed to more peer assessment opportunities [36]. In addition, frequent participation in peer assessment activities improves confidence in feedback-giving skills and self-efficacy [37], based on Zimmerman’s [38] classic definition. This author defines self-efficacy as people’s judgements of their own capabilities in the performance of a task.
However, some studies have reported contradictory results. For example, Gaspar et al. [39] conducted a study in which 186 university students completed a comprehensive questionnaire; the results showed that prior experience did not have a significant impact on students’ perception of their self-efficacy as assessors.
On the other hand, some researchers highlight that self-efficacy in peer assessment can be improved through the use of assessment rubrics [40]. Despite this, it has been observed that not all students feel fully competent and confident in their ability to assess their peers. Vanderhoven et al. [41] found that a considerable proportion of secondary school students disagreed or were hesitant about determining their level of competence or self-efficacy in peer assessment. This discrepancy is possibly due to the fact that students’ individual assessment skills influence their perception.
Despite its benefits, peer assessment can raise concerns among students. Spiller [9] and Kollar and Fischer [42] found that students tend to believe that only teachers are qualified to assess. The study by Koh et al. [43] noted that some students and teachers disagreed with the peer assessment ratings, perceiving them as dishonest. These types of feelings can generate discomfort when being critical of peers [44], despite the fact that assessing one’s peers helps students reflect on improving their own work.
Despite these concerns, there is a strong correlation between student and teacher assessments, even when the perception of a group of students is negative [45]. To address this problem, Reily et al. [46] suggested adding multiple rounds of assessment. Their study highlighted the important role of both the quantity and quality of written feedback in the effectiveness of peer reviews. Students who receive feedback from multiple peers tend to be more satisfied [47]. Pearce et al. [48] proposed having at least two or three reviewers per task. Song et al. [49] found that with five reviews, accuracy is increased and reviews similar to those of teachers are obtained [50,51].
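The effect described above can be illustrated with a toy simulation rather than with the cited studies’ data: under a hypothetical unbiased-noise model (all magnitudes here are assumptions for the demo), averaging several independent peer ratings pulls the aggregate closer to a reference score as the number of reviewers grows.

```python
# Toy simulation (assumed noise model, not empirical data from [46-51]):
# each peer rates a presentation as the "true" teacher score plus noise.
import numpy as np

rng = np.random.default_rng(0)
teacher_score = 7.5                     # hypothetical reference score out of 10
for n_reviewers in (1, 2, 3, 5):
    # 10,000 simulated assessments, each averaging n_reviewers noisy ratings
    trials = teacher_score + rng.normal(0, 1.5, size=(10_000, n_reviewers))
    error = np.abs(trials.mean(axis=1) - teacher_score).mean()
    print(f"{n_reviewers} reviewer(s): mean absolute error = {error:.2f}")
```

Running this shows the mean absolute error shrinking roughly with the square root of the number of reviewers, which is one plausible reading of why two or three reviewers [48] and, even more so, five reviews [49] yield assessments closer to those of teachers.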
In the context described above, studies have been conducted on the effectiveness of peer assessment in oral presentations by university students, together with research exploring students’ perceptions of this assessment mode. However, none of them have addressed how students’ perceptions of their self-efficacy as assessors and that of their peers vary after participating in a peer assessment of an oral presentation. Therefore, this study aims to analyze whether prior experience of peer assessment of oral presentations has an effect on university students’ perception of this assessment method. More specifically, we aim to answer the following questions:
  • What are undergraduate students’ perceptions of the effectiveness of peer assessment in oral presentation tasks?
  • To what extent does participation in peer assessment activities influence students’ perceived self-efficacy as assessors of their peers’ oral presentations?

2. Materials and Methods

2.1. Participants

A total of 58 students (70.7% female and 29.3% male) were selected by way of convenience sampling to participate in the study, which consisted of completion of a specially designed questionnaire at two different times. The participants were enrolled in the bachelor’s degree in primary education at the Faculty of Education and Psychology of the University of Extremadura.
It is noteworthy that 84.8% of the participants mentioned that they had not recently engaged in peer assessment activities at university level. Furthermore, a significant percentage (43.5%) of those who reported participation were unable to recall whether the professor considered the peer’s grade when making the final assessment. Table 1 illustrates the distribution of the total sample.

2.2. Data Collection Procedure

The students completed the questionnaire at two different times, with a three-week gap, before and after participating in the peer assessment of their classmates’ oral presentations on the topic of tutorial action in primary education. A simple checklist, which had previously been modeled, was used. The modeling of the tool involved a short training session of approximately 30 min provided by one of the researchers. During this interactive session, the criteria of the tool were explained in detail and its practical application was illustrated through concrete examples. Active participation was encouraged by providing opportunities for questions and clarifications.
The checklist was based on a similar previous study [10]. The criteria included the adequacy of the visual support, whether it was used frequently during the presentation and whether it aided better understanding of the ideas. The clarity of the development of the ideas was also assessed, including whether the speaker explained all the ideas clearly, following a common thread and highlighting the most relevant ones. The assessment was anonymous and reciprocal.
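The paper does not reproduce the checklist itself, so the following encoding is purely illustrative: the criterion wordings are paraphrased from the description above, and all identifiers and fields are assumptions, not the authors’ instrument.

```python
# Illustrative sketch of the checklist described above (criteria paraphrased;
# structure and field names are assumptions, not the published instrument).
from dataclasses import dataclass, field

CRITERIA = [
    "Visual support is adequate for the topic",
    "Visual support is used frequently during the presentation",
    "Visual support aids understanding of the ideas",
    "All ideas are explained clearly",
    "The presentation follows a common thread",
    "The most relevant ideas are highlighted",
]

@dataclass
class ChecklistAssessment:
    assessor_id: str                     # anonymized, supporting reciprocal anonymous marking
    presenter_id: str
    marks: dict[str, bool] = field(default_factory=dict)  # criterion -> met?

    def score(self) -> float:
        """Fraction of criteria met, as one simple summary of the checklist."""
        return sum(self.marks.values()) / len(CRITERIA)
```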
The questionnaire was shared through Google Forms in the virtual learning environment of the course in which the experience was carried out. Participation was voluntary, and the students were informed beforehand of the purpose for which the data would be processed and of the objective of the study.
Subsequently, their informed consent to participate in the study was expressly requested, duly guaranteeing confidentiality and anonymity of the participants.

2.3. Instruments

The questionnaire designed by Gaspar et al. [39] entitled “Questionnaire to evaluate the perception of university students regarding peer evaluation in writing tasks” was adapted for this study. The original version of the questionnaire had a validity of 0.83 and a reliability of 0.85. The adaptation involved replacing writing, the skill for which the original questionnaire was designed, with oral presentations, the skill under study in this article. In addition, dimension 5, relating to specific writing feedback issues, was removed. The final questionnaire applied consisted of a total of 19 questions intended to collect information relating to four dimensions (see Appendix A). Most of the questions were answered using a five-point Likert scale, where the response options ranged from “strongly disagree” to “strongly agree” and from “never” to “always”. Table 2 shows the dimensions of the final version of the questionnaire, the number of questions contained in each dimension and the type of questions asked.
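For readers adapting such an instrument, the following is a minimal sketch, not the authors’ procedure (they report the original coefficients), of how a reliability coefficient such as Cronbach’s alpha is typically computed from a respondents-by-items matrix of Likert scores; the sample data are invented.

```python
# Minimal sketch of Cronbach's alpha for a Likert questionnaire
# (illustrative only; the example matrix is invented).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(respondent totals))."""
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of per-respondent totals
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented example: 5 respondents x 4 items scored 1-5.
likert = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
])
print(f"alpha = {cronbach_alpha(likert):.2f}")
```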

2.4. Data Analysis

The data were analyzed using the SPSS statistical program (v.21). A descriptive analysis with percentages was performed to examine the perception of university students regarding the questionnaire items before and after the peer assessment experience. Subsequently, to analyze whether participation in peer assessment improves the perception of university students as assessors, statistical tests were performed to determine whether or not the distribution of the data allowed the application of parametric tests. Finally, the non-parametric Mann–Whitney U test was applied to identify possible differences before and after peer assessment of the oral presentations. The improvement of perceptions was assessed by comparing the means for each question, taking into account the respective standard deviations, in relation to the timing of the experience.
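As a transparency aid, here is a minimal open-source sketch of the analysis just described; the authors used SPSS (v.21), so this is an assumed equivalent, not their script, and the long-format columns `item`, `phase` and `score` are hypothetical names.

```python
# Sketch of the described pipeline: normality check, then a non-parametric
# pre/post comparison. Mirrors the paper's choice of Mann-Whitney U, which
# treats the pre and post responses as two independent groups.
import pandas as pd
from scipy import stats

def compare_pre_post(df: pd.DataFrame, item: str, alpha: float = 0.05) -> dict:
    """Compare Likert responses to one item before and after the experience."""
    pre = df.loc[(df["item"] == item) & (df["phase"] == "pre"), "score"]
    post = df.loc[(df["item"] == item) & (df["phase"] == "post"), "score"]

    # Shapiro-Wilk checks whether a parametric test would be justified;
    # ordinal Likert data typically fail it, motivating the non-parametric test.
    normal = all(stats.shapiro(s).pvalue > alpha for s in (pre, post))

    result = stats.mannwhitneyu(pre, post, alternative="two-sided")
    return {
        "pre_mean": pre.mean(), "post_mean": post.mean(),
        "normality_ok": normal,
        "U": result.statistic, "p": result.pvalue,
        "significant": result.pvalue < alpha,
    }
```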

3. Results

Figure 1 presents the results relating to the influence of knowledge and training on university students’ peer assessment before the peer assessment experience, while Figure 2 presents the corresponding data after the experience. Figure 3 and Figure 4 show the self-efficacy of university students as assessors of oral presentations before and after the experience. Finally, Table 3 compares the responses obtained before and after participating in the peer assessment. It is important to note that the wording in the figures and tables does not correspond exactly to that of the questionnaire, as it is abbreviated; Appendix A contains the complete version.

3.1. Perception of University Students Regarding Peer Assessment of Oral Presentations as Assessed Students, before and after the Experience

In relation to the influence of knowledge and training on peer assessment of university students before assessing oral presentations, it can be seen in Figure 1 that most of the students surveyed are very willing to listen to and take into account suggestions for improvement (Q8: 91.30%) and to ask questions to clarify doubts if they do not understand something (Q9: 84.78%).
Regarding students’ perception of their peers’ ability to assess design aspects in a presentation (Q7), 52.17% of the respondents expressed strong agreement. Likewise, 58.70% believed that peer assessment positively influences their learning and helps them to make improvements (Q5).
However, regarding the ability of peers to assess oral presentations (Q6), only 43.48% of students strongly agreed. In addition, 39.13% said they preferred to be assessed by a teacher rather than a peer (Q10).
Finally, a large proportion of the students surveyed preferred the assessments by their peers to be previously reviewed by the teacher before receiving them (Q11: 65.22%). See Figure 1.
Figure 2 shows that, after carrying out the assessment, a large majority of the respondents listen carefully to and take into account suggestions for improvement of the task (Q8), although the percentage of agreement with this question decreased from 91.30% to 81.03% after the intervention. In addition, most respondents agreed that assessment influences their learning and helps them make improvements (Q5: 75.86%), with the degree of agreement for this question increasing from 58.70% to 75.86% after participation in the peer assessment experience.
It can also be seen that 72.41% of the respondents ask questions if they do not understand any issue or disagree when suggestions for improvement are made (Q9), as well as preferring that the assessments made by their peers are previously reviewed by the teacher before receiving them (Q11: 63.79%). In addition, 51.72% of the respondents find it easier to understand the language in which their peers express themselves when assessing them compared to that of the teacher (Q12).
On the other hand, the results show that there is a division between those who prefer to be assessed by a teacher and those who prefer to be assessed by a peer; more specifically, 32.76% do not like to be assessed by a peer and prefer to be assessed by a teacher (Q10).
Only 1.72% of respondents strongly disagreed that they listen carefully and take into account suggestions for improvement of a task (Q8) and 6.90% of respondents strongly disagreed that they prefer the assessments by their peers to be previously reviewed by the teacher before receiving them (Q11).

3.2. Self-Efficacy of University Students as Assessors of Oral Presentations before and after a Peer Assessment Experience

Regarding the self-efficacy of university students as assessors before proceeding with the peer assessment, it can be seen in Figure 3 that 78.26% of the respondents strongly agree that if they do not understand any issue in the assessment of their peers’ tasks, they ask to clarify doubts before correcting (Q15). Furthermore, 78.26% of the respondents strongly agree that if they feel unable to assess and make useful comments on a peer’s task, they ask for help from other peers or the teacher (Q18).
On the other hand, 71.74% of the respondents strongly agree that they believe their suggestions are valid for a peer to implement improvements (Q16), while 65.22% strongly disagree that they feel their ability to assess their peers is lower than what their peers demonstrate when they assess them (Q19).
Finally, 30.43% of the respondents consider their current training insufficient to effectively and meaningfully assess the work of their colleagues (Q17) and 45.65% strongly agree that if the task to be assessed corresponds to a very close colleague, they are likely to give him/her a higher grade (Q14).
After performing the peer assessment, it can be seen in Figure 4 that the vast majority of respondents strongly agree that if they do not understand any issue in the assessment of other peers’ tasks, they ask to clarify doubts before correcting them (Q15: 81.03%). This value only increased by close to 3% after participating in the peer assessment experience.
Also worth mentioning are two questions whose percentages of agreement decreased after participating in the experience. Firstly, if the students feel unable to assess and make useful comments on a peer’s task, they are slightly less inclined to ask for help from peers or the teacher than before the experience (Q18: before = 78.26%; after = 77.59%). Secondly, agreement regarding the likelihood of assigning a higher grade when evaluating the task of a close peer also decreased (Q14), shifting from 45.65% before the peer assessment experience to 37.93% after its completion.
On the other hand, respondents broadly agree with the belief that their suggestions are relevant for a colleague to make improvements (Q16), experiencing an increase from 71.74% initially to 72.41% after their participation in the experience.

3.3. Differences in the Responses Obtained before and after the Peer Assessment

Next, the responses obtained before and after the peer assessment were compared. The non-parametric Mann–Whitney U test was applied with a 95% confidence level to identify possible differences between the two measurement times. After participating in the experience, significant differences were identified both in the consideration of the peer’s grade by the teacher during the final assessment (U: −2.26; p < 0.05) and in the influence of the relationship between peers on the assessment process (U: −2.2; p < 0.05), indicating a decrease in this tendency.
Other relatively large differences were observed in the means of some items, although they were not statistically significant. For example, in item 7, which measured the degree of agreement with the claim that peers are more capable of assessing aspects of design than of content, the mean decreased after participation in the peer assessment experience. There was also an increase after the peer assessment in the mean of item 6, referring to the adequacy of the participants’ knowledge to assess the performance of their peers. Similarly, there was a notable increase in the mean for the item relating to better understanding of peer language compared to that of the teacher when assessing (Q12). See Table 3.
Table 3. Differences in the responses obtained before and after performing the peer assessment.

| Item | Pre-test x̄ | Pre-test Dt | Post-test x̄ | Post-test Dt | Dif. (x̄) |
|---|---|---|---|---|---|
| Q5: Peer assessment has an impact on my learning and assists me in making improvements to the task. | 3.83 | 0.85 | 4.03 | 0.86 | 0.21 |
| Q6: I believe my peers have sufficient knowledge to evaluate my performance on a task. | 3.28 | 1.05 | 3.53 | 0.73 | 0.25 |
| Q7: I think my peers are more qualified to evaluate aspects of presentation design than its content. | 3.52 | 1.01 | 3.34 | 1.07 | −0.18 |
| Q8: When a peer provides suggestions for improvement on a task, I listen carefully and take them into consideration. | 4.33 | 0.63 | 4.19 | 0.78 | −0.14 |
| Q9: When a peer gives me suggestions for improvement, I ask questions if I don’t understand something or disagree. | 4.41 | 0.75 | 4.03 | 0.99 | −0.38 |
| Q10: I don’t like being assessed by a peer; I prefer to be assessed by a teacher. | 3.24 | 1.25 | 3.17 | 1.16 | −0.07 |
| Q11: I prefer that assessments made by my peers are reviewed by the instructor before receiving them. | 3.96 | 1.01 | 3.86 | 0.93 | −0.09 |
| Q12: I understand my peers’ language when they evaluate me better than the way the instructor expresses it. | 3.17 | 1.12 | 3.53 | 0.88 | 0.36 |
| Q13: My relationship with my peer influences how they assess me. | 3.54 | 1.22 | 3.00 | 1.26 | −0.54 * |
| Q14: If the task to evaluate is produced by a very close peer (friend), I am likely to give them a higher score. | 3.24 | 1.23 | 2.81 | 1.37 | −0.43 |
| Q15: When it comes to assessing the tasks of other peers, if I don’t understand something, I try to ask them to clarify any doubts before grading. | 4.26 | 0.80 | 4.19 | 0.78 | −0.07 |
| Q16: I believe my suggestions are genuinely valuable for a peer to make improvements. | 3.96 | 0.92 | 3.93 | 0.83 | −0.03 |
| Q17: My current training is insufficient to effectively and meaningfully assess my peers’ work. | 3.11 | 1.08 | 3.22 | 0.92 | 0.12 |
| Q18: If I feel incapable of reviewing and providing helpful feedback on a peer’s task, I seek assistance from peers or the instructor. | 4.22 | 0.84 | 4.19 | 0.91 | −0.03 |
| Q19: I feel that my ability to assess my peers is lower than what they demonstrate when assessing me. | 2.07 | 1.24 | 1.86 | 0.91 | −0.20 |

Note: x̄ = mean; Dt = standard deviation. Statistically significant differences: (*) p < 0.05.

4. Discussion and Conclusions

This research aims to analyze whether prior experience with peer assessment of oral presentations affects university students’ perception of this assessment mode and their self-efficacy. The study included a total of 58 students enrolled in the bachelor’s degree in primary education at the Faculty of Education and Psychology of the University of Extremadura. These students completed a specially designed questionnaire at two different times. Each research question is discussed below.
In relation to students’ perception of the effectiveness of evaluative feedback from their peers in oral presentation assessment tasks, the results suggest that peer assessment may positively influence students’ willingness to consider their peers’ suggestions for improvement and may also be related to increased confidence in their own assessment skills. This implies that students must apply critical thinking skills to identify strengths and weaknesses in the work of their peers [52,53], and that peer assessment provides skills in the application of standards and criteria [9].
In both cases, we can affirm that the respondents value it when a peer makes suggestions for improving a task and that they ask questions if they do not understand an issue or disagree, which promotes student independence in the assessment process [9].
Following the experience, the participants indicated that assessment is an important factor in their learning and that they value suggestions for improvement and the prior review of assessments by the teacher, but they also expressed some uncertainty and varied preferences regarding the ability and efficacy of peers as assessors. It should be noted that the quality of the feedback a student receives in peer assessment can affect his or her learning experience, an aspect discussed by Ada and Majid [21] in their study, which seeks to identify strategies to improve student engagement and motivation in the review process.
In relation to the second objective, concerning students’ perceptions of their self-efficacy in the peer assessment of an oral presentation task, several changes were observed after the experience. On the one hand, the students’ knowledge of and training in peer assessment shaped their degree of confidence in their ability to understand their peers’ tasks and, in some cases, their willingness to seek clarification. On the other hand, in terms of student self-efficacy as an assessor, experience in peer assessment was found to foster a gradual improvement in students’ confidence in their assessment skills. In line with the results obtained by Loureiro and Gomes [30], this study also identified that lack of experience negatively affected the quality of the assessments, suggesting a possible relationship with lack of confidence.
In addition, the respondents took into account clarity and accuracy when assessing other peers’ tasks and sought help when they felt unable to perform an effective assessment.
There is a division between those who believe that their ability to assess their peers is lower and those who do not. In this sense, it would be interesting to carry out symmetrical feedback between students of approximately the same abilities, as proposed in the study by Rød and Nubdal [20]. This study suggests that further guidance and training is required to enable students to critically engage with academic content and provide constructive feedback.
On the other hand, this type of assessment activity could also be complemented with some kind of reward for the work done to help maintain the commitment and motivation of the students. Ada and Majid [21] and Gunnarsson and Alterman [54] suggested using a badge assignment system as a strategy to encourage students’ active participation. In their study, they analyzed the feedback students gave each other by “liking” their posts or awarding them badges.
In general, the students indicated that they did not have much experience with peer assessment. However, those who had participated in such activities found them to be a satisfying experience even before conducting the peer assessment. This is in line with the study by Lladó et al. [32], which found that peer assessment promotes cooperation and collaboration among students, giving them the opportunity to work together to assess and improve their work.
Indeed, before proceeding with the peer assessment the respondents indicated that it could have a positive impact on their learning and academic performance. However, when working with students with no prior experience of this type of assessment, it is particularly important to explain the nature of the whole process and the implications of their decisions so that they better understand the nature of assessment rubrics and do not underestimate the final value of their assessments. Therefore, it is necessary to learn how to put this assessment methodology into practice, as indicated by Liu and Carless [6], who highlight the role of student feedback in peer assessment and its importance in student learning.
Finally, the results show that the relationship with a peer is not a determining factor in the manner of their assessment. This could suggest that after the assessment experience the students acquired greater objectivity and separation between the personal relationship and the academic assessment. Still, it would be interesting to use double-blind peer assessment models, with studies by Rød and Nubdal [20], Mulder et al. [55] and Papinczak et al. [24] confirming their effectiveness. In their research, it was found that the majority of students had a positive perception of double-blind peer reviews and that the quality of students’ final papers improved significantly after the peer review process.
In conclusion, this study has examined the impact of prior experience with peer assessment on undergraduate students’ perceptions of oral presentation assessment and their self-efficacy. The results suggest that peer assessment positively influences students’ openness to feedback and confidence in their assessment skills. The study underscores the importance of cultivating critical thinking and clarifying the assessment process, especially for students with limited experience. Changes in student self-efficacy were observed, influenced by knowledge and training in peer assessment. The research suggests potential benefits of implementing symmetrical feedback and explores strategies such as a badge assignment system to motivate student participation. Overall, the findings provide valuable insights to improve our understanding of the dynamics of peer assessment and its positive effects on students’ learning experiences.

4.1. Practical Contributions

The current study extends the understanding of the relationship between peer assessment and students’ affective domain. Although the results are not entirely conclusive, they suggest that the affectivity of both the assessed and assessors may have an impact on their ability to offer and assimilate suggestions for improvement, respectively. Although not considered as the determining factor for improved academic performance in a peer assessment context, it is plausible that behavior and self-efficacy have a subtle effect on the performance of students who participate in peer assessment practices.
Our findings have several practical implications for teachers and educational researchers interested in implementing peer assessment. First, it is crucial for educators to recognize the differences in learning processes between the assessors and the assessed in the context of peer assessment. Modeling and training must be provided to foster the assessors’ sense of self-efficacy and the confidence of the assessed in their peers’ abilities. This training should focus on providing specific and constructive feedback. Secondly, the findings underline the importance of developing support tools to guide assessors regarding the type of feedback to provide and also to help the assessed to reflect on the feedback received. Assessors should also be encouraged to promote affective comments that offer socio-emotional support to their peers by acknowledging their achievements. The promotion of ‘mindful reception’ of feedback is considered essential [56], creating the need for mixed approaches in future research to further understand how peer assessment affects diverse participants under the conditions not only of the intervention, but also cultural and subjective conditions inherent in the process.

4.2. Limitations

It is important to mention certain limitations affecting the development of this study. It should be pointed out that the sample is small and has been selected using convenience sampling, which means that the results obtained cannot be generalized. We therefore recommend that the study be replicated using sample selection methods and research conditions that allow the results to be generalized. A further limitation is the unequal gender distribution in our sample resulting from the particular characteristics of our study population. We plan to address this limitation in future research by seeking a more balanced sample to further explore the possible influence of gender on perceptions during peer assessment. It would also be interesting to explore further the use of multiple rounds of assessment, the practical implementation and effectiveness of these measures in a variety of contexts, such as TEFL settings, variations in educational culture and scenarios where summative assessment is required. All of these areas are yet to be thoroughly investigated.

Author Contributions

Conceptualization, all authors; Methodology, D.G., M.-T.B.-T. and M.-J.F.-S.; Investigation, D.G.; Analyses and Data Curation, D.G. and M.-J.F.-S.; Validation, M.-T.B.-T. and S.S.-H.; Writing—Original Draft Preparation, D.G.; Writing—Review and Editing, M.-T.B.-T. and M.-J.F.-S.; Supervision and Funding Acquisition, S.S.-H. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the European Regional Development Fund (A Way to Make Europe) and the Government of Extremadura (Junta de Extremadura) grant number GR21157, from the SEJ20 group.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the University of Extremadura (approval code 142_2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

  • Dimension 1. Sociodemographic Identification Data.
  • 1. Gender: Female, Male.
  • Dimension 2. Peer Assessment Experiences.
  • 2. How many times have you been assessed by a peer? None, Few, Several, Many.
  • 3. Have you recently participated in peer assessment activities in the university context? Yes, No, I don’t remember.
  • 4. If the answer to the previous question is affirmative, did the instructor take into account the peer’s grade when making the final assessment? Yes, No, I don’t remember.
  • Dimension 3. Influential factors in peer assessment.
  • Rate from 1 to 5 in each case according to the scale proposed in each question:
  • Scale for questions 5, 6, 7, 10, 11, 12, 13: 1 (strongly disagree), 2 (disagree), 3 (neutral), 4 (agree) and 5 (strongly agree).
  • Scale for questions 8, 9: 1 (never), 2 (almost never), 3 (sometimes), 4 (almost always) and 5 (always).
  • 5. Peer assessment has an impact on my learning and assists me in making improvements to the task (in the case of resubmission).
  • 6. I believe my peers have sufficient knowledge to evaluate my performance on a task.
  • 7. I think my peers are more qualified to evaluate aspects of presentation design than its content.
  • 8. When a peer provides suggestions for improvement on a task, I listen carefully and take them into consideration.
  • 9. When a peer gives me suggestions for improvement, I ask questions if I don’t understand something or disagree.
  • 10. I don’t like being assessed by a peer; I prefer to be assessed by a teacher.
  • 11. I prefer that assessments made by my peers are reviewed by the instructor before receiving them.
  • 12. I understand my peers’ language when they evaluate me better than the way the instructor expresses it.
  • 13. My relationship with my peer influences how they assess me.
  • Dimension 4. Self-efficacy as an evaluator in peer assessment.
  • Rate from 1 to 5 in each case according to the scale proposed in each question:
  • Scale for questions 14, 16, 17, 19: 1 (strongly disagree), 2 (disagree), 3 (neutral), 4 (agree) and 5 (strongly agree).
  • Scale for questions 15, 18: 1 (never), 2 (almost never), 3 (sometimes), 4 (almost always) and 5 (always).
  • 14. If the task to evaluate is produced by a very close peer (friend), I am likely to give them a higher score.
  • 15. When it comes to assessing the tasks of other peers, if I don’t understand something, I try to ask them to clarify any doubts before grading.
  • 16. I believe my suggestions are genuinely valuable for a peer to make improvements.
  • 17. My current training is insufficient to effectively and meaningfully assess my peers’ work.
  • 18. If I feel incapable of reviewing and providing helpful feedback on a peer’s task, I seek assistance from peers or the instructor.
  • 19. I feel that my ability to assess my peers is lower than what they demonstrate when assessing me.

References

  1. Topping, K. Peer assessment between students in colleges and universities. Rev. Educ. Res. 1998, 68, 249–276.
  2. Latifi, S.; Noroozi, O. Supporting argumentative essay writing through an online supported peer-review script. Innov. Educ. Teach. Int. 2021, 58, 501–511.
  3. Gielen, S.; Tops, L.; Dochy, F.; Onghena, P.; Smeets, S. A comparative study of peer and teacher feedback and of various peer feedback forms in a secondary school writing curriculum. Br. Educ. Res. J. 2010, 36, 143–162.
  4. Guelfi, M.R.; Formiconi, A.R.; Vannucci, M.; Tofani, L.; Shtylla, J.; Masoni, M. Application of peer review in a university course: Are students good reviewers? J. E-Learn. Knowl. Soc. 2021, 17, 1–8.
  5. Double, K.S.; McGrane, J.A.; Hopfenbeck, T.N. The impact of peer assessment on academic performance: A meta-analysis of control group studies. Educ. Psychol. Rev. 2020, 32, 481–509.
  6. Liu, N.F.; Carless, D. Peer feedback: The learning element of peer assessment. Teach. High. Educ. 2006, 11, 279–290.
  7. Schillings, M.; Roebertsen, H.; Savelberg, H.; Whittingham, J.; Dolmans, D. Peer-to-peer dialogue about teachers’ written feedback enhances students’ understanding on how to improve writing skills. Educ. Stud. 2020, 46, 693–707.
  8. Topping, K. Peer assessment: Learning by judging and discussing the work of other learners. Interdiscip. Educ. Psychol. 2017, 1, 1–17.
  9. Spiller, D. Assessment Matters: Self-Assessment and Peer Assessment; Teaching Development, The University of Waikato: Hamilton, New Zealand, 2009; Available online: http://www.waikato.ac.nz/tdu/pdf/booklets/8_SelfPeerAssessment.pdf (accessed on 14 January 2023).
  10. Murillo-Zamorano, L.R.; Montanero, M. Oral presentations in higher education: A comparison of the impact of peer and teacher feedback. Assess. Eval. High. Educ. 2018, 43, 138–150.
  11. Dunbar, N.E.; Brooks, C.F.; Kubicka-Miller, T. Oral communication skills in higher education: Using a performance-based evaluation rubric to assess communication skills. Innov. High. Educ. 2006, 31, 115–128.
  12. Ryan, M. The pedagogical balancing act: Teaching reflection in higher education. Teach. High. Educ. 2013, 18, 144–155.
  13. Dickson, H.; Harvey, J.; Blackwood, N. Feedback, feedforward: Evaluating the effectiveness of an oral peer review exercise amongst postgraduate students. Assess. Eval. High. Educ. 2019, 44, 692–704.
  14. Lu, J.; Law, N. Online Peer Assessment: Effects of Cognitive and Affective Feedback. Instr. Sci. 2012, 40, 257–275.
  15. Misiejuk, K.; Wasson, B.; Egelandsdal, K. Using learning analytics to understand student perceptions of peer feedback. Comput. Hum. Behav. 2021, 117, 106658.
  16. Molloy, E.; Boud, D.; Henderson, M. Developing a Learning-Centred Framework for Feedback Literacy. Assess. Eval. High. Educ. 2020, 45, 527–540.
  17. Ramon-Casas, M.; Nuño, N.; Pons, F.; Cunillera, T. The different impact of a structured peer-assessment task in relation to university undergraduates’ initial writing skills. Assess. Eval. High. Educ. 2019, 44, 653–663.
  18. Wu, Y.; Schunn, C.D. When peers agree, do students listen? The central role of feedback quality and feedback frequency in determining uptake of feedback. Contemp. Educ. Psychol. 2020, 62, 101897.
  19. Elander, J. Student assessment from a psychological perspective. Psychol. Learn. Teach. 2004, 3, 114–121.
  20. Rød, J.K.; Nubdal, M. Double-blind multiple peer reviews to change students’ reading behaviour and help them develop their writing skills. J. Geogr. High. Educ. 2022, 46, 284–303.
  21. Ada, M.B.; Majid, M.U. Developing a system to increase motivation and engagement in student code peer review. In Proceedings of the 2022 IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE), Hong Kong, 4–7 December 2022; pp. 93–98.
  22. Lu, R.; Bol, L. A comparison of anonymous versus identifiable e-peer review on college student writing performance and the extent of critical feedback. J. Interact. Online Learn. 2007, 6, 100–115. Available online: https://digitalcommons.odu.edu/efl_fac_pubs/5/?utm_sourc (accessed on 15 January 2023).
  23. Morales-Martinez, G.; Latreille, P.; Denny, P. Nationality and gender biases in multicultural online learning environments: The effects of anonymity. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–14.
  24. Papinczak, T.; Young, L.; Groves, M. Peer assessment in problem-based learning: A qualitative study. Adv. Health Sci. Educ. 2007, 12, 169–186.
  25. Montanero, M.; Madeira, M.L. Collaborative chain writing: Effects on the narrative competence on primary school students. Infanc. Y Aprendiz. 2019, 42, 915–951.
  26. Rahmanian, M.; Shafieian, M.; Samie, M.E. Computing with words for student peer assessment in oral presentation. Nexo Rev. Científica 2021, 34, 229–241.
  27. Wang, W. Students’ perceptions of rubric-referenced peer feedback on EFL writing: A longitudinal inquiry. Assess. Writ. 2014, 19, 80–96.
  28. O’Donovan, B.; Price, M.; Rust, C. The student experience of criterion-referenced assessment (through the introduction of a common criteria assessment grid). Innov. Educ. Teach. Int. 2001, 38, 74–85.
  29. Mangelsdorf, K. Peer Reviews in the ESL Composition Classroom: What Do the Students Think? ELT J. 1992, 46, 274–284.
  30. Loureiro, P.; Gomes, M.J. Online peer assessment for learning: Findings from higher education students. Educ. Sci. 2023, 13, 253.
  31. Nicol, D.J.; Macfarlane-Dick, D. Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Stud. High. Educ. 2006, 31, 199–218.
  32. Lladó, A.; Soley, L.; Sansbelló, R.; Pujolras, G.; Planella, J.; Roura-Pascual, N.; Moreno, L. Student perceptions of peer assessment: An interdisciplinary study. Assess. Eval. High. Educ. 2014, 39, 592–610.
  33. Patchan, M.M.; Schunn, C.D.; Correnti, R.J. The Nature of Feedback: How Peer Feedback Features Affect Students’ Implementation Rate and Quality of Revisions. J. Educ. Psychol. 2016, 108, 1098–1120.
  34. Djelil, F.; Brisson, L.; Charbey, R.; Bothorel, C.; Gilliot, J.M.; Ruffieux, P. Analysing peer assessment interactions and their temporal dynamics using a graphlet-based method. In Proceedings of the EC-TEL’21, Bolzano, Italy, 20–24 September 2021; pp. 82–95.
  35. Misiejuk, K.; Wasson, B. Backward evaluation in peer assessment: A scoping review. Comput. Educ. 2021, 175, 104319.
  36. Harland, T.; Wald, N.; Randhawa, H. Student Peer Review: Enhancing Formative Feedback with a Rebuttal. Assess. Eval. High. Educ. 2017, 42, 801–811.
  37. Double, K.S.; Birney, D.P. Reactivity to confidence ratings in older individuals performing the latin square task. Metacognition Learn. 2018, 13, 309–326.
  38. Zimmerman, B.J. Attaining reciprocality between learning and development through self-regulation. Hum. Dev. 1995, 38, 367–372.
  39. Gaspar, A.; Fernández, M.J.; Sánchez-Herrera, S. Percepción del alumnado universitario sobre la evaluación por pares en tareas de escritura. Rev. Complut. Educ. 2023, 34, 541–554.
  40. Andrade, H.L.; Wang, X.; Du, Y.; Akawi, R.L. Rubric-referenced self-assessment and self-efficacy for writing. J. Educ. Res. 2009, 102, 287–302.
  41. Vanderhoven, E.; Raes, A.; Montrieux, H.; Rotsaert, T.; Schellens, T. What if pupils can assess their peers anonymously? A quasi-experimental study. Comput. Educ. 2015, 81, 123–132.
  42. Kollar, I.; Fischer, F. Peer assessment as collaborative learning: A cognitive perspective. Learn. Instr. 2010, 20, 344–348.
  43. Koh, E.; Shibani, A.; Tan, J.P.L.; Hong, H. A pedagogical framework for learning analytics in collaborative inquiry tasks: An example from a teamwork competency awareness program. In Proceedings of the LAK’16, Edinburgh, Scotland, 25–29 April 2016; pp. 74–83.
  44. Hunt, P.; Leijen, Ä.; van der Schaaf, M. Automated feedback is nice and human presence makes it better: Teachers’ perceptions of feedback by means of an e-portfolio enhanced with learning analytics. Educ. Sci. 2021, 11, 278.
  45. Zevenbergen, R. Peer assessment of student constructed posters: Assessment alternatives in preservice mathematics education. J. Math. Teach. Educ. 2001, 4, 95–113.
  46. Reily, K.; Finnerty, P.L.; Terveen, L. Two peers are better than one: Aggregating peer reviews for computing assignments is surprisingly accurate. In Proceedings of the 2009 ACM International Conference on Supporting Group Work, Sanibel Island, FL, USA, 10–13 May 2009; pp. 115–124.
  47. Mulder, R.A.; Pearce, J.M. PRAZE: Innovating Teaching through Online Peer Review. In ICT: Providing Choices for Learners and Learning, Proceedings of Ascilite Singapore 2007, Singapore, 2–5 December 2007. Available online: https://people.eng.unimelb.edu.au/jonmp/pubs/ascilite2007/Mulder%20&%20Pearce%20ASCILITE%202007.pdf (accessed on 18 January 2023).
  48. Pearce, J.; Mulder, R.; Baik, C. Involving Students in Peer Review: Case Studies and Practical Strategies for University Teaching; Centre for the Study of Higher Education, University of Melbourne: Melbourne, Australia, 2010; Available online: https://apo.org.au/node/20259 (accessed on 24 January 2023).
  49. Song, X.; Goldstein, S.C.; Sakr, M. Using peer code review as an educational tool. In Proceedings of the 2020 ACM Conference on Innovation and Technology in Computer Science Education, Trondheim, Norway, 15–19 June 2020; pp. 173–179.
  50. Cho, K.; Schunn, C.D.; Wilson, R.W. Validity and reliability of scaffolded peer assessment of writing from instructor and student perspectives. J. Educ. Psychol. 2006, 98, 891.
  51. Gielen, M.; De Wever, B. Structuring peer assessment: Comparing the impact of the degree of structure on peer feedback content. Comput. Hum. Behav. 2015, 52, 315–325.
  52. Lutze-Mann, L. Peer Assessment of Assignment Drafts: About Peer Assessment. Assessment Toolkit, Student Peer Assessment. 2015. Available online: https://teaching.unsw.edu.au/peer-assessment (accessed on 21 January 2023).
  53. Ross, J. The Reliability, Validity, and Utility of Self-Assessment. Pract. Assess. Res. Eval. 2006, 11, 10.
  54. Gunnarsson, B.L.; Alterman, R. Peer promotions as a method to identify quality content. J. Learn. Anal. 2014, 1, 126–150.
  55. Mulder, R.; Baik, C.; Naylor, R.; Pearce, J. How does student peer review influence perceptions, engagement and academic outcomes? A case study. Assess. Eval. High. Educ. 2014, 39, 657–677.
  56. Bangert-Drowns, R.L.; Kulik, C.-C.; Kulik, J.A.; Morgan, M. The instructional effect of feedback in test-like events. Rev. Educ. Res. 1991, 61, 213–238.
Figure 1. Influence of the knowledge and training of university students prior to performing peer assessment. Note: The scale for questions 8 and 9 consists of five values ranging from Never to Always. The scale for the remaining items consists of five values ranging from Strongly disagree to Strongly agree.
Figure 2. Influence of knowledge and training of university students after performing peer assessment. Note: The scale for questions 8 and 9 consists of five values ranging from Never to Always. The scale for the remaining items consists of five values ranging from Strongly disagree to Strongly agree.
Figure 3. Self-efficacy of university students as assessors prior to peer assessment. Note: The scale for questions 15 and 18 consists of five values ranging from Never to Always. The scale for the remaining items consists of five values ranging from Strongly disagree to Strongly agree.
Figure 4. Self-efficacy of university students as assessors after performing the peer assessment. Note: The scale for questions 15 and 18 consists of five values ranging from Never to Always. The scale for the remaining items consists of five values ranging from Strongly disagree to Strongly agree.
Table 1. Sample distribution.

| Number of Peer Assessment Participations by the Student | Gender | Count (%) | Total (%) |
|---|---|---|---|
| None/Few | Male | 18 (88.24%) | 53 (91.38%) |
| None/Few | Female | 38 (92.68%) | |
| Quite a few | Male | 2 (11.76%) | 5 (8.62%) |
| Quite a few | Female | 3 (7.32%) | |
| Many | Male | 0 (0%) | 0 (0%) |
| Many | Female | 0 (0%) | |
Table 2. Distribution of items according to the dimensions of the questionnaire.

| Dimension | Items | Total | Description | Typology |
|---|---|---|---|---|
| Dimension 1. Sociodemographic Identification Data | 1 | 1 | Aims to identify the gender of the participants. | Closed nominal dichotomous question. |
| Dimension 2. Peer Assessment Experiences | 2, 3, 4 | 3 | Aims to determine the nature and amount of prior peer assessment experience of the participants. | Four-point Likert scale question (from none to many) (question 2) and closed nominal polytomous questions (questions 3 and 4). |
| Dimension 3. Influential Factors in Peer Assessment | 5–13 | 9 | Aims to analyze whether knowledge and training influence peer assessment. | Closed nominal polytomous questions with five-point Likert scale response options (from strongly disagree to strongly agree and from never to always). |
| Dimension 4. Self-efficacy as an evaluator in peer assessment | 14–19 | 6 | Aims to understand how students perceive and use the suggestions for improvement made by their peers during peer assessment. | Closed nominal polytomous questions with five-point Likert scale response options (from strongly disagree to strongly agree and from never to always). |
| Total | | 19 | | |

Source: Adapted from Gaspar et al. [39].
